Commission on Preservation and Access

The Evolving National Information Network

Technology Primer


Digitization of information is the single fundamental theme underlying modern data communications networks and their usefulness. Images, speech, music, diagrams, and the written word can all be translated into a sequence of numbers. That sequence can be stored, processed, and/or transmitted. At the other end of the process, the numbers can be interpreted to recreate information in the form in which it was originally expressed. Since computers are able to manipulate digitized information, advances in computer technology are now applicable to information of all types and all types of information can be transported on data communication networks.
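That round trip, from an original expression to a sequence of numbers and back again, can be sketched in a few lines of Python. The example is illustrative only: character code numbers stand in for any digitized medium, whether image, speech, music, or text.

```python
# Sketch: an expression of information -- here, written text -- is
# translated into a sequence of numbers, carried as numbers, and then
# interpreted at the other end to recreate the original form.

def digitize(message):
    """Translate written text into a sequence of numbers."""
    return [ord(ch) for ch in message]

def reconstruct(numbers):
    """Interpret the numbers to recreate the original text."""
    return "".join(chr(n) for n in numbers)

original = "network"
as_numbers = digitize(original)      # a sequence of numbers
recovered = reconstruct(as_numbers)  # identical to the original
assert recovered == original
```

Because the intermediate form is purely numeric, the same storage, processing, and transmission machinery serves every type of information.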

Analog circuits and modems vs. digital circuits

Most of the communications infrastructure in the United States and around the world was designed and built primarily for the transmission of sound. Today, the networks are expected to transmit a great deal more than sound. When there was only sound to transmit it was easy to use an analog transmission circuit. This type of transmission sends information as a continuous signal, with the frequency of the signal changing as the tone of the communication changes. Such a signal closely models speech and is well suited for voice transmission. Analog circuits can be used to transmit digital information but require that the digital information be changed to an analog signal, communicated through the voice telecommunications network and then converted back to digital information. The device that changes the signals from digital to analog and then back to digital is called a MODulator/DEModulator or MODEM.
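The MODulate/DEModulate round trip can be sketched in a few lines. The two tone frequencies follow the Bell 202 convention, an assumption made for illustration; a real modem generates and detects continuous waveforms rather than the symbolic tone labels used here.

```python
# Sketch of the modem idea: digital bits are MODulated into audible
# tones suitable for the analog voice network, then DEModulated back
# into bits at the far end. Frequencies follow the Bell 202 convention
# (an illustrative assumption; real modems vary).

MARK, SPACE = 1200, 2200  # tone frequency in Hz for a 1 bit and a 0 bit

def modulate(bits):
    """Digital -> analog: emit one tone per bit."""
    return [MARK if b else SPACE for b in bits]

def demodulate(tones):
    """Analog -> digital: recover the bit carried by each tone."""
    return [1 if t == MARK else 0 for t in tones]

bits = [1, 0, 1, 1, 0]
assert demodulate(modulate(bits)) == bits
```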

The steadily improving capabilities of digital technology led to increasing use of digital circuits in computer communications and subsequently in communication networks. At first, digital circuits were most cost effective for connecting the major hubs of communication networks. Those hubs needed to send large amounts of information at very high speed across significant distances. Digital technologies made more efficient use of the expensive long distance facilities.

As advances in microelectronics continued, digital technologies became increasingly cost effective, and a larger and larger proportion of the communications network has been digitized. Today all of the long distance communications carriers are moving toward completely digital long distance networks. Microelectronics are now being used to dramatically reduce the cost of circuits, making digital circuits cost effective for increasingly short distances.

One force leading to the increased digitization of the network is an increasing demand to directly connect computers at different locations. While digital communication is now routine in the long distance environment and easily obtained by large organizations, it is not yet provided routinely in the small business and residential environment. Most users of personal computers in homes and small businesses still use modems to communicate across analog circuits with other computers. Even so, the computers with which they communicate are usually connected to other computers via a digital network. A new data communication offering known as Integrated Services Digital Network (ISDN), is just now being made available in some parts of the country. The lowest speed implementation of this service, Basic Rate ISDN, has the capability of providing direct digital connections at affordable prices for home and small business use. Since the service has not been broadly and inexpensively tariffed for individual lines, it has not yet gained sufficient market volume to be inexpensively integrated into the hardware and software of personal computers. As a result, modems continue to be the dominant means of connecting homes and small businesses to the network.

Line speeds

At the same time that the analog transmission environment is being transformed to a digital transmission environment, the available transmission speeds have increased dramatically. In the 1970’s, the speed of digital transmission over the analog circuits then routinely available was fifteen characters per second. Toward the end of the 1970’s, it became straightforward to acquire modems that had a capability of transmitting up to 120 characters per second on a local telephone line.

Within the last few years, it has become routine for those transmissions to take place at rates of 240, 480, 960, and 1,440 characters per second. Modem prices are declining every month so that it is now possible to use a local analog telephone line to transmit information at multiple thousands of characters per second. Even so, these speeds, which can be obtained through the use of modems on analog transmission facilities, are far short of what can be obtained through digital facilities. It has long been possible to obtain, at a reasonable price, digital facilities which transmit information at 6,000 characters per second and only slightly more expensive facilities which are capable of transmitting 200,000 and 1 million characters per second. In the last year, systems capable of transmitting 5.6 million characters per second through digital facilities were beginning to appear. In other words, the analog networks and modems of the early 70’s would have required more than an hour to transmit the report you’re reading, while today’s digital network can do it in a fraction of a second!
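The closing comparison is easy to check with a little arithmetic. The report length of 60,000 characters used here is an assumed figure for illustration.

```python
# Checking the hour-versus-fraction-of-a-second comparison, assuming a
# report of roughly 60,000 characters (an illustrative figure).

REPORT_CHARS = 60_000  # assumed document size

def transfer_seconds(chars, chars_per_second):
    """Time required to transmit a document at a given line speed."""
    return chars / chars_per_second

analog_1970s = transfer_seconds(REPORT_CHARS, 15)          # 4,000 seconds
modern_digital = transfer_seconds(REPORT_CHARS, 5_600_000)

assert analog_1970s > 3600   # more than an hour
assert modern_digital < 1    # a fraction of a second
```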

Circuit switching vs. packet switching

As these new digital facilities have developed, there has been a parallel set of developments in the switching required to route information from source to destination. Prior to the availability of low cost computer technology, connection between an analog telephone at one location and another at a distance was accomplished through a complicated switching system which had the capability of connecting a local pair of wires to a remote pair of wires. To be specific, twenty years ago when you dialed your telephone the dial directly actuated a set of switches which made physical connections between the wires attached to your telephone and the wires attached to the telephone you were dialing. Since this created an electrical circuit between one location and another, this switching technology became known as circuit switching.

Today, when you enter code numbers on a touch tone pad on your telephone, you are signaling a computer-based switch. This switch transmits electrical signals from your telephone by digital technology through additional switches that recreate those electrical signals at a remote telephone. In most cases, there is no direct circuit connection between your telephone and the remote telephone. In fact, once the sound coming from your telephone is transformed into digits, those digits are interspersed with other streams of digits coming from other telephones according to a carefully worked out multiplexing scheme that enables a single circuit using digital technology to carry a number of conversations simultaneously.
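The interleaving idea can be sketched as simple time-division multiplexing, assuming equal-length digit streams; real multiplexing schemes add framing and timing information.

```python
# A minimal sketch of multiplexing: digit streams from several
# telephone conversations are interleaved onto one digital circuit and
# separated again at the far end (time-division multiplexing).

def multiplex(streams):
    """Interleave equal-length digit streams onto one shared circuit."""
    return [digit for frame in zip(*streams) for digit in frame]

def demultiplex(circuit, n_streams):
    """Recover the original streams from the shared circuit."""
    return [circuit[i::n_streams] for i in range(n_streams)]

call_a = [1, 2, 3]
call_b = [7, 8, 9]
shared = multiplex([call_a, call_b])  # [1, 7, 2, 8, 3, 9]
assert demultiplex(shared, 2) == [call_a, call_b]
```

One circuit thus carries both conversations, and each receiver hears only its own digits.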

In contrast to voice communications, which require a relatively constant rate of information flow from one location to another, computer communications tend to require more episodic or “peaky” information flow. That characteristic, combined with the increasing availability of higher speed data communication circuits, led in the 1960’s to the development of a new switching technology called packet switching. Unlike circuit switching where a dedicated connection is established between the source and the destination of information, packet switching relies on capturing blocks (packets) of information from the source, and attaching a destination address to each packet of information. The information is then forwarded to a computer which reads the destination address and determines the appropriate route for that information to follow toward its destination. The computer then sends the packet on to the next computer along the route, which repeats the process. This forwarding continues until the information is delivered to its destination. In most cases, a packet of information will traverse several such computers, called packet switches, before arriving at its destination. Depending on the amount of traffic that needs to be transmitted between any two locations, successive packets may be sent through the interconnected packet switches using different routes in response to circuit congestion or availability. Packet switching technology was initially designed to provide a robust communication structure that could survive a situation in which a large number of circuits became unavailable due to some natural or man made disaster. For the transmission of data, packet switching now dominates circuit switching and is the foundation for most of today’s data communication infrastructure.
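A minimal sketch of that store-and-forward process follows, with an invented three-switch topology and routing tables; real packet switches also handle congestion, alternate routes, and errors.

```python
# Sketch of packet switching: each switch reads the destination
# address attached to a packet and consults its own routing table to
# choose the next hop. Switch names and routes are invented.

ROUTING_TABLES = {
    "A": {"D": "B"},  # switch A forwards packets addressed to D via B
    "B": {"D": "C"},
    "C": {"D": "D"},  # switch C delivers directly to destination D
}

def forward(packet, start):
    """Carry a packet switch-to-switch until it reaches its address."""
    here, path = start, [start]
    while here != packet["destination"]:
        here = ROUTING_TABLES[here][packet["destination"]]
        path.append(here)
    return path

packet = {"destination": "D", "data": "a block of information"}
assert forward(packet, "A") == ["A", "B", "C", "D"]
```

Because each switch makes its own forwarding decision, successive packets between the same two points may take different routes when circuits are congested or unavailable.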

Types of packet networks

Local Area Networks and Departmental Networks

There are today two different types of data networks for local use called Local Area Networks (LANs). The first type uses a distributed architecture with a packet switch in each connected device. The second type concentrates the traffic in a number of switches considerably smaller than the number of attached computers.

Local area networks such as Ethernet and IBM’s token ring use distributed packet routing technologies through a common communications infrastructure. In these networks, all traffic from all connected machines flows over a shared wiring structure. The shared structure must have adequate capacity to pass the total traffic in the network and, in principle, allows any machine to receive any traffic, whether or not the traffic was intended for that machine.

Local area networks such as Starlan have central packet switching points that operate in cooperation with special devices in the computers attached to these networks. Only the traffic to or from a single computer flows over the wiring for any particular machine. As a result, the speed of the network only needs to be adequate to serve that machine, and traffic addressed to that machine is not accessible by other attached machines.

Campus or Corporate Networks

Networks that cover larger geographic areas such as a large campus or corporation often include some hierarchy in which local area networks are interconnected to a higher capacity network that provides connectivity to the overall campus or enterprise. In this situation, the campus network really becomes a network of networks, or an ‘internet’. The large capacity network that interconnects the local area networks is often referred to as the backbone. The backbone is made up of high capacity cabling, often fiber, and packet switches called routers that connect the local area networks to the backbone.

Metropolitan area networks (MANs)

The campus network approach can be expanded to include whole metropolitan areas. These networks are becoming major users of digital packet technology serving the needs of large numbers of customers.

Wide area networks (WANs)

The most geographically extensive networks are called wide area networks (WANs), and typically use a number of packet switches to route information across the world. There are a number of commercial value added common carriers that provide these services including Sprintnet, Tymnet and Autonet. In addition, there are special purpose packet networks that have been developed to support particular applications such as the networks that support CompuServe, MCIMail and Prodigy.

Also in the category of wide area networks are the intercampus and interestablishment portions of networks such as NSFNET, BITNET, and CSNET. These latter networks have substantial impacts on the higher education and research community and are the subject of much of the discussion later in this paper. These networks now have several levels of hierarchy and interconnect many different networks. Together, they have become known as the Internet.

Protocol standards

In much the same way that early telephone systems evolved from a hodgepodge of independent companies, each interconnecting only telephones owned by them, to a group of cooperating interconnected companies, the data networking environment has evolved. In the early days of telephony, telephone users were often forced to have four or five different telephones on their desk so they would be able to communicate with individuals on each of the telephone systems that served other individuals with whom they wished to communicate. As the environment evolved, they were able to acquire service from one telephone company which, in turn, connected with other telephone companies, providing the appearance for the user of a single telephone company and network which provided universal connectivity. Underlying that development were a set of agreements among the various telephone companies regarding standards for interconnection of their systems.

Similarly, standards have been developed to allow interconnection of networks.[3] A number of organizations have been instrumental in the work required to develop and codify the standards. In particular, the Institute of Electrical and Electronics Engineers, the American National Standards Institute and the National Information Standards Organization play the dominant standards roles in the United States. The International Organization for Standardization (ISO) and the International Consultative Committee for Telephony and Telegraphy (CCITT) play the dominant roles on the international scene. The standards promulgated by these organizations are usually assigned codes, which will be cited along with the names of the standards in the following discussion. Protocols are defined in detail by complementary standards, usually called protocol suites. There are two dominant protocol suites.

Transmission Control Protocol/Internet Protocol (TCP/IP)

The TCP/IP protocol suite is also known as the Department of Defense (DOD), Defense Advanced Research Projects Agency (DARPA), or Internet protocol suite. It was proposed and developed under the leadership of DARPA and has subsequently become a Department of Defense standard. TCP/IP protocols are the foundation of several thousand connected networks that together form the TCP/IP Internet. These networks form the backbone for data communications in higher education and research throughout the United States today. The TCP/IP protocol suite is very widely adopted in the U. S. and has recently become dominant in the rest of the world.

International Organization for Standardization Open Systems Interconnection (ISO/OSI)

The ISO/OSI protocol suite is overseen by the International Organization for Standardization (ISO) and is known as the reference model for Open Systems Interconnection (OSI). It has been developed by international standards-making bodies since 1980. A component standard of particular importance to information access is the National Information Standards Organization’s Z39.50, widely referred to as the Information Retrieval Protocol. The United States government has also officially adopted the OSI protocols as its future standard through the Government Open Systems Interconnection Profile (GOSIP). However, since the full OSI protocol suite is still being implemented, it has not been as widely adopted as TCP/IP. It has the strong advantage of being built on a global standards effort and is widely accepted as a future protocol suite. As a practical matter, work is now going on to merge the two protocol suites in an approach that will allow transparent interaction among networks based on the two different suites.


Both the TCP/IP and OSI protocol suites have been constructed as layered protocols. Briefly, a layered protocol is made up of standards at each of several levels required for full functionality of the network. For instance, the OSI model divides communications into seven different layers. At the lowest level there is agreement about the physical means of conveying electromagnetic signals from one location to another. This function is provided by the physical layer. At the next level up, the data link and/or network layers implement agreement on the packaging of those signals so the physical connection can be fairly allocated among multiple users, packets can be routed from the source to destination, and at least some degree of error detection provided. Then, at higher levels in the protocol[4], provisions are made to support applications that run on the network. In different protocol suites, these layers may be subdivided differently, but all protocol suites now in use for data networking contain this fundamental concept of layering.
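The layering idea can be shown in miniature: each layer wraps the data handed down from the layer above with its own header, and the receiving side unwraps in reverse order. The layer names follow the OSI discussion in the text; the header contents are invented for illustration.

```python
# Sketch of protocol layering: descending through the layers, each one
# adds its own header around the payload from above; the receiver
# strips the headers in reverse order to recover the original data.

def wrap(payload, header):
    """One layer's contribution: a header around the payload above it."""
    return {"header": header, "payload": payload}

# Sending: application data descends through the layers.
app_data = "GET catalog record"
transport = wrap(app_data, {"layer": "transport", "port": 1})
network = wrap(transport, {"layer": "network", "address": "host-b"})
link = wrap(network, {"layer": "data link", "frame": 42})

# Receiving: each layer strips its own header and passes the rest up.
received = link
while isinstance(received, dict):
    received = received["payload"]

assert received == app_data
```

Because each layer only examines its own header, any layer's implementation can be replaced without disturbing the layers above or below it.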

Physical layer

At the physical layer, there are four primary media over which data is transmitted: copper wire, coaxial cable, fiber optics, and radio.

Copper wire. The first, and most heavily used, is plain copper wire. Copper wire forms the foundation for most of our data communications and telephony circuits, especially in carrying information to and from the home and the office to the nearest concentration point or switching site. In the early days of data communications, copper wire was used as an analog medium and was able to transmit only about ten characters per second. The use of microelectronics has enabled copper wire to be used for direct digital transmission and suitably installed copper wire is now capable of millions of characters per second over short distances. Over longer distances, such as those found between a telephone company switching center and a residence or small business, the ISDN Basic Rate service provides speeds in the neighborhood of 18,000 characters per second. Basic Rate ISDN is designed to provide a complete set of communications services including both voice and data. Considerable debate is underway regarding the wisdom of broad deployment of this relatively slow digital transmission technology. As these capabilities demonstrate, copper wire is now capable of providing digital data communication service and has the advantage of already being in place ubiquitously in our institutions, businesses and communities.

Coaxial cable. When higher speed transmission or longer distances must be accommodated, networks can use coaxial cable, a special type of copper wire in which, among other things, the conductors are arranged to resist interference from external electrical signals. Coaxial cable can be used in a ‘baseband’ network to transmit a single stream of digits or it can be used in a ‘broadband’ network to convey multiple streams of digits along with analog signals such as television and audio. Baseband cable is less expensive to install and maintain but is usually used for only one service at a time. On the other hand, broadband cable is more expensive to install and maintain, but can provide the basis for a large number of services. Coaxial cable, because it is physically larger and requires more complicated connectors, is more expensive to install than regular copper wire and is typically used mainly in cases where its special attributes are required. Advances in optoelectronics have led to a steady decline in coaxial cable installations, with a corresponding increase in the use of fiber optics.

Fiber Optics. The newest means of transmitting information is via pulses of light conducted through glass fibers. This technique, known as fiber optics, has the highest potential for data transmission rates and the lowest susceptibility to error in transmission. Consequently, as the technology for using fiber optics to transmit digital information improves, it is being used for an increasingly broad range of applications in data networking. Fiber optics has been used for some time now to interconnect large telephone switches. More recently, it has been installed in undersea cables for intercontinental transmission of digitized information and is starting to be widely used within institutions and businesses to provide high speed internal data communications. It is now possible to transmit more than 200 million characters per second with production fiber optic technology over substantial distances. Standards have now evolved that allow 10 million characters per second to be routinely transmitted within an institution.

Radio. The final medium for communication is radio. Prior to the rapid increase in cost effectiveness of fiber optics, radio communication was very attractive for long distance service. The two dominant uses today are satellite communications for broad area and international coverage and microwave radio relay towers for connectivity to locations that cannot be economically reached by fiber. As fiber technology has improved, satellite radio is increasingly relegated to broadcasting information and to connecting remote locations where the installation of optical fiber is difficult to justify, rather than to general point to point communication. For instance, microwave relay is still the dominant means of moving information through the western mountainous segments of the United States, and many transoceanic circuits are still implemented via satellite. There are remote libraries and schools in the north central U. S. and Alaska that use satellite technology because the service density is so low that even microwave relay is prohibitive in cost.

There are now some new initiatives in the use of radio for local area and metropolitan networking that capitalize on radio’s wireless nature.[5] Several companies have now released radio-based packet switched local area networks designed for offices where wired networks are either difficult to install (old buildings) or where there is so much movement of the computers that the constant relocation of wiring is a major barrier to network implementation and operation. In addition, there are an increasing number of products that provide for connection of personal computers through the switched circuit cellular telephone network, and IBM and Motorola have recently begun providing a nationwide packet radio networking service in support of mobile field employees.[6]

Data Link and Network Layers

A number of different standards are available for the data link and network layers, most of which are specialized to provide support for particular environments. While there are many such protocols, the most notable are:

Ethernet (usually IEEE 802.3) Ethernet is a LAN standard that operates over baseband coaxial cable, broadband coaxial cable, and plain copper wire. Ethernet can support communication at about one million characters per second among several hundred cooperating computers within a relatively small (less than one kilometer end to end) geographic region. Ethernet has for many institutions become a standard for reasonably high speed data networks.

Token Ring (IEEE 802.5) An alternative standard, known as the IBM token ring, was designed primarily to use twisted pair wire and fiber at data rates comparable to Ethernet over similar distances.

Fiber Distributed Data Interface (FDDI) FDDI is a new standard designed for use with optical fiber and can interconnect up to five hundred computers over distances of approximately one hundred miles at speeds of 10 million characters per second. These standards are finalized and equipment able to interface computers to an FDDI network is available. FDDI is being used mainly to interconnect other lower speed networks such as Ethernets and IBM token rings. These connections are made through routers or bridges, small computers that switch packets between LANs, MANs, or WANs.

Integrated Services Digital Network (ISDN) ISDN, mentioned above, is another standard, conceived as a significant advance over the existing voice network, that is just coming into use. ISDN is designed to operate at a number of speeds. Different protocols and media are used at each of those speeds. The best known implementation of ISDN is the Basic Rate service that is designed to allow for a circuit switched connection capable of carrying digitized voice and data (at about 9000 characters per second) between an office or residence and a telephone company switching center. Several phone companies have now published tariffs for ISDN.

Several regions of the United States have or will soon have ISDN service available for use in businesses. It may also be available for residential use in some areas, but applications supporting residential use of ISDN are not plentiful.

Primary rate ISDN is also beginning to appear. It allows twenty-three Basic Rate voice or data channels to be combined in one higher speed channel for transmission between switching centers. Much higher rate ISDN service, known as broadband ISDN, is now under development and it will accommodate much higher rates of speed through the use of fiber optics technology.

Synchronous Optical Network (SONET) SONET is a standard for transport of digital traffic over fiber. It is capable of a broad variety of speeds, ranging from approximately 5 million characters per second up to 1 billion characters per second. The SONET technology will provide the foundation for transmission of data in the gigabit (billion bit per second) networks now under discussion.

Asynchronous Transfer Mode (ATM) ATM is a fast packet switching protocol defined to allow various types of traffic to be intermixed over high speed, low error circuits. As such, it is well suited to SONET. Users can gain access to the capabilities of an ATM-based network through protocols such as the IEEE 802.6 metropolitan area service standard or the broadband ISDN CCITT standard.

Switched Metropolitan Data Service (SMDS) SMDS is a service developed by Bellcore to provide public WAN and MAN connectivity. While it is capable of operating across a broad range of speeds, its current trials are being conducted at speeds in the range of 100,000 characters per second.

These physical and network layer technologies can all provide a foundation for higher layers in the protocol stack. In other words, SONET, ATM, and SMDS can provide transport for either TCP/IP or OSI networks. They therefore provide a path for expansion of those protocol families into the wide area network environments provided by optical fiber technology.

Proprietary standards

IBM and Digital Equipment Corporation have also created standards to support communication among their own products. These standards, which started out as proprietary, are now being migrated toward the ISO standards.

Systems Network Architecture (SNA) IBM’s standard, SNA, has been more important to the business community than to higher education and research, but a forerunner of the SNA standard (Network Job Entry) forms the foundation for the BITNET communications network.

DECNET Digital Equipment Corporation’s standard, known as DECNET, is moving steadily toward ISO and has provided the foundation for several national and international disciplinary networks.[7]

Future Directions

Fortunately, the definition and implementation of the upper layers of the ISO standards are steadily advancing. The protocol community has been geographically divided, with the TCP/IP community based primarily in the United States and the ISO community based primarily in Europe. The TCP/IP standard suite has become increasingly popular in Europe, primarily because the European information technology community needs the connectivity offered. As a result, the Internet now provides good access to the European community. The TCP/IP standard now includes definitions at the transport level and below which allow implementation of the upper layers of the ISO suite. There is a steady evolution of ISO applications on top of TCP/IP transport, and the standards appear to be converging. Key vendors are supporting the interoperation of the two dominant suites through development of products that use open data communications standards. If these trends continue, full interconnection of networks and easy passage of information from a network using one set of standards to a network using another will become steadily less cumbersome and more economical.


All of these systems are justified, of course, by the applications that they enable.

Remote logon

Packet networks were initially expected to be used primarily to support remote use of unique computing resources. For instance, an individual located at one university would be able to remotely logon through the network to a computer at another university that supported applications not locally available. Remote logon continues to be a substantial use of the national network infrastructure and is currently the most important service to the library community because it supports client access to bibliographic data bases. Standards such as Z39.50 and the MARC format for the distribution of catalog records make consistent access to such bibliographic resources possible. There is not yet a widely adopted standard for the user interface to catalogs and other bibliographic information, although standards for Common Command Languages have been underway for a number of years.

Electronic mail and conferencing

The second application, not initially anticipated, was interpersonal communication through electronic mail, computer bulletin boards, and electronic conference systems. All of these systems are in use today, and electronic mail in particular is a large contributor to the overall use of the national network infrastructure.

File transfer

From the beginning, it was understood that it would be convenient if data files available on one machine could be easily transmitted to another. This use of the network, or file transfer, is now the largest single use of the national networking infrastructure, accounting for approximately half of the total data transmitted over the network. File transfer will provide the foundation for remote access to published works.

Figure 1 [omitted from this electronic version]

As can be seen in Figure 1, a graph of the percentage of bytes transmitted on the NSFNET backbone during the month of May 1993, remote logon, electronic mail and conferencing, and file transfer comprise almost all of the traffic on the network, with file transfer alone accounting for almost half of the traffic. The remaining fifteen percent is presently not a large share of the backbone traffic, but is expected to grow substantially in coming years.

Client server models

As data networks have increased in speed and the cost of data transmission has steadily declined, another pattern of interaction between computers has become important. These uses are motivated by a desire to combine the virtues of the central computing model that supports shared data access with the economy and ease of use of the personal computing model. The client/server model uses a central computer as a data and/or computation server and a large number of networked personal computers operating as clients. Since the client/server model requires continuous connection between the client machines and the server, and because the responsiveness of the personal computer depends on rapid responses from the server, the model has been implemented mainly in arenas where networking speeds are high and costs are low. The model has found its greatest application in local area networks and is supported with a variety of proprietary software from a number of companies such as Novell, Microsoft, and Banyan.
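A minimal in-process sketch of the client/server pattern follows, with invented names; in a real system the clients and server are separate machines communicating over the network rather than by direct procedure call.

```python
# Sketch of the client/server model: one central server holds shared
# data; many lightweight clients send it requests and use the replies.
# The catalog example and all names here are invented for illustration.

class CatalogServer:
    """Central machine holding data shared by all clients."""
    def __init__(self, records):
        self._records = records

    def handle(self, request):
        """Serve one client request (here, a title lookup)."""
        return self._records.get(request, "not found")

class Client:
    """Personal computer that relies on the server for shared data."""
    def __init__(self, server):
        self._server = server

    def lookup(self, title):
        return self._server.handle(title)

server = CatalogServer({"Moby-Dick": "PS2384.M6"})
desk_1, desk_2 = Client(server), Client(server)
assert desk_1.lookup("Moby-Dick") == "PS2384.M6"
assert desk_2.lookup("Walden") == "not found"
```

Every client sees the same shared records while keeping the economy and responsiveness of a personal machine, which is the combination of virtues the model is designed to deliver.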

Increasingly, though, additional efforts are being undertaken to support the client server model in a more standards based environment. In particular, for file sharing, Sun has developed the Network File System which is broadly used in conjunction with the UNIX operating system. Work done at Carnegie Mellon University on the Andrew Project has yielded the Andrew File System which uses server machines running UNIX software communicating over standard TCP/IP networks to client machines of various types in support of file sharing. The Andrew File System is now being extended to even larger environments such as larger university campuses and the national network.

Distributed computing environments

As data networking technology continues to gain in speed and become more cost effective, substantial research and development is underway in support of a fundamentally new type of computing architecture that has been called the distributed computing environment (DCE). In the distributed computing environment, the individual computers, of whatever size and capacity, are not treated as independent machines but rather as parts of an overall system tied together by a comprehensive data architecture and a high speed network environment. While such environments have long been the subject of experiment and research, the first product family to achieve commercial success was built by Apollo Computer. As was the case with most computing products of its time, the Apollo distributed computing environment was a proprietary system, and it lost market share as open systems environments built on standards became steadily more popular. A number of other manufacturers have since developed similar systems. One category of these systems is built with proprietary software and is directed at personal computers of the MS/DOS or Macintosh class. IBM, Microsoft, and Apple have all brought forward products compatible with their personal computer operating systems to implement proprietary distributed computing environments.

Simultaneously, companies such as Novell and Banyan have been very successful with additional proprietary products that transform the local area networks they sell into distributed computing environments. These environments were initially built to support personal computers in small firms or individual offices and departments in large organizations, but are now being expanded to provide the foundation for distributed computing in very large organizations with geographically dispersed activities.

The other category of distributed computing products is based on open, standards-based software and is targeted at the next generation of personal computer and workstation systems, such as UNIX, Windows NT, and OS/2. The Apollo work has been combined with the Carnegie Mellon work on Andrew and adopted by the Open Software Foundation (OSF) as its standard for distributed computing. Most of the major computer equipment manufacturers are members of OSF and are now beginning to deliver systems that can be included in a standards-based distributed computing environment. Since these systems support the integration of machines and operating systems from many manufacturers, they are particularly well suited for large organizations with heterogeneous computing systems. Further, they support the integration of information spaces that span multiple organizations. As a result, we can expect network-based distributed computing environments to become part of the standard products available in the open systems environment over the coming few years.

As the capability and quality of networks continue to grow, DCE will become capable of spanning steadily larger and more geographically dispersed user populations. In the distributed computing environment, remote logon and file transfer are supplanted by automatic access to the computational resources and files of other computers in the environment. In other words, if the task you are performing at your personal computer requires access to a supercomputer for a particular computation, and that computation in turn requires access to a file being constructed on your personal computer, the environment would automatically invoke the supercomputer and connect it to the appropriate file without explicit direction from the user. This type of computing environment will probably dominate computing toward the end of the decade.

DCE has several characteristics that will determine the network's impact on preservation and access. DCE is based on automatic interaction among a number of geographically distributed file servers, each of which maintains local copies of frequently used files and updates those copies only when the master file has changed and local access is desired. Further, if local access is not made, the local copy will be released from storage and replaced by other files in more frequent demand. The file server with responsibility for the master copy will have a locally defined system for migrating a document from quickly accessible, expensive storage to less expensive media, stored in less expensive space with longer access times, if requests for the file decline in frequency. While the file will remain cataloged until it is explicitly deleted, it may well require hours to physically retrieve and provide network access to a file that has not been accessed for a year or more.
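The caching behavior described above, keeping local copies of frequently used files, refreshing a copy only when the master has changed, and releasing the least-used copies when space is needed, can be illustrated with a small sketch. The class, file names, and version numbers below are hypothetical; real distributed file systems implement far more elaborate versions of the same idea.

```python
from collections import OrderedDict

class LocalFileServer:
    """Toy local file server: caches recently used files, refreshes a
    copy only when the master's version has changed, and evicts the
    least recently used copy when capacity is exceeded."""

    def __init__(self, master, capacity=3):
        self.master = master        # authoritative store: name -> (version, data)
        self.capacity = capacity
        self.cache = OrderedDict()  # name -> (version, data), in LRU order

    def read(self, name):
        master_version, master_data = self.master[name]
        if name in self.cache:
            version, data = self.cache.pop(name)
            if version != master_version:      # master changed: refresh the copy
                version, data = master_version, master_data
        else:                                  # not cached: fetch from the master
            version, data = master_version, master_data
        self.cache[name] = (version, data)     # mark as most recently used
        if len(self.cache) > self.capacity:    # release the least recently used copy
            self.cache.popitem(last=False)
        return data

master = {"a.txt": (1, "alpha"), "b.txt": (1, "beta")}
server = LocalFileServer(master)
print(server.read("a.txt"))   # alpha  (fetched from the master, now cached)
master["a.txt"] = (2, "alpha v2")
print(server.read("a.txt"))   # alpha v2  (version mismatch forced a refresh)
```

The migration of a master copy to slower, cheaper storage as demand declines is the same eviction idea applied within a single server's storage hierarchy rather than across the network.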

Support services

For any of these network-based applications to be fully useful, certain support services must be available. Support services provide the user with tools and resources for using networking resources more effectively. Users often describe the support services themselves as the actual service; the underlying networking facilities thus become transparent to the user.


Perhaps the most obvious support service is the network directory.

“White pages” One might think of this as the equivalent of the white pages of the telephone book, where you can look up a name and find the number (network address) at which a particular individual or organization can be reached. Because the data networking environment is created by a large number of organizations working in a loosely linked hierarchy, it is much more difficult to develop and maintain white pages and directory services there than in the voice telephone environment. As a result, while there are local examples of user directories, there is no comprehensive set of regional or national services providing access to this information. Fortunately, an international standards effort has defined a standard for such directories, X.500, and experimental implementations of this standard are now underway at a number of universities and regional networks.
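The idea behind a hierarchical directory, which X.500 formalizes with far more machinery (distinguished names, replication, access protocols), can be illustrated with a toy lookup. All names, organizations, and addresses below are invented for the example.

```python
# Toy "white pages" organized hierarchically, echoing X.500's idea of a
# name built from country, organization, and person.
directory = {
    "US": {
        "Example University": {
            "Jane Scholar": "jscholar@example.edu",
        },
    },
}

def lookup(country, org, person):
    """Walk the hierarchy one level at a time; None if any level is missing."""
    node = directory
    for key in (country, org, person):
        if key not in node:
            return None
        node = node[key]
    return node

print(lookup("US", "Example University", "Jane Scholar"))  # jscholar@example.edu
print(lookup("US", "Unknown College", "Jane Scholar"))     # None
```

The hierarchical structure is what lets many loosely linked organizations each maintain their own branch of the directory, which is precisely the administrative arrangement the data networks already have.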

“Yellow pages” Almost equally important to the network user is access to services. There is, for instance, currently no comprehensive directory of bibliographic resources accessible via the network. The same issues that complicate the creation and maintenance of the “white pages” impede the development of a services directory. As the connected community grows, and organizational structures are developed to support fee-for-service delivery over the network, service providers will probably be willing to fund their appearance in the service directory, and it is expected that commercial ventures will build directories and make them available on the network. Once again, there are prototypes and pilots already in existence.

Integrity, security, and privacy

Obviously, as networks proliferate, information integrity, security, and privacy have become increasingly important concerns. Precisely because of the highly distributed architecture that supports the overall network fabric, much of the information traveling from any particular source to any particular destination traverses computers and packet switches under the control of neither the sending nor the receiving organization. As a result, it is extremely easy to construct scenarios in which information is intercepted, inappropriately modified, or prevented from being delivered. There are very strong norms in the network community against such intervention in the network fabric. However, a substantial amount of the information to be transmitted on the networks is of a nature that users require more than normative consensus to ensure that the network maintains the integrity of the information, secures it from inappropriate tampering, and preserves the privacy of the communications involved. To accomplish this, two fundamental technological challenges must be surmounted: encryption and authentication.


Encryption is the process by which one ensures that the information itself cannot be viewed by anyone other than the intended recipients. In today's networking environment, the only technique that provides substantial assurance is encryption of the data before transmission. Ideally, that encryption should be based on well-understood international standards. Unfortunately, the character of the international community has dictated that most of these standards are national rather than international. The current standard in the United States is the Data Encryption Standard (DES), and both hardware and software are widely available to implement encryption according to this standard.
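The basic round trip of encrypting before transmission and decrypting on receipt can be illustrated with a toy cipher. The sketch below is emphatically not DES; it is a simple hash-derived keystream meant only to show that the same shared secret both conceals the information and recovers it.

```python
import hashlib
from itertools import count

def keystream(key: bytes):
    """Derive an endless stream of key bytes by hashing key + counter."""
    for counter in count():
        block = hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        yield from block

def xor_crypt(key: bytes, data: bytes) -> bytes:
    """XOR data with the keystream; applying it twice restores the original."""
    return bytes(b ^ k for b, k in zip(data, keystream(key)))

key = b"shared secret"
ciphertext = xor_crypt(key, b"confidential report")
assert ciphertext != b"confidential report"   # unreadable in transit
plaintext = xor_crypt(key, ciphertext)        # recipient applies the same key
print(plaintext.decode())  # confidential report
```

Anyone intercepting the ciphertext on an intermediate packet switch sees only the scrambled bytes; only a holder of the shared key can invert the transformation.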

User authentication

Successful implementation of encryption depends on a somewhat more difficult service, user authentication. Unless it is possible to ascertain definitely that a user is exactly who she says she is, and that the computer she is using is not in fact being inappropriately used by someone else, the best data encryption approaches will not protect information against users and computers masquerading as authorized to see it. Here again substantial work has been undertaken, the most important of which was done at the Massachusetts Institute of Technology as part of Project Athena to develop the Kerberos user authentication tools. These tools are now available in an increasing number of network services and have been chosen by the Open Software Foundation as part of its Distributed Computing Environment. Ultimately, it is likely that user authentication will be integrated with the directory services. With user authentication as a support service, it will be possible to use encryption to secure information being transmitted over the network.
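The shared-secret principle on which Kerberos builds can be illustrated with a simplified challenge/response exchange. Real Kerberos involves tickets issued by a trusted third party and much else; the sketch below omits all of that and shows only the core idea, that a user can prove knowledge of a secret without ever transmitting it.

```python
import hashlib
import hmac
import secrets

# The server and the genuine user share a secret; an impostor does not.
shared_secret = secrets.token_bytes(16)

def server_challenge():
    """Server issues a fresh random challenge so old answers cannot be replayed."""
    return secrets.token_bytes(16)

def user_response(secret, challenge):
    """User answers with a keyed hash of the challenge, never the secret itself."""
    return hmac.new(secret, challenge, hashlib.sha256).digest()

def server_verify(secret, challenge, response):
    """Server recomputes the expected answer and compares in constant time."""
    return hmac.compare_digest(user_response(secret, challenge), response)

challenge = server_challenge()
print(server_verify(shared_secret, challenge,
                    user_response(shared_secret, challenge)))    # True
print(server_verify(shared_secret, challenge,
                    user_response(b"wrong secret", challenge)))  # False
```

A masquerading user or machine fails the exchange because it cannot compute the correct response, which is exactly the guarantee the encryption machinery above depends on.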