Summary Article: Computer Networks
from Encyclopedia of Social Networks

A computer network is a collection of two or more computing devices linked by communications channels. These networks allow users and devices to share resources and communicate with one another. As a technology of communication that spans much of the globe, computer networks have done much to shape contemporary social networking. Online social networking sites such as Facebook and MySpace have been made possible by the development of computer-to-computer networks, which allow hundreds of millions of people to communicate with friends, family, and colleagues through their computers or cell phones. However, just as technology has influenced culture, culture has shaped this technology. Computer networks owe their existence to a wide variety of social networks that preceded computer networking technology itself: the social networks of technologists, military officials, businesses, and academics all contributed to its development. Thus, any analysis of computer networks is necessarily an analysis of the social networks that gave rise to them. Throughout the history of computer networking, many social actors have been involved in producing this technology, and the field maintains its own unique culture.

Development of Computer Networks

Most of the research and implementation involved in computer networking took place in the United States in the 1960s, although researchers in Great Britain also contributed. In the United States, computer-to-computer networking was largely spurred by military and government spending. During the Cold War, and particularly after the launch of Sputnik by the Soviet Union, the U.S. government radically increased its spending on basic scientific research. The goals of this research were not predetermined; that is, there was no specific mandate to create a computer network. Rather, the U.S. government sought to increase its scientific power in response to Soviet successes in science and technology. Thus, researchers could receive money with very few strings attached, just so long as they were conducting research into cutting-edge technologies.

Computer networks did not appear fully formed in the 1960s. In fact, there is no clear beginning to computer networking; rather, it grew out of a complex technological environment and from many prior communications and technological practices, the most important of these being telegraphy and telephony, radar networks, computer time sharing, and packet switching.

Over its long history, telegraphy developed sophisticated systems of message management and delivery. The Morse code used in telegraphy inspired Claude E. Shannon's 1948 theory of information, a vital ingredient in computer science. Telegraphy, and later telephony, enabled people to overcome geographical limitations to communication by radically reducing the time it took to send information over long distances. The national telephone network in the United States, owned by the monopoly American Telephone and Telegraph (AT&T), was a key infrastructure upon which computer networks were to be built. Once digital computers were invented, they were soon put to use in managing traffic on telegraph and telephone networks; in this way, computers were closely linked to telecommunications. Telephone companies also developed the modem, which was later used for computer-to-computer connections. Thus, these technologies are direct forebears of computer networking.

Similarly, computer networks trace their lineage to the U.S. military's network of early-warning radar stations, called the Semi-Automatic Ground Environment (SAGE). Started in the 1950s, SAGE linked radar stations to computers, allowing operators to monitor air traffic for intrusions by enemy bombers. Operators would then use radio to guide aircraft to intercept targets. Computers managed all of the data involved in this complex operation in real time, presenting data to operators on specially designed viewing screens, an innovative feat given the technology of the time. The system was designed to mitigate the threat of a bomber-based nuclear attack by allowing time for a response to such intrusions. However, despite this ambitious effort, the system was quickly made obsolete by the advent of ballistic missiles, weapons far faster than bombers.

Computer time sharing arose because most pre-1970s computers were extremely expensive, incompatible with one another due to differences in their software, and in need of constant maintenance. Regular access was limited to what Paul Ceruzzi aptly calls the "priesthood" of computer programmers, and use essentially involved queuing up, submitting data in batches, and waiting for results. In order to make computer use easier and more accessible, computer manufacturers and researchers began to develop ways to distribute access to computer processing. This was done via time sharing using remote terminals. Terminals were not computers; they were more akin to today's monitors, printers, and keyboards. However, they were networked to the main central processing unit (CPU), often via phone lines, and allowed users to interact with the computer, inputting data into and receiving data from mainframe computers even over long distances.

The final ingredient, packet switching, was developed independently in the United States by Paul Baran and in the United Kingdom by Donald Davies as a means to efficiently send and receive data over transmission lines. The theory was that many small chunks of data, called packets, could be sent through distributed networks faster and more efficiently than large messages transmitted whole from one computer to another. In this system, a large message is broken into packets of a fixed size, each of which can be routed along any path through the network from origin to destination; no two packets need take the same route. Once they arrive at their destination, the packets are reassembled into the complete message. Even though this process involves breaking individual messages apart and seemingly casting them to the digital winds, the first large-scale packet-switching computer network, the Advanced Research Projects Agency Network (ARPANET), proved in 1969 that it would work. Packet switching was not the only model for computer networking, but the eventual success of the ARPANET solidified it as the method of choice.
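The mechanics can be sketched in a few lines of code. The following Python fragment is an illustrative toy, not any historical protocol: it breaks a message into fixed-size, numbered packets, shuffles them to mimic packets traveling different routes, and reassembles them in order at the destination.

    import random

    PACKET_SIZE = 8  # bytes of payload per packet (an arbitrary toy value)

    def packetize(message: bytes) -> list[tuple[int, bytes]]:
        """Split a message into fixed-size packets tagged with sequence numbers."""
        return [(seq, message[i:i + PACKET_SIZE])
                for seq, i in enumerate(range(0, len(message), PACKET_SIZE))]

    def reassemble(packets: list[tuple[int, bytes]]) -> bytes:
        """Rebuild the original message by ordering packets on sequence number."""
        return b"".join(payload for _, payload in sorted(packets))

    message = b"Messages are split apart, routed independently, and rejoined."
    packets = packetize(message)
    random.shuffle(packets)  # packets may arrive out of order via different routes
    assert reassemble(packets) == message

Real packet-switched protocols also attach headers carrying addresses and checksums to each packet, but the essential idea of numbering, independent routing, and reassembly is the same.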

The ARPANET was developed by the U.S. military in partnership with universities and private companies and was funded by the U.S. Department of Defense's Defense Advanced Research Projects Agency (DARPA). ARPANET began with four nodes: three in California (at the University of California, Los Angeles [UCLA]; the University of California, Santa Barbara; and the Stanford Research Institute) and one at the University of Utah. Even though it was an ambitious project, the first packet-switching network in existence, it quickly expanded through further funding from DARPA as universities and research labs were connected across the United States and eventually in Europe and Canada.

However, despite the massive funding, research, and installation of infrastructure, ARPANET initially suffered from a lack of use. Although its designers envisioned it as a means to share scarce computer resources, users did not take advantage of this capability, and in its first few years the future of the network was uncertain. This changed when a networked messaging system, e-mail, first appeared in 1971. By the end of the 1970s, e-mail was the most common application used on ARPANET.

ARPANET was eventually linked to other networks, such as the ALOHAnet packet radio network in Hawaii, satellite-based networks, and small local area networks using the new high-speed Ethernet connection technology. This was the beginning of the Internet, since these widely different networks were "internetworked." Again, the U.S. military was a driving force behind this soon-to-be global linkage of computer networks; the military's Defense Communications Agency had taken over ARPANET in 1975 and funded much of the research involved in connecting ARPANET to the other networks. The technologists involved in the invention of the Internet created open technical standards (called protocols) that allowed any device to be connected to the growing network. This made the system highly resilient, since it does not matter which computer one uses to connect: as long as the computer can speak the standard language of the Internet Protocol, it can get online. This allows for a network of heterogeneous devices, operating systems, and software applications, and it does not preclude the development of new networking technologies.
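The practical force of this openness can be illustrated with a brief sketch. The Python fragment below opens a standard TCP connection and sends a minimal HTTP request; it runs unchanged on any operating system with a conforming TCP/IP stack, which is precisely the point of open protocols. The host shown, example.com, is a reserved demonstration domain used here purely for illustration.

    import socket

    HOST = "example.com"  # a reserved demonstration domain
    PORT = 80             # the standard HTTP port

    # Any device that "speaks" the standard Internet protocols can take part;
    # neither side needs to know the other's hardware or operating system.
    with socket.create_connection((HOST, PORT), timeout=5) as conn:
        conn.sendall(b"HEAD / HTTP/1.0\r\nHost: example.com\r\n\r\n")
        reply = conn.recv(1024)
        print(reply.decode("ascii", errors="replace").splitlines()[0])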

This process of internetworking took place from the 1970s through the 1990s. In 1983, ARPANET was split into two separate networks: MILNET (the military network), to which access was much more restricted, and a research-oriented ARPANET. Responsibility for civilian research networking passed increasingly to the National Science Foundation (NSF), and the ARPANET was eventually decommissioned in favor of the NSF's own network, the NSFNET. The NSF maintained a strict prohibition on commercial use of the network. However, since companies recognized the value of computer networks, they often set up small networks, termed intranets or local area networks (LANs), which they used to share documents and word-processing applications and, increasingly, to rely on e-mail for asynchronous global communication. These small-scale networks spurred private research into networking technologies, giving rise to networking companies like Novell and producing a large, private information technology workforce capable of solving networking problems. After the NSFNET was opened to commercial use in 1991, this large base of networks and users was eager and able to join the network of networks. Soon, the NSFNET was simply referred to as the Internet; by the mid-1990s, it took the shape recognized today, with sites for commerce, research, communication, and resource sharing.

Cultural Influences on Computer Networking

Just as computer networking did not appear in a technological vacuum but rather grew out of a complex technological environment, it also has grown out of the activities of a complex mix of social actors. These actors and the networks in which they associate work with and against one another in the effort to shape the meanings, uses, and purposes of computer networks. Here, military officials negotiate with academics; academics recruit from both businesses and the hacker subculture; and, as networking becomes more ubiquitous, everyday users enter into the conversation and make their own demands.

A U.S. Air Force Communications Squadron network control center technician maintains the base computer server for over 10,000 users at an undisclosed location in Asia in 2010. The military was instrumental in developing the first networked computers.

The development of packet switching, that vital ingredient in computer networking, grew in part out of the U.S. Department of Defense's (DoD) culture of command and control, a Cold War information technology practice. The DoD desired to monitor all airborne, seaborne, and land traffic in order to manage its troops and keep track of enemies. When Baran, a researcher at the RAND Corporation (a think tank funded largely by the U.S. Air Force), developed packet switching in the 1960s, it was intended as a means to communicate reliably in case of a nuclear attack: if a nuclear weapon knocked out a key communications hub, the military would be unable to coordinate a response. The DoD saw this as a vital part of its Cold War strategy. Thus, packet switching was presented as a means of communication that could supply military leaders with complete knowledge and control of the battlefield even in the face of large, catastrophic attacks. This culture continues in the military today, for example, in the Air Force Cyber Command project, which monitors computer networks for online attacks.

However, the creation of ARPANET was not entirely a matter of military culture, nor was it driven simply by the desire to maintain communications in the event of an attack. ARPANET was also built by academics who valued their freedom of research and inquiry into computer networking technologies. Notable among them was J.C.R. Licklider, who conceptualized an "intergalactic computer network" that could allow researchers to share their work and collaborate. His vision for such a network was founded as much on the people involved as on the computers. As the first major users of this network, academic scientists and engineers were simultaneously its producers and consumers, building the network and using it to communicate. One of their main goals was to increase the capacity, reliability, and efficiency of digital communications. By networking computers together, these technologists sought to share resources and research. The development of e-mail fostered their online culture of open exchange, enabling better contact and sharing and allowing them to continue solving the myriad technical problems associated with digital communications networks.

As Cold War-era scientists and military branches increased the breadth and scope of their global communications systems and thus their ability to influence events around the globe, private corporations took notice. Through the migration of personnel across sectors, the social networks of the military and the academy began to influence and be integrated into the social networks of corporations. As capitalism is an economic system that seeks new markets, computer networks—which offered global communications and information processing—were adopted by businesses looking to expand. Computer networks allowed corporations to become transnational, transcending national borders in their search for cheaper labor and materials. This global expansion is now commonly referred to as globalization: far-flung networks of transnational corporations made possible by telecommunications. These companies have had a major impact on the design of networks. From the outset of the Internet, corporations sought increased security for the network, new tools for conducting business, and better access to faster computing. Their demands helped fuel the commercialization of the Internet, and by the 1990s, the Internet was seen as a new space for global commerce.

Computer hackers have also had a major impact on the development of networks. Hackers pride themselves on their mastery of machines and networks, which they seek to open up, explore, and modify. Although hacker is commonly considered a pejorative term, its roots are more benign, referring to anyone skilled with technology, computer programming, or computer hardware. The advent of cheap personal computers in the mid-1970s, along with inexpensive modems, gave the hacker subculture a boost. Prior to the personal computer (PC), hackers had to beg, borrow, or steal access to the large mainframes owned by universities. With PCs and modems, hackers could get onto the Internet, explore, develop new software, and forge new online social networks. The threat of their intrusion (real or imagined) into sensitive parts of the network also did much to shape computer networks; today there is a massive industry dedicated to computer network encryption, security, and virus protection.

Finally, the development of the World Wide Web (WWW, or Web), an application that provides a graphical interface to the Internet, helped open up the network to a broader base of people. Before the Web, people used services such as Prodigy, CompuServe, Minitel, or AOL to connect with one another and enjoy online graphics, chat services, bulletin boards, and e-mail. However, users of these services typically could not access the broader Internet. After the advent of the Web in the early 1990s, people could use their PCs to access images, text, audio, and video content from a huge array of sources. Moreover, with some effort, users could post their own content to the Web. Alongside the expansion of access to dial-up connections, this led to the popularization of the Internet. This broader use of the network increased the number of social groups engaged with its development and continues to shape it today. The advent of the Web 2.0 business model is one result; many Websites now rely almost exclusively on user-generated content. For example, Wikipedia is almost entirely user-written, YouTube features millions of amateur videos, and social networking and blogging sites are essentially frames that users fill with content.

In short, all of these various and often pre-existing social networks worked together (and sometimes against one another) to create and shape computer networks. The military, the academy, business, hackers, and everyday users all influenced the content, uses, and meanings of the Internet. However, despite the often contradictory goals and desires these groups have held for computer networks, the Internet remains a remarkably open structure, capable of accepting any form of data and nearly any device. While corporations, and governments such as those of China, France, Iran, Egypt, and Australia, have worked to regulate content on the Internet, its technological structure and culture both foster openness, even in countries with increased regulation.

Contemporary Developments

Although predicting the future of computer networking is nearly impossible, there are a few notable contemporary trends. First, the ubiquity of computer networks, at least in the wealthiest nations, has rendered them banal. This is reflected in the latest development in networking, termed cloud computing. This mode of networking treats computer processing power and storage as a utility, much like water delivery or the electric grid. End users do not need to know how electrical current reaches their homes, only that the lights come on. Likewise, cloud computing services offered by vendors such as Amazon and Google use broadband Internet connections to deliver data processing and storage to customers; users of these services do not need to know how or where their data is stored or processed, only that it gets done. This development is driven by corporations, which seek to centralize computing power and gather customer data to sell to advertisers. In a sense, cloud computing is a return to the older time-sharing model, but with updated speeds and with the end user's computer providing the interface.
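The utility abstraction can be loosely illustrated as follows. In this hypothetical Python sketch, a client stores and retrieves a named blob through a generic HTTP interface without ever learning where the bytes physically reside; the endpoint URL is invented for illustration, and real vendors' services expose their own authenticated APIs.

    import urllib.request

    # Hypothetical storage endpoint; real cloud services require authentication.
    BASE = "https://storage.example.com/objects"

    def put(name: str, data: bytes) -> None:
        """Store bytes under a name; where they live is the provider's concern."""
        req = urllib.request.Request(f"{BASE}/{name}", data=data, method="PUT")
        urllib.request.urlopen(req)

    def get(name: str) -> bytes:
        """Fetch the bytes back by name."""
        with urllib.request.urlopen(f"{BASE}/{name}") as resp:
            return resp.read()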

The development of new Internet protocols such as IPv6, which vastly expand the number of unique addresses available, could enable the expansion of the "Internet of things," in which everyday objects are all linked to the network. In this mode, objects might be equipped either with onboard computers or with Radio Frequency Identification (RFID) tags, so that the manufacture, distribution, and sale of objects can be quickly quantified and analyzed. Even if this ubiquitous embedding of computers and RFID into objects never occurs, the semantic tagging of objects, coupled with existing identifiers such as barcodes and International Standard Book Numbers (ISBNs), could give rise to something analogous to the original "Internet of things" idea.
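To make the idea of machine-readable identifiers concrete, the short Python sketch below computes the check digit of an ISBN-13, the same scheme used in EAN-13 barcodes: the first twelve digits are weighted alternately by 1 and 3, and the check digit brings the weighted sum up to a multiple of 10.

    def isbn13_check_digit(first12: str) -> int:
        """Compute the ISBN-13/EAN-13 check digit from the first 12 digits."""
        total = sum(int(d) * (1 if i % 2 == 0 else 3)
                    for i, d in enumerate(first12))
        return (10 - total % 10) % 10

    # A standard worked example: 978-0-306-40615-? yields check digit 7.
    assert isbn13_check_digit("978030640615") == 7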

The increasing miniaturization of computers has led to the rising power of "smart phones," cellular phones with Web-enabled computers inside. A growing number of people access the Internet through their phones rather than through their computers. This has created a network that reflects mobility: users increasingly rely on Global Positioning System (GPS) applications, image recognition software, and mapping applications to gain information about their locations. This mobile network is also accelerating changes in how people work; the distinctions between work and play, and between public and private spaces, blur when one is always connected to the network.

The struggle over intellectual property on the Internet is also not likely to abate in the near future. Media corporations seek to dominate intellectual property on the Internet in order to protect their profits, control the distribution of content, and monopolize the cultural meanings of brands, characters, and stories. Moreover, on many Websites, services and content are offered for free as long as users provide their personal data and view advertisements. In fact, many Web 2.0 companies such as Google, Facebook, MySpace, and YouTube have built their businesses upon capturing user-generated content and information, claiming perpetual ownership over it, and selling it to advertisers. Corporations are claiming more and more rights to control various forms of information. In contrast, many users have flouted media companies' intellectual property claims by using computer networks to pirate software and content, distribute it freely across the Internet, and circumvent demands for personal information through anonymity. Somewhere between these extremes are the free and open source software (FOSS) movement and "copyleft" advocates such as Creative Commons, who have crafted content licensing schemes that allow users more control over how their content is used. However, even with the advent of copyleft, the debate over intellectual property in computer networks remains heated.

Finally, these developments often belie the fact that the Internet is not, in fact, ubiquitous. From the very beginning, computer network designers had to decide who would have access to the network and who would not, largely as a function of geography and of social connection. In the earliest days of ARPANET, many researchers were excluded from the network, and from computer science research, because they were not affiliated with DARPA. In addition, since the Internet was a creation of the United States, other nations, notably those of the Soviet sphere, were left out. Given the high cost of infrastructure, even today the Internet is not available to many people in poorer parts of the world. Finally, even within highly developed countries such as the United States, there is a gap between Internet haves and have-nots. This division, commonly termed the digital divide, is both a technological and a social gap; many social researchers have been troubled by this inequality and have sought ways to increase access to the Internet.

Conclusion

While it is difficult to foresee what the future holds for the development of computer networks, one thing is certain: this process will remain a social one. That is, technological development will be debated, created, and managed by diverse social groups, each with their own needs and desires. Computer networks arose out of pre-existing social networks, and while the capacity to communicate globally has affected notions of friendship, commerce, privacy, connection, and intimacy, technological change is a two-way street.

See Also:

Communication Networks, History of Social Networks 1960-1975, Internet History and Networks, Military Networks, Telecommunication Networks, Wi-Fi.

Further Readings
  • Abbate, J. Inventing the Internet. Cambridge, MA: MIT Press, 1999.
  • Ceruzzi, P. E. A History of Modern Computing. Cambridge, MA: MIT Press, 2003.
  • Moschovitis, C. J. P. History of the Internet: A Chronology, 1843 to the Present. Santa Barbara, CA: ABC-CLIO, 1999.
  • Zittrain, J. The Future of the Internet and How to Stop It. New Haven, CT: Yale University Press, 2008.
Gehl, Robert W.
University of Utah
SAGE Publications, Inc.
