The History of the Internet

The Department of Defense's Advanced Research Projects Agency (ARPA, renamed DARPA in 1972) started the Internet in 1969, in a computer room at the University of California, Los Angeles. The agency wanted to enable scientists at multiple universities to share research information. The Advanced Research Projects Agency NETwork (ARPANET), the predecessor to the Internet, was created 12 years after Sputnik, during the Cold War. The agency's original goal was to develop a network robust enough to withstand a nuclear attack.

The first communications switch that routed messages on the ARPANET was developed at Bolt Beranek and Newman (BBN) in Cambridge, Massachusetts. (BBN was bought by GTE. Bell Atlantic acquired GTE, changed its name to Verizon and spun off BBN as Genuity.) The ARPANET used packet switching, a technique developed at the Rand Corporation in 1962. Data was broken up into “envelopes” of information that contained addressing, error checking and user data. One advantage of packet switching is that packets from multiple computers can share the same circuit; a separate connection is not needed for each transmission. Moreover, in the case of an attack, if one computer goes down, data can be rerouted to other computers in the packet network. TCP/IP, the protocol still used on the Internet, was developed in 1974 by Vint Cerf and Robert Kahn. It supports a suite of services such as email, file transfer and logging onto remote computers.
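
To make the “envelope” idea concrete, here is a minimal sketch in Python of what a packet carries. The field names, packet size and checksum method are illustrative assumptions, not the actual ARPANET or TCP/IP formats.

    from dataclasses import dataclass
    import zlib

    @dataclass
    class Packet:
        source: str        # address of the sending computer
        destination: str   # address of the receiving computer
        sequence: int      # position of this packet within the full message
        payload: bytes     # the user data carried in this "envelope"
        checksum: int      # error-checking value computed over the payload

    def packetize(message: bytes, src: str, dst: str, size: int = 512):
        """Break a message into fixed-size packets, each carrying its own
        addressing and error-checking information."""
        packets = []
        for seq, start in enumerate(range(0, len(message), size)):
            chunk = message[start:start + size]
            packets.append(Packet(src, dst, seq, chunk, zlib.crc32(chunk)))
        return packets

Because each packet is self-addressed, packets from many computers can share one circuit and be reassembled in order at the destination using the sequence numbers.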

In 1984, as more sites were added to the ARPANET, the term Internet came into use. The ARPANET itself was decommissioned in 1990, but the Internet was left intact. In 1987, oversight of the Internet was transferred from the Department of Defense to the National Science Foundation.

While the Internet was still used largely by universities and technical organizations, its applications expanded beyond the original defense work. In particular, newsgroups used by computer hobbyists, college faculty and students formed around special interests such as cooking, specialized technology and lifestyles. The lifestyle newsgroups covered sexual orientation (gay and lesbian), religion and gender issues. Computer-literate people also used the Internet to log onto computers at distant universities for research and to send electronic mail.

The Internet was entirely text-based prior to 1990. There were no graphics, pictures or color. All tasks were done without the point-and-click assistance of browsers such as Netscape and Internet Explorer. Rather, people had to learn, for example, UNIX commands. UNIX is a computer operating system developed at Bell Labs starting in 1969. Commands in UNIX mail programs included m for get mail, j for go to the next mail message, d for delete mail and u for undelete mail. The Internet was not for the timid or for computer neophytes.

The advent of the World Wide Web in 1989 and browsers in 1993 completely changed the Internet. The World Wide Web is a graphics-based vehicle that links users to sources of information. It is based on a method whereby users “click” on graphics or text to be transferred to a site where information can be accessed. In 1993, the Mosaic browser was developed at the University of Illinois as a point-and-click way to access the World Wide Web. This opened up the Internet to users without computer skills. It was no longer necessary to learn arcane commands to open mail, to navigate from site to site for research or to join chat or newsgroups.

In 1995, the National Science Foundation turned the management of the Internet backbone over to commercial organizations. Commercial networks such as Sprint, UUNET (now part of WorldCom) and Cable & Wireless carry a large portion of the backbone Internet traffic. Backbones are analogous to highways that carry high-speed traffic.

Bulletin Board Systems (BBSs)

Bulletin boards were used independently from the Internet. They allowed people with modems connected to their computers to dial into a host PC to read and post information.

Throughout the 1980s, users relied on modems, personal computers, communications software and telephone lines to dial into information on other computers. Many bulletin boards were used for “chats” and to exchange ideas about specific hobbies. For example, callers would dial in and type ideas or experiences they had with new software or computer equipment. The World Wide Web has largely replaced bulletin boards.

Who Runs the Internet?

The Internet is run informally by a number of organizations. Following is an overview of the key ones:

  • The Internet Society (ISOC) is a nonprofit group that promulgates policies and promotes the global connectivity of the Internet. The group is the closest thing to a governing body for the Internet. It was formed in 1992 and is open to anyone who wishes to join.

  • The Internet Architecture Board (IAB) is a technical advisory group of the Internet Society. It appoints chairs of the Internet Engineering Task Force (IETF). It provides architectural oversight for the protocols and procedures used by the Internet.

  • Internet Corporation for Assigned Names and Numbers (ICANN) is charged with overseeing Internet address allocation and setting rules for domain-name registrations. It oversees the creation of new top-level names. Examples of top-level domain names are .com and .net. ICANN also influences the setting of technical standards. It is a nonprofit organization created in 1998 by the U.S. government to take over from the government-funded Internet Assigned Numbers Authority.

  • Internet Engineering Task Force (IETF) is a standards-setting body. The IETF works under the aegis of the Internet Society. It focuses on TCP/IP protocol standards issues. TCP/IP is the protocol used on the Internet.

  • VeriSign (formerly called Network Solutions, Inc.) inherited the task, given to Network Solutions by the National Science Foundation in January 1993, of registering Internet names, assigning addresses and managing the database of names. The registration service was formerly called the InterNIC, or Internet Network Information Center, a registered service mark of the United States Department of Commerce. VeriSign purchased Network Solutions in 2000. Although VeriSign manages the master lists of .com, .net and .org, other companies also register them for end users. (Internet names are discussed later in this chapter.)

  • IOPS.ORG (Internet Operators' Providers Services) was formed in May of 1997 to address Internet routing robustness—where to send packets based on conditions such as congestion. It was founded by nine of the largest Internet service providers, including AT&T, GTE (now Genuity) and WorldCom. Its purpose is to establish standard procedures for routing data between multiple operators' networks.

  • The World Wide Web Consortium, also known as W3C, is a group formed to develop common standards for the World Wide Web. It is run jointly by the MIT Laboratory for Computer Science; the National Institute for Research in Computer Science and Automation (INRIA) in France, which is responsible for Europe; and Keio University in Japan, which is responsible for Asia. Over 150 organizations are members.

Who Owns the Internet?

No one organization owns the Internet. Rather, the Internet is a worldwide arrangement of interconnected networks. Network service providers, including AT&T, Cable & Wireless, Sprint, Genuity, Verio, Qwest and WorldCom, carry Internet information such as email messages and research conducted on the Internet. These networks are worldwide in scope with backbone networks run by network providers in other countries.

Network providers own the high-speed lines that make up the Internet. Carriers with nationwide networks are called Tier 1 providers. Some Tier 1 providers lease fiber lines from carriers such as AT&T and connect their own switches and routers to the leased lines. The definition of Tier 1 varies by location. It generally means that the carrier has a point of presence in all of the major cities of an area. In the United States, Tier 1 providers have POPs in the 25 largest cities. They transfer data between each other at locations called “peering” sites. At the peering sites, network devices called routers transfer messages between the backbones, high-capacity telephone lines owned by dozens of network service providers.

Peering—A Way to Exchange Data Between Networks

Data carried by different Internet networks needs to be exchanged so that sites and users on different networks can send data to each other. In 1995, the National Science Foundation funded four peering points, or network access points. They are located in New Jersey; Washington, D.C.; Chicago and San Francisco. These sites are now run by commercial organizations. WorldCom runs MAE® East in Virginia, MAE West in San Jose, California and MAE Central in Dallas, Texas. MAE originally was defined as MERIT-access exchange. (The original exchanges were run by MERIT Access Exchange, which was later purchased by MFS, now a part of WorldCom.) The term is now generally defined as metropolitan area exchange. Internet service providers lease ports on WorldCom ATM switches at MAE sites and connect the ports to their routers. The asynchronous transfer mode (ATM) connections are available at port speeds ranging from 45 megabits per second (Mbps) to 622 Mbps with guaranteed or best-effort quality of service (QoS). (See Chapter 6 for ATM service.)

WorldCom has collocation space available for Internet service providers (ISPs) to rent if they wish. WorldCom posts the names of ISPs at the MAE sites, and the ISPs make peering agreements to exchange Internet traffic with each other. WorldCom has registered the term MAE. European public exchanges are located in London (London Internet Exchange, or LINX), Amsterdam (AMS-IX) and Frankfurt (MAE-FFT).

In response to concerns about traffic at these peering centers “bogging down” the Internet, network service providers such as Genuity, Sprint and PSINet arranged private peering exchanges. “Meeting places” to exchange data have been set up to avoid possible congestion at the major exchange centers. Pacific Bell has a network access point in San Francisco and Sprint has one called SprintLink in New Jersey. This direct exchange method is seen as a more efficient way to exchange data. Moreover, carriers agree on levels of service, amount of data to be transferred and delay parameters. They feel they can monitor reliability more closely at private peering exchanges.

Content Delivery Networks (CDNs) and Caching—Solving the Problem of Bogged-Down Web Sites

Content delivery networks improve Internet performance by placing Web pages in servers near users, at the edge of the Internet. They replicate data in thousands of servers around the world so that Web pages load faster on people's computers. This is referred to as caching. It is analogous to publishing documents and making them available to many “readers” at the same time.

The characteristics of the protocols used on the Internet make it important to disperse content closer to end users. If all the requests at a busy site such as Yahoo! went to one server, the result would be server “meltdown.” TV networks, which are for the most part one-way, deliver programs to everyone at the same time. In contrast, the Internet is two-way, and servers have the double task of sending and receiving: a server sends an acknowledgment for every request it receives. Moreover, data is not broadcast; it is sent individually to each user.

Distributing content results in fewer servers at content providers' sites and less bandwidth needed at central server sites. Because so much of the content originates in the United States, placing data closer to end users is especially important in the rest of the world. Having content closer to client PCs saves bandwidth costs for local access providers, content suppliers and hosting companies. Content delivery networks are based on Layer 4 to Layer 7 protocols that can identify the origin of requests based on the IP addresses assigned to access providers, including cable modem, dialup and DSL providers.

Content delivery networks (CDNs) are based on two models. One is a service bureau model typified by offerings from Akamai and Digital Island (part of Cable & Wireless). Service bureaus sell to content providers (e.g., CNN, C-SPAN and MSNBC.COM), organizations with busy sites (e.g., Symantec, Lands' End and Barnes & Noble) and portals such as Yahoo! and Excite. Customers pay Akamai and Digital Island according to the amount of traffic handled. CDNs place their servers, usually at no charge, in network providers' sites. These sites include cable companies' regional data centers and backbone Internet providers' points of presence (POPs). Akamai operates a worldwide network of 4000 servers located in providers' data centers. Content delivery networks build in enough intelligence to determine, based on the IP address assigned to each user by his ISP, the most effective server from which to deliver the traffic.
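
As a rough sketch of that last point, the Python fragment below picks a cache server by matching the requester's IP address against a table of access providers' address blocks. The blocks and server names are invented for illustration; real content delivery networks use far richer measurements than a static table.

    import ipaddress

    # Hypothetical table mapping access providers' address blocks to the
    # cache server nearest the subscribers in each block.
    PREFIX_TO_SERVER = {
        ipaddress.ip_network("24.0.0.0/8"):  "cache-us-east.example.net",
        ipaddress.ip_network("62.0.0.0/8"):  "cache-europe.example.net",
        ipaddress.ip_network("210.0.0.0/8"): "cache-asia.example.net",
    }

    def pick_cache_server(client_ip: str, default: str = "origin.example.net") -> str:
        """Return the cache server whose address block contains the client."""
        addr = ipaddress.ip_address(client_ip)
        for network, server in PREFIX_TO_SERVER.items():
            if addr in network:
                return server
        return default  # no nearby cache; fall back to the origin server

    print(pick_cache_server("62.30.41.5"))  # -> cache-europe.example.net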

The other model is the sale of caching servers and switches to network service providers. Inktomi, Nortel (through its purchase of Alteon) and Cisco (through its purchase of ArrowPoint) sell caching hardware and software. Some suppliers call their equipment Web switches. Web switches route traffic based on the content requested and are able to balance traffic among multiple servers so that no one server is bogged down while others sit idle. Traffic management on these platforms is based on Layer 4 to Layer 7 equipment that analyzes headers and uniform resource locator (URL) requests. A header is the preliminary information in a packet that contains sender information and routing instructions. Unlike content delivery networks, these companies sell directly to network service providers such as NTT, AT&T and BellSouth, and to hosting companies. For example, Inktomi supplies a server with special caching software that is installed in data centers. (A data center is synonymous with a point of presence (POP), where carriers keep their switches or routers.) The server monitors traffic, and if it sees a number of requests for a particular Web site, it stores that content locally.
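
The behavior just described can be sketched as a simple request-counting cache: once a page has been requested more than a threshold number of times, the server keeps a local copy and answers later requests from it. This is a toy Python model under assumed names, not Inktomi's actual software.

    from collections import Counter

    class CountingCache:
        """Keep a local copy of any page requested more than `threshold` times."""

        def __init__(self, fetch_from_origin, threshold: int = 3):
            self.fetch = fetch_from_origin  # function that retrieves a page remotely
            self.threshold = threshold
            self.hits = Counter()           # requests seen per URL
            self.store = {}                 # locally cached copies

        def get(self, url: str) -> bytes:
            if url in self.store:
                return self.store[url]      # served from the local copy
            self.hits[url] += 1
            page = self.fetch(url)
            if self.hits[url] > self.threshold:
                self.store[url] = page      # popular page: cache it locally
            return page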

Internet Services

Prior to the mid-1990s and the wide availability of the World Wide Web and browsers, using the Internet and sending email was done without menu-driven software. People who surfed the Internet did so via services such as FTP (File Transfer Protocol) and Telnet. They sent and received electronic mail through a service called Simple Mail Transfer Protocol (SMTP). All of these services relied on users knowing arcane typed commands.

Researchers used File Transfer Protocol (FTP) to log onto computers at other sites, such as other universities, to retrieve files that were in text form. Graphics, video and voice files were not transmitted over the Internet. Moreover, finding information was a complex task. Researchers were able to search thousands of sites worldwide, but commands had to be typed into the computer in an exact format; rules for dots, spaces and capitalization were strict. For example, “dir” listed the contents of a directory, while “get filename” retrieved a copy of the file. To simplify the search process, programs such as Archie were created. Archie was meant to simplify FTP use by enabling searches by topic. Gopher, a precursor to Web browsers introduced in 1991 by the University of Minnesota, was more menu-driven than Archie but was bypassed after Mosaic, an early World Wide Web browser, was released in 1993.
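
Python's standard ftplib module still speaks this protocol, so the equivalents of “dir” and “get filename” can be sketched as follows. The host and file names are placeholders.

    from ftplib import FTP

    ftp = FTP("ftp.example.edu")   # placeholder host; connects on the standard FTP port
    ftp.login()                    # anonymous login, as early researchers often used

    ftp.retrlines("LIST")          # the equivalent of typing "dir"

    # The equivalent of "get paper.txt": retrieve a copy of the file.
    with open("paper.txt", "wb") as f:
        ftp.retrbinary("RETR paper.txt", f.write)

    ftp.quit()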

Another service available for accessing information before browsers arrived was Telnet. While File Transfer Protocol is a way to transfer a file, Telnet is an Internet service for creating an interactive session with a computer on a different network. Telnet enables users to log onto computers located on the Internet as if they were local terminals. People invoked it with arcane typed commands such as “telnet hostname” and had to know the name of the remote computer they wished to log onto. Telnet and FTP are still used; however, access to them is via menu-driven browsers (e.g., Netscape Navigator and Internet Explorer).
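
Under the covers, a Telnet session is an interactive byte stream to a remote machine. The Python sketch below opens such a stream with the standard socket module against a placeholder host; it conveys the idea of the session, not a full Telnet client, which also negotiates terminal options.

    import socket

    HOST, PORT = "host.example.edu", 23   # placeholder remote computer; 23 is Telnet's port

    with socket.create_connection((HOST, PORT), timeout=10) as conn:
        print(conn.recv(1024).decode(errors="replace"))  # read the remote login prompt
        conn.sendall(b"guest\r\n")                       # type a user name, as at a terminal
        print(conn.recv(1024).decode(errors="replace"))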
