
How the Net Was Won

The ARPANET came before it. And the World Wide Web and browser technology would later make it accessible for the masses. But in between, a small Ann Arbor-based group labored on the NSFNET in relative obscurity to build, and ultimately to save, the Internet.

by Randy Milgrom


Douglas Van Houweling had collapsed into a chair, overjoyed but daunted by the task ahead.

Van Houweling had received unofficial word a few weeks earlier that the National Science Foundation (NSF) had accepted his group's proposal to upgrade an overloaded NSFNET backbone connecting the nation's handful of supercomputing sites and nascent regional computer networks, but many details still needed to be negotiated with the NSF before a public announcement could be made. With those arrangements finally completed, that announcement, with some fanfare, would come the following day: November 24, 1987.

The core of the team that Van Houweling and Eric Aupperle had knit together, and that for six long weeks had labored 20 hours a day, seven days a week, obsessing over every detail of its response to the NSF's Request for Proposal, had gathered in Aupperle's Ann Arbor home and stayed late into the night. It would have only these next few hours to exchange congratulations, to celebrate, and to start thinking about what would come next before the real work began.

The Aupperle living room surged all evening with anticipation and speculation. As the night wound down, someone sitting on the floor next to the sofa said, "I think this is going to change the world."

And yet they had no idea.


A press conference on Nov. 24, 1987, at Wayne State University to announce that Merit had won the NSF grant to upgrade an overloaded NSFNET

Michigan at the Forefront

Van Houweling was hired in late 1984 as the University of Michigan's first vice provost for information technology. Michigan Engineering Dean James Duderstadt and Associate Dean Daniel Atkins had fought to create the position, and to bring in Van Houweling, believing it critical to the University's efforts to solidify and extend its already substantial standing in computing.

The transformative power of computing began to gain credence in the 1960s, but the University of Michigan was at the forefront of the movement a full decade earlier. In 1953 its Michigan Digital Automatic Computer (MIDAC), designed and built to help solve complex military problems, was only the sixth university-based high-speed electronic digital computer in the country, and the first in the Midwest. And in 1956 the legendary Arthur Burks, co-creator of the Electronic Numerical Integrator and Computer (ENIAC; considered the world's first computer), had established at Michigan one of the nation's first computer science programs.

Michigan also became involved in the U.S. Department of Defense CONCOMP project, which focused on the CONversational use of COMPuters (hence the name), and by the mid-1960s the University of Michigan had established the Michigan Terminal System (MTS), one of the world's first time-sharing computer systems and a pioneer in early forms of email, file sharing and conferencing. In 1966 the University of Michigan (along with Michigan State University and Wayne State University) also created the Michigan Educational Research Information Triad (MERIT; now referred to as Merit), which was funded by the NSF and the State of Michigan to connect all three of those universities' mainframe computers.

And it was Merit, with Van Houweling as its chairman, that would be critical in securing this latest NSF grant to rescue the sputtering NSFNET.


The Foundational Technologies of the Internet

Packet Switching

Early computers were costly, huge and scarce. As the need to serve multiple simultaneous computer users grew, traditional telephone-based circuit-switching operations, which required continuously open and dedicated lines for relatively infrequent data transmissions, proved expensive and impractical. Essential to the birth of the Internet, therefore, was the realization that messages could be sliced into packets and sent separately, yet still stitched sensibly back together on arrival. Called packet switching, this system enabled single links to communicate with more than one machine at a time, and several users to access remote computers while sharing the same line.
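
To make the idea concrete, here is a minimal sketch of the packet-switching principle in Python. It is an illustration only, not any real network protocol: the packet size and the simple (sequence number, payload) format are arbitrary choices for the example. A message is sliced into small numbered packets that may travel, and arrive, out of order, and is then stitched back together by sequence number.

import random

PACKET_SIZE = 8  # bytes of payload per packet (an arbitrary choice for this example)

def packetize(message):
    """Slice a message into (sequence_number, payload) packets."""
    return [(seq, message[i:i + PACKET_SIZE])
            for seq, i in enumerate(range(0, len(message), PACKET_SIZE))]

def reassemble(packets):
    """Stitch packets back together by sequence number, whatever their arrival order."""
    return b"".join(payload for _, payload in sorted(packets))

message = b"Messages are sliced into packets and sent separately."
packets = packetize(message)
random.shuffle(packets)  # packets sharing a line with other traffic may arrive out of order
assert reassemble(packets) == message

Real packet-switched networks add headers for addressing, error checking and retransmission, but the slicing-and-reassembly idea is the same, and it is what lets many conversations share one line.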

TCP/IP Protocols

The ARPANET was the first major packet-switching computer network, but as packet-switching networks multiplied, connecting computers from different networks required new protocols (or rules). The new TCP/IP protocols, once described as a "handshake of recognition" among computers across virtual space, enabled virtually any computer network in the world to communicate directly with any other, no matter the hardware, software, or underlying computer language or systems used.
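
As a small, hedged illustration of what that universality means in practice, the Python sketch below opens an ordinary TCP connection to a web server and exchanges bytes with it. The host name, port and HTTP request are placeholders chosen for the example, and the snippet assumes outbound network access.

import socket

# Open a TCP connection; TCP/IP handles the handshake and reliable, ordered
# delivery regardless of the hardware or operating system on either end.
with socket.create_connection(("example.com", 80), timeout=5) as conn:
    conn.sendall(b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")  # a plain HTTP request
    response = b""
    while chunk := conn.recv(4096):
        response += chunk
print(response.split(b"\r\n", 1)[0].decode())  # the status line, e.g. "HTTP/1.0 200 OK"

The same few lines work whether the machine on the other end is a mainframe, a workstation or a phone, which is the "no matter the hardware or software" property described above.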

Border Gateway Protocol (BGP)

As regional networks proliferated, new protocols were needed to connect them. At a 1989 Internet Engineering Task Force meeting, Cisco's Kirk Lougheed and IBM's Yakov Rekhter (part of the NSFNET team) sketched the first draft of the BGP on cafeteria napkins. Still known by some as the "three napkins protocol," the BGP was extensively tested before it was first deployed on the NSFNET, and it continues to send the information that enables data to be routed along the appropriate paths, even though there is still no map of the Internet and no central Internet traffic authority.

Original set of notes, written on napkins, for the Border Gateway Protocol
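
For readers who want a feel for the path-vector idea behind the three napkins protocol, here is a deliberately simplified sketch in Python. It is a toy model, not real BGP, and the autonomous system (AS) numbers and address prefixes are invented for the example: each neighbor advertises a destination prefix along with the list of ASes the route passes through, and the router keeps the shortest path that does not loop back through its own AS.

MY_AS = 64500  # this router's autonomous system number (made up for the example)

# Route advertisements heard from neighbors: destination prefix -> candidate AS paths.
advertisements = {
    "198.51.100.0/24": [[64501, 64510], [64502, 64520, 64510], [64503, 64500, 64510]],
    "203.0.113.0/24": [[64502], [64501, 64530, 64502]],
}

def best_path(paths):
    """Prefer the shortest AS path that does not loop back through our own AS."""
    loop_free = [path for path in paths if MY_AS not in path]
    return min(loop_free, key=len) if loop_free else None

routing_table = {prefix: best_path(paths) for prefix, paths in advertisements.items()}
for prefix, path in routing_table.items():
    print(prefix, "via AS path", path)

Carrying the full AS path in each advertisement is what lets thousands of independently run networks exchange routes without any central map or traffic authority; real BGP layers many more attributes and tie-breakers on top of this basic rule.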

The Slow Spread of Networked Computing

As computers grew in importance among academics, computer scientists and private and government researchers, efforts intensified to link them so that data could be shared among various locations. The Department of Defense Advanced Research Projects Agency (DARPA), which was leading this inquiry, had decided by the mid-1960s that large-scale, simultaneous sharing of single communication links among multiple machines over long distances could be accomplished more efficiently using the new packet-switching method of connecting computers rather than the established circuit-switching method. In 1969 DARPA established the first major packet-switching computer network, called the ARPANET, a network to connect researchers at various locations, and for several years it would test these communications links among a few private contractors and select U.S. universities.

Michigan was a contemporary of the ARPANET through CONCOMP and Merit. By the mid-1970s Merit's network had added Western Michigan University (and soon would include every one of Michigan's state universities). And as one of the earliest regional networks, Merit was among the first to support the ARPANET's agreements on exchange definitions, the so-called Transmission Control Protocol/Internet Protocol (TCP/IP), along with its own protocols.

The ARPANET slowly proved an extremely useful networking tool for the still rather limited and relatively small science research, engineering and academic communities. By 1981 the ARPANET's tentative successes would inspire the Computer Science Network (CSNET), an NSF-supported network created to connect U.S. academic computer science and research institutions unable to connect to the ARPANET due to funding or other limitations. But even as the ARPANET/CSNET network was growing, concerns were building among scientists and academics that the United States had been falling behind the rest of the world, and in particular Japan, in the area of supercomputing.

To address the perceived supercomputing gap, the NSF purchased access to the several then-existing research laboratory and university-based supercomputing centers, and it initiated a competition to establish four more U.S. supercomputer centers. Michigan was among those that had prepared a bid, which it had already submitted by the time Van Houweling arrived on campus. But Van Houweling would quickly learn that Michigan's proposal, though among the top-rated technically, was not going to succeed, if for no other reason than that it contemplated use of a supercomputer built in Japan. Cornell, Illinois, Princeton and UC-San Diego were awarded the first four sites (and a Carnegie Mellon-University of Pittsburgh site was added later).

With a burgeoning group of supercomputing centers now in place, and a growing number of NSF-supported regional and local academic networks now operating across the country, the NSF needed to develop a better, faster network to connect them. Its NSFNET, operational in 1986, was at first modestly effective. But an immediate surge in traffic quickly swamped its existing infrastructure and frustrated its users.

By 1987 the NSF was soliciting bids for an NSFNET upgrade. The Merit team and Van Houweling, who had been discussing precisely this kind of network with the NSF for several years, were ready to pounce.


The NSFNET Takes Off

Though the NSF and the State of Michigan funded Merit, it was and still is hosted by the University of Michigan, and all of its employees are University employees. Michigan Engineering professor Bertram Herzog was named Merit's first director in 1966, and Eric Aupperle, who was now Merit's president and principal investigator for the NSFNET bid proposal, had been Herzog's first hire as senior engineer.

The NSF had encouraged NSFNET upgrade respondents to involve members of the private sector, but nobody needed to tell that to Van Houweling. As Merit's chairman, Van Houweling already had been cajoling his well-established contacts at IBM, who in turn convinced MCI, an upstart telecommunications company looking to make a name for itself in the wake of the breakup of the AT&T monopoly, to join the fold. IBM committed to providing hardware and software, as well as network management, while MCI would provide transmission circuits for the NSFNET backbone at reduced rates. With these commitments in place, Gov. James Blanchard agreed to contribute $1 million per year over five years from the state's Michigan Strategic Fund.

And the bid was won.

"All this baloney about, We knew what we were doing. When we committed to this, we didnt have anything. We had ideas, but that was about it.

Now the team would need to build an extensive and upgraded infrastructure, using newer and more sophisticated networking hardware of a type never used before, and it would have to do it fast.

The first-generation NSFNET had employed a router (the device that forwards data among networks) produced by University of Delaware faculty. That original router was nicknamed the "Fuzzball," and it ran at 56 kilobits per second. But the next generation was supposed to run at 1.5 megabits per second, or nearly 30 times faster.

"A whole different category," says Van Houweling, "and nothing like that existed. Today, you can buy a router for your house for about $50 to $100. But there were no routers to speak of then. You could buy one, for about a half million. But IBM committed to build it and write the software, for free!"

MCI's Richard Liebhaber later recalled, during a 2007 NSFNET 20th anniversary celebration, how quickly things were moving, and how much more there was to learn. "All this baloney about, 'We knew what we were doing,'" said Liebhaber. "When we committed to this, we didn't have anything. We had ideas, but that was about it."

But somehow, it all worked.

Merit committed to making the new backbone operational by August 1988, and it accomplished that feat by July of that year, just eight months after the award. The newer, faster NSFNET connected 13 regional networks and supercomputer centers, representing more than 170 constituent campus networks. This upgraded network of networks experienced an immediate surge in demand of 10 percent in the first month, a growth rate that would hold firm year after year.

"At first we thought it was just pent-up demand, and it would level off," says Van Houweling. "But no!"

Merit had exceeded its own early expectations, though Aupperle modestly attributed that to the incredible interest in networks by the broader academic communities rather than to the new network's speed and reliability. But the results were indisputable. With an operations center running nonstop, Merit's staff expanded from 30 to 65 and overflowed into a series of trailers behind the North Campus computing center.

Craig Labovitz was a newly hired Merit engineer who had abandoned his PhD studies in artificial intelligence at Michigan because he was so fascinated by his at-first-temporary NSFNET work assignment. "Most people today don't know that the heart of the Internet was once on North Campus," says Labovitz. "It was where the operations and on-call center was, and where all the planning and the engineering took place." Labovitz, who put to productive use the expertise he gleaned during his NSFNET tenure, now operates DeepField, an Ann Arbor-based cloud and network infrastructure management company.

The NSFNET soon proved to be the fastest and most reliable network yet. The new NSFNET technology quickly replaced the Fuzzball. The ARPANET was phased out in 1990. And by 1991 the CSNET wasn't needed anymore, either, because all the computer scientists were connecting to the NSFNET. The NSFNET was the first large-scale, packet-switched backbone network infrastructure in the United States, and almost all traffic from abroad traversed it as well. Its most fundamental achievement, construction of a high-speed network service that evolved to T1 speeds (1.5 megabits per second) and later to T3 speeds (45 megabits per second), would essentially cover the world.

"Throughout this whole period, it was all about the need to support university research that drove this project," says Van Houweling. "Researchers needed to have access to these supercomputing facilities, and the way to do it was to provide them with this network. Nobody had the notion that we were building the communications infrastructure of the future."

But that's the way it turned out.


Born, Not Invented

As co-developers of the TCP/IP open protocols, Vinton Cerf and Robert Kahn are widely considered among the several "Fathers of the Internet." Many others are also credited with helping to give birth to the Internet.

The Protocol Wars

To the extent that any one or more individuals are said to have invented the Internet, credit generally goes to American engineers Vinton Cerf and Robert Kahn. Along with their team at DARPA in the mid-1970s, Cerf and Kahn developed (based on concepts created by Louis Pouzin for the French CYCLADES project) and later implemented the TCP/IP protocols for the ARPANET. The TCP/IP protocols were also referred to as open protocols, and later, simply, as the Internet protocols. (Cerf also may have been the first to refer to a connected computer network as an "internet," though the Internet would not fully come to the attention of the general public for another two decades.)

The significance of the NSFNET's success was not just that it scaled readily and well, but that it did so using the open protocols during a time of stress and transition.

"It created what we would now call a viral effect, where everybody wanted it. It met the need and swamped the competition."

The open protocols had proved popular among computer scientists accustomed to using the ARPANET and the CSNET, but they were still relatively new and untested. As the NSF considered the NSFNET's standards, there remained deep skepticism, and perhaps no small amount of self-interest, among commercial providers as to whether the open protocols could effectively scale. Every interested corporate enterprise was pressing for its own protocols.

"There was a race underway between the commercial interests trying to propagate their proprietary protocols, and the open protocols from the DARPA work," says Daniel Atkins, now professor emeritus of electrical engineering and computer science at Michigan Engineering and a professor emeritus of information at the School of Information. "AT&T had intended to be the provider of the Internet."

"Once the NSF made the [NSFNET upgrade] award, AT&T's lobbyists stormed the NSF offices and tried to persuade them that this was a terrible idea," says Van Houweling.

The NSFNET's immediate challenge, therefore, was to avoid a flameout, explains Van Houweling. Getting overrun would have given this open model a black eye, enabling the telecommunications and computing companies to rush in and say, "See, this doesn't work. We need to go back to the old system where each of us manages our own network."

But as Aupperle noted, those networks weren't talking to each other. Proprietary protocols installed in the products of Digital Equipment Corporation, IBM and other computer manufacturers at that time were hierarchical, closed systems. Their models were analogous to the telephone model, with very little intelligence at the devices and all decisions and intelligence residing at the center, whereas the Internet protocols have an open, distributed nature. The power is with the end user, not the provider, with the intelligence at the edges, within each machine.

AT&Ts model was top-down management and control. They wouldnt have done what the NSFNET did, says Van Houweling. Unlike their proprietary counterparts, open protocols werent owned by anyonewhich meant that no one was charging fees or royalties. And that anyone could use them.

As it happened, adoption of the open protocols of the NSFNET went up exponentially, and "it created what we would now call a viral effect, where everybody wanted it, including [eventually] the commercial world," says Atkins. "It met the need and swamped the competition."

But the battle over proprietary standards would not be won easily or quickly. Before the open protocols could definitively prove their feasibility, a prominent competing effort in Europe to build and standardize a different set of network protocols, the Open Systems Interconnection (OSI) suite, was continuing to garner support not just from the telecommunications industry but also from the U.S. government. This debate wouldn't fully and finally end for at least a decade, not until the end of the 1990s.

"Until the upgraded NSFNET started to gain traction, everything had been proprietary. Everything had been in stovepipes," says Van Houweling. "There had never been a network that had the ability to not only scale but to also connect pretty much everything."

"It was the first time in the history of computing that all computers spoke the same language," recalled IBM's Allan Weis at the NSFNET 20th anniversary. "If [a manufacturer] wanted to sell to universities or to a research institution that talked to a university, [it] had to have TCP/IP on the computer."

"Proprietary protocols had a control point," Weis added. "They were controlled by somebody; owned by somebody. TCP/IP was beautiful in that you could have thousands of autonomous networks that no one owned, no one controlled, just interconnecting and exchanging traffic."

And it was working.


Douglas Van Houweling, Former Vice Provost, Information Technology, University of Michigan

From the original 13-node NSFNET infrastructure to the sprawling global web of today

The Bumpy Road to Commerce

But continued growth would bring change, and change would bring controversy.

"When the NSFNET was turned on, there was an explosion of traffic, and it never turned off," says Van Houweling. Merit had a wealth of experience, and along with MCI and IBM it had for more than two years exceeded all expectations. But Merit was a nonprofit organization created as a state-based enterprise. To stay ahead of the traffic, the NSFNET would have to upgrade again, from T1 to T3. No one had ever built a T3 network before.

"To do this, you had to have an organization that was technically very strong, and was run with the vigor of industry," reasoned Weis. This would require more funding, which was not likely to come from the NSF.

In September 1990, the NSFNET team announced the creation of a new, independent nonprofit corporation, Advanced Network & Services, Inc. (ANS), with Van Houweling as its chairman. With $3 million investments from MCI, IBM and Northern Telecom, ANS subcontracted the network operation from Merit, and the new T3 backbone service was online by late 1991. The T3 service represented a 30-fold increase in bandwidth, and it took twice as long as the T1 network to complete.

At this point the NSFNET was still serving only the scientific community. Once the T3 network was installed, however, and some early bumps were smoothed over, commercial entities were seeking access as well. ANS created a for-profit subsidiary to enable commercial traffic, but charged commercial users rates in excess of its costs so that the surplus could be used for infrastructure and other network improvements.

"The Internet could only have been invented at a university because it's the only community that understands that great things can happen when no one's in charge."

But several controversies soon arose. Regional networks desired commercial entities as customers for the same reasons that ANS did, but felt constrained by the NSF's Acceptable Use Policy, which prohibited purely commercial traffic (i.e., traffic not directly supporting research and education) from being conveyed over the NSFNET backbone.

Even though non-academic organizations willing to pay commercial prices were largely being denied NSFNET access, the research and education community nonetheless raised concerns about how commercialization would affect the price and quality of its own connections. And on yet another front, commercial entities in the fledgling Internet Service Provider market complained that the NSF was unfairly competing with them through its ongoing financial support of the NSFNET.

Inquiries into these matters, including Congressional hearings and an internal report by the inspector general of the NSF, ultimately resulted in federal legislation in 1992 that somewhat expanded the extent to which commercial traffic was allowed on the NSFNET.

But the NSF always understood that the network would have to be supported by commerce if it were going to last. It never intended to run the NSFNET indefinitely. Thus a process soon commenced whereby regional networks became, or were purchased by, commercial providers. In 1994 the core of ANS was sold to America Online (now AOL), and in 1995 the NSF decided to decommission the NSFNET backbone.

And the NSFNET was history.

"When we finally turned it over [to the commercial providers], the Internet hiccupped for about a year," according to Weis, because the corporate entities weren't as knowledgeable or as prepared as they needed to be. IBM, which had a head start of several years on the competition in building capable Internet routers, didn't pursue that business because others at IBM (outside of its research division, of which Weis was a part) still thought proprietary networks would ultimately win the protocol wars, even as the NSFNET was essentially becoming the commercial Internet. Cisco stepped into the breach, and following this initially rocky period it developed effective Internet router solutions, and it has dominated the field ever since.

"Whenever there are periods of transition, by definition they involve change and disruption," says Labovitz. "So initially it was definitely bumpy. Lots of prominent people were predicting the collapse of the Internet."

"In hindsight, we ended up in a very successful place."


But What If It Had Failed?

Van Houweling is fond of saying the Internet could only have been invented at a university because academics comprise the only community that understands that great things can happen when no one's in charge.

"The communications companies that resisted did so on the basis that there was no control," Van Houweling says. From their own historical perspective, this looked like pure chaos, and unmanageable.

Labovitz agrees. "It was an era of great collaboration because it was a non-commercial effort. You were pulling universities together, so there were greater levels of trust than there might have been among commercial parties."

"Instead of being open and democratizing, it might have been a balkanized world, with a much more closed environment.

So what would the Internet look like today if it had gone AT&T's way? One possible scenario is that the various commercial providers might well have created a network in silos, with tiered payments depending on the type of content, the content creator and the intended consumer, and without unlimited information sharing.

"It's hard to predict precisely what would have happened," Atkins admits, "but instead of being open and democratizing, it might have been a balkanized world, with a much more closed environment, segmented among telecommunications companies."

"Now, you just have to register, and you can get an IP address and you can put up a server and be off and going," Atkins says. "If AT&T were running it, it would have to set it up for you. It would have control over what you could send, what rates you could charge."

"In the early [1980s pre-NSFNET] CompuServe/AOL days, you could only get the information they provided in their walled gardens," says Van Houweling. "The amount of information you could access depended on the agreements CompuServe or AOL had with their various information providers."

Van Houweling suggests that most likely the OSI protocols would have prevailed. But it wasn't going to be easy to knit together all the telecommunications companies. OSI had been developing as a series of political compromises, with variations provided for each provider. And commercial carriers had remained fundamentally opposed to the concept that anyone, anywhere could just hook up to the network. It didn't fit their traditional revenue models.

"I do think it is inexorably true that eventually we would have had a network that tied everything together. But I don't think it would have been similar [to what we have now]," says Van Houweling.

How might we have progressed from the old walled gardens to what we have today, in which no matter which computer you sit down at, you have access to the world?

"I frankly don't know if we would have gotten there," Van Houweling says. "It might have been the end of the Internet."


The Commercial Internet and Net Neutrality

1993 was a watershed year. The first Mosaic browser was released. CERN made its World Wide Web technology available to anyone. And a network for research scientists, computer scientists and a handful of other tech geeks exploded into the mainstream.

An enormous amount of new information suddenly was available, and only on the Internet.

"And then something totally unexpected happened. People from all over the world just kept sticking things onto it, free of charge!" says Van Houweling, still in apparent awe more than two decades later.

There has, however, been pressure from time to time over the last decade and a half to place restrictions on that free flow of Internet access and information. For the most part the major communications carriers have pushed for change, while the resisters have rallied around a concept that has come to be called Net Neutrality, which essentially means that all information should be treated equally.

The NSFNET's rapid success nearly three decades ago was crucial to creating that open mindset at its earliest stages, and Atkins argues that without that history, we'd likely be way down the road of non-Net Neutrality by now.

Most recently, the Federal Communications Commission (FCC) issued tentative regulations that would have allowed carriers to charge a premium for fast lanes, and to treat various kinds of information differently. But when it issued its final Open Internet Order in February 2015, following a public comment period marked by loud opposition, the FCC instead came out strongly against site and app blocking, speed throttling and paid fast lanes, which Atkins believes was very much an attempt to preserve the original brilliance and culture of the Internet.

Current FCC Chairman Tom Wheeler says strong rules are needed to protect against large broadband companies' temptations to act as gatekeepers. "ISPs have the power to decide what travels between consumers and the Internet," Wheeler says. "They have all of the tools necessary to block, degrade and favor some content over others."

"Facebook, Twitter, there are so many things that didn't exist not too long ago, and that just keep happening," Van Houweling says. "It's important to keep the underlying vehicle open enough so that we can all just continue to add onto it at the ends. And of course the established players are not enthusiastic about that, especially the carriers."

And it may ever be thus.
