
Who really invented the Internet?

This is in response to a recent column from Wall Street Journal columnist Gordon Crovitz entitled “Who Really Invented the Internet?” This column is so misinformed and misguided that it is difficult to even know where to begin to try to correct the public record.

However, I will give it a try.

I am hardly the only one taking exception to the Crovitz column. The LA Times just published a rebuttal from author Michael Hiltzik, whose book Crovitz cites in his column. In his LA Times article, Hiltzik reports, “My book bolsters, not contradicts, the argument that the Internet had its roots in the ARPANet, a government project.” So Crovitz's willful misinterpretation of the historical record is even more egregious than I first thought.

Al Gore and the Internet.
Let’s first take note of the echo of the partisan criticism leveled at Al Gore back during the raucous Bush II-Gore 2000 election, when Gore was ridiculed for supposedly claiming that he invented the Internet. Actually, Gore never claimed he “invented” the Internet. What candidate Gore actually said in an interview with CNN in 1999 (there is a transcript here) was, “During my service in the United States Congress, I took the initiative in creating the Internet.”
So, Al Gore never said he “invented” the Internet. He has taken a lot of heat for this over the years, and it is very unfair to the guy. He actually played a pivotal role in getting Congress to fund Internet-related research in the late 80s, but this was many years after the US government funded the original research that produced the ARPANET, where the basic technology -- packet-switching, the Internet Protocol (IP), and the full TCP/IP networking stack -- that shaped what was to become the Internet was originally developed. The ARPANET was the original proof of concept.

Back in the 2000 campaign Al Gore was polishing his resume by alluding to the fact that he sponsored key legislation that funded early adoption of the technology we associate today with the Internet. This was the High Performance Computing Act of 1991, “a coordinated Federal program to ensure continued United States leadership in high-performance computing,” also known as “the Gore Bill,” which allocated $600 million for that purpose. As a US Senator, Gore introduced the legislation in 1989 and it was enacted in 1991, when it was signed into law by Bush I. Jim Clark, founder of Silicon Graphics, Raj Reddy, and other computer industry notables testified on behalf of this legislation when it was first introduced. (See the article that was published at the time in ACM SIGGRAPH that includes Gore’s statement that was read into the Congressional Record when the legislation was introduced. In retrospect, Gore certainly showed commendable foresight.)

According to Wikipedia, Gore’s staff began drafting the bill after hearing testimony from Leonard Kleinrock, a professor of computer science at UCLA – and someone widely regarded as an Internet pioneer. Kleinrock was one of the prominent academic researchers who received funding from DARPA to develop the ARPANET, the predecessor of the Internet. Years later, Kleinrock chaired a group that submitted a report to Congress entitled “Toward a National Research Network,” delivered in 1988.

Vinton Cerf, another Internet pioneer, offered this assessment of Gore’s role in an interview published in Esquire Magazine in April 2008: “Al Gore had seen what happened with the National Interstate and Defense Highways Act of 1956, which his father introduced as a military bill. It was very powerful. Housing went up, suburban boom happened, everybody became mobile. Al was attuned to the power of networking much more than any of his elective colleagues. His initiatives led directly to the commercialization of the Internet. So he really does deserve credit.” [emphasis added]

But instead of receiving any credit during the 2000 election for his accomplishment, Gore’s remark was deliberately twisted beyond recognition by his critics and turned into something for which he was regularly mocked. That is quite a trick, to take an accomplishment of great merit and twist it into something that only merits scorn. When John Kerry’s record as a decorated war hero was attacked by partisans in the 2004 election, this tactic became known as Swift-boating.

But let’s get back to the Gordon Crovitz piece...
The Crovitz column begins,
A telling moment in the presidential race came recently when Barack Obama said: "If you've got a business, you didn't build that. Somebody else made that happen." He justified elevating bureaucrats over entrepreneurs by referring to bridges and roads, adding: "The Internet didn't get invented on its own. Government research created the Internet so that all companies could make money off the Internet."

President Obama probably could have worded that a little better: “The Internet didn’t get invented on its own. Government research created the Internet.” That part is undeniably true, but watch Crovitz try to deny it anyway.

What Obama could have then said was, “Then the federal government passed the Gore Bill in 1991. It provided critical funding that led to the commercial development of the Internet as we know it today.” Also true, but, of course, this would have exposed Obama to scorn from partisans like Crovitz for daring to suggest in public that Al Gore might have played any kind of meaningful role in advancing the cause of the Internet in its formative stages.

And then what President Obama should have said next was, “Fortunately, the federal government then had the wisdom to get out of the way of commercial interests after the Gore Bill passed and once the commercial potential of the Internet became more apparent, especially after Berners-Lee’s invention of the HTTP protocol and the World Wide Web.” That would be a fair reading of the historical record, but not much use as a sound bite on CNN.
But it was the next misguided paragraph in Crovitz’s column that really got me going. Let’s try to parse the whoppers in the next two incredible sentences.

It's an urban legend that the government launched the Internet.
Urban legend? Government research paid for Kleinrock’s lab at UCLA to build the ARPANET, which was designed to link researchers at four sites – UCLA, SRI (the Stanford Research Institute), UC Santa Barbara, and the University of Utah – all working on various Pentagon-financed projects. That’s what Kleinrock was doing in front of Congress in 1988, talking about what the government could do to fund the next steps in rolling out the technology that he played a pivotal role in developing.

The ARPANET first became operational in 1969, reached BBN in Cambridge, MA in 1970, and was extended thereafter to include dozens of government and university connections. In the early days, the ARPANET was mainly used for file transfer using FTP. By 1975, the ARPANET was declared operational, which meant DARPA no longer funded the basic research and the Defense Communications Agency took on the responsibility for operating and maintaining the network. By then, Bob Kahn was working at DARPA, and he recruited his old pal Vint Cerf, who was teaching at Stanford, to work on the packet-oriented TCP/IP protocol stack. The packet-switched ARPANET + TCP/IP + HTTP = the Internet. Berners-Lee was working at CERN, so the US government funded only 2 out of the 3. However, the Gore Bill supported development of Mosaic, the first GUI-based web browser, which added more fuel to help the Internet catch fire.

The myth is that the Pentagon created the Internet to keep its communications lines up even in a nuclear strike.

Well, that is a myth. Of course, I don’t see how any of that is relevant to anything in President Obama’s brief remarks. But let's throw that in to sow some doubt about what he did actually say. Guilt by association, but totally lame since there is no association. 

Crovitz appropriately debunks this myth later in the article, quoting Robert Taylor, the senior researcher at DARPA who was responsible for the conception of the original ARPANET project and had oversight for the research funding until he left the government in 1969. Crovitz also characterizes the government’s funding for the ARPANET as “modest,” a rhetorical flourish intended to further diminish the seminal role of the DARPA-sponsored research. It's true that the DARPA funding wasn't enormous. But that does not diminish the importance of the seminal effort that was funded.
The US military would be, of course, quite concerned about the fault tolerance of any of its mission-critical Command and Control systems. But the ARPANET project was originally intended to link researchers involved in various DARPA-funded long-term research projects. The idea was to tie researchers distributed around the country electronically to their handlers in DoD and to their fellow researchers. There was nothing remotely mission-critical about it. They wanted to pass around messages and files and were confronted with the problem of connecting a hodge-podge of geographically distributed, heterogeneous computer hardware.

The IP routing protocol that the research effort produced does use dynamic routing and is fault tolerant to the extent that IP avoids the use of any kind of centralized repository that stores the network configuration. Distributing the knowledge of the network topology allows the network to continue to function in the face of one or more single-point failures. This design was radical for its day, when the models of computer networking were all based on the centralized switches used by AT&T, which held a monopoly on long-distance telecommunications. The large private (and proprietary) networks linking centralized IBM mainframes to remote terminals were structured similarly.

The insight to make IP an adaptive routing protocol was primarily due to Len Kleinrock’s PhD research, which showed that a dynamic routing protocol that adapted to changes in the network topology outperformed static protocols for packet-switched networks. An added benefit of distributing the routing information across the network was resilience, especially since the networking hardware itself was not very reliable in the early days of the ARPANET, or of the Internet when it began to be built.
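To make that idea concrete, here is a toy sketch of distance-vector routing in Python. The node names and link costs are invented for illustration -- this captures the flavor of adaptive, distributed routing, not the actual ARPANET routing code:

```python
# Each node starts out knowing only the cost of its directly attached links,
# then repeatedly exchanges routing tables with its neighbors until the
# tables stop changing. No central repository of the topology exists.

links = {("A", "B"): 1, ("B", "C"): 1, ("A", "C"): 5}

def neighbors(node):
    for (u, v), cost in links.items():
        if u == node:
            yield v, cost
        elif v == node:
            yield u, cost

nodes = {"A", "B", "C"}

def converge():
    # tables[n][dest] = (cost, next_hop)
    tables = {n: {n: (0, n)} for n in nodes}
    changed = True
    while changed:
        changed = False
        for node in nodes:
            for nbr, link_cost in neighbors(node):
                for dest, (cost, _) in list(tables[nbr].items()):
                    new_cost = link_cost + cost
                    if new_cost < tables[node].get(dest, (float("inf"), None))[0]:
                        tables[node][dest] = (new_cost, nbr)
                        changed = True
    return tables

print(converge()["A"]["C"])  # (2, 'B'): A reaches C via B at cost 2

# Now "fail" the A-B link and re-converge: traffic falls back to the
# surviving direct link, with no operator intervention required.
del links[("A", "B")]
print(converge()["A"]["C"])  # (5, 'C')
```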
I took a class from Len Kleinrock, through his sideline training business, the Technology Transfer Institute, in the early 80s. He wrote the first textbook on queueing theory and its computer applications. In the class I took from Len, he lectured on the design principles and the performance implications of IP packet-switching technology. He explained the principles behind dynamic network routing. He also explained why IP routers shouldn't queue too many requests. At the time, I didn't even know there were Internet Protocol (IP) routers. At the Interface '80 conference on internetworking I attended two years earlier, all the talk was of the coming of X.25 and SDLC. Today, Dr. Kleinrock, of course, is widely recognized as one of the key innovators associated with developing the Internet. (His bio is here.)
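That queueing argument is easy to demonstrate with the textbook M/M/1 formulas. The service rate below is an invented number, purely for illustration, but the shape of the curve is the point: delay explodes as router utilization approaches 100%, which is why a router shouldn't let its queue grow without bound:

```python
# M/M/1 queue: average time in system W = 1 / (mu - lambda), and the
# average number in system L = rho / (1 - rho), where rho = lambda / mu.

service_rate = 1000.0  # packets/sec the router can forward (assumed)

for rho in (0.5, 0.8, 0.9, 0.95, 0.99):
    arrival_rate = rho * service_rate
    w = 1.0 / (service_rate - arrival_rate)   # avg time in system, seconds
    in_system = rho / (1.0 - rho)             # avg packets queued or in service
    print(f"utilization {rho:.0%}: delay {w * 1000:5.1f} ms, "
          f"~{in_system:5.1f} packets in system")
```

At 50% utilization the delay is 2 ms; at 99% it is 100 ms and climbing fast. Queueing a few packets smooths over bursts; queueing hundreds just adds latency.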

Internet != Ethernet
Choosing to ignore the DARPA funding for the development of the ARPANET and TCP/IP by Kleinrock, Cerf and Kahn, Crovitz instead wants to give Xerox PARC the lion’s share of the credit for the technology underlying the Internet today. This is an interesting exercise in revisionist history. Robert Taylor did leave DARPA and wound up at Xerox PARC in 1970. Xerox PARC, the research arm of Xerox, invested extensively in networking technology during Taylor’s tenure there, including the development of the Ethernet protocol and the first linkage of Xerox PARC computers running Ethernet to the ARPANET. Xerox patented the Ethernet protocol in 1975 and successfully deployed it internally.

However, the DARPA-sponsored research that led to the development of the ARPANET was focused on internetworking – tying together heterogeneous computers distributed over long distances, independent of the underlying telecommunication technology used to link those computers. Ethernet was originally adopted as a wire protocol; it sits at the bottom of the networking protocol stack, at the physical and Media Access Control (MAC) layers of the OSI layered model. In its original conception, the protocol provided connectivity over very limited distances (about 500 meters per network segment).

Both the Internetworking Protocol (IP) and the Transmission Control Protocol (TCP) were originally developed to solve the problem of network communication over long distances for the ARPANET project. The focus was entirely different from the problem of local area networking (LAN) that Metcalfe was trying to solve with the Ethernet protocol.
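The division of labor between the layers is easiest to see in how packets nest: a TCP segment rides inside an IP packet, which rides inside an Ethernet frame for the final local hop. Here is a deliberately simplified sketch of that encapsulation -- the header layouts below are stripped down to a few fields each and are nothing like the full specifications:

```python
import struct

def tcp_segment(src_port, dst_port, payload):
    # Simplified: a real TCP header also carries sequence/ack numbers,
    # flags, a window size, a checksum, and more.
    return struct.pack("!HH", src_port, dst_port) + payload

def ip_packet(src, dst, segment):
    # IP solves the long-haul problem: logical addressing and hop-by-hop
    # routing between networks, independent of the wire underneath.
    return bytes(int(octet) for octet in src.split(".")) + \
           bytes(int(octet) for octet in dst.split(".")) + segment

def ethernet_frame(dst_mac, src_mac, packet):
    # Ethernet solves the local problem: delivery on one shared segment.
    return bytes.fromhex(dst_mac) + bytes.fromhex(src_mac) + packet

frame = ethernet_frame(
    "112233445566", "aabbccddeeff",
    ip_packet("10.0.0.1", "10.0.0.2",
              tcp_segment(49152, 80, b"GET / HTTP/1.0\r\n\r\n")))
print(len(frame), "bytes on the wire")
```

Swap out the Ethernet layer for token ring, X.25, or Wi-Fi and the IP packet inside doesn't change -- that independence from the underlying wire is the whole point of internetworking.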

And Look Who Benefitted...
Because all the ARPANET research was paid for directly by the American taxpayer, all the essential technology that was developed around TCP/IP was deposited in the public domain. The benefits of this research didn't stay bottled up in company-sponsored research labs; they were there for the asking, royalty-free. This technology proved to be a treasure trove for American-based technology companies. Networking start-ups like Cisco, for example, gained a huge advantage in the marketplace by building dedicated IP packet routers on top of the public domain TCP/IP protocol stack.

One of the successful legacies of the Gore Bill is that it directly financed the development of Mosaic, the first HTTP-based web browser GUI client. Subsequent developers of web browsers, including Netscape Navigator, Internet Explorer, Firefox, and Chrome, all had ready access to the Mosaic source code, since NCSA made it readily available. For instance, Microsoft was able to produce a credible challenger to Netscape Navigator in short order because it licensed the Mosaic source code (by way of Spyglass).

I would make the case that the government-financed research that led to the ARPANET later fueled the Internet technology boom of the 90s. Over the ensuing years, the benefit of the ARPANET research to the American economy has proved spectacular, more valuable to US-based businesses than anything that came out of the space program. (Not that Velcro isn’t pretty cool.)

American-based technology companies had a huge head start in adopting this technology – many of the lead researchers worked at American universities like UCLA, University of Illinois, Carnegie Mellon and Stanford, and they shared that research with their students. And it was all published in English.
To take another example, the IP protocol header features a field called the Time-To-Live, or TTL for short. To understand how IP routing works, it is very helpful to understand the English-language meaning of the field's name. Students who are fluent in English have a leg up when it comes to learning about networking.
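As an aside for the curious, here is a toy sketch of the TTL semantics. The hop names are invented; in real IP a router that decrements TTL to zero also sends back an ICMP "time exceeded" message, which is exactly the trick traceroute exploits:

```python
# Each router along the path decrements the packet's Time-To-Live;
# when it hits zero, the packet is discarded instead of forwarded.
routers = ["gateway", "regional-1", "backbone", "regional-2"]

def forward(ttl):
    for router in routers:
        ttl -= 1
        if ttl == 0:
            return f"dropped at {router} (TTL expired)"
    return "delivered to destination"

print(forward(ttl=3))    # dropped at backbone (TTL expired)
print(forward(ttl=64))   # delivered to destination

# traceroute, in essence: probe with TTL = 1, 2, 3, ... and record
# which router "complains" at each step.
for ttl in range(1, len(routers) + 2):
    print(ttl, forward(ttl))
```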

Look, I am not saying that governments know how to pick marketplace winners and losers. Maybe the Internet technology developed for the ARPANET is a rare successful exception. Maybe the huge Japanese government-funded Fifth Generation Computer Systems research project (see, for example, http://www.atarimagazines.com/creative/v10n8/103_The_fifth_generation_Jap.php) during the 1980s, which was designed to catapult Japanese-based computer technology businesses into the lead, is the rule. But to re-write business history by denying the impact of at least some government-sponsored technology research is pretty nauseating. It is in service of a political agenda that insists the government never gets anything right.
I was in the early stages of my business career in the 1980s when Japan launched the enormous Fifth Generation Computing research project. DARPA was also funding supercomputing research that helped to launch start-ups like Thinking Machines and Kendall Square Research, but not at the same level or as well coordinated as the Japanese government’s effort. The Gore Bill in 1989 was conceived partially as a response to the level of funding for basic research in computing by the Japanese government.

I remember reading David Halberstam’s book “The Reckoning” shortly after it was published in 1986, an account comparing and contrasting the US and Japanese auto industries post World War II. The American government’s intervention in the US auto industry in response to the triumph of Japanese car makers included a bailout of Chrysler in 1979 and protectionist legislation that forced companies like Toyota, Honda and BMW to start building at least some of their products in America. This ultimately defused much of the political pressure on Japanese car makers. The implied threat from Japan was palpable across the computer industry. We were worried that Japan, Inc. would do to the computer industry what it had done to US manufacturing jobs in the steel, auto and other industries. We were next in line.

Fortunately, for the American computer technology industry, a few well-placed bets from DARPA bore fruit and we got to the Internet first. Meanwhile, the Japanese government poured money down a dry hole.

And that is the way it goes. Pure research is a crap shoot, while in the technology business, timing is everything. Researchers at Xerox PARC built the first practical mouse-based computer GUI, but the Xerox Star computers that first attempted to commercialize the technology were far too expensive. Steve Jobs at Apple ripped off the GUI technology for the original Mac, but he didn’t get it completely right either. The early Macs were on the expensive side and under-powered. Microsoft didn’t get the same GUI interface right until Windows 3.0, when Intel 386 processors were finally powerful enough and machines had enough RAM to run it.

The Apple Newton flopped as a handheld computing device, as did early versions of Windows tablets that were expensive and clunky. Years later, along come the iPhone and the iPad. With the right amount of computing power and network bandwidth, the combination in a handheld device of knowing both who you are and where you are makes for killer apps, as does the power of networked applications to make reasonable inferences from that information.

Government funding sometimes makes a difference, and sometimes not. Refusing to acknowledge that government funding had anything to do with the development of phenomenally successful Internet technology that then provided a major windfall to American technology-based businesses serves a profoundly flawed political ideology that is obstructionist in nature and ruthless in its ignorance of any historical facts that might contradict its central tenets. Those of us who know better have a duty to challenge this fundamentally flawed perspective.

Ethernet, TCP/IP, and all the rest.
But back to the alternative history that Crovitz promulgates. Ethernet did ultimately win the war for LAN connectivity, but that success walked a convoluted path. Since wireless Ethernet is ubiquitous today on computers, laptops, tablets and phones – the devices we all use to access the web – I can see how someone who doesn’t know any better might mistakenly identify Ethernet as the key ingredient in computer internetworking. But it is not.

Xerox PARC was notorious for producing breakthrough technology, while the parent company, focused on its core copier business, was unable to capitalize on any of it. Bob Metcalfe, widely recognized as the inventor of Ethernet, is also mentioned in the column. Metcalfe first became enamored with the ARPANET while he was doing graduate work at Harvard. Harvard rejected his dissertation on packet switching and he wound up at Xerox PARC. There he focused on Local Area Networking (LAN) – essentially, tying together a bunch of computers and their printers on the same floor of an office building. Metcalfe devised an improved version of the ALOHA protocol. It was simple, yet effective.

When it became clear that Xerox was not very interested in doing much of anything practical with his invention, Metcalfe started his own company (3Com) to try to bring the technology to market. He was then able to convince Xerox to join with DEC and Intel to submit a 10 megabit-per-second proposal for local area networking based on Xerox’s Ethernet patents for consideration by IEEE as an industry standard. The IEEE 802.3 CSMA/CD standard – the protocol Metcalfe devised is properly called CSMA/CD; Ethernet is a nickname – was then approved in 1983.
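For the curious, here is a toy sketch of the CSMA/CD idea Metcalfe refined from ALOHA: stations that collide back off for a random number of slot times, drawn from a window that doubles after each collision (binary exponential backoff). The station count and collision model are invented for illustration; this captures the flavor of the protocol, not the 802.3 state machine:

```python
import random

def backoff_slots(attempt):
    # Binary exponential backoff: after the nth collision, wait a random
    # number of slot times in [0, 2^n - 1]. (802.3 caps n at 10 and gives
    # up entirely after 16 attempts.)
    n = min(attempt, 10)
    return random.randint(0, 2 ** n - 1)

def transmit(delays):
    # Toy collision model: stations that pick the same earliest slot
    # collide; a unique earliest slot wins the wire.
    soonest = min(delays)
    return delays.count(soonest) == 1

random.seed(42)
stations = 3
for attempt in range(1, 17):
    delays = [backoff_slots(attempt) for _ in range(stations)]
    if transmit(delays):
        print(f"attempt {attempt}: delays {delays} -> success")
        break
    print(f"attempt {attempt}: delays {delays} -> collision, backing off")
```

The elegance is that contention resolves itself statistically, with no central arbiter on the wire -- the same decentralizing instinct that animated packet switching.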

The ability of Ethernet to run over inexpensive, lightweight twisted-pair wiring was a key to some of its early commercial success. Ethernet came to dominate competing hardware standards such as IBM Token Ring (widely perceived as proprietary) and FDDI (expensive) due to its low cost, a function of both its simplicity and high-volume manufacturing. Especially significant was Intel putting its manufacturing muscle and VLSI fabrication technology into producing inexpensive Ethernet chips beginning in the 1990s. I suspect the core networks of Internet Service Providers like the phone companies still use ATM, for example, but Ethernet is pretty much ubiquitous everywhere else today for both wired and wireless connectivity.

The level of government-sponsored research in core networking technology by the academic community was fairly modest, compared to private sources of funding for research in networking technology. Nevertheless, the success of the ARPANET project was a major public boost for packet-switched data networks in general. Bell Labs (Frame Relay, ATM, ISDN, SONET, etc.) and IBM Research (Token Ring), as well as many others, invested heavily in networking technologies very different from the one we use today. Of course, private companies with their extensive research arms often withhold publishing research they believe has significant commercial potential until they have a chance to assess that potential. In contrast, the ARPANET research was all in the public domain, and was thus very visible within the academic community.

TCP/IP did not catch on immediately; it emerged only later. Public access internetworking (e.g., Compuserve and Tymnet) was usually accomplished using the X.25 protocol through the 1980s and early 90s. One of the major advantages of TCP/IP when it did hit was the fact that it was an open standard, based on government-sponsored research that was widely published and promulgated. Since TCP/IP was in the public domain, no one technology business was perceived as having a proprietary advantage that might give rivals pause about adopting that technology themselves. Because it was so simple to implement and the patent holder Xerox wasn’t a big player in the computer business, Ethernet was perceived by IBM’s rivals as having a similar advantage over its main competitor, IBM Token Ring (more complicated, more expensive, and IBM held all the patents and had a huge head start on the business).

Of course, by 2008, the controversy over who could claim credit for inventing the Internet was something that had already turned pretty ugly. Vint Cerf, who along with Bob Kahn, is credited with designing the original version of the TCP/IP protocol, had his research funded by the Defense Advanced Research Projects Agency (DARPA), which is how all the software associated with the TCP/IP protocol ended up as open source in the public domain.

Cerf worked as a doctoral student in the lab at UCLA where the aforementioned Len Kleinrock was experimenting with routing procedures for packet-switched networks back in the 1960s, while Kahn, then at BBN, helped build the IMPs that powered the ARPANET. Kleinrock’s original research in the performance of routed computer network traffic, published in 1964 as Communication Nets: Stochastic Message Flow and Delay, was formative in influencing the design of the IP routing protocol. Reportedly, Dr. Kleinrock was annoyed for many years when his young protégés walked off with most of the credit for the design of the IP protocol. Of course, Cerf and Kahn themselves never claimed "they invented the Internet." But if you read popular accounts of the history of the Internet, like Tim Wu’s “The Master Switch: The Rise and Fall of Information Empires,” Cerf and Kahn are mentioned prominently, while Kleinrock’s seminal role in the design of the IP routing protocol isn’t mentioned at all.

Meanwhile, Tim Berners-Lee has always been careful to give full credit to Ted Nelson, who coined the term back in the 60s, for the invention of hypertext. I remember using IBM’s Generalized Markup Language (GML), the predecessor of the SGML standard, in 1980 when I worked as a programmer at DuPont to build some documentation. Way cool stuff, which had a great influence on the HTML markup language. (Most of the academic community was in thrall to Donald Knuth's TeX initiative.) The first version of Landmark's TMON/MVS that was released in 1989 incorporated a hypertext-based online Help engine that I wrote. It ran on an IBM 3270 “dumb” terminal. The biggest technical hurdle I faced was explaining to the tech writers who worked with me how to exploit hypertext technology in their documents, because it was all new to them.
But I didn’t “invent” anything either; the original idea came from Ted Nelson and was transmitted to me via GML. By 1987, Apple had also introduced HyperCard, which also helped to popularize the idea behind hypertext documents.
At MCI, where I worked briefly during the 80s, Vint Cerf was better known as the guy who led the team that developed MCI Mail, the first commercially viable, consumer-oriented e-mail service. MCI Mail actually preceded the use of the SMTP mail protocol that sits on top of TCP/IP; the version I ran on my IBM PC at MCI in 1987, shortly after Cerf left the company, did not support TCP/IP.

Early versions of TCP/IP running on the ARPANET and later the Internet did not scale well at all. Van Jacobson’s research, also funded by the federal government, created the congestion avoidance and control mechanisms that were subsequently incorporated into the TCP protocol to keep Internet performance from literally collapsing under load. Still, it is Vinton Cerf and Robert Kahn who are most often cited as the inventors of the Internet, along with Tim Berners-Lee, who first developed the hypertext protocol HTTP on top of TCP/IP that led directly to the World Wide Web as we know it today.
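Jacobson's scheme, in caricature, is the additive-increase/multiplicative-decrease (AIMD) loop sketched below: ramp up quickly, probe gently, and back off hard when the network signals congestion by dropping a packet. The loss pattern and window sizes are invented for illustration -- this is the shape of the idea, not the real TCP state machine:

```python
cwnd = 1.0         # congestion window, in segments
ssthresh = 16.0    # slow-start threshold
loss_at = {8, 15}  # rounds where we pretend a packet loss was detected

for rnd in range(1, 21):
    if rnd in loss_at:
        ssthresh = cwnd / 2        # multiplicative decrease on loss
        cwnd = max(1.0, ssthresh)
        event = "loss!"
    elif cwnd < ssthresh:
        cwnd *= 2                  # slow start: exponential growth
        event = "slow start"
    else:
        cwnd += 1                  # congestion avoidance: additive increase
        event = "congestion avoidance"
    print(f"round {rnd:2d}: cwnd = {cwnd:5.1f}  ({event})")
```

Without that feedback loop, every sender keeps pushing packets into an overloaded network, retransmissions pile on top of the originals, and throughput collapses -- which is exactly the failure mode the early Internet was flirting with before Jacobson's fix.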

So, Who Really Invented the Internet?
Lots and lots of very smart people who built upon each other’s successes over many, many years, based on research that was conducted long before anyone had much of an inkling about its commercial potential. Much of the initial research that was to prove so influential was underwritten directly by the U.S. government. This research was widely and publicly shared. Then, when the commercial potential started to become apparent, American-based technology start-ups were in a unique position to capitalize on the technology. Students in American universities learned about it from their professors, many of whom were recipients of research grants from the federal government. American-based technology firms then took the lead in commercializing this technology, leading to the Internet boom years when the US economy prospered and the US government ran budget surpluses and was starting to pay down the national debt at a rate so fast it was actually scaring economists.

That is the history that I remember, that I lived through.
