July 11, 2011
Far too many people were involved in the creation and nurturing of the Internet for any one individual to claim credit for inventing it. But Vinton G. Cerf probably comes closest. He was involved in just about every major development in the Internet's history, from its earliest roots more than 40 years ago as an experiment at the Defense Advanced Research Projects Agency (DARPA), through the development of email in the 1980s and the World Wide Web in the 1990s.
Cerf is probably best known for his seminal work with Robert Kahn in the late 1970s and early 1980s that led to the development of TCP/IP, the set of protocols that control communications over the Internet. Today, he continues to play a major role in the future of the Net as vice president and chief Internet evangelist for Google.
Steve Wildstrom sat down recently with Cerf at Google's Reston, Va., offices for a wide-ranging discussion of the past, present, and future of the Internet. Here are some highlights of the talk.
What he would have done differently:
Cerf: Let me distinguish between what I would wish to have done differently and what could have been done differently. Certainly, I would have picked a 128-bit address space instead of 32. Bob [Kahn] and I used 32 bits in our first paper thinking, very naively, networks are expensive things, they'll probably be national scale, there won't be more than one or two networks per country, so we only need about 256 networks. Oh by the way, this is an experiment, and oh by the way, this is 1973, and oh, by the way, nothing has been implemented yet. So we pick a number, and it seemed like 4.3 billion [addresses] ought to be enough to do an experiment. Even the Defense Dept. doesn't need more than 4.3 billion of these things, whatever they were. But more important, at this time we don't have laptops and desktops and mobiles and all this other stuff. What we have are mainframes that are time shared and serve 10,000 people. We weren't expecting to have mobile, or even portable, computing available very quickly. So 32 bits—I would have picked 128, knowing better.
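The numbers Cerf cites follow directly from the field widths he describes. A quick sketch of the arithmetic (the variable names here are just for illustration):

```python
# Address-space arithmetic behind Cerf's figures.
networks = 2 ** 8          # an 8-bit network field: 256 networks
ipv4_addresses = 2 ** 32   # 32-bit addresses: 4,294,967,296 (~4.3 billion)
ipv6_addresses = 2 ** 128  # 128-bit addresses: ~3.4 x 10^38

print(f"Networks in the original scheme: {networks}")
print(f"IPv4 address space: {ipv4_addresses:,}")
print(f"IPv6 is {ipv6_addresses // ipv4_addresses:,} times larger")
```

The 128-bit space is 2^96 (about 7.9 x 10^28) times the size of the 32-bit one, which is why exhaustion is not a practical concern for IPv6.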
In 1992, we realized we were going to run out. We thought we were imminently going to run out. We rushed through IPng—next generation—and IPv6 is what that eventually turns into. Then for 16 years, nobody pays any attention to implementing it, with a few exceptions.
Here it is, 2011, and we are out of address space at the IANA [Internet Assigned Numbers Authority] level. The ARINs and APNICs and other regional internet registries will be out of address space by the end of this year or possibly sooner. So people should be buzzing to implement IPv6 in parallel with IPv4. But it is only just beginning to dawn on these folks. So this year, June 8, 2011, is World IPv6 Day. We're all going to turn it on everywhere, at least for 24 hours, and see what happens—see what breaks.
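Running IPv6 "in parallel with" IPv4 is usually done with dual-stack hosts: a client resolves both AAAA (IPv6) and A (IPv4) records and connects over whichever works. A minimal sketch of that fallback logic, using the standard `getaddrinfo` interface (the `connect_dual_stack` helper is a name invented for this example):

```python
import socket

def connect_dual_stack(host, port):
    """Try each address family returned for host (IPv6 and IPv4)
    and return the first socket that connects successfully."""
    last_err = None
    for family, socktype, proto, _name, addr in socket.getaddrinfo(
            host, port, socket.AF_UNSPEC, socket.SOCK_STREAM):
        try:
            sock = socket.socket(family, socktype, proto)
            sock.connect(addr)
            return sock
        except OSError as err:
            last_err = err  # this address failed; try the next one
    raise last_err or OSError(f"no addresses found for {host}")
```

Because `AF_UNSPEC` returns addresses from both families, the same client code keeps working whether a site has deployed IPv6 or not, which is exactly the property World IPv6 Day was meant to exercise.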
The second issue is security, and here we run into another typical problem. … The technology wasn't available to me publicly to do anything about putting in stronger cryptography and stronger security. It turns out I was working with NSA in 1975, while I was still at Stanford, on what looks like a secure version of the Internet, using classified cryptography. Because it was classified, I couldn't tell anyone about it. We end up building this fully secured system and having to keep all non-secured systems in parallel. If I could go back in time and knew more than I did at the time, maybe we could have designed this more securely. But in the end, I don't think it would have been possible to achieve this. And we still didn't know at the time, in 1975 or 1977, how well this was going to work. The timing wasn't very good.
On threats to the future of the internet:
Cerf: If we don't get the platforms properly configured to resist various forms of attack or penetration, we will create a very fragile future for ourselves. Part of the challenge is to build much more resilient systems, with more distribution and more ability to replicate data in many places, like we do here at Google in our cloud-based systems: systems in which we are unwilling to run code that hasn't been verified and cryptographically checksummed, and unwilling to communicate with parties that you can't verify end to end or know to be appropriate correspondents, as opposed to being promiscuous. All of these things have to happen for us to create a safe environment for future networking.
The contents or opinions in this feature are independent and do not necessarily represent the views of Cisco. They are offered in an effort to encourage continuing conversations on a broad range of innovative technology subjects. We welcome your comments and engagement.