Where to Discuss a New Internet?

About 15 years ago, research projects were started in several countries under banners such as "Future Internet" or "Clean Slate Internet". The general idea was that the Internet had enough fundamental engineering problems that a fundamental overhaul was required. For the most part, these projects were interesting, productive and successful in reaching their research goals. Some of them have produced valuable outcomes such as software-defined networking. However, we can be certain that no technology based on such projects is in widespread use unless it is incremental and backwards compatible. A "clean slate" is impossible, because the Internet is here and everywhere and fully operational. It is so pervasive that copy-editors commonly spell it in lower case, like the air we breathe. Only step-by-step changes are imaginable. If we get a new Internet, it will be by evolution, not revolution.

Even longer ago, in the early 1990s, the Internet Engineering Task Force (IETF) observed that the basic Internet Protocol (IP) underlying every single data packet on the Internet was likely to run out of addresses within twenty years if nothing was done. That refers to numeric addresses, which only have technical meaning, and are commonly written like 192.0.2.123. Underneath the covers they are binary numbers, and there are only about four billion of them. (To be exact, 2^32, which is 4,294,967,296, minus some addresses that are unusable for technical reasons.) To put that in context, one source says "As of mid-June 2019, there were 4,536,248,808 unique internet users around the world." So why aren't we in a big mess, with new users unable to join? There are two reasons why the Internet hasn't run out of steam:
  1. Address translation, a workaround that lets many devices share a single public address, so the four billion addresses stretch much further than one address per user.
  2. IPv6, a new version of IP designed in the 1990s with a vastly larger address space (2^128 addresses), intended as the long-term replacement.
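
For readers who like to check the arithmetic, here is a quick back-of-the-envelope sketch in Python; the user count is the mid-2019 figure quoted above, and the IPv6 figure anticipates the second point.

```python
# Back-of-the-envelope arithmetic for IPv4 address exhaustion.
ipv4_addresses = 2 ** 32                 # 4,294,967,296 possible IPv4 addresses
ipv6_addresses = 2 ** 128                # the IPv6 address space, for comparison
internet_users_mid_2019 = 4_536_248_808  # figure quoted above

print(f"IPv4 addresses:        {ipv4_addresses:,}")
print(f"Internet users (2019): {internet_users_mid_2019:,}")
print(f"Nominal shortfall:     {internet_users_mid_2019 - ipv4_addresses:,}")
print(f"IPv6 addresses:        {ipv6_addresses:,}")
```

Even before counting the many devices each user owns, there are already more users than IPv4 addresses; the two work-arounds above are what keep this from being a crisis.
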
What does all this tell us? Several very important things, I think:
  1. The Internet is too entrenched for radical or fundamental changes. Changing the basic concept of IP packets would be like trying to rip up every road and footpath in the world and replace them with monorails. It isn't going to happen. Changes happen over a period of decades, if they happen at all.
  2. Engineers are ingenious. Once it became obvious that IPv6 would take many years to deploy widely, engineers took the messy solution (address translation) and hammered away at it until it worked as an effective stop-gap (a toy sketch of how this works follows this list).
  3. Even the upgrade from the old IP to IPv6, which was planned from the start, turned out to be very hard and has taken more than 20 years to reach 30% penetration (Google statistics).
  4. The most important decision makers are not the people (like me) who write the technical standards for the Internet. Neither are they the equipment and software suppliers who make the products that compose the Internet. Still less are they politicians and government officials. They are the thousands of network operators who buy, install and run the equipment and software, not to mention answering help desk calls from their users. Operators are sensitive to cost, but even more sensitive to performance, reliability and complexity. They are also suspicious of change, because it leads to unreliability. If there's to be a new Internet, it will only happen when the operators so decide.
  5. Operators avoid big changes. New technology only enters the Internet in small steps. For example, it took ten years for Google to see its IPv6 traffic grow from 0.25% to 30%. Ten years. It doesn't matter what the standards say or what the vendors try to sell. What matters is what the operators buy and install.
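
As an aside on point 2, here is a toy sketch in Python (with invented addresses and port numbers) of the kind of translation table that lets many devices share one public IPv4 address; real NAT implementations are far more elaborate, but the principle is the same.

```python
# Toy illustration of network address translation (NAT): many private hosts
# behind one router share a single public IPv4 address, told apart by the
# public port the router assigns. Addresses and ports are invented examples.

public_address = "203.0.113.7"   # the one public address the ISP assigned
nat_table: dict[tuple[str, int], int] = {}   # (private addr, private port) -> public port
next_public_port = 40000

def translate_outgoing(private_addr: str, private_port: int) -> tuple[str, int]:
    """Return the (public address, public port) an outside server would see."""
    global next_public_port
    key = (private_addr, private_port)
    if key not in nat_table:
        nat_table[key] = next_public_port
        next_public_port += 1
    return public_address, nat_table[key]

# Three devices on a home network all appear as 203.0.113.7 to the outside world.
print(translate_outgoing("192.168.1.10", 51000))   # laptop
print(translate_outgoing("192.168.1.11", 51000))   # phone
print(translate_outgoing("192.168.1.12", 51042))   # smart TV
```

The point is simply that one public address can stand in for an entire household or office, which is a large part of why the four-billion limit has not yet brought the Internet to a halt.
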
So what can we say in 2020 about another round of proposals for radical change in the basic Internet technology? Where can it most effectively be discussed? This topic came up recently in the media, with a couple of rather careless articles in the Financial Times (behind a paywall, so I will not cite them) and an excellent riposte by Milton Mueller. It isn't surprising that Huawei, now one of the largest telecommunications suppliers in the world, has ideas about the future of the Internet. (I have consulted for Huawei for several years.) It isn't surprising that they have outlined their ideas at the main organisations involved in telecommunications standards, such as the International Telecommunication Union (ITU) and the IETF. It isn't even news, even if the Financial Times only just heard about it.

The Internet Society has also entered the fray, with a draft discussion paper. While it makes many good points, including the need for a clear understanding of requirements before considering detailed technical proposals, it doesn't (in my opinion) adequately tackle the main puzzle: where will the strategic discussion, and decision-making, take place?

New requirements

Some new requirements for Internet infrastructure are based on observations about its known defects. Why is video streaming sometimes blocked for a while, with an annoying message about "buffering"? Why do live video sessions sometimes freeze up? Why do web pages sometimes load more slowly than at other times? Why do Internet Service Providers sometimes report that they are under sustained attack by untraceable sources of rubbish messages ("distributed denial of service")? How can we ensure top-to-bottom privacy for users who need it?

Other new requirements are based on technological expectations. What lies beyond HD video, for example? How can something like holographic video or remote virtual reality be delivered over the Internet? How can Internet technology be used on the factory floor for real-time control of robotic systems with very tight timing requirements? How can it be used to enhance the safety of autonomous vehicles by ultra-reliable and rapid communication between large numbers of adjacent vehicles?

This is not the place to go into details, but the fact is that existing Internet technology simply isn't up to snuff for such requirements. You don't want to see "buffering" when your self-driving car is heading rapidly into a busy intersection.

Of course there are emerging technical ideas about how to meet these requirements, from many different companies and from researchers in numerous countries. Apparently the Financial Times and the Internet Society were surprised that one such contribution came from Huawei. I wasn't. What is perhaps distinctive is that it takes a strategic, not a tactical, approach. But from a major telecommunications company, that is not really a surprise.

Who gets to decide?

Who decides about the future of the Internet? Who decides which ideas will be developed into standards and products, and which will be discarded? There's no simple answer to that. For 30 years, there's been a complex interplay between academics, hardware manufacturers, software providers, telecommunications operators, specialised Internet service providers, and major users. The Internet we see today is the result. Nobody owns the Internet, and nobody has the ultimate power of decision. Nobody. It's much more like a beehive, with a collective will. That makes it extraordinarily hard for explicit strategic changes to be made. (Again, it has taken 25 years for IPv6 to reach 30%.)

So, the short answer to the question in the title of this piece is "Huh?". Let's try to find a slightly longer answer. Think for a moment about the bees.

[Bee picture]

How does a beehive take a decision? That's reasonably well understood. Individual bees discover food supplies (nectar) at random, return to the hive, and indicate the distance and direction to their colleagues. As more and more individuals return with the same information, even more bees go to the same food supply, until it's exhausted. The process repeats indefinitely, with no single bee taking the strategic decision to switch to a new food supply. Apart from the early days when the ARPANET was a self-contained project, that is how the Internet has taken all its decisions. Technology that works is technology that wins. That's why the IETF's motto became "rough consensus and running code". Although there is a decision point in Internet standardisation (the declaration of a consensus by the IETF), decisions that are not validated by running code (i.e., hardware and software deployed in the market) are futile.

To fully understand how important this is in the success of the Internet, we need to briefly review another piece of ancient history. Starting in the mid 1970s, years before the ARPANET mutated into the Internet, three of the world's "official" international standards organisations (ISO, IEC and ITU) worked with academics and industry on formal standards for networking, under the banner of "Open Systems Interconnection (OSI)". This was not done in a vacuum; many people already doing practical networking, such as the ARPANET people, were involved. But it failed almost completely, and the standards that prevailed were the ones adopted by the Internet beehive. Why? Many authors have tackled this historical question, but my own answer is that the Internet people developed their standards in parallel with building and running the network, whereas the OSI people largely tried to develop standards in advance with an emphasis on formal correctness rather than on running code. This simplifies an enormously complex question, but I think it's the essence of the matter. By 1995, when the World-Wide Web burst onto the scene (with running code, but incomplete standards), OSI was effectively dead.

The lesson here is that pre-emptive, theoretical standardisation for something as horrendously complex as the Internet doesn't work. Any attempt to use such an approach for a future Internet will fail. Humans aren't clever enough for that; we need the beehive effect.

Where, then, can we talk about strategy?

There are many places where a strategic conversation about future requirements and future technology could, in theory, take place. But whichever of them we consider, we are stuck with the beehive as the model for strategic decision-taking. In a sense, the Internet will decide its own future. For the immediate future, I suggest that the IETF will be the best forum for discussion, even though it is not the decision point.

This note is the personal opinion of Brian E. Carpenter, an honorary academic at the University of Auckland. He previously worked at CERN and IBM, and is a past Chair of the Internet Architecture Board, of the Board of Trustees of the Internet Society, and of the Internet Engineering Task Force. He is the author of "Network Geeks: How They Built the Internet", and an author or editor of more than 50 Internet RFC (Request for Comments) documents. He has also provided technical consultancy to Huawei in Beijing.

Page updated 2020-04-30.