Where to Discuss a New Internet?
About 15 years ago, research projects were started in several countries under banners such as "Future Internet" or "Clean Slate Internet". The general idea was that the Internet had serious enough engineering problems that a fundamental overhaul was required. These projects were interesting and productive, and largely successful in reaching their research goals. Some of them produced valuable outcomes, such as software-defined networking. However, we can be certain of one thing: no technology based on such projects is in widespread use unless it is incremental and backwards compatible. A "clean slate" is impossible, because the Internet is already here, everywhere, and fully operational. It is so pervasive that copy-editors commonly spell it in lower case, like the air we breathe. Only step-by-step changes are imaginable. If we get a new Internet, it will be by evolution, not revolution.
Even longer ago, in the early 1990s, the Internet Engineering Task Force (IETF) observed that the basic Internet Protocol (IP) underlying every single data packet on the Internet was likely to run out of addresses within twenty years if nothing was done. That refers to numeric addresses, which have only technical meaning and are commonly written like 192.0.2.123. Underneath the covers they are binary numbers, and there are only about four billion of them. (To be exact, 2^32, which is 4,294,967,296, minus some addresses that are unusable for technical reasons; the short code sketch below makes the arithmetic concrete.) To put that in context, one source says "As of mid-June 2019, there were 4,536,248,808 unique internet users around the world." So why aren't we in a big mess, with new users unable to join? There are two reasons why the Internet hasn't run out of steam:
- A rather messy technique called Network Address Translation (NAT), invented in the 1990s, which allows many devices to share a single public address.
- The gradual introduction of a new version of IP, called IPv6, which allows vastly more than four billion addresses. IPv6 was also invented in the 1990s, but only now is it carrying as much as 30% of Internet traffic (with a noticeable boost caused by changing traffic patterns during the COVID-19 pandemic).
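For readers who like to see the arithmetic, here is a minimal sketch in Python (my illustration, using nothing beyond the standard library's ipaddress module):

```python
# An IPv4 address such as 192.0.2.123 is just a 32-bit binary number.
import ipaddress

addr = ipaddress.IPv4Address("192.0.2.123")
print(int(addr))   # 3221226107 -- the underlying 32-bit integer
print(2 ** 32)     # 4294967296 -- the whole IPv4 address space, about 4 billion

# IPv6 addresses are 128-bit numbers, so the space is astronomically larger.
print(2 ** 128)    # 340282366920938463463374607431768211456
```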
What does all this tell us? Several very important things, I think:
- The Internet is too entrenched for radical or fundamental changes. Changing the basic concept of IP packets would be like trying to rip up every road and footpath in the world and replace them with monorails. It isn't going to happen. Changes happen over a period of decades, if they happen at all.
- Engineers are ingenious. Once it became obvious that IPv6 would take many years to deploy widely, engineers took the messy solution (address translation) and hammered it around until it worked as an effective stop-gap.
- Even the upgrade from the old IP to IPv6, which was planned from the start, turned out to be very hard and has taken more than 20 years to reach 30% penetration (Google statistics).
- The most important decision makers are not the people (like me) who write the technical standards for the Internet. Neither are they the equipment and software suppliers who make the products that compose the Internet. Still less are they politicians and government officials. They are the thousands of network operators who buy, install and run the equipment and software, not to mention answering help desk calls from their users. Operators are sensitive to cost, but even more sensitive to performance, reliability and complexity. They are also suspicious of change, because change leads to unreliability. If there's to be a new Internet, it will only happen when the operators decide so.
- Operators avoid big changes. New technology only enters the Internet in small steps. For example, it took ten years for Google to see its IPv6 traffic grow from 0.25% to 30%. Ten years. It doesn't matter what the standards say or what the vendors try to sell. What matters is what the operators buy and install.

So what can we say in 2020 about another round of proposals for radical change in the basic Internet technology? Where can it most effectively be discussed? This topic came up recently in the media, with a couple of rather careless articles in the Financial Times (behind a paywall, so I will not cite them) and an excellent riposte by Milton Mueller. It isn't surprising that Huawei, now one of the largest telecommunications suppliers in the world, has ideas about the future of the Internet. (I have consulted for Huawei for several years.) Nor is it surprising that they have outlined their ideas at the main organisations involved in telecommunications standards, such as the International Telecommunications Union (ITU) and the IETF. It isn't even news, even if the Financial Times has only just heard about it.
The Internet Society has also entered the fray, with a draft discussion paper. While it makes many good points, including the need for a clear understanding of requirements before considering detailed technical proposals, it doesn't (in my opinion) adequately tackle the main puzzle: where will the strategic discussion, and decision-making, take place?
Some new requirements for Internet infrastructure are based on observations of its known defects. Why is video streaming sometimes blocked for a while, with an annoying message about "buffering"? Why do live video sessions sometimes freeze? Why do web pages sometimes load more slowly than at other times? Why do Internet Service Providers sometimes report that they are under sustained attack from untraceable sources of rubbish messages ("distributed denial of service")? How can we ensure top-to-bottom privacy for the users who need it?
Other new requirements are based on technological expectations. What lies beyond HD video, for example? How can something like holographic video or remote virtual reality be delivered over the Internet? How can Internet technology be used on the factory floor for real-time control of robotic systems with very tight timing requirements? How can it be used to enhance the safety of autonomous vehicles by ultra-reliable and rapid communication between large numbers of adjacent vehicles?
This is not the place to go into details, but the fact is that existing Internet technology simply isn't up to snuff for such requirements. You don't want to see "buffering" when your self-driving car is heading rapidly into a busy intersection.
Of course there are emerging technical ideas about how to meet these requirements, from many different companies and from researchers in numerous countries. Apparently the Financial Times and the Internet Society were surprised that one such contribution came from Huawei. I wasn't. What is perhaps distinctive about it is that it takes a strategic, not a tactical, approach; but from a major telecommunications company, that is not really a surprise.
Who gets to decide?
Who decides about the future of the Internet? Who decides which ideas will be developed into standards and products, and which will be discarded? There's no simple answer to that. For 30 years, there's been a complex interplay between academics, hardware manufacturers, software providers, telecommunications operators, specialised Internet service providers, and major users. The Internet we see today is the result. Nobody owns the Internet, and nobody has the ultimate power of decision. Nobody. It's much more like a beehive, with a collective will. That makes it extraordinarily hard for explicit strategic changes to be made. (Again, it has taken 25 years for IPv6 to reach 30%.)
So, the short answer to the question in the title of this piece is "Huh?". Let's try to find a slightly longer answer. Think for a moment about the bees.
How does a beehive take a decision? That's reasonably well understood. Individual bees discover food supplies (nectar) at random, return to the hive, and indicate the distance and direction to their colleagues. As more and more individuals return with the same information, even more bees go to the same food supply, until it's exhausted. The process repeats indefinitely, with no single bee taking the strategic decision to switch to a new food supply. Apart from the early days when the ARPANET was a self-contained project, that is how the Internet has taken all its decisions. Technology that works is technology that wins. That's why the IETF's motto became "rough consensus and running code". Although there is a decision point in Internet standardisation (the declaration of a consensus by the IETF), decisions that are not validated by running code (i.e., hardware and software deployed in the market) are futile.
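To see how little central control that requires, here is a deliberately crude toy simulation in Python (an illustrative sketch of positive feedback, not a model of real bees or of the IETF):

```python
# Toy "beehive" decision: each returning bee advertises a food source, and
# each new bee picks a source with probability biased towards the sources
# already being advertised. The reinforcement here is superlinear (count
# squared), so an early random advantage snowballs until one source
# dominates -- a collective choice that no single bee ever made.
import random

adverts = {"A": 1, "B": 1, "C": 1}  # seed: one advertisement per source

for _ in range(2000):
    names = list(adverts)
    weights = [adverts[n] ** 2 for n in names]
    chosen = random.choices(names, weights=weights)[0]
    adverts[chosen] += 1            # the returning bee reinforces its source

print(adverts)  # typically one source ends up with nearly all the adverts
```

Internet technology adoption behaves in much the same way: running code that attracts deployment attracts more deployment.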
To fully understand how important this is in the success of the Internet, we need to briefly review another piece of ancient history. Starting in the mid 1970s, years before the ARPANET mutated into the Internet, three of the world's "official" international standards organisations worked with academics and industry on formal standards for networking, under the banner of "Open Systems Interconnection (OSI)". This was not done in a vacuum; many people already doing practical networking, such as the ARPANET people, were involved. But it failed almost completely, and the standards that prevailed were the ones adopted by the Internet beehive. Why? Many authors have tackled this historical question, but my own answer is that the Internet people developed their standards in parallel with building and running the network, whereas the OSI people largely tried to develop standards in advance, with an emphasis on formal correctness rather than on running code. This simplifies an enormously complex question, but I think it's the essence of the matter. By 1995, when the World-Wide Web burst onto the scene (with running code, but incomplete standards), OSI was effectively dead.
The lesson here is that pre-emptive, theoretical standardisation for something as horrendously complex as the Internet doesn't work. Any attempt to use such an approach for a future Internet will fail. Humans aren't clever enough for that; we need the beehive effect.
Where, then, can we talk about strategy?
There are many places where a strategic conversation about future requirements and future technology could, in theory, take place. Let's consider some of them:
- Academia. There are networking technology experts in many computer science and engineering faculties around the world, and many of them are eager to contribute to strategic discussion through the usual mechanisms of peer-reviewed publications, seminars, and conferences. However, not one of them has power of decision for the Internet as a whole.
- The Internet Society (ISOC). ISOC has about 68,000 individual members worldwide. (I have been a member since 1992, and served on the ISOC Board of Trustees for several years.) Where better to discuss the future of the network? ISOC members come from all walks of life; many of them are users rather than operators or technologists, and they have a vast range of opinions. However, even if ISOC used its mechanisms for consulting its membership, and even if it reached something approaching a consensus about the best technical strategy to meet the Internet's future needs, nobody would be in the least obliged to adopt the result.
- The United Nations. Given the UN's record in solving the world's serious problems, I don't see any need to discuss its chances of designing a technical strategy for the Internet.
- Industry alliances. In theory, a number of large players in the industry could get together to pick a strategy. In practice, this is unrealistic: there are simply too many players, in too many countries, with completely conflicting interests. All we could expect is numerous competing strategies.
- Standards organisations.
  - As noted above, the ITU/ISO/IEC axis failed in a fairly spectacular way to satisfy the requirements of a worldwide data network 25 years ago. Of course they (and their feeder organisations such as ETSI) have had some great successes, especially in mobile telephony and in many other areas of basic telecommunications infrastructure. They are good at that, and they are not enemies of the future Internet, but there isn't any reason to expect that they could be the forum for Internet technical strategy. (In fact, ETSI has already blundered: see Non-IP Networking.)
  - There are numerous other standards organisations with a strong interest in the Internet, for example the World-Wide Web Consortium, the Broadband Forum, and 3GPP. But none of these covers the entire scope of the Internet infrastructure.
  - The IETF, by contrast, is the obvious candidate. It controls the existing Internet standards, which are the inevitable starting point for future standards. The IETF has already reacted formally to the discussion at the ITU. However, the IETF has a long history of focusing on small, separable work items (sometimes called "piece parts") and of ducking large, strategic issues. The Internet Research Task Force, which is closely associated with the IETF, is little different in this regard. The 25-year time span of IPv6 design and deployment is the only exception I can think of, and it has been extremely painful and unnatural for the IETF. Read the IETF's mission statement very carefully: "The mission of the IETF is to make the Internet work better by producing high quality, relevant technical documents that influence the way people design, use, and manage the Internet." Read the Architectural Principles of the Internet (I was the editor of that document). The IETF's Internet Architecture Board (IAB) tries to provide "oversight of the architecture for the protocols and procedures used by the Internet" (from the IAB charter, which I also edited). However, it has no control of the technology. That control resides with the beehive. So, although the obvious choice, the IETF can at best act as a focus for discussion; it cannot decide the strategy, because no single body can.

We are stuck with the beehive as the model for strategic decision taking. In a sense, the Internet will decide its own future. For the immediate future, I suggest that the IETF will be the best forum for discussion, even though it is not the decision point.