
Internet: from experiment to planetary infrastructure


The Internet did not succeed because it was a single brilliant technology, but because it managed to coordinate networks, hardware, protocols, and distinct institutions under open and sufficiently shared rules.

Before the Web, the problem already existed

When someone opens a browser today, they usually think of pages, apps, videos, or social networks. But the Internet was not born to serve feeds or to host online stores. It was born from a much more basic and much harder technical question: how to connect different systems in a way robust enough that they could exchange information without depending on a single machine, a single network, or a single transmission method.

That detail matters because it separates two things that are often confused. One thing is the World Wide Web — a layer of documents, links, browsers, and web standards. Another thing is the Internet — the logical and physical infrastructure that allows different networks to function as a network of networks. The Web rests on the Internet; it does not create it from scratch.

ARPANET and the conceptual shift

ARPANET usually comes up at the beginning, and rightly so. It was an experimental network driven by ARPA — later DARPA — that helped demonstrate that packet switching could work in practice. The underlying idea was powerful: instead of reserving a fixed circuit between two points, data could be split into packets and move through the network more flexibly. That approach improved infrastructure utilization and opened the door to more resilient designs.
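The core of the packet idea can be sketched in a few lines: split a message into numbered chunks, let them arrive in any order, and reassemble by sequence number. This is a toy illustration, not how routers actually work, but it shows why numbering packets makes flexible, multi-path delivery workable.

```python
import random

# Toy packet switching: split a message into fixed-size, numbered
# chunks, simulate out-of-order arrival, and reassemble by sequence
# number. Real packetization happens in the network stack, not here.

def packetize(message: bytes, size: int) -> list[tuple[int, bytes]]:
    """Split a message into (sequence_number, chunk) pairs."""
    count = (len(message) + size - 1) // size
    return [(i, message[i * size:(i + 1) * size]) for i in range(count)]

def reassemble(packets: list[tuple[int, bytes]]) -> bytes:
    """Restore the original message regardless of arrival order."""
    return b"".join(chunk for _, chunk in sorted(packets))

packets = packetize(b"packets can take different paths", size=5)
random.shuffle(packets)  # simulate packets arriving out of order
print(reassemble(packets).decode())  # → packets can take different paths
```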

But the real leap was not just connecting a few computers. The leap was understanding how to interconnect different networks with each other. A radio network, a wired network, and a satellite network did not have to share the same physical conditions, the same latency, or the same limitations. What was needed was a more abstract architecture: a common language capable of operating above those differences.

TCP/IP: the idea that turned many networks into a single logical architecture

That is where the historical turning point appears. The work of Vint Cerf and Robert Kahn in the 1970s was decisive because it formalized the logic of internetworking: not designing a single closed network, but a way to make heterogeneous networks talk to each other. The result was the TCP/IP family. IP solved the basic movement of datagrams between networks; TCP added end-to-end reliability for applications that needed it.

The adoption of TCP/IP by ARPANET in 1983 is usually treated as a milestone because it marks the moment when the architecture stopped being a promise and became the operational foundation of the modern Internet. It was not the end of the evolution, but it was the moment when the idea of a network of networks stopped being a fragile experiment and gained a shared technical foundation.

The Web came later

The Web was another leap, but of a different nature. Tim Berners-Lee conceived it at CERN to facilitate information exchange among scientists and distributed teams. The genius of the system was not only technical; it was also institutional and cultural. By being released openly, the Web could spread without being locked under the exclusive control of a single company. That accelerated mass adoption, but the correct sequence is worth insisting on: first the Internet existed as a network architecture; then the Web appeared as one of its most successful use layers.

What pieces sustain the Internet today

If you want to understand the Internet precisely, you have to stop seeing it as a single thing and start seeing it as several coordinated layers. There is a protocol and software layer; an addressing and naming layer; a routing layer between networks; a physical infrastructure layer; and an institutional coordination layer. They all interact. None of them, on its own, explains the complete system.

In the protocol layer, TCP/IP remains the historical core, although today it coexists with many evolutions and upper layers: HTTP, TLS, QUIC, SMTP, BGP, DNS, and many more. In the naming layer, DNS maps human-readable names to concrete resources. In the routing layer, autonomous systems and BGP decide which paths traffic can take between networks administered by different actors. The physical layer comprises routers, switches, fiber, data centers, exchange points, and submarine cables. And in the institutional layer, organizations and communities maintain rules, registries, and standards.
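The naming layer is the easiest one to poke at from ordinary code. In Python, a single call to `socket.getaddrinfo` asks the system's resolver to map a name to concrete addresses. The sketch uses `localhost` so it works without network access; a public name such as `example.com` would instead go through the full DNS hierarchy.

```python
import socket

# Ask the resolver to map a human-readable name to socket addresses.
# "localhost" resolves locally (typically 127.0.0.1 and/or ::1), so
# this sketch runs even with no network connectivity.
infos = socket.getaddrinfo("localhost", 80, proto=socket.IPPROTO_TCP)

for family, _type, _proto, _canonname, sockaddr in infos:
    # family is AF_INET (IPv4) or AF_INET6 (IPv6); sockaddr[0] is the
    # address the transport layer would actually connect to.
    print(family.name, sockaddr[0])
```

The same call is what most clients run, directly or indirectly, before opening any TCP connection: names are for humans, addresses are for routing.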

Who maintains what

A very common mistake is imagining that a single entity "runs the Internet." It does not work that way. What exists is a combination of technical institutions, registries, operators, and standards communities. The IETF produces many of the technical documents that describe and evolve Internet protocols. IANA coordinates critical functions related to names, addresses, and protocol parameters. ICANN sustains the institutional framework for coordinating unique identifiers. The W3C develops open standards for the Web. The IEEE carries particular weight in the local and metropolitan connectivity layer, with standards families like Ethernet and Wi-Fi. The RIRs distribute numerical resources such as IP address blocks and autonomous system numbers at a regional scale.

The importance of these organizations is not that they "command" the Internet the way a centralized regulator would. Their importance is that they allow the system to remain interoperable. Without shared rules for names, numbers, parameters, and standards, the Internet would fracture more easily into incompatible networks.

The physical part still matters a lot

Sometimes people talk about the cloud as if it had replaced geography. That is not true. The Internet still depends on very concrete physical infrastructure: data centers, network equipment, backbone fibers, landing stations, terrestrial links, and submarine cables. When a service becomes "global," matter does not disappear; what happens is that it gets distributed and abstracted over an enormous physical base.

That physical base is also not distributed in a perfectly neutral way. There are traffic corridors, hubs, infrastructure concentrations, and actors with more deployment capacity than others. That is why understanding the Internet also requires looking at operational power: who hosts, who interconnects, who accelerates content, who sells capacity, and who manufactures the silicon on which many of these networks run.

Key companies and new operational concentration

For a long time, the Internet was discussed primarily in terms of openness and decentralization. That dimension remains real at the standards and architecture level, but in daily operation there is significant concentration. AWS, Google Cloud, and Microsoft Azure control an enormous share of large-scale compute and storage. Cloudflare, Akamai, and other network operators distribute and protect a relevant portion of traffic and content delivery. Manufacturers like Cisco, Juniper, Arista, Nokia, Broadcom, and NVIDIA influence the real capabilities of networks through hardware and silicon.

That does not mean these companies "own the Internet" in a total sense. It means something more interesting and more precise: they concentrate critical parts of its contemporary operation. The architecture remains largely open; the operational muscle is much more concentrated.

Where the Internet might go

The near future of the Internet does not seem to point toward a complete replacement of its foundations, but toward a reconfiguration of its layers. Quantum computing pressures cryptography and pushes the transition toward post-quantum schemes. Artificial intelligence is likely to change the operation of networks more than the base protocols: anomaly detection, capacity planning, observability, automation, and optimization. Blockchain, for its part, does not seem like a realistic replacement for the current Internet, but it can serve as a complementary layer in certain identity, naming, or coordination systems. And advances in hardware — optics, switching silicon, accelerators, energy efficiency — will continue to condition how far the network can scale without triggering runaway costs and complexity.

The important idea here is that the Internet has never been just software. It has always been a mix of protocols, cable, institutions, incentives, and physical deployment. Anyone who wants to understand its future will have to look at all five things at once.

Closing

The right way to start this series is not with a textbook definition of the Internet, but with a shift in perspective. The Internet is not a single invention and it does not belong to a single layer. It is a cumulative work of engineering and coordination. It was born from the problem of interconnecting different networks; it matured with TCP/IP; it exploded socially with the Web; it became navigable for humans with DNS; and it is still sustained by a mix of physical infrastructure, open protocols, technical communities, and large operators. That is the general map. The next logical step is to understand the piece that made the entire map possible: TCP/IP.