A short history of the Internet

The chances are that one of the first things you do in the morning is to check your phone for messages and notifications that came in while you were asleep. Maybe you’re also checking the status of your portfolio or sending an obligatory “GM” to your followers on Twitter.

While brewing your morning coffee, you might already be starting up your laptop or reading the news on a tablet. If you work remotely, you'll spend most of your day online.

The Internet is omnipresent in our lives, and we became ever more reliant on it during the pandemic, when connecting online was all we could do. Imagine staying in lockdown with nothing but offline sources of entertainment. Arguably, remote working would be impossible without the Internet, let alone cryptocurrencies.

In January 2021, 4.66 billion people used the Internet; of those, 4.32 billion also used the Internet on their mobile devices, and a staggering 4.2 billion used social media to keep up with their peers (Source: Statista). Looking at the numbers, for the vast majority, using the Internet means being active on social networks.

The Internet has become such a big part of our lives that we don’t spend much time thinking about how it has evolved, which has genuinely been at an impressive pace.

In the early 90s, estimates suggest that just 0.5% of the (then smaller) world population was online. Even during the infamous dot-com crash, only around 7% of the world population was online. Now, roughly two decades later, it's more than 59%.

“Internet is a technology of freedom, in the terms coined by Ithiel de Sola Pool in 1973, coming from a libertarian culture, ironically financed by the US Department of Defense.”

Today, on October 29th, we celebrate World Internet Day. This day honors the first-ever communication between two computers, the event that laid the basis for all we do online in the 21st century. But how did we get here?

The historical background

Unlike inventions such as the lightbulb, the Internet wasn't invented by a single genius; it evolved. It all started in the 1950s in the US, deep into the Cold War.

On October 4th, 1957, the Soviet Union was the first to successfully launch a satellite into orbit. While Sputnik didn't do much except relay some beeps from its radio transmitters, it triggered a rethinking in the US. Researchers in the States had focused on making better TVs and bigger cars, so the enemy had gained a head start in space.

In a broader move, technology and science were embedded more deeply into society: they were added to school curriculums, and agencies such as NASA and the Advanced Research Projects Agency (ARPA), part of the Defense Department, were created.


One primary concern during the Cold War was that a single Soviet missile could render the existing phone network unusable. That would mean states wouldn't be able to communicate with each other in an emergency.

To prevent that, scientists went to work and, in 1962, came up with the idea of a "galactic network" that would let world leaders communicate with each other through computers.

In 1965, scientists came up with a way to enable computers to send information by breaking it into little packets sent via different routes; without this, computer communication would have been just as vulnerable as the phone lines. They called this process "packet switching".
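The core idea can be illustrated with a toy sketch (this is an illustration only, not the actual 1965 design): a message is split into numbered packets that may travel different routes and arrive out of order, and the receiver reassembles them by sequence number.

```python
# Toy illustration of packet switching: split a message into numbered
# packets, deliver them out of order (as different routes might), and
# reassemble them at the destination using the sequence numbers.
import random

def to_packets(message: str, size: int = 4):
    """Break a message into (sequence_number, chunk) packets."""
    return [(i, message[i:i + size]) for i in range(0, len(message), size)]

def reassemble(packets):
    """Restore the original message regardless of arrival order."""
    return "".join(chunk for _, chunk in sorted(packets))

packets = to_packets("THIS IS A PACKET-SWITCHED MESSAGE")
random.shuffle(packets)      # packets take different routes, arrive out of order
print(reassemble(packets))   # the receiver still rebuilds the original message
```

Because each packet carries its own sequence number, no single route, and no single failure, determines whether the message gets through.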

Sketch of the original Internet, developed by the DoD ARPA (Advanced Research Project Agency). The computer nodes connected the Stanford Research Institute, UCLA, UCSB, and the University of Utah. (Source: Computer History Museum)

ARPANET, the government's computer network, was the first to implement this new technology, and finally, on October 29th, 1969, the first node-to-node communication took place. The first message, "LOGIN", was short, but still enough to crash the system: only the first two letters reached the recipient.

Over the next few years, the network kept growing, adding more and more universities, including institutions outside of the US. But with a growing number of separate networks and increasing participants in the overall network, it became increasingly difficult to bring all of them together.

TCP/IP & the Birth of the World Wide Web

With the Transmission Control Protocol (TCP), developed by Vinton Cerf and Robert Kahn in the 1970s, all these networks could communicate. The protocol we all know, the Internet Protocol (IP), was later split out of it, and ARPANET adopted the combined TCP/IP suite in 1983. The networks could now connect, building a network of networks.
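What TCP still provides today is an ordered, reliable byte stream between two machines. A minimal sketch using Python's standard socket library (the loopback address and the message are arbitrary choices for this local demo): a tiny server echoes back whatever bytes a client sends, and the bytes arrive intact and in order.

```python
# Minimal sketch of TCP's reliable byte stream: a loopback echo server
# built on Python's standard socket library.
import socket
import threading

def echo_once(server: socket.socket):
    """Accept one connection and echo its data back."""
    conn, _ = server.accept()
    with conn:
        conn.sendall(conn.recv(1024))

# Bind to an ephemeral port on the loopback interface (values are arbitrary).
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=echo_once, args=(server,), daemon=True).start()

with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall(b"network of networks")
    reply = client.recv(1024)

print(reply)
```

The application code never worries about packets, routes, or retransmission; TCP handles that underneath, which is exactly what made a "network of networks" practical.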

Throughout the 80s, the Internet was used mainly by researchers and academia. Through a national initiative by the National Science Foundation, more and more universities connected to it.

Connecting to the Internet was very different from what we know now. Users attached a modem to their computer via a serial port and used a dial-up service; Internet connectivity was established through phone lines. As long as users knew the phone number of a connected computer, they could use it to connect to others. With the introduction of the World Wide Web, this would change.

Depending on how old you are, you might remember the days when you couldn’t use the Internet and phone simultaneously.

The Internet was decentralized, with no central authority. It still is, if you manage to access it in its original form.

In 1989, Sir Tim Berners-Lee created the World Wide Web, the "Internet" as we know it. He did so by proposing three technologies that we still use today: HTML, URL, and HTTP.
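The three pieces still fit together the same way: a URL names a resource, HTTP is the protocol used to fetch it, and HTML describes it. A small sketch using only Python's standard library shows how a URL decomposes into those parts (the address is purely illustrative):

```python
# A URL breaks down into the components Berners-Lee's scheme defined;
# Python's standard library can take one apart without any network access.
from urllib.parse import urlparse

url = "http://example.com/history/index.html"  # illustrative address
parts = urlparse(url)

print(parts.scheme)   # the protocol used to fetch the resource (HTTP)
print(parts.netloc)   # the host serving it
print(parts.path)     # the HTML document being requested
```

That one line of text encodes where a resource lives, how to retrieve it, and which document to ask for, which is what let any computer find any page.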

Implementing these meant that all computers could understand each other, and it paved the way for Mosaic, the first widely adopted graphical web browser. This fundamental improvement in usability set the stage for commercialization.


Initially a nerd-fest, the Internet wasn't spared commercialization. This was driven by the rise of the personal computer, fuelled by advances in integrated-circuit technology. Another development supporting the rapid growth was the spread of local area networks, bringing in more and more users (and, from a company's perspective, potential customers).

In 1993, federal legislation enabled private businesses to start using the Internet for commercial purposes, and a flurry of commercial activity ensued.

As so often, when businesses start competing in a field, centralization follows. Microsoft might come to mind when thinking about the early days of the commercial world conquering the World Wide Web. By bundling Internet Explorer with Windows, the then-leading operating system, they effectively killed off competition in the browser space.

But it wasn't just businesses operating on the web that led to centralization. Today we all need internet service providers, which in some markets, such as the US, operate largely deregulated, to access the Internet, to the disadvantage of us consumers.

In the aftermath of the dot-com crash, a few powerful big tech companies emerged.

Fast-forward roughly two decades: "googling" something has become synonymous with searching the web, and 4.2 billion people use social media platforms owned by a shockingly small number of companies.

This brings us to Web 2.0. We will cover what it means and what its inherent opportunities and threats are in our next post.