All my life, I have been fascinated by technology, and as I have grown older it has been remarkable to watch how quickly it has advanced. What interests me even more is exploring theories about how technology will shape our future.

One week I was listening to the Technocracy podcast, hosted by my favorite podcaster, Ken Ray, when I first heard about a concept known as the Technological Singularity. I found the theory extraordinary and wanted to share it with others. According to the theory, technological advances will give birth to smarter-than-human entities that will, in turn, exponentially accelerate technological progress. The creation of the first of these superintelligent entities is referred to as the singularity.

The theory was popularized in 1993 by writer and professor Vernor Vinge in a brilliant essay titled “The Coming Technological Singularity: How to Survive in the Post-Human Era.” In the abstract, he states, “Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended.” Much like the rise of human life on Earth, this proposed singularity would elevate mankind to the next evolutionary step.

Vinge describes two primary paths by which humans might bring about this singularity: artificial intelligence and intelligence amplification.

The artificial intelligence path involves creating a computer whose intelligence surpasses our own. With the rise of AI, many people fear that such machines will take over the planet and that humans will be powerless to stop them. As a result, many have proposed safety measures to prevent such events, the most famous being Isaac Asimov’s three laws of robotics, though most of his stories depict the ways those laws fail.

The other possibility, intelligence amplification, involves creating computer interfaces that enhance our own minds until they reach superhuman levels. According to Vinge, because of the incredible complexity of recreating human intelligence from scratch, intelligence amplification may be the more likely route. Either way, Vinge argues that if the singularity is possible at all, it is inevitable: human competitiveness and the open-ended possibilities of technology will bring it about.

With the coming of the singularity, the human era as we know it will end. This does not necessarily mean that humans will be overrun by machines, as in films such as “The Matrix” or “A.I.” It means that, like the rise of human life on Earth, we will take the next evolutionary step in existence. Another consequence is that once a superintelligent entity is created, little stands in the way of it creating still more intelligent beings, resulting in an intelligence explosion unimaginable to people today. Just as someone living in a two-dimensional world could not possibly imagine a three-dimensional one, we cannot truly understand what will happen after the singularity.

While Vinge describes these possible outcomes in detail, he also makes the point that the singularity may never happen at all. One reason is that our own limited intelligence may prevent us from creating a computer smarter than ourselves. He suggests that while people might build incredibly powerful computers, we may never manage that final push into creating a truly superintelligent being.

Science fiction theories like the technological singularity make some people very nervous, but people nonetheless find them fascinating. For me, the most interesting point Vinge makes is that if the singularity is going to happen, it will happen sometime between 2005 and 2030. Whether that is good or bad, it fills me with an immense sense of both excitement and anxiety.