Is the Singularity coming?

Dave Bowman, in 2001: A Space Odyssey, interacts with the HAL 9000 computer.

Stephen Hawking made headlines at the end of 2014 when, in a BBC interview, he said that we should be very wary of developing “full artificial intelligence” as it “could spell the end of the human race.”

His doomsday musings were hardly original. SpaceX’s Elon Musk had said the same thing earlier that year, warning that AI is “potentially more dangerous than nukes.”


The worrisome idea of computers possessing greater-than-human intelligence, coupled with a sudden independent consciousness, was first termed “the Singularity” back in 1993, in a paper by the computer scientist Vernor Vinge. And while his predictions of vast improvements in computing merely mirrored the foresight of others – like the regular doubling of computing power foreseen by Intel co-founder Gordon Moore in 1965 – Vinge believed those improvements would lead to “change comparable to the rise of human life on Earth.”

As we all know, computers already control and facilitate much of our daily life, from banking to robotic automobile assembly, and no one wants to return to the old days of manual drudgery for menial tasks like repetitive spot welding. We’re even used to machines understanding commands and correctly responding to questions. Major advances are reported annually. Last year a team in Berkeley, California applied a powerful artificial intelligence technique known as “deep learning” that lets a robot quickly learn new tasks with only a small amount of training. Their robot rapidly learned to screw the cap on a bottle, even figuring out the need to first apply a slight backward twist to find the thread before turning it the correct way.

The fear generated by the Singularitarians is that artificial intelligence will someday reach a point of complexity where the machines become self-aware. It is this trait that produces the sci-fi fantasies of machines acting for their own purposes in a way that bypasses human control.

We have, of course, seen this theme in the Terminator film series, Westworld (where a gunslinging robot runs amok at a theme park), and in 2001: A Space Odyssey. But there’s a very clear and spooky distinction that arises with the Singularity. It is one thing for computers to screw up in some fashion that causes us trouble. It is quite another for them to gain perception.

The self-aware creepy business is given credence because it’s promulgated by a few reputable authorities, such as Cornell University computer engineer Hod Lipson. He has pointed out that as computers grow ever more complex, we will increasingly have to design them to handle split-second problems by adapting and making decisions on their own. As machines get better at learning how to learn, Lipson believes it invariably “leads down the path to consciousness and self-awareness.”

This brings up an important issue: What is the basis of consciousness? With supercomputers improving their capabilities, and speeds of four exaflops – four quintillion operations per second – expected by 2020, might we actually arrive at the Singularity, the amazing event predicted by people like futurist Raymond Kurzweil, the man who designed the first text-to-speech synthesizer? In his 2005 book, The Singularity Is Near: When Humans Transcend Biology, Kurzweil flat-out predicted that the first computer will become self-aware by 2045. After the arrival of this dreaded Singularity, we humans and animals will be sharing Earth with another intelligence, possibly forever.

Needless to say, all this catches the notice of everyone who knows that the external world and consciousness are linked, if not correlative. So when it comes to predictions of sentient machines, a little skepticism may be warranted. We’ve never seen inanimate material suddenly come to life. Even if future computer brains are designed to more closely match the architecture of ours, why should that bring the silicon entity to true self-awareness? As the Dutch computer scientist Edsger W. Dijkstra, winner of the 1972 Turing Award, put it: “The question of whether a computer can think is no more interesting than the question of whether a submarine can swim.”

After all, there is a huge functional gap between the human and the computer mind, and even comparing performance levels is an apples-to-oranges affair. Computers can search vast stores of data and call it up with an efficiency far beyond anything human brains can accomplish. But they fail at some of the most stone-simple human tasks, like understanding the nested structures in the language of someone trying to convey subtle concepts.

But again, lurking behind any capability comparisons is the bedrock issue of what is involved in an entity being conscious. So far as science is concerned, we must still confess that we simply do not know. It is even possible that the universe is a single consciousness, despite the appearance of countless centers of awareness. When the great physicist Erwin Schrödinger said, “Multiplicity is only apparent, in truth there is only one mind,” it may have seemed illogical, even ridiculous. But it highlights how little we truly know when it comes to the most intimate aspect of reality: the act of being aware.


This week’s column was adapted from a chapter in Bob’s newest book, Beyond Biocentrism, co-authored with Robert Lanza, M.D.
