by Adam Lee on June 12, 2023

[Previous: Our AI future, part 1]

The history of AI is a history of people getting way too enthusiastic.

AI researchers have been making excessively optimistic predictions for decades. In 1960, computer scientist Herbert Simon predicted that “machines will be capable, within twenty years, of doing any work that a man can do” (source).

AI pioneer Marvin Minsky said in 1970, “In from three to eight years we will have a machine with the general intelligence of an average human being” (source). Other people working on AI at the time thought Minsky was too optimistic, but only a little: the same article says that “‘give us 15 years’ was a common remark.”

More recent predictions have also overshot the mark. Robotics expert and futurist Hans Moravec predicted in 2008, “By 2010 we will see mobile robots as big as people but with cognitive abilities similar in many respects to those of a lizard,” capable of doing household chores like dusting and taking out the garbage. Automakers like Ford, GM, Nissan and Volvo promised fully self-driving cars by 2020.

Last but not least, the inventor Ray Kurzweil has been forecasting the curve of future technological progress for three decades. Some of his predictions were dead-on: high-speed wireless networks, small wearable computers, voice-controlled machines and text-to-speech, medical teleconferencing and working from home, to name a few. However, others were flops.

In 1999’s The Age of Spiritual Machines, Kurzweil predicted that driverless cars would be in wide use by 2009 and humans would be prohibited from driving by 2019. He also predicted that by 2019, we’d have fully immersive virtual reality including haptic (touch-based) feedback good enough for medical exams or sex; ubiquitous household robots; poverty eliminated and average life expectancy increased to 100 through AI-assisted economic growth; most education delivered by simulated teachers; and brain scans underway to transfer the architecture of the human brain into software. (Here’s the source for these predictions.)

Temper our predictions with skepticism

Belief in the Singularity—the glorious future where AGI will take over the world and solve all our problems for us—has similarities to religion. Not least of these is that many of its advocates use Pascal’s Wager logic to defend it. Some of them argue that we should pour every dollar we can spare into AI research because even a tiny chance of an infinitely good outcome is worth any price.

The other similarity is their high confidence in what the future holds, despite a track record of error. Christian believers say that Jesus is coming back any day now, and he’ll take over the world and solve all our problems for us. And they’ve kept making that prediction for centuries, while sweeping previous failed prophecies under the rug.

The fact that so many past predictions crashed and burned should give us a healthy sense of skepticism when dealing with the latest batch. Anyone who forecasts (yet again!) that superintelligent AI is going to appear soon should, at the very least, bear a heavy burden of proof to explain why this time is different.

We may think that our technological advances have brought us to the cusp of AGI, but computer scientists of past decades thought the same thing. Some of the new AI programs seem tantalizingly human-like, but that may be a mirage. Unless we know for certain what it takes to create intelligence—and we don’t!—we have no way of knowing how close we actually are.

Also, when encountering a new prediction, we should ask how disinterested it really is. When AI researchers say that superintelligence is inevitable, and the first investors to pour money into it will gain an unbeatable head start over their rivals… let’s just say they may have self-serving motives for wanting others to believe this.

Intelligence doesn’t come from an armchair

You might say that, even if specific date predictions are wrong, we’re sure to create true artificial general intelligence eventually. I agree this is possible. However, I don’t believe the “intelligence explosion” or Singularity scenario as envisioned in fiction is going to happen.

To me, the most compelling argument against the intelligence-explosion hypothesis is this: contrary to laughable claims that an AGI could deduce general relativity from three video frames of a falling apple, greater thinking speed doesn’t automatically confer superior ability to understand the world.

Granting for the sake of argument that AGI can exist, I can believe it would be good at hypothesizing. An ultra-fast mind, if not preprogrammed with knowledge of physical laws, could watch a video and come up with billions of conjectures to explain what might be happening.

But that’s the less important half of science. The more important half is testing your ideas to see which ones hold up and which ones are disproved. In short, you have to do experiments. And you can’t accelerate that to arbitrary speed, no matter how fast you can think.

Adding more processors might make an AI able to invent hypotheses faster, but it doesn’t increase the speed at which experiments can be performed to test them. Thus, the pace of “superintelligence” will always be gated by what’s possible in the physical world.
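
To see why, treat discovery as two serial phases: thinking, which more compute can accelerate, and experiments, which run at the speed of the physical world. Here is a minimal sketch in the spirit of Amdahl’s law, with invented numbers purely for illustration:

```python
# Minimal sketch of why experiments gate discovery, in the spirit of
# Amdahl's law. Both time figures are invented for illustration.
THINKING_HOURS = 1_000     # baseline time spent generating and refining hypotheses
EXPERIMENT_HOURS = 1_000   # wall-clock time the physical experiments take

def total_hours(speedup: float) -> float:
    """Total discovery time when thinking speeds up but experiments don't."""
    return THINKING_HOURS / speedup + EXPERIMENT_HOURS

for speedup in (1, 10, 1_000, 1_000_000):
    print(f"{speedup:>9,}x faster thinking -> {total_hours(speedup):,.1f} total hours")

# Output:
#         1x faster thinking -> 2,000.0 total hours
#        10x faster thinking -> 1,100.0 total hours
#     1,000x faster thinking -> 1,001.0 total hours
# 1,000,000x faster thinking -> 1,000.0 total hours
#
# The total never drops below the 1,000 hours the experiments take,
# no matter how large the speedup gets.
```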

It will also be constrained by the same problems of bad data and faulty experiments that trip up human scientists. Computer engineers call it the GIGO problem: Garbage In, Garbage Out. The smartest AI possible, if its training set is poisoned with faulty or fraudulent experimental data, won’t be able to produce true theories about the world.
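
A toy example makes the GIGO point concrete. This is my own minimal sketch, with invented numbers: estimating a physical constant by least squares, where a handful of fabricated measurements skews the answer no matter how carefully the fit is computed.

```python
import numpy as np

# Toy GIGO demonstration: estimate a spring constant k from Hooke's law
# (force = k * displacement) by least squares. The true k here is 2.0.
rng = np.random.default_rng(0)
x = np.linspace(0.1, 1.0, 50)                # displacements
y = 2.0 * x + rng.normal(0, 0.01, x.size)    # honest but slightly noisy data

# Least-squares slope through the origin: k = sum(x*y) / sum(x*x)
k_clean = np.sum(x * y) / np.sum(x * x)

# Now poison 10% of the measurements with fabricated values.
y_bad = y.copy()
y_bad[::10] = 5.0
k_poisoned = np.sum(x * y_bad) / np.sum(x * x)

print(f"k from clean data:    {k_clean:.3f}")    # ~2.00
print(f"k from poisoned data: {k_poisoned:.3f}") # ~2.5: confidently wrong
```

More compute only recovers the wrong answer faster; the fit is only as good as the measurements behind it.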

The story of AI finding a new antibiotic is an instructive example. A deep-learning algorithm was trained on the structures of existing antibiotics. Armed with this information, it helped find a compound that can kill a multidrug-resistant pathogen, Acinetobacter baumannii. This is a huge achievement and a genuine win for AI-guided research.

However, the winning molecule didn’t just drop out of the computer. The AI identified hundreds of candidate molecules, and the researchers then had to do weeks of experimental work to test them for safety and effectiveness:

First, they used a technique called high-throughput drug screening to grow Acinetobacter baumannii in lab dishes and spent weeks exposing these colonies to more than 7,500 agents: drugs and the active ingredients of drugs. They found 480 compounds that blocked the growth of the bacteria.

…They then had the model screen more than 6,000 molecules, which Stokes said the AI was able to do over the course of a few hours.

They narrowed the search to 240 chemicals, which they tested in the lab. The lab testing helped them whittle the list to nine of the best inhibitors of the bacteria. From there, they took a closer look at the structure of each one, eliminating those they thought might be dangerous or related to known antibiotics.
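
Expressed as a sketch, the division of labor looks something like this. The stage counts come from the quoted account; the names and the random placeholder scores are hypothetical stand-ins, not the team’s actual code:

```python
import random

# Schematic of the human/AI funnel in the quoted account. Stage sizes come
# from that account; everything else is a hypothetical stand-in (random
# numbers in place of real lab assays and a real trained model).
random.seed(0)

# Humans, weeks of work: screen 7,500 agents against the bacterium,
# finding 480 that block growth. These become the training data.
screened = 7_500
training_actives = 480

# AI, a few hours: score a separate library of 6,000+ molecules.
library = [f"candidate_{i}" for i in range(6_000)]
scores = {mol: random.random() for mol in library}  # stand-in for model output

# Humans, weeks of work: lab-test the model's top 240 predictions,
# whittle them to the 9 best inhibitors, then inspect each structure
# by hand for toxicity or similarity to known antibiotics.
top_240 = sorted(library, key=scores.get, reverse=True)[:240]
best_9 = top_240[:9]  # placeholder: the real winnowing happens at the bench

print(f"{screened} screened -> {training_actives} actives -> "
      f"{len(top_240)} predictions -> {len(best_9)} finalists")
```

The model occupies one fast stage in the middle; the slow stages on either side are human lab work.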

This is what AI-aided science of the future will look like. It won’t be robot oracles proclaiming the answers for us while we lounge on our couches. It will be humans and computers working together to supplement each other’s capabilities.

Trends aren’t laws

As evidence of an imminent technological explosion, Singularity believers point to the law of accelerating returns. They observe, correctly, that the pace of technological progress has been accelerating over time, as each new discovery builds on previous knowledge.

It took thousands of years to go from the Stone Age to the Industrial Revolution, but only a few decades from the first airplane flight to the Moon landing. The speed of change that we’re accustomed to would have bewildered our ancestors. We’re ascending an exponential curve, even if we don’t realize it.

Singularity advocates extend this trend from the past into the future. They argue that if this exponential growth continues, we’ll soon be living in an unimaginably advanced world. Within this century, decades’ or centuries’ worth of change will take place every year. GDP will grow to effectively infinite levels, ending poverty and trivially satisfying all human wants. They hold that AI is the technology which will make this possible.

However, there’s a problem. The “law” of accelerating returns isn’t a law, not in the same sense as the law of gravity. It’s a fallacy to conclude that, just because you can draw a line on a graph which matches past data points, you’ve proven that the trend will continue indefinitely.

Bacteria can multiply every twenty minutes. If you measured this rate and made a naive extrapolation, you might conclude that it would only be a few days or weeks until they consumed all other life and had a mass equal to all organic material on Earth. Of course, this doesn’t happen because competition and resource scarcity constrain their growth rate.
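
The arithmetic behind that extrapolation is easy to check. Here is a minimal sketch, assuming a one-picogram cell and a round 10^18 grams for Earth’s organic material (both illustrative figures):

```python
import math

# Naive extrapolation: one bacterium doubling every 20 minutes, ignoring
# all resource limits. Round illustrative numbers, not real biology.
CELL_MASS_G = 1e-12      # ~1 picogram per bacterium (assumption)
EARTH_BIOMASS_G = 1e18   # rough order of magnitude for Earth's organic matter
DOUBLING_MIN = 20

doublings = math.log2(EARTH_BIOMASS_G / CELL_MASS_G)
hours = doublings * DOUBLING_MIN / 60
print(f"{doublings:.0f} doublings -> {hours:.0f} hours (~{hours/24:.1f} days)")
# 100 doublings -> 33 hours (~1.4 days)
```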

We’re in the same position as those bacteria. It’s true that science has produced rapid technological progress, but we can’t grow forever on a finite planet—especially if we continue to rely on extractive models of production and consumption. Eventually, we will hit limits and exponential growth will come to a halt. (The term for this is an S-curve.)
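
Adding a resource ceiling to the same arithmetic produces the S-curve: growth that looks exponential early on flattens as it nears the limit. A minimal sketch using the standard logistic model, with an arbitrary carrying capacity:

```python
# Logistic (S-curve) growth: exponential at first, flat at the ceiling.
# dN/dt = r * N * (1 - N/K); r and K are arbitrary illustrative values.
r, K = 2.0, 1e9    # growth rate per unit time, resource ceiling
n, dt = 1.0, 0.01  # starting population, Euler time step

for step in range(3001):
    if step % 500 == 0:
        print(f"t={step * dt:5.1f}  population={n:12.0f}")
    n += r * n * (1 - n / K) * dt

# Early rows grow on an exponential schedule; later rows stall near
# K = 1e9, no matter how long the simulation runs.
```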

No one can say how far in the future this point is. We might have centuries left to grow, or it might be just a few years from now. It depends on how far the laws of physics allow technology to advance, and how efficiently we can use available resources.

Similarly, there may be natural limits to intelligence. Singularity scenarios assume that intelligence can increase without bound, but this may not be true. We may reach a point at which AI ceases to improve, no matter how much computational capacity it has or how much training data we pour into it. That ceiling could sit well below the level a Singularity requires. As an analogy, we can build vehicles that travel faster than anything in nature (railroads, race cars, airplanes, rockets), but their speed isn’t increasing exponentially every year as technology improves. Just as with the bacteria, the laws of physics set natural limits that you eventually run into.

The bottom line is, we don’t know what is and isn’t possible. We should bear that in mind and maintain an attitude of humility, rather than acting as if we know for sure what’s coming next. The future has surprised us at every turn, and there’s no reason to believe this time will be different.