In 1965, Gordon Moore, who would go on to co-found Intel, made a famous observation: that the number of transistors that can be packed onto an integrated circuit, and with it the raw power of computer hardware, tends to double roughly every two years. In the four decades since, Moore’s law has held true with remarkable accuracy. The technology to fabricate ever-smaller logic elements has steadily improved, leading to astounding increases in computer speed. The memory, bandwidth, and processing power available today in even an ordinary desktop machine surpass those of the most powerful computers that government and industry relied on only a few decades ago.
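To get a feel for what that doubling rate implies, here is a back-of-the-envelope sketch (the starting transistor count is an arbitrary stand-in, not a figure for any real chip): forty years of doubling every two years is twenty doublings, a roughly millionfold increase.

```python
# Back-of-the-envelope sketch of Moore's law as a doubling process.
# The starting count and time spans are illustrative assumptions,
# not figures for any particular chip.

def projected_transistors(initial_count, years, doubling_period=2):
    """Transistor count after `years`, doubling every `doubling_period` years."""
    return initial_count * 2 ** (years / doubling_period)

initial = 2_000  # assumed starting transistor count
for years in (10, 20, 40):
    print(f"After {years:2d} years: ~{projected_transistors(initial, years):,.0f} transistors")
# Forty years at this rate is 2**20, roughly a millionfold increase.
```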
Some sci-fi writers and futurists have foreseen a truly strange consequence of this progress. They anticipate that, assuming the trend of exponential growth continues, we will eventually – perhaps soon – reach the point where we can create machines with more computing power than a human brain. This innovation will lead to true artificial intelligence, machines with the same kind of self-consciousness as human beings. And reaching this point, it is believed, will trigger a technological explosion, as these intelligent machines design their own, even more intelligent successors just as we designed them. Those successors will in turn design yet more intelligent successors, and so on, in an explosive process of positive feedback that will result in the creation of truly godlike intelligences whose understanding far surpasses anything that ordinary human minds can even conceive of. This event is dubbed “the Singularity” by those who imagine it, for like the singularity of a black hole, it is a point where all current understanding breaks down. Some prognosticators, such as Ray Kurzweil (author of The Age of Spiritual Machines) think the Singularity is not only inevitable, but will occur within our lifetimes.
As you might have guessed from the title of this post, I’m not so optimistic. The Singularity, like more than a few other transhumanist ideas, has more than a whiff of religious faith about it: the messianic and the apocalyptic, made possible by technology. History has a way of foiling our expectations. The number of people who have confidently predicted the future and have been proven completely wrong is too great to count, and so far the only consistently true prediction about the future is that it won’t be like anything that any of us have imagined.
The largest immediate obstacle I see to Singularity scenarios is that we don’t yet understand the underlying basis of intelligence in anything close to the level of detail necessary to recreate it in silicon. Some of the more hopeful believers predict a Singularity within thirty years, but I think such forecasts are wildly over-optimistic. The brain is a vast and extremely intricate system, far more complex than anything else we have ever studied, and our understanding of how it functions is embryonic at best. Before we can reproduce consciousness, we need to reverse-engineer it, and that endeavor will dwarf any other scientific inquiry ever undertaken by humanity. So far we haven’t even grasped the full scope of the problem, much less outlined the form a solution would have to take. Depending on progress in the neurological sciences, I could see it happening in a hundred years, though I doubt it will come much sooner than that.
But that, after all, is just an engineering problem. Even discounting it, there’s a more profound reason I doubt a Singularity will ever occur. The largest unexamined assumption of Singularity believers is that faster hardware will necessarily lead to more intelligent machines, so that all that’s required to create a godlike intelligence is to fit more and more transistors on a chip. In response, I ask a simple question: What makes you believe the mere accumulation of processing power will produce greater understanding of the world?
Fast thinking may be a great way to generate hypotheses, but that’s the less important half of the scientific method. No matter how quickly it can think, no intelligence can truly learn anything about the world without empirical data to winnow and refine its hypotheses. And the process of collecting data about the world cannot be accelerated to arbitrary rates.
The pro-Singularity writings that I’ve read all contain the implicit and unexamined assumption that a machine intelligence with faster processors would be not just quantitatively but qualitatively better, able to deduce facts about the world through sheer mental processing power. Obviously, this is not the case. Even supercomputers like Blue Gene are only as good as the models they’re programmed with, and those models depend upon our preexisting understanding of how the world works. The old computer programmer’s maxim – “garbage in, garbage out” – succinctly sums up this problem. The fastest number-cruncher imaginable, if given faulty data, will produce nothing that can be meaningfully applied to the real world. And it follows that the dreamed-of Singularity machines will never exist, or at least will never be the godlike omnisciences they’re envisioned as. Even they would have to engage in the same process of slow, painstaking investigation that mere human scientists carry out.
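To make the “garbage in, garbage out” point concrete, here is a toy sketch with invented numbers, not drawn from any real system: a simple projectile-range calculation fed a mis-measured value of gravity gives a wrong answer about the real world, and no amount of extra processing speed will correct it.

```python
import math

# Toy illustration of "garbage in, garbage out". All values are invented
# for this example. Range of a projectile launched at speed v and angle
# theta over flat ground, ignoring air resistance.

def projectile_range(v, theta_deg, g):
    theta = math.radians(theta_deg)
    return v ** 2 * math.sin(2 * theta) / g

v, angle = 30.0, 45.0   # launch speed (m/s) and launch angle (degrees)
true_g = 9.81           # actual acceleration due to gravity (m/s^2)
bad_g = 8.5             # a faulty measurement fed to the model

print(f"With good data: {projectile_range(v, angle, true_g):.1f} m")
print(f"With bad data:  {projectile_range(v, angle, bad_g):.1f} m")
# No extra processing power fixes the second answer; only a better
# measurement of g would.
```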
This isn’t to say that artificial intelligences, if we ever create them, will be entirely useless. In virtual-reality software worlds, which are precisely defined and completely knowable, they might be able to create wonderful things. In the real world, I foresee them flourishing in the niche of expert systems, able to search and correlate all the data known on a topic and to suggest connections that might have escaped human beings. But I reject the notion that, as general-purpose intelligences, they will ever be able to far surpass the kind of understanding that any educated person already possesses.