Book Review: The Quantum Mechanic
Summary: A compelling atheist thought experiment, wrapped inside a cleverly plotted and fast-paced tale of transhumanist fiction.
This isn't the first time I've reviewed a book written by a fellow blogger, but it's always a pleasure, and this one was a particular delight to read. The Quantum Mechanic is a novel written by the blogger D - you may know her as the author of She Who Chatters - for 2009's National Novel Writing Month.
The hero of TQM is Douglas Orange, a mild-mannered Midwest physics professor who discovers one day that he has an extraordinary power: the ability to influence the workings of reality on a quantum level through pure will. He can't change the past or foresee the future, but other than that, Douglas' powers seem to be bounded only by the limits of his imagination. As he grows more skilled in controlling them, he becomes able to do almost anything, from reading minds to teleporting objects through space to creating matter and energy out of nothing.
At first, Douglas uses his power for nothing more than some remarkably convincing stage magic. But after a visit from a certain famous magician offering a million-dollar prize, Douglas is persuaded (and wouldn't you be persuaded?) to become a vigilante superhero. Under the moniker of the Quantum Mechanic, he launches into a career of fighting crime and rescuing people from disaster, much to the consternation of politicians, police departments, and the moralist commentators of Fawkes News.
This is ground well-traveled by novels and comic books, of course. But most of those creative works fail to follow through on the logical implications of their premise, and assume that people in possession of awesome powers would use them for nothing more inventive than foiling petty crime. I'm happy to say that TQM transcends this hoary cliche, and the second part of the novel breaks into new territory. Having cured violence and war, Douglas turns his vision to grander goals, and his power launches humanity into a technological Singularity. Under the all-seeing eye of the Quantum Mechanic, disease, poverty and death become things of the past, and humanity begins to step into its birthright as explorers and settlers of the universe.
But not all is well. Just when the human race seems poised to take the final step into this-worldly paradise, ominous signs and portents begin to arise: the faithful start disappearing from the earth; the seas boil and the skies turn red as blood; and a strange new star appears in the heavens. And on the heels of these omens, humanity receives a visit from a sinister messenger straight out of the Old Testament, a menacing angel of light known only as the Entropic Engineer. Douglas' powers don't seem to work against this adversary, who, after delivering a prophecy of doom for all sinners, promises to return soon at the head of Heaven's vast army to usher in Judgment Day. It's the Singularity versus the Second Coming, as the Quantum Mechanic faces off against the Entropic Engineer in a cosmic war for humanity's eternal destiny... but is this destroying angel all that he seems?
Aside from the audaciously high-concept premise, there were three aspects of this novel that I enjoyed greatly. First of these, as you might have guessed, is its unapologetic advocacy of the atheist perspective. One of my favorite lines is early on: when Douglas denies God's existence and a heckler demands to know if he's searched the entire universe to be sure, he deadpans, "Why, yes." And there are several great dialogues between Doug and his interlocutors on faith, on meaning and purpose, on morality and harm, and on other philosophical topics where the author lays out and defends an atheist and humanist viewpoint with clarity and compelling reason.
Second, TQM accomplishes something that I haven't often seen done well: it tells an enthralling story even as society changes dramatically around its protagonists. Most of the transhumanist fiction I've read lacks the human perspective necessary for readers to identify and empathize with the characters. One could argue that this is unavoidable, since this kind of fiction by definition describes a world radically different from our own; but however necessary it is by the logic of the plot, it doesn't usually make for good storytelling. This book neatly dispenses with that problem by anchoring its plot in Douglas, who retains his fundamental humanity despite his powers, and letting us see through his eyes.
Third, even aside from its explicit advocacy of our perspective through dialogue, this entire novel advances the atheist viewpoint in a more subtle way. The basic story implicitly takes the form of a thought experiment: If you had the power to end evil and suffering, would you do it?
Of course, we have always answered yes, reasoning that an allegedly good God's failure to intervene in the same circumstances casts strong doubt on his existence. If there were a person with the power to stop evil, they wouldn't stand idly by or hide themselves away, but would take action when they saw it was needed. Philosophically, we all know this to be true. But this book vividly illustrates that argument by clothing it in story, and - at least for me - thereby made it far more persuasive than it has ever been before.
Douglas has the power to do almost anything, but he doesn't hide away from the world. He uses his power for good: he stops violence, he cures disease, he answers people's requests in obvious fashion, he shows up to respond to critics, and he acts based on a clear set of principles and not in an arbitrary or capricious manner. He acts, in short, exactly as atheists have always said a rational and benevolent god would act. And as the author shows us how the human race flourishes under his guidance, it drives home the point that evil is not - as advocates of theodicy often claim - an inherent part of the universe that can't be eliminated. Nor does doing so compromise our free will, except in the sense that people are no longer free to inflict harm and suffering on others.
This is by far the most persuasive answer to theodicy I've ever seen: not a philosophical argument pointing out its flaws in a neutral and logical manner, but a sketch of another possible world where such excuses are not needed, showing how they inevitably suffer from the comparison. And it doesn't hurt that this compelling moral is wrapped inside a slam-bang, fast-paced tale of Earth's ascent into a posthuman future, with a thoroughgoing humanist as its main character and a plot that an atheist can't help but love.
(You can buy a copy of the book from CreateSpace.)
Who Wants to Live Forever?
Last month's post "On Cryonics" outlined why I'm skeptical of that transhumanist doctrine. In today's post, I want to discuss more directly the goal which advocates of cryonics hope to attain - the achievement of immortality through technology that gives us the ability to halt or reverse the aging process.
In this case, my objection is not one of feasibility. I think it's entirely possible that we'll figure out how to do this eventually. (We already know of species, such as lobsters or giant tortoises, that show negligible senescence and that we can use as lab models.) My objection is instead along moral lines: even if we could do this, should we?
It's often said that new ideas triumph not because their advocates convince everyone, but because all their old defenders eventually die out. What, then, would be the effect on human society if we ceased to die? To put it another way, what would the effect on our society have been if immortality had been invented in the era of slavery, or before women's right to vote was recognized, or during the theocratic medieval ages when kings and popes held sway? Humanity's moral progress would have been halted forever, our existing power structure and distribution of opinions lithified by a horde of immortal bigots.
I think we can agree that it would have been a disaster if immortality had been invented during the dark ages of our past. But have we made enough progress since then to be ready? It's true that we have come far; but, I think any fair-minded person would agree, we have not come far enough. Many of the old prejudices still linger, and in some quarters their strength is virtually undiminished. And there are many more cases of bigotry and irrationality yet to be overcome, where whole groups of people are still denied the exercise of basic rights.
If immortality were invented, many of the people first in line to take advantage of it would be among those we would least want to live forever. Wealthy dictators could exert a literally endless dominion over their oppressed people. Just imagine the theocratic clergy of Saudi Arabia or Iran living forever, strangling their countries' liberty in a deathless grip of dogma; or North Korea becoming a cult state worshipping an immortal tyrant in fact and not just in ideology. Imagine racism or segregation enshrined as the eternal order of things. Up until now, natural death has forced every regime, no matter how despotic, to change eventually. But immortality could usher in tyranny that was literally never-ending. This is not a utopian vision, but the worst kind of dystopia imaginable.
Another point to consider is that the advent of planet-wide immortality would require us to give up something that is a deeply built-in part of our natures - the drive to have children. Yes, I'm aware that some transhumanists talk rapturously of settling other planets and expanding into the universe, but we should face up to the fact that voyaging on such a grand scale, leaving everything familiar behind, is never going to appeal to any more than a small fraction of the population. And even if it did, the economics of space travel are likely to remain prohibitive. We might be able to send small groups of colonists to other worlds to establish societies there, but it will probably never be a feasible way of emptying the planet of large numbers of people. Unless the advent of immortality brought with it drastic changes in human psychology, a world of immortals would soon become ruinously crowded and unsustainable, leading in short order to resource wars and the collapse of society.
And finally, what would be the effect on our individual lives? An extended life, with more opportunities to take in all that life has to offer, I certainly would welcome. But an endless life, in the long run, would lead inevitably to terminal boredom and despair. I find it more ethical and more rational to live life for a time, take advantage of its bounty, and then make way for a new generation for whom all the wonders of the world are brand-new. Who really wants to live forever?
On Cryonics
Last summer I wrote a post, "Why I'm Skeptical of the Singularity", which gave some reasons for doubting that godlike machine intelligences will ever come into being. Today I'll discuss another idea popular among enthusiasts of transhumanism, namely life extension through cryonics. Here, too, I intend to offer a qualified skepticism.
Overcoming Bias presents a strong case for cryonics, in a post which pleads with readers to sign up for the process. I'll use that post as my foil. My own viewpoint, meanwhile, hasn't changed significantly since I first touched on the topic in "Life Is Fleeting":
While in the remote future it is a very real possibility that we will unlock the key to personal immortality, for the time being such rosy scenarios are more science fiction than science fact... The most advanced freezing technology in existence today still causes massive cellular damage, irreversible in all except the most fantastic scenarios of what future technology will be capable of. In essence, this is little more than a materialist version of Pascal's Wager.
I realize comparing cryonics to Pascal's Wager is likely to raise some hackles, but the comparison is unavoidable when advocates of cryonics so often defend the idea using Pascal's Wager-like logic: If you bet on cryonics and lose, you lose nothing, since you'd have died anyway; but if you win, you might wake up in a future where science has perfected immortality! Isn't this a potentially infinite payoff with zero risk?
But just as Pascal's argument overlooked the problem of choosing the wrong religion and ending up condemned, cryonics overlooks the possibility that the future, rather than being better, may be worse. What if the future is a 1984-style dictatorship or post-apocalyptic anarchy, or is run by malevolent superintelligences (like the vindictive supercomputer AM from Harlan Ellison's classic story I Have No Mouth and I Must Scream) that take pleasure in tormenting us? What if the future revives cryonically frozen human beings only to put them on trial for the crimes of our era? (I find this last possibility the most plausible of the four.) The potential payoffs of cryonics, needless to say, become far more complicated if we do not assume that we can only wake up in a world far better than the one we left.
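The shift this objection makes can be put in simple decision-theoretic terms. The sketch below is mine, not anything from the post I'm responding to, and every probability and utility in it is a made-up illustrative number; the point is only that admitting hostile futures with nonzero probability can flip the sign of the wager.

```python
# A toy expected-value model of the cryonics wager. All probabilities
# and utilities are illustrative placeholders, not real estimates.
def expected_value(outcomes):
    """Sum of probability-weighted utilities over possible futures."""
    return sum(p * utility for p, utility in outcomes)

# The wager as its advocates frame it: revival either works (a huge
# payoff) or it doesn't (you were dead anyway, so utility zero).
naive = [(0.05, 1000), (0.95, 0)]

# The same wager once hostile futures are admitted: revival into a
# dystopia or a show trial carries a large negative utility.
with_bad_futures = [(0.02, 1000), (0.03, -1000), (0.95, 0)]

print(expected_value(naive))             # positive under these numbers
print(expected_value(with_bad_futures))  # negative under these numbers
```

With these (arbitrary) numbers, the "zero risk" wager turns into a losing bet; the real dispute is over the relative likelihood of good and bad futures, which the naive framing simply assumes away.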
Second: Even disregarding this problem, why will the future want to revive us? Any plausible scheme for resurrecting frozen humans - nanotechnological repair, whole-brain uploading, cloning new bodies - is certain to be a difficult and resource-intensive process. What will they want us for? The future is not likely to be short of people, and even if it were, there are easier ways of producing new ones. And by the time revival is perfected, if it ever is, it's probable that the cryonically preserved will have no living relatives who feel any particular kinship for them.
For historical research? But if cryogenically frozen people survive into the distant future, then it's almost certain that the Internet and other records from our time will have survived as well; information is more easily preserved than entire people. The flood of news stories, blog entries, and other chronicles from our era will provide as much data as future historians could ever want and would make the existence of live people superfluous. It may be done the first few times for the sake of novelty, but I can't see it happening much beyond that, especially if there are thousands and thousands of frozen people.
Third: Even if the future is a benevolent one and is willing to revive us, will the person who's revived really be me? Even staunch cryonics advocates admit that the damage done to brain tissue by freezing is irreversible with any current technology. And the revival process, no matter how it works, is likely to cause additional damage.
To truly revive a person, it would be necessary to preserve the incredibly delicate, submicroscopic connections between neurons in exactly the same state as when the person was alive. If neurons die or their synapses are severed during recovery - even if only a small percentage suffer these side effects - the result would be massive brain damage. Even if future technology could repair this damage, it could only operate probabilistically in terms of restoring neural connections - after all, there's no map of the brain to tell which neurons are supposed to connect to which other neurons - with the result that the person revived might be missing memories, might have a drastically altered personality or character traits. Would it really be better to be revived in such a fragmented state? Would the person who's revived even be me, or would he more justly be considered a different person altogether?
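To get a feel for what "only a small percentage" means at the brain's scale, it helps to run the arithmetic. The figure of ~10^14 synaptic connections is a commonly cited order-of-magnitude estimate, not a measured value, and the sketch below is purely back-of-the-envelope.

```python
# Order-of-magnitude sketch: how many connections "a small percentage"
# of damage destroys, assuming ~10^14 synapses in a human brain
# (a commonly cited ballpark figure, not a precise measurement).
SYNAPSES = 10 ** 14

def severed(loss_fraction, total=SYNAPSES):
    """Connections lost at a given fractional damage rate."""
    return int(loss_fraction * total)

# Even 1% damage severs on the order of a trillion connections:
print(f"{severed(0.01):,}")
```

At this scale there is no such thing as trivially small damage; a loss rate that would sound negligible in any other context still destroys connections by the trillion.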
Finally, it's worth asking whether investing in cryonics is the best use of our society's resources. While people alive today are still suffering and deprived of life's basic needs, it strikes me as selfish for a wealthy, privileged few who've already enjoyed long and happy lives to grasp after immortality. In my view, it's better to accept that we all get one chance at life, to live it to the fullest while we possess it, and when we're gone, to give away to others whatever we leave behind so that they can enjoy the same opportunity.
Why I'm Skeptical of the Singularity
In 1965, Intel co-founder Gordon Moore made a famous observation: that the speed of computer hardware (to be precise, the number of transistors that can be packed onto an integrated circuit) tends to double every two years. In the four decades since, Moore's law has held true with remarkable accuracy. The technology to fabricate ever-smaller logic elements has steadily improved, leading to astounding increases in computer speed. The memory, bandwidth, and processing power available today in even an ordinary desktop machine surpasses the most powerful computers used by the government and industry of yesterday.
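To see how relentless a fixed doubling period is, it helps to run the arithmetic. The starting figure below is a hypothetical round number on the scale of the earliest commercial microprocessors, chosen only to make the exponent concrete.

```python
# Illustrative compounding under Moore's law: a strict doubling every
# two years, starting from a hypothetical 2,300-transistor chip
# (roughly the scale of the earliest commercial microprocessors).
def transistors(start_count, years, doubling_period=2):
    """Projected count after `years` of exponential doubling."""
    return start_count * 2 ** (years / doubling_period)

# Forty years is twenty doublings, i.e. roughly a millionfold increase:
print(f"{transistors(2_300, 40):,.0f}")
```

Twenty doublings multiply the starting count by 2^20, or about a million, which is why four decades of the trend turn thousands of transistors into billions.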
Some sci-fi writers and futurists have foreseen a truly strange consequence of this progress. They anticipate that, assuming the trend of exponential growth continues, we will eventually - perhaps soon - reach the point where we can create machines with more computing power than a human brain. This innovation will lead to true artificial intelligence, machines with the same kind of self-consciousness as human beings. And reaching this point, it is believed, will trigger a technological explosion, as these intelligent machines design their own, even more intelligent successors just as we designed them. Those successors will in turn design yet more intelligent successors, and so on, in an explosive process of positive feedback that will result in the creation of truly godlike intelligences whose understanding far surpasses anything that ordinary human minds can even conceive of. This event is dubbed "the Singularity" by those who imagine it, for like the singularity of a black hole, it is a point where all current understanding breaks down. Some prognosticators, such as Ray Kurzweil (author of The Age of Spiritual Machines) think the Singularity is not only inevitable, but will occur within our lifetimes.
As you might have guessed from the title of this post, I'm not so optimistic. The Singularity, like more than a few other transhumanist ideas, has more than a whiff of religious faith about it: the messianic and the apocalyptic, made possible by technology. History has a way of foiling our expectations. The number of people who have confidently predicted the future and have been proven completely wrong is too great to count, and so far the only consistently true prediction about the future is that it won't be like anything that any of us have imagined.
The largest immediate obstacle I see to Singularity scenarios is that we don't yet understand the underlying basis of intelligence at anything close to the level of detail necessary to recreate it in silicon. Some of the more hopeful believers predict a Singularity within thirty years, but I think such forecasts are wildly over-optimistic. The brain is a vast and extremely intricate system, far more complex than anything else we have ever studied, and our understanding of how it functions is embryonic at best. Before we can reproduce consciousness, we need to reverse-engineer it, and that endeavor will dwarf any other scientific inquiry ever undertaken by humanity. So far we haven't even grasped the full scope of the problem, much less outlined the form a solution would have to take. Depending on progress in the neurological sciences, I could see it happening in a hundred years - I doubt much before that.
But that, after all, is just an engineering problem. Even discounting it, there's a more profound reason I doubt a Singularity will ever occur. The largest unexamined assumption of Singularity believers is that faster hardware will necessarily lead to more intelligent machines, so that all that's required to create a godlike intelligence is to fit more and more transistors on a chip. In response, I ask a simple question: What makes you believe the mere accumulation of processing power will produce greater understanding of the world?
Fast thinking may be a great way to generate hypotheses, but that's the less important half of the scientific method. No matter how quickly it can think, no intelligence can truly learn anything about the world without empirical data to winnow and refine its hypotheses. And the process of collecting data about the world cannot be accelerated to arbitrary rates.
The pro-Singularity writings that I've read all contain the implicit and unexamined assumption that a machine intelligence with faster processors would be not just quantitatively but qualitatively better, able to deduce facts about the world through sheer mental processing power. Obviously, this is not the case. Even supercomputers like Blue Gene are only as good as the models they're programmed with, and those models depend upon our preexisting understanding of how the world works. The old computer programmer's maxim - "garbage in, garbage out" - succinctly sums up this problem. The fastest number-cruncher imaginable, if given faulty data, will produce nothing of meaningful application to the real world. And it follows that the dreamed-of Singularity machines will never exist, or at least will never be the godlike omnisciences they're envisioned as. Even they would have to engage in the same process of slow, painstaking investigation that mere human scientists carry out.
This isn't to say that artificial intelligences, if we ever create them, will be entirely useless. In virtual-reality software worlds, which are precisely defined and completely knowable, they might be able to create wonderful things. In the real world, I foresee them flourishing in the niche of expert systems, able to search and correlate all the data known on a topic and to suggest connections that might have escaped human beings. But I reject the notion that, as general-purpose intelligences, they will ever be able to far surpass the kind of understanding that any educated person already possesses.
One of the most optimistic - perhaps excessively optimistic - philosophies that take shelter under the umbrella terms of atheism and humanism is transhumanism. Transhumanism is a set of loosely associated philosophies which all share a belief in the desirability of transcending the biology of the human condition through technology.
At the marginally more plausible end of the scale, this may entail feats of technology such as preserving the terminally ill by cryogenically freezing their bodies, to be thawed out in a future age once a cure is discovered; or the invention of genetic or other treatments that arrest or reverse the aging process, allowing people to live greatly extended lives. On the far more enthusiastic end of the scale, other transhumanist ideas include nanotechnology that can repair or reconstruct the human body from the inside, or the ability to digitally scan and simulate our brains to create robot doubles that would think, believe and act just like the originals. (Whether these "uploaded" beings would be the originals is a matter of considerably more debate.)
Of course, none of these wonders are presently possible. At the moment, even the most modest transhumanist ideas are little more than pipe dreams. On the other hand, the accelerating growth of technology has already allowed us to transform the human body in ways that previous eras would have found inconceivable. We can already transplant limbs and organs from one person to another, or in some cases, replace them with mechanical duplicates. We have nascent technologies that can read brainwaves or the firing of neurons and transform them into robotic action. Our ability to treat disease through genetic therapy is still rudimentary, but there can be little doubt that it will improve. In the far future, some of the transhumanist dreams may be possible. What should our response be to this kind of technology?
The first thing I have to say is that, even if these ideas become possible in the future, there are more basic problems we should confront first. As long as human beings still suffer from poverty, malnutrition and treatable disease, we should concentrate on eradicating those. It's already an injustice that some people have access to vastly better medical care than others because of the circumstances of their birth. It would be a far greater injustice if rich people of the First World had access to life-extending innovations while others still suffered needlessly from easily curable diseases. Before we start work on raising the human baseline, in my opinion, we should ensure that everyone is lifted up to that baseline.
But if this could be done, other problems would soon follow. If human beings could live forever, then we would inevitably face a severe problem of overpopulation, since all life, of whatever form, must consume resources. The Earth is only so big, and the natural resources we can tap, however carefully managed, are finite. It seems the physics of our universe conspires to prevent interstellar travel on any large scale, and although it's conceivable that those limits will be circumvented, we can't just assume that this will be the case. If this is so, then it might be argued that humans have a duty to die - at the very least, to cease life-extending treatments after a certain period and allow natural aging to resume.
The last issue concerns the "uploading" of human minds into software. While I find this idea almost too strange to countenance, there's nothing about it that's physically impossible. If it were ever feasible, it would raise deep questions about the nature of personal identity, which I hardly feel qualified to address. But, I have to point out, it could not create two identical individuals - at least not for long. If an uploaded person is truly a mind, they will have the same capacity for learning, reflection and personal growth. A mind modeled in software, therefore, could not help but diverge from the original (which will inevitably have different experiences) over time. Eventually, we would have not one but two distinct people. Perhaps uploading, even if it were one day possible, would not be a way to create identical copies of ourselves, but merely a very new and unusual way to reproduce.
To Be As Gods
I have to admit, I cringe when I read quotes like this:
Max may be a long way from his old home, but he plans on going a lot further than America. Extropianism is a "rational transhumanism", he explains. There may not be any supernatural force in the universe, but pretty soon, suggests More, once we get our brain implants and robot bodies working, we will be as gods.
The linked article is about Max More, a philosopher who advocates transhumanism - the idea that we can use technology to transcend the present limits of human biology. Like most transhumanists, More advocates a potpourri of wildly optimistic ideas: freeze ourselves through cryogenics, make our bodies immortal, digitize and upload our minds to live in virtual worlds or robot bodies.
As far as I'm concerned, most of these speculations so far outstrip the limits of what is currently possible that there's little point even thinking about them. In the very distant future, perhaps, these will be issues to seriously consider. For now, I think we should be concentrating on the many more pressing problems that can be alleviated by current technology. Once people are no longer dying from malnutrition or malaria, maybe then we can start considering how to make them immortal. In the meantime, most of this is just unconstrained fantasizing that distracts us from the things that are truly important.
However, it was something else about this article that bothered me more - the throwaway line about how "we will be as gods". Nothing could appeal to me less. Frankly, I don't want to be like the gods.
Consult just about any piece of mythology you wish, and you'll find that gods are generally not very nice creatures. They're jealous, sadistic, manipulative, capricious, petty, possessing overdeveloped egos and hair-trigger tempers, and hateful toward those who are different. They're swift to anger, slow to forgive, and perpetually obsessed with whether people are groveling enough or paying them sufficient tribute. When it comes to dealing with those who disobey, violence is typically their first, last, and only resort. In short, they exemplify all the worst traits of the humans that created them, and few if any of our best traits. Why on earth would we want to be like them?
We are human beings. No matter how much knowledge we gain, no matter how much power we gain, we will always be human beings. We should not aspire to be gods, or anything else that we are not. We should aspire, instead, to be the best human beings we possibly can be - to cultivate what is best in our nature and encourage it to flourish. For all the evil that we have done, human beings are also capable of astonishing acts of mercy and benevolence. These are traits that are conspicuously absent in most of the stories of gods we read. We do not need to be forever aping our old mythologies; we have the ability to transcend their narrow perspective, and in many ways, we already have.