[Previous: Our AI future, part 2]
Despite my skepticism of the wilder claims made about AI, I’m not dismissive of the technology. The achievements of the new AIs are genuinely impressive. They’re a paradigm shift in our understanding of what computers can do.
I don’t believe we’re going to create superintelligent machine gods that will take over the universe. However, when you look past the hype, AI still has enormous potential to change the world—just not in the ways its most fervent backers say it will.
What AI can do
Self-driving cars could be a huge safety improvement. A robot car never drives drunk, never falls asleep at the wheel, never panics or gets distracted or lets its attention wander. It can react at machine speed to sudden hazards, like a crash up ahead or a child running into the road. If we perfect the technology so it can handle edge cases—which, admittedly, is a fairly large “if”—self-driving cars could save thousands of lives a year.
Robot doctors capable of diagnosing illness from simple scans like X-rays, or even performing surgery without human help, could blow through the obstacles to health care in poor and remote regions of the world. They could be lifesavers on battlefields and in triage situations. Protein-folding and drug-discovery AIs will give us the power to design more potent, better-targeted treatments for all kinds of illnesses.
AI translation apps will allow any two people to communicate in real time, over text or in person, whether or not they share a language. It’s like Star Trek’s universal translator brought to life, helping to dissolve artificial boundaries of incomprehension and fostering peace and mutual understanding.
AI-run farms, factories, garment shops and warehouses will make it possible to produce material goods faster and more efficiently. They’ll eliminate the need for human beings to toil in these grueling, hazardous, low-paid occupations.
To be sure, not all the potential uses of AI are beneficial. Voice-cloning and generative-art programs will make it possible for bad actors to create ever more convincing deepfakes for blackmail, propaganda, and election meddling. Chatbots can be used to cheat on exams, flood the internet with disinformation, and swamp literary magazines and journals with low-quality content from people who want to be writers without putting in the work.
And that’s not even to mention the military applications. How long will it be before a robot is deciding whether to kill someone?
What AI can’t do
The current model of AI is based on neural networks—software systems designed to mimic the web of connections between neurons in the brain. Researchers train them on massive quantities of data, which reinforces some connections while pruning others. A utility function “rewards” the system for producing increasingly accurate output. This approach has proved revolutionary in computer vision, natural-language comprehension, and other fields.
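To make that concrete, here’s a toy sketch of such a training loop, written in plain NumPy rather than any real AI framework. The network, the XOR task, and every number in it are illustrative choices of mine, not details of any production system; the point is only to show connections being strengthened and weakened until the output matches a known target.

```python
# A toy sketch, not a production system: a tiny neural network trained the way
# described above. Connections (weights) start out random and are repeatedly
# nudged so the output gets closer to a known correct answer, here the XOR
# truth table, chosen purely because it is small.
import numpy as np

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # known answers

W1 = rng.normal(size=(2, 8))   # input -> hidden connections
W2 = rng.normal(size=(8, 1))   # hidden -> output connections

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(10_000):
    # Forward pass: push the inputs through the network.
    hidden = sigmoid(X @ W1)
    output = sigmoid(hidden @ W2)

    # How wrong is the network? Training exists to shrink this number.
    error = output - y

    # Backward pass: strengthen connections that helped, weaken ones that hurt.
    grad_out = error * output * (1 - output)
    grad_W2 = hidden.T @ grad_out
    grad_hid = (grad_out @ W2.T) * hidden * (1 - hidden)
    grad_W1 = X.T @ grad_hid

    W2 -= 1.0 * grad_W2
    W1 -= 1.0 * grad_W1

# After training, the output should end up close to the target [0, 1, 1, 0].
print(np.round(sigmoid(sigmoid(X @ W1) @ W2), 2).ravel())
```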
However, this approach only works when the AI’s performance can be measured against an objective metric or an already-known outcome. Game-playing programs like AlphaGo work because Go has fixed rules and precisely determined win/loss criteria. Protein-folding AIs are impressive, but the only reason it’s possible to train them is that we already know, in a general sense, how proteins fold.
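To see what “precisely determined win/loss criteria” means in code, here’s a minimal sketch using tic-tac-toe instead of Go, purely for brevity. A scoring function like this is enough for a game-playing AI to grade its own practice games; nothing comparable can be written for an open scientific question.

```python
# A toy scoring function: the tic-tac-toe equivalent of Go's precisely
# determined win/loss criteria. Boards are lists of nine cells holding
# "X", "O", or " ".
WIN_LINES = [
    (0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
    (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
    (0, 4, 8), (2, 4, 6),              # diagonals
]

def score(board, player):
    """Return +1 if `player` has won, -1 if the opponent has, 0 otherwise."""
    opponent = "O" if player == "X" else "X"
    for a, b, c in WIN_LINES:
        line = (board[a], board[b], board[c])
        if line == (player,) * 3:
            return 1
        if line == (opponent,) * 3:
            return -1
    return 0

# "X" holds the top row, so "X" scores +1. A reward signal this unambiguous is
# exactly what game-playing AIs learn from; open research questions offer
# nothing like it.
print(score(["X", "X", "X",
             "O", "O", " ",
             " ", " ", " "], "X"))  # 1
```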
These AIs are good at solving specific problems. However, we can’t build one to solve a problem if we have no way to evaluate its attempts, or even to say what a correct solution would look like. That’s why the dream of a scientist-in-a-box that can be booted up and aimed at any unsolved problem in any field, from math to physics to engineering to psychology, probably won’t happen.
There’s no such thing as a training set that teaches a neural network how to solve every problem. Although the scientific method is the same at a high level across every field, the fields differ enormously once you drill down into the details. They rely on different types of evidence, different methodologies, different subjects of study, and different criteria for recognizing an answer and demonstrating it to be true. It’s unlikely that all of this can be distilled into a single algorithmic process.
Another scenario that’s been floated is “AlphaPersuade”—a hypothetical superintelligent AI that’s so good at persuasion, it can talk anyone into anything. If corporate marketers, religious evangelists, or ethnonationalist ideologues got their hands on this thing, we’d be helpless!
This can’t exist. Let’s calm down a little.
There’s no magic sequence of words that exerts an irresistible effect on every human. An argument that convinces one person may fall completely flat with another. We’re not computers that can be hijacked by a code-injection attack (cf. xkcd’s Little Bobby Tables).
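For anyone who missed the xkcd joke, here’s a toy sketch of that kind of code-injection attack, using Python’s built-in sqlite3 module and a made-up “students” table. A computer dutifully executes whatever commands get smuggled into its input; there is no analogous input channel that reprograms a human mind.

```python
# Toy demonstration of SQL injection, using an in-memory database and
# invented names. Nothing here refers to any real system.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE students (name TEXT)")

malicious_name = "Robert'); DROP TABLE students; --"

# Vulnerable: pasting untrusted text straight into the query lets the
# attacker's own commands run as code.
conn.executescript(
    "INSERT INTO students (name) VALUES ('%s');" % malicious_name
)
tables = conn.execute(
    "SELECT name FROM sqlite_master WHERE type='table'"
).fetchall()
print("tables after injection:", tables)  # [] -- the table has been dropped

# Safe: a parameterized query treats the input as inert data, not commands.
conn.execute("CREATE TABLE students (name TEXT)")
conn.execute("INSERT INTO students (name) VALUES (?)", (malicious_name,))
print(conn.execute("SELECT name FROM students").fetchall())
```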
Besides, how would you train such a thing? By showing your test subject an argument, asking them to rank how persuasive they find it, and repeating this millions of times? If you could find someone that cooperative, you wouldn’t need an AI to get them to do your bidding.
The real hazards of AI
Although I’m bullish on AI, it presents very real perils, and I don’t mean paperclip-maximizing nanobots. The sci-fi fantasy of a superintelligence taking over the planet is a mirage that obscures the technology’s genuine dangers and distracts us from confronting them.
First of all, AIs can absorb human biases even while we assume they’re being objective. This problem shows up in so-called “predictive policing” algorithms, which output guidance on where police should patrol to head off crime. Other algorithms attempt to help parole boards decide who’s at risk of reoffending and who’s safe to release.
This sounds useful: it promises to bypass human prejudice and make justice more impartial. The problem is that AIs are only as good as the data they’re trained on. If that data comes from a biased justice system, one where police arrest minorities more often or judges sentence them more harshly than white people, then AIs will replicate that bias. Worse, they’ll disguise it behind a facade of presumed machine objectivity.
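A toy simulation makes the feedback loop visible. Every number below is hypothetical, invented purely for illustration: two neighborhoods with identical crime rates, a historically lopsided patrol pattern, and a “predictive” model that allocates next year’s patrols according to last year’s arrests.

```python
# Hypothetical numbers only: two areas with the same underlying crime rate,
# but one has historically been patrolled far more heavily.
true_crime_rate = {"Northside": 0.05, "Southside": 0.05}
patrol_share = {"Northside": 0.8, "Southside": 0.2}

for year in range(3):
    # Arrests scale with both crime and patrol presence, so the over-patrolled
    # area generates more "data" even though crime rates are identical.
    arrests = {
        area: true_crime_rate[area] * patrol_share[area] * 10_000
        for area in true_crime_rate
    }
    # The "objective" model: send patrols where past arrests were highest.
    total = sum(arrests.values())
    patrol_share = {area: arrests[area] / total for area in arrests}
    print(f"year {year}: patrol share "
          + ", ".join(f"{area} {share:.0%}"
                      for area, share in patrol_share.items()))
```

Run it and the 80/20 split never moves: the model doesn’t correct the historical skew, it launders it into a prediction.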
ChatGPT shows how these preexisting human biases seep into AI. When prompted to write hypothetical performance reviews, it tends to assume the employee’s gender based on their profession. It even writes more critical feedback for women than for men, replicating a well-known manifestation of unconscious sexism.
AI art programs like DALL-E and Midjourney also tend to depict people in professional careers as white men. Even face-recognition systems show this bias: they perform best on white male faces and misidentify women and people of color at markedly higher rates.
AI, like labor-saving technologies in general, puts people out of work, and it’s not clear if new jobs will come into being to replace the ones that disappear. It’s a capitalist’s dream: a factory or a business that runs by itself and needs no workers. If this kind of AI isn’t regulated, it will increase the already-lopsided power of capital over labor. It will enrich the 1% who own the robots, while leaving the rest of us in more dire straits than ever before.
One especially egregious example is the National Eating Disorders Association, which runs a crisis hotline. When NEDA’s staff voted to unionize, management fired them all and replaced them with a chatbot. In a dark bit of poetic justice, the chatbot had to be scrapped just a few days later, when it was found to be dispensing harmful advice to the very people it was supposed to help.
In the past, this kind of greed and lack of empathy was held in check by the necessity of hiring workers at a pay rate they would accept. As the NEDA fiasco shows, AIs can’t fully replace humans just yet. But if a day comes when business owners no longer need human workers at all, sociopathic corporate greed will run wild.
Fortunately, the solutions we need are already available. And I don’t mean putting every spare dollar into paying the salaries of AI researchers so they design good robots rather than evil ones. What we need are policies to ensure the benefits of AI accrue to everyone, rather than to a small class of capital owners.
One way to do this would be with an “AI dividend”. We could increase tax rates on any business that uses AI to make its products, and then distribute that revenue as a basic income. The more jobs AI takes over, the more this income would increase. If we ever reach a point where AI is doing all the jobs and nothing is left for humans, it would allow everyone to live in fully automated post-scarcity comfort.
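As a back-of-the-envelope illustration, and not a costed policy proposal, here’s how such a dividend might scale as automation spreads. Every figure below (the surtax rate, the output shifted to AI, the population) is a hypothetical I’ve chosen just for the example.

```python
# Hypothetical arithmetic for an "AI dividend": tax AI-made output, split the
# revenue evenly. All numbers are invented for illustration.

def ai_dividend_per_person(ai_revenue, surtax_rate, population):
    """Revenue raised by the AI surtax, divided evenly as a basic income."""
    return ai_revenue * surtax_rate / population

population = 330_000_000   # hypothetical country of 330 million people
surtax_rate = 0.20         # hypothetical 20% surtax on AI-produced goods

for automation_share in (0.05, 0.25, 0.50, 1.00):
    # Assume, for illustration, that $20 trillion of annual output shifts to
    # AI production in proportion to how many jobs the technology takes over.
    ai_revenue = 20e12 * automation_share
    dividend = ai_dividend_per_person(ai_revenue, surtax_rate, population)
    print(f"{automation_share:>4.0%} automated -> "
          f"${dividend:,.0f} per person per year")
```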
We also need international treaties, standards, and independent third-party certification bodies to guarantee that AIs are safe, predictable and free of bias before they’re released to the public or sold to businesses and governments. If the technology can’t pass these tests, it shouldn’t be made available. Ideally, every AI should be open-source so its underlying software can be examined for bugs and undesirable behavior. The output of chatbots and generative-art programs could also be stored in immutable public databases so that we can screen for spam and plagiarism.
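One sketch of how such a database could work, using invented names and nothing beyond Python’s standard library: an append-only ledger of content hashes, where each record is chained to the one before it so earlier entries can’t be quietly altered, and duplicated output is easy to flag.

```python
# A toy append-only registry for AI-generated content. The scheme, the model
# name, and the field layout are illustrative assumptions, not any real system.
import hashlib
import json
import time

ledger = []  # append-only list of records

def record_output(text, model_name):
    prev_hash = ledger[-1]["entry_hash"] if ledger else "0" * 64
    entry = {
        "model": model_name,
        "content_hash": hashlib.sha256(text.encode()).hexdigest(),
        "timestamp": time.time(),
        "prev_hash": prev_hash,
    }
    # Chaining each entry to the previous one makes tampering detectable:
    # altering any earlier record would break every hash that follows it.
    entry["entry_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    ledger.append(entry)
    return entry

def seen_before(text):
    h = hashlib.sha256(text.encode()).hexdigest()
    return any(e["content_hash"] == h for e in ledger)

record_output("An essay generated by a chatbot.", "example-model")
print(seen_before("An essay generated by a chatbot."))   # True: flag as reuse
print(seen_before("A different, human-written essay."))  # False
```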
If AI is to help humanity rather than harm us, there have to be rules. That, too, has precedent in science fiction. However, many stories along these lines, like Isaac Asimov’s three laws, postulate that the rules are built into the robots to protect us from them. Ironically, that’s the opposite of the problem we face. In any realistic scenario of a positive AI future, the rules won’t be for the robots, but for the humans.