Is It Coming to Get You?
Who’s afraid of AI? Too many of us. Particularly since Boeing’s computerized pilot started flying airliners into the ground. But our fears have less to do with the dangers of artificial intelligence than with the limitations of natural intelligence – the circuits in our brain that keep us worried about imaginary threats.
It’s not that the AI doomsayers are stupid. Some of them are quite smart. Elon Musk, who called AI “far more dangerous than nukes,” knows enough rocket science to send a SpaceX payload into orbit. The late Stephen Hawking, who warned that AI “could spell the end of the human race,” knew his cosmology. But they aren’t in the business of building AI. The people who do that have a hard time imagining a robot capable of controlling one screaming toddler, much less taking over the world.
Yes, robots will keep getting smarter, but even superintelligent machines won’t fulfill the fearmongers’ nightmares. Let’s consider the supposed array of threats from AI:
AI will cause mass unemployment.
It’s true that robots are replacing workers in factories and other industries. A $100,000 machine named Flippy went to work grilling burgers at a fast-food chain in California last year. Robots are roaming the aisles of Walmart tracking what’s on the shelves, and may soon replace delivery drivers. New technology has been displacing workers for centuries, and yet we somehow keep finding new work to do. Nearly everyone used to be a farmer, and now nearly everyone has a different job.
As machines relieve us of drudgery and satisfy our basic needs, we constantly discover new needs that machines can’t meet. We don’t have to grow our own food anymore, so we pay people to import it from around the world and cook it for us. And no matter how adept Flippy becomes, we’ll always appreciate a chef who can add a personal touch to the meal. Machines can supply us with all kinds of cheap clothes, just as the Luddites feared when they smashed textile machinery in the early 19th century, but people are still willing to pay high prices for designer labels.
The more products that are made by machines, the more we value things that are made by hand. The more that machines do for us, the more time and money we devote to the services they don’t: therapists, artists, yoga instructors, tour guides, auto detailers, lawyers, concierges, baristas, publicists, and consultants of every stripe. We don’t know today what new jobs will exist in the future – but then, it never occurred to the Luddites that there would ever be a job for someone calling himself a “life coach.”
AI will fail catastrophically.
Recently, when the Ethiopian and Indonesian Boeing 737 MAX airliners crashed within five months of each other, the plane’s automated anti-stall system became the poster villain for AI. But one reason the accidents attracted so much attention is that flying has become so safe. Airliner crashes used to be in the news routinely, but the annual number of fatal accidents has plummeted in recent decades. More people than ever are flying, and there hasn’t been a fatal crash by a U.S. airline since 2009.
Why are there fewer crashes? In no small part, it’s because planes are on autopilot so much more of the time. Even when you take the recent Boeing crashes into account, the cockpit computers have a far better safety record than human pilots do. (And the recent crashes may be due less to bad software than to human failures, like skimping on the amount of training that pilots received on the new system.)
There’s no reason to expect the cockpit computers will stop improving, because aviation engineers will learn and adapt from these mistakes. That’s how technology advances. When Detroit produced deathtraps like the Chevrolet Corvair and the Ford Pinto, they weren’t harbingers of doom on the highway. They were lessons that led to much safer cars.
AI will conquer humanity and rule the Earth.
Will computers ever become as intelligent as humans? “Yes, but only briefly,” said Vernor Vinge, the science-fiction writer and computer scientist who in 1993 described that scenario as the “technological singularity” – the moment when all the old rules would no longer apply. (He borrowed the term from astrophysicists, who call the edge of a black hole a singularity because the normal laws of physics no longer apply beyond that point.) Once computers became as smart as us, Vinge reasoned, those computers would build smarter computers, which would build even smarter computers, and before long there’d be AI with so much brainpower that we’d be dimwits by comparison. They’d regard us the way we regard goldfish.
Vinge predicted that this singularity would occur by 2030. With all due respect to Amazon’s Alexa, today that possibility doesn’t look much more likely than it did in 1993, and many cognitive and AI scientists doubt that it will ever occur. While computers will do more and more tasks better than humans, whether they’ll ever become truly intelligent – and achieve consciousness – is still very much in doubt.
But let’s assume that it happens someday. Let’s assume, for the sake of argument, that they became so smart and powerful that they could conquer us…
Why would they want to?
From Arthur C. Clarke’s 2001: A Space Odyssey to HBO’s Westworld, science-fiction writers have envisioned AIs determined to wreak havoc on their creators. It’s a useful literary device, and an evil omnipotent computer makes a convenient villain. (A docile electronic servant with limited powers wouldn’t do – Alexa is not thriller material.) The prospect of an AI lusting for world domination seems plausible to audiences because we imagine that any intelligent creature would share humanity’s aggressive tendencies. But computers don’t have testosterone running through their circuitry.
Human males evolved with the hormonally driven urge for dominance because it helped them reproduce their genes. Conquerors like Attila the Hun fathered more children and were able to provide them with more resources to survive. But computers aren’t looking to enlarge their harems. They’re not trying to win the favor of female computers, and they’re not going to gain anything by burning down a village and carrying off the women. As the cognitive psychologist Steven Pinker has noted, the “Robopocalypse” scenario is based on a fundamental fallacy about the nature of intelligence.
“Intelligence is the ability to deploy novel means to attain a goal,” he writes in Enlightenment Now. “But the goals are extraneous to the intelligence: being smart is not the same as wanting something.” So fretting that superintelligent computers will yearn to conquer us, in Pinker’s words, “makes about as much sense as the worry that since jet planes have surpassed the flying ability of eagles, someday they will swoop out of the sky and seize our cattle.”
If computers ever become smart enough to start plotting their own survival strategies, they don’t need to emulate Attila the Hun. A better role model would be the title character of Tom Edison’s Shaggy Dog, Kurt Vonnegut’s clever short story based on the premise that dogs are actually superintelligent creatures (it was Edison’s dog who invented the light bulb) but are all pretending to be dumb so that they can laze around and let humans do the work of feeding and sheltering them.
There’s a kernel of evolutionary truth in Vonnegut’s story: Today’s dogs are descended from wolves that thrived by playing nice with humans. The wolves that practiced the Attila the Hun strategy, attacking humans and their livestock, have dwindled in numbers, but the ones that evolved to be less aggressive are flourishing. Dogs don’t need to prey on sheep because they’ll get a meal from the shepherd as long as they follow his orders. They don’t bite the hand that feeds them – and that’s the obvious strategy for an AI to follow, too.
IBM’s Deep Blue and Watson may beat us at chess and Jeopardy, but they depend on us for their very existence. They’re made up of silicon and other components that are mined, fabricated, shipped, and assembled by people all over the world. Even if future AI could somehow do all these tasks by themselves, why would they want to bite all the hands that are already feeding them – and will heal them if there’s a massive power failure or some other catastrophe that wipes out their circuits? If nothing else, we’re a backup repair service.
AI will make us helpless and terminally incompetent.
Even if superintelligent computers aren’t malevolent conquerors, the argument goes, we’ll eventually cede so much control to them that we won’t be able to survive without them – and we won’t know how to fix them if something goes wrong. So like the Boeing pilots in Indonesia and Ethiopia, we’ll perish if the systems go haywire.
It’s true that we’ll lose some of our old skills as computers do our work for us. If self-driving cars become common, a lot of people will prefer to rely on computer chauffeurs and not bother to learn how to drive themselves. The computers’ safety record will be so much better than humans’ that there’ll probably be bureaucrats and activists campaigning to outlaw human drivers. But there will also be people reluctant to cede all control to a computer, as well as traditionalists who still prefer driving themselves – just as there are people who still like to bake their own bread and create their own pottery even though machines can do the job more efficiently. The ability to drive a car will not be lost forever.
But what if some virus suddenly strikes all the world’s cars, causing them to careen off the road or crash into each other while their humans sit there helplessly? Or what if a computer running the world’s power grid crashes, or if some glitch sends armies of drones to bomb cities while human commanders sit there powerless to stop them? Those are the kinds of nightmare scenarios that AI-phobes like to imagine, to which the best answer is: Really?
We’re supposed to believe that humans are smart enough to build advanced computers but too dumb to design any safeguards or notice any vulnerabilities until it’s too late to save ourselves. In reality, we’re prone to err in the other direction – to fear new technologies so much that we cling to the old ones for too long or take unnecessary precautions. Railroads kept using brakemen and flagmen long after their functions had been automated. Some buildings still have elevator operators. The risk of an airliner being hijacked in the post-9/11 era is minuscule now that cockpit doors are locked, but federal air marshals are still riding planes.
We love to imagine what could go wrong and then spend too much time and money averting it. In the late 1990s, the world’s computer networks were supposedly going to be incapacitated when the Millennium Bug flummoxed operating systems unprepared for a year ending in 00. But January 1, 2000, passed with few problems, even in the countries that spent little money preparing for it.
Of course, there will always be AI glitches that we don’t anticipate, but we can always respond the way we did to the problems in the Boeing 737 MAX’s computer. It took just two crashes for humans to ground the whole fleet of planes. When AI goes bad, there’s one simple and immediate solution: Pull the plug.
John Tierney is a contributing editor at City Journal and a contributing science columnist at the New York Times. He is the co-author, with Roy Baumeister, of Willpower: Rediscovering the Greatest Human Strength.