A Friendly Little Argument About the Fate of Humanity
By Maureen Dowd
Maureen Dowd, the Pulitzer Prize-winning journalist, political columnist, and noted author, set out to examine the latest developments in humankind's pursuit of the ultimate 'vanity' - the desire to defeat death itself.
What follows is an edited excerpt from Dowd's larger essay on the subject.
It was just a friendly little argument about the fate of humanity. Demis Hassabis, a leading British artificial-intelligence researcher, neuroscientist, and creator of advanced A.I. systems, was chatting with Elon Musk, the founder of SpaceX and chief executive of Tesla, about the perils of artificial intelligence.
Hassabis and Musk are two of the most consequential and intriguing men in Silicon Valley who don’t live there. Hassabis, a co-founder of the mysterious London laboratory DeepMind, had come to Musk’s SpaceX rocket factory, outside Los Angeles. They were in the canteen, talking, as a massive rocket part traversed overhead. Musk explained that his ultimate goal at SpaceX was the most important project in the world: interplanetary colonization.
Hassabis replied that, in fact, he was working on the most important project in the world: developing artificial super-intelligence. Musk countered that this was one reason we needed to colonize Mars—so that we’ll have a bolt-hole if A.I. goes rogue and turns on humanity. Amused, Hassabis said that A.I. would simply follow humans to Mars. Said Musk: “With artificial intelligence, we are summoning the demon. You know all those stories where there’s the guy with the pentagram and the holy water and he’s like, yeah, he’s sure he can control the demon? Doesn’t work out.”
Here’s the nagging thought you can’t escape as you drive around from glass box to glass box in Silicon Valley: the Lords of the Cloud love to yammer about turning the world into a better place as they churn out new algorithms, apps, and inventions that, it is claimed, will make our lives easier, healthier, funnier, closer, cooler, longer, and kinder to the planet. And yet there’s a creepy feeling underneath it all, a sense that we’re the mice in their experiments, that they regard us humans as Betamaxes or eight-tracks, old technology that will soon be discarded so that they can get on to enjoying their sleek new world. Many people there have accepted this future: we’ll live to be 150 years old, but we’ll have machine overlords.
Among the engineers lured by the sweetness of solving the next problem, the prevailing attitude is that empires fall, societies change, and we are marching toward the inevitable phase ahead. They argue not about “whether” but rather about “how close” we are to replicating, and improving on, ourselves. Sam Altman, the 31-year-old president of Y Combinator, the Valley’s top start-up accelerator, believes humanity is on the brink of such invention. “The hard part of standing on an exponential curve is: when you look backwards, it looks flat, and when you look forward, it looks vertical,” he told me. “And it’s very hard to calibrate how much you are moving because it always looks the same.”
You’d think that when Musk, Stephen Hawking (the theoretical physicist and cosmologist), and even tech titan Bill Gates are all raising the same warning about A.I.—as all of them are—it would be a 10-alarm fire. But, for a long time, the fog of fatalism over the Bay Area was thick. The paradox is this: Many tech oligarchs see everything they are doing to help us, and all their benevolent manifestos, as streetlamps on the road to a future where, as Apple co-founder Steve Wozniak says, humans are the family pets.
Ray Kurzweil is the author of The Singularity Is Near, a utopian vision of what an A.I. future holds. The 69-year-old eats strange health concoctions and takes 90 pills a day, eager to achieve immortality—or “indefinite extensions to the existence of our mind file”—which means merging with machines. Kurzweil has predicted that we are only 28 years away from the Rapture-like “Singularity”—the moment when the spiraling capabilities of self-improving artificial super-intelligence will far exceed human intelligence, and human beings will merge with A.I. to create the “god-like” hybrid beings of the future.
Just as, two hundred million years ago, mammalian brains developed a neocortex that eventually enabled humans to “invent language and science and art and technology,” by the 2030s, Kurzweil predicts, we will be cyborgs, with nanobots the size of blood cells connecting us to synthetic neocortices in the cloud, giving us access to virtual reality and augmented reality from within our own nervous systems. “We will be funnier; we will be more musical; we will increase our wisdom,” he said, ultimately producing a herd of Beethovens and Einsteins. Nanobots in our veins and arteries will cure diseases and heal our bodies from the inside.
Stuart Russell, the 54-year-old British-American expert on A.I., told me that his thinking had evolved and that he now “violently” disagrees with Kurzweil and others who feel that ceding the planet to super-intelligent A.I. is just fine. “There are people who believe that if the machines are more intelligent than we are, then they should just have the planet and we should go away,” Russell said. “Then there are people who say, ‘Well, we’ll upload ourselves into the machines, so we’ll still have consciousness but we’ll be machines.’ Which I would find, well, completely implausible.” Said Musk, “If you believe the end is the heat death of the universe, it really is all about the journey.” The man who is so worried about extinction chuckled at his own extinction joke. As H. P. Lovecraft once wrote, “From even the greatest of horrors irony is seldom absent.”