DeepMind’s CEO Helped Take AI Mainstream. Now He’s Urging Caution [Time]

January 12, 2023

Demis Hassabis stands halfway up a spiral staircase, surveying the cathedral he built. Behind him, light glints off the rungs of a golden helix rising up through the staircase’s airy well. The DNA sculpture, spanning three floors, is the centerpiece of DeepMind’s recently opened London headquarters. It’s an artistic representation of the code embedded in the nucleus of nearly every cell in the human body. “Although we work on making machines smart, we wanted to keep humanity at the center of what we’re doing here,” Hassabis, DeepMind’s CEO and co-founder, tells TIME. This building, he says, is a “cathedral to knowledge.” Each meeting room is named after a famous scientist or philosopher; we meet in the one dedicated to James Clerk Maxwell, the man who first theorized electromagnetic radiation. “I’ve always thought of DeepMind as an ode to intelligence,” Hassabis says.

Hassabis, 46, has always been obsessed with intelligence: what it is, the possibilities it unlocks, and how to acquire more of it. He was the second-best chess player in the world for his age when he was 12, and he graduated from high school a year early. As an adult he cuts a somewhat diminutive figure, but his intellectual presence fills the room. “I want to understand the big questions, the really big ones that you normally go into philosophy or physics if you’re interested in,” he says. “I thought building AI would be the fastest route to answer some of those questions.”

DeepMind—a subsidiary of Google’s parent company, Alphabet—is one of the world’s leading artificial intelligence labs. Last summer it announced that one of its algorithms, AlphaFold, had predicted the 3D structures of nearly all the proteins known to humanity, and that the company was making the technology behind it freely available. Scientists had long been familiar with the sequences of amino acids that make up proteins, the building blocks of life, but had never cracked how they fold up into the complex 3D shapes so crucial to their behavior in the human body. AlphaFold has already been a force multiplier for hundreds of thousands of scientists working on efforts such as developing malaria vaccines, fighting antibiotic resistance, and tackling plastic pollution, the company says. Now DeepMind is applying similar machine-learning techniques to the puzzle of nuclear fusion, hoping it helps yield an abundant source of cheap, zero-carbon energy that could wean the global economy off fossil fuels at a critical juncture in the climate crisis.

Hassabis says these efforts are just the beginning. He and his colleagues have been working toward a much grander ambition: creating artificial general intelligence, or AGI, by building machines that can think, learn, and be set to solve humanity’s toughest problems. Today’s AI is narrow, brittle, and often not very intelligent at all. But AGI, Hassabis believes, will be an “epoch-defining” technology—like the harnessing of electricity—that will change the very fabric of human life. If he’s right, it could earn him a place in history that would relegate the namesakes of his meeting rooms to mere footnotes.

But with AI’s promise also comes peril. In recent months, researchers building an AI system to design new drugs revealed that their tool could be easily repurposed to make deadly new chemicals. A separate AI model trained to spew out toxic hate speech went viral, exemplifying the risk to vulnerable communities online. And inside AI labs around the world, policy experts were grappling with near-term questions like what to do when an AI has the potential to be commandeered by rogue states to mount widespread hacking campaigns or infer state-level nuclear secrets. In December 2022, ChatGPT, a chatbot designed by DeepMind’s rival OpenAI, went viral for its seeming ability to write almost like a human, but faced criticism for its susceptibility to racism and misinformation. The tiny company Prisma Labs also saw its Lensa app go viral for its AI-enhanced selfies, yet many users complained that Lensa sexualized their images, revealing biases in its training data. What was once a field of a few deep-pocketed tech companies is becoming increasingly accessible. As computing power becomes cheaper and AI techniques become better known, you no longer need a high-walled cathedral to perform cutting-edge research.

It is in this uncertain climate that Hassabis agrees to a rare interview, to issue a stark warning about his growing concerns. “I would advocate not moving fast and breaking things,” he says, referring to an old Facebook motto that encouraged engineers to release their technologies into the world first and fix any problems that arose later. That culture, since emulated by a generation of startups and now synonymous with disruption, helped Facebook rocket to 3 billion users. But it also left the company entirely unprepared when disinformation, hate speech, and even incitement to genocide began appearing on its platform. Hassabis sees a similarly worrying trend developing with AI. He says AI is now “on the cusp” of being able to make tools that could be deeply damaging to human civilization, and urges his competitors to proceed with more caution than before. “When it comes to very powerful technologies—and obviously AI is going to be one of the most powerful ever—we need to be careful,” he says. “Not everybody is thinking about those things. It’s like experimentalists, many of whom don’t realize they’re holding dangerous material.” Worse still, Hassabis points out, we are the guinea pigs...
