by Phil Rowley
"I’ve just finished reading the book Life 3.0 by physicist & AI philosopher Max Tegmark, where he sets out a series of possible scenarios and outcomes for humankind sharing the planet with artificial intelligence.
But because you’re busy installing PowerPoint fonts or finding meeting rooms, I’m going to summarise it here. And because you’re double-busy, I’m going to use a series of sci-fi films as a ‘mental shortcut’ or ‘go-to’ reference for each bullet point.
AI dystopia and AI utopia are unlikely to happen | The Matrix vs Star Trek
Tegmark immediately shoots down any notion that we are likely to be victims of a robot-powered genocide, and claims the idea that we would programme or allow a machine to have the potential to hate humans is preposterous - fuelled by Hollywood’s obsession with the apocalypse. In fact, we have the power, now, to ensure that AI’s goals are properly aligned with ours from the start, so that it wants what we want and there can never be a ‘falling out’ between species. In other words, if AI does pose a threat - and in some of his scenarios it does - it will not come from The Matrix’s marauding AIs, enslaving humanity and claiming, like Agent Smith, ‘Human beings are a disease. You are a plague and we are the cure’.
Conversely, the idea that AI will deliver some sci-fi utopia, where human beings are finessed to perfection - as in Star Trek - also bothers him. Complacency and arrogance, it seems, are also enemies of progress.
Rather, and crucially, Tegmark wants us to chart a course between those two poles: a middle way, steering between techno-apocalypse and techno-utopia, driven by cautious optimism, the building of safeguards and safety nets, and very big ‘off-switches’. His Future of Life Institute, featuring such luminaries as Elon Musk, Richard Dawkins and the late Stephen Hawking, is a think-tank designed to tackle and solve these specific issues, now, before they become a problem..."
Read the full story on BBN Times' website using the link below.