|Title|Is Research in Intelligence an Existential Risk?|
|Publication Type|Views & Reviews|
|Year of Publication|2014|
Recent months have seen an increasingly public debate taking shape around the risks of artificial intelligence (AI). A letter signed by Nobel laureates and other physicists described AI as the top existential risk to mankind. More recently, Tesla CEO Elon Musk has been quoted as saying that AI is “potentially more dangerous than nukes.” Physicist Stephen Hawking told the BBC that “the development of full artificial intelligence could spell the end of the human race.” And of course recent films such as Her and Transcendence have reinforced the message. Thoughtful comments by experts in the field, such as Rod Brooks, Oren Etzioni, and others, have done little to settle the debate.
As the Director of a new multi-institution, NSF-funded and MIT-based Science and Technology Center — the Center for Brains, Minds and Machines (CBMM) — I argue here, on behalf of my collaborators and many colleagues, that the terms of the debate should be fundamentally rephrased. Our vision of the Center’s research integrates cognitive science, neuroscience, computer science, and artificial intelligence. Our belief is that understanding intelligence and replicating it in machines goes hand in hand with understanding how the brain and the mind perform intelligent computations. Convergence and recent progress in technology, mathematics, and neuroscience have created a new opportunity for synergy across fields. The dream of understanding intelligence is an old one; yet, as the debate around AI shows, now is an especially exciting time to pursue it. Our mission at CBMM is thus to establish an emerging field, the Science and Engineering of Intelligence. This integrated effort should ultimately yield fundamental progress of great value to science, technology, and society. We believe that we must push ahead with research, not pull back.