Professor Stephen Hawking has warned that the creation of powerful artificial intelligence will be “either the best, or the worst thing, ever to happen to humanity,” and praised the creation of an academic institute dedicated to researching the future of intelligence as “crucial to the future of our civilization and our species.”
Hawking was speaking at the opening of the Leverhulme Centre for the Future of Intelligence (LCFI) at Cambridge University, a multi-disciplinary institute that will attempt to tackle some of the open-ended questions raised by the rapid pace of development in AI research.
“We spend a great deal of time studying history,” Hawking said, “which, let’s face it, is mostly the history of stupidity. So it’s a welcome change that people are studying instead the future of intelligence.”
While the world-renowned physicist has often been cautious about AI, raising the risk that humanity could be the architect of its own destruction if it creates a super-intelligence with a will of its own, he was also quick to highlight the positives that AI research can bring.
“The potential benefits of creating intelligence are huge,” he said. “We cannot predict what we might achieve when our own minds are amplified by AI. Perhaps with the tools of this new technological revolution, we will be able to undo some of the damage done to the natural world by the last one – industrialization. And surely we will aim to finally eradicate disease and poverty.
“Every aspect of our lives will be transformed. In short, success in creating AI could be the biggest event in the history of our civilization.”
Huw Price, the center’s academic director and the Bertrand Russell professor of philosophy at Cambridge University, where Hawking is also an academic, said that the center came about partially as a result of the university’s Centre for Existential Risk. That institute, mocked by the tabloid press as offering “Terminator Studies,” examines a wider range of potential threats to humanity, while the LCFI has a narrower focus on intelligence itself.
“We’ve been trying to slay the ‘terminator’ meme,” Price said, “but like its namesake, it keeps coming back for more.”
AI pioneer Margaret Boden, professor of cognitive science at the University of Sussex, praised the progress of such discussions. As recently as 2009, she said, the topic wasn’t taken seriously, even among AI researchers. “AI is hugely exciting,” she said, “but it has limitations, which present grave dangers given uncritical use.”
The academic community is not alone in weighing the potential dangers of AI alongside its potential benefits. A number of pioneers from the technology industry, most famously the entrepreneur Elon Musk, have also expressed concerns about the damage a super-intelligent AI could wreak on humanity.
Musk explained that he's invested in more than one AI research company not in hopes of an eventual payoff, but mostly to give himself the best possible vantage point on new advancements. "It's really, I like to just keep an eye on what's going on with artificial intelligence. I think there is a potential dangerous outcome there. There are some substantially, very scary outcomes. And we should try to make sure the outcomes are good, not bad."
There is a clear danger in creating a super-intelligent AI: we may outsmart ourselves. By programming the sum total of human knowledge into computers and giving an AI the ability to expand that knowledge on its own, we hand it a distinct advantage over humanity, and perhaps the power to make arbitrary decisions on our behalf. Those decisions may lack the compassion, mercy, free choice, and love of humanity that ought to temper any power over life and death. “Terminator studies,” in that light, are a very good idea.