A few weeks ago, I wrote an article about driverless cars (guided by artificial intelligence), and how they would impact employment, a burning issue during the recent US presidential campaign. Until then, I had never given much thought to artificial intelligence (AI). Of course I’d heard of it, but I shrugged it off as movie mania I had to endure in science-fiction previews. But driverless cars took me to other aspects of the AI world, and I found myself drawn to the hundreds of articles and books on the subject. Now I’m hooked, and it’s a bit late to go back.
Scientific procedure would have me start at the beginning—somewhere in the 1950s, when scientists first started to muse and speculate about the potential of computers. But I’m not going to delve into that just yet. I’ve picked my spot: a conference at the Leverhulme Centre for the Future of Intelligence (LCFI) at Cambridge University, in October 2016.
AI Will Change the World for the Better
The speaker was world-renowned physicist Professor Stephen Hawking, and here is what he said:
“The potential benefits of creating intelligence are huge. We cannot predict what we might achieve when our own minds are amplified by AI. Perhaps with the tools of this new technological revolution, we will be able to undo some of the damage done to the natural world by the last one – industrialization. And surely we will aim to finally eradicate disease and poverty.
“Every aspect of our lives will be transformed. In short, success in creating AI could be the biggest event in the history of our civilization.”
But Professor Hawking knows as much about artificial intelligence as almost anyone on earth, and he warns us about its potential dangers. Here are the views of some scientists who share his concerns:
Not So Fast: AI Can Actually Destroy the World
From Tony Prescott, Professor of Cognitive Neuroscience, University of Sheffield:
“One of the issues is whether AI will go out of control – I think that that’s a remote issue. The more pertinent issue is that people will use AI for bad purposes. And I think that is a risk – it’s difficult to guarantee that won’t happen, in the same way as it’s already difficult to guarantee that people won’t use computer science in nefarious ways.”
From Noel Sharkey, Professor of Artificial Intelligence and Robotics, University of Sheffield:
“I’m part of the campaign to stop killer robots. We’re working at the UN in Geneva. The idea of these weapons is that they will find their own targets and kill them without intervention once they’ve been launched. It’s an area that I think should not be researched.”
A dark new book by James Barrat, “Our Final Invention: Artificial Intelligence and the End of the Human Era,” discusses what might happen once ASI (Artificial Super Intelligence) is developed. Computers may effectively reprogram and improve themselves, leading to a so-called “technological singularity” or “intelligence explosion.” When that happens, he says, the risks of machines outwitting humans in battles for resources and self-preservation cannot simply be dismissed.
Nick Bostrom, Director of the Oxford Future of Humanity Institute, and author of “Superintelligence: Paths, Dangers, Strategies” says that it’s completely impossible to predict what the consequences of an AI Revolution will be. However, he says that with intelligence comes power. This means an ASI, when we create it, will be the most powerful being in the history of life on Earth, and all living things, including humans, will be entirely at its whim — and this might happen in the next few decades.
I hesitated to put these ideas here, on my blog, where my followers will read them and possibly share my anxiety. But many of you have heard much of this already. These ideas are out there, and it seems somehow wrong to let movies and science fiction lead the way in knowledge about artificial intelligence. One thing to remember: the most dangerous aspects of AI still lie in the future, and possibly the distant future. No one has any idea how long it will take to perfect artificial intelligence, or to develop superintelligence. But that doesn’t mean that the scientific community should shield us from this knowledge. More importantly, it doesn’t mean that we should hide from it.
Why We Need to Know More About AI
I’m rattled by all of this, especially the prospect of superintelligence, but I am encouraged by the involvement of several leading researchers who believe that we must be given this information and become part of the discussion. One of these scientists, Sabine Hauert, Lecturer in Robotics, University of Bristol, says in her article, “Shape the debate, don’t shy from it” (I apologize for the length of this passage, but it’s important):
“Irked by hyped headlines that foster fear or overinflate expectations of robotics and artificial intelligence (AI), some researchers have stopped communicating with the media or the public altogether. But we must not disengage.”
“Experts need to become the messengers. Through social media, researchers have a public platform that they should use to drive a balanced discussion. A common communications strategy will empower a new generation of roboticists that is deeply connected to the public and able to hold its own in discussions. This is essential if we are to counter media hype and prevent misconceptions from driving perception, policy and funding decisions.”
I agree! I’d like to learn more about AI, and ASI, and would be relieved and happy to get my information from people who want to inform, not alarm.