The dangers and potential of artificial intelligence
Artificial intelligence, or AI, is a topic that has received a lot of attention in the past couple of years. Several famous, highly educated people have warned us about the dangers of AI, among them Elon Musk, Stephen Hawking and, most recently, Bill Gates.
“I think we should be very careful about artificial intelligence. If I were to guess like what our biggest existential threat is, it’s probably that. So we need to be very careful with the artificial intelligence. Increasingly scientists think there should be some regulatory oversight maybe at the national and international level, just to make sure that we don’t do something very foolish. With artificial intelligence we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like yeah he’s sure he can control the demon. Didn’t work out.”
– Elon Musk (At the MIT Aeronautics and Astronautics department’s Centennial Symposium)
“I think the development of full artificial intelligence could spell the end of the human race.”
– Stephen Hawking (BBC interview)
“I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.”
– Bill Gates (Reddit “Ask Me Anything (AMA)” thread)
The warnings have not been limited to interviews, symposiums and online fora. Hawking, Musk and others contributed to an open letter, posted to the Future of Life Institute, warning about the dangers of AI.
Besides these warnings from prominent figures, the entertainment industry has been contributing to the discussion as well. Several movies (e.g. Transcendence, Avengers: Age of Ultron) and video games (e.g. Destiny) depict the dangers and potential of artificial intelligence.
But what is so dangerous about artificial intelligence, and why the sudden increase in global interest? The biggest risk is the uncertainty of what an AI would do. Even though we like to think of ourselves as rational, emotions have a big impact on our daily decision making. An AI, by contrast, would be a completely rational being. For example: humans would never choose to kill off half the population just because the data showed it would be beneficial. An AI might have no trouble making such a decision. There might even come a point where it considers humanity useless. A being with all the computational power in the world, and no attachment to us, could be incredibly dangerous. This is one of the major reasons so many people are concerned about AI development.
Besides the above-mentioned risks, there is a lot of potential. Imagine a consciousness with vastly more computational power than our own: the development and research such an AI could perform would be far faster than anything humans can achieve. Creating AI is probably our best bet at immortality, could lead to enormous economic progress, and much more.
I personally believe that creating artificial intelligence, to the point of consciousness, would be equivalent to creating an actual deity. The result will be either the start of a golden age the likes of which we have never seen, or human extinction.
This leaves us with one big question: should we pursue artificial intelligence knowing the risks and potential?