What will the future hold for A.I. and its effects on human life?
Artificial Intelligence (A.I.) is the intelligence exhibited by machines or software. As technology grows more sophisticated at a rapid pace, A.I. is becoming a serious, realistic development rather than just a sci-fi geek’s wet dream. The idea of self-thinking and/or self-aware machines is nothing new, and speculation about whether A.I. is a positive development varies greatly. Dystopian scenarios of A.I. going wrong were portrayed in The Terminator and The Matrix movie series. A more subtle example is the movie Ex Machina, which was released only recently (watch this one!). The discussion about A.I., however, is no longer in the hands of movie writers and directors.
Artificial intelligence is the attempt to recreate human thought, creating a machine with intellectual abilities. The first question that springs to mind is: why do we need machines to do more than assembly-line and repetitive processing work? The answer lies with computers themselves. Although the possibilities of what computers can calculate are limitless, they will always be constrained by their input. The computer is incapable of solving problems autonomously. In other words, it can only solve problems it has been programmed to solve, rather than being able to solve problems analytically by itself.
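To make that distinction concrete, here is a toy sketch (a hypothetical example, not from any real system): a conventional program can only answer the questions its author anticipated and wrote down in advance.

```python
# Toy illustration: a classic program only handles inputs it was
# explicitly written to handle -- nothing is learned or inferred.
def solve(problem: str) -> str:
    # Every capability must be spelled out by the programmer beforehand.
    rules = {
        "2+2": "4",
        "capital of France": "Paris",
    }
    # Faced with anything outside its programmed rules, the machine is stuck.
    return rules.get(problem, "I was not programmed to solve this.")

print(solve("2+2"))              # prints "4"
print(solve("meaning of life"))  # prints "I was not programmed to solve this."
```

True A.I., by contrast, would be able to tackle the unfamiliar question without a programmer having listed it first.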
Many industries are interested in these kinds of capabilities: the automotive industry (self-driving cars), aviation, transportation, gaming, and healthcare and medicine. I would like to get into the details of a much more controversial environment: the weapons industry. Companies have been investing heavily in autonomous weaponry. On the one hand, these companies argue that A.I. will make battlefields and warzones ‘safer’ for civilians. On the other, big names in the technology scene (e.g. Steve Wozniak, Stephen Hawking) are asking governments to stop these developments to prevent an A.I. arms race. In the wrong hands, A.I. weaponry is highly dangerous. Tesla CEO Elon Musk even went as far as calling it more dangerous than nukes. Well-known scientist Stephen Hawking supports this notion:
“Humans, who are limited by slow biological evolution, couldn’t compete and would be superseded.” In other words, machines that can update themselves will evolve at a much faster pace than the human race. He even calls it the potential end of the human race.
Personally, I would not go as far as Mr Hawking, and I do see a lot of benefits for specific industries. I can’t, however, ignore the serious negative aspects A.I. can bring with its development. Machines at this point are only able to mimic human behavior rather than initiate it themselves. It will be some years, maybe decades, before major breakthroughs in true A.I. begin to appear. I just hope that by then humans have figured out how to stay in control.
What are your thoughts on A.I. and how/where it should be applied? And do we need some sort of control mechanism over scientists to prevent the dark side of A.I.? Or is it all just sci-fi-based fear of the unknown?
By Max van Hilten