Super-Intelligent Artificial Intelligence: The Last Invention in Human History? Part 1.

The debate around the dangers of Artificial Intelligence has been a recurring topic in the media recently. Two main camps emerge: the skeptics, who doubt AI's impact on human survival, and the believers, who are subdivided into two categories: the optimists and the pessimists.


In this first part, I shall address the pessimistic believers, after first laying out some necessary concepts.


First, what is AI? It is the development of computer-based systems able to perform tasks that usually require human intelligence.

Which kinds of AI exist, judged by their level of intelligence?


We distinguish three types of AI. First is Artificial Narrow Intelligence (ANI), also termed Weak AI, as it focuses on only one restricted domain. For example, IBM's Watson easily beats the most knowledgeable Jeopardy! champions, but would not be able to handle other human-level tasks. Presently, the world runs only on ANI.

The next big step up the intelligence ladder would be Artificial General Intelligence (AGI). At this level, an AI is considered as "smart" as a human being across all cognitive abilities.

Eventually, Artificial Super Intelligence (ASI) would follow: an AI countless times more intelligent than humans.

If you want to know more about those terms, I invite you to consult WaitbutWhy’s blog post on AI.


Summarizing one of the key points of that post: we as humans should worry about AI soon reaching AGI and ASI levels, as technological advances seem to follow an exponential curve. The reason most people don't perceive that trend is that we have a linear vision of technological evolution. But that is a biased perception of reality.
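The linear-versus-exponential point can be made concrete with a small sketch. The numbers below are purely hypothetical (a doubling every year versus a fixed yearly increment), chosen only to show how quickly the two intuitions diverge:

```python
# Illustrative sketch with hypothetical numbers: why a linear intuition
# badly underestimates an exponential trend.

def linear_projection(start, step, years):
    """Grow by a fixed amount each year (how we intuitively extrapolate)."""
    return start + step * years

def exponential_projection(start, rate, years):
    """Grow by a fixed factor each year (how technology often advances)."""
    return start * rate ** years

# The gap is small at first, but explodes with time.
for year in (0, 5, 10, 30):
    lin = linear_projection(1.0, 1.0, year)
    exp = exponential_projection(1.0, 2.0, year)
    print(f"year {year:2d}: linear {lin:>6.0f} vs exponential {exp:>13.0f}")
```

After 30 "years" the linear extrapolation predicts 31 units of progress, while the exponential one predicts over a billion, which is exactly the bias the post describes.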



Now that we have the semantics out of the way, let’s get started.


An increasingly shared belief has surfaced that AI poses a threat.

Not only to our jobs, as koenhut explained in his interesting post; AI could seriously threaten humankind's future. The possibility of witnessing ASI, and its potential outcomes, is the subject of intense debate today.


Over the past year, various influential people have voiced their concerns regarding AI, for two main reasons: its potential to self-improve ever faster, combined with the knowledge that technology is already improving at an exponential rate.

Those two factors have notably caught the attention of Bill Gates, Elon Musk and Stephen Hawking. They all warn about the potential dangers of trying to develop the most advanced AI possible. Elon Musk has even likened it to "summoning the demon".

As a result of this growing anxiety, the Future of Life Institute was created to raise awareness and to conduct research ensuring AI will not pose a threat to our existence. The institute believes that most AI-related R&D should not be focused on improving AI, but on predicting and countering the possible negative outcomes of that technology.


Here is a passage from the book "The Infernal Device" (Michael Kurland, 1978) illustrating the current concerns:


A group of computer geniuses get together to build the world's largest, most powerful thinking machine. They program it with the latest heuristic software so it can learn, then feed into it the total sum of mankind's knowledge from every source: historical, scientific, technical, literary, mythical, religious, occult. Then, at the great unveiling, the group leader feeds the computer its first question:

“Is there a god?”

“There is now,” the computer replies.


The reasons why exactly AI could lead to our extinction are too lengthy to describe in a single blog post.

I do, however, encourage you to discover the blog WaitbutWhy, as pointed out by julianderond, to get a good grasp of the potential dangers posed by AI.

You will feel as if you have had a brief look into another dimension after reading that blog post on AI. And if your curiosity is not yet satisfied, I recommend reading the book Superintelligence by the Oxford philosopher Nick Bostrom, who studies existential risks (possibilities of human extinction). For those who do not have the luxury of time, the movie Ex Machina is a great watch!


In the following post, I will cover some of the grand opportunities AI offers and the other party: the ASI skeptics.

354502 Paul


Wikipedia. 'Artificial Intelligence'. N.p., 2015. Web. 5 Oct. 2015.

'Of God, Humans And Machines'. N.p., 2015. Web. 5 Oct. 2015.

'The Future Of Life Institute'. N.p., 2015. Web. 5 Oct. 2015.

Pandora's Brain. 'Short Story'. N.p., 2013. Web. 5 Oct. 2015.

BBC News. 'Stephen Hawking Warns Artificial Intelligence Could End Mankind'. N.p., 2015. Web. 5 Oct. 2015.


