The question to 42: Will computers overtake us in the near future?
In The Hitchhiker’s Guide to the Galaxy, which I assume everybody has read of course, a highly advanced civilisation builds a supercomputer to calculate the answer to life, the universe and everything. After 7.5 million years of pondering, the computer finally comes up with a very clear answer: 42. This particular supercomputer, however, is not able to calculate the question that belongs to this answer, which makes it rather useless. It is kind enough, though, to design a new and better computer powerful enough to calculate the question belonging to the answer 42… which will take another 7.5 million years.
At the time Douglas Adams wrote his novel, decades ago, the whole idea of a supercomputer building an even smarter computer all by itself probably sounded like pure science fiction. According to Stephen Hawking, however, computers will be smarter than humans within 100 years. And if that happens, I doubt they will need 7.5 million years to build ever smarter computers. Stephen Hawking, of course, is no computer scientist or AI expert, but he is absolutely not alone in this vision. A couple of years ago Nick Bostrom, a superintelligence expert at Oxford, conducted a survey among AI experts. The question was: by which year will there be a 50 percent probability that we will have achieved human-level machine intelligence, human-level intelligence being defined as the ability to do almost any task at least as well as a human? The answer: somewhere between 2040 and 2050.
In 1989 Peter J. Marcer wrote a short contribution to the magazine AI & Society, in reaction to an article by Stonier, called ‘Why computers will never be smarter than humans in re’. The content of this article can be derived from its title, and the main reasoning was the following:
‘A theorem derived from a system of algorithmic information content or complexity n (i.e. the size in bits of the smallest program for computing it), provided n is sufficiently large, can in general not be proven by a theorem constructed from a system of complexity n-1.’ (Stonier, 1989)
Now from this we can derive two things. First: Mr Marcer probably didn’t get invited to a lot of parties. Second: a machine cannot solve a problem when that problem is not in the domain specified by its programmer. The problems a machine can solve are thereby limited by the intelligence of its programmer.
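For the mathematically inclined: the quoted result reads like a relative of Chaitin’s incompleteness theorem from algorithmic information theory. A hedged sketch in modern notation (the symbols K and c_F are my own, not Marcer’s or Stonier’s):

```latex
% K(x) = Kolmogorov complexity of x: the size in bits of the
% smallest program that outputs x.
% Chaitin's incompleteness theorem: for every sound formal
% system F there is a constant c_F (roughly the complexity of
% F itself) such that F cannot prove, for any specific x, that
% K(x) > c_F, even though that inequality holds for almost all x.
\exists\, c_F \in \mathbb{N} \;\; \forall x :\quad F \nvdash \bigl( K(x) > c_F \bigr)
```

In words: a system of a given complexity cannot certify the existence of objects much more complex than itself, which is the spirit of the n versus n-1 claim above.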
According to Nick Bostrom, the discrepancy between the insights of then and now comes down to one thing: machine learning. We have created algorithms that enable machines to teach themselves further. This variable was not taken into account 30 years ago, when AI meant building an expert system that could solve our problems within the domain we designed for it.
The question, according to both Bostrom and Hawking, is not whether AI will ever be smarter than humans. Even the question of when it will happen is fairly irrelevant. The main issue of the future will be how we align the interests of AI with our own. The supercomputer in The Hitchhiker’s Guide to the Galaxy wasn’t very helpful to humans, but it didn’t destroy them either. According to Bostrom, both outcomes are possible, depending on the way we design AI in the future.
To hear what he has to say about AI and how to cope with it in the future, I would really recommend the TED Talk above. I’m also curious what you all think about the possibility of AI outpacing human intelligence, and how we could deal with it.
Stephen Hawking: There is no God and computers will overtake humans in the next 100 years – Yahoo News UK. (n.d.). Retrieved October 12, 2015, from https://uk.news.yahoo.com/stephen-hawking-no-god-computers-102114199.html#PT3nczk
Stonier, T. (1989). Open forum: Why computers are never likely to be smarter than people. AI & Society, 142–145.