Tag Archive | artificial intelligence

Wolfram Alpha: A World based on Computation

Maybe the name 'Wolfram Alpha' is more familiar to science students than to business students. I got to know it because I used it to cheat on my calculus homework during my Bachelor's, since it can easily give you the integral, limit, or plot of x*sin(x).
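As an illustration of the kind of symbolic computation Wolfram Alpha does behind the scenes, here is a minimal sketch in Python using the SymPy library (my own example, not how Wolfram Alpha is actually implemented) that reproduces those calculus answers:

```python
# Reproduce, with SymPy, the kind of symbolic answers Wolfram Alpha
# gives for x*sin(x): its antiderivative and a related limit.
import sympy as sp

x = sp.symbols('x')
f = x * sp.sin(x)

integral = sp.integrate(f, x)          # antiderivative of x*sin(x)
limit_at_0 = sp.limit(f / x**2, x, 0)  # lim x->0 of sin(x)/x

print(integral)    # -x*cos(x) + sin(x)
print(limit_at_0)  # 1
```

The point is the same as with Wolfram Alpha: the answer is computed on the spot rather than looked up.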

It is a computational knowledge engine that was launched in 2009. It is not a brand-new technology any more, but it is a revolutionary product that points to a possible direction for the future of information technology. People describe it as 'like a cross between a research library, a graphing calculator, and a search engine'. Its interface looks like a search engine's, but it provides far more than a normal search engine like Google. The essential difference is that it gives you the answer to your question, based on a series of computations over its database, whereas Google can only give you a long list of resources where you may be able to find the answer.

For example, I can search for 'life expectancy of a 25-year-old Dutch man'. The result looks like this:

[Screenshots: Wolfram Alpha's computed results]

While Google gives you this:

[Screenshot: Google's list of links]

In some sense, Wolfram Alpha is very much like Siri (though it predates Siri): both process natural language and give you an answer directly. But Siri works better on natural language and voice processing and focuses more on everyday questions, e.g. 'where is the nearest McDonald's?'. Wolfram Alpha pays more attention to data processing and computation, e.g. it gives the stock price, financial figures, and return forecasts when you type 'McDonald's'. Wolfram Alpha targets technical users more than the general public, which is one reason it is not known to everyone yet; most people only care about where the nearest McDonald's is rather than its financial performance. An interesting fact is that Siri uses Wolfram Alpha as a source of answers, and in 2012, 25 percent of Wolfram Alpha's traffic came from Siri.

Wolfram Alpha is said to be the first applied AI (weak artificial intelligence), since it closely approximates the ability to 'think'. As Stephen Wolfram, the founder of Wolfram Alpha, stated in a panel discussion, if you ask Wolfram Alpha for the population of New York City, it will use both internal algorithmic work and real-world knowledge to compute it, rather than just searching for an accredited answer somewhere on the internet.

On the other hand, it is very different from what we usually think of as AI, since we often think of AI as a logical algorithm that tries to mimic the human thinking and learning process. The 'thinking' of Wolfram Alpha, however, is based solely on a complicated process of computation; it does not try to replicate human thinking at all, and it cannot learn either. According to Stephen Wolfram, he tried working on artificial general intelligence (strong AI) but failed. He realized that software can still provide useful knowledge without AGI. That is why he invented Wolfram Alpha: to build a smart system that can assemble all existing knowledge, organize it, and derive new knowledge from it. Wolfram Alpha achieved that goal, and its ability to answer queries and to organize and process knowledge makes it seem like it can think. This weak artificial intelligence has proved to be very practical and useful. Maybe it points to a direction for future artificial intelligence development.

Sources:

http://www.cnet.com/news/siri-brings-nearly-25-percent-of-wolfram-alpha-traffic/

http://www.outerplaces.com/science/item/9977-do-it-yourself-ai-how-wolfram-alpha-is-bringing-artificial-intelligence-to-the-masses

https://en.wikipedia.org/wiki/Weak_AI

https://en.wikipedia.org/wiki/Artificial_general_intelligence#Relationship_to_.22strong_AI.22

http://songshuhui.net/archives/91355

Detecting Anomalies in Large Data Sets

Data has become a common concern recently. Both companies and individuals have had to deal with information in multiple ways in order to improve operations or obtain insight from them. IT has enabled unprecedented levels of data management for both parties. This blog, however, focuses on companies' management of data, more specifically in the auditing sector.

Fraud has taken place in markets throughout history. I like to think of Imtech as an example; for those who aren't familiar with the company, there's no need to worry, I'm sure you have an example of your own. Returning to the topic of data and fraud: as the amount of available information grows, it is becoming increasingly difficult to accurately identify potential fraud cases. This has given rise to the use of computer algorithms to detect them (Pandit, Chau, Wang & Faloutsos, 2007).

An interesting way to tackle this challenge is to use mathematical laws about large sets of numbers to identify anomalies within these data sets. One particularly interesting example is the application of Benford's Law to detect fraud in company documentation (Kraus & Valverde, 2014). In short, Benford's Law states that 30.1% of random, naturally occurring numbers start with a 1; 17.6% with a 2; 12.5% with a 3, and so on. Logically this makes sense given our counting structure. This can be expressed as

P(d) = log10(1 + 1/d)

where d is a digit in {1, 2, ..., 9} and P(d) is the probability of a number starting with d.
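A minimal sketch in Python of how the expected Benford distribution, P(d) = log10(1 + 1/d), could be compared against the leading digits of a data set. This is my own illustrative sketch, not the method of Kraus and Valverde; the deviation measure is a naive one chosen for clarity:

```python
import math
from collections import Counter

def benford_probs():
    # Expected Benford probability P(d) = log10(1 + 1/d) for digits 1..9
    return {d: math.log10(1 + 1 / d) for d in range(1, 10)}

def leading_digit_freqs(values):
    # Empirical frequency of each leading digit in the data
    digits = [int(str(abs(v)).lstrip('0.')[0]) for v in values if v != 0]
    counts = Counter(digits)
    n = len(digits)
    return {d: counts.get(d, 0) / n for d in range(1, 10)}

def benford_deviation(values):
    # Sum of absolute deviations from Benford's Law; a large value
    # may flag the data set for a closer, drill-down look.
    expected = benford_probs()
    observed = leading_digit_freqs(values)
    return sum(abs(observed[d] - expected[d]) for d in range(1, 10))
```

For instance, `benford_probs()[1]` is about 0.301, matching the 30.1% mentioned above; feeding in a column of invoice amounts and ranking documents by `benford_deviation` would be one way to prioritise audit attention.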

Although this method seems promising, Kraus and Valverde (2014) could not find any outstanding peculiarities in their data set that contained fraud perpetrators. However, the law does serve as a starting point for a drill-down approach to discovering perpetrators. Which brings us to the more strategic question: will IT ever develop a way to outsmart fraud perpetrators in this context? Is an eternal drill-down chase ever going to take the lead?

What do you think? Will this ever be the case? Is there any way you thought this might work out?

I think it's pointless. Of course, as with everything, IT methods have their degree of accuracy. However, I firmly believe there will never be a way to completely ensure an honest and transparent market. Not long ago I heard a man say, "Does anybody here know what EBITDA stands for? Exactly. Earnings Before I Tricked the Dumb Auditor." It's human nature, and it might take millennia before that changes ever so slightly.

I'd like to say it was nice to write a couple of blogs here. Till the next time!

References

Kraus, C., & Valverde, R. (2014). A data warehouse design for the detection of fraud in the supply chain by using the Benford's law. American Journal of Applied Sciences, 11(9), 1507-1518.

Pandit, S., Chau, D. H., Wang, S., & Faloutsos, C. (2007, May). Netprobe: A fast and scalable system for fraud detection in online auction networks. In Proceedings of the 16th International Conference on World Wide Web (pp. 201-210). ACM.

The question to 42: Will computers overtake us in the near future?

In The Hitchhiker's Guide to the Galaxy, which I assume everybody has read of course, a highly advanced people builds a supercomputer to calculate the answer to life, the universe and everything. After 7.5 million years of pondering, the computer finally comes up with a very clear answer: 42. This particular supercomputer isn't able to calculate the right question that belongs to this answer, making it useless. However, it is kind enough to design a new and better computer, strong enough to calculate the question belonging to the answer 42... which will take another 7.5 million years.

At the time Douglas Adams wrote his novel, decades ago, the whole idea of a supercomputer building an even smarter computer all by itself probably sounded very sci-fi. According to Stephen Hawking, however, computers will be smarter than humans within 100 years. And if this happens, I doubt they will need 7.5 million years to build smarter and smarter computers. Stephen Hawking, of course, is no computer scientist or AI expert, but he is absolutely not alone in his vision. A couple of years ago Nick Bostrom, a superintelligence expert at Oxford, conducted a survey among AI experts, asking by which year there would be a 50 percent probability that we will have achieved human-level machine intelligence, defined as the ability to do almost any task at least as well as a human would. The answer: by 2040 to 2050.

In 1989 Peter J. Marcer wrote a short contribution to the journal AI & Society, in reaction to an article by Stonier, called 'Why computers will never be smarter than humans in re'. The content of this article can be derived from its title, and the main reasoning was the following:

‘A theorem derived from a system of algorithmic information content or complexity n (i.e. the size in bits of the smallest program for computing it), provided n is sufficiently large, can in general not be proven by a theorem constructed from a system of complexity n-1.’ (Stonier, 1989)

Now from this we can derive two things. First: Mr Marcer probably didn't get invited to a lot of parties. Second: a system or machine cannot solve a problem when that problem is not in its domain as specified by the programmer. The problems a machine can solve are thereby limited by the intelligence of its programmer.

According to Nick Bostrom, the discrepancy between the insights of then and now comes down to one thing: machine learning. We have created algorithms that enable machines to teach themselves. This variable was not taken into account 30 years ago, when AI meant building an expert system that could solve our problems within the domain we designed for it.
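As a toy illustration of what 'teaching themselves' means (my own minimal sketch, not an example from Bostrom): a perceptron is never told the rule it should apply; it adjusts its own weights from examples until its predictions match the targets. Here it learns the logical AND function:

```python
# A perceptron learning the logical AND function from examples.
# The programmer never encodes the rule; the weights are learned.

def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]  # weights, start knowing nothing
    b = 0.0         # bias
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred          # learn only from mistakes
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(samples)
predict = lambda x1, x2: 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
print([predict(x1, x2) for (x1, x2), _ in samples])  # [0, 0, 0, 1]
```

The rule 'output 1 only when both inputs are 1' is nowhere in the code; it emerges from the data, which is exactly the shift the expert-system era did not anticipate.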

According to both Bostrom and Hawking, the question is not whether AI will ever be smarter than humans; even the question of when it will happen is fairly irrelevant. The main issue for the future will be how we align the interests of AI with ours. The supercomputer in The Hitchhiker's Guide to the Galaxy wasn't very helpful to humans, but it wasn't destroying them either. According to Bostrom, both situations are possible, depending on the way we design AI in the future.

To see what he has to say about AI and how to cope with it in the future, I would really recommend the TED Talk above. I'm also curious what you all think about the possibility of AI outpacing human intelligence and how we can deal with it.

Stephen Hawking: There is no God and computers will overtake humans in the next 100 years – Yahoo News UK. (n.d.). Retrieved October 12, 2015, from https://uk.news.yahoo.com/stephen-hawking-no-god-computers-102114199.html#PT3nczk

Stonier, T. (1989). Open Forum: Why computers are never likely to be smarter than people. AI & Society, 142–145.

 

Towards an artificially intelligent future?

There are technological innovations happening all around us, and a major trend in the past decade has been the application of "artificial intelligence" to already existing technology.

As a result of that we now have smart mobile phones, smart watches, smart televisions, smart cars and even smart toilets (seriously)! And those are just examples of artificial intelligence in isolated devices. We are developing entire systems that need minimal human interference.

If we think about it, it's not so much the immigrants as the machines that are taking away our jobs.

Initially only the blue-collar jobs were taken over by machines, but now the machines are moving up the ranks. We've got websites acting as housing agents, supermarkets with hardly any staff, and mobile phone apps that could possibly replace your mother (Siri)! This has got to stop! We need to come together as a species to guarantee our existence. Have we learnt nothing from Person of Interest??

On a serious note, we need to proceed with caution when it comes to artificial intelligence. In our quest to make life easier, we are giving life to machines, and it might just turn out to be apocalyptic in the long run. It is something that has got even Stephen Hawking worried. We are curious beings, and we are going to press the "red button" when given the choice.

A major concern is definitely the advancement of AI in autonomous weapon systems. Recently, over 1,000 high-profile artificial intelligence experts and leading researchers signed an open letter warning of a "military artificial intelligence arms race" and calling for a ban on autonomous weapon systems. The letter was presented at the International Joint Conference on Artificial Intelligence in Buenos Aires, and the signatories include Elon Musk, Steve Wozniak and Stephen Hawking, among others.

The letter states: “AI technology has reached a point where the deployment of [autonomous weapons] is – practically if not legally – feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms.”

Here’s hoping we don’t end up creating one of these.

Author: Amogh Jain, 437457


Sources:

http://www.theguardian.com/technology/2015/jul/27/musk-wozniak-hawking-ban-ai-autonomous-weapons

The dangers and potential of artificial intelligence

Artificial intelligence, or AI, is a topic that has been getting a lot of attention in the past couple of years. Some famous, highly educated people have been warning us about the dangers of AI. Among them are Elon Musk, Stephen Hawking and, most recently, Bill Gates.

“I think we should be very careful about artificial intelligence. If I were to guess like what our biggest existential threat is, it’s probably that. So we need to be very careful with the artificial intelligence. Increasingly scientists think there should be some regulatory oversight maybe at the national and international level, just to make sure that we don’t do something very foolish. With artificial intelligence we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like yeah he’s sure he can control the demon. Didn’t work out.”
– Elon Musk (At the MIT Aeronautics and Astronautics department’s Centennial Symposium)

“I think the development of full artificial intelligence could spell the end of the human race”
– Stephen Hawking (BBC interview)

“I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.”
– Bill Gates (Reddit “Ask Me Anything (AMA)” thread)

It has not just been warnings in interviews, symposiums and online fora. Hawking, Musk and others contributed to an open letter, posted to the Future of Life Institute, warning about the dangers of AI.

Besides the highly educated mentioning AI, the entertainment industry has been contributing to the discussion as well. There are several movies (e.g. Transcendence, Avengers: Age of Ultron) and video games (e.g. Destiny) showing the dangers and potential of artificial intelligence.

But what is so dangerous about artificial intelligence, and why the sudden increase in global interest? The biggest risk is the uncertainty of what an AI would do. Even though we are quite rational, emotions have a big impact on our daily decision making. An AI would be a completely rational being. For example, humans would never choose to kill off half the population just because the data shows it would be beneficial; an AI would likely have no trouble making such a decision. There might actually be a point where it would consider humanity useless. A being with all the computational power in the world, and no liking for us, could be incredibly dangerous. This is one of the major reasons so many people are concerned about AI development.

Besides the above-mentioned risks, there is a lot of potential. Think of a consciousness with so much more computational power: the development and research such an AI could perform would be so much faster than anything we humans can do. Creating AI is probably our best bet for immortality, and it could lead to huge economic progress and so much more.

I personally believe that creating artificial intelligence, to the point of consciousness, would be equal to creating an actual deity (god). The result will either be the start of a golden age, the likes of which we have not seen before, or human extinction.

This leaves us with one big question: should we pursue artificial intelligence knowing the risks and potential?

Building websites, why bother?


You know how to use WordPress? Or maybe Drupal or Joomla? So you know how to make websites? That's great! But why would you put effort into making a website when websites can make themselves? There is no need to design anything; let artificial intelligence do the work for you!
