Maybe the name ‘Wolfram Alpha’ is more familiar to science students than to business students. I got to know it because I used it to cheat on my calculus homework during my Bachelor’s, since it can easily give you the integral, limit or plot of x*sin(x).
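For the curious: the antiderivative Wolfram Alpha returns for x*sin(x) is sin(x) - x*cos(x) (plus a constant). A quick numeric spot-check with Python's standard library confirms this; it is only an illustration, not how Wolfram Alpha itself computes the result.

```python
import math

# Wolfram Alpha reports the antiderivative of x*sin(x) as sin(x) - x*cos(x).
# If that is right, its derivative should match x*sin(x) at arbitrary points.

def antiderivative(x):
    return math.sin(x) - x * math.cos(x)

def numerical_derivative(f, x, h=1e-6):
    # central-difference approximation of f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

for x in [0.5, 1.0, 2.0]:
    assert math.isclose(numerical_derivative(antiderivative, x),
                        x * math.sin(x), rel_tol=1e-4)
```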
It is a computational knowledge engine that was launched in 2009. It is no longer a brand-new technology, but it is a revolutionary product that points to a possible direction for the future of information technology. People say it is ‘like a cross between a research library, a graphing calculator, and a search engine’. Its interface looks like a search engine’s, but it provides far more than a normal search engine like Google. The essential difference is that it gives you the answer to your question, based on a series of computations over its database, whereas Google can only give you a long list of resources where you may be able to find the answer.
For example, I can search for ‘life expectancy of 25 year old Dutch man’. Wolfram Alpha responds with a computed answer, while Google only gives you a list of links to pages that might contain it.
In some sense, Wolfram Alpha is very much like Siri (though it predates Siri) in processing natural language and giving you the answer directly. But Siri works better on natural language and voice processing and focuses more on questions about daily life, e.g. ‘where is the nearest McDonald’s?’, while Wolfram Alpha pays more attention to data processing and computation, e.g. it gives the stock price, financial figures and return forecasts when you type ‘McDonald’s’. Wolfram Alpha targets technical people more than the general public, which is one reason why it is not known to everyone yet: most people only care about where the nearest McDonald’s is, not about its financial performance. An interesting fact is that Siri uses Wolfram Alpha as a source of answers, and in 2012, 25 percent of Wolfram Alpha’s traffic came from Siri.
Wolfram Alpha is said to be the first applied AI (weak artificial intelligence), since it closely approximates the ability to ‘think’. As Stephen Wolfram, its founder, stated in a panel discussion, if you ask Wolfram Alpha for the population of New York City, it will use both internal algorithms and real-world knowledge to compute the answer, rather than just searching for an accredited answer somewhere on the internet.
On the other hand, it is very different from what we usually think of as AI, since we often think of AI as a logical algorithm that tries to mimic the human thinking and learning process. The ‘thinking’ of Wolfram Alpha, however, is based solely on a complicated process of computation; it does not try to replicate human thought at all, and it cannot learn either. According to Stephen Wolfram, he tried working on artificial general intelligence (strong AI) but failed, and realized that software can still provide useful knowledge without AGI. That is why he invented Wolfram Alpha: to build a smart system that can assemble existing knowledge, organize it, and derive new knowledge from it. Wolfram Alpha achieved that goal, and its ability to answer queries and to organize and process knowledge makes it seem as if it can think. This weak artificial intelligence has proved very practical and useful, and it may well be one direction for future AI development.
Cash may soon no longer be a part of this country. This country might soon stop printing notes. With no notes and no coins, its people would have some of the lightest pockets in the world. Soon, attempting a paper-money transaction at a bank in this country might provoke a suspicious stare or a report to the police.
Wondering which country it is? Here is a clue: it is part of Scandinavia, and it is so clean that it once even ran out of trash.
You are rushing to catch your train but want to grab a drink before getting on. The next thing you will probably do is head to the kiosk or the vending machines. In the Netherlands, for a small amount we can easily get a drink or a candy bar from the vending machines at the stations. In Japan, however, the vending machine is much more than that.
Vending Machines in Japan
At first glance, the vending machines are not that different from the ones we are used to: you put coins in the machine and you get the product you selected. What makes them remarkable, however, is how many of them there are. It is estimated that there are about 5.52 million vending machines in Japan, which is even more than the total population of New Zealand (JNTO, 2015).
The vending machines in Japan also offer bizarre contents, which makes them unique: hot meals, fresh lettuce, cup noodles, flowers, umbrellas and even used underwear. You name it, they have it!
Next-generation vending machines
Vending machines have been around for over 50 years in Japan, but technology is the key that keeps them evolving. For instance, there are vending machines with solar panels, and ones with touch panels that can sense the demographics of the customer, allowing the machine to suggest a drink on the display (Ryall, 2010). This is just a small example, as there are tons of new features that companies could add to create a better user experience.
Recently, the company Kirin even implemented a selfie feature in its vending machines. The machine is fitted with a large LCD display and a camera: you can take a free selfie and share it with your friends through Line, a popular smartphone-messaging app in Japan. The service is only free for those who buy a drink (Ashcraft, 2015). This is definitely a fun and exciting experience for customers, but in my opinion this selfie vending machine holds far more potential. There could be branded backgrounds and localized digital content right there in the images with you, and when the machine is not in use, the display could show advertisements for products.
Japan is famous for its vending machines. However, it is not just the sheer number of them that makes the country fascinating, but how the Japanese make these machines their own in a unique way. By combining them with technology, they keep improving their vending machines and creating a better user experience for customers. There is huge potential in these vending machines, and they seem to unlock new ways of branding. So what do you think? Would we be able to improve our vending machines like the ones in Japan?
Ashcraft, B. (2015, October 8). ‘Japanese Vending Machines Now Taking Selfies’. http://www.kotaku.com.au/2015/10/japanese-vending-machines-now-taking-selfies/
JNTO (2015). ‘Vending Machines’. http://www.jnto.go.jp/eng/indepth/cultural/hj/vendingmachines.html
Ryall, J. (2010, November 16). ‘Japanese vending machine tells you what you should drink’. http://www.telegraph.co.uk/news/worldnews/asia/japan/8136743/Japanese-vending-machine-tells-you-what-you-should-drink.html
Imagine entering your hotel room after a long flight. You are hungry and tired, but realize you forgot to make a reservation at your favourite restaurant within the hotel. You turn on the television and are greeted by a message stating that there are still two tables left in that restaurant. You indicate that you would like to reserve a table for that evening. While you are unpacking your suitcase, another notification pops up, asking whether you would like to order a steaming cup of hot chocolate, as you have regularly done after a long flight.
Hotel Okura Amsterdam (HOA) is a five-star Japanese hotel located in Amsterdam. HOA recently acknowledged the need for interdepartmental IT system integration to improve personalization and thereby customer satisfaction; the company currently operates 40 different operating systems. These insights, together with many others, were obtained during an in-depth interview on business and IT strategy with Okura’s Rooms, Engineering and IT managers. The business strategy of HOA is dedicated to being ‘‘Unique and Complete’’, making a high level of customer satisfaction key; one question here is how a customer can be convinced of the uniqueness of HOA.
The interview brought to light that the non-integrated structure results in inefficiency. Each division registers its own data, and information between divisions is transferred verbally instead of being retrievable from a centralized data system; many opportunities are lost as a result. The current business strategy is to be supported by an integrated IT infrastructure and IT-related processes, so the proposed solution strives for a high level of integration. However, given the issue of scale and the importance of (financial) feasibility, the first generation will focus solely on combining the F&B (Food & Beverage) system with the hotel room system. The solution itself, the infotainment system explained below, does have the long-term goal of eventually integrating all of Okura’s 40 different operating systems.
The trends of superpersonalization and big data to enhance customer value, combined with the need for responsive service and modern in-room facilities (current guest satisfaction with room technology is low), lead to the following proposed solution. To cater to HOA’s business need, we propose providing real-time personalized information in the guest’s hotel room through a television application. Various systems will be able to communicate with each other and provide guests with useful information (or upselling). The system will include the reservation process for the restaurants, room service information and the room service ordering process. F&B is the initial focus because it is HOA’s main source of income from hotel guests and it limits the initial investment; additional functionalities can be added at a later stage.
Authors (BIM2015 – Team 6):
- Stéphanie Visser – 407153
- Job Deibel – 407756
- Dirk Breeuwer – 329445
- Jord Sips – 421144
- Colin van Lieshout – 414788
If Moore’s law continues to hold, there will come a point in time when computer processing power exceeds the processing power of the human brain. Faster computers could have a huge impact on everyday life and the tasks we perform. To give a sense of how fast the brain is, we will use the estimate of its processing power made by Dharmendra Modha, IBM Fellow and IBM Chief Scientist for Brain-inspired Computing. He estimated that the brain has 38 petaflops of processing power; a petaflop is a thousand trillion floating point operations per second, so that is 38,000,000,000,000,000 in numbers. Flops stands for floating point operations per second and is an indicator of the processing power of a CPU (Central Processing Unit). Some estimates to put the human brain into perspective:
iPhone 6 has about 6,250,000,000 flops.
Samsung Galaxy S6 has about 33,000,000,000 flops.
Nintendo Wii U has about 333,000,000,000 flops.
PlayStation 4 has about 1,833,000,000,000 flops.
Tianhe-2 supercomputer has about 33,860,000,000,000,000 flops.
As you can see, the computing power of the world’s fastest supercomputer is getting close to equalling the human brain’s processing power. But when will commercially available processors surpass the processing power of the human brain? The fastest commercially available processors are the Core i7 5960X and 5930K, at about 354 gigaflops (354,000,000,000). According to Moore’s law (helped by multithreading and service-oriented architecture), it would take roughly another 32 years before processors faster than the human brain become commercially available.
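The arithmetic behind that projection can be sketched in a few lines, assuming the common reading of Moore's law in which processing power doubles every two years and using the figures quoted above:

```python
import math

# Figures quoted above
brain_flops = 38e15   # Modha's estimate for the human brain (38 petaflops)
cpu_flops = 354e9     # Core i7 5960X / 5930K, ~354 gigaflops

# Number of doublings needed, then years at one doubling per two years
doublings = math.log2(brain_flops / cpu_flops)
years = doublings * 2
print(round(years, 1))  # in the low thirties, close to the post's ~32-year estimate
```

The exact answer is sensitive to the assumed doubling period (18 months would give roughly 25 years), which is why such projections should be read as order-of-magnitude estimates.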
Processing power is one thing; modeling software that behaves and thinks in ways that surpass human knowledge and rationale is another. Artificial intelligence is already being developed, but is nowhere near human intelligence. The combination of super-fast processors and software that can improve other software could lead to exponential technology development. This could bring many benefits, such as human augmentation, robots that take over many human tasks, and increased efficiency in everything that is computerized. We also have to be cautious when technology develops at an exponential rate, as artificial intelligence could “outsmart” human beings. Looking at the current trends in the development of processing speed, we can definitely say that there are some exciting technological developments and revolutions to come.
Forbes.com, (2015). Forbes Welcome. [online] Available at: http://www.forbes.com/sites/alexknapp/2014/06/23/chinas-tianhe-2-remains-the-worlds-fastest-supercomputer/
Pages.experts-exchange.com, (2015). Processing Power Compared. [online] Available at: http://pages.experts-exchange.com/processing-power-compared
Puget Systems, (2015). Linpack performance Haswell E (Core i7 5960X and 5930K). [online] Available at: https://www.pugetsystems.com/labs/articles/Linpack-performance-Haswell-E-Core-i7-5960X-and-5930K-594/
Researcher.watson.ibm.com, (2015). Dharmendra S. Modha – IBM. [online] Available at: http://researcher.watson.ibm.com/researcher/view.php?person=us-dmodha
TechRadar, (2015). Intel processors: what you need to know to get started. [online] Available at: http://www.techradar.com/news/computing-components/processors/intel-processors-everything-you-need-to-know-1282987/3
Is Apple trying to slowly kill Google? Since they are fierce rivals, this scenario does make sense, but how would ad-blockers play a part in that strategy?
Let’s take a step back.
Google announced total 2014 revenues of $66 billion (Investor.google.com, 2015). Can you guess what portion of that $66 billion comes from advertising? Whatever you guessed is probably wrong, because the vast majority of Google’s revenue, $59 billion in fact, comes from advertising. Specifically, in 2014, 68.3 percent of Google’s revenue came from advertising through Google sites and 21.2 percent through advertising via Google network sites.
If we zoom in further, we can observe that in 2014 Google had roughly $12 billion ($11.8 billion) in mobile search revenue, almost 20 percent of its total revenue. Of that $12 billion, roughly $8.8 billion was attributed to iOS devices. Taking into account that half of Google’s search volume comes from mobile devices, we can infer that the share of revenue Google makes from mobile is only going to grow (Sterling, 2015).
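A quick sanity check of these figures (all in billions of dollars, as quoted above) shows they are internally consistent:

```python
# Sanity-checking the 2014 figures quoted above (billions of dollars).
total_revenue = 66.0
ad_share = 0.683 + 0.212            # Google sites + Google network sites
ad_revenue = total_revenue * ad_share
print(round(ad_revenue, 1))          # ~59.1, matching the ~$59B quoted

mobile_search = 11.8
ios_portion = 8.8
print(round(ios_portion / mobile_search * 100))  # ~75% of mobile search revenue from iOS
```

The 75 percent figure matches the headline of the Sterling (2015) report cited below.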
Distribution of Google’s revenues from 2001 to 2014, by source
A permanent threat to Google’s revenue is ad-blockers: separate programs or browser add-ons that remove or filter advertising content in a webpage or application. Ad-blockers are available for all operating systems (Windows, Linux, OS X), mobile platforms (Android, iOS, Windows) and browsers, including Firefox, Chrome and, recently, Safari. The obvious benefits for the user are faster, lighter (in terms of data) and cleaner rendering of websites, and a frustration-free navigation experience without annoying pop-ups or videos loading without your permission. Another important benefit is increased privacy, since ad platforms cannot track your personal data. Security can also be a reason for using ad-blockers, since dangerous malware is sometimes hidden in advertisements (Navaraj, 2014).
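The core mechanism is simple: requests the page makes are compared against a list of filter patterns, and matches are dropped. A toy sketch (real blockers use much richer rule syntax, e.g. the EasyList format; the patterns below are invented for illustration):

```python
# Toy sketch of ad-blocking: drop any request whose URL matches a blocklist
# pattern. Real filter lists (e.g. EasyList) support wildcards, domain
# anchors and element-hiding rules; plain substrings are used here for brevity.
BLOCK_PATTERNS = ["doubleclick.net", "/ads/", "ad-banner"]

def is_blocked(url: str) -> bool:
    return any(pattern in url for pattern in BLOCK_PATTERNS)

assert is_blocked("http://ad.doubleclick.net/pixel.gif")
assert is_blocked("http://example.com/ads/banner.js")
assert not is_blocked("http://example.com/article.html")
```

Because the matching happens before the request leaves the browser, the ad is never downloaded, which is what produces the speed, data and privacy benefits described above.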
Despite the obvious benefits mentioned above, ad-blockers are a threat to content providers (such as websites, publishers and video producers), which depend on advertising as their main source of income, earning either every time an ad is shown to a visitor or every time an ad is clicked. They are also a threat to advertising providers such as Google, which depend on users viewing or clicking ads on behalf of their advertisers.
There is a growing trend in the use of ad-blocking software. Globally, the number of active users surfing the web behind ad-blocking software was 21 million in 2009 but had grown to 121 million by 2014.
In 2015, the number of ad-blocker users globally grew by a further 41% year on year, to about 200 million, and is expected to grow even more (Blog.pagefair.com, 2015). So in the last six years, the number of ad-blocker users has multiplied tenfold. The lost revenue due to ad-blockers is beyond imagination: it is estimated to reach $41.4 billion by 2016. That has dire consequences for publishers and content providers in general, as well as for providers of ads, mainly Google.
You might wonder: OK, but what does Apple have to do with Google’s revenue model and the widespread use of ad-blockers, and how does that amount to Apple trying to kill Google?
Well, recently Apple introduced a feature on its mobile devices that allows the installation of ad-blockers.
Can you see it now?
Apple is indirectly attacking Google’s revenue model (which is based on advertising) by enabling iOS users to filter and block all advertising from Google. That is a declaration of war, with huge consequences for Google and for publishers whose websites depend on ad revenues that can now be avoided.
The question now is how Google is going to respond, and how publishers are going to survive without their main source of revenue.
Student number: 401028
Blog.pagefair.com, (2015). The 2015 Ad Blocking Report | Inside PageFair. [online] Available at: http://blog.pagefair.com/2015/ad-blocking-report/ [Accessed 10 Oct. 2015].
Grossman, L. (2015). The Great Ad-Blocker Battle. [online] TIME.com. Available at: http://time.com/4065962/our-attention-is-just-a-pawn-in-the-great-game-of-silicon-valley/ [Accessed 12 Oct. 2015].
Investor.google.com, (2015). 2014 Financial Tables – Investor Relations – Google. [online] Available at: https://investor.google.com/financial/2014/tables.html [Accessed 12 Oct. 2015].
Investor.google.com, (2015). Google Inc. Announces Second Quarter 2015 Results – Investor Relations – Google. [online] Available at: https://investor.google.com/earnings/2015/Q2_google_earnings.html [Accessed 12 Oct. 2015].
Navaraj, M. (2014). The Wild Wild Web: YouTube ads serving malware. [online] Bromium Labs. Available at: http://labs.bromium.com/2014/02/21/the-wild-wild-web-youtube-ads-serving-malware/ [Accessed 12 Oct. 2015].
Patel, N. (2015). Welcome to hell: Apple vs. Google vs. Facebook and the slow death of the web. [online] The Verge. Available at: http://www.theverge.com/2015/9/17/9338963/welcome-to-hell-apple-vs-google-vs-facebook-and-the-slow-death-of-the-web [Accessed 12 Oct. 2015].
Sterling, G. (2015). Report: Google Had $12 Billion In Mobile Search Revenue, 75 Percent From iOS. [online] Marketing Land. Available at: http://marketingland.com/report-google-had-12-billion-in-mobile-search-revenue-75-percent-from-ios-130248 [Accessed 12 Oct. 2015].
Staying up-to-date on what is going on in Tech world: Time Management!
Thank you very much for your enthusiasm and responses to my previous blog post, in which I asked you to share your sources of information on the world of Technology. As promised, here is a second post in which I would like to summarize the results for you.
A favorite of one of the commenters, and also mine at the moment, is TechCrunch. Especially for those among us who do not have much time to explore the whole web for interesting news, the Crunch Report gives you a quick five-minute summary of the main tech news of the day. Here is an example from a few days ago (Thursday), which discusses, among other things, the development of Facebook emojis: http://techcrunch.com/video/crunchreport/#.qiir4i:POct
However, in case you are more interested in the way companies use technology in their day-to-day operations (national news as well), signing up for a specific newsletter from a webpage such as Emerce can be recommended.
In order to get news from more websites at once, use Feedly! This website/app enables you to select the websites you enjoy following, and their news will then all appear in your Feedly newsfeed.
The commenters very much agree that there is a massive load of information coming from various sources; each news page provides its own newsletter. There is so much to read that a little organization in keeping track of what you would still like to read later may be useful. The tool Pocket helps in situations where you come across an interesting article but do not have the time to read it at that very moment: it enables you to easily save the article for later, so you can read it on the train or at any other moment.
To summarize: if your life is as busy as mine, just find five minutes a day to watch the Crunch Report, add the news pages you are interested in to your Feedly, and save interesting articles to your Pocket for moments when you do have time!
Colin van Lieshout
Modern Wars: How Information Technology Changed Warfare
On the 30th of September, Vladimir Putin, President of Russia, announced that Russia had conducted its first air strikes in Syria, targeted at ISIS (or ISIL). In the days after, however, the United States of America and other countries began to question Russia’s motives and its use of old-school bombing technology, which might harm civilians and inflame the civil war in Syria (CNN/Time, 2015). According to US officials, Russian bombing technology lags behind American weaponry in terms of accuracy. As such moves increase the tensions between East and West, and businesses use information technology to reach their goals, I started to research how information technology has changed warfare over time.
The main goals of warfare have not really changed, but the way wars evolve and are waged certainly has. Just a hundred and twenty years ago, armies marched to battle in their uniforms, lined up against one another, and mainly used weapons with a short effective range, so the people who killed one another were always in close proximity. Later on, longer-range weapons emerged, and the distance between soldiers became larger and larger. Today, some countries have the capability to destroy towns without having to be physically at the site, or even within hundreds of miles of it, all due to the introduction of IT into modern warfare, which enables people to fight wars at the touch of a button. The instantaneous transfer of information through the Internet, and the availability of the Internet around the world, also increases the number of participants in a war: unarmed actors thousands of miles away can take part in a conflict simply by sitting at their computers, providing funding or (video/picture) information through the Internet or deep web.
Without a doubt, competing with gigantic smartphone brands such as Apple, Samsung, LG and HTC is tough. However, the startup Fairphone apparently saw a niche market for a new type of smartphone.
The introduction of the ‘smartphone’ was probably the last major change in the telephone industry. However, the working conditions and salaries at the manufacturing companies that produce those smartphones have been heavily criticized. In 2014, an undercover BBC investigation discovered poor treatment of workers in the Chinese factories that assemble iPhones, confirming that Apple had broken its promises to protect factory workers (Bilton, 2014). Even worse, in 2010, 14 factory workers under the age of 25 committed suicide at Apple’s biggest manufacturer, Foxconn (Moore, 2012).
Bas van Abel, the founder of Fairphone, believed there was a demand for a fairly produced, or ‘fair trade’, smartphone. He therefore started by raising awareness about the ethical production of products and creating a ‘buzz’ around that idea. Fairphone’s goal was to establish collaborative, fair and transparent relationships with its manufacturers in order to ensure worker representation, safe working conditions and fair pay (Fairphone, 2015). In addition, the company aimed to extend a smartphone’s longevity by improving the life span of the product and increasing repairability through a modular design.
Moreover, millions of smartphones are thrown away every year, generating mountains of electronic waste (Reardon, 2012). Fairphone aimed to solve this problem by creating a simple modular design that allows users to repair their own phone, replacing old parts with new ones and thereby reducing electronic waste.
With the above-mentioned ideas in mind, they started developing the Fairphone, but instead of turning to investors or venture capitalists, they went directly to the end users, because they believed the phone should be funded by the public. In their crowdfunding campaign, they were looking for 5,000 people willing to pay 325 euros for a mid-range Android device with no special specifications or industry-changing features (Best, 2014). The value proposition was therefore the story behind the product, and they needed to fully leverage its social message in order to be successful. In their first campaign, they sold more than 10,000 Fairphones upfront, meaning production could start right away.
After the success of their first model, they launched the Fairphone 2 two weeks ago, on September 25. Compared to the first model, the Fairphone 2 has a longer life span and is even more fairly produced. In technical terms, the software got upgraded and the phone is equipped with a faster processor and a better screen resolution (Van Lier, 2015). However, the Fairphone 2 has a price tag of 525 euros, which, looking at the hardware and specifications, is rather expensive (Verlaan, 2015).
For smartphone users who often drop their phone, the Fairphone might be a cheaper option, since all parts are easily replaceable. The device also has a considerably long life span, though I think consumers often wish to buy a new smartphone after two years anyway.
To conclude, the most important ‘feature’ of this smartphone is that it is produced in a fair and ethical way. As the trend in the organic meat industry shows, more and more people care about how food and products are made. This is probably the only reason there is demand for this product, because if you compare the Fairphone 2 with a smartphone with equal specifications, you are very likely to find a cheaper option. So the Fairphone 2 can be successful in its niche market if the company fully leverages the media attention and word-of-mouth effect the product can bring, but it will not evolve into a real ‘competitor’ to Samsung or Apple.
Best, J. (2014). The gadget with a conscience: How Fairphone crowdfunded its way to an industry-changing smartphone [Blog post]. Retrieved from http://www.techrepublic.com/article/the-gadget-with-a-conscience-how-fairphone-crowdfunded-its-way-to-an-industry-changing-smartphone/
Bilton, R. (2014, December 18). Apple ‘failing to protect Chinese factory workers’. BBC. Retrieved from http://www.bbc.com/news/business-30532463.
Fairphone. (2015, October 09). Our roadmap to a fairer phone. Retrieved from https://www.fairphone.com/roadmap/
Moore, M. (2012, January 11). ‘Mass suicide’ protest at Apple manufacturer Foxconn factory. Telegraph. Retrieved from http://www.telegraph.co.uk/news/worldnews/asia/china/9006988/Mass-suicide-protest-at-Apple-manufacturer-Foxconn-factory.html
Reardon, S. (2012). Will we ever be able to buy a fair-trade smartphone?. New Scientist, 214(2860), 18.
Verlaan, D. (2015). FairPhone 2 Preview: aan de slag met de eerste modulaire smartphone [Blog post]. Retrieved from http://www.androidplanet.nl/reviews/fairphone-2-preview/
Hasn’t it happened to every single one of us? We decided to add a new password (or maybe just a new variation of an old one) to the list of two to four passwords we use for all our accounts, either because we felt we had used it too many times, or because the requirements asked for a different special-character combination than the two versions of the same password we already had. We thought we were so clever when we created this super-complicated and super-safe new password, and we decided not to write it down because… well, we all know we are not supposed to do that. But now we are sitting in front of our laptop, staring at the screen, hoping that this super-safe password will find its way back into our thoughts.
At some point not too long ago, fingerprints and other sorts of biometric data, like iris scans, were considered the ultimate safety precaution. In today’s interconnected world, however, biometric data is more and more vulnerable to falling into the wrong hands. When the U.S. Office of Personnel Management was hacked in 2015, 5.6 million fingerprints were stolen. Even though hackers’ ability to make use of those stolen fingerprints is still limited at the moment, this is expected to change quickly as technology evolves (The Guardian, 2015).
So if we keep forgetting our passwords, we are not supposed to write them down, and even fingerprints and retina prints may soon no longer be safe, how can we protect our private property?
Researchers from Binghamton University have developed a way for security systems to verify a person’s identity through that person’s brainwaves. A study showed that brains react to different words with different electrical potentials representing neural communication, and that those reactions can be used to verify a person’s identity with an accuracy of 94 percent. The study also shows that those potentials stay the same over time, making it possible to use this method over long periods, for example in security systems, and that only three electrodes, the minimum required to obtain clean data, have to be placed on the person’s scalp to measure the reactions (Armstrong et al., 2015).
Those reactions, the so-called ‘brainprints’, are considered a very safe way to protect private property, since unlike fingerprints or retina prints they cannot easily be stolen by hackers. Furthermore, finger and retina prints are not cancelable: you cannot simply get a new fingerprint or a new retina print, so once that kind of biometric data is compromised, it is no longer of value for security systems. The biometric data of a ‘brainprint’, however, is indeed cancelable. If a ‘brainprint’ is compromised through hacking, it can be reset, making this method of property protection very reliable (Binghamton University, 2015).
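To make the verification idea concrete, here is a toy illustration (not the actual Brainprint algorithm from the study): a fresh ERP reading is correlated against a stored template, and identity is accepted only if the correlation clears a threshold. The readings below are made-up vectors.

```python
import math

# Toy brainprint verification: Pearson correlation against a stored template.
# Real ERP biometrics use multi-channel signals and trained classifiers;
# the vectors and threshold here are invented for illustration.

def pearson(a, b):
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def verify(reading, template, threshold=0.9):
    return pearson(reading, template) >= threshold

template = [0.1, 0.8, 0.3, -0.5, 0.2]            # enrolled 'brainprint'
same_person = [0.12, 0.79, 0.28, -0.48, 0.22]    # noisy reading, close to template
impostor = [0.9, -0.2, 0.5, 0.4, -0.7]           # unrelated reading

assert verify(same_person, template)
assert not verify(impostor, template)
```

Cancelability would then amount to enrolling the person on a new set of stimulus words, producing a fresh template.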
Do you think this innovation will turn into a technology that will be widely accessible to everyone in everyday life? Or do you think it will only gain relevance (if so at all) in a high security-seeking business or governmental context?
How do you personally feel about this new discovery? Would you rather stick to your analog passwords that you have gotten so used to? Or are you looking forward to a future where you do not need to remember all those annoying password variations anymore?
Armstrong, B. C., Ruiz-Blondet, M. V., Khalifian, N., Kurtz, K. J., Jin, Z., & Laszlo, S. (2015). Brainprint: Assessing the uniqueness, collectability, and permanence of a novel method for ERP biometrics. Neurocomputing.
Binghamton University, State University of New York. (2015, June 2). Brain’s reaction to certain words could replace passwords. ScienceDaily. Retrieved October 8, 2015 from http://www.sciencedaily.com/releases/2015/06/150602160631.htm.
The Guardian (2015). US government hack stole fingerprints of 5.6 million federal employees. Retrieved from http://www.theguardian.com/technology/2015/sep/23/us-government-hack-stole-fingerprints.
It’s the year 2030 and you are walking with your friend through a new city. You see a cosy little cafe and both of you decide to enter. As soon as you walk in, the hostess says: “Hello Mr/Ms ‘YourName’, we have a table near the back of our cafe, as indicated in your preferences.” As you sit down, the hostess asks: “Would you like to order a cappuccino, like last week, or do you want something else this time?” You order a cappuccino, then tap on the table to view the menu on its surface. You get a list of recommended items based on your preferences. You decide to order a tuna salad, like always.
This future event of you and your friend going into a cafe is pure fiction; the knowledge of the cafe, however, may not be. How is it possible that this café knew you were in the neighbourhood, and how did it know your favourite and preferred drinks and food? The answer: “Smart Dust”.
Smart Dust consists of tiny microelectromechanical systems (MEMS) that can detect, for example, vibrations, humidity, temperature, light, movement, magnetism, and chemicals. These devices, about 2 mm each, work together as a system to transfer data to each other. Each device has a small “router” in it to send and receive information, with a maximum wireless range of about 10 metres. Due to this small range, many devices need to be close to each other to transfer data over a larger scale. Their energy source is solar: each device carries a small solar cell and a small battery.
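The relaying idea can be sketched as a toy simulation: a reading reaches a far-away collector only if there is a chain of motes, each within the ~10 m radio range of the next. The positions and the breadth-first search below are illustrative assumptions, not an actual Smart Dust protocol:

```python
from collections import deque
from math import dist

RANGE_M = 10  # assumed maximum wireless range per mote, in metres

def can_relay(motes, src, dst):
    """True if a reading can hop from motes[src] to motes[dst],
    each hop covering at most RANGE_M metres (breadth-first search)."""
    seen, queue = {src}, deque([src])
    while queue:
        cur = queue.popleft()
        if cur == dst:
            return True
        for i, pos in enumerate(motes):
            if i not in seen and dist(motes[cur], pos) <= RANGE_M:
                seen.add(i)
                queue.append(i)
    return False

# Motes scattered in a line, 8 m apart: 32 m covered with 10 m radios.
line = [(0, 0), (8, 0), (16, 0), (24, 0), (32, 0)]
print(can_relay(line, 0, 4))                   # True: the reading hops mote to mote
print(can_relay([(0, 0), (8, 0), (32, 0)], 0, 2))  # False: a 24 m gap breaks the chain
```

This is why density matters: remove one mote from the middle of the chain and the whole far end goes dark.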
The idea originates with Kristofer Pister, a professor at Berkeley. When Pister presented the idea to his colleagues, the concept attracted the US military, and Pister received funds to further his work. The first test was in 2001, when six tiny devices (MEMS) were dropped in a field to detect a military vehicle. The test was successful: they even managed to capture the course and speed of the vehicle. Last year a team of Michigan students successfully embedded solar cells in the MEMS to extend their life drastically.
There are many business implementations for Smart Dust. Pister managed to gather information about the weather in San Francisco within a radius of 21 km using Smart Dust. Defence-related implementations are also possible, such as battlefield surveillance and transportation tracking. Transportation tracking can in turn be used to control inventories, where the tiny Smart Dust devices would take over from RFID technology. You can also think of product quality control: some products need to be stored under certain conditions, and Smart Dust makes it easy to monitor temperature, humidity, vibrations, and so on. There are more business implementations you can think of, such as virtual keyboards and smart offices.
The main objective for researchers is to extend the life of the devices even further. Once companies start to produce Smart Dust, the variable cost of one device will be extremely low. The machines to produce MEMS will be costly at the start, but once this technology becomes feasible for companies it will be implemented on a large scale. Researchers do urge caution when implementing this technology because of its environmental impact: no one wants to live in a city with billions of devices floating in the air. Pister did inhale a device (MEMS) and said that it is comparable to inhaling a fly; you will cough it up.
Researchers also urge caution regarding privacy. Smart Dust devices can already measure a lot of things, and new kinds of sensors are still being developed; it is possible that Smart Dust will contain microphones to listen in on conversations. Let’s go back to the introduction: it is possible that your clothes, your identity card, and maybe you yourself will carry Smart Dust that holds information about you and communicates it to businesses. Whereas cameras are at least visible and therefore easy to debate, Smart Dust is not; people cannot see that it is there and do not know whether they are being monitored, or for what purposes. Another problem is that information gathered by Smart Dust could be stolen by hackers. You can also imagine Smart Dust being used to spy on people or businesses: someone could scatter a few devices in a house or conference room to obtain classified information.
Smart Dust is a technology with a lot of potential, which is why it entered Gartner’s hype cycle. It will take some more years to make this technology feasible for the market. Meanwhile, the discussion about how far the monitoring of people can go with current technologies will go on, and it will intensify if Smart Dust is implemented.
Kevin Schaap (358985)
Kahn, J. M., Katz, R. H., & Pister, K. S. J. (1999). “Mobile Networking for Smart Dust”, ACM/IEEE Intl. Conf. on Mobile Computing and Networking, Seattle, WA, August 17–19, 1999.
Pister, K. S. J., Kahn, J. M., & Boser, B. E. (1999). “Smart Dust: Wireless Networks of Millimeter-Scale Sensor Nodes”, Highlight Article in 1999 Electronics Research Laboratory Research Summary.
Hsu, Kahn, J. M., & Pister, K. S. J. (1998). “Wireless Communications for Smart Dust”, Electronics Research Laboratory Technical Memorandum M98/2, February 1998.
Welcome to the adult world. We love sex, and therefore we love porn. Although reliable public statistics are difficult to find, porn is estimated to be a $97 billion industry (NBC News 2015). According to The Guardian (2013), traffic on adult sites easily surpasses traffic on social media or shopping sites in the UK. At the same time Mindgeek, operator of, among others, PornHub, claims to attract over 100 million visitors a day, who consume over 1.5 terabytes of data per second (The Economist 2015). Given the above, one can easily state that the porn industry is still booming. But what makes the porn industry as successful as it is? From an information technology perspective: nerves of steel.
Before the rise of the Internet, porn used to be exclusive. Regulations and taboos, among other things, ensured that porn was not easily available, which drove its price up. When the Internet became widely available, many traditional industries faced a massive challenge; yet the professional porn industry, contrary to many others, quickly picked up the opportunities the Internet presented to enhance and enrich its business model. The porn industry proved to be extremely technology-driven; according to CNN, many now widely used online technologies, such as credit-card verification and streaming video, can “be traced back to innovations designed to share, and sell, adult content” (CNN 2010). Business Insider even states that “the concept of e-commerce (…) owes much of its early existence to porn” (Business Insider 2013). And not only did the porn industry experiment with different types of technologies; several online sales methods can be traced back to the adult industry as well.
On the other hand, the Internet has posed challenges to the professional porn industry, mainly in the form of porn tubes. A massive surge in availability of amateur porn partly contributed to the shift of porn from a speciality good to a commodity good, hence significantly lowering the price of porn. Although professional porn producers have found ways to strategically use tubes to draw customers to their sites, this industry still suffers from lower profits than before.
The porn industry tries to restore its profits by, again, embracing technological innovations and models. PornHub recently announced a crowdfunding initiative for a porn movie (shot in space); an ad-free Netflix-like streaming service has been introduced; and options such as virtual reality and robotics are currently heavily explored.
We all love sex. How do you think that the porn industry should modify its business model through information technology, in order for it to increase profits? What is the future of the porn industry?
Business Insider 2013, Porn: The hidden engine that drives innovation in tech [online]. Available at: http://www.businessinsider.com/how-porn-drives-innovation-in-tech-2013-7?IR=T [Accessed 1 October 2015].
CNN 2010, In the tech world, porn quietly leads the way [online]. Available at: http://edition.cnn.com/2010/TECH/04/23/porn.technology/ [Accessed 1 October 2015].
NBC News 2015, Things are looking up in America’s porn industry [online]. Available at: http://www.nbcnews.com/business/business-news/things-are-looking-americas-porn-industry-n289431 [Accessed 1 October 2015].
The Economist 2015, Naked capitalism [online]. Available at: http://www.economist.com/news/international/21666114-internet-blew-porn-industrys-business-model-apart-its-response-holds-lessons [Accessed 1 October 2015].
The Guardian 2013, Porn sites get more internet traffic in UK than social networks or shopping [online]. Available at: http://www.theguardian.com/technology/2013/jul/26/porn-sites-internet-traffic-uk [Accessed 1 October 2015].
The rise of Web 2.0 changed the web from a static portal to a dynamic workplace without physical barriers, through which people across the world are able to connect and collaborate. This resulted in new business models and new ways of operating that use ‘the crowd’ as a main resource. This blog post will compare two types of crowd usage (crowdfunding and crowdsourcing) through two electronic marketplace platforms (Kickstarter and Freelancer).
Using Kickstarter, entrepreneurs are able to attract capital using the crowd as their main source of investment, which is called crowdfunding. Entrepreneurs submit their ideas, and the crowd can decide to ‘back’ these projects by donating money, sometimes in return for a finished product. Kickstarter earns its money by charging a commission-based fee of 5% on all successfully funded projects; the business model is fully driven by transaction volume. One of its success stories is the Pebble E-Paper Watch: this project reached its $100,000 goal in just two hours, and was eventually pledged more than twenty times the expected amount (Jauregui, 2012).
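The commission model boils down to simple, all-or-nothing arithmetic, which a few lines of Python can make concrete (the 5% fee is from the post; the helper name and the exclusion of payment-processing costs are my simplifications):

```python
def kickstarter_revenue(pledged, goal, fee_rate=0.05):
    """All-or-nothing funding: the platform takes its cut only if the goal is met."""
    return pledged * fee_rate if pledged >= goal else 0.0

# Pebble-style numbers: a $100,000 goal pledged roughly twenty times over.
print(kickstarter_revenue(pledged=2_000_000, goal=100_000))  # 100000.0
print(kickstarter_revenue(pledged=80_000, goal=100_000))     # 0.0 (goal missed)
```

The second case shows why the model is ‘fully driven by transaction volume’: a project that misses its goal generates no revenue for the platform at all.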
Crowdsourcing is used for four main purposes: solving problems, generating ideas, designing logos/commercials/websites, and outsourcing human intelligence tasks. At the same time, people around the world are looking for work matching their specialisation (Boons, 2014). Freelancer created an online marketplace that connects these two sides and enables online outsourcing. Its revenue model is subscription- and commission-based: both project suppliers and freelancers need a paid subscription to participate in this electronic marketplace, and the commission is a fixed percentage of the value of every completed project. Today, Freelancer has over 16 million users and more than 8 million projects. This marketplace is expected to grow as internet adoption in low-wage countries increases.
Comparing Kickstarter & Freelancer
Comparing Kickstarter and Freelancer, we found mostly similarities. Both have the largest market share in their respective markets, and both provide a hierarchy-free marketplace (Malone, Yates, & Benjamin, 1987). They exploit similar business models based on fees and commissions, though Freelancer has more revenue streams due to its paid subscriptions. Furthermore, Kickstarter and Freelancer both exploit the absence of matured legislation and governance guidelines, limiting their responsibilities towards the crowd, although Kickstarter has recently shown its responsibility by becoming a public benefit corporation (Kickstarter, 2015). The main difference lies in the role of demand and supply, which is fundamentally different when comparing crowdsourcing to crowdfunding: whereas in crowdfunding the crowd solely offers funding, in crowdsourcing the crowd is responsible for providing services.
Kickstarter and Freelancer are ever growing in size, as crowdfunding and crowdsourcing are still rising in popularity. In the long run, however, the growth of crowdfunding is expected to reach a ceiling as its yearly growth starts to decrease. Crowdsourcing is expected to keep on growing, as the job market serves an essential human need, especially in low-wage countries that are becoming increasingly connected to the Internet. The riskiest element that could disturb the growth of both crowdfunding and crowdsourcing is the maturation of legislation and governance structures. Legislation will most likely shift the landscape of responsibilities for crowdfunding and crowdsourcing websites, which could have an impact on all crowd-based business models.
Jauregui, A. (2012). ‘Pebble iPhone Watch Is Highest Grossing Kickstarter Project Ever’. Accessed on 23 September 2015 through http://www.cnbc.com/id/47100168
Boons, M. (2014), Session 8: The Business Implications of Web 2.0 [PowerPoint slides], Retrieved from RSM http://www.eur.edu/
Malone, T.W., Yates, J., and Benjamin, R.I. (1987). Electronic Markets and Electronic Hierarchies. Communications of the ACM 30(6) 484-497.
Strickler, Y., Chen, P., Adler, C. (2015). ‘Kickstarter is now a Benefit Corporation’. Accessed on 24 September 2015 through https://www.kickstarter.com/blog/kickstarter-is-now-a-benefit-corporation
Hicham Gouiza 322226
Tony Jordan 400986
Kevin Schaap 358985
Jurgen Langbroek 336822
Glenn de Jong 357570
Being a little clumsy and at the same time an absolute smartphone addict, it regularly happens that my mobile phone slips out of my hands and lands, mostly screen-first, on the floor. The time between realising I have dropped my phone and picking it up to check whether the screen is still in one piece usually makes my heartbeat increase drastically.
And this does not only happen to me. According to a survey conducted by case manufacturer Tech21, as many as 90% of users drop their phone at least once a month (Blandford, 2013). Although this number should be regarded with caution, since Tech21’s main interest lies in selling as many phone cases as possible, it surely gives us a hint that, with the rapid growth of the mobile device market, phone dropping and the associated repair costs have become an issue.
However, there is good news for all the phone droppers on this planet! Soon we might not only be able to drop our phones without having to fear any consequences, but we might even be able to bend them as much as we like. This can be achieved through a new superlight and superstrong material called graphene: a one-atom-thick layer of graphite, making it both transparent and bendable. In addition to these characteristics, graphene also conducts heat and electricity exceptionally well, making it an optimal ingredient for future LED screens (De la Fuente, 2014).
Although the technology integrating graphene into smartphone LED screens is still in its infancy and we might therefore not yet see any transparent and bendable smartphone screens in the very near future, both researchers and the mobile phone industry have launched projects exploring the possible applications for graphene (Hamill, 2014).
For those who are now interested in graphene and its capabilities, I recommend having a look at the following TED talk by Mikael Fogelstrom, which provides some great explanations.
So what do you think? Is graphene really the new super material researchers like to promote it as? Will our future mobile devices be made of graphene? Do you see any further uses of graphene apart from the one discussed in this article?
Blandford, 2013.‘90% of people drop their phone at least once a month’, http://allaboutwindowsphone.com/, last visited: 20 September 2015.
De la Fuente, 2014.‘Graphene uses and applications’, http://www.graphenea.com/, last visited: 20 September 2015.
Hamill, 2014. ‘Smartphones Of The Future Will Use Graphene Touchscreens’, http://www.forbes.com/, last visited: 20 September 2015.
“Music and Math”
The amazing advances we have seen in the power of computing have led to information technology being applied to many aspects of our daily lives, often working behind the scenes. A nice example of this, which I came across during my minor “Exploration of New Markets through Innovation” at EUR, is:
HIT SONG SCIENCE (HSS):
HSS is a part of “Music Information Retrieval”; which is the science of retrieving information from music.
The term Hit Song Science was coined and trademarked by Mike McCready, co-founder of Polyphonic HMI and later Music Xray (both operating in the so-called “hit counselling business”).
HSS entails using statistical, signal processing, and machine learning methods to attempt to predict the commercial success (or, as some call it, the “mainstream potential”) of a song by looking at the characteristics of other songs that have been successful in the past.
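In spirit this is ordinary supervised learning: describe each song as a vector of audio features, label past songs as hits or flops, fit a model, and score new songs. The real feature sets and models are proprietary, so the sketch below uses made-up features (tempo, loudness, duration) and a deliberately simple nearest-centroid rule:

```python
# Toy "hit predictor": nearest-centroid over hand-made audio features.
# Each song is [tempo_bpm, loudness_db, duration_s] -- all values invented.

def centroid(rows):
    """Mean feature vector of a group of songs."""
    return [sum(col) / len(rows) for col in zip(*rows)]

def predict_hit(song, hits, flops):
    """Label a song a 'hit' if it lies closer to the centroid of past hits."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return sq_dist(song, centroid(hits)) < sq_dist(song, centroid(flops))

past_hits  = [[120, -6, 210], [124, -5, 200], [118, -7, 215]]
past_flops = [[80, -14, 320], [70, -16, 350], [95, -12, 300]]

print(predict_hit([122, -6, 205], past_hits, past_flops))  # True: near past hits
print(predict_hit([75, -15, 340], past_hits, past_flops))  # False: near past flops
```

Real systems would use far richer signal-processing features and stronger models, but the principle is the same: new songs are judged by their distance to what has sold before.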
Early studies claimed that this technology was successful in predicting the commercial success of songs, but other studies refute these claims (see “Hit Song Science Is Not Yet a Science”). Nonetheless, these firms still exist and are evaluating songs as we speak.
The implications of the adoption of this technology by the music industry are extensive. As the technology has become widely available (Polyphonic HMI and Music Xray), more stakeholders in the music industry, not only the biggest record labels, are able to benefit from it: producers, small record companies, and even individual artists can use HSS.
I have no problem whatsoever with record companies using this technology as a data mining tool to predict the success of songs. I also have no problem with artists and record companies using it to find one another (artists finding record labels that fit their sound, and vice versa). However, the creators of this technology now boast that it can not only predict the potential of a song but also suggest improvements to it. This worries me a little: I am not keen on software tampering with music to optimise the chance that it will be a success. I feel that music tampered with in this way may lose its human touch, feel, and sound.
The controversial part of HSS is that we perceive music as something very “human”, and I myself do not think machines can replace human ears when it comes to music. However, this sounds a lot like the argument people once used about machines never being able to play chess.
Even if this technology becomes successful and widely adopted by the music industry, I believe the positives (data mining and artist/label coordination) outweigh the negatives (losing the “human sound” in certain songs). I am not worried that the quality of music in general will decline due to HSS; it might have an impact on pop music, but I believe excellent music will always be created. There will always be artists who refuse to use such technologies, even if they remain relatively “underground”.
Besides, since the inception of the internet, underground is not all that underground anymore (see “From Niches to Riches: Anatomy of the Long Tail”).
On a last note:
I am very curious about musical compositions made solely by machines, a possible future application of Music Information Retrieval (though this has not produced anything noteworthy as of yet).
What do you think of HSS? Will it survive in the (pop-)music industry? Do you think it will improve music or maybe make it more generic? Do you think it will shift the focus from artists to individual songs if this technology were to be even more widely adopted?
Author: Euclid Alexis Haralambidis
Let me give you a quick look into my wallet. I open it and the first thing you find is a debit card: it has the Rabobank logo on it, my name, and some random numbers. This card allows me to pay at most shops. Next to it is my ‘OV-chipkaart’; again it has my name on it, a beautiful design, and some random numbers. This card allows me to pay for my travel. Behind that one is a card from my student association. Again it features a name and some random numbers, but this one allows me to buy drinks. In my wallet you can also find a printer card, a cinema card, and a coffee card. I bet some of you have even more cards. To be honest, this annoys me: I can’t leave the house without bringing at least four or five cards with me.
This is exactly the problem the guys and girls at Coin were facing. Let me introduce you to a smart card: “Coin is a connected device that can hold and behave like the cards you already carry. Coin works with your debit cards, credit cards, gift cards, loyalty cards and membership cards. Instead of carrying several cards you carry one Coin. Multiple accounts and information all in one place.”
Are you interested? Well, you’re not alone. Within just 40 minutes Coin had raised its crowdfunding target of $50,000, its video on YouTube went viral, and Twitter exploded. Praised by many, Coin seems to have a bright future ahead. So how does it work? Although the technique is quite sophisticated, usage is made very simple and accessible for the end user. With your personal Coin card you also receive a scanner: simply scan the cards you want on your Coin, and via the Bluetooth technology built into the scanner and the card, your cards’ information is transferred onto your Coin. Coin also offers an app for your phone that connects to your card, making it possible to manage, transfer, or delete cards from your Coin. Another smart feature is the tracker: if your Coin gets too far away from your phone, you get a notification warning you about possible danger. Is your card stolen? Simply erase all data.
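Coin has not published how its tracker works, but a common way to turn Bluetooth signal strength into a rough distance is the log-distance path-loss model; the calibration values and alert threshold below are assumptions for illustration only:

```python
def estimated_distance_m(rssi_dbm, tx_power_dbm=-59, path_loss_exp=2.0):
    """Log-distance path-loss model: rough distance in metres from signal strength.
    tx_power_dbm is the calibrated RSSI at 1 m (an assumed value here)."""
    return 10 ** ((tx_power_dbm - rssi_dbm) / (10 * path_loss_exp))

def left_behind(rssi_dbm, alert_at_m=10.0):
    """Warn when the card seems to have drifted too far from the phone."""
    return estimated_distance_m(rssi_dbm) > alert_at_m

print(round(estimated_distance_m(-59), 1))  # 1.0 -> card right next to the phone
print(left_behind(-85))                     # True: roughly 20 m away, warn the user
```

In practice RSSI is noisy (walls, pockets, interference), so a real implementation would smooth readings over time before firing an alert rather than reacting to a single sample.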
Although this sounds like an awesome gadget, not all is going well for Coin. First of all, you can’t actually buy it yet: the initial launch, planned for somewhere in 2014, has been delayed multiple times, and at the moment of writing the release has been pushed back again, to spring 2015. Sceptics are doubtful about this release date too, and predict an even later release. Moreover, this technology might be arriving a bit too late. Having all your cards in one sounds great, but who needs cards in 2020? More and more companies are working on payment by phone, removing the need for a physical card entirely. So even with its initial success, Coin’s future might not be as bright as thought.
Coin delays product launch, 22-08-2014, http://www.cnet.com/news/coin-delays-product-launch-until-spring-2015-as-questions-remain/
The Only Coin You Need To Replace All Your Cards, 02-12-2013, http://startups.fm/2013/12/02/the-only-coin-you-need-to-replace-all-your-cards.html
General information Onlycoin https://onlycoin.com/support/faq/
Rabobank and W3C Work together for an open online payment standard, 16-10-2014, http://tweakers.net/nieuws/99091/rabobank-en-w3c-werken-aan-open-standaard-voor-online-betalingen.html
At the time of writing it has been two hours since Microsoft announced their new product during a press event in San Francisco. Everyone was expecting a demo and details about Windows 9. However, Microsoft decided to surprise the public with the announcement of Windows 10.
Yes, you read this right – 10! They decided to skip 9 and jump directly to 10.
Microsoft has stated more than once that it has teams working on the next Windows version even before the previous one is released; the point is to release new versions and upgrade the OS as fast as possible. Despite this, they decided to skip a whole version, on which a dedicated team had been working for who knows how long, and focus on the one after that. No one has explained why they did so, but the 10th version is already being called “the best Windows yet”.
At the live event, when asked about the naming difference, they answered:
Q: Can you talk about the name? Seems weird going from Windows 8 to Windows 10.
A: This product, when you see the product in its fullness, I think you’ll agree with us that it’s a more appropriate name.
They even released a small introduction video showing some of the changes in the new OS. Windows 10 will reintroduce an enhanced version of the old Start menu that we all loved before Windows 8 removed it. They are also distancing the product from the Metro style, which was by and large optimised for mobile devices. At the same time, according to Terry Myerson, Microsoft’s executive VP of operating systems, the OS is designed to run on even more devices than the previous version.
I am sure that in the following days and weeks we’ll get a ton of information about the new Windows. Meanwhile, I found an interesting article dating from the 1st of April last year, in which the author, Pete Babb, jokes about Microsoft skipping a version of Windows and going directly to 10. Is this inside information that no one saw coming, or just a very lucky guess? 🙂
What do you think, dear readers? Is reintroducing old features a step backwards, or just a clever marketing strategy? Express your feelings in the comments below!
As the internet expands rapidly and new households are connected on a daily basis, the demand for new technologies to facilitate this expansion is growing as well. The old architecture of the internet had several drawbacks, such as the limited address space of IPv4. However, we have come a long way, and practically every place now has wireless internet (Wi-Fi). The advantages of Wi-Fi over cable networks are obvious and are the catalyst for the surge in mainstream adoption of wireless technology worldwide. For example, Taipei is currently implementing a free public Wi-Fi network for the city.
China is a rapidly rising economy, and as a result huge numbers of its citizens are moving to urban settings at a fast pace. These demographic changes put a huge amount of stress on governments, both national and local, to facilitate this transformation adequately. Although a large part of China already makes use of the internet, access is still very limited in smaller cities, both in the number of connections and in internet speed. Moreover, let’s not forget the Great Firewall of China, blocking a large amount of internet content from Chinese society. Nevertheless, scientists at China’s Fudan University have managed to create a futuristic solution: internet-emitting light bulbs.
This technology, dubbed Li-Fi, can at the moment provide internet to four computers per bulb at speeds of 150 Mbps, which is much higher than the average broadband connection in China. This new technology might thus offer China and other countries new paths to internet adoption. Moreover, it could save companies the huge investment costs of large internet servers and router networks within the firm; instead, they could replace all their light bulbs with Li-Fi ones, as lighting is needed everywhere anyway.
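The headline figures translate into simple capacity arithmetic: one bulb’s 150 Mbps is shared by the computers under it, so an office needs enough bulbs to keep each slice usable. A quick sketch (the per-bulb numbers are from the post; the even split between devices is my assumption):

```python
from math import ceil

def per_device_mbps(bulb_mbps=150, devices_per_bulb=4):
    """Assume the bulb's capacity is split evenly among its devices."""
    return bulb_mbps / devices_per_bulb

def bulbs_needed(total_devices, devices_per_bulb=4):
    """Minimum number of bulbs to cover a given number of devices."""
    return ceil(total_devices / devices_per_bulb)

print(per_device_mbps())   # 37.5 Mbps per computer under one bulb
print(bulbs_needed(30))    # 8 bulbs to serve a 30-desk office
```

Even the shared 37.5 Mbps per computer would comfortably exceed many broadband connections, which is what makes the per-bulb figure so striking.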
To me, these are interesting developments that could lead to a more worldwide adoption of the internet, creating an even more connected world. Also, imagine the possibilities of this technology, such as Li-Fi street lighting. Can you think of other useful applications?
Rotterdam – The impossible made possible. Nu.nl has just announced that researchers linked to the University of California have managed to make elastic screens [1,3]. These screens would not shatter when you drop them, would not fail when they are stretched, and would not break when you fold them. What more could a consumer wish for?
For now it seems the age of broken and shattered windows on your mobile phone is coming to an end. But by just replacing current screens with these more flexible ones, you would not be using all of their capabilities. For example, researchers mention the following potential applications [2,3]:
• curtains that light up the room
• smartphones that can be enlarged up to two times their original size
• electronics in clothing
The potential uses of these screens are amazing. Personally, I am very interested in the business side of their potential use. I could imagine, for example, people wearing clothes with enormous changing advertisements on them as they take the subway to work. In that way the reach of conventional advertisement is enlarged, and a new opportunity for businesses arises!
Another really interesting use would be screens that can be enlarged multiple times. For now, the screen has a resolution of 5×5 pixels, but the resolution is expected to increase rapidly in the near future. Hopefully screens will also be able to extend up to 10, 20, or maybe 100 times their size. In that way you would not buy a television, tablet, mobile phone, and laptop separately; you would always have your screen with you and connect it to the device you want. You just expand your screen to watch a soccer match, and afterwards you let it shrink, fold it, and take it upstairs to read the written summary of the game on the screen you now use as a tablet.
References to used sources:
• http://www.nu.nl/tech/3583732/elastisch-scherm-kan-worden-gevouwen-en-gerekt.html, 24 September 2013
• http://nutech.nl/gadgets/3583729/onderzoekers-maken-elastisch-scherm.html, 24 September 2013
• http://www.nature.com/nphoton/journal/vaop/ncurrent/full/nphoton.2013.242.html#access, 24 September 2013
Twitter would like to go public, as it announced in an appropriately delivered tweet last week. My first thought was rather positive, but as soon as the news media picked up the message, the sentiment around this IPO quickly drew a lot of comparisons to Facebook’s initial public offering. I was quite surprised by this pessimism, as another big tech company from Silicon Valley enters the stock market. So in this blog post I would like to share some positive reflections on this IPO, with an emphasis on a medium that is used extensively in this course and related research, but also on the implications, as the IPO should have an impact on the users of Twitter.
(You all know Twitter, so I will not bother you with a general introduction)
On September 12th, Twitter revealed via its own micro-blogging service that it had begun a process with America’s Securities & Exchange Commission that should ultimately lead to an initial public offering (IPO) of shares in the company. Should the firm’s plans go through, the IPO is likely to take place in 2014. Twitter’s listing will be the most eagerly anticipated tech-company flotation since Facebook’s, early last year. The company is already worth billions upon billions of dollars, and its founders will become extremely rich sometime in the next year. But Twitter, as we experience it, is also set for a radical redesign sometime soon: the company’s finances are set to change, but its looks may change just as much.
Avoid a debacle like Facebook’s IPO
Of course, Twitter hopes to avoid its IPO turning into an overhyped debacle like those of some other technology companies. Facebook’s IPO, in particular, is reportedly seen by Twitter executives as a cautionary tale. Asked the other day whether he had any advice for Twitter’s expected public offering, Facebook CEO Mark Zuckerberg joked: “I’m kind of the person you would want to ask last on how to make a smooth IPO.”
But I think, like some others I found, that Twitter’s IPO will be significant for several reasons:
Twitter is going public sooner than Facebook did. Zuckerberg waited more than eight years to conduct a Facebook IPO, primarily due to his notorious dislike of public markets, public disclosures, and public pressure to perform; by the time he made the decision to go public, there was so much hype and pre-IPO money invested in Facebook that it almost made the IPO unmanageable.
Secret IPO Filing
Keeping its IPO filing secret until the last minute could help Twitter avoid the overheated anticipation that Facebook had to deal with ahead of its disastrous IPO. It could keep its financial details away from rivals for a few extra months, as it grows a mobile advertising business that might compete with Facebook’s or LinkedIn’s. And if Twitter would rather keep some of its early history under wraps, it could avoid an outside audit and submit just two years of financial statements, as opposed to the customary five.
Advice of Facebook’s Zuckerberg
As mentioned, Facebook’s IPO is seen by Twitter executives as a cautionary tale. Yet Zuckerberg not only overcame a very challenging post-IPO period as he successfully transitioned Facebook into more of a mobile business; he also learned something from the process and recently gave his advice to Twitter.
Powerful media company rather than a technology company
It will further underline the power of social media (in Twitter’s case, short text messages of no more than 140 characters) as a tool for mass communications. It will mean that all of the biggest social-media behemoths, including Facebook and LinkedIn, will have left the shadows of private ownership for the spotlight of the public markets. And it will add another dimension to the growing rivalry between America’s leading tech firms, which are increasingly invading one another’s strongholds. (The Economist)
Twitter has evolved, particularly over the past couple of years, from a simple, text-based service toward something richer and fuller: users can now embed everything from pictures to Vines to full-on mini-apps within their tweets. It’s like a stream gradually becoming a raging river. Twitter has transitioned from a technology company into a powerful media company in its own right.
Morgan Stanley’s powerful tech team, which led the Facebook IPO, will not be the lead underwriter for the Twitter IPO. That role will be filled by Goldman Sachs. This is a big change in Silicon Valley, where Morgan Stanley has ruled the hottest tech IPOs in recent years, including those of Facebook, Groupon and Zynga, none of which went well.
It’s not clear yet if Twitter has picked an exchange to list on, but another difference would be if the company chooses to move away from Nasdaq, which for years has been the exchange of choice for Silicon Valley. Nasdaq was widely criticized for its role in the botched Facebook IPO and Nasdaq paid a $10 million penalty to settle SEC allegations stemming from its “poor systems and decision making” during Facebook’s IPO.
The area of growth that would most likely garner user outrage after a Twitter IPO would be the addition of more obtrusive advertisements to the Twitter feed. But that is nothing new to Twitter, which already monetizes heavily through advertisements. (In line with the platform’s use as a second screen, Twitter even works with advertisers to help them target specific television audiences). Because Twitter already employs an ad strategy that relies heavily on mobile and targeted ads, it’s more likely a change in advertising would come around Twitter’s other features, not the Twitter feed. One part of the user experience that may change will be an influx of new features and partnerships, particularly around entertainment, television and ecommerce. Twitter recently hired former Ticketmaster president Nathan Hubbard to run commerce for the platform and bring shopping to the Twitter feed. The added transparency that comes with filing earnings documents each quarter will mean more features for users.
One thing is certain: Dick Costolo, Twitter’s CEO, won’t be showing up for the Wall Street road show in a hoodie.
Looking forward to your comments with your thoughts!
IT has enabled us to efficiently and (usually) effectively store and use large amounts of information to our advantage. Automation has cut costs and shortened supply chains, among other things. Other IT innovations have improved safety and health care, and have even saved lives. A new innovation discussed on TechCrunch, the “wearable baby monitor” (TechCrunch), caught my eye whilst I was searching for a topic for this forum.
Whilst perhaps not immediately relevant to what we discuss in this course, the technology did raise a number of (controversial) questions for me. The monitor is a small device that can be put around a baby’s ankle, measuring things like heart rate, the temperature of the baby and the room, the level of light, and more. The information gathered can be accessed via an app on your phone. The app will warn you if something is wrong, e.g. “the level of light in the room is not optimal” or “your baby has just stopped breathing”. I have to admit the app would be quite helpful in case of the latter, but with regard to some of the other options the technology focuses on, such as optimizing the temperature or the level of light in the room, I find it goes a bit too far. It is meant to make things easier on parents, enabling them to make the environment as comfortable as possible. Nevertheless, the technology does not measure other factors relevant to a small child’s upbringing, such as the need for attention or interaction. Also, creating a 100% safe and comfortable atmosphere for your child could harm the baby in other ways, for example with regard to its overall immune system.
Apart from these more biological aspects, it is still a technology and technologies can experience failures. What if your app has not been updated correctly or the software has a bug which could give off wrong warnings or incorrect information, resulting in parents not noticing something could be wrong with their child or worried parents making their children go through numerous unnecessary, expensive and possibly harmful medical tests? Could it be that the technological world might have become a bit too enthusiastic with regard to making our life easier? Could people rely on information provided by technology too much, foregoing more emotional, human-specific aspects?
First of all: I have been working on this post for over a week and it turned out to be rather extensive. For that I’m sorry. The reason I’m writing this post is that I was amazed by the misinformed assumptions of fellow classmates, and by their curiosity about this topic.
In its earliest form, people have been attempting to conceal certain information that they wanted to keep in their own possession by substituting parts of the information with symbols, numbers and pictures. Steganographic techniques have been used for centuries. The first known application dates back to ancient Greek times, when messengers tattooed messages on their shaved heads and then let their hair grow so the message remained unseen. A different method from that time used wax tablets as a cover source. Text was written on the underlying wood and the message was covered with a new wax layer. The tablets appeared to be blank, so they passed inspection without question. In the 20th century, invisible inks were a widely used technique. In the Second World War, people used milk, vinegar, fruit juices and urine to write secret messages. When heated, these fluids become darker and the message could be read. This technique is still used among inmates in U.S. prisons who belong to gangs.
Later, the Germans developed a technique called the microdot. Microdots are photographs the size of a printed period that have the clarity of a standard typewritten page. The microdots were then printed in a letter or on an envelope and, being so small, they could be sent unnoticed.
Recently, the United States government claimed that Osama Bin Laden and the al-Qaeda organization use steganography to send messages through websites and newsgroups. However, until now, no substantial evidence supporting this claim has been found, so either al-Qaeda has used or created really good steganographic algorithms, or the claim is probably false. [Tel, 2004]
So why might someone want to use encryption?
Actually, there are numerous reasons why people might want to use encryption. Of course there are military reasons, the need to protect business or financial information, protecting communication from unauthorized access, the protection of stored data, authenticating payments and the prevention of espionage. However, due to a lack of knowledge, unnecessary security issues still arise.
In order to understand the following sections, allow me to introduce some terminology. A code is a technique to replace words or semantic structures by a corresponding code word. The simplest example of this principle is a shift in the alphabet by a fixed amount (e.g. 2 positions makes a=c, b=d, etc.). A cipher means a replacement based on symbols, where each symbol is mapped to another letter. Cryptography is the science of encrypting or hiding secrets. Cryptanalysis is the science of decrypting messages (ciphertext) or breaking codes and ciphers in order to obtain the unencrypted message (plaintext). Cryptology is the combination of both cryptography and cryptanalysis.
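To make the alphabet-shift example above concrete, here is a minimal sketch in Python. It assumes a fixed shift of 2 (a=c, b=d, ...) and leaves non-letters untouched; the function names are my own, for illustration only.

```python
def shift_encrypt(plaintext: str, shift: int = 2) -> str:
    """Shift each letter forward in the alphabet, wrapping around at 'z'."""
    result = []
    for ch in plaintext:
        if ch.isalpha():
            base = ord('a') if ch.islower() else ord('A')
            result.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            result.append(ch)  # leave spaces and punctuation untouched
    return ''.join(result)

def shift_decrypt(ciphertext: str, shift: int = 2) -> str:
    """Decrypting is just shifting back by the same amount."""
    return shift_encrypt(ciphertext, -shift)

print(shift_encrypt("attack at dawn"))  # -> cvvcem cv fcyp
```

Note how trivially breakable this is: there are only 25 possible shifts, so an attacker can simply try them all, which is exactly why modern cryptography relies on far more randomness.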
Due to space constraints I am not digging into the algorithms. Moreover, I am afraid that I have already lost a lot of readers by now, and throwing in numbers might turn off the last readers. If you are really enthusiastic and think I’m leaving out the good parts, just leave me a message or come see me after class. (If it’s before noon, coffee would be appreciated.)
My experience tells me that, basically, a good encryption algorithm is as strong as its randomness. In short, there are two algorithm categories: symmetric-key encryption and asymmetric-key encryption. Symmetric-key encryption uses one key for both encrypting and decrypting messages. Asymmetric-key encryption uses complementary keys in order to encrypt and decrypt. Symmetric-key encryption is often used for repeated communication, whereas asymmetric-key encryption is used for one-shot communication like signatures (e.g. DigiD). Do keep in mind that the latter is more computationally expensive.
Encryption and its use have been a controversial topic for years. Until the late ’90s encryption algorithms were seen as munitions in some countries, including the U.S. and Germany. All kinds of issues arose from this form of governmental control. Companies were forced to release separate versions of their software (one for export, one for domestic use). Even T-shirts were printed stating (in cipher) “This T-Shirt is a munition.”
To prevent governments from creating backdoors, some developers started collaborating online. In 1991, PGP was the result of their effort. Since it was given away on the internet, the U.S. government considered this export. Zimmermann and other developers saw it as a form of free speech. In 1996 a court ruled computer code to be speech, leading the U.S. government to drop most export restrictions in 2000. Nowadays, many advanced encryption algorithms are open source, including AES, which is even approved for U.S. Top Secret information. And did you know AES originated as Rijndael, designed by Joan Daemen and Vincent Rijmen in 1998? That sounds pretty Dutch, right?
Next-Gen Encryption Algorithms
As I stated before, cryptographers are continuously searching for the algorithm that generates the most random ciphertext. Quantum cryptography looks promising, although it contains flaws and researchers are worrying about its practicality. MIFARE (PDF alert), an encryption technology used for securing data packets between reader and RFID chip (yes, it’s used for the OV-chipkaart; no, it’s not cracked), is pretty advanced. It’s a well-kept secret that it uses swipe time and distance to the reader, amongst other variables, to generate random ciphertext. Hi Brenno!
I am amazed that you are still reading. In a discussion with fellow classmates I stated that longer passwords are not always more secure. In short, a very long password that is not chosen randomly simply contains less randomness than its length suggests. Below you find an illustration of common misunderstandings about password strength.
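The point about length versus randomness can be made with a back-of-the-envelope entropy estimate. The sketch below is my own and assumes each element of the password is chosen independently and uniformly at random from its pool; for a predictable password (a dictionary word with a digit tacked on), the effective pool is far smaller, so its real entropy is much lower than its length suggests.

```python
import math

def entropy_bits(pool_size: int, length: int) -> float:
    """Bits of entropy for `length` independent, uniform picks from a pool."""
    return length * math.log2(pool_size)

# 8 truly random characters drawn from ~94 printable ASCII symbols:
print(round(entropy_bits(94, 8), 1))    # -> 52.4
# 4 words picked at random from a 2048-word list (Diceware-style):
print(round(entropy_bits(2048, 4), 1))  # -> 44.0
```

So a short, genuinely random password can beat a much longer one whose structure an attacker can guess: what counts is the number of equally likely possibilities, not the character count.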