
Where the Digital Economy Is Moving the Fastest


The transition to a global digital economy in 2014 was sporadic – brisk in some countries, choppy in others. By year’s end, the seven biggest emerging markets were larger than the G7, in purchasing power parity terms. Plus, consumers in the Asia-Pacific region were expected to spend more online last year than consumers in North America. The opportunities to serve the e-consumer were growing – if you knew where to look.

These changing rhythms in digital commerce are more than a China, or even an Asia, story. Far from Silicon Valley, Shanghai, or Singapore, a German company, Rocket Internet, has been busy launching e-commerce start-ups across a wide range of emerging and frontier markets. Their stated mission: To become the world’s largest internet platform outside the U.S. and China. Many such “Rocket” companies are poised to become the Alibabas and Amazons for the rest of the world: Jumia, which operates in nine countries across Africa; Namshi in the Middle East; Lazada and Zalora in ASEAN; Jabong in India; and Kaymu in 33 markets across Africa, Asia, Europe, and the Middle East.

Private equity and venture capital money have been concentrating in certain markets in ways that mimic the electronic gold rush in Silicon Valley. During the summer of 2014 alone, $3 billion poured into India’s e-commerce sector, where, in addition to local innovators like Flipkart and Snapdeal, there are nearly 200 digital commerce startups flush with private investment and venture capital funds. This is happening in a country where online vendors largely operate on a cash-on-delivery (COD) basis. Credit cards and PayPal are rarely used; according to the Reserve Bank of India, 90% of all monetary transactions in India are in cash. Even Amazon localized its approach in India to offer COD as a service. India and other middle-income countries such as Indonesia and Colombia all have high cash dependence. But even where cash is still king, digital marketplaces are innovating at a remarkable pace. Nimble e-commerce players are simply working with and around the persistence of cash.

To understand more about these types of changes around the world, researchers developed an “index” to identify how a group of countries stack up against each other in terms of readiness for a digital economy. The Digital Evolution Index (DEI) is derived from four broad drivers:

  • supply-side factors: including access, fulfillment, and transactions infrastructure;
  • demand-side factors: including consumer behaviors and trends, and financial, Internet and social media savviness;
  • innovations: including the entrepreneurial, technological and funding ecosystems, the presence and extent of disruptive forces, and the presence of a start-up culture and mindset;
  • institutions: including government effectiveness and its role in business, laws and regulations, and promotion of the digital ecosystem.

The resulting index includes a ranking of 50 countries, which were chosen because they are either home to most of the current 3 billion internet users or they are where the next billion users are likely to come from.

Part of the research was to understand who was changing quickly to prepare for the digital marketplace and who wasn’t. Perhaps not surprisingly, developing countries in Asia and Latin America are leading in momentum, reflecting their overall economic gains. But the analysis revealed other interesting patterns.
Take, for example, Singapore and the Netherlands. Both are among the top 10 countries in present levels of digital evolution. But when considering momentum – i.e., the five-year rate of change from 2008 to 2013 – the two countries are far apart. Singapore has been steadily advancing in developing a world-class digital infrastructure, through public-private partnerships, to further entrench its status as a regional communications hub. Through ongoing investment, it remains an attractive destination for start-ups and for private equity and venture capital. The Netherlands, meanwhile, has been rapidly losing steam. The Dutch government’s austerity measures, beginning in late 2010, reduced investment in elements of the digital ecosystem, and the country’s stagnant, at times slipping, consumer demand led investors to seek greener pastures.

Based on the performance of countries on the index during the years 2008 to 2013, the researchers assigned them to one of four trajectory zones: Stand Out, Stall Out, Break Out, and Watch Out.

  • Stand Out countries have shown high levels of digital development in the past and continue to remain on an upward trajectory.
  • Stall Out countries have achieved a high level of evolution in the past but are losing momentum and risk falling behind.
  • Break Out countries have the potential to develop strong digital economies. Though their overall score is still low, they are moving upward and are poised to become Stand Out countries in the future.
  • Watch Out countries face significant opportunities and challenges, with low scores on both current level and upward motion of their DEI. Some may be able to overcome limitations with clever innovations and stopgap measures, while others seem to be stuck.
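
As a toy illustration of this two-by-two scheme, the sketch below classifies a country by its current score and its momentum. The cutoffs and the country numbers are invented for the example; they are not actual DEI data.

```python
# Toy version of the four-zone classification described above.
# Scores, momentum values, and cutoffs are invented, not real DEI data.

def classify(score, momentum, score_cutoff=50.0, momentum_cutoff=0.0):
    """Map a country's current digital-evolution score and its five-year
    momentum to one of the four trajectory zones."""
    if score >= score_cutoff:
        return "Stand Out" if momentum > momentum_cutoff else "Stall Out"
    return "Break Out" if momentum > momentum_cutoff else "Watch Out"

countries = {
    "Singapore":   (78, +4.1),   # high level, still rising
    "Netherlands": (74, -1.8),   # high level, losing steam
    "India":       (33, +5.6),   # low level, rising fast
    "Egypt":       (29, -0.7),   # low level, slipping
}

for name, (score, momentum) in countries.items():
    print(f"{name:12s} -> {classify(score, momentum)}")
```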


Break Out countries such as India, China, Brazil, Vietnam, and the Philippines are improving their digital readiness quite rapidly. But the next phase of growth is harder to achieve. Staying on this trajectory means confronting challenges like improving supply infrastructure and nurturing sophisticated domestic consumers.

Watch Out countries like Indonesia, Russia, Nigeria, Egypt, and Kenya have important things in common like institutional uncertainty and a low commitment to reform. They possess one or two outstanding qualities — predominantly demographics — that make them attractive to businesses and investors, but they expend a lot of energy innovating around institutional and infrastructural constraints. Unclogging these bottlenecks would let these countries direct their innovation resources to more productive uses.

Most Western and Northern European countries, Australia, and Japan have been Stalling Out. The only way they can jump-start their recovery is to follow what Stand Out countries do best: redouble their innovation efforts and continue to seek markets beyond domestic borders. Stall Out countries are also aging. Attracting talented, young immigrants can help revive innovation quickly.

What does the future hold? The next billion consumers to come online will be making their digital decisions on a mobile device – very different from the practices of the first billion that helped build many of the foundations of the current e-commerce industry. There will continue to be strong cross-border influences as the competitive field evolves: even if Europe slows, a European company, such as Rocket Internet, can grow by targeting the fast-growing markets in the emerging world; giants out of the emerging world, such as Alibaba, with their newfound resources and brand, will look for markets elsewhere; old stalwarts, such as Amazon and Google, will seek growth in new markets and new product areas. Emerging economies will continue to evolve differently, as will their newly online consumers. Businesses will have to innovate by customizing their approaches to this multi-speed planet, and by working around institutional and infrastructural constraints, particularly in markets that are home to the next billion online consumers.

We may be on a journey toward a digital planet — but we’re all traveling at different speeds.

Short video about this article: http://bcove.me/nbmmm7et

Author: Shanise Abhelakh, 345268

Pillory anno 2015

Although the form has changed over time, the concept has been known for centuries: the public pillorying of a person. When a person did something wrong, he or she was punished for it by “the public”. In the Stone Age rocks were thrown, in the Middle Ages rotten food was thrown at people, and now, in the “digital era”, there is a new way to let people be punished by the public: social media.

Online shaming is a quite new phenomenon, but it can have a very big impact on both people and companies. Just a little mistake, one inappropriate tweet or post, can go viral in a very short time. Most of the time the effects are irreversible and can ruin a person or company completely.

For example, the case of Justine Sacco: she was 30 years old, a senior director of corporate communications, and had only 170 followers on Twitter. Right before she boarded her flight from London Heathrow to Cape Town, she tweeted: “Going to Africa. Hope I don’t get AIDS. Just kidding. I’m white!”. Her tweet went viral and (of course) not in a positive way. While she was asleep during her flight, she became the No. 1 trending topic on Twitter. When her flight landed 11 hours later, the damage was already done.

Her Twitter feed filled up with angry tweets, and it got worse and worse.

“In light of @JustineSacco disgusting racist tweet, I’m donating to @care today”

“How did @JustineSacco get a PR job?! Her level of racist ignorance belongs on Fox News. #AIDS can affect anyone!”  

“I’m an IAC employee and I don’t want @JustineSacco doing any communications on our behalf ever again. Ever.”

And then one from her employer, IAC, the corporate owner of The Daily Beast, OKCupid and Vimeo: “This is an outrageous, offensive comment. Employee in question currently unreachable on an intl flight.”

Not only were people angry with her and was she the target of a crusade against racism; over time the tone also changed into excitement, and from there into entertainment.

“All I want for Christmas is to see @JustineSacco’s face when her plane lands and she checks her inbox/voicemail”

“Oh man, @JustineSacco is going to have the most painful phone-turning-on moment ever when her plane lands”

“We are about to watch this @JustineSacco bitch get fired. In REAL time. Before she even KNOWS she’s getting fired.”

In an interview she said: “I had a great career, and I loved my job, and it was taken away from me, and there was a lot of glory in that. Everybody else was very happy about that.”

Another example came last summer with the killing of the lion Cecil. Walter Palmer, the American dentist who paid $50,000 to kill the lion, was globally shamed for it. According to media sociologist Dr. Peter Vasterman, social media are ideal for expressing indignation: first, because it is easy to do and can be done immediately; second, because the chance is quite high that you will find support from others, which has an amplifying effect. And there is a problem with the power of social media: according to IT attorney Mr. Tempelman, most people are hanged by the public before they are even convicted.

Hess and Waller (2014) conclude that in these digital times shaming will increase, and that for “ordinary” people there is almost no protection, regardless of whether the person is guilty or not. I think it is good to think about the consequences of “just sharing or retweeting” that one tweet or post. As seen above, the impact can be far bigger than might be appropriate.

 

References:

Hess, K. & Waller, L. (2014) ‘The digital pillory: media shaming of “ordinary” people for minor crimes’, Continuum: Journal of Media & Cultural Studies, 28(4), pp. 101-111.

Ronson, J. (2015) ‘How One Stupid Tweet Ruined Justine Sacco’s Life’, The New York Times Magazine, 15 February 2015, p. 20.

NOS op 3 (2015) ‘Als prooi overgeleverd aan de social media’ [author unknown], 31 July 2015. Available: http://nos.nl/op3/artikel/2049759-als-prooi-overgeleverd-aan-de-social-media.html

 

Making Talking Generate the Next Billion Dollars

In February 2014, WhatsApp was sold to Facebook for an unbelievable figure – 19 billion dollars. Within the next few weeks it was all over everybody’s blogs, Facebook statuses, and lunch conversations; even kids in school were talking about it. People could not understand how a company whose only product is a messaging app could be worth that much money.


WhatsApp is not the only messenger out there. Snapchat, Facebook Messenger, LINE, WeChat, and many others are also players in the industry. They have proved to be a cheap alternative to operator-based text messaging via SMS, and they provide many features that SMS doesn’t have. As of August 2015, WhatsApp had 800 million active users, Facebook Messenger 700 million, and WeChat 600 million. If we do some simple math and leave out all the added features each messenger provides, the chat messengers have a combined valuation of over 200 billion dollars. That’s half of Google, or 4 times more than Yahoo!.

Interestingly, all these messaging apps have struggled to figure out their revenue model. Evan Spiegel, the co-founder of Snapchat, acknowledged in an interview the extreme difficulty of finding a feasible one. Many internet companies are backed by ad revenue. Google, for example, has revealed in multiple annual reports that more than 90% of its revenue comes from ads. One of its many services, Google AdSense, analyzes a web page and serves the advertisements that best fit the content of that page. However, most people on messengers send private messages to their friends, and it is impossible to insert ads into the conversation. Out of privacy concerns, it is also unlikely that messengers will run algorithms on users’ messages to provide personalized recommendations.

Realizing this limitation, the apps began to expand their services into other areas, such as emojis, playing games with friends, sending money, and interesting new content. This has been a very successful first step. In its Q2 2013 report, LINE disclosed that out of its $100 million quarterly revenue, game and in-game purchases accounted for 53%, and emojis for 27%. Snapchat is piloting a new Discover feature that pushes sponsored content to the user; together with its existing model of ads before videos, the company estimates its revenue at $50 million this year.

In addition to these efforts, LINE and WeChat also aim to build up their own ecosystems. WeChat launched a feature to send money to multiple friends in January 2014. It targets the Chinese tradition of giving monetary gifts to friends and family for auspicious blessings on special occasions. On Chinese New Year’s Eve 2015, more than 1.5 billion “red envelopes” were sent in a single day. WeChat also keeps a semi-bank account for each user: besides sending money to friends, the balance can be used to make purchases, top up phone cards, call a taxi, pay utility bills, and much more. WeChat has built a successful image within China, and it has penetrated many aspects of people’s lives.

In conclusion, the messenger ecosystem is enormous. The user-to-user nature of communication has allowed exponential growth in the user base, and with a vast and constantly growing user base, companies can reach billion-dollar valuations within a very short time. As a next step, to turn those valuations into billions in revenue, companies are experimenting with expanding their services into our daily lives. LINE and WeChat have built ecosystems that let users call taxis, stream music, and order food, and we can expect other companies to follow similar strategies to expand their verticals.

The Internet of Things: The Smart Bicycle Lock!

The Internet of Things is becoming more and more of a reality. Beyond the smart thermostats, light switches and power outlets that smartphone apps turn into smart homes, technology now has solutions for everyday problems outside the house as well!


As many Dutchmen know, having a lock on your bike is no guarantee that it won’t get stolen. The Noke U-lock might be the answer. The creators at Füz Designs have built a smart lock that is not only innovative but also very sturdy. The lock can be locked and unlocked with your phone: the app holds a unique code, which the lock verifies over Bluetooth when you press the button on the lock.


If someone messes with the lock or the bike for more than 3 seconds, the lock senses this and sounds a loud 30-second alarm that can be heard up to 50 meters away, generating enough attention to send thieves running. The owner is also alerted through his phone when the alarm goes off, so if someone tries to steal the bike in the middle of the night, the owner is woken up rather than walking outside the next morning when it is already too late. The lock also has built-in GPS, so if the bike is taken together with the lock, it can easily be tracked down. The battery lasts a whole year before it needs recharging.


For people who always forget where they parked their bike, the app shows where the lock was last used. If a friend wants to use the bike, the app has a special lend-out feature. And in case your phone’s battery is dead or you want to leave your phone at home, the lock has a smart fallback that lets you unlock it with a unique rhythm of long and short taps, a bit like Morse code.
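
Füz Designs has not published its firmware, so purely as an illustration, a tap-rhythm check along the lines described might look like the sketch below. The 0.4-second long/short threshold and the stored pattern are hypothetical.

```python
# Hypothetical sketch of a Morse-code-style tap unlock. The threshold and
# the stored pattern are invented; the real Noke firmware is not public.

LONG_PRESS_THRESHOLD = 0.4   # seconds; taps held at least this long count as "long"
STORED_PATTERN = "SSLSL"     # the owner's secret rhythm: S = short, L = long

def encode(tap_durations):
    """Turn a sequence of tap durations (in seconds) into an S/L string."""
    return "".join("L" if d >= LONG_PRESS_THRESHOLD else "S"
                   for d in tap_durations)

def try_unlock(tap_durations):
    return encode(tap_durations) == STORED_PATTERN

print(try_unlock([0.1, 0.2, 0.6, 0.15, 0.5]))  # SSLSL -> True, lock opens
print(try_unlock([0.1, 0.6, 0.6, 0.15, 0.5]))  # SLLSL -> False, stays locked
```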


As innovative IT technology is implemented in ever more objects, we will have to get used to the fact that everything in the world around us records, sends signals and is connected. After our cars got a lot smarter and our homes turned into smart homes, it is now our bikes’ turn to get connected.

Sources:
http://www.coolhunting.com/tech/noke-smart-u-lock-bike
http://www.citylab.com/tech/2015/03/this-bluetooth-enabled-u-lock-shrieks-at-bike-thieves/387544/
http://www.iculture.nl/noke-u-lock-slim-slot-apple-watch/
http://backerjack.com/noke-smart-lock-protects-your-two-wheeler-from-a-stealer/
http://www.solidsmack.com/design/noke-u-lock-smashes-it-as-the-worlds-smartest-u-lock-for-bikes/

Trident, the underwater drone you can control through your phone

The first thing you think of when you hear the word drone is probably ‘unmanned flying machine’. In recent years, a lot of different types of drones have been invented, some even controllable by phone. But now there is a new kind of drone to control with your phone: the underwater drone! A mix between an unmanned mini-submarine and a phone-controlled drone, this cool new vehicle lets you explore dimensions other than the sky through your phone’s touchscreen!

The Trident underwater drone lets you steer the drone and live-stream the images it captures to your smartphone, tablet or laptop. It is one of the first drones that works well under water and allows live streaming of the footage. The images captured by the Trident are sent to your phone or tablet through a long cable attached to the machine.


Smart Design
The smart and sleek design allows the Trident to be agile and fast in the water, easy to control, and able to move around obstacles and in small spaces. With a top speed of 7 km per hour, the drone can reach a depth of more than 100 meters, and the battery allows for three hours of underwater exploration. The Trident has been designed for maximum performance and controllability, through the shape of the exterior as well as the design of the thruster, letting it navigate the water swiftly and with agility. With this device, anyone can become a shipwreck explorer!

“Suddenly, you don’t need to be a James Cameron or a Jacques Cousteau to explore beneath the waves.” – Zachary Slobig, takepart


Kickstarter Project
The people behind the Trident are not new to underwater exploration: they previously funded an underwater robot successfully through Kickstarter. Now they are back on the crowdfunding platform, amply exceeding their $50,000 goal with $657,138 already raised from 1,036 backers at the time of writing. Four years of designing and testing have gone into the project, and with the funding goals amply reached, the project seems almost assured of success. With a product that is easy to use and, above all, fun to pilot through oceans and lakes, the developers at OpenROV hope the Trident will be their dream product.

Garage Start-up
Their dream being to build an underwater vehicle that is inexpensive and fun to use, OpenROV have made a huge jump from the simple but successful underwater robot they launched three years ago. They started in one of the founders’ garages, hoping to find something special at an underwater location in Californian waters rumoured to contain a lost treasure, and built a prototype that could be their ticket to finding gold. Although they did not find the treasure, their project is turning into gold as enthusiastic backers line up to fund it and turn it into reality.


Open Source Software
In order to make the drone very easy to use, the team has “embraced the latest emerging internet standards from HTML5 and webRTC to WebVR and WebGL to deliver a rich piloting experience through just a browser that runs on laptops, tablets, and modern mobile devices”. Using the same open-source software as for their previous underwater vehicle, the team has made many updates to improve it considerably.

The cheapest version, costing $799, includes the underwater vehicle itself and its batteries, the cable that sends the footage up, and a buoy that floats on the surface and sends a Wi-Fi signal back to the phone or tablet, letting you stream the live footage and control the underwater drone!

Sources
http://www.gizmag.com/openrov-trident-rov/39431/
http://www.engadget.com/2015/09/16/openrov-trident-kickstarter/
https://www.kickstarter.com/projects/openrov/openrov-trident-an-underwater-drone-for-everyon
http://petapixel.com/2015/09/19/trident-is-an-underwater-hd-camera-drone-that-lets-you-explore-the-seas/
http://www.digitaltrends.com/cool-tech/openrov-trident-drone/

TWITTER WITHOUT LIMITS


As you might have heard in the past few weeks, Twitter is considering removing the 140-character limit. Currently, Twitter is creating a new product that will enable users to share tweets with an unlimited number of characters. But what does this mean for the future of Twitter? There are various characteristics that separate Twitter from other social networks, but the 140-character limit has always been its most important trademark. Twitter has been under scrutiny about this for years, and many have argued that it should expand the limit. But now that this might actually happen, will users see blocks of text on their timelines? Or will it be a separate, blog-type service? This change will certainly have an impact on marketers who use Twitter to connect with consumers. Because longer content will crop up almost immediately after the change, marketers will have to spend more time writing the body of a tweet. This could create a shift on Twitter from many small tweets throughout the day to fewer, longer tweets, because marketers will not want to deplete their own time and resources, or alienate consumers who may unfollow if they feel bombarded with content.

Lifting the limit might seem like a good thing; however, will Twitter be able to differentiate itself from other social networks after this change? It could be detrimental to the unique community of writers and creatives who have found a home on the platform. The inconvenience of the 140-character limit has forced Twitter users and marketers to become better writers and salespeople, and these constraints have taught Twitter users to develop their own unique style and flow. The only downside of the limit is that there is not much room for nuance: a Twitter user must create a thread of tweets in order to show a progression of thought. However, this separation actually allows followers to process information much more seamlessly. With practice, marketers can learn to use the limit to their advantage with more precise, to-the-point content, as the most successful Twitter users manage to be both poignant and witty in bite-sized portions.
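
That thread workaround is even easy to automate. The sketch below splits a long text into a numbered thread of tweets that each respect the 140-character limit; the “(i/n)” numbering format is my own choice for the example, not a Twitter convention.

```python
# Split a long text into a numbered thread of tweets, each within the
# 140-character limit. Splits on word boundaries; assumes no single word
# exceeds the per-tweet budget. The "(i/n)" suffix is an arbitrary format.

def make_thread(text, limit=140):
    suffix_budget = len(" (99/99)")   # reserve room for the numbering suffix
    chunks, current = [], ""
    for word in text.split():
        candidate = f"{current} {word}".strip()
        if len(candidate) <= limit - suffix_budget:
            current = candidate
        else:
            chunks.append(current)
            current = word
    chunks.append(current)
    n = len(chunks)
    return [f"{chunk} ({i}/{n})" for i, chunk in enumerate(chunks, start=1)]

for tweet in make_thread("lorem " * 60):
    print(len(tweet), tweet)   # every line stays under 140 characters
```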

Not only is Twitter a platform where ideas are exchanged constantly, it is also a platform that turns minorities into targets of harassment by trolls. If Twitter is seriously considering lifting the 140-character limit, it must first put stricter anti-harassment rules in place. Without the character limit, trolls will be given more freedom to attack. This could cause the number of Twitter users to decline at a time when Twitter is desperate to find new ways to attract users to the product.

Twitter wants to improve its appeal to mainstream social media users, who do not know how to interact on a 140-character landscape. Removing the limit is one way to achieve that, and it would provide new readers for marketers’ content. However, the question arises whether this would actually increase Twitter’s audience. Twitter’s 140-character limit has forced innovation in language and art, and created a platform perfectly tailored to instant interaction and community building. Instead of eliminating the characteristics that make it unique among its competitors, and the power of these innovations, Twitter should focus on making the platform safe for its users and use this opportunity to sharpen its differentiation even further. Do not mess with what is not broken. Or do you think otherwise?


Sources:

http://www.adweek.com/socialtimes/twitter-might-ditch-the-140-character-limit-what-this-means-for-marketers/627672

http://recode.net/2015/09/29/twitter-plans-to-go-beyond-its-140-character-limit/

http://qz.com/515256/twitter-will-ruin-the-one-thing-that-makes-it-stand-out-by-changing-its-140-character-limit/

http://www.programmableweb.com/news/twitter-removes-140-character-limit-dms-updates-api/2015/06/12

Procrastination? Get things done with Momentum!

Has it ever happened to you that, when you opened up the browser on your laptop, you instantly lost focus of what you wanted to do and ended up wandering around cyberspace? I am sure you are familiar with such a situation. Especially when you have to write an assignment: you open the browser to do some research, but soon you find yourself reading or watching content that has nothing to do with the assignment.

Deadlines for the assignments are coming up, so I decided to tap into this field and potentially help someone become more efficient.

First, I would like to explain the background of the problem. When we search for anything online, we have to navigate through some relevant but mostly irrelevant information (Stibel, 2009). Most websites are designed to present a number of different links with intriguing titles intermingled in the page (Warner, 2013). We tend to click on those links because our brain craves variety (Warner, 2013). There also seems to be a neurological basis for our actions, or in other words, “attention dopiness”. In the book Find Your Focus Zone, Lucy Jo Palladino explains that our dopamine levels rise when we watch TV, play video games, or, in this case, discover new things on the internet. When the dopamine level is high, we are inclined to keep it high, and thus search for other distractions (Palladino, 2007). In this way we browse through different pages, explore new information and lose our sense of time. I think you get the idea of how our mind works in this case.

The question arises: how do we keep the focus and motivation to get things done?

I would like to share a little innovative tool that potentially helps to tackle the problem. It is called Momentum.

Momentum is an extension for the Google Chrome browser which gives you a distraction-free homepage. It appears every time you open a new tab or window, so it keeps showing you what your main focus is. The homepage enforces three main elements:

  • Focus, by letting you input your main goal for the day
  • Motivation, by showing a new inspirational quote and picture every day
  • Keeping track of things to do, by letting you maintain a to-do list

Below I share my Momentum homepage from this afternoon, to give you an indication.

I suggest you give it a shot and see for yourself whether your productivity improves. Furthermore, let me know below if the tool helped you avoid procrastination on the web and get more things done!

Sources:

Palladino, Lucy Jo. Find Your Focus Zone: An Effective New Plan to Defeat Distraction and Overload. 1st edition. London: Free Press, 2007.

Stibel, Jeff. Why the Internet Is So Distracting (And What You Can Do About It). 20 October 2009. https://hbr.org/2009/10/why-the-internet-is-so-distrac (accessed October 8, 2015).

Warner, Russ. The Internet and Its Incredible Power to Distract. 23 March 2013. http://ikeepsafe.org/balancing-screen-time/the-internet-and-its-incredible-power-to-distract/ (accessed October 8, 2015).

The dangers and potential of artificial intelligence

Artificial intelligence, or AI, is a topic that has been getting a lot of attention in the past couple of years. Some famous, highly educated people have been warning us about the dangers of AI. Among them are Elon Musk, Stephen Hawking and, most recently, Bill Gates.

“I think we should be very careful about artificial intelligence. If I were to guess like what our biggest existential threat is, it’s probably that. So we need to be very careful with the artificial intelligence. Increasingly scientists think there should be some regulatory oversight maybe at the national and international level, just to make sure that we don’t do something very foolish. With artificial intelligence we are summoning the demon. In all those stories where there’s the guy with the pentagram and the holy water, it’s like yeah he’s sure he can control the demon. Didn’t work out.”
– Elon Musk (At the MIT Aeronautics and Astronautics department’s Centennial Symposium)

“I think the development of full artificial intelligence could spell the end of the human race”
– Stephen Hawking (BBC interview)

“I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.”
– Bill Gates (Reddit “Ask Me Anything (AMA)” thread)

The warnings have not just come in interviews, symposia and online forums. Hawking, Musk and others contributed to an open letter, posted by the Future of Life Institute, warning about the dangers of AI.

Besides the highly educated, the entertainment industry has been contributing to the discussion as well. There are several movies (e.g. Transcendence, Avengers: Age of Ultron) and video games (e.g. Destiny) showing the dangers and potential of artificial intelligence.

But what is so dangerous about artificial intelligence, and why the sudden increase in global interest? The biggest risk is the uncertainty of what an AI would do. Emotions, even though we consider ourselves quite rational, have a big impact on our daily decision making, whereas an AI would be a completely rational being. For example: humans would never choose to kill off half the population just because the data shows it would be beneficial. An AI would likely have no trouble making such a decision. There might actually be a point where it considers humanity useless. A being with all the computational power in the world that does not like us could be incredibly dangerous. This is one of the major reasons so many people are concerned about AI development.

Besides the above-mentioned risks, there is a lot of potential. Think of a consciousness with so much more computational power: the development and research such an AI could perform would be so much faster than anything we humans can do. Creating AI is probably our best bet for immortality, and it could lead to huge economic progress and much more.

I personally believe that creating artificial intelligence, to the point of consciousness, would be equal to creating an actual deity (god). The result will either be the start of a golden age, the likes of which we have not seen before, or human extinction.

This leaves us with one big question: should we pursue artificial intelligence knowing the risks and potential?

Why API-Centric Software Will Dominate the Future


There are thousands of apps around, for multiple platforms (iOS or Android) or for multiple browsers. You probably use them on many devices: your phone, tablet or laptop. But all those applications have very limited functionality on their own. Only by communicating with their users, connecting with each other and exchanging all kinds of information do they become powerful.

And that’s where APIs come in. API stands for Application Programming Interface, and it describes the information and rules by which software programs interact with each other.

The traditional way of development, focused on web frameworks (e.g. Microsoft .NET, Ruby on Rails, PHP), can require costly integration into other software when not set up properly. Adaptation to special needs can easily amount to a project in the middle five figures.

An API-centric piece of software executes most or all of its functionality through API calls. So why is this important?


API-Centric Design

With API-centric design, the core function of a piece of software (for example the Twitter stream of new tweets) is built separately from the way a user accesses it (in our example, Twitter can be accessed through a browser, an iOS app on an iPhone or iPad, Android devices, and so on). There is only one core product running in the back-end, and many different customized front-ends for accessing it. And all the communication between those parts happens over? You guessed it: APIs!

No more changing and tweaking the core product because of a display error on a Windows Phone. You just handle that in the Windows Phone front-end client.
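
To make this concrete, here is a minimal sketch of such a core back-end in Python, using Flask (pip install flask). This is not Twitter’s actual API; the routes and fields are invented for illustration. Every front-end, whether browser, iOS, Android or Windows Phone, would consume these same two calls and differ only in how it renders the result.

```python
# Minimal sketch of an API-centric core service. Not Twitter's real API:
# the routes and fields are invented. All front-ends share these endpoints.

from flask import Flask, jsonify, request

app = Flask(__name__)
TWEETS = []   # in-memory stand-in for the core data store

@app.route("/api/tweets", methods=["GET"])
def list_tweets():
    # The core product: every client fetches this same JSON payload.
    return jsonify(TWEETS)

@app.route("/api/tweets", methods=["POST"])
def post_tweet():
    TWEETS.append({"user": request.json["user"], "text": request.json["text"]})
    return jsonify({"status": "ok"}), 201

if __name__ == "__main__":
    app.run(port=5000)
```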

Bah… that was a lot of techie talk. So what?! Well, that brings us to the next big thing:

The Internet of Things

There are estimates that by 2020 there will be more than 50 billion connected devices. That’s a lot! And it will shift who, and what, communicates over the internet. Today people communicate with people, or people communicate with machines and systems. But in the age of the Internet of Things, systems will mostly communicate directly with other systems. And they don’t care about pretty graphical interfaces on some gadget with a touch screen. For those systems to work you need solid APIs connecting many back-ends quickly and reliably. And what would be more suitable for this task than software created through API-centric design?

Oracle recently released an API management tool. So did IBM and Intel. These big corporations are taking those steps to be well prepared for what is about to come: the Internet of Things. It’s gonna be a paradigm shift.

But Where is the Money?

APIs aren’t new, and there are lots of them: the ProgrammableWeb database lists more than 14,000 registered APIs. But with the emergence of mobile and the Internet of Things, they’re in the spotlight again. API-centric software enables microservices that fit a specific need and solve a well-defined problem. Other programs can build upon existing APIs, using their functionality to expand and build their own. This layered structure can help automate tedious tasks by integrating and arranging the right APIs. There are already many offerings that allow fast creation of API-based back-ends (e.g. Treeline or Stamplay). APIs therefore form a solid foundation others can build upon. Google has been doing this for a while and offers a ton of APIs for others to use (e.g. Google Maps). But if you, and especially your users, call them regularly, you have to pay for them. And they’re not cheap:

Google Maps API Prices

This example brings us to the first business model with APIs: if you provide a service that is of value to others, you can charge for every call a user or program makes to your API. Even if it’s just a couple of cents per call, if your API gets used a thousand times a day, that’s steady income.
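
As a back-of-the-envelope sketch, the metering behind such a model can be as simple as counting calls per API key and multiplying by a price. The two-cent price and the key below are, of course, made up.

```python
# Toy per-call API metering: count calls per key, bill a flat price per call.
# The price and the API key are invented for the example.

from collections import Counter

PRICE_PER_CALL = 0.02   # dollars; hypothetical
usage = Counter()

def handle_api_call(api_key):
    usage[api_key] += 1
    # ... actually serve the request here ...

for _ in range(1000):             # a thousand calls in one day
    handle_api_call("customer-42")

daily_bill = usage["customer-42"] * PRICE_PER_CALL
print(f"customer-42 owes ${daily_bill:.2f} today")   # -> $20.00 of steady income
```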

Another business case is to offer your API for free and encourage other developers to build upon it; through referrals from that software you then generate additional sales. Uber does this with success: by offering their API for free, they encourage developers to build upon their core product. If someone signs up for Uber through another program that uses the Uber API, Uber pays the developer who built the new product a commission of $5-10.

There will be many more business models emerging around APIs, especially connected to the Internet of Things. The paradigm shift opens up new business opportunities ready to be exploited.

What business models involving APIs do you see? I’m very interested in reading about them, so please leave a comment!


Sources

http://blog.cloudoki.com/the-new-era-in-software-engineering-api-centric-design/

http://apigee.com/about/blog/technology/api-centric-architecture-all-development-api-development

http://techcrunch.com/2015/09/27/the-future-of-coding-is-here-and-threatens-to-wipe-out-everything-in-its-path/

http://www.infoworld.com/article/2920792/apis/ibms-next-big-bluemix-move-api-management.html

http://www.thestreet.com/story/13259862/1/intel-stakes-future-on-internet-of-things-at-developers-forum.html

http://www.programmableweb.com/news/oracles-api-management-portfolio-aims-digital-enterprise/press-release/2015/06/02

http://www.zingdesign.com/top-10-web-development-trends-and-predictions-for-2015/

Electronic Markets, Computing Power and the Quants: Volatility & High Frequency Trading


“Markets can be – and usually are – too active, and too volatile”
– Joseph E. Stiglitz, Nobel prize-winning economist

As some of you might have noticed, the oil market is currently showing wilder fluctuations at a higher frequency than before: volatility has increased. This happened after the market enjoyed relative price stability during the last few years. Of course, this is partly due to U.S. shale oil production, quite high supply and lower demand in the aftermath of the financial crisis, and growing demand and supply uncertainties. However, another factor affecting volatility is the increased usage of trading indicators in combination with changes in trading practices: an increasing number of players in the financial markets use algorithmic and high-frequency trading (HFT) practices.

Like other derivative-based markets, the crude oil market has a wide range of players, many of whom are not interested in buying physical oil. HFT traders are probably drawn towards oil futures by the market’s volatility: the greater the price swings, the greater their potential profit. HFT is not an entirely new practice, but as technology evolves it is increasingly present in today’s electronic financial markets.

These players make extensive use of computing and information technology to develop complex trading algorithms, often referred to as the “quants”. HFT firms try to gain an advantage over competitors who still rely mostly on human intelligence and reaction times. The essence of the game is to use your algobots to get the quickest market access, the fastest processing speeds, and the quickest calculations, in order to capture profits that would otherwise have been earned by someone processing market data more slowly (Salmon, 2014). At essentially the speed of light, these systems react to market data, transmit thousands of order messages per second, automatically cancel and replace orders based on shifting market conditions, and capture price discrepancies with little human intervention (Clark & Ranjan, 2012). New trading strategies are formulated by capturing and recombining new information with large datasets and other forms of big data available to the market. The analysis used to derive the assumed direction of the market draws on a range of indicators, such as historical patterns, price behaviour, price corrections, peak-resistance and low-support levels, as well as (the moving average of) trends and counter-trends. By aggregating all this information, the databases and their (changes of) averages are usually a pretty good predictor of potential profits for HFT companies.
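
As a toy illustration of the simplest of these indicators, the sketch below computes a fast and a slow moving average over an invented price series and flags a “bullish” crossover. Real HFT systems operate on tick-level data and are vastly more elaborate.

```python
# Toy moving-average crossover on an invented price series. Window lengths
# and prices are made up; real systems work on tick data at far higher speed.

def moving_average(prices, window):
    return [sum(prices[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(prices))]

prices = [51.0, 50.8, 50.6, 50.4, 50.2, 50.5, 50.9, 51.3, 51.6, 51.8]
fast = moving_average(prices, 3)   # reacts quickly to new prices
slow = moving_average(prices, 5)   # smooths out short-lived noise

# Align the two series on their last observations and signal when the fast
# average crosses above the slow one (a naive "buy" trigger).
offset = len(fast) - len(slow)
for i in range(1, len(slow)):
    prev_diff = fast[offset + i - 1] - slow[i - 1]
    diff = fast[offset + i] - slow[i]
    if prev_diff <= 0 < diff:
        print(f"bullish crossover at price index {i + 4}")   # -> index 6
```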

This information-technology-enabled way of trading is cheaper for those executing it, but can impose great costs on workers and firms throughout the economy. Quants provide a lot of liquidity, but they can also alter markets by placing more emphasis on technique and by linking electronic markets with other markets (informationally as well as financially). In most cases, short-term strategies without overnight positions are used. Thus, these traders are in the market for quick wins and use only technical analysis to predict market movements, instead of trading based on physical fundamentals, human intelligence or news inputs.

Recent oil price volatility increased

Although some studies have found no direct proof that HFT causes volatility, others have concluded that HFT can, in certain cases, transmit disruptions almost simultaneously across markets due to its high speed in combination with the interconnectedness of markets (FT, 2011; Caivano, 2015). For example, Andrew Haldane, a top official at the Bank of England, said that HFT was creating system risks and that the electronic markets may need a ‘redesign’ in future (Demos & Cohen, 2011). Further sophistication of “robot” trading at decreasing cost is expected to continue for the foreseeable future. This can pose a threat to the stability of financial markets through amplified risks, undesired interactions and unknown outcomes (FT, 2011). In addition, in a world of intensive HFT, the acquisition of information will be discouraged, as the value of information about stocks and the economy retrieved by human intelligence will be much lower: the robots will have done all the work before a single human is able to process and act on the information (Salmon, 2014). For those interested in the issues around HFT in more detail, I recommend the article by Felix Salmon (2014).

However, it is important to mention that HFT, automated systems and other technicalities do not cause all the volatility. Markets have known swift price swings for centuries. In the oil industry, for example, geopolitical risk can cause price changes, as oil is an exhaustible commodity. As most people know, human emotions can distort markets, as can terrorist actions. Nowadays, even incomplete information such as tweets or Facebook posts can cause shares to jump or plummet. As markets become faster, more information is shared, and systems can process and act on this information ever more quickly thanks to (information) technological advancements, which in turn increases volatility. It is therefore more important than ever that there are no flaws in market data streams: the electronic markets and their information systems need enough capacity to process, control and display all the necessary information to market players, in order to avoid information asymmetries.

In my opinion, HFT is strengthened by the current state of computing technology, as cost reductions in computing power now enable the execution of highly complex algorithms in a split second. As prices go down and speed goes up, these systems will become more and more attractive, since they outperform human intelligence. This can become an issue in the future: volatility might increase, and it is this volatility that provides many opportunities for traders, but not the stability needed by producers and consumers, who are more long-term focused.

Therefore, action will be necessary in the future to restrict, or at least reduce, HFT. Examples might be big-data collection by regulators to monitor risk and predict future flash crashes or volatility events. Another option is the introduction of a “minimum resting period” for trading, so that traders have to hold on to their equity for a pre-specified time before selling it on, reducing the frequency of trades and thus volatility. Widening spreads would also help, as it makes quick selling and buying more costly and thus HFT less attractive.
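
A toy version of such a resting-period rule, with an arbitrary half-second minimum, could look like this:

```python
# Toy enforcement of a "minimum resting period": a sell order is rejected
# unless the position has been held for a set interval. The 0.5-second
# minimum is an arbitrary illustration, not a real regulatory figure.

import time

MIN_RESTING_PERIOD = 0.5   # seconds
positions = {}             # symbol -> time the position was opened

def buy(symbol):
    positions[symbol] = time.monotonic()

def sell(symbol):
    held_for = time.monotonic() - positions[symbol]
    if held_for < MIN_RESTING_PERIOD:
        raise ValueError(f"rejected: held {held_for:.3f}s, "
                         f"minimum is {MIN_RESTING_PERIOD}s")
    del positions[symbol]

buy("CL")                  # say, a crude-oil future
try:
    sell("CL")             # an immediate flip gets rejected
except ValueError as e:
    print(e)
time.sleep(MIN_RESTING_PERIOD)
sell("CL")                 # allowed once the resting period has passed
print("sell executed")
```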

Meanwhile, the financial markets’ watchdogs are struggling to regulate automated trading, and some HFT firms have enjoyed enormous profits from their trading strategies (Jump Trading, Tower Research Capital, DRW). During the latest turmoil in August this year, for example, a couple of HFT firms earned a lot of money (Hope, 2015). Thanks to these successes, new players are entering the market and competition is growing. As speed is essential (even milliseconds matter), HFT firms try to place their servers physically near the exchanges (such as the NYSE) to increase their advantage. The HFT firms are expected to stay in the market, ultimately resulting in more price volatility (Hope, 2015).

What do you think: how far should we let our technology intervene in the financial markets? Do we really need to allow algobots or similar automated trading systems to influence our financial markets because they can perform the human job faster, fact-based and at lower cost? Or should the financial markets always be based on human intelligence, which might ultimately be better for the economy as a whole and also provide a richer knowledge base about the real-world economy (as this information remains valuable, and numbers do not always tell everything)?

In case you are interested in this dilemma, I can also recommend reading Stiglitz’s speech at the Federal Reserve Bank of Atlanta in 2014.

Author: Glenn de Jong, 357570gj


The dark side of Personalized Search

One of the perks of Web 2.0 is that we can benefit from heavily personalized internet use. Companies can tailor their products and advertising to target potential customers much better than before, and consumers get tailored search results, ads and product recommendations.

This blog post will focus on personalized search. Speretta and Gauch (2005) define personalized search as search engines giving results based on user profiles, descriptions of user interests, and cookies. In this way, identical search queries may give different search results depending on which user is searching.
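
To make the definition concrete, here is a minimal sketch of profile-based re-ranking. The profile, topics and overlap score are all invented for illustration and are far cruder than anything a real search engine does.

```python
# Toy profile-based re-ranking: results whose topics overlap the user's
# interest profile get boosted. Profiles, topics, and scores are invented.

user_profile = {"music", "festivals", "guitar"}

results = [
    {"title": "Renewable energy subsidies explained", "topics": {"energy", "policy"}},
    {"title": "Top 10 guitar festivals this summer",  "topics": {"music", "festivals", "guitar"}},
    {"title": "New indie artists to watch",           "topics": {"music"}},
]

def personalized_score(result, profile):
    # Jaccard-style overlap between the result's topics and the profile.
    return len(result["topics"] & profile) / len(result["topics"] | profile)

ranked = sorted(results, key=lambda r: personalized_score(r, user_profile),
                reverse=True)
for r in ranked:
    print(f"{personalized_score(r, user_profile):.2f}  {r['title']}")
```

Note how the renewable-energy result drops to the bottom for this user, which is exactly the kind of bias discussed below.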

At first sight this seems like an amazing feature: your search engine cuts through billions of pieces of information to get you exactly what you are looking for.

But is always getting what you are looking for not also a danger in itself? Are the things we want also the things we need? In a less serious case, you might be looking for new music in a genre different from what you normally listen to, but your search engine hides the new artists and songs because they do not fit your profile. In a more serious case, elections are coming up and you are looking for a suitable candidate on the topic of renewable energy. For the sake of the argument, you have a neutral view on this, but in previous elections you voted for right-wing parties, which often have relatively conservative energy policies. So when researching the topic of renewable energy, you might get a very biased view, as your profile marks you as someone who is not pro-renewable energy.

Eli Pariser has coined the term “Filter Bubble” for this. He argues in his book The Filter Bubble (2012) that the bubble we live in will hamper society’s progress, due to people being uninformed about or ignorant of current issues in the world. It may also cause the “truth” to be hidden from some people.

One can argue that personalized search defeats the purpose of the internet. The internet gives you the possibility to connect with the world and get to know things on a whole new level, but the Filter Bubble might hamper this. On the other hand, in the pre-internet era people were only exposed to their own paradigms, and during that time society still progressed significantly.

What do you think? Are Personalized Search results a blessing or a curse?


Speretta, M. & Gauch, S. (2005) ‘Personalized search based on user search histories’, in Proceedings of the 2005 IEEE/WIC/ACM International Conference on Web Intelligence, pp. 622-628, 19-22 September 2005.

Pariser, E. (2012) The Filter Bubble. London, United Kingdom: Penguin Books.

First step to “speak” code?

As we got to know in the first DBA class, the big majority of us are not coders, and many BIMers have never used a programming language before. As we are taking this course, most probably a big percentage of us does not plan to become a developer… but… most probably many of us will be working with developers in smaller or bigger companies a year from now, when we have our beautiful Master’s diplomas in our hands. And here comes the potentially scary part: how are we going to work with them? And how are we going to understand “their language”, especially when we don’t actually “speak” it?

The goal of this article is to start building up a “list” of materials that can bring us a bit closer to understanding developers. Here is the first (very subjective) part of the list. (Although some articles/videos are long, they are worth the time.)

1. When the Development Bank of Singapore (DBS) started a digital transformation strategy, its Head of Group Technology & Operations, Dave Gledhill, created his own application out of mere curiosity and for his own fun, which gave him a much better understanding of the logic behind the development of their systems and of the points needing attention. The article gives an interesting perspective on why managers should also get their hands “dirty” and how this can help them make better decisions.


2. What are the hardships of managing developers? Reading this post makes it clear that although managers and higher-level executives might hold the steering wheel in theory, in practice the developers are the ones who make things work or not work. This read also emphasizes the point made in the previous article: you need to be able to write some parts of the code, even if you are a manager with no background in development.


3. “The world belongs to people who code. Those who don’t understand will be left behind.”
On a personal note: this is the best thing I have ever bought in an airport, by far. This June, Bloomberg Businessweek spent a whole issue on “explaining” what code is, what a computer is, a little about programming languages, a little about programmers and what it is like to work with them, among other topics. This might sound boring listed like that, but it is one of the best-written pieces on the topic I have ever read, and the online version is much better than the printed one, as it has videos, interactive “games” and more embedded into this mega-article. It takes a while to read the whole thing, but it has a touch of fun and sarcasm that makes it easier and very enjoyable to read.


4. Engineering Culture at Spotify, videos 1 & 2
These two 13-minute videos show how things work at Spotify and what makes the company able to develop fast and change the platform step by step in a very innovative way. Both videos are really entertaining, and you can learn an incredible amount from them about how to make developer teams engaged, aligned and thus successful.

Spotify Engineering Culture – part 1 from Spotify Training & Development on Vimeo.

Spotify Engineering Culture – part 2 from Spotify Training & Development on Vimeo.

And last but not least, a less “serious” way to get a feel for programming, coding and everything around it:
5. The ultimate “geek” series, which has two hilariously funny seasons so far, with the third one coming in April. It shows the daily life of a start-up in Silicon Valley, starring four developers and a “manager”. If you want to have a good time, don’t hesitate to watch it.

What are the materials (videos, articles, courses, etc.) that you use or would suggest to get to know the “developer world” better? What were the most interesting things you have read or heard on the topic?

Online reviews: Content vs. Reviewer

Word-of-mouth has always been very important, and with the rise of the Internet, online word-of-mouth is ever more important. Everyone wants to know the opinion of their peer consumers, and based on this information they feel more secure making the final decision to purchase a product or service. But how big is the effect of online reviews? Why is one review more reliable to a certain potential customer than another? Do we rely on what people say, or more on who says what?

Forman, Ghose and Wiesenfeld (2008) have done research on the role of reviewer identity disclosure in online reviews. This research shows that consumers in general rate a review as more helpful if it contains identity-descriptive information, i.e. more information about the reviewer, such as their real name, hobbies, profile picture and where they come from. These reviews even have the power to influence and increase online sales.
A notable fact here is that Forman et al. (2008) show that source information (identity-descriptive information) often appears to be more important than the actual content itself. If the source information matches a certain profile and consumers can identify themselves with it (matching community norms), the review is more likely to be rated as interesting and helpful. Social identification makes people more secure about the review and reduces uncertainty. Especially when there is an overload of information in reviews, source information appears to be a good way to process it all, and customers will use source characteristics as a heuristic device on which to base their final product decision. Common geography, for example, also increases the feeling of similarity with other people, strengthening the positive relationship between a reviewer’s disclosure of personal information and the sales of a product (Forman et al., 2008).

The results of this research surprise me, in the sense that the Internet is full of fake accounts, unreliable information about people and imaginary identities (DailyInfographic, 2015). So why would we trust this personal information and base our decisions on it? Why would we, for example, put more trust in a review written by a person who comes from the same city? Isn’t it really easy to just lie about being from Rotterdam?

And what about the reviewers’ privacy? Given the results of the research, it seems like a profitable business for online companies to ask reviewers for more and more personal information. But how should the reviewer and the consumer feel about that? Plenty of sources these days warn against just writing everything down on the Internet. According to Forbes (2012), personal lives, businesses and careers can be affected in more ways than you think if you share too much information online. The intention can be innocent, but the results can be worse than expected. Even some minor personal details, such as where you were born, can be enough for some people to manipulate you (ITProPortal, 2015).

Finally, we can state that Forman et al. (2008) did some interesting research on online reviews and the role of the reviewer. Important for all kinds of businesses to know is that bad publicity is not always as devastating as we think: the opinion of a public community might be even more powerful than that.

References:
– Daily Infographic. 2015. ‘How Many Of The Internet’s Users Are Fake’, http://www.dailyinfographic.com/how-many-of-the-internets-users-are-fake, last visited 13 September 2015.
– Forbes. 2012. ‘Sharing Too Much? It’ll Cost You’, http://www.forbes.com/sites/cherylsnappconner/2012/10/19/sharing-too-much-itll-cost-you/, last visited 10 September 2015.
– Forman, C., Ghose, A. and Wiesenfeld, B. 2008. ‘Examining the Relationship Between Reviews and Sales: The Role of Reviewer Identity Disclosure in Electronic Markets’, Information Systems Research, 19(3), 291-313.
– ITProPortal. 2015. ‘The surprising danger of posting personal information online’, http://www.itproportal.com/2015/03/13/surprising-danger-posting-personal-information-online/, last visited 13 September 2015.

Author: Lizan Bakker

Internet Exchanges for used books: An empirical analysis of product cannibalization and welfare impact.

In this blog post I will provide a summary and discussion of the article 'Internet Exchanges for Used Books: An Empirical Analysis of Product Cannibalization and Welfare Impact', written by A. Ghose, M.D. Smith and R. Telang. The article was published in Information Systems Research in March 2006 and can be accessed for free; the URL can be found in the reference list.

The market for used books is nothing new. Even before the rise of the internet, people were buying, selling and sharing used books. Since the inception of the online marketplace Amazon.com, this process has become much easier. In contrast to a brick-and-mortar bookseller, Amazon is not limited by geographical location or shelf space, and can sell at a lower price.

Groups like the Association of American Publishers believe that used-book sales through Amazon will cannibalize new-book sales and even 'threaten the future of authorship' (Russo, 2014). Ghose, Smith and Telang set out to quantify the effect of used-book sales through Amazon on the welfare of all stakeholders, and it is these findings we will analyse.

Let us start with the theoretical analysis.

The authors identify two ways the market for used books affects the market for new books, dubbed the price effect and the substitution effect. The price effect works as follows: when there is a market for used goods such as books, a consumer is willing to pay more for the product because he can sell it later on the used-goods market. A consumer who values a book at $25 will be willing to pay up to $40 when he knows he can sell the book on the secondary market for $15. This mark-up, which equals the expected second-hand price, is a direct welfare gain to the original seller.

The second-hand market also creates a substitution effect: for many consumers, new and used products are substitutes, so some people who would otherwise have bought a new product will buy a used copy instead. This is the cannibalization the Authors Guild is afraid of.

The welfare gain or loss to the publisher is therefore the net result of the price effect minus the substitution effect: a positive number means a welfare gain, a negative number a welfare loss.
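As a minimal sketch of these two effects (the numbers mirror the $25/$15 example above; the function names and the illustrative loss figure are mine, not the paper's):

def willingness_to_pay(valuation, resale_price):
    # Price effect: a resale market raises what a buyer will pay for a new copy.
    return valuation + resale_price

def publisher_welfare(price_effect_gain, substitution_loss):
    # Net effect on the publisher: price effect minus substitution effect.
    return price_effect_gain - substitution_loss

print(willingness_to_pay(25, 15))   # 40: the mark-up equals the resale price
print(publisher_welfare(15, 20))    # -5: an illustrative net loss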

The empirical evidence indicates that the substitution effect outweighs the price effect. The authors' results show that publishers lose about $45 million, or 0.03% of gross profit, per year from Amazon's used-book market. Consumer surplus is estimated at around $67 million annually, and Amazon's increase in gross profit from used books amounts to about $88 million annually. The net effect on welfare is therefore positive, with book consumers and Amazon's shareholders the clear winners of this development, and traditional book publishers losing out.

According to the researchers' data, only 16% of used-book sales through Amazon cannibalize new-book sales, while 84% of the used books would not have been sold without Amazon. This 84% helps explain the net welfare gain: Amazon sells these books above cost, creating producer surplus, and consumers value the books above their sales price, which is where the consumer surplus comes from.
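Plugging the paper's reported figures into that net-effect logic gives (all amounts in millions of US dollars per year):

publisher_loss   = -45   # publishers' lost gross profit
consumer_surplus = 67    # annual gain to book buyers
amazon_profit    = 88    # Amazon's extra gross profit from used books
print(publisher_loss + consumer_surplus + amazon_profit)   # 110: a net welfare gain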

What about the authors? Are they helped or hindered by the rise of Amazon?

The researchers did not investigate the effect Amazon has on authors themselves, which could be an interesting topic for further research.

One effect on author welfare that must be considered is what is known as the 'long tail effect'.

Because physical stores have limited shelf space, they generally stock only items that are popular. This is great for authors of popular fiction, but it means there is less space in the store for niche books. That is unfortunate for the authors of those books, but do customers even care? According to long-tail theory, the answer is yes: the theory postulates that demand for goods that are not sold in physical stores may be as big as, or bigger than, demand for goods that are. Because Amazon.com and similar online platforms are not limited by shelf space or physical location, they are better equipped to fill this demand. This is great news for authors and consumers of niche books, and an additional source of welfare gain.

[Figure: the long tail of book sales. Source: Brynjolfsson, Yu and Smith (2006)]

There is also an additional threat to the welfare of authors that was not mentioned in the articles: the rise of e-books. Traditionally, the net profits of a book were split evenly between publisher and author; for e-books, the author receives a much smaller share. The division of profits differs from publisher to publisher, however, and the topic is still hotly debated among authors, publishers and Amazon. Only time will tell how the market for e-books will develop.

Conclusion:

Since the inception of Amazon.com, the market for used books has grown to a size never seen before. While it is natural for publishers to worry that this will cannibalize new-book sales, that fear is largely unfounded: new and used books are imperfect substitutes at best, and many used-book sales would not have occurred without Amazon. Because the platform is less limited by physical constraints, Amazon can serve niche markets in a way physical stores cannot, thereby increasing welfare for consumers and writers of niche books.

Will e-books help or harm authors? How will the market for books, new and used, develop from here? Can traditional booksellers capitalize on the digital revolution? I’m interested to read your thoughts and ideas in the comments.

Martin Braakhuis

S.I.D.: 333718mb

References:

Ghose, A., Smith, M.D. and Telang, R. (2006), 'Internet Exchanges for Used Books: An Empirical Analysis of Product Cannibalization and Welfare Impact', Information Systems Research, Vol. 17, No. 1, pp. 3-19.

Available: http://pages.stern.nyu.edu/~aghose/UsedBook.pdf

Brynjolfsson, E., Yu, H. and Smith, M.D. (2006), 'From Niches to Riches: Anatomy of the Long Tail', Sloan Management Review, Vol. 47, No. 4, pp. 67-71.

Available: http://www.heinz.cmu.edu/~mds/smr.pdf

The Authors Guild (2014), Letter from Richard Russo on the Amazon-Hachette Dispute

Available: https://www.authorsguild.org/industry-advocacy/letter-from-richard-russo-on-the-amazon-hachette-dispute/

The Individual Stock Trader

With the invention of the internet and the ongoing digital transformation, many everyday practices have changed. One that many of us can relate to is stock trading by individual investors, and the business models of the brokers we use.

In the traditional setting, buying or selling stocks required a lot of communication. Brokers had to actively build a client base and research the financial markets to generate stock ideas. They then communicated buy/sell recommendations to clients over the telephone and finally used their own systems to place the orders through their people on the trading floor. With this much communication and infrastructure involved, the cost of placing orders was high, which meant a stock had to rise (or fall) considerably before a trade turned a profit (Beattie, 2015).


Because more and more people started connecting through the internet, a new type of stock broker emerged: the online broker. The first company to offer online trading was K. Aufhauser & Company in 1994 (!). Through its website 'WealthWEB', individuals were now able to order stocks directly, minimizing the role of the agent they previously had to contact.

This development also brought a big change in the business models of brokers. In the traditional setting, brokers generated revenues mostly from payments for order flow and trading commissions: a lot of revenue from relatively few clients.

The quality of traditional brokers varied dramatically across individuals, making it hard for investors to choose the best among them. It is also difficult for investors to discern whether a broker has made a well-informed recommendation after only a brief telephone conversation (Wu et al., 1999).

According to Wu et al. (1999), the online trading model is much more dynamic. Because of the huge amount of data available, the investor takes a much more active role in his own portfolio. With the internet serving as an information gateway, the investor can do everything the retail broker used to do: with online trading he can make his own decisions, and trades are executed instantaneously, at essentially the same price.

The business model of today's online brokers is about delivering service and value. Convenience, control, accessibility and low commissions make online investing very attractive to individual investors. Online brokers generate revenues mostly from trading commissions, net interest on margin accounts and (sometimes) payments for order flow. Their goal is to generate the highest possible traffic through an effective system with quality service. So, contrary to traditional brokers, they try to make a profit through volume (Investopedia, 2015).
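To make the volume point concrete, here is a back-of-the-envelope comparison; the fees are made-up round numbers, not actual broker rates:

full_service_fee = 100.0   # hypothetical commission per traditional trade
online_fee = 7.0           # hypothetical commission per online trade
revenue_target = full_service_fee * 5_000   # revenue from 5,000 full-service trades
print(revenue_target / online_fee)          # ~71,429 online trades needed to match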


To conclude: through digitalization and changed business models, we can now trade stocks faster, more cheaply and more conveniently than ever before.

Author: Sven Sabel
S.I.D.: 354240ss

References

Beattie, A. (2015) ‘The birth of stock exchanges’, http://www.investopedia.com/articles/07/stock-exchange-history.asp, last visited: 9-9-2015.

Wu J., Siegel M., Manion J. (1999). ‘Online trading: An internet revolution’, Sloan School of Management Institute of Technology (MIT) Cambridge.

Investopedia Staff (2015) 'Brokers and online trading: Full service or discount', http://www.investopedia.com/university/broker/broker2.asp, last visited: 9-9-2015.

Anonymous Communication Networks and Their Potential Role in Business


Online black markets, services and information found on the 'Deep Web' (or whatever else it may go by: the collection of web addresses not indexed by search engines) are presently relatively well known. For those unfamiliar with the topic, a study in 2001 estimated the size of these non-indexed internet sites at around 7.5 petabytes (Bergman, 2001). Estimating their size today has proven even more complicated, since the content stored in deep-web databases has peculiar features that complicate access: for data-mining purposes, the information can only be reached through the query interfaces the sites support, which oblige user queries to specify values for given input attributes (Liu, Wang & Agrawal, 2011). These intricacies have made it near-impossible to determine accurately, and hard even to estimate, the size of this vast collection of information.

The 'Deep Web' has grown in importance as of late (Braga, Ceri, Daniel & Martinenghi, 2008; Cali & Martinenghi, 2008) and with it the use of anonymity networks such as Tor. For readers unfamiliar with Tor: "The Tor network is a group of volunteer-operated servers that allows people to improve their privacy and security on the Internet. Tor's users employ this network by connecting through a series of virtual tunnels rather than making a direct connection, thus allowing both organizations and individuals to share information over public networks without compromising their privacy." In short, Tor routes your connection through a continuously changing series of volunteer-run relays, chosen at random, so that no single relay knows the full route a user's traffic takes before it reaches the desired website. The route is different every time the user visits a site, and it goes without saying that the more users take part, the 'more randomized' the network becomes. It does come with one weakness, however: node eavesdropping, or end-to-end correlation analysis. In short, an observer who can inspect the traffic at both ends of a connection can match up the two streams and thus identify users.
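As a conceptual sketch of that relay idea (no real cryptography here; in the actual Tor protocol each layer is also encrypted, and the relay names below are made up):

def wrap(message, route):
    # Wrap the message in one layer per relay; in Tor every layer is encrypted.
    packet = ("destination", message)
    for relay in reversed(route):
        packet = (relay, packet)
    return packet

def forward(packet):
    # Each hop peels exactly one layer: it learns the next hop and nothing else.
    hop, inner = packet
    while inner[0] != "destination":
        print(hop, "forwards to", inner[0])
        hop, inner = inner
    print(hop, "delivers:", inner[1])

forward(wrap("hello", ["relay-A", "relay-B", "relay-C"]))

The point of the layering is that relay-A learns only the next hop, and only the last relay sees the destination; no single relay knows both who is sending and where the message ends up.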


This is all fantastic news for reporters seeking to share controversial stories under oppressive governments, and in general for oppressed people seeking to express controversial messages. However, it has also fostered crime through virtual networks, prompting organizations such as Interpol to seek training in how to use, and 'gain the upper hand' with, these tools. The discussion about the level of anonymity this network really offers is vast and can be found throughout both indexed and non-indexed web addresses: it covers the aforementioned end-to-end correlation analysis, points of encryption, whether to use TAILS (and from where, and how often), self-destructing emails, personalized email encryption, black Bitcoin wallets, and much, much more.

This brings this blog to its fundamental point of contemplation, one much more relevant to its strategic perspective: how can this play out for business? Can IT-savvy securities traders in the world's financial capitals use these tools for their own gain? Could this lead to a new financial meltdown, in which individual players' lust for capital gains breaks down the financial system? How could anonymous messages among acquainted parties play out in the M&A market, where secrecy is a fundamental pillar of daily business? How could this play out for entire industries?


There is no doubt that anonymous communication brings freedom, and this freedom is presently available to whoever has the knowledge to harness it, in a world where knowledge is openly available to anyone with a computer and an internet connection. It is available to the oppressed, but also to powerful individuals who are entrusted with responsibility and closely monitored by third parties because of past mistakes that have harmed others.

In my own view, these tools could prove radical for business: a truly anonymous communication network would restore the fullest sense of trust to business dealings. However, knowing human nature as it has shown itself throughout history, these tools might also prove harmful in the long run, assuming they continue to be developed and become more common in society.

Author: Dennis Oliver Huisman
S.I.D.: 369919dh

References

Bergman, M. K. (2001). “The Deep Web: Surfacing Hidden Value”. The Journal of Electronic Publishing (1).

Braga, D., Ceri, S., Daniel, F., Martinenghi, D. (2008). "Optimization of multidomain queries on the web". Proceedings of the VLDB Endowment, 1(1): 562-573.

Cali, A., Martinenghi, D. (2008). "Querying data under access limitations". In: Proceedings of the 24th IEEE International Conference on Data Engineering, 50-59.

Liu, T., Wang, F., Agrawal, G. (2011). "Stratified sampling for data mining on the deep web". Frontiers of Computer Science, 179-196.

When Does Retargeting Work? Information Specificity in Online Advertising

It is estimated that the average US citizen spends 5 hours and 46 minutes per day online (eMarketer, 2014). Hence, it is no surprise that the business of online advertising is booming. With ever closer tracking of individual browsing behaviour (for example, via tracking cookies), firms are now able to offer better-personalized product recommendations than ever. Lambrecht and Tucker (2013) are particularly interested in the effects of retargeting. Although retargeting comes in several forms, the overarching concept can be described as follows: a customer visits website A but decides not to buy a product and leaves. When he logs on to a different website, website B, he is shown an advertisement for website A. These so-called external advertisements, served through ad networks, give firms the chance to target consumers even when they are not on the firm's own website.

Lambrecht and Tucker (2013) focus specifically on the concepts of dynamic and generic retargeting. Whereas dynamic retargeting shows the actual product you have been looking at (often together with three recommendations in the same price range) on an external website, generic retargeting shows only the image of the general brand or firm. While personalized product recommendations on a firm's own website have proven successful, little is known about dynamic and generic retargeting. Lambrecht and Tucker (2013) help to explain whether dynamic retargeting is more successful, and when it is effective in converting customers to purchase.


Through a field experiment that ran for 21 days, in which 77,937 individuals viewed both the firm's website (in this case a travel site) and an external ad, the authors obtained a surprising result: in general, generic retargeting proved MORE successful than dynamic retargeting(!). To explain this phenomenon, the authors suggest that an advertisement's effectiveness depends on how narrowly the consumer has defined his or her preferences: initially, consumers have only a broad idea of what they want, and only later do they form a detailed view of what they are actually looking for.

Scrutinizing browsing histories, the authors manage to confirm this hypothesis. They find that dynamic retargeting is indeed MORE effective when customers have visited a review site (for example, TripAdvisor). In this case, the review site visit is an indicator that a consumer has narrowed down his or her preferences. As a result, once a customer has gone to a review site, it is more effective to target him or her with a dynamically retargeted ad.


Similarly, the authors show that browsing different websites in the same category is another indicator that a customer has narrowed down his or her preferences. Again, once a customer has shown particular interest in one category (across different websites), dynamic retargeting proves more successful (a toy version of this decision rule is sketched below).
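Putting the two indicators together, a toy version of the targeting rule might look like this (the review-site list, the category counter and the two-site threshold are my own simplification, not parameters from the paper):

REVIEW_SITES = {"tripadvisor.com"}

def choose_ad(visited_sites, same_category_visits):
    # Switch to dynamic ads once browsing suggests narrowed preferences.
    if REVIEW_SITES & visited_sites or same_category_visits >= 2:
        return "dynamic"   # show the specific product viewed earlier
    return "generic"       # show only the brand

print(choose_ad({"news.example.com"}, 0))   # generic
print(choose_ad({"tripadvisor.com"}, 0))    # dynamic
print(choose_ad({"blog.example.com"}, 2))   # dynamic: two same-category sites seen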

Lambrecht and Tucker (2013) clearly show that behaviour on external websites is an important signal for differentiating your retargeting. Initially, customers are more likely to react to generic retargeting; once they define their product preferences over time, dynamic retargeting proves more successful. The multistage journey of a customer's decision proves once more that advertising is not a one-size-fits-all concept.

Sources

eMarketer, 2014, 'Mobile Continues to Steal Share of US Adults' Daily Time Spent with Media', 22 April, retrieved from: http://www.emarketer.com/Article/Mobile-Continues-Steal-Share-of-US-Adults-Daily-Time-Spent-with-Media/1010782#sthash.ubc17EDr.dpuf

Lambrecht, A. & Tucker, C. 2013, When Does Retargeting Work? Information Specificity in Online Advertising, Journal of Marketing Research, vol. L, 561-576.

:: 4G spectrum auction ::

Paul Prins – 4min. read

On 31 October 2012, the auction for the '4G spectrum' started in the Netherlands; it finished one and a half months later, on 13 December 2012. The auction took longer and raised more money than expected. The two Dutch market leaders, KPN and Vodafone, each bid about €1.3 billion, T-Mobile €911 million and Tele2 – a fairly new player in the market – €161 million. KPN paid (relatively) so much that its dividend payments for 2012 and 2013 were cancelled.
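A quick back-of-the-envelope sum of the winning bids shows the scale (the 'expected' figure is simply inferred from the 'almost four times' estimate mentioned below):

bids = {"KPN": 1.3, "Vodafone": 1.3, "T-Mobile": 0.911, "Tele2": 0.161}   # EUR billions
total = sum(bids.values())
print(round(total, 2))   # ~3.67bn raised, against roughly 0.9bn expected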

Why did it take so long? And why was there so much money involved – almost four times more than expected?

To put it briefly, all of these companies were aiming to get access to chunks of (mainly) the 800MHz and 900MHz spectrum (more about these frequency bands later). The licenses for these bands are valid until 2030 and are meant to provide faster mobile data (LTE) services all over the Netherlands. At the time, the roll-out was planned to start in Q2 2013 and expected to run until the end of 2014.

Shortly after the auction, the real rivalry between the companies took off. KPN was the first player to actually 'launch' its mobile 4G connection for its customers. But this applied only to its premium data plans, and KPN did not foresee that early adopters would start criticizing it because its 4G was not compatible with the just-released iPhone 5. Vodafone, in response, recently launched a marketing campaign claiming to be the first provider in the Netherlands to offer a 4G connection for the iPhone 5 – truth be told, for now only in the four largest cities: Amsterdam, Rotterdam, Utrecht and Den Haag.

But what's the fuss all about? Why the emphasis on the iPhone 5? Let's clear things up.

First of all, and without getting into too much detail yet: the iPhone 5 is one of the most popular high-end mobile devices and, as it turned out, it does not support KPN's initial 4G roll-out frequency (800MHz), whereas the Samsung Galaxy S4, HTC One and Nokia Lumia 920 do. Why? Because the iPhone 5 for the Dutch market does not support 800MHz, only 1800MHz. This was a great opportunity for Vodafone's marketing department to announce that it had already started rolling out LTE services in the 1800MHz band (for the record: KPN will start rolling out the 1800MHz band in 2014 as well).

 But what about these frequency bands?

Back in the day, when you travelled abroad you needed a separate mobile device to connect to a local network (or you were one of the lucky few who owned a satellite phone), since every country had its own 'rules'. Luckily for us, things have changed and international standards for the mobile spectrum have been adopted. As with any international agreement, the real results of these standards only become visible many years later. In the Netherlands, for example, the 800MHz band was used exclusively for analog television, so that bandwidth was not available for telecommunications; otherwise nobody would have been able to watch television.

In this new digital age, however, 800MHz is no longer reserved for analog TV. Why? Partly because hardly anybody watches analog TV anymore, but more importantly because the 800MHz band's main characteristic is that it can cover a wide area, whereas higher-frequency bands cannot. For the same reason, the GSM network is provided over the 900MHz band.

So what does this mean for telecommunications in the Netherlands?
The final results of the Dutch 4G auction give a pretty clear overview:

At this point it is not known exactly which provider will use which specific band, but a rough expectation looks like this:
2G / GSM: 900MHz
3G / UMTS: 2100 MHz
4G / LTE: 800, 1800, 2600 MHz

Back to the iPhone:
As mentioned before, the initial roll-out focuses on the 800MHz frequency, hence the fuss about the iPhone 5, which 'only' supports the 1800MHz band. The good news, though, is that the new iPhone models (5S and 5C) will support all three LTE bands.

And that is exactly what it’s all about for LTE: device and mobile carrier support:
Band 20 (800MHz) – Vodafone, KPN, Tele2
Band 3 (1800MHz) – Vodafone, KPN, T-Mobile
Band 7 (2600MHz) – Vodafone, KPN, T-Mobile, Tele2, Zum (Ziggo)

Thus, the bottom line is: if you want to know whether you will see that beloved new 4G icon at the top of your screen, check not only whether your provider offers LTE access (and whether you are actually paying for it), but also whether your phone supports the band your provider uses; the sketch below spells out this check.
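As a minimal sketch of that check (the live-band snapshot is my reading of the roll-out described above, not official coverage data):

LIVE_LTE_BANDS = {
    "KPN": {20},        # initial roll-out on 800MHz (band 20) only
    "Vodafone": {3},    # 1800MHz (band 3), live in the four largest cities
}

def shows_4g_icon(phone_bands, provider, paying_for_lte):
    # The 4G icon appears only if plan, provider and handset all line up.
    return paying_for_lte and bool(phone_bands & LIVE_LTE_BANDS.get(provider, set()))

iphone5_bands = {3}   # the Dutch iPhone 5 supports only the 1800MHz band
print(shows_4g_icon(iphone5_bands, "KPN", True))        # False: hence the fuss
print(shows_4g_icon(iphone5_bands, "Vodafone", True))   # True, within covered cities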

Click & Pay Online, Eat & Connect Offline

Founded in December 2012, BonAppetour is a web-based social platform that aims to 'provide travelers an authentic dining experience; be it just eating or even cooking with locals in the comfort of their homes'. Born as a venture startup, it is positioned as an alternative to eating at restaurants.

Features:

  • Social platform: the offline activity is made possible through online activity. Operating in a consumer-to-consumer market, the business model is powered by social media.
  • Pricing: menu prices are pre-set by the hosts, and customers can choose according to their budget; the current range runs from free to 50 euros. Hosts provide additional information signals, such as an ingredients list and a photograph of the dish.
  • Profiles: a trust premium is involved in the transaction, since fewer cues about the quality of the food and the experience are available to inform the decision than at a typical restaurant.
Traveler – Experience Journey with BonAppetour

Hosts – Experience Journey with BonAppetour

Current Approach to Increase Platform Credibility

1. Building a community through its Facebook page – real-time updates about successful meet-ups, shared photos and discussions help to foster online word-of-mouth.

2. Blogs that document stories – through careful content curation, BonAppetour is able to showcase the unique aspects of the social experiences it enables. Links to travelers' blogs, especially those of prominent digital nomads, help to increase its 'searchability' through backlinks and referral searches. All of this adds up to improve the platform's credibility.

Questions

1. How much has consumer behavior changed because of IT? Are both parties leveraging other, external information sources to guide their transaction? For example, I may do a quick Facebook check on the host rather than rely only on the reviews provided on BonAppetour.

2. A further area of research is which information and attributes matter most, i.e. which ones affect the probability of a successful transaction.

Future Progress

BonAppetour is currently working on sharing and referral features that leverage similarities in social interests within its existing customer pool to expand outreach. It has a profile page that collates reviews of past 'experiences', which can be used for future observational learning. Also, the quality of the information provided may vary: a good cook may not have the best photography skills and so may not be able to present his or her dishes in the best 'light'. In the future, BonAppetour may need to invest in educating hosts on how to make information signals work in their favor, and on SEO tools such as which keywords to use to increase their 'attractiveness'.

Learning from Others

EatWith is the Airbnb for dinner parties. The site layout incorporates knowledge of online search behavior for intangible goods, providing images of the end product as well as reviews. To help its customers – travelers and hosts alike – avoid 'lemons', the business currently screens hosts through interviews and lets hosts vet their profiles. It also provides assurance services in the form of the EatWith Guarantee and an insurance policy.

BonAppetour could take a page out of EatWith's playbook, from its business model of taking a 15% cut to the tools it has designed to bring about successful transactions.


EatWith – Example of a host in Amsterdam, showing details of the kitchen, food and people involved.


Sources:

1. http://www.webnews.it/2013/02/04/startup-che-weekend-a-milano/

2. http://www.bonappetour.com/how_it_works.php#

3. http://allthingsd.com/20130731/eatwith-an-airbnb-for-dinner-parties-sets-its-sights-on-u-s-cities/

:: Rethinking Risk ::

Paul Prins – 3min. read


In the post-digital era we find that modern technologies increasingly interact and converge. And while this 'interconnectedness' holds the promise of new opportunities for business, it also creates interrelated risks that may not be effectively managed.

Imagine that one of your employees uses a communication app – one that is not approved by the IT department – on his personal phone, which is connected to the company's database and network. While having a coffee during his break he reaches for his phone, texts with some friends and unwittingly opens a malicious link he received in one of his messages... In this scenario, a single user with a mobile device can expose the company and private information to significant security and regulatory risk.

In the paper 'Communication and Interpretation of Risk' (1998), J. Richard Eiser of the School of Psychology at the University of Exeter defines risk as follows:

 “Risk is traditionally defined in terms of probability. However, people often have difficulty in processing statistical information and may rely instead on simplified decision rules. Decision making under risk is also critically affected by people’s subjective assessments of benefits and costs.”

When it comes to risk management, I believe one of our biggest vulnerabilities lies in our ability to make good risk decisions based on the information available to us: our decisions are often inconsistent.
As Daniel Kahneman already proposed in 1979 in his work on 'Prospect Theory', which describes how people choose between probabilistic alternatives that involve risk based on the potential value of losses and gains, risks are usually evaluated by means of cognitive heuristics. Thus, even though we believe a human being is capable of making rational decisions, many of these decisions turn out to be rather irrational.
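For the curious, the value function at the heart of prospect theory fits in a few lines; the parameter values below are the median estimates from Tversky and Kahneman's 1992 follow-up study, not from the 1979 paper itself:

def value(x, alpha=0.88, lam=2.25):
    # Concave for gains, convex and steeper for losses: loss aversion.
    return x ** alpha if x >= 0 else -lam * ((-x) ** alpha)

print(value(100))    # ~57.5: how a gain of 100 'feels'
print(value(-100))   # ~-129.5: an equal loss hurts more than twice as much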

In a relatively recent book, 'Risk Intelligence: How to Live with Uncertainty' (2012), Dylan Evans explores the psychology, sociology and politics of uncertainty through what he calls 'risk intelligence': the essential skill of learning the boundaries of our knowledge and using that insight to hone our humanly flawed decision-making:

“At the heart of risk intelligence lies the ability to gauge the limits of your own knowledge — to be cautious when you don’t know much, and to be confident when, by contrast, you know a lot. People with high risk intelligence tend to be on the button in doing this.

[…]

This is a vital skill to develop, as our ability to cope with uncertainty is one of the most important requirements for success in life, yet also one of the most neglected. We may not appreciate just how often we’re required to exercise it, and how much impact our ability to do so can have on our lives, and even on the whole of society.”

This also presents a challenge to CIOs and the IT audit function. They will likely need to develop an integrated approach to risk assessment and figure out how this 'interconnectedness' could make sensitive data vulnerable to, for instance, cyber crime. Hence, a sound risk strategy involves not only the skill of risk intelligence but also a system of comprehensive assessments and constant risk monitoring.

 …Or do you know other effective ways?

Healthcare and IS


Dutch students might remember 'het elektronisch patiëntendossier' (EPD), the electronic patient record: a digital dossier on every patient that can easily be accessed by medical staff across the nation to exchange medical information. There were quite a few privacy concerns, due to the sensitive nature of the information and because no system is completely secure; the fact that the system was commissioned by a governmental body made the odds of a security breach seem even worse. At the time, every citizen received a letter announcing the plan, with the option to request an opt-out (no guarantees) by writing back with a copy of your passport.

As most of you know, the Dutch Senate (Eerste Kamer) blocked the initiative in 2011 and ordered the minister to withdraw from the project. Contrary to popular belief, this does not mean the EPD no longer exists: about half of the Dutch population has already been added to the system, though further implementation will be much harder now that it requires an opt-in.

This year the UK followed in the Dutch footsteps, announcing its own healthcare information strategy, which is similar in many ways; an infographic depicting the proposed benefits can be found via the second source below.


It is clear that governments want to join the 21st century and profit from all the benefits new technologies have to offer. The question in this case, however, is not only whether they have the skill to pull it off but also whether the population is ready for this kind of change. The Dutch case met with much resistance and left us with a half-implemented system that most people don't even know about.

Sources:
http://www.rijksoverheid.nl/onderwerpen/elektronisch-patientendossier
http://theamazingworldofpsychiatry.wordpress.com/2012/05/23/the-uk-government-announces-an-information-strategy/