Elsevier is a world-leading scientific publishing company, offering over 2,500 unique journals and more than 33,000 unique book titles (Elsevier, 2015). These unique offerings differentiate it from the competition. Additionally, Elsevier offers web-based, digital solutions such as ScienceDirect, Scopus, and Reaxys, which enable researchers, students and other individuals to better consult the content made available by Elsevier (and other publishers). These solutions are just one example of the Internet features Elsevier tries to build into its business fundamentals. Currently, Elsevier’s business is shifting from scientific publisher towards professional information solutions provider. Elsevier’s CEO Ron Mobed is encouraging the business to ‘Lead the way’ (Mobed, 2014). From this corporate vision, we can infer that Elsevier is striving to implement new technologies in order to disrupt the publishing industry.
To generate revenue, Elsevier mainly sells access to scientific journals to its customers. Its value proposition is that it advises institutions on how to generate revenue with its services. This value proposition is demonstrated on a yearly basis by Sales, directly at the institution. However, these business-to-business negotiations are transforming due to emerging technologies, which, for example, result in increased consumer informedness (Li et al., 2014).
To manage this transformation (e.g. consumer informedness) and address other complications arising from technological development, we propose an online application driven by cloud computing: an online platform where the institution can log in and create and adjust the same metrics currently shown by Sales. This innovation will further expand Elsevier’s current value to institutions, but it also introduces risk, since institutions are no longer required to contact Elsevier for these metrics. The focus will remain the same: not only is the value of the institution’s investment in Elsevier presented, but also how Elsevier’s services contribute to the institution’s revenue through increased institutional competitiveness and collaboration among researchers. Competitiveness will help the institution gain a better market position and earn more from four sources: block funding, project funding, commercial monetization, and tuition and endowment. Collaboration among researchers will improve the quality of their research, which will lead to better publications and thus more value for the institution. In conclusion, the online application will lead to more captured value for Elsevier and to more value and revenue for the institution.
Elsevier, 2015. At a Glance. [Online] Available at: https://www.elsevier.com/about/at-a-glance [Accessed 7 October 2015].
Li, T. et al., 2014. Consumer Informedness and Firm Information Strategy. Information Systems Research, 25(2), pp.345–63.
Mobed, R., 2014. Elsevier’s vision. Amsterdam, Netherlands: Elsevier. Internal employee presentation.
Without any doubt, every reader of this blog has already heard of the concept of outsourcing. In this post, I am going to write about a particular product with incredible potential: the thin client.
I first came across the concept of the thin client while reading The Big Switch by Nicholas Carr. In his book, the author draws an interesting parallel between the diffusion of electricity and that of computing, forecasting that computing will soon become a utility. According to him, the next big change will be the outsourcing of computing; indeed, he predicts a bright future for the so-called as-a-service models (in particular, his book speaks of SaaS and HaaS). In chapter 4, ‘Goodbye, Mr. Gates’, he explains how this will become possible: through the use of thin clients. A thin client is a stateless, fanless desktop terminal with no hard drive. It works through a connection to a data centre (which can be proprietary or outsourced), which gives users all the features typically found on a desktop PC, including applications, sensitive data, memory, etc. In other words, the thin client allows users to work, on most occasions, as they would with a personal computer. The only cases in which normal computers are better are very intensive and demanding applications, such as AutoCAD; this is due to the thin client’s lack of local hardware.
Thin clients are linked to a single powerful host machine, which can run multiple operating systems and multiple applications on the same server at the same time. This is possible only thanks to virtualization, i.e. software that partitions the physical infrastructure to create various dedicated resources.
Creating such an infrastructure has several benefits for a company:
1) Lower Operational Costs: An office environment where several workstations are involved can access a single server unit, thereby reducing operational costs in the following ways:
- Setting up the device takes less than ten minutes to accomplish.
- The lifespan of a “client” unit is very long, since there are no moving parts inside. The only parts that need regular replacement are the peripherals external to the unit. This means that when something breaks at the “client” end, fixing it can be as easy as swapping in a replacement unit. Even wear and tear is barely noticeable.
- Energy efficiency – A slim unit is said to consume 20W to 40W, as opposed to a regular thick PC, which consumes 60W to 110W during operation. In addition, thin PCs need little or no air conditioning, which means lower operating costs; whatever air conditioning is needed is supplied at the server area.
- Work efficiency – The work environment can be far-reaching and extensive, as the setup provides quick access to remotely located workers simultaneously operating on server-based computing.
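The power figures quoted above translate directly into money. Here is a minimal back-of-the-envelope sketch; the duty cycle and electricity price are my own illustrative assumptions, not figures from the vendors:

```python
# Rough annual energy-cost comparison: thin client vs. regular desktop PC.
# Wattages are the mid-points of the 20-40 W and 60-110 W ranges quoted above;
# hours, days and price per kWh are assumptions for illustration.
THIN_W, THICK_W = 30, 85
HOURS_PER_DAY, DAYS_PER_YEAR = 8, 250
PRICE_PER_KWH = 0.20  # assumed electricity price in EUR/kWh

def annual_cost(watts: float) -> float:
    kwh = watts * HOURS_PER_DAY * DAYS_PER_YEAR / 1000
    return kwh * PRICE_PER_KWH

saving = annual_cost(THICK_W) - annual_cost(THIN_W)
print(f"Saving per seat per year: EUR {saving:.2f}")  # EUR 22.00
```

Multiplied across an office with hundreds of workstations, even this modest per-seat figure becomes noticeable.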
2) Superior Security: Since users will only have access to the server by network connections, security measures like different access levels for different users can be implemented. That way, users with lower access levels will not be able to see, know, or in worst case scenarios, hack into the confidential files and applications of the entire organization. They are all secured at the server’s end, which is also a way of securing data files in the event of natural disasters. The servers will be the only machines that need to survive the disaster as the main location of all the saved data. Immediately after the disaster, new “clients” can easily be connected to the server, for as long as the latter remains intact.
3) Lower Malware Infection Risks: There is a very slim chance of getting malware onto the server from a thin client, because inputs to the server only come from the keyboard, mouse actions, and screen images. The clients get their software from the server itself; hence, patches, software updates and virus scanning are implemented only at the server’s end. It follows that the servers both process the information and store it afterwards.
4) Highly Reliable: Business organizations can expect continuous service for long durations, since thin clients can have a lifespan of more than five years. Because these units are built as solid-state devices, there is less impact from wear and tear through constant use.
5) Space Savings: The small dimensions of a thin client leave a better workplace, with more space for normal working activities.
Figure 2: An HP t420 Thin Client
Of course, thin clients have some downsides, such as the fact that they always have to be connected and that a powerful central host machine is needed. But for companies that have to bear the expense of setting up an IT infrastructure, thin clients could be a real revolution.
Carr, N. G. (2008). The big switch: Rewiring the world, from Edison to Google. WW Norton & Company.
Despite the fact that hardly any average customer has ever tried it in person, everybody knows what Samsung’s Gear VR is. In theory, it is defined as a portable, wearable device for displaying virtual reality; in practice … just another Oculus Rift.
We have seen pictures and read articles about this kind of product since 2010, but we are far from integrating these devices into our lives. In addition, when we think of possible applications, the only thing that comes to mind is the gaming industry.
In other words, right now they are complex and expensive toys.
Audi, the famous automobile company, is not of this opinion. The German firm has found an innovative way to use the Samsung Gear VR to enhance customer engagement. In simple words: a great marketing expedient. To support the launch of the new TT, Audi has created ‘the first ever digital car showroom’ (these are the exact words used by Raju Sailopala, head of Sales at Audi City London) and has provided all its 115 Audi centres with the Samsung visors.
Customers can now choose a model, customize it and see it in first person in a matter of seconds. Audi has also recorded a test drive, and the customer can now sit as if in the passenger seat and enjoy the experience of a test drive.
After this first success, another member of the Volkswagen Group has embraced this marketing strategy and offered, at the 2015 Geneva Motor Show, a virtual driving experience in the amazing new Lamborghini Huracán LP 610.
Audi will surely leverage this new technology to open new stores in great metropolises and capitals, where space is at a premium and there are no competitors, but will it dare to abandon the traditional dealerships?
More than just looking at the stars, NASA is also being very down to earth.
Recently, NASA completed a few tests of its new Traffic Aware Planner (TAP) application aboard a Piaggio P.180 Avanti pusher prop, and will continue with a formal test program in cooperation with Alaska Airlines and Virgin America over the next three years (Lynch, 2015).
TAP connects to the aircraft’s avionics, which report its current position, altitude, flight route and other real-time information specifying the aircraft’s situation. TAP then automatically searches through a variety of route or altitude changes that could save flight time or reduce fuel costs and carbon emissions, and sends the detailed analysis to the pilots.
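The search TAP performs can be sketched as a simple “generate candidates, score them, keep the best” loop. Everything below — the candidate list, the numbers, the scoring weight — is invented for illustration; NASA’s actual optimizer models winds, traffic and emissions in far more detail:

```python
# Toy sketch of TAP-style route optimisation: generate candidate changes,
# score each by estimated time and fuel savings, and propose the best one.
# All names and numbers are hypothetical.
from dataclasses import dataclass

@dataclass
class Candidate:
    description: str
    time_saved_min: float   # estimated flight-time saving (negative = loss)
    fuel_saved_kg: float    # estimated fuel saving

def score(c: Candidate, fuel_weight: float = 0.5) -> float:
    # Simple weighted sum of the two savings; a real planner is far richer.
    return c.time_saved_min + fuel_weight * c.fuel_saved_kg

candidates = [
    Candidate("climb to FL390", 1.5, 90.0),
    Candidate("direct to waypoint ABC", 4.0, 120.0),
    Candidate("deviate south of weather", -2.0, 40.0),
]

best = max(candidates, key=score)
print(f"Propose to pilots: {best.description}")  # direct to waypoint ABC
```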
Moreover, connecting TAP to an ADS-B receiver enables the application to sweep the signals of nearby air traffic in order to avoid potential conflicts in proposed flight route changes, making it easier for the controller at the monitoring station to approve a pilot’s route change request (Unknown, 2015). By doing so, it improves the efficiency of the exchange between pilots and controllers.
In addition, once the aircraft cockpit has an Internet connection, TAP can obtain real-time information on weather conditions, wind speed, temperature, visibility, cloud cover and more to increase flight efficiency.
TAP can be operated on a tablet and is easily implemented. Essentially, it requires no changes to the roles or responsibilities of pilots and controllers; therefore, it can be put on board an aircraft and produce benefits in a timely manner.
William Cotton, one of the test pilots, said that TAP saved four minutes off the flight time after he requested a route change, which was granted by air traffic control (Lynch, 2015). Cutting four minutes may not seem like a big deal; however, with thousands of airplanes travelling each day, there would be an enormous amount of savings on fossil fuel and an incredible decrease in carbon emissions. Here is an example: shaving off four minutes reduces the fuel costs for a Boeing 767 by around 330 dollars; if half of the aircraft taking off from Atlanta airport were able to save four minutes of flight time each day, cost savings would reach around 165,000 dollars (Ren, 2015).
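The arithmetic in Ren’s example can be checked in a few lines. Note that the 500-flight count is the number implied by dividing the quoted totals; it is not stated explicitly in the source:

```python
# Verify the quoted fuel-saving arithmetic for the Boeing 767 example.
SAVING_PER_FLIGHT_USD = 330  # ~cost of 4 minutes of 767 flight time (Ren, 2015)
FLIGHTS_PER_DAY = 500        # implied by 165,000 / 330; an assumption here

daily_saving = SAVING_PER_FLIGHT_USD * FLIGHTS_PER_DAY
print(f"Daily saving: ${daily_saving:,}")         # $165,000
print(f"Yearly saving: ${daily_saving * 365:,}")  # $60,225,000
```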
Researchers from NASA anticipate that this application will be a big step forward for the national airspace, since TAP does a really nice job of reducing delays, increasing flight efficiency, protecting the environment and offering passengers a better flight experience.
Unknown. (2015, 9 23). NASA’s Traffic Aware Planner to Increase Efficiency of Airliners. Retrieved from World Industrial Reporter: http://worldindustrialreporter.com/nasas-traffic-aware-planner-to-increase-efficiency-of-airliners/
Lynch, K. (2015, 9 23). NASA Tests New Traffic Planning App Aboard Avanti. Retrieved from AIN online: https://www.ainonline.com/aviation-news/business-aviation/2015-09-23/nasa-tests-new-traffic-planning-app-aboard-avanti
Ren, Z. (2015, 9 24). NASA developed an app for helping pilots find optimal flight route. Retrieved from Leifeng: http://www.leiphone.com/news/201509/IdoiTYvPCPDnqhFR.html
I recently found a very interesting app that, I believe, has not yet received the attention from the public that its potential deserves. This is why I decided to share it here, and I hope it will bring some joy and excitement to many readers of this post.
Spotlight Stories from Google, available for both iOS and Android devices, is an application that allows you to watch movies in 360-degree format on your smartphone or tablet. Given the limitations of human vision, you are, of course, not able to see all 360 degrees at one instant in time. However, as you move your device sideways, up or down, the image changes as if you were moving your eyes in the respective direction. So far the app offers four different short animated movies, ranging from a windy day in the life of a frog to a very futuristic chase with an alien.
This kind of movie experience, as you can imagine, brings a couple of challenges for the directors. One of them is how to make sure the spectator is always looking in the right direction so as not to miss any of the main action. In the four movies released so far, this potential problem is solved with 360-degree acoustic adaptation. To put this in simpler words: as you move your device away from the view of the main action, the music and conversations in the movie diminish in volume. As your brain registers this, you “automatically” adjust your view.
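The acoustic trick can be sketched with basic trigonometry: attenuate each sound source by the angle between the viewer’s gaze and that source. This is my own illustration of the idea, not Google’s implementation:

```python
# Sketch of 360-degree acoustic adaptation: the further the viewer's gaze
# is from a sound source, the quieter that source plays.
import math

def gain(view_deg: float, source_deg: float) -> float:
    """Volume multiplier in [0, 1] based on angular distance to the source."""
    diff = abs(view_deg - source_deg) % 360
    diff = min(diff, 360 - diff)  # shortest angular distance
    # Full volume head-on, fading to silence beyond 90 degrees off-axis.
    return max(0.0, math.cos(math.radians(diff)))

print(gain(0, 0))            # 1.0 -> looking straight at the action
print(round(gain(0, 60), 2)) # 0.5 -> action is off to the side
print(gain(0, 180))          # 0.0 -> action is behind you
```

A viewer who hears the dialogue fading will naturally turn back toward full volume, which is exactly the steering effect described above.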
Although both the app and the movies are still free at the time of writing, the notion of “free for a limited time” gives us a hint that Google may be planning to sell movies through this application in the future (Perez, 2015). I therefore invite you to try the app out while it is still free and give me your opinion on it in the comments below.
Perez, 2015. ‘Google Brings Its 360-Degree Movies App, Spotlight Stories, To iOS’, http://techcrunch.com/, last visited: 06 October 2015.
There are thousands of apps around, for multiple platforms (iOS or Android) and in multiple browsers. You probably use them on many devices: your phone, tablet or laptop. But all those applications have very limited functionality on their own. Only by communicating with their users, connecting with each other and swapping all kinds of information do they become powerful.
And that’s where APIs come in. API stands for Application Programming Interface and describes the information and rules by which software programs interact with each other.
The traditional way of development, focusing on web frameworks (e.g. Microsoft .NET, Ruby on Rails, PHP), can require costly integration into other software when not set up properly. Adaptation to special needs can easily amount to a project in the middle five figures.
An API-centric piece of software executes most or all of its functionality through API calls. So why is this important?
With API-centric design, the core function of a piece of software (for example the Twitter stream of new Tweets) is built separately from the way a user accesses it (in our example, Twitter can be accessed through a browser, an iOS app on an iPhone or iPad, Android devices, and so on). There is only one core product running in the back-end, and then many different customized front-end ways of accessing it. And all the communication between those parts happens over? You guessed it: APIs!
No more changing and tweaking the core product because of a display error on a Windows phone. You just handle that in the Windows Phone front-end client.
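In code, API-centric design just means the core logic never knows which front-end is calling it. A minimal sketch, with made-up function names (a real system would expose the core over HTTP/JSON rather than direct function calls):

```python
# One core service, many thin front-ends: the essence of API-centric design.
# Names are illustrative only.

def core_get_stream(user: str) -> list[dict]:
    """The single back-end 'core product': returns raw post data."""
    return [{"author": user, "text": "Hello, API-centric world!"}]

def web_frontend(user: str) -> str:
    """Browser client: renders the same core data as HTML."""
    return "".join(f"<p>{t['author']}: {t['text']}</p>" for t in core_get_stream(user))

def mobile_frontend(user: str) -> str:
    """Phone client: renders the same core data as plain text."""
    return "\n".join(f"{t['author']}: {t['text']}" for t in core_get_stream(user))

# A display bug in one client is fixed in that client alone; the core is untouched.
print(web_frontend("alice"))
print(mobile_frontend("alice"))
```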
Bah…. that was a lot of techie talk. So what?! Well that brings us to our next big thing:
The Internet of Things
There are estimates that by 2020 there will be more than 50 billion connected devices. That’s a lot! And it will shift who and what communicates over the Internet. Today, people communicate with people, or people communicate with machines and systems. But in the age of the Internet of Things, systems will mostly communicate directly with systems. And they don’t care about pretty graphical interfaces on some gadget with a touch screen. For those systems to work, you need solid APIs connecting many back-ends quickly and reliably. And what would be more suitable for this task than software created through API-centric design?
Oracle recently released an API management tool. So did IBM and Intel. These big corporations are taking these steps to be well prepared for what is about to come: the Internet of Things. It’s gonna be a paradigm shift.
But Where is the Money?
APIs aren’t new, and there are a lot of them: the Programmable Web database lists more than 14,000 registered APIs. But with the emergence of mobile and the Internet of Things, they’re in the spotlight again. API-centric software enables microservices that fit a specific need and solve a well-defined problem. Other programs can build upon existing APIs, using their functionality to expand and build their own. This layered structure can help automate tedious tasks by integrating and arranging the right APIs. There are many offerings already that allow fast creation of API-based back-ends (e.g. Treeline or Stamplay). APIs therefore form a solid foundation others can build upon. Google has been doing this for a while already and offers a ton of APIs for others to use (e.g. Google Maps). But if you, and especially your users, call them regularly, you have to pay for them. And they’re not cheap.
This example brings us to our first business model with APIs: if you’re providing some service that is of value to others, you can charge every time a user or program calls your API and uses its functionality. Even if it’s just a couple of cents per call, if your API gets used a thousand times a day, that’s steady income.
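Pay-per-call billing is simple to sketch: meter every call, then multiply by a unit price. The function name and the 0.2-cent price below are invented for illustration:

```python
# Minimal pay-per-call metering: wrap the API function, count calls, bill later.
from functools import wraps

PRICE_PER_CALL_CENTS = 0.2  # arbitrary example price
call_counts: dict[str, int] = {}

def metered(fn):
    """Decorator that counts how often an API function is invoked."""
    @wraps(fn)
    def wrapper(*args, **kwargs):
        call_counts[fn.__name__] = call_counts.get(fn.__name__, 0) + 1
        return fn(*args, **kwargs)
    return wrapper

@metered
def geocode(address: str) -> tuple[float, float]:
    return (0.0, 0.0)  # stand-in for the real service

for _ in range(1000):
    geocode("Main Street 1")

bill_cents = call_counts["geocode"] * PRICE_PER_CALL_CENTS
print(f"Invoice: {bill_cents / 100:.2f} dollars")  # 2.00 dollars
```

Real API platforms do the same thing behind an API-key check, usually with tiered pricing and a free quota.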
Another business case is to offer your API for free and encourage other developers to build upon it. Through referrals from that software you then generate additional sales. Uber does this with success: by offering their API for free, they encourage developers to build upon their core product. If someone signs up for Uber through another program that uses the Uber API, Uber pays the developer who built the new product a commission of $5-10.
There will be many more business models emerging around APIs, especially connected to the Internet of Things. The paradigm shift opens up new business opportunities ready to be exploited.
What business models including APIs do you see? I’m very interested in reading about them, so please leave a comment!
It’s the year 2030 and you are walking with a friend through a new city. You see a cosy little cafe and both of you decide to enter. As soon as you enter, the hostess says: “Hello Mr/Ms ‘YourName’, we have a table near the back of our cafe, as set in your preferences.” When you sit down, the hostess asks: “Would you like to order a cappuccino, like last week, or do you want something else this time?” You order a cappuccino, and once seated you tap on the table to view the menu on it. You get a list of items recommended according to your preferences. You decide to order a tuna salad, like always.
This future event with your friend going into a cafe is pure fiction; however, the knowledge the cafe has may not be. How is it possible that this café knew that you were in the neighbourhood, and how did it know your favourite and preferred drinks and food? The answer: “Smart Dust”.
Smart Dust consists of tiny microelectromechanical systems (MEMS) that can detect, for example, vibrations, humidity, temperature, light, movement, magnetism, and chemicals. These tiny devices, around 2 mm each, work together as a system to transfer data to each other. Each device has a small “router” in it to send and receive information, with a wireless range of at most 10 meters. Due to this small range, many tiny devices need to be close to each other to transfer data on a larger scale. Their energy source is solar: each carries a small solar cell and a small battery.
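Because each mote reaches only about 10 meters, data crosses a larger area by hopping from mote to mote. The minimum hop count is easy to estimate; the distances below are my own examples:

```python
# With a ~10 m radio range, how many mote-to-mote hops does a reading need?
import math

RANGE_M = 10  # maximum wireless range of a single mote, per the post

def min_hops(distance_m: float) -> int:
    """Smallest number of hops needed to cover a given distance."""
    return math.ceil(distance_m / RANGE_M)

print(min_hops(25))      # 3 hops across a 25 m warehouse aisle
print(min_hops(21_000))  # 2100 hops across Pister's 21 km radius
```

This is why Smart Dust only works when the motes are scattered densely: a single gap wider than the radio range breaks the chain.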
The idea comes from Kristofer Pister, a professor at Berkeley. When Pister presented the idea to his colleagues, his concept attracted the US military, and Pister received funds to further his work. The first test was in 2001, when six tiny devices (MEMS) were dropped in a field to detect a military vehicle. The test was successful, and they even managed to capture the course and speed of the vehicle. Last year, a team of Michigan students successfully embedded solar cells in MEMS to drastically extend their life.
There are many business applications for Smart Dust. Pister managed to gather information about the weather in San Francisco within a radius of 21 km using Smart Dust. Defence-related applications are also possible, such as battlefield surveillance and transportation tracking. Transportation tracking can also be used to control inventories; in that case, the tiny Smart Dust devices would take over from RFID technology. You can also think of product quality control: some products need to be stored under certain conditions, and Smart Dust makes it easy to monitor temperature, humidity, vibrations, etc. There are more business applications you can think of, such as virtual keyboards, smart offices and so on.
The main objective for the researchers is to extend the life of the devices even further. When companies start to produce Smart Dust, the variable cost of one device will be extremely low. The machines to produce MEMS will be costly at the start, but once this technology becomes feasible for companies, it will be implemented on a large scale. Researchers urge caution when implementing this technology because of the environmental impact; no one wants to live in a city with billions of devices floating in the air. Pister did inhale a device (MEMS) and said that it is like inhaling a fly: you will cough it up.
Another thing researchers urge caution about is privacy. Smart Dust devices can measure many things, and researchers are still trying to implement new kinds of sensors in the devices. It is also possible that Smart Dust will contain microphones to listen in on conversations. Let’s go back to the introduction: it is possible that your clothes, your identity card and maybe you yourself will carry Smart Dust that holds information about you and communicates it to businesses. Whereas cameras are easy to debate because they are visible, Smart Dust is not. People cannot see Smart Dust being there and don’t know whether they are being monitored and for what purposes. Another problem is that information gathered by Smart Dust could be stolen by hackers. You can also think of Smart Dust being used to spy on people or businesses: someone could scatter some devices in a house or conference room to obtain classified information.
Smart Dust is a technology with lots of potential, and that’s why it entered Gartner’s hype cycle. It will take some more years to make this technology feasible for the market. Meanwhile, the discussion about how far the monitoring of people can go with current technologies will go on, and it will intensify if Smart Dust is implemented.
Kevin Schaap (358985)
M. Kahn, R. H. Katz and K. S. J. Pister (1999) “Mobile Networking for Smart Dust”, ACM/IEEE Intl. Conf. on Mobile Computing and Networking, Seattle, WA, August 17-19, 1999
S. J. Pister, J. M. Kahn and B. E. Boser, (1999) “Smart Dust: Wireless Networks of Millimeter-Scale Sensor Nodes”, Highlight Article in 1999 Electronics Research Laboratory Research Summary.
Hsu, J. M. Kahn, and K. S. J. Pister, (1999) “Wireless Communications for Smart Dust”, Electronics Research Laboratory Technical Memorandum Number M98/2, February, 1998.
On 9 September, Tim Cook (CEO of Apple) said: ‘the future of television is apps’ (Apple, 2015). Not everyone will agree, but it is almost certain that this industry is on the brink of a huge transformation. The only challenge left for television is the input problem, where people primarily pay for traditional, linear, pay-television services and, besides that, own a secondary device (e.g. DVD player, Apple TV) for additional content (Yarow, 2015). However, it is unclear if or when the ‘secondary’ service can become a substitute for the conservative primary services. Some predictions state that these new devices (e.g. Apple TV) could turn the television into a dumb piece of glass (Yarow, 2015), since many companies are betting that the largest screen in our homes is going to run an operating system like the ones that power our computers and phones (Hempel, 2011).
Many things have changed since devices became connected to the Internet. Millions of independent developers have got the chance to create great applications for multiple devices. The television is next, and many start-ups will look for opportunities to offer video experiences via applications on products such as the Apple TV (Yarow, 2015). Besides that, big companies are forced to adjust their content as well. For example, Jeff Bewkes (CEO of Time Warner) spoke about the company’s plan to move its vast catalogue of movies and TV shows onto the Web (Lyon, 2011). In addition, products like the Apple TV provide opportunities for all kinds of businesses (e.g. Netflix, HBO) to broadcast their content in a new way on the biggest screen in the house.
To convince the consumer, the only way to win digitally is to keep it simple (Lyon, 2011). If the new platform works, the prediction is that the traditional, linear, pay-television services will become secondary, because people will start to wonder why they are wasting money on this conservative service (Yarow, 2015). To make this transformation from traditional television to the Internet happen, some things need to be taken into consideration. In particular, content expectancy, social influence, facilitating conditions, hedonic motivation and habit have significant effects on behavioral intention towards (mobile) television (Wong et al., 2014). Additionally, Wong et al. (2014) claim that gender and other demographics tend to have a moderating effect on this television behavior. The question remains whether online television is better at serving the needs of users than the traditional television service. And will suppliers be able to adopt new technologies to capture value? Research implies that this adoption is needed. For example, viewer engagement is actually greater when social media is involved (Pynta et al., 2014), and new social possibilities come along with Internet on television.
From the supplier side, the web has the power to make media distribution cheaper and more efficient (Hempel, 2011). On the other hand, the current business model relies heavily on licensing revenue: suppliers are able to capture value in each country because licensing is arranged per geographic region. The web is breaking this business model. Ad rates are much lower on the Internet; networks cannot collect their fees; cable companies fear losing our business; and someone has to pay for all the bandwidth we are using to stream our shows (Hempel, 2011). This means that the suppliers must look for new opportunities to generate revenue. Internet on television brings not only opportunities but also big challenges for the current participants, if they want to stay alive.
Vincent Laduc (417658vl)
Apple, 2015. Apple Special Events. [Online] Available at: http://www.apple.com/apple-events/ [Accessed 1 October 2015].
Hempel, J., 2011. What the hell is going on with TV?. [Online] Available at: http://fortune.com/2011/01/03/what-the-hell-is-going-on-with-tv/ [Accessed 1 October 2015].
Lyon, D.W., 2011. JEFF BEWKES AND THE APPLE TRAP. B-School Connection.
Pynta, P. et al., 2014. The power of social television: Can social media build viewer engagement? A new approach to brain imaging of viewer immersion. Journal of Advertising Research, pp.71-80.
Wong, C.H., Tan, G.W.H., Loke, S.P. & Ooi, K.B., 2014. Mobile TV: A new form of entertainment? Industrial Management and Data Systems, 5 August. pp.1050-67.
Yarow, J., 2015. The new Apple TV will blow up the TV industry. [Online] Available at: http://uk.businessinsider.com/the-new-apple-tv-is-going-to-blow-up-the-tv-industry-2015-9?r=US&IR=T [Accessed 1 October 2015].
On 9 September 2015, Apple presented the iPhone 6S, claiming: ‘The only thing that has changed is everything’ (Apple, 2015). On the other hand, Samsung claims that ’The next big thing is (already) here’ with its new smartphones (Samsung, 2015). Since I need to buy a new phone very soon, I am starting to wonder how different these products actually are.
It must be acknowledged that these companies do not make these phones by themselves. For example, Apple has over 200 suppliers for its products (Apple Inc., 2015). Besides that, Samsung aims to strengthen its position as a worldwide computer chip manufacturer (ANP, 2015), which implies that it supplies components for other firms’ electronic devices (e.g. iPhones).
According to Kauffman et al. (2010), these business networks emerge because customers are more informed and therefore increasingly demand products and services tailored to their specific needs. This results in business networks that are able to break up their value chain into independent modules (Kauffman et al., 2010) and thereby add more value to the final product (Ketchen Jr. et al., 2004). One reason to participate in a business network is that it accomplishes more as a whole than the value its individual parts can capture separately (Kauffman et al., 2010). Another reason, especially in this technology-driven industry, is that business networks tend to be more innovative (Möller & Rajala, 2007; Gnyawali & Park, 2011). Therefore, all these firms help to grow their entire business network (Gnyawali & Park, 2011), to motivate more external parties to join the network (Gallaugher, 2014) and to further improve the competitive advantage of their final product (Ketchen Jr. et al., 2004).
The uniqueness of Apple’s business network is that a direct competitor (e.g. Samsung) is a supplier for its products (e.g. the iPhone). Scientific literature calls this phenomenon co-opetition: end-product competitors contributing to each other’s value chains. As mentioned, one reason to embrace co-opetition is more innovation (Gnyawali & Park, 2011), but this still does not clarify why, for example, Samsung might cannibalize its own products. An explanation is that co-opetition is only beneficial when businesses are still able to differentiate with their value-adding activities (Ketchen Jr. et al., 2004). Therefore, if end-product competition grows, businesses try to further protect their differentiating activities (Ritala & Hurmelinna-Laukkanen, 2009). A good example are the patent wars Apple and Samsung have been fighting for the past few years, blaming each other for copying each other’s innovations in order to protect their differentiating activities. However, co-opetition will still be beneficial for both parties, since another observation is that it results in less vertical integration and more diversification (Gnyawali & Park, 2011). For example, this ensures that Samsung can further grow as a chip manufacturer without interference from Apple. Additionally, the suppliers of companies such as Apple benefit from the demand they generate (Zhang & Frazier, 2011). The question about co-opetition should therefore be: do we as a business want to capture value from competitors, or establish a greater competitive advantage? (Park et al., 2013)
To be honest, I really admire the research done on this phenomenon called co-opetition. However, I still cannot resolve my own personal dilemma, so I would like to ask you: which phone should I buy? After this study, I can no longer tell the difference between the products of Apple and Samsung.
Vincent Laduc (417658vl)
Anderson, A., Park, J. & Jack, S., 2007. Entrepreneurial social capital: Conceptualizing social capital in new high-tech firms. International Small Business Journal, 25, pp.245-72.
Gallaugher, J., 2014. Information Systems: A Manager’s Guide to Harnessing Technology. Saylor.
ANP, 2015. Samsung wil verder groeien als toeleverancier. [Online] Available at: http://www.nu.nl/mobiel/4132940/samsung-wil-verder-groeien-als-toeleverancier.html [Accessed 25 September 2015].
Apple Inc., 2015. Supplier Responsibility. [Online] Available at: https://www.apple.com/supplier-responsibility/our-suppliers/ [Accessed 23 September 2015].
Apple, 2015. iPhone. [Online] Available at: http://www.apple.com/iphone/ [Accessed 1 October 2015].
Gnyawali, D.R. & Park, B.-J., 2011. Co-opetition between giants: Collaboration with competitors for technological innovation. Research Policy, 40(1), pp.650-63.
Greve, H.R., Baum, J.A.C., Mitsuhashi, H. & Rowley, T., 2009. Built to Last but Falling Apart: Cohesion, Friction and Withdrawal from Interfirm Alliances.
Hitt, L.M., 1999. IT and firm boundaries: Evidence from panel data. Information, 10(2), pp.134–49.
Kauffman, R.J., Li, T. & van Heck, E., 2010. Business Network-Based Value Creation in Electronic Commerce. International Journal of Electronic Commerce, 15(1), pp.113–43.
Ketchen Jr., D.J., Snow, C.C. & Hoover, V.L., 2004. Research on Competitive Dynamics: Recent Accomplishments and Future Challenges. Journal of Management, 30(6), pp.779-804.
Möller, K. & Rajala, A., 2007. Rise of strategic nets — New modes of value creation. Industrial Marketing Management, 36(7), pp.895-908.
Park, B.-J.R., Srivastava, M.K. & Gnyawali, D.R., 2013. Walking the tight rope of coopetition: Impact of competition and cooperation intensities and balance on firm innovation performance. Industrial Marketing Management, 43, pp.210-21.
Ritala, P. & Hurmelinna-Laukkanen, P., 2009. What’s in it for me? Creating and appropriating value in innovation-related coopetition. Technovation, 29, pp.819-28.
Samsung, 2015. Homepage. [Online] Available at: http://www.samsung.com/us/ [Accessed 1 October 2015].
Zhang, J. & Frazier, G.V., 2011. Strategic alliance via co-opetition: Supply chain partnership with a competitor. Decision Support Systems, 51, pp.853-63.
Electronic Markets, Computing Power and the Quants: Volatility & High Frequency Trading
“Markets can be – and usually are – too active, and too volatile”
Joseph E. Stiglitz – Nobel Prize-winning economist
As some of you may have noticed, the oil market is currently showing wilder fluctuations at a higher frequency than before: volatility has increased. This comes after the market enjoyed relative price stability during the last few years. Of course, this is partly due to U.S. shale oil production, high supply and lower demand in the aftermath of the financial crisis, and growing uncertainty about both demand and supply. However, another factor affecting volatility is the increased usage of trading indicators in combination with changes in trading practices: a growing number of players in the financial markets use algorithmic and high-frequency trading (HFT) practices.
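The volatility discussed here can be quantified. Below is a minimal sketch (my own illustration, not taken from the literature) of one common measure: the annualized standard deviation of daily log returns. The 252-trading-day convention and the sample price series are assumptions made purely for demonstration.

```python
import math
import statistics

def realized_volatility(prices, periods_per_year=252):
    """Annualized standard deviation of daily log returns."""
    log_returns = [math.log(today / yesterday)
                   for yesterday, today in zip(prices, prices[1:])]
    return statistics.stdev(log_returns) * math.sqrt(periods_per_year)

# A calm price path versus a wildly fluctuating one (made-up numbers):
calm = [100, 100.5, 100.2, 100.8, 100.4, 101.0]
wild = [100, 108, 95, 112, 90, 105]
```

On these made-up series, the fluctuating path yields a far higher volatility figure than the calm one, matching the intuition that wilder, more frequent price swings mean higher volatility.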
Like other derivatives-based markets, the crude oil market has a wide range of players, many of whom are not interested in buying physical oil. HFT traders are probably drawn towards oil futures by the market’s volatility: the greater the price swings, the greater their potential profit. HFT is not an entirely new practice, but as technology evolves it is increasingly present in today’s electronic financial markets.
These players make extensive use of computing and information technology to develop complex trading algorithms, often referred to as the “quants”. HFT firms try to gain an advantage over competitors that still rely mostly on human intelligence and reaction times. The essence of the game is to use your algobots to get the quickest market access, the fastest processing speeds, and the quickest calculations, in order to capture profits that would otherwise have been earned by someone processing market data more slowly (Salmon, 2014). At essentially the speed of light, these systems can react to market data, transmit thousands of order messages per second, automatically cancel and replace orders based on shifting market conditions, and capture price discrepancies with little human intervention (Clark & Ranjan, 2012). New trading strategies are formulated by capturing and recombining new information with large datasets and other forms of big data available to the market. The analysis performed to derive the assumed direction of the market uses a range of indicators, such as historical patterns, price behaviour, price corrections, resistance and support levels, and (moving averages of) trends and counter-trends. Aggregated in this way, these databases and the changes in their averages are usually a fairly good predictor of potential profits for HFT companies.
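To give a flavour of the kind of technical analysis described above, here is a deliberately simplified moving-average crossover signal. This is my own sketch, not any firm’s actual algorithm; real HFT systems are vastly faster and more complex, and the window lengths used here are arbitrary assumptions.

```python
def moving_average(prices, window):
    """Trailing simple moving average; None until enough data exists."""
    return [
        sum(prices[i - window + 1:i + 1]) / window if i >= window - 1 else None
        for i in range(len(prices))
    ]

def crossover_signal(prices, fast=3, slow=5):
    """'buy' when the fast MA crosses above the slow MA,
    'sell' on the opposite cross, 'hold' otherwise."""
    fast_ma = moving_average(prices, fast)
    slow_ma = moving_average(prices, slow)
    signals = []
    for i in range(1, len(prices)):
        if None in (fast_ma[i], slow_ma[i], fast_ma[i - 1], slow_ma[i - 1]):
            signals.append("hold")  # not enough history yet
        elif fast_ma[i - 1] <= slow_ma[i - 1] and fast_ma[i] > slow_ma[i]:
            signals.append("buy")
        elif fast_ma[i - 1] >= slow_ma[i - 1] and fast_ma[i] < slow_ma[i]:
            signals.append("sell")
        else:
            signals.append("hold")
    return signals
```

On a price series that turns sharply upward, the short window overtakes the long one and the sketch emits a “buy”; an HFT system applies the same basic idea, but over thousands of updates per second and many indicators at once.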
This information-technology-enabled way of trading is cheaper for those who execute it, but imposes great costs on workers and firms throughout the economy. Although quants provide a lot of liquidity, they can also alter markets by placing more emphasis on technique and by linking electronic markets with other markets (through both informational and financial links). In most cases, short-term, non-overnight strategies are used. These traders are thus in the market for quick wins and rely solely on technical analysis to predict market movements, instead of trading on physical fundamentals, human intelligence, or news inputs.
Although some studies have found no direct proof that HFT causes volatility, others have concluded that, in certain cases, HFT can transmit disruptions almost simultaneously across markets due to its high speed combined with the interconnectedness of markets (FT, 2011; Caivano, 2015). For example, Andrew Haldane, a top official at the Bank of England, said that HFT was creating systemic risks and that the electronic markets may need a ‘redesign’ in the future (Demos & Cohen, 2011). Further sophistication of “robot” trading at decreasing cost is expected to continue in the foreseeable future. This can threaten the stability of financial markets through amplified risks, undesired interactions, and unknown outcomes (FT, 2011). In addition, in a world with intensive HFT the acquisition of information will be discouraged, as the value of information about stocks and the economy retrieved through human intelligence will be much lower: robots will have done all the work before a single human is able to process and act on the information (Salmon, 2014). For those interested in the issues of HFT in more detail, I recommend the article by Felix Salmon (2014).
However, it is important to note that HFT, automated systems, and other technicalities are not the sole cause of volatility. Markets have known swift price swings for centuries. In the oil industry, for example, geopolitical risk can cause price changes, as oil is an exhaustible commodity. Human emotions, as well as terrorist actions, can also distort markets. Nowadays even incomplete information, such as tweets or Facebook posts, can cause shares to jump or plummet. As markets become faster, more information is shared, and systems can process and act on this information alone almost instantly thanks to (information) technological advancements, which in turn increases volatility. It is therefore more important than ever that there are no flaws in market data streams: the electronic markets and their information systems need enough capacity to process, control, and display all the necessary information to market players in order to avoid information asymmetries.
In my opinion, HFT is strengthened by the current state of computing technology, and cost reductions in computing power now enable the execution of highly complex algorithms in a split second. As prices go down and speed goes up, these systems will become more and more attractive, as they outperform human intelligence. This can become a problem in the future: volatility might increase, and while this volatility provides many opportunities for traders, it does not provide the stability needed by producers and consumers, who are more long-term focused.
Therefore, action will be necessary in the future to restrict, or at least reduce, HFT. One example is big data collection by regulators to monitor risk and predict future flash crashes or volatility events. Another option is the introduction of a “minimum resting period” for trading, so that traders have to hold on to their equity or trade for a pre-specified time before selling it on, reducing trading frequency and thus volatility. Widening spreads would also help, as it makes quick selling and buying more costly and thus HFT less attractive.
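To make the “minimum resting period” idea concrete, here is a hypothetical sketch of an exchange-side check; the half-second threshold is an arbitrary assumption, chosen only for illustration.

```python
MIN_RESTING_SECONDS = 0.5  # assumed threshold, purely illustrative

def may_cancel(order_placed_at, now, min_resting=MIN_RESTING_SECONDS):
    """Allow an order to be cancelled or flipped only after it has
    rested in the book for the minimum period, blunting strategies
    that place and cancel thousands of orders per second."""
    return (now - order_placed_at) >= min_resting
```

Under this rule, an order placed at t = 0 could not be cancelled at t = 0.1 but could at t = 1.0, which directly caps the order-message frequency an HFT strategy can sustain.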
Meanwhile, the financial markets’ watchdogs currently have difficulty regulating automated trading, and some HFT firms (Jump Trading, Tower Research Capital, DRW) have enjoyed enormous profits from their trading strategies. During the market turmoil of August this year, for example, a couple of HFT firms earned a lot of money (Hope, 2015). Due to these successes, new players are entering the market and competition is growing. As speed is essential (even milliseconds matter), HFT firms try to place their servers physically close to the exchanges (such as the NYSE) to increase their advantage. The HFT firms are expected to stay in the market, ultimately resulting in more price volatility (Hope, 2015).
What do you think: how far should we let our technology intervene in the financial markets? Should we allow algobots and similar automated trading systems to influence our financial markets because they can perform the human job faster, fact-based, and at a lower cost? Or should the financial markets always be based on human intelligence, which might ultimately be better for the economy as a whole and also provide a richer knowledge base about the real-world economy (as this information remains valuable and numbers do not always tell the whole story)?
In case you are interested in this dilemma, I can also recommend reading Stiglitz’s 2014 speech at the Federal Reserve Bank of Atlanta.
Author: Glenn de Jong, 357570gj
How do you stay up-to-date on what is going on in Tech world?
Living in a society in which nothing goes unseen, one should be selective about what one chooses to see; otherwise you might be overwhelmed by all kinds of impressions that are not at all relevant to your interests. Traditionally, much of our knowledge came in through the local newspaper, a type of media that limits your control over what you learn. Nowadays, news is a rapidly developing industry. News can be followed through traditional channels such as television or the aforementioned newspaper, but recently multiple new channels have been added to the possibilities.
A good example of this is the Dutch company Blendle, which allows you to buy only the specific articles from newspapers and magazines that you are interested in. Another is the phone application Appy Geek, which lets you receive news about very specific topics within information technology. The web blog we are using now is a good example as well. To this, numerous other websites, applications, and social media pages can be added that provide you with the news you are interested in.
But how do you find the part of the internet that perfectly matches your interests? I would say helping each other out is a great way to start. I am sure most BIM students would like to be aware of what is going on in the world of technology. Some of us might be very familiar with the ins and outs of Information Technology and Strategy; some might not…
Therefore, my request to all of you is to SHARE with us the media you follow to keep up to date on news that might be interesting for the whole group of BIM students: Facebook pages, LinkedIn groups, news applications or websites, YouTube channels, journals, television programs, and anything else you can possibly think of. A motivation for why and how your suggestion is interesting for all of us is of course always welcome. I am very much looking forward to your comments!
Author: Colin van Lieshout