Lepak Smith And Taylor 2007


02 Nov 2017


Mol et al. (2005) conceptualize a passage through the multiple value-adding stages, which eventually accumulates into an overall value proposition. In their model, each stage contributes a particular proportion of the overall value created, but value capture by each contributor depends on the participants’ relative bargaining power (Brandenburger and Nalebuff 1997; Brandenburger and Stuart 1996; Porter 1985; Teece 1986). Of course, this process may not always be as sequential as this description suggests, but no viable product or service will materialize unless a collection of these value-contributing parties can settle on a division of spoils that keeps each party willing to participate.

Firms cannot create value on their own. They are embedded in a constant flow of inbound and outbound activities, which are the main moments of value exchange. Porter (1985), in his value chain theory, sees activities as linear chains, but many scholars argue that linear logic can no longer explain all the forms and structures of value activities (Christensen & Rosenbloom, 1995; Duncan & Moriarty, 1997).

In reality, among firms and their customers and suppliers, there are not only one-way value chains running from suppliers to customers but also reverse or transverse value chains: from customers to the focal firm, from customers to suppliers, and from the focal firm to suppliers. These complicated linkages of value chains constitute a "value network" (Bovet & Martha, 2000). All value creation and value capture activities happen within such a network (Hamel, 2000).

A value network has been defined as a cluster of economic actors that collaborate to deliver value to the end consumer and in which each actor takes some responsibility for the success or failure of the network (Barnes 2002; Bitran et al. 2003; Pigneur 2000; Sabat 2002). Basole and Rouse (2008) distinguish five types of actors: consumers, service providers, tier 1 and tier 2 enablers, and auxiliary enablers.

Since value creation is the core element of the business model, and since value creation happens within a network structure, investigating the business model from a value network perspective is both necessary and possible. In fact, some scholars have already tried to introduce the value network view into the field of business models and have suggested that the business model has a value network structure (Christensen & Rosenbloom, 1995; Chesbrough & Rosenbloom, 2002; Voelpel et al., 2004).

In network research, a network is usually regarded as a set of social ties or relations linking actors. The existing literature on value networks takes a similar view. For instance, Allee (2000) states that a value network generates economic value through complex dynamic exchanges between one or more enterprises, customers, suppliers, strategic partners and the community. According to Wu and Zhang (2009), the actors in a value network include the focal firm and its affiliated companies, as well as all the organizations and individuals that affect the focal firm's value creation activities. The relations among actors include tangible flows, such as products, services and profit, and intangible ones, such as knowledge, emotion and influence. These flows are value flows formed by value activities such as value creation, value delivery and value capture. Actors are linked by these value flows and so constitute the network structure.

Network research usually analyzes relation-level attributes, such as content, strength, and directness or indirectness, and structure-level attributes, such as density, scale and centrality. Relation content and the form of linkage are the most crucial attributes in value network research. Relation content refers to what the value flow is, e.g. products, information or cash. The form of linkage refers to the mode of network governance, including equity arrangements and non-equity coalitions, and can be classified along dimensions such as frequency, duration, degree of trust, integration and control. These elements, including nodes, relations and structure and their attributes, constitute the basic objects of value network analysis.

From the value network perspective, Wu and Zhang (2009) proposed defining the business model as "the system connecting internal and external actors by value flows to create, deliver and capture value".

Based on this definition, the components of the business model comprise three levels: value actors as the network nodes, value flows as the network relations, and part of or the whole value network as the network structure. The key attribute of value actors is the division of labor, e.g. basic research, product design, component manufacturing, integration and distribution. Key attributes of value flows include the linked actors, the content of value and the form of the relation. The combinations of value actors and value flows determine the structural features of the value network, such as scale, density and centrality.
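The node/flow/structure decomposition above can be sketched as a small directed graph. The following is a minimal illustrative sketch, not drawn from the cited literature: the actor and flow names are invented for the example, and density and out-degree centrality are computed in the standard directed-graph way.

```python
from collections import defaultdict

class ValueNetwork:
    """Value actors as nodes, value flows as directed, labeled relations."""

    def __init__(self):
        self.flows = defaultdict(list)  # source actor -> [(target actor, flow content)]
        self.actors = set()

    def add_flow(self, source, target, content):
        # A value flow may be tangible (products, cash) or intangible (knowledge).
        self.actors.update((source, target))
        self.flows[source].append((target, content))

    def density(self):
        # Structure-level attribute: realized flows over possible ordered pairs.
        n = len(self.actors)
        e = sum(len(targets) for targets in self.flows.values())
        return e / (n * (n - 1)) if n > 1 else 0.0

    def out_centrality(self, actor):
        # Fraction of the other actors this actor sends at least one flow to.
        n = len(self.actors)
        targets = {t for t, _ in self.flows[actor]}
        return len(targets) / (n - 1) if n > 1 else 0.0

# Hypothetical three-actor network, including a reverse (customer-to-supplier) flow.
net = ValueNetwork()
net.add_flow("supplier", "focal_firm", "components")
net.add_flow("focal_firm", "customer", "product")
net.add_flow("customer", "focal_firm", "cash")
net.add_flow("customer", "supplier", "feedback")
```

With three actors and four flows, the directed density is 4/6; the customer, which sends flows to both other actors, has an out-centrality of 1.0, reflecting the transverse linkages described above.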

In line with the thinking of Normann and Ramírez (1994), we agree that every company occupies a position on a value chain, with input suppliers upstream and other businesses or final consumers downstream. The company adds value to the inputs before passing them down to the next actor. In such a volatile and competitive environment, strategy, the art of positioning a company in the right place on the value chain and defining the right value-adding activities, is no longer a matter of positioning a fixed set of activities along a value chain; companies have to find ways to reinvent value. Under the impact of IT and the resulting globalization of markets and production, new methods of combining activities into offerings are producing new opportunities for value creation.

One implication of this phenomenon is that the distinction between physical products and intangible services is currently breaking down. Very few offerings (the way all products and services are grounded in activity) can be clearly defined as one or the other anymore. This is a change in the entire value-creating system, not merely in technology or even in the transaction itself: more opportunities for value creation are packed into any particular offering.

The new logic of value presents companies with three strategic implications. First, value occurs not in sequential chains but in complex constellations, shifting the business's goals towards the customers' involvement in the value creation process. Second, as potential offerings become more complex and varied, so do the relationships necessary to produce them, and the company's principal task becomes the reconfiguration of its relationships and business systems. Lastly, if the key to creating value is to co-produce offerings that mobilize customers, then the only true source of competitive advantage is the ability to conceive the entire value-creating system and make it work.

In line with this perspective, this section has examined the underlying logics of value in a network of companies; more specifically, the dynamics of value creation and value capture have to be analyzed as they are influenced by the open innovation model, which impacts companies' behavior in the network.

Innovation, "the implementation of an invention taken to market" (Chesbrough 2003), is one of the key factors behind all successful organizations. Being innovative means more than having good ideas. A serious, sustained commitment to innovation implies risk and investment. But the pay-off is substantial. In contemporary, fast-changing trade environments, innovation is not just a matter of generating profitability; it can be a matter of survival (Riddle, 2000). As a result, one of the core competencies of many organizations operating in today’s competitive marketplace is an ability to successfully manage adaptation and innovation (Dooley and O'Sullivan, 2001).

In the past, internal research and development (R&D) was a valuable strategic asset, even a formidable barrier to entry by competitors in many markets. Only large corporations could compete by doing the most R&D in their respective industries (and subsequently reaping most of the profits as well). Rivals who sought to unseat those powerhouses had to ante up considerable resources to create their own labs, if they were to have any chance of succeeding (Chesbrough, 2003).

Recent research findings indicate that an increased financial investment in advanced technologies or innovation is, in itself, not sufficient. The inability to manage these innovations properly and to capture the resulting improvement and generated value effectively also contributes to the wide competitive gap between organizations and their competitors (Davis, 1989). In addition, shorter innovation cycles, the escalating costs of industrial research and development, and the dearth of resources are reasons why companies are searching for new innovation strategies. The phenomenon is reinforced by the increasing globalization of research, technologies and innovation, by new information and communication technologies, as well as by the potential of new organizational forms and business models (Gassmann and Enkel, 2004).

In fact, the leading industrial enterprises of the past have been encountering remarkably strong competition from many newcomers.

Open source is the most prominent example of the revolutionizing of the conventional innovation process: worldwide, several thousand programmers develop highly sophisticated software that competes with Microsoft's products. The open source approach is the phenomenon of co-operative software development by independent software programmers who, on demand, develop lines of code to add to the initial source code, to increase a program's applicability or to enable new applications (Gassmann and Enkel, 2004). The idea behind this approach is co-operative software creation outside firm boundaries, with the result freely available afterwards. However, the source code too has to be freely available. This principle drives the evolutionary development and improvement of the software. Famous examples of open source software development are Linux, the Apache server and Freemail.

The need of companies for a lightweight, low-cost and fully customizable operating system for their devices, without having to consume resources to develop proprietary software, allowed Android to become one of the most widely used open source operating systems. As a result of this success, it has seen additional applications in other devices, such as televisions and games consoles.

In these circumstances, a new approach, open innovation, is emerging in place of closed innovation. "Open innovation is a paradigm that assumes that firms can and should use external ideas as well as internal ideas, and internal and external paths to market, as the firms look to advance their technology" (Chesbrough, 2003).

Open innovation combines internal and external ideas into architectures and systems whose requirements are defined by a business model. The business model utilizes both external and internal ideas to create value, while defining internal mechanisms to claim some portion of that value. Open innovation assumes that internal ideas can also be taken to market through external channels, outside the current businesses of the firm, to generate additional value.

Ideas can still originate from inside the firm’s research process, but some of those ideas may seep out of the firm, either in the research stage or later in the development stage. Other leakage mechanisms include external licensing and departing employees. Ideas can also start outside the firm’s own labs and can move inside.

At root, the logic of Open innovation is based on a landscape of abundant knowledge, which must be used readily if it is to provide value to the company that created it. The knowledge that a company uncovers in its research cannot be restricted to its internal pathways to market. Similarly, its internal pathways to market cannot necessarily be restricted to using the company’s internal knowledge.

Chesbrough and Schwartz (2007) define open innovation as the "(...) use of purposive inflows and outflows of knowledge to accelerate internal innovation, and expand the markets for external use of innovation, respectively".

In this open innovation model, firms commercialize external (as well as internal) ideas by deploying outside (as well as in-house) pathways to the market. Specifically, companies can commercialize internal ideas through channels outside of their current businesses in order to generate value for the organization. Some vehicles for accomplishing this include startup companies (which might be financed and staffed with some of the company's own personnel) and licensing agreements. In addition, ideas can also originate outside the firm's own labs and be brought inside for commercialization. In other words, the boundary between a firm and its surrounding environment is more porous, enabling innovation to move easily between the two. This perspective suggests some very different rules. For example, a company should no longer lock up its IP, but instead should find ways to profit from others' use of that technology through licensing agreements, joint ventures and other arrangements (Cardoso, Carvalho, and Ramos, 2009).

Enabling factors for this successful model are short design-build-test cycles (rapid change of generations), the low transaction costs of new releases, the great number of ideas enabled by the number of programmers involved, which in turn creates variation and mutation, as well as the selection criteria (survival of the fittest, the principle on which acceptance within the user community is based). Other success factors are a stable structure, which entails an accepted system architecture and language; communication, which is a combination of ideas and technical solutions; as well as a strong incentive (e.g. "Beat Microsoft" in the case of open source developers). Open innovation means that the company needs to open up its solid boundaries to let valuable knowledge flow in from the outside, in order to create opportunities for co-operative innovation processes with partners, customers and/or suppliers. It also includes the exploitation of ideas and IP in order to bring them to market faster than competitors can. Open innovation principles therefore describe how to deal best with strategic assets in order to meet market demands and company requirements. The open innovation approach is about gaining strategic flexibility in the strategy process and creating a critical momentum in innovation diffusion in order to generate customer acceptance and create industry standards (Gassmann and Enkel, 2004).

The open innovation paradigm implies co-development of partnerships, the development of a mutual working relationship (versus the traditional defensive business strategy), or the use of external sources of knowledge.

These activities can be focused on the delivery of a new product, technology, or service, on the reduction of R&D expenses, on the expansion of the innovation output and its impact, and even focused on the opening of new markets otherwise inaccessible (Chesbrough and Schwartz 2007).

As Törrö (2007) holds, the open innovation paradigm means that firms practice the sourcing of external competences and use networks as an external resource pool, which means they can benefit from global intellectual capital brokering. Becker and Zirpoli (2007) also discuss the boundaries of the firm in the open innovation process. A strong relationship between the existence of a firm's innovation strategy and its interaction with universities is surely important.

Some factors favorable to the choice of university partners (Bercovitz and Feldman 2007) are the perceived ability to fully appropriate results, owing to the partners' different objectives, which makes appropriability a partnership motivation, and the possibility of patenting results. This is changing, though, because of the growing assertion of property rights. Other factors important in choosing an innovation partner are the limited risk of competition and the central role of universities in an innovation system. Partners may have to implement a new business model, considering a common objective for the partnership (for example, to increase profitability or expand market access) (Becker and Zirpoli 2007; Lettl 2007). Becker and Zirpoli (2007) also report that, surprisingly, firms are adapting business models and value chains to open innovation demands. The R&D capabilities of both firms should be assessed (Lettl 2007) and classified into core, critical or contextual categories. Core capabilities are usually key sources of advantage and are sparingly shared; critical capabilities are those essential for a product's success; and contextual capabilities are those that are not essential to one of the partners, yet essential or core to the other, possibly smaller, partner.

Typical business model alignment problems include mis-assessment of the objectives, misjudgment of the criticality of capabilities, and lack of alignment (including lack of complementarity); these are reasons to carefully determine the degree of business model alignment and to manage the partnership with future needs in mind (Huang et al. 2002; Lettl 2007).

More recent studies in innovation have stressed the growing relevance of external sources of knowledge and creativity (Perkmann & Walsh, 2007). These studies have shown that, rather than relying only on their R&D labs, organizations should devote more effort to open innovation (Chesbrough 2006). This means that innovation can be considered the result of knowledge networks connecting several organizations, rather than a function within one organization (Coombs et al. 2003; Powell et al. 1996).

In parallel with their concern to sustain the growth of their structures, organizations are also required to rely on external sources for input to their innovation processes (Törrö, 2007). Collaboration with suppliers is already an important part of the innovation strategy of large organizations. Simultaneously, the traditional outsourcing of innovation, in which full responsibility for part of the innovation process is transferred to another organization, is growing in popularity. The trend, however, is to form extensive networks in order to reach external competencies (Cardoso, Carvalho, and Ramos, 2009).

Laursen and Salter (2006) have explored the relationship between an organization's openness to its external environment and its innovation performance. They concluded that organizations that are open to external sources of innovation, or that maintain external inquiry channels, have a higher level of innovation performance. By studying British industrial companies, the authors showed that these companies maintained systematic strategies for searching various channels, and in doing so were able to obtain ideas and resources that enabled them to identify and exploit opportunities for innovation. This study follows the work of Cohen and Levinthal (1990), who argue that the ability to exploit external knowledge is a key element of innovation performance.

With the aim of promoting the internationalization of the organization, the open innovation strategy can induce an improvement in the performance of the innovation processes. Kafouros, Buckley, Sharp, and Wang (2007) suggest that organizations need a degree of internationalization maturity, being active in various markets, to be able to innovate successfully.

The two main conditions that favor the application of an open innovation model are when knowledge is globally distributed in the market and when organizations do not have enough resources to rely only upon internal innovation (Chesbrough, 2003). Organizations implementing the closed model, by contrast, have to make substantial investments in large R&D laboratories to create the conditions for the emergence of knowledge and creativity. On the other side, as we have seen, the open innovation model praises knowledge flows across organizational boundaries, both to enable the accelerated development of internal innovations (e.g., supported by the licensing of technologies developed by others) and to expand the use of internally developed technologies that could otherwise become underused (Cardoso, Carvalho, and Ramos, 2009).

A first way of doing so is by recognizing that open innovation reflects much less a dichotomy (open versus closed) than a continuum with varying degrees of openness (Dahlander and Gann, 2010). Open innovation also encompasses various activities, e.g., inbound, outbound and coupled activities (Gassmann and Enkel, 2004), and each of these activities can be more or less open. Open innovation measurement scales should therefore reflect this multidimensional nature and allow the dimensions not to be (fully) correlated.

Second, Dahlander and Gann (2010) use the dimensions of inbound versus outbound open innovation and pecuniary versus non-pecuniary interactions. The four cells in the matrix are labeled as acquiring, sourcing, selling, and revealing.

Third, another perspective is to consider the various knowledge flows in open innovation. Lichtenthaler and Lichtenthaler (2009) distinguish between three knowledge processes (knowledge exploration, retention, and exploitation) that can be performed either internally or externally. In this way they construct a 3x2 matrix to identify six knowledge capacities. An aspect not yet fully analyzed is to what extent companies need to develop all capacities, or whether the capacities can compensate for each other, thereby enabling companies to choose a specific and differentiated innovation strategy. This is related to Dahlander and Gann's (2010) conclusion that internal R&D is a necessary complement to openness to outside ideas, but that it is less clear whether outside ideas can be a substitute for internal R&D.

Fourth, open innovation practices can also be grouped by distinguishing between process and outcome. This model links discussions in innovation management with those in IT/IS management, where much research has focused on open source software; see also von Hippel (2010). Both the process and the outcome of innovation can be closed or open, leading to a 2x2 matrix. Closed innovation reflects the situation where a proprietary innovation is developed in house (Chesbrough, 2003): both the process and the outcome are closed. In the second category, private open innovation, the outcome is closed (a proprietary innovation) but the process is opened up, either by using the input of external partners or by externally exploiting an internally developed innovation. Many well-known case studies belong to this category, such as Procter&Gamble (Huston and Sakkab, 2006). According to the second dimension, the outcome of the innovation process is either proprietary (closed) or available to others (open). Interest in this dimension has recently been growing, as it has become clear that advantageous appropriability regimes are not always equivalent to strong intellectual property protection (Pisano, 2006). Devoting scarce resources to innovation and then giving away the outcome for free seems highly irrational to economists (e.g., Lerner and Tirole, 2005; Kogut and Metiu, 2001), but in some cases it makes good economic sense (von Hippel and von Krogh, 2006). A classic example of such public innovation is standard setting, where the original innovators do not exclude others from using an innovation, in order to reap the benefits of a de facto market standard; examples include the introduction of JVC's VHS videotape in 1976 and the IBM PC in 1981. The final cell is labeled open source innovation and refers to instances where both the innovation process and the outcome are open. Open source software (e.g. Linux and the Android operating system) is the best known example of this category.
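The 2x2 process/outcome matrix above can be summarized as a small lookup. This is only an illustrative sketch of the taxonomy as described; the function name and the boolean encoding of the two dimensions are assumptions, not from the literature.

```python
def classify_innovation(process_open: bool, outcome_open: bool) -> str:
    """Map the process/outcome openness dimensions onto the four cells of the matrix."""
    if not process_open and not outcome_open:
        return "closed innovation"        # proprietary innovation developed in house
    if process_open and not outcome_open:
        return "private open innovation"  # open process, proprietary outcome (e.g. Procter&Gamble)
    if not process_open and outcome_open:
        return "public innovation"        # e.g. standard setting: VHS, the IBM PC
    return "open source innovation"       # open process and open outcome (e.g. Linux)
```

For Linux, where both the development process and the resulting source code are open, `classify_innovation(True, True)` returns "open source innovation".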

Based on an empirical study of 124 companies, Gassmann and Enkel (2004) identified three open innovation core processes: (1) the outside-in process, which enriches the organizational knowledge base by integrating suppliers, clients and other external sources of knowledge; (2) the inside-out process, which explores external markets to sell internal ideas; and (3) the coupled process, a mix of the outside-in and inside-out processes, working in partnership with other organizations.

Based on their studies, Gassmann and Enkel (2004) found that not all companies choose the same core open innovation process, or have integrated all three processes to the same degree. Each company chooses one primary process, but also integrates some elements of the others.

Deciding on the outside-in process as a company's core open innovation approach means that this company chooses to invest in co-operation with suppliers and customers and to integrate the external knowledge gained. This can be achieved by, e.g., customer and supplier integration, listening posts at innovation clusters, applying innovation across industries, buying intellectual property and investing in global knowledge creation. Opening up the internal innovation process by integrating suppliers and/or customers is not new. The literature on inter-firm collaboration in general, and on supplier relationship management in particular, repeatedly suggests that firms can benefit significantly if they are able to set up differentiated relationships with suppliers (Dyer et al., 1998; Boutellier, Wagner, 2003). If firms possess the necessary competence and supplier management capabilities, they can successfully integrate internal company resources with the critical resources of other supply chain members, such as customers or suppliers, by extending new product development activities across organizational boundaries (Fritsch, Lukas, 2001). According to Gassmann and Enkel (2004), firms should integrate suppliers and customers in order to gain the valuable sources of knowledge and competences that are needed for product development.

The "ability of a firm to recognize the value of new, external information, assimilate it, and apply it to commercial ends is critical to its innovative capabilities" (Cohen, Levinthal 1990), since many organizations lack the ability to listen to their external world and efficiently process the signals received. The efficiency of both knowledge generation and application is contingent on the concept of absorptive capacity.

Companies that choose the inside-out process as a key process focus on externalizing the company's knowledge and innovation in order to bring ideas to market faster than they could through internal development alone. Deciding to move the locus of exploitation outside the company's boundaries means generating profits by licensing IP and/or multiplying technology by transferring ideas to other companies. As already mentioned, commercializing ideas in different industries (cross-industry innovation), and therefore focusing on the inside-out process in open innovation, can increase a company's revenue immensely. The different approaches within the inside-out process can be summarized as leveraging a company's knowledge by opening the company's boundaries and gaining advantages by letting ideas flow to the outside. The inside-out process, as a major process in an open innovation strategy, creates a substantial advantage for companies that fulfill certain criteria.

The exploitation of knowledge outside the company is related to the company's capability to multiply and transfer its knowledge to the outside environment. The capability to multiply innovation by external exploitation is strongly connected to the firm's knowledge transfer capability and to the selection of appropriate partners. Only if the company is able to codify and share its knowledge with the external entity will the commercialization of ideas be successful. The strategic selection of partners that are willing and able to multiply the new technology is also an important element of the multiplicative capability of the firm.

Companies that decide on the coupled process as a key process combine the outside-in process (to gain external knowledge) with the inside-out process (to bring ideas to market). In order to do both, these companies co-operate with other companies in strategic networks. Co-operation refers to the joint development of knowledge through relationships with specific partners, such as consortia of competitors (Hagedoorn, 1993; Chiesa, Manzini, 1998; Ingham, Mothe, 1998), suppliers and customers (von Hippel, 1988; Hakanson, Johanson, 1992), joint ventures and alliances (Kogut, 1988; Hamel, 1991; Mowery et al., 1996), as well as universities and research institutes (Bailetti and Callahan, 1992; Conway, 1995; Cockburn, Henderson, 1998; Santoro, Chakrabarti, 2001). Companies working in strategic alliances or joint ventures know that one major success factor for co-operation is the right balance of give and take. A crucial precondition for working in co-operative innovation processes is the capacity to integrate external knowledge into a company's own knowledge and technology, and to externalize it in order to enable the partner to learn. Success is based on a company's ability to find and integrate the right partner that can provide the competencies and/or knowledge needed to gain a competitive advantage in their own industry.

The notion of "relational capacity" as a source of competitive advantage relates to Dyer and Singh's (1998) idea that a company's value is strongly related to its capability to build and maintain relationships with partners in order to enable joint development in strategic alliances (Dyer, Singh, 1998; Johnson, Sohi, 2003). A company can be differentiated by the networks to which it is connected and the alliances and joint ventures that it can undertake. Therefore, the relationships with other companies, complementary companies and competitors can be among a firm's major assets and a necessary precondition for the coupled process within an open innovation strategy.

This section retraces the history of the video game industry, paying particular attention to the impact of innovations on the market and to the changing relationships among the different players in the value network. This section is particularly relevant for fully understanding the evolution of business models, the impact of new technologies on them, and how the incumbents are reacting in a situation of shifting power.

The video games industry is highly dependent upon technological innovation, both from within and outside the industry. There is a strong relationship between the video games and electronics industries, with the latter traditionally driving innovation and change in the former. From 1975 onwards, semiconductor firms looked to games machines as an ideal application for their new technology, which had a profound effect upon the organization of video game production.

The creation of an interconnected, but autonomous, software industry would eventually allow firms to enter the video games production system concentrating upon one particular function, such as game developing or publishing. However, during its earlier stages the industry was not big enough to support vertical disintegration (Johns 2006).

The late 1970s and early 1980s saw a high turnover of firms entering markets across the globe, attempting to dominate hardware production. But competition was intense and many firms went bankrupt. Nevertheless, hardware and software sales remained at high enough levels to support the fewer companies still operating in the market (Haddon, 1999), and the predicted failure of gaming did not occur. Since this time, technological developments, particularly those in the semiconductor industry, have driven the evolution of the video games industry. As a result, the games business goes through a cycle every 5–6 years, as one generation of consoles succeeds another. As a new generation of consoles emerges, a boom in sales results during which intense competition takes place between the hardware manufacturers, with one producer dominating sales after a relatively short time.

In the first cycle, the dominant hardware manufacturer was Atari, with 80% of its home US market (Haddon, 1999), later followed by Nintendo’s success during the late 1980s and early 1990s. Of crucial importance to the hardware manufacturers is the supply of software, which is now provided by developers and publishers. These developers tend to take a couple of years to develop new generation software which, when released, intensifies the boom in hardware sales. The decline of Atari is often blamed on too much poor quality software being available, which resulted in Nintendo taking tight control over the publishing of software for both the Nintendo Entertainment System (NES) and Super Nintendo Entertainment System (SNES). Nintendo, and later their main rival Sega, both used cartridge technology to prevent copying, as developers were forced to pay the hardware producers to manufacture their cartridges. This practice was seen as unfair by the software developers, but Nintendo and Sega were effectively seeking to retain control over the production of software for their consoles, and crucially, to take a proportion of revenue generated by its sale. Booms in the sales of both hardware and software are subsequently followed by lulls, as consumers anticipate the launch of the next generation machines. In the mid-1990s these were the Sony PlayStation, Nintendo 64 and Sega Saturn, offering 32- or 64-bit consoles, and in the case of Sony and Sega using CD-ROM storage. These consoles required new levels of investment, and as a result, higher risk.

The intense competition between manufacturers is decided by the consumer based on a number of factors: the price of the console, the availability and quality of software, the quality of graphics and game-play, and marketing and peer review. In addition, enormous competitive advantage seems to be gained by being the first of the next generation of consoles to launch, which has been the case with both the Sony PlayStation and PlayStation2. As with many other cultural industries, demand in the video games industry is difficult to estimate, resulting in a highly competitive, volatile and risky environment which directly impacts upon the structure and evolution of Global Production Networks.

The video games industry has accelerated from small firms, sometimes even individuals programming software in their bedrooms for a niche market, to an industry dominated by multinational hardware producers. Over 100 million 128-bit consoles and hand-held gaming devices have been sold across the world to date. Video game GPNs have evolved greatly during this time, producing shifting geographies and changing power relationships between actors. In order to understand how these relations are structured and organized over space, it is first necessary to conceptualize the video games production network. Johns (2006) argued that the video games industry should be divided into seven key stages of production, beginning with financing and ending with consumption.

This conceptualization should not be viewed in isolation, as a great number of other production networks are intimately connected to that of video games production. These include inputs from the music industry (in the form of tracks on games), the film industry (concept and script development, voice-overs, and visual effects), and the advertising and marketing industries following the completion of both hardware and software production. Indeed, the cultural industries are becoming further interconnected as media conglomerates seek to deliver concepts via an increasing number of forms of content delivery. Therefore, while video games are an important and growing industry in their own right, the sector is gaining greater significance through its connections to other cultural sectors.

Before the end of 2001, Microsoft Corporation, best known for its Windows operating system and its professional productivity software, entered the console market with the Xbox. Based on Intel's Pentium III CPU, the console used much PC technology to leverage its internal development. In order to maintain its hold in the market, Microsoft reportedly sold the Xbox at a significant loss and concentrated on drawing profit from game development and publishing. Shortly after its release in November 2001 Bungie Studio's Halo: Combat Evolved instantly became the driving point of the Xbox's success, and the Halo series would later go on to become one of the most successful console shooters of all time. By the end of the generation, the Xbox had drawn even with the Nintendo GameCube in sales globally, but since nearly all of its sales were in North America, it pushed Nintendo into third place in the American market.

In the early 2000s, mobile games had gained mainstream popularity in Japan's mobile phone culture, years before the United States or Europe. By 2003, a wide variety of mobile games were available on Japanese phones, ranging from puzzle games and virtual pet titles that utilized camera phone and fingerprint scanner technologies to 3D games with PlayStation-quality graphics. Older arcade-style games became particularly popular on mobile phones, which were an ideal platform for games designed for shorter play sessions.

Mobile phone gaming revenues passed 1 billion dollars in 2003, and passed 5 billion dollars in 2007. More advanced phones came to the market such as the N-Series smartphone by Nokia in 2005 and the iPhone by Apple in 2007 which strongly added to the appeal of mobile phone gaming. At Apple's App Store in 2008, more than half of all applications sold were iPhone games.

The seventh generation opened early for handheld consoles, as Nintendo introduced their Nintendo DS and Sony premiered the PlayStation Portable (PSP) within a month of each other in 2004. While the PSP boasted superior graphics and power, Nintendo, following a trend established since the mid-1980s, gambled on a lower-power design featuring a novel control interface.

In console gaming, Microsoft stepped forward first in November 2005 with the Xbox 360, and Sony followed in 2006 with the PlayStation 3, released in Europe in March 2007. Setting the technology standard for the generation, both featured high-definition graphics, large hard disk-based secondary storage, integrated networking, and a companion on-line gameplay and sales platform, with Xbox Live and the PlayStation Network, respectively. Both were formidable systems that were the first to challenge personal computers in power while offering a relatively modest price compared to them.

Nintendo would release their Wii console shortly after the PlayStation 3's launch, and the platform would put Nintendo back on track in the console race. While the Wii had lower technical specifications (and a lower price) than both the Xbox 360 and PlayStation 3, its new motion control was much touted. Many gamers, publishers, and analysts initially dismissed the Wii as an underpowered curiosity, but were surprised as the console sold out through the 2006 Christmas season, and remained so through the next 18 months, becoming the fastest selling game console in most of the world's gaming markets.

With high definition video an undeniable hit with veteran gamers seeking immersive experiences, expectations for visuals in games along with the increasing complexity of productions resulted in a spike in the development budgets of gaming companies. While some game studios saw their Xbox 360 projects pay off, the unexpected weakness of PS3 sales resulted in heavy losses for a few developers, and many publishers broke previously arranged PS3 exclusivity arrangements or cancelled PS3 game projects entirely due to rising budgets.

Beginning with PCs, a new trend in casual gaming, games with limited complexity that were designed for shortened or impromptu play sessions, began to draw attention from the industry. Many were puzzle games, such as Popcap's Bejeweled and PlayFirst's Diner Dash, while others were games with a more relaxed pace and open-ended play. The biggest hit was The Sims by Maxis, which went on to become the bestselling computer game of all time, surpassing Myst (as of May 2011, the franchise has sold more than 150 million copies worldwide, and is also the best-selling PC franchise in PC history).

Other casual games include Happy Farm and Zynga titles like Mafia Wars, FarmVille, and Cafe World, among many others, which are tied into social networking sites such as Myspace and Facebook. These games are free to play, with the option to buy in-game items and stat boosts for money and/or reward offers.

In 2008, social network games began gaining mainstream popularity following the release of Happy Farm in China which attracted 23 million daily active users. The most popular social network game is FarmVille, which in 2010 had over 80 million active users worldwide.

In 2009, a few cloud computing services were announced targeted at video games. These services allow the graphics rendering of the video games to be done away from the end user, and a video stream of the game to be passed to the user. OnLive allows the user to communicate with their servers where the video game rendering is taking place. Gaikai streams games entirely in the user's browser or on an internet-enabled device.

The new decade has seen rising interest in the possibility of next generation consoles being developed in keeping with the traditional industry model of a five-year console lifecycle. However, in the industry there is believed to be a lack of desire for another race to produce such a console. Reasons for this include the challenge and massive expense of creating consoles that are graphically superior to the current generation, with Sony and Microsoft still looking to recoup development costs on their current consoles, and the failure of content-creation tools to keep up with the increased demands placed upon the people creating resources, such as art, for the games on those consoles. The focus for new technologies is likely to shift onto motion-based consoles and peripherals, such as the Nintendo Wii, Microsoft's Kinect, and Sony's PlayStation Move.

The shift to online and wireless games hurts the console game market in the near term, although new games being marketed for the current generation of consoles with improved motion-sensory technology, which changes the game-play experience and brings in a wider range of players, are limiting the decline. The migration of many massively multiplayer online games (MMOGs) from subscription models to a free-to-play business model is increasing the number of players worldwide. The growth of micro-transactions is providing a boon for the industry. Casual games and social network games are important components of the online market, helping expand the demographic base and stimulate spending. Some developers are shifting their attention from console games to concentrate more on online games.

The growth of smartphones and tablets (such as the iPad) with improved graphic capabilities is enabling developers to produce more advanced wireless games and will drive demand for those games. Smartphones and tablets, aided by an intuitive touch interface, are fast becoming the devices of choice for casual game players. At the same time, new application stores that make the purchase of games more user-friendly will increase the number of gamers willing to purchase games. The growth of advanced wireless networks, with their faster speeds, will enable wireless games to approach the quality of console games. The market for PC games will continue deteriorating as consumers turn their attention to newer technologies. Piracy of PC games, which is prevalent in certain markets, has also hampered the growth of the segment. The growth of MMOGs, which usually require retail purchase of a PC game, continues to support the retail PC game market.

2012 started the era of microconsoles (Kelly 2013). Ouya, Gamestick, and GCW Zero are among the consoles funded through Kickstarter (an open innovation online platform focused on funding projects) and announced for the latest generation. They feature an open source operating system and low hardware costs.

Traditional consoles have had digital distribution wings since 2002, but these online channels have not provided a great distribution window for independent games, due to the overexposure of traditional titles and the costs involved in being present on the platform (not always sustainable for indie developers).

The microconsoles, on the other hand, promise to be open, available, connected and easy to work with. One focus of these platforms will be connectivity to the end user through online channels. Examples like Valve through Steam, or Ouya through an adaptation of the Android store, are attempting to connect developer and consumer in a faster and more collaborative way. It can be easy for private individuals to develop digital content for these platforms with rather cheap equipment and without a development team. Moreover, developers can benefit from open community support on hardware and software modifications and add-ons, in a way never seen before in the video game industry.

Furthermore, with the customer base expansion brought by Nintendo (with its Wii and DS) and by mobile applications (smartphone and tablet games), the importance of high-end graphics is dropping. Many independent titles (like The Walking Dead, Dear Esther, Bastion, or Journey) are achieving great results in terms of sales and public reception while showing a distinctive graphic style and gameplay. The visual identity and the game experience itself are increasingly more important than raw graphical power.

Microconsoles are also leveraging their value proposition upon lower costs, both in hardware and distribution. In this era, perceptible differences in hardware performance have largely given way to lower costs (e.g. the Raspberry Pi). Producing a low-to-medium power microconsole appears doable, which is why there is more than one project ongoing.

As already seen, the video games production network can be divided into two interrelated parts, hardware and software production and, although these are complementary, each has distinct organizational settings. Hardware production is conducted by console manufacturers who coordinate concept and research development, console production and distribution to the consumer. However, the degree to which these manufacturers complete these tasks in-house and, indeed, the relative importance of video games to their overall operations varies greatly between firms.

How particular firms are able to manipulate the production network to increase their percentage of revenue is a function of their positionality within the network, and an outcome of the power negotiations between themselves and other actors. In the case of the video games industry, clear differences between actors’ power in each production stage can be observed that are dramatically altering the structure and geography of the production network over time.

The console manufacturer performs a number of functions of the production network in-house, such as publishing and developing its own games (which have a higher profit margin than hardware). However, all console manufacturers outsource a proportion of their games development, which is handled by a division separate from their own development studios.

Each console manufacturer is keen to have successful titles produced for their consoles and seek to obtain such titles from reputable third-party publishers. The console manufacturers are able to capture up to a third of the retail value of a game, generating significant profits. However, the console manufacturers use this licensing process to perform quality control, and if they wish are able to reject games. This represents the source of the console manufacturer’s direct power over publishers, as they use only approved publishers that have been formally vetted, creating an inner circle of preferred suppliers.

Through the control of manufacturing games, the console manufacturers are able to maintain a powerful grip on the activities of firms in the development and publishing stages of the production network. As the console manufacturers have a vested interest in producing high quality games for their products, they are often willing to offer generous funding to developers with promising concepts. As a result, developers are often keen to work directly with console manufacturers rather than independent publishers, but due to high competition for financial backing, most developers are unable to select the publisher they work with. In essence, developers are charged with the creative development of a game code, which is then passed over to the publisher who oversees the rest of the production network. Developers are relatively isolated in terms of network connectivity, occupying a more peripheral position than the console manufacturers and publishers. Consequently, they are often in a weak negotiating position and are unable to capture extra value. In addition, the publisher often retains the intellectual property rights to games, despite the initial concept and creative input originating with the developer.

However, while developers are often the weaker partner in negotiations with publishers and console manufacturers, this is not always the case. Two factors can greatly increase the power of developers: the reputation or history of the firm, or of individuals employed by the firm, and the temporal position of negotiations within the broader cycle of the console market. Console manufacturers have a preferred list of developers to supply game code, based upon the track record of the development team. Here, factors such as the creative reputation of certain individuals for conceiving and developing popular titles, the genre of games produced and the particular concept being presented all play a role in the console manufacturer's selection process. In essence, the developer is attempting to trade their unique creative skills which are, by their very nature, unquantifiable. The potential of a particular concept is unknown until it is produced and reaches the consumer. As the publisher assumes the majority of the risk by financing the development of an idea, they have to negotiate the best deal in order to gain maximum revenue from successful titles. Therefore, publishers often seek developers with strong reputations for success and, as a result, these particular firms are placed in a more powerful position as trust and confidence assume extra significance in negotiating financing deals.

In addition, publishers are increasingly purchasing the licenses to produce game versions of other cultural products such as films or sports, or financing sequels to successful games, as the perceived risk on the investment is lower. Therefore, developers that have produced one successful title have greater negotiating power, especially if a publisher commissions them to develop a sequel to that game (although the publisher retains the intellectual property rights in the majority of cases).

When examining power relations in the production network, it is essential to consider broader temporal dimensions, given the cyclical nature of the industry. The bargaining power of games developers varies greatly depending on the position of the console manufacturer within that cycle and, as suggested above, upon the particular developer's reputation. This generates a strong interdependence between hardware and software production, as hardware sales increase as more games become available.

In comparison, rival console manufacturers Microsoft and Nintendo have struggled to come close to rivaling Sony's installed base. Consequently, these hardware producers find it more difficult to attract publishers and developers, as they offer smaller markets. The success of console manufacturers and publishers depends upon their ability to reach consumers, hence the significant proportion (roughly 30%) of revenue captured by retailers. The relationship between publishers and retailers is often contested, and is the result of negotiations between these actors at the local, national and, increasingly, multinational scale.
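The value-capture shares mentioned above (retailers taking roughly 30% of retail value, and console manufacturers up to a third via licensing and manufacturing fees) can be made concrete with a small illustrative calculation. The retail price, the publisher and developer shares, and the exact manufacturer share below are assumptions chosen for illustration only, not figures from the text:

```python
RETAIL_PRICE = 60.0  # hypothetical retail price of a game, in USD

# Approximate shares of retail value captured by each actor.
# "retailer" (~30%) and "console_manufacturer" (up to ~1/3) follow the
# rough figures discussed in the text; "publisher" and "developer" are
# assumed residual splits for illustration.
shares = {
    "retailer": 0.30,
    "console_manufacturer": 0.30,
    "publisher": 0.25,
    "developer": 0.15,
}

def split_revenue(price, shares):
    """Return each actor's capture of the retail price."""
    assert abs(sum(shares.values()) - 1.0) < 1e-9, "shares must sum to 1"
    return {actor: round(price * s, 2) for actor, s in shares.items()}

capture = split_revenue(RETAIL_PRICE, shares)
print(capture)
# e.g. {'retailer': 18.0, 'console_manufacturer': 18.0,
#       'publisher': 15.0, 'developer': 9.0}
```

The sketch makes the bargaining-power point visible: under these assumed splits, the developer, despite originating the creative concept, captures the smallest slice of the retail price.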

However, as the retailers are aware of their importance to the video games industry, they are able to counter the dominance publishers hold in other stages of the production network. In this highly competitive industry, the retailers are in a strong position and are able to negotiate favorable agreements. While the power relationships between actors in the software production network are uneven, they are also greatly affected by their spatial dimensions. Indeed, the relationships outlined in Figure 4 must be placed within their geographical context. Primarily, these relations are conducted within the boundaries of the three major economic regions: North America, Europe and Asia Pacific. The prime reason for this bounding of activity is the organizational structures of the console manufacturers, through which they are able to gain a greater degree of control than would be possible in a truly global system. Publishers are required to submit code to the console manufacturer’s base in the region in which they are located. For example, a UK-based publisher wishing to publish a game on PlayStation2 has to submit code to Sony Computer Entertainment Europe (SCEE). If accepted, SCEE will publish the game and manufacture games suitable for PAL territories. If the publisher wishes to sell the game in the USA, they are required to begin the whole submission process again with Sony Computer Entertainment America (SCEA), but this is only possible if they have registered offices in North America.

Through this organizational structure, the console manufacturers have greater control over the release of software in each region, and maintain the tri-regional structure. It is also much easier for them to organize global releases of games than it is for third party publishers. However, if actors are able to find ways to overcome this imposed divide, their power within the production network can be greatly increased. Firms’ strategies to achieve this include increasing in size to gain greater economies of scale, particularly in relation to distribution, increasing access to finance and distribution (either through vertical integration or exclusive agreements), and increasing the geographical extent of operations. In particular publishers are highly connected and have developed a range of strategies to attempt to capture maximum value and thereby increase their power. The increased size and scope, both vertically and horizontally, of actors in the production network has resulted in the rapid internationalization of software production and distribution. While it can be argued that the production of console hardware is global, software production is organized around the tri-region structure created by the console manufacturers. While this technicality could easily be overcome (conversion chips are freely, but unofficially, available in most nations) these supra-regional divides are maintained by games developers and consumers alike. In fact, the geographical unevenness of video games production and consumption is further complicated by strong cultural differences between markets.


