The History Of Commercial Volunteer Computing


Private Volunteer Computing

Although true volunteer computing may be the most visible and most noble form of volunteer computing, it is not the only one. Volunteer computing can take other, less lofty but more practical, forms as well.

At the lowest level, volunteer computing principles can be used in private volunteer computing networks within organizations such as companies, universities, and laboratories to provide inexpensive supercomputing capabilities. Many companies and universities today have internal networks (intranets) with many PCs and workstations that remain idle most of the time – not only during off-hours, but also during office hours when they are mostly used for non-computational tasks such as word processing. Volunteer computing can be used to pool together the computing power of these existing and under-utilized resources to attain supercomputing power that would otherwise be unaffordable. This not only makes it possible for research organizations to satisfy their computational needs inexpensively, but also creates an opportunity for other organizations to consider the use of computational solutions and tools where they have not done so before.

For example, companies with heavy computational needs can use volunteer computing systems as an inexpensive alternative or supplement to supercomputers. Some potential applications include: physical simulations and modelling (e.g., airflow simulation in aircraft companies, crash simulation in car companies, structural analysis in construction firms, chemical modelling in chemical, biomedical, and pharmaceutical labs, etc.), intensive data analysis and data mining (e.g., for biomedical labs involved in genome research, for financial analysts and consulting firms, etc.), and high-quality 3D graphics and animations (e.g., for media and advertising companies).

At the same time, companies that have hitherto deemed computational solutions or tools too expensive can start benefiting from them. For example, manufacturing companies can benefit from computational analysis and process simulations of their complex pipelines. These can be used to expose bottlenecks and weak points in processes, predict the effects of changes, and lead to finding ways to make them more efficient and cost-effective. Marketing departments of companies can do data mining on sales and consumer data, finding patterns that can help them formulate strategies in production and advertising. In general, by providing an affordable way to do computation-intensive analysis, volunteer computing can let companies enhance the intuitive analysis and ad hoc methods currently used in planning and decision making, and make them more informed and systematic.

Like companies, universities and research labs can use volunteer computing to turn their existing networks of workstations into virtual supercomputers that they can use for their research. This is especially useful in financially constrained institutions, such as universities in developing countries that cannot afford to buy supercomputers and have thus far been unable to even consider doing computation-intensive research. Even in non-research-oriented universities that do not have a true need for supercomputing, virtual supercomputers built through volunteer computing can be used to teach supercomputing techniques, giving students skills they can use in graduate school or industry.

Most of the applications mentioned here can be implemented with conventional NOW or metacomputing software, and are in fact already being implemented in a growing number of organizations today. The advantage of volunteer computing over these is that: (1) it makes implementing applications dramatically easier for users and administrators alike, and (2) by doing so, it makes the technology accessible to many more organizations, including those that do not have the time and expertise to use conventional NOWs. For example, if a company decides to use its machines for parallel processing, system administrators do not need to spend time manually installing software on all the company machines anymore. With a web-based system, they can simply post the appropriate Java applets on the company web site, and then tell their employees to point their browsers to a certain web page. The employees do so, and leave their machines running before they go home at night. In this way, a company-wide volunteer computing network can be set up literally overnight, instead of taking days or weeks for installing software and educating users as it would in conventional systems.
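
To make this concrete, the sketch below shows roughly what such an intranet worker applet might look like. It is purely illustrative and not taken from any particular system; the class and method names (VolunteerApplet, fetchWork, submitResult) are hypothetical placeholders, and a real applet would exchange data with the web server it was downloaded from.

    // Hypothetical sketch of an intranet volunteer applet (illustrative names only).
    import java.applet.Applet;

    public class VolunteerApplet extends Applet implements Runnable {
        private volatile boolean running;
        private Thread worker;

        public void start() {                        // called when the page is visited
            running = true;
            worker = new Thread(this);
            worker.setPriority(Thread.MIN_PRIORITY); // stay out of the user's way
            worker.start();
        }

        public void stop() {                         // called when the user leaves the page
            running = false;
        }

        public void run() {
            while (running) {
                Object task = fetchWork();           // fetch a work unit from the server
                if (task == null) break;             // no more work available
                Object result = compute(task);       // do the actual computation
                submitResult(result);                // send the result back
            }
        }

        // Placeholders: a real system would talk to the originating web server here,
        // since unsigned applets may only open connections back to that host.
        private Object fetchWork()            { return null; }
        private Object compute(Object task)   { return task; }
        private void   submitResult(Object r) { }
    }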

Collaborative Volunteer Computing

The same mechanism that makes volunteer computing within individual organizations work can be used to make volunteer computing work between organizations as well. By volunteering their computing resources to each other, or to a common pool, organizations around the world can share their computing resources, making new forms of world-wide collaborative research possible.

Much research today is going into developing technologies for co-laboratories – world-wide virtual laboratories where researchers in different labs around the world can interact freely with each other and share data and ideas, as if they were all in one big laboratory [124, 159]. Technologies currently under development include videoconferencing, shared whiteboards, remote instrument control, and shared databases. Volunteer computing can enhance co-laboratories by allowing researchers to share not only data and ideas, but computing power as well.

Even organizations that do not collaborate very closely can use volunteer computing mechanisms to barter trade their resources, depending on their need. This has interesting geographical possibilities. For example, a university in the United States can allow a lab in Japan to use its CPUs at night (when it is day in Japan) in exchange for being able to use the Japanese lab’s CPUs during the day (when it is night in Japan). In this way, we get a win-win situation where both labs get twice the processing power during the hours they need it.

Taken to an extreme, pooling and barter trade of computing resources between organizations can lead to the formation of cycle pools – massive pools of processing power to which people can contribute idle cycles, and from which people can tap needed processing cycles. This can be used for organizations’ individual gains, as in the example above, but can also be a venue for larger organizations to help smaller ones and contribute toward a better social balance of resources. For example, a large university with more idle computer power than it needs can donate its idle cycles to the pool, and allow smaller universities and schools to make use of them for their own teaching and research needs.

Note that these possibilities are not unlike what grid systems seek to realize. Thus, in a way, volunteer computing can be seen as an enabling technology for grid-based systems.

Commercial Volunteer Computing

By giving people the ability to trade computing resources depending on their needs, volunteer computing effectively turns processing power into a commodity. With appropriate and reliable mechanisms for electronic currency, accounting, and brokering, market systems become possible, allowing people and groups to buy, sell, and trade computing power. Companies needing extra processing power for simulations, for example, can contact a broker machine to purchase processing power. The broker would then get the desired processing power from a cycle pool formed by companies selling their extra idle cycles. Even individuals may be allowed to buy and sell computing power.

Commercial applications of volunteer computing also include contract-based systems, where computing power is used as payment for goods and services received by users. For example, information providers such as search engines, news sites, and shareware sites might require their users to run a Java applet in the background while they sit idle reading through an article or downloading a file. Such terms can be represented as a two-sided contract that both sides should find acceptable. For example, while it is actually possible in today’s browsers to hide Java applets inside web pages so that they run without users knowing about them, most users will consider this ethically unacceptable, and possibly even a form of theft (of computing power). Sites that require users to volunteer must say so clearly, and allow users to back out if they do not agree to the terms.

If appropriate economic models and mechanisms are developed, commercial volunteer computing systems can allow people not only to save money as they can in organization-based systems, but to make money as well. Such an incentive can attract a lot of attention, participation, and support from industry for volunteer computing. With commercial volunteer computing systems, what happened to the Internet may happen to volunteer computing as well – it will start out as something primarily used in academia, and then when it becomes mature, it will be adopted by industry, which will profit immensely from it.

In fact, in the past year, this has already started to happen. Several new start-up companies, including Entropia [45], Parabon [113], Popular Power [118], Process Tree [121], and United Devices [148], have taken the technology and ideas from SETI@home and distributed.net and are setting up market-based systems, hoping to make profits by acting as brokers between people that need computing power, and people that are willing to share their idle computing power for pay.

NOIAs

Many experts, both in industry and in the academic community, predict that in the near future, information appliances – devices for retrieving information from the Internet which are as easy to use as everyday appliances such as TVs and VCRs – will become commonplace [53, 107]. In the United States today, companies such as WebTV [156] are starting to develop and sell information appliances in the form of "set-top boxes", while many cable companies already support high-speed cable modems that use the same cable that carries the TV signals to connect users to the Internet up to 50 times faster than a telephone modem can. It is not hard to imagine that within the next five or ten years, the information appliance will be as commonplace as the VCR, and high-speed 24-hour Internet access as commonplace as cable TV.

This brings up an interesting idea: what if we use volunteer computing to allow users of these information appliances to volunteer their appliances’ idle cycles? These nodes can perform computation, for example, when the user is not using the information appliance, or while the user is reading a web page. Such networks-of-information-appliances, or NOIAs, as we can call them (appropriately, the acronym also means "mind" in Greek, and evokes an image of a brain-like massively parallel network of millions of small processors around the world), have the potential for being the most powerful supercomputers in the world since: (1) they can have tens of millions of processors (i.e., potentially as many as the number of people with cable TV), and (2) although they are only active when the user is idle, we can expect the user to be idle most of the time.

NOIAs can be contract-based systems. Cable companies or Internet service providers (ISPs) can sign a contract with their clients that would require clients to leave their information appliance boxes on and connected to the network 24 hours a day, running Java applets in the background when the user is idle. This may be acceptable to many clients since most people today are used to leaving their VCR on 24 hours a day anyway. In some cases, however, clients may object to having someone else use their appliances when they are not using it, so a reasonable compromise may be to always give clients the option of not participating, but give benefits such as discounts or premium services to clients who do.

In addition, volunteering clients may be allowed to indicate which kinds of computations they do and do not want to be involved in. For example, a client may allow her appliance to be used for biomedical research directed at lung cancer, but not for tobacco companies doing data mining to improve their advertising strategy.

From programmers’ and administrators’ points of view, NOIAs also have several advantages over other kinds of volunteer networks. Hardware-based cryptographic devices can make NOIAs secure against malicious volunteers attempting to perform sabotage (see Sect. 2.3.3) by preventing such saboteurs from forging messages or modifying the applet code that they are given to execute. NOIAs are also significantly more stable than other forms of volunteer computing. That is, since users are likely to leave their information appliances on all the time, the chance of a particular node leaving a NOIA is smaller than in other kinds of volunteer networks. Finally, NOIAs composed purely of information appliances would also be homogeneous, since each participating information appliance would have the same type of processor. All these lessen the need for adaptive parallelism (see Sect. 2.3.2), and allow greater efficiency and more flexibility in the range of problems that a NOIA can solve.

It may take some time before information appliances and high-speed Internet access become widely available enough to make NOIAs possible. However, it is useful to keep them in consideration since techniques developed for the other forms of volunteer computing are likely to be applicable to NOIAs when their time comes.

Research Issues and Challenges

While volunteer computing offers all these promising potentials, realizing these potentials and implementing real volunteer computing systems involves many interesting and challenging technical questions and problems. These include technical issues that need to be addressed in making volunteer computing possible and effective. These technical issues can be classified broadly into accessibility (making volunteer computing as easy, open, and inviting to volunteers as possible), applicability (making volunteer computing useful in real life), and reliability (making volunteer computing work in the presence of faults and malicious volunteers). In addition, there are economic issues as well, which are especially relevant in implementing commercial systems. In this section, we present all these issues and discuss the challenges that they bring to researchers and developers of volunteer computing systems.

Accessibility

The key to volunteer computing's advantages over other forms of metacomputing is its accessibility. By making it as easy as possible for as many volunteers as possible to join, volunteer computing can do things that other forms of metacomputing cannot.

Achieving accessibility involves addressing several issues, including:

Ease-of-use and platform-independence. In order to maximize the potential worker pool size and minimize setup time, volunteer computing systems must be usable and accessible to as many people as possible. Thus, they must be easy to use and platform-independent. Volunteering must require as little technical knowledge from volunteers as possible. Even a seemingly simple setup procedure such as downloading and installing a program may be too complex, since most computer users today generally only know how to use applications, not how to install them. At the same time, users should be able to participate regardless of what type of machine and operating system they use, and preferably without having to identify their platform type.

Volunteer-side security. Volunteer computing systems must also be secure in order not to discourage people from volunteering. Since programs will be executed on the volunteers’ machines, volunteers must be given the assurance that these programs will not do harm to their machines.

User-interface design. Finally, volunteer computing systems should have a good user interface design to encourage volunteers to stay and participate. In most traditional parallel systems, user interfaces for the processing nodes are unnecessary because these nodes are usually hidden inside a large supercomputer and cannot be accessed independently anyway. In volunteer computing systems, however, volunteers need an interface for doing such things as starting and stopping applets, or setting their priority. They also need some sort of progress indicator to assure them that their machines are actually doing useful work. User interfaces for viewing results and statistics, or for submitting jobs or specifying parameters, are also important. Whereas these are traditionally available only to the server’s administrators, we may want to make them available to users as well. In commercial systems, for example, users would like a good interface for submitting problems, and receiving results.

Applicability

Of course, volunteer computing would not be interesting if it were not useful. Thus, the applicability of volunteer computing is of prime concern. This involves issues in programmability, adaptive parallelism, performance, and scalability.

Programmability

The first aspect of applicability is programmability. A volunteer computing system should provide a flexible and easy-to-use programming interface that allows programmers to implement a wide variety of parallel applications easily and quickly.

One of the key benefits that PVM and MPI brought to the parallel computing and metacomputing worlds, for example, was an easy-to-use, general-purpose programming interface that programmers could quickly learn and use for a wide variety of applications. This enabled programmers to start using the idea of parallel computing for their own applications. Similarly, in order for people to start benefiting from volunteer computing systems, it would be good to provide them with easy-to-use, general-purpose APIs and frameworks that enable them to implement their own applications on these systems.

The Java programming language provides a good starting point in this respect. Aside from being platform-independent and secure, Java is also object-oriented, encouraging (often even forcing) programmers to write code in a modular and reusable manner. One challenge in using Java for volunteer computing, therefore, is to develop programming models and interfaces that allow users to take advantage of object-oriented programming while making it easy to write parallel programs and to port existing programs, usually written in C or FORTRAN, to Java.
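
As a rough illustration of what such an interface could look like (the names Work, Result, and WorkEngine are hypothetical, not taken from any existing framework), a minimal master-worker API might be as small as this:

    // Hypothetical minimal programming interface for a volunteer computing system.
    import java.io.Serializable;

    interface Work extends Serializable {
        Result execute();            // a pure function of the work unit's own data
    }

    interface Result extends Serializable { }

    interface WorkEngine {
        void addWork(Work w);        // called by the application programmer
        Result collectResult();      // blocks until some result arrives
    }

Under such an interface, the application programmer would only implement execute() for a particular problem, leaving distribution, scheduling, and fault tolerance to the underlying system.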

Adaptive parallelism

One thing that makes writing programs for volunteer computing systems challenging is the heterogeneous and dynamic nature of volunteer computing systems. Volunteer nodes can have different kinds of CPUs, and can join and leave a computation at any time. Even nodes with the same type of CPU cannot be assumed to have equal or constant computing capacities, since each can be loaded differently by external tasks (especially in systems which try to exploit users’ idle times). For these reasons, models for volunteer computing systems must be adaptively parallel [55]. That is, unlike many traditional parallel programming models, they must not assume the existence of a fixed number of nodes, or depend on any static timing information about the system.

Traditional message-passing-based parallel systems, such as those using PVM or MPI, are generally not adaptively parallel. In these systems, it is not uncommon to write programs that say something like, "At step 10, processor P1 sends data A to processor P2." This may not work or may be inefficient in a volunteer computing system because P2 may be a slow machine, and may not be ready to receive the data at the time P1 reaches step 10. Worse, P2 may simply choose to leave the system, in which case P1 would get stuck with no one to send the data to.

Various strategies for implementing adaptive parallelism have already been proposed and studied. In eager scheduling [34], packets of work to be done are kept in a pool from which worker nodes get any undone work whenever they run out of work to do. In this way, faster workers get more work according to their capability. And, if any work is left undone by a slow node, or a node that "dies", it eventually gets reassigned to another worker. Volunteer computing systems that implement eager scheduling include Charlotte [12], and Javelin++ [104].
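
A minimal sketch of an eager-scheduling work pool, written here purely for illustration (it is not code from Charlotte or Javelin++, and it uses plain Object for work units and results), might look like this:

    // Illustrative eager-scheduling pool: undone work is handed out repeatedly
    // until some worker returns a result for it.
    import java.util.*;

    class EagerPool {
        private final List<Object> undone = new LinkedList<Object>();
        private final Map<Object, Object> results = new HashMap<Object, Object>();
        private int next = 0;

        synchronized void addWork(Object w) { undone.add(w); }

        // Hand out the next undone unit, cycling through the list so that work
        // left unfinished by slow or dead nodes is eventually re-issued.
        synchronized Object getWork() {
            if (undone.isEmpty()) return null;   // everything is finished
            Object w = undone.get(next % undone.size());
            next++;
            return w;
        }

        // Faster workers call getWork() more often and so get more work; duplicate
        // results for the same unit are simply ignored.
        synchronized void putResult(Object w, Object r) {
            if (results.containsKey(w)) return;
            results.put(w, r);
            undone.remove(w);
        }
    }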

The Linda model [26] provides an associative tuple-space that can be used to store both data and tasks to be done. Since this tuple-space is global and optionally blocking, it can be used both for communication and synchronization between parallel tasks. It can also serve as a work pool which, like in eager scheduling, allows undone tasks to be redone. Linda was originally used in the Piranha [55] system, and has more recently been implemented in Java by WWWinda [64], Jada [125], and the older version of Javelin, SuperWeb [6]. Sun is currently using a Linda-like tuple-space as the basis for its Jini and JavaSpaces technologies [145].
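
For comparison, a toy tuple space in the spirit of Linda can be sketched as follows. This is an illustration only, with key-based matching that simplifies Linda's associative matching, and it is not the JavaSpaces API:

    // Toy Linda-style tuple space: out() deposits a tuple, in() removes a matching
    // tuple (blocking until one exists), rd() reads one without removing it.
    import java.util.*;

    class TupleSpace {
        private final Map<String, List<Object>> space = new HashMap<String, List<Object>>();

        synchronized void out(String key, Object value) {
            space.computeIfAbsent(key, k -> new LinkedList<Object>()).add(value);
            notifyAll();                              // wake up blocked in()/rd() calls
        }

        synchronized Object in(String key) throws InterruptedException {
            List<Object> bucket;
            while ((bucket = space.get(key)) == null || bucket.isEmpty()) wait();
            return bucket.remove(0);                  // removes the tuple, like Linda's in()
        }

        synchronized Object rd(String key) throws InterruptedException {
            List<Object> bucket;
            while ((bucket = space.get(key)) == null || bucket.isEmpty()) wait();
            return bucket.get(0);                     // reads without removing, like rd()
        }
    }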

In Cilk [17], a task running on a node, A, can spawn a child task, which the node then executes. If another node, B, runs out of work while node A is still running the child task, it can steal the parent task from node A and continue to execute it. This work-stealing algorithm has been shown to be provably efficient and fault-tolerant. Cilk has been implemented in C for NOWs in Cilk-NOW [18], and a proof-of-concept system using Java command-line applications (not browser-based applets) was implemented in ATLAS [11].
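
The work-stealing idea itself can be sketched with a per-node task deque: the owning node pushes and pops tasks at one end, while idle nodes steal from the other end, which tends to grab the older, larger parent tasks. This is an illustration of the general strategy, not Cilk's actual runtime:

    // Illustrative work-stealing deque.
    import java.util.ArrayDeque;
    import java.util.Deque;

    class StealQueue {
        private final Deque<Runnable> tasks = new ArrayDeque<Runnable>();

        synchronized void push(Runnable t) { tasks.addLast(t); }       // owner spawns a task
        synchronized Runnable pop()        { return tasks.pollLast(); }  // owner takes its newest task
        synchronized Runnable steal()      { return tasks.pollFirst(); } // idle node steals the oldest task
    }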

Performance and scalability

Finally, for volunteer computing to be truly useful, it must ultimately provide its users with speedups better than, or at least comparable to, other available metacomputing technologies. This implies that volunteer computing systems must have good raw performance and high scalability.

These issues, while important in all forms of volunteer computing, are of particular concern in Java-based volunteer computing systems because of Java’s historically slow execution speed and restricted communication abilities. One of the major problems in Java-based volunteer computing, for example, is that security restrictions currently dictate that applets running in users’ browsers can only communicate with the Web server from which they were downloaded. This forces Java-based volunteer networks into star topologies, which suffer from high congestion, offer no parallelism in communication, and do not scale well.

To solve this problem, we may allow volunteers who are willing to exert extra effort to download Java applications and become volunteer servers. Volunteer server applications need to be run outside a browser, but are not subject to the applet security restrictions. This lets them connect with each other in arbitrary topologies, as well as act as star hubs for volunteers running applets. Alternatively, we can use signed applets, which are also free of these restrictions and can serve as volunteer servers as well. Another approach would be to design peer-to-peer networks that apply the basic ideas used in popular file-sharing networks such as Napster [100] and Gnutella [58] to providing computational resources instead of files. Some of the more recent commercial metacomputing companies, including those that are part of the Intel Peer-to-Peer Working Group [74], claim to be taking this approach. There are many challenges in taking this approach, however, as it exacerbates the problems of adaptive parallelism, security, and reliability.
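
To illustrate the sandbox restriction that forces star topologies (hypothetical code; the port number is arbitrary), an unsigned applet may only open sockets back to the host it was downloaded from:

    // Illustration of the applet sandbox restriction behind star topologies.
    import java.applet.Applet;
    import java.net.Socket;

    public class StarNode extends Applet {
        public void start() {
            try {
                String hub = getCodeBase().getHost();  // the originating web server
                Socket toHub = new Socket(hub, 9000);  // allowed by the sandbox
                // new Socket("some-other-volunteer", 9000) would throw a SecurityException
                toHub.close();
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }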

Reliability

Because of their size and their open nature, volunteer computing systems are more prone to faults than other forms of parallel computing. Not counting the problem of volunteers crashing or leaving (which is already covered by adaptive parallelism), faults can include not only unintentional random faults, such as data loss or corruption due to faulty network links or faulty processors, but also intentional faults caused by malicious nodes submitting erroneous data.

One type of malicious attack is sabotage, where volunteers (and possibly also non-volunteers) intentionally submit erroneous results. An unscrupulous user in a commercial system, for example, may try to get paid without doing any real work by not actually doing the work and simply returning random numbers instead. Such cheating volunteers can cause financial loss for the commercial system, not only because they cheat the system of money, but even more because they can generate errors that can propagate and render other computations invalid, even those from honest volunteers.

Another type of attack, particularly relevant in commercial volunteer computing networks, is espionage, where volunteers steal sensitive information from the data they are given. Suppose, for example, that a company A purchases computing power from a commercial cycle pool in order to process its sales data. Without some form of protection, a competitor B can then spy on A’s data by joining the cycle pool and volunteering to do work.

Guarding against malicious attacks is a very challenging problem. It is also one that has not been studied very well since, so far, people have been able to trust the components of their computers not to intentionally sabotage their computations. Possible ways of addressing this problem include using cryptographic techniques such as digital signatures and checksums to protect against volunteers sending back random results, encrypted computation to protect against espionage and attempts to forge digital signatures and checksums, and periodic obfuscation (i.e., instruction scrambling) techniques to simulate encrypted computation when true encrypted computation is not possible.
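
As a sketch of the digital-signature part of this list, using the standard java.security API and leaving key management aside, a server could sign outgoing data and verify signed results roughly as follows. Note that a signature by itself only authenticates the sender and detects tampering in transit; on its own it does not stop a registered volunteer from signing a deliberately wrong answer:

    // Illustrative signing and verification of result messages.
    import java.security.*;

    class SignedResults {
        static byte[] sign(byte[] result, PrivateKey key) throws GeneralSecurityException {
            Signature s = Signature.getInstance("SHA256withRSA");
            s.initSign(key);
            s.update(result);
            return s.sign();
        }

        static boolean verify(byte[] result, byte[] sig, PublicKey key)
                throws GeneralSecurityException {
            Signature s = Signature.getInstance("SHA256withRSA");
            s.initVerify(key);
            s.update(result);
            return s.verify(sig);
        }
    }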

In cases where these mechanisms may not work, however, one must resort to some form of redundancy. For example, we may give the same piece of work to three different processors, and have them vote on the correct answer. Unfortunately, such techniques have a high computational cost, since repeating work r times generally means taking r times longer to solve the whole problem. Thus, an interesting research problem is to develop effective but efficient fault-tolerance techniques. As we shall show in Sect. 6.3.3, one possible approach is spot-checking with blacklisting, where results are only double-checked occasionally, but faulty nodes that are caught are not used again. This may be less reliable than replication, but potentially much more efficient.
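
A minimal sketch of the voting idea (illustrative only; a real system would also track which volunteer produced which answer) makes the r-fold cost visible, since every answer below comes from a different volunteer that was given the same work unit:

    // Illustrative majority voting over r redundant results for one work unit.
    import java.util.*;

    class MajorityVote {
        static Object vote(List<Object> answers) {    // one answer per volunteer
            Map<Object, Integer> counts = new HashMap<Object, Integer>();
            Object best = null;
            int bestCount = 0;
            for (Object a : answers) {
                int c = counts.merge(a, 1, Integer::sum);
                if (c > bestCount) { bestCount = c; best = a; }
            }
            return best;                              // a stricter rule could require c > r/2
        }
    }

Spot-checking with blacklisting replaces most of this redundancy with occasional double-checks, trading some certainty for much lower overhead.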

In some cases, reliability problems can be addressed by simply choosing more fault-tolerant algorithms.

Economic Issues

Issues in paid and commercial systems. In implementing commercial volunteer computing systems, we need to have models for the value of processing power. How does supply and demand affect the value of processing power? How about quality of service? How does one arrive at guidelines for fair barter trade? We also need mechanisms to implement these models. These include electronic currency mechanisms to allow money to be electronically exchanged in a safe and reliable manner, an accurate system for accounting of processing cycles, and efficient brokering mechanisms for managing the trading of processing cycles. Because these mechanisms deal with real money, it is important that they are accurate and robust. Loopholes that permit crimes such as electronic forgery of currency, and "cycle theft" (using someone’s computer without paying them) can lead not only to loss of computational power and data, but also to direct financial losses.

Many groups are already studying these issues [6, 24, 88, 22, 97], and several companies have already started brokering volunteer computing resources for pay as noted in Sect. 2.2.4. Meanwhile, it may also be worthwhile to look into implementing commercial systems with more relaxed mechanisms, such as lotteries and barter trade that may be less accurate, but at least cannot result in large direct financial losses.

Hidden costs. One of the early arguments for using NOWs instead of supercomputers was that of cost-effectiveness: whereas a single supercomputer costs millions of dollars, a NOW uses existing workstations and thus costs next to nothing. In this way, NOWs can bring parallel processing capabilities to people who cannot afford to buy supercomputers.

When considering massive NOWs such as volunteer computing networks or NOIAs, how-ever, the validity of this argument may not be so clear anymore. Several issues arise and require further study.

For one, the cumulative cost of supplying electrical power to a massive NOW with thousands or millions of nodes may eventually offset the cost advantage gained from using existing hardware. Because of overhead, fault-tolerance redundancy requirements, and other factors, we expect that a massive NOW with the same processing power as a supercomputer would need several times as many nodes. Furthermore, each of these nodes would be an entire PC or workstation instead of just a single processor. Thus, it is easily conceivable that a NOW can use much more total power than a supercomputer.

We might argue that this power is already being spent by computers sitting idle, and that we are just putting it to better use. This may be true, but it deserves more study. It may be that with power-saving mechanisms, PCs and information appliances would spend less energy when idle. As a very simple example, when a user is not using his information appliance, he can turn it off. If his contract required him to allow his information appliance to be used in a NOIA, then he could not turn it off, and it would consume energy that he would otherwise not have spent.

Another source of hidden costs is the network. The high-speed links needed to alleviate congestion and improve performance in massive cycle pools may end up costing more than a supercomputer. These links are often leased from a telephone company, and thus can be quite expensive. Furthermore, if a volunteer computation uses a public network such as the Internet, then moving data between computers can cause congestion in some areas, which can then affect unrelated Internet traffic, and cause equivalent financial losses in places uninvolved with the project. Note that these problems are not necessarily enough to make volunteer computing completely untenable. However, in the long run, they are real concerns and should be studied further.

Fig. 2-1: Taxonomy of Volunteer Computing Systems

          |               Unpaid                |                Paid                 |
----------+------------------+------------------+------------------+------------------+
Forced    | "cycle thief"    | traditional NOW; | forced NOIA      | contract-based   |
          | hidden applet    | forced private   |                  | (e.g., trade     |
          |                  | or collaborative |                  | cycles for info.)|
          |                  | network          |                  |                  |
----------+------------------+------------------+------------------+------------------+
Voluntary | true volunteer   | voluntary        | voluntary NOIA   | market-based     |
          | network          | private or       |                  | system           |
          |                  | collaborative    |                  |                  |
          |                  | network          |                  |                  |
----------+------------------+------------------+------------------+------------------+
          |    Untrusted     |               Trusted               |    Untrusted     |

Taxonomy of Volunteer Computing Systems

Figure 2-1 shows a taxonomy (drawn as a Karnaugh map) of various forms of volunteer computing systems discussed in Sect. 2.2 according to the following "three A’s" criteria:

Autonomy (truly voluntary vs. forced) – whether volunteers can join and leave of their own free will at any time or are "forced" into volunteering,

Anonymity (known and trusted vs. untrusted) – whether the volunteers are known and trusted by the administrators, or are unknown or untrustable, and

Altruism (unpaid vs. paid) – whether volunteers expect to be compensated for volunteering or not.

This taxonomy is useful in identifying the various issues that need to be addressed in implementing these systems, as discussed in Sect. 2.3.

In general, autonomy is related to the need for adaptive parallelism. That is, forced (non-autonomous) volunteer networks would in general be easier to implement and would potentially have higher performance because they can rely on processing nodes to stay in the system longer. In such networks we may be able to consider running longer jobs than in voluntary systems, where we need to make jobs short enough so they can be done before a volunteer quits. Also, we may be able to do peer-to-peer communication, which, as noted earlier, cannot be done in systems where machines can leave the system at arbitrary times.

Anonymity is related to the security and reliability of the system. Generally, trusted (non-anonymous) networks would be easier to implement than untrusted networks because they do not need to be secure against cheating, sabotage, and espionage as described in Sect. 2.3.3.

Altruism is related to the need for economic mechanisms, for security and reliability, and for performance. Altruistic systems are generally easier to implement since they do not need precise and accurate accounting and payment mechanisms. Also, they are not as prone to cheating, sabotage, and espionage because there is no economic incentive for volunteers to do so. Furthermore, they can also afford to be less efficient in terms of performance since they are not financially accountable to paying clients who might be concerned about getting their money’s worth.

In summary, forced, trusted, and unpaid systems are easier to implement because fewer issues need to be addressed in them. (In fact, traditional NOWs and metacomputing software are built to run on such systems.) On the other hand, voluntary, untrusted, and paid systems have the greatest potential for attracting the largest number of volunteers and thus offer the promise of very high performance if their associated problems can be overcome.

Conclusion

In this chapter, I have presented the many potential forms and benefits of volunteer computing, as well as the many new challenges it brings. Although they use different names such as web-based metacomputing, global computing, and Internet computing, many others have also recognized the potentials and challenges of volunteer-based parallel computing systems, and have presented their ideas (e.g., [6, 12, 46, 56, 111] and others). Our own approach in this chapter has been to present volunteer computing as a distinct new form of computing, specifically distinguished from other network-based parallel computing systems such as NOWs and metacomputing by its emphasis on ease-of-use and accessibility for volunteers.

Information

Information helps us understand something so that we can predict how it behaves and perhaps even influence and control it. It lets us reduce the uncertainty and doubt that surround us. This ability to intervene and control comes from facts, information, and observations that come from data, which describes the world around us.

Information Warfare

Information warfare consists of actions taken to preserve the integrity of one’s own information systems from exploitation, corruption, or destruction, while at the same time exploiting, corrupting, or destroying an adversary’s information systems, and in the process achieving an information advantage in the application of force. It also comprises actions taken to achieve information superiority in support of national military strategy by affecting adversary information and information systems while leveraging and defending one’s own information and information systems. Command and control warfare is a subset of information warfare [1].

Information warfare, as a separate technique of waging war, does not exist. There are, instead, several distinct forms of information warfare, each laying claim to the larger concept [2]. Seven forms of information warfare, conflicts that involve the protection, manipulation, degradation, and denial of information, can be distinguished:

Command-and-Control warfare [3] (which strikes against the enemy's head and neck),

Intelligence-based warfare [4] (which consists of the design, protection, and denial of systems that seek sufficient knowledge to dominate the battle space).

Electronic warfare [5] (radio-electronic or cryptographic techniques).

Psychological warfare [6] (in which information is used to change the minds of friends, neutrals, and foes).

Hacker warfare [7] (in which computer systems are attacked).

Economic Information Warfare [8] (blocking information or channelling it to pursue economic dominance).

Cyber warfare [9] (a grab bag of futuristic scenarios).

All these forms are weakly related. The concept of information warfare has as much analytic coherence as the concept, for instance, of an information worker.

The several forms range in maturity from the historic (that information technology influences but does not control) to the fantastic (which involves assumptions about societies and organizations that are not necessarily true).

Information is not in and of itself a medium of warfare, except in certain narrow aspects (such as electronic jamming). Information superiority may make sense, but information supremacy (where one side can keep the other from entering the battlefield) makes little more sense than logistics supremacy.

Information as a weapon

In the longer run, information warfare can have both momentary and long-lasting implications. Susan W. Brenner says that a computer system can be used as a weapon in three ways:

Weapon of mass destruction.

This is a purely conceptual option: computers alone cannot be used to inflict the kind of demoralizing carnage the world saw in New York and Washington, D.C., on 9/11, or in Madrid on 3/11. Computers, as such, cannot inflict physical damage on persons or property; that is the province of real-world implements of death and destruction.

Weapon of mass distraction.

This is both a conceptual and a real possibility. Here, computer technology plays a pivotal role in the commission of a terrorist act: it is used to manipulate a civilian population psychologically. This manipulation saps civilian morale by undermining citizens’ faith in the efficacy of their government.

Weapon of mass disruption.

Information warfare could enable an era of largely bloodless conflict: battles would occur without guns and ammunition, perhaps entirely in the virtual world of "cyberspace," and "information warriors" would be able to disable important enemy command-and-control or civilian infrastructure systems with little loss of life. Yet this kind of warfare could also take millions of lives at a single point in time; for example, if anyone took control of a nation's medical infrastructure and destroyed or altered its databases, the resulting chaos could cost thousands of lives.

Whatever the development and diffusion of information technology mean for the future of warfare, it is apparent that some of the new forms of attack that information technology enables may be qualitatively different from prior forms of attack. The use of such tools as computer intrusion and computer viruses, for example, may take war out of the physical, kinetic world and bring it into an intangible, electronic one [10] .

Attacks could be conducted from a distance, through radio waves or international communications networks, with no physical intrusion beyond enemy borders. Damage could range from military or civilian deaths from system malfunctions, to the denial of service of important military or governmental systems in time of crisis, to widespread fear, economic hardship, or merely inconvenience for civilian populations who depend upon information systems in their daily lives.

Disrupting the information infrastructure of another nation could shut down hospitals, cause planes and trains to crash, cause starvation in isolated regions, and so on. Although there are no direct casualties when logic bombs destroy another nation's information infrastructure, information warfare can indirectly cause significant collateral death, most likely civilian. In addition, information warfare can be used for immoral or unethical purposes.

Classification of Information Warfare

It may be more morally acceptable to disrupt the enemy's information infrastructure than to bomb the enemy with weapons of destruction that lead directly to the loss of human lives, mostly civilian. But while direct human casualties may be avoided in an information attack, there may still be considerable indirect death and damage.

Information Warfare can be divided into three categories, and its role is different in each.

Individual’s Information Warfare

The first class describes attacks against an individual’s electronic privacy. This includes the disclosure of digital records and database entries wherever information is stored. The average person today has little control over the information stored about them, and cannot control whether that information is correct. According to a USA Today poll, 78% of Americans are concerned about the loss of privacy. In India there is far less awareness of the loss of privacy. In a survey I conducted for a project on child protection in cyberspace, more than 50% of the Indian Internet users surveyed did not even know about the privacy settings provided by services such as Gmail, Facebook, Twitter, Hi5, Hotmail, and Orkut. Only 15% of respondents were aware of the privacy settings and used them to protect their data and personal information; the remaining respondents knew about the settings but either never used them or used only a minimal level of privacy protection.

In the past, a spy had to tap phone lines and had to use miniature cameras and microphones to get desired information about a person. Today, he still has the capability to use these utilities but most of the information about a person will be available in existing databases. To blackmail someone, it is no longer necessary to survey him/her for months; today’s Information Warrior gets the desired information with the help of a computer over the telephone line.

This is the first approach to class 1 Information Warfare, but it can easily become worse. As we saw in the movie The Net with Sandra Bullock, we rely strongly on the information stored about us. If someone is able to edit information in law-enforcement databases, how would you explain to a police officer, whom you called because someone sold your house during your holidays, that the credit card you used is not stolen, that your passport is not fake, that your real name is not Juan Garcia, that you never smuggled drugs or killed three people, and that you are not wanted all over the world by Interpol, when the computer in his patrol car says you are?

Put together, we can say:

Thousands of databases hold together the digital images of our lives.

Computers constantly exchange information about each of us.

Available information does not have to be correct.

Getting erroneous information corrected is almost impossible.

Class 1 Information Warfare may not seem to be a serious threat, but it can easily destroy someone's identity or even link into class 2 or class 3 Information Warfare.

Corporate Information Warfare

This class describes competition, or rather today's war, between corporations around the world. It is easy to imagine a company investing 1 crore in a system that allows it to break into a competitor’s database and copy research results worth over 15 crore. To make sure the competitor is not first on the market with the new product, it could also destroy the original database on the fly and make it look like an accident caused by a virus on the mainframe.

This description of corporate information warfare is not new. This kind of "espionage" is well known from the Cold War, when Russian and American spies tried to gather information about each other’s nations.

Today, corporate information warfare has a new dimension. Not only can one corporation try to get the research results of a competitor; states have also become involved in this "game". It is possible, for example, for a state to encourage its students to study abroad (e.g., in India), asking them to keep an eye open not only in lectures at the university but also while working as interns in Indian corporations, and to pass the information back to their government.

Class 2 Information Warfare is not only about the acquisition of information; it is also possible to spread information, real or fictitious. Suppose a competing drug company spreads the claim that ABC, a widely used asthma drug made by the Indian corporation X, causes lung cancer. Doctors will probably stop prescribing ABC until a study is published. That study could be fake, part of a whole campaign of well-designed disinformation. The damage is done, and corporation X loses millions of dollars until it can prove (if it can) that its product is safe.

The previous example uses a drug manufacturer. In today's world, many processes are controlled by computer chips, so it would be even easier for an IC manufacturer to claim that a widely used chip made by a competitor does not work as it should. Would you buy a car with an airbag if the newspapers reported that the chip controlling the airbag fails in 40% of all cases? How would the company prove the contrary? You cannot easily test the system yourself to see whether it functions; you have to trust the manufacturer. If a corporation loses the trust of its customers, it also loses millions of dollars.

Class 2 warfare can also cause global changes. What if, for example, Person A had announced her complaint against the Chief Minister not months before his election but only a few days before his party's convention? Indian history, and its influence on world history, could have changed.

Global Information Warfare

This type of warfare works against industries, global economic forces, or entire countries or states. It is no longer about sneaking a look at a competitor's research data, but about the theft of secrets and then turning this information against its owners.

In this class, you can multiply the power of class one and class two warfare by a large factor and still not be able to imagine all the damage that can be done within global information warfare. Here, money and personnel are not the critical factor. Second- and third-world countries are spending billions of dollars every year on airplanes, bombs, and bullets. What if a country decided to spend only a tenth of its yearly expenditure on second-wave weapons on third-wave weapons instead? For example, a dictator in the South East could spend about 200 million dollars a year on third-wave weapons and, within about three years, be able to damage Indian industry and government in an unimaginable way. Compared to traditional weapons, Information Warfare opens new horizons of cost-effectiveness for terrorists or enemy governments. Class three warfare enables attacks over tens of thousands of miles with dramatic effects. With the weapons described in one of the following sections of this paper, such a dictator would be able to crash Dalal Street and shut down the banking system of India; the last Dalal Street crash would look harmless in comparison to the effects that would follow.


