The History Of A Curse Or Blessing


VIRTUALIZATION

A Curse or Blessing

BSc Hons Business Information Technology

Dissertation U21813


STATEMENT OF ORIGINALITY

" I, the undersigned, declare that this dissertation is my own original work, and I give Permission that it may be photocopied and made available for inter-library loan".

Signed ………………………………………………………………….

Contents

Introduction

Around the world, the personal computer (PC) has progressed from being a rarity to a mainstay of our society. Society still expects more from technology than it frequently receives. I am often amazed by the impact certain technologies have on our society, and by how organizations look at the functional benefits but fail to realize the overall impact these technologies can have in the long term.

"The strongest principle of growth lies in human choice."

George Eliot.

Nearly every technology magazine or article published today mentions cloud computing and virtualization. Virtualization has transformed the way computing works and the way business is conducted: the way data centres are implemented and managed, and the way software is obtained and installed. Information Technology (IT) in the past was expensive, limited to those who could afford it, such as superstores and higher-class individuals. A middle-class person had no knowledge of how to operate a computer. Staff at superstores were trained to use their technological equipment, especially tills. In those times, a group of 'wizards' managed IT using their own special language, as it was too complex for everyone to have such skills. The future, however, brought more promise: IT became cheaper, and applications became universal. People all around the world are now able to develop software and applications for smartphones and even personal computers (PCs). With the growing number of users and the spread of IT, even low-priority applications are recognised and experimented with.

Many individuals still do not have a clue what virtualization is; they might be unaware of its existence while still using it in some way or other. One purpose of my research study on virtualization is to leave the reader with no confusion between virtualization and visualization. This article is designed to provide an introduction to virtualization and give readers a good grasp of the topic. It is also important to know virtualization's many elements and to be comfortable with terms such as 'hypervisor', so that you can contribute and participate in your own virtualization initiatives, whether organizational or private.

This article does not assume that readers know much about virtualization or any of its technical details, and certainly does not require hands-on experience with virtualization. It does, however, assume a basic knowledge of hardware, software, operating systems and applications, and of how all these components fit together to form what we call a PC. I have defined the key virtualization terms you may encounter and explained the more complex topics, so that readers can understand the connections between the different elements of virtualization.

Chapter 1 - Overview

Every individual, whatever their experience with technology, is likely to have been exposed to the term virtualization. Even those who have not heard the term may have used its principles on their PC at home or at work. To begin with, this chapter gives an overview of virtualization: what it is, what it does, how it came into existence and where it will lead. Organizations are so dependent upon it that they frequently update their virtualization software and hardware components to gain speed and thrive ahead of competitors. I often ask myself why we virtualize: is it really that important and efficient? Can we not do things as they were done before virtualization was introduced? Is our IT industry so involved with virtualization, the next big thing, that we rely on it so much there is no going back?

Conroy (2010) explains that virtualization was invented in the early 1960s. IBM had a wide range of systems, each significantly different from the others because each new generation of systems was substantially more advanced. Each new system brought changes and requirements, which eventually caused distress to customers who could not keep up. In those days, computers could only do one task at a time; if there were two tasks to accomplish, the processes had to run in batches. IBM designed the CP-40 mainframe. This system was not sold to customers and was made to be used in labs (Conroy, 2010). Later, the CP-40 evolved into the CP-67, the first commercial mainframe to support virtualization. The operating system on the CP-67 was called CP/CMS (Control Program/Console Monitor System). The Control Program (CP) created 'virtual machines', and the Console Monitor System (CMS) was a small, single-user operating system designed to be interactive. The idea was to run the CP on the mainframe and create virtual machines running CMS, with which the user would then interact (Conroy, 2010).

A traditional physical computer has one operating system supporting one or more application programs. Virtualization enables the same single physical computer, through software that abstracts the machine's resources, to be shared between multiple 'virtual machines'. A virtual computer is thus a logical representation of a computer in software. The software decouples the physical hardware from its operating system (OS); doing so increases the utilization rate of the underlying physical hardware and provides more operational flexibility.

1.1 What is Virtualization?

There are many ways to define virtualization, but in simple terms we can describe it as versions of operating systems, servers, storage or network resources which lack physical existence yet operate and function exactly like their physical counterparts. Another example is creating a partition of a physical hard drive on a PC: in effect, the single physical hard drive is divided into two separate hard drives, even though only one drive exists physically. These two separate drives are 'virtual' drives. The purpose might simply be to save the same data on both drives, so that if one is corrupted or infected there is always the other as a backup. Corruption or infection can admittedly affect both virtual hard drives; however, depending on the virtualization software, it does not affect the physical machine or hardware.

"Virtualization is like TCP/IP. It’s a means to an end."

David Greschler, Director of Integrated Virtualization Strategy, Microsoft

VMware, the global leader in virtualization and cloud infrastructure, describes virtualization as a benefit not only to large enterprises but to small and midsized organisations too. They believe virtualization is the most effective way to reduce IT costs while maintaining the efficiency and agility of handling tasks. Virtualization software enables multiple operating systems and applications to run on a single physical server, also known as the 'host'. Each operating system is self-contained, creating a 'virtual machine' that is isolated from the others and only utilizes as much of the host's computing resources as it requires (VMWare, Virtualization Basics, 2013). Every virtualized setup consists of a host machine (the actual machine where virtualization takes place) and guest machines (the virtual machines). The words 'host' and 'guest' distinguish the software running on the physical machine from the software running on a virtual machine. Each virtual machine may run a different operating system from the other virtual machines on the same physical computer, and an error or application crash on any one virtual machine leaves all the others unaffected. Virtualization is a technology which allows the abstraction of hardware and software, including the networking, storage, memory and processor components they utilize, into an emulated environment. With virtualization, administrative tasks are centralized while overall hardware-resource utilization improves, and enterprises can manage updates and changes to OSs and applications without disrupting their daily tasks.

Figure 1.1 Virtual Architecture
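As a concrete illustration of the host/guest split described above, the following minimal sketch uses the libvirt Python bindings (one common management API for hypervisors such as KVM) to list the guest virtual machines on a host. It is a sketch only, assuming a Linux host with the libvirt-python package installed and a local QEMU/KVM hypervisor at the default URI.

    import libvirt  # pip install libvirt-python

    # Connect to the hypervisor running on this (host) machine.
    conn = libvirt.open('qemu:///system')

    # Each libvirt "domain" is one guest virtual machine on the host.
    for dom in conn.listAllDomains():
        state = 'running' if dom.isActive() else 'stopped'
        print(f'guest: {dom.name()} ({state})')

    conn.close()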

Hardware or platform virtualization refers to creating a virtual machine which acts exactly like a real computer with an operating system. The software implemented on such virtual machines is separated from the underlying hardware resources. For example, a computer running the Windows 98 operating system can host a virtual machine that behaves like a computer running the Ubuntu Linux operating system; Ubuntu-based software can then be run on the virtual machine (Efraim Turban, 2001), (IBM, 2007). "A virtual computer is a logical representation of a computer in software. By decoupling the physical hardware from the operating system, virtualization provides more operational flexibility and increases the utilization rate of the underlying physical hardware" (IBM, 2007).

1.2 What is a Virtual Machine?

Popek and Goldberg define a virtual machine as "an efficient, isolated duplicate of a real machine" (Goldberg, 1974). Current usage also includes virtual machines which have no direct correspondence to any real hardware (Smith & Nair, 2005). A tightly isolated software container known as a 'virtual machine' (VM) is required for any kind of virtualization to take place. A VM can alternatively be described as the software implementation of a machine (i.e. a PC) that executes programs like a physical machine. Each VM contains an operating system and applications inside it. Each VM is entirely separate and independent, although several can run simultaneously on a single machine. There are two major classifications of virtual machines, based on their usage and correspondence to a real machine: system virtual machines and process virtual machines.

1.2.1 System VM:

A system virtual machine provides a complete system platform which supports the execution of a complete operating system (Virtualization vs Emulation, 2006). The purpose of a system virtual machine is to emulate an existing architecture. System virtual machines provide a platform to run programs in situations where the real hardware is not available, or serve the efficient use of computing resources, in terms of energy consumption and cost-effectiveness, achieved by running multiple instances of virtual machines.
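To make the idea of a system VM concrete, the hedged sketch below drives VirtualBox's VBoxManage command-line tool from Python to register and boot a new guest. The VM name and memory size are illustrative; the sketch assumes VirtualBox is installed with VBoxManage on the PATH, and a real guest would of course also need a virtual disk and an operating system image.

    import subprocess

    VM_NAME = 'demo-guest'  # illustrative name

    # Register a new (empty) virtual machine with the VirtualBox hypervisor.
    subprocess.run(['VBoxManage', 'createvm', '--name', VM_NAME, '--register'],
                   check=True)

    # Give the guest 1024 MB of RAM.
    subprocess.run(['VBoxManage', 'modifyvm', VM_NAME, '--memory', '1024'],
                   check=True)

    # Boot the guest without opening a GUI window.
    subprocess.run(['VBoxManage', 'startvm', VM_NAME, '--type', 'headless'],
                   check=True)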

1.2.2 Process VM:

A process VM, also known as a language virtual machine, supports a single process. Such virtual machines are designed to run a single program, and they provide program portability and flexibility (amongst other benefits) by being closely suited to one or more programming languages. A virtual machine is limited to the resources and abstractions provided by the physical machine whose software hosts it; it cannot escape its virtual environment. These changes to a computer system open doors to many benefits. As discussed before, each VM encapsulates an entire machine, allowing several operating systems and applications to run simultaneously on one host. This parallelism brings cost reductions, because fewer servers need to be deployed and every physical machine is used to its full capacity (VMWare, Virtualization Basics, 2013).
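A process (language) VM of the kind described above can be sketched in a few lines of Python: the toy stack machine below executes a tiny made-up instruction set for a single program, in the same way that the JVM or the CPython interpreter executes bytecode. The instruction names are invented purely for illustration.

    # A toy process VM: a stack machine with three instructions.
    def run(program):
        stack = []
        for op, arg in program:
            if op == 'PUSH':      # push a constant onto the stack
                stack.append(arg)
            elif op == 'ADD':     # pop two values, push their sum
                stack.append(stack.pop() + stack.pop())
            elif op == 'PRINT':   # pop and display the top of the stack
                print(stack.pop())

    # "Bytecode" for the single program this VM hosts: print(2 + 3).
    run([('PUSH', 2), ('PUSH', 3), ('ADD', None), ('PRINT', None)])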

1.3 What is a Hypervisor?

The software, hardware or firmware which creates, runs and monitors virtual machines is known as a hypervisor or virtual machine monitor. The hypervisor presents guest operating systems with a virtual operating platform and manages their execution; multiple varieties of operating system thereby share the virtualized hardware resources. Gerald J. Popek and Robert P. Goldberg classified two types of hypervisor in their 1974 article "Formal Requirements for Virtualizable Third Generation Architectures" (Goldberg, 1974):

1.3.1 Type 1 (Native or Bare Metal):

Such hypervisors manage guest operating systems by running directly on the host hardware to control it – hence the term 'bare metal'. The guest operating system thus runs at a level above the hypervisor. This model represents the classic implementation of virtual machine architectures; the original hypervisors were test tools such as SIMMON and CP/CMS, both developed at IBM during the 1960s. Modern examples include Oracle VM Server for SPARC, Oracle VM Server for x86, Citrix XenServer, VMware ESX/ESXi, KVM and the Microsoft Hyper-V hypervisor. Vendors provide a hardware compatibility list (HCL) dictating the hardware requirements of the virtualization product, because the hypervisor kernel and its device drivers are kept small to minimize the size of the hypervisor. In this design, priority is given to performance, and products are distributed as server operating systems or appliances. When the server is booted with the installation CD-ROM, the product is installed directly on the hard drive without having to load an existing OS.

Figure 1.2 Types of Hypervisors

1.3.2 Type 2 or Hosted:

Hosted virtualization requires an existing operating system. Like any other application, the virtualization software is installed on the host desktop, which means the host OS can still be used. A virtual machine can run a different operating system from the host operating system. In this model, guest operating systems run at the third level above the hardware, with the hypervisor layer as a distinct second software level (Goldberg, 1974). This approach, however, is slower in performance. Examples include bhyve, VMware Workstation and VirtualBox.

In other words, a Type 1 hypervisor runs directly on the hardware, while a Type 2 hypervisor runs on another operating system, such as FreeBSD, Linux or Windows. The distinction between Type 1 and Type 2 is not always clear for specific hypervisor implementations. For example, Kernel-based Virtual Machine (KVM) is implemented as a kernel module for Linux 2.6.20, which allows the Linux kernel to operate as a bare-metal (i.e. Type 1) hypervisor (Graziano, 2011). However, KVM is also argued to be a Type 2 hypervisor, as Linux is an operating system in its own right (Pariseau, 2011). Microsoft Hyper-V (Tulloch, 2010), released in June 2008, is sometimes misidentified as a Type 2 hypervisor (Vanover, 2009). Both the free stand-alone version and the commercial Windows Server 2008 version use a virtualized Windows Server 2008 parent partition to manage the Type 1 Hyper-V hypervisor. In both cases, the Hyper-V hypervisor loads prior to the management operating system, and any virtual environments created run directly on the hypervisor, not via the management operating system. To set apart particular hypervisor implementations, several attempts have also been made to introduce the term Type 0 (zero) (Bradley, 2013), (Vizard, 2012), though no consensus on the validity of this term has been reached (Haletky, 2012).
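One practical way to observe these layers from inside a machine: on Linux, the systemd-detect-virt utility reports which hypervisor, if any, the system is running under. The minimal sketch below wraps it in Python, assuming a Linux system with systemd installed.

    import subprocess

    # systemd-detect-virt prints the hypervisor name (e.g. 'kvm', 'vmware',
    # 'microsoft') when run inside a guest, or 'none' on bare metal.
    result = subprocess.run(['systemd-detect-virt'],
                            capture_output=True, text=True)
    tech = result.stdout.strip()

    if tech == 'none':
        print('Running on bare metal (no hypervisor detected).')
    else:
        print(f'Running as a guest under: {tech}')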

Chapter 2 - Literature Review

Chapter 1 discussed the term virtualization. It explained how virtualization technologies provide a way to detach the physical hardware (the computer) from the software (the OS and applications) by imitating hardware using software. Before this topic is discussed any further, it is essential to have a good understanding of certain terms, i.e. server, cloud and virtual environment. This chapter discusses these related terms as different sectors of virtualization are exposed, and looks at the sectors of computing in which virtualization is most conspicuous. Although there are many sectors where virtualization is exploited, our discussion focuses on client, server and storage virtualization. Expert opinions will illustrate how virtualization is exploited within each of these areas.

"Being able to virtualize everything is the number one thing that drives VMware."

Raghu Raghuram, Vice President, VMware Products and Solutions.

We already know that software (a hypervisor) loaded on a personal computer (PC) loads files which in turn define a virtual computer, known as a virtual machine (VM). The VM is not a physical computer; it is in fact a data file which can be copied and transferred to other computers just like any other file. This allows multiple configurations of a working computer to run on any physical machine (PC) by means of the appropriate virtualization technology.

2.1 Client Virtualisation:

Client virtualization, also referred to as desktop virtualization, is the concept of separating the desktop from its physical machine. In other words, the desktop of a physical computer is separated from its hardware, OS and applications, simulating the user's desktop experience. Client virtualization is the capability of virtualization residing on the client or desktop. The virtual machine lives on a data-centre server, so in effect it replicates the client-server model. The virtual machine is presented to the user anywhere through a virtual desktop infrastructure (VDI) client (Golden, 2011). VDI is the interaction with the host computer by means of another desktop computer or smartphone over a network connection, i.e. a LAN, a wireless connection or the internet. The keyboard, mouse and monitor are not being used to interact with their respective physical computer. A device such as a smartphone can use such software to connect to your home PC via the internet and operate it or view the files you wish, provided the PC has internet access and is turned on. In this type of virtualization, the host computer becomes a server capable of hosting multiple virtual machines at any one time for multiple users (Corporation, 2011). Dubie illustrates that 'successful server virtualization deployments lead many IT managers to believe desktop virtualization would provide the same benefit' (Dubie, 2009). Such centralization of computing via client virtualization helps the information technology (IT) department of any organization reduce hardware costs, though companies need to be aware of how the two technologies differ (Dubie, 2009). The impression IT managers should receive is that desktop virtualization should not be taken lightly. Many organizations focus on server virtualization, but Dubie has investigated the concept deeply and reveals that desktop virtualization is the next big movement for virtualization, one that mainstream IT departments should consider pursuing with caution and understanding (Dubie, 2009). It simplifies tasks such as software updates and virus scans that keep client computers up to date; without client virtualization, time would be wasted carrying out these tasks on every physical machine. Kennedy provides solutions to desktop deployment issues and believes that, 'despite rumours, virtualization is not only for data-centres, and will continue to grow in the future' (Kennedy, 2007).

2.2 Server Virtualization

Server virtualization, also known as the Virtual Private Server (VPS) model, allows each of many virtual machines to control and run on a single computer (Virtual private servers and security contexts, 2004). VPS solves the problem of sharing the resources of a shared server: the solution is to allocate resources to each user and allow each virtual server to run completely separately from the others, even when the virtual machines run separate OSs. A VPS fulfils a user's individual needs while maintaining the privacy of a separate physical computer configured to run the server software. Most of the virtualization action in data centres occurs on servers, changing data centres dramatically (Golden, 2011). Data centres reduce their energy consumption by introducing virtualization and server consolidation techniques, which increase the utilization of underutilized servers and thus reduce the carbon footprint (Richard Talaber, 2009). Thousands of servers have been added to data centres, which have grown significantly to handle the sheer magnitude of today's data. It has become much more costly to operate these servers, which consume more power as they become larger, denser and hotter (W. McNamara, 2008). There has been a massive increase in the energy consumed by data centres due to the increase in infrastructure and IT equipment, and this energy consumption is thought to double every five years (Gartner, 2008). Data centres are filled with high-density, power-consuming equipment; if costs continue to double every five years, data centre energy costs will increase by 1600% by 2025 (Tomory, 2010). The largest data centre power usage is in Europe and the USA, with the Asia-Pacific region not far behind (Fehrenbacher, 2008).
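The 1600% figure quoted above follows directly from the doubling claim: a cost that doubles every five years grows by a factor of 2^(20/5) = 16 over twenty years, i.e. sixteen times, or 1600% of, its starting level. A quick check, assuming a 2005-2025 window:

    # Cost doubling every 5 years, projected over a 20-year window.
    years = 20
    doubling_period = 5
    growth_factor = 2 ** (years / doubling_period)  # 2^4 = 16

    print(f'{growth_factor:.0f}x the original cost ({growth_factor * 100:.0f}%)')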

2.3 Storage Virtualization

Storage virtualization is defined as grouping physical storage from numerous network storage devices so that it appears as a single storage device (Janssen, 2013). Managing data storage is time-consuming and becoming more difficult. The storage virtualization process involves abstracting the internal function of a storage device (e.g. an external hard drive) from the host application, host servers or a general network, in order to facilitate application- and network-independent management (Janssen, 2013). Storage virtualization facilitates easy backup, archiving and recovery tasks in less time, and it also aggregates functions and hides the actual complexity of the storage area network (SAN) (Janssen, 2013). A SAN makes storage devices such as disk arrays accessible to servers as if they were locally attached devices. The initial approach to storage virtualization was to sit between the storage and the servers, causing little disruption to those systems (Yoshida, 2008). A SAN interconnects different kinds of storage devices with data servers and does not require a large network of users; it also has its own storage devices that are not accessible to other devices through the local area network. Storage virtualization was widely accepted only after nearly a decade, however, when a breakthrough was brought about by the ability to virtualize physical logical unit numbers (LUNs) without remapping them, using a virtualization technique based on storage control units (Yoshida, 2008). This approach brings together physical storage media from multiple network storage devices into a single storage device, all managed from a central console.
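The pooling idea at the heart of storage virtualization can be sketched as a toy: the class below concatenates several backing 'devices' (here just byte buffers) into one logical address space and translates logical offsets onto the right device, much as a virtualization layer maps a logical LUN onto physical disks. All names are illustrative.

    # Toy storage virtualization: several physical "disks" presented as one.
    class PooledStorage:
        def __init__(self, devices):
            self.devices = devices  # list of bytearray "disks"

        def read(self, offset, length):
            """Translate a logical offset onto the physical disks and read."""
            out = bytearray()
            for dev in self.devices:
                if offset < len(dev):
                    chunk = dev[offset:offset + length]
                    out += chunk
                    length -= len(chunk)
                    offset = 0
                else:
                    offset -= len(dev)
                if length == 0:
                    break
            return bytes(out)

    pool = PooledStorage([bytearray(b'AAAA'), bytearray(b'BBBB')])
    print(pool.read(2, 4))  # b'AABB' -- one read spans both physical disks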

2.4 Network Virtualization

A virtualized network offers multiple individual networks over a single physical infrastructure. The concept is similar to server virtualization, where a physical server can host multiple virtual machines (Cisco, 2013). Network virtualization is defined as the decoupling of the roles of traditional Internet Service Providers (ISPs) into two entities (Taylor J. T., 2005), (N. Feamster, 2007): infrastructure providers, who manage the physical infrastructure, and service providers, who create virtual networks by aggregating resources from multiple infrastructure providers and offer end-to-end services. Network virtualization takes two forms: internal and external. Internal virtualization provides network-like functionality to the software containers on a single system, while external virtualization combines many networks, or parts of networks, into a virtual unit (Network Virtualization, 2013).
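On Linux, the building blocks of network virtualization can be seen with the standard ip(8) tool: network namespaces give each virtual network its own isolated stack, and virtual Ethernet (veth) pairs wire them together over the single physical host. The sketch below drives these commands from Python; it assumes a Linux host, root privileges, and illustrative names and addresses.

    import subprocess

    def sh(*cmd):
        subprocess.run(cmd, check=True)

    # Create an isolated virtual network stack (own interfaces and routes).
    sh('ip', 'netns', 'add', 'blue')

    # Create a virtual Ethernet "cable": two connected interfaces.
    sh('ip', 'link', 'add', 'veth-host', 'type', 'veth',
       'peer', 'name', 'veth-blue')

    # Move one end of the cable into the 'blue' virtual network.
    sh('ip', 'link', 'set', 'veth-blue', 'netns', 'blue')

    # Give the namespaced end an address and bring it up.
    sh('ip', 'netns', 'exec', 'blue',
       'ip', 'addr', 'add', '10.0.0.2/24', 'dev', 'veth-blue')
    sh('ip', 'netns', 'exec', 'blue',
       'ip', 'link', 'set', 'veth-blue', 'up')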


Chapter 3 - iCloud vs NAS technology for small businesses and home users

It is the 21st century, and the world is moving up the ladder of technological advancement. Technology and gadgets change every day to best suit customer expectations and fulfil needs. Most of the population in the UK has a smartphone, and the smartphone is the innovation that enabled users to store their data in cloud storage. This study project discusses how cloud storage is used by many of us who are nonetheless oblivious to the virtualized environment that surrounds us. Storage virtualization is the pooling together of various physical storage devices so that they appear as a single storage device; it is also known as cloud storage (Janssen, 2013). The technology works a little differently on smartphones: cloud storage enables users to store data of any form or type from various locations and devices. Once the data is backed up to the cloud, it can simply be retrieved if it is lost or damaged. The theory is to have a single storage medium (the cloud) which synchronizes all registered devices and updates them periodically. If documents or photos that were not backed up are lost for any reason, they are certainly not easy to get back; recovery requires software designed specifically for retrieving lost data, and many people, unaware of such software, presume there is no hope that the lost data can be restored. This chapter compares three main types of smartphone cloud storage: Microsoft's SkyDrive, Google's Google Drive and Apple's iCloud.

Apple's perspective on cloud storage is that it is merely a feature (Miller, 2012). iCloud, introduced by Apple, can only be used by Apple devices, i.e. an iPhone, iPod, iPad or Mac. Apple describes iCloud as not just somewhere to store your content: it also allows users to access music, photos, calendars, contacts, documents and more from whatever Apple device they are using. iCloud is built into every new iOS device and every new Mac (Apple, 2013); a separate application is available for Windows PCs so that users can save files from their laptops as well. iCloud offers a great backup facility, saving data from all Apple devices so that it can be restored at any time, and enabling all music, videos, photos, documents, Safari data, apps and iBooks to be accessed from all devices registered under the same Apple ID and iCloud account. It is a feature designed to strengthen the ecosystem of Apple products: all data from all Apple devices is available to each of them through a shared storage medium called iCloud (Miller, 2012). The Apple website describes step-by-step procedures for installing iCloud on any device and for backing up or restoring files (Apple, 2013). All Apple devices are registered to an Apple ID, which also serves as the iCloud account, and every device must use the same Apple ID and iCloud account to share data across devices. This enables data to be updated on every device whenever any one device is updated (Apple, 2013): users can take a picture on their phone and, as soon as they are on Wi-Fi, all the other devices registered to the iCloud account are updated. The user thus gains the flexibility to switch between Apple devices without having to update each one.

Microsoft SkyDrive is software which saves data and facilitates backing up all important files to the cloud (What is Microsoft SkyDrive?, 2012). Files stored using SkyDrive can be accessed from anywhere, at any time. SkyDrive is not tied to the Windows operating system: it can also be installed on smartphones, Macs and iPads, and a SkyDrive website is available for users to access over the internet (What is Microsoft SkyDrive?, 2012). The concept of SkyDrive is simple: users save files to their SkyDrive, and a copy of each file is saved in the cloud. Files can then be easily accessed from any location by logging into SkyDrive with a Microsoft account from a smartphone, a PC or skydrive.com (What is Microsoft SkyDrive?, 2012). The new Nokia phones, which run the Windows 8 operating system, can share data via SkyDrive with other devices running Windows 8. Google Drive follows similar principles to SkyDrive: this cloud, too, can be accessed via the web at drive.google.com or from a PC or smartphone, and it keeps files up to date on all devices whenever a change is made on any one of them (Google, 2013).

Let us not forget that these services are provided to users for free; but how much free space does each provide? Microsoft SkyDrive offers the most free space at 7 GB, with Google Drive and iCloud not far behind, both offering 5 GB (SEGET, 2012). The concepts and principles are similar, although the architectural boundaries differ. From this, we can understand how a cloud creates an environment of its own to share files among the devices registered to that cloud. Apple's iCloud is offered only for Apple devices (platforms), a fact addressed previously. SkyDrive supports the Windows, Mac, iPhone, iPad and Windows Phone platforms, with no support for Linux or Windows XP. Google Drive, on the other hand, supports Windows, Mac, and Android phones and tablets, with iPad/iPhone support soon to be available (SEGET, 2012). These services let ordinary home users explore a tiny fraction of virtualization. Virtualization is a broad topic with various sectors that are complex to explain; yet cloud storage proves beneficial for ordinary users, who can save files from work and resume at home. A picture taken on an iPhone in Egypt can be viewed instantly on an iPad registered to the same Apple ID in the UK. This is ideal for overseas projects, or simply for catching up with family at no cost: children on a school trip can take pictures while their parents view them at home, without a phone call. As Apple CEO Steve Jobs said at the 2011 WWDC conference, "We are going to demote PCs and Macs to just being a device. We're going to move the digital hub into the cloud" (Taylor C., 2011). All the cloud storage services discussed here are based on the concept of Software as a Service (SaaS): iCloud uses cloud computing to create an infrastructure for ordinary users that increases their personal productivity across PCs, Macs, iPads and iPhones (Dayaratna, 2011).
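Although each vendor's API differs, the 'keep files up to date on all devices' behaviour shared by iCloud, SkyDrive and Google Drive rests on one simple idea: detect that a local file has changed and upload the new version. The hedged toy below shows only that change-detection step using a content hash; the upload itself is left as a stub, since the real service APIs vary and are not documented here.

    import hashlib

    def file_digest(path):
        """Fingerprint a file's contents so changes can be detected."""
        h = hashlib.sha256()
        with open(path, 'rb') as f:
            for chunk in iter(lambda: f.read(65536), b''):
                h.update(chunk)
        return h.hexdigest()

    def sync(path, last_known_digest):
        """Upload only if the file changed since the last sync (stubbed)."""
        digest = file_digest(path)
        if digest != last_known_digest:
            print(f'{path} changed; a real client would upload it here')
        return digest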

Network-attached storage (NAS) is a computing device which can be attached anywhere on your network to store files and make them available to authorized users, while remaining independent of network and application servers (iomega, 2009). NAS targets high capacity at lower cost. The BlackArmor NAS 220 (Model No: ST340005LSD10G-RK), manufactured by Seagate, can be purchased from Amazon for £320.73 (Seagate, 2013). The device is capable of supporting up to 20 PCs and Macs, performing automatic incremental and full-system backups for networked PCs while protecting the data with hardware-based encryption. A free application is also available for smartphones, including iPhone, iPad, Windows Phone 7 and Android devices. Data is password-protected, and file sharing is secure among clients and colleagues (Seagate, 2013); links to uploaded files can be sent via SMS, email and Facebook. NAS technology also allows printers and scanners to be shared: attached via USB, a single piece of equipment can print and scan documents for every computer in the network, without cables having to be moved around. The BlackArmor device is set up easily by following the guided setup instructions provided with the product. Once it is connected to the router and the software is installed on the physical machines, the device centralizes and backs up all user files from all authorized devices in one location, enabling users to access their files anywhere and at any time. With this NAS technology, all devices at home or in a small business can access stored files over the existing wireless network, eliminating the need for multiple hard drives. The BlackArmor device is convenient because it comes as a hardware and software solution that automatically and continuously backs up data, giving users the sense that their data is safe and secure (Seagate, 2013).
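Accessing a NAS device such as the BlackArmor from a client machine typically means mounting its shared folder over the LAN. The sketch below shows the standard Linux approach, using the mount(8) command with the CIFS/SMB protocol that most NAS appliances speak; the host name, share and credentials are illustrative placeholders, and root privileges are assumed.

    import subprocess

    # Mount an SMB/CIFS share exported by a NAS onto a local directory.
    # '//nas/backup', '/mnt/nas' and the credentials are placeholders.
    subprocess.run([
        'mount', '-t', 'cifs',
        '//nas/backup', '/mnt/nas',
        '-o', 'username=alice,password=secret',
    ], check=True)

    # Files on the NAS now appear as ordinary local files under /mnt/nas.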

Chapter 4 - Evaluation

There is no doubt in referring to cloud storage as storage virtualization (Janssen, 2013). Apple introduced iCloud and presented cloud storage as a feature (Miller, 2012). Virtualization is a concept based on system architectures: it is the process of using hardware resources from physical machines to create multiple virtual machines, each independent of the others. This concept of storage, however, is remarkably different. What we established in Chapter 3 was how ordinary users can use storage virtualization. In this chapter, we address where users' data saved in the cloud actually resides: on physical or virtual servers, and in which locations. The answers we seek are highly confidential. It has proved impossible to find a case study in which a company describes in which sectors, and how, virtualization is implemented in its organization; in those one can find, only general benefits such as cost savings and time efficiency are listed. No organization would publish its virtualization strategy publicly, since it contains crucial information about its hardware systems and the locations of its data centres: what system configurations it uses, how it sets them up, and so on.

We have established that a physical machine is needed somewhere in the virtualization process for it to take place. Data from users is saved in a cloud. This cloud could be represented as a hypervisor running on a physical server or on a virtual server. It is fair to assume both but, in the absence of facts, let us assume that the data is stored on physical servers. These servers must number in the hundreds to handle such vast amounts of data, and the organizations operating such storage servers are, as discussed previously, called data centres. Apple, for example, may have its own data centre to reduce the risk of losing data to an outsourcing company. Even if that is true, we still cannot determine how many data centres the company requires, as its data is needed globally. We can, however, make assumptions. If we are to believe that all the data is stored at one location, in a single data centre, then it is fair to call that a 'high-risk strategy': in the event of a fire or natural disaster, data could be lost or damaged along with the servers if it was not backed up elsewhere, and there is no doubt that user data is very important both to the users and to Apple. Moreover, if all the data were stored at one location, e.g. the UK, people in Asia and Australia would have to wait for their data to arrive from the UK data centre, which would cause havoc for Apple. It is difficult to predict the architecture, although a reasonable assumption can be made. A data centre containing physical storage servers is required; the numerous servers in a data centre are pooled together to act as one large storage device with the combined capacity of all the storage devices. The data also needs to be spread around the globe, so depending on a data centre's ability and speed in transferring data, it is fair to assume there will be more than one data centre. Based on storage virtualization theory, we can conclude that all the data centres can be pooled together to act as one, which can itself be used as a cloud. If this is the case, then Apple can save users' data to its data-centre cloud, which allocates resources to servers around the globe. We can also speculate that, to save costs, service providers such as Apple may outsource the service to data centres big enough to handle and distribute data around the world. It is even possible that, if such data centres exist, they provide their services to both Apple and Microsoft, given their capacity for handling data. These assumptions may well not be correct, but they suggest some techniques for exploiting virtualization to its limits.

The previous chapter acknowledged the core benefits of the different cloud storage services. We already know that iCloud's downfall is that it is available only for Apple devices; Google Drive and SkyDrive promise a more open nature by spreading widely to the Macintosh and Android operating systems. Although SkyDrive and Google Drive do not match iCloud's popularity in the market, my personal preference is for SkyDrive, whose facilities are far more convenient than those of Google Drive and iCloud.

The comparison between NAS and iCloud makes clear that free services such as iCloud and SkyDrive offer some, but not all, of the features of the Seagate BlackArmor device. The device costs money, but it is available to you in your own personal space. With SkyDrive or any other such service, we have no knowledge of where the data is stored or of the security levels used to encrypt it and keep it safe and secure. Neither iCloud nor SkyDrive allows printers or scanners to be operated by multiple machines. With NAS technology, users running different types of machine (PC, Apple Mac, etc.) and different operating systems (Windows, Unix, Mac OS, etc.) can share files (Silicon Press). File sharing is also available with the free services, but the flexibility across different machines and operating systems is lacking. NAS appliances are 'plug and play', so very little installation and configuration is required beyond connecting them to the LAN. NAS technologies do, however, have a reputation for clogging up the shared LAN and negatively affecting other users (Silicon Press), whereas SkyDrive and the other cloud services have a more powerful approach to handling data. NAS devices share the network with other computing devices and therefore consume more bandwidth from the TCP/IP network. The performance of a remotely accessed NAS depends on the amount of bandwidth available over the Wide Area Network (WAN), bandwidth which is still shared with other devices; hence limited-bandwidth scenarios require WAN optimization before a NAS solution can be deployed. These issues do not affect SkyDrive or iCloud, but those services provide only the backup storage facility, without hardware-based encryption of the data. NAS devices prove a much more valuable investment despite being somewhat costly; SkyDrive and iCloud also charge if users wish to increase their storage allowance, while the BlackArmor NAS comes with 4 TB (terabytes) of storage capacity, more than sufficient for backup purposes.

Chapter 5 - Review

Virtualization is indeed a very broad subject. The initial intention was to carry out a virtualization project at the University, which did not proceed because my research led to many questions to which I could not find answers. Before taking on the subject, I had no idea the topic would be so intense, or that the intended project would be so difficult to perform. My financial situation was also a factor, as the timing of my work shifts did not allow me to gather primary and secondary resources. I did, however, know the basic concept of virtualization, which led me to write my dissertation on this topic. Soon I realized that a study project was needed to address ordinary users who are unaware of virtualization's benefits. So chapter 1 gave an overview of virtualization, describing its broad scope and principles; chapter 2 discussed client, server and storage virtualization, as remarked upon by various experts on the topic. The deeper the research went, the more difficult it became to think of a case study for chapter 3.

There was confusion between storage virtualization and iCloud services. Virtualization had always seemed to me a theory of architecture; iCloud is a piece of software which provides the service of storing data. Yet its principles are those of a hypervisor too! But can we call iCloud a hypervisor, and if so, is it Type 1, Type 2 or Type 0? Under the pressure of time constraints (work), such questions could not be addressed effectively. Nonetheless, the increasing use of smartphones and cloud storage is the reason these services were examined and their architectural boundaries discussed. The analysis raised further questions to which answers seemed impossible to obtain, hence a personal assumption about a virtualized environment at Apple was evaluated in Chapter 4. This article has not described everything about virtualization; it only gives a sense of what the technology is about. Many colleagues asked me about my dissertation topic, and I replied 'virtualization'. Many of them had no clue; frankly, I did not have a full grasp of it either. There are still certain questions in my mind about this topic, and hopefully the future may bring some answers.

There were many moments when the whole concept seemed pointless, but continuous research and a commitment to gathering effective resources were a great driving force towards the completion of this article. Several discussions with the IT staff in the Richmond building provided inspiration to understand complex matters and to set out my own assumptions in problem solving.

Conclusion

Simply put, virtualization is a technique for hiding the physical characteristics of computing resources from the systems, applications or end users that interact with those resources. As a practical matter, when you get a Google map on your phone, or when you let your PC automatically shop for the lowest price, you are using virtualization. The concept of virtualization is very broad and can be applied to devices, servers, operating systems, applications and even networks. Virtualization poses many challenges to the physical infrastructure of the data centre, such as dynamic high density, under-loading of power and cooling systems, and the need for real-time rack-level management. The rising cost of energy is pushing more organizations towards greener data centres, the intention being to reduce the amount of space required and thereby greatly reduce the energy consumed in powering and cooling a smaller number of physical servers. The benefits outweigh the disadvantages: the only good physical server operating at 10% capacity is one that has been transformed into a virtual machine host with a utilization rate of 70-80%. The cloud services proposed here for ordinary users help readers form their own perspective on virtualization and exploit it within its architectural boundaries. This is just the beginning of the road to virtualization; there are many more theories to be researched and analysed, the vastness of virtualization is constantly expanding, and researchers are looking to develop its principles further. The future has more to offer, and may yet answer certain questions about how organizations implement virtualization within their systems and in which sectors (storage, network, etc.).


