Basic Roots Of Cloud Computing

02 Nov 2017

Introduction

Nowadays, three technological storms are converging: smart mobile devices, ubiquitous high-speed connectivity, and cloud computing. Computer scientists predict that cloud systems may become the next-generation operating system. Practically, cloud computing has a strong foundation in virtualization governed by hypervisors, which provide slices of physical resources. Developments in hypervisors such as Xen, VirtualBox, KVM, and VMware have triggered the development of commercial and open-source cloud environments. Clouds can definitely play a role in the "hosting" part of computations, and work best in federation with Grids. When we plug an electric lamp into a wall socket, we do not think about how electric power is generated or how it reaches the socket: electricity is virtualized, readily available at the wall socket in a way that hides the power stations and the huge distribution grid behind it.

As another example, one way to understand cloud computing is to consider your experience with email. Your mail provider, whether Yahoo, Hotmail, Gmail, or Indiatimes, takes care of housing all of the hardware and software necessary to support your personal email account. When you want to access your mail, you open a web browser, go to the mail site, and log in. The essential requirement is Internet access: your mail is not stored on your physical computer; you reach it through an Internet connection, from anywhere in the world. Whether you are on a picnic or at work, you can check your email as long as you have connectivity. This differs from software installed locally on your computer, such as a word-processing program: a document created with such software stays on the device where it was created unless you move it yourself. The email client is similar to how cloud computing works, except that instead of accessing just email, you can choose what information and services you access in a cloud environment [81].

Technologies such as cluster, grid, and now cloud computing all aim to provide access to large amounts of computing power in a fully or para-virtualized manner. Consumers who access cloud services pay based on usage, following a pay-as-you-go model similar to that of traditional public utilities such as water, electricity, gas, telephony, and TV channels. Cloud computing is an umbrella term; in a nutshell, Amazon, Google, and Microsoft are commercial providers of on-demand computing services.
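The pay-as-you-go idea can be sketched in a few lines of Python; the resource names and rates below are purely illustrative assumptions, not any provider's actual pricing.

```python
# Minimal sketch of pay-as-you-go metering: the provider records usage per
# resource and bills only for what was consumed. Rates are illustrative.
RATES = {"vm_hours": 0.10, "storage_gb_month": 0.05, "gb_transferred": 0.02}

def bill(usage):
    """Compute a utility-style bill from metered usage (pay-per-use)."""
    return round(sum(RATES[item] * amount for item, amount in usage.items()), 2)

# A consumer who ran one VM for 100 hours, stored 50 GB, and moved 10 GB:
total = bill({"vm_hours": 100, "storage_gb_month": 50, "gb_transferred": 10})
print(total)  # -> 12.7
```

As with an electricity meter, the consumer pays nothing up front; the bill follows usage.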

Many practitioners and researchers have defined cloud computing and its characteristics. Rajkumar Buyya [77] defines a cloud as "a parallel and distributed computing system consisting of a collection of inter-connected and virtualized computers that are dynamically provisioned and presented as one or more unified computing resources based on service-level agreements established through negotiation between the service provider and consumers".

The University of California, Berkeley [47] summarizes the characteristics of cloud computing as "(1) the illusion of infinite computing resources; (2) the elimination of an up-front commitment by cloud users; and (3) the ability to pay for use as needed".

Vaquero et al. [82] define clouds as "a large pool of easily usable and accessible virtualized resources (such as hardware, development platforms and services). These resources can be dynamically reconfigured to adjust to a variable load (scale), allowing also for an optimum resource utilization. This pool of resources is typically exploited by a pay-per-use model in which guarantees are offered by the Infrastructure Provider by means of customized Service Level Agreements."

The National Institute of Standards and Technology (NIST) [78] characterizes cloud computing as "a pay-per-use model for enabling available, convenient, on-demand network access to a shared pool of configurable computing resources (e.g. networks, servers, storage, applications, services) that can be rapidly provisioned and released with minimal management effort or service provider interaction."

In a more basic definition, Armbrust et al. [47] define the cloud as the "data center hardware and software that provide services." Similarly, Sotomayor et al. [24] point out that "cloud" is most often used to refer to the IT infrastructure deployed on an Infrastructure-as-a-Service provider's data center.

The common characteristics of cloud computing can be summarized as: (i) pay-per-use (pay-as-you-go); (ii) elastic capacity and the illusion of infinite resources; (iii) a self-service interface; and (iv) virtualized resources. In addition, cloud computing providers usually offer a wide range of software services, including APIs and development tools; for example, VirtualBox with its Java API lets developers create cloud environments such as public and private clouds. In recent years several technologies have matured and contributed significantly to making cloud computing practicable. Section 1.1 covers the basic roots of cloud computing; Section 1.2 considers the services cloud computing provides to consumers; Section 1.3 covers types of clouds; Section 1.4 covers the virtual infrastructure managers responsible for creating cloud environments and virtual machines; and, finally, we discuss the features of cloud computing in this Chapter 1.

1.1 Basic Roots of Cloud Computing

Figure 1.1 Basic Roots of Cloud Computing

We can trace the basic roots of cloud computing by examining the development of several technologies, especially in hardware (virtualization, multi-core chips), Internet technologies (Web services, service-oriented architecture, Web 2.0), distributed computing (clusters, grids), and systems management (autonomic computing, data center automation), as shown in Figure 1.1. We discuss these technologies one by one.

1.1.1 SOA, Web Services, Web 2.0, and Mashups

Web services (WS) open standards have contributed significantly to advances in the domain of software integration. WS standards are built on top of existing ubiquitous technologies such as HTTP and XML, thus providing a common mechanism for delivering services and making them ideal for implementing a service-oriented architecture (SOA). In SOA, software resources are packaged as "services": well-defined, self-contained modules that provide standard business functionality [43]. The concept of gluing services together initially focused on the enterprise Web, but it gained ground in the consumer realm as well, especially with the advent of Web 2.0. In the consumer Web, information and services may be programmatically aggregated, acting as building blocks of complex compositions called service mashups. Providers such as Google make their service APIs publicly accessible using standard protocols such as SOAP and REST [44].
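To make the mashup idea concrete, the sketch below programmatically aggregates two simulated services. In a real Web 2.0 mashup the stub functions would be SOAP or REST calls to public service APIs; all names and data here are hypothetical.

```python
# Sketch of a service mashup: two (simulated) REST-style services are
# composed into one result. The stubs stand in for real HTTP endpoints.
def geocode_service(city):          # stand-in for a geocoding web service
    return {"Pune": (18.52, 73.86)}.get(city)

def weather_service(lat, lon):      # stand-in for a weather web service
    return {"lat": lat, "lon": lon, "temp_c": 31}

def mashup(city):
    """Compose the services: the location lookup feeds the weather query."""
    lat, lon = geocode_service(city)
    return {"city": city, **weather_service(lat, lon)}

print(mashup("Pune"))
```

The composition logic lives entirely on the client side, which is precisely what made consumer mashups so easy to build once service APIs became public.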

1.1.2 Grid & Utility Computing

Grid computing enables the aggregation of distributed resources and transparent access to them. Most production grids, such as TeraGrid and EGEE, share compute and storage resources distributed across different administrative domains, with the main intention of speeding up a broad range of scientific applications such as climate modeling, drug design, and protein analysis. The Globus Toolkit is a middleware that implements several standard Grid services and over the years has aided the deployment of several service-oriented Grid infrastructures.

In utility computing environments, users assign a "utility" value to their jobs, where utility is a fixed or time-varying value that captures various QoS constraints (e.g., deadline and satisfaction). Service providers then attempt to maximize their own utility, which may directly correlate with their profit.
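A minimal sketch of such a utility value, assuming a job that is worth a fixed amount if finished by its deadline and linearly less afterwards (all parameters are illustrative):

```python
# Time-varying utility: full value up to the deadline, then linear decay.
def job_utility(finish_time, deadline, max_value, decay_per_unit):
    if finish_time <= deadline:
        return max_value
    # Value decays for each time unit past the deadline, never below zero.
    return max(0.0, max_value - decay_per_unit * (finish_time - deadline))

print(job_utility(8, 10, 100.0, 5.0))   # finished before the deadline
print(job_utility(14, 10, 100.0, 5.0))  # four time units late
```

A provider scheduling against such functions would prefer jobs whose remaining utility (and hence payment) is highest.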

1.1.3 Hardware Virtualization

Cloud computing service providers typically run large-scale data centers composed of thousands of computers. Such data centers are built to serve many consumers and host many isolated applications; for this purpose, hardware virtualization is used. Hardware virtualization, an idea with roots in early operating systems research, allows running multiple operating systems, called virtual machines (VMs), with their software stacks on a single physical platform. A virtual machine monitor (VMM), also called a hypervisor, sits between the guest operating systems and the host, as shown in Figure 1.2.

Figure 1.2 Hardware Virtualization

Virtual machine 1 and virtual machine 2 each run a guest operating system on a single physical machine whose hardware (processor, I/O devices, and memory) is virtualized by the hypervisor. Workload isolation is achieved by the hypervisor, and capabilities such as workload migration, VM pause, resume, clone, and migration can be driven through the provider's API; while applying them, observing VM state is important. A number of VMM platforms are available to manage VMs and physical machines; the most notable are VMware, Xen, KVM, and VirtualBox. In our work we use VirtualBox to create cloud environments and handle all VMs through the VirtualBox API. VirtualBox is a powerful x86 and AMD64/Intel64 virtualization product for enterprise as well as home use. Not only is VirtualBox an extremely feature-rich, high-performance product for enterprise customers, it is also the only professional solution that is freely available as Open Source Software under the terms of the GNU General Public License (GPL) version 2.
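Besides its Java API, VirtualBox exposes these life-cycle operations through its VBoxManage command-line front end. The sketch below only builds the command lists (so it runs without VirtualBox installed); the VM name is hypothetical, and the lists could be handed to subprocess.run() on a host that has VirtualBox.

```python
# Sketch of VM life-cycle control via VirtualBox's VBoxManage CLI.
# The helper returns the argument list for each operation.
def vbox_cmd(action, vm):
    commands = {
        "start":  ["VBoxManage", "startvm", vm, "--type", "headless"],
        "pause":  ["VBoxManage", "controlvm", vm, "pause"],
        "resume": ["VBoxManage", "controlvm", vm, "resume"],
        "stop":   ["VBoxManage", "controlvm", vm, "poweroff"],
        "clone":  ["VBoxManage", "clonevm", vm, "--register"],
    }
    return commands[action]

print(vbox_cmd("start", "cloud-node-1"))
```

Each of these operations maps onto the pause/resume/clone/migrate capabilities listed above; a VI manager essentially automates such calls across many hosts.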

1.1.4 Autonomic Computing

Autonomic, or self-managing, systems rely on monitoring probes and sensors, on an adaptation engine that computes optimizations based on monitoring data, and on effectors that carry out changes to the system. IBM's Autonomic Computing Initiative helped define the four characteristics of autonomic systems: (i) self-configuration, (ii) self-optimization, (iii) self-healing, and (iv) self-protection. IBM also introduced the autonomic manager's control loop, known as MAPE-K (Monitor, Analyze, Plan, Execute, Knowledge) [45], [46].
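A toy, single-iteration sketch of a MAPE-K loop for VM auto-scaling may clarify the idea; the thresholds and actions are illustrative assumptions, not IBM's reference design.

```python
# One pass through a MAPE-K autonomic loop: a monitoring probe feeds an
# analysis step, which plans an action that an effector executes; the
# knowledge base holds the policy (here, CPU thresholds).
knowledge = {"cpu_high": 80, "cpu_low": 20}

def monitor(probe_reading):
    return {"cpu": probe_reading}            # Monitor: collect symptoms

def analyze(symptoms):                       # Analyze: diagnose
    if symptoms["cpu"] > knowledge["cpu_high"]:
        return "overload"
    if symptoms["cpu"] < knowledge["cpu_low"]:
        return "underload"
    return "ok"

def plan(diagnosis):                         # Plan: choose an action
    return {"overload": "add_vm", "underload": "remove_vm", "ok": "none"}[diagnosis]

def execute(action, vm_count):               # Execute: effector applies it
    return vm_count + {"add_vm": 1, "remove_vm": -1, "none": 0}[action]

vms = 3
vms = execute(plan(analyze(monitor(92))), vms)  # CPU at 92% -> scale out
print(vms)  # -> 4
```

In a real autonomic manager the loop runs continuously and the knowledge base is updated from experience; here it is a fixed dictionary for brevity.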

1.2 Services of Cloud Computing

Figure 1.3 Services of Cloud providers

Cloud computing exists to provide services to consumers on demand. These services are divided into three main classes, according to the abstraction level of the capability provided by the provider's service model:

Infrastructure as a Service (IaaS)

Platform as a Service (PaaS)

Software as a Service (SaaS)

1.2.1 Infrastructure as a Service

Offering virtualized resources (computation, storage, and communication) on demand is known as Infrastructure as a Service (IaaS); infrastructure services are considered the base layer of cloud computing systems. Amazon Web Services mainly offers IaaS, which in the case of its EC2 service means offering VMs with a software stack that can be customized much as an ordinary physical server would be. Users are given privileges to perform numerous activities on the server, such as starting and stopping it, customizing it by installing software packages, attaching virtual disks to it, and configuring access permissions and firewall rules.

1.2.2 Platform as a Service

In addition to infrastructure-oriented clouds that provide raw computing and storage services, another approach is to offer a higher level of abstraction to make a cloud easily programmable, known as Platform as a Service (PaaS). A cloud platform offers an environment on which developers create and install applications and do not essentially need to know how many processors or how much memory that applications will be using. In addition, multiple programming models and specialized services (e.g., data access, authentication, and payments) are offered as building blocks to new applications.

Google App Engine and Microsoft Windows Azure are examples of Platform as a Service. App Engine offers a scalable environment for developing and hosting Web applications, which must be written in specific programming languages such as Python or Java and use the service's own proprietary structured object data store. Building blocks include an in-memory object cache (memcache), a mail service, an instant messaging service (XMPP), an image manipulation service, and integration with the Google Accounts authentication service.

1.2.3 Software as a Service

Applications reside at the top of the cloud stack. Services provided by this layer can be accessed by end users through Web portals. Therefore, consumers are increasingly shifting from locally installed computer programs to online software services that offer the same functionality. Traditional desktop applications such as word processors and spreadsheets can now be accessed as services on the Web. This model of delivering applications, known as Software as a Service (SaaS), eases the burden of software maintenance for customers and simplifies development and testing for providers. Salesforce.com, which relies on the SaaS model, offers business productivity applications (CRM) that reside completely on its servers, allowing consumers to customize and access applications on demand.

1.3 Types of Clouds

Figure 1.4 Types of Cloud

Different types of clouds can be subscribed to depending on the consumer's needs, whether those of a home user, a small business owner, an organization, or a university. On the basis of subscription and consumer need, clouds can be classified into:

Public Cloud

Private Cloud

Community Cloud

Hybrid or Mixed cloud

as shown in above Figure 1.4.

Armbrust et al. suggest definitions for a public cloud as a "cloud made available in a pay-as-you-go manner to the general public" and a private cloud as the "internal data center of a business or other organization, not made available to the general public" [47].

In most cases, establishing a private cloud means restructuring an existing infrastructure by adding virtualization and cloud-like interfaces. This allows users to interact with the local data center while experiencing the same advantages of public clouds, most notably self-service interface, privileged access to virtual servers, and per-usage metering and billing.

A community cloud is shared by several organizations and supports a specific community that has shared concerns (e.g., mission, security requirements, policy, and compliance considerations).

A hybrid cloud takes shape when a private cloud is supplemented with computing capacity from public clouds. The approach of temporarily renting capacity to handle spikes in load is known as cloud-bursting [24].
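A minimal sketch of a cloud-bursting placement decision, assuming requests sized in arbitrary capacity units and a fixed private-cloud capacity (both illustrative):

```python
# Hybrid-cloud placement: requests run on the private cloud while it has
# spare capacity and "burst" to a public provider when it does not.
def place_requests(requests, private_capacity):
    placement = []
    used = 0
    for req in requests:
        if used + req <= private_capacity:
            used += req
            placement.append("private")
        else:
            placement.append("public")   # burst the load spike outward
    return placement

print(place_requests([4, 3, 5, 2], private_capacity=8))
```

Real cloud-bursting policies also weigh cost, data locality, and security, but the core decision is this capacity check.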

1.4 Virtual Infrastructure Managers

In this section we consider the most popular virtual infrastructure (VI) managers developed in recent years. A VI manager is used to manage a virtual machine infrastructure and to provide the IaaS, PaaS, and SaaS services of a cloud. We discuss them one by one in the following subsections.

1.4.1 VirtualBox

VirtualBox is a cross-platform virtualization application provided by Oracle Corporation; it is open source and freely available. For one thing, it installs on your existing Intel or AMD-based computers, whether they are running Windows, Mac, Linux, or Solaris operating systems. Secondly, it extends the capabilities of your existing computer so that it can run multiple operating systems (inside multiple virtual machines) at the same time. So, for example, you can run Windows and Linux on your Mac, run Windows Server 2008 on your Linux server, run Linux on your Windows PC, and so on, all alongside your existing applications. VirtualBox is deceptively simple yet also very powerful. It can run everywhere from small embedded systems or desktop-class machines all the way up to datacenter deployments and even cloud environments. The following are key features of VirtualBox [48].

Running multiple operating systems simultaneously:

VirtualBox allows you to run more than one operating system at a time. This way, you can run software written for one operating system on another (for example, Windows software on Linux or a Mac) without having to reboot to use it. Since you can configure what kinds of "virtual" hardware should be presented to each such operating system, you can install an old operating system such as DOS or OS/2 even if your real computer's hardware is no longer supported by that operating system.

Easier software installations:

Software vendors can use virtual machines to ship entire software configurations. For example, installing a complete mail server solution on a real machine can be a tedious task. With VirtualBox, such a complex setup (then often called an "appliance") can be packed into a virtual machine. Installing and running a mail server becomes as easy as importing such an appliance into VirtualBox.

Testing and disaster recovery:

Once installed, a virtual machine and its virtual hard disks can be considered a "container" that can be arbitrarily frozen, woken up, copied, backed up, and transported between hosts. On top of that, with the use of another VirtualBox feature called "snapshots", one can save a particular state of a virtual machine and revert to that state, if necessary. This way, one can freely experiment with a computing environment. If something goes wrong (e.g. after installing misbehaving software or infecting the guest with a virus), one can easily switch back to a previous snapshot and avoid the need of frequent backups and restores. Any number of snapshots can be created, allowing you to travel back and forward in virtual machine time. You can delete snapshots while a VM is running to reclaim disk space.

Infrastructure consolidation:

Virtualization can significantly reduce hardware and electricity costs. Most of the time, computers today only use a fraction of their potential power and run with low average system loads. A lot of hardware resources as well as electricity is thereby wasted. So, instead of running many such physical computers that are only partially used, one can pack many virtual machines onto a few powerful hosts and balance the loads between them.
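This consolidation idea can be sketched as a simple first-fit packing of VM loads onto hosts (capacities in illustrative CPU units; real VI managers use far more sophisticated placement policies):

```python
# Infrastructure consolidation as first-fit packing: lightly loaded
# workloads are packed onto as few hosts as possible, each host having a
# fixed capacity.
def consolidate(vm_loads, host_capacity):
    hosts = []                       # each entry is the load packed on a host
    for load in vm_loads:
        for i, used in enumerate(hosts):
            if used + load <= host_capacity:
                hosts[i] += load     # reuse the first host that still fits
                break
        else:
            hosts.append(load)       # no host fits: power on another one
    return hosts

# Six lightly loaded VMs fit on two hosts instead of six physical machines:
print(consolidate([30, 20, 40, 10, 25, 15], host_capacity=100))  # -> [100, 40]
```

Fewer powered-on hosts directly translates into the hardware and electricity savings described above.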

In our work, cloud environments are created using VirtualBox, whose API is also available for handling virtual machines through Java programming. In summary, VirtualBox provides a full-virtualization environment with features such as Network Address Translation (NAT), Dynamic Host Configuration Protocol (DHCP), and software-based Network Interface Cards (NICs). It also provides Java-based web services for accessing facilities such as start, stop, pause, resume, migrate, and clone VM operations through the VirtualBox API.

1.4.2 Eucalyptus

The Eucalyptus [49] framework was one of the first open-source projects to focus on building IaaS clouds. It has been developed with the intent of providing an open-source implementation nearly identical in functionality to Amazon Web Services APIs. Therefore, users can interact with a Eucalyptus cloud using the same tools they use to access Amazon EC2. It also distinguishes itself from other tools because it provides a storage cloud API—emulating the Amazon S3 API—for storing general user data and VM images.

In summary, Eucalyptus provides the following features: Linux-based controller with administration Web portal; EC2-compatible (SOAP, Query) and S3-compatible (SOAP, REST) CLI and Web portal interfaces; Xen, KVM, and VMware backends; Amazon EBS-compatible virtual storage devices; interface to the Amazon EC2 public cloud; virtual networks.

1.4.3 Nimbus

The Nimbus toolkit [50] is built on top of the Globus framework. Nimbus provides most features in common with other open-source VI managers, such as an EC2-compatible front-end API, support for Xen, and a backend interface to Amazon EC2. It provides a Globus Web Services Resource Framework (WSRF) interface. It also provides a backend service, named Pilot, which spawns VMs on clusters managed by a local resource manager (LRM) such as PBS or SGE. Nimbus' core was engineered around the Spring framework to be easily extensible, allowing several internal components to be replaced and easing integration with other systems.

In summary, Nimbus provides the following features: Linux-based controller; EC2-compatible (SOAP) and WSRF interfaces; Xen and KVM backend and a Pilot program to spawn VMs through an LRM; interface to the Amazon EC2 public cloud; virtual networks, one-click virtual clusters.

1.4.4 OpenPEX

OpenPEX (Open Provisioning and EXecution Environment) was constructed around the notion of using advance reservations as the primary method for allocating VM instances. It distinguishes itself from other VI managers by its lease negotiation mechanism, which incorporates a bilateral negotiation protocol that allows users and providers to come to an agreement by exchanging offers and counter-offers when their original requests cannot be satisfied.

In summary, OpenPEX provides the following features: multi-platform (Java) controller; Web portal and Web services (REST) interfaces, Citrix XenServer backend, advance reservation of capacity with negotiation.

1.4.5 OpenNebula

OpenNebula is one of the most feature-rich open-source VI managers. It was initially conceived to manage local virtual infrastructure, but has also included remote interfaces that make it viable to build public clouds. Altogether, four programming APIs are available: XML-RPC and libvirt for local interaction; a subset of EC2 (Query) APIs and the OpenNebula Cloud API (OCA) for public access [24], [51].

Its architecture is modular, encompassing several specialized pluggable components. The Core module orchestrates physical servers and their hypervisors, storage nodes, and network fabric. Management operations are performed through pluggable Drivers, which interact with APIs of hypervisors, storage and network technologies, and public clouds. The Scheduler module, which is in charge of assigning pending VM requests to physical hosts, offers dynamic resource allocation features. Administrators can choose between different scheduling objectives such as packing VMs in fewer hosts or keeping the load balanced. Via integration with the Haizea lease scheduler, OpenNebula also supports advance reservation of capacity and queuing of best-effort leases [24].

In summary, OpenNebula provides the following features: Linux-based controller; CLI, XML-RPC, EC2-compatible Query and OCA interfaces; Xen, KVM, and VMware backend; interface to public clouds (Amazon EC2, ElasticHosts); virtual networks; dynamic resource allocation; advance reservation of capacity.

1.4.6 oVirt

oVirt [52] is an open-source VI manager, sponsored by Red Hat’s Emergent Technology group. It provides most of the basic features of other VI managers, including support for managing physical server pools, storage pools, user accounts, and VMs. All features are accessible through a Web interface [52].

The oVirt admin node, which is itself a VM, provides a Web server, secure authentication services based on FreeIPA, and provisioning services to manage VM images and their transfer to the managed nodes. Each managed node runs libvirt, which interfaces with the hypervisor.

In summary, oVirt provides the following features: Fedora Linux-based controller packaged as a virtual appliance; Web portal interface; KVM backend.

1.4.7 Platform ISF

Infrastructure Sharing Facility (ISF) is the VI manager offering from Platform Computing [53]. The company, mainly through its LSF family of products, has been serving the HPC market for several years. ISF's architecture is divided into three layers. The topmost Service Delivery layer includes the user interfaces (i.e., self-service portal and APIs); the Allocation Engine provides reservation and allocation policies; and the bottom layer, Resource Integrations, provides adapters to interact with hypervisors, provisioning tools, and other systems (e.g., external public clouds). The Allocation Engine also provides policies to address several objectives, such as minimizing energy consumption, reducing the impact of failures, and maximizing application performance [54].

ISF is built upon Platform’s VM Orchestrator, which, as a standalone product, aims at speeding up delivery of VMs to end users. It also provides high availability by restarting VMs when hosts fail and duplicating the VM that hosts the VMO controller [55].

In summary, ISF provides the following features: Linux-based controller packaged as a virtual appliance, Web portal interface; dynamic resource allocation; advance reservation of capacity; high availability.

1.4.8 VMWare vSphere and vCloud

vSphere is VMware's suite of tools aimed at transforming IT infrastructures into private clouds [56], [57]. It distinguishes itself from other VI managers as one of the most feature-rich, due to the company's several offerings at all levels of the architecture.

In the vSphere architecture, servers run on the ESXi platform. A separate server runs vCenter Server, which centralizes control over the entire virtual infrastructure. Through the vSphere Client software, administrators connect to vCenter Server to perform various tasks. The Distributed Resource Scheduler (DRS) makes allocation decisions based on predefined rules and policies. It continuously monitors the amount of resources available to VMs and, if necessary, makes allocation changes to meet VM requirements. In the storage virtualization realm, vStorage VMFS is a cluster file system that aggregates several disks into a single volume.

VMFS is especially optimized to store VM images and virtual disks. It supports storage equipment that uses Fibre Channel or iSCSI SANs.

In its basic setup, vSphere is essentially a private administration suite. Self-service VM provisioning for end users is provided via the vCloud API, which interfaces with vCenter Server. In this configuration, vSphere can be used by service providers to build public clouds. In terms of interfacing with public clouds, vSphere interfaces with the vCloud API, thus enabling cloud-bursting into external clouds.

In summary, vSphere provides the following features: Windows-based controller (vCenter Server); CLI, GUI, Web portal, and Web services interfaces; VMware ESX and ESXi backends; VMware vStorage VMFS storage virtualization; interface to external clouds (VMware vCloud partners); virtual networks (VMware Distributed Switch); dynamic resource allocation (VMware DRS); high availability; data protection (VMware Consolidated Backup).

1.4.9 Apache VCL

The Virtual Computing Lab (VCL) [58], [59] project was initiated in 2004 by researchers at North Carolina State University as a way to provide customized environments to computer lab users. The software components that support NCSU's initiative have been released as open source and incorporated by the Apache Foundation.

Since its inception, the main objective of VCL has been to provide desktop (virtual lab) and HPC computing environments anytime, in a flexible, cost-effective way and with minimal intervention by IT staff. In this sense, VCL was one of the first projects to create a tool with features such as: a self-service Web portal, to reduce administrative burden; advance reservation of capacity, to provide resources during classes; and deployment of customized machine images on multiple computers, to provide clusters on demand.

In summary, Apache VCL provides the following features: (i) multi-platform controller, based on Apache/PHP (ii) Web portal and XML-RPC interfaces (iii) support for VMware hypervisors (ESX, ESXi, and Server) (iv) Virtual networks (v) virtual clusters; and (vi) advance reservation of capacity.

1.4.10 AppLogic

AppLogic [60] is a commercial VI manager, the flagship product of 3tera Inc. from California, USA. The company has labeled this product as a Grid Operating System.

AppLogic provides a fabric to manage clusters of virtualized servers, focusing on managing multi-tier Web applications. It views an entire application as a collection of components that must be managed as a single entity.

Several components such as firewalls, load balancers, Web servers, application servers, and database servers can be set up and linked together. Whenever the application is started, the system manufactures and assembles the virtual infrastructure required to run it. Once the application is stopped, AppLogic tears down the infrastructure built for it [61].

AppLogic offers dynamic appliances to add functionality such as Disaster Recovery and Power optimization to applications [60]. The key differential of this approach is that additional functionalities are implemented as another pluggable appliance instead of being added as a core functionality of the VI manager.

In summary, 3tera AppLogic provides the following features: Linux-based controller; CLI and GUI interfaces; Xen backend; Global Volume Store (GVS) storage virtualization; virtual networks; virtual clusters; dynamic resource allocation; high availability; and data protection.

1.4.11 Citrix Essentials

The Citrix Essentials suite is one of the most feature-complete VI management solutions available, focusing on management and automation of data centers. It is essentially a hypervisor-agnostic solution, currently supporting Citrix XenServer and Microsoft Hyper-V [62].

By providing several access interfaces, it facilitates both human and programmatic interaction with the controller. Automation of tasks is also aided by a workflow orchestration mechanism.

In summary, Citrix Essentials provides the following features: Windows-based controller; GUI, CLI, Web portal, and XML-RPC interfaces; support for XenServer and Hyper-V hypervisors; Citrix Storage Link storage virtualization; virtual networks; dynamic resource allocation; three-level high availability (i.e., recovery by VM restart, recovery by activating a paused duplicate VM, and running a duplicate VM continuously) [63]; data protection with Citrix Consolidated Backup.

1.4.12 Enomaly ECP

The Enomaly Elastic Computing Platform, in its most complete edition, offers most features a service provider needs to build an IaaS cloud.

Most notably, ECP Service Provider Edition offers a Web-based customer dashboard that allows users to fully control the life cycle of VMs. Usage accounting is performed in real time and can be viewed by users. Similar to the functionality of virtual appliance marketplaces, ECP allows providers and users to package and exchange applications.

In summary, Enomaly ECP provides the following features: Linux-based controller; Web portal and Web services (REST) interfaces; Xen back-end; interface to the Amazon EC2 public cloud; virtual networks; virtual clusters (ElasticValet).

1.5 Features of Cloud Computing

We now discuss a list of basic and advanced features usually available in virtual infrastructure managers (VIMs), together with key features of IaaS and PaaS service providers.

Virtualization Support:

The multi-tenancy aspect of clouds requires multiple consumers with disparate requirements to be served by a single hardware infrastructure. Virtualized resources (CPUs, memory, etc.) can be sized and resized with certain elasticity. These features make hardware virtualization the ideal technology to create a virtual infrastructure that partitions a data center among multiple tenants.

Multiple Backend Hypervisors:

Different virtualization models and tools present different benefits, drawbacks, and limitations. So, some VI managers provide a homogeneous management layer regardless of the virtualization technology used. This attribute is more visible in open-source VI managers, which usually provide pluggable drivers to interact with multiple hypervisors. In this direction, the aim of libvirt [75] is to provide a uniform API that VI managers can use to manage domains (a VM or container running an instance of an operating system) in virtualized nodes using standard operations that abstract hypervisor specific calls.
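The uniform-API idea can be sketched as a small pluggable-driver hierarchy. The driver classes below are illustrative, not libvirt's actual interface; the `xl` and `virsh` command strings they return are real hypervisor tools, but the mapping shown here is deliberately simplified.

```python
# Sketch of a VI manager's pluggable hypervisor drivers: callers use one
# abstract interface, and per-hypervisor drivers translate it into
# hypervisor-specific calls.
class HypervisorDriver:
    def start(self, vm):
        raise NotImplementedError

class XenDriver(HypervisorDriver):
    def start(self, vm):
        return f"xl create {vm}.cfg"          # Xen-specific tooling

class KVMDriver(HypervisorDriver):
    def start(self, vm):
        return f"virsh start {vm}"            # libvirt/KVM tooling

def start_domain(driver, vm):
    """VI-manager code is written once against the abstract interface."""
    return driver.start(vm)

print(start_domain(XenDriver(), "web01"))
print(start_domain(KVMDriver(), "web01"))
```

Swapping the backend hypervisor then means swapping the driver object, while the management layer above stays unchanged, which is exactly the homogeneity these VI managers aim for.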

Storage Virtualization:

Virtualizing storage means abstracting logical storage from physical storage. By consolidating all available storage devices in a data center, it allows creating virtual disks independent from device and location. Storage devices are commonly organized in a storage area network (SAN) and attached to servers via protocols such as Fibre Channel, iSCSI, and NFS; a storage controller provides the layer of abstraction between virtual and physical storage [76].

In the VI management sphere, storage virtualization support is often restricted to commercial products of companies such as VMware and Citrix. Other products feature ways of pooling and managing storage devices, but administrators are still aware of each individual device.

Self-Service, On-Demand Resource Provisioning:

Self-service access to resources such as VMs has been perceived as one of the most attractive features of cloud computing. This feature enables consumers to directly obtain services from clouds, such as spawning the creation of a server and tailoring its software, configurations, and security policies, without interacting with a human system administrator. This capability "eliminates the need for more time-consuming, labor-intensive, human-driven procurement processes familiar to many in IT" [64]. Therefore, exposing a self-service interface, through which users can easily interact with the system, is a highly desirable feature of a VI manager.

Virtual Networking:

Virtual networks allow creating isolated networks on top of a physical infrastructure, independently of physical topology and locations [65]. A virtual LAN (VLAN) allows isolating traffic that shares a switched network, allowing VMs to be grouped into the same broadcast domain. Additionally, a VLAN can be configured to block traffic originating from VMs of other networks. Similarly, the VPN (virtual private network) concept is used to describe a secure and private overlay network on top of a public network (most commonly the public Internet) [66].

Support for creating and configuring virtual networks to group VMs placed throughout a data center is provided by most VI managers. Additionally, VI managers that interface with public clouds often support secure VPNs connecting local and remote VMs.

Dynamic Resource Allocation:

Increased awareness of energy consumption in data centers has encouraged the practice of dynamically consolidating VMs onto fewer servers. In cloud infrastructures, where applications have variable and dynamic needs, capacity management and demand prediction are especially complicated. This fact triggers the need for dynamic resource allocation aiming at obtaining a timely match of supply and demand [67].

Energy consumption reduction and better management of SLAs can be achieved by dynamically remapping VMs to physical machines at regular intervals. Machines that are not assigned any VM can be turned off or put into a low-power state. In the same fashion, overheating can be avoided by moving load away from hotspots [68].

A number of VI managers include a dynamic resource allocation feature that continuously monitors utilization across resource pools and reallocates available resources among VMs according to application needs.
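The consolidation step can be sketched with a first-fit-decreasing bin-packing heuristic: pack VMs (by load) onto as few hosts as possible, so the remaining hosts can be powered down. This is an illustrative toy, not any particular VI manager's policy; real systems also account for live-migration cost, memory, and SLAs. The function name and the loads below are invented.

```python
def consolidate(vm_loads, host_capacity):
    """Pack VMs onto as few hosts as possible (first-fit decreasing).

    vm_loads: dict mapping VM name -> CPU load (e.g., percent).
    Returns a list of hosts, each a list of VM names.
    Hosts left empty by this placement can be powered down.
    """
    hosts = []  # each entry: [remaining_capacity, [vm names]]
    # Place the largest VMs first: the classic first-fit-decreasing order.
    for vm, load in sorted(vm_loads.items(), key=lambda kv: -kv[1]):
        for host in hosts:
            if host[0] >= load:          # first host with room wins
                host[0] -= load
                host[1].append(vm)
                break
        else:                            # no existing host fits: open a new one
            hosts.append([host_capacity - load, [vm]])
    return [h[1] for h in hosts]

# Six lightly loaded VMs (loads in percent of one host) fit on two
# hosts instead of six, so four machines can be switched off.
placement = consolidate(
    {"vm1": 50, "vm2": 30, "vm3": 20, "vm4": 40, "vm5": 10, "vm6": 30},
    host_capacity=100)
print(len(placement))  # 2
```

The same greedy structure underlies many consolidation policies; production schedulers mainly differ in the cost model used to decide whether a remapping is worth the migration overhead.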

Virtual Clusters:

Numerous VI managers can holistically manage groups of VMs. This feature is useful for provisioning virtual clusters on demand, as well as interconnected groups of VMs for multi-tier Internet applications [69].

Reservation and Negotiation Mechanism:

When consumers request computational resources to be available at a specific time, the requests are termed advance reservations (AR), in contrast to best-effort requests, where users request resources whenever available [70]. To support complex requests, such as AR, a VI manager must allow users to "lease" resources expressing more complex terms (e.g., the period of time of a reservation). This is especially useful in clouds in which resources are scarce; since not all requests may be satisfied immediately, they can benefit from VM placement strategies that support queues, priorities, and advance reservations [25].

Additionally, leases may be negotiated and renegotiated, allowing provider and consumer to modify a lease or present counter proposals until an agreement is reached. This feature is illustrated by the case in which an AR request for a given slot cannot be satisfied, but the provider can offer a distinct slot that is still satisfactory to the user. This problem has been addressed in OpenPEX, which incorporates a bilateral negotiation protocol that allows users and providers to come to an alternative agreement by exchanging offers and counter offers [71].
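The counter-offer idea can be sketched as follows. This is a toy model, not the actual OpenPEX protocol: a reservation request that cannot be satisfied at the asked slot triggers an offer for the next slot that fits. The function name, the hour-indexed calendar, and the capacity of four units are all invented for illustration.

```python
def request_reservation(calendar, start, duration, capacity_needed):
    """Try to reserve capacity for hours [start, start + duration).

    calendar maps hour -> capacity already reserved; total capacity
    per hour is TOTAL units. If the requested slot is full, respond
    with a counter-offer for the next slot that fits (a simplified
    stand-in for bilateral negotiation as in OpenPEX).
    """
    TOTAL = 4

    def fits(t):
        return all(calendar.get(h, 0) + capacity_needed <= TOTAL
                   for h in range(t, t + duration))

    if fits(start):
        return ("accepted", start)
    for t in range(start + 1, start + 24):   # search the next 24 hours
        if fits(t):
            return ("counter-offer", t)
    return ("rejected", None)

calendar = {9: 4, 10: 4, 11: 1}  # hours 9 and 10 fully booked
print(request_reservation(calendar, start=9, duration=2, capacity_needed=2))
# ('counter-offer', 11)
```

In a real negotiation the consumer would then accept, reject, or answer with a further counter-proposal, iterating until an agreement is reached.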

High Availability and Data Recovery:

The high availability (HA) feature of VI managers aims at minimizing application downtime and preventing business disruption. A few VI managers accomplish this by providing a failover mechanism, which detects failure of both physical and virtual servers and restarts VMs on healthy physical servers. This style of HA protects from host, but not VM, failures [72], [73].

For mission critical applications, when a failover solution involving restarting VMs does not suffice, additional levels of fault tolerance that rely on redundancy of VMs are implemented. In this style, redundant and synchronized VMs (running or in standby) are kept in a secondary physical server. The HA solution monitors failures of system components such as servers, VMs, disks, and network and ensures that a duplicate VM serves the application in case of failures [73].

Data backup in clouds should take into account the high data volume involved in VM management. Frequent backup of a large number of VMs, each one with multiple virtual disks attached, should be done with minimal interference with the system's performance. In this sense, some VI managers offer data protection mechanisms that perform incremental backups of VM images. The backup workload is often assigned to proxies, thus offloading production servers and reducing network overhead [74].
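A minimal sketch of the incremental-backup idea: hash each block of a VM disk image and copy only the blocks whose hash changed since the previous run, instead of the full image. This illustrates the concept only; it is not any specific product's mechanism, and the block granularity here is artificially small.

```python
import hashlib

def incremental_backup(disk_blocks, last_hashes):
    """Return (changed_blocks, new_hashes) for one backup run.

    disk_blocks: list of bytes objects (the VM image, split into blocks).
    last_hashes: dict block_index -> hex digest from the previous run.
    Only blocks whose content hash changed need to cross the network.
    """
    changed, new_hashes = {}, {}
    for idx, block in enumerate(disk_blocks):
        digest = hashlib.sha256(block).hexdigest()
        new_hashes[idx] = digest
        if last_hashes.get(idx) != digest:
            changed[idx] = block  # this block goes to the backup proxy
    return changed, new_hashes

disk = [b"boot", b"data-v1", b"logs"]
_, hashes = incremental_backup(disk, {})   # first run backs up everything
disk[1] = b"data-v2"                       # only one block is modified
delta, _ = incremental_backup(disk, hashes)
print(sorted(delta))  # [1]
```

Offloading exactly this hashing-and-copying work to backup proxies is what keeps the production servers and the network lightly loaded during frequent backups.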

1.5.1 Desired features of a Cloud

Self-Service:

Consumers of cloud computing services expect on-demand, nearly instant access to resources. Clouds must therefore allow self-service, so that consumers can request, customize, pay for, and use services without intervention of human operators [78].

Per-Usage Metering and Billing:

Services must be priced on a short-term basis (e.g., by the hour), allowing users to release (and not pay for) resources as soon as they are not required [47]. For these reasons, clouds must implement features to allow efficient trading of services such as pricing, accounting, and billing [77]. Metering should be done accordingly for different types of service (e.g., storage, processing, bandwidth, and energy usage) and usage promptly reported, thus providing greater transparency [78].
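A toy pay-per-use bill can make the metering model concrete. The resource names, rates, and the convention of rounding instance hours up are all assumptions for illustration (hourly rounding is common, but providers differ).

```python
import math

def bill(usage_records, rates):
    """Compute a pay-per-use bill from metered usage.

    usage_records: list of (resource, amount) tuples.
    rates: dict resource -> price per unit.
    Instance hours are rounded up per record, a common hourly-pricing
    convention (an assumption here, not a universal rule).
    """
    total = 0.0
    for resource, amount in usage_records:
        if resource == "instance_hours":
            amount = math.ceil(amount)
        total += amount * rates[resource]
    return round(total, 2)

rates = {"instance_hours": 0.10, "storage_gb": 0.05, "bandwidth_gb": 0.02}
records = [("instance_hours", 30.4), ("storage_gb", 20), ("bandwidth_gb", 5)]
print(bill(records, rates))  # 4.2
```

Because each resource type is metered and priced independently, the same mechanism reports usage per service, which is the transparency the paragraph above calls for.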

Elasticity:

Cloud computing provides the illusion of infinite computing resources available on demand [47]. Therefore, users expect clouds to rapidly provide resources in any quantity at any time. In particular, it is expected that additional resources can be (i) provisioned, possibly automatically, when an application's load increases and (ii) released when load decreases (scale up and down) [78].
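The scale-up/scale-down behavior can be sketched with a simple threshold rule: add VMs while average load is above an upper threshold, remove them while it would stay below a lower one. Real autoscalers also handle cooldown periods and provisioning delays; the function name and thresholds below are illustrative assumptions.

```python
def autoscale(current_vms, load_per_vm_pct, low=30, high=70):
    """Return the VM count a threshold-based autoscaler would settle on.

    Scale up while average load per VM exceeds `high` percent;
    scale down (never below one VM) while the load would still
    stay under `low` percent after removing a VM.
    """
    total_load = current_vms * load_per_vm_pct  # aggregate demand
    vms = current_vms
    while total_load / vms > high:              # overloaded: add capacity
        vms += 1
    while vms > 1 and total_load / (vms - 1) < low:  # idle: release capacity
        vms -= 1
    return vms

print(autoscale(current_vms=4, load_per_vm_pct=90))  # 6 (scales up)
print(autoscale(current_vms=4, load_per_vm_pct=5))   # 1 (scales down)
```

The gap between `low` and `high` is deliberate: with a single threshold, small load fluctuations would make the controller oscillate between adding and removing the same VM.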

Customization:

In a multi-tenant cloud, a great disparity between user needs is often the case. Thus, resources rented from the cloud must be highly customizable. In the case of infrastructure services, customization means allowing users to deploy specialized virtual appliances and to be given privileged (root) access to the virtual servers. Other service classes (PaaS and SaaS) offer less flexibility and are not suitable for general-purpose computing [47], but are still expected to provide a certain level of customization.

1.5.2 Features of Infrastructures as a service Providers

Despite being based on a common set of features, IaaS offerings can be distinguished by the availability of specialized features that influence the cost-benefit ratio experienced by user applications when moved to the cloud. The most relevant features are:

(i) geographic distribution of data centers

(ii) variety of user interfaces and APIs to access the system

(iii) specialized components and services that aid particular applications (e.g., load balancers, firewalls)

(iv) choice of virtualization platform and operating systems

(v) different billing methods and periods (e.g., prepaid vs. post-paid, hourly vs. monthly)

1.5.3 Features of Platform as a service Providers

Public PaaS providers commonly offer a development and deployment environment that allows users to create and run their applications with little or no concern for the low-level details of the platform. In addition, specific programming languages and frameworks are made available on the platform, as well as other services such as persistent data storage and in-memory caches.

Programming Models, Languages, and Frameworks:

Programming models made available by PaaS providers define how users can express their applications using higher levels of abstraction and efficiently run them on the cloud platform. Each model aims at efficiently solving a particular problem. In the cloud computing domain, the most common activities that require specialized models are: processing of large datasets in clusters of computers (MapReduce model); development of request-based Web services and applications; definition and orchestration of business processes in the form of workflows (Workflow model); and high-performance distributed execution of various computational tasks.
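The MapReduce model mentioned above can be illustrated with a self-contained word count, showing the map, shuffle, and reduce steps that a framework would normally distribute across a cluster (here they run in one process purely for illustration):

```python
from collections import defaultdict

# MapReduce-style word count: map emits (word, 1) pairs, the framework
# groups pairs by key (shuffle), and reduce sums the counts per word.
def map_phase(document):
    return [(word, 1) for word in document.split()]

def shuffle(pairs):
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    return {word: sum(counts) for word, counts in groups.items()}

docs = ["the cloud", "the grid and the cloud"]
pairs = [p for d in docs for p in map_phase(d)]
print(reduce_phase(shuffle(pairs)))
# {'the': 3, 'cloud': 2, 'grid': 1, 'and': 1}
```

The appeal of the model is that the user writes only the map and reduce functions; the platform handles partitioning the input, scheduling the phases across machines, and recovering from failures.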

For user convenience, PaaS providers usually support multiple programming languages. Most commonly used languages in platforms include Java and Python (e.g., Google AppEngine), .NET languages (e.g., Microsoft Azure), and Ruby (e.g., Heroku). Force.com has devised its own programming language (Apex) and an Excel-like query language, which provide higher levels of abstraction to key platform functionalities.

An array of software frameworks is usually made available to PaaS developers, depending on application focus. Providers that focus on Web and enterprise application hosting offer popular frameworks such as Ruby on Rails, Spring, Java EE, and .NET.

Persistence Options:

A persistence layer is essential to allow applications to record their state and recover it in case of crashes, as well as to store user data. Traditionally, Web and enterprise application developers have chosen relational databases as the preferred persistence method. These databases offer fast and reliable structured data storage and transaction processing, but may lack scalability to handle several petabytes of data stored in commodity computers [80].

In the cloud computing domain, distributed storage technologies have emerged, which seek to be robust and highly scalable, at the expense of relational structure and convenient query languages. For example, Amazon SimpleDB and the Google AppEngine datastore offer schema-less, automatically indexed database services [79].

Data queries can be performed only on individual tables; that is, join operations are unsupported for the sake of scalability.
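The single-table, join-free query model can be sketched with a toy schema-less datastore. This mimics the spirit of SimpleDB and the AppEngine datastore, not their actual APIs; the class and method names are invented for illustration.

```python
# Toy schema-less datastore: items in the same table may carry
# different attributes, queries filter a single table by attribute
# equality, and joins are deliberately unsupported (as in SimpleDB).
class Datastore:
    def __init__(self):
        self.tables = {}

    def put(self, table, key, **attrs):
        # No fixed schema: each item stores whatever attributes it was given.
        self.tables.setdefault(table, {})[key] = attrs

    def query(self, table, **filters):
        # Scan one table only; every filter must match exactly.
        return [key for key, attrs in self.tables.get(table, {}).items()
                if all(attrs.get(a) == v for a, v in filters.items())]

db = Datastore()
db.put("vms", "vm1", state="running", zone="eu")
db.put("vms", "vm2", state="stopped")          # different attribute set
db.put("users", "alice", plan="free")          # separate table, no joins
print(db.query("vms", state="running"))  # ['vm1']
```

Relating VMs to users here would require two separate queries stitched together in application code, which is exactly the trade-off such stores make in exchange for horizontal scalability.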


