The Services Of Cloud Providers

Introduction

Nowadays, three technological storms are under way: smart mobile devices, ubiquitous high-speed connectivity, and Cloud computing. Computer scientists predict that Cloud systems may become the next-generation operating system. Practically, cloud computing has a strong foundation in virtualization governed by hypervisors, which provide slices of resources. Developments in hypervisors such as Xen, VirtualBox, KVM, and VMware have triggered the development of commercial and open source Cloud environments. Clouds can certainly play a role in the "hosting" part of computations, and work best in federation with Grids. When we plug an electric lamp into an outlet socket, we do not think about how electric power is generated and how it reaches the socket; electricity is virtualized, readily available at the wall socket, which hides the power generation stations and the huge distribution grid.

For another way to think about cloud computing, consider your experience with email. Your mail client, whether Yahoo, Hotmail, Gmail, Indiatimes, or another, takes care of housing all of the hardware and software necessary to support your personal email account. When you want to access your mail, you open a web browser, go to the mail client, and log in. The essential requirement is Internet access: your mail is not housed on your physical computer; you access it through an Internet connection, from anywhere in the world. Whether you are on a picnic or at work, you can check your email as long as you have Internet access. Your email account is different from software installed on your computer, such as a word processing program; a document created with word processing software stays on the device you used. An email client is similar to how cloud computing works, except that instead of accessing just email, you can choose what information you access in the cloud environment [81].

Technologies such as cluster, grid, and now cloud computing aim to provide access to large amounts of computing power in a fully virtualized manner. Consumers of cloud services pay based on usage, following pay-as-you-go and pay-per-use models similar to traditional public utility services such as water, electricity, gas, telephony, and TV channels. Cloud computing is an umbrella term; in a nutshell, Amazon, Google, and Microsoft are commercial providers of on-demand computing services.

Many practitioners and researchers have defined cloud computing and its characteristics. Rajkumar Buyya [77] defined: "A Cloud is a parallel and distributed computing system consisting of a collection of inter-connected and virtualized computers that are dynamically provisioned and presented as one or more unified computing resources based on service-level agreements established through negotiation between the service provider and consumers."

The University of California, Berkeley [47] summarized the characteristics of cloud computing as: "(1) the illusion of infinite computing resources; (2) the elimination of an up-front commitment by cloud users; and (3) the ability to pay for use as needed".

Vaquero et al. [82] define clouds as "a large pool of easily usable and accessible virtualized resources (such as hardware, development platforms and services). These resources can be dynamically reconfigured to adjust to a variable load (scale), allowing also for an optimum resource utilization. This pool of resources is typically exploited by a pay-per-use model in which guarantees are offered by the Infrastructure Provider by means of customized Service Level Agreements."

The National Institute of Standards and Technology (NIST) [78] characterizes cloud computing as " a pay-per-use model for enabling available, convenient, on-demand network access to a shared pool of configurable computing resources (e.g. networks, servers, storage, applications, services) that can be rapidly provisioned and released with minimal management effort or service provider interaction."

In a more basic definition, Armbrust et al. [47] define the cloud as the "data center hardware and software that provide services." Similarly, B. Sotomayor et al. [24] point out that "cloud" is more often used to refer to the IT infrastructure deployed on an Infrastructure-as-a-Service provider's data center.

The common characteristics of cloud computing are: (i) pay-per-use, (ii) pay-as-you-go, (iii) elastic capacity and the illusion of infinite resources, (iv) a self-service interface, and (v) virtualized resources. In addition, cloud computing providers usually offer a wide range of software services, including APIs and development tools; for example, VirtualBox with its Java API lets developers create cloud environments such as public, private, and protected clouds. In recent years several technologies have matured and contributed significantly to making cloud computing practicable. Section 1.1 covers the basic roots of cloud computing; Section 1.2 considers the services cloud computing provides to consumers; Section 1.3 covers types of clouds; Section 1.4 covers virtual infrastructure managers, which are responsible for creating cloud environments and virtual machines; and, last but not least, we discuss the features of cloud computing in this Chapter 1.

1.1 Basic Roots of Cloud Computing


Figure 1.1 Basic Roots of Cloud Computing

We can trace the basic roots of cloud computing by examining developments in different technologies, especially in hardware (virtualization, multicore chips), Internet technologies (Web services, service-oriented architecture, Web 2.0), distributed computing (clusters, grids), and systems management (autonomic computing, data center automation), as shown in Figure 1.1 above. We discuss these technologies one by one.

1.1.1 SOA, Web Services, Web 2.0, and Mashups

Web services (WS) open standards have contributed significantly to advances in the domain of software integration. WS standards have been created on top of existing ubiquitous technologies such as HTTP and XML, thus providing a common mechanism for delivering services and making them ideal for implementing a service-oriented architecture (SOA). In SOA, software resources are packaged as "services": well-defined, self-contained modules that provide standard business functionality [43]. The concept of gluing services initially focused on the enterprise Web but gained traction in the consumer realm as well, especially with the advent of Web 2.0. In the consumer Web, information and services may be programmatically aggregated, acting as building blocks of complex compositions called service mashups. Google makes its service APIs publicly accessible using standard protocols such as SOAP and REST [44].
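
As a concrete illustration of such programmatic aggregation, the hedged sketch below fetches one service's REST response with Java's built-in HTTP client (Java 11+); the URL is a placeholder, not a real service endpoint, and a real mashup would combine several such responses.

// A small sketch of the mashup idea: programmatically consuming a service
// over plain HTTP/REST with java.net.http. The URL below is a placeholder
// for any public service API of the kind mentioned above.
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RestMashupSketch {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(
                URI.create("https://api.example.com/geocode?q=Calabria")) // placeholder URL
                .GET()
                .build();
        // The JSON (or XML) response can then be combined with other
        // services' data to form a mashup.
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode() + ": " + response.body());
    }
}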

1.1.2 Grid & Utility Computing

Grid computing facilitates the aggregation of distributed resources and transparent access to them. Most production grids, such as TeraGrid and EGEE, share compute and storage resources distributed across different administrative domains, with the main intention of speeding up a broad range of scientific applications such as climate modeling, drug design, and protein analysis. The Globus Toolkit is a middleware that implements several standard Grid services and over the years has aided the deployment of several service-oriented Grid infrastructures.

In utility computing environments, users assign a "utility" value to their jobs, where utility is a fixed or time-varying value that captures various QoS constraints (e.g., deadline, satisfaction). Service providers then attempt to maximize their own utility, which may directly correlate with their profit.
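
As a toy illustration of a time-varying utility value, the sketch below (invented for this chapter, not taken from any cited system) pays full value up to the deadline and decays linearly to zero over a grace period.

// A toy time-varying utility function for a job: the consumer's valuation
// is maximal up to the deadline and decays linearly afterwards. The shape
// and values are purely illustrative.
public class UtilitySketch {
    static double utility(double maxValue, double deadline, double grace, double finishTime) {
        if (finishTime <= deadline) return maxValue;           // QoS met: full value
        double late = finishTime - deadline;
        return Math.max(0.0, maxValue * (1.0 - late / grace)); // linear decay, floor at 0
    }

    public static void main(String[] args) {
        System.out.println(utility(100, 60, 30, 45)); // 100.0 - on time
        System.out.println(utility(100, 60, 30, 75)); // 50.0  - halfway into grace period
        System.out.println(utility(100, 60, 30, 95)); // 0.0   - too late
    }
}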

1.1.3 Hardware Virtualization

Cloud computing service providers run large-scale data centers composed of thousands of computers. Such data centers are built to serve many consumers and host many isolated applications, and for this purpose hardware virtualization is used. Hardware virtualization allows running multiple operating systems, called virtual machines, with their software stacks on a single physical platform. The virtual machine monitor, also called a hypervisor, sits between the guest OS and the host, as shown in Figure 1.2 below.


Figure 1.2 Hardware Virtualization

Virtual machine 1 and virtual machine 2 each run a guest operating system on a single physical machine whose hardware, such as the processor, I/O devices, and memory, is virtualized by the hypervisor. Workload isolation is achieved by the service provider through the hypervisor's API; capabilities such as workload migration, VM resume, VM migration, VM pause, and VM cloning are also applied through the API, where observing VM state is important. A number of VMM platforms are available to handle VMs and physical machines; the most notable ones are VMware, Xen, KVM, and VirtualBox. In our work we use VirtualBox to create cloud environments and handle all VMs through the VirtualBox API. VirtualBox is a powerful x86 and AMD64/Intel64 virtualization product for enterprise as well as home use. Not only is VirtualBox an extremely feature-rich, high-performance product for enterprise customers, it is also the only professional solution that is freely available as Open Source Software under the terms of the GNU General Public License (GPL) version 2.

1.1.4 Autonomic Computing

Autonomic, or self-managing, systems rely on monitoring probes and sensors, on an adaptation engine that computes optimizations based on monitoring data, and on effectors that carry out changes to the system. IBM's Autonomic Computing Initiative helped define the four characteristics of autonomic systems: (i) self-configuration, (ii) self-optimization, (iii) self-healing, and (iv) self-protection. IBM also introduced the autonomic manager's control loop, known as MAPE-K, i.e. Monitor, Analyze, Plan, Execute, Knowledge [45], [46].
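
The following minimal Java sketch shows the shape of such a MAPE-K loop; the sensor, threshold, and "add one VM" plan are invented placeholders rather than IBM's actual framework.

// A minimal sketch of the MAPE-K control loop described above. The sensor
// (a random "load" probe), the knowledge (a fixed threshold), and the plan
// are all illustrative stand-ins.
import java.util.function.DoubleSupplier;

public class MapeKSketch {
    public static void main(String[] args) throws InterruptedException {
        DoubleSupplier sensor = Math::random;      // Monitor: probe, e.g. CPU load
        double knowledge = 0.8;                    // Knowledge: learned threshold

        for (int cycle = 0; cycle < 10; cycle++) {
            double load = sensor.getAsDouble();    // Monitor
            boolean overloaded = load > knowledge; // Analyze
            if (overloaded) {
                String plan = "add one VM";        // Plan
                System.out.println("Execute: " + plan); // Execute (effector acts here)
            }
            Thread.sleep(1000);                    // next control cycle
        }
    }
}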

1.2 Services of Cloud Computing


Figure 1.3 Services of Cloud providers

Cloud computing exists to provide services to consumers on demand. Cloud services are divided into three main classes, according to the abstraction level of the capability provided by the provider's service model, namely:

Infrastructure as a Service (IaaS)

Platform as a Service (PaaS)

Software as a Service (SaaS)

1.2.1 Infrastructure as a Service

Providing virtualized resources (computation, storage, and communication) on demand is known as Infrastructure as a Service (IaaS). Infrastructure services are considered the base layer of cloud computing systems. Amazon Web Services mainly offers IaaS, which in the case of its EC2 service means offering VMs with a software stack that can be customized much like an ordinary physical server. Users are given privileges to perform numerous activities on the server, such as starting and stopping it, customizing it by installing software packages, attaching virtual disks to it, and configuring access permissions and firewall rules.
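
A hedged sketch of this kind of programmatic start/stop control, using the AWS SDK for Java (v1); the instance id below is a placeholder and credentials are assumed to be configured in the environment.

// Starting and stopping an EC2 instance through the AWS SDK for Java (v1).
// "i-0123456789abcdef0" is a placeholder instance id, not a real resource.
import com.amazonaws.services.ec2.AmazonEC2;
import com.amazonaws.services.ec2.AmazonEC2ClientBuilder;
import com.amazonaws.services.ec2.model.StartInstancesRequest;
import com.amazonaws.services.ec2.model.StopInstancesRequest;

public class Ec2LifecycleSketch {
    public static void main(String[] args) {
        AmazonEC2 ec2 = AmazonEC2ClientBuilder.defaultClient();
        String id = "i-0123456789abcdef0"; // placeholder instance id

        // Start the VM, then stop it again; each call returns state changes.
        ec2.startInstances(new StartInstancesRequest().withInstanceIds(id));
        ec2.stopInstances(new StopInstancesRequest().withInstanceIds(id));
    }
}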

1.2.2 Platform as a Service

In addition to infrastructure-oriented clouds that provide raw computing and storage services, another approach is to offer a higher level of abstraction to make a cloud easily programmable, known as Platform as a Service (PaaS). A cloud platform offers an environment on which developers create and deploy applications without necessarily needing to know how many processors or how much memory their applications will use. In addition, multiple programming models and specialized services (e.g., data access, authentication, and payments) are offered as building blocks for new applications.

Google App Engine and Microsoft Azure are examples of Platform as a Service. App Engine offers a scalable environment for developing and hosting Web applications, which must be written in specific programming languages such as Python or Java and use the service's own proprietary structured object datastore. Building blocks include an in-memory object cache (memcache), a mail service, an instant messaging service (XMPP), an image manipulation service, and integration with the Google Accounts authentication service.

1.2.3 Software as a Service

Applications reside at the top of the cloud stack. Services provided by this layer can be accessed by end users through Web portals. Consumers are therefore increasingly shifting from locally installed computer programs to online software services that offer the same functionality. Traditional desktop applications such as word processing and spreadsheets can now be accessed as services on the Web. This model of delivering applications, known as Software as a Service (SaaS), alleviates the burden of software maintenance for customers and simplifies development and testing for providers. Salesforce.com, which relies on the SaaS model, offers business productivity applications (CRM) that reside completely on its servers, allowing consumers to customize and access applications on demand.

1.3 Types of Clouds


Figure 1.4 Types of Cloud

Different types of clouds are available for subscription depending on consumer needs, whether those of a home user, a small business owner, an organization, or a university. On the basis of subscription and consumer need, clouds can be classified into:

Public Cloud

Private Cloud

Community Cloud

Hybrid or Mixed cloud

as shown in Figure 1.4 above.

Armbrust et al. suggest definitions for a public cloud as a "cloud made available in a pay-as-you-go manner to the general public" and a private cloud as the "internal data center of a business or other organization, not made available to the general public" [47].

In most cases, establishing a private cloud means restructuring an existing infrastructure by adding virtualization and cloud-like interfaces. This allows users to interact with the local data center while experiencing the same advantages of public clouds, most notably self-service interface, privileged access to virtual servers, and per-usage metering and billing.

A community cloud is shared by several organizations and supports a specific community that has shared concerns (e.g., mission, security requirements, policy, and compliance considerations).

A hybrid cloud takes shape when a private cloud is supplemented with computing capacity from public clouds. The approach of temporarily renting capacity to handle spikes in load is known as cloud-bursting [24].

1.4 Virtual Infrastructure Managers

In this section we consider the most popular virtual infrastructure managers developed recently. A VI manager is used to manage virtual machine infrastructure and to provide the IaaS, PaaS, and SaaS services of a cloud. We discuss them one by one as follows.

1.4.1 VirtualBox

VirtualBox is a cross-platform virtualization application provided by Oracle Corporation; it is open source and freely available. For one thing, it installs on your existing Intel or AMD-based computers, whether they are running Windows, Mac, Linux, or Solaris operating systems. Secondly, it extends the capabilities of your existing computer so that it can run multiple operating systems (inside multiple virtual machines) at the same time. So, for example, you can run Windows and Linux on your Mac, run Windows Server 2008 on your Linux server, run Linux on your Windows PC, and so on, all alongside your existing applications. VirtualBox is deceptively simple yet also very powerful. It can run everywhere from small embedded systems or desktop-class machines all the way up to datacenter deployments and even Cloud environments. The following are key features of VirtualBox [48].

Running multiple operating systems simultaneously:

VirtualBox allows you to run more than one operating system at a time. This way, you can run software written for one operating system on another (for example, Windows software on Linux or a Mac) without having to reboot to use it. Since you can configure what kinds of "virtual" hardware should be presented to each such operating system, you can install an old operating system such as DOS or OS/2 even if your real computer's hardware is no longer supported by that operating system.

Easier software installations:

Software vendors can use virtual machines to ship entire software configurations. For example, installing a complete mail server solution on a real machine can be a tedious task. With VirtualBox, such a complex setup (then often called an "appliance") can be packed into a virtual machine. Installing and running a mail server becomes as easy as importing such an appliance into VirtualBox.

Testing and disaster recovery:

Once installed, a virtual machine and its virtual hard disks can be considered a "container" that can be arbitrarily frozen, woken up, copied, backed up, and transported between hosts. On top of that, with the use of another VirtualBox feature called "snapshots", one can save a particular state of a virtual machine and revert back to that state, if necessary. This way, one can freely experiment with a computing environment. If something goes wrong (e.g. after installing misbehaving software or infecting the guest with a virus), one can easily switch back to a previous snapshot and avoid the need of frequent backups and restores. Any number of snapshots can be created, allowing you to travel back and forward in virtual machine time. You can delete snapshots while a VM is running to reclaim disk space.

Infrastructure consolidation:

Virtualization can significantly reduce hardware and electricity costs. Most of the time, computers today only use a fraction of their potential power and run with low average system loads. A lot of hardware resources as well as electricity is thereby wasted. So, instead of running many such physical computers that are only partially used, one can pack many virtual machines onto a few powerful hosts and balance the loads between them.

In our work, cloud environments are created using VirtualBox, whose API makes it possible to handle virtual machines through Java programs. In summary, VirtualBox provides a full-virtualization environment with features such as Network Address Translation (NAT), Dynamic Host Configuration Protocol (DHCP), and software-based Network Interface Cards (NICs). It also provides Java-based web services for accessing facilities such as start, stop, pause, resume, migrate, and clone VM through the VirtualBox API.
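
The sketch below illustrates this style of VM handling over the VirtualBox web service. It is a minimal example, not our actual implementation, assuming the vboxwebsrv service is running locally and the Java bindings (vboxjws) for VirtualBox 6.1 are on the classpath; the VM name is illustrative.

// Starting, pausing, and resuming a VM over the VirtualBox web service.
// Assumes: vboxwebsrv listening on localhost:18083, vboxjws.jar on the
// classpath (VirtualBox 6.1 bindings), and a VM named "cloud-node-1".
import org.virtualbox_6_1.*;

public class VBoxLifecycleSketch {
    public static void main(String[] args) throws Exception {
        VirtualBoxManager mgr = VirtualBoxManager.createInstance(null);
        mgr.connect("http://localhost:18083", "user", "password");
        try {
            IVirtualBox vbox = mgr.getVBox();
            IMachine machine = vbox.findMachine("cloud-node-1"); // hypothetical VM name
            ISession session = mgr.getSessionObject();

            // Launch the VM headless and wait for the operation to finish.
            IProgress progress = machine.launchVMProcess(session, "headless", null);
            progress.waitForCompletion(-1); // -1 = wait indefinitely

            // Pause and resume the running VM through its console.
            session.getConsole().pause();
            session.getConsole().resume();

            session.unlockMachine();
        } finally {
            mgr.disconnect();
            mgr.cleanup();
        }
    }
}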

1.4.2 Eucalyptus

The Eucalyptus [49] framework was one of the first open-source projects to focus on building IaaS clouds. It has been developed with the intent of providing an open-source implementation nearly identical in functionality to Amazon Web Services APIs. Therefore, users can interact with a Eucalyptus cloud using the same tools they use to access Amazon EC2. It also distinguishes itself from other tools because it provides a storage cloud API—emulating the Amazon S3 API—for storing general user data and VM images.

In summary, Eucalyptus provides the following features: Linux-based controller with administration Web portal; EC2-compatible (SOAP, Query) and S3-compatible (SOAP, REST) CLI and Web portal interfaces; Xen, KVM, and VMware backends; Amazon EBS-compatible virtual storage devices; interface to the Amazon EC2 public cloud; virtual networks.

1.4.3 Nimbus

The Nimbus toolkit [50] is built on top of the Globus framework. Nimbus provides most features in common with other open-source VI managers, such as an EC2-compatible front-end API, support for Xen, and a backend interface to Amazon EC2. It provides a Globus Web Services Resource Framework (WSRF) interface. It also provides a backend service, named Pilot, which spawns VMs on clusters managed by a local resource manager (LRM) such as PBS or SGE. Nimbus' core was engineered around the Spring framework to be easily extensible, allowing several internal components to be replaced and easing integration with other systems.

In summary, Nimbus provides the following features: Linux-based controller; EC2-compatible (SOAP) and WSRF interfaces; Xen and KVM backends and a Pilot program to spawn VMs through an LRM; interface to the Amazon EC2 public cloud; virtual networks; one-click virtual clusters.

1.4.4 OpenPEX

OpenPEX (Open Provisioning and EXecution Environment) was constructed around the notion of using advance reservations as the primary method for allocating VM instances. It is distinguished from other VI managers by its lease negotiation mechanism, which incorporates a bilateral negotiation protocol that allows users and providers to come to an agreement by exchanging offers and counter-offers when their original requests cannot be satisfied.

In summary, OpenPEX provides the following features: multi-platform (Java) controller; Web portal and Web services (REST) interfaces; Citrix XenServer backend; advance reservation of capacity with negotiation.

1.4.5 OpenNebula

OpenNebula is one of the most feature-rich open-source VI managers. It was initially conceived to manage local virtual infrastructure, but has also included remote interfaces that make it viable to build public clouds. Altogether, four programming APIs are available: XML-RPC and libvirt for local interaction; a subset of EC2 (Query) APIs and the OpenNebula Cloud API (OCA) for public access [24], [51].

Its architecture is modular, encompassing several specialized pluggable components. The Core module orchestrates physical servers and their hypervisors, storage nodes, and network fabric. Management operations are performed through pluggable Drivers, which interact with APIs of hypervisors, storage and network technologies, and public clouds. The Scheduler module, which is in charge of assigning pending VM requests to physical hosts, offers dynamic resource allocation features. Administrators can choose between different scheduling objectives such as packing VMs in fewer hosts or keeping the load balanced. Via integration with the Haizea lease scheduler, OpenNebula also supports advance reservation of capacity and queuing of best-effort leases [24].

In summary, OpenNebula provides the following features: Linux-based controller; CLI, XML-RPC, EC2-compatible Query and OCA interfaces; Xen, KVM, and VMware backend; interface to public clouds (Amazon EC2, ElasticHosts); virtual networks; dynamic resource allocation; advance reservation of capacity.

1.4.6 oVirt

oVirt [52] is an open-source VI manager, sponsored by Red Hat’s Emergent Technology group. It provides most of the basic features of other VI managers, including support for managing physical server pools, storage pools, user accounts, and VMs. All features are accessible through a Web interface [52].

The oVirt admin node, which is also a VM, provides a Web server, secure authentication services based on freeIPA, and provisioning services to manage VM images and their transfer to the managed nodes. Each managed node runs libvirt, which interfaces with the hypervisor.

In summary, oVirt provides the following features: Fedora Linux-based controller packaged as a virtual appliance; Web portal interface; KVM backend.

1.4.7 Platform ISF

Infrastructure Sharing Facility (ISF) is the VI manager offering from Platform Computing [53]. The company, mainly through its LSF family of products, has been serving the HPC market for several years. ISF's architecture is divided into three layers. The topmost Service Delivery layer includes the user interfaces (i.e., self-service portal and APIs); the Allocation Engine provides reservation and allocation policies; and the bottom layer, Resource Integrations, provides adapters to interact with hypervisors, provisioning tools, and other systems (i.e., external public clouds). The Allocation Engine also provides policies to address several objectives, such as minimizing energy consumption, reducing impact of failures, and maximizing application performance [54].

ISF is built upon Platform’s VM Orchestrator, which, as a standalone product, aims at speeding up delivery of VMs to end users. It also provides high availability by restarting VMs when hosts fail and duplicating the VM that hosts the VMO controller [55].

In summary, ISF provides the following features: Linux-based controller packaged as a virtual appliance, Web portal interface; dynamic resource allocation; advance reservation of capacity; high availability.

1.4.8 VMware vSphere and vCloud

vSphere is VMware's suite of tools aimed at transforming IT infrastructures into private clouds [56], [57]. It is distinguished from other VI managers as one of the most feature-rich, due to the company's several offerings at all levels of the architecture.

In the vSphere architecture, servers run on the ESXi platform. A separate server runs vCenter Server, which centralizes control over the entire virtual infrastructure. Through the vSphere Client software, administrators connect to vCenter Server to perform various tasks. The Distributed Resource Scheduler (DRS) makes allocation decisions based on predefined rules and policies. It continuously monitors the amount of resources available to VMs and, if necessary, makes allocation changes to meet VM requirements. In the storage virtualization realm, vStorage VMFS is a cluster file system that aggregates several disks into a single volume.

VMFS is especially optimized to store VM images and virtual disks. It supports storage equipment that uses Fibre Channel or iSCSI SAN.

In its basic setup, vSphere is essentially a private administration suite. Self-service VM provisioning to end users is provided via the vCloud API, which interfaces with vCenter Server. In this configuration, vSphere can be used by service providers to build public clouds. In terms of interfacing with public clouds, vSphere interfaces with the vCloud API, thus enabling cloud-bursting into external clouds.

In summary, vSphere provides the following features: Windows-based controller (vCenter Server); CLI, GUI, Web portal, and Web services interfaces; VMware ESX, ESXi backends; VMware vStorage VMFS storage virtualization; interface to external clouds (VMware vCloud partners); virtual networks (VMware Distributed Switch); dynamic resource allocation (VMware DRS); high availability; data protection (VMware Consolidated Backup).

1.4.9 Apache VCL

The Virtual Computing Lab [58], [59] project was initiated in 2004 by researchers at North Carolina State University as a way to provide customized environments to computer lab users. The software components that support NCSU's initiative have been released as open source and incorporated by the Apache Foundation.

Since its inception, the main objective of VCL has been to provide desktop (virtual lab) and HPC computing environments anytime, in a flexible, cost-effective way, and with minimal intervention of IT staff. In this sense, VCL was one of the first projects to create a tool with features such as: a self-service Web portal, to reduce administrative burden; advance reservation of capacity, to provide resources during classes; and deployment of customized machine images on multiple computers, to provide clusters on demand.

In summary, Apache VCL provides the following features: (i) multi-platform controller, based on Apache/PHP (ii) Web portal and XML-RPC interfaces (iii) support for VMware hypervisors (ESX, ESXi, and Server) (iv) Virtual networks (v) virtual clusters; and (vi) advance reservation of capacity.

1.4.10 AppLogic

AppLogic [60] is a commercial VI manager, the flagship product of 3tera Inc. from California, USA. The company has labeled this product as a Grid Operating System.

AppLogic provides a fabric to manage clusters of virtualized servers, focusing on managing multi-tier Web applications. It views an entire application as a collection of components that must be managed as a single entity.

Several components such as firewalls, load balancers, Web servers, application servers, and database servers can be set up and linked together. Whenever the application is started, the system manufactures and assembles the virtual infrastructure required to run it. Once the application is stopped, AppLogic tears down the infrastructure built for it [61].

AppLogic offers dynamic appliances to add functionality such as Disaster Recovery and Power optimization to applications [60]. The key differential of this approach is that additional functionalities are implemented as another pluggable appliance instead of being added as a core functionality of the VI manager.

In summary, 3tera AppLogic provides the following features: Linux-based controller; CLI and GUI interfaces; Xen backend; Global Volume Store (GVS) storage virtualization; virtual networks; virtual clusters; dynamic resource allocation; high availability; and data protection.

1.4.11 Citrix Essentials

The Citrix Essentials suite is one of the most feature-complete VI management products available, focusing on management and automation of data centers. It is essentially a hypervisor-agnostic solution, currently supporting Citrix XenServer and Microsoft Hyper-V [62].

By providing several access interfaces, it facilitates both human and programmatic interaction with the controller. Automation of tasks is also aided by a workflow orchestration mechanism.

In summary, Citrix Essentials provides the following features: Windows-based controller; GUI, CLI, Web portal, and XML-RPC interfaces; support for XenServer and Hyper-V hypervisors; Citrix StorageLink storage virtualization; virtual networks; dynamic resource allocation; three-level high availability (i.e., recovery by VM restart, recovery by activating a paused duplicate VM, and running a duplicate VM continuously) [63]; data protection with Citrix Consolidated Backup.

1.4.12 Enomaly ECP

The Enomaly Elastic Computing Platform, in its most complete edition, offers most features a service provider needs to build an IaaS cloud.

Most notably, ECP Service Provider Edition offers a Web-based customer dashboard that allows users to fully control the life cycle of VMs. Usage accounting is performed in real time and can be viewed by users. Similar to the functionality of virtual appliance marketplaces, ECP allows providers and users to package and exchange applications.

In summary, Enomaly ECP provides the following features: Linux-based controller; Web portal and Web services (REST) interfaces; Xen back-end; interface to the Amazon EC2 public cloud; virtual networks; virtual clusters (ElasticValet).

1.5 Features of Cloud Computing

We now discuss a list of basic and advanced features that are usually available in virtual infrastructure managers (VIMs), and then consider key features of IaaS and PaaS service providers.

Virtualization Support:

The multi-tenancy aspect of clouds requires multiple consumers with disparate requirements to be served by a single hardware infrastructure. Virtualized resources (CPUs, memory, etc.) can be sized and resized with certain elasticity. These features make hardware virtualization the ideal technology to create a virtual infrastructure that partitions a data center among multiple tenants.

Multiple Backend Hypervisors:

Different virtualization models and tools present different benefits, drawbacks, and limitations. So, some VI managers provide a homogeneous management layer regardless of the virtualization technology used. This attribute is more visible in open-source VI managers, which usually provide pluggable drivers to interact with multiple hypervisors. In this direction, the aim of libvirt [75] is to provide a uniform API that VI managers can use to manage domains (a VM or container running an instance of an operating system) in virtualized nodes using standard operations that abstract hypervisor specific calls.
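
A minimal sketch of this uniform-API idea using the libvirt Java bindings: the connection URI selects the hypervisor, while the management calls stay the same. The domain name below is illustrative.

// Managing a domain through libvirt's uniform API [75]. The same calls work
// whether the node runs QEMU/KVM, Xen, or another supported hypervisor;
// only the connection URI changes. Assumes the libvirt-java bindings.
import org.libvirt.Connect;
import org.libvirt.Domain;
import org.libvirt.LibvirtException;

public class LibvirtSketch {
    public static void main(String[] args) throws LibvirtException {
        // "qemu:///system" targets a local KVM/QEMU node; a Xen node would
        // use "xen:///" with the same management code below.
        Connect conn = new Connect("qemu:///system");
        try {
            Domain dom = conn.domainLookupByName("vm1"); // hypothetical domain name
            dom.suspend();                               // pause the guest
            dom.resume();                                // continue execution
            System.out.println("Domain state: " + dom.getInfo().state);
        } finally {
            conn.close();
        }
    }
}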

Storage Virtualization:

Virtualizing storage means abstracting logical storage from physical storage. By consolidating all available storage devices in a data center, it allows creating virtual disks independent from device and location. Storage devices are commonly organized in a storage area network (SAN) and attached to servers via protocols such as Fibre Channel, iSCSI, and NFS; a storage controller provides the layer of abstraction between virtual and physical storage [76].

In the VI management sphere, storage virtualization support is often restricted to commercial products of companies such as VMWare and Citrix. Other products feature ways of pooling and managing storage devices, but administrators are still aware of each individual device.

Self-Service, On-Demand Resource Provisioning:

Self-service access to resources such as VMs has been perceived as one of the most attractive features of cloud computing. This feature enables users to directly obtain services from clouds, such as spawning the creation of a server and tailoring its software, configurations, and security policies, without interacting with a human system administrator. This capability "eliminates the need for more time-consuming, labor-intensive, human-driven procurement processes familiar to many in IT" [64]. Therefore, exposing a self-service interface, through which users can easily interact with the system, is a highly desirable feature of a VI manager.

Virtual Networking:

Virtual networks allow creating isolated networks on top of a physical infrastructure, independently of physical topology and locations [65]. A virtual LAN (VLAN) allows isolating traffic that shares a switched network, allowing VMs to be grouped into the same broadcast domain. Additionally, a VLAN can be configured to block traffic originating from VMs on other networks. Similarly, the VPN (virtual private network) concept is used to describe a secure and private overlay network on top of a public network (most commonly the public Internet) [66].

Support for creating and configuring virtual networks to group VMs placed throughout a data center is provided by most VI managers. Additionally, VI managers that interface with public clouds often support secure VPNs connecting local and remote VMs.

Dynamic Resource Allocation:

Increased awareness of energy consumption in data centers has encouraged the practice of dynamically consolidating VMs onto fewer servers. In cloud infrastructures, where applications have variable and dynamic needs, capacity management and demand prediction are especially complicated. This triggers the need for dynamic resource allocation aiming at a timely match of supply and demand [67].

Energy consumption reduction and better management of SLAs can be achieved by dynamically remapping VMs to physical machines at regular intervals. Machines that are not assigned any VM can be turned off or put on a low power state. In the same fashion, overheating can be avoided by moving load away from hotspots [68].

A number of VI managers include a dynamic resource allocation feature that continuously monitors utilization across resource pools and reallocates available resources among VMs according to application needs.
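
The toy loop below illustrates such a monitor-and-reallocate cycle; all types here are hypothetical stand-ins invented for illustration, not the API of any VI manager surveyed in this chapter.

// A toy consolidation loop: hosts above an upper utilization threshold shed
// a VM; hosts below a lower threshold are drained so they can be powered
// down or put into a low-power state. Thresholds and policy are simplistic.
import java.util.*;

public class ConsolidationSketch {
    static final double OVERLOAD = 0.85, UNDERLOAD = 0.20;

    record Vm(String name, double load) {}
    static class Host {
        final String name; final List<Vm> vms = new ArrayList<>();
        Host(String name) { this.name = name; }
        double utilization() { return vms.stream().mapToDouble(Vm::load).sum(); }
    }

    static void rebalance(List<Host> hosts) {
        for (Host h : hosts) {
            if (h.utilization() > OVERLOAD && !h.vms.isEmpty()) {
                migrate(h.vms.get(0), h, leastLoaded(hosts, h));
            } else if (h.utilization() < UNDERLOAD) {
                // Drain the host so it can be switched off afterwards.
                for (Vm vm : new ArrayList<>(h.vms)) migrate(vm, h, leastLoaded(hosts, h));
            }
        }
    }

    static Host leastLoaded(List<Host> hosts, Host exclude) {
        return hosts.stream().filter(h -> h != exclude)
                .min(Comparator.comparingDouble(Host::utilization)).orElseThrow();
    }

    static void migrate(Vm vm, Host from, Host to) {
        from.vms.remove(vm); to.vms.add(vm); // a real manager would live-migrate here
        System.out.printf("migrated %s: %s -> %s%n", vm.name(), from.name, to.name);
    }

    public static void main(String[] args) {
        Host a = new Host("host-a"), b = new Host("host-b");
        a.vms.add(new Vm("vm1", 0.6)); a.vms.add(new Vm("vm2", 0.4)); // host-a overloaded
        rebalance(List.of(a, b));
    }
}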

Virtual Clusters:

Numerous VI managers can holistically manage groups of VMs. This feature is useful for provisioning virtual clusters on demand and for interconnecting VMs in multi-tier Internet applications [69].

Reservation and Negotiation Mechanism:

When users request computational resources to be available at a specific time, the requests are termed advance reservations (AR), in contrast to best-effort requests, where users request resources whenever available [70]. To support complex requests such as AR, a VI manager must allow users to "lease" resources with more complex terms (e.g., the period of time of a reservation). This is especially useful in clouds where resources are scarce; since not all requests may be satisfied immediately, they can benefit from VM placement strategies that support queues, priorities, and advance reservations [25].

Additionally, leases may be negotiated and renegotiated, allowing provider and consumer to modify a lease or present counter proposals until an agreement is reached. This feature is illustrated by the case in which an AR request for a given slot cannot be satisfied, but the provider can offer a distinct slot that is still satisfactory to the user. This problem has been addressed in OpenPEX, which incorporates a bilateral negotiation protocol that allows users and providers to come to an alternative agreement by exchanging offers and counter offers [71].
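
The small model below sketches one round of such an exchange: a provider either confirms a requested slot or counter-offers a shifted one. The types, capacity, and shifting policy are invented for illustration and do not reproduce OpenPEX's actual protocol.

// A toy advance-reservation negotiation: accept the requested slot if
// capacity allows, otherwise counter-offer the same lease at a later start.
import java.time.Duration;
import java.time.Instant;

public class LeaseNegotiationSketch {
    record LeaseRequest(int vms, Instant start, Duration length) {}
    record Offer(boolean accepted, Instant start) {}

    static final int CAPACITY = 8; // illustrative total VM slots

    // Provider side: grant the slot if it fits; otherwise propose a shift.
    static Offer propose(LeaseRequest req, int vmsAlreadyReservedAtStart) {
        if (req.vms() + vmsAlreadyReservedAtStart <= CAPACITY) {
            return new Offer(true, req.start());
        }
        return new Offer(false, req.start().plus(Duration.ofHours(2)));
    }

    public static void main(String[] args) {
        LeaseRequest req = new LeaseRequest(4, Instant.parse("2012-01-01T09:00:00Z"),
                                            Duration.ofHours(3));
        Offer offer = propose(req, 6); // 6 of 8 VMs already reserved in that slot
        System.out.println(offer.accepted()
                ? "AR confirmed at " + offer.start()
                : "Counter-offer: start at " + offer.start());
        // The consumer may now accept the counter-offer or send a new request.
    }
}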

High Availability and Data Recovery:

The high availability (HA) feature of VI managers aims at minimizing application downtime and preventing business disruption. A few VI managers accomplish this by providing a failover mechanism, which detects failure of both physical and virtual servers and restarts VMs on healthy physical servers. This style of HA protects from host, but not VM, failures [72], [73].

For mission critical applications, when a failover solution involving restarting VMs does not suffice, additional levels of fault tolerance that rely on redundancy of VMs are implemented. In this style, redundant and synchronized VMs (running or in standby) are kept in a secondary physical server. The HA solution monitors failures of system components such as servers, VMs, disks, and network and ensures that a duplicate VM serves the application in case of failures [73].

Data backup in clouds should take into account the high data volumes involved in VM management. Frequent backup of a large number of VMs, each with multiple virtual disks attached, should be done with minimal interference to system performance. In this sense, some VI managers offer data protection mechanisms that perform incremental backups of VM images. The backup workload is often assigned to proxies, thus offloading production servers and reducing network overhead [74].

1.5.1 Desired features of a Cloud

Self-Service:

Consumers of cloud computing services expect on-demand, nearly instant access to resources. Clouds must therefore support self-service, so consumers can request, customize, pay for, and use services without intervention of human operators [78].

Per-Usage Metering and Billing:

Services must be priced on a short-term basis (e.g., by the hour), allowing users to release (and not pay for) resources as soon as they are not required [47]. For these reasons, clouds must implement features to allow efficient trading of services, such as pricing, accounting, and billing [77]. Metering should be done accordingly for different types of service (e.g., storage, processing, bandwidth, and energy usage), and usage promptly reported, thus providing greater transparency [78].
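
A toy per-usage metering computation for this model; the resource types and hourly rates below are invented for illustration only.

// Per-usage metering: usage is metered per resource type and billed at
// illustrative hourly rates; released resources simply stop accruing usage.
import java.util.Map;

public class MeteringSketch {
    // Invented prices per metered unit.
    static final Map<String, Double> RATE = Map.of(
            "vm-hours", 0.10,            // $ per VM-hour
            "storage-gb-hours", 0.0001,  // $ per GB stored per hour
            "bandwidth-gb", 0.12);       // $ per GB transferred

    static double bill(Map<String, Double> usage) {
        return usage.entrySet().stream()
                .mapToDouble(e -> e.getValue() * RATE.getOrDefault(e.getKey(), 0.0))
                .sum();
    }

    public static void main(String[] args) {
        // One VM for 72 hours, 50 GB stored for the same period, 10 GB transferred.
        double total = bill(Map.of("vm-hours", 72.0,
                                   "storage-gb-hours", 50.0 * 72,
                                   "bandwidth-gb", 10.0));
        System.out.printf("charge: $%.2f%n", total); // prints: charge: $8.76
    }
}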

Elasticity: Cloud computing provides the illusion of infinite computing resources available on demand [47]. Therefore users expect clouds to rapidly provide resources in any quantity at any time. In particular, it is expected that the additional resources can be (i) provisioned, possibly automatically, when an application load increases and (ii) released when load decreases (scale up and down) [78].

Customization:

In a multi-tenant cloud a great disparity between user needs is often the case. Thus, resources rented from the cloud must be highly customizable. In the case of infrastructure services, customization means allowing users to deploy specialized virtual appliances and to be given privileged (root) access to the virtual servers. Other service classes (PaaS and SaaS) offer less flexibility and are not suitable for general-purpose computing [47], but still are expected to provide a certain level of customization.

1.5.2 Features of Infrastructure as a Service Providers

Despite being based on a common set of features, IaaS offerings can be distinguished by the availability of specialized features that influence the cost-benefit ratio experienced by user applications when moved to the cloud. The most relevant features are:

(i) geographic distribution of data centers

(ii) variety of user interfaces and APIs to access the system

(iii) specialized components and services that aid particular applications (e.g., load balancers, firewalls)

(iv) choice of virtualization platform and operating systems

(v) different billing methods and periods (e.g., prepaid vs. post-paid, hourly vs. monthly)

1.5.3 Features of Platform as a Service Providers

Public PaaS providers commonly offer a development and deployment environment that allows users to create and run their applications with little or no concern for the low-level details of the platform. In addition, specific programming languages and frameworks are made available on the platform, as well as other services such as persistent data storage and in-memory caches.

Programming Models, Languages, and Frameworks:

Programming models made available by PaaS providers define how users can express their applications using higher levels of abstraction and run them efficiently on the cloud platform. Each model aims at efficiently solving a particular problem. In the cloud computing domain, the most common activities that require specialized models are: processing of large datasets on clusters of computers (the MapReduce model); development of request-based Web services and applications; definition and orchestration of business processes in the form of workflows (the Workflow model); and high-performance distributed execution of various computational tasks.
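
To make the MapReduce model concrete, here is the classic word-count sketch against the Hadoop API (org.apache.hadoop); it illustrates the programming model only and is not part of any platform surveyed in this chapter.

// Word count in the MapReduce model: the map phase emits (word, 1) pairs and
// the reduce phase sums them; the framework handles shuffling, distribution
// across the cluster, and fault tolerance.
import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;

public class WordCountSketch {
    // Map: emit (word, 1) for every token of the input split.
    public static class TokenizerMapper
            extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context ctx)
                throws IOException, InterruptedException {
            StringTokenizer it = new StringTokenizer(value.toString());
            while (it.hasMoreTokens()) {
                word.set(it.nextToken());
                ctx.write(word, ONE);
            }
        }
    }

    // Reduce: sum the per-word counts produced by all mappers.
    public static class IntSumReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context ctx)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable v : values) sum += v.get();
            ctx.write(key, new IntWritable(sum));
        }
    }
}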

For user convenience, PaaS providers usually support multiple programming languages. The most commonly used languages on platforms include Java and Python (e.g., VirtualBox-based environments, Google App Engine), .NET languages (e.g., Microsoft Azure), and Ruby (e.g., Heroku). Force.com has devised its own programming language (Apex) and an Excel-like query language, which provide higher levels of abstraction to key platform functionalities.

An array of software frameworks is usually made available to PaaS developers, depending on application focus. Providers that focus on Web and enterprise application hosting offer popular frameworks such as Ruby on Rails, Spring, Java EE, and .NET.

Persistence Options:

A persistence layer is essential to allow applications to record their state and recover it in case of crashes, as well as to store user data. Traditionally, Web and enterprise application developers have chosen relational databases as the preferred persistence method. These databases offer fast and reliable structured data storage and transaction processing, but may lack scalability to handle several petabytes of data stored in commodity computers [80].

In the cloud computing domain, distributed storage technologies have emerged, which seek to be robust and highly scalable, at the expense of relational structure and convenient query languages. For example, Amazon SimpleDB and the Google App Engine datastore offer schema-less, automatically indexed database services [79].

Data queries can be performed only on individual tables; that is, join operations are unsupported for the sake of scalability.

1.6 Literature Review

Bresnahan, J., Freeman, T., LaBissoniere, D., Keahey, K. (2011)

This paper introduces a specific addition to the IaaS cloud ecosystem: the cloudinit.d program, a tool for launching, configuring, monitoring, and repairing a set of interdependent VMs in an infrastructure-as-a-service (IaaS) cloud or over a set of IaaS clouds. The cloudinit.d program was developed in the context of the Ocean Observatory Initiative (OOI) project to help it launch and maintain complex virtual platforms provisioned on demand on top of infrastructure clouds. Like the UNIX init.d program, cloudinit.d can launch specified groups of services, and the VMs in which they run, at different run levels representing dependencies of the launched VMs. Once launched, cloudinit.d monitors the health of each running service to ensure that the overall application is operating properly. If a problem is detected in a service, cloudinit.d will restart only that service and any dependent services that failed with it [1].

Bresnahan, J., LaBissoniere, D., Freeman, T., Keahey, K. (2011)

Amazon’s Simple Storage Service (S3) are provides reliable data cloud access to commercial users, scientific data centers must provide their users with a similar level of service. Unfortunately, S3 is closed. Bresnahan et; al., discusses the an open source implementation of the Amazon S3 REST API. It is packaged with the Nimbus IaaS toolkit and provides scalable and reliable access to scientific data. Its performance compares favorably with that of GridFTP and SCP [2].

M.Brock, A. Goscinski (2010)

This article proposed applying the Resources Via Web Services (RVWS) framework to offer a higher-level abstraction of clouds in the form of a new technology. The authors' technology makes possible service (and resource) publication, discovery, and selection based on dynamic attributes that express the current state and characteristics of cloud services and resources.

M. Brock mainly concentrates on an implementation that allows easy publication, discovery, selection, and use of an existing cluster (one of the most frequently used cloud resources) via a simple interface using Web pages; extensive sets of tests demonstrated that the design is sound and the proposed technology feasible. The proposed solution is beneficial: instead of spending time and effort locating, evaluating, and learning about clusters, clients are able to easily discover, select, and use the required resources. Furthermore, service providers (which can be entities external to the clouds themselves) can easily publish, and keep current, information about their services and the resources behind them [3].

M.Brock, A. Goscinski (2008)

This article presents the Resources Via Web Instances (RVWI) framework. RVWI grants web services the ability to include their state and characteristics in their WSDL. This was achieved by allowing snapshots (instances) of a web service to be listed in the WSDL of the web service. Instances were used because they contain state and characteristic information taken directly from the web service. Thanks to the inclusion of state and characteristics, queries for web services can now be carried out on the availability of a web service and the "dimensions" of its resources [4].

Ludmila Cherkasova, Diwaker Gupta, Amin Vahdat (2007)

At first glance, it may seem that the choice of VM scheduler and parameter configuration is not relevant to most users, because they are often shielded from such decisions. However, the authors' experience suggests that "reasonable defaults" (e.g., the equal weights typically used in WC-mode) are not very useful beyond toy experiments. Thus far, all their experiments have focused on one particular virtualization platform, Xen; it was chosen for, among other reasons, source code availability and the freedom to modify it [5].

M. Devare, M. Sheikhalishahi, L. Grandinetti (2010)

M. Devare implemented a Desktop Cloud system at the University of Calabria. This system uses the idle resources of desktops with the permission of their owners. The system works on the "utilization" factor and on mutual agreement between the scheduler strategies, the owner, and the consumer. Various new cloud lease schemes and strategies are under development in the Desktop Cloud system [6].

M. Devare, M. Sheikhalishahi, L. Grandinetti (2009)

M. Devare discusses various hypervisors, their development strategies, and their facilities for Cloud systems: Xen, VirtualBox, KVM, and VMware. Moreover, the paper illustrates the reduction in electricity cost achieved through virtualization and cloud systems [7].

Andreas Berl, Erol Gelenbe, Marco Di Girolamo, Giovanni Giuliani, Minh Quan Dang, Kostas Pentikousis (2009)

This paper surveys current best practice and relevant work, and proposes energy-saving techniques useful in cloud computing environments. It identifies the main sources of energy consumption and significant trade-offs between performance, QoS, and energy efficiency, offering insight into the manner in which energy savings can be achieved. The paper also focuses on advantages such as: (i) reducing software- and hardware-related energy costs of single or federated data centers that execute "Cloud Applications"; (ii) improving load balancing, and hence QoS and performance, of single and federated data centers; (iii) reducing energy consumption due to communication; and (iv) reducing GHG and CO2 emissions resulting from data centers and networks, so as to offer computing power that is environment-conserving [8].

L. Youseff, M. Butrico, D. Da Silva (2008)

This article identifies and classifies the main security concerns and solutions in cloud computing, and proposes a taxonomy of security in cloud computing, giving an overview of the current status of security in this emerging technology [9].

Keahey, K., Tsugawa, M., Matsunaga, A., Fortes, J. (2009)

The authors describe the context in which cloud computing arose, discuss its current strengths and shortcomings, and point to an emerging computing pattern it enables that they call sky computing. Combining the ability to trust remote sites with a trusted networking environment makes it possible to lay a virtual site over distributed resources. In sky computing, dynamically provisioned distributed domains are built over several clouds [10].

S. Kelly, J.-P. Tolvanen (2008)

Domain-Specific Modeling (DSM) is the latest approach to software development, promising to greatly increase the speed and ease of software creation. Early adopters of DSM have been enjoying productivity increases of 500–1000% in production for over a decade. This paper introduces DSM and offers examples from various fields to illustrate to experienced developers how DSM can improve software development in their teams [11].

G. Lawton (2008)

This paper concentrates on PaaS systems, which are generally hosted, Web-based application-development platforms providing end-to-end or, in some cases, partial environments for developing full programs online. They handle tasks from editing code to debugging, deployment, runtime, and management. In PaaS, the system's provider makes most of the choices that determine how the application infrastructure operates, such as the type of OS used, the APIs, the programming language, and the management capabilities. Users build their applications with the provider's on-demand tools and collaborative development environment [12].

Jiandun Li, Junjie Peng, Wu Zhang (2011)

This paper proposed a hybrid energy-efficient scheduling algorithm for private clouds. The algorithm uses dynamic migration. Experimental results show that it reduces response time, conserves energy, and achieves a higher level of load balancing [13].

Jiandun Li, Junjie Peng, Wu Zhang (2011)

This paper takes a different approach to VM workflow scheduling. The proposed scheduling algorithm saves more time and energy and achieves a higher level of load balancing, making good use of the hardware at lower cost [14].

Bin Lin, Arindam Mallik, Peter Dinda, Gokhan Memik, Robert Dick (2007)

The authors describe and evaluate two new, independently applicable power reduction techniques for power management on processors that support dynamic voltage and frequency scaling (DVFS): user-driven frequency scaling (UDFS) and process-driven voltage scaling (PDVS). In PDVS, a CPU-customized profile is derived offline that encodes the minimum voltage needed to achieve stability at each combination of CPU frequency and temperature. UDFS, on the other hand, dynamically adapts CPU frequency to the individual user and the workload through direct user feedback. Their UDFS algorithms dramatically reduce typical operating frequencies and voltages while maintaining performance at a satisfactory level for each user [15].

Marshall, P., Tufo, H., Keahey, K., LaBissoniere, D., Woitaszek (2011, 2012)

The authors proposed a cloud infrastructure that combines on-demand allocation of resources with opportunistic provisioning of cycles from idle cloud nodes to other processes by deploying backfill VMs. For demonstration and experimental evaluation, they used the Nimbus cloud computing toolkit to deploy backfill VMs on idle cloud nodes for processing a high-throughput computing (HTC) workload. Initial tests show an increase in IaaS cloud utilization from 37.5% to 100% during a portion of the evaluation trace, with only 6.39% overhead cost for processing the HTC workload [16], [17].

Marshall, P., Keahey, K., Freeman, T. (2010)

In this work, a model of an "elastic site" is proposed that adapts services provided within a site, such as batch schedulers, storage archives, or web services, to take advantage of elastically provisioned resources. The authors describe the system architecture along with the issues involved in elastic provisioning, such as security, privacy, and various logistical considerations. To avoid over- or under-provisioning of resources, three different policies are proposed. They implemented a resource manager, built on the Nimbus toolkit, to dynamically and securely extend existing physical clusters into the cloud, and developed and evaluated policies for resource provisioning on Nimbus-based clouds at the University of Chicago and Indiana University, and on Amazon EC2 [18].

M. Sheikhalishai, M. Devare, L. Grandinetti (2011)

This paper argued that the two technologies, Grid and Cloud, will not replace each other but are complementary: the Cloud is suitable for "hosting", while the Grid provides "federation" for Virtual Organizations. Various scientific applications can work well with the help of Globus Toolkit components [19].

M. Sheikhalishai, M. Devare, L. Grandinetti (2011)

The authors proposed a multi-level, general-purpose scheduling approach for energy-efficient computing through the software part of green computing. Consolidation is well defined for the IaaS cloud paradigm, though the approach is not limited to the IaaS cloud model. Policies, models, algorithms, and cloud pricing strategies are discussed in general, and solutions in the context of Haizea are demonstrated through experiments. A large improvement in utilization and energy consumption is found when workloads run at lower frequencies, with energy consumption and utilization improving together [20].

K. Sledziewski (2009, 2010)

A framework for developing Cloud-based applications was presented and its application illustrated by a case study. The evaluation of the framework and the resulting tool showed that this approach can be effective in addressing many of the issues that hinder wider adoption of the Cloud, including complexity, development time, and cost ineffectiveness. This was achieved in two stages. First, a Domain Specific Language was implemented and deployed as SaaS. Second, the SaaS was made accessible to designers for creating applications on the Cloud. It was also demonstrated that the application of Domain Specific Languages enhances the process of developing and deploying applications seamlessly on the Cloud. The authors consider that Domain Specific Languages offer a valid solution for delivering Cloud-based applications in the form of Software as a Service [21], [22].

B.Sotomayor, R.Santiago Montero, I.Martín Llorente, I.Foster (2009)

In this paper the researchers present a model for predicting the various runtime overheads involved in using VMs, allowing advance reservations to be supported efficiently. They extend the Haizea lease management software with the OpenNebula VIM so that scheduling decisions can be enacted, and provide a model for suspending and resuming VMs. The paper mainly focuses on physical and simulated experimental results showing the degree of accuracy of the model and the long-term effects of its variables on several workloads. B. Sotomayor developed a plug-and-play virtual infrastructure manager named Haizea; Haizea works with the OpenNebula IaaS cloud system, and several leasing strategies are available in experimental and simulation modes [23].

B. Sotomayor, R. Santiago Montero, I. Martín Llorente, I. Foster (2009)

This work combines the OpenNebula IaaS cloud system with Haizea as its virtual infrastructure (VI) manager. By relying on a flexible, open, and loosely coupled architecture, OpenNebula is designed from the outset to be easy to integrate with other components, such as the Haizea lease manager. When used together, OpenNebula and Haizea form the only VI management solution providing leasing capabilities beyond immediate provisioning, including best-effort leases and advance reservation of capacity [24], [25].
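To see what an advance reservation involves for the scheduler, the sketch below tests whether a requested interval still has enough free nodes given already-accepted reservations. It mimics a lease manager's slot-table check in spirit only; the data structure and the deliberately conservative overlap test are assumptions, not Haizea's implementation.

```python
# Accept an advance reservation only if capacity remains over the whole
# requested interval. Conservative: every reservation overlapping the
# interval is counted against capacity for its full duration.

def ar_feasible(reservations, start, end, nodes_needed, total_nodes):
    free = total_nodes
    for t0, t1, n in reservations:       # (start, end, nodes) tuples
        if t0 < end and start < t1:      # intervals overlap
            free -= n
    return free >= nodes_needed

booked = [(10, 20, 4), (15, 30, 2)]
print(ar_feasible(booked, start=12, end=18, nodes_needed=2, total_nodes=8))  # True
```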

Jan Stoess, Christian Lang, Marcus Reinhardt (2006)

This paper presents a management framework for energy-aware processor management in virtualized environments. A host-level scheduler subsystem is responsible for allocating processors to VMs and controls the energy consumption of each processor, resorting to migration where local allocation alone is not sufficient. Two scheduler implementations are described, with the focus on migration and load distribution [26].
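A rough sketch of what host-level, energy-aware allocation can look like is given below: energy is accounted to each VM, and VMs that exceed their budget are throttled (or would be candidates for migration). The policy, names, and numbers are illustrative assumptions, not the framework from [26].

```python
# Hypothetical per-VM energy accounting with a budget-enforcement step.

def enforce_budgets(vm_energy_j, vm_budget_j):
    """Map each VM to 'run', or 'throttle' when it exceeds its budget."""
    return {vm: ("throttle" if used > vm_budget_j[vm] else "run")
            for vm, used in vm_energy_j.items()}

print(enforce_budgets({"vm1": 120.0, "vm2": 40.0},
                      {"vm1": 100.0, "vm2": 100.0}))
# {'vm1': 'throttle', 'vm2': 'run'}
```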

Susanne Albers (2010)

Susanne Albers suggests that algorithmic techniques can help reduce energy consumption in computing environments. The goal is to design energy-efficient algorithms that reduce consumption while minimizing the compromise to service. The article focuses on the system and device level: how energy consumption can be minimized in a single computational device. The author first concentrates on power-down mechanisms, then on power management and competitiveness, and finally on further algorithmic solutions [27].
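The classic example from this line of work is the power-down rule: during an idle period, keep the device active until the energy spent idling equals the cost of one sleep/wake cycle, then power down. This simple rule is 2-competitive against the offline optimum, as the sketch below demonstrates; the power and transition costs are made-up numbers.

```python
# Competitive power-down: sleep once idle energy equals the wake cost.

def alg_cost(idle_time, p_active=1.0, wake_cost=3.0):
    threshold = wake_cost / p_active          # break-even idle time
    if idle_time <= threshold:
        return p_active * idle_time           # stayed active throughout
    return p_active * threshold + wake_cost   # slept after the threshold

for t in (0.5, 2.0, 10.0):
    opt = min(1.0 * t, 3.0)                   # offline optimum: cheaper option
    print(f"idle {t}: ALG={alg_cost(t):.1f}, OPT={opt:.1f}")  # ratio <= 2
```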

Wei-Tek Tsai, Xin Sun, Janaka Balasooriya (2010)

This paper surveys current cloud computing architectures, discusses issues with current cloud implementations, and proposes a Service-Oriented Cloud Computing Architecture (SOCCA) together with a Multi-Tenancy Architecture (MTA) so that clouds can interoperate with one another. Furthermore, SOCCA proposes high-level designs to better support the multi-tenancy feature of cloud computing [28].

Chuliang Weng, Zhigang Wang, Minglu Li, Xinda Lu (2009)

Conventional CPU scheduling strategies in a virtual machine monitor can deteriorate performance when a VM executes concurrent applications such as parallel or multithreaded programs. The authors analyze the CPU scheduling problem in the VM monitor theoretically and show that an asynchronous CPU scheduling strategy wastes considerable physical CPU time when the majority of the workload consists of concurrent applications. They therefore present a hybrid scheduling framework that supports two types of VMs: the high-throughput type and the concurrent type. A VM can be set to the concurrent type when the majority of its workload is concurrent, so that its virtual CPUs are scheduled together and the cost of synchronization is reduced [29].
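The core mechanism can be pictured as gang scheduling for the concurrent type: all virtual CPUs of a concurrent VM are dispatched together (or not at all), so spinlocks and barriers do not stall on descheduled vCPUs, while high-throughput VMs simply absorb the remaining cores. The sketch below is a simplified illustration, not the actual VMM scheduler from [29].

```python
# One scheduling round: concurrent VMs are all-or-nothing, throughput
# VMs take whatever physical CPUs remain.

def schedule_tick(vms, free_pcpus):
    plan = {}
    for vm in vms:
        if vm["type"] == "concurrent":
            if vm["vcpus"] <= free_pcpus:     # gang-schedule every vCPU
                plan[vm["name"]] = vm["vcpus"]
                free_pcpus -= vm["vcpus"]
        else:
            n = min(vm["vcpus"], free_pcpus)  # best-effort share
            if n:
                plan[vm["name"]] = n
                free_pcpus -= n
    return plan

vms = [{"name": "mpi", "type": "concurrent", "vcpus": 4},
       {"name": "web", "type": "throughput", "vcpus": 2}]
print(schedule_tick(vms, free_pcpus=6))       # {'mpi': 4, 'web': 2}
```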

R. Buyya, A. Beloglazov, J. Abawajy (2010)

This paper focuses on the vision, challenges, and architectural elements of energy-efficient management of Cloud computing environments. The authors concentrate on dynamic resource provisioning and allocation algorithms that consider the synergy between data center infrastructure elements (hardware, power units, cooling, and software) and work holistically to boost data center energy efficiency and performance. In particular, the paper proposes (a) architectural principles for energy-efficient management of Clouds; (b) energy-efficient resource allocation policies and scheduling algorithms that consider quality-of-service expectations and the power-usage characteristics of devices; and (c) a novel software technology for energy-efficient management of Clouds. The approach is validated through a rigorous performance evaluation using the CloudSim toolkit; the results demonstrate that the Cloud computing model has immense potential, offering significant gains in response time and cost savings under dynamic workload scenarios [30].
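Work in this vein commonly models a server's draw as linear in CPU utilisation between an idle floor and a full-load peak, which is what makes consolidation pay off. The sketch below shows the arithmetic; the wattages are illustrative, and this is the general model rather than the paper's exact parameters.

```python
# Linear power model: P(u) = P_idle + (P_max - P_idle) * u, u in [0, 1].

def host_power(util, p_idle=160.0, p_max=250.0):
    return p_idle + (p_max - p_idle) * util

print(2 * host_power(0.5))   # 410.0 W: two half-loaded hosts
print(host_power(1.0))       # 250.0 W: one full host, the other powered off
```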

Nicolae, B., Bresnahan, J., Keahey, K., Antoniu, G. (2011)

This paper addresses the challenges of snapshotting VM images by proposing a virtual file system specifically optimized for virtual machine image storage. It is based on a lazy transfer scheme coupled with object versioning that handles snapshotting transparently, in a hypervisor-independent fashion, ensuring high portability across different configurations. Large-scale experiments on hundreds of nodes show a reduction in bandwidth utilization of as much as 90% [31].
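The versioning idea can be pictured as copy-on-write deltas: a snapshot freezes the chunks written so far, later writes go into a new delta, and reads walk the version chain (fetching untouched chunks lazily from the original image). The sketch below is an in-memory illustration of that scheme, not the proposed virtual file system.

```python
# Chunk-level object versioning: each snapshot is a delta over its
# predecessors; reads search deltas newest-first.

class VersionedImage:
    def __init__(self):
        self.versions = [{}]                 # version 0: empty delta

    def write(self, chunk_id, data):
        self.versions[-1][chunk_id] = data   # copy-on-write per chunk

    def snapshot(self):
        self.versions.append({})             # freeze, start a new delta
        return len(self.versions) - 1

    def read(self, chunk_id, version):
        for delta in reversed(self.versions[:version + 1]):
            if chunk_id in delta:
                return delta[chunk_id]
        return None                          # would trigger a lazy remote fetch

img = VersionedImage()
img.write(0, b"boot")
v1 = img.snapshot()
img.write(1, b"log")
print(img.read(0, v1), img.read(1, v1))      # b'boot' b'log'
```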

Bo Li, Jianxin Li (2009) present an energy-aware heuristic algorithm that distributes a workload across virtual machines using the minimum number of virtual machines or nodes that the workload requires. Workload migration, workload resizing, and virtual machine migration are the approaches used in the algorithm [32].
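Packing workloads onto the fewest nodes is essentially bin packing, so a standard heuristic such as first-fit decreasing makes the idea concrete. The sketch below is a generic illustration with made-up loads, not the paper's specific heuristic.

```python
# First-fit decreasing: place each workload (normalized CPU share) into
# the first node with room, opening a new node only when none fits.

def first_fit_decreasing(loads, capacity=1.0):
    nodes = []                               # remaining capacity per node
    for load in sorted(loads, reverse=True):
        for i, free in enumerate(nodes):
            if load <= free:
                nodes[i] -= load
                break
        else:
            nodes.append(capacity - load)    # open a new node
    return len(nodes)

print(first_fit_decreasing([0.5, 0.7, 0.3, 0.2, 0.4]))  # 3 nodes, not 5
```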

Hien Nguyen Van, Frederic Dang Tran, Jean-Marc Menaud (2010)

In this paper, the authors propose a resource management framework combining a utility-based dynamic virtual machine provisioning manager with a dynamic VM placement manager; both problems are modeled as constraint satisfaction problems. The VM provisioning process aims at maximizing a global utility that captures both the performance of the hosted applications with regard to their SLAs and the energy-related operational cost of the cloud infrastructure. Several experiments show how the system can be controlled through high-level handles to strike different trade-offs between application performance and energy consumption, or to arbitrate resource allocations in case of contention [33].
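A toy version of such a global utility is sketched below: average per-application SLA utility minus a weighted energy term, so that raising the weight shifts the optimum toward fewer active hosts. The functional form and the weight are illustrative assumptions, not the paper's exact formulation.

```python
# Global utility = mean application utility - weight * active hosts.

def global_utility(app_utils, active_hosts, energy_weight=0.3):
    return sum(app_utils) / len(app_utils) - energy_weight * active_hosts

print(global_utility([0.9, 0.8], active_hosts=2))  # 0.25
print(global_utility([0.7, 0.6], active_hosts=1))  # 0.35: cheaper wins here
```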

C. Clark, K. Fraser, S. Hand, J. G. Hansen, E. Jul, C. Limpach, I. Pratt, A. Warfield (2005)

In this paper the authors consider the design options for migrating OSes running services with liveness constraints, focusing on data center and cluster environments. They introduce and analyze the concept of the writable working set, and present the design, implementation, and evaluation of high-performance OS migration built on top of the Xen VMM [34].
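The mechanism at the heart of this approach is iterative pre-copy: transfer all memory while the VM keeps running, then repeatedly re-send the pages dirtied in the meantime until the residue (the writable working set) is small enough for a brief stop-and-copy. The sketch below simulates that loop with made-up rates.

```python
# Pre-copy converges when the link outpaces the dirty rate; the returned
# value is the small set copied during the final stop-and-copy pause.

def pre_copy(total_pages, dirty_rate, link_rate, stop_at=50, max_rounds=10):
    to_send = total_pages
    for rnd in range(max_rounds):
        round_time = to_send / link_rate          # time to push this round
        dirtied = int(dirty_rate * round_time)    # pages re-dirtied meanwhile
        print(f"round {rnd}: sent {to_send}, re-dirtied {dirtied}")
        if dirtied <= stop_at:
            return dirtied
        to_send = dirtied
    return to_send

pre_copy(total_pages=100_000, dirty_rate=2_000, link_rate=10_000)
```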

C. C. Lee and D. T. Lee (1985)

This paper considers the one-dimensional on-line bin-packing problem. A simple O(1)-space and O(n)-time algorithm, called HARMONIC_M, is presented. It is shown that this algorithm achieves a worst-case performance ratio of less than 1.692, which is better than that of the O(n)-space and O(n log n)-time algorithm FIRST FIT. It is also shown that 1.691… is a lower bound for all O(1)-space on-line bin-packing algorithms. Finally, a revised version of HARMONIC_M with an improved performance ratio is presented.
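A compact rendition of the HARMONIC idea is shown below: items are classified into harmonic intervals (1/(k+1), 1/k], and a bin only holds items of one class, k per bin, so the algorithm keeps just one open bin per class (O(1) space) and handles each item once (O(n) time). The class cap m and the sample items are arbitrary choices for illustration.

```python
# HARMONIC-style on-line bin packing with one open bin per class.

def harmonic_pack(items, m=12):
    open_bins = {}                        # class k -> items in its open bin
    bins_used = 0
    for x in items:                       # sizes in (0, 1]
        k = min(int(1.0 / x), m)          # x falls in (1/(k+1), 1/k]
        bin_items = open_bins.setdefault(k, [])
        if not bin_items:
            bins_used += 1                # a fresh bin for this class
        bin_items.append(x)
        if len(bin_items) == k:           # class-k bins hold k items
            open_bins[k] = []             # close the bin
    return bins_used

print(harmonic_pack([0.6, 0.3, 0.3, 0.3, 0.45, 0.2]))  # 4
```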


