Controlling System of Power and Performance

Cloud Computing simply denotes "Internet Computing". It is a technology that uses the internet and data centers to maintain data as well as applications. Cloud Computing allows businesses and consumers to access their personal files from any computer via the internet, and it enables more efficient computing through centralized storage, processing and bandwidth.

Simple examples of Cloud Computing are Gmail, Yahoo Mail and similar services. No software or server of your own is needed to use them; an internet connection is enough to start sending e-mails, and there is no need to worry about the internal processing. The Cloud service provider is responsible for all server and e-mail management. The analogy is: "If you want to stay in Chennai for one day, would you buy a house?" Likewise, users simply use the software, enjoying the benefits of Cloud Computing without owning the infrastructure.

For instance, a web server (i.e. a single computer) can run without Cloud Computing and serve, say, 500 pages per minute. If the website becomes popular, the audience will demand more pages; the server slows down and the audience loses interest. To handle this, the server should move to the Cloud: you rent computing power from a Cloud service provider who has thousands of servers, all connected together so that work can be shared between them. This solves the previous problem. The provider offers a pay-as-you-go model, which means you pay only for how much you use and pay more rent for extra usage. The ultimate goal of the Cloud economy is to optimize 1) user satisfaction and 2) Cloud profit.

How Does the Cloud Work?

Cloud Computing provides the ability to store, access, manipulate and share information without storing the data on a local computer.

There are three service models

Infrastructure as a Service (IaaS) - It provides physical or virtualized hardware in the form of storage, servers and network, and the customer pays a fee based on usage. An example of this is Amazon's S3.

Software as a Service (SaaS) - The user operates with the software only, sometimes for free or by subscription. There is no need to install any application or software on the computing device; it is used on the Cloud instead. An example of this is Google Apps, which provides online document creation and formatting on the Cloud.

Platform as a Service (PaaS) - It allows clients to develop their own applications without needing to buy hardware and software. Examples of this are Google App Engine and Windows Azure.

Cloud Storage:

Over the past decades, big internet-based companies such as Amazon and Google identified that only a small fraction of their data storage capacity was actually used. This led to renting out space and storing information on remote servers. Information is then cached on desktop computers, mobile phones or other internet-linked devices. Amazon Elastic Compute Cloud (EC2) and the Simple Storage Service (S3) are the best-known facilities currently available. The Energy Efficiency of cache services is low; it can be improved with the help of the Energy Efficient algorithms described below.

II) RELATED WORKS

2.1 I/O Virtualization

Virtualization is the separation of a resource, or a request for a service, from the underlying physical entity. I/O Virtualization is a methodology to improve the performance of servers. Due to Virtualization overhead, I/O operations are more expensive than on a native system. The significant I/O overhead of the page-flipping technique can be avoided by replacing it with memcpy-based data copying. I/O performance can be optimized with the help of the Virtual Machine Monitor (VMM), and the I/O performance overhead can be analyzed by doing a full functional breakdown with profiling tools.

Once the importance of Virtualization became known, hardware-level features became popular and were evaluated in the search for near-native performance. Intel Virtualization Technology is used to provide better I/O performance. All of these works focus only on network I/O; disk I/O is considered here in the cache device to improve the low disk I/O performance.

To handle multiple applications, Virtualized servers demand more network bandwidth and connections to more networks and storage. In virtualized data centers, I/O performance problems occur when multiple Virtual Machines (VMs) run on one server. This can be overcome with the help of I/O Virtualization.

2.2 Cache Device

Cooperative Cache is a kind of Remote Memory cache which improves the performance of a networked file system. It uses client memory as a cache. This caching scheme is effective because Remote Memory is faster than the local disk of the requesting client. An advanced buffer management technique for Cooperative Caching has been introduced based on the degree of locality: data with high locality should be placed in the high-level cache and data with low locality in the low-level cache. A Cooperative Caching system designed at the Virtualization layer also reduces disk I/O operations for the shared working sets of virtual machines.
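The locality-based placement idea can be illustrated with a short sketch. The following Python fragment is only an illustration of the principle, not the buffer management scheme of the cited work; the class name, the access-count heuristic and the locality threshold are assumptions made for this example.

from collections import defaultdict

class TwoLevelCache:
    # Illustrative two-level cache: blocks with a high recent access count
    # ("high locality") go to the fast high-level cache, the rest to the
    # larger low-level cache.
    def __init__(self, high_capacity, low_capacity, locality_threshold=4):
        self.high = {}                        # high-level (fast) cache: block -> data
        self.low = {}                         # low-level (slower) cache: block -> data
        self.high_capacity = high_capacity
        self.low_capacity = low_capacity
        self.threshold = locality_threshold
        self.access_count = defaultdict(int)  # crude locality estimate

    def put(self, block_id, data):
        self.access_count[block_id] += 1
        self.high.pop(block_id, None)         # drop any stale copy
        self.low.pop(block_id, None)
        if self.access_count[block_id] >= self.threshold:
            target, capacity = self.high, self.high_capacity
        else:
            target, capacity = self.low, self.low_capacity
        if len(target) >= capacity:
            target.pop(next(iter(target)))    # evict the oldest inserted block
        target[block_id] = data

    def get(self, block_id):
        self.access_count[block_id] += 1
        if block_id in self.high:
            return self.high[block_id]
        return self.low.get(block_id)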

To improve the I/O performance of a local disk rather than a remote disk, Remote Memory can be used as a cache in a storage area network, accessed via Remote Direct Memory Access (RDMA). A data management system can utilize either a Solid State Drive (SSD) or a Hard Disk Drive (HDD) according to data usage patterns. However, the latency of an SSD is still higher than that of Remote Memory (RM). An approach called RAMCloud reduces latency by storing the data entirely in the DRAM of distributed systems, but RAMCloud incurs more cost and higher Energy usage.

The low disk I/O performance can be enhanced with the help of the Cache as a Service (CaaS) model, which is an additional service on top of IaaS.

2.3 Cache as a Service

Existing work on the Cloud focuses on the CaaS model, which consists of two mechanisms: an elastic cache system and a service model with a pricing scheme.

The elastic cache uses an RM-based cache at the block device level, exported from dedicated memory servers. Its elastic properties are on-demand allocation and reduction of storage and computing resources. The elastic cache system can use any cache replacement algorithm. VMs utilize RM to obtain the necessary amount of cache on demand; the exported memory can be seen as an available memory pool, which the elastic cache uses for the VMs.

To deploy the elastic cache system, service components are necessary, and users can choose their cache service according to their cache requirements. The elastic cache system consists of two components: a VM and a cache server. A VM demands RM to use as a disk cache. A server can have several chunks, where a chunk denotes a unit of memory space. If a VM wants to access RM, it should mark its rights on the assigned chunks and then use those chunks as a cache. When multiple VMs try to mark their rights on the same chunk concurrently, the conflict is eliminated by a safe and atomic chunk allocation method, which improves performance and provides a reliable environment. The effective use of capacity and utilization is not limited in this model.
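A minimal sketch of safe and atomic chunk allocation is shown below. It is illustrative only: the ChunkPool class and its methods are assumptions for this example, and a single lock stands in for the memory server's atomic operation in a real distributed setting.

import threading

class ChunkPool:
    # Each chunk records its owner; a VM claims a chunk with an atomic
    # test-and-set so that two VMs can never mark the same chunk concurrently.
    def __init__(self, num_chunks):
        self.owner = [None] * num_chunks      # None means the chunk is free
        self.lock = threading.Lock()          # stand-in for the server-side atomic operation

    def claim(self, chunk_id, vm_id):
        # Atomically mark ownership of one chunk; return True on success.
        with self.lock:
            if self.owner[chunk_id] is None:
                self.owner[chunk_id] = vm_id
                return True
            return False                      # another VM already holds this chunk

    def allocate(self, vm_id, chunks_needed):
        # Claim free chunks one by one until the VM's cache demand is met.
        granted = []
        for cid in range(len(self.owner)):
            if len(granted) == chunks_needed:
                break
            if self.claim(cid, vm_id):
                granted.append(cid)
        return granted

If a VM cannot obtain enough free chunks, the memory pool is exhausted and the request can be retried or served with a smaller cache, which is consistent with the on-demand allocation described above.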

The service model describes the cache services and the pricing model. It defines two CaaS types: High Performance (HP), which uses Local Memory (LM) as a cache, and Best Value (BV), which uses RM as a cache. The goal of the service model is to reduce the number of active physical machines.

The cost benefits of this CaaS model are profit maximization and performance improvement. However, it consumes more Energy to improve performance effectively, the total cost of the system is high, and it introduces high complexity.

2.4 Energy Efficient Algorithms

A major issue in Cloud Computing is improving Energy Efficiency. It can be addressed with the help of 1) the Energy Aware Consolidation technique, 2) the Dynamic VM Management algorithm, 3) Power and Migration cost aware Application placement and 4) Server Consolidation.

Energy Aware Consolidation

This technique is used to reduce the total Energy consumption of a Cloud Computing system. The server energy is modeled as a function of CPU and disk utilization. However, the performance can be determined only for small input sizes, the approach focuses only on the scalability of the system, and it does not address the minimization of operational cost when assigning VMs to physical servers, which is its major drawback.
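A minimal sketch of such a utilization-based power model is given below, assuming a simple linear dependence on CPU and disk utilization; the function name and the coefficient values are illustrative placeholders, not measured figures from the cited technique.

def server_power(cpu_util, disk_util,
                 p_idle=150.0,      # constant power of an active server (W), placeholder
                 p_cpu_max=100.0,   # additional power at 100% CPU utilization (W), placeholder
                 p_disk_max=30.0):  # additional power at 100% disk utilization (W), placeholder
    # Estimated power draw in watts for utilizations given in [0, 1].
    return p_idle + p_cpu_max * cpu_util + p_disk_max * disk_util

# Energy over a decision period of T seconds (in joules):
# energy = server_power(0.6, 0.2) * T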

Dynamic VM Management Algorithm

This algorithm reduces the total power consumption under a restriction on the SLA of each VM, or minimizes the SLA violation rate by considering a fixed set of active servers.

Power-Aware VM Placement

This algorithm is designed for heterogeneous servers to reduce the total Energy consumption. Its major drawbacks are that it does not consider multiple copies of a VM and that it considers only one dimension of resource in the servers. The proposed paper focuses on Server Consolidation to overcome all the drawbacks described above.

III) ENERGY EFFICIENT ASSIGNMENT ALGORITHM

Energy consumption can be reduced with the help of the Energy Efficient Assignment algorithm, which also increases the resource availability in the data center.

3.1 Data Center Management

A data center consists of a number of heterogeneous servers drawn from a set of well-known server types. The servers of a given type are characterized by their processing capacity in CPU Cycles (Cc) and their Memory Bandwidth (Mb). The Energy cost is related to power utilization: the operational cost of the system is the total Energy cost of servicing all client requests, and the Energy cost is calculated from the server power consumption multiplied by the duration of time in seconds (TS). The power cost of communication resources is not included in the data center power cost.

Each client is considered as a VM. The amount of resources needed by each client is determined using workload prediction. Each VM can be copied to different servers, which implies that the requests generated by a single client can be assigned to more than one server. Therefore an upper bound Li limits the maximum number of copies of a VM in the data center. If multiple copies of a VM are placed on different servers, they should satisfy the conditions below:

Σj ∂pij Upj = upi …… (a)

Σj ∂mij yij Umj = umi …… (b)

where ∂pij and ∂mij denote the portions of the jth server's CPU Cycles and Memory BW assigned to the VM of the ith client,

upi, umi - the required total processing capacity and memory BW of the ith client,

Upj, Umj - the total CPU Cycles and memory BW of the jth server,

yij - a pseudo-Boolean variable indicating whether the VM of client i is assigned to server j.

Fig 1. An example of multiple copies of a VM (the CPU and memory bandwidth demand of VM1 is split between a first and a second copy on different servers).

Constraint (a) states that the sum of the reserved CPU cycles on the assigned servers must equal the required CPU cycles of client i. Constraint (b) states that the memory BW provided on the assigned servers must equal the memory BW required by the original VM. These conditions ensure that the Quality of Service (QoS) of the clients is not given up.
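The two constraints can be checked with a small sketch: given a candidate split of one client's VM over several servers, it verifies that exactly the required CPU cycles and memory bandwidth are provided. The function and variable names below mirror the symbols above but are otherwise assumptions made for this example.

def satisfies_constraints(portions, servers, upi, umi, tol=1e-9):
    # portions[j] = (cpu_fraction, mem_fraction) of server j given to client i
    # servers[j]  = (Upj, Umj), the total CPU cycles and memory BW of server j
    cpu_given = sum(p * servers[j][0] for j, (p, _) in portions.items())
    mem_given = sum(m * servers[j][1] for j, (_, m) in portions.items())
    # (a): reserved CPU cycles across the assigned servers equal the demand upi
    # (b): provided memory bandwidth across the assigned servers equals umi
    return abs(cpu_given - upi) <= tol and abs(mem_given - umi) <= tol

# Example: a VM needing 2.0 GHz and 1.0 GB/s split evenly over two servers,
# each offering 4.0 GHz of CPU and 2.0 GB/s of memory bandwidth.
servers = {0: (4.0, 2.0), 1: (4.0, 2.0)}
portions = {0: (0.25, 0.25), 1: (0.25, 0.25)}
assert satisfies_constraints(portions, servers, upi=2.0, umi=1.0)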

3.2 Reducing Energy Cost of Data Center

Data center management is responsible for admitting VMs into the data center in a way that reduces the Energy cost of the data center. The VM Controller (VMC) is responsible for identifying the resource requirements of the VMs, placing them on the servers, and performing VM migration to minimize the performance overhead.

The VMC performs these operations based on two different optimization procedures: 1) semi-static optimization and 2) dynamic optimization. Semi-static optimization is carried out periodically, whereas dynamic optimization is carried out only whenever it is required.

Here, the focus is on the semi-static optimization procedure. In this technique, the resource requirements of the VMs are assumed to be known from the SLA specification for the next decision period. The Energy cost optimization can then be performed without considering the previous decision period.

The function of semi-static optimization in the VMC is to decide whether to create several copies of VMs on different servers and to assign the VMs to the servers. Because the clients pay fixed amounts for the Cloud services they use, the objective is to reduce the total Energy cost of the active servers in the data center, which also increases the resource availability in the data center.

3.3 VM Migration

VM migration provides a major advantage in Cloud Computing through load balancing in data centers, and it is particularly beneficial in the case of certain workload changes. VM migration is performed to respond to workload changes in a Cloud Computing environment, and the migration cost is reduced with the help of the semi-static optimization.

3.4 Local Search Method

A local search method is proposed to find the number of copies of each VM and to place these copies on servers so as to minimize the total cost of the system.

Initially, a utilization threshold is set by the Cloud provider. Servers whose utilization is below this threshold are candidates for being switched off, which reduces the total Energy consumption. The utilization of a server is defined as the maximum of its resource utilizations across the different resource dimensions.

The formulation of the problem is given by

Σj ∂pij Upj = upi ……(1)

Σj yij ≤ Li ………….(2)

where Li represents the maximum number of servers allowed to serve the ith client.

Constraint (1) ensures that the needed processing capacity is provided. Constraint (2) guarantees that the number of copies of a VM does not exceed the highest allowed number of copies.

To identify the under-utilized servers, each server is tentatively turned off one by one, and the total Energy consumption that results from placing its VMs on the other active servers is determined with the help of a dynamic programming method. The dynamic programming is used to decide the number of copies of each VM and to assign these copies to the servers. This reduces the total Energy consumption of the system in a Cloud Computing environment.
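A condensed sketch of this local search is given below. It is an outline under simplifying assumptions: the energy model and the re-placement test are supplied as callbacks, and a plain feasibility check stands in for the dynamic-programming placement described above.

def local_search(servers, threshold, energy_of, can_replace):
    # servers: dict server_id -> utilization in [0, 1]
    # energy_of(active_ids): total energy of the data center for that active set
    # can_replace(sid, active_ids): True if sid's VMs fit on the remaining servers
    active = set(servers)
    improved = True
    while improved:
        improved = False
        # consider switching off under-utilized servers, least loaded first
        for sid in sorted(active, key=servers.get):
            if servers[sid] >= threshold:
                continue
            remaining = active - {sid}
            if can_replace(sid, remaining) and energy_of(remaining) < energy_of(active):
                active = remaining            # commit: server sid is turned off
                improved = True
                break
    return active

The search stops when no under-utilized server can be switched off without increasing the total energy, which matches the threshold-based behaviour described above.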

3.5 Server Consolidation

Server Consolidation is defined as the assignment of multiple VMs to a single physical server. In a Cloud Computing system, Server Consolidation is an efficient approach to minimize the total Energy consumption and to provide better utilization of resources. With the help of VM migration technology, a single server is enough to consolidate the VMs located on multiple under-utilized servers, and the remaining servers can be set to a power-saving state, i.e. by turning off the unused machines. Server Consolidation should consider the SLA constraints, which may be resource related (e.g. memory space, storage space, network bandwidth) or performance related (e.g. throughput, reliability, scalability).

The steps involved in the Energy Efficient Assignment algorithm are:

Step 1: Initially, ∂pj and ∂mj for each server are set to zero.

Step 2: VMs are sorted based on their processing requirements in decreasing order.

Step 3: For every VM, a method based on DP is used to identify the number of copies placed on the server.

Step 4: The Energy cost of assigning a copy of the ith VM to a server k is calculated as

Cik(ɸ) = ∂pik Ppk + Pok umi/Umk …..(3)

where ɸ denotes the size of the VM copy assigned to the server, Ppk is the power cost proportional to the CPU utilization of server k, and Pok is its constant power cost while active. The first term in (3) is the cost related to the CPU utilization of the server, and the second term is the share of the constant Energy cost of the active server.

The CPU portion ∂pik can be calculated as

∂pik = (α upi / Li) / Upk …..(4)

Step 5: Find the active and inactive servers.

For active servers, the cost value is decremented by ε.

Step 6: Calculate the cost of each assignment and solve

min Σk∈P yαik Cik(ɸ)

subject to

Σk∈P yαik = Li

where P denotes the set of servers, both active and inactive, and yαik is the assignment parameter. Dynamic Programming is used to find the best assignment decision. This algorithm improves the total Energy Efficiency of the system.
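The six steps can be summarized in a short sketch. The fragment below is an illustration under simplifying assumptions: a greedy pick of the Li cheapest feasible servers stands in for the dynamic-programming decision of Step 6, ε is fixed to a small constant, and all field names and power figures are placeholders rather than the paper's exact model.

def assign_vms(vms, servers, alpha=1.0, eps=1e-3):
    # vms: list of dicts {"id", "up", "um", "L"}  (demand and copy bound Li)
    # servers: list of dicts {"id", "Up", "Um", "Pp", "Po", "active"}
    # Step 1: no CPU cycles or memory bandwidth reserved on any server yet
    for s in servers:
        s["used_p"], s["used_m"] = 0.0, 0.0

    assignment = {}
    # Step 2: sort VMs by processing requirement, largest first
    for vm in sorted(vms, key=lambda v: v["up"], reverse=True):
        L = vm["L"]
        costs = []
        for s in servers:
            # Eq. (4): CPU portion of this server if the demand is split over L copies
            d_p = (alpha * vm["up"] / L) / s["Up"]
            # Eq. (3): CPU-proportional cost plus a share of the constant cost
            c = d_p * s["Pp"] + s["Po"] * vm["um"] / s["Um"]
            if s["active"]:
                c -= eps                      # Step 5: favour already-active servers
            costs.append((c, d_p, s))

        # Steps 3 and 6 (simplified): greedily take the L cheapest feasible servers
        chosen = []
        for c, d_p, s in sorted(costs, key=lambda t: t[0]):
            d_m = vm["um"] / (L * s["Um"])    # memory bandwidth portion of one copy
            if s["used_p"] + d_p <= 1.0 and s["used_m"] + d_m <= 1.0:
                s["used_p"] += d_p
                s["used_m"] += d_m
                s["active"] = True
                chosen.append(s["id"])
                if len(chosen) == L:
                    break
        assignment[vm["id"]] = chosen
    return assignment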

IV) CONCLUSION AND FUTURE WORK

We proposed an approach that creates multiple copies of VMs. An algorithm based on dynamic programming and local search was given to determine the number of copies and then place them on the servers so as to minimize the total Energy cost. This method also increases the resource availability in the data center.

The Cloud provider can decide how to service VMs with large processing resource requirements and how to distribute the client requests. For future work, other resources such as secondary storage can also be considered in this decision making. Moreover, different methods can be provided for consistency between VM copies and for failure recovery.



