A Specific Addition To The IaaS Cloud Ecosystem

This paper introduces a specific addition to the IaaS cloud ecosystem: the cloudinit.d program, a tool for launching, configuring, monitoring, and repairing a set of interdependent VMs in an infrastructure-as-a-service (IaaS) cloud or over a set of IaaS clouds. The cloudinit.d program was developed in the context of the Ocean Observatory Initiative (OOI) project to help it launch and maintain complex virtual platforms provisioned on demand on top of infrastructure clouds. Like the UNIX init.d program, cloudinit.d can launch specified groups of services, and the VMs in which they run, at different run levels representing the dependencies of the launched VMs. Once launched, cloudinit.d monitors the health of each running service to ensure that the overall application is operating properly. If a problem is detected in a service, cloudinit.d restarts that service and any dependent services that failed because of it [1].
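
Purely to illustrate the run-level idea described above, the following is a minimal sketch, not cloudinit.d's actual launch-plan format or API; the class and function names are hypothetical. Each level boots only after the previous level's services report healthy, and a failed service is restarted on its own:

```python
import time

class Service:
    """Stand-in for one service and the VM that hosts it."""
    def __init__(self, name, run_level):
        self.name = name
        self.run_level = run_level
        self.running = False

    def boot(self):
        print(f"booting VM for {self.name}")   # placeholder for an IaaS launch call
        self.running = True

    def is_healthy(self):
        return self.running                    # placeholder for a real health probe

def launch(services):
    """Boot services level by level, in the spirit of init.d run levels."""
    for level in sorted({s.run_level for s in services}):
        group = [s for s in services if s.run_level == level]
        for svc in group:
            svc.boot()
        while not all(s.is_healthy() for s in group):
            time.sleep(1)                      # the next level waits on this one

def monitor(services):
    """Restart only the services whose health check fails."""
    for svc in services:
        if not svc.is_healthy():
            svc.boot()
```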

Bresnahan, J., LaBissoniere, D., Freeman, T., Keahey, K. (2011)

Amazon’s Simple Storage Service (S3) provides reliable data access in the cloud to commercial users, and scientific data centers must provide their users with a similar level of service. Unfortunately, S3 is closed source. Bresnahan et al. discuss an open-source implementation of the Amazon S3 REST API. It is packaged with the Nimbus IaaS toolkit and provides scalable and reliable access to scientific data; its performance compares favorably with that of GridFTP and SCP [2].
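
Because the service exposes the S3 REST API, a stock S3 client can in principle be pointed at it. A hedged example using the boto 2 Python client is shown below; the endpoint, port, bucket, and credentials are placeholders, not values from the paper:

```python
from boto.s3.connection import S3Connection, OrdinaryCallingFormat

# Connect to an S3-compatible endpoint (hypothetical host and port).
conn = S3Connection(
    aws_access_key_id="EXAMPLE_KEY",
    aws_secret_access_key="EXAMPLE_SECRET",
    host="cumulus.example.org",
    port=8888,
    is_secure=False,
    calling_format=OrdinaryCallingFormat(),
)

# Fetch one object from a bucket, exactly as one would against Amazon S3.
bucket = conn.get_bucket("climate-data")
key = bucket.get_key("run-42/output.nc")
key.get_contents_to_filename("output.nc")
```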

M. Brock, A. Goscinski (2010)

This article proposes applying the Resources Via Web Services (RVWS) framework to offer a higher-level abstraction of clouds in the form of a new technology. The authors' technology makes possible the publication, discovery, and selection of services (and resources) based on dynamic attributes that express the current state and characteristics of cloud services and resources.

M. Brock concentrates mainly on an implementation that allows the easy publication, discovery, selection, and use of an existing cluster (one of the most frequently used cloud resources) via a simple web-page interface; an extensive set of tests demonstrates that the design is sound and the proposed technology is feasible. The proposed solution is beneficial: instead of spending time and effort locating, evaluating, and learning about clusters, clients can easily discover, select, and use the required resources. Furthermore, service providers (which can be entities external to the clouds themselves) can easily publish, and keep current, information about their services and the resources behind them [3].

M. Brock, A. Goscinski (2008)

This article presents the Resources Via Web Instances (RVWI) framework. RVWI grants web services the ability to include their state and characteristics in their WSDL. This is achieved by allowing snapshots (instances) of a web service to be listed in its WSDL; instances are used because they carry state and characteristic information directly from the web service. Thanks to this inclusion, queries for web services can now be carried out on the availability of a web service and the 'dimensions' of its resources [4].

Ludmila Cherkasova, Diwaker Gupta, Amin Vahdat (2007)

At first glance, it may seem that the choice of VM scheduler and parameter configuration is not relevant to most users, because they are often shielded from such decisions. However, the authors' experience suggests that "reasonable defaults" (e.g., the equal weights typically used in WC-mode) are not very useful beyond toy experiments. All of their experiments focus on one particular virtualization platform, Xen; among the reasons for choosing Xen were source-code availability and the freedom to modify it [5].

M. Devare, M. Sheikhalishahi, L. Grandinetti (2010)

M. Devare implemented the Desktop Cloud system at the University of Calabria. This system uses the idle resources of desktops with the permission of their owners. The system works on the "utilization" factor and on mutual agreement between the scheduler's strategies, the owner, and the consumer. Various new cloud lease schemes and strategies are under development in the Desktop Cloud system [6].

M. Devare, M. Sheikhalishahi, L. Grandinetti (2009)

M. Devare discusses various hypervisors, their development strategies, and the facilities they provide for cloud systems; Xen, VirtualBox, KVM, and VMware are covered. Moreover, the work illustrates the reduction in electricity costs attributable to virtualization and cloud systems [7].

Andreas Berl, Erol Gelenbe, Marco Di Girolamo, Giovanni Giuliani, Minh Quan Dang, Kostas Pentikousis (2009)

This paper surveys current best practice and related work on energy-saving techniques useful in a cloud computing environment. It identifies the main sources of energy consumption and the significant trade-offs between performance, QoS, and energy efficiency, and offers insight into the manner in which energy savings can be achieved. The paper also highlights advantages such as: i) reducing the software- and hardware-related energy costs of single or federated data centers that execute "cloud applications"; ii) improving load balancing, and hence the QoS and performance, of single and federated data centers; iii) reducing energy consumption due to communication; and iv) cutting GHG and CO2 emissions resulting from data centers and networks, so as to offer computing power that is "environment protecting / conserving" [8].

L. Youseff, M. Butrico, D. Da Silva (2008)

In this article the authors identify and classify the main security concerns and solutions in cloud computing, and propose a taxonomy of security in cloud computing, giving an overview of the current status of security in this emerging technology [9].

Keahey, K., Tsugawa, M., Matsunaga, A., Fortes, J. (2009)

The authors describe the context in which cloud computing arose, discuss its current strengths and shortcomings, and point to an emerging computing pattern it enables that they call sky computing. By combining the ability to trust remote sites with a trusted networking environment, one can now lay a virtual site over distributed resources. In sky computing, dynamically provisioned distributed domains are built over several clouds [10].

S. Kelly, J.-P. Tolvanen (2008)

Domain-Specific Modeling (DSM) is the latest approach to software development, promising to greatly increase the speed and ease of software creation. Early adopters of DSM have been enjoying productivity increases of 500–1000% in production for over a decade. This paper introduces DSM and offers examples from various fields to illustrate to experienced developers how DSM can improve software development in their teams [11].

G. Lawton (2008)

This paper concentrates on PaaS systems, which are generally hosted, web-based application-development platforms providing end-to-end or, in some cases, partial environments for developing full programs online. They handle tasks from editing code to debugging, deployment, runtime, and management. In PaaS, the provider makes most of the choices that determine how the application infrastructure operates, such as the type of OS used, the APIs, the programming language, and the management capabilities. Users build their applications with the provider's on-demand tools and collaborative development environment [12].

Jiandun Li, Junjie Peng, Wu Zhang (2011)

This paper proposes a hybrid energy-efficient scheduling algorithm for private cloud computing that uses dynamic migration. The experimental results show reduced response time, conserved energy, and a higher level of load balancing [13].

Jiandun Li, Junjie Peng, Wu Zhang (2011)

This paper takes a different approach to VM workflow scheduling. The proposed scheduling algorithm saves more time and more energy and achieves a higher level of load balancing, while making better use of the hardware at a lower cost [14].

Bin Lin, Arindam Mallik, Peter Dinda, Gokhan Memik, Robert Dick (2007)

The authors describe and evaluate two new, independently applicable power-reduction techniques for processors that support dynamic voltage and frequency scaling (DVFS): user-driven frequency scaling (UDFS) and process-driven voltage scaling (PDVS). In PDVS, a CPU-customized profile is derived offline that encodes the minimum voltage needed to achieve stability at each combination of CPU frequency and temperature. UDFS, on the other hand, dynamically adapts CPU frequency to the individual user and the workload through direct user feedback. The UDFS algorithms dramatically reduce typical operating frequencies and voltages while maintaining performance at a level satisfactory to each user [15].
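
A minimal sketch of the two ideas follows, with made-up numbers rather than measured profiles; the real PDVS tables are derived offline per CPU [15]:

```python
# Hypothetical PDVS profile: (frequency in MHz, temperature in deg C) -> volts.
PDVS_PROFILE = {
    (600, 40): 0.85,  (600, 70): 0.90,
    (1200, 40): 1.00, (1200, 70): 1.05,
    (1800, 40): 1.20, (1800, 70): 1.30,
}

def min_voltage(freq_mhz, temp_c):
    """Look up the minimum stable voltage at the nearest modeled temperature."""
    temps = sorted(t for f, t in PDVS_PROFILE if f == freq_mhz)
    nearest = min(temps, key=lambda t: abs(t - temp_c))
    return PDVS_PROFILE[(freq_mhz, nearest)]

def udfs_step(freq_mhz, user_irritated, levels=(600, 1200, 1800)):
    """UDFS-style rule: raise frequency on user feedback, otherwise drift down."""
    i = levels.index(freq_mhz)
    return levels[min(i + 1, len(levels) - 1)] if user_irritated else levels[max(i - 1, 0)]

print(udfs_step(1200, user_irritated=False), min_voltage(600, 55))   # 600 0.85
```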

Marshall, P., Tufo, H., Keahey, K., LaBissoniere, D., Woitaszek (2011, 2012) proposed a cloud infrastructure that combines on-demand allocation of resources with opportunistic provisioning of cycles from idle cloud nodes to other processes by deploying backfill VMs. For demonstration and experimental evaluation, the Nimbus cloud computing toolkit was used to deploy backfill VMs on idle cloud nodes for processing a high-throughput computing (HTC) workload. Initial tests show an increase in IaaS cloud utilization from 37.5% to 100% during a portion of the evaluation trace, with only 6.39% overhead for processing the HTC workload [16], [17].
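
A sketch of the backfill placement logic follows (illustrative data structures only, not the Nimbus implementation): idle nodes are filled with preemptible backfill VMs, which are reclaimed as soon as an on-demand request arrives.

```python
class Node:
    def __init__(self, name):
        self.name = name
        self.vm = None                    # None, "backfill", or an on-demand VM id

def fill_idle_with_backfill(nodes):
    """Deploy backfill VMs on every idle node to run the HTC workload."""
    for node in nodes:
        if node.vm is None:
            node.vm = "backfill"

def place_on_demand(nodes, vm_id):
    """Prefer idle nodes; otherwise preempt a backfill VM for the paying request."""
    for node in nodes:
        if node.vm is None:
            node.vm = vm_id
            return node
    for node in nodes:
        if node.vm == "backfill":
            node.vm = vm_id               # backfill work is terminated or requeued
            return node
    return None                           # the cloud is genuinely full

nodes = [Node("n1"), Node("n2")]
fill_idle_with_backfill(nodes)
print(place_on_demand(nodes, "user-vm-1").name)   # n1, reclaimed from backfill
```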

Marshall, P., Keahey, K., Freeman, T. (2010)

In this work, the authors propose a model of an "elastic site" that adapts services provided within a site, such as batch schedulers, storage archives, or web services, to take advantage of elastically provisioned resources. They describe the system architecture along with the issues involved in elastic provisioning, such as security, privacy, and various logistical considerations. To avoid over- or under-provisioning the resources, three different policies are proposed. They implemented a resource manager, built on the Nimbus toolkit, to dynamically and securely extend existing physical clusters into the cloud, and they developed and evaluated policies for resource provisioning on a Nimbus-based cloud at the University of Chicago, another at Indiana University, and Amazon EC2 [18].

M. Sheikhalishahi, M. Devare, L. Grandinetti (2011)

This paper makes clear that the two technologies, Grid and Cloud, will not replace each other but are complementary: Cloud is suitable for "hosting", whereas Grid is for "federation" of virtual organizations. Various scientific applications can work well with the help of Globus Toolkit components [19].

M. Sheikhalishahi, M. Devare, L. Grandinetti (2011) proposed a multi-level, general-purpose scheduling approach for energy-efficient computing through the software side of green computing. The consolidation policies are defined for the IaaS cloud paradigm, although they are not limited to the IaaS model. Policies, models, algorithms, and cloud pricing strategies are discussed in general, and solutions in the context of Haizea are demonstrated through experiments. A large improvement in utilization and energy consumption is found as workloads run at lower frequencies, showing that energy savings and utilization improvement can coincide [20].

K. Sledziewski (2009, 2010)

A framework for developing Cloud-based applications was presented and its application illustrated by a case study. The evaluation of the framework and the resulting tool has shown that this approach can be effective in addressing many of the issues that hinder the wider adoption of the Cloud, including complexity, development time, and cost ineffectiveness. This was achieved in two stages. First, the Domain Specific Language was implemented and deployed as a SaaS. Second, the SaaS was made accessible to designers for creating applications on the Cloud. It was also demonstrated that the application of Domain Specific Languages enhances the process of developing and deploying applications seamlessly on the Cloud. The authors consider that Domain Specific Languages offer a valid solution for delivering Cloud-based applications in the form of Software as a Service [21], [22].

B. Sotomayor, R. Santiago Montero, I. Martín Llorente, I. Foster (2009)

In this paper the researchers present a model for predicting the various runtime overheads involved in using VMs, allowing advance reservations to be supported efficiently. The Haizea lease management software is combined with the OpenNebula virtual infrastructure manager so that its scheduling decisions can be enacted, and the model accounts for suspending and resuming VMs. The paper focuses on physical and simulated experimental results showing the accuracy of the model and the long-term effects of its variables on several workloads. B. Sotomayor developed Haizea as a plug-and-play virtual infrastructure manager that works with the OpenNebula IaaS cloud system; several leasing strategies are available in both experimental and simulation modes [23].

B. Sotomayor, R. Santiago Montero, I. Martín Llorente, I. Foster (2009)

This work addresses OpenNebula IaaS cloud system and Haizea as virtual infrastructure manager. By relying on a flexible, open, and loosely coupled architecture, OpenNebula is designed from the outset to be easy to integrate with other components, such as the Haizea lease manager. When used together, OpenNebula and Haizea are the only VI management solution that provides leasing capabilities beyond immediate provisioning, including best-effort leases and advance reservation of capacity [24], [25].

Jan Stoess, Christian Lang, Marcus Reinhardt (2006)

This paper focuses on a management framework for energy-aware processor management in virtualized environments. A host-level scheduler subsystem, responsible for allocating processors to VMs, controls each processor's energy consumption, using migration when allocation alone is not sufficient. Basically, two schedulers are implemented in the paper, with the main focus on migration and load distribution [26].

Susanne Albers (2010)

Susanne Albers suggests that algorithmic solutions can help reduce energy consumption in computing environments. The goal is to design energy-efficient algorithms that reduce energy consumption while minimizing the compromise to service. The article focuses on the system and device level: how energy consumption can be minimized in a single computational device. The author first concentrates on power-down mechanisms and their competitive analysis, and then on further algorithmic power-management solutions [27].
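
The power-down question admits a classic break-even rule, sketched below: stay in the idle (high-power) state until the energy spent idling equals the fixed cost of a sleep/wake cycle, then power down. This simple strategy is 2-competitive against an offline optimum that knows the idle-period length in advance.

```python
def break_even_energy(idle_len, idle_power, wake_cost):
    """Energy used by the break-even rule over one idle period."""
    break_even = wake_cost / idle_power          # idling this long costs one wake-up
    if idle_len <= break_even:
        return idle_len * idle_power             # period ended before we slept
    return break_even * idle_power + wake_cost   # idled, slept, then woke up

def offline_optimum(idle_len, idle_power, wake_cost):
    """The optimum sleeps immediately iff the idle period is long enough."""
    return min(idle_len * idle_power, wake_cost)

# The ratio never exceeds 2; the worst case is an idle period of exactly
# break-even length, e.g. 10 s of idling at 1 W with a 5 J wake-up cost.
print(break_even_energy(10, 1.0, 5.0), offline_optimum(10, 1.0, 5.0))   # 10.0 5.0
```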

Wei-Tek Tsai, Xin Sun, Janaka Balasooriya (2010)

This paper gives an overview of current cloud computing architectures, discusses issues in current cloud computing implementations, and proposes a Service-Oriented Cloud Computing Architecture (SOCCA) with a Multi-Tenancy Architecture (MTA) so that clouds can interoperate with each other. Furthermore, SOCCA proposes high-level designs to better support the multi-tenancy feature of cloud computing [28].

Chuliang Weng, Zhigang Wang, Minglu Li, Xinda Lu (2009)

The asynchronous CPU scheduling strategy used in VM monitors may deteriorate performance when a VM executes concurrent applications such as parallel or multithreaded programs. The authors analyze the CPU scheduling problem in the VM monitor theoretically; the result is that the asynchronous strategy wastes considerable physical CPU time when the workload is dominated by concurrent applications. They therefore present a hybrid scheduling framework for CPU scheduling in the VM monitor that supports two types of VMs: the high-throughput type and the concurrent type. A VM can be set to the concurrent type when the majority of its workload consists of concurrent applications, in order to reduce the cost of synchronization [29].
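
A minimal sketch of the distinction (illustrative only, not the scheduler proposed in [29]): vCPUs of a concurrent-type VM are gang-scheduled, all in the same time slice or not at all, while high-throughput VMs fill the remaining physical CPUs asynchronously.

```python
def schedule_slice(vms, num_pcpus):
    """Choose the vCPUs to run in one time slice (toy model)."""
    slots, chosen = num_pcpus, []
    for vm in (v for v in vms if v["type"] == "concurrent"):
        if len(vm["vcpus"]) <= slots:        # gang: take all of the VM's vCPUs or none
            chosen += vm["vcpus"]
            slots -= len(vm["vcpus"])
    for vm in (v for v in vms if v["type"] == "high-throughput"):
        take = vm["vcpus"][:slots]           # async: take whatever still fits
        chosen += take
        slots -= len(take)
        if slots == 0:
            break
    return chosen

vms = [
    {"type": "concurrent",      "vcpus": ["A0", "A1"]},
    {"type": "high-throughput", "vcpus": ["B0", "B1", "B2"]},
]
print(schedule_slice(vms, 4))   # ['A0', 'A1', 'B0', 'B1']
```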

R. Buyya, A. Beloglazov, J. Abawajy (2010)

This paper focuses on the vision, challenges, and architectural elements for energy-efficient management of cloud computing environments. The authors mainly focus on the development of dynamic resource provisioning and allocation algorithms that consider the synergy between various data center infrastructures (i.e., hardware, power units, cooling, and software) and work holistically to boost data center energy efficiency and performance. In particular, the paper proposes (a) architectural principles for energy-efficient management of clouds; (b) energy-efficient resource allocation policies and scheduling algorithms that consider quality-of-service expectations and device power-usage characteristics; and (c) a novel software technology for energy-efficient management of clouds. The approach is validated by a set of rigorous performance evaluation studies using the CloudSim toolkit. The results demonstrate that the cloud computing model has immense potential, offering significant gains in response time and cost saving under dynamic workload scenarios [30].

Nicolae, B., Bresnahan, J., Keahey, K., Antoniu, G. (2011)

This paper addresses the challenges of snapshotting VM images by proposing a virtual file system specifically optimized for virtual machine image storage. It is based on a lazy transfer scheme coupled with object versioning that handles snapshotting transparently in a hypervisor-independent fashion, ensuring high portability across different configurations. Large-scale experiments on hundreds of nodes show a reduction in bandwidth utilization of as much as 90% [31].

Bo Li, Jianxin Li (2009) present an energy-aware heuristic algorithm based on distributing the workload across the minimum number of virtual machines or nodes required for that workload. Workload migration, workload resizing, and virtual machine migration are the approaches used in the algorithm [32].
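
A sketch of the consolidation-by-migration idea follows (the data model and greedy rule are illustrative, not the heuristic from [32]): repeatedly try to drain the least-loaded node onto nodes with headroom so it can be powered down; a real policy would first check that the whole node can be emptied.

```python
def consolidate(nodes, capacity):
    """nodes: {node: {vm: load}}. Returns a list of (vm, source, target) migrations."""
    migrations = []
    changed = True
    while changed:
        changed = False
        active = [n for n in nodes if nodes[n]]
        if len(active) < 2:
            break
        donor = min(active, key=lambda n: sum(nodes[n].values()))
        for vm, load in list(nodes[donor].items()):
            for target in active:
                if target != donor and sum(nodes[target].values()) + load <= capacity:
                    nodes[target][vm] = nodes[donor].pop(vm)
                    migrations.append((vm, donor, target))
                    changed = True
                    break
    return migrations

plan = consolidate({"n1": {"a": 30}, "n2": {"b": 50}, "n3": {"c": 40}}, capacity=100)
print(plan)   # [('a', 'n1', 'n2')] -- n1 is now empty and can be powered down
```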

Hien Nguyen Van, Frederic Dang Tran, Jean-Marc Menaud (2010)

In this paper, the authors propose a resource management framework combining a utility-based dynamic virtual machine provisioning manager and a dynamic VM placement manager, with both problems modeled as constraint satisfaction problems. The VM provisioning process aims at maximizing a global utility that captures both the performance of the hosted applications with respect to their SLAs and the energy-related operational cost of the cloud computing infrastructure. Several experiments show how the system can be controlled through high-level handles to make different trade-offs between application performance and energy consumption, or to arbitrate resource allocations in case of contention [33].

C. Clark, K. Fraser, S. Hand, J. G. Hansen, E. Jul, C. Limpach, I. Pratt, A. Warfield (2005)

In this paper the authors consider the design options for migrating OSes running services with liveness constraints, focusing on data center and cluster environments. They introduce and analyze the concept of the writable working set, and present the design, implementation, and evaluation of high-performance OS migration built on top of the Xen VMM [34].

C. C. Lee and D. T. Lee (1985)

This paper considers the one-dimensional online bin-packing problem. A simple O(1)-space and O(n)-time algorithm, called HARMONICM, is presented. It is shown that this algorithm achieves a worst-case performance ratio of less than 1.692, which is better than that of the O(n)-space and O(n log n)-time FIRST FIT algorithm. It is also shown that 1.691… is a lower bound for all O(1)-space online bin-packing algorithms. Finally, a revised version of HARMONICM, an O(n)-space and O(n)-time algorithm, is presented and shown to have a worst-case performance ratio of less than 1.636. In the authors' comparison of the two packing rules, HARMONICM outperforms FIRST FIT in the worst case [35].
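
As an illustration of the two packing rules being compared, the sketch below gives FIRST FIT and a simplified HARMONIC-style packer (the paper's HARMONICM treats the smallest size class with NEXT FIT; here every class k simply packs k items per bin):

```python
def first_fit(items):
    """Place each item (size in (0, 1]) into the first open bin where it fits."""
    bins = []
    for x in items:
        for b in bins:
            if sum(b) + x <= 1.0:
                b.append(x)
                break
        else:
            bins.append([x])
    return bins

def harmonic(items, m=6):
    """Class k holds items in (1/(k+1), 1/k]; a class-k bin takes exactly k items."""
    open_bins, closed = {}, []           # one open bin per class, O(1) space
    for x in items:
        k = min(m, int(1.0 // x))        # harmonic size class of item x
        count, contents = open_bins.get(k, (0, []))
        count, contents = count + 1, contents + [x]
        if count == k:                   # the class-k bin is now full
            closed.append(contents)
            open_bins.pop(k, None)
        else:
            open_bins[k] = (count, contents)
    return closed + [c for _, c in open_bins.values()]

items = [0.6, 0.4, 0.55, 0.45, 0.3]
print(len(first_fit(items)), len(harmonic(items)))
# 3 4: FIRST FIT can use fewer bins on a given instance; the bounds in [35] are worst-case.
```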

Rajamani K., Lefurgy C. (2003)

Energy efficiency is obtained by powering down some servers when the desired quality of service can be met with fewer of them. The authors found it critical to take system and workload factors into account during both the design and the evaluation of such request-distribution schemes. They identify the key system and workload factors that affect such policies and their effectiveness in saving energy, and they measure a web cluster running an industry-standard commercial web workload to demonstrate that understanding this system-workload context is critical to performing valid evaluations and even to improving the energy-saving scheme [36].

Gregor von Laszewski, Lizhe Wang, Andrew J. Younge, Xi He (2009) proposed scheduling virtual machines in a compute cluster to reduce power consumption through Dynamic Voltage and Frequency Scaling (DVFS), together with the implementation of an energy-efficient algorithm for allocating virtual machines [37].
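
To make the intuition concrete, here is a hedged sketch under the usual cubic power model (dynamic power roughly proportional to f³, so energy per job roughly proportional to f²): pick the lowest available frequency that still meets the job's deadline. The constants are illustrative, not taken from [37].

```python
def pick_frequency(cycles, deadline_s, freqs_hz):
    """Slowest frequency that finishes `cycles` of work within the deadline."""
    for f in sorted(freqs_hz):
        if cycles / f <= deadline_s:
            return f
    return max(freqs_hz)                 # infeasible deadline: run flat out

def dynamic_energy(cycles, f_hz, k=1e-27):
    """Illustrative model: P ~ k*f^3 and t = cycles/f, hence E ~ k*f^2*cycles."""
    return k * f_hz ** 2 * cycles

freqs = [1.0e9, 1.5e9, 2.0e9]
f = pick_frequency(cycles=1.2e9, deadline_s=1.0, freqs_hz=freqs)
print(f, dynamic_energy(1.2e9, f), dynamic_energy(1.2e9, 2.0e9))
# 1.5 GHz meets the deadline and uses about 44% less dynamic energy than 2.0 GHz.
```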

J. M. Ramirez-Alcaraz, Andrei Tchernykh, R. Yahyapour, U. Schwiegelshohn, J. L. Gonzalez-Garcia, A. Hirales-Carbajal, A. Quezada-Pina (2011)

The authors address non-preemptive, non-clairvoyant online scheduling of parallel jobs on a Grid. They consider a Grid scheduling model with two stages: at the first stage, jobs are allocated to a suitable Grid site, while at the second stage local scheduling is applied independently at each site. They analyze allocation strategies depending on the type and amount of information they require, and conduct a comprehensive simulation-based performance evaluation demonstrating that their strategies perform well with respect to several metrics reflecting both user- and system-centric goals. Unfortunately, user runtime estimates and information on local schedules do not significantly improve the outcome of the allocation strategies [38].

Aman Kansal (2010) presents a virtual machine power metering and provisioning architecture, Joulemeter, which estimates the power drawn by each virtual machine in watts, so that the energy consumption and conservation of each host can be observed carefully [39].
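
A hedged sketch of the underlying idea follows: a linear full-system power model whose terms are apportioned per VM. The coefficients are invented for illustration and are not Joulemeter's calibrated values.

```python
ALPHA_CPU, BETA_DISK, P_IDLE = 45.0, 8.0, 70.0      # watts (illustrative)

def host_power(cpu_util, disk_util):
    """Whole-host power for utilizations in [0, 1]."""
    return P_IDLE + ALPHA_CPU * cpu_util + BETA_DISK * disk_util

def vm_power(vm_cpu, vm_disk, n_vms):
    """Active power attributed to one VM plus an even share of idle power."""
    return ALPHA_CPU * vm_cpu + BETA_DISK * vm_disk + P_IDLE / n_vms

print(host_power(0.6, 0.2))            # 98.6 W drawn by the host
print(vm_power(0.3, 0.1, n_vms=2))     # 49.3 W attributed to this VM
```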

T. Tamir (2011) studies the scheduling of bully selfish jobs under precedence constraints: i ≺ j means that job j cannot start being processed before job i is completed. The paper considers selfish, bully jobs that do not let other jobs start processing while they are around. Formally, the author defines the selfish precedence constraint i ≺s j, meaning that j cannot start being processed if i has not yet started its processing [40].
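
The distinction is easy to state in code; the following sketch checks a schedule against both kinds of constraint (the names and numbers are illustrative):

```python
def valid_standard(start, finish, prec):
    """i < j: j may not start before i completes."""
    return all(start[j] >= finish[i] for i, j in prec)

def valid_selfish(start, prec):
    """i <_s j: j may not start before i has started."""
    return all(start[j] >= start[i] for i, j in prec)

start, finish = {"i": 0, "j": 2}, {"i": 5, "j": 8}
prec = [("i", "j")]
print(valid_standard(start, finish, prec))   # False: j starts while i is still running
print(valid_selfish(start, prec))            # True: i had already started
```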

Pradeep Kumar Sharma (2012) proposed algorithms for creating a small cloud using the CloudSim simulator, along with key features for conserving energy in the cloud with the help of virtual machine migration between data centers. A redundant data center consumes a large amount of energy, which becomes a challenge for the data center operator [41].

Ismael Solis Moreno, Jie Xu (2011)

This paper focuses on pollution: energy consumption causes serious environmental and economic problems. With the growing use of ICT, energy-efficient mechanisms must start to play a major role, and the authors concentrate on the role such mechanisms must play in the data center [42].

CHAPTER-2

Problem Definition

Although interoperability and portability standardization concerns are addressed by the Open Cloud Computing Interface (OCCI), energy efficiency at the algorithmic level still needs to be addressed. In cloud computing, energy efficiency concerns all components of the computing system, i.e. hardware, software, the local-area network, and so on. Energy-efficient computing has to achieve the manifold objectives of reducing energy consumption and improving utilization for computing paradigms that are not pay-per-use, such as cluster and Grid computing, with revenue maximization as an additional metric for the Cloud computing model. We surveyed cloud computing data centers and private and public cloud environments, and observed the following examples, which deserve serious consideration; we discuss them one by one.

Eg. 1: Google engineers maintaining thousands of servers warned that if power consumption continues to grow, power costs can easily overtake hardware costs by a large margin [83].

Eg. 2: Today's computers consume large amounts of energy; a 360-teraflops supercomputer (IBM Blue Gene/L) built with conventional processors would require 20 MW to operate, which is approximately equal to the combined power consumption of 22,000 US households [37], [85], [86].

Eg. 3: According to data published by HP [84], 100 server racks can consume 1.3 MW of power, and another 1.3 MW is required by the cooling system, costing USD 2.6 million per year (a rough cost check follows this list). Besides the monetary cost, data centers significantly impact the environment in terms of CO2 emissions from the cooling systems [68].

Eg. 4: Reports [87], [88] indicate that the UK, like many other developed countries, spends over one-third of its total energy on the domestic and business sectors; energy demand is rising, natural resources are limited, and renewable green energy sources fall far short of meeting our energy needs.

Eg. 5: Reports show that data centers [89] in the USA and worldwide doubled their energy consumption between 2000 and 2005, and end devices have also contributed considerably to the increase in electricity consumption, according to a 2006 survey report [90].

Eg. 6: According to Murugesan's report [91], each stage of a computer's life, from manufacture through use to disposal, produces environmental problems. Among these, the excessive electrical power consumption of hardware such as servers, networks, monitors, and cooling systems appears to be the most critical, since it results in increased greenhouse gas emissions. Electronic waste (monitors, CPUs, printers, keyboards, mice) is becoming a serious and fast-growing problem: some computer components contain toxic materials such as lead, chromium, cadmium, and mercury, and when these components are landfilled or burned, toxic gases are released into the atmosphere, polluting the air and contributing to changes in climate patterns and global warming. In Hickey's report [92], Gartner Research Vice President Simon Mingay mentions, in accordance with the 2020 report, that global carbon dioxide emissions need to be reduced by 60 to 80 percent by 2050 and, more immediately, by 25 percent by 2020 in order to diminish the environmental effects.
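
As a rough sanity check of Eg. 3 (the electricity price below is an assumed average tariff, not a figure from the cited report):

```python
power_kw = 2_600                 # 1.3 MW for the racks + 1.3 MW for cooling
hours_per_year = 24 * 365
price_per_kwh = 0.11             # assumed average industrial tariff, USD/kWh

annual_cost = power_kw * hours_per_year * price_per_kwh
print(f"USD {annual_cost:,.0f} per year")    # ~USD 2.5 million, close to the quoted 2.6M
```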

All the examples above show that energy conservation has become important, since energy demand and cost are increasing while natural resources are limited. Cloud computing approaches in particular exploit the benefits of virtualization technology to maximize the use of physical resources. There is therefore huge scope for work on better lease scheduling in clouds. The commercial cloud industry, with players such as Amazon, VMware, UnivaUD, Eucalyptus, and RightScale, needs algorithms that will improve the efficiency of storage and computational units.

This research work has its background in interacting with the Nimbus science cloud, and can be carried out with the help of a laboratory setup. Better results can be verified by working with high-performance computing units such as scalar and vector processors. The results of the research work can be shaped into an API library and, afterwards, converted through a knowledge-transfer process into an industrially useful product. The schemes designed in Design and Optimization of Scheduling Scheme in Cloud Computing (DOSSCC) are useful for energy efficiency and would help in finding solutions for challenging scientific applications. The DOSSCC schemes will be designed such that the power saving is directly proportional to the running time of the given jobs.


