Effective Resource Management Technique For Cloud Environment


02 Nov 2017


Abstract--Cloud computing is emerging as a new computing paradigm. Managing resources at large scale is a key challenge for cloud environments, and we address the problem of dynamic resource management for a large-scale cloud environment. Our contribution centers on a middleware architecture and one of its key elements, a gossip protocol. The protocol distributes resources fairly among services, adapts to changes in load, and scales in the number of machines and sites. To cope with variability in resource capacity and application performance in the cloud, we develop a method to predict the job completion time distribution, which supports refined decisions in resource allocation and scheduling. The protocol first provides a solution without considering CPU and memory resources; it is then extended to produce an efficient outcome under those constraints. We also extend the proposed work to fault tolerance in resource scheduling problems.

Index Terms--Gossip protocol, resource allocation, job scheduling, fault tolerance, virtualization.

1. INTRODUCTION

Cloud computing is the provisioning of services: computing resources and information are delivered to devices over a network and consumed under a usage-based subscription model. Amazon Simple Storage Service (S3), Amazon Elastic Compute Cloud (EC2), and Microsoft Azure are familiar examples of cloud computing offerings. Managing resources is an important challenge in cloud environments. This paper addresses the problem of resource management for a large-scale cloud environment. Such an environment includes the physical infrastructure and the associated control functionality that enables the provisioning and management of cloud services. In cloud computing environments there are two participants: cloud providers and cloud users. Cloud providers hold computing resources in their datacenters and rent them out to users; cloud users obtain resources from providers for their applications. Figure 1 illustrates this set-up. The user requests resources from a provider; on receiving the request, the provider locates resources to satisfy it and assigns them to the user, typically in the form of virtual machines (VMs).

Figure 1. Cloud working set-up.

The user then runs its applications on the assigned resources and pays for the resources used. Once finished, the resources are returned to the provider.

A. Virtualization for Resource Allocation.

Through virtualization, a single physical machine can function as multiple logical virtual machines (VMs): multiple operating systems are hosted on one physical machine. Our approach is based on virtualization; it contributes a middleware layer that performs resource allocation, with the following goals.

Flexibility: the resource allocation process must dynamically adapt to changes in demand for cloud services.

Scalability: the resource allocation process must scale both in the number of machines and in the number of sites.

Performance: the objective is to achieve max-min fairness for computational resources under CPU and memory constraints.

Fault tolerance: the system must continue operating in the presence of network failures.

A gossip protocol is the key mechanism used to attain these design goals. Gossip protocols have been widely used for load balancing in distributed systems; however, they typically do not consider memory constraints or the cost of configuration changes, which makes resource allocation difficult. This paper is an extension of prior work: we give an optimal solution for a simplified version of the resource allocation problem and an efficient heuristic solution for the harder problem. Many gossip protocols take only static input and produce a single output value (e.g., [1], [2]); they must be restarted whenever the input changes, which requires synchronization. The gossip protocol we propose, in contrast, executes continuously, while its input and output change dynamically.
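To make the fairness goal concrete, the following sketch (our illustration, not the paper's code; the function name and the single-resource setting are assumptions) computes a max-min fair split of one resource, such as CPU, among module demands by progressive filling:

```python
def max_min_fair(capacity, demands):
    """Allocate `capacity` across `demands` so that no allocation can be
    increased without decreasing an already-smaller allocation."""
    alloc = [0.0] * len(demands)
    # process demands from smallest to largest
    remaining = sorted(range(len(demands)), key=lambda i: demands[i])
    left = capacity
    while remaining:
        share = left / len(remaining)      # equal split of what is left
        i = remaining[0]
        if demands[i] <= share:            # smallest demand can be fully met
            alloc[i] = demands[i]
            left -= demands[i]
            remaining.pop(0)
        else:                              # everyone still unmet gets the equal share
            for j in remaining:
                alloc[j] = share
            remaining = []
    return alloc
```

For example, splitting a capacity of 10 among demands [2, 8, 8] fully satisfies the small demand and divides the remainder equally between the two large ones, yielding [2, 4, 4].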

2. RELATED WORK

The problem of allocating resources dynamically to applications has been studied before. In [4], cluster nodes and their applications are considered for resource allocation, and two mechanisms are combined: request routing and application placement. Request routing relies on selective update propagation, whose state size does not depend on the number of cluster nodes, and application placement is performed in a decentralized manner. [5] extends the work of [4] by introducing a distributed middleware for application placement. As in this paper, that work targets changing demand: cluster utility is maximized as demand changes, and the utility function must be chosen so that it handles overload conditions. Our work guarantees that each module receives its fair share of resources. The designs of [4] and [5] scale in the number of machines, but not in the number of applications, as this paper aims to do. [6] argues that computing clouds must avoid wasted resources and prolonged response times. Our design works on a distributed architecture in which resource management is decomposed into self-governing tasks, performed by autonomous node agents in the datacenter; the agents carry out configurations through multiple-criteria decision analysis. The work in [6] addresses the problem of placing virtual machines under CPU and memory constraints; its use of multi-criteria decision analysis for decentralized VM placement limits scalability. Resource pools, collections of resources shared by diverse applications, are considered in [7]; the goal is to divide the total resources among the applications' workloads, with a utility function as the key tool for enabling self-optimizing behavior. [7] presents decentralized utility maximization approaches for adaptive and optimal management of shared resource pools, but it considers only a single resource type and assumes that the demand of an application can be split over several machines. This gives the solution limited applicability, whereas our work also handles local memory constraints.

3. METHODOLOGY

In a cloud environment, several datacenters are interconnected, and high-speed networks connect the machines within them. Sites are accessed through the public Internet using a URL that is translated to a network address by a global directory service, such as DNS. A machine receiving such a request either processes it or forwards it. Figure 2 shows the architecture of the system we consider in this paper.

Our concept is based on a decentralized design. Datacenters running a cloud environment contain a large number of machines, and a site contains one or more modules. Each machine runs a machine manager component; within it, a resource manager component runs the protocol that computes the resource allocation policy, taking as input the projected demand for each module the machine runs. The computed allocation policy is passed to the module scheduler for execution. An overlay manager runs a distributed algorithm that maintains an overlay graph of the machines in the cloud and initiates the interactions among machines.

User requests are handled by the site manager, which contains two components: a demand profiler and a request forwarder. The demand profiler estimates the resource demand of the site (an example of such a profiler can be seen in [3]); this estimate is forwarded to all machine managers that run instances of modules belonging to the site. The request forwarder sends user requests to module instances for processing. Figure 2 (right) shows the components of a site manager and their relationship to machine managers.
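The paper does not give the profiler's internals; as one plausible sketch (the class name, smoothing method, and parameter are our assumptions), a site manager could smooth the observed request rate with an exponential moving average before forwarding the estimate to the machine managers:

```python
class DemandProfiler:
    """Hypothetical demand profiler: smooths observed request rates with
    an exponential moving average (EMA) to produce a demand estimate."""

    def __init__(self, alpha=0.3):
        self.alpha = alpha      # weight given to the newest observation
        self.estimate = None    # current demand estimate (e.g. requests/sec)

    def observe(self, rate):
        """Fold a new observed rate into the estimate and return it."""
        if self.estimate is None:
            self.estimate = float(rate)
        else:
            self.estimate = self.alpha * rate + (1 - self.alpha) * self.estimate
        return self.estimate
```

With alpha = 0.5, observing rates 10 and then 20 yields estimates 10.0 and then 15.0; smaller alpha values react more slowly to bursts, which is usually preferable when the estimate drives reallocation.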

[Figure 2 shows, on the left, a machine running the machine manager with its module instances scheduler, virtual machines VM1...VMn, the cloud middleware, and the operating system; on the right, site managers SM1...SM3, each with a demand profiler and a request forwarder, handling user demand for the site's modules.]

Figure 2: Middleware architecture for the cloud.

4. PROTOCOL FOR RESOURCE ALLOCATION

Gossip protocols have proved to be a feasible way to manage large-scale services in decentralized settings; a basic gossip mechanism underlies many large-scale protocols. Gossip protocols provide many notable features, such as scalability, robustness to failures, load balancing, and redundancy of information. Gossip protocol P* is the main contribution of this paper; it is executed within a middleware architecture to attain the design goals.

Gossip protocol P* follows the structure of a round-based distributed algorithm. While executing the protocol, each node randomly selects a neighboring node for interaction, and the nodes exchange small messages that trigger state changes; during a round, each node updates its state once. Many protocols consider only static input and produce a single output value (e.g., [1], [2]). The protocol we propose executes continuously, while its input and output change dynamically.
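The round-based interaction pattern can be sketched as follows (our illustration: here the per-pair update is classic pairwise averaging, the textbook gossip aggregation example, whereas P* exchanges a richer state, a row of the configuration matrix):

```python
import random

def gossip_round(state, rng):
    """One gossip round: each node picks a random peer, they exchange
    their small state values, and both adopt the average."""
    nodes = list(state)
    rng.shuffle(nodes)
    for n in nodes:
        peer = rng.choice([m for m in state if m != n])
        avg = (state[n] + state[peer]) / 2.0   # exchange and average
        state[n] = state[peer] = avg

# Three nodes with different local loads; repeated rounds drive every
# node's value toward the global mean while preserving the total.
state = {"A": 12.0, "B": 4.0, "C": 8.0}
rng = random.Random(42)
for _ in range(20):
    gossip_round(state, rng)
```

After a few rounds all values cluster tightly around the mean (8.0 here), which is what makes gossip attractive for decentralized aggregation: no node ever needs a global view.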

P* takes input from all machines of the cloud. At start-up, the resource manager carries out the initial cloud configuration; it then invokes P* to compute, and dynamically adapt, the configuration that optimizes the cloud utility. In this way max-min fairness is obtained.

5. STUDY OF THE PROTOCOL

Algorithm 1: Initializing P' or P*.

1: let K be the set of all modules;
2: let L be the set of all machines;
3: freeNode(L) returns a machine l ∈ L with the largest free memory;
4: for i = 1 to |K| do
5:   k = K[i];
6:   l = freeNode(L);
7:   place module k on machine l;

In this section we analyze protocol P', a simplified version of P*; memory constraints are not considered in this algorithm. The distinction between P' and the protocol in [1] is the way state is updated during an interaction: in [1] the averages of two local state variables are updated, whereas P' equalizes the relative demands of the two machines, which makes it more involved.

Algorithm 2: Protocol P', which computes the optimal solution.

Initialization
1: read ωl, rowl(C);
2: start the active and passive threads;

active thread
3: while true do
4:   choose l' uniformly at random from L;
5:   send rowl(C), Ωl to l';           {l sends its state to l'}
6:   receive rowl'(C), Ωl' from l';
7:   balance(l', rowl'(C), Ωl');       {equalize the relative demand of l with that of l'}
8:   write rowl(C);

passive thread
9: while true do
10:  receive rowl'(C), Ωl' from l';    {executed whenever l receives the state of another machine l'}
11:  send rowl(C), Ωl to l';
12:  balance(l', rowl'(C), Ωl');
13:  write rowl(C);

balance(l', rowl'(C), Ωl')
14: compute Δω such that (1/Ωl)(Σm ωm,l − Δω) = (1/Ωl')(Σm ωm,l' + Δω);
15: move demand Δω from the machine with the larger relative demand to the other, and update rowl(C);

P' is executed whenever the demand ω changes; rowl(C) is the l-th row of the configuration matrix C. The active thread executes periodically: machine l starts an interaction by choosing another machine l' uniformly at random from L. The passive thread is initiated by another machine. The computed demand Δω is moved from the machine with the larger relative demand to the one with the lower relative demand.
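The balance() step can be made concrete. The following Python fragment (our illustration; the function interface is an assumption) solves the balance equation of Algorithm 2 for Δω, the demand moved from machine l to its peer l' so that their relative demands, total demand over capacity Ω, become equal:

```python
def balance(demand_l, cap_l, demand_p, cap_p):
    """Return the demand shift delta such that
    (demand_l - delta)/cap_l == (demand_p + delta)/cap_p,
    i.e. both machines end up with equal relative demand."""
    return (demand_l * cap_p - demand_p * cap_l) / (cap_l + cap_p)

# Example: l carries demand 9 on capacity 3, l' carries demand 1 on
# capacity 2.  delta = (9*2 - 1*3)/(3+2) = 3.0, after which both
# relative demands equal 2.0: (9-3)/3 == (1+3)/2.
delta = balance(9.0, 3.0, 1.0, 2.0)
```

A negative result simply means the demand flows the other way, from l' to l, which matches the rule that demand moves from the machine with the larger relative demand.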

C. P*: A Heuristic Solution

In this section we present protocol P*, a distributed heuristic algorithm that extends P'. At the start of the active or passive thread, a machine reads the current demands of the modules it runs; at the end of the thread, the machine's configuration matrix C is updated. Using the heuristic, the memory demand of a process is computed. When a client modifies a process and sends it to the cloud for reallocation, the modified process's memory demand is computed and the process is allocated to the VM with the most free memory.

Algorithm 3: Protocol P*, a heuristic solution.

Initialization
1: read Ωl, Γl;
2: start the active and passive threads;

active thread
3: read ωl, γl, rowl(C), Ll;
4: if Ll ≠ ∅ then
5:   choose l' uniformly at random from Ll;
6: else
7:   choose l' uniformly at random from L − Ll;
8: send (ωl, γl, rowl(C), Ωl) to l';
9: receive (ωl', γl', rowl'(C), Ωl') from l';
10: balance(l', (ωl', γl', rowl'(C), Ωl'));
11: write rowl(C);

passive thread
12: receive (ωl', γl', rowl'(C), Ωl') from l';
13: read ωl, γl, rowl(C), Ll;
14: send (ωl, γl, rowl(C), Ωl) to l';
15: balance(l', (ωl', γl', rowl'(C), Ωl'));
16: write rowl(C);

balance(l', (ωl', γl', rowl'(C), Ωl'))
17: S = max{ωl, ωl'}; S' = min{ωl, ωl'};
18: if l' ∈ Ll then
19:   shiftDemand1(S, S');
20: else
21:   shiftDemand2(S, S');

P* is similar to P', except that ω and rowl(C) are not read during initialization but at the start of each thread, and the relative demands of machines l and l' are balanced under memory constraints. If l' belongs to Ll, that is, the two machines run at least one common module instance, procedure shiftDemand1() is invoked; otherwise, shiftDemand2() is invoked.
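The memory-aware placement rule that P* adds over P' follows the freeNode() idea of Algorithm 1: prefer the machine with the largest free memory that can still fit the module. A minimal sketch, assuming a simple dictionary representation of machines (the data shapes and names are our illustration, not the paper's code):

```python
def free_node(machines, mem_needed):
    """Return the machine with the most free memory that can fit a
    module needing `mem_needed` memory, or None if no machine fits."""
    candidates = [m for m in machines if m["free_mem"] >= mem_needed]
    if not candidates:
        return None                       # placement fails: cloud is memory-bound
    return max(candidates, key=lambda m: m["free_mem"])

machines = [
    {"name": "l1", "free_mem": 512},
    {"name": "l2", "free_mem": 2048},
    {"name": "l3", "free_mem": 1024},
]
target = free_node(machines, mem_needed=768)   # picks l2, the freest machine
```

Returning None when nothing fits is where a fault-tolerant scheduler would fall back, for example by retrying after other modules are rebalanced.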

6. RESULTS

Figure 3: Assessment of resource allocation protocol P*. (a) Site fairness as a function of the CPU load factor (CLF) and the memory load factor (MLF); (b) satisfied demand of sites; (c) average number of instances per module.

7. CONCLUSION

In this paper, we introduced a dynamic resource provisioning and administration model for computing clouds. Our model consists of a middleware architecture that performs resource allocation using a gossip protocol. Gossip protocol P* computes, in a distributed and continuous manner, a heuristic solution to the resource allocation problem for dynamically changing resource demand, and it tolerates network failures. The protocol satisfies our design goals: flexibility, adaptability, and efficient resource allocation.

The approach in this paper can be extended to include other constraints and other resource types, and it is a key step toward resource management in large-scale clouds. Toward this goal, we plan to tackle the following challenges in future work: (1) develop a mechanism that efficiently places new sites and applications, and (2) extend the middleware design to span several datacenters.


