Application For Research Grant


02 Nov 2017


1.      

2.      

3.      

4.      

Keywords: maximum 5 (mandatory)

Cloud Computing, performance management, queuing models, power consumption, green IT

Project Duration:       months / 1 2 3 year(s)

Total Budget Requested (KD) 4000

If the total budget requested is less than 4001 KD, do you prefer external evaluation for your project? Yes No

Research Team (Please attach updated CVs for every member of the Research Team)

First: Principal Investigator (Applicant)

Name: Dr. Faruk Bagci

Rank:

University ID: 212010425

Faculty:

Department: Computer Engineering

Civil ID: 273071509502

Phone: 94018645

Fax:      

Second: Co-Investigator(s) (maximum 4 )

1. Name :      

Rank:

University ID:      

Faculty:

Department:      

2. Name :      

Rank:

University ID:      

Faculty:

Department:      

3. Name :      

Rank:

University ID:      

Faculty:

Department:      

4. Name :      

Rank:

University ID:      

Faculty:

Department:      

Third: Contributor(s) (maximum 4 )

1. Name:      

2. Name:      

3. Name:      

4. Name:      

The Project's Abstract (about 200 words)

1. In English

Information and communication technology (ICT) has an indisputable impact on the environment due to its large CO2 emissions. "Green" IT and low-power networking infrastructures are becoming increasingly important for service/network providers and equipment manufacturers alike. Cloud computing, an emerging technology, can increase the utilization and efficiency of hardware equipment and can potentially reduce global CO2 emissions. Cloud computing has become a necessity in modern resource allocation and storage technologies, and there is a new trend in computing resource provision that offers infrastructure as a service (IaaS) and platform as a service (PaaS) for web applications. Resource allocation, load balancing, and performance management are therefore important aspects of this field. In this research project, our aim is to develop a new methodology for assigning resources based on queuing models, taking power-saving techniques into consideration. The basic principle is to add and remove virtual machines based on the proposed queuing models, since these virtual machines act as servers, but also to check whether such adding/removing will affect the power consumption of the overall system. Based on these performance investigations, the system decides whether to add or remove a virtual machine or to leave the system as it is. If a virtual machine is not allowed to run, the system must still fulfill the quality-of-service measures defined in the service level agreement (SLA) signed by the service provider for the web application, which clearly defines the quality level that must be met for consumer satisfaction.

2. In Arabic

Research Major Items

1. Project Background

(Explain - depending on the literature- WHAT has been already done in the research field of your project)

Cloud computing infrastructure depends on resource virtualization. Virtualization divides a single hardware resource into separate, isolated virtual resources, which enables resource pooling: a shared hardware resource can be used by multiple applications and multiple operators. This hardware resource might be a server, switch, router, or storage device. Resource pooling is very effective for increasing resource utilization and cost efficiency. On the other hand, cloud computing faces challenges in cloud security, network management, load balancing, and resource allocation.

Based on the National Institute of Standards and Technology (NIST) definition [2], cloud computing infrastructure has five main characteristics, as shown in Figure 1.1: rapid elasticity, resource pooling, measured service, broadband access, and on-demand self-service. These characteristics facilitate offering Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS).

Figure 1.1: Cloud Computing Definition

Rapid Elasticity:

The NIST definition of rapid elasticity is: "Capabilities can be rapidly and elastically provisioned, in some cases automatically, to quickly scale out, and rapidly released to quickly scale in" [2]. By this definition, cloud resources can be dynamically and immediately scaled up or down. Resource licenses should support this feature, and scaling up should be location independent.

Measured Service:

"Cloud systems automatically control and optimize resource use by leveraging a metering capability at some level of abstraction appropriate to the type of service (e.g., storage, processing, bandwidth, and active user accounts). Resource usage can be monitored, controlled, and reported, providing transparency for both the provider and consumer of the utilized service" [2]. Thus, based on performance metrics collected during network operation, cloud resources should be automatically optimized, and pay-per-use should be applied for billing calculations.

Broadband Access:

"Capabilities are available over the network and accessed through standard mechanisms that promote use by heterogeneous thin or thick client platforms (e.g., mobile phones, laptops, or PDAs)" [2]. By this feature, cloud computing should support all fixed and mobile end-user standards with a self-management interface.

Resource Pooling:

"The provider's computing resources are pooled to serve multiple consumers using a multi-tenant model, with different physical and virtual resources dynamically assigned and reassigned according to consumer demand" [2]. Through resource pooling and virtual isolation, cloud computing meets customers' requirements for cost minimization and simple management.

On Demand Self Service:

"A consumer can unilaterally provision computing capabilities, such as server time and network storage, as needed automatically, without requiring human interaction with each service provider" [2]. By this feature, the customer can easily and automatically scale IT infrastructure resources without any human interaction.

Some IT companies specialize in implementing public clouds as a service for companies to deploy their own web applications. Amazon Elastic Compute Cloud (Amazon EC2) and Microsoft Windows Azure are examples of public clouds that deliver scalable, pay-as-you-go [3] computing for companies to establish web applications or use existing ones. IBM provides a cloud service with the ability to build private or hybrid clouds, which allows creating a level of abstraction over pooled IT resources, resulting in a new trend for IT businesses to introduce and serve their web applications to consumers. As the proliferation of web applications increases, performance and resource scheduling have become important issues in cloud computing that greatly affect the profit of commercial IT companies in this field.

Related Work:

Performance management on a cluster and performance management on a cloud are quite similar issues that have been discussed in previous work. The work in [4] targeted optimization of the online response time of the Apache Web Server and discussed the advantages and disadvantages of fuzzy control, Newton's method, and heuristic methods for optimization. There are two main differences between performance management on a cloud and on a cluster. First, on a cloud, web applications run heterogeneously, whereas on a cluster they must run in a homogeneous manner, since each node has a unified software platform. Second, the mechanism of resource sharing on a cloud is more complicated than on a cluster. Much research has been carried out considering these differences. The RAD Lab at Berkeley studies the aggressive and pervasive use of statistical machine learning as a tool for prediction and diagnosis, allowing automatic reaction to correctness and performance problems and automatic management. In [5], a queuing-theory method is proposed for predicting the performance provided by the cloud, but the proposed model is quite simple, i.e., it consists of one cloud that exposes one service, without consideration of any special context of cloud computing.

There is a margin of error between the predicted and real-time status, but that margin is acceptable, as prediction provides an effective way to optimize the usage of computing resources. Prediction also has challenges: it is risky, and it needs many samples, which are not available for newly deployed applications. In [6], a packing algorithm is proposed that targets saving energy by minimizing the number of running machines. In [7], a dynamic scaling up/down algorithm is proposed that targets a queuing-based model of web applications to calculate the sojourn time (response time).

In [8], Zhu et al. present a virtual network architecture named Cabernet (Connectivity Architecture for Better Network Services). They use a three-layer network architecture to lower the barrier for deploying new services. The connectivity provider uses the virtual network to satisfy the network requirements of the service provider. In the virtual network environment, the service provider can run "intra-domain" protocols without regard for the many underlying infrastructure networks. Each service provider, employing a dedicated wide-area virtual network, can easily deploy new services. Cabernet provides service providers not only with computing resources but with the whole virtual network.

He et al. present dynamically adaptive virtual networks for a customized Internet (DaVinci) [9]. Each of their proposed virtual networks can run customized protocols to optimize network performance. DaVinci can use multiple paths between two end hosts to forward packets in order to satisfy QoS requirements. To facilitate running their virtual network, an edge virtual node is established to encapsulate packets. Users may use various methods to connect to a virtual network, such as establishing a tunnel to a virtual node, configuring a web browser to use a virtual node as a proxy, or DNS redirection. Bavier et al. present a virtual network infrastructure called VINI [10] to help network researchers evaluate their network protocols and services. VINI supports simultaneous experiments with arbitrary network topologies on a shared physical infrastructure, giving researchers flexibility in designing their experiments. Their paper illustrates and tackles many important design issues in network virtualization. In [11], Restrepo et al. present an energy-profile-aware routing scheme to decrease the energy consumption of data transmission in the Internet. They take the energy consumption of equipment and traffic loads into account when making routing decisions, and they define several equations to minimize the overall energy consumption of an already dimensioned network. The energy profile of a network node is scalable: before making a routing decision, the router has to know the energy profiles of all nodes in every possible routing path. In reality, however, the energy profile changes constantly.

2. Objectives

The main objectives of this project are:

Define an abstract model of cloud computing environment including architecture of virtual networks.

Define a service model for web applications based on virtual machines.

Define metrics of interest to evaluate the performance of proposed algorithms.

Define an energy model which maps realistic power consumption of cloud components and services.

Design a cloud service controller.

3. Importance

Based on the research study by IDC on the digital universe [1], the growth of information is exponential. The study shows that the universe's data increased almost sevenfold in five years (from 2005 to 2010), from 35 to 260 PetaBytes, as shown in Figure 3.1. In the following decade (from 2009 to 2020), the data is expected to grow to 44 times its current volume [1].

Figure 3.1: Worldwide information growth in PetaBytes

Resource optimization, availability enhancement, cost efficiency and productivity increase are main challenges that face the IT industry. Cloud computing is a new era in the IT business that allows the IT vendors to provide IT, platform or application as a service. Cloud computing is a promising solution that saves the hardware, the operation and the maintenance cost for the customers and achieves the vendors expectations for providing simple and rapid solution for building IT service. Another benefit from cloud computing is the power consumption saving due to resource pooling and switching off inactive resources. Power saving is one of the hot challenges in the green IT vision.

Companies spend the same on cooling in a data center as they do on electronics. Hence, there is a strong need for optimization. In the future, we will see significant interest in initiatives such as server virtualization and storage consolidation. About 25% of mid-size businesses have already completed some form of virtualization or storage consolidation; another 50% are planning these for the coming years. This growth in adoption speaks to the benefits offered by server virtualization and storage consolidation: cost-efficiency, ease of management and reduction in energy use.

Controlling cost is the strongest factor driving several initiatives. Under the cost-savings umbrella, four main benefits rise to the top: decreased electricity use, decreased consumables use, decreased future operational expenses or investments and realizing credits or rebates from local utilities and governments. Two additional benefits were also cited as key considerations by many businesses: the ability to better meet customers’ demands and increased features and functionality for the business.

Companies thinking about implementing a Green IT project should consider that the majority of implementations are considered successful. In 65% of all Green IT projects, organizations’ initial goals for these projects are met or exceeded. In other words, businesses typically accomplish what they set out to do, and realize additional benefits they weren’t expecting.

4. Research Methodology

The process of network virtualization covers the virtualization of network components such as links, switches, and routers. Virtual links can be realized by tunneling, whereas a virtual router can be implemented by a software router. On a user's demand, a customized virtual network is provided. Virtual network provision is a critical step, since it must satisfy the user's requirements; furthermore, it affects the utilization of the physical network. Figure 4.1 shows the architecture of a virtual network.

Figure 4.1: Architecture of a Virtual Network

When providing a virtual network, the energy for computation and communication can be minimized to decrease total energy consumption. When a user requests a virtual network, the available network resources are searched to create a virtual network that meets the user's requirements, such as topology, number of nodes, and link bandwidth. In existing virtualization services, energy considerations are usually not taken into account. Simply minimizing the amount of assigned hardware would decrease the static power consumption of computation; unfortunately, this directly affects communication energy, since minimizing the static power consumption of computation may increase the energy consumed by communication.
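The computation/communication trade-off described above can be illustrated with a toy energy model. All numbers and names below are hypothetical assumptions for illustration, not measurements:

```python
# Toy model: total power = static power of active servers
# plus power spent on traffic between servers (assumed figures).
P_STATIC = 200.0   # W per active physical server (assumption)
P_PER_HOP = 0.5    # W per traffic unit per network hop (assumption)

def total_power(active_servers, inter_server_traffic, hops):
    """Static computation power plus communication power."""
    return active_servers * P_STATIC + inter_server_traffic * hops * P_PER_HOP

# Consolidating VMs onto 2 servers concentrates traffic on longer paths,
# while spreading them over 4 servers keeps communication local.
consolidated = total_power(2, inter_server_traffic=400, hops=3)  # 1000.0 W
spread = total_power(4, inter_server_traffic=100, hops=1)        # 850.0 W
```

With these assumed figures, the consolidated mapping uses less static power (two servers instead of four) yet more total power, which is exactly the interaction a realistic energy model for virtual network provision has to capture.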

Another possibility for saving energy is at the service level. The cloud infrastructure provides users not only with network solutions but also with virtual services. These services run on virtual machines using dedicated hardware and storage space. As we target the power-saving and resource-scheduling problem, a service level agreement (SLA) is signed by the service provider and its clients when a new web application is deployed. The SLA includes the expected response time for clients and an acceptable error range that restricts the difference between the actual and expected response times.

Figure 4.3: Cloud infrastructure for web applications

A cloud design for large-scale web applications on real cloud infrastructures is shown in Figure 4.3. Clients use the interface of a cloud controller to interact with the cloud environment. The controller consists of a dispatcher, web-application queues, database storage, and cloud nodes. These nodes represent real hardware servers on which several virtual machines run in parallel. The dispatcher has two roles in the cloud controller. First, it assigns client requests to the targeted queues and records each request's entry time in the data storage. At runtime it records the leaving time of the response and calculates the total response time, which is compared to the agreed response time defined in the SLA. The dispatcher's second task is to decide whether to add or remove virtual machines. The database storage contains the SLAs for all web applications and is used to save the entry and leaving times of every cloud request. Queues in the cloud controller are implemented as buffers or registers that hold pending client requests. Each cloud node hosts a set of virtual machines, each of which is dedicated to serving one web application only. By running different virtual machines in parallel, a cloud node can serve different web applications at the same time.

Our focus in this project is to improve power saving and resource utilization by dynamically resizing the cloud, scaling the virtual servers of each web application up and down to improve power-saving performance and delay across the overall cloud. Our proposal is based on an M/M/S/K queuing model. The first M represents a memoryless exponential inter-arrival process that controls the number of requests entering the system. The second M represents a memoryless exponential service (inter-departure) process. S stands for the number of servers in the system, which in our case is the number of virtual machines serving each application. Finally, K represents the maximum queue size for each web application. In this model, virtual machines act as servers and web applications act as queues. Each web application has its own queue and its own virtual machines, which serve only its requests. Dynamic scaling up and down occurs by adding or removing virtual machines for each web application based on the service level agreement.

In this model, we assume the following for any client requests to any web application:

The model is memoryless, meaning any two successive events are independent of each other.

Inter-arrival times follow an exponential distribution with rate λ.

Each web application's queue has a certain size that limits the maximum number of client requests that can be stored and served successfully. Any new request arriving after the queue is fully occupied is discarded.

Client requests in a queue are served in first-in-first-out (FIFO) order: the first arriving request is served first, with no restrictions.

Clients either receive responses from servers in time, based on their requests, or receive notifications indicating an exception after a timeout.

The response time equals the waiting time in the queue plus the service time in a server.

Figure 4.2 shows an example of a simple queuing model describing two web applications, A and B, each having its own queue and its own virtual machines serving its clients. Requests are controlled through a dispatcher that decides which application an incoming request targets. Web application A has one queue in which a maximum of seven requests can be held concurrently, and three virtual machines dedicated to it; this forms an M/M/3/7 queue. The dispatcher assigns any client request targeting web application A to queue A. Requests are stored first in available empty slots of the queue; if the queue is fully occupied, any new request for web application A is discarded until free slots become available. The same applies to web application B, with the difference that B forms an M/M/2/7 queue, i.e., it has only two virtual machines assigned to serve its requests.

Figure 4.2: Queuing model for two web applications A and B.
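Under the assumptions listed above, the steady-state behavior of such an M/M/S/K queue has a closed form. The following sketch computes the blocking probability and mean sojourn time for the M/M/3/7 configuration of web application A; the arrival and service rates are hypothetical values chosen for illustration:

```python
from math import factorial

def mmsk_metrics(lam, mu, s, K):
    """Blocking probability and mean sojourn time of an M/M/s/K queue.
    lam: arrival rate, mu: per-server service rate,
    s: number of servers (virtual machines), K: system capacity."""
    a = lam / mu  # offered load
    # Unnormalized steady-state probabilities p_n for n = 0..K.
    weights = [a**n / factorial(n) if n < s
               else a**n / (factorial(s) * s**(n - s))
               for n in range(K + 1)]
    norm = sum(weights)
    p = [w / norm for w in weights]
    p_block = p[K]                    # request discarded: queue full
    lam_eff = lam * (1 - p_block)     # rate of admitted requests
    mean_in_system = sum(n * pn for n, pn in enumerate(p))
    sojourn = mean_in_system / lam_eff  # Little's law: W = L / lambda_eff
    return p_block, sojourn

# Web application A from Figure 4.2: 3 VMs, capacity 7 (M/M/3/7),
# with assumed rates lambda = 5 req/s and mu = 2 req/s per VM.
p_block, sojourn = mmsk_metrics(5.0, 2.0, 3, 7)
```

A controller could evaluate this formula against the SLA's expected sojourn time before actually committing to adding or removing a virtual machine.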

Applications are defined by an expected sojourn time (the serving time plus the waiting time), an acceptable error e, and an accepted number of successive failures n. The dispatcher records the time to serve each request and the length of each queue. At the same time, it is the controlling interface between user and cloud: every client request is sent to the dispatcher first before being forwarded to a queue, and, on the way back, all responses from the queues pass through the dispatcher to reach the clients. This allows the dispatcher to record the sojourn times of requests and calculate their mean, as well as the arrival and departure rates of client requests. Periodically, the mean number of arriving requests, mean waiting time, mean serving time, and mean sojourn time are calculated from the recorded data and compared to the values specified for the application. The error between the mean sojourn time and the specified sojourn time determines whether an application can satisfy its requirements. If the error e exceeds the threshold for n successive times, the application fails to fulfill its requirements, and a new virtual machine is created. If the error e stays below the threshold for 2n successive times, a virtual machine can be removed.
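The period-by-period decision rule just described can be sketched as a small controller. Class and attribute names here are our own illustrative choices, not from an existing system:

```python
class SLAController:
    """Per-application scaling decision based on successive SLA errors."""

    def __init__(self, expected_sojourn, e, n):
        self.expected = expected_sojourn  # sojourn time agreed in the SLA
        self.e = e                        # acceptable error
        self.n = n                        # accepted successive failures
        self.failures = 0
        self.successes = 0

    def check(self, mean_sojourn):
        """Called once per period; returns 'add', 'remove', or None."""
        if mean_sojourn - self.expected > self.e:
            self.failures += 1
            self.successes = 0
        else:
            self.successes += 1
            self.failures = 0
        if self.failures > self.n:        # too many successive failures
            self.failures = 0
            return "add"
        if self.successes > 2 * self.n:   # comfortably under the threshold
            self.successes = 0
            return "remove"
        return None

# Example: SLA sojourn 1.0 s, tolerance 0.2 s, n = 2 allowed failures.
ctrl = SLAController(expected_sojourn=1.0, e=0.2, n=2)
ctrl.check(2.0); ctrl.check(2.0)   # two violations: no action yet
decision = ctrl.check(2.0)         # third in a row exceeds n -> "add"
```

Resetting the counters after each decision gives the newly added (or removed) virtual machine a full window of n periods before the next scaling action can fire.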

In order to reduce a high add/remove rate of virtual machines, two algorithms are provided. The first algorithm records the rejoin time, i.e., the time between adding a virtual machine, removing it, and adding it again. To avoid frequent rejoins within a short time, we introduce a minimum allowed time Tmin: if the rejoin time is less than Tmin, the acceptable error e is increased. The second algorithm checks, every time a new virtual machine is to be added, when it was last removed; respectively, when a virtual machine is to be removed, it checks when it was last added. If the time between adding and removing is less than the minimum allowed time Tmin, the algorithm does not add or remove the virtual machine. Both algorithms can be considered as a number of states; each state is entered under certain conditions and performs certain commands and tasks. These states are depicted in Figure 4.3 and are as follows:

State initialize: all variables are initialized.

State request: entered when a request is received; it performs the following:

Check which application is targeted and forward the request to the corresponding queue.

Update the queue length.

Update the numbers of requests arrived.

State response: entered when a response is received; it performs the following:

Check which client is targeted and forward the response to it.

Update the queue length.

State record: entered periodically to record the number of requests that arrived and the queue lengths during the previous period.

State calculate: entered every 10 periods; it performs the following:

Calculate the mean arrival rate for each application during the last period.

Calculate the mean queue length for each application during the last period.

Calculate the mean sojourn time for each application during the last period.

State check: this state is entered directly after the calculate state; it differs between the two algorithms.

Algorithm 1 performs the following:

Calculate the error between the mean sojourn time and the expected sojourn time.

If the error is greater than the acceptable error e, increment the number of successive failures by one; otherwise, increment the number of successive successes by one.

If the number of successive failures is greater than the accepted number of failures n, add a new virtual machine.

If the number of successive successes is greater than 2n, remove a virtual machine.

If a cycle was completed, check whether the cycle time TCycle is less than Tmin; in that case, increase the accepted error e.

Algorithm 2 performs the following:

Calculate the error between the mean sojourn time and the expected sojourn time.

If the error is greater than the acceptable error e, increment the number of successive failures by one; otherwise, increment the number of successive successes by one.

If the number of successive failures is greater than the accepted number of failures n, check whether the time since a virtual machine was last removed is greater than Tmin; if so, add a virtual machine.

If the number of successive successes is greater than 2n, check whether the time since a virtual machine was last added is greater than Tmin; if so, remove a virtual machine.
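The Tmin guard of Algorithm 2 can be sketched as follows. The clock is injectable so the behavior is testable, and all names are our own illustrative choices:

```python
import time

class GuardedScaler:
    """Refuse an add (remove) that follows the opposite action within t_min."""

    def __init__(self, t_min, clock=time.monotonic):
        self.t_min = t_min        # minimum allowed time between opposite actions
        self.clock = clock
        self.last_added = None
        self.last_removed = None

    def try_add(self):
        now = self.clock()
        if self.last_removed is not None and now - self.last_removed < self.t_min:
            return False          # re-adding too soon after a removal
        self.last_added = now
        return True

    def try_remove(self):
        now = self.clock()
        if self.last_added is not None and now - self.last_added < self.t_min:
            return False          # removing too soon after an addition
        self.last_removed = now
        return True
```

For example, with Tmin = 60 s, a removal attempted 10 s after an addition is rejected, while the same attempt 70 s later succeeds; this damps the oscillation between adding and removing the same virtual machine.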

Figure 4.3: Finite State Machine for proposed model

REFERENCES

[1] John F. Gantz. The Expanding Digital Universe. IDC White Paper, http://www.emc.com/collateral/analyst-reports/expanding-digital-idc-white-paper.pdf, March 2007.

[2] Peter Mell and Tim Grance. The NIST Definition of Cloud Computing. National Institute of Standards and Technology (2009), Volume 53, Issue 6, Publisher: NIST, Page 50.

[3] Michael Armbrust, Armando Fox, et al. Above the Clouds: A Berkeley View of Cloud Computing, http://www.eecs.berkeley.edu/Pubs/TechRpts/2009/EECS-2009-28.pdf, 2009.

[4] Xue Liu, Lui Sha, Yixin Diao, Steven Froehlich, Joseph L. Hellerstein, and Sujay Parekh. Online Response Time Optimization of Apache Web Server, IWQoS 2003, LNCS 2707, pp. 461-478, 2003.

[5] K. Xiong and H. Perros. Service performance and analysis in cloud computing, ICWS 2009, in Proc International Workshop on Cloud Computing, July, 6-10 (2009), LA

[6] Bo Li, Jianxin Li, Jinpeng Huai, Tianyu Wo, Qin Li, Liang Zhong. EnaCloud: An Energy-saving Application Live Placement Approach for Cloud Computing Environments, in Proc 2009 IEEE International Conference on Cloud Computing, 2009

[7] Hao-peng Chen, Shao-Chong LI, A Queueing-based Model for Performance Management on Cloud, in School of Software, Shanghai Jiao Tong University, Shanghai, China, 2011

[8] Y. Zhu, R. Zhang-Shen, S. Rangarajan, and J. Rexford. Cabernet: Connectivity Architecture for Better Network Services. In Proceedings of ReArch '08, Dec. 9, 2008, Madrid, Spain.

[9] J. He, R. Zhang-Shen, Y. Li, C.-Y. Lee, J. Rexford, and M. Chiang. DaVinci: Dynamically Adaptive Virtual Networks for a Customized Internet. In Proceedings of CoNEXT, December 2008.

[10] A. Bavier, N. Feamster, M. Huang, L. Peterson, and J. Rexford. In VINI Veritas: Realistic and Controlled Network Experimentation. In Proceedings of SIGCOMM '06, 2006, pp. 3-14.

[11] J. C. C. Restrepo, C. G. Gruber, and C. M. Machuca. Energy Profile Aware Routing. In Proceedings of IEEE International Conference on Communications Workshops 2009, pp. 1-5.

5. Time Frame for EXECUTION of Research

Specify the research major activities during the project duration (in months).

Major Activities | 1st Year (months 1-12) | 2nd Year (months 1-12) | 3rd Year (months 1-12)

1.      

2.      

3.      

4.      

5.      

6.      

Research & Teaching Load for the Research Team

Specify the time commitments in hours per week for this project and other projects, as well as teaching duties.

Research Team (Name) | Hours/Week: This Project (1st / 2nd / 3rd year) | Other Project(s) (1st / 2nd / 3rd year) | Teaching (1st / 2nd / 3rd year)

PI:      

CoI 1:      

CoI 2:      

CoI 3:      

CoI 4:      

Specify the duties of every CoI and the significance of his/her contribution

     

Expected Research Output

Published Paper(s)

Number       (Only these will be considered as "productivity" of the project)*

Conference Presentation(s)

as Manuscript       , as Poster      

You must acknowledge the financial support of Kuwait University Research Grant in all your Published Output from this project by mentioning the Research Administration Project GRANT Number, as per the following format:

This work was supported by Kuwait University, Research Grant No. [ ]

* Please note that published paper with author names not belonging to the research project team will NOT be considered as productivity.

* Please note that for Scientific Faculties only papers that are published in journals with impact factors according to the Journal Citation Reports (JCR) are counted as productivity for the projects.

Ethics & Risks Prevention (if any)

Does your research proposal require:

Specify what protection the PI has for the safety of subjects

Human Subjects/ Samples

     

Radiation Material

     

Dangerous Chemicals

     

Laboratory Hazards

     


