The Types Of Cloud Computing

Everyone is talking about cloud computing today, but few give it an exact meaning. The general idea behind the cloud is that applications or other business functions exist somewhere away from the business, and companies look to the cloud in order to use that technology. Cloud computing offers a variety of ways for businesses to increase their IT capacity or functionality without having to add infrastructure, personnel or software.

3.1.1 Types of Cloud Computing

Cloud computing can be classified into four types based on where the cloud is hosted:

Public Cloud: The computing infrastructure is hosted at the vendor's premises. The customer has no visibility into where the cloud computing infrastructure is located, and the infrastructure is shared between multiple organizations.

Private Cloud: The computing architecture is dedicated to a single customer and is not shared with other organizations. Private clouds are more expensive and are considered more secure than public clouds. They may be hosted externally or on the organization's own premises.

Hybrid Cloud: Organizations host critical, secure applications in private clouds and not-so-critical applications in the public cloud; the combination is known as a hybrid cloud. Cloud bursting refers to a setup in which the organization uses its own infrastructure for normal workloads and the public cloud for peak loads.

Community Cloud: The cloud infrastructure is shared between organizations of the same community. For example, all the government agencies in a city can share the same cloud, but non-government agencies cannot.

Here are six different types of cloud computing and a little about what they offer to businesses:

Web-based cloud service is a type of cloud service that exploits specific web service functionality, rather than using fully developed applications. For example, it might include an API for Google Maps, or for a service involving payroll or credit card processing (a sketch of such an API call appears after this list of types).

SaaS (Software as a Service) is the idea of providing a given application to multiple tenants, typically through the browser. SaaS solutions are common in sales, HR and ERP.

Platform as a Service (PaaS) is a variant of SaaS in which customers run their own applications, but the code executes on the cloud provider's infrastructure.

Utility cloud services are virtual storage and server options that organizations can access on demand, even allowing the creation of a virtual data center.

Managed services are perhaps the oldest iteration of cloud solutions. In this scenario, the application is used by the cloud provider itself rather than by end users; examples include anti-spam services and application monitoring services.

Service commerce is a mix of SaaS and managed services. It provides a hub of services with which the end user interacts. Common implementations include expense tracking, travel ordering and virtual assistant services.
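Returning to the first of these types, a web-based cloud service is typically consumed through a plain HTTP call. The sketch below (in Python, using the requests library) illustrates the idea; the endpoint URL, parameters and response fields are hypothetical placeholders rather than any real provider's API.

    # Minimal sketch of consuming a web-based cloud service over HTTP.
    # The URL, parameters and response fields are hypothetical placeholders.
    import requests

    def geocode(address):
        """Ask a hosted geocoding service to resolve an address."""
        response = requests.get(
            "https://maps.example.com/api/geocode",      # hypothetical endpoint
            params={"address": address, "key": "YOUR_API_KEY"},
            timeout=10,
        )
        response.raise_for_status()
        data = response.json()
        return data["lat"], data["lng"]                  # assumed response shape

    if __name__ == "__main__":
        print(geocode("1600 Amphitheatre Parkway, Mountain View, CA"))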

New ideas and new iterations are constantly being brought to the forefront. As cloud computing becomes a viable and even necessary option for many businesses, the types of services that providers can offer to organizations will continue to grow.

3.1.2 Public cloud

A public cloud is one based on the standard cloud computing model, in which a service provider makes resources such as applications and storage available to the general public over the Internet. Public cloud services may be free or offered on a pay-per-usage model. The main benefits of using a public cloud service are:

Easy and inexpensive setup because hardware, application and bandwidth costs are covered by the provider.

Scalability to meet needs.

No wasted resources because you pay for what you use.

Examples of public clouds include Amazon Elastic Compute Cloud (EC2), IBM's Blue Cloud, Sun Cloud, Google App Engine and the Windows Azure Services Platform.
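To make the pay-per-usage model concrete, here is a minimal sketch using the boto3 library against Amazon EC2: a server is started on demand and terminated when no longer needed, so charges accrue only while it runs. The AMI ID and instance type are placeholders, and AWS credentials are assumed to be configured on the machine running the script.

    # Sketch: launch a public-cloud server on demand, then terminate it so
    # billing stops. The AMI ID below is a placeholder, not a real image.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    result = ec2.run_instances(
        ImageId="ami-0123456789abcdef0",   # placeholder machine image
        InstanceType="t2.micro",
        MinCount=1,
        MaxCount=1,
    )
    instance_id = result["Instances"][0]["InstanceId"]
    print("Started", instance_id)

    # ... use the instance for as long as it is needed ...

    ec2.terminate_instances(InstanceIds=[instance_id])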

3.1.3 Private Cloud (Internal Cloud or Corporate Cloud)

Private cloud (also called internal cloud or corporate cloud) is a marketing term for a proprietary computing architecture that provides hosted services to a limited number of people behind a firewall.

Advances in virtualization and distributed computing have allowed corporate network and data center administrators to effectively become service providers that meet the needs of their "customers" within the corporation.

Marketing material that uses the term "private cloud" is designed to appeal to organizations that need or want more control over their data than they can get by using a third-party hosted service, such as Amazon's Elastic Compute Cloud (EC2) or Simple Storage Service (S3).

3.1.4 Difference between public cloud and a private cloud

Cloud storage and computing are no longer new concepts; the cloud is one of the most recognized terms in the industry. During its initial stages it was offered as a service and gained recognition as Web 2.0 startups began outsourcing their storage administration. With the technology evolving to its current level, the cloud can be offered in two forms: public cloud hosting and private cloud services. Below we differentiate between the public cloud and the private cloud so that users can understand the difference based on factors such as usage patterns, security, performance and cost.

3.1.5 Knowing the difference between Public and Private

It is fairly easy to differentiate between the private cloud and the public cloud; the location of the cloud's deployment should be the primary question. A cloud offered as a service over the Internet is a public cloud hosting solution, whereas a private cloud is deployed within the firewall and managed by the user enterprise itself. The location of deployment is the prime differentiating factor between the two.

Usually, the public cloud is charged on a month-to-month basis. Users pay per GB of usage, in combination with bandwidth transfer fees. Storage scales on demand, and there is no need to buy storage hardware. The cloud provider bears the responsibility of managing the infrastructure and pooling resources into capacity that any user can claim.
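A back-of-the-envelope sketch of that pricing model follows; the per-GB rates are made-up figures for illustration only, not any provider's actual prices.

    # Rough monthly-cost model for pay-per-use public cloud storage.
    # The rates below are illustrative assumptions, not real prices.
    STORAGE_RATE_PER_GB = 0.10    # currency units per GB stored per month
    TRANSFER_RATE_PER_GB = 0.05   # currency units per GB transferred out

    def monthly_cost(stored_gb, transferred_gb):
        return stored_gb * STORAGE_RATE_PER_GB + transferred_gb * TRANSFER_RATE_PER_GB

    # Example: 500 GB stored and 200 GB downloaded in a month.
    print(monthly_cost(500, 200))   # 500*0.10 + 200*0.05 = 60.0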

A private cloud, by contrast, is created using software running on hardware provided by the customer. The storage is not shared with anyone outside the organization and is entirely controlled and retained by the enterprise. Scalability being one of the cloud's biggest advantages, users can add servers to the existing architecture. Furthermore, this architecture is fully managed by the users; because it is self-managed, expanding the architecture also adds to its performance and capacity.

3.1.6 Deciding Factors for Public Cloud vs. Private Cloud

Elementary Expense

It is commonly, but falsely, believed that a private cloud architecture requires a very large investment in the early stages. In truth, private clouds can be built on an affordable budget, and deploying the architecture is fairly easy: users can download the software and have a private cloud operational within an hour.

Public cloud hosting is offered at prices as affordable as Rs. 3000. Since users are not required to buy hardware or software, the initial expenditure is trivial, assuming that the user's applications can use the required protocols.

Volume of Data

Cloud storage is primarily known for its extensive scalability, but most companies start small. A private cloud can begin at a few TB in size and scale its capacity simply by adding nodes or disks, whereas public clouds start even smaller: they make it simple to back up a single laptop or deploy an application of a few GB. As usage expands, users can lease more capacity, and the cost scales linearly.

Duration of Data Storage

The length of time for which user data is stored in the cloud can strongly affect the choice of cloud. The longer data stays in the public cloud, the more the costs accumulate. That means that if you store content that changes regularly, public cloud solutions are a good fit for you.

Private clouds are typically licensed like enterprise software, so the duration of data storage does not affect the cost. This makes them an ideal solution for archives or content-repository applications.

Performance Expectations

Private clouds are deployed within the firewall and accessed over an Ethernet local area network. Read access in the 100 MB/s range per node is fairly usual, and every node added improves the performance of this type of cloud. Public cloud accessibility is more limited by comparison, since it is accessed over the Internet.

Access Patterns and Locations

Public cloud offerings typically include replication of data to many geographic locations, usually for an additional charge. If you have users all over the world and can benefit from data locality, the public cloud can serve as an alternative to a content distribution network.

A private cloud is deployed in a single location on a local area network. Remote users must connect over the wide area network and work with Internet-like latencies. Larger private cloud deployments can span multiple locations and begin to approach the distribution of a public cloud, although at the cost of a greater initial investment.

Security and Data Isolation

There are many published opinions and dedicated websites describing the security of public cloud offerings, but the primary concern is how much control you have over your data. Public clouds are exactly that: public. Isolation of data is only as strong as the virtualization technologies used to build the cloud and the provider's firewall.

The ownership, deployment and management of private clouds are handled by in-house administrators. Isolation of data depends on your requirements, and security is based on your internal processes.

Confidentiality and Destruction of Data

Like security, data confidentiality is a factor that needs consideration when choosing a cloud storage solution. The applicable law is determined by who controls the data, so users must go through the terms and conditions laid down by the cloud hosting provider. Since private clouds are maintained and controlled by you, you have complete control over the deletion of data stored in the cloud.

SLAs (Service Level Agreements)

Private clouds have an altogether different mechanism for data availability and service access. Most keep multiple copies of files on multiple nodes and treat each node as a failure domain, so an individual server malfunction neither brings the cloud down nor results in data loss, and most SLAs are therefore fulfilled. Make sure you have a clear understanding of the architecture and its capabilities when selecting and deploying a private cloud.

Public cloud SLAs are published by the providers, and meeting them is the provider's sole responsibility. Remediation is typically a cash payment, although providers will also help recover your data from their most recent backups.

In-House Technical Crew

With a public cloud solution, you are not required to buy hardware or maintain an in-house team of administrators. Data center infrastructure and related costs are borne by the cloud provider, which lets you concentrate on your business's core competencies. Since a private cloud is deployed inside your firewall, you need your own crew of system administrators to manage the cloud.

3.2 Cloud Infrastructure

3.2.1 Cloud Computing Infrastructure:

Ideal for Everything from Online Sports Betting to Major Corporations

Fig. 5: Cloud Computing Infrastructure

A cloud computing infrastructure works a lot like an electricity grid. When you need light in a room, you flip the light switch, the signal travels through the grid, power is delivered to your switch and you have light. When you demand power, it comes to you via the grid. A cloud computing infrastructure works the same way: whenever you need resources such as information or software, they are stored in a network called the cloud, and you locate them in your cloud computing infrastructure and pull them up. If someone else needs that same resource, he or she can access it from their own computer, because the information is stored in the cloud infrastructure rather than on any individual machine. Figure 5 shows the basic infrastructure for a cloud, comprising client and server machines. Application, platform and infrastructure services are used by both machines: the server deploys the services and acts as the provider, whereas the client consumes them and acts as the requestor.

Any business that requires multiple computers could benefit from a cloud computing infrastructure. Anything from online sports betting companies to major corporations with operations around the world can tailor a cloud computing infrastructure to meet their specific needs. It eliminates the need for individual employees to back up data regularly, because the network administrator is responsible for backing up data on the cloud computing infrastructure. It also allows every employee to access the same information, which makes operations in an office setting run much more efficiently.

A business can set up a cloud computing infrastructure in five steps:

1. Choose on-demand technology that will be the foundation for your infrastructure.

2. Determine how employees will access the information on the infrastructure.

3. Prepare the infrastructure with the necessary software and hardware.

4. Set up each computer to access the infrastructure.

5. Integrate all aspects of the infrastructure so all employees can participate in resource sharing.

Setting up a cloud computing infrastructure does take an investment, but the improved efficiency will make it worthwhile.

3.2.2 Status of Cloud Computing in India

India is the outsourcing capital of the world, but when it comes to offering cloud services its record is spotty at best. There are some really interesting companies in the application and platform spaces, but they are a very small fraction of the services market in India. Even though the Indian outsourcing giants keep making noise about infrastructure services, they are cloud-washing at best, and until recently there was not a single provider offering Amazon EC2-style services in India. This is partly because infrastructure services are capital intensive and partly due to unreliable supporting infrastructure (such as electricity and networks) in India.

The potential of the infrastructure market in India is huge. To give an idea, India has 1.4 million developers, over 11,000 system integrators and more than 1,300 independent software vendors, along with millions of other businesses of all sizes and shapes trying to take advantage of IT in the growing economy. They all want to take advantage of zero-capital cloud infrastructure services. Moreover, Indian regulators are unpredictable, and they recommend that Indian businesses store data inside the country's borders. Clearly, India is desperately looking for a home-grown infrastructure services provider.

Tata Communications, India's leading service provider, announced an expansion of its cloud offerings, making it the first comprehensive India-based cloud solution available to customers. In addition to its existing services, the company added Instacompute and Instaoffice to its list of offerings. These new services mark the company's expansion in the cloud space to deliver self-service, pay-as-you-use IT application and data center infrastructure services.

3.2.2.1 Tata Communications offers India’s home-grown infrastructure services.

The long wait for many users in India is over. By offering a metered, on-demand service like Amazon EC2, Tata Communications is catering to the needs of both SMEs and big enterprises. Their new compute and storage service, called Instacompute, comes with free load balancers, a firewall, a public IP address and more. They have an aggressive pricing strategy, which could potentially help them compete with Amazon Web Services in the international market.

Their other offering, called Instaoffice, is just a repackaging of Google Apps. From Tata Communications' point of view, this allows them to sell a full cloud stack to their customers. For Google, Tata is a good channel partner who can help push Google Apps deeper into the Indian market. So there is nothing particularly interesting in this part of the announcement.

The positive and negative

The positive side of this announcement is that Indian customers get an infrastructure service offering hosted in India. It not only spares them the problem of dealing with foreign exchange issues, but also helps them comply with any government regulations, present and future. The aggressive pricing will also bring substantial cost savings. The fact that the service is offered by a well-established Indian telecom provider adds some credibility to the offerings. The biggest hit will be on the small Indian web hosts who offer shared hosting and dedicated servers to the local market, followed by Amazon EC2 (assuming it has a decent-sized customer base from India).

The negative side comes from the international angle. During the outsourcing boom, there was much well-publicized news about employees of outsourcing firms abusing their customers' business data. The possibility of such insider security and privacy problems may hinder growth in the international market. Moreover, there is always the danger of the Indian government forcing its way into users' data, and this threat will also keep international customers away from Tata Communications' offering.

3.3 Cloud Application Architectures

Cloud computing is a relatively new way of referring to the use of shared computing resources, and it is an alternative to having local servers handle applications. Cloud computing groups together large numbers of compute servers and other resources and typically offers their combined capacity on an on-demand, pay-per-cycle basis. The end users on a cloud computing network usually have no idea where the servers are physically located.

Cloud computing is fully enabled by virtualization technology (hypervisors) and virtual appliances. A virtual appliance is an application that is bundled with all the components that it needs to run, along with a streamlined operating system. In a cloud computing environment, a virtual appliance can be instantly provisioned and decommissioned as needed, without complex configuration of the operating environment.

This flexibility is the key advantage to cloud computing and what distinguishes it from other forms of grid or utility computing and software as a service (SaaS). The ability to launch new instances of an application with minimal labor and expense allows application providers to:

• Scale up and down rapidly

• Recover from a failure

• Bring up development or test instances

• Roll out new versions to the customer base

• Efficiently load test an application

Once you've decided to offer your application in a cloud computing environment, it is important to avoid the "success disaster": when your application becomes popular overnight, it may crash under the unanticipated load. Designing the application for the cloud at the outset, so that it takes maximum advantage of the cloud environment, is therefore important.

3.3.1 Architectural Considerations

Designing an application to run as a virtual appliance in a cloud computing environment is very different from designing it for an on-premises or SaaS deployment. To be successful in the cloud, an application must be designed to scale easily, tolerate failures and include management tools. We discuss each of these considerations below.

Scale

Cloud computing offers the potential for nearly unlimited scalability, as long as the application is designed to scale from the outset. The best way to ensure this is to follow the basic application design guidelines discussed below:

Start simple

Avoid complex designs, performance enhancements and optimizations in favor of simplicity. It's a good idea to start with the simplest application and rely on the scalability of the cloud to provide enough servers to ensure good application performance. Once you've gained some traction and demand increases, you can then focus on improving the efficiency of your application, which allows you to serve more users with the same number of servers or to reduce the number of servers while maintaining performance. Common design techniques to improve performance include caching, server affinity, multi-threading and tight sharing of data, but they all make it more difficult to distribute your application across many servers. This is why you don't want to introduce them at the outset; only consider them when you need to and can ensure that you are not breaking horizontal scalability.

Split application functions and couple loosely

Use separate systems for different pieces of application functionality and avoid synchronous connections between them. Then, as demand grows, you can scale each piece independently instead of having to scale the entire application when you hit a bottleneck. The separation and reusability of functions inherent in SOA make it an ideal architecture for the cloud.
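As a minimal sketch of loose coupling, the snippet below uses Python's standard queue module to stand in for a real network message broker: the front end hands work to an independent billing worker through a queue rather than calling it synchronously, so each piece can later be scaled (or fail) on its own.

    # Sketch: two application functions coupled only through a queue.
    # In production the queue would be a network message broker; the
    # standard-library Queue here only illustrates the asynchronous boundary.
    import queue
    import threading

    orders = queue.Queue()

    def web_frontend():
        """Accepts an order and returns immediately; no synchronous call to billing."""
        orders.put({"order_id": 42, "amount": 19.99})
        print("order accepted")

    def billing_worker():
        """Runs independently of the front end; simply drains the queue."""
        while True:
            order = orders.get()
            print("charging order", order["order_id"])
            orders.task_done()

    threading.Thread(target=billing_worker, daemon=True).start()
    web_frontend()
    orders.join()   # wait until the worker has processed the queued order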

Network communication

Design the application to use network-based interfaces and not interprocess communication or file-based communication paradigms. This allows you to effectively scale in the cloud because each piece of the application can be separated into distinct systems.
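A minimal sketch of a network-based interface follows, using Python's built-in http.server purely for illustration (a real deployment would use a production web server or framework). Because the pricing component is reached over HTTP rather than through in-process or file-based communication, it can be moved onto its own servers without changing its callers.

    # Sketch: expose one piece of the application over the network so it can
    # be separated onto distinct systems. Standard library only; the route
    # and response shape are illustrative.
    from http.server import BaseHTTPRequestHandler, HTTPServer
    import json

    class PriceHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            body = json.dumps({"sku": self.path.strip("/"), "price": 9.99}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8080), PriceHandler).serve_forever()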

Deployment Cluster

Rather than scaling a single system up to serve all users, consider splitting your system into multiple smaller clusters, each serving a fraction of the application load. This is often called "sharding," and many web services can be split up along one dimension, typically users or accounts. Requests can then be directed to the appropriate cluster based on some request attribute, or users can be redirected to a specific cluster at login (see the routing sketch below). To deploy a clustered system, determine the collection of servers that yields efficient application performance, taking any needed functional redundancy into account: for example, two web, four application and two database servers. You can then scale the application by replicating the ideal cluster size and splitting the system load across the servers in the clusters.
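Here is a minimal sketch of that routing step, assuming each request carries a user ID and each cluster has its own entry point (the cluster addresses are placeholders): hashing the user ID picks a cluster deterministically, so every cluster serves a fixed slice of the user base.

    # Sketch: direct each request to one of several smaller clusters
    # ("shards") based on a request attribute, here the user ID.
    # The cluster addresses are placeholders for illustration.
    import hashlib

    CLUSTERS = [
        "https://cluster-0.example.com",
        "https://cluster-1.example.com",
        "https://cluster-2.example.com",
    ]

    def cluster_for(user_id):
        # Stable hash, so the same user always lands on the same cluster.
        digest = hashlib.sha256(user_id.encode()).hexdigest()
        return CLUSTERS[int(digest, 16) % len(CLUSTERS)]

    print(cluster_for("alice"))
    print(cluster_for("bob"))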

The advantages of cloud computing when it comes to scalability are:

Inexpensive testing

Testing can be done against a test cluster without risking the performance or integrity within the production system. You can also test the upper limits of the ideal cluster’s performance by using "robot users" in the cloud to generate load.
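A sketch of such "robot users" is shown below; the target URL is a placeholder for a test cluster. Fifty concurrent users each issue a burst of requests so you can observe how the cluster behaves as it approaches its limits.

    # Sketch: generate load against a test cluster with concurrent "robot
    # users". The URL is a placeholder for your own test deployment.
    from concurrent.futures import ThreadPoolExecutor
    import time
    import urllib.request

    TARGET = "http://test-cluster.example.com/health"   # placeholder

    def robot_user(n_requests):
        start = time.time()
        for _ in range(n_requests):
            urllib.request.urlopen(TARGET, timeout=5).read()
        return time.time() - start

    with ThreadPoolExecutor(max_workers=50) as pool:
        durations = list(pool.map(robot_user, [20] * 50))   # 50 users x 20 requests
    print("slowest robot user finished in %.1f s" % max(durations))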

Reduced risk

Bring up a test instance of the cluster to prove a new code base and roll out a new version with one cluster at a time. Fall back to an older version if the new version doesn’t work, without disrupting current users.

Ability to segment the customer base

Use clusters to separate customers with varying demands, such as a large customer who wants a private instance of the application, or one who requires extensive customizations.

Auto-scaling based on application load

With the ready availability of resources, applications can be built to recognize when they are reaching the limits of their current configuration and automatically bring up new resources.
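A skeletal sketch of that idea follows. The load metric and the provisioning call are stand-ins for whatever your monitoring system and cloud provider actually expose; the 80% threshold is an assumed policy.

    # Sketch of auto-scaling: watch a load metric and add capacity when the
    # application nears the limits of its current configuration. The load
    # reading and provisioning call are placeholders.
    import random
    import time

    SCALE_UP_THRESHOLD = 0.80   # assumed policy: add a server above 80% load
    instances = 2               # servers currently running

    def current_load():
        # Placeholder: in reality, read this from your monitoring system.
        return random.uniform(0.5, 1.0)

    def launch_instance():
        # Placeholder: in reality, call your cloud provider's provisioning API.
        global instances
        instances += 1
        print("scaling up, now running", instances, "instances")

    for _ in range(10):         # a real loop would poll every minute or so
        if current_load() > SCALE_UP_THRESHOLD:
            launch_instance()
        time.sleep(0.1)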

Fail

Inevitably, an application will fail, no matter what its environment. When you design an on-premises or SaaS application, you typically consider several "doomsday" scenarios.

The same must be true when designing an application that runs in the cloud. Build in resiliency and fault tolerance: to tolerate failures, applications must operate as part of a group while not being too tightly coupled to their peers. Each piece of the application should be able to continue to execute despite the loss of other functions. Asynchronous interfaces are an ideal mechanism to help application components tolerate failures or momentary unavailability of other components.
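One common way to tolerate a peer's momentary unavailability is to retry briefly and then fall back to a degraded answer, as in this illustrative sketch; the recommendation call and its empty-list fallback are hypothetical.

    # Sketch: keep serving even when a dependent component is momentarily
    # unavailable, by retrying with backoff and then degrading gracefully.
    # fetch_recommendations() stands in for a call to another component.
    import time

    def fetch_recommendations(user_id):
        raise ConnectionError("recommendation service unreachable")  # simulated outage

    def recommendations_with_fallback(user_id, retries=3):
        delay = 0.1
        for _ in range(retries):
            try:
                return fetch_recommendations(user_id)
            except ConnectionError:
                time.sleep(delay)
                delay *= 2                 # exponential backoff
        return []                          # degraded but functional answer

    print(recommendations_with_fallback("alice"))   # prints [] instead of crashing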

Distribute the impact of failure:

With a distributed cloud application, a failure in any one application cluster affects only a portion of the application and not the whole application. By spreading the load across multiple clusters in the cloud, you can isolate the individual clusters against failure in another cluster.

Get back up quickly

Automate the launching of new application clusters in order to recover quickly. Application components must be able to come up in an automated fashion, configure themselves and join the application cluster. Cloud computing provides the ideal environment for this fast start-up and recovery process.

Data considerations

When an application fails, data persistence and system state cannot be taken for granted. To ensure data preservation, keep all data on persistent storage and make sure it is replicated and distributed. System state should also be stored so that it can be used during the recovery process and the system can be restarted from the point of failure.
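A minimal sketch of that idea: the application periodically writes its state to persistent storage so a replacement instance can resume from the last checkpoint. The file path and state shape are illustrative; in a real deployment the checkpoint would live on replicated storage rather than a local file.

    # Sketch: checkpoint system state so a restarted instance can resume
    # from the point of failure. The path and state fields are illustrative.
    import json
    import os

    CHECKPOINT = "app_state.json"   # would be replicated storage in production

    def save_state(state):
        tmp = CHECKPOINT + ".tmp"
        with open(tmp, "w") as f:
            json.dump(state, f)
        os.replace(tmp, CHECKPOINT)   # atomic swap avoids torn writes

    def load_state():
        if os.path.exists(CHECKPOINT):
            with open(CHECKPOINT) as f:
                return json.load(f)
        return {"last_processed_id": 0}   # fresh start

    state = load_state()
    state["last_processed_id"] += 1       # ... do some work ...
    save_state(state)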

Test your "doomsday" scenario: Cloud computing makes it easy to bring up an instance of your application to test various failure scenarios. Because of the flexible nature of cloud computing, it is possible to simulate many different failure scenarios at a very reasonable cost. Single instances of a system can be taken off-line to see how the rest of the application will respond. Likewise, multiple recovery scenarios can be planned and executed ahead of any real production failure.

Be aware of the real cost of failure:

Of course, the ideal situation is avoiding any application failure but what is the cost to provide that assurance? A large internet company once said that they could tolerate failure as long as the impact was small enough so that it is not noticeable to the overall customer base.

Manage

Deploying cloud applications as virtual appliances makes management significantly easier. The appliances should bring with them all the software they need for their entire life cycle in the cloud. More important, they should be built in a systematic way, akin to assembly-line production, in contrast to a hand-crafted approach. The reason for this systematic approach is consistency in creating and re-creating images.

When building appliances, it is obvious that they should contain the operating system and any middleware components they need. Less obvious are the software packages that allow them to configure themselves automatically, monitor and report their state back to a management system, and update themselves in an automated fashion. Automating appliance configuration and updates means that as the application grows in the cloud, the management overhead does not grow in proportion. In this way appliances can live inside the cloud for any length of time with minimal management overhead.
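As an illustrative sketch, an appliance might report its state to a management endpoint on a fixed interval; the endpoint URL and payload fields below are hypothetical.

    # Sketch: a virtual appliance periodically reports its state to a
    # management system so instances can be tracked, migrated or shut down.
    # The endpoint and payload fields are hypothetical.
    import json
    import socket
    import time
    import urllib.request

    MANAGEMENT_URL = "http://management.example.com/heartbeat"   # placeholder

    def send_heartbeat():
        payload = json.dumps({
            "host": socket.gethostname(),
            "status": "ok",
            "timestamp": time.time(),
        }).encode()
        req = urllib.request.Request(
            MANAGEMENT_URL, data=payload,
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(req, timeout=5)

    while True:
        send_heartbeat()   # a real appliance would catch and log failures here
        time.sleep(60)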

When appliances are instantiated in the cloud, they should also plug into a monitoring and management system. This system allows you to track application instances running in the cloud, migrate or shut down instances as needed, and gather logs and other system information necessary for troubleshooting or auditing. Without a management system to handle the virtual appliances, the application is likely to slowly sprawl across the cloud, wasting resources and money.

Your management system also plays an important role in the testing and deployment process. We've already highlighted how the cloud can be used for everything from general testing to load testing to testing for specific failure scenarios. Including testing in your management system allows you to bring up a test cluster, conduct any testing that is required and then migrate the tested application into production. The uniform resources that underlie the cloud mean that you can achieve a rapid release-to-production process, allowing you to deliver updated features and functions to your customers faster.

Finally, by automating the creation and management of these appliances, you are tackling one of the most difficult and expensive problems in software today: variability. By producing a consistent appliance image and managing it effectively, you remove variability from the release management and deployment process. Reducing variability reduces the chance of mistakes that can cost you money.

The advantages of designing your application for management in the cloud include:

• Reducing the cost and overhead of preparing the application for the cloud

• Reducing the overhead of bringing up new instances of the application

• Eliminating application sprawl

• Reducing the chance for mistakes as the application is scaled out, failed over, upgraded, etc.


