Regions and Availability Zones


EC2 and S3 (and a number of other services; see Figure 3-1) are organized in regions. All regions provide more or less the same services, and everything we talk about in this chapter applies to all the available AWS regions.

Figure 3-1. Overview of some of AWS services

A region is comprised of two or more availability zones (Figure 3-2), each zone consisting of one or more distinct data centers. Availability zones are designed to shield our infrastructures from physical harm, like thunderstorms or hurricanes. If one data center is damaged, you should be able to use another one by switching to another availability zone. Availability zones are, therefore, important in making your apps resilient and reliable.

Route 53: Domain Name System Service

If you register a domain, you often get the Domain Name System service for free. Your registrar will give you access to a web application where you can manage your records. This part of your infrastructure is often overlooked, but it is a notoriously weak spot: if it fails, no one will be able to reach your site. And it is often outside of your control.

8 | Chapter 3: Crash Course in AWS

Figure 3-2. AWS regions and edge locations

There were three or four high-quality commercial DNS services before AWS introduced Route 53. The features of all of these DNS services are more or less similar, but the prices can vary enormously. Route 53 changed this market: it offered the basic features for a fraction of the price of competing offerings.

But Route 53 is different in its approach. It views DNS as a dynamic part of your software, which you can use for things like failover or application provisioning. Services like RDS and ElastiCache rely heavily on Route 53 (behind the scenes, for the most part). Just as AWS does, we often rely on the programmatic nature of Route 53. As you will see in later chapters, we will implement failover strategies with relative ease.

Not all software is ready for the dynamic nature of DNS. The assumption often is that DNS records hardly ever change. Such systems adopt an aggressive caching mechanism (they simply never resolve domain names again for the duration of the execution) that breaks when underlying IP addresses change.

Route 53 is a very important tool at our disposal!

IAM (Identity and Access Management)

IAM is exactly what it says it is. It lets you manage identities that can be allowed (or denied) access to AWS resources. Access is granted on services (API actions) or resources (things like S3 buckets or EC2 instances). Access can be organized by users and groups. Both users and groups have permissions assigned to them by way of policies. The user's credentials are used to authenticate with the AWS web services. A user can belong to zero or more groups.


You can use IAM to give access to people, and you can use it to give access to particular components. For example, an elasticsearch EC2 instance (more about this in Chapter 5) only needs restricted read access on the EC2 API to "discover" the cluster, and it needs restricted read/write access on a particular S3 bucket (a sort of folder) for making backups.

Access is granted in policies. For example, the following policy allows access to all EC2 API operations starting with Describe, on all resources; in effect, a read-only policy for EC2:

{
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "EC2:Describe*",
            "Resource": "*"
        }
    ]
}
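IAM policies are plain JSON documents, so they are easy to generate and inspect from code. A minimal sketch of building the policy above (the helper is our own illustration, not part of any AWS library):

```python
import json

def ec2_read_only_policy():
    """Build the read-only EC2 policy shown above as a plain dict."""
    return {
        "Statement": [
            {
                "Effect": "Allow",
                "Action": "EC2:Describe*",
                "Resource": "*",
            }
        ]
    }

# Serialize it exactly as you would paste it into the IAM console.
document = json.dumps(ec2_read_only_policy(), indent=4)
```

Because the policy is just data, you can assemble many such documents programmatically and keep them under version control alongside the rest of your infrastructure.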

IAM is VERY important

This service is very, very important. It not only protects you from serious exposure in case of security breaches; it also protects you from inadvertent mistakes or bugs. If you only have privileges to work on one particular S3 bucket, you can do no harm to the rest.

IAM has many interesting features, but two deserve to be mentioned explicitly. Multi-Factor Authentication (MFA) adds a second authentication step to particular operations. Just assuming a particular identity is not enough; you are prompted for a dynamic security code, generated by a physical or virtual device that you own, before you can proceed.

The second feature that needs to be mentioned explicitly is that you can add a role to an EC2 instance. The role's policies will then determine all the permissions available from that instance. This means that you no longer need to do a lot of work rotating (replacing) access credentials, something that is a tedious-to-implement security best practice.


The Basics: EC2, RDS, ElastiCache, S3, CloudFront, SES, and CloudWatch

The basic services of any IaaS (Infrastructure as a Service) are compute and storage. AWS offers compute as EC2 (Elastic Compute Cloud) and storage as S3 (Simple Storage Service). These two services are the absolute core of everything that happens on Amazon AWS.

RDS (Relational Database Service) is "database as a service," hiding many of the difficulties of databases behind a service layer. It is built with EC2 and S3.

CloudFront is the CDN (Content Distribution Network) AWS offers. It helps you distribute static, dynamic, and streaming content to many places in the world.

Simple Email Service (SES) helps you send email, even in very large batches. We just always use it, because it is reliable and has very high deliverability (spam is not solved by Outlook or Gmail alone).

We grouped the services like this because these are the basic services for a web application: we have computing, storage, relational database services, content delivery, and email sending. So, bear with us, here we go…

CloudWatch

CloudWatch is AWS's own monitoring solution. All AWS services come with metrics on resource utilization. An EC2 instance has metrics for CPU utilization, network, and IO. In addition to those metrics, an RDS instance also reports metrics on memory and disk usage.

CloudWatch has its own tab in the console, and from there you can browse metrics and look at measurements over periods of up to two weeks. You can look at multiple metrics at the same time, comparing patterns of utilization.

You can also add your own custom metrics. For example, if you build your own managed solution for MongoDB, you can add custom metrics for all sorts of operational parameters, as we will see in Chapter 7. Figure 3-3 shows a chart of the "resident memory" metric in a MongoDB replica set.
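A custom metric is just a named value in a namespace, optionally tagged with dimensions. The sketch below only assembles such a data point as a dict; the field names follow CloudWatch conventions, but the helper itself is our own illustration, and actually sending the datum still requires a call to the CloudWatch API:

```python
from datetime import datetime, timezone

def metric_datum(namespace, name, value, unit="None", dimensions=None):
    """Assemble one custom metric data point in a CloudWatch-like shape."""
    return {
        "Namespace": namespace,
        "MetricName": name,
        "Value": float(value),
        "Unit": unit,
        "Timestamp": datetime.now(timezone.utc).isoformat(),
        # Dimensions let you slice the metric, e.g. per replica set member.
        "Dimensions": [
            {"Name": k, "Value": v} for k, v in (dimensions or {}).items()
        ],
    }

datum = metric_datum("MongoDB", "ResidentMemory", 1024,
                     unit="Megabytes", dimensions={"ReplicaSet": "rs0"})
```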


Figure 3-3. Showing some MongoDB-specific metrics using CloudWatch

EC2 (et al.)

To understand EC2 (Figure 3-4) you need to be familiar with a number of concepts:

• Instance
• Image (Amazon Machine Image, AMI)
• Volume and Snapshot (EBS and S3)
• Security Group
• Elastic IP

There are other concepts we will not discuss here, like Virtual Private Cloud (VPC), which has some features (such as multiple IP addresses and flexible networking) that can help you make your application more resilient and reliable. But some of these concepts can be implemented with other AWS services like IAM or Route 53.


Figure 3-4. Screenshot from AWS Console, EC2 Dashboard

Instance

An instance is a server, nothing more and nothing less. Instances are launched from an image (an AMI) into an availability zone. There are S3-backed instances, a kind of ephemeral storage in which the root device is part of the instance itself. (Instances launched from an S3-backed AMI cannot be stopped and started; they can only be restarted or terminated.) EBS-backed instances, which are more the norm now, provide block-level storage volumes that persist independently of the instance (that is, the root/boot disk is on a separate EBS volume, allowing the instance to be stopped and started). See "Volume and snapshot (EBS and S3)".

Dependencies

EBS still has a dependency on S3 (when a new volume is created from an existing S3 snapshot). Even though this dependency is extremely reliable, it might not be a good idea to increase dependencies.

Instances come in types (sizes). Types used to be restricted to either 32-bit or 64-bit operating systems, but since early 2012 all instance types are capable of running 64-bit. We work mainly with Ubuntu, and we mostly run 64-bit now. These are the instance types:


Small Instance (m1.small) – default
    1.7 GB memory, 1 EC2 Compute Unit

Medium Instance (m1.medium)
    3.75 GB memory, 2 EC2 Compute Units

Large Instance (m1.large)
    7.5 GB memory, 4 EC2 Compute Units

Extra Large Instance (m1.xlarge)
    15 GB memory, 8 EC2 Compute Units

Micro Instance (t1.micro)
    613 MB memory, up to 2 EC2 Compute Units (for short periodic bursts)

High-Memory Extra Large Instance (m2.xlarge)
    17.1 GB memory, 6.5 EC2 Compute Units

High-Memory Double Extra Large Instance (m2.2xlarge)
    34.2 GB memory, 13 EC2 Compute Units

High-Memory Quadruple Extra Large Instance (m2.4xlarge)
    68.4 GB memory, 26 EC2 Compute Units

High-CPU Medium Instance (c1.medium)
    1.7 GB memory, 5 EC2 Compute Units

High-CPU Extra Large Instance (c1.xlarge)
    7 GB memory, 20 EC2 Compute Units

The micro instance is "fair use." You can burst CPU for short periods of time, but when you misbehave and use too much, your CPU capacity is capped for a certain amount of time.

For higher requirements, such as high performance computing, there are also cluster type instances, with increased CPU and network performance, including one with graphics processing units. Recently Amazon also released high I/O instances, which give very high storage performance by using SSD (Solid State Drive) devices.

At launch, an instance can be given user data. User data is exposed on the instance through a locally accessible web service. In the bash Unix shell, we can get the user data as follows (in this case JSON). The output is an example from the MongoDB setup we will explain in Chapter 7, so don't worry about it for now:


$ curl --silent http://169.254.169.254/latest/user-data/ | python -mjson.tool
{
    "name": "mongodb",
    "size": 100,
    "role": "active"
}

Almost all information about the instance is exposed through this interface. You can learn the private IP address, the public hostname, etc.:

$ curl --silent http://169.254.169.254/latest/meta-data/public-hostname
ec2-46-137-11-123.eu-west-1.compute.amazonaws.com
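The same lookups are easy to script. A minimal sketch (the helper names are our own, and the metadata endpoint only answers from inside a running instance):

```python
import json
from urllib.request import urlopen

# This address is only reachable from within an EC2 instance.
METADATA_BASE = "http://169.254.169.254/latest"

def fetch(path, base=METADATA_BASE, timeout=2):
    """Return the raw body of a metadata or user-data path."""
    with urlopen(f"{base}/{path}", timeout=timeout) as resp:
        return resp.read().decode()

def parse_user_data(raw):
    """User data is free-form; here we assume the JSON document shown above."""
    doc = json.loads(raw)
    return doc["name"], doc["size"], doc["role"]
```

On an instance, `parse_user_data(fetch("user-data/"))` would then yield the name, size, and role that the launch configuration passed in.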

Image (AMI, Amazon Machine Image)

An AMI is a bit like a boot CD. You launch an instance from an AMI. There are 32-bit AMIs and 64-bit AMIs. Anything that runs on the Xen hypervisor can run on Amazon AWS and thus be turned into an AMI.

There are ways to make AMIs from scratch. These days that is not necessary unless you are Microsoft, Ubuntu, or you want something extremely special. We could also launch an Ubuntu AMI provided by Canonical, change the instance, and make our own AMI from that.

AMIs are cumbersome to work with, but they are the most important raw ingredient of your application infrastructures. AMIs just need to work. And they need to work reliably, always giving the same result. There is no simulator for working with AMIs, except EC2 itself. (EBS-backed AMIs are so much easier to work with that we almost forgot S3-backed AMIs still exist.)

If you use AMIs from third parties, make sure to verify their origin (which is now easier to do than before).

Volume and snapshot (EBS and S3)

EBS (Elastic Block Store) is one of the more interesting inventions of AWS. It was introduced to provide persistent local storage, because S3 (Simple Storage Service) was not enough to work with.

Basically, EBS offers disks, or volumes, between 1 GB and 1 TB in size. A volume resides in an availability zone, and can be attached to one (and only one) instance. An EBS volume can have a point-in-time snapshot taken, from which the volume can be restored. Snapshots are regional, but not bound to an availability zone.

If you need disks (local storage) that are persistent (though you have to make your own backups), you use EBS.


EBS is a new technology. As such, it has seen its fair share of difficulties. But it is very interesting and extremely versatile. See the coming chapters (the chapter on Postgres in particular) for how we capitalize on the opportunities EBS gives.

Security group

Instances are part of one or more security groups. With these security groups, you can shield instances from the outside world. You can expose them on only certain ports or port ranges, or for certain IP masks, as you would do with a firewall. You can also restrict access to instances that are inside specific security groups.

Security groups give you a lot of flexibility to selectively expose your assets.
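The matching semantics can be sketched with a toy model (our own illustration, not the EC2 API): a rule pairs a port range with a CIDR mask, and traffic is let through only if at least one rule matches.

```python
from dataclasses import dataclass
from ipaddress import ip_address, ip_network

@dataclass
class Rule:
    port_from: int
    port_to: int
    cidr: str  # e.g. "0.0.0.0/0" for everyone, "10.0.0.0/8" for internal

def allowed(rules, source_ip, port):
    """A packet passes only if some ingress rule covers its port and source."""
    return any(
        r.port_from <= port <= r.port_to
        and ip_address(source_ip) in ip_network(r.cidr)
        for r in rules
    )

# A typical web server group: HTTP open to the world, SSH only internally.
web = [Rule(80, 80, "0.0.0.0/0"), Rule(22, 22, "10.0.0.0/8")]
```

With such rules, port 80 is reachable from anywhere while port 22 only answers to the internal network, which is exactly the kind of selective exposure described above.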

VPC

VPC (Virtual Private Cloud) offers much more functionality in its security groups. For example, it is not possible to restrict outgoing connections in normal security groups. With VPC you can control both incoming and outgoing connections.

Elastic IP

Instances are automatically assigned a public IP address. This address changes with every instance launch. If you need to reach a particular part of your application through an address that doesn't change, you can use an elastic IP (EIP). You can associate and dissociate elastic IPs from instances, manually in the console or through the API.

Route 53 makes elastic IPs almost obsolete. But many software packages do not yet gracefully handle DNS changes. If this is the case, using an elastic IP might help you.

RDS

Amazon's RDS (Relational Database Service) now comes in three different flavors: MySQL, Oracle, and Microsoft SQL Server. You can basically run one of these databases, production ready and commercial grade. You can scale up and down in minutes. You can grow storage without service interruption. And you can restore your data up to 31 days back.

The maximum storage capacity is 1 TB. Important metrics are exposed through CloudWatch. In Figure 3-5 you can see, for example, the CPU utilization of an instance. This service will be explained in more detail later.

The only thing RDS doesn't do for you is optimize your schema!


Figure 3-5. Screenshot from the AWS Console, showing CPU utilization of an RDS instance

ElastiCache

This is like RDS for memcached, an object caching protocol often used to relieve the database and/or speed up sites and apps. This technology is not very difficult to run, but it does require close monitoring. Before ElastiCache, we always ran it by hand, replacing instances when they died.

ElastiCache adds the ability to easily grow or shrink a memcached cluster. Unfortunately, you can't easily change the type of the instances used. But more importantly, ElastiCache manages failure: if a node fails to operate, it will replace it.

As with other services, it exposes a number of operational metrics through CloudWatch. These can be used for capacity planning, or to understand other parts of your system's behavior.

S3/CloudFront

S3 stands for Simple Storage Service. This is probably the most revolutionary service AWS offers at this moment. S3 allows you to store an unlimited amount of data. If you do not delete your objects yourself, it is almost impossible for them to be corrupted or lost entirely: S3 offers 99.999999999% durability.

You can create buckets in any of the regions. And you can store an unlimited number of objects per bucket, each with a size between 1 byte and 5 TB.


S3 is reliable storage exposed through a web service. For many things this is fast enough, but not for the static assets of websites or mobile applications. For those assets, AWS introduced CloudFront, a CDN (Content Distribution Network).

CloudFront can expose an S3 bucket, or it can be used with what AWS calls a custom origin (another site). On top of S3, CloudFront distributes the objects to edge locations all over the world, so latency is reduced considerably. Apart from getting them closer to the users, it offloads some of the heavy lifting your application or web servers used to do.

SES

Sending email in a way that it actually arrives is getting more and more difficult. On AWS you can have your elastic IP whitelisted automatically, but that still requires operating an MTA (Mail Transfer Agent) like Postfix. With Amazon SES (Simple Email Service) this has all become much easier.

After signing up for the service, you have to practice a bit in the sandbox before you can request production access. It might take a while before you earn the right to send a significant volume. But if you use SES from the start, you will have no problems when your service takes off.

Growing Up: ELB, Auto Scaling

Elasticity is still the promise of "The Cloud." If the traffic increases, you get yourself more capacity, only to release it when you don't need it anymore. The game is to increase utilization, often measured in terms of CPU utilization. The other way of seeing it is to decrease waste, and be more efficient.

AWS has two important services to help us with this. The first is ELB, or Elastic Load Balancer. The second is Auto Scaling.

ELB (Elastic Load Balancer)

An ELB sits in front of a group of instances. You can reach an ELB through a hostname. Or, with Route 53, you can have your records resolve directly to the IP addresses of the ELB.

An ELB can distribute any kind of TCP traffic, and it also distributes HTTP and HTTPS. The ELB will terminate HTTPS and talk plain HTTP to the instances. This is convenient, and reduces the load on the instances behind it.


Traffic is evenly distributed across one or more availability zones, which you configure in the ELB. Remember that every EC2 instance runs in a particular availability zone. Within an availability zone, the ELB distributes the traffic evenly over the instances. It has no sophisticated (or complicated) routing policies. Instances are either healthy, determined with a configurable health check, or not. A health check could be something like pinging /status.html over HTTP every half a minute, where a response status of 200 means the instance is healthy.

ELBs are a good alternative to elastic IPs. ELBs cost some money, in contrast to elastic IPs (which are free while they are associated with an instance), but ELBs increase security and reduce the complexity of the infrastructure. You can use an auto scaling group (see below) to automatically register and unregister instances, instead of managing elastic IP attachments yourself.

ELBs are versatile and the features are fine, but they are still a bit immature, and the promise of surviving availability zone problems is not always met. It is not always the case that when one availability zone fails, the ELB keeps running normally in the other availability zones. We choose to work with AWS to improve this technology, instead of building (and maintaining) something ourselves.

Auto Scaling

If you want an elastic group of instances that resizes based on demand, you want Auto Scaling. This service helps you coordinate these groups of instances.

Auto Scaling launches and terminates instances based on CloudWatch metrics. For example, you can use the average CPU utilization (or any of the other instance metrics available) of the instances in the group itself. You could configure your group so that every time the average CPU utilization of your group is over 60% for a period of 5 minutes, it will launch two new instances. If it goes below 10%, it will terminate two instances. You can make sure the group is never empty by setting the minimum size to two.
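The scaling rule described above is easy to sketch as a plain function (our own illustration; real Auto Scaling policies are configured through the API, not written as code like this):

```python
def desired_change(avg_cpu, current_size, min_size=2,
                   scale_out_at=60.0, scale_in_at=10.0, step=2):
    """Return how many instances to add (positive) or remove (negative):
    +2 when average CPU exceeds 60%, -2 when it drops below 10%,
    never shrinking the group under its minimum size."""
    if avg_cpu > scale_out_at:
        return step
    if avg_cpu < scale_in_at:
        # Clamp the removal so the group never goes below min_size.
        return max(min_size - current_size, -step)
    return 0
```

So a group of four instances running hot gains two, an idle group of four loses two, and an idle group of three loses only one, because the minimum of two is respected.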

You can resize the group based on any CloudWatch metric available. When using SQS (see below) for a job queue, you can grow and shrink the group's capacity based on the number of items in that queue. And you can also use CloudWatch custom metrics. For example, you could create a custom metric for the number of connections to NGINX or Apache, and use that to determine the desired capacity.

Auto Scaling ties in nicely with ELBs, as they can register and unregister instances automatically. At this point, this mechanism is still rather blunt: instances are first removed and terminated before a new one is launched and has the chance to become "in service."

Decoupling: SQS, SimpleDB & DynamoDB, SNS, SWF

The services we have discussed so far are great for helping you build a good web application. But when you reach a certain scale, you will require something else.

If your app starts to get so big that your individual components can't handle it any more, there is only one solution left: to break your app into multiple smaller apps. This method is called decoupling.

Decoupling is very different from sharding. Sharding is horizontal partitioning across instances, and it can help you in certain circumstances, but it is extremely difficult to do well. If you feel the need for sharding, look around. With different components (DynamoDB, Cassandra, elasticsearch, etc.) and decoupling, you are probably better off not sharding.

Amazon travelled down this path before. The first service to see the light was SQS, the Simple Queue Service. Later, other services followed, like SimpleDB and SNS (Simple Notification Service). And only recently (early 2012) they introduced SWF, the Simple Workflow Service.

These services are like the glue of your decoupled system: they bind the individual apps or components together. They are designed to be very reliable and scalable, for which they had to make some tradeoffs. But at scale you have different problems to worry about.

If you consider growing beyond the relational database model (either in scale or in features), DynamoDB is a very interesting alternative. You can provision your DynamoDB database to handle insane amounts of transactions. It does require some administration, but that is completely negligible compared to building and operating your own Cassandra cluster or MongoDB replica set (see Chapter 7).


SQS (Simple Queue Service)

In the SQS Developer Guide, you can read that "Amazon SQS is a distributed queue system that enables web service applications to quickly and reliably queue messages that one component in the application generates to be consumed by another component. A queue is a temporary repository for messages that are awaiting processing" (Figure 3-6).

Figure 3-6. Some SQS queues shown in the AWS Console

And that's basically all it is. You can have many writers hitting a queue at the same time. SQS does its best to preserve order, but the distributed nature makes it impossible to guarantee this. If you really need to preserve order, you can add your own identifier as part of the queued messages, but approximate order is probably enough to work with in most cases. A trade-off like this is necessary in massively scalable services like SQS. This is not very different from eventual consistency, as is the case in S3 and in SimpleDB.

In addition to many writers hitting a queue at the same time, you can also have many readers, and SQS guarantees each message is delivered at least once (more than once if the receiving reader doesn't delete it from the queue). Reading a message is atomic; locks are used to keep multiple readers from processing the same message. Because you can't assume a message will be processed successfully and deleted, SQS first sets it to invisible. This invisibility has an expiration, called the visibility timeout, that defaults to thirty seconds. After processing the message, it must be deleted explicitly (if successful, of course). If it's not deleted and the timeout expires, the message shows up in the queue again. If 30 seconds is not enough, the timeout can be configured on the queue or per message, although the recommended way is to use different queues for different visibility timeouts.
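The visibility-timeout mechanics can be sketched with a toy in-memory queue (our own model of the semantics, not the SQS API):

```python
class ToyQueue:
    """Toy model of SQS receive/delete semantics: a received message turns
    invisible for `visibility_timeout` seconds and reappears unless it is
    deleted in time. `now` is an explicit clock to keep the model simple."""

    def __init__(self, visibility_timeout=30):
        self.visibility_timeout = visibility_timeout
        self.messages = {}      # message id -> (body, invisible_until)
        self._next_id = 0

    def send(self, body):
        self.messages[self._next_id] = (body, 0)
        self._next_id += 1

    def receive(self, now):
        """Hand out one visible message and hide it for the timeout period."""
        for mid, (body, invisible_until) in self.messages.items():
            if now >= invisible_until:
                self.messages[mid] = (body, now + self.visibility_timeout)
                return mid, body
        return None

    def delete(self, mid):
        """A successfully processed message must be deleted explicitly."""
        self.messages.pop(mid, None)
```

A reader that crashes without deleting its message simply lets the timeout expire, after which another reader receives the same message again, which is exactly the at-least-once behavior described above.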

You can have as many queues as you want, but leaving them inactive is a violation of intended use. We couldn't figure out what the penalties are, but the principle of cloud computing is to minimize waste. Message size is variable, and the maximum is 64 KB. If you need to work with larger objects, the obvious place to store them is S3.

One last important thing to remember is that messages are not retained indefinitely. Messages will be deleted after four days by default, but you can have your queue retain them for a maximum of two weeks.

SimpleDB

AWS says that SimpleDB is "a highly available, scalable, and flexible nonrelational data store that offloads the work of database administration." There you have it! In other words, you can store an extreme amount of structured information without worrying about security, data loss, and query performance. And you pay only for what you use.

SimpleDB is not a relational database, but to explain what it is, we will compare it to a relational database, since that's what we know best. SimpleDB is not a database server, so there is no such thing in SimpleDB as a database. In SimpleDB, you create domains to store related items. Items are collections of attributes, or key-value pairs. The attributes can have multiple values. An item can have 256 attributes and a domain can have one billion attributes; together, this may take up to 10 GB of storage.

You can compare a domain to a table, and an item to a record in that table. A traditional relational database imposes structure by defining a schema. A SimpleDB domain does not require items to be all of the same structure. It doesn't make sense to have totally different items in one domain, but you can change the attributes you use over time. As a consequence, you can't define indexes, but they are implicit: every attribute is indexed automatically for you.

Domains are distinct; they are on their own. Joins, which are the most powerful feature in relational databases, are not possible. You cannot combine the information in two domains with one single query. Joins were introduced to reconstruct normalized data, where normalizing data means ripping it apart to avoid duplication.

Because of the lack of joins, there are two different approaches to handling relations. You can either introduce duplication (for instance, by storing employees in the employer domain and vice versa), or you can use multiple queries and combine the data at the application level. If you have data duplication and several applications write to your SimpleDB domains, each of them will have to be aware of this when you make changes or add items, to maintain consistency. In the second case, each application that reads your data will need to aggregate information from different domains.

There is one other aspect of SimpleDB that is important to understand. If you add or update an item, it does not have to be immediately available. SimpleDB reserves the right to take some time to process the operations you fire at it. This is what is called eventual consistency, and for many kinds of information, getting a slightly older version of that information is not a huge problem.

But in some cases you need the latest, most up-to-date information, and for these cases, consistency can be enforced. Think of an online auction website like eBay, where people bid on different items. At the moment a purchase is made, it's important that the correct (latest) price is read from the database. To address those situations, SimpleDB introduced two new features in early 2010: consistent read and conditional put/delete. A consistent read guarantees to return values that reflect all previously successful writes. A conditional put/delete guarantees that the operation is performed only when one of the attributes exists or has a particular value. With this, you can implement a counter, for example, or implement locking/concurrency.
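The counter idea can be sketched with a toy model of conditional put (our own illustration of the semantics, not the SimpleDB API): the write succeeds only if the expected old value still holds, so a writer working from stale data loses and retries.

```python
class Domain:
    """Toy SimpleDB domain supporting conditional put."""

    def __init__(self):
        self.items = {}   # item name -> {attribute: value}

    def conditional_put(self, item, attr, new_value, expected=None):
        """Write succeeds only if the attribute currently equals `expected`
        (None meaning the attribute does not exist yet)."""
        current = self.items.get(item, {}).get(attr)
        if current != expected:
            return False   # someone else updated the item first
        self.items.setdefault(item, {})[attr] = new_value
        return True

def increment(domain, item, attr="count"):
    """Lock-free counter: retry until our conditional put wins."""
    while True:
        current = domain.items.get(item, {}).get(attr)
        new = (current or 0) + 1
        if domain.conditional_put(item, attr, new, expected=current):
            return new
```

A concurrent writer that read an older count fails its conditional put and has to re-read, which is precisely how the feature enables counters and locking.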

We have to stress that SimpleDB is a service, and as such, it solves a number of problems for you. Indexing is one we already mentioned. High availability, performance, and infinite scalability are other benefits. You don't have to worry about replicating your data to protect it against hardware failures, you don't have to think about which hardware to use when you have more load or how to handle peaks, and software upgrades are taken care of for you.

But even though SimpleDB makes sure your data is safe and highly available by seamlessly replicating it in several data centers, Amazon itself doesn't provide a way to manually make backups. So if you want to protect your data against your own mistakes and be able to revert to previous versions, you will have to resort to third-party solutions that can back up SimpleDB data, for example by using S3.

SNS (Simple Notification Service)

Both SQS and SimpleDB are kind of passive, or static. You can add things, and if you need something from them, you have to pull. This is OK for many services, but sometimes you need something more disruptive: you need to push instead of pull. This is what Amazon SNS gives us. You can push information to any component that is listening, and the messages are delivered right away.

SNS is not an easy service, but it is incredibly versatile. Luckily, we are all living in "the network society," so the essence of SNS should be familiar to most of us. It is basically the same concept as a mailing list or LinkedIn group: there is something you are interested in (a topic, in SNS-speak), and you show that interest by subscribing. Once the topic verifies that you exist by confirming the subscription, you become part of the group receiving messages on that topic.

SNS can be seen as an event system, but how does it work? First, you create topics. Topics are the conduits for sending (publishing) and receiving messages, or events. Anyone with an AWS account can subscribe to a topic, though that doesn't mean they will automatically be permitted to receive messages. And the topic owner can subscribe non-AWS users on their behalf. Every subscriber has to explicitly "opt in"; though that term is usually associated with mailing lists and spam, it is the logical consequence in an open system like the Web (you can see this as the equivalent of border control in a country).

The most interesting thing about SNS has to do with the subscriber, the recipient of the messages. A subscriber can configure an end point, specifying how and where the message will be delivered. Currently SNS supports three types of end points: HTTP/HTTPS, email, and SQS; this is exactly the reason we feel it is more than a notification system. You can integrate an API using SNS, enabling totally different ways of execution.
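The topic/subscribe/confirm flow can be modeled in a few lines (again our own sketch; real SNS endpoints are HTTP/HTTPS, email, or SQS, not Python callbacks):

```python
class Topic:
    """Toy SNS topic: subscribers register an endpoint callback and must
    confirm the subscription before they receive any published message."""

    def __init__(self, name):
        self.name = name
        self.subscribers = []   # list of [endpoint, confirmed] pairs

    def subscribe(self, endpoint):
        self.subscribers.append([endpoint, False])
        return len(self.subscribers) - 1   # subscription id

    def confirm(self, sub_id):
        """The explicit "opt in" step described above."""
        self.subscribers[sub_id][1] = True

    def publish(self, message):
        """Push the message to every confirmed endpoint right away."""
        delivered = 0
        for endpoint, confirmed in self.subscribers:
            if confirmed:
                endpoint(message)
                delivered += 1
        return delivered
```

Note how nothing is delivered before the confirmation: subscribing alone does not make you part of the group receiving messages.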

SWF (Simple Workflow Service)

Simple Workflow Service helps you build and run complex process flows. It takes a workflow description and fires off tasks of two kinds. Some tasks require a decision that affects the flow of the process; these are handled by a decider. Other, more mundane tasks are handled by workers.

If your application requires some kind of workflow, you usually start with a couple of cron jobs. If this starts to get out of hand, task queues and workers take over that role. This is a huge advantage, especially if you use a service like SQS.

But if your process flows get more complex, the application starts to bloat. Information about current state and history starts to creep into your application. This is not desirable, to say the least, especially not at scale.

heystaq.com, one of our own applications, is about rating AWS infrastructures. Our core business is to say something knowledgeable about an infrastructure, not to manage hundreds of workflows generating thousands of individual tasks. For heystaq.com we could build a workflow for scanning the AWS infrastructure. We could scan instances, volumes, snapshots, ELBs, etc. Some of these tasks are related, like instances and volumes. Others can easily be run in parallel.

We also scan for CloudWatch alarms, and add those alarms if they are not present. We could create another SWF workflow for this. Now we have two entirely unrelated sequences of activities that can be run in any combination. And, as a consequence, we can auto scale our system on demand relatively easily. (We'll show you how we do this in a later chapter.)


