Amazon’s AWS: Pros & Cons after hands-on use
by mschwartz | January 22nd, 2009
Amazon’s AWS is a very interesting service, in theory. But how does it actually work in practice? I’ve spent a few days working with it for the first time and here are my impressions. I’ll be writing from a Linux deployment perspective, though a lot of this information will apply to a Windows deployment.
The first thing I did was to read through some of the online documentation. What I found is that they have a number of services, 3 of which were of interest to my project: Elastic Compute Cloud (EC2), Elastic Block Store (EBS), and Simple Storage Service (S3). All three make use of Amazon’s networking and server infrastructure, and the interfaces (UI) and APIs are reasonably well done.
EC2 is a virtualization service where you can create virtual server instances and even build out a reasonable network infrastructure among several instances.
A few of the pros:
- Creating an instance takes about 5 minutes.
- You can choose from a handful of instance sizes, and unlike most virtualization offerings you can have multi-core/multi-processor instances.
- There is a fairly good choice of instance image types (Linux vs. Windows) with ready-to-go software installed.
- The interfaces to the other services (S3 and EBS) are reasonably good.
- It’s wonderful for scaling.
The cons are worthy of more detailed information.
It seems that inbound bandwidth to an instance is limited to the point where uploading a mysqldump of a decent-sized database can take hours. Specifically, I saw no better than 500K/sec transfer rates to an instance from a dedicated server at a professional hosting company with a 100MBit line to the Internet. Transfers between that same machine and other machines scattered around the Internet run at 20MBit or better.
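To put that rate in perspective, here’s a rough back-of-the-envelope calculation (the 5GB dump size is a hypothetical figure, not one from my actual database) showing why an upload can take hours at 500K/sec:

```shell
# How long does a 5GB mysqldump take at 500KB/sec?
# 5GB = 5 * 1024 * 1024 KB; divide by the rate, then by 60 for minutes.
echo "$((5 * 1024 * 1024 / 500 / 60)) minutes"   # prints "174 minutes"
```

Roughly three hours for a single 5GB file, which matches what I saw in practice with real dumps.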
The good news is that the “high I/O Performance” instance types do seem to get much better inbound bandwidth. Too bad I didn’t know about this sooner! The term “I/O Performance” isn’t well defined on their WWW site; it could mean disk transfer speed as well as network speed. Realize that AWS has an internal network infrastructure, over which their systems communicate with one another, and an external one, over which your systems communicate with theirs.
An instance’s CPU capacity is measured in “EC2 Compute Units” (ECUs); one ECU is described as the equivalent of a 2007-era Xeon CPU running at 1.0 to 1.2 GHz. Some of the instance configurations are described as having “moderate I/O performance.” Compared to a leased or purchased physical machine, the performance isn’t all that great if your application is CPU- and/or disk-intensive.
The instance configurations include “instance storage” of various sizes, depending on your choice of Instance Type. On the surface, this looks to be your main “disk drive” for things like your home directories and data sets. There are three serious flaws with the AWS concept of instance storage:
- For a Linux instance, they preconfigure a fixed 10GB root partition with a 160GB /mnt partition. Anything stored in /var lives on that 10GB root partition, and you’re almost certain to fill it up in a hurry. MySQL, as configured by Ubuntu’s neat package management system, keeps its database files in /var/lib/mysql. This isn’t a huge deal for a decently skilled Linux administrator, but you’d think Amazon would preconfigure their instances to save people the work.
- Instance storage can be thought of as something like a RAM disk: when you terminate the instance, all the data on that logical drive goes away forever. Better back up your files, or you can easily wipe out your valuable data.
- The instance storage partition is actually slower than a mounted EBS volume, which is neither obvious nor intuitive. My benchmark isn’t scientific by any means, but the measured real time to unpack a tar+gzip file, reading from and writing to the instance storage partition, was about 50% longer than unpacking the same file reading from and writing to an EBS partition.
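The /var/lib/mysql situation above is straightforward to fix by relocating the data directory onto the large /mnt partition and leaving a symlink behind. Here’s a minimal sketch of the approach, demonstrated safely in a temp directory; on a real instance you’d stop mysqld first and operate on the real /var/lib/mysql and /mnt paths:

```shell
#!/bin/sh
# Demo of relocating a data directory off the small root partition.
# $root stands in for / so this sketch is safe to run anywhere.
root=$(mktemp -d)
mkdir -p "$root/var/lib/mysql" "$root/mnt"
echo "ibdata1" > "$root/var/lib/mysql/ibdata1"    # pretend database file

mv "$root/var/lib/mysql" "$root/mnt/mysql"        # move data to the big partition
ln -s "$root/mnt/mysql" "$root/var/lib/mysql"     # old path still resolves

cat "$root/var/lib/mysql/ibdata1"                 # prints "ibdata1"
```

MySQL never notices the move: it still opens /var/lib/mysql, but the files now live on the 160GB partition.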
All of the Ubuntu Intrepid 8.10 Amazon Machine Images (AMIs) seem to boot from the same kernel image, which lives outside the AMI proper. This means you cannot tune and compile your own kernel. However, they do provide the sources to that common kernel, so you can build kernel modules and load them into an instance’s kernel properly.
X Window client program network performance is so slow it’s almost unbearable. For my purposes it was sufficient, but for a lot of the kinds of things I do it would be a great source of aggravation. In the next section, I’ll be writing about S3, and that’s where the X performance became a noticeable issue.
Simple Storage Service is a simple web-services-based means of storing and retrieving persistent data objects. It’s something like WebDAV: it provides a REST-based service to read, write, and delete objects (files). It is not a file system proper!
Because the upload speeds to a Small Instance server were so slow, I sought a way around this limitation. I found that upload speeds to their S3 service were about as fast as I could expect given the high-speed Internet connections involved. I figured I’d upload my large files to S3 and then download them from S3 onto the instance.
The upload process was reasonably fast, but still took several hours due to the number and sizes of the files involved. I downloaded and installed Jungle Disk for Linux and mounted the S3 bucket on the source machine (the one with all the files). Copying to the S3 bucket went well, up to a point: some of the files exceeded 5GB in size, and 5GB is the size limit for an S3 object. I ended up using split(1) to break these files into 1GB chunks.
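The split(1) workflow is worth spelling out, since the chunks must be reassembled in the right order on the other side. In this sketch a small random demo file stands in for the multi-gigabyte dump; on real data you’d use `-b 1024m` for 1GB chunks:

```shell
#!/bin/sh
# Split a large file into chunks under S3's 5GB object limit, then
# reassemble. split's aa, ab, ac... suffixes sort lexicographically,
# so a plain shell glob restores the pieces in the correct order.
dd if=/dev/urandom of=bigdump.bin bs=1024 count=64 2>/dev/null

split -b 16k bigdump.bin bigdump.bin.part-   # use -b 1024m for real dumps
# (upload bigdump.bin.part-* to the S3 bucket; download them on the instance)
cat bigdump.bin.part-* > bigdump.restored

cmp bigdump.bin bigdump.restored && echo "files match"
```

On the instance side, `cat` followed by a checksum comparison against the source is cheap insurance before deleting the chunks.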
Jungle Disk comes with a GUI program, an X client, for mounting drives and monitoring the network activity to the S3 bucket. Running on the dedicated (source) server, the program was quite reasonable and responsive. Running the GUI from a “moderate” I/O instance, as I was, was nearly unbearable: the program was unresponsive for many seconds at a time, and trying to edit configuration values in a text field was a real chore.
Copying the files from the S3 bucket to the Small Instance was quite fast. Accessing the data on their internal network produced speeds of up to 40MBit/sec. Fast, but not gigabit speed.
EBS provides block level storage volumes for use with EC2 instances.
This is Amazon’s answer to Instance Local Storage being somewhat transient. You can create an EBS volume from 1GB to 1TB in size, then associate the volume with an instance. From inside the instance, the EBS volume looks like a block device so you can partition it and format it and mount it as you would with any hard drive. If you terminate the instance, the EBS volume persists and can be mounted on another instance.
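The create/attach/format/mount lifecycle from the command line looks roughly like this, using the ec2-api-tools. The volume ID, instance ID, availability zone, and device name below are placeholders, and these commands only work against a configured AWS account, so treat this as a sketch rather than a recipe:

```shell
# Create a 100GB volume in the same availability zone as the instance
# (prints the new vol-... ID, which you then use below).
ec2-create-volume -s 100 -z us-east-1a

# Attach it to a running instance as a block device (IDs are placeholders).
ec2-attach-volume vol-xxxxxxxx -i i-xxxxxxxx -d /dev/sdf

# Then, on the instance itself: format and mount like any hard drive.
mkfs.ext3 /dev/sdf
mkdir -p /vol && mount /dev/sdf /vol
```

After terminating the instance, the volume survives and can be attached to a new instance with another ec2-attach-volume call.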
As I mentioned earlier, I/O to an EBS volume is about 50% faster than to instance local storage. A drawback, though perhaps not an unexpected one, is that you cannot mount an EBS volume on two instances: a volume can be attached to only one instance at a time. It would be interesting if EBS volumes could work as a shared, replicated file system so you could bring up a second instance of the same server with the same live data. The obvious scenario for this is when your WWW site gets slashdotted and you need instant capacity.
AWS is quite useful and reasonably well thought out. It can certainly be used to create some interesting and highly scalable services.
From a cost perspective, it has the advantage of zero capital cost, but the ongoing monthly cost can easily exceed that of traditional dedicated hosting.