Amazon EC2 comes out of beta
Amazon made a flurry of EC2 announcements today.
First off, EC2 is now out of beta, which means that there’s now a service-level agreement. It’s a 99.95% SLA, where downtime is defined as the unavailability of two or more Availability Zones, within the same region, in which you are running instances (i.e., your running instances have no external connectivity and you can’t launch replacement instances that do). Since EC2 only has one region right now, for practical purposes, that means “I have disconnected instances in at least two zones”. That pretty much implies that Amazon thinks that if you care enough to want an SLA, you ought to care enough to be running your instances in at least two zones.
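To put 99.95% in concrete terms, here is a minimal sketch of the downtime budget such a figure implies. This assumes a flat 720-hour (30-day) month; Amazon's actual SLA accounting period and credit mechanics may differ.

```python
def downtime_budget_minutes(sla_percent, hours=720):
    """Minutes of allowed downtime per period implied by an SLA percentage.

    Assumes a simple 720-hour month; real SLA accounting may differ.
    """
    return hours * 60 * (1 - sla_percent / 100)

# 99.95% works out to roughly 21.6 minutes of downtime per month;
# a 99.50% dedicated-hosting SLA allows about ten times that.
print(round(downtime_budget_minutes(99.95), 1))
print(round(downtime_budget_minutes(99.50), 1))
```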
Note that the 99.95% SLA is at least as good as what you’d get out of a typical dedicated hosting provider for an HA/load-balanced solution. (Non-HA dedicated solutions usually get you an SLA in the 99.50-99.75% range.) Hosting SLAs are typically derived primarily from the probability of hardware failure, in conjunction with facility failure, and are thus grounded in real failure rates, which suggests that Amazon’s SLA is probably a mathematically realistic one. I’d expect that catastrophic failures would be rooted in the EC2 software itself, as with the July S3 outage.
Second, the previously-announced Windows and Microsoft SQL Server AMIs are going into beta. These instances are more expensive than the Linux ones — from a price differential of $0.10 for Linux vs. $0.125 for Windows on the small instances, up to a whopping $0.80 for Linux vs. $1.20 for Windows on the largest high-CPU instance. That’s the difference between $72 and $90, or $576 and $864, over a month of full-time running. On a percentage basis, this is broadly consistent with the price differential between Windows and Linux VPS hosting.
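The monthly figures above are just the hourly rates multiplied out over a 720-hour month of full-time running, as this quick sketch shows (the instance names and rates are the ones quoted above):

```python
def monthly_cost(hourly_rate, hours=720):
    """Cost of running one instance full-time for a 720-hour month."""
    return hourly_rate * hours

# (instance type, Linux $/hr, Windows $/hr) as quoted in the post
for name, linux, windows in [
    ("small", 0.10, 0.125),
    ("largest high-CPU", 0.80, 1.20),
]:
    print(f"{name}: Linux ${monthly_cost(linux):.0f}/mo, "
          f"Windows ${monthly_cost(windows):.0f}/mo")
```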
Third, Amazon announced plans to offer a management console, monitoring, load balancing, and automatic scaling. That’s going to put it in direct competition with vendors who offer EC2 overlays, like RightScale. That is not going to come as a surprise to those vendors, most of whom intend to be cloud-agnostic, with their value-add being a single consistent interface across multiple clouds. So in some ways, Amazon’s new services, which will also be supported directly through the API, will actually make life easier for those vendors — it just raises the bar for what value-added features they need.
The management console is a welcome addition, as anyone who has ever attempted to provision through the API and its wrapper scripts will undoubtedly attest. It’s always been an unnecessary level of pain, and the management console doesn’t need to do much of anything to be an improvement over that. People have been managing their own EC2 monitoring just fine, but having Amazon’s view, integrated into the management console, will be a nice plus. (But monitoring itself is an enabling technology for other services; see below.)
There’s never really been a great way to load-balance on EC2. DNS round-robin is crude, and running a load-balancing proxy creates a single point of failure. Native, smart load-balancing would be a boon; here’s a place where Amazon could deliver some great value-adds that are worth paying extra for.
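To illustrate why DNS round-robin is crude: plain DNS rotation has no health awareness, so clients keep being directed to an instance after it has died. This toy sketch (with made-up instance addresses) shows the failure mode; a smarter, native load balancer would check health and skip the dead node.

```python
from itertools import cycle

# Hypothetical fleet; in DNS round-robin these would be A records.
instances = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]
rotation = cycle(instances)

# Suppose 10.0.0.2 has failed. Plain DNS doesn't know and keeps serving it.
healthy = {"10.0.0.1", "10.0.0.3"}

for _ in range(6):
    target = next(rotation)
    status = "ok" if target in healthy else "request lost"
    print(target, status)
```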
Automatic scaling has been one of the key missing pieces of EC2. Efforts like Scalr have been an attempt to address it, and it’s going to be interesting to see how sophisticated the Amazon native offering will be.
Note that three of these new EC2 elements go together. Implicit in both load-balancing and automatic scaling is the need to be able to monitor instances. The more complete the instrumentation, the smarter the load-balancing and scaling decisions can be.
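As a sketch of that interlinkage, here is a toy scaling decision driven purely by monitored CPU figures. The thresholds and the metric are illustrative assumptions, not anything Amazon has announced; the point is simply that without instrumentation there is nothing for the scaler to decide on.

```python
def scaling_decision(cpu_utilizations, high=0.75, low=0.25):
    """Return +1 (launch), -1 (terminate), or 0 based on average CPU load.

    Thresholds are illustrative; a real autoscaler would use richer metrics.
    """
    avg = sum(cpu_utilizations) / len(cpu_utilizations)
    if avg > high:
        return +1   # fleet is hot: launch another instance
    if avg < low and len(cpu_utilizations) > 1:
        return -1   # fleet is idle: terminate one instance
    return 0

print(scaling_decision([0.9, 0.8, 0.85]))  # hot fleet → +1
print(scaling_decision([0.10, 0.15]))      # idle fleet → -1
```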
For a glimpse at the way Amazon is thinking about the interlinkages, check out the Amazon CTO’s blog post on Amazon’s efficiency principles.
Posted on October 24, 2008, in Infrastructure and tagged Amazon, cloud.