IBM SoftLayer, versus Amazon or versus Rackspace?
Near the beginning of July, IBM closed its acquisition of SoftLayer (which I discussed in a previous blog post). A little over three months have passed since then, and IBM has announced the addition of more than 1,500 customers and the elimination of SmartCloud Enterprise (SCE, IBM’s cloud IaaS offering), and has gone on the offensive against Amazon in an ad campaign (analyzed in my colleague Doug Toombs’s blog post). So what does this all mean for IBM’s prospects in cloud infrastructure?
IBM is unquestionably a strong brand with deep customer relationships — it exerts a magnetism for its customers that competitors like HP and Dell don’t come anywhere near to matching. Even with all of the weaknesses of the SCE offering, here at Gartner, we still saw customers choose the service simply because it was from IBM — even when the customers would openly acknowledge that they found the platform deficient and it didn’t really meet their needs.
In the months since the SoftLayer acquisition has closed, we’ve seen this “we use IBM for everything by preference” trend continue. It certainly helps immensely that SoftLayer is a more compelling solution than SCE, but customers continue to acknowledge that they don’t necessarily feel they’re buying the best solution or the best technology, but they are getting something that is good enough from a vendor that they trust. Moreover, they are getting it now; IBM has displayed astonishing agility and a level of aggression that I’ve never seen before. It’s impressive how quickly IBM has jump-started the pipeline this early into the acquisition, and IBM’s strengths in sales and marketing are giving SoftLayer inroads into a mid-market and enterprise customer base that it wasn’t able to target previously.
SoftLayer has always competed to some degree against AWS (philosophically, both companies have an intense focus on automation, and SoftLayer’s bare-metal architecture is optimal for certain types of use cases), and IBM SoftLayer will as well. In the IBM SoftLayer deals we’ve seen in the last couple of months, though, their competition isn’t really Amazon Web Services (AWS). AWS is often under consideration, but the real competitor is much more likely to be Rackspace — dedicated servers (possibly with a hybrid cloud model) and managed services.
IBM’s strategy is actually a distinctively different one from the other providers in the cloud infrastructure market. SoftLayer’s business is overwhelmingly dedicated hosting — mostly small-business customers with one or two bare-metal servers (a cost-sensitive, high-churn business), though it also has some customers with large numbers of bare-metal servers (gaming, consumer-facing websites, and so forth). It also offers cloud IaaS, called CloudLayer, with by-the-hour VMs and small bare-metal servers, but this is a relatively small business (AWS has individual customers that are bigger than the entirety of CloudLayer). SoftLayer’s intellectual property is focused on being really, really good at quickly provisioning hardware in a fully automated way.
IBM has decided to do something highly unusual — to capitalize on SoftLayer’s bare-metal strengths, and to strongly downplay virtualization and the role of the cloud management platform (CMP). If you want a CMP — OpenStack, CloudStack, vCloud Director, etc. — on SoftLayer, there’s an easy way to install the software on bare metal. But if you want it updated, maintained, etc., you’ll either have to do it yourself, or you need to contract with IBM for managed services. If you do that, you’re not buying cloud IaaS; you’re renting hardware and CMP software, and building your own private cloud.
While IBM intends to expand the configuration options available in CloudLayer (and thus the number of hardware options available by the hour rather than by the month), their focus is upon the lower-level infrastructure constructs. This also means that they intend to remain neutral in the CMP wars. IBM’s outsourcing practice has historically been pretty happy to support whatever software you use, and the same largely applies here — they’re offering managed services for the common CMPs, in whatever way you choose to configure them.
In other words, while IBM intends to continue its effort to incorporate OpenStack as a provisioning manager in its “Smarter Infrastructure” products (the division formerly known as Tivoli), they are not launching an OpenStack-based cloud IaaS, replacing the existing CloudLayer cloud IaaS platform, or the like.
IBM also intends to use SoftLayer as the underlying hardware platform for the application infrastructure components that will be in its Cloud Foundry-based framework for PaaS. It will depend on these components to compete against the higher-level constructs in the market (like Amazon’s RDS database-as-a-service).
IBM SoftLayer has a strong value proposition for certain use cases, but today its distinctive value proposition is quite different from AWS’s and very similar to Rackspace’s (although I think Rackspace is going to embrace DevOps-centric managed services, while IBM seems more traditional in its approach). IBM SoftLayer is still an infrastructure-centric story. I don’t know that they’re going to compete with the vision and speed of execution currently being displayed by AWS, Microsoft, and Google, but ultimately, those providers may not be IBM SoftLayer’s primary competitors.
Ecosystems in conflict – Amazon vs. VMware, and OpenStack
Citrix contributing CloudStack to the Apache Software Foundation isn’t so much a shot at OpenStack (it just happens to get caught in the crossfire), as it’s a shot against VMware.
There are two primary ecosystems developing in the world: VMware and Amazon. Other possibilities, like Microsoft and OpenStack, are completely secondary to those two. You can think of VMware as “cloud-out” and Amazon as “cloud-in” approaches.
In the VMware world, you move your data center (with its legacy applications) into the modern era with virtualization, and then you build a private cloud on top of that virtualized infrastructure; to get additional capacity, business agility, and so forth, you add external cloud IaaS, and hopefully do so with a VMware-virtualized provider (and, they hope, specifically a vCloud provider who has adopted the stack all the way up to vCloud Director).
In the Amazon world, you build and launch new applications directly onto cloud IaaS. Then, as you get to scale and a significant amount of steady-state capacity, you pull workloads back into your own data center, where you have Amazon-API-compatible infrastructure. Because you have a common API and set of tools across both, where to place your workloads is largely a matter of economics (assuming that you’re not using AWS capabilities beyond EC2, S3, and EBS). You can develop and test internally or externally, though if you intend to run production on AWS, you have to take its availability and performance characteristics into account when you do your application architecture. You might also adopt this strategy for disaster recovery.
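The workload-placement step in that strategy can be sketched as a simple break-even comparison. A minimal illustration (all rates below are illustrative assumptions I've chosen, not actual AWS or data-center pricing):

```python
# Break-even sketch for "cloud-in" placement decisions (all numbers are
# illustrative assumptions, not actual AWS or data-center pricing).
def cheaper_in_house(hours_per_month: float,
                     cloud_rate: float = 0.10,       # $/instance-hour, assumed
                     in_house_monthly: float = 60.0  # amortized $/instance-month, assumed
                     ) -> bool:
    """With API-compatible infrastructure on both sides, placement reduces
    to comparing the marginal cost of each venue for a given duty cycle."""
    return hours_per_month * cloud_rate > in_house_monthly

print(cheaper_in_house(730))   # steady-state, full-time workload -> True
print(cheaper_in_house(100))   # bursty, part-time workload -> False
```

The point isn't the specific rates; it's that a common API across both venues turns placement into exactly this kind of mechanical cost comparison.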
While CloudStack has been an important CMP option for service providers — notably competing against the vCloud stack, OnApp, Hexagrid, and OpenStack — in the end, these providers are almost a decoration to the Amazon ecosystem. They’re mostly successful competing in places that Amazon doesn’t play — in countries where Amazon doesn’t have a data center, in the managed services / hosting space, in the hypervisor-neutral space (Amazon-style clouds built on top of VMware’s hypervisor, more specifically), and in a higher-performance, higher-availability market.
Where CloudStack has been more interesting is in its use to be a “cloud-in” platform for organizations who are using AWS in a significant fashion, and who want their own private cloud that’s compatible with it. Eucalyptus fills this niche as well, although Eucalyptus customers tend to be smaller and Eucalyptus tends to compete in the general private-cloud-builder CMP space targeted at enterprises — against the vCloud stack, Abiquo, HP CloudSystem, BMC Cloud Lifecycle Manager, CA’s 3Tera AppLogic, and so on. CloudStack tends to be used by bigger organizations; while it’s in the general CMP competitive space, enterprises that evaluate it are more likely to be also evaluating, say, Nimbula and OpenStack.
CloudStack has firmly aligned itself with the Amazon ecosystem. But OpenStack is an interesting case of an organization caught in the middle. Its service provider supporters are fundamentally interested in competing against AWS (far more so than with the VMware-based cloud providers, at least in terms of whatever service they’re building on top of OpenStack). Many of its vendor contributors are afraid of a VMware-centric world (especially as VMware moves from virtualizing compute to also virtualizing storage and networks), but just as importantly they’re afraid of a world in which AWS becomes the primary way that businesses buy infrastructure. It is to their advantage to have at least one additional successful widely-adopted CMP in the market, and at least one service provider successfully competing strongly against AWS. Yet AWS has established itself as a de facto standard for cloud APIs and for the way that a service “should” be designed. (This is why OpenStack has an aptly named “Nova Feature Parity Team” playing catch-up to AWS, after all, and why debates about the API continue in the OpenStack community.)
But make no mistake about it. This is not about scrappy free open-source upstarts trying to upset an established vendor ecosystem. This is a war between vendors. As Simon Wardley put it, beware of geeks bearing gifts. CloudStack is Citrix’s effort to take on VMware and enlist the rest of the vendor community in doing so. OpenStack is an effort on the part of multiple vendors — notably Rackspace and HP — to pool their engineering efforts in order to take on Amazon. There’s no altruism here, and it’s not coincidental that the committers to the projects have an explicit and direct commercial interest — they are people working full-time for vendors, contributing as employees of those vendors, and by and large not individuals contributing for fun.
So it really comes down to this: Who can innovate more quickly, and choose the right ways to innovate that will drive customer adoption?
Ladies and gentlemen, place your bets.
To become like a cloud provider, fire everyone here
A recent client inquiry of mine involved a very large enterprise, who informed me that their executives had decided that IT should become more like a cloud provider — like Google or Facebook or Amazon. They wanted to understand how they should transform their organization and their IT infrastructure in order to do this.
There were countless IT people on this phone consultation, and I’d received a dizzying introduction to names and titles and job functions, but not one person in the room was someone who did real work, i.e., someone who wrote code or managed systems or gathered requirements from the business, or even did higher-level architecture. These weren’t even people who had direct management responsibility for people who did real work. They were part of the diffuse cloud of people who are in charge of the general principle of getting something done eventually, that you find everywhere in most large organizations (IT or not).
I said, “If you’re going to operate like a cloud provider, you will need to be willing to fire almost everyone in this room.”
That got their attention. By the time I’d spent half an hour explaining to them what a cloud provider’s organization looks like, they had decidedly lost their enthusiasm for the concept, as well as been poleaxed by the fundamental transformations they would have to make in their approach to IT.
Another large enterprise client recently asked me to explain Rackspace’s organization to them. They wanted to transform their internal IT to resemble a hosting company’s, and Rackspace, with its high degree of customer satisfaction and reputation for being a good place to work, seemed like an ideal model to them. So I spent some time explaining the way that hosting companies organize, and how Rackspace in particular does — in a very flat, matrix-managed way, with horizontally-integrated teams that service a customer group in a holistic manner, coupled with some shared-services groups.
A few days later, the client asked me for a follow-up call. They said, “We’ve been thinking about what you’ve said, and have drawn out the org… and we’re wondering, where’s all the management?”
I said, “There isn’t any more management. That’s all there is.” (The very flat organization means responsibility pushed down to team leads who also serve functional roles, a modest number of managers, and a very small number of directors who have very big organizations.)
The client said, “Well, without a lot of management, where’s the career path in our organization? We can’t do something like this!”
Large enterprise IT organizations are almost always full of inertia. Many mid-market IT organizations are as well. In fact, the ones that make me twitch the most are the mid-market IT directors who are actually doing a great job of managing their infrastructure — but constrained by their scale, they are usually merely good for their size rather than awesome in absolute terms, and they are doing well enough to resist change that would shake things up.
Business, though, is increasingly on a wartime footing — and the business is pressuring IT, usually in the form of the development organization, to get more things done and to get them done faster. And this is where the dissonance really gets highlighted.
A while back, one of my clients told me about an interesting approach they were trying. They had a legacy data center that was a general mess of stuff. And they had a brand-new, shiny data center with a stringent set of rules for applications and infrastructure. You could only deploy into the new shiny data center if you followed the rules, which gave people an incentive to toe the line, and generally ensured that anything new would be cleanly deployed and maintained in a standardized manner.
It makes me wonder about the viability of an experiment for large enterprise IT with serious inertia problems: Start a fresh new environment with a new philosophy, perhaps a devops philosophy, with all the joy of having a greenfield deployment, and simply begin deploying new applications into it. Leave legacy IT with the mess, rather than letting the morass kill every new initiative that’s tried.
This is hampered by one serious problem, though: IT superstars rarely go to work in enterprises (excepting certain places, like some corners of financial services), and they especially don’t go to work in organizations with inertia problems.
Citrix buys Cloud.com
(This is part of a series of “catch-up” posts of announcements that I’ve wanted to comment on but didn’t previously find time to blog about.)
Recently, Citrix acquired Cloud.com. The purchase price was reported to be in the $200m+ vicinity — around 100x revenues. (Even in this current run of outsized valuations, that’s a rather impressive payday for an infrastructure software start-up. I heard that VMware’s Paul Maritz was talking about how these guys were shopping themselves around, into which some people have read that they ‘had’ to sell, but companies that sell themselves for 100x trailing revenues don’t ‘have’ to be doing anything, other than sniffing around to see if anyone is willing to give them even more money.)
Cloud.com (formerly known as VMOps) is one of a great many “cloud operating system” companies — it competes with Abiquo, OpenStack, Eucalyptus, Nimbula, VMware (in the form of vCloud Director), and so on. By that, I mean that you can take Cloud.com and use it to build cloud IaaS of your very own. While you can use Cloud.com to build a private cloud, the reason that Cloud.com commanded such a high valuation is that it’s currently the primary alternative to VMware for service providers who want to build public cloud IaaS.
Cloud.com is a commercial open-source vendor, but realistically, it’s heavily on the commercial side, not the open-source side; people running Cloud.com in production are generally using the licensed, much more featureful, version. Large service providers who want to build commodity clouds, particularly on the Xen hypervisor (especially Citrix Xen, rather than open-source Xen), are highly likely to choose Cloud.com’s CloudStack product as the underlying “cloud OS”. We’re also increasingly hearing from service providers who intend to use Cloud.com to manage VMware-based environments (using the VMware stack minus vCloud Director), as part of a hypervisor-neutral strategy.
Key service provider customers include GoDaddy and Tata Communications. A particular private cloud customer of note is Zynga, which uses Cloud.com to provide Amazon-compatible (and thus Rightscale-compatible) infrastructure internally, letting it easily move workloads across their own infrastructure and Amazon’s.
Citrix, of course, now has a significant commitment to OpenStack, in the form of Project Olympus, their planned commercial distribution. The Cloud.com acquisition is complementary to, not competitive with, that OpenStack commitment.
Cloud.com provides a much more complete set of features than OpenStack — it’s got much of what you need to have a turnkey cloud. Over time, as OpenStack matures, Cloud.com will be able to replace the lower levels of its software stack with OpenStack components instead. For Citrix, though (and broadly, service providers interested in VMware alternatives), this is a time-to-market issue as well as a solution-completeness issue.
In my conversations with a variety of organizations that are deeply strategically involved with OpenStack and working in-depth on the codebase, consensus seems to have developed that OpenStack is about 18 months from maturity (in the sense that it will be stable enough for a service provider who needs to depend on it to run their business to be able to reasonably do so). That’s forever in this fast-moving market. While Swift (the storage piece) is currently reliable and in production use at a variety of service providers, Nova (the compute piece) is not — there are no major service providers running Nova, and it’s acknowledged to not be service-provider-ready. (Rackspace is running the code it got via the acquisition of Slicehost, not the Nova project.) Service providers want to work with proven, stable code, and that’s not Nova right now — that’s Cloud.com. (Or VMware, and even there, people have been touchy about vCloud Director.)
It’s not that the service providers have a deep interest in running an open-source codebase; rather, they are looking for an alternative to VMware that is less expensive. Cloud.com currently fills that need reasonably well.
Similarly, it’s not that most of the members of the OpenStack coalition are vastly interested in an open-source cloud world, but rather, that they realize that there needs to be an alternative to VMware’s ecosystem, and it is in the best interests of VMware’s various competitors to pool their efforts (and for vendors in more of an “arms merchant” role, to ensure that their stuff works with every ecosystem out there). Open source is a means to an end there. Cloud.com’s stack, whether commercial or open source, is only a benefit to the OpenStack project, in the long term.
This acquisition means something pretty straightforward: Citrix is ensuring that it can deliver a full service provider stack of software that will enable providers to successfully compete against vCloud — or to have hypervisor-neutral solutions peacefully coexist, in a way that can be easily blended to meet business needs for a broad range of IaaS solutions. While Citrix would undoubtedly love to sell more XenServer licenses, ultimately the real money is in selling the rest of its portfolio to service providers — like NetScaler ADCs. Having a hypervisor-neutral cloud stack benefits Citrix’s overall position, even if some Cloud.com customers will choose to go VMware or KVM or open-source Xen rather than Citrix Xen for the hypervisor.
It certainly doesn’t hurt that Cloud.com’s Amazon-compatible APIs (and thus support of RightScale’s functionality) are also tremendously useful for organizations seeking to build Amazon-compatible private clouds at scale. No one else has really addressed this need, and VMware (in an infrastructure context) has largely targeted the market for “dependable”, classically enterprise-like infrastructure, rather than explored the opportunities in the emerging demand for commodity cloud.
In short, I think Cloud.com is a great buy for Citrix, and VMware-watchers interested in whether or not their vCloud service provider initiative is working well should certainly track Cloud.com wins vs. vCloud wins in the service provider space.
Amazon’s Free Usage Tier
Amazon recently introduced a Free Usage Tier for its Web Services. Basically, you can try Amazon EC2, with a free micro-instance (specifically, enough hours to run such an instance full-time, and have a few hours left over to run a second instance, too; or you can presumably use a bunch of micro-instances part-time), and the storage and bandwidth to go with it.
Here’s what the deal is worth at current pricing, per-month:
- Linux micro-instance – $15
- Elastic load balancing – $18.87
- EBS – $1.27
- Bandwidth – $3.60
That’s $38.74 all in all, or $464.88 over the course of the one-year free period — not too shabby. Realistically, you don’t need the load-balancing if you’re running a single instance, so that’s really $19.87/month, $238.44/year. It also proves to be an interesting illustration of how much the little incremental pennies on Amazon can really add up.
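The arithmetic above can be sanity-checked with a quick script (using the prices as quoted in this post; actual AWS rates vary by region and change over time):

```python
# Monthly value of the AWS Free Usage Tier, using the prices quoted above.
# (These are snapshot prices from the post; AWS rates vary by region and over time.)
items = {
    "Linux micro-instance": 15.00,
    "Elastic load balancing": 18.87,
    "EBS": 1.27,
    "Bandwidth": 3.60,
}

monthly = sum(items.values())
yearly = monthly * 12

# Single-instance case: a lone server doesn't need the load balancer.
monthly_single = monthly - items["Elastic load balancing"]
yearly_single = monthly_single * 12

print(f"Full bundle:     ${monthly:.2f}/month, ${yearly:.2f}/year")
print(f"Single instance: ${monthly_single:.2f}/month, ${yearly_single:.2f}/year")
```

Running it confirms the $38.74/month and $464.88/year figures, and $19.87/month ($238.44/year) once the load balancer is dropped.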
It’s a clever and bold promotion, making it cost nothing to trial Amazon, and potentially punching Rackspace’s lowest-end Cloud Servers business in the nose. A single instance of that type is enough to run a server to play around with if you’re a hobbyist, or you’re a garage developer building an app or website. It’s this last type of customer that’s really coveted, because all cloud providers hope that whatever he’s building will become wildly popular, causing him to eventually grow to consume bucketloads of resources. Lose that garage guy, the thinking goes, and you might not be able to capture him later. (Although Rackspace’s problem at the moment is that their cloud can’t compete against Amazon’s capabilities once customers really need to get to scale.)
While most of the cloud IaaS providers are actually offering free trials to most customers they’re in discussions with, there’s still a lot to be said about just being able to sign up online and use something (although you still have to give a valid credit card number).
Rackspace and OpenStack
Rackspace is open-sourcing its cloud software — Cloud Files and Cloud Servers — and merging its codebase and roadmap with NASA’s Nebula project (not to be confused with OpenNebula), in order to form a broader community project called OpenStack. This will be hypervisor-neutral, and initially supports Xen (which Rackspace uses) and KVM (which NASA uses), and there’s a fairly broad set of vendors who have committed to contributing to the stack or integrating with it.
While my colleagues and I intend to write a full-fledged research note on this, I feel like shooting from the hip on my blog, since the research note will take a while to get done.
I’ve said before that hosters have traditionally been integrators, not developers of technology, yet the cloud, with its strong emphasis on automation, and its status as an emerging technology without true turnkey solutions at this stage, has forced hosters into becoming developers.
I think the decision to open-source its cloud stack reinforces Rackspace’s market positioning as a services company, and not a software company — whereas many of its cloud competitors have defined themselves as software companies (Amazon, GoGrid, and Joyent, notably).
At the same time, open sourcing is not necessarily a path to software success. Rackspace has a whole host of new challenges that it will have to meet. First, it must ensure that the roadmap of the new project aligns sufficiently with its own needs, since it has decided that it will use the project’s public codebase for its own service. Second, it now has to manage, and just as importantly lead, an open-source community, getting useful commits from outside contributors and managing the commit process. (Rackspace and NASA have formed a board for governance of the project, on which they have multiple seats but are in the minority.) Third, as with all such things, there are potential code-quality issues, the impact of which becomes significantly magnified when running operations at massive scale.
In general, though, this move is indicative of the struggle that the hosting industry is going through right now. VMware’s price point is too high, it’ll become even higher for those who want to adopt “Redwood” (vCloud), and the initial vCloud release is not a true turnkey service provider solution. This is forcing everyone into looking at alternatives, which will potentially threaten VMware’s ability to dominate the future of cloud IaaS. The compelling value proposition of single pane of glass management for hybrid clouds is the key argument for having VMware both in the enterprise and in outsourced clouds; if the service providers don’t enthusiastically embrace this technology (something which is increasingly threatening), the single pane of glass management will go to a vendor other than VMware, probably someone hypervisor-neutral. Citrix, with its recent moves to be much more service provider friendly, is in a good position to benefit from this. So are hypervisor-neutral cloud management software vendors, like Cloud.com.
The (temporary?) transformation of hosters
Classically, hosting companies have been integrators of technology, not developers of technology. Yet the cloud world is increasingly pushing hosting companies into being software developers — companies who create competitive advantage in significant part by creating software which is used to deliver capabilities to customers.
I’ve heard the cloud IaaS business compared to the colocation market of the 1990s — the idea that you build big warehouses full of computers and you rent that compute capacity to people, comparable conceptually to renting data center space. People who hold this view tend to say things like, “Why doesn’t company X build a giant data center, buy a lot of computers, and rent them? Won’t the guy who can spend the most money building data centers win?” This view is, bluntly, completely and utterly wrong.
IaaS is functionally becoming a software business right now, one that is driven by the ability to develop software in order to introduce new features and capabilities, and to drive quality and efficiency. IaaS might not always be a software business; it might eventually be a service-and-support business that is enabled by third-party software. (This would be a reasonable view if you think that VMware’s vCloud is going to own the world, for instance.) And you can get some interesting dissonances when you’ve got some competitors in a market who are high-value software businesses vs. other folks who are mostly commodity infrastructure providers enabled by software (the CDN market is a great example of this). But for the next couple of years at least, it’s going to be increasingly a software business in its core dynamics; you can kind of think of it as a SaaS business in which the service delivered happens to be infrastructure.
To illustrate, let’s talk about Rackspace. Specifically, let’s talk about Rackspace vs. Amazon.
Amazon is an e-commerce company, with formidable retail operations skills embedded in its DNA, but it is also a software company, with more than a decade of experience under its belt in rolling out a continuous stream of software enhancements and using software to drive competitive advantage.
Amazon, in the cloud IaaS part of its Web Services division, is in the business of delivering highly automated IT infrastructure to customers. Custom-written software drives their entire infrastructure, all the way down to their network devices. Software provides the value-added enhancements that they deliver on top of the raw compute and storage, from the spot pricing marketplace to auto-scaling to the partially-automated MySQL management provided by the RDS service. Amazon’s success and market leadership depends on consistently rolling out new and enhanced features, functions, capabilities. It can develop and release software on such aggressive schedules that it can afford to be almost entirely tactical in its approach to the market, prioritizing whatever customers and prospects are demanding right now.
Rackspace, on the other hand, is a managed hosting company, built around a deep culture of customer service. Like all managed hosters, they’re imperfect, but on the whole, they are the gold standard of service, and customer service is one of the key differentiators in managed hosting, driving Rackspace’s very rapid growth over the last five years. Rackspace has not traditionally been a technology leader; historically, it’s been a reasonably fast follower implementing mainstream technologies in use by its target customers, but people, not engineering, have been its competitive advantage.
And now, Rackspace is going head to head with Amazon on cloud IaaS. It has made a series of acquisitions aimed at acquiring developers and software technology, including Slicehost, JungleDisk, and Webmail.us. (JungleDisk is almost purely a software company, in fact; it makes money by selling software licenses.) Even if they emphasize other competitive differentiation, like customer support, they’re still in direct competition with Amazon on pure functionality. Can Rackspace obtain the competencies it will need to be a software leader?
And in related questions: Can the other hosters who eschew the VMware vCloud route manage to drive the featureset and innovation they’ll need competitively? Will vCloud be inexpensive enough and useful enough to be widely adopted by hosters, and if it is, how much will it commoditize this market? What does this new emphasis upon true development, not just integration, do to hosters and to the market as a whole? (I’ve been thinking about this a lot, lately, although I suspect it’ll go into a real research note rather than a blog post.)
Are multiple cloud APIs bad?
Rackspace has recently launched a community portal called Cloud Tools, showcasing third-party tools that support Rackspace’s cloud compute and storage services. The tools are divided into “featured” and “community”. Featured tools are ones that Rackspace has looked at and believes deserve highlighting; they’re not necessarily commercial projects, but Rackspace does have formal relationships with the developers. Community tools are from any random Joe out there who’d like to be listed. The featured tools get a lot more bells and whistles.
While this is a good move for Rackspace, it’s not ground-breaking stuff, although the portal is notable for a design that seems more consumer-friendly (by contrast with Amazon’s highly text-dense, spartan partner listings). Rather, what’s interesting is Rackspace’s ongoing (successful) efforts to encourage an ecosystem to develop around its cloud APIs, and the broader question of cloud API standardization, “de facto” standards, and similar issues.
No small number of cloud advocates believe that rapid standardization in the industry would be advantageous, and that Amazon’s S3 and EC2 APIs, as the APIs with the greatest current adoption and broadest tools support, should be adopted as a de facto standard. Indeed, some cloud-enablement packages, like Eucalyptus, have adopted Amazon’s APIs — and will probably run into API dilemmas as they evolve, since private cloud implementations will differ from public ones, leading to inherent API differences, and a commitment to API compatibility means that you don’t fully control your own feature roadmap. There’s something to be said for compatibility, certainly. Compatibility drives commoditization, which would theoretically lower prices and deliver benefits to end-users.
However, I believe that it’s too early in the market to seek commoditization. Universal commitment to a particular API at this point clamps standardized functionality within a least-common-denominator range, and it restricts the implementation possibilities, to the detriment of innovation. As long as there is rapid innovation and the market continues to offer a slew of new features — something which I anticipate will continue at least through the end of 2011 and likely beyond — standardization is going to be of highly limited benefit.
Rackspace’s API is different from Amazon’s because Rackspace has taken some different fundamental approaches, especially with regard to the network. For another example of significant API differences, compare EMC’s Atmos API to Amazon’s S3 API. Storage is a pretty simple thing, but there are nevertheless meaningful differences in the APIs, reflecting EMC’s different philosophy and approach. (As a sideline, you might find William Vambenepe’s comparison of public cloud APIs in the context of REST to be an interesting read.)
Everyone can agree on a certain set of core cloud concepts, and I expect that we’ll see libraries that provide unified API access to different underlying clouds; for instance, libcloud (for Python) is the beginning of one such effort. And, of course, third parties like RightScale specialize in providing unified interfaces to multiple clouds.
One thing to keep in mind: Most of the cloud APIs to date are really easy to work with. This means that if you have a tool that supports one API, it’s not terribly hard or time-consuming to make it support another API, assuming that you’re confining yourself to basic functionality.
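That portability point can be made concrete with a minimal sketch. If a tool talks to clouds through a thin driver interface, supporting another API is mostly a matter of writing one more small driver class. Everything below is hypothetical — the provider names and the “native” vocabularies are invented for illustration, not any vendor’s actual API (libcloud’s real drivers follow broadly this pattern, but with far more detail).

```python
# Sketch of a unified driver interface over two hypothetical cloud APIs.
# Each driver translates a common call into its provider's native
# vocabulary; the tool built on top never sees the differences.
from abc import ABC, abstractmethod


class CloudDriver(ABC):
    """Least-common-denominator operations every driver must support."""

    @abstractmethod
    def create_server(self, name: str, size_mb: int) -> dict:
        ...


class ProviderADriver(CloudDriver):
    # Hypothetical provider A thinks in "instances" and "flavors".
    def create_server(self, name, size_mb):
        return {"instance_name": name, "flavor": f"{size_mb}MB", "provider": "A"}


class ProviderBDriver(CloudDriver):
    # Hypothetical provider B thinks in "nodes" and raw memory sizes.
    def create_server(self, name, size_mb):
        return {"node": name, "memory_mb": size_mb, "provider": "B"}


def provision_everywhere(drivers, name, size_mb):
    """Tool-side code is identical regardless of which clouds are behind it."""
    return [d.create_server(name, size_mb) for d in drivers]
```

A caller just passes in whatever drivers it has — `provision_everywhere([ProviderADriver(), ProviderBDriver()], "web01", 1024)` — which is why adding basic support for a new API tends to be cheap.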
There’s certainly something to be said in favor of other cloud providers offering an API compatibility layer for basic EC2 and S3 functionality, to satisfy customer demand for such. This also seems to be the kind of thing that’s readily executed as a third-party library, though.
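A compatibility layer of that kind is easy to sketch as a third-party adapter: it accepts S3-flavored bucket/key calls and translates them onto another provider’s native storage operations. Everything here is a hypothetical illustration — the class names are invented, and the “native” backend is simulated with a dict rather than a real service.

```python
# Hypothetical sketch of an S3-style compatibility shim delivered as a
# third-party library, covering only basic put/get functionality.
class NativeObjectStore:
    """Stand-in for a non-S3 cloud storage API (containers + objects)."""
    def __init__(self):
        self._containers = {}

    def put_object(self, container, obj_name, data):
        self._containers.setdefault(container, {})[obj_name] = data

    def get_object(self, container, obj_name):
        return self._containers[container][obj_name]


class S3CompatLayer:
    """Exposes S3-flavored verbs; delegates to the native store underneath."""
    def __init__(self, backend):
        self._backend = backend

    def put(self, bucket, key, body):   # analogous to S3 PUT Object
        self._backend.put_object(bucket, key, body)

    def get(self, bucket, key):         # analogous to S3 GET Object
        return self._backend.get_object(bucket, key)
```

The thinness of the shim is the point: for basic functionality there’s little translation to do, which is why this works as a third-party library rather than requiring the provider’s cooperation.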
Bits and pieces
Interesting recent news:
Amazon’s revocation of Orwell novels on the Kindle has stirred up some cloud debate. There seems to have been a thread of “will this controversy kill cloud computing”, which you can find in plenty of blogs and press articles. I think that question, in this context, is silly, and am not going to dignify it with a lengthy post of my own. I do think, however, that it highlights important questions around content ownership, application ownership, and data ownership, and the role that contracts (whether in the form of EULAs or traditional contracts) will play in the cloud. By giving up control over physical assets, whether data or devices, we place ourselves into the hands of third parties, and we’re now subject to their policies and foibles. The transition from a world of ownership to a world of rental, even “permanent” lifetime rental, is not a trivial one.
Engine Yard has expanded its EC2 offering. Previously, Engine Yard was offering Amazon EC2 deployment of its stack via an offering called Solo, for low-end customers who only needed a single instance. Now, they’ve introduced a version called Flex, which is oriented around customers who need a cluster and associated capabilities, along with a higher level of support. This is notable because Engine Yard has been serving these higher-end customers out of their own data center and infrastructure. This move, however, seems to be consistent with Engine Yard’s gradual shift from hosting towards being more software-centric.
The Rackspace Cloud Servers API is now in open beta. Cloud Servers is essentially the product that resulted from Rackspace’s acquisition of Slicehost. Previously, you dealt with your Cloud Server through a Web portal; this new release adds a RESTful API, along with some new features, like shared IPs (useful for keepalived and the like). Also of note is the resize operation, letting you scale your server size up or down, but this is really handwaving magic in front of replacing a smaller virtual server with a larger virtual server, rather than expanding an already-running virtual instance. The API is fairly extensive and the documentation seems decent, although I haven’t had time to personally try it out yet. The API responses, interestingly, include both human-readable data as well as WADL (Web Application Description Language, which is machine-parseable).
SOASTA has introduced a cloud-based performance certification program. Certification is something of a marketing gimmick, but I do think that SOASTA is, overall, an interesting company. Very simply, SOASTA leverages cloud system infrastructure to offer high-volume load-testing services. In the past, you’d typically execute such tests using a tool like HP’s LoadRunner, and many Web hosters offer, as part of their professional services offerings, performance testing using LoadRunner or a similar tool. SOASTA is a full-fledged software as a service offering (i.e., it is their own test harness, monitors, analytics, etc., not a cloud repackaging of another vendor), and the price point makes it reasonable not just for the sort of well-established organizations that could previously afford commercial performance-testing tools, but also for start-ups.
Pricing transparency and CDNs
It is possible that I am going to turn out to be mildly wrong about something. I predicted that neither Amazon’s CloudFront CDN nor the comparable Rackspace/Limelight offering (Mosso Cloud Files) would really impact the mainstream CDN market. I am no longer as certain that’s going to be the case, as it appears that behavioral economics plays into these decisions more than one might expect. The impact is subtle, but I think it’s there.
I’m not talking about the giant video deals, mind you; those guys already get prices well below that of the cloud CDNs. I’m talking about the classic bread-and-butter of the CDN market, the e-commerce and enterprise customers, significant B2B and B2C brands that have traditionally been Akamai loyalists, or been scattered among smaller players like Mirror Image.
Simply put, the cloud CDNs put indirect pressure on mainstream CDN prices, and will absorb some new mainstream (enterprise but low-volume) clients, for a simple reason: Their pricing is transparent. $0.22/GB for Rackspace/Limelight. $0.20/GB for SoftLayer/Internap. $0.17/GB for Amazon CloudFront. And so on.
Transparent pricing forces people to rationalize what they’re buying. If I can buy Limelight service on zero commit for $0.22/GB, there’s a fair chance that I’m going to start wondering just what exactly Akamai is giving me that’s worth paying $2.50/GB for on a multi-TB commit. Now, the answer to that might be, “DSA Secure that speeds up my global e-commerce transactions and is invaluable to my business”, but that answer might also be, “The same old basic static caching I’ve been doing forever and have been blindly signing renewals for.” It is going to get me to wonder things like, “What are the actual competitive costs of the services I am using?” and, “What is the business value of what I’m buying?” It might not alter what people buy, but it will certainly alter their perception of value.
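The back-of-the-envelope math is easy to run with the per-GB rates quoted above. The 5 TB monthly volume is an illustrative assumption of mine, and the $2.50/GB committed rate is the hypothetical from the scenario above, not a published price:

```python
# Rough monthly-cost comparison at the per-GB rates quoted in the post.
rates_per_gb = {
    "Akamai (illustrative committed rate)": 2.50,
    "Rackspace/Limelight": 0.22,
    "SoftLayer/Internap": 0.20,
    "Amazon CloudFront": 0.17,
}

monthly_gb = 5 * 1000  # assumed 5 TB/month of delivery (decimal units)

costs = {name: rate * monthly_gb for name, rate in rates_per_gb.items()}
for name, cost in sorted(costs.items(), key=lambda kv: kv[1]):
    print(f"{name}: ${cost:,.2f}/month")
```

At that volume the zero-commit cloud CDNs land around $850–$1,100 a month against $12,500 at the hypothetical committed rate — exactly the kind of gap that makes a buyer ask what the premium is actually paying for.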
Since grim October, businesses have really cared about what things cost and what benefit they’re getting out of them. Transparent pricing really amps up the scrutiny, as I’m discovering as I talk to clients about CDN services. And remember that people can be predictably irrational.
While I’m on the topic of cloud CDNs: There have been two recent sets of public performance measurements for Rackspace (Mosso) Cloud Files on Limelight. One is part of a review by Matthew Sacks, and the other is Rackspace’s own posting of Gomez metrics comparing Cloud Files with Amazon CloudFront. The Limelight performance is, unsurprisingly, overwhelmingly better.
What I haven’t seen yet is a direct performance comparison of regular Limelight and Rackspace+Limelight. The footprint appears to be the same, but differences in cache hit ratios (likely, given that stuff on Cloud Files will likely get fewer eyeballs) and the like will create performance differences on a practical level. I assume it creates no differences for testing purposes, though (i.e., the usual “put a 10k file on two CDNs”), unless Limelight prioritizes Cloud Files requests differently.
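The cache-hit-ratio effect is straightforward to quantify: expected response time is just a weighted average of edge and origin latency, so an identical footprint can still perform differently if fewer requests land on warm caches. The latency and hit-ratio numbers below are illustrative assumptions, not measurements:

```python
def expected_latency_ms(hit_ratio, edge_ms, miss_ms):
    """Expected per-request latency given a cache hit ratio."""
    return hit_ratio * edge_ms + (1 - hit_ratio) * miss_ms

# Assumed figures: 30 ms from a warm edge cache, 250 ms when the
# request falls through to origin.
popular = expected_latency_ms(0.98, 30, 250)  # well-trafficked objects
sparse = expected_latency_ms(0.80, 30, 250)   # fewer eyeballs, colder caches
```

With those assumptions, the heavily-requested content averages about 34 ms per request while the colder content averages about 74 ms — which is why a synthetic “put a 10k file on two CDNs” test, where the test object stays warm, can mask real-world differences.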