
The 2014 Cloud IaaS Magic Quadrant

Gartner’s Magic Quadrant for Cloud Infrastructure as a Service, 2014, has just been released (see the client-only interactive version, or the free reprint). If you’re a Gartner client, you can also view the related charts, which summarize the offerings, features, and data center locations in a convenient table format. (The charts are unfortunately less readable than they could be, as our publication system doesn’t allow comments in Excel spreadsheets. Sorry.)

We’re continuing to update this Magic Quadrant every nine months, since the market is moving so quickly. There have been significant changes in vendor positions since the August 2013 Magic Quadrant (the free reprint has expired, but the graphic is floating around, and Gartner clients can use the “History” tab in the online Magic Quadrant tool, which allows you to compare 2012, 2013, and 2014 interactively).

We’ve observed, over the last nine months, a major shift in Gartner’s client base — the desire to make strategic bets on cloud IaaS providers. In general, this reduces the number of significant suppliers to an organization to just one or two (whereas many organizations previously had as many as four), with the overwhelming bulk of the workloads going to one provider. It also means that clients are interested in knowing not just who is winning right now, but who is going to be the winner in five or even ten years. That’s really the lens through which this Magic Quadrant should be viewed: Who has what it takes to convince the customer that they can both serve current needs and sustain market leadership over the long term?

Our clients have, since September of 2013 (which seemed to mark a change in Microsoft’s go-to-market approach for Azure), consistently viewed this as an AWS vs. Microsoft battle, with AWS continuing to win the vast majority of business but Microsoft making significant inroads, especially with later-adopter customers. In the weeks since the big price drops, many clients have also been asking about the future of Google, and there are plenty of curiosity questions about IBM (SoftLayer), although the IBM questions tend to be oriented more toward outsourcing and broader strategy. Of course, prospects also consider other vendors, especially their existing incumbents, but AWS and Microsoft are overwhelmingly the top contenders.

What’s interesting about this year’s Visionaries is that they all have new platforms — CenturyLink with the Tier 3 acquisition, CSC with the ServiceMesh acquisition coupled with the AWS partnership, Google with Google Compute Engine, IBM with the SoftLayer acquisition, and Verizon Terremark with the still-beta Verizon Cloud. (Arguably VMware falls into this bucket as well, despite being a Niche Player this year.) These providers are in the middle of reinventing themselves, most with the idea of battling it out for the #3 spot in the market.

This is not a market for the faint of heart. (I recently asked a large vendor if they intended to compete seriously in the IaaS space, and was told, “Only an idiot takes on Amazon, Microsoft, and Google simultaneously.”) For that matter, this is not a market for the shallow of pocket. You can’t simply spend your way to success here; you need engineers and intellectual property, and, to be a real #3, substantial capital investment in infrastructure.

There’s also a clear convergence with the PaaS market taking place here. AWS has long offered an array of services that are PaaS elements, as well as many things that sit on the spectrum between pure IaaS and pure PaaS. Microsoft and Google started as PaaS providers and then launched IaaS offerings. The distinctions will blur and become increasingly less relevant as providers fight it out on features and capabilities.

Gartner continues to separate its evaluation of related managed and professional services from the core cloud IaaS platform, because we believe that clients are increasingly choosing a platform first, and then choosing consultants and managed services providers (or, alternatively, turning to a trusted integrator who helps them choose the right platforms for their needs). I’ll be writing on this more in the future, but keep an eye out for the upcoming regional Magic Quadrants for Cloud-Enabled Managed Hosting for a managed-services-oriented view.

Reflections on the OpenStack Atlanta summit

When I wrote a research note called Don’t Let OpenStack Hype Distort Your Selection of a Cloud Management Platform, 2012, back in September 2012, I took quite a bit of flak in public for my statements about OpenStack’s maturity. At the time, I felt that the industry was about 18 to 24 months from the point where real commercial adoption of OpenStack would begin. It now looks like I made the right call — 20 months have passed since I wrote that note, and indeed, OpenStack seems to be on the cusp of that tipping point. OpenStack is truly becoming a business. Last year’s Portland summit was a developer summit; this year’s summit has the feel of a trade show, although of course it’s still a set of working meetings as well as a user conference.

There’s much work to be done still, but things are grinding onwards in an encouraging fashion. The will to solve the common problems of installs, upgrades, and networking seems to have permeated the community sufficiently that these basic elements of usability and stability are getting into the core. The involvement of larger vendors has created a collective determination to do what it takes to make enterprise adoption of OpenStack possible, in due time.

In March of this year, I wrote a new document called An Overview of OpenStack, 2014. It contains the updated Gartner positions on OpenStack — along with practical information for users, like use cases, vendors, and how to select a distro. (No vendor has done a free reprint of the note, so it’s behind the paywall, sorry.) I have no updates to that position after this summit; it has been largely what I expected it to be. However, I did want to comment on what I see as one of the key questions now facing the OpenStack Foundation and contributing vendors.

One of the positions taken in my recent note is a reiteration of a 2012 position — we believe that OpenStack “will eventually mature into a solid open-source core at the heart of multiple commercial products and services.” One of the key questions now at hand is how large that core should be — a fundamental controversy for OpenStack Foundation members, each of whom has a position based on where their company adds value.

At one pole of the spectrum are the vendors who want to maximize the capabilities in OpenStack that are fully open-source — I’ll call them the “more open” camp. (End-users, of course, all want this as well.) These vendors typically differentiate in some way other than the software itself. They do consulting, they are managed services providers, they are cloud IaaS providers, or they are selling some kind of product or service that uses OpenStack under the covers but delivers some other kind of value (NFV, SaaS, and so on). They want the maximum capabilities delivered in the software, and they’re willing to contribute their own work towards this end.

At the other pole of the spectrum are the vendors who intend to sell a cloud management platform (CMP) and need to be able to differentiate — I’ll call them the “more proprietary” camp. That raises the question of how a distro differentiates. It has previously been argued that installation and upgrades should be left to commercial distributions; at long last, there seems to be agreement that, for the good of the community, at least some of these capabilities need to be decent in the core. The next controversial one seems to be an HA control plane. But it also gets into the broader question of how deep the functionality of OpenStack as a whole should go. Vendors that sell OpenStack software really fall into two broad categories: those that intend to supportively wrap what is essentially vanilla OpenStack (like the Linux vendors), and those that are building a full-fledged CMP (or CMP suite) into which OpenStack may essentially disappear near-invisibly (except for perhaps an exposed API), surrounded by a rich fudgy layer of proprietary software (like HP and IBM). Most of these vendors want just enough in the open-source core to make OpenStack successful overall.

There are nuances here, of course, and many vendors fall somewhere between these two poles, but I think that summarizes the two camps pretty well. Each camp has its own beliefs about what is best for their own companies and what is best for OpenStack. These are legitimate debates about what is “just enough” functionality in OpenStack (and how that “just enough” changes over time), even amongst vendors who occupy the “more proprietary” camp — and whether that “just enough” is sufficient to satisfy the “more open” camp. Indeed, the “more open” camp may find that they cannot get their contributions accepted because the “more proprietary” camp is gatekeeping.

It is critical to note that no vendor I’ve ever spoken to thinks that OpenStack interoperability means that you should be able to easily switch between distributions or OpenStack-based service providers. Rather, the desire is to ensure that there’s enough of an interoperability construct that there can be a viable OpenStack ecosystem — it’s about the ability of ecosystem vendors to interoperate with a variety of OpenStack-based vendors, far more than it is about the user’s ability to interoperate between OpenStack-based solutions. To reiterate another point from my previous research notes: Customers should expect to be no less locked into an OpenStack-based vendor/provider than they would into any other CMP or cloud IaaS provider.
