Category Archives: Infrastructure

Verizon Cloud is technically innovative, but is it enough?

Verizon Terremark has announced the launch of its new Verizon Cloud service built using its own technology stack.

Verizon already owns a cloud IaaS offering — in fact, it owns several. Terremark was an early AWS competitor with the Terremark Enterprise Cloud, a VMware-based offering that got strong enterprise traction during the early years of this market (and remains the second-most-common cloud provider amongst Gartner’s clients, with many companies using both AWS and Terremark), as well as a vCloud Express offering. Verizon entered the game later with Verizon Compute as a Service (now called Enterprise Cloud Managed Edition), also VMware-based. Since Verizon’s acquisition of Terremark, the company has continued to operate all the existing platforms, and intends to continue to do so for some time to come.

However, Verizon has had the ambition to be a bigger player in cloud; like many other carriers, it believes that network services are a commodity, and that a carrier needs stickier, value-added, higher-up-the-stack services in order to succeed in the future. At the same time, Verizon understood that it would have to build technology, not depend on other people’s technology, if it wanted to be a truly competitive global-class cloud player versus Amazon (and Microsoft, Google, etc.).

With that in mind, in 2011, Verizon went and made a manquisition — acquiring CloudSwitch not so much for its product (essentially hypervisor-within-a-hypervisor that allows workloads to be ported across cloud infrastructures using different technologies), as for its team. It gave them a directive to go build a cloud infrastructure platform with a global-class architecture that could run enterprise-class workloads, at global-class scale and at fully competitive price points.

Back in 2011, I conceived what I called the on-demand infrastructure fabric (see my blog post No World of Two Clouds, or, for Gartner clients, the research note, Market Trends: Public and Private Cloud Infrastructure Converge into On-Demand Infrastructure Fabrics) — essentially, a global-class infrastructure fabric with self-service selectable levels of availability, performance, and isolation. Verizon is the first company to have really built what I envisioned (though their project predates my note, and my vision was developed independently of any knowledge of what they were doing).

The Verizon Cloud architecture is actually very interesting, and, as far as I know, unique amongst cloud IaaS providers. It is almost purely a software-defined data center. Components are designed at a very low level — a custom hypervisor, SDN augmented with the use of NPUs, virtualized distributed storage. Verizon has generally tried to avoid using components for which it does not have source code. There are very few hardware components — x86 servers, Arista switches, and commodity flash storage (the platform is all-SSD). The network is flat, and high bandwidth is an expectation (Verizon is a carrier, after all). Oh, and there’s object-based storage, too (which I won’t discuss here).

The Verizon Cloud has a geographically distributed control plane designed for continuous availability, and it, along with the components, is supposed to be updatable without downtime (i.e., maintenance should not impact anything). It’s intended to provide fine-grained performance controls for the compute, network, and storage resource elements. It is also built to allow the user to select fault domains, allowing strong control of resource placement (such as “these two VMs cannot sit on the same compute hardware”); within a fault domain, workloads can be rebalanced in case of hardware failure, thus offering the kind of high availability that’s often touted in VMware-based clouds (including Terremark’s previous offerings). It is also intended to allow dynamic isolation of compute, storage, and networking components, allowing the creation of private clouds within a shared pool of hardware capacity.
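To make the placement controls concrete, here is a minimal sketch, assuming a hypothetical REST API (the endpoint, field names, and policy values are invented for illustration and are not Verizon’s actual interface), of requesting two VMs with an anti-affinity constraint and an explicit fault domain:

```python
# Hypothetical sketch only: the endpoint, fields, and policy names are invented
# for illustration and do not reflect Verizon's actual API.
import requests

API = "https://cloud.example-provider.com/api"          # placeholder endpoint
HEADERS = {"Authorization": "Bearer <token>", "Content-Type": "application/json"}

payload = {
    "name": "web-tier",
    "instances": [
        {"name": "web-1", "vcpus": 2, "memory_gb": 4, "storage_iops": 1000},
        {"name": "web-2", "vcpus": 2, "memory_gb": 4, "storage_iops": 1000},
    ],
    # Anti-affinity: these two VMs must not share the same compute hardware.
    "placement": {"policy": "anti-affinity", "scope": "host"},
    # Fault-domain selection: pin the group to a specific fault domain.
    "fault_domain": "fd-2",
}

response = requests.post(f"{API}/vm-groups", json=payload, headers=HEADERS, timeout=30)
response.raise_for_status()
print(response.json())
```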

The Verizon Cloud is intended to be as neutral as possible — the theory is that all VM hypervisors can run natively on Verizon’s hypervisor, many APIs can be supported (including its own API, the existing Terremark API, and the AWS, CloudStack, and OpenStack APIs), and there’ll be support for the various VM image formats. Initially, the supported hypervisor is a modified Xen. In other words, Verizon wants to take your workloads, wherever you’re running them now, and in whatever form you can export them.
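As a rough illustration of what that API neutrality would mean for a customer, here is a sketch of pointing a standard AWS SDK at an AWS-compatible endpoint; the endpoint URL and credentials are placeholders, boto3 is used purely as a familiar example, and whether unmodified SDK calls actually work against any given provider depends on how faithfully it implements the API:

```python
# Sketch: reusing AWS-style tooling against a provider that exposes an
# AWS-compatible API. The endpoint URL and credentials are placeholders.
import boto3

ec2 = boto3.client(
    "ec2",
    endpoint_url="https://ec2.cloud.example-provider.com",  # hypothetical compatible endpoint
    region_name="us-east-1",
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY",
)

# The call shapes are standard EC2 API calls; only the endpoint differs.
images = ec2.describe_images(Owners=["self"])
print([img["ImageId"] for img in images.get("Images", [])])
```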

It’s an enormously ambitious undertaking. It is, assuming it all works as promised, a technical triumph — it’s the kind of engineering you expect out of an organization like AWS or Google, or a software company like Microsoft or VMware, not a staid, slow-moving carrier (the mere fact that Verizon managed to launch this is a minor miracle unto itself). It is actually, in a way, what OpenStack might have aspired to be; the delta between this and the OpenStack architecture is, to me, full of sad might-have-beens of what OpenStack had the potential to be, but is not and is unlikely to become. (Then again, service providers have the advantage of engineering to a precisely-controlled environment. OpenStack, and for that matter, VMware, need to run on whatever junk the customer decides to use, instantly making the problem more complex.)

Unfortunately, the question at this stage is: Will anybody care?

Yes, I think this is an important development in the market, and the fact that Verizon is already a credible cloud player in the enterprise, with an entrenched base in the Terremark Enterprise Cloud, will help it. But in a world where developers control most IaaS purchasing, the bare-bones nature of the new Verizon offering means that it falls short of fulfilling the developer desire for greater productivity. In order to find a broader audience, Verizon will need to commit to developing the rich, value-added capabilities that a market leader will need — which likely means going after the PaaS market with the same degree of ambition, innovation, and investment, but certainly means committing to rapidly introducing complementary capabilities and bringing a rich ecosystem in the form of a software marketplace and other partnerships. Verizon needs to take advantage of its shiny new IaaS building blocks to rapidly introduce additional capabilities — much like Microsoft is now rapidly introducing new capabilities into Azure.

With that, assuming that this platform performs as designed, and Verizon can continue to treat Terremark’s cloud folks like they belong to a fast-moving start-up and not an ossified pipe provider, Verizon may have a shot at being one of the leaders in this market. Without that, the Verizon Cloud is likely to be relegated to a niche, just like every other provider whose capabilities stop at the level of offering infrastructure resources.

No world of two clouds

Massimo Re Ferre’ recently posted some thoughts as a follow-up to his talk at VMworld, about vCHS vs. AWS. That led to a Twitter exchange that made me think that I should highlight a viewpoint of mine:

I do not believe in a “world of two clouds”, where there are cloud IaaS offerings that are targeted at enterprise workloads, and there are cloud IaaS offerings that are targeted at cloud-native workloads — broadly, different clouds for applications designed with the assumption of infrastructure resilience, versus applications designed with the assumption that resilience must reside at the application layer.

Instead, I believe that the market leaders will offer a range of infrastructure resources. Some of those infrastructure resources will be more resilient, and will be more expensive. And customers will pay for the level of performance they receive. There’s no need to build two clouds; in fact, customers actively do not want two different clouds, since nobody really wants to shift between different clouds as you go through an application’s lifecycle, or for different tiers of an app, some of which might need greater infrastructure resilience and guaranteed performance.

I do not believe that application design patterns will change to be fully cloud-native over time. First, enterprises have hundreds if not thousands of existing legacy applications that they will need to host. Second, enterprises continue to write non-cloud-native apps, because the typical app is small — it’s some kind of business process app (I call these “paperwork” apps, usually online forms with some workflow and reporting), and it runs on a tiny VM and has few users. It’s neither cost-effective to spend the developer time to make these apps resilient, nor cost-effective to distribute them. Putting them on decently resilient infrastructure is less expensive. Some of these apps should more logically be written on a business process management suite or PaaS (BPMS or bpmPaaS), or on a more general PaaS; that underlying BPMS/PaaS should hopefully provide resilience functionally, but that won’t deal with the existing legacy apps, so there’ll continue to be a need for resilient infrastructure.

When people talk about infrastructure resilience, they’re generally referring to compute resilience in particular — essentially, trying to protect the application from the impact of potential server hardware failure. VMware pioneered two technologies in this space — they call them “HA” (fast detection of physical host failure and automatic restart of the VMs that were running on that host, on some other host) and “vMotion” (live migration of VMs from one physical host to another). However, all the other major hypervisors have now incorporated these features. There’s absolutely no reason why a cloud IaaS provider like AWS, which doesn’t currently support these capabilities, can’t add them, and charge a premium for these VMs.

When people talk about performance consistency, they’re generally referring to storage and network performance. (Most cloud IaaS providers do not oversubscribe either CPU or RAM resources.) Predictable storage performance is a very difficult engineering problem. Companies like SolidFire are offering all-SSD storage to help accomplish this (since it reduces the variability of seek times), and we’re seeing gradual uptake of this technology into cloud IaaS providers. AWS has done “provisioned iops” (PIOPS), allowing customers to buy into a more predictable range of storage performance. There’s no reason why providers wouldn’t offer this kind of predictability for both storage and network — especially when they can charge extra for it.
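As a concrete example of buying that predictability, here is a minimal sketch of provisioning an EBS volume with provisioned IOPS using the AWS SDK for Python; the size, IOPS level, and availability zone are illustrative:

```python
# Sketch: paying a premium for predictable storage performance on AWS by
# provisioning an io1 (provisioned-IOPS) EBS volume. Values are illustrative.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=100,             # GiB
    VolumeType="io1",     # provisioned-IOPS volume type
    Iops=2000,            # the provisioned performance level being purchased
)
print(volume["VolumeId"], volume["Iops"])
```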

Now, there are tons of service providers out there building for that world of two clouds — often rooted in the belief that IT operations will want one thing and developers another, and that they should build something totally different for each. This is almost certainly a losing strategy. Winning providers will satisfy both needs within a single cloud, offering architectural flexibility that lets developers decide whether they want to build for application resiliency or rely on infrastructure resiliency.

For more on this: I’ve covered this in detail in my research note, Market Trends: Public and Private Cloud Infrastructure Converge into On-Demand Infrastructure Fabrics (Gartner clients only).

Where are the challengers to AWS?

This is part 2 of my response to Bernard Golden’s recent CIO.com blog post, written in response to my announcement of Gartner’s 2013 Magic Quadrant for Cloud IaaS. (Part 1 was posted yesterday.)

Bernard: “What skill or insight has allowed AWS to create an offering so superior to others in the market?”

AWS takes a comprehensive view of “what does the customer need”, looks at what customers (whether current customers or future target customers) are struggling with, and tries to address those things. AWS not only takes customer feedback seriously, but it also iterates at shocking speed. And it has been willing to invest massively in engineering. AWS’s engineering organization and the structure of the services themselves allow multiple, parallel teams to work on different aspects of AWS with minimal dependencies on the other teams. AWS had a head start, and with every passing year its engineering lead has grown larger. (Even though AWS carries a significant burden of technical debt from having been first, it has also solved problems that competitors haven’t had to yet, due to its sheer scale.)

Many competitors haven’t had the willingness to invest the resources to compete, especially if they think of this business as one that’s primarily about getting a VM fast and that’s all. They’ve failed to understand that this is a software business, where feature velocity matters. You can sometimes manage to put together a brilliant, hyper-productive small team, but that usually gets you something that’s wonderful within the scope of what such a team can build, yet still missing the additional capabilities that better-resourced competitors can deliver (especially if a competitor can muster both resources and hyper-productivity). There are some awesome smaller companies in this space, though.

Bernard: “Plainly stated, why hasn’t a credible competitor emerged to challenge AWS?”

I think there’s a critical shift happening in the market right now. Three very dangerous competitors are just now entering the market — Microsoft, Google, and VMware. I think the real war for market share is just beginning.

For instance, consider the following off-the-cuff thoughts on those vendors. These are quick thoughts, by no means a complete or balanced analysis. I have a forthcoming research note called “Rise of the Cloud IaaS Mega-Vendors” that focuses on this shift in the competitive landscape, and which will profile AWS and these three vendors in particular, so stay tuned for more. So, that said:

Microsoft has brand, deep customer relationships, deep technology entrenchment, and a useful story about how all of those pieces are going to fit together, along with a huge army of engineers, a ton of money, and the willingness to spend wherever spending gains it a competitive advantage; its weaknesses are Microsoft’s broader issues as well as the Microsoft-centricity of its story (which is also its strength, of course). Microsoft is likely to expand the market, attracting new customers and use cases to IaaS — including blended PaaS models.

Google has brand, an outstanding engineering team, and unrivaled expertise at operating at scale; its weakness is Google’s usual challenges with traditional businesses (whatever you can say about AWS’s historical struggle with the enterprise, you can say about Google many times over, and it will probably take them at least as long as AWS did to work through that). Google’s share gain will mostly come at the expense of AWS’s base of HPC customers and young start-ups, but it will worm its way into the enterprise via interactive agencies that use its cloud platform; it should have a strong blended PaaS model.

VMware has brand, a strong relationship with IT operations folks, technology it can build on, and a hybrid cloud story to tell; whether or not its enterprise-class technology can scale to global-class clouds remains to be seen, though, along with whether or not it can get its traditional customer base to drive sufficient volume of cloud IaaS. It might expand the market, but it’s likely that much of its share gain will come at the expense of VMware-based “enterprise-class” service providers.

Obviously, it will take these providers some time to build share, and there are other market players who will be involved, including the other providers that are in the market today (and for all of you wondering “what about OpenStack”, I would classify that under the fates of the individual providers who use it). However, if I were to place my bets, it would be on those four at the top of market share, five years from now. They know that this is a software business. They know that innovative capabilities are vitally necessary. And they know that this has turned into a market fixated on developer productivity and business benefits. At least for now, that view is dominating the actual spending in this market.

You can certainly argue that another market outcome should have happened, that users should have chosen differently, or even that users are making poor decisions now that they’ll regret later. That’s an interesting intellectual debate, but at this point, Sisyphus’s rock is rolling rapidly downhill, so anyone who wants to push it back up is going to have an awfully difficult time not getting crushed.

Cloud IaaS market share and the developer-centric world

Bernard Golden recently wrote a CIO.com blog post in response to my announcement of Gartner’s 2013 Magic Quadrant for Cloud IaaS. He raised a number of good questions that I thought it would be useful to address. This is part 1 of my response. (See part 2 for more.)

(Broadly, as a matter of Gartner policy, analysts do not debate Magic Quadrant results in public, and so I will note here that I’m talking about the market, and not the MQ itself.)

Bernard: “Why is there such a distance between AWS’s offering and everyone else’s?”

In the Magic Quadrant, we rate not only the offering itself in its current state, but also a whole host of other criteria — the roadmap, the vendor’s track record, marketing, sales, etc. (You can go check out the MQ document itself for those details.) You should read the AWS dot positioning as not just indicating a good offering, but also that AWS has generally built itself into a market juggernaut. (Of course, AWS is still far from perfect, and depending on your needs, other providers might be a better fit.)

But Bernard’s question can be rephrased as, “Why does AWS have so much greater market share than everyone else?”

Two years ago, I wrote two blog posts that are particularly relevant here:

These posts were followed up with two research notes (links are Gartner clients only):

I have been beating the “please don’t have contempt for developers” drum for a while now. (I phrase it as “contempt” because it was often very clear that developers were seen as lesser, not real buyers doing real things — merely ignoring developers would have been one thing, but contempt is another.) But it’s taken until this past year before most of the “enterprise class” vendors acknowledged the legitimacy of the power that developers now hold.

Many service providers held tight to the view espoused by their traditional IT operations clientele: AWS was too dangerous, it didn’t have sufficient infrastructure availability, it didn’t perform sufficiently well or with sufficient consistency, it didn’t have enough security, it didn’t have enough manageability, it didn’t have enough governance, it wasn’t based on VMware — and it didn’t look very much like an enterprise’s data center architecture. The viewpoint was that IT operations would continue to control purchases, implementations would be relatively small-scale and would be built on traditional enterprise technologies, and that AWS would never get to the point that they’d satisfy traditional IT operations folks.

What they didn’t count on was the fact that developers, and the business management that they ultimately serve, were going to forge on ahead without them. Or that AWS would steadily improve its service and the way it did business, in order to meet the needs of the traditional enterprise. (My colleagues in GTP — the Gartner division that was Burton Group — do a yearly evaluation of AWS’s suitability for the enterprise, and each year, AWS gets steadily, materially better. Clients: see the latest.)

Today, AWS’s sheer market share speaks for itself. And it is definitely not just single developers with a VM or two, start-ups, or non-mission-critical stuff. Through the incredible amount of inquiry we take at Gartner, we know how cloud IaaS buyers think, source, succeed, and sometimes suffer. And every day at Gartner, we talk to multiple AWS customers (or prospects considering their options, though many have already bought something on the click-through agreement). Most are traditional enterprises of the G2000 variety (including some of the largest companies in the world), but over the last year, AWS has finally cracked the mid-market by working with systems integrator partners. The projected spend levels are clearly increasing dramatically, the use cases are extremely broad, the workloads increasingly have sensitive data and regulatory compliance concerns, and customers are increasingly thinking of AWS as a strategic vendor.

(Now, as my colleagues who cover the traditional data center like to point out, the spend levels are still trivial compared to what these customers are spending on the rest of their data center IT, but I think what’s critical here is the shift in thinking about where they’ll put their money in the future, and their desire to pick a strategic vendor despite how relatively early-stage the market is.)

But put another way — it is not just that AWS advanced its offering, but that it convinced the market that this is what they wanted to buy (or at least that it was a better option than the other offerings), despite the sometimes strange offering constructs. They essentially created demand in a new type of buyer — and they effectively defined the category. And because they’re almost always first to market with a feature — or the first to make the market broadly aware of that capability — they force nearly all of their competitors into playing catch-up and me-too.

That doesn’t mean that the IT operations buyer isn’t important, or that there aren’t an array of needs that AWS does not address well. But the vast majority of the dollars spent on cloud IaaS are much more heavily influenced by developer desires than by IT operations concerns — and that means that market share currently favors the providers who appeal to development organizations. That’s an ongoing secular trend — business leaders are currently heavily growth-focused, and therefore demanding lots of applications delivered as quickly as possible, and are willing to spend money and take greater risks in order to obtain greater agility.

This also doesn’t mean that the non-developer-centric service providers aren’t important. Most of them have woken up to the new sourcing pattern, and are trying to respond. But many of them are also older, established organizations, and they can only move so quickly. They also have the comfort of their existing revenue streams, which allow them the luxury of not needing to move so quickly. Many have been able to treat cloud IaaS as an extension of their managed services business. But they’re now facing the threat of systems integrators like Cognizant and Capgemini entering this space, combining application development and application management with managed services on a strategic cloud IaaS provider’s platform — at the moment, normally AWS. Nothing is safe from the broader market shift towards cloud computing.

As always, every individual customer’s situation is different from another’s, and the right thing to do (or the safe, mainstream thing to do) evolves through the years. Gartner is appropriately cautionary when it discusses such things with clients. This is a good time to mention that Magic Quadrant placement is NEVER a good reason to include or exclude a vendor from a short list. You need to choose the vendor that’s right for your use case, and that might be a Niche Player, or even a vendor that’s not on the MQ at all — and even though AWS has the highest overall placement, they might be completely unsuited to your use case.

The 2013 Cloud IaaS Magic Quadrant

Gartner’s Magic Quadrant for Cloud Infrastructure as a Service, 2013, has just been released (see the client-only interactive version, or the free reprint). Gartner clients can also consult the related charts, which summarize the offerings, features, and data center locations.

We’re now updating this Magic Quadrant on a nine-month basis, and quite a bit has changed since the 2012 update (see the client-only 2012, or the free 2012 reprint).

In particular, market momentum has strongly favored Amazon Web Services. Many organizations have now had projects on AWS for several years, even if they hadn’t considered themselves to have “done anything serious” on AWS. Thus, as those organizations get serious about cloud computing, AWS is their incumbent provider — there are relatively few truly greenfield opportunities in cloud IaaS now. Many Gartner clients now actually have multiple incumbent providers (the most common combination is AWS and Terremark), but nearly all such customers tell us that the balance of new projects are going to AWS, not the other providers.

Little by little, AWS has systematically addressed the barriers to “mainstream” enterprise adoption. While it’s still far from everything that it could be, and it has some specific and significant weaknesses, that steady improvement over the last couple of years has brought it to the “good enough” point. While we saw much stronger momentum for AWS than other providers in 2012, 2013 has really been a tipping point. We still hear plenty of interest in competitors, but AWS is overwhelmingly the dominant vendor.

At the same time, many vendors have developed relatively solid core offerings. That means the number of differentiators in the market has decreased, as many features have become common “table stakes” that everyone has. Most offerings from major vendors are now fairly decent, but only a few really stand out for their capabilities.

That leads to an unusual Magic Quadrant, in which the relative strength of AWS in both Vision and Execution essentially forces the whole quadrant graphic to rescale. (To build an MQ, analysts score providers relative to each other, on all of the formal evaluation criteria, and the MQ tool automatically plots the graphic; there is no manual adjustment of placements.) That leaves you with centralized compression of all of the other vendors, with AWS hanging out in the upper right-hand corner.

Note that a Magic Quadrant is an evaluation of a vendor in the market; the actual offering itself is only a portion of the overall score. I’ll be publishing a Critical Capabilities research note in the near future that evaluates one specific public cloud IaaS offering from each of these vendors against its suitability for a set of specific use cases. My colleagues Kyle Hilgendorf and Chris Gaun have also been publishing extremely detailed technical evaluations of individual offerings — AWS, Rackspace, and Azure, so far.

A Magic Quadrant is a tremendous amount of work — for the vendors as well as for the analyst team (and our extended community of peers within Gartner, who review and comment on our findings). Thanks to everyone involved. I know this year’s placements came as disappointments to many vendors, despite the tremendous hard work that they put into their offerings and business in this past year, but I think the new MQ iteration reflects the cold reality of a market that is highly competitive and is becoming even more so.

Instart Logic launches a new kind of acceleration service

There have been three core techniques for accelerating content and application delivery over the Internet — caching (“classic” CDN), network optimization (think protocol tricks, like F5 Web Application Accelerator on the hardware side, or Akamai DSA on the service side), and front-end optimization (FEO, think content re-write, like Aptimize/Riverbed or Strangeloop/Radware on the software side, or Blaze.io/Akamai or Acceloweb/Limelight on the service side).

Now, with the launch of Instart Logic, there’s a fourth technique that I don’t yet have a name for. In spirit, it’s probably most similar to a SoftWOC, but in this case, the client endpoint is the browser, and the symmetric remote endpoint is the CDN server. The techniques are also different from typical SoftWOC techniques, as far as I know.

From the perspective of an Instart Logic customer, they’re getting a dynamic acceleration service that, from a deployment perspective, is much like a CDN. For most customers, it would entirely replace using a traditional CDN (rather than being additive) — i.e., they would buy this instead of buying Akamai DSA or a similar service. Note that this is a performance play, not a price play — Instart Logic expects that they’ll be in the ballpark of typical dynamic acceleration pricing, and that performance carries a market premium.

The techniques used in the service are intended to dramatically improve load times, especially on congested networks; this is particularly useful in mobile, but it is not mobile-specific. As with FEO, the goal is to allow the end-user to quickly see and interact with the content while the remainder of the page is still downloading.

On the client side, there’s what they call a “NanoVisor” — an HTML5-based thin virtualization layer that runs in the browser. If Instart Logic is full-proxying the customer’s site, the NanoVisor code can simply be injected; otherwise the customer can insert the code into their site. It requires no other changes to the customer’s site. The NanoVisor provides intelligence about the end-user and serves as the client endpoint for the optimization.
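As a rough sketch of the injection pattern (this is not Instart Logic’s actual code or tag; the script URL and rewrite logic are invented for illustration), a full proxy could rewrite HTML responses to add such a client-side layer roughly like this:

```python
# Illustrative only: shows the general "inject a thin client-side layer" pattern,
# not Instart Logic's actual implementation. The script URL is a placeholder.
def inject_client_script(html: str,
                         script_url: str = "https://cdn.example.com/client-layer.js") -> str:
    """Insert a script tag right after <head> so the layer loads before page content."""
    tag = f'<script src="{script_url}" async></script>'
    # Naive string rewrite for illustration; a real proxy would rewrite a streaming
    # HTML parse rather than buffering and string-replacing the whole page.
    return html.replace("<head>", "<head>" + tag, 1)


page = "<html><head><title>Shop</title></head><body>...</body></html>"
print(inject_client_script(page))
```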

On the server side, the “AppSequencer” analyzes page content, and it fragments and orders objects that are then streamed to the NanoVisor. It does large-scale analysis of usage patterns, and it predictively sends things based on the responses that it has seen before. There are compression and network optimization techniques, as well as implicit caching.

Like other recent innovators in the CDN space, Instart Logic is predominantly a software company. While they do have servers of their own, they are also using a variety of cloud IaaS providers for capacity. They’re also using Dyn for DNS.

Instart Logic has raised a significant amount of money, almost purely from top-tier VCs — $26 million to date. I think their technology is very promising, which probably means they’ll get a bit of time to prove themselves out and then they’ll get bought by one of the CDNs looking to get an edge on the competition, or maybe even an ADC or WOC vendor.

Instart Logic’s demos are impressive, and they’ve got paying customers at this point, although obviously they’re newly-launched. While it always takes time to build trust in this industry, at this point they’re worth checking out, and I’ve been referring Gartner clients to them ever since I was briefed by them while they were still in stealth mode, a few months back. They’re potentially an excellent fit for customers who are looking for something beyond what DSA-style network optimization offerings can do, but either do not want to do FEO, have reached the limits of what FEO can offer them, or simply want to explore alternatives.

IBM buys SoftLayer

It’s been a hot couple of weeks in the cloud infrastructure as a service space. Microsoft’s Azure IaaS (persistent VMs) came out of beta, Google Compute Engine went into public beta, VMware formally launched its public cloud (vCloud Hybrid Service), and Dell withdrew from the market. Now, IBM is acquiring SoftLayer, with a deal size in the $2B range, around a 4x-5x multiple — roughly the multiple that Rackspace trades at, with RAX no doubt used as a comp despite the vast differences in the two companies’ business models.

SoftLayer is the largest provider of dedicated hosting on the planet, although they do also have cloud IaaS offerings; they sell direct, but also have a huge reseller channel, and they’re often the underlying provider to many mass-market shared hosting providers. Like other dedicated hosters, they are very SMB-centric — tons of dedicated hosting customers are folks with just a server or two. But they also have a sizable number of customers with scale-out businesses to whom “bare metal” (non-virtualized dedicated servers), provisioned flexibly on demand (figure it typically takes 1 to 4 hours to provision bare metal), is very attractive.

Why bare metal? Because virtualization is great for server consolidation (“I used to have 10 workloads on 10 servers that were barely utilized, and now I have one heavily utilized server running all 10 workloads!”), but it’s often viewed as unnecessary overhead when you’re dealing with an environment where all the servers are running nearly fully utilized, as is common in scale-out, Web-scale environments.

SoftLayer’s secret sauce is its automation platform, which handles virtualized and non-virtualized servers with largely equal ease. One of their value propositions has been to bring the kinds of things you expect from cloud VMs, to bare metal — paid by the hour, fully-automated provisioning, API as well as GUI, provisioning from image, and so forth. So the value proposition is often “get the performance of bare metal, in exactly the configuration you want, with the advantages and security of single-tenancy, without giving up the advantages of the cloud”. And, of course, if you want virtualization, you can do that — or SoftLayer will be happy to sell you VMs in their cloud.
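To illustrate the “API as well as GUI” point, here is a minimal sketch, assuming the SoftLayer Python client, that lists the bare-metal servers on an account over the API; the credentials are placeholders, and ordering new bare metal involves a fuller product-order call that is omitted here:

```python
# Minimal sketch assuming the SoftLayer Python client ("pip install SoftLayer").
# Credentials are placeholders; ordering new bare-metal servers requires a more
# elaborate product-order call that is not shown here.
import SoftLayer

client = SoftLayer.create_client_from_env(username="SL_USERNAME", api_key="SL_API_KEY")

# List the bare-metal (hardware) servers already on the account via the API --
# the same inventory the portal GUI shows.
for server in client["Account"].getHardware():
    print(server.get("id"), server.get("hostname"))
```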

SoftLayer also has a nice array of other value-adds that you can self-provision, including being able to click to provision cloud management platforms (CMPs) like CloudStack / Citrix CloudPlatform, and hosting automation platforms like Parallels. Notably, though, they are focused on self-service. Although SoftLayer acquired a small managed hosting business when it merged with ThePlanet, its customer base is nearly exclusively self-managed. (That makes them very different than Rackspace.)

In terms of the competition, SoftLayer’s closest philosophical alignment is Amazon Web Services — don’t offer managed services, but instead build successive layers of automation going up the stack, that eliminate the need for traditional managed services as much as possible. They have a considerably narrower portfolio than AWS, of course, but AWS does not offer bare metal, which is the key attractor for SoftLayer’s customers.

So why does IBM want these guys? Well, they do fill a gap in the IBM portfolio — IBM has historically not served the SMB market directly in general, and its developer-centric SmartCloud Enterprise (SCE) has gotten relatively weak traction (seeming to do best where the IBM brand is important, notably Europe). That weak traction can be blamed on SCE’s limited feature set and the significant downtime associated with maintenance windows, more so than on the go-to-market (although that has also been problematic). I’ll be interested to see what happens to the SCE roadmap in light of the acquisition. (Also, IBM’s SCE+ offering — essentially a lightweight data center outsourcing / managed hosting offering, delivered on cloud-enabled infrastructure — uses a totally different platform, which they’ll need to converge at some point in time.)

Beyond the “public cloud”, though, SoftLayer’s technology and service philosophy are useful to IBM as a platform strategy, and potentially as bits of software and best practices to embed in other IBM products and services. SoftLayer’s anti-managed-services philosophy isn’t dissonant with IBM’s broader outsourcing strategy as it relates to the cloud. Every IT outsourcer at any kind of reasonable scale actually wants to eliminate managed services where they can, because at this point, it’s hard to get any cheaper labor — the Indian outsourcers have already wrung that dry, and every IT outsourcer today offshores as much of their labor as possible. So your only way to continue to get costs down is to eliminate the people. If you can, through superior technology, eliminate people, then you are in a better competitive position — not just for cost, but also for consistency and quality of service delivery.

I don’t think this was a “must buy” for IBM, but it should be a reasonable acceleration of their cloud plans, assuming that they manage to retain the brains at SoftLayer, and can manage what has been an agility-focused, technology-driven business with a very different customer base, go-to-market approach, and culture than the traditional IBM base. SoftLayer can certainly use more money for engineering resources (although IBM’s level of engineering commitment to cloud IaaS has been relatively lackluster given its strategic importance), marketing, and sales, and larger customers that might otherwise have been hesitant to use SoftLayer may be swayed by the IBM brand.

(And it’s a nice exit for GI Partners, at the end of a long road in which they wrapped up EV1 Servers, ThePlanet, and SoftLayer… then pursued an IPO route during a terrible time for IPOs… and finally get to sell the resulting entity for a decent valuation.)

VMware joins the cloud wars with vCloud Hybrid Service

Although this has been long-rumored, and then was formally mentioned in VMware’s recent investor day, VMware has only just formally announced the vCloud Hybrid Service (vCHS), which is VMware’s foray into the public cloud IaaS market.

VMware has previously had a strategy of being an arms dealer to service providers who wanted to offer cloud IaaS. In addition to the substantial ecosystem of providers who use VMware virtualization as part of various types of IT outsourcing offerings, VMware also signed up a lot of vCloud Powered partners, each of which offered what was essentially vCloud Director (vCD) as a service. It also certified a number of the larger providers as vCloud Datacenter Service Providers; each such provider needed to meet criteria for reliability, security, interoperability, and so forth. In theory, this was a sound channel strategy. In practice, it didn’t work.

Of the certified providers, only CSC has managed to get substantial market share, with Bluelock trailing substantially; the others haven’t gotten much in the way of traction, Dell has now dropped their offering entirely, and neither Verizon nor Terremark ended up launching the service. Otherwise, VMware’s most successful service providers — providers like Terremark, Savvis, Dimension Data, and Virtustream — have been the ones who chose to use VMware’s hypervisor but not its cloud management platform (in the form of vCD).

Indeed, those successful service providers (let’s call them the clueful enterprise-centric providers) are the ones that have built the most IP themselves — and not only are they resistant to buying into vCD, but they are increasingly becoming hypervisor-neutral. Even CSC, which has staunchly remained on VMware running on VCE Vblocks, has steadily reduced its reliance on vCD, bringing in a new portal, service catalog, orchestration engine, and so forth. Similarly, Tier 3 has vCD under the covers, but never so much as exposed the vCD portal to customers. (I think the industry has come to a broad consensus that vCD is too complex of a portal for nearly all customers. Everyone successful, even VMware themselves with vCHS, is front-ending their service with a more user-friendly portal, even if customers who want it can request to use vCD instead.)

In other words, even while VMware remains a critical partner for many of its service providers, those providers are diversifying their technology away from VMware — their success will be, over time, less and less VMware’s success, especially if they’re primarily paying for hypervisor licenses, and not the rest of VMware’s IT operations management (ITOM) tools ecosystem. The vCloud Powered providers that are basically putting out vanilla vCD as a service aren’t getting significant traction in the market — not only can they not compete with Amazon, but they can’t compete against the clueful enterprise-centric providers. That means that VMware can’t count on them as a significant revenue stream in the future. And meanwhile, VMware has finally gotten the wake-up call that Amazon’s (and AWS imitators’) increasing claim on “shadow IT” is a real threat to VMware’s future, not only in the external cloud, but also in internal data centers.

That brings us to today’s reality: VMware is entering the public cloud IaaS market themselves, with an offering intended to compete head-to-head with its partners as well as Amazon and the whole constellation of providers that don’t use VMware in their infrastructure.

VMware’s thinking has clearly changed over the time period that they’ve spent developing this solution. What started out as a vanilla vCD solution intended to enable channel partners who wanted to deliver managed services on top of a quality VMware offering has morphed into a differentiated offering that VMware will take to market directly as well as through their channel — including taking credit cards on a click-through sign-up for by-the-hour VMs, although the initial launch is a monthly resource-pool model. Furthermore, their benchmark for price-competitiveness is Amazon, not the vCloud providers. (Their hardware choices reflect this, too, including the choice to use EMC software but to go with a scale-out architecture and commodity hardware across the board, rather than much more expensive and much less scalable Vblocks.)

Fundamentally, there is virtually no reason for providers who sell vanilla vCD without any value-adds to continue to exist. VMware’s vCHS will, out of the gate, be better than what those providers offer, especially with regard to interoperability with internal VMware deployments — VMware’s key advantage in this market. Even someone like a Bluelock, who’s done a particularly nice implementation and has a few value-adds, will be tremendously challenged in this new world. The clueful providers who happen to use VMware’s hypervisor technology (or even vCD under the covers) will continue on their way just fine — they already have differentiators built into their service, and they are already well on the path to developing and owning their own IP and working opportunistically with best-of-breed suppliers of capabilities.

(There will, of course, continue to be a role for vCloud Powered providers who really just use the platform as cloud-enabled infrastructure — i.e., providers who are mostly going to deliver managed services of one sort or another on top of that deployment. Arguably, however, some of those providers may be better served, over the long run, by offering those managed services on top of vCHS instead.)

No one should underestimate the power of brand in the cloud IaaS market, particularly since VMware is coming to market with something real. VMware has a rich suite of ITOM capabilities that it can begin to build into an offering. It also has CloudFoundry, which it will integrate, and would logically be as synergistic with this offering as any other IaaS/PaaS integration (much as Microsoft believes Azure PaaS and IaaS elements are synergistic).

I believe that to be a leader in cloud IaaS, you have to develop your own software and IP. As a cloud IaaS provider, you cannot wait for a vendor to do their next big release 12-18 months from now and then take another 6-12 months to integrate it and upgrade to it — you’ll be a fatal 24 months behind a fast-moving market if you do that. VMware’s clueful service providers have long since come to this realization, which is why they’ve moved away from a complete dependence on VMware. Now VMware itself has to ensure that their cloud IaaS offering has a release tempo that is far faster than the software they deliver to enterprises. That, I think, will be good for VMware as a whole, but it will also be a challenge for them going forward.

VMware can be successful in this market, if they really have the wholehearted will to compete. Yes, their traditional buying center is the deeply untrendy and much-maligned IT Operations admin, but if anyone would be the default choice for that population (which still controls about a third of the budget for cloud services), it’s VMware — and VMware is playing right into that story with its emphasis on easy movement of workloads across VMware-based infrastructures, which is the story that these guys have been wanting to hear all along and have been waiting for someone to actually deliver.

Hello, vCHS! Good-bye, vCloud Powered?

Dell withdraws from the public cloud IaaS market

Today, not long after its recent acquisition of Enstratius, Dell announced a withdrawal from the public cloud IaaS market. This removes Dell’s current VMware-based, vCloud Datacenter Service from the market; furthermore, Dell will not launch an OpenStack-based public cloud IaaS offering later this year, as it had originally intended to do. This does not affect Dell’s continued involvement with OpenStack as a CMP for private clouds.

It’s not especially surprising that Dell decided to discontinue its vCloud service, which has gotten highly limited traction in the market, and was expensive even compared to other vCloud offerings — given its intent to launch a different offering, the writing was mostly on the wall already. What’s more surprising is that Dell has decided to focus upon an Enstratius-enabled cloud services broker (CSB) role, when its two key competitors — HP and IBM — are trying to control an entire technology stack that spans hardware, software, and services.

It is clear that it takes significant resources and substantial engineering talent — specifically, software engineering talent — to be truly competitive in the cloud IaaS market, sufficiently so to move the needle of a company as large as Dell. I do not believe that cloud IaaS is, or will become, a commodity; I believe that the providers will, for many years to come, compete to offer the most capable and feature-rich offerings to their customers.

Infrastructure, of course, still needs to be managed. IT operations management (ITOM) tools — whether ITIL-ish as in the current market, or DevOps-ish as in the emerging market — will remain necessary. All the capabilities that make it easy to plan, deploy, monitor, manage, and so forth are still necessary, although you may do these things differently in the cloud than on-premises. Such capabilities can either be built into the IaaS offerings themselves — perhaps with bundled pricing, perhaps as value-added services, but certainly as where much of the margin will be made and where providers will differentiate — or they can come from third-party multi-cloud management vendors who are able to overlay those capabilities on top of other people’s clouds.

Dell’s strategy essentially bets on the latter scenario — that Enstratius’s capabilities can be extended into a full management suite that’s multi-cloud, allowing Dell to focus all of its resources on developing the higher-level functionality without dealing with the lower-level bits. Arguably, even if the first scenario ends up being the way the market goes (I favor the former scenario over the latter one, at present), there will still be a market for cloud-agnostic management tools. And if it turns out that Dell has made the wrong bet, they can either launch a new offering, or they may be able to buy a successful IaaS provider later down the line (although given the behemoths that want to rule this space, this isn’t as likely).

From my perspective, as strategies go, it’s a sensible one. Management is going to be where the money really is — it won’t be in providing the infrastructure resources. (In my view, cloud IaaS providers will eventually make thin margins on the resources in order to get the value-adds, which are basically ITOM SaaS, plus most if not all will extend up into PaaS.) By going for a pure management play, with a cloud-native vendor, Dell gets to avoid the legacy of BMC, CA, HP, IBM/Tivoli, and its own Quest, and their struggles to make the shift to managing cloud infrastructure. It’s a relatively conservative wait-and-see play that depends on the assumption that the market will not mature suddenly (beware the S-curve), and that elephants won’t dance.

If Dell really wants to be serious about this market, though, it should start scooping up every other vendor that’s becoming significant in the public cloud management space and has complementary offerings (everyone from New Relic to Opscode, etc.), building itself into an ITOM vendor that can comprehensively address cloud management challenges.

And, of course, Dell is going to need a partner ecosystem of credible, market-leading IaaS offerings. Enstratius already has those partners — now they need to become part of the Dell solutions portfolio.

Recommended reading for Cloud IaaS Magic Quadrant

If you’re a service provider interested in participating in the Cloud IaaS Magic Quadrant process (see the call for vendors), I’d like to recommend a number of my previous blog posts.

Foundational Gartner research notes on cloud IaaS. Recommended reading to understand our thinking on the market.

Having cloud-enabled technology != Having a cloud. Critical for understanding what we do and don’t consider cloud IaaS to be.

AR contacts for a Magic Quadrant should read everything. An explanation of why it’s critical to read every word of every communication received during the MQ process.

The process of a Magic Quadrant. Understanding a little bit about how MQs get put together.

Vendors, Magic Quadrants, and client status. Appropriate use of communications channels during the MQ process.

General tips for Magic Quadrant briefings and Specific tips for Magic Quadrant briefings. Information on how to conduct an effective and concise Magic Quadrant briefing.

The art of the customer reference. Tips on how to choose reference customers.
