Blog Archives
My 2016 research agenda
At the beginning of February, as part of a little bit of organizational deck-chair shuffling here at Gartner, some analysts were transferred to teams that better reflect their current research focus. I am one of those people; I’ve been transferred from our Technology and Service Provider (T&SP) division to our IT Leaders division’s Infrastructure Strategies team, reflecting the fact that I’ve mostly been serving the IT Leaders constituency for several years now.
My planned research agenda for 2016 remains the same, and I’ll continue to work with the exact same people and clients, but I now have a new team manager (Rakesh Kumar), and hopefully a better alignment between the things that I’m actually doing and the way Gartner sets goals and incentives for analysts.
I will continue to write research for both our end-user (IT Leaders) clients and our T&SP (Business Leaders) clients. Vendor clients should note that my transfer does not change access privileges to my documents targeted at vendors, or inquiry access. You will still need to hold a Business Leaders seat to read T&SP content that I author, or to ask inquiry questions related to that content.
My research agenda for 2016 centers on five major themes:
- The managed and professional services ecosystem for cloud providers
- Large-scale migration to the cloud
- Excellence in governance
- The convergence of IaaS and PaaS
- The adoption of containers
I’ll talk briefly about my interests in each of these spaces.
Managed and professional services
The MSP ecosystem, especially around AWS and Azure, is a vital part of driving successful cloud adoption, particularly among late-majority adopters. This is my primary research focus this year, as I build out research for end users (IT Leaders clients) as well as for service providers and technology vendors. This is largely a run-up to a new Magic Quadrant slated for Q4 2016 publication.
Large-scale migration to the cloud
Customers have begun to migrate existing workloads to cloud IaaS at scale (i.e., migrating entire data centers or substantial portions of their existing infrastructure estate). This is no longer just early pioneers, but mainstream, non-tech-centric companies, often in the mid-market, usually with the assistance of third parties. These customers typically articulate needs that have been more commonly associated with outsourcing in the past — cost-optimization, better management of infrastructure and apps, staff reduction, keeping pace with technology evolution, and the like.
This is my next greatest priority in terms of writing this year. I’m building research aimed at clients who are evaluating migrations, as well as those who are in the process of migrating. I’m also writing research that is aimed at the vendor/provider side, since this new wave of customers has different requirements, necessitating both a different go-to-market approach as well as different sets of priorities in service features.
Excellence in governance
As organizations grow their use of the cloud, governance becomes vital. Governance is not control, per se, and IT organizations need assistance in understanding the emerging best practices around governance. The vendor ecosystem also needs to build appropriate products and services that help IT organizations implement good cloud governance — which differs in some vital ways from traditional models of IT governance.
The convergence of IaaS and PaaS
As the IaaS and PaaS markets move ever closer together, customers need guidance as to how to best adopt integrated offerings. Moreover, this has implications for providers on both sides of the spectrum (and their ecosystems), as well as vendors who sell technology to build private clouds.
Containers
Last year, I spent just about as much time talking to clients about containers as I did about cloud IaaS — that’s how much of a hot topic it was. This year, as container coverage is dispersed across a lot more analysts, it’s become less of a focus for me, but I remain deeply interested in the evolution of the ecosystem, not just in the cloud, but also within traditional data centers.
Other Work
While these topics are my key broad themes, I’ll certainly be thinking about and writing about a great deal more. I tend to be a more spontaneous writer, and so I might sit down in an afternoon and rapidly write a note about something that’s been on my mind lately, even if it’s not part of my planned research. Indeed, I tend not to plan research except to the minimum extent that Gartner requires (generally “big rock” items like Magic Quadrants and so forth), so if you have feedback on what you’d like to see me produce, please feel free to let me know.
Introducing the new Hyperscale Cloud MSP Magic Quadrant
As has been noted in Doug Toombs’s blog post (“Important Updates for Gartner’s Hosting Magic Quadrants in 2016”), I will be leading the introduction of a new Gartner Magic Quadrant this year for managed service providers (MSPs) that deliver services on hyperscale cloud providers (specifically, Amazon Web Services, Microsoft Azure, or Google Cloud Platform).
This new global Magic Quadrant will be titled the “Magic Quadrant for Public Cloud Infrastructure Managed Service Providers”, and it is slated for early Q4 2016 publication (watch the editorial calendar for an official date). It will have accompanying Critical Capabilities that will be specific to each hyperscale cloud provider. In 2016, this will be a “Critical Capabilities for Managed Service Providers for Amazon Web Services”; in the future, as their ecosystems mature, we expect there will be a CC each for MSPs for Microsoft Azure and Google Cloud Platform as well.
Stay tuned for more. In the meantime, I’ve begun building a body of related research (sorry, links are behind client-only paywall):
How to Choose a Managed Service Provider for a Hyperscale Cloud Provider. Hyperscale integrated IaaS and PaaS providers are not mere purveyors of rented virtualization. MSPs need to have a very specific skillset to manage them well. This is the fundamental “what makes a good hyperscale cloud MSP” note.
Best Practices for Planning a Cloud IaaS Strategy: Bimodal IT, Not Hybrid Infrastructure. We advise customers to think differently about cloud IaaS based on their priorities — safety and efficiency-driven IT, vs. speed and agility-driven IT. This tends to lead to different styles of operations, which in turn drive different managed and professional services needs.
Three Journeys Define Migrating a Data Center to Cloud Infrastructure as a Service. An increasing number of customers are migrating existing applications and even entire data centers into cloud IaaS. This sets out those journeys, and explores the managed and professional services that are useful for those journeys.
Use Managed and Professional Services to Improve Cloud Operations for Digital Business. Mode 2 and digital business applications are often architected and operated in ways that are not broadly familiar to many IT organizations. We explore different styles of adopting managed and professional services for these needs.
Market Guide for Managed Service Providers on Amazon Web Services. Our introduction to the AWS MSP market explores use cases, classifies MSPs into categories, and profiles a handful of representative MSPs.
Market Trends: Channel Sales Strategies for Cloud IaaS Should Focus on Developer Ecosystems. We provide advice to cloud IaaS providers who are trying to build ecosystems and channel sales strategies — but MSPs will find this note valuable when trying to understand what their value is to their partner cloud provider.
I’ll soon be publishing a set of notes directed at MSPs who are currently in this market or intend to enter it, as well. And I’ll be doing a series of blog posts about what’s ahead.
Foundational Gartner research notes for cloud IaaS and managed hosting, 2014
With the refresh of the Magic Quadrant for Cloud IaaS, and the evolution of the regional Magic Quadrants for Managed Hosting into Magic Quadrants for Cloud-Enabled Managed Hosting, I am following my annual tradition of highlighting research that I and others have published that’s important in the context of these MQs. These notes lay out how we see the market, and consequently, the lens through which we’re going to evaluate the service providers.
As always, I want to stress that service providers do not need to agree with our perspective in order to rate well. We admire those who march to their own particular beat, as long as it results in true differentiation and more importantly, customer wins and happy customers — a different perspective can allow a service provider to serve their particular segments of the market more effectively. However, such providers need to be able to clearly articulate that vision and to back it up with data that supports their world-view.
This updates a previous list of foundational research. Please note that those older notes still remain relevant, and you are encouraged to read them. You might also be interested in a previous research round-up (clients only).
If you are a service provider, these are the research notes you’ll find it helpful to be familiar with (sorry, links are behind the client-only paywall):
Magic Quadrant for Cloud IaaS, 2013. Last year’s Magic Quadrant is full of deep-dive information about the market and the providers. Also check out the Critical Capabilities for Public Cloud IaaS, 2013 for a deeper dive into specific public cloud IaaS offerings (Critical Capabilities is almost solely focused on feature set for particular use cases, whereas a Magic Quadrant positions a vendor in a market as a whole).
Magic Quadrant for Managed Hosting, North America and Magic Quadrant for European Managed Hosting. Last year’s managed hosting Magic Quadrants are likely the last MQs we’ll publish for traditional managed hosting. They still make interesting reading even though these MQs are evolving this year.
Pricing and Buyer’s Guide for Web Hosting and Cloud Infrastructure, 2013. Our market definitions are described here.
Evaluation Criteria for Public Cloud IaaS Providers. Our Technical Professionals research provides extremely detailed criteria for large enterprises that are evaluating providers. While the customer requirements are somewhat different in other segments, like the mid-market, these criteria should give you an extremely strong idea of the kinds of things that we think are important to customers. The cloud IaaS MQ evaluation criteria are not identical (because it is broader than just large-enterprise), but they are very similar — we do coordinate our research.
Technology Overview for Cloud-Enabled System Infrastructure. If you’re wondering what cloud-enabled system infrastructure (CESI) is, this will explain it to you. Cloud-enabled managed hosting is the combination of a CESI with managed services, so it’s important to understand.
Don’t Be Fooled By Offerings Falsely Masquerading as Cloud IaaS. This note was written for our end-user clients, to help them sort out an increasingly “cloudwashed” service provider landscape. It’s very important for understanding what constitutes a cloud service and why the technical and business benefits of “cloud” matter.
Service Providers Must Understand the Real Needs of Prospective Customers of Cloud IaaS. Customers are often confused about what they want to buy when they claim to want “cloud”. This provides structured guidance for figuring this out, and it’s important for understanding service provider value propositions.
How Customers Purchase Cloud IaaS, 2012. A lifecycle exploration of how customers adopt and expand their use of cloud IaaS. Important for understanding our perspective on sales and marketing. (It’s dated 2012, but it’s actually a 2013 note, and still fully current.)
Market Trends: Managed Cloud Infrastructure, 2013. Our view of the evolution of data center outsourcing, managed hosting, and cloud IaaS, and broadly, the “managed cloud”. Critical for understanding the future of cloud-enabled managed hosting.
Managed Services Providers Must Adapt to the Needs of DevOps-Oriented Customers. As DevOps increases in popularity, customers of managed services increasingly want their infrastructure to be managed with a DevOps philosophy. This represents a radical change for service providers. This note explores the customer requirements and market implications.
If you are not a Gartner client, please note that many of these topics have been covered in my blog in the past, if at a higher level (and generally in a mode where I am still working out my thinking, as opposed to a polished research position).
CenturyLink (Savvis) acquires Tier 3
If you’re an investment banker or a vendor, and you’ve asked me in the last year, “Who should we buy?”, I’ve often pointed at enStratius (bought by Dell), ServiceMesh (bought by CSC last week), and Tier 3.
So now I’m three for three, because CenturyLink just bought Tier 3, continuing its acquisition activity. CenturyLink is a US-based carrier (pushed to prominence when they acquired Qwest in 2011). They got into the hosting business (in a meaningful way) when they bought Savvis in 2011; Savvis has long been a global leader in colocation and managed hosting. (It’s something of a pity that CenturyLink is in the midst of killing the Savvis brand, which has recently gotten a lot of press because of their partnership with VMware for vCHS, and is far better known outside the US than the CenturyLink brand, especially in the cloud and hosting space.)
Savvis has an existing cloud IaaS business and a very large number of offerings that have the “cloud” label, generally under the Symphony brand — I like to say of Savvis that they never seem to have a use case that they don’t think needs another product, rather than having a unified but flexible platform for everything.
The most significant of Savvis’s cloud offerings are Symphony VPDC (recently rebranded to Cloud Data Center), SavvisDirect, and their vCHS franchise. VPDC is a vCloud-Powered public cloud offering (although Savvis has done a more user-friendly portal than vCloud Director provides); Savvis often combines it with managed services in lightweight data center outsourcing deals. (Savvis also has private cloud offerings.) SavvisDirect is an offering developed together with CA, and is intended to be a pay-as-you-go, credit-card-based offering, targeted at small businesses (apparently intended to be competitive with AWS, but whose structure seems to illustrate a failure to grasp the appeal of cloud as opposed to just mass-market VPS).
Savvis is the first franchise partner for vCHS; back at the time of VMworld (September), they were indicating that, over the long term, they expected vCHS to win, and that Savvis only needed to build its own IaaS platform until vCHS could fully meet customer requirements. (But continuing to have their own platform is certainly necessary to hedge their bets.)
Now CenturyLink’s acquisition of Tier 3 seems to indicate that they’re going to more than hedge their bets. Tier 3 is an innovative small IaaS provider (with fingers in the PaaS world through a Cloud Foundry-based PaaS; they added .NET support to Cloud Foundry as “Iron Foundry”). Their offering is vCloud-Powered public cloud IaaS, but they entirely hide vCloud Director under their own tooling (and it doesn’t seem vCloud-ish from either the front end or the back-end implementation), and they have a pile of interesting additional capabilities built into their platform. They’ve made a hypervisor-neutral push as well. They’ve got a nice blend of capabilities that appeal to the traditional enterprise and forward-looking capabilities that appeal to a DevOps orientation. Tier 3 has some blue-chip enterprise names as customers, has historically scored well in Gartner evaluations, and is strongly liked by our enterprise clients who have evaluated them — but people have always worried about their size. (Tier 3 has made it easy to white-label the offering, which has brought them additional success through partners like Peer 1.) The acquisition by CenturyLink neatly solves that size problem.
Indeed, CenturyLink seems to have placed a strong vote of confidence in their IaaS offering, because Tier 3 is being immediately rebranded, and immediately offered as the CenturyLink Cloud. (Current outstanding quotes for Symphony VPDC will likely be requoted, and new VPDC orders are unlikely to be taken.) CenturyLink will offer existing VPDC customers a free migration to the Tier 3 cloud (since it’s vCD-to-vCD, presumably this isn’t difficult, and it represents an upgrade in capabilities for customers). CenturyLink is also immediately discontinuing selling the SavvisDirect offering (although the existing platform will continue to run for the time being); customers will be directed to purchase the Tier 3 cloud instead. (Or, I should say, the CenturyLink Cloud, since the Tier 3 brand is being killed.) CenturyLink is also doing a broad international expansion of data center locations for this cloud.
CenturyLink has been surprisingly forward-thinking to date about the way the cloud converges infrastructure capabilities (including networking) and applications, and how application development and operations changes as a result. (They bought AppFog back in June to get a PaaS offering, too.) Their vision of how these things fit together is, I think, much more interesting than either AT&T or Verizon’s (or for that matter, any other major global carrier). I expect the Tier 3 acquisition to help accelerate their development of capabilities.
Savvis’s managed and professional services combined with the Tier 3 platform should provide them some immediate advantages in the cloud-enabled managed hosting and data center outsourcing markets. It’s more competition for the likes of CSC and IBM in this space, as well as providers like Verizon Terremark and Rackspace. I think the broad scope of the CenturyLink portfolio will mesh nicely not just with existing Tier 3 capabilities, but also capabilities that Tier 3 hasn’t had the resources to be able to develop previously.
Even though I believe that the hyperscale providers are likely to have the dominant market share in cloud IaaS, there’s still a decent market opportunity for everyone else, especially when the service is combined with managed and professional services. But I believe that managed and professional services need to change with the advent of the cloud — they need to become cloud-native and in many cases, DevOps-oriented. (Gartner clients only: see my research note, “Managed Service Providers Must Adapt to the Needs of DevOps-Oriented Customers“.) Tier 3 should be a good push for CenturyLink along this path, particularly since CenturyLink will make Tier 3’s Seattle offices the center of their cloud business, and they’re retaining Jared Wray (Tier 3’s founder) as their cloud CTO.
Google Compute Engine and live migration
Google Compute Engine (GCE) has been a potential cloud-emperor contender in the shadows, and although GCE is still in beta, it’s been widely speculated that Google will likely be the third vendor in the trifecta of big cloud IaaS market-share leaders, along with Amazon Web Services (AWS) and Microsoft Windows Azure.
Few would doubt Google’s technology prowess if it decides to commit itself to a business. A critical question has remained, though: Will Google be able to deliver technology capabilities that can be used by mere mortals in the enterprise, and market, sell, contract for, and deliver service in a way that such businesses can use? (Its ability to serve ephemeral large-scale compute workloads, and perhaps meet the needs of start-ups, is not in doubt.)
One of the most heartburn-inducing aspects of GCE has been its scheduled maintenance. To quote Google: “For scheduled zone maintenance windows, Google takes an entire zone offline for roughly two weeks to perform various, disruptive maintenance tasks.” Basically, Google has said, “Your data center will be going away for up to two weeks. Deal with it. You should be running in multiple zones anyway.”
Even most cloud-native start-ups aren’t capable of easily executing this way. Remember that most applications are architected to have their data locally, in the same zone as the compute. Without using Google’s PaaS capabilities (like Datastore), this means that the customer needs to move and/or replicate storage into another zone, which also increases their costs. Many applications aren’t large enough to warrant the complexity of a multi-zone implementation, either — not only business applications, but also smaller start-ups, mobile back-end implementations, and so forth.
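To make that multi-zone burden concrete, here is a minimal sketch using the Google Compute Engine API (via the google-api-python-client library). The project, zones, and image are placeholders, and a real deployment would still need the cross-zone data replication discussed above; treat this as an illustration, not a recommended pattern.

```python
# A minimal sketch (placeholder project/zone/image names) of what "you
# should be running in multiple zones" actually implies: the same instance
# has to be created in at least two zones, and the application still has
# to replicate its data between them at its own cost.
from googleapiclient.discovery import build

compute = build("compute", "v1")  # uses Application Default Credentials

PROJECT = "my-project"  # placeholder
ZONES = ["us-central1-a", "us-central1-b"]

for zone in ZONES:
    config = {
        "name": f"web-{zone}",
        "machineType": f"zones/{zone}/machineTypes/n1-standard-1",
        "disks": [{
            "boot": True,
            "initializeParams": {
                "sourceImage": "projects/debian-cloud/global/images/family/debian-11",
            },
        }],
        "networkInterfaces": [{"network": "global/networks/default"}],
    }
    # One insert per zone: surviving a zone-wide maintenance window means
    # paying for (at least) double the compute, plus replicated storage.
    compute.instances().insert(project=PROJECT, zone=zone, body=config).execute()
```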
So a hard-line stance on taking zones offline for maintenance inherently limited GCE’s market opportunity. Despite having positioned this as non-negotiable previously, Google has clearly changed its mind, introducing “transparent maintenance”. This is accomplished with a combination of live migration technology and some innovations related to their implementation of physical data center maintenance. It’s an interesting indication of Google listening to prospects and customers, and flexing to do something that has not been the Google Way.
Not only will Google’s addition of live migration help with data center maintenance, but more importantly, it will mitigate downtime related to host maintenance. Although AWS, for instance, tries to minimize host maintenance in order to avoid instance downtime or reboots, host maintenance is necessary — and it’s highly useful to have a technology that allows you to perform host maintenance without downtime for the instances, because it encourages you not to delay host maintenance (since you want to update the underlying host OS, hypervisor, and so forth).
VMware-based providers almost always do live migration for host maintenance, since it’s one of the core compelling features of VMware. But AWS, and many competitors that model themselves after AWS, don’t. I hope that Google’s decision to add live migration into GCE pushes the rest of the market — and specifically AWS, which today generally sets the bar for customer expectations — into doing the same, because it’s a highly useful infrastructure resilience feature, and it’s important to customers.
More broadly, though, AWS hasn’t really had innovation competitors to date. Microsoft Azure is a real competitor, but other than in PaaS, they’ve largely been playing catch-up. Thanks to its extensive portfolio of internal technologies, Google has the potential ability to inject truly new capabilities into the market. Similar to what customers have seen with AWS — when AWS has been successful at introducing capabilities that many customers weren’t really even aware that they wanted — I expect Google is going to launch truly innovative capabilities that will turn into customer demands. It’s not that AWS is going to simply mount a competitive response — it will become a situation where customers ask for these capabilities, pushing AWS to respond. That should be excellent for the market.
It’s worth noting that the value of Google is not just GCE — it is Google Cloud Platform as a whole, including the PaaS elements. This is similarly true of Microsoft Azure. And although AWS tends to be broadly bucketed as IaaS, in reality its capabilities overlap into the PaaS space. These vendors understand that the goal is the ability to develop and deliver business capabilities more quickly — not to provide cheap infrastructure.
Capabilities equate to lock-in, by the way, but historically, businesses have embraced lock-in whenever it results in more value delivered.
IBM SoftLayer, versus Amazon or versus Rackspace?
Near the beginning of July, IBM closed its acquisition of SoftLayer (which I discussed in a previous blog post). A little over three months have passed since then, and IBM has announced the addition of more than 1,500 customers, eliminated SmartCloud Enterprise (SCE, IBM’s cloud IaaS offering), and gone on the offensive against Amazon in an ad campaign (analyzed in my colleague Doug Toombs’s blog post). So what does this all mean for IBM’s prospects in cloud infrastructure?
IBM is unquestionably a strong brand with deep customer relationships — it exerts a magnetism for its customers that competitors like HP and Dell don’t come anywhere near to matching. Even with all of the weaknesses of the SCE offering, here at Gartner, we still saw customers choose the service simply because it was from IBM — even when the customers would openly acknowledge that they found the platform deficient and it didn’t really meet their needs.
In the months since the SoftLayer acquisition closed, we’ve seen this “we use IBM for everything by preference” trend continue. It certainly helps immensely that SoftLayer is a more compelling solution than SCE, but customers continue to acknowledge that they don’t necessarily feel they’re buying the best solution or the best technology — rather, they are getting something that is good enough from a vendor that they trust. Moreover, they are getting it now; IBM has displayed astonishing agility and a level of aggression that I’ve never seen before. It’s impressive how quickly IBM has jump-started the pipeline this early into the acquisition, and IBM’s strengths in sales and marketing are giving SoftLayer inroads into a mid-market and enterprise customer base that it wasn’t able to target previously.
SoftLayer has always competed to some degree against AWS (philosophically, both companies have an intense focus on automation, and SoftLayer’s bare-metal architecture is optimal for certain types of use cases), and IBM SoftLayer will as well. In the IBM SoftLayer deals we’ve seen in the last couple of months, though, their competition isn’t really Amazon Web Services (AWS). AWS is often under consideration, but the real competitor is much more likely to be Rackspace — dedicated servers (possibly with a hybrid cloud model) and managed services.
IBM’s strategy is actually distinctively different from those of the other providers in the cloud infrastructure market. SoftLayer’s business is overwhelmingly dedicated hosting — mostly small-business customers with one or two bare-metal servers (a cost-sensitive, high-churn business), though they have some customers with large numbers of bare-metal servers (gaming, consumer-facing websites, and so forth). It also offers cloud IaaS, called CloudLayer, with by-the-hour VMs and small bare-metal servers, but this is a relatively small business (AWS has individual customers that are bigger than the entirety of CloudLayer). SoftLayer’s intellectual property is focused on being really, really good at quickly provisioning hardware in a fully automated way.
IBM has decided to do something highly unusual — to capitalize on SoftLayer’s bare-metal strengths, and to strongly downplay virtualization and the role of the cloud management platform (CMP). If you want a CMP — OpenStack, CloudStack, vCloud Director, etc. — on SoftLayer, there’s an easy way to install the software on bare metal. But if you want it updated, maintained, etc., you’ll either have to do it yourself, or you need to contract with IBM for managed services. If you do that, you’re not buying cloud IaaS; you’re renting hardware and CMP software, and building your own private cloud.
While IBM intends to expand the configuration options available in CloudLayer (and thus the number of hardware options available by the hour rather than by the month), their focus is on the lower-level infrastructure constructs. This also means that they intend to remain neutral in the CMP wars. IBM’s outsourcing practice has historically been pretty happy to support whatever software you use, and the same largely applies here — they’re offering managed services for the common CMPs, in whatever way you choose to configure them.
In other words, while IBM intends to continue its effort to incorporate OpenStack as a provisioning manager in its “Smarter Infrastructure” products (the division formerly known as Tivoli), they are not launching an OpenStack-based cloud IaaS, replacing the existing CloudLayer cloud IaaS platform, or the like.
IBM also intends to use SoftLayer as the underlying hardware platform for the application infrastructure components that will be in its Cloud Foundry-based framework for PaaS. It will depend on these components to compete against the higher-level constructs in the market (like Amazon’s RDS database-as-a-service).
IBM SoftLayer has a strong value proposition for certain use cases, but today its distinctive value proposition is quite different from AWS’s, and very similar to Rackspace’s (although I think Rackspace is going to embrace DevOps-centric managed services, while IBM seems more traditional in its approach). But IBM SoftLayer is still an infrastructure-centric story. I don’t know that they’re going to compete with the vision and speed of execution currently being displayed by AWS, Microsoft, and Google, but ultimately, those providers may not be IBM SoftLayer’s primary competitors.
IBM buys SoftLayer
It’s been a hot couple of weeks in the cloud infrastructure as a service space. Microsoft’s Azure IaaS (persistent VMs) came out of beta, Google Compute Engine went into public beta, VMware formally launched its public cloud (vCloud Hybrid Service), and Dell withdrew from the market. Now, IBM is acquiring SoftLayer, with a deal size in the $2B range, around a 4x-5x multiple — roughly the multiple that Rackspace trades at, with RAX no doubt used as a comp despite the vast differences in the two companies’ business models.
SoftLayer is the largest provider of dedicated hosting on the planet, although they do also have cloud IaaS offerings; they sell direct, but also have a huge reseller channel, and they’re often the underlying provider to many mass-market shared hosting providers. Like other dedicated hosters, they are very SMB-centric — tons of dedicated hosting customers are folks with just a server or two. But they also have a sizable number of customers with scale-out businesses to whom “bare metal” (non-virtualized dedicated servers), provisioned flexibly on demand (figure it typically takes 1 to 4 hours to provision bare metal), is very attractive.
Why bare metal? Because virtualization is great for server consolidation (“I used to have 10 workloads on 10 servers that were barely utilized, and now I have one heavily utilized server running all 10 workloads!”), but it’s often viewed as unnecessary overhead when you’re dealing with an environment where all the servers are running nearly fully utilized, as is common in scale-out, Web-scale environments.
SoftLayer’s secret sauce is its automation platform, which handles virtualized and non-virtualized servers with largely equal ease. One of their value propositions has been to bring the kinds of things you expect from cloud VMs, to bare metal — paid by the hour, fully-automated provisioning, API as well as GUI, provisioning from image, and so forth. So the value proposition is often “get the performance of bare metal, in exactly the configuration you want, with the advantages and security of single-tenancy, without giving up the advantages of the cloud”. And, of course, if you want virtualization, you can do that — or SoftLayer will be happy to sell you VMs in their cloud.
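As a small illustration of the “API as well as GUI” point, here is a hedged sketch using the open-source softlayer-python bindings: bare-metal servers are enumerated through the same API style as cloud VMs. The credentials come from environment variables, and the printed fields are assumptions about the default result mask rather than guaranteed output.

```python
# A minimal sketch using the open-source softlayer-python bindings:
# bare-metal servers are listed through the same API style as cloud VMs.
# Credentials are read from the SL_USERNAME / SL_API_KEY environment
# variables; the printed fields are assumptions about the default result.
import SoftLayer

client = SoftLayer.create_client_from_env()
hw = SoftLayer.HardwareManager(client)

# Enumerate the account's bare-metal servers, as one would enumerate VMs.
for server in hw.list_hardware():
    print(server.get("hostname"), server.get("primaryIpAddress"))
```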
SoftLayer also has a nice array of other value-adds that you can self-provision, including being able to click to provision cloud management platforms (CMPs) like CloudStack / Citrix CloudPlatform, and hosting automation platforms like Parallels. Notably, though, they are focused on self-service. Although SoftLayer acquired a small managed hosting business when it merged with ThePlanet, its customer base is nearly exclusively self-managed. (That makes them very different than Rackspace.)
In terms of the competition, SoftLayer’s closest philosophical alignment is Amazon Web Services — don’t offer managed services, but instead build successive layers of automation going up the stack, that eliminate the need for traditional managed services as much as possible. They have a considerably narrower portfolio than AWS, of course, but AWS does not offer bare metal, which is the key attractor for SoftLayer’s customers.
So why does IBM want these guys? Well, they do fill a gap in the IBM portfolio — IBM has historically not served the SMB market directly in general, and its developer-centric SmartCloud Enterprise (SCE) has gotten relatively weak traction (seeming to do best where the IBM brand is important, notably in Europe), although that can be blamed on SCE’s weak feature set and the significant downtime associated with its maintenance windows, more so than on the go-to-market (although that’s also been problematic). I’ll be interested to see what happens to the SCE roadmap in light of the acquisition. (Also, IBM’s SCE+ offering — essentially a lightweight data center outsourcing / managed hosting offering, delivered on cloud-enabled infrastructure — uses a totally different platform, which they’ll need to converge at some point in time.)
Beyond the “public cloud”, though, SoftLayer’s technology and service philosophy are useful to IBM as a platform strategy, and potentially as bits of software and best practices to embed in other IBM products and services. SoftLayer’s anti-managed-services philosophy isn’t dissonant with IBM’s broader outsourcing strategy as it relates to the cloud. Every IT outsourcer at any kind of reasonable scale actually wants to eliminate managed services where they can, because at this point, it’s hard to get any cheaper labor — the Indian outsourcers have already wrung that dry, and every IT outsourcer today offshores as much of their labor as possible. So your only way to continue to get costs down is to eliminate the people. If you can, through superior technology, eliminate people, then you are in a better competitive position — not just for cost, but also for consistency and quality of service delivery.
I don’t think this was a “must buy” for IBM, but it should be a reasonable acceleration of their cloud plans, assuming that they manage to retain the brains at SoftLayer, and can manage what has been an agility-focused, technology-driven business with a very different customer base, go-to-market approach, and culture than the traditional IBM base. SoftLayer can certainly use more money for engineering resources (although IBM’s level of engineering commitment to cloud IaaS has been relatively lackluster given its strategic importance), marketing, and sales, and larger customers that might otherwise have been hesitant to use them may be swayed by the IBM brand.
(And it’s a nice exit for GI Partners, at the end of a long road in which they wrapped up EV1 Servers, ThePlanet, and SoftLayer… then pursued an IPO route during a terrible time for IPOs… and finally get to sell the resulting entity for a decent valuation.)
The myth of zero downtime
Every time there’s been a major Amazon outage, someone always says something like, “Regular Web hosters and colocation companies don’t have outages!” I saw an article in my Twitter stream today, and finally decided that the topic deserves a blog post. (The article seemed rather linkbait-ish, so I’m not going to link it.)
It is an absolute myth that you will not have downtime in colocation or Web hosting. It is also a complete myth that you won’t have downtime in cloud IaaS run by traditional Web hosting or data center outsourcing providers.
The typical managed hosting customer experiences roughly one outage a year. This figure comes from thirteen years of asking Gartner clients, day in and day out, about their operational track record. These outages are typically related to hardware failure, although sometimes they are related to service provider network outages (often caused by device misconfiguration, which can obliterate any equipment or circuit redundancy). Some customers are lucky enough to never experience any outages over the course of a given contract (usually two to three years for complex managed hosting), but this is actually fairly rare, because most customers’ architectures aren’t resilient to anything but the most trivial of infrastructure failures. (Woe betide the customer who has a serious hardware failure on a database server.) The “one outage a year” figure does not include any outages that customers might have caused themselves through application failure.
The typical colocation facility in the US is built to Tier III standards, with a mathematical expected availability of about 99.98%. In Europe, colocation facilities are often built to Tier II standards instead, for an expected availability of about 99.75%. Many colocation facilities do indeed manage to go for many years without an outage. So do many enterprise data centers — including Tier I facilities that have no redundancy whatsoever. The mathematics of the situation don’t say that you will have an outage — these are merely probabilities over the long term. Moreover, there will be an additional percentage of failures caused by humans. Single-data-center kings who proudly proclaim that their one data center has never had an outage have simply gotten lucky.
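For concreteness, here is the back-of-the-envelope arithmetic behind those tier figures, converted into expected downtime per year:

```python
# Back-of-the-envelope conversion of those design availabilities into
# expected downtime per year (ignoring human error, which adds more).
HOURS_PER_YEAR = 24 * 365

for label, availability in [("Tier III, ~99.98%", 0.9998),
                            ("Tier II,  ~99.75%", 0.9975)]:
    downtime_hours = (1 - availability) * HOURS_PER_YEAR
    print(f"{label}: ~{downtime_hours:.1f} hours of expected downtime/year")

# Tier III, ~99.98%: ~1.8 hours of expected downtime/year
# Tier II,  ~99.75%: ~21.9 hours of expected downtime/year
```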
The amount of publicity that a data center outage gets is directly related to its tenant constituency. The outage at the 365 Main colocation facility in San Francisco a few years back was widely publicized, for instance, because that facility happened to house a lot of Internet properties, including ones directly associated with online publications. There have been significant outages at many other colocation facilities over the years, though, that were never noted in the press — I’ve found out about them because they were mentioned by end-user clients, or because the vendor disclosed them.
Amazon outages — and indeed, more broadly, outages at large-scale providers like Google — get plenty of press because of their mass effects, and the fact that they tend to impact large Internet properties, making the press aware that there’s a problem.
Small cloud providers often have brief outages — and long maintenance windows, and sometimes lengthy maintenance downtimes. You’re rolling the dice wherever you go. Don’t assume that just because you haven’t read about an outage in the press, it hasn’t occurred. Whether you decide on managed hosting, dedicated hosting, colocation, or cloud IaaS, you want to know a provider’s track record — their actual availability over a multi-year period, not excluding maintenance windows. Especially for global businesses with 24×7 uptime requirements, it’s not okay to be down at 5 am Eastern, which is prime-time in both Europe and Asia.
Sure, there are plenty of reasons to worry about availability in the cloud, especially the possibility of lengthy outages made worse by the fundamental complexity that underlies many of these infrastructures. But you shouldn’t buy into the myth that your local Web hoster or colocation provider necessarily has better odds of availability, especially if you have a non-redundant architecture.
The forthcoming Managed Hosting Magic Quadrant, 2013
Gartner will soon be starting the process of updating our Magic Quadrant for Managed Hosting, currently targeted for publication in Q1 of 2013. This is the update to the Magic Quadrant for Managed Hosting that was published in March of this year (2012); a free reprint is available. If you consider yourself to be an enterprise-class managed hosting provider, capable of providing fully-managed services for complex, mission-critical websites, this is your Magic Quadrant. (Note that this is a distinct market from data center outsourcing.)
The previous Magic Quadrant was global. However, because regional requirements differ, and many excellent managed hosting providers are not global, we have decided to replace the global Magic Quadrant with three regional Magic Quadrants — one each for North America, Pan-Europe, and Asia-Pacific, published in that order. Each MQ will have its own inclusion criteria and evaluation criteria.
I will be leading the overall global effort, and the Gartner analysts who cover managed hosting will be doing these MQs as a global team, although each regional MQ will have a region-specific lead author. We are going to do a single global data collection effort and set of briefings, though, to reduce the level of effort needed from the service provider AR teams.
Doug Toombs will be the lead author for the North American MQ and will be assisting me in running the global effort. If you are not already following his blog or his Twitter (@DougToombs), I strongly encourage you to do so.
We will imminently be kicking off the process for this set of MQs. If you were not on last year’s Magic Quadrant for Managed Hosting and you would like to receive a pre-qualification survey, please contact Doug Toombs at Douglas dot Toombs at Gartner dot com. Please note that we allow any service provider to participate in the survey process; receiving a survey does not in any way indicate that we feel your company is qualified to be in the MQ.
Having cloud-enabled technology != Having a cloud
Many people confuse “using a hardware and software stack that potentially enables a cloud” with “cloud infrastructure as a service”. Analyst firms haven’t necessarily done a good job with drawing the distinction, either — there are plenty of (hopefully non-Gartner) analysts who use “IaaS” interchangeably to describe the technology stack and the service itself.
The technology stack — typically an integrated system (such as a Vblock) plus a cloud management platform (such as vCloud Director), although it could be anything, like whitebox servers + Nexenta storage + Arista switches + OpenStack — is what Gartner is now dubbing “cloud-enabled system infrastructure” (CESI). This is admittedly naming-by-committee, but it parallels our use of “cloud-enabled application platform” which is the technology-stack corollary for PaaS.
Why is it important to make a distinction between CESI and IaaS? Because while a CESI can be used to deliver IaaS, there are many services that can be delivered on top of a CESI that are not IaaS. Gartner’s cloud definitions are pretty strict, and one of the core elements of our IaaS definition is that IaaS is self-service — you can optionally layer managed services on top, but the customer has to have full control to obtain and remove resources in a fully self-service, fully-automated way. Not only are many of the services that can be delivered on a CESI not IaaS, they’re not actually cloud services at all — cloud requires not only self-service, but also scalability, elasticity, and metering by use, for instance.
Why does this distinction matter? Because there are a significant number of service providers in the market today who offer a service on top of a CESI and call it “cloud IaaS”, when it’s neither cloud nor IaaS, and it misleads customers into thinking that they are getting the technical and/or business benefits of going to the cloud and going to IaaS.
I’ve written two research notes that I’m hoping will cut through some of the market confusion. (Links are Gartner clients only, sorry.)
The first, Technology Overview for Cloud-Enabled System Infrastructure, is an overview of how we define CESI and how it is not merely a virtualization environment — it encompasses compute, storage, and network capabilities; it is automated, scalable, elastic, and near-real-time on-demand; and it exposes self-service interfaces. It discusses the range of cloud-enabled infrastructure services, from those that are merely cloud-facilitated (using the CESI as an efficient infrastructure platform, without exposing it to the customer via self-service) to those that are fully cloud-native (CESI fully exposed to the customer via self-service that is routinely used), and gives examples of services along this spectrum; for instance, IBM’s SCE+ is cloud-enabled data center outsourcing, not IaaS, in our parlance. Finally, it discusses the distinction between global-class CESI architectures (think Amazon Web Services) and enterprise-class architectures (think Vblock + vCD), and how to build a CESI.
The second, Don’t Be Fooled By Offerings Falsely Masquerading as Cloud Infrastructure as a Service, is a note that is intended to help IT buyers figure out what they’re really buying when they are assessing something that the vendor is claiming is IaaS. It focuses on a handful of principles: Beware of common “cloudwashing” claims; ensure that what you’re buying delivers promised technical and business benefits; identify your requirements and look at competing providers before purchasing a solution that requires a contractual commitment; favor providers who have better cloud infrastructure roadmaps.
The four most common “cloudwashing” claims for infrastructure services:
- “It uses virtualization, so it’s a cloud.”
- “It’s being delivered on a Vblock, so it’s a cloud.”
- “We use vCloud Director, so it’s a cloud.”
- “You can buy it through a portal, so it’s a cloud.”
I apologize for introducing more jargon into an already jargon-heavy corner of the market, but I think these distinctions are critical: You can have a technology stack that potentially enables you to deliver cloud IaaS… and use it to deliver something that is neither cloud nor IaaS. Cloud infrastructure as a technology platform is separate and distinct from cloud infrastructure as a service; the technical construct and the service construct are two different things.