See me perform the Glazunov violin concerto! (DC area, Dec 15th)
On a totally personal note (a rarity for my blog, for better or for worse):
I am the soloist for the Glazunov violin concerto with the Montgomery Philharmonic, on Sunday, December 15th. (7 pm, Gaithersburg Presbyterian Church, in suburban Washington DC. Free concert, no tickets required, kids welcome.)
It’s an enormously rare opportunity to be able to play a concerto with orchestra, and I’m immensely pleased to have been asked to do so. It’s also an incredibly large investment of time to do the preparation for a performance, and thus, it would be lovely to have a larger audience. So, if you’re in the DC area, I invite you to come — it’s an all-Russian program (the other works are Rimsky-Korsakov’s “Russian Easter” Overture, and Tchaikovsky’s Nutcracker suite).
For me, this is a kind of mark of reclaiming my life outside of work. I used to play semi-professionally when I lived in the Bay Area, and a brief hiatus when I moved cross-country turned into a decade-long break, during the last few years of which I have done pretty much nothing but work. During 2013, I’ve tried to find a more reasonable balance between working and other things; if you’re a client of mine, you know that I’ve been much stricter about the way I schedule travel, and have pushed more things to my colleagues rather than allowing myself to be as heavily overcommitted.
My personality doesn’t really ever allow me to just veg out, and so the other things that I do, I tend to do pretty intensively, whether it’s the violin, building Lego sets, or cycling through various games (at the moment, I’m trying to dominate my PvP bracket in Marvel Puzzle Quest). So if you’d like to see me do something in a totally different context, please do come and hear the performance!
CenturyLink (Savvis) acquires Tier 3
If you’re an investment banker or a vendor, and you’ve asked me in the last year, “Who should we buy?”, I’ve often pointed at enStratius (bought by Dell), ServiceMesh (bought by CSC last week), and Tier 3.
So now I’m three for three, because CenturyLink just bought Tier 3, continuing its acquisition activity. CenturyLink is a US-based carrier (pushed to prominence when they acquired Qwest in 2011). They got into the hosting business (in a meaningful way) when they bought Savvis in 2011; Savvis has long been a global leader in colocation and managed hosting. (It’s something of a pity that CenturyLink is in the midst of killing the Savvis brand, which has recently gotten a lot of press because of Savvis’s partnership with VMware for vCHS, and is far better known outside the US than the CenturyLink brand, especially in the cloud and hosting space.)
Savvis has an existing cloud IaaS business and a very large number of offerings that have the “cloud” label, generally under the Symphony brand — I like to say of Savvis that they never seem to have a use case that they don’t think needs another product, rather than having a unified but flexible platform for everything.
The most significant of Savvis’s cloud offerings are Symphony VPDC (recently rebranded to Cloud Data Center), SavvisDirect, and their vCHS franchise. VPDC is a vCloud-Powered public cloud offering (although Savvis has done a more user-friendly portal than vCloud Director provides); Savvis often combines it with managed services in lightweight data center outsourcing deals. (Savvis also has private cloud offerings.) SavvisDirect, developed together with CA, is a pay-as-you-go, credit-card-based offering targeted at small businesses; it is apparently intended to be competitive with AWS, but its structure seems to illustrate a failure to grasp the appeal of cloud as opposed to just mass-market VPS.
Savvis is the first franchise partner for vCHS; back at the time of VMworld (September), they indicated that they believed vCHS would win over the long term, and that Savvis only needed to maintain its own IaaS platform until vCHS could fully meet customer requirements. (But continuing to have their own platform is certainly necessary to hedge their bets.)
Now CenturyLink’s acquisition of Tier 3 seems to indicate that they’re going to more than hedge their bets. Tier 3 is an innovative small IaaS provider (with fingers in the PaaS world through a Cloud Foundry-based PaaS; they also added .NET support to Cloud Foundry as “Iron Foundry”). Their offering is vCloud-Powered public cloud IaaS, but they entirely hide vCloud Director under their own tooling (and it doesn’t seem vCloud-ish from either the front-end or the implementation of the back-end), and they have a pile of interesting additional capabilities built into their platform. They’ve made a hypervisor-neutral push, as well. They’ve got a nice blend of capabilities that appeal to the traditional enterprise and forward-looking capabilities that appeal to a DevOps orientation. Tier 3 has some blue-chip enterprise names as customers, has historically scored well in Gartner evaluations, and is strongly liked by our enterprise clients who have evaluated it — but people have always worried about its size. (Tier 3 has also made it easy to white-label the offering, which has brought it additional success through partners like Peer 1.) The acquisition by CenturyLink neatly solves that size problem.
Indeed, CenturyLink seems to have placed a strong vote of confidence in Tier 3’s IaaS offering, because Tier 3 is being immediately rebranded and immediately offered as the CenturyLink Cloud. (Current outstanding quotes for Symphony VPDC will likely be requoted, and new VPDC orders are unlikely to be taken.) CenturyLink will offer existing VPDC customers a free migration to the Tier 3 cloud (since it’s vCD-to-vCD, presumably this isn’t difficult, and it represents an upgrade in capabilities for customers). CenturyLink is also immediately discontinuing sales of the SavvisDirect offering (although the existing platform will continue to run for the time being); customers will be directed to purchase the Tier 3 cloud instead. (Or, I should say, the CenturyLink Cloud, since the Tier 3 brand is being killed.) CenturyLink is also doing a broad international expansion of data center locations for this cloud.
CenturyLink has been surprisingly forward-thinking to date about the way the cloud converges infrastructure capabilities (including networking) and applications, and how application development and operations change as a result. (They bought AppFog back in June to get a PaaS offering, too.) Their vision of how these things fit together is, I think, much more interesting than either AT&T’s or Verizon’s (or, for that matter, any other major global carrier’s). I expect the Tier 3 acquisition to help accelerate their development of capabilities.
Savvis’s managed and professional services combined with the Tier 3 platform should provide them some immediate advantages in the cloud-enabled managed hosting and data center outsourcing markets. It’s more competition for the likes of CSC and IBM in this space, as well as providers like Verizon Terremark and Rackspace. I think the broad scope of the CenturyLink portfolio will mesh nicely not just with existing Tier 3 capabilities, but also capabilities that Tier 3 hasn’t had the resources to be able to develop previously.
Even though I believe that the hyperscale providers are likely to have the dominant market share in cloud IaaS, there’s still a decent market opportunity for everyone else, especially when the service is combined with managed and professional services. But I believe that managed and professional services need to change with the advent of the cloud — they need to become cloud-native and in many cases, DevOps-oriented. (Gartner clients only: see my research note, “Managed Service Providers Must Adapt to the Needs of DevOps-Oriented Customers”.) Tier 3 should be a good push for CenturyLink along this path, particularly since CenturyLink will make Tier 3’s Seattle offices the center of their cloud business, and they’re retaining Jared Wray (Tier 3’s founder) as their cloud CTO.
Infrastructure resilience, fast VM restart, and Google Compute Engine
If you read Gartner research, you’ve probably noticed that we’ve started referring to something called “fast VM restart”. We consider it to be a critical infrastructure resiliency feature for many business application workloads.
Many applications are small. Really, really small. They take a fraction of a CPU core, and less than 1 GB of RAM. And they’ll never get any bigger. (They drive the big wins in server consolidation.) Most applications in mainstream businesses are like that. I often refer to these as “paperwork apps” — somebody fills out a form, that form is routed and processed, and eventually someone runs a report. Businesses have a zillion of these and continue to write more. When an organization says they have hundreds, or thousands, of apps, most of them are paperwork apps. They can be built by not especially bright or skilled programmers, and for resilience, they rely on the underlying infrastructure platform.
A couple things can happen to these kinds of paperwork apps in the future:
- They can be left on-premise to run as-is within an enterprise virtualization environment (that maybe eventually becomes private cloud-ish), relying on its infrastructure resilience.
- They can be migrated into a cloud IaaS environment, relying on it for infrastructure resilience.
- They can be migrated onto a PaaS, either on-premise or from a service provider, relying on it for resilience.
- They can be moved to business process management (BPM) platforms, either via on-premise deployment of a BPM suite, or a BPM PaaS, thereby making resilience the problem of the BPM software.
Note the thing that’s not on that list: Re-architecting the application for application-level resilience. That requires your developers to be skilled enough to do it, and requires the application to run in a distributed fashion, which, due to the low level of resources consumed, isn’t economical.
Of the various scenarios above, the lift-and-shift onto cloud IaaS is a hugely likely one for many applications. But businesses want to be comfortable that the availability will be comparable to their existing on-premise infrastructure.
So what does infrastructure resilience mean? In short, it means minimal downtime due to either planned maintenance or a failure of the underlying hardware. Live migration is the most common modern technique used to mitigate downtime for planned maintenance that impacts the physical host. Fast VM restart is the most common technique used to mitigate downtime due to hardware failure.
Fast VM restart is built into nearly all modern hypervisors. It’s not magical — a shocking number of VMware customers believe that VM HA means they’ll instantly get a workload onto a happy healthy host from a failed host (i.e., they confuse live migration with VM HA). Fast VM restart is basically a technique to rapidly detect that a physical host has failed, and restart the VMs that were on that host, on some other host. It doesn’t necessarily need to be implemented at the virtualization level — you can just have monitoring that runs at a very short polling interval and that orchestrates a move-and-restart of VMs when it sees a host failure, for instance. (Of course, you need a storage and network architecture that makes this viable, too.)
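For illustration, here’s a minimal sketch of what that kind of out-of-band implementation might look like. The manager object and its methods (list_hosts, is_alive, vms_on, pick_healthy_host, start_vm) are entirely hypothetical stand-ins for whatever your management layer actually exposes, and the sketch assumes shared storage, so a VM’s disks are reachable from any host:

```python
# Minimal sketch of fast VM restart implemented outside the hypervisor:
# poll hosts at a short interval; when one fails, restart its VMs on a
# healthy host. The "manager" object is a hypothetical stand-in for a
# real management API. Assumes shared storage.
import time

POLL_INTERVAL_SECONDS = 5  # shorter interval = faster failure detection

def monitor_and_restart(manager):
    while True:
        for host in manager.list_hosts():
            if manager.is_alive(host):  # e.g., heartbeat within a deadline
                continue
            # Host looks dead: cold-boot its VMs elsewhere. From the
            # guest's point of view, this is an unexpected reboot.
            for vm in manager.vms_on(host):
                target = manager.pick_healthy_host(vm)  # capacity-aware placement
                manager.start_vm(vm, target)
        time.sleep(POLL_INTERVAL_SECONDS)
```

(A production version would also need to fence the failed host, i.e., make sure it is really dead before restarting its VMs elsewhere, to avoid two copies of the same VM writing to the same disks.)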
Clearly, not all applications are happy when they get what is basically an unexpected reboot, but this is the level of infrastructure resiliency that works just fine for non-distributed applications. When customers babble about the reliability of their on-premise VMware-based infrastructure, this is pretty much what they mean. They think it has value. They’re willing to pay more for it. There’s no real reason why it shouldn’t be implemented by every cloud IaaS provider that intends to take general business applications, not just the VMware-based providers.
By the way: Lost in the news about live migration in Google Compute Engine has been an interesting new subtlety. I missed noticing this in my first read-through of the announcement, since it was phrased purely in the context of maintenance, and only a re-read while finishing up this blog post led me to wonder about general-case restart. And I haven’t seen this mentioned elsewhere:
The new GCE update also adds fast VM restart, which Google calls Automatic Restart. To quote the new documentation, “You can set up Google Compute Engine to automatically restart an instance if it is taken offline by a system event, such as a hardware failure or scheduled maintenance event, using the automaticRestart setting.” Answering a query from me, Google said that the restart time is dependent upon the type of failure, but that in most cases, it should be under three minutes.
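Going by that documentation, enabling the behavior appears to be a matter of setting automaticRestart in an instance’s scheduling options. Here’s a hedged sketch of what the relevant fragment of an instance definition might look like; the field placement follows my reading of the GCE API docs, and the instance name is made up:

```python
# Sketch: the scheduling fragment of a GCE instance resource with
# automatic restart enabled, per the automaticRestart setting quoted
# above. The exact request structure may vary by API version, so treat
# this as illustrative rather than authoritative.
instance_body = {
    "name": "example-instance",    # hypothetical instance name
    "scheduling": {
        "automaticRestart": True,  # restart the VM after a system event
                                   # (hardware failure or maintenance)
    },
    # ... machine type, disks, and network interfaces omitted ...
}
```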
So a gauntlet has been (subtly) thrown. This is one of the features that enterprises most want in an “enterprise-grade” cloud IaaS offering. Now Google has it. Will AWS, Microsoft, and others follow suit?
Google Compute Engine and live migration
Google Compute Engine (GCE) has been a potential cloud-emperor contender in the shadows, and although GCE is still in beta, it’s been widely speculated that Google will likely be the third vendor in the trifecta of big cloud IaaS market-share leaders, along with Amazon Web Services (AWS) and Microsoft Windows Azure.
Few would doubt Google’s technology prowess, if it decides to commit itself to a business. A critical question has remained, though: Will Google be able to deliver technology capabilities that can be used by mere mortals in the enterprise, and market, sell, contract for, and deliver service in a way that such businesses can use? (Its ability to serve ephemeral large-scale compute workloads, and perhaps meet the needs of start-ups, is not in doubt.)
One of the most heartburn-inducing aspects of GCE has been its scheduled maintenance. To quote Google: “For scheduled zone maintenance windows, Google takes an entire zone offline for roughly two weeks to perform various, disruptive maintenance tasks.” Basically, Google has said, “Your data center will be going away for up to two weeks. Deal with it. You should be running in multiple zones anyway.”
Even most cloud-native start-ups aren’t capable of easily executing this way. Remember that most applications are architected to have their data locally, in the same zone as the compute. Without using Google’s PaaS capabilities (like Datastore), this means that the customer needs to move and/or replicate storage into another zone, which also increases their costs. Many applications aren’t large enough to warrant the complexity of a multi-zone implementation, either — not only business applications, but also smaller start-ups, mobile back-end implementations, and so forth.
So a hard-line stance on taking zones offline for maintenance inherently limited GCE’s market opportunity. Despite having positioned this as a hard-line stance previously, Google has clearly changed its mind, introducing “transparent maintenance”. This is accomplished with a combination of live migration technology and some innovations related to their implementation of physical data center maintenance. It’s an interesting indication of Google listening to prospects and customers, and flexing to do something that has not been the Google Way.
Not only will Google’s addition of live migration help with data center maintenance, but more importantly, it will mitigate downtime related to host maintenance. Although AWS, for instance, tries to minimize host maintenance in order to avoid instance downtime or reboots, host maintenance is necessary — and it’s highly useful to have a technology that allows you to do host maintenance without downtime for the instances, because this encourages you not to delay host maintenance (since you want to update the underlying host OS, hypervisor, etc.).
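As I read Google’s documentation, this behavior is exposed per instance through a scheduling setting called onHostMaintenance; the sketch below shows what that fragment might look like. The field name and its placement are my reading of the docs, not something Google has confirmed to me, so treat it as illustrative:

```python
# Sketch: the scheduling options for a GCE instance opting into
# transparent maintenance. "MIGRATE" asks for live migration rather
# than termination when the host needs maintenance. Based on my reading
# of the GCE documentation; illustrative, not authoritative.
scheduling = {
    "onHostMaintenance": "MIGRATE",  # live-migrate instead of terminating
}
```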
VMware-based providers almost always do live migration for host maintenance, since it’s one of the core compelling features of VMware. But AWS, and many competitors that model themselves after AWS, don’t. I hope that Google’s decision to add live migration into GCE pushes the rest of the market — and specifically AWS, which today generally sets the bar for customer expectations — into doing the same, because it’s a highly useful infrastructure resilience feature, and it’s important to customers.
More broadly, though, AWS hasn’t really had innovation competitors to date. Microsoft Azure is a real competitor, but other than in PaaS, they’ve largely been playing catch-up. Thanks to its extensive portfolio of internal technologies, Google has the potential ability to inject truly new capabilities into the market. Similar to what customers have seen with AWS — when AWS has been successful at introducing capabilities that many customers weren’t really even aware that they wanted — I expect Google is going to launch truly innovative capabilities that will turn into customer demands. It’s not that AWS is going to simply mount a competitive response — it will become a situation where customers ask for these capabilities, pushing AWS to respond. That should be excellent for the market.
It’s worth noting that the value of Google is not just GCE — it is Google Cloud Platform as a whole, including the PaaS elements. This is similarly true of Microsoft Azure. And although AWS seems to be broadly bucketed as IaaS, in reality its capabilities overlap into the PaaS space. These vendors understand that the goal is the ability to develop and deliver business capabilities more quickly — not to provide cheap infrastructure.
Capabilities equate to lock-in, by the way, but historically, businesses have embraced lock-in whenever it results in more value delivered.
IBM SoftLayer, versus Amazon or versus Rackspace?
Near the beginning of July, IBM closed its acquisition of SoftLayer (which I discussed in a previous blog post). A little over three months have passed since then, and IBM has announced the addition of more than 1,500 customers and the elimination of SmartCloud Enterprise (SCE, IBM’s cloud IaaS offering), and has gone on the offensive against Amazon in an ad campaign (analyzed in my colleague Doug Toombs’s blog post). So what does this all mean for IBM’s prospects in cloud infrastructure?
IBM is unquestionably a strong brand with deep customer relationships — it exerts a magnetism for its customers that competitors like HP and Dell don’t come anywhere near to matching. Even with all of the weaknesses of the SCE offering, here at Gartner, we still saw customers choose the service simply because it was from IBM — even when the customers would openly acknowledge that they found the platform deficient and it didn’t really meet their needs.
In the months since the SoftLayer acquisition has closed, we’ve seen this “we use IBM for everything by preference” trend continue. It certainly helps immensely that SoftLayer is a more compelling solution than SCE, but customers continue to acknowledge that they don’t necessarily feel they’re buying the best solution or the best technology, but they are getting something that is good enough from a vendor that they trust. Moreover, they are getting it now; IBM has displayed astonishing agility and a level of aggression that I’ve never seen before. It’s impressive how quickly IBM has jump-started the pipeline this early into the acquisition, and IBM’s strengths in sales and marketing are giving SoftLayer inroads into a mid-market and enterprise customer base that it wasn’t able to target previously.
SoftLayer has always competed to some degree against Amazon Web Services (AWS) — philosophically, both companies have an intense focus on automation, and SoftLayer’s bare-metal architecture is optimal for certain types of use cases — and IBM SoftLayer will as well. In the IBM SoftLayer deals we’ve seen in the last couple of months, though, the competition isn’t really AWS. AWS is often under consideration, but the real competitor is much more likely to be Rackspace — dedicated servers (possibly with a hybrid cloud model) and managed services.
IBM’s strategy is actually distinctively different from those of the other providers in the cloud infrastructure market. SoftLayer’s business is overwhelmingly dedicated hosting — mostly small-business customers with one or two bare-metal servers (a cost-sensitive, high-churn business), though they have some customers with large numbers of bare-metal servers (gaming, consumer-facing websites, and so forth). SoftLayer also offers cloud IaaS, called CloudLayer, with by-the-hour VMs and small bare-metal servers, but this is a relatively small business (AWS has individual customers that are bigger than the entirety of CloudLayer). SoftLayer’s intellectual property is focused on being really, really good at quickly provisioning hardware in a fully automated way.
IBM has decided to do something highly unusual — to capitalize on SoftLayer’s bare-metal strengths, and to strongly downplay virtualization and the role of the cloud management platform (CMP). If you want a CMP — OpenStack, CloudStack, vCloud Director, etc. — on SoftLayer, there’s an easy way to install the software on bare metal. But if you want it updated, maintained, etc., you’ll either have to do it yourself or contract with IBM for managed services. If you do that, you’re not buying cloud IaaS; you’re renting hardware and CMP software, and building your own private cloud.
While IBM intends to expand the configuration options available in CloudLayer (and thus the number of hardware options available by the hour rather than by the month), their focus is upon the lower-level infrastructure constructs. This also means that they intend to remain neutral in the CMP wars. IBM’s outsourcing practice has historically been pretty happy to support whatever software you use, and the same largely applies here — they’re offering managed services for the common CMPs, in whatever way you choose to configure them.
In other words, while IBM intends to continue its effort to incorporate OpenStack as a provisioning manager in its “Smarter Infrastructure” products (the division formerly known as Tivoli), they are not launching an OpenStack-based cloud IaaS, replacing the existing CloudLayer cloud IaaS platform, or the like.
IBM also intends to use SoftLayer as the underlying hardware platform for the application infrastructure components that will be in its Cloud Foundry-based framework for PaaS. It will depend on these components to compete against the higher-level constructs in the market (like Amazon’s RDS database-as-a-service).
IBM SoftLayer has a strong value proposition for certain use cases, but today that value proposition is very different from AWS’s and very similar to Rackspace’s (although I think Rackspace is going to embrace DevOps-centric managed services, while IBM seems more traditional in its approach). IBM SoftLayer is still an infrastructure-centric story. I don’t know that they’re going to compete with the vision and speed of execution currently being displayed by AWS, Microsoft, and Google, but ultimately, those providers may not be IBM SoftLayer’s primary competitors.