Blog Archives

“Enterprise class” cloud

There seems to be an endless parade of hosting companies eager to explain to me that they have an “enterprise class” cloud offering. (Cloud systems infrastructure services, to be precise; I continue to be careless in my shorthand on this blog, although all of us here at Gartner are trying to get into the habit of using cloud as an adjective attached to more specific terminology.)

If you’re a hosting vendor, get this into your head now: Just because your cloud compute service is differentiated from Amazon’s doesn’t mean that you’re differentiated from any other hoster’s cloud offering.

Yes, these offerings are indeed targeted at the enterprise. Yes, there are in fact plenty of non-startups who are ready, willing, and eager to adopt cloud infrastructure. Yes, there are features that they want (or need) that they can’t get on some of the existing cloud offerings, especially those of the early entrants. But none of that makes these offerings unique.

These offerings tend to share the following common traits:

1. “Premium” equipment. Name-brand everything: HP blades, Cisco gear (except for the ADCs, which are F5’s), etc. No white boxes.

2. VMware-based. This reflects the fact that VMware is overwhelmingly the most popular virtualization technology used in enterprises.

3. Private VLANs. Enterprises perceive private VLANs as more secure.

4. Private connectivity. That usually means Internet VPN support, but also the ability to drop your own private WAN connection into the facility. Enterprises who are integrating cloud-based solutions with their legacy infrastructure often want to be able to get MPLS VPN connections back to their own data center.

5. Colocated or provider-owned dedicated gear. Not all workloads virtualize well, and some things are available only as hardware. If you have Oracle RAC clusters, you are almost certainly going to run them on dedicated servers. People have Google search appliances, hardware ADCs custom-configured for complex tasks, black-box encryption devices, etc. Dedicated equipment is not going away for a very, very long time. (Clients only: See statistics and advice on what not to virtualize.)

6. Managed service options. People still want support, managed services, and professional services; the cloud simplifies and automates some operations tasks, but we have a very long way to go before it fulfills its potential to reduce IT operations labor costs. And this, of course, is where most hosters will make their money.

It doesn’t take a genius to come up with these traits. Most are known requirements, established through a decade and a half of hosting industry experience. If you want to differentiate, you need to get beyond them.

On-demand cloud offerings are a critical evolution stage for hosters. I continue to be very, very interested in hearing from hosters who are introducing this new set of capabilities. For the moment, there’s also some differentiation in which part of the cloud conundrum a hoster has decided to attack first, creating provider differences for both the immediate offerings and the near-term roadmap offerings. But hosters are making a big mistake by thinking their cloud competition is Amazon. Amazon certainly is a competitor now, but a hoster’s biggest worry should still be other hosters, given the worrisome similarities in the emerging services.

Verizon and Carpathia launch hybrid offerings

Two public cloud announcements from hosting providers this week, with some interesting similarities…

Verizon

Verizon has launched its Computing as a Service (CaaS) offering. This is a virtual data center (VDC) offering, which means that it’s got a Web-based GUI within which you provision and manage your infrastructure. You contract for CaaS itself on a one-year term, paying for that base access monthly. Within CaaS, you can provision “farms”, which are individual virtual data centers. Within a farm, you can provision servers (along with the usual storage, load-balancing, firewall, etc.). Farms and servers are on-demand, with daily pricing.

Two things make the Verizon offering distinctive (at least temporarily). First, farms can contain both physical servers and virtual (VMware-based) servers, on an HP C-class blade platform; while hybridized offerings have become increasingly common, Verizon is one of the few to allow them to be managed out of a unified GUI. Second, Verizon offers managed services across the whole platform. By default, you get basic management (including patch management) for the OS and Verizon-provided app infrastructure. You can also upgrade to full managed service. It looks like, compared to similar providers, the Verizon offering is going to be extremely cost-competitive.

Carpathia Hosting

In yet another example of a smaller hoster “growing up” with serious cloud computing ambitions, Carpathia has released an offering it calls Cloud Orchestration. It’s a hybrid utility hosting model, combining its managed dedicated hosting service (AlwaysOn) with scaling on its virtual server offering, InstantOn.

Carpathia has claimed this is the first hybrid offering; I don’t agree. However, I do think that Carpathia has rolled out a notable number of features on its cloud platform (Citrix Xen-based). It has made a foray into the cloud storage space, based on ParaScale. It also has auto-scaling, including auto-provisioning triggered by performance and availability SLA violations (the only vendor I know of that currently offers that feature). OS patch management is included, as are other basic managed hosting services. Check out Carpathia CTO Jon Greaves’s blog post on the value proposition for an indication of where their thinking is headed.
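To make the SLA-driven auto-scaling idea concrete, here is a minimal sketch of the control loop such a feature implies: watch the metrics that back the SLA, and provision more capacity when either one is breached. Every name, threshold, and function here is a hypothetical illustration of the concept, not Carpathia’s actual implementation or API.

    # Hypothetical sketch of SLA-violation-driven auto-scaling.
    # None of these names correspond to Carpathia's actual APIs; this just
    # illustrates "provision more capacity when performance or availability
    # SLAs are breached."

    import time
    from dataclasses import dataclass

    @dataclass
    class SlaPolicy:
        max_response_ms: float = 500.0   # performance SLA threshold (illustrative)
        min_availability: float = 0.999  # availability SLA threshold (illustrative)
        max_instances: int = 10          # cap on scale-out

    def measure_response_ms() -> float:
        """Stand-in for a real monitoring probe."""
        return 420.0

    def measure_availability() -> float:
        """Stand-in for a real availability calculation over a window."""
        return 0.9995

    def provision_instance() -> None:
        """Stand-in for a call to the provider's provisioning API."""
        print("provisioning one more virtual server...")

    def autoscale_once(policy: SlaPolicy, current_instances: int) -> int:
        """Add capacity if either SLA is being violated and we're under the cap."""
        violated = (measure_response_ms() > policy.max_response_ms or
                    measure_availability() < policy.min_availability)
        if violated and current_instances < policy.max_instances:
            provision_instance()
            return current_instances + 1
        return current_instances

    if __name__ == "__main__":
        instances = 2
        policy = SlaPolicy()
        for _ in range(3):           # in practice this loop runs continuously
            instances = autoscale_once(policy, instances)
            time.sleep(1)

The interesting design question for any provider doing this is less the loop itself than choosing measurement windows and caps so that a transient blip doesn’t trigger (and bill for) a pile of new servers.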

Side thought: Carpathia is one of the few Xen-based cloud providers to use Citrix XenServer, rather than open-source Xen. However, now that Citrix is offering XenServer for free, it seems likely that service providers will gradually drift that way. Live migration (XenMotion) will probably be the main thing that drives that switch.

VMware takes stake in Terremark

I have been crazily, insanely busy, and my frequency of blog posting has suffered for it. On the plus side, I’ve been busy because a huge number of people — users, vendors, investors — want to talk about cloud.

I’ve seen enough questions about VMware investing $20 million in Terremark that I figured I’d write a quick take, though.

Terremark is a close VMware partner (and their service provider of the year for 2008). Data Return (acquired by Terremark in 2007) was the first to have a significant VMware-based utility hosting offering, dating all the way back to 2005. Terremark has since also gotten good traction with its VMware-based Enterprise Cloud offering, which is a virtual data center service. However, Terremark is not just a hosting/cloud provider; it also does carrier-neutral colocation. It has been sinking capital into data center builds, so an external infusion, particularly one directed specifically at funding the cloud-related engineering efforts, is probably welcome.

Terremark has been the leading-edge service provider for VMware-based on-demand infrastructure. It is to VMware’s advantage to get service providers to use its cutting-edge stuff, particularly the upcoming vCloud, as soon as possible, so giving Terremark money to accelerate its cloud plans is a perfectly good tactical move. I don’t think it’s necessary to read any big strategic message into this investment, although undoubtedly it’s interesting to contemplate.

If you worry about hardware, it’s not cloud

If you need more RAM and you have to call your service provider, who then has to order the RAM, wait to receive it, and install it in a physical server before you actually get more memory, and who then bills you on a one-off basis for buying and installing that RAM, you’re not doing cloud computing. If you have to negotiate the price of that RAM each time they buy some, you are really, really not doing cloud computing.

I talked to a client yesterday who is in exactly this situation, with a small vendor who calls themselves a cloud computing provider. (I am not going to name names on my blog, in this case.)

Cloud infrastructure services should not be full of one-offs. (The example I cited is merely the worst of the service provider’s offenses against cloud concepts.) It’s reasonable to hybridize cloud solutions with non-cloud solutions, but for basic things — compute cores, RAM, storage, bandwidth — if it’s not on-demand, seamless, and nigh-instant, it’s not cloud, at least not in any reasonable definition of public cloud computing. (“Private cloud”, in the sense of in-house, virtualized data centers, adopts some but not all traits of the public cloud to varying degrees, and therefore gets cut more slack.)

Cloud infrastructure should be a fabric, not individual VMs that are tied to specific physical servers.

Amazon announces reserved instances

Amazon’s announcement du jour is “reserved instances” for EC2.

Basically, with a reserved instance, you pay an up-front non-refundable fee for a one-year term or a three-year term. That buys you a discount on the usage fee for that instance, during that period of time. Reserved instances are only available for Unix flavors (i.e., no Windows) and, at present, only in the US availability zones.

Let’s do some math to see what the cost savings turn out to be.

An Amazon small instance (1 virtual core equivalent to a 1.0-1.2 GHz 2007 Opteron or Xeon) is normally $0.10 per hour. Assuming 720 hours in a month, that’s $72 a month, or $864 per year, if you run that instance full-time.

Under the reserved instance pricing scheme, you pay $325 for a one-year term, then $0.03 per hour. That would be $21 per month, or $259 per year. Add in the reserve fee and you’re at $584 for the year, averaging out to $49 per month — a pretty nice cost savings.

On a three-year basis, unreserved would cost you $2,592; reserved, running full-time, is a $500 one-time fee plus usage, for a grand total of $1,277. That’s a big savings over the base price, averaging out to $35 per month.

This is important because at the unreserved prices, on a three-year cash basis, it’s cheaper to just buy your own servers. At the reserved price, does that equation change?

Well, let’s see. Today, in a Dell PowerEdge R900 (a reasonably popular server for virtualized infrastructure), I can get a four-socket server populated with quad-cores for around $15,000. That’s sixteen Xeon cores clocking at more than 2 GHz. Call it $1,000 per modern core; split over a three-year period, that’s about $28 per core per month. Cheaper than the reserved price, and much less than the unreserved price.

Now, this is a crude, hardware-only, three-year cash calculation, of course, and not a TCO calculation. But it shows that if you plan to run your servers full-time on Amazon, it’s not as cheap as “it’s just three cents an hour!” might make it sound.
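For readers who want to rerun the back-of-envelope arithmetic with their own assumptions, here is a small sketch that reproduces the comparison above. The constants are the 2009 prices quoted in this post and the crude per-core hardware estimate from the Dell example; they are inputs to the illustration, not authoritative figures, and rounding may differ from the in-text numbers by a dollar or so.

    # Rough three-year cash comparison for one EC2 small instance running
    # full-time, using the prices quoted above (720 hours/month assumed).
    # The hardware figure is the crude per-core estimate from the Dell
    # example; this is not a TCO model.

    HOURS_PER_MONTH = 720
    MONTHS = 36

    def on_demand(rate_per_hour=0.10):
        return rate_per_hour * HOURS_PER_MONTH * MONTHS

    def reserved(upfront=500.0, rate_per_hour=0.03):
        # Three-year reservation fee plus discounted hourly usage
        return upfront + rate_per_hour * HOURS_PER_MONTH * MONTHS

    def owned_hardware(cost_per_core=1000.0):
        # Buy one modern core outright and amortize it over three years
        return cost_per_core

    if __name__ == "__main__":
        for label, total in [("on-demand", on_demand()),
                             ("reserved", reserved()),
                             ("owned core", owned_hardware())]:
            print(f"{label:>10}: ${total:7.0f} over 3 years "
                  f"(${total / MONTHS:.0f}/month)")

Swap in your own utilization (fewer hours per month) and the comparison shifts quickly back in Amazon’s favor, which is really the point of usage-based pricing.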

There’s more to cloud computing than Amazon

In dozens of client conversations, I keep encountering companies — both IT buyers and vendors — who seem to believe that Amazon’s EC2 platform is the be-all and end-all of the state of the art in cloud computing today. In short, they believe that if you can’t get it on EC2, there’s no cloud platform that can offer it to you. (I saw a blog post recently called “Why, right now, is Amazon the only game in town?” that exemplifies this stance.)

For better or for worse, this is simply not the case. While Amazon’s EC2 platform (and the rest of AWS) is a fantastic technical achievement, and it has demonstrated that it scales well and has a vast amount of spare capacity to be used on demand, as it stands, it’s got some showstoppers for many mainstream adopters. But the rest of the market can fill many of the needs that Amazon doesn’t, such as:

  • Not having to make any changes to applications.
  • Non-public-Internet connectivity options.
  • High-performance, reliable storage with managed off-site backups.
  • Hybridization with dedicated or colocated equipment.
  • Meeting compliance and audit requirements.
  • Real-time visibility into usage and billing.
  • Enterprise-class customer support and managed services.

There are tons of providers who would be happy to sell some or all of that to you — newer names to most people, like GoGrid and SoftLayer, as well as familiar enterprise hosting names like AT&T, Savvis, and Terremark. Even your ostensibly stodgy IT outsourcers are starting to get into this game, although the boundaries of what’s a public cloud service and what’s an outsourced private one start to get blurry.

If you’ve got to suddenly turn up four thousand servers to handle a flash crowd, you’re going to need Amazon. But if you’re like most mainstream businesses looking at cloud today, you’ve got a cash crunch you’ve got to get through, you’re deploying at most dozens of servers this year, and you’re not putting up and tearing down servers hour by hour. Don’t get fooled into thinking that Amazon’s the only possible option for you. It’s just one of many. Every cloud infrastructure services platform is better for some needs than others.

(Gartner clients interested in learning more about Amazon’s EC2 platform should read my note “Is Amazon EC2 Right For You?”. Those wanting to know more about S3 should read “A Look at Amazon’s S3 Cloud-Computing Storage Service”, authored by my colleagues Stan Zaffos and Ray Paquet.)

Interesting tidbits for the week

A bit of a link round-up…

My colleague Daryl Plummer has posted his rebuttal in our ongoing debate over cloud infrastructure commoditization. I agree with his assertion that over the long term, the bigger growth stories will be the value-added providers and not the pure-play cloud infrastructure guys, but I also stick to my guns in believing that customer service is a differentiator and we’ll have a lot of pure-plays, not a half-dozen monolithic mega-infrastructure-providers.

Michael Topalovich, of Delivered Innovation, has blogged a post-mortem on Coghead. It’s a well-written and detailed dissection of what went wrong, from the perspective of a former Coghead partner. Anyone who runs or uses a platform as a service would be well served to read it, as there are plenty of excellent lessons to be learned.

Richard Jones, of Last.fm, has put up an annotated short-list of distributed key-value stores (mostly in the form of distributed hash tables). He’s looking for a premises-based rather than cloud-based solution, but his commentary is thoughtful and the comments thread is interesting as well.

Also, I have a new research note out (Gartner clients only), in collaboration with my colleague Ted Chamberlin: evaluation criteria for Web hosting (including cloud infrastructure services in that context), which is the decision framework that supports the Magic Quadrant we’re anticipating publishing in April. (Also coming soon: a “how to choose a cloud infrastructure provider” note and accompanying toolkit.)

Does cloud infrastructure become commoditized?

My colleague Daryl Plummer has mused upon the future of cloud in a blog post titled “Cloud Infrastructure: The Next Fat Dumb and Happy Pipe?” In it, he posits that cloud infrastructure will commoditize, that in 5-7 years the market will only support a handful of huge players, and that value-adds are necessary in order to stay in the game.

I both agree and disagree with him. I believe that cloud infrastructure will not be purely a commodity market, specifically because everyone in this market will offer value-added differentiation, and that even a decade out, we’ll still have lots of vendors, many of them small, in this game. Here’s a quick take on a couple of reasons why:

There are diminishing returns on the cost-efficiency of scale. There is a limit to how cheap a compute cycle can get. The bigger you are, the less you’ll pay for hardware, but in the end, even semiconductor companies have to make a little margin. And the bigger you are, the more you can leverage your engineers, especially your software tools guys — but it’s also possible that a tools vendor will deliver similar cost efficiencies to the smaller players (think about the role of Virtuozzo and cPanel in shared hosting). Broadly, smaller players pay more for things and may not leverage their resources as thoroughly, but they also have less overhead. It’s important to reach sufficient scale, but it’s not necessarily beneficial to be as large as possible.

This is a service. People matter. It’s hard to really commoditize a service, because people are a big wildcard. Buyers will care about customer service. Computing infrastructure is too mission-critical not to. The nuances of account management and customer support will differentiate companies, and smaller, more agile, more service-focused companies will compete successfully with giants.

The infrastructure itself is not the whole of the service. While there will be people out there who just buy server instances with a credit card, they are generally, either implicitly or explicitly, buying a constellation of stuff around that. At the most basic level, that’s customer support, and the management portal and tools, service level agreements, and actual operational quality — all things which can be meaningfully differentiated. And you can obviously go well beyond that point. (Daryl mentions OpSource competing with Amazon/IBM/Microsoft for the same cloud infrastructure dollar — but it doesn’t, really, because those monoliths are not going to learn the specifics of your SaaS app, right down to providing end-user help-desk support, like OpSource does. Cloud infrastructure is a means to an end, not an end unto itself.)

It takes time for technology to mature. Five years from now, we’ll still have stark differences in the way that cloud infrastructure services are implemented, and those differences will manifest themselves in customer-visible ways. And the application platforms will take even longer to mature (and by their nature, promote differentiation and vendor lock-in).

By the way, my latest research note, “Save Money Now With Hosted and ‘Cloud’ Infrastructure” (Gartner clients only) is a tutorial for IT managers, focused on how to choose the right type of cloud service for the application that you want to deploy. All clouds are not created equal, especially now.

More cloud news

Another news round-up, themed around “competitive fracas”.

Joyent buys Reasonably Smart. Cloud hoster Joyent has picked up Reasonably Smart, a tiny start-up with an APaaS offering based, unusually enough, on JavaScript and the Git version-control system. GigaOM has an analysis; I’ll probably post my take later, once I get a better idea of exactly what Reasonably Smart does.

DreamHost offers free hosting. DreamHost — one of the more prominent, popular mass-market and SMB hosting providers — is now offering free hosting for certain applications, including WordPress, Drupal, MediaWiki, and phpBB. There are a limited number of beta invites out there, and DreamHost notes that the service may become $50/year later. (The normal DreamHost base plan is $6/month.) Increasingly, shared hosting companies are having to compete with free application-specific hosting services like WordPress.com and Wikidot, and they’re facing the looming spectre of giants like Google giving away cloud capacity for free. And shared hosting is a cutthroat market already. So, here’s another marketing salvo being fired.

Google goes after Microsoft. Google has announced it’s hiring a sales force to pitch the Premier Edition of Google Apps to customers who are traditionally Microsoft customers. I’d expect the two key spaces where they’ll compete to be email and collaboration, going after the Exchange and SharePoint base.

Sun buys Q-Layer

Today, Sun announced the acquisition of Q-Layer, a Belgium-based start-up of about two dozen people. Q-Layer is a virtualization orchestration vendor, with a focus that seems similar to 3Tera’s. For an acquisition parallel, look at Dunes Technologies, acquired by VMware in late 2007.

When people say “orchestrate virtual resources”, usually what they mean is, “make software handle the messy background details of the infrastructure, automatically, while allowing me to navigate through a point-and-click GUI to provision and manage my virtualized data center resources”. In other words, they’ve got a GUI that can be exposed to users, who can create, configure, manage, and destroy virtual servers (and related equipment) at whim.
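To illustrate that split — a simple user-facing provisioning interface on top, with placement and the other messy background details decided automatically underneath — here is a minimal sketch. The classes and method names are invented for illustration only; they are not Q-Layer’s, 3Tera’s, or anyone else’s actual software.

    # Hypothetical illustration of an orchestration layer: the user asks for
    # a virtual server; the orchestrator silently picks a physical host and
    # tracks capacity. Invented for illustration, not a real product's API.

    import itertools
    from dataclasses import dataclass

    @dataclass
    class Host:
        name: str
        free_cores: int

    @dataclass
    class VirtualServer:
        vm_id: int
        cores: int
        host: Host

    class Orchestrator:
        """User-facing operations; the user never picks a physical host."""

        def __init__(self, hosts):
            self._hosts = hosts
            self._ids = itertools.count(1)
            self._vms = {}

        def create_server(self, cores: int) -> VirtualServer:
            # Placement decision happens here, invisibly to the user.
            host = max(self._hosts, key=lambda h: h.free_cores)
            if host.free_cores < cores:
                raise RuntimeError("insufficient capacity")
            host.free_cores -= cores
            vm = VirtualServer(next(self._ids), cores, host)
            self._vms[vm.vm_id] = vm
            return vm

        def destroy_server(self, vm_id: int) -> None:
            vm = self._vms.pop(vm_id)
            vm.host.free_cores += vm.cores

    if __name__ == "__main__":
        cloud = Orchestrator([Host("a", 8), Host("b", 16)])
        vm = cloud.create_server(cores=4)
        print(f"created VM {vm.vm_id} on host {vm.host.name}")
        cloud.destroy_server(vm.vm_id)

The real products, of course, wrap this kind of logic in a point-and-click GUI and handle storage, networking, and failure, but the essential value proposition is the same: expose intent, hide placement.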

Like 3Tera, Q-Layer targets the hosting market — notably, Q-Layer’s founders include folks from Dedigate, a small European managed hosting provider that was acquired by Terremark back in 2005. Unlike 3Tera, which has focused on Linux, Q-Layer has made the effort to support Sun technologies, like Solaris Containers. However, Q-Layer has virtually no market traction; it seems to have signed some small, country-specific managed hosting providers in Europe, who are offering a VMware-based Q-Layer solution. (3Tera’s notable hosting customers include Layered Technologies and 1-800-HOSTING, but despite relatively few hosting partners, it has done a good job of creating market awareness.)

Hosters who want to offer virtual data center hosting (“VDC hosting”) — blocks of capacity that customers can carve up into servers at whim — can buy an off-the-shelf orchestration solution, or, if they’re brave and sufficiently skilled, they can write their own (as Terremark has). It’s not a big market yet, but orchestration also has value for large enterprises that are deploying big virtualization environments and would like to delegate management down through the organization.

This acquisition expands Sun’s various cloud ambitions; Sun expects to derive near-term benefits from incorporating Q-Layer’s technology into its product plans this year.

On a lighter note, last week I had dinner with an old friend I hadn’t seen for some years. She’s a former Sun employee, and we were reminiscing about Sun’s heyday — I was Sun’s second-largest customer back in those days (ironically, only Enron bought more stuff from them). She joked that her Sun stock options had been priced so egregiously high that Sun would have had to invent teleportation for her to ever see a return on them. Then she stopped and said, “Of course, even if Sun did invent teleportation, they would still somehow have failed to make money from it. They’d probably have given it away for free to spite Microsoft.”

And there’s the rub: Sun is doing many interesting and cool things with technology, but seems to have a persistent problem actually generating meaningful revenue from those ideas. So the Q-Layer acquisition is reasonably logical and I know where I can expect it to fit into Sun’s product line, but I’m still feeling a bit like the plan is:

1. Buy company.
2. …
3. Profit!
