Blog Archives

Q1 2010 inquiry in review

My professional life has gotten even busier — something that I thought was impossible, until I saw how far out my inquiry calendar was being booked. As usual, my blogging has suffered for it, as has my writing output in general. Nearly all of my writing now seems to be done in airports, while waiting for flights.

The things that clients are asking me about have changed in a big way since my Q4 2009 commentary, although this is partially due to an effort to shift some of my workload to other analysts on my team, so I can focus on the stuff that’s cutting edge rather than routine. I’ve been trying to shed as much of the routine colocation and data center leasing inquiry onto other analysts as possible, for instance; reviewing space-and-power contracts isn’t exactly rocket science, and I can get the trends information I need without needing to look at a zillion individual contracts.

Probably the biggest surprise of the quarter is how sharply my CDN inquiry has ramped up. It’s Akamai and more Akamai, for the most part — renewals, new contracts, and almost always, competitive bids. With aggressive new pricing across the board, a willingness to negotiate (and an often-confusing contract structure), and serious prospecting for new business, Akamai is generating a volume of CDN inquiry for me that I’ve never seen before, and I talk to a lot of customers in general. Limelight is in nearly all of these bids, too, by the way, and the competition in general has been very interesting — particularly AT&T. Given Gartner’s client base, my CDN inquiry is highly diversified; I see a tremendous amount of e-commerce, enterprise application acceleration, electronic software delivery and whatnot, in addition to video deals. (I’ve seen as many as 15 CDN deals in a week, lately.)

The application acceleration market in general is seeing some new innovations, especially on the software end (check out vendors like Aptimize), and more ADN offerings will be launched by the major CDN vendors this year. The question of “Do you really need an ADN, or can you get enough speed with hardware and/or software?” is certainly a key one right now, due to the big delta in price between pure content offload and dynamic acceleration.

By the way, if you have not seen Akamai CEO Paul Sagan’s “Leading through Adversity” talk given at MIT Sloan, you might find it interesting — it’s his personal perspective on the company’s history. (His speech starts around the 5:30 mark, and is followed by open Q&A, although unfortunately the audio cuts out in one of the most interesting bits.)

Most of the rest of my inquiry time is focused around cloud computing inquiries, primarily of a general strategic sort, but also with plenty of near-term adoption of IaaS. Traditional pure-dedicated hosting inquiry, as I mentioned in my last round-up, is pretty much dead — just about every deal has some virtualized utility component, and when it doesn’t, the vendor has to offer some kind of flexible pricing arrangement. Unusually, I’m beginning to take more and more inquiry from traditional data center outsourcing clients who are now looking at shifting their sourcing model. And we’re seeing some sharp regional trends in the evolution of the cloud market that are the subject of an upcoming research note.


The (temporary?) transformation of hosters

Classically, hosting companies have been integrators of technology, not developers of technology. Yet the cloud world is increasingly pushing hosting companies into being software developers — companies that create competitive advantage in significant part by writing software that delivers capabilities to customers.

I’ve heard the cloud IaaS business compared to the colocation market of the 1990s — the idea that you build big warehouses full of computers and you rent that compute capacity to people, comparable conceptually to renting data center space. People who hold this view tend to say things like, “Why doesn’t company X build a giant data center, buy a lot of computers, and rent them? Won’t the guy who can spend the most money building data centers win?” This view is, bluntly, completely and utterly wrong.

IaaS is functionally becoming a software business right now, one that is driven by the ability to develop software in order to introduce new features and capabilities, and to drive quality and efficiency. IaaS might not always be a software business; it might eventually be a service-and-support business that is enabled by third-party software. (This would be a reasonable view if you think that VMware’s vCloud is going to own the world, for instance.) And you can get some interesting dissonances when you’ve got some competitors in a market who are high-value software businesses vs. other folks who are mostly commodity infrastructure providers enabled by software (the CDN market is a great example of this). But for the next couple of years at least, it’s going to be increasingly a software business in its core dynamics; you can kind of think of it as a SaaS business in which the service delivered happens to be infrastructure.

To illustrate, let’s talk about Rackspace. Specifically, let’s talk about Rackspace vs. Amazon.

Amazon is an e-commerce company, with formidable retail operations skills embedded in its DNA, but it is also a software company, with more than a decade of experience under its belt in rolling out a continuous stream of software enhancements and using software to drive competitive advantage.

Amazon, in the cloud IaaS part of its Web Services division, is in the business of delivering highly automated IT infrastructure to customers. Custom-written software drives their entire infrastructure, all the way down to their network devices. Software provides the value-added enhancements that they deliver on top of the raw compute and storage, from the spot pricing marketplace to auto-scaling to the partially-automated MySQL management provided by the RDS service. Amazon’s success and market leadership depend on consistently rolling out new and enhanced features, functions, and capabilities. It can develop and release software on such aggressive schedules that it can afford to be almost entirely tactical in its approach to the market, prioritizing whatever customers and prospects are demanding right now.
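
For the technically inclined, here’s a tiny illustration of what delivering infrastructure purely through software looks like in practice: a hedged sketch of requesting spot capacity through the EC2 API, using the boto library as one possible client. The AMI ID and bid price are placeholders, not recommendations.

    # Sketch: bidding for EC2 spot capacity via the boto client library.
    # The AMI ID, bid price, and instance type are placeholders.
    from boto.ec2.connection import EC2Connection

    conn = EC2Connection()  # credentials come from the environment or ~/.boto

    requests = conn.request_spot_instances(
        price='0.05',             # maximum hourly bid, in USD
        image_id='ami-12345678',  # placeholder AMI
        count=1,
        instance_type='m1.small')

    for req in requests:
        print("%s %s" % (req.id, req.state))  # requests are fulfilled asynchronously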

Rackspace, on the other hand, is a managed hosting company, built around a deep culture of customer service. Like all managed hosters, it’s imperfect, but on the whole, it is the gold standard of service, and customer service is one of the key differentiators in managed hosting, driving Rackspace’s very rapid growth over the last five years. Rackspace has not traditionally been a technology leader; historically, it’s been a reasonably fast follower implementing mainstream technologies in use by its target customers, but people, not engineering, have been its competitive advantage.

And now, Rackspace is going head to head with Amazon on cloud IaaS. It has made a series of acquisitions aimed at bringing in developers and software technology, including Slicehost, JungleDisk, and Webmail.us. (JungleDisk is almost purely a software company, in fact; it makes money by selling software licenses.) Even if Rackspace emphasizes other competitive differentiation, like customer support, it’s still in direct competition with Amazon on pure functionality. Can Rackspace obtain the competencies it will need to be a software leader?

And in related questions: Can the other hosters who eschew the VMware vCloud route manage to drive the featureset and innovation they’ll need competitively? Will vCloud be inexpensive enough and useful enough to be widely adopted by hosters, and if it is, how much will it commoditize this market? What does this new emphasis upon true development, not just integration, do to hosters and to the market as a whole? (I’ve been thinking about this a lot, lately, although I suspect it’ll go into a real research note rather than a blog post.)


VMware buys Zimbra

Hot on the heels of a day of rumors, VMware announced the acquisition of Zimbra, the open-source email platform vendor previously purchased by (and, until now, still owned by) Yahoo!.

It’s a somewhat puzzling acquisition. Its key strategic thrust seems to be enabling the service provider ecosystem. Most service providers (i.e., hosters) already offer email as a service — although today, they primarily offer Hosted Exchange. Other platforms — Open-Xchange, OpenWave, Critical Path, Mirapoint, etc. — are used for more commodity email services. Relatively speaking, Zimbra doesn’t have as much traction in the service provider space, and VMware’s service provider ecosystem is not gasping for lack of reasonable platforms on which to offer email SaaS.

The email SaaS space is super-competitive. Businesses have come to the realization that email is a commodity that can be safely outsourced, and that huge cost savings can be realized by outsourcing it; this is driving rapid growth of mailboxes delivered as SaaS, but it’s also driving aggressive price competition. That, in turn, drives service providers to push their underlying email software vendors for lower license costs.

One has to speculate, then, that this acquisition is not just about email. It’s about the broader platform strategy, and the degree to which VMware wants to own an entire stack.

My colleagues and I are working on publishing a Gartner position (a “First Take”) on this acquisition, so I’m sorry to be a bit brief, and cryptic.


Savvis CEO Phil Koen resigns

Savvis announced the resignation of CEO Phil Koen on Friday, citing a “joint decision” between Koen and the board of directors. This was clearly not a planned event, and it’s interesting, coming at the end of a year in which Savvis’s stock has performed pretty well (it’s up 96% over last year, although the last quarter has been rockier, -8%). The presumed conflict between Koen and the board becomes clearer when one looks at a managed hosting comparable like Rackspace (up 276% over last year, 19% in the last quarter), rather than at the colocation vendors.

When the newly-appointed interim CEO Jim Ousley says “more aggressive pursuit of expanding our growth”, I read that as, “Savvis missed the chance to be an early cloud computing leader”. A leader in utility computing, offering on-demand compute on Egenera-based blade architectures, Savvis could have taken its core market message, shifted its technology approach to embrace a primarily virtualization-based implementation, and led the charge into enterprise cloud. Instead, its multi-pronged approach (do you want dedicated servers? blades? VMs?) led to a lengthy period of confusion for prospective customers, both in marketing material and in the sales cycle itself.

Savvis still has solid products and services, and we still see plenty of contract volume in our own dealings with enterprise clients, as well as generally positive customer experiences. But Savvis has become a technology follower, conservative in its approach, rather than a boldly visionary leader. Under previous CEO Rob McCormick, the company was often ahead of its time, which isn’t ideal either, but in this period of rapid market evolution, the consumerization of IT, and self-service, Savvis’s increasingly IBM-like market messages are a bit discordant with the marketplace, and its product portfolio has largely steered away from the fastest-growing segment of the market, self-managed cloud hosting.

Koen made many good decisions — among them, focusing on managed hosting rather than colocation. But his tenure was also a time of significant turnover within the Savvis ranks, especially at the senior levels of sales and marketing. When Ousley says the company is going to take a “fresh look” at sales and marketing, I read that as, “Try to retain sales and marketing leadership for long enough for them to make a long-term impact.”

Having an interim CEO in the short term — and one drawn from the ranks of the board, rather than from the existing executive leadership — means what is effectively a holding pattern until a new CEO is selected, gets acquainted with the business, and figures out what he wants to do. That’s not going to be quick, which is potentially dangerous at this time of fast-moving market evolution. But the impact of that won’t be felt for many months; in the near term, one would expect projects to continue to execute as planned.

Thus, for existing and prospective Savvis customers, I’d expect that this change in the executive ranks will result in exactly zero impact in the near term; anyone considering signing a contract should just proceed as if nothing’s changed.


Recent inquiry trends

It’s been mentioned to me that my “what are you hearing about from clients” posts are particularly interesting, so I’ll try to do a regular update of this sort. I have some limits on how much detail I can blog and stay within Gartner’s policies for analysts, so I can’t get too specific; if you want to drill into detail, you’ll need to make a client inquiry.

It’s shaping up into an extremely busy fall season, with people — IT users and vendors alike — sounding relatively optimistic about the future. If you attended Gartner’s High-Tech Forum (a free event we recently did for tech vendors in Silicon Valley), you saw that we showed a graph of inquiry trends, indicating that “cost” is a declining search term, and “cloud” has rapidly increased in popularity. We’re forecasting a slow recovery, but at least it’s a recovery.

This is budget and strategic planning time, so I’m spending a lot of time with people discussing their 2010 cloud deployment plans, as well as their two- and five-year cloud strategies. There’s planning activity around data centers, hosting, and CDN services, too, but the longer-term the planning, the more likely it is to involve cloud. (I posted on cloud inquiry trends previously.)

There’s certainly purchasing going on right now, though, and I’m talking to clients across the whole of the planning cycle (planning, shortlisting, RFP review, evaluating RFP responses, contract review, re-evaluating existing vendors, etc.). Because pretty much everything that I cover is a recurring service, I don’t see the end-of-year rush to finish spending 2009’s budget, but this is the time of year when people start to work on the contracts they want to go for as soon as 2010’s budget hits.

My colo inquiries this year have undergone an interesting shift towards local (and regional) data centers, rather than national players, reflecting a shift in colocation from being primarily an Internet-centric model, to being one where it’s simply another method by which businesses can get data center space. Based on the planning discussions I’m hearing, I expect this is going to be the prevailing trend going forward, as well.

People are still talking about hosting, and there are still plenty of managed hosting deals out there, but very rarely do I see a hosting deal now that doesn’t have a cloud discussion attached. If you’re a hoster and you can’t offer capacity on demand, most of my clients will now simply take you off the table. It’s an extra kick in the teeth if you’ve got an on-demand offering but it’s not yet integrated with your managed services and/or dedicated offerings; now you’re competing as if you were two providers instead of one.

The CDN wars continue unabated, and competitive bidding is increasingly the norm, even in small deals. Limelight Networks fired a salvo into the fray yesterday, with an update to their delivery platform that they’ve termed “XD”. The bottom line on that is improved performance at a baseline for all Limelight customers, plus a higher-performance tier and enhanced control and reporting for customers who are willing to pay for it. I’ll form an opinion on its impact once I see some real-world performance data.

There’s a real need in the market for a company that can monitor actual end-user performance and do consulting assessments of multiple CDNs and origin configurations. (It’d be useful in the equipment world, too, for ADCs and WOCs.) Not everyone can or wants to deploy Keynote or Gomez or Webmetrics for this kind of thing, those companies aren’t necessarily eager to do a consultative engagement of this sort, and practically every CDN on the planet has figured out how to game their measurements to one extent or another. That doesn’t make those measurements worthless in such assessments, but real-world data from actual users (via JavaScript agents, video player instrumentation, download client instrumentation, etc.) is still vastly preferable. Practically every client I speak to wants to do performance trials, but the means available for doing so are still overly limited and very expensive.

All in all, things are really crazy busy. So busy, in fact, that I ended up letting a whole month go by without a blog post. I’ll try to get back into the habit of more frequent updates. There’s certainly no lack of interesting stuff to write about.


Amazon VPC is not a private cloud

The various reactions to Amazon’s VPC announcement have been interesting to read.

Earlier today, I summarized what VPC is and isn’t, but I realize, after reading the other reactions, that I should have been clearer on one thing: Amazon VPC is not a private cloud offering. It is a connectivity option for a public cloud. If you have concerns about sharing infrastructure, they’re not going to be solved here. If you have concerns about Amazon’s back-end security, this is one more item you’re going to have to trust them on — all their technology for preventing VM-to-VM and VM-to-public-Internet communication is proprietary.

Almost every other public cloud compute provider already offers connectivity options beyond public Internet. Many other providers offer multiple types of Internet VPN (IPsec, SSL, PPTP, etc.), along with options to connect virtual servers in their clouds to colocated or dedicated equipment within the same data center, and options to connect those cloud servers to private, dedicated connectivity, such as an MPLS VPN connection or other private WAN access method (leased line, etc.).

All Amazon has done here is join the club — offering a service option that nearly all their competitors already offer. It’s not exactly shocking that customers want this; in fact, customers have been getting this from competitors for a long time now, bugging Amazon to offer an option, and generally not making a secret of their desires. (Gartner clients: Connectivity options are discussed in my How to Select a Cloud Computing Infrastructure Provider note, and its accompanying toolkit worksheet.)

Indeed, there’s likely a burgeoning market for Internet VPN termination gear of various sorts, specifically to serve the needs of cloud providers — it’s already commonplace to offer a VPN for administration, allowing cloud servers to be open to the Internet to serve Web hits while permitting administrative logins only via the back-end VPN-accessed network.
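
As a hedged sketch of that pattern (the group name, ports, and CIDR ranges here are hypothetical, and boto is just one way to drive the EC2 API), the security-group rules might look something like this: web ports open to the world, SSH reachable only from the VPN-accessed management range.

    # Sketch: web ports open to the Internet, SSH only over the admin VPN.
    # Group name and CIDR blocks are hypothetical.
    from boto.ec2.connection import EC2Connection

    conn = EC2Connection()
    sg = conn.create_security_group('web-tier', 'Public web, VPN-only admin')

    sg.authorize('tcp', 80, 80, '0.0.0.0/0')     # HTTP from anywhere
    sg.authorize('tcp', 443, 443, '0.0.0.0/0')   # HTTPS from anywhere
    sg.authorize('tcp', 22, 22, '10.8.0.0/16')   # SSH only from the VPN range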

What Amazon has done that’s special (other than being truly superb at public relations) is to be the only cloud compute provider that I know of to fully automate the process of dealing with an IPsec VPN tunnel, and to forego individual customer VLANs for their own layer 2 isolation method. You can expect that other providers will probably automate VPN set-up in the future, too, but it’s possibly less of a priority on their road maps. Amazon is deeply committed to full automation, which is necessary at their scale. The smaller cloud providers can get away with some degree of manual provisioning for this sort of thing, still — and it should be pretty clear to equipment vendors (and their virtual appliance competitors) that automating this is a public cloud requirement, ensuring that the feature will show up across the industry within a reasonable timeframe.

Think of it this way: Amazon VPC does not isolate any resources for an individual customer’s use. It provides Internet VPN connectivity to a shared resource pool, rather than public Internet connectivity. It’s still the Internet — the same physical cables in Amazon’s data center and across the world, and the same logical Internet infrastructure, just with a Layer 3 IPsec encrypted tunnel on top of it. VPC is “virtual private” in the same sense that “virtual private” is used in VPN, not in the sense of “private cloud”.


Amazon VPC

Today, Amazon announced a new enhancement to its EC2 compute service, called Virtual Private Cloud (VPC). Amazon’s CTO, Werner Vogels, has, as usual, provided some useful thoughts on the release, accompanied by his thoughts on private clouds in general. And as always, the RightScale blog has a lucid explanation.

So what, exactly, is VPC?

VPC offers network isolation to instances (virtual servers) running in Amazon’s EC2 compute cloud. VPC instances do not have any connectivity to the public Internet. Instead, they only have Internet VPN connectivity (specifically, an IPsec VPN tunnel), allowing the instances to seem as if they’re part of the customer’s private network.

For the non-techies among my readers: Think about the way you connect your PC to a corporate VPN when you’re on the road. You’re on the general Internet at the hotel, but you run a VPN client on your laptop that creates a secure, encrypted tunnel over the Internet, between your laptop and your corporate network, so it seems like your laptop is on your corporate network, with an IP address that’s within your company’s internal address range.

That’s basically what’s happening here with VPC — the transport network is still the Internet, but now there’s a secure tunnel that “extends” the corporate network to an external set of devices. The virtual instances get corporate IP addresses (Amazon now even supports DHCP options), and although of course the traffic is still coming through your Internet gateway and you are experiencing Internet performance/latency/availability, devices on your corporate WAN “think” the instances are local.

To set this up, you use new features of the Amazon API that let you create a VPC container (a logical construct for the concept of your private cloud), subnets, and gateways. When you actually activate the VPN, you begin paying 5 cents an hour to keep the tunnel up. You pay normal Amazon bandwidth charges on top of that (remember, your traffic is still going over the Internet, so the only extra expense to Amazon is the tunnel itself).
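
For the curious, here’s a rough sketch of that call sequence as it might look through the boto client library. Treat it as illustrative only: the CIDR blocks, customer gateway address, and BGP ASN below are placeholders, and boto is simply one convenient wrapper for the underlying API actions.

    # Sketch of the VPC setup sequence: container, subnet, gateways, VPN tunnel.
    # CIDR blocks, the customer gateway IP, and the BGP ASN are placeholders.
    from boto.vpc import VPCConnection

    c = VPCConnection()

    vpc = c.create_vpc('10.0.0.0/16')                # the VPC "container"
    subnet = c.create_subnet(vpc.id, '10.0.1.0/24')  # a subnet within it

    cgw = c.create_customer_gateway('ipsec.1', '203.0.113.12', 65000)  # your end
    vgw = c.create_vpn_gateway('ipsec.1')                              # Amazon's end
    c.attach_vpn_gateway(vgw.id, vpc.id)

    # The hourly tunnel charge starts once the VPN connection exists.
    vpn = c.create_vpn_connection('ipsec.1', cgw.id, vgw.id)
    print(vpn.id)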

When you launch an EC2 instance, you can now specify that it belongs to a particular VPC subnet. A VPC-enabled instance is not physically isolated from the rest of EC2; it’s still part of the general shared pool of capacity. Rather, the virtual privacy is achieved via Amazon’s proprietary networking software, which they use to isolate virtual instances from one another. (It is not VM-to-VM firewalling per se; Amazon says this is layer 2 network isolation.)
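
In client-library terms (again a hedged sketch, with placeholder AMI and subnet IDs), the only difference from a normal launch is the subnet parameter:

    # Sketch: launching an instance into a VPC subnet. IDs are placeholders.
    from boto.ec2.connection import EC2Connection

    conn = EC2Connection()
    reservation = conn.run_instances(
        'ami-12345678',               # placeholder AMI
        instance_type='m1.small',
        subnet_id='subnet-12345678')  # the VPC subnet created earlier

    instance = reservation.instances[0]
    print(instance.private_ip_address)  # an address from the subnet's range; no public IP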

At the moment, an instance can’t both be part of a VPC and accessible to the general Internet, which means that this doesn’t solve a common use case — the desire to use a private network for back-end administration or data, but still have the server accessible to the Internet so that it can be customer-facing. Expect Amazon to offer this option in the future, though.

As it currently stands, an EC2 instance in a VPC is limited to communicating with other instances within the VPC and with the corporate network, which addresses the use case of customers who are using EC2 for purely internally-facing applications and are seeking a more isolated environment. While some customers are going to want to have genuinely private network connectivity (i.e., the ability to drop an MPLS VPN connection into the data center), a scenario that Amazon is unlikely to support, the VPC offering is likely to serve many needs.

Note, by the way, that the current limitation on communication also means that EC2 instances can’t reach other Amazon Web services, including S3. (However, EBS does work, as far as I know.) While monitoring is supported, load-balancing is not. Thus, auto-scaling functionality, one of the more attractive recent additions to the platform, is limited.

VPN connectivity for cloud servers is not a new thing in general, and part of what Amazon is addressing with this release is a higher-security option, for those customers who are uncomfortable with the fact that Amazon, unlike most of its competitors, does not offer a private VLAN to each customer. For EC2 specifically, there have been software-only approaches, like CohesiveFT’s VPN-Cubed. Other cloud compute service providers have offered VPN options, including GoGrid and SoftLayer. What distinguishes the Amazon offering is that the provisioning is fully automated, and the technology is proprietary.

This is an important step forward for Amazon, and it will probably cause some re-evaluations by prospective customers who previously rejected an Amazon solution because of the lack of connectivity options beyond public Internet only.

Cloud services are evolving with extraordinary rapidity. I always caution customers not to base deployment plans for one year out on the current state of the technology, because every vendor is evolving so rapidly that the feature that’s currently missing and that you really want has, assuming it’s not something wacky and unusual, a pretty high chance of being available when you’re actually ready to start using the service in a year’s time.


Bits and pieces

Interesting recent news:

Amazon’s revocation of Orwell novels on the Kindle has stirred up some cloud debate. There seems to have been a thread of “will this controversy kill cloud computing”, which you can find in plenty of blogs and press articles. I think that question, in this context, is silly, and am not going to dignify it with a lengthy post of my own. I do think, however, that it highlights important questions around content ownership, application ownership, and data ownership, and the role that contracts (whether in the form of EULAs or traditional contracts) will play in the cloud. By giving up control over physical assets, whether data or devices, we place ourselves into the hands of third parties, and we’re now subject to their policies and foibles. The transition from a world of ownership to a world of rental, even “permanent” lifetime rental, is not a trivial one.

Engine Yard has expanded its EC2 offering. Previously, Engine Yard was offering Amazon EC2 deployment of its stack via an offering called Solo, for low-end customers who only needed a single instance. Now, they’ve introduced a version called Flex, which is oriented around customers who need a cluster and associated capabilities, along with a higher level of support. This is notable because Engine Yard has been serving these higher-end customers out of their own data center and infrastructure. This move, however, seems to be consistent with Engine Yard’s gradual shift from hosting towards being more software-centric.

The Rackspace Cloud Servers API is now in open beta. Cloud Servers is essentially the product that resulted from Rackspace’s acquisition of Slicehost. Previously, you dealt with your Cloud Server through a Web portal; this new release adds a RESTful API, along with some new features, like shared IPs (useful for keepalived and the like). Also of note is the resize operation, letting you scale your server size up or down, but this is really handwaving magic in front of replacing a smaller virtual server with a larger virtual server, rather than expanding an already-running virtual instance. The API is fairly extensive and the documentation seems decent, although I haven’t had time to personally try it out yet. The API responses, interestingly, include both human-readable data and WADL (Web Application Description Language, which is machine-parseable).
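
To give a flavor of it, here’s a rough sketch of authenticating and requesting a resize. The endpoint, header names, and flavor ID reflect my reading of the 1.0 documentation and should be treated as assumptions; the credentials and server ID are placeholders, and the requests library is just one convenient HTTP client.

    # Sketch: authenticate against the Cloud Servers 1.0 API, then resize a server.
    # Endpoint, headers, and flavor ID are assumptions from the 1.0 docs;
    # username, API key, and server ID are placeholders.
    import json
    import requests

    auth = requests.get('https://auth.api.rackspacecloud.com/v1.0',
                        headers={'X-Auth-User': 'myuser',
                                 'X-Auth-Key': 'my-api-key'})
    token = auth.headers['X-Auth-Token']
    endpoint = auth.headers['X-Server-Management-Url']

    # Behind the scenes, a resize swaps in a larger server rather than
    # growing the running one.
    resp = requests.post(endpoint + '/servers/12345/action',
                         headers={'X-Auth-Token': token,
                                  'Content-Type': 'application/json'},
                         data=json.dumps({'resize': {'flavorId': 3}}))
    print(resp.status_code)  # expect 202 Accepted if the resize was queued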

SOASTA has introduced a cloud-based performance certification program. Certification is something of a marketing gimmick, but I do think that SOASTA is, overall, an interesting company. Very simply, SOASTA leverages cloud system infrastructure to offer high-volume load-testing services. In the past, you’d typically execute such tests using a tool like HP’s LoadRunner, and many Web hosters offer, as part of their professional services offerings, performance testing using LoadRunner or a similar tool. SOASTA is a full-fledged software as a service offering (i.e., it is their own test harness, monitors, analytics, etc., not a cloud repackaging of another vendor’s tool), and the price point makes it reasonable not just for the sort of well-established organizations that could previously afford commercial performance-testing tools, but also for start-ups.


Magic Quadrant (hosting and cloud), published!

The new Magic Quadrant for Web Hosting and Hosted Cloud System Infrastructure Services (On Demand) has been published. (Gartner clients only, although I imagine public copies will become available soon as vendors buy reprints.) Inclusion criteria were set primarily by revenue; if you’re wondering why your favorite vendor wasn’t included, it was probably because they didn’t, at the January cut-off date, have a cloud compute service, or didn’t have enough revenue to meet the bar. Also, take note that this is direct services only (thus the somewhat convoluted construction of the title); it does not include vendors with enabling technology like Enomaly, or overlaid services like RightScale.

It marks the first time we’ve done a formal vendor rating of many of the cloud system infrastructure service providers. We do so in the context of the Web hosting market, though, which means that the providers are evaluated on the full breadth of the five most common hosting use cases that Gartner clients have. Self-managed hosting (including “virtual data center” hosting of the Amazon EC2, GoGrid, Terremark Enterprise Cloud, etc. sort) is just one of those use cases. (The primary cloud infrastructure use case not in this evaluation is batch-oriented processing, like scientific computing.)

We mingled Web hosting and cloud infrastructure on the same vendor rating because one of the primary use cases for cloud infrastructure is for the hosting of Web applications and content. For more details on this, see my blog post about how customers buy solutions to business needs, not technology. (You might also want to read my blog post on “enterprise class” cloud.)

We rated more than 60 individual factors for each vendor, spanning five use cases. The evaluation criteria note (Gartner clients only) gives an overview of the factors that we evaluate in the course of the MQ. The quantitative scores from the factors were rolled up into category scores, which in turn rolled up into overall vision and execution scores, which determine the dot placement on the Quadrant. All the number crunching is done by software — analysts don’t get to arbitrarily move dots around.
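
Purely to illustrate the mechanics (the factors, weights, and numbers here are invented for the example and are not Gartner’s actual model), the roll-up is conceptually just a weighted aggregation:

    # Purely illustrative roll-up; the factors, weights, and values are invented.
    def weighted_score(scores, weights):
        """Roll individual scores up into a weighted average."""
        total_weight = sum(weights.values())
        return sum(scores[k] * weights[k] for k in scores) / total_weight

    # Factor scores roll up into a category score...
    product_factors = {'feature_breadth': 4.0, 'automation': 3.5, 'sla': 3.0}
    product_weights = {'feature_breadth': 0.5, 'automation': 0.3, 'sla': 0.2}
    product_score = weighted_score(product_factors, product_weights)

    # ...and category scores roll up into an axis score (e.g., execution).
    categories = {'product': product_score, 'viability': 4.2, 'sales': 3.8}
    category_weights = {'product': 0.4, 'viability': 0.3, 'sales': 0.3}
    print(round(weighted_score(categories, category_weights), 2))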

To understand the Magic Quadrant methodology, I’d suggest you read the following:

Some people might look at the vendors on this MQ and wonder why exciting new entrants aren’t highly rated on vision and/or execution. Simply put, many of these vendors might be superb at what they do, yet still not rate very highly in the overall market represented by the MQ, because they are good at just one of the five use cases encompassed by the MQ’s market definition, or even good at just one particular aspect of a single use case. This is not just a cloud-related rating; to excel in the market as a whole, one has to be able to offer a complete range of solutions.

Because there’s considerable interest in vendor selection for various use cases (including non-hosting use cases) that are unique to public cloud compute services, we’re also planning to publish some companion research, using a recently-introduced Gartner methodology called a Critical Capabilities note. These notes look at vendors in the context of a single product/service, broken down by use case. (Magic Quadrants, on the other hand, look at overall vendor positioning within an entire market.) The Critical Capabilities note solves one of the eternal dilemmas of looking at an MQ, which is trying to figure out which vendors are highly rated for the particular business need that you have, since, as I want to reiterate, an MQ niche player may do the exact thing you need in a vastly more awesome fashion than a vendor rated a leader. Critical Capabilities notes break things down feature-by-feature.

In the meantime, for more on choosing a cloud infrastructure provider, Gartner clients should also look at some of my other notes:

For cloud infrastructure service providers: We may expand the number of vendors we evaluate for the Critical Capabilities note. If you’ve never briefed us before, we’d welcome you to do so now; schedule a briefing with me, Ted Chamberlin, and Mike Spink (a brand-new colleague in Europe).


I’m thinking about using Amazon, IBM, or Rackspace…

At Gartner, much of our coverage of the cloud system infrastructure services market (i.e., Amazon, GoGrid, Joyent, etc.) is an outgrowth of our coverage of the hosting market. Hosting is certainly not the only common use case for cloud, but it is the use case that is driving much of the revenue right now, a high percentage of the providers are hosters, and most of the offerings lean heavily in this direction.

This leads to some interesting phenomena, like the inquiries where the client begins with, “I’m considering using Amazon, IBM, or Rackspace…” That’s the result of customers thinking about the trade-offs between different types of solutions, not just vendors. Also, ultimately, customers buy solutions to business needs, not technology.

Customers say things like, “I’ve got an e-commerce website that uses the following list of technologies. I get a lot more traffic around Mother’s Day and Christmas. Also, I run marketing campaigns, but I’m never sure how much additional traffic an advertisement will drive to my site.”

If you’re currently soaking in the cloud hype, you might quickly jump on that to say, “A perfect case for cloud!” and it could be, but then you get into other questions. Is maximum cost savings the most important budgetary aspect, or is predictability of the bill more important? When he has traffic spikes, are they gradual, giving him hours (or even days) to build up the necessary capacity, or are they sudden, requiring provisioning as close to real time as possible? Does he understand how to architect the infrastructure (and app!) to scale, or does he need help? Does his application scale horizontally or vertically? Does he want to do capacity planning himself, or does he want someone else to take care of it? (Capacity planning equals budget planning, so it’s rarely a case of “eh, because we can scale quickly, it doesn’t matter.”) Does he have a good change management process, or does he want a provider to shepherd that for him? Does he need to be PCI compliant, and if so, how does he plan to achieve that? How much systems management does he want to do himself, and to what degree does he have automation tools, or want to use provider-supplied automation? And so on.

That’s just one of the use cases for cloud compute as a service. Similar sets of questions exist in each of the other use cases where cloud is a possible solution. It’s definitely not as simple as “more efficient utilization of infrastructure equals Win”.
