Qualifying for the next Cloud IaaS Magic Quadrant
Now that the Magic Quadrant for Cloud Infrastructure as a Service and Web Hosting, 2010 has been published, we’re going to be getting started on the mid-year update almost immediately (in February). The mid-year version will be cloud-only, specifically the self-provisioned “virtual data center” segment of the market.
Since I have been deluged with questions about what it takes to be included (and there’s been some interesting FUD on Quora), I thought I’d explain in public.
For many years now, Ted Chamberlin and I have done this Magic Quadrant using criteria that are very black-and-white; anyone should be able to look at them like a checklist. Those criteria are pretty simple:
- You are required to have certain services, which we try to define as clearly as possible.
- There’s a minimum revenue requirement.
- There’s a requirement to demonstrate global presence, either through data centers in particular geographies, or a certain amount of revenue derived from outside your home region.
If you meet those criteria, you’re in. If you don’t meet those criteria, no amount of begging will get you in. It has nothing to do with whether or not you are a client. It doesn’t even have anything to do with whether or not our clients ask about you, or whether we think you’re worthy; in inquiry, we routinely recommend some providers who don’t qualify for the MQ but who compete successfully against included vendors.
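To make the checklist nature of the criteria concrete, here’s a toy sketch of the test as code. To be clear, the service names and revenue floor below are hypothetical placeholders, not the actual inclusion criteria:

```python
# Illustrative only: the required services and the revenue floor are
# hypothetical placeholders, not the actual Magic Quadrant criteria.

REQUIRED_SERVICES = {"self_provisioned_vms", "online_ordering"}  # hypothetical
MIN_ANNUAL_REVENUE = 10_000_000  # hypothetical floor, in dollars

def qualifies(services, annual_revenue, has_global_presence):
    """Black-and-white checklist: meet every criterion, or you're out."""
    return (
        REQUIRED_SERVICES <= services        # offers all the required services
        and annual_revenue >= MIN_ANNUAL_REVENUE
        and has_global_presence              # data centers or revenue abroad
    )

print(qualifies({"self_provisioned_vms", "online_ordering"}, 25_000_000, True))  # True
print(qualifies({"self_provisioned_vms"}, 25_000_000, True))                     # False
```

Note that being a Gartner client is not an input to the function.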
Because we routinely recommend vendors who aren’t on the MQ, and we’re obviously interested in the market as a whole, we welcome briefings from all vendors who believe that they serve Gartner’s end-user client base (mid-sized businesses to large enterprises, plus technology companies and tech-heavy businesses of all sizes), regardless of whether they qualify for inclusion. We also track the lower end of the market, so vendors who serve small businesses are similarly welcome to brief us, though in that space we’re primarily interested in market-share leaders and anyone doing something clearly differentiated.
Analysts at Gartner choose what briefings they want to take, regardless of whether or not a vendor is a client (our system for briefing requests doesn’t even tell analysts the vendor’s client status). You are welcome to brief us as frequently as you have something interesting to say.
Cloud adoption and polling
I’m pondering my poll results from the Gartner data center conference, and trying to understand the discontinuities. I spoke at two sessions at the conference. One was higher-level and more strategic, called “Is Amazon or VMware the Future of Your Data Center?” The other was very focused and practical, called “Getting Real with Cloud Infrastructure Services”. The second session was in the very last slot, so you had to really want to be there, I suppose; its poll sample size was about half that of the first session’s. My polling questions were similar but not identical, which is the source of the difficulty in understanding the differences in results.
I normally ask a demographic question at the beginning of my session polls, about how much physical server infrastructure the audience members run in their data centers. This lets me cross-tabulate the poll results by demographic, with the expectation that those who run bigger data centers behave differently than those who run smaller data centers. Demographics for both sessions were essentially identical, with about a third of the audience under 250 physical servers, a third between 250 and 1000, and a third with more than 1000. I do not have the cross-tabbed results back yet, unfortunately, but I suspect they won’t explain my problematic results.
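The cross-tabulation itself is mechanically simple. As a minimal sketch, with made-up responses rather than the actual conference data:

```python
# A minimal sketch of cross-tabulating poll answers by demographic bucket.
# The responses below are made up for illustration, not the conference data.
from collections import Counter

# Each respondent: (physical-server demographic bucket, cloud usage answer)
responses = [
    ("<250", "informal"),
    ("<250", "none"),
    ("250-1000", "production"),
    ("250-1000", "none"),
    (">1000", "production"),
    (">1000", "informal"),
]

# Cross-tabulate: usage answers broken down by data center size.
xtab = Counter(responses)

for bucket in ("<250", "250-1000", ">1000"):
    row = {usage: n for (b, usage), n in xtab.items() if b == bucket}
    print(bucket, row)
```

The interesting part, of course, is not the mechanics but whether the bigger-data-center buckets answer differently.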
In my first session, 33% of the audience’s organizations had used Amazon EC2, and 33% had used a cloud IaaS provider other than Amazon. (The question explicitly excluded internal clouds.) I mentioned the denial syndrome in a previous blog post, and I was careful to note in my reading of the polling questions that I meant any use, not just sanctioned use — the buckets were very specific. The main difference in Amazon vs. non-Amazon was that more of the use of Amazon was informal (14% vs. 9%) and there was less production application usage (8% vs. 12%).
In my second session, 13% of respondents had used Amazon, and 6% had used a non-Amazon cloud IaaS provider. I am not sure whether to attribute this vast difference to the fact that I did not emphasize “any use”, or to the possibility that this session simply drew a very different sort of attendee than the first, perhaps one farther back on the adoption curve and looking for more basic material.
The two audiences also skewed extremely differently when asked what mattered to them in choosing a provider (choose the top three from a list of options). I phrased the questions differently, though: in the first session, it was about “things that matter”; in the second session, it was about “the provider who is best-in-class at this thing”. The results diverged most radically on customer service. It was overwhelmingly the most heavily weighted item in the first session (“excellent customer service, responsive and proactive in meeting my needs”), but by far the least important in the second session (where I emphasized “best-in-class customer service”, not merely “good enough customer service”).
Things like this are why I generally do not like to cite conference keypad polls in my research, preferring instead to rely on formal primary research that’s been demographically weighted and where there are enough questions to tease out what’s going on in the respondent’s head. (I do love polls for being able to tailor my talk, on the fly, to the audience, though.)
Observations from the Gartner data center conference
I’m at Gartner’s Data Center Conference this week, and I’m finding it to be an interesting contrast to our recent Application Architecture, Development, and Integration Summit.
AADI’s primary attendees are enterprise architects and other people who hold leadership roles in applications development. The data center conference’s primary attendees are IT operations directors and others with leadership roles in the data center. Both have significant CIO attendance, especially the data center conference. Attendees at the data center conference, especially, skew heavily towards larger enterprises and those who otherwise have big data centers, so when you see polling results from the conference, keep the bias of the attendees in mind. (Those of you who read my blog regularly: I cite survey data — formal field research, demographically weighted, etc. — differently than conference polling data, as the latter is non-scientific.)
At AADI, the embrace of the public cloud was enthusiastic, and if you asked people what they were doing, they would happily tell you about their experiments with Amazon and whatnot. At this conference, the embrace of the public cloud is far more tentative. In fact, my conversations not infrequently go like this:
Me: Are you doing any public cloud infrastructure now?
Them: No, we’re just thinking we should do a private cloud ourselves.
Me: Nobody in your company is doing anything on Amazon or a similar vendor?
Them: Oh, yeah, we have a thing there, but that’s not really our direction.
That is not “No, we’re not doing anything on the public cloud”. That’s, “Yes, we’re using the public cloud but we’re in denial about it”.
Lots of unease here about Amazon, which is not particularly surprising. That was true at AADI as well, but people were much more measured there — they had specific concerns, and ways they were addressing, or living with, those concerns. Here the concerns are more strident, particularly around security and SLAs.
Feedback from folks using the various VMware-based public cloud providers seems to be consistently positive — people seem to uniformly be happy with the services themselves and are getting the benefits they hoped to get, and are comfortable. Terremark seems to be the most common vendor for this, by a significant margin. Some Savvis, too. And Verizon customers seem to have talked to Verizon about CaaS, at least. (This reflects my normal inquiry trends, as well.)
Akamai sues Cotendo for patent infringement
How to tell when a CDN has arrived: Akamai sues them for patent infringement.
The lawsuit that Akamai has filed against Cotendo alleges the violation of three patents.
The most recent of the patents, 7,693,959, is dated April 2010, but it’s a continuation of several previous applications — the age of the underlying work is nicely demonstrated by its references to the Netscape Navigator browser and to ADC vendors that ceased to exist years ago. It seems to be a sort of generic basic CDN patent, though it governs the use of replication to distribute objects, not just caching.
The second Akamai patent, 7,293,093, essentially covers the “whole site delivery” technique.
The oldest of the patents, 6,820,133, was obtained by Akamai when it acquired Netli. It essentially covers the basic ADN technique of optimized communication between two server nodes in a network. I don’t know how defensible this patent is; there are similar ones held by a variety of ADC and WOC vendors who use such techniques.
My expectation is that the patent issues won’t affect the CDN market in any significant way, and the lawsuit is likely to drag on forever. Fighting the lawsuit will cost money, but Cotendo has deep-pocketed backers in Sequoia and Benchmark. It will obviously fuel speculation that Akamai will try to buy them, but I don’t think that it’s going to be a simple way to eliminate competition in this market, especially given what I’ve seen of the roadmaps of other competitors (under NDA). Dynamic acceleration is too compelling to stay a monopoly (and incidentally, the ex-Netli executives are now free of their competitive lock-up), and the only reason it’s taken this long to arrive was that most CDNs were focused on the huge, and technically far easier, opportunity in video delivery.
It’s an interesting signal that Akamai takes Cotendo seriously as a competitor, though. In my opinion, they should; since early this year, I’ve regularly seen Cotendo as a bidder in the deals that I look at. The AT&T deal is just a sweetener — and since AT&T will apply your corporate enterprise discount to a Cotendo deal, I’ve got enterprise clients who sometimes look at 70% discounts on top of an already less-expensive price quote. And Akamai’s problem is that Cotendo isn’t just, as Akamai alleges in its lawsuit, a low-cost competitor; Cotendo is trying to innovate and offer things that Akamai doesn’t have.
Competition is good for the market, and it’s good for customers.
Symposium, Smart Control, and cloud computing
I originally wrote this right after Orlando Symposium and forgot to post it.
Now that Symposium is over, I want to reflect back on the experiences of the week. Other than the debate session that I did (Eric Knipp and I arguing the future of PaaS vs. IaaS), I spent the conference in the one-on-one room or meeting with customers over meals. And those conversations resonated in sharp and sometimes uncomfortable ways with the messages of this year’s Symposiums.
The analyst keynote said this:
An old rule for an old era: absolute centralized IT control over technology people, infrastructure, and services. Rather than fight to regain control, CIOs and IT Leaders must transform themselves from controllers to influencers; from implementers to advisers; from employees to partners. You own this metamorphosis. As an IT leader, you must apply a new rule, for a new era: Smart Control. Open the IT environment to a Wild Open World of unprecedented choice, and better balance cost, risk, innovation and value. Users can achieve incredible things WITHOUT you, but they can only maximize their IT capabilities WITH you. Smart Control is about managing technology in tighter alignment with business goals, by loosening your grip on IT.
The tension of this loss of control was by far the overwhelming theme of my conversations with clients at Symposium. Since I cover cloud computing, my topic was right at the heart of the new worries. Data from a survey we did earlier this year showed that less than 50% of cloud computing projects were initiated by IT leadership.
Most of the people that I talked to strongly held one of two utterly opposing beliefs: either that cloud computing was the Thing of the Future and the way they and their companies would consume IT going forward, or that cloud computing was something their companies could never embrace. My mental shorthand for the extremes of these positions is “enthusiastic embrace” and “fear and loathing”.
I met IT leaders, especially in mid-sized businesses, who were tremendously excited by the possibilities of the cloud to free their businesses to move and innovate in ways that they never had before, and to free IT from the chores of keeping the lights on so it could help deliver more business value. They understood that the technology in the cloud is still fairly immature, but they wanted to figure out how they could derive benefits right now, taking smart risks in order to learn and to deliver immediate business value.
And I also met IT leaders who fear the new world and everything it represents — a loss of control, a loss of predictability, an emphasis on outcomes rather than outputs, the need to take risks to obtain rewards. These people were looking for reasons not to change — reasons that they could take back to the business for why they should continue to do the things they always have, perhaps with some implementation of a “private cloud”, in-house.
The analyst keynote pointed out that the new type of CIO won’t ask first what the implementation cost is, or whether something complies with the architecture, but whether it’s good for the enterprise. They will train their teams to think like business executives, asking first, “Is this valuable?” and only then, “How can we make this work?” — rather than the other way around.
Changing technologies is often only moderately difficult. Changing mindsets, though, is an enormous challenge.
More on Symposium 1-on-1s
My calendar for one-on-ones at Symposium is now totally full, as far as I know, so here’s a look at some updated stats:
Cloud 23
Colocation 9
Hosting 9
CDN 3
(No overlaps above. Things have been disambiguated. This counts only the formal 1-on-1s, and not any other meetings I’m doing here.)
The hosting discussions have a very strong cloud flavor to them, as one might expect. The broad trend from today is that most people talking about cloud infrastructure here are really talking about putting individual production applications on virtualized infrastructure in a managed hosting environment, with at least some degree of capacity flexibility. But at the same time, this is a good thing for service providers — it clearly illustrates that people are comfortable putting highly mission-critical production applications on shared infrastructure.
Symposium 1-on-1 trends
My 1-on-1 schedule is filling rapidly. (People who didn’t pre-book, you’re in luck: I was only added to the system on Friday or so, so I still have openings, at least as of this writing.)
Trend-watchers might be interested in how these break down so far:
17 on cloud
8 on colocation
4 on hosting
2 on CDN
(A few of these mention two topics, such as ‘colo and cloud’, and are counted twice above.)
Slightly over half the cloud 1-on-1s so far are about cloud strategy in general; the remainder are about infrastructure specifically.
What’s also interesting to me is that the 1-on-1s scheduled prior to on-site registration appear to be more about colocation and hosting, but the on-site 1-on-1 requests are very nearly pure cloud. I’m not sure what that signifies, although I expect the conversations may be illuminating in this regard.
Rackspace AppMatcher and SaaS marketplaces
Rackspace has teased a preview page for a SaaS marketplace called AppMatcher. (It looks to be more of a front-page mock-up than anything actual; note that the “1,000 apps”, “100,000 businesses” bits look like placeholders.) The concept is pretty straightforward: app providers provide info about their target customer, and potential customers provide info about their company, and the marketplace tries to hook them up.
Hosting companies have increasingly been talking about doing marketplaces for their customers and their partner ecosystems, particularly in the SaaS space, and Rackspace’s foray is one of several that I know of that are still under wraps. Parallels has gotten into the act on the small business end, too, with the SaaS marketplace it’s integrated into its software. And a ton of other companies in the technology services space are also wanting to jump into the SaaS marketplace / exchange / brokerage business. (And you have folks like Etelos who build software to enable SaaS marketplaces.)
We’re seeing other software marketplaces in the cloud context, of course. For instance, there’s the increasing trend towards cloud IaaS providers offering an app store for rent-by-the-hour or otherwise cloud-license-friendly software — an excellent and important convenience, even necessity, for really driving cost savings for customers. And there are plenty of opportunities, including in the marketplace context, to add value as a broker.
However, I suspect that, by default, these days, if you need software that does X, you type X into Google and pray that you’ve picked the right search term (or that the vendors have done reasonable SEO). Anyone who wants to do a meaningful matching marketplace needs to do better than this — which means that the listings in a marketplace need to be pretty comprehensive before it offers better results than Google. What a marketplace offers the buyer, hopefully, is more nicely-encapsulated information than raw search results easily deliver.
However, many SaaS apps are narrow, “long tail” applications — closer to a handful of features that properly belong in a larger software suite than to full products unto themselves. That makes it harder to build listings that are both wide and deep, and it makes useful community review more difficult, because an app with only a handful of customers quite possibly gets no thoughtful reviews. And for many of the companies considering SaaS marketplaces, the length of the long tail makes it difficult to have a meaningful partner model.
So what does Rackspace have that previous attempts to launch general SaaS marketplaces did not? Money to do marketing. And, at least thus far, an apparent willingness not to charge for the matching service. That might very well drive the kind of SaaS vendor sign-ups necessary to make the marketplace meaningful to potential customers.
Liability and the cloud
I saw an interesting article about cloud provider liability limits, including some quotes from my esteemed colleague Drue Reeves (via Gartner’s acquisition of Burton). A quote in an article about Eli Lilly and Amazon also caught my eye: Tanya Forsheit, founding partner at the Information Law Group, “advised companies to walk away from the business if the cloud provider is not willing to negotiate on liability.”
I frankly think that this advice is both foolish and unrealistic.
Let’s get something straight: every single IT company out there takes measures to strongly limit its liability in case something goes wrong. For service providers — data center outsourcers, Web hosting companies, and cloud providers among them — contracts usually specify that the maximum they’re liable for, regardless of what happens, is related in some way to the fees paid for the service.
Liability is different from an SLA payout. The terms of service-level agreements and their financial penalties vary significantly from provider to provider. SLA payouts are usually limited to 100% of one month of service fees, and may be limited to less. Liability, on the other hand, in most service provider contracts, specifically refers to a limitation of liability clause, which basically states the maximum amount of damages that can be claimed in a lawsuit.
It’s important to note that liability is not a new issue in the cloud. It’s an issue for all outsourced services. Prior to the cloud, any service provider who had their contracts written by a lawyer would always have a limitation of liability clause in there. Nobody should be surprised that cloud providers are doing the same thing. Service providers have generally limited their liability to some multiple of service fees, and not to the actual damage to the customer’s business. This is usually semi-negotiable, i.e., it might be six months of service fees, twelve months of fees, some flat cap, etc., but it’s never unlimited.
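To see why the distinction between SLA payouts and liability caps matters, here’s a back-of-the-envelope comparison. All of the figures are entirely hypothetical; no actual contract is being quoted:

```python
# Illustrative only: hypothetical figures, not from any actual contract.
# Contrasts a typical SLA credit cap with a limitation-of-liability cap.

MONTHLY_FEE = 10_000  # hypothetical monthly service fee, in dollars

# SLA payouts are often capped at 100% of one month of service fees.
sla_credit_cap = 1.0 * MONTHLY_FEE

# Limitation-of-liability clauses often cap damages at a multiple of the
# fees paid, e.g. twelve months of fees (the multiple is semi-negotiable).
liability_cap = 12 * MONTHLY_FEE

# The actual business damage from a prolonged outage can dwarf both caps.
outage_business_loss = 2_000_000  # hypothetical lost revenue

print(f"SLA credit cap: ${sla_credit_cap:,.0f}")
print(f"Liability cap:  ${liability_cap:,.0f}")
print(f"Business loss:  ${outage_business_loss:,.0f}")
```

Even a generously negotiated liability cap is typically an order of magnitude or more below the real business exposure, which is exactly why the clause matters less than it appears to.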
For years, Gartner’s e-commerce clients have wanted service providers to be liable for things like revenues lost when the site is down. (American Eagle Outfitters no doubt wishes it could hold IBM’s feet to the fire on that right now.) Service providers have steadfastly refused, though, a decade or so back, the insurance industry considered whether it was reasonable to offer insurance for this kind of risk.
Yes, you’re taking a risk by outsourcing. But you’re also taking risks when you insource. Contract clauses are not a substitute for trust in the provider, or any kind of proxy indicating quality. (Indeed, a few years back, small SaaS providers often gave away so much money in SLA refunds for outages that we advised clients not to use SLAs as a discounting mechanism!) You are trying to ensure that the provider has financial incentive to be responsible, but just as importantly, a culture and operational track record of responsibility, and that you are taking a reasonable risk. Unlimited liability does not change your personal accountability for the sourcing decision and the results thereof.
In practice, the likelihood that you’re going to sue your hosting provider is vanishingly tiny. The likelihood that it will actually go to trial, rather than being settled in arbitration, is just about nil. The liability limitation just doesn’t matter that much, especially when you take into account what you and the provider are going to be paying your lawyers.
Bottom line: There are better places to focus your contract-negotiating and due-diligence efforts with a cloud provider, than worrying about the limitation-of-liability clause. (I’ve got a detailed research note on cloud SLAs coming out in the future that will go into all of these issues; stay tuned.)
HostingCon keynote slides
My HostingCon keynote slides are now available. Sorry for the delay in posting these slides — I completely spaced on remembering to do so.