Author Archives: Lydia Leong
Cloud IaaS coverage at Gartner
I’ve got a pair of new European colleagues, and I thought I’d take a moment to introduce, on my blog, the folks who cover public cloud infrastructure as a service here at Gartner, and to answer a common question about the way we cover the space here.
There are three groups of analysts here at Gartner who cover cloud IaaS, who belong to three different teams. Those teams are our Infrastructure and Operations (I&O) team, which is part of the division that offers advice to technology buyers (what Gartner calls “end-user organizations”) in the traditional Gartner client base of IT managers; our High-Tech and Telecom Provider (“HTTP”) division, which offers advice to vendors and investors along with end-users, and also produces quantitative market data such as forecasts and market statistics; and our IT1 division (formerly our Burton Group acquisition), which offers advice to technology implementors, generally IT architects and senior engineers in end-user organizations.
We all collaborate with one another, but these distinctions matter for anyone buying research from us. If you’re just buying what Gartner calls Core Research, you’ll have access to what the I&O analysts publish, along with anything that HTTP analysts publish into Core. To get access to HTTP-specific content, though, you’ll need to buy an upgrade, usually in the form of a Gartner for Business Leaders (GBL) research seat. The IT1 research is sold separately; anything that IT1 analysts write (that’s not co-authored with analysts in other groups) goes solely to IT1 subscribers. The I&O analysts and HTTP analysts are available via inquiry by anyone who buys Gartner research, but the IT1 analysts are only inquiry-accessible by those who buy IT1 research specifically. You can, however, brief any of us — client status doesn’t matter for briefings.
So, we’re:
- Lydia Leong (HTTP, North America) – Cloud IaaS, Web hosting and colocation, content delivery networks, cloud computing and Internet infrastructure in general.
- Ted Chamberlin (I&O, North America) – Web and app hosting, colocation, cloud IaaS, network services (voice, data, and Internet).
- Drue Reeves (IT1, North America) – Data centers and cloud infrastructure, both internal and external.
- Kyle Hilgendorf (IT1, North America) – Data centers and cloud infrastructure, both internal and external.
- Tiny Haynes (I&O, Europe) – Web and app hosting, colocation, cloud IaaS, carrier services.
- Gregor Petri (HTTP, Europe) – Cloud IaaS, Web hosting and colocation, carrier services.
- Chee-Eng To (HTTP, Asia) – Carrier services in Asia, including cloud IaaS.
- Vincent Fu (HTTP, China) – Carrier services in China, including cloud IaaS.
Tiny Haynes and Gregor Petri are brand-new to Gartner, and they’ll be deepening our coverage of Europe as well as contributing to global research.
Citrix buys Cloud.com
(This is part of a series of “catch-up” posts of announcements that I’ve wanted to comment on but didn’t previously find time to blog about.)
Recently, Citrix acquired Cloud.com. The purchase price was reported to be in the $200m+ vicinity — around 100x revenues. (Even in this current run of outsized valuations, that’s a rather impressive payday for an infrastructure software start-up. I heard that VMware’s Paul Maritz was talking about how these guys were shopping themselves around, into which some people have read that they ‘had’ to sell, but companies that sell themselves for 100x trailing revenues don’t ‘have’ to be doing anything, other than sniffing around to see if anyone is willing to give them even more money.)
Cloud.com (formerly known as VMOps) is one of a great many “cloud operating system” companies — it competes with Abiquo, OpenStack, Eucalyptus, Nimbula, VMware (in the form of vCloud Director), and so on. By that, I mean that you can take Cloud.com and use it to build cloud IaaS of your very own. While you can use Cloud.com to build a private cloud, the reason that Cloud.com commanded such a high valuation is that it’s currently the primary alternative to VMware for service providers who want to build public cloud IaaS.
Cloud.com is a commercial open-source vendor, but realistically, it’s heavily on the commercial side, not the open-source side; people running Cloud.com in production are generally using the licensed, much more featureful, version. Large service providers who want to build commodity clouds, particularly on the Xen hypervisor (especially Citrix Xen, rather than open-source Xen), are highly likely to choose Cloud.com’s CloudStack product as the underlying “cloud OS”. We’re also increasingly hearing from service providers who intend to use Cloud.com to manage VMware-based environments (using the VMware stack minus vCloud Director), as part of a hypervisor-neutral strategy.
Key service provider customers include GoDaddy and Tata Communications. A particular private cloud customer of note is Zynga, which uses Cloud.com to provide Amazon-compatible (and thus Rightscale-compatible) infrastructure internally, letting it easily move workloads across their own infrastructure and Amazon’s.
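For a concrete sense of what “Amazon-compatible” buys you: because CloudStack can expose an EC2-style API, standard EC2 tooling can be pointed at a private cloud simply by swapping the endpoint. Here’s a minimal sketch using the boto Python library — the endpoint hostname, port, path, and keys are hypothetical placeholders, not actual CloudStack defaults:

```python
# Sketch: pointing ordinary EC2 tooling (boto) at a private cloud's
# EC2-compatible API instead of Amazon's. All endpoint details and
# credentials below are hypothetical placeholders.
import boto
from boto.ec2.regioninfo import RegionInfo

private_cloud = RegionInfo(name="private",
                           endpoint="cloudstack.example.internal")
conn = boto.connect_ec2(aws_access_key_id="YOUR_API_KEY",
                        aws_secret_access_key="YOUR_SECRET_KEY",
                        region=private_cloud,
                        is_secure=False,
                        port=8080,
                        path="/ec2")

# Exactly the same calls you'd make against Amazon EC2 itself --
# which is why EC2-oriented tooling like RightScale can manage
# workloads on both sides.
for reservation in conn.get_all_instances():
    for instance in reservation.instances:
        print("%s %s" % (instance.id, instance.state))
```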
Citrix, of course, now has a significant commitment to OpenStack, in the form of Project Olympus, their planned commercial distribution. The Cloud.com acquisition is complementary to that commitment, though, not competitive with it.
Cloud.com provides a much more complete set of features than OpenStack — it’s got much of what you need to have a turnkey cloud. Over time, as OpenStack matures, Cloud.com will be able to replace the lower levels of its software stack with OpenStack components instead. For Citrix, though (and broadly, service providers interested in VMware alternatives), this is a time-to-market issue as well as a solution-completeness issue.
In my conversations with a variety of organizations that are deeply strategically involved with OpenStack and working in-depth on the codebase, consensus seems to have developed that OpenStack is about 18 months from maturity (in the sense that it will be stable enough for a service provider who needs to depend on it to run their business to be able to reasonably do so). That’s forever in this fast-moving market. While Swift (the storage piece) is currently reliable and in production use at a variety of service providers, Nova (the compute piece) is not — there are no major service providers running Nova, and it’s acknowledged to not be service-provider-ready. (Rackspace is running the code it got via the acquisition of Slicehost, not the Nova project.) Service providers want to work with proven, stable code, and that’s not Nova right now — that’s Cloud.com. (Or VMware, and even there, people have been touchy about vCloud Director.)
It’s not that the service providers have a deep interest in running an open-source codebase; rather, they are looking for an alternative to VMware that is less expensive. Cloud.com currently fills that need reasonably well.
Similarly, it’s not that most of the members of the OpenStack coalition are vastly interested in an open-source cloud world, but rather, that they realize that there needs to be an alternative to VMware’s ecosystem, and it is in the best interests of VMware’s various competitors to pool their efforts (and for vendors in more of an “arms merchant” role, to ensure that their stuff works with every ecosystem out there). Open source is a means to an end there. Cloud.com’s stack, whether commercial or open source, is only a benefit to the OpenStack project, in the long term.
This acquisition means something pretty straightforward: Citrix is ensuring that it can deliver a full service provider stack of software that will enable providers to successfully compete against vCloud — or to have hypervisor-neutral solutions peacefully coexist, in a way that can be easily blended to meet business needs for a broad range of IaaS solutions. While Citrix would undoubtedly love to sell more XenServer licenses, ultimately the real money is in selling the rest of its portfolio to service providers — like NetScaler ADCs. Having a hypervisor-neutral cloud stack benefits Citrix’s overall position, even if some Cloud.com customers will choose to go VMware or KVM or open-source Xen rather than Citrix Xen for the hypervisor.
It certainly doesn’t hurt that Cloud.com’s Amazon-compatible APIs (and thus support of RightScale’s functionality) are also tremendously useful for organizations seeking to build Amazon-compatible private clouds at scale. No one else has really addressed this need, and VMware (in an infrastructure context) has largely targeted the market for “dependable”, classically enterprise-like infrastructure, rather than explored the opportunities in the emerging demand for commodity cloud.
In short, I think Cloud.com is a great buy for Citrix, and VMware-watchers interested in whether or not their vCloud service provider initiative is working well should certainly track Cloud.com wins vs. vCloud wins in the service provider space.
AT&T’s CDN re-launch
(This is part of a series of “catch-up” posts of announcements that I’ve wanted to comment on but didn’t previously find time to blog about.)
AT&T recently essentially re-launched its CDN — new technology, new branding, new footprint.
AT&T’s existing CDN product, called iCDS, has had limited success in the marketplace. They’ve been a low-cost competitor, but their deal success in the high-volume market has been low: Level 3, for instance, has offered prices just as good or better on a more featureful, higher-performance service, and with other competitors, notably Akamai and Limelight, willing to compete in the low-cost, high-volume market, it’s been difficult for AT&T to compete successfully on price (although they certainly helped the general decline in prices). We have seen them get good pick-up when CDN is added to a managed hosting contract — there are plenty of managed hosting customers happy to sign on for $1,000 or $2,000 worth of CDN. (We’ve seen this with other hosters that casually quote a little bit of CDN along with managed hosting deals, too; it’s not just an AT&T phenomenon.) We’ve also seen them pitch “you should use us if you want to reach iPhone customers”, but that’s too narrow a value proposition for most content providers to consider right now.
Previously, AT&T had been insistent on developing all of its CDN technology in-house. AT&T has a long and proud “built here and only here” tradition, especially with AT&T Labs, but it simply hasn’t worked out well for its CDN, especially since anything that AT&T builds in portal technology tends to look like it was built by hard-core geeks for other hard-core geeks — not the slick, user-friendly, Web 2.0 interfaces that you’ll see coming out of many other service providers these days. That left everything to do with how customers interface with iCDS to actually get something useful done — configuration, analytics, and so on — pretty sub-par relative to the market.
AT&T has now done something that would probably be smart for other carriers to emulate — buying CDN technology rather than developing it in-house. There are now plenty of vendors to choose from — Cisco, Juniper (Ankeena), Alcatel-Lucent (Velocix), Edgecast, 3Crowd, JetStream, etc. — and although these solutions vary wildly in quality and completeness, I’m still bemused by the number of carriers whose engineers are really jonesing to build their own in-house technology. In AT&T’s case, they’ve selected Edgecast’s software solution — a nice feather in the cap for Edgecast, definitely, given the kind of scrutiny that AT&T gives its solutions that are going to be deployed in its network. (Carrier CDNs are very much a hot trend at the moment, although they’re a hot trend relative to the otherwise glacial speed at which carriers do anything.)
AT&T is building out a new footprint of servers running the Edgecast software. They’ll operate both the old and new CDNs for some time — existing iCDS customers will continue to run on the existing iCDS platform and footprint, and new customers will go onto the new platform. That means it’s going to take some time to assess the real performance of the new CDN, as the POPs are being rolled out gradually. The new footprint will be similar, but not identical to, the old one.
However, I don’t think the launch of a new AT&T CDN is anywhere near as significant for the market as AT&T’s continued success in reselling Cotendo. The AT&T CDN itself is simply part of the already-commoditized market for high-volume delivery — the re-launch will likely return them to real competitiveness, but doesn’t change any fundamental market dynamics.
Riverbed acquires Zeus and Aptimize
(This is part of a series of “catch-up” posts of announcements that I’ve wanted to comment on but didn’t previously find time to blog about.)
Riverbed made two interesting acquisitions recently — Zeus and Aptimize — which I think signal a clear intention to be more than just a traditional WAN optimization controller (WOC) vendor. If you’re an investment banker or a networking vendor who has talked to me over the last year, you know that these two companies have been right at the top of my “who I think should get bought” list; these are both great pick-ups for Riverbed.
Zeus has been around for quite some time now, but a lot of people have never heard of them. They’re a small company in the UK. Those of you who have been following infrastructure for the Web since the 1990s might remember them as the guys who developed the highest-performance webserver — if a vendor did SPECweb benchmarks for its hardware back then, they generally used Zeus for the software. It was a great service provider product, too, especially for shared Web hosting — it had tons of useful sandboxing and throttling features that were light-years ahead of anyone else back then. But despite the fact that the tech was fantastic, Zeus was never really commercially successful with their webserver software, and eventually they turned their underlying tech to building application delivery controller (ADC) software instead.
Today, Zeus sells a high-performance, software-based ADC, with a nice set of features, including the ability to act as a proxy cache. It’s a common choice for high-end load-balancing when cloud IaaS customers need to be able to deploy a virtual appliance running on a VM, rather than dropping in a box. It’s also the underlying ADC for a variety of cloud IaaS providers, including Joyent and Rackspace (which means it’ll also get an integration interface to OpenStack). Notably, over the last two years, we’ve seen Zeus supplanting or supplementing F5 Networks in historically strong F5 service provider accounts.
Aptimize, by contrast, is a relatively new start-up. It’s a market leader in front-end optimization (FEO), sometimes also called Web performance optimization (WPO) or Web content optimization (WCO). FEO is the hot new thing in acceleration — it’s been the big market innovation of the last two years. While previous acceleration approaches have focused upon network and protocol optimization, or on edge caching, FEO optimizes the pages themselves — the HTML, Cascading Style Sheets (CSS), JavaScript, and so forth that go into them. It basically takes whatever the webserver outputs and attempts to automatically apply the kinds of best practices that Steve Souders has espoused in his books.
Aptimize makes a software-based FEO solution which can be deployed in a variety of ways, including as a virtual appliance running on a VM. (FEO is generally a computationally expensive thing, though, since it involves lots of text parsing, so it’s not unusual to see it run on a standalone server.)
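To make the flavor of FEO concrete, here’s a toy sketch of the kind of rewrites these products automate — my own illustration, not Aptimize’s actual engine, which parses the DOM and does far more than crude regex passes:

```python
# Toy front-end optimization pass: illustrative only, nothing like a
# production FEO engine (which parses the DOM rather than regexing it).
import re

def feo_pass(html):
    # Strip HTML comments: fewer bytes on the wire.
    html = re.sub(r"<!--.*?-->", "", html, flags=re.DOTALL)
    # Collapse whitespace between tags: crude minification.
    html = re.sub(r">\s+<", "><", html)
    # Move external scripts to just before </body>, so they stop
    # blocking page rendering (one of the Souders rules); assumes
    # the page has a well-formed </body> tag.
    scripts = re.findall(r"<script[^>]*src=[^>]*>\s*</script>", html)
    for tag in scripts:
        html = html.replace(tag, "")
    return html.replace("</body>", "".join(scripts) + "</body>")
```

A real FEO product layers on much more — image recompression, CSS spriting, inlining, domain sharding — and caches the transformed output, so the computationally expensive rewrite isn’t repeated on every request.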
So, what Riverbed has basically bought itself is the ability to offer a complete optimization solution — WOC, ADC, and FEO — plus the intellectual property portfolio to potentially eventually combine the techniques from all three families of products into an integrated product suite. (Note that Riverbed is fairly cloud-friendly already with its Virtual Steelhead.)
I think this also illustrates the vital importance of “beyond the box” thinking. Networking hardware has traditionally been just that — specialized appliances with custom hardware that can do something to traffic, really really fast. But off-the-shelf servers have gotten so powerful that they can now deliver the kind of processing oomph and network throughput that you used to have to build custom hardware logic to achieve. That’s leading to the rise of networking vendors who make software appliances instead, because it’s a heck of a lot easier and cheaper to launch a software company than a hardware company (probably something like a 3:1 ratio in funding needed), you can get product to market and iterate much more quickly, and you can integrate more easily with other products.
ObPlug for new, related research notes (Gartner clients only):
- Riverbed Reports 2Q11 Results, and Acquires Zeus and Aptimize: My colleague Frank Marsala presents an Invest Insight for institutional investors, on Riverbed’s recent announcements.
- How to Accelerate Internet Websites and Applications: My latest note, on how to combine network-based and front-end optimization techniques, looking at CDNs, ADCs, FEO, and more.
Amazon and Equinix partner for Direct Connect
Amazon has introduced a new connectivity option called AWS Direct Connect. In plain speak, Direct Connect allows an Amazon customer to get a cross-connect between his own network equipment and Amazon’s, in some location where the two companies are physically colocated. In even plainer speak, if you’re an Equinix colocation customer in their Ashburn, Virginia (Washington DC) data center campus, you can get a wire run between your cage and Amazon’s, which gives you direct connectivity between your router and theirs.
This is relatively cheap, as far as such things go. Amazon imposes a “port charge” for the cross-connect at $0.30/hour for 1 Gbps or $2.25/hour for 10 Gbps (on a practical level, since cross-connects are by definition nailed up 100% of the time, about $220/month and $1625/month respectively), plus outbound data transfer at $0.02/GB. You’ll also pay Equinix for the cross-connect itself (I haven’t verified the prices for these, but I’d expect they would be around $500 and $1500 per month). And, of course, you have to pay Equinix for the colocation of whatever equipment you have (upwards of $1000/month+ per rack).
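For anyone doing the back-of-the-envelope math themselves, here’s the arithmetic as a quick sketch (I’m assuming a 730-hour average month; the exact port figure moves a bit with the hours assumed):

```python
# Back-of-the-envelope AWS Direct Connect math, using the rates above.
# Assumes a 730-hour average month; Equinix cross-connect and
# colocation fees are additional.
HOURS_PER_MONTH = 730
TRANSFER_PER_GB = 0.02  # outbound data transfer, $/GB

def monthly_cost(port_rate_per_hour, gb_outbound):
    # The port charge accrues every hour (a cross-connect is, by
    # definition, nailed up 24x7), plus metered outbound transfer.
    return (port_rate_per_hour * HOURS_PER_MONTH
            + gb_outbound * TRANSFER_PER_GB)

# 1 Gbps port, 5 TB out per month: ~$219 port + $100 transfer
print("1 Gbps:  $%.2f/month" % monthly_cost(0.30, 5000))
# 10 Gbps port, 50 TB out per month: ~$1,642 port + $1,000 transfer
print("10 Gbps: $%.2f/month" % monthly_cost(2.25, 50000))
```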
Direct Connect has lots of practical uses. It provides direct, fast, private connectivity between your gear in colocation and whatever Amazon services are in Equinix Ashburn (and non-Internet access to AWS in general), vital for “hybrid cloud” use cases and enormously useful for people who, say, have PCI-compliant e-commerce sites with huge Oracle RAC databases and black-box encryption devices, but would like to put some front-end webservers in the cloud. You can also buy whatever connectivity you want from your cage in Equinix, so you can take that traffic and put it over some less expensive Internet connection (Amazon’s bandwidth fees are one of the major reasons customers leave them), or you can get private networking like Ethernet or MPLS VPN (an important requirement for enterprise customers who don’t want their traffic to touch the Internet at all).
This is not a completely new thing — Amazon has quietly offered private peering and cross-connects to important customers for some time now, in Equinix. But this now makes cross-connects into a standard option with an established price point, which is likely to have far greater uptake than the one-off deals that Amazon has been doing.
It’s not a fully-automated service — the sign-up is basically used to get Amazon to grant you an authorization so that you can put in an Equinix work order for the cross-connect. But it’s an important step in the right direction. (I’ve previously noted the value of this partnership in a blog post called “Why Cloud IaaS Customers Care About a Colo Option”. Also, for Gartner clients, see my research note “Customers Need Hybrid Cloud Compute IaaS” for a detailed analysis.)
This is good for Equinix, too, for the obvious reasons. For quite some time now, I’ve been evangelizing the importance of carrier-neutral colocation as a “cloud hub”, envisioning a future where these providers facilitate cross-connect infrastructures between cloud users and cloud providers. Widespread adoption of this model would allow an enterprise to, say, get a single rack of network equipment at Equinix (or Telecity or Interxion, etc.), and then cross-connect directly to all of its important cloud suppliers. It would drive cross-connect density, differentiation, and stickiness at the carrier-neutral colo providers who succeed in being the draw for these ecosystems.
It’s worth noting that this doesn’t grant Amazon a unique capability, though. Just about every other major cloud IaaS provider already offers colocation and private connectivity options. But it’s a crucial step for Amazon towards being suitable for more typical enterprise use cases. (And as a broader long-term ecosystem play, customers may prefer using just one or two “cloud hubs” like an Equinix location for their “cloud backhaul” onto private connectivity, especially if they have gateway devices.)
How to get a meeting with me at VMworld
I will be at VMworld in Las Vegas this year. If you’re interested in meeting with me during VMworld, please do the following:
Gartner clients and current prospects: Please contact your Gartner account executive to have them set up a meeting (they can use a Gartner internal system called WhereRU to schedule it). I’ve set aside Thursday, September 1st, for client meetings. If you absolutely cannot do Thursday, please have your account executive contact me and we’ll see what else we can work out. (Because there are often more meeting requests than there are meeting times available, I will allow our sales team to prioritize my time.)
Non-clients: Please contact me directly via email, with a range of times that you’re available. In general, these will be meetings after 5 pm, although depending on my schedule, I may fit in meetings throughout the day on Wednesday, August 31st, as well.
I am particularly interested in start-ups that have innovative cloud IaaS offerings, or which have especially interesting enabling technologies targeted at the service provider market.
New gTLDs require a business case
Recently, I’ve been deluged with client inquiries about the new gTLDs that ICANN finally approved last month. (That’s three years after they first accepted the gTLD stakeholder recommendation, and two years after they said they expected to start taking applications… which they now say they won’t do until January 2012.)
Tonight, I decided to write a research note, in hopes of persuading clients to read the note rather than trying to talk to me. I sat down at 5 pm to write it. I figured it’d be a quick little note. I finished at 3 am, with an hour break for dinner. It’s not a short note, and I’m not convinced that it’s really as complete as it should be, so it’s not done per se, and it still needs peer review…
I’ll throw out a couple of quick thoughts on this blog, though, and invite you to challenge my thinking:
- If you’re going to get a gTLD, you should start with the business plan, driven by your business / marketing guys, not IT security guys nattering about defensive moves. Lots of organizations won’t be able to come up with reasonable business plans, especially given the cost.
- A gTLD is valuable to a business with many affiliates or affinity sites. That includes companies that franchise or have agents, companies with partner networks, and companies that have big fan communities. It may also include companies that have a ton of unique names that need to be associated with a domain, for some reason, or which otherwise need a namespace to themselves.
- Most companies won’t become .brand rather than brand.com; among other things, in many cases nobody knows what the logical second-level domains would be. Global companies currently operating under a mess of country-specific domains may usefully consolidate under a .brand, though.
- Government entities are facing a ton of hype, especially from consultants selling gTLD-related services. But most governments won’t significantly benefit from a gTLD for their locale, and the benefits to residents of a geographic-name gTLD are pretty limited. (That doesn’t mean that you can’t make a successful business out of a geographic name, though; at the very least you’ll get the obligatory defensive registrations.)
- Defensive registrations of gTLDs are relatively pointless. Nobody’s going to cybersquat for the kind of money that a gTLD costs to apply for and operate, and the dispute process is so expensive that people aren’t going to go spend money applying for a gTLD that’s likely to be contested on trademark grounds.
- There will be some contention for generic terms — from companies associated with those terms, from trade associations, and from registry businesses that want to operate general-public registries for those terms.
- The proliferation of new gTLDs is going to multiply everyone’s defensive registration headaches for domain names. Many new gTLD registries will probably make most of their money off defensive registrations, and not active primary-use domains. This is very sad and creates negative value in the world.
I’m a fan of the digital brand management guys — companies like MarkMonitor, Melbourne IT, and NameProtect (Corporation Service Company, the “other CSC”), to name a few. I think they have a lot of specialized knowledge, and I tend to recommend that clients who need in-depth thinking on this stuff use them. If you really want to dive into gTLD strategy, they’re the folks to go to. (Yes, I know there are tons of other little consultancies out there that now claim to specialize in gTLDs. I don’t trust any of them yet, and what my clients have told me about their interactions with various such shops hasn’t made me feel better about their trustworthiness. Beware of consultants who either try to scare you or make your eyes light up in dollar symbols.)
Citrix invests in Cotendo
On the heels of the announcement of an Akamai/Riverbed partnership, Citrix has taken an investment in Cotendo, and announced the development of an integrated ADC/CDN solution.
This is a different sort of deal than Akamai/Riverbed. Whereas that deal addresses a particular use case — enterprises who want to accelerate a SaaS solution but the SaaS provider isn’t cooperating — the Citrix/Cotendo deal is intended to enhance dynamic acceleration by integrating with an on-premise ADC (in this case, a Citrix NetScaler, of course).
Back during the Netli days, Netli actually coupled their service, in most cases, with a lightweight on-premises ADC to ensure first-mile acceleration as well. This was phased out when Netli was acquired by Akamai, which did not want to have to deal with CPE (customer premises equipment). While there had been talks of partnerships with ADC vendors, the Akamai acquisition essentially killed them, and in the four years that have passed, this excellent, even vital, idea has essentially lain fallow.
Optimal acceleration of content requires end-to-end solutions — optimizing of the content itself, optimization of the network from the first mile all the way to the last mile, and optimization on the device. To make this happen, CDN providers need to have tight integration with ADC vendors.
I like the partnership and the investment, and I hope that it paves the way for an ecosystem in which many CDNs offer tighter integration with a variety of ADC devices from a range of popular vendors.
Limelight Networks buys AcceloWeb
I am way behind on my news announcements, or I’d have posted on this earlier: Limelight has bought AcceloWeb.
AcceloWeb is a front-end optimization (FEO) software company; the category is sometimes also called Web content optimization. FEO technologies improve website and Web application performance by optimizing the HTML, CSS, JavaScript, and images on the page. I’ve blogged about this in the past, regarding Cotendo’s integration of Google’s mod_pagespeed technology; if you’re interested in understanding more about FEO, see that post.
Like its competitors Aptimize and Strangeloop Networks, AcceloWeb offers a software-based solution. FEO is an emerging technology, and it is computationally expensive — far more so than the kind of network-based optimizations that you get in ADCs like F5’s, or WOCs like Riverbed’s. It is also complex, since FEO tries to rewrite the page without breaking any of its elements — especially hard to do with complex e-commerce sites, for instance, especially those that aren’t following architectural best practices (or even good practices).
CDN and FEO services are highly complementary, since caching the optimized page elements obviously makes sense. Level 3 and Strangeloop recently partnered, with Level 3 offering Strangeloop’s technology as a service called CDN Site Optimizer, although it’s a “side by side” implementation in Level 3’s CDN POPs, not yet integrated with the Level 3 CDN. (Obviously, the next step in that partnership would be integration.)
The integration of network optimization and FEO is the most significant innovation in the optimization market in recent years. For Limelight, this is an important purchase, since it gets them into the acceleration game with a product that Akamai doesn’t offer. (Akamai only has a referral deal with Strangeloop.)
Gartner clients: My research note on improving Web performance (combining on-premise acceleration, CDN / ADN, and FEO for complete solutions) will be out soon!
The forthcoming Public Cloud IaaS Magic Quadrant
Despite my various blog posts and email correspondence with a lot of people, there is persistent, ongoing confusion about our forthcoming Magic Quadrant for Public Cloud Infrastructure as a Service. I will attempt to clear it up here on my blog, so I have a reference that I can point people to.
1. This is a new Magic Quadrant. We are doing this MQ in addition to, and not instead of, the Magic Quadrant for Cloud IaaS and Web Hosting (henceforth the “cloud/hosting MQ”). The cloud/hosting MQ will continue to be published at the end of each calendar year. This new MQ (henceforth the “public cloud MQ”) will be published in the middle of the year, annually. In other words, there will be two MQs each year. The two MQs will have entirely different qualification and evaluation criteria.
2. This new public cloud MQ covers a subset of the market covered by the existing cloud/hosting MQ. Please consult my cloud IaaS market segmentation to understand the segments covered. The existing MQ covers the traditional Web hosting market (with an emphasis on complex managed hosting), along with all eight of the cloud IaaS market segments, and it covers both public and private cloud. This new MQ covers multi-tenant clouds, and it has a strong emphasis on automated services, with a focus on the scale-out cloud hosting, virtual lab environment, self-managed virtual data center, and turnkey virtual data center segments. The existing MQ weights managed services very highly; by contrast, the new MQ emphasizes automation and self-service.
3. This is cloud compute IaaS only. This doesn’t rate cloud storage providers, PaaS providers, or anything else. IaaS in this case refers to the customer being able to have access to a normal guest OS. (It does not include, for instance, Microsoft Azure’s VM role.)
4. When we say “public cloud”, we mean massive multi-tenancy. That means that the service provider operates, in his data center, a pool of virtualized compute capacity in which multiple arbitrary customers will have VMs on the same physical server. The customer doesn’t have any idea who he’s sharing this pool of capacity with.
5. This includes cloud service providers only. This is an MQ for the public cloud compute IaaS providers themselves — the services focused on are ones like Amazon EC2, Terremark Enterprise Cloud, and so forth. This does not include any of the cloud-enablement vendors (no Eucalyptus, etc.), nor does it include any of the vendors in the ecosystem (no RightScale, etc.).
6. The target audience for this new MQ is still the same as the existing MQ. As Gartner analysts, we write for our client base. These are corporate IT buyers in mid-sized businesses or enterprises, or technology companies of any size (generally post-funding or post-revenue, i.e., at the stage where they’re looking for serious production infrastructure). We expect to weight the scoring heavily towards the requirements of organizations who need a dependable cloud, but we also recognize the value of commodity cloud to our audience, for certain use cases.
At this point, the initial vendor surveys for this MQ have been sent out. They have gone out to every vendor who requested one, so if you did not get one and you wanted one, please send me email. We did zero pre-qualification; if you asked, you got it. This is a data-gathering exercise, where the data will be used to determine which vendors get a formal invitation to participate in the research. We do not release the qualification criteria in advance of the formal invitations; please do not ask.
If you’re a vendor thinking of requesting a survey, please consider the above. Are you a cloud infrastructure service provider, not a cloud-building vendor or a consultancy? Is your cloud compute massively multi-tenant? Is it highly automated and focused on self-service? Do you serve enterprise customers and actively compete for enterprise deals, globally? If the answers to any of these questions are “no”, then this is not the MQ for you.