Monthly Archives: August 2011

The Global Internet Speedup Initiative

The prosaically, if accurately, named IETF draft specification, “Client Subnet in DNS Requests” (“edns-client-subnet”), has gotten some breathless marketing spin as the Global Internet Speedup Initiative.

I blogged about this about a year and a half ago: “Google’s DNS protocol extension and CDNs”. See that post for a deeper analysis. (I also previously blogged about the problem with using DNS as the CDN vantage point.)

My opinion on this hasn’t changed. In the intervening time, various DNS service providers and CDN providers have contributed to the draft, and the end result seems to be pretty reasonable. The extension solves a common problem for the CDNs — returning appropriately close CDN servers to an end-user who is using a DNS resolver that’s not close to his own location (common for users on some ISP networks, along with those who use resolvers from OpenDNS, Neustar, etc., and potentially for some users in enterprise networks).
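
For the curious, here is roughly what the mechanism looks like on the wire: a minimal sketch using the dnspython library (assuming a recent version that ships dns.edns.ECSOption; the hostname is a placeholder). The querying resolver attaches a truncated client subnet to the request so that an ECS-aware authoritative server, such as a CDN’s, can answer based on the end user’s location rather than the resolver’s.

# Minimal sketch: attach an EDNS Client Subnet (ECS) option to a DNS query.
# Assumes dnspython with dns.edns.ECSOption; the hostname is a placeholder.
import dns.edns
import dns.message
import dns.query

# The end user's subnet, truncated to /24, since the draft recommends
# sending only a prefix rather than the full client address.
ecs = dns.edns.ECSOption("198.51.100.0", srclen=24)

# Build an A-record query for a hypothetical CDN-hosted hostname and
# include the ECS option via EDNS.
query = dns.message.make_query("www.example-cdn.com", "A",
                               use_edns=0, options=[ecs])

# Send it to an ECS-aware resolver (Google Public DNS honors ECS).
response = dns.query.udp(query, "8.8.8.8", timeout=5)
print(response)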

But I am impressed with the amount of hype that the vendors involved have managed to generate about a fiddly little technical detail that ordinary users have probably never thought about and shouldn’t ever really need to think about.

VMware vCloud Global Connect and commoditization

At VMworld, VMware has announced vCloud Global Connect, a federation between vCloud Datacenter Provider partners.

My colleague Kyle Hilgendorf has written a good analysis, but I wanted to offer a few thoughts on this as well.

The initial partners for the announcement are Bluelock (US, based in Indianapolis), SingTel (Singapore), and SoftBank Telecom (Japan). Notably, these vendors are landlocked, so to speak — they have deployments only within their home countries, and they probably will not expand significantly beyond their home territories. Consequently, they’re not able to compete for customers who want multi-region deployments but one throat to choke. (Broadly, there is still an insufficient number of high-quality cloud providers with multi-region deployments.)

These providers are relatively heavyweight — their typical customers are organizations that are going through a formal sourcing process in order to procure infrastructure, and that are highly concerned about security, availability, performance, and alignment with enterprise IT. I expect that anyone who chooses federation via Global Connect will apply intense scrutiny to the extension provider as well. Because the vCloud Datacenter architecture is to some extent prescriptive, and has relatively high requirements, in theory all federation providers should at least pass the buyer’s most basic “is this cloud provider architected in a reasonable fashion” checks.

However, I think customers will probably strongly prefer to work with a truly global provider if they need truly global infrastructure (as opposed to simply trying to globally source infrastructure that will be used in unique ways within each region) — and those with specific regional needs are probably going to continue to buy from regional (or local) providers, especially given how fragmented cloud IaaS sourcing frequently is.

It’s an important technical capability for VMware to demonstrate, though, since, implicitly, being able to do this between providers also means that it should be possible to move workloads between internal vClouds and external vClouds, and to disaster-recover between providers.

Importantly, the providers chosen for this launch are also providers who are not especially worried about being commoditized. Their margin is really made on the value-added services, especially managed services, and not so much from just providing compute cycles. Each of them probably gains more from being able to address global customer needs, than they lose from allowing their infrastructure to be used by other providers in this fashion.

I do believe that the core IaaS functionality will be commoditized over time, just like the server market has become commoditized. I believe, however, that IaaS providers will still be able to differentiate — it’ll just be a differentiation based on the stuff on top, not the IaaS platform itself.

In these early years of the market, there are significant differences in features and functionality between IaaS providers (and in how those relate to cost), but the roadmaps largely converge over the next few years. Just as hosters don’t depend on having special server hardware in order to differentiate from one another, cloud IaaS providers eventually won’t depend on having a differentiated base infrastructure layer — the value will primarily come higher up the stack.

That’s not to say that there won’t still be differences in the quality of the underlying IaaS platforms, and some providers will manage costs better than others. And the jury’s still out on whether providers who build their own intellectual property at the IaaS platform layer, versus those who buy into vCloud (or Cloud.com, some future OpenStack-based stack, or one of many other “cloud stacks”), will generate greater long-term value.

(For further perspective on commoditization, see an old blog post of mine.)

Recent research notes

This is just a quick call-out to draw your attention to the research that I’ve published recently.


Do You Have a Business Case for a Top-Level Domain?
I blogged previously on this topic, and this research note, done with my colleague Ray Valdes (whose coverage includes online user experience), dives deeply into consideration of the uses of gTLDs, the impact of gTLDs, the shifting landscape of how users find websites, and other things of interest to anyone considering a gTLD or preparing a business case for one.

How to Deliver Video to Dispersed Users Without Upgrading Your Network
Many organizations that are trying to deliver video to a lot of users think that they should use a traditional CDN. That’s not necessarily the right solution. This research note examines the range of solutions, divided by the delivery targets — Internet users outside your organization, your own employees at remote sites, Internet VPN users, and mixed-usage scenarios.

How to Accelerate Internet Websites and Applications
A range of techniques can be used for acceleration — network optimization, front-end optimization (sometimes called Web content optimization or Web performance optimization), and caching — and they can be delivered as appliances or services. This research note looks at selecting the right solution, and combining solutions, to maximize performance within your available budget.

(These notes are for Gartner clients only, sorry.)

What makes Akamai sticky?

There’s one thing in particular that tends to make Akamai customers “sticky” — the amount the customer uses professional services. The more professional services a customer consumes from Akamai, the less likely it is they’ll ever switch CDNs. In short: The more of a pain it’s been for them to integrate with Akamai’s CDN (usually due to the customer having a complex site that violates best practices related to content cacheability), and the more they have to use recurring professional services every time they update their site, the less likely it is that they’re going to move to another CDN. That’s for two reasons — one, because it’s difficult and expensive to do the up-front work to get the site onto another CDN, and two, because most other CDNs don’t like to do extensive professional services on a recurring basis. That makes the use of professional services a double-edged sword, since it’s not really a business with great margins, and you’re vulnerable if the customer eventually goes and builds a site that isn’t a great big hairy mess.

But there’s one Akamai product (delivered as a value-added additional service) that’s currently sufficiently compelling that customers and prospects who want it, won’t consider any other CDN that can’t offer the same. (And since it’s currently unique to Akamai, that means no competition, always a boon in a market where pricing is daily warfare.) I’m suddenly seeing it frequently quoted, which makes it likely that it’s a significant sales push, though it’s not a brand-new product. It’s a very effective attach.

Can you guess what it is?

(You may feel free to speculate on my blog, but if you want the answer, and you’re a Gartner client, make an inquiry request through the usual means.)

OpenStack, community, and commercialization

I wrote, the other day, about Citrix buying Cloud.com, and I realized I forgot to make an important point about OpenStack versus the various commercial vendors vying for the cloud-building market; it’s worthy of a post on its own.

OpenStack is designed by the community, which is to say that it’s largely designed by committee, with some leadership that represents, at least in theory, the interests of the community and has some kind of coherent plan in mind. It is implemented by the community, which means that people who want to contribute simply do so. If you want something in OpenStack, you can write it and hope that your patches are included, but there’s no guarantee. If the community decides something should be included in OpenStack, they need some committers to agree to actually write it, and hope that they implement it well and do it in some kind of reasonable timeframe.

This is not the way that one normally deals with software vendors, of course. If you’re a potentially large customer and you’d like to use Product X but it doesn’t contain Feature Y that’s really important to you, you can normally say to the vendor, “I will buy X if you add Y within Z timeframe,” and you can even write that into your contract (usually withholding payment and/or preventing the vendor from recognizing the revenue until they do it).

But if you’re a potentially large customer that would happily adopt OpenStack if it just had Feature Y, you have minimal recourse. You probably don’t actually want to write Feature Y yourself, and even if you did, you would have no guarantee that you wouldn’t end up maintaining a fork of the code; ditto if you paid some commercial entity (like one of the various ventures that do OpenStack consulting). You could try getting Feature Y through the community process, but that doesn’t really operate on the timeframe of business, offers no guarantee that it’ll be successful, and requires you to engage with the community in a way that you may have no interest in doing. And even if you do get it into the general design, you have no control over the implementation timeframe. So that’s not really workable for a business that would like to operate on a schedule.

There are a growing number of OpenStack startups that aim to offer commercial distributions with proprietary features on top of the community OpenStack core, including Nebula and Piston (by Chris Kemp and Joshua McKenty, respectively, and funded by Kleiner Perkins and Hummer Winblad, respectively, two VCs who usually don’t make dumb bets). Commercial entities, of course, can deal with this “I need to respond to customer needs more promptly than the open source community can manage” requirement.

There are many, many entities, globally, telling us that they want to offer a commercial OpenStack distribution. Most of these are not significant forks per se (although some plan to fork entirely), but rather plans to pick a particular version of the open source codebase and work from there, in order to achieve code stability as well as add whatever proprietary features are their secret sauce. Over time, that can easily accrete into a fork, especially because the proprietary stuff can very easily clash with whatever becomes part of OpenStack’s own core, given how early OpenStack is in its evolution.

Importantly, OpenStack flavors are probably not going to be like Linux distributions. Linux distributions differ mostly in which package manager they use, what packages are installed by default, and the desktop environment config out of the box — almost cosmetic differences, although there can be non-cosmetic ones (such as when things like virtualization technologies were supported). Successful OpenStack commercial ventures need to provide significant value-add and complete solutions, which, especially in the near term when OpenStack is still a fledgling immature project, will result in a fragmentation of what features can be expected out of a cloud running OpenStack, and possibly significant differences in the implementation of critical underlying functionality.

I predict most service providers will pick commercial software, whether in the form of VMware, Cloud.com, or some commercial distribution of OpenStack. Ditto most businesses making use of cloud stack software to do something significant. But the commercial landscape of OpenStack may turn out to be confusing and crowded.

Cloud IaaS coverage at Gartner

I’ve got a pair of new European colleagues, and I thought I’d take a moment to introduce, on my blog, the folks who cover public cloud infrastructure as a service here at Gartner, and to answer a common question about the way we cover the space here.

There are three groups of analysts here at Gartner who cover cloud IaaS, who belong to three different teams. Those teams are our Infrastructure and Operations (I&O) team, which is part of the division that offers advice to technology buyers (what Gartner calls “end-user organizations”) in the traditional Gartner client base of IT managers; our High-Tech and Telecom Provider (“HTTP”) division, which offers advice to vendors and investors along with end-users, and also produces quantitative market data such as forecasts and market statistics; and our IT1 division (formerly our Burton Group acquisition), which offers advice to technology implementors, generally IT architects and senior engineers in end-user organizations.

We all collaborate with one another, but these distinctions matter for anyone buying research from us. If you’re just buying what Gartner calls Core Research, you’ll have access to what the I&O analysts publish, along with anything that HTTP analysts publish into Core. To get access to HTTP-specific content, though, you’ll need to buy an upgrade, usually in the form of a Gartner for Business Leaders (GBL) research seat. The IT1 research is sold separately; anything that IT1 analysts write (that’s not co-authored with analysts in other groups) goes solely to IT1 subscribers. The I&O analysts and HTTP analysts are available via inquiry to anyone who buys Gartner research, but the IT1 analysts are only inquiry-accessible to those who buy IT1 research specifically. You can, however, brief any of us — client status doesn’t matter for briefings.

So, we’re:

  • Lydia Leong (HTTP, North America) – Cloud IaaS, Web hosting and colocation, content delivery networks, cloud computing and Internet infrastructure in general.
  • Ted Chamberlin (I&O, North America) – Web and app hosting, colocation, cloud IaaS, network services (voice, data, and Internet).
  • Drue Reeves (IT1, North America) – Data centers and cloud infrastructure, both internal and external.
  • Kyle Hilgendorf (IT1, North America) – Data centers and cloud infrastructure, both internal and external.
  • Tiny Haynes (I&O, Europe) – Web and app hosting, colocation, cloud IaaS, carrier services.
  • Gregor Petri (HTTP, Europe) – Cloud IaaS, Web hosting and colocation, carrier services.
  • Chee-Eng To (HTTP, Asia) – Carrier services in Asia, including cloud IaaS.
  • Vincent Fu (HTTP, China) – Carrier services in China, including cloud IaaS.

Tiny Haynes and Gregor Petri are brand-new to Gartner, and they’ll be deepening our coverage of Europe as well as contributing to global research.

Citrix buys Cloud.com

(This is part of a series of “catch-up” posts of announcements that I’ve wanted to comment on but didn’t previously find time to blog about.)

Recently, Citrix acquired Cloud.com. The purchase price was reported to be in the $200m+ vicinity — around 100x revenues. (Even in this current run of outsized valuations, that’s a rather impressive payday for an infrastructure software start-up. I heard that VMware’s Paul Maritz was talking about how these guys were shopping themselves around, into which some people have read that they ‘had’ to sell, but companies that sell themselves for 100x trailing revenues don’t ‘have’ to be doing anything, other than sniffing around to see if anyone is willing to give them even more money.)

Cloud.com (formerly known as VMOps) is one of a great many “cloud operating system” companies — it competes with Abiquo, OpenStack, Eucalyptus, Nimbula, VMware (in the form of vCloud Director), and so on. By that, I mean that you can take Cloud.com and use it to build cloud IaaS of your very own. While you can use Cloud.com to build a private cloud, the reason that Cloud.com commanded such a high valuation is that it’s currently the primary alternative to VMware for service providers who want to build public cloud IaaS.

Cloud.com is a commercial open-source vendor, but realistically, it’s heavily on the commercial side, not the open-source side; people running Cloud.com in production are generally using the licensed, much more featureful, version. Large service providers who want to build commodity clouds, particularly on the Xen hypervisor (especially Citrix Xen, rather than open-source Xen), are highly likely to choose Cloud.com’s CloudStack product as the underlying “cloud OS”. We’re also increasingly hearing from service providers who intend to use Cloud.com to manage VMware-based environments (using the VMware stack minus vCloud Director), as part of a hypervisor-neutral strategy.

Key service provider customers include GoDaddy and Tata Communications. A private cloud customer of particular note is Zynga, which uses Cloud.com to provide Amazon-compatible (and thus RightScale-compatible) infrastructure internally, letting it easily move workloads between its own infrastructure and Amazon’s.
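
As an illustration of why that API compatibility matters, here is a minimal sketch of pointing boto, a standard Python AWS library, at an EC2-compatible private cloud endpoint, so that tooling written against Amazon can be reused largely unchanged. The endpoint, port, and path below are hypothetical placeholders, not Zynga’s or any real Cloud.com deployment’s values.

# Minimal sketch: reuse EC2 tooling against an EC2-compatible private cloud.
# The endpoint, port, and path are hypothetical placeholders.
from boto.ec2.connection import EC2Connection
from boto.ec2.regioninfo import RegionInfo

region = RegionInfo(name="private-cloud", endpoint="cloud.internal.example.com")
conn = EC2Connection(aws_access_key_id="ACCESS_KEY",
                     aws_secret_access_key="SECRET_KEY",
                     is_secure=False,
                     port=7080,
                     path="/awsapi",
                     region=region)

# The same calls work against Amazon EC2 or against the compatible endpoint.
for reservation in conn.get_all_instances():
    for instance in reservation.instances:
        print(instance.id, instance.state)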

Citrix, of course, now has a significant commitment to OpenStack, in the form of Project Olympus, its planned commercial distribution. The Cloud.com acquisition is nevertheless complementary to, not competitive with, that OpenStack commitment.

Cloud.com provides a much more complete set of features than OpenStack — it’s got much of what you need to have a turnkey cloud. Over time, as OpenStack matures, Cloud.com will be able to replace the lower levels of its software stack with OpenStack components instead. For Citrix, though (and broadly, service providers interested in VMware alternatives), this is a time-to-market issue as well as a solution-completeness issue.

In my conversations with a variety of organizations that are deeply strategically involved with OpenStack and working in-depth on the codebase, a consensus seems to have developed that OpenStack is about 18 months from maturity (in the sense that a service provider that needs to depend on it to run its business could reasonably do so). That’s forever in this fast-moving market. While Swift (the storage piece) is currently reliable and in production use at a variety of service providers, Nova (the compute piece) is not — there are no major service providers running Nova, and it’s acknowledged to not be service-provider-ready. (Rackspace is running the code it got via the acquisition of Slicehost, not the Nova project.) Service providers want to work with proven, stable code, and right now that’s not Nova — that’s Cloud.com. (Or VMware, and even there, people have been touchy about vCloud Director.)

It’s not that the service providers have a deep interest in running an open-source codebase; rather, they are looking for an alternative to VMware that is less expensive. Cloud.com currently fills that need reasonably well.

Similarly, it’s not that most of the members of the OpenStack coalition are vastly interested in an open-source cloud world, but rather, that they realize that there needs to be an alternative to VMware’s ecosystem, and it is in the best interests of VMware’s various competitors to pool their efforts (and for vendors in more of an “arms merchant” role, to ensure that their stuff works with every ecosystem out there). Open source is a means to an end there. Cloud.com’s stack, whether commercial or open source, is only a benefit to the OpenStack project, in the long term.

This acquisition means something pretty straightforward: Citrix is ensuring that it can deliver a full service provider stack of software that will enable providers to successfully compete against vCloud — or to have hypervisor-neutral solutions peacefully coexist, in a way that can be easily blended to meet business needs for a broad range of IaaS solutions. While Citrix would undoubtedly love to sell more XenServer licenses, ultimately the real money is in selling the rest of its portfolio to service providers — like NetScaler ADCs. Having a hypervisor-neutral cloud stack benefits Citrix’s overall position, even if some Cloud.com customers will choose to go VMware or KVM or open-source Xen rather than Citrix Xen for the hypervisor.

It certainly doesn’t hurt that Cloud.com’s Amazon-compatible APIs (and thus support for RightScale’s functionality) are also tremendously useful for organizations seeking to build Amazon-compatible private clouds at scale. No one else has really addressed this need, and VMware (in an infrastructure context) has largely targeted the market for “dependable”, classically enterprise-like infrastructure, rather than exploring the opportunities in the emerging demand for commodity cloud.

In short, I think Cloud.com is a great buy for Citrix, and VMware-watchers interested in whether or not their vCloud service provider initiative is working well should certainly track Cloud.com wins vs. vCloud wins in the service provider space.

AT&T’s CDN re-launch

(This is part of a series of “catch-up” posts of announcements that I’ve wanted to comment on but didn’t previously find time to blog about.)

AT&T recently essentially re-launched its CDN — new technology, new branding, new footprint.

AT&T’s existing CDN product, called iCDS, has had limited success in the marketplace. They’ve been a low-cost competitor, but their deal success in the high-volume market has been low. Level 3, for instance, has offered prices just as good or better on a more featureful, higher-performance service, and with other competitors, notably Akamai and Limelight, willing to compete in the low-cost, high-volume market, it’s been difficult for AT&T to compete successfully on price (although they certainly helped the general decline in prices). We have, however, seen them get good pick-up with CDN added to a managed hosting contract — there are plenty of managed hosting customers happy to sign on for $1,000 or $2,000 worth of CDN. (We’ve also seen this with other hosters that casually quote a little bit of CDN along with managed hosting deals; it’s not just an AT&T phenomenon.) We’ve also seen them pitch “hey, you should use us if you want to reach iPhone customers”, but that’s too narrow a pitch for most content providers to consider right now.

Previously, AT&T had been insistent on developing all of its CDN technology in-house. AT&T has a long and proud “built here and only here” tradition, especially with AT&T Labs, but it simply hasn’t worked out well for its CDN, especially since anything that AT&T builds in portal technology tends to look like it was built by hard-core geeks for other hard-core geeks — not the slick, user-friendly, Web 2.0 interfaces that you’ll see coming out of many other service providers these days. That made everything in iCDS related to how customers interface with the CDN to actually get something useful done, including configuration and analytics, pretty sub-par relative to the rest of the market.

AT&T has now done something that would probably be smart for other carriers to emulate — buying CDN technology rather than developing it in-house. There are now plenty of vendors to choose from — Cisco, Juniper (Ankeena), Alcatel-Lucent (Velocix), Edgecast, 3Crowd, JetStream, etc. — and although these solutions vary wildly in quality and completeness, I’m still bemused by the number of carriers whose engineers are really jonesing to build their own in-house technology. In AT&T’s case, they’ve selected Edgecast’s software solution — a nice feather in the cap for Edgecast, definitely, given the kind of scrutiny that AT&T gives its solutions that are going to be deployed in its network. (Carrier CDNs are very much a hot trend at the moment, although they’re a hot trend relative to the otherwise glacial speed at which carriers do anything.)

AT&T is building out a new footprint of servers running the Edgecast software. They’ll operate both the old and new CDNs for some time — existing iCDS customers will continue to run on the existing iCDS platform and footprint, and new customers will go onto the new platform. That means it’s going to take some time to assess the real performance of the new CDN, as the POPs are being rolled out gradually. The new footprint will be similar, but not identical, to the old footprint.

However, I don’t think the launch of a new AT&T CDN is anywhere near as significant for the market as AT&T’s continued success in reselling Cotendo. The AT&T CDN itself is simply part of the already-commoditized market for high-volume delivery — the re-launch will likely return them to real competitiveness, but doesn’t change any fundamental market dynamics.

Riverbed acquires Zeus and Aptimize

(This is part of a series of “catch-up” posts of announcements that I’ve wanted to comment on but didn’t previously find time to blog about.)

Riverbed made two interesting acquisitions recently — Zeus and Aptimize — which I think signal a clear intention to be more than just a traditional WAN optimization controller (WOC) vendor. If you’re an investment banker or a networking vendor who has talked to me over the last year, you know that these two companies have been right at the top of my “who I think should get bought” list; these are both great pick-ups for Riverbed.

Zeus has been around for quite some time now, but a lot of people have never heard of them. They’re a small company in the UK. Those of you who have been following infrastructure for the Web since the 1990s might remember them as the guys who developed the highest-performance webserver — if a vendor did SPECweb benchmarks for its hardware back then, they generally used Zeus for the software. It was a great service provider product, too, especially for shared Web hosting — it had tons of useful sandboxing and throttling features that were light-years ahead of anyone else back then. But despite the fact that the tech was fantastic, Zeus was never really commercially successful with their webserver software, and eventually they turned their underlying tech to building application delivery controller (ADC) software instead.

Today, Zeus sells a high-performance, software-based ADC, with a nice set of features, including the ability to act as a proxy cache. It’s a common choice for high-end load-balancing when cloud IaaS customers need to be able to deploy a virtual appliance running on a VM, rather than dropping in a box. It’s also the underlying ADC for a variety of cloud IaaS providers, including Joyent and Rackspace (which means it’ll also get an integration interface to OpenStack). Notably, over the last two years, we’ve seen Zeus supplanting or supplementing F5 Networks in historically strong F5 service provider accounts.

Aptimize, by contrast, is a relatively new start-up. It’s a market leader in front-end optimization (FEO), sometimes also called Web performance optimization (WPO) or Web content optimization (WCO). FEO is the hot new thing in acceleration — it’s been the big market innovation in the last two years. While previous acceleration approaches have focused upon network and protocol optimization, or on edge caching, FEO optimizes the pages themselves — the HTML, Cascading Style Sheets (CSS), JavaScript, and so forth that goes into them. It basically takes whatever the webserver output is and attempts to automatically apply the kinds of best practices that Steve Souders has espoused in his books.
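
To make the idea concrete, here is a toy sketch of one rewrite an FEO tool automates: collapsing several stylesheet references into a single combined one to cut round trips. This is purely illustrative (the helper function and filenames are invented for the example), not Aptimize’s actual implementation, and real products do far more, such as minification, image spriting, and JavaScript deferral.

# Toy illustration of one front-end optimization: collapsing multiple CSS
# <link> tags into a single combined stylesheet reference, reducing the
# number of HTTP requests the browser must make. Invented for illustration.
import re

def combine_css_links(html, combined_href="/assets/combined.css"):
    link_pattern = re.compile(
        r'<link[^>]+rel=["\']stylesheet["\'][^>]*>', re.IGNORECASE)
    links = link_pattern.findall(html)
    if len(links) <= 1:
        return html  # nothing to combine
    # Drop the individual stylesheet links and insert one combined reference.
    # (A real tool would also generate and cache the combined file itself.)
    html = link_pattern.sub("", html)
    combined_tag = '<link rel="stylesheet" href="%s">' % combined_href
    return html.replace("</head>", combined_tag + "\n</head>", 1)

page = """<html><head>
<link rel="stylesheet" href="/css/reset.css">
<link rel="stylesheet" href="/css/layout.css">
<link rel="stylesheet" href="/css/theme.css">
</head><body>Hello</body></html>"""

print(combine_css_links(page))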

Aptimize makes a software-based FEO solution which can be deployed in a variety of ways, including as a virtual appliance running on a VM. (FEO is generally a computationally expensive thing, though, since it involves lots of text parsing, so it’s not unusual to see it run on a standalone server.)

So, what Riverbed has basically bought itself is the ability to offer a complete optimization solution — WOC, ADC, and FEO — plus the intellectual property portfolio to potentially eventually combine the techniques from all three families of products into an integrated product suite. (Note that Riverbed is fairly cloud-friendly already with its Virtual Steelhead.)

I think this also illustrates the vital importance of “beyond the box” thinking. Networking hardware has traditionally been just that — specialized appliances with custom hardware that can do something to traffic, really really fast. But off-the-shelf servers have gotten so powerful that they can now deliver the kind of processing oomph and network throughput that you used to have to build custom hardware logic to achieve. That’s leading to the rise of networking vendors who make software appliances instead, because it’s a heck of a lot easier and cheaper to launch a software company than a hardware company (probably something like a 3:1 ratio in funding needed), you can get product to market and iterate much more quickly, and you can integrate more easily with other products.

ObPlug for new, related research notes (Gartner clients only):

Amazon and Equinix partner for Direct Connect

Amazon has introduced a new connectivity option called AWS Direct Connect. In plain speak, Direct Connect allows an Amazon customer to get a cross-connect between his own network equipment and Amazon’s, in some location where the two companies are physically colocated. In even plainer speak, if you’re an Equinix colocation customer in their Ashburn, Virginia (Washington DC) data center campus, you can get a wire run between your cage and Amazon’s, which gives you direct connectivity between your router and theirs.

This is relatively cheap, as far as such things go. Amazon imposes a “port charge” for the cross-connect at $0.30/hour for 1 Gbps or $2.25/hour for 10 Gbps (on a practical level, since cross-connects are by definition nailed up 100% of the time, about $220/month and $1,625/month respectively), plus outbound data transfer at $0.02/GB. You’ll also pay Equinix for the cross-connect itself (I haven’t verified the prices for these, but I’d expect them to be around $500 and $1,500 per month). And, of course, you have to pay Equinix for the colocation of whatever equipment you have (upwards of $1,000 per month per rack).
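
For a rough sense of the total bill, here is a back-of-the-envelope calculation. It is only a sketch, assuming a 30-day month, an illustrative 5 TB of monthly outbound transfer, and the unverified Equinix estimates above.

# Back-of-the-envelope monthly cost for Direct Connect at Equinix Ashburn.
# Assumes a 30-day month; the Equinix cross-connect and colocation figures
# are the unverified estimates from this post, not published prices.
HOURS_PER_MONTH = 24 * 30            # 720 hours

port_1g = 0.30 * HOURS_PER_MONTH     # ~$216/month for a 1 Gbps port
port_10g = 2.25 * HOURS_PER_MONTH    # ~$1,620/month for a 10 Gbps port

transfer_gb = 5000                   # example: 5 TB outbound per month
transfer_cost = 0.02 * transfer_gb   # $100 at $0.02/GB

equinix_cross_connect = 500          # rough estimate for the 1 Gbps case
equinix_colo_rack = 1000             # rough estimate for one rack

total_1g = port_1g + transfer_cost + equinix_cross_connect + equinix_colo_rack
print("1 Gbps scenario: roughly $%.0f/month" % total_1g)    # ~$1,816/month
print("10 Gbps port alone: roughly $%.0f/month" % port_10g) # ~$1,620/month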

Direct Connect has lots of practical uses. It provides direct, fast, private connectivity between your gear in colocation and whatever Amazon services are in Equinix Ashburn (and non-Internet access to AWS in general), which is vital for “hybrid cloud” use cases and enormously useful for people who, say, have PCI-compliant e-commerce sites with huge Oracle RAC databases and black-box encryption devices, but would like to put some front-end webservers in the cloud. You can also buy whatever connectivity you want from your cage in Equinix, so you can take that traffic and put it over some less expensive Internet connection (Amazon’s bandwidth fees are one of the major reasons customers leave them), or you can get private networking like Ethernet or MPLS VPN (an important requirement for enterprise customers who don’t want their traffic to touch the Internet at all).

This is not a completely new thing — Amazon has quietly offered private peering and cross-connects to important customers for some time now, in Equinix. But this now makes cross-connects into a standard option with an established price point, which is likely to have far greater uptake than the one-off deals that Amazon has been doing.

It’s not a fully-automated service — the sign-up is basically used to get Amazon to grant you an authorization so that you can put in an Equinix work order for the cross-connect. But it’s an important step in the right direction. (I’ve previously noted the value of this partnership in a blog post called “Why Cloud IaaS Customers Care About a Colo Option“. Also, for Gartner clients, see my research note “Customers Need Hybrid Cloud Compute IaaS” for a detailed analysis.)

This is good for Equinix, too, for the obvious reasons. For quite some time now, I’ve been evangelizing the importance of carrier-neutral colocation as a “cloud hub”, envisioning a future where these providers facilitate cross-connect infrastructures between cloud users and cloud providers. Widespread adoption of this model would allow an enterprise to, say, get a single rack of network equipment at Equinix (or Telecity or Interxion, etc.), and then cross-connect directly to all of its important cloud suppliers. It would drive cross-connect density, differentiation, and stickiness at the carrier-neutral colo providers who succeed in being the draw for these ecosystems.

It’s worth noting that this doesn’t grant Amazon a unique capability, though. Just about every other major cloud IaaS provider already offers colocation and private connectivity options. But it’s a crucial step for Amazon towards being suitable for more typical enterprise use cases. (And as a broader long-term ecosystem play, customers may prefer using just one or two “cloud hubs” like an Equinix location for their “cloud backhaul” onto private connectivity, especially if they have gateway devices.)
