Monthly Archives: October 2010
My calendar for one-on-ones at Symposium is now totally full, as far as I know, so here’s a look at some updated stats:
(No overlaps above. Things have been disambiguated. This counts only the formal 1-on-1s, and not any other meetings I’m doing here.)
The hosting discussions have a very strong cloud flavor to them, as one might expect. The broad trend from today is that most people talking about cloud infrastructure here are really talking about putting individual production applications on virtualized infrastructure in a managed hosting environment, with at least some degree of capacity flexibility. But at the same time, this is a good thing for service providers — it clearly illustrates that people are comfortable putting highly mission-critical, production applications on shared infrastructure.
My 1-on-1 schedule is filling rapidly. (People who didn’t pre-book, you’re in luck: I was only added to the system on Friday or so, so I still have openings, at least as of this writing.)
Trend-watchers might be interested in how these break down so far:
17 on cloud
8 on colocation
4 on hosting
2 on CDN
(A few of these mention two topics, such as ‘colo and cloud’, and are counted twice above.)
Slightly over half the cloud 1-on-1s so far are about cloud strategy in general; the remainder are about infrastructure specifically.
What’s also interesting to me is that the 1-on-1s scheduled prior to on-site registration appear to be more about colocation and hosting, but the on-site 1-on-1 requests are very nearly pure cloud. I’m not sure what that signifies, although I expect the conversations may be illuminating in this regard.
I will be at three Gartner conferences during the remainder of this year.
I will be at Symposium ITxpo Orlando. My main session here will be The Great Debate: Shared-Hardware vs. Shared-Everything Multitenancy, or Amazon’s Apples vs. Force.com’s Oranges. (Or for those of you who heard Larry Ellison’s OpenWorld keynote, Oracle ExaLogic vs. Salesforce.com…) The debate will be moderated by my colleague Ray Valdes; I’ll be taking the shared-hardware side while my colleague Eric Knipp takes the shared-everything side. I’m also likely to be running some end-user roundtables, but mostly, I’ll be available to take questions in 1-on-1 sessions.
If you go to Symposium, I highly encourage you to attend a session by one of my colleagues, Neil MacDonald. It’s called Why Cloud-Based Computing Will Be More Secure Than What You Have Today, and it’s what we call a “maverick pitch”, which means that it follows an idea that’s not a common consensus opinion at Gartner. But it’s also the foundation of some really, really interesting work that we’re doing on the future of the cloud, and it follows an incredibly important tenet that we’re talking about a lot: that control is not a substitute for trust, and the historical model that enterprises have had of equating the two is fundamentally broken.
I will be at the Application Architecture, Development, and Integration Summit (a.k.a. our Web and Cloud conference) in November. I’m giving two presentations there. The first will be Controlling the Cloud: How to Leverage Cloud Computing Without Losing Control of Your IT Processes. The second is Infrastructure as a Service: Providing Data Center Services in the Cloud. I’ll also be running an end-user roundtable on building private clouds, and be available to take 1-on-1 questions.
Finally, I will be at the Data Center Conference in December. I’m giving two presentations there. The first will be Is Amazon, Not VMware, the Future of Your Data Center? The second is Getting Real With Cloud Infrastructure Services. I’ll also be in one of our “town hall” meetings on cloud, running an end-user roundtable on cloud IaaS, and available to take 1-on-1 questions.
I will be at VMworld Europe this week.
I am speaking on Wednesday, October 13th, at 10:30 am, giving a short tutorial on outsourcing cloud infrastructure as a service. This is an NTT Communications-sponsored session. I also expect to be available to answer questions at NTT’s booth during that day.
During the rest of the conference I’m available for one-on-one meetings, regardless of whether or not you’re a Gartner client. Please send me email if you’d like to meet.
(In case you’re wondering how vendor-neutrality works when I do a vendor-sponsored day like this: NTT has no control over my presentation content whatsoever, nor anything that I say in general. It’s a risk for the vendor, in that they don’t know what I’m going to say exactly, and that what I have to say might not be entirely consonant with their strategy. But it’s part of Gartner’s policy when we speak at an external event like this, which means that you, as an attendee, don’t have to wonder about whether I’d have said something else under different circumstances.)
A lot of Gartner Invest clients are calling to ask about the AT&T deal with Cotendo. Since I’m swamped, I’m doing a blog post, and the inquiry coordinators will try to set up a single conference call.
I’ve known about this deal for a long time, but I’ve been respecting AT&T and Cotendo’s request to keep it quiet, despite the fact that it’s not under formal nondisclosure. But now that the deal has been noted in my recently published Who’s Who in Content Delivery Networks, 2010, someone else has blogged about it publicly, and I’m being asked about it explicitly, I’m going to go ahead and talk about it on my blog.
There are now three vendors in the market who claim true dynamic site acceleration offerings: Akamai, CDNetworks, and Cotendo. (Limelight’s recently-announced accelerator offerings are incremental evolutions of LimelightSITE.) CDNetworks has not gained any significant market traction with their offering since introducing it six months ago, whereas these days, I routinely see customers bid Cotendo along with Akamai.
However, to understand the potential impact of Cotendo, one has to understand what they actually deliver. It’s important to note that while Cotendo positions its service identically to Akamai’s, even calling it Dynamic Site Accelerator (just like Akamai brands it), it is not, from a technical perspective, like Akamai’s DSA.
Cotendo’s DSA offering, at present, consists of TCP multiplexing and connection pooling from their edge servers. Both of these technologies are common features in application delivery controllers (or, in more colloquial terms, load-balancers, i.e., F5’s LTM, Citrix’s NetScaler, etc.). If you’re not familiar with the benefits of either, F5’s DevCentral provides good articles on multiplexing and persistent connections, as does Network World (2001, but still relevant).
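To make the benefit concrete, here’s a toy sketch in Python of what connection pooling buys an edge server or ADC. This is purely illustrative — the class and names are my own invention, not anything Cotendo actually ships — but it shows the core idea: many short-lived client requests are served over a small, reused set of persistent origin connections, instead of paying connection setup (TCP, and possibly TLS, handshakes) on every request.

```python
import queue


class OriginConnectionPool:
    """Toy model of the connection pooling a CDN edge server or ADC
    performs: client requests borrow from a small set of persistent
    connections to the origin, rather than opening a new one each time."""

    def __init__(self, origin, size=4):
        self.origin = origin
        self.connections_opened = 0
        self._pool = queue.Queue()
        for _ in range(size):
            self._pool.put(self._open_connection())

    def _open_connection(self):
        # In a real pool, this would be a TCP (and TLS) connect to the
        # origin -- the expensive step that pooling amortizes away.
        self.connections_opened += 1
        return {"origin": self.origin, "id": self.connections_opened}

    def request(self, path):
        conn = self._pool.get()  # borrow a pooled, already-open connection
        try:
            # A real edge would send the HTTP request over conn here,
            # multiplexing many clients onto these few connections.
            return f"response for {path} via conn {conn['id']}"
        finally:
            self._pool.put(conn)  # return the connection for reuse


pool = OriginConnectionPool("origin.example.com", size=2)
responses = [pool.request(f"/item/{i}") for i in range(100)]
# 100 client requests are served, but only 2 origin connections are ever opened.
```

The point of the sketch is the ratio at the end: the origin sees a handful of long-lived connections regardless of how many clients hit the edge, which is exactly the offload an ADC provides in front of a web tier.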
By contrast, Akamai’s DSA offering — the key technology acquired when they bought Netli — is sort of like a combination of functionality from an ADC and a WAN optimization controller (WOC, like Riverbed), offered as a service in the cloud (in the old-fashioned meaning, i.e., “somewhere on the Internet”). In DSA, Akamai’s edge servers essentially behave like bidirectional WOCs, speaking an optimized protocol between them; it’s coupled with Akamai’s other acceleration technologies, including pre-fetching, compression, and so on.
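To illustrate just the pre-fetching piece of that stack — a minimal sketch, emphatically not Akamai’s actual implementation — an edge server can parse the HTML it is about to return and request the embedded objects from the origin before the browser asks for them. The fragment below does only the identification step, using Python’s standard-library HTML parser:

```python
from html.parser import HTMLParser


class EmbeddedObjectFinder(HTMLParser):
    """Collects the URLs of embedded objects (stylesheets, images,
    scripts) from an HTML page -- the information an edge server
    would use to pre-fetch those objects from the origin."""

    def __init__(self):
        super().__init__()
        self.urls = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag in ("img", "script") and "src" in attrs:
            self.urls.append(attrs["src"])
        elif tag == "link" and attrs.get("rel") == "stylesheet" and "href" in attrs:
            self.urls.append(attrs["href"])


page = """<html><head><link rel="stylesheet" href="/site.css"></head>
<body><img src="/logo.png"><script src="/app.js"></script></body></html>"""

finder = EmbeddedObjectFinder()
finder.feed(page)
# finder.urls now holds ["/site.css", "/logo.png", "/app.js"],
# which an edge server could begin fetching immediately.
```

In a real DSA-style service this runs alongside the protocol optimization and compression described above; pre-fetching alone just hides origin round-trips for page objects behind the delivery of the base HTML.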
Engineering carrier-scale WOC functionality is hard. Netli succeeded. There have been other successes in the hardware market — for instance, Ipanema, which targets carriers. Both made significant sacrifices in the complexity of functionality in order to achieve scale. Enterprise WOC vendors have had a hard time scaling past more than a few dozen sites, and the bar is still pretty low (at Gartner, we use “scale to over a hundred sites” on our vendor evaluation, for instance). A new CDN entrant offering WOC-style, Akamai/Netli-style functionality would be a big deal — but that’s not what Cotendo actually has.
Akamai’s DSA service competes to some extent with unidirectional ADC-based acceleration (F5’s WebAccelerator, for instance), but there are definitely benefits to middle-mile bidirectional acceleration, resulting in a stacked benefit if you use an ADC plus Akamai; moreover, this kind of acceleration is not a baseline feature in ADCs. Cotendo overlaps directly with baseline ADC functionality. That means the two companies have distinctly different services, serving different target audiences.
Cotendo is offering pretty good performance in the places where they have footprint — enough to be competitive. As with all CDN performance, customers care about “good enough” rather than “the very best”, but for transactional sites, there’s usually a decent return curve for more performance before you finally hit “fast enough that faster makes no difference”. This is still dependent upon the context, though. Electronics shoppers, for instance, are much less patient than people shopping for air travel. And baseline site performance (i.e., your application response time in general) and site construction will also determine how much ROI site acceleration will get you.
The deal with AT&T is significant for the same reason that it was significant for Akamai to have signed Verizon and IBM as resellers years ago — because larger companies can be much more comfortable buying on the paper of a big vendor they already have a relationship with. And since AT&T’s CDN wins are often add-ons to hosting deals — where you typically have a complex transactional site — selling a dynamic acceleration service over a pure static caching one is definitely preferable. AT&T has tried to get around that deficiency in the past by selling multi-data-center and managed F5 WebAccelerator solutions, but those solutions aren’t as attractive. This partnership benefits both companies, but it’s not a game-changer in the CDN industry.
Since everyone’s asking, no, I don’t see Cotendo price-pressuring Akamai at the moment. (I see as many as 15 CDN deals a week, so I feel very comfortable with my state of pricing knowledge, especially in this transactional space.) What I do see is the incredibly depressed price of static object delivery affecting what anyone can realistically charge for dynamic acceleration, because the price/performance delta gets too large. I certainly do see Cotendo winning smaller deals, but it’s important to note that the wins aren’t coming just from undercutting on price — for instance, my clients cite the user-friendly, attractive portal as a reason to choose Cotendo over Akamai.
I have plenty more to say on this subject, but I’ve already skimmed the edge of how much I can say in my blog vs. when I should be writing research or answering inquiry, so: If you’re a client, please feel free to make an inquiry.
Interesting side note: Since publishing my Who’s Who note a week and a half ago, my CDN inquiry from customers has suddenly started to include a lot more multi-vendor inquiry about the smaller vendors. That probably says that other CDNs could still do a lot to build brand awareness. (SEO is key to this, these days.)
A lot of Gartner Invest clients are calling to ask about Equinix’s trimming of guidance. I am enormously swamped at the moment, and cannot easily open up timeslots to talk to everyone asking. So I’m posting a short blog entry (short and not very detailed because of Gartner’s rules about how much I can give away on my blog), and the Invest inquiry coordinators are going to try to set up a 30-minute group conference call for everyone with questions about this.
If you haven’t read it, you should read my post on Getting Real on Colocation, from six months ago, when I warned that I did not see this year’s retail colocation market being particularly hot. (Wholesale and leasing are hot. Retail colo is not.)
Equinix has differentiators on the retail colo side, but they are differentiators to only part of the market. If you don’t care about dense interconnect, Equinix is just a high-quality colo facility. I have plenty of regular enterprise clients that like Equinix for their facility quality, and reliably solid operations and customer service, and who are willing to pay a premium for it — but of course increasingly, nobody’s paying a premium for much of anything (in the US) because the economy sucks and everyone is in serious belt-tightening mode. And the generally flat-to-down pricing environment for retail colo also depresses the absolute premium Equinix can command, since the premium has to be relative to the rest of the market in a given city.
Those of you who have talked to me in the past about Switch and Data know that I have always felt that the SDXC sales force was vastly inferior to the Equinix sales force, both in terms of its management and, at least as manifested in actual work with prospects, possibly in terms of the quality of the salespeople themselves. Time is needed for sales force integration and upgrade, and it seems like the earnings call indicated an issue there. Equinix has had a good track record of acquisition integration to date, so I wouldn’t worry too much about this.
The underprediction of churn is more interesting, since Equinix has historically been pretty good about forecasting, and customers who are going to be churning tend to look different from customers who will be staying. Moving out of a data center is a big production, and it drives changes in customer behavior that are observable. My guess is that they expected some mid-sized customers to stay who decided to leave instead — possibly clients who are moving to a wholesale or lease model, and who are just leaving their interconnection in Equinix. (Things like that are good from a revenue-per-square-foot standpoint, but they’re obviously an immediate hit to actual revenues.)
This doesn’t represent a view change for me; I’ve been pessimistic on prospects for retail colocation since last year, even though I still feel that Equinix is the best and most differentiated company in that sector.