Monthly Archives: October 2009

Jim Cramer’s “Death of the Data Center”

Jim Cramer’s “Mad Money” featured an interesting segment yesterday, titled “Sell Block: The Death of the Data Center?”

Basically, the premise of the segment is that Intel’s Nehalem DP processors will allow businesses to shrink their data center footprint: they won’t need as much data center space, commercial data centers will empty out, and some companies might even bring previously colocated gear back into in-house data centers. He claims, somewhat weirdly, that because the Wall Street analysts who cover this space are primarily telco analysts, they’re not thinking about the impact of compute density on the growth of data center space.

I started to write a “Jim Cramer has no idea what he’s talking about” post, but I saw that Rich Miller over at Data Center Knowledge beat me to it.

Processing power has been increasing exponentially forever, but data center needs have grown even more quickly — certainly in the exponential-growth dot-com world, but even in the enterprise. There’s no reason to believe that this next generation of chips changes that at all, and it’s certainly not backed up by survey data from enterprise buyers, much less rapidly-growing dot-coms.
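To make that concrete, here is a back-of-the-envelope sketch in Python, with purely illustrative growth rates (assumptions, not survey data): even a generous assumption about per-server density gains doesn’t shrink aggregate footprint when demand compounds faster.

```python
# Back-of-the-envelope sketch with purely illustrative numbers (not survey
# data): even a large per-server density gain doesn't shrink aggregate
# footprint if workload demand compounds faster than density does.

def servers_needed(years, demand_growth, density_growth, base_servers=1000):
    """Servers required each year = demand index / per-server density index."""
    results = []
    for year in range(years + 1):
        demand = (1 + demand_growth) ** year
        density = (1 + density_growth) ** year
        results.append(base_servers * demand / density)
    return results

# Assumption: demand grows 50%/year, while Nehalem-class refreshes and their
# successors improve per-server compute density 35%/year.
for year, n in enumerate(servers_needed(5, 0.50, 0.35)):
    print(f"year {year}: ~{n:,.0f} servers")
# Footprint still grows every year, because 1.50 / 1.35 > 1.
```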

Cramer also seems to fail to understand the fundamental value proposition of Equinix in particular. It’s not about providing the space more cheaply; it’s about the ability to interconnect to lots of networks. That’s why companies like Google, Microsoft, etc. have built their own data centers in places where there’s cheap power — but continued to leave edge footprints and interconnect within Equinix and other high-network-density facilities.

Equinix swallows Switch & Data

The acquisition train rumbles on.

Along with its Q3 earnings, Equinix has announced that it will acquire Switch and Data in a $689 million, 80% stock, 20% cash deal, representing about a 30% premium over SDXC’s closing share price today.

This move should be read as a definitive shift in strategy for Equinix. Equinix’s management team has changed significantly over the past year, and this is probably the strongest signal that the company has given yet about its evolving vision for the future.

Historically, Equinix has determinedly stuck to big Internet hub cities. Given its core focus upon network-neutral colocation — and specifically the customers who need highly dense network interconnect — it’s made sense for them to be where content providers want to be, which is also, not coincidentally, where there’s a dense concentration of service providers. Although Equinix derives a significant portion of its revenues from traditional businesses who simply treat them as a high-quality colocation provider and do very little interconnect, Equinix’s core value proposition has been most compelling to those companies for whom access to many networks, or access to an ecosystem, is critical.

The Switch and Data acquisition takes them beyond the big Internet hub cities and into secondary cities — often with much smaller, and lower-quality, data centers than Equinix has traditionally built. Equinix specifically cites interest in these secondary markets as a key reason for making the acquisition. They believe that cloud computing will drive applications closer to the edge, and therefore, in order to continue to compete successfully as a network hub for cloud and SaaS providers, they need to be in more markets than just the big Internet hub cities.

Though many anecdotes have been told about the shift towards content peering over the last couple of years, the Arbor Networks study of Internet traffic patterns — see the NANOG presentation for details — backs this up with excellent quantitative data. Consider that many of the larger content providers are migrating to places where there’s cheap power and using a tethering strategy instead (getting fiber back to a network-dense location), and that emerging cloud providers will likely do the same as their infrastructure grows, and you’ll see how a broader footprint becomes relevant — shorter tethers (desirable for latency reasons) mean needing to be in more markets. (Whether this makes regulators more or less nervous about the acquisition remains to be seen.)
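For a rough sense of why tether length matters, here is a quick propagation-delay sketch. It assumes the usual rule of thumb of roughly 200,000 km/s for light in fiber, plus an assumed 1.5x route-inflation factor, and ignores queuing, serialization, and equipment delay.

```python
# Rough tether-latency arithmetic: propagation delay only, ignoring queuing,
# serialization, and equipment delay. Assumes the usual ~200,000 km/s rule of
# thumb for light in fiber and a 1.5x route-inflation factor (an assumption).

FIBER_KM_PER_MS = 200.0   # ~200,000 km/s is 200 km per millisecond
ROUTE_INFLATION = 1.5     # fiber routes are rarely straight lines

def tether_rtt_ms(straight_line_km: float) -> float:
    """Approximate round-trip propagation delay over a fiber tether."""
    one_way_ms = (straight_line_km * ROUTE_INFLATION) / FIBER_KM_PER_MS
    return 2 * one_way_ms

for km in (100, 500, 1500, 4000):
    print(f"{km:>5} km tether: ~{tether_rtt_ms(km):.1f} ms of added RTT")
# A cross-country tether adds tens of milliseconds before anything else,
# which is why shorter tethers push providers into more markets.
```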

While on the surface, this might seem like a pretty simple acquisition — two network-neutral colocation companies getting together, whee — it’s not actually that straightforward. I’ll leave it to the Wall Street analysts to fuss about the financial impact — Equinix and S&D have very different margin profiles, notably — and just touch on a few other things.

While S&D and Equinix overlap in service provider customer base, there are significant differences between the rest of their customer bases. S&D’s smaller, often less central data centers mean that it has historically not served customers with large-footprint needs (although this becomes less of a concern given the tethering approach taken by big content providers, who have moved their large footprints out of colo anyway). S&D’s data centers also tend to attract smaller businesses, rather than the mid-sized and enterprise market. Although both companies’ sales forces, like those of many colo companies, are essentially order-takers, Equinix displays a knack for enterprise sales and service, a certain polish that S&D lacks. Equinix has a strong enterprise brand, and a consistency of quality that supports that brand; S&D is well-known within the industry (within the trade, so to speak), but not to typical IT managers, and the mixed-quality portfolio that the acquisition creates will probably present some branding and positioning challenges for Equinix.

While I think there will be some challenges in bringing the two companies together to deliver a rationalized portfolio of services in a consistent manner, Equinix has a history of successfully integrating acquisitions, and for a fast entrance into secondary markets, this was certainly the most practical way to go about doing so.

As usual, I can’t delve too deeply in this blog without breaking Gartner’s blogging rules, and so I’ll leave it at that. Clients can feel free to make an inquiry if they’re interested in hearing more.

Recent inquiry trends

It’s been mentioned to me that my “what are you hearing about from clients” posts are particularly interesting, so I’ll try to do a regular update of this sort. I have some limits on how much detail I can blog and stay within Gartner’s policies for analysts, so I can’t get too specific; if you want to drill into detail, you’ll need to make a client inquiry.

It’s shaping up into an extremely busy fall season, with people — IT users and vendors alike — sounding relatively optimistic about the future. If you attended Gartner’s High-Tech Forum (a free event we recently did for tech vendors in Silicon Valley), you saw that we showed a graph of inquiry trends, indicating that “cost” is a declining search term, and “cloud” has rapidly increased in popularity. We’re forecasting a slow recovery, but at least it’s a recovery.

This is budget and strategic planning time, so I’m spending a lot of time with people discussing their 2010 cloud deployment plans, as well as their two- and five-year cloud strategies. There’s planning going on around data centers, hosting, and CDN services, too, but the longer-term the planning, the more likely it is to involve cloud. (I posted on cloud inquiry trends previously.)

There’s certainly purchasing going on right now, though, and I’m talking to clients across the whole of the planning cycle (planning, shortlisting, RFP review, evaluating RFP responses, contract review, re-evaluating existing vendors, etc.). Because pretty much everything that I cover is a recurring service, I don’t see the end-of-year rush to finish spending 2009’s budget, but this is the time of year when people start to work on the contracts they want to go for as soon as 2010’s budget hits.

My colo inquiries this year have undergone an interesting shift towards local (and regional) data centers, rather than national players, reflecting a shift in colocation from being primarily an Internet-centric model, to being one where it’s simply another method by which businesses can get data center space. Based on the planning discussions I’m hearing, I expect this is going to be the prevailing trend going forward, as well.

People are still talking about hosting, and there are still plenty of managed hosting deals out there, but very rarely do I see a hosting deal now that doesn’t have a cloud discussion attached. If you’re a hoster and you can’t offer capacity on demand, most of my clients will now simply take you off the table. It’s an extra kick in the teeth if you’ve got an on-demand offering but it’s not yet integrated with your managed services and/or dedicated offerings; now you’re competing as if you were two providers instead of one.

The CDN wars continue unabated, and competitive bidding is increasingly the norm, even in small deals. Limelight Networks fired a salvo into the fray yesterday, with an update to their delivery platform that they’ve termed “XD”. The bottom line is a baseline performance improvement for all Limelight customers, plus a higher-performance tier and enhanced control and reporting for customers willing to pay for it. I’ll form an opinion on its impact once I see some real-world performance data.

There’s a real need in the market for a company that can monitor actual end-user performance and do consulting assessments of multiple CDNs and origin configurations. (It’d be useful in the equipment world, too, for ADCs and WOCs.) Not everyone can or wants to deploy Keynote or Gomez or Webmetrics for this kind of thing, those companies aren’t necessarily eager to do a consultative engagement of this sort, and practically every CDN on the planet has figured out how to game their measurements to one extent or another. That doesn’t make them valueless for such assessments, but real-world data from actual users (via JavaScript agents, video player instrumentation, download client instrumentation, etc.) is still vastly preferable. Practically every client I speak to wants to do performance trials, but the means available for doing so are still overly limited and very expensive.
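As a rough illustration of the real-user side of such an assessment, here is a minimal sketch that aggregates timing beacons (the kind a JavaScript agent or instrumented player might report) by CDN and compares percentiles. The beacon fields and values are hypothetical, not any particular vendor’s format.

```python
# Minimal sketch of the real-user-data side of a multi-CDN assessment:
# aggregate timing beacons by CDN and compare percentiles. The beacon
# fields and values are hypothetical, not any vendor's actual format.

from collections import defaultdict
from statistics import median, quantiles

beacons = [
    # {"cdn": provider label, "ttfb_ms": time to first byte seen by the user}
    {"cdn": "cdn-a", "ttfb_ms": 95},
    {"cdn": "cdn-a", "ttfb_ms": 210},
    {"cdn": "cdn-b", "ttfb_ms": 120},
    {"cdn": "cdn-b", "ttfb_ms": 480},
    # ... in practice, millions of samples across geographies and ISPs
]

samples = defaultdict(list)
for beacon in beacons:
    samples[beacon["cdn"]].append(beacon["ttfb_ms"])

for cdn, values in samples.items():
    # quantiles(n=20) yields 19 cut points; the last approximates the 95th percentile
    p95 = quantiles(values, n=20)[-1] if len(values) >= 2 else values[0]
    print(f"{cdn}: median={median(values):.0f} ms  p95~{p95:.0f} ms  n={len(values)}")
```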

All in all, things are really crazy busy. So busy, in fact, that I ended up letting a whole month go by without a blog post. I’ll try to get back into the habit of more frequent updates. There’s certainly no lack of interesting stuff to write about.

Speculating on Amazon’s capacity

How much capacity does Amazon EC2 have? And how much gets provisioned?

Given that it’s now clear that there are capacity constraints on EC2 (i.e., periods when provisioning errors out due to lack of capacity), this is something that’s of direct concern to users. And for all the cloud-watchers, it’s a fascinating study of IaaS adoption.

Randy Bias of CloudScaling has recently posted some interesting speculation on EC2 capacity.

Guy Rosen has done a nifty analysis of EC2 resource IDs, translated to an estimate of the number of instances provisioned on the platform in a day. Remember, when you look at provisioned instances (i.e., virtual servers), that many EC2 instances are short-lived. Auto-scaling can provision and de-provision servers frequently, and there’s significant use of EC2 for batch-computing applications.
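For flavor, here is a hedged sketch of the general sampling idea behind that kind of estimate: if resource IDs are effectively sequential, the difference between the counters observed at two points in time approximates the provisioning rate. The ID decoding below is a placeholder and the sample IDs are made up; Rosen’s actual analysis of EC2’s IDs is his own work and isn’t reproduced here.

```python
# Sketch of the general idea behind estimating provisioning volume from
# resource IDs: if IDs are effectively sequential, the delta between the
# counters observed at two points in time approximates how many resources
# were provisioned in between. decode() is a placeholder, and the sample
# IDs below are made up -- this is not Rosen's actual decoding of EC2 IDs.

from datetime import datetime

def decode(resource_id: str) -> int:
    """Placeholder: map an opaque resource ID to its sequential counter."""
    return int(resource_id.split("-")[1], 16)  # assumption: hex counter suffix

def provisioned_per_day(sample_a, sample_b) -> float:
    """Each sample is (timestamp, resource_id), e.g. taken by launching a probe."""
    (t_a, id_a), (t_b, id_b) = sample_a, sample_b
    delta_ids = decode(id_b) - decode(id_a)
    delta_days = (t_b - t_a).total_seconds() / 86400
    return delta_ids / delta_days

# Hypothetical probe observations, one day apart:
a = (datetime(2009, 10, 1, 12, 0), "i-0004c2f0")
b = (datetime(2009, 10, 2, 12, 0), "i-0005f8a4")
print(f"~{provisioned_per_day(a, b):,.0f} instances provisioned per day")
```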

Amazon’s unreserved-instance capacity is not unlimited, as people have discovered. There are additional availability zones, but for serious users of the platform, simply shifting to another zone isn’t an appealing workaround, since you don’t want to pay for cross-zone data transfers or absorb the latency impact if you don’t have to.
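For those experimenting, here is a minimal sketch of pinning a launch to a specific availability zone and detecting the capacity error, using the present-day boto3 SDK purely as an illustration; the AMI ID, zone, and instance type are placeholders.

```python
# Minimal sketch: pin an EC2 launch to a specific availability zone and
# detect the out-of-capacity error. Uses the present-day boto3 SDK purely
# as an illustration; the AMI ID, zone, and instance type are placeholders.

import boto3
from botocore.exceptions import ClientError

ec2 = boto3.client("ec2", region_name="us-east-1")

def launch_in_zone(ami_id: str, zone: str, instance_type: str = "m1.small"):
    try:
        resp = ec2.run_instances(
            ImageId=ami_id,
            InstanceType=instance_type,
            MinCount=1,
            MaxCount=1,
            Placement={"AvailabilityZone": zone},  # keep the workload in one zone
        )
        return resp["Instances"][0]["InstanceId"]
    except ClientError as err:
        if err.response["Error"]["Code"] == "InsufficientInstanceCapacity":
            # The zone is out of unreserved capacity. Falling back to another
            # zone is possible, but means cross-zone transfer charges and latency.
            return None
        raise

instance_id = launch_in_zone("ami-12345678", "us-east-1a")
print(instance_id or "preferred zone had no capacity")
```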

We’re entering a time of year that’s traditionally a traffic ramp for Amazon, the fall leading into Christmas. It should be interesting to see how Amazon balances its own need for capacity (AWS is used for portions of the company’s retail site), reserved EC2 capacity, and unreserved EC2 capacity. I suspect that the nature of EC2’s usage makes it much more bursty than, say, a CDN.
