Blog Archives

Savvis CEO Phil Koen resigns

Savvis announced the resignation of CEO Phil Koen on Friday, citing a “joint decision” between Koen and the board of directors. This was clearly not a planned event, and it’s interesting, coming at the end of a year in which Savvis’s stock has performed pretty well (it’s up 96% over last year, although the last quarter has been rockier, -8%). The presumed conflict between Koen and the board becomes clearer when one looks at a managed hosting comparable like Rackspace (up 276% over last year, 19% in the last quarter), rather than at the colocation vendors.

When the newly-appointed interim CEO Jim Ousley says “more aggressive pursuit of expanding our growth”, I read that as, “Savvis missed the chance to be an early cloud computing leader”. A leader in utility computing, offering on-demand compute on Egenera-based blade architectures, Savvis could have taken its core market message, shifted its technology approach to embrace a primarily virtualization-based implementation, and led the charge into enterprise cloud. Instead, its multi-pronged approach (do you want dedicated servers? blades? VMs?) led to a lengthy period of confusion for prospective customers, both in marketing material and in the sales cycle itself.

Savvis still has solid products and services, and we still see plenty of contract volume in our own dealings with enterprise clients, as well as generally positive customer experiences. But Savvis has become a technology follower, conservative in its approach, rather than a boldly visionary leader. Under previous CEO Rob McCormick, the company was often ahead of its time, which isn’t ideal either, but in this period of rapid market evolution, the consumerization of IT, and self-service, Savvis’s increasingly IBM-like market messages are a bit discordant with the marketplace, and its product portfolio has largely steered away from the fastest-growing segment of the market, self-managed cloud hosting.

Koen made many good decisions — among them, focusing on managed hosting rather than colocation. But his tenure was also a time of significant turnover within the Savvis ranks, especially at the senior levels of sales and marketing. When Ousley says the company is going to take a “fresh look” at sales and marketing, I read that as, “Try to retain sales and marketing leadership for long enough for them to make a long-term impact.”

Having an interim CEO in the short term — and one drawn from the ranks of the board, rather than from the existing executive leadership — means what is effectively a holding pattern until a new CEO is selected, gets acquainted with the business, and figures out what he wants to do. That’s not going to be quick, which is potentially dangerous at this time of fast-moving market evolution. But the impact of that won’t be felt for many months; in the near term, one would expect projects to continue to execute as planned.

Thus, for existing and prospective Savvis customers, I’d expect that this change in the executive ranks will result in exactly zero impact in the near term; anyone considering signing a contract should just proceed as if nothing’s changed.


The last quarter in review

The end of 2009 was extraordinarily busy, and that’s meant that, shamefully, I haven’t posted to my blog in ages. I aim to return to near-daily posting in 2010, but that means creating time in my schedule to think and research and write, rather than being entirely consumed by client inquiry.

December was Gartner’s data center conference, where I spent most of a week in back-to-back meetings, punctuated by a cloud computing end-user roundtable, a cloud computing town hall, and my talk on getting data center space. Attendance at the conference is skewed heavily towards large enterprises, but one of the most fascinating bits that emerged out of the week was the number of people walking around with emails from their CEO saying that they had to investigate this cloud computing thing, and whose major goals for the conference included figuring out how the heck they were going to reply to that email.

My cloud computing webinar is now available for replay — it’s a lightweight introduction to the subject. Ironically, when I started working at Gartner, I was terrified of public speaking, and much more comfortable doing talks over the phone. Now, I’m used to having live audiences and public speaking is just another routine day on the job… but speaking into the dead silence of an ATC is a little unnerving. (I once spent ten minutes giving a presentation to dead air, not realizing that the phone bridge had gone dead.) The audience asked tons of great questions, far more than could possibly be answered in the Q&A time, and I’m using that input to help decide what I should be writing this year.

Q4 2009, by and large, continued my Q3 inquiry trends. Tons of colocation inquiries — but colocation is often giving way to leasing, now, and local/regional players are prominent in nearly every deal (and winning a lot of the deals). Relatively quiet on the CDN front, but this has to be put in context — Gartner’s analysts took over 1300 inquiries on enterprise video during 2009, and these days I’m pretty likely to look at a client’s needs and tell them they need someone like Kontiki or Ignite, not a traditional Internet CDN. And cloud, cloud, cloud is very much on everyone’s radar screen, with Asia suddenly becoming hot. Traditional dedicated hosting is dying at a remarkable pace; it’s unusual to see new deals that aren’t virtualized.

I’ll be writing on all this and more in the new year.


Jim Cramer’s “Death of the Data Center”

Jim Cramer’s “Mad Money” featured an interesting segment yesterday, titled “Sell Block: The Death of the Data Center?”

Basically, the premise of the segment is that Intel’s Nehalem DP processors will allow businesses to shrink their data center footprint, and thus businesses won’t need as much data center space, commercial data centers will empty out, and businesses might even bring previously colocated gear back into in-house data centers. He claims, somewhat weirdly, that because the Wall Street analysts who cover this space are primarily telco analysts, they’re not thinking about the impact of compute density on the growth of data center space.

I started to write a “Jim Cramer has no idea what he’s talking about” post, but I saw that Rich Miller over at Data Center Knowledge beat me to it.

Processing power has been increasing exponentially forever, but data center needs have grown even more quickly — certainly in the exponential-growth dot-com world, but even in the enterprise. There’s no reason to believe that this next generation of chips changes that at all, and it’s certainly not backed up by survey data from enterprise buyers, much less rapidly-growing dot-coms.

Cramer also seems to fail to understand the fundamental value proposition of Equinix in particular. It’s not about providing the space more cheaply; it’s about the ability to interconnect to lots of networks. That’s why companies like Google, Microsoft, etc. have built their own data centers in places where there’s cheap power — but continued to leave edge footprints and interconnect within Equinix and other high-network-density facilities.


Equinix swallows Switch & Data

The acquisition train rumbles on.

Equinix, along with its Q3 earnings, has announced that it will acquire Switch and Data in a $689 million deal (80% stock, 20% cash), representing about a 30% premium over SDXC’s closing share price today.

This move should be read as a definitive shift in strategy for Equinix. Equinix’s management team has changed significantly over the past year, and this is probably the strongest signal that the company has given yet about its evolving vision for the future.

Historically, Equinix has determinedly stuck to big Internet hub cities. Given its core focus upon network-neutral colocation — and specifically the customers who need highly dense network interconnect — it’s made sense for them to be where content providers want to be, which is also, not coincidentally, where there’s a dense concentration of service providers. Although Equinix derives a significant portion of its revenues from traditional businesses who simply treat them as a high-quality colocation provider and do very little interconnect, Equinix’s core value proposition has been most compelling to those companies for whom access to many networks, or access to an ecosystem, is critical.

The Switch and Data acquisition takes them out of big Internet hub cities, into secondary cities — often with much smaller, and lower-quality, data centers than Equinix has traditionally built. Equinix specifically cites interest in these secondary markets as a key reason for making the acquisition. They believe that cloud computing will drive applications closer to the edge, and therefore, in order to continue to compete successfully as a network hub for cloud and SaaS providers, they need to be in more markets than just the big Internet hub cities.

Though many anecdotes have been told about the shift towards content peering over the last couple of years, the Arbor Networks study of Internet traffic patterns — see the NANOG presentation for details — backs this up with excellent quantitative data. Consider that many of the larger content providers are migrating to places where there’s cheap power and using a tethering strategy instead (getting fiber back to a network-dense location), and that emerging cloud providers will likely do the same as their infrastructure grows, and you’ll see how a broader footprint becomes relevant — shorter tethers (desirable for latency reasons) mean needing to be in more markets. (Whether this makes regulators more or less nervous about the acquisition remains to be seen.)

While on the surface, this might seem like a pretty simple acquisition — two network-neutral colocation companies getting together, whee — it’s not actually that straightforward. I’ll leave it to the Wall Street analysts to fuss about the financial impact — Equinix and S&D have very different margin profiles, notably — and just touch on a few other things.

While S&D and Equinix overlap in service provider customer base, there are significant differences between the rest of their customers. S&D’s smaller, often less central data centers mean that they historically don’t serve customers who have had large-footprint needs (although this becomes less of a concern with the tethering approach taken by big content providers, who have moved their large footprints out of colo anyway). S&D’s data centers also tend to attract smaller businesses, rather than the mid-sized and enterprise market. Although, like many colo companies, both have sales forces that are essentially order-takers, Equinix displays a knack for enterprise sales and service, a certain polish that S&D lacks. Equinix has a strong enterprise brand, and a consistency of quality that supports that brand; S&D is well-known within the industry (within the trade, so to speak), but not to typical IT managers, and the mixed-quality portfolio that the acquisition creates will probably present some branding and positioning challenges for Equinix.

While I think there will be some challenges in bringing the two companies together to deliver a rationalized portfolio of services in a consistent manner, Equinix has a history of successfully integrating acquisitions, and for a fast entrance into secondary markets, this was certainly the most practical way to go about doing so.

As usual, I can’t delve too deeply in this blog without breaking Gartner’s blogging rules, and so I’ll leave it at that. Clients can feel free to make an inquiry if they’re interested in hearing more.


Gartner BCM summit pitches

I’ve just finished writing one of my presentations for Gartner’s Business Continuity Management Summit. My pitch is focused upon looking at colocation as well as the future of cloud infrastructure for disaster recovery purposes. (My other pitch at the conference is on network resiliency.)

When I started writing this, I’d expected that some providers who had indicated they’d have formal cloud DR services coming out shortly would be able to brief me on what they were planning to offer. Unfortunately, that turned out not to be the case. So the pitch has been more focused on do-it-yourself cloud DR.

Lightweight DR services have appeared and disappeared from the market at an interesting rate ever since Inflow (many years and many acquisitions ago) began offering a service focused on smaller mid-market customers that couldn’t typically afford full-service DR solutions. It’s a natural complement to colocation (in fact, a substantial percentage of the people who use colo do it for a secondary site), and now, a natural complement to the cloud.


Recent polling results

I’ve just put out a new research report called The Changing Colocation and Data Center Market. Macroeconomic factors have driven major changes in both the supply and demand picture for data center construction, leasing, and colocation over the last quarter of 2008 and into this year. The economic environment has brought about an abrupt shift in sourcing strategies, build plans, and the like, driving a ton of inquiry for my colleagues and me. This report looks at those changes, and presents results from a colocation poll of attendees at Gartner’s data center conference in December.

Those of you interested in commentary related to that conference might also want to read reports done by colleagues of mine: Too Many Data Center Conference Attendees Are Not Considering Availability and Location Risks in Data Center Siting and Sourcing Decisions, and an issue near and dear to many businesses right now, how to stretch out their money and current data center life, Pragmatic Guidance on Data Center Energy Issues for the Next 18 Months.

Reports are clients-only, sorry.


The week’s observations

My colleague Tom Bittman has written a great summary of the hot topics from the Gartner data center conference this past week.

Some personal observations as I wrap up the week…

The future of infrastructure is the cloud. I use “cloud” in a broad sense; many larger organizations will be building their own “private clouds” (which technically aren’t actually clouds, but the “private cloud” terminology has sunk in and probably won’t be easily budged). I was surprised by how many people at the conference wanted to talk to me about initial use of public clouds, how to structure cloud services within their own organizations, and what they could learn from public cloud and hosting services.

Cloud demos are extremely compelling. I was using demos of several clouds in order to make my points to people asking about cloud computing: Terremark’s Enterprise Cloud, Rackspace’s Mosso, and Amazon’s EC2 plus RightScale. I showed some screen shots off 3Tera’s website as well. I did not warn the providers that I was going to do this, and none of them were at the conference (a pity, since I suspect this would have been lead-generating). It was interesting to see how utterly fascinated people were — particularly with the Terremark offering, which is essentially a private cloud. (People were stopping me in the hallways to say, “I hear you have a really cool cloud demo.”) I was showing the trivially easy point-and-click process of provisioning a server, which, I think, provided a kind of grounding for “here is how the cloud could apply to your business”.
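
For readers curious what sits behind those point-and-click demos, the same provisioning step can be scripted against EC2’s API in a few lines. Below is a minimal sketch using the boto Python library; the AMI ID and keypair name are placeholders rather than anything from the demos I showed.

```python
import boto

# Connect to EC2 using AWS credentials from the environment
# (AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY).
conn = boto.connect_ec2()

# Launch a single small instance from a placeholder machine image.
reservation = conn.run_instances(
    "ami-12345678",          # hypothetical AMI ID, not a real image
    instance_type="m1.small",
    key_name="my-keypair",   # assumed pre-existing SSH keypair
)

print("Launched:", [i.id for i in reservation.instances])
```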

Colocation is really, really hot. My one-on-one schedule was crammed with colocation questions, as were my conversations with attendees in hallways and over meals, yet I was still shocked by how many people showed up to my Friday, 8 am talk on colocation — the best-attended talk of the slot, I was told (and one cursed by lots of A/V glitches). Over the last month, we’ve seen demand accelerate and supply projections tighten — neither businesses nor data center providers can build right now.

A crazy conference week, like always, but tremendously interesting.


Trion World gets a $70m C round

MMOG developer and publisher Trion World Network just closed a $70 million Series C round, which brings its total raised since its inception in 2006 to over $100 million.

This might seem like a staggering amount of money for a company with two games in development but none published yet. It’s trading on the name of its founder, Jon Van Caneghem, of Might and Magic fame. But it’s not that much money if you realize that games are now being made on movie-sized budgets, and MMOGs are exceptionally expensive to develop.

Dan Hunter had an interesting piece on the Terra Nova blog last year regarding the financials of MMOG development, drawing on an Interplay prospectus for an MMOG based on Fallout. That cited a cost of $75m, including a launch budget of $30m, which presumably includes marketing, manufacturing, and server deployment.

MMOGs are not efficient beasts, and by their nature, they are also prone to flash crowds and highly variable capacity needs. Most scale in a highly unwieldy manner, compounding the basic inefficient utilization of computing capacity. Utility computing infrastructure has huge potential to reduce the overbuy of capacity, but colocation on their own hardware is nigh-universally the way that such companies deploy their games.

Nicholas Carr estimated back in 2006 that an avatar in Second Life has a carbon footprint equivalent to that of a Brazilian. Last year, I heard, from a source I’d consider to be pretty authoritative, that an avatar in Second Life actually has a carbon footprint larger than that of its typical real-world user (usually an affluent American).

This is why Internet data center providers drool at MMOG companies.


Who hosts Warhammer Online?

With the recent launch of EA/Mythic’s Warhammer Online MMORPG comes my usual curiosity about who’s providing the infrastructure.

Mythic has stated publicly that all of the US game servers are located in Virginia, near Mythic’s offices. A couple of traceroutes seem to indicate that they’re in Verizon, almost certainly in colocation (managed hosting is rare for MMOGs), and seem to have purely Verizon connectivity to the Internet. The webservers, on the other hand, look to be split between Verizon, and ThePlanet in Dallas. FileBurst (a single-location download hosting service) is used to serve images and cinematics.
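
For the curious, that kind of quick check takes only a few lines. Here is a rough sketch (assuming a Unix-like machine with traceroute installed); the hostname is an illustrative guess rather than a confirmed Mythic server name.

```python
import socket
import subprocess

def whose_network(hostname):
    """Resolve a host, show its reverse DNS, and trace the route to it."""
    addr = socket.gethostbyname(hostname)
    try:
        rdns = socket.gethostbyaddr(addr)[0]   # PTR records often name the carrier
    except socket.herror:
        rdns = "(no PTR record)"
    print(f"{hostname} -> {addr}  reverse DNS: {rdns}")
    # The last few hops of the traceroute usually reveal the upstream network(s).
    subprocess.run(["traceroute", "-m", "20", addr], check=False)

whose_network("www.warhammeronline.com")   # illustrative hostname only
```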

During the beta, Mythic used BitTorrent to serve files. With the advent of full release, it doesn’t appear that they’re depending on peer-to-peer any longer — unlike Blizzard, for instance, which uses public P2P in the form of BitTorrent for its World of Warcraft updates, trading off cost with much higher levels of user frustration. MMO updates are probably an ideal case for P2P file distribution — Solid State Networks, a P2P CDN, has done well by that — and with hybrid CDNs (those combining a traditional distributed model with P2P) becoming more commonplace, I’d expect to see that model more often.

However, I’m not keen on either single data center locations or single-homing, for anything that wants to be reliable. I also believe that gaming — a performance-sensitive application — really ought to run in a multi-homed environment. My favorite “why you should use multiple ISPs, even if you’re using a premium ISP that you love” anecdote to my clients is an observation I made while playing World of Warcraft a few years ago. WoW originally used just AT&T’s network (in AT&T colocation). Latency was excellent — most of the time. Occasionally, you’d get a couple of seconds of network burp, where latency would spike hugely. If you’re websurfing, this doesn’t really impact your experience. If you’re playing an online game, you can end up dead. When WoW switched to Internap for the network piece (remaining in AT&T colo), overall latencies went up — but the latencies were still well below the threshold of problematic performance, and more importantly, the latencies were rock-solidly in a narrow window of variability. (This is the same reason multi-homed CDNs with lots of route diversity deliver better consistency of user experience than single-carrier CDNs.)
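
To make the jitter point concrete, here’s a toy comparison of the two profiles: a lower average latency with occasional spikes versus a slightly higher but steady latency. The numbers are invented for illustration and aren’t measurements of either network.

```python
import statistics

# Invented millisecond samples: one path with a great average but rare
# multi-second spikes, one with a higher but rock-steady latency.
spiky  = [35] * 98 + [2200, 2500]
steady = [55] * 100

for name, samples in (("low average, occasional spikes", spiky),
                      ("higher average, steady", steady)):
    s = sorted(samples)
    p99 = s[int(len(s) * 0.99) - 1]   # worst 1% of samples
    print(f"{name}: mean {statistics.mean(s):.0f} ms, 99th percentile {p99} ms")
```

For websurfing, the first profile is fine; for a twitchy online game, the second is the one you want.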

Companies like Fileburst, by the way, are going to be squarely in the crosshairs of the forthcoming Amazon CDN. Fileburst will do 5 TB of delivery at $0.80 per GB — $3,985/month. At the low end, they’ll do 100 GB or less at $1/GB. The first 100 MB of storage is free, then it’s $2/MB. They’ve got a delivery infrastructure at the Equinix IBX in Ashburn (Northern Virginia, near DC), extensive peering, but any other footprint is vague (they say they have a six-location CDN service, but it’s not clear whether it’s theirs or if they’re reselling).

If Amazon’s CDN pricing is anything like the S3 pricing, they’ll blow the doors off those prices. S3 is $0.15/GB for space and $0.17/GB for the first 10 TB of data transfer. So delivering 5 TB worth of content out of a 1 GB store would cost me $5,785/month with Fileburst, and about $850 with Amazon S3. Even if the CDN premium on data transfer is, say, 100%, that’d still be only $1,700 with Amazon.
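
The arithmetic above is easy to replay. Here’s a small script using the list prices quoted in this post (Fileburst’s roughly $3,985 delivery charge at the 5 TB tier plus $2/MB of storage after the first free 100 MB, versus S3’s $0.15/GB-month storage and $0.17/GB transfer); the 100% CDN premium is purely hypothetical.

```python
STORE_MB    = 1_000   # 1 GB of stored content
DELIVERY_GB = 5_000   # 5 TB delivered per month

# Fileburst, per the quoted pricing: $3,985 for the 5 TB delivery tier,
# plus $2/MB of storage after the first free 100 MB.
fileburst = 3_985 + (STORE_MB - 100) * 2.00

# Amazon S3: $0.15/GB-month storage, $0.17/GB for the first 10 TB out.
s3 = (STORE_MB / 1_000) * 0.15 + DELIVERY_GB * 0.17

# Same S3 math with a hypothetical 100% CDN premium on the transfer rate.
s3_cdn = (STORE_MB / 1_000) * 0.15 + DELIVERY_GB * 0.17 * 2

print(f"Fileburst:            ${fileburst:,.0f}/month")   # ~$5,785
print(f"Amazon S3:            ${s3:,.0f}/month")          # ~$850
print(f"S3 + 100% CDN markup: ${s3_cdn:,.0f}/month")      # ~$1,700
```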

Amazon has a key cloud trait — elasticity, basically defined as the ability to scale to zero (or near-zero) as easily as scaling to bogglosity. It’s that bottom end that’s really going to give them the potential to wipe out the zillion little CDNs that primarily have low-volume customers.


The power of denial

The power of denial is particularly relevant this week, as we live through the crisis that is currently gripping Wall Street.

I’ve been an analyst for more than eight years, now, and during those years, I’ve seen some stunning examples of denial in action. From tightly held and untrue beliefs about the market and the competition, to unrealistic expectations of engineers, to desperate hopes pinned on uncaring channel partners, to idealistic views of buyers, denial is the thing that people cling to when the reality of the situation is too overwhelmingly awful to acknowledge.

I’m not a doomsayer, and I think that we’re living in a phenomenally exciting time for IT innovation. But innovation disrupts old models, and I see numerous dangers in the market that my vendor clients frequently like to downplay.

For instance:

I’m not a believer in an oversupply of colocation space in the market right now (although this is still primarily a per-city market, so the supply/demand balance really varies with location); we still see prices creeping up. But I do believe that much of the colocation demand is transient: enterprises that unexpectedly ran out of space and power ahead of forecast shove their equipment into colocation for a year or three while they figure out what to do next (which is often building a data center of their own). Overbuilding is still a very real danger.

I also warned of the changes that blades and other high-density hardware would bring to the colocation industry, back in 2001, and over the seven years that have passed since I wrote that note, it’s all come true. Most of the large colocation companies have shifted their models accordingly, but regional data centers are often still woefully underprepared for this change.

Moving to the topic of hosting, I warned of the perils of capacity on demand for computing power all the way back in 2001. Although we’ve seen a decline in overprovisioning in managed hosting over the years, severe overprovisioning remains common, and the market has been buttressed by lots of high-growth customers. But tolerance for overprovisioning is dropping rapidly with the advent of virtualized, burstable capacity, and an increasing number of customers have slow-growth installations. Every managed hoster whose revenue stream depends on customers requiring capacity faster than Moore’s Law can obsolete their servers still has some vital thinking to do.

Making that problem worse is that the expensive part of servicing a hosting customer is the first server of a type they deploy, not the N more that they horizontally scale. Getting that box to stable golden bits is the tough part that eats up all your expensive people-time. And everyone who is thinking that their utility hosting platform is going to be great for picking up high-margin revenues off scaling front-end webservers needs to have another think. Given the dirt-cheap CDN prices these days, and ever-more-powerful and cost-effective application delivery controllers and caches, scaling at the front-end webserver tier is going the way of the dodo.

And while we’re talking about CDNs: Two years ago, I warned our clients that CDN prices were headed off a cliff. Margins were cushioned by the one-time discontinuity in server prices caused by the advent of multi-core chips, but prices have spent much of that time in free-fall, and although the floor’s now stable, average selling prices continue to decline and the market continues to commoditize, even as adoption of rich media shoots through the roof. I’m currently writing a research note updating our market predictions, because our clients have had a lot of interesting things to say about CDN purchases of late… stay tuned.

If you’ve got anything you want to share publicly about where you’re going with colocation, hosting, or your CDN purchases, and your thoughts on these trends, please do comment!

