Category Archives: Industry
Savvis CEO Phil Koen resigns
Savvis announced the resignation of CEO Phil Koen on Friday, citing a “joint decision” between Koen and the board of directors. This was clearly not a planned event, and it’s an interesting move, coming at the end of a year in which Savvis’s stock performed pretty well (up 96% over the past year, although the last quarter has been rockier, at -8%). The presumed conflict between Koen and the board becomes clearer when one looks at a managed hosting comparable like Rackspace (up 276% over the past year, and 19% in the last quarter), rather than at the colocation vendors.
When the newly appointed interim CEO Jim Ousley says “more aggressive pursuit of expanding our growth”, I read that as, “Savvis missed the chance to be an early cloud computing leader”. A leader in utility computing, offering on-demand compute on Egenera-based blade architectures, Savvis could have taken its core market message, shifted its technology approach to embrace a primarily virtualization-based implementation, and led the charge into enterprise cloud. Instead, its multi-pronged approach (do you want dedicated servers? blades? VMs?) led to a lengthy period of confusion for prospective customers, both in marketing material and in the sales cycle itself.
Savvis still has solid products and services, and we still see plenty of contract volume in our own dealings with enterprise clients, as well as generally positive customer experiences. But Savvis has become a technology follower, conservative in its approach, rather than a boldly visionary leader. Under previous CEO Rob McCormick, the company was often ahead of its time, which isn’t ideal either; but in this period of rapid market evolution, consumerization of IT, and self-service, Savvis’s increasingly IBM-like market messaging is a bit discordant with the marketplace, and its product portfolio has largely steered away from the fastest-growing segment of the market: self-managed cloud hosting.
Koen made many good decisions — among them, focusing on managed hosting rather than colocation. But his tenure was also a time of significant turnover within the Savvis ranks, especially at the senior levels of sales and marketing. When Ousley says the company is going to take a “fresh look” at sales and marketing, I read that as, “Try to retain sales and marketing leadership for long enough for them to make a long-term impact.”
Having an interim CEO in the short term — and one drawn from the ranks of the board, rather than from the existing executive leadership — means what is effectively a holding pattern until a new CEO can be selected, get acquainted with the business, and figure out what he wants to do. That’s not going to be quick, which is potentially dangerous at this time of fast-moving market evolution. But the impact of that won’t be felt for many months; in the near term, one would expect projects to continue to execute as planned.
Thus, for existing and prospective Savvis customers, I’d expect that this change in the executive ranks will result in exactly zero impact in the near term; anyone considering signing a contract should just proceed as if nothing’s changed.
The next round-up of links
Renesys has posted its yearly ranking of Internet transit providers. For anyone interested in understanding how transit volumes across various networks are changing, this should be very interesting data.
Ryan Kearney’s Comparing CDN Performance is an interesting overview of cloud CDNs. His methodology is flawed by the limited number of locations he’s testing from, but his comparison charts of features and whatnot are a handy reference for anyone who’s looking at file delivery off the cloud. (And for those who have missed the announcement: don’t forget the Windows Azure CDN, which presumably uses the tech that Microsoft licensed from Limelight.)
Jack of All Clouds has some nice graphs in a State of the Cloud post, showing sites (out of the top 500,000 sites) hosted by various public clouds.
Rich Miller rounds up a Slashdot discussion on how many servers an admin can manage. I’ll throw in my two cents that it’s not just a matter of how many people you have in true systems operations — you also have to look at what you’ve invested in tools, and in the people who write and maintain those tools. There’s a TCO to be looked at here. Tools scale; people don’t. Anyone operating at dot-com or service provider scale rapidly develops a passion for automating everything humanly possible (or agrees that they’ll be giving up on sleep). But for the enterprise, tools implementations often don’t go as well as one might hope.
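To make “tools scale; people don’t” concrete, here’s a back-of-the-envelope sketch of the TCO math. Every number in it (server count, salaries, ratios, tooling cost) is a hypothetical placeholder for illustration, not data from any survey:

```typescript
// Back-of-the-envelope TCO: hiring more admins vs. investing in automation.
// All figures below are hypothetical placeholders, purely for illustration.
const servers = 2000;
const adminCost = 120_000; // fully loaded annual cost per admin (assumed)

// Scenario 1: manual operations at ~100 servers per admin.
const manualCost = Math.ceil(servers / 100) * adminCost;

// Scenario 2: automation raises the ratio to ~500 servers per admin,
// at the price of a team writing and maintaining the tooling.
const toolsTeamCost = 400_000; // assumed annual cost of the tools effort
const automatedCost = Math.ceil(servers / 500) * adminCost + toolsTeamCost;

console.log(`Manual ops:    $${manualCost.toLocaleString()}/yr`);    // $2,400,000/yr
console.log(`Automated ops: $${automatedCost.toLocaleString()}/yr`); // $880,000/yr
```

The point of the toy numbers: the tooling investment stays roughly flat while admin headcount scales linearly with server count, so the gap only widens as the fleet grows.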
And on a Jim Cramer note, Rich Miller also has a fun round-up of data center stock performance in 2009 and since Cramer’s call.
The last quarter in review
The end of 2009 was extraordinarily busy, and that’s meant that, shamefully, I haven’t posted to my blog in ages. I aim to return to near-daily posting in 2010, but this means creating time in my schedule to think and research and write, rather than being entirely consumed by client inquiry.
December was Gartner’s data center conference, where I spent most of a week in back-to-back meetings, punctuated by a cloud computing end-user roundtable, a cloud computing town hall, and my talk on getting data center space. Attendance at the conference is skewed heavily towards large enterprises, but one of the most fascinating bits that emerged out of the week was the number of people walking around with emails from their CEO saying that they had to investigate this cloud computing thing, and whose major goals for the conference included figuring out how the heck they were going to reply to that email.
My cloud computing webinar is now available for replay — it’s a lightweight introduction to the subject. Ironically, when I started working at Gartner, I was terrified of public speaking, and much more comfortable doing talks over the phone. Now, I’m used to having live audiences, and public speaking is just another routine day on the job… but speaking into the dead silence of an ATC is a little unnerving. (I once spent ten minutes giving a presentation to dead air, not realizing that the phone bridge had gone dead.) There were tons of great questions asked by the audience, far more than could possibly be answered in the Q&A time, but I’m taking that input and using it to help decide what I should be writing this year.
Q4 2009, by and large, continued my Q3 inquiry trends. Tons of colocation inquiries — but colocation is often giving way to leasing, now, and local/regional players are prominent in nearly every deal (and winning a lot of the deals). Relatively quiet on the CDN front, but this has to be put in context — Gartner’s analysts took over 1300 inquiries on enterprise video during 2009, and these days I’m pretty likely to look at a client’s needs and tell them they need someone like Kontiki or Ignite, not a traditional Internet CDN. And cloud, cloud, cloud is very much on everyone’s radar screen, with Asia suddenly becoming hot. Traditional dedicated hosting is dying at a remarkable pace; it’s unusual to see new deals that aren’t virtualized.
I’ll be writing on all this and more in the new year.
Jim Cramer’s “Death of the Data Center”
Jim Cramer’s “Mad Money” featured an interesting segment yesterday, titled “Sell Block: The Death of the Data Center?”
Basically, the premise of the segment is that Intel’s Nehalem DP processors will allow businesses to shrink their data center footprint, and thus businesses won’t need as much data center space, commercial data centers will empty out, and businesses might even bring previously colocated gear back into in-house data centers. He claims, somewhat weirdly, that because the Wall Street analysts who cover this space are primarily telco analysts, they’re not thinking about the impact of compute density on the growth of data center space.
I started to write a “Jim Cramer has no idea what he’s talking about” post, but I saw that Rich Miller over at Data Center Knowledge beat me to it.
Processing power has been increasing exponentially for decades, but data center needs have grown even more quickly — certainly in the exponential-growth dot-com world, but even in the enterprise. There’s no reason to believe that this next generation of chips changes that at all, and it’s certainly not backed up by survey data from enterprise buyers, much less rapidly-growing dot-coms.
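As a toy model of why density gains don’t automatically shrink footprint (both growth rates below are assumptions for illustration, not forecasts):

```typescript
// Toy model: total compute demanded vs. compute per server. If demand
// grows faster than density, server count (and thus space) still rises.
let demand = 1.0;    // total compute demanded, normalized to year 0
let perServer = 1.0; // compute per server, normalized to year 0

for (let year = 1; year <= 6; year++) {
  demand *= 1.5;                      // assume demand grows 50% per year
  if (year % 2 === 0) perServer *= 2; // assume density doubles every 2 years
  console.log(`year ${year}: ${(demand / perServer).toFixed(2)}x servers`);
}
// Prints 1.50, 1.13, 1.69, 1.27, 1.90, 1.42: a sawtooth whose floor rises
// about 12.5% per density generation. Under these assumptions, denser
// chips slow the growth of footprint; they don't reverse it.
```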
Cramer also seems to fail to understand the fundamental value proposition of Equinix in particular. It’s not about providing the space more cheaply; it’s about the ability to interconnect to lots of networks. That’s why companies like Google, Microsoft, etc. have built their own data centers in places where there’s cheap power — but continued to leave edge footprints and interconnect within Equinix and other high-network-density facilities.
Equinix swallows Switch & Data
The acquisition train rumbles on.
Along with its Q3 earnings, Equinix announced that it will acquire Switch and Data in a $689 million deal (80% stock, 20% cash), representing about a 30% premium over SDXC’s closing share price today.
This move should be read as a definitive shift in strategy for Equinix. Equinix’s management team has changed significantly over the past year, and this is probably the strongest signal that the company has given yet about its evolving vision for the future.
Historically, Equinix has determinedly stuck to big Internet hub cities. Given its core focus on network-neutral colocation — and specifically on the customers who need highly dense network interconnect — it has made sense for Equinix to be where content providers want to be, which is also, not coincidentally, where there’s a dense concentration of service providers. Although Equinix derives a significant portion of its revenues from traditional businesses that simply treat it as a high-quality colocation provider and do very little interconnect, its core value proposition has been most compelling to those companies for whom access to many networks, or access to an ecosystem, is critical.
The Switch and Data acquisition takes them out of big Internet hub cities, into secondary cities — often with much smaller, and lower-quality, data centers than Equinix has traditionally built. Equinix specifically cites interest in these secondary markets as a key reason for making the acquisition. They believe that cloud computing will drive applications closer to the edge, and therefore, in order to continue to compete successfully as a network hub for cloud and SaaS providers, they need to be in more markets than just the big Internet hub cities.
Though many anecdotes have been told about the shift towards content peering over the last couple of years, the Arbor Networks study of Internet traffic patterns — see the NANOG presentation for details — backs this up with excellent quantitative data. Consider that many of the larger content providers are migrating to places where there’s cheap power and using a tethering strategy instead (getting fiber back to a network-dense location), and that emerging cloud providers will likely do the same as their infrastructure grows, and you’ll see how a broader footprint becomes relevant — shorter tethers (desirable for latency reasons) mean needing to be in more markets. (Whether this makes regulators more or less nervous about the acquisition remains to be seen.)
While on the surface, this might seem like a pretty simple acquisition — two network-neutral colocation companies getting together, whee — it’s not actually that straightforward. I’ll leave it to the Wall Street analysts to fuss about the financial impact — Equinix and S&D have very different margin profiles, notably — and just touch on a few other things.
While S&D and Equinix overlap in their service provider customer base, there are significant differences in the rest of their customers. S&D’s smaller, often less central data centers mean that it historically hasn’t served customers with large-footprint needs (although this becomes less of a concern with the tethering approach taken by big content providers, who have moved their large footprints out of colo anyway). S&D’s data centers also tend to attract smaller businesses, rather than the mid-sized and enterprise market. And although, like many colo companies, both have sales forces that are essentially order-takers, Equinix displays a knack for enterprise sales and service, a certain polish, that S&D lacks. Equinix has a strong enterprise brand, and a consistency of quality that supports that brand; S&D is well-known within the industry (within the trade, so to speak), but not to typical IT managers, and the mixed-quality portfolio that the acquisition creates will probably present some branding and positioning challenges for Equinix.
While I think there will be some challenges in bringing the two companies together to deliver a rationalized portfolio of services in a consistent manner, Equinix has a history of successfully integrating acquisitions, and for a fast entrance into secondary markets, this was certainly the most practical way to go about doing so.
As usual, I can’t delve too deeply in this blog without breaking Gartner’s blogging rules, and so I’ll leave it at that. Clients can feel free to make an inquiry if they’re interested in hearing more.
Recent inquiry trends
It’s been mentioned to me that my “what are you hearing about from clients” posts are particularly interesting, so I’ll try to do a regular update of this sort. I have some limits on how much detail I can blog and stay within Gartner’s policies for analysts, so I can’t get too specific; if you want to drill into detail, you’ll need to make a client inquiry.
It’s shaping up into an extremely busy fall season, with people — IT users and vendors alike — sounding relatively optimistic about the future. If you attended Gartner’s High-Tech Forum (a free event we recently did for tech vendors in Silicon Valley), you saw that we showed a graph of inquiry trends, indicating that “cost” is a declining search term, while “cloud” has rapidly increased in popularity. We’re forecasting a slow recovery, but at least it’s a recovery.
This is budget and strategic planning time, so I’m spending a lot of time with people discussing their 2010 cloud deployment plans, as well as their two- and five-year cloud strategies. There’s planning activity around data centers, hosting, and CDN services, too, but the longer the planning horizon, the more likely it is to involve cloud. (I posted on cloud inquiry trends previously.)
There’s certainly purchasing going on right now, though, and I’m talking to clients across the whole of the planning cycle (planning, shortlisting, RFP review, evaluating RFP responses, contract review, re-evaluating existing vendors, etc.). Because pretty much everything that I cover is a recurring service, I don’t see the end-of-year rush to finish spending 2009’s budget, but this is the time of year when people start to work on the contracts they want to go for as soon as 2010’s budget hits.
My colo inquiries this year have undergone an interesting shift towards local (and regional) data centers, rather than national players, reflecting colocation’s evolution from a primarily Internet-centric model to simply another method by which businesses can get data center space. Based on the planning discussions I’m hearing, I expect this to be the prevailing trend going forward, as well.
People are still talking about hosting, and there are still plenty of managed hosting deals out there, but very rarely do I see a hosting deal now that doesn’t have a cloud discussion attached. If you’re a hoster and you can’t offer capacity on demand, most of my clients will now simply take you off the table. It’s an extra kick in the teeth if you’ve got an on-demand offering but it’s not yet integrated with your managed services and/or dedicated offerings; now you’re competing as if you were two providers instead of one.
The CDN wars continue unabated, and competitive bidding is increasingly the norm, even in small deals. Limelight Networks fired a salvo into the fray yesterday, with an update to their delivery platform that they’ve termed “XD”. The bottom line on that is improved performance at a baseline for all Limelight customers, plus a higher-performance tier and enhanced control and reporting for customers who are willing to pay for it. I’ll form an opinion on its impact once I see some real-world performance data.
There’s a real need in the market for a company that can monitor actual end-user performance and do consulting assessments of multiple CDNs and origin configurations. (It’d be useful in the equipment world, too, for ADCs and WOCs.) Not everyone can or wants to deploy Keynote or Gomez or Webmetrics for this kind of thing, those companies aren’t necessarily eager to do a consultative engagement of this sort, and practically every CDN on the planet has figured out how to game their measurements to one extent or another. That doesn’t make them valueless in such assessments, but real-world data from actual users (via JavaScript agents, video player instrumentation, download client instrumentation, etc.) is still vastly preferable. Practically every client I speak to wants to do performance trials, but the means available for doing so are still overly limited and very expensive.
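For the curious, here’s a minimal sketch of what the JavaScript-agent approach looks like: time the fetch of a small test object from each candidate CDN in the visitor’s own browser, then beacon the result home. The hostnames, probe object, and beacon endpoint here are all hypothetical, and a production agent would add sampling, multiple object sizes, separation of DNS time from fetch time, and so on:

```typescript
// Minimal real-user CDN probe. Hostnames and endpoints are hypothetical.
const candidates = [
  { name: "cdn-a", url: "https://test.cdn-a.example.com/probe.gif" },
  { name: "cdn-b", url: "https://test.cdn-b.example.com/probe.gif" },
];

function probe(name: string, url: string): void {
  const img = new Image();
  const start = Date.now();
  img.onload = () => report(name, Date.now() - start);
  img.onerror = () => report(name, -1); // record failures, don't drop them
  // Cache-buster so we measure the CDN, not the browser cache.
  img.src = `${url}?nocache=${Math.random()}`;
}

function report(name: string, millis: number): void {
  // Fire-and-forget beacon back to a (hypothetical) collection endpoint.
  new Image().src = `https://stats.example.com/beacon?cdn=${name}&ms=${millis}`;
}

candidates.forEach((c) => probe(c.name, c.url));
```

Because the measurement runs in real browsers on real end-user connections, it’s far harder for a CDN to game than a fixed set of synthetic monitoring nodes.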
All in all, things are really crazy busy. So busy, in fact, that I ended up letting a whole month go by without a blog post. I’ll try to get back into the habit of more frequent updates. There’s certainly no lack of interesting stuff to write about.
Cloudy inquiry trends
I haven’t been posting much lately, due to being overwhelmingly busy with client inquiries, and having a few medical issues that have taken me out of the action somewhat. So, this is something of a catch-up, state-of-the-universe-from-my-perspective, inquiry-trends post.
With the economy picking up a bit, businesses starting to return to growth initiatives rather than just cost optimization, and the approach of the budget season, the flow of client inquiry around cloud strategy has accelerated dramatically, to the point where cloud inquiries are becoming the overwhelming majority of my inquiries. Even my colocation and data center leasing inquiries are frequently taking on a cloud flavor, i.e., “How much longer should we plan to have this data center, rather than just putting everything in the cloud?”
Organizations have really absorbed the hype — they genuinely believe that shortly, the cloud will solve all of their infrastructure issues. Sometimes, they’ve even made promises to executive management that this will be the case. Unfortunately, in the short term (i.e., for 2010 and 2011 planning), this isn’t going to be the case for your typical mid-size and enterprise business. There’s just too much legacy burden. Also, traditional software licensing schemes simply don’t work in this brave new world of elastic capacity.
The enthusiasm, though, is vast, which means that there are tremendous opportunities out there, and I think it’s both entirely safe and mainstream to run cloud infrastructure pilot projects right now, including large-scale, mission-critical, production infrastructure pilots for a particular business need (as opposed to deciding to move your whole data center into the cloud, which is still bleeding-edge adopter stuff). Indeed, I think there’s a significant untapped potential for tools that ease this transition. (Certainly there are any number of outsourcers and consultants who would love to charge you vast amounts of money to help you migrate.)
The colocation and data center leasing markets move with the economy, and the trends and the players shift with them, especially as strong new regionals and high-density players emerge. The cloud influence is also significant, as people try to evaluate what their real needs for space will be going forward; this is particularly true for anyone looking at long-term leases and wondering what the state of IT will be like going out ten years. Followers of this space should check out Switch’s SuperNAP for a good example of the kind of impact that a new player can make in a very short time (it opened in December).
August has been a consistently quiet month for CDN contract inquiries, and this year is no exception, but the whole of the last three months has really been hopping. The industry is continuing to shift in interesting ways, not just because of the dynamics of the companies involved, but because of changing buyer needs. Also, there was a very interesting new launch in July in the application delivery network space, a company called Asankya; it’s definitely worth checking out if you follow this space.
All in all, there’s a lot of activity, and it’s becoming more future-focused as people get ready to prep their budgets. This is good news for everyone, I think. Even though the economic shifts have driven companies to be more value-conscious, there’s a healthy emphasis being placed on the right solutions, at the right price, that do the right thing for the business.
What’s mid-sized?
As I talk to clients, it strikes me that companies with fairly similar IT infrastructures can use very different words to describe how they feel about it. One client might say, “Oh, we’re just a small IT shop, we’ve only got a little over 250 servers, we think cloud computing is for people like us.” Another client that’s functionally identical (same approximate business size, industry, mix of workloads and technologies) might say, much more indignantly, “We’re a big IT shop! We’ve got more than 250 servers! Cloud computing can’t help enterprises like us!”
“SMB” is a broadly confused term. So, for that matter, is “enterprise”. I tend to prefer the term “mid-market”, but even that is sort of cop-out language. Moreover, business size and IT size don’t correlate. Consider the Fortune 500 companies that extract natural resources, vs. their neighbors on the list: a natural-resources giant can have enormous revenues but a comparatively modest IT estate.
Vendors have to be careful how they pitch their marketing. Mid-sized companies and/or mid-sized IT shops don’t always know when vendors are talking about them, and not some other sort of company. Conversely, IT managers have to look more deeply to figure out if a particular sort of cloud service is right for their organization. Don’t dismiss a cloud service out of hand because you think you’re either too big or too small for it.
A hodgepodge of links
This is just a round-up of links that I’ve recently found to be interesting.
Barroso and Hölzle (Google): Warehouse-Scale Computing. This is a formal lecture-paper covering the design of what these folks from Google refer to as WSCs. They write, “WSCs differ significantly from traditional data centers: they belong to a single organization, use a relatively homogenous hardware and system software platform, and share a common systems management layer. Often, much of the application, middleware, and system software is built in-house compared to the predominance of third-party software running in conventional data centers. Most importantly, WSCs run a smaller number of very large applications (or Internet services), and the common resource management infrastructure allows significant deployment flexibility.” The paper is wide-ranging but written to be readily understandable by the mildly technical layman. Highly recommended for anyone interested in cloud.
Washington Post: Metrorail Crash May Exemplify Automation Paradox. The WaPo looks back at serious failures of automated systems, and quotes a “growing consensus among experts that automated systems should be designed to enhance the accuracy and performance of human operators rather than to supplant them or make them complacent. By definition, accidents happen when unusual events come together. No matter how clever the designers of automated systems might be, they simply cannot account for every possible scenario, which is why it is so dangerous to eliminate ‘human interference’.” Definitely something to chew over in the cloud context.
Malcolm Gladwell: Priced to Sell. The author of The Tipping Point takes on Chris Anderson’s Free, and challenges the notion that information wants to be free. In turn, Seth Godin thinks Gladwell is wrong, and the book seems to be setting off some healthy debate.
Bruce Robertson: Capacity Planning Equals Budget Planning. My colleague Bruce riffs off a recent blog post of mine, and discusses how enterprise architects need to change the way they design solutions.
Martin English: Install SAP on Amazon Web Services. An interesting blog devoted to how to get SAP running on AWS. This is for people interested in hands-on instructions.
Robin Burkinshaw: Being homeless in the Sims 3. This blog tells the story, in words and images, of “Alice and Kev”, a pair of characters that the author (a game design student) created in the Sims 3. It’s a fascinating bit of user-generated content, and a very interesting take on what can be done with modern sandbox-style games.
Does Procurement know what you care about?
In many enterprises, IT folks decide what they want to buy and who they want to buy it from, but Procurement negotiates the contract, manages the relationship, and has significant influence on renewals. Right now, especially, purchasing folks have a lot of influence, because they’re often now the ones who go out and shop for alternatives that might be cheaper, forcing IT into the position of having to consider competitive bids.
A significant percentage of enterprise seatholders who use industry advisory firms have inquiry access for their Procurement group, so I routinely talk to people who work in purchasing. Even the ones who are dedicated to an IT procurement function tend not to have more than a minimal understanding of technology. Moreover, when it comes to renewals, they often have no thorough understanding of what exactly it is that the business is actually trying to buy.
Increasingly, though, procurement is self-educating via the Internet. I’ve been seeing this a bit in relation to the cloud (although there, the big waves are being made by business leadership, especially the CEO and CFO, reading about cloud in the press and online, more so than Purchasing), and a whole lot in the CDN market, where things like Dan Rayburn’s blog posts on CDN pricing provide some open guidance on market pricing. Bereft of context, and armed with just enough knowledge to be dangerous, purchasing folks looking across a market for the cheapest place to source something can arrive at incorrect conclusions about what IT is really trying to source, and misjudge how much negotiating leverage they’ll really have with a vendor.
The larger the organization, the greater the disconnect between IT decision-makers and the actual sourcing folks. In markets that are already commoditized or are commoditizing, vendors have to keep that in mind, and IT buyers need to make sure that the actual procurement staff has enough information to make good negotiation decisions, especially if there are any non-commodity aspects that are important to the buyer.