Monthly Archives: April 2010

The convenience of not coping

There’s a lot to be said for the ability to get a server for less than the price of a stick of chewing gum.

But convenience has a price, and that price is high enough that shared hosters, blog hosters, and other folks who make their daily pittance from infrastructure-plus-a-little-extra aren’t especially threatened by cloud infrastructure services.

For instance, I pay for WordPress to host a blog because, while I am readily capable of managing a cloud server and everything necessary to run WordPress, I don’t want to deal with it. I have better things to do with my time.

Small businesses will continue to use traditional shared hosting or even some control-panel-based VPS offerings, despite the much-inferior price-to-resource ratios compared to raw cloud servers, because of the convenience of not having to cope with administration.

The reason cloud servers don’t represent significant cost savings for most enterprises (when running continuously, rather than for burst or one-time capacity) is that administration is still a tremendous burden. It’s why PaaS offerings will gain more and more traction over time, as the platforms mature, but also why the companies that crack the code of really automating systems administration will win over time.

I was pondering this equation while contemplating the downtime of a host that I use for some personal stuff; they’ve got a multi-hour maintenance downtime this weekend. My solution to this was simple: write a script that would, shortly before shutdown time, automatically shut down my application, provision a 1.5-cent-an-hour cloud server over on Rackspace, copy the data over, and fire up the application on its new home. (Note: This was just a couple of lines of code, taking moments to type.) The only thing I couldn’t automate was the DNS changeover, since I use GoDaddy for primary DNS and they don’t have an API available for ordinary customers. But conveniently: failover, without having to disrupt my Saturday.
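
For the curious, here’s a minimal sketch of what that kind of script can look like, written out in Python rather than as the couple of lines I actually used. The hostnames, paths, and service names are placeholders, and the provision_cloud_server() helper is a stand-in for a call to the cloud provider’s API (details deliberately omitted), so treat this as an illustration of the shape of the thing, not working failover code:

    # failover_sketch.py -- illustrative only; assumes SSH key access to both
    # hosts, rsync installed, and an init script named "myapp" on each box.
    import subprocess

    OLD_HOST = "old-host.example.com"   # placeholder: current hosting provider
    APP_DIR = "/srv/myapp"              # placeholder: application and data directory

    def run(cmd):
        """Run a shell command and abort if it fails."""
        subprocess.check_call(cmd, shell=True)

    def provision_cloud_server():
        """Placeholder: create a small cloud server via the provider's API and
        return its public IP once it's reachable. Details intentionally omitted."""
        raise NotImplementedError("call your cloud provider's API here")

    new_ip = provision_cloud_server()

    # 1. Stop the application cleanly on the old host.
    run("ssh %s 'service myapp stop'" % OLD_HOST)

    # 2. Copy the application and its data over to the new server.
    run("rsync -az %s:%s/ root@%s:%s/" % (OLD_HOST, APP_DIR, new_ip, APP_DIR))

    # 3. Start the application in its new home.
    run("ssh root@%s 'service myapp start'" % new_ip)

    print("Now repoint DNS at %s (the one manual step)." % new_ip)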

But I realized that I was paying, on a resource-unit equivalent, tremendously more for my regular hosting than I would for a cloud server. Mostly, I’m paying for the convenience of not thinking — for not having to deal with making sure the OS is hardened, pay attention to security advisories, patch, upgrade, watch my logs, etc. I can probably afford the crude way of not thinking for a couple of hours — blindly shutting down all ports, pretty much — but I’m not comfortable with that approach for more than an afternoon.

This is, by the way, also a key difference between the small-business folks who have one or two servers, and the larger IT organizations with dozens, hundreds, or thousands of servers. The fewer you’ve got, the less efficient your labor leverage is. The guy with the largest scale doesn’t necessarily win on cost-efficiency, but there’s definitely an advantage to getting to enough scale.

Getting real on colocation

Of late, I’ve had a lot of people ask me why my near-term forecast for the colocation market in the United States is so much lower (in many cases, half the growth rate) than those produced by competing analyst firms, Wall Street, and so forth.

Without giving too much information (as you’ll recall, Gartner likes its bloggers to preserve client value by not delving too far into details for things like this), the answer to that comes down to:

  1. Gartner’s integrated forecasting approach
  2. Direct insight into end-user buying behavior
  3. Tracking the entire market, not just the traditional “hot” colo markets

I’ve got the advantage of the fact that Gartner produces forecasts for essentially the full range of IT-related “stuff”. If I’ve got a data center, I’ve got to fill it with stuff. It needs servers, network equipment, and storage, and those things need semiconductors as their components. It’s got to have network connectivity (and that means carrier network equipment for service providers, as well as equipment on the terminating end). It’s got to have software running on those servers. Stuff is a decent proxy for overall data center growth. If people aren’t buying a lot of stuff, their data center footprint isn’t growing. And when they’re buying stuff, it’s important to know whether it’s replacing other stuff (freeing up power and space), or whether it’s new stuff that’s going to drive footprint or power growth.

Collectively, analysts at Gartner take over a quarter-million client inquiries a year, an awful lot of them related to purchasing decisions of one sort or another. We also do direct primary research in the form of surveys. So when we forecast, we’re not just listening to vendors tell us what they think their demand is; we’re also judging demand from the end-user (buyer) side. My colleagues and I, who collectively cover data center construction, renovation, leasing, and colocation (as well as things like hosting and data center outsourcing), have a pretty good picture of what our clientele are thinking about when it comes to procuring data center space, and that end-user thinking directly informs our forecast for the stuff that goes into data centers.

Because of our client base, which includes not only IT buyers dispersed throughout the world but also a lot of vendors and investors, we watch not just the key colocation markets where folks like Equinix have set up shop, but everywhere anyone does colo, which is getting to be an awful lot of places. If you’re judging the data center market by what’s happening in Equinix Cities or even Savvis Cities, you’re missing a lot.

If I’m going to believe in gigantic growth rates in colocation, I have to believe that one or more of the following things is true:

  1. IT stuff is growing very quickly, driving space and/or power needs
  2. Substantially more companies are choosing colo over building or leasing
  3. Prices are escalating rapidly
  4. Renewals will be at substantially higher prices than the original contracts

I don’t think, in the general case, that these things are true. (There are places where they can be true, such as with dot-com growth, specific markets where space is tight, and so on.) They’re sufficiently true to drive a colo growth rate that is substantially higher than the general “stuff that goes into data centers” growth rate, but not enough to drive the stratospheric growth rates that other analysts have been talking about.

Note, though, that this is the market growth rate. Individual companies may have growth rates far in excess of, or far below, that of the market.

I could be wrong, but pessimism plus the comprehensive approach to forecasting has served me well in the past. I came through the dot-com boom-and-bust with forecasts that turned out to be pretty much on the money, despite the fact that every other analyst firm on the planet was predicting rates of growth enormously higher than mine.

(Also, to my retroactive amusement: Back then, I estimated revenue figures for WorldCom that were a fraction of what they reported, due to my simple inability to make sense of their reported numbers. If you push network traffic, you need carrier equipment, as do the traffic recipients. And traffic goes to desktops and servers, which can be counted, and you can arrive at reasonable estimates of how much bandwidth each uses. And so on. Everything has to add up to a coherent picture, and it simply didn’t. It didn’t help that the folks at WorldCom couldn’t explain the logical discrepancies, either. It just took a lot of years to find out why.)

Credit cards and EA/Mythic’s epic billing mistake

Most of us have long since overcome our fear of handing over our credit cards to Internet merchants. It’s become routine for most of us to simply do so. We buy stuff, we sign up for subscriptions; it’s just like handing over plastic anywhere else. For that matter, most of us have never really thought about all that credit card data lying around in the hands of brick-and-mortar merchants with whom we do business, until the unfortunate times when that data gets mass-compromised.

Bad billing problems plague lots of organizations, but Electronic Arts (in the form of its Mythic Entertainment studio, which makes the massively multiplayer online RPGs Dark Age of Camelot and Warhammer Online) just had a major screw-up: a severe billing system error that, several days ago, repeatedly charged customers their subscription fees. Not just one extra charge, but, some users say, more than sixty. Worse still, the error reportedly affected not just current customers, but past customers. A month’s subscription is $15, but users can pre-pay for as much as a year. And these days, with credit cards so often actually being checking-account debit cards, that is often an immediate hit to the wallet. So you can imagine the impact of multiple charges, even on users with decent bank balances. (Plenty of people with good-sized savings cushions only keep enough money in the checking account to cover expected bills, so you don’t have to be on the actual fiscal edge to get smacked with overdraft fees.) EA is scrambling to get this straightened out, of course, but this is every company’s worst billing nightmare, and it comes at a time when EA and its competitors are all scrambling to shift their business models online.

How many merchants that you no longer do business with, but that used to have recurring billing permission, still have your credit card on file? As online commerce and micropayments proliferate, how many more merchants will store that data? (Or will PayPal, Apple’s storefronts, and other payment services rule the world?)

Cogent’s Utility Computing

A client evaluating cloud computing solutions asked me about Cogent’s Utility Computing offering (and showed me a nice little product sheet for it). Never having heard of it before, and not having a clue from the marketing collateral what this was actually supposed to be (and finding zero public information about it), I got in touch with Cogent and asked them to brief me. I plan to include a blurb about it in my upcoming Who’s Who note, but it’s sufficiently unusual and interesting that I think it’s worth a call-out on my blog.

Simply put, Cogent is allowing customers to rent dedicated Linux servers at Cogent’s POPs. The servers are managed through the OS level; customers have sudo access. This by itself wouldn’t be hugely interesting (and many CDNs now allow their customers to colocate at their POPs, and might offer self-managed or simple managed dedicated hosting as well in those POPs). What’s interesting is the pricing model.

Cogent charges for this service based on bandwidth (on a per-Mbps basis). You pay normal Cogent prices for the bandwidth, plus an additional per-Mbps surcharge of about $1. In other words, you don’t pay any kind of compute price at all. (You do have to push a certain minimum amount of bandwidth in order for Cogent to sell you the service at all, though.)

This means that if you want to construct your own fly-by-night CDN (or even just a high-volume origin for static content), this is one way to do it. Figure you could easily be looking at $5/Mbps pricing and below, all-in. If you’re looking for cheap and crude and high-volume, then these servers in a couple of POPs, plus a global load-balancing service of some sort, will do it. For anything that’s not performance-sensitive, such as large file downloads in the background (game content updates, for instance), this might turn out to be a pretty interesting alternative.
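
To put that $5/Mbps figure in perspective, here’s the back-of-the-envelope conversion from per-Mbps pricing to an effective cost per gigabyte delivered. The numbers below are my own illustrative assumptions, not Cogent’s quoted prices, and the utilization factor is a guess, since real traffic is never flat:

    # Rough conversion from $/Mbps-month to $/GB delivered.
    # All figures are illustrative assumptions, not quoted prices.
    price_per_mbps = 5.0              # dollars per Mbps per month, all-in (assumed)
    seconds_per_month = 30 * 24 * 3600
    utilization = 0.5                 # average traffic vs. billed rate (assumed)

    bits_delivered = 1e6 * seconds_per_month * utilization   # bits per Mbps billed
    gb_delivered = bits_delivered / 8 / 1e9                   # convert to gigabytes

    print("~%.0f GB delivered per Mbps billed" % gb_delivered)
    print("~$%.3f per GB" % (price_per_mbps / gb_delivered))

At 50% average utilization that works out to roughly three cents per gigabyte, and half that if you could somehow run flat-out, which is the arithmetic that makes this sort of offering interesting for bulk, non-performance-sensitive delivery.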

I’ve always thought that Level 3’s CDN service, with its “it costs what our bandwidth costs” pricing tactic, was a competitive assault not so much on Limelight (or even AT&T, which has certainly gotten into mudpit pricing fights with Level 3) as on Cogent and other providers of low-cost, high-volume bandwidth — i.e., convincing people that, rather than buying servers, getting colocation space, and buying cheap high-volume bandwidth, they should just take CDN services. So it makes sense for Cogent to strike back with a product that circumvents having to make the investments in technology that would be required to get into the CDN mudpit directly.

I’ll be interested to see how this evolves — and will be curious to see if anyone else launches a similar service.

Who’s Who in CDN

I’m currently working on writing a research note called “Who’s Who in Content Delivery Networks”. The CDN space isn’t quite large enough yet to justify one of Gartner’s formal rating methodologies (the Magic Quadrant or MarketScope), but with the proliferation of vendors who can credibly serve enterprise customers, the market deserves a vendor note.

The format of a Who’s Who looks a lot like our Cool Vendors format — company name, headquarters location, website, a brief blurb about who they are and what they do, and a recommendation for what to use them for. I like to keep my vendor write-up formats pretty consistent, so each CDN gets a comment about its size (and length of time in the business and funding source, if relevant), its footprint, the services offered, whether there’s an application acceleration solution (and if so, the technology approach behind it), its pricing tier (premium-priced, competitive, etc.), and its general strategy.

Right now, I’m writing up the ten vendors that are most commonly considered by enterprise buyers of CDN, and then planning to add some quick bullet points on other vendors in the ecosystem who aren’t CDNs themselves (equipment vendors, enterprise internal CDN alternatives, etc.), probably more in a “here are some vendor names” fashion, with no blurbs.

For those of you who follow my research, I’m also about to publish my yearly update of the CDN market that’s targeted at our IT buyer clients (i.e., how to choose a vendor and what the pricing is like), along with another note on the emergence of cloud CDNs (to answer a very common inquiry, which is, “Can I replace my Akamai services with Amazon?”).

Q1 2010 inquiry in review

My professional life has gotten even busier — something that I thought was impossible, until I saw how far out my inquiry calendar was being booked. As usual, my blogging has suffered for it, as has my writing output in general. Nearly all of my writing now seems to be done in airports, while waiting for flights.

The things that clients are asking me about have changed in a big way since my Q4 2009 commentary, although this is partially due to an effort to shift some of my workload to other analysts on my team, so I can focus on the stuff that’s cutting edge rather than routine. I’ve been trying to shed as much of the routine colocation and data center leasing inquiry onto other analysts as possible, for instance; reviewing space-and-power contracts isn’t exactly rocket science, and I can get the trend information I need without having to look at a zillion individual contracts.

Probably the biggest surprise of the quarter is how sharply my CDN inquiry has ramped up. It’s Akamai and more Akamai, for the most part — renewals, new contracts, and almost always, competitive bids. With aggressive new pricing across the board, a willingness to negotiate (and an often-confusing contract structure), and serious prospecting for new business, Akamai is generating a volume of CDN inquiry for me that I’ve never seen before, and I talk to a lot of customers in general. Limelight is in nearly all of these bids, too, by the way, and the competition in general has been very interesting — particularly AT&T. Given Gartner’s client base, my CDN inquiry is highly diversified; I see a tremendous amount of e-commerce, enterprise application acceleration, electronic software delivery, and whatnot, in addition to video deals. (I’ve seen as many as 15 CDN deals in a week, lately.)

The application acceleration market in general is seeing some new innovations, especially on the software end (check out vendors like Aptimize), and more ADN offerings will be launched by the major CDN vendors this year. The question of “Do you really need an ADN, or can you get enough speed with hardware and/or software?” is certainly a key one right now, due to the big delta in price between pure content offload and dynamic acceleration.

By the way, if you have not seen Akamai CEO Paul Sagan’s “Leading through Adversity” talk given at MIT Sloan, you might find it interesting — it’s his personal perspective on the company’s history. (His speech starts around the 5:30 mark, and is followed by open Q&A, although unfortunately the audio cuts out in one of the most interesting bits.)

Most of the rest of my inquiry time is focused around cloud computing inquiries, primarily of a general strategic sort, but also with plenty of near-term adoption of IaaS. Traditional pure-dedicated hosting inquiry, as I mentioned in my last round-up, is pretty much dead — just about every deal has some virtualized utility component, and when it doesn’t, the vendor has to offer some kind of flexible pricing arrangement. Unusually, I’m beginning to take more and more inquiry from traditional data center outsourcing clients who are now looking at shifting their sourcing model. And we’re seeing some sharp regional trends in the evolution of the cloud market that are the subject of an upcoming research note.
