Monthly Archives: November 2008

Recently-published research

Here’s a quick round-up of some of my recently-published research.

Is Amazon EC2 Right For You? This is an introduction to Amazon’s Elastic Compute Cloud, written for a mildly technical audience. It summarizes Amazon’s capabilities, the typical business case for using it, and what you’ve got to do to use it. If you’re an engineer looking for a quick briefing, or you want to show a “what this newfangled thing is” summary to your manager, or you’re an investor trying to understand what exactly it is that Amazon does, this is the document for you.

Dataquest Insight: Web Hosting, North America, 2006-2012. This is an in-depth look at the colocation and hosting business, together with market forecasts and trends. (Investors may also want to look at the Invest Implications.)

Dataquest Insight: Content Delivery Networks, North America, 2006-2012. This is an in-depth look at the CDN market, segment-by-segment, with market forecasts and trends. (Investors may also want to look at the Invest Implications.)

You’ll need to be a Gartner subscriber (or purchase the individual document) in order to view these pieces.

Upcoming research (for publication in the next month): A pricing guide for Web hosting and cloud infrastructure services; a classification scheme and service provider roadmap for cloud offerings; a toolkit for CDN requirements gathering and price estimation; a framework for gathering video requirements; and a CDN selection guide.

Quick takes: Comcast, Cogent, IronScale

Some quick takes on recent news:

Comcast P4P Results. Comcast is one of the ISPs working with the hybrid-P2P CDN Pando Networks on a trial, and it is showing better numbers than its competitors. The takeaway: broadband ISPs are actively interested in P2P and CDNs, and in figuring out a way to monetize all of the video delivery they’re doing to their end-users.

Sprint Depeers Cogent (and Repeers). In this latest round of Cogent’s peering disputes, the argument is over a contract Cogent signed with Sprint. The takeaway: Cogent is trying to keep its costs down, and it deserves credit for driving down bandwidth costs for everyone; its competitors, who believe Cogent keeps its prices low by not carrying its fair share of traffic, are hitting back, and that fight gets expressed in disputes over peering settlements.

IronScale Launches. RagingWire (a colo provider in Sacramento) has launched a managed hosting offering. Like SoftLayer, it offers rapidly provisioned dedicated servers and associated infrastructure, but unlike most of the competition in this space, it’s a managed solution. The takeaway: as I wrote almost three years ago, it’s not about virtualization; it’s about flexibility. (“Beyond the Hype”: clients only, sorry.)

The broadband-caps disaster

The United States has now elected a President who has pledged universal broadband. On almost the same day, AT&T announced that it would follow some of its fellow network operators in trialing metered broadband.

Broadband caps have been much more common in Europe, but the trend there is away from caps, not towards them. Caps stifle innovation, discouraging the development of richer Internet experiences and the convergence of voice and video with data.

AT&T’s proposed caps start at 20 GB of data transfer per month. That’s equivalent to about 64 kbps of sustained throughput, barely better than dial-up modem speed. Put another way, it’s five 4 GB, DVD-quality movie downloads per month. And those caps are generous compared to Time Warner’s: AT&T proposes 20-150 GB caps, vs. TW’s 5-40 GB. (Comcast is much more reasonable, with its 250 GB cap.)
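
For anyone who wants to check that arithmetic, here’s a minimal sketch (assuming a 30-day month, decimal gigabytes, and roughly 4 GB per movie):

```python
# Back-of-the-envelope: what a 20 GB monthly cap works out to in sustained
# throughput, and how many DVD-quality movies it covers. Assumptions: a
# 30-day month, decimal gigabytes, ~4 GB per movie.

GB = 10**9
SECONDS_PER_MONTH = 30 * 24 * 3600

cap_bytes = 20 * GB
sustained_kbps = cap_bytes * 8 / SECONDS_PER_MONTH / 1000
movies = cap_bytes / (4 * GB)

print(f"20 GB/month is about {sustained_kbps:.0f} kbps sustained")  # ~62 kbps
print(f"or about {movies:.0f} DVD-quality movie downloads")         # 5
```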

The US already has pathetic broadband speeds compared to much of the rest of the developed world. There are good reasons for that, of course, like our more dispersed population, but we don’t want to be taking steps backwards, and caps are certainly that. (And that’s not even getting into the important question of what “broadband” really means now, and what minimum speed a universal deployment should offer if the infrastructure investment is to stay useful into the future.)

Yes, network operators need to make money. Yes, someone has to pay for the bandwidth. Yes, customers with exceptionally high bandwidth usage should expect to pay for that usage. But the kind of caps that are being discussed are simply unreasonable, especially for a world that is leaning more and more heavily towards video.

A few months ago, comScore reported video metrics showing that 74% of the US Internet audience watched online video, consuming an average of 228 minutes of it per month. 35% of that was YouTube, so call that portion 320 kbps. The remainder looks to be mostly higher quality; Hulu makes a good reference point at 480-700 kbps, with its highest quality topping 1 Mbps, so for purposes of calculation, call it 700 kbps. Add it all up and you’re looking at about 1 GB of content delivered to the average video watcher each month.
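
Here’s the same back-of-the-envelope math as a small sketch; the 320 kbps and 700 kbps figures are the rough assumptions above, not measured bitrates:

```python
# Rough estimate of monthly video transfer for the average US video watcher:
# 228 minutes a month, ~35% of it YouTube at an assumed ~320 kbps, and the
# remainder at an assumed Hulu-like ~700 kbps.

GB = 10**9
MINUTES = 228

def gigabytes(share, kbps):
    seconds = MINUTES * 60 * share
    return seconds * kbps * 1000 / 8 / GB  # bits -> bytes -> GB

total = gigabytes(0.35, 320) + gigabytes(0.65, 700)
print(f"~{total:.2f} GB of video per month")  # roughly 1 GB
```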

Average page weights are on the rise; Keynote, an Internet performance measurement company, recently cited 750 KB in a study of media sites. I’d put the average page more in the 250-450 KB range, but I don’t dispute that heavier pages are where things are going (compare a January 2008 study). At the 750 KB weight, 1 GB of transfer buys you somewhere around 1,300-1,400 page views, i.e., roughly 45 pages per day.

A digital camera shot is going to be in the vicinity of 1.5 MB, but a photo on Flickr is typically in the 500 KB range, so you can comfortably view a photo gallery of five dozen shots every day in your 1 GB.

Email sizes are increasing. Assuming you get attachments, you’re probably looking at around 75 KB per email, on average. 1 GB will let you receive around 450 emails per day, but if you’re also downloading your spam, which runs around 95% of all mail, that leaves you about 20 legitimate messages per day.

If you’re a Vonage customer or the like, you’re looking at around 96 kbps, which works out to roughly 45 minutes of VoIP talk time per day in 1 GB of usage.
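
Putting the last few estimates in one place, here’s a small sketch you can rerun with your own figures. Each line assumes you spend the entire 1 GB on that single activity, and the per-item sizes are the rough ones cited above:

```python
# How far 1 GB a month stretches across everyday activities, if the whole
# gigabyte went to just that one activity. Item sizes are rough estimates.

GB = 10**9
DAYS = 30

per_item_bytes = {
    "web pages (750 KB each)": 750 * 1000,
    "Flickr photos (500 KB)":  500 * 1000,
    "emails (75 KB average)":   75 * 1000,
}

for activity, size in per_item_bytes.items():
    print(f"{activity}: ~{GB / size / DAYS:.0f} per day")

voip_bytes_per_minute = 96_000 / 8 * 60  # an assumed ~96 kbps VoIP stream
print(f"VoIP: ~{GB / voip_bytes_per_minute / DAYS:.0f} minutes per day")
```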

Now add your anti-virus updates, and your Windows and other app software updates. Add your online gaming (don’t forget your voice chat with that), your instant messaging, and other trivialities of Internet usage.

And good luck if you’ve got more than one computer in your household — which a substantial percentage of broadband households do. You can take those numbers and multiply them out by the number of users in your household.

A 5 GB cap is nothing short of pathetic. Casual users can easily run up against that kind of limit with the characteristics of today’s content, and families will be flat-out hosed. With content only getting more and more heavyweight, this situation is only going to get worse.

20 GB will probably suffice for single-person, casual-user households that don’t watch much video. But families and online entertainment enthusiasts will certainly need more, and the low caps violate, I think, reasonable expectations of what one can get out of a broadband connection.

Making users watch their usage is damaging to the entire emerging industry around rich Internet content. I respect the business needs of network operators, but caps are the wrong way to achieve their goals, and counterproductive in the long term.

User or community competence curve

I was pondering cloud application patterns today, and the following half-formed thought occurred to me: every new technology moves along some kind of competence curve, and where it sits on that curve determines the aggregate community knowledge of the technology, as well as the likely starting point for a new user adopting it. That position might not entirely correlate with its position on the hype cycle.

Over the lifetime of a technology, the community develops a library of design patterns and recipes. This is everything from broad wisdom like “MVC is probably the right pattern for a Web app” to trivia like “you can use Expando with Google App Engine’s user class to get most of what you want out of traditional session management”. And the base level of competence of the engineers grows in seemingly magical ways: the books get better, Googling answers gets easier, and the random question tossed over your shoulder to another team member has a higher chance of being usefully answered. As the technology ages, this aggregate knowledge fades, until, years down the road, the next generation of engineers ends up repeating the mistakes of the past.
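
As a concrete illustration of the kind of recipe I mean, here’s a rough sketch of that Expando trick against the App Engine Python SDK. The Session model, its key scheme, and the property names here are my own illustration, not something the SDK prescribes:

```python
# A session-like entity keyed to the logged-in user, on which you can hang
# arbitrary properties without declaring a fixed schema up front.
# Illustrative sketch only, not production code.

from google.appengine.ext import db
from google.appengine.api import users

class Session(db.Expando):
    user = db.UserProperty(required=True)

def get_session():
    user = users.get_current_user()  # assumes the request is authenticated
    key_name = 'session:%s' % user.user_id()
    session = Session.get_by_key_name(key_name)
    if session is None:
        session = Session(key_name=key_name, user=user)
    return session

# Usage: stash arbitrary per-user state, much like a traditional session.
session = get_session()
session.cart_items = 3            # dynamic properties, courtesy of Expando
session.last_page = '/checkout'
session.put()
```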

So far, we have very little aggregate community knowledge about cloud.

Cloud enables offshoring

The United States is a hotbed of cloud innovation and adoption, but cloud is also going to be a massive enabler of the offshoring of IT operations.

Peter Laird (an architect at Oracle) had an interesting blog post about a month ago on cloud computing mindshare across geographies, based on an analysis of traffic to his blog. And Pew Research’s cloud adoption study indicates that uptake of cloud apps among American consumers is very high. But where the users are isn’t what matters here; what matters is where the operational work ends up being done.

Today, most companies still do most of their IT operations locally (i.e., wherever their enterprise data centers are), even if they’ve sent functions like the help desk offshore. Most companies server-hug: their IT staff sits close to wherever the equipment is. But the trend is towards remote data centers (especially as the disparity in data center real estate prices between, say, New York City and Salt Lake City grows), and cloud accelerates that trend. Data centers themselves won’t move offshore, because of network latency, data protection laws, and the like, but a big Internet data center only employs about 50 people.

The future could look very much like the NaviSite model: local data centers staffed by small local teams who handle the physical hardware, with the monitoring, remote management, and software development for automation and other back-office functions handled offshore.

Being a hardware wrangler isn’t a high-paying job. In fact, a lot of hosting and Web 2.0 companies hire college students, part-time, to do it. So in making a move to cloud, we seem to be facilitating the further globalization of the IT labor market for the high-paying jobs.

Does architecture matter?

A friend of mine, upon reading my post on the cloud skills shift, commented that he thought that the role of the IT systems architect was actually diminishing in the enterprise. (He’s an architect at a media company.)

His reasoning was simple: hardware has become so cheap that IT managers no longer want to spend staff time on performance tuning, finding just the right configuration, or getting sizing and capacity planning exactly right.

Put another way: Is it cheaper to have a senior operations engineer on staff, or is it cheaper to just buy more hardware?

The one-size-fits-all nature of cloud may very well indicate the latter, for organizations for whom cutting-edge technology expertise does not drive competitive advantage.

Microsoft’s cloud strategy

The Industry Standard has an interesting and lengthy interview with Microsoft VP Amitabh Srivastava, discussing Microsoft’s cloud strategy.

It’s clear that Microsoft is aiming squarely at traditional businesses, much more than at the dot-coms and technology early adopters who have been the enthusiasts of cloud infrastructure to date.
