The broadband-caps disaster
The United States has now elected a President who has pledged universal broadband. On almost the same day, AT&T announced it would be following some of its fellow network operators into trialing metered broadband.
Broadband caps have been much more common in Europe, but the trend there is away from caps, not towards them. Caps stifle innovation, discouraging the development of richer Internet experiences and the convergence of voice and video with data.
AT&T’s proposed caps start at 20 GB of data transfer per month. That’s equivalent to about 64 kbps of sustained throughput — barely the kind of speed a modem can manage. Or, put another way, that’s five 4 GB, DVD-quality movie downloads. And those caps are generous compared to Time Warner’s — AT&T proposes 20-150 GB caps, vs. TW’s 5-40 GB. (Comcast is much more reasonable, with its 250 GB cap.)
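For the curious, the arithmetic behind those figures is easy to check. Here's a rough sketch (the cap sizes are the ones mentioned above; everything else is simple unit conversion and rounding):

```python
SECONDS_PER_MONTH = 30 * 24 * 3600           # roughly 2.59 million seconds

def sustained_kbps(cap_gb):
    """Average throughput you could sustain 24x7 under a monthly cap."""
    return cap_gb * 1e9 * 8 / SECONDS_PER_MONTH / 1000

def dvd_movies(cap_gb, movie_gb=4):
    """How many ~4 GB DVD-quality downloads fit under the cap."""
    return cap_gb / movie_gb

for cap in (5, 20, 40, 150, 250):
    print("%3d GB cap: ~%3.0f kbps sustained, or %5.1f movie downloads"
          % (cap, sustained_kbps(cap), dvd_movies(cap)))
# The 20 GB cap works out to roughly 62 kbps sustained (modem territory),
# or five 4 GB movie downloads.
```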
The US already has pathetic broadband speeds compared to much of the rest of the developed world. There are lots of good reasons for that, of course, like our broader population spread, but we don’t want to be taking steps backwards, and caps are certainly that. (And that’s not even getting into the important question of what “broadband” really constitutes now, and what, in a universal deployment, should be the minimum speed necessary to justify the infrastructure investment against future sustainable usefulness.)
Yes, network operators need to make money. Yes, someone has to pay for the bandwidth. Yes, customers with exceptionally high bandwidth usage should expect to pay for that usage. But the kind of caps that are being discussed are simply unreasonable, especially for a world that is leaning more and more heavily towards video.
A few months ago, comScore reported video metrics showing 74% of the US Internet audience watched video, consuming an average of 228 minutes of video. 35% of that was YouTube, so let’s call that 320 kbps. It looks like the remainder is mostly higher-quality. Hulu makes a good reference — 480 kbps – 700 kbps, with the highest quality topping 1 Mbps. For purposes of calculation, let’s call it 700 kbps. Add it all up and you’re looking at about 1 GB of content delivered to the average video-watcher.
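That total is easy to reconstruct. A rough sketch, using the viewing minutes and bitrates cited above (the 35/65 split between YouTube and higher-quality video follows the estimate in the paragraph):

```python
MINUTES_WATCHED = 228                  # comScore's average per video-watcher
YOUTUBE_SHARE, YOUTUBE_KBPS = 0.35, 320
OTHER_SHARE, OTHER_KBPS = 0.65, 700    # Hulu-like quality as a proxy

def gb_for(minutes, share, kbps):
    """GB of transfer for a share of viewing time at a given bitrate."""
    return minutes * 60 * share * kbps * 1000 / 8 / 1e9

total_gb = (gb_for(MINUTES_WATCHED, YOUTUBE_SHARE, YOUTUBE_KBPS) +
            gb_for(MINUTES_WATCHED, OTHER_SHARE, OTHER_KBPS))
print("~%.2f GB for the average video-watcher" % total_gb)   # about 0.97 GB
```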
Average page weights are on the rise; Keynote, an Internet performance measurement company, recently cited an average page weight of 750k in a study of media sites. I’d probably put the average page more in the 250-450k range, but I don’t dispute that heavier pages are where things are going (compare a January 2008 study). At that kind of weight, you can view around 1,500 pages in 1 GB of transfer — i.e., about 50 pages per day.
A digital camera shot is going to be in the vicinity of 1.5 MB, but a photo on Flickr is typically in the 500k range, so you can comfortably view a photo gallery of five dozen shots every day in your 1 GB.
Email sizes are increasing. Assuming you get attachments, you’re probably looking at around 75k per email, as an average. 1 GB will let you get around 450 emails per day, but if you’re downloading your spam, at the 95% mark, that gets you about 20 legit messages per day.
If you’re a Vonage customer or the like, you’re looking at around 96 kbps, or around 45 minutes of VoIP talk time per day, in 1 GB of usage.
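If you want to check the per-day figures in the last few paragraphs, here's a quick sketch that divides a 1 GB monthly budget into a daily allowance and applies the sizes estimated above (the exact page weight is my assumption, chosen to match the roughly-1,500-pages-per-GB figure):

```python
DAILY_BUDGET = 1e9 / 30                 # 1 GB per month, ~33 MB per day

PAGE_BYTES   = 650e3                    # heavier media-site pages
PHOTO_BYTES  = 500e3                    # typical Flickr-sized photo
EMAIL_BYTES  = 75e3                     # average email with attachments
VOIP_BYTES_PER_MIN = 96e3 / 8 * 60      # a 96 kbps VoIP stream

print("pages/day:   %d" % (DAILY_BUDGET / PAGE_BYTES))    # ~51
print("photos/day:  %d" % (DAILY_BUDGET / PHOTO_BYTES))   # ~66
emails = DAILY_BUDGET / EMAIL_BYTES
print("emails/day:  %d (~%d legit if 95%% is spam)" % (emails, emails * 0.05))
print("VoIP min/day: %d" % (DAILY_BUDGET / VOIP_BYTES_PER_MIN))  # ~46
```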
Now add your anti-virus updates, and your Windows and other app software updates. Add your online gaming (don’t forget your voice chat with that), your instant messaging, and other trivialities of Internet usage.
And good luck if you’ve got more than one computer in your household — which a substantial percentage of broadband households do. You can take those numbers and multiply them out by the number of users in your household.
A 5 GB cap is nothing short of pathetic. Casual users can easily run up against that kind of limit with the characteristics of today’s content, and families will be flat-out hosed. With content only getting more and more heavyweight, this situation is only going to get worse.
20 GB will probably suffice for single-person, casual-user households that don’t watch much video. But families and online entertainment enthusiasts will certainly need more, and the low caps violate, I think, reasonable expectations of what one can get out of a broadband connection.
Making users watch their usage is damaging to the entire emerging industry around rich Internet content. I respect the business needs of network operators, but caps are the wrong way to achieve their goals, and counterproductive in the long term.
User or community competence curve
I was pondering cloud application patterns today, and the following half-formed thought occurred to me: All new technologies go through some kind of competence curve — where they are on it determines the aggregate community knowledge of that technology, and the likely starting point for a new user adopting that technology. This might not entirely correlate to its position on the hype cycle.
The community develops a library of design patterns and recipes, over the lifetime of the technology. This is everything from broad wisdom like, “MVC is probably the right pattern for a Web app”, to trivia like “you can use Expando with Google App Engine’s user class to get most of what you want out of traditional session management”. And the base level of competence of the engineers grows in seemingly magical ways — the books get better, Googling answers gets easier, the random question tossed over your shoulder to another team member has a higher chance of being usefully answered. As the technology ages, this aggregate knowledge fades, until years down the road, the next generation of engineers ends up repeating the mistakes of the past.
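To make that second bit of trivia concrete, here's a minimal sketch of the sort of recipe I mean (illustrative only; the UserSession model and its attributes are hypothetical), using App Engine's schema-less Expando model, keyed by the logged-in user, as a stand-in for traditional session state:

```python
# Illustrative sketch only: UserSession and its attributes are made up.
from google.appengine.ext import db
from google.appengine.api import users

class UserSession(db.Expando):
    """Schema-less per-user bag; arbitrary attributes can be set at runtime."""
    pass

def current_session():
    # Assumes a logged-in user; fetch their session entity, or create one.
    key = users.get_current_user().user_id()
    return UserSession.get_by_key_name(key) or UserSession(key_name=key)

session = current_session()
session.cart_items = 3              # any attribute name works on an Expando
session.last_page = '/checkout'
session.put()
```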
So far, we have very little aggregate community knowledge about cloud.
Cloud enables offshoring
The United States is a hotbed of cloud innovation and adoption, but cloud is also going to be a massive enabler of the offshoring of IT operations.
Peter Laird (an architect at Oracle) had an interesting blog post about a month ago on cloud computing mindshare across geographies, analyzing traffic to his blog. And Pew Research’s cloud adoption study indicates that uptake of cloud apps among American consumers is very high. But where the users are doesn’t matter; what matters for offshoring is where the operations work gets done.
Today, most companies still do most of their IT operations locally (i.e., wherever their enterprise data centers are), even if they’ve sent functions like help desk offshore. Most companies server-hug: their IT staff is close to wherever the equipment is. But the trend is toward remote data centers (especially as the disparity in data center real estate prices between, say, New York City and Salt Lake City grows), and cloud accelerates it further. Data centers themselves won’t move offshore, thanks to network latency, data protection laws, and the like, and that won’t change. But a big Internet data center only employs about 50 people.
The future could look a lot like the NaviSite model: local data centers staffed by small local teams who handle the physical hardware, with all the monitoring, remote management, software development for automation, and other back-office functions handled offshore.
Being a hardware wrangler isn’t a high-paying job. In fact, a lot of hosting and Web 2.0 companies hire college students, part-time, to do it. So in making a move to cloud, we seem to be facilitating the further globalization of the IT labor market for the high-paying jobs.
Microsoft’s cloud strategy
The Industry Standard has an interesting and lengthy interview with Microsoft VP Amitabh Srivastava, discussing Microsoft’s cloud strategy.
It’s clear that Microsoft’s aim is squarely at the traditional enterprise, much more so than at the dot-coms and technology early adopters who have been the enthusiasts of cloud infrastructure to date.
Domain names and Kentucky gambling
Last month, the state of Kentucky issued a seizure order for 141 domain names that it claimed were being used in connection with illegal gambling. (Full text of the order here.)
It’s a remarkable order. It asserts that probable cause exists to believe that the domain names are being used in connection with illegal gambling (despite the fact that some are parked domains, which would clearly indicate otherwise), and that as such, Kentucky is entitled to require the registrars to immediately transfer the registration for those domains to Kentucky or some other entity that it designates.
WebProNews has published statements from Governor Steve Beshear and his deputy communications director Jill Midkiff. The governor essentially claimed that illegal online gambling harms Kentucky’s legal gambling businesses, particularly the lottery and horse racing. But regardless of why it was done, it’s still a chilling potential precedent.
Yesterday, the judge in the case denied a dismissal, setting a forfeiture hearing for next month. He also stated that the sites would have 30 days to voluntarily block access by Kentucky users to avoid further legal problems. MarkMonitor (a provider of managed domain name and brand protection solutions) has posted the full text of the opinion, along with the key relevant questions raised by this case.
This case gets right to the heart of the question, “Who controls the Internet?” If Kentucky succeeds, it will fundamentally change our understanding of jurisdiction with regard to domain names, with broad ramifications both within the United States and internationally.
Akamai expands its advertising solutions
Akamai made an advertising-related announcement today, introducing something it calls Advertising Decision Solutions, and stating that it has agreed to acquire acerno for $95 million in cash.
acerno (which seems to belong to the e.e. cummings school of brand naming) is a small retailer-focused advertising network, but the reason Akamai acquired it is acerno’s data cooperative, in which retailers share shopping data. That data is used to build a predictive model: if a customer bought X, it’s likely they will also be shopping for Y and Z, and therefore you might want to show them related ads.
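To illustrate the idea (a toy sketch, not acerno's actual model; the baskets and function names are made up), even a simple co-occurrence count over pooled purchase histories is enough to say that buyers of X also tend to buy Y and Z:

```python
from collections import Counter
from itertools import permutations

# Hypothetical pooled purchase histories from cooperating retailers.
baskets = [
    {"camera", "sd_card", "tripod"},
    {"camera", "sd_card"},
    {"camera", "camera_bag"},
    {"laptop", "laptop_sleeve"},
]

# Count how often each ordered pair of products appears in the same basket.
co_bought = Counter()
for basket in baskets:
    for a, b in permutations(basket, 2):
        co_bought[(a, b)] += 1

def related_products(product, n=2):
    """Products most often bought alongside the given one."""
    scores = Counter({b: c for (a, b), c in co_bought.items() if a == product})
    return [item for item, _ in scores.most_common(n)]

print(related_products("camera"))   # e.g. ['sd_card', 'tripod']
```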
Although Akamai states they’ll continue to operate the acerno business, don’t expect them to really push that ad network; Akamai knows where its bread is buttered and isn’t going to risk competing with the established large ad networks, which number amongst Akamai’s most significant customers. Instead, Akamai intends to use the acerno data and its data cooperative model to enhance the advertising-related capabilities that it offers to its customers.
This complements the Advertising Decision Solutions announcement. Basically, Akamai appears ready to begin exploiting its treasure trove of user behavior data, and to take advantage of the fact that it delivers content on behalf of publishers as well as ad networks. That position lets Akamai insert elements such as cookies into the delivery, enabling cooperating Akamai customers to share data without having to manually set up such cooperation with each of their partners.
This expansion of Akamai’s product portfolio is a smart move. With the cost of delivery dropping through the floor, Akamai needs new, high-value, high-margin services to offer to customers, as well as services that tie customers more closely to Akamai, creating a stickiness that will make customers more reluctant to switch providers to obtain lower costs. Note, however, that Akamai already dominates the online retail space; the new service probably won’t make much of a difference in a retail customer’s decision about whether or not to purchase Akamai services. It will, however, help them defend and grow their ad network customers, and help them maintain a hold on core website delivery for the media and entertainment space. (This is true even in the face of video delivery moving to low-cost CDNs, since you don’t need to deliver the site and the video from the same CDN.)
I think this move signals that we’re going to see Akamai move into adjacent markets where it can leverage its distributed computing platform, its aggregated data (whether about users, content, systems, or networks), or its customer ecosystem. Because these kinds of services will tend to be decoupled from the actual price of bit delivery, they should also help Akamai broaden its revenue streams.
Power usage effectiveness
Two interesting blog posts:
Chirag Mehta: Greening the Data Centers
Microsoft: Charging Customers for Power Usage in Microsoft Data Centers
Also, Google has publicly released its data center efficiency measurements, as part of its documentation of its commitment to sustainable computing. What Google doesn’t say is the degree to which its green efforts impact the availability of its facilities. Google can afford lower individual facility reliability, because its smart distributed infrastructure can seamlessly adapt to failure. Most enterprises don’t have that luxury.
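For reference, PUE (power usage effectiveness) is simply the ratio of total facility power to the power that actually reaches the IT equipment; 1.0 is the theoretical ideal. A trivial sketch with made-up numbers:

```python
def pue(total_facility_kw, it_equipment_kw):
    """Power usage effectiveness: total power in over power delivered to IT gear."""
    return total_facility_kw / it_equipment_kw

# Hypothetical facility: 10 MW total draw, of which 8 MW reaches the servers.
print("PUE = %.2f" % pue(10000, 8000))   # 1.25; very efficient for 2008, when
                                         # typical enterprise facilities ran ~2.0
```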
Sequoia warns its portfolio companies
TechCrunch has Sequoia Capital’s presentation of doom, 56 slides that explain how we got into the current economic crisis, what’s happening now and in the near future, how technology spending is impacted, and what tech start-ups are going to need to do in order to survive.
TechCrunch has also linked to emails from angel investor Ron Conway, as well as Benchmark Capital, providing similar advice to start-ups. It comes down to “have cash on hand” — cut costs and raise money now, because things are not going to get better anytime soon.
I’ve heard theorizing that some of the wealthy investors pulling money out of the stock market could consider putting it in venture funds instead. But a recent Fortune article contradicts that theory: Venture firms are bracing for a cash crunch. Investors apparently want liquidity, and VC funds don’t offer that. So we’re going to be looking at a tight funding environment for a while, and that’s going to have a lot of ripple effects on the emerging cloud computing industry.
Blog tag
My colleague Thomas Otter tagged me with a “name a blog I like” meme today.
My two most frequently read and (during the period when Gartner policy didn’t forbid it, and soon again, now that the policy has changed back to openness) most commented-upon blogs are actually peripheral to my coverage area, and date back to my long-time research interests (all the way back to college).
The first is the blog of Raph Koster, one of the old hands of virtual worlds and the author of A Theory of Fun; he has a great deal to say about the future of online games and social worlds.
The second is Terra Nova, a collaborative, academically oriented blog on online gaming and virtual worlds. It contains a lot of thoughtful, sometimes quantitative, discussion of the past, present, and future of these worlds.
My favorite blog is one that is no longer updated: Creating Passionate Users. It’s about creating better user experiences, whether through technology or other means. It’s all worth reading.
Other than that, I like marketing guru Seth Godin. He’s usually got a bunch of interesting ideas on consumers, businesses, and the art of marketing and selling.
I’m also fond of Joel on Software, whose musings on software development, gadgetry, and technology are always interesting to read and periodically thought-provoking.
Finally, an emergency room physician going by the handle of figent figary writes compellingly about her ER experiences. Unlike the other authors on this blogroll, she hasn’t written a book — but she should.
I’m tagging Eric Goodness, Nick Jones, and Allen Weiner for their list of blogs they like.
The power of denial
The power of denial is particularly relevant this week, as we live through the crisis that is currently gripping Wall Street.
I’ve been an analyst for more than eight years, now, and during those years, I’ve seen some stunning examples of denial in action. From tightly held and untrue beliefs about the market and the competition, to unrealistic expectations of engineers, to desperate hopes pinned on uncaring channel partners, to idealistic views of buyers, denial is the thing that people cling to when the reality of the situation is too overwhelmingly awful to acknowledge.
I’m not a doomsayer, and I think that we’re living in a phenomenally exciting time for IT innovation. But innovation disrupts old models, and I see numerous dangers in the market that my vendor clients frequently like to downplay.
For instance:
I’m not a believer in an oversupply of colocation space in the market right now (although this is still primarily a per-city market, so the supply/demand balance really varies with location); we still see prices creeping up. But I do believe that much of the colocation demand is transient: enterprises that unexpectedly ran out of space and power ahead of forecast shove their equipment into colocation for a year or three while they figure out what to do next (which is often building a data center of their own). Overbuilding is still a very real danger.
I also warned of the changes that blades and other high-density hardware would bring to the colocation industry, back in 2001, and over the seven years that have passed since I wrote that note, it’s all come true. Most of the large colocation companies have shifted their models accordingly, but regional data centers are often still woefully underprepared for this change.
Moving to the topic of hosting, I warned of the perils of capacity on demand for computing power all the way back in 2001. Although we’ve seen a decline in overprovisioning in managed hosting over the years, severe overprovisioning remains common, and the market has been buttressed by lots of high-growth customers. But tolerance for overprovisioning is dropping rapidly with the advent of virtualized, burstable capacity, and an increasing number of customers have slow-growth installations. Every managed hoster whose revenue stream depends on customers requiring capacity faster than Moore’s Law can obsolete their servers still has some vital thinking to do.
Making that problem worse is that the expensive part of servicing a hosting customer is the first server of a type they deploy, not the N more that they horizontally scale. Getting that box to stable golden bits is the tough part that eats up all your expensive people-time. And everyone who is thinking that their utility hosting platform is going to be great for picking up high-margin revenues off scaling front-end webservers needs to have another think. Given the dirt-cheap CDN prices these days, and ever-more-powerful and cost-effective application delivery controllers and caches, scaling at the front-end webserver tier is going the way of the dodo.
And while we’re talking about CDNs: Two years ago, I warned our clients that CDN prices were headed off a cliff. Margins were cushioned by the one-time discontinuity in server prices caused by the advent of multi-core chips, but prices have spent much of that time in free-fall, and although the floor’s now stable, average selling prices continue to decline and the market continues to commoditize, even as adoption of rich media shoots through the roof. I’m currently writing a research note updating our market predictions, because our clients have had a lot of interesting things to say about CDN purchases of late… stay tuned.
If you’ve got anything you want to share publicly about where you’re going with colocation, hosting, or your CDN purchases, and your thoughts on these trends, please do comment!