Monthly Archives: September 2008

Blog tag

My colleague Thomas Otter tagged me with a "name a blog I like" meme today.

My two most frequently-read and (during the period when Gartner policy didn’t forbid it, and shortly, now that the policy has changed back to openness) most-commented-upon blogs are actually peripheral to my coverage area, and date back to my long-time research interests (all the way back to college).

The first is the blog of Raph Koster, one of the old hands of virtual worlds and the author of a book called The Theory of Fun; he has a great deal to say about the future of online games and social worlds.

The second is Terra Nova, a collaborative academically-oriented blog on online gaming and virtual worlds. It contains a lot of thoughtful, sometimes quantitative, discussion of the past, present, and future of these worlds.

My favorite blog is one that is no longer updated: Creating Passionate Users. It’s about creating better user experiences, whether through technology or other means. It’s all worth reading.

Other than that, I like marketing guru Seth Godin. He’s usually got a bunch of interesting ideas on consumers, businesses, and the art of marketing and selling.

I’m also fond of Joel on Software, whose musings on software development, gadgetry, and technology are always interesting to read and periodically thought-provoking.

Finally, an emergency room physician going by the handle of figent figary writes compellingly about her ER experiences. Unlike the other authors on this blogroll, she hasn’t written a book — but she should.

I’m tagging Eric Goodness, Nick Jones, and Allen Weiner for their list of blogs they like.

Heavy experiments with Amazon

Scott Penberthy of online video provider Heavy has an interesting blog post about trying to replace Rackspace and Akamai with Amazon web services — substituting S3 for Rackspace SAN storage, and direct delivery out of S3 for Akamai CDN services. Not surprisingly, the S3 performance fell well below Akamai performance, but they managed to achieve significant storage cost savings.
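
For the curious, "direct delivery out of S3" just means marking the objects world-readable and handing users the bucket URLs, with no CDN in front. Here's a minimal sketch of that pattern using today's boto3 SDK (not whatever tooling Heavy actually used); the bucket and file names are made up for illustration, and it assumes the bucket permits public ACLs.

```python
import boto3  # AWS SDK for Python

s3 = boto3.client("s3")

# Hypothetical bucket and object names, for illustration only.
bucket = "example-heavy-media"
key = "clips/intro-1234.flv"

# Upload the file and mark it world-readable so clients can fetch it straight from S3.
s3.upload_file(
    "local/intro-1234.flv",
    bucket,
    key,
    ExtraArgs={"ACL": "public-read", "ContentType": "video/x-flv"},
)

# Viewers then pull the object directly from S3, with no CDN in front of it.
print(f"https://{bucket}.s3.amazonaws.com/{key}")
```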

Trion World gets a $70m C round

MMOG developer and publisher Trion World Network just closed a $70 million Series C round, which brings its total raised since its inception in 2006 to over $100 million.

This might seem like a staggering amount of money for a company with two games in development but none published yet. It’s trading on the name of its founder, Jon Van Caneghem, of Might and Magic fame. But it’s not that much money if you realize that games are now being made on movie-sized budgets, and MMOGs are exceptionally expensive to develop.

Dan Hunter had an interesting piece on the Terra Nova blog last year regarding the financials of MMOG development, drawing on an Interplay prospectus for an MMOG based on Fallout. That cited a cost of $75m, including a launch budget of $30m, which presumably includes marketing, manufacturing, and server deployment.

MMOGs are not efficient beasts, and by their nature, they are also prone to flash crowds and highly variable capacity needs. Most scale in a highly unwieldy manner, compounding the basic inefficient utilization of computing capacity. Utility computing infrastructure has huge potential to reduce the overbuy of capacity, but colocation on their own hardware is nigh-universally the way that such companies deploy their games.

Nicholas Carr estimated back in 2006 that an avatar in Second Life has a carbon footprint equivalent to that of a Brazilian. Last year, I heard, from a source I’d consider to be pretty authoritative, that an avatar in Second Life actually has a carbon footprint larger than that of its typical real-world counterpart (usually an affluent American).

This is why Internet data center providers drool at MMOG companies.

Who hosts Warhammer Online?

With the recent launch of EA/Mythic’s Warhammer Online MMORPG comes my usual curiosity about who’s providing the infrastructure.

Mythic has stated publicly that all of the US game servers are located in Virginia, near Mythic’s offices. A couple of traceroutes seem to indicate that they’re in Verizon, almost certainly in colocation (managed hosting is rare for MMOGs), and seem to have purely Verizon connectivity to the Internet. The webservers, on the other hand, look to be split between Verizon, and ThePlanet in Dallas. FileBurst (a single-location download hosting service) is used to serve images and cinematics.
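
For anyone who wants to play the same guessing game at home, the sleuthing is nothing fancier than forward and reverse DNS plus a traceroute. A rough sketch in Python; the hostname below is a stand-in for whatever game or patch server you're probing, not a real Warhammer address.

```python
import socket
import subprocess

# Hypothetical hostname: substitute the server you actually want to probe.
host = "patch.war.example.com"

addr = socket.gethostbyname(host)
print(f"{host} resolves to {addr}")

# Reverse DNS on the address (and on the last few traceroute hops) often names the
# carrier, e.g. something under a Verizon domain, which is what suggests Verizon colo.
try:
    print("reverse DNS:", socket.gethostbyaddr(addr)[0])
except socket.herror:
    print("no reverse DNS entry")

# 'traceroute' on Linux/macOS ('tracert' on Windows) shows the path and upstream carriers.
print(subprocess.run(["traceroute", addr], capture_output=True, text=True).stdout)
```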

During the beta, Mythic used BitTorrent to serve files. With the advent of full release, it doesn’t appear that they’re depending on peer-to-peer any longer — unlike Blizzard, for instance, which uses public P2P in the form of BitTorrent for its World of Warcraft updates, trading lower cost for much higher levels of user frustration. MMO updates are probably an ideal case for P2P file distribution — Solid State Networks, a P2P CDN, has done well by that — and with hybrid CDNs (those combining a traditional distributed model with P2P) becoming more commonplace, I’d expect to see that model more often.

However, I’m not keen on either single data center locations or single-homing, for anything that wants to be reliable. I also believe that gaming — a performance-sensitive application — really ought to run in a multi-homed environment. My favorite “why you should use multiple ISPs, even if you’re using a premium ISP that you love” anecdote to my clients is an observation I made while playing World of Warcraft a few years ago. WoW originally used just AT&T’s network (in AT&T colocation). Latency was excellent — most of the time. Occasionally, you’d get a couple of seconds of network burp, where latency would spike hugely. If you’re websurfing, this doesn’t really impact your experience. If you’re playing an online game, you can end up dead. When WoW switched to Internap for the network piece (remaining in AT&T colo), overall latencies went up — but the latencies were still well below the threshold of problematic performance, and more importantly, the latencies were rock-solidly in a narrow window of variability. (This is the same reason multi-homed CDNs with lots of route diversity deliver better consistency of user experience than single-carrier CDNs.)

Companies like Fileburst, by the way, are going to be squarely in the crosshairs of the forthcoming Amazon CDN. Fileburst will do 5 TB of delivery at $0.80 per GB — $3,985/month. At the low end, they’ll do 100 GB or less at $1/GB. The first 100 MB of storage is free, then it’s $2/MB. They’ve got delivery infrastructure at the Equinix IBX in Ashburn (Northern Virginia, near DC) and extensive peering, but any other footprint is vague (they say they have a six-location CDN service, but it’s not clear whether it’s theirs or if they’re reselling).

If Amazon’s CDN pricing is anything like the S3 pricing, they’ll blow the doors off those prices. S3 is $0.15/GB for space and $0.17/GB for the first 10 TB of data transfer. So delivering 5 TB worth of content out of a 1 GB store would cost me $5,785/month with Fileburst, and about $850 with Amazon S3. Even if the CDN premium on data transfer is, say, 100%, that’d still be only $1,700 with Amazon.
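
A quick back-of-envelope check on that math (decimal units, with 1 GB = 1000 MB, and treating the Fileburst 5 TB tier as a flat $3,985 package):

```python
# Monthly cost comparison using the 2008 list prices quoted above.
DELIVERY_GB = 5000   # 5 TB of delivery per month
STORE_MB = 1000      # a 1 GB store

# Fileburst: $3,985 for the 5 TB delivery tier; first 100 MB of storage free, then $2/MB.
fileburst = 3985 + max(STORE_MB - 100, 0) * 2
print(f"Fileburst:             ${fileburst:,.0f}/month")    # $5,785

# Amazon S3: $0.15/GB-month for storage, $0.17/GB for the first 10 TB of transfer.
s3 = (STORE_MB / 1000) * 0.15 + DELIVERY_GB * 0.17
print(f"S3 direct:             ${s3:,.0f}/month")            # ~$850

# Even with a hypothetical 100% CDN premium on the transfer rate, Amazon wins easily.
s3_cdn = (STORE_MB / 1000) * 0.15 + DELIVERY_GB * 0.17 * 2
print(f"S3 + 100% CDN premium: ${s3_cdn:,.0f}/month")        # ~$1,700
```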

Amazon has a key cloud trait — elasticity, basically defined as the ability to scale to zero (or near-zero) as easily as scaling to bogglosity. It’s that bottom end that’s really going to give them the potential to wipe out the zillion little CDNs that primarily have low-volume customers.

Oracle in the cloud… sort of

Today’s keynote at Oracle World mentioned that Oracle’s coming to Amazon’s EC2 cloud.

The bottom line is that you can now get some Oracle products, including the Oracle 11g database software, bundled as AMIs (Amazon machine images) for EC2 — i.e., ready-to-deploy — and you can license these products to run in the cloud. Any sysadmin who has ever personally gone through the pain of trying to install an Oracle database from scratch knows how frustrating it can be; I’m curious how much the task has or hasn’t been simplified by the ready-to-run AMIs.
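
For the mechanics-curious: launching one of these prebuilt images is just an ordinary EC2 instance launch. Here's a minimal sketch using today's boto3 SDK (the 2008-era equivalent was the EC2 API command-line tools); the AMI ID, instance type, and key pair name are placeholders, not the actual Oracle-published image.

```python
import boto3  # AWS SDK for Python

ec2 = boto3.client("ec2", region_name="us-east-1")

# Placeholder AMI ID: substitute the ID of the Oracle-published database AMI.
response = ec2.run_instances(
    ImageId="ami-00000000",
    InstanceType="m1.large",   # a 2008-era instance size; pick whatever the image targets
    MinCount=1,
    MaxCount=1,
    KeyName="my-keypair",      # an existing EC2 key pair, for SSH access after boot
)

print("launched", response["Instances"][0]["InstanceId"])
```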

On the plus side, this is going to address the needs of those companies who simply want to move apps into the cloud, without changing much if anything about their architecture and middleware. And it might make a convenient development and testing platform.

But simply putting a database on cloud infrastructure doesn’t make it a cloud database. Without that crucial distinction, what are the compelling economics or business value-add? It’s cool, but I’m having difficulty thinking of circumstances under which I would tell a client, yes, you should host your production Oracle database on EC2, rather than getting a flexible utility hosting contract with someone like Terremark, AT&T, or Savvis.

How not to video-enable your site

Businesses everywhere are video-enabling their websites. Over the past year, I’ve handled a ton of inquiries from Gartner clients whose next iteration of their B2C websites included video. Given what I cover (Internet infrastructure), most of the queries I handled involved how to deliver that video in a cost-effective and high-performance way. Most enterprises hope that a richer experience will help drive traffic — but it’s also possible that a poor implementation will lead to user frustration.

I was trying to do something pretty simple tonight — find out how late a local TGI Friday’s was open. This entailed going to the Friday’s website, typing a zip code into the store locator box, and getting some results. Or that was the theory, anyway. (It turned out that they didn’t have the hours listed, but this was the least of my problems.)

Most people interact with restaurant websites in highly utilitarian ways — find a location, get driving directions, make reservations, check out a menu, and so forth. That means the consumer wants to get in and out, but still needs an attractive, branded experience.

Unfortunately, restaurant sites seem to repeatedly commit the sin of being incredibly heavyweight, without optimization. Indeed, an increasing number of restaurant sites seem to be presented purely in Flash — where a mobile user, i.e., someone who is looking for a place to eat right now, can’t readily get the content.

But the Friday’s site is extraordinarily heavyweight — giving the home page’s URL to Web Page Analyzer showed a spectacular 95 HTTP requests encompassing a page weight of 2,254,607 bytes — more than 2 MB worth of content! The front page has multiple embedded Flash files — complete with annoying background noise (not even music). The store locator has video, plus a particularly heavyweight design; ditto the rest of the site. It’s attractive, but it takes forever to load. The front page alone takes about 30 seconds to load under good conditions with bandwidth equivalent to a T-1. Since Friday’s does not seem to use a CDN, there’s nothing smoothing out spiky network conditions or decreasing latency by serving content closer to the edge, so on a practical level, with some spiky latencies between me and their website, it took several minutes to get the front page loaded to the point of usability. (I’m on a 1.1 Mbps SDSL connection.) And yes, in the end, I decided to go to another restaurant.
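
A rough sanity check on that 30-second figure: the raw transfer time for 2,254,607 bytes over a T-1 is already almost 12 seconds before any per-request overhead. (The 150 ms per request used below is an assumption for illustration, not a measurement.)

```python
# Back-of-envelope load-time estimate for the page weight measured above.
PAGE_BYTES = 2_254_607    # total page weight reported by Web Page Analyzer
REQUESTS = 95             # number of HTTP requests on the page
T1_BPS = 1_544_000        # T-1 line rate, in bits per second

transfer_s = PAGE_BYTES * 8 / T1_BPS
print(f"raw transfer time: {transfer_s:.1f} s")   # ~11.7 s

# Each request also pays connection/round-trip overhead; browsers overlap some of it,
# but at an assumed ~150 ms per request the total lands in ~30 s territory.
overhead_s = REQUESTS * 0.150
print(f"plus ~{overhead_s:.0f} s of per-request overhead")
```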

HCI studies pretty much say that you really need your page load times at 8 seconds or less. Many of my clients are trying for 4 seconds or less. A 250K page weight is common these days, with many B2C sites as much as double that, but 2 MB is effectively unusable for the average broadband user.

Businesses: Don’t be seduced by beautiful designs with unreasonable load times. When you test out new designs, make sure to find out what the experience is like for the average user, as well as what happens in adverse network conditions (which can be simulated by products from vendors like Shunra). And if you’re doing heavyweight pages, you should really, really consider using a CDN for your delivery.

Amazon gets into the CDN business

Unsurprisingly, Amazon is getting into the CDN business. (They’re taking notification sign-ups but it’s still in private beta.)

Content delivery is a natural complement to S3 and EC2. There’s already been use and abuse of S3 as a “ghetto CDN”, and at least one commercial hosting provider (Voxel) already offers a productized S3-based CDN. If you’re an EC2 or an S3 customer, chances are high that you’ve got significant static content traffic suited to CDN delivery. Amazon is just gluing together the logical pieces, and like you’d expect, your content on their CDN will reside in S3.

Basic content delivery services can practically be thought of as nothing more than value-added bandwidth (or value-added storage, if you want to think of it that way). Chances are very high that every major carrier, not to mention every major provider of distributed computing services (i.e., infrastructure clouds), is going to end up in the CDN business sooner or later.

GigaOm and Dan Rayburn have more details about the announcement, and come to similar conclusions: Despite how badly the stock market is beating up on Akamai in the wake of this announcement, this really has very little impact on them. I concur with that bottom line.

I noted last year that the CDN market has bifurcated. Amazon’s new offering is going to squarely target the commoditized portion of the market. Of the existing CDNs, it will impact Level 3 and the smaller no-frills CDNs the most. It will probably also have a minor impact on Limelight (which has a significant percentage of commodity CDN traffic), but basically negligible impact upon Akamai, whose customer base is tilting more and more to the high end of this business.

Just like EC2 and S3 have, this new Amazon service is also going to create a market for overlay value-add companies — people who provide easier-to-use interfaces, analytics, and so on, over the Amazon offering. I’d expect to see some of the existing overlay companies provide management toolsets for the new service, and it will probably prompt some hosters to offer CDN services built on top of the Amazon platform.

Amazon’s entry, combining an elastic model with what at this point can reasonably be considered proven scalable infrastructure expertise, constitutes further market expansion, and supports my fundamental belief that CDNs are increasingly going to entirely dominate the front-end webserving tier. Delivery is becoming so cheap for the masses that there’s very little reason to bother with your own front-end infrastructure.

The power of denial

The power of denial is particularly relevant this week, as we live through the crisis that is currently gripping Wall Street.

I’ve been an analyst for more than eight years, now, and during those years, I’ve seen some stunning examples of denial in action. From tightly held and untrue beliefs about the market and the competition, to unrealistic expectations of engineers, to desperate hopes pinned on uncaring channel partners, to idealistic views of buyers, denial is the thing that people cling to when the reality of the situation is too overwhelmingly awful to acknowledge.

I’m not a doomsayer, and I think that we’re living in a phenomenally exciting time for IT innovation. But innovation disrupts old models, and I see numerous dangers in the market that my vendor clients frequently like to downplay.

For instance:

I’m not a believer in an oversupply of colocation space in the market right now (although this is still primarily a per-city market, so the supply/demand balance really varies with location); we still see prices creeping up. But I do believe that much of the colocation demand is transient: enterprises that unexpectedly run out of space and power ahead of forecast shove their equipment into colocation for a year or three while figuring out what to do next (which is often building a data center of their own). Overbuilding is still a very real danger.

I also warned of the changes that blades and other high-density hardware would bring to the colocation industry, back in 2001, and over the seven years that have passed since I wrote that note, it’s all come true. Most of the large colocation companies have shifted their models accordingly, but regional data centers are often still woefully underprepared for this change.

Moving to the topic of hosting, I warned of the perils of capacity on demand for computing power all the way back in 2001. Although we’ve seen a decline in overprovisioning in managed hosting over the years, severe overprovisioning remains common, and the market has been buttressed by lots of high-growth customers. But tolerance for overprovisioning is dropping rapidly with the advent of virtualized, burstable capacity, and an increasing number of customers have slow-growth installations. Every managed hoster whose revenue stream depends on customers requiring capacity faster than Moore’s Law can obsolete their servers still has some vital thinking to do.

Making that problem worse is that the expensive part of servicing a hosting customer is the first server of a type they deploy, not the N more that they horizontally scale. Getting that box to stable golden bits is the tough part that eats up all your expensive people-time. And everyone who is thinking that their utility hosting platform is going to be great for picking up high-margin revenues off scaling front-end webservers needs to have another think. Given the dirt-cheap CDN prices these days, and ever-more-powerful and cost-effective application delivery controllers and caches, scaling at the front-end webserver tier is going the way of the dodo.

And while we’re talking about CDNs: Two years ago, I warned our clients that CDN prices were headed off a cliff. Margins were cushioned by the one-time discontinuity in server prices caused by the advent of multi-core chips, but prices have spent much of that time in free-fall, and although the floor’s now stable, average selling prices continue to decline and the market continues to commoditize, even as adoption of rich media shoots through the roof. I’m currently writing a research note updating our market predictions, because our clients have had a lot of interesting things to say about CDN purchases of late… stay tuned.

If you’ve got anything you want to share publicly about where you’re going with colocation, hosting, or your CDN purchases, and your thoughts on these trends, please do comment!

Beautiful logos

Vectortuts has an article with 30 Brilliant Vector Logo Designs, Deconstructed. They’re worth appreciating.

This generation’s got game

Pew Research just released a report on Teens, Video Games, and Civics. Of particular interest — 99% of boys and 94% of girls play video games. 27% play games with people they connect to via the Internet. 21% of teens play MMOGs, and 10% play virtual worlds (e.g., Habbo Hotel and similar pure-social games). And Guitar Hero topped the favorite-games mentions.

The Pew study is interesting when taken in conjunction with some numbers released by comScore today, alongside Edward Hunter’s Measuring and Metrics presentation at Austin GDC earlier this week. Together they provide more demographic detail about the changing gaming market in general.

Bottom line: Lots and lots of casual gamers, and as the teen generation ages up, a growing ubiquity of gaming as an entirely mainstream activity. This is Generation V in action.

(Those of you with Gartner subscriptions might want to check out How ‘Generation V’ Will Change Your Business for my colleague Adam Sarner’s future-looking take on how the 40th-level half-elf from Secaucus, New Jersey, behaves as a customer.)
