Blog Archives

OnLive game streaming

OnLive debuted its gaming service at the Game Developers Conference in what was apparently a pretty impressive demonstration, judging from the press and blogosphere buzz. Basically, OnLive will run games on its server infrastructure and stream them live to users over the Internet, allowing users to play titles from multiple consoles, as well as games whose hardware requirements exceed what their own PCs can handle, on whatever computers they want.

Forrester’s Josh Bernoff is enthused about both the announcement and the broader implications of “your life in the cloud”. His take is an interesting read, though I don’t agree with all of it. I do think, however, that the implications of OnLive’s technology are well worth thinking about in the context of hosted desktop virtualization.

In order for OnLive to be able to deliver graphics-intensive, high-resolution, fast-twitch games over long-haul Internet links, they have to have a remarkably low-latency way to transmit screen images from their central servers to users at the edge. We know it has to be screen images because in their scheme, the end-user’s computer is not responsible for rendering anything. (This kind of display is a hard problem; previous attempts to play games via remote desktop have run into serious performance issues.) From the way it has been described, the trick is that OnLive is sending video, meaning that it can stream as quickly as live video in general can be streamed. Real-time screen update is theoretically awesome for business uses too, not just for gaming. So I am extremely curious about the underlying technology.
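To get a feel for why sending encoded video (rather than raw frames) makes this plausible, here is a back-of-the-envelope sketch. The resolution, frame rate, and bitrate below are my own illustrative assumptions, not anything OnLive has published:

```python
# Rough comparison: bandwidth for a raw framebuffer stream vs. a compressed
# live video stream. All figures here are illustrative assumptions.

WIDTH, HEIGHT = 1280, 720       # assume a 720p game image
BYTES_PER_PIXEL = 3             # 24-bit color
FPS = 60                        # assume a fast-twitch frame rate

raw_bps = WIDTH * HEIGHT * BYTES_PER_PIXEL * 8 * FPS
print(f"Raw framebuffer stream:  ~{raw_bps / 1e9:.2f} Gbps")   # ~1.33 Gbps

compressed_bps = 5e6            # assume a ~5 Mbps live video encode
print(f"Compressed video stream: ~{compressed_bps / 1e6:.0f} Mbps")
print(f"Reduction:               ~{raw_bps / compressed_bps:.0f}x")
```

That gap of a couple of hundred times is the whole game: a raw screen stream is hopeless over consumer broadband, while a well-encoded live video stream is not, provided the encoding itself adds very little latency.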

I’m not sure whether I’m really OnLive’s target audience. I own all three modern consoles (Xbox 360, PS3, Wii), and a lot of my games come with peripherals, so my primary interest here is the ability to truly get games on demand. But I am enough of a performance hound to own a high-end gaming monitor, gaming keyboard, gaming mouse, etc. for my PC (although ironically, no high-end graphics card), so any compromise in latency might not be my cup of tea. Still, it is certainly a terribly interesting idea.


The broadband-caps disaster

The United States has now elected a President who has pledged universal broadband. On almost the same day, AT&T announced it would be following some of its fellow network operators into trialing metered broadband.

Broadband caps have been much more common in Europe, but the trend there is away from caps, not towards them. Caps stifle innovation, discouraging the development of richer Internet experiences and the convergence of voice and video with data.

AT&T’s proposed caps start at 20 GB of data transfer per month. That’s equivalent to about 64 kbps of sustained data — barely the kind of speed a modem can manage. Or, put another way, that’s five 4 GB, DVD-quality movie downloads. And those caps are generous compared to Time Warner’s — AT&T proposes 20-150 GB caps, vs. TW’s 5-40 GB. (Comcast is much more reasonable, with its 250 GB cap.)
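For anyone who wants to check that arithmetic, here is a minimal sketch of the cap-to-sustained-throughput conversion, assuming a 30-day month and decimal gigabytes:

```python
# Convert a monthly data cap into the sustained bit rate it can support.
# Assumes a 30-day month and decimal (10^9-byte) gigabytes.

SECONDS_PER_MONTH = 30 * 24 * 3600

def sustained_kbps(cap_gb: float) -> float:
    """Sustained throughput, in kbps, that a monthly cap allows."""
    return cap_gb * 1e9 * 8 / SECONDS_PER_MONTH / 1e3

for cap_gb in (5, 20, 40, 150, 250):
    print(f"{cap_gb:>3} GB/month is roughly {sustained_kbps(cap_gb):,.0f} kbps sustained")
# 20 GB/month works out to the low 60s of kbps: modem territory.
```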

The US already has pathetic broadband speeds compared to much of the rest of the developed world. There are good reasons for that, of course, such as our more dispersed population, but we don’t want to be taking steps backwards, and caps are certainly that. (And that’s not even getting into the important question of what “broadband” really constitutes now, and what minimum speed a universal deployment would need in order to justify the infrastructure investment against its future usefulness.)

Yes, network operators need to make money. Yes, someone has to pay for the bandwidth. Yes, customers with exceptionally high bandwidth usage should expect to pay for that usage. But the kind of caps that are being discussed are simply unreasonable, especially for a world that is leaning more and more heavily towards video.

A few months ago, comScore reported video metrics showing that 74% of the US Internet audience watched online video, averaging 228 minutes of viewing apiece. 35% of that viewing was YouTube, so let’s call that portion 320 kbps. The remainder looks to be mostly higher-quality; Hulu makes a good reference at 480-700 kbps, with the highest quality topping 1 Mbps, so for purposes of calculation, let’s call it 700 kbps. Add it all up and you’re looking at about 1 GB of video per month delivered to the average video-watcher.
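Spelling that estimate out, with the 35/65 split and the 320/700 kbps bitrates taken from the figures above:

```python
# Monthly video transfer for the average US video-watcher, using the comScore
# viewing time and the bitrate assumptions discussed above.

viewing_seconds = 228 * 60            # 228 minutes per month

youtube_share, youtube_kbps = 0.35, 320
other_share, other_kbps = 0.65, 700

avg_kbps = youtube_share * youtube_kbps + other_share * other_kbps
total_gb = viewing_seconds * avg_kbps * 1e3 / 8 / 1e9
print(f"~{total_gb:.2f} GB of video per month")   # roughly 1 GB
```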

Average page weights are on the rise; Keynote, an Internet performance measurement company, recently cited 750k in a study of media sites. I’d probably cite the average page as more in the 250-450k range, but I don’t dispute that heavier pages are where things are going (compare a January 2008 study). At that kind of weight, you can view around 1500 pages in 1 GB of transfer — i.e., about 50 pages per day.

A digital camera shot is going to be in the vicinity of 1.5 MB, but a photo on Flickr is typically in the 500k range, so you can comfortably view a photo gallery of five dozen shots every day in your 1 GB.

Email sizes are increasing. Assuming you get attachments, you’re probably looking at around 75k per email on average. 1 GB will let you receive around 450 emails per day, but if 95% of what arrives is spam and you’re downloading it all, that leaves you about 20 legitimate messages per day.

If you’re a Vonage customer or the like, you’re looking at around 96 kbps, or around 45 minutes of VoIP talk time per day, in 1 GB of usage.
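Putting the last few paragraphs together, here is a rough sketch of what 1 GB per month buys in each category on its own. The per-item sizes are the ones cited above, and each line is an either/or, not a combined budget:

```python
# What 1 GB per month buys in each usage category by itself, using the
# per-item sizes cited above. Each line is an either/or, not a combined budget.

DAILY_BYTES = 1e9 / 30                 # 1 GB per month, spread over 30 days

PAGE_BYTES = 700e3                     # roughly the heavier end of today's pages
PHOTO_BYTES = 500e3                    # typical Flickr-sized photo
EMAIL_BYTES = 75e3                     # average email, attachments included
VOIP_BYTES_PER_SEC = 96e3 / 8          # ~96 kbps VoIP stream

print(f"Web pages: ~{DAILY_BYTES / PAGE_BYTES:.0f} per day")
print(f"Photos:    ~{DAILY_BYTES / PHOTO_BYTES:.0f} per day")
print(f"Emails:    ~{DAILY_BYTES / EMAIL_BYTES:.0f} per day")
print(f"VoIP:      ~{DAILY_BYTES / VOIP_BYTES_PER_SEC / 60:.0f} minutes per day")
```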

Now add your anti-virus updates, and your Windows and other app software updates. Add your online gaming (don’t forget your voice chat with that), your instant messaging, and other trivialities of Internet usage.

And good luck if you’ve got more than one computer in your household, as a substantial percentage of broadband households do. Take all of those numbers and multiply them by the number of users under your roof.

A 5 GB cap is nothing short of pathetic. Casual users can easily run up against that kind of limit with the characteristics of today’s content, and families will be flat-out hosed. With content only getting more and more heavyweight, this situation is only going to get worse.

20 GB will probably suffice for single-person, casual-user households that don’t watch much video. But families and online entertainment enthusiasts will certainly need more, and the low caps violate, I think, reasonable expectations of what one can get out of a broadband connection.

Making users watch their usage is damaging to the entire emerging industry around rich Internet content. I respect the business needs of network operators, but caps are the wrong way to achieve their goals, and counterproductive in the long term.


How not to video-enable your site

Businesses everywhere are video-enabling their websites. Over the past year, I’ve handled a ton of inquiries from Gartner clients who are adding video to the next iterations of their B2C websites. Given what I cover (Internet infrastructure), most of the queries I handled involved how to deliver that video in a cost-effective and high-performance way. Most enterprises hope that a richer experience will help drive traffic, but a poor implementation can just as easily lead to user frustration.

I was trying to do something pretty simple tonight: find out how late a local TGI Friday’s was open. This entailed going to the Friday’s website, typing a zip code into the store locator box, and getting some results. Or that was the theory, anyway. (It turned out that they didn’t have the hours listed, but that was the least of my problems.)

Most people interact with restaurant websites in highly utilitarian ways: find a location, get driving directions, make reservations, check out a menu, and so forth. That means the consumer wants to get in and out quickly, but still expects an attractive, branded experience.

Unfortunately, restaurant sites seem to repeatedly commit the sin of being incredibly heavyweight, without optimization. Indeed, an increasing number of restaurant sites seem to be presented purely in Flash — where a mobile user, i.e., someone who is looking for a place to eat right now, can’t readily get the content.

But the Friday’s site is extraordinarily heavyweight. Giving the home page’s URL to Web Page Analyzer showed a spectacular 95 HTTP requests encompassing a page weight of 2,254,607 bytes: more than 2 MB worth of content! The front page has multiple embedded Flash files, complete with annoying background noise (not even music). The store locator has video, plus a particularly heavyweight design; ditto the rest of the site. It’s attractive, but it takes forever to load. The front page alone takes about 30 seconds to load under good conditions with bandwidth equivalent to a T-1. Since Friday’s does not appear to use a CDN, there is nothing smoothing out variable network conditions or reducing latency by serving content closer to the edge. In practice, with some latency spikes between me and their website, it took several minutes to get the front page loaded to the point of usability. (I’m on a 1.1 Mbps SDSL connection.) And yes, in the end, I decided to go to another restaurant.
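As a sanity check on those load times, the raw transfer time alone is easy to estimate. A T-1 is 1.544 Mbps; everything else (DNS lookups, connection setup for the 95 requests, Flash rendering) comes on top of this figure:

```python
# How long does the raw transfer of the Friday's front page take over a T-1?
# This ignores DNS, connection setup for the 95 requests, and rendering time,
# all of which come on top of the number below.

PAGE_BYTES = 2_254_607
T1_BPS = 1.544e6

transfer_seconds = PAGE_BYTES * 8 / T1_BPS
print(f"Pure transfer time over a T-1: ~{transfer_seconds:.0f} seconds")   # ~12 s
```

Round trips for the dozens of additional requests and the time to render the embedded Flash plausibly account for the rest of the roughly 30-second load.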

HCI studies generally say that you need page load times of 8 seconds or less; many of my clients are aiming for 4 seconds or less. A 250K page weight is common these days, with many B2C sites as much as double that, but 2 MB is effectively unusable for the average broadband user.

Businesses: Don’t be seduced by beautiful designs with unreasonable load times. When you test out new designs, make sure to find out what the experience is like for the average user, as well as what happens in adverse network conditions (which can be simulated by products from vendors like Shunra). And if you’re doing heavyweight pages, you should really, really consider using a CDN for your delivery.
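As a starting point for that kind of testing, even a quick script can total up request counts and page weight before a design ships. Here is a minimal sketch, assuming Python with the requests and beautifulsoup4 packages; it only counts images, scripts, and stylesheets referenced directly in the HTML, so anything loaded by Flash or JavaScript is missed, and the URL at the bottom is just a placeholder:

```python
# Minimal page-weight check: fetch a page, find the assets it references
# directly, and total up their sizes. Assets loaded dynamically (Flash,
# JavaScript) are not counted, so this is a lower bound.

import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin


def page_weight(url: str) -> None:
    page = requests.get(url, timeout=30)
    soup = BeautifulSoup(page.text, "html.parser")

    assets = set()
    for img in soup.find_all("img", src=True):
        assets.add(urljoin(url, img["src"]))
    for script in soup.find_all("script", src=True):
        assets.add(urljoin(url, script["src"]))
    for css in soup.find_all("link", rel="stylesheet", href=True):
        assets.add(urljoin(url, css["href"]))

    total_bytes = len(page.content)
    for asset in assets:
        try:
            total_bytes += len(requests.get(asset, timeout=30).content)
        except requests.RequestException:
            pass  # skip assets that fail to load

    print(f"{1 + len(assets)} requests, {total_bytes / 1e6:.2f} MB total")


page_weight("http://www.example.com/")   # placeholder URL
```

It is no substitute for a real measurement tool or for testing under simulated network conditions, but it will catch a 2 MB front page before your customers do.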

