Monthly Archives: March 2009

OnLive game streaming

OnLive debuted its gaming service at the Game Developers Conference in what was apparently a pretty impressive demonstration, to judge from the press and blogosphere buzz. Basically, OnLive will run games on its own server infrastructure and stream them live to users over the Internet, allowing users to play titles from multiple consoles, as well as games whose hardware requirements exceed their own PCs’ capabilities, on whatever computers they want.

Forrester’s Josh Bernoff is enthused about both the announcement and the broader implications of “your life in the cloud”. His take is an interesting read, though I’m not sure I agree with it in its entirety. However, I do think the implications of OnLive’s technology are well worth thinking about in the context of hosted desktop virtualization.

In order for OnLive to deliver graphics-intensive, high-resolution, fast-twitch games over long-haul Internet links, they have to have an amazing, very low-latency way to transmit screen images from their central servers to users at the edge. We know it has to be screen images because, in their scheme, the end-user’s computer is not responsible for rendering anything. (This kind of display is a hard problem; previous attempts to play games via remote desktop have run into serious performance issues.) From the coverage, the trick appears to be that OnLive is sending compressed video, meaning it can stream screen updates as quickly as live video in general can be streamed. Real-time screen updates are theoretically awesome for business uses too, not just for gaming. So I am extremely curious about the underlying technology.
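
To make that concrete, here is a toy sketch of the server-side loop such a model implies. Everything in it (the session object and its receive_input, render, encode, and send helpers) is a hypothetical placeholder for illustration, not OnLive’s actual implementation:

```python
import time

TARGET_FPS = 60
FRAME_BUDGET = 1.0 / TARGET_FPS  # ~16.7 ms per frame, before any network latency

def stream_game(session):
    """Hypothetical server-side loop: render on the server, ship compressed
    video frames to the client, which only decodes and displays them."""
    while session.active:
        start = time.time()
        inputs = session.receive_input()        # controller/keyboard events from the client
        frame = session.game.render(inputs)     # frame rendered entirely server-side
        packet = session.encoder.encode(frame)  # compressed as video, not raw pixels
        session.send(packet)                    # the client never renders, only decodes
        # Sleep off whatever remains of the frame budget, if anything.
        elapsed = time.time() - start
        if elapsed < FRAME_BUDGET:
            time.sleep(FRAME_BUDGET - elapsed)
```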

I’m not sure whether I’m really OnLive’s target audience. I own all three modern consoles (Xbox 360, PS3, Wii), and a lot of my games come with peripherals. So my primary interest in this is mostly the ability to truly get games on-demand. But I am enough of a performance hound to own a high-end gaming monitor, gaming keyboard, gaming mouse, etc. for my PC (although ironically, no high-end graphics card), so any compromise in latency might not be my cup of tea. But it is certainly a terribly interesting idea.

AWS in Eclipse, and Azure announcements

Amazon’s announcement for today, with timing presumably associated with EclipseCon, is an AWS toolkit for the Eclipse IDE.

Eclipse, an open-source project originally created by IBM and now governed by the Eclipse Foundation (IBM still offers commercial tools built on it), is one of the two most popular IDEs, the other being Microsoft Visual Studio. Originally designed for Java development, it has since been extended to support many other languages and environments.

Integrating with Eclipse is a useful step for Amazon, and hopefully other cloud providers will follow suit. It’s also a competitive response to the integration that Microsoft has done between Visual Studio and its Azure platform.

Speaking of Azure, as part of a set of announcements, Microsoft has said that it’s supporting non-.NET languages on Azure via FastCGI. FastCGI is a webserver interface that keeps application processes resident, so your scripts are loaded and compiled once rather than on every request, reducing per-request overhead. You can run most languages under it, including Java, but it doesn’t give you the full feature set you get from tight integration with the webserver through a language-specific extension. (Note that because .NET’s languages encompass anything that targets the CLR, users already had some reasonable access to non-C# languages on Azure, via implementations like Ruby.NET, IronRuby, IronPython, etc.)
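
For illustration, here is roughly what a persistent FastCGI-hosted application looks like in Python, assuming the commonly used flup library; this is a generic sketch, not Azure’s actual integration. The point is that the process starts once and then serves many requests:

```python
# Minimal WSGI application served over FastCGI, assuming the flup library is
# installed. The interpreter starts and loads this file once; subsequent
# requests reuse the same resident process, which is where the savings over
# classic per-request CGI come from.
from flup.server.fcgi import WSGIServer

def app(environ, start_response):
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'Hello from a persistent FastCGI process\n']

if __name__ == '__main__':
    WSGIServer(app).run()
```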

Also, in an interesting Q&A on a ZDNet blog post, Microsoft said that there will be no private Azure-based clouds; that is, enterprises won’t be able to take the Azure software and host it in their own data centers. What’s not clear is whether software written for Azure will be portable into the enterprise environment. Portability of this sort is a feature that Microsoft, with its complete control over the entire stack, is uniquely well-positioned to deliver.

Gartner BCM summit pitches

I’ve just finished writing one of my presentations for Gartner’s Business Continuity Management Summit. The pitch looks at colocation, as well as the future of cloud infrastructure, for disaster recovery purposes. (My other pitch at the conference is on network resiliency.)

When I started writing this, I’d expected that some providers who had indicated they’d have formal cloud DR services coming out shortly would be able to brief me on what they were planning to offer. Unfortunately, that turned out not to be the case, so the pitch is focused more on do-it-yourself cloud DR.

Lightweight DR services have appeared and disappeared from the market at an interesting rate ever since Inflow (many years and many acquisitions ago) began offering a service focused on smaller mid-market customers that couldn’t typically afford full-service DR solutions. It’s a natural complement to colocation (in fact, a substantial percentage of the people who use colo do it for a secondary site), and now, a natural complement to the cloud.

Research du jour

My newest research notes are all collaborative efforts.

Forecast: Sizing the Cloud; Understanding the Opportunities in Cloud Services. This is Gartner’s official take on cloud segmentation and forecasting through 2013. It was a large-team effort; my contribution was primarily on the compute services portion.

Invest Insight: Content Delivery Network Arbitrage Increases Market Competition. This is a note specifically for Gartner Invest clients, written in conjunction with my colleague Frank Marsala (a former sell-side analyst who heads up our telecom sector coverage for investors). It’s primarily about Conviva, though it also touches on Cotendo; its key point, however, is not to look at particular companies, but at technology-enabled long-term trends.

Cool Vendors in Cloud Computing Management and Professional Services, 2009. This is part of our annual “cool vendors” series highlighting small vendors who we think are doing something notable. It’s a group effort, and we pick the vendors via committee. (And no, there is no way to buy your way into the report.) This year’s picks (never a secret, since vendors usually do press releases) are Appirio, CohesiveFT, Hyperic, RightScale, and Ylastic.

Sun, IBM, and the cloud

The morning’s hot rumor: IBM and Sun are in acquisition talks. The punditry is in full swing in the press, and my mailbox here at work is filling rapidly with research-community discussion of the implications, too. (As if Cisco’s Unified Computing System announcement weren’t creating enough controversy for the week.)

Don’t let that buzz drown out Sun’s cloud announcement, though. An insider has useful detailed comments, along with links to the API itself. It’s Q-Layer inside, a RESTful API on top, and clearly in the early stages of development. I’ll likely post some further commentary once I get some time to read through all the documentation and think it through.

A little SourceForge frustration

SourceForge puzzles me. I think it’s the combination of an obviously eager effort to improve the site and a continued fumbling of the basics.

On the plus side, SourceForge recently made a very welcome addition: “hosted apps”, including WordPress and MediaWiki, are now a free option for all projects. And the announcement of support for additional repository types, notably git, is also a nice move.

But SourceForge is plagued by sluggish response (especially stark when compared to the consistent zippiness of Google Code) across its website, source code repositories, and so on, as well as by occasional outages. And the continual redesign of the site, especially in its current bright-orange incarnation, hasn’t seemed like a positive to me. With every redesign, I’ve felt like SourceForge was becoming harder and harder to use. As an example, one redesign ago, the Project Admin menu got so long it was basically unusable on smaller screens (like laptops). To SourceForge’s credit, the next iteration promptly fixed that; unfortunately, the chosen fix was to bury vitally important functionality, like the file release system, under the “Feature Settings” page (found under Project Admin). That led me on a wild hunt through most of the UI before I finally stumbled upon the functionality I was looking for by accident.

SourceForge offers a tremendous amount of functionality for free, which is what’s allowing it to stay dominant against the proliferating number of alternative services out there. But not only does SourceForge need to innovate, it needs to make sure that it gets the basics right. It has to add functionality while still being fast and simple to use, and over the years, SourceForge seems to have grown tendrils of new features while the main octopod body has grown sessile and mottled with confusion.

Linkage du jour

Tossing a few links out there…

In the weekend’s biggest cloud news, Microsoft’s Azure was down for 22 hours. It’s now back up, with no root cause known.

Geva Perry has posted a useful Zoho Sheet calculator for figuring out whether an Amazon EC2 reserved instance will save you money over an unreserved instance.
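
If you just want the back-of-the-envelope version, the break-even point is easy to compute yourself; the numbers below are the small-instance prices discussed in the reserved-instances post further down this page, so treat them as an example rather than a quote:

```python
# Rough break-even for a one-year EC2 reserved instance (small, Unix),
# using the prices from the reserved-instances post below.
ON_DEMAND = 0.10      # $/hour, unreserved
RESERVED_FEE = 325.0  # $, one-time fee for a one-year term
RESERVED_RATE = 0.03  # $/hour with the reservation

# The reservation pays off once the hourly savings cover the up-front fee.
break_even_hours = RESERVED_FEE / (ON_DEMAND - RESERVED_RATE)
print("Break-even: %.0f hours/year (~%.0f%% utilization)"
      % (break_even_hours, 100 * break_even_hours / 8760))
# -> roughly 4,643 hours, i.e. a bit over half the year
```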

Craig Balding has posted a down-to-earth dissection of PCI compliance in the cloud, and the practical reality that cloud infrastructure providers tend to deal with PCI compliance by encouraging you to push the actual payment stuff off to third parties.

Google App Engine updates

For those of you who haven’t been following Google’s updates to App Engine, I want to call your attention to a number of recent announcements. At the six-month point of the beta, I asked when App Engine would be enterprise-ready; now, as we approach the one-year mark, these announcements show progress toward, and a roadmap for, addressing many of the issues I mentioned in my previous post.

Paid usage. Google is now letting applications grow beyond the free limits. You set quotas for various resources and pay for what you use. I still have concerns about the quota model, but being able to bill for these services is an important step for Google. Google intends to be price-competitive with Amazon, but there’s an important difference: there’s still some free service. Google anticipates that the free quotas are enough to serve about five million page views a month. 5 MPVs is a lot; it pretty much means that if you’re willing to write to the platform, you can easily host a hobby project on it for free. For that matter, many enterprises don’t get 5 MPVs’ worth of hits on an individual Web app or site each month; it’s just that the platform restrictions are a barrier to mainstream adoption.

Less aggressive limits and fewer restrictions. Google has removed or reduced some limits and restrictions that were significant frustrations for developers.

Promised new features. Google has announced that it’s going to provide APIs for some vital bits of functionality that it doesn’t currently allow, like the ability to run scheduled jobs and background processes.

Release of Python 3.0. While there’s no word on how Google plans to manage the 3.0 transition for App Engine, it’s interesting to see how many Python contributors have been absorbed into Google.

Speaking personally, I like App Engine. Python is my strongest scripting language skill, so I prefer to write in it whenever possible. I also like Django, though I appreciate that Google’s framework is easier to get started with than Django (it’s very easy to crank out basic stuff). Like a lot of people, I’ve had trouble adjusting to the non-relational database, but that’s mostly a matter of programming practice. It is, however, clear that the platform is still in its early stages. (I once spent several hours of a weekend tearing my hair out at something that didn’t work, only to eventually find that it was a known bug in the engine.) But Google continues to work at improving it, and it’s worth keeping an eye on to see what it will eventually become. Just don’t expect it to be enterprise-ready this year.
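
For those who haven’t tried it, here is roughly what working with the datastore feels like, as a sketch based on the google.appengine.ext.db API as it stands today; you define typed model classes and query with GQL rather than designing relational tables and joins:

```python
from google.appengine.ext import db

class Greeting(db.Model):
    author = db.StringProperty()
    content = db.TextProperty()
    date = db.DateTimeProperty(auto_now_add=True)

# Writes are simple puts; there's no schema migration to run.
greeting = Greeting(author='example_user', content='hello, datastore')
greeting.put()

# Queries use GQL, which looks SQL-ish but always returns entities of a
# single kind -- no joins, which is most of the adjustment.
recent = db.GqlQuery("SELECT * FROM Greeting ORDER BY date DESC LIMIT 10")
for g in recent:
    print(g.content)
```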

Amazon announces reserved instances

Amazon’s announcement du jour is “reserved instances” for EC2.

Basically, with a reserved instance, you pay an up-front non-refundable fee for a one-year term or a three-year term. That buys you a discount on the usage fee for that instance, during that period of time. Reserved instances are only available for Unix flavors (i.e., no Windows) and, at present, only in the US availability zones.

Let’s do some math to see what the cost savings turn out to be.

An Amazon small instance (1 virtual core equivalent to a 1.0-1.2 GHz 2007 Opteron or Xeon) is normally $0.10 per hour. Assuming 720 hours in a month, that’s $72 a month, or $864 per year, if you run that instance full-time.

Under the reserved-instance pricing scheme, you pay $325 for a one-year term, then $0.03 per hour. That works out to about $21.60 per month, or roughly $259 per year, in usage fees. Add in the reservation fee and you’re at about $584 for the year, averaging out to roughly $49 per month, a pretty nice cost savings.

On a three-year basis, unreserved would cost you $2,592. Reserved, full-time, is a $500 one-time fee plus usage, for a grand total of about $1,278. That’s a big savings over the base price, averaging out to roughly $35 per month.

This is important because at the unreserved prices, on a three-year cash basis, it’s cheaper to just buy your own servers. At the reserved price, does that equation change?

Well, let’s see. Today, in a Dell PowerEdge R900 (a reasonably popular server for virtualized infrastructure), I can get a four-socket server populated with quad-cores for around $15,000. That’s sixteen Xeon cores clocking at more than 2 GHz. Call it $1000 per modern core; split up over a 3-year period, that’s about $28 per month. Cheaper than the reserved price, and much less than the unreserved price.

Now, this is a crude, hardware-only, three-year cash calculation, of course, and not a TCO calculation. But it shows that if you plan to run your servers full-time on Amazon, it’s not as cheap as it might seem when you’re thinking “it’s just three cents an hour!”
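
For anyone who wants to fiddle with the assumptions, here is the arithmetic above as a small script. The prices and the $15,000-for-16-cores Dell figure are just the numbers used in this post, and it’s cash cost only, not a TCO model:

```python
# Reproduces the back-of-the-envelope math above, using the 2009 figures
# quoted in this post. Cash cost only, not TCO.
HOURS_PER_MONTH = 720
MONTHS = 36  # three-year basis

on_demand_rate = 0.10     # $/hour, EC2 small instance, unreserved
reserved_rate = 0.03      # $/hour with a reservation
reserved_fee_3yr = 500.0  # $, one-time fee for a three-year term

on_demand_total = on_demand_rate * HOURS_PER_MONTH * MONTHS                   # $2,592
reserved_total = reserved_fee_3yr + reserved_rate * HOURS_PER_MONTH * MONTHS  # ~$1,278

# Crude owned-hardware comparison: ~$15,000 for 16 modern cores, call it
# $1,000 per core, spread over the same 36 months.
owned_core_total = 1000.0

for label, total in [("On-demand EC2", on_demand_total),
                     ("Reserved EC2", reserved_total),
                     ("Owned core", owned_core_total)]:
    print("%-14s three-year cost $%6.0f, average $%3.0f/month"
          % (label, total, total / MONTHS))
```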

Launch of Cotendo, a new CDN / ADN

Cotendo, a new CDN backed by VC heavyweights Sequoia Capital and Benchmark Capital, has launched. The technical founders are ex-Commtouch; the VPs of Ops and Marketing are ex-Limelight. Cotendo is positioning itself as a software company (rather than an infrastructure company, per the market shift I blogged about a few months ago), but it’s not a software pure-play; it’s got the usual megaPOP-model deployment. However, they’re positioning the offering as more of a fourth-generation approach.

Three things make this launch notable: an ADN service similar to Akamai’s (thus breaking the monopoly Akamai has had since the Netli acquisition), a global load-balancing solution beefed up into an arbitrage service across multiple delivery resources, and real-time analytics. Plus, all of us CDN-watchers can feel a wry sense of relief that Cotendo, unlike practically every other CDN to launch in the last two years, is not focused on video.

Again, I apologize for what is essentially a news blurb, but since I expect this to be a significant subject of client inquiry, I shouldn’t be giving away analysis on my blog. Gartner’s Invest clients are going to ask what this means in the e-commerce/enterprise space, and our mid-market IT buyer clients will want to know what it means for their options. As usual, I’m happy to take inquiry. More information will also be going out in a research note.
