Category Archives: Infrastructure

Launch of Cotendo, a new CDN / ADN

Cotendo, a new CDN backed by VC heavyweights Sequoia Capital and Benchmark Capital, has launched. The technical founders are ex-Commtouch; the VPs of Ops and Marketing are ex-Limelight. Cotendo is positioning itself as a software company (rather than an infrastructure company, per the market shift I blogged about a few months ago), but it’s not a software pure-play — it’s got the usual megaPOP-model deployment. However, the company casts its approach as closer to fourth-generation.

Three things make this launch notable — an ADN service similar to Akamai’s (thus breaking the monopoly Akamai has had since the Netli acquisition), a global load-balancing solution beefed up into an arbitrage service (for multiple delivery resources), and real-time analytics. Plus, all of us CDN-watchers can experience a wry sense of relief to see that Cotendo, unlike practically every other CDN to launch in the last two years, is not focused on video.

Again, I apologize for what is essentially a news blurb, but since I expect this is going to be a significant subject of client inquiry, I shouldn’t be giving away analysis on my blog. Gartner’s Invest clients are going to ask what this means in the e-commerce/enterprise space, and our mid-market IT buyer clients will want to know what it means for their options. As usual, I’m happy to take inquiry. More information will also be going out in a research note.

Fourth-generation CDNs and the launch of Conviva

First-generation CDNs use a highly distributed edge model, and include companies like Akamai and Sandpiper Networks (whose acquisition chain goes Digital Island, Exodus, Savvis, Level 3).

Second-generation CDNs basically try to achieve most of the performance of a first-generation CDN without needing hundreds of POPs, aiming for just a few dozen locations. Speedera (eventually acquired by Akamai) is the best example of a CDN of this type.

Third-generation CDNs follow a megaPOP model — two or three dozen huge points of presence, which they hope will be highly peered. Limelight, VitalStream (acquired by Internap), and the new entrants of the past two years are pretty much all megaPOP CDNs.

Fourth-generation CDNs are very different. They are a shift towards a more software-oriented model, and thus, these companies own limited (or even no) delivery assets themselves. Some of these are not (and will not be) so much CDNs themselves, as platforms that reside in the CDN ecosystem, or CDN enablers. Velocix (for their Metro product) and MediaMelon both reside in the fourth-generation space.

That gets us to the morning’s interesting announcement.

Conviva has come out of stealth mode with a powerhouse customer announcement — NBC Universal. Conviva is not a CDN in the traditional sense, but they’re part of the ecosystem for Internet video. Rather than owning delivery assets themselves, they’ve got a pure-play SaaS solution — a platform that can arbitrage resources from multiple content sources (multiple CDNs, data centers, etc.), as well as offer value-added services like real-time analytics and integration capabilities across those multiple sources. (From an ecosystem perspective, the closest analogue is probably Move Networks.)

What makes Conviva immediately notable is their ability to do real-time monitoring of the performance of every individual delivery, and seamlessly switch sources midway through playing a video, driven by metrics and business rules, thus allowing the customer to deliver consistently good-enough performance (i.e., a target of no buffering or other degradation) at the lowest price point, i.e., cost-arbitraged QoS.
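The cost-arbitraged QoS idea can be sketched in a few lines. This is purely an illustrative model, not Conviva’s actual implementation: the names, numbers, and threshold are all hypothetical. The logic is simply to pick the cheapest delivery source whose latest measured performance still meets the quality target, re-evaluating as the player reports metrics.

```python
# Illustrative sketch (not Conviva's actual code): choose the cheapest
# delivery source that currently meets a quality threshold, falling back
# to the best-performing source if none qualifies.

from dataclasses import dataclass


@dataclass
class Source:
    name: str
    cost_per_gb: float      # commercial terms for this CDN or data center
    rebuffer_ratio: float   # latest measured fraction of time spent buffering


def choose_source(sources, max_rebuffer=0.01):
    """Return the cheapest source meeting the rebuffering target."""
    healthy = [s for s in sources if s.rebuffer_ratio <= max_rebuffer]
    if not healthy:
        # No source meets the target; degrade gracefully to the best one.
        return min(sources, key=lambda s: s.rebuffer_ratio)
    return min(healthy, key=lambda s: s.cost_per_gb)


sources = [
    Source("cdn-a", cost_per_gb=0.25, rebuffer_ratio=0.002),
    Source("cdn-b", cost_per_gb=0.12, rebuffer_ratio=0.005),
    Source("origin-dc", cost_per_gb=0.05, rebuffer_ratio=0.030),
]
print(choose_source(sources).name)  # cdn-b: cheapest source under the threshold
```

In a real system this decision would run continuously per viewer, with business rules layered on top (contract commits, geographic routing, and so on), but the core trade — lowest cost subject to a quality floor — is the same.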

I’ve been writing about the customer desire for control and the rise of the fourth-generation software “CDN” since last year. Conviva takes full advantage of the overlay model. I’d rate the significance of this launch on par with that of Netli (back in 2003), although obviously in a very different way.

Because it’s a particularly important launch, I know it’s going to be of substantial interest to Gartner’s Invest clients, and likely of significant interest to our media and telecommunications industry clients. As such, I’m refraining from blogging a detailed description or analysis of the company’s technology and strategy, its likely impact to the rest of the video delivery ecosystem (which goes beyond the CDNs themselves), and the more general impact of the conceptual shift that’s taking place with fourth-generation CDNs. If you have inquiry access, please feel free to use it. A note to clients will be published soon.

(Disclaimer: I was pre-briefed on this, and I am quoted in Conviva’s press release. As I almost always do, I wrote my own quote, rather than letting words be put in my mouth. As with all Gartner quotes in press releases, it is a statement about the market, and no endorsement of the vendor is implied.)

TCO tool for cloud computing

Gartner clients might be interested in my just-published piece of research, which is a TCO toolkit for comparing the cost of internal and cloud infrastructure.

A not-new link, but one I nonetheless want to draw people’s attention to as much as possible: Yahoo’s best practices for speeding up your web site is a superb list of clearly articulated tips for improving your site’s performance and the user’s perception of performance (which goes beyond raw site performance). Recommended reading for everyone from the serious Web developer to the guy just throwing some HTML up for his personal pages.

On the similarly not-new but still-interesting front, Voxel’s open-source mod_cdn module for Apache is a cool little bit of code that makes it easy to CDN-ify your site — install the module and it’ll automatically transform your links to static content. For those of you who are dealing with CDNs that don’t provide CNAME support (like the Rackspace/Limelight combo), are using Apache for your origin front-end, and who don’t want to fool with mod_rewrite, this might be an interesting alternative.
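The link-transformation trick mod_cdn performs is conceptually simple. Here’s a minimal sketch of the idea in Python — not mod_cdn’s actual directives or code, and the CDN hostname is hypothetical: scan outgoing HTML for root-relative static-asset URLs and point them at the CDN hostname instead of the origin.

```python
# Sketch of the link-rewriting concept behind mod_cdn (hypothetical
# hostname; not the module's real configuration): rewrite static-asset
# links in HTML so browsers fetch them from the CDN.

import re

CDN_HOST = "http://cdn.example.com"   # assumed CDN-provided hostname
STATIC = re.compile(r'(src|href)="(/[^"]+\.(?:css|js|png|jpg|gif))"')


def cdnify(html: str) -> str:
    """Rewrite root-relative static links to absolute CDN URLs."""
    return STATIC.sub(lambda m: f'{m.group(1)}="{CDN_HOST}{m.group(2)}"', html)


page = '<img src="/img/logo.png"><a href="/about">About</a>'
print(cdnify(page))
# <img src="http://cdn.example.com/img/logo.png"><a href="/about">About</a>
```

Doing this as an output filter in the web server, the way mod_cdn does, means the HTML on disk never has to change — which is exactly why it’s attractive when your CDN doesn’t support CNAMEs.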

There’s more to cloud computing than Amazon

In dozens of client conversations, I keep encountering companies — both IT buyers and vendors — who seem to believe that Amazon’s EC2 platform is the be-all and end-all of the state of the art in cloud computing today. In short, they believe that if you can’t get it on EC2, there’s no cloud platform that can offer it to you. (I saw a blog post recently called “Why, right now, is Amazon the only game in town?” that exemplifies this stance.)

For better or for worse, this is simply not the case. While Amazon’s EC2 platform (and the rest of AWS) is a fantastic technical achievement, and has demonstrated that it scales well and has a vast amount of spare capacity available on demand, as it stands it’s got some showstoppers for many mainstream adopters. But that doesn’t mean that the rest of the market can’t fill those needs, like:

  • Not having to make any changes to applications.
  • Non-public-Internet connectivity options.
  • High-performance, reliable storage with managed off-site backups.
  • Hybridization with dedicated or colocated equipment.
  • Meeting compliance and audit requirements.
  • Real-time visibility into usage and billing.
  • Enterprise-class customer support and managed services.

There are tons of providers who would be happy to sell some or all of that to you — newer names to most people, like GoGrid and SoftLayer, as well as familiar enterprise hosting names like AT&T, Savvis, and Terremark. Even your ostensibly stodgy IT outsourcers are starting to get into this game, although the boundaries of what’s a public cloud service and what’s an outsourced private one start to get blurry.

If you’ve got to suddenly turn up four thousand servers to handle a flash crowd, you’re going to need Amazon. But if you’re like most mainstream businesses looking at cloud today, you’ve got a cash crunch you’ve got to get through, you’re deploying at most dozens of servers this year, and you’re not putting up and tearing down servers hour by hour. Don’t get fooled into thinking that Amazon’s the only possible option for you. It’s just one of many. Every cloud infrastructure services platform is better for some needs than others.

(Gartner clients interested in learning more about Amazon’s EC2 platform should read my note “Is Amazon EC2 Right For You?”. Those wanting to know more about S3 should read “A Look at Amazon’s S3 Cloud-Computing Storage Service”, authored by my colleagues Stan Zaffos and Ray Paquet.)

Cloud failures

A few days ago, an unexpected side-effect of some new code caused a major Gmail outage. Last year, a small bug triggered a series of cascading failures that resulted in a major Amazon outage. These are not the first cloud failures, nor will they be the last.

Cloud failures are as complex as the software that underpins these clouds. No longer do you have isolated systems; you have complex, interwoven ecosystems, delicately orchestrated by a swarm of software programs. In presenting simplicity to the user, the cloud provider takes on the burden of dealing with that complexity themselves.

People sometimes say that these clouds aren’t built to enterprise standards. In one sense, they aren’t — most aren’t intended to meet enterprise requirements in terms of feature-set. In another sense, though, they are engineered to far exceed anything that the enterprise would ever think of attempting themselves. Massive-scale clouds are designed to never, ever, fail in a user-visible way. The fact that they do fail nonetheless should not be a surprise, given the potential for human error encoded in software. It is, in fact, surprising that they don’t visibly fail more often.

Every day, within these clouds, a whole host of small errors that would be outages if they occurred within the enterprise — server hardware failures, storage failures, network failures, even some software failures — are handled invisibly by the back-end. Most of the time, the self-healing works the way it’s supposed to. Sometimes it doesn’t. The irony in both the Gmail outage and the S3 outage is that both appear to have been caused by the very software components that were actively trying to create resiliency.

To run infrastructure on a massive scale, you are utterly dependent upon automation. Automation, in turn, depends on software, and no matter how intensively you QA your software, you will have bugs. It is extremely hard to test complex multi-factor failures. There is nothing that indicates that either Google or Amazon are careless about their software development processes or their safeguards against failure. They undoubtedly hate failure as much as, and possibly more than, their customers do. Every failure means sleepless nights, painful internal post-mortems, lost revenue, angry partners, and embarrassing press. I believe that these companies do, in fact, diligently seek to seamlessly handle every error condition they can, and that they generally possess sufficient quantity and quality of engineering talent to do it well.

But the nature of the cloud — the one homogeneous fabric — magnifies problems. Still, that’s not isolated to the cloud alone. Let’s not forget VMware’s license bug from last year. People who normally booted up their VMs at the beginning of the day were pretty much screwed. It took VMware the better part of a day to produce a patch — and their original announced timeframe was 36 hours. I’m not picking on VMware — certainly you could find yourself with a similar problem with any kind of widely deployed software vulnerable to a bug that caused it all to fail.

Enterprise-quality software produced the SQL Slammer worm, after all. In the cloud, we ain’t seen nothing yet…

CDNetworks buys Panther Express

For many months now, CDN industry insiders have gossiped that Panther Express was in financial trouble. Panther had the bad luck of mistiming the funding cycle, leaving it trying to raise capital at a point when the capital markets were essentially frozen. Moreover, a large percentage of its revenues were tied to no-commit or limited-commit contracts, and with CDN prices in free-fall for much of 2008, Panther was doubly screwed from the perspective of the money guys. As time wore on, an acquisition by either a rival CDN or a carrier wanting to get into the space became more and more likely — but the longer the potential acquirers could wait to pull the trigger, the more cheaply they could buy the company, especially since rumors of Panther’s financial difficulties were starting to scare off potential customers.

Enter CDNetworks, a global CDN based in South Korea, who in the last year has been aggressively trying to penetrate the North American market. CDNetworks acquired Panther Express yesterday, in a deal structured so that it merged its US and European-based operations with Panther. Panther’s CEO Steve Liddell (who has experience working with Asian-based companies through his past experience as president of Level 3’s Asia business) will lead the new entity.

Dan Rayburn has offered some numbers and claims the acquisition values Panther at about $5 million — which would be about one-quarter of its 2008 trailing revenues, and would leave me wondering how that compares to the book value of Panther’s deployed equipment.

I’ll simply say that, although I agree with Dan that this acquisition has basically zero impact on other players or on pricing in the market, I have a very different perspective than he does on the acquisition itself, on carrier opinion of this space, and on the general market opportunity (especially in the context that CDN is much more than video). Gartner clients who want to talk about it are welcome to schedule an inquiry with me.

(Sorry. I started to write a long and detailed analysis, and then realized that I was crossing the line on what Gartner views as acceptable analyst blogging, and what is full-fledged analysis that ought to be reserved for paying clients.)

Interesting tidbits for the week

A bit of a link round-up…

My colleague Daryl Plummer has posted his rebuttal in our ongoing debate over cloud infrastructure commoditization. I agree with his assertion that over the long term, the bigger growth stories will be the value-added providers and not the pure-play cloud infrastructure guys, but I also stick to my guns in believing that customer service is a differentiator and we’ll have a lot of pure-plays, not a half-dozen monolithic mega-infrastructure-providers.

Michael Topalovich, of Delivered Innovation, has blogged a post-mortem on Coghead. It’s a well-written and detailed dissection of what went wrong, from the perspective of a former Coghead partner. Anyone who runs or uses a platform as a service would be well served to read it, as there are plenty of excellent lessons to be learned.

Richard Jones, of Last.fm, has put up an annotated short-list of distributed key-value stores (mostly in the form of distributed hash tables). He’s looking for a premises-based rather than cloud-based solution, but his commentary is thoughtful and the comments thread is interesting as well.

Also, I have a new research note out (Gartner clients only), in collaboration with my colleague Ted Chamberlin: evaluation criteria for Web hosting (including cloud infrastructure services in that context), which is the decision framework that supports the Magic Quadrant that we’re anticipating publishing in April. (Also coming soon, a “how to choose a cloud infrastructure provider” note and accompanying toolkit.)

Single-function clouds are unlikely

GigaOm has an interesting post on HP’s cloud vision, which asserts that HP’s view of the future is that service providers will reduce complexity by delivering only one application (scaling up their own infrastructure in a monolithic way), and that generalized infrastructure-as-a-service (IaaS) providers will not be able to scale up in a profitable manner.

Setting aside the specifics of what HP does or does not believe, I think it’s highly unlikely that we’ll see super-specialization in the cloud. There are, of course, software vendors today who make highly specialized components that are, in turn, incorporated into the software of vendors further up the stack — today, those are ISVs that sell libraries, Web 2.0 companies with mashable components, and so forth. But as software companies get more ambitious, the scope of their software tends to broaden, too. In the future, they may want to become the masher rather than the mashed, so to speak. And then they start wanting to diversify.

For an example, look at Oracle. Originally a database company, they now have a hugely diversified base of enterprise software. Why should we believe that a cloud-based software company would be any less ambitious?

Certainly, it is more difficult and more expensive to manage general-purpose compute than it is to manage specific-purpose compute. But there’s a great deal more to driving profitability than keeping costs down. Broader integration has a business value, and the increase in value (and the price the customer is willing to pay) can readily outpace the increased infrastructure cost.

To take another example, Google runs incredibly efficient single-purpose compute in the form of their search farms. Yet, they are trying to broaden their business to other services, both for the potential synergies, and because it is incredibly dangerous for a business to be too narrow, since that limits its growth and vastly increases its vulnerability to any shifts in the market.

I don’t think successful software companies will confine themselves to delivering single applications as a service. And I think that IaaS providers will find cost-effective ways to deliver appropriate infrastructure to those SaaS companies.

Application delivery network adoption

A long-standing puzzle for me and my colleagues who cover application-fluent networking: why don’t more SaaS providers adopt application delivery networks (ADNs), either via a service, or via application delivery controller (ADC) hardware?

Even if a SaaS vendor perceives their performance as being just fine for the typical US-based user, performance is often an issue in Europe, and frequently deteriorates sharply in Asia, especially China, and is erratic everywhere else depending on the quality of the country’s connectivity. (Change the names of the regions if the data center isn’t in North America.) Deploying an ADN helps to bolster performance for these users. And if it’s not cost-effective to do that for all users, why not charge extra for an accelerated service? (Yes, we understand that there are issues like “if we offer an accelerated service, are we implying our regular service is slow?” but really, that’s just a marketing issue. Performance can be a competitive differentiator and it’s also a revenue opportunity.)

Two interesting recent examples:

Yes, times are tough right now, so a SaaS company does have to evaluate the ROI carefully, but any SaaS provider with performance issues owes it to themselves to give this stuff a look. (And SaaS customers who have performance issues ought to be poking at their providers.)

Does cloud infrastructure become commoditized?

My colleague Daryl Plummer has mused upon the future of cloud in a blog post titled “Cloud Infrastructure: The Next Fat Dumb and Happy Pipe?” In it, he posits that cloud infrastructure will commoditize, that in 5-7 years the market will only support a handful of huge players, and that value-adds are necessary in order to stay in the game.

I both agree and disagree with him. I believe that cloud infrastructure will not be purely a commodity market, specifically because everyone in this market will offer value-added differentiation, and that even a decade out, we’ll still have lots of vendors, many of them small, in this game. Here’s a quick take on a couple of reasons why:

There are diminishing returns on the cost-efficiency of scale. There is a limit to how cheap a compute cycle can get. The bigger you are, the less you’ll pay for hardware, but in the end, even semiconductor companies have to make a little margin. And the bigger you are, the more you can leverage your engineers, especially your software tools guys — but it’s also possible that a tools vendor will deliver similar cost efficiencies to the smaller players (think about the role of Virtuozzo and cPanel in shared hosting). Broadly, smaller players pay more for things and may not leverage their resources as thoroughly, but they also have less overhead. It’s important to reach sufficient scale, but it’s not necessarily beneficial to be as large as possible.
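The shape of that argument is easy to see with a toy cost model. These numbers are purely illustrative (not sourced from anywhere): assume a hard per-unit floor set by hardware and semiconductor margins, plus fixed engineering overhead amortized across the fleet. Scale helps a lot early, then barely at all.

```python
# Toy model with made-up numbers: per-server-hour cost has a floor
# (hardware margins) plus fixed overhead spread across the fleet.
# Savings from scale shrink rapidly as the floor comes to dominate.

def unit_cost(servers: int, floor: float = 0.04, overhead: float = 50.0) -> float:
    """Per-server-hour cost for a fleet of the given size."""
    return floor + overhead / servers


for n in (100, 1_000, 10_000, 100_000):
    print(f"{n:>7} servers -> {unit_cost(n):.4f} per server-hour")
```

Going from 100 to 1,000 servers cuts the modeled unit cost dramatically; going from 10,000 to 100,000 barely moves it. That's the sense in which reaching sufficient scale matters but being as large as possible doesn't.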

This is a service. People matter. It’s hard to really commoditize a service, because people are a big wildcard. Buyers will care about customer service. Computing infrastructure is too mission-critical not to. The nuances of account management and customer support will differentiate companies, and smaller, more agile, more service-focused companies will compete successfully with giants.

The infrastructure itself is not the whole of the service. While there will be people out there who just buy server instances with a credit card, they are generally, either implicitly or explicitly, buying a constellation of stuff around that. At the most basic level, that’s customer support, the management portal and tools, service-level agreements, and actual operational quality — all things that can be meaningfully differentiated. And you can obviously go well beyond that point. (Daryl mentions OpSource competing with Amazon/IBM/Microsoft for the same cloud infrastructure dollar — but it doesn’t, really, because those monoliths are not going to learn the specifics of your SaaS app, right down to providing end-user help-desk support, the way OpSource does. Cloud infrastructure is a means to an end, not an end unto itself.)

It takes time for technology to mature. Five years from now, we’ll still have stark differences in the way that cloud infrastructure services are implemented, and those differences will manifest themselves in customer-visible ways. And the application platforms will take even longer to mature (and by their nature, promote differentiation and vendor lock-in).

By the way, my latest research note, “Save Money Now With Hosted and ‘Cloud’ Infrastructure” (Gartner clients only) is a tutorial for IT managers, focused on how to choose the right type of cloud service for the application that you want to deploy. All clouds are not created equal, especially now.
