Monthly Archives: May 2011

Limelight Networks buys AcceloWeb

I am way behind on my news announcements, or I’d have posted on this earlier: Limelight has bought AcceloWeb.

AcceloWeb is a front-end optimization software company (sometimes called Web content optimization). FEO technologies improve website / Web application performance, through optimizing the HTML/CSS/JavaScript/images on the page. I’ve blogged about this in the past, regarding Cotendo’s integration of Google’s mod_pagespeed technology; if you’re interested in understanding more about FEO, see that post.

Like its competitors Aptimize and Strangeloop Networks, AcceloWeb is a software-based solution. FEO is an emerging technology, and it is computationally expensive — far more so than the kind of network-based optimizations that you get in ADCs like F5’s, or WOCs like Riverbed’s. It is also complex, since FEO tries to rewrite the page without breaking any of its elements — hard to do with complex e-commerce sites, for instance, especially those that aren’t following architectural best practices (or even good practices).
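To make the idea concrete, here is a toy sketch of one common FEO transformation: collapsing multiple stylesheet references into a single combined request, which cuts round trips. Real FEO engines parse the DOM, rewrite scripts and images as well, and must handle the edge cases that make this hard; the regex approach and the `/combined.css` URL scheme below are purely illustrative assumptions.

```python
import re

def combine_stylesheets(html):
    """Toy FEO rewrite: replace N stylesheet links with one combined link.

    This only conveys the flavor of the technique; production FEO tools
    parse the page properly rather than using regexes.
    """
    links = re.findall(r'<link[^>]+rel="stylesheet"[^>]+href="([^"]+)"[^>]*/?>', html)
    if len(links) < 2:
        return html
    combined = '<link rel="stylesheet" href="/combined.css?files=' + ','.join(links) + '">'
    # Drop the individual stylesheet links, then insert the single combined one.
    stripped = re.sub(r'<link[^>]+rel="stylesheet"[^>]*/?>\s*', '', html)
    return stripped.replace('</head>', combined + '\n</head>')

page = '''<head>
<link rel="stylesheet" href="/a.css">
<link rel="stylesheet" href="/b.css">
</head>'''
print(combine_stylesheets(page))
```

Even this trivial rewrite hints at why FEO is fragile: scripts that reference stylesheets by URL, conditional includes, and load-order dependencies can all break when the page is restructured.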

CDN and FEO services are highly complementary, since caching the optimized page elements obviously makes sense. Level 3 and Strangeloop recently partnered, with Level 3 offering Strangeloop’s technology as a service called CDN Site Optimizer, although it’s a “side by side” implementation in Level 3’s CDN POPs, not yet integrated with the Level 3 CDN. (Obviously, the next step in that partnership would be integration.)

The integration of network optimization and FEO is the most significant innovation in the optimization market in recent years. For Limelight, this is an important purchase, since it gets them into the acceleration game with a product that Akamai doesn’t offer. (Akamai only has a referral deal with Strangeloop.)

Gartner clients: My research note on improving Web performance (combining on-premise acceleration, CDN / ADN, and FEO for complete solutions) will be out soon!

The forthcoming Public Cloud IaaS Magic Quadrant

Despite various blog posts and a great deal of email correspondence, there is persistent, ongoing confusion about our forthcoming Magic Quadrant for Public Cloud Infrastructure as a Service. I will attempt to clear it up here on my blog, so I have a reference that I can point people to.

1. This is a new Magic Quadrant. We are doing this MQ in addition to, and not instead of, the Magic Quadrant for Cloud IaaS and Web Hosting (henceforth the “cloud/hosting MQ”). The cloud/hosting MQ will continue to be published at the end of each calendar year. This new MQ (henceforth the “public cloud MQ”) will be published in the middle of the year, annually. In other words, there will be two MQs each year. The two MQs will have entirely different qualification and evaluation criteria.

2. This new public cloud MQ covers a subset of the market covered by the existing cloud/hosting MQ. Please consult my cloud IaaS market segmentation to understand the segments covered. The existing MQ covers the traditional Web hosting market (with an emphasis on complex managed hosting), along with all eight of the cloud IaaS market segments, and it covers both public and private cloud. This new MQ covers multi-tenant clouds, and it has a strong emphasis on automated services, with a focus on the scale-out cloud hosting, virtual lab environment, self-managed virtual data center, and turnkey virtual data center segments. The existing MQ weights managed services very highly; by contrast, the new MQ emphasizes automation and self-service.

3. This is cloud compute IaaS only. This doesn’t rate cloud storage providers, PaaS providers, or anything else. IaaS in this case refers to the customer being able to have access to a normal guest OS. (It does not include, for instance, Microsoft Azure’s VM role.)

4. When we say “public cloud”, we mean massive multi-tenancy. That means that the service provider operates, in his data center, a pool of virtualized compute capacity in which multiple arbitrary customers will have VMs on the same physical server. The customer doesn’t have any idea who he’s sharing this pool of capacity with.

5. This includes cloud service providers only. This is an MQ for the public cloud compute IaaS providers themselves — the services focused on are ones like Amazon EC2, Terremark Enterprise Cloud, and so forth. This does not include any of the cloud-enablement vendors (no Eucalyptus, etc.), nor does it include any of the vendors in the ecosystem (no RightScale, etc.).

6. The target audience for this new MQ is still the same as the existing MQ. As Gartner analysts, we write for our client base. These are corporate IT buyers in mid-sized businesses or enterprises, or technology companies of any size (generally post-funding or post-revenue, i.e., at the stage where they’re looking for serious production infrastructure). We expect to weight the scoring heavily towards the requirements of organizations who need a dependable cloud, but we also recognize the value of commodity cloud to our audience, for certain use cases.

At this point, the initial vendor surveys for this MQ have been sent out. They have gone out to every vendor who requested one, so if you did not get one and you wanted one, please send me email. We did zero pre-qualification; if you asked, you got it. This is a data-gathering exercise, where the data will be used to determine which vendors get a formal invitation to participate in the research. We do not release the qualification criteria in advance of the formal invitations; please do not ask.

If you’re a vendor thinking of requesting a survey, please consider the above. Are you a cloud infrastructure service provider, not a cloud-building vendor or a consultancy? Is your cloud compute massively multi-tenant? Is it highly automated and focused on self-service? Do you serve enterprise customers and actively compete for enterprise deals, globally? If the answers to any of these questions are “no”, then this is not the MQ for you.

App categorization and the commodity vs. dependable cloud

At the SLA@SOI conference, my colleague Drue Reeves gave a presentation on the dependable cloud, which he defined as “a cloud service that has the availability, security, scalability, and risk management necessary to host enterprise applications… at a reasonable price.” We’ll be publishing research on this in the months to come, so this blog post contains relatively early-stage musings on my part.

We need enterprise-grade, dependable cloud infrastructure as a service (IaaS). But there’s also a place in the world for commodity cloud IaaS. They serve different sorts of use cases, different categories of applications. (Everything in this post refers to IaaS, but I’m just saying “cloud” for convenience.)

There are four types of applications that will move into the cloud:

  • Existing enterprise applications, capable of being virtualized
  • New enterprise-class applications, almost certainly Web-based
  • Internet-class applications, Web 1.0 and early Web 2.0
  • Global-class applications, highly sophisticated super-scalable Web 2.0 and beyond

Enterprise-class applications are generally characterized by the expectation that the underlying infrastructure is at least as reliable, performant, and secure as traditional enterprise data center infrastructure. They expect resilience at the infrastructure layer. Over the last decade, applications of this type have generally been written as three-tier, Web-based apps. Even so, these apps often scale vertically rather than horizontally (scale up rather than scale out). Moreover, a very large percentage of them are small applications — ones that use a core or less of a modern CPU — so even if they could scale out across multiple VMs, it often doesn’t make sense, from a capacity-efficiency standpoint, to deploy them that way.

In the future, while an increasing percentage of new business applications will be obtained as SaaS, rather than being internally-hosted COTS apps or in-house-written apps, and more will be deployed onto business process management (BPM) suite platforms or the like, businesses will still be writing custom apps of this sort. So we will continue to need dependable infrastructure.

Moreover, many enterprise-class applications are written not just by business IT, but also by external vendors, whether ISVs, SaaS, or otherwise. Even tech companies that make their living off their websites may write enterprise-class apps. Indeed, many such apps have previously used managed hosting for the underlying infrastructure, and these companies have infrastructure dependability as an expectation.

By contrast, Internet-class applications are written to scale out. They might or might not be written to be easily distributed. They assume sufficient scale that there is an expectation that at least some things can fail without causing widespread failure, although there may still be particularly vulnerable points in the app and underlying infrastructure — the database, for instance. Resilience is generally built into the application, but these are not apps designed to withstand the Chaos Monkey.

Finally, global-class applications are written to be scale-out, highly-distributed, and to withstand massive infrastructure failures. All the resiliency is built into the application; the underlying infrastructure is assumed to be fragile. Simple underlying infrastructure components that fail cleanly and quickly (rather than dying slow deaths of degradation) are prized, because they are cheap to buy and cheap to replace; all the intelligence resides in software.

Global-class applications can use commodity cloud infrastructure, as can other use cases that do not expect a dependable cloud. Internet-class applications can also use commodity cloud infrastructure, but unless efforts are made to move more resiliency into the application layer, there are risk management issues here, and depending upon scale and needs, a dependable cloud may be preferable to commodity cloud. Enterprise-class applications need a dependable cloud.

Where resiliency resides is an architectural choice. There is no One True Way. Building resilience into the app may be the most cost-effective choice for applications which need to have “Internet scale”, but it may add unwarranted and unnecessary complexity to many other applications, making dependable infrastructure the more cost-effective choice.

Akamai and Riverbed partner on SaaS delivery

Akamai and Riverbed have signed a significant partnership deal to jointly develop solutions that combine Internet acceleration with WAN optimization. The two companies will be incorporating each other’s technologies into their platforms; this is a deep partnership with significant joint engineering, and it is probably the most significant partnership that Akamai has done to date.

Akamai has been facing increasing challenges to its leadership in the application acceleration market — what Akamai’s financial statements term “value added services”, including their Dynamic Site Accelerator (DSA) and Web Application Accelerator (WAA) services, which are B2C and B2B bundles, respectively, built on top of the same acceleration delivery network (ADN) technology. Vendors such as Cotendo (especially via its AT&T partnership), CDNetworks, and EdgeCast now have services that compete directly with what has been, for Akamai, a very high-margin, very sticky service. This market is facing severe pricing pressure, due not just to competition, but also to the delta between the cost of these services and standard CDN caching. (In other words, as basic CDN services get cheaper, application acceleration also needs to get cheaper in order to demonstrate sufficient ROI — i.e., business value of performance — over just buying the less expensive solution.)

While Akamai has had interesting incremental innovations and value-adds since it obtained this technology via the 2007 acquisition of Netli, it has, until recently, enjoyed a monopoly on these services, and therefore hasn’t needed to do any groundbreaking innovation. While the internal enterprise WAN optimization market has been heavily competitive (between Riverbed, Cisco, and many others), other CDNs largely only began offering competitive ADN solutions in the last year. Now, while Akamai still leads in performance, it badly needs to open up some differentiation and new potential target customers, or it risks watching ADN solutions commoditize just the way basic CDN services have.

The most significant value proposition of the joint Akamai/Riverbed solution is this:

Despite the fundamental soundness of the value proposition of ADN services, most SaaS providers use only a basic CDN service, or no CDN at all. The same is true of other providers of cloud-based services. Customers, however, frequently want accelerated services, especially if they have end-users in far-flung corners of the globe; the most common problem is poor performance for end-users in Asia-Pacific when the service is based in the United States. Yet, today, accelerating a SaaS application either requires that the SaaS provider buy an ADN service itself (which is hard to do for only one customer, especially on multi-tenant SaaS), or requires the SaaS provider to allow the customer to deploy hardware in its data center (for instance, a Riverbed Steelhead WOC).

With the solution that this partnership is intended to produce, customers won’t need a SaaS provider’s cooperation to deploy an acceleration solution — they can buy it as a service and have the acceleration integrated with their existing Riverbed solution. It adds significant value to Riverbed’s customers, and it expands Akamai’s market opportunity. It’s a great idea, and in fact, this is a partnership that probably should have happened years ago. Better late than never, though.

3Crowd, a new fourth-generation CDN

3Crowd has unveiled its master plan with the recent launch of its CrowdCache product. Previously, 3Crowd had a service called CrowdDirector, essentially load-balancing for content providers who use multiple CDNs. CrowdCache is much more interesting, and it gives life and context to the existence of CrowdDirector. CrowdCache is a small, free, Java application that you can deploy onto a server, which turns it into a CDN cache. You then use CrowdDirector, which you pay for as-a-service on a per-object-request basis, to provide all the intelligence on top of that cache. CrowdDirector handles the request routing, management, analytics, and so forth. What you get, in the end, at least in theory, is a turnkey CDN.
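The intelligence layer here is the interesting part. As a rough illustration of the kind of request routing a service like CrowdDirector sits on top of, here is a minimal weighted-selection sketch with a health-check fallback. All names, weights, and the data structure are hypothetical assumptions for illustration, not 3Crowd’s actual design.

```python
import random

# Hypothetical pool of delivery nodes a routing layer might choose among:
# a content provider's own spare capacity first (higher weight), with a
# CDN as one more weighted option. Entirely illustrative.
CACHES = [
    {"host": "cache1.example.com", "weight": 3, "healthy": True},
    {"host": "cache2.example.com", "weight": 1, "healthy": True},
    {"host": "fallback-cdn.example.com", "weight": 1, "healthy": True},
]

def route_request():
    """Pick a healthy node, weighted-random; raise if none are up."""
    candidates = [c for c in CACHES if c["healthy"]]
    if not candidates:
        raise RuntimeError("no healthy delivery nodes")
    pick = random.uniform(0, sum(c["weight"] for c in candidates))
    for c in candidates:
        pick -= c["weight"]
        if pick <= 0:
            return c["host"]
    return candidates[-1]["host"]

print(route_request())
```

A real routing service layers geographic proximity, load feedback, analytics, and per-object policy on top of something like this, which is exactly the part 3Crowd charges for on a per-request basis.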

I consider 3Crowd to be a fourth-generation CDN. (I started writing about 4th-gen CDNs back in 2008; see my blog posts on CDN overlays and MediaMelon, on the launch of CDN aggregator Aflexi, and 4th-gen CDNs and the launch of Conviva).

To recap, first-generation CDNs use a highly distributed edge model (think: Akamai), second-generation CDNs use a somewhat more concentrated but still highly distributed model (think: Speedera), and third-generation CDNs use a megaPOP model of many fewer locations (think: Limelight and most other CDNs founded in the 2005-2008 timeframe). These are heavily capital-intensive models that require owning substantial server assets.

Fourth-generation CDNs, by contrast, represent a shift towards a more software-oriented model. These companies own limited (or even no) delivery assets themselves. Some of these are not (and will not be) so much CDNs themselves, as platforms that reside in the CDN ecosystem, or CDN enablers. Fourth-generation CDNs provide software capabilities that allow their customers to turn existing delivery assets (whether in their own data centers, in the cloud, or sometimes even on clients using peer-to-peer) into CDN infrastructure. 3Crowd fits squarely into this fourth-generation model.

3Crowd is targeting three key markets: content providers who have spare capacity in their own data centers and would like to deliver content using that capacity before they resort to their CDN; Web hosters who want to add a CDN to their service offerings; and carriers who want to build CDNs of their own.

In this last market segment, especially, 3Crowd will compete against Cisco, Juniper (via the Ankeena acquisition), Alcatel-Lucent (via the Velocix acquisition), EdgeCast, Jet-Stream, and other companies that offer CDN-building solutions.

No doubt 3Crowd will also get some do-it-yourselfers who will decide to use 3Crowd to build their own CDN using cloud IaaS from Amazon or the like. This is part of what’s generating buzz for the company now, since their “Garage Startup” package is totally free.

I also think there’s potentially an enterprise play here, for those organizations who need to deliver content both internally and externally, who could potentially use 3Crowd to deploy an eCDN internally along with an Internet CDN hosted on a cloud provider, substituting for caches from BlueCoat or the like. There are lots of additional things that 3Crowd needs to be viable in that space, but it’s an interesting thing to think about.

3Crowd has federation ambitions. Once they have a bunch of customers using their platform, they’d like to operate a marketplace in which capacity-trading can be done, and, of course, also enable more private federation deals. Federation tends to be of interest to regional carriers with local CDN ambitions, who see it as a way of competing with the global CDNs.

Conceptually, what 3Crowd has done is not unique. Velocix, for instance, has similar hopes with its Metro product. There is certainly plenty of competition for infrastructure for the carrier CDN market (most of the world’s carriers have woken up over the last year and realize that they need a CDN strategy of some sort, even if their ambitions do not go farther than preventing their broadband networks from being swamped by video). What 3Crowd has done that’s notable is an emphasis on having an easy-to-deploy complete integrated solution that runs on commodity infrastructure resources, and the relative sophistication of the product’s feature set.

The baseline price seemed pretty cheap to me at first, and then I did some math. At the baseline pricing for a start-up, it’s about 2 cents per 10,000 requests. If you’re doing small object delivery at 10 KB per file, ten thousand requests is about 100 MB of content. So 1 GB of 10 KB-file requests would cost you 20 cents. That’s not cheap, since that’s just the 3Crowd cost — you still have to supply the servers and the network bandwidth. By comparison, Rackspace Cloud Files CDN-enabled delivery via Akamai is 18 cents per GB for the actual content delivery. Anyone doing enough volume to have a full CDN contract, rather than pushing their bits through a cloud CDN, is going to see pricing a lot lower than 18 cents, too.

However, the pricing dynamics are quite different for video. If you’re doing delivery of relatively low-quality, YouTube-like social video, for instance, your average file size is probably more like 10 MB. So 10,000 requests is 100 GB of content, making the per-GB surcharge a mere 0.02 cents ($0.0002). This is an essentially negligible amount. Consequently, the request-based pricing model makes 3Crowd far more cost-effective as a solution for video and other large-file-centric CDNs than it does for small object delivery.
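The arithmetic above generalizes neatly: under request-based pricing, the per-GB surcharge is inversely proportional to average file size. A quick back-of-envelope model, assuming the ~2 cents per 10,000 requests baseline figure cited above and decimal units (matching the arithmetic in the text):

```python
# Request-based CDN pricing: cost per GB as a function of average file size.
RATE_PER_10K_REQUESTS = 0.02  # dollars; the start-up baseline cited above

def surcharge_per_gb(avg_file_bytes):
    """Request-based surcharge, in dollars, per (decimal) GB delivered."""
    requests_per_gb = 1_000_000_000 / avg_file_bytes
    return requests_per_gb / 10_000 * RATE_PER_10K_REQUESTS

print(f"10 KB objects: ${surcharge_per_gb(10_000):.4f}/GB")      # small objects
print(f"10 MB video:   ${surcharge_per_gb(10_000_000):.4f}/GB")  # social video
```

This prints $0.2000/GB for 10 KB objects versus $0.0002/GB for 10 MB video — a 1,000x difference in effective per-byte cost from the same request rate.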

I certainly have plenty more thoughts on this, both specific to 3Crowd, and to the 4th-gen CDN and carrier CDN evolutionary path. I’m currently working on a research note on carrier CDN strategy and implementation, so keep an eye out for it. Also, I know many of the CDN watchers who read my blog are probably now asking themselves, “What are the implications for Akamai, Limelight, and Level 3?” If you’re a Gartner client, please feel free to call and make an inquiry.

Gartner research related to Amazon’s outage

In the wake of Amazon’s recent outage, we know we have Gartner clients who are interested in what we’ve written about Amazon in the past, and in our existing recommendations for using cloud IaaS and managing cloud-related risks. While we’re comfortable with our current advice, we’re also in the midst of some internal debate about what new recommendations may emerge out of this event. In the meantime, I’m posting a list of research notes that clients may find helpful as they sort through their thinking. This is just a reading list; it is by no means a comprehensive list of Gartner research related to Amazon or cloud IaaS. If you are a client, you may want to do your own search of the research, or ask our client services folks for help.

I will mark notes as “Core” (available to regular Gartner clients), “GBL” (available to technology and service provider clients who have subscribed to Gartner for Business Leaders or a product with similar access to research targeted at vendors), or “ITP” (available to clients of the Burton Group’s services, known as Gartner for IT Professionals post-acquisition).

If you are specifically concerned about this particular Amazon outage and its context, and you want to read just one cautionary note, read Will Your Data Rain When the Cloud Bursts?, by my colleague Jay Heiser. It’s specifically about the risk of storage failure in the public cloud, and what you should ask your provider about their recoverability.

You might also be interested in our Cloud Computing: Infrastructure as a Service research round-up, for research related to both external cloud IaaS, and internal private clouds.

Amazon EC2

We first profiled Amazon EC2 in-depth in the November 2008 note, Is Amazon EC2 Right For You? (Core). It provides a brief overview of EC2, and examines the business case for using it, what applications are suited to using it, and the operational considerations. While some of the information is now outdated, the core questions outlined there are still valid. I am currently in the process of writing an update to this note, which will be out in a few weeks.

A deeper-dive profile can be found in the November 2009 note, Amazon EC2: Is It Ready For the Enterprise? (ITP). This goes into more technical detail (although it is also slightly out of date), and looks at it from an “enterprise readiness” standpoint, including suitability to run certain types of workloads, and a view on security and risk.

Amazon was one of the vendors profiled in our December 2010 multi-provider evaluation, Magic Quadrant for Cloud Infrastructure as a Service and Web Hosting (Core). The Amazon evaluation focuses on EC2. This is the most recent competitive view of the market that we’ve published. Our thinking on some of these vendors has changed since it was published (and we are working on an update, in the form of an MQ specific to public cloud); if you are currently evaluating cloud IaaS, or any part of Amazon Web Services, we encourage you to call and place an inquiry.

Amazon S3

We did an in-depth profile for Amazon S3 in the November 2008 note, A Look at Amazon’s S3 Cloud-Computing Storage Service (Core). This note is now somewhat outdated, but please do make a client inquiry if you want to get our current thinking.

The October 2010 note, Cloud Storage Infrastructure-as-a-Service Providers, North America (Core), provides a “who’s who” list of quick profiles of the major cloud storage providers.

An in-depth examination of cloud storage, focused on the technology and market more so than the vendors (although it does have a chart of competitive positioning), is given in the December 2010 note, Market Profile: Cloud-Storage Service Providers, 2011 (ITP).

The major cloud storage vendors are profiled in some depth in the June 2010 note, Competitive Landscape: Cloud Storage Infrastructure as a Service, North America, 2010 (GBL).

Other Amazon-Specific Things

The June 2009 note, Software on Amazon’s Elastic Compute Cloud: How to Tell Hype From Reality (Core), explores the issues of running commercial software on Amazon EC2, as well as how to separate vendor claims of Amazon partnerships from the reality of what they’re doing.

Amazon was one of the vendors who responded to the cloud rights and responsibilities published by the Gartner Global IT Council for Cloud Services. Their response, and Gartner commentary on it, can be found in Vendor Response: How Providers Address the Cloud Rights and Responsibilities (Core).

Amazon’s Elastic MapReduce service is profiled in the January 2011 note, Hadoop and MapReduce: Big Data Analytics (ITP).

Cloud IaaS, in General

A seven-part note, the top-level note of which is Evaluating Cloud Infrastructure as a Service (Core), goes into extensive detail about the range of options available in cloud IaaS provider, and how to evaluate those providers. You are highly encouraged to read it to understand the full range of market options; there’s a lot more to the market than just Amazon.

To understand the breadth of the market, and the players in particular segments, read Market Insight: Structuring the Cloud Compute IaaS Market (GBL). This is targeted at vendors who want to understand buyer profiles and how they map to the offerings in the market.

Help with evaluating what type of data center solution is right for you can be found in the framework laid out in Data Center Sourcing: Cloud, Host, Co-Lo, or Do It Yourself (ITP).

Help with evaluating your application’s suitability for a move to the cloud can be found in Migrating Applications to the Cloud: Rehost, Refactor, Revise, Rebuild, or Replace? (ITP), which takes an in-depth look at the factors you should consider when evaluating your application portfolio in a cloud context.

Risk Management

We’ve recently produced a great deal of research related to cloud sourcing. A catalog of that research can be found in Manage Risk and Unexpected Costs During the Cloud Sourcing Revolution (Core). There’s a ton of critical advice there, especially with regard to contracting, that makes these notes a must-read.

We provide a framework for evaluating cloud security and risks in Developing a Cloud Computing Security Strategy (ITP). This offers a deep dive into security and compliance issues, including how to build a cross-functional team to deal with these issues.

We take a look at assessment and auditing frameworks for cloud computing, in Determining Criteria for Cloud Security Assessment: It’s More than a Checklist (ITP). This goes deep into detail on risk assessment, assessment of provider controls, and the emerging industry standards for cloud security.

We caution about the risks of expecting that a cloud provider will have such a high level of reliability that business continuity and recoverability planning are no longer necessary, in Will Your Data Rain When the Cloud Bursts? (Core). This note is primarily focused on data recoverability.

We provide a framework for cloud risk mitigation in Managing Availability and Performance Risks in the Cloud: Expect the Unexpected (ITP). This provides solid advice on planning your bail-out strategy, distributing your applications/data/services, and buying cyber-risk insurance.

If you are using a SaaS provider, and you’re concerned about their underlying infrastructure, we encourage you to ask them a set of Critical Questions. There are three research notes, covering Infrastructure, Security, and Recovery (all Core). These notes are somewhat old, but the questions are still valid ones.
