Blog Archives
Amazon CloudFront gets whole site delivery and acceleration
For months, rumors have abounded that Amazon intended to enter the dynamic site acceleration market; it was the logical next step for its CloudFront CDN. Today, Amazon released a set of features oriented towards dynamic content, described in blog posts from Amazon’s Jeff Barr and Werner Vogels.
When CloudFront introduced custom origins (as opposed to the original CloudFront, which required you to use S3 as the origin), and dropped minimum TTLs down to zero, it effectively edged into the “whole site delivery” feature set that’s become mainstream for the major CDNs.
With this latest release, whole site delivery is much more of a reality — you can have multiple origins, so you can mix static and dynamic content (which are often served from different hostnames; e.g., you might have images.mycompany.com serving your static content, but http://www.mycompany.com serving your dynamic content), and you’ve got pattern-matching rules that let you define what the cache behavior should be for content whose URL matches a particular pattern.
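To make that concrete, here is a minimal sketch, in Python, of how per-pattern origin selection and cache behavior work in principle. This is illustrative logic only, not CloudFront's actual configuration API; the hostnames, patterns, and TTLs are hypothetical.

```python
from fnmatch import fnmatch

# Ordered (URL pattern, origin, TTL in seconds) rules; first match wins.
CACHE_BEHAVIORS = [
    ("/images/*", "images.mycompany.com", 86400),  # static: cache for a day
    ("*.css",     "images.mycompany.com", 3600),
    ("/api/*",    "www.mycompany.com",    0),      # dynamic: no caching
]
DEFAULT_BEHAVIOR = ("www.mycompany.com", 0)

def route(path):
    """Pick the origin and cache TTL for a request path."""
    for pattern, origin, ttl in CACHE_BEHAVIORS:
        if fnmatch(path, pattern):
            return origin, ttl
    return DEFAULT_BEHAVIOR

print(route("/images/logo.png"))  # ('images.mycompany.com', 86400)
print(route("/api/cart"))         # ('www.mycompany.com', 0)
```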
The “whole site delivery” feature set is important, because it hugely simplifies CDN configuration. Rather than having to go through your site and change its URL references to the CDN (long-time CDN watchers may remember that Akamai in the early days would have customers “Akamaize” their site using a tool that did these URL rewrites), the CDN is smart — it just goes to the origin and pulls things, and it can do so dynamically (so, for instance, you don’t have to explicitly publish to the CDN when you add a new page, image, etc. to your website). It gets you closer to simply being able to repoint the URL of your website to the CDN and having magic happen.
The dynamic site acceleration features — the actual network optimization features — that are being introduced are much more limited. They basically amount to TCP connection multiplexing, TCP connection persistence/pooling, and TCP window size optimization, much like Cotendo in its very first version. At this stage, it’s not going to be seriously competing against Akamai’s DSA offering (or CDNetworks’s similar DWA offering), but it might have appeal against EdgeCast’s ADN offering.
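To see why connection persistence matters, here is a small client-side illustration in Python. It is not Amazon's implementation, just the underlying effect; example.com is a placeholder host, and the real gains depend on distance and TLS handshake cost.

```python
import http.client
import time

HOST = "example.com"  # placeholder host

def timed(label, fn):
    start = time.perf_counter()
    fn()
    print(f"{label}: {time.perf_counter() - start:.2f}s")

def fresh_connection_per_request():
    for _ in range(5):
        conn = http.client.HTTPSConnection(HOST)  # new TCP + TLS handshake
        conn.request("GET", "/")
        conn.getresponse().read()
        conn.close()

def one_persistent_connection():
    conn = http.client.HTTPSConnection(HOST)      # handshake paid once
    for _ in range(5):
        conn.request("GET", "/")
        conn.getresponse().read()                 # drain before reusing
    conn.close()

timed("fresh connection per request", fresh_connection_per_request)
timed("one persistent connection   ", one_persistent_connection)
```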
However, I would expect that, like everything else Amazon releases, there will be frequent updates that introduce new features. The acceleration techniques are well known at this point, and Amazon would logically add bidirectional (symmetric POP-to-POP) acceleration as the next big feature, in addition to implementing the other common optimizations (dynamic congestion control, TCP “FastRamp”, etc.).
What’s important here: CloudFront dynamic acceleration costs the same as static delivery. For US delivery, that starts at about $0.12/GB and goes down to below $0.02/GB for high volumes. That’s easily somewhere between one-half and one-tenth of the going rate for dynamic delivery. The delta is even greater if you look at a dynamic product like Akamai WAA (or its next generation, Terra Alta), where enterprise applications that might do all of a TB of delivery a month typically cost $6,000 per app per month — whereas a TB of CloudFront delivery is $120. Akamai is pushing the envelope forward in feature development, and arguably those price points are so divergent that you’re talking about different markets, but low price points also expand a market, because they change the level of the decision: to an enterprise, at that kind of price point, it might as well be free.
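For those who want to check the arithmetic, a quick back-of-the-envelope comparison using the numbers cited above (prices as quoted at the time of writing, and CDN volumes priced in decimal terabytes):

```python
GB_PER_TB = 1000  # decimal TB, as CDN pricing is typically quoted

cloudfront_rate = 0.12              # $/GB, entry US price cited above
cloudfront_tb = cloudfront_rate * GB_PER_TB
akamai_waa_per_app = 6000           # $/month per app at ~1 TB/month, as cited

print(f"1 TB on CloudFront: ${cloudfront_tb:,.0f}/month")    # $120
print(f"Same TB via WAA:    ${akamai_waa_per_app:,}/month")  # $6,000
print(f"Ratio: {akamai_waa_per_app / cloudfront_tb:.0f}x")   # 50x
```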
Give CloudFront another year of development, and there’s a high probability that it can become a seriously disruptive force in the dynamic acceleration market. The price points change the game, making it much more likely that companies, especially SaaS providers (many of whom use EC2, and AWS in general), who have been previously reluctant to adopt dynamic acceleration due to the cost, will simply get it as an easy add-on.
There is, by the way, a tremendous market opportunity out there for a company that delivers value-added services on top of CloudFront — which is to say, the professional services to help customers integrate with it, ongoing expert technical support on a day-to-day basis, and a great user portal that provides industry-competitive reporting and analytics. CloudFront has reached the point where enterprises, large mainstream media companies, and other users of Akamai, Limelight, and Level 3, who feel they need ongoing support of complex implementations and a great toolset to operate those implementations intelligently, are genuinely interested in taking a serious look at CloudFront as an alternative. But there’s no company that I know of that provides the services and software that would bridge the gap between CloudFront and a traditional CDN implementation.
Akamai buys Cotendo
Akamai is acquiring Cotendo for a purchase price of $268 million, somewhat under the rumored $300 million that had been previously reported in the Israeli press. To judge from the stock price, the acquisition is being warmly received by investors (and for good reason).
The acquisition only impacts the website delivery/acceleration portion of the CDN market — it has no impact on the software delivery and media delivery segments. The acquisition will leave CDNetworks as the only real alternative for dynamic site acceleration that is based on network optimization techniques (EdgeCast does not seem to have made the technological cut thus far). Level 3 (via its Strangeloop Networks partnership) and Limelight (via its Acceloweb acquisition) have chosen to go with front-end optimization techniques instead for their dynamic acceleration. Obviously, AT&T is going to have some thinking to do, especially since application-fluent networking is a core part of its strategy for cloud computing going forward.
I am not going to publicly blog a detailed analysis of this acquisition, although Gartner clients are welcome to schedule an inquiry to discuss it (thus far the questions are coming from investors and primarily have to do with the rationale for the purchase price, technology capabilities, pricing impact, and competitive impact). I do feel compelled to correct two major misperceptions, though, which I keep seeing all over the place in press quotes from Wall Street analysts.
First, I’ve heard it claimed repeatedly that Cotendo’s technology is better than Akamai’s. It’s not, although Cotendo has done some important incremental engineering innovation, as well as some better marketing of specific aspects (for instance, their solution around mobility). I expect that there will be things that Akamai will want to incorporate into their own codebase, naturally, but this is not really an acquisition that is primarily being driven by the desire for the technology capabilities.
Second, I’ve also heard it claimed repeatedly that Cotendo delivers better performance than Akamai. This is nonsense. There is a specific use case in which Cotendo may deliver better performance — low-volume customers with low cache hit ratios due to infrequently-accessed content, as can occur with SaaS apps, corporate websites, and so on. Cotendo pre-fetches content into all of its POPs and keeps it there regardless of whether or not it’s been accessed recently. Akamai flushes objects out of cache if they haven’t been accessed recently. This means that you may see Akamai cache hit ratios that are only in the 70%-80% range, especially in trial evaluations, which is obviously going to have a big impact on performance. Akamai cache tuning can help some of those customers substantially drive up cache hits (for better performance, lower origin costs, etc.), although not necessarily enough; cache hit ratios have always been a competitive point that other rivals, like Mirror Image, have hammered on. It has always been a trade-off in CDN design — if you have a lot more POPs you get better edge performance, but now you also have a much more distributed cache and therefore lower likelihood of content being fresh in a particular POP.
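That trade-off is easy to demonstrate with a toy simulation. The assumptions below are mine (a crude Zipf-like popularity curve, requests spread evenly across POPs, plain LRU eviction, no pre-fetching), so this models the eviction-based case rather than Cotendo-style pinning; the absolute numbers mean nothing, the downward trend is the point.

```python
import random
from collections import OrderedDict

CATALOG = 10_000   # distinct cacheable objects
CACHE_SLOTS = 500  # per-POP cache capacity, in objects
REQUESTS = 200_000

def popular_object(rng):
    # Crude Zipf-like draw: object k is requested roughly in proportion
    # to 1/k^2 in the tail; good enough to get a long-tail shape.
    return min(int(1 / (1 - rng.random())), CATALOG)

def hit_ratio(num_pops, seed=42):
    rng = random.Random(seed)
    pops = [OrderedDict() for _ in range(num_pops)]  # each POP: its own LRU
    hits = 0
    for _ in range(REQUESTS):
        obj = popular_object(rng)
        cache = pops[rng.randrange(num_pops)]  # each request lands on one POP
        if obj in cache:
            hits += 1
            cache.move_to_end(obj)             # mark as recently used
        else:
            cache[obj] = True
            if len(cache) > CACHE_SLOTS:
                cache.popitem(last=False)      # evict least-recently-used
    return hits / REQUESTS

for n in (1, 10, 100):
    print(f"{n:>3} POPs: {hit_ratio(n):.0%} hit ratio")
```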
(Those are the two big errors that keep bothering me. There are plenty of other minor factual and analysis errors that I keep seeing in the articles that I’ve been reading about the acquisition. Investors, especially, seem to frequently misunderstand the CDN market.)
Cotendo’s potential acquisition
Thus far, merger-watchers eyeing the rumored bidding for Cotendo seem to be asking: Why this high a valuation compared to the rest of the CDN industry? Who are the potential suitors and why? What if anything does Cotendo offer that other CDNs don’t? How do the various dynamic offerings in the market compare? Who else might be ripe for acquisition? What is the general trend of M&A activity in the CDN industry going forward? Do I agree with Dan Rayburn’s commentary on this deal?
However, for various reasons, I am not currently publicly commenting further on Twitter or my blog, or really in general with non-Gartner-clients, regarding the potential acquisition of Cotendo by Akamai (or AT&T, or Juniper, or anyone else who might be interested in buying them).
If you are a Gartner client, and you want to discuss the topic, you may request a written response or a phone call through the usual mechanisms for inquiry.
What makes Akamai sticky?
There’s one thing in particular that tends to make Akamai customers “sticky” — the amount the customer uses professional services. The more professional services a customer consumes from Akamai, the less likely it is they’ll ever switch CDNs. In short: The more of a pain it’s been for them to integrate with Akamai’s CDN (usually due to the customer having a complex site that violates best practices related to content cacheability), and the more they have to use recurring professional services every time they update their site, the less likely it is that they’re going to move to another CDN. That’s for two reasons — one, because it’s difficult and expensive to do the up-front work to get the site onto another CDN, and two, because most other CDNs don’t like to do extensive professional services on a recurring basis. That makes the use of professional services a double-edged sword, since it’s not really a business with great margins, and you’re vulnerable if the customer eventually goes and builds a site that isn’t a great big hairy mess.
But there’s one Akamai product (delivered as a value-added additional service) that’s currently sufficiently compelling that customers and prospects who want it, won’t consider any other CDN that can’t offer the same. (And since it’s currently unique to Akamai, that means no competition, always a boon in a market where pricing is daily warfare.) I’m suddenly seeing it frequently quoted, which makes it likely that it’s a significant sales push, though it’s not a brand-new product. It’s a very effective attach.
Can you guess what it is?
(You may feel free to speculate on my blog, but if you want the answer, and you’re a Gartner client, make an inquiry request through the usual means.)
Limelight Networks buys AcceloWeb
I am way behind on my news announcements, or I’d have posted on this earlier: Limelight has bought AcceloWeb.
AcceloWeb is a front-end optimization (FEO) software company; the category is sometimes called Web content optimization. FEO technologies improve website and Web application performance by optimizing the HTML, CSS, JavaScript, and images on a page. I’ve blogged about this in the past, regarding Cotendo’s integration of Google’s mod_pagespeed technology; if you’re interested in understanding more about FEO, see that post.
Like its competitors Aptimize and Strangeloop Networks, AcceloWeb is a software-based solution. FEO is an emerging technology, and it is computationally expensive — far more so than the kind of network-based optimizations that you get in ADCs like F5’s, or WOCs like Riverbed’s. It is also complex, since FEO tries to rewrite the page without breaking any of its elements — particularly hard to do with complex e-commerce sites, especially those that aren’t following architectural best practices (or even good practices).
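To give a flavor of what an FEO rewrite does, here is a bare-bones sketch of one common technique: inlining small images as data: URIs to eliminate round trips. The asset bytes and size threshold are hypothetical stand-ins; a real rewriter fetches and size-checks actual files, detects content types, and handles many more edge cases.

```python
import base64
import re

ASSETS = {"icon.png": b"\x89PNG..."}  # stand-in for small image file bytes
INLINE_LIMIT = 2048                   # only inline images under ~2 KB

def inline_small_images(html):
    def repl(match):
        src = match.group(1)
        data = ASSETS.get(src)
        if data is None or len(data) > INLINE_LIMIT:
            return match.group(0)     # leave unknown or large images alone
        b64 = base64.b64encode(data).decode("ascii")
        return f'<img src="data:image/png;base64,{b64}">'  # assumes PNG
    return re.sub(r'<img src="([^"]+)">', repl, html)

print(inline_small_images('<p><img src="icon.png"></p>'))
```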
CDN and FEO services are highly complementary, since caching the optimized page elements obviously makes sense. Level 3 and Strangeloop recently partnered, with Level 3 offering Strangeloop’s technology as a service called CDN Site Optimizer, although it’s a “side by side” implementation in Level 3’s CDN POPs, not yet integrated with the Level 3 CDN. (Obviously, the next step in that partnership would be integration.)
The integration of network optimization and FEO is the most significant innovation in the optimization market in recent years. For Limelight, this is an important purchase, since it gets them into the acceleration game with a product that Akamai doesn’t offer. (Akamai only has a referral deal with Strangeloop.)
Gartner clients: My research note on improving Web performance (combining on-premise acceleration, CDN / ADN, and FEO for complete solutions) will be out soon!
Akamai and Riverbed partner on SaaS delivery
Akamai and Riverbed have signed a significant partnership deal to jointly develop solutions that combine Internet acceleration with WAN optimization. The two companies will be incorporating each other’s technologies into their platforms; this is a deep partnership with significant joint engineering, and it is probably the most significant partnership that Akamai has done to date.
Akamai has been facing increasing challenges to its leadership in the application acceleration market — what Akamai’s financial statements term “value added services”, including their Dynamic Site Accelerator (DSA) and Web Application Accelerator (WAA) services, which are B2C and B2B bundles, respectively, built on top of the same acceleration delivery network (ADN) technology. Vendors such as Cotendo (especially via its AT&T partnership), CDNetworks, and EdgeCast now have services that compete directly with what has been, for Akamai, a very high-margin, very sticky service. This market is facing severe pricing pressure, due not just to competition, but also to the delta between the cost of these services and standard CDN caching. (In other words, as basic CDN services get cheaper, application acceleration also needs to get cheaper, in order to demonstrate sufficient ROI, i.e., business value of performance, above just buying the less expensive solution.)
While Akamai has had interesting incremental innovations and value-adds since it obtained this technology via the 2007 acquisition of Netli, it has, until recently, enjoyed a monopoly on these services, and therefore hasn’t needed to do any groundbreaking innovation. The internal enterprise WAN optimization market has been heavily competitive (between Riverbed, Cisco, and many others), but other CDNs only began offering competitive ADN solutions in the last year. Now, while Akamai still leads in performance, it badly needs to open up some differentiation and new potential target customers, or it risks watching ADN solutions commoditize just the way basic CDN services have.
The most significant value proposition of the joint Akamai/Riverbed solution is this:
Despite the fundamental soundness of the value proposition of ADN services, most SaaS providers use only a basic CDN service, or no CDN at all. The same is true of other providers of cloud-based services. Customers, however, frequently want accelerated services, especially if they have end-users in far-flung corners of the globe; the most common problem is poor performance for end-users in Asia-Pacific when the service is based in the United States. Yet, today, getting that acceleration either requires that the SaaS provider buy an ADN service itself (which is hard to do for only one customer, especially for multi-tenant SaaS), or requires the SaaS provider to allow the customer to deploy hardware in its data center (for instance, a Riverbed Steelhead WOC).
With the solution that this partnership is intended to produce, customers won’t need a SaaS provider’s cooperation to deploy an acceleration solution — they can buy it as a service and have the acceleration integrated with their existing Riverbed solution. It adds significant value to Riverbed’s customers, and it expands Akamai’s market opportunity. It’s a great idea, and in fact, this is a partnership that probably should have happened years ago. Better late than never, though.
Akamai sues Cotendo for patent infringement
How to tell when a CDN has arrived: Akamai sues them for patent infringement.
The lawsuit that Akamai has filed against Cotendo alleges the violation of three patents.
The most recent of the patents, 7,693,959, is dated April 2010, but it’s a continuation of several previous applications — its age is nicely demonstrated by things like its reference to the Netscape Navigator browser and references to ADC vendors that ceased to exist years ago. It seems to be a sort of generic basic CDN patent, but it governs the use of replication to distribute objects, not just caching.
The second Akamai patent, 7,293,093, essentially covers the “whole site delivery” technique.
The oldest of the patents, 6,820,133, was obtained by Akamai when it acquired Netli. It essentially covers the basic ADN technique of optimized communication between two server nodes in a network. I don’t know how defensible this patent is; there are similar ones held by a variety of ADC and WOC vendors who use such techniques.
My expectation is that the patent issues won’t affect the CDN market in any significant way, and the lawsuit is likely to drag on forever. Fighting the lawsuit will cost money, but Cotendo has deep-pocketed backers in Sequoia and Benchmark. It will obviously fuel speculation that Akamai will try to buy them, but I don’t think that it’s going to be a simple way to eliminate competition in this market, especially given what I’ve seen of the roadmaps of other competitors (under NDA). Dynamic acceleration is too compelling to stay a monopoly (and incidentally, the ex-Netli executives are now free of their competitive lock-up), and the only reason it’s taken this long to arrive was that most CDNs were focused on the huge, and technically far easier, opportunity in video delivery.
It’s an interesting signal that Akamai takes Cotendo seriously as a competitor, though. In my opinion, they should; since early this year, I’ve regularly seen Cotendo as a bidder in the deals that I look at. The AT&T deal is just a sweetener — and since AT&T will apply your corporate enterprise discount to a Cotendo deal, that means that I’ve got enterprise clients who sometimes look at 70% discounts on top of an already less-expensive price quote. And Akamai’s problem is that Cotendo isn’t just, as Akamai alleges in its lawsuit, a low-cost competitor; Cotendo is trying to innovate and offer things that Akamai doesn’t have.
Competition is good for the market, and it’s good for customers.
Netflix, Akamai, and video delivery performance
Dan Rayburn’s blog post about the Akamai/Netflix relationship seems to have set off something of a firestorm, and I’ve been deluged by inquiries from Gartner Invest clients about it.
I do not want to add fuel to the fire by speculating on anything, and I have access to confidential information that prevents me from stating some facts as I know them, so for blog purposes I will stick to making some general comments about Akamai’s delivery performance for long-tail video content.
From the independent third-party testing that I’ve seen, Akamai delivers good large-file video performance, but their performance is not always superior to other major CDNs (reach aside; they’re going to solidly trounce a rival who doesn’t have footprint in a given international locale, for instance). Actual performance depends on a number of factors, including the specifics of a customer’s deal with Akamai and the way that the service is configured. The testing location also matters. The bottom line is that it’s very competitive performance, but it’s not, say, head and shoulders above competitors.
Akamai has, for the last few years, had a specific large-file delivery service designed to cache just the beginning of a file at the very edge, with the remainder delivered from the middle tier to the edge server, thus eliminating the obvious cache storage issues involved in, say, caching entire movies, while still preserving decent cache hit ratios. However, this has been made mostly irrelevant in video delivery by the rise in popularity of adaptive streaming techniques — if you’re thinking about Akamai’s capabilities in the Netflix or similar contexts, you should think of this as an adaptive streaming thing and not a large file delivery thing.
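For concreteness, here is the prefix-caching idea in miniature; everything in this sketch is hypothetical, with the middle tier faked as an in-memory dict and a comically small prefix size.

```python
PREFIX_BYTES = 4  # tiny for the demo; real prefixes would be megabytes

MID_TIER = {"movie.mp4": b"0123456789abcdef"}  # stand-in for mid-tier storage
edge_cache = {}                                # object key -> cached prefix

def mid_tier_range(key, start):
    # Stand-in for a ranged fetch (e.g., HTTP "Range: bytes=<start>-")
    # from the middle tier to the edge server.
    return MID_TIER[key][start:]

def serve(key):
    if key not in edge_cache:
        edge_cache[key] = MID_TIER[key][:PREFIX_BYTES]  # cache just the start
    yield edge_cache[key]                    # begin playback from the edge
    yield mid_tier_range(key, PREFIX_BYTES)  # stream the remainder through

print(b"".join(serve("movie.mp4")))  # b'0123456789abcdef'
```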
In adaptive streaming (pioneered by Move Networks), a video is chopped up into lots of very short chunks, each just a few seconds long, and usually delivered via HTTP. The end-consumer’s video player takes care of assembling these chunks. Based on the delivery performance of each chunk, the video player decides whether it wants to upshift or downshift the bitrate / quality of the video in real time, thus determining the URL of the next video chunk. This technique can also be used to switch sources, allowing, for instance, the CDN to be changed in real time based on performance, as is offered by the Conviva service. Because the video player software in adaptive streaming is generally instrumented to pay attention to performance, there’s also the possibility that it may send back performance information to the content owner, thus enabling them to get a better understanding of what the typical watcher is experiencing. Using an adaptive technique, your goal is generally to get the end-user the most “upshift” possible (i.e., sustain the highest bitrate possible), and, if you can, have it delivered via the least expensive source.
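A simplified sketch of that player-side upshift/downshift decision is below; the bitrate ladder, thresholds, and URL scheme are invented for illustration, not taken from any particular player.

```python
BITRATES_KBPS = [400, 800, 1600, 3200]  # hypothetical bitrate ladder

def next_bitrate(current_kbps, throughput_kbps):
    """Choose the next chunk's bitrate from the last chunk's throughput."""
    idx = BITRATES_KBPS.index(current_kbps)
    if throughput_kbps > current_kbps * 1.5 and idx + 1 < len(BITRATES_KBPS):
        return BITRATES_KBPS[idx + 1]   # comfortable headroom: upshift
    if throughput_kbps < current_kbps and idx > 0:
        return BITRATES_KBPS[idx - 1]   # can't sustain this rate: downshift
    return current_kbps

def chunk_url(video_id, chunk_number, bitrate_kbps):
    # The chosen bitrate selects which rendition's chunk gets fetched next.
    return f"http://cdn.example.com/{video_id}/{bitrate_kbps}k/{chunk_number}.ts"

rate = 800
for n, measured in enumerate([2000, 2600, 900, 1200]):  # simulated throughput
    rate = next_bitrate(rate, measured)
    print(chunk_url("movie123", n, rate))
```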
When adaptive streaming is in use, the first chunk of a video is now just a small object, easily cached on just about any CDN. Obviously, cache hit ratios still matter, and you will generally get higher cache hit ratios with a megaPOP approach (like Limelight’s) than you will with a highly distributed approach (like Akamai’s), although that starts to get more complex when you add in server-side pre-fetching, deliberately serving content off the middle tier, and the like. So now your performance starts to boil down to footprint, cache hit ratio, algorithms for TCP/IP protocol optimization, and server and network performance — how quickly and seamlessly can you deliver lots and lots of small objects? Third-party testing generally shows that Akamai benchmarks very well when it comes to small object delivery — but again, specific relative CDN performance for a specific customer is always unique.
In the end, it comes down to price/performance ratios. Netflix has clearly decided that they believe Level 3 and Limelight deliver better value for some significant percentage of their traffic, at this particular instant in time. Given the incredibly fierce competition for high-volume deals, multi-vendor sourcing, and the continuing fall in CDN prices, don’t think of this as an alteration in the market, or anything particularly long-term for the fate of the vendors in question.
Google’s mod_pagespeed and Cotendo
Those of you who are Gartner clients know that in the last year, my colleague Joe Skorupa and I have become excited about the emergence of software-based application acceleration via page optimization approaches, as exemplified by vendors like Aptimize and Strangeloop Networks. (Clients: See Cool Vendors in Enterprise Networking, 2010.) This approach to acceleration enhances the performance of Web-based content and applications, by automatically optimizing the page output of webservers according to the best practices described in books like High Performance Web Sites by Steve Souders. Techniques of this sort include automatically combining JavaScript files (which reduces overall download time), optimizing the order of the scripts, and rewriting HTML so that the browser can display the page more quickly.
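As a flavor of what these rewrites involve, here is a bare-bones sketch of script combining. It uses regexes for brevity and inlines the combined code rather than emitting one combined, cacheable external file, which is what real implementations (including mod_pagespeed) do, along with proper HTML parsing and far more edge-case handling; the script contents here are an in-memory stand-in.

```python
import re

SCRIPTS = {  # stand-in for fetching each referenced JavaScript file
    "a.js": "function a(){}",
    "b.js": "function b(){}",
}

def combine_scripts(html):
    srcs = re.findall(r'<script src="([^"]+)"></script>\s*', html)
    if len(srcs) < 2:
        return html                   # nothing to combine
    combined = "\n".join(SCRIPTS[s] for s in srcs)  # one download, not many
    html = re.sub(r'<script src="[^"]+"></script>\s*', "", html)
    return html.replace("</head>", f"<script>{combined}</script></head>")

page = ('<html><head>'
        '<script src="a.js"></script>'
        '<script src="b.js"></script>'
        '</head><body></body></html>')
print(combine_scripts(page))
```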
Page optimization techniques can often provide significant acceleration boosts (2x or more) even when other acceleration techniques are in use, such as a hardware ADC with acceleration module (offered as add-ons by F5 and Citrix NetScaler, for instance), or a CDN (including CDN-based dynamic acceleration). Since early this year, we’ve been warning our CDN clients that this is a critical technology development to watch and to consider adopting in their networks. It’s a highly sensible thing to deploy on a CDN, for customers doing whole site delivery; the CDN offloads the computational expense of doing the optimization (which can be significant), then caches the result (and potentially distributes the optimized pages to other nodes on the CDN). That gets you excellent, seamless acceleration for essentially no effort on the part of the customer.
Google’s Page Speed project provides free and open-source tools designed to help site owners implement these best practices. Google has recently released an open-source module, called mod_pagespeed, for the popular Apache webserver. This essentially creates an open-source competitor to commercial vendors like Aptimize and Strangeloop. Add the module into your Apache installation, and it will automatically try to optimize your pages for you.
Now, here’s where it gets interesting for CDN watchers: Cotendo has partnered with Google. Cotendo is deploying the Google code (modified, obviously, to run on Cotendo’s proxy caches, which are not Apache-based), in order to be able to offer the page optimization service to its customers.
I know some of you will automatically be asking now, “What does this mean for Akamai?” The answer to that is, “Losing speed trials when it’s Akamai DSA vs. Cotendo DSA + Page Speed Automatic, until they can launch a competing service.” Akamai’s acceleration service hasn’t changed much since the Netli acquisition in 2007, and the evolution in technology here has to be addressed. Page optimization plus TCP optimization is generally much faster than TCP optimization alone. That doesn’t just have pricing implications; it has implications for the competitive dynamics of the space, too.
I fully expect that page optimization will become part of the standard dynamic acceleration service offerings of multiple CDNs next year. This is the new wave of innovation. Despite the well-documented nature of these best practices, organizations still frequently ignore them when coding — and even commercial packages like SharePoint ignore them (SharePoint gets a major performance boost when page optimization techniques are applied, and there are solutions like Certeon that are specific to it). So there’s a very broad swathe of customers that can benefit easily from these techniques, especially since they provide nice speed boosts even in environments where the network latency is pretty decent, like delivery within the United States.
Next round, Akamai vs. Limelight
In CDN news this past weekend, a judge has overturned the jury verdict in the Akamai vs. Limelight patent infringement case. Akamai has said it intends to appeal.
The judge cited Muniauction v. Thomson Corp. as the precedent for a judgment as a matter of law, which basically says that if you have a method claim in a patent that involves steps performed by multiple parties, you cannot claim direct infringement unless one party exercises control over the entire process.
I have not read the court filing yet, but based on the citation of precedent, it’s a good guess that the CDN patent methods at issue involve steps beyond the provider’s control, and therefore fall under this precedent. Unexpected, at least to me, and, for the IP law watchers among you, rather fascinating, since in our increasingly federated, distributed, outsourced IT world, this would seem to raise a host of intellectual property issues for multi-party transactions, which are in some ways inherent to web services.