Blog Archives

Instart Logic launches a new kind of acceleration service

There have been three core techniques for accelerating content and application delivery over the Internet — caching (“classic” CDN), network optimization (think protocol tricks, like F5 Web Application Accelerator on the hardware side, or Akamai DSA on the service side), and front-end optimization (FEO, think content re-write, like Aptimize/Riverbed or Strangeloop/Radware on the software side, or Blaze.io/Akamai or Acceloweb/Limelight on the service side).

Now, with the launch of Instart Logic, there’s a fourth technique that I don’t yet have a name for. In spirit, it’s probably most similar to a SoftWOC, but in this case, the client endpoint is the browser, and the symmetric remote endpoint is the CDN server. The underlying techniques also differ from typical SoftWOC techniques, as far as I know.

From the perspective of an Instart Logic customer, this is a dynamic acceleration service that, from a deployment perspective, looks much like a CDN. For most customers, it would entirely replace a traditional CDN (rather than being additive) — i.e., they would buy this instead of buying Akamai DSA or a similar service. Note that this is a performance play, not a price play — Instart Logic expects to be in the ballpark of typical dynamic acceleration pricing, and that performance carries a market premium.

The techniques used in the service are intended to dramatically improve load times, especially on congested networks; this is particularly useful in mobile, but it is not mobile-specific. As with FEO, the goal is to allow the end-user to quickly see and interact with the content while the remainder of the page is still downloading.

On the client side, there’s what they call a “NanoVisor” — an HTML5-based thin virtualization layer that runs in the browser. If Instart Logic is full-proxying the customer’s site, the NanoVisor code can simply be injected; otherwise the customer can insert the code into their site. It requires no other changes to the customer’s site. The NanoVisor provides intelligence about the end-user and serves as the client endpoint for the optimization.
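
For the full-proxy case, injection can be as simple as splicing a script tag into the HTML as it passes through the proxy. A minimal sketch in Python follows; the “nanovisor.js” name and CDN hostname are invented for illustration, since Instart Logic hasn’t published its actual mechanics.

    SNIPPET = '<script src="https://cdn.example.net/nanovisor.js" async></script>'

    def inject_client(html: str) -> str:
        """Splice the client snippet in right after <head>."""
        idx = html.lower().find("<head>")
        if idx == -1:
            return SNIPPET + html  # no <head> found: just prepend
        cut = idx + len("<head>")
        return html[:cut] + SNIPPET + html[cut:]

    # A non-proxied customer would paste the same tag into their pages by hand.
    print(inject_client("<html><head><title>Shop</title></head><body></body></html>"))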

On the server side, the “AppSequencer” analyzes page content, and it fragments and orders objects that are then streamed to the NanoVisor. It does large-scale analysis of usage patterns, and it predictively sends things based on the responses that it’s seen before. There are compression and network optimization techniques, as well as implicit caching.
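
As a rough illustration of the sequencing idea (and only that; the real heuristics aren’t public), imagine tracking which objects past visitors needed before first paint, and streaming those first. A toy sketch, with all names hypothetical:

    from collections import defaultdict

    # object URL -> how often past visitors needed it before first paint
    first_paint_hits = defaultdict(int)

    def record_session(urls_needed_early):
        for url in urls_needed_early:
            first_paint_hits[url] += 1

    def sequence(page_objects):
        """Order objects so historically render-critical ones stream first."""
        return sorted(page_objects, key=lambda url: -first_paint_hits[url])

    record_session(["/css/main.css", "/js/app.js"])
    record_session(["/css/main.css"])
    print(sequence(["/img/hero.jpg", "/js/app.js", "/css/main.css"]))
    # -> ['/css/main.css', '/js/app.js', '/img/hero.jpg']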

Like other recent innovators in the CDN space, Instart Logic is predominantly a software company. While they do have servers of their own, they are also using a variety of cloud IaaS providers for capacity. They’re also using Dyn for DNS.

Instart Logic has raised a significant amount of money, almost purely from top-tier VCs — $26 million to date. I think their technology is very promising, which probably means they’ll get a bit of time to prove themselves out and then they’ll get bought by one of the CDNs looking to get an edge on the competition, or maybe even an ADC or WOC vendor.

Instart Logic’s demos are impressive, and they’ve got paying customers at this point, although obviously they’re newly-launched. While it always takes time to build trust in this industry, at this point they’re worth checking out, and I’ve been referring Gartner clients to them ever since I was briefed by them while they were still in stealth mode, a few months back. They’re potentially an excellent fit for customers who are looking for something beyond what DSA-style network optimization offerings can do, but either do not want to do FEO, have reached the limits of what FEO can offer them, or simply want to explore alternatives.

Amazon CloudFront gets whole site delivery and acceleration

For months, there have been an abundance of rumors that Amazon was intending to enter the dynamic site acceleration market; it was the logical next step for its CloudFront CDN. Today, Amazon released a set of features oriented towards dynamic content, described in blog posts from Amazon’s Jeff Barr and Werner Vogels.

When CloudFront introduced custom origins (as opposed to the original CloudFront, which required you to use S3 as the origin), and dropped minimum TTLs down to zero, it effectively edged into the “whole site delivery” feature set that’s become mainstream for the major CDNs.

With this latest release, whole site delivery is much more of a reality — you can have multiple origins so you can mix static and dynamic content (which are often served from different hostnames, i.e., you might have images.mycompany.com serving your static content, but www.mycompany.com serving your dynamic content), and you’ve got pattern-matching rules that let you define what the cache behavior should be for content whose URL matches a particular pattern.
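
To make this concrete, here’s an abridged sketch of what such a configuration might look like, loosely modeled on CloudFront’s DistributionConfig structure; the real API requires many more fields, and the hostnames are placeholders.

    distribution = {
        "Origins": [
            {"Id": "static",  "DomainName": "images.mycompany.com"},
            {"Id": "dynamic", "DomainName": "www.mycompany.com"},
        ],
        "CacheBehaviors": [
            # First matching pattern wins; static assets cache for a day.
            {"PathPattern": "*.jpg", "TargetOriginId": "static",  "MinTTL": 86400},
            {"PathPattern": "*.css", "TargetOriginId": "static",  "MinTTL": 86400},
            # Everything else is treated as dynamic: TTL of zero, pass through.
            {"PathPattern": "*",     "TargetOriginId": "dynamic", "MinTTL": 0},
        ],
    }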

The “whole site delivery” feature set is important, because it hugely simplifies CDN configuration. Rather than having to go through your site and change its URL references to the CDN (long-time CDN watchers may remember that Akamai in the early days would have customers “Akamaize” their site using a tool that did these URL rewrites), the CDN is smart — it just goes to the origin and pulls things, and it can do so dynamically (so, for instance, you don’t have to explicitly publish to the CDN when you add a new page, image, etc. to your website). It gets you closer to simply being able to repoint the URL of your website to the CDN and having magic happen.
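
The mechanism underneath whole site delivery is a pull-through cache: on a miss, the edge fetches from the origin on demand, so there’s nothing to rewrite or explicitly publish. A deliberately naive sketch (a real CDN also honors Cache-Control, handles expiry, and so on):

    import urllib.request

    ORIGIN = "https://www.mycompany.com"  # placeholder origin
    cache = {}

    def serve(path: str) -> bytes:
        """Serve from cache; on a miss, pull from the origin on demand."""
        if path not in cache:
            with urllib.request.urlopen(ORIGIN + path) as resp:
                cache[path] = resp.read()
        return cache[path]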

The dynamic site acceleration features — the actual network optimization features — that are being introduced are much more limited. They basically amount to TCP connection multiplexing, TCP connection persistence/pooling, and TCP window size optimization, much like Cotendo in its very first version. At this stage, it’s not going to be seriously competing against Akamai’s DSA offering (or CDNetworks’ similar DWA offering), but it might have appeal against EdgeCast’s ADN offering.
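
For a feel of what connection persistence and pooling buy, here’s the client-side analogue of what a CDN does between its edge and the origin, sketched with Python’s requests library: reuse a pool of warm, keep-alive TCP connections instead of paying handshake and slow-start costs on every request. The hostname is a placeholder.

    import requests
    from requests.adapters import HTTPAdapter

    session = requests.Session()
    # Keep up to 10 pooled keep-alive connections per origin host.
    session.mount("https://", HTTPAdapter(pool_connections=10, pool_maxsize=10))

    for path in ("/a", "/b", "/c"):
        # Requests after the first reuse the same warmed-up TCP connection.
        session.get("https://www.mycompany.com" + path, timeout=5)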

However, I would expect that, like everything else Amazon releases, there will be frequent updates that introduce new features. The acceleration techniques are well known at this point, and Amazon would logically add bidirectional (symmetric POP-to-POP) acceleration as the next big feature, in addition to implementing the other common optimizations (dynamic congestion control, TCP “FastRamp”, etc.).

What’s important here: CloudFront dynamic acceleration costs the same as static delivery. For US delivery, that starts at about $0.12/GB and goes down to below $0.02/GB for high volumes. That’s easily somewhere between one-half and one-tenth of the going rate for dynamic delivery. The delta is even greater if you look at a dynamic product like Akamai WAA (or its next generation, Terra Alta), where enterprise applications that might do all of a TB of delivery a month typically cost $6000 per app per month — whereas a TB of CloudFront delivery is $120. Akamai is pushing the envelope in feature development, and arguably those price points are so divergent that you’re talking about different markets. But low price points also expand a market: at that kind of price point, the decision happens at a totally different level, and to an enterprise, it might as well be free.
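
The back-of-the-envelope math, using the prices cited above, with CloudFront tiering simplified to the first-tier US rate:

    gb_per_month = 1000                # roughly a TB of delivery
    cloudfront = gb_per_month * 0.12   # first-tier US rate, $/GB
    waa_per_app = 6000                 # typical per-app monthly price cited
    print(f"CloudFront: ${cloudfront:,.0f}/mo vs. WAA-style: ${waa_per_app:,.0f}/mo")
    # CloudFront: $120/mo vs. WAA-style: $6,000/mo -- a 50x difference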

Give CloudFront another year of development, and there’s a high probability that it can become a seriously disruptive force in the dynamic acceleration market. The price points change the game, making it much more likely that companies, especially SaaS providers (many of whom use EC2, and AWS in general), who have been previously reluctant to adopt dynamic acceleration due to the cost, will simply get it as an easy add-on.

There is, by the way, a tremendous market opportunity out there for a company that delivers value-added services on top of CloudFront — which is to say, the professional services to help customers integrate with it, ongoing expert technical support on a day-to-day basis, and a great user portal that provides industry-competitive reporting and analytics. CloudFront has reached the point where enterprises, large mainstream media companies, and other users of Akamai, Limelight, and Level 3, who feel they need ongoing support of complex implementations and a great toolset for operating those CDN implementations intelligently, are genuinely interested in taking a serious look at CloudFront as an alternative. But there’s no company that I know of that provides the services and software that would bridge the gap between CloudFront and a traditional CDN implementation.

Akamai buys Cotendo

Akamai is acquiring Cotendo for a purchase price of $268 million, somewhat under the rumored $300 million that had been previously reported in the Israeli press. To judge from the stock price, the acquisition is being warmly received by investors (and for good reason).

The acquisition only impacts the website delivery/acceleration portion of the CDN market — it has no impact on the software delivery and media delivery segments. The acquisition will leave CDNetworks as the only real alternative for dynamic site acceleration that is based on network optimization techniques (EdgeCast does not seem to have made the technological cut thus far). Level 3 (via its Strangeloop Networks partnership) and Limelight (via its Acceloweb acquisition) have chosen to go with front-end optimization techniques instead for their dynamic acceleration. Obviously, AT&T is going to have some thinking to do, especially since application-fluent networking is a core part of its strategy for cloud computing going forward.

I am not going to publicly blog a detailed analysis of this acquisition, although Gartner clients are welcome to schedule an inquiry to discuss it (thus far the questions are coming from investors and primarily have to do with the rationale for the purchase price, technology capabilities, pricing impact, and competitive impact). I do feel compelled to correct two major misperceptions, though, which I keep seeing all over the place in press quotes from Wall Street analysts.

First, I’ve heard it claimed repeatedly that Cotendo’s technology is better than Akamai’s. It’s not, although Cotendo has done some important incremental engineering innovation, as well as some better marketing of specific aspects (for instance, their solution around mobility). I expect that there will be things that Akamai will want to incorporate into their own codebase, naturally, but this is not really an acquisition that is primarily being driven by the desire for the technology capabilities.

Second, I’ve also heard it claimed repeatedly that Cotendo delivers better performance than Akamai. This is nonsense. There is a specific use case in which Cotendo may deliver better performance — low-volume customers with low cache hit ratios due to infrequently-accessed content, as can occur with SaaS apps, corporate websites, and so on. Cotendo pre-fetches content into all of its POPs and keeps it there regardless of whether or not it’s been accessed recently. Akamai flushes objects out of cache if they haven’t been accessed recently. This means that you may see Akamai cache hit ratios that are only in the 70%-80% range, especially in trial evaluations, which is obviously going to have a big impact on performance. Akamai cache tuning can help some of those customers substantially drive up cache hits (for better performance, lower origin costs, etc.), although not necessarily enough; cache hit ratios have always been a competitive point that other rivals, like Mirror Image, have hammered on. It has always been a trade-off in CDN design — if you have a lot more POPs you get better edge performance, but now you also have a much more distributed cache and therefore lower likelihood of content being fresh in a particular POP.
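
A toy simulation makes the trade-off visible: with infrequently-accessed content, an LRU cache that evicts cold objects behaves very differently from a pinned cache that keeps the whole corpus resident. The numbers below are purely illustrative.

    import random
    from collections import OrderedDict

    random.seed(1)
    trace = [f"/obj{random.randint(0, 99)}" for _ in range(1000)]

    def hit_ratio(capacity):
        lru, hits = OrderedDict(), 0
        for url in trace:
            if url in lru:
                hits += 1
                lru.move_to_end(url)
            lru[url] = True
            if len(lru) > capacity:
                lru.popitem(last=False)  # evict the least recently used
        return hits / len(trace)

    print(f"small LRU cache:  {hit_ratio(20):.0%} hits")   # cold objects churn out
    print(f"pinned, all fits: {hit_ratio(100):.0%} hits")  # everything stays resident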

(Those are the two big errors that keep bothering me. There are plenty of other minor factual and analysis errors that I keep seeing in the articles that I’ve been reading about the acquisition. Investors, especially, seem to frequently misunderstand the CDN market.)

Performance can be a disruptive competitive advantage

All of us are used to going to travel sites, especially for airline tickets, and waiting a while for the appropriate results to be extracted and displayed to us. I recently saw Google Flight Search for the first time and was astonished by its raw speed — essentially completely instant.

I frequently talk to customers about acceleration solutions, and discuss the business value of performance. Specifically, that means looking at business metrics that measure the success of a website or application — time spent on your site, conversion rate, shopping basket value, page views, ad views, transactions processed, employee productivity, decline in call center volume, and so forth. You compare the money associated with these metrics against the cost of the solutions to arrive at a comparative ROI.
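
The arithmetic itself is simple; the hard part is attributing the lift. A sketch with entirely made-up numbers:

    monthly_visits = 500_000
    conversion_lift = 0.002   # +0.2 points of conversion (hypothetical)
    avg_order_value = 80.0    # dollars (hypothetical)
    service_cost = 4_000.0    # dollars/month (hypothetical)

    added_revenue = monthly_visits * conversion_lift * avg_order_value
    print(f"lift ${added_revenue:,.0f}/mo vs. cost ${service_cost:,.0f}/mo "
          f"-> ROI {added_revenue / service_cost:.1f}x")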

The business value of performance is usually tied to industry in a narrow and specific way, because users have a particular set of expectations and needs. For instance, for travel sites, a certain amount of performance is necessary to make the site usable, but users are conditioned to long waits for searches, making their overall performance expectations relatively low. Travel sites usually discover that improvements in general site responsiveness improve the user experience and increase revenue per site visit — but only up to a certain point, at which it plateaus: once the site is responsive enough that users aren’t discouraged from using it, they’re going to buy what they came to buy.

Google Flight Search proves that you can “break through” the performance ceiling and entirely change the user experience. This is not the kind of incremental improvement you can achieve through acceleration techniques; rather, it’s a core change to the thing that is slowest, which is generally the back-end database and business logic, not the network. That can be a genuinely disruptive competitive advantage.

I typically ask my CDN clients, “What are the factors that make your site slow?” In many cases, they need to do something that goes beyond what edge caching or even network optimization (dynamic acceleration) can achieve. They need to reduce their page weight, or write better pages (and may benefit from front-end optimization techniques), or to improve the back-end responsiveness. Acceleration techniques are often used to band-aid a core problem with performance, just like CDN professional services to make a site cacheable are often used to band-aid a core problem with site structure. At some point in time it becomes more cost-effective to fix the core problem.

Too few businesses design their websites and applications with speed in mind.

Cotendo’s potential acquisition

Thus far, merger-watchers eyeing the rumored bidding for Cotendo seem to be asking: Why this high a valuation compared to the rest of the CDN industry? Who are the potential suitors and why? What if anything does Cotendo offer that other CDNs don’t? How do the various dynamic offerings in the market compare? Who else might be ripe for acquisition? What is the general trend of M&A activity in the CDN industry going forward? Do I agree with Dan Rayburn’s commentary on this deal?

However, for various reasons, I am not currently publicly commenting further on Twitter or my blog, or really in general with non-Gartner-clients, regarding the potential acquisition of Cotendo by Akamai (or AT&T, or Juniper, or anyone else who might be interested in buying them).

If you are a Gartner client, and you want to discuss the topic, you may request a written response or a phone call through the usual mechanisms for inquiry.

Yottaa, 4th-gen CDNs, and acceleration for the masses

I’d meant to blog about Yottaa when it launched back in October, because it’s one of the most interesting entrants to the CDN market that I’ve seen in a while.

Yottaa is a fourth-generation CDN that offers site analytics, edge caching, a little bit of network optimization, and front-end optimization.

While CDNs of earlier generations have heavily capital-intensive models that require owning substantial server assets, fourth-generation CDNs have a software-oriented model and own limited if any delivery assets themselves. (See a previous post for more on 4th-gen CDNs.)

In Yottaa’s case, it uses a large number of cloud IaaS providers around the world in order to get a global server footprint. Since these resources can be obtained on-demand, Yottaa can dynamically match its capacity to customer demand, rather than pouring capital into building out a server network. (This is a critical new market capability in general, because it means that as the global footprint of cloud IaaS grows, there can be far more competition in the innovative portion of the CDN market — it’s far less expensive to start a software company than a company trying to build out a competitive CDN footprint.)

There are three main things that you can do to speed up the delivery of a website (or Web-based app): you can do edge caching (classic static content CDN delivery), you can do network optimization (the kind of thing that F5 has in its Web App Accelerator add-on to its ADC, or, as a service, something like Akamai DSA), or you can do front-end optimization, sometimes known as Web content optimization or Web performance optimization (the kind of thing that Riverbed’s Aptimize does). Gartner clients only: See my research note “How to Accelerate Internet Websites and Applications” for more on acceleration techniques.
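
For a flavor of what FEO actually does to a page, here are two trivial rewrites sketched in Python with BeautifulSoup; real FEO products apply dozens of far more sophisticated transformations.

    from bs4 import BeautifulSoup  # pip install beautifulsoup4

    def optimize(html: str) -> str:
        soup = BeautifulSoup(html, "html.parser")
        for script in soup.find_all("script", src=True):
            script["defer"] = ""       # stop scripts from blocking render
        for img in soup.find_all("img"):
            img["loading"] = "lazy"    # defer offscreen image fetches
        return str(soup)

    print(optimize('<script src="app.js"></script><img src="hero.jpg">'))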

Yottaa does all three of these things, although its network optimizations are much more minimal than those of a typical DSA service. It does so in an easy-to-configure way that’s readily consumable by your average webmaster. And the entry price point is $20/month, with an instant free trial you can sign up for online — here’s the consumerization of CDN services for you.

When I calculated prices at volume using the estimates that Yottaa gave me, I realized that it’s nowhere near as cheap as it might first look — it’s comparable to Akamai DSA pricing. But it’s the kind of thing that pretty much anyone who cares about performance could add to their site. It brings acceleration to the mass market.

Sure, your typical Akamai customer is probably not going to be consuming this service just yet. But give it some time. This is the new face of competition in the CDN market — companies that are focused entirely on software development (possibly using low-cost labor: Yottaa’s engineering is in Beijing), relying on somebody else’s cloud infrastructure rather than burning capital building a network.

Three years ago, I asked in my blog: Are CDNs software or infrastructure companies? The new entrants, which also include folks like CloudFlare, come down firmly on the software side.

Recently, one of my Gartner Invest clients — a buy-side analyst who specializes in infrastructure software companies — pointed out to me that Akamai’s R&D spending is proportionately small compared to the other companies in that sector, while it is spending huge amounts of capex building out more network capacity. I would previously have put Akamai firmly on the software side of the “CDNs as software or infrastructure” question. It’s interesting to think about the extent to which that will remain true going forward.

The Global Internet Speedup Initiative

The rather prosaically, if accurately and precisely, named IETF draft specification, “Client Subnet in DNS Requests” (“edns-client-subnet”), has gotten some breathless marketing spin as the Global Internet Speedup Initiative.

I blogged about this about a year and a half ago: “Google’s DNS protocol extension and CDNs”. See that post for a deeper analysis. (I also previously blogged about the problem with using DNS as the CDN vantage point.)

My opinion on this hasn’t changed. In the intervening time, various DNS service providers and CDN providers have contributed to the draft, and the end result seems to be pretty reasonable. The extension solves a common problem for the CDNs — returning appropriately close CDN servers to an end-user who is using a DNS resolver that’s not close to his own location (common for users on some ISP networks, along with those who use resolvers from OpenDNS, Neustar, etc., and potentially for some users in enterprise networks).
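
For the curious, here’s what attaching a client subnet to a query looks like in practice, sketched with a recent version of dnspython; the addresses are documentation placeholders.

    import dns.edns
    import dns.message
    import dns.query

    # Attach the client's /24 (not their full IP) to the query.
    ecs = dns.edns.ECSOption("203.0.113.0", 24)
    query = dns.message.make_query("www.mycompany.com", "A",
                                   use_edns=0, options=[ecs])
    response = dns.query.udp(query, "8.8.8.8", timeout=5)
    # A participating authoritative server can now pick CDN nodes close to
    # 203.0.113.0/24 rather than close to the recursive resolver.
    print(response.answer)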

But I am impressed with the amount of hype that the vendors involved have managed to generate about a fiddly little technical detail that ordinary users have probably never thought about and shouldn’t ever really need to think about.

What makes Akamai sticky?

There’s one thing in particular that tends to make Akamai customers “sticky” — the amount the customer uses professional services. The more professional services a customer consumes from Akamai, the less likely it is they’ll ever switch CDNs. In short: The more of a pain it’s been for them to integrate with Akamai’s CDN (usually due to the customer having a complex site that violates best practices related to content cacheability), and the more they have to use recurring professional services every time they update their site, the less likely it is that they’re going to move to another CDN. That’s for two reasons — one, because it’s difficult and expensive to do the up-front work to get the site onto another CDN, and two, because most other CDNs don’t like to do extensive professional services on a recurring basis. That makes the use of professional services a double-edged sword, since it’s not really a business with great margins, and you’re vulnerable if the customer eventually goes and builds a site that isn’t a great big hairy mess.

But there’s one Akamai product (delivered as a value-added additional service) that’s currently sufficiently compelling that customers and prospects who want it, won’t consider any other CDN that can’t offer the same. (And since it’s currently unique to Akamai, that means no competition, always a boon in a market where pricing is daily warfare.) I’m suddenly seeing it frequently quoted, which makes it likely that it’s a significant sales push, though it’s not a brand-new product. It’s a very effective attach.

Can you guess what it is?

(You may feel free to speculate on my blog, but if you want the answer, and you’re a Gartner client, make an inquiry request through the usual means.)

AT&T’s CDN re-launch

(This is part of a series of “catch-up” posts of announcements that I’ve wanted to comment on but didn’t previously find time to blog about.)

AT&T recently essentially re-launched its CDN — new technology, new branding, new footprint.

AT&T’s existing CDN product, called iCDS, has had limited success in the marketplace. AT&T has been a low-cost competitor, but its deal success in the high-volume market has been low — Level 3, for instance, has offered prices just as good or better on a more featureful, higher-performance service. With other competitors, notably Akamai and Limelight, also willing to compete in the low-cost high-volume market, it’s been difficult for AT&T to compete successfully on price (although they certainly helped the general decline in prices). We’ve seen them get good pick-up with CDN added to a managed hosting contract — there are plenty of managed hosting customers happy to tack $1000 or $2000 worth of CDN onto a deal. (We’ve also seen this with other hosters that casually quote a little bit of CDN along with managed hosting deals; it’s not just an AT&T phenomenon.) And we’ve seen them pitch “hey, you should use us if you want to reach iPhone customers”, but that’s too narrow a value proposition for most content providers to consider right now.

Previously, AT&T had been insistent on developing all of its CDN technology in-house. AT&T has a long and proud “built here and only here” tradition, especially with AT&T Labs, but it simply hasn’t worked out well for its CDN, especially since anything that AT&T builds in portal technology tends to look like it was built by hard-core geeks for other hard-core geeks — not the slick, user-friendly, Web 2.0 interfaces that you’ll see coming out of many other service providers these days. That left everything in iCDS related to how customers actually interface with the CDN to get something useful done, including configuration and analytics, pretty sub-par relative to the market.

AT&T has now done something that would probably be smart for other carriers to emulate — buying CDN technology rather than developing it in-house. There are now plenty of vendors to choose from — Cisco, Juniper (Ankeena), Alcatel-Lucent (Velocix), EdgeCast, 3Crowd, JetStream, etc. — and although these solutions vary wildly in quality and completeness, I’m still bemused by the number of carriers whose engineers are really jonesing to build their own in-house technology. In AT&T’s case, they’ve selected EdgeCast’s software solution — a nice feather in the cap for EdgeCast, definitely, given the kind of scrutiny that AT&T gives solutions that are going to be deployed in its network. (Carrier CDNs are very much a hot trend at the moment, although they’re a hot trend relative to the otherwise glacial speed at which carriers do anything.)

AT&T is building out a new footprint of servers running the EdgeCast software. They’ll operate both the old and new CDNs for some time — existing iCDS customers will continue to run on the existing iCDS platform and footprint, and new customers will go onto the new platform. That means it’s going to take some time to assess the real performance of the new CDN, as the POPs are being rolled out gradually. The new footprint will be similar, but not identical, to the old footprint.

However, I don’t think the launch of a new AT&T CDN is anywhere near as significant for the market as AT&T’s continued success in reselling Cotendo. The AT&T CDN itself is simply part of the already-commoditized market for high-volume delivery — the re-launch will likely return them to real competitiveness, but doesn’t change any fundamental market dynamics.

Citrix invests in Cotendo

On the heels of the announcement of an Akamai/Riverbed partnership, Citrix has made an investment in Cotendo, and announced the development of an integrated ADC/CDN solution.

This is a different sort of deal than Akamai/Riverbed. Whereas that deal addresses a particular use case — enterprises that want to accelerate a SaaS solution when the SaaS provider isn’t cooperating — the Citrix/Cotendo deal is intended to enhance dynamic acceleration by integrating with an on-premises ADC (in this case, a Citrix NetScaler, of course).

Back during the Netli days, Netli actually coupled their service, in most cases, with a lightweight on-premises ADC to ensure first-mile acceleration as well. This was phased out when Netli was acquired by Akamai, which did not want to have to deal with CPE (customer premises equipment). While there had been talks of partnerships with ADC vendors, the Akamai acquisition essentially killed them, and in the four years that have passed, this excellent, even vital, idea has essentially lain fallow.

Optimal acceleration of content requires end-to-end solutions — optimization of the content itself, optimization of the network from the first mile all the way to the last mile, and optimization on the device. To make this happen, CDN providers need tight integration with ADC vendors.

I like the partnership and the investment, and I hope that it paves the way for an ecosystem in which many CDNs offer tighter integration with a variety of ADC devices from a range of popular vendors.