Blog Archives

Who is Distribution Cloud?

Matthew Sacks has blogged Keynote performance test results for Akamai via Distribution Cloud.

There are other Akamai resellers out there, but Distribution Cloud posts its prices publicly, starting at 50 GB for $150 per month ($3/GB), and going up to 1 TB for $2,200/month ($2.20/GB), with storage at $15/GB. That’s 10x the cost of Limelight with a Rackspace Cloud Files origin ($0.22/GB, no commit) — definitely not a commodity price, so it’s really the low commit that makes this interesting.

But the blog post got me wondering: Who is Distribution Cloud, anyway? Their website lists a phone number and the fact they’re in Cambridge, MA, but no address or anything else that indicates who the company is. A search on the phone number turns up that it’s registered to a David Reisfeld at 94 Rice St, phone service courtesy of Level 3. LinkedIn and Naymz have listings for a person who seems to match: a senior director, product management, CDN streaming services, at Level 3 — formerly a senior product manager at Akamai.

But that only deepens the mystery. Did Reisfeld run a reseller (since 2002, the site claims) while employed at Akamai, with the blessing of his employer? Is he fronting for a buddy? Is he still doing it while employed at a competitor, or is the name for the phone listing not current? If it’s not his company, whose is it?

Pricing transparency and CDNs

It is possible that I am going to turn out to be mildly wrong about something. I predicted that neither Amazon’s CloudFront CDN nor the comparable Rackspace/Limelight offering (Mosso Cloud Files) would really impact the mainstream CDN market. I am no longer as certain that’s going to be the case, as it appears that behavioral economics play into these decisions more than one might expect. The impact is subtle, but I think it’s there.

I’m not talking about the giant video deals, mind you; those guys already get prices well below that of the cloud CDNs. I’m talking about the classic bread-and-butter of the CDN market, the e-commerce and enterprise customers, significant B2B and B2C brands that have traditionally been Akamai loyalists, or been scattered with smaller players like Mirror Image.

Simply put, the cloud CDNs put indirect pressure on mainstream CDN prices, and will absorb some new mainstream (enterprise but low-volume) clients, for a simple reason: Their pricing is transparent. $0.22/GB for Rackspace/Limelight. $0.20/GB for SoftLayer/Internap. $0.17/GB for Amazon CloudFront. And so on.

Transparent pricing forces people to rationalize what they’re buying. If I can buy Limelight service on zero commit for $0.22/GB, there’s a fair chance that I’m going to start wondering just what exactly Akamai is giving me that’s worth paying $2.50/GB for on a multi-TB commit. Now, the answer to that might be, “DSA Secure that speeds up my global e-commerce transactions and is invaluable to my business”, but that answer might also be, “The same old basic static caching I’ve been doing forever and have been blindly signing renewals for.” It is going to get me to wonder things like, “What are the actual competitive costs of the services I am using?” and, “What is the business value of what I’m buying?” It might not alter what people buy, but it will certainly alter their perception of value.

Since grim October, businesses have really cared about what things cost and what benefit they’re getting out of them. Transparent pricing really amps up the scrutiny, as I’m discovering as I talk to clients about CDN services. And remember that people can be predictably irrational.

While I’m on the topic of cloud CDNs: There have been two recent sets of public performance measurements for Rackspace (Mosso) Cloud Files on Limelight. One is part of a review by Matthew Sacks, and the other is Rackspace’s own posting of Gomez metrics comparing Cloud Files with Amazon CloudFront. The Limelight performance is, unsurprisingly, overwhelmingly better.

What I haven’t seen yet is a direct performance comparison of regular Limelight and Rackspace+Limelight. The footprint appears to be the same, but differences in cache hit ratios (likely, given that stuff on Cloud Files will likely get fewer eyeballs) and the like will create performance differences on a practical level. I assume it creates no differences for testing purposes, though (i.e., the usual “put a 10k file on two CDNs”), unless Limelight prioritizes Cloud Files requests differently.

Amazon’s CloudFront CDN

Amazon’s previously-announced CDN service is now live. It’s called CloudFront. It was announced on the Amazon Web Services blog, and discussed in a blog post by Amazon’s CTO. The RightScale blog has a deeper look, too.

How CloudFront Works

Basically, to use the CloudFront CDN, you drop your static objects (your static HTML, images, JavaScript libraries, etc.) into an S3 bucket. (A bucket is essentially the conceptual equivalent of a folder.) Then, you register that bucket with CloudFront. Once you’ve done this, you get back a hostname that you use as the base of the URL to your static objects. The hostname looks to be a hash in the cloudfront.net domain; I expect CloudFront customers will normally create a CNAME (alias) in their own domain that points to it.
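
As a concrete (and hedged) illustration of that flow, here's a minimal Python sketch using boto3. The bucket name and the CloudFront-style hostname are made up for the example; the real hostname is whatever CloudFront hands back when you register the bucket.

```python
# Minimal sketch of the setup flow described above, using boto3.
# The bucket name and hostname below are hypothetical placeholders.
import boto3

s3 = boto3.client("s3")

BUCKET = "example-static-assets"           # hypothetical bucket name
CDN_HOST = "d1234abcd5678.cloudfront.net"  # hypothetical hostname returned by CloudFront

# Drop a static object into the bucket, world-readable and cacheable.
s3.upload_file(
    "images/logo.png",
    BUCKET,
    "images/logo.png",
    ExtraArgs={"ACL": "public-read", "CacheControl": "max-age=86400"},
)

# Once the bucket is registered with CloudFront, the object is reachable via the edge:
print(f"http://{CDN_HOST}/images/logo.png")
```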

The tasks use Amazon API calls, but like every other useful bit of AWS functionality, there are tools out there that can make it point-and-click, so this can readily be CDN for Dummies. There’s no content “publication”, just making sure that the URLs for your static content point to Amazon. Intelligently-designed sites already have static content segregated onto its own hostname, so it’s a simple switch for them; for everyone else, it’s just some text editing, and it’s the same method as just about every other CDN for the better part of the last decade.
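
To make the "just some text editing" point concrete, here's a rough sketch of the kind of rewrite involved; the hostnames are hypothetical, and in practice this is usually a one-time change to a template or a configuration variable.

```python
# Rough sketch of the "text editing" step: repoint static-asset URLs at the
# CDN hostname (or a CNAME you control). Hostnames here are hypothetical.
import re

ORIGIN_HOST = "static.example.com"  # where static assets lived before
CDN_HOST = "cdn.example.com"        # CNAME pointing at the CloudFront hostname

def repoint_static_urls(html: str) -> str:
    """Rewrite references to the old static host so they resolve to the CDN."""
    return re.sub(rf"(https?://){re.escape(ORIGIN_HOST)}", rf"\g<1>{CDN_HOST}", html)

page = '<img src="http://static.example.com/images/logo.png">'
print(repoint_static_urls(page))
# <img src="http://cdn.example.com/images/logo.png">
```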

In short, you’re basically using S3 as the origin server. The actual delivery is through one of 14 edge locations — in the particular case of these locations, a limited footprint, but not a bad one in these days of the megaPOP CDN.

How Pricing Works

Pricing for the service is a little more complex to understand. If your content is being served from the edge, it’s priced similarly to S3 data transfers (same price at 10 TB or less, and 1 cent less for higher tiers). However, before your content can get served off the edge, it has to get there. So if the edge has to go back to the origin to get the content (i.e., the content is not in cache or has expired from cache), you’ll also incur a normal S3 charge.

You can think of this as being similar to the way that Akamai charged a few years ago — you’d pay a bandwidth fee for delivering content to users, but you’d also pay something called an “administrative bandwidth fee”, which was charged for edge servers fetching from the origin. That essentially penalized you when there was a cache miss. On a big commercial CDN like Akamai, customers reasonably felt this was sort of unfair, competitors like Mirror Image hit back by not charging those fees, and by and large the fee disappeared from the market.

It makes sense for Amazon to charge for having to go back to the (S3) origin, though, because the open-to-all-comers nature of CloudFront means that they would naturally have a ton of customers who have long-tail content and therefore very low cache hit ratios. On a practical basis, however, quite a few people who try out CloudFront will probably discover that the cache miss ratio for their content is too high; since a cache miss also confers no performance benefit, the higher delivery cost won’t make sense, and they’ll go back to just using S3 itself for static delivery. In other words, the pricing scheme will make customers self-regulate, over time.
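
Here's a back-of-the-envelope sketch of that self-regulation argument. The prices are illustrative placeholders rather than a current rate card, but the structure (every delivered GB pays the edge rate, and every miss pays an origin fetch on top) is what drives the math.

```python
# With a low cache hit ratio, the effective CloudFront cost per GB delivered
# approaches (edge + origin) pricing, at which point plain S3 delivery looks
# more rational. Prices below are illustrative placeholders.
EDGE_PRICE = 0.17    # $/GB delivered from a CloudFront edge (illustrative)
ORIGIN_PRICE = 0.17  # $/GB for the S3 origin fetch on a cache miss (illustrative)
S3_ONLY_PRICE = 0.17 # $/GB for serving straight from S3 (illustrative)

def effective_cdn_cost_per_gb(hit_ratio: float) -> float:
    """Every GB is delivered from the edge; misses also incur an origin fetch."""
    miss_ratio = 1.0 - hit_ratio
    return EDGE_PRICE + miss_ratio * ORIGIN_PRICE

for hit_ratio in (0.95, 0.75, 0.50, 0.10):
    cost = effective_cdn_cost_per_gb(hit_ratio)
    print(f"hit ratio {hit_ratio:.0%}: ${cost:.3f}/GB vs ${S3_ONLY_PRICE:.2f}/GB on S3 alone")
```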

Performance and CDN Server Misdirection

Some starting fodder for the performance-minded: My preliminary testing shows that Amazon’s accuracy in choosing an appropriate node can be extremely poor. I’ve noted previously that the nameserver is the location reference, and so the CDN’s content-routing task is to choose a content server close to that location (which will also hopefully be close to the end-user).

I live in suburban DC. I tested using cdn.wolfire.com (a publicly-cited CloudFront customer), and checked some other CloudFront customers as well to make sure it wasn’t a domain-specific aberration. My local friend on Verizon FIOS, using a local DC nameserver, gets Amazon’s node in Northern Virginia (suburban DC), certainly the closest node to us. Using the UltraDNS nameserver in the New York City area, I also get the NoVa node — but from the perspective of that nameserver, Amazon’s Newark node should actually be closer. Using the OpenDNS nameserver in NoVa, Amazon tries to send me to its CDN node in Palo Alto (near San Francisco) — not even close. My ISP, MegaPath, has a local nameserver in DC; using that, Amazon sends me to Los Angeles.

That’s a serious set of errors. By ping time, the correct, NoVa node is a mere 5 ms away from me. The LA node is 71 ms from me. Palo Alto is even worse, at 81 ms. Both of those times are considerably worse than pure cross-country latency across a decent backbone. My ping to the test site’s origin (www.wolfire.com) is only 30 ms, making the CDN latency more than twice as high.
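
If you want to reproduce this sort of test yourself, a small dnspython sketch like the following will do it: ask several different recursive resolvers for the same CloudFront hostname and compare the edge IPs that come back. The resolver addresses below are placeholders; substitute whichever resolvers you want to test from.

```python
# Query the same CDN hostname through several resolvers and compare answers.
# Resolver IPs are placeholders.
import dns.resolver

RESOLVERS = {
    "local ISP resolver": "192.0.2.1",
    "OpenDNS-style resolver": "192.0.2.2",
    "remote public resolver": "192.0.2.3",
}

for label, server in RESOLVERS.items():
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [server]
    answer = resolver.resolve("cdn.wolfire.com", "A")
    ips = ", ".join(rr.address for rr in answer)
    print(f"{label:25s} -> {ips}")
```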

However, I have a certain degree of faith: Over time, Amazon has clearly illustrated that they learn from their mistakes, they fix them, and they improve on their products. I assume they’ll figure out proper redirection in time.

What It Means for the Market

Competitively, it seems like Rackspace’s Cloud Files plus Limelight may turn out to be the stronger offering. The price of Rackspace/Limelight is slightly higher, but apparently there’s no origin retrieval charge, and Limelight has a broader footprint and therefore probably better global performance (although there are many things that go into CDN performance beyond footprint). And while anyone can use CloudFront for the teensiest bit of delivery, the practical reality is that they’re not going to — the pricing scheme will make that irrational.

Some people will undoubtedly excitedly hype CloudFront as a game-changer. It’s not. It’s certainly yet another step towards having ubiquitous edge delivery of all popular static content, and the prices are attractive at low to moderate volumes, but high-volume customers can and will get steeper discounts from established players with bigger footprints and a richer feature set. It’s a logical move on Amazon’s part, and a useful product that’s going to find a large audience, but it’s not going to majorly shake up the CDN industry, other than to accelerate the doom of small undifferentiated providers who were already well on their way to rolling into the fiery pit of market irrelevance and insolvency.

(I can’t write a deeper market analysis on my blog. If you’re a Gartner client and want to know more, make an inquiry and I’d be happy to discuss it.)

Akamai expands its advertising solutions

Akamai made an advertising-related announcement today, introducing something it calls Advertising Decision Solutions, and stating that it has agreed to acquire acerno for $95 million in cash.

acerno (which seems to belong to the e.e. cummings school of brand naming) is a small retailer-focused advertising network, but the reason that Akamai acquired it is that it operates a data cooperative, wherein retailers share shopping data. That data is in turn used to create a predictive model — i.e., if a customer bought X, it’s likely they will also be shopping for Y and Z, and therefore you might want to show them related ads.
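
For illustration only (this is the general idea, not acerno's actual model), a co-purchase predictor can be as simple as counting which products show up together in shared baskets:

```python
# Toy co-purchase model: count products that co-occur in shared baskets,
# then surface the most frequent co-purchases as ad candidates.
from collections import Counter, defaultdict
from itertools import permutations

baskets = [
    {"camera", "memory card", "tripod"},
    {"camera", "memory card", "camera bag"},
    {"tripod", "camera bag"},
]

co_purchases = defaultdict(Counter)
for basket in baskets:
    for bought, also in permutations(basket, 2):
        co_purchases[bought][also] += 1

print(co_purchases["camera"].most_common(2))
# e.g. [('memory card', 2), ('tripod', 1)]
```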

Although Akamai states they’ll continue to operate the acerno business, don’t expect them to really push that ad network; Akamai knows where its bread is buttered and isn’t going to risk competing with the established large ad networks, which number amongst Akamai’s most significant customers. Instead, Akamai intends to use the acerno data and its data cooperative model to enhance the advertising-related capabilities that it offers to its customers.

This complements the Advertising Decision Solutions announcement. Basically, it appears that Akamai is going to begin to exploit its treasure-trove of user behavior data, as well as take advantage of the fact that they deliver content on behalf of the publishers as well as the ad networks, and therefore are able to insert elements into the delivery, such as cookies (thus enabling communication between cooperating Akamai customers without those customers having to manually set up such cooperation with their various partners).

This expansion of Akamai’s product portfolio is a smart move. With the cost of delivery dropping through the floor, Akamai needs new, high-value, high-margin services to offer to customers, as well as services that tie customers more closely to Akamai, creating a stickiness that will make customers more reluctant to switch providers to obtain lower costs. Note, however, that Akamai already dominates the online retail space; the new service probably won’t make much of a difference in a retail customer’s decision about whether or not to purchase Akamai services. It will, however, help them defend and grow their ad network customers, and help them maintain a hold on core website delivery for the media and entertainment space. (This is true even in the face of video delivery moving to low-cost CDNs, since you don’t need to deliver the site and the video from the same CDN.)

I think this move signals that we’re going to see Akamai move into adjacent markets where it can leverage its distributed computing platform, its aggregated data (whether about users, content, systems, or networks), or its customer ecosystem. Because these kinds of services will tend to be decoupled from the actual price of bit delivery, they should also help Akamai broaden its revenue streams.

The Microsoft CDN study

The Microsoft/NYU CDN study by Cheng Huang, Angela Wang, et al., seems to no longer be available. Perhaps it’s simply been temporarily withdrawn pending its presentation at the upcoming Internet Measurement Conference. You can still find it in Google’s cache, HTMLified, by searching for the title “Measuring and Evaluating Large-Scale CDNs”, though.

To sum it up in brief for those who missed reading it while it was readily available: Researchers at Microsoft and the Polytechnic Institute of New York University explored the performance of the Akamai and Limelight CDNs. Using a set of IP addresses taken from end-user clients of the MSN video service and from web hosts in Windows Live search logs, the researchers derived a set of vantage points: the open-recursive DNS servers authoritative for those domains. They used these vantage points to chart the servers/clusters of the two CDNs. Then, using the King methodology, which measures the latency between DNS servers, they measured the performance of the two CDNs from the perspective of the vantage points. They also measured the availability of the servers. Finally, they drew some conclusions about the comparative performance of the CDNs and how to prioritize deployments of new locations.

Both Akamai and Limelight pointed to flaws in the study, and I’ve done a series of posts that critique the methodology and the conclusions.

For convenience, here are the links to my analysis:
What the Microsoft CDN study measures
Blind spots in the Microsoft CDN study
Availability and the Microsoft CDN study
Assessing CDN performance

Hopefully the full PDF of the study will return to public view soon. Despite its flaws, it’s still tremendously interesting and a worthwhile read.

Assessing CDN performance

This is the fourth and probably final post in a series examining the Microsoft CDN study. The three previous posts covered measurement, the blind spots, and availability. This post wraps up with some conclusions.

The bottom line: The Microsoft study is very interesting reading, but it doesn’t provide any useful information about CDN performance in the real world.

The study’s conclusions are flawed to begin with, but what’s of real relevance to purchasers of CDN services is that even if the study’s conclusions were valid, its narrow focus on one element — one-time small-packet latency to the DNS servers and content servers — doesn’t accurately reflect the components of real-world CDN performance.

Cache hit ratios have a tremendous impact upon real-world CDNs. Moreover, the fallback mechanism on a cache miss is also important — does a miss require going back to the origin, or is there a middle tier? This will determine how much performance is impacted by a miss. The nature of your content and the CDN’s architecture will determine what those cache hit ratios look like, especially for long-tail content.
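
A small sketch of why this matters: expected latency is dominated by the hit ratio and by where a miss gets filled from. The latency figures below are illustrative, not measurements of any particular CDN.

```python
# Illustrative model of expected latency vs. cache hit ratio and miss path.
EDGE_HIT_MS = 15       # object already cached at the edge
MID_TIER_MISS_MS = 60  # edge miss, filled from a parent/middle-tier cache
ORIGIN_MISS_MS = 250   # edge miss that has to go all the way back to the origin

def expected_latency_ms(hit_ratio: float, has_middle_tier: bool) -> float:
    miss_ms = MID_TIER_MISS_MS if has_middle_tier else ORIGIN_MISS_MS
    return hit_ratio * EDGE_HIT_MS + (1.0 - hit_ratio) * miss_ms

for hit_ratio in (0.98, 0.80, 0.50):
    with_tier = expected_latency_ms(hit_ratio, True)
    without = expected_latency_ms(hit_ratio, False)
    print(f"hit ratio {hit_ratio:.0%}: {with_tier:.0f} ms with a middle tier, "
          f"{without:.0f} ms going back to the origin")
```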

Throughput determines how quickly you get a file, and how well a CDN can sustain a bitrate for video. Throughput is affected by many factors, and can be increased through TCP/IP optimizations. Consistency of throughput also determines what your overall experience is; start-stop behavior caused by jittery performance can readily result in user frustration.

More broadly, the problem is that any method of testing CDNs that doesn’t measure from the edge of the network, using real end-user vantage points, is flawed. Keynote and Gomez provide the best approximations on a day-to-day basis, but they’re only statistical samples. Gomez’s “Actual Experience” service uses an end-user panel, but that introduces uncontrolled variables into the mix if you’re trying to compare CDNs, and it’s still only sampling.

The holy grail of CDN measurement, of course, is seeing performance in real-time — knowing exactly what users are getting at any given instant from any particular geography. But even if a real-time analytics platform existed, you’d still have to try a bunch of different CDNs to know how they’d perform for your particular situation.

Bottom line: If you want to really test a CDN’s performance, and see what it will do for your content and your users, you’ve got to run a trial.

Then, once you’ve done your trials, you’ve got to look at the performance and the cost numbers, and then ask yourself: What is the business value of performance to me? Does better performance drive real value for you? You need to measure more than just the raw performance — you need to look at time spent on your site, conversion rate, basket value, page views, ad views, or whatever it is that tells you how successful your site is. Then you can make an intelligent decision.

In the end, discussions of CDN architecture are academically interesting, and certainly of practical interest to engineers in the field, but if you’re buying CDN services, architecture is only relevant to you insofar as it affects the quality of the user experience. If you’re a buyer, don’t get dragged into the rathole that is debating the merits of one architecture versus another. Look at real-world performance, and think short-term; CDN contract lengths are getting shorter and shorter, and if you’re a high-volume buyer, what you care about is performance right now and maybe in the next year.

Availability and the Microsoft CDN study

This post is the third in a series examining the Microsoft CDN study. My first post examined what was measured, and the second post looked at the blind spots created by the vantage-point discovery method they used. This time, I want to look at the availability and maintenance claims made by the study.

CDNs are inherently built for resilience. The whole point of a CDN is that individual servers can fail (or be taken offline for maintenance), with little impact on performance. Indeed, entire locations can fail, without affecting the availability of the whole.

If you’re a CDN, then the fewer nodes you have, the more impact the total failure of a node will have on your overall performance to end-users. However, the flip side of that is that megaPOP-architecture CDNs generally place their nodes in highly resilient facilities with extremely broad access to connectivity. The most likely scenario that takes out an entire such node is a power failure, which in such facilities generally requires a cascading chain of failure (but can go wrong at single critical points, as with the 365 Main outage of last year). By contrast, the closer you get to the edge, the higher the likelihood that you’re not in a particularly good facility and you’re getting connectivity from just one provider; failure is more probable but it also has less impact on performance.

Because the Microsoft study likely missed a significant number of Akamai server deployments, especially local deployments, it may underestimate Akamai’s single-server downtime, if you assume that such local servers are statistically more likely to be subject to failure.

I would expect, however, that most wider-scale CDN outages are related not to asset failure (facility or hardware), but to software errors. CDNs, especially large CDNs, are extraordinarily complex software systems. There are scaling challenges inherent in such systems, which is why CDNs often experience instability issues as part of their growing pains.

The problem with the Microsoft study of availability is that whether or not a particular server or set of servers responds to requests is not really germane to availability per se. What is useful to know is the variance in performance based upon that availability, and what percentage of the time the CDN selects a content server that is actually unavailable or which is returning poor performance. The variance plays into that edge-vs-megaPOP question, and the selection indicates the quality of the CDN’s software algorithms as well as real-world performance. The Microsoft study doesn’t help us there.

Similarly, whether or not a particular server is in service does not indicate what the actual maintenance cost of the CDN is. Part of the core skillset of a CDN company is the ability to maintain very large amounts of hardware without using a lot of people. They could very readily have automated processes pulling servers out of service, and executing software updates and the like with little to no human intervention.

Next up: Some conclusions.

Blind spots in the Microsoft CDN study

This post is the second in a series examining the Microsoft CDN study comparing Akamai and Limelight. The first post discusses measurement: what the study does and doesn’t look at. Now, I want to build on that foundation to explain what the study misses.

In the meantime, Akamai has responded publicly. One of the points raised in their letter is the subject of my post — why the study does not provide a complete picture of the Akamai network, and why this matters.

The paper says that the researchers used two data sets — end-user IP addresses, as well as webservers — in order to derive the list of DNS servers to use as vantage points. Webservers are generally at the core or the middle mile, so it’s the end-user IPs we’re really interested in, since they’re the ones which indicate the degree to which broader, deeper reach matters. The study says that reverse DNS lookup was used to obtain the authoritative nameserver for an IP, and the ones which responded to open-recursive queries were used.
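
For the curious, here's a hedged dnspython sketch of that discovery step: reverse-resolve an end-user IP, find the nameservers authoritative for its domain, and check whether one of them will answer a recursive query for an unrelated name (i.e., whether it's open-recursive and usable as a vantage point). The client IP is a placeholder, and real harvesting obviously does this at scale.

```python
# Sketch of the vantage-point discovery step. The client IP is a placeholder.
import dns.message
import dns.query
import dns.resolver
import dns.reversename

CLIENT_IP = "198.51.100.23"  # placeholder end-user address

# 1. Reverse DNS: IP -> hostname -> parent domain (naive two-label split).
ptr = dns.resolver.resolve(dns.reversename.from_address(CLIENT_IP), "PTR")
hostname = str(ptr[0]).rstrip(".")
domain = ".".join(hostname.split(".")[-2:])

# 2. Authoritative nameservers for that domain.
ns_names = [str(rr.target) for rr in dns.resolver.resolve(domain, "NS")]
ns_ip = dns.resolver.resolve(ns_names[0], "A")[0].address

# 3. Open-recursion test: ask it for a name it is not authoritative for.
probe = dns.message.make_query("www.example.org", "A")  # recursion desired by default
reply = dns.query.udp(probe, ns_ip, timeout=3)
open_recursive = bool(reply.answer)
print(f"{ns_names[0]} ({ns_ip}) open-recursive: {open_recursive}")
```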

The King methodology dates back to 2002. Since that time, open-recursive DNS servers have become less common because they’re potentially a weapon in DDoS attacks, and open-recursive authoritatives even more so because of the potential for cache poisoning attacks. So immediately, we know that the study’s data set is going to miss lots of vantage points owned by the security-conscious. Lack of a vantage point means that the study may be “blind” to users local to it, and indeed, it may miss some networks entirely.

Let’s take an example. I live in the Washington DC area; I’m on MegaPath DSL. A friend of mine, who lives a bit less than 20 miles away, is on Verizon FIOS.

Verizon FIOS customers have IP addresses that reverse-lookup to hostnames in a structured naming scheme ending in verizon.net. The nameservers that are authoritative for verizon.net are not open-recursive. Moreover, the nameservers that Verizon automatically directs customers to, which are regional (for instance, DC-suburb customers are given nameservers in the ’burbs of Reston, Virginia, plus one in Philadelphia), are not open-recursive either. So that tells us right off the bat that Verizon broadband customers are simply not measured by this study.

Let me say that again. This study almost certainly ignores one of the largest providers of broadband connectivity in the United States. They certainly can’t have used Verizon’s authoritative nameservers as a vantage point, and even if they had somehow added the Verizon resolvers manually to their list of servers to try, they couldn’t have tested from them, since they’re not open-recursive.

Of course, the study doesn’t truly ignore those users per se — those users are probably close, in a broad network sense, to some vantage point that was used in the study. But note that it is almost certain to be cross-AS at that point, i.e., on somebody else’s network, which means that the traffic had to cross a peering point, which is itself a bottleneck. So right off the bat, you’re not getting an accurate measure of their experience.

The original King paper (which describes the sort of DNS-based measurement used in the Microsoft study) asserts that the methodology is still reasonable for estimating end-user latency because, in their sample data, the distance from end-user clients to their nameservers has a median of 4 hops, with about 20% longer than 8 hops, and only 65-70% of those paths accounting for 10 ms or less of latency. But that’s a significant number of hops and a depressingly low percentage of negligible-latency distances, which absolutely matters when the core of your research question is whether being at the edge makes a performance difference.

The problem can be summed up like this: Many customers are closer to an Akamai server than they are to their nameserver.

My friend and I, living less than 20 miles apart, get totally divergent results for our lookups of Akamai hosts. We’re likely served off completely different clusters. In fact, my ISP’s closest nameserver is 18 ms from me — and my closest Akamai server is 12 ms away.

It’s a near certainty that the study has complete blind spots — places where there’s no visibility from a proximate open-recursive nameserver, but a local Akamai server. Akamai has tremendous presence in ISP POPs, and there’s a high likelihood that a substantial percentage of their caches serve primarily customers of a given ISP — that’s why ISPs agree to host those servers for free and give away the bandwidth in those locations.

More critique and some conclusions to come.

What the Microsoft CDN study measures

Cheng Huang et al.’s Microsoft Research and NYU collaboration on a study entitled Measuring and Evaluating Large-Scale CDNs is worth a closer look. This is the first of what I expect will be a series of posts that aims to explain what was studied and what it means.

The study charts the Akamai and Limelight CDNs, and compares their performance. Limelight has publicly responded, based on questions from Dan Rayburn.

I want to begin by talking about what this study does and doesn’t measure.

The study measures two things: latency to the CDN’s DNS server, and latency to the CDN’s content server. This is latency in the purest network sense — the milliseconds of transit time between the origin measurement point (the “vantage point”) and a particular CDN server. The study uses a modified King methodology, which means the origin measurement points are open recursive DNS servers. In plain English, that means that the origin measurement points are ordinary DNS resolvers — the servers provided by ISPs, universities, and some businesses who have their resolvers outside the firewall. The paper states that 282,700 unique resolvers were used as vantage points.
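
For those unfamiliar with King, here's a rough sketch of the idea, simplified and with placeholder addresses: estimate the latency between a recursive resolver R and a target authoritative server T by timing a query sent to R for a name only T can answer, then subtracting your own round-trip time to R. (The real methodology adds caching tricks that this sketch omits.)

```python
# Simplified King-style estimate of resolver-to-authoritative latency.
# IPs and names are placeholders; this omits the real methodology's refinements.
import time
import dns.flags
import dns.message
import dns.query

RESOLVER_IP = "192.0.2.53"            # placeholder open-recursive resolver (R)
TARGET_NAME = "uncached.example.net"  # placeholder name authoritative on T

def rtt_ms(name: str, server: str, want_recursion: bool) -> float:
    query = dns.message.make_query(name, "A")
    if not want_recursion:
        query.flags &= ~dns.flags.RD  # strip the recursion-desired flag
    start = time.perf_counter()
    dns.query.udp(query, server, timeout=3)
    return (time.perf_counter() - start) * 1000.0

# Round trip to the resolver itself (a query it can answer without recursing).
us_to_resolver = rtt_ms("example.com", RESOLVER_IP, want_recursion=False)

# Full path: us -> resolver -> target authoritative server -> resolver -> us.
full_path = rtt_ms(TARGET_NAME, RESOLVER_IP, want_recursion=True)

print(f"estimated resolver-to-target latency: {full_path - us_to_resolver:.1f} ms")
```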

Open recursive DNS servers (I’m just going to call them “resolvers” for short) are typically at the core of networks, not at the edge. They sit in the data centers of service providers and organizations; in the case of service providers, they may sit at major aggregation points. For instance, I’m a MegaPath DSL customer; the two MegaPath-based resolvers provided to me sit at locations with ping times that average 18 ms and 76 ms away. The issues with this are particularly acute given the study’s resolver discovery methodology — open authoritatives found by a reverse DNS lookup. Among other things, this results in the large diverse networks being significantly under-represented.

So what this study emphatically does not measure is latency to the end user. Instead, think of it as latency to the core of a very broad spectrum of networks, where “the core” means a significant data center or aggregation point, and “networks” mean service provider networks as well as enterprise networks. This is going to be very important when we consider the Akamai/Limelight performance comparison.

Content delivery performance can typically be broken down into the “start time” — the amount of time that passes until the first byte of content is delivered to the user — and the “transfer time”, which is how long it takes for the content to actually get delivered.

The first component of the start time is the DNS resolution time. The URL is typically a human-readable name; this has to get turned into an IP address that a computer can understand. This is where CDNs are magic — they take that hostname and they turn it into the IP address of a “good”, “nearby” CDN server to get the content from. This component is what the study is measuring when it’s measuring the CDN DNS servers. The performance of this involves:


  • the network latency between the end-user and his resolver
  • the network latency between his resolver and the CDN’s DNS server
  • the amount of time it takes for the CDN’s DNS server to return a response to the query (the CDN has to figure out which server it wants to serve the content from, which takes some computational cycles to process; in order to cut down computational time, it tends to be a “good enough” server rather than “the optimal” server)

The start time has another component, which is how long it takes for the CDN content server to find the file it’s going to serve, and start spitting it out over the network to the end user. This is a function of server performance and workload, but it’s also a function of whether or not the content is in cache. If it’s not in cache, it’s got to go fetch it from the origin server. Therefore, a cache miss is going to greatly increase the start time. The study doesn’t measure this at all, of course.

The transfer time itself is dependent upon the server performance and workload, but also upon the network performance between the CDN’s content server and the end user. This involves not just latency, but also packet loss (although most networks today have very little packet loss, to the point where some carriers offer 0% packet loss SLAs). During the transfer period, jitter (the consistency of the network performance) may also matter, since spikes in latency may impact things like video, causing a stream to rebuffer or a progressive-download viewing to pause. In the end, the performance comes down to throughput — how many bytes can be shoved across the pipe, each second. The study measures latency to the content server, but it does not measure throughput, and throughput is the real-world metric for understanding actual CDN performance. Moreover, the study measures latency using a DNS packet — lightweight and singular. So it in no way reflects any TCP/IP tricks that a CDN might be doing in order to optimize its throughput.
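
As a simple illustration of the start-time/transfer-time split the study ignores, the following Python sketch times the first byte separately from the rest of the body and derives throughput from the latter; the URL is a placeholder.

```python
# Measure start time (DNS + connect + first byte) and transfer time separately,
# then derive throughput. The URL is a placeholder test object.
import time
import urllib.request

URL = "http://cdn.example.com/static/test-object.jpg"

start = time.perf_counter()
with urllib.request.urlopen(URL) as response:
    first_chunk = response.read(1)   # start time: everything up to the first byte
    ttfb = time.perf_counter() - start
    body = response.read()           # transfer time: the rest of the object
    total = time.perf_counter() - start

transfer_seconds = total - ttfb
size_bytes = len(first_chunk) + len(body)
throughput_mbps = (size_bytes * 8 / 1e6) / transfer_seconds if transfer_seconds > 0 else float("inf")

print(f"start time: {ttfb*1000:.0f} ms, transfer: {transfer_seconds*1000:.0f} ms, "
      f"throughput: {throughput_mbps:.1f} Mbit/s")
```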

Now, let’s take all this in the context of the Akamai/Limelight comparison that’s being drawn. The study notes that DNS resolution time is 23% higher for Limelight than Akamai, and that Limelight’s content server latency is 114% higher. However, this includes regions for which Limelight has little or no geographic coverage. For instance, in North America, where both companies have good coverage, Akamai has a DNS server delay of 115.81 ms and a content server delay of 67.24 ms, vs. 78.64 ms and 79.03 ms respectively for Limelight. (It’s well-known that Akamai’s DNS resolution can be somewhat slower than competitors’, since its much more extensive and complex network results in greater computational complexity.)

The study theorizes that it’s comparing the Akamai “as far to the edge as possible” approach vs. the Limelight (and most other current-generation CDNs) “megaPOP” approach. In other words, the question being asked is, “How much performance difference is created by not being right at the edge?”

Unfortunately, this study doesn’t actually answer that question, because the vantage points — the open recursive DNS servers — are not at the edge. They’re at the core (whether of service provider or enterprise networks). They’re at locations with fast big-pipe connectivity, and likely located in places with excellent peering — both megaPOP-friendly. A CDN like Akamai is certainly also at those same megaPOP locations, of course, but the methodology means that a lot of vantage points are essentially looking at the same CDN points of presence, rather than the more diverse set that might otherwise be represented by actual end-users. It seems highly likely that the Akamai network performance difference, under conditions where both CDNs feel they have satisfactory coverage, is underestimated by the study’s methodology.

More to come…

Limelight workaround, and an Akamai comparison

DataCenterKnowledge has reported that Limelight has a workaround for the Akamai patents. Limelight’s last SEC filing noted that it amended its agreement with Microsoft (to whom it has licensed its CDN technology, for Microsoft to use in building its own internal CDN), to provide them with a new version of the software that they believe is non-infringing. That suggests that they have or will have a workaround for their own network.

Also, Microsoft and NYU researchers have recently released a paper, Measuring and Evaluating Large-Scale CDNs, that charts the Akamai and Limelight networks, and offers (DNS-based) delay measurements for their DNS resolvers and content servers.

I’ll have more commentary on both topics soon, when I’ve got some more time.

Also, I have decided that I’m going to start adding stock tickers to my tags, whenever I write about something that’s likely to be of interest to investors in a particular company. Hopefully, this will help Gartner Invest clients and others with similar interests to navigate my blog.
