Blog Archives

Limelight Networks buys AcceloWeb

I am way behind on my news announcements, or I’d have posted on this earlier: Limelight has bought AcceloWeb.

AcceloWeb is a front-end optimization software company (sometimes called Web content optimization). FEO technologies improve website / Web application performance, through optimizing the HTML/CSS/JavaScript/images on the page. I’ve blogged about this in the past, regarding Cotendo’s integration of Google’s mod_pagespeed technology; if you’re interested in understanding more about FEO, see that post.

Like their competitors Aptimize and Strangeloop Networks, AcceloWeb is a software-based solution. FEO is an emerging technology, and it is computationally expensive — far more so than the kind of network-based optimizations that you get in ADCs like F5’s, or WOCs like Riverbed’s. It is also complex, since FEO tries to rewrite the page without breaking any of its elements — especially hard to do with complex e-commerce sites, for instance, especially those that aren’t following architectural best practices (or even good practices).

CDN and FEO services are highly complementary, since caching the optimized page elements obviously makes sense. Level 3 and Strangeloop recently partnered, with Level 3 offering Strangeloop’s technology as a service called CDN Site Optimizer, although it’s a “side by side” implementation in Level 3’s CDN POPs, not yet integrated with the Level 3 CDN. (Obviously, the next step in that partnership would be integration.)

The integration of network optimization and FEO is the most significant innovation in the optimization market in recent years. For Limelight, this is an important purchase, since it gets them into the acceleration game with a product that Akamai doesn’t offer. (Akamai only has a referral deal with Strangeloop.)

Gartner clients: My research note on improving Web performance (combining on-premise acceleration, CDN / ADN, and FEO for complete solutions) will be out soon!

Akamai and Riverbed partner on SaaS delivery

Akamai and Riverbed have signed a significant partnership deal to jointly develop solutions that combine Internet acceleration with WAN optimization. The two companies will be incorporating each other’s technologies into their platforms; this is a deep partnership with significant joint engineering, and it is probably the most significant partnership that Akamai has done to date.

Akamai has been facing increasing challenges to its leadership in the application acceleration market — what Akamai’s financial statements term “value added services”, including their Dynamic Site Accelerator (DSA) and Web Application Accelerator (WAA) services, which are B2C and B2B bundles, respectively, built on top of the same acceleration delivery network (ADN) technology. Vendors such as Cotendo (especially via its AT&T partnership), CDNetworks, and EdgeCast now have services that compete directly with what has been, for Akamai, a very high-margin, very sticky service. This market is facing severe pricing pressure, due not just to competition, but to the delta between the cost of these services and standard CDN caching. (In other words, as basic CDN services get cheaper, application acceleration also needs to get cheaper, in order to demonstrate sufficient ROI, i.e., business value of performance, above just buying the less expensive solution.)

While Akamai has had interesting incremental innovations and value-adds since it obtained this technology via the 2007 acquisition of Netli, it has, until recently, enjoyed a monopoly on these services, and therefore hasn’t needed to do any groundbreaking innovation. While the internal enterprise WAN optimization market has been heavily competitive (between Riverbed, Cisco, and many others), other CDNs largely only began offering competitive ADN solutions in the last year. Now, while Akamai still leads in performance, it badly needs to open up some differentiation and new potential target customers, or it risks watching ADN solutions commoditize just the way basic CDN services have.

The most significant value proposition of the joint Akamai/Riverbed solution is this:

Despite the fundamental soundness of the value proposition of ADN services, most SaaS providers use only a basic CDN service, or no CDN at all. The same is true of other providers of cloud-based services. Customers, however, frequently want accelerated services, especially if they have end-users in far-flung corners of the globe; the most common problem is poor performance for end-users in Asia-Pacific when the service is based in the United States. Yet, today, getting that acceleration either requires that the SaaS provider buy an ADN service itself (which is hard to justify for only one customer, especially for multi-tenant SaaS), or requires the SaaS provider to allow the customer to deploy hardware in its data center (for instance, a Riverbed Steelhead WOC).

With the solution that this partnership is intended to produce, customers won’t need a SaaS provider’s cooperation to deploy an acceleration solution — they can buy it as a service and have the acceleration integrated with their existing Riverbed solution. It adds significant value to Riverbed’s customers, and it expands Akamai’s market opportunity. It’s a great idea, and in fact, this is a partnership that probably should have happened years ago. Better late than never, though.

3Crowd, a new fourth-generation CDN

3Crowd has unveiled its master plan with the recent launch of its CrowdCache product. Previously, 3Crowd had a service called CrowdDirector, essentially load-balancing for content providers who use multiple CDNs. CrowdCache is much more interesting, and it gives life and context to the existence of CrowdDirector. CrowdCache is a small, free Java application that you can deploy onto a server, turning it into a CDN cache. You then use CrowdDirector, which you pay for as-a-service on a per-object-request basis, to provide all the intelligence on top of that cache. CrowdDirector handles the request routing, management, analytics, and so forth. What you get, in the end, at least in theory, is a turnkey CDN.
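
The CrowdCache model described above — a generic software cache sitting in front of an origin — can be sketched as a pull-through cache: serve from local storage on a hit, fetch from the origin on a miss, and evict the least-recently-used object when full. The following is an illustrative Python sketch under those assumptions, not 3Crowd’s actual (Java) implementation; all names here are hypothetical.

```python
# Minimal sketch of a pull-through CDN cache node. Illustrative only.
from collections import OrderedDict

class PullThroughCache:
    """LRU cache that fetches from an origin on a miss ("pull-through")."""

    def __init__(self, fetch_from_origin, max_objects=1000):
        self.fetch_from_origin = fetch_from_origin  # callable: url -> bytes
        self.max_objects = max_objects
        self._store = OrderedDict()
        self.hits = 0
        self.misses = 0

    def get(self, url):
        if url in self._store:
            self._store.move_to_end(url)  # mark as most recently used
            self.hits += 1
            return self._store[url]
        self.misses += 1
        body = self.fetch_from_origin(url)  # cache miss: go to origin
        self._store[url] = body
        if len(self._store) > self.max_objects:
            self._store.popitem(last=False)  # evict least recently used
        return body

# Usage: a stub origin standing in for the customer's web server.
origin_calls = []
def origin(url):
    origin_calls.append(url)
    return b"content for " + url.encode()

cache = PullThroughCache(origin, max_objects=2)
cache.get("/a.jpg")
cache.get("/a.jpg")  # second request is a cache hit, no origin fetch
print(cache.hits, cache.misses, len(origin_calls))  # → 1 1 1
```

The request-routing intelligence (which node a client is sent to) lives in a separate layer — in 3Crowd’s case, CrowdDirector.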

I consider 3Crowd to be a fourth-generation CDN. (I started writing about 4th-gen CDNs back in 2008; see my blog posts on CDN overlays and MediaMelon, on the launch of CDN aggregator Aflexi, and 4th-gen CDNs and the launch of Conviva).

To recap, first-generation CDNs use a highly distributed edge model (think: Akamai), second-generation CDNs use a somewhat more concentrated but still highly distributed model (think: Speedera), and third-generation CDNs use a megaPOP model of many fewer locations (think: Limelight and most other CDNs founded in the 2005-2008 timeframe). These are heavily capital-intensive models that require owning substantial server assets.

Fourth-generation CDNs, by contrast, represent a shift towards a more software-oriented model. These companies own limited (or even no) delivery assets themselves. Some of these are not (and will not be) so much CDNs themselves, as platforms that reside in the CDN ecosystem, or CDN enablers. Fourth-generation CDNs provide software capabilities that allow their customers to turn existing delivery assets (whether in their own data centers, in the cloud, or sometimes even on clients using peer-to-peer) into CDN infrastructure. 3Crowd fits squarely into this fourth-generation model.

3Crowd is targeting three key markets: content providers who have spare capacity in their own data centers and would like to deliver content using that capacity before they resort to their CDN; Web hosters who want to add a CDN to their service offerings; and carriers who want to build CDNs of their own.

In this last market segment, especially, 3Crowd will compete against Cisco, Juniper (via the Ankeena acquisition), Alcatel-Lucent (via the Velocix acquisition), EdgeCast, Jet-Stream, and other companies that offer CDN-building solutions.

No doubt 3Crowd will also get some do-it-yourselfers who will decide to use 3Crowd to build their own CDN using cloud IaaS from Amazon or the like. This is part of what’s generating buzz for the company now, since their “Garage Startup” package is totally free.

I also think there’s potentially an enterprise play here, for those organizations who need to deliver content both internally and externally, who could potentially use 3Crowd to deploy an eCDN internally along with an Internet CDN hosted on a cloud provider, substituting for caches from BlueCoat or the like. There are lots of additional things that 3Crowd needs to be viable in that space, but it’s an interesting thing to think about.

3Crowd has federation ambitions, which is to say: Once they have a bunch of customers using their platform, they’d like to have a marketplace in which capacity-trading can be done, and, of course, also enable more private deals for federation, something which tends to be of interest to regional carriers with local CDN ambitions, who look to federation as a way of competing with the global CDNs.

Conceptually, what 3Crowd has done is not unique. Velocix, for instance, has similar hopes with its Metro product. There is certainly plenty of competition for infrastructure for the carrier CDN market (most of the world’s carriers have woken up over the last year and realized that they need a CDN strategy of some sort, even if their ambitions do not go farther than preventing their broadband networks from being swamped by video). What 3Crowd has done that’s notable is an emphasis on having an easy-to-deploy complete integrated solution that runs on commodity infrastructure resources, and the relative sophistication of the product’s feature set.

The baseline price seemed pretty cheap to me at first, and then I did some math. At the baseline pricing for a start-up, it’s about 2 cents per 10,000 requests. If you’re doing small object delivery at 10K per file, ten thousand requests is about 100 MB of content. So 1 GB of content delivered as 10K files would cost you 20 cents. That’s not cheap, since that’s just the 3Crowd cost — you still have to supply the servers and the network bandwidth. By comparison, Rackspace Cloud Files’ CDN-enabled delivery via Akamai is 18 cents per GB for the actual content delivery. Anyone doing enough volume to actually have a full CDN contract and not pushing their bits through a cloud CDN is going to see pricing a lot lower than 18 cents, too.

However, the pricing dynamics are quite different for video. If you’re doing delivery of relatively low-quality, YouTube-like social video, for instance, your average file size is probably more like 10 MB. So 10,000 requests is 100 GB of content, making the per-GB surcharge a mere 0.02 cents ($0.0002), an essentially negligible amount. Consequently, the request-based pricing model makes 3Crowd far more cost-effective as a solution for video and other large-file-centric CDNs than for small object delivery.
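
The arithmetic in the two paragraphs above can be checked with a quick script, using the quoted rate of 2 cents per 10,000 requests and the stated average file sizes (the function name is mine, and I’m using round decimal file sizes to match the post’s math):

```python
# Back-of-the-envelope check of request-based CDN pricing.
rate_per_request = 0.02 / 10_000  # dollars, per the quoted baseline rate

def surcharge_per_gb(avg_file_bytes):
    """Per-GB request surcharge for a given average object size."""
    requests_per_gb = 1e9 / avg_file_bytes
    return requests_per_gb * rate_per_request

small = surcharge_per_gb(10_000)       # 10 KB small objects
video = surcharge_per_gb(10_000_000)   # 10 MB video files
print(f"small objects: ${small:.4f}/GB")  # → $0.2000/GB
print(f"video files:   ${video:.4f}/GB")  # → $0.0002/GB
```

The 1000x difference in average object size translates directly into a 1000x difference in the effective per-GB surcharge, which is the whole story of why this pricing model favors large-file delivery.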

I certainly have plenty more thoughts on this, both specific to 3Crowd, and to the 4th-gen CDN and carrier CDN evolutionary path. I’m currently working on a research note on carrier CDN strategy and implementation, so keep an eye out for it. Also, I know many of the CDN watchers who read my blog are probably now asking themselves, “What are the implications for Akamai, Limelight, and Level 3?” If you’re a Gartner client, please feel free to call and make an inquiry.

Rackspace goes Akamai, Tata buys BitGravity

A little change of pace today, back to some CDN market news…

Rackspace forms a strategic alliance with Akamai. Today, Rackspace’s Cloud Files storage service is integrated with Limelight’s CDN. The new alliance means that Akamai will be replacing Limelight as the CDN, and some new features will be offered as the integration is done. (While those features are things that Limelight does for its regular customers, they were not integrated into the Rackspace cloud CDN service.) Rackspace currently uses Limelight as its CDN partner for regular hosting deals, as well, and intends to offer Akamai in the future. (Most hosters have channel deals of that sort with one or more CDNs.) What makes this interesting is that it marks the first significant integration of Akamai as a wholesale CDN to a cloud CDN — and that you’ll be able to get Akamai delivery for cloud CDN prices. (Gartner clients only: Is a Cloud CDN Right For You?)

Tata Communications buys BitGravity. Video CDN BitGravity was pretty cool and promising when it first came to market. A number of my clients in traditional media and established online companies really loved what it had to offer in the live-streaming space at launch. Unfortunately, BitGravity never really managed to break out beyond that space. I always thought it was a bit odd for Tata to license BitGravity’s technology as the basis for its own CDN, given Tata’s customer base and the use cases I’d expected Tata to be successful at selling. Tata’s investment in BitGravity at the time also seemed outrageously large to me; for what Tata paid, you’d have thought it could have just bought the company outright (especially given the rumored valuation of Panther Express when it was acquired by CDNetworks). I’d speculate that this was a desperation buyout for BitGravity, but there are lots of questions about the deal’s long-term value to Tata.

EdgeCast joins the ADN fray

EdgeCast has announced the beta of its new application delivery network service. For those of you who are CDN-watchers, that means it’s leaping into the fray with Akamai (Dynamic Site Accelerator and Web Application Accelerator, bundles where DSA is B2C and WAA is B2B), Cotendo’s DSA, and CDNetworks’s Dynamic Web Acceleration. EdgeCast’s technology is another TCP optimization approach, of the variety commonly used in application delivery controllers (like F5’s WebAccelerator) and WAN optimization controllers (like Riverbed’s).

EdgeCast seems to be gaining traction with my client base in the last few weeks — they’re appearing on a lot more shortlists. This appears to be the direct result of aggressive SEO-based marketing.

What’s important about this launch: EdgeCast isn’t just a standalone CDN. It is also a CDN partner to many carriers, and it is deriving an increasingly larger percentage of its revenue by selling its software. That means that EdgeCast introducing ADN services potentially has ripple effects on the ecosystem, in terms of spreading ADN technology more widely.

Akamai sues Cotendo for patent infringement

How to tell when a CDN has arrived: Akamai sues them for patent infringement.

The lawsuit that Akamai has filed against Cotendo alleges the violation of three patents.

The most recent of the patents, 7,693,959, is dated April 2010, but it’s a continuation of several previous applications — its age is nicely demonstrated by things like its reference to the Netscape Navigator browser and references to ADC vendors that ceased to exist years ago. It seems to be a sort of generic basic CDN patent, but it governs the use of replication to distribute objects, not just caching.

The second Akamai patent, 7,293,093, essentially covers the “whole site delivery” technique.

The oldest of the patents, 6,820,133, was obtained by Akamai when it acquired Netli. It essentially covers the basic ADN technique of optimized communication between two server nodes in a network. I don’t know how defensible this patent is; there are similar ones held by a variety of ADC and WOC vendors who use such techniques.

My expectation is that the patent issues won’t affect the CDN market in any significant way, and the lawsuit is likely to drag on forever. Fighting the lawsuit will cost money, but Cotendo has deep-pocketed backers in Sequoia and Benchmark. It will obviously fuel speculation that Akamai will try to buy them, but I don’t think that it’s going to be a simple way to eliminate competition in this market, especially given what I’ve seen of the roadmaps of other competitors (under NDA). Dynamic acceleration is too compelling to stay a monopoly (and incidentally, the ex-Netli executives are now free of their competitive lock-up), and the only reason it’s taken this long to arrive was that most CDNs were focused on the huge, and technically far easier, opportunity in video delivery.

It’s an interesting signal that Akamai takes Cotendo seriously as a competitor, though. In my opinion, they should; since early this year, I’ve regularly seen Cotendo as a bidder in the deals that I look at. The AT&T deal is just a sweetener — and since AT&T will apply your corporate enterprise discount to a Cotendo deal, that means that I’ve got enterprise clients who sometimes look at 70% discounts on top of an already less-expensive price quote. And Akamai’s problem is that Cotendo isn’t just, as Akamai alleges in its lawsuit, a low-cost competitor; Cotendo is trying to innovate and offer things that Akamai doesn’t have.

Competition is good for the market, and it’s good for customers.

Netflix, Akamai, and video delivery performance

Dan Rayburn’s blog post about the Akamai/Netflix relationship seems to have set off something of a firestorm, and I’ve been deluged by inquiries from Gartner Invest clients about it.

I do not want to add fuel to the fire by speculating on anything, and I have access to confidential information that prevents me from stating some facts as I know them, so for blog purposes I will stick to making some general comments about Akamai’s delivery performance for long-tail video content.

From the independent third-party testing that I’ve seen, Akamai delivers good large-file video performance, but their performance is not always superior to other major CDNs (excluding greater reach, i.e., they’re going to solidly trounce a rival who doesn’t have footprint in a given international locale, for instance). Actual performance depends on a number of factors, including the specifics of a customer’s deal with Akamai and the way that the service is configured. The testing location also matters. The bottom line is that it’s very competitive performance but it’s not, say, head and shoulders above competitors.

Akamai has, for the last few years, had a specific large-file delivery service designed to cache just the beginning of a file at the very edge, with the remainder delivered from the middle tier to the edge server, thus eliminating the obvious cache storage issues involved in, say, caching entire movies, while still preserving decent cache hit ratios. However, this has been made mostly irrelevant in video delivery by the rise in popularity of adaptive streaming techniques — if you’re thinking about Akamai’s capabilities in the Netflix or similar contexts, you should think of this as an adaptive streaming thing and not a large file delivery thing.

In adaptive streaming (pioneered by Move Networks), a video is chopped up into lots of very short chunks, each just a few seconds long, and usually delivered via HTTP. The end-consumer’s video player takes care of assembling these chunks. Based on the delivery performance of each chunk, the video player decides whether it wants to upshift or downshift the bitrate / quality of the video in real time, thus determining what the URL of the next video chunk is. This technique can also be used to switch sources, allowing, for instance, the CDN to be changed in real time based on performance, as is offered by the Conviva service. Because the video player software in adaptive streaming is generally instrumented to pay attention to performance, there’s also the possibility that it may send back performance information to the content owner, thus enabling them to get a better understanding of what the typical watcher is experiencing. Using an adaptive technique, your goal is generally to get the end-user the most “upshift” possible (i.e., sustain the highest bitrate possible), and if you can, have it delivered via the least expensive source.
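
The decision loop described above can be sketched roughly as follows. The bitrate ladder, safety margin, and URL scheme here are illustrative assumptions, not any particular player’s or vendor’s algorithm:

```python
# Sketch of an adaptive-streaming bitrate decision. Illustrative only.
BITRATES_KBPS = [300, 700, 1500, 3000]  # available renditions (hypothetical)

def choose_bitrate(recent_throughputs_kbps, margin=0.8):
    """Pick the highest bitrate the connection can sustain with headroom."""
    estimate = min(recent_throughputs_kbps)  # conservative throughput estimate
    budget = estimate * margin               # leave a safety margin
    viable = [b for b in BITRATES_KBPS if b <= budget]
    return viable[-1] if viable else BITRATES_KBPS[0]  # floor at lowest rung

def chunk_url(chunk_index, bitrate):
    # Each quality level is just a different small object on the CDN.
    return f"/video/chunk_{chunk_index}_{bitrate}kbps.ts"

print(choose_bitrate([2500, 2100, 2400]))  # → 1500 (not enough headroom for 3000)
print(chunk_url(42, choose_bitrate([400, 450])))  # low throughput → downshift
```

The key structural point for CDNs is in the second function: every chunk at every bitrate is an independently addressable small object, which is why adaptive streaming turns video delivery into a small-object delivery problem.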

When adaptive streaming is in use, that means that the first chunk of videos is now just a small object, easily cached on just about any CDN. Obviously, cache hit ratios still matter, and you will generally get higher cache hit ratios on a megaPOP approach (like Limelight) than you will with a highly distributed approach (like Akamai), although that starts to get more complex when you add in server-side pre-fetching, deliberately serving content off the middle tier, and the like. So now your performance starts to boil down to footprint, cache hit ratio, algorithms for TCP/IP protocol optimization, server and network performance — how quickly and seamlessly can you deliver lots and lots of small objects? Third-party testing generally shows that Akamai benchmarks very well when it comes to small object delivery — but again, specific relative CDN performance for a specific customer is always unique.

In the end, it comes down to price/performance ratios. Netflix has clearly decided that they believe Level 3 and Limelight deliver better value for some significant percentage of their traffic, at this particular instant in time. Given the incredibly fierce competition for high-volume deals, multi-vendor sourcing, and the continuing fall in CDN prices, don’t think of this as an alteration in the market, or anything particularly long-term for the fate of the vendors in question.

Google’s mod_pagespeed and Cotendo

Those of you who are Gartner clients know that in the last year, my colleague Joe Skorupa and I have become excited about the emergence of software-based application acceleration via page optimization approaches, as exemplified by vendors like Aptimize and Strangeloop Networks. (Clients: See Cool Vendors in Enterprise Networking, 2010.) This approach to acceleration enhances the performance of Web-based content and applications, by automatically optimizing the page output of webservers according to the best practices described in books like High Performance Web Sites by Steve Souders. Techniques of this sort include automatically combining JavaScript files (which reduces overall download time), optimizing the order of the scripts, and rewriting HTML so that the browser can display the page more quickly.
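
As a toy illustration of one rewrite mentioned above — combining multiple external script references into one to cut request count — here’s a simplified sketch. Real FEO products (and mod_pagespeed) handle script ordering, inlining, and edge cases far more carefully; this assumes simple, well-formed tags, and the combined URL is a hypothetical name:

```python
# Toy FEO rewrite: collapse multiple <script src> tags into one reference.
import re

SCRIPT_TAG = re.compile(r'<script src="([^"]+)"></script>\s*')

def combine_scripts(html, combined_url="/combined.js"):
    """Replace N external script tags with one tag for a combined file."""
    srcs = SCRIPT_TAG.findall(html)
    if len(srcs) < 2:
        return html, srcs  # nothing to combine
    first = SCRIPT_TAG.search(html)
    combined_tag = f'<script src="{combined_url}"></script>'
    # Drop every original tag, then put one combined tag where the first was.
    stripped = SCRIPT_TAG.sub("", html)
    html = stripped[:first.start()] + combined_tag + stripped[first.start():]
    return html, srcs  # srcs tells the server which files to concatenate

page = ('<head><script src="/a.js"></script>'
        '<script src="/b.js"></script></head>')
new_page, srcs = combine_scripts(page)
print(srcs)      # → ['/a.js', '/b.js']
print(new_page)  # → <head><script src="/combined.js"></script></head>
```

The browser now makes one script request instead of two; the server (or CDN edge) is responsible for serving the concatenated file at the combined URL.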

Page optimization techniques can often provide significant acceleration boosts (2x or more) even when other acceleration techniques are in use, such as a hardware ADC with acceleration module (offered as add-ons by F5 and Citrix NetScaler, for instance), or a CDN (including CDN-based dynamic acceleration). Since early this year, we’ve been warning our CDN clients that this is a critical technology development to watch and to consider adopting in their networks. It’s a highly sensible thing to deploy on a CDN, for customers doing whole site delivery; the CDN offloads the computational expense of doing the optimization (which can be significant), and then caches the result (and potentially distributes the optimized pages to other nodes on the CDN). That gets you excellent, seamless acceleration for essentially no effort on the part of the customer.

Google’s Page Speed project provides free and open-source tools designed to help site owners implement these best practices. Google has recently released an open-source module, called mod_pagespeed, for the popular Apache webserver. This essentially creates an open-source competitor to commercial vendors like Aptimize and Strangeloop. Add the module into your Apache installation, and it will automatically try to optimize your pages for you.

Now, here’s where it gets interesting for CDN watchers: Cotendo has partnered with Google. Cotendo is deploying the Google code (modified, obviously, to run on Cotendo’s proxy caches, which are not Apache-based), in order to be able to offer the page optimization service to its customers.

I know some of you will automatically be asking now, “What does this mean for Akamai?” The answer to that is, “Losing speed trials when it’s Akamai DSA vs. Cotendo DSA + Page Speed Automatic, until they can launch a competing service.” Akamai’s acceleration service hasn’t changed much since the Netli acquisition in 2007, and the evolution in technology here has to be addressed. Page optimization plus TCP optimization is generally much faster than TCP optimization alone. That doesn’t just have pricing implications; it has implications for the competitive dynamics of the space, too.

I fully expect that page optimization will become part of the standard dynamic acceleration service offerings of multiple CDNs next year. This is the new wave of innovation. Despite the well-documented nature of these best practices, organizations still frequently ignore them when coding — and even commercial packages like Sharepoint ignore them (Sharepoint gets a major performance boost when page optimization techniques are applied, and there are solutions like Certeon that are specific to it). So there’s a very broad swathe of customers that can benefit easily from these techniques, especially since they provide nice speed boosts even in environments where the network latency is pretty decent, like delivery within the United States.

Cotendo and AT&T

A lot of Gartner Invest clients are calling to ask about the AT&T deal with Cotendo. Since I’m swamped, I’m doing a blog post, and the inquiry coordinators will try to set up a single conference call.

I’ve known about this deal for a long time, but I’ve been respecting AT&T and Cotendo’s request to keep it quiet, despite the fact that it’s not under formal nondisclosure. However, since the deal was noted in my recently-published Who’s Who in Content Delivery Networks, 2010, someone else has now blogged about it publicly, and I’m being asked explicitly about it, I’m going to go ahead and talk about it on my blog.

There are now three vendors in the market who claim true dynamic site acceleration offerings: Akamai, CDNetworks, and Cotendo. (Limelight’s recently-announced accelerator offerings are incremental evolutions of LimelightSITE.) CDNetworks has not gained any significant market traction with their offering since introducing it six months ago, whereas these days, I routinely see customers bid Cotendo along with Akamai.

However, to understand the potential impact of Cotendo, one has to understand what they actually deliver. It’s important to note that while Cotendo positions its service identically to Akamai’s, even calling it Dynamic Site Accelerator (just like Akamai brands it), it is not, from a technical perspective, like Akamai’s DSA.

Cotendo’s DSA offering, at present, consists of TCP multiplexing and connection pooling from their edge servers. Both of these technologies are common features in application delivery controllers (or, in more colloquial terms, load-balancers, i.e., F5’s LTM, Citrix’s NetScaler, etc.). If you’re not familiar with the benefits of either, F5’s DevCentral provides good articles on multiplexing and persistent connections, as does Network World (2001, but still relevant).
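
The connection-pooling idea described above can be sketched simply: the edge server keeps a handful of persistent connections to the origin open and funnels many client requests over them, avoiding a TCP handshake (and slow-start ramp) per request. This is an illustrative Python sketch with stub connections standing in for real sockets, not how any production ADC or CDN edge server is actually implemented:

```python
# Sketch of origin-side connection pooling at an edge proxy. Illustrative only.
import queue

class OriginConnectionPool:
    """Reuse a fixed set of persistent origin connections across requests."""

    def __init__(self, connect, size=4):
        self._pool = queue.Queue()
        for _ in range(size):
            self._pool.put(connect())  # pre-open persistent connections
        self.requests_served = 0

    def request(self, do_request):
        conn = self._pool.get()  # borrow an open connection
        try:
            return do_request(conn)
        finally:
            self.requests_served += 1
            self._pool.put(conn)  # return it for the next client request

# Usage with stub "connections" (each open is counted):
opened = []
def connect():
    opened.append(object())
    return len(opened)  # stand-in for a socket; just an id here

pool = OriginConnectionPool(connect, size=2)
for _ in range(10):
    pool.request(lambda conn: conn)
print(len(opened), pool.requests_served)  # → 2 10
```

Ten client requests ride over two origin connections; in a real deployment the win is the avoided per-request handshake and the warmed-up TCP congestion window on the pooled connections.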

By contrast, Akamai’s DSA offering — the key technology acquired when they bought Netli — is sort of like a combination of functionality from an ADC and a WAN optimization controller (WOC, like Riverbed), offered as a service in the cloud (in the old-fashioned meaning, i.e., “somewhere on the Internet”). In DSA, Akamai’s edge servers essentially behave like bidirectional WOCs, speaking an optimized protocol between them; it’s coupled with Akamai’s other acceleration technologies, including pre-fetching, compression, and so on.

Engineering carrier-scale WOC functionality is hard. Netli succeeded. There have been other successes in the hardware market — for instance, Ipanema, which targets carriers. Both made significant sacrifices in the complexity of functionality in order to achieve scale. Enterprise WOC vendors have had a hard time scaling past more than a few dozen sites, and the bar is still pretty low (at Gartner, we use “scale to over a hundred sites” on our vendor evaluation, for instance). A new CDN entrant offering WOC-style, Akamai/Netli-style functionality would be a big deal — but that’s not what Cotendo actually has.

Akamai’s DSA service competes to some extent with unidirectional ADC-based acceleration (F5’s WebAccelerator, for instance), but there are definitely benefits to middle-mile bidirectional acceleration, resulting in a stacked benefit if you use an ADC plus Akamai; moreover, this kind of acceleration is not a baseline feature in ADCs. Cotendo overlaps directly with baseline ADC functionality. That means the two companies have distinctly different services, serving different target audiences.

Cotendo is offering pretty good performance in the places where they have footprint — enough to be competitive. Like all CDN performance, customers care about “good enough” rather than “the very best”, but on transactional sites, there’s usually a decent return curve for more performance before you finally hit “fast enough that faster makes no difference”. This is still dependent upon the context, though. Electronics shoppers, for instance, are much less patient than people shopping for air travel. And the baseline site performance (i.e., your application response time in general) and site construction will also determine how much ROI site acceleration will get you.

The deal with AT&T is significant for the same reason that it was significant for Akamai to have signed Verizon and IBM as resellers years ago — because larger companies can be much more comfortable buying on the paper of a big vendor they already have a relationship with. And since AT&T’s CDN wins are often add-ons to hosting deals — where you typically have a complex transactional site — selling a dynamic acceleration service over a pure static caching one is definitely preferable. AT&T has tried to get around that deficiency in the past by selling multi-data-center and managed F5 WebAccelerator solutions, but those solutions aren’t as attractive. This partnership benefits both companies, but it’s not a game-changer in the CDN industry.

Since everyone’s asking, no, I don’t see Cotendo price-pressuring Akamai at the moment. (I see as many as 15 CDN deals a week, so I feel very comfortable with my state of pricing knowledge, especially in this transactional space.) What I do see is the incredibly depressed price of static object delivery affecting what anyone can realistically charge for dynamic acceleration, because the price/performance delta gets too large. I certainly do see Cotendo winning smaller deals, but it’s important that the wins aren’t coming from just undercuts in price — for instance, my clients cite the user-friendly, attractive portal as a reason to choose Cotendo over Akamai.

I have plenty more to say on this subject, but I’ve already skimmed the edge of how much I can say in my blog vs. when I should be writing research or answering inquiry, so: If you’re a client, please feel free to make an inquiry.

Interesting side note: Since publishing my Who’s Who note a week and a half ago, my CDN inquiry from customers has suddenly started to include a lot more multi-vendor inquiry about the smaller vendors. That probably says that other CDNs could still do a lot to build brand awareness. (SEO is key to this, these days.)

Recent research notes

Here’s a round-up of what I’ve written lately, for those of you that are Gartner clients and are following my research:

Data Center Managed Services: Regional Differences in the Move Toward the Cloud is about how the IaaS market will evolve differently in each of the major regions of the world. We’re seeing significant adoption differences between the United States, Western Europe (and Canada follows the WEU pattern), and Asia, both in terms of buyer desires and service provider evolution.

Web Hosting and Cloud Infrastructure Prices, North America, 2010 is my regular update to the state of the hosting and cloud IaaS markets, targeted at end-users (IT buyers).

Content Delivery Network Services and Pricing, 2010 is my regular update of end-user (buyer) advice, providing a brief overview of the current state of the market.

Is a Cloud Content Delivery Network Right for You? is a look at Amazon CloudFront and the other emerging “cloud CDN” services (Rackspace/Limelight, GoGrid/EdgeCast, Microsoft’s CDN for Azure, etc.). It’s a hot topic of inquiry at the moment (interestingly, mostly among Akamai customers hoping to reduce their costs).

Some of my colleagues have also recently published notes that might be of interest to those of you who follow my research. Those notes include:
