Blog Archives
Cogent’s Utility Computing
A client evaluating cloud computing solutions asked me about Cogent’s Utility Computing offering (and showed me a nice little product sheet for it). Never having heard of it before, and not having a clue from the marketing collateral what this was actually supposed to be (and finding zero public information about it), I got in touch with Cogent and asked them to brief me. I plan to include a blurb about it in my upcoming Who’s Who note, but it’s sufficiently unusual and interesting that I think it’s worth a call-out on my blog.
Simply put, Cogent is allowing customers to rent dedicated Linux servers at Cogent’s POPs. The servers are managed through the OS level; customers have sudo access. This by itself wouldn’t be hugely interesting (and many CDNs now allow their customers to colocate at their POPs, and might offer self-managed or simple managed dedicated hosting as well in those POPs). What’s interesting is the pricing model.
Cogent charges for this service based on bandwidth alone (on a per-Mbps basis). You pay normal Cogent prices for the bandwidth, plus a per-Mbps surcharge of about $1. In other words, you don’t pay any kind of compute price at all. (You do have to push a certain minimum amount of bandwidth for Cogent to sell you the service at all, though.)
This means that if you want to construct your own fly-by-night CDN (or even just a high-volume origin for static content), this is one way to do it. Figure you could easily be looking at $5/Mbps pricing or below, all-in. If cheap, crude, and high-volume is what you’re after, these servers in a couple of POPs, plus a global load-balancing service of some sort, will do it. For anything that isn’t performance-sensitive, such as large file downloads that happen in the background (game content updates, for instance), this could turn out to be a pretty interesting alternative.
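To make that all-in number concrete, here’s a trivial back-of-the-envelope sketch in Python. The transit rate and commit level are my own illustrative assumptions, not Cogent’s price list; only the roughly $1/Mbps server surcharge comes from the briefing.

```python
# Back-of-the-envelope cost model for bandwidth-priced dedicated servers.
# The rates and commit level below are illustrative assumptions, not quotes.

BANDWIDTH_RATE = 4.00    # assumed $/Mbps/month for the IP transit itself
SERVER_SURCHARGE = 1.00  # ~$1/Mbps/month surcharge for the managed server
COMMIT_MBPS = 1000       # assumed minimum commit to qualify for the service

def monthly_cost(committed_mbps: int) -> float:
    """Total monthly cost: you pay only per-Mbps, never a compute fee."""
    return committed_mbps * (BANDWIDTH_RATE + SERVER_SURCHARGE)

if __name__ == "__main__":
    total = monthly_cost(COMMIT_MBPS)
    print(f"{COMMIT_MBPS} Mbps commit: ${total:,.0f}/month "
          f"(${total / COMMIT_MBPS:.2f}/Mbps all-in)")
```

Under those assumptions the all-in rate lands right at $5/Mbps; push the transit price lower (Cogent is famously aggressive on that) and the all-in number drops with it.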
I’ve always thought that Level 3’s CDN service, with its “it costs what our bandwidth costs” pricing tactic, was a competitive assault not so much on Limelight (or even AT&T, which has certainly gotten into mudpit pricing fights with Level 3) as on Cogent and other providers of low-cost, high-volume bandwidth — i.e., an attempt to convince people that rather than buying servers, colocation space, and cheap high-volume bandwidth, they should just buy CDN services. So it makes sense for Cogent to strike back with a product that sidesteps the technology investments that would be required to get into the CDN mudpit directly.
I’ll be interested to see how this evolves — and will be curious to see if anyone else launches a similar service.
Who’s Who in CDN
I’m currently writing a research note called “Who’s Who in Content Delivery Networks”. The CDN space isn’t quite large enough yet to justify one of Gartner’s formal rating methodologies (the Magic Quadrant or MarketScope), but with the proliferation of vendors who can credibly serve enterprise customers, the market deserves a vendor note.
The format of a Who’s Who looks a lot like our Cool Vendors format — company name, headquarters location, website, a brief blurb about who they are and what they do, and a recommendation for what to use them for. I like to keep my vendor write-up formats pretty consistent, so each CDN has a comment about its size (and length of time in the business and funding source, if relevant), its footprint, services offered, whether there’s an application acceleration solution and if so what the technology approach to that is, pricing tier (premium-priced, competitive, etc.), and general strategy.
Right now, I’m writing up the ten vendors that are most commonly considered by enterprise buyers of CDN, and then planning to add some quick bullet points on other vendors in the ecosystem who aren’t CDNs themselves (equipment vendors, enterprise internal CDN alternatives, etc.), probably more in a ‘here are some vendor names with no blurbs’ fashion.
For those of you who follow my research, I’m also about to publish my yearly update of the CDN market that’s targeted at our IT buyer clients (i.e., how to choose a vendor and what the pricing is like), along with another note on the emergence of cloud CDNs (to answer a very common inquiry, which is, “Can I replace my Akamai services with Amazon?”).
Q1 2010 inquiry in review
My professional life has gotten even busier — something that I thought was impossible, until I saw how far out my inquiry calendar was being booked. As usual, my blogging has suffered for it, as has my writing output in general. Nearly all of my writing now seems to be done in airports, while waiting for flights.
The things that clients are asking me about have changed in a big way since my Q4 2009 commentary, although this is partially due to an effort to shift some of my workload to other analysts on my team, so I can focus on the stuff that’s cutting edge rather than routine. I’ve been trying to shift as much of the routine colocation and data center leasing inquiry onto other analysts as possible, for instance; reviewing space-and-power contracts isn’t exactly rocket science, and I can get the trend information I need without having to look at a zillion individual contracts.
Probably the biggest surprise of the quarter is how intensively my CDN inquiry has ramped up. It’s Akamai and more Akamai, for the most part — renewals, new contracts, and almost always, competitive bids. With aggressive new pricing across the board, a willingness to negotiate (and an often-confusing contract structure), and serious prospecting for new business, Akamai is generating a volume of CDN inquiry for me that I’ve never seen before, and I talk to a lot of customers in general. Limelight is in nearly all of these bids, too, by the way, and the competition in general has been very interesting — particularly AT&T. Given Gartner’s client base, my CDN inquiry is highly diversified; I see a tremendous amount of e-commerce, enterprise application acceleration, electronic software delivery and whatnot, in addition to video deals. (I’ve seen as many as 15 CDN deals in a week, lately.)
The application acceleration market in general is seeing some new innovation, especially on the software end (check out vendors like Aptimize), and more ADN offerings will be launched by the major CDN vendors this year. The question of “Do you really need an ADN, or can you get enough speed with hardware and/or software?” is certainly a key one right now, given the big delta in price between pure content offload and dynamic acceleration.
By the way, if you have not seen Akamai CEO Paul Sagan’s “Leading through Adversity” talk given at MIT Sloan, you might find it interesting — it’s his personal perspective on the company’s history. (His speech starts around the 5:30 mark, and is followed by open Q&A, although unfortunately the audio cuts out in one of the most interesting bits.)
Most of the rest of my inquiry time is focused around cloud computing inquiries, primarily of a general strategic sort, but also with plenty of near-term adoption of IaaS. Traditional pure-dedicated hosting inquiry, as I mentioned in my last round-up, is pretty much dead — just about every deal has some virtualized utility component, and when it doesn’t, the vendor has to offer some kind of flexible pricing arrangement. Unusually, I’m beginning to take more and more inquiry from traditional data center outsourcing clients who are now looking at shifting their sourcing model. And we’re seeing some sharp regional trends in the evolution of the cloud market that are the subject of an upcoming research note.
Equipment vendors get into the CDN act
Carriers are very interested in CDNs, whether it’s getting into the CDN market themselves, or “defensively” deploying content delivery solutions as part of an effort to reduce the cost of broadband service or find a way to monetize the gobs of video traffic now flowing to their end-users.
So far, most carriers have chosen partnerships with companies like Limelight and Edgecast, rather than entering the market with their own technology and services, but this shouldn’t be regarded as the permanent state of things. The equipment vendors clearly recognized the interest some time ago, and the solutions are finally starting to flow down the pipeline.
Last year, Alcatel-Lucent bought Velocix, a CDN provider, in order to get its carrier-focused technology, Velocix Metro (which I’ve written about before). Velocix Metro is a turnkey CDN solution, with the interesting added twist of aggregation capability across multiple Velocix customers.
Last month, Juniper partnered with Ankeena in order to bring a solution called Juniper MediaFlow to market. It’s a turnkey CDN solution incorporating both content routing and caches.
This month, with the introduction of the CRS-3 core router, Cisco announced its Network Positioning System (NPS). NPS is not a CDN solution of any sort. Rather, it’s a system that gathers intelligence about the network and uses it to compute the proximity between two endpoints; in effect, it provides a programmatic way to advise clients and servers about the best place to find networked resources. Today, we get a crude form of this from BGP anycast; NPS, by contrast, is supposed to use information from layers 3 through 7 and incorporate policy-based capabilities.
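To give a sense of the general idea (this is emphatically not Cisco’s actual interface, which I haven’t seen; the names and cost numbers below are entirely made up), a proximity-advice service conceptually boils down to something like this:

```python
# Hypothetical sketch of a proximity-advice service, in the spirit of what
# NPS is described as doing. The cost table is invented for illustration;
# a real system would derive it from routing, topology, and policy data.

from typing import Dict, List, Tuple

# (client_region, server_site) -> network "distance" (lower is better)
PROXIMITY: Dict[Tuple[str, str], float] = {
    ("us-east", "nyc-pop"): 5.0,
    ("us-east", "lax-pop"): 70.0,
    ("us-east", "lon-pop"): 80.0,
    ("eu-west", "lon-pop"): 8.0,
    ("eu-west", "nyc-pop"): 75.0,
    ("eu-west", "lax-pop"): 140.0,
}

def rank_endpoints(client_region: str, candidates: List[str]) -> List[str]:
    """Return candidate server sites ordered from best to worst proximity."""
    return sorted(
        candidates,
        key=lambda site: PROXIMITY.get((client_region, site), float("inf")),
    )

print(rank_endpoints("eu-west", ["nyc-pop", "lax-pop", "lon-pop"]))
# -> ['lon-pop', 'nyc-pop', 'lax-pop']
```

The point is the shape of the interface: ask a question about proximity and get a ranked answer back, rather than relying on whichever server BGP happens to route you to.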
And in news of little companies, Blackwave has been chugging along, releasing a new version of its platform earlier this month. Blackwave is one of those companies that I expect to be eyed as an acquisition target by equipment vendors looking to add turnkey CDN solutions to their portfolios; Blackwave has a highly efficient ILM-ish media storage platform, coupled with some content routing capabilities. (Its flagship CDN customer is CDNetworks, but its other customers are primarily service providers.)
Google’s DNS protocol extension and CDNs
There have been a number of interesting new developments in the content routing space recently — specifically, the issue of how to get content to end-users from the most optimal point on the network. I’ll be talking about Cisco and Juniper in a forthcoming blog post, but for now, let’s start with Google:
A couple of weeks ago, Google and UltraDNS (part of Neustar) proposed an extension to the DNS protocol that would allow DNS servers to obtain the IP address (or at least a truncated portion of it) of the end-user who originally made the request. DNS is normally recursive: the end-user queries his local DNS resolver, which then makes queries up the chain on his behalf. The problem with this is that the resolver is not necessarily actually local; it might be far, far away from the user. And the DNS servers of things like CDNs use the source of the DNS query to figure out where the user is, which means that they actually return an optimal server for the resolver’s location, not the user’s.
I wrote about this problem in some detail about a year and a half ago, in a blog post: The nameserver as CDN vantage point. You can go back and look at that for a more cohesive explanation and a look at some numbers that illustrate how much of a problem resolver locations create. The Google proposal is certainly a boon to CDNs as well as anyone else that relies upon DNS for global load-balancing solutions. In the ecosystem where it’s supported, the enhancement will also give a slight performance boost to CDNs with more local footprint, by helping to ensure that the local cache is actually more local to the user. The resolver issue can, as I’ve noted before, erase the advantages of having more footprint closer to the edge, since that edge footprint won’t be used unless there are local resolvers that map to it. Provide the user’s IP, though, and you can figure out exactly what the best server for him is.
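Here’s a toy sketch of why this matters. The POPs and address ranges are entirely hypothetical, and a real CDN’s mapping logic is vastly more sophisticated; the only point is what happens when the authoritative DNS sees the resolver’s address versus the user’s.

```python
# Toy illustration of the resolver-location problem. The POPs and address
# ranges are invented; real CDNs use far richer geo/topology data.

import ipaddress

# POP -> prefixes considered "local" to it (purely hypothetical)
POP_PREFIXES = {
    "tokyo-pop":  [ipaddress.ip_network("198.51.100.0/24")],
    "dallas-pop": [ipaddress.ip_network("203.0.113.0/24")],
}
DEFAULT_POP = "dallas-pop"

def best_pop(query_source_ip: str) -> str:
    """Pick the POP whose prefixes contain the address the DNS server sees."""
    addr = ipaddress.ip_address(query_source_ip)
    for pop, prefixes in POP_PREFIXES.items():
        if any(addr in prefix for prefix in prefixes):
            return pop
    return DEFAULT_POP

# A user in Tokyo whose ISP resolver happens to sit in Texas:
print(best_pop("203.0.113.25"))   # resolver's address -> dallas-pop (wrong for the user)
print(best_pop("198.51.100.77"))  # the user's own address -> tokyo-pop
```

Feed the mapping function the resolver’s address and the user gets sent to the wrong continent; feed it the user’s own address, which is what the proposed extension makes possible, and the mapping comes out right.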
There’s no shortage of technical issues to debate (starting with the age-old objection to using DNS for content routing to begin with), and privacy issues have been raised as well, but my expectation is that even if it doesn’t actually get adopted as a standard (and I’m guessing it won’t, by the way), enough large entities will implement it to make it a boon for many users.
Traffic Server returns from the dead
Back in 2002, Yahoo acquired Inktomi, a struggling software vendor whose fortunes had turned unpleasantly with the dot-com crash. While at the time of the acquisition, Inktomi had refocused its efforts upon search, its original flagship product — the one that really drove its early revenue growth — was something called Traffic Server.
Traffic Server was a Web proxy server — essentially, software for running big caches. It delivered significantly greater scalability, stability, and maintainability than the most commonly used alternative, the open-source Squid. It was a great piece of software; at one point, I was one of Inktomi’s largest customers (possibly the actual largest), with several hundred Traffic Servers deployed in production globally, so I speak from experience here. (Those were deployed as ISP forward caches, as opposed to the way Yahoo uses it, as a front-end “reverse proxy” cache.)
Now, as ghosts of the dot-com era resurface, Yahoo is open-sourcing Traffic Server. This is a boon not only to Web sites that need high scalability, but also to organizations who need inexpensive, high-performance proxies for their networks, as well as low-end CDNs whose technology is still Squid-based. There are now enterprise competitors in this space (such as Blue Coat Systems), but open-source remains a lure for many seeking low-cost alternatives. Moreover, service providers and content providers have different needs from the enterprise.
This open-sourcing is only to Yahoo’s benefit. It’s not a core piece of technology, there are plenty of technology alternatives available already, and by opening up the source code to the community, they’re reasonably likely to attract active development at a pace beyond what they could invest in internally.
Recent inquiry trends
It’s been mentioned to me that my “what are you hearing about from clients” posts are particularly interesting, so I’ll try to do a regular update of this sort. I have some limits on how much detail I can blog and stay within Gartner’s policies for analysts, so I can’t get too specific; if you want to drill into detail, you’ll need to make a client inquiry.
It’s shaping up into an extremely busy fall season, with people — IT users and vendors alike — sounding relatively optimistic about the future. If you attended Gartner’s High-Tech Forum (a free event we recently did for tech vendors in Silicon Valley), you saw that we showed a graph of inquiry trends, indicating that “cost” is a declining search term, and “cloud” has rapidly increased in popularity. We’re forecasting a slow recovery, but at least it’s a recovery.
This is budget and strategic planning time, so I’m spending a lot of time with people discussing their 2010 cloud deployment plans, as well as their two- and five-year cloud strategies. There’s some planning going on around data centers, hosting, and CDN services, too, but the longer-term the planning, the more likely it is to involve cloud. (I posted on cloud inquiry trends previously.)
There’s certainly purchasing going on right now, though, and I’m talking to clients across the whole of the planning cycle (planning, shortlisting, RFP review, evaluating RFP responses, contract review, re-evaluating existing vendors, etc.). Because pretty much everything that I cover is a recurring service, I don’t see the end-of-year rush to finish spending 2009’s budget, but this is the time of year when people start to work on the contracts they want to go for as soon as 2010’s budget hits.
My colo inquiries this year have undergone an interesting shift towards local (and regional) data centers, rather than national players, reflecting a shift in colocation from being primarily an Internet-centric model, to being one where it’s simply another method by which businesses can get data center space. Based on the planning discussions I’m hearing, I expect this is going to be the prevailing trend going forward, as well.
People are still talking about hosting, and there are still plenty of managed hosting deals out there, but very rarely do I see a hosting deal now that doesn’t have a cloud discussion attached. If you’re a hoster and you can’t offer capacity on demand, most of my clients will now simply take you off the table. It’s an extra kick in the teeth if you’ve got an on-demand offering but it’s not yet integrated with your managed services and/or dedicated offerings; now you’re competing as if you were two providers instead of one.
The CDN wars continue unabated, and competitive bidding is increasingly the norm, even in small deals. Limelight Networks fired a salvo into the fray yesterday, with an update to their delivery platform that they’ve termed “XD”. The bottom line on that is improved performance at a baseline for all Limelight customers, plus a higher-performance tier and enhanced control and reporting for customers who are willing to pay for it. I’ll form an opinion on its impact once I see some real-world performance data.
There’s a real need in the market for a company that can monitor actual end-user performance and do consulting assessments of multiple CDNs and origin configurations. (It’d be useful in the equipment world, too, for ADCs and WOCs.) Not everyone can or wants to deploy Keynote or Gomez or Webmetrics for this kind of thing, those companies aren’t necessarily eager to do a consultative engagement of this sort, and practically every CDN on the planet has figured out how to game their measurements to one extent or another. That doesn’t make such tools worthless for assessments, but real-world data from actual users (via JavaScript agents, video player instrumentation, download client instrumentation, etc.) is still vastly preferable. Practically every client I speak to wants to do performance trials, but the means available for doing so are still overly limited and very expensive.
All in all, things are really crazy busy. So busy, in fact, that I ended up letting a whole month go by without a blog post. I’ll try to get back into the habit of more frequent updates. There’s certainly no lack of interesting stuff to write about.
Hype cycles
I’ve recently contributed to a couple of our hype cycles.
Gartner’s very first Hype Cycle for Cloud Computing features a whole array of cloud-related technologies and services. One of the most interesting things about this hype cycle, I think, is the sheer number of concepts that we believe will hit the plateau of productivity in just two to five years. For a nascent technology, that’s pretty significant — we’re talking about a fundamental shift in the way that IT is delivered, in a very short time frame. However, a lot of the concepts in this hype cycle haven’t yet hit the peak of inflated expectations — you can expect plenty more hype to be coming your way. There’s a good chance that for the IaaS elements that I focus on, the crash down into the trough of disillusionment will be fairly brief and shallow, but I don’t think it can be avoided. Indeed, I can already tell you tales of clients who got caught up in the overhype and got themselves into trouble. But the “try it and see” aspect of cloud IaaS means that expectations and reality can be realigned much faster than they can if you’re, say, spending a year deploying a new technology in your data center. With the cloud, you’re never far from actually being able to try something and see if it fits your needs.
My hype cycle profile for CDNs appears on our Media Industry Content hype cycle, as well as our brand-new TV-focused (digital distribution and monetization of video) Media Broadcasting hype cycle. Due to the deep volume discounts media companies receive from CDNs, the value proposition is and will remain highly compelling, although I do hear plenty of rumblings about both the desire to use excess origin capacity and the possibilities the cloud offers for delivery and media archival.
I was involved in, but am not a profile author on, the Hype Cycle for Data Center Power and Cooling Technologies. If you are a data center engineering geek, you’ll probably find it to be quite interesting. Ironically, in the midst of all this new technology, a lot of data center architecture and engineering companies still want to build data centers the way they always have — known designs, known costs, little risk to them… only you lose when that happens. (Colocation companies, who have to own and operate these data centers for the long haul, may be more innovative, but not always, especially since many of them don’t design and build themselves, relying on outside expertise for that.)
Next round, Akamai vs. Limelight
In CDN news this past weekend, a judge has overturned the jury verdict in the Akamai vs. Limelight patent infringement case. Akamai has said it intends to appeal.
The judge cited Muniauction v. Thomson Corp. as the precedent for a judgment as a matter of law; that case basically says that if you have a method claim in a patent that involves steps performed by multiple parties, you cannot claim direct infringement unless one party exercises control over the entire process.
I have not read the court filing yet, but based on the citation of precedent, it’s a good guess that because the CDN patent methods generally involve steps beyond the provider’s control, the case falls under this precedent. Unexpected, at least to me, and rather fascinating for the IP law watchers among you: in our increasingly federated, distributed, outsourced IT world, this would seem to raise a host of intellectual property issues for multi-party transactions, which are in some ways inherent to web services.
Research du jour
My newest research notes are all collaborative efforts.
Forecast: Sizing the Cloud; Understanding the Opportunities in Cloud Services. This is Gartner’s official take on cloud segmentation and forecasting through 2013. It was a large-team effort; my contribution was primarily on the compute services portion.
Invest Insight: Content Delivery Network Arbitrage Increases Market Competition. This is a note specifically for Gartner Invest clients, written in conjunction with my colleague Frank Marsala (a former sell-side analyst who heads up our telecom sector for investors). It’s primarily about Conviva, and also touches on Cotendo, but its key point is to look not at particular companies but at long-term, technology-enabled trends.
Cool Vendors in Cloud Computing Management and Professional Services, 2009. This is part of our annual “cool vendors” series highlighting small vendors who we think are doing something notable. It’s a group effort, and we pick the vendors via committee. (And no, there is no way to buy your way into the report.) This year’s picks (never a secret, since vendors usually do press releases) are Appirio, CohesiveFT, Hyperic, RightScale, and Ylastic.