Google builds a CDN for its own content
An article in the Wall Street Journal today describes Google’s OpenEdge initiative (along with a lot of spin around net neutrality, resulting in a Google reply on its public policy blog).
Basically, Google is trying to convince broadband providers to let it place caches within their networks — effectively, pursuing the same architecture that a deep-footprint CDN like Akamai uses, but for Google content alone.
Much of the commentary around this seems to center on the idea that if Google can use this to obtain better performance for its content and applications, everyone else is at a disadvantage and it’s a general stab at net neutrality. (Even Om Malik, who is not usually given to mindless panic, asserts, “If Google can buy better performance for its service, your web app might be at a disadvantage. If the cost of doing business means paying baksheesh to the carriers, then it is the end of innovation as we know it.”)
I think this is an awful lot of hyperbole. Today, anyone can buy better performance for their Web content and applications by paying money to a CDN. And in turn, the CDNs pay baksheesh, if you want to call it that, to the carriers. Google is simply cutting out the middleman, and given that it accounts for more traffic on the Internet than most CDNs, it’s neither illogical nor commercially unreasonable.
Other large content providers — Microsoft and AOL notably on a historical basis — have built internal CDNs in the past; Google is just unusual in that it’s attempting to push those caches deeper into the network on a widespread basis. I’d guess that it’s YouTube, more than anything else, that’s pushing Google to make this move.
This move is likely driven at least in part by the fact that most of the broadband providers simply don’t have enough 10 Gbps ports for traffic exchange (and space and power constraints in big peering points like Equinix’s aren’t helping matters, making it artificially hard for providers to get the expansions necessary to put big new routers into those facilities). Video growth has sucked up a ton of capacity. Google, and YouTube in particular, is a gigantic part of video traffic. If Google is offering to alleviate some of that logjam by putting its servers deeper into a broadband provider’s network, that might be hugely attractive from a pure traffic engineering standpoint. And providers likely trust Google to have enough remote management and engineering expertise to ensure that those cache boxes are well-behaved and not annoying to host. (Akamai has socialized this concept well over much of the last decade, so this is not new to the providers.)
I suspect that Google wouldn’t even need to pay to do this. For the broadband providers, the traffic engineering advantages, and the better performance to end-users, might be enough. In fact, this is the same logic that explains why Akamai doesn’t pay for most of its deep-network caches. It’s not that this is unprecedented. It’s just that this is the first time that an individual content provider has reached the kind of scale where they can make the same argument as a large CDN.
The cold truth is that small companies generally do not enjoy the same advantages as large companies. If you are a small company making widgets, chances are that a large company making widgets has a lower materials cost than you do, because they are getting a discount for buying in bulk. If you are a small company doing anything whatsoever, you aren’t going to see the kind of supplier discounts that a large company gets. The same thing is true for bandwidth — and for that matter, for CDN services. And big companies often leverage their scale into greater efficiency, to boot; for instance, unsurprisingly, Gartner’s metrics data shows that the average cost of running servers drops as you get more servers in your data center. Google employs both scale and efficiency leverage.
One of the key advantages of the emerging cloud infrastructure services, for start-ups, is that such services offer the leverage of scale, on a pay-by-the-drink basis. With cloud, small providers can essentially get the advantage of big providers by banding together into consortiums or paying an aggregator. However, on the deep-network CDN front, this probably won’t help. Highly distributed models work very well for extremely popular content. For long-tail content, cache hit ratios can be too low for it to be really worthwhile. That’s why it’s doubtful that you’ll see, say, Amazon’s CloudFront CDN push deep rather than continuing to follow a megaPOP model.
Ironically, because caching techniques aren’t as efficient for small content providers, it might actually be useful to them to be able to buy bandwidth at a higher QoS.
Cloud research
I am spending as much of my research time as possible on cloud these days, although my core coverage (colocation, hosting, and CDNs) still demands most of my client-facing time.
Reflecting the fact that hosting and cloud infrastructure services are part of the same broad market (if you’re buying service from Joyent or GoGrid or MediaTemple or the like, you’re buying hosting), the next Gartner Magic Quadrant for Web Hosting will include cloud providers. That means I’m currently busy working on an awful lot of stuff, preparatory to beginning the formal process in January. I know we’ll be dealing with a lot of vendors who have never participated in a Magic Quadrant before, which should make this next iteration personally challenging but hopefully very interesting to our clients and exciting to vendors in the space.
Anyway, I have two new research notes out today:
Web Hosting and Cloud Infrastructure Prices, North America, 2008. This defines a segmentation for the emerging cloud infrastructure services market, and provides guidance on current pricing for the various categories of Web hosting services, including cloud services.
Dataquest Insight: A Service Provider Road Map to the Cloud Infrastructure Transformation. This is a note targeted at hosting companies, carriers, IT outsourcers, and others who are in, or plan to enter, the hosting or cloud infrastructure services markets. It’s a practical guide to the evolving market, with a look at product and customer segmentation, the financial impacts, and the practicalities of evolving from traditional hosting to the cloud.
Gartner clients only for those notes, sorry.
How badly do you need to keep that revenue?
If a customer really wants to leave, you are probably best off letting them leave. Certainly, if they’ve reached the end of their contract, and you are actively engaged in dialogue with one another, you should note little things like, “Your contract auto-renews”.
What not to do: Not tell the customer about the auto-renewal, then essentially point and laugh when they complain that they’re stuck with you for the next several years. Yes, absolutely, it’s the customer’s fault when this happens, and they have only themselves to blame, but it is still a terrible way to do business. (For those of you wondering how this kind of thing happens: Many organizations don’t do a great job of contract management, and it’s the kind of thing that is often lost in the shuffle of a merger, acquisition, re-org, shift from distributed to centralized IT, etc.)
There are all kinds of variants on this — lengthy multi-year auto-renewals, auto-renewals where the price can reset to essentially arbitrary levels, and other forms of “if you auto-renew we get to screw you” clauses, sometimes coupled with massive termination penalties. We’re not talking about month-to-month extensions here, which are generally to the mutual benefit of provider and customer in instances where a new contract simply hasn’t been negotiated yet. We’re really talking about traps for the unwary.
Unhappy customers are no good, but they often sort of gloom along until the end of their contracts and quietly leave. Customers who were unhappy and that you’ve forced into a renewal now hate you. They’ll tell everyone they can how you screwed them. (And if they tell an analyst, that analyst will probably tell anyone they ever talk to about you how you screwed another client of theirs. We like objectivity, but we also love a good yarn.) It has a subtle long-term effect on your business that is probably not worth whatever revenue coup you feel you pulled off. An angry customer can torpedo you with potential prospects as easily as a happy customer can bring you referrals.
There’s no free lunch on the Internet
There’s no free lunch on the Internet. That’s the title of a research note that I wrote over a year ago, to explain the peering ecosystem to clients who wanted to understand how the money flows. What we’ve got today is the result of a free market. Precursor’s Scott Cleland thinks that’s unfair — he claims Google uses 21 times more bandwidth than it pays for. Now, massive methodological flaws in his “study” aside, his conclusions betray an utter lack of understanding of the commercial arrangements underlying today’s system of Internet traffic exchange in the United States.
Internet service providers (whether backbone providers or broadband providers) offer bandwidth at a particular price, or a settlement-free peering arrangement. Content providers negotiate for the lowest prices they can get. ISPs interconnect with each other for a fee, or settlement-free. And everyone’s trying to minimize their costs.
So, let’s say that you’re a big content provider (BCP). You, Mr. BCP, want to pay as little for bandwidth as possible. So if you’ve got enough clout, you can go to someone with broadband eyeballs, like Comcast, and say, “Please can I have free peering?” And Comcast will look at your traffic, and say to itself, “Hmm. If I don’t give you free peering, you’ll go buy bandwidth from someone like Level 3, and I will have to take your singing cow videos over my peer with them. That will increase my traffic there, which will have implications for my traffic ratios, which might mean that Level 3 would charge me for the traffic. It’s better for me to take your traffic directly (and get better performance for my end-users, too) than to congest my other peer.”
That example is a cartoonishly grotesque oversimplification, but you get the idea: Comcast is going to consider where your traffic is flowing and decide whether it’s in their commercial interest to give you settlement-free peering, charge you a low rate for bandwidth, or tell you that you have too little traffic and you can pay them more money or buy from someone else. They’re not carrying your traffic as some kind of act of charity on their part. Money is changing hands, or the parties involved agree that the arrangement is fairly reciprocal and therefore no money needs to change hands.
Cleland’s suggestion that Google is somehow being subsidized by end-users or by the ISPs is ludicrous. Google isn’t forcing anyone to peer with them, or holding a gun to anyone’s head to sell them cheap bandwidth. Providers are doing it because it’s a commercially reasonable thing to do. And users are paying for Internet access — and part of the value that they’re paying for is access to Google. The cost of accessing content is implicit in what users are paying.
Now, are users paying too little for what they get? Maybe. But nobody forced the ISPs to sell them broadband at low prices, either. Sure, the carriers and cable companies are in a price war — but this is capitalism. It’s a free market. A free market is not license to act stupidly and hope that there’s a bailout coming down the road. If you, a vendor, price a service below what it costs, expect to pay the piper eventually. Blaming content providers for not paying their “fair share” is nothing short of whining about a commercial situation that ISPs have gotten themselves into and continue to actively promote.
Google has posted a response to Cleland’s “research” that’s worth reading, as are the other commentaries it links to. I’ll likely be posting my own take on the methodological flaws and dubious facts, as well.
The week’s observations
My colleague Tom Bittman has written a great summary of the hot topics from the Gartner data center conference this past week.
Some personal observations as I wrap up the week…
The future of infrastructure is the cloud. I use “cloud” in a broad sense; many larger organizations will be building their own “private clouds” (which technically aren’t actually clouds, but the “private cloud” terminology has sunk in and probably won’t be easily budged). I was surprised by how many people at the conference wanted to talk to me about initial use of public clouds, how to structure cloud services within their own organizations, and what they could learn from public cloud and hosting services.
Cloud demos are extremely compelling. I was using demos of several clouds in order to make my points to people asking about cloud computing: Terremark’s Enterprise Cloud, Rackspace’s Mosso, and Amazon’s EC2 plus RightScale. I showed some screen shots from 3Tera’s website as well. I did not warn the providers that I was going to do this, and none of them were at the conference (a pity, since I suspect this would have been lead-generating). It was interesting to see how utterly fascinated people were — particularly with the Terremark offering, which is essentially a private cloud. (People were stopping me in the hallways to say, “I hear you have a really cool cloud demo.”) I was showing the trivially easy point-and-click process of provisioning a server, which, I think, provided a kind of grounding for “here is how the cloud could apply to your business”.
Colocation is really, really hot. My one-on-one schedule was crammed with colocation questions, as were my conversations with attendees in hallways and over meals, yet I was still shocked by how many people showed up to my Friday, 8 am talk on colocation — the best-attended talk of the slot, I was told (and one cursed by lots of A/V glitches). Over the last month, we’ve seen demand accelerate and supply projections tighten — neither businesses nor data center providers can build fast enough right now.
A crazy conference week, like always, but tremendously interesting.
New CDN research notes
I have three new research notes out:
Determine Your Video Delivery Requirements. When I talk to clients, I often find that IT is trying to source a video delivery solution without having much of an idea of what the requirements actually are. This note is directed at them; it’s intended to serve as a framework for discussions with the content owners.
Toolkit: Determining Your Content Delivery Network Requirements. This toolkit consists of three Excel worksheets. The first gathers a handful of high-level requirements, in order to figure out what type of vendor you’re probably looking for. The second helps you estimate your volume and convert between the three typical measurements used (Mbps, MPVs, or GB delivered). The third is a pricing estimator and converter.
Purchasing Content Delivery Network Services. This is a practical guide to buying CDN services, targeted towards mid-sized and enterprise purchasers.
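To make the volume-conversion idea in the toolkit above concrete, here is a minimal sketch of the kind of Mbps-to-GB-delivered arithmetic involved. This is my own illustrative code, not part of the Excel toolkit itself, and it assumes a flat, sustained traffic rate over a 30-day month (real traffic is bursty, which is exactly why buyers need a converter between rate-based and volume-based billing):

```python
# Hypothetical sketch of converting between two common CDN billing measures:
# a sustained Mbps rate and GB delivered per month. Assumes a 30-day month
# and decimal (SI) units, i.e. 1 GB = 1000 MB.

SECONDS_PER_MONTH = 30 * 24 * 3600  # simplifying assumption: 30-day month

def mbps_to_gb_per_month(mbps: float) -> float:
    """GB delivered in a month by a rate sustained flat at `mbps` megabits/sec."""
    megabits = mbps * SECONDS_PER_MONTH
    return megabits / 8 / 1000  # megabits -> megabytes -> gigabytes

def gb_per_month_to_mbps(gb: float) -> float:
    """Average Mbps implied by a monthly total of `gb` gigabytes delivered."""
    return gb * 1000 * 8 / SECONDS_PER_MONTH

# Example: a flat 100 Mbps stream delivers 32,400 GB in a 30-day month.
print(round(mbps_to_gb_per_month(100)))
```

Note that a 95th-percentile Mbps commit (the usual rate-based billing measure) will overstate the GB actually delivered, since traffic rarely runs flat at the billed rate; a real estimator would apply a utilization factor.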
The .tel sunrise, and some irony
Telnic would like to become the Internet’s white pages, via the DNS as well as a Web-based presence; the “sunrise” (first registrations) on their .tel domain begins on December 3rd.
In the meantime, they’re taking trial sign-ups. I signed up. Telnic promptly emailed me a registration confirmation, sending my username and password to me in cleartext.
This company would like me to believe that they can be trusted to keep my confidential personal (or business) information secure…
Electronic marketplaces aren’t free-for-alls
Much has been made of the theoretically democratizing effect of electronic marketplaces, like the iPhone store. But it’s worth noting that such marketplaces can be, like the iPhone store, gated communities, not free-for-alls where anyone can hawk their wares.
On Friday, Google was set to launch a voice recognition app on the iPhone. Now we’re being told it will probably be available today. The reason for the delay: Apple hasn’t approved the app for the store, for reasons that currently remain mysterious. Google, obviously, wasn’t expecting the delay, since it mounted a blizzard of publicity around the Friday launch, apparently in full anticipation that the app would be available then. Now, Google is well-known for the chaos of its internal organization, but it doesn’t seem unreasonable to believe that their PR people have it pretty well together, which leaves one wondering what Apple was thinking.
There have been regular App Store approval issues, of course, but the Google app is particularly high-profile.
Vendor-controlled open marketplaces are only as open as the vendor wants to make them. That’s true of the marketplaces evolving around cloud services, too. Don’t lose sight of the fact that vendor-controlled ultimately means that the vendor has the power to do anything that the market will tolerate them doing, which at the moment can be quite considerable.
Recently-published research
Here’s a quick round-up of some of my recently-published research.
Is Amazon EC2 Right For You? This is an introduction to Amazon’s Elastic Compute Cloud, written for a mildly technical audience. It summarizes Amazon’s capabilities, the typical business case for using it, and what you’ve got to do to use it. If you’re an engineer looking for a quick briefing, or you want to show a “what this newfangled thing is” summary to your manager, or you’re an investor trying to understand what exactly it is that Amazon does, this is the document for you.
Dataquest Insight: Web Hosting, North America, 2006-2012. This is an in-depth look at the colocation and hosting business, together with market forecasts and trends. (Investors may also want to look at the Invest Implications.)
Dataquest Insight: Content Delivery Networks, North America, 2006-2012. This is an in-depth look at the CDN market, segment-by-segment, with market forecasts and trends. (Investors may also want to look at the Invest Implications.)
You’ll need to be a Gartner subscriber (or purchase the individual document) in order to view these pieces.
Upcoming research (for publication in the next month): A pricing guide for Web hosting and cloud infrastructure services; a classification scheme and service provider roadmap for cloud offerings; a toolkit for CDN requirements gathering and price estimation; a framework for gathering video requirements; and a CDN selection guide.
Quick takes: Comcast, Cogent, IronScale
Some quick takes on recent news:
Comcast P4P Results. Comcast is one of the ISPs working with hybrid-P2P CDN Pando Networks on a trial, and is showing better numbers than its competitors. The takeaway: Broadband ISPs are actively interested in P2P and CDN technology, and in figuring out a way to monetize all of the video delivery they’re doing to their end-users.
Sprint Depeers Cogent (and Repeers). In this latest round of Cogent’s peering disputes, it’s arguing over a contract it signed with Sprint. The takeaway: Cogent is trying to keep its costs down, and is responsible for driving down bandwidth costs for everyone; its competitors are hitting back, rooted in the belief that Cogent keeps its prices low because it isn’t pulling its fair share of traffic carriage — a belief that gets expressed in disputes over peering settlements.
IronScale Launches. RagingWire (a colo provider in Sacramento) has launched a managed hosting offering. Like SoftLayer, this is rapidly-provisioned dedicated servers and associated infrastructure, but unlike most of the competition in this space, it’s a managed solution. The takeaway: Like I wrote almost three years ago, it’s not about virtualization, it’s about flexibility. (“Beyond the Hype“: clients only, sorry.)