Thirty cities

Analysts travel a lot. Thinking back over the year, here are the cities where I visited clients during 2008…

Northeast: Baltimore, Boston, New York City, Philadelphia, Stamford, Washington DC

South: Atlanta, Austin, Birmingham, Charlotte, Dallas/Fort Worth, Houston, Memphis, Miami, Nashville, Richmond, San Antonio

Midwest: Chicago, Detroit, Milwaukee, Minneapolis/St. Paul, St. Louis

West: Las Vegas, Los Angeles, Portland, San Diego, San Francisco, San Jose

Canada: Montreal, Toronto

Google builds a CDN for its own content

An article in the Wall Street Journal today describes Google’s OpenEdge initiative (along with a lot of spin around net neutrality, resulting in a Google reply on its public policy blog).

Basically, Google is trying to convince broadband providers to let it place caches within their networks — effectively, pursuing the same architecture that a deep-footprint CDN like Akamai uses, but for Google content alone.

Much of the commentary around this seems to center on the idea that if Google can use this to obtain better performance for its content and applications, everyone else is at a disadvantage and it’s a blow to net neutrality in general. (Even Om Malik, who is not usually given to mindless panic, asserts, “If Google can buy better performance for its service, your web app might be at a disadvantage. If the cost of doing business means paying baksheesh to the carriers, then it is the end of innovation as we know it.”)

I think this is an awful lot of hyperbole. Today, anyone can buy better performance for their Web content and applications by paying money to a CDN. And in turn, the CDNs pay baksheesh, if you want to call it that, to the carriers. Google is simply cutting out the middleman, and given that it accounts for more traffic on the Internet than most CDNs do, it’s neither illogical nor commercially unreasonable.

Other large content providers — Microsoft and AOL notably on a historical basis — have built internal CDNs in the past; Google is just unusual in that it’s attempting to push those caches deeper into the network on a widespread basis. I’d guess that it’s YouTube, more than anything else, that’s pushing Google to make this move.

This move is likely driven at least in part by the fact that most of the broadband providers simply don’t have enough 10 Gbps ports for traffic exchange (and space and power constraints in big peering points like Equinix’s aren’t helping matters, making it artificially hard for providers to get the expansions necessary to put big new routers into those facilities). Video growth has sucked up a ton of capacity. Google, and YouTube in particular, is a gigantic part of video traffic. If Google is offering to alleviate some of that logjam by putting its servers deeper into a broadband provider’s network, that might be hugely attractive from a pure traffic engineering standpoint. And providers likely trust Google to have enough remote management and engineering expertise to ensure that those cache boxes are well-behaved and not annoying to host. (Akamai has socialized this concept well over much of the last decade, so this is not new to the providers.)

I suspect that Google wouldn’t even need to pay to do this. For the broadband providers, the traffic engineering advantages and the better performance for end-users might be enough. In fact, this is the same logic that explains why Akamai doesn’t pay for most of its deep-network caches. It’s not that this is unprecedented. It’s just that this is the first time that an individual content provider has reached the kind of scale where it can make the same argument as a large CDN.

The cold truth is that small companies generally do not enjoy the same advantages as large companies. If you are a small company making widgets, chances are that a large company making widgets has a lower materials cost than you do, because they are getting a discount for buying in bulk. If you are a small company doing anything whatsoever, you aren’t going to see the kind of supplier discounts that a large company gets. The same thing is true for bandwidth — and for that matter, for CDN services. And big companies often leverage their scale into greater efficiency, to boot; for instance, unsurprisingly, Gartner’s metrics data shows that the average cost of running servers drops as you get more servers in your data center. Google employs both scale and efficiency leverage.

One of the key advantages of the emerging cloud infrastructure services, for start-ups, is that such services offer the leverage of scale on a pay-by-the-drink basis. With cloud, small companies can essentially get the advantages of big companies by banding together into consortiums or paying an aggregator. However, on the deep-network CDN front, this probably won’t help. Highly distributed models work very well for extremely popular content. For long-tail content, cache hit ratios can be too low for it to be really worthwhile. That’s why it’s doubtful that you’ll see, say, Amazon’s CloudFront CDN push deep rather than continuing to follow a megaPOP model.
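The hit-ratio point can be made concrete with a toy calculation. Assume request popularity follows a Zipf-like distribution and an edge cache holds only the most popular objects; the same cache serves a much smaller share of requests as the catalog grows. This is a deliberate simplification for illustration (real caches use eviction policies and real traffic is messier), not a model of any actual CDN:

```python
# Toy model: fraction of requests an edge cache can serve when it holds
# the `cache_size` most popular objects and request popularity follows
# a Zipf distribution with exponent s. Illustrative only.

def zipf_hit_ratio(catalog_size, cache_size, s=1.0):
    """Probability that a request hits a cache of the top `cache_size` objects."""
    weights = [1.0 / (rank ** s) for rank in range(1, catalog_size + 1)]
    return sum(weights[:cache_size]) / sum(weights)

# Head-heavy catalog (e.g., a popular video site): modest catalog size.
print(zipf_hit_ratio(catalog_size=10_000, cache_size=1_000))

# Long-tail catalog: same cache, a hundred times more objects.
# The hit ratio drops substantially even though the cache is unchanged.
print(zipf_hit_ratio(catalog_size=1_000_000, cache_size=1_000))
```

The same deep-edge cache box is simply worth less per dollar when the content is long-tail, which is the economics behind sticking to a megaPOP model.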

Ironically, because caching techniques aren’t as efficient for small content providers, it might actually be useful to them to be able to buy bandwidth at a higher QoS.

Anti-virus integration with cloud storage

Anti-virus vendor Authentium is now offering its AV-scanning SDK to cloud providers.

Authentium, unlike most other AV vendors, has traditionally been focused at the gateway; they offer an SDK designed to be embedded in applications and appliances. (Notably, Authentium is the scanning engine used by Google’s Postini service.) So courting cloud providers is logical for them.

Anti-virus integration makes particular sense for cloud storage providers. Users of cloud storage upload millions of files a day. Many businesses that use cloud storage do so for user-generated content. AV-scanning a file as part of an upload could be just another API call — one that could be charged for on a per-operation basis, just like GET, PUT, and other cloud storage operations. That would turn AV scanning into a cloud Web service, making it trivially easy for developers to integrate AV scanning into their applications. It’d be a genuine value-add for using cloud storage — a reason to do so beyond “it’s cheap”.
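The "just another API call" idea can be sketched as below. Everything here is hypothetical: the `scan_file` and `put_object` helpers stand in for metered SCAN and PUT operations, and no real cloud storage or Authentium API is implied.

```python
# Sketch of weaving a per-operation AV scan into a cloud-storage upload
# path. The helpers below are invented stand-ins for billable operations;
# this does not depict any real provider's API.

class VirusDetected(Exception):
    pass

def scan_file(data: bytes) -> bool:
    """Stand-in for a metered scan operation; returns True if clean."""
    return b"EICAR" not in data  # toy heuristic, for illustration only

def put_object(bucket: dict, key: str, data: bytes) -> None:
    """Stand-in for a metered PUT operation."""
    bucket[key] = data

def upload_with_scan(bucket: dict, key: str, data: bytes) -> None:
    # Two metered operations instead of one: SCAN, then PUT.
    if not scan_file(data):
        raise VirusDetected(key)
    put_object(bucket, key, data)

bucket = {}
upload_with_scan(bucket, "photos/cat.jpg", b"\xff\xd8...jpeg bytes...")
print(sorted(bucket))
```

From the developer's point of view, the scan is one extra call in the upload path; from the provider's, it is one extra line item on the bill.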

More broadly, security vendors have become interested in offering scanning as a service, although most have desktop installed bases to defend, and thus are looking at it as a supplement to, rather than a replacement for, traditional desktop AV products; see the past news on McAfee’s Project Artemis or Trend Micro’s Smart Protection Network for examples.

Cloud research

I am spending as much of my research time as possible on cloud these days, although my core coverage (colocation, hosting, and CDNs) still demands most of my client-facing time.

Reflecting the fact that hosting and cloud infrastructure services are part of the same broad market (if you’re buying service from Joyent or GoGrid or MediaTemple or the like, you’re buying hosting), the next Gartner Magic Quadrant for Web Hosting will include cloud providers. That means I’m currently busy working on an awful lot of stuff, preparatory to beginning the formal process in January. I know we’ll be dealing with a lot of vendors who have never participated in a Magic Quadrant before, which should make this next iteration personally challenging but hopefully very interesting to our clients and exciting to vendors in the space.

Anyway, I have two new research notes out today:

Web Hosting and Cloud Infrastructure Prices, North America, 2008. This defines a segmentation for the emerging cloud infrastructure services market, and provides guidance on current pricing for the various categories of Web hosting services, including cloud services.

Dataquest Insight: A Service Provider Road Map to the Cloud Infrastructure Transformation. This is a note targeted at hosting companies, carriers, IT outsourcers, and others who are in, or plan to enter, the hosting or cloud infrastructure services markets. It’s a practical guide to the evolving market, with a look at product and customer segmentation, the financial impacts, and the practicalities of evolving from traditional hosting to the cloud.

Gartner clients only for those notes, sorry.

Velocix Metro

CDN provider Velocix has announced the launch of a new product, called Velocix Metro. (I was first briefed on Metro almost eight months ago, so the official launch has been in the works for quite a while.)

Velocix Metro is essentially a turnkey managed CDN service, deployed at locations of an Internet service provider’s choice, and potentially embedded deep into that ISP’s network. The ISP gets a revenue share based on the traffic delivered via their networks from Velocix, plus the ability to do their own delivery from the deployed CDN nodes in their network. Velocix’s flagship customer for this service is Verizon.

You might recall that Velocix is a partner CDN to MediaMelon, which I discussed in the context of CDN overlays a few weeks ago. I believe that these kinds of federated networks are going to become increasingly common, because carriers are the natural choice to provide commoditized CDN services (due to their low network costs), and broadband service providers need some way to monetize the gargantuan and growing volumes of rich content being delivered to their end-user eyeballs.

The economics of the peering ecosystem make it very hard for broadband providers to raise the price of bandwidth bought by content providers, and intermodal competition (i.e., DSL/FiOS vs. cable) creates pricing competition that makes it hard to charge end-users more. So broadband providers need to find another out, and offering up their own CDNs, and thus faster access to their eyeballs, is certainly a reasonable approach. (That means that over the long term, providers that deploy their own CDNs are probably going to be less friendly about placing gear from other CDNs deep within their networks, especially if it’s for free.)

We are entering the period of the rise of the local CDN — CDNs with deep but strictly regional penetration. For instance, if you’re a broadcaster in Italy, with Italian-language programming, you probably aren’t trying to deliver to the world and you don’t want to pay the prices necessary to do so; you want deep coverage within Italy and other Italian-speaking countries, and that’s it. An overlay or federated approach makes it possible to tie together CDNs owned by incumbent regional carriers, giving you delivery in just the places you care about. And that, in turn, creates a compelling business case for every large network provider to have a CDN of their own. Velocix, along with other vendors who can provide turnkey solutions to providers who want to build their own CDN networks, ought to benefit from that shift.

IronScale launches

Sacramento-based colocation provider RagingWire has launched a subsidiary, StrataScale, whose first product is a managed cloud hosting service, IronScale. (I’ve mentioned this before, but the launch is now official.) I’ll be posting more on it once I’ve had time to check out a demo, but here’s a quick take:

What’s interesting is that IronScale is not a virtualized service. The current offering is on dedicated hardware — similar to the approach taken by SoftLayer, but this is a managed service. But it has the key cloud trait of elasticity — the ability to scale up and down at will, without commitments.

IronScale has automated fast provisioning (IronScale claims 3 minutes for the whole environment), management through the OS layer (including services like patch management), an integrated environment that includes the usual network suspects (firewall, load balancing, SSL acceleration), and a 100% uptime SLA. You can buy service on a month-to-month basis or an annual contract. This is a virtual data center offering; there’s a Web portal for provisioning plus a Web services API, along with some useful tricks like cloning and snapshots.
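The combination of dedicated hardware with cloud-style elasticity can be sketched as follows. The class, method names, and fields are all invented for illustration; this is not IronScale's actual API, just the general shape of provisioning, cloning, snapshotting, and releasing servers through API calls rather than tickets.

```python
# Illustrative sketch: a pool of dedicated servers managed with the
# elasticity traits described above (provision, clone, snapshot,
# release at will, no commitments). All names here are hypothetical.

import copy
import itertools

class BareMetalPool:
    def __init__(self, capacity):
        self.capacity = capacity      # physical boxes available
        self.servers = {}             # server id -> state
        self._ids = itertools.count(1)

    def provision(self, image):
        """Bring up a dedicated server from an image; fast and automated."""
        if len(self.servers) >= self.capacity:
            raise RuntimeError("pool exhausted")
        sid = next(self._ids)
        self.servers[sid] = {"image": image, "snapshots": []}
        return sid

    def clone(self, sid):
        """Stand up a copy of an existing server (scale up for a spike)."""
        return self.provision(copy.deepcopy(self.servers[sid]["image"]))

    def snapshot(self, sid):
        """Capture the server's current image for later restore."""
        self.servers[sid]["snapshots"].append(
            copy.deepcopy(self.servers[sid]["image"]))

    def release(self, sid):
        """Scale back down; month-to-month means no penalty for this."""
        del self.servers[sid]

pool = BareMetalPool(capacity=10)
web = pool.provision({"os": "linux", "role": "web"})
web2 = pool.clone(web)   # scale up for a traffic spike
pool.snapshot(web)
pool.release(web2)       # spike passes; hand the box back
print(len(pool.servers))
```

The point of the sketch is that elasticity is a property of the management layer, not of virtualization per se; the same operations work whether the unit underneath is a VM or a physical box.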

It’s worth noting that cloud infrastructure services, in their present incarnation, are basically just an expansion of the hosting market — moving the bar considerably in terms of expected infrastructure flexibility. This is real-time infrastructure, virtualized or not. It’s essentially a challenge to other companies that offer basic managed services — Rackspace, ThePlanet, and so on — but you can also expect it to compete with the VDC hosting offerings that target mid-sized to enterprise organizations.

Google Native Client

Google announced something very interesting yesterday: their Native Client project.

The short form of what this does: You can develop part or all of your application client in a language that compiles down to native code (for instance, C or C++, compiled to x86 assembly), then let the user run it in their browser, in a semi-sandboxed environment that theoretically prevents malicious code from being executed.

Why would you want to do this? Because developing complex applications in JavaScript is a pain, and all of the other options (Java in a browser, Adobe Flex, Microsoft Silverlight) provide only a subset of functionality, and are slower than native applications. That’s one of the reasons that most applications are still done for the desktop.

It’s an ambitious project, not to mention one that is probably making every black-hat hacker on the planet drool right now. The security challenges inherent in this are enormous.

Adobe has previously had a similar thought, in the form of Alchemy, a labs project for a C/C++ compiler that generates code for AVM2 (the virtual machine inside the Flash player). But Google takes the idea all the way down to true native code.

The broader trend has been towards managed code environments and just-in-time compilers (JITs). But the idea of native code with managed-code-like protections is certainly extremely interesting, and the techniques developed will likely be interesting in the broader context of malware prevention in non-browser applications, too.

And while we’re talking about lower-level application infrastructure pies that Google has its fingers in, it’s worth noting that Google has also exhibited significant interest in LLVM (which stands for Low-Level Virtual Machine). LLVM is an open-source project now sponsored by Apple, who hired its developer and is now using it within MacOS X. In layman’s terms, LLVM makes it easier for developers to write new programming languages, and makes it possible to develop composite applications using multiple programming languages. A compiler or interpreter developer can generate LLVM instructions rather than compiling to native code, then let LLVM take care of dealing with the back-end, the final stage of getting it to run natively. But LLVM also makes it easier to do analysis of code, something that is going to be critical if Google’s efforts with Native Client are to succeed. I am somewhat curious if Google’s interests intersect here, or if they’re entirely unrelated (not all that uncommon in Google’s chaotic universe).

Error pages

Royal Pingdom has an interesting and amusing compilation of Web 2.0 error pages.

Of course, error screens sometimes contain what might be considered truthiness at best. My TiVo will show “Scheduled Maintenance” as its error when things don’t work. I suppose it’s more reassuring than “Oops” (and far better than the Java exceptions it spews in all of their failed-SOAP-call glory, when its Rhapsody connection is down, which is frequently).

How badly do you need to keep that revenue?

If a customer really wants to leave, you are probably best off letting them leave. Certainly, if they’ve reached the end of their contract, and you are actively engaged in dialogue with one another, you should note little things like, “Your contract auto-renews”.

What not to do: Not tell the customer about the auto-renewal, then essentially point and laugh when they complain that they’re stuck with you for the next several years. Yes, absolutely, it’s the customer’s fault when this happens, and they have only themselves to blame, but it is still a terrible way to do business. (For those of you wondering how this kind of thing happens: Many organizations don’t do a great job of contract management, and it’s the kind of thing that is often lost in the shuffle of a merger, acquisition, re-org, shift from distributed to centralized IT, etc.)

There are all kinds of variants on this — lengthy multi-year auto-renewals, auto-renewals where the price can reset to essentially arbitrary levels, and other forms of “if you auto-renew we get to screw you” clauses, sometimes coupled with massive termination penalties. We’re not talking about month-to-month extensions here, which are generally to the mutual benefit of provider and customer in instances where a new contract simply hasn’t been negotiated yet. We’re really talking about traps for the unwary.

Unhappy customers are no good, but they often sort of gloom along until the end of their contracts and quietly leave. Customers who were unhappy and that you’ve forced into a renewal now hate you. They’ll tell everyone they can how you screwed them. (And if they tell an analyst, that analyst will probably tell anyone they ever talk to about you how you screwed another client of theirs. We like objectivity, but we also love a good yarn.) It has a subtle long-term effect on your business that is probably not worth whatever revenue coup you feel you pulled off. An angry customer can torpedo you with potential prospects as easily as a happy customer can bring you referrals.

There’s no free lunch on the Internet

There’s no free lunch on the Internet. That’s the title of a research note that I wrote over a year ago, to explain the peering ecosystem to clients who wanted to understand how the money flows. What we’ve got today is the result of a free market. Precursor’s Scott Cleland thinks that’s unfair — he claims Google uses 21 times more bandwidth than it pays for. Now, massive methodological flaws in his “study” aside, his conclusions betray an utter lack of understanding of the commercial arrangements underlying today’s system of Internet traffic exchange in the United States.

Internet service providers (whether backbone providers or broadband providers) offer bandwidth at a particular price, or a settlement-free peering arrangement. Content providers negotiate for the lowest prices they can get. ISPs interconnect with each other for a fee, or settlement-free. And everyone’s trying to minimize their costs.

So, let’s say that you’re a big content provider (BCP). You, Mr. BCP, want to pay as little for bandwidth as possible. So if you’ve got enough clout, you can go to someone with broadband eyeballs, like Comcast, and say, “Please can I have free peering?” And Comcast will look at your traffic, and say to itself, “Hmm. If I don’t give you free peering, you’ll go buy bandwidth from someone like Level 3, and I will have to take your singing cow videos over my peer with them. That will increase my traffic there, which will have implications for my traffic ratios, which might mean that Level 3 would charge me for the traffic. It’s better for me to take your traffic directly (and get better performance for my end-users, too) than to congest my other peer.”

That example is a cartoonishly grotesque oversimplification, but you get the idea: Comcast is going to consider where your traffic is flowing and decide whether it’s in their commercial interest to give you settlement-free peering, charge you a low rate for bandwidth, or tell you that you have too little traffic and you can pay them more money or buy from someone else. They’re not carrying your traffic as some kind of act of charity on their part. Money is changing hands, or the parties involved agree that the arrangement is fairly reciprocal and therefore no money needs to change hands.
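In the same cartoonish spirit, the decision can be sketched as a toy function. The thresholds and terms below are invented for illustration; real peering policies weigh many more factors (traffic ratios, geography, port and backhaul costs, existing relationships):

```python
# Toy sketch of the settlement-free vs. paid decision described above.
# Numbers and categories are invented for illustration only; this is
# not any provider's actual peering policy.

def peering_offer(traffic_gbps, ratio_in_out):
    """How a broadband provider might treat a would-be peer."""
    roughly_reciprocal = 0.5 <= ratio_in_out <= 2.0
    if traffic_gbps >= 10 and roughly_reciprocal:
        # Big, balanced flows: cheaper to take directly than to haul
        # them across a transit or peering link they would congest.
        return "settlement-free peering"
    if traffic_gbps >= 10:
        # Big but lopsided: worth taking directly, but for a fee.
        return "paid peering at a low rate"
    return "buy transit (from us or someone else)"

print(peering_offer(40, 1.2))    # big, balanced-ish traffic
print(peering_offer(40, 25.0))   # big but wildly asymmetric
print(peering_offer(0.2, 1.0))   # too small to bother with
```

The takeaway is that every branch of that function is a commercial calculation on the provider's side, not charity toward the content provider.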

Cleland’s suggestion that Google is somehow being subsidized by end-users or by the ISPs is ludicrous. Google isn’t forcing anyone to peer with them, or holding a gun to anyone’s head to sell them cheap bandwidth. Providers are doing it because it’s a commercially reasonable thing to do. And users are paying for Internet access — and part of the value that they’re paying for is access to Google. The cost of accessing content is implicit in what users are paying.

Now, are users paying too little for what they get? Maybe. But nobody forced the ISPs to sell them broadband at low prices, either. Sure, the carriers and cable companies are in a price war — but this is capitalism. It’s a free market. A free market is not license to act stupidly and hope that there’s a bailout coming down the road. If you, a vendor, price a service below what it costs, expect to pay the piper eventually. Blaming content providers for not paying their “fair share” is nothing short of whining about a commercial situation that ISPs have gotten themselves into and continue to actively promote.

Google has posted a response to Cleland’s “research” that’s worth reading, as are the other commentaries it links to. I’ll likely be posting my own take on the methodological flaws and dubious facts, as well.
