Monthly Archives: December 2008

Cloud research

I am spending as much of my research time as possible on cloud these days, although my core coverage (colocation, hosting, and CDNs) still demands most of my client-facing time.

Reflecting the fact that hosting and cloud infrastructure services are part of the same broad market (if you’re buying service from Joyent or GoGrid or MediaTemple or the like, you’re buying hosting), the next Gartner Magic Quadrant for Web Hosting will include cloud providers. That means I’m currently busy working on an awful lot of stuff, preparatory to beginning the formal process in January. I know we’ll be dealing with a lot of vendors who have never participated in a Magic Quadrant before, which should make this next iteration personally challenging but hopefully very interesting to our clients and exciting to vendors in the space.

Anyway, I have two new research notes out today:

Web Hosting and Cloud Infrastructure Prices, North America, 2008. This defines a segmentation for the emerging cloud infrastructure services market, and provides guidance on current pricing for the various categories of Web hosting services, including cloud services.

Dataquest Insight: A Service Provider Road Map to the Cloud Infrastructure Transformation. This is a note targeted at hosting companies, carriers, IT outsourcers, and others who are in, or plan to enter, the hosting or cloud infrastructure services markets. It’s a practical guide to the evolving market, with a look at product and customer segmentation, the financial impacts, and the practicalities of evolving from traditional hosting to the cloud.

Gartner clients only for those notes, sorry.


Velocix Metro

CDN provider Velocix has announced the launch of a new product, called Velocix Metro. (I was first briefed on Metro almost eight months ago, so the official launch has been in the works for quite a while.)

Velocix Metro is essentially a turnkey managed CDN service, deployed at locations of an Internet service provider’s choice, and potentially embedded deep into that ISP’s network. The ISP gets a revenue share based on the traffic delivered via their networks from Velocix, plus the ability to do their own delivery from the deployed CDN nodes in their network. Velocix’s flagship customer for this service is Verizon.

You might recall that Velocix is a partner CDN to MediaMelon, which I discussed in the context of CDN overlays a few weeks ago. I believe that these kinds of federated networks are going to become increasingly common, because carriers are the natural choice to provide commoditized CDN services (due to their low network costs), and broadband service providers need some way to monetize the gargantuan and growing volumes of rich content being delivered to their end-user eyeballs.

The economics of the peering ecosystem make it very hard for broadband providers to raise the price of bandwidth bought by content providers, and intermodal competition (i.e., DSL/FiOS vs. cable) creates pricing competition that makes it hard to charge end-users more. So broadband providers need to find another out, and offering up their own CDNs, and thus faster access to their eyeballs, is certainly a reasonable approach. (That means that over the long term, providers that deploy their own CDNs are probably going to be less friendly about placing gear from other CDNs deep within their networks, especially if it’s for free.)

We are entering the period of the rise of the local CDN — CDNs with deep but strictly regional penetration. For instance, if you’re a broadcaster in Italy, with Italian-language programming, you probably aren’t trying to deliver to the world and you don’t want to pay the prices necessary to do so; you want deep coverage within Italy and other Italian-speaking countries, and that’s it. An overlay or federated approach makes it possible to tie together CDNs owned by incumbent regional carriers, giving you delivery in just the places you care about. And that, in turn, creates a compelling business case for every large network provider to have a CDN of their own. Velocix, along with other vendors who can provide turnkey solutions to providers who want to build their own CDN networks, ought to benefit from that shift.


IronScale launches

Sacramento-based colocation provider RagingWire has launched a subsidiary, StrataScale, whose first product is a managed cloud hosting service, IronScale. (I’ve mentioned this before, but the launch is now official.) I’ll be posting more on it once I’ve had time to check out a demo, but here’s a quick take:

What’s interesting is that IronScale is not a virtualized service. The current offering is on dedicated hardware — similar to the approach taken by SoftLayer, but this is a managed service. But it has the key cloud trait of elasticity — the ability to scale up and down at will, without commitments.

IronScale has automated fast provisioning (the company claims three minutes for the whole environment), management through the OS layer (including services like patch management), an integrated environment that includes the usual network suspects (firewall, load balancing, SSL acceleration), and a 100% uptime SLA. You can buy service on a month-to-month basis or an annual contract. This is a virtual data center offering; there’s a Web portal for provisioning plus a Web services API, along with some useful tricks like cloning and snapshots.

It’s worth noting that cloud infrastructure services, in their present incarnation, are basically just an expansion of the hosting market — moving the bar considerably in terms of expected infrastructure flexibility. This is real-time infrastructure, virtualized or not. It’s essentially a challenge to other companies who offer basic managed services — Rackspace, ThePlanet, and so on — but you can also expect it to compete with the VDC hosting offerings that target mid-sized to enterprise organizations.


Google Native Client

Google announced something very interesting yesterday: their Native Client project.

The short form of what this does: You can develop part or all of your application client in a language that compiles down to native code (for instance, C or C++, compiled to x86 assembly), then let the user run it in their browser, in a semi-sandboxed environment that theoretically prevents malicious code from being executed.

Why would you want to do this? Because developing complex applications in JavaScript is a pain, and all of the other options (Java in a browser, Adobe Flex, Microsoft Silverlight) provide only a subset of functionality, and are slower than native applications. That’s one of the reasons that most applications are still done for the desktop.

It’s an ambitious project, not to mention one that is probably making every black-hat hacker on the planet drool right now. The security challenges inherent in this are enormous.

Adobe has previously had a similar thought, in the form of Alchemy, a labs project for a C/C++ compiler that generates code for AVM2 (the virtual machine inside the Flash player). But Google takes the idea all the way down to true native code.

The broader trend has been towards managed code environments and just-in-time compilers (JITs). But the idea of native code with managed-code-like protections is certainly extremely interesting, and the techniques developed will likely be interesting in the broader context of malware prevention in non-browser applications, too.

And while we’re talking about lower-level application infrastructure pies that Google has its fingers in, it’s worth noting that Google has also exhibited significant interest in LLVM (which stands for Low-Level Virtual Machine). LLVM is an open-source project now sponsored by Apple, who hired its developer and is now using it within MacOS X. In layman’s terms, LLVM makes it easier for developers to write new programming languages, and makes it possible to develop composite applications using multiple programming languages. A compiler or interpreter developer can generate LLVM instructions rather than compiling to native code, then let LLVM take care of dealing with the back-end, the final stage of getting it to run natively. But LLVM also makes it easier to do analysis of code, something that is going to be critical if Google’s efforts with Native Client are to succeed. I am somewhat curious if Google’s interests intersect here, or if they’re entirely unrelated (not all that uncommon in Google’s chaotic universe).


Error pages

Royal Pingdom has an interesting and amusing compilation of Web 2.0 error pages.

Of course, error screens sometimes contain what might be considered truthiness at best. My TiVo will show “Scheduled Maintenance” as its error when things don’t work. I suppose it’s more reassuring than “Oops” (and far better than the Java exceptions it spews in all of their failed-SOAP-call glory, when its Rhapsody connection is down, which is frequently).


How badly do you need to keep that revenue?

If a customer really wants to leave, you are probably best off letting them leave. Certainly, if they’ve reached the end of their contract, and you are actively engaged in dialogue with one another, you should note little things like, “Your contract auto-renews”.

What not to do: Not tell the customer about the auto-renewal, then essentially point and laugh when they complain that they’re stuck with you for the next several years. Yes, absolutely, it’s the customer’s fault when this happens, and they have only themselves to blame, but it is still a terrible way to do business. (For those of you wondering how this kind of thing happens: Many organizations don’t do a great job of contract management, and it’s the kind of thing that is often lost in the shuffle of a merger, acquisition, re-org, shift from distributed to centralized IT, etc.)

There are all kinds of variants on this — lengthy multi-year auto-renewals, auto-renewals where the price can reset to essentially arbitrary levels, and other forms of “if you auto-renew we get to screw you” clauses, sometimes coupled with massive termination penalties. We’re not talking about month-to-month extensions here, which are generally to the mutual benefit of provider and customer in instances where a new contract simply hasn’t been negotiated yet. We’re really talking about traps for the unwary.

Unhappy customers are no good, but they often sort of gloom along until the end of their contracts and quietly leave. Customers who were unhappy and whom you’ve forced into a renewal now hate you. They’ll tell everyone they can how you screwed them. (And if they tell an analyst, that analyst will probably tell anyone who ever asks about you how you screwed another client of theirs. We like objectivity, but we also love a good yarn.) It has a subtle long-term effect on your business that is probably not worth whatever revenue coup you feel you pulled off. An angry customer can torpedo you with potential prospects as easily as a happy customer can bring you referrals.


There’s no free lunch on the Internet

There’s no free lunch on the Internet. That’s the title of a research note that I wrote over a year ago, to explain the peering ecosystem to clients who wanted to understand how the money flows. What we’ve got today is the result of a free market. Precursor’s Scott Cleland thinks that’s unfair — he claims Google uses 21 times more bandwidth than it pays for. Now, massive methodological flaws in his “study” aside, his conclusions betray an utter lack of understanding of the commercial arrangements underlying today’s system of Internet traffic exchange in the United States.

Internet service providers (whether backbone providers or broadband providers) offer bandwidth at a particular price, or a settlement-free peering arrangement. Content providers negotiate for the lowest prices they can get. ISPs interconnect with each other for a fee, or settlement-free. And everyone’s trying to minimize their costs.

So, let’s say that you’re a big content provider (BCP). You, Mr. BCP, want to pay as little for bandwidth as possible. So if you’ve got enough clout, you can go to someone with broadband eyeballs, like Comcast, and say, “Please can I have free peering?” And Comcast will look at your traffic, and say to itself, “Hmm. If I don’t give you free peering, you’ll go buy bandwidth from someone like Level 3, and I will have to take your singing cow videos over my peer with them. That will increase my traffic there, which will have implications for my traffic ratios, which might mean that Level 3 would charge me for the traffic. It’s better for me to take your traffic directly (and get better performance for my end-users, too) than to congest my other peer.”

That example is a cartoonishly grotesque oversimplification, but you get the idea: Comcast is going to consider where your traffic is flowing and decide whether it’s in their commercial interest to give you settlement-free peering, charge you a low rate for bandwidth, or tell you that you have too little traffic and you can pay them more money or buy from someone else. They’re not carrying your traffic as some kind of act of charity on their part. Money is changing hands, or the parties involved agree that the arrangement is fairly reciprocal and therefore no money needs to change hands.
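The decision logic sketched above can be caricatured in a few lines of code. This is purely illustrative: the ratio threshold and the traffic figures are invented for the example, and real peering policies weigh many more factors (geography, number of interconnection points, performance, and so on).

```python
def peering_decision(inbound_gbps, outbound_gbps, ratio_limit=2.0):
    """Caricature of a peering decision: if traffic is roughly
    balanced (within ratio_limit), settlement-free peering may make
    commercial sense; if one side sends far more than it receives,
    expect to pay for transit. The 2:1 default is illustrative only.
    """
    heavier = max(inbound_gbps, outbound_gbps)
    lighter = min(inbound_gbps, outbound_gbps)
    if lighter == 0:
        return "paid transit"
    ratio = heavier / lighter
    return "settlement-free peering" if ratio <= ratio_limit else "paid transit"

# A content provider pushing 10x more traffic than it receives is a
# poor peering candidate; roughly balanced traffic is a good one.
print(peering_decision(1.0, 10.0))  # paid transit
print(peering_decision(4.0, 5.0))   # settlement-free peering
```

The point of the toy is simply that both sides run this kind of calculation; nobody is carrying anyone's singing cow videos out of charity.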

Cleland’s suggestion that Google is somehow being subsidized by end-users or by the ISPs is ludicrous. Google isn’t forcing anyone to peer with them, or holding a gun to anyone’s head to sell them cheap bandwidth. Providers are doing it because it’s a commercially reasonable thing to do. And users are paying for Internet access — and part of the value that they’re paying for is access to Google. The cost of accessing content is implicit in what users are paying.

Now, are users paying too little for what they get? Maybe. But nobody forced the ISPs to sell them broadband at low prices, either. Sure, the carriers and cable companies are in a price war — but this is capitalism. It’s a free market. A free market is not license to act stupidly and hope that there’s a bailout coming down the road. If you, a vendor, price a service below what it costs, expect to pay the piper eventually. Blaming content providers for not paying their “fair share” is nothing short of whining about a commercial situation that ISPs have gotten themselves into and continue to actively promote.

Google has posted a response to Cleland’s “research” that’s worth reading, as are the other commentaries it links to. I’ll likely be posting my own take on the methodological flaws and dubious facts, as well.


The week’s observations

My colleague Tom Bittman has written a great summary of the hot topics from the Gartner data center conference this past week.

Some personal observations as I wrap up the week…

The future of infrastructure is the cloud. I use “cloud” in a broad sense; many larger organizations will be building their own “private clouds” (which technically aren’t actually clouds, but the “private cloud” terminology has sunk in and probably won’t be easily budged). I was surprised by how many people at the conference wanted to talk to me about initial use of public clouds, how to structure cloud services within their own organizations, and what they could learn from public cloud and hosting services.

Cloud demos are extremely compelling. I was using demos of several clouds in order to make my points to people asking about cloud computing: Terremark’s Enterprise Cloud, Rackspace’s Mosso, and Amazon’s EC2 plus RightScale. I showed some screen shots off 3Tera’s website as well. I did not warn the providers that I was going to do this, and none of them were at the conference (a pity, since I suspect this would have been lead-generating). It was interesting to see how utterly fascinated people were — particularly with the Terremark offering, which is essentially a private cloud. (People were stopping me in the hallways to say, “I hear you have a really cool cloud demo.”) I was showing the trivially easy point-and-click process of provisioning a server, which, I think, provided a kind of grounding for “here is how the cloud could apply to your business”.

Colocation is really, really hot. My one-on-one schedule was crammed with colocation questions, as were my conversations with attendees in hallways and over meals, yet I was still shocked by how many people showed up to my Friday, 8 am talk on colocation — the best-attended talk of the slot, I was told (and one cursed by lots of A/V glitches). Over the last month, we’ve seen demand accelerate and supply projections tighten — neither businesses nor data center providers can build right now.

A crazy conference week, like always, but tremendously interesting.


Amazon SimpleDB, plus a bit on cloud storage

Amazon SimpleDB is now in public beta. This database-as-a-service has been in private beta for some time, but what’s really noteworthy is that with the public beta, Amazon has dropped the price drastically, and the first 25 machine hours, 1 GB of storage, and 1 GB of transfer are free, meaning that it’s essentially free to experiment with.
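The free tier makes the cost arithmetic easy to sketch. Here's a toy calculator with the announced allowances (25 machine hours, 1 GB of storage, 1 GB of transfer) baked in; the per-unit rates are left as caller-supplied parameters, since I'm deliberately not quoting Amazon's actual prices here.

```python
def simpledb_monthly_cost(machine_hours, storage_gb, transfer_gb,
                          hour_rate, storage_rate, transfer_rate):
    """Toy SimpleDB bill: charge only for usage above the free tier
    (25 machine hours, 1 GB storage, 1 GB transfer per month).
    Rates are caller-supplied; no real Amazon prices are assumed."""
    billable_hours = max(0, machine_hours - 25)
    billable_storage = max(0, storage_gb - 1)
    billable_transfer = max(0, transfer_gb - 1)
    return (billable_hours * hour_rate
            + billable_storage * storage_rate
            + billable_transfer * transfer_rate)

# Usage entirely within the free tier costs nothing, whatever the rates:
print(simpledb_monthly_cost(20, 0.5, 0.8, 0.14, 1.50, 0.10))  # 0.0
```

Which is the point: for anyone who just wants to kick the tires, the experiment costs nothing.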

On another Amazon-related note, my colleagues who cover storage have recently put out a research note titled, “A Look at Amazon’s S3 Cloud-Computing Storage Service“. If you’re a Gartner client contemplating use of S3, I’d suggest checking it out.

I want to stress something that’s probably not obvious from that note: You can’t mount S3 storage like a normal filesystem. You access it via its APIs, and that’s all. If you use EC2 and you need cloud storage that looks like a regular filesystem, you’ll want to use Amazon’s Elastic Block Store. If you’re using S3, whether within EC2 or from your own infrastructure, you’re either going to make API calls directly (which will make your apps dependent upon S3), or you’re going to have to go through a filesystem driver like FUSE (commercially, Subcloud).
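To make the distinction concrete: with an S3-style store, your application talks to keyed blobs through explicit put/get calls, not through open()/read()/write(). A minimal sketch, using an in-memory stand-in for the object-store API rather than any real S3 client library:

```python
class ObjectStore:
    """In-memory stand-in for an S3-style object store: buckets of
    keyed blobs, reachable only through explicit API calls. There is
    nothing here you could hand to open() or mount as a directory."""

    def __init__(self):
        self._buckets = {}

    def put_object(self, bucket, key, data):
        # Store a blob under (bucket, key); overwrites silently,
        # as object stores typically do.
        self._buckets.setdefault(bucket, {})[key] = data

    def get_object(self, bucket, key):
        # Retrieve the blob; raises KeyError if absent.
        return self._buckets[bucket][key]

store = ObjectStore()
store.put_object("my-bucket", "logs/2008-12-15.txt", b"hello")
print(store.get_object("my-bucket", "logs/2008-12-15.txt"))
```

A filesystem driver like FUSE sits in front of exactly this kind of interface and translates familiar file operations back into API calls — convenient, but one more layer between your application and the store.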

Cloud storage, at this stage, is typically reliant upon proprietary APIs. Some providers are starting to offer filesystems, such as Nirvanix‘s CloudNAS (now in beta), but we’re at the very earliest stages of that. I suspect that the implementation hurdles created by API-only access, and not the contractual issues, will be what stop enterprises from adopting it in the near term.

On a final storage-related note, Rackspace (Mosso) Cloud Files remains in a definitively beta stage. I was playing with the shell I was writing (adding an FTP-like get and put with progress bars and such), and trying to figure out why my API calls were failing. It turned out that the service was in read-only mode for a while yesterday, and even read calls (via the API) were failing for a bit (returning 500 Internal Server Error codes). On the plus side, the support request I made to report the read outage via Rackspace’s real-time chat (support through an instant-messaging-like interface) was answered immediately, politely, and knowledgeably — one clear way that the Rackspace offering wins over S3. (Amazon charges for support.)
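Transient 500s like these are exactly why any client talking to a beta cloud API should wrap its calls in retry logic. A generic sketch — the retry helper and the simulated flaky service are both mine, not part of Rackspace's API:

```python
import time

def with_retries(call, attempts=3, delay=0.01):
    """Retry a callable on server-side errors (modeled here as
    RuntimeError), pausing briefly between attempts; re-raise if
    every attempt fails."""
    for attempt in range(attempts):
        try:
            return call()
        except RuntimeError:
            if attempt == attempts - 1:
                raise
            time.sleep(delay)

# Simulate a service that returns 500s twice, then recovers.
failures = [RuntimeError("500 Internal Server Error")] * 2

def flaky_get():
    if failures:
        raise failures.pop()
    return "200 OK"

print(with_retries(flaky_get))  # 200 OK, after two retried failures
```

For a sustained read-only window like yesterday's, of course, no amount of retrying helps — which is where that responsive support channel earns its keep.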


New CDN research notes

I have three new research notes out:

Determine Your Video Delivery Requirements. When I talk to clients, I often find that IT is trying to source a video delivery solution without having much of an idea of what the requirements actually are. This note is directed at them; it’s intended to serve as a framework for discussions with the content owners.

Toolkit: Determining Your Content Delivery Network Requirements. This toolkit consists of three Excel worksheets. The first gathers a handful of high-level requirements, in order to figure out what type of vendor you’re probably looking for. The second helps you estimate your volume and convert between the three typical measurements used (Mbps, MPVs, or GB delivered). The third is a pricing estimator and converter.

Purchasing Content Delivery Network Services. This is a practical guide to buying CDN services, targeted towards mid-sized and enterprise purchasers.
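The volume conversions the toolkit handles boil down to simple arithmetic. A rough sketch of the Mbps-to-GB-delivered direction, assuming a sustained average throughput over a 30-day month and decimal units — real traffic is peaky, which is one reason the toolkit gathers more inputs than this:

```python
def mbps_to_gb_per_month(avg_mbps, days=30):
    """Convert a sustained average throughput in megabits per second
    into gigabytes delivered over a month. Uses decimal units
    (1 GB = 8000 megabits) and assumes a flat traffic profile."""
    seconds = days * 24 * 60 * 60
    megabits = avg_mbps * seconds
    return megabits / 8000.0

# A steady 100 Mbps stream delivers 32,400 GB (about 32.4 TB)
# in a 30-day month:
print(round(mbps_to_gb_per_month(100)))  # 32400
```

Going the other way (GB delivered to average Mbps) is just the inverse; converting either figure to page views additionally requires an assumption about average page weight.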

