And so it begins
We’re about to start the process for the next Magic Quadrant for Cloud Infrastructure Services and Web Hosting, along with the Critical Capabilities for Cloud Infrastructure Services (titles tentative and very much subject to change). Our hope is to publish in late July. These documents are typically a multi-month ordeal of vendor cat-herding; the evaluations themselves tend to be pretty quick, but getting all the briefings scheduled, references called, and paperwork done tends to eat up an inordinate amount of time. (This time, I’ve begged one of our admin assistants for help.)
What’s the difference? The MQ positions vendors in a broad overall market. The CC, on the other hand, rates individual vendor products on how well they meet the requirements of a set of defined use cases. You get use-case-by-use-case ratings, which means that this year we’ll be doing things like asking “how well do these specific self-managed cloud offerings support a particular type of test-and-development environment?” The MQ tends to favor vendors who do a broad set of things well; a CC rating, on the other hand, is essentially a narrow evaluation of a product’s current ability to meet a specific set of requirements (and therefore tends to favor vendors with great product features).
Also, we’ve decided the CC note is going to be strictly focused on self-managed cloud — Amazon EC2 and its competitors, Terremark Enterprise Cloud and its competitors, and so on. This is a fairly pure features-and-functionality thing, in other words.
Anyone thinking about participation should check out my past posts on Magic Quadrants.
Equipment vendors get into the CDN act
Carriers are very interested in CDNs, whether it’s getting into the CDN market themselves, or “defensively” deploying content delivery solutions as part of an effort to reduce the cost of broadband service or find a way to monetize the gobs of video traffic now flowing to their end-users.
So far, most carriers have chosen partnerships with companies like Limelight and Edgecast, rather than entering the market with their own technology and services, but this shouldn’t be regarded as the permanent state of things. The equipment vendors clearly recognized the interest some time ago, and the solutions are finally starting to flow down the pipeline.
Last year, Alcatel-Lucent bought Velocix, a CDN provider, in order to get its carrier-focused technology, Velocix Metro (which I’ve written about before). Velocix Metro is a turnkey CDN solution, with the interesting added twist of aggregation capability across multiple Velocix customers.
Last month, Juniper partnered with Ankeena in order to bring a solution called Juniper MediaFlow to market. It’s a turnkey CDN solution incorporating both content routing and caches.
This month, with the introduction of the CRS-3 core router, Cisco announced its Network Positioning System (NPS). NPS is not a CDN solution of any sort. Rather, it’s a system that gathers intelligence about the network and uses it to compute the proximity between two endpoints; in essence, it provides a programmatic way to advise clients and servers about the best place to find networked resources. Today, we get a crude form of this from BGP anycast; NPS, by contrast, is supposed to use information from layers 3 through 7, and to incorporate policy-based capabilities.
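To make the contrast with anycast concrete, here’s a minimal, purely hypothetical sketch of the kind of ranking computation such a system might perform. Nothing below is a Cisco API; the candidate metrics, weights, and function names are all invented for illustration.

```python
# Hypothetical sketch of proximity-based server selection. Unlike BGP
# anycast's "nearest by routing" heuristic, a policy-aware system can
# blend layer-3 measurements with layer-7 state and operator policy.
from dataclasses import dataclass


@dataclass
class Candidate:
    address: str
    rtt_ms: float       # measured or estimated round-trip time (L3 data)
    hop_count: int      # topological distance
    load: float         # 0.0-1.0 current utilization (L7 data)
    cost_weight: float  # operator policy, e.g. prefer cheaper transit


def proximity_score(c: Candidate) -> float:
    # Lower is better; the weights here are arbitrary placeholders.
    return (0.5 * c.rtt_ms
            + 2.0 * c.hop_count
            + 50.0 * c.load
            + 10.0 * c.cost_weight)


def best_server(candidates: list[Candidate]) -> Candidate:
    return min(candidates, key=proximity_score)


servers = [
    Candidate("192.0.2.10", rtt_ms=12.0, hop_count=3, load=0.9, cost_weight=1.0),
    Candidate("198.51.100.7", rtt_ms=25.0, hop_count=5, load=0.2, cost_weight=0.5),
]
# The topologically nearest box can lose to a farther, idle, cheaper one.
print(best_server(servers).address)
```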
And in news of little companies, Blackwave has been chugging along, releasing a new version of its platform earlier this month. Blackwave is one of those companies that I expect to be eyed as an acquisition target by equipment vendors looking to add turnkey CDN solutions to their portfolios; Blackwave has a highly efficient ILM-ish media storage platform, coupled with some content routing capabilities. (Its flagship CDN customer is CDNetworks, but its other customers are primarily service providers.)
The (temporary?) transformation of hosters
Classically, hosting companies have been integrators of technology, not developers of technology. Yet the cloud world is increasingly pushing hosting companies into becoming software developers: companies that create competitive advantage in significant part by writing software that delivers capabilities to customers.
I’ve heard the cloud IaaS business compared to the colocation market of the 1990s — the idea that you build big warehouses full of computers and you rent that compute capacity to people, comparable conceptually to renting data center space. People who hold this view tend to say things like, “Why doesn’t company X build a giant data center, buy a lot of computers, and rent them? Won’t the guy who can spend the most money building data centers win?” This view is, bluntly, completely and utterly wrong.
IaaS is functionally becoming a software business right now, one driven by the ability to develop software in order to introduce new features and capabilities, and to drive quality and efficiency. IaaS might not always be a software business; it might eventually be a service-and-support business enabled by third-party software. (This would be a reasonable view if you think that VMware’s vCloud is going to own the world, for instance.) And you can get some interesting dissonance in a market where some competitors are high-value software businesses while others are mostly commodity infrastructure providers enabled by software (the CDN market is a great example of this). But for the next couple of years at least, IaaS is going to be increasingly a software business in its core dynamics; you can think of it as a SaaS business in which the service delivered happens to be infrastructure.
To illustrate, let’s talk about Rackspace. Specifically, let’s talk about Rackspace vs. Amazon.
Amazon is an e-commerce company, with formidable retail operations skills embedded in its DNA, but it is also a software company, with more than a decade of experience under its belt in rolling out a continuous stream of software enhancements and using software to drive competitive advantage.
Amazon, in the cloud IaaS part of its Web Services division, is in the business of delivering highly automated IT infrastructure to customers. Custom-written software drives its entire infrastructure, all the way down to its network devices. Software provides the value-added enhancements that it delivers on top of raw compute and storage, from the spot-pricing marketplace to auto-scaling to the partially automated MySQL management of the RDS service. Amazon’s success and market leadership depend on consistently rolling out new and enhanced features, functions, and capabilities. It can develop and release software on such aggressive schedules that it can afford to be almost entirely tactical in its approach to the market, prioritizing whatever customers and prospects are demanding right now.
Rackspace, on the other hand, is a managed hosting company, built around a deep culture of customer service. Like all managed hosters, it’s imperfect, but on the whole it is the gold standard for service, and customer service is one of the key differentiators in managed hosting, driving Rackspace’s very rapid growth over the last five years. Rackspace has not traditionally been a technology leader; historically, it’s been a reasonably fast follower, implementing mainstream technologies in use by its target customers, but its competitive advantage has been people, not engineering.
And now, Rackspace is going head to head with Amazon on cloud IaaS. It has made a series of acquisitions aimed at acquiring developers and software technology, including Slicehost, JungleDisk, and Webmail.us. (JungleDisk is almost purely a software company, in fact; it makes money by selling software licenses.) Even if Rackspace emphasizes other competitive differentiation, like customer support, it’s still in direct competition with Amazon on pure functionality. Can Rackspace obtain the competencies it will need to be a software leader?
And some related questions: Can the other hosters who eschew the VMware vCloud route manage to drive the feature set and innovation they’ll need to compete? Will vCloud be inexpensive enough and useful enough to be widely adopted by hosters, and if it is, how much will it commoditize this market? What does this new emphasis on true development, not just integration, do to hosters and to the market as a whole? (I’ve been thinking about this a lot lately, although I suspect it’ll go into a real research note rather than a blog post.)
Google’s DNS protocol extension and CDNs
There have been a number of interesting new developments in the content routing space recently, specifically around how to get content to end-users from the optimal point on the network. I’ll be talking about Cisco and Juniper in a forthcoming blog post, but for now, let’s start with Google:
A couple of weeks ago, Google and UltraDNS (part of Neustar) proposed an extension to the DNS protocol that would allow authoritative DNS servers to learn the IP address (or a truncated portion of it) of the end-user who originally made the request. DNS is normally recursive: the end-user queries his local DNS resolver, which then makes queries up the chain on his behalf. The problem is that the resolver is not necessarily actually local; it might be far, far away from the user. And the DNS servers of things like CDNs use the source of the DNS query to figure out where the user is, which means that they actually return the optimal server for the resolver’s location, not the user’s.
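To make the mechanics concrete, here’s a short sketch of what the extension carries. It uses the dnspython library’s support for EDNS Client Subnet, the standardized descendant of this proposal; the query name, subnet, and resolver address below are placeholders.

```python
# Sketch: attaching the end-user's subnet to a DNS query, as the
# proposed extension envisions, via dnspython's ECS option support.
import dns.edns
import dns.message
import dns.query

# A recursive resolver would fill this in with the querying user's
# truncated address; a /24 here, so only the network part is disclosed.
ecs = dns.edns.ECSOption("203.0.113.0", srclen=24)

query = dns.message.make_query("www.example.com", "A",
                               use_edns=0, options=[ecs])

# Sent to an ECS-aware server, the answer can now be tailored to the
# user's location rather than the resolver's.
response = dns.query.udp(query, "8.8.8.8", timeout=5)
for rrset in response.answer:
    print(rrset)
```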
I wrote about this problem in some detail about a year and a half ago, in a blog post: The nameserver as CDN vantage point. You can go back and look at that for a more cohesive explanation, and for some numbers that illustrate how much of a problem resolver locations create. The Google proposal is certainly a boon to CDNs, as well as to anyone else who relies upon DNS for global load balancing. Where it’s supported, the extension will also give a slight performance boost to CDNs with deeper local footprint, by helping to ensure that the cache chosen is actually local to the user. The resolver issue can, as I’ve noted before, erase the advantage of having more footprint closer to the edge, since that edge footprint won’t be used unless there are local resolvers that map to it. Provide the user’s IP, though, and you can figure out exactly what the best server for him is.
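As a toy illustration of why this matters, consider picking a cache by distance. The POP list, coordinates, and great-circle shortcut below are all invented for illustration; a real CDN would rank servers with live network measurements, not geography alone.

```python
# Sketch: why the user's address beats the resolver's for cache selection.
import math

POPS = {
    "nyc": (40.7, -74.0),
    "lon": (51.5, -0.1),
    "sfo": (37.6, -122.4),
}


def distance(a, b):
    # Rough great-circle distance in km (haversine formula).
    lat1, lon1, lat2, lon2 = map(math.radians, (*a, *b))
    h = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    return 6371 * 2 * math.asin(math.sqrt(h))


def nearest_pop(location):
    return min(POPS, key=lambda pop: distance(POPS[pop], location))


user_location = (40.8, -73.9)       # a user in New York...
resolver_location = (37.4, -122.0)  # ...using a resolver hosted near San Francisco

print(nearest_pop(resolver_location))  # "sfo": what DNS-based routing sees today
print(nearest_pop(user_location))      # "nyc": what it sees given the user's IP
```

With only the resolver’s address to go on, the New York user gets sent to a West Coast cache; with the user’s own address, the nearby edge footprint actually gets used.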
There’s no shortage of technical issues to debate (starting with the age-old objection to using DNS for content routing at all), and privacy issues have been raised as well, but my expectation is that even if it doesn’t actually get adopted as a standard (and I’m guessing it won’t, by the way), enough large entities will implement it to make it a boon for many users.