Author Archives: Lydia Leong
Google’s DNS protocol extension and CDNs
There have been a number of interesting new developments in the content routing space recently — specifically, the issue of how to get content to end-users from the optimal point on the network. I’ll be talking about Cisco and Juniper in a forthcoming blog post, but for now, let’s start with Google:
A couple of weeks ago, Google and UltraDNS (part of Neustar) proposed an extension to the DNS protocol that would allow DNS servers to obtain the IP address of the end-user who originally made the request. DNS is normally recursive — the end-user queries his local DNS resolver server, which then makes queries up the chain on his behalf. The problem with this is that the resolver is not necessarily actually local — it might be far, far away from the user. And the DNS servers of things like CDNs use the location of the DNS query to figure out where the user is, which means that they actually return an optimal server for the resolver’s location, not the user’s.
I wrote about this problem in some detail about a year and a half ago, in a blog post: The nameserver as CDN vantage point. You can go back and look at that for a more cohesive explanation and a look at some numbers that illustrate how much of a problem resolver locations create. The Google proposal is certainly a boon to CDNs as well as anyone else that relies upon DNS for global load-balancing solutions. In the ecosystem where it’s supported, the enhancement will also give a slight performance boost to CDNs with more local footprint, by helping to ensure that the local cache is actually more local to the user. The resolver issue can, as I’ve noted before, erase the advantages of having more footprint closer to the edge, since that edge footprint won’t be used unless there are local resolvers that map to it. Provide the user’s IP, though, and you can figure out exactly what the best server for him is.
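To make the resolver problem concrete, here’s a toy sketch of DNS-based global load balancing. The prefixes and edge hostnames are invented for the example; the point is just that a mapping keyed on the querying IP picks the wrong edge when that IP belongs to a distant resolver rather than the user:

```python
import ipaddress

# Hypothetical CDN edge map: client prefix -> nearest edge cache.
EDGE_MAP = {
    ipaddress.ip_network("203.0.113.0/24"): "edge-sydney.example.net",
    ipaddress.ip_network("198.51.100.0/24"): "edge-london.example.net",
}
DEFAULT_EDGE = "edge-ashburn.example.net"

def pick_edge(ip: str) -> str:
    """Return the edge cache whose prefix covers the given IP."""
    addr = ipaddress.ip_address(ip)
    for prefix, edge in EDGE_MAP.items():
        if addr in prefix:
            return edge
    return DEFAULT_EDGE

# Without the proposed extension, the authoritative server sees only
# the resolver's IP; with it, it sees (a truncated form of) the user's.
resolver_ip = "198.51.100.7"   # resolver hosted near London
user_ip = "203.0.113.42"       # user actually in Sydney

print(pick_edge(resolver_ip))  # edge chosen for the resolver's location
print(pick_edge(user_ip))      # edge chosen for the user's real location
```

Same lookup logic, two very different answers — which is exactly the gap the Google/UltraDNS proposal is trying to close.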
There’s no shortage of technical issues to debate (starting with the age-old objection to using DNS for content routing to begin with), and privacy issues have been raised as well, but my expectation is that even if it doesn’t actually get adopted as a standard (and I’m guessing it won’t, by the way), enough large entities will implement it to make it a boon for many users.
Cloudkick launches a commercial service
Cloudkick, a start-up which has been offering free multi-cloud management and monitoring services in preview (and which is the originator and sponsor of the open-source libcloud project), has launched its commercial offering.
Quite a bit has been written about Cloudkick in other venues, so I’ll offer a musing of a different sort: I am contemplating the degree to which cloud-agnostic monitoring service providers will evolve into general monitoring SaaS vendors. A tiny fraction of cloud IaaS users will actually be significantly multi-cloud, whereas a far vaster addressable market exists for really excellent monitoring tools, including cost-effective ways of doing third-party monitoring for the purposes of cloud SLA enforcement. Even though enterprises are likely to extend their own internal monitoring systems into their cloud environments, there will continue to be a need for third-party monitoring, and for many organizations, third-party monitoring that can also feed alerts into internal monitoring systems will be a popular choice.
Cloudkick has been interesting in the context of the debate over alleged capacity issues on Amazon EC2. Their monitoring has been showing latency issues growing in severity since Christmas or so. The public nature of this data, among other things, has pushed Amazon into making a statement that they don’t have overcapacity problems; it’s an interesting example of how making such data openly available can bring pressure to bear on service providers.
Cloud ecosystems for small businesses
As I’ve been predicting for a while, Microsoft and Intuit have joined forces around Quickbooks and Azure: Microsoft and Intuit announced that Intuit would name Microsoft’s Windows Azure as the preferred platform for cloud app development on its Intuit Partner Platform. This is an eminently logical partnership. MSDN developers are a critical channel for reaching the small business with applications, Azure is evolving to be well-suited to that community, and Intuit’s Quickbooks is a key anchor application for the small business. Think of this partnership as the equivalent of Force.com for the small business; arguably, Quickbooks is an even more compelling anchor application for a PaaS ecosystem than CRM is.
A lot of non-IT companies are thinking about cloud strategies these days. I get a great deal of inquiry from companies seeking to target the small business with cloud offerings, and the question that I keep having to ask is, “What natural value does your existing business bring when extended to the cloud?” An astounding number of strategy people at miscellaneous companies seem to believe that they ought to be cloud IaaS providers, or resellers of other people’s SaaS solutions for small businesses — without being natural places for small businesses to turn for either infrastructure or software.
Whatever your business is, if you want to create a cloud ecosystem, you need an anchor service. Take something that you do today, and leverage cloud precepts. Consider doing something like creating a data service around it, opening up an API, and the like. (Gartner clients: My colleague Eric Knipp has written a useful research note on this topic entitled Open RESTful APIs are Big Business.) Use that as the centerpiece for an ecosystem of related services from partners, and the community of users.
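As an illustration of the “open up an API” step, here’s a minimal sketch of a read-only data service built with nothing but the Python standard library. The endpoint shape and the invoice data are invented for the example — the idea is simply that an existing business asset (here, accounting records) becomes an anchor service that partners can build against:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical anchor data: a small-business accounting record set.
INVOICES = {"1001": {"customer": "Acme Co", "total": 420.00, "paid": True}}

class DataServiceHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Route: GET /invoices/<id> returns one record as JSON.
        parts = self.path.strip("/").split("/")
        if len(parts) == 2 and parts[0] == "invoices" and parts[1] in INVOICES:
            body = json.dumps(INVOICES[parts[1]]).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # keep the example quiet
        pass

# Bind to port 0 so the OS assigns any free port.
server = HTTPServer(("127.0.0.1", 0), DataServiceHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]
print(f"data service listening on http://127.0.0.1:{port}/invoices/1001")
```

A real offering would add authentication, rate limiting, and write operations, but even this skeleton shows how little stands between “data you already have” and “an API partners can consume.”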
VMware buys Zimbra
Hot on the heels of a day of rumors, VMware announced the acquisition of Zimbra, the open-source email platform vendor previously purchased by (and up until now, still owned by) Yahoo!.
It’s a somewhat puzzling acquisition. Its key strategic thrust seems to be enabling the service provider ecosystem. Most service providers (i.e., hosters) already offer email as a service — although today, they primarily offer Hosted Exchange. Other platforms — Open-Xchange, OpenWave, Critical Path, Mirapoint, etc. — are used for more commodity email services. Relatively speaking, Zimbra doesn’t have as much traction in the service provider space, and VMware’s service provider ecosystem is not gasping for lack of reasonable platforms on which to offer email SaaS.
The email SaaS space is super-competitive. Businesses have come to the realization that email is a commodity that can be safely outsourced, and that huge cost savings can be realized by outsourcing it; this is driving rapid growth of mailboxes delivered as SaaS, but it’s also driving aggressive price competition. That, in turn, drives service providers to push their underlying email software vendors for lower license costs.
One has to speculate, then, that this acquisition is not just about email. It’s about the broader platform strategy, and the degree to which VMware wants to own an entire stack.
My colleagues and I are working on publishing a Gartner position (a “First Take”) on this acquisition, so I’m sorry to be a bit brief, and cryptic.
Gmail, Macquarie, and US regulation
Google continues to successfully push Gmail into higher education, in an Australian deal with Macquarie University. (Microsoft is its primary competitor in this market, but for Microsoft, most such Live@edu wins represent cannibalization of its higher-ed Exchange base.)
That, by itself, isn’t a particularly interesting announcement. Email SaaS is a huge trend, and the low-cost .edu offerings have been gaining particular momentum. What caught my eye was this:
The university was hesitant to move staff members on to Gmail due to regulatory and cost factors. They were concerned that their email messages would be subject to draconian US law. In particular, they were worried about protecting their intellectual property under the Patriot Act and Digital Millennium Copyright Act, Mr. Bailey said. “In the end, Google agreed to store that data under EU jurisdiction, which we accepted,” he said.
That tells us that Google can divide their data storage into zones if need be, as one would expect, but it also tells us that they can do so for particular customers (presumably, given Google’s approach to the world, as a configurable, automated thing, and not as a one-off).
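None of Google’s internal machinery is public, of course, but a per-customer, configurable residency capability can be sketched as a simple placement policy. The region names and jurisdiction mapping below are entirely hypothetical:

```python
# Hypothetical placement policy: each customer carries a jurisdiction
# constraint, and the storage layer resolves it to a concrete set of
# allowed regions before any data is written.
REGIONS = {
    "us-east": "US",
    "us-west": "US",
    "eu-west": "EU",
    "eu-central": "EU",
    "apac-syd": "AU",
}

def allowed_regions(jurisdiction: str) -> list:
    """Regions permitted under a customer's residency constraint."""
    return sorted(r for r, j in REGIONS.items() if j == jurisdiction)

def place(customer_policy: dict) -> str:
    """Pick a primary storage region that satisfies the policy."""
    candidates = allowed_regions(customer_policy["jurisdiction"])
    if not candidates:
        raise ValueError("no region satisfies the residency policy")
    return candidates[0]

# A Macquarie-style customer who negotiated EU-only storage:
macquarie = {"jurisdiction": "EU"}
print(place(macquarie))  # always an EU region, never a US one
```

The interesting part of the Macquarie story is that this kind of constraint appears to be an offering Google can turn on per customer, rather than a one-off engineering effort.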
However, the remark about the Patriot Act and DMCA is what really caught my attention. DMCA is a worry for universities (due to the high likelihood of pirated media), but USA PATRIOT is a significant worry for a lot of the non-US clients that I talk to about cloud computing, especially those in Europe — to the point where I speak with clients who won’t use US-based vendors, even if the infrastructure itself is in Europe. (Australian clients are more likely to end up with a vendor that has somewhat local infrastructure to begin with, due to the latency issues.)
Cross-border issues are a serious barrier to cloud adoption in Europe in general, often due to regulatory requirements to keep data within-country (or sometimes less stringently, within the EU). That will make it more difficult for European cloud computing vendors to gain significant operational scale. (Whether this will also be the case in Asia remains to be seen.)
But if you’re in the US, it’s worth thinking about how the Patriot Act is perceived outside the US, and how it and any similar measures will limit the desire to use US-based cloud vendors. A lot of US-based folks tell me that they don’t understand why anyone would worry about it, but the “you should just trust that the US government won’t abuse it” story plays considerably less well elsewhere in the world.
Savvis CEO Phil Koen resigns
Savvis announced the resignation of CEO Phil Koen on Friday, citing a “joint decision” between Koen and the board of directors. This was clearly not a planned event, and it’s interesting, coming at the end of a year in which Savvis’s stock has performed pretty well (it’s up 96% over last year, although the last quarter has been rockier, -8%). The presumed conflict between Koen and the board becomes clearer when one looks at a managed hosting comparable like Rackspace (up 276% over last year, 19% in the last quarter), rather than at the colocation vendors.
When the newly-appointed interim CEO Jim Ousley says “more aggressive pursuit of expanding our growth”, I read that as, “Savvis missed the chance to be an early cloud computing leader”. A leader in utility computing, offering on-demand compute on Egenera-based blade architectures, Savvis could have taken its core market message, shifted its technology approach to embrace a primarily virtualization-based implementation, and led the charge into enterprise cloud. Instead, its multi-pronged approach (do you want dedicated servers? blades? VMs?) led to a lengthy period of confusion for prospective customers, both in marketing material and in the sales cycle itself.
Savvis still has solid products and services, and we still see plenty of contract volume in our own dealings with enterprise clients, as well as generally positive customer experiences. But Savvis has become a technology follower, conservative in its approach, rather than a boldly visionary leader. Under previous CEO Rob McCormick, the company was often ahead of its time, which isn’t ideal either, but in this period of rapid market evolution, the consumerization of IT, and self-service, Savvis’s increasingly IBM-like market messages are a bit discordant with the marketplace, and its product portfolio has largely steered away from the fastest-growing segment of the market, self-managed cloud hosting.
Koen made many good decisions — among them, focusing on managed hosting rather than colocation. But his tenure was also a time of significant turnover within the Savvis ranks, especially at the senior levels of sales and marketing. When Ousley says the company is going to take a “fresh look” at sales and marketing, I read that as, “Try to retain sales and marketing leadership for long enough for them to make a long-term impact.”
Having an interim CEO in the short term — and one drawn from the ranks of the board, rather than from the existing executive leadership — means what is effectively a holding pattern until a new CEO can be selected, gets acquainted with the business, and figures out what he wants to do. That’s not going to be quick, which is potentially dangerous at this time of fast-moving market evolution. But the impact of that won’t be felt for many months; in the near term, one would expect projects to continue to execute as planned.
Thus, for existing and prospective Savvis customers, I’d expect that this change in the executive ranks will result in exactly zero impact in the near term; anyone considering signing a contract should just proceed as if nothing’s changed.
The next round-up of links
Renesys has posted its yearly ranking of Internet transit providers. For anyone interested in understanding how transit volumes across various networks are changing, this should be very interesting data.
Ryan Kearney’s Comparing CDN Performance is an interesting overview of cloud CDNs. His methodology is flawed by the limited number of locations he’s testing from, but his comparison charts of features and whatnot are a handy reference for anyone who’s looking at file delivery off the cloud. (And for those who have missed the announcement: don’t forget the Windows Azure CDN, which presumably uses the tech that Microsoft licensed from Limelight.)
Jack of All Clouds has some nice graphs in a State of the Cloud post, showing sites (out of the top 500,000 sites) hosted by various public clouds.
Rich Miller rounds up a Slashdot discussion on how many servers an admin can manage. I’ll throw in my two cents that it’s not just a matter of how many people you have in true systems operations — you also have to look at what you invested in tools and the people to write and maintain those tools. There’s a TCO to be looked at here. Tools scale; people don’t. Anyone operating at dot-com or service provider scale rapidly develops a passion for automating everything humanly possible (or agrees that they’ll be giving up on sleep). But for the enterprise, tools implementations often don’t go as well as one might hope.
And on a Jim Cramer note, Rich Miller also has a fun round-up of data center stock performance in 2009 and since Cramer’s call.
The last quarter in review
The end of 2009 was extraordinarily busy, and that’s meant that, shamefully, I haven’t posted to my blog in ages. I aim to return to near-daily posting in 2010, but this means creating time in my schedule to think and research and write, rather than being entirely consumed by client inquiry.
December was Gartner’s data center conference, where I spent most of a week in back-to-back meetings, punctuated by a cloud computing end-user roundtable, a cloud computing town hall, and my talk on getting data center space. Attendance at the conference is skewed heavily towards large enterprises, but one of the most fascinating bits that emerged out of the week was the number of people walking around with emails from their CEO saying that they had to investigate this cloud computing thing, and whose major goals for the conference included figuring out how the heck they were going to reply to that email.
My cloud computing webinar is now available for replay — it’s a lightweight introduction to the subject. Ironically, when I started working at Gartner, I was terrified of public speaking, and much more comfortable doing talks over the phone. Now, I’m used to having live audiences and public speaking is just another routine day on the job… but speaking into the dead silence of an ATC is a little unnerving. (I once spent ten minutes giving a presentation to dead air, not realizing that the phone bridge had gone dead.) There were tons of great questions asked by the audience, far more than could possibly be answered in the Q&A time, but I’m taking the input and using it to figure out what I should be writing this year.
Q4 2009, by and large, continued my Q3 inquiry trends. Tons of colocation inquiries — but colocation is often giving way to leasing, now, and local/regional players are prominent in nearly every deal (and winning a lot of the deals). Relatively quiet on the CDN front, but this has to be put in context — Gartner’s analysts took over 1300 inquiries on enterprise video during 2009, and these days I’m pretty likely to look at a client’s needs and tell them they need someone like Kontiki or Ignite, not a traditional Internet CDN. And cloud, cloud, cloud is very much on everyone’s radar screen, with Asia suddenly becoming hot. Traditional dedicated hosting is dying at a remarkable pace; it’s unusual to see new deals that aren’t virtualized.
I’ll be writing on all this and more in the new year.
Traffic Server returns from the dead
Back in 2002, Yahoo acquired Inktomi, a struggling software vendor whose fortunes had turned unpleasantly with the dot-com crash. While at the time of the acquisition, Inktomi had refocused its efforts upon search, its original flagship product — the one that really drove its early revenue growth — was something called Traffic Server.
Traffic Server was a Web proxy server — essentially, software for running big caches. It delivered significantly greater scalability, stability, and maintainability than did the most commonly-used alternative, the open-source Squid. It was a great piece of software; at one point in time, I was one of Inktomi’s largest customers (possibly the actual largest customer), with several hundred Traffic Servers deployed in production globally, so I speak from experience, here. (This was as ISP caches, as opposed to the way that Yahoo uses it, which is a front-end, “reverse proxy” cache.)
Now, as ghosts of the dot-com era resurface, Yahoo is open-sourcing Traffic Server. This is a boon not only to Web sites that need high scalability, but also to organizations that need inexpensive, high-performance proxies for their networks, as well as low-end CDNs whose technology is still Squid-based. There are now enterprise competitors in this space (such as Blue Coat Systems), but open-source remains a lure for many seeking low-cost alternatives. Moreover, service providers and content providers have different needs from the enterprise.
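The core job of a proxy cache like Traffic Server or Squid reduces to a few lines of logic. This toy sketch (with the origin fetch stubbed out) shows only the hit/miss/expiry behavior — it’s not drawn from the actual codebase, where the hard problems are concurrency, disk layout, and HTTP correctness:

```python
import time

class TinyCache:
    """Toy proxy-cache logic: serve from cache while fresh, else refetch."""

    def __init__(self, fetch_origin, ttl_seconds=60):
        self.fetch_origin = fetch_origin  # callable: url -> response body
        self.ttl = ttl_seconds
        self.store = {}                   # url -> (body, expires_at)
        self.hits = self.misses = 0

    def get(self, url, now=None):
        now = time.monotonic() if now is None else now
        entry = self.store.get(url)
        if entry and entry[1] > now:      # fresh copy on hand: cache hit
            self.hits += 1
            return entry[0]
        self.misses += 1                  # miss or expired: go to origin
        body = self.fetch_origin(url)
        self.store[url] = (body, now + self.ttl)
        return body

# Stub origin so the example is self-contained.
cache = TinyCache(fetch_origin=lambda url: f"<body of {url}>", ttl_seconds=60)
cache.get("http://example.com/")   # miss: fetched from origin
cache.get("http://example.com/")   # hit: served from cache
print(cache.hits, cache.misses)    # 1 1
```

Everything beyond this — cache hierarchies, revalidation, eviction under memory pressure — is where products like Traffic Server earned their scalability reputation.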
This open-sourcing is only to Yahoo’s benefit. It’s not a core piece of technology, there are plenty of technology alternatives available already, and by opening up the source code to the community, they’re reasonably likely to attract active development at a pace beyond what they could invest in internally.
Jim Cramer’s “Death of the Data Center”
Jim Cramer’s “Mad Money” featured an interesting segment yesterday, titled “Sell Block: The Death of the Data Center?”
Basically, the premise of the segment is that Intel’s Nehalem DP processors will allow businesses to shrink their data center footprint, and thus businesses won’t need as much data center space, commercial data centers will empty out, and businesses might even bring previously colocated gear back into in-house data centers. He claims, somewhat weirdly, that because the Wall Street analysts who cover this space are primarily telco analysts, they’re not thinking about the impact of compute density on the growth of data center space.
I started to write a “Jim Cramer has no idea what he’s talking about” post, but I saw that Rich Miller over at Data Center Knowledge beat me to it.
Processing power has been increasing exponentially forever, but data center needs have grown even more quickly — certainly in the exponential-growth dot-com world, but even in the enterprise. There’s no reason to believe that this next generation of chips changes that at all, and it’s certainly not backed up by survey data from enterprise buyers, much less rapidly-growing dot-coms.
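The arithmetic here is simple enough to sketch. The growth rates below are made up purely to illustrate the mechanism, not drawn from any survey: as long as demand for compute compounds faster than density improves, the footprint keeps growing even as servers get denser.

```python
# Illustrative only: both rates below are hypothetical.
density_gain_per_year = 1.4   # compute-per-rack improvement (e.g., new chips)
demand_growth_per_year = 1.6  # growth in total compute demand

footprint = 100.0             # racks needed today
history = [footprint]
for year in range(5):
    # Racks needed scale with demand, divided by how much denser racks get.
    footprint *= demand_growth_per_year / density_gain_per_year
    history.append(footprint)

print([round(f, 1) for f in history])
# The footprint grows every single year, denser chips notwithstanding.
```

Denser processors only shrink data centers if demand stands still — and nothing in the enterprise or dot-com survey data suggests that it does.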
Cramer also seems to fail to understand the fundamental value proposition of Equinix in particular. It’s not about providing the space more cheaply; it’s about the ability to interconnect to lots of networks. That’s why companies like Google, Microsoft, etc. have built their own data centers in places where there’s cheap power — but continued to leave edge footprints and interconnect within Equinix and other high-network-density facilities.