The Tip of the Spear
So I caught an interesting Horses for Sources blog post via Twitter — Phil Fersht of HfS called out a blog post by ISG's Stanton Jones discussing the Gartner Magic Quadrant for Managed Hosting that I published earlier this year.
Stanton Jones’s argument seems to be that analysts sit in ivory towers, theorizing about suppliers, and making broad general statements about the market, whereas sourcing consultants actually get down and dirty with clients who are buying stuff. He says, “Analyst research is not at the tip of the spear, where buying and selling is actually occurring.”
However, that’s not actually true at the end-user-focused research firms like Gartner and Forrester. As an analyst at Gartner, I can do over a thousand one-on-one (or one-on-client-team) interactions in a single year. Each of those interactions represents a client who is considering what to buy (or build), and then going through the procurement cycle. They’re short-listing vendors, writing RFPs, wanting to discuss RFP responses, negotiating contracts and prices. It is absolutely the tip of the spear, and critically, over the long term, you also get feedback from the client as to the success of their initiative, so you’re hopefully not throwing out advice that turns out to fail in the real world.
So yes, something like a Magic Quadrant provides broad, generic advice (although I do try to orient my advice towards specific use cases); that's all that written research can get you. However, nearly every Gartner client buys access to inquiry, so that they can get those one-on-one, freewheeling, entirely-custom interactions — so they can say, this is my exact situation, get to Stanton Jones's "I'm a multinational company who wants a company that can support a transformational infrastructure sourcing initiative", and ask which of these vendors I'd recommend.
By the way, if a client asked that question, I'd tell them, "You don't want a managed hoster for this. Try discussing this with our strategic sourcing analysts, or with our data center outsourcing analysts." (And it's not improbable that the question would be caught at the level of our client services organization, which would look at it and say, gosh, this doesn't sound like managed hosting; maybe you'd like these other, more appropriate pieces of research and a discussion with these other analysts instead. For negotiating that kind of deal, by the way, Gartner has a consulting division that can do it; analysts don't do the kind of work that division, or a competitor like TPI, does.)
One more note: If I published in an MQ the feedback, good, bad, and ugly, that I've gotten from clients using service providers "on the ground", I would never actually be able to get the research out, because it would undoubtedly be caught in a zillion Ombudsman escalations from the vendors. But if you talk to me on an inquiry, I might very well tell you, "In the last six months, every customer of Vendor X that I have talked to has been unhappy, which represents a big swing in their historical quality of customer service," or even, "Customers who use Vendor X for Use Case A are happy, but those who have Use Case B think it lacks sufficient expertise with the technology." Client inquiry volume at a big research house like Gartner gets you enough anecdotal data points to show you trends. Clients want the ground truth; that's part of what they're paying an analyst firm for.
Amazon CloudFront gets whole site delivery and acceleration
For months, there has been an abundance of rumors that Amazon intended to enter the dynamic site acceleration market; it was the logical next step for its CloudFront CDN. Today, Amazon released a set of features oriented towards dynamic content, described in blog posts from Amazon's Jeff Barr and Werner Vogels.
When CloudFront introduced custom origins (as opposed to the original CloudFront, which required you to use S3 as the origin), and dropped minimum TTLs down to zero, it effectively edged into the “whole site delivery” feature set that’s become mainstream for the major CDNs.
With this latest release, whole site delivery is much more of a reality — you can have multiple origins, so you can mix static and dynamic content (which are often served from different hostnames; e.g., you might have images.mycompany.com serving your static content, but www.mycompany.com serving your dynamic content), and you've got pattern-matching rules that let you define the cache behavior for content whose URL matches a particular pattern.
The “whole site delivery” feature set is important, because it hugely simplifies CDN configuration. Rather than having to go through your site and change its URL references to the CDN (long-time CDN watchers may remember that Akamai in the early days would have customers “Akamaize” their site using a tool that did these URL rewrites), the CDN is smart — it just goes to the origin and pulls things, and it can do so dynamically (so, for instance, you don’t have to explicitly publish to the CDN when you add a new page, image, etc. to your website). It gets you closer to simply being able to repoint the URL of your website to the CDN and having magic happen.
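To make the multiple-origin and pattern-matching configuration concrete, here's a minimal sketch using the boto3 Python SDK. The hostnames, IDs, and TTLs are all hypothetical; this illustrates the shape of such a configuration rather than being a drop-in setup:

```python
import boto3

cloudfront = boto3.client("cloudfront")

# Hypothetical origin IDs for this illustration.
STATIC_ORIGIN = "static-origin"
DYNAMIC_ORIGIN = "dynamic-origin"

def custom_origin():
    # A plain HTTP custom origin (as opposed to an S3 origin).
    return {
        "HTTPPort": 80,
        "HTTPSPort": 443,
        "OriginProtocolPolicy": "http-only",
    }

response = cloudfront.create_distribution(DistributionConfig={
    "CallerReference": "whole-site-example-1",
    "Comment": "Whole-site delivery: static + dynamic origins",
    "Enabled": True,
    # Two origins: one for static assets, one for the dynamic application.
    "Origins": {
        "Quantity": 2,
        "Items": [
            {"Id": STATIC_ORIGIN,
             "DomainName": "images.mycompany.com",   # hypothetical
             "CustomOriginConfig": custom_origin()},
            {"Id": DYNAMIC_ORIGIN,
             "DomainName": "www.mycompany.com",      # hypothetical
             "CustomOriginConfig": custom_origin()},
        ],
    },
    # Default behavior: pass everything to the dynamic origin, TTL 0.
    "DefaultCacheBehavior": {
        "TargetOriginId": DYNAMIC_ORIGIN,
        "ViewerProtocolPolicy": "allow-all",
        "ForwardedValues": {"QueryString": True, "Cookies": {"Forward": "all"}},
        "TrustedSigners": {"Enabled": False, "Quantity": 0},
        "MinTTL": 0,
    },
    # Pattern-matching rule: URLs under /images/ are static and cacheable.
    "CacheBehaviors": {
        "Quantity": 1,
        "Items": [{
            "PathPattern": "/images/*",
            "TargetOriginId": STATIC_ORIGIN,
            "ViewerProtocolPolicy": "allow-all",
            "ForwardedValues": {"QueryString": False,
                                "Cookies": {"Forward": "none"}},
            "TrustedSigners": {"Enabled": False, "Quantity": 0},
            "MinTTL": 86400,  # cache static content for a day
        }],
    },
})
print(response["Distribution"]["DomainName"])
```

The point of the sketch is the structure: multiple origins, a default behavior for dynamic content, and per-URL-pattern behaviors layered on top.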
The dynamic site acceleration features — the actual network optimization features — that are being introduced are much more limited. They basically amount to TCP connection multiplexing, TCP connection persistence/pooling, and TCP window size optimization, much like Cotendo in its very first version. At this stage, it's not going to compete seriously against Akamai's DSA offering (or CDNetworks's similar DWA offering), but it might have appeal against EdgeCast's ADN offering.
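For readers who want a feel for why connection persistence/pooling helps at all, here is a small illustrative Python comparison. It has nothing to do with Amazon's implementation; it simply demonstrates the general principle that reusing an established TCP (and TLS) connection avoids repeated handshakes on every request:

```python
import time
import requests

URL = "https://example.com/"  # any reachable URL; a stand-in target

def fetch_without_pooling(n):
    # Each call opens (and tears down) a fresh TCP/TLS connection.
    start = time.perf_counter()
    for _ in range(n):
        requests.get(URL)
    return time.perf_counter() - start

def fetch_with_pooling(n):
    # A Session keeps connections alive and reuses them across requests.
    start = time.perf_counter()
    with requests.Session() as session:
        for _ in range(n):
            session.get(URL)
    return time.perf_counter() - start

n = 10
print(f"without pooling: {fetch_without_pooling(n):.2f}s")
print(f"with pooling:    {fetch_with_pooling(n):.2f}s")
```

A CDN doing this between its edge and the origin (plus multiplexing many client requests over those pooled connections) gets a similar saving on every cache miss.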
However, I would expect that, like everything else Amazon releases, there will be frequent updates that introduce new features. The acceleration techniques are well known at this point, and Amazon would logically add bidirectional (symmetric POP-to-POP) acceleration as the next big feature, in addition to implementing the other common optimizations (dynamic congestion control, TCP "FastRamp", etc.).
What's important here: CloudFront dynamic acceleration costs the same as static delivery. For US delivery, that starts at about $0.12/GB and goes down to below $0.02/GB at high volumes. That's easily somewhere between one-half and one-tenth of the going rate for dynamic delivery. The delta is even greater if you look at a dynamic product like Akamai WAA (or its next generation, Terra Alta), where enterprise applications that might do all of a TB of delivery a month typically cost $6,000 per app per month, whereas a TB of CloudFront delivery is $120. Akamai is pushing the envelope in feature development, and arguably those price points are so divergent that you're talking about different markets. But low price points also expand a market to where lots of people can decide to do things, because it's a totally different level of decision: to an enterprise, at that kind of price point, it might as well be free.
Give CloudFront another year of development, and there’s a high probability that it can become a seriously disruptive force in the dynamic acceleration market. The price points change the game, making it much more likely that companies, especially SaaS providers (many of whom use EC2, and AWS in general), who have been previously reluctant to adopt dynamic acceleration due to the cost, will simply get it as an easy add-on.
There is, by the way, a tremendous market opportunity out there for a company that delivers value-added services on top of CloudFront: the professional services to help customers integrate with it, ongoing expert technical support on a day-to-day basis, and a great user portal that provides industry-competitive reporting and analytics. CloudFront has reached the point where enterprises, large mainstream media companies, and other users of Akamai, Limelight, and Level 3 are genuinely interested in taking a serious look at it as an alternative, provided they can get ongoing support for their complex implementations and a great toolset that helps them operate those CDN implementations intelligently. But there's no company that I know of providing the services and software that would bridge the gap between CloudFront and a traditional CDN implementation.
Do Amazon’s APIs matter?
For those who have been wondering where I personally stand in the brouhaha over Amazon, Citrix, Eucalyptus, CloudStack, OpenStack, Rackspace, HP, and so on, along with the broader competitive market that includes VMware, Microsoft, and the Four Horsemen of management tools… I should state up-front that I hold the optimistic viewpoint that I want everyone to be as successful as possible — service providers, commercial vendors, open-source projects, and the customers and users that depend upon them.
I feel that the more competent the competition in a market, the more that everyone in the ecosystem is motivated to do better, and the more customers benefit as a result. Customers benefit from better technology, lower costs, more responsive sales, and differentiated approaches to the market. Clearly, competition can hurt companies, but especially with emerging technology markets, competition often results in making the pie bigger for everyone, by expanding the range of customers that can be served — although yes, sometimes weaker competitors will be culled from the herd.
I believe that companies are best served by being the best they can be — you can target a competitor by responding on a tactical basis, and sometimes you want to, but for your optimal long-term success, you should strive to be great yourself. Obsessing over what your competitors are doing can easily distract companies from doing the right thing on a long-term strategic basis.
That said:
Dan Woods over on Forbes has written a blog post about questions around Amazon’s API strategy, and Jim Plamondon (Rackspace Developer Relations) has posted a comment on my blog about Amazon ecosystem zombiefication.
I’ve been thinking about the implications of Amazon API compatibility, and the degree to which it is or isn’t to Amazon’s advantage to encourage other people to build Amazon-compatible clouds.
I think it comes down to the following: If Amazon believes that they can innovate faster, drive lower costs, and deliver better service than all of their competitors that are using the same APIs (or, for that matter, enterprises who are using those same APIs), then it is to their advantage to encourage as many ways to “on-ramp” onto those APIs as possible, with the expectation that they will switch onto the superior Amazon platform over time.
But I would also argue that all this nattering about the basic semantics of provisioning bare resource elements is largely a waste of time for most people. None of the APIs for provisioning compute and storage (whether EC2/S3/EBS or their counterparts in other clouds) are complicated things at their core. They're almost always wrapped in an abstraction layer, third-party library, or management tool. However, APIs may matter to people who are building clouds, because they implicitly express the underlying conceptual framework of the system, and the richness of the API semantics constrains what can be expressed and therefore what can be controlled via the API; the constraints of the Amazon APIs force everyone else to express richer concepts in some other way.
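To illustrate the wrapping point, here's a hedged sketch using Apache Libcloud, one example of the kind of third-party abstraction library in question. The credentials are placeholders; the point is how thin the basic provisioning semantics actually are:

```python
from libcloud.compute.types import Provider
from libcloud.compute.providers import get_driver

# Placeholder credentials; any provider driver looks much the same.
Driver = get_driver(Provider.EC2)
conn = Driver("ACCESS_KEY_ID", "SECRET_KEY")

# The core provisioning semantics: enumerate sizes and images, make a node.
sizes = conn.list_sizes()
images = conn.list_images()   # can be slow on EC2 without filtering
node = conn.create_node(name="demo-node", size=sizes[0], image=images[0])
print(node.id, node.state)
```

Swap the Provider constant and the same few calls work against a different cloud, which is exactly why the raw provisioning API is rarely where differentiation happens.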
But the battle will increasingly not be fought at this very basic level of ‘how do I get raw resources’. I recognize that building a cloud infrastructure platform at scale and with a lot of flexibility is a very difficult problem (although a simple and rigid one is not an especially difficult problem, as you can see from the zillion CMPs out in the market). But it’s not where value is ultimately created for users.
Value for users is ultimately created at the layers above the core infrastructure. Everyone has to get core infrastructure right, but the real question is: How quickly can you build value-added services, and how well does the adaptability of your core infrastructure allow you to serve a broad range of use cases (or serve a narrow range of use cases in a fashion superior to everyone else) and to deliver new capabilities to your users?
Ecosystems in conflict – Amazon vs. VMware, and OpenStack
Citrix contributing CloudStack to the Apache Software Foundation isn't so much a shot at OpenStack (which just happens to get caught in the crossfire) as it is a shot at VMware.
There are two primary ecosystems developing in the world: VMware and Amazon. Other possibilities, like Microsoft and OpenStack, are completely secondary to those two. You can think of VMware as “cloud-out” and Amazon as “cloud-in” approaches.
In the VMware world, you move your data center (with its legacy applications) into the modern era with virtualization, and then you build a private cloud on top of that virtualized infrastructure; to get additional capacity, business agility, and so forth, you add external cloud IaaS, and hopefully do so with a VMware-virtualized provider (and, they hope, specifically a vCloud provider who has adopted the stack all the way up to vCloud Director).
In the Amazon world, you build and launch new applications directly onto cloud IaaS. Then, as you get to scale and a significant amount of steady-state capacity, you pull workloads back into your own data center, where you have Amazon-API-compatible infrastructure. Because you have a common API and set of tools across both, where to place your workloads is largely a matter of economics (assuming that you’re not using AWS capabilities beyond EC2, S3, and EBS). You can develop and test internally or externally, though if you intend to run production on AWS, you have to take its availability and performance characteristics into account when you do your application architecture. You might also adopt this strategy for disaster recovery.
While CloudStack has been an important CMP option for service providers — notably competing against the vCloud stack, OnApp, Hexagrid, and OpenStack — in the end, these providers are almost a decoration to the Amazon ecosystem. They’re mostly successful competing in places that Amazon doesn’t play — in countries where Amazon doesn’t have a data center, in the managed services / hosting space, in the hypervisor-neutral space (Amazon-style clouds built on top of VMware’s hypervisor, more specifically), and in a higher-performance, higher-availability market.
Where CloudStack has been more interesting is in its use as a "cloud-in" platform for organizations that are using AWS in a significant fashion and want their own private cloud that's compatible with it. Eucalyptus fills this niche as well, although its customers tend to be smaller, and it tends to compete in the general private-cloud-builder CMP space targeted at enterprises — against the vCloud stack, Abiquo, HP CloudSystem, BMC Cloud Lifecycle Manager, CA's 3Tera AppLogic, and so on. CloudStack tends to be used by bigger organizations; while it's in the general CMP competitive space, enterprises that evaluate it are more likely to also be evaluating, say, Nimbula and OpenStack.
CloudStack has firmly aligned itself with the Amazon ecosystem. But OpenStack is an interesting case of an organization caught in the middle. Its service provider supporters are fundamentally interested in competing against AWS (far more so than with the VMware-based cloud providers, at least in terms of whatever service they’re building on top of OpenStack). Many of its vendor contributors are afraid of a VMware-centric world (especially as VMware moves from virtualizing compute to also virtualizing storage and networks), but just as importantly they’re afraid of a world in which AWS becomes the primary way that businesses buy infrastructure. It is to their advantage to have at least one additional successful widely-adopted CMP in the market, and at least one service provider successfully competing strongly against AWS. Yet AWS has established itself as a de facto standard for cloud APIs and for the way that a service “should” be designed. (This is why OpenStack has an aptly named “Nova Feature Parity Team” playing catch-up to AWS, after all, and why debates about the API continue in the OpenStack community.)
But make no mistake about it. This is not about scrappy free open-source upstarts trying to upset an established vendor ecosystem. This is a war between vendors. As Simon Wardley put it, beware of geeks bearing gifts. CloudStack is Citrix’s effort to take on VMware and enlist the rest of the vendor community in doing so. OpenStack is an effort on the part of multiple vendors — notably Rackspace and HP — to pool their engineering efforts in order to take on Amazon. There’s no altruism here, and it’s not coincidental that the committers to the projects have an explicit and direct commercial interest — they are people working full-time for vendors, contributing as employees of those vendors, and by and large not individuals contributing for fun.
So it really comes down to this: Who can innovate more quickly, and choose the right ways to innovate that will drive customer adoption?
Ladies and gentlemen, place your bets.
Citrix, CloudStack, OpenStack, and the war for open-source clouds
There are dozens upon dozens of cloud management platforms (CMPs), sometimes known as “cloud stacks” or “cloud operating systems”, out in the wild, both commercial and open source. Two have been in the news recently — Eucalyptus and CloudStack — with implications for the third, OpenStack.
Last week, Eucalyptus licensed Amazon’s API, and just yesterday, Wired extolled the promise of OpenStack.
Now, today, Citrix has dropped a bombshell into the open-source CMP world by announcing that it is contributing CloudStack (the Amazon-API-compatible CMP it acquired via its staggeringly expensive Cloud.com acquisition) to the Apache Software Foundation (ASF). This includes not just the core components, which are already open-source, but also all of the currently closed-source commercial components (except any third-party things that were licensed from other technology companies under non-Apache-compatible licenses).
I have historically considered CloudStack a commercial CMP that happens to have a token open-source core, simply because anyone considering a real deployment of CloudStack buys the commercial version to get all the features — you just don’t really see people adopting the non-commercial version, which I consider a litmus test of whether or not an open-core approach is really viable. This did change with Citrix, and the ASF move truly puts the whole thing out there as open source, so adopters have a genuine choice about whether or not they want to pay for commercial support, and it should spur more contributions from people and organizations that were opposed to the open-core model.
What makes this big news is the fact that OpenStack is a highly immature platform (it's unstable and buggy and still far from feature-complete, and people who work with it politely characterize it as "challenging"), whereas CloudStack is, at this point in its evolution, a solid product — it's production-stable and relatively turnkey, comparable to VMware's vCloud Director (some providers who have lab-tested both even claim its stability and ease of implementation are better than vCD's). Taking a stable, featureful base and adding onto it is far easier for an open-source community to do than trying to build complex software from scratch.
Also, by simply giving CloudStack to the ASF, Citrix explicitly embraces a wholly-open, committer-driven governance model for an open-source CMP. Eucalyptus has already wrangled with its community over its open-core closed-extensions approach, and Rackspace is still struggling with governance issues even though it's promised to put OpenStack into a foundation, because of the proposed commercial sponsorship of board seats. CloudStack is also changing from GPLv3 to the Apache license, which should remove some concerns about contributing. (OpenStack also uses the Apache license.)
Citrix, of course, stands to benefit indirectly — most people who choose to use CloudStack also choose to use Xen, and often purchase XenServer, plus Citrix will continue to provide commercial support for CloudStack. (It will just be a commercial distribution and support, though, without any additional closed-source code.) And they rightfully see VMware as the enemy, so explicitly embracing the Amazon ecosystem makes a lot of sense. (Randy Bias has more thoughts on Citrix; read James Urquhart's comment, too.)
Citrix has also explicitly emphasized Amazon compatibility with this announcement. OpenStack’s community has been waffling about whether or not they want to continue to support an Amazon-compatible API; at the moment, OpenStack has its own API but also secondarily supports Amazon compatibility. It’s an ecosystem question, as well as potentially an intellectual property issue if Amazon ever decides to get tetchy about its rights. (Presumably Citrix isn’t being this loud about compatibility without Amazon quietly telling them, “No, we’re not going to sue you.”)
I think this move is going to cause a lot of near-term soul-searching amongst the major commercial contributors to OpenStack. While clearly there’s value in working on multiple projects, each of the vendors still needs to place bets on where their engineering time and budgets are best spent. Momentum is with OpenStack, but it’s also got a long way to go.
HP has recently doubled down on OpenStack; it's not too late for them to change their mind, but for the moment, they're committed to an OpenStack direction both for their public developer-centric cloud IaaS and for their hybrid cloud and management software strategy. No doubt they'll end up supporting every major CMP that sees significant success, but HP is typically a slow mover, and it's taken them this long to get aligned on a strategy; I'm not personally expecting them to shift anytime soon.
But the other vendors are largely free to choose — likely to support both for the time being. Still, there may be a strong argument for primarily backing an ASF project that already has a decent core codebase and is ready for mainstream production use, rather than spending the next one to two years (depending on who you talk to) trying to get OpenStack to the point where it's a real commercial product (defined as meeting enterprise expectations for stable, relatively maintenance-free software).
The absence of major supporting vendor announcements along with the Citrix announcement is notable, though. Most of the big vendors have made loud commitments to OpenStack, commitments that I don’t expect anyone to back down on, in public, even if I expect that there could be quiet repositioning of resources in the background. I’ve certainly had plenty of confidential conversations with a broad array of technology vendors around their concerns for the future of OpenStack, and in particular, when it will reach commercial readiness; I expect that many of them would prefer to put their efforts behind something that’s commercially ready right now.
There will undoubtedly be some people who say that Citrix’s move basically indicates that CloudStack has failed to compete against OpenStack. I don’t think that’s true. I think that CloudStack is gaining better “real world” adoption than OpenStack, because it’s actually usable in its current form without special effort (i.e., compared to other commercial software) — but the Rackspace marketing machine has done an outstanding job with hyping OpenStack, and they’ve done a terrific job building a vendor community, whereas CloudStack’s primary committers have been, to date, almost solely Cloud.com/Citrix.
Both OpenStack and CloudStack can co-exist in the market, but if Citrix wants to speed up the creation of Amazon-compatible clouds that can be used in large-scale production by enterprises trying to do Amazon hybrid clouds (or more precisely, who want freedom to easily choose where to place their workloads), it needs to persuade other vendors to devote their efforts to enhancing CloudStack rather than pouring more time into OpenStack.
Note that with this announcement, Citrix also cancels Project Olympus, its planned OpenStack commercial distribution, although it intends to continue contributing to OpenStack. (Certainly they need to, if they’re going to support folks like Rackspace who are trying to do XenServer with OpenStack; the OpenStack deployments to date have been KVM for stability reasons.)
But it’s certainly going to be interesting out there. At this stage of the CMP evolution, I think that the war is much more for corporate commitment and backing with engineers paid to work on the projects, than it is for individual committers from the broader world — although certainly individual engineers (the open-source talent, so to speak) will choose to join the companies who work on their preferred projects.
The Amazon-Eucalyptus partnership
Eucalyptus, a commercial open-source cloud management platform (“CMP”, software used to build cloud infrastructure), recently announced that it had signed a partnership with Amazon.
Eucalyptus began life as a university project to build a CMP that would create Amazon-API-compatible cloud infrastructure, but eventually turned into a commercial effort. However, like all other CMPs offering Amazon compatibility, Eucalyptus has always lived under the shadow of the threat that Amazon might someday try to enforce intellectual property rights related to its API.
With this partnership, Eucalyptus has formally licensed the Amazon API. There’s been a lot of speculation on what this means. My understanding is the following:
This is a non-exclusive technology partnership. Eucalyptus now has a formal license to build products that are compatible with the AWS APIs; at the moment, that’s EC2, S3, and EBS, but Eucalyptus can adopt the other APIs as well if they choose to. Amazon may enter into similar licensing agreements with others, enter into different sorts of partnerships, and so forth; this is a non-restrictive deal. Furthermore, this partnership is not a signal that Amazon is changing its stance towards other products/services with Amazon-compatible APIs, where it has to date adopted a laissez-faire attitude.
This is an API licensing deal, not a technology licensing deal. Amazon will provide Eucalyptus with API specifications, including related engineering specifications not provided in the public user-level documentation. However, Amazon will not be giving any technology away to Eucalyptus — this is not engineering assistance with the actual implementation. Eucalyptus will still need to do all of its own product engineering.
There is no coupling of Amazon and Eucalyptus’s development cycles. While Amazon will try to inform Eucalyptus of planned API changes so that Eucalyptus is able to release its own updates in a timely manner, Eucalyptus is on its own — if it can keep up with Amazon, fine, if it can’t, too bad. Eucalyptus is not obliged to remain Amazon-compatible, nor is Amazon obliged to ensure that it’s feasible for Eucalyptus to remain compatible.
Some people think that this deal will give Eucalyptus some much-needed life, since it has met with limited commercial interest, and its developer community has yet to really recover from the rifts created by a past licensing change.
I personally don't agree. With developers increasingly writing to libraries, or using third-party tools (RightScale, enStratus, etc.), they tend to care less about what's under the hood as long as their favorite tool supports it. Yes, Amazon's API has become a de facto standard, but I haven't seen Eucalyptus be the Amazon-compatible CMP of choice; instead, I see serious adopters choose CloudStack (Citrix, from the Cloud.com acquisition), and the vendors who want to be part of an open-source cloud project put their support primarily behind OpenStack. I'm not convinced that this licensing deal, however interesting, is going to significantly shift buyer preference towards Eucalyptus or improve its community support.
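To illustrate the "under the hood" point: with a library like Apache Libcloud, targeting Amazon EC2 versus a private Eucalyptus endpoint is essentially a driver-selection detail. This sketch assumes Libcloud's Eucalyptus driver (which speaks the EC2 API against a custom endpoint); the host, port, path, and credentials are all hypothetical:

```python
from libcloud.compute.types import Provider
from libcloud.compute.providers import get_driver

def get_connection(use_private_cloud=False):
    if use_private_cloud:
        # Eucalyptus exposes an EC2-compatible API, so only the
        # endpoint details differ. All values here are hypothetical.
        Driver = get_driver(Provider.EUCALYPTUS)
        return Driver("ACCESS_KEY", "SECRET_KEY",
                      host="euca.internal.example.com",
                      port=8773, secure=False,
                      path="/services/Eucalyptus")
    Driver = get_driver(Provider.EC2)
    return Driver("ACCESS_KEY", "SECRET_KEY")

# Identical application code either way; the tool, not the developer,
# ends up caring about what's under the hood.
conn = get_connection(use_private_cloud=True)
for node in conn.list_nodes():
    print(node.name, node.state)
```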
Wanted – Cloud IaaS Expert
Back in December, I blogged about five reasons you should work at Gartner with me, and I'm pleased to announce that Doug Toombs (formerly of Tier 1 Research / 451 Group) has joined my team.
However, Ted Chamberlin, my long-time colleague, has decided he’d like a change of pace, and has just left us for the vendor universe, and so we’re now seeking to backfill someone into his job. You can find the formal job listing here: IaaS and Managed Hosting Analyst.
In plain language, this analyst is going to spend their time assisting our clients who are trying to adopt cloud IaaS and hosting (especially managed hosting) solutions. The ideal candidate is someone who has real-world expertise with building solutions using cloud IaaS (and preferably has tried multiple cloud IaaS offerings and can intelligently discuss the pros and cons of each), or has been involved in building a cloud IaaS offering for a service provider (and is knowledgeable about the competing offerings). They’re sharp technically but also understand the needs of the business — someone who is currently in an architect role, or is working for a cloud provider, is probably the most likely fit.
If this sounds like you, please get in touch with me — send a resume in email, or ask me privately for more information. (If you applied the last time around, please feel free to get in touch again; the requirements for this role are somewhat different.)
My recent published research
I’d gotten out of the social media habit — Twitter and blogging — over the holidays and never really restarted, and now that a quarter has gone by, I’m feeling like I really ought to get back into the habit.
So, it’s time for a catch-up, starting with a round-up of my recent research, and, over the next few days, a glimpse into what I’m currently working on, what clients have been saying, and some thoughts on recent industry news.
Please note that unless otherwise stated, the research notes are available to Gartner clients only.
The Magic Quadrant for Managed Hosting is now out. (See the free reprint if you’re not a client.) This should have been a 2011 document, but was delivered late; consequently, there will be a late-2012 update, back on the normal publication schedule. This Magic Quadrant is being split into two regional ones — one for North America and one for Western Europe — for that late-2012 iteration. That should allow us to cover a broader set of providers and to better focus on the particular needs and desires of the two geographies, rather than presenting a single global view that has tended to be US-centric.
Our most recent set of market definitions, explanation of the market structure, and general pricing guidance can be found in the Pricing and Buyer’s Guide for Web Hosting and Cloud Infrastructure, 2012. This also explains the specific markets covered by our various Magic Quadrants.
Amazon has been a topic of great interest to all of our client constituencies. What Managers Need to Know About Amazon EC2 is a plain-language guide to this aspect of Amazon Web Services (and has some broader guidance on purchasing AWS services in general, as well). It’s targeted at an audience looking for fast facts, including non-technical audiences, like procurement managers and investors trying to get smart on what Amazon does.
The Competitive Landscape: New Entrants to the Cloud IaaS Market Face Tough Competitive Challenges is targeted at a technology provider audience (and potentially at investors). It’s a look at what’s really required to compete in the cloud IaaS market going forward, and it profiles both Amazon and CSC deeply, demonstrating two very different paths to success in this market.
Everyone wonders what cloud IaaS is being used for on a practical basis. In Case Study: Using Cloud IaaS for Business Continuity Solutions, we profile a major consumer electronics retailer, and how they use Amazon to provide a lightweight version of their website when they're doing maintenance on their primary site, have excessive amounts of traffic, or have a primary-site outage.
Finally, on the CDN front, I’ve updated a previous note with current market info and a bit on front-end optimization: Content Delivery Network Services and Pricing, 2012.
Akamai buys Cotendo
Akamai is acquiring Cotendo for a purchase price of $268 million, somewhat under the rumored $300 million that had been previously reported in the Israeli press. To judge from the stock price, the acquisition is being warmly received by investors (and for good reason).
The acquisition only impacts the website delivery/acceleration portion of the CDN market — it has no impact on the software delivery and media delivery segments. The acquisition will leave CDNetworks as the only real alternative for dynamic site acceleration that is based on network optimization techniques (EdgeCast does not seem to have made the technological cut thus far). Level 3 (via its Strangeloop Networks partnership) and Limelight (via its Acceloweb acquisition) have chosen to go with front-end optimization techniques instead for their dynamic acceleration. Obviously, AT&T is going to have some thinking to do, especially since application-fluent networking is a core part of its strategy for cloud computing going forward.
I am not going to publicly blog a detailed analysis of this acquisition, although Gartner clients are welcome to schedule an inquiry to discuss it (thus far the questions are coming from investors and primarily have to do with the rationale for the purchase price, technology capabilities, pricing impact, and competitive impact). I do feel compelled to correct two major misperceptions, though, which I keep seeing all over the place in press quotes from Wall Street analysts.
First, I’ve heard it claimed repeatedly that Cotendo’s technology is better than Akamai’s. It’s not, although Cotendo has done some important incremental engineering innovation, as well as some better marketing of specific aspects (for instance, their solution around mobility). I expect that there will be things that Akamai will want to incorporate into their own codebase, naturally, but this is not really an acquisition that is primarily being driven by the desire for the technology capabilities.
Second, I’ve also heard it claimed repeatedly that Cotendo delivers better performance than Akamai. This is nonsense. There is a specific use case in which Cotendo may deliver better performance — low-volume customers with low cache hit ratios due to infrequently-accessed content, as can occur with SaaS apps, corporate websites, and so on. Cotendo pre-fetches content into all of its POPs and keeps it there regardless of whether or not it’s been accessed recently. Akamai flushes objects out of cache if they haven’t been accessed recently. This means that you may see Akamai cache hit ratios that are only in the 70%-80% range, especially in trial evaluations, which is obviously going to have a big impact on performance. Akamai cache tuning can help some of those customers substantially drive up cache hits (for better performance, lower origin costs, etc.), although not necessarily enough; cache hit ratios have always been a competitive point that other rivals, like Mirror Image, have hammered on. It has always been a trade-off in CDN design — if you have a lot more POPs you get better edge performance, but now you also have a much more distributed cache and therefore lower likelihood of content being fresh in a particular POP.
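A toy simulation makes that trade-off concrete. This is not a model of any vendor's actual cache; it simply shows that when the same request stream and the same per-POP cache size are spread across more POPs, each POP sees any given object less often, objects age out of the LRU cache, and the per-POP hit ratio falls. All parameters are invented:

```python
import random
from collections import OrderedDict

def simulate(num_pops, cache_size=500, num_objects=10_000, num_requests=200_000):
    """Per-POP LRU caches fed by a shared, Zipf-ish request stream."""
    pops = [OrderedDict() for _ in range(num_pops)]
    hits = 0
    for _ in range(num_requests):
        # Heavy-tailed popularity: low object IDs are requested far more often.
        obj = min(int(random.paretovariate(1.1)), num_objects)
        cache = random.choice(pops)      # each user lands on some POP
        if obj in cache:
            hits += 1
            cache.move_to_end(obj)       # refresh its LRU position
        else:
            cache[obj] = True
            if len(cache) > cache_size:
                cache.popitem(last=False)  # evict the least-recently-used
    return hits / num_requests

for pops in (1, 5, 25):
    print(f"{pops:>2} POPs: hit ratio {simulate(pops):.0%}")
```

More POPs improve edge proximity in real networks, which this toy ignores; the simulation isolates only the cache-dilution side of the trade-off.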
(Those are the two big errors that keep bothering me. There are plenty of other minor factual and analysis errors that I keep seeing in the articles that I’ve been reading about the acquisition. Investors, especially, seem to frequently misunderstand the CDN market.)
The challenge of hiring development teams
A recent blog post on Forbes by Venkatesh Rao, The Rise of Developernomics, has ignited a lot of controversy around the concept that some developers are as much as 10x more productive than others. It's not a new debate; the assertion that some developers are 20x more productive than others has been around forever, and folks like Joel Spolsky have asserted that it's not just a matter of productivity, but also a developer's ability to hit the high notes of real breakthrough achievement that makes for greatness.
Worth reading out of all of these threads: Avichal Garg of Spool has a blog post on building 10x teams, which has a very nice dissection of the composition of great teams.
Also, for those of you who haven’t read it: Now Discover Your Strengths is a fantastic way to look at what people’s work-related strengths are, since it takes into account a broad range of personal and interpersonal traits. Rackspace turned me onto it a number of years ago; they actually hang a little sign with each employee’s strengths on their cube. (See mine for an example.)
Jon Evans of TechCrunch wrote a good blog post a few months ago, Why the New Guy Can't Code, which illustrates the challenges of hiring good developers. (There are shocking numbers of developers out there who have never really produced significant code in their jobs. Indeed, I once interviewed a developer with five years of experience who had never written code in a work context — he kept being moved from project to project, each of which was still in the formal requirements phase, so all he had was his five-years-stale student efforts from his CS degree.)
Even with the massive pile of unemployed developers out there, it’s still phenomenally challenging to hire good people. And if your company requires a narrow and specific set of things that the developer must have worked with before, rather than hiring a smart Swiss army knife of a developer who can pick up anything given a few days, you will have an even bigger problem, especially if you require multiple years of experience with brand-new technologies like AWS, NoSQL, Hadoop, etc.
With more and more Web hosters, systems integrators, and other infrastructure-specialist companies transforming themselves into cloud providers, and sometimes outright buying software companies (such as Terremark buying CloudSwitch, and Virtustream buying Enomaly), serious software development chops are becoming a key for a whole range of service providers who never really had significant development teams in the past. No one should underestimate how much of a shortage there is for great talent.
As a reminder, Gartner is hiring!