
Amazon CloudFront gets whole site delivery and acceleration

For months, there has been an abundance of rumors that Amazon intended to enter the dynamic site acceleration market; it was the logical next step for its CloudFront CDN. Today, Amazon released a set of features oriented towards dynamic content, described in blog posts from Amazon’s Jeff Barr and Werner Vogels.

When CloudFront introduced custom origins (as opposed to the original CloudFront, which required you to use S3 as the origin) and dropped the minimum TTL to zero, it effectively edged into the “whole site delivery” feature set that’s become mainstream for the major CDNs.

With this latest release, whole site delivery is much more of a reality. You can have multiple origins, so you can mix static and dynamic content (which are often served from different hostnames — e.g., you might have images.mycompany.com serving your static content, but http://www.mycompany.com serving your dynamic content), and you’ve got pattern-matching rules that let you define the cache behavior for content whose URL matches a particular pattern.
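To make that concrete, here is a rough sketch in Python of the shape such a configuration takes, loosely modeled on the CloudFront distribution API. The hostnames, IDs, and trimmed field set are illustrative assumptions; an actual API call requires additional fields.

```python
# Rough sketch of a multi-origin CloudFront distribution with a
# pattern-matching cache behavior. Hostnames and IDs are hypothetical,
# and the field set is trimmed for illustration; a real
# create-distribution call requires more fields than shown.
distribution_config = {
    "Comment": "whole-site delivery: static + dynamic origins",
    "Enabled": True,
    "Origins": [
        # Static assets come from one hostname...
        {"Id": "static-origin", "DomainName": "images.mycompany.com"},
        # ...while dynamic pages come from the application servers.
        {"Id": "dynamic-origin", "DomainName": "www.mycompany.com"},
    ],
    "CacheBehaviors": [
        # Pattern-matching rule: anything under /images/ is cacheable,
        # with a long TTL.
        {
            "PathPattern": "/images/*",
            "TargetOriginId": "static-origin",
            "MinTTL": 86400,  # cache for a day
        },
    ],
    # Everything else falls through to the dynamic origin with TTL 0,
    # i.e., fetched from the origin on every request.
    "DefaultCacheBehavior": {
        "TargetOriginId": "dynamic-origin",
        "MinTTL": 0,
    },
}
```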

The “whole site delivery” feature set is important, because it hugely simplifies CDN configuration. Rather than having to go through your site and change its URL references to the CDN (long-time CDN watchers may remember that Akamai in the early days would have customers “Akamaize” their site using a tool that did these URL rewrites), the CDN is smart — it just goes to the origin and pulls things, and it can do so dynamically (so, for instance, you don’t have to explicitly publish to the CDN when you add a new page, image, etc. to your website). It gets you closer to simply being able to repoint the URL of your website to the CDN and having magic happen.

The dynamic site acceleration features — the actual network optimization features — being introduced are much more limited. They basically amount to TCP connection multiplexing, TCP connection persistence/pooling, and TCP window size optimization, much like Cotendo in its very first version. At this stage, CloudFront is not going to seriously compete against Akamai’s DSA offering (or CDNetworks’ similar DWA offering), but it might have appeal against EdgeCast’s ADN offering.
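You can get a feel for what connection persistence and pooling buy from ordinary client code. Here is a toy sketch using Python’s urllib3, with example.com standing in for an origin; it illustrates only the pooling piece, since window-size tuning happens down in the TCP stack.

```python
# Toy illustration of TCP connection persistence/pooling (not any CDN's
# actual implementation): compare fresh connections per request against
# reusing a pooled keep-alive connection, as a CDN edge does toward the
# origin.
import time
import urllib3

URL = "http://example.com/"  # stand-in origin

# Fresh pool per request: every request pays the TCP handshake
# (and slow-start ramp) again.
start = time.perf_counter()
for _ in range(5):
    pool = urllib3.PoolManager()
    pool.request("GET", URL)
    pool.clear()  # drop the connection each time
cold = time.perf_counter() - start

# One persistent pool: the TCP connection is reused across requests.
pool = urllib3.PoolManager(maxsize=1)
start = time.perf_counter()
for _ in range(5):
    pool.request("GET", URL)
warm = time.perf_counter() - start

print(f"fresh connections: {cold:.2f}s, pooled/persistent: {warm:.2f}s")
```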

However, I would expect that, like everything else Amazon releases, there will be frequent updates introducing new features. The acceleration techniques are well known at this point, and Amazon would logically add bidirectional (symmetric POP-to-POP) acceleration as the next big feature, in addition to implementing the other common optimizations (dynamic congestion control, TCP “FastRamp”, etc.).

What’s important here: CloudFront dynamic acceleration costs the same as static delivery. For US delivery, that starts at about $0.12/GB and goes down to below $0.02/GB at high volumes. That’s easily somewhere between one-half and one-tenth of the going rate for dynamic delivery. The delta is even greater if you look at a dynamic product like Akamai WAA (or its next generation, Terra Alta), where enterprise applications that might do all of a TB of delivery a month typically cost $6,000 per app per month — whereas a TB of CloudFront delivery is $120. Akamai is pushing the envelope forward in feature development, and arguably those price points are so divergent that you’re talking about different markets. But low price points also expand a market to where lots of people can decide to do things, because it’s a totally different level of decision — to an enterprise, at that kind of price point, it might as well be free.
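The back-of-the-envelope math, using the figures above (a sketch that ignores per-request fees and exact tier boundaries):

```python
# Back-of-the-envelope comparison using the figures cited above
# (illustrative; ignores per-request fees and exact tier boundaries).
gb_per_tb = 1000  # decimal TB, as bandwidth is typically billed

cloudfront_rate = 0.12          # $/GB at the entry tier, US delivery
cloudfront_tb = cloudfront_rate * gb_per_tb
print(f"1 TB via CloudFront: ${cloudfront_tb:,.0f}/month")    # ~$120

waa_per_app = 6000              # typical $/app/month for ~1 TB delivered
print(f"Same TB via a WAA-style contract: ${waa_per_app:,}/month")
print(f"Ratio: {waa_per_app / cloudfront_tb:.0f}x")           # ~50x
```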

Give CloudFront another year of development, and there’s a high probability that it can become a seriously disruptive force in the dynamic acceleration market. The price points change the game, making it much more likely that companies, especially SaaS providers (many of whom use EC2, and AWS in general), who have previously been reluctant to adopt dynamic acceleration due to the cost, will simply get it as an easy add-on.

There is, by the way, a tremendous market opportunity out there for a company that delivers value-added services on top of CloudFront — which is to say, the professional services to help customers integrate with it, ongoing expert technical support on a day-to-day basis, and a great user portal that provides industry-competitive reporting and analytics. CloudFront has reached the point where enterprises, large mainstream media companies, and other users of Akamai, Limelight, and Level 3 — those who feel they need ongoing support of complex implementations and a great toolset to intelligently operate those CDN implementations — are genuinely interested in taking a serious look at CloudFront as an alternative. But there’s no company that I know of that provides the services and software to bridge the gap between CloudFront and a traditional CDN implementation.

My recently published research

I’d gotten out of the social media habit — Twitter and blogging — over the holidays and never really restarted, and now that a quarter has gone by, I’m feeling like I really ought to get back into the habit.

So, it’s time for a catch-up, starting with a round-up of my recent research, and, over the next few days, a glimpse into what I’m currently working on, what clients have been saying, and some thoughts on recent industry news.

Please note that unless otherwise stated, the research notes are available to Gartner clients only.

The Magic Quadrant for Managed Hosting is now out. (See the free reprint if you’re not a client.) This should have been a 2011 document, but was delivered late; consequently, there will be a late-2012 update, back on the normal publication schedule. This Magic Quadrant is being split into two regional ones — one for North America and one for Western Europe — for that late-2012 iteration. That should allow us to cover a broader set of providers and to better focus on the particular needs and desires of the two geographies, rather than presenting a single global view that has tended to be US-centric.

Our most recent set of market definitions, explanation of the market structure, and general pricing guidance can be found in the Pricing and Buyer’s Guide for Web Hosting and Cloud Infrastructure, 2012. This also explains the specific markets covered by our various Magic Quadrants.

Amazon has been a topic of great interest to all of our client constituencies. What Managers Need to Know About Amazon EC2 is a plain-language guide to this aspect of Amazon Web Services (and has some broader guidance on purchasing AWS services in general, as well). It’s targeted at an audience looking for fast facts, including non-technical audiences, like procurement managers and investors trying to get smart on what Amazon does.

The Competitive Landscape: New Entrants to the Cloud IaaS Market Face Tough Competitive Challenges is targeted at a technology provider audience (and potentially at investors). It’s a look at what’s really required to compete in the cloud IaaS market going forward, and it profiles both Amazon and CSC deeply, demonstrating two very different paths to success in this market.

Everyone wonders what cloud IaaS is being used for on a practical basis. In Case Study: Using Cloud IaaS for Business Continuity Solutions, we profile a major consumer electronics retailer, and how they use Amazon to provide a lightweight version of their website when they’re doing maintenance on their primary site, have excessive amounts of traffic, or have a primary-site outage.

Finally, on the CDN front, I’ve updated a previous note with current market info and a bit on front-end optimization: Content Delivery Network Services and Pricing, 2012.

Akamai buys Cotendo

Akamai is acquiring Cotendo for a purchase price of $268 million, somewhat under the rumored $300 million that had been previously reported in the Israeli press. To judge from the stock price, the acquisition is being warmly received by investors (and for good reason).

The acquisition only impacts the website delivery/acceleration portion of the CDN market — it has no impact on the software delivery and media delivery segments. The acquisition will leave CDNetworks as the only real alternative for dynamic site acceleration that is based on network optimization techniques (EdgeCast does not seem to have made the technological cut thus far). Level 3 (via its Strangeloop Networks partnership) and Limelight (via its Acceloweb acquisition) have chosen to go with front-end optimization techniques instead for their dynamic acceleration. Obviously, AT&T is going to have some thinking to do, especially since application-fluent networking is a core part of its strategy for cloud computing going forward.

I am not going to publicly blog a detailed analysis of this acquisition, although Gartner clients are welcome to schedule an inquiry to discuss it (thus far the questions are coming from investors and primarily have to do with the rationale for the purchase price, technology capabilities, pricing impact, and competitive impact). I do feel compelled to correct two major misperceptions, though, which I keep seeing all over the place in press quotes from Wall Street analysts.

First, I’ve heard it claimed repeatedly that Cotendo’s technology is better than Akamai’s. It’s not, although Cotendo has done some important incremental engineering innovation, as well as some better marketing of specific aspects (for instance, its solution around mobility). I expect that there will be things Akamai will want to incorporate into its own codebase, naturally, but this is not an acquisition driven primarily by a desire for the technology capabilities.

Second, I’ve also heard it claimed repeatedly that Cotendo delivers better performance than Akamai. This is nonsense. There is a specific use case in which Cotendo may deliver better performance — low-volume customers with low cache hit ratios due to infrequently-accessed content, as can occur with SaaS apps, corporate websites, and so on. Cotendo pre-fetches content into all of its POPs and keeps it there regardless of whether or not it’s been accessed recently. Akamai flushes objects out of cache if they haven’t been accessed recently. This means that you may see Akamai cache hit ratios that are only in the 70%-80% range, especially in trial evaluations, which is obviously going to have a big impact on performance. Akamai cache tuning can help some of those customers substantially drive up cache hits (for better performance, lower origin costs, etc.), although not necessarily enough; cache hit ratios have always been a competitive point that other rivals, like Mirror Image, have hammered on. It has always been a trade-off in CDN design — if you have a lot more POPs you get better edge performance, but now you also have a much more distributed cache and therefore lower likelihood of content being fresh in a particular POP.
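A crude weighted-average model shows why hit ratio dominates delivered performance. The edge and origin latencies below are hypothetical round-trip numbers, purely for illustration:

```python
# Why cache hit ratio dominates delivered performance: expected response
# time as a weighted average of edge hits and origin fetches.
# The latency figures are hypothetical, for illustration only.
EDGE_MS = 10     # served from the POP's cache
ORIGIN_MS = 200  # cache miss: fetched from the origin through the CDN

def expected_latency(hit_ratio: float) -> float:
    return hit_ratio * EDGE_MS + (1 - hit_ratio) * ORIGIN_MS

for hit in (0.70, 0.80, 0.95, 0.99):
    print(f"hit ratio {hit:.0%}: ~{expected_latency(hit):.0f} ms average")
# 70% -> ~67 ms; 99% -> ~12 ms: a roughly 5x swing from caching alone.
```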

(Those are the two big errors that keep bothering me. There are plenty of other minor factual and analysis errors that I keep seeing in the articles that I’ve been reading about the acquisition. Investors, especially, seem to frequently misunderstand the CDN market.)

Introducing the new Magic Quadrant for Public Cloud IaaS

I’m happy to announce that the new Gartner Magic Quadrant for Public Cloud Infrastructure as a Service has been published. (Client-only link. Non-clients can read a reprint.)

This is a brand-new Magic Quadrant; our previous Magic Quadrant has essentially been split into two MQs: this new Public Cloud IaaS MQ, which focuses on self-service, and an updated, more focused iteration of the previous MQ, centered on managed services, called the Managed Hosting and Cloud IaaS MQ.

It’s been a long and interesting and sometimes controversial journey. Threaded throughout this whole Magic Quadrant are the fundamental dichotomies of the market, like IT Operations vs. developer buyers, new applications vs. existing workloads, “virtualization plus” vs. the fundamental move towards programmatic infrastructure, and so forth. We’ve tried hard to focus on a pragmatic view of the immediate wants and needs of Gartner clients, which also reflect these dichotomies.

This is a Magic Quadrant unlike the ones we have historically done in our services research; it is focused on capabilities and features, in a manner much more comparable to the way we compare software companies than to the way we evaluate network services, managed hosting, or data center outsourcing. This reflects the fact that public cloud IaaS goes far beyond just self-service VMs, creating significant disparities in provider capabilities.

In fact, for this Magic Quadrant, we tried just about every provider hands-on, which is highly unusual for Gartner’s evaluation approach. However, because Gartner’s general philosophy isn’t to do the kind of lab evaluations that we consider the domain of journalists, the hands-on work was primarily to confirm that providers had particular features, and the specifics of what they had, without having to constantly pepper them with questions. Consequently this also involved reading a lot of documentation, community forums, and so on. This wasn’t full-fledged serious trialing. (The expense of the trials was paid on my personal credit card. Fortunately, since this was the cloud, it amounted to less than $150 all told.)

However, like all Magic Quadrants, there’s a heavy emphasis on business factors and not just technology — we are evaluating the positions of companies in the market, which are a composite of many things not directly related to comparable functionality of the services.

Like other Magic Quadrants, this one is targeted at the typical Gartner client — a mid-market company or an enterprise, but also our many tech company clients who range from tiny start-ups to huge monoliths. We believe that cloud IaaS, including the public cloud, is being used to run not only new applications, but also existing workloads. We don’t believe that public cloud IaaS is only for apps written specifically for the cloud, and we certainly don’t believe that it’s only for start-ups or leading-edge companies. It’s a nascent market, yes, but companies can use it productively today as long as they’re thoughtful about their use cases and deployment approach. We also don’t believe that cloud IaaS is solely the province of mass-scale providers; multi-tenancy can be cost-effectively delivered on a relatively small scale, as long as most of the workloads are steady-state (which legacy workloads often are).

Service features, sales, and marketing are all impacted by the need to serve two different buying constituencies: IT Operations and developers. Because we believe that developers are the face of business buyers, addressing that audience is just as important as addressing the traditional IT Operations audience. We do, however, emphasize a fundamentally corporate audience — this is definitely not an MQ aimed at, say, an individual building an iPhone app, or even non-technology small businesses.

Nowhere are those dichotomies better illustrated than by two of the Leaders in this MQ — Amazon Web Services and CSC. Amazon excels at addressing a developer audience and new applications; CSC excels at addressing a mid-market IT Operations audience on the path towards data center transformation and automation of IT operations management, by migrating to cloud IaaS. Both companies address audiences and use cases beyond that expertise, of course, but they have enormously different visions of their fundamental value proposition, both of which are valid. (For those of you who are going, “CSC? Really?” — yes, really. And they’ve been quietly growing far faster than any other VMware-based provider, so for all you vendors out there, if they’re not on your competitive radar screen, they should be.)

Of course, this means that no single provider in the Magic Quadrant is a fantastic fit for all needs. Furthermore, the right provider is always dependent upon not just the actual technical needs, but also the business needs and corporate culture, like the way that the company likes to engage with its vendors, its appetite for risk, and its viewpoint on strategic vs. tactical vendors.

Gartner has asked its analysts not to debate published research in public (per our updated Public Web Participation policy), especially Magic Quadrants. Consequently, I’m willing to engage in a certain amount of conversation about this MQ in public, but I’m not going to get into the kinds of public debates that I got into last year.

If you have questions about the MQ or are looking for more detail than is in the text itself, I’m happy to discuss. If you’re a Gartner client, please schedule an inquiry. If you’re a journalist, please arrange a call through Gartner’s press office. Depending on the circumstances, I may also consider a discussion in email.

This was a fascinating Magic Quadrant to research and write, and within the limits of that “no public debates” restriction, I may end up blogging more about it in the future. Also, as this is a fast-moving market, we’re highly likely to target an update for the middle of next year.

Beware misleading marketing of “private clouds”

Many cloud IaaS providers have been struggling to articulate their differentiation for a while now, and many of them labor under the delusion that “not being Amazon” is differentiating. But it also tends to lead them into misleading marketing, especially when it comes to trying to label their multi-tenant cloud IaaS “private cloud IaaS”, to distinguish it from Those Scary And Dangerous Public Cloud Guys. (And now that we have over four dozen newly-minted vCloud Powered providers in the early market-entrance stage, the noise is only going to get worse, as these providers thrash about trying to differentiate.)

Even providers whose marketing material is clear that the offering is a public, multi-tenant cloud IaaS sometimes have salespeople who pitch the offering as private cloud. We also find that customers are sometimes under the illusion that they’ve bought a private cloud, even when they haven’t.

I’ve seen three common variants of provider rationalization for why they are misleadingly labeling a multi-tenant cloud IaaS as “private cloud”:

We use a shared resource pool model. These providers claim that because customers buy by resource pool allocation (for instance, “100 vCPUs and 200 GB of RAM”) and can carve that capacity up into VMs as they choose, the capacity is therefore “private”, even though the infrastructure is fully multi-tenant. However, there is always still contention for these resources (even if neither the provider nor the customer deliberately oversubscribes capacity), as well as for any other shared elements, like storage and networking. Nor does it alter any of the risks of multi-tenancy. In short, a shared resource pool, versus a pay-by-the-VM model, is largely just a matter of the billing scheme and management convenience, possibly including the nice feature of allowing customers to voluntarily self-oversubscribe their purchased resources. It’s certainly not private. (This is probably the situation that customers most commonly confuse as “private”, even after long experience with the service — a non-trivial number of them think the shared resource pool is physically carved off for them.) A short code sketch of this model follows the third variant below.

Our customers don’t connect to us over the Internet. These providers claim that private networking makes them a private cloud. But in fact, nearly all cloud IaaS providers offer multiple networking options beyond plain old Internet, ranging from IPsec VPN over the Internet to a variety of private connectivity options from the carrier of your choice (MPLS, Ethernet, etc.). This has been true for years now, as I noted when I wrote about Amazon’s introduction of VPC back in 2009. Even Amazon essentially offers private connectivity these days: you can use AWS Direct Connect to get a cross-connect at select Equinix data centers, and from there, buy any connectivity that you wish.

We don’t allow everyone to use our cloud, so we’re not really “public”. These providers claim to have a “private cloud” because they vet their customers and only allow “real businesses”, however they define that. (The ones who exclude net-native companies as not being “real businesses” make me cringe.) They claim that a “public cloud” would allow anyone to sign up, and it would be an uncontrolled environment. This is hogwash. It can also lead to a false sense of complacency, as I’ve written before — the assumption that their customers are good guys means that they might not adequately defend against customer compromises or customer employees who go rogue.
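Here is the sketch promised under the first variant: resource-pool billing as an accounting construct layered over multi-tenant capacity. All names and numbers are hypothetical.

```python
# Sketch of the "shared resource pool" billing model: the customer buys
# an allocation and carves it into VMs as they choose, possibly even
# self-oversubscribing. Nothing here implies dedicated hardware; the
# pool is an accounting construct over multi-tenant infrastructure.
# All names and numbers are hypothetical.
from dataclasses import dataclass, field

@dataclass
class ResourcePool:
    vcpu_limit: int
    ram_gb_limit: int
    oversubscribe: bool = True          # voluntary self-oversubscription
    vms: list = field(default_factory=list)

    def launch(self, name: str, vcpus: int, ram_gb: int) -> None:
        cpu_used = sum(vm["vcpus"] for vm in self.vms) + vcpus
        ram_used = sum(vm["ram_gb"] for vm in self.vms) + ram_gb
        if not self.oversubscribe and (cpu_used > self.vcpu_limit
                                       or ram_used > self.ram_gb_limit):
            raise ValueError("allocation exceeded")
        # Placement onto shared physical hosts happens here, invisibly.
        self.vms.append({"name": name, "vcpus": vcpus, "ram_gb": ram_gb})

pool = ResourcePool(vcpu_limit=100, ram_gb_limit=200)
pool.launch("web-1", vcpus=8, ram_gb=16)
pool.launch("db-1", vcpus=16, ram_gb=64)
# The limits cap what the customer is billed for, not where VMs run.
```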

The NIST definition of private cloud is clear: “Private cloud. The cloud infrastructure is provisioned for exclusive use by a single organization comprising multiple consumers (e.g., business units). It may be owned, managed, and operated by the organization, a third party, or some combination of them, and it may exist on or off premises.” In other words, NIST defines private cloud as single-tenant.

Given the widespread use of NIST cloud definitions, and the reasonable expectation that customers have that a provider’s terminology for its offering will conform to those definitions, calling a multi-tenant offering “private cloud” is misleading at best. And at some point in time, the provider is going to have to fess up to the customer.

I do fully acknowledge that by claiming private cloud, a provider will get customers into the buying cycle that they wouldn’t have gotten if they admitted multi-tenancy. Bait-and-switch is unpleasant, though, and given that trust is a key component of provider relationships as businesses move into the cloud, customers should use providers that are clear and up-front about their architecture, so that they can make an accurate risk assessment.

Cloud IaaS is not magical, and the Amazon reboot-a-thon

Randy Bias has blogged about Amazon mandating instance reboots for hundreds, perhaps thousands, of instances (Amazon’s term for VMs). Affected instances seem to be scheduled for reboots over the next couple of weeks. Speculation is that the reboots are to patch a recently-reported vulnerability in the Xen hypervisor, which is the virtualization technology that underlies Amazon’s EC2. The GigaOm story gives some links, and the CRN story discusses customer pain.

Maintenance reboots are not new on Amazon, and are detailed in Amazon’s documentation on scheduled maintenance. The required reboots this time are instance reboots, which are easily accomplished — just point and click to reboot on your own schedule rather than Amazon’s (although you cannot delay past the scheduled reboot). Importantly, instance reboots do not result in a change of IP address, nor do they erase the data in instance storage (which is normally non-persistent).
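Those scheduled events are visible through the EC2 API, so affected instances can be found and rebooted programmatically on your own schedule. Below is a minimal sketch assuming the boto3 Python SDK; pagination, error handling, and region selection are omitted.

```python
# Find instances with scheduled reboot events and reboot them
# proactively, on your schedule rather than Amazon's. Minimal sketch
# via boto3; pagination, error handling, and regions omitted.
import boto3

ec2 = boto3.client("ec2")

statuses = ec2.describe_instance_status(IncludeAllInstances=True)
to_reboot = []
for status in statuses["InstanceStatuses"]:
    for event in status.get("Events", []):
        # Event codes include 'instance-reboot' and 'system-reboot'.
        if "reboot" in event["Code"]:
            print(status["InstanceId"], event["Code"], event["NotBefore"])
            if event["Code"] == "instance-reboot":
                to_reboot.append(status["InstanceId"])

if to_reboot:
    ec2.reboot_instances(InstanceIds=to_reboot)  # minutes of downtime each
```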

For some customers, of course, a reboot represents a headache, and it results in several minutes of downtime for that instance. Also, since this is peak retail season, it is already a sensitive, heavy-traffic time for many businesses, so the timing of this widespread maintenance is problematic for many customers.

However, cloud IaaS isn’t magical. If these customers were using dedicated hosting, they would still be subject to mandated reboots for security patches — hosting providers generally offer some flexibility on scheduling such reboots, but not a lot (and sometimes none at all if there’s an exploit in the wild). If these customers were using a provider with live migration technology (like VMotion on a VMware-virtualized cloud), they might be spared reboots for system reasons, but they might still be subject to reboots for mandated operating system patches.

Given that EC2 runs on ordinary physical servers, virtualized without any live migration technology, customers should reasonably expect to be subject to reboots — server-level (what Amazon calls a system reboot) as well as instance-level — and should also anticipate that they may sometimes need to reboot for their own guest OS patches and the like (assuming they don’t simply patch their AMIs and re-launch their instances, arguably a more “cloudy” way to approach the problem, sketched below).
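For the curious, the patch-the-AMI-and-relaunch approach looks roughly like the following sketch, again assuming boto3. The IDs and instance type are hypothetical, and a real workflow would drain traffic from the old instance before terminating it.

```python
# Immutable-infrastructure sketch: bake a patched image, launch a
# replacement, retire the old instance. IDs and instance type are
# hypothetical; a production workflow would re-point load balancers
# and drain traffic before terminating anything.
import boto3

ec2 = boto3.client("ec2")
old_instance = "i-0123456789abcdef0"  # hypothetical

image = ec2.create_image(InstanceId=old_instance, Name="app-patched-v2")
ec2.get_waiter("image_available").wait(ImageIds=[image["ImageId"]])

ec2.run_instances(ImageId=image["ImageId"], InstanceType="m1.large",
                  MinCount=1, MaxCount=1)
ec2.terminate_instances(InstanceIds=[old_instance])
```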

What makes this rolling scheduled maintenance remarkable is its sheer scale. Hosting providers typically have a few hundred customers and a few thousand servers. Mass-market VPS hosters have lots of VPS containers, but there’s a roughly 1:1 VPS-to-customer ratio and a small-business-centricity that doesn’t lead to this kind of hullabaloo. Only the largest cloud IaaS providers have more than 2,000 VMs; Amazon’s largest competitor is estimated to be at around the 100,000 VM mark. Consequently, this maintenance involves a virtually unprecedented number of customers and mission-critical systems.

Amazon has actually been very good about not taking down its cloud customers for extended maintenance windows. (I can think of one major Amazon competitor that took down an entire data center this past weekend for an eight-hour maintenance window, evidently involving a total outage, and which regularly has long-downtime maintenance windows in general.) A reboot is an inconvenience, but if you are running production infrastructure, you should darn well think about how to handle the occasional reboot, including reboots that affect a significant percentage of your infrastructure, because reboots are not likely to go away in IaaS anytime soon.

To hammer on the point again: Cloud IaaS is not magical. It still requires management, and it still has some of the foibles of both physical servers and non-cloud virtualization. Being able to push a button and get infrastructure is nice, but the responsibility to manage that infrastructure doesn’t go away — it’s just that many cloud customers manage to delay the day of reckoning when the attention they haven’t paid to management comes back to bite them.

If you run infrastructure, regardless of whether it’s in your own data center, in hosting, or in cloud IaaS, you should have a plan for “what happens if I need to mass-reboot my servers?” because it is something that will happen. And add “what if I have to do that immediately?” to the list, because that will happen too — mass exploits and worms certainly have not gone away.

(Gartner clients only: Check out a note by my security colleagues, “Address Concentration Risk in Public Cloud Deployments and Shared-Service Delivery Models to Avoid Unacceptable Losses”.)

Cloud IaaS feature sets and target buyers

As I noted previously, cloud IaaS is a lot more than just self-service VMs. As service providers strive to differentiate themselves from one another, they enter a software-development rat race centered around “what other features can we add to make our cloud more useful to customers”.

However, cloud IaaS providers today have to deal with two different constituencies — developers (developers are the face of business buyers) and IT Operations. These two groups have different priorities and needs, and sometimes even different use cases for the cloud.

IaaS providers may be inclined to cater heavily to one group while selectively adding the features most critical to the other group, in order to ease buying friction. Others may try to appeal fully to both — a strategy likely available only to those with a lot of engineering resources at their disposal. Over time (years), the market will converge, as all providers reach a certain degree of feature parity on the critical bits; differentiation will then shift to smaller bits of creeping featurism.

Take a feature like role-based access control (RBAC). For the needs of a typical business buyer — where the developers are running the show on a project basis — RBAC is mostly a matter of roles on the development team, likely in a fairly minimalistic way; fine-grained security may, however, be desired on API keys, so that any script’s access to the API is strictly limited to what that script needs to do. For IT Operations, though, RBAC needs tend to get blown out into full-fledged lab management: managing a large population of users (many of them individual developers) who need access to their own pools of infrastructure and who want to be segregated from one another.
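As a concrete illustration of the business-buyer case — scoping a script’s API key down to exactly what the script does — here is an IAM-style policy sketch. The user name, policy name, and action scoping are hypothetical placeholders.

```python
# Sketch of fine-grained, per-script API scoping (IAM-style; the user,
# policy, and action choices are hypothetical). The script's key can
# snapshot volumes and describe them, and do nothing else.
import json

snapshot_script_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["ec2:CreateSnapshot", "ec2:DescribeVolumes"],
            "Resource": "*",
        }
    ],
}
print(json.dumps(snapshot_script_policy, indent=2))
# Attach to the dedicated user whose key the script holds, e.g. via
# iam.put_user_policy(UserName="backup-script",
#                     PolicyName="snapshot-only",
#                     PolicyDocument=json.dumps(snapshot_script_policy))
```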

Some providers like to think of the business buyer vs. IT Operations buyer split as a “new applications” vs. “legacy applications” split instead. I think there’s an element of truth to that, but it’s often articulated as “commodity components that you can self-assemble if you’re smart enough to know how to architect for the cloud” vs. “expensive enterprise-class gear providing a safe familiar environment”. This latter distinction will become less and less relevant as an increasing number of providers offer multi-tiered infrastructure at different price points within the same cloud. Similarly, the “new vs. legacy apps” distinction will fade with feature-set convergence — a broad-appeal cloud IaaS offering should be able to support either type of workload.

But the buying constituencies themselves will remain split. The business and IT will continue to have different priorities, despite the best efforts of IT to try to align itself closer to what the business needs.

Cotendo’s potential acquisition

Thus far, merger-watchers eyeing the rumored bidding for Cotendo seem to be asking: Why this high a valuation compared to the rest of the CDN industry? Who are the potential suitors and why? What if anything does Cotendo offer that other CDNs don’t? How do the various dynamic offerings in the market compare? Who else might be ripe for acquisition? What is the general trend of M&A activity in the CDN industry going forward? Do I agree with Dan Rayburn’s commentary on this deal?

However, for various reasons, I am not currently publicly commenting further on Twitter or my blog, or really in general with non-Gartner-clients, regarding the potential acquisition of Cotendo by Akamai (or AT&T, or Juniper, or anyone else who might be interested in buying them).

If you are a Gartner client, and you want to discuss the topic, you may request a written response or a phone call through the usual mechanisms for inquiry.

Private clouds aren’t necessarily more secure

Eric Domage, an analyst at IDC, is being quoted as saying, “The decision in the next year or two will only be about the private cloud. The bigger the company, the more they will consider the private cloud. The enterprise cloud is locked down and totally managed. It is the closest replication of virtualisation.” The same article goes on to quote Domage as cautioning against the dangers of the public cloud: “He urged delegates at the conference to ‘please consider more private cloud than public cloud.’”

I disagree entirely, and I think Domage’s comments ignore the reality of what is going on within enterprises, both in terms of their internal private cloud initiatives, as well as their adoption of public cloud IaaS. (I assume Domage’s commentary is intended to be specific to infrastructure, or it would be purely nonsensical.)

While not all IaaS providers build to the same security standards, nearly all build a high degree of security into their offerings. Furthermore, end-to-end encryption, which Domage claims is unavailable in public cloud IaaS, is available in multiple offerings today — presuming that it means end-to-end network encryption along with encryption of storage both in motion and at rest. (Obviously, computation has to occur on unencrypted data, or your app has to treat encrypted data as a passthrough.)

And for the truly paranoid, you can adopt something like Harris Trusted Cloud — private or public cloud IaaS built with security and compliance as the first priority, where each and every component is checked for validity. (Wyatt Starnes, the guiding brain behind this, founded Tripwire, so you can guess where the thinking comes from.) Find me an enterprise that takes security to that level today.

I’ve found that the bigger the company, the more likely they are to have already adopted public cloud IaaS. Yes, it’s tactical, but their businesses are moving faster than they can deploy private clouds, and the workloads they have in the public cloud are growing every day. Yes, they’ll also build a private cloud (or in many cases, already have), but they’ll use both.

The idea that the enterprise cloud is “locked down and totally managed” is a fantasy. Not only do many enterprises struggle with managing the security within private clouds, many of them have practically surrendered control to the restless natives (developers) who are deploying VMs within that environment. They’re struggling with basic governance, and often haven’t extended their enterprise IT operations management systems successfully into that private cloud. (Assuming, as the article seems to imply, that private cloud is being used to refer to self-service IaaS, not merely virtualized infrastructure.)

The head-in-the-sand “la la la public cloud is too insecure to adopt, only I can build something good enough” position will only make an enterprise IT manager sound clueless and out of touch both with reality and with the needs of the business.

Organizations certainly have to do their due diligence — hopefully before, and not after, the business asks what cloud infrastructure solutions can be used right this instant. But the prudent thing to do is to build expertise with public cloud (or hosted private cloud) and, for organizations that intend to keep running data centers long-term, to simultaneously build out a private cloud.

Amazon and the power of default choices

Estimates of Amazon’s revenues in the cloud IaaS market vary, but you could put them upwards of $1 billion in 2011 and not cause too much controversy. That’s a dominant market share, composed heavily of early adopters but at this point also drawing in mainstream business — particularly the enterprise, which has become increasingly comfortable adopting Amazon services in a tactical manner. (Today, Amazon’s weakness is the mid-market — and it’s clear from the revenue patterns, too, that Amazon’s competitors are mostly winning in the mid-market. The enterprise is highly likely to go with Amazon, though it may also have an alternative provider, such as Terremark, for use cases not well-suited to Amazon.)

There are many, many other providers out there who are offering cloud IaaS, but Amazon is the brand that people know. They created this market; they have remained synonymous with it.

That means that for many organizations that are only now beginning to adopt cloud IaaS (i.e., traditional businesses that already run their own data centers), Amazon is the default choice. It’s the provider that everyone looks at because they’re big — and because they’re #1, they’re increasingly perceived as a safe choice. And because Amazon makes it superbly easy to sign up and get started (and get started for free, if you’re just monkeying around), there’s no reason not to give them a whirl.

Default choices are phenomenally powerful. (You can read any number of scientific papers and books about this.) Many businesses believe that they’ve got a low-risk project that they can toss on cloud IaaS and see what happens next. Or they’ve got an instant need and no time to review all the options, so they simply do something, because it’s better than not doing something (assuming that the organization is one in which people who get things done are not punished for not having filled out a form in triplicate first).

Default choices are often followed by inertia. Yeah, the company put a project on Amazon. It’s running fine, so people figure, why mess with it? They’ve got this larger internal private cloud story they’re working on, or this other larger cloud IaaS deal they’re working on, but… well, they figure, they can migrate stuff later. And it’s absolutely true that people can and do migrate, or in many cases, build a private cloud or add another cloud IaaS provider, but a high enough percentage of the time, whatever they stuck out there remains at Amazon, and possibly begins to accrete other stuff.

This is increasingly leaving the rest of the market trying to pry customers away from a provider they’re already using. It’s absolutely true that Amazon is not the ideal provider for all use cases. It’s absolutely true that any number of service providers can tell me endless stories of customers who have left Amazon for them. It’s probably true, as many service providers claim, that customers who are experienced with Amazon are better educated about the cloud and their needs, and therefore become better consumers of their next cloud provider.

But it does not change the fact that Amazon has been conquering the market one developer at a time, and that in turn has the bean-counters in the business saying, hey, shouldn’t we be using these Amazon guys?

This is what every vendor wants: For the dude at the customer to be trying to explain to his boss why he’s not using them.

This is increasingly my client inquiry pattern: either the client has decided they are definitively not using Amazon (for various reasons, sometimes emotional and sometimes well thought out) and is looking at other options, or the client is looking at cloud IaaS and figures they’ll probably use Amazon — or has even already deployed stuff on Amazon (sometimes having done zero reading or evaluation). Two extremes.