Blog Archives

Observations from the Gartner data center conference

I’m at Gartner’s Data Center Conference this week, and I’m finding it to be an interesting contrast to our recent Application Architecture, Development, and Integration Summit.

AADI’s primary attendees are enterprise architects and other people who hold leadership roles in application development. The data center conference’s primary attendees are IT operations directors and others with leadership roles in the data center. Both have significant CIO attendance, the data center conference especially. Attendees at the data center conference also skew heavily towards larger enterprises and those who otherwise have big data centers, so when you see polling results from the conference, keep that attendee bias in mind. (Those of you who read my blog regularly: I cite survey data — formal field research, demographically weighted, etc. — differently than conference polling data, as the latter is non-scientific.)

At AADI, the embrace of the public cloud was enthusiastic, and if you asked people what they were doing, they would happily tell you about their experiments with Amazon and whatnot. At this conference, the embrace of the public cloud is far more tentative. In fact, my conversations not-infrequently go like this:

Me: Are you doing any public cloud infrastructure now?
Them: No, we’re just thinking we should do a private cloud ourselves.
Me: Nobody in your company is doing anything on Amazon or a similar vendor?
Them: Oh, yeah, we have a thing there, but that’s not really our direction.

That is not “No, we’re not doing anything on the public cloud”. That’s, “Yes, we’re using the public cloud but we’re in denial about it”.

Lots of unease here about Amazon, which is not particularly surprising. That was true at AADI as well, but people were much more measured there — they had specific concerns, and ways they were addressing, or living with, those concerns. Here the concerns are more strident, particularly around security and SLAs.

Feedback from folks using the various VMware-based public cloud providers is consistently positive — people seem uniformly happy with the services themselves, are getting the benefits they hoped for, and are comfortable. Terremark seems to be the most common vendor for this, by a significant margin. Some Savvis, too. And Verizon customers seem to have at least talked to Verizon about CaaS. (This reflects my normal inquiry trends, as well.)


What does the cloud mean to you?

My Magic Quadrant for Cloud Infrastructure as a Service and Web Hosting is done. The last week has been spent in discussion with service providers over their positioning and the positioning of their competitors and the whys and wherefores and whatnots. That has proven to be remarkably interesting this year, because it’s been full of angry indignation by providers claiming diametrically opposed things about the market.

Gartner gathers its data about what people want in two ways — from primary research surveys, and, often more importantly, from client inquiry: the IT organizations that are actually planning to buy things, or better yet, are actually buying things. I currently see a very large number of data points — a dozen or more conversations of this sort a day, many of them focused on buying cloud IaaS.

And so when a provider tells me, “Nobody in the market wants to buy X!”, I generally have a good base from which to judge whether or not that’s true, particularly since I’ve got an entire team of colleagues here looking at cloud stuff. It’s never that those customers don’t exist; it’s that the provider’s positioning has essentially guaranteed that they never see the deals outside the tunnel vision of their own service.

The top common fallacy, overwhelmingly, is that enterprises don’t want to buy from Amazon. I’ve blogged previously about how wrong this is, but at some point in the future, I’m going to have to devote a post (or even a research note) to why this is one of the single greatest, and most dangerous, delusions that a cloud provider can have. If you offer cloud IaaS, or heck, you’re a data-center-related business, and you think you don’t compete with Amazon, you are almost certainly wrong. Yes, even if your customers are purely enterprise — especially if your customers are large enterprises.

The fact of the matter is that the people out there are looking at different slices of cloud IaaS, but they are still slices of the same market. This requires enough examination that I’m actually going to write a research note instead of just blogging about it, but in summary, my thinking goes like this (crudely segmented, saving the refined thinking for a research note):

There are customers who want self-managed IaaS. They are confident and comfortable managing their infrastructure on their own. They want someone to provide them with the closest thing they can get to bare metal, good tools to control things (or an API they can use to write their own tools), and then they’ll make decisions about what they’re comfortable trusting to this environment.

There are customers who want lightly-managed IaaS, which I often think of as “give me raw infrastructure, but don’t let me get hacked” — which is to say, OS management (specifically patch management) and managed security. They’re happy managing their own applications, but would like someone to do all the duties they typically entrust to their junior sysadmins.

There are customers who want complex management, who really want soup-to-nuts operations, possibly also including application management.

And then in each of these segments, you can divide customers into those with a single application (which may potentially have multiple components and be highly complex), and those with a whole range of stuff that encompasses more general data center needs. That drives different customer behaviors and different service requirements.

Claiming that there’s no “real” enterprise market for self-managed is just as delusional as claiming there’s no market for complex management. They’re different use cases in the same market; customers often start out confused about where they fall along this spectrum, and many will eventually need solutions all along it.

Now, there’s absolutely an argument to be made that the self-managed and lightly-managed segments together represent an especially important portion of the market, where a high degree of innovation is taking place. That’s why I’m writing some targeted research — selection notes, a Critical Capabilities rating of individual services, probably a Magic Quadrant that focuses specifically on this next year. But the whole spectrum is part of the cloud IaaS adoption phenomenon, and no individual segment is representative of the total market evolution.


Designing to fail

Cloud-savvy application architects don’t do things the same way that they’re done in the traditional enterprise.

Cloud applications assume failure. That is, well-architected cloud applications assume that just about anything can fail. Servers fail. Storage fails. Networks fail. Other application components fail. Cloud applications are designed to be resilient to failure, and they are designed to be robust at the application level rather than at the infrastructure level.

Enterprises, for the most part, design for infrastructure robustness. They build expensive data centers with redundant components. They buy expensive servers with dual-everything in case a component fails. They buy expensive storage and mirror their disks. And then whatever hardware they buy, they need two of. All so the application never has to deal with the failure of the underlying infrastructure.

The cloud philosophy is generally that you buy dirt-cheap things and expect they’ll fail. Since you’re scaling out anyway, you expect to have a bunch of boxes, so that any box failing is not an issue. You protect against data center failure by being in multiple data centers.
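To make that concrete, here’s a minimal sketch (in Python, with hypothetical endpoint names) of what application-level resilience looks like: the application expects any individual instance, or even an entire site, to be unavailable, and routes around the failure itself rather than assuming the infrastructure will save it.

```python
import random
import urllib.request

# Hypothetical replicas of the same service, deployed in multiple
# data centers / availability zones.
REPLICAS = [
    "https://us-east.example.com/api/orders",
    "https://us-west.example.com/api/orders",
    "https://eu-west.example.com/api/orders",
]

def fetch_orders(timeout=2.0):
    """Try each replica in random order; tolerate individual failures."""
    last_error = None
    for url in random.sample(REPLICAS, len(REPLICAS)):
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                return resp.read()
        except Exception as err:  # a box, or a whole site, may simply be gone
            last_error = err
    # Every replica failed; surface the failure so the caller can degrade gracefully.
    raise RuntimeError(f"all replicas unavailable: {last_error}")
```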

Cloud applications assume variable performance. Well-architected cloud applications don’t assume that anything is going to complete in a certain amount of time. The application has to deal with network latencies that might be random, storage latencies that might be random, and compute latencies that might be random. The principle behind a distributed application of this sort is that just about anything you’re talking to can mysteriously drop off the face of the Earth at any point in time, or at least not get back to you for a while.

Here’s where it gets funkier. Even most cloud-savvy architects don’t build applications this way today. This is why people howl about Amazon’s storage back-end for EBS, for instance — they’re used to consistent and reliable storage performance, and EBS isn’t built that way, and most applications are built with the assumption that seemingly local standard I/O is functionally local and therefore is totally reliable and high-performance. This is why people twitch about VM-to-VM latencies, although at least here there’s usually some application robustness (since people are more likely to architect with network issues in mind). This is the kind of problem things like Node.js were created to solve (don’t block on anything, and assume anything can fail), but it’s also a type of thinking that’s brand-new to most application architects.
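As a rough sketch of that mindset (in Python rather than Node.js, with a hypothetical operation standing in for whatever remote dependency the application is calling), the application bounds every call with a timeout and backs off between retries, rather than assuming the dependency will answer promptly, or at all:

```python
import random
import time

def call_with_backoff(operation, attempts=5, base_delay=0.2, timeout=1.0):
    """Invoke operation(timeout), assuming it can stall or fail at any time.

    `operation` is any callable (an API call, a storage read, a queue poll)
    that honors a timeout and raises on failure; it stands in for whatever
    remote dependency the application is talking to.
    """
    for attempt in range(attempts):
        try:
            return operation(timeout)
        except Exception:
            if attempt == attempts - 1:
                raise  # out of retries; let the caller degrade gracefully
            # Exponential backoff with jitter, so many clients retrying at once
            # don't all hammer the slow dependency at the same moment.
            delay = base_delay * (2 ** attempt) * random.uniform(0.5, 1.5)
            time.sleep(delay)
```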

Performance is actually where the real problems occur when moving applications to the cloud. Most businesses who are moving existing apps can deal with the infrastructure issues — and indeed, many cloud providers (generally the VMware-based ones) use clustering and live migration and so forth to present users with a reliable infrastructure layer. But most existing traditional enterprise apps don’t deal well with variable performance, and that’s a problem that will be much trickier to solve.


Amazon, ISO 27001, and a correction

FlyingPenguin has posted a good critique of my earlier post about Amazon’s ISO 27001 certification.

Here’s a succinct correction:

To quote Wikipedia, ISO 27001 requires that management:

  • Systematically examine the organization’s information security risks, taking account of the threats, vulnerabilities and impacts;
  • Design and implement a coherent and comprehensive suite of information security controls and/or other forms of risk treatment (such as risk avoidance or risk transfer) to address those risks that are deemed unacceptable; and
  • Adopt an overarching management process to ensure that the information security controls continue to meet the organization’s information security needs on an ongoing basis.

ISO 27002, which details the security best practices, is not required to be used in conjunction with 27001, although this is customary. I overlooked this when I wrote my post (I was reading docs written by my colleagues on our security team, which specifically recommend the 27001 approach in the context of 27002).

In other words: 27002 is prescriptive in its controls; 27001 is not that specific.

So FlyingPenguin is right — without the 27002, we have no idea what security controls Amazon has actually implemented.


Amazon, ISO 27001, and some conference observations

Greetings from Gartner’s Application Architecture, Development, and Integration Summit. There are around 900 people here, and the audience is heavy on enterprise architects and other application development leaders.

One of the common themes of my interactions here has been talking to an awful lot of people who are using or have used Amazon for IaaS. They’re a different audience than the typical clients I talk to about the cloud, who are generally IT Operations folks, IT executives, or Procurement folks. The audience here is involved in assessing the cloud, and in adopting the cloud in more skunkworks ways — but they are generally not ultimately the ones making the purchasing decisions. Consequently, they’ve got a level of enthusiasm about it that my usual clients don’t share (although it matches the enthusiasm my usual clients report seeing from their own app dev folks). Fun conversations.

So on the heels of Amazon’s ISO 27001 certification, I thought it’d be worth jotting down a few thoughts about Amazon and the enterprise.

To start with, SAS 70 Is Not Proof of Security, Continuity or Privacy Compliance (Gartner clients only). As my security colleagues Jay Heiser and French Caldwell put it, “The SAS 70 auditing report is widely misused by service providers that find it convenient to mischaracterize the program as being a form of security certification. Gartner considers this to be a deceptive and harmful practice.” It certainly is possible for a vendor to do a great SAS 70 certification — to hold themselves to best practices and have the audit show that they follow them consistently — but SAS 70 itself doesn’t require adherence to security best practices. It just requires you to define a set of controls, and then demonstrate that you follow them.

ISO 27001, on the other hand, is a security certification standard that examines the efficacy of risk management and an organization’s security posture, in the context of ISO 27002, which is a detailed security control framework. This certification means that you can be reasonably assured that an organization’s security controls are actually good, effective ones.

The 27001 cert — especially meaningful here because Amazon certified its actual infrastructure platform, not just its physical data centers — addresses two significant issues with assessing Amazon’s security to date. First, Amazon doesn’t allow enterprises to bring third-party auditors into its facilities or to peer into its operations, so customers have to depend on Amazon’s own audits (which Amazon does share under certain circumstances). Second, Amazon does a lot of security secret sauce, implementing things in ways different from the norm — for instance, Amazon claims to provide network isolation between virtual machines, but unlike the rest of the world, it doesn’t use VLANs to achieve this. Getting something like ISO 27001, which is prescriptive, hopefully offers some assurance that Amazon’s approach constitutes effective, auditable controls.

(Important correction: See my follow-up. The above statement is not true, because we have no guarantee Amazon follows 27002.)

A lot of people like to tell me, “Amazon will never be used by the enterprise!” Those people are wrong (and are almost always shocked to hear it). Amazon is already used by the enterprise — a lot. Not always in particularly “official” ways, but those unofficial ways can stack up to pretty impressive aggregate spend. (Some of my enterprise clients end up shocked by how much they’re spending, once they total up all the credit cards.)

And here’s the important thing: the larger the enterprise, the more likely it is that they use Amazon, to judge from my client interactions. (Not necessarily as their only cloud IaaS provider, though.) Large enterprises have people who can be spared to do thorough evaluations, sit on committees that write recommendations, and decide that there are particular use cases for which they allow, or actively recommend, Amazon. These are companies that assess their risks, deal with those risks, and are clear on what risks they’re willing to take with what stuff in the cloud. These are organizations — some of the largest global companies in the world — for whom Amazon will become part of the infrastructure portfolio, and they’re comfortable with that, even though they’re quite conservative.

Don’t underestimate the rate of change that’s taking place here. The world isn’t shifting overnight, and we’re going to be looking at internal data centers and private clouds for many years to come, but nobody can afford to sit around smugly and decide that public cloud is going to lose and that a vendor like Amazon is never going to be a significant player for “real businesses”.

One more thing, on the subject of “real businesses”: To all of the service providers who keep telling me that your multi-tenant cloud isn’t actually “public” because you only allow “real businesses”, not just anyone who can put down a credit card? Get over it. (And get extra-negative points if you consider all Internet-centric companies to not be “real businesses”.) Not only is it not a differentiator, but customers aren’t actually fooled by this kind of circumlocution, and the guys who accept credit cards still vet their customers, albeit in more subtle ways. You’re multi-tenant, and your customers aren’t buying as a consortium or community? Then you’re a public cloud, and to claim otherwise is actively misleading.


Amazon’s Free Usage Tier

Amazon recently introduced a Free Usage Tier for its Web Services. Basically, you can try Amazon EC2, with a free micro-instance (specifically, enough hours to run such an instance full-time, and have a few hours left over to run a second instance, too; or you can presumably use a bunch of micro-instances part-time), and the storage and bandwidth to go with it.

Here’s what the deal is worth at current pricing, per-month:

  • Linux micro-instance – $15
  • Elastic load balancing – $18.87
  • EBS – $1.27
  • Bandwidth – $3.60

That’s $38.74 in all, or $464.88 over the course of the one-year free period — not too shabby. Realistically, you don’t need the load balancing if you’re running a single instance, so that’s really $19.87/month, or $238.44/year. It also proves to be an interesting illustration of how the little incremental pennies on Amazon can really add up.
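For anyone who wants to check the arithmetic, here’s a trivial sketch using the line items above (prices as quoted at this writing, and subject to change):

```python
# Rough value of the free tier at the list prices quoted above (per month).
items = {
    "Linux micro-instance": 15.00,
    "Elastic load balancing": 18.87,
    "EBS": 1.27,
    "Bandwidth": 3.60,
}

monthly = sum(items.values())                              # 38.74
yearly = monthly * 12                                      # 464.88
without_elb = monthly - items["Elastic load balancing"]    # 19.87
print(f"${monthly:.2f}/month, ${yearly:.2f}/year "
      f"(${without_elb:.2f}/month without load balancing)")
```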

It’s a clever and bold promotion, making it cost nothing to trial Amazon, and potentially punching Rackspace’s lowest-end Cloud Servers business in the nose. A single instance of that type is enough to run a server to play around with if you’re a hobbyist, or you’re a garage developer building an app or website. It’s this last type of customer that’s really coveted, because all cloud providers hope that whatever he’s building will become wildly popular, causing him to eventually grow to consume bucketloads of resources. Lose that garage guy, the thinking goes, and you might not be able to capture him later. (Although Rackspace’s problem at the moment is that their cloud can’t compete against Amazon’s capabilities once customers really need to get to scale.)

While most cloud IaaS providers already offer free trials to most customers they’re in discussions with, there’s still a lot to be said for just being able to sign up online and use something (although you still have to give a valid credit card number).


Symposium, Smart Control, and cloud computing

I originally wrote this right after Orlando Symposium and forgot to post it.

Now that Symposium is over, I want to reflect back on the experiences of the week. Other than the debate session that I did (Eric Knipp and I arguing the future of PaaS vs. IaaS), I spent the conference in the one-on-one room or meeting with customers over meals. And those conversations resonated in sharp and sometimes uncomfortable ways with the messages of this year’s Symposiums.

The analyst keynote said this:

An old rule for an old era: absolute centralized IT control over technology people, infrastructure, and services. Rather than fight to regain control, CIOs and IT Leaders must transform themselves from controllers to influencers; from implementers to advisers; from employees to partners. You own this metamorphosis. As an IT leader, you must apply a new rule, for a new era: Smart Control. Open the IT environment to a Wild Open World of unprecedented choice, and better balance cost, risk, innovation and value. Users can achieve incredible things WITHOUT you, but they can only maximize their IT capabilities WITH you. Smart Control is about managing technology in tighter alignment with business goals, by loosening your grip on IT.

The tension of this loss of control was by far the overwhelming theme of my conversations with clients at Symposium. Since I cover cloud computing, my topic was right at the heart of these new worries. Data from a survey we did earlier this year showed that less than 50% of cloud computing projects are initiated by IT leadership.

Most of the people I talked to strongly held one of two utterly opposing beliefs: either that cloud computing was going to be the Thing of the Future and the way they and their companies would consume IT going forward, or that cloud computing was something that companies could never embrace. My mental shorthand for the extremes of these positions is “enthusiastic embrace” and “fear and loathing”.

I met IT leaders, especially in mid-sized businesses, who were tremendously excited by the possibilities of the cloud to free their businesses to move and innovate in ways that they never had before, and to free IT from the chores of keeping the lights on so that it can help deliver more business value. They understood that the technology in the cloud is still fairly immature, but they wanted to figure out how they could derive benefits right now, taking smart risks to develop some learnings and to deliver immediate business value.

And I also met IT leaders who fear the new world and everything it represents — a loss of control, a loss of predictability, an emphasis on outcomes rather than outputs, the need to take risks to obtain rewards. These people were looking for reasons not to change — reasons that they could take back to the business for why they should continue to do the things they always have, perhaps with some implementation of a “private cloud”, in-house.

The analyst keynote pointed out: the new type of CIO won’t ask first what the implementation cost is, or whether something complies with the architecture, but whether it’s good for the enterprise. They will train their teams to think like business executives, asking first, “Is this valuable?” and only then, “How can we make this work?”, rather than the other way around.

Changing technologies is often only moderately difficult. Changing mindsets, though, is an enormous challenge.


More on Symposium 1-on-1s

My calendar for one-on-ones at Symposium is now totally full, as far as I know, so here’s a look at some updated stats:

  • Cloud – 23
  • Colocation – 9
  • Hosting – 9
  • CDN – 3

(No overlaps above. Things have been disambiguated. This counts only the formal 1-on-1s, and not any other meetings I’m doing here.)

The hosting discussions have a very strong cloud flavor to them, as one might expect. The broad trend from today is that most people talking about cloud infrastructure here are really talking about putting individual production applications on virtualized infrastructure in a managed hosting environment, with at least some degree of capacity flexibility. At the same time, this is a good thing for service providers — it clearly illustrates that people are comfortable putting highly mission-critical, production applications on shared infrastructure.


Symposium 1-on-1 trends

My 1-on-1 schedule is filling rapidly. (People who didn’t pre-book, you’re in luck: I was only added to the system on Friday or so, so I still have openings, at least as of this writing.)

Trend-watchers might be interested in how these break down so far:
  • Cloud – 17
  • Colocation – 8
  • Hosting – 4
  • CDN – 2

(A few of these mention two topics, such as ‘colo and cloud’, and are counted twice above.)

Slightly over half the cloud 1-on-1s so far are about cloud strategy in general; the remainder are about infrastructure specifically.

What’s also interesting to me is that the 1-on-1s scheduled prior to on-site registration appear to be more about colocation and hosting, but the on-site 1-on-1 requests are very nearly pure cloud. I’m not sure what that signifies, although I expect the conversations may be illuminating in this regard.


Upcoming Gartner conferences

I will be at three Gartner conferences during the remainder of this year.

I will be at Symposium ITxpo Orlando. My main session there will be The Great Debate: Shared-Hardware vs. Shared-Everything Multitenancy, or Amazon’s Apples vs. Force.com’s Oranges. (Or for those of you who heard Larry Ellison’s OpenWorld keynote, Oracle Exalogic vs. Salesforce.com…) The debate will be moderated by my colleague Ray Valdes; I’ll be taking the shared-hardware side while my colleague Eric Knipp takes the shared-everything side. I’m also likely to be running some end-user roundtables, but mostly, I’ll be available to take questions in 1-on-1 sessions.

If you go to Symposium, I highly encourage you to attend a session by one of my colleagues, Neil MacDonald. It’s called Why Cloud-Based Computing Will Be More Secure Than What You Have Today, and it’s what we call a “maverick pitch”, which means that it follows an idea that’s not a consensus opinion at Gartner. But it’s also the foundation of some really, really interesting work that we’re doing on the future of the cloud, and it follows an incredibly important tenet that we’re talking about a lot: that control is not a substitute for trust, and that the historical enterprise model of equating the two is fundamentally broken.

I will be at the Application Architecture, Development, and Integration Summit (a.k.a. our Web and Cloud conference) in November. I’m giving two presentations there. The first will be Controlling the Cloud: How to Leverage Cloud Computing Without Losing Control of Your IT Processes. The second is Infrastructure as a Service: Providing Data Center Services in the Cloud. I’ll also be running an end-user roundtable on building private clouds, and will be available to take 1-on-1 questions.

Finally, I will be at the Data Center Conference in December. I’m giving two presentations there. The first will be Is Amazon, Not VMware, the Future of Your Data Center? The second is Getting Real With Cloud Infrastructure Services. I’ll also be in one of our “town hall” meetings on cloud, running an end-user roundtable on cloud IaaS, and be available to take 1-on-1 questions.
