Bits and pieces

Interesting recent news:

Amazon’s revocation of Orwell novels on the Kindle has stirred up some cloud debate. There seems to have been a thread of “will this controversy kill cloud computing”, which you can find in plenty of blogs and press articles. I think that question, in this context, is silly, and am not going to dignify it with a lengthy post of my own. I do think, however, that it highlights important questions around content ownership, application ownership, and data ownership, and the role that contracts (whether in the form of EULAs or traditional contracts) will play in the cloud. By giving up control over physical assets, whether data or devices, we place ourselves into the hands of third parties, and we’re now subject to their policies and foibles. The transition from a world of ownership to a world of rental, even “permanent” lifetime rental, is not a trivial one.

Engine Yard has expanded its EC2 offering. Previously, Engine Yard was offering Amazon EC2 deployment of its stack via an offering called Solo, for low-end customers who only needed a single instance. Now, they’ve introduced a version called Flex, which is oriented around customers who need a cluster and associated capabilities, along with a higher level of support. This is notable because Engine Yard has been serving these higher-end customers out of their own data center and infrastructure. This move, however, seems to be consistent with Engine Yard’s gradual shift from hosting towards being more software-centric.

The Rackspace Cloud Servers API is now in open beta. Cloud Servers is essentially the product that resulted from Rackspace’s acquisition of Slicehost. Previously, you dealt with your Cloud Server through a Web portal; this new release adds a RESTful API, along with some new features, like shared IPs (useful for keepalived and the like). Also of note is the resize operation, which lets you scale your server size up or down, though this is really handwaving magic in front of replacing a smaller virtual server with a larger virtual server, rather than expanding an already-running virtual instance. The API is fairly extensive and the documentation seems decent, although I haven’t had time to personally try it out yet. The API responses, interestingly, include both human-readable data and WADL (Web Application Description Language, which is machine-parseable).
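For flavor, here’s a rough sketch of what driving a resize through a RESTful API of this sort might look like. To be clear, the base URL, header names, and JSON shape below are illustrative assumptions for the sake of the example, not documentation of Rackspace’s actual interface:

```python
import json

# Hypothetical account-scoped API endpoint, for illustration only.
API_BASE = "https://servers.api.example.com/v1.0/123456"

def build_resize_request(server_id, new_flavor_id, auth_token):
    """Build (method, url, headers, body) for a hypothetical resize action.

    Note what a "resize" really means operationally: the provider stands up
    a new, differently-sized virtual server and migrates the instance onto
    it -- it does not grow the already-running VM in place.
    """
    url = f"{API_BASE}/servers/{server_id}/action"
    headers = {
        "X-Auth-Token": auth_token,        # token from a prior auth call
        "Content-Type": "application/json",
    }
    body = json.dumps({"resize": {"flavorId": new_flavor_id}})
    return ("POST", url, headers, body)

method, url, headers, body = build_resize_request(42, 4, "token-from-auth")
print(method, url)
print(body)
```

The point of the sketch is the shape of the interaction: an action resource you POST to, rather than a mutable “size” field, which hints at the replace-and-migrate mechanics underneath.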

SOASTA has introduced a cloud-based performance certification program. Certification is something of a marketing gimmick, but I do think that SOASTA is, overall, an interesting company. Very simply, SOASTA leverages cloud system infrastructure to offer high-volume load-testing services. In the past, you’d typically execute such tests using a tool like HP’s LoadRunner, and many Web hosters offer, as part of their professional services offerings, performance testing using LoadRunner or a similar tool. SOASTA is a full-fledged software as a service offering (i.e., it is their own test harness, monitors, analytics, etc., not a cloud repackaging of another vendor), and the price point makes it reasonable not just for the sort of well-established organizations that could previously afford commercial performance-testing tools, but also for start-ups.


Cloud computing adoption surveys

A recent Forrester survey apparently indicates that one out of four large companies plan to use an external provider soon, or have already done so. (The Cloud Storage Strategy blog has a good round-up linking to the original report, a summary of the key points, and various commentators.)

Various pundits are apparently surprised by these results. I’m not. I haven’t been able to obtain a copy of the Forrester report, but from the comments I’ve read, it appears that software as a service and hosting (part of infrastructure as a service) are included as part of the surveyed services. SaaS and IaaS are both well-established markets, with significant penetration across all segments of business, and interest in both IaaS and SaaS models has accelerated. We’ve wrapped the “cloud” label around some or all of these existing markets (how much gets encompassed depends on your definitions), so it shouldn’t come as a surprise to already see high adoption rates.

Gartner’s own survey on this topic has just been published. It’s titled, “User Survey Analysis: Economic Pressures Drive Cost-Oriented Outsourcing, Worldwide, 2008-2009”. Among its many components is a breakdown of current and planned use of alternative delivery models (which include things like SaaS and IT infrastructure utilities) over the next 24 months. We show even higher current and planned adoption numbers than Forrester, with IaaS leading the pack in terms of current and near-term adoption, and very healthy numbers for SaaS as well.


A hodgepodge of links

This is just a round-up of links that I’ve recently found to be interesting.

Barroso and Holzle (Google): Warehouse-Scale Computing. This is a formal lecture-paper covering the design of what these folks from Google refer to as WSCs. They write, “WSCs differ significantly from traditional data centers: they belong to a single organization, use a relatively homogenous hardware and system software platform, and share a common systems management layer. Often, much of the application, middleware, and system software is built in-house compared to the predominance of third-party software running in conventional data centers. Most importantly, WSCs run a smaller number of very large applications (or Internet services), and the common resource management infrastructure allows significant deployment flexibility.” The paper is wide-ranging but written to be readily understandable by the mildly technical layman. Highly recommended for anyone interested in cloud.

Washington Post: Metrorail Crash May Exemplify Automation Paradox. The WaPo looks back at serious failures of automated systems, and quotes a “growing consensus among experts that automated systems should be designed to enhance the accuracy and performance of human operators rather than to supplant them or make them complacent. By definition, accidents happen when unusual events come together. No matter how clever the designers of automated systems might be, they simply cannot account for every possible scenario, which is why it is so dangerous to eliminate ‘human interference’.” Definitely something to chew over in the cloud context.

Malcolm Gladwell: Priced to Sell. The author of The Tipping Point takes on Chris Anderson’s Free, and challenges the notion that information wants to be free. In turn, Seth Godin thinks Gladwell is wrong, and the book seems to be setting off some healthy debate.

Bruce Robertson: Capacity Planning Equals Budget Planning. My colleague Bruce riffs off a recent blog post of mine, and discusses how enterprise architects need to change the way they design solutions.

Martin English: Install SAP on Amazon Web Services. An interesting blog devoted to how to get SAP running on AWS. This is for people interested in hands-on instructions.

Robin Burkinshaw: Being homeless in the Sims 3. This blog tells the story, in words and images, of “Alice and Kev”, a pair of characters that the author (a game design student) created in the Sims 3. It’s a fascinating bit of user-generated content, and a very interesting take on what can be done with modern sandbox-style games.


Magic Quadrant (hosting and cloud), published!

The new Magic Quadrant for Web Hosting and Hosted Cloud System Infrastructure Services (On Demand) has been published. (Gartner clients only, although I imagine public copies will become available soon as vendors buy reprints.) The inclusion criteria were set primarily by revenue; if you’re wondering why your favorite vendor wasn’t included, it was probably because they didn’t, at the January cut-off date, have a cloud compute service, or didn’t have enough revenue to meet the bar. Also, take note that this is direct services only (thus the somewhat convoluted construction of the title); it does not include vendors with enabling technology like Enomaly, or overlaid services like RightScale.

It marks the first time we’ve done a formal vendor rating of many of the cloud system infrastructure service providers. We do so in the context of the Web hosting market, though, which means that the providers are evaluated on the full breadth of the five most common hosting use cases that Gartner clients have. Self-managed hosting (including “virtual data center” hosting of the Amazon EC2, GoGrid, Terremark Enterprise Cloud, etc. sort) is just one of those use cases. (The primary cloud infrastructure use case not in this evaluation is batch-oriented processing, like scientific computing.)

We mingled Web hosting and cloud infrastructure on the same vendor rating because one of the primary use cases for cloud infrastructure is for the hosting of Web applications and content. For more details on this, see my blog post about how customers buy solutions to business needs, not technology. (You might also want to read my blog post on “enterprise class” cloud.)

We rated more than 60 individual factors for each vendor, spanning five use cases. The evaluation criteria note (Gartner clients only) gives an overview of the factors that we evaluate in the course of the MQ. The quantitative scores from the factors were rolled up into category scores, which in turn rolled up into overall vision and execution scores, which determine the dot placement in the Quadrant. All the number crunching is done by software — analysts don’t get to arbitrarily move dots around.
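As a purely hypothetical sketch of that kind of roll-up (the factor names, category groupings, weights, and averaging scheme below are invented for illustration; they are not Gartner’s actual criteria or model):

```python
# Toy illustration of rolling factor scores up into an overall score.
# Every name and number here is invented for the example.
factor_scores = {
    "pricing": 4.0, "contract_terms": 3.5,      # imagined "offering" factors
    "support": 4.5, "provisioning_speed": 3.0,  # imagined "operations" factors
}
categories = {
    "offering": {"factors": ["pricing", "contract_terms"], "weight": 0.4},
    "operations": {"factors": ["support", "provisioning_speed"], "weight": 0.6},
}

def category_score(cat):
    # Simple average of the factor scores within a category.
    fs = [factor_scores[f] for f in categories[cat]["factors"]]
    return sum(fs) / len(fs)

def overall_score():
    # Weighted sum of category scores yields one axis of a dot placement.
    return sum(category_score(c) * categories[c]["weight"] for c in categories)

print(round(overall_score(), 3))
```

Run once for “vision” and once for “execution” with their respective factor sets, and you get the two coordinates of a dot — which is why the dots aren’t movable by hand.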

To understand the Magic Quadrant methodology, I’d suggest you read the following:

Some people might look at the vendors on this MQ and wonder why exciting new entrants aren’t highly rated on vision and/or execution. Simply put, many of these vendors might be superb at what they do, yet still not rate very highly in the overall market represented by the MQ, because they are good at just one of the five use cases encompassed by the MQ’s market definition, or even good at just one particular aspect of a single use case. This is not just a cloud-related rating; to excel in the market as a whole, one has to be able to offer a complete range of solutions.

Because there’s considerable interest in vendor selection for various use cases (including non-hosting use cases) that are unique to public cloud compute services, we’re also planning to publish some companion research, using a recently-introduced Gartner methodology called a Critical Capabilities note. These notes look at vendors in the context of a single product/service, broken down by use case. (Magic Quadrants, on the other hand, look at overall vendor positioning within an entire market.) The Critical Capabilities note solves one of the eternal dilemmas of looking at a MQ, which is trying to figure out which vendors are highly rated for the particular business need that you have, since, as I want to reiterate, a MQ niche player may do the exact thing you need in a vastly more awesome fashion than a vendor rated a leader. Critical Capabilities notes break things down feature-by-feature.

In the meantime, for more on choosing a cloud infrastructure provider, Gartner clients should also look at some of my other notes:

For cloud infrastructure service providers: We may expand the number of vendors we evaluate for the Critical Capabilities note. If you’ve never briefed us before, we’d welcome you to do so now; schedule a briefing with myself, Ted Chamberlin, and Mike Spink (a brand-new colleague in Europe).


Does this describe your IT project plan?

Does this picture describe your IT project plan? Evidence indicates that it is illustrative of a significant percentage of the plans of the clients that I speak with, once I probe beneath the glossy surface of false confidence.


I’m thinking about using Amazon, IBM, or Rackspace…

At Gartner, much of our coverage of the cloud system infrastructure services market (i.e., Amazon, GoGrid, Joyent, etc.) is an outgrowth of our coverage of the hosting market. Hosting is certainly not the only common use case for cloud, but it is the use case that is driving much of the revenue right now, a high percentage of the providers are hosters, and most of the offerings lean heavily in this direction.

This leads to some interesting phenomena, like the inquiries where the client begins with, “I’m considering using Amazon, IBM, or Rackspace…” That’s the result of customers thinking about the trade-offs between different types of solutions, not just vendors. Also, ultimately, customers buy solutions to business needs, not technology.

Customers say things like, “I’ve got an e-commerce website that uses the following list of technologies. I get a lot more traffic around Mother’s Day and Christmas. Also, I run marketing campaigns, but I’m never sure how much additional traffic an advertisement will drive to my site.”

If you’re currently soaking in the cloud hype, you might quickly jump on that to say, “A perfect case for cloud!” and it could be, but then you get into other questions. Is maximum cost savings the most important budgetary aspect, or is predictability of the bill more important? When he has traffic spikes, are they gradual, giving him hours (or even days) to build up the necessary capacity, or are they sudden, requiring provisioning as close to real time as possible? Does he understand how to architect the infrastructure (and app!) to scale, or does he need help? Does his application scale horizontally or vertically? Does he want to do capacity planning himself, or does he want someone else to take care of it? (Capacity planning equals budget planning, so it’s rarely a case of, “eh, because we can scale quickly, it doesn’t matter.”) Does he have a good change management process, or does he want a provider to shepherd that for him? Does he need to be PCI compliant, and if so, how does he plan to achieve that? How much systems management does he want to do himself, and to what degree does he have automation tools, or want to use provider-supplied automation? And so on.

That’s just one of the use cases for cloud compute as a service. Similar sets of questions exist in each of the other use cases where cloud is a possible solution. It’s definitely not as simple as “more efficient utilization of infrastructure equals Win”.


Does Procurement know what you care about?

In many enterprises, IT folks decide what they want to buy and who they want to buy it from, but Procurement negotiates the contract, manages the relationship, and has significant influence on renewals. Right now, especially, purchasing folks have a lot of influence, because they’re often now the ones who go out and shop for alternatives that might be cheaper, forcing IT into the position of having to consider competitive bids.

A significant percentage of enterprise seatholders who use industry advisory firms have inquiry access for their Procurement group, so I routinely talk to people who work in purchasing. Even the ones who are dedicated to an IT procurement function tend not to have more than a minimal understanding of technology. Moreover, when it comes to renewals, they often have no thorough understanding of what exactly it is that the business is actually trying to buy.

Increasingly, though, procurement is self-educating via the Internet. I’ve been seeing this a bit in relationship to the cloud (although there, the big waves are being made by business leadership, especially the CEO and CFO, reading about cloud in the press and online, more so than Purchasing), and a whole lot in the CDN market, where things like Dan Rayburn’s blog posts on CDN pricing provide some open guidance on market pricing. Bereft of context, and armed with just enough knowledge to be dangerous, purchasing folks looking across a market for the cheapest place to source something can arrive at incorrect conclusions about what IT is really trying to source, and misjudge how much negotiating leverage they’ll really have with a vendor.

The larger the organization gets, the greater the disconnect between IT decision-makers and the actual sourcing folks. In markets where commoditization is extant or in process, vendors have to keep that in mind, and IT buyers need to make sure that the actual procurement staff has enough information to make good negotiation decisions, especially if there are any non-commodity aspects that are important to the buyer.


How not to use a Magic Quadrant

The Web hosting Magic Quadrant is currently in editing, the culmination of a six-month process (despite my strenuous efforts to keep it to four months). Many, many client conversations, reference calls, and vendor discussions later, we arrive at a demonstration of a constant challenge: the user tendency to misinterpret the Magic Quadrant, and the correlating vendor tendency to become obsessive about which quadrant they’re placed in.

Even though Gartner has an extensive explanation of the Magic Quadrant methodology on our website, vendors and users alike tend to oversimplify what it means. So a complex methodology ends up translating down to something like this:

But the MQ isn’t intended to be used this way. Just because a vendor isn’t listed as a Leader doesn’t mean that they suck. It doesn’t mean that they don’t have enterprise clients, that those clients don’t like them, that their product sucks, that they don’t routinely beat out Leaders for business, or, most importantly, that we wouldn’t recommend them or that you shouldn’t use them.

The MQ reflects the overall position of a vendor within an entire market. An MQ leader tends to do well at a broad selection of products/services within that market, but is not necessarily the best at any particular product/service within that market. And even the vendor who is typically best at something might not be the right vendor for you, especially if your profile or use case deviates significantly from the “typical”.

I recognize, of course, that one of the reasons that people look at visual tools like the MQ is that they want to rapidly cull down the number of vendors in the market, in order to make a short-list. I’m not naive about the fact that users will say things like, “We will only use Leaders” or “We won’t use a Niche Player”. However, this is explicitly what the MQ is not designed to do. It’s incredibly important to match your needs to what a vendor is good at, and you have to read the text of the MQ in order to understand that. Also, there may be vendors who are too small or too need-specific to have qualified to be on the MQ, who shouldn’t be overlooked.

Also, an MQ reflects only a tiny percentage of what an analyst actually knows about the vendor. Its beauty is that it reduces a ton of quantified specific ratings (nearly 5 dozen, in the case of my upcoming MQ) to a point on a graph, and a pile of qualitative data to somewhere between six and ten one-or-two-sentence bullet points about a vendor. It’s convenient reference material that’s produced by an exhaustive (and exhausting) process, but it’s not necessarily the best medium for expressing an analyst’s nuanced opinions about a vendor.

I say this in advance of the Web hosting MQ’s release: In general, the greater the breadth of your needs, or the more mainstream they are, the more likely it is that an MQ’s ratings are going to reflect your evaluation of the vendors. Vendors who specialize in just a single use case, like most of the emerging cloud vendors, have market placements that reflect that specialization, although they may serve that specific use case better than vendors who have broader product portfolios.


ICANN and DNS

ICANN has been on the soapbox on the topic of DNS recently, encouraging DNSSEC adoption, and taking a stand against top-level domain (TLD) redirection of DNS inquiries.

The DNS error resolution market — usually manifesting itself as the display of an advertising-festooned Web page when a user tries to browse to a non-existent domain — has been growing over the years, primarily thanks to ISPs who have foisted it upon their users. The feature is supported in commercial DNS software and services that target the network service provider market; in most current deployments of this sort, business customers typically have an opt-out option, and consumers may have one as well.
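A quick way to check whether your own resolver does this is to look up a name that cannot plausibly be registered and see whether you get an answer anyway. Here’s a minimal sketch of the probe logic; a careful test would query several TLDs, repeat the probe, and compare against a known-clean resolver:

```python
import random
import socket
import string

def random_probe_name(tld="com"):
    """A long random label is vanishingly unlikely to be registered."""
    label = "".join(random.choice(string.ascii_lowercase) for _ in range(24))
    return f"{label}.{tld}"

def resolver_rewrites_nxdomain(hostname=None):
    """Return True if the local resolver hands back an address for a name
    that should not exist -- the signature of NXDOMAIN redirection."""
    hostname = hostname or random_probe_name()
    try:
        socket.gethostbyname(hostname)
        return True   # got an A record for a bogus name: redirection
    except socket.gaierror:
        return False  # honest NXDOMAIN / resolution failure
```

The VPN problems mentioned below follow directly from this behavior: software that relies on an NXDOMAIN response to fall back to an internal name suffix instead gets a bogus address and fails in confusing ways.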

While ICANN’s Security and Stability Advisory Committee (SSAC) believes this is detrimental to the DNS, their big concern is what happens when this is done at the TLD level. We all got a taste of that with VeriSign’s SiteFinder back in 2003, which affected the .com and .net TLDs. Since then, though, similar redirections have found their way into smaller TLDs (i.e., ones where there’s no global outcry against the practice). SSAC wants this practice explicitly forbidden at the TLD level.

I personally feel that the DNS error resolution market, at whatever level of the DNS food chain, is harmful to the DNS and to the Internet as a whole. The Internet Architecture Board’s evaluation is a worthy indictment, although it’s missing one significant use case — the VPN issues that redirection can cause. Nevertheless, I also recognize that until there are explicit standards forbidding this kind of use, it will continue to be commercially attractive and thus commonplace; indeed, I continue to assist commercial DNS companies, and service providers, who are trying to facilitate and gain revenue related to this market. (Part of the analyst ethic is much like a lawyer’s; it requires being able to put aside one’s personal feelings about a matter in order to assist a client to the best of one’s ability.)

I applaud ICANN taking a stand against redirection at the TLD level; it’s a start.


Overpromising

I’ve turned one of my earlier blog entries, Smoke-and-mirrors and cloud software, into a full-blown research note: “Software on Amazon’s Elastic Compute Cloud: How to Tell Hype From Reality” (clients only). It’s a Q&A for your software vendor, if they suggest that you deploy their solution on EC2, or if you want to do so and you’re wondering what vendor support you’ll get if you do so. The information is specific to Amazon (since most client inquiries of this type involve Amazon), but somewhat applicable to other cloud compute service providers, too.

More broadly, I’ve noticed an increasing tendency on the part of cloud compute vendors to over-promise. It’s not credible, and it leaves prospective customers scratching their heads and feeling like someone has tried to pull a fast one on them. Worse still, it could leave more gullible businesses going into implementations that ultimately fail. This is exactly what drives the Trough of Disillusionment of the hype cycle and hampers productive mainstream adoption.

Customers: When you have doubts about a cloud vendor’s breezy claims that sure, it will all work out, ask them to propose a specific solution. If you’re wondering how they’ll handle X, Y, or Z, ask them and don’t be satisfied with assurances that you (or they) will figure it out.

Vendors: I believe that if you can’t give the customer the right solution, you’re better off letting him go do the right thing with someone else. Stretching your capabilities can be positive for both you and your customer, but if your solution isn’t the right path, or it is a significantly more difficult path than an alternative solution, both of you are likely to be happier if that customer doesn’t buy from you right now, at least not in that particular context. Better to come back to this customer eventually when your technology is mature enough to meet his needs, or look for the customer’s needs that do suit what you can offer right now. If you screw up a premature implementation, chances are that you won’t get the chance to grow this business the way that you hoped. There are enough early adopters with needs that you can meet, that you should be going after them. There’s nothing wrong with serving start-ups and getting “foothold” implementations in enterprises; don’t bite off more than you can chew.

Almost a decade of analyst experience has shown me that it’s hard for a vendor to get a second chance with a customer if they screwed up the first encounter. Even if, many, many years later, the vendor has a vastly augmented set of capabilities and is managed entirely differently, a burned customer still tends to look at them through the lens of that initial experience, and often takes that attitude to the various companies they move to. My observation is that in IT outsourcing, customers certainly hold vendor “grudges” for more than five years, and may do so for more than a decade. This is hugely important in emerging markets, as it can dilute early-mover advantages as time progresses.
