Blog Archives
Cloudy inquiry trends
I haven’t been posting much lately, due to being overwhelmingly busy with client inquiries, and having a few medical issues that have taken me out of the action somewhat. So, this is something of a catch-up, state-of-the-universe-from-my-perspective, inquiry-trends post.
With the economy picking up a bit, businesses starting to return to growth initiatives rather than just cost optimization, and budget season approaching, the flow of client inquiry around cloud strategy has accelerated dramatically, to the point where cloud inquiries now make up the overwhelming majority of my inquiries. Even my colocation and data center leasing inquiries frequently take on a cloud flavor, i.e., “How much longer should we plan to have this data center, rather than just putting everything in the cloud?”
Organizations have really absorbed the hype — they genuinely believe that shortly, the cloud will solve all of their infrastructure issues. Sometimes, they’ve even made promises to executive management that this will be the case. Unfortunately, in the short term (i.e., for 2010 and 2011 planning), this isn’t going to be the case for your typical mid-size and enterprise business. There’s just too much legacy burden. Also, traditional software licensing schemes simply don’t work in this brave new world of elastic capacity.
The enthusiasm, though, is vast, which means that there are tremendous opportunities out there, and I think it’s both entirely safe and mainstream to run cloud infrastructure pilot projects right now, including large-scale, mission-critical, production infrastructure pilots for a particular business need (as opposed to deciding to move your whole data center into the cloud, which is still bleeding-edge adopter stuff). Indeed, I think there’s a significant untapped potential for tools that ease this transition. (Certainly there are any number of outsourcers and consultants who would love to charge you vast amounts of money to help you migrate.)
We see the colocation and data center leasing markets shift with the economy, and the trends and the players shift with them, especially as strong new regionals and high-density players emerge. The cloud influence is also significant, as people try to evaluate what their real needs for space will be going forward; this is particularly true for anyone looking at long-term leases, and wondering what the state of IT will be like going out ten years. Followers of this space should check out SwitchNAP for a good example of the kind of impact that a new player can make in a very short time (they opened in December).
August has been a consistently quiet month for CDN contract inquiries, and this year is no exception, but the last three months as a whole have really been hopping. The industry is continuing to shift in interesting ways, not just because of the dynamics of the companies involved, but because of changing buyer needs. Also, there was a very interesting new launch in July, in the application delivery network space, a company called Asankya, definitely worth checking out if you follow this space.
All in all, there’s a lot of activity, and it’s becoming more future-focused as people get ready to prep their budgets. This is good news for everyone, I think. Even though the fundamental economic shifts have driven companies to be more value-driven, I think there’s a valuable emphasis being placed on the right solutions at the right price, that do the right thing for the business.
What’s mid-sized?
As I talk to clients, it strikes me that companies with fairly similar IT infrastructures can use very different words to describe how they feel about it. One client might say, “Oh, we’re just a small IT shop, we’ve only got a little over 250 servers, we think cloud computing is for people like us.” Another client that’s functionally identical (same approximate business size, industry, mix of workloads and technologies) might say, much more indignantly, “We’re a big IT shop! We’ve got more than 250 servers! Cloud computing can’t help enterprises like us!”
“SMB” is a broadly confused term. So, for that matter, is “enterprise”. I tend to prefer the term “mid-market”, but even that is sort of cop-out language. Moreover, business size and IT size don’t correlate. Consider the Fortune 500 companies that extract natural resources, vs. their neighbors on the list, for instance.
Vendors have to be careful how they pitch their marketing. Mid-sized companies and/or mid-sized IT shops don’t always know when vendors are talking about them, and not some other sort of company. Conversely, IT managers have to look more deeply to figure out if a particular sort of cloud service is right for their organization. Don’t dismiss a cloud service out of hand because you think you’re either too big or too small for it.
Bits and pieces
Interesting recent news:
Amazon’s revocation of Orwell novels on the Kindle has stirred up some cloud debate. There seems to have been a thread of “will this controversy kill cloud computing”, which you can find in plenty of blogs and press articles. I think that question, in this context, is silly, and am not going to dignify it with a lengthy post of my own. I do think, however, that it highlights important questions around content ownership, application ownership, and data ownership, and the role that contracts (whether in the form of EULAs or traditional contracts) will play in the cloud. By giving up control over physical assets, whether data or devices, we place ourselves into the hands of third parties, and we’re now subject to their policies and foibles. The transition from a world of ownership to a world of rental, even “permanent” lifetime rental, is not a trivial one.
Engine Yard has expanded its EC2 offering. Previously, Engine Yard was offering Amazon EC2 deployment of its stack via an offering called Solo, for low-end customers who only needed a single instance. Now, they’ve introduced a version called Flex, which is oriented around customers who need a cluster and associated capabilities, along with a higher level of support. This is notable because Engine Yard has been serving these higher-end customers out of their own data center and infrastructure. This move, however, seems to be consistent with Engine Yard’s gradual shift from hosting towards being more software-centric.
The Rackspace Cloud Servers API is now in open beta. Cloud Servers is essentially the product that resulted from Rackspace’s acquisition of Slicehost. Previously, you dealt with your Cloud Server through a Web portal; this new release adds a RESTful API, along with some new features, like shared IPs (useful for keepalived and the like). Also of note is the resize operation, letting you scale your server size up or down, but this is really handwaving magic in front of replacing a smaller virtual server with a larger virtual server, rather than expanding an already-running virtual instance. The API is fairly extensive and the documentation seems decent, although I haven’t had time to personally try it out yet. The API responses, interestingly, include both human-readable data as well as WADL (Web Application Description Language, which is machine-parseable).
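For a sense of what driving a server through a RESTful API of this sort looks like, here’s a minimal sketch of constructing a resize request. The base URL, path, and payload shape below are my own illustrative assumptions, not the official Rackspace API definition:

```python
import json

# Hypothetical sketch of a RESTful "resize" call against a cloud servers
# API. The account URL, path structure, and payload keys are invented for
# illustration; consult the actual API documentation before relying on them.

API_BASE = "https://servers.api.example.com/v1.0/1234"  # placeholder account URL

def build_resize_request(server_id, flavor_id):
    """Return the (url, json_body) pair for a server resize action.

    Resizing replaces the running virtual server with one of a different
    size, so the request carries the target "flavor" (size) ID rather
    than growing the running instance in place.
    """
    url = f"{API_BASE}/servers/{server_id}/action"
    body = json.dumps({"resize": {"flavorId": flavor_id}})
    return url, body

url, body = build_resize_request(42, 3)
print(url)   # → https://servers.api.example.com/v1.0/1234/servers/42/action
print(body)  # → {"resize": {"flavorId": 3}}
```

In practice you’d POST that body to the URL with your authentication token in the headers; the point of the sketch is just how little ceremony a RESTful control plane requires compared to clicking through a Web portal.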
SOASTA has introduced a cloud-based performance certification program. Certification is something of a marketing gimmick, but I do think that SOASTA is, overall, an interesting company. Very simply, SOASTA leverages cloud system infrastructure to offer high-volume load-testing services. In the past, you’d typically execute such tests using a tool like HP’s LoadRunner, and many Web hosters offer, as part of their professional services offerings, performance testing using LoadRunner or a similar tool. SOASTA is a full-fledged software as a service offering (i.e., it is their own test harness, monitors, analytics, etc., not a cloud repackaging of another vendor), and the price point makes it reasonable not just for the sort of well-established organizations that could previously afford commercial performance-testing tools, but also for start-ups.
Cloud computing adoption surveys
A recent Forrester survey apparently indicates that one out of four large companies plan to use an external provider soon, or have already done so. (The Cloud Storage Strategy blog has a good round-up linking to the original report, a summary of the key points, and various commentators.)
Various pundits are apparently surprised by these results. I’m not. I haven’t been able to obtain a copy of the Forrester report, but from the comments I’ve read, it appears that software as a service and hosting (part of infrastructure as a service) are included as part of the surveyed services. SaaS and IaaS are both well-established markets, with significant penetration across all segments of business, and interest in both IaaS and SaaS models has accelerated. We’ve wrapped the “cloud” label around some or all of these existing markets (how much gets encompassed depends on your definitions), so it shouldn’t come as a surprise to already see high adoption rates.
Gartner’s own survey on this topic has just been published. It’s titled, “User Survey Analysis: Economic Pressures Drive Cost-Oriented Outsourcing, Worldwide, 2008-2009”. Among its many components is a breakdown of current and planned use of alternative delivery models (which include things like SaaS and IT infrastructure utilities) over the next 24 months. We show even higher current and planned adoption numbers than Forrester, with IaaS leading the pack in terms of current and near-term adoption, and very healthy numbers for SaaS as well.
A hodgepodge of links
This is just a round-up of links that I’ve recently found to be interesting.
Barroso and Holzle (Google): Warehouse-Scale Computing. This is a formal lecture-paper covering the design of what these folks from Google refer to as WSCs. They write, “WSCs differ significantly from traditional data centers: they belong to a single organization, use a relatively homogenous hardware and system software platform, and share a common systems management layer. Often, much of the application, middleware, and system software is built in-house compared to the predominance of third-party software running in conventional data centers. Most importantly, WSCs run a smaller number of very large applications (or Internet services), and the common resource management infrastructure allows significant deployment flexibility.” The paper is wide-ranging but written to be readily understandable by the mildly technical layman. Highly recommended for anyone interested in cloud.
Washington Post: Metrorail Crash May Exemplify Automation Paradox. The WaPo looks back at serious failures of automated systems, and quotes a “growing consensus among experts that automated systems should be designed to enhance the accuracy and performance of human operators rather than to supplant them or make them complacent. By definition, accidents happen when unusual events come together. No matter how clever the designers of automated systems might be, they simply cannot account for every possible scenario, which is why it is so dangerous to eliminate ‘human interference’.” Definitely something to chew over in the cloud context.
Malcolm Gladwell: Priced to Sell. The author of The Tipping Point takes on Chris Anderson’s Free, and challenges the notion that information wants to be free. In turn, Seth Godin thinks Gladwell is wrong, and the book seems to be setting off some healthy debate.
Bruce Robertson: Capacity Planning Equals Budget Planning. My colleague Bruce riffs off a recent blog post of mine, and discusses how enterprise architects need to change the way they design solutions.
Martin English: Install SAP on Amazon Web Services. An interesting blog devoted to how to get SAP running on AWS. This is for people interested in hands-on instructions.
Robin Burkinshaw: Being homeless in the Sims 3. This blog tells the story, in words and images, of “Alice and Kev”, a pair of characters that the author (a game design student) created in the Sims 3. It’s a fascinating bit of user-generated content, and a very interesting take on what can be done with modern sandbox-style games.
Magic Quadrant (hosting and cloud), published!
The new Magic Quadrant for Web Hosting and Hosted Cloud System Infrastructure Services (On Demand) has been published. (Gartner clients only, although I imagine public copies will become available soon as vendors buy reprints.) Inclusion criteria were set primarily by revenue; if you’re wondering why your favorite vendor wasn’t included, it was probably because they didn’t, at the January cut-off date, have a cloud compute service, or didn’t have enough revenue to meet the bar. Also, take note that this is direct services only (thus the somewhat convoluted construction of the title); it does not include vendors with enabling technology like Enomaly, or overlaid services like RightScale.
It marks the first time we’ve done a formal vendor rating of many of the cloud system infrastructure service providers. We do so in the context of the Web hosting market, though, which means that the providers are evaluated on the full breadth of the five most common hosting use cases that Gartner clients have. Self-managed hosting (including “virtual data center” hosting of the Amazon EC2, GoGrid, Terremark Enterprise Cloud, etc. sort) is just one of those use cases. (The primary cloud infrastructure use case not in this evaluation is batch-oriented processing, like scientific computing.)
We mingled Web hosting and cloud infrastructure on the same vendor rating because one of the primary use cases for cloud infrastructure is for the hosting of Web applications and content. For more details on this, see my blog post about how customers buy solutions to business needs, not technology. (You might also want to read my blog post on “enterprise class” cloud.)
We rated more than 60 individual factors for each vendor, spanning five use cases. The evaluation criteria note (Gartner clients only) gives an overview of the factors that we evaluate in the course of the MQ. The quantitative scores from the factors were rolled up into category scores, which in turn rolled up into overall vision and execution scores, which turn into the dot placement in the Quadrant. All the number crunching is done by software — analysts don’t get to arbitrarily move dots around.
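As an illustration of the general shape of such a weighted roll-up (the factor names, weights, and scale below are invented for the example; Gartner’s actual model is not public):

```python
# Illustrative only: a toy weighted roll-up of factor scores into a
# category score. The factors, weights, and 1-5 scale are invented;
# this is the general shape of the computation, not Gartner's model.

def roll_up(scores, weights):
    """Weighted average of {name: score} using {name: weight}."""
    total_w = sum(weights.values())
    return sum(scores[k] * weights[k] for k in scores) / total_w

factor_scores = {"pricing": 4.0, "sla": 3.0, "portal": 5.0}
factor_weights = {"pricing": 0.5, "sla": 0.3, "portal": 0.2}

category_score = roll_up(factor_scores, factor_weights)
print(round(category_score, 2))  # → 3.9
```

Category scores would then feed a second roll-up into the overall vision and execution axes, which is why no individual dot can be nudged by hand: change a factor score and the placement moves deterministically.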
To understand the Magic Quadrant methodology, I’d suggest you read the following:
- The official How Gartner Evaluates Vendors within a Market guide to Magic Quadrants
- My colleague Jim Holincheck’s blog post on Misunderstanding Magic Quadrants
- My blog post on How Not To Use a Magic Quadrant
- Analyst industry watcher SageCircle’s commentary
Some people might look at the vendors on this MQ and wonder why exciting new entrants aren’t highly rated on vision and/or execution. Simply put, many of these vendors might be superb at what they do, yet still not rate very highly in the overall market represented by the MQ, because they are good at just one of the five use cases encompassed by the MQ’s market definition, or even good at just one particular aspect of a single use case. This is not just a cloud-related rating; to excel in the market as a whole, one has to be able to offer a complete range of solutions.
Because there’s considerable interest in vendor selection for various use cases (including non-hosting use cases) that are unique to public cloud compute services, we’re also planning to publish some companion research, using a recently-introduced Gartner methodology called a Critical Capabilities note. These notes look at vendors in the context of a single product/service, broken down by use case. (Magic Quadrants, on the other hand, look at overall vendor positioning within an entire market.) The Critical Capabilities note solves one of the eternal dilemmas of looking at a MQ, which is trying to figure out which vendors are highly rated for the particular business need that you have, since, as I want to reiterate, a MQ niche player may do the exact thing you need in a vastly more awesome fashion than a vendor rated a leader. Critical Capabilities notes break things down feature-by-feature.
In the meantime, for more on choosing a cloud infrastructure provider, Gartner clients should also look at some of my other notes:
- How to Select a Cloud Computing Infrastructure Provider
- Toolkit: Comparing Cloud Computing Infrastructure Providers
- Toolkit: Estimating the Cost of Cloud Infrastructure
For cloud infrastructure service providers: We may expand the number of vendors we evaluate for the Critical Capabilities note. If you’ve never briefed us before, we’d welcome you to do so now; schedule a briefing with me, Ted Chamberlin, and Mike Spink (a brand-new colleague in Europe).
I’m thinking about using Amazon, IBM, or Rackspace…
At Gartner, much of our coverage of the cloud system infrastructure services market (i.e., Amazon, GoGrid, Joyent, etc.) is an outgrowth of our coverage of the hosting market. Hosting is certainly not the only common use case for cloud, but it is the use case that is driving much of the revenue right now, a high percentage of the providers are hosters, and most of the offerings lean heavily in this direction.
This leads to some interesting phenomena, like the inquiries where the client begins with, “I’m considering using Amazon, IBM, or Rackspace…” That’s the result of customers thinking about the trade-offs between different types of solutions, not just vendors. Also, ultimately, customers buy solutions to business needs, not technology.
Customers say things like, “I’ve got an e-commerce website that uses the following list of technologies. I get a lot more traffic around Mother’s Day and Christmas. Also, I run marketing campaigns, but I’m never sure how much additional traffic an advertisement will drive to my site.”
If you’re currently soaking in the cloud hype, you might quickly jump on that to say, “A perfect case for cloud!” and it could be, but then you get into other questions. Is maximum cost savings the most important budgetary aspect, or is predictability of the bill more important? When he has traffic spikes, are they gradual, giving him hours (or even days) to build up the necessary capacity, or are they sudden, requiring provisioning in as close to real time as possible? Does he understand how to architect the infrastructure (and app!) to scale, or does he need help? Does his application scale horizontally or vertically? Does he want to do capacity planning himself, or does he want someone else to take care of it? (Capacity planning equals budget planning, so it’s rarely a case of, “eh, because we can scale quickly, it doesn’t matter.”) Does he have a good change management process, or does he want a provider to shepherd that for him? Does he need to be PCI compliant, and if so, how does he plan to achieve that? How much systems management does he want to do himself, and to what degree does he have automation tools, or want to use provider-supplied automation? And so on.
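To make the cost-versus-predictability trade-off concrete, here’s a toy calculation; the hourly rates and traffic profile are invented for illustration, not real provider pricing:

```python
# Illustrative arithmetic only: the rates and traffic profile below are
# invented for the example, not real provider pricing.

HOURS_PER_MONTH = 720
ONDEMAND_RATE = 0.40    # assumed $/server-hour, elastic capacity
DEDICATED_RATE = 0.25   # assumed effective $/server-hour, fixed capacity

baseline_servers = 10   # steady-state load
peak_servers = 40       # Mother's Day / Christmas spike
peak_hours = 72         # hours spent at peak in a spiky month

# Fixed capacity must be sized for the peak, and you pay for it all month.
dedicated_cost = peak_servers * DEDICATED_RATE * HOURS_PER_MONTH

# Elastic capacity pays a premium rate, but only for what it actually uses.
elastic_cost = ONDEMAND_RATE * (
    baseline_servers * (HOURS_PER_MONTH - peak_hours)
    + peak_servers * peak_hours
)

print(f"fixed-for-peak: ${dedicated_cost:,.2f}")  # → fixed-for-peak: $7,200.00
print(f"elastic:        ${elastic_cost:,.2f}")    # → elastic:        $3,744.00
```

In this spiky month the elastic bill undercuts peak-sized fixed capacity, and in a quiet month it would drop further still, to $2,880; but it moves every month, while the fixed bill never does. Which of those properties matters more is a budgeting question, not a technology one.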
That’s just one of the use cases for cloud compute as a service. Similar sets of questions exist in each of the other use cases where cloud is a possible solution. It’s definitely not as simple as “more efficient utilization of infrastructure equals Win”.
Overpromising
I’ve turned one of my earlier blog entries, Smoke-and-mirrors and cloud software into a full-blown research note: “Software on Amazon’s Elastic Compute Cloud: How to Tell Hype From Reality” (clients only). It’s a Q&A for your software vendor, if they suggest that you deploy their solution on EC2, or if you want to do so and you’re wondering what vendor support you’ll get if you do so. The information is specific to Amazon (since most client inquiries of this type involve Amazon), but somewhat applicable to other cloud compute service providers, too.
More broadly, I’ve noticed an increasing tendency on the part of cloud compute vendors to over-promise. It’s not credible, and it leaves prospective customers scratching their heads and feeling like someone has tried to pull a fast one on them. Worse still, it could leave more gullible businesses going into implementations that ultimately fail. This is exactly what drives the Trough of Disillusionment of the hype cycle and hampers productive mainstream adoption.
Customers: When you have doubts about a cloud vendor’s breezy claims that sure, it will all work out, ask them to propose a specific solution. If you’re wondering how they’ll handle X, Y, or Z, ask them and don’t be satisfied with assurances that you (or they) will figure it out.
Vendors: I believe that if you can’t give the customer the right solution, you’re better off letting him go do the right thing with someone else. Stretching your capabilities can be positive for both you and your customer, but if your solution isn’t the right path, or it is a significantly more difficult path than an alternative solution, both of you are likely to be happier if that customer doesn’t buy from you right now, at least not in that particular context. Better to come back to this customer eventually when your technology is mature enough to meet his needs, or look for the customer’s needs that do suit what you can offer right now. If you screw up a premature implementation, chances are that you won’t get the chance to grow this business the way that you hoped. There are enough early adopters with needs that you can meet, that you should be going after them. There’s nothing wrong with serving start-ups and getting “foothold” implementations in enterprises; don’t bite off more than you can chew.
Almost a decade of analyst experience has shown me that it’s hard for a vendor to get a second chance with a customer if they screwed up the first encounter. Even if, many many years later, the vendor has a vastly augmented set of capabilities and is managed entirely differently, a burned customer still tends to look at them through the lens of that initial experience, and often takes that attitude along to the various companies they move to. My observation is that in IT outsourcing, customers certainly hold vendor “grudges” for more than five years, and may do so for more than a decade. This is hugely important in emerging markets, as it can dilute early-mover advantages as time progresses.
Job-based vs. request-based computing
Companies are adopting cloud systems infrastructure services in two different ways: job-based “batch processing”, non-interactive computing; and request-based, real-time-response, interactive computing. The two have distinct requirements, but much as in the olden days of time-sharing, they can potentially share the same infrastructure.
Job-based computing is usually of a number-crunching nature — scientific or high-performance computing. This is the sort of thing that users usually like to do on parallel computers with a very fast interconnect (InfiniBand or the equivalent), but in the cloud, total compute time may be traded for a lower cost, and, eventually, algorithms may be altered to reduce dependency on server-to-server or server-to-storage communications. Putting these jobs on the cloud generally reduces reliance on, and scheduling for, a fixed amount of supercomputing infrastructure. Alternatively, job-based computing on the cloud may represent one-time computationally-intensive projects (transcoding, for instance).
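The job-based pattern, restructured to avoid interconnect dependence, can be sketched as an embarrassingly parallel batch: split the work into independent chunks, fan them out, and merge the results. The workload below (a sum of squares) is a toy stand-in for real work such as transcoding a frame of video:

```python
# Hedged sketch of a job-based, communication-free batch workload. Each
# chunk is processed independently, so no fast server-to-server
# interconnect is needed; this is what makes such jobs a natural fit for
# commodity cloud instances. The "crunch" function is a toy stand-in.

from concurrent.futures import ThreadPoolExecutor

def crunch(chunk):
    # stand-in for a compute-heavy, communication-free unit of work
    return sum(x * x for x in chunk)

def run_batch(data, n_chunks=4):
    # in a real cloud batch, each chunk would go to a separate instance;
    # a local thread pool stands in for that fan-out here
    size = max(1, len(data) // n_chunks)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=n_chunks) as pool:
        return sum(pool.map(crunch, chunks))

print(run_batch(list(range(100))))  # → 328350, same answer as the serial sum
```

Request-based computing admits no such restructuring: a browser is waiting on the other end, so latency, not throughput, is the constraint.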
Request-based computing, on the other hand, demands instant response to interaction. This kind of use of the cloud is classically for Web hosting, whether the interaction is based on a user with a browser, or another server making Web services requests. Most of this kind of computing is not CPU-intensive.
Observation: Most cloud compute services today target request-based computing, and this is the logical evolution of the hosting industry. However, a significant amount of large-enterprise immediate-term adoption is job-based computing.
Dilemma for cloud providers: Optimize infrastructure with low-power low-cost processors for request-based computing? Or try to balance job-based and request-based compute in a way that maximizes efficient use of faster CPUs?
“Enterprise class” cloud
There seems to be an endless parade of hosting companies eager to explain to me that they have an “enterprise class” cloud offering. (Cloud systems infrastructure services, to be precise; I continue to be careless in my shorthand on this blog, although all of us here at Gartner are trying to get into the habit of using cloud as an adjective attached to more specific terminology.)
If you’re a hosting vendor, get this into your head now: Just because your cloud compute service is differentiated from Amazon’s doesn’t mean that you’re differentiated from any other hoster’s cloud offering.
Yes, these offerings are indeed targeted at the enterprise. Yes, there are in fact plenty of non-startups who are ready and willing and eager to adopt cloud infrastructure. Yes, there are features that they want (or need) that they can’t get on some of the existing cloud offerings, especially those of the early entrants. But that does not make them unique.
These offerings tend to share the following common traits:
1. “Premium” equipment. Name-brand everything. HP blades, Cisco gear except for F5’s ADCs, etc. No white boxes.
2. VMware-based. This reflects the fact that VMware is overwhelmingly the most popular virtualization technology used in enterprises.
3. Private VLANs. Enterprises perceive private VLANs as more secure.
4. Private connectivity. That usually means Internet VPN support, but also the ability to drop your own private WAN connection into the facility. Enterprises who are integrating cloud-based solutions with their legacy infrastructure often want to be able to get MPLS VPN connections back to their own data center.
5. Colocated or provider-owned dedicated gear. Not all workloads virtualize well, and some things are available only as hardware. If you have Oracle RAC clusters, you are almost certainly going to do it on dedicated servers. People have Google search appliances, hardware ADCs custom-configured for complex tasks, black-box encryption devices, etc. Dedicated equipment is not going away for a very, very long time. (Clients only: See statistics and advice on what not to virtualize.)
6. Managed service options. People still want support, managed services, and professional services; the cloud simplifies and automates some operations tasks, but we have a very long way to go before it fulfills its potential to reduce IT operations labor costs. And this, of course, is where most hosters will make their money.
These are traits that it doesn’t take a genius to think of. Most are known requirements established through a decade and a half of hosting industry experience. If you want to differentiate, you need to get beyond them.
On-demand cloud offerings are a critical evolution stage for hosters. I continue to be very, very interested in hearing from hosters who are introducing this new set of capabilities. For the moment, there’s also some differentiation in which part of the cloud conundrum a hoster has decided to attack first, creating provider differences for both the immediate offerings and the near-term roadmap offerings. But hosters are making a big mistake by thinking their cloud competition is Amazon. Amazon certainly is a competitor now, but a hoster’s biggest worry should still be other hosters, given the worrisome similarities in the emerging services.