Job-based vs. request-based computing

Companies are adopting cloud systems infrastructure services in two different ways: job-based ("batch processing") computing, which is non-interactive; and request-based computing, which is interactive and demands real-time response. The two have distinct requirements, but much as in the olden days of time-sharing, they can potentially share the same infrastructure.

Job-based computing is usually of a number-crunching nature — scientific or high-performance computing. This is the sort of thing that users usually like to do on parallel computers with very fast interconnects (InfiniBand or the equivalent), but in the cloud, total compute time may be traded for a lower cost, and, eventually, algorithms may be altered to reduce dependency on server-to-server or server-to-storage communications. Putting these jobs on the cloud generally reduces reliance on, and scheduling for, a fixed amount of supercomputing infrastructure. Alternatively, job-based computing on the cloud may represent one-time computationally intensive projects (transcoding, for instance).
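
As a toy illustration of that time-for-cost trade-off (every figure below is hypothetical, purely for illustration):

    # Toy comparison of a dedicated HPC cluster vs. commodity cloud for one batch job.
    # All rates and the slowdown factor are hypothetical, not real market prices.
    core_hours = 1000        # compute the job needs on a fast interconnect
    cluster_rate = 0.50      # $/core-hour on a dedicated cluster (hypothetical)
    cloud_rate = 0.10        # $/core-hour on commodity cloud instances (hypothetical)
    slowdown = 1.4           # job runs 40% longer on a slower cloud interconnect

    cluster_cost = core_hours * cluster_rate            # $500.00
    cloud_cost = core_hours * slowdown * cloud_rate     # $140.00
    print(f"cluster: ${cluster_cost:.2f}, cloud: ${cloud_cost:.2f}")

The job finishes later, but if wall-clock deadline pressure is low, the cheaper bill wins.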

Request-based computing, on the other hand, demands instant response to interaction. This kind of use of the cloud is classically for Web hosting, whether the interaction is based on a user with a browser, or another server making Web services requests. Most of this kind of computing is not CPU-intensive.

Observation: Most cloud compute services today target request-based computing, and this is the logical evolution of the hosting industry. However, a significant amount of large-enterprise immediate-term adoption is job-based computing.

Dilemma for cloud providers: Optimize infrastructure with low-power low-cost processors for request-based computing? Or try to balance job-based and request-based compute in a way that maximizes efficient use of faster CPUs?

“Enterprise class” cloud

There seems to be an endless parade of hosting companies eager to explain to me that they have an “enterprise class” cloud offering. (Cloud systems infrastructure services, to be precise; I continue to be careless in my shorthand on this blog, although all of us here at Gartner are trying to get into the habit of using cloud as an adjective attached to more specific terminology.)

If you’re a hosting vendor, get this into your head now: Just because your cloud compute service is differentiated from Amazon’s doesn’t mean that you’re differentiated from any other hoster’s cloud offering.

Yes, these offerings are indeed targeted at the enterprise. Yes, there are in fact plenty of non-startups who are ready and willing and eager to adopt cloud infrastructure. Yes, there are features that they want (or need) that they can’t get on some of the existing cloud offerings, especially those of the early entrants. But that does not make them unique.

These offerings tend to share the following common traits:

1. “Premium” equipment. Name-brand everything. HP blades, Cisco gear except for F5’s ADCs, etc. No white boxes.

2. VMware-based. This reflects the fact that VMware is overwhelmingly the most popular virtualization technology used in enterprises.

3. Private VLANs. Enterprises perceive private VLANs as more secure.

4. Private connectivity. That usually means Internet VPN support, but also the ability to drop your own private WAN connection into the facility. Enterprises who are integrating cloud-based solutions with their legacy infrastructure often want to be able to get MPLS VPN connections back to their own data center.

5. Colocated or provider-owned dedicated gear. Not all workloads virtualize well, and some things are available only as hardware. If you have Oracle RAC clusters, you are almost certainly going to run them on dedicated servers. People have Google search appliances, hardware ADCs custom-configured for complex tasks, black-box encryption devices, etc. Dedicated equipment is not going away for a very, very long time. (Clients only: See statistics and advice on what not to virtualize.)

6. Managed service options. People still want support, managed services, and professional services; the cloud simplifies and automates some operations tasks, but we have a very long way to go before it fulfills its potential to reduce IT operations labor costs. And this, of course, is where most hosters will make their money.

These are traits that it doesn’t take a genius to think of. Most are known requirements established through a decade and a half of hosting industry experience. If you want to differentiate, you need to get beyond them.

On-demand cloud offerings are a critical evolution stage for hosters. I continue to be very, very interested in hearing from hosters who are introducing this new set of capabilities. For the moment, there’s also some differentiation in which part of the cloud conundrum a hoster has decided to attack first, creating provider differences for both the immediate offerings and the near-term roadmap offerings. But hosters are making a big mistake by thinking their cloud competition is Amazon. Amazon certainly is a competitor now, but a hoster’s biggest worry should still be other hosters, given the worrisome similarities in the emerging services.

What makes for an effective MQ briefing?

My colleague Ted Chamberlin and I are currently finalizing the new Gartner Magic Quadrant for Web Hosting. This year, we've nearly doubled the number of providers on the MQ, adding a bunch of cloud providers who offer hosting services (i.e., providers of cloud systems infrastructure services who aren't purely storage or backup).

The draft has gone out for vendor review, and these last few days have been occupied by more than a dozen conversations with vendors about where they’ve placed in the MQ. (No matter what, most vendors are convinced they should be further right and further up.) Over the course of these conversations, one clear pattern seems to be characterizing this year: We’re seeing lots of data presented in the feedback process that wasn’t presented as part of the MQ briefing or any previous briefing the vendor did with us.

I recognize the MQ can be a mysterious process to vendors. So here are a couple of thoughts from the analyst side on what makes for effective MQ briefing content. These are by no means universal opinions, but they may be shared by my colleagues who cover service businesses.

In brief, the execution axis is about what you're doing now. The vision axis is about where you're going. A set of definitions for the criteria on each axis is included with every vendor notification that begins the Magic Quadrant process. If you're a vendor being considered for an MQ, you really want to read the criteria. We do not throw darts to determine vendor placement. Every vendor gets a numerical rating on every single one of those criteria, and a tool plots the dots. It's a good idea to address each of those criteria in your briefing (or in supplemental material, if you can't fit everything you need into the briefing).
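
Mechanically, the plotting is nothing more exotic than a weighted score per axis. Here's a toy sketch of that step; the criteria names and weights are invented for illustration and are not Gartner's actual model:

    # Hypothetical sketch of how per-criterion ratings become a dot on two axes.
    # Criteria and weights are invented; only the mechanics are the point.
    execution = {                      # criterion: (rating 1-5, weight)
        "service_delivery": (4.0, 0.4),
        "market_responsiveness": (3.5, 0.2),
        "customer_experience": (4.5, 0.4),
    }
    vision = {
        "offering_strategy": (3.0, 0.6),
        "innovation": (4.0, 0.4),
    }

    def axis_score(criteria):
        # Weighted average of the analyst's numerical ratings on one axis.
        return sum(r * w for r, w in criteria.values()) / sum(w for _, w in criteria.values())

    x, y = axis_score(vision), axis_score(execution)   # right = vision, up = execution
    print(f"vendor dot at ({x:.2f}, {y:.2f})")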

We generally have reasonably good visibility into execution from our client base, but the less market presence you have, especially among Gartner’s typical client base (mid-size business to large enterprise, and tech companies), the less we’ve probably seen you in deals or have gotten feedback from your customers. Similarly, if you don’t have much in the way of a channel, there’s less of a chance any of your partners have talked to us about what they’re doing with you. Thus, your best use of briefing time for an MQ is to fill us in on what we don’t know about your company’s achievements — the things that aren’t readily culled from publicly-available information or talking to prospects, customers, and partners.

It’s useful to briefly summarize your company’s achievements over the last year — revenue growth, metrics showing improvements in various parts of the business, new product introductions, interesting customer wins, and so forth. Focus on the key trends of your business. Tell us what strategic initiatives you’ve undertaken and the ways they’ve contributed to your business. You can use this to help give us context to the things we’ve observed about you. We may, for instance, have observed that your customer service seems to have improved, but not know what specific measures you took to improve it. Telling us also helps us to judge how far along the curve you are with an initiative, which in turn helps us to advise our clients better and more accurately rate you.

Vision, on the other hand, is something that only you can really tell us about. Because this is where you're going, rather than where you are now or where you've been, the amount of information you're willing to disclose is likely to directly correlate to our judgment of your vision. A one-year, quarter-by-quarter roadmap is usually the best way to show us what you're thinking; a two-year roadmap is even better. (Note that we do rate track record, so you don't want to claim things that you aren't going to deliver — you'll essentially take a penalty next year if you fail to deliver on the roadmap.) We want to know what you think of the market and your place in it, but the very best way to demonstrate that you're planning to do something exciting and different is to tell us what you're expecting to do. (We can keep the specifics under NDA, although the more we can talk about publicly, the more we can tell our clients that if they choose you, there'll be some really cool stuff you'll be doing for them soon.) If you don't disclose your initiatives, we're forced to guess based on your general statements of direction, and generally we're going to be conservative in our guesses, which probably means a lower rating than you might otherwise have been able to get.

The key thing to remember, though, is that if at all possible, an MQ briefing should be a summary and refresher, not an attempt to cram a year’s worth of information into an hour. If you’ve been doing routine briefings covering product updates and the launch of key initiatives, you can skip all that in an MQ briefing, and focus on presenting the metrics and key achievements that show what you’ve done, and the roadmap that shows where you’re going. Note that you don’t have to be a client in order to conduct briefings. If MQ placement or analyst recommendations are important to your business, keep in mind that when you keep an analyst well-informed, you reap the benefit the whole year ’round in the hundreds or even thousands of conversations analysts have with your prospective customers, not just on the MQ itself.

Wading into the waters of cloud adoption

I’ve been pondering the dot write-ups that I need to do for Gartner’s upcoming Cloud Computing Hype Cycle, as well as my forthcoming Magic Quadrant on Web Hosting (which now includes a bunch of cloud-based providers), and contemplating this thought:

We are at the start of an adoption curve for cloud computing. Getting from here, to the realization of the grand vision, will be, for most organizations, a series of steps into the water, and not a grand leap.

Start-ups have it easy; by starting with a greenfield build, they can choose from the very beginning to embrace new technologies and methodologies. Established organizations can sometimes do this with new projects, but still have heavy constraints imposed by the legacy environment. And new projects, especially now, are massively dwarfed by the existing installed base of business IT infrastructure: existing policies (and regulations), processes, methodologies, employees, and, of course, systems and applications.

The weight of all that history creates, in many organizations, a “can’t do” attitude. Sometimes that attitude comes right from the top of the business or from the CIO, but I’ve also encountered many a CIO eager to embrace innovation, only to be stymied by the morass of his organization’s IT legacy. Part of the fascination of cloud services, of course, is that they allow business leaders to “go rogue” — to bypass the IT organization entirely in order to get what they want done ASAP, without much in the way of constraints and oversight. The counter-force is the move to develop private clouds that provide greater agility to internal IT.

Two client questions have been particularly prominent in the inquiries I’ve been taking on cloud (a super-hot topic of inquiry, as you’d expect): Is this cloud stuff real? and What can I do with the cloud right now? Companies are sticking their toes into the water, but few are jumping off the high dive. What interests me, though, is that many are engaging in active vendor discussions about taking the plunge, even if their actual expectation (or intent) is to just wade out a little. Everyone is afraid of sharks; it’s viewed as a high-risk activity.

In my research work, I have been, like the other analysts who do core cloud work here at Gartner, looking at a lot of big-picture stuff. But I’ve been focusing my written research very heavily on the practicalities of immediate-term adoption — answering the huge client demand for frameworks to use in formulating and executing on near-term cloud infrastructure plans, and in long-term strategic planning for their data centers. The interest is undoubtedly there. There’s just a gap between the solutions that people want to adopt, and the solutions that actually exist in the market. The market is evolving with tremendous rapidity, though, so not being able to find the solution you want today doesn’t mean that you won’t be able to get it next year.

Verizon and Carpathia launch hybrid offerings

Two public cloud announcements from hosting providers this week, with some interesting similarities…

Verizon

Verizon has launched its Computing as a Service (CaaS) offering. This is a virtual data center (VDC) offering, which means that it’s got a Web-based GUI within which you provision and manage your infrastructure. You contract for CaaS itself on a one-year term, paying for that base access monthly. Within CaaS, you can provision “farms”, which are individual virtual data centers. Within a farm, you can provision servers (along with the usual storage, load-balancing, firewall, etc.). Farms and servers are on-demand, with a daily price.

Two things make the Verizon offering distinctive (at least temporarily). First, farms can contain both physical servers and virtual (VMware-based) servers, on an HP C-class blade platform; while hybridized offerings have become increasingly common, Verizon is one of the few to allow them to be managed out of a unified GUI. Second, Verizon offers managed services across the whole platform. By default, you get basic management (including patch management) for the OS and Verizon-provided app infrastructure. You can also upgrade to full managed service. It looks like, compared to similar providers, the Verizon offering is going to be extremely cost-competitive.
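
As a back-of-the-envelope sketch of how that contract-plus-on-demand pricing structure composes (all rates below are invented for illustration; Verizon's actual prices will differ):

    # Hypothetical bill for a CaaS-style offering: a fixed monthly base fee,
    # plus daily charges for on-demand farms and servers. Rates are invented.
    base_monthly = 250.00    # contracted base access fee (hypothetical)
    farm_daily = 10.00       # per farm, per day (hypothetical)
    server_daily = 4.00      # per server, per day (hypothetical)

    def monthly_cost(farms, servers, days=30):
        return base_monthly + days * (farms * farm_daily + servers * server_daily)

    print(f"1 farm, 5 servers for a month: ${monthly_cost(1, 5):,.2f}")  # $1,150.00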

Carpathia Hosting

In yet another example of a smaller hoster “growing up” with serious cloud computing ambitions, Carpathia has released an offering it calls Cloud Orchestration. It’s a hybrid utility hosting model, combining its managed dedicated hosting service (AlwaysOn) with scaling on its virtual server offering, InstantOn.

Carpathia has stated that this is the first hybrid offering; I don’t agree. However, I do think that Carpathia has rolled out a notable number of features on its cloud platform (Citrix XenServer-based). It’s made a foray into the cloud storage space, based on ParaScale. It also has auto-scaling, including auto-provisioning triggered by performance and availability SLA violations (the only vendor I know of that currently offers that feature). OS patch management is included, as are other basic managed hosting services. Check out Carpathia CTO Jon Greaves’s blog post on the value proposition for an indication of where their thinking is.

Side thought: Carpathia is one of the few Xen-based cloud providers to use Citrix XenServer, rather than open-source Xen. However, now that Citrix is offering XenServer for free, it seems likely that service providers will gradually drift that way. Live migration (XenMotion) will probably be the main thing that drives that switch.

Vendor horror stories

Everyone has vendor horror stories. No matter how good a vendor normally is, there will be times that they screw up. Some customers will exacerbate a vendor’s tendency to screw up — for instance, they may be someone the vendor really shouldn’t have tried to serve in the first place (heavy customization, i.e., many one-offs from a vendor who emphasizes standardization), or they may just be unlucky and have a sub-par employee on their account team.

Competitors of a vendor, especially small, less-well-known ones, will often loudly trumpet, as part of a briefing, how they won such-and-such a customer from some more prominent vendor, and how that vendor did something particularly horrible to that customer. I often find myself annoyed at such stories. It’s fine to say that you frequently win business away from X company. It’s great to explain your points of differentiation from your rivals. I’m deeply interested in who you think your most significant competitors are. But it’s déclassé to tell me how much your competitors suck. Also, I can often hear the horror-story anomalies in those tales, as well as detect the real reason — like the desire to shift from a lightly-managed environment to an entirely managed one, or the desire to go from managed to nearly entirely self-managed, etc. I’ll often ask a vendor point-blank about that, and get an admission that this was what really drove the sale. So why not be honest about that in the first place? Say something positive about what you do well.

I think, for the most part, that it doesn’t work on prospective customers any better than it works on analysts. Most decent people recoil somewhat at hearing others put down, whether they are individuals or competing vendors. Prospects often ask me about badmouthing; naturally, they wonder what’s behind the horror stories, but they also wonder why the vendor feels the need to badmouth a competitor in the first place.

I often find that it’s not really the massive, boneheaded incidents that tend to drive churn, anyway. They can be the flash point that precipitates a departure, but far more often, churn is the result of the accumulation of a pile of things that the customer perceives as slights. The vendor has failed to convey competence and/or caring. While sincerity is not a substitute for competence, it can be a temporary salve for its absence; conversely, competence without conveying that the customer is valued can also be negatively perceived. Human beings, it seems, like to feel important.

Horror stories can be useful to illustrate these patterns of weakness for a particular vendor — a vendor that has trouble planning ahead, a vendor whose proposed customer architectures have a tendency not to scale well, a vendor with a broken service delivery structure, a vendor that doesn’t take accountability, and so on. Interestingly, above-and-beyond stories about vendors can cut both ways — they can illustrate service that is consistently good but is sometimes outstanding, but they can also illustrate exceptions to a vendor’s normal pattern of mediocre service.

As an analyst, I tend to pay the most attention to what customers say about their routine interactions with the vendor. Crisis management is also an important vendor skill, and I like to know how well a vendor responds in a crisis; similarly, the ramp up to getting a renewal is also an important skill. However, it’s the day-to-day stuff that tends to most color people’s perceptions of the relationship.

Still, we all like to tell stories. I’m always looking for a good case study, especially one that illustrates the things that went wrong as well as the things that went right.

Link round-up

Recent links of interest…

I’ve heard that no fewer than four memcached start-ups have been recently funded. GigaOM speculates interestingly on whether memcached is good or bad for MySQL. It seems to me that in the age of cloud and hyperscale, we’re willing to sacrifice ACID compliance in many of our transactions. RAM is cheap, and simplicity and speed are king. But I’m not sure that the widespread use of memcached in Web 2.0 applications, as a method of scaling a database, reflects the strengths of memcached so much as it reflects the weaknesses of the underlying databases.
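
The usual pattern is cache-aside reads in front of the database. A minimal sketch using the python-memcached client (the database calls and the user:<id> key scheme are placeholders, not any particular product's API):

    import memcache

    mc = memcache.Client(["127.0.0.1:11211"])

    def get_user(user_id, db):
        # Cache-aside read: try RAM first, fall back to the database on a miss.
        key = f"user:{user_id}"
        user = mc.get(key)
        if user is None:
            user = db.query_user(user_id)   # placeholder for the real DB call
            mc.set(key, user, time=300)     # cache for five minutes
        return user

    def update_user(user, db):
        # Writes hit the database, then invalidate the stale cache entry; the
        # window between the two is exactly where strict ACID gets sacrificed.
        db.save_user(user)
        mc.delete(f"user:{user.id}")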

Column-oriented databases are picking up some buzz lately. Sybase has a new white paper out on high-performance analytics. MySQL is plugging Infobright, a column-oriented engine for MySQL (replacing MyISAM, InnoDB, etc., just like any other engine).

Brian Krebs, the security blogger for the Washington Post, has an excellent post called The Scrap Value of a Hacked PC. It’s an examination of the ways that hacked PCs can be put to criminal use, and it’s intended to be printed out and handed to end-users who don’t think that security is their personal responsibility.

My colleague Ray Valdes has some thoughts on intuition-based vs. evidence-based design. It’s a riff on the recent New York Times article, Data, Not Design, Is King in the Age of Google, and a departing designer’s blog post that provides a fascinating look at data-driven decision making in an environment where you can immediately test everything.

Google and Salesforce.com

While I’ve been out of the office, Google has made some significant announcements. My colleague Ray Valdes has been writing about Google Wave and its secret sauce. I highly encourage you to go read his blog.

Google and Salesforce.com continue to build on their partnership. In April, they unveiled Salesforce for Google Apps. Now, they’re introducing Force.com for Google App Engine.

The announcement, in a nutshell, is this: There are now publicly downloadable Salesforce client libraries that work on Google App Engine (GAE). Those libraries expose a subset of the functionality available in Force.com’s regular Web Services APIs. Check out the User Guide for details.

Note that this is not a replacement for Force.com and its (proprietary) Apex programming language. Salesforce clearly articulates web services vs. Force.com in its developer guide. Rather, this should be thought of as easing the curve for developers who want to extend their Web applications for use with Salesforce data.

A question that lingers in my mind: Normally, on Force.com, a Developer Edition account means that you can’t affect your organization’s live data. If a similar restriction exists on the GAE version of the APIs, it’s not mentioned in the documentation. I wonder if you can do very lightweight apps, using live data, with just a Salesforce Developer Edition account, if you go through GAE. If so, that would certainly open up the realm of developers who might try building something on the platform.

My colleague Eric Knipp has also blogged about the announcement. I’d encourage you to read his analysis.

What’s the worth of six guys in a garage?

The cloud industry is young. Amazon’s EC2 service dates back just to August 2006, and just about everything related to public cloud infrastructure post-dates that point. Your typical cloud start-up is at most 18 months old, and in most cases, less than a year old. It has a handful of developers, some interesting tech, plenty of big dreams, and the need for capital.

So what’s that worth? Do you buy their software, or do you hire six guys, put them in nice offices, and give them a couple of months to try to duplicate that functionality? Do you just go acquire the company on the cheap, giving six guys a reasonably nice payday for the year of their life spent developing the tech, and getting six smart employees to continue developing this stuff for you? How important is time to market? And if you’re an investor, what type of valuation do you put on that?

Infrastructure and systems management is fairly well understood. Although the cloud is bringing some new ideas and approaches, people need most of the same stuff on the cloud that they’ve traditionally needed in the physical world. That means the near-term feature roadmaps are relatively clear-cut, and it’s a question of how many developers you can throw at cranking out features as quickly as possible. Some approaches have greater value than others, and there’s inherent value in well-developed software, but the question is, what is the defensible intellectual property? Relatively few companies in this space have patentable technology, for instance.

The recent Oracle acquisition of Virtual Iron may pose one possible answer to this. One could say the same about the Cincinnati Bell (CBTS) acquisition of Virtual Blocks back in February. The rumor mill seems to indicate that in both cases, the valuations were rather low.

Don’t get me wrong. There are certainly companies out there who are carving out defensible spaces and which have exciting, interesting, unique ideas backed by serious management and technical chops. But as with all such gold rushes to majorly hyped tech trends, there are also a lot of me-toos. What intrigues me is the extent to which second-rate software companies are getting funding, but first-rate infrastructure services companies are not.

Amazon’s CloudWatch and other features

Catching up on some commentary…

Amazon recently introduced three new features: monitoring, load-balancing, and auto-scaling. (As usual, Werner Vogels has further explanation, and RightScale has a detailed examination.)

The monitoring service, called CloudWatch, provides utilization metrics for your running EC2 instances. This is a premium service on top of the regular EC2 fee; it costs 1.5 cents per instance-hour. The data is persisted for just two weeks, but is independent of running instances. If you need longer-term historical graphing, you’ll need to retrieve and archive the data yourself. There’s some simple data aggregation, but anyone who needs real correlation capabilities will want to feed this data back into their own monitoring tools.
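
A sketch of that retrieve-and-archive loop, written against today's boto3 SDK rather than anything that existed at announcement time (the instance ID and output path are placeholders):

    import datetime
    import json

    import boto3

    cw = boto3.client("cloudwatch")

    # Pull the last 24 hours of average CPU for one instance and append it to a
    # local archive, since CloudWatch itself retains the data only briefly.
    end = datetime.datetime.utcnow()
    start = end - datetime.timedelta(hours=24)

    stats = cw.get_metric_statistics(
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": "i-12345678"}],  # placeholder
        StartTime=start,
        EndTime=end,
        Period=300,              # five-minute buckets
        Statistics=["Average"],
    )

    with open("cpu-archive.jsonl", "a") as f:
        for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
            point["Timestamp"] = point["Timestamp"].isoformat()
            f.write(json.dumps(point) + "\n")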

CloudWatch is required to use the auto-scaling service, since that service uses the monitoring data to figure out when to launch or terminate instances. Basically, you define business rules for scaling that are based on the available CloudWatch metrics. Developers should take note that this is not magical auto-scaling. Adding or subtracting instances based on metrics isn’t rocket science. The tough part is usually writing an app that scales horizontally, plus automatically and seamlessly making whatever other configuration changes are necessary when the number of virtual servers in the capacity pool changes. (I field an awful lot of client calls from developers under the delusion that they can just write code any way they want, and simply putting their application on EC2 will remove all worries about scalability.)
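
To make “business rules for scaling” concrete, here’s a hypothetical threshold rule of the kind such a service evaluates; the thresholds, bounds, and decision labels are all invented for illustration:

    def scaling_decision(avg_cpu, instance_count, min_size=2, max_size=20):
        # Hypothetical rule: scale out on sustained high CPU, scale in on
        # sustained low CPU, and never move outside the configured bounds.
        if avg_cpu > 70.0 and instance_count < max_size:
            return "launch"
        if avg_cpu < 20.0 and instance_count > min_size:
            return "terminate"
        return "hold"

    print(scaling_decision(85.0, 4))   # -> launch
    print(scaling_decision(10.0, 4))   # -> terminate

The easy part is this rule; the hard part, as noted above, is an application that behaves correctly when the answer is “launch”.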

The new load-balancing service essentially serves both global and local functions — between availability zones, and between instances within a zone. It’s auto-scaling-aware, but its health checks are connection-based, rather than using CloudWatch metrics. However, it’s free to EC2 customers and does not require use of CloudWatch. Customers who have been using HAProxy are likely to find this useful. It won’t meet the requirements of those who need full-fledged application delivery controller (ADC) functionality and have been using Zeus or the like.
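
“Connection-based” means the balancer simply tries to open a TCP connection to each instance and calls it healthy if the connection succeeds. A minimal sketch of a check of that flavor (not Amazon's implementation, just the idea):

    import socket

    def tcp_health_check(host, port=80, timeout=2.0):
        # An instance is "healthy" if it accepts a TCP connection within the
        # timeout -- no metrics or application-level knowledge involved.
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False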

As always, Amazon’s new features eat into the differentiating capabilities of third-party tools (RightScale, Elastra, etc.), but the “most, but not all, of the way there” nature of their implementations means that third-party tools still add value to the baseline. That’s particularly true given that only the load-balancing feature is free.
