Blog Archives

Wading into the waters of cloud adoption

I’ve been pondering the dot write-ups that I need to do for Gartner’s upcoming Cloud Computing hype cycle, as well as my forthcoming Magic Quadrant on Web Hosting (which now includes a bunch of cloud-based providers), and contemplating this thought:

We are at the start of an adoption curve for cloud computing. Getting from here, to the realization of the grand vision, will be, for most organizations, a series of steps into the water, and not a grand leap.

Start-ups have it easy; by starting with a greenfield build, they can choose from the very beginning to embrace new technologies and methodologies. Established organizations can sometimes do this with new projects, but still have heavy constraints imposed by the legacy environment. And new projects, especially now, are massively dwarfed by the existing installed base of business IT infrastructure: existing policies (and regulations), processes, methodologies, employees, and, of course, systems and applications.

The weight of all that history creates, in many organizations, a “can’t do” attitude. Sometimes that attitude comes right from the top of the business or from the CIO, but I’ve also encountered many a CIO eager to embrace innovation, only to be stymied by the morass of his organization’s IT legacy. Part of the fascination of cloud services, of course, is that they allow business leaders to “go rogue” — to bypass the IT organization entirely in order to get what they want done ASAP, without much in the way of constraints and oversight. The counter-force is the move to develop private clouds that provide greater agility to internal IT.

Two client questions have been particularly prominent in the inquiries I’ve been taking on cloud (a super-hot topic of inquiry, as you’d expect): Is this cloud stuff real? and What can I do with the cloud right now? Companies are sticking their toes into the water, but few are jumping off the high dive. What interests me, though, is that many are engaging in active vendor discussions about taking the plunge, even if their actual expectation (or intent) is to just wade out a little. Everyone is afraid of sharks; it’s viewed as a high-risk activity.

In my research work, I have been, like the other analysts who do core cloud work here at Gartner, looking at a lot of big-picture stuff. But I’ve been focusing my written research very heavily on the practicalities of immediate-term adoption — answering the huge client demand for frameworks to use in formulating and executing on near-term cloud infrastructure plans, and in long-term strategic planning for their data centers. The interest is undoubtedly there. There’s just a gap between the solutions that people want to adopt, and the solutions that actually exist in the market. The market is evolving with tremendous rapidity, though, so not being able to find the solution you want today doesn’t mean that you won’t be able to get it next year.

Verizon and Carpathia launch hybrid offerings

Two public cloud announcements from hosting providers this week, with some interesting similarities…

Verizon

Verizon has launched its Computing as a Service (CaaS) offering. This is a virtual data center (VDC) offering, which means that it’s got a Web-based GUI within which you provision and manage your infrastructure. You contract for CaaS itself on a one-year basis, paying for that base access on a monthly basis. Within CaaS, you can provision “farms”, which are individual virtual data centers. Within a farm, you can provision servers (along with the usual storage, load-balancing, firewall, etc.). Farms and servers are on-demand, with a daily price.

Two things make the Verizon offering distinctive (at least temporarily). First, farms can contain both physical servers and virtual (VMware-based) servers, on an HP C-class blade platform; while hybridized offerings have become increasingly common, Verizon is one of the few to allow them to be managed out of a unified GUI. Second, Verizon offers managed services across the whole platform. By default, you get basic management (including patch management) for the OS and Verizon-provided app infrastructure. You can also upgrade to full managed service. It looks like, compared to similar providers, the Verizon offering is going to be extremely cost-competitive.

Carpathia Hosting

In yet another example of a smaller hoster “growing up” with serious cloud computing ambitions, Carpathia has released an offering it calls Cloud Orchestration. It’s a hybrid utility hosting model, combining its managed dedicated hosting service (AlwaysOn) with scaling on its virtual server offering, InstantOn.

Carpathia has stated that this is the first hybrid offering; I don’t agree. However, I do think that Carpathia has rolled out a notable number of features on its cloud platform (Citrix Xen-based). It’s made a foray into the cloud storage space, based on ParaScale. It also has auto-scaling, including auto-provisioning based on performance and availability SLA violations (the only vendor I know of that currently offers that feature). OS patch management is included, as are other basic managed hosting services. Check out Carpathia CTO Jon Greaves’s blog post on the value proposition for an indication of where the company’s thinking is headed.

Side thought: Carpathia is one of the few Xen-based cloud providers to use Citrix Xen, rather than open-source Xen. However, now that Citrix is offering XenServer for free, it seems likely that service providers will gradually drift that way. Live migration (XenMotion) will probably be the main thing that drives that switch.

Link round-up

Recent links of interest…

I’ve heard that no fewer than four memcached start-ups have been recently funded. GigaOM speculates interestingly on whether memcached is good or bad for MySQL. It seems to me that in the age of cloud and hyperscale, we’re willing to sacrifice ACID compliance in many of our transactions. RAM is cheap, and simplicity and speed are king. But I’m not sure that the widespread use of memcached in Web 2.0 applications, as a method of scaling a database, reflects the strengths of memcached so much as it reflects the weaknesses of the underlying databases.
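
For context, the usual Web 2.0 pattern being described is cache-aside: check memcached first, fall back to the database on a miss, and repopulate the cache on the way out. Here’s a minimal Python sketch of that pattern; the pymemcache and mysql-connector libraries, the table, and the 60-second TTL are my own illustrative assumptions, not anything from the pieces linked above.

    import json

    import mysql.connector                      # assumed driver; any MySQL client works
    from pymemcache.client.base import Client   # assumed memcached client library

    cache = Client(("127.0.0.1", 11211))
    db = mysql.connector.connect(user="app", password="secret", database="shop")

    def get_user(user_id):
        """Cache-aside read: try memcached, fall back to MySQL, repopulate the cache."""
        key = f"user:{user_id}"
        cached = cache.get(key)
        if cached is not None:
            return json.loads(cached)            # cache hit: no database round-trip

        cur = db.cursor(dictionary=True)
        cur.execute("SELECT id, name, email FROM users WHERE id = %s", (user_id,))
        row = cur.fetchone()
        cur.close()

        if row is not None:
            cache.set(key, json.dumps(row), expire=60)   # 60-second TTL is arbitrary
        return row

The speed comes from answering repeated reads out of RAM; the ACID trade-off alluded to above shows up the moment the cached copies and the underlying rows are allowed to drift apart.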

Column-oriented databases are picking up some buzz lately. Sybase has a new white paper out on high-performance analytics. MySQL is plugging Infobright, a column-oriented engine for MySQL (replacing MyISAM, InnoDB, etc., just like any other engine).
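
On the “just like any other engine” point: the storage engine is literally a clause on the table definition, so an analytics copy of a table can sit on a column store while the transactional copy stays on InnoDB. A hedged sketch using Python’s mysql-connector; the schema is made up, and ENGINE=BRIGHTHOUSE is the engine name Infobright used at the time, if memory serves.

    import mysql.connector   # assumed driver

    conn = mysql.connector.connect(user="analyst", password="secret", database="warehouse")
    cur = conn.cursor()

    # Row-oriented table for transactional work, on the usual engine.
    cur.execute("""
        CREATE TABLE orders_oltp (
            order_id    BIGINT,
            customer_id BIGINT,
            amount      DECIMAL(10,2),
            ordered_at  DATETIME
        ) ENGINE=InnoDB
    """)

    # Same schema pointed at a column-oriented engine for analytics;
    # the ENGINE clause is the only difference.
    cur.execute("""
        CREATE TABLE orders_olap (
            order_id    BIGINT,
            customer_id BIGINT,
            amount      DECIMAL(10,2),
            ordered_at  DATETIME
        ) ENGINE=BRIGHTHOUSE
    """)

    conn.commit()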

Brian Krebs, the security blogger for the Washington Post, has an excellent post called The Scrap Value of a Hacked PC. It’s an examination of the ways that hacked PCs can be put to criminal use, and it’s intended to be printed out and handed to end-users who don’t think that security is their personal responsibility.

My colleague Ray Valdes has some thoughts on intuition-based vs. evidence-based design. It’s a riff on the recent New York Times article, Data, Not Design, Is King in the Age of Google, and a departing designer’s blog post that provides a fascinating look at data-driven decision making in an environment where you can immediately test everything.

Google and Salesforce.com

While I’ve been out of the office, Google has made some significant announcements. My colleague Ray Valdes has been writing about Google Wave and its secret sauce. I highly encourage you to go read his blog.

Google and Salesforce.com continue to build on their partnership. In April, they unveiled Salesforce for Google Apps. Now, they’re introducing Force.com for Google App Engine.

The announcement, in a nutshell, is this: There are now public Salesforce APIs that can be downloaded, and will work on Google App Engine (GAE). Those APIs are a subset of the functionality available in Force.com’s regular Web Services APIs. Check out the User Guide for details.

Note that this is not a replacement for Force.com and its (proprietary) Apex programming language. Salesforce clearly articulates web services vs. Force.com in its developer guide. Rather, this should be thought of as easing the curve for developers who want to extend their Web applications for use with Salesforce data.
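
To give a feel for what “extending a Web application with Salesforce data” looks like from Python, here is a hedged sketch that issues a SOQL query. Note the assumptions: it uses the simple-salesforce library and placeholder credentials, not the GAE toolkit or the SOAP Web Services APIs the announcement actually covers.

    from simple_salesforce import Salesforce   # assumed library, not the GAE toolkit itself

    # Placeholder credentials; a real app would pull these from configuration.
    sf = Salesforce(
        username="dev@example.com",
        password="not-a-real-password",
        security_token="not-a-real-token",
    )

    # SOQL query against CRM data; the response is a dict with a "records" list.
    result = sf.query("SELECT Id, Name, Industry FROM Account LIMIT 5")
    for record in result["records"]:
        print(record["Id"], record["Name"], record.get("Industry"))

The shape of the interaction is the point: the application code runs wherever it runs (GAE, in the announcement’s case), and Salesforce data is reached over authenticated API calls rather than a local database.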

A question that lingers in my mind: Normally, on Force.com, a Developer Edition account means that you can’t affect your organization’s live data. If a similar restriction exists on the GAE version of the APIs, it’s not mentioned in the documentation. I wonder whether, by going through GAE, you could build very lightweight apps against live data with just a Developer Edition Salesforce account. If so, that would certainly open up the realm of developers who might try building something on the platform.

My colleague Eric Knipp has also blogged about the announcement. I’d encourage you to read his analysis.

What’s the worth of six guys in a garage?

The cloud industry is young. Amazon’s EC2 service dates back just to October 2007, and just about everything related to public cloud infrastructure post-dates that point. Your typical cloud start-up is at most 18 months old, and in most cases, less than a year old. It has a handful of developers, some interesting tech, plenty of big dreams, and the need for capital.

So what’s that worth? Do you buy their software, or do you hire six guys, put them in nice offices, and give them a couple of months to try to duplicate that functionality? Do you just go acquire the company on the cheap, giving six guys a reasonably nice payday for the year of their life spent developing the tech, and getting six smart employees to continue developing this stuff for you? How important is time to market? And if you’re an investor, what type of valuation do you put on that?

Infrastructure and systems management is fairly well understood. Although the cloud is bringing some new ideas and approaches, people need most of the same stuff on the cloud that they’ve traditionally needed in the physical world. That means the near-term feature roadmaps are relatively clear-cut, and it’s a question of how many developers you can throw at cranking out features as quickly as possible. Some approaches have greater value than others, and there’s inherent value in well-developed software, but the question is, what is the defensible intellectual property? Relatively few companies in this space have patentable technology, for instance.

The recent Oracle acquisition of Virtual Iron may offer one possible answer. One could say the same about the Cincinnati Bell (CBTS) acquisition of Virtual Blocks back in February. The rumor mill seems to indicate that in both cases, the valuations were rather low.

Don’t get me wrong. There are certainly companies out there that are carving out defensible spaces and that have exciting, interesting, unique ideas backed by serious management and technical chops. But as with every gold rush around a heavily hyped tech trend, there are also a lot of me-toos. What intrigues me is the extent to which second-rate software companies are getting funding, but first-rate infrastructure services companies are not.

Amazon’s CloudWatch and other features

Catching up on some commentary…

Amazon recently introduced three new features: monitoring, load-balancing, and auto-scaling. (As usual, Werner Vogels has further explanation, and RightScale has a detailed examination.)

The monitoring service, called CloudWatch, provides utilization metrics for your running EC2 instances. This is a premium service on top of the regular EC2 fee; it costs 1.5 cents per instance-hour. The data is persisted for just two weeks, but is independent of running instances. If you need longer-term historical graphing, you’ll need to retrieve and archive the data yourself. There’s some simple data aggregation, but anyone who needs real correlation capabilities will want to feed this data back into their own monitoring tools.
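
To make “retrieve and archive the data yourself” concrete, here is a minimal sketch of pulling a day of CPU utilization for one instance so it can be stashed elsewhere. It uses the present-day boto3 SDK rather than the query API of the time, and the instance ID, region, and five-minute period are placeholders.

    from datetime import datetime, timedelta, timezone

    import boto3   # assumed; the 2009-era interface was a raw query API and command-line tools

    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")
    now = datetime.now(timezone.utc)

    # Average CPU utilization for one instance over the last 24 hours, in
    # five-minute buckets; archive these points yourself if you need history
    # beyond CloudWatch's retention window.
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder
        StartTime=now - timedelta(hours=24),
        EndTime=now,
        Period=300,
        Statistics=["Average"],
    )
    for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
        print(point["Timestamp"].isoformat(), round(point["Average"], 1))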

CloudWatch is required to use the auto-scaling service, since that service uses the monitoring data to figure out when to launch or terminate instances. Basically, you define business rules for scaling that are based on the available CloudWatch metrics. Developers should take note that this is not magical auto-scaling. Adding or subtracting instances based on metrics isn’t rocket science. The tough part is usually writing an app that scales horizontally, plus automatically and seamlessly making whatever other configuration changes are necessary when the number of virtual servers in its capacity pool changes. (I field an awful lot of client calls from developers under the delusion that they can write code any way they want and that simply putting their application on EC2 will remove all worries about scalability.)
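
For illustration, a scaling “business rule” boils down to a scaling policy plus a CloudWatch alarm that triggers it. A hedged boto3 sketch, again using today’s SDK rather than the original tooling; the group name, threshold, and cooldown are placeholders.

    import boto3   # assumed; not the 2009-era API

    autoscaling = boto3.client("autoscaling", region_name="us-east-1")
    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

    GROUP = "web-tier"   # placeholder auto-scaling group name

    # Policy: add one instance when triggered, then wait five minutes before acting again.
    policy = autoscaling.put_scaling_policy(
        AutoScalingGroupName=GROUP,
        PolicyName="scale-out-on-cpu",
        AdjustmentType="ChangeInCapacity",
        ScalingAdjustment=1,
        Cooldown=300,
    )

    # Rule: fire the policy when average CPU across the group stays above 70%
    # for two consecutive five-minute periods.
    cloudwatch.put_metric_alarm(
        AlarmName="web-tier-cpu-high",
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "AutoScalingGroupName", "Value": GROUP}],
        Statistic="Average",
        Period=300,
        EvaluationPeriods=2,
        Threshold=70.0,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=[policy["PolicyARN"]],
    )

None of this addresses the hard part above: the application still has to tolerate instances appearing and disappearing underneath it.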

The new load-balancing service essentially serves both global and local functions — between availability zones, and between instances within a zone. It’s auto-scaling-aware, but its health checks are connection-based, rather than using CloudWatch metrics. However, it’s free to EC2 customers and does not require use of CloudWatch. Customers who have been using HAproxy are likely to find this useful. It won’t touch the requirements of those who need full-fledged application delivery controller (ADC) functionality and have been using Zeus or the like.
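
On the “connection-based health checks” point: the check is simply whether a TCP connection or HTTP request to each instance succeeds, not whether its CloudWatch metrics look healthy. A boto3 sketch against the classic ELB API, the closest present-day equivalent of the offering described here; the load balancer name and check target are placeholders.

    import boto3   # assumed; classic ELB API

    elb = boto3.client("elb", region_name="us-east-1")

    # Probe each instance's /health page every 30 seconds; two consecutive
    # failures mark it unhealthy, three successes bring it back. No CloudWatch
    # metrics are involved in the decision.
    elb.configure_health_check(
        LoadBalancerName="web-tier-lb",   # placeholder name
        HealthCheck={
            "Target": "HTTP:80/health",
            "Interval": 30,
            "Timeout": 5,
            "UnhealthyThreshold": 2,
            "HealthyThreshold": 3,
        },
    )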

As always, Amazon’s new features eat into the differentiating capabilities of third-party tools (RightScale, Elastra, etc.), but the “most, but not all, of the way there” nature of Amazon’s implementations means that third-party tools still add value on top of the baseline. That’s particularly true given that only the load-balancing feature is free.

VMware takes stake in Terremark

I have been crazily, insanely busy, and my frequency of blog posting has suffered for it. On the plus side, I’ve been busy because a huge number of people — users, vendors, investors — want to talk about cloud.

I’ve seen enough questions about VMware investing $20 million in Terremark that I figured I’d write a quick take, though.

Terremark is a close VMware partner (and their service provider of the year for 2008). Data Return (acquired by Terremark in 2007) was the first to have a significant VMware-based utility hosting offering, dating all the way back to 2005. Terremark has since also gotten good traction with its VMware-based Enterprise Cloud offering, which is a virtual data center service. However, Terremark is not just a hosting/cloud provider; it also does carrier-neutral colocation. It has been sinking capital into data center builds, so an external infusion, particularly one directed specifically at funding the cloud-related engineering efforts, is probably welcome.

Terremark has been the leading-edge service provider for VMware-based on-demand infrastructure. It is to VMware’s advantage to get service providers to use its cutting-edge stuff, particularly the upcoming vCloud, as soon as possible, so giving Terremark money to accelerate its cloud plans is a perfectly good tactical move. I don’t think it’s necessary to read any big strategic message into this investment, although undoubtedly it’s interesting to contemplate.

The cloud computing forecast

John Treadway of Cloud Bzz asked my colleague Ben Pring, at our Outsourcing Summit, about how we derived our cloud forecast. Ben’s answer is apparently causing a bit of concern. I figured it might be useful for me to respond publicly, since I’m one of the authors of the forecast.

The full forecast document (clients only, sorry) contains a lot of different segments, which in turn make up the full market that we’ve termed “cloud computing”. We’ve forecasted each segment, along with subsegments within them. Those segments, and their subsegments, are Business Process Services (cloud-based advertising, e-commerce, HR, payments, and other); Applications (no subcategories; this is “cloud SaaS”); Application Infrastructure (platform and integration); and System Infrastructure (compute, storage, and backup).

Obviously, one can argue whether or not it’s valid to include advertising revenue, but a key point that should not be missed is that in the trend towards the consumerization of IT, it is the advertiser that often implicitly pays for the consumer’s use of an IT service, rather than the consumer himself. Advertising revenue is a significant component of the overall market, and part of the “cloud” phenomenon even if you don’t necessarily think of it as “computing”.

Because we offer highly granular breakouts within the forecast, those who are looking for specific details or who wish to classify the market in a particular way should be able to do so. If you want to define cloud computing as just typical notions of PaaS plus IaaS, for instance, you can probably simply take our platform, compute, and storage line-items and add them together.

Is it confusing to see the giant number with advertising included? It can be. I often start off descriptions of our forecast with, “This is a huge number, but you should note that a substantial percentage of these revenues are derived from online advertising,” and then drill down into a forecast for a particular segment or subsegment of audience interest.

Giant numbers can be splashily exciting on conference presentations, but pretty much anyone doing anything practical with the forecast (like trying to figure out their market opportunity) looks at a segment or even a subsegment.

The perils of defaults

A Fortune 1000 technology vendor installed a new IP phone system last year. There was one problem: By IT department policy, that company does not change any defaults associated with hardware or software purchased from a vendor. In this case, the IP phones defaulted to no ring tone. So the phone does not ring audibly when it gets a call. You can imagine just how useful that is. Stunningly, this remains the case months after the initial installation — the company would rather, say, miss customer calls, than change the Holy Defaults.

A software vendor was having an interesting difficulty with a large customer. The vendor’s configuration file, as shipped with the software, has defaults set up for single-server operation. If you want to run multi-server for high availability or load distribution, you need to change some of the defaults in the configuration file. The vendor encountered a customer with the same kind of “we do not change any defaults” policy. Unsurprisingly, that customer’s multi-server deployment was breaking. The vendor’s support explained what was wrong, explained how to fix it, and was confounded by the policy. This is one of the things a custom distribution from the vendor can be used for, of course, but it’s a head-slapping moment and a grotesque waste of everyone’s time.

Now I’m seeing cloud configurations confounding people who have these kinds of policies. What is “default” when you’re picking from drop-down menus? What do you do when the default selection is something other than what you actually need? And the big one: Will running software on cloud infrastructure necessitate violating virgin defaults?

As an analyst, I’m used to delivering carefully nuanced advice based on individual company situations, policies, and needs. But here’s one no-exceptions opinion: “We never ever change vendor defaults” is a universally stupid policy. It is particularly staggeringly dumb in the cloud world, where generally, if you can pick a configuration, it is a supported configuration. And bluntly, in the non-cloud world, configurable parameters are also just that — things that the vendor intends for you to be able to change. There are obviously ways to screw up your configuration, but those parameters are changeable for a reason. Moreover, if you are just using cloud infrastructure but regular software, you should expect that you may need to tune configuration parameters in order to get optimal performance on a shared virtualized environment that your users are accessing remotely (and you may want to change the security parameters, too).

Vendors: Be aware that some companies, even really big successful companies, sometimes have nonsensical, highly rigid policies regarding defaults. Consider the tradeoffs between defaults as a minimalistic set, and defaults as a common-configuration set. Consider offering multiple default “profiles”. Packaging up your software specifically for cloud deployment isn’t a bad idea, either (i.e., “virtual appliances”).
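
As a sketch of what “multiple default profiles” might look like in practice: ship named baseline configurations and layer the customer’s site-specific settings on top, so the customer chooses and overrides a profile rather than touching individual sacred defaults. Everything below is hypothetical, not any particular vendor’s scheme.

    # Hypothetical default profiles a vendor might ship with its software.
    PROFILES = {
        "single-server": {"cluster_mode": False, "peer_hosts": [], "cache_mb": 256},
        "multi-server":  {"cluster_mode": True,  "peer_hosts": [], "cache_mb": 1024},
    }

    def effective_config(profile_name, overrides):
        """Start from a shipped profile and merge site-specific settings on top."""
        config = dict(PROFILES[profile_name])   # copy, so the shipped defaults stay pristine
        config.update(overrides)
        return config

    # The customer picks the multi-server profile and supplies only what is site-specific.
    print(effective_config("multi-server", {"peer_hosts": ["app01", "app02", "app03"]}))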

IT management: Your staff really isn’t so stupid that they’re not able to change any defaults without incurring catastrophic risks. If they are, it’s time for some different engineers, not needlessly ironclad policies.

If you worry about hardware, it’s not cloud

If, when you need more RAM, you have to call your service provider, who then has to order the RAM, wait until it arrives, and install it in a physical server before you actually get more memory, and who then bills you on a one-off basis for buying and installing that RAM, you’re not doing cloud computing. If you have to negotiate the price of that RAM each time they buy some, you are really, really not doing cloud computing.

I talked to a client yesterday who is in exactly this situation, with a small vendor who calls themselves a cloud computing provider. (I am not going to name names on my blog, in this case.)

Cloud infrastructure services should not be full of one-offs. (The example I cited is merely the worst of the service provider’s offenses against cloud concepts.) It’s reasonable to hybridize cloud solutions with non-cloud solutions, but for basic things — compute cores, RAM, storage, bandwidth — if it’s not on-demand, seamless, and nigh-instant, it’s not cloud, at least not in any reasonable definition of public cloud computing. (“Private cloud”, in the sense of in-house, virtualized data centers, adopts some but not all traits of the public cloud to varying degrees, and therefore gets cut more slack.)

Cloud infrastructure should be a fabric, not individual VMs that are tied to specific physical servers.
