Blog Archives

Yet more reasons to work at Gartner with me

The TL;DR: My team at Gartner has an open position for someone who has a strong understanding of cloud IaaS — someone who has experience architecting for the cloud, or who has worked on the vendor side of the market (product management, solutions architecture, engineering, consulting, etc.), or is an analyst at another firm covering a related topic. If you’re interested, please email me or contact me on LinkedIn.

The details:

A few years ago, I wrote a blog post on “Five reasons you should work at Gartner with me”, detailing the benefits of the analyst role. I followed it up last year with “Five more reasons to work at Gartner with me”, targeted at women. Both times we were hiring. And we’re continuing to hire right now.

We’re steadily expanding our coverage of cloud computing, which means that we have multiple openings. On my team, we’re looking for an analyst who can cover IaaS; a good understanding of cloud security, PaaS, and/or DevOps would be a plus. (The official posting is for a cloud security analyst, but we’re flexible on the skill set and the job itself, so don’t read too much into the job description.) This role can be entirely work-from-home, but you must work a US time zone schedule, which means candidates should be based in North America or South America.

Previously, I noted great reasons to work at Gartner:

  1. It is an unbeatably interesting job for people who thrive on input.
  2. You get to help people in bite-sized chunks.
  3. You get to work with great colleagues.
  4. Your work is self-directed.
  5. We don’t do any pay-for-play.

In my follow-up post for women, I added the following reasons (which benefit men, too):

  1. We have a lot of women in very senior, very visible roles, including in management.
  2. The traits that might get a woman labeled “too aggressive” are valued in analysts.
  3. You are shielded from most misogyny in the tech world.
  4. You will use both technical and non-technical skills, and have a real impact.
  5. This is a flexible-hours, work-from-anywhere job.

I encourage you to go read those posts. Here, I’ll add a few more things about our culture. (If you’re working at another analyst firm or have considered another analyst firm in the past, you might find the below points to be of particular interest.)

1. People love their jobs. While some analysts decide after a year or two that this isn’t the life for them, the ones who stay pretty much stay forever. Almost everyone is very engaged in their job, works hard, and tries to do the right thing. Although we’re a work-from-home culture, we nevertheless do a good job of establishing a strong corporate culture in which people collaborate remotely.

2. We have no hierarchy. We are an exceptionally flat organization. Every analyst has a team manager, but teams are largely HR reporting structures — a support system, by and large. To get work done, we form ad-hoc and informal groups of collaborators. We have internal research communities of interest, an open peer review process for all research, and freewheeling discussions across organizational boundaries. That means more junior analysts are free to take on as much as they want to, and their voices are no less important than anyone else’s.

3. We have no hard-and-fast coverage boundaries. As long as you are meeting the needs of our clients, your coverage can shift as you see fit. Indeed, to be successful, your coverage should naturally evolve over time, as clients change their technology wants and needs. We have no “book of business” or “programs” or the like, which at other analyst firms sometimes encourage analysts to fiercely defend their turf; we actively discourage territoriality. Collaboration across topic boundaries is encouraged. We do have some formal vehicles for coverage — agendas and special reports among them — but these are open to anyone, regardless of the specific team they work on. (We do have product boundaries, but analysts can collaborate across these boundaries.)

4. We have good support systems. There are teams that manage calendaring and client contact, so analysts don’t have to deal with scheduling headaches (we just indicate when we’re available). Events run smoothly and attention is paid to making sure that analysts don’t have to worry about coordination issues. There’s admin and project manager support for things that generate a lot of administrative overhead or require coordination. Management, in the last few years, has paid active attention to things that help make analysts more productive.

5. Analysts do not have any sales responsibility. Analysts do not carry a “book of business” or any other form of direct tie to revenue. We don’t do any pay-for-play. Importantly, that means that you are never beholden to a vendor, nor do you have an incentive to tell a client anything less than the best advice you have to give. The sales team understands the rules (there are always a few bad apples, but Gartner tries very hard to ensure that analysts are not influenced by sales). Performance evaluations are based on metrics such as the popularity of our documents, and customer satisfaction scores across the different dimensions of things we do (inquiries, conference presentations, documents, and so on).

If this sounds like something that’s of interest to you, please get in touch!

HP buys Eucalyptus

In an interesting move that seems to be predominantly an acquihire, HP has bought Eucalyptus for an undisclosed sum, though speculation is that the deal’s under $100m, less than a 2x multiple on what Eucalyptus has raised in funding (although that would still be a huge multiple on revenue).

Out of this, HP gets Eucalyptus’s CEO, Marten Mickos, who will be installed as the head of HP’s cloud business, reporting to Meg Whitman. It also gets Eucalyptus’s people, including its engineering staff, whom HP believes to have real expertise in what it termed (in a discussion with me and a number of other Gartner colleagues) the “Amazon architectural pattern”. Finally, it gets Eucalyptus’s software, although this seems to have been very secondary to the people and their know-how — unsurprising given HP’s commitment to OpenStack at the core of HP Helion.

Eucalyptus will apparently be continuing onward within HP. Mickos had indicated something of a change in direction previously, when he explained in a blog post why he would be keynoting an OpenStack conference. It seems like Eucalyptus had been headed in the direction of being an open-source cloud management platform (CMP) that provides an AWS API-compatible framework over a choice of underlying components, including OpenStack component options. In this context, it makes sense to have a standalone Eucalyptus product / add-on, providing an AWS-compatible private cloud software option to customers for whom this is important — and it sidesteps the OpenStack community debate on whether or not AWS compatibility should be important within OpenStack itself.

HP did not answer my direct question of whether Eucalyptus’s agreement with Amazon includes a change-of-control clause, but they did say that partnerships require ongoing collaboration between the two parties. I interpreted that to mean that AWS has some latitude to determine what they do here. The existing partnership has been an API licensing deal — specifically, AWS has provided Eucalyptus with engineering communications around its API specifications, without any technology transfer or documentation. The partnership has been important to ensuring that Eucalyptus replicates AWS behavior as closely as possible, so the question of whether AWS continues to partner going forward is likely important to the fidelity of future Eucalyptus work.

It’s important to note that Eucalyptus is by no means a full AWS clone. It offers the EC2, S3, and IAM APIs, including relatively full support for EC2 features such as EBS. However, it does not support the VPC networking features. And of course, it’s missing the huge array of value-added capabilities that surround the basic compute and storage resources. It’s not as if HP or anyone else is going to take Eucalyptus and build a service that is seriously competitive with AWS. Eucalyptus had mostly found its niche serving SMBs who wanted to run a CMP that would support the most common AWS compute capabilities, either in a hybrid cloud mode (i.e., for organizations still doing substantial things in AWS) or as an on-prem alternative to public cloud IaaS.
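To make the API-compatibility point concrete, here is a minimal sketch of pointing boto (the standard Python AWS library of that era) at a Eucalyptus front end instead of AWS. The host, port, path, and credentials are placeholders rather than a real installation, and the classic Eucalyptus endpoint conventions are assumed:

```python
# A minimal sketch (not from HP or Eucalyptus documentation): because
# Eucalyptus implements the EC2 API, boto can talk to it by overriding
# the endpoint. Host, port, path, and keys are illustrative placeholders.
from boto.ec2.connection import EC2Connection
from boto.ec2.regioninfo import RegionInfo

region = RegionInfo(name="eucalyptus", endpoint="euca-clc.example.com")
conn = EC2Connection(
    aws_access_key_id="YOUR-ACCESS-KEY",
    aws_secret_access_key="YOUR-SECRET-KEY",
    region=region,
    is_secure=False,              # many Eucalyptus front ends use plain HTTP
    port=8773,                    # the classic Eucalyptus API port
    path="/services/Eucalyptus",  # the classic Eucalyptus API path
)

# The same calls an application would make against EC2 work here:
for reservation in conn.get_all_instances():
    for instance in reservation.instances:
        print("%s is %s" % (instance.id, instance.state))
```

Notably, the same code with the region, port, and path arguments removed would run against EC2 itself, which is precisely the portability appeal of the niche described above.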

Probably important to the future success of HP Helion and OpenStack, though, is that Mickos’s management tenure at Eucalyptus included turning the product from its roots as a research project into much slicker commercial software that was relatively easy to install and run, without requiring professional services for implementation. He also refocused its sales efforts on SMBs with a genuine desire for cloud agility, rather than chasing IT operations organizations looking for a better virtualization mousetrap (another example of bimodal IT thinking). Eucalyptus met with limited commercial success — but thus far, CloudStack and OpenStack haven’t fared much better. This has been, at least in part, a broader issue with the private cloud market and the scope of capabilities of the open-source products.

Of the many leaders that HP could have chosen for its cloud division, the choice of Mickos is an interesting one; he’s best known for being CEO of MySQL and eventually selling it to Sun, and thus he makes most sense as a leader in the context of open-source-oriented thinking. I’m not inclined to call the HP-Eucalyptus acquisition a game-changer, but I do think it’s an interesting indicator of HP’s thinking — although it perhaps further muddies waters that are already pretty muddy. The cloud strategies of IBM, Microsoft, Oracle, and VMware, for instance, are all very clear to me. HP hasn’t reached that level of crispness, even if they insist that they’ve got a plan and are executing on it.

Edit: Marten Mickos contacted me by email to clarify the Amazon/Eucalyptus partnership, and to remind me that MySQL was sold to Sun, not Oracle. I’ve made the corrections.

Bimodal IT, VMworld, and the future of VMware

In Gartner’s 2014 research for CIOs, we’ve increasingly been talking about “bimodal IT”. Bimodal IT is the idea that organizations need two speeds of IT — call them traditional IT and agile IT (Gartner just calls them mode-1 and mode-2). Traditional IT is focused on “doing IT right”, with a strong emphasis on efficiency and safety, approval-based governance and price-for-performance. Agile IT is focused on “doing IT fast”, supporting prototyping and iterative development, rapid delivery, continuous and process-based governance, and value to the business (being business-centric and close to the customer).

We’ve found that organizations are most successful when they have two modes of IT — with different people, processes, and tools supporting each. You can make traditional IT more agile — but you cannot simply add a little agility to it to get full-on agile IT. Rather, that requires fundamental transformation. At some point in time, the agile IT mode becomes strategic and begins to modernize and transform the rest of IT, but it’s actually good to allow the agile-mode team to discover transformative new approaches without being burdened by the existing legacy.

Furthermore, agile IT doesn’t just require new technologies — it requires a different set of skills from IT professionals. The IT-centric individual who is a cautious guardian and enjoys meticulously following well-defined processes is unlikely to turn into a business-centric individual who is a risk-taking innovator and enjoys improvising in an uncertain environment.

That brings us to VMware (and many of the other traditional IT vendors who are trying to figure out what to do in an increasingly cloud-y world). Today’s keynote messages at VMworld have been heavily focused on cost reduction and offering more agility while maintaining safety (security, availability, reliability) and control. This is clearly a message that is targeted at traditional IT, and it’s really a story of incremental agility, using the software-defined data center to do IT better. There’s a heavy overtone of reassurance that the VMware faithful can continue to do business as usual, partaking of some cool new technologies in conjunction with the VMware infrastructure that they know and love — and control.

But a huge majority of the new agile-mode IT is cloud-native. It’s got different champions with different skills (especially development skills), and a different approach to development and operations that results in different processes and tooling. “Agility” doesn’t just mean “faster provisioning” (although to judge from the VMware keynote and customer speakers, IT operations continues to believe this is the case). VMware needs to find ways to be relevant to the agile-IT mode, rather than just helping traditional-IT VMware admins improve operations efficiency in a desperate grasp to retain control. (Unfortunately for VMware, the developer-relevant portions of the company were spun off into Pivotal.)

Bimodal IT also implies that hybrid IT is really just the peaceful coexistence of non-cloud and cloud application components — not one set of management tools that sits on top of all environments. VMware admins are obviously attracted to the ability to extend their existing tools and processes to the cloud (whether service provider IaaS or an internal private cloud), but that’s not necessarily the right thing to do. You might run traditional IT in both non-cloud and cloud modes and want hybrid tooling across the two — but you should not try to span the traditional-IT and agile-IT modes that way (regardless of whether each is non-cloud or cloud); instead, use best-of-breed tooling for each mode.

If you’re considering the future of any IT vendor today, you have to ask yourself: What is their strategy to address each mode of IT? The mere recognition of the importance of applications and the business is insufficient.

(Gartner clients only: See Taming the Digital Dragon: The 2014 CIO Agenda and Bimodal IT: How to Be Digitally Agile Without Making a Mess for a lot more information about bimodal IT. See our 2013 and 2014 Professional Effective Planning Guides, Coming to Terms With the Nexus of Forces and Reshaping IT for the Digital Business for a guide to what IT professionals should do to advance their careers and organizations given these trends.)

AWS 2Q14 and why the sky is not falling

Amazon posted weaker 2Q 2014 results for Amazon Web Services, leading some to speculate about competitive pressures despite continued enormous growth in usage.

Investors are asking, probably reasonably, what’s going on here. Is the overall market for cloud computing weakening? Are competitors taking more market share? Or is this largely the result of the price cuts that went into effect at the beginning of 2Q? And if it’s the price cuts, are these price cuts temporary, or are they part of a larger trend that eventually drives the price to zero?

The TL;DR version: No, cloud growth remains tremendous. No, AWS’s market share likely continues to grow despite the fact that they’re already the dominant player. Yes, this is a result of the price cuts. No, the price cuts are permanent, and yes, they will likely eventually drive prices down to near-cost, but this is nevertheless not a commodity market.

The deep dive follows. Note that when I use the term “market share”, I do not mean revenue; I mean revenue-generating capacity-in-use, which controls for the fact that prices vary significantly across providers.

What’s with these price drops?

I have said repeatedly in the past that the dynamics of the cloud IaaS market (which is increasingly also convergent with the high-control PaaS market) are fundamentally those of a software market. It is a market in which customers are ultimately buying IT operations management software as a service, and as they go up-stack, middleware as a service. Yes, they are also getting the underlying compute, storage, and networking resources, but the major value is actually in all the software that automates this stuff and delivers it as an easy-to-consume, less-effort-to-manage solution. The automation reduces a customer’s labor costs, which is where the customer sees cost savings in services over doing it themselves.

You’ll note that in other SaaS markets, the infrastructure is effectively delivered at cost. That’s extremely likely to be true of the IaaS market over time, as well. Furthermore, like other software markets, the largest vendors with the greatest breadth and depth of capabilities are the winners, and they’re made extra-sticky by their ecosystems. (Think Oracle.) Over time, the IaaS providers will make most of their margin off higher-level services rather than the raw resources; think about the difference between what a customer pays for Redshift data warehousing, versus raw EC2 compute plus EBS storage.

The magnitude of the price drops is scaring away many rival providers. So is the pace of innovation. Few competitors have the deep pockets or willpower to pour money into engineering and data centers in order to compete at this level. Most are now scurrying to stake out a niche of the market where they believe they can differentiate.

Lowering the price expands the addressable market, as well. The cheaper it is to do it in the cloud, the more difficult it is to make a business case to do an on-premises solution, especially a private cloud. Many of Gartner’s clients tell us that even if they have a financially viable case to build a private cloud right now, their costs will be essentially static over the amortization period of 3 to 5 years — versus their expectation that the major IaaS providers will drop prices 30% every year. Time-to-value for private cloud is generally 18 to 24 months, and it typically delivers a much more limited set of features, especially where developer enablement is concerned. It’s tough for internal IT to compete, especially when the major IT vendors aren’t delivering software that allows IT to create equivalent capabilities at the speed of an AWS, Microsoft, or Google.
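To illustrate the arithmetic in that argument, here is a small sketch using deliberately hypothetical numbers (not Gartner data or actual provider pricing): a private cloud whose costs stay flat over its amortization period, versus a public cloud provider that starts at price parity and cuts prices 30% per year.

```python
# Hypothetical illustration of the amortization argument above.
# All numbers are invented for the example, not actual pricing data.
private_annual_cost = 100.0  # static over the amortization period
public_annual_cost = 100.0   # assume price parity in year 1
annual_price_cut = 0.30      # the ~30%/year expectation cited above

for year in range(1, 6):     # a 5-year amortization period
    print("Year %d: private %6.1f vs. public %6.1f" %
          (year, private_annual_cost, public_annual_cost))
    public_annual_cost *= 1 - annual_price_cut
```

Under these assumptions, the public cloud price in year 5 is roughly a quarter of the year-1 price while the private cloud cost stays flat, and that is before the 18-to-24-month time-to-value is counted against the private option.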

The size of the price cut certainly had a negative impact on AWS’s revenues this past quarter. The price cut is likely larger than AWS would have done without pressure from Google. At the same time, the slight dip in revenue, versus the magnitude of the cuts, makes it clear that AWS is still growing at a staggeringly fast pace.

The Impact of Microsoft Azure

Microsoft has certainly been on an aggressive tear in the market over the last year, especially since September 2013, and Azure has been growing impressively. Microsoft has been extremely generous with both discounts and with free credits, which has helped fuel the growth of Azure usage. But interestingly, Microsoft’s proactive evangelism to their customers has helped expand the market, particularly because Microsoft has been good at convincing mainstream adopters that they’re actually late adopters, spurring companies to action. Microsoft’s comprehensive hybrid story, which spans applications and platforms as well as infrastructure, is highly attractive to many companies, drawing them towards the cloud in general.

Some customers who have been pitched Azure will actually go to Azure. But for many, this is a trigger to look at several providers. It’s not unusual for customers to then choose AWS over Azure as the primary provider, due to AWS’s better feature set, greater maturity, and better ecosystem; those customers will probably also do things in Azure, although they may be earlier-stage and smaller projects. Of course, many existing AWS customers are also adding Azure as a secondary provider. But it may well be that Microsoft’s serious entry into this market has actually helped AWS, in terms of absolute usage gains, more than it has hurt it — for the time being, of course. Over time, Microsoft’s share gains will come at AWS’s expense.

The Impact of Google

Google has been delivering technology at an impressive pace, but there is an enormous gulf between its technology capabilities and its go-to-market prowess. They also have a perception problem to overcome: people aren’t sure whether Google is serious about this market, and are therefore reluctant to commit to the platform.

While everyone is super-interested in what Google is doing, at the moment they do not seem to be winning significant customers from AWS. They’re doing reasonably well in batch-computing scenarios (more as a rival to AWS spot instances), and in scenarios where all of Google Cloud Platform, including App Engine, is of interest. But they do not seem to have really gotten market momentum, yet.

However, in this market, as in many others, a competitor does not need to beat the market leader in order to hurt it; it merely needs to change customer expectations of what the price should be — and AWS has allowed Google to set the price. As long as Google seems like a credible threat, AWS will likely continue to be competitive on price, at least for those things that Google also offers. Microsoft has already publicly pledged to be price-competitive with AWS, so that pretty much guarantees that the three providers will move in lockstep on pricing.

The Impact of Smaller Providers

Digital Ocean, in particular, is making a lot of noise in the market. They’ve clearly established themselves as a VPS provider to watch. (VPS: virtual private server, a form of mass-market hosting.) However, my expectation is that Digital Ocean’s growth comes primarily at the expense of providers like Rackspace and Media Temple (owned by GoDaddy), rather than at the expense of AWS et al. A Digital Ocean droplet (VM) is half the price of an AWS t2.micro (the cheapest thing you can get from AWS), and they’ve been generous with free-service coupons.
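As a back-of-the-envelope check on that price comparison, using approximate mid-2014 list prices (these figures are my assumptions for illustration, and prices change frequently):

```python
# Rough check of the "half the price" claim, using approximate mid-2014
# list prices; these figures are assumptions for illustration only.
t2_micro_hourly = 0.013   # approx. AWS t2.micro Linux on-demand, USD/hour
hours_per_month = 730     # average hours in a month
aws_monthly = t2_micro_hourly * hours_per_month   # about $9.49
droplet_monthly = 5.00    # smallest Digital Ocean droplet, USD/month

print("t2.micro ~$%.2f/mo vs. droplet $%.2f/mo (ratio %.2f)" %
      (aws_monthly, droplet_monthly, droplet_monthly / aws_monthly))
# The ratio comes out around 0.53, i.e., roughly half.
```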

VPS providers have friendly control panels, services that are simple and thus easy to use, and super-low prices; they tend to have a huge customer base, but each customer is, on average, tiny (usually just a single VM). These days, VPS is looking more and more like cloud IaaS, but the orientation of the providers tends to be quite different, both from a technology and a go-to-market perspective. There’s increasing fluidity between the VPS and cloud IaaS markets, but the impact on AWS is almost certainly minimal. (I imagine that some not-insignificant percentage of AWS’s free tier would be on VPS instead if not for the free service, though.)

There’s plenty of other noise out there. IBM is aggressively pitching SoftLayer to its customer base, but the deals I’ve seen have generally been bare metal on long-term contracts, usually as a replatform of an IBM data center outsourcing deal. Rackspace’s return to its managed hosting roots is revitalizing the company. CenturyLink continues to have persuasive sales. VMware customers remain curious about vCHS. And so on. But none of these providers are growing the way that AWS and Microsoft Azure are, and those two are gaining overall market share at the expense of pretty much everyone else.

The sky is not falling

With Microsoft and Google apparently now serious about this market, AWS finally has credible competitors. Having aggressive, innovative rivals is helping to push this market forward even faster, to the detriment of most other providers in the market, as well as the IT vendors selling do-it-yourself on-premises private cloud. AWS is likely to continue to dominate this market for years, but the market direction is no longer as thoroughly in its control.

The 2014 Cloud IaaS Magic Quadrant

Gartner’s Magic Quadrant for Cloud Infrastructure as a Service, 2014, has just been released (see the client-only interactive version, or the free reprint). If you’re a Gartner client, you can also view the related charts, which summarize the offerings, features, and data center locations in a convenient table format. (The charts are unfortunately less readable than they could be, as our publication system doesn’t allow comments in Excel spreadsheets. Sorry.)

We’re continuing to update this Magic Quadrant every nine months, since the market is moving so quickly. There have been significant changes in vendor positions since the August 2013 Magic Quadrant (the free reprint has expired, but the graphic is floating around, and Gartner clients can use the “History” tab in the online Magic Quadrant tool, which allows you to compare 2012, 2013, and 2014 interactively).

We’ve observed, over the last nine months, a major shift in Gartner’s client base — the desire to make strategic bets on cloud IaaS providers. In general, this reduces the number of significant suppliers to an organization to just one or two (whereas many organizations previously had as many as four), with the overwhelming bulk of the workloads going to one provider. It also means that clients are interested in knowing not just who is winning right now, but who is going to be the winner in five or even ten years. That’s really the lens through which this Magic Quadrant should be viewed: Who has what it takes to convince customers that it can serve their current needs and sustain market leadership over the long term?

Our clients have, since September of 2013 (which seemed to mark a change in Microsoft’s go-to-market approach for Azure), consistently viewed this as an AWS vs. Microsoft battle, with AWS continuing to win the vast majority of business but Microsoft making significant inroads, especially with later-adopter customers. In recent weeks since the big price drops, lots of clients have been asking about the future of Google as well, and there are a lot of curiosity questions about IBM (SoftLayer) too, although the IBM questions tend to be more outsourcing- and broader-strategy-oriented. Of course, prospects consider other vendors, especially their existing incumbent vendors, as well, but AWS and Microsoft are overwhelmingly the top contenders.

What’s interesting about this year’s Visionaries is that they all have new platforms — CenturyLink with the Tier 3 acquisition, CSC with the ServiceMesh acquisition coupled with the AWS partnership, Google with Google Compute Engine, IBM with the SoftLayer acquisition, and Verizon Terremark with the still-beta Verizon Cloud. (Arguably VMware falls into this bucket as well, despite being a Niche Player this year.) These providers are in the middle of reinventing themselves, most with the idea of battling it out for the #3 spot in the market.

This is not a market for the faint of heart. (I recently asked a large vendor if they intended to compete seriously in the IaaS space, and was told, “Only an idiot takes on Amazon, Microsoft, and Google simultaneously.”) For that matter, this is not a market for the shallow of pocket. You can’t simply spend your way to success here, but you do need engineers, intellectual property, and (to be a real #3) substantial capital investment in infrastructure.

There’s also a clear convergence with the PaaS market taking place here. AWS has long offered an array of services that are PaaS elements, as well as many things that sit on the spectrum between pure IaaS and pure PaaS. Microsoft and Google started as PaaS providers and then launched IaaS offerings. The distinctions will blur and increasingly become less relevant, as providers fight it out on features and capabilities.

Gartner continues to separate our evaluation of related managed and professional services from the core cloud IaaS platform, because we believe that clients are increasingly choosing a platform, and then choosing consultants and managed services providers (or alternatively, turning to a trusted integrator who helps them choose the right platforms for their needs). I’ll be writing on this more in the future, but keep an eye out for the upcoming regional Magic Quadrants for Cloud-Enabled Managed Hosting for a managed services-oriented view.

Reflections on the OpenStack Atlanta summit

When I wrote a research note called Don’t Let OpenStack Hype Distort Your Selection of a Cloud Management Platform in September 2012, I took quite a bit of flak in public for my statements about OpenStack’s maturity. At the time, I felt that the industry was about 18 to 24 months from the point where real commercial adoption of OpenStack would begin. It now looks like I made the right call — 20 months have passed since I wrote that note, and indeed, OpenStack seems to be on the cusp of that tipping point. OpenStack is truly becoming a business. Last year’s Portland summit was a developer summit; this year’s summit has the feel of a trade show, although of course it’s still a set of working meetings as well as a user conference.

There’s much work to be done still, but things are grinding onwards in an encouraging fashion. The will to solve the common problems of installs, upgrades, and networking seems to have permeated the community sufficiently that these basic elements of usability and stability are getting into the core. The involvement of larger vendors has created a collective determination to do what it takes to make enterprise adoption of OpenStack possible, in due time.

In March of this year, I wrote a new document called An Overview of OpenStack, 2014. It contains the updated Gartner positions on OpenStack — along with practical information for users, like use cases, vendors, and how to select a distro. (No vendor has done a free reprint of the note, so it’s behind the paywall, sorry.) I have no updates to that position after this summit; it has been largely what I expected it to be. However, I did want to comment on what I see as one of the key questions now facing the OpenStack Foundation and contributing vendors.

One of the positions taken in my recent note is a re-iteration of a 2012 position — we believe that OpenStack “will eventually mature into a solid open-source core at the heart of multiple commercial products and services.” One of the key questions that seems to be at hand now is how large that core should be — a fundamental controversy for OpenStack Foundation members, each of whom has a position based on where their company adds value.

At one pole of the spectrum are the vendors who want to maximize the capabilities in OpenStack that are fully open-source — I’ll call them the “more open” camp. (End-users, of course, also all want this.) These vendors typically differentiate in some way that is not the software itself. They do consulting, they are managed services providers, they are cloud IaaS providers, or they are selling some kind of product or service that uses OpenStack under the covers but delivers some other kind of value (NFV, SaaS, and so on). They want the maximum capabilities delivered in the software, and they’re willing to contribute their own work towards this end.

At the other pole of the spectrum are the vendors who intend to sell a cloud management platform (CMP) and need to be able to differentiate — I’ll call them the “more proprietary” camp. For them, the question becomes how a distribution differentiates. It has been argued previously that installation and upgrades should be left to commercial distributions; at long last, it seems to be agreed that, for the good of the community, at least some of these capabilities need to be decent in the core. The next controversy seems to be an HA control plane. But this also raises the broader question of how deep the functionality of OpenStack as a whole should go. Vendors that sell OpenStack software really fall into two broad categories — those that intend to supportively wrap what is essentially vanilla OpenStack (like the Linux vendors), and those that are building a full-fledged CMP (or CMP suite) into which OpenStack may essentially disappear near-invisibly (except for maybe an exposed API), surrounded by a rich fudgy layer of proprietary software (like HP and IBM). Most of these vendors want just enough in the OpenStack open-source core to make OpenStack successful overall.

There are nuances here, of course, and many vendors fall somewhere between these two poles, but I think that summarizes the two camps pretty well. Each camp has its own beliefs about what is best for their own companies and what is best for OpenStack. These are legitimate debates about what is “just enough” functionality in OpenStack (and how that “just enough” changes over time), even amongst vendors who occupy the “more proprietary” camp — and whether that “just enough” is sufficient to satisfy the “more open” camp. Indeed, the “more open” camp may find that they cannot get their contributions accepted because the “more proprietary” camp is gatekeeping.

It is critical to note that no vendor I’ve ever spoken to thinks that OpenStack interoperability means that you should be able to easily switch between distributions or OpenStack-based service providers. Rather, the desire is to ensure that there’s enough of an interoperability construct that there can be a viable OpenStack ecosystem — it’s about the ability of ecosystem vendors to interoperate with a variety of OpenStack-based vendors, far more than it is about the user’s ability to interoperate between OpenStack-based solutions. To reiterate another point from my previous research notes: Customers should expect to be no less locked into an OpenStack-based vendor/provider than they would into any other CMP or cloud IaaS provider.

First impressions of IBM BlueMix

IBM has launched the beta of BlueMix, its Cloud Foundry-based PaaS. Understanding what BlueMix, and IBM, do and don’t bring to the table means a bit of a digression into how Cloud Foundry works as a PaaS. Since my blog is usually pretty infrastructure-oriented, I’m guessing that a significant percentage of readers won’t know very much about Cloud Foundry (which I’ll abbreviate as CF).

In CF, users write application code, which they deploy onto CF runtime environments — programming languages and associated frameworks — defined by “buildpacks”, a mechanism that originated at Heroku (a PaaS provider that is not CF-based). A CF deployment will normally have some built-in buildpacks, but users can also supply additional ones. CF runs applications in its own “Warden” containers (which are OS-independent), staging the runtime and app code into what it calls “droplets”. These application instances are of a size controlled by the user (developer), and the user chooses how many of them there are. Cloud Foundry does not currently have native auto-scaling.

CF can also expose a catalog of services; these services might or might not be built on top of Cloud Foundry. Services that support CF’s Service Broker API are called “Managed Services”, and CF can provision them and bind them to applications. Users can also bind their own service instances, supplying credentials for services that exist outside of CF and that aren’t directly integrated via the Service Broker API — so external services that don’t support CF explicitly can still be used.
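For readers who haven’t seen how a bound service actually reaches application code: CF injects the credentials of bound services into the application’s environment as JSON in the VCAP_SERVICES variable. A minimal sketch of consuming it follows; the specific labels and credential fields vary by service broker, so treat them as illustrative.

```python
# Minimal sketch of how a CF application consumes bound services.
# CF injects bound-service credentials as JSON in VCAP_SERVICES;
# the exact labels and credential fields depend on the service broker.
import json
import os

vcap = json.loads(os.environ.get("VCAP_SERVICES", "{}"))

# VCAP_SERVICES maps each service label to a list of bound instances,
# each carrying a "credentials" dict supplied via the Service Broker API.
for label, instances in vcap.items():
    for inst in instances:
        creds = inst.get("credentials", {})
        print("service %s, instance %s, host %s" %
              (label, inst.get("name"), creds.get("host")))
```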

IBM has built its own UI for BlueMix. IBM has said at Pulse that it’s got a new focus on design, and BlueMix shows it — the interface is modern and attractive, and its entire look-and-feel and usability are in stark contrast to, say, its previous SmartCloud Application Services offering. Interacting with the UI is pleasant enough. Most users will probably use the CF command-line tool (CLI), though. Apps are normally deployed using the CLI, unless the customer is using JazzHub (a developer service created out of IBM UrbanCode).

For the BlueMix beta, IBM has created two buildpacks of its own, for Liberty (Java) and Node.js, which it says it has hardened and instrumented. It also supplies two community buildpacks, for Ruby on Rails and Ruby Sinatra. As with normal CF, users can supply their own buildpacks, and the open-source CF buildpacks appear to work fine. IBM calls all of these “runtimes” in the BlueMix portal.

IBM also has a bunch of CF services — “Managed Services” in CF parlance. Some of these are IBM-created, like the DataCache (which is WebSphere eXtreme Scale) and Elastic MQ (WebSphere MQ). Others are labeled “community” and are likely open-source CF service implementations of popular packages like MySQL and MongoDB. As is true with all CF services, the implementation of a service is not necessarily on Cloud Foundry — for instance, one of the services is Cloudant, which is entirely external.

Finally, IBM provides what it calls “boilerplates”, which you can click to create an application with a runtime plus a number of additional services that are bound to the app. The most notable is the “mobile backend starter”, which combines Node.js with a number of mobile-oriented services, like a mobile data store and push notifications.

All in all, the BlueMix beta is a showcase for IBM middleware and other IBM software of interest to developers. IBM has essentially had to SaaS-ify (or PaaS-ify, if you prefer that term) its enterprise software assets to achieve this. Obviously, this is only a sliver of its portfolio, but bringing more software assets into BlueMix is clearly key to its strategy — BlueMix is as much a service catalog as a PaaS in this case.

Broadly, though, it’s very clear that IBM is targeting the enterprise developer, especially the enterprise developer who is currently developing in Java on WebSphere technologies. It’s bringing those developers to the cloud — not targeting cloud-native developers, who are more likely to be drawn to something like AppFog if they’re looking for a CF service. Given that IBM says that it will provide strong support for integrating with existing on-premises applications, this is a strategy that makes sense.

Standard CF constraints apply — limited RAM per application instance (and tight resource limitations in general in BlueMix beta), no writes to the local filesystem, and so forth. Other features that would be value-added, like monitoring and automatic caching of static content, are missing at present.

The short-form way to think of BlueMix beta is “Cloud Foundry with some IBM middleware as a service”. It’s hosted in SoftLayer data centers. Presumably at some point IBM will introduce SLAs for at least portions of the service. It’s certainly worth checking out if you’re a WebSphere shop, and if you’re checking out Cloud Foundry in general, this seems to be a perfectly decent way to do it. There’s solid promise here, and my expectation is that at this stage of the game, PaaS might well be a much stronger play for IBM than IaaS, at least in terms of the ability to articulate the overall value of the IBM ecosystem and make an argument for making a strategic bet on IBM in the cloud.

Impressions of IBM Pulse

Once upon a time, IBM Pulse was a systems management conference. But this year, IBM has marketed it as a “cloud” conference. The first day of keynotes was largely devoted to IBM’s cloud efforts, although the second day keynote went back to the systems management roots (if still with a cloudy spin). IBM has done a good job of presenting a coherent vision of how it intends to go forward into the world of cloud, which is explicitly a world of changing business demands and an altered relationship between business and IT.

Notably absent amidst all of this has been any mention of IBM’s traditional services business (strategic outsourcing et al.), but the theme of “IBM as a service” has resonated strongly throughout. IBM possesses a deep portfolio of assets, and exposing those assets as services is key to its strategy. This is going to require radical changes in the way that IBM goes to market, with a much greater emphasis on marketing-driven online sign-up and self-service. (IBM, like other large tech vendors, is largely sales-driven today.)

Some serious brand-building for IBM’s SoftLayer acquisition is being done here, and IBM seems to be trying to redefine everything that SoftLayer does as cloud, even though SoftLayer’s business is almost all dedicated hosting (bare metal, sold month-to-month), not cloud IaaS in the usual sense of the word. There’s abundant confusion as a result; the cloudwashing is to IBM’s benefit, though, at least for now.

IBM has an enormous installed base, across its broad portfolio, and for a large percentage of that base, it is a strategic vendor. IBM has to figure out how to get that customer base to buy into its cloud vision, and to make the bet that IBM is the right strategic partner for that cloud journey. IBM looks to be taking a highly neutral stance on the balance of cloud (services) versus internal IT; arguably, much like Microsoft, its strength lies in the ostensible ability to blend on-premises do-it-yourself IT with services in the cloud, extending the lifetime of existing technology stacks.

Much like Microsoft, IBM has an existing legacy of enterprise software — specifically, software built for single-tenant, on-premises, bespoke environments. Such software tends not to scale, and it tends not to be easily retrofitted into a services model. Again like Microsoft, IBM is on a journey towards “cloud first” architecture. IBM’s acquisition of Cloudant isn’t just the acquisition of a nice bit of technology — it’s also the acquisition of the know-how of how to build a service at scale, a crucial bit of engineering expertise that IBM needs to absorb and teach internally as its engineers embark on turning its software into services.

Again like Microsoft, IBM has the advantage of an existing developer ecosystem and middleware that’s proven to be sticky for that ecosystem — and consequently IBM has the potential to turn itself into a compelling cloud platform (in the broadest of senses, integrated across the IaaS, PaaS, and SaaS boundaries). Since cloud is in many ways about the empowerment of the line-of-business and developers, this is decidedly helpful for IBM’s future ambitions in the cloud.

So another juggernaut is on the move. Things should get interesting, especially when it comes to platforms, which is arguably where the real war for the cloud will be fought.

The end of the beginning of cloud computing

As my colleague Daryl Plummer has put it: We’re at the end of the beginning phase of cloud computing.

As 2014 dawns, we’re moving into an era of truly mainstream adoption of cloud IaaS. While many organizations have already been using cloud IaaS for several years, gradually moving from development to production, with an ever-expanding range of use cases and applications, the shift to truly strategic adoption is just getting underway. Increasingly, organizations are asking what can’t go to the cloud, rather than what can.

Organizations that haven’t done at least a cloud IaaS pilot by now, however informal (“informal” includes that one crazy developer who decided to give his credit card to Amazon) are at the trailing edge of adoption. The larger the business, the more likely it is to be doing things in cloud IaaS; this is a trend that starts from enterprises and works its way down. (Technology companies of all sizes, of course, are comfortably ensconced in the cloud.)

Gartner’s clients with multiple years of cloud IaaS under their belts are now comfortably going towards more strategic adoption. What’s interesting, though, is that later adopters are also going towards strategic adoption — they’re skipping the years of early getting-their-feet-wet, and immediately jumping in with more significant projects, with more ambitious goals. That makes a great deal of sense, though — by this point, the market is more mature, and there are immediate and clear answers to practical issues like, “How do I connect my enterprise network?” (That one question, by the way, continues to benefit Amazon, which has a precise answer, versus the often-fuzzy or complex answers of other competitors who have less industrialized processes for doing so.)

I’ve said before that developers are the key to cloud IaaS adoption in most organizations. It’s also becoming clear that the most successful strategic efforts will be developer-led, usually with an enterprise architect as the lead for the organization-wide effort. It is the developers who have the strategic vision for the future of application development and operations, and who care about things like faster delivery (i.e., business agility), continuous integration, continuous deployment, application lifecycle management, and infrastructure as code. IT operations, by contrast, seems almost inevitably to be mired in thinking solely about its own domain, which tends to be focused on a data center view that effectively reduces to “how do we keep the lights on, at a lower cost?” This has a high probability of leading to solutions that might be right for IT operations, but wrong for the business.

At the moment, I’m writing research focused on best practices — the lessons learned from the trenches, from organizations who have adopted cloud IaaS over the last seven years of the market. I’m always interested in hearing your stories.

Recommended reading for 2014 Cloud IaaS and Managed Hosting Magic Quadrants

If you’re a service provider interested in participating in the research process for Gartner’s Magic Quadrant for Cloud IaaS (see the call for vendors), or the regional Magic Quadrants for Cloud-Enabled Managed Hosting (see that call for vendors), you will probably want to read some of my previous blog posts.

The Magic Quadrant Process Itself

AR contacts for a Magic Quadrant should read everything. An explanation of why it’s critical to read every word of every communication received during the MQ process.

The process of a Magic Quadrant. Understanding a little bit about how MQs get put together.

Vendors, Magic Quadrants, and client status. Appropriate use of communications channels during the MQ process.

General tips for Magic Quadrant briefings and Specific tips for Magic Quadrant briefings. Information on how to conduct an effective and concise Magic Quadrant briefing.

The art of the customer reference. Tips on how to choose reference customers.

Gartner’s Understanding of the Market

Foundational Gartner research notes on cloud IaaS and managed hosting, 2014. Recommended reading to understand our thinking on the markets.

Having cloud-enabled technology != Having a cloud. Critical for understanding what we do and don’t consider cloud IaaS to be.

Infrastructure resilience, fast VM restart, and Google Compute Engine. An explanation of why infrastructure resilience still matters in the cloud, and what we mean by the term.

No World of Two Clouds. Why we do not believe that there will be a separation of the cloud IaaS offerings that target the enterprise, from those that target cloud-native organizations.

Cloud IaaS market share and the developer-centric world. How developers, rather than IT operations admins, drive spend in the cloud IaaS market.
