Gartner’s cloud IaaS assessments, 2019 edition

We’ve just completed our 2019 evaluations of cloud IaaS providers, resulting in a new Magic Quadrant, Critical Capabilities, and six Solution Scorecards — one for each of the providers included in the Magic Quadrant. This process has also resulted in fresh benchmarking data within Gartner’s Cloud Decisions tool, a SaaS offering available to Gartner for Technical Professionals clients, which contains benchmarks and monitoring results for many cloud providers.

As part of this, we are pleased to introduce Gartner’s new Solution Scorecards, an updated document format for what we used to call In-Depth Assessments. Solution Scorecards assess an individual vendor solution against our recently-revised Solution Criteria (formerly branded Evaluation Criteria). They are highly detailed documents — typically 60 pages or so, assessing 265 individual capabilities as well as providing broader recommendations to Gartner clients.

The criteria are always divided into Required, Preferred, and Optional categories — essentially, things that everyone wants (and where they need to compensate or mitigate risk if something is missing), things that most people want but can live without or work around readily, and things that are use-case-specific. The Required, Preferred, and Optional criteria are weighted in a 4:2:1 ratio to calculate an overall Solution Score.
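To make the weighting concrete, here is a minimal sketch of how an overall score could be derived under that 4:2:1 scheme, assuming each criterion is scored as a simple 1 (“Yes”) or 0 (“No”), as described later in this post. The function, category counts, and example numbers are illustrative only; this is not Gartner’s actual scoring tool.

CATEGORY_WEIGHTS = {"required": 4, "preferred": 2, "optional": 1}

def solution_score(results):
    """results maps each category name to a list of 1 ("Yes") / 0 ("No") scores."""
    met = sum(CATEGORY_WEIGHTS[c] * sum(s) for c, s in results.items())
    possible = sum(CATEGORY_WEIGHTS[c] * len(s) for c, s in results.items())
    return 100.0 * met / possible  # percent of weighted points earned

# Hypothetical provider: meets 80 of 100 Required, 40 of 100 Preferred,
# and 30 of 65 Optional criteria (265 criteria in total).
example = {
    "required":  [1] * 80 + [0] * 20,
    "preferred": [1] * 40 + [0] * 60,
    "optional":  [1] * 30 + [0] * 35,
}
print(f"{solution_score(example):.1f}")  # -> 64.7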

2019 Scores

If you are a Gartner for Technical Professionals client, the six scorecards are available to you today (Gartner paywall).

We will be providing a comparison of these vendors and their Solution Scorecards at the annual “Cloud Wars” presentation at the Gartner Catalyst conference — one of numerous great reasons to come to San Diego the week of August 11th (or Catalyst UK in London the week of September 15th)! Catalyst has tons of great content for cloud architects and other technical professionals involved in implementing cloud computing. 

Note that we are specifically assessing just the integrated IaaS+PaaS offerings — everything offered through a single integrated self-service experience and on a single contract. Also, only cloud services count; capabilities offered as software, hosting, or a human-managed service do not count. Capabilities also have to be first-party.

Also note that this is not a full evaluation of a cloud provider’s entire portfolio. The scorecards have “IaaS” in the title, and the scope is specified clearly in the Solution Criteria. For the details of which specific provider services or products were or were not evaluated, please refer to each specific Scorecard document.

All the scores are current as of the end of March, and count only generally available (GA) capabilities. The evaluation period has a cut-off date because we only score what we are able to test; since it takes weeks for vendors to review the documents and ensure accuracy, plus time to edit and publish, some capabilities will have gone beta or GA since that time. After the cut-off, we update the document text for accuracy, but we do not change the numerical scores. We expect to update the Solution Scorecards approximately every six months, and we are working to increase our evaluation cadence.

This year’s scores vs. last year’s

When you review the scores, you’ll see that broadly, the scores are lower than they were in 2018, even though all the providers have improved their capabilities. There are several reasons why the 2019 scores are lower than in previous years. (For a full explanation of the revision of the Solution Criteria in 2019, see the related blog post.)

First, for many feature-sets, several Required criteria were consolidated into a single multi-part criterion with “table stakes” functionality; missing any part of that criterion caused the vendor to receive a “No” score for that criterion (“Yes” is 1 point; “No” is zero points; there is no partial credit). The scorecard text explains how the vendor does or does not meet each portion of a criterion. The text also mentions if there is beta functionality, or if a feature was introduced after the evaluation period.
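A hypothetical illustration of this first point: under all-or-nothing scoring, consolidating what used to be several criteria into one multi-part criterion means that a single missing part now forfeits the whole point. The criterion and its parts below are invented for the example.

def score_criterion(parts_met):
    """A multi-part criterion scores 1 ("Yes") only if every part is met."""
    return 1 if all(parts_met.values()) else 0

# Invented example: four capabilities that might once have been four separate
# criteria (worth up to 4 points, so meeting three would have earned 3).
backup_criterion = {
    "scheduled_snapshots": True,
    "cross_region_snapshot_copy": True,
    "restore_to_new_volume": True,
    "backup_encryption": False,  # one missing part...
}
print(score_criterion(backup_criterion))  # -> 0: ...zeroes the whole criterion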

Second, many criteria that were Preferred in 2018 were promoted to Required in 2019, due to increasing customer expectations. Similarly, many criteria that were Optional in 2018 are now Preferred. We introduced some brand-new criteria to all three categories as well, but providers that might have done well primarily on table-stakes Required functionality in previous years may have scored lower this year due to the increased customer expectations reflected by revised and new criteria.

Customizing the scores

The Solution Criteria document, with full detail on every criterion, is available to all Gartner for Technical Professionals clients, and comes with a spreadsheet that allows you to score any provider yourself; we also provide a filled-out spreadsheet with each Solution Scorecard so you can adapt the evaluation to your own needs. The Solution Scorecards are similarly transparent about which parts of a criterion are or aren’t met, and we link to documentation that provides evidence for each point (in some cases Gartner was provided with NDA information, in which case we tell you how to obtain that information from the provider).

This allows you to customize the scores as you see fit. If you decide that meeting 3 out of 4 elements of a criterion is good enough for you, or you think that the element a provider misses isn’t relevant to you, or you want to give the provider credit for newly released capabilities, or you want to do region-specific scoring, you can modify the spreadsheet accordingly.
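As a sketch of the kind of customization this enables, the snippet below re-scores a hypothetical provider, granting partial credit for partially met criteria and optionally filtering to the criteria relevant to a particular use case. The row fields are invented for illustration and do not correspond to the actual spreadsheet columns.

CATEGORY_WEIGHTS = {"required": 4, "preferred": 2, "optional": 1}

def custom_score(rows, partial_credit=True, relevant_ids=None):
    """Re-score criterion rows, optionally with partial credit and filtering."""
    met = possible = 0.0
    for row in rows:
        if relevant_ids is not None and row["id"] not in relevant_ids:
            continue  # drop criteria that don't matter for this use case
        weight = CATEGORY_WEIGHTS[row["category"]]
        fraction = row["parts_met"] / row["parts_total"]
        met += weight * (fraction if partial_credit else float(fraction == 1.0))
        possible += weight
    return 100.0 * met / possible

rows = [
    {"id": "BKP-01", "category": "required", "parts_met": 3, "parts_total": 4},
    {"id": "IOT-07", "category": "optional", "parts_met": 0, "parts_total": 2},
]
print(f"{custom_score(rows):.1f}")                           # -> 60.0
print(f"{custom_score(rows, relevant_ids={'BKP-01'}):.1f}")  # -> 75.0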

If you’re a Gartner client and are interested in discussing the solution criteria, assessment process, and the cloud providers, please schedule an inquiry or a 1-on-1 at Catalyst. We’d be happy to talk to you!

Updating Gartner’s cloud IaaS evaluation criteria

In February of this year, we revised the Evaluation Criteria for Cloud IaaS (Gartner paywall). The evaluation criteria (now rebranded Solution Criteria) are essentially the sort of criteria that prospective customers typically include in RFPs. They are highly detailed technical criteria, along with some objectively-verifiable business capabilities (such as elements in a technical support program, enterprise ISV partnerships, ability to support particular compliance requirements, etc.).

The Solution Criteria are intended to help cloud architects evaluate cloud IaaS providers (and integrated IaaS+PaaS providers such as the hyperscale cloud providers), whether public or private, or assess their own internal private cloud. We are about to publish Solution Scorecards (formerly branded In-Depth Assessments) for multiple providers; Gartner analysts assess these solutions hands-on and determine whether or not they have capabilities that meet the requirements of a criterion.

The TL;DR version

In summary, we revised the Solution Criteria extensively in 2019, and the results were as follows:

  • The criteria have been updated to reflect the current IaaS+PaaS market.
  • Expectations are significantly higher than in previous years.
  • Expectations have been aligned to other Gartner research, taking into account customer wants and needs in the relevant market, not just in a cloud-specific context.
  • Many capabilities have been consolidated and are now required.
  • Most vendor scores in the Solution Scorecards have dropped dramatically since last year, and there is a much broader spread of vendor scores.

The Evolution of Customer Demands

The Evaluation Criteria (EC) for Cloud IaaS was first published in 2012. It received a significant update every other year (each even-numbered year) thereafter. When first written, the EC reflected the concerns of our clients at the time, many of whom were infrastructure and operations (I&O) professionals with VMware backgrounds. With each iteration, the EC evolved significantly, yet incrementally.

In the meantime, the market moved extremely quickly. The market evolution towards integrated IaaS and PaaS (IaaS+PaaS) providers, and the market exit (or strategic disinvestment) of many of the “commodity” providers, radically changed the structure and nature of the market over time. Cloud IaaS providers weren’t just expected to provide “hardware infrastructure”, but also “software infrastructure”, including all of the necessary management and automation. This essentially forced these providers into introducing services that compete in many IT markets and in an extraordinary number of software niches.

Furthermore, as the market matured, the roles and expectations of our clients also evolved significantly. The focus shifted to enterprise-wide initiatives, rather than project-based adoption. Digital business transformation elevated the importance of cloud-native workloads, while IT transformation emphasized the need for high-quality cloud migration of existing workloads. The notion that a cloud IaaS provider could successfully run all, or almost all, of a customer’s IT became part of the assumptions that needed to underpin the provider evaluation process. 

Today’s cloud IaaS customers have high expectations. Experienced customers are becoming more sophisticated, and late adopters also have high expectations that a provider must meet in order to help them overcome barriers to adoption.

For 2019, we decided to take a look at the EC “from scratch”, in order to construct a list of criteria that are the most relevant to the initiatives of customers today. In many cases, our clients are trying to pick a primary strategic IaaS provider. In other cases, our clients already have a primary provider but are trying to pick a strategic secondary provider as they implement a multicloud strategy. Finally, some of our clients are choosing a provider for a tactical need, but still need to understand that provider’s capabilities in detail.

Constructing the Revision

The revision needed to keep a similar number of criteria (in order to keep the assessment time manageable and the assessment itself at a readable length) — we ended up with 265 for 2019.

In order to keep the total number of criteria down, we needed to consolidate closely-related criteria into a single criterion. Many criteria became multi-part as a result. We tried to consolidate the “table stakes” functionality that could be assumed to be a part of all (or almost all) cloud IaaS offerings, in order to make room for more differentiated capabilities. 

We tried to be as vendor-neutral as possible. The evaluation criteria have evolved since the initial 2012 introduction; when we introduced new criteria in the past, we often ended up with criteria requirements that closely mirrored the feature-set of the first provider to offer a capability, since that provider shaped customer expectations. In this 2019 revision, we tried to go back to the core customer requirements, without concern as to whether cloud provider implementations fully aligned with those requirements — the criteria are intended to reflect what customers want and not what vendors offer. There are requirements that no vendors meet, but which we often hear our clients ask for; in such cases we tried to phrase those requirements in ways that are reasonable and implementable at scale, as it’s okay for the criteria to be somewhat aspirational for the market.

We tried to make sure that the criteria were worded using standard Gartner terms or general market terminology, avoiding vendor-specific terms. (Note that because vendors not infrequently adopt Gartner terms, there were cases where providers had adopted terminology from earlier versions of the EC, and we made no attempt to alter such terms.)

We tried to keep to requirements, without dictating implementation, where possible. However, we had to keep in mind that some cloud IaaS customers want fine-grained visibility into, and control over, the infrastructure; a criterion must still be implementation-specific when customers explicitly want those elements exposed.

Defining the Criteria

During the process of determining the criteria, we sought input broadly within Gartner, both in terms of discussing the criteria with other analysts as well as incorporating things from existing Gartner written research. (And the criteria reflect, as much as possible, the discussions we’ve had with clients about what they’re looking for, and what they’re putting into their RFPs.)

In some cases, we needed input from specialists in a topic. In some areas of technology, clients who need deep-dive discussions of features may talk almost exclusively to analysts specialized in those areas. Those analysts are familiar with current requirements as well as the future of those technology areas, and are thus the best source for determining those needs. For example, areas such as machine learning and IoT are primarily covered by analysts with those specializations, even when the customers are implementing cloud solutions. There are also areas, such as security, where we have detailed cloud recommendations from those teams. So we extensively incorporated their input.

We also looked at non-cloud capabilities when there were market gaps relative to customer desires. There are areas where either cloud providers do not currently have capabilities, or where those capabilities are relatively nascent. Thus, we needed to identify where customers are using on-premises solutions, and want cloud solutions. We also needed to determine what the “minimum viable product” should be for the purposes of constructing a criterion around it.

Feedback from non-cloud analysts was also important because it identified areas where clients were not using a cloud solution because of something that was missing. In many cases, these were not technology features, but issues around transparency, or the lack of solutions acceptable on a global basis.

Finally, the way that customers source solutions, build applications, and manage their data is changing. We tried to ensure that the new criteria aligned with these trends.

Because more and more of our clients are deploying cloud solutions globally, every criterion also had some requirements as to its global availability. These are used only for advisory purposes and are not part of scoring. 

The vendors were allowed to give feedback on the criteria prior to publication. We wanted to check if the criteria were reasonable, and seemed fair. We incorporated feedback that constituted good, vendor-neutral suggestions that aligned to customer requirements.

The End Results

When you see the Solution Scorecards, you may be surprised by lower scores on the part of many of the providers. We’re being transparent about the Evaluation Criteria (Solution Criteria) revision in order to help you understand why the scores are lower.

The lower scores were an unintended side effect of the revision, but they reflect, to some degree, the state of the market relative to the very high expectations of customers. Note that this year’s lower scores do not indicate that providers have “gone backwards” or removed capabilities; they simply reflect each provider’s status against a raised bar of customer expectations.

We expect that when we update the scorecards in the second half of this year, scores will increase, as many of the vendors have since introduced missing capabilities, or will do so by the next update. We remain confident that the solution criteria are a good reflection of a broad range of current customer expectations. Because many vendors are doing a good job of listening to what customers and prospects want, and planning accordingly, we think that the solution criteria will also be reflected in future vendor roadmaps and market development.

We discuss the Solution Scorecards and scores in a separate blog post.