
Banks are accelerating their cloud journeys

In the past couple of months, I have talked to the majority of the world’s largest banks about what is necessary to drive successful cloud adoption at enterprise scale. These conversations have much in common with one another, and I often send the same research notes as a follow-up. Here are those notes, with some context. The notes are all behind the Gartner paywall, in most cases Gartner for Technical Professionals, though some are available to IT Leaders or Executive Programs clients.

Banks are indeed moving core banking to the cloud. The long-held adage that “banks might put new systems of innovation or systems of engagement in the cloud, but they’ll never move core banking” is crumbling. Gartner has statistics supporting this, which you can find in “Core Banking Hot Spot: Moving the Core Into the Cloud”.

Banks cite application modernization as a critical driver for cloud adoption. An increasing number of banks are migrating a substantial percentage of their existing application estate to public cloud IaaS (and PaaS). Supporting survey data can be found in “Application Modernization Is the Most Common Identified Priority for End-User Cloud Adoption in Banking and Investment Services” (but other priorities are closely clustered in importance).

Banks are striving to mature their cloud adoption. Some banks have had a lot of ad hoc adoption over the years, while other banks have been more cautious (venturing into a bit of SaaS but sometimes zero IaaS or PaaS). But we’ve hit the inflection point (starting about two years ago) where banks became comfortable with cloud provider security and then seemingly all of a sudden went to a “go go go!” mode in which cloud was viewed as a critical accelerator of digital banking initiatives. (See “Advance Through Public Cloud Adoption Maturity” for a view of typical journeys.)

Central cloud governance is the norm for banks. Banks generally like the Gartner-style cloud center of excellence (CCOE) model where an enterprise architecture function provides cloud governance, brokerage, and transformation assistance. (See “How to Build a Cloud Center of Excellence”.) However, their CCOE model is likely to be federated to empower different business units or regions to take charge of their own destinies (especially when the cloud strategy is more regional than global). And many banks are splitting off a separate cloud IT unit under a deputy CIO, which is effectively a self-contained organization with hundreds of people devoted to the cloud migration and transformation effort.

While banks still do detailed technical evaluation of cloud providers, strategic selection is based on alignment to the IT strategy. Banks still really care about nitpicky technical details, but ultimately, their selection of strategic providers is based on broader IT priorities, just like most other cloud customers these days. (See “How to Initiate the Selection of Strategic Cloud IaaS Providers”.) Sometimes there’s a certain degree of hope for some kind of innovation partnership. (I am cynical about such “partnerships”, especially when they come in the form of vague sales platitudes without contractual guarantees or a close business development relationship.)

Banks tend to be multicloud. The larger the bank, the more likely it is to adopt a multicloud strategy, similar to other enterprises (see “Comparing Cloud Workload Placement Strategies”). However, this does not mean that all cloud providers are treated equally. My anecdotal impression is that in terms of primary strategic provider, AWS dominates the top end of the market (the largest banks), but Azure captures the middle of the pack (from the US midmarket banks that tend to outsource their processing, to the banks that are important at the country/region level but not highly global).

Banks are making the transition to a more systematic approach to multicloud. Like many large distributed enterprises, banks often have pockets of cloud adoption, each aligned to a different cloud provider. With the maturation of their cloud journeys, they are becoming more systematic, building workload placement policies to guide where workloads should go. (See “Designing a Cloud Workload Placement Policy Document”.)
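
To make that concrete, here is a toy sketch of the kind of rules a placement policy might encode, written in Python purely for illustration. The providers, workload attributes, and rules are all invented; an actual policy document is prose and governance process, not code.

    # Hypothetical placement rules; providers and workload attributes are invented.
    PLACEMENT_RULES = [
        # (predicate over workload attributes, placement target)
        (lambda w: w["data_classification"] == "restricted"
                   and not w["cloud_approved_by_regulator"], "on-premises"),
        (lambda w: w["windows_licensing_heavy"], "azure"),  # e.g., licensing economics
        (lambda w: True, "aws"),                            # default strategic provider
    ]

    def place(workload: dict) -> str:
        """Return the first placement target whose rule matches the workload."""
        for predicate, target in PLACEMENT_RULES:
            if predicate(workload):
                return target
        return "on-premises"

    # A restricted workload without regulatory sign-off stays on-premises.
    print(place({"data_classification": "restricted",
                 "cloud_approved_by_regulator": False,
                 "windows_licensing_heavy": False}))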

Banks worry about cloud concentration risks. Many banks face regulatory regimes that require them to address concentration risk. Regulators tend not to provide prescriptive guidance for what they must do, though. Banks have told me that attempting to maintain multicloud portability for applications essentially destroys the business case for cloud. Portability significantly impacts application development time, thus reducing the agility benefits. Without the ability to exploit the unique differentiated capabilities of a cloud provider, there’s little compelling reason not to just do it on-premises — which might actually be more risky than doing it in the cloud. There are effective practical risk-reduction approaches that don’t involve “maintain constant portability of all my apps”, though. (See “How to Create a Public Cloud Integrated IaaS and PaaS Exit Plan”.)

I hope to collaborate with a Gartner colleague to write bank-targeted research in the future. If you’re a cloud architect at a bank, I’d love to speak with you in client inquiry.

Terms of Service: From anti-spam to content takedown

Many consumers are familiar with the terms of service (ToS) that govern their use of consumer platforms that contain user-generated content (UGC), such as Facebook, Twitter, and YouTube. But many people are less familiar with the terms of service and acceptable use policy (AUP) that govern the relationships between businesses and their service providers.

In light of the recent decision by Amazon Web Services (AWS) to suspend service to Parler, a Twitter-like social network that was used to plan the January 6th insurrection at the US Capitol, numerous falsehoods have circulated on social media that seem rooted in a lack of understanding of service provider behavior, ToS, and AUPs: claims of “free speech” violations, a coordinated conspiracy between big tech companies, and the like. This post is intended to examine, without judgment, how service providers — cloud, hosting and colo, CDN and other Internet infrastructure providers, and the ISPs that connect everyone to the Internet — came to a place of industry “standards” for behavior, and how enforcement is handled in B2B situations.

The TL;DR summary: The global service provider community, as a result of anti-spam efforts dating back to the mid-90s, enforces extremely similar policies governing content, including user-generated content, through a combination of B2B peer pressure and contractual obligations. Business customers who contravene these norms have very few options.

These norms will greatly limit Parler’s options for a new home. Many sites with far-right and similarly controversial content have ultimately ended up using a provider in a supply chain that relies on Russian connectivity, thus dodging the Internet norms that prevail in the rest of the world.

Internet Architecture and Service Provider Dependencies

While the Internet is a collection of loosely federated networks that are in theory independent from one another, it is also an interdependent web of interconnections between those networks. There are two ways that ISPs connect with one another — through “settlement-free peering” (essentially an exchange of traffic between two ISPs that view themselves as equals) and through the purchase of “transit” (making a “downstream ISP” the customer of an “upstream ISP”).

This results in a three-tier model for ISPs. The Tier 1 ISPs are big global carriers of network connectivity — companies like AT&T, BT and NTT — who peer settlement-free with each other and sell transit to smaller ISPs. Tier 2 ISPs are usually regional; they peer settlement-free with others in and around their region, but also rely on transit from the Tier 1s. Tier 3 ISPs are entirely dependent on purchasing transit. ISPs at all three tiers also sell connectivity directly to businesses and/or consumers.

In practice, this means that ISPs are generally contractually bound to other ISPs. All transit contracts are governed by terms of service that normally incorporate, by reference, an AUP. Even settlement-free peering agreements are legal contracts, which normally include the mutual agreement to maintain and enforce some form of AUP. (In the earlier days of the Internet, peering was done on a handshake, but anything of that sort is basically a legacy that can come to an abrupt end should one party suddenly decide to behave badly.)

AUP documents are interesting because they are deliberately created as living documents, allowing AUPs to be adapted to changing circumstances — unlike standard contract terms, which apply for the length of what is usually a multiyear contract. AUPs are also normally ironclad; it’s usually difficult to impossible for a business to get any form of AUP exemption written into their contract. Most contracts provide minimal or no notice for AUP changes. Businesses tend to simply agree to them because most businesses do not plan to engage in the kind of behavior that violates an AUP — and because they don’t have much choice.

The existence of ISP tiering means that upstream providers have significant influence over the behavior of their downstream. Upstream ISPs normally mandate that their downstream ISPs — and other service providers that use their connectivity, like hosting providers — enforce an AUP that enables the downstream provider to be compliant with the upstream’s terms of service. Downstream providers that fail to do so can have their connectivity temporarily suspended or their contract terminated. And between the Tier 1 providers, peer pressure ensures a common global understanding and enforcement of acceptable behavior on the Internet.
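
The transitive nature of those obligations is easy to see in a toy model. The Python sketch below uses an invented topology, and real enforcement is contractual and human rather than programmatic; it simply computes every provider whose AUP a given player must effectively honor.

    # Toy model: each downstream buys transit from its listed upstreams (invented).
    TRANSIT = {
        "tier2-regional": ["tier1-a", "tier1-b"],
        "hosting-co": ["tier2-regional"],
        "content-site": ["hosting-co"],
    }

    def upstream_chain(provider: str) -> set:
        """All providers whose AUPs the given provider must effectively honor."""
        chain = set()
        for upstream in TRANSIT.get(provider, []):
            chain.add(upstream)
            chain |= upstream_chain(upstream)
        return chain

    # A content site is transitively bound by its host, the regional ISP,
    # and ultimately the Tier 1 carriers.
    print(upstream_chain("content-site"))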

Note that this has all occurred in the absence of regulation. ISPs have come to these arrangements through decisions about what’s good for their individual businesses first and foremost, with the general agreement that these community standards for AUPs are good for the community of network operators as a whole.

We’re Here Because Nobody Likes Spammers

So how did we arrive at this state in the first place?

In the mid-90s, as the Internet was growing rapidly in the near-total absence of regulation, spam was a growing problem. Spam came both from legitimate businesses that simply weren’t aware of, or didn’t especially care about, Internet etiquette, and from commercial spammers (bad actors with deceptive or fraudulent ads, and/or illegal/grey-market products).

Many B2B ISPs did not feel that it was necessarily their responsibility to intervene, despite general distaste for spammers — and, sometimes, a flood of consumer complaints. Some percentage of spammers were otherwise “good customers” — i.e. they paid their bills on time and bought a lot of bandwidth. Many more, however, obtained services under fraudulent pretenses, didn’t pay their bills, or tended not to pay on time.

Gradually, the community of network operators came to a common understanding that spammers were generally bad for business, whether they were your own customers, or whether they were the customers of, say, a web hosting company that you provided Internet connectivity for.

This resulted in upstream ISPs exerting pressure on downstream ISPs. Downstream ISPs, in turn, exerted pressure on their customers — kicking spammers off their networks and pushing hosters to kick spammers out of hosting environments. ISPs formalized AUPs. AUP enforcement took longer. Many ISPs were initially pretty shoddy and inconsistent in their enforcement — because they needed the revenue they were getting from spammers, because they were unwilling or unable to fund a staff to deal with abuse, or because corporate lawyers urged caution. It took years, but ISPs eventually arrived at AUPs that were contractually enforceable, processes for handling complaints, and relatively consistent enforcement. Legislation like the CAN-SPAM Act in the US didn’t hurt, but by the time CAN-SPAM was passed (in 2003), ISPs had already arrived at a fairly successful commercial resolution to the problem.

Because anti-spam efforts were largely fueled by agreements enshrined in B2B contracts, and not in government regulation, there was never full consistency across the industry. Different ISPs created different AUPs — some stricter and some looser. Different ISPs wrote different terms of service into their contracts, with different “cure” periods (a period of time that a party in the contract is given to come into compliance with a contractual requirement). Different ISPs had different attitudes towards balancing “customer service” versus their responsibilities to their upstream providers and to the broader community of network operators.

Consequently, there’s nothing that says “We need to receive X number of spam complaints before we take action,” for instance. Some providers may have internal process standards for this. A lot of enforcement simply takes place via automated algorithms; for example, if a certain threshold of users reports something as spam, enforcement actions take place. Providers effectively establish, through peer norms, what constitutes “effective” enforcement in accordance with terms of service obligations. Providers don’t need to threaten each other with network disconnection, because a norm has been established. But the implicit threat — and the contractual teeth behind that threat — always remains.
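
As a purely hypothetical illustration of that kind of threshold-driven automation (the threshold and actions below are invented; providers do not publish their criteria):

    SPAM_REPORT_THRESHOLD = 100  # invented number; real thresholds are not public

    def handle_abuse_reports(customer: str, report_count: int) -> str:
        """Escalate once user reports cross a threshold; otherwise just monitor."""
        if report_count >= SPAM_REPORT_THRESHOLD:
            return f"open abuse case against {customer}: notify, then suspend if not cured"
        return "log and monitor"

    print(handle_abuse_reports("example-sender", 250))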

Nobody really likes terminating customers. So there are often fairly long cure periods, recognizing that it can take a while for a customer to properly comply with an AUP. In the suspension letter that AWS sent Parler, AWS cites communications “over the past several weeks”. Usually providers look for their customers to demonstrate good-faith efforts, but may suspend or terminate service if no good-faith effort to comply is being made, or if the effort, however earnest, does not seem likely to bring compliance within a reasonable time period. 30 days is a common cure period specified in contracts (and is the cure period in the AWS standard Enterprise Agreement), but cloud provider click-through agreements (such as the AWS Customer Agreement) do not normally have a cure period, allowing immediate action to be taken at the provider’s discretion.

What Does This Have to Do With Policing Users on Social Media?

When providers established anti-spam AUPs, they also added a whole laundry list of offenses beyond spamming. Think of that list as “everything a good corporate lawyer thought an ISP might ever want to terminate a customer for doing”. Illegal behavior, harassment, behavior that disrupts provider operations, behavior that threatens the safety/security/operations of other businesses, etc. are all prohibited.

Hosting companies — eventually followed by cloud providers like AWS, Microsoft Azure, and Google Cloud Platform, as well as companies that hold key roles in the Internet ecosystem (domain registrars and the companies that operate the DNS; content delivery networks like Akamai and Cloudflare, etc.) — were essentially obliged to incorporate their upstream ISP usage policies into their own terms of service and AUPs, and to enforce those policies on their users if they wanted to stay connected to the Internet. Some such providers have also explicitly chosen not to sell to customers in certain business segments — for instance, no gambling, or no pornography, even if the business is fully legitimate and legal (for instance, like MGM Resorts or Playboy) — through limiting what’s allowed in their terms of service. An AUP may restrict activities that are perfectly legal in a given jurisdiction.

Even extremely large tech companies that have their own data centers, like Facebook and Apple, are ultimately beholden to ISPs. (Google is something of an odd case because in addition to owning their own data centers, they are one of the largest global network operators. Google owns extensive fiber routes and peers with Tier 1 ISPs as an equal.) And even though AWS has, to some degree, a network of its own, it is effectively a Tier 2 ISP, making it beholden to the AUPs of its upstream. Other cloud providers are typically mostly or fully transit-dependent, and are thus entirely beholden to their upstream.

In short: everyone who hosts content companies, and the content companies themselves, is essentially bound, by the chain of AUP obligations, to police content to ensure that it is not illegal, harassing, or otherwise seen as commercially problematic.

You have to go outside the normal Internet supply chain — for instance, to the Russian service providers — before you escape the commercial arrangements that bind notions of good business behavior on the Internet. It doesn’t matter what a provider’s philosophical alignment is. Commercially, they simply can’t really push back on the established order. And regulation at a single-country level can’t really force these agreements to be significantly more or less restrictive, because of the globalized nature of peering/transit; providers generally interconnect in multiple countries.

It also means that these aren’t just “Silicon Valley” standards. These are global norms for behavior, which means they are not influenced solely by the relatively laissez-faire content standards of the United States, but by the more stringent European and APAC environments.

It’s an interesting result of what happens when businesses police themselves. Even without formal industry-association “rules” or regulatory obligations, a fairly ironclad order can emerge that exerts extremely effective downstream pressure (as we saw in the cases of 8Chan and the Daily Stormer back in 2019).

Does being multicloud help with terms of service violations?

Some people will undoubtedly ask, “Would it have helped Parler to have been multicloud?” Parler has already said that they are merely bare-metal customers of AWS, reducing technical dependencies / improving portability. But their situation is such that they would almost certainly have had the exact same issue if they had been running on Microsoft Azure, Google Cloud Platform, or even Oracle Cloud Infrastructure as well (even though the three companies have top executives with political views spanning the spectrum). A multicloud strategy won’t help any business that violates AUP norms.

AWS and its cloud/hosting competitors are usually pretty generous when working with business customers that unintentionally violate their AUPs. But a business that chooses not to comply is unlikely to find itself welcome anywhere, which makes multicloud deployment largely useless as a defensive strategy.

Beware of vendors bearing transformation Turkish Delight

“It is a lovely place, my house,” said the Queen. “I am sure you would like it. There are whole rooms full of Turkish Delight, and what’s more, I have no children of my own. I want a nice boy whom I could bring up as a Prince and who would be King of Narnia when I am gone. While he was Prince he would wear a gold crown and eat Turkish Delight all day long; and you are much the cleverest and handsomest young man I’ve ever met. I think I would like to make you the Prince—some day, when you bring the others to visit me.” — The White Witch (C.S. Lewis; The Lion, The Witch, and the Wardrobe)

When most people read the Narnia novels as children, they have no idea what Turkish Delight is. Its obscurity in recent decades has allowed everyone to imagine it as an entirely wonderful substance, carrying all their hopes and dreams of the perfect candy.

So, too, do people pour all of their business hopes and dreams into a nebulously-defined future of “digital transformation”.

Because the cloud is such a key enabling technology for digital business, I have plenty of discussions with clients who have been promised grand “digital transformation” outcomes by cloud providers and cloud MSPs. But it is certainly not a phenomenon limited to the cloud. Hardware vendors and ISVs, outsourcers, consultancies, etc. are all selling this dream. While I can think of vendors who are more guilty of this than others, it’s a cross-IT-industry phenomenon.

Beware all digital transformation promises. Especially the ones where the vendor promises to partner with you to change the future of your industry or reinvent/revolutionize/disrupt X (where X is what your industry does).

I’ve quietly watched a string of broken transformation promises over the last few years, gently warning clients in inquiry conversations that you generally can’t trust these sorts of vendor promises. These behaviors have become much more prominent recently, though. And a colleague recently told me about a conversation that seemed a bridge too far: a large tech vendor promising to partner with a small Midwestern industrial manufacturer (a tech laggard not doing anything exciting) to create transformative products, as part of a sales negotiation. (This vendor had not previously exhibited such behavior, so it was doubly startling.)

Clients come to us with tales of vendors who, in the course of sales discussions, promise to partner with them — possibly even dangling the possibility of a joint venture — to launch a transformational digital business, revolutionize the future of their industry, or the like. (Note that this is distinct from companies that sell transformation consulting, such as McKinsey or Deloitte. Those firms promise to help you figure out the future, not form a business partnership to create that future.)

Usually, neither the customer nor the vendor has a concrete idea of what that looks like. Usually, the vendor refuses to put this partnership notion in writing as a formal contract. On the rare occasion that there is a contract, it is pretty vague, does not oblige the vendor to put forth any business ideas, and allows the vendor to refuse any business idea and investment. In other words, it has zero teeth. Because it’s so open-ended, the customer can fill the void with all their Turkish Delight dreams.

Moreover, the vendor may sometimes dangle samples of transformation-oriented services and consulting during the sales process. The customer gobbles down these sweet nuggets, and then stares mournfully at the empty box of transformation candy. For the promise of more, they’ll cheerfully betray their enterprise procurement standards, while the sourcing managers stand on the sidelines frantically waving contract-related warnings.

Listen to your sourcing managers when they warn you that the proposed “partnership” is a fiction. The White Witch probably doesn’t have your best interests at heart. Good digital transformation promises — ones that are likely to actually be kept — have concrete outcomes. They specify what the partnership will be developing, together with timelines, budgets, and the legal entity (such as a JV) that will be used to deliver the products/services. Or they specify the specific consulting services that will be provided — workshops, deliverables from those workshops, work-for-hire agreements with specific costs (and discounts, if applicable), and so forth.

Without concrete contractual outcomes, the vendor can vanish the candy into thin air with no repercussions. Sure, in a concrete transformation proposal, the end result will probably not be your Turkish Delight dreams. It might resemble a bowl of ordinary M&Ms. Or maybe a tasty grab-bag of Lindt truffles. (You’d have to get particularly lucky for it to get much beyond the realm of grocery-store candy, though.) But you’re much more likely to actually get a good outcome.

Off-hand, I can think of one public example where a prominent “change the industry” vendor partnership with an enterprise seems to have resulted in a credible product: Microsoft’s Connected Vehicle Platform. There, Microsoft signed a deal with a collection of automakers, each of whom had specific outcomes they wished to achieve — outcomes which could be realistically achieved in a reasonable amount of time, and representing industry advancement but not anything truly revolutionary. Microsoft built upon those individual projects to deliver a platform which would move the industry forward, and which was announced with a clear mission and a timeframe for launch. Sure, it didn’t “change the future of cars”, but it brought tangible benefits to the customers.

Vendors often try to sell to who you hope to be, rather than who you are now. Your aspirations aren’t bad. Just make sure that your aspirations are well-defined and there’s a realistic roadmap to achieve them. Hope is not a strategy. The vendor may have little incentive not to promise everything you could dream of, in order to get you to sign a large purchase agreement.

HP buys Eucalyptus

In an interesting move that seems to be predominantly an acquihire, HP has bought Eucalyptus for an undisclosed sum, though speculation is that the deal’s under $100m, less than a 2x multiple on what Eucalyptus has raised in funding (although that would still be a huge multiple on revenue).

Out of this, HP gets Eucalyptus’s CEO, Marten Mickos, who will be installed as the head of HP’s cloud business, reporting to Meg Whitman. It also gets Eucalyptus’s people, including its engineering staff, whom they believe to really have expertise in what HP termed (in a discussion with me and a number of other Gartner colleagues) the “Amazon architectural pattern”. Finally, it gets Eucalyptus’s software, although this seems to have been very secondary to the people and their know-how — unsurprising given HP’s commitment to OpenStack at the core of HP Helion.

Eucalyptus will apparently be continuing onward within HP. Mickos had indicated something of a change in direction previously, when he explained in a blog post why he would be keynoting an OpenStack conference. It seems like Eucalyptus had been headed in the direction of being an open-source cloud management platform (CMP) that provides an AWS API-compatible framework over a choice of underlying components, including OpenStack component options. In this context, it makes sense to have a standalone Eucalyptus product / add-on, providing an AWS-compatible private cloud software option to customers for whom this is important — and it sidesteps the OpenStack community debate on whether or not AWS compatibility should be important within OpenStack itself.

HP did not answer my direct question of whether Eucalyptus’s agreement with Amazon includes a change-of-control clause, but they did say that partnerships require ongoing collaboration between the two parties. I interpreted that to mean that AWS has some latitude to determine what they do here. The existing partnership has been an API licensing deal — specifically, AWS has provided Eucalyptus with engineering communications around their API specifications, without any technology transfer or documentation. The partnership has been important to ensuring that Eucalyptus replicates AWS behavior as closely as possible, so the question of whether AWS continues to partner going forward is likely important to the fidelity of future Eucalyptus work.

It’s important to note that Eucalyptus is by no means a full AWS clone. It offers the EC2, S3, and IAM APIs, including relatively full support for EC2 features such as EBS. However, it does not support the VPC networking features. And of course, it’s missing the huge array of value-added capabilities that surround the basic compute and storage resources. It’s not as if HP or anyone else is going to take Eucalyptus and build a service that is seriously competitive to AWS. Eucalyptus had mostly found its niche serving SMBs who wanted to run a CMP that would support the most common AWS compute capabilities, either in a hybrid cloud mode (i.e., for organizations still doing substantial things in AWS) or as an on-prem alternative to public cloud IaaS.

Probably important to the future success of HP Helion and OpenStack, though, is that Mickos’s management tenure at Eucalyptus included turning the product from its roots as a research project into much slicker commercial software that was relatively easy to install and run, without requiring professional services for implementation. He also turned its sales efforts to focus on SMBs with a genuine cloud agility desire, rather than chasing IT operations organizations looking for a better virtualization mousetrap (another example of bimodal IT thinking). Eucalyptus met with limited commercial success — but thus far, CloudStack and OpenStack haven’t fared much better. This has been, at least in part, a broader issue with the private cloud market and the scope of capabilities of the open-source products.

Of the many leaders that HP could have chosen for its cloud division, the choice of Mickos is an interesting one; he’s best known for being CEO of MySQL and eventually selling it to Sun, and thus he makes most sense as a leader in the context of open-source-oriented thinking. I’m not inclined to call the HP-Eucalyptus acquisition a game-changer, but I do think it’s an interesting indicator of HP’s thinking — although it perhaps further muddies waters that are already pretty muddy. The cloud strategies of IBM, Microsoft, Oracle, and VMware, for instance, are all very clear to me. HP hasn’t reached that level of crispness, even if they insist that they’ve got a plan and are executing on it.

Edit: Marten Mickos contacted me by email to clarify the Amazon/Eucalyptus partnership, and to remind me that MySQL was sold to Sun, not Oracle. I’ve made the corrections.

Bimodal IT, VMworld, and the future of VMware

In Gartner’s 2014 research for CIOs, we’ve increasingly been talking about “bimodal IT”. Bimodal IT is the idea that organizations need two speeds of IT — call them traditional IT and agile IT (Gartner just calls them mode-1 and mode-2). Traditional IT is focused on “doing IT right”, with a strong emphasis on efficiency and safety, approval-based governance and price-for-performance. Agile IT is focused on “doing IT fast”, supporting prototyping and iterative development, rapid delivery, continuous and process-based governance, and value to the business (being business-centric and close to the customer).

We’ve found that organizations are most successful when they have two modes of IT — with different people, processes, and tools supporting each. You can make traditional IT more agile — but you cannot simply add a little agility to it to get full-on agile IT. Rather, that requires fundamental transformation. At some point in time, the agile IT mode becomes strategic and begins to modernize and transform the rest of IT, but it’s actually good to allow the agile-mode team to discover transformative new approaches without being burdened by the existing legacy.

Furthermore, agile IT doesn’t just require new technologies — it requires a different set of skills from IT professionals. The IT-centric individual who is a cautious guardian and enjoys meticulously following well-defined processes is unlikely to turn into a business-centric individual who is a risk-taking innovator and enjoys improvising in an uncertain environment.

That brings us to VMware (and many of the other traditional IT vendors who are trying to figure out what to do in an increasingly cloud-y world). Today’s keynote messages at VMworld have been heavily focused on cost reduction and offering more agility while maintaining safety (security, availability, reliability) and control. This is clearly a message that is targeted at traditional IT, and it’s really a story of incremental agility, using the software-defined data center to do IT better. There’s a heavy overtone of reassurance that the VMware faithful can continue to do business as usual, partaking of some cool new technologies in conjunction with the VMware infrastructure that they know and love — and control.

But a huge majority of the new agile-mode IT is cloud-native. It’s got different champions with different skills (especially development skills), and a different approach to development and operations that results in different processes and tooling. “Agility” doesn’t just mean “faster provisioning” (although to judge from the VMware keynote and customer speakers, IT Operations continue to believe this is the case). VMware needs to find ways to be relevant to the agile-IT mode, rather than just helping traditional-IT VMware admins try to improve operations efficiency in a desperate grasp to retain control. (Unfortunately for VMware, the developer-relevant portions of the company were spun off into Pivotal.)

Bimodal IT also implies that hybrid IT is really just the peaceful coexistence of non-cloud and cloud application components — not the idea that one set of management tools sits on top of all environments. VMware admins are obviously attracted to the ability to extend their existing tools and processes to the cloud (whether service provider IaaS or an internal private cloud), but that’s not necessarily the right thing to do. You might run traditional IT in both non-cloud and cloud modes and want hybrid tooling across the two — but you should not try to span the traditional-IT and agile-IT modes that way (regardless of whether they’re non-cloud or cloud); instead, use best-of-breed tooling for each mode.

If you’re considering the future of any IT vendor today, you have to ask yourself: What is their strategy to address each mode of IT? The mere recognition of the importance of applications and the business is insufficient.

(Gartner clients only: See Taming the Digital Dragon: The 2014 CIO Agenda and Bimodal IT: How to Be Digitally Agile Without Making a Mess for a lot more information about bimodal IT. See our 2013 and 2014 Professional Effective Planning Guides, Coming to Terms With the Nexus of Forces and Reshaping IT for the Digital Business for a guide to what IT professionals should do to advance their careers and organizations given these trends.)

AWS 2Q14 and why the sky is not falling

Amazon posted weaker 2Q 2014 results for Amazon Web Services, leading some to speculate about competitive pressures despite continued enormous growth in usage.

Investors are asking, probably reasonably, what’s going on here. Is the overall market for cloud computing weakening? Are competitors taking more market share? Or is this largely the result of the price cuts that went into effect at the beginning of 2Q? And if it’s the price cuts, are these price cuts temporary, or are they part of a larger trend that eventually drives the price to zero?

The TL;DR version: No, cloud growth remains tremendous. No, AWS’s market share likely continues to grow despite the fact that they’re already the dominant player. Yes, this is a result of the price cuts. No, the price cuts are permanent, and yes, cuts will eventually likely drive prices down to near-cost, but this is nevertheless not a commodity market.

The deep dive follows. Note that when I use the term “market share”, I do not mean revenue; I mean revenue-generating capacity-in-use, which controls for the fact that prices vary significantly across providers.

What’s with these price drops?

I have said repeatedly in the past that the dynamics of the cloud IaaS market (which is increasingly also convergent with the high-control PaaS market) are fundamentally those of a software market. It is a market in which customers are ultimately buying IT operations management software as a service, and as they go up-stack, middleware as a service. Yes, they are also getting the underlying compute, storage, and networking resources, but the major value is actually in all the software that automates this stuff and delivers it as an easy-to-consume, less-effort-to-manage solution. The automation reduces a customer’s labor costs, which is where the customer sees cost savings in services over doing it themselves.

You’ll note that in other SaaS markets, the infrastructure is effectively delivered at cost. That’s extremely likely to be true of the IaaS market over time, as well. Furthermore, like other software markets, the largest vendors with the greatest breadth and depth of capabilities are the winners, and they’re made extra-sticky by their ecosystems. (Think Oracle.) Over time, the IaaS providers will make most of their margin off higher-level services rather than the raw resources; think about the difference between what a customer pays for Redshift data warehousing, versus raw EC2 compute plus EBS storage.

The magnitude of the price drops is scaring away many rival providers. So is the pace of innovation. Few competitors have the deep pockets or willpower to pour money into engineering and data centers in order to compete at this level. Most are now scurrying to stake out a niche of the market where they believe they can differentiate.

Lowering the price expands the addressable market, as well. The cheaper it is to do it in the cloud, the more difficult it is to make a business case to do an on-premises solution, especially a private cloud. Many of Gartner’s clients tell us that even if they have a financially viable case to build a private cloud right now, their costs will be essentially static over the amortization period of 3 to 5 years — versus their expectation that the major IaaS providers will drop prices 30% every year. Time-to-value for private cloud is generally 18 to 24 months, and it typically delivers a much more limited set of features, especially where developer enablement is concerned. It’s tough for internal IT to compete, especially when the major IT vendors aren’t delivering software that allows IT to create equivalent capabilities at the speed of an AWS, Microsoft, or Google.
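
The back-of-the-envelope math, assuming price parity in year one and the 30%-per-year cuts that clients cite (all numbers invented for illustration), makes the problem stark:

    # Compare a static private-cloud unit cost against a public-cloud unit price
    # that is cut 30% each year, over a five-year amortization period.
    private_unit_cost = 1.00   # static over the amortization period
    public_unit_price = 1.00   # assume parity in year 1
    annual_cut = 0.30

    for year in range(1, 6):
        print(f"Year {year}: private {private_unit_cost:.2f}, public {public_unit_price:.2f}")
        public_unit_price *= (1 - annual_cut)

    # By year 4 the public price is about 34% of the year-1 price (0.7 ** 3),
    # while the private cloud's cost base has not moved.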

The size of the price cut certainly had a negative impact on AWS’s revenues this past quarter. The price cut is likely larger than AWS would have done without pressure from Google. At the same time, the slight dip in revenue, versus the magnitude of the cuts, makes it clear that AWS is still growing at a staggeringly fast pace.

The Impact of Microsoft Azure

Microsoft has certainly been on an aggressive tear in the market over the last year, especially since September 2013, and Azure has been growing impressively. Microsoft has been extremely generous with both discounts and with free credits, which has helped fuel the growth of Azure usage. But interestingly, Microsoft’s proactive evangelism to their customers has helped expand the market, particularly because Microsoft has been good at convincing mainstream adopters that they’re actually late adopters, spurring companies to action. Microsoft’s comprehensive hybrid story, which spans applications and platforms as well as infrastructure, is highly attractive to many companies, drawing them towards the cloud in general.

Some customers who have been pitched Azure will actually go to Azure. But for many, this is a trigger to look at several providers. It’s not unusual for customers to then choose AWS over Azure as the primary provider, due to AWS’s better feature set, greater maturity, and better ecosystem; those customers will probably also do things in Azure, although they may be earlier-stage and smaller projects. Of course, many existing AWS customers are also adding Azure as a secondary provider. But it may well be that Microsoft’s serious entry into this market has actually helped AWS, in terms of absolute usage gains, more than it has hurt it — for the time being, of course. Over time, Microsoft’s share gains will come at AWS’s expense.

The Impact of Google

Google has been delivering technology at an impressive pace, but there is an enormous gulf between its technology capabilities and its go-to-market prowess. They also have to overcome the perception problem that people aren’t sure if Google is serious about this market, and are therefore reluctant to commit to the platform.

While everyone is super-interested in what Google is doing, at the moment they do not seem to be winning significant customers from AWS. They’re doing reasonably well in batch-computing scenarios (more as a rival to AWS spot instances), and in scenarios where all of Google Cloud Platform, including App Engine, is of interest. But they do not seem to have really gotten market momentum, yet.

However, in this market and in many other markets, a competitor does not need to beat the market leader in order to hurt them. They merely need to change customer expectations of what the price should be — and AWS has allowed Google to set the price. As long as Google seems like a credible threat, AWS will likely continue to be competitive on price, at least for those things that Google also offers. Microsoft has already publicly pledged to be price-competitive with AWS, so that pretty much guarantees that the three providers will move in lockstep on pricing.

The Impact of Smaller Providers

Digital Ocean, in particular, is making a lot of noise in the market. They’ve clearly established themselves as a VPS provider to watch. (VPS: virtual private server, a form of mass-market hosting.) However, my expectation is that Digital Ocean’s growth comes primarily at the expense of providers like Rackspace and Media Temple (owned by GoDaddy), rather than at the expense of AWS et al. A Digital Ocean droplet (VM) is half the price of an AWS t2.micro (the cheapest thing you can get from AWS), and they’ve been generous with free-service coupons.

VPS providers have friendly control panels, services that are simple and thus easy to use, and super-low prices; they tend to have a huge customer base, but each customer is, on average, tiny (usually just a single VM). These days, VPS is looking more and more like cloud IaaS, but the orientation of the providers tends to be quite different, both from a technology and a go-to-market perspective. There’s increasing fluidity between the VPS and cloud IaaS markets, but the impact on AWS is almost certainly minimal. (I imagine that some not-insignificant percentage of AWS’s free tier would be on VPS instead if not for the free service, though.)

There’s plenty of other noise out there. IBM is aggressively pitching SoftLayer to its customer base, but the deals I’ve seen have generally been bare metal on long-term contracts, usually as a replatform of an IBM data center outsourcing deal. Rackspace’s return to its managed hosting roots is revitalizing. CenturyLink continues to have persuasive sales. VMware customers remain curious about vCHS. And so on. But none of these providers are growing the way that AWS and Microsoft Azure are, and both providers are gaining overall market share at the expense of pretty much everyone else.

The sky is not falling

With Microsoft and Google apparently now serious about this market, AWS finally has credible competitors. Aggressive, innovative rivals are helping to push this market forward even faster, to the detriment of most other providers in the market, as well as the IT vendors selling do-it-yourself on-premises private cloud. AWS is likely to continue to dominate this market for years, but the market direction is no longer as thoroughly in its control.

Impressions of IBM Pulse

Once upon a time, IBM Pulse was a systems management conference. But this year, IBM has marketed it as a “cloud” conference. The first day of keynotes was largely devoted to IBM’s cloud efforts, although the second day keynote went back to the systems management roots (if still with a cloudy spin). IBM has done a good job of presenting a coherent vision of how it intends to go forward into the world of cloud, which is explicitly a world of changing business demands and an altered relationship between business and IT.

Notably absent amidst all of this has been any mention of IBM’s traditional services business (strategic outsourcing et al.), but the theme of “IBM as a service” has resonated strongly throughout. IBM possesses a deep portfolio of assets, and exposing those assets as services is key to its strategy. This is going to require radical changes in the way that IBM goes to market, with a much greater emphasis on marketing-driven online sign-up and self-service. (IBM, like other large tech vendors, is largely sales-driven today.)

Some serious brand-building for IBM’s SoftLayer acquisition is being done here, and IBM seems to be trying to redefine everything that SoftLayer does as cloud, even though SoftLayer’s business is almost all dedicated hosting (bare metal, sold month-to-month), not cloud IaaS in the usual sense of the word. There’s abundant confusion as a result; the cloudwashing is to IBM’s benefit, though, at least for now.

IBM has an enormous installed base, across its broad portfolio, and for a large percentage of that base, it is a strategic vendor. IBM has to figure out how to get that customer base to buy into its cloud vision, and to make the bet that IBM is the right strategic partner for that cloud journey. IBM looks to be taking a highly neutral stance on the balance of cloud (services) versus internal IT; arguably, much like Microsoft, its strength lies in the ostensible ability to blend on-premises do-it-yourself IT with services in the cloud, extending the lifetime of existing technology stacks.

Much like Microsoft, IBM has an existing legacy of enterprise software — specifically, software built for single-tenant, on-premises, bespoke environments. Such software tends not to scale, and it tends not to be easily retrofitted into a services model. Again like Microsoft, IBM is on a journey towards “cloud first” architecture. IBM’s acquisition of Cloudant isn’t just the acquisition of a nice bit of technology — it’s also the acquisition of the know-how of how to build a service at scale, a crucial bit of engineering expertise that it needs to absorb and teach within IBM, as IBM’s engineers embark on turning its software into services.

Again like Microsoft, IBM has the advantage of an existing developer ecosystem and middleware that’s proven to be sticky for that ecosystem — and consequently IBM has the potential to turn itself into a compelling cloud platform (in the broadest of senses, integrated across the IaaS, PaaS, and SaaS boundaries). Since cloud is in many ways about the empowerment of the line-of-business and developers, this is decidedly helpful for IBM’s future ambitions in the cloud.

So another juggernaut is on the move. Things should get interesting, especially when it comes to platforms, which is arguably where the real war for the cloud will be fought.

Vendors, Magic Quadrants, and client status

I’m writing this blog post for vendors who are in Magic Quadrants or who are hoping to be in Magic Quadrants, as well as the Gartner account executives (AEs) who have such vendors as clients and prospects. It’s in lieu of having to send an email blast to a lot of people; since it’s more generic than just my own Magic Quadrants, here it is for the world.

So, to sum up:

Whether or not a vendor is a Gartner client has no bearing on whether they are on a Magic Quadrant, or how they are rated. Vendors should therefore refrain from attempting to use pressure tactics on Gartner AEs, and Gartner AEs should be careful to avoid even the appearance of impropriety in dealing with vendors in a Magic Quadrant context. Vendors should conduct Magic Quadrant communications directly, using the contact information they were given.

And here’s the deeper dive:

Vendors, you’ve been given contact info for a reason. Please use it. As part of the process, every vendor being considered for an MQ is given points of contact — generally an admin coordinator as well as one or more of the analysts involved in the MQ. You’re told who you should go to if you have questions or issues — often the coordinator, lead analyst, or some specific analyst designated as your point of contact (POC). You should communicate directly with the POC. Do not go through your Gartner AE, other analysts that you deal with, or otherwise attempt to have a third party relay your concerns. Also, communicate via the contact you designated as the responsible party within your organization; we cannot, for instance, work with your PR firm. Gartner has a strict process that governs MQ-related communications; we ask that you do this so that we can ensure that all conversations are documented, and that your message is clearly and directly heard.

Yes, we mean it. Please contact us with questions and issues. If you’ve read everything available to you (the official communications, the Gartner documentation on how MQs work, any URLs you were given, and so on), and it doesn’t answer your question, please reach out to us. If you have an issue, please let us know. The analyst is the authoritative source. Anything you hear from anyone else isn’t. Gartner AEs don’t have any kind of privileged knowledge about the process, so don’t depend on them for information.

A vendor’s client relationship is of no relevance. The analysts do not care if a vendor is a client, how big of a client they are, whether they’re going to buy reprints if they get a certain placement, will become a client if they’re included on the MQ, or about any other attempt to throw their weight around. Vendors who try to do so are likely to be laughed at. Gartner AEs who try to advocate on behalf of their clients will annoy the analyst, and if it doesn’t cease, strongly risk having the analyst complain about them to the Ombudsman. In general, analysts prefer not to even know about what issues an AE might be having that may in some way be impacted by the MQ. It may even backfire, as the analyst’s desire to avoid any appearance of impropriety may lead to much closer scrutiny of any positive statements made about that vendor.

In short: Vendors shouldn’t try to go through Gartner Sales to communicate with the analysts involved in a Magic Quadrant. The right way to do this is direct, via the designated contacts. I know it’s natural to go to someone whom you may feel is better able to plead your case or tell you how they think you can best deal with the analyst, but please avoid the urge, unless you really just want a sounding board and not a relay. If you want to talk, get in touch.

Specific Tips for Magic Quadrant briefings

This is part 2 of a two-part post. The first part contains general tips for Magic Quadrant briefings, applicable to any vendor regardless of how much contact they’ve had with the analyst. This second part divides the tips by that level of contact.

Richard Stiennon’s UP and to the RIGHT, which provides advice about the Magic Quadrants, stresses that preparation for this process is really a continuous thing — not a massive effort that’s just focused upon the evaluation period itself. I agree very strongly with that advice.

Nevertheless, even AR professionals — indeed, even AR professionals who are part of big teams at big vendors — often don’t do that long-term prep work. Consequently, there are really three types of vendors that enter into this process — vendors that the analyst is already deeply familiar with and where the vendor and analyst maintain regular contact (no client relationship necessary, can just be through briefings), vendors that the analyst has had some contact with but doesn’t speak to regularly, and vendors that the analyst doesn’t really know.

Tips for Vendors the Analyst is in Frequent Contact With

In general, if you have regular contact with the analyst — at least once, if not several times, a quarter — a briefing like this shouldn’t contain any surprises. It’s an opportunity to pull everything together in a unified way and back up your statements with data — to turn what might have been a disjointed set of updates and conversations, over the course of a year, into one unified picture.

Hit the highlights. Use the beginning of the briefing as a way to summarize your accomplishments over the course of the year, and refresh the analyst’s memory on things you consider particularly important. You may want to lay out your achievements in a quarter-by-quarter way in the slide deck, for easy reference.

Provide information that hasn’t been covered in previous briefings. Make sure you remember to mention your general corporate achievements. Customer satisfaction, changes to your channel and partnership model, financial accomplishments, and other general initiatives are examples of things you might want to touch on.

Focus on the future. Lay out where you see your business going. If you can do so on a quarter-by-quarter basis (which you may want to stipulate is under an NDA), do so.

Tips for Vendors the Analyst Knows, Without Frequent Contact

If the analyst knows you pretty well, but you haven’t been in regular contact — the analyst hasn’t gotten consistently briefed on updates, or been asked for input on future plans — not only do you want to present the big picture, but you want to make sure that they haven’t missed anything. Your briefing is going to look much like the briefing of a vendor who has been in frequent contact, but with a couple of additional points:

Tell a clear story. When you go through the highlights of your year, explain how what you’ve done and what you’re planning fits into a coherent vision of the market, where you see yourself going, and how it contributes to your unique value proposition.

Have plenty of supplemental material. Because the analyst might not have seen all the announcements, it’s particularly important that you ensure that your slide deck has an appendix that summarizes everything. Link to the press release or more detailed product description, if need be.

Tips for Vendors the Analyst Doesn’t Really Know

Sometimes, you just won’t know the analyst very well. Maybe you’re a new vendor in the space, or have previously been too small to draw attention from the analyst. Maybe this space isn’t a big focus for you. Maybe you’ve briefed the analyst once a year or so, but don’t really stay in touch. Whatever the case, the analyst doesn’t know you well, and therefore you’re going to spend your briefing building a case from the ground up.

Clearly articulate who you are. Start the briefing with your elevator pitch. This is your business and differentiation in a handful of sentences that occupy less than two minutes. The analyst is trying to figure out how to summarize you in a nutshell. Your best chance of controlling that message is to (credibly) assert a summary yourself. Put your company history and salient facts on an appendix to the slides, for later perusal. Up front, just have your core pitch and any metrics that support it.

Pare your story to the bare essentials. Spend the most time talking about whatever it is that is your differentiation. (Example: If you know your product isn’t significantly differentiated but for whatever reason they love you in emerging markets, sweep through describing your product by comparing it to a common baseline in your market, and conceding that’s not where you differentiate, then focus on your emerging markets story in detail.) If your differentiation is in your product/service and not in your general business, focus on that — you can assume the analyst knows what the common baseline is, so you can gloss over that baseline in a few sentences (you can have more detail on your slides if need be) and then move on to talking about how you’re unique.

Be customer-centric. Explain the profile of your customers and their use cases, and make it clear why they typically choose you. Resist the urge to focus on brand-name logos, especially if those aren’t typical or aren’t your normal use case. Logo slides are nice, but they’re even better if the logos are divided by use case or some other organizing function that makes it clear what won you the deal.

Ensure you talk about your go-to-market strategy. Specifically, explain how you sell and market your product/service. Even if this isn’t especially exciting, spend at least a slide talking about how you’re creating market awareness of your company and product/service, and how you actually get it into the hands of prospects and win those deals.

Provide a deep appendix to your slide deck. You should feel free to put as much supplemental material as you think would be useful into an appendix or separate slide deck for the analyst’s later perusal. Everything from executive bios to a deep dive on your product can go here. If it’s in your presentation for investors, or part of your standard pitch to customers, it can probably go in the deck.

General Tips for Magic Quadrant Briefings

Three years ago, I wrote a blog post called “What Makes for an Effective MQ Briefing?” This is a follow-up, with my thoughts somewhat more organized.

For AR and PR people: before you do anything else, if you have not read it, get and read Richard Stiennon's UP and to the RIGHT. I cannot recommend it enough; Richard has seen the process both as a Gartner analyst and as a vendor, and the book is chock-full of good tips. It will help you understand the process, how the analysts think, and how you can best present yourselves.

This is part 1 of a two-part post, and contains the general tips. The next post contains tips specific to each of the three types of vendors. Every single one of these tips comes with one or more horror stories I could tell you about past Magic Quadrant briefings. What seems obvious often isn't.

General Tips

Pick the right presenters. You don't need your most senior executives on the phone. You do need people who present well, who thoroughly know the source material, and who can answer whatever questions arise. Ensure that your presenters have a good, clear phone connection (strongly discourage cellphones and speakerphones) and a quiet environment. If possible, choose presenters who speak fluent English and don't have accents that are difficult to understand, even if that requires using someone more junior and simply having the expert or executive on hand to take questions. You may even want to choose presenters who speak relatively quickly, if you feel time-crunched.

Send the presentation in advance. Ensure that you have emailed a copy of the presentation deck, in PowerPoint format, to the administrative coordinator a day in advance of the briefing (the coordinator can distribute it to the analysts). Be prepared to quickly resend it to anyone who has not received it. Do not send PDFs; analysts like to take notes on the slides. If you're relying on a web conference, still send the slides in advance. Get set up on the conference ahead of time, and use a single PC to drive the whole presentation, so you don't waste time or run into technical difficulties switching from screen to screen.

Be on time, and don’t waste time. Make sure that your dial-in number is distributed to everyone who needs to be on the call. Dial into the bridge early if need be. Have someone available to wrangle your executives if they’re running late. Have a backup presenter if you have an executive who’s notorious for not making meetings on time. Do not waste time making introductions. A presenter can briefly state their name and functional role (“I’m Joe Smith, and I run our support organization”) before starting their portion of the presentation. If you think your executive bios are important, include them as an appendix to your slides. Do not spend time having analysts introduce themselves; there are bios for them on Gartner’s website, and they will state their names when they ask questions.

Watch the clock. In a Magic Quadrant briefing, you typically have one hour to present. Analysts are almost always scheduled back to back, so you will not have a single extra minute, and may in fact have several minutes less if the analysts are held up by a previous meeting. You also want to leave time for questions, so target a 45-minute presentation. If you think you have too many slides, you probably do. Rehearse, if need be, to make sure you can get through your material in 45 minutes. Do not expect to be given the opportunity to do another briefing if you fail to finish in your allotted hour.

When an analyst prods you along, move on. One or more of the analysts may cut you short and ask you to move to the next slide, or even skip a few slides. Listen, and move on. By the time someone has gotten to the point of cutting you off, the analysts (who are almost certainly in an IM conversation with each other) have already agreed that what you're saying is useless to them, and that this part of the presentation is dull beyond endurance. If you think there's a really important point buried in what you're saying, make it and move on, or incorporate it into some other part of your presentation. Do not keep plodding along.

Focus on this particular Magic Quadrant's market. If you have a broad solutions portfolio, that's great, but remember that the analysts, and this Magic Quadrant, are focused on something specific. The broader portfolio can be worth mentioning, especially if it allows you to deliver higher-value, directly relevant, highly integrated solutions, but not at the expense of focus on this Magic Quadrant's topic, which should always be front and center. If you find yourself wanting to talk instead about some related market in which you think you're much stronger, don't; refocus on the core topic.

Don’t talk about the market in general. Any analyst involved in a Magic Quadrant is, at least in theory, an expert on the market. They don’t want or need to hear about it. The only exception might be if you serve some specialized niche that the analyst does not often come into contact with. Your perspective on the market should be made clear by the specific actions that your company has taken, and will take in the future; you can explain your rationale for those decisions when you go through them, without doing a big up-front thing about the market.

Focus on your differentiation. The analysts want to know what you think makes you different, not just from the market leaders, but from your closest competitors. They want to know who you're locked in bloody-knuckled combat with each day, and why you win or lose those deals. But focus on explaining what you do well and where you intend to be superior in the future; don't waste time badmouthing your competitors.

Be concrete and incorporate metrics whenever possible. Analysts hear broad directional statements all the time and are usually unimpressed by them; they're interested in the specifics of what you've done and what you're going to do. Analysts love numbers. Their lives are full of vendors making grandiose claims, so they like to see evidence whenever possible. (For instance, are you claiming that your customers are much happier this year than last year? Show last year's and this year's NPS scores, ticket closure times, or whatever other concrete evidence you have.) You don't need to read the metrics aloud. Just make the general point ("customer satisfaction has increased greatly in the last year, as our NPS scores show"), and have the metrics on the slides so the analysts can dive into them later. You can request that the metrics be kept under NDA if need be.

Disclose your future roadmap. A one-year or two-year roadmap, especially one that’s quarter-by-quarter, is going to make a much bigger impression than a general statement about aspirations. If you have to state that part of the briefing is under NDA, that’s fine; the analysts will still factor that information into the rating, implicitly. You may have great things planned, but if the analysts don’t know about them, you’ll get zero credit for those things when they consider your vision.