FinOps can be a big waste of money

Of late, my colleagues and I have been talking to a lot of clients who want to build a “FinOps team”, which they seem to hope will wave magic wands and reduce their cloud IaaS+PaaS bill. I’m struck by how many clients I talk to don’t have cloud cost problems that are reasonably solvable with FinOps.

Bluntly: For many organizations, there is no reasonable ROI on FinOps (and certainly no sensible business case for building a FinOps team).

This doesn’t mean that these organizations shouldn’t manage their cloud finances. It just means that they don’t need to manage their cloud finances in a way that’s meaningfully different from the way they’ve historically managed IT spending in their on-premises data centers. I’ll use the term “FinOps” colloquially here to mean an approach and set of processes for cloud financial management that differ from an organization’s established on-premises IT financial management.

There are lots of common reasons why your organization might not need FinOps. For example:

  • You don’t use self-service. Your developers, app management engineers, data scientists, and other technical end-users do not have direct self-service access to cloud services. Instead, all cloud design and provisioning is done by a central infrastructure and operations (I&O) team — or alternatively, all cloud requests go through a service catalog and are manually reviewed and approved. Therefore, nothing happens in the cloud that’s outside of central I&O’s knowledge or control — likely allowing you to manage budgets like you did on-premises.
  • You have little to no variability in production. Your applications are allocated a static amount of infrastructure, and/or their usage is almost entirely predictable (for example, they autoscale up during the last week of the month, and then autoscale down after the close of the month). Therefore, your cloud bill for each application is essentially the same every month. You should nevertheless configure budget alerts in case something weird happens that makes usage spike, but that likely will be a one-time thing when the application is first deployed, perhaps with a once-a-year review.
  • You’re not spending much money in the cloud. If you’re not spending much money, even a significant percentage reduction in spend (which you could potentially get, for instance, by eliminating cloud dev/test VMs that are no longer used and could simply be turned off) won’t amount to many hard dollars of savings. Putting automation in place that hibernates or deprovisions unused infrastructure may have a useful ROI (see the sketch after this list), but playing manual whack-a-mole that involves a lot of people (whether in paperwork or actually mucking with infrastructure) almost certainly wastes more money in labor time than it saves in cloud costs.
  • You don’t have infrastructure-hungry applications. Enterprises often don’t have the voracious scale-out cloud-native applications that are common in digital-native companies, or they have only a small handful of those applications. You might be spending significant money in the cloud, but it’s spread across dozens, hundreds, or even thousands of small applications. Therefore, even if you could cut the necessary capacity for a given application in half, it wouldn’t generate much in the way of monthly cost savings — likely not enough to justify the time of the people doing the work. Lots of enterprises run boring everyday “paperwork” apps on a VM or two (or these days, a container or two). A single-VM app often peaks at around 40% utilization, because of powers-of-two cloud VM sizing, so dropping a “T-shirt” size halves the capacity and pushes peak utilization toward 80-90%, which many enterprises feel is uncomfortably tight. (And lots of organizations are slightly oversized across the board because they took the “safe” estimate of capacity needs from their cloud migration tools.)
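
As an illustration of the kind of automation that can pay for itself, here’s a minimal sketch that finds and stops idle dev/test VMs. It assumes AWS with boto3, a hypothetical env=dev tagging convention, and arbitrary idle thresholds; treat it as a starting point, not a finished policy.

```python
"""Minimal sketch: stop dev/test EC2 instances that have sat idle for two weeks.
Assumes AWS credentials in the environment, boto3, and a hypothetical env=dev tag
convention; the CPU threshold and lookback window are arbitrary."""
from datetime import datetime, timedelta, timezone

import boto3

ec2 = boto3.client("ec2")
cloudwatch = boto3.client("cloudwatch")

IDLE_CPU_PERCENT = 2.0   # treat anything below this daily-average CPU as idle
LOOKBACK_DAYS = 14

def idle_dev_instances():
    """Yield IDs of running env=dev instances with negligible CPU use."""
    now = datetime.now(timezone.utc)
    pages = ec2.get_paginator("describe_instances").paginate(
        Filters=[
            {"Name": "tag:env", "Values": ["dev"]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )
    for page in pages:
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                stats = cloudwatch.get_metric_statistics(
                    Namespace="AWS/EC2",
                    MetricName="CPUUtilization",
                    Dimensions=[{"Name": "InstanceId", "Value": instance["InstanceId"]}],
                    StartTime=now - timedelta(days=LOOKBACK_DAYS),
                    EndTime=now,
                    Period=86400,              # one datapoint per day
                    Statistics=["Average"],
                )
                averages = [point["Average"] for point in stats["Datapoints"]]
                if averages and max(averages) < IDLE_CPU_PERCENT:
                    yield instance["InstanceId"]

if __name__ == "__main__":
    idle = list(idle_dev_instances())
    if idle:
        # Stopping is the gentlest option; hibernating or terminating is a policy choice.
        ec2.stop_instances(InstanceIds=idle)
        print(f"Stopped idle dev/test instances: {idle}")
```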

Buying FinOps tools and allocating people to FinOps activities can cost you more than they save.

Most people launch FinOps practices by purchasing a cloud cost optimization tool of some sort (i.e. a “FinOps tool”). Complicated FinOps processes, and/or having a lot of teams and applications to corral within your cloud cost governance framework, probably do create a genuine need to purchase a third-party FinOps tool — but those tools probably don’t represent a positive ROI until you’re spending at least a million dollars a year. And then you have to remember that the percentage-of-cloud-spend pricing scheme of those tools can mean that you’re giving the FinOps-tool vendor a pile of money for service elements that they have no optimization capabilities for.

But in many cases, the cost of a tool will be dwarfed by the expense of the employees doing this work, especially in organizations that are making a misguided effort to hire a “FinOps team”. Not only does FinOps represent finance and sourcing overhead, but also cloud operations and engineering overhead — and, most of all, developer overhead (and overhead for any other technical team being asked to do cloud optimization work). If you go further and end up hiring a team that does performance engineering, those people are super rare and expensive.

In other words, being somewhat oversized in the cloud — or being somewhat inefficient in your application code — is a form of insidious creeping technical debt. But it’s the kind of technical debt that tends to linger, because when you look at the business case to actually go after it, there’s inadequate ROI to justify the effort. (Indeed, on-premises, people historically haven’t much cared. They throw hardware at the problem and run heavily oversized anyway. Nobody thinks about it, because there’s capital budget to buy the gear, and once the gear is purchased, there isn’t much reason to contemplate whether the money was spent efficiently.)

Moreover, does your business actually want your highly-paid application development teams to chase performance issues in their code, or do they want them adding new features that will deliver new functionality to the business, saving you money elsewhere in your business processes and/or delivering something that will be compelling to customers, thus increasing your top-line revenue?

I certainly think it’s important for nearly all organizations to do some cloud financial management, which they will probably support with tooling. They’ve got to do the basics of cloud cost hygiene (preventing gross waste), budget alerts (to gain rapid awareness of accidents), spend allocation (showback/chargeback), and discount-related planning (what’s necessary for commits, reserved instances, savings plans, etc.) — but even there, the effort needs to be proportional to the potential cost savings.
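
The budget-alert piece, in particular, takes very little effort. Here’s a minimal sketch using the AWS Budgets API via boto3; the account ID, dollar amount, threshold, and notification address are placeholders.

```python
"""Minimal sketch: a monthly cost budget with an alert at 80% of actual spend.
Assumes AWS and boto3; the account ID, amount, threshold, and email are placeholders."""
import boto3

budgets = boto3.client("budgets")

budgets.create_budget(
    AccountId="123456789012",                      # placeholder account ID
    Budget={
        "BudgetName": "monthly-cloud-spend",
        "BudgetLimit": {"Amount": "50000", "Unit": "USD"},
        "TimeUnit": "MONTHLY",
        "BudgetType": "COST",
    },
    NotificationsWithSubscribers=[
        {
            "Notification": {
                "NotificationType": "ACTUAL",      # alert on actual, not forecasted, spend
                "ComparisonOperator": "GREATER_THAN",
                "Threshold": 80.0,                 # percent of the budget limit
                "ThresholdType": "PERCENTAGE",
            },
            "Subscribers": [
                {"SubscriptionType": "EMAIL", "Address": "cloud-costs@example.com"}
            ],
        }
    ],
)
```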

But full-ceremony FinOps, so to speak, is usually better reserved for big money-pit applications where cloud engineer or developer effort can have a significant impact on cost; for organizations with substantial self-service and no culture of cost discipline; or for the big spenders where even moving the needle a little bit on things like basic hygiene can have a pretty large absolute dollar impact relative to the investment.

GreenOps for sustainability must parallel FinOps for cost

Cloud customers are trying to make meaningful sustainability decisions. To really reduce carbon impact (or other types of environmental impact), they need the transparency to understand the impact of their architectural decisions. Just like they need to be able to estimate the cost of a solution, they need to be able to estimate its environmental impact. They need to be able to get an estimate of what the “environmental bill” will be based on the region (and maybe zone), services, and service options they choose. To the extent possible, they then need to see what impact they’re actually generating based on actual utilization.

In other words, they need “GreenOps” the way that they need “FinOps” (using FinOps as a generic term for cloud financial management in this context). And because sustainability is not just carbon impact, they’ll probably eventually need to see a multidimensional set of metrics (or a way to create a custom metric that weights different things that are important to them, like water impact vs carbon impact).
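
To make the “custom weighted metric” idea concrete, here’s an illustrative sketch. The weights and the per-option figures are entirely hypothetical placeholders, not real provider data, and a real metric would normalize each dimension before weighting.

```python
"""Illustrative sketch: a custom weighted score for comparing deployment options on
cost and environmental impact. All weights and figures are hypothetical placeholders,
and a real metric would normalize each dimension before weighting."""
from dataclasses import dataclass

@dataclass
class DeploymentOption:
    name: str
    monthly_cost_usd: float
    carbon_kg_co2e: float    # estimated monthly emissions
    water_liters: float      # estimated monthly water consumption

def weighted_score(option: DeploymentOption,
                   w_cost: float = 0.5,
                   w_carbon: float = 0.3,
                   w_water: float = 0.2) -> float:
    """Lower is better; the weights express what the organization cares about most."""
    return (w_cost * option.monthly_cost_usd
            + w_carbon * option.carbon_kg_co2e
            + w_water * option.water_liters)

# Made-up comparison of two hypothetical regions:
options = [
    DeploymentOption("region-a", monthly_cost_usd=10_000, carbon_kg_co2e=4_000, water_liters=9_000),
    DeploymentOption("region-b", monthly_cost_usd=11_500, carbon_kg_co2e=2_500, water_liters=6_000),
]
best = min(options, key=weighted_score)
print(f"Preferred option under these weights: {best.name}")
```

Under these made-up weights, the slightly more expensive but lower-impact option wins, which is exactly the kind of pay-more-to-emit-less trade-off described below.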

Cloud providers have relatively decent cost tools — cost calculators that allow you to choose solution elements and estimate your bill, cost reporting of various sorts, and so forth. Similarly, the third-party FinOps tooling ecosystem provides good visibility and recommendations to customers.

We don’t need totally new dashboards and tools for sustainability. What we need is an extension of the existing cloud cost optimization tools (and the cost transparency and billing APIs that enable those tools) to display environmental impacts as well, so we can manage them alongside our costs. Indeed, most customers will want to make trade-offs between their environmental footprint and their costs. For instance, are they willing to pay more to lower their greenhouse gas emissions?

Of course, there are many ways to measure sustainability and many different types of impacts, and not all of them are well suited to this kind of granular breakdown — but drawing a GreenOps parallel to FinOps would help customers extend the tools and processes that they already use (or are developing) for cost management to the emerging need for sustainability management.

The road to cloud purgatory

It’s said that the road to hell is paved with good intentions. Well, in my opinion, the road to purgatory is paved with empty principles.

It’s certainly common enough in cloud adoption. Day after day, clients show up with cloud strategies that say things like, “We will use the cloud to be more innovative” and “We will be vigilant about costs and use the lowest-cost solutions” and “We will maximize our availability and resilience” and “We will be safe and secure in the cloud” and “We’re not going to get locked into our vendors”.

Some of these things are platitudes. Obviously, no one ever shows up with, “We will be careless and irresponsible in the cloud” or “Our implementations will be the shoddiest we can get away with” or “We’ll cheerfully waste money”. Principles that don’t help you make decisions aren’t very useful.

Principles like these are only interesting insofar as they represent a ranked set of priorities. When it comes down to “higher availability” versus “lower cost”, which are you going to choose? When you have to choose between a portable solution and a more innovative one, how are you going to make that decision? (And if you think you’ve discovered a miraculous solution for cloud portability, some vendor has suckered you. Badly.)

My cocktail-napkin cloud strategy (Gartner paywall) research note asks you to make just a handful of decisions:

  • Your stance on what to do with new business solutions (i.e. new apps)
  • Your stance on cloud migration
  • Your stack-ranked priority for business agility, short-term costs and long-term TCO
  • Your appetites for risk, transformation, and business independence from central IT

It’s not unusual for us to see 20-page, 50-page, even 100-page cloud strategies that contain no clear decisions about any of those elements, because they are the things that are controversial — so they’ve simply been left out. So the strategy contains worthless platitudes, thoughtful governance is impossible, and actual cloud adoption stalls out on endless arguments that constantly relitigate the same conflicts.

If you’re constantly arguing about cloud-related decisions, or your lovely declaration of “cloud first!” doesn’t seem to result in any successful cloud adoption, take a hard look at your principles and the organizational alignment around those principles and priorities. And make sure your principles can actually be pragmatically implemented.

Cloud adoption will fail because of the skills gap

In order to adopt cloud IaaS and PaaS successfully (and arguably, to adopt SaaS optimally), an organization needs skills. Most of all, it needs technical skills for the whole application lifecycle in the cloud — the ability to architect applications (and their underlying stacks) for the cloud, develop for the cloud, secure the cloud, run and manage and govern the cloud environments and the applications in those environments. The more cloud-natively you can do these things, the better.

If you can’t do these things in cloud-native patterns (often because you’re migrating your legacy), you at least want to try to modernize and cloud-optimize — to leverage PaaS rather than IaaS, to automate everything you reasonably can, and otherwise exploit the cloud capabilities to maximum effectiveness. This, too, requires skills.

Cloud skills — and the associated “soft skills” and mindset — are needed in infrastructure and operations (I&O) teams and security and risk management (SRM) teams. They’re needed in application teams, data science teams, and other technical end-user teams that exploit cloud services, along with enterprise architecture and other architecture teams. There are also nontechnical skills that have to be built in the appropriate teams — effective cloud sourcing, effective cloud financial management, and so on.

My colleagues and I have previously written that the cloud skills gap has reached a crisis level in many organizations. Organizational timelines for cloud adoption, cloud migration and cloud maturity are being impacted by the inability to hire and retain the people with the necessary qualifications.

There are lots of reasons for the skills gap: an insufficient number of trained and experienced people to meet demand; escalating salaries, plus a globalized market for talent, which results in NYC banks nabbing skilled cloud architects who work for enterprises in Iowa or Missouri (or Poland) at NYC banking salaries; and the quality of the opportunities available.

For instance, an increasing number of the technical professionals I talk to care more about good executive support for the cloud program, a cloud team that’s executing well and doing smart things, an opportunity to bring their best selves to work (excellent team management, great colleagues, feeling valued, etc.), and strong belief in the organization’s mission, than they do about pay per se. It isn’t just about pay — but at a lot of slow-moving enterprises where the pay isn’t great, there are also cultural issues that make highly-skilled cloud professionals feel out of place and not valued.

While many organizations are trying to retrain existing personnel (I&O personnel especially), these efforts can fail because the DevOps emphasis of successful cloud-optimized or cloud-native adoption results in fundamentally different jobs. Not only does this require the development of strong automation (and thus coding) skills, but it also results in a more project-driven workday, greater autonomy (but also more self-starting and self-motivation), and more communication and collaboration with application teams and other cloud users. Those who prefer “IT factory work”, solitarily executing repetitive ClickOps tasks driven by service requests, generally don’t enjoy the change in the nature of the job.

Organizations that can’t retain cloud-trained staff often react by leaning more heavily on the people that remain, which jacks up stress levels, leads to resentment, and often turns into a spiral of departures. Contractors can help fill the gap — if the organization is willing to spend the money to hire them.

Many organizations are successfully bridging the gap with consulting (professional services) and managed services (a Gartner survey showed about three-quarters of organizations use such services for at least a portion of their cloud IaaS+PaaS adoption). Many cloud managed services deals include explicit training and gradual handover to the customer’s personnel, allowing the customer to take over bit by bit as their team gets comfortable. However, MSPs, SIs, and other outsourcers are also struggling to fulfill the demand, which leads both to project delays and to throwing less-qualified bodies into the mix in order to try to meet contractual obligations and grow revenues.

I believe that we are rapidly reaching the point where the skills gap is not only endangering the ability of individual organizations to fulfill their cloud computing ambitions, but where we may begin to see systemic back-off from cloud ambitions, resulting most notably in cancelled or substantially scaled-back cloud migrations as a common market pattern. (Disclaimer: This is a personal statement scribbled while eating lunch. It is not a peer-reviewed Gartner position.) Also, note that in no way am I claiming that this is likely to lead to repatriation!

Organizations that are late cloud adopters were already more hesitant about going to the cloud in the first place. They tend to have less of a belief that IT can help drive business success, have more technical debt, and tend to have lesser-skilled people (with less up-to-date skills). They may also have absorbed many people who fled early cloud-adopting organizations because those people didn’t want to re-skill, so late adopters face significantly harder internal pushback and potentially internal sandbagging of cloud projects. When they do manage to successfully train people, those people often leave within a year for both better pay and a more congenial, faster-moving environment. Late adopters may simply not be able to generate enough internal competence to even safely and successfully use outsourced assistance.

However, even organizations that are not late adopters often have different parts of the business moving at different paces of adoption. Notably, they may have digital business divisions, or more ambitious fast-moving business units in general, that have substantial cloud adoption, while other parts of the organization lag behind. Those that are charging ahead may remain successful and continue to expand in the cloud, while the rest of the organization remains unable to beg, borrow, or steal enough skills from those successful outposts to overcome the on-premises inertia.

This may lead to an exacerbation of existing market patterns in which the digitally ambitious have had outsized and potentially disruptive success… and in which other organizations are unable to imitate those successes, leading not just to failures of IT projects, but also to meaningful negative business impacts. This, in turn, has a follow-on effect on the cloud providers. As enterprise bets on the cloud grow bigger, one might begin to see these projects, especially mass migrations and transformations, as gambles more so than realistically executable plans. Any plan that is predicated on “if we can get the people who can do this stuff” carries a nontrivial probability of failure.

(As always, my Gartner colleagues and I are happy to advise on inquiry, but there’s only so much we can help you with your skills gap if your organization has the deadly triplet of not offering good pay, not providing a good working environment, and not making people feel like they’re doing something valuable with their lives. However, I also spend some significant percentage of my inquiry time listening to people vent, so I’m happy to sympathize with your tale of woe, too, and in most cases reassure you that what you’re trying to get your organization to do would be the right thing…)

Cloud self-service doesn’t need to invite the orc apocalypse

I spend quite a bit of time talking to clients about developer self-service, largely in the context of public cloud governance and cloud operations. There are still lots of infrastructure and operations (I&O) executives who instinctively cringe at the notion of developer self-service, as if self-service would open formerly well-defended gates onto a pristine plain of well-controlled infrastructure, and allow a horde of unwashed orcs to overrun the concrete landscape in a veritable explosion of Lego structures, dot-matrix printouts, Snickers wrappers and lost whiteboard marker caps… never to be clean and orderly again.

It doesn’t have to be that way.

Self-service — and more broadly, developer control over infrastructure — isn’t an all-or-nothing proposition. Responsibility can be divided across the application life cycle, so that you can get benefits from “You build it, you run it” without necessarily parachuting your developers into an untamed and unknown wilderness and wishing them luck in surviving because it’s not an Infrastructure & Operations (I&O) team problem any more.

So we ask, instead:

  1. Will developers design their own infrastructure?
  2. Will developers control their dev/test environments?
  3. How much autonomy will developers have in building production environments?
  4. How much autonomy will developers have for production deployments?
  5. To what extent are developers responsible for day-to-day production maintenance (patching, OS updates, infrastructure rightsizing, etc.)?
  6. To what extent are developers responsible for incident management?
  7. How much help will developers receive for the things they’re responsible for?

I talk to far too many IT leaders who say, “We can’t give developers cloud self-service because we’re not ready for ‘You build it, you run it’!”, whereupon I need to gently but firmly remind them that it’s perfectly okay to allow your developers full self-service access to development and testing environments, and the ability to build infrastructure as code (IaC) templates for production, without making them fully responsible for production.
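
One way to draw that line, sketched minimally below: give developers broad rights over resources tagged as dev, while production changes flow only through a pipeline. The policy is illustrative only; the env tag key and values are assumptions, and a real setup would also lean on separate accounts, permission boundaries, and SCPs.

```python
"""Illustrative sketch: an IAM policy granting developers read access everywhere but
write access only to EC2 resources tagged env=dev. The tag key and values are
assumptions; a real setup would also use separate accounts, permission boundaries,
and SCPs, and production changes would flow through a CI/CD pipeline role instead."""
import json

dev_selfservice_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "ReadOnlyEverywhere",
            "Effect": "Allow",
            "Action": ["ec2:Describe*"],
            "Resource": "*",
        },
        {
            "Sid": "ManageDevTaggedInstancesOnly",
            "Effect": "Allow",
            "Action": [
                "ec2:StartInstances",
                "ec2:StopInstances",
                "ec2:RebootInstances",
                "ec2:TerminateInstances",
            ],
            "Resource": "*",
            "Condition": {"StringEquals": {"ec2:ResourceTag/env": "dev"}},
        },
    ],
}

print(json.dumps(dev_selfservice_policy, indent=2))
```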

This is the subject of my new research note, “How to Empower Technical Teams Through Self-Service Public Cloud IaaS and PaaS” (Gartner for Technical Professionals paywall).

This is a step along the way to a deeper exploration of finding the right balance between “Dev” and “Ops” in DevOps, which is an organization-specific thing. This is not just a cloud thing; it also impacts the structure of operations on-premises. Every discussion of SRE, platform ops, etc. ultimately revolves around the questions of autonomy, governance, and collaboration, and no two organizations are likely to arrive at the exact same balance. (And don’t get me started on how many orgs rename their I&O teams to SRE teams without actually implementing much if anything from the principles of SRE.)

Resilience: Cloudy without a chance of meatballs

In the wake of AWS’s major US-East-1 incident of December 7th, 2021, I’ve fielded plenty of panicked client inquiry about whether anyone can trust any cloud provider, whether the availability zone model actually works, and whether or not the customer’s current architecture offers adequate resilience for their needs.

I’ve also dealt with more than a handful of journalists who have wanted to push a narrative that AWS customers are fleeing in droves and/or are going multicloud as a result of that outage. Every story I’ve read on that subject has tried its darnedest to imply something which just isn’t true. Yes, many organizations use multiple cloud providers. No, they don’t do so for resilience, but rather, because differing preferences within the organization have led to adopting more than one provider.

The fact that it’s now more than two months since the outage and I’m still talking about it with clients (and my colleagues are too) does, however, reflect how large it looms in the minds of customers — including customers of other cloud providers. Indeed, it looms large in the minds of many AWS customers who were not affected, either because they don’t run in US-East-1 or because their failover to another region worked as planned.

At this point, not only have my colleagues and I talked to quite a few organizations, but we’ve also talked to providers of disaster recovery software and services. Thus far, it appears that customers that had problems with cross-region recovery during the 12/7/21 incident either violated AWS best practices for cross-region recovery or ignored their vendor’s advice.

That’s not to say that there weren’t two important unpleasant surprises in terms of US-East-1 dependencies:

  • The global console URL was pointing to US-East-1 alone (rather than being geo load balanced or the like, which most people would probably have assumed). Customers could get around this by going to a regional console URL instead. I believe (but haven’t confirmed) that AWS has now introduced a truly global endpoint as part of the new console experience.
  • The Route 53 and CloudFront control-plane APIs are hosted solely in US-East-1. People reasonably expect to be able to make DNS changes during an outage, even though AWS advises that you use health checks for your failover instead (sketched below).

Either of those two things could have thrown a wrench into cross-region recovery, along with needing to create new S3 buckets (the global namespace conflict checks are done against US-East-1), needing very specific instance types in short supply in the target region, needing to create new IAM roles (which are first created in US-East-1 and then replicated to other regions), and depending on the legacy STS global namespace (also US-East-1 dependent). But by and large, cross-region recovery worked as expected.
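
On the Route 53 point: the way to avoid needing the DNS control plane mid-incident is to configure health-check-driven failover records ahead of time, so the data plane does the switching for you. A minimal sketch follows; the hosted zone ID, domain names, and endpoint IPs are placeholders.

```python
"""Minimal sketch: Route 53 failover records driven by a health check, configured in
advance so no control-plane changes are needed during an incident. The hosted zone ID,
domain names, and endpoint IPs are placeholders."""
import uuid

import boto3

route53 = boto3.client("route53")

# Health check against the primary region's endpoint.
health_check = route53.create_health_check(
    CallerReference=str(uuid.uuid4()),
    HealthCheckConfig={
        "Type": "HTTPS",
        "FullyQualifiedDomainName": "primary.example.com",
        "Port": 443,
        "ResourcePath": "/healthz",
        "RequestInterval": 30,
        "FailureThreshold": 3,
    },
)

def failover_record(set_id, role, ip, health_check_id=None):
    """Build an UPSERT change for a failover-routed A record."""
    record = {
        "Name": "app.example.com",
        "Type": "A",
        "SetIdentifier": set_id,
        "Failover": role,               # "PRIMARY" or "SECONDARY"
        "TTL": 60,
        "ResourceRecords": [{"Value": ip}],
    }
    if health_check_id:
        record["HealthCheckId"] = health_check_id
    return {"Action": "UPSERT", "ResourceRecordSet": record}

route53.change_resource_record_sets(
    HostedZoneId="Z0000000PLACEHOLDER",
    ChangeBatch={
        "Changes": [
            failover_record("primary", "PRIMARY", "198.51.100.10",
                            health_check["HealthCheck"]["Id"]),
            failover_record("secondary", "SECONDARY", "203.0.113.10"),
        ]
    },
)
```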

Now, there are certainly plenty of people who can’t do fast failover into another region and who therefore sat tight and suffered through the incident, and there’s a nontrivial number of customers who haven’t been laying foundations (however slowly) for disaster recovery into another region. I get it — being able to do this kind of recovery requires an investment. You want cloud providers to be so resilient that you don’t need to make that investment yourself. But hope is not a strategy, here.

But the sky did not fall, and the sky is not falling. Cloud has not suddenly become less attractive or significantly more risky. AZ architectures work, but as always, problems with regional services (which are already designed to be multi-AZ) mean that multi-AZ might not be enough for the most critical applications. Cross-region failover works, when properly architected. (Fast and seamless failover and failback are critical, though; major cloud incidents to date have generally been multi-hour, but not multi-day. If you can’t fail over easily and fail back without a lot of effort, you tend to just wait out the outage and hope it’s short.)

Yes, there were significant problems for many customers in US-East-1. API Gateway was essentially down, and many people are dependent on API Gateway to invoke Lambda, and tons of customers use Lambda in a mission-critical fashion. Amazon Connect also depends on API Gateway, and it was also affected. (Other casualties of the backend network issues: ELB launches, S3 private endpoints, Fargate APIs impacting container launches, STS for EKS, and the support APIs.) But EC2 virtual machines continued to function just fine (although you couldn’t launch new ones). The overwhelming majority of AWS services in the region continued to operate unimpacted, and customers who did not have dependencies on affected services were able to continue operating in the region.

In a way, this was a stark demonstration of how much cloud outages are usually confined to specific services… but if a down service is critical to your application, you’re probably boned unless you have a workaround or can fail over into another region. Unfortunately, far too many customers persist in planning as if physical data center failure were the most likely event. (AWS had one of those in December, too — a power outage in a single data center, impacting a percentage of infrastructure in one of the six US-East-1 AZs.)

Yes, the incident was a wake-up call for a lot of cloud customers, and it was a rallying cry for on-premises server-huggers. However, not only is the sky not falling, but there should be no anticipation that it will rain meatballs.

I wrote a number of blog posts on this topic in the months before this outage, and I still firmly stand by those posts now. (Importantly, I still believe multicloud for resilience is almost always impractical. Successful implementations are vanishingly rare and have horrible drawbacks.) Indeed, I’d been working on a piece of Gartner research with my colleagues Kevin Matheny (who covers application architecture), Stanton Cole and Fintan Quinn (who cover backup and DR), which I’m glad to say has finally been published:

Designing Availability and Resilience for Applications in Public Cloud IaaS and PaaS (Gartner for Technical Professionals paywall)

In this, you’ll find what I hope is a pragmatic set of guidance advising that you figure out how critical an application is, and then choose your availability approach and your failover approach accordingly — and not forget the critical importance of designing and implementing resilience within your application. It’s got a lengthy dissection of all the things that can go wrong in the cloud, and what you should be thinking about when you architect. It also contains a sample architectural standard that cloud governance teams can provide to application architects to help them make these decisions. (The main doc runs 65 pages. The impatient will probably find the architectural standard, which is fairly short, to be easier reading.)
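
As a tiny taste of what “resilience within your application” can mean in practice, here’s a minimal sketch of a bulkhead that sheds load instead of letting one slow dependency tie up every worker. The names and limits are illustrative.

```python
"""Minimal sketch of a bulkhead: cap concurrent calls to one dependency and fail fast
(shed load) when the cap is reached, rather than letting a slow dependency tie up
every worker. Names and limits are illustrative."""
import threading

class BulkheadFullError(Exception):
    """Raised when the dependency's concurrency budget is exhausted."""

class Bulkhead:
    def __init__(self, max_concurrent_calls: int):
        self._slots = threading.BoundedSemaphore(max_concurrent_calls)

    def call(self, fn, *args, **kwargs):
        # Non-blocking acquire: if all slots are busy, push back immediately
        # instead of queueing (backpressure to the caller).
        if not self._slots.acquire(blocking=False):
            raise BulkheadFullError("dependency is saturated; try again later")
        try:
            return fn(*args, **kwargs)
        finally:
            self._slots.release()

# Usage: wrap calls to a flaky or slow downstream service.
payments_bulkhead = Bulkhead(max_concurrent_calls=20)

def charge_customer(order_id: str):
    ...  # call the downstream payment service here

def handle_order(order_id: str):
    try:
        payments_bulkhead.call(charge_customer, order_id)
    except BulkheadFullError:
        # Degrade gracefully: queue for retry, serve a fallback, or return 503.
        pass
```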

My Q1 2022 research agenda

This is the time of year when AR professionals ask analysts what they’re planning for next year. I don’t plan a year in advance. I tend not to even plan a quarter in advance. I write when the mood seizes me, which is probably unfortunate, but given that I write a lot it’s… okay-ish?

But I have a bunch of things drafted (either fully or partially) that should be released in Q1 of next year. I have a general goal of trying to ensure that I back the advice I write for cloud architects with something for the CIO and other executive leaders that provides a bottom-line strategic summary, and/or material for the other teams the architects work with, so that I publish stuff as part of a set, either alone or in collaboration with analysts in other areas.

Cloud resilience (January): It’s increasingly common for clients to ask about architectural standards for HA/DR in the cloud. This note dissects why cloud services break, explains how to set architectural standards for HA and DR/failover (i.e. when to be multi-AZ, when to do cross-region failover, etc.), and offers some basic guidance on stability patterns (use of partitioning, bulkheads, backpressure, etc.).

Cloud self-service (January): Thankfully, most organizations are moving away from a service catalog-driven approach to cloud self-service in favor of cloud-native self-service. This note is about how to empower technical teams with self-service, while still providing appropriate governance.

The cloud operating model (February): Many clients are asking about how to organize for the cloud. This will be a triple-note set — one on designing a cloud operating model, one on implementing the operating model, and a colorful infographic summarizing the concept for the CIO and other executive leaders. It combines my previous guidance on Cloud Center of Excellence (CCOE), structuring FinOps and cloud sourcing, etc. with some new work on program management, and takes a deeper look at all the ways you can put this stuff together.

Cloud concentration risk (March): Concentration risk is a hot topic right now, especially in regulated industries. This concern spans IaaS, PaaS, and SaaS, and the dependencies are not always clear, so many organizations have concentration risk they’re not aware of. I intend to write a baseline note that other analysts have committed to contextualizing for audiences in different industries, as well as for cloud managed and professional services providers. While the sourcing risk of concentration remains minimal, the availability risks of concentration can be meaningful. An organization’s risk appetite and the business benefits of concentration should determine what, if any, steps they take to address concentration risk.

IaaS+PaaS provider evaluation update (April): Getting updated vendor evaluation research out in April basically means spending a good chunk of the first quarter doing that evaluation. (My January notes have already been written, and the February ones are mostly complete, so the schedule above isn’t implausible.) We are not currently discussing the form that this evaluation will take; Gartner management will communicate appropriately when the time comes (i.e. please don’t ask me, as I’m not at liberty to discuss it).

The cloud budget overrun rainbow of flavors

Cloud budget overruns don’t have a singular cause. Instead, they come in a bright rainbow of Jelly Belly flavors (the Bertie Bott’s ones, especially, combine into a non-mouthwatering delight). Each needs a different form of response.

Ungoverned costs. This is the black licorice of FinOps problems. The organization has no idea what it’s spending, really, much less where the money is going, other than the big bills (or often, the many little credit card bills) that it pays each month. This requires basic cost hygiene: analyze your cloud bills, get a cost management tool into place, and make it useful through some tagging or partitioning discipline.
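
A first pass at “where is the money going” can be as simple as pulling spend grouped by a cost-allocation tag. A minimal sketch, assuming AWS with Cost Explorer enabled and a hypothetical team tag applied consistently:

```python
"""Minimal sketch: last month's spend grouped by a cost-allocation tag. Assumes AWS,
boto3, Cost Explorer enabled, and a hypothetical 'team' tag applied consistently."""
import boto3

ce = boto3.client("ce")  # Cost Explorer

response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2022-01-01", "End": "2022-02-01"},  # placeholder month
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "TAG", "Key": "team"}],
)

for result in response["ResultsByTime"]:
    for group in result["Groups"]:
        tag_value = group["Keys"][0]   # e.g. "team$payments"; "team$" means untagged spend
        amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
        print(f"{tag_value}: ${amount:,.2f}")
```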

Unanticipated usage. This is the sour watermelon flavor of cost overruns — deliciously sweet yet mouth-puckering. In this situation, the organization is the victim of its own cloud success. Cloud has been such a great thing for the organization that more and more unanticipated cloud projects are showing up, blowing out the original budget estimates for cloud resources. Those cloud projects are delivering business value and it doesn’t make sense to say no to them (and even if central IT says no, the cloud costs can usually be paid for out of a line-of-business budget). Nevertheless, it’s causing a lot of organizational angst because central IT or the sourcing team didn’t anticipate this spending. This organization needs to learn to shift its budgeting processes for the digital future, and cloud chargeback will help support future decision-making.

No commitments. This is the minty wrongness of Bertie Bott’s toothpaste. The organization could get discounts by using public discounting mechanisms for commits (like AWS Savings Plans and Azure Reserved Instances), as well as by making a contractual commitment for a negotiated discount. But because the organization feels like it can’t perfectly predict its usage, and isn’t sure if it will keep using everything it’s using today, it commits to nothing, thereby ensuring that it spends grotesquely more than it needs to. This is universally a terrible idea. Organizations that aren’t in an early pilot stage have long-term production applications and some predictability of usage; commit to the stuff you know you’re not killing off.
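
The math for a conservative commitment doesn’t need to be fancy: commit to (most of) the floor of your recent steady-state usage and leave the spiky remainder on demand. An illustrative sketch, with made-up usage numbers and an assumed discount rate:

```python
"""Illustrative sketch: size a conservative commitment off the floor of recent usage
and leave the spiky remainder on demand. The usage numbers and discount are made up."""

# Hypothetical average on-demand spend per hour, sampled monthly over the last year.
hourly_spend_history = [118, 122, 125, 119, 130, 127, 140, 135, 128, 132, 138, 145]

commit_fraction = 0.9    # only commit to 90% of the observed floor, to be safe
commit_discount = 0.3    # assume ~30% discount on committed spend (varies by plan and term)

floor = min(hourly_spend_history)
committed_per_hour = floor * commit_fraction
average_per_hour = sum(hourly_spend_history) / len(hourly_spend_history)
on_demand_remainder = average_per_hour - committed_per_hour

hours_per_year = 24 * 365
savings_per_year = committed_per_hour * commit_discount * hours_per_year

print(f"Commit to ${committed_per_hour:.0f}/hour, keep ~${on_demand_remainder:.0f}/hour on demand")
print(f"Estimated annual savings vs. committing to nothing: ~${savings_per_year:,.0f}")
```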

Dev/test waste. This is the mundane bleah-ness of Bertie Bott’s earwax. Developers are provisioning the biggest things they can get away with (or at least being overaggressive in their estimates of what they need), there are lots of abandoned resources idling away, and dev/test infrastructure that isn’t used outside of business hours isn’t being suspended when idle. This is what cloud cost management tools are great at doing — identifying obvious waste so that it can be eliminated, largely by shutting it down or suspending it, preferably via automation.

Too much production headroom. This is the mild weirdness of the Bertie Bott’s grass flavor. Application teams haven’t implemented autoscaling for applications that can scale horizontally, or they’ve overestimated how much production headroom an application with variable usage needs (which may result in oversized compute units, or overly aggressive autoscaling settings). This requires implementing autoscaling with some thoughtful tuning of parameters, and possibly a business-value conversation on the cost/benefit trade-off of having consistently higher application performance.
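
For the common case of a horizontally scalable service in an EC2 Auto Scaling group, a target-tracking policy is often a sufficient starting point for that tuning. A minimal sketch; the group name and target value are placeholders to tune per application.

```python
"""Minimal sketch: a target-tracking scaling policy on an EC2 Auto Scaling group so
capacity follows load instead of being statically overprovisioned. The group name and
target value are placeholders to tune per application."""
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="my-app-asg",            # placeholder group name
    PolicyName="cpu-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization",
        },
        "TargetValue": 60.0,                      # aim for ~60% average CPU
        "DisableScaleIn": False,                  # allow scale-in so excess headroom is shed
    },
)
```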

Wrongsizing production. This is the awful lingering terribleness of Bertie Bott’s vomit, whose taste you cannot get out of your mouth. Production environments are statically overprovisioned and therefore overly costly. On-premises, 30% utilization is common, but it’s all capex, and as long as it’s within budget, no one really cares about the waste. In the cloud, though, you pay for that excess resource monthly, forcing you to confront the ongoing cost of the waste.

However, anyone who tells you to “just” rightsize has never actually tried to do this in practice within an enterprise. The problem is that applications that scale vertically typically can’t be easily rightsized. Rightsizing them is likely difficult to impossible to do automatically, due to complicated application installation. The application is fragile and may be mission-critical, so you are cautious about maintenance downtime. And the application team — the only people who really understand how this thing works — is likely busy with other priorities.

If this is your situation, your cloud cost management tool may cause you to cry hopeless tears, because you can see the waste but taking remediation actions is a complicated cross-functional war dance and delicate negotiation that leaves everyone wondering if it wouldn’t have been easier to just keep paying a larger bill.

Suboptimal design and implementation. The controversial popcorn flavor. Architects are sometimes cost-oblivious when they design cloud solutions. They may make bad design choices, or changes in application features and behavior over time may have turned out to make a design choice unexpectedly expensive. Developers may write poorly-performing code that consumes a lot of infrastructure resources, or code that makes excessive (and, cumulatively, expensive) calls to cloud services. Your cloud cost management tools are unlikely to be of any use for detecting these situations. This needs to be addressed through performance engineering, with attention paid to the business value of the time/effort/money necessary to do so — and for many organizations may require bringing in third-party expertise to diagnose the problems and offer recommendations.

Notably, the answer to most of these issues is not “implement a cloud cost management tool”. The challenges aren’t really as simple as a lot of vendors (and talking heads) make them out to be.

Five-P factors for root cause analysis

One of the problems in doing “root cause analysis” within complex systems is that there’s almost never “one bad thing” that’s truly at the root of the problem, and talking about the incident as if there’s One True Root is probably not productive. It’s important to identify the full range of contributing factors, so that you can do something about those elements individually as well as de-risking the system as a whole.

I recently heard someone talk about struggling to shift the language in their org around root cause, and it occurred to me that adapting Macneil’s Five P factors model from medicine/psychology would be very useful in SRE “blameless postmortems” (or traditional ITIL problem management RCAs). I’ve never seen anything about using this model in IT, and a casual Google search turned up nothing, so I figured I’d write a blog post about it.

The Five Ps (described in IT terms) — well, really six Ps, a problem and five P factors — are as follows:

  • The presenting problem is not only the core impact, but also its broader consequences, which all should be examined and addressed. For instance, “The FizzBots service was down” becomes “Our network was unstable, resulting in FizzBots service failure. Our call center was overwhelmed, our customers are mad at us, and we need to pay out on our SLAs.”
  • The precipitating factors are the things that triggered the incident. There might not be a single trigger, and the trigger might not be a one-time event (i.e. it could be a rising issue that eventually crossed a threshold, such as exhaustion of a connection pool or running out of server capacity). For example, “A network engineer made a typo in a router configuration.”
  • The perpetuating factors are the things that resulted in the incident continuing (or becoming worse), once triggered. For instance, “When the network was down, application components queued requests, ran out of memory, crashed, and had to be manually recovered.”
  • The predisposing factors are the long-standing things that made it more likely that a bad situation would result. For instance, “We do not have automation that checks for bad configurations and prevents their propagation.” or “We are running outdated software on our load-balancers that contains a known bug that results in sometimes sending requests to unresponsive backends.”
  • The protective factors are things that helped to limit the impact and scope (essentially, your resilience mechanisms). For instance, “We have automation that detected the problem and reverted the configuration change, so the network outage duration was brief.”
  • The present factors are other factors that were relevant to the outcome (including “where we got lucky”). For instance, “A new version of an application component had just been pushed shortly before the network outage, complicating problem diagnosis,” or “The incident began at noon, when much of the ops team was out having lunch, delaying response.”
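
For teams that prefer postmortems as structured data rather than free-form prose, here’s a minimal sketch of capturing the factors; the field names mirror the model above and the example content is hypothetical.

```python
"""Minimal sketch: a structured postmortem record that mirrors the five P factors.
The example content is hypothetical."""
from dataclasses import dataclass, field
from typing import List

@dataclass
class FivePPostmortem:
    presenting_problem: str
    precipitating_factors: List[str] = field(default_factory=list)
    perpetuating_factors: List[str] = field(default_factory=list)
    predisposing_factors: List[str] = field(default_factory=list)
    protective_factors: List[str] = field(default_factory=list)
    present_factors: List[str] = field(default_factory=list)

incident = FivePPostmortem(
    presenting_problem="Network instability took down the FizzBots service; SLA payouts owed.",
    precipitating_factors=["Typo in a router configuration change"],
    perpetuating_factors=["App components queued requests, ran out of memory, and crashed"],
    predisposing_factors=["No automated validation of network config changes"],
    protective_factors=["Automation reverted the bad config, keeping the outage brief"],
    present_factors=["A new app version had shipped just before the outage, muddying diagnosis"],
)
```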

If you think about the October 2021 Facebook outage in these terms, the presenting problem was the outage of multiple major Facebook properties and their attendant consequences. The precipitating factor was the bad network config change, but it’s clearly not truly the “root cause”. (If your conclusion is “they should fire the careless engineer who made a typo”, your thinking is Wrong.) There were tons of contributing factors, all of which should be addressed. “Blame” can’t be laid at the feet of anyone in particular, though some of the predisposing and perpetuating factors clearly had more impact than others (and therefore should be addressed with higher priority). 

I like this terminology because it’s a clean classification that encompasses a lot of different sorts of contributing factors, and it’s intended to be used in situations that have a fair amount of uncertainty to them. I think it could be useful to structure incident postmortems, and I’d be keen to know how it works for you, if you try it out.

Don’t be surprised when “move fast and break things” results in broken stuff

Of late, I’ve been talking to a lot of organizations that have learned cloud lessons the hard way — and even more organizations who are newer cloud adopters who seem absolutely determined to make the same mistakes. (Note: Those waving little cloud-repatriation flags shouldn’t be hopeful. Organizations are fixing their errors and moving on successfully with their cloud adoption.)

If your leadership adopts the adage, “Move fast and break things!” then no one should be surprised when things break. If you don’t adequately manage your risks, sometimes things will break in spectacularly public ways, and result in your CIO and/or CISO getting fired.

Many organizations that adopt that philosophy (often with the corresponding imposition of “You build it, you run it!” upon application teams) not only abdicate responsibility to the application teams, but they lose all visibility into what’s going on at the application team level. So they’re not even aware of the risks that are out there, much less whether those risks are being adequately managed. The first time central risk teams become aware of the cracks in the foundation might be when the building collapses in an impressive plume of dust.

(Note that boldness and the willingness to experiment are different from recklessness. Trying out new business ideas that end up failing, attempting different innovative paths for implementing solutions that end up not working out, or rapidly trying a bunch of different things to see which works well — these are calculated risks. They’re absolutely things you should do if you can. That’s different from just doing everything at maximum speed and not worrying about the consequences.)

Just like cloud cost optimization might not be a business priority, broader risk management (especially security risk management) might not be a business priority. If adding new features is more important than addressing security vulnerabilities, no one should be shocked when vulnerabilities are left in a state of “busy – fix later”. (This is quite possibly worse than “drunk – fix later”, as that at least implies that the fix will be coming as soon as the writer sobers up, whereas busy-ness is essentially a state that tends to persist until death.)

It’s faster to build applications that don’t have much if any resilience. It’s faster to build applications if you don’t have to worry about application security (or any other form of security). It’s faster to build applications if you don’t have to worry about performance or cost. It’s faster to build applications if you only need to think about the here-and-now and not any kind of future. It is, in short, faster if you are willing to accumulate meaningful technical debt that will be someone else’s problem to deal with later. (It’s especially convenient if you plan to take your money and run by switching jobs, ensuring you’re free of the consequences.)

“We hope the business and/or dev teams will behave responsibly” is a nice thought, but hope is not a strategy. This is especially true when you do little to nothing to ensure that those teams have the skills to behave responsibly, are usefully incentivized to behave responsibly, and receive enough governance to verify that they are behaving responsibly.

When it all goes pear-shaped, the C-level IT executives (especially the CIO, chief information security officer, and the chief risk officer) are going to be the ones to be held accountable and forced to resign under humiliating circumstances. Even if it’s just because “You should have known better than to let these risks go ungoverned”.

(This usually holds true even if business leaders insisted that they needed to move too quickly to allow risk to be appropriately managed, and those leaders were allowed to override the CIO/CISO/CRO. Business leaders pretty much always escape accountability here, because they aren’t expected to have known better. Even when risk folks have made business leaders sign letters that say, “I have been made aware of the risks, and I agree to be personally responsible for them”, it’s generally the risk leaders who get held accountable. The business leaders usually get off scot-free, even with the written evidence.)

Risk management doesn’t entail never letting things break. Rather, it entails a consideration of risk impacts and probabilities, and thinking intelligently about how to deal with the risks (including implementing compensating controls when you’re doing something that you know is quite risky). But one little crack can, in combination with other little cracks (that you might or might not be aware of), result in big breaches. Things rarely break because of black swan events. Rather, they break because you ignored basic hygiene, like “patch known vulnerabilities”. (This can even affect big cloud providers, e.g. the recent Azurescape vulnerability, where Microsoft continued to use 2017-era known-vulnerable open-source code in production.)

However, even in organizations with central governance of risk, it’s all too common to have vulnerability management teams inform you-build-it-you-run-it dev teams that they need to fix Known Issue X. A busy developer will look at their warning, which gives them, say, 30 days to fix the vulnerability, which is within the time bounds of good practice. Then on day 30, the developer will request an extension, and it will probably be granted, giving them, say, another 30 days. When that runs out, the developer will request another extension, and they will repeat this until they run out the extension clock, whereupon usually 90 days or more have elapsed. At that point there will probably be a further delay for the security team to get involved in an enforcement action and actually fix the thing.

There are no magic solutions for this, especially in organizations where teams are so overwhelmed and overworked that anything that might possibly be construed as optional or lower-priority gets dropped on the floor, where it is trampled, forgotten, and covered in old chewing gum. (There are non-magical solutions that require work — more on that in future research notes.)

Moving fast and breaking things takes a toll. And note that sometimes what breaks are people, as the sheer number of things they need to cope with overloads their coping mechanisms and they burn out (either in impressive pillars of flame, or in quiet extinguishment into ashes).