Resilience: Cloudy without a chance of meatballs

In the wake of AWS’s major US-East-1 incident of December 7th, 2021, I’ve fielded plenty of panicked client inquiries about whether anyone can trust any cloud provider, whether the availability zone model actually works, and whether their current architecture offers adequate resilience for their needs.

I’ve also dealt with more than a handful of journalists who have wanted to push a narrative that AWS customers are fleeing in droves and/or are going multicloud as a result of that outage. Every story I’ve read on that subject has tried its darnedest to imply something which just isn’t true. Yes, many organizations use multiple cloud providers. No, they don’t do so for resilience, but rather, because differing preferences within the organization have led to adopting more than one provider.

The fact that it’s now more than two months since the outage and I’m still talking about it with clients (and so are my colleagues) does, however, reflect how large it looms in the minds of customers, including customers of other cloud providers. Indeed, it looms large in the minds of many AWS customers who were not affected, either because they don’t run in US-East-1 or because their failover to another region worked as planned.

At this point, not only have my colleagues and I talked to quite a few organizations, but we’ve also talked to providers of disaster recovery software and services. Thus far, it appears that customers that had problems with cross-region recovery during the 12/7/21 incident either violated AWS best practices for such recovery or went against their vendor’s advice.

That’s not to say that there weren’t two important unpleasant surprises in terms of US-East-1 dependencies:

  • The global console URL was pointing to US-East-1 alone (rather than being geo load balanced or the like, which most people would probably have assumed). Customers could get around this by going to a regional console URL instead. I believe (but haven’t confirmed) AWS has now introduced a truly global endpoint with the introduction of the new console experience.
  • Route 53 and CloudFront’s control plane APIs are hosted solely in US-East-1. People reasonably expect to be able to make DNS changes during an outage, even though AWS advises that you use health checks for your failover instead (sketched just below this list).
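
For concreteness, here is a minimal, illustrative sketch of what health-check-driven failover looks like, using boto3 with placeholder zone IDs, hostnames, and paths (none of these come from the post). The key point is that the records and health check are created ahead of time; during an incident, the Route 53 data plane shifts traffic on its own, with no control-plane API call required.

```python
# Minimal sketch: a Route 53 failover record pair driven by a health check, so
# the DNS data plane flips traffic without any control-plane call during an
# incident. Zone ID, domain names, and paths are placeholders.
import boto3

route53 = boto3.client("route53")

# Health check against the primary region's endpoint.
hc = route53.create_health_check(
    CallerReference="primary-endpoint-check-1",
    HealthCheckConfig={
        "Type": "HTTPS",
        "FullyQualifiedDomainName": "app.us-east-1.example.com",
        "ResourcePath": "/healthz",
        "RequestInterval": 30,
        "FailureThreshold": 3,
    },
)

route53.change_resource_record_sets(
    HostedZoneId="Z123EXAMPLE",  # placeholder hosted zone
    ChangeBatch={
        "Changes": [
            {   # Primary record: served while the health check passes.
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "app.example.com",
                    "Type": "CNAME",
                    "SetIdentifier": "primary",
                    "Failover": "PRIMARY",
                    "TTL": 60,
                    "HealthCheckId": hc["HealthCheck"]["Id"],
                    "ResourceRecords": [{"Value": "app.us-east-1.example.com"}],
                },
            },
            {   # Secondary record: served automatically if the primary is unhealthy.
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "app.example.com",
                    "Type": "CNAME",
                    "SetIdentifier": "secondary",
                    "Failover": "SECONDARY",
                    "TTL": 60,
                    "ResourceRecords": [{"Value": "app.us-west-2.example.com"}],
                },
            },
        ]
    },
)
```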

Either of those two things could have thrown a wrench into cross-region recovery, along with needing to create new S3 buckets (the global namespace conflict checks are done against US-East-1), needing very specific instance types that were in short supply in the target region, needing to create new IAM roles (which are first created in US-East-1 and then replicated to other regions), and depending on the legacy STS global endpoint (also US-East-1 dependent). But by and large, cross-region recovery worked as expected.
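
On the STS point specifically: the legacy global endpoint (sts.amazonaws.com) is served out of US-East-1, while every region has its own STS endpoint. Here is a hedged sketch of avoiding that dependency, with a placeholder region, account ID, and role name:

```python
# Minimal sketch (placeholder ARN and region): avoid the legacy global STS
# endpoint, which is backed by US-East-1, by calling a regional endpoint.
import boto3

# Option 1: explicitly target a regional STS endpoint.
sts = boto3.client(
    "sts",
    region_name="us-west-2",
    endpoint_url="https://sts.us-west-2.amazonaws.com",
)
creds = sts.assume_role(
    RoleArn="arn:aws:iam::111122223333:role/dr-failover",  # placeholder role
    RoleSessionName="dr-drill",
)["Credentials"]

# Option 2: tell the SDK to prefer regional STS endpoints in general, e.g. via
# the AWS_STS_REGIONAL_ENDPOINTS=regional environment variable or the
# "sts_regional_endpoints = regional" setting in ~/.aws/config.
```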

Now, there are certainly plenty of people who can’t do fast failover into another region and who therefore sat tight and suffered through the incident, and there’s a nontrivial number of customers who haven’t laid the foundations for disaster recovery into another region (even a slow one). I get it: being able to do this kind of recovery requires an investment. You want cloud providers to be so resilient that you don’t need to make that investment yourself. But hope is not a strategy here.

But the sky did not fall, and the sky is not falling. Cloud has not suddenly become less attractive or significantly more risky. AZ architectures work, but as always, problems with regional services (which are already designed to be multi-AZ) mean that multi-AZ might not be enough for the most critical applications. Cross-region failover works, when properly architected. (Fast and seamless failover and failback are critical, though; major cloud incidents to date have generally been multi-hour, but not multi-day. If you can’t fail over easily and fail back without a lot of effort, you tend to just wait out the outage and hope it’s short.)

Yes, there were significant problems for many customers in US-East-1. API Gateway was essentially down, and many people are dependent on API Gateway to invoke Lambda, and tons of customers use Lambda in a mission-critical fashion. Amazon Connect also depends on API Gateway, and it was also affected. (Other casualties of the backend network issues: ELB launches, S3 private endpoints, Fargate APIs impacting container launches, STS for EKS, and the support APIs.) But EC2 virtual machines continued to function just fine (although you couldn’t launch new ones). The overwhelming majority of AWS services in the region continued to operate unimpacted, and customers who did not have dependencies on affected services were able to continue operating in the region.

In a way, this was a stark demonstration of how much cloud outages are usually confined to specific services… but if a down service is critical to your application, you’re probably boned unless you have a workaround or can fail over into another region. Unfortunately, far too many customers persist in planning as if physical data center failure were the most likely event. (AWS had one of those in December, too: a power outage in a single data center, which impacted a percentage of the infrastructure in one of the six US-East-1 AZs.)
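
As one illustration of a workaround (mine, not the post’s): if an HTTP front door such as API Gateway is impaired but the Lambda Invoke API in the region is still reachable, internal callers can bypass the front door and invoke the function directly. The function name and payload below are placeholders, and whether this helps in any given incident depends entirely on which data planes are actually healthy.

```python
# Illustrative workaround sketch: call a Lambda function through the Lambda
# Invoke API instead of an impaired HTTP front door. Function name is a
# placeholder; this assumes the Lambda data plane in the region is reachable.
import json
import boto3

lam = boto3.client("lambda", region_name="us-east-1")

resp = lam.invoke(
    FunctionName="orders-processor",       # placeholder function name
    InvocationType="RequestResponse",      # synchronous invocation
    Payload=json.dumps({"orderId": "12345"}).encode("utf-8"),
)
result = json.loads(resp["Payload"].read())
print(result)
```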

Yes, the incident was a wake-up call for a lot of cloud customers, and it was a rallying cry for on-premises server-huggers. However, not only is the sky not falling, but there should be no anticipation that it will rain meatballs.

I wrote a number of blog posts months before this outage:

and I still firmly stand by those posts now. (Importantly, I still believe multicloud for resilience is almost always impractical. Successful implementations are vanishingly rare and have horrible drawbacks.) Indeed, I’d been working on a piece of Gartner research with my colleagues Kevin Matheny (who covers application architecture), Stanton Cole and Fintan Quinn (who cover backup and DR), which I’m glad to say has finally been published:

Designing Availability and Resilience for Applications in Public Cloud IaaS and PaaS (Gartner for Technical Professionals paywall)

In this, you’ll find what I hope is a pragmatic set of guidance advising that you figure out how critical an application is, and then choose your availability approach and your failover approach accordingly — and not forget the critical importance of designing and implementing resilience within your application. It’s got a lengthy dissection of all the things that can go wrong in the cloud, and what you should be thinking about when you architect. It also contains a sample architectural standard that cloud governance teams can provide to application architects to help them make these decisions. (The main doc runs 65 pages. The impatient will probably find the architectural standard, which is fairly short, to be easier reading.)
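
To make the “criticality tier drives your availability and failover approach” idea concrete, here is a toy illustration of the kind of mapping such an architectural standard might codify. The tier names, approaches, and RTO numbers are invented for the example and are not taken from the Gartner document.

```python
# Toy illustration only: the sort of criticality-to-resilience mapping an
# architectural standard might codify. Tiers and requirements are invented.
from dataclasses import dataclass

@dataclass(frozen=True)
class ResilienceRequirement:
    availability_approach: str   # what to build within a region
    failover_approach: str       # what to build across regions
    rto_minutes: int             # illustrative recovery time objective

STANDARD = {
    "mission_critical": ResilienceRequirement(
        "multi-AZ, plus in-application retries, timeouts, and bulkheads",
        "warm standby in a second region with health-check-driven DNS failover",
        rto_minutes=15,
    ),
    "business_critical": ResilienceRequirement(
        "multi-AZ",
        "pilot light in a second region, runbook-driven failover",
        rto_minutes=240,
    ),
    "non_critical": ResilienceRequirement(
        "single region, multi-AZ where it is cheap",
        "restore from backup into any available region",
        rto_minutes=1440,
    ),
}

def requirement_for(app_tier: str) -> ResilienceRequirement:
    """Look up what an application of a given criticality tier must implement."""
    return STANDARD[app_tier]
```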

Posted on February 22, 2022, in Infrastructure.

  1. Great article, and I agree with most of it. But not with “Cloud has not suddenly become less attractive or significantly more risky.” I believe that is precisely what happened. This event has taught us truths about the AZ model that are uncomfortable:
    – Apparently, some services that we expected to be global are region-bound, such as R53, console logins, and elements of S3
    – Higher layer services depend on other AWS services and therefore have a worse availability profile than the basic cloud primitives such as EC2

    Those are findings that really impact the narratives coming from AWS, in particular “we hardly fail and when we do it’s in expected ways” and “higher layer services provide better security, availability and cost than lower layer services because you allow our experts to take care of the undifferentiated heavy lifting”.

    Both those narratives have been proven wrong to a degree by this event and the recent security findings. My perspective on AWS will never be the same.
