Blog Archives

Don’t be surprised when “move fast and break things” results in broken stuff

Of late, I’ve been talking to a lot of organizations that have learned cloud lessons the hard way — and to even more organizations, newer cloud adopters, that seem absolutely determined to make the same mistakes. (Note: Those waving little cloud-repatriation flags shouldn’t get their hopes up. Organizations are fixing their errors and moving on successfully with their cloud adoption.)

If your leadership adopts the adage, “Move fast and break things!” then no one should be surprised when things break. If you don’t adequately manage your risks, sometimes things will break in spectacularly public ways, and result in your CIO and/or CISO getting fired.

Many organizations that adopt that philosophy (often with the corresponding imposition of “You build it, you run it!” upon application teams) not only abdicate responsibility to the application teams, but also lose all visibility into what’s going on at the application team level. So they’re not even aware of the risks that are out there, much less whether those risks are being adequately managed. The first time central risk teams become aware of the cracks in the foundation might be when the building collapses in an impressive plume of dust.

(Note that boldness and the willingness to experiment are different from recklessness. Trying out new business ideas that end up failing, attempting different innovative paths for implementing solutions that end up not working out, or rapidly trying a bunch of different things to see which works well — these are calculated risks. They’re absolutely things you should do if you can. That’s different from just doing everything at maximum speed and not worrying about the consequences.)

Just like cloud cost optimization might not be a business priority, broader risk management (especially security risk management) might not be a business priority. If adding new features is more important than addressing security vulnerabilities, no one should be shocked when vulnerabilities are left in a state of “busy – fix later”. (This is quite possibly worse than “drunk – fix later”, as the latter at least implies that the fix will come as soon as the writer sobers up, whereas busy-ness is a state that tends to persist until death.)

It’s faster to build applications that don’t have much if any resilience. It’s faster to build applications if you don’t have to worry about application security (or any other form of security). It’s faster to build applications if you don’t have to worry about performance or cost. It’s faster to build applications if you only need to think about the here-and-now and not any kind of future. It is, in short, faster if you are willing to accumulate meaningful technical debt that will be someone else’s problem to deal with later. (It’s especially convenient if you plan to take your money and run by switching jobs, ensuring you’re free of the consequences.)

“We hope the business and/or dev teams will behave responsibly” is a nice thought, but hope is not a strategy. This is especially true when you do little to nothing to ensure that those teams have the skills to behave responsibly, are usefully incentivized to behave responsibly, and receive enough governance to verify that they are behaving responsibly.

When it all goes pear-shaped, the C-level IT executives (especially the CIO, chief information security officer, and the chief risk officer) are going to be the ones to be held accountable and forced to resign under humiliating circumstances. Even if it’s just because “You should have known better than to let these risks go ungoverned”.

(This usually holds true even if business leaders insisted that they needed to move too quickly to allow risk to be appropriately managed, and those leaders were allowed to override the CIO/CISO/CRO. Business leaders pretty much always escape accountability here, because they aren’t expected to have known better. Even when risk folks have made business leaders sign letters that say, “I have been made aware of the risks, and I agree to be personally responsible for them”, it’s generally the risk leaders who get held accountable. The business leaders usually get off scot-free, even with the written evidence.)

Risk management doesn’t entail never letting things break. Rather, it entails a consideration of risk impacts and probabilities, and thinking intelligently about how to deal with the risks (including implementing compensating controls when you’re doing something that you know is quite risky). But one little crack can, in combination with other little cracks (that you might or might not be aware of), result in big breaches. Things rarely break because of black swan events. Rather, they break because you ignored basic hygiene, like “patch known vulnerabilities”. (This can affect even the big cloud providers, e.g. the recent Azurescape vulnerability, where Microsoft continued to use 2017-era known-vulnerable open-source code in production.)

However, even in organizations with central governance of risk, it’s all too common for vulnerability management teams to inform you-build-it-you-run-it dev teams that they need to fix Known Issue X. A busy developer will look at the warning, which gives them, say, 30 days to fix the vulnerability, a deadline well within the bounds of good practice. Then on day 30, the developer will request an extension, and it will probably be granted, giving them, say, another 30 days. When that runs out, the developer will request another extension, and they will repeat this until they run out the extension clock, by which point 90 days or more have usually elapsed. At that point there will probably be a further delay while the security team gets involved in an enforcement action and the thing actually gets fixed.
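To make the arithmetic of that extension clock concrete, here is a minimal Python sketch of a remediation deadline with a hard cap that forces escalation instead of yet another extension. Everything in it (the class and finding names, the 30-day window, the 90-day cap) is hypothetical and lifted only from the example above; it is not how any particular vulnerability-management tool works.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class Finding:
    """Hypothetical remediation finding with an extension clock and a hard cap."""
    identifier: str
    opened: date
    sla_days: int = 30                              # initial remediation window
    extensions: list = field(default_factory=list)  # granted extensions, in days
    max_total_days: int = 90                        # hard cap before enforcement

    @property
    def due(self) -> date:
        return self.opened + timedelta(days=self.sla_days + sum(self.extensions))

    def request_extension(self, days: int) -> bool:
        """Grant an extension only if it keeps the finding under the hard cap."""
        if self.sla_days + sum(self.extensions) + days > self.max_total_days:
            return False                            # out of runway: escalate instead
        self.extensions.append(days)
        return True

finding = Finding("KNOWN-ISSUE-X", opened=date(2021, 9, 1))
print(finding.due)                    # 30 days out
print(finding.request_extension(30))  # True: 60 days total
print(finding.request_extension(30))  # True: 90 days total
print(finding.request_extension(30))  # False: cap reached, enforcement begins
```

The point of the cap is simply that something, somewhere, has to be able to say “no more extensions” automatically, rather than relying on an overworked security team to notice.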

There are no magic solutions for this, especially in organizations where teams are so overwhelmed and overworked that anything that might possibly be construed as optional or lower-priority gets dropped on the floor, where it is trampled, forgotten, and covered in old chewing gum. (There are non-magical solutions that require work — more on that in future research notes.)

Moving fast and breaking things takes a toll. And note that sometimes what breaks is people, as the sheer number of things they need to cope with overloads their coping mechanisms and they burn out (either in impressive pillars of flame, or in quiet extinguishment into ashes).

Yet more reasons to work at Gartner with me

The TL;DR: My team at Gartner has an open position for someone who has a strong understanding of cloud IaaS — someone who has experience architecting for the cloud, or who has worked on the vendor side of the market (product management, solutions architecture, engineering, consulting, etc.), or is an analyst at another firm covering a related topic. If you’re interested, please email me or contact me on LinkedIn.

The details:

A few years ago, I wrote a blog post on “Five reasons you should work at Gartner with me”, detailing the benefits of the analyst role. I followed it up last year with “Five more reasons to work at Gartner with me”, targeted at women. Both times we were hiring. And we’re continuing to hire right now.

We’re steadily expanding our coverage of cloud computing, which means that we have multiple openings. On my team, we’re looking for an analyst who can cover IaaS, and if you have a good understanding of cloud security, PaaS, and/or DevOps, that would be a plus. (The official posting is for a cloud security analyst, but we’re flexible on the skill set and the job itself, so don’t read too much into the job description.) This role can be entirely work-from-home, but you must work a US time zone schedule, which means candidates should be based in North America or South America.

Previously, I noted great reasons to work at Gartner:

  1. It is an unbeatably interesting job for people who thrive on input.
  2. You get to help people in bite-sized chunks.
  3. You get to work with great colleagues.
  4. Your work is self-directed.
  5. We don’t do any pay-for-play.

In my follow-up post for women, I added the following reasons (which benefit men, too):

  1. We have a lot of women in very senior, very visible roles, including in management.
  2. The traits that might make a woman termed “too aggressive” are valued in analysts.
  3. You are shielded from most misogyny in the tech world.
  4. You will use both technical and non-technical skills, and have a real impact.
  5. This is a flexible-hours, work-from-anywhere job.

I encourage you to go read those posts. Here, I’ll add a few more things about our culture. (If you’re working at another analyst firm or have considered another analyst firm in the past, you might find the below points to be of particular interest.)

1. People love their jobs. While some analysts decide after a year or two that this isn’t the life for them, the ones that stay pretty much stay forever. Almost everyone is very engaged in their job, works hard, and tries to do the right thing. Although we’re a work-from-home company, we nevertheless do a good job of establishing a strong corporate culture in which people collaborate remotely.

2. We have no hierarchy. We are an exceptionally flat organization. Every analyst has a team manager, but teams are largely HR reporting structures — a support system, by and large. To get work done, we form ad-hoc and informal groups of collaborators. We have internal research communities of interest, an open peer review process for all research, and freewheeling discussions that cross organizational boundaries. That means more junior analysts are free to take on as much as they want to, and their voices are no less important than anyone else’s.

3. We have no hard-and-fast coverage boundaries. As long as you are meeting the needs of our clients, your coverage can shift as you see fit. Indeed, to be successful, your coverage should naturally evolve over time, as clients change their technology wants and needs. We have no “book of business” or “programs” or the like, which at other analyst firms sometimes encourage analysts to fiercely defend their turf; we actively discourage territoriality. Collaboration across topic boundaries is encouraged. We do have some formal vehicles for coverage — agendas and special reports among them — but these are open to anyone, regardless of the specific team they work on. (We do have product boundaries, but analysts can collaborate across these boundaries.)

4. We have good support systems. There are teams that manage calendaring and client contact, so analysts don’t have to deal with scheduling headaches (we just indicate when we’re available). Events run smoothly and attention is paid to making sure that analysts don’t have to worry about coordination issues. There’s admin and project manager support for things that generate a lot of administrative overhead or require coordination. Management, in the last few years, has paid active attention to things that help make analysts more productive.

5. Analysts do not have any sales responsibility. Analysts do not carry a “book of business” or any other form of direct tie to revenue. We don’t do any pay-for-play. Importantly, that means that you are never beholden to a vendor, nor do you have an incentive to tell a client anything less than the best advice you have to give. The sales team understands the rules (there are always a few bad apples, but Gartner tries very hard to ensure that analysts are not influenced by sales). Performance evaluations are based on metrics such as the popularity of our documents, and customer satisfaction scores across the different dimensions of things we do (inquiries, conference presentations, documents, and so on).

If this sounds like something that’s of interest to you, please get in touch!

Private clouds aren’t necessarily more secure

Eric Domage, an analyst over at IDC, is being quoted as saying, “The decision in the next year or two will only be about the private cloud. The bigger the company, the more they will consider the private cloud. The enterprise cloud is locked down and totally managed. It is the closest replication of virtualisation.” The same article goes on to quote Domage as cautioning against the dangers of the public cloud, and, quoting the article: “He urged delegates at the conference to ‘please consider more private cloud than public cloud.'”

I disagree entirely, and I think Domage’s comments ignore the reality of what is going on within enterprises, both in terms of their internal private cloud initiatives, as well as their adoption of public cloud IaaS. (I assume Domage’s commentary is intended to be specific to infrastructure, or it would be purely nonsensical.)

While not all IaaS providers build to the same security standards, nearly all build a high degree of security into their offering. Furthermore, end-to-end encryption, which Domage claims is unavailable in public cloud IaaS, is available in multiple offerings today, presuming that “end-to-end” means network encryption between endpoints plus encryption of data both in motion and at rest. (Obviously, computation has to occur either on unencrypted data, or your app has to treat encrypted data as a passthrough.)
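For the skeptical, here is a minimal sketch of what client-side encryption looks like, where the key never leaves your environment and the provider only ever handles ciphertext, in motion and at rest. It uses the Fernet construction from the Python cryptography library; the dict standing in for the provider’s object store, and the object names, are my own invention rather than anything from a specific offering.

```python
from cryptography.fernet import Fernet

object_store = {}            # stand-in for a provider's object storage API
key = Fernet.generate_key()  # keep this in your own key-management system
cipher = Fernet(key)

def put(name: str, plaintext: bytes) -> None:
    object_store[name] = cipher.encrypt(plaintext)   # only ciphertext leaves your side

def get(name: str) -> bytes:
    return cipher.decrypt(object_store[name])        # decrypted only client-side

put("customer-record-42", b"sensitive payload")
assert get("customer-record-42") == b"sensitive payload"
```

The trade-off is exactly the one noted above: anything you actually want to compute on has to be decrypted on your side first.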

And for the truly paranoid, you can adopt something like Harris Trusted Cloud — private or public cloud IaaS built with security and compliance as the first priority, where each and every component is checked for validity. (Wyatt Starnes, the guiding brain behind this, founded Tripwire, so you can guess where the thinking comes from.) Find me an enterprise that takes security to that level today.

I’ve found that the bigger the company, the more likely they are to have already adopted public cloud IaaS. Yes, it’s tactical, but their businesses are moving faster than they can deploy private clouds, and the workloads they have in the public cloud are growing every day. Yes, they’ll also build a private cloud (or in many cases, already have), but they’ll use both.

The idea that the enterprise cloud is “locked down and totally managed” is a fantasy. Not only do many enterprises struggle with managing the security within private clouds, many of them have practically surrendered control to the restless natives (developers) who are deploying VMs within that environment. They’re struggling with basic governance, and often haven’t extended their enterprise IT operations management systems successfully into that private cloud. (Assuming, as the article seems to imply, that private cloud is being used to refer to self-service IaaS, not merely virtualized infrastructure.)

The head-in-the-sand “la la la public cloud is too insecure to adopt, only I can build something good enough” position will only make an enterprise IT manager sound clueless and out of touch both with reality and with the needs of the business.

Organizations certainly have to do their due diligence — hopefully before, and not after, the business is asking what cloud infrastructure solutions can be used right this instant. But the prudent thing to do is to build expertise with public cloud (or hosted private cloud), and for organizations which intend to continue running data centers long-term, simultaneously building out a private cloud.

There’s no such thing as a “safe” public cloud IaaS

I’ve been trialing cloud IaaS providers lately, and the frustration of getting through many of the sign-up processes has reminded me of some recurring conversations that I’ve had with service providers over the past few years.

Many cloud IaaS providers regard the fact that they don’t take online sign-ups as a point of pride — they’re not looking to serve single developers, they say. This is a business decision, which is worth examining separately (a future blog post, and I’ve already started writing a research note on why that attitude is problematic).

However, many cloud IaaS providers state that their real reason for not taking online sign-ups, or for imposing long waiting periods before an account is actually provisioned (and for silently dropping some sign-ups into a black hole, whether or not they’re actually legitimate), is that they’re trying to avoid the bad eggs — credit card fraud, botnets, scammers, spammers, whatever. Some cloud providers go so far as to insist that they have a “private cloud” because it’s not “open to the general public”. (I consider this lying to your prospects, by the way, and I think it’s unethical. “Marketing spin” shouldn’t be aimed at making prospects so dizzy they can’t figure out your double-talk. The industry uses NIST definitions, customers assume NIST definitions, and “private” therefore implies “single-tenant”.)

But the thing that worries me is that cloud IaaS providers claim that vetting who signs up for their cloud, and ensuring that they’re “real businesses”, makes their public, multi-tenant cloud “safe”. It doesn’t. In fact, it can lull cloud providers into complacency: they assume there will be no bad actors within their cloud, and therefore don’t take adequate measures to defend against bad actors who work for a customer, against customer mistakes, and, most importantly, against breaches of a customer’s security that give bad eggs access to their infrastructure.

Cloud providers tell me that folks like Amazon spend a ton of money and effort trying to deal with bad actors, since online sign-ups bring them in by the ton, and that they themselves can’t do this, for financial or technical reasons. Well, if you can’t do this, you are highly likely to also lack the alerting needed to see when your vaunted legitimate customers have been compromised by the bad guys and have gone rogue; to respond immediately and automatically to stop the behavior and thereby protect your infrastructure and customers; and to do the forensics for law enforcement afterwards automatically, accurately, and consistently. Because you don’t expect it to be a frequent problem, you don’t have the paranoid level of automatic and constant sweep-and-enforce that a provider like Amazon has to have.
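To be concrete about what “sweep-and-enforce” means, here is a deliberately simplified sketch of the kind of check a provider might run constantly. The metric (outbound connection counts), the thresholds, and the tenant names are all invented for illustration; a real system would feed on flow logs and trigger automated quarantine, notification, and evidence capture rather than a print statement.

```python
from statistics import mean, pstdev

def looks_compromised(history: list, current: int, sigmas: float = 4.0) -> bool:
    """Flag a tenant whose current outbound-connection count is far above its own baseline."""
    if len(history) < 10:
        return False                                  # not enough baseline to judge
    baseline, spread = mean(history), pstdev(history)
    return current > baseline + sigmas * max(spread, 1.0)

def sweep(tenants: dict) -> list:
    flagged = []
    for tenant, stats in tenants.items():
        if looks_compromised(stats["history"], stats["current"]):
            flagged.append(tenant)                    # real system: quarantine + notify
    return flagged

tenants = {
    "upstanding-enterprise": {"history": [40] * 30, "current": 5000},  # likely compromised
    "quiet-dev-shop":        {"history": [12] * 30, "current": 14},
}
print(sweep(tenants))   # ['upstanding-enterprise']
```

The vetted, upstanding customer is exactly the one you stop watching, which is the whole problem.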

And that should scare every enterprise customer who gets smugly told by a cloud provider that they’re safe, and no bad guys can get access to their platform because they don’t take credit-card sign-ups.

So if you’re a security-conscious company, considering use of multi-tenant cloud services, you should ask prospective service providers, “What are you doing to protect me from your other customers’ security problems, and what measures do you have in place to quickly and automatically detect and eliminate bad behavior?” — and don’t accept “we only accept upstanding citizens like yourself on our cloud, sir” as a valid answer.

Amazon, ISO 27001, and a correction

FlyingPenguin has posted a good critique of my earlier post about Amazon’s ISO 27001 certification.

Here’s a succinct correction:

To quote Wikipedia, ISO 27001 requires that management:

  • Systematically examine the organization’s information security risks, taking account of the threats, vulnerabilities and impacts;
  • Design and implement a coherent and comprehensive suite of information security controls and/or other forms of risk treatment (such as risk avoidance or risk transfer) to address those risks that are deemed unacceptable; and
  • Adopt an overarching management process to ensure that the information security controls continue to meet the organization’s information security needs on an ongoing basis.

ISO 27002, which details the security best practices, is not required to be used in conjunction with 27001, although this is customary. I forgot this when I wrote my post (when I was reading docs written by my colleagues on our security team, which specifically recommend the 27001 approach, in the context of 27002).

In other words: 27002 is prescriptive in its controls; 27001 is not that specific.

So FlyingPenguin is right — without the 27002, we have no idea what security controls Amazon has actually implemented.


Amazon, ISO 27001, and some conference observations

Greetings from Gartner’s Application Architecture, Development, and Integration Summit. There are around 900 people here, and the audience is heavy on enterprise architects and other application development leaders.

One of the common themes of my interactions here has been talking to an awful lot of people who are using or have used Amazon for IaaS. They’re a different audience than the typical clients I talk to about the cloud, who are generally IT Operations folks, IT executives, or Procurement folks. The audience here is involved in assessing the cloud, and in adopting the cloud in more skunkworks ways — but they are generally not ultimately the ones making the purchasing decisions. Consequently, they’ve got a level of enthusiasm about it that my usual clients don’t share (although it matches the enthusiasm that those clients tell me their own app dev folks have for it). Fun conversations.

So on the heels of Amazon’s ISO 27001 certification, I thought it’d be worth jotting down a few thoughts about Amazon and the enterprise.

To start with, SAS 70 Is Not Proof of Security, Continuity or Privacy Compliance (Gartner clients only). As my security colleagues Jay Heiser and French Caldwell put it, “The SAS 70 auditing report is widely misused by service providers that find it convenient to mischaracterize the program as being a form of security certification. Gartner considers this to be a deceptive and harmful practice.” It certainly is possible for a vendor to do a great SAS 70 certification — to hold themselves to best practices and have the audit show that they follow them consistently — but SAS 70 itself doesn’t require adherence to security best practices. It just requires you to define a set of controls, and then demonstrate that you follow them.

ISO 27001, on the other hand, is a security certification standard that examines the efficacy of risk management and an organization’s security posture, in the context of ISO 27002, which is a detailed security control framework. This certification actually means that you can be reasonably assured that an organization’s security controls are actually good, effective ones.

The 27001 cert — especially meaningful here because Amazon certified its actual infrastructure platform, not just its physical data centers — addresses two significant issues with assessing Amazon’s security to date. First, Amazon doesn’t allow enterprises to bring third-party auditors into its facilities and peer into its operations, so customers have to depend on Amazon’s own audits (which Amazon does share under certain circumstances). Second, Amazon does a lot of security secret sauce, implementing things in ways different from the norm — for instance, Amazon claims to provide network isolation between virtual machines, but unlike the rest of the world, it doesn’t use VLANs to achieve this. Getting something like ISO 27001, which is prescriptive, hopefully offers some assurance that Amazon’s stuff constitutes effective, auditable controls.

(Important correction: See my follow-up. The above statement is not true, because we have no guarantee Amazon follows 27002.)

A lot of people like to tell me, “Amazon will never be used by the enterprise!” Those people are wrong (and are almost always shocked to hear it). Amazon is already used by the enterprise — a lot. Not necessarily always in particularly “official” ways, but those unofficial ways can sometimes stack up to pretty impressive aggregate spend. (Some of my enterprise clients end up being shocked by how much they’re spending, once they total up all the credit cards.)

And here’s the important thing: The larger the enterprise, the more likely it is that they use Amazon, to judge from my client interactions. (Not necessarily as their only cloud IaaS provider, though.) Large enterprises have people who can be spared to go do thorough evaluations, and sit on committees that write recommendations, and decide that there are particular use cases that they allow, or actively recommend, Amazon for. These are companies that assess their risks, deal with those risks, and are clear on what risks they’re willing to take with what stuff in the cloud. These are organizations — some of the largest global companies in the world — for whom Amazon will become a part of their infrastructure portfolio, and they’re comfortable with that, even if their organizations are quite conservative.

Don’t underestimate the rate of change that’s taking place here. The world isn’t shifting overnight, and we’re going to be looking at internal data centers and private clouds for many years to come, but nobody can afford to sit around smugly and decide that public cloud is going to lose and that a vendor like Amazon is never going to be a significant player for “real businesses”.

One more thing, on the subject of “real businesses”: All of the service providers who keep telling me that your multi-tenant cloud isn’t actually “public” because you only allow “real businesses”, not just anyone who can put down a credit card? Get over it. (And get extra-negative points if you consider all Internet-centric companies to not be “real businesses”.) Not only isn’t it a differentiator, but customers aren’t actually fooled by this kind of circumlocution, and the guys who accept credit cards still vet their customers, albeit in more subtle ways. You’re multi-tenant, and your customers aren’t buying as a consortium or community? Then you’re a public cloud, and to claim otherwise is actively misleading.


The cloud is not magic

Just because it’s in the cloud doesn’t make it magic. And it can be very, very dangerous to assume that it is.

I recently talked to an enterprise client who has a group of developers who decided to go out, develop, and run their application on Amazon EC2. Great. It’s working well, it’s inexpensive, and they’re happy. So Central IT is figuring out what to do next.

I asked curiously, “Who is managing the servers?”

The client said, well, Amazon, of course!

Except Amazon doesn’t manage guest operating systems and applications.

It turns out that these developers believed in the magical cloud — an environment where everything was somehow mysteriously being taken care of by Amazon, so they had no need to do the usual maintenance tasks, including worrying about security — and had convinced IT Operations of this, too.

Imagine running Windows. Installed as-is, and never updated since then. Without anti-virus, or any other security measures, other than Amazon’s default firewall (which luckily defaults to largely closed).
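For the curious, here is a small example of the sort of hygiene check that falls on the customer rather than on Amazon: finding security group rules that are open to the entire internet. The tooling is my choice (boto3, Amazon’s Python SDK), it assumes working AWS credentials, and it covers only one tiny slice of what “managing your own servers” actually means.

```python
import boto3

ec2 = boto3.client("ec2")

def world_open_rules():
    """Return (group_id, port) pairs for ingress rules open to 0.0.0.0/0."""
    findings = []
    for group in ec2.describe_security_groups()["SecurityGroups"]:
        for permission in group.get("IpPermissions", []):
            for ip_range in permission.get("IpRanges", []):
                if ip_range.get("CidrIp") == "0.0.0.0/0":
                    findings.append((group["GroupId"], permission.get("FromPort")))
    return findings

for group_id, port in world_open_rules():
    print(f"{group_id} allows the whole internet on port {port}")
```

None of this is Amazon’s job. Patching the guest OS, running anti-virus, and tightening those firewall rules are all the customer’s responsibility.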

Plus, they also assumed that auto-scaling was going to make their app magically scale. Auto-scaling doesn’t automagically make an application scale horizontally; the application has to be designed for it. Somebody is going to be an unhappy camper.

Cautionary tale for IT shops: Make sure you know what the cloud is and isn’t getting you.

Cautionary tale for cloud providers: What you’re actually providing may bear no resemblance to what your customer thinks you’re providing.


Credit cards and EA/Mythic’s epic billing mistake

Most of us have long since overcome our fear of handing over our credit cards to Internet merchants. It’s become routine for most of us to simply do so. We buy stuff, we sign up for subscriptions; it’s just like handing over plastic any other time. For that matter, most of us have never really thought about all that credit card data lying around in the hands of brick-and-mortar merchants with whom we do business, until the unfortunate times when that data gets mass-compromised.

Bad billing problems plague lots of organizations, but Electronic Arts (in the form of its Mythic Entertainment studio, which does the massively multiplayer online RPGs Dark Age of Camelot and Warhammer Online) just had a major screw-up: a severe billing system error that, several days ago, repeatedly charged customers their subscription fees. Not just one extra charge, but, some users say, more than sixty. Worse still, the error reportedly affected not just current customers, but past customers. A month’s subscription is $15, but users can pre-pay for as much as a year. And these days, with credit cards so often actually being checking-account debit cards, that is often an immediate hit to the wallet. So you can imagine the impact on even users with decent bank balances, being hit by multiple charges. (Plenty of people with good-sized savings cushions only keep enough money in the checking account to cover expected bills, so you don’t have to be on the actual fiscal edge to get smacked with overdraft fees.) EA is scrambling to get this straightened out, of course, but this is every company’s worst billing nightmare, and it comes at a time when EA and its competitors are all scrambling to shift their business models online.

How many merchants that you don’t do business with any longer, but used to have recurring billing permission on your credit card, still have your credit card on file? As online commerce and micropayments proliferate, how many more merchants will store that data? (Or will PayPal, Apple’s storefronts, and other payment services rule the world?)


Link round-up

Recent links of interest…

I’ve heard that no fewer than four memcached start-ups have recently been funded. GigaOM speculates interestingly on whether memcached is good or bad for MySQL. It seems to me that in the age of cloud and hyperscale, we’re willing to sacrifice ACID compliance in many of our transactions. RAM is cheap, and simplicity and speed are king. But I’m not sure that the widespread use of memcached in Web 2.0 applications, as a method of scaling a database, reflects the strengths of memcached so much as it reflects the weaknesses of the underlying databases.
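For anyone who hasn’t seen it up close, the pattern those Web 2.0 applications typically use is cache-aside: serve reads from memcached, fall back to the database on a miss, and accept some staleness. Here is a minimal Python sketch; the dict stands in for a real memcached client, and the TTL and query are invented for illustration.

```python
import time

cache = {}          # stand-in for a memcached client
TTL_SECONDS = 60    # how long we tolerate a stale entry

def query_database(user_id: int) -> dict:
    # stand-in for the slow, hard-to-scale relational query
    return {"id": user_id, "name": f"user-{user_id}"}

def get_user(user_id: int) -> dict:
    key = f"user:{user_id}"
    hit = cache.get(key)
    if hit and hit["expires"] > time.time():
        return hit["value"]                       # fast path: served from RAM
    value = query_database(user_id)               # slow path: hit the database
    cache[key] = {"value": value, "expires": time.time() + TTL_SECONDS}
    return value

print(get_user(7))   # miss: goes to the database
print(get_user(7))   # hit: served from the cache
```

The speed comes from RAM; the simplicity comes from pushing consistency worries onto the application, which is precisely the trade-off described above.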

Column-oriented databases are picking up some buzz lately. Sybase has a new white paper out on high-performance analytics. MySQL is plugging Infobright, a column-oriented engine for MySQL (replacing MyISAM, InnoDB, etc., just like any other engine).

Brian Krebs, the security blogger for the Washington Post, has an excellent post called The Scrap Value of a Hacked PC. It’s an examination of the ways that hacked PCs can be put to criminal use, and it’s intended to be printed out and handed to end-users who don’t think that security is their personal responsibility.

My colleague Ray Valdes has some thoughts on intuition-based vs. evidence-based design. It’s a riff on the recent New York Times article, Data, Not Design, Is King in the Age of Google, and a departing designer’s blog post that provides a fascinating look at data-driven decision making in an environment where you can immediately test everything.

DDoS season

We are, it seems, in the midst of a wave of distributed denial of service attacks. The victims include:

  • Neustar’s UltraDNS. (Problems with specific regional DNS clusters, with little customer-visible impact.)
  • Register.com. (Severe impact on Web hosting and email customers.)
  • GoGrid. (Severe impact on cloud hosting customers.)
  • ThePlanet. (Attack on their DNS servers, with severe impact on customers.)

The attack on ThePlanet is unusual in that it received minimal attention in the press, despite the company being one of the largest Web hosters, and having Cisco Guard (DDoS mitigation) appliances in place. Also, the status updates were eventually issued via Twitter, rather than a more expected form of customer communication. Here’s the full text, aggregated off Twitter:

Between 2:30am and 5:00am CDT on April 8, The Planet’s name servers were flooded again with a large brute force (DDoS) attack. Unlike the previous attack, this attack did not appear to be DNS-specific; instead, targeted resources indirectly supporting DNS services. Because the nature of this attack was different from the previous event, mirroring the response to the previous attack was ineffective. Once our investigation determined the nature of the attack, we applied filters throughout our DNS support system to alleviate the effects. The Planet’s network and DNS performance have been restored, and the attack originator has ceased actions. Any lingering issues may be indicative of a different problem that may have been exacerbated by the attack and should be resolved quickly. We are working on several projects to help mitigate similar attacks in the future. Once those plans are in order, we will update the DNS Status announcement thread in our community forums. We understand that other providers are experiencing similar events. We will reach out to them, pool our information and then work together to find consistencies between attacks. Our goal is to establish best practices as an industry to better respond to these recent events.

Jose Nazario of Arbor Networks claims these attacks are not Conficker at work, which makes this wave of attacks even more interesting.

The takeaway from this: Customers understand if you get DDoS’d. They don’t put up with a lack of communication. It’s enormously difficult to communicate with customers in the midst of a crisis, especially one that takes down customer-facing infrastructure in a customer-impacting way, but it’s also incredibly critical. Clearly, not everyone in the company is out trying to troubleshoot the problem, so you can usefully put them to work reaching out to your customers, if you have the policies and procedures in place to do so successfully.

Something to think about today, no matter who you are and who you work for: What policies do you have in place for customer communications when a crisis hits your company? (Book recommendation: Eric Dezenhall’s Damage Control, which is a hard-edged, realistic look at communication in a crisis, including coping with competitors who are deliberately fanning the negative-PR flames.)
