Monthly Archives: November 2011

Private clouds aren’t necessarily more secure

Eric Domage, an analyst over at IDC, is being quoted as saying, “The decision in the next year or two will only be about the private cloud. The bigger the company, the more they will consider the private cloud. The enterprise cloud is locked down and totally managed. It is the closest replication of virtualisation.” The same article goes on to quote Domage as cautioning against the dangers of the public cloud, and, quoting the article: “He urged delegates at the conference to ‘please consider more private cloud than public cloud.'”

I disagree entirely, and I think Domage’s comments ignore the reality of what is going on within enterprises, both in terms of their internal private cloud initiatives and their adoption of public cloud IaaS. (I assume Domage’s commentary is intended to be specific to infrastructure, or it would be purely nonsensical.)

While not all IaaS providers build to the same security standards, nearly all build a high degree of security into their offerings. Furthermore, end-to-end encryption, which Domage claims is unavailable in public cloud IaaS, is available in multiple offerings today, presuming that it refers to end-to-end network encryption along with encryption of storage both in motion and at rest. (Obviously, computation has to occur either on unencrypted data, or your app has to treat encrypted data as a passthrough.)
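To make the “at rest” piece concrete, here’s a minimal sketch (Python, purely illustrative: the key handling and the upload stub are placeholders, not any particular provider’s API) of customer-held-key encryption, where the provider only ever stores ciphertext:

```python
# Customer-held-key encryption at rest: the data is encrypted before it
# ever leaves your application, so the provider only stores ciphertext.
# Fernet (AES + HMAC) from the 'cryptography' package is used purely as
# an illustration; upload() is a hypothetical stand-in for an
# object-storage PUT.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # keep this in your own key store, not with the provider
cipher = Fernet(key)

def upload(name, blob):
    # hypothetical stand-in for an object-storage PUT
    with open(name, "wb") as f:
        f.write(blob)

plaintext = b"customer record 42"
upload("record-42.enc", cipher.encrypt(plaintext))

# to read it back: fetch the ciphertext and decrypt locally
with open("record-42.enc", "rb") as f:
    assert cipher.decrypt(f.read()) == plaintext
```

The point is simply that this kind of protection doesn’t have to depend on the provider at all; the provider never sees the key.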

And for the truly paranoid, you can adopt something like Harris Trusted Cloud — private or public cloud IaaS built with security and compliance as the first priority, where each and every component is checked for validity. (Wyatt Starnes, the guiding brain behind this, founded Tripwire, so you can guess where the thinking comes from.) Find me an enterprise that takes security to that level today.

I’ve found that the bigger the company, the more likely they are to have already adopted public cloud IaaS. Yes, it’s tactical, but their businesses are moving faster than they can deploy private clouds, and the workloads they have in the public cloud are growing every day. Yes, they’ll also build a private cloud (or in many cases, already have), but they’ll use both.

The idea that the enterprise cloud is “locked down and totally managed” is a fantasy. Not only do many enterprises struggle with managing the security within private clouds, many of them have practically surrendered control to the restless natives (developers) who are deploying VMs within that environment. They’re struggling with basic governance, and often haven’t extended their enterprise IT operations management systems successfully into that private cloud. (Assuming, as the article seems to imply, that private cloud is being used to refer to self-service IaaS, not merely virtualized infrastructure.)

The head-in-the-sand “la la la public cloud is too insecure to adopt, only I can build something good enough” position will only make an enterprise IT manager sound clueless and out of touch both with reality and with the needs of the business.

Organizations certainly have to do their due diligence — hopefully before, and not after, the business is asking what cloud infrastructure solutions can be used right this instant. But the prudent thing to do is to build expertise with public cloud (or hosted private cloud) and, for organizations that intend to continue running data centers long-term, to simultaneously build out a private cloud.

Would you like to run Apache Wave, Grandma?

As many people already know, Google is sunsetting Google Wave. This has led to Google sending an email to people who previously signed up for Wave. The bit in the email that caught my eye was this:

If you would like to continue using Wave, there are a number of open source projects, including Apache Wave. There is also an open source project called Walkaround that includes an experimental feature that lets you import all your Waves from Google.

For an email sent to Joe Random Consumer, it’s remarkably clueless as to what consumers actually can comprehend. Grandma is highly unlikely to understand what the heck that means.

Tip for everyone offering a product or service to consumers: Any communication with consumers about that product or service should be in language that Grandma can understand.

Why developers make superior operators

Developers who deeply understand the arcana of infrastructure, and operators who can code and understand the interaction of applications and infrastructure, are better than developers and operators who understand only their own discipline. But it’s typically easier, from a training perspective, for a developer to learn operations than for an operator to learn development.

While there are a fair number of people who teach themselves on the job, most developers still come out of formal computer science backgrounds. The effectiveness of formal education in CS varies immensely, and you can get a good understanding by reading on your own, of course, if you read the right things — it’s the knowledge that matters, not how you got it. But ideally, a developer should accumulate the background necessary to understand the theory of operating systems, and then have a deeper knowledge of the particular operating system that they primarily work with, as well as the arcana of the middleware. It’s intensely useful to know how the abstract code you write actually turns out to run in practice. Even if you’re writing in a very high-level programming language, knowing what’s going on under the hood will help you write better code.
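A trivial illustration of the kind of under-the-hood knowledge I mean (Python here, purely as an example): both functions below return the same string, but it’s knowing that the interpreter treats strings as immutable objects that may get re-copied on each append which tells you why the second form is the one to write at scale.

```python
# Two ways to build one large string. Equivalent at the language level,
# but the first may re-copy the growing string on each append (strings
# are immutable), while the second does a single sized allocation.
def build_slow(parts):
    out = ""
    for p in parts:
        out += p
    return out

def build_fast(parts):
    return "".join(parts)

parts = ["x"] * 100000
assert build_slow(parts) == build_fast(parts)
```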

Many people who come to operations from the technician end of things never pick up this kind of knowledge; a lot of people who enter either systems administration or network operations do so without the benefit of a rigorous education in computer science, whether from college or self-administered. They can do very well in operations, but it’s generally not until you reach the senior-level architects that you commonly find people who deeply understand the interaction of applications, systems, and networks.

Unfortunately, we have historically seen this division reflected in relative salaries and career paths for developers vs. operators. Operators are often treated like technicians: they’re often smart learn-on-the-job people without college degrees, but consequently, companies pay them accordingly and may limit their advancement paths, especially if the company has fairly strict requirements that managers have degrees. Good developers often emerge from college with minimum competitive salary requirements well above what entry-level operations people make.

Silicon Valley has a good collection of people with both development and operations skills because so many start-ups are founded by developers, who chug along, learning operations as they go, because initially they can’t afford to hire dedicated operations people; moreover, for more than a decade, hypergrowth Internet start-ups have deliberately run devops organizations, making the skillset both pervasive and well-paid. This is decidedly not the case in most corporate IT, where development and operations tend to have a hard wall between them, and people tend to be hired for heavyweight app development skills, more so than capabilities in systems programming and agile-friendly languages.

Here are my reasons why developers make better operators, or perhaps more accurately, an argument for why a blended skillset is best. (And here I stress that this is personal opinion, and not a Gartner research position; for official research, check out the work of my esteemed colleagues Cameron Haight and Sean Kenefick. However, as someone who was formally educated as a developer but chose to go into operations, and who has personally run large devops organizations, this is a strongly held set of opinions for me. I think that to be a truly great architect-level ops person, you also have to have a developer’s skillset, and I believe it’s important for mid-level people as well, which I recognize is a controversial opinion.)

Understanding the interaction of applications and infrastructure leads to better design of both. This is an architect’s role, and good devops understand how to look at applications and advise developers how they can make them more operations-friendly, and know how to match applications and infrastructure to one another. Availability, performance, and security are all vital to understand. (Even in the cloud, sharp folks have to ask questions about what the underlying infrastructure is. It’s not truly abstract; your performance will be impacted if you have a serious mismatch between the underlying infrastructure implementation and your application code.)

Understanding app/infrastructure interactions leads to more effective troubleshooting. An operator who can strace, DTrace, sniff networks, read application code, and know how that application code translates to stuff happening on infrastructure is in a much better position to understand what’s going wrong and how to fix it.

Being able to easily write code means less wasted time doing things manually. If you can code nearly as quickly as you can do something by hand, you will simply write it as a script and never have to think about doing it by hand again — and neither will anyone else, if you have a good method for script-sharing. It also means that forever more, this thing will be done in a consistent way. It is the only way to truly operate at scale.
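As a minimal sketch of what I mean (Python, with placeholder hostnames): the kind of check an operator might otherwise do by hand before every maintenance window, captured once so it’s done identically by everyone from then on.

```python
#!/usr/bin/env python
# Replace a by-hand check with a script: verify that each host in a list
# answers on port 80. The hostnames are placeholders; share the script
# and the check is done the same way by everyone, every time.
import socket
import sys

HOSTS = ["web1.example.com", "web2.example.com", "web3.example.com"]

def is_up(host, port=80, timeout=3):
    try:
        socket.create_connection((host, port), timeout).close()
        return True
    except (socket.error, socket.timeout):
        return False

failures = [h for h in HOSTS if not is_up(h)]
for h in failures:
    print("DOWN: %s" % h)
sys.exit(1 if failures else 0)
```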

Scripting everything, even one-time tasks, leads to more reliable operations. When working in complex production environments (and arguably, in any environment), it is useful to write out every single thing you are going to do, and your action plan for any stage you deem dangerous. It might not be a formal “script”, but a command-by-command plan can be reviewed by other people, and it means that you are not making spot decisions under the time pressure of a maintenance window. Even non-developers can do this, of course, but most don’t.

Converging testing and monitoring leads to better operations. This is a place where development and operations truly cross. Deep monitoring converges into full test coverage, and given the push towards test-driven development in agile methodologies, it makes sense to make production monitoring part of the whole testing lifecycle.
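As a sketch of that convergence (Python; the URL, expected string, and latency threshold are placeholders, and the ‘requests’ library is used purely for brevity): one function that runs in the test suite before a deploy and as a monitoring probe afterwards.

```python
# One health check, two consumers: run it as a test before a deploy,
# then run the same function as a production monitoring probe.
import requests

def check_homepage(url="http://www.example.com/", max_seconds=2.0):
    r = requests.get(url, timeout=max_seconds)
    r.raise_for_status()
    assert "Example" in r.text, "expected content missing"
    assert r.elapsed.total_seconds() < max_seconds, "page too slow"

if __name__ == "__main__":
    check_homepage()   # as a test: call it from the test framework instead
    print("OK")        # as a probe: a cron/alerting wrapper watches the exit code
```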

Development disciplines also apply to operations. The systems development lifecycle is applicable to operations projects, and brings discipline to what can otherwise be unstructured work; agile methodologies can be adapted to operations. Writing the tests first, keeping things in a revision control system, and considering systems holistically rather than as a collection of accumulated button-presses are all valuable.

The move to cloud computing is a move towards software-defined everything. Software-defined infrastructure and programmatic access to everything inherently advantages developers, and it turns the hardware-wrangling skills into things for low-level technicians and vendor field engineering organizations. Operations becomes software-oriented operations, one way or another, and development skills are necessary to make this transition.
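Here’s a sketch of what “programmatic access to everything” looks like in practice, using the boto library’s EC2 interface purely as an example (the AMI ID, key pair, and region are placeholders, and credentials are assumed to come from the usual environment variables or configuration files):

```python
# A server as a function call, not a ticket. Sketch only: the AMI ID and
# key pair are placeholders.
import boto.ec2

conn = boto.ec2.connect_to_region("us-east-1")
reservation = conn.run_instances(
    "ami-00000000",            # placeholder image
    instance_type="m1.small",
    key_name="ops-key",        # placeholder key pair
)
print("launched %s" % reservation.instances[0].id)
```

When a server is a few lines of code, the person best positioned to operate the environment is someone who writes code.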

It is unfortunately easier to teach operations to developers, than it is to teach operators to code. This is especially true when you want people to write good and maintainable code — not the kind of script in which people call out to shell commands for the utilities that they need rather than using the appropriate system libraries, or splattering out the kind of program structure that makes re-use nigh-impossible, or writing goop that nobody else can read. This is not just about the crude programming skills necessary to bang out scripts; this is about truly understanding the deep voodoo of the interactions between applications, systems, and networks, and being able to neatly encapsulate those things in code when need be.

Devops is a great place for impatient developers who want to see their code turn into results right now; code for operations often comes in a shorter form, producing tangible results in a faster timeframe than the longer lifecycles of app development (even in agile environments). As an industry, we don’t do enough to help people learn the zen of it, and to provide career paths for it. It’s an operations specialty unto itself.

Devops is not just a world in which developers carry pagers; in fact, it doesn’t necessarily mean that application developers carry pagers at all. It’s not even just about a closer collaboration between development and operations. Instead, it can mean that other than your most junior button-pushers and your most intense hardware specialists, your operations people understand both applications and infrastructure, and that they write code as necessary to highly automate the production environment. (This is more the philosophy of Google’s Site Reliability Engineering, than it is Amazon-style devops, in other words.)

But for traditional corporate IT, it means hiring a different sort of person, and paying differently, and altering the career path.

A little while back, I had lunch with a client from a mid-market business, who spent it telling me how efficient their IT had become, especially after virtualization — trying to persuade me that they didn’t need the cloud, now or ever. Curious, I asked how long it typically took to get a virtualized server up and running. The answer turned out to be three days — because while they could push a button and get a VM, all storage and networking still had to be manually provisioned. That led me to probe about a lot of other operations aspects, all of which were done by hand. The client eventually protested, “If I were to do the things you’re talking about, I’d have to hire programmers into operations!” I agreed that this was precisely what was needed, and the client protested that they couldn’t do that, because programmers are expensive, and besides, what would they do with their existing do-everything-by-hand staff? (I’ve heard similar sentiments many times over from clients, but this one really sticks in my mind because of how shocked this particular client was by the notion.)

Yes. Developers are expensive, and for many organizations, it may seem alien to use them in an operations capacity. But there’s a cost to a lack of agility and to unnecessarily performing tasks manually.

But lessons learned in the hot seat of hypergrowth Silicon Valley start-ups take forever to trickle into traditional corporate IT. (Even in Silicon Valley, there’s often a gulf between the way product operations works, and the way traditional IT within that same company works.)

Do you really want to be in the cloud?

People often ask me what it’s like to be an analyst at Gartner, and for me, the answer is, “It’s a life of constant client conversations.” Over the course of a typical year, I’ll do something on the order of 1,200 formal one-on-one conversations (or one-on-small-team, if the client brings in some other colleagues), generally 30 minutes in length. That doesn’t count the large number of other casual interactions at conferences and whatnot.

While Gartner serves pretty much the entire technology industry, and consequently I talk to plenty of folks at little start-ups and whatnot, our bread and butter — 80% of Gartner’s revenue — comes from “end-users”, meaning IT management at mid-market businesses and enterprises.

Over the years, I have learned a lot of important things about dealing with clients. One of them is that they generally aren’t really interested in best practices. They find best practices fascinating, but they frequently can’t put them to use in their own organizations. They’re actually interested in good practices — things that several other organizations like them have done successfully and which are practically applicable to their own environment.

More broadly, there’s a reason that analysts are still in business — people need advice that’s tailored to their particular needs. You know the Tolstoy line “Happy families are all alike, but every unhappy family is unhappy in its own way” that starts Anna Karenina? Well, every corporate IT department has its own unique pathology. There are the constraints of the business (real and imagined) and the corporate culture, the culture in IT specifically, the existing IT environment in all of its broken glory and layers of legacy, the available budget and resources and skills (modified by whether or not they are willing to hire consultants and other outside help), the people and personalities and pet peeves and personal ambitions, and even the way that they like to deal with analysts. (Right down to the fact that some clients have openly said that they don’t like a woman telling them what to do.)

To be a successful advisor, you have to recognize that most people can’t aim for the “ideal” solution. They have to find a solution that will work for their particular circumstances, with all of the limitations of it — some admittedly self-imposed, but nevertheless important. You can start by articulating an ideal, but it has to quickly come down to earth.

But cloud computing has turned out to be an extra-special set of landmines. When clients come to me wanting to do a “cloud computing” or “cloud infrastructure” project, especially if they don’t have a specific thing in mind, I’ve learned to ask, “Why are you doing this?” Is this client reluctant, pushed into doing this only because someone higher-up is demanding that they do ‘something in the cloud’? Is this client genuinely interested in seeing this project succeed, or would he rather it fail? Does he want to put real effort into it, or just a token? Is he trying to create a proof of concept that he can build upon, or is this a one-shot effort? Is he doing this for career reasons? Does he hope to get his name in the press or be the subject of a case study? What are the constraints of his industry, his business, his environment, and his organization?

My answer to “What should I do?” varies based on these factors, and I explain my reasoning to the client. My job is not to give academic theoretical answers — my job is to offer advice that will work for this client in his current circumstances, even if I think it’s directionally wrong for the organization in the long term. (I try to shake clients out of their complacency, but in the end, I’m just trying to leave them with something to think about, so they understand the implications of their decisions, and how clinging to the way things are now will have business ramifications over the long term.) However, not infrequently, my job involves helping a deeply reluctant client think of some token project that he can put on cloud infrastructure so he can tell his CEO/CFO/CIO that he’s done it.

Cloud providers dealing with traditional corporate IT should keep in mind that not everyone who inquires about their service has a genuine desire for the project to be a success — and even those who are hoping for success don’t necessarily have pure motivations.

To become like a cloud provider, fire everyone here

A recent client inquiry of mine involved a very large enterprise, whose team informed me that their executives had decided that IT should become more like a cloud provider — like Google or Facebook or Amazon. They wanted to understand how they should transform their organization and their IT infrastructure in order to do this.

There were countless IT people on this phone consultation, and I’d received a dizzying introduction to names and titles and job functions, but not one person in the room was someone who did real work, i.e., someone who wrote code or managed systems or gathered requirements from the business, or even did higher-level architecture. These weren’t even people who had direct management responsibility for people who did real work. They were part of the diffuse cloud of people who are in charge of the general principle of getting something done eventually, the kind you find everywhere in most large organizations (IT or not).

I said, “If you’re going to operate like a cloud provider, you will need to be willing to fire almost everyone in this room.”

That got their attention. By the time I’d spent half an hour explaining to them what a cloud provider’s organization looks like, they had decidedly lost their enthusiasm for the concept, as well as been poleaxed by the fundamental transformations they would have to make in their approach to IT.

Another large enterprise client recently asked me to explain Rackspace’s organization to them. They wanted to transform their internal IT to resemble a hosting company’s, and Rackspace, with its high degree of customer satisfaction and reputation for being a good place to work, seemed like an ideal model to them. So I spent some time explaining the way that hosting companies organize, and how Rackspace in particular does — in a very flat, matrix-managed way, with horizontally-integrated teams that service a customer group in a holistic manner, coupled with some shared-services groups.

A few days later, the client asked me for a follow-up call. They said, “We’ve been thinking about what you’ve said, and have drawn out the org… and we’re wondering, where’s all the management?”

I said, “There isn’t any more management. That’s all there is.” (The very flat organization means responsibility pushed down to team leads who also serve functional roles, a modest number of managers, and a very small number of directors who have very big organizations.)

The client said, “Well, without a lot of management, where’s the career path in our organization? We can’t do something like this!”

Large enterprise IT organizations are almost always full of inertia. Many mid-market IT organizations are as well. In fact, the ones that make me twitch the most are the mid-market IT directors who are actually doing a great job of managing their infrastructure — but, constrained by their scale, they are usually just good for their size rather than awesome on the general scale of things, and they are doing well enough to resist change that would shake things up.

Business, though, is increasingly on a wartime footing — and the business is pressuring IT, usually in the form of the development organization, to get more things done and to get them done faster. And this is where the dissonance really gets highlighted.

A while back, one of my clients told me about an interesting approach they were trying. They had a legacy data center that was a general mess of stuff. And they had a brand-new, shiny data center with a stringent set of rules for applications and infrastructure. You could only deploy into the new shiny data center if you followed the rules, which gave people an incentive to toe the line, and generally ensured that anything new would be cleanly deployed and maintained in a standardized manner.

It makes me wonder about the viability of an experiment for large enterprise IT with serious inertia problems: Start a fresh new environment with a new philosophy, perhaps a devops philosophy, with all the joy of having a greenfield deployment, and simply begin deploying new applications into it. Leave legacy IT with the mess, rather than letting the morass kill every new initiative that’s tried.

Although this is hampered by one serious problem: IT superstars rarely go to work in enterprises (excepting certain places, like some corners of financial services), and they especially don’t go to work in organizations with inertia problems.

Amazon and the power of default choices

Estimates of Amazon’s revenues in the cloud IaaS market vary, but you could put them upwards of $1 billion in 2011 and not cause too much controversy. That’s a dominant market share, composed heavily of early adopters but, at this point, also drawing in mainstream business — particularly the enterprise, which has become increasingly comfortable adopting Amazon services in a tactical manner. (Today, Amazon’s weakness is the mid-market — and it’s clear from the revenue patterns, too, that Amazon’s competitors are mostly winning in the mid-market. The enterprise is highly likely to go with Amazon, although it may also have an alternative provider such as Terremark for use cases not well-suited to Amazon.)

There are many, many other providers out there who are offering cloud IaaS, but Amazon is the brand that people know. They created this market; they have remained synonymous with it.

That means that for many organizations that are only now beginning to adopt cloud IaaS (i.e., traditional businesses that already run their own data centers), Amazon is the default choice. It’s the provider that everyone looks at because they’re big — and because they’re #1, they’re increasingly perceived as a safe choice. And because Amazon makes it superbly easy to sign up and get started (and get started for free, if you’re just monkeying around), there’s no reason not to give them a whirl.

Default choices are phenomenally powerful. (You can read any number of scientific papers and books about this.) Many businesses believe that they’ve got a low-risk project that they can toss on cloud IaaS and see what happens next. Or they’ve got an instant need and no time to review all the options, so they simply do something, because it’s better than not doing something (assuming that the organization is one in which people who get things done are not punished for not having filled out a form in triplicate first).

Default choices are often followed by inertia. Yeah, the company put a project on Amazon. It’s running fine, so people figure, why mess with it? They’ve got this larger internal private cloud story they’re working on, or this other larger cloud IaaS deal they’re working on, but… well, they figure, they can migrate stuff later. And it’s absolutely true that people can and do migrate, or in many cases, build a private cloud or add another cloud IaaS provider, but a high enough percentage of the time, whatever they stuck out there remains at Amazon, and possibly begins to accrete other stuff.

This is increasingly leaving the rest of the market trying to pry customers away from a provider they’re already using. It’s absolutely true that Amazon is not the ideal provider for all use cases. It’s absolutely true that any number of service providers can tell me endless stories of customers who have left Amazon for them. It’s probably true, as many service providers claim, that customers who are experienced with Amazon are better educated about the cloud and their needs, and therefore become better consumers of their next cloud provider.

But it does not change the fact that Amazon has been working on conquering the market one developer at a time, and that in turn has led to the bean-counters in the business asking, hey, shouldn’t we be using these Amazon guys?

This is what every vendor wants: For the dude at the customer to be trying to explain to his boss why he’s not using them.

This is increasingly my client inquiry pattern: Client has decided they are definitively not using Amazon (for various reasons, sometimes emotional and sometimes well thought out) and are looking at other options, or they are looking at cloud IaaS and are figuring that they’ll probably use Amazon or have even actually deployed stuff on Amazon (even if they have done zero reading or evaluation). Two extremes.

Cloud IaaS is a lot more than just self-service VMs

One year ago, when we did our 2010 hosting/cloud Magic Quadrant, you were doing pretty well as a service provider if you had a bare-minimum cloud IaaS offering — a service in which customers could go in, push buttons and self-service provision and de-provision virtual machines. There were providers with more capabilities than that, but by and large, that was the baseline of an acceptable offering.

Today, a year later, that says, yup, you’ve got something — but that’s all. The market has moved on with astonishing speed. Bluntly, the feat of provisioning a VM is only so impressive, and doing it fast and well and with a minimum of customer heartache is now simply table stakes in the game.

If you really want to deliver value to a customer, as a service provider, you’ve got to be asking yourself what you can do to smooth the whole delivery chain of IT Operations management and everything the customer needs to build out their solution on your IaaS platform. That’s true whether your audience is developers, devops, or IT operations.

Think: Hierarchical management of users and resources, RBAC for both users and API keys, governance (showback/chargeback, quotas/budgets, leases, workflow-driven provisioning), monitoring (from resources to APM), auto-scaling (both horizontal and vertical), complex network configurations, multi-data-center replication, automated patch management, automated capabilities to support regulatory compliance needs, sophisticated service catalog capabilities that include full deployment templates and are coupled with on-demand software licensing, integration with third-party IT operations management tools… and that’s only the start.
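To make just one of those items concrete, here is a toy sketch (Python, with invented team names and limits) of what the governance piece means in practice: quotas and leases expressed as data, checked before a provisioning request is honored.

```python
# A toy model of governance: per-team quotas and VM leases expressed as
# data, with the check a provisioning portal would run before honoring
# a request. Team names and limits are invented.
from datetime import datetime, timedelta

QUOTAS = {
    "web-team": {"max_vms": 50, "max_vcpus": 200, "lease_days": 30},
    "qa-team":  {"max_vms": 20, "max_vcpus": 40,  "lease_days": 7},
}

def approve_request(team, current_vms, current_vcpus, requested_vcpus):
    q = QUOTAS[team]
    if current_vms + 1 > q["max_vms"]:
        return None    # over the VM quota: deny, or kick into an approval workflow
    if current_vcpus + requested_vcpus > q["max_vcpus"]:
        return None
    # grant the VM with an expiry date, so idle capacity gets reclaimed
    return {"team": team,
            "expires": datetime.utcnow() + timedelta(days=q["lease_days"])}

print(approve_request("qa-team", current_vms=3, current_vcpus=10, requested_vcpus=2))
```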

If you are in the cloud IaaS business and you do not have an aggressive roadmap of new feature releases, you are going to be behind the eight-ball — and you should picture it as the kind of rolling boulder that chases Indiana Jones. It doesn’t matter whether your competitor is Amazon or one of the many providers in the VMware ecosystem. Software-defined infrastructure is a software business, and it moves at the speed of software. You don’t have to be Amazon and compete directly with their feature set, but you had better think about what value you can add that goes beyond getting VMs fast.

Managed Hosting and Cloud IaaS Magic Quadrant

We’re wrapping up our Public Cloud IaaS Magic Quadrant (the drafts will be going out for review today or tomorrow), and we’ve just formally initiated the Managed Hosting and Cloud IaaS Magic Quadrant. This new Magic Quadrant is the next update of last year’s Magic Quadrant for Cloud Infrastructure as a Service and Web Hosting.

Last year’s MQ mixed both managed hosting (whether on physical servers, multi-tenant virtualized “utility hosting” platforms, or cloud IaaS) as well as the various self-service cloud IaaS use cases. While it presented an overall market view, the diversity of the represented use cases meant that it was difficult to use the MQ for vendor selection.

Consequently, we added the Public Cloud IaaS MQ (covering self-service cloud IaaS), and retitled the old MQ to “Managed Hosting and Cloud IaaS” (covering managed hosting and managed cloud IaaS). They are going to be two dramatically different-looking MQs, with a very different vendor population.

The Managed Hosting and Cloud IaaS MQ covers:

  • Managed hosting on physical servers
  • Managed hosting on a utility hosting platform
  • Managed hosting on cloud IaaS
  • Managed hybrid hosting (blended delivery models)
  • Managed cloud IaaS (at minimum, the guest OS is provider-managed)

Both portions of the market are important now, and will continue to be important in the future, and we hope that having two Magic Quadrants will provide better clarity.

Yottaa, 4th-gen CDNs, and acceleration for the masses

I’d meant to blog about Yottaa when it launched back in October, because it’s one of the most interesting entrants to the CDN market that I’ve seen in a while.

Yottaa is a fourth-generation CDN that offers site analytics, edge caching, a little bit of network optimization, and front-end optimization.

While CDNs of earlier generations have heavily capital-intensive models that require owning substantial server assets, fourth-generation CDNs have a software-oriented model and own limited if any delivery assets themselves. (See a previous post for more on 4th-gen CDNs.)

In Yottaa’s case, it uses a large number of cloud IaaS providers around the world in order to get a global server footprint. Since these resources can be obtained on-demand, Yottaa can dynamically match its capacity to customer demand, rather than pouring capital into building out a server network. (This is a critical new market capability in general, because it means that as the global footprint of cloud IaaS grows, there can be far more competition in the innovative portion of the CDN market — it’s far less expensive to start a software company than a company trying to build out a competitive CDN footprint.)

There are three main things that you can do to speed up the delivery of a website (or Web-based app): you can do edge caching (classic static content CDN delivery), you can do network optimization (the kind of thing that F5 has in its Web App Accelerator add-on to its ADC, or, as a service, something like Akamai DSA), or you can do front-end optimization, sometimes known as Web content optimization or Web performance optimization (the kind of thing that Riverbed’s Aptimize does). Gartner clients only: See my research note “How to Accelerate Internet Websites and Applications” for more on acceleration techniques.
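For readers who haven’t dealt with these techniques before, here’s a toy sketch (Python, invented content, and emphatically not what any commercial product actually does) of what the edge-caching and front-end-optimization pieces amount to at the HTTP level; network optimization, which is mostly TCP and routing tuning, doesn’t reduce to a few lines quite so neatly.

```python
# Edge caching is largely a matter of cacheable responses (long-lived
# Cache-Control headers on static objects); the simplest front-end
# optimizations just shrink what the browser has to fetch. Toy code only.
import gzip
import io
import re

def cache_headers(max_age_seconds=30 * 86400):
    # headers a CDN edge (or browser) can honor for a static asset
    return {"Cache-Control": "public, max-age=%d" % max_age_seconds}

def naive_minify(html):
    # collapse whitespace between tags -- a crude stand-in for real
    # front-end optimization (minification, concatenation, inlining)
    return re.sub(r">\s+<", "><", html).strip()

def gzip_body(text):
    buf = io.BytesIO()
    with gzip.GzipFile(fileobj=buf, mode="wb") as f:
        f.write(text.encode("utf-8"))
    return buf.getvalue()

page = "<html>\n  <body>\n    <h1>Hello</h1>\n  </body>\n</html>"
small = naive_minify(page)
print(cache_headers(), len(page), len(small), len(gzip_body(small)))
```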

Yottaa does all three of these things, although its network optimizations are much more minimal than a typical DSA service. It does it in an easy-to-configure way that’s readily consumable by your average webmaster. And the entry price-point is $20/month. Sign up for an instant free trial, online — here’s the consumerization of CDN services for you.

When I calculated prices at volume using the estimates that Yottaa gave me, I realized that it’s nowhere near as cheap as it might first look — it’s comparable to Akamai DSA pricing. But it’s the kind of thing that pretty much anyone could add to their site if they cared about performance. It brings acceleration to the mass market.

Sure, your typical Akamai customer is probably not going to be consuming this service just yet. But give it some time. This is the new face of competition in the CDN market — companies that are focused entirely on software development (possibly using low-cost labor: Yottaa’s engineering is in Beijing), relying on somebody else’s cloud infrastructure rather than burning capital building a network.

Three years ago, I asked in my blog: Are CDNs software or infrastructure companies? The new entrants, which also include folks like CloudFlare, come down firmly on the software side.

Recently, one of my Gartner Invest clients — a buy-side analyst who specializes in infrastructure software companies — pointed out to me that Akamai’s R&D spending is proportionately small compared to the other companies in that sector, while it is spending huge amounts of capex in building out more network capacity. I would previously have put Akamai firmly on the software side of the CDN as software or infrastructure question. It’s interesting to think about the extent to which this might or might not be the case going forward.

There’s no such thing as a “safe” public cloud IaaS

I’ve been trialing cloud IaaS providers lately, and the frustration of getting through many of the sign-up processes has reminded me of some recurring conversations that I’ve had with service providers over the past few years.

Many cloud IaaS providers regard the fact that they don’t take online sign-ups as a point of pride — they’re not looking to serve single developers, they say. That’s a business decision worth examining separately (a future blog post; I’ve already started writing a research note on why that attitude is problematic).

However, many cloud IaaS providers say that their real reason for not taking online sign-ups, or for having long waiting periods before an account is actually provisioned (and for silently dropping some sign-ups into a black hole, whether or not they’re actually legitimate), is that they’re trying to avoid the bad eggs — credit card fraud, botnets, scammers, spammers, whatever. Some cloud providers go so far as to insist that they have a “private cloud” because it’s not “open to the general public”. (I consider this lying to your prospects, by the way, and I think it’s unethical. “Marketing spin” shouldn’t be aimed at making prospects so dizzy they can’t figure out your double-talk. The industry uses NIST definitions, and customers assume NIST definitions, and “private” therefore implies “single-tenant”.)

But the thing that worries me is that cloud IaaS providers claim that vetting who signs up for their cloud, and ensuring that they’re “real businesses”, makes their public, multi-tenant cloud “safe”. It doesn’t. In fact, it can lull cloud providers into complacency, assuming that there will be no bad actors within their cloud — which means that they do not take adequate measures to defend against bad actors who work for a customer, against customer mistakes, and, most importantly, against breaches of a customer’s security that result in bad eggs having access to their infrastructure.

Cloud providers tell me that folks like Amazon spend a ton of money and effort dealing with bad actors, since they get tons of them from online sign-ups, and that they themselves can’t do this, for financial or technical reasons. Well, if you can’t do this, you are also highly likely to lack the alerting to see when your vaunted legitimate customers have been compromised by the bad guys and have gone rogue; the means to respond immediately and automatically to stop the behavior and thereby protect your infrastructure and customers; and the ability to automatically, accurately, and consistently do the forensics for law enforcement afterwards. Because you don’t expect it to be a frequent problem, you don’t have the paranoid level of automatic and constant sweep-and-enforce that a provider like Amazon has to have.

And that should scare every enterprise customer who gets smugly told by a cloud provider that they’re safe, and no bad guys can get access to their platform because they don’t take credit-card sign-ups.
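To sketch what that kind of sweep-and-enforce looks like (Python, with a hypothetical data source and quarantine hook; this is the shape of the idea, not anyone’s actual implementation): scan per-instance traffic summaries for behavior legitimate workloads rarely exhibit, and isolate first, investigate after.

```python
# Sweep-and-enforce in miniature: scan per-instance traffic summaries
# for behavior legitimate workloads rarely exhibit (mass outbound SMTP,
# huge destination fan-out) and quarantine automatically rather than
# waiting for an abuse complaint. Thresholds, data source, and the
# quarantine hook are hypothetical.
SMTP_CONN_THRESHOLD = 500       # outbound port-25 connections per hour
DISTINCT_DEST_THRESHOLD = 2000  # distinct destination IPs per hour

def sweep(flow_summaries, quarantine):
    """flow_summaries: iterable of dicts like
       {'instance': 'i-123', 'smtp_conns': 12, 'distinct_dests': 40}"""
    for s in flow_summaries:
        if (s["smtp_conns"] > SMTP_CONN_THRESHOLD or
                s["distinct_dests"] > DISTINCT_DEST_THRESHOLD):
            quarantine(s["instance"])   # isolate first, investigate after

# example run with made-up data
sweep(
    [{"instance": "i-abc", "smtp_conns": 4, "distinct_dests": 55},
     {"instance": "i-def", "smtp_conns": 9000, "distinct_dests": 8000}],
    quarantine=lambda inst: print("quarantined %s" % inst),
)
```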

So if you’re a security-conscious company, considering use of multi-tenant cloud services, you should ask prospective service providers, “What are you doing to protect me from your other customers’ security problems, and what measures do you have in place to quickly and automatically detect and eliminate bad behavior?” — and don’t accept “we only accept upstanding citizens like yourself on our cloud, sir” as a valid answer.