Monthly Archives: November 2010
Amazon, ISO 27001, and a correction
FlyingPenguin has posted a good critique of my earlier post about Amazon’s ISO 27001 certification.
Here’s a succinct correction:
To quote Wikipedia, ISO 27001 requires that management:
- Systematically examine the organization’s information security risks, taking account of the threats, vulnerabilities and impacts;
- Design and implement a coherent and comprehensive suite of information security controls and/or other forms of risk treatment (such as risk avoidance or risk transfer) to address those risks that are deemed unacceptable; and
- Adopt an overarching management process to ensure that the information security controls continue to meet the organization’s information security needs on an ongoing basis.
ISO 27002, which details the security best practices, is not required to be used in conjunction with 27001, although this is customary. I overlooked this when I wrote my post; I had been reading docs written by my colleagues on our security team, which specifically recommend the 27001 approach in the context of 27002.
In other words: 27002 is prescriptive in its controls; 27001 is not that specific.
So FlyingPenguin is right — without the 27002, we have no idea what security controls Amazon has actually implemented.
Amazon, ISO 27001, and some conference observations
Greetings from Gartner’s Application Architecture, Development, and Integration Summit. There are around 900 people here, and the audience is heavy on enterprise architects and other application development leaders.
One of the common themes of my interactions here has been talking to an awful lot of people who are using or have used Amazon for IaaS. They’re a different audience from the typical clients I talk to about the cloud, who are generally IT Operations folks, IT executives, or Procurement folks. The audience here is involved in assessing the cloud, and in adopting the cloud in more skunkworks ways — but they are generally not ultimately the ones making the purchasing decisions. Consequently, they’ve got a level of enthusiasm about it that my usual clients don’t share (although it correlates with the enthusiasm my usual clients report their app dev folks have for it). Fun conversations.
So on the heels of Amazon’s ISO 27001 certification, I thought it’d be worth jotting down a few thoughts about Amazon and the enterprise.
To start with, SAS 70 Is Not Proof of Security, Continuity or Privacy Compliance (Gartner clients only). As my security colleagues Jay Heiser and French Caldwell put it, “The SAS 70 auditing report is widely misused by service providers that find it convenient to mischaracterize the program as being a form of security certification. Gartner considers this to be a deceptive and harmful practice.” It certainly is possible for a vendor to do a great SAS 70 certification — to hold themselves to best practices and have the audit show that they follow them consistently — but SAS 70 itself doesn’t require adherence to security best practices. It just requires you to define a set of controls, and then demonstrate you follow them.
ISO 27001, on the other hand, is a security certification standard that examines the efficacy of risk management and an organization’s security posture, in the context of ISO 27002, which is a detailed security control framework. This certification means that you can be reasonably assured that an organization’s security controls are actually good, effective ones.
The 27001 cert — especially meaningful here because Amazon certified its actual infrastructure platform, not just its physical data centers — addresses two significant issues with assessing Amazon’s security to date. First, Amazon doesn’t allow enterprises to bring third-party auditors into its facilities to peer into its operations, so customers have to depend on Amazon’s own audits (which Amazon does share under certain circumstances). Second, Amazon has a lot of security secret sauce, implementing things in ways that differ from the norm — for instance, Amazon claims to provide network isolation between virtual machines, but unlike the rest of the world, it doesn’t use VLANs to achieve this. Getting something like ISO 27001, which is prescriptive, hopefully offers some assurance that Amazon’s approach constitutes effective, auditable controls.
(Important correction: See my follow-up. The above statement is not true, because we have no guarantee Amazon follows 27002.)
A lot of people like to tell me, “Amazon will never be used by the enterprise!” Those people are wrong (and are almost always shocked to hear it). Amazon is already used by the enterprise — a lot. Not necessarily always in particularly “official” ways, but those unofficial ways can sometimes stack up to pretty impressive aggregate spend. (Some of my enterprise clients end up being shocked by how much they’re spending, once they total up all the credit cards.)
And here’s the important thing: The larger the enterprise, the more likely it is that they use Amazon, to judge from my client interactions. (Not necessarily as their only cloud IaaS provider, though.) Large enterprises have people who can be spared to go do thorough evaluations, and sit on committees that write recommendations, and decide that there are particular use cases that they allow, or actively recommend, Amazon for. These are companies that assess their risks, deal with those risks, and are clear on what risks they’re willing to take with what stuff in the cloud. These are organizations — some of the largest global companies in the world — for whom Amazon will become a part of their infrastructure portfolio, and they’re comfortable with that, even if their organizations are quite conservative.
Don’t underestimate the rate of change that’s taking place here. The world isn’t shifting overnight, and we’re going to be looking at internal data centers and private clouds for many years to come, but nobody can afford to sit around smugly and decide that public cloud is going to lose and that a vendor like Amazon is never going to be a significant player for “real businesses”.
One more thing, on the subject of “real businesses”: all you service providers who keep telling me that your multi-tenant cloud isn’t actually “public” because you only allow “real businesses”, not just anyone who can put down a credit card? Get over it. (And get extra-negative points if you consider all Internet-centric companies to not be “real businesses”.) Not only is it not a differentiator, but customers aren’t actually fooled by this kind of circumlocution, and the guys who accept credit cards still vet their customers, albeit in more subtle ways. You’re multi-tenant, and your customers aren’t buying as a consortium or community? Then you’re a public cloud, and to claim otherwise is actively misleading.
Inquiries, Cloud/Hosting MQ, and availability in LA/SF
Prospects or Gartner clients who want to meet with me: I will be in Los Angeles on November 18th, and in the San Francisco Bay Area on November 22nd and 23rd. If you want to meet while I’m there, contact your account executive. I’ll also be at Gartner’s Application Architecture, Development, and Integration Summit in Los Angeles from November 15th-17th, and Gartner’s Data Center Conference in Las Vegas from December 6th-9th. (I’m giving a number of presentations and roundtables.)
I recently got some statistics on my inquiry volume, and I was shocked — inquiry is up for me by 93% year-over-year. I thought I had physically run out of hours in the day, but I realized that I’ve also cut back on some of my travel and gotten more aggressive about back-to-back timeslots (letting 14 calls be crammed into a day, if necessary). I already had one of the highest workloads in the company last year, which probably explains why I am feeling somewhat… frazzled. This is mostly cloud-related inquiry, although CDN inquiry still occupies a significant percentage of my time.
The Cloud Infrastructure as a Service and Web Hosting Magic Quadrant is finally finished. Assuming that our peer review goes well, it should be out to vendors for review next week, and published in December.
Last year’s Magic Quadrant was more hosting than cloud. This year, it’s reversed — it’s definitely more cloud than hosting. There’s lots of movement, and there are five new vendors. I’m sure it will be controversial again. The next step is to do our Critical Capabilities note, which is a direct comparison of just the cloud services in their immediate state, separate from questions about things like customer service, managed and professional services, future roadmap, and so forth — just raw “how do these guys stack up in what they offer”.
Amazon’s Free Usage Tier
Amazon recently introduced a Free Usage Tier for its Web Services. Basically, you can try Amazon EC2, with a free micro-instance (specifically, enough hours to run such an instance full-time, and have a few hours left over to run a second instance, too; or you can presumably use a bunch of micro-instances part-time), and the storage and bandwidth to go with it.
Here’s what the deal is worth at current pricing, per-month:
- Linux micro-instance – $15
- Elastic load balancing – $18.87
- EBS – $1.27
- Bandwidth – $3.60
That’s $38.74 all in all, or $464.88 over the course of the one-year free period — not too shabby. Realistically, you don’t need the load-balancing if you’re running a single instance, so that’s really $19.87/month, $238.44/year. It also proves to be an interesting illustration of how much the little incremental pennies on Amazon can really add up.
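For anyone who wants to plug in updated prices later, here is a quick back-of-the-envelope sketch in Python of the tally above. The per-month figures are simply the ones quoted in the list; everything else is arithmetic.

```python
# Back-of-the-envelope value of the free usage tier, using the per-month
# figures quoted above (current pricing at the time of writing).
monthly_value = {
    "Linux micro-instance": 15.00,
    "Elastic load balancing": 18.87,
    "EBS": 1.27,
    "Bandwidth": 3.60,
}

total_month = sum(monthly_value.values())   # $38.74
total_year = total_month * 12               # $464.88

# A single instance doesn't need the load balancer, so strip it out
# for the more realistic single-server scenario.
single_month = total_month - monthly_value["Elastic load balancing"]  # $19.87
single_year = single_month * 12                                       # $238.44

print(f"Full bundle:   ${total_month:.2f}/month, ${total_year:.2f}/year")
print(f"Single server: ${single_month:.2f}/month, ${single_year:.2f}/year")
```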
It’s a clever and bold promotion, making it cost nothing to trial Amazon, and potentially punching Rackspace’s lowest-end Cloud Servers business in the nose. A single instance of that type is enough to run a server to play around with if you’re a hobbyist, or you’re a garage developer building an app or website. It’s this last type of customer that’s really coveted, because all cloud providers hope that whatever he’s building will become wildly popular, causing him to eventually grow to consume bucketloads of resources. Lose that garage guy, the thinking goes, and you might not be able to capture him later. (Although Rackspace’s problem at the moment is that their cloud can’t compete against Amazon’s capabilities once customers really need to get to scale.)
While most of the cloud IaaS providers are actually offering free trials to most customers they’re in discussions with, there’s still a lot to be said for just being able to sign up online and use something (although you still have to give a valid credit card number).
Akamai sues Cotendo for patent infringement
How to tell when a CDN has arrived: Akamai sues them for patent infringement.
The lawsuit that Akamai has filed against Cotendo alleges the violation of three patents.
The most recent of the patents, 7,693,959, is dated April 2010, but it’s a continuation of several previous applications — its age is nicely demonstrated by things like its references to the Netscape Navigator browser and to ADC vendors that ceased to exist years ago. It seems to be a sort of generic basic CDN patent, but it governs the use of replication to distribute objects, not just caching.
The second Akamai patent, 7,293,093, essentially covers the “whole site delivery” technique.
The oldest of the patents, 6,820,133, was obtained by Akamai when it acquired Netli. It essentially covers the basic ADN technique of optimized communication between two server nodes in a network. I don’t know how defensible this patent is; there are similar ones held by a variety of ADC and WOC vendors who use such techniques.
My expectation is that the patent issues won’t affect the CDN market in any significant way, and the lawsuit is likely to drag on forever. Fighting the lawsuit will cost money, but Cotendo has deep-pocketed backers in Sequoia and Benchmark. It will obviously fuel speculation that Akamai will try to buy them, but I don’t think that it’s going to be a simple way to eliminate competition in this market, especially given what I’ve seen of the roadmaps of other competitors (under NDA). Dynamic acceleration is too compelling to stay a monopoly (and incidentally, the ex-Netli executives are now free of their competitive lock-up), and the only reason it’s taken this long to arrive was that most CDNs were focused on the huge, and technically far easier, opportunity in video delivery.
It’s an interesting signal that Akamai takes Cotendo seriously as a competitor, though. In my opinion, they should; since early this year, I’ve regularly seen Cotendo as a bidder in the deals that I look at. The AT&T deal is just a sweetener — and since AT&T will apply your corporate enterprise discount to a Cotendo deal, that means I’ve got enterprise clients who sometimes look at 70% discounts on top of an already less-expensive price quote. And Akamai’s problem is that Cotendo isn’t just, as Akamai alleges in its lawsuit, a low-cost competitor; Cotendo is trying to innovate and offer things that Akamai doesn’t have.
Competition is good for the market, and it’s good for customers.
Netflix, Akamai, and video delivery performance
Dan Rayburn’s blog post about the Akamai/Netflix relationship seems to have set off something of a firestorm, and I’ve been deluged by inquiries from Gartner Invest clients about it.
I do not want to add fuel to the fire by speculating on anything, and I have access to confidential information that prevents me from stating some facts as I know them, so for blog purposes I will stick to making some general comments about Akamai’s delivery performance for long-tail video content.
From the independent third-party testing that I’ve seen, Akamai delivers good large-file video performance, but their performance is not always superior to other major CDNs (excluding greater reach, i.e., they’re going to solidly trounce a rival who doesn’t have footprint in a given international locale, for instance). Actual performance depends on a number of factors, including the specifics of a customer’s deal with Akamai and the way that the service is configured. The testing location also matters. The bottom line is that it’s very competitive performance but it’s not, say, head and shoulders above competitors.
Akamai has, for the last few years, had a specific large-file delivery service designed to cache just the beginning of a file at the very edge, with the remainder delivered from the middle tier to the edge server, thus eliminating the obvious cache storage issues involved in, say, caching entire movies, while still preserving decent cache hit ratios. However, this has been made mostly irrelevant in video delivery by the rise in popularity of adaptive streaming techniques — if you’re thinking about Akamai’s capabilities in the Netflix or similar contexts, you should think of this as an adaptive streaming thing and not a large file delivery thing.
In adaptive streaming (pioneered by Move Networks), a video is chopped up into lots of very short chunks, each just a few seconds long, and usually delivered via HTTP. The end-consumer’s video player takes care of assembling these chunks. Based on the delivery performance of each chunk, the video player decides whether it wants to upshift or downshift the bitrate / quality of the video in real time, thus determining what the URL of the next video chunk is. This technique can also be used to switch sources, allowing, for instance, the CDN to be changed in real time based on performance, as is offered by the Conviva service. Because the video player software in adaptive streaming is generally instrumented to pay attention to performance, there’s also the possibility that it may send back performance information to the content owner, thus enabling them to get a better understanding of what the typical watcher is experiencing. With an adaptive technique, your goal is generally to get the end-user the most “upshift” possible (i.e., sustain the highest bitrate possible) and, if you can, to have it delivered via the least expensive source.
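To make the upshift/downshift mechanics concrete, here is a minimal sketch in Python of the decision loop a player runs after each chunk. The bitrate ladder, chunk URL template, and thresholds are all hypothetical illustrations, not any particular player’s or CDN’s actual scheme; real players also factor in buffer levels and much smarter heuristics.

```python
import time
import urllib.request

# Hypothetical bitrate ladder (kbps) and chunk URL scheme, purely illustrative.
BITRATES = [400, 800, 1500, 3000]
CHUNK_URL = "http://cdn.example.com/video/{bitrate}k/chunk{index}.ts"

def fetch_chunk(index, bitrate):
    """Fetch one chunk over HTTP and measure the delivery throughput."""
    start = time.time()
    url = CHUNK_URL.format(bitrate=bitrate, index=index)
    data = urllib.request.urlopen(url).read()
    elapsed = max(time.time() - start, 1e-6)
    throughput_kbps = (len(data) * 8 / 1000) / elapsed
    return data, throughput_kbps

def next_bitrate(current, throughput_kbps):
    """Upshift if the last chunk arrived comfortably faster than the next
    rung up; downshift if it couldn't even sustain the current rung."""
    level = BITRATES.index(current)
    if level + 1 < len(BITRATES) and throughput_kbps > 1.5 * BITRATES[level + 1]:
        return BITRATES[level + 1]
    if level > 0 and throughput_kbps < current:
        return BITRATES[level - 1]
    return current

def play(num_chunks):
    bitrate = BITRATES[0]  # start conservatively at the lowest rung
    for index in range(num_chunks):
        chunk, throughput = fetch_chunk(index, bitrate)
        # ...hand `chunk` to the decoder for playback here...
        bitrate = next_bitrate(bitrate, throughput)  # sets the next chunk's URL quality
```

The same per-chunk decision point is what makes source-switching possible: the player could just as easily vary the hostname in the URL as the bitrate.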
When adaptive streaming is in use, that means that the first chunk of a video is now just a small object, easily cached on just about any CDN. Obviously, cache hit ratios still matter, and you will generally get higher cache hit ratios with a megaPOP approach (like Limelight) than you will with a highly distributed approach (like Akamai), although that starts to get more complex when you add in server-side pre-fetching, deliberately serving content off the middle tier, and the like. So now your performance starts to boil down to footprint, cache hit ratio, algorithms for TCP/IP protocol optimization, server and network performance — how quickly and seamlessly can you deliver lots and lots of small objects? Third-party testing generally shows that Akamai benchmarks very well when it comes to small object delivery — but again, specific relative CDN performance for a specific customer is always unique.
In the end, it comes down to price/performance ratios. Netflix has clearly decided that they believe Level 3 and Limelight deliver better value for some significant percentage of their traffic, at this particular instant in time. Given the incredibly fierce competition for high-volume deals, multi-vendor sourcing, and the continuing fall in CDN prices, don’t think of this as an alteration in the market, or anything particularly long-term for the fate of the vendors in question.
Google’s mod_pagespeed and Cotendo
Those of you who are Gartner clients know that in the last year, my colleague Joe Skorupa and I have become excited about the emergence of software-based application acceleration via page optimization approaches, as exemplified by vendors like Aptimize and Strangeloop Networks. (Clients: See Cool Vendors in Enterprise Networking, 2010.) This approach to acceleration enhances the performance of Web-based content and applications, by automatically optimizing the page output of webservers according to the best practices described in books like High Performance Web Sites by Steve Souders. Techniques of this sort include automatically combining JavaScript files (which reduces overall download time), optimizing the order of the scripts, and rewriting HTML so that the browser can display the page more quickly.
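As a concrete illustration of the first technique mentioned above, here is a toy sketch in Python that collapses multiple external script tags into a single combined script, so the browser makes one request instead of many. Everything here is hypothetical: the regex is deliberately naive, and read_file/write_file are caller-supplied hooks I made up for the sketch. Real optimizers do this on the fly and handle script ordering, inline scripts, caching, and a long tail of edge cases that this sketch ignores.

```python
import re

# Deliberately naive pattern for external script tags; a toy, not a parser.
SCRIPT_TAG = re.compile(r'<script\s+src="([^"]+)"\s*></script>')

def combine_scripts(html, read_file, write_file, combined_name="combined.js"):
    """Rewrite a page so N external scripts become one combined request.

    read_file(path) -> str and write_file(path, contents) are hypothetical
    caller-supplied hooks, so the sources can live on disk or in a cache.
    """
    sources = SCRIPT_TAG.findall(html)
    if len(sources) < 2:
        return html  # nothing worth combining

    # Concatenate the scripts in their original order (order matters).
    write_file(combined_name, "\n".join(read_file(src) for src in sources))

    # Remove every original tag, then insert a single tag referencing the
    # combined file at the position where the first original tag sat.
    first = SCRIPT_TAG.search(html).start()
    stripped = SCRIPT_TAG.sub("", html)
    combined_tag = '<script src="%s"></script>' % combined_name
    return stripped[:first] + combined_tag + stripped[first:]
```

Run against a page with five script tags, the browser now fetches one combined.js instead of five separate files; the same idea extends to CSS combining and to HTML rewriting so the browser can start rendering sooner.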
Page optimization techniques can often provide significant acceleration boosts (2x or more) even when other acceleration techniques are in use, such as a hardware ADC with acceleration module (offered as add-ons by F5 and Citrix NetScaler, for instance), or a CDN (including CDN-based dynamic acceleration). Since early this year, we’ve been warning our CDN clients that this is a critical technology development to watch and to consider adopting in their networks. It’s a highly sensible thing to deploy on a CDN, for customers doing whole site delivery; the CDN offloads the computational expense of doing the optimization (which can be significant), and then caches the result (and potentially distributes the optimized pages to other nodes on the CDN). That gets you excellent, seamless acceleration for essentially no effort on the part of the customer.
Google’s Page Speed project provides free and open-source tools designed to help site owners implement these best practices. Google has recently released an open-source module, called mod_pagespeed, for the popular Apache webserver. This essentially creates an open-source competitor to commercial vendors like Aptimize and Strangeloop. Add the module into your Apache installation, and it will automatically try to optimize your pages for you.
Now, here’s where it gets interesting for CDN watchers: Cotendo has partnered with Google. Cotendo is deploying the Google code (modified, obviously, to run on Cotendo’s proxy caches, which are not Apache-based), in order to be able to offer the page optimization service to its customers.
I know some of you will automatically be asking now, “What does this mean for Akamai?” The answer to that is, “Losing speed trials when it’s Akamai DSA vs. Cotendo DSA + Page Speed Automatic, until they can launch a competing service.” Akamai’s acceleration service hasn’t changed much since the Netli acquisition in 2007, and the evolution in technology here has to be addressed. Page optimization plus TCP optimization is generally much faster than TCP optimization alone. That doesn’t just have pricing implications; it has implications for the competitive dynamics of the space, too.
I fully expect that page optimization will become part of the standard dynamic acceleration service offerings of multiple CDNs next year. This is the new wave of innovation. Despite the well-documented nature of these best practices, organizations still frequently ignore them when coding — and even commercial packages like SharePoint ignore them (SharePoint gets a major performance boost when page optimization techniques are applied, and there are solutions like Certeon that are specific to it). So there’s a very broad swathe of customers that can benefit easily from these techniques, especially since they provide nice speed boosts even in environments where the network latency is pretty decent, like delivery within the United States.
Symposium, Smart Control, and cloud computing
I originally wrote this right after Orlando Symposium and forgot to post it.
Now that Symposium is over, I want to reflect back on the experiences of the week. Other than the debate session that I did (Eric Knipp and I arguing the future of PaaS vs. IaaS), I spent the conference in the one-on-one room or meeting with customers over meals. And those conversations resonated in sharp and sometimes uncomfortable ways with the messages of this year’s Symposiums.
The analyst keynote said this:
An old rule for an old era: absolute centralized IT control over technology people, infrastructure, and services. Rather than fight to regain control, CIOs and IT Leaders must transform themselves from controllers to influencers; from implementers to advisers; from employees to partners. You own this metamorphosis. As an IT leader, you must apply a new rule, for a new era: Smart Control. Open the IT environment to a Wild Open World of unprecedented choice, and better balance cost, risk, innovation and value. Users can achieve incredible things WITHOUT you, but they can only maximize their IT capabilities WITH you. Smart Control is about managing technology in tighter alignment with business goals, by loosening your grip on IT.
The tension of this loss of control was by far the overwhelming theme of my conversations with clients at Symposium. Since I cover cloud computing, my topic was right at the heart of the new worries. Data from a survey we did earlier this year showed that less than 50% of cloud computing projects are initiated by IT leadership.
Most of the people that I talked to strongly held one of two utterly opposing beliefs: either that cloud computing was the Thing of the Future and the way they and their companies would consume IT going forward, or that cloud computing was something that companies could never embrace. My mental shorthand for the extremes of these positions is “enthusiastic embrace” and “fear and loathing”.
I met IT leaders, especially in mid-sized businesses, who were tremendously excited by the possibilities of the cloud to free their businesses to move and innovate in ways that they never had before, and to free IT from the chores of keeping the lights on in order to help deliver more business value. They understood that the technology in cloud is still fairly immature, but they wanted to figure out how they could derive benefits right now, taking smart risks to develop some learnings and to deliver immediate business value.
And I also met IT leaders who fear the new world and everything it represents — a loss of control, a loss of predictability, an emphasis on outcomes rather than outputs, the need to take risks to obtain rewards. These people were looking for reasons not to change — reasons that they could take back to the business for why they should continue to do the things they always have, perhaps with some implementation of a “private cloud”, in-house.
The analyst keynote pointed out: The new type of CIO won’t ask first what the implementation cost is, or whether something complies with the architecture, but whether it’s good for the enterprise. They will train their teams to think like business executives, asking first, “Is this valuable?” and only then, “How can we make this work?”, rather than the other way around.
Changing technologies is often only moderately difficult. Changing mindsets, though, is an enormous challenge.