Blog Archives

What’s cloud IaaS really about?

As expected, the Magic Quadrant for Cloud IaaS and Web Hosting is stirring up much of the same debate that was raised with the publication of the 2009 MQ.

Derrick Harris over at GigaOM thinks we got it wrong. He writes: “Cloud IaaS is about letting users get what they need, when they need it and, ideally, with a credit card. It doesn’t require requisitioning servers from the IT department, signing a contract for any predefined time period or paying for services beyond the computing resources.”

Fundamentally, I dispute Derrick’s assertion of what cloud IaaS is about. I think the things he cites above are cool, and represent a critical shake-up in thinking about IT access, but it’s not ultimately what the whole cloud IaaS market is about. And our research note is targeted at Gartner’s clients — generally IT management and architects at mid-sized businesses and enterprises, along with technology start-ups of all sizes (but generally ones that are large enough to have either funding or revenue).

Infrastructure without a contract? Convenient initially, but as the relationship gets more significant, usually not preferable. In fact, most businesses like to be able to negotiate contract terms. (For that matter, Amazon does customized Enterprise Agreements with its larger customers.) Businesses love not having to commit to capacity, but the whole market is shifting its business models pretty quickly to adapt to that desire.

Infrastructure without involving traditional IT operations? Great, but someone’s still got to manage the infrastructure — shoving it in the cloud does not remove the need for operations, maintenance, patch management, security, governance, budgeting, etc. Gartner’s clients generally don’t want random application developers plunking down a credit card and just buying stuff willy-nilly. Empower developers with self-provisioning, sure — but provisioning raw infrastructure is the easy and cheap part, in the grand scheme of things.

Paying for services beyond the computing resources? Sure, some people love to self-manage their infrastructure. But really, what most people want to do is to only worry about their application. Their real dream is that cloud IaaS provides not just compute capacity, but secure compute capacity — which generally requires handling routine chores like patch management, and dealing with anti-virus and security event monitoring and such. In other words, they want to eliminate their junior sysadmins. They’re not looking for managed hosting per se; they’re looking to get magic, hassle-free compute resources.

I obviously recognize Amazon’s contributions to the market. The MQ entry on Amazon begins with: “Amazon is a thought leader; it is extraordinarily innovative, exceptionally agile and very responsive to the market. It has the richest cloud IaaS product portfolio, and is constantly expanding its service offerings and reducing its prices.” But I think Amazon represents just one aspect of a broad market.

Cloud IaaS is complicated by the diversity of use cases for it. Our clients are also looking for specific guidance on just the “pure cloud”, self-provisioned “virtual data center” services, so we’re doing two more upcoming vendor ratings to address that need — a Critical Capabilities note that is focused solely on feature sets, and a mid-year Magic Quadrant that will be purely focused on this.

I could talk at length about what our clients are really looking for and what they’re thinking with respect to cloud IaaS, which is a pretty complicated and interesting tangle, but I figure I really ought to write a research note for that… and get back to my holiday vacation for now.


What does the cloud mean to you?

My Magic Quadrant for Cloud Infrastructure as a Service and Web Hosting is done. The last week has been spent in discussion with service providers over their positioning and the positioning of their competitors and the whys and wherefores and whatnots. That has proven to be remarkably interesting this year, because it’s been full of angry indignation by providers claiming diametrically opposed things about the market.

Gartner gathers its data about what people want in two ways — from primary research surveys and, often more importantly, from client inquiry: conversations with the IT organizations that are actually planning to buy things or, better yet, are actually buying things. I currently see a very large number of data points — a dozen or more conversations of this sort a day, many of them focused on buying cloud IaaS.

And so when a provider tells me, “Nobody in the market wants to buy X!”, I generally have a good base from which to judge whether or not that’s true, particularly since I’ve got an entire team of colleagues here looking at cloud stuff. It’s never that those customers don’t exist; it’s that the provider’s positioning has essentially guaranteed that it never sees those deals — tunnel vision built into the service.

The top common fallacy, overwhelmingly, is that enterprises don’t want to buy from Amazon. I’ve blogged previously about how wrong this is, but at some point in the future, I’m going to have to devote a post (or even a research note) to why this is one of the single greatest, and most dangerous, delusions that a cloud provider can have. If you offer cloud IaaS — or, heck, you’re in any data-center-related business — and you think you don’t compete with Amazon, you are almost certainly wrong. Yes, even if your customers are purely enterprise — especially if your customers are large enterprises.

The fact of the matter is that the people out there are looking at different slices of cloud IaaS, but they are still slices of the same market. This requires enough examination that I’m actually going to write a research note instead of just blogging about it, but in summary, my thinking goes like this (crudely segmented, saving the refined thinking for a research note):

There are customers who want self-managed IaaS. They are confident and comfortable managing their infrastructure on their own. They want someone to provide them with the closest thing they can get to bare metal, good tools to control things (or an API they can use to write their own tools), and then they’ll make decisions about what they’re comfortable trusting to this environment.

There are customers who want lightly-managed IaaS, which I often think of as “give me raw infrastructure, but don’t let me get hacked” — which is to say, OS management (specifically patch management) and managed security. They’re happy managing their own applications, but would like someone to do all the duties they typically entrust to their junior sysadmins.

There are customers who want complex management, who really want soup-to-nuts operations, possibly also including application management.

And then in each of these segments, you can divide customers into those with a single application (which may have multiple components and potentially be highly complex), and those who have a whole range of stuff that encompasses more general data center needs. That drives different customer behaviors and different service requirements.

Claiming that there’s no “real” enterprise market for self-managed is just as delusional as claiming there’s no market for complex management. They’re different use cases in the same market, and customers often start out confused about where they fall along this spectrum, and many customers will eventually need solutions all along this spectrum.

Now, there’s absolutely an argument to be made that the self-managed and lightly-managed segments together represent an especially important segment of the market, where a high degree of innovation is taking place. It means that I’m writing some targeted research — selection notes, a Critical Capabilities rating of individual services, probably a Magic Quadrant that focuses specifically on this next year. But the whole spectrum is part of the cloud IaaS adoption phenomenon, and any individual segment isn’t representative of the total market evolution.


Amazon, ISO 27001, and a correction

FlyingPenguin has posted a good critique of my earlier post about Amazon’s ISO 27001 certification.

Here’s a succinct correction:

To quote Wikipedia, ISO 27001 requires that management:

  • Systematically examine the organization’s information security risks, taking account of the threats, vulnerabilities and impacts;
  • Design and implement a coherent and comprehensive suite of information security controls and/or other forms of risk treatment (such as risk avoidance or risk transfer) to address those risks that are deemed unacceptable; and
  • Adopt an overarching management process to ensure that the information security controls continue to meet the organization’s information security needs on an ongoing basis.

ISO 27002, which details the security best practices, is not required to be used in conjunction with 27001, although this is customary. I forgot this when I wrote my post (I was reading docs written by my colleagues on our security team, which specifically recommend the 27001 approach in the context of 27002).

In other words: 27002 is prescriptive in its controls; 27001 is not that specific.

So FlyingPenguin is right — without the 27002, we have no idea what security controls Amazon has actually implemented.


Amazon, ISO 27001, and some conference observations

Greetings from Gartner’s Application Architecture, Development, and Integration Summit. There are around 900 people here, and the audience is heavy on enterprise architects and other application development leaders.

One of the common themes of my interactions here has been talking to an awful lot of people who are using or have used Amazon for IaaS. They’re a different audience than the typical clients I talk to about the cloud, who are generally IT Operations folks, IT executives, or Procurement folks. The audience here is involved in assessing the cloud, and in adopting the cloud in more skunkworks ways — but they are generally not ultimately the ones making the purchasing decisions. Consequently, they’ve got a level of enthusiasm about it that my usual clients don’t share (although it matches the enthusiasm my usual clients report their own app dev folks have for it). Fun conversations.

So on the heels of Amazon’s ISO 27001 certification, I thought it’d be worth jotting down a few thoughts about Amazon and the enterprise.

To start with, SAS 70 Is Not Proof of Security, Continuity or Privacy Compliance (Gartner clients only). As my security colleagues Jay Heiser and French Caldwell put it, “The SAS 70 auditing report is widely misused by service providers that find it convenient to mischaracterize the program as being a form of security certification. Gartner considers this to be a deceptive and harmful practice.” It certainly is possible for a vendor to do a great SAS 70 audit — to hold themselves to best practices and have the audit show that they follow them consistently — but SAS 70 itself doesn’t require adherence to security best practices. It just requires you to define a set of controls, and then demonstrate that you follow them.

ISO 27001, on the other hand, is a security certification standard that examines the efficacy of risk management and an organization’s security posture, in the context of ISO 27002, which is a detailed security control framework. This certification actually means that you can be reasonably assured that an organization’s security controls are actually good, effective ones.

The 27001 cert — especially meaningful here because Amazon certified its actual infrastructure platform, not just its physical data centers — addresses two significant issues with assessing Amazon’s security to date. First, Amazon doesn’t allow enterprises to bring third-party auditors into its facilities or to peer into its operations, so customers have to depend on Amazon’s own audits (which Amazon does share under certain circumstances). Second, Amazon has a lot of security secret sauce, implementing things in ways different from the norm — for instance, Amazon claims to provide network isolation between virtual machines, but unlike the rest of the world, it doesn’t use VLANs to achieve this. Getting something like ISO 27001, which is prescriptive, hopefully offers some assurance that Amazon’s approach constitutes effective, auditable controls.

(Important correction: See my follow-up. The above statement is not true, because we have no guarantee Amazon follows 27002.)

A lot of people like to tell me, “Amazon will never be used by the enterprise!” Those people are wrong (and are almost always shocked to hear it). Amazon is already used by the enterprise — a lot. Not necessarily always in particularly “official” ways, but those unofficial ways can sometimes stack up to pretty impressive aggregate spend. (Some of my enterprise clients end up being shocked by how much they’re spending, once they total up all the credit cards.)

And here’s the important thing: The larger the enterprise, the more likely it is that they use Amazon, to judge from my client interactions. (Not necessarily as their only cloud IaaS provider, though.) Large enterprises have people who can be spared to go do thorough evaluations, sit on committees that write recommendations, and decide that there are particular use cases for which they allow, or actively recommend, Amazon. These are companies that assess their risks, deal with those risks, and are clear on what risks they’re willing to take with what stuff in the cloud. These are organizations — some of the largest global companies in the world — for whom Amazon will become part of the infrastructure portfolio, and they’re comfortable with that, even though they’re quite conservative.

Don’t underestimate the rate of change that’s taking place here. The world isn’t shifting overnight, and we’re going to be looking at internal data centers and private clouds for many years to come, but nobody can afford to sit around smugly and decide that public cloud is going to lose and that a vendor like Amazon is never going to be a significant player for “real businesses”.

One more thing, on the subject of “real businesses”: All of the service providers who keep telling me that your multi-tenant cloud isn’t actually “public” because you only allow “real businesses”, not just anyone who can put down a credit card? Get over it. (And get extra-negative points if you consider all Internet-centric companies to not be “real businesses”.) Not only isn’t it a differentiator, but customers aren’t actually fooled by this kind of circumlocution, and the guys who accept credit cards still vet their customers, albeit in more subtle ways. You’re multi-tenant, and your customers aren’t buying as a consortium or community? Then you’re a public cloud, and to claim otherwise is actively misleading.


Amazon’s Free Usage Tier

Amazon recently introduced a Free Usage Tier for its Web Services. Basically, you can try Amazon EC2, with a free micro-instance (specifically, enough hours to run such an instance full-time, and have a few hours left over to run a second instance, too; or you can presumably use a bunch of micro-instances part-time), and the storage and bandwidth to go with it.

Here’s what the deal is worth at current pricing, per-month:

  • Linux micro-instance – $15
  • Elastic load balancing – $18.87
  • EBS – $1.27
  • Bandwidth – $3.60

That’s $38.74 in total, or $464.88 over the course of the one-year free period — not too shabby. Realistically, you don’t need the load balancing if you’re running a single instance, so that’s really $19.87/month, or $238.44/year. It also proves to be an interesting illustration of how the little incremental pennies on Amazon can really add up.
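For the curious, here’s a quick sanity check of that arithmetic in Python, using just the per-month figures from the list above:

```python
# Per-month value of the free tier, at the late-2010 list prices quoted above.
items = {
    "Linux micro-instance": 15.00,
    "Elastic load balancing": 18.87,
    "EBS": 1.27,
    "Bandwidth": 3.60,
}

monthly = sum(items.values())
print(f"full bundle: ${monthly:.2f}/month, ${monthly * 12:.2f}/year")    # $38.74 / $464.88

# Drop the load balancer if you're only running a single instance.
single = monthly - items["Elastic load balancing"]
print(f"single instance: ${single:.2f}/month, ${single * 12:.2f}/year")  # $19.87 / $238.44
```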

It’s a clever and bold promotion, making it cost nothing to trial Amazon, and potentially punching Rackspace’s lowest-end Cloud Servers business in the nose. A single instance of that type is enough to run a server to play around with if you’re a hobbyist, or you’re a garage developer building an app or website. It’s this last type of customer that’s really coveted, because all cloud providers hope that whatever he’s building will become wildly popular, causing him to eventually grow to consume bucketloads of resources. Lose that garage guy, the thinking goes, and you might not be able to capture him later. (Although Rackspace’s problem at the moment is that their cloud can’t compete against Amazon’s capabilities once customers really need to get to scale.)

While most cloud IaaS providers actually offer free trials to most customers they’re in discussions with, there’s still a lot to be said for just being able to sign up online and use something (although you still have to give a valid credit card number).


Amazon introduces “micro instances” on EC2

Amazon has introduced a new type of EC2 instance, called a Micro Instance. These start at $0.02/hour for Linux and $0.03/hour for Windows, come with 613 MB of allocated RAM, a low allocation of CPU, and a limited ability to burst CPU. They have no local storage by default, requiring you to boot from EBS.
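For the developer-minded, here’s a minimal sketch of spinning one up with the boto Python library — the region is just illustrative, and the AMI ID is a placeholder that would have to be a real EBS-backed image:

```python
import boto.ec2

# Connect to a region; boto picks up credentials from the environment
# or its config file. Region choice here is just for illustration.
conn = boto.ec2.connect_to_region("us-east-1")

# Micro instances have no local storage, so the AMI must be EBS-backed.
# "ami-00000000" is a placeholder, not a real image.
reservation = conn.run_instances(
    "ami-00000000",
    instance_type="t1.micro",
)
print(reservation.instances[0].id)
```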

613 MB is not a lot of RAM, since operating systems can be RAM pigs if you don’t pay attention to what you’re running in your baseline OS image. My guess is that people who are using micro instances are likely to want to use a JeOS stack if possible. I’d be suggesting FastScale as the tool for producing slimmed-down stacks, except they got bought out some months ago, and wrapped in with EMC Ionix into VMware’s vCenter Configuration Manager; I don’t know if they’ve got anything that builds EC2 stacks any longer.

Amazon has suggested that micro instances can be used for small tasks — monitoring, cron jobs, DNS, and other such things. To me, though, smaller instances are perfect for a lot of enterprise applications. Tons of enterprise apps are “paperwork apps” — fill in a form, kick off some process, be able to report on it later. They get very little traffic, and consolidating the myriad tertiary low-volume applications is one of the things that often drives the most attractive virtualization consolidation ratios. (People are reluctant to run multiple apps on a single OS instance, especially on Windows, due to stability issues, so being able to give each app its own VM is a popular solution.) I read micro instances as being part of Amazon’s play towards being more attractive to the enterprise, since tiny tertiary apps are a major use case for initial migration to the cloud. Smaller instances are also potentially attractive to the test/dev use case, though somewhat less so, since more speed can mean more efficient developers (fewer compiling excuses).

This is very price-competitive with the low end of Rackspace’s Cloud Servers ($0.015/hour for 256 MB and $0.03/hour for 512 MB of RAM, Linux only). Rackspace wins on pure ease of use if you’re just someone who needs a single virtual server, but Amazon’s much broader feature set is likely to win over those who are looking for more than a VPS on steroids. GoGrid has no competitive offering in this range. Terremark can be competitive in this space due to its ability to oversubscribe and do bursting, making its cloud very suitable for smaller-scale enterprise apps. And VirtuStream can also offer smaller allocations tailored to small-scale enterprise apps. So Amazon is by no means alone in this segment — but it’s a positive move that rounds out its cloud offerings.


The (temporary?) transformation of hosters

Classically, hosting companies have been integrators of technology, not developers of technology. Yet the cloud world is increasingly pushing hosting companies into being software developers — companies who create competitive advantage in significant part by creating software which is used to deliver capabilities to customers.

I’ve heard the cloud IaaS business compared to the colocation market of the 1990s — the idea that you build big warehouses full of computers and you rent that compute capacity to people, comparable conceptually to renting data center space. People who hold this view tend to say things like, “Why doesn’t company X build a giant data center, buy a lot of computers, and rent them? Won’t the guy who can spend the most money building data centers win?” This view is, bluntly, completely and utterly wrong.

IaaS is functionally becoming a software business right now, one that is driven by the ability to develop software in order to introduce new features and capabilities, and to drive quality and efficiency. IaaS might not always be a software business; it might eventually be a service-and-support business that is enabled by third-party software. (This would be a reasonable view if you think that VMware’s vCloud is going to own the world, for instance.) And you can get some interesting dissonances when you’ve got some competitors in a market who are high-value software businesses vs. other folks who are mostly commodity infrastructure providers enabled by software (the CDN market is a great example of this). But for the next couple of years at least, it’s going to be increasingly a software business in its core dynamics; you can kind of think of it as a SaaS business in which the service delivered happens to be infrastructure.

To illustrate, let’s talk about Rackspace. Specifically, let’s talk about Rackspace vs. Amazon.

Amazon is an e-commerce company, with formidable retail operations skills embedded in its DNA, but it is also a software company, with more than a decade of experience under its belt in rolling out a continuous stream of software enhancements and using software to drive competitive advantage.

Amazon, in the cloud IaaS part of its Web Services division, is in the business of delivering highly automated IT infrastructure to customers. Custom-written software drives their entire infrastructure, all the way down to their network devices. Software provides the value-added enhancements that they deliver on top of the raw compute and storage, from the spot pricing marketplace to auto-scaling to the partially-automated MySQL management provided by the RDS service. Amazon’s success and market leadership depends on consistently rolling out new and enhanced features, functions, capabilities. It can develop and release software on such aggressive schedules that it can afford to be almost entirely tactical in its approach to the market, prioritizing whatever customers and prospects are demanding right now.

Rackspace, on the other hand, is a managed hosting company, built around a deep culture of customer service. Like all managed hosters, they’re imperfect, but on the whole, they are the gold standard of service, and customer service is one of the key differentiators in managed hosting, driving Rackspace’s very rapid growth over the last five years. Rackspace has not traditionally been a technology leader; historically, it’s been a reasonably fast follower, implementing mainstream technologies in use by its target customers — but people, not engineering, have been its competitive advantage.

And now, Rackspace is going head to head with Amazon on cloud IaaS. It has made a series of acquisitions aimed at acquiring developers and software technology, including Slicehost, JungleDisk, and Webmail.us. (JungleDisk is almost purely a software company, in fact; it makes money by selling software licenses.) Even if they emphasize other competitive differentiation, like customer support, they’re still in direct competition with Amazon on pure functionality. Can Rackspace obtain the competencies it will need to be a software leader?

And in related questions: Can the other hosters who eschew the VMware vCloud route manage to drive the featureset and innovation they’ll need competitively? Will vCloud be inexpensive enough and useful enough to be widely adopted by hosters, and if it is, how much will it commoditize this market? What does this new emphasis upon true development, not just integration, do to hosters and to the market as a whole? (I’ve been thinking about this a lot, lately, although I suspect it’ll go into a real research note rather than a blog post.)


Speculating on Amazon’s capacity

How much capacity does Amazon EC2 have? And how much gets provisioned?

Given that it’s now clear that there are capacity constraints on EC2 (i.e., periods of time where provisioning errors out due to lack of capacity), this is something that’s of direct concern to users. And for all the cloud-watchers, it’s a fascinating study of IaaS adoption.

Randy Bias of CloudScaling has recently posted some interesting speculation on EC2 capacity.

Guy Rosen has done a nifty analysis of EC2 resource IDs, translating them into an estimate of the number of instances provisioned on the platform in a day. Remember, when you look at provisioned instances (i.e., virtual servers), that many EC2 instances are short-lived. Auto-scaling can provision and de-provision servers frequently, and there’s significant use of EC2 for batch-computing applications.
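The gist of that sort of estimate — heavily simplified, and assuming (as Rosen’s analysis actually had to establish) that resource IDs can be mapped back to an underlying sequential counter — looks something like this:

```python
def id_to_counter(resource_id: str) -> int:
    # Hypothetical decode step: treat the hex portion of an ID like
    # "i-31a74258" as a plain counter. The real analysis first had to
    # work out how Amazon's IDs map to an underlying sequence; this is
    # just the skeleton of the idea.
    return int(resource_id.split("-")[1], 16)

def estimate_daily_rate(first_id: str, second_id: str, elapsed_hours: float) -> float:
    # Launch a throwaway instance, wait a few hours, launch another,
    # note both IDs, and extrapolate the delta out to a full day.
    delta = id_to_counter(second_id) - id_to_counter(first_id)
    return delta / elapsed_hours * 24

# e.g. estimate_daily_rate("i-31a74258", "i-31b91f02", 6.0)
```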

Amazon’s unreserved-instance capacity is not unlimited, as people have discovered. There are additional availability zones, but for serious users of the platform, simply provisioning in another zone is often unattractive, since you don’t want to pay for cross-zone data transfers or absorb the latency impact if you don’t have to.

We’re entering a time of year that’s traditionally a traffic ramp for Amazon, the fall leading into Christmas. It should be interesting to see how Amazon balances its own need for capacity (AWS is used for portions of the company’s retail site), reserved EC2 capacity, and unreserved EC2 capacity. I suspect that the nature of EC2’s usage makes it much more bursty than, say, a CDN.


Are multiple cloud APIs bad?

Rackspace has recently launched a community portal called Cloud Tools, showcasing third-party tools that support Rackspace’s cloud compute and storage services. The tools are divided into “featured” and “community”. Featured tools are ones that Rackspace has looked at and believes deserve highlighting; they’re not necessarily commercial projects, but Rackspace does have formal relationships with the developers. Community tools are from any random Joe out there who’d like to be listed. The featured tools get a lot more bells and whistles.

While this is a good move for Rackspace, it’s not ground-breaking stuff, although the portal is notable for a design that seems more consumer-friendly (by contrast with Amazon’s highly text-dense, spartan partner listings). Rather, what’s interesting is Rackspace’s ongoing (successful) efforts to encourage an ecosystem to develop around its cloud APIs, and the broader question of cloud API standardization, “de facto” standards, and similar issues.

There’s no small number of cloud advocates out there who believe that rapid standardization in the industry would be advantageous, and that Amazon’s S3 and EC2 APIs, as the APIs with the greatest current adoption and broadest tools support, should be adopted as a de facto standard. Indeed, some cloud-enablement packages, like Eucalyptus, have adopted Amazon’s APIs — and will probably run into API dilemmas as they evolve, since private cloud implementations will be different from public ones, leading to inherent API differences, and a commitment to API compatibility means that you don’t fully control your own feature roadmap. There’s something to be said for compatibility, certainly. Compatibility drives commoditization, which would theoretically lower prices and deliver benefits to end-users.

However, I believe that it’s too early in the market to seek commoditization. Universal commitment to a particular API at this point clamps standardized functionality within a least-common-denominator range, and it restricts the implementation possibilities, to the detriment of innovation. As long as there is rapid innovation and the market continues to offer a slew of new features — something which I anticipate will continue at least through the end of 2011 and likely beyond — standardization is going to be of highly limited benefit.

Rackspace’s API is different from Amazon’s because Rackspace has taken some different fundamental approaches, especially with regard to the network. For another example of significant API differences, compare EMC’s Atmos API to Amazon’s S3 API. Storage is a pretty simple thing, but there are nevertheless meaningful differences in the APIs, reflecting EMC’s different philosophy and approach. (As a sideline, you might find William Vambenepe’s comparison of public cloud APIs in the context of REST to be an interesting read.)

Everyone can agree on a certain set of core cloud concepts, and I expect that we’ll see libraries that provide unified API access to different underlying clouds; for instance, libcloud (for Python) is the beginning of one such effort. And, of course, third parties like RightScale specialize in providing unified interfaces to multiple clouds.
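As a rough sketch of what that unified-API style looks like in libcloud (import paths and provider constants as in current libcloud; the credentials are placeholders):

```python
from libcloud.compute.types import Provider
from libcloud.compute.providers import get_driver

# Ask for a provider-specific driver, then use the same calls against
# any cloud. Credentials below are placeholders.
ec2 = get_driver(Provider.EC2)("ACCESS_KEY_ID", "SECRET_KEY")
rackspace = get_driver(Provider.RACKSPACE)("username", "api_key")

# Identical interface across both providers.
for driver in (ec2, rackspace):
    for node in driver.list_nodes():
        print(node.name, node.state)
```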

One thing to keep in mind: Most of the cloud APIs to date are really easy to work with. This means that if you have a tool that supports one API, it’s not terribly hard or time-consuming to make it support another API, assuming that you’re confining yourself to basic functionality.

There’s certainly something to be said in favor of other cloud providers offering an API compatibility layer for basic EC2 and S3 functionality, to satisfy customer demand for such. This also seems to be the kind of thing that’s readily executed as a third-party library, though.


Amazon VPC is not a private cloud

The various reactions to Amazon’s VPC announcement have been interesting to read.

Earlier today, I summarized what VPC is and isn’t, but I realize, after reading the other reactions, that I should have been clearer on one thing: Amazon VPC is not a private cloud offering. It is a connectivity option for a public cloud. If you have concerns about sharing infrastructure, they’re not going to be solved here. If you have concerns about Amazon’s back-end security, this is one more item you’re going to have to trust them on — all their technology for preventing VM-to-VM and VM-to-public-Internet communication is proprietary.

Almost every other public cloud compute provider already offers connectivity options beyond public Internet. Many other providers offer multiple types of Internet VPN (IPsec, SSL, PPTP, etc.), along with options to connect virtual servers in their clouds to colocated or dedicated equipment within the same data center, and options to connect those cloud servers to private, dedicated connectivity, such as an MPLS VPN connection or other private WAN access method (leased line, etc.).
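To give a flavor of what the customer end of one of those IPsec options involves, here’s a minimal, hypothetical Openswan/strongSwan-style ipsec.conf stanza — every address, subnet, and name is made up, and the real parameters would come from the provider:

```
conn cloud-vpn
    # On-premises VPN gateway and the LAN behind it (example values)
    left=203.0.113.10
    leftsubnet=10.1.0.0/16
    # The provider's VPN endpoint and the virtual network behind it (made up)
    right=198.51.100.20
    rightsubnet=10.2.0.0/16
    # Authenticate with a pre-shared key (stored in ipsec.secrets)
    authby=secret
    # Bring the tunnel up when IPsec starts
    auto=start
```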

All Amazon has done here is join the club — offering a service option that nearly all their competitors already offer. It’s not exactly shocking that customers want this; in fact, customers have been getting this from competitors for a long time now, bugging Amazon to offer an option, and generally not making a secret of their desires. (Gartner clients: Connectivity options are discussed in my How to Select a Cloud Computing Infrastructure Provider note, and its accompanying toolkit worksheet.)

Indeed, there’s likely a burgeoning market for Internet VPN termination gear of various sorts, specifically to serve the needs of cloud providers — it’s already commonplace to offer a VPN for administration, allowing cloud servers to be open to the Internet to serve Web hits while only allowing administrative logins via the backend VPN-accessed network.

What Amazon has done that’s special (other than being truly superb at public relations) is to be the only cloud compute provider that I know of to fully automate the process of dealing with an IPsec VPN tunnel, and to forgo individual customer VLANs in favor of its own layer 2 isolation method. You can expect that other providers will probably automate VPN set-up in the future, but it’s possibly less of a priority on their road maps. Amazon is deeply committed to full automation, which is necessary at their scale. The smaller cloud providers can still get away with some degree of manual provisioning for this sort of thing — and it should be pretty clear to equipment vendors (and their virtual appliance competitors) that automating this is a public cloud requirement, ensuring that the feature will show up across the industry within a reasonable timeframe.

Think of it this way: Amazon VPC does not isolate any resources for an individual customer’s use. It provides Internet VPN connectivity to a shared resource pool, rather than public Internet connectivity. It’s still the Internet — the same physical cables in Amazon’s data center and across the world, and the same logical Internet infrastructure, just with a Layer 3 IPsec encrypted tunnel on top of it. VPC is “virtual private” in the same sense that “virtual private” is used in VPN, not in the sense of “private cloud”.
