Category Archives: Infrastructure

Cloud IaaS is a lot more than just self-service VMs

One year ago, when we did our 2010 hosting/cloud Magic Quadrant, you were doing pretty well as a service provider if you had a bare-minimum cloud IaaS offering — a service in which customers could go in, push buttons, and provision and de-provision virtual machines on a self-service basis. There were providers with more capabilities than that, but by and large, that was the baseline of an acceptable offering.

Today, a year later, that says, yup, you’ve got something — but that’s all. The market has moved on with astonishing speed. Bluntly, the feat of provisioning a VM is only so impressive, and doing it fast and well and with a minimum of customer heartache is now simply table stakes in the game.

If you really want to deliver value to a customer, as a service provider, you’ve got to be asking yourself what you can do to smooth the whole delivery chain of IT Operations management and everything the customer needs to build out their solution on your IaaS platform. That’s true whether your audience is developers, devops, or IT operations.

Think: Hierarchical management of users and resources, RBAC for both users and API keys, governance (showback/chargeback, quotas/budgets, leases, workflow-driven provisioning), monitoring (from resources to APM), auto-scaling (both horizontal and vertical), complex network configurations, multi-data-center replication, automated patch management, automated capabilities to support regulatory compliance needs, sophisticated service catalog capabilities that include full deployment templates and are coupled with on-demand software licensing, integration with third-party IT operations management tools… and that’s only the start.
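
To make the governance item concrete: here is a minimal sketch of the kind of quota gate a provisioning API might run. Every name in it is invented for illustration; no particular provider works this way.

```python
# Hypothetical sketch of a quota/budget gate in a provisioning API.
# All names and numbers are invented for illustration.

class QuotaExceeded(Exception):
    pass

def check_quota(team, request, quotas, usage):
    """Reject a provisioning request that would exceed the team's quota."""
    for resource in ("cores", "ram_gb"):
        if usage[team][resource] + request[resource] > quotas[team][resource]:
            raise QuotaExceeded(f"{team}: {resource} quota exhausted")

quotas = {"web-team": {"cores": 64, "ram_gb": 256}}
usage = {"web-team": {"cores": 60, "ram_gb": 128}}

check_quota("web-team", {"cores": 2, "ram_gb": 8}, quotas, usage)  # passes
try:
    check_quota("web-team", {"cores": 8, "ram_gb": 32}, quotas, usage)
except QuotaExceeded as e:
    print("rejected:", e)  # rejected: web-team: cores quota exhausted
```

A real gate would sit in front of the provisioning workflow and feed the same numbers into showback/chargeback reporting.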

If you are in the cloud IaaS business and you do not have an aggressive roadmap of new feature releases, you are going to be behind the eight-ball — and you should picture it as the kind of rolling boulder that chases Indiana Jones. It doesn’t matter whether your competitor is Amazon or one of the many providers in the VMware ecosystem. Software-defined infrastructure is a software business, and it moves at the speed of software. You don’t have to be Amazon and compete directly with their feature set, but you had better think about what value you can add that goes beyond getting VMs fast.

Managed Hosting and Cloud IaaS Magic Quadrant

We’re wrapping up our Public Cloud IaaS Magic Quadrant (the drafts will be going out for review today or tomorrow), and we’ve just formally initiated the Managed Hosting and Cloud IaaS Magic Quadrant. This new Magic Quadrant is the next update of last year’s Magic Quadrant for Cloud Infrastructure as a Service and Web Hosting.

Last year’s MQ mixed managed hosting (whether on physical servers, multi-tenant virtualized “utility hosting” platforms, or cloud IaaS) with the various self-service cloud IaaS use cases. While it presented an overall market view, the diversity of the represented use cases meant that it was difficult to use the MQ for vendor selection.

Consequently, we added the Public Cloud IaaS MQ (covering self-service cloud IaaS), and retitled the old MQ to “Managed Hosting and Cloud IaaS” (covering managed hosting and managed cloud IaaS). They are going to be two dramatically different-looking MQs, with a very different vendor population.

The Managed Hosting and Cloud IaaS MQ covers:

  • Managed hosting on physical servers
  • Managed hosting on a utility hosting platform
  • Managed hosting on cloud IaaS
  • Managed hybrid hosting (blended delivery models)
  • Managed Cloud IaaS (at minimum, guest OS is provider-managed)

Both portions of the market are important now, and will continue to be important in the future, and we hope that having two Magic Quadrants will provide better clarity.

Yottaa, 4th-gen CDNs, and acceleration for the masses

I’d meant to blog about Yottaa when it launched back in October, because it’s one of the most interesting entrants to the CDN market that I’ve seen in a while.

Yottaa is a fourth-generation CDN that offers site analytics, edge caching, a little bit of network optimization, and front-end optimization.

While CDNs of earlier generations have heavily capital-intensive models that require owning substantial server assets, fourth-generation CDNs have a software-oriented model and own few, if any, delivery assets themselves. (See a previous post for more on 4th-gen CDNs.)

In Yottaa’s case, it uses a large number of cloud IaaS providers around the world in order to get a global server footprint. Since these resources can be obtained on-demand, Yottaa can dynamically match its capacity to customer demand, rather than pouring capital into building out a server network. (This is a critical new market capability in general, because it means that as the global footprint of cloud IaaS grows, there can be far more competition in the innovative portion of the CDN market — it’s far less expensive to start a software company than a company trying to build out a competitive CDN footprint.)
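
For illustration only, the capacity-matching logic can be sketched like this; the utilization targets and data shapes are invented, not Yottaa’s actual design.

```python
# Invented sketch of a capacity-matching loop for a software CDN that
# rents its footprint from cloud IaaS providers.

TARGET_UTILIZATION = 0.6  # aim to keep regional nodes at ~60% load

def plan_capacity(regions):
    """Return actions to match rented node capacity to regional demand.

    regions maps name -> {"demand_gbps": float, "nodes": [...]},
    where each node contributes "capacity_gbps". Purely illustrative.
    """
    actions = []
    for name, state in regions.items():
        capacity = sum(n["capacity_gbps"] for n in state["nodes"])
        if state["demand_gbps"] > capacity * TARGET_UTILIZATION:
            actions.append(("provision", name))      # rent another VM here
        elif (state["demand_gbps"] < capacity * TARGET_UTILIZATION / 2
              and len(state["nodes"]) > 1):
            actions.append(("decommission", name))   # give a VM back
    return actions

print(plan_capacity({
    "eu-west": {"demand_gbps": 9.0,
                "nodes": [{"capacity_gbps": 5}, {"capacity_gbps": 5}]},
    "us-east": {"demand_gbps": 1.0,
                "nodes": [{"capacity_gbps": 5}, {"capacity_gbps": 5}]},
}))
# -> [('provision', 'eu-west'), ('decommission', 'us-east')]
```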

There are three main things that you can do to speed up the delivery of a website (or Web-based app): you can do edge caching (classic static content CDN delivery), you can do network optimization (the kind of thing that F5 has in its Web App Accelerator add-on to its ADC, or, as a service, something like Akamai DSA), or you can do front-end optimization, sometimes known as Web content optimization or Web performance optimization (the kind of thing that Riverbed’s Aptimize does). Gartner clients only: See my research note “How to Accelerate Internet Websites and Applications” for more on acceleration techniques.
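
To make the first of those techniques concrete: edge caching only works if the origin emits cacheable responses. Here is a minimal sketch, using only Python’s standard library, of the headers involved; the TTL values are arbitrary.

```python
# Minimal illustration of the response headers that let a CDN edge
# cache static content (the "classic static CDN" technique).

from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"/* static stylesheet */"
        self.send_response(200)
        self.send_header("Content-Type", "text/css")
        # s-maxage tells the edge, max-age tells the browser, how long
        # this object may be served without revalidating at the origin.
        self.send_header("Cache-Control",
                         "public, max-age=3600, s-maxage=86400")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("localhost", 8080), Handler).serve_forever()
```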

Yottaa does all three of these things, although its network optimizations are much more minimal than a typical DSA service. It does so in an easy-to-configure way that’s readily consumable by your average webmaster. And the entry price point is $20/month. Sign up for an instant free trial, online — here’s the consumerization of CDN services for you.

When I calculated prices at volume, using the estimates that Yottaa gave me, I realized that it’s nowhere near as cheap as it might first look — it’s comparable to Akamai DSA pricing. But it’s the kind of thing that pretty much anyone could add to their site if they cared about performance. It brings acceleration to the mass market.

Sure, your typical Akamai customer is probably not going to be consuming this service just yet. But give it some time. This is the new face of competition in the CDN market — companies that are focused entirely on software development (possibly using low-cost labor: Yottaa’s engineering is in Beijing), relying on somebody else’s cloud infrastructure rather than burning capital building a network.

Three years ago, I asked in my blog: Are CDNs software or infrastructure companies? The new entrants, which also include folks like CloudFlare, come down firmly on the software side.

Recently, one of my Gartner Invest clients — a buy-side analyst who specializes in infrastructure software companies — pointed out to me that Akamai’s R&D spending is proportionately small compared to the other companies in that sector, while it spends huge amounts of capex on building out more network capacity. I would previously have put Akamai firmly on the software side of the “CDNs as software or infrastructure” question. It’s interesting to think about the extent to which this might or might not be the case going forward.

There’s no such thing as a “safe” public cloud IaaS

I’ve been trialing cloud IaaS providers lately, and the frustration of getting through many of the sign-up processes has reminded me of some recurring conversations that I’ve had with service providers over the past few years.

Many cloud IaaS providers regard the fact that they don’t take online sign-ups as a point of pride — they’re not looking to serve single developers, they say. This is a business decision, which is worth examining separately (a future blog post, and I’ve already started writing a research note on why that attitude is problematic).

However, many cloud IaaS providers state that their real reason for not taking online sign-ups, or for having long waiting periods to actually get an account provisioned (and silently dropping some sign-ups into a black hole, whether or not they’re actually legitimate), is that they’re trying to avoid the bad eggs — credit card fraud, botnets, scammers, spammers, whatever. Some cloud providers go so far as to insist that they have a “private cloud” because it’s not “open to the general public”. (I consider this lying to your prospects, by the way, and I think it’s unethical. “Marketing spin” shouldn’t be aimed at making prospects so dizzy they can’t figure out your double-talk. The industry uses NIST definitions, and customers assume NIST definitions, and “private” therefore implies “single-tenant”.)

But the thing that worries me is that cloud IaaS providers claim that vetting who signs up for their cloud, and ensuring that they’re “real businesses”, makes their public, multi-tenant cloud “safe”. It doesn’t. In fact, it can lull cloud providers into complacency: assuming that there will be no bad actors within their cloud, they fail to take adequate measures to defend against bad actors who work for a customer — or against customer mistakes, and most importantly, against breaches of a customer’s security that result in bad eggs having access to their infrastructure.

Cloud providers tell me that folks like Amazon spend a ton of money and effort trying to deal with bad actors, since they get tons of them from online sign-ups, and that they themselves can’t do this, for financial or technical reasons. Well, if you can’t do this, you are highly likely to also lack the alerting to see when your vaunted legitimate customers have been compromised by the bad guys and gone rogue; the ability to respond immediately and automatically to stop the behavior and thereby protect your infrastructure and customers; and the means to do accurate, consistent forensics for law enforcement afterwards. Because you don’t expect it to be a frequent problem, you don’t have the paranoid level of automatic and constant sweep-and-enforce that a provider like Amazon has to have.
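
For illustration only (this is not any provider’s actual system), a sweep-and-enforce pass can be conceptually as simple as the sketch below; the thresholds and record format are invented.

```python
# Invented sketch of a "sweep and enforce" pass: scan per-VM flow
# stats for classic abuse signatures and quarantine automatically,
# rather than waiting for an abuse@ complaint.

SMTP_CONN_LIMIT = 500      # outbound SMTP conns/min -> likely spam run
DISTINCT_DST_LIMIT = 5000  # distinct destination IPs/min -> likely scanning

def sweep(flow_stats, quarantine):
    """flow_stats: iterable of dicts like
    {"vm": "...", "smtp_conns": int, "distinct_dsts": int}."""
    for rec in flow_stats:
        if (rec["smtp_conns"] > SMTP_CONN_LIMIT
                or rec["distinct_dsts"] > DISTINCT_DST_LIMIT):
            # Isolate first, notify the customer second; a compromised
            # "legitimate business" looks exactly like any other bad actor.
            quarantine(rec["vm"])

sweep(
    [{"vm": "vm-123", "smtp_conns": 12, "distinct_dsts": 40},
     {"vm": "vm-456", "smtp_conns": 9000, "distinct_dsts": 200}],
    quarantine=lambda vm: print(f"quarantined {vm}"),
)
```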

And that should scare every enterprise customer who gets smugly told by a cloud provider that they’re safe, and no bad guys can get access to their platform because they don’t take credit-card sign-ups.

So if you’re a security-conscious company, considering use of multi-tenant cloud services, you should ask prospective service providers, “What are you doing to protect me from your other customers’ security problems, and what measures do you have in place to quickly and automatically detect and eliminate bad behavior?” — and don’t accept “we only accept upstanding citizens like yourself on our cloud, sir” as a valid answer.

Common service provider myths about cloud infrastructure

We’re currently in the midst of agenda planning for 2012, which is a fancy way to say that we’re trying to figure out what we’re going to write next year. Probably to the despair of my managers, I am almost totally a spontaneous writer who sits down on a plane and happens to write a research note on whatever has occurred to me at the moment. So I’ve been pondering what to write, and decided that I ought to tap into the deep well of frustration I’ve been feeling about the cloud IaaS market over the last couple of months.

Specifically, it started me thinking about the most common fallacies that I hear from current cloud IaaS providers, or from vendors who are working on getting into the business. I think each of these things is worthy of a research note (in some cases, I’ve already written one), but they’re also worth a blog post series, because I have the occasional desire to explode in frustrated rants. Also, when I write research, it comes out as carefully polite, thoughtfully considered, heavily nuanced, peer-reviewed documents that run ten to twenty pages and get vaguely skimmed, often by mid-level folks in product marketing. If I write a blog post, it will be short and pointed, and might actually get the point through to people, especially the executives who are more likely to read my blog than my research.

So, here’s the succinct list to be explored in further posts. These are things I have said to vendor clients in inquiries, in politely measured terms. These are the blunt versions:

Doing this cloud infrastructure thing is hard and expensive. Yes, I know that VMware told you that you could just get a VCE Vblock, put VMware’s cloud stack on it (maybe with a little help from VMware consulting), and be in business. That’s not the case. You will be making a huge number of engineering decisions (most of which can screw you in a variety of colorful ways, either immediately or down the road). You will be integrating a ton of tools and doing a bunch of software development yourself, if you want to have a vaguely competitive offering for anything other than the small business migrating from VPS. Ditto if you use Citrix (Cloud.com), OpenStack, or whomever. Even with professional services to help you. And once you have an offering, you will be in a giant competitive rat race where the best players innovate fast, and the capabilities gap widens, not closes. If you’re not up to it, white-label, resell, or broker instead.

There is more to the competition than Amazon, but ignore Amazon at your peril. Sure, Amazon is the market goliath, but if your differentiation is “we’re not like Amazon, we’re enterprise-class!”, you’re now competing against the dozens of other providers who also thought that would be a clever market differentiation. Not to mention that Amazon already serves the enterprise, and wants to deepen its inroads. (Where Amazon is hurting is the mid-market, but there’s tons of competition there, too.) Do you seriously think that Amazon isn’t going to start introducing service features targeted at the enterprise? They already have, and they’re continuing to do so.

Not everything has to be engineered to five nines of availability. Many businesses, especially those moving legacy workloads, need reliable, consistently high-performance infrastructure. However, most businesses shouldn’t get infrastructure as one-size-fits-all — this is part of what is making internal data centers expensive. Instead, cloud infrastructure should be tiered — one management portal, one API, multiple levels of service at different price points. “Everything we do is enterprise-class” unfortunately implies “everything we do is expensive”.

Your contempt for the individual developer hugely limits your sales opportunities. Developers are the face of the business buyer. They are the way that cloud IaaS makes inroads into traditional businesses, including the largest enterprises. This is not just about start-ups or small businesses, or about the companies going DevOps.

Prospective customers will not call Sales when your website is useless. Your lack of useful information on your website doesn’t mean that eager prospects will call sales wanting to know what wonderful things you have. Instead, they will assume that you suck, that you don’t get the cloud, and that you are hiding what you have because it’s not actually competitive, and they will move on to the dozens of other providers selling cloud IaaS, or pretending to do so. Also, engineers hate talking to salespeople. Blind RFPs are common in this market, but so is simply signing up with a provider that doesn’t make it painful to get their service.

Just because you don’t take online sign-ups doesn’t mean your cloud is “safe”. Even if you only take “legitimate businesses”, customers make mistakes and their infrastructure gets compromised. Sure, your security controls might ensure that the bad guys don’t compromise your other customers. But that doesn’t mean you won’t end up hosting command-and-control for a botnet, scammers, or spammers, inadvertently. Service providers who take credit card sign-ups are professionally paranoid about these things; buyers should beware providers who think “only real businesses like you can use our cloud” means no bad guys inside the walls.

Automation, not people, is the future. Okay, you’re more of a “managed services” kind of company, and self-service isn’t really your thing. Except “managed services” are, today, basically a codeword for “expensive manual labor”. The real future value of cloud IaaS is automating the heck out of most of the lower-end managed services. If you don’t get on that bandwagon soon, you are going to eventually stop being cost-competitive — not to mention that automation means consistency and likely higher quality. There’s a future in having people still, but not for things that are better done by computers.

Carriers won’t dominate the cloud. This opinion is controversial. Of course, carriers will be pretty significant players — especially since they’ve been buying up the leading independent cloud IaaS providers. But many other analyst firms, and certainly the carriers themselves, believe that the network, and the ability to offer an end-to-end service, will be a key differentiator that allows carriers to dominate this business. But that’s not what customers actually want. They want private networking from their carrier that connects them to their infrastructure — which they can get out of a carrier-neutral data center that is a “cloud hub”. Customers are better off going into a cloud hub with a colocated “cloud gateway” (with security, WAN optimization, etc.), cross-connecting to their various cloud providers (whether IaaS, PaaS, SaaS, etc.), and taking one private network connection home.

Stay tuned. More to come.

Trialing a lot of cloud IaaS providers

I’ve just finished writing the forthcoming Public Cloud IaaS Magic Quadrant (except for some anticipated tweaks when particular providers come back with answers to some questions), which has twenty providers. Although Gartner normally doesn’t do hands-on evaluations, this MQ was an exception, because the easiest way to find out if a given service can do X was generally to get an account and attempt to do X. Asking the vendor sometimes requires a bunch of back-and-forth, especially if they don’t do X and are weaseling their reply, forcing you to ask a set of increasingly narrow, specific questions until you get a clear answer. Also, I did not want to constantly bombard the vendors with questions, since, come MQ time, a question tends to result in a fire drill whether or not you intended it as urgent or even particularly important. (I apologize for the fact that I ended up bombarding many vendors with questions anyway.)

I’ve used cloud services before, of course, and I am a paying customer of two cloud IaaS providers and a hosting provider, for my personal hobbies. But there’s nothing quite like a blitzkrieg through this many providers all at once. (And I’m not quite done, because some providers without online sign-up are still getting back to me on getting a trial account.)

In the course of doing this, I have had some great experiences, some mediocre experiences, and some “you really sell this and people buy it?” experiences. I have chatted online with support folks for basic questions not covered in the documentation (like “if I stop this VM, does it stop billing me, or not?”, which varies from provider to provider). I have filed numerous support tickets (for genuine issues, not for evaluation purposes). I have filed multiple bug reports. I have read documentation (sometimes scanty to non-existent). I have clicked around interfaces, and I have actually used the APIs (working in Python, and in one case, without using a library like libcloud); I have probably weirded out some vendors by doing these things at 2 am, although follow-the-sun support has been intriguing. Those of you who follow me on Twitter (@cloudpundit) have gotten little glimpses of some of these things.
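
For the curious, the basic provision/de-provision exercise looks roughly like the sketch below with a library like libcloud. The credentials are placeholders, I am naively grabbing the first size and image returned, and exact driver arguments vary by provider and library version.

```python
# Minimal provision/de-provision sketch with Apache Libcloud.
# Credentials are placeholders; a real script would pick the size
# and image deliberately rather than taking the first one listed.

from libcloud.compute.types import Provider
from libcloud.compute.providers import get_driver

cls = get_driver(Provider.EC2)          # any supported provider works
driver = cls("ACCESS_KEY", "SECRET_KEY", region="us-east-1")

size = driver.list_sizes()[0]
image = driver.list_images()[0]

node = driver.create_node(name="mq-trial", size=size, image=image)
print(node.id, node.state)

driver.destroy_node(node)  # and verify the provider actually stops
                           # billing you at this point -- it varies
```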

Ironically, I have tried not to let these trials unduly influence my MQ evaluations, except to the extent that these things are indisputably factual — features, availability of documentation, etc. But I have taken away strong impressions about ease of use, even for just the basic task of provisioning and de-provisioning a virtual machine. There is phenomenal variation in ease of use, and many providers could really use the services of a usability expert.

Any number of these providers have made weird, seemingly boneheaded decisions in their UI or service design — decisions that carry no penalty in MQ scoring, but that did occasionally make me stare and go, “Seriously?”

I’m reluctant to publicly call out vendors for this stuff, so I’ll pick just one example, from a vendor that has open online sign-up, where the issue has already been raised on a community forum (so it’s not a private matter), and where the vendor is not the sort (I hope) to make angry calls to Gartner’s Ombudsman demanding that I take this post down. (Dear OpSource folks: Think of this as tough love, and I hope Dimension Data analyst relations doesn’t have conniptions.)

So, consider: OpSource has pre-built VMs that come with a set amount of compute and RAM, bundled with an OS. Great. Except that you can’t alter a bundle at the time of provisioning. So, say, if I want their Ubuntu image, it comes only in a 2-CPU-core config. If I want only 1 core, I have to provision that image, wait for the provisioning to finish, go in and edit the VM config to reduce it to 1 core, and then wait for it to restart. After I go through that song and dance once, I can clone the config… but it boggles the mind why I can’t get the config I want from the start. I’m sure there’s a good technical reason, but the provider’s job is to mask such things from the user.
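
In API terms, the dance looks roughly like the sketch below. Every method name in it is hypothetical, since the point is the workflow, not OpSource’s actual API.

```python
# The workaround, sketched against a hypothetical client; every
# method name here is invented for illustration.

def provision_one_core_ubuntu(client):
    # 1. Provision the stock bundle (it comes with 2 cores, like it
    #    or not).
    vm = client.deploy_image("ubuntu-2cpu-bundle", name="my-server")
    client.wait_until(vm, state="RUNNING")

    # 2. Edit the config down to what was wanted in the first place...
    client.modify_vm(vm, cpu_count=1)

    # 3. ...and sit through the restart that the edit triggers.
    client.wait_until(vm, state="RUNNING")
    return vm  # clone this config to skip the dance next time
```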

The experience has also caused me to wholly revise my opinion of vCloud Director as a self-service tool for the average goomba who wants a VM. I’d always seen vCD in demos given by experts, where, despite the pile of complex functionality, it looked easy enough to use. The key thing is that the service catalogs were always pre-populated in those demos. If you’re starting from the bare vCD install that a vCloud Powered provider is going to give you, you face a daunting task. Complexity is necessary for that level of fine-grained functionality, but it’s software that is in desperate need of pre-configuration from the service provider, and quite possibly an overlay interface for Joe Average Developer.

Now we’ll see if my bank freezes my credit card for possible fraud when I’m hit with a dozen couple-of-cents-to-a-few-dollars charges come the billing cycle — I used my personal credit card for this, not my corporate card, since Gartner doesn’t actually reimburse for this kind of work. Ironically, once I had spent a bunch of time on these sites, Google and the other online ad networks started displaying ads that consist of nothing but cloud providers, including “click here for a free trial” or “$50 credit” or whatever. Of course, you can’t apply those offers to existing accounts — which turns every little “hey, you’ve spent another ten cents provisioning and de-provisioning this VM” charge that I’m noting in the back of my head into something that will probably annoy me in aggregate come the billing cycle.

Some things, you just can’t know until you try it yourself.

Results of Symposium workshop on Amazon

I promised the attendees at my Gartner Symposium workshop, called “Using Amazon Web Services”, that I would post the notes from the session, so here they are — with some context for public consumption.

A workshop is a structured, facilitated discussion designed to assist participants in working through a problem, coming up with best practices, etc. This one had thirty people, all from IT organizations that were either using Amazon or planning to use Amazon.

Because I didn’t know what level of experience with Amazon the workshop attendees would have, I actually prepared two workshops in advance. One of them was a highly structured work-through of preparing to use Amazon in a more formal way (i.e., not a single developer with a credit card or the like), and the other was a facilitated sharing of challenges and best practices amongst current adopters. As the room skewed heavily towards people who already had a deployment well under way, this workshop focused on the latter.

I started the workshop with introductions — people, companies, current use cases. Then, I asked attendees to share their use cases in more detail in their smaller working groups. This turned into a set of active discussions that I allowed extra time for, before I asked each group to make a list of their most significant challenges in adopting/using Amazon, and their solutions, if any. Throughout, I circulated the room, listening and, rarely, commenting. Each group then shared their findings, and I offered some commentary and then did an open Q&A (with some more participant sharing of their answers to questions).

Broadly, I would say that we had three types of people in the room. We had folks from the public sector and education, who were at a relatively early stage in adoption; we had people who were test/dev oriented but in a significant way (i.e., formal adoption, not a handful of developers doing their thang); and we had people who were more e-business oriented (including people from net-native businesses like SaaS, as well as traditional businesses with a hosting type of need), although that could be test/dev or production. Most of the people were mid-level IT management with direct responsibility for the Amazon services.

Some key observations:

Dealing with the financial aspects of moving to the cloud is hard. Understanding the return on investment, accurately estimating costs in advance, comparing costs to internal costs, and understanding the details of billing were common challenges of the participants. Moreover, it raises the issue of “is capital king or is expense king?” Although the broader industry is constantly talking about how people are trying to move to expense rather than capital, workshop participants frequently asserted that it was easier for them to get capital than to up their recurring expenses. (As a side note, I have found that to be a frequent assertion in both inquiry and conference 1-on-1s.) Finally, user management, cost control, and turning resources on/off appropriately were problematic in the financial context.

Move low-risk workloads first. The workshop participants generally assessed Amazon as being suitable only for test/dev, non-mission-critical workloads, and things that had specifically been designed with Amazon’s characteristics in mind. Participants recommended building a risk profile of their apps, and moving the low-risk apps first. They also saw their security organizations as being a barrier to adoption. Many had issues with their Legal departments either trying to prevent use of services or causing issues in the contracting process (what Amazon calls an Enterprise Agreement); participants recommended not involving Legal until after adopting the service.

Performance is a problem. Performance was cited as a frequent issue, especially storage performance, which participants noted was unsuitable for their production applications. One participant made the key point that many test/dev situations also require highly performant storage — something he had first discovered when his ILM strategy placed test/dev storage at a lower, more commoditized tier and it impacted his developers.

Know what your SLA isn’t. Amazon’s limited SLAs were cited as an issue, particularly the mismatch between what many users thought the SLA was, what it actually is, and what it has turned out to mean in practice (given Amazon’s outages this year). Participants also stressed business continuity planning in this context.
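
The arithmetic behind that mismatch is worth doing explicitly. Here is a trivial calculator; it deliberately ignores the fine print (measurement windows, exclusions, credit-only remedies), which is exactly where the expectation gap lives.

```python
# Quick arithmetic for what an availability percentage actually allows.
# This ignores SLA fine print, which is where the surprises come from.

def downtime_allowed_minutes(sla_pct, period_hours):
    return period_hours * 60 * (1 - sla_pct / 100.0)

print(f"{downtime_allowed_minutes(99.95, 30 * 24):.1f} min/month at 99.95%")
# -> 21.6 min/month
print(f"{downtime_allowed_minutes(99.95, 365 * 24):.0f} min/year at 99.95%")
# -> 263 min/year
```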

Integration is a challenge. Participants noted that going to test/dev in the cloud, while maintaining production in an internal data center, splits the software development lifecycle across data centers. This can be overcome to some degree with the appropriate tools, but still creates challenges and sometimes outright problems. Also, because speed of deployment is such a driving factor to go to the cloud, there is a resulting fragmentation of solutions. A service catalog would help some of these issues.

Data management can be a challenge. Participants were worried about regulatory compliance and the “where is my data?” question. Inexperienced participants were often not aware that non-S3 data is generally local to an availability zone. But even beyond that, there’s the question of what data is being put where by the cloud users. Participants with larger amounts of data also faced challenges in moving data in and out of the cloud.
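
To illustrate the locality point with the AWS Python SDK (boto3 here; the instance ID and sizes are placeholders): an EBS volume is created in exactly one availability zone, and can only be attached to instances in that zone.

```python
# Illustration of availability-zone locality for block storage.
# IDs and sizes are placeholders.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

vol = ec2.create_volume(AvailabilityZone="us-east-1a", Size=100,
                        VolumeType="gp2")

# This attach works only if the instance also lives in us-east-1a;
# moving the data to another zone means snapshotting (to S3) and
# re-creating the volume there.
ec2.attach_volume(VolumeId=vol["VolumeId"],
                  InstanceId="i-0123456789abcdef0",
                  Device="/dev/sdf")
```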

Amazon isn’t the right provider for all workloads in the cloud. Several workshop participants used other cloud IaaS providers in addition to Amazon, for a variety of other reasons — greater ease of use for users who didn’t need complex things, enterprise-grade availability and performance, better manageability, security capabilities, and so forth.

I have conducted cloud workshops and what Gartner calls analyst/user roundtables at a bunch of our conferences now, and it’s always interesting what the different audiences think about, and how much it’s evolving over time. Compared to last year’s Symposium, the state of the art of Amazon adoption amongst conference attendees has clearly advanced hugely.

What makes Akamai sticky?

There’s one thing in particular that tends to make Akamai customers “sticky” — how much the customer uses professional services. The more professional services a customer consumes from Akamai, the less likely it is they’ll ever switch CDNs. In short: The more of a pain it’s been for them to integrate with Akamai’s CDN (usually due to the customer having a complex site that violates best practices related to content cacheability), and the more they have to use recurring professional services every time they update their site, the less likely it is that they’re going to move to another CDN. That’s for two reasons — one, because it’s difficult and expensive to do the up-front work to get the site onto another CDN, and two, because most other CDNs don’t like to do extensive professional services on a recurring basis. That makes the use of professional services a double-edged sword, since it’s not really a business with great margins, and you’re vulnerable if the customer eventually goes and builds a site that isn’t a great big hairy mess.

But there’s one Akamai product (delivered as a value-added additional service) that’s currently sufficiently compelling that customers and prospects who want it won’t consider any other CDN that can’t offer the same. (And since it’s currently unique to Akamai, that means no competition — always a boon in a market where pricing is daily warfare.) I’m suddenly seeing it frequently quoted, which makes it likely that it’s a significant sales push, though it’s not a brand-new product. It’s a very effective attach.

Can you guess what it is?

(You may feel free to speculate on my blog, but if you want the answer, and you’re a Gartner client, make an inquiry request through the usual means.)

AT&T’s CDN re-launch

(This is part of a series of “catch-up” posts of announcements that I’ve wanted to comment on but didn’t previously find time to blog about.)

AT&T recently re-launched its CDN, in essence — new technology, new branding, new footprint.

AT&T’s existing CDN product, called iCDS, has had limited success in the marketplace. They’ve been a low-cost competitor, but their deal success in the high-volume market has been low — Level 3, for instance, has offered prices just as good or better on a more featureful, higher-performance service, and with other competitors, notably Akamai and Limelight, willing to compete in the low-cost high-volume market, it’s been difficult for AT&T to compete successfully on price (although they certainly helped the general decline in prices). We have, however, seen them get good pick-up with CDN added to a managed hosting contract — there are plenty of managed hosting customers happy to sign on for $1,000 or $2,000 worth of CDN. (We’ve also seen this with other hosters that casually quote a little bit of CDN along with managed hosting deals; it’s not just an AT&T phenomenon.) We’ve also seen them pitch “hey, you should use us if you want to reach iPhone customers”, but that’s too narrow for most content providers to consider right now.

Previously, AT&T had been insistent on developing all of its CDN technology in-house. AT&T has a long and proud “built here and only here” tradition, especially with AT&T Labs, but it simply hasn’t worked out well for its CDN, especially since anything that AT&T builds in portal technology tends to look like it was built by hard-core geeks for other hard-core geeks — not the slick, user-friendly, Web 2.0 interfaces that you’ll see coming out of many other service providers these days. That left everything in iCDS having to do with how customers interface with the CDN to actually get something useful done — configuration, analytics, and so on — pretty sub-par relative to the market.

AT&T has now done something that would probably be smart for other carriers to emulate — buying CDN technology rather than developing it in-house. There are now plenty of vendors to choose from — Cisco, Juniper (Ankeena), Alcatel-Lucent (Velocix), Edgecast, 3Crowd, JetStream, etc. — and although these solutions vary wildly in quality and completeness, I’m still bemused by the number of carriers whose engineers are really jonesing to build their own in-house technology. In AT&T’s case, they’ve selected Edgecast’s software solution — a nice feather in the cap for Edgecast, definitely, given the kind of scrutiny that AT&T gives its solutions that are going to be deployed in its network. (Carrier CDNs are very much a hot trend at the moment, although they’re a hot trend relative to the otherwise glacial speed at which carriers do anything.)

AT&T is building out a new footprint of servers running the Edgecast software. They’ll operate both the old and new CDNs for some time — existing iCDS customers will continue to run on the existing iCDS platform and footprint, and new customers will go onto the new platform. That means it’s going to take some time to assess the real performance of the new CDN, as the POPs are being rolled out gradually. The new footprint will be similar, but not identical, to the old footprint.

However, I don’t think the launch of a new AT&T CDN is anywhere near as significant for the market as AT&T’s continued success in reselling Cotendo. The AT&T CDN itself is simply part of the already-commoditized market for high-volume delivery — the re-launch will likely return them to real competitiveness, but doesn’t change any fundamental market dynamics.

Riverbed acquires Zeus and Aptimize

(This is part of a series of “catch-up” posts of announcements that I’ve wanted to comment on but didn’t previously find time to blog about.)

Riverbed made two interesting acquisitions recently, which I think signal a clear intention to be more than just a traditional WAN optimization controller (WOC) vendor — Zeus, and Aptimize. If you’re an investment banker or a networking vendor who has talked to me over the last year, you know that these two companies have been right at the top of my “who I think should get bought” list; these are both great pick-ups for Riverbed.

Zeus has been around for quite some time now, but a lot of people have never heard of them. They’re a small company in the UK. Those of you who have been following infrastructure for the Web since the 1990s might remember them as the guys who developed the highest-performance webserver — if a vendor did SPECweb benchmarks for its hardware back then, they generally used Zeus for the software. It was a great service provider product, too, especially for shared Web hosting — it had tons of useful sandboxing and throttling features that were light-years ahead of anyone else back then. But despite the fact that the tech was fantastic, Zeus was never really commercially successful with their webserver software, and eventually they turned their underlying tech to building application delivery controller (ADC) software instead.

Today, Zeus sells a high-performance, software-based ADC, with a nice set of features, including the ability to act as a proxy cache. It’s a common choice for high-end load-balancing when cloud IaaS customers need to be able to deploy a virtual appliance running on a VM, rather than dropping in a box. It’s also the underlying ADC for a variety of cloud IaaS providers, including Joyent and Rackspace (which means it’ll also get an integration interface to OpenStack). Notably, over the last two years, we’ve seen Zeus supplanting or supplementing F5 Networks in historically strong F5 service provider accounts.

Aptimize, by contrast, is a relatively new start-up. It’s a market leader in front-end optimization (FEO), sometimes also called Web performance optimization (WPO) or Web content optimization (WCO). FEO is the hot new thing in acceleration — it’s been the big market innovation in the last two years. While previous acceleration approaches have focused upon network and protocol optimization, or on edge caching, FEO optimizes the pages themselves — the HTML, Cascading Style Sheets (CSS), JavaScript, and so forth that goes into them. It basically takes whatever the webserver output is and attempts to automatically apply the kinds of best practices that Steve Souders has espoused in his books.
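
A toy example of the genre, merging multiple stylesheet links into one to cut request count, is sketched below; real FEO products like Aptimize do vastly more, and do it safely.

```python
# Toy front-end optimization pass: merge multiple stylesheet <link>
# tags into one to cut round trips. Purely illustrative; a real
# deployment would also have to serve /combined.css as the ordered
# concatenation of the original files.

import re

def combine_css_links(html, combined_href="/combined.css"):
    links = re.findall(r'<link[^>]+rel="stylesheet"[^>]*>', html)
    if len(links) <= 1:
        return html
    first, *rest = links
    html = html.replace(
        first, f'<link rel="stylesheet" href="{combined_href}">')
    for link in rest:
        html = html.replace(link, "")
    return html

page = ('<head><link rel="stylesheet" href="/a.css">'
        '<link rel="stylesheet" href="/b.css"></head>')
print(combine_css_links(page))
# -> <head><link rel="stylesheet" href="/combined.css"></head>
```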

Aptimize makes a software-based FEO solution which can be deployed in a variety of ways, including as a virtual appliance running on a VM. (FEO is generally a computationally expensive thing, though, since it involves lots of text parsing, so it’s not unusual to see it run on a standalone server.)

So, what Riverbed has basically bought itself is the ability to offer a complete optimization solution — WOC, ADC, and FEO — plus the intellectual property portfolio to eventually combine the techniques from all three families of products into an integrated product suite. (Note that Riverbed is fairly cloud-friendly already with its Virtual Steelhead.)

I think this also illustrates the vital importance of “beyond the box” thinking. Networking hardware has traditionally been just that — specialized appliances with custom hardware that can do something to traffic, really really fast. But off-the-shelf servers have gotten so powerful that they can now deliver the kind of processing oomph and network throughput that you used to have to build custom hardware logic to achieve. That’s leading us to the rise of networking vendors who make software appliances instead, because it’s a heck of a lot easier and cheaper to launch a software company than a hardware company (probably something like a 3:1 ratio in funding needed), you can get product to market and iterate much more quickly, and you can integrate more easily with other products.

ObPlug for new, related research notes (Gartner clients only):