Blog Archives

Google Compute Engine goes GA

Google Compute Engine (GCE) — Google’s cloud IaaS offering — is now in general availability, an announcement accompanied by a 10% price drop, new persistent disk (which should now pretty much always be used instead of scratch disk), and expanded OS support (though no Microsoft Windows yet). The announcement also highlights two things I wrote about GCE recently, in posts about its infrastructure resilience features — live migration and fast VM restart.

Amazon Web Services (AWS) remains the king of this space and is unlikely to be dethroned anytime soon, although Microsoft Windows Azure is clearly an up-and-coming competitor due to Microsoft’s deep established relationships with business customers. GCE is more likely to target the cloud-natives that are going to AWS right now — companies doing things that the cloud is uniquely well-suited to serve. But I think the barriers to Google moving into mainstream businesses are more of a matter of go-to-market execution, along with trust, track record, and an enterprise-friendly way of doing business — Google’s competitive issues are unlikely to be technology.

In fact, I think that Google is likely to push the market forward in terms of innovation in a way that Azure will not; AWS and Google will hopefully goad each other into one-upmanship, creating a virtuous cycle of introducing things that customers discover they love, thus creating user demand that pushes the market forward. Google has a tremendous wealth of technological capabilities in-house that it can likely externalize over time. Most organizations can’t do things the way that Google does them, but Google can certainly start trying to make it easier for other organizations to adopt the Google approach to the world, by exposing its tools in an easily consumable way.

GCE still lags AWS tremendously in terms of breadth and depth of feature set, of course, but it also has aspects that are immediately more attractive for some workloads. However, it’s now at the point where it’s a viable alternative to AWS for organizations who are looking to do cloud-native applications, whether they’re start-ups or long-established companies. I think the GA of GCE is a demarcation of market eras — we’re now moving into a second phase of this market, and things only get more interesting from here onwards.

CenturyLink (Savvis) acquires Tier 3

If you’re an investment banker or a vendor, and you’ve asked me in the last year, “Who should we buy?”, I’ve often pointed at enStratius (bought by Dell), ServiceMesh (bought by CSC last week), and Tier 3.

So now I’m three for three, because CenturyLink just bought Tier 3, continuing its acquisition activity. CenturyLink is a US-based carrier (pushed to prominence when they acquired Qwest in 2011). They got into the hosting business (in a meaningful way) when they bought Savvis in 2011; Savvis has long been a global leader in colocation and managed hosting. (It’s something of a pity that CenturyLink is in the midst of killing the Savvis brand, which has recently gotten a lot of press because of their partnership with VMware for vCHS, and is far better known outside the US than the CenturyLink brand, especially in the cloud and hosting space.)

Savvis has an existing cloud IaaS business and a very large number of offerings that have the “cloud” label, generally under the Symphony brand — I like to say of Savvis that they never seem to have a use case that they don’t think needs another product, rather than having a unified but flexible platform for everything.

The most significant of Savvis’s cloud offerings are Symphony VPDC (recently rebranded to Cloud Data Center), SavvisDirect, and their vCHS franchise. VPDC is a vCloud-Powered public cloud offering (although Savvis has done a more user-friendly portal than vCloud Director provides); Savvis often combines it with managed services in lightweight data center outsourcing deals. (Savvis also has private cloud offerings.) SavvisDirect is an offering developed together with CA, and is intended to be a pay-as-you-go, credit-card-based offering, targeted at small businesses (apparently intended to be competitive with AWS, but whose structure seems to illustrate a failure to grasp the appeal of cloud as opposed to just mass-market VPS).

Savvis is the first franchise partner for vCHS; back at the time of VMworld (September), they were signaling that, over the long term, they expected vCHS to win, and that Savvis needed to build its own IaaS platform only until vCHS could fully meet customer requirements. (But continuing to have their own platform is certainly necessary to hedge their bets.)

Now CenturyLink’s acquisition of Tier 3 seems to indicate that they’re going to more than hedge their bets. Tier 3 is an innovative small IaaS provider (with fingers in the PaaS world through a Cloud Foundry-based PaaS; they added .NET support to Cloud Foundry as “Iron Foundry”). Their offering is vCloud-Powered public cloud IaaS, but they entirely hide vCloud Director under their own tooling (it doesn’t seem vCloud-ish from either the front end or the back-end implementation), and they have a pile of interesting additional capabilities built into their platform. They’ve made a hypervisor-neutral push as well. They’ve got a nice blend of capabilities that appeal to the traditional enterprise and forward-looking capabilities that appeal to a DevOps orientation. Tier 3 has some blue-chip enterprise names as customers, has historically scored well in Gartner evaluations, and is strongly liked by our enterprise clients who have evaluated them — but people have always worried about their size. (Tier 3 has made it easy to white-label the offering, which has brought them additional success through partners like Peer 1.) The acquisition by CenturyLink neatly solves that size problem.

Indeed, CenturyLink seems to have placed a strong vote of confidence in their IaaS offering, because Tier 3 is being immediately rebranded, and immediately offered as the CenturyLink Cloud. (Current outstanding quotes for Symphony VPDC will likely be requoted, and new VPDC orders are unlikely to be taken.) CenturyLink will offer existing VPDC customers a free migration to the Tier 3 cloud (since it’s vCD-to-vCD, presumably this isn’t difficult, and it represents an upgrade in capabilities for customers). CenturyLink is also immediately discontinuing selling the SavvisDirect offering (although the existing platform will continue to run for the time being); customers will be directed to purchase the Tier 3 cloud instead. (Or, I should say, the CenturyLink Cloud, since the Tier 3 brand is being killed.) CenturyLink is also doing a broad international expansion of data center locations for this cloud.

CenturyLink has been surprisingly forward-thinking to date about the way the cloud converges infrastructure capabilities (including networking) and applications, and how application development and operations change as a result. (They bought AppFog back in June to get a PaaS offering, too.) Their vision of how these things fit together is, I think, much more interesting than either AT&T’s or Verizon’s (or, for that matter, any other major global carrier’s). I expect the Tier 3 acquisition to help accelerate their development of capabilities.

Savvis’s managed and professional services combined with the Tier 3 platform should provide them some immediate advantages in the cloud-enabled managed hosting and data center outsourcing markets. It’s more competition for the likes of CSC and IBM in this space, as well as providers like Verizon Terremark and Rackspace. I think the broad scope of the CenturyLink portfolio will mesh nicely not just with existing Tier 3 capabilities, but also capabilities that Tier 3 hasn’t had the resources to be able to develop previously.

Even though I believe that the hyperscale providers are likely to have the dominant market share in cloud IaaS, there’s still a decent market opportunity for everyone else, especially when the service is combined with managed and professional services. But I believe that managed and professional services need to change with the advent of the cloud — they need to become cloud-native and, in many cases, DevOps-oriented. (Gartner clients only: see my research note, “Managed Service Providers Must Adapt to the Needs of DevOps-Oriented Customers”.) Tier 3 should be a good push for CenturyLink along this path, particularly since CenturyLink will make Tier 3’s Seattle offices the center of their cloud business, and they’re retaining Jared Wray (Tier 3’s founder) as their cloud CTO.

IBM buys SoftLayer

It’s been a hot couple of weeks in the cloud infrastructure as a service space. Microsoft’s Azure IaaS (persistent VMs) came out of beta, Google Compute Engine went into public beta, VMware formally launched its public cloud (vCloud Hybrid Service), and Dell withdrew from the market. Now, IBM is acquiring SoftLayer, with a deal size in the $2B range, around a 4x-5x multiple — roughly the multiple that Rackspace trades at, with RAX no doubt used as a comp despite the vast differences in the two companies’ business models.

SoftLayer is the largest provider of dedicated hosting on the planet, although they do also have cloud IaaS offerings; they sell direct, but also have a huge reseller channel, and they’re often the underlying provider to many mass-market shared hosting providers. Like other dedicated hosters, they are very SMB-centric — tons of dedicated hosting customers are folks with just a server or two. But they also have a sizable number of customers with scale-out businesses to whom “bare metal” (non-virtualized dedicated servers), provisioned flexibly on demand (figure it typically takes 1 to 4 hours to provision bare metal), is very attractive.

Why bare metal? Because virtualization is great for server consolidation (“I used to have 10 workloads on 10 servers that were barely utilized, and now I have one heavily utilized server running all 10 workloads!”), but it’s often viewed as unnecessary overhead when you’re dealing with an environment where all the servers are running nearly fully utilized, as is common in scale-out, Web-scale environments.
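The consolidation arithmetic behind that point can be made concrete. A minimal sketch, with made-up utilization figures (none of these numbers come from the post), shows why virtualization pays off for barely-used servers but buys little when every server already runs near capacity:

```python
# Illustrative sketch with hypothetical utilization figures: how many
# workloads fit on one host before it exceeds a target utilization.
def consolidation_ratio(per_workload_util, target_util=0.8):
    """Workloads that fit on one host without exceeding target utilization."""
    return int(round(target_util / per_workload_util))

consolidation_ratio(0.08)  # ten ~8%-utilized workloads fit on one host
consolidation_ratio(0.75)  # a near-fully-utilized workload gets a host to itself
```

At 10:1 the hypervisor's overhead is a bargain; at 1:1 it is pure tax, which is exactly the scale-out case where bare metal appeals.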

SoftLayer’s secret sauce is its automation platform, which handles virtualized and non-virtualized servers with largely equal ease. One of their value propositions has been to bring the kinds of things you expect from cloud VMs, to bare metal — paid by the hour, fully-automated provisioning, API as well as GUI, provisioning from image, and so forth. So the value proposition is often “get the performance of bare metal, in exactly the configuration you want, with the advantages and security of single-tenancy, without giving up the advantages of the cloud”. And, of course, if you want virtualization, you can do that — or SoftLayer will be happy to sell you VMs in their cloud.

SoftLayer also has a nice array of other value-adds that you can self-provision, including being able to click to provision cloud management platforms (CMPs) like CloudStack / Citrix CloudPlatform, and hosting automation platforms like Parallels. Notably, though, they are focused on self-service. Although SoftLayer acquired a small managed hosting business when it merged with ThePlanet, its customer base is nearly exclusively self-managed. (That makes them very different from Rackspace.)

In terms of the competition, SoftLayer’s closest philosophical alignment is Amazon Web Services — don’t offer managed services, but instead build successive layers of automation going up the stack, that eliminate the need for traditional managed services as much as possible. They have a considerably narrower portfolio than AWS, of course, but AWS does not offer bare metal, which is the key attractor for SoftLayer’s customers.

So why does IBM want these guys? Well, they do fill a gap in the IBM portfolio — IBM has historically not served the SMB market directly in general, and its developer-centric SmartCloud Enterprise (SCE) has gotten relatively weak traction (seeming to do best where the IBM brand is important, notably Europe), although that can be blamed on SCE’s weak feature set and significant downtime associated with maintenance windows, more so than the go-to-market (although that’s also been problematic). I’ll be interested to see what happens to the SCE roadmap in light of the acquisition. (Also, IBM’s SCE+ offering — essentially a lightweight data center outsourcing / managed hosting offering, delivered on cloud-enabled infrastructure — uses a totally different platform, which they’ll need to converge at some point in time.)

Beyond the “public cloud”, though, SoftLayer’s technology and service philosophy are useful to IBM as a platform strategy, and potentially as bits of software and best practices to embed in other IBM products and services. SoftLayer’s anti-managed-services philosophy isn’t dissonant with IBM’s broader outsourcing strategy as it relates to the cloud. Every IT outsourcer at any kind of reasonable scale actually wants to eliminate managed services where they can, because at this point, it’s hard to get any cheaper labor — the Indian outsourcers have already wrung that dry, and every IT outsourcer today offshores as much of their labor as possible. So your only way to continue to get costs down is to eliminate the people. If you can, through superior technology, eliminate people, then you are in a better competitive position — not just for cost, but also for consistency and quality of service delivery.

I don’t think this was a “must buy” for IBM, but it should be a reasonable acceleration of their cloud plans, assuming that they manage to retain the brains at SoftLayer, and can manage what has been an agility-focused, technology-driven business with a very different customer base and go-to-market approach than the traditional IBM base — and culture. SoftLayer can certainly use more money for engineering resources (although IBM’s level of engineering commitment to cloud IaaS has been relatively lackluster given its strategic importance), marketing, and sales, and larger customers that might have been otherwise hesitant to use them may be swayed by the IBM brand.

(And it’s a nice exit for GI Partners, at the end of a long road in which they wrapped up EV1 Servers, ThePlanet, and SoftLayer… then pursued an IPO route during a terrible time for IPOs… and finally get to sell the resulting entity for a decent valuation.)

VMware joins the cloud wars with vCloud Hybrid Service

Although this has been long-rumored, and then was formally mentioned in VMware’s recent investor day, VMware has only just formally announced the vCloud Hybrid Service (vCHS), which is VMware’s foray into the public cloud IaaS market.

VMware has previously had a strategy of being an arms dealer to service providers who wanted to offer cloud IaaS. In addition to the substantial ecosystem of providers who use VMware virtualization as part of various types of IT outsourcing offerings, VMware also signed up a lot of vCloud Powered partners, each of which offered what was essentially vCloud Director (vCD) as a service. It also certified a number of the larger providers as vCloud Datacenter Service Providers; each such provider needed to meet criteria for reliability, security, interoperability, and so forth. In theory, this was a sound channel strategy. In practice, it didn’t work.

Of the certified providers, only CSC has managed to get substantial market share, with Bluelock trailing substantially; the others haven’t gotten much in the way of traction, Dell has now dropped their offering entirely, and neither Verizon nor Terremark ended up launching the service. Otherwise, VMware’s most successful service providers — providers like Terremark, Savvis, Dimension Data, and Virtustream — have been the ones who chose to use VMware’s hypervisor but not its cloud management platform (in the form of vCD).

Indeed, those successful service providers (let’s call them the clueful enterprise-centric providers) are the ones that have built the most IP themselves — and not only are they resistant to buying into vCD, but they are increasingly becoming hypervisor-neutral. Even CSC, which has staunchly remained on VMware running on VCE Vblocks, has steadily reduced its reliance on vCD, bringing in a new portal, service catalog, orchestration engine, and so forth. Similarly, Tier 3 has vCD under the covers, but never so much as exposed the vCD portal to customers. (I think the industry has come to a broad consensus that vCD is too complex a portal for nearly all customers. Everyone successful, even VMware themselves with vCHS, is front-ending their service with a more user-friendly portal, even if customers who want it can request to use vCD instead.)

In other words, even while VMware remains a critical partner for many of its service providers, those providers are diversifying their technology away from VMware — their success will be, over time, less and less VMware’s success, especially if they’re primarily paying for hypervisor licenses, and not the rest of VMware’s IT operations management (ITOM) tools ecosystem. The vCloud Powered providers that are basically putting out vanilla vCD as a service aren’t getting significant traction in the market — not only can they not compete with Amazon, but they can’t compete against clueful enterprise-centric providers. That means that VMware can’t count on them as a significant revenue stream in the future. And meanwhile, VMware has finally gotten the wake-up call that Amazon’s (and AWS imitators’) increasing claim on “shadow IT” is a real threat to VMware’s future not only in the external cloud, but also in internal data centers.

That brings us to today’s reality: VMware is entering the public cloud IaaS market themselves, with an offering intended to compete head-to-head with its partners as well as Amazon and the whole constellation of providers that don’t use VMware in their infrastructure.

VMware’s thinking has clearly changed over the time period that they’ve spent developing this solution. What started out as a vanilla vCD solution intended to enable channel partners who wanted to deliver managed services on top of a quality VMware offering, has morphed into a differentiated offering that VMware will take to market directly as well as through their channel — including taking credit cards on a click-through sign-up for by-the-hour VMs, although the initial launch is a monthly resource-pool model. Furthermore, their benchmark for price-competitiveness is Amazon, not the vCloud providers. (Their hardware choices reflect this, too, including their choice to use EMC software but to go with a scale-out architecture and commodity hardware across the board, rather than much more expensive and much less scalable Vblocks.)

Fundamentally, there is virtually no reason for providers who sell vanilla vCD without any value-adds to continue to exist. VMware’s vCHS will, out of the gate, be better than what those providers offer, especially with regard to interoperability with internal VMware deployments — VMware’s key advantage in this market. Even someone like a Bluelock, who’s done a particularly nice implementation and has a few value-adds, will be tremendously challenged in this new world. The clueful providers who happen to use VMware’s hypervisor technology (or even vCD under the covers) will continue on their way just fine — they already have differentiators built into their service, and they are already well on the path to developing and owning their own IP and working opportunistically with best-of-breed suppliers of capabilities.

(There will, of course, continue to be a role for vCloud Powered providers who really just use the platform as cloud-enabled infrastructure — i.e., providers who are mostly going to do managed services of one sort or another, on top of that deployment. Arguably, however, some of those providers may be better served, over the long run, offering those managed services on top of vCHS instead.)

No one should underestimate the power of brand in the cloud IaaS market, particularly since VMware is coming to market with something real. VMware has a rich suite of ITOM capabilities that it can begin to build into an offering. It also has Cloud Foundry, which it will integrate, and which would logically be as synergistic with this offering as any other IaaS/PaaS integration (much as Microsoft believes Azure PaaS and IaaS elements are synergistic).

I believe that to be a leader in cloud IaaS, you have to develop your own software and IP. As a cloud IaaS provider, you cannot wait for a vendor to do their next big release 12-18 months from now and then take another 6-12 months to integrate it and upgrade to it — you’ll be a fatal 24 months behind a fast-moving market if you do that. VMware’s clueful service providers have long since come to this realization, which is why they’ve moved away from a complete dependence on VMware. Now VMware itself has to ensure that their cloud IaaS offering has a release tempo that is far faster than the software they deliver to enterprises. That, I think, will be good for VMware as a whole, but it will also be a challenge for them going forward.

VMware can be successful in this market, if they really have the wholehearted will to compete. Yes, their traditional buying center is the deeply untrendy and much-maligned IT Operations admin, but if anyone would be the default choice for that population (which still controls about a third of the budget for cloud services), it’s VMware — and VMware is playing right into that story with its emphasis on easy movement of workloads across VMware-based infrastructures, which is the story that these guys have been wanting to hear all along and have been waiting for someone to actually deliver.

Hello, vCHS! Good-bye, vCloud Powered?

Dell withdraws from the public cloud IaaS market

Today, not long after its recent acquisition of Enstratius, Dell announced a withdrawal from the public cloud IaaS market. This removes Dell’s current VMware-based, vCloud Datacenter Service from the market; furthermore, Dell will not launch an OpenStack-based public cloud IaaS offering later this year, as it had originally intended to do. This does not affect Dell’s continued involvement with OpenStack as a CMP for private clouds.

It’s not especially surprising that Dell decided to discontinue its vCloud service, which has gotten highly limited traction in the market, and was expensive even compared to other vCloud offerings — given its intent to launch a different offering, the writing was mostly on the wall already. What’s more surprising is that Dell has decided to focus upon an Enstratius-enabled cloud services broker (CSB) role, when its two key competitors — HP and IBM — are trying to control an entire technology stack that spans hardware, software, and services.

It is clear that it takes significant resources and substantial engineering talent — specifically, software engineering talent — to be truly competitive in the cloud IaaS market, sufficiently so to move the needle of a company as large as Dell. I do not believe that cloud IaaS is, or will become, a commodity; I believe that the providers will, for many years to come, compete to offer the most capable and feature-rich offerings to their customers.

Infrastructure, of course, still needs to be managed. IT operations management (ITOM) tools — whether ITIL-ish as in the current market, or DevOps-ish as in the emerging market — will remain necessary. All the capabilities that make it easy to plan, deploy, monitor, manage, and so forth are still necessary, although you do these things differently in the cloud than on-premise, potentially. Such capabilities can either be built into the IaaS offerings themselves — perhaps with bundled pricing, perhaps as value-added services, but certainly as where much of the margin will be made and providers will differentiate — or they can come from third-party multi-cloud management vendors who are able to overlay those capabilities on top of other people’s clouds.

Dell’s strategy essentially bets on the latter scenario — that Enstratius’s capabilities can be extended into a full management suite that’s multi-cloud, allowing Dell to focus all of its resources on developing the higher-level functionality without dealing with the lower-level bits. Arguably, even if the first scenario ends up being the way the market goes (I favor the former scenario over the latter one, at present), there will still be a market for cloud-agnostic management tools. And if it turns out that Dell has made the wrong bet, they can either launch a new offering, or they may be able to buy a successful IaaS provider later down the line (although given the behemoths that want to rule this space, this isn’t as likely).

From my perspective, as strategies go, it’s a sensible one. Management is going to be where the money really is — it won’t be in providing the infrastructure resources. (In my view, cloud IaaS providers will eventually make thin margins on the resources in order to get the value-adds, which are basically ITOM SaaS, plus most if not all will extend up into PaaS.) By going for a pure management play, with a cloud-native vendor, Dell gets to avoid the legacy of BMC, CA, HP, IBM/Tivoli, and its own Quest, and their struggles to make the shift to managing cloud infrastructure. It’s a relatively conservative wait-and-see play that depends on the assumption that the market will not mature suddenly (beware the S-curve), and that elephants won’t dance.

If Dell really wants to be serious about this market, though, it should start scooping up every other vendor becoming significant in the public cloud management space with complementary offerings (everyone from New Relic to Opscode, etc.), building itself into an ITOM vendor that can comprehensively address cloud management challenges.

And, of course, Dell is going to need a partner ecosystem of credible, market-leading IaaS offerings. Enstratius already has those partners — now they need to become part of the Dell solutions portfolio.

Akamai buys Cotendo

Akamai is acquiring Cotendo for a purchase price of $268 million, somewhat under the rumored $300 million that had been previously reported in the Israeli press. To judge from the stock price, the acquisition is being warmly received by investors (and for good reason).

The acquisition only impacts the website delivery/acceleration portion of the CDN market — it has no impact on the software delivery and media delivery segments. The acquisition will leave CDNetworks as the only real alternative for dynamic site acceleration that is based on network optimization techniques (EdgeCast does not seem to have made the technological cut thus far). Level 3 (via its Strangeloop Networks partnership) and Limelight (via its Acceloweb acquisition) have chosen to go with front-end optimization techniques instead for their dynamic acceleration. Obviously, AT&T is going to have some thinking to do, especially since application-fluent networking is a core part of its strategy for cloud computing going forward.

I am not going to publicly blog a detailed analysis of this acquisition, although Gartner clients are welcome to schedule an inquiry to discuss it (thus far the questions are coming from investors and primarily have to do with the rationale for the purchase price, technology capabilities, pricing impact, and competitive impact). I do feel compelled to correct two major misperceptions, though, which I keep seeing all over the place in press quotes from Wall Street analysts.

First, I’ve heard it claimed repeatedly that Cotendo’s technology is better than Akamai’s. It’s not, although Cotendo has done some important incremental engineering innovation, as well as some better marketing of specific aspects (for instance, their solution around mobility). I expect that there will be things that Akamai will want to incorporate into their own codebase, naturally, but this is not really an acquisition that is primarily being driven by the desire for the technology capabilities.

Second, I’ve also heard it claimed repeatedly that Cotendo delivers better performance than Akamai. This is nonsense. There is a specific use case in which Cotendo may deliver better performance — low-volume customers with low cache hit ratios due to infrequently-accessed content, as can occur with SaaS apps, corporate websites, and so on. Cotendo pre-fetches content into all of its POPs and keeps it there regardless of whether or not it’s been accessed recently. Akamai flushes objects out of cache if they haven’t been accessed recently. This means that you may see Akamai cache hit ratios that are only in the 70%-80% range, especially in trial evaluations, which is obviously going to have a big impact on performance. Akamai cache tuning can help some of those customers substantially drive up cache hits (for better performance, lower origin costs, etc.), although not necessarily enough; cache hit ratios have always been a competitive point that other rivals, like Mirror Image, have hammered on. It has always been a trade-off in CDN design — if you have a lot more POPs you get better edge performance, but now you also have a much more distributed cache and therefore lower likelihood of content being fresh in a particular POP.
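The performance impact of hit ratio is easy to see with a back-of-the-envelope sketch. The latency numbers below are invented for illustration; only the 70%-80% hit-ratio range comes from the discussion above:

```python
# Hypothetical latencies: a cache hit is served from the edge POP,
# a miss goes back to origin. The hit ratio blends the two.
def expected_latency_ms(hit_ratio, edge_ms=30.0, origin_ms=250.0):
    """Average response time for a given cache hit ratio."""
    return hit_ratio * edge_ms + (1.0 - hit_ratio) * origin_ms

expected_latency_ms(0.75)  # 85.0 ms at a 75% hit ratio
expected_latency_ms(0.99)  # ~32 ms with a pre-fetched, pinned cache
```

With these (hypothetical) numbers, going from a 75% to a 99% hit ratio cuts average latency by more than half, which is why a pin-everything strategy like Cotendo’s can look so good in a low-volume trial evaluation.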

(Those are the two big errors that keep bothering me. There are plenty of other minor factual and analysis errors that I keep seeing in the articles that I’ve been reading about the acquisition. Investors, especially, seem to frequently misunderstand the CDN market.)

Riverbed acquires Zeus and Aptimize

(This is part of a series of “catch-up” posts of announcements that I’ve wanted to comment on but didn’t previously find time to blog about.)

Riverbed made two interesting acquisitions recently, which I think signal a clear intention to be more than just a traditional WAN optimization controller (WOC) vendor — Zeus, and Aptimize. If you’re an investment banker or a networking vendor who has talked to me over the last year, you know that these two companies have been right at the top of my “who I think should get bought” list; these are both great pick-ups for Riverbed.

Zeus has been around for quite some time now, but a lot of people have never heard of them. They’re a small company in the UK. Those of you who have been following infrastructure for the Web since the 1990s might remember them as the guys who developed the highest-performance webserver — if a vendor did SPECweb benchmarks for its hardware back then, they generally used Zeus for the software. It was a great service provider product, too, especially for shared Web hosting — it had tons of useful sandboxing and throttling features that were light-years ahead of anyone else back then. But despite the fact that the tech was fantastic, Zeus was never really commercially successful with their webserver software, and eventually they turned their underlying tech to building application delivery controller (ADC) software instead.

Today, Zeus sells a high-performance, software-based ADC, with a nice set of features, including the ability to act as a proxy cache. It’s a common choice for high-end load-balancing when cloud IaaS customers need to be able to deploy a virtual appliance running on a VM, rather than dropping in a box. It’s also the underlying ADC for a variety of cloud IaaS providers, including Joyent and Rackspace (which means it’ll also get an integration interface to OpenStack). Notably, over the last two years, we’ve seen Zeus supplanting or supplementing F5 Networks in historically strong F5 service provider accounts.

Aptimize, by contrast, is a relatively new start-up. It’s a market leader in front-end optimization (FEO), sometimes also called Web performance optimization (WPO) or Web content optimization (WCO). FEO is the hot new thing in acceleration — it’s been the big market innovation in the last two years. While previous acceleration approaches have focused upon network and protocol optimization, or on edge caching, FEO optimizes the pages themselves — the HTML, Cascading Style Sheets (CSS), JavaScript, and so forth that goes into them. It basically takes whatever the webserver output is and attempts to automatically apply the kinds of best practices that Steve Souders has espoused in his books.

Aptimize makes a software-based FEO solution which can be deployed in a variety of ways, including as a virtual appliance running on a VM. (FEO is generally a computationally expensive thing, though, since it involves lots of text parsing, so it’s not unusual to see it run on a standalone server.)

So, what Riverbed has basically bought itself is the ability to offer a complete optimization solution — WOC, ADC, and FEO — plus the intellectual property portfolio to potentially eventually combine the techniques from all three families of products into an integrated product suite. (Note that Riverbed is fairly cloud-friendly already with its Virtual Steelhead.)

I think this also illustrates the vital importance of “beyond the box” thinking. Networking hardware has traditionally been just that — specialized appliances with custom hardware that can do something to traffic, really really fast. But off-the-shelf servers have gotten so powerful that they can now deliver the kind of processing oomph and network throughput that you used to have to build custom hardware logic to achieve. That’s leading us to the rise of networking vendors who make software appliances instead, because it’s a heck of a lot easier and cheaper to launch a software company than a hardware company (probably something like a 3:1 ratio in funding needed), you can get product to market and iterate much more quickly, and you can integrate more easily with other products.


New gTLDs require a business case

Recently, I’ve been deluged with client inquiries about the new gTLDs that ICANN finally approved last month. (That’s three years after they first accepted the gTLD stakeholder recommendation, and two years after they said they expected to start taking applications… which they now say they won’t do until January 2012.)

Tonight, I decided to write a research note, in hopes of persuading clients to read the note rather than trying to talk to me. I sat down at 5 pm to write it. I figured it’d be a quick little note. I finished at 3 am, with an hour break for dinner. It’s not a short note, and I’m not convinced that it’s really as complete as it should be, so it’s not done per se, and it still needs peer review…

I’ll throw out a couple of quick thoughts on this blog, though, and invite you to challenge my thinking:

  • If you’re going to get a gTLD, you should start with the business plan, driven by your business / marketing guys, not IT security guys nattering about defensive moves. Lots of organizations won’t be able to come up with reasonable business plans, especially given the cost.

  • A gTLD is valuable to a business with many affiliates or affinity sites. That includes companies that franchise or have agents, companies with partner networks, and companies that have big fan communities. It may also include companies that have a ton of unique names that need to be associated with a domain, for some reason, or which otherwise need a namespace to themselves.

  • Most companies won’t move to .brand from brand.com; among other things, in many cases it’s not clear what second-level domains under a .brand would even be logical. Global companies currently operating under a mess of country-specific domains may usefully consolidate under a .brand, though.

  • Government entities are facing a ton of hype, especially from consultants selling gTLD-related services. But most governments won’t significantly benefit from a gTLD for their locale, and the benefits to residents of a geographic-name gTLD are pretty limited. (That doesn’t mean that you can’t make a successful business out of a geographic name, though; at the very least you’ll get the obligatory defensive registrations.)

  • Defensive registrations of gTLDs are relatively pointless. Nobody’s going to cybersquat for the kind of money that a gTLD costs to apply for and operate, and the dispute process is so expensive that people aren’t going to go spend money applying for a gTLD that’s likely to be contested on trademark grounds.

  • There will be some contention for generic terms — among companies associated with those terms, trade associations, and registry businesses that want to operate general-public registries for those terms.

  • The proliferation of new gTLDs is going to multiply everyone’s defensive registration headaches for domain names. Many new gTLD registries will probably make most of their money off defensive registrations, and not active primary-use domains. This is very sad and creates negative value in the world.

I’m a fan of the digital brand management guys — companies like MarkMonitor, Melbourne IT, and NameProtect (Corporation Service Company, the “other CSC”), to name a few. I think they have a lot of specialized knowledge, and I tend to recommend that clients who need in-depth thinking on this stuff use them. If you really want to dive into gTLD strategy, they’re the folks to go to. (Yes, I know there are tons of other little consultancies out there that now claim to specialize in gTLDs. I don’t trust any of them yet, and what my clients have told me about their interactions with various such shops hasn’t made me feel better about their trustworthiness. Beware of consultants who either try to scare you or make your eyes light up with dollar signs.)

Akamai and Riverbed partner on SaaS delivery

Akamai and Riverbed have signed a significant partnership deal to jointly develop solutions that combine Internet acceleration with WAN optimization. The two companies will be incorporating each other’s technologies into their platforms; this is a deep partnership with significant joint engineering, and it is probably the most significant partnership that Akamai has done to date.

Akamai has been facing increasing challenges to its leadership in the application acceleration market — what Akamai’s financial statements term “value added services”, including their Dynamic Site Accelerator (DSA) and Web Application Accelerator (WAA) services, which are B2C and B2B bundles, respectively, built on top of the same acceleration delivery network (ADN) technology. Vendors such as Cotendo (especially via its AT&T partnership), CDNetworks, and EdgeCast now have services that compete directly with what has been, for Akamai, a very high-margin, very sticky service. This market is facing severe pricing pressure, due not just to competition, but also to the delta between the cost of these services and standard CDN caching. (In other words, as basic CDN services get cheaper, application acceleration also needs to get cheaper in order to demonstrate sufficient ROI — i.e., business value of performance — over just buying the less expensive solution.)
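The pricing-delta argument can be made concrete with a toy calculation (all figures hypothetical):

```python
def acceleration_upsell_value(basic_cdn_cost, adn_cost, perf_value):
    """Toy model of the pricing-pressure argument: the ADN premium over
    basic CDN caching must stay below the business value of the extra
    performance, or customers rationally buy the cheaper service.
    (Illustrative numbers only; not actual vendor pricing.)"""
    premium = adn_cost - basic_cdn_cost
    return perf_value - premium  # positive => the ADN upsell pays off

# As basic CDN prices fall while ADN prices hold steady, the premium widens,
# so the performance value must grow — or ADN prices must fall — to keep
# this number positive. That's the commoditization squeeze.
```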

While Akamai has had interesting incremental innovations and value-adds since it obtained this technology via the 2007 acquisition of Netli, it has, until recently, enjoyed a monopoly on these services, and therefore hasn’t needed to do any groundbreaking innovation. While the internal enterprise WAN optimization market has been heavily competitive (between Riverbed, Cisco, and many others), other CDNs largely only began offering competitive ADN solutions in the last year. Now, while Akamai still leads in performance, it badly needs to open up some differentiation and new potential target customers, or it risks watching ADN solutions commoditize just the way basic CDN services have.

The most significant value proposition of the joint Akamai/Riverbed solution is this:

Despite the fundamental soundness of the value proposition of ADN services, most SaaS providers use only a basic CDN service, or no CDN at all. The same is true of other providers of cloud-based services. Customers, however, frequently want accelerated services, especially if they have end-users in far-flung corners of the globe; the most common problem is poor performance for end-users in Asia-Pacific when the service is based in the United States. Yet, today, delivering that acceleration either requires that the SaaS provider buy an ADN service themselves (which is hard to do for only one customer, especially for multi-tenant SaaS), or requires the SaaS provider to allow the customer to deploy hardware in their data center (for instance, a Riverbed Steelhead WOC).

With the solution that this partnership is intended to produce, customers won’t need a SaaS provider’s cooperation to deploy an acceleration solution — they can buy it as a service and have the acceleration integrated with their existing Riverbed solution. It adds significant value to Riverbed’s customers, and it expands Akamai’s market opportunity. It’s a great idea, and in fact, this is a partnership that probably should have happened years ago. Better late than never, though.

Amazon’s Elastic Beanstalk

Amazon recently released a new offering called Elastic Beanstalk. At its heart, it is a simplified interface to EC2 and its ancillary services (load-balancing, auto-scaling, and monitoring integrated with alerts), along with an Amazon-maintained AMI containing Linux and Apache Tomcat (an open-source Java servlet container), and a deployment mechanism for a Java app (in the form of a WAR file), which notably adds tighter integration with Eclipse, a popular IDE.
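Under the simplified interface, a basic WAR deployment boils down to a short sequence of Beanstalk API calls. The following sketch just builds that call sequence as data rather than issuing it (the application name and solution-stack string are placeholders; a real client such as boto or the console would make these calls):

```python
def beanstalk_deploy_plan(app, bucket, war_key):
    """Return the sequence of Elastic Beanstalk API operations behind a
    basic WAR deployment. (Illustrative plan only; parameters here are
    hypothetical, and a real client library would issue the calls.)"""
    return [
        ("CreateApplication", {"ApplicationName": app}),
        ("CreateApplicationVersion", {
            "ApplicationName": app,
            "VersionLabel": "v1",
            # The WAR is staged in S3; Beanstalk pulls the bundle from there.
            "SourceBundle": {"S3Bucket": bucket, "S3Key": war_key},
        }),
        ("CreateEnvironment", {
            "ApplicationName": app,
            "EnvironmentName": app + "-env",
            "VersionLabel": "v1",
            # Behind this one call, Beanstalk provisions the EC2 instances,
            # load balancer, and auto-scaling group for you.
            "SolutionStackName": "Tomcat on Amazon Linux (placeholder)",
        }),
    ]
```

The point of the abstraction is that third call: the customer names a stack, and the underlying IaaS plumbing is assembled automatically — which is why I’d still call this IaaS with a friendlier face rather than PaaS.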

Many people are calling this Amazon’s PaaS foray. I am inclined to disagree that it is PaaS (although Amazon does have other offerings which are PaaS, such as SimpleDB and SQS). Rather, I think this is still IaaS, but with a friendlier approach to deployment and management. It is developer-friendly, although it should be noted that in its current release, there is no simplification of any form of storage persistence — no easy configuration of EBS or friendly auto-adding of RDS instances, for example. Going to the database tab in the Elastic Beanstalk portion of Amazon’s management console just directs you to documentation about storage options on AWS. Almost no one is going to run a real app without a persistence mechanism, so the Beanstalk won’t be truly turnkey until this is simplified accordingly.

Because Elastic Beanstalk fully exposes the underlying AWS resources and lets you do whatever you want with them, the currently-missing feature capabilities aren’t a limitation; you can simply use AWS in the normal way, while still getting the slimmed-down elegance of the Beanstalk’s management interfaces. Also notably, it’s free — you’re paying only for the underlying AWS resources.

Amazon exemplifies the idea of IT services industrialization, but in order to address the widest possible range of use cases, Amazon needs to be able to simplify and automate infrastructure management that would otherwise require manual work (i.e., either the customer needs to do it himself, or he needs managed services). I view Elastic Beanstalk and its underlying technologies as an important advancement along Amazon’s path towards automated management of infrastructure. In its current incarnation, it eases developer on-boarding — but in future iterations, it could become a key building-block in Amazon’s ability to serve the more traditional IT buyer.
