SoftLayer and ThePlanet merge

When I was told during HostingCon that private-equity firm GI Partners was acquiring a controlling interest in SoftLayer, my first question was, “Will it be merged with The Planet?” I got a coy answer about what would be logical, and now, it seems, the answer is indeed yes.

The two companies have an interesting mutual history that both might like to forget now — the founders of SoftLayer left The Planet a little over five years ago, leaving some amount of acrimony in their wake (expressed by a still-ongoing lawsuit that will presumably be put to rest now). Industry rumor back then said that the SoftLayer founders were essentially the movers and shakers at The Planet, and that their departure gutted significant talent from the company. By leaving, they missed the results of the GI Partners acquisition of The Planet, subsequent merger with EV1 Servers, and so forth. Management has changed almost entirely at The Planet in the intervening years, making the reconciliation of a merger completely reasonable, but it’s an interesting irony that SoftLayer’s CEO is going to get to come back to run the merged company. It’s also worth noting that the degree of common genesis ought to make this an easier merger than might otherwise be the case.

SoftLayer has been growing at an incredible pace, taking advantage of the same trend towards highly-automated, on-demand, self-managed infrastructure that Amazon has been riding high on. The Planet brings the rest of a hosting product portfolio to the game, so it’s an entirely sensible match. Also, for SoftLayer, the change in capitalization structure should be strongly beneficial, letting them get away from the equipment-leasing trap they’ve been in.

GI Partners has had a solid record of success in the data center space thus far — their other previous investments were Digital Realty Trust (wholesale data center leasing) and Telx (carrier hoteling and carrier-neutral colocation) — and the integration of The Planet and EV1 Servers clearly built a stronger company. I think merging fast-moving companies in the midst of a radically changing market is a dangerous, difficult proposition, since it risks loss of momentum, management confusion and distraction, and so forth, so this will be one to watch — it could build a much stronger merged company, or it could be disruptive to existing success.

There’s been a lot of M&A buzz around the hosting industry of late. Consolidation makes sense in the scale business of cloud, and there are also lots of companies seeking to move up-market with managed hosting offerings. Arguably, the thing that is preventing more M&A activity right now is that there simply aren’t great acquisition targets in the places where people are looking.


Shooting sparrows with a (software) cannon

Many businesses fail to save significant amounts of money when they migrate to cloud IaaS, because of the cost of their software stacks. This can happen in a myriad of ways, such as:

  • Getting on-demand hardware by the hour, but having to pay for permanent, peak-load software licenses
  • Paying an “enterprise” uplift on software licenses or running into other licensing-under-virtualization issues
  • Spending money on commercial middleware when free open source will do

While there are obviously a host of complex issues surrounding software licensing, both with in-house virtualization and in the external cloud, I want to focus this blog post on that last point: using a big, complex, heavyweight package when a lighter-weight, simpler one will do.

In the enterprise, especially pre-virtualization, it made a reasonable amount of sense to standardize on relatively heavyweight architectures — say, WebLogic or WebSphere on top of an Oracle database. Sure, there were some applications that actually used the full power of Java EE and needed fancy Oracle features (like RAC), but for the most part, the zillions of apps within enterprises are basically business process, workflow, paperwork apps — fill out a form, take some kind of minor action, run a report. They work fine, probably unchanged, on a sliver of a compute core, JBoss, and MySQL, for instance. (And, while we’re at it, they run fine under Linux rather than a proprietary Unix flavor.)

I used to convince Web hosting customers that they ought to switch from Solaris to Linux in order to save money. (I still occasionally do, though Solaris is a vanishingly tiny percentage of the market these days, going from near complete market dominance to being essentially negligible in less than a decade and a half.) These days, I pitch clients on why they should consider converting to open source middleware if they’re going to the cloud and want to maximize their cost savings.

The fact of the matter is, it’s expensive to shoot sparrows with a cannon. Standardizing on a single platform has cost advantages right up until the time that the single platform is vastly more expensive than having two platforms. Most of the compute infrastructure in most enterprises is used for commodity applications, and it makes sense to bring down the operational cost of those applications as much as possible. Open source does not necessarily mean free, of course, and commercial open source can be plagued by the same sorts of licensing issues, but there are good things to explore here (and even if open source doesn’t cut it for you, exploring commercial alternatives that are more cloud/virtualization-friendly is still a boon).

Gartner clients: I’ve written about this topic before, in Open Source in Web Hosting, 2008. My colleague Stewart Buchanan has authored a magnificent series of notes on this topic, and I recommend you read, at the very least, Splitting End-User and Service Provider Licensing Will Increase the Costs and Risks of Virtualization and Cloud Strategies, as well as Q&A: How to License Software Under Virtualization.


Rackspace and OpenStack

Rackspace is open-sourcing its cloud software — Cloud Files and Cloud Servers — and merging its codebase and roadmap with NASA’s Nebula project (not to be confused with OpenNebula), in order to form a broader community project called OpenStack. This will be hypervisor-neutral, and initially supports Xen (which Rackspace uses) and KVM (which NASA uses), and there’s a fairly broad set of vendors who have committed to contributing to the stack or integrating with it.

While my colleagues and I intend to write a full-fledged research note on this, I feel like shooting from the hip on my blog, since the research note will take a while to get done.

I’ve said before that hosters have traditionally been integrators, not developers of technology, yet the cloud, with its strong emphasis on automation, and its status as an emerging technology without true turnkey solutions at this stage, has forced hosters into becoming developers.

I think the decision to open-source its cloud stack reinforces Rackspace’s market positioning as a services company, and not a software company — whereas many of its cloud competitors have defined themselves as software companies (Amazon, GoGrid, and Joyent, notably).

At the same time, open sourcing is not necessarily a way to software success. Rackspace has a whole host of new challenges that it will have to meet. First, it must ensure that the roadmap of the new project aligns sufficiently with its own needs, since it has decided that it will use the project’s public codebase for its own service. Second, it now has to manage, and just as importantly lead, an open-source community, getting useful commits from outside contributors and managing the commit process. (Rackspace and NASA have formed a board for governance of the project, on which they have multiple seats but are in the minority.) Third, as with all such things, there are potential code-quality issues, the impact of which becomes significantly magnified when running operations at massive scale.

In general, though, this move is indicative of the struggle that the hosting industry is going through right now. VMware’s price point is too high, it’ll become even higher for those who want to adopt “Redwood” (vCloud), and the initial vCloud release is not a true turnkey service provider solution. This is forcing everyone into looking at alternatives, which will potentially threaten VMware’s ability to dominate the future of cloud IaaS. The compelling value proposition of single-pane-of-glass management for hybrid clouds is the key argument for having VMware both in the enterprise and in outsourced clouds; if the service providers don’t enthusiastically embrace this technology (an increasingly real possibility), the single pane of glass will go to a vendor other than VMware, probably someone hypervisor-neutral. Citrix, with its recent moves to be much more service-provider-friendly, is in a good position to benefit from this. So are hypervisor-neutral cloud management software vendors, like Cloud.com.


HostingCon (and Booth Babes)

I’m on my way home from HostingCon. I wish I had decided to stay an extra day. I originally expected I’d give my Monday keynote and be free to roam and have various conversations with random people, have plenty of time to wander the show floor, and so on. Instead, my schedule filled up rapidly with clients and friends-of-clients (for instance, folks with relationships with our investment banking clients who tugged some strings), plus other folks who grabbed me on email beforehand.

Great things have happened with the show since iNet Interactive took over running it — the audience has become much more diverse in terms of the types of attendees, and in general, it’s a smoothly-run, very professional show, quite a change from the past. I enjoyed having the chance to deliver the opening keynote, as well as my formal and informal conversations with people.

I wish I’d had more time than the 30 minutes I had to spend on the show floor. But there’s been a very interesting backchannel discussion happening on Twitter (#HostingCon) that I want to highlight, and that’s the subject of booth babes.

Much to my surprise, there were several exhibitors who brought booth babes — you know the classic sort, in super-skimpy outfits, arrayed in front of their booths. A number of female attendees have called this out on Twitter, but just as interesting are the retweets and supporting objections coming from male attendees. This was particularly stark because of the near-total absence of women from the conference; the attendance is overwhelmingly male, and so there was little female representation among either attendees or exhibitors — to an even greater extent than I’m accustomed to seeing at IT conferences.

So, vendors, here’s a set of reasons why you should not bring booth babes. (And especially not to something like HostingCon, where much of the audience is C-level executives, and it’s all about the business and networking.)

1. You imply your audience is immature and/or unprofessional. Booth babes imply that you think that your audience’s primary interest is in staring at boobs, as opposed to getting serious business done. Moreover, there’s no way to look professional while ogling, and even those people who would like to ogle don’t want to do so in front of people that they’re doing business with. Unless you’re E3 and your audience is adolescents and overgrown adolescents, this is a bad tactic. (And you can argue that booth babes ended up significantly contributing to the death of E3 as a serious trade show.)

2. You imply that your company’s offerings are less interesting than the flesh on display. Yes, everyone needs to do something to draw in traffic, but booth babes smack of desperation. You draw traffic by having a compelling display that makes people want to come have conversations, not by having booth babes shoving trinkets at people. People grab the trinkets and then don’t have the conversations.

3. You actually make it harder for people to get to the booth itself. This is especially true on crowded show floors, where the booth babes basically form a wall in front of your booth. This makes it hard to see your display, your collateral, and the nametags of the people you have staffing your booth (important for any attendee who is trying to do some networking). Chances are that a lot of people simply don’t make it through the obstacle, especially if they’re casually perusing the floor, rather than looking for you specifically.

I chose not to talk to any exhibitor with booth babes. It wasn’t really a matter of principle; I’m not actually offended, just bemused. It was simply a practical matter.

I don’t think conference organizers necessarily need to have rules against booth babes, per se. I simply think that companies should exercise good sense when thinking about where they’re exhibiting and who they’re exhibiting to.


Abstracting IaaS… or not

Customer inquiry around cloud IaaS these days is mostly of a practical sort — less “what is my broad long-term strategy going to be” or “help me with my initial pilot” like it was last year, and more “I’m going to do something big and serious, help me figure out how to source it”.

My inquiry volume is nothing short of staggering (the entirety of my work day is shoved into back-to-back 30-minute calls, so if you talk to me and I sound a bit anxious to keep to schedule, that’s why). I’m currently clinging to the desperate hope that if I spend more time writing, people will consult the written work first, which will free me from having to go over basics in calls and hopefully result in better answers for clients as well as greater sanity for myself.

Thus, I have been trying to cram in a lot of writing in my evenings. At the moment, I’m working on a series of notes covering IaaS soup to nuts, going over everything from the different ways that compute resources are implemented and offered, to the minutiae of what capabilities are available in customer portals.

It strikes me that in more than ten years of covering the hosting industry as an analyst, this is the first time that I’ve written deep-dive architectural notes. No one has really cared in the past whether or not, say, a vendor uses NPIV in their storage fabric, or whether the chips in the servers support EPT/RVI. That’s all changing with cloud IaaS, once people get down into the weeds and look into adopting it for production systems.

It’s vastly ironic that now, in this age of the fluffy wonderful abstraction of infrastructure as a service, IT buyers are obsessing over the minutiae of exactly how this stuff is implemented.

It matters, of course. A core is not just a core; the performance of that core for your apps is going to determine bang-for-your-buck if you’re compute-bound. The fine niggling details of implementation of fill-in-the-blank-here will result in different sorts of security vulnerabilities and different ways that those vulnerabilities are addressed. And so on. The IT buyers who are delving into this stuff aren’t being paranoid or crazy, really; when you get right down to it, they want to evaluate how it’s done versus how they’d do it themselves.

It’s a key difference between IaaS and PaaS thinking in the heads of customers. In PaaS, you trust that it will work as long as you write to the APIs, and you surrender control over the underlying implementation. In IaaS, you’re getting something so close to bare metal that you start really wondering about what’s there, because you’re comparing it directly to your own data center.

I think that over time this will be something that simply gets addressed with SLAs that guarantee particular levels of availability, performance, and so forth, along with some transparency and strong written guarantees around security. But the industry hasn’t hit that level of maturity yet, which means that for the moment, customers will and probably should do deep dives scrutinizing exactly what it is that they’re being offered when they contemplate IaaS solutions.

The cloud is not magic

Just because it’s in the cloud doesn’t make it magic. And it can be very, very dangerous to assume that it is.

I recently talked to an enterprise client who has a group of developers who decided to go out, develop, and run their application on Amazon EC2. Great. It’s working well, it’s inexpensive, and they’re happy. So Central IT is figuring out what to do next.

I asked curiously, “Who is managing the servers?”

The client said, well, Amazon, of course!

Except Amazon doesn’t manage guest operating systems and applications.

It turns out that these developers believed in the magical cloud — an environment where everything was somehow mysteriously being taken care of by Amazon, so they had no need to do the usual maintenance tasks, including worrying about security — and had convinced IT Operations of this, too.

Imagine running Windows. Installed as-is, and never updated since then. Without anti-virus, or any other security measures, other than Amazon’s default firewall (which luckily defaults to largely closed).

Plus, they also assumed that auto-scaling was going to make their app magically scale. But the app isn’t designed to scale horizontally, automagically or otherwise. Somebody is going to be an unhappy camper.

Cautionary tale for IT shops: Make sure you know what the cloud is and isn’t getting you.

Cautionary tale for cloud providers: What you’re actually providing may bear no resemblance to what your customer thinks you’re providing.


Hope is not engineering

My enterprise clients frequently want to know why fill-in-the-blank-cloud-IaaS only has a 99.95% SLA. “That’s more than four hours of downtime a year!” they cry. “More than twenty minutes a month! I can’t possibly live with that! Why can’t they offer anything better than that?”
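The arithmetic behind those complaints is easy to check. A quick sketch (assuming an average 365.25-day year, with the monthly figure as a simple one-twelfth share):

```python
# Convert an availability SLA percentage into permitted downtime.
HOURS_PER_YEAR = 365.25 * 24          # average year, leap years included
MINUTES_PER_MONTH = HOURS_PER_YEAR * 60 / 12

def allowed_downtime(availability_pct):
    """Return (hours per year, minutes per month) of downtime the SLA permits."""
    unavailability = 1 - availability_pct / 100
    return unavailability * HOURS_PER_YEAR, unavailability * MINUTES_PER_MONTH

hours_yr, min_mo = allowed_downtime(99.95)
print(f"99.95% allows {hours_yr:.2f} h/year, {min_mo:.1f} min/month")
# → roughly 4.4 hours a year, or about 22 minutes a month
```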

The answer to that is simple: There is a significant difference between engineering and hope. Many internal IT organizations, for instance, set service-level objectives that are based on what they hope to achieve, rather than the level that the solution is engineered to achieve, and can be mathematically expected to deliver, based on calculated mean time between failures (MTBF) of each component of the service. Many organizations are lucky enough to achieve service levels that are higher than the engineered reliability of their infrastructure. IaaS providers, however, are likely to base their SLAs on their engineered reliability, not on hope.

If a service provider is telling you the SLA is 99.95%, it usually means they’ve got a reasonable expectation, mathematically, of delivering a level of availability that’s 99.95% or higher.
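To make the engineering-versus-hope distinction concrete, here’s a minimal sketch of how an engineered availability figure gets built up from component MTBF and mean-time-to-repair (MTTR) numbers. The component figures below are illustrative inventions, not any real provider’s data; the point is the method — a service with a serial chain of dependencies is only up when every component is up, so the availabilities multiply.

```python
# Steady-state availability of a single component from MTBF and MTTR.
def availability(mtbf_hours, mttr_hours):
    return mtbf_hours / (mtbf_hours + mttr_hours)

# Hypothetical serial dependency chain: if any one fails, the service is down.
components = {
    "power":   availability(50_000, 4),
    "network": availability(20_000, 2),
    "server":  availability(10_000, 8),
}

service = 1.0
for a in components.values():
    service *= a  # serial dependencies: availabilities multiply

print(f"Engineered availability: {service:.4%}")
```

With these made-up numbers, the chain lands near 99.9% even though every individual component is above 99.9% on its own — which is exactly why providers quote SLAs below what any single component could achieve.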

My enterprise client, with his data center that has a single UPS and no generator (much less dual power feeds, multiple carriers and fiber paths, etc.), with a single, non-HA, non-load-balanced server (which might not even have dual power supplies, dual NICs, etc.), will tell me that he’s managed to have 100% uptime on this application in the past year, so fie on you, Mr. Cloud Provider.

I believe that uptime claim. He’s gotten lucky. (Or maybe he hasn’t gotten lucky, but he’ll tell me that the power outage was an anomaly and won’t happen again, or that incident happened during a maintenance window so it doesn’t count.)

A service provider might be willing to offer you a higher SLA. It’s going to cost you, because once you get past a certain point, mathematically improving your reliability starts to get really, really expensive.
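The reason the cost curve turns so steep is the standard redundancy math: n independent redundant instances, each with availability a, are all down simultaneously only with probability (1 − a)ⁿ. Each extra instance roughly multiplies the cost, while the remaining downtime only shrinks geometrically. A hedged sketch, with illustrative numbers:

```python
# Availability of n independent redundant instances, each with availability a.
# All must fail simultaneously for the service to be down.
def parallel_availability(a, n):
    return 1 - (1 - a) ** n

single = 0.999  # one non-redundant server (illustrative figure)
for n in range(1, 4):
    avail = parallel_availability(single, n)
    print(f"{n} instance(s): {avail:.6f} availability, ~{n}x infrastructure cost")
```

Going from one instance to two buys roughly three extra nines here, but it doubles the infrastructure spend — and that’s before counting the load balancing, replication, and failover machinery needed to make the redundancy actually work.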

Now, that said, I’m not necessarily a fan of all cloud IaaS providers’ SLAs. But I encourage anyone looking at them (or a traditional hosting SLA, for that matter), to ponder the difference between engineering and hope.


Recent research notes

Here’s a round-up of what I’ve written lately, for those of you that are Gartner clients and are following my research:

Data Center Managed Services: Regional Differences in the Move Toward the Cloud is about how the IaaS market will evolve differently in each of the major regions of the world. We’re seeing significant adoption differences between the United States, Western Europe (and Canada follows the WEU pattern), and Asia, both in terms of buyer desires and service provider evolution.

Web Hosting and Cloud Infrastructure Prices, North America, 2010 is my regular update to the state of the hosting and cloud IaaS markets, targeted at end-users (IT buyers).

Content Delivery Network Services and Pricing, 2010 is my regular update of end-user (buyer) advice, providing a brief overview of the current state of the market.

Is a Cloud Content Delivery Network Right for You? is a look at Amazon CloudFront and the other emerging “cloud CDN” services (Rackspace/Limelight, GoGrid/EdgeCast, Microsoft’s CDN for Azure, etc.). It’s a hot topic of inquiry at the moment (interestingly, mostly among Akamai customers hoping to reduce their costs).

Some of my colleagues have also recently published notes that might be of interest to those of you who follow my research. Those notes include:


Shifting the software optimization burden

Historically, software vendors haven’t had to care too much about exactly how their software performed. Enterprise IT managers are all too familiar with the experience of buying commercial software packages and/or working with integrators, only to end up with software solutions that consume far more hardware than was originally projected (and thus cause the overall project to cost more than anticipated). Indeed, many integrators simply don’t have anyone on hand who’s really a decent architect, and lack the experience on the operations side to accurately gauge what’s needed and how it should be configured in the first place.

Software vendors needed to fix performance issues so severe that they were making the software unusable, but they did not especially care whether a reasonably efficient piece of software was 10% or even 20% more efficient, and given how underutilized enterprise data centers typically are, enterprises didn’t necessarily care, either. It was cheaper and easier to simply throw hardware at the problem rather than to worry about either performance optimization in software, or proper hardware architecture and tuning.

Software as a service turns that equation around sharply, whether multi-tenant or hosted single-tenant. Now, the SaaS vendor is responsible for the operational costs, and therefore the SaaS vendor is incentivized to pay attention to performance, since it directly affects their own costs.

Since traditional ISVs are increasingly offering their software in a SaaS model (usually via a single-tenant hosted solution), this trend is good even for those who are running software in their own internal data centers — performance optimizations prioritized for the hosted side of the business should make their way into the main branch as well.

I am not, by the way, a believer that multi-tenant SaaS is inherently significantly superior to single-tenant, from a total-cost-of-ownership and total-value-of-opportunity perspective. Theoretically, with multi-tenancy, you can get better capacity utilization, lower operational costs, and so forth. But multi-tenant SaaS can be extremely expensive to develop. Furthermore, retrofitting a single-tenant solution into a multi-tenant one is, in many cases, a software project burdened with both incredible risk and cost, and it diverts resources that could otherwise be used to improve the software’s core value proposition. As a result, there is, and will continue to be, a significant market for infrastructure solutions that help regular ISVs offer a SaaS model cost-effectively without having to significantly retool their software.


Lightweight Provisioning != Lightweight Process

Congratulations, you’ve virtualized (or gone to public cloud IaaS) and have the ability to instantly and easily provision capacity.

Now, stop and shoot yourself in the foot by not implementing a lightweight procurement process to go with your lightweight provisioning technology.

That’s an all-too-common story, and it highlights a critical aspect of the movement towards cloud (or just “cloudier” concepts). In many organizations, it’s not actually the provisioning that’s expensive and lengthy. It’s the process that goes with it.

You’ll probably have heard that it can take weeks or months for an enterprise to provision a server. You might even work for an organization where that’s true. You might also have heard that it takes thousands of dollars to do so, and your organization might have a chargeback mechanism that makes that the case for your department.

Except that it doesn’t actually take that long, and it’s actually pretty darn cheap, as long as you’re large enough to have some reasonable level of automation (mid-sized businesses and up, or technology companies with more than a handful of servers). Even with zero automation, you can buy a server and have it shipped to you in a couple of days, and build it in an afternoon.

What takes forever is the procurement process, which may also be heavily burdened with costs.

When most organizations virtualize, they usually eliminate a lot of the procurement process — getting a VM is usually just a matter of requesting one, rather than going through the whole rigamarole of justifying buying a server. But the “request a VM” process can be anything from a self-service portal to something with as much paperwork headache as buying a server — and the cost savings, agility, and efficiency that an organization gains from virtualizing are certainly dependent upon whether it’s able to lighten its process for this new world.

There are certain places where the “forever to procure, at vast expense” problems are notably worse. For instance, subsidiaries in companies that have centralized IT in the parent company often seem to get shafted by central IT — they’re likely to tell stories of an uncaring central IT organization, priorities that aren’t aligned with their own, and nonsensical chargeback mechanisms. Moreover, subsidiaries often start out much more nimble and process-light than a parent company that acquired them, which leads to the build-up of frustration and resentment and an attitude of being willing to go out on their own.

And so subsidiaries — and departments of larger corporations — often end up going rogue, turning to procuring an external cloud solution, not because internal IT cannot deliver a technology solution that meets their needs, but because their organization cannot deliver a process that meets their needs.

When we talk about time and cost savings for public cloud IaaS vs. the internal data center, we should be careful not to conflate the burden of (internal, discardable/re-engineerable) process, with what technology is able to deliver.

Note that this also means that fast provisioning is only the beginning of the journey towards agility and efficiency. The service aspects (from self-service to managed service) are much more difficult to solve.
