Monthly Archives: May 2009
Google and Salesforce.com
While I’ve been out of the office, Google has made some significant announcements. My colleague Ray Valdes has been writing about Google Wave and its secret sauce. I highly encourage you to go read his blog.
Google and Salesforce.com continue to build on their partnership. In April, they unveiled Salesforce for Google Apps. Now, they’re introducing Force.com for Google App Engine.
The announcement, in a nutshell, is this: There are now public Salesforce APIs that can be downloaded and that will work on Google App Engine (GAE). Those APIs are a subset of the functionality available in Force.com’s regular Web Services APIs. Check out the User Guide for details.
Note that this is not a replacement for Force.com and its (proprietary) Apex programming language. Salesforce clearly articulates web services vs. Force.com in its developer guide. Rather, this should be thought of as easing the curve for developers who want to extend their Web applications for use with Salesforce data.
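To make that concrete, here’s a rough sketch of the shape this takes inside a GAE request handler: run a SOQL query and render the results in a Web app. The ForceDotComClient class and its login()/query() methods below are hypothetical stand-ins, not the toolkit’s actual names; the User Guide has the real API and setup.

```python
# Rough sketch only: a GAE (Python webapp framework) handler that pulls
# Salesforce data via a SOQL query. ForceDotComClient and its login()/query()
# methods are hypothetical placeholders for whatever the toolkit provides.
from google.appengine.ext import webapp
from google.appengine.ext.webapp.util import run_wsgi_app


class ForceDotComClient(object):
    """Hypothetical stand-in for the toolkit's Salesforce client."""

    def login(self, username, password_plus_token):
        pass  # the real toolkit authenticates against the Force.com Web Services APIs

    def query(self, soql):
        # the real toolkit would execute the SOQL query against Salesforce
        return [{"Name": "Example Corp", "Industry": "Technology"}]


class AccountList(webapp.RequestHandler):
    def get(self):
        client = ForceDotComClient()
        client.login("user@example.com", "password+securitytoken")
        for record in client.query("SELECT Name, Industry FROM Account LIMIT 10"):
            self.response.out.write("%s (%s)\n" % (record["Name"], record["Industry"]))


application = webapp.WSGIApplication([("/accounts", AccountList)])

if __name__ == "__main__":
    run_wsgi_app(application)
```

The specifics don’t matter much; the point is that a Web developer can stay in ordinary App Engine Python and treat Salesforce as just another data source.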
A question that lingers in my mind: Normally, on Force.com, a Developer Edition account means that you can’t affect your organization’s live data. If a similar restriction exists on the GAE version of the APIs, it’s not mentioned in the documentation. I wonder whether you can build very lightweight apps against live data with just a Developer Edition Salesforce account, if you do it through GAE. If so, that would certainly open up the realm of developers who might try building something on the platform.
My colleague Eric Knipp has also blogged about the announcement. I’d encourage you to read his analysis.
What’s the worth of six guys in a garage?
The cloud industry is young. Amazon’s EC2 service dates back just to October 2007, and just about everything related to public cloud infrastructure post-dates that point. Your typical cloud start-up is at most 18 months old, and in most cases, less than a year old. It has a handful of developers, some interesting tech, plenty of big dreams, and the need for capital.
So what’s that worth? Do you buy their software, or do you hire six guys, put them in nice offices, and give them a couple of months to try to duplicate that functionality? Do you just go acquire the company on the cheap, giving six guys a reasonably nice payday for the year of their life spent developing the tech, and getting six smart employees to continue developing this stuff for you? How important is time to market? And if you’re an investor, what type of valuation do you put on that?
Infrastructure and systems management is fairly well understood. Although the cloud is bringing some new ideas and approaches, people need most of the same stuff on the cloud that they’ve traditionally needed in the physical world. That means the near-term feature roadmaps are relatively clear-cut, and it’s a question of how many developers you can throw at cranking out features as quickly as possible. Some approaches have greater value than others, and there’s inherent value in well-developed software, but the question is, what is the defensible intellectual property? Relatively few companies in this space have patentable technology, for instance.
The recent Oracle acquisition of Virtual Iron may offer one possible answer. One could say the same about the Cincinnati Bell (CBTS) acquisition of Virtual Blocks back in February. The rumor mill seems to indicate that in both cases, the valuations were rather low.
Don’t get me wrong. There are certainly companies out there that are carving out defensible spaces and that have exciting, interesting, unique ideas backed by serious management and technical chops. But as with every gold rush around a heavily hyped tech trend, there are also a lot of me-toos. What intrigues me is the extent to which second-rate software companies are getting funding while first-rate infrastructure services companies are not.
Amazon’s CloudWatch and other features
Catching up on some commentary…
Amazon recently introduced three new features: monitoring, load-balancing, and auto-scaling. (As usual, Werner Vogels has further explanation, and RightScale has a detailed examination.)
The monitoring service, called CloudWatch, provides utilization metrics for your running EC2 instances. This is a premium service on top of the regular EC2 fee; it costs 1.5 cents per instance-hour. The data is persisted for just two weeks, but is independent of running instances. If you need longer-term historical graphing, you’ll need to retrieve and archive the data yourself. There’s some simple data aggregation, but anyone who needs real correlation capabilities will want to feed this data back into their own monitoring tools.
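As a rough sketch of what that retrieve-and-archive step looks like, here’s the general idea in Python. I’m using boto3 (today’s AWS SDK for Python) purely for illustration; at the time of the announcement you’d be hitting the CloudWatch Query API directly or using an early boto build, and the instance ID here is made up.

```python
# Sketch: pull the last hour of CPU metrics for one instance and append them
# to a local archive, since CloudWatch itself only keeps the data briefly.
import csv
import datetime

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

end = datetime.datetime.utcnow()
start = end - datetime.timedelta(hours=1)

stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-12345678"}],  # hypothetical
    StartTime=start,
    EndTime=end,
    Period=300,                 # five-minute buckets
    Statistics=["Average"],
)

# Keep your own history (or feed this into your monitoring/correlation tools).
with open("cpu_history.csv", "a") as archive:
    writer = csv.writer(archive)
    for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
        writer.writerow([point["Timestamp"].isoformat(), point["Average"]])
```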
CloudWatch is required to use the auto-scaling service, since that service uses the monitoring data to figure out when to launch or terminate instances. Basically, you define business rules for scaling that are based on the available CloudWatch metrics. Developers should take note that this is not magical auto-scaling. Adding or subtracting instances based on metrics isn’t rocket science. The tough part is usually writing an app that scales horizontally, plus automatically and seamlessly making whatever other configuration changes are necessary when the number of virtual servers in its capacity pool changes. (I field an awful lot of client calls from developers under the delusion that they can write code any way they want and that simply putting their application on EC2 will remove all worries about scalability.)
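To show how unmagical the rule layer itself is, here’s a toy sketch of the kind of business rule involved: watch average CPU across a pool and add or retire an instance when it crosses a threshold. This is a conceptual illustration (again boto3, with made-up instance IDs, AMI, and thresholds), not Amazon’s auto-scaling API itself, and it deliberately skips the hard part: the application and configuration changes that have to accompany a change in pool size.

```python
# Toy scaling rule: if average CPU across the pool runs hot, add an instance;
# if it runs cold, retire one. Pool, AMI, type, and thresholds are hypothetical.
import datetime

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")
ec2 = boto3.client("ec2", region_name="us-east-1")

POOL = ["i-11111111", "i-22222222"]              # instances currently serving
AMI, INSTANCE_TYPE = "ami-abcdef12", "m1.small"  # image/type to launch
MIN_SIZE, MAX_SIZE = 2, 10


def average_cpu(instance_ids, minutes=10):
    """Average CPUUtilization across the pool over the last few minutes."""
    end = datetime.datetime.utcnow()
    start = end - datetime.timedelta(minutes=minutes)
    samples = []
    for instance_id in instance_ids:
        stats = cloudwatch.get_metric_statistics(
            Namespace="AWS/EC2", MetricName="CPUUtilization",
            Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
            StartTime=start, EndTime=end, Period=300, Statistics=["Average"],
        )
        samples.extend(p["Average"] for p in stats["Datapoints"])
    return sum(samples) / len(samples) if samples else 0.0


cpu = average_cpu(POOL)
if cpu > 70.0 and len(POOL) < MAX_SIZE:
    ec2.run_instances(ImageId=AMI, InstanceType=INSTANCE_TYPE,
                      MinCount=1, MaxCount=1)
elif cpu < 20.0 and len(POOL) > MIN_SIZE:
    ec2.terminate_instances(InstanceIds=[POOL[-1]])
# The hard part isn't this rule; it's making the application and the rest of
# the configuration cope gracefully when the pool size changes.
```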
The new load-balancing service essentially serves both global and local functions — between availability zones, and between instances within a zone. It’s auto-scaling-aware, but its health checks are connection-based, rather than using CloudWatch metrics. However, it’s free to EC2 customers and does not require use of CloudWatch. Customers who have been using HAproxy are likely to find this useful. It won’t touch the requirements of those who need full-fledged application delivery controller (ADC) functionality and have been using Zeus or the like.
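For the curious, here’s roughly what standing up one of these load balancers looks like; again a boto3 sketch for illustration, with hypothetical names, zones, and instance IDs.

```python
# Sketch: create a load balancer spanning two availability zones and give it a
# connection-based (TCP) health check, then attach instances to the pool.
import boto3

elb = boto3.client("elb", region_name="us-east-1")  # classic ELB API

elb.create_load_balancer(
    LoadBalancerName="www-pool",                     # hypothetical name
    AvailabilityZones=["us-east-1a", "us-east-1b"],  # balances across zones
    Listeners=[{
        "Protocol": "HTTP",
        "LoadBalancerPort": 80,
        "InstanceProtocol": "HTTP",
        "InstancePort": 80,
    }],
)

# Health checks are connection-level, not CloudWatch-driven.
elb.configure_health_check(
    LoadBalancerName="www-pool",
    HealthCheck={
        "Target": "TCP:80",          # simple connect test against each instance
        "Interval": 30,
        "Timeout": 5,
        "UnhealthyThreshold": 2,
        "HealthyThreshold": 3,
    },
)

# Instances are attached (or detached) as the pool changes.
elb.register_instances_with_load_balancer(
    LoadBalancerName="www-pool",
    Instances=[{"InstanceId": "i-11111111"}, {"InstanceId": "i-22222222"}],
)
```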
As always, Amazon’s new features eat into the differentiating capabilities of third-party tools (RightScale, Elastra, etc.), but the “most, but not all, of the way there” nature of these implementations means that third-party tools still add value to the baseline. That’s particularly true given that only the load-balancing feature is free.
VMware takes stake in Terremark
I have been crazily, insanely busy, and my frequency of blog posting has suffered for it. On the plus side, I’ve been busy because a huge number of people — users, vendors, investors — want to talk about cloud.
I’ve seen enough questions about VMware investing $20 million in Terremark that I figured I’d write a quick take, though.
Terremark is a close VMware partner (and their service provider of the year for 2008). Data Return (acquired by Terremark in 2007) was the first to have a significant VMware-based utility hosting offering, dating all the way back to 2005. Terremark has since also gotten good traction with its VMware-based Enterprise Cloud offering, which is a virtual data center service. However, Terremark is not just a hosting/cloud provider; it also does carrier-neutral colocation. It has been sinking capital into data center builds, so an external infusion, particularly one directed specifically at funding the cloud-related engineering efforts, is probably welcome.
Terremark has been the leading-edge service provider for VMware-based on-demand infrastructure. It is to VMware’s advantage to get service providers to use its cutting-edge stuff, particularly the upcoming vCloud, as soon as possible, so giving Terremark money to accelerate its cloud plans is a perfectly good tactical move. I don’t think it’s necessary to read any big strategic message into this investment, although undoubtedly it’s interesting to contemplate.
The cloud computing forecast
John Treadway of Cloud Bzz asked my colleague Ben Pring, at our Outsourcing Summit, about how we derived our cloud forecast. Ben’s answer is apparently causing a bit of concern. I figured it might be useful for me to respond publicly, since I’m one of the authors of the forecast.
The full forecast document (clients only, sorry) contains a lot of different segments, which in turn make up the full market that we’ve termed “cloud computing”. We’ve forecast each segment, along with the subsegments within it. Those segments, and their subsegments, are Business Process Services (cloud-based advertising, e-commerce, HR, payments, and other); Applications (no subcategories; this is “cloud SaaS”); Application Infrastructure (platform and integration); and System Infrastructure (compute, storage, and backup).
Obviously, one can argue whether or not it’s valid to include advertising revenue, but a key point that should not be missed is that, in the trend toward the consumerization of IT, it is often the advertiser that implicitly pays for the consumer’s use of an IT service, rather than the consumer himself. Advertising revenue is a significant component of the overall market, and it is part of the “cloud” phenomenon even if you don’t necessarily think of it as “computing”.
Because we offer highly granular breakouts within the forecast, those who are looking for specific details or who wish to classify the market in a particular way should be able to do so. If you want to define cloud computing as just typical notions of PaaS plus IaaS, for instance, you can probably simply take our platform, compute, and storage line-items and add them together.
Is it confusing to see the giant number with advertising included? It can be. I often start off descriptions of our forecast with, “This is a huge number, but you should note that a substantial percentage of these revenues are derived from online advertising,” and then drill down into a forecast for a particular segment or subsegment of audience interest.
Giant numbers can be splashily exciting on conference presentations, but pretty much anyone doing anything practical with the forecast (like trying to figure out their market opportunity) looks at a segment or even a subsegment.
The perils of defaults
A Fortune 1000 technology vendor installed a new IP phone system last year. There was one problem: By IT department policy, that company does not change any defaults associated with hardware or software purchased from a vendor. In this case, the IP phones defaulted to no ring tone. So the phone does not ring audibly when it gets a call. You can imagine just how useful that is. Stunningly, this remains the case months after the initial installation — the company would rather, say, miss customer calls than change the Holy Defaults.
A software vendor was having an interesting difficulty with a large customer. The vendor’s configuration file, as shipped with the software, has defaults set up for single-server operation. If you want to run multi-server for high availability or load distribution, you need to change some of the defaults in the configuration file. The vendor ran into a customer with the same kind of “we do not change any defaults” policy. Unsurprisingly, the customer’s multi-server deployment was breaking. The vendor’s support staff explained what was wrong, explained how to fix it, and was confounded by the policy. This is one of the things a custom distribution from the vendor can be used for, of course, but it’s a head-slapping moment and a grotesque waste of everyone’s time.
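To make the pattern concrete (with entirely made-up parameter names, not the vendor’s actual configuration), the shipped defaults assume one box, and a multi-server deployment only works if you override them:

```python
# Hypothetical illustration: defaults shipped for single-server operation,
# and the overrides a multi-server deployment actually needs.
SHIPPED_DEFAULTS = {
    "cluster_enabled": False,       # fine for one box, breaks a multi-node setup
    "bind_address": "127.0.0.1",    # peers can't reach a loopback-only listener
    "session_store": "local_disk",  # sessions must be shared across nodes
    "peer_nodes": [],
}

MULTI_SERVER_OVERRIDES = {
    "cluster_enabled": True,
    "bind_address": "0.0.0.0",
    "session_store": "shared_db",
    "peer_nodes": ["app01.example.com", "app02.example.com"],
}

# Effective configuration: defaults exist precisely so they can be overridden.
config = dict(SHIPPED_DEFAULTS, **MULTI_SERVER_OVERRIDES)
```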
Now I’m seeing cloud configurations confounding people who have these kinds of policies. What is “default” when you’re picking from drop-down menus? What do you do when the default selection is something other than what you actually need? And the big one: Will running software on cloud infrastructure necessitate violating virgin defaults?
As an analyst, I’m used to delivering carefully nuanced advice based on individual company situations, policies, and needs. But here’s one no-exceptions opinion: “We never ever change vendor defaults” is a universally stupid policy. It is particularly staggeringly dumb in the cloud world, where generally, if you can pick a configuration, it is a supported configuration. And bluntly, in the non-cloud world, configurable parameters are also just that — things that the vendor intends for you to be able to change. There are obviously ways to screw up your configuration, but those parameters are changeable for a reason. Moreover, if you are just using cloud infrastructure but regular software, you should expect that you may need to tune configuration parameters in order to get optimal performance on a shared virtualized environment that your users are accessing remotely (and you may want to change the security parameters, too).
Vendors: Be aware that some companies, even really big successful companies, sometimes have nonsensical, highly rigid policies regarding defaults. Consider the tradeoffs between defaults as a minimalistic set, and defaults as a common-configuration set. Consider offering multiple default “profiles”. Packaging up your software specifically for cloud deployment isn’t a bad idea, either (i.e., “virtual appliances”).
IT management: Your staff really isn’t so stupid that they’re not able to change any defaults without incurring catastrophic risks. If they are, it’s time for some different engineers, not needlessly ironclad policies.
If you worry about hardware, it’s not cloud
If getting more RAM means calling your service provider, who has to order the RAM, wait to receive it, and install it in a physical server before you actually get more memory, and who then bills you on a one-off basis for buying and installing it, you’re not doing cloud computing. If you have to negotiate the price of that RAM each time they buy some, you are really, really not doing cloud computing.
I talked to a client yesterday who is in exactly this situation, with a small vendor that calls itself a cloud computing provider. (I am not going to name names on my blog, in this case.)
Cloud infrastructure services should not be full of one-offs. (The example I cited is merely the worst of the service provider’s offenses against cloud concepts.) It’s reasonable to hybridize cloud solutions with non-cloud solutions, but for basic things — compute cores, RAM, storage, bandwidth — if it’s not on-demand, seamless, and nigh-instant, it’s not cloud, at least not in any reasonable definition of public cloud computing. (“Private cloud”, in the sense of in-house, virtualized data centers, adopts some but not all traits of the public cloud to varying degrees, and therefore gets cut more slack.)
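For contrast, here’s what “I need more memory” looks like against a real cloud API: an instance with more RAM is a call away, billed by the hour. A boto3 sketch for illustration; the AMI, instance type, and instance ID are made up.

```python
# Sketch: getting more RAM as an API call, not a hardware order.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Bring up a larger instance from the same image...
ec2.run_instances(ImageId="ami-abcdef12", InstanceType="m1.large",
                  MinCount=1, MaxCount=1)

# ...and retire the smaller one once traffic has moved over.
ec2.terminate_instances(InstanceIds=["i-12345678"])
# Billing adjusts by the instance-hour; there's nothing to negotiate.
```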
Cloud infrastructure should be a fabric, not individual VMs that are tied to specific physical servers.
Out clauses
I’m seeing an increasing number of IT buyers try to negotiate “out clauses” in their contracts — clauses that let them arbitrarily terminate their services, or that allow them to do so based on certain economy-related business conditions.
People are doing this because they’re afraid of the future. If, for instance, they launch a service and it fails, they don’t want to be stuck in a two-year contract for hosting that service (or colocating that service, or having CDN services for it, etc.). Similarly, if the condition of their business deteriorates, they have an eye on what they can cut in that event.
We’re not talking about businesses that are already on the chopping block — we’re talking about businesses that seem to currently be in good health, whose prospects for growth would seem good. (Businesses that are on the chopping block, or wavering dangerously near it, are behaving in different defensive ways.)
Providers who would never previously have agreed to such conditions are sometimes now willing to negotiate clauses that address these specific fears. But don’t expect such clauses to be common, especially if the service provider has an up-front capital expenditure (such as equipment for dedicated, non-utility hosting). If you’re trying to negotiate a clause like this, you’re much more likely to have success if you tie it to specific business outcomes that would result in you entirely shutting down whatever it is that you’re outsourcing, rather than trying to negotiate an arbitrary out.
The costs of user-generated content
When I first started this blog, I intended to write more about virtual worlds, following the general theme of massive scalability. In this instance, though, I want to muse upon the balance between maximizing your revenues and adhering to principle, especially when you’re a public company with shareholders to worry about. This also involves the unintended consequences of user-generated content, and there are lessons to be learned here if you’re looking at UGC, whether in your own enterprise or for consumers in general. Similarly, there are perils in any customer-controlled environment. Bear with me, though, because this is long.
Massively multiplayer online games (MMOGs), and MMO roleplaying games (MMORPGs) in particular, all have distinct communities, but each such community is always full of players with conflicting interests. The development studio has to balance its own vision, the sometimes-warring interests of different types of players, and the commercial needs of the game (whether it’s paid for in subscriptions, real-money trade, or something else, there has to be revenue), in order to maximize long-term profit. Communities are particularly fragile, and widespread changes can lead to mass exodus, as Sony Online Entertainment discovered with Star Wars: Galaxies, where a thorough and expensive revamp instead caused more than a 50% drop in subscriptions. Players do not depart as individuals — they are part of a community of family, friends, and online acquaintances, and when key players leave, there’s a domino effect.
Enter NCsoft (SEO:036570), and one of its veteran properties, the five-year-old City of Heroes. CoH is relatively small fry for NCsoft — it peaked at around 200,000 subscribers, and now has something in the 150,000 range, each paying a base subscription fee of $15/month. NCsoft’s Lineage and Lineage II, by contrast, each have about a million subscribers; for anyone who isn’t Blizzard and the juggernaut that is World of Warcraft, these are impressive numbers, but they’re down hugely from their all-time highs.
CoH currently enjoys a position as the only superhero-themed MMOG out there. However, Champions Online comes out this summer, designed by the same folks who originally created CoH, which creates an imminent competitive threat. Paragon Studios (the studio within NCsoft that’s responsible for CoH) chose to do something smart — introduce user-generated content, allowing players to create their own missions (scenarios), complete with fully custom enemies to fight. (As an on-and-off CoH player with what I hope is a creative streak, I find UGC a deeply welcome feature, and lots of people are using it to do very entertaining things.)
As one would expect, players immediately went diligently to work to find ways to hyperoptimize UGC in order to maximize rewards for a given amount of play time. The game’s EULA specifies that you’re not allowed to use exploits, but the difficulty this created was: What is an exploit, versus merely an unintended level of reward? There are methods in the game that generate very high rewards per unit time, for instance; UGC simply allowed players to engineer optimal situations for themselves. The game’s programmers rapidly closed down some methods, but left others live for almost a full month. The hyper-efficient methods were well known and broadly used by the player base, but the studio was essentially silent, with no communication to customers other than a request for feedback.
Usually, in a virtual world, when there’s an exploit, the exploiters are limited to a handful of people; players normally know a bug when they see one, like the ability to duplicate a valuable object. This particular case is unusual because it affects a sizable percentage of the player base, and it’s unclear what is and is not an exploit.
Consequently, players have been shocked to see NCsoft announce that they’ve decided to react harshly, stating that players who have “abused” the reward system may lose the rewards they’ve gained, including losing access to the characters used. Since CoH is an MMORPG, characters may represent hundreds, even thousands, of hours of investment, so this is a serious threat. The real-world cash value of optimized characters is significant, too, although such sales and transfers are against the EULA.
It’s an extraordinary choice on NCsoft’s part. Other than the instructions not to “exploit” the system, as well as explicit rules forbidding players from creating exploitative UGC, there was never any warning to customers not to play UGC that might be exploitative, even though CoH’s studio publicly communicates with customers on a daily basis through the game’s forums. NCsoft has also recently been pushing sales of a new boxed set for new players, creating a high likelihood of inadvertent “abuse” by new players who would not necessarily know that these were exceptional levels of reward for the time spent.
Losing access to rewards and characters essentially nullifies the time investment of players, and removes avenues for having fun (the character represents the ability to access content). Thus, impacted customers, most of whom subscribe month-to-month, have a very high likelihood of cancelling. This represents a potential direct revenue hit at a time when the game is likely extremely vulnerable to competition, and the aforementioned domino effect of subscriber loss is real and must be considered. Yet doing nothing compromises principle, and potentially creates a whack-a-mole effect whereby players find new gray areas of high-reward generation and widely use them, while developers try to patch these as quickly as possible. Moreover, because virtual worlds have internal economies, exceptionally fast rewards create imbalances, so they have an impact beyond individual players. (This does not even count the impact on “gold farmers” and “power-leveling services”, who offer in-game rewards and powerful characters in exchange for real money, a practice that is against nearly every MMOG’s terms of service but is nonetheless a significant and growing business. Ironically, making it easier for players to gain quick rewards on their own devalues such services.)
NCsoft is facing the prospect of significant subscriber bleed due to the forthcoming Champions Online, so a decision that increases the likelihood of cancellations is an extraordinarily bold move. It’s unusual for public companies to be willing to choose principle over revenue. Implementing harsh penalties based on clear guidelines, possibly with an automated warning system (e.g., if a player has gotten more than X widgets per Y time, alert him to it), may be advisable, but retroactively imposing penalties on one’s customer base is another matter. Creating “traps” for bad apples disguised as paying customers is certainly reasonable. Punishing ordinary customers for having done something gray, which your company has failed to even suggest is black, may be a quick ticket to having to offer unpleasantly complex explanations to your shareholders. Industry-watchers may find the outcome of this instructive.
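For what it’s worth, the automated-warning idea doesn’t have to be elaborate. A purely illustrative sketch, with all names and thresholds invented:

```python
# Purely illustrative: warn players whose reward rate far exceeds the expected
# baseline, before anyone considers penalties. Names and numbers are invented.
EXPECTED_REWARDS_PER_HOUR = 1500   # hypothetical baseline ("X widgets per Y time")
WARNING_MULTIPLIER = 3             # flag anyone earning 3x the baseline


def notify(player_name, message):
    # stand-in for an in-game mail or pop-up
    print("To %s: %s" % (player_name, message))


def check_reward_rate(player_name, rewards_earned, hours_played):
    rate = rewards_earned / max(hours_played, 0.1)
    if rate > EXPECTED_REWARDS_PER_HOUR * WARNING_MULTIPLIER:
        notify(player_name,
               "Your reward rate is unusually high; the content you're running "
               "may be adjusted, and continued use could be treated as an exploit.")


check_reward_rate("ExamplePlayer", rewards_earned=20000, hours_played=2)
```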
So here are the broader lessons:
A couple of months ago, I wrote about scaling and friendly failure. The same principle applies here: It’s not what the limits are. It’s how well you communicate them to your customers in advance of enforcing them. That holds whether you’re a gaming company, a cloud computing company, a network services provider, or an entirely non-tech company.
If you are providing an environment with user-generated content, expect that it will be abused, sometimes in subtle ways. Even in a corporate environment, there is potential for abuse, particularly if the company ties goals or bonuses to producing UGC. Human nature being what it is, people optimize; in the work world, they’re careful not to optimize so much that they think they could get fired over it, but again, the boundaries are gray and hazy. Clear communication of what is and isn’t acceptable, in advance, is necessary.