TCO tool for cloud computing

Gartner clients might be interested in my just-published research: a TCO toolkit for comparing the cost of internal and cloud infrastructure.
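
For a feel for what such a comparison involves, here’s a toy sketch (purely illustrative, and emphatically not the Gartner toolkit; every number below is a made-up placeholder):

    # Toy TCO comparison: internal servers vs. on-demand cloud instances.
    # All figures are illustrative placeholders, not real pricing.

    def internal_tco(servers, hardware_cost, years, annual_opex_per_server):
        """Up-front hardware plus ongoing ops cost (power, space, admin)."""
        capex = servers * hardware_cost
        opex = servers * annual_opex_per_server * years
        return capex + opex

    def cloud_tco(instances, hourly_rate, hours_per_year, years):
        """Pure pay-per-hour cost; ignores reserved-capacity discounts."""
        return instances * hourly_rate * hours_per_year * years

    print(internal_tco(servers=10, hardware_cost=5000, years=3,
                       annual_opex_per_server=2000))   # 110000
    print(cloud_tco(instances=10, hourly_rate=0.40,
                    hours_per_year=8760, years=3))     # 105120.0

A real model has far more inputs (utilization, staffing, facilities, data transfer), which is exactly why a toolkit is useful.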

A not-new link, but one I nonetheless want to draw people’s attention to as much as possible: Yahoo’s best practices for speeding up your web site is a superb list of clearly articulated tips for improving both your site’s performance and the user’s perception of performance (which is shaped by more than raw speed). Recommended reading for everyone from the serious Web developer to the guy just throwing some HTML up for his personal pages.
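
To pick one concrete example from that list: far-future Expires headers on static content, so browsers cache assets instead of re-fetching them. A minimal sketch as Python WSGI middleware, assuming a hypothetical /images, /css, /js URL layout (real deployments usually set this in web server or CDN configuration):

    # A sketch of one Yahoo tip: far-future Expires headers on static
    # assets. The URL prefixes are assumptions for illustration.
    import time
    from wsgiref.handlers import format_date_time

    ONE_YEAR = 365 * 24 * 3600
    STATIC_PREFIXES = ('/images/', '/css/', '/js/')

    def far_future_expires(app):
        def middleware(environ, start_response):
            def patched_start(status, headers, exc_info=None):
                if environ.get('PATH_INFO', '').startswith(STATIC_PREFIXES):
                    headers.append(('Expires',
                                    format_date_time(time.time() + ONE_YEAR)))
                    headers.append(('Cache-Control',
                                    'public, max-age=%d' % ONE_YEAR))
                return start_response(status, headers, exc_info)
            return app(environ, patched_start)
        return middleware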

On the similarly not-new but still-interesting front, Voxel’s open-source mod_cdn module for Apache is a cool little bit of code that makes it easy to CDN-ify your site — install the module and it’ll automatically rewrite your links to static content so they point at the CDN. For those of you who are dealing with CDNs that don’t provide CNAME support (like the Rackspace/Limelight combo), who are using Apache for your origin front-end, and who don’t want to fool with mod_rewrite, this might be an interesting alternative.
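
Conceptually, the transformation mod_cdn performs looks something like the following Python sketch (this is the idea, not Voxel’s actual filter; the CDN hostname and asset paths are assumptions):

    # What mod_cdn automates: rewriting links to static assets so they
    # point at a CDN hostname. The module does this as an Apache output
    # filter; this sketch shows only the transformation itself.
    import re

    CDN_HOST = 'http://cdn.example.com'   # assumed CDN hostname
    ASSET_RE = re.compile(r'(src|href)="(/(?:images|css|js)/[^"]+)"')

    def cdnify(html):
        return ASSET_RE.sub(
            lambda m: '%s="%s%s"' % (m.group(1), CDN_HOST, m.group(2)),
            html)

    print(cdnify('<img src="/images/logo.png">'))
    # <img src="http://cdn.example.com/images/logo.png">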

Developer-driven cloud adoption?

James Governor’s thoughtful blog post on finding the REST of cloud prompted me to think about developer-driven versus sysadmin-driven adoption of cloud. This is a fulcrum that’s separate from the GUI vs. CLI vs. API tug-of-war, which in many ways is a sysadmin-driven debate.

The immediacy of cloud provisioning has instinctive appeal to developers who just want to get something up and running. Amazon’s choice to initially do API-based provisioning was a clear choice to favor developer thinking, and developers at vast numbers of start-ups rewarded them by adopting the platform. At Web 2.0 start-ups, the founders, who usually come out of some sort of dev background, get to hire the ops people and call the shots. And thus, direct appeal to developers has been very important to cloud success, up until this point.

But my observation from client interactions is that cloud adoption in established, larger organizations (my typical client is $100m+ in revenue) is, and will be, driven by Operations, and not by Development. The developers may be the business stakeholders, and they might be the engineers who first experiment with the cloud (in a “rogue” or ad-hoc way), but Operations has the budget and Operations is going to be managing the actual cloud deployment, and therefore Operations makes the decision about what cloud to go with in the end.

That assumes, of course, that the developers haven’t tied their code to a non-portable API. If the developers have, unbeknownst to the Ops folks, gone and tightly integrated their application with, say, Amazon S3 in a way that doesn’t readily allow portability between different cloud storage APIs, or built on top of Google App Engine, then Ops isn’t going to have much in the way of options.
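
The lock-in problem is easiest to see in code. Here’s a minimal sketch of the kind of thin storage abstraction that preserves Ops’ options; the backend classes and client calls are hypothetical placeholders, not any particular library’s API:

    # A thin blob-storage abstraction. Application code depends only on
    # BlobStore, so Ops can swap the backend without touching the app.

    class BlobStore:
        def put(self, key, data): raise NotImplementedError
        def get(self, key): raise NotImplementedError

    class LocalBlobStore(BlobStore):
        """Filesystem backend: works in the internal data center."""
        def __init__(self, root):
            self.root = root
        def put(self, key, data):
            with open('%s/%s' % (self.root, key), 'wb') as f:
                f.write(data)
        def get(self, key):
            with open('%s/%s' % (self.root, key), 'rb') as f:
                return f.read()

    class S3BlobStore(BlobStore):
        """Amazon S3 backend, wrapping whatever S3 client you use."""
        def __init__(self, client, bucket):
            self.client, self.bucket = client, bucket
        def put(self, key, data):
            self.client.put_object(self.bucket, key, data)   # hypothetical call
        def get(self, key):
            return self.client.get_object(self.bucket, key)  # hypothetical call

Code written directly against S3’s API, instead of against an interface like this, is exactly what takes the choice of cloud out of Ops’ hands.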

Another way to think about this: Developer-driven adoption is bottom-up adoption, rather than top-down adoption. The early stages of cloud adoption have been bottom-up. But because of the economic climate, larger organizations are experiencing a radical acceleration of interest in cloud computing, and thus, the next stage of cloud infrastructure adoption is more likely to be driven top-down.

Seven years to SEAP, not to cloud in general

Gartner recently put out a press release titled “Gartner Says Cloud Application Infrastructure Technologies Need Seven Years to Mature”, based on a report from my colleague Mark Driver. That’s gotten a bunch of pickup in the press and in the blogosphere. I’ve read a lot of people commenting about how the timeline given seems surprisingly conservative, and I suspect it’s part of what has annoyed Reuven Cohen into posting, “Cloud computing is for everyone — except stupid people.”

The confusion, I think, is over what the timeline actually covers. Mark is talking specifically about service-enabled application platforms (SEAPs), not cloud computing in general. Basically, a SEAP is a foundation platform for software as a service. Examples of current-generation SEAP platforms are Google App Engine, Microsoft Azure, the Facebook application platform, Coghead, and Bungee Labs. (Gartner clients who want to drill into SEAP should see The Impact of SaaS on Application Servers and Platforms.) When you’re talking about SEAP adoption, you’re talking about something pretty complex, on a very different timeframe from the evolution of the broader cloud computing style.

Cloud computing in general already has substantial business uptake, with potential radical acceleration due to the economic downturn. I say “potential” because it’s very clear to me that existing public cloud services, at their current state of maturity, frequently don’t meet the requirements that enterprises are looking for right now. I have far more clients suddenly willing to consider taking even big risks to leap into the cloud, than I have clients who actually have projects well-suited to the public cloud and who will realize substantial immediate cost savings from that move.

On the flip side, for those who have public-facing Web infrastructure, cloud services are now a no-brainer. Expect cloud elasticity and fast provisioning to simply become part of hosting and data center outsourcing solutions. Traditional hosting providers who don’t make the transition near-immediately are going to get eaten alive.

COBOL comes to the cloud

In this year of super-tight IT budgets and focus on stretching what you’ve got rather than replacing it with something new, Micro Focus is bringing COBOL to the cloud.

Most vendor “support for EC2” announcements are nothing more than hype. Amazon’s EC2 is a Xen-virtualized environment. It supports the operating systems that run in that environment; most customers use Linux. Applications run no differently there than they do in your own internal data center. There’s no magical conveyance of cloud traits. Same old app, externally hosted in an environment with some restrictions.

But Micro Focus (whose business centers on COBOL-based products) is actually launching its own cloud service, built on top of partner clouds — EC2, as well as Microsoft’s Azure (previously announced).

Micro Focus has also said it has tweaked its runtime for cloud deployment; it gives the example of storing VSAM files as blobs in SQL. This is undoubtedly due to Azure not offering direct access to the filesystem. (On EC2, you can get persistent normal file storage with EBS, but there are restrictions.) I assume that similar tweaks were made wherever the runtime needs to do direct file I/O. Note that this still doesn’t magically convey cloud traits, though.
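
As a toy illustration of the general idea (not Micro Focus’s implementation): keyed, record-oriented storage, the role VSAM plays, can live in a SQL table instead of the filesystem. Sketched here with Python and SQLite:

    # Keyed record storage as blobs in a SQL table, so the runtime needs
    # no direct filesystem access. KSDS is VSAM's key-sequenced data set.
    import sqlite3

    db = sqlite3.connect(':memory:')
    db.execute('CREATE TABLE vsam_ksds (rec_key TEXT PRIMARY KEY, record BLOB)')

    def write_record(key, record_bytes):
        db.execute('INSERT OR REPLACE INTO vsam_ksds VALUES (?, ?)',
                   (key, record_bytes))

    def read_record(key):
        row = db.execute('SELECT record FROM vsam_ksds WHERE rec_key = ?',
                         (key,)).fetchone()
        return row[0] if row else None

    write_record('CUST0001', b'JONES     ACTIVE 00042')
    print(read_record('CUST0001'))   # b'JONES     ACTIVE 00042'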

It’s interesting to see that Micro Focus has built its own management console around EC2, providing easy deployment of apps based on their technology, and is apparently making a commitment to providing this kind of hosted environment. Amidst all of the burgeoning interest in next-generation technologies, it’s useful to remember that most enterprises have a heavy burden of legacy technologies.

(Disclaimer: My husband was founder and CTO of LegacyJ, a Micro Focus competitor, whose products allow COBOL, including CICS apps, to be deployed within standard J2EE environments — which would include clouds. He doesn’t work there any longer, but I figured I should note the personal interest.)

Google Native Client

Google announced something very interesting yesterday: their Native Client project.

The short form of what this does: You can develop part or all of your application client in a language that compiles down to native code (for instance, C or C++, compiled to x86 machine code), then let the user run it in their browser, in a semi-sandboxed environment that theoretically prevents malicious code from being executed.

Why would you want to do this? Because developing complex applications in JavaScript is a pain, and all of the other options (Java in a browser, Adobe Flex, Microsoft Silverlight) provide only a subset of native functionality, and are slower than native applications. That’s one of the reasons that most applications are still written for the desktop.

It’s an ambitious project, not to mention one that is probably making every black-hat hacker on the planet drool right now. The security challenges inherent in this are enormous.

Adobe previously had a similar thought, in the form of Alchemy, a labs project for a C/C++ compiler that generates code for AVM2 (the virtual machine inside the Flash player). But Google takes the idea all the way down to true native code.

The broader trend has been towards managed code environments and just-in-time compilers (JITs). But the idea of native code with managed-code-like protections is certainly extremely interesting, and the techniques developed will likely be interesting in the broader context of malware prevention in non-browser applications, too.

And while we’re talking about lower-level application infrastructure pies that Google has its fingers in, it’s worth noting that Google has also exhibited significant interest in LLVM (which stands for Low-Level Virtual Machine). LLVM is an open-source project now sponsored by Apple, which hired its lead developer and is now using it within Mac OS X. In layman’s terms, LLVM makes it easier for developers to write new programming languages, and makes it possible to develop composite applications using multiple programming languages. A compiler or interpreter developer can generate LLVM instructions rather than compiling to native code, then let LLVM take care of the back-end, the final stage of getting the code to run natively. But LLVM also makes it easier to do analysis of code, something that is going to be critical if Google’s efforts with Native Client are to succeed. I am somewhat curious whether Google’s interests intersect here, or whether they’re entirely unrelated (not all that uncommon in Google’s chaotic universe).
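
To make the “generate LLVM instructions and let LLVM handle the back-end” idea concrete, here’s a tiny sketch using llvmlite, a Python binding for constructing LLVM IR (any language frontend could emit the same IR and get native code generation for free):

    # Build LLVM IR for a trivial add() function. A frontend emits IR
    # like this; LLVM's back-end turns it into native code.
    from llvmlite import ir

    i32 = ir.IntType(32)
    module = ir.Module(name='demo')
    fn = ir.Function(module, ir.FunctionType(i32, (i32, i32)), name='add')
    builder = ir.IRBuilder(fn.append_basic_block(name='entry'))
    a, b = fn.args
    builder.ret(builder.add(a, b, name='sum'))

    print(module)   # textual LLVM IR, ready for LLVM's optimizers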

Software and thick vs. thin-slice computing

I’ve been thinking about how the economics of cloud computing infrastructure will affect the way people write applications.

Most of the cloud infrastructure providers out there offer virtual servers as a slice of some larger physical server: Amazon EC2, GoGrid, Joyent, Terremark’s Enterprise Cloud, and others all follow this model. This is in contrast to the abstracted cloud platforms provided by Google App Engine or Mosso, which offer arbitrary, unsliced amounts of compute.

The virtual server providers typically provide thin slices — often single cores with 1 to 2 GB of RAM. EC2’s largest available slices are 4 virtual cores plus 15 GB, or 8 virtual cores plus 7 GB, for about $720/month. Joyent’s largest slice is 8 cores with 32 GB, for about $3300/month (including some data transfer). But on the scale of today’s servers, these aren’t very thick slices of compute, and the prices don’t scale linearly — thin slices are much cheaper than thick slices for the same total aggregate amount of compute.
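
The non-linearity is easy to see in miniature. Using made-up prices (the thin-slice rate below is a hypothetical placeholder; only the 8-core, $3,300 figure echoes the Joyent number above):

    # Compare one thick slice against the same aggregate compute bought
    # as thin slices. Prices are illustrative, not a real rate card.
    thin_slice  = {'cores': 1, 'ram_gb': 2,  'monthly': 75}    # hypothetical
    thick_slice = {'cores': 8, 'ram_gb': 32, 'monthly': 3300}  # from the text

    def cost_per_core(s):
        return s['monthly'] / float(s['cores'])

    print(cost_per_core(thin_slice))    # 75.0 per core-month
    print(cost_per_core(thick_slice))   # 412.5 per core-month

    # Eight thin slices give the same core count for 600/month, less
    # than a fifth of the thick slice's price (though with less RAM
    # per instance and no shared memory across them).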

The abstracted platforms are oriented around thin-slice compute, as well, at least from the perspective of desired application behavior. You can see this in the limitations imposed by Google App Engine; they don’t want you to work with large blobs of data nor do they want you consuming significant chunks of compute.

Now, in that context, contemplate this Intel article: “Kx – Software Which Uses Every Available Core”. In brief, Kx is a real-time database company; they process extremely large datasets, in-memory, parallelized across multiple cores. Their primary customers are financial services companies, who use it to do quantitative analysis on market data. It’s the kind of software whose efficiency increases with the thickness of the available slice of compute.

In the article, Intel laments the lack of software that truly takes advantage of multi-core architectures. But cloud economics are going to push people away from thick-sliced compute — away from apps that are most efficient when given more cores and more RAM. Cloud economics push people towards thin slices, and therefore applications whose performance does not suffer notably as the app gets shuffled from core to core (which hurts cache performance), or when limited to a low number of cores. So chances are that Intel is not going to get its wish.
