Monthly Archives: January 2009

Google Apps and enterprises

My colleague Tom Austin has posted a call for Google to be more transparent about enterprise usage of Google Apps. This was triggered by a TechCrunch article on Google’s reduction of the number of free users for a given Google Apps account.

I’ve been wondering how many businesses use Google Apps almost exclusively for messaging, and how many of them make substantial use of collaboration. My expectation is that a substantial number of the folks with custom domains on Google Apps solely or almost-solely do email or email forwarding. For instance, for my blog, I have no option for email on that domain other than via Google Apps, because my blog’s host has explicit MX record support for Google Apps and nobody else — so I use that to forward email for that domain to my regular email account. Given how heavily bloggers have driven domain registrations and “vanity” domains, I’d expect Google Apps to be wrapped up pretty heavily in that phenomenon. This is not to discount the small business, of course, whose usage of this kind of service also becomes more significant over time.
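(For the curious: pointing a custom domain’s mail at Google Apps is just a handful of MX records in the domain’s zone. Here’s a sketch, using the Google Apps mail hostnames documented at the time; the priority values shown are illustrative, and `example.com` is a placeholder.)

```
; Route a custom domain's mail to Google Apps via MX records.
example.com.  IN  MX  10  ASPMX.L.GOOGLE.COM.
example.com.  IN  MX  20  ALT1.ASPMX.L.GOOGLE.COM.
example.com.  IN  MX  20  ALT2.ASPMX.L.GOOGLE.COM.
```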

Those statistics aside, though, and going back to Tom’s thoughts on transparency, I think he’s right, if Google intends to court the enterprise market in the way that the enterprise is accustomed to being courted. I’m not sure Google intends that, though, especially since fighting more featureful, specialized vendors for an enterprise clientele is likely a waste of resources at the moment. The type of enterprise that is going to adopt this kind of solution is probably not the kind that wants to see a bunch of case studies and feel reassured by them; they’re independent early adopters with a high tolerance for risk. (This goes back to a point I made in a previous post: enterprise IT culture tends to be about risk mitigation.)


Recent polling results

I’ve just put out a new research report called The Changing Colocation and Data Center Market. Macroeconomic factors drove major changes in both the supply and demand picture for data center construction, leasing, and colocation in the last quarter of 2008, and those changes are continuing into this year. The resulting abrupt shifts in sourcing strategies, build plans, and the like have driven a ton of inquiry for my colleagues and me. This report looks at those changes, and presents results from a colocation poll of attendees at Gartner’s data center conference in December.

Those of you interested in commentary related to that conference might also want to read reports by my colleagues: Too Many Data Center Conference Attendees Are Not Considering Availability and Location Risks in Data Center Siting and Sourcing Decisions, and, on an issue near and dear to many businesses right now (how to stretch their money and the life of their current data centers), Pragmatic Guidance on Data Center Energy Issues for the Next 18 Months.

Reports are clients-only, sorry.


The turf war of unified computing

The New York Times article about Cisco getting into the server market is a very interesting read, as is Cisco’s own blog post announcing something they call “Unified Computing”.

My colleague Tom Bittman has a thoughtful blog post on the topic, writing: “What is apparent is that the comfortable sandboxes in which different IT vendors sat are shattering.” Those words demand that computing become a much more flexible, unified fabric.

Tom ruminates on the vendors, but setting aside any opinion of Cisco’s approach (or any other vendor making a unified computing attempt), my mind goes to the people — specifically, the way that a unified approach impacts IT operations personnel, and the way that these engineers can help or hinder adoption of unified data center technologies.

Unified computing — unified management of compute, storage, and network elements — is not just going to shape up to be a clash between vendors. It’s going to become a turf war between systems administrators and network engineers. Today, computing and storage are classically the domain of the former, and the WAN the domain of the latter. The LAN might go either way, but the bigger the organization, the more likely it goes to the network guys. And devices like application delivery controllers fall into an uncomfortable in-between, but in most organizations, one group or the other takes them into their domain. The dispute over devices like that serves as the warning shot in this war, I think. (An ADC is a network element, but it is often closely associated with servers; it is usually an appliance, i.e. a purpose-built server, but its administration more closely resembles a network device than a server.) The more a given technology crosses turf lines, the greater the dispute over who manages it, whose budget it comes out of, etc.

[Image: a lolcat captioned “all your cloud are belong to us”]
(Yes, I really did make a lolcat just for this post.)

He who controls the entire enchilada — the management platform — is king of the data center. There will be personnel who are empire-builders, seeking to use platform control to assert dominance over more turf. And there will be personnel who try to push away everything that is not already their turf, trying to avoid more work piling up on their plate.

Unification is probably inevitable. We’ve seen this human drama play out once this past decade already — the WAN guys generally triumphed over the telephony guys in the VoIP convergence. But my personal opinion is that it’s the systems guys, not the network guys, who will be most likely to triumph in the unified-platform wars. In most organizations, systems guys significantly outnumber the network guys, and they tend to have a lot more clout, especially as you go up the management chain. Internal politics and whose vendor influence triumphs may turn out to influence solution selection as much as, or more than, the actual objective quality of the solutions themselves.


My coverage

I’ve received various queries from people, particularly analyst relations folks at vendors, trying to understand what I cover, especially as it relates to cloud computing, so I figured I’d devote a blog post to explaining.

Gartner analysts do not really have “official coverage areas” defined by titles, and our coverage shifts dynamically based on client needs and our own interests. We are matrix-managed: our research falls into “agendas” (which may be outside our “home team”), and we collaborate across the company in “research communities”. I report into a team called Enterprise Network Services within our Technology and Service Providers group (i.e., what was Dataquest), but I spend about 90% of my time answering end-user inquiries, with the remainder split between vendors and investors. I focus on North America but also track my markets globally. I’m responsible for sizing and forecasting my markets, too.

I use the term “Internet infrastructure services” to succinctly describe my coverage, but other terms, like “emerging enterprise network services”, are used as well. I cover services that are enabled by networks, rather than networking per se.

My coverage falls into the following broad buckets:

  • Hosting, colocation, and the general market for data center space.
  • Content delivery networks, and application delivery networks as a service.
  • The Internet ecosystem, enabling technologies like DNS, policy issues, etc.
  • Cloud computing.

Cloud computing, of course, is an enormously broad topic, and it’s covered across Gartner in many areas of specialization, with those of us who track it closely collaborating via our Cloud research community.

My particular focus in the cloud realm is on cloud infrastructure services — public clouds and “virtual private” clouds, on the infrastructure side (i.e., excluding SaaS, consumer content/apps, etc.). Because so many of these services are, in their currently-nascent stage, basically a way to host applications built using Web technologies, and compete directly in that same market, it’s been a very natural extension of my coverage of hosting. But by the nature of the topic, my coverage also crosses into everything else touching the space.

Our end-user customers (IT managers and architects) ask me questions like:

  • Help me cut through the hype and figure out this cloud thing.
  • Help me to understand what cloud offerings are available today.
  • Given my requirements, is there a cloud service that’s right for me?
  • What short-list of vendors should I look at for cloud infrastructure?
  • What’s the cost of the various cloud options?
  • What should I think about when considering putting a project in the cloud?
  • What will I need to do in order to get my application to run in the cloud?
  • What best practices can I learn from cloud vendors?

Our vendor customers ask me questions like:

  • How is the cloud transformation going to affect my business?
  • What do users care about when purchasing a cloud service?
  • What does the competitive landscape look like?
  • What should I be thinking about in my three-year roadmap?
  • What are the technologies that I should be exploiting?
  • Who should I acquire for competitive advantage?

My research is primarily grounded in the here-and-now — from what’s available now, to what’s going to be important over the next five years. However, like everyone who covers cloud computing, I’m also hunched over the crystal ball, trying to see a decade or two into the future. But companies buy stuff thinking about their needs right now and maybe their needs three years out, and vendors think about the next year’s product plans and how it positions them three years out, so I’m kept pretty busy dealing within the more immediate-term window of “how do I cut through the hype to use cloud to bring measurable benefit to my business?”


Clear Card and frequent travel

It’s Inauguration Day, but I am on the road, so I am watching TV from my hotel room rather than braving the crowds in DC. Fortunately, my quadrant of the DC area was still relatively tame at 7 am, and I made it to Dulles on time, where I made the mistake of using the premium security line rather than my Clear card, and my lane was essentially unmoving for a time — lots of tourists today. (By the way, kudos to the Washington Post for slimming down their mobile site today, to just a nice set of minimalistic pages focused on the inauguration and visitors to the city.)

Like many analysts, I travel a lot. Moreover, I go to a hodgepodge of destinations, making it difficult to accumulate frequent flyer miles on a single carrier. So I find myself frequently flying airlines that I do not have premier standing on.

My attempted solution to this was to get a Clear Card. My home airports — Dulles, Reagan National, BWI — got Clear early on, so it seemed like a logical choice. Applying was easy — in Dulles, there’s a kiosk for it where they can do the soup-to-nuts application, and one day, while waiting for an international flight (and therefore in possession of my passport), I went ahead and did it. My Clear card arrived promptly in the mail, and I was good to go.

Unfortunately, there are three notable aspects about Clear that severely hamper its usefulness: First, it’s available at very few airports. Second, it’s often not clear whether or not it’s available, and if you’re in an airport where it’s not (and sometimes even if you’re in an airport where it is), if you ask about it, airport personnel, including the TSA personnel, will look at you like you have grown a third head — they’ve never heard of it, and think the fact that you are asking about it (“Excuse me, is there a security line for Clear card holders at this airport?”) is annoying, weird and possibly suspicious. (And good luck getting anyone to take your Clear card as ID, as they’re supposed to do.) Third, the implementation of the Clear line varies, and its usefulness varies accordingly.

At Dulles, for instance, the Clear line is down by the employee security checkpoint, at baggage claim. It’s a separate line which usually has very few people in it, but it’s slow compared to the rate at which the premium security line normally moves. Most of the people now using Clear probably don’t use their cards often; that’s obvious from the way everyone fumbles with the thing. (I do, too.) Moreover, from watching the way Clear users fumble with their bags, it seems like they don’t fly often; frequent flyers usually get dealing with security down to an art form. The slowness of the passengers can offset the fact that there are very few people in line — premium security at Dulles can actually be faster, especially when you take into account the out-of-the-way walk down to the Clear area and back towards the gates. That makes Clear a pricey investment, worthwhile mainly for those times when I’m flying an airline whose premium line I can’t use, or when Dulles is exceptionally crowded (or has otherwise not made expert-traveler lanes available).

By contrast, in Atlanta, the Clear line is just a separate entrance into security, just around the corner from the regular entrance. It lets you shortcut what is sometimes a very long general line, or the shorter premium line. But once you’re past the card-check, you’re waiting in the same screening lanes as everyone else.

I don’t regret getting the Clear card, but the cost-to-value ratio is a bit off. It’s $200 to slightly shorten wait times, for fairly frequent flyers who often end up on flights that don’t entitle them to use the premium security line, and who routinely use airports with Clear.

To me, this seems like an opportunity for the airlines to add value: offer a paid upgrade that gives access to the premium security line and “zone 1” preferred boarding, or attach those privileges to full-fare tickets, or otherwise allow people to buy them temporarily.


The Process Trap

Do your processes help or hinder your employees’ ability to deliver great service to customers? When an employee makes an exception to keep a customer happy, is that rewarded or does the employee feel obliged to hide that from his manager? When you design a process, which is more important: Ensuring that nobody can be blamed for a mistake as long as they did what the process said they were supposed to do, or maximizing customer satisfaction? And when a process exception is made, do you have a methodical way to handle it?

Many companies have awesome, dedicated employees who want to do what’s best for the customer. And, confronted with the decision of whether to make a customer happy, or follow the letter of the process, most of them will end up opting for helping the customer. Many organizations, even the most rigidly bureaucratic ones, celebrate those above-and-beyond efforts.

But there’s an important difference in the way that companies handle these process exceptions. Some companies are good at recognizing that people will exercise autonomy, and that systems should be built to handle exceptions, and track why they were granted and what was done. Other companies like to pretend that exceptions don’t exist, so when employees go outside the allowed boundaries, they simply do stuff — the exception is never recorded, and nobody knows what was done or how or why, and if the issue is ever raised again, the account team turns over, or someone wonders why this particular customer has a weird config, nobody will have a clue. And ironically, it tends to be the companies with the most burdensome process — the ones not only most likely to need exceptions, but the ones obsessed with a paperwork trail for even the most trivial minutia — that lack the ability to systematically handle exceptions.

When you build systems, whether human or machine, do you figure out how you’re going to handle the things that will inevitably fall outside your careful design?


Zork meets browser-based games

Nostalgia for the ’80s continues to reign. (Robot Chicken fans: Have you seen the Pac-Matrix?)

A company called Jolt Online Gaming has acquired the rights to produce a browser-based MMORPG called Legends of Zork. For those of you who have never had the experience of realizing it is dark and you may be eaten by a grue, this probably isn’t particularly meaningful, but for fans of the era of classic Infocom text-adventure games, it is both fascinating and bizarre to see that they’re going to try to turn the Great Underground Empire into a hack-and-slash online multiplayer RPG.

The market for browser-based massively-multiplayer games supports a cottage industry of small companies with a handful of developers, backed by an artist or two, who earn a reasonably nice living for themselves without ever competing with the big-time. I wonder if we’ll eventually see a roll-up of these guys, or if they like being “lifestyle companies”.


Akamai article in ACM Queue

My colleague Nick Gall pointed out an article in the ACM Queue that I’d missed: Improving Performance on the Internet, by Akamai’s chief scientist, Tom Leighton.

There is certainly some amount of marketing spin in that article, but it is nonetheless a very good read. If you are looking for a primer on why there are CDNs, or are interested in understanding how application delivery network services work, this is a great article. Even if you’re not interested in CDNs, the section called “Highly Distributed Network Design” has a superb set of principles for fault-tolerant distributed systems, which I’ll quote here:

  1. Ensure significant redundancy in all systems to facilitate failover.
  2. Use software logic to provide message reliability.
  3. Use distributed control for coordination.
  4. Fail cleanly and restart.
  5. Phase software releases.
  6. Notice and proactively quarantine faults.
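A couple of those principles are easy to sketch in code. Below is a minimal, hypothetical illustration of principles 2 and 4 (use software logic to provide message reliability; fail cleanly) — retry an unreliable send a bounded number of times, then fail with a clean error. None of the names here come from the article.

```python
class SendError(Exception):
    """Raised when a message cannot be delivered."""


def reliable_send(send, payload, max_attempts=3):
    """Retry `send` until it succeeds, failing cleanly after max_attempts."""
    last_error = None
    for attempt in range(1, max_attempts + 1):
        try:
            return send(payload)
        except SendError as e:
            last_error = e  # transient failure: try again
    raise SendError(f"gave up after {max_attempts} attempts") from last_error


# Example: a flaky transport that fails twice, then succeeds.
failures = iter([True, True, False])

def flaky(payload):
    if next(failures):
        raise SendError("transient network error")
    return f"ack:{payload}"

print(reliable_send(flaky, "hello"))  # succeeds on the third attempt: ack:hello
```

The point is that reliability lives in the retry logic, not in any single delivery attempt — and when retries are exhausted, the caller gets a clean, explicit error rather than a hang.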

One niggle: the article says, “The top 30 networks combined deliver only 50 percent of end-user traffic, and it drops off quickly from there, with a very-long-tail distribution over the Internet’s 13,000 networks.” That statement needs a very important piece of context: most of those networks do not belong to network operators (i.e., carriers, cable companies, etc.). Many of them are simply “autonomous systems” (in Internet parlance) owned by enterprises, or belonging to Web hosters, and so forth. That’s why the top 30 account for so much of the traffic, and the percentage would be sharply higher if you credited them with the traffic of the enterprises that buy transit from them. (Those interested in the data for a deeper dive should check out the Routing Report site.)
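As a back-of-the-envelope illustration of that long-tail shape (purely hypothetical numbers, not Akamai’s data), a Zipf-style distribution over 13,000 networks concentrates a large share of traffic in a small head:

```python
def zipf_shares(n, s=1.0):
    """Normalized Zipf-like traffic shares for n networks, ranked by size."""
    weights = [1.0 / (rank ** s) for rank in range(1, n + 1)]
    total = sum(weights)
    return [w / total for w in weights]

shares = zipf_shares(13_000)
top30 = sum(shares[:30])
print(f"top 30 of 13,000 networks carry {top30:.0%} of traffic")
```

With these made-up parameters, the top 30 networks end up with roughly the share the article describes, while thousands of tail networks each carry a vanishingly small fraction.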


COBOL comes to the cloud

In this year of super-tight IT budgets and focus on stretching what you’ve got rather than replacing it with something new, Micro Focus is bringing COBOL to the cloud.

Most vendor “support for EC2” announcements are nothing more than hype. Amazon’s EC2 is a Xen-virtualized environment. It supports the operating systems that run in that environment; most customers use Linux. Applications run no differently there than they do in your own internal data center. There’s no magical conveyance of cloud traits. Same old app, externally hosted in an environment with some restrictions.

But Micro Focus (which is focused around COBOL-based products) is actually launching its own cloud service, built on top of partner clouds — EC2, as well as Microsoft’s Azure (previously announced).

Micro Focus has also said it has tweaked its runtime for cloud deployment. They give the example of storing VSAM files as blobs in SQL. This is undoubtedly due to Azure not offering direct access to the filesystem. (For EC2, you can get persistent normal file storage with EBS, but there are restrictions.) I assume that similar tweaks were made wherever the runtime needs to do direct file I/O. Note that this still doesn’t magically convey cloud traits, though.
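The general technique (persisting record-oriented file data as blobs in a SQL store) is easy to sketch. Here is a hedged illustration using sqlite3; Micro Focus’s actual implementation is not public, and the table name and record bytes below are invented.

```python
import sqlite3

# In-memory database standing in for whatever SQL store the cloud offers.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE vsam_files (name TEXT PRIMARY KEY, data BLOB)")

# Pretend VSAM record bytes; in reality this would be the file's contents.
record = b"\x00\x01CUSTOMER0001"
conn.execute("INSERT INTO vsam_files VALUES (?, ?)", ("CUSTFILE", record))

# The runtime's file I/O layer would read the blob back instead of the filesystem.
(stored,) = conn.execute(
    "SELECT data FROM vsam_files WHERE name = ?", ("CUSTFILE",)
).fetchone()
print("round-trip ok" if stored == record else "mismatch")
```

The key property is that the bytes round-trip exactly, so the runtime can emulate file semantics on top of a store that offers no direct filesystem access.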

It’s interesting to see that Micro Focus has built its own management console around EC2, providing easy deployment of apps based on their technology, and is apparently making a commitment to providing this kind of hosted environment. Amidst all of the burgeoning interest in next-generation technologies, it’s useful to remember that most enterprises have a heavy burden of legacy technologies.

(Disclaimer: My husband was founder and CTO of LegacyJ, a Micro Focus competitor, whose products allow COBOL, including CICS apps, to be deployed within standard J2EE environments — which would include clouds. He doesn’t work there any longer, but I figured I should note the personal interest.)


More cloud news

Another news round-up, themed around “competitive fracas”.

Joyent buys Reasonably Smart. Cloud hoster Joyent has picked up Reasonably Smart, a tiny start-up with an APaaS offering based, unusually enough, on JavaScript and the Git version-control system. GigaOM has an analysis; I’ll probably post my take later, once I get a better idea of exactly what Reasonably Smart does.

DreamHost offers free hosting. DreamHost — one of the more prominent, popular mass-market and SMB hosting providers — is now offering free hosting for certain applications, including WordPress, Drupal, MediaWiki, and phpBB. There are a limited number of beta invites out there, and DreamHost notes that the service may become $50/year later. (The normal DreamHost base plan is $6/month.) Increasingly, shared hosting companies are having to compete with free application-specific hosting services like Wikidot, and they’re facing the looming spectre of giants like Google giving away cloud capacity for free. Shared hosting is a cutthroat market already, so here’s another marketing salvo being fired.

Google goes after Microsoft. Google has announced it’s hiring a sales force to pitch the Premier Edition of Google Apps to customers who are traditionally Microsoft customers. I’d expect the two key spaces where they’ll compete are in email and collaboration, going after the Exchange and Sharepoint base.

