The myth of zero downtime
Every time there’s been a major Amazon outage, someone always says something like, “Regular Web hosters and colocation companies don’t have outages!” I saw an article in my Twitter stream today, and finally decided that the topic deserves a blog post. (The article seemed rather linkbait-ish, so I’m not going to link it.)
It is an absolute myth that you will not have downtime in colocation or Web hosting. It is also a complete myth that you won’t have downtime in cloud IaaS run by traditional Web hosting or data center outsourcing providers.
The typical managed hosting customer experiences roughly one outage a year. This figure comes from thirteen years of asking Gartner clients, day in and day out, about their operational track record. These outages are typically related to hardware failure, although sometimes they are related to service provider network outages (often caused by device misconfiguration, which can obliterate any equipment or circuit redundancy). Some customers are lucky enough to never experience any outages over the course of a given contract (usually two to three years for complex managed hosting), but this is actually fairly rare, because most customers aren’t architected to be resilient to anything but the most trivial of infrastructure failures. (Woe betide the customer who has a serious hardware failure on a database server.) The “one outage a year” figure does not include any outages that the customer might have caused himself through application failure.
The typical colocation facility in the US is built to Tier III standards, with a mathematical expected availability of about 99.98%. In Europe, colocation facilities are often built to Tier II standards instead, for an expected availability of about 99.75%. Many colocation facilities do indeed manage to go for many years without an outage. So do many enterprise data centers — including Tier I facilities that have no redundancy whatsoever. The mathematics of the situation don’t say that you will have an outage — these are merely probabilities over the long term. Moreover, there will be an additional percentage of error that is caused by humans. Single-data-center kings who proudly proclaim that their one data center has never had an outage have gotten lucky.
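Those tier percentages are more intuitive when converted into expected downtime per year. A quick back-of-the-envelope calculation, using the availability figures cited above:

```python
# Convert an expected-availability percentage into expected downtime per year.
HOURS_PER_YEAR = 24 * 365  # 8760

def downtime_hours_per_year(availability_pct):
    """Expected annual downtime, given availability as a percentage."""
    return HOURS_PER_YEAR * (1 - availability_pct / 100)

# Tier III (~99.98%) vs. Tier II (~99.75%)
print(round(downtime_hours_per_year(99.98), 1))  # ~1.8 hours/year
print(round(downtime_hours_per_year(99.75), 1))  # ~21.9 hours/year
```

In other words, the Tier II expectation is roughly a full day of downtime a year versus under two hours for Tier III — but again, these are long-run probabilities, not guarantees for any individual facility.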
The amount of publicity that a data center outage gets is directly related to its tenant constituency. The outage at the 365 Main colocation facility in San Francisco a few years back was widely publicized, for instance, because that facility happened to house a lot of Internet properties, including ones directly associated with online publications. There have been significant outages at many other colocation facilities over the years, though, that were never noted in the press — I’ve found out about them because they were mentioned by end-user clients, or because the vendor disclosed them.
Amazon outages — and indeed, more broadly, outages at large-scale providers like Google — get plenty of press because of their mass effects, and the fact that they tend to impact large Internet properties, making the press aware that there’s a problem.
Small cloud providers often have brief outages — and long maintenance windows, and sometimes lengthy maintenance downtimes. You’re rolling the dice wherever you go. Don’t assume that just because you haven’t read about an outage in the press, it hasn’t occurred. Whether you decide on managed hosting, dedicated hosting, colocation, or cloud IaaS, you want to know a provider’s track record — their actual availability over a multi-year period, not excluding maintenance windows. Especially for global businesses with 24×7 uptime requirements, it’s not okay to be down at 5 am Eastern, which is prime-time in both Europe and Asia.
Sure, there are plenty of reasons to worry about availability in the cloud, especially the possibility of lengthy outages made worse by the fundamental complexity that underlies many of these infrastructures. But you shouldn’t buy into the myth that your local Web hoster or colocation provider necessarily has better odds of availability, especially if you have a non-redundant architecture.
Five reasons you should work at Gartner with me
Gartner is hiring again! We’ve got a number of open positions, actually, and we’re somewhat flexible about how we use the headcount; we’re looking for great people and the jobs can adapt to some extent based on what they know. This also means we’re flexible on seniority level — anywhere from about five years of experience to “I have been in the industry forever” is fine. We’re very flexible on background, too; as long as you have a solid grasp of technology, with an understanding of business, we don’t care if you’re currently an engineer, IT manager, product manager, marketing person, journalist, etc.
First and foremost, we’re looking for an analyst to cover the colocation market, and preferably also data center leasing. Someone who knows one or more other adjacent spaces as well would be great — peering, IP transit, hosting, cloud IaaS, content delivery networks, network services, etc.
We could also use an analyst who can cover some of the things that I cover — cloud IaaS, managed hosting, CDNs, and general Internet topics (managed DNS, domain registration, peering, and so on).
These positions will primarily serve North American clients, but we don’t care where you’re located as long as you can accommodate normal US time zones; these positions are work-from-home.
I love my job. You’ve got to have the right set of personality traits to enjoy it, but if the following five things sound awesome to you, you should come work at Gartner:
1. It is an unbeatably interesting job for people who thrive on input. You will spend your days talking to IT people from an incredibly diverse array of businesses around the globe, who all have different stories to tell about their environments and needs. Vendors will tell you about the cool stuff that they’re doing. You will be encouraged to inhale as much information as you can, reading and researching on your own. You will have one-on-one meetings with hundreds of clients each year (our busiest analysts do over 1,500 one-on-one interactions!), and get to meet countless more in informal interactions. Many of the people you talk to will make you smarter, and all of them will make you more knowledgeable.
2. You get to help people in bite-sized chunks. People will tell you their problems and you will try your best to help them in thirty minutes. After those thirty minutes, their problem is no longer yours; they’re the ones who are going to have to go back and fight through their politics and tangled snarl of systems to get things done. It’s hugely satisfying if you enjoy that kind of thing, especially since you do often get long-term feedback about how much you helped them. You’ll help IT buyer clients choose the right strategy, pick the right vendors, and save tons of money by smart contract negotiation. You’ll help vendors with their strategy, design better products, understand the competition, and serve their customers better. You’ll help investors understand markets and companies and trends, which translates directly into helping them make money. Hopefully, you’ll get to influence the market in a way that’s good for everyone.
3. You get to work with great colleagues. Analysts here are smart and self-motivated. There’s no real hierarchy; we work collaboratively and as equals, regardless of our titles, with ad-hoc leadership as needed. Also, analysts are articulate, witty, and opinionated, which always makes for fun interactions. Your colleagues will routinely provide you with new insights, challenge your thinking, and provide amazing amounts of expertise in all kinds of things. There’s almost always someone who is deeply expert in whatever you want to talk about. Analysts are Gartner’s real product; research and events are a result of the people. Our turnover is extremely low.
4. Your work is self-directed. Nobody tells you what to do here beyond some general priorities and goals; there’s very little management. You’re expected to figure out what you need to do with some guidance from your manager and input from your peers, manage your time accordingly, and go do it. You mostly get to figure out how to cover your market, and aim towards what clients are interested in. Your research agenda and coverage are flexible, and you can expand into whatever you can be expert in. You set your own working hours. Most people work from home.
5. We don’t do any pay-for-play. Integrity is a core value at Gartner, so you won’t be selling your soul. About 80% of our revenue comes from IT buyers, not vendors. Unlike most other analyst firms, we don’t do commissioned white papers, or anything else that could be perceived as an endorsement of a vendor; also, unlike some other analyst firms, analysts don’t have any sales responsibility for bringing in vendor sales or consulting engagements, or being quoted in press releases, etc. You will neither need to know nor care which vendors are clients or what they’re paying (any vendor can do briefings, though only clients get inquiry). Analysts must be unbiased, and management fiercely defends your right to write and say anything you want, as long as it’s backed up by solid evidence and is presented professionally, no matter how upset it makes a vendor. (Important downside: We don’t allow side work like participation in expert nets, and we don’t allow you or your immediate family to have any financial interest in the areas you cover, including employment or stock ownership in related companies. If your spouse works in tech, this can be a serious limiter.)
Poke me if you’re interested. I have a keen interest in seeing great people hired into these roles fast, since they’re going to be taking a big chunk of my current workload.
Amazon and Equinix partner for Direct Connect
Amazon has introduced a new connectivity option called AWS Direct Connect. In plain speak, Direct Connect allows an Amazon customer to get a cross-connect between his own network equipment and Amazon’s, in some location where the two companies are physically colocated. In even plainer speak, if you’re an Equinix colocation customer in their Ashburn, Virginia (Washington DC) data center campus, you can get a wire run between your cage and Amazon’s, which gives you direct connectivity between your router and theirs.
This is relatively cheap, as far as such things go. Amazon imposes a “port charge” for the cross-connect at $0.30/hour for 1 Gbps or $2.25/hour for 10 Gbps (on a practical level, since cross-connects are by definition nailed up 100% of the time, about $220/month and $1625/month respectively), plus outbound data transfer at $0.02/GB. You’ll also pay Equinix for the cross-connect itself (I haven’t verified the prices for these, but I’d expect they would be around $500 and $1500 per month). And, of course, you have to pay Equinix for the colocation of whatever equipment you have (upwards of $1,000/month per rack).
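A rough sketch of the monthly math for a 1 Gbps port, using the prices quoted above (remember that the Equinix cross-connect figure is my unverified estimate, and colocation for your own gear is extra):

```python
# Back-of-the-envelope monthly cost for an AWS Direct Connect 1 Gbps port,
# using the prices quoted above. The Equinix figure is a rough, unverified estimate.
HOURS_PER_MONTH = 730  # average hours in a month

port_hourly = 0.30           # 1 Gbps port charge, $/hour
transfer_per_gb = 0.02       # outbound data transfer, $/GB
equinix_cross_connect = 500  # estimated Equinix cross-connect, $/month

def monthly_cost(outbound_gb):
    port = port_hourly * HOURS_PER_MONTH  # ~$219/month; the port is nailed up 24x7
    return port + transfer_per_gb * outbound_gb + equinix_cross_connect

print(round(monthly_cost(10_000)))  # 10 TB outbound per month -> ~$919
```

Even at 10 TB a month of outbound traffic, the all-in figure stays well under $1,000 before colocation charges — cheap compared to Amazon’s standard Internet data transfer pricing at that volume.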
Direct Connect has lots of practical uses. It provides direct, fast, private connectivity between your gear in colocation and whatever Amazon services are in Equinix Ashburn (and non-Internet access to AWS in general), vital for “hybrid cloud” use cases and enormously useful for people who, say, have PCI-compliant e-commerce sites with huge Oracle RAC databases and black-box encryption devices, but would like to put some front-end webservers in the cloud. You can also buy whatever connectivity you want from your cage in Equinix, so you can take that traffic and put it over some less expensive Internet connection (Amazon’s bandwidth fees are one of the major reasons customers leave them), or you can get private networking like ethernet or MPLS VPN (an important requirement for enterprise customers who don’t want their traffic to touch the Internet at all).
This is not a completely new thing — Amazon has quietly offered private peering and cross-connects to important customers for some time now, in Equinix. But this now makes cross-connects into a standard option with an established price point, which is likely to have far greater uptake than the one-off deals that Amazon has been doing.
It’s not a fully-automated service — the sign-up is basically used to get Amazon to grant you an authorization so that you can put in an Equinix work order for the cross-connect. But it’s an important step in the right direction. (I’ve previously noted the value of this partnership in a blog post called “Why Cloud IaaS Customers Care About a Colo Option”. Also, for Gartner clients, see my research note “Customers Need Hybrid Cloud Compute IaaS” for a detailed analysis.)
This is good for Equinix, too, for the obvious reasons. For quite some time now, I’ve been evangelizing the importance of carrier-neutral colocation as a “cloud hub”, envisioning a future where these providers facilitate cross-connect infrastructures between cloud users and cloud providers. Widespread adoption of this model would allow an enterprise to say, get a single rack of network equipment at Equinix (or Telecity or Interxion, etc.), and then cross-connect directly to all of their important cloud suppliers. It would drive cross-connect density, differentiation and stickiness at the carrier-neutral colo providers who succeed in being the draw for these ecosystems.
It’s worth noting that this doesn’t grant Amazon a unique capability, though. Just about every other major cloud IaaS provider already offers colocation and private connectivity options. But it’s a crucial step for Amazon towards being suitable for more typical enterprise use cases. (And as a broader long-term ecosystem play, customers may prefer using just one or two “cloud hubs” like an Equinix location for their “cloud backhaul” onto private connectivity, especially if they have gateway devices.)
Verizon buys Terremark
A couple of days ago, Verizon bid to acquire Terremark, for a total equity value of $1.4 billion. My colleague Ted Chamberlin and I are issuing a First Take on the event to Gartner clients; if you’re looking for advice and the official Gartner position, you’ll want to read that. This blog post is just some personal musings on the reasons for the acquisition.
Terremark has three significant businesses — carrier-neutral colocation (the most notable being the highly carrier-dense NAP of the Americas in Miami, which is a major interconnection point), managed hosting, and VMware-based cloud IaaS (principally The Enterprise Cloud). As such, it overlaps entirely with Verizon’s own product lines, which are (non-carrier-neutral) colocation, managed hosting, and VMware-based cloud IaaS (Verizon CaaS). Both companies are vCloud Data Center partners of VMware, and both have vCloud Director-based offerings about to launch.
Verizon’s plan is to continue to run Terremark standalone, as a wholly-owned subsidiary, with the existing management team in place. Verizon might push its own related assets into the subsidiary, as well. Verizon will be keeping the carrier-dense facilities carrier-neutral.
The key question about this acquisition is probably, “Why?”
- While Terremark’s cloud platforms are arguably better than Verizon’s, there’s not such a huge difference that this necessarily makes sense as a technology play.
- In managed hosting, Terremark (or rather, Data Return, its managed hosting acquisition from a few years back) was the beneficiary of customers fleeing Verizon’s decline in Web hosting quality mid-decade, and it has certain cultural similarities to Digex (the Web hoster that Verizon bought to get into the business). It has superior automation but again, not so vastly better that you can point to the technology acquired as significant. It has better service and support, but not at the differentiating level of, say, Rackspace.
- Terremark does have a bunch of data centers in places that Verizon does not, but Verizon hasn’t previously prioritized widespread international expansion. The two big flagship US data centers are nice facilities, but Verizon was already a tenant in the NAP of the Capital Region (reselling to federal customers), and thus able to derive value there without having to buy Terremark.
For Terremark, of course, the reasons are clearer. Mired in debt from data center construction, it has under-invested in the rest of its business of late (I believe VMware invested in Terremark because they needed money to accelerate their cloud plans). As a modest-sized company, it has also had limited sales reach. This represents a nice exit. Having a deep-pocketed sugar daddy of a carrier parent ought to be useful for it — provided that the carrier doesn’t wreck its business doing the things that carriers are wont to do.
Verizon’s sales force is probably the best thing that Terremark will get out of this. No carrier is as aggressive as Verizon at trying to sell cloud IaaS to its customers. It’s a way of changing the conversation — of getting a carrier sales rep in to see the CIO, rather than getting into the argument with the network guy (or worse still, the procurement guy) over penny-a-minute voice services. And while this is resulting in a lot of conversations with customers who aren’t ready to move their entire data centers into the cloud just yet, it’s planting the buzz in the ear — when these customers (particularly in the mid-market) get around to being ready to adopt seriously, Verizon will be on their mind as a potential vendor.
As an acceleration and market share play on Verizon’s part, this potentially makes more sense — Terremark likely has the highest market share in VMware-based self-managed IaaS. But it’s not Verizon’s way of getting into the cloud, despite the press spin on the acquisition — Verizon already has cloud IaaS and its offering, CaaS, is doing pretty decently in the market.
Why cloud IaaS customers care about a colo option
Ben Kepes has raised some perceived issues on the recent Cloud IaaS and Web Hosting Magic Quadrant, on his blog and on Quora. It seems to reflect some confusion that I want to address in public.
Ben seems to think that the Magic Quadrant mixes colocation and cloud IaaS. It doesn’t, not in the least, which is why it doesn’t include plain ol’ colo vendors. However, we always note if a cloud IaaS vendor does not have colocation available, or if they have colo but don’t have a way to cross-connect between equipment in the colo and their cloud.
The reason for this is that a substantial number of our clients need hybrid solutions. They’ve got a server or piece of equipment that can’t be put in the cloud. The most common scenario for this is that many people have big Oracle databases that need big-iron dedicated servers, which they stick in colo (or in managed hosting), and then cross-connect to the Web front-ends and app servers that sit in the cloud. However, there are other examples; for instance, our e-commerce clients sometimes have encryption “black boxes” that only come as hardware, so sit in colo while everything else is in the cloud. Also, we have a ton of clients who will put the bulk of their stuff into the colo — and then augment it with cloud capacity, either as burst capacity for their apps in colo, or for lighter-weight apps that they’re moving into the cloud but which still need fast, direct, secure communication with interrelated back-end systems.
We don’t care in the slightest whether a cloud provider actually owns their own data center, directly provides colocation, has any strategic interest in colocation, or even offers colocation as a formal product. We don’t even care about the quality of the colocation. What we care about is that they have a solution for customers with those hybrid needs. For instance, if Amazon were to go out and partner with Equinix, say, and customers could go colo in the same Equinix data center as Amazon and cross-connect directly into Amazon? Score. Or, for instance, Joyent doesn’t formally offer colocation — but if you need to colocate a piece of gear to complement your cloud solution, they’ll do it. This is purely a question of functionality.
Now, you can argue that plenty of people manage to use pure-play cloud without having anything that they can’t put in the cloud, and that’s true. But it becomes much less of a typical scenario the more you move away from forward-thinking Web-native companies, and towards the mixed application portfolios of mainstream business. It’s especially true among our mid-market clients, who are keenly interested in gradually migrating to cloud as their primary approach to infrastructure; hybrid models are critical to that migration path.
More on Symposium 1-on-1s
My calendar for one-on-ones at Symposium is now totally full, as far as I know, so here’s a look at some updated stats:
(No overlaps above. Things have been disambiguated. This counts only the formal 1-on-1s, and not any other meetings I’m doing here.)
The hosting discussions have a very strong cloud flavor to them, as one might expect. The broad trend from today is that most people talking about cloud infrastructure here are really talking about putting individual production applications on virtualized infrastructure in a managed hosting environment, with at least some degree of capacity flexibility. But at the same time, this is a good thing for service providers — it clearly illustrates that people are comfortable putting highly mission-critical, production applications on shared infrastructure.
Symposium 1-on-1 trends
My 1-on-1 schedule is filling rapidly. (People who didn’t pre-book, you’re in luck: I was only added to the system on Friday or so, so I still have openings, at least as of this writing.)
Trend-watchers might be interested in how these break down so far:
17 on cloud
8 on colocation
4 on hosting
2 on CDN
(A few of these mention two topics, such as ‘colo and cloud’, and are counted twice above.)
Slightly over half the cloud 1-on-1s so far are about cloud strategy in general; the remainder are about infrastructure specifically.
What’s also interesting to me is that the 1-on-1s scheduled prior to on-site registration appear to be more about colocation and hosting, but the on-site 1-on-1 requests are very nearly pure cloud. I’m not sure what that signifies, although I expect the conversations may be illuminating in this regard.
Equinix’s reduced guidance
A lot of Gartner Invest clients are calling to ask about Equinix’s trimming of guidance. I am enormously swamped at the moment, and cannot easily open up timeslots to talk to everyone asking. So I’m posting a short blog entry (short and not very detailed because of Gartner’s rules about how much I can give away on my blog), and the Invest inquiry coordinators are going to try to set up a 30-minute group conference call for everyone with questions about this.
If you haven’t read it, you should read my post on Getting Real on Colocation, from six months ago, when I warned that I did not see this year’s retail colocation market being particularly hot. (Wholesale and leasing are hot. Retail colo is not.)
Equinix has differentiators on the retail colo side, but they are differentiators to only part of the market. If you don’t care about dense interconnect, Equinix is just a high-quality colo facility. I have plenty of regular enterprise clients that like Equinix for their facility quality, and reliably solid operations and customer service, and who are willing to pay a premium for it — but of course increasingly, nobody’s paying a premium for much of anything (in the US) because the economy sucks and everyone is in serious belt-tightening mode. And the generally flat-to-down pricing environment for retail colo also depresses the absolute premium Equinix can command, since the premium has to be relative to the rest of the market in a given city.
Those of you who have talked to me in the past about Switch and Data know that I have always felt that the SDXC sales force was vastly inferior to the Equinix sales force, both in terms of its management and, at least as manifested in actual work with prospects, possibly in terms of the quality of the salespeople themselves. Time is needed for sales force integration and upgrade, and it seems like the earnings call indicated an issue there. Equinix has had a good track record of acquisition integration to date, so I wouldn’t worry too much about this.
The underprediction of churn is more interesting, since Equinix has historically been pretty good about forecasting, and customers who are going to be churning tend to look different from customers who will be staying. Moving out of a data center is a big production, and it drives changes in customer behavior that are observable. My guess is that they expected some mid-sized customers to stay who decided to leave instead — possibly clients who are moving to a wholesale or lease model, and who are just leaving their interconnection in Equinix. (Things like that are good from a revenue-per-square-foot standpoint, but they’re obviously an immediate hit to actual revenues.)
This doesn’t represent a view change for me; I’ve been pessimistic on prospects for retail colocation since last year, even though I still feel that Equinix is the best and most differentiated company in that sector.
Getting real on colocation
Of late, I’ve had a lot of people ask me why my near-term forecast for the colocation market in the United States is so much lower (in many cases, half the growth rate) when compared with those produced by competing analyst firms, Wall Street, and so forth.
Without giving too much information (as you’ll recall, Gartner likes its bloggers to preserve client value by not delving too far into details for things like this), the answer to that comes down to:
- Gartner’s integrated forecasting approach
- Direct insight into end-user buying behavior
- Tracking the entire market, not just the traditional “hot” colo markets
I’ve got the advantage of the fact that Gartner produces forecasts for essentially the full range of IT-related “stuff”. If I’ve got a data center, I’ve got to fill it with stuff. It needs servers, network equipment, and storage, and those things need semiconductors as their components. It’s got to have network connectivity (and that means carrier network equipment for service providers, as well as equipment on the terminating end). It’s got to have software running on those servers. Stuff is a decent proxy for overall data center growth. If people aren’t buying a lot of stuff, their data center footprint isn’t growing. And when they’re buying stuff, it’s important to know if it’s replacing other stuff (freeing up power and space), or if it’s new stuff that’s going to drive footprint or power growth.
Collectively, analysts at Gartner take over a quarter-million client inquiries a year, an awful lot of them related to purchasing decisions of one sort or another. We also do direct primary research in the form of surveys. So when we forecast, we’re not just listening to vendors tell us what they think their demand is; we’re also judging demand from the end-user (buyer) side. My colleagues and I, who collectively cover data center construction, renovation, leasing, and colocation (as well as things like hosting and data center outsourcing), have a pretty good picture of what our clientele are thinking about when it comes to procuring data center space, in addition to the degree to which end-user thinking informs our forecast for the stuff that goes into data centers.
Because of our client base, which includes not only IT buyers dispersed throughout the world, but also a lot of vendors and investors, we watch not just the key colocation markets where folks like Equinix have set up shop, but everywhere anyone does colo, which is getting to be an awful lot of places. If you’re judging the data center market by what’s happening in Equinix Cities or even Savvis Cities, you’re missing a lot.
If I’m going to believe in gigantic growth rates in colocation, I have to believe that one or more of the following things is true:
- IT stuff is growing very quickly, driving space and/or power needs
- Substantially more companies are choosing colo over building or leasing
- Prices are escalating rapidly
- Renewals will be at substantially higher prices than the original contracts
I don’t think, in the general case, that these things are true. (There are places where they can be true, such as with dot-com growth, specific markets where space is tight, and so on.) They’re sufficiently true to drive a colo growth rate that is substantially higher than the general “stuff that goes into data centers” growth rate, but not enough to drive the stratospheric growth rates that other analysts have been talking about.
Note, though, that this is market growth rate. Individual companies may have growth rates far in excess or far below that of the market.
I could be wrong, but pessimism plus the comprehensive approach to forecasting has served me well in the past. I came through the dot-com boom-and-bust with forecasts that turned out to be pretty much on the money, despite the fact that every other analyst firm on the planet was predicting rates of growth enormously higher than mine.
(Also, to my retroactive amusement: Back then, I estimated revenue figures for WorldCom that were a fraction of what they reported, due to my simple inability to make sense of their reported numbers. If you push network traffic, you need carrier equipment, as do the traffic recipients. And traffic goes to desktops and servers, which can be counted, and you can arrive at reasonable estimates of how much bandwidth each uses. And so on. Everything has to add up to a coherent picture, and it simply didn’t. It didn’t help that the folks at WorldCom couldn’t explain the logical discrepancies, either. It just took a lot of years to find out why.)
Q1 2010 inquiry in review
My professional life has gotten even busier — something that I thought was impossible, until I saw how far out my inquiry calendar was being booked. As usual, my blogging has suffered for it, as has my writing output in general. Nearly all of my writing now seems to be done in airports, while waiting for flights.
The things that clients are asking me about have changed in a big way since my Q4 2009 commentary, although this is partially due to an effort to shift some of my workload to other analysts on my team, so I can focus on the stuff that’s cutting edge rather than routine. I’ve been trying to shed as much of the routine colocation and data center leasing inquiry onto other analysts as possible, for instance; reviewing space-and-power contracts isn’t exactly rocket science, and I can get the trends information I need without needing to look at a zillion individual contracts.
Probably the biggest surprise of the quarter is how intensively my CDN inquiry has ramped up. It’s Akamai and more Akamai, for the most part — renewals, new contracts, and almost always, competitive bids. With aggressive new pricing across the board, a willingness to negotiate (and an often-confusing contract structure), and serious prospecting for new business, Akamai is generating a volume of CDN inquiry for me that I’ve never seen before, and I talk to a lot of customers in general. Limelight is in nearly all of these bids, too, by the way, and the competition in general has been very interesting — particularly AT&T. Given Gartner’s client base, my CDN inquiry is highly diversified; I see a tremendous amount of e-commerce, enterprise application acceleration, electronic software delivery and whatnot, in addition to video deals. (I’ve seen as many as 15 CDN deals in a week, lately.)
The application acceleration market in general is seeing some new innovations, especially on the software end (check out vendors like Aptimize), and more ADN offerings will be launched by the major CDN vendors this year. The question of, “Do you really need an ADN, or can you get enough speed with hardware and/or software?” is certainly a key one right now, due to the big delta in price between pure content offload and dynamic acceleration.
By the way, if you have not seen Akamai CEO Paul Sagan’s “Leading through Adversity” talk given at MIT Sloan, you might find it interesting — it’s his personal perspective on the company’s history. (His speech starts around the 5:30 mark, and is followed by open Q&A, although unfortunately the audio cuts out in one of the most interesting bits.)
Most of the rest of my inquiry time is focused around cloud computing inquiries, primarily of a general strategic sort, but also with plenty of near-term adoption of IaaS. Traditional pure-dedicated hosting inquiry, as I mentioned in my last round-up, is pretty much dead — just about every deal has some virtualized utility component, and when it doesn’t, the vendor has to offer some kind of flexible pricing arrangement. Unusually, I’m beginning to take more and more inquiry from traditional data center outsourcing clients who are now looking at shifting their sourcing model. And we’re seeing some sharp regional trends in the evolution of the cloud market that are the subject of an upcoming research note.