Blog Archives
The (temporary) death of premium
We’re in the midst of a fascinating discontinuity in IT purchasing patterns. Even the dot-com crash didn’t cause this kind of disruption. Practically everyone is scrambling to save money immediately, and some organizations are looking at long-term belt-tightening.
The most obvious immediate impact is that buyer tolerance for paying a premium is rapidly diminishing. Quality is suddenly being eyed with greater objectivity, as businesses stop feeling like they need to have “the best” and start actively considering what services and service levels they really need and can cost-justify. It doesn’t matter (enough) if they love their current vendor and feel they’re getting excellent value for the money. It’s about absolute dollar amounts and aligning spend with needs rather than want-to-haves. Risks are being taken in the name of the bottom line.
The turf war of unified computing
The New York Times article about Cisco getting into the server market is a very interesting read, as is Cisco’s own blog post announcing something they call “Unified Computing.”
My colleague Tom Bittman has a thoughtful blog post on the topic, writing: “What is apparent is that the comfortable sandboxes in which different IT vendors sat are shattering.” The trends behind those words demand that computing become a much more flexible, unified fabric.
Tom ruminates on the vendors, but setting aside any opinion of Cisco’s approach (or any other vendor making a unified computing attempt), my mind goes to the people — specifically, the way that a unified approach impacts IT operations personnel, and the way that these engineers can help or hinder adoption of unified data center technologies.
Unified computing — unified management of compute, storage, and network elements — is not just going to shape up to be a clash between vendors. It’s going to become a turf war between systems administrators and network engineers. Today, computing and storage are classically the domain of the former, and the WAN the domain of the latter. The LAN might go either way, but the bigger the organization, the more likely it goes to the network guys. And devices like application delivery controllers fall into an uncomfortable in-between, but in most organizations, one group or the other takes them into their domain. The dispute over devices like that serves as the warning shot in this war, I think. (An ADC is a network element, but it is often closely associated with servers; it is usually an appliance, i.e. a purpose-built server, but its administration more closely resembles a network device than a server.) The more a given technology crosses turf lines, the greater the dispute over who manages it, whose budget it comes out of, etc.
(Yes, I really did make a lolcat just for this post.)
He who controls the entire enchilada — the management platform — is king of the data center. There will be personnel who are empire-builders, seeking to use platform control to assert dominance over more turf. And there will be personnel who try to push away everything that is not already their turf, trying to avoid more work piling up on their plate.
Unification is probably inevitable. We’ve seen this human drama play out once already this past decade — in the VoIP convergence, the WAN guys generally triumphed over the telephony guys. But my personal opinion is that it’s the systems guys, not the network guys, who are most likely to triumph in the unified-platform wars. In most organizations, systems guys significantly outnumber the network guys, and they tend to have a lot more clout, especially as you go up the management chain. Internal politics, and which vendor’s influence wins out, may turn out to shape solution selection as much as, or more than, the actual quality of the solutions themselves.
The Process Trap
Do your processes help or hinder your employees’ ability to deliver great service to customers? When an employee makes an exception to keep a customer happy, is that rewarded or does the employee feel obliged to hide that from his manager? When you design a process, which is more important: Ensuring that nobody can be blamed for a mistake as long as they did what the process said they were supposed to do, or maximizing customer satisfaction? And when a process exception is made, do you have a methodical way to handle it?
Many companies have awesome, dedicated employees who want to do what’s best for the customer. Confronted with the choice between making a customer happy and following the letter of the process, most of them will opt for helping the customer. Many organizations, even the most rigidly bureaucratic ones, celebrate those above-and-beyond efforts.
But there’s an important difference in the way that companies handle these process exceptions. Some companies are good at recognizing that people will exercise autonomy, and that systems should be built to handle exceptions — tracking why they were granted and what was done. Other companies like to pretend that exceptions don’t exist, so when employees go outside the allowed boundaries, they simply do stuff: the exception is never recorded, and nobody knows what was done or how or why. If the issue is ever raised again, if the account team turns over, or if someone wonders why this particular customer has a weird config, nobody will have a clue. And ironically, it tends to be the companies with the most burdensome processes — the ones not only most likely to need exceptions, but also the ones obsessed with a paperwork trail for even the most trivial minutiae — that lack the ability to systematically handle exceptions.
When you build systems, whether human or machine, do you figure out how you’re going to handle the things that will inevitably fall outside your careful design?
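As a purely illustrative sketch (every name and field below is my own invention, not drawn from any particular system), handling exceptions methodically can be as simple as insisting that every deviation gets recorded with who granted it, why, and what was actually done:

```python
# Hypothetical illustration: a minimal record for tracking process
# exceptions, so that "why does this customer have a weird config?"
# still has an answer years later.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class ProcessException:
    customer: str
    granted_by: str        # employee who made the exception
    approved_by: str       # manager who signed off, if anyone
    reason: str            # why the standard process didn't fit
    action_taken: str      # what was actually done instead
    when: datetime = field(default_factory=datetime.utcnow)

exception_log = [
    ProcessException(
        customer="Acme Corp",
        granted_by="jdoe",
        approved_by="asmith",
        reason="Legacy app needs a non-standard firewall rule",
        action_taken="Opened port 8443 on the shared load balancer",
    ),
]
```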
Women, software development, and 10,000 hours
My colleague Thomas Otter is encouraging us to support Ada Lovelace Day — an effort to have bloggers showcase role-model women in technology on March 24th.
His comments led me to muse upon how incredibly male-dominated the IT industry is, especially in the segment that I watch — IT operations. At Gartner’s last data center conference, for instance, women made up a tiny fraction of attendees, and many of the ones I spoke with were either in a “pink role” like marketing at a vendor, or had side-stepped being hands-on by entering an IT management role after an early career in some other discipline.
I’m all for raising awareness of role models, but I lean towards the belief that the lack of women in IT, especially in hands-on roles, stems from something more fundamental — a failure to adequately prepare girls, in childhood, to enter a career in IT. Resources devoted to the girls-and-computers issue have long noted that, compared to a boy, a girl is less likely to get a PC in her bedroom (all the paranoia about letting kids privately use the Internet aside), less likely to be encouraged to start programming, and less likely to get encouragement from either peers or parents. The differences are already stark by high school. And that’s not even counting the fact that the Internet can be a cesspool of misogyny.
Malcolm Gladwell’s Outliers has gotten a lot of press and interest of late. In it, he asserts a general rule that it takes 10,000 hours of practice, within 10 years, to become truly expert at something.
There is a substantial chance that a boy who is interested in computers will get in his 10,000 hours of useful tinkering, possibly even focused programming, before he finishes college. That’s particularly true if he’s got a computer in his bedroom, where he can focus quietly on a project.
Conversely, a girl who initially starts with similar inclinations is vastly less likely to be encouraged down the path that leads to spending two or three hours a night, for ten years, mucking around with a computer.
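(Do the arithmetic: three hours a night, nearly every night, works out to roughly 3 × 365 × 10, or about 11,000 hours over a decade, comfortably past the 10,000-hour mark.)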
I was very fortunate: even though my parents viewed computers as essentially fancy toys for boys, they nonetheless bought me computers (and associated things, like electronics kits), allowed me to have a computer in my bedroom, and tolerated the long stints at the keyboard that let me accumulate 10,000 hours of actual programming time (on top of some admittedly egregious number of hours playing computer games), well within a 10-year timeframe. I majored in computer science and engineering, and I did a lot of recreational programming in college, too, as well as paid systems administration and programming work. But the key thing is: college classes taught me very few practical IT skills. I already had the foundation by the time I got there.
Academic computer science is great for teaching theory, but if you only do enough programming to do well in your classes, you’re simply not spending that much time acquiring expertise. That leads to the phenomenon where companies interview entry-level software development candidates who look pretty similar on paper, but some of whom have already put in 10,000+ hours learning the trade, and some of whom are going to have to spend the first five years of their careers doing so. Given the culture (at least in the US), there’s enormous social pressure on girls and women not to nerd out intensively on their own time. That pressure may produce ostensibly positive outcomes like “a more balanced lifestyle,” but it absolutely hurts many women when they try to enter the IT workforce.
Cloud debate: GUI vs. CLI and API
In the greater blogosphere, as well as amongst the cloud analysts across the various research firms, there’s been an ongoing debate over the question, “Does a cloud have to have an API to be a cloud?”
Going beyond that question, though, there are two camps of cloud users emerging — those who prefer the GUI (control panel) approach to controlling their cloud, and those who prefer command-line interfaces and/or APIs. These two camps can probably be classified as the automated and the automators — users who want easy access to pre-packaged automation, and users who want to write automation of their own.
This distinction has long existed in the systems administration community — the split between those who rely on administrator GUIs to do things, and those who do everything via the command line, edited config files, and their own scripts. But the advent of cloud computing and its associated tools, with their relentless drive towards standardization and automation, is casting these preferences into an increasingly stark light. Moreover, the emerging body of highly sophisticated commercial tools for cloud management (virtual data center orchestration and everything that surrounds it) means that in the future, even the more sophisticated IT operations folks who are normally self-reliant will end up taking advantage of those tools rather than writing everything from scratch. That suggests tools will also follow two paths: tools designed to be customized via GUI, and tools readily decomposable into scriptable components and/or exposing APIs.
I’ve previously asserted that cloud drives a skills shift in IT operations personnel, creating a major skills chasm between those who use tools, and those who write tools.
The emerging cloud infrastructure services seem to be pursuing one of two initial paths: exposure via an API, and thus high scriptability for the knowledgeable (e.g., Amazon Web Services), or a friendly control panel (e.g., Rackspace’s Mosso). While I’d expect that most public clouds will eventually offer both, I expect that both services and do-it-yourself cloud software will tend to emphasize capabilities one way or the other, focusing on either the point-and-click crowd or the systems programmers.
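To make the distinction concrete, here is a minimal sketch of the automator’s path, assuming the open-source boto library for the EC2 API; the credentials and machine image ID are placeholders, and none of this code comes from the services themselves. The point-and-click camp would accomplish the same thing through a control panel.

```python
# Minimal sketch: provisioning a server programmatically via the EC2
# API, using the boto library. Credentials and AMI ID are placeholders.
import boto

conn = boto.connect_ec2(
    aws_access_key_id='YOUR_KEY',
    aws_secret_access_key='YOUR_SECRET',
)

# Launch one small instance from a (placeholder) machine image.
reservation = conn.run_instances('ami-12345678', instance_type='m1.small')
instance = reservation.instances[0]
print("Launched %s, state: %s" % (instance.id, instance.state))

# Because provisioning is just code, it can be looped, scheduled, or
# wired into home-grown automation, which is what the automators want.
```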
(A humorous take on this, via an old Craigslist posting: Keep the shell people alive.)
The culture of service
I recently finished reading Punching In, a book by Alex Frankel. It’s about his experience working as a front-line employee in a variety of companies, from UPS to Apple. The book is focused upon corporate culture, the indoctrination of customer-facing employees, and how such employees influence the customer experience. And that got me thinking.
Culture may be the distinguishing characteristic among managed hosting companies. Managed hosting is a service industry. You make an impression upon the customer with every single touch, from the response to the initial request for information, to the day the customer says good-bye and moves on. (The same is true for the more service-intensive cloud computing and CDN providers, too.)
I had the privilege, more than a decade ago, of spending several years working at DIGEX (back when all-uppercase names were trendy, before the chain of acquisitions that led to the modern Digex, absorbed into Verizon Business). We were a classic ISP of the mid-90s — we offered dial-up, business frame relay and leased lines, and managed hosting. Back then, DIGEX had a very simple statement of differentiation: “We pick up the phone.” Our CEO used to demonstrate it on road shows, dialing our customer service number and promising that a human being would pick up in two rings or less. (To my knowledge, that demo never went wrong.) We wanted to be the premium service company in the space, and a culture of service really did permeate the company — the idea that, as individuals and as an organization, we were going to do whatever it took to make the customer happy.
For those of you who have never worked in a culture like that: It’s awesome. Most of us, I think, take pleasure in making our customers happy; it gives meaning to our work, and creates the feeling that we are not merely chasing the almighty dime. Cultures genuinely built around service idolize doing right by the customer, and they focus on customer satisfaction as the key metric. (That, by the way, means that you’ve got to be careful in picking your customers, so that you only take business that you know that you can service well and still make a profit on.)
You cannot fake great customer service. You have to really believe in it, from the highest levels of executive management down to the grunt who answers the phones. You’ve got to build your company around a set of principles that govern what great service means to you. You have to evaluate and compensate employees accordingly, and you’ve got to offer everyone the latitude to do what’s right for your customers — people have to know that the management chain will back them up and reward them for it.
Importantly, great customer service is not equivalent to heroics. Some companies have cultures, especially in places like IT operations, where certain individuals ride in like knights to save the day. But heroics almost always implies that something has gone wrong — that service hasn’t been what it needed to be. Great service companies, on the other hand, ensure that the little things are right — that routine interactions are pleasant and seamless, that processes and systems help employees to deliver better service, and that everyone is incentivized to cooperate across functions and feel ownership of the customer outcome.
When I talk to hosting companies, I find that many of them claim to value customer service, but their culture and the way they operate clash directly with their ability to deliver great service. They haven’t built service-centric cultures, they haven’t hired people who value service (admittedly tricky: hire smart competent geeks who also like and are good at talking to people), and they aren’t organized and incentivized to deliver great service.
Similarly, CDN vendors have a kind of tragedy of growth. Lots of people love new CDNs because at the outset, there’s an extremely high-touch support model — if you’ve got a problem, you’re probably going to get an engineer on the phone with you right away, a guy who may have written the CDN software or architected the network, who knows everything inside and out and can fix things promptly. As the company grows, the support model has to scale — so the engineers return to the back room and entry-level lightly-technical support folks take their place. It’s a necessity, but that doesn’t mean that customers don’t miss having that kind of front-line expertise.
So ask yourself: What are the features of your corporate culture that create the delivery of great customer service, beyond a generic statement like “customers matter to us”? What do you do to inspire your front-line employees to be insanely awesome?
Peer influence and the use of Magic Quadrants
The New Scientist has an interesting article commenting that the long tail may be less potent than previously postulated — and that peer pressure creates a winner-take-all situation.
I was jotting down this blog post about Gartner clients and the target audience for the Magic Quadrant, and that article got me thinking about the social context for market research and vendor recommendations.
Gartner’s client base runs primarily from mid-sized businesses to large enterprises — our typical client probably has $100 million or more in revenue, but we also serve a lot of technology companies smaller than that. Beyond that subscription base, we also talk to people at conferences; those attendees usually represent a much more diverse set of organizations. But it’s the subscription base that we mostly talk to. (I carry an unusually high inquiry load — I’ll talk to something on the order of 700 clients this year.)
Normally, I’m interested in the comprehensive range of a vendor’s business (at least insofar as it’s relevant to my coverage). When I do an MQ, though, my subscriber base is the lens through which I evaluate companies. While I’m interested in the ways vendors service small businesses at other times, when it’s in the context of an MQ, I care only about a vendor’s relevance to our clients — i.e., the IT buyers who subscribe to Gartner services and who are reading the MQ to figure out what vendors they want to short-list.
Sometimes, when vendors think about our client base, they mistakenly assume that it’s Fortune 1000 and the largest of enterprises. While we serve those companies, we have more than 10,000 client organizations — so obviously, we serve a lot more than giant entities. The customers I talk to day after day may have a single cabinet in colocation — or fifty data centers of their own. (Sometimes both.) They might have one or two servers in managed hosting, or dozens of websites deployed via multi-dozen-server contracts. They might deliver less than a TB of content per month via a CDN, or they might be one of the largest media companies on the planet, with staggering video volumes.
These clients span an enormous range of wants and needs, but they have one significant common denominator: They are the kinds of companies that subscribe to a research and advisory firm, which means they make enough tech purchases to justify the cost of a research contract, and they have a culture which values (or at least bureaucratically mandates) seeking a neutral outside opinion.
That ideal of objectivity, however, often masks something more fundamental that ties back to the article I mentioned: many clients have an insatiable hunger to know “What are companies like mine doing?” They are not necessarily seeking best practice, but common practice. Sometimes they seek the assurance that their non-ideal situation is not dissimilar to that of their peers at similar companies. (Although the opening line of Tolstoy’s Anna Karenina — “Happy families are all alike; every unhappy family is unhappy in its own way” — quite possibly applies to IT departments, too.)
This is also reflected in the fact that customers often have a deep desire to talk to other customers of the same vendor, on an informal and social basis. That hunger is sometimes satisfied by online forums, but the larger the company, the more reluctant they are to discuss their business in public, although they may still share freely in a one-on-one or directly personal context.
IBM was the ultimate winner-take-all company (to use the New Scientist’s phrase) — the company that everyone was buying from, which guaranteed that you were unlikely to get fired for buying IBM. Arguably, it and its brethren still occupy the fat head of the outsourced IT infrastructure market-share curve, while the bazillion hosting companies out there are spread out along the long tail. Even within the narrower confines of pure hosting, which is a highly fragmented market, and despite massive amounts of online information, peer influence has concentrated market share in the hands of relatively few vendors.
To quote the article: “Which leads to a curious puzzle: why, when we have so much information at our fingertips, are we so concerned with what our peers like? Don’t we trust our own judgement? Watts thinks it is partly a cognitive problem. Far from liberating us, the proliferation of choice that modern technology has brought is overwhelming us — making us even more reliant on outside cues to determine what we like.”
So I can sum up: A Magic Quadrant is an outside cue, offering expert opinion that factors in aggregated peer opinion.
User or community competence curve
I was pondering cloud application patterns today, and the following half-formed thought occurred to me: all new technologies go through some kind of competence curve — where a technology sits on that curve determines the aggregate community knowledge of it, and the likely starting point for a new adopter. This might not entirely correlate with its position on the hype cycle.
The community develops a library of design patterns and recipes, over the lifetime of the technology. This is everything from broad wisdom like, “MVC is probably the right pattern for a Web app”, to trivia like “you can use Expando with Google App Engine’s user class to get most of what you want out of traditional session management”. And the base level of competence of the engineers grows in seemingly magical ways — the books get better, Googling answers gets easier, the random question tossed over your shoulder to another team member has a higher chance of being usefully answered. As the technology ages, this aggregate knowledge fades, until years down the road, the next generation of engineers end up repeating the mistakes of the past.
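As an example of the kind of recipe I mean, here is a rough sketch of that Expando trick in App Engine’s Python db API; the Session model and helper are my own illustration, not a canonical recipe, and it assumes a logged-in user:

```python
# Rough sketch: using db.Expando to stash arbitrary per-user session
# attributes without declaring a fixed schema up front.
from google.appengine.ext import db
from google.appengine.api import users

class Session(db.Expando):
    user_id = db.StringProperty()  # declared property; the rest is dynamic

def get_session():
    user = users.get_current_user()  # assumes the user is logged in
    session = Session.all().filter('user_id =', user.user_id()).get()
    if session is None:
        session = Session(user_id=user.user_id())
        session.put()
    return session

session = get_session()
session.cart_items = 3            # Expando lets you attach properties
session.last_page = '/checkout'   # on the fly, like session state
session.put()
```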
So far, we have very little aggregate community knowledge about cloud.
Cloud enables offshoring
The United States is a hotbed of cloud innovation and adoption, but cloud is also going to be a massive enabler of the offshoring of IT operations.
Peter Laird (an architect at Oracle) had an interesting blog post about a month ago on cloud computing mindshare across geographies, analyzing traffic to his blog. And Pew Research’s cloud adoption study indicates that uptake of cloud apps among American consumers is very high. But where the users are doesn’t matter — what matters is where the operations work gets done.
Today, most companies still do most of their IT operations locally (i.e., wherever their enterprise data centers are), even if they’ve sent functions like the help desk offshore. Most companies server-hug — their IT staff is close to wherever the equipment is. But the trend is moving towards remote data centers (especially as the disparity in data center real estate prices between, say, New York City and Salt Lake City grows), and cloud exacerbates that even more. Data centers themselves won’t move offshore, because of network latency, data protection laws, and the like — but a big Internet data center only employs about 50 people.
The future could look very similar to the NaviSite model — local data centers staffed by small local teams who handle the physical hardware, with all the monitoring, remote management, and software development for automation and other back-office functions handled offshore.
Being a hardware wrangler isn’t a high-paying job. In fact, a lot of hosting and Web 2.0 companies hire college students, part-time, to do it. So in making a move to cloud, we seem to be facilitating the further globalization of the IT labor market for the high-paying jobs.
Does architecture matter?
A friend of mine, upon reading my post on the cloud skills shift, commented that he thought that the role of the IT systems architect was actually diminishing in the enterprise. (He’s an architect at a media company.)
His reasoning was simple: hardware has become so cheap that IT managers no longer want to spend staff time on performance tuning, finding just the right configuration, or getting sizing and capacity planning at its most efficient.
Put another way: Is it cheaper to have a senior operations engineer on staff, or is it cheaper to just buy more hardware?
The one-size-fits-all nature of cloud may very well indicate the latter, for organizations for whom cutting-edge technology expertise does not drive competitive advantage.
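A purely illustrative back-of-envelope, in which every number is an assumption of mine rather than data, shows one way to frame the engineer-versus-hardware question:

```python
# Hypothetical numbers only: when does a tuning engineer pay for
# himself, versus simply buying more hardware?
engineer_cost_per_year = 150000  # assumed fully-loaded salary
server_cost_per_year = 3000      # assumed amortized cost per server
efficiency_gain = 0.30           # assumed capacity saved by tuning

fleet_size = 100
hardware_saved = fleet_size * efficiency_gain * server_cost_per_year
print("Hardware saved: $%d vs. engineer: $%d"
      % (hardware_saved, engineer_cost_per_year))

# Break-even fleet size: the point where tuning savings equal salary.
breakeven = engineer_cost_per_year / (efficiency_gain * server_cost_per_year)
print("Break-even fleet size: ~%d servers" % breakeven)
# Under these assumptions, a fleet has to exceed roughly 166 servers
# before the engineer pays for himself; below that, buy hardware.
```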