Monthly Archives: January 2009

Women, software development, and 10,000 hours

My colleague Thomas Otter is encouraging us to support Ada Lovelace day — an effort to have a day of bloggers showcasing role-model women in technology, on March 24th.

His comments led me to muse upon how incredibly male-dominated the IT industry is, especially in the segment that I watch: IT operations. At Gartner’s last data center conference, for instance, women made up a tiny fraction of attendees, and many of the ones I spoke with were in a “pink role” like Marketing with a vendor, or had side-stepped being hands-on by entering an IT management role after an early career spent in some other discipline.

I’m all for raising awareness of role models, but I lean towards the belief that the lack of women in IT, especially in hands-on roles, is something more fundamental — a failure to be adequately prepared, in childhood, to enter a career in IT. Resources devoted to the girls-and-computers issue have long noted that a girl is less likely to get a PC in her bedroom (all the paranoia about letting kids privately use the Internet aside), less likely to be encouraged to start programming, and less likely to get encouragement from either peer or parental sources, compared to a boy. The differences are already stark by the high school level. And that’s not even counting the fact that the Internet can be a cesspool of misogyny.

Malcolm Gladwell’s Outliers has gotten a lot of press and interest of late. In it, he asserts a general rule that it takes 10,000 hours of practice, within 10 years, to become truly expert at something.

There is a substantial chance that a boy who is interested in computers will get in his 10,000 hours of useful tinkering, possibly even focused programming, before he finishes college. That’s particularly true if he’s got a computer in his bedroom, where he can focus quietly on a project.

Conversely, a girl who initially starts with similar inclinations is vastly less likely to be encouraged down the path that leads to spending two or three hours a night, for ten years, mucking around with a computer.

I was very fortunate: even though my parents viewed computers as essentially fancy toys for boys, they nonetheless bought me computers (and associated things, like electronics kits), allowed me to have a computer in my bedroom, and tolerated the long stints at the keyboard that let me accumulate 10,000 hours of actual programming time (on top of some admittedly egregious number of hours playing computer games), well within a 10-year timeframe. I majored in computer science engineering, and in college I did a lot of recreational programming, as well as paid systems administration and programming work, but the key thing is: college classes taught me very few practical IT skills. I already had the foundation by the time I got there.

Academic computer science is great for teaching theory, but if you only do enough programming to do well in your classes, you’re simply not spending that much time acquiring expertise. And that leads to the phenomenon where companies interview entry-level software development candidates who look pretty similar on paper, but some of whom have already put in 10,000+ hours learning the trade, and some of whom are going to have to spend the first five years of their careers doing so. Given the culture (at least in the US), there’s enormous social pressure on girls and women not to nerd out intensively on their own time, and while that might lead to ostensibly positive phrases like “a more balanced lifestyle”, it absolutely hurts many women when they try to enter the IT workforce.

Cloud debate: GUI vs. CLI and API

In the greater blogosphere, as well as amongst the cloud analysts across the various research firms, there’s been an ongoing debate over the question, “Does a cloud have to have an API to be a cloud?”

Going beyond that question, though, there are two camps of cloud users emerging — those who prefer the GUI (control panel) approach to controlling their cloud, and those who prefer command-line interfaces and/or APIs. These two camps can probably be classified into the automated and the automators — those users who want easy access to pre-packaged automation, and those users who want to write automation of their own.

This distinction has long existed in the systems administration community: the split between those who rely on administrator GUIs to do things, and those who do everything via the command line, config-file editing, and their own scripts. But the advent of cloud computing and its associated tools, with their relentless drive toward standardization and automation, is casting these preferences into an increasingly stark light. Moreover, the emerging body of highly sophisticated commercial tools for cloud management (virtual data center orchestration and everything that surrounds it) means that in the future, even the more sophisticated IT operations folks who are normally self-reliant will end up taking advantage of those tools rather than writing everything from scratch. That suggests tools will also follow two paths: tools designed to be customized via GUI, and tools that are readily decomposable into scriptable components and/or provide APIs.
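
To make the distinction concrete, here’s a minimal sketch of the “automator” path: composing a provider’s primitive provisioning calls into custom automation. The `CloudClient` class and its methods are hypothetical stand-ins, not any real provider’s SDK:

```python
# Hypothetical, in-memory stand-in for a cloud provider's API client.
class CloudClient:
    def __init__(self):
        self._next_id = 0
        self.instances = {}  # instance id -> image name

    def launch(self, image):
        """Primitive operation: provision one virtual server."""
        self._next_id += 1
        instance_id = "i-%06d" % self._next_id
        self.instances[instance_id] = image
        return instance_id

    def terminate(self, instance_id):
        """Primitive operation: destroy a virtual server."""
        del self.instances[instance_id]

def scale_web_tier(client, image, desired):
    """The kind of glue an 'automator' writes: grow the fleet to a
    target size out of the provider's primitive launch calls."""
    while len(client.instances) < desired:
        client.launch(image)
    return sorted(client.instances)

client = CloudClient()
fleet = scale_web_tier(client, "webserver-image", 3)
print(len(fleet))
```

The point-and-click crowd gets this sort of behavior pre-packaged behind a button; the automators want the primitives exposed so they can write glue like `scale_web_tier` themselves.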

I’ve previously asserted that cloud drives a skills shift in IT operations personnel, creating a major skills chasm between those who use tools, and those who write tools.

The emerging cloud infrastructure services seem to be pursuing one of two initial paths: exposure via API, and thus high scriptability for the knowledgeable (e.g., Amazon Web Services), or a friendly control panel (e.g., Rackspace’s Mosso). While I’d expect that most public clouds will eventually offer both, I expect that both services and do-it-yourself cloud software will tend to emphasize one or the other, focusing on either the point-and-click crowd or the systems programmers.

(A humorous take on this, via an old Craigslist posting: Keep the shell people alive.)

The value, or not, of hands-on testing

At Gartner, we generally do not do hands-on testing of products, with the exception of some folks who cover consumer devices and the like. And even then, it’s an informal thing. Unlike the trade rags, for instance, we don’t have test labs or other facilities for doing formal testing.

There are a lot of reasons for that, but from my perspective, the analyst role has primarily been to advise IT management. Engineers can and will want to do testing themselves (and better than we can). Also, for the mid-size to enterprise market that we target, any hands-on testing that we might do is relatively meaningless vis-à-vis the complexity of the typical enterprise implementation.

Yet, the self-service nature of cloud computing makes it trivially cheap to do testing of these services, and without needing to involve the vendor. (I find that if I’m paying for a service, I feel free to bother the customer support guys, and find out what that’s really like, without being a nuisance to analyst relations or getting a false impression.) So for me, testing things myself has a kind of magnetic draw; call it the siren song of my inner geek. The question I’m asking myself, given the time consumed, is, “To what end?”

I think the reason I’m trying to do at least a little bit of hands-on with each major cloud is that I feel like I’m grounding hype in reality. I know that in superficially dinking around with these clouds, I’m only lightly skimming the surface of what it’s like to deploy in the environments. But it gives me an idea of how turnkey something is, or not, as well as the level of polish in these initial efforts.

This is a market that is full of incredible hype, and going through the mental exercise of “how would I use this in production” helps me to advise my clients on what is and isn’t ready for prime time. An acquaintance once memorably wrote, when he was disputing some research, that analysts sit at the apex of the hype food-chain, consuming pure hype and excreting little pellets of hype as dense as neutronium. I remember being both amused and deeply offended when I first read that. Of course, I think he was very wrong: whatever we’re fed in marketing tends to be more than overcome by the overwhelming volume of IT buyer inquiry we do, which is full of the ugly reality of actual implementation. But the comment has stuck in my memory as a dark reminder that an analyst needs to be vigilant about not feeding at the hype-trough. Keeping in touch by being at least a little hands-on helps to inoculate against that.

However, I realized, after talking to a cloud vendor client the other day, that I probably should not waste their phone inquiry time talking about hands-on impressions. That’s better left to this blog, or email, or their customers and the geek blogosphere. My direct impressions are only meaningfully relevant to the extent that what I experience hands-on contradicts marketing messages, or indicates a misalignment between strategy and implementation, or otherwise touches something higher-level. My job, as an analyst, is to not get lost in the weeds.

Nevertheless, there’s simply nothing like gaining a direct feel for something, and I am, unfortunately, way behind in my testing; I’ve got more demos accumulating than I’ve had time to try out, and the longer it takes to set something up, the more it lags in my mental queue.

Touring Amazon’s management console

The newly-released beta of Amazon’s management console is reasonably friendly, but it is not going to let your grandma run her own data center.

I took a bit of a tour today. I’m running Firefox 3 on a Windows laptop, but everything else I’m doing out of a Unix shell — I have Linux and Mac OS X servers at home. I already had AWS stuff set up prior to trying this out; I’ve previously used RightScale to get a Web interface to AWS.

The EC2 dashboard starts with a big friendly “Launch instances” button. Click it, and it takes you to a three-tab window for picking an AMI (your server image). There’s a tab for Amazon’s images, one for your own, and one for the community’s (which includes a search function). After playing around with the search a bit (and wishing that every community image came with an actual blurb of what it is), and not finding a Django image that I wanted to use, I decided to install Amazon’s Ruby on Rails stack.

On further experience, the “Select” buttons on this set of tabs seem to have odd quirks: sometimes they’re grayed out and unclickable; sometimes you’ll click one and it will go gray, but the little “Loading, please wait” box that normally precedes the next tab never appears, leaving you stuck until you cancel the window and try again.

Once you select an image, you’re prompted to select how many instances you want to launch, your instance type, key pair (necessary to SSH into your server), and a security group (firewall config). More twiddly bits, like the availability zone, are hidden in advanced options. Pick your options, click “Launch”, and you’re good to go.
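
In API terms, everything the launch window collects maps onto a single request. A rough sketch, with field names loosely modeled on EC2’s RunInstances query parameters (the AMI ID, key pair name, and zone below are made-up examples, and this only builds a parameter dict rather than calling anything):

```python
def build_run_instances_request(ami_id, count=1, instance_type="m1.small",
                                key_name=None, security_groups=("default",),
                                availability_zone=None):
    """Assemble the launch window's choices into a RunInstances-style
    parameter dict (illustrative only; not a working client)."""
    params = {
        "ImageId": ami_id,
        "MinCount": count,
        "MaxCount": count,
        "InstanceType": instance_type,
    }
    if key_name:
        params["KeyName"] = key_name  # needed to SSH into the server later
    for i, group in enumerate(security_groups, start=1):
        params["SecurityGroup.%d" % i] = group  # firewall config
    if availability_zone:  # one of the "advanced options" twiddly bits
        params["Placement.AvailabilityZone"] = availability_zone
    return params

request = build_run_instances_request("ami-12345678", key_name="my-keypair",
                                      availability_zone="us-east-1b")
print(request["InstanceType"])
```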

From the launch window, the firewall options default to having a handful of relevant ports (like SSH, webserver, MySQL) open to the world. You can’t make the rules any more granular there; you have to use the Security Group config panel to add a custom rule. I wish the defaults were slightly stricter, like limiting the MySQL port to Amazon’s back-end.
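
For comparison, a custom rule added via the Security Group config panel boils down to something like an AuthorizeSecurityGroupIngress call. Another parameter-assembly sketch, with a made-up internal CIDR standing in for a stricter source range:

```python
def authorize_ingress(group_name, protocol, port, cidr="0.0.0.0/0"):
    """Build an AuthorizeSecurityGroupIngress-style parameter dict
    (illustrative only; not a working client)."""
    return {
        "GroupName": group_name,
        "IpProtocol": protocol,
        "FromPort": port,
        "ToPort": port,
        "CidrIp": cidr,
    }

# The default-style rule: MySQL open to the entire Internet.
wide_open = authorize_ingress("default", "tcp", 3306)
# A stricter custom rule: MySQL reachable only from an internal range
# (10.0.0.0/8 is a made-up example, not Amazon's actual back-end range).
strict = authorize_ingress("default", "tcp", 3306, cidr="10.0.0.0/8")
print(strict["CidrIp"])
```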

Next, I went to create an EBS volume for user data. This, too, is simple, although initially I did something stupid, failing to notice that my instance had launched in us-east-1b. (Your EBS volume must reside in the same availability zone as your instance, in order for the instance to mount it.)

That’s when I found the next interface quirk — the second time I went to create an EBS volume, the interface continued to insist for fifteen minutes that it was still creating the volume. Normally there’s a very nice Ajax bit that automatically updates the interface when it’s done, but this time, even clicking around the whole management console and trying to come back wouldn’t get it to update the status and thus allow me to attach it to my instance. I had to close out the Firefox tab, and relaunch the console.

Then I remembered that I’d created the default key pair via RightScale, and I couldn’t recall where I’d stashed the PEM credentials. So that led to a round of creating a new key pair via the management console (very easy), and having to terminate and launch a new instance using the new key pair (subject to the previously mentioned interface quirks).

The same tendency of the interface to get into an indeterminate state also seems to affect other things, like the console “Output” button for instances: you get a blank screen rather than the console dump.

That all dealt with, I log into my server via SSH, don’t see the EBS volume mounted, and remember that I need to actually make a filesystem and explicitly mount it. Creating an EBS volume essentially just allocates you an abstraction on Amazon’s SAN. This leads me to hunt for EBS documentation, which leads to the reminder that access to docs on AWS is terrible. The search function on the site doesn’t index articles, and there are far too many articles to click through the list looking for what you want. A Google search is really the only reasonable way to find things.

All that aside, once I do that, I have an entirely functional server. I terminate the instance, check out my account, see that this little experiment has cost me 33 cents, and feel reasonably satisfied with the world.
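
For the curious, the bill is easy to reconstruct. Using early-2009 list prices as I recall them (treat both rates, and my guesses at usage, as assumptions):

```python
SMALL_INSTANCE_HOURLY = 0.10  # $/hour for an m1.small (assumed 2009 price)
EBS_PER_GB_MONTH = 0.10       # $/GB-month of provisioned EBS (assumed)

hours = 3     # a few hours of tinkering
ebs_gb = 10   # a small EBS volume...
ebs_days = 1  # ...kept provisioned for about a day

cost = (hours * SMALL_INSTANCE_HOURLY
        + ebs_gb * EBS_PER_GB_MONTH * ebs_days / 30)
print(round(cost, 2))
```

A few hours of a small instance plus a briefly provisioned volume lands right around that 33-cent mark.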

News round-up

A handful of quick news-ish takes:

Amazon has released the beta of its EC2 management console. This brings point-and-click friendliness to Amazon’s cloud infrastructure service. A quick glance through the interface makes it clear that effort was made to make it easy to use, beginning with big colorful buttons. My expectation is that many of the users who might otherwise have gone to RightScale et al. to get an easy-to-use GUI will now just stick with Amazon’s own console. Most of those users would only have used the free service anyway, but there’s probably a percentage that would otherwise have been upsold who will now stick with what Amazon has.

Verizon is courting CDN customers with the “Partner Port Program”. It sounds like this is a “buy transit from us over a direct peer” service — essentially becoming explicit about settlement-based content peering with content owners and CDNs. I imagine Verizon is seeing plenty of content dumped onto its network by low-cost transit providers like Level 3 and Cogent; by publicly offering lower prices and encouraging content providers to seek paid peering with it, it can grab some revenue and improve performance for its broadband users.

Scott Cleland blogged about the “open Internet” panel at CES. To sum up, he seems to think that the conversation is now being dominated by the commercially-minded proponents. That would certainly seem to be in line with Verizon’s move, which essentially implies that they’re resigning themselves to the current peering ecosystem and are going to compete directly for traffic rather than whining that the system is unfair (always disingenuous, given ILEC and MSO complicity in creating the current circumstances of that ecosystem).

I view arrangements that are reasonable from a financial and engineering standpoint, and that do not seek to discriminate based on the nature of the actual content, as the most positive interpretation of network neutrality. And so I’ll conclude by noting that I heard an interesting briefing today from Anagran, a hardware vendor offering flow-based traffic management (i.e., it doesn’t care what you’re doing, it’s just managing congestion). It’s being positioned as an alternative or supplement to Sandvine and the like, offering a way to try to keep P2P traffic manageable without having to do deep-packet inspection (and thus explicit discrimination).

Open source and behavioral economics

People occasionally ask me why busy, highly-skilled, highly-compensated programmers freely donate their time to open-source projects. In the past, I’ve nattered about the satisfaction of sharing with the community, the pleasure of programming as a hobby even if you do it for your day job, the “just make it work” attitude that often prevails among techies, altruism, idealism, the musings of people like Linus Torvalds, or research like the Lakhani and Wolf MIT/BCG study of developer motivation. (Speaking for myself, I code to solve problems, I am naturally inclined to share what I do with others, and I derive pleasure from having it be useful to others. The times I’ve written code for a living, I’ve always been lucky to have employers who were willing to let me open-source anything that wasn’t company-specific.)

But a chapter in Dan Ariely’s book Predictably Irrational got me thinking about a simpler way to explain it: Programmers contribute to free software projects for reasons that are similar to the reasons why lawyers do pro bono work.

The book posits that exchanges follow either social norms or market norms. If it’s a market exchange, we think in terms of money. If it’s a social exchange, we think in terms of human benefits. It’s the difference between a gift and a payment. Mentioning money (“a gift worth $10”) immediately transforms something into a market exchange. The book cites the example of lawyers being asked to do pro bono work: offered $30/hour to help needy clients, they refused, but asked to do it for free, there were plenty of takers. The $30/hour was viewed through the mental lens of a market exchange, compared to their usual fees and deemed not worthwhile. Doing it for free, on the other hand, was viewed as a social exchange, evaluated on an entirely separate basis from its dollar value.

Contributing to free software follows the norms of the social exchange. The normative difference is also interesting in light of Richard Stallman’s assertion of the non-equivalence of “free software” and “open source”, and some of the philosophical debates that simmer in the background of the open-source movement; Stallman’s “free software” philosophy is intricately tied into the social community of software development.

The book also notes that problems occur when one tries to mix social norms and market norms. For instance, if you ask a friend to help you move, but he’s volunteering his time alongside paid commercial movers, that’s generally going to be seen as socially unacceptable. Commercial open-source projects conflate these two things all the time, which may go far toward explaining why few commercially-started projects gain much of a committer base beyond the core organizations and developers who care and are paid to do so (either directly, or indirectly via an end-user organization that makes heavy use of that software).

(Edit: I just discovered that Ariely has actually done an interview on open source, in quite some depth.)

Predictably Irrational

If you deal with pricing, or for that matter, marketing or sales in general, and you’re going to read one related book this year, read Predictably Irrational: The Hidden Forces That Shape Our Decisions, by Dan Ariely. (I mentioned an article by him in a previous post on the impact of transparent pricing for CDNs, and I’ve finally had time to read his book.)

The book deals with behavioral economics, which can be summed up as the science of the way we perceive value and make economic decisions. It’s an entertaining read, describing a variety of experiments, their outcomes, and the broader conclusions that can be drawn. The book does an excellent job of demonstrating that we do not make such decisions in a fully rational manner, even when we think we are — but because there’s a predictable pattern to this irrationality, you can market and sell accordingly.

Two thoughts, among many others that I’m mulling over as a result:

Ariely asserts that people don’t know what new things ought to cost; they have no basis for comparison. Thus, establishing a basis for comparison creates the sense of value, and can be used to manipulate people’s mental pricing baselines and influence their decisions. For instance, given a thing, an inferior but cheaper version of that thing, and some other less-similar thing, people will generally choose the thing. That’s relevant when you think about the way people compare CDN services, especially first-time enterprise buyers.

Ariely also shows that given a useful but brand-new thing, people might not know whether it’s a good value and thus may choose not to buy it — but establish a comparison in the form of a bigger but much more expensive form of that thing, and people will see the original as a good value and buy it. This is hugely relevant in the emerging cloud computing market, where people aren’t yet certain what the billing units should be and what they should cost.

Relating this to my usual topics of interest: Amazon has essentially established a transparent baseline in both the cloud computing and CDN markets, with clearly-articulated, readily-available pricing, and as a result, they have implicit control of the conversation around pricing. Broadly, any vendor who puts a public stake in the ground on prices is going to exert influence over a customer’s perception of not only their value, but every other comparable vendor.

Sun buys Q-Layer

Today, Sun announced the acquisition of Q-Layer, a Belgium-based start-up of about two dozen people. Q-Layer is a virtualization orchestration vendor, with a focus that seems similar to 3Tera’s. For an acquisition parallel, look at Dunes Technologies, acquired by VMware in late 2007.

When people say “orchestrate virtual resources”, usually what they mean is, “make software handle the messy background details of the infrastructure, automatically, while allowing me to navigate through a point-and-click GUI to provision and manage my virtualized data center resources”. In other words, they’ve got a GUI that can be exposed to users, who can create, configure, manage, and destroy virtual servers (and related equipment) at whim.

Like 3Tera, Q-Layer targets the hosting market — notably, Q-Layer’s founders include folks from Dedigate, a small European managed hosting provider that was acquired by Terremark back in 2005. Unlike 3Tera, which has focused on Linux, Q-Layer has made the effort to support Sun technologies, like Solaris Containers. However, Q-Layer has virtually no market traction; it seems to have signed some small, country-specific managed hosting providers in Europe, who are offering a VMware-based Q-Layer solution. (3Tera’s notable hosting customers include Layered Technologies and 1-800-HOSTING, but despite relatively few hosting partners, it has done a good job of creating market awareness.)

Hosters who want to offer virtual data center hosting (“VDC hosting”) — blocks of capacity that customers can carve up into servers at whim — can buy an off-the-shelf orchestration solution, or, if they’re brave and sufficiently skilled, they can write their own (as Terremark has). It’s not a big market yet, but orchestration also has value for large enterprises deploying big virtualization environments and who would like to delegate the management down through the organization.

Sun’s various cloud ambitions are being expanded with this acquisition. Sun expects to derive near-term benefits from incorporating Q-Layer’s technologies into its product plans this year.

On a lighter note, last week I had dinner with an old friend I hadn’t seen for some years. She’s a former Sun employee, and we were reminiscing about Sun’s heyday; I was Sun’s second-largest customer back in those days (ironically, only Enron bought more stuff from them). She joked that her Sun stock options had been priced so egregiously high that Sun would have had to invent teleportation for her to ever see a return on them. Then she stopped and said, “Of course, even if Sun did invent teleportation, they would still somehow have failed to make money from it. They’d probably have given it away for free to spite Microsoft.”

And there’s the rub: Sun is doing many interesting and cool things with technology, but seems to have a persistent problem actually generating meaningful revenue from those ideas. So the Q-Layer acquisition is reasonably logical and I know where I can expect it to fit into Sun’s product line, but I’m still feeling a bit like the plan is:

1. Buy company.
2. …
3. Profit!

The culture of service

I recently finished reading Punching In, a book by Alex Frankel. It’s about his experience working as a front-line employee in a variety of companies, from UPS to Apple. The book is focused upon corporate culture, the indoctrination of customer-facing employees, and how such employees influence the customer experience. And that got me thinking.

Culture may be the distinguishing characteristic among managed hosting companies. Managed hosting is a service industry. You make an impression upon the customer with every single touch, from the response to the initial request for information, to the day the customer says good-bye and moves on. (The same is true for the more service-intensive cloud computing and CDN providers, too.)

I had the privilege, more than a decade ago, of spending several years working at DIGEX (back when all-uppercase names were trendy, before the chain of acquisitions that led to the modern Digex, absorbed into Verizon Business). We were a classic ISP of the mid-90s: we offered dial-up, business frame relay and leased lines, and managed hosting. Back then, DIGEX had a very simple statement of differentiation: “We pick up the phone.” Our CEO’s road show included dialing our customer service number live, promising that a human being would pick up in two rings or less. (To my knowledge, that demo never went wrong.) We wanted to be the premium service company in the space, and a culture of service really did permeate the company: the idea that, as individuals and as an organization, we were going to do whatever it took to make the customer happy.

For those of you who have never worked in a culture like that: It’s awesome. Most of us, I think, take pleasure in making our customers happy; it gives meaning to our work, and creates the feeling that we are not merely chasing the almighty dime. Cultures genuinely built around service idolize doing right by the customer, and they focus on customer satisfaction as the key metric. (That, by the way, means that you’ve got to be careful in picking your customers, so that you only take business that you know that you can service well and still make a profit on.)

You cannot fake great customer service. You have to really believe in it, from the highest levels of executive management down to the grunt who answers the phones. You’ve got to build your company around a set of principles that govern what great service means to you. You have to evaluate and compensate employees accordingly, and you’ve got to offer everyone the latitude to do what’s right for your customers — people have to know that the management chain will back them up and reward them for it.

Importantly, great customer service is not equivalent to heroics. Some companies have cultures, especially in places like IT operations, where certain individuals ride in like knights to save the day. But heroics almost always implies that something has gone wrong — that service hasn’t been what it needed to be. Great service companies, on the other hand, ensure that the little things are right — that routine interactions are pleasant and seamless, that processes and systems help employees to deliver better service, and that everyone is incentivized to cooperate across functions and feel ownership of the customer outcome.

When I talk to hosting companies, I find that many of them claim to value customer service, but their culture and the way they operate clash directly with their ability to deliver great service. They haven’t built service-centric cultures, they haven’t hired people who value service (admittedly tricky: hire smart competent geeks who also like and are good at talking to people), and they aren’t organized and incentivized to deliver great service.

Similarly, CDN vendors face a kind of tragedy of growth. Lots of people love new CDNs because at the outset, there’s an extremely high-touch support model: if you’ve got a problem, you’re probably going to get an engineer on the phone with you right away, someone who may have written the CDN software or architected the network, who knows everything inside and out and can fix things promptly. As the company grows, the support model has to scale, so the engineers return to the back room and entry-level, lightly technical support folks take their place. It’s a necessity, but that doesn’t mean customers don’t miss that kind of front-line expertise.

So ask yourself: What are the features of your corporate culture that create the delivery of great customer service, beyond a generic statement like “customers matter to us”? What do you do to inspire your front-line employees to be insanely awesome?

New year, new companies

I thought I’d start off the New Year with a FAQ: “How do I get to talk to you about what my company is offering?” This is closely related to one of the questions that I get most frequently at conferences and networking events: “How do I get on an analyst’s radar screen?”

The answer to these questions is pretty straightforward: Make a briefing request. A big analyst shop like my employer, Gartner, has a formal process that lets any vendor request to brief analysts. We, at least, will take briefings, without prejudice, from clients and non-clients alike. (It’s just that clients are entitled to advice and feedback; non-clients are not, although we’ll generally engage in dialogs if the non-client has something interesting to say.)

To convince an analyst to take a first-time briefing, though, you need to have an elevator pitch that makes an analyst say, “Hey, this is relevant to my coverage and this is a vendor that’s doing something interesting.” Alternatively, you need to have won some high-profile deals or otherwise show evidence that you’re going to be making waves in the market. Start-ups often fail to articulate what the compelling value proposition is, or otherwise demonstrate that they’re important to know about, which leads analysts to decide that it’s not yet worth taking the time to listen to a briefing.

Analysts love cool new vendors. Talking to smart people who are doing cool things and have great insights into their markets is one of the best parts of being an analyst.

If you’re an innovative or rapidly-growing provider in my coverage space, and we’ve never spoken before, I encourage you to make a briefing request. I’m particularly interested in cloud infrastructure start-ups, at the moment.
