Category Archives: Infrastructure
Cotendo and AT&T
A lot of Gartner Invest clients are calling to ask about the AT&T deal with Cotendo. Since I’m swamped, I’m doing a blog post, and the inquiry coordinators will try to set up a single conference call.
I’ve known about this deal for a long time, but I’ve been respecting AT&T and Cotendo’s request to keep it quiet, despite the fact that it’s not under formal nondisclosure. Now that the deal has been noted in my recently-published Who’s Who in Content Delivery Networks, 2010, someone else has blogged about it publicly, and I’m being asked about it explicitly, so I’m going to go ahead and talk about it on my blog.
There are now three vendors in the market who claim true dynamic site acceleration offerings: Akamai, CDNetworks, and Cotendo. (Limelight’s recently-announced accelerator offerings are incremental evolutions of LimelightSITE.) CDNetworks has not gained any significant market traction with their offering since introducing it six months ago, whereas these days, I routinely see customers bid Cotendo along with Akamai.
However, to understand the potential impact of Cotendo, one has to understand what they actually deliver. It’s important to note that while Cotendo positions its service identically to Akamai’s, even calling it Dynamic Site Accelerator (just like Akamai brands it), it is not, from a technical perspective, like Akamai’s DSA.
Cotendo’s DSA offering, at present, consists of TCP multiplexing and connection pooling from their edge servers. Both of these technologies are common features in application delivery controllers (or, in more colloquial terms, load balancers, e.g., F5’s LTM, Citrix’s NetScaler, etc.). If you’re not familiar with the benefits of either, F5’s DevCentral provides good articles on multiplexing and persistent connections, as does Network World (2001, but still relevant).
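To make the mechanism concrete, here’s a minimal Python sketch of the connection-pooling idea (the origin hostname is a placeholder, and this illustrates the concept, not how any of these products is actually built): many short-lived client requests get served over a small set of persistent connections to the origin, which is roughly what an ADC (or Cotendo’s edge) does at far larger scale and with far more sophistication.

```python
import http.client
import queue

ORIGIN = "origin.example.com"  # hypothetical origin host
POOL_SIZE = 4                  # a handful of persistent origin connections

# Pre-open a small pool of keep-alive (HTTP/1.1 persistent) connections.
pool = queue.Queue()
for _ in range(POOL_SIZE):
    pool.put(http.client.HTTPConnection(ORIGIN, timeout=10))

def fetch(path):
    """Serve one client request over a borrowed, already-open origin
    connection, so many client requests share a few origin TCP connections
    instead of each paying TCP (and possibly TLS) setup to the origin."""
    conn = pool.get()                     # borrow a pooled connection
    try:
        conn.request("GET", path)
        resp = conn.getresponse()
        return resp.status, resp.read()   # read fully so the connection can be reused
    except (http.client.HTTPException, OSError):
        conn.close()                      # broken connection: replace it
        conn = http.client.HTTPConnection(ORIGIN, timeout=10)
        raise
    finally:
        pool.put(conn)                    # return the connection to the pool
```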
By contrast, Akamai’s DSA offering — the key technology acquired when they bought Netli — is sort of like a combination of functionality from an ADC and a WAN optimization controller (WOC, like Riverbed), offered as a service in the cloud (in the old-fashioned meaning, i.e., “somewhere on the Internet”). In DSA, Akamai’s edge servers essentially behave like bidirectional WOCs, speaking an optimized protocol between them; it’s coupled with Akamai’s other acceleration technologies, including pre-fetching, compression, and so on.
Engineering carrier-scale WOC functionality is hard. Netli succeeded. There have been other successes in the hardware market — for instance, Ipanema, which targets carriers. Both made significant sacrifices in the complexity of functionality in order to achieve scale. Enterprise WOC vendors have had a hard time scaling past a few dozen sites, and the bar is still pretty low (at Gartner, we use “scale to over a hundred sites” in our vendor evaluations, for instance). A new CDN entrant offering WOC-style, Akamai/Netli-style functionality would be a big deal — but that’s not what Cotendo actually has.
Akamai’s DSA service competes to some extent with unidirectional ADC-based acceleration (F5’s WebAccelerator, for instance), but there are definitely benefits to middle-mile bidirectional acceleration, resulting in a stacked benefit if you use an ADC plus Akamai; moreover, this kind of acceleration is not a baseline feature in ADCs. Cotendo overlaps directly with baseline ADC functionality. That means the two companies have distinctly different services, serving different target audiences.
Cotendo is offering pretty good performance in the places where they have footprint — enough to be competitive. As with all CDN performance, customers care about “good enough” rather than “the very best”, but on transactional sites, there’s usually a decent return curve for more performance before you finally hit “fast enough that faster makes no difference”. This is still dependent upon the context, though. Electronics shoppers, for instance, are much less patient than people shopping for air travel. And the baseline site performance (i.e., your application response time in general) and the way the site is constructed will also determine how much site acceleration will get you in terms of ROI.
The deal with AT&T is significant for the same reason that it was significant for Akamai to have signed Verizon and IBM as resellers years ago — because larger companies can be much more comfortable buying on the paper of a big vendor they already have a relationship with. And since AT&T’s CDN wins are often add-ons to hosting deals — where you typically have a complex transactional site — selling a dynamic acceleration service over a pure static caching one is definitely preferable. AT&T has tried to get around that deficiency in the past by selling multi-data-center and managed F5 WebAccelerator solutions, but those solutions aren’t as attractive. This partnership benefits both companies, but it’s not a game-changer in the CDN industry.
Since everyone’s asking: no, I don’t see Cotendo price-pressuring Akamai at the moment. (I see as many as 15 CDN deals a week, so I feel very comfortable with my state of pricing knowledge, especially in this transactional space.) What I do see is the incredibly depressed price of static object delivery affecting what anyone can realistically charge for dynamic acceleration, because the price/performance delta gets too large. I certainly do see Cotendo winning smaller deals, but importantly, those wins aren’t coming just from undercutting on price; for instance, my clients cite the user-friendly, attractive portal as a reason to choose Cotendo over Akamai.
I have plenty more to say on this subject, but I’ve already skimmed the edge of how much I can say in my blog vs. when I should be writing research or answering inquiry, so: If you’re a client, please feel free to make an inquiry.
Interesting side note: Since publishing my Who’s Who note a week and a half ago, my CDN inquiry from customers has suddenly started to include a lot more multi-vendor inquiry about the smaller vendors. That probably says that other CDNs could still do a lot to build brand awareness. (SEO is key to this, these days.)
Equinix’s reduced guidance
A lot of Gartner Invest clients are calling to ask about Equinix’s trimming of guidance. I am enormously swamped at the moment, and cannot easily open up timeslots to talk to everyone asking. So I’m posting a short blog entry (short and not very detailed because of Gartner’s rules about how much I can give away on my blog), and the Invest inquiry coordinators are going to try to set up a 30-minute group conference call for everyone with questions about this.
If you haven’t read it, you should read my post on Getting Real on Colocation, from six months ago, when I warned that I did not see this year’s retail colocation market being particularly hot. (Wholesale and leasing are hot. Retail colo is not.)
Equinix has differentiators on the retail colo side, but they are differentiators to only part of the market. If you don’t care about dense interconnect, Equinix is just a high-quality colo facility. I have plenty of regular enterprise clients who like Equinix for its facility quality and reliably solid operations and customer service, and who are willing to pay a premium for it — but of course increasingly, nobody’s paying a premium for much of anything (in the US), because the economy sucks and everyone is in serious belt-tightening mode. And the generally flat-to-down pricing environment for retail colo also depresses the absolute premium Equinix can command, since the premium has to be relative to the rest of the market in a given city.
Those of you who have talked to me in the past about Switch and Data know that I have always felt that the SDXC sales force was vastly inferior to the Equinix sales force, both in terms of its management and, at least as manifested in actual work with prospects, possibly in terms of the quality of the salespeople themselves. Sales force integration and upgrade takes time, and the earnings call seems to indicate an issue there. Equinix has had a good track record of acquisition integration to date, so I wouldn’t worry too much about this.
The underprediction of churn is more interesting, since Equinix has historically been pretty good about forecasting, and customers who are going to be churning tend to look different from customers who will be staying. Moving out of a data center is a big production, and it drives changes in customer behavior that are observable. My guess is that they expected some mid-sized customers to stay who decided to leave instead — possibly clients who are moving to a wholesale or lease model, and who are just leaving their interconnection in Equinix. (Things like that are good from a revenue-per-square-foot standpoint, but they’re obviously an immediate hit to actual revenues.)
This doesn’t represent a view change for me; I’ve been pessimistic on prospects for retail colocation since last year, even though I still feel that Equinix is the best and most differentiated company in that sector.
Amazon introduces “micro instances” on EC2
Amazon has introduced a new type of EC2 instance, called a Micro Instance. These start at $0.02/hour for Linux and $0.03/hour for Windows, come with 613 MB of allocated RAM, a low allocation of CPU, and a limited ability to burst CPU. They have no local storage by default, requiring you to boot from EBS.
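For the curious, launching one looks the same as launching any other instance type. Here’s a rough sketch using the boto library (the AMI ID and key name are placeholders, and the AMI needs to be EBS-backed, since micro instances boot from EBS):

```python
import boto

# Placeholders: substitute a real EBS-backed AMI and your own keypair.
AMI_ID = "ami-00000000"
KEY_NAME = "my-keypair"

# Credentials are read from the usual boto configuration/environment.
conn = boto.connect_ec2()
reservation = conn.run_instances(
    AMI_ID,
    instance_type="t1.micro",   # the new micro instance type
    key_name=KEY_NAME,
)
instance = reservation.instances[0]
print(instance.id, instance.state)
```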
613 MB is not a lot of RAM, since operating systems can be RAM pigs if you don’t pay attention to what you’re running in your baseline OS image. My guess is that people who use micro instances will want to use a JeOS stack if possible. I’d be suggesting FastScale as the tool for producing slimmed-down stacks, except they were acquired some months ago and folded, along with EMC Ionix, into VMware’s vCenter Configuration Manager; I don’t know whether they still have anything that builds EC2 stacks.
Amazon has suggested that micro instances can be used for small tasks — monitoring, cron jobs, DNS, and other such things. To me, though, smaller instances are perfect for a lot of enterprise applications. Tons of enterprise apps are “paperwork apps” — fill in a form, kick off some process, be able to report on it later. They get very little traffic, and consolidating the myriad tertiary low-volume applications is one of the things that often drives the most attractive virtualization consolidation ratios. (People are reluctant to run multiple apps on a single OS instance, especially on Windows, due to stability issues, so being able to give each app its own VM is a popular solution.) I read micro instances as being part of Amazon’s play towards being more attractive to the enterprise, since tiny tertiary apps are a major use case for initial migration to the cloud. Smaller instances are also potentially attractive to the test/dev use case, though somewhat less so, since more speed can mean more efficient developers (fewer compiling excuses).
This is very price-competitive with the low end of Rackspace’s Cloud Servers ($0.015/hour for 256 MB and $0.03/hour for 512 MB of RAM, Linux only). Rackspace wins on pure ease of use if you’re just someone who needs a single virtual server, but Amazon’s much broader feature set is likely to win over those who are looking for more than a VPS on steroids. GoGrid has no competitive offering in this range. Terremark can be competitive in this space due to its ability to oversubscribe and do bursting, making its cloud very suitable for smaller-scale enterprise apps. And VirtuStream can also offer smaller allocations tailored to small-scale enterprise apps. So Amazon’s by no means alone in this segment — but it’s a positive move that rounds out their cloud offerings.
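To put those hourly rates in perspective, here’s a quick back-of-the-envelope monthly comparison (list prices only, using roughly 730 hours in a month; EBS storage, bandwidth, and the like are extra):

```python
HOURS_PER_MONTH = 730  # roughly 24 * 365 / 12

offerings = {
    "EC2 micro, Linux":        0.02,
    "EC2 micro, Windows":      0.03,
    "Rackspace 256 MB, Linux": 0.015,
    "Rackspace 512 MB, Linux": 0.03,
}

for name, hourly in offerings.items():
    monthly = hourly * HOURS_PER_MONTH
    print(f"{name:26s} ${hourly:.3f}/hr  ~${monthly:.2f}/month")
```

At these sizes you’re talking about roughly $11 to $22 a month per always-on server before storage and bandwidth, which is why the comparison is really about features and ease of use rather than price.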
Shooting sparrows with a (software) cannon
Many businesses fail to save significant amounts of money when they migrate to cloud IaaS, because of the cost of their software stacks. This can happen in a myriad of ways, such as:
- Getting on-demand hardware by the hour, but having to pay for permanent, peak-load software licenses
- Paying an “enterprise” uplift on software licenses or running into other licensing-under-virtualization issues
- Spending money on commercial middleware when free open source will do
While there are obviously a host of complex issues surrounding software licensing, both with in-house virtualization and in the external cloud, I want to focus this blog post on the last point: using a big, complex, heavyweight package when a lighter-weight, simpler one will do.
In the enterprise, especially pre-virtualization, it made a reasonable amount of sense to standardize on relatively heavyweight architectures — say, WebLogic or WebSphere on top of an Oracle database. Sure, there were some applications that actually used the full power of Java EE and needed fancy Oracle features (like RAC), but for the most part, the zillions of apps within enterprises are basically business process, workflow, paperwork apps — fill out a form, take some kind of minor action, run a report. They work fine, and probably unchanged, on a sliver of a compute core with JBoss and MySQL, for instance. (And, while we’re at it, they run fine under Linux rather than a proprietary Unix flavor.)
I used to convince Web hosting customers that they ought to switch from Solaris to Linux in order to save money. (I still occasionally do, though Solaris is a vanishingly tiny percentage of the market these days, going from near complete market dominance to being essentially negligible in less than a decade and a half.) These days, I pitch clients on why they should consider converting to open source middleware if they’re going to the cloud and want to maximize their cost savings.
The fact of the matter is, it’s expensive to shoot sparrows with a cannon. Standardizing on a single platform has cost advantages right up until the time that the single platform is vastly more expensive than having two platforms. Most of the compute infrastructure in most enterprises is used for commodity applications, and it makes sense to bring down the operational cost of those applications as much as possible. Open source does not necessarily mean free, of course, and commercial open source can be plagued by the same sorts of licensing issues, but there are good things to explore here (and even if open source doesn’t cut it for you, exploring commercial alternatives that are more cloud/virtualization-friendly is still a boon).
Gartner clients: I’ve written about this topic before, in “Open Source in Web Hosting, 2008”. My colleague Stewart Buchanan has authored a magnificent series of notes on this topic, and I recommend you read, at the very least, “Splitting End-User and Service Provider Licensing Will Increase the Costs and Risks of Virtualization and Cloud Strategies”, as well as “Q&A: How to License Software Under Virtualization”.
Abstracting IaaS… or not
Customer inquiry around cloud IaaS these days is mostly of a practical sort — less “what is my broad long-term strategy going to be” or “help me with my initial pilot” like it was last year, and more “I’m going to do something big and serious, help me figure out how to source it”.
My inquiry volume is nothing short of staggering (I’m shoved into back-to-back 30-minute calls for the entirety of my work day, so if you talk to me and I sound a bit anxious to keep to schedule, that’s why). I’m currently clinging to the desperate hope that if I spend more time writing, people will consult the written work first, which will free me from having to go over basics in calls and hopefully result in better answers for clients as well as greater sanity for myself.
Thus, I have been trying to cram in a lot of writing in my evenings. At the moment, I’m working on a series of notes covering IaaS soup to nuts, going over everything from the different ways that compute resources are implemented and offered, to the minutiae of what capabilities are available in customer portals.
It strikes me that in more than ten years of covering the hosting industry as an analyst, this is the first time that I’ve written deep-dive architectural notes. No one has really cared in the past whether or not, say, a vendor uses NPIV in their storage fabric, or whether the chips in the servers support EPT/RVI. That’s all changing with cloud IaaS, once people get down into the weeds and look into adopting it for production systems.
It’s vastly ironic that now, in this age of the fluffy wonderful abstraction of infrastructure as a service, IT buyers are obsessing over the minutiae of exactly how this stuff is implemented.
It matters, of course. A core is not just a core; the performance of that core for your apps is going to determine bang-for-your-buck if you’re compute bound. The fine niggling details of the implementation of fill-in-the-blank-here will result in different sorts of security vulnerabilities and different ways that those vulnerabilities are addressed. And so on. The IT buyers who are delving into this stuff aren’t being paranoid or crazy; when you get right down to what’s going through their heads, they want to evaluate how it’s done versus how they’d do it themselves.
It’s a key difference between IaaS and PaaS thinking in the heads of customers. In PaaS, you trust that it will work as long as you write to the APIs, and you surrender control over the underlying implementation. In IaaS, you’re getting something so close to bare metal that you start really wondering about what’s there, because you’re comparing it directly to your own data center.
I think that over time this will be something that simply gets addressed with SLAs that guarantee particular levels of availability, performance, and so forth, along with some transparency and strong written guarantees around security. But the industry hasn’t hit that level of maturity yet, which means that for the moment, customers will and probably should do deep dives scrutinizing exactly what it is that they’re being offered when they contemplate IaaS solutions.
The cloud is not magic
Just because it’s in the cloud doesn’t make it magic. And it can be very, very dangerous to assume that it is.
I recently talked to an enterprise client who has a group of developers who decided to go out, develop, and run their application on Amazon EC2. Great. It’s working well, it’s inexpensive, and they’re happy. So Central IT is figuring out what to do next.
I asked curiously, “Who is managing the servers?”
The client said, well, Amazon, of course!
Except Amazon doesn’t manage guest operating systems and applications.
It turns out that these developers believed in the magical cloud — an environment where everything was somehow mysteriously being taken care of by Amazon, so they had no need to do the usual maintenance tasks, including worrying about security — and had convinced IT Operations of this, too.
Imagine running Windows. Installed as-is, and never updated since then. Without anti-virus, or any other security measures, other than Amazon’s default firewall (which luckily defaults to largely closed).
Plus, they assumed that auto-scaling was going to make their app magically scale. The app isn’t designed to scale horizontally, automagically or otherwise. Somebody is going to be an unhappy camper.
Cautionary tale for IT shops: Make sure you know what the cloud is and isn’t getting you.
Cautionary tale for cloud providers: What you’re actually providing may bear no resemblance to what your customer thinks you’re providing.
Hope is not engineering
My enterprise clients frequently want to know why fill-in-the-blank-cloud-IaaS only has a 99.95% SLA. “That’s more than four hours of downtime a year!” they cry. “More than twenty minutes a month! I can’t possibly live with that! Why can’t they offer anything better than that?”
The answer to that is simple: There is a significant difference between engineering and hope. Many internal IT organizations, for instance, set service-level objectives that are based on what they hope to achieve, rather than the level that the solution is engineered to achieve, and can be mathematically expected to deliver, based on calculated mean time between failures (MTBF) of each component of the service. Many organizations are lucky enough to achieve service levels that are higher than the engineered reliability of their infrastructure. IaaS providers, however, are likely to base their SLAs on their engineered reliability, not on hope.
If a service provider is telling you the SLA is 99.95%, it usually means they’ve got a reasonable expectation, mathematically, of delivering a level of availability that’s 99.95% or higher.
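To make “engineered to achieve” concrete, here’s a rough sketch; the component availabilities are made-up illustrative numbers, and the serial composition assumes independent failures:

```python
# Made-up illustrative availabilities for a simple serial chain: if any one
# component is down, the service is down.  Assumes independent failures.
components = {
    "power/UPS":      0.9999,
    "network":        0.9995,
    "virtualization": 0.9995,
    "storage":        0.9998,
}

availability = 1.0
for a in components.values():
    availability *= a   # serial chain: multiply component availabilities

HOURS_PER_YEAR = 24 * 365
print(f"Engineered availability: {availability:.4%}")                          # ~99.87%
print(f"Expected downtime: {(1 - availability) * HOURS_PER_YEAR:.1f} h/year")  # ~11 hours
print(f"A 99.95% SLA allows {0.0005 * HOURS_PER_YEAR:.2f} h/year of downtime") # ~4.4 hours
```

The point: even a chain of individually excellent components lands below 99.95% without redundancy, which is why a provider quoting 99.95% has generally had to engineer for it rather than merely hope for it.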
My enterprise client, with his data center that has a single UPS and no generator (much less dual power feeds, multiple carriers and fiber paths, etc.), with a single, non-HA, non-load-balanced server (which might not even have dual power supplies, dual NICs, etc.), will tell me that he’s managed to have 100% uptime on this application in the past year, so fie on you, Mr. Cloud Provider.
I believe that uptime claim. He’s gotten lucky. (Or maybe he hasn’t gotten lucky, but he’ll tell me that the power outage was an anomaly and won’t happen again, or that incident happened during a maintenance window so it doesn’t count.)
A service provider might be willing to offer you a higher SLA. It’s going to cost you, because once you get past a certain point, mathematically improving your reliability starts to get really, really expensive.
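And here’s a rough sketch of why that gets expensive. The math below assumes independent failures and perfect failover, which real systems never quite achieve; each additional layer of redundancy roughly multiplies your infrastructure (and engineering) cost while chasing an ever-smaller slice of remaining downtime.

```python
HOURS_PER_YEAR = 24 * 365
single = 0.995   # one server/path at an illustrative 99.5% availability

for n in range(1, 4):
    # Parallel redundancy, assuming independent failures and perfect failover.
    parallel = 1 - (1 - single) ** n
    downtime = (1 - parallel) * HOURS_PER_YEAR
    print(f"{n}x: {parallel:.5%} available, ~{downtime:.2f} h down/year")
```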
Now, that said, I’m not necessarily a fan of all cloud IaaS providers’ SLAs. But I encourage anyone looking at them (or a traditional hosting SLA, for that matter), to ponder the difference between engineering and hope.
Lightweight Provisioning != Lightweight Process
Congratulations, you’ve virtualized (or gone to public cloud IaaS) and have the ability to instantly and easily provision capacity.
Now, go ahead and shoot yourself in the foot by failing to implement a lightweight procurement process to go with your lightweight provisioning technology.
That’s an all-too-common story, and it highlights a critical aspect of the movement towards cloud (or just ‘cloudier’ concepts). In many organizations, it’s not actually the provisioning that’s expensive and lengthy. It’s the process that goes with it.
You’ll probably have heard that it can take weeks or months for an enterprise to provision a server. You might even work for an organization where that’s true. You might also have heard that it takes thousands of dollars to do so, and your organization might have a chargeback mechanism that makes that the case for your department.
Except that it doesn’t actually take that long, and it’s actually pretty darn cheap, as long as you’re large enough to have some reasonable level of automation (mid-sized businesses and up, or technology companies with more than a handful of servers). Even with zero automation, you can buy a server and have it shipped to you in a couple of days, and build it in an afternoon.
What takes forever is the procurement process, which may also be heavily burdened with costs.
When most organizations virtualize, they usually eliminate a lot of the procurement process — getting a VM is usually just a matter of requesting one, rather than going through the whole rigamarole of justifying buying a server. But the “request a VM” process can be anything from a self-service portal to something with as much paperwork headache as buying a server — and the cost savings, agility, and efficiency that an organization gains from virtualizing certainly depend on whether it’s able to lighten its process for this new world.
There are certain places where the “forever to procure, at vast expense” problems are notably worse. For instance, subsidiaries in companies that have centralized IT in the parent company often seem to get shafted by central IT — they’re likely to tell stories of an uncaring central IT organization, priorities that aren’t aligned with their own, and nonsensical chargeback mechanisms. Moreover, subsidiaries often start out much more nimble and process-light than a parent company that acquired them, which leads to the build-up of frustration and resentment and an attitude of being willing to go out on their own.
And so subsidiaries — and departments of larger corporations — often end up going rogue, turning to procuring an external cloud solution, not because internal IT cannot deliver a technology solution that meets their needs, but because their organization cannot deliver a process that meets their needs.
When we talk about time and cost savings for public cloud IaaS vs. the internal data center, we should be careful not to conflate the burden of (internal, discardable/re-engineerable) process with what the technology is able to deliver.
Note that this also means that fast provisioning is only the beginning of the journey towards agility and efficiency. The service aspects (from self-service to managed service) are much more difficult to solve.
The convenience of not coping
There’s a lot to be said for the ability to get a server for less than the price of a stick of chewing gum.
But convenience has a price, and that price is high enough that shared hosters, blog hosters, and other folks who make their daily pittance from infrastructure-plus-a-little-extra aren’t especially threatened by cloud infrastructure services.
For instance, I pay for WordPress to host a blog because, while I am readily capable of managing a cloud server and everything necessary to run WordPress, I don’t want to deal with it. I have better things to do with my time.
Small businesses will continue to use traditional shared hosting or even some control-panel-based VPS offerings, despite the much-inferior price-to-resource ratios compared to raw cloud servers, because of the convenience of not having to cope with administration.
The reason cloud servers are not a significant cost savings for most enterprises (for continuously running workloads, as opposed to burst or one-time capacity) is that administration is still a tremendous burden. It’s why PaaS offerings will gain more and more traction over time, as the platforms mature, but also why the companies that crack the code on really automating systems administration will win over time.
I was pondering this equation while contemplating the downtime of a host that I use for some personal stuff; they’ve got a multi-hour maintenance downtime this weekend. My solution to this was simple: write a script that would, shortly before shutdown time, automatically shut down my application, provision a 1.5-cent-an-hour cloud server over on Rackspace, copy the data over, and fire up the application on its new home. (Note: This was just a couple of lines of code, taking moments to type.) The only thing I couldn’t automate was the DNS changeover, since I use GoDaddy for primary DNS and they don’t have an API available for ordinary customers. But conveniently: failover, without having to disrupt my Saturday.
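For the curious, here’s a sketch of what that kind of script can look like. This is not my actual script; the provisioning step is a hypothetical placeholder (the real thing would call the Rackspace Cloud Servers API or a client library for it), and the hostnames, paths, and service names are made up.

```python
import subprocess

OLD_HOST = "app.oldhost.example"   # made-up hostname for the current host
NEW_USER = "root"                  # assumed login on the freshly built server
DATA_DIR = "/srv/app/data"         # made-up data path

def provision_server():
    """Hypothetical placeholder: call your provider's API here (e.g., the
    Rackspace Cloud Servers API or a client library) and return the new
    server's IP address once it's built."""
    raise NotImplementedError("wire this up to your provider's API")

def run(target, command):
    """Run a command on a remote host over ssh (assumes key-based auth)."""
    subprocess.check_call(["ssh", target, command])

def failover():
    new_ip = provision_server()

    # 1. Quiesce the application on the old host so the data is consistent.
    run(OLD_HOST, "sudo service myapp stop")

    # 2. Copy the application data over to the new server.
    subprocess.check_call(
        ["rsync", "-az", f"{OLD_HOST}:{DATA_DIR}/", f"{NEW_USER}@{new_ip}:{DATA_DIR}/"]
    )

    # 3. Fire up the application on its new home.
    run(f"{NEW_USER}@{new_ip}", "sudo service myapp start")

    # 4. DNS changeover stays manual (no registrar API), so just remind me.
    print(f"Now repoint DNS at {new_ip}")

if __name__ == "__main__":
    failover()
```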
But I realized that I was paying, on a resource-unit equivalent, tremendously more for my regular hosting than I would for a cloud server. Mostly, I’m paying for the convenience of not thinking — for not having to deal with making sure the OS is hardened, pay attention to security advisories, patch, upgrade, watch my logs, etc. I can probably afford the crude way of not thinking for a couple of hours — blindly shutting down all ports, pretty much — but I’m not comfortable with that approach for more than an afternoon.
This is, by the way, also a key difference between the small-business folks who have one or two servers, and the larger IT organizations with dozens, hundreds, or thousands of servers. The fewer you’ve got, the less efficient your labor leverage is. The guy with the largest scale doesn’t necessarily win on cost-efficiency, but there’s definitely an advantage to getting to enough scale.
Getting real on colocation
Of late, I’ve had a lot of people ask me why my near-term forecast for the colocation market in the United States is so much lower (in many cases, half the growth rate) than the forecasts produced by competing analyst firms, Wall Street, and so forth.
Without giving too much information (as you’ll recall, Gartner likes its bloggers to preserve client value by not delving too far into details for things like this), the answer to that comes down to:
- Gartner’s integrated forecasting approach
- Direct insight into end-user buying behavior
- Tracking the entire market, not just the traditional “hot” colo markets
I’ve got the advantage of the fact that Gartner produces forecasts for essentially the full range of IT-related “stuff”. If I’ve got a data center, I’ve got to fill it with stuff. It needs servers, network equipment, and storage, and those things need semiconductors as their components. It’s got to have network connectivity (and that means carrier network equipment for service providers, as well as equipment on the terminating end). It’s got to have software running on those servers. Stuff is a decent proxy for overall data center growth. If people aren’t buying a lot of stuff, their data center footprint isn’t growing. And when they are buying stuff, it’s important to know whether it’s replacing other stuff (freeing up power and space), or whether it’s new stuff that’s going to drive footprint or power growth.
Collectively, analysts at Gartner take over a quarter-million client inquiries a year, an awful lot of them related to purchasing decisions of one sort or another. We also do direct primary research in the form of surveys. So when we forecast, we’re not just listening to vendors tell us what they think their demand is; we’re also judging demand from the end-user (buyer) side. My colleagues and I, who collectively cover data center construction, renovation, leasing, and colocation (as well as things like hosting and data center outsourcing), have a pretty good picture of what our clientele are thinking about when it comes to procuring data center space, over and above the degree to which end-user thinking informs our forecasts for the stuff that goes into data centers.
Because of our client base, which includes not only IT buyers dispersed throughout the world but also a lot of vendors and investors, we watch not just the key colocation markets where folks like Equinix have set up shop, but everywhere anyone does colo, which is getting to be an awful lot of places. If you’re judging the data center market by what’s happening in Equinix Cities or even Savvis Cities, you’re missing a lot.
If I’m going to believe in gigantic growth rates in colocation, I have to believe that one or more of the following things is true:
- IT stuff is growing very quickly, driving space and/or power needs
- Substantially more companies are choosing colo over building or leasing
- Prices are escalating rapidly
- Renewals will be at substantially higher prices than the original contracts
I don’t think, in the general case, that these things are true. (There are places where they can be true, such as with dot-com growth, specific markets where space is tight, and so on.) They’re sufficiently true to drive a colo growth rate that is substantially higher than the general “stuff that goes into data centers” growth rate, but not enough to drive the stratospheric growth rates that other analysts have been talking about.
Note, though, that this is market growth rate. Individual companies may have growth rates far in excess or far below that of the market.
I could be wrong, but pessimism plus the comprehensive approach to forecasting has served me well in the past. I came through the dot-com boom-and-bust with forecasts that turned out to be pretty much on the money, despite the fact that every other analyst firm on the planet was predicting rates of growth enormously higher than mine.
(Also, to my retroactive amusement: Back then, I estimated revenue figures for WorldCom that were a fraction of what they reported, due to my simple inability to make sense of their reported numbers. If you push network traffic, you need carrier equipment, as do the traffic recipients. And traffic goes to desktops and servers, which can be counted, and you can arrive at reasonable estimates of how much bandwidth each uses. And so on. Everything has to add up to a coherent picture, and it simply didn’t. It didn’t help that the folks at WorldCom couldn’t explain the logical discrepancies, either. It just took a lot of years to find out why.)