Blog Archives

Terms of Service: From anti-spam to content takedown

Many consumers are familiar with the terms of service (ToS) that govern their use of consumer platforms that contain user-generated content (UGC), such as Facebook, Twitter, and YouTube. But many people are less familiar with the terms of service and acceptable use policies (AUPs) that govern the relationships between businesses and their service providers.

In light of the recent decision by Amazon Web Services (AWS) to suspend service to Parler, a Twitter-like social network that was used to plan the January 6th insurrection at the US Capitol, numerous falsehoods have been circulating on social media that seem to stem from a lack of understanding of service provider behavior, ToS, and AUPs: claims of “free speech” violations, a coordinated conspiracy between big tech companies, and the like. This post is intended to examine, without judgment, how service providers — cloud, hosting and colo, CDN and other Internet infrastructure providers, and the ISPs that connect everyone to the Internet — came to a place of industry “standards” for behavior, and how enforcement is handled in B2B situations.

The TL;DR summary: The global service provider community, as a result of anti-spam efforts dating back to the mid-90s, enforces extremely similar policies governing content, including user-generated content, through a combination of B2B peer pressure and contractual obligations. Business customers who contravene these norms have very few options.

These norms will greatly limit Parler’s options for a new home. Many sites with far-right and similarly controversial content have ultimately ended up using a provider in a supply chain that relies on Russian connectivity, thus dodging the Internet norms that prevail in the rest of the world.

Internet Architecture and Service Provider Dependencies

While the Internet is a collection of loosely federated networks that are in theory independent from one another, it is also an interdependent web of interconnections between those networks. There are two ways that ISPs connect with one another — through “settlement-free peering” (essentially an exchange of traffic between two ISPs that view themselves as equals) and through the purchase of “transit” (making a “downstream ISP” the customer of an “upstream ISP”).

This results in a three-tier model for ISPs. The Tier 1 ISPs are big global carriers of network connectivity — companies like AT&T, BT and NTT — who peer settlement-free with each other and sell transit to smaller ISPs. Tier 2 ISPs are usually regional; they peer settlement-free with others in and around their region, but also rely on transit purchased from the Tier 1s. Tier 3 ISPs are entirely dependent on purchased transit. ISPs at all three tiers also sell connectivity directly to businesses and/or consumers.
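
For the programmatically inclined, here is a minimal sketch (Python, with made-up ISP names and nothing resembling real routing) of how those peering and transit relationships stack up, and how a Tier 3's connectivity ultimately hangs off a chain of upstream contracts:

```python
# A toy model -- not real routing, and the ISP names are made up -- of the
# peering and transit relationships described above.

from dataclasses import dataclass, field

@dataclass
class ISP:
    name: str
    tier: int
    peers: set = field(default_factory=set)      # settlement-free peers (equals)
    upstreams: set = field(default_factory=set)  # transit providers this ISP pays

def add_peering(a: ISP, b: ISP) -> None:
    """Settlement-free peering: traffic exchanged between two ISPs as equals."""
    a.peers.add(b.name)
    b.peers.add(a.name)

def buy_transit(downstream: ISP, upstream: ISP) -> None:
    """Transit: the downstream ISP becomes a paying customer of the upstream."""
    downstream.upstreams.add(upstream.name)

# Illustrative topology
tier1_a = ISP("GlobalCarrierA", 1)
tier1_b = ISP("GlobalCarrierB", 1)
regional = ISP("RegionalNet", 2)
access = ISP("LocalAccessCo", 3)

add_peering(tier1_a, tier1_b)   # Tier 1s peer settlement-free with each other
buy_transit(regional, tier1_a)  # the Tier 2 buys transit from a Tier 1
buy_transit(access, regional)   # the Tier 3 is entirely transit-dependent

def upstream_chain(isp: ISP, universe: dict) -> list:
    """Every upstream whose ToS/AUP the given ISP is contractually bound to."""
    chain = []
    current = isp
    while current.upstreams:
        current = universe[next(iter(current.upstreams))]
        chain.append(current.name)
    return chain

universe = {i.name: i for i in (tier1_a, tier1_b, regional, access)}
print(upstream_chain(access, universe))  # ['RegionalNet', 'GlobalCarrierA']
```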

In practice, this means that ISPs are generally contractually bound to other ISPs. All transit contracts are governed by terms of service that normally incorporate, by reference, an AUP. Even settlement-free peering agreements are legal contracts, which normally include a mutual obligation to maintain and enforce some form of AUP. (In the earlier days of the Internet, peering was done on a handshake, but anything of that sort is basically a legacy that can come to an abrupt end should one party suddenly decide to behave badly.)

AUP documents are interesting because they are deliberately created as living documents, allowing AUPs to be adapted to changing circumstances — unlike standard contract terms, which apply for the length of what is usually a multiyear contract. AUPs are also normally ironclad; it’s usually difficult to impossible for a business to get any form of AUP exemption written into their contract. Most contracts provide minimal or no notice for AUP changes. Businesses tend to simply agree to them because most businesses do not plan to engage in the kind of behavior that violates an AUP — and because they don’t have much choice.

The existence of ISP tiering means that upstream providers have significant influence over the behavior of their downstream. Upstream ISPs normally mandate that their downstream ISPs — and other service providers that use their connectivity, like hosting providers — enforce an AUP that enables the downstream provider to be compliant with the upstream’s terms of service. Downstream providers that fail to do so can have their connectivity temporarily suspended or their contract terminated. And between the Tier 1 providers, peer pressure ensures a common global understanding and enforcement of acceptable behavior on the Internet.

Note that this has all occurred in the absence of regulation. ISPs have come to these arrangements through decisions about what’s good for their individual businesses first and foremost, with the general agreement that these community standards for AUPs are good for the community of network operators as a whole.

We’re Here Because Nobody Likes Spammers

So how did we arrive at this state in the first place?

In the mid-90s, as the Internet grew rapidly in the near-total absence of regulation, spam became a growing problem. Spam came both from legitimate businesses that simply weren’t aware of, or didn’t especially care about, Internet etiquette, and from commercial spammers (bad actors with deceptive or fraudulent ads, and/or illegal or grey-market products).

Many B2B ISPs did not feel that it was necessarily their responsibility to intervene, despite general distaste for spammers — and, sometimes, a flood of consumer complaints. Some percentage of spammers were otherwise “good customers” — i.e. they paid their bills on time and bought a lot of bandwidth. Many more, however, obtained services under fraudulent pretenses, didn’t pay their bills, or tended not to pay on time.

Gradually, the community of network operators came to a common understanding that spammers were generally bad for business, whether they were your own customers, or whether they were the customers of, say, a web hosting company that you provided Internet connectivity for.

This resulted in upstream ISPs exerting pressure on downstream ISPs. Downstream ISPs, in turn, exerted pressure on their customers — kicking spammers off their networks and pushing hosters to kick spammers out of hosting environments. ISPs formalized AUPs. AUP enforcement took longer to mature. Many ISPs were initially pretty shoddy and inconsistent in their enforcement — whether because they needed the revenue they were getting from spammers, because they were unwilling or unable to fund a staff to deal with abuse, or because corporate lawyers urged caution. It took years, but ISPs eventually arrived at AUPs that were contractually enforceable, processes for handling complaints, and relatively consistent enforcement. Legislation like the CAN-SPAM Act in the US didn’t hurt, but by the time CAN-SPAM was passed (in 2003), ISPs had already arrived at a fairly successful commercial resolution to the problem.

Because anti-spam efforts were largely fueled by agreements enshrined in B2B contracts, and not in government regulation, there was never full consistency across the industry. Different ISPs created different AUPs — some stricter and some looser. Different ISPs wrote different terms of service into their contracts, with different “cure” periods (a period of time that a party in the contract is given to come into compliance with a contractual requirement). Different ISPs had different attitudes towards balancing “customer service” versus their responsibilities to their upstream providers and to the broader community of network operators.

Consequently, there’s nothing that says “We need to receive X number of spam complaints before we take action,” for instance. Some providers may have internal process standards for this. A lot of enforcement simply takes place via automated algorithms; i.e. if a certain threshold of users reports something as spam, enforcement actions take place. Providers effectively establish, through peer norms, what constitutes “effective” enforcement in accordance with terms of service obligations. Providers don’t need to threaten each other with network disconnection, because a norm has been established. But the implicit threat — and the contractual teeth behind that threat — always remains.
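
Purely as a hypothetical illustration (every provider's thresholds, signals, and actions are internal and vary widely), threshold-based abuse handling might look something like this:

```python
# Hypothetical sketch of threshold-based abuse handling. Real providers'
# thresholds, signals, and enforcement actions are internal and vary widely.

from collections import Counter, defaultdict

SPAM_COMPLAINT_THRESHOLD = 100   # assumed numbers, purely for illustration
UNIQUE_REPORTER_THRESHOLD = 25

def evaluate_abuse_reports(reports):
    """reports: an iterable of (customer_id, reporter_id) tuples."""
    reports = list(reports)
    complaint_counts = Counter(customer for customer, _ in reports)
    unique_reporters = defaultdict(set)
    for customer, reporter in reports:
        unique_reporters[customer].add(reporter)

    actions = {}
    for customer, count in complaint_counts.items():
        if (count >= SPAM_COMPLAINT_THRESHOLD
                and len(unique_reporters[customer]) >= UNIQUE_REPORTER_THRESHOLD):
            # open an AUP case: notify the customer and start the cure-period clock
            actions[customer] = "open_aup_case"
        else:
            actions[customer] = "monitor"
    return actions

# Example: 120 complaints about one customer, from 30 distinct reporters
sample = [("cust-42", f"reporter-{i % 30}") for i in range(120)]
print(evaluate_abuse_reports(sample))  # {'cust-42': 'open_aup_case'}
```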

Nobody really likes terminating customers. So there are often fairly long cure periods, recognizing that it can take a while for a customer to properly comply with an AUP. In the suspension letter that AWS sent Parler, AWS cites communications “over the past several weeks”. Usually providers look for their customers to demonstrate good-faith efforts, but may move to suspension or termination if it does not look like a good-faith effort to comply is being made, or if the effort, however earnest it seems, does not appear likely to bring compliance within a reasonable time period. 30 days is a common timeframe specified as a cure period in contracts (and is the cure period in the AWS standard Enterprise Agreement), but cloud provider click-through agreements (such as the AWS Customer Agreement) do not normally have a cure period, allowing immediate action to be taken at the provider’s discretion.

What Does This Have to Do With Policing Users on Social Media?

When providers established anti-spam AUPs, they also added a whole laundry list of offenses beyond spamming. Think of that list as “everything a good corporate lawyer thought an ISP might ever want to terminate a customer for doing.” Illegal behavior, harassment, behavior that disrupts provider operations, behavior that threatens the safety/security/operations of other businesses, etc. are all prohibited.

Hosting companies — eventually followed by cloud providers like AWS, Microsoft Azure, and Google Cloud Platform, as well as companies that hold key roles in the Internet ecosystem (domain registrars and the companies that operate the DNS, content delivery networks like Akamai and Cloudflare, etc.) — were essentially obliged to incorporate their upstream ISPs’ usage policies into their own terms of service and AUPs, and to enforce those policies on their users if they wanted to stay connected to the Internet. Some such providers have also explicitly chosen not to sell to customers in certain business segments — for instance, no gambling, or no pornography, even if the business is fully legitimate and legal (MGM Resorts or Playboy, for instance) — by limiting what’s allowed in their terms of service. An AUP may restrict activities that are perfectly legal in a given jurisdiction.

Even extremely large tech companies that have their own data centers, like Facebook and Apple, are ultimately beholden to ISPs. (Google is something of an odd case because in addition to owning their own data centers, they are one of the largest global network operators. Google owns extensive fiber routes and peers with Tier 1 ISPs as an equal.) And even though AWS has, to some degree, a network of its own, it is effectively a Tier 2 ISP, making it beholden to the AUPs of its upstream. Other cloud providers are typically mostly or fully transit-dependent, and are thus entirely beholden to their upstream.

In short: everyone who hosts content companies, and the content companies themselves, is essentially bound, by the chain of AUP obligations, to police content to ensure that it is not illegal, harassing, or otherwise seen as commercially problematic.

You have to go outside the normal Internet supply chain — for instance, to the Russian service providers — before you escape the commercial arrangements that bind notions of good business behavior on the Internet. It doesn’t matter what a provider’s philosophical alignment is; commercially, they simply can’t push back on the established order. And because peering and transit are global (providers generally interconnect in multiple countries), regulation at the single-country level can’t really force these agreements to become significantly more or less restrictive.

It also means that these aren’t just “Silicon Valley” standards. These are global norms for behavior, which means they are not influenced solely by the relatively laissez-faire content standards of the United States, but by the more stringent European and APAC environments.

It’s an interesting result of what happens when businesses police themselves. Even without formal industry-association “rules” or regulatory obligations, a fairly ironclad order can emerge that exerts extremely effective downstream pressure (as we saw in the cases of the Daily Stormer in 2017 and 8chan in 2019).

Does being multicloud help with terms of service violations?

Some people will undoubtedly ask, “Would it have helped Parler to have been multicloud?” Parler has already said that they are merely bare-metal customers of AWS, reducing technical dependencies / improving portability. But their situation is such that they would almost certainly have had the exact same issue if they had been running on Microsoft Azure, Google Cloud Platform, or even Oracle Cloud Infrastructure as well (even though the three companies have top executives with political views spanning the spectrum). A multicloud strategy won’t help any business that violates AUP norms.

AWS and its cloud/hosting competitors are usually pretty generous when working with business customers that unintentionally violate their AUPs. But a business that chooses not to comply is unlikely to find itself welcome anywhere, which makes multicloud deployment largely useless as a defensive strategy.

Beware misleading marketing of “private clouds”

Many cloud IaaS providers have been struggling to articulate their differentiation for a while now, and many of them labor under the delusion that “not being Amazon” is differentiating. But it also tends to lead them into misleading marketing, especially when it comes to trying to label their multi-tenant cloud IaaS “private cloud IaaS”, to distinguish it from Those Scary And Dangerous Public Cloud Guys. (And now that we have over four dozen newly-minted vCloud Powered providers in the early market-entrance stage, the noise is only going to get worse, as these providers thrash about trying to differentiate.)

Even providers who are clear in their marketing material that the offering is a public, multi-tenant cloud IaaS sometimes have salespeople who pitch the offering as private cloud. We also find that customers are sometimes under the illusion that they’ve bought a private cloud, even when they haven’t.

I’ve seen three common variants of provider rationalization for why they are misleadingly labeling a multi-tenant cloud IaaS as “private cloud”:

We use a shared resource pool model. These providers claim that because customers buy by the resource pool allocation (for instance, “100 vCPUs and 200 GB of RAM”) and can carve that capacity up into VMs as they choose, that capacity is therefore “private”, even though the infrastructure is fully multi-tenant. However, there is always still contention for these resources (even if neither the provider nor the customer deliberately oversubscribes capacity), as well as for any other shared elements, like storage and networking. It also doesn’t alter any of the risks of multi-tenancy. In short, a shared resource pool, versus a pay-by-the-VM model, is largely just a matter of the billing scheme and management convenience, possibly including the nice feature of allowing customers to voluntarily self-oversubscribe their purchased resources; a toy sketch of that billing arithmetic appears after this list. It’s certainly not private. (This is probably the situation that customers most commonly confuse as “private”, even after long experience with the service — a non-trivial number of them think the shared resource pool is physically carved off for them.)

Our customers don’t connect to us over the Internet. These providers claim that private networking makes them a private cloud. But in fact, nearly all cloud IaaS providers offer multiple networking options other than plain old Internet, ranging from IPsec VPN over the Internet to a variety of private connectivity options from the carrier of your choice (MPLS, Ethernet, etc.). This has been true for years now, as I noted when I wrote about Amazon’s introduction of VPC back in 2009. Even Amazon essentially offers private connectivity these days, since you can use Amazon Direct Connect to get a cross-connect at select Equinix data centers, and from there, buy any connectivity that you wish.

We don’t allow everyone to use our cloud, so we’re not really “public”. These providers claim to have a “private cloud” because they vet their customers and only allow “real businesses”, however they define that. (The ones who exclude net-native companies as not being “real businesses” make me cringe.) They claim that a “public cloud” would allow anyone to sign up, and it would be an uncontrolled environment. This is hogwash. It can also lead to a false sense of complacency, as I’ve written before — the assumption that their customers are good guys means that they might not adequately defend against customer compromises or customer employees who go rogue.
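
To make the first variant concrete, here is the toy sketch promised above; the prices and pool size are invented for illustration, and the point is simply that the pool is a billing construct, not a physical carve-out:

```python
# Toy illustration, with hypothetical list prices, of why a "shared resource
# pool" is mostly a billing construct: the same VMs can be priced per-VM or as
# a pool, and the pool can even be voluntarily oversubscribed -- none of which
# changes the fact that the underlying infrastructure is multi-tenant.

POOL = {"vcpus": 100, "ram_gb": 200}       # the purchased allocation
PRICE_PER_VCPU_HOUR = 0.04                 # assumed prices, for illustration only
PRICE_PER_GB_RAM_HOUR = 0.01
HOURS_PER_MONTH = 730

vms = [{"vcpus": 8, "ram_gb": 16}] * 20    # the customer carves the pool into VMs

def per_vm_bill(vms):
    return sum((vm["vcpus"] * PRICE_PER_VCPU_HOUR +
                vm["ram_gb"] * PRICE_PER_GB_RAM_HOUR) * HOURS_PER_MONTH
               for vm in vms)

def pool_bill(pool):
    return (pool["vcpus"] * PRICE_PER_VCPU_HOUR +
            pool["ram_gb"] * PRICE_PER_GB_RAM_HOUR) * HOURS_PER_MONTH

allocated = sum(vm["vcpus"] for vm in vms)    # 160 vCPUs carved from a 100 vCPU pool
print("self-oversubscribed:", allocated > POOL["vcpus"])      # True
print(f"pay-by-the-VM bill: ${per_vm_bill(vms):,.2f}/month")  # charges for 160 vCPUs
print(f"resource-pool bill: ${pool_bill(POOL):,.2f}/month")   # charges for the 100 vCPU pool
```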

The NIST definition of private cloud is clear: “Private cloud. The cloud infrastructure is provisioned for exclusive use by a single organization comprising multiple consumers (e.g., business units). It may be owned, managed, and operated by the organization, a third party, or some combination of them, and it may exist on or off premises.” In other words, NIST defines private cloud as single-tenant.

Given the widespread use of NIST cloud definitions, and the reasonable expectation that customers have that a provider’s terminology for its offering will conform to those definitions, calling a multi-tenant offering “private cloud” is misleading at best. And at some point in time, the provider is going to have to fess up to the customer.

I do fully acknowledge that by claiming private cloud, a provider will get customers into the buying cycle that they wouldn’t have gotten if they admitted multi-tenancy. Bait-and-switch is unpleasant, though, and given that trust is a key component of provider relationships as businesses move into the cloud, customers should use providers that are clear and up-front about their architecture, so that they can make an accurate risk assessment.

Performance can be a disruptive competitive advantage

All of us are used to going to travel sites, especially for airline tickets, and waiting a while for the appropriate results to be extracted and displayed to us. I recently saw Google Flight Search for the first time and was astonished by its raw speed — essentially instant.

I frequently talk to customers about acceleration solutions, and discuss the business value of performance. Specifically, this is a look at business metrics that measure the success of a website or application — time spent on your site, conversion rate, shopping basket value, page views, ad views, transactions processed, employee productivity, decline in call center volume, and so forth. You compare the money associated with these metrics, against the cost of the solutions, to look at comparative ROI.
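
As a purely hypothetical worked example (every number below is invented for illustration), the comparison looks something like this:

```python
# Purely hypothetical worked example of the ROI comparison described above.
# Every number here is invented for illustration.

monthly_visits = 500_000
baseline_conversion = 0.020        # 2.0% of visits convert today
faster_conversion = 0.021          # assume better responsiveness lifts it to 2.1%
average_order_value = 100.00       # dollars per conversion

acceleration_cost = 15_000.00      # assumed monthly cost of the acceleration solution

baseline_revenue = monthly_visits * baseline_conversion * average_order_value
faster_revenue = monthly_visits * faster_conversion * average_order_value
incremental_revenue = faster_revenue - baseline_revenue        # $50,000/month

roi = (incremental_revenue - acceleration_cost) / acceleration_cost
print(f"incremental revenue: ${incremental_revenue:,.0f}/month, ROI: {roi:.0%}")
# -> incremental revenue: $50,000/month, ROI: 233%
```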

The business value of performance is usually tied to industry in a narrow and specific way, because users have a particular set of expectations and needs. For instance, for travel sites, a certain amount of performance is necessary in order to make the site usable, but users are conditioned to long waits for searches, making their overall performance expectations relatively low. Travel sites usually discover that generalized site responsiveness improves the user experience and causes revenue per site visit to increase — but only up to a certain point, at which it plateaus: the site is responsive enough that users aren’t discouraged from using it, and they’re going to buy what they came to buy.

Google Flight Search proves that you can “break through” the performance ceiling to entirely change the user experience, though. This is not the kind of incremental improvement you can achieve through acceleration techniques; instead, it’s a core change that affects the thing that is slowest, which is generally the back-end database and business logic, not the network. This can actually be a disruptive competitive advantage.

I typically ask my CDN clients, “What are the factors that make your site slow?” In many cases, they need to do something that goes beyond what edge caching or even network optimization (dynamic acceleration) can achieve. They need to reduce their page weight, or write better pages (and may benefit from front-end optimization techniques), or to improve the back-end responsiveness. Acceleration techniques are often used to band-aid a core problem with performance, just like CDN professional services to make a site cacheable are often used to band-aid a core problem with site structure. At some point in time it becomes more cost-effective to fix the core problem.

Too few businesses design their websites and applications with speed in mind.

Common service provider myths about cloud infrastructure

We’re currently in the midst of agenda planning for 2012, which is a fancy way to say that we’re trying to figure out what we’re going to write next year. Probably to the despair of my managers, I am almost totally a spontaneous writer, who sits down on a plane and happens to write a research note on whatever it is that’s occurred to me at the moment. So I’ve been pondering what to write, and decided that I ought to tap into the deep well of frustration I’ve been feeling about the cloud IaaS market over the last couple of months.

Specifically, it started me in on thinking about the most common fallacies that I hear from current cloud IaaS providers, or from vendors who are working on getting into the business. I think each of these things is worthy of a research note (in some cases, I’ve already written one), but they’re also worth a blog post series, because I have the occasional desire to explode in frustrated rants. Also, when I write research, it takes the form of carefully polite, thoughtfully considered, heavily nuanced, peer-reviewed documents that run ten to twenty pages and get vaguely skimmed, often by mid-level folks in product marketing. If I write a blog post, it will be short and pointed and might actually get the point through to people, especially the executives who are more likely to read my blog than my research.

So, here’s the succinct list to be explored in further posts. These are things I have said to vendor clients in inquiries, in politely measured terms. These are the blunt versions:

Doing this cloud infrastructure thing is hard and expensive. Yes, I know that VMware told you that you could just get a VCE Vblock, put VMware’s cloud stack on it (maybe with a little help from VMware consulting), and be in business. That’s not the case. You will be making a huge number of engineering decisions (most of which can screw you in a variety of colorful ways, either immediately or down the road). You will be integrating a ton of tools and doing a bunch of software development yourself, if you want to have a vaguely competitive offering for anything other than the small business migrating from VPS. Ditto if you use Citrix (Cloud.com), OpenStack, or whomever. Even with professional services to help you. And once you have an offering, you will be in a giant competitive rat race where the best players innovate fast, and the capabilities gap widens, not closes. If you’re not up to it, white-label, resell, or broker instead.

There is more to the competition than Amazon, but ignore Amazon at your peril. Sure, Amazon is the market goliath, but if your differentiation is “we’re not like Amazon, we’re enterprise-class!”, you’re now competing against the dozens of other providers who also thought that would be a clever market differentiation. Not to mention that Amazon already serves the enterprise, and wants to deepen its inroads. (Where Amazon is hurting is the mid-market, but there’s tons of competition there, too.) Do you seriously think that Amazon isn’t going to start introducing service features targeted at the enterprise? They already have, and they’re continuing to do so.

Not everything has to be engineered to five nines of availability. Many businesses, especially those moving legacy workloads, need reliable, consistently high-performance infrastructure. However, most businesses shouldn’t get infrastructure as one-size-fits-all — this is part of what is making internal data centers expensive. Instead, cloud infrastructure should be tiered — one management portal, one API, multiple levels of service at different price points. “Everything we do is enterprise-class” unfortunately implies “everything we do is expensive”.

Your contempt for the individual developer hugely limits your sales opportunities. Developers are the face of the business buyer. They are the way that cloud IaaS makes inroads into traditional businesses, including the largest enterprises. This is not just about start-ups or small businesses, or about the companies going DevOps.

Prospective customers will not call Sales when your website is useless. Your lack of useful information on your website doesn’t mean that eager prospects will call sales wanting to know what wonderful things you have. Instead, they will assume that you suck, and you don’t get the cloud, and you are hiding what you have because it’s not actually competitive, and they will move on to the dozens of other providers trying to sell cloud IaaS or who are pretending to do so. Also, engineers hate talking to salespeople. Blind RFPs are common in this market, but so is simply signing up with a provider that doesn’t make it painful to get their service.

Just because you don’t take online sign-ups doesn’t mean your cloud is “safe”. Even if you only take “legitimate businesses”, customers make mistakes and their infrastructure gets compromised. Sure, your security controls might ensure that the bad guys don’t compromise your other customers. But that doesn’t mean you won’t end up hosting command-and-control for a botnet, scammers, or spammers, inadvertently. Service providers who take credit card sign-ups are professionally paranoid about these things; buyers should beware providers who think “only real businesses like you can use our cloud” means no bad guys inside the walls.

Automation, not people, is the future. Okay, you’re more of a “managed services” kind of company, and self-service isn’t really your thing. Except “managed services” are, today, basically a codeword for “expensive manual labor”. The real future value of cloud IaaS is automating the heck out of most of the lower-end managed services. If you don’t get on that bandwagon soon, you are going to eventually stop being cost-competitive — not to mention that automation means consistency and likely higher quality. There’s a future in having people still, but not for things that are better done by computers.

Carriers won’t dominate the cloud. This opinion is controversial. Of course, carriers will be pretty significant players — especially since they’ve been buying up the leading independent cloud IaaS providers. But many other analyst firms, and certainly the carriers themselves, believe that the network, and the ability to offer an end-to-end service, will be a key differentiator that allows carriers to dominate this business. But that’s not what customers actually want. They want private networking from their carrier that connects them to their infrastructure — which they can get out of a carrier-neutral data center that is a “cloud hub”. Customers are better off going into a cloud hub with a colocated “cloud gateway” (with security, WAN optimization, etc.), cross-connecting to their various cloud providers (whether IaaS, PaaS, SaaS, etc.), and taking one private network connection home.

Stay tuned. More to come.

Akamai and Riverbed partner on SaaS delivery

Akamai and Riverbed have signed a significant partnership deal to jointly develop solutions that combine Internet acceleration with WAN optimization. The two companies will be incorporating each other’s technologies into their platforms; this is a deep partnership with significant joint engineering, and it is probably the most significant partnership that Akamai has done to date.

Akamai has been facing increasing challenges to its leadership in the application acceleration market — what Akamai’s financial statements term “value added services”, including their Dynamic Site Accelerator (DSA) and Web Application Accelerator (WAA) services, which are B2C and B2B bundles, respectively, built on top of the same acceleration delivery network (ADN) technology. Vendors such as Cotendo (especially via its AT&T partnership), CDNetworks, and EdgeCast now have services that compete directly with what has been, for Akamai, a very high-margin, very sticky service. This market is facing severe pricing pressure, due not just to competition, but also to the delta between the cost of these services and standard CDN caching. (In other words, as basic CDN services get cheaper, application acceleration also needs to get cheaper, in order to demonstrate sufficient ROI, i.e., business value of performance, above just buying the less expensive solution.)

While Akamai has had interesting incremental innovations and value-adds since it obtained this technology via the 2007 acquisition of Netli, it has, until recently, enjoyed a monopoly on these services, and therefore hasn’t needed to do any groundbreaking innovation. While the internal enterprise WAN optimization market has been heavily competitive (between Riverbed, Cisco, and many others), other CDNs largely only began offering competitive ADN solutions in the last year. Now, while Akamai still leads in performance, it badly needs to open up some differentiation and new potential target customers, or it risks watching ADN solutions commoditize just the way basic CDN services have.

The most significant value proposition of the joint Akamai/Riverbed solution is this:

Despite the fundamental soundness of the value proposition of ADN services, most SaaS providers use only a basic CDN service, or no CDN at all. The same is true of other providers of cloud-based services. Customers, however, frequently want accelerated services, especially if they have end-users in far-flung corners of the globe; the most common problem is poor performance for end-users in Asia-Pacific when the service is based in the United States. Yet, today, getting that acceleration either requires that the SaaS provider buy an ADN service themselves (which is hard to do for only one customer, especially with multi-tenant SaaS), or requires the SaaS provider to allow the customer to deploy hardware in their data center (for instance, a Riverbed Steelhead WOC).

With the solution that this partnership is intended to produce, customers won’t need a SaaS provider’s cooperation to deploy an acceleration solution — they can buy it as a service and have the acceleration integrated with their existing Riverbed solution. It adds significant value to Riverbed’s customers, and it expands Akamai’s market opportunity. It’s a great idea, and in fact, this is a partnership that probably should have happened years ago. Better late than never, though.

ICANN and DNS

ICANN has been on the soapbox on the topic of DNS recently, encouraging DNSSEC adoption, and taking a stand against top-level domain (TLD) redirection of DNS inquiries.

The DNS error resolution market — usually manifesting itself as the display of an advertising-festooned Web page when a user tries to browse to a non-existent domain — has been growing over the years, primarily thanks to ISPs who have foisted it upon their users. The feature is supported in commercial DNS software and services that target the network service provider market; in most current deployments of this sort, business customers typically have an opt-out option, and consumers may have one as well.
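
If you are curious whether a resolver in your path does this, a quick check is to look up a name that should not exist and see whether you get an answer anyway. A minimal sketch follows; the test name is made up:

```python
# Minimal check for NXDOMAIN redirection: look up a name that should not exist
# and see whether the resolver hands back an answer anyway. The test name is
# made up; any sufficiently random, nonexistent name will do.

import socket

TEST_NAME = "this-name-should-not-exist-7f3a9c2b.com"   # hypothetical

try:
    answer = socket.gethostbyname(TEST_NAME)
    print(f"resolver returned {answer} for a nonexistent name -- "
          "it appears to be rewriting NXDOMAIN responses")
except socket.gaierror:
    print("nonexistent name returned an error, as it should -- no redirection observed")
```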

While ICANN’s Security and Stability Advisory Committee (SSAC) believes this is detrimental to the DNS, their big concern is what happens when this is done at the TLD level. We all got a taste of that with VeriSign’s SiteFinder back in 2003, which affected the .com and .net TLDs. Since then, though, similar redirections have found their way into smaller TLDs (i.e., ones where there’s no global outcry against the practice). SSAC wants this practice explicitly forbidden at the TLD level.

I personally feel that the DNS error resolution market, at whatever level of the DNS food chain, is harmful to the DNS and to the Internet as a whole. The Internet Architecture Board’s evaluation is a worthy indictment, although it’s missing one significant use case — the VPN issues that redirection can cause. Nevertheless, I also recognize that until there are explicit standards forbidding this kind of use, it will continue to be commercially attractive and thus commonplace; indeed, I continue to assist commercial DNS companies, and service providers, who are trying to facilitate and gain revenue related to this market. (Part of the analyst ethic is much like a lawyer’s; it requires being able to put aside one’s personal feelings about a matter in order to assist a client to the best of one’s ability.)

I applaud ICANN taking a stand against redirection at the TLD level; it’s a start.

Identity overflow

Back in 2005, my colleague Monica Basso and I wrote a research note titled, “Wide Array of Communications Overwhelms Users”. In it, we pointed out that the proliferation of communication mechanisms and the complex intermixing of personal and business communications would become increasingly unmanageable.

Until a few months ago, Gartner had a policy that disallowed analysts from participating in most forms of social media. So, up until recently, I’ve had relatively clean distinctions in my communications — a personal instant messaging account, del.icio.us, Facebook, Twitter, and so on, used strictly to communicate with friends. LinkedIn was the face of my business profile.

Now that my employer is actually encouraging use of social media, I’m finding myself facing a problem of converging business and personal identities. I am happy to build big social networks, but categorization and compartmentalization are a big problem. For instance, how do I deal with the fact that my vendor contacts see my high school acquaintances scribbling random things on my Facebook Wall? Should my blog followers who want to see my interesting technology links also have to endure my MMORPG links on del.icio.us? What do I tweet to my various accounts? (On Twitter, I have a business identity, a private friends-only identity, and a public personal identity, but ironically, I find Twitter almost impossibly distracting to deal with, so I tweet and read tweets very seldom.) How much of myself do I really want to federate on FriendFeed?

I feel like social networking, whether targeted at personal or business use, really needs strong tagging and categorization. Here are the people who happened to work at the same company as me, that I’ve spoken to a few times; here are my immediate co-workers; here are the people I’ve worked closely with and think are awesome. Here are business contacts at other companies that I want to stay in touch with but don’t have a personal relationship with. Here are vague past acquaintances, current friends, and my inner circle of close buddies. And I desperately want tools on all social networking sites that let me limit the flow of me-related content: public, all connections, just these specific groups.

But more broadly, we are still missing really elegant ways to manage the incredible flow of communication and information that’s coming our way. There’s a commercial opportunity there that’s still not been taken.

Ranking of ISPs

Datahounds might be interested in the Renesys ISP rankings for 2008.

Renesys is a company that specializes in collecting data about the Internet, focused upon the peering ecosystem. Its rankings are essentially a matter of size — how much IP address space ends up transiting each provider?

Among the interesting data points: Level 3 has overtaken Sprint for the #1 spot, Global Crossing has continued its rapid climb to become #3, Telia Sonera has grown steadily, and, broadly, Asia is a huge source for growth.

Who hosts Warhammer Online?

With the recent launch of EA/Mythic’s Warhammer Online MMORPG comes my usual curiosity about who’s providing the infrastructure.

Mythic has stated publicly that all of the US game servers are located in Virginia, near Mythic’s offices. A couple of traceroutes seem to indicate that they’re in Verizon, almost certainly in colocation (managed hosting is rare for MMOGs), and seem to have purely Verizon connectivity to the Internet. The webservers, on the other hand, look to be split between Verizon, and ThePlanet in Dallas. FileBurst (a single-location download hosting service) is used to serve images and cinematics.
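
For the curious, this sort of check is nothing fancy: resolve the hostname, look at its reverse DNS, and glance at the tail end of a traceroute. A rough sketch follows; the hostname is a placeholder rather than the actual server:

```python
# Rough sketch of the kind of check described above: resolve a hostname, look
# at its reverse DNS, and glance at a traceroute. The hostname is a placeholder
# -- substitute the server you're curious about. The last step requires the
# traceroute utility to be installed.

import socket
import subprocess

HOST = "www.example-game.com"   # placeholder, not the actual server

ip = socket.gethostbyname(HOST)
print(f"{HOST} resolves to {ip}")

try:
    ptr_name, _, _ = socket.gethostbyaddr(ip)
    print(f"reverse DNS: {ptr_name}")   # carrier and hosting names often show up here
except socket.herror:
    print("no reverse DNS record for that address")

# The last few hops of a traceroute usually name the network the server sits on.
subprocess.run(["traceroute", "-m", "20", HOST], check=False)
```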

During the beta, Mythic used BitTorrent to serve files. With the advent of full release, it doesn’t appear that they’re depending on peer-to-peer any longer — unlike Blizzard, for instance, which uses public P2P in the form of BitTorrent for its World of Warcraft updates, trading off cost with much higher levels of user frustration. MMO updates are probably an ideal case for P2P file distribution — Solid State Networks, a P2P CDN, has done well by that — and with hybrid CDNs (those combining a traditional distributed model with P2P) becoming more commonplace, I’d expect to see that model more often.

However, I’m not keen on either single data center locations or single-homing, for anything that wants to be reliable. I also believe that gaming — a performance-sensitive application — really ought to run in a multi-homed environment. My favorite “why you should use multiple ISPs, even if you’re using a premium ISP that you love” anecdote to my clients is an observation I made while playing World of Warcraft a few years ago. WoW originally used just AT&T’s network (in AT&T colocation). Latency was excellent — most of the time. Occasionally, you’d get a couple of seconds of network burp, where latency would spike hugely. If you’re websurfing, this doesn’t really impact your experience. If you’re playing an online game, you can end up dead. When WoW switched to Internap for the network piece (remaining in AT&T colo), overall latencies went up — but the latencies were still well below the threshold of problematic performance, and more importantly, the latencies were rock-solidly in a narrow window of variability. (This is the same reason multi-homed CDNs with lots of route diversity deliver better consistency of user experience than single-carrier CDNs.)

Companies like Fileburst, by the way, are going to be squarely in the crosshairs of the forthcoming Amazon CDN. Fileburst will do 5 TB of delivery at $0.80 per GB — $3,985/month. At the low end, they’ll do 100 GB or less at $1/GB. The first 100 MB of storage is free, then it’s $2/MB. They’ve got a delivery infrastructure at the Equinix IBX in Ashburn (Northern Virginia, near DC), extensive peering, but any other footprint is vague (they say they have a six-location CDN service, but it’s not clear whether it’s theirs or if they’re reselling).

If Amazon’s CDN pricing is anything like the S3 pricing, they’ll blow the doors off those prices. S3 is $0.15/GB for space and $0.17/GB for the first 10 TB of data transfer. So delivering 5 TB worth of content out of a 1 GB store would cost me $5,785/month with Fileburst, and about $850 with Amazon S3. Even if the CDN premium on data transfer is, say, 100%, that’d still be only $1,700 with Amazon.
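
Working through that arithmetic explicitly (using the list prices quoted above; the 100% CDN premium is just my assumption):

```python
# The comparison above, worked through explicitly. The Fileburst and S3 figures
# are the list prices quoted in this post; the "CDN premium" is just my guess.

delivery_gb = 5 * 1000        # 5 TB of delivery
store_mb = 1 * 1000           # a 1 GB store

# Fileburst: ~$3,985 for 5 TB of delivery; first 100 MB of storage free, then $2/MB
fileburst_delivery = 3985
fileburst_storage = (store_mb - 100) * 2                  # 900 MB * $2/MB = $1,800
fileburst_total = fileburst_delivery + fileburst_storage  # $5,785/month

# Amazon S3: $0.15/GB-month for storage, $0.17/GB for the first 10 TB of transfer
s3_total = (store_mb / 1000) * 0.15 + delivery_gb * 0.17  # ~$850/month

# Hypothetical CDN premium of 100% on the transfer price
amazon_cdn_estimate = (store_mb / 1000) * 0.15 + delivery_gb * 0.17 * 2   # ~$1,700/month

print(f"Fileburst: ${fileburst_total:,.0f}  S3: ${s3_total:,.0f}  "
      f"Amazon CDN (guess): ${amazon_cdn_estimate:,.0f}")
```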

Amazon has a key cloud trait — elasticity, basically defined as the ability to scale to zero (or near-zero) as easily as scaling to bogglosity. It’s that bottom end that’s really going to give them the potential to wipe out the zillion little CDNs that primarily have low-volume customers.
