Network neutrality is the premise that the company from which you obtain access to the internet, your Internet Service Provider (ISP), should treat all content they transfer on your behalf the same: they should not make decisions about what traffic is allowed or blocked or make some traffic fast and other traffic slow. This is something that many people, myself included, believe is incredibly important and worth fighting to maintain.
In the past few weeks, we have heard a lot about this idea, as the Federal Communications Commission (FCC) has been debating whether to roll back many of these protections. To understand a world without network neutrality, one needs to look no further than cable television, where you can buy packages that allow you to watch different channels and where some providers have contracts to broadcast channels that others do not.
Imagine if different ISPs provided access to different websites, with Netflix only being accessible on one but Amazon Prime only being accessible on another. In such a world, companies like Cox Communications could purposefully block access to HBO Now, the streaming internet version of HBO, in order to convince people to buy their expansive cable television packages.
This is actually not so outlandish a concept; we see this happen in other countries, such as Portugal, and we have even seen companies like Facebook and organizations like Wikipedia get on board with the idea of subsidizing internet access in the developing world, as long as the only website that people will be able to access is their own.
These anti-competitive scenarios are the typical examples used to explain why we need network neutrality, and they are then extended to the claim that “once you start allowing an ISP to filter and block content, they will start blocking and censoring content for even worse reasons.” For what it’s worth, I actually agree with this line of reasoning; however, I do not believe it is the strongest argument available. If nothing else, it assumes that all ISPs also provide services that compete with established websites.
In fact, many ISPs simply wish to provide internet access as well as they can, and yet they still feel that network neutrality hinders their ability to compete. When you start from the worst-case scenario and work backwards, you end up getting pushback, with ISPs claiming “well, we wouldn’t go that far, but we still need to make changes.” To truly argue against these ISPs, we have to understand how they charge for bandwidth.
To start our analysis, let’s imagine “JP&P” was the only ISP. Given their monopoly, they could charge an almost arbitrary amount for their services, but let’s assume they are charging a fair amount: some reasonable markup on their underlying costs.
Like the wires that transport electricity, network connections are limited in how much data they can transport at once, and so, in a manner very similar to electricity, the cost of internet increases with the amount of usage during peak hours, whenever that might be.
We would therefore expect an ISP to charge customers based on how much data they use, but they usually don’t. Instead, they tend to offer easier-to-predict unlimited plans, where they charge a fixed amount of money for an arbitrary amount of data.
One could imagine them simply charging the amount of money that would be needed to maintain the wire to your apartment. That would be very clear, and would make sure that the ISP never loses money on your connection, even if you aren’t using all of it.
This would work fine, until a competitor, “B-Steady,” starts providing the same service. Knowing that JP&P is making a lot of money on your connection, they could lower their prices; as these companies compete, they would eventually charge close to average usage prices.
In this scenario, if the average customer needs 100GB of bandwidth during prime time hours, everyone would be billed a little more than what it costs the ISP to maintain infrastructure capable of carrying 100GB of traffic per customer, whether they personally used much more or less.
This works fine, except that the usage of internet bandwidth is actually extremely uneven, to the point where some users are using much more bandwidth than the vast majority. To account for this, ISPs instead figure out a usage level that most customers will never go over — maybe 10GB — and then charge customers who use more in 10GB increments.
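As a sketch of the tiered billing scheme described above (the 10GB tier size and the dollar amounts here are hypothetical, not taken from any real ISP’s rate card):

```python
import math

# Hypothetical tiered billing: a base plan covers the first tier of data,
# and usage beyond it is charged in fixed-size increments.
TIER_GB = 10             # size of each increment (assumed)
BASE_PRICE = 30.00       # price of the base plan covering the first tier (assumed)
INCREMENT_PRICE = 10.00  # price per additional 10GB increment started (assumed)

def monthly_bill(usage_gb: float) -> float:
    """Bill = base price, plus one increment charge per extra 10GB started."""
    if usage_gb <= TIER_GB:
        return BASE_PRICE
    extra_increments = math.ceil((usage_gb - TIER_GB) / TIER_GB)
    return BASE_PRICE + extra_increments * INCREMENT_PRICE

# Most customers never exceed the first tier, so they see a predictable
# flat rate; heavy users pay for the extra capacity they consume.
print(monthly_bill(8))   # light user: 30.0
print(monthly_bill(35))  # heavy user: 30 + 3 increments = 60.0
```

The point of the structure is that the vast majority of customers experience this as a flat-rate plan, while the few heavy users still cover their own costs.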
If B-Steady does not do this, and JP&P does, then the vast majority of customers who use only 10GB of data will switch to the cheaper service from JP&P, leaving B-Steady with only the users who consume large amounts of data. But that also raises B-Steady’s average usage, forcing them to drastically increase their prices.
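This adverse-selection dynamic can be made concrete with a toy calculation (the customer counts, usage figures, and cost-per-GB below are invented purely for illustration):

```python
# Toy model of adverse selection against a flat-rate ISP.
COST_PER_GB = 0.25  # assumed infrastructure cost per GB of peak usage

# A customer base where most people use little data and a few use a lot.
light_users = [10] * 90   # 90 customers using 10GB each
heavy_users = [400] * 10  # 10 customers using 400GB each

def break_even_flat_price(usages):
    """The flat monthly price at which the ISP exactly covers its costs."""
    return COST_PER_GB * sum(usages) / len(usages)

# With everyone on one flat-rate plan, heavy users are subsidized by
# the many light users, so the flat price stays modest.
everyone = light_users + heavy_users
print(break_even_flat_price(everyone))     # 12.25

# If a competitor lures away the light users with tiered pricing, the
# remaining pool is all heavy users, and the flat price must jump.
print(break_even_flat_price(heavy_users))  # 100.0
```

In this toy example, losing the light users forces the flat-rate ISP’s break-even price from $12.25 to $100.00, which is exactly the pressure that pushes both competitors toward tiered plans.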
Does this sound familiar? If you guessed insurance, you would be correct! This model of pricing, where ISPs attempt to predict the average cost of serving a user and then charge a relatively fixed fee to most users, is very similar to the way a health insurance company attempts to predict the average medical costs of its members and then charges a relatively fixed premium.
In a way, this is also how users conceptualize using these services: many, if not most, people would rather pay a relatively fixed and predictable amount every month than have to pay attention to their usage during the month and potentially be surprised by a larger-than-average bill; that being said, unlike with health insurance, it is generally well within the power of the user to avoid overage costs.
To the customers, the wish to predict their bill ahead of time and the wish to not be billed for service they did not use are at odds with each other. ISPs have figured out a threshold price below which people would rather just pay a flat rate than risk being surprised later, and that threshold has determined the typical lowest-level service plans.
After some time, ISPs will get reasonably good at predicting people’s needs. They will know that during some months, the average changes, maybe due to holidays, and will even find multi-year cycles that affect usage, such as the quadrennial presidential elections. The averages they use to calculate their service level offerings will take all this into account.
This is only what happens in a boring world, though. In practice, the internet is subject to continual disruption, with new services coming out that change everything, such as Netflix, which now accounts for more than a third of all internet usage, or Apple’s FaceTime, which provided a reason for users to use a lot of bandwidth, even on their cellular connections.
When this happens, the obvious solution is for the ISP to just let users who are using more bandwidth get charged more by going over the base plan, possibly multiple times over; however, as we just noted, people usually prefer to be able to predict their monthly costs, so ISPs are wary to do this for any significant numbers of users.
Instead, an ISP could build a new cost model and increase the base cost for their service, which, for people who don’t care about Netflix or FaceTime, would seem extremely unfair, as they have now increased their rates for everyone.
The final option is that they could instead divide their users into two categories: those who want to use FaceTime, and those who do not. This is exactly what the real AT&T did in 2012; they only allowed users of their more expensive Mobile Share Plan to use this new service.
Of course, in order for this to even be possible, the ISP must be able to discriminate between kinds of traffic in the first place. This implies that the ISP knows what you are doing with your internet connection, which allows them to build profiles of your usage and correlate it with that of other people. This is a form of surveillance that they are then incentivized not only to perform themselves, but to require services and websites to assist with in order for their traffic to not be slowed or blocked.
At this point, we have reached what I consider to be a fundamentally unacceptable premise; whether you agree or disagree with the business models that have been posited for ISPs to use in a world without network neutrality, I hope that we can at least agree that we don’t want our ISP to be able to know everything that we are doing at all times.
To avoid this, we need the power of government, and we need the FCC to maintain, and potentially even expand, the principle of network neutrality. This is not an argument of “What if people are evil and do bad things in the future, such as censor arbitrary content?” This is a simple matter of the economic incentives inherent to a competitive marketplace pushing companies to take these actions. We have seen large players engage in these specific business practices in the past, and pulling them off requires them to do something many people never want to occur.
Unfortunately, network neutrality comes at a cost: the ability for ISPs to provide service plan offerings that are fair to people who do not need to access much of the internet while still supporting users who want to use much more bandwidth. A world with true network neutrality is a world which, in the long term, is forced to charge for usage in smaller increments. I actually think this is a good thing, and that a world without network surveillance is well worth having bills as complex as our bills for electricity or water.
Jay Freeman wants to offer an explanation as to the logic behind both sides of Net Neutrality.
Thank you! Great article, I haven’t been able to find anything but very simplistic descriptions of net neutrality, and this helped me understand some of the complexity of the issue.
Hi to all you UCSB students from an alum who started working on the internet in 1976 at UCSB (then called the Arpanet and before HTML had been invented). The internet got along fine for decades without strong government controls (which is what so-called net neutrality is). Then for two years the gov got in the business of controls by executive fiat, then also by executive action got out of the business of controls. The internet did fine before Obama’s rules and we’ll do fine without them now. And you might wonder why big businesses (e.g. Apple and Google) are…