MIT researchers say all network congestion-control algorithms are unfair



We’re all using more data than ever before, and the bandwidth caps that ISPs put on us don’t do much to slow people down – they’re mostly a tool to make more money. Legitimate network management has to go beyond penalizing people for using more data, but MIT researchers say the algorithms that are supposed to handle it don’t work as well as we thought. A recently published study suggests it may be impossible for these algorithms to distribute bandwidth fairly.

We’ve all been there, struggling to get enough bandwidth during peak usage to stream a video or download a large file. Your devices don’t know how fast to send packets because they lack information about upstream network conditions. If they send too slowly, available bandwidth goes to waste; if they send too fast, packets get dropped, and retransmitting them adds delay. You have to rely on the network to adapt, which can be frustrating even though academics and companies have spent years developing congestion-control algorithms meant to soften the impact of network saturation. These systems, like the BBR algorithm designed by Google, try to keep queuing delays on the network low so that everyone gets a share of the bandwidth.
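Conceptually, these algorithms boil down to a feedback loop: probe for more bandwidth, and back off when the congestion signal (lost packets or rising delay) appears. The sketch below is a deliberately simplified illustration of that loop, not BBR or any real protocol; the link capacity, delay target, and increase/decrease constants are arbitrary assumptions.

```python
# A minimal sketch of delay-based congestion control: a sender probes for
# bandwidth and backs off when queuing delay grows. All names and constants
# here are illustrative assumptions, not a real protocol.

LINK_CAPACITY = 100.0   # packets per tick the bottleneck can drain
BASE_RTT = 1.0          # round-trip delay with an empty queue (ticks)
DELAY_TARGET = 0.2      # extra queuing delay the sender tolerates (ticks)

def simulate(ticks=200):
    rate = 10.0   # sender's current pacing rate (packets per tick)
    queue = 0.0   # packets waiting at the bottleneck

    for _ in range(ticks):
        # Bottleneck: arrivals minus what the link can drain this tick.
        queue = max(0.0, queue + rate - LINK_CAPACITY)

        # The only signal the sender sees: its measured round-trip delay.
        measured_delay = BASE_RTT + queue / LINK_CAPACITY

        if measured_delay - BASE_RTT < DELAY_TARGET:
            rate += 1.0   # additive increase while delay stays low
        else:
            rate *= 0.8   # multiplicative decrease when the queue builds

    return rate, queue

if __name__ == "__main__":
    final_rate, final_queue = simulate()
    print(f"rate ~ {final_rate:.1f} pkts/tick, queue ~ {final_queue:.1f} pkts")
```

Run long enough, the sender’s rate hovers around the link capacity while the queue stays short – which is the whole point of delay-based schemes.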

But can this kind of system ever be fair? The new study argues that there will always be at least one sender that loses out. That unlucky connection gets almost no data while the others split what’s available, a problem known as “starvation”. The team built a mathematical model of network congestion and ran the congestion-control algorithms in use today through it. No matter how they set things up, every scenario ended with at least one user being starved.


The problem seems to be the overwhelming complexity of the internet. The algorithms rely on signals such as packet loss and rising delay to estimate congestion, but packets can be lost or delayed for reasons that have nothing to do with congestion. That non-congestive “jitter” is unpredictable, the researchers say, and it can push an algorithm into a spiral that ends in starvation. It led the team to classify these systems as “delay-convergent algorithms,” a category for which starvation is unavoidable.
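To get an intuition for how a signal the sender can’t interpret leads to lopsided outcomes, consider a crude toy – an assumption-laden sketch, not the researchers’ formal model. Two senders apply the same delay rule from the earlier example, but one of them sees random jitter it cannot distinguish from queuing. In this toy, the jittery sender keeps backing off and ends up with a tiny fraction of the link.

```python
# Illustrative toy (assumed model, not the MIT analysis): two senders share a
# bottleneck and use the same delay rule, but sender B's delay samples include
# random jitter unrelated to queuing, so it backs off while A fills the link.
import random

LINK_CAPACITY = 100.0
DELAY_TARGET = 0.2

def run(ticks=2000, jitter=0.3, seed=1):
    random.seed(seed)
    rate_a, rate_b, queue = 10.0, 10.0, 0.0

    for _ in range(ticks):
        queue = max(0.0, queue + rate_a + rate_b - LINK_CAPACITY)
        queuing_delay = queue / LINK_CAPACITY

        # Sender A sees only the true queuing delay.
        if queuing_delay < DELAY_TARGET:
            rate_a += 1.0
        else:
            rate_a *= 0.8

        # Sender B's measurement is inflated by jitter it cannot tell apart
        # from congestion, so it backs off far more often than it should.
        if queuing_delay + random.uniform(0, jitter) < DELAY_TARGET:
            rate_b += 1.0
        else:
            rate_b *= 0.8
        rate_b = max(rate_b, 0.1)  # floor so the rate never hits exactly zero

    return rate_a, rate_b

if __name__ == "__main__":
    a, b = run()
    print(f"sender A ~ {a:.1f} pkts/tick, sender B ~ {b:.1f} pkts/tick")
```

In this toy, sender A ends up with nearly the whole link while sender B idles near its floor – a caricature of starvation, and far simpler than the formal argument in the paper, but it shows why ambiguous delay signals are the culprit.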

Study author and MIT graduate student Venkat Arun says the failure modes the team identified have likely existed on the internet for years; the fact that no one spotted them speaks to the difficulty of the problem. Existing algorithms may be unable to avoid starvation, but the researchers believe a solution is possible. They are continuing to explore other classes of algorithms that might do better, perhaps by tolerating greater delay variation across a network. The same modeling tools could also help untangle other unsolved problems in networked systems.


Sharon D. Cole