Bandwidth versus speed

I am trying to figure out whether I should pay my ISP more for more bandwidth. I currently pay for 100 Mbps (fiber).

As far as I understand, the numbers ISPs advertise are bandwidth, not speed.

I can easily measure my current bandwidth by using one of the plethora of online "speed" tests.

But how can I measure speed? I can ping a website and see how long that takes, but I'm not sure that's the right way to go about it.

Any tips on deciding whether to pay more on a monthly basis?


The terms bandwidth, rate, and speed are mostly used interchangeably; at the very least, they are not well defined.

ISPs typically quote throughput numbers, often gross (raw link) rates, but increasingly net rates like the ones you can confirm yourself in online speed tests (partly because in the EU, BEREC explicitly recommends advertising net throughput, precisely because it is relatively easy to verify).


This depends mostly on what you understand "speed" to mean.

Ah, that is something often called latency or, more recently, "responsiveness". It is an important component of how "reactive" internet applications feel.
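One hedged way to put a number on this notion of "speed": collect ping round-trip times both while the link is idle and while a large download saturates it, then compare the two distributions (a big gap indicates bufferbloat). A minimal Python sketch; the RTT samples below are made-up illustration values, in practice you would feed it numbers extracted from `ping` output:

```python
import statistics

def rtt_stats(samples_ms):
    """Summarize round-trip-time samples (milliseconds)."""
    s = sorted(samples_ms)
    return {
        "min": s[0],
        "median": statistics.median(s),
        "p95": s[int(0.95 * (len(s) - 1))],
        "max": s[-1],
    }

# RTTs collected while the link is idle vs. while saturated (made-up values)
idle = [12.1, 11.8, 12.4, 12.0, 13.0, 12.2, 11.9, 12.5, 12.3, 12.7]
loaded = [12.5, 45.0, 120.3, 88.7, 210.4, 65.2, 150.1, 95.5, 30.2, 175.8]

print("idle:  ", rtt_stats(idle))
print("loaded:", rtt_stats(loaded))
```

If the "loaded" median and p95 are an order of magnitude above the idle ones, that latency-under-load problem, not raw bandwidth, is usually what makes the connection feel slow.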

Have a look at sqm-scripts and its wiki pages for a way of keeping latency (especially latency under load) under control even when your link is saturated. That is often more effective than getting a faster link (because once you saturate the faster link, you need to control latency under load all over again).
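For reference, on OpenWrt sqm-scripts is configured via /etc/config/sqm. A minimal sketch of what that might look like; the interface name and shaper rates here are assumptions, you would use your own WAN interface and set the rates to roughly 90% of your measured throughput (the wiki explains how to tune this):

```
# /etc/config/sqm (illustrative values only)
# download/upload are in kbit/s, set to ~90% of measured rates
config queue 'eth1'
        option enabled '1'
        option interface 'eth1'
        option download '90000'
        option upload '90000'
        option qdisc 'cake'
        option script 'piece_of_cake.qos'
```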

So, generally speaking, only pay more if you perceive the added performance (in whatever dimension you value most) as worth the extra money.

Personally, I would first test whether the 100 Mbps link feels good enough with properly configured sqm-scripts, and only consider paying more for a faster link if it does not; but I am hardly unbiased here.

If you try it and run into issues not addressed in the wiki article, feel free to ask about the details in this thread.


Thanks a lot for the tip. I will definitely try sqm and see if that works.

Overall, I do not have too much of an issue with my internet. The only gripe is that Zoom calls are sometimes fairly degraded. When the calls are degraded, I see a lot of packet loss.

But I have no evidence this is due to my own internet connection. It might be that Zoom's servers get overloaded, or some other issue.

If this happens when your network is otherwise idle (and you are testing over wired Ethernet to rule out RF issues on WiFi), then SQM is unlikely to help. If, however, the problems only/mostly occur when there is more load on your network, sqm might be just what you need :wink:


Ok, thanks for that. I didn't try over Ethernet, but the computer is right next to the router, unobstructed, so I assumed WiFi wouldn't be the issue. What do you think?

I still think it worthwhile to test over a direct Ethernet cable, if only to rule out WiFi as a potential culprit. The idea is to keep things as simple as possible when starting out, and to introduce complexity step by step to see what causes issues.


My considerations would be:

  • The down:up ratio, as a very asymmetric ratio is usually a bigger problem than bandwidth vs. speed.
  • In general, being able to move to a newer technology (fibre) is a +1 for me. Not necessarily because the new technology is inherently better, but because it represents fresh ISP investment: they have longer-term plans, so I'd expect better reserves, better infrastructure, and better operations.
  • Cost vs. benefits. Honestly, I don't think a 1G fibre link, for example, can ever be constantly utilized at 100% by normal usage, so strictly speaking it is not needed; but if you can afford the extra, why not.

(I recently moved and kept the same ISP, but had to change from 1G/1G fibre to 1G/30M cable, and it has been a huge disappointment from a reliability and support point of view. I never had a single issue with the fibre link or the fibre modem; I could switch it to bridge mode locally, and it kept working that way even after a power outage. The cable modem, on the other hand, loses WAN access every other day and reverts to non-bridge mode, which can only be re-enabled by support (you need to call up through L1->L2->L3 support before reaching someone who understands what you are talking about and actually has the skills to do it), not to mention the 30M upload bandwidth :frowning:)


I think if you can ensure that latency (think ping times and responsiveness) never spikes, then 100 Mbit/s is surely adequate for everything that is typically needed.

Personally, I live out in the sticks in the Scottish Highlands and have to rely on an LTE connection with variable bandwidth, ranging from roughly 10 Mbit/s to 70 Mbit/s. With CAKE set up to handle this variable capacity and keep latency low, this gives my wife and me everything we need: Teams/Zoom is always smooth for work; browsing, downloading, Windows updates, and Netflix and Prime at 4K all work fine (even all at the same time).

I could probably increase our available bandwidth by getting an external directional antenna or considering Starlink, but I see no reason to bother when this low-cost solution (just one data SIM through Vodafone at circa £30/month) gives us everything we need.

One has to ask oneself: how much of the extra bandwidth is actually needed for day-to-day usage, rather than just to produce high numbers in a synthetic speed test?

Obviously certain use cases will warrant super-fast connections, but I imagine I am not alone in not needing much more than, say, 25 Mbit/s for typical internet-based activities.


I tend to agree. Over here, a family of five shares a nominally 100/40 link (in reality 116/37) without issues, and we were equally satisfied with 50/10 before. This is with sqm-scripts (layer_cake.qos/cake configured for internal-IP fairness), as might be expected (I am a junior partner in sqm-scripts and believe in the "eat your own dog food" approach).

I guess I would think about the "better antenna" approach, if only because it should be less susceptible to weather effects, no?

+1; this is a decision each network administrator needs to make for their own users. There is no right or wrong answer here, only more or less satisfied users :wink:

Average access rates are slowly increasing, and so are the use cases for higher access rates (this goes hand in hand, as it is rather uneconomic to offer services that only a small fraction of users could even theoretically use, unless those services command very high prices). For example, Netflix recommends >= 15 Mbps for 4K streaming, up from 5 Mbps for HD, with similar figures for Amazon and Disney+ video streaming, while game streaming with e.g. NVIDIA GeForce NOW recommends up to 40 Mbps for 4K@60Hz. So there is a slow creep towards requiring more throughput capacity. I take this as an indicator that today, below 25-50 Mbps, some use cases will already require (mild) compromises, and I expect this to slowly increase over time (but within reason: for a number of the factors determining required capacity, we are already deep into "diminishing returns" territory; e.g., the step up from SD to HD was quite noticeable to me and welcome, but HD to 4K was less noticeable*).

*) So little, in fact, that our one 4K input source operates as a zwift device, and the "big screen" (for us, 43" counts as big) is still driven by an HD source, as testing the 4K source revealed no clear improvement. Yes, these are clearly subjective assessments, and different folks will come to different conclusions, but I think the trend is clear.
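To put those streaming recommendations in perspective against the original poster's 100 Mbps link, a quick back-of-the-envelope calculation (the per-stream rates are the public recommendations quoted above; real-world overhead would shave a little off these counts):

```python
# How many concurrent streams fit, in theory, into a 100 Mbps link?
link_mbps = 100
recommended = {"HD video": 5, "4K video": 15, "4K game streaming": 40}

for use_case, mbps in recommended.items():
    print(f"{use_case}: fits {link_mbps // mbps} concurrent streams")
```

Even the most demanding case above leaves room for two simultaneous streams on 100 Mbps, which is why latency under load, rather than raw capacity, tends to be the binding constraint for a typical household.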


I would agree; that was probably one of the small number of global "wow" factors, like monochrome to color and CRT to LCD. But 4K… so the blacks got blacker 🤷‍♂️


Our findings show that the sole inclusion of quality labels can strongly impact subjective rating behavior and the overall opinion on UHD quality; also, visual differences between HD and UHD video were rarely noticeable by the subjects.

Anyone else wonder, like me, whether they are wasting money by paying for the Netflix 4K plan? It's not like we have gigantic televisions.
