General Discussion of SQM

As so often, the answer is: it depends... Assuming you do not run out of CPU cycles, increasing the configured overhead will decrease the achievable speedtest goodput. Once the configured shaper per-packet overhead (SPPO) is larger than the real per-packet overhead (RPPO), decreasing goodput is all a further increase will do. If the overhead is set too low, increasing it can either decrease latency under load or just decrease goodput, depending on packet size and on the relationship between the true bottleneck gross rate and the shaper gross rate.

So, assuming shaper gross rate (SGR) = bottleneck gross rate (BGR), increasing SPPO will reduce bufferbloat/latency under load until SPPO = RPPO; any further increase will just reduce goodput.

If SGR < BGR, then for any given packet size there will be situations where SPPO < RPPO does not result in bufferbloat, provided SGR is sufficiently far below BGR. In a situation like this, using smaller packets will typically make bufferbloat rear its ugly head again.

Bufferbloat is only truly squelched if SGR <= BGR AND SPPO >= RPPO; other combinations tend either to be sensitive to the packet-size distribution of your traffic or to sacrifice massive amounts of bandwidth.
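To make that concrete, here is a minimal sketch with cake; the interface name, rates and the overhead value are made-up placeholders, not a recommendation:

# Hypothetical 300 Mbit/s line: keep SGR a bit below BGR and pick SPPO generously above any plausible RPPO
tc qdisc replace dev eth0 root cake bandwidth 285mbit overhead 44
# bandwidth 285mbit -> SGR roughly 5% below the 300 Mbit/s BGR
# overhead 44       -> SPPO, deliberately >= the likely RPPO for this link

With SGR <= BGR and SPPO >= RPPO both satisfied, the only goodput you give up is the amount by which the two settings overshoot reality.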

Is that in any way understandable and helpful?

It is possible, albeit unlikely, that the ISP manages buffers in up- and downstream well enough to make bufferbloat on the WAN link a non-issue. Try to saturate your link with small packets and multiple flows and see how your ISP's hardware deals with that :wink:

Is there a tool to do that?

I would happily pay a subscription to a public flent server...

Think of overhead as a correction factor tied to packet size. If your packets are almost always large, it makes little difference, but with small packets it can make a big difference. Games and VoIP usually use small packets. So if that kind of traffic is a big fraction of your link, getting the overhead right is important.
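To put rough, hypothetical numbers on that (assume 34 bytes of real overhead that the shaper does not know about):

# 1500-byte packets: 1534 B on the wire vs. 1500 B counted -> shaper is ~2% too optimistic
# 150-byte packets:   184 B on the wire vs.  150 B counted -> shaper is ~23% too optimistic
echo "scale=3; 1534/1500; 184/150" | bc
# prints 1.022 and 1.226

A 2% error usually disappears into the shaper margin; a 23% error means the shaper can feed packets noticeably faster than the link drains them, so the ISP's buffer fills up again.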

When in doubt, just look up the best guess for the overhead of your link type in the wiki and add 10 bytes. Done. Spend more time tuning the speed and qdisc options.
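On OpenWrt that boils down to something like this (option names from memory, and 44 = 34 from the wiki + 10 bytes of margin is purely illustrative; use the value for your own link type):

uci set sqm.@queue[0].linklayer='ethernet'
uci set sqm.@queue[0].overhead='44'
uci commit sqm
/etc/init.d/sqm restart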

How much? I could set one up and charge people's credit cards.

I was testing with fast.com, because dslreports does not want to scale to 300M/300M...

If you have 300/300, overhead is unlikely to be a big deal. Unless you are running a call center with thousands of simultaneous calls, or you have a LAN party going for an entire high school or something :grinning:

You should do the guess-and-add-10 rule from above.

I disabled SQM entirely and see no buffer bloat...

Well, I guess pricing depends on the monthly data allowance, the number of speedtests allowed within a given time, and the rate of the speed test, for starters.

Well, what do you think you would need? What would provide a decent value to you?

I ask because setting up a server would be easy, but paying for bandwidth etc. is probably the bigger issue. So I'd have to do some research on costs.

Either you aren't able to saturate your link, or you have an ISP that has decent queue management.

For example, DOCSIS 3.1 specifies an AQM (PIE) in the cable modems.

It is an FTTH connection. I will give it another shot once I figure out how to saturate it with small packets.

What's your baseline ping to say 1.1.1.1?

ping 1.1.1.1
PING 1.1.1.1 (1.1.1.1): 56 data bytes
64 bytes from 1.1.1.1: seq=0 ttl=57 time=17.391 ms
64 bytes from 1.1.1.1: seq=1 ttl=57 time=18.866 ms
64 bytes from 1.1.1.1: seq=2 ttl=57 time=17.063 ms
64 bytes from 1.1.1.1: seq=3 ttl=57 time=17.199 ms
64 bytes from 1.1.1.1: seq=4 ttl=57 time=17.107 ms
64 bytes from 1.1.1.1: seq=5 ttl=57 time=17.026 ms
64 bytes from 1.1.1.1: seq=6 ttl=57 time=19.049 ms
^C
--- 1.1.1.1 ping statistics ---
7 packets transmitted, 7 packets received, 0% packet loss
round-trip min/avg/max = 17.026/17.671/19.049 ms

What's your approximate location?

According to traceroute, I am very far from 1.1.1.1. Here is a closer one:

ping 8.8.8.8
PING 8.8.8.8 (8.8.8.8): 56 data bytes
64 bytes from 8.8.8.8: seq=0 ttl=54 time=7.359 ms
64 bytes from 8.8.8.8: seq=1 ttl=54 time=6.719 ms
64 bytes from 8.8.8.8: seq=2 ttl=54 time=6.525 ms
64 bytes from 8.8.8.8: seq=3 ttl=54 time=6.615 ms
64 bytes from 8.8.8.8: seq=4 ttl=54 time=8.132 ms
64 bytes from 8.8.8.8: seq=5 ttl=54 time=6.646 ms
^C
--- 8.8.8.8 ping statistics ---
6 packets transmitted, 6 packets received, 0% packet loss
round-trip min/avg/max = 6.525/6.999/8.132 ms

That appears to be a legitimate fiber connection. I take it there is an optical network terminal somewhere near/inside your home or garage area?

If you have no bufferbloat you can generally ignore SQM. It seems rare these days, but especially if you have a recent install, maybe your ISP has a queue manager in its first-hop router.
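A rough way to check (the server name is a placeholder; you need an iperf3 endpoint that can actually sink ~300 Mbit/s of UDP): flood the link with several flows of small packets, which is the worst case, and watch latency at the same time:

ping -c 70 8.8.8.8 > ping_under_load.txt &
iperf3 -c <iperf3-server> -u -b 75M -l 200 -P 4 -t 60   # small UDP datagrams; -b is per stream, so 4 x 75M ~ the 300 Mbit/s line rate; add -R for the download direction
wait
# if the pings stay near the ~7 ms idle baseline, the ISP's queue management is doing fine;
# if they balloon to tens or hundreds of ms, re-enable SQM

flent's rrul test automates the load-plus-latency measurement (and plots it), but needs a netperf server on the other end, which brings us back to the public-flent-server problem above.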

Yes, it is :slight_smile:

I am gonna verify that eventually.

The only devices I have been able to verify so far that have queue management built in are the Riverbed Steelheads (Cake in commercial equipment). That's not saying others don't, but working in my datacenter nothing else has appeared. We do constantly upgrade to the latest and greatest switches, servers, etc., even for carriers.