Show and Tell - SQM settings

OK, so getting an A+ on the Waveform test is done, with the following configuration:

  • fibre ONT > Ethernet to router, VLAN + PPPoE, 550 Mbps/550 Mbps
config queue 'eth1'
	option enabled '1'
	option interface 'pppoe-wan'
	option download '522500'
	option upload '511100'
	option qdisc 'cake'
	option script 'layer_cake.qos'
	option qdisc_advanced '1'
	option linklayer 'ethernet'
	option overhead '50'
	option ingress_ecn 'ECN'
	option egress_ecn 'NOECN'
	option linklayer_advanced '1'
	option tcMTU '2047'
	option tcTSIZE '128'
	option tcMPU '84'
	option debug_logging '0'
	option verbosity '5'
	option squash_dscp '1'
	option squash_ingress '1'
	option linklayer_adaptation_mechanism 'default'
	option qdisc_really_really_advanced '1'
	option iqdisc_opts 'nat dual-dsthost ingress diffserv4'
	option eqdisc_opts 'nat dual-srchost ack-filter diffserv4'

https://www.waveform.com/tools/bufferbloat?test-id=55aaa2e9-6a12-4d82-bcc5-09ec8016b409

but the challenge now is how to better tune the settings for the LibreQoS bufferbloat tests, i.e.:

If anyone has a working setup with similar speeds, I'd like to know your parameters, if you were able to score higher in the LibreQoS tests.


How was it before you made the configs?

You already have a 550/550 connection after all.


Quite impressive for an anonymous decade-old router with two 100 Mbps ports.

It’s worse without cake enabled for sure

The router is an x86 N100 with a 2.5 GbE WAN port and 1 GbE LAN ports. Sorry for any confusion.

Normally you start with shaping upload only (set ingress to 0), stepping up from half of the bandwidth and taking one step back when loaded upload latency starts to grow.
If you are on something like a Cortex-A53 or Ramips target you stop here if download magically improved, but you have virtually unlimited CPU, so you can repeat the exercise with ingress too.
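The step-up search described above can be sketched in Python. This is only a sketch: `measure_latency_ms` stands in for a real loaded-latency measurement (e.g. a flent or irtt run at the candidate shaper rate), and the step size and latency budget are assumptions, not recommendations:

```python
def tune_shaper_rate(line_rate_kbit, measure_latency_ms, idle_latency_ms,
                     budget_ms=5.0, step_kbit=10000):
    """Walk the shaper rate up from half the line rate until loaded
    latency rises more than budget_ms above idle, then back off one step."""
    rate = line_rate_kbit // 2
    best = rate
    while rate < line_rate_kbit:
        if measure_latency_ms(rate) - idle_latency_ms <= budget_ms:
            best = rate          # still within the latency budget
            rate += step_kbit    # try a higher rate
        else:
            break                # loaded latency grew: keep the last good rate
    return best
```

Each accepted step raises the shaper rate; the first step that blows the latency budget ends the search, leaving the last rate that kept latency flat.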

For x86: check if the network card has multiple interrupts (cat /proc/interrupts); if so, replace packet steering with irqbalance.
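Counting the NIC's lines in /proc/interrupts shows whether the driver exposes per-queue interrupt vectors. A minimal sketch (the sample text and the `eth0-rx-0`-style queue naming are assumptions; real drivers name their vectors differently):

```python
import re

def count_nic_irqs(interrupts_text, nic="eth0"):
    """Count IRQ lines in /proc/interrupts whose label mentions the NIC.
    More than one line usually means the driver supports multiple queues."""
    return sum(1 for line in interrupts_text.splitlines()
               if re.search(rf"\b{re.escape(nic)}(-\w+)?\b", line))

# On the router you would read the real file:
#   with open("/proc/interrupts") as f: n = count_nic_irqs(f.read(), "eth0")
sample = """\
           CPU0       CPU1
  27:       1023          0   PCI-MSI  eth0-rx-0
  28:          0       2048   PCI-MSI  eth0-rx-1
  29:        512          0   PCI-MSI  eth0-tx-0
"""
print(count_nic_irqs(sample, "eth0"))  # 3 vectors: irqbalance can spread them
```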


Ah, I did forget to mention that irqbalance and packet steering across all CPUs were enabled.

So, messing around with the settings a bit more, this is the best config that achieved an A+ on LibreQoS:

config queue 'eth1'
	option enabled '1'
	option interface 'pppoe-wan'
	option download '527240'
	option upload '484200'
	option qdisc 'cake'
	option script 'layer_cake.qos'
	option qdisc_advanced '1'
	option linklayer 'ethernet'
	option overhead '48'
	option ingress_ecn 'ECN'
	option egress_ecn 'NOECN'
	option linklayer_advanced '1'
	option tcMTU '2047'
	option tcTSIZE '128'
	option tcMPU '84'
	option debug_logging '0'
	option verbosity '5'
	option squash_dscp '1'
	option squash_ingress '1'
	option linklayer_adaptation_mechanism 'default'
	option qdisc_really_really_advanced '1'
	option iqdisc_opts 'ingress nat dual-dsthost diffserv4'
	option eqdisc_opts 'nat dual-srchost diffserv4'

Posting for anyone else with similar-spec links; this shall serve as a reference config until the mq-cake feature lands in OpenWrt :slight_smile:

I suggested a more pedantic approach: if the network card driver already spreads packets across all cores, there is no practical need to steer them again explicitly.

Squashing DSCP on ingress makes diffserv4 there the same as setting iqdisc_opts 'besteffort', so better set squash_ingress to '0'...

cake will ignore that field and always use ECN; no need to change it, just so you know.

I currently see similar issues. Not sure where they are coming from....


A bit more testing, and this might be the best result yet. I followed @brada4's advice and disabled packet steering but kept irqbalance enabled, and also applied your notes, @moeller0:

config queue 'eth1'
	option enabled '1'
	option interface 'pppoe-wan'
	option download '527240'
	option upload '484200'
	option qdisc 'cake'
	option script 'layer_cake.qos'
	option qdisc_advanced '1'
	option linklayer 'ethernet'
	option overhead '48'
	option ingress_ecn 'ECN'
	option egress_ecn 'ECN'
	option linklayer_advanced '1'
	option tcMTU '2047'
	option tcTSIZE '128'
	option tcMPU '84'
	option debug_logging '0'
	option verbosity '5'
	option squash_dscp '1'
	option squash_ingress '0'
	option linklayer_adaptation_mechanism 'default'
	option qdisc_really_really_advanced '1'
	option iqdisc_opts 'ingress nat dual-dsthost'
	option eqdisc_opts 'nat dual-srchost ack-filter'
	option itarget '5ms'
	option etarget '5ms'

Dave Taht (RIP) gave advice about using ack-filter, but only if your connection is very asymmetric: something like your download being 20x higher than your upload speed, which usually only happens on DOCSIS. I don't believe it should be used on a 550/550 connection like yours.

Oddly, having it in seemed to produce better results. My upload is only stable at a much lower speed than the download speed I set.

Yes, you completely control the upload direction; the download direction is mostly managed by your provider, and you just delay/drop/ECN-mark a few packets to slightly affect stream speeds and avoid saturating their buffers.

ACK filtering supposedly helps if:
a) your egress is much lower than your ingress, so egress ACK traffic (typically around 1/40 of the forward data traffic) consumes a noticeable fraction of egress capacity; the ACK filter then frees some egress capacity for actual data traffic, or
b) you are on a quite bursty medium where the ACK traffic is bunched up and you essentially release a number of ACKs for a given flow simultaneously, so the ACKs are truly redundant and do not help flow control any better than replacing a flight of such ACKs with the most recent one.

In practice, I would assess its utility on a given link by running tests with and without ACK filtering back to back.
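Condition a) can be put in rough numbers using the ~1/40 ACK-to-data ratio quoted above (a back-of-the-envelope sketch, not a measurement; the DOCSIS rates are an invented example):

```python
def ack_fraction_of_egress(down_mbit, up_mbit, ack_ratio=1/40):
    """Estimate what share of upload capacity pure ACK traffic
    consumes when the download direction is saturated."""
    return (down_mbit * ack_ratio) / up_mbit

print(f"{ack_fraction_of_egress(550, 550):.1%}")   # symmetric fibre: 2.5%
print(f"{ack_fraction_of_egress(400, 20):.1%}")    # asymmetric DOCSIS: 50.0%
```

On the symmetric 550/550 link the ACK load is a few percent of egress, while on a strongly asymmetric link it can eat half the upload, which is where ack-filter pays off.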


Puzzled: in your shaper settings egress is roughly 90% of ingress, but in the LibreQoS test we see more like 20% of ingress. What happened there, did you test with different shaper settings?

That’s a good observation. The variance in this LibreQoS testing leads me to believe there are bugs or capacity constraints with the tool. Waveform’s test seems much more reliable.

One thing Waveform does is report the difference in mean RTTs between idle and the two load conditions, while the LibreQoS test reports the more sensitive difference between the 5th-percentile RTT during idle and the 90th-percentile RTT during the load epochs; that way, latency spikes have a much larger influence on the LibreQoS result. That IMHO is a good thing, as such latency spikes also degrade the perceived quality of a number of interactive use cases, but it does create situations where the Waveform test looks "better" simply by reporting a different metric. From the "Detailed Statistics" field you can calculate what Waveform would report: +0.5 down and +3.8 up...
Personally, I think I like the LibreQoS test better even though it will be less repeatable.

Regarding shaping only one direction: that can occasionally help, e.g. on heavily asymmetric links where an underpowered router might not be able to shape the higher-capacity direction anywhere close to the contracted rate... (Personally, I would always test whether the link might not actually feel more useful with an SQM instance at lower capacity than without SQM at higher capacity.)
Ingress shaping is a bit more approximate than egress shaping, because there is always the chance of traffic backspilling into the ISP-side buffers of the access link. The smaller the difference between the true bottleneck rate and the shaper rate, the more likely backspill events become, and since ISP-side buffers are typically both oversized and undermanaged, these will likely result in noticeable latency spikes.
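The margin logic can be put in numbers. As a sketch, shaving the conventional 5% starting margin off the contracted rate reproduces the download value used in the first config above; the right margin is link-specific, not a rule:

```python
def shaper_rate_kbit(contracted_mbit, margin=0.05):
    """Set the shaper a safety margin below the contracted rate, so the
    bottleneck queue forms on the local router rather than in the
    ISP-side buffer (avoiding backspill)."""
    return int(contracted_mbit * 1000 * (1 - margin))

print(shaper_rate_kbit(550))        # 522500, the first config's download
print(shaper_rate_kbit(550, 0.12))  # a more conservative choice
```

A larger margin trades away throughput for a lower chance of backspill-induced latency spikes.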

If it's interesting as a tuning point, here are the results with egress=0.

It appears to top out at a lower bandwidth than the ingress; however, if I use speedtest.net or fast.com, the measured speed is around 538/538.

@moeller0, based on this, what would you set the egress value to?

About the router spec: it's an x86 N100, which should have enough processing power to hit around 2 Gbps, I think. But I don't know if there are any specific packages other than irqbalance I should have considered; I just took the defaults in the firmware selector.

Also, another unexplainable test: on the laptop's LAN connection I disabled IPv4 and ran IPv6 only, and the results seem better than when both IPv4 and IPv6 were enabled. I also tested IPv4 only, and the loaded latency for both was around 19 ms, so perhaps the higher upload latency came from doing IPv4 on the upload and IPv6 on the download?