CeroWrt II - would anyone care?

Although the plan says 60/60 Mbps, it's not consistent and also typically not overprovisioned. I decided to use 54/54 as that is 90% of 60/60, which seems to work for that connection. I may bump it to 57/57 (95%) or 60/60, but definitely not 72/72.
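The percentages work out as a quick shell calculation (rates in kbit/s, the unit SQM expects):

```shell
nominal=60000                               # nominal plan rate, kbit/s
echo "90%: $(( nominal * 90 / 100 )) kbit/s"   # -> 90%: 54000 kbit/s
echo "95%: $(( nominal * 95 / 100 )) kbit/s"   # -> 95%: 57000 kbit/s
```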

Hah. So they are factoring in the base distance to their test server for that. That's not correct.... It's the jitter figure that matters more. I will drop them a note...

works for me (and them!). Thx so much for running that quick series of tests!

While I agree that jitter is the biggest problem, absolute RTT also matters... e.g. awkwardness in VoIP scales with absolute delay. Add to that the fact that the 'solution' to jitter is de-jittering, which delays all packets so that each one experiences a considerable fraction of the worst-case delay, essentially translating jitter into static delay.
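A toy illustration of that last point, with made-up per-packet one-way delays (all numbers hypothetical): a simple de-jitter buffer holds every packet until the worst-case delay has elapsed, so the effective delay for every packet becomes the maximum.

```shell
# Hypothetical one-way delays (ms) for six packets of a flow
delays="20 35 22 50 21 28"
# A de-jitter buffer plays out each packet at the worst-case delay,
# so the static delay all packets experience is max(delays):
printf '%s\n' $delays | sort -n | tail -n 1   # -> 50
```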

Whether you use qosify or sqm-scripts to instantiate cake, the run-time cost should not differ, unless both use different cake options.


Absolutely :grinning: except the relevant delay is not from your home to the speed test server but rather to the game server. So it's hard to know what to do with this timing info. It's relevant, but at best a proxy for the true quantity of interest.

Agreed. Now, the waveform test uses Cloudflare's CDN, which for most users will resolve to a nearby data center, so the delay measured in the test is an estimate of the best-case delay. If static delay is critical and the well-connected CDN is XXms "away", it is not very likely that game servers will be (much) closer than XXms... Also, the criterion for "works well for gaming" is * 95th Percentile Latency < 40 ms, so jitter will naturally affect the result as well. IMHO that is not unreasonable, even though all such attempts to describe complex phenomena/situations by a single indicator are approximate at best.
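For the curious, the 95th-percentile criterion can be checked against raw latency samples with a nearest-rank percentile; a rough sketch (the sample values below are hypothetical):

```shell
# Hypothetical loaded-latency samples in ms
samples="12 15 13 44 14 16 13 90 15 14 13 15 14 16 13 15 14 13 15 14"
n=$(printf '%s\n' $samples | wc -l)
rank=$(( (n * 95 + 99) / 100 ))            # nearest-rank 95th percentile
printf '%s\n' $samples | sort -n | sed -n "${rank}p"
# For these samples this prints 44, i.e. > 40 ms: fails the gaming criterion
```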


Still would prefer to focus on useful things.

Route 666 and...

Depending on the game, a match lasts between 15 and 20 minutes. The recommended dedicated ports (3659/3074) are open. When capturing its flows, we observe approximately 70,000 small packets generating around 10 to 12 MB of traffic in total. I find it frustrating not to be able to take advantage of the rule-based optimizations of the sqm-scripts / cake / qosify tooling (respect to the developer, again thank you). In short, I do not understand why, despite a perfect DSLReports result, perfect jitter, perfect data and a perfect ping, I get exactly the same result as not optimizing anything and staying in stock ISP-router mode. I am always behind in the game and always react late, so I lose both on input and on output, and this despite all the optimization and testing. Daniel here can testify! What kind of definitive test could show me the reason for this delay everywhere when it comes to playing in real time???
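Rough arithmetic on the figures quoted above (70,000 packets over a 15-20 minute match, 10-12 MB total; the exact duration and size assumed below are illustrative) shows the order of magnitude involved:

```shell
pkts=70000
secs=$(( 18 * 60 ))        # assume an ~18-minute match
bytes=$(( 11 * 1000000 ))  # assume ~11 MB total
echo "$(( pkts / secs )) pkt/s average"       # tens of packets per second
echo "$(( bytes / pkts )) bytes/packet average"
```

That is a trickle in bandwidth terms (well under 1 Mbit/s of tiny packets), which is why throughput-oriented tuning cannot help this traffic; only queueing delay and path latency can.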

RBL blacklist

WORM blacklist


@dtaht I received my Belkin RT3200. When I tried to set up separate instances of SQM for upload (cake + layer_cake.qos) and download (fq_codel + simple.qos), I do not see SQM working on the download side. Please find my /etc/config/sqm below:

config queue
        option interface 'wan'
        option verbosity '5'
        option qdisc_advanced '1'
        option qdisc_really_really_advanced '1'
        option egress_ecn 'NOECN'
        option squash_ingress '1'
        option squash_dscp '1'
        option debug_logging '1'
        option ingress_ecn 'ECN'
        option linklayer 'ethernet'
        option overhead '44'
        option upload '20000'
        option qdisc 'cake'
        option script 'layer_cake.qos'
        option iqdisc_opts 'ingress besteffort nat dual-dsthost'
        option eqdisc_opts 'egress diffserv4 nat dual-srchost ack-filter'
        option enabled '1'
        option download '0'

config queue
        option enabled '1'
        option interface 'wan'
        option download '600000'
        option upload '0'
        option debug_logging '0'
        option verbosity '5'
        option qdisc 'fq_codel'
        option script 'simple.qos'
        option linklayer 'ethernet'
        option overhead '44'

In system log I only see:

Fri Dec  3 10:06:35 2021 user.notice SQM: Starting SQM script: layer_cake.qos on wan, in: 0 Kbps, out: 20000 Kbps
Fri Dec  3 10:06:35 2021 daemon.err modprobe: failed to find a module named act_ipt
Fri Dec  3 10:06:35 2021 daemon.err modprobe: failed to find a module named act_ipt
Fri Dec  3 10:06:36 2021 user.notice SQM: layer_cake.qos was started on wan successfully

"tc -s qdisc show" output:

qdisc fq_codel 0: dev eth0 root refcnt 2 limit 10240p flows 1024 quantum 1518 target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
 Sent 523119173 bytes 1047389 pkt (dropped 0, overlimits 0 requeues 11)
 backlog 0b 0p requeues 11
  maxpacket 1518 drop_overlimit 0 new_flow_count 986 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc noqueue 0: dev lan1 root refcnt 2
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc noqueue 0: dev lan2 root refcnt 2
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc noqueue 0: dev lan3 root refcnt 2
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc noqueue 0: dev lan4 root refcnt 2
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc cake 802c: dev wan root refcnt 2 bandwidth 20Mbit diffserv4 dual-srchost nat nowash ack-filter split-gso rtt 100ms noatm overhead 44
 Sent 651225 bytes 2315 pkt (dropped 1, overlimits 2300 requeues 0)
 backlog 0b 0p requeues 0
 memory used: 44232b of 4Mb
 capacity estimate: 20Mbit
 min/max network layer size:           29 /    1500
 min/max overhead-adjusted size:       73 /    1544
 average network hdr offset:           14

                   Bulk  Best Effort        Video        Voice
  thresh       1250Kbit       20Mbit       10Mbit        5Mbit
  target         14.5ms          5ms          5ms          5ms
  interval        110ms        100ms        100ms        100ms
  pk_delay          0us       1.57ms          0us          6us
  av_delay          0us        251us          0us          0us
  sp_delay          0us         18us          0us          0us
  backlog            0b           0b           0b           0b
  pkts                0         2314            0            2
  bytes               0       652291            0          292
  way_inds            0           52            0            0
  way_miss            0          132            0            2
  way_cols            0            0            0            0
  drops               0            1            0            0
  marks               0            0            0            0
  ack_drop            0            0            0            0
  sp_flows            0           13            0            1
  bk_flows            0            1            0            0
  un_flows            0            0            0            0
  max_len             0        15376            0          166
  quantum           300          610          305          300

"ip link show" output:

wan@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc cake state UP mode DEFAULT group default qlen 1000

ifb4wan: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 32

Currently sqm only allows a single instance per interface, sorry.


Thanks for your reply. I have reported this issue to SQM Scripts project.

Would it be possible to set up fq_codel on the download side manually, using commands in /etc/rc.local instead of /etc/config/sqm? If yes, can you please guide me on how to set this up?

As I wrote over on GitHub, the quickest path for you would be to replace the ingress() function in layer_cake.qos with the relevant parts of simple.qos; just make sure to replace $QDISC in there with fq_codel. You might need/want to copy a few more things from simple.qos, but in general it should get you going. Or get a beefier router that can sustain cake in both directions.
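For reference, a rough, untested sketch of what a by-hand ingress setup in /etc/rc.local could look like, loosely mirroring what simple.qos does on ingress (interface names, rate and overhead here are assumptions, not the poster's actual values):

```shell
#!/bin/sh
# Sketch: shape ingress on wan at ~600 Mbit/s with an HTB + fq_codel
# combination, redirected through an ifb device.
WAN=wan
IFB=ifb4wan
RATE=600mbit

ip link add name $IFB type ifb 2>/dev/null
ip link set dev $IFB up

# Redirect all ingress traffic arriving on $WAN to the ifb device
tc qdisc add dev $WAN handle ffff: ingress
tc filter add dev $WAN parent ffff: protocol all u32 match u32 0 0 \
    action mirred egress redirect dev $IFB

# HTB shaper with an fq_codel leaf on the ifb (download direction)
tc qdisc add dev $IFB root handle 1: htb default 10
tc class add dev $IFB parent 1: classid 1:10 htb rate $RATE \
    overhead 44 linklayer ethernet
tc qdisc add dev $IFB parent 1:10 fq_codel ecn
```

The real scripts handle more corner cases (squashing DSCP, cleanup on restart, etc.), so copying from simple.qos as suggested above remains the safer route.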


@moeller0 Thank you for the suggestion. I copied the relevant functions and created a hybrid script as you suggested. It seems to be working.

For anyone looking to try the hybrid script please go to https://github.com/tohojo/sqm-scripts/issues/142#issuecomment-986287189 . Thank you.

Wired Ethernet (Linux, Firefox): https://www.waveform.com/tools/bufferbloat?test-id=c8124b82-e4f5-4791-b2fc-125630e9499a

5 GHz WiFi 6 / 802.11ax (Channel 149, 80MHz) (Android, Firefox): https://www.waveform.com/tools/bufferbloat?test-id=58d7d420-8bdd-4930-a436-9d0ef2466df0


BTW, this is why SQM was designed to handle user-written .qos scripts in the first place: to allow unusual configurations without everybody having to re-invent everything over and over again.
In retrospect, however, it might have been better to force the user to set up one instance per direction individually; that way your specific request would have been a trivial re-configuration. (On the other hand, the current approach works reasonably well for the usual set-ups and is slightly easier to configure.)
For a long time, sqm actually allowed defining multiple scripts per interface (which would also have allowed your configuration), but that turned out not to be robust in the face of hotplug and friends, so it got "neutered" to allow only one instance per interface (which I believe was the right decision).


@dtaht As far as Windows is concerned, does fq_codel have to be deeply integrated into the OS (or the NT kernel specifically), making us dependent on Microsoft to implement it? Or is it something that could be provided as a regular third-party Windows driver/software that anyone can install?

I am asking specifically about Windows because whenever I use a VPN (Mullvad, WireGuard) on Windows 10 (currently 21H2), I get massive bufferbloat, but I do not get that in Arch Linux. The Linux WireGuard interface seems to have the fq_codel qdisc enabled on it ("ip link show" shows fq_codel on the interface). I don't know if that is the main reason there is no high loaded latency with VPN on Linux, but I think it may at least be a factor.
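For anyone wanting to check this on their own Linux box, the qdisc on a tunnel interface can be inspected directly (the interface name wg0 below is just an example; output depends on your system):

```shell
# Show the qdisc attached to the WireGuard interface
tc qdisc show dev wg0
# System-wide default qdisc used for newly created interfaces
cat /proc/sys/net/core/default_qdisc
```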

Is there any way to make this work better on light CPUs like most of us are using in our routers? The MT7622 as discussed here is already "high end" in comparison.

Looking at "simple.qos", which part is the most CPU-intensive? The HTB qdisc, the fq_codel, or the fact that we have to mangle each packet?


Typically the most costly part is the low-latency traffic shaper. That intuitively makes sense, given that ideally the shaper runs after each individual packet has been transmitted at the desired rate (not 100% true: each packet is transmitted at the interface's true speed; it is just that the shaper delays passing the next packet on until the transmission time at the configured shaper rate has elapsed). In practice the shaper can batch a few packets, but if it is to achieve low latency without excessive bursts, it still needs to run often, with tight deadline demands.
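Some back-of-envelope numbers for why those deadlines are tight (rate and packet size below are illustrative, using the 20 Mbit/s egress shaper from the config earlier in the thread):

```shell
rate_bps=20000000   # 20 Mbit/s shaper rate
pkt_bytes=1500      # full-size Ethernet payload
# Serialization time per packet at the shaper rate, in microseconds:
echo $(( pkt_bytes * 8 * 1000000 / rate_bps ))   # -> 600
```

So at 20 Mbit/s the shaper has to make a release decision roughly every 600 µs; scale the rate up to hundreds of Mbit/s and the interval shrinks to tens of microseconds, which is where weak router CPUs start to struggle.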

So if we could (partially) offload the HTB qdisc to hardware using the HW-QoS feature found in a lot of routers, and still use software fq_codel, we should get a significant performance improvement. In other words: we should be able to handle higher line speeds?

Or could we use the MQPRIO qdisc instead of HTB? As I understand it, mqprio is specifically meant for hardware-offloaded QoS.