Help me update my HFSC shaper scripts for fw4/nftables

@Hudra - That's great, will have a look. The script seems to be working well, thanks for your good work :slight_smile:

https://www.waveform.com/tools/bufferbloat?test-id=247be765-1dad-41ae-8d9b-e76374a36ebe


What would be an ideal length for the pfifo queue? Any ideas? Currently using 1 console.

PFIFOMIN=5 ## Minimum number of packets in pfifo
PACKETSIZE=655 # Bytes per game packet avg
MAXDEL=5 # Ms we try to keep max delay below for game packets after burst

This gives a queue length of around 100 packets.
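For reference, here is a rough sketch of how a limit in that ballpark could be derived from the three variables above. The `RATE` variable (shaped rate in kbit/s) is my assumption and is not shown in the excerpt, so treat this as illustrative only:

```shell
# Hypothetical sketch: derive a pfifo packet limit from a delay target.
# RATE is an assumed variable (kbit/s); the posted excerpt doesn't show it.
PFIFOMIN=5      # minimum number of packets in the pfifo
PACKETSIZE=655  # average bytes per game packet
MAXDEL=5        # target max delay in ms after a burst
RATE=90000      # shaped rate in kbit/s (assumption)

# Bytes transmittable in MAXDEL ms at RATE kbit/s: MAXDEL * RATE / 8.
# Divide by the average packet size to convert to a packet count.
LIMIT=$(( PFIFOMIN + MAXDEL * RATE / (8 * PACKETSIZE) ))
echo "$LIMIT"
```

With these example values the result lands near the reported queue length, but the exact number depends entirely on the assumed rate.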

Is it possible for netem to delay only one ip address?

Zia

For optimal gaming performance on a single console, I guess the ideal pfifo queue length should be as small as possible to minimize latency without causing packet loss. Start with the default settings and adjust based on your experience, monitoring for any packet loss or latency issues. A queue length of around 100 packets could be reasonable if it maintains game stability without latency spikes. It's best to experiment and tweak the settings during actual gameplay to find the perfect balance for your specific network conditions.

I personally find that using fq_codel for games like Warzone feels better for my connection, although I can't quantify this improvement with numbers. It just "feels" better...


Netem is only applied on the high-priority realtime class (1:11). So only packets marked with

ef 
cs5
cs6
cs7

or UDP packets for IPs specified in:

REALTIME4=""
REALTIME6=""

So with GSO/GRO, packets can grow to 64 KB in size... which means measuring queue size in packets is inherently not a good idea if the goal is reasonable latency under load... at the very least consider a bfifo (specified in bytes, not packets), since for a fixed-rate link that can be sized reasonably well: e.g. size the queue to contain X ms worth of bytes, and you know the worst-case delay it can introduce is that same X ms.
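To make that sizing rule concrete, a minimal sketch (variable names and values are mine, not from the script): pick a delay budget X in ms, and with the rate in kbit/s the byte limit is simply X * RATE / 8.

```shell
# Sketch: size a bfifo to hold X ms worth of bytes at a fixed shaped rate.
# X_MS and RATE are illustrative values, not taken from the script.
X_MS=10       # worst-case delay budget in ms
RATE=45000    # shaped rate in kbit/s
LIMIT=$(( X_MS * RATE / 8 ))  # kbit/s * ms / 8 = bytes
echo "$LIMIT"
# then, for example:
# tc qdisc replace dev eth1 parent 1:11 handle 10: bfifo limit $LIMIT
```

The units work out because kbit/s times ms gives bits, and dividing by 8 gives bytes.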


Agreed. If I remember correctly, we used pfifo because OpenWrt didn't have bfifo among its available options back in the day.


Honestly, I've never really thought about it; I didn't develop the logic myself, but your explanation definitely makes sense, thank you. What can be said is that games mostly use UDP, and at least from my observations, the packets are always smaller than 1500 bytes. This knowledge is mainly based on packet captures from CoD titles. Whether it's different for other games, I can't say.

I'm definitely open to experimenting with bfifo. It should be available now:

tc qdisc replace dev eth1 parent 1:11 handle 10: bfifo limit 15000
root@OpenWrtHudra:~# tc -s qdisc | grep -A 2 "parent 1:11"
qdisc bfifo 10: dev eth1 parent 1:11 limit 15000b
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0

Regarding the logic for calculating the limit in a bfifo setup, what would be the best approach?

Take the max acceptable time delay and multiply by the gameup bandwidth, in the proper units.


Seems right:

DOWNRATE=90000
UPRATE=45000
MAXDEL=25

tc qdisc add dev "$DEV" parent 1:11 handle 10: bfifo limit $((MAXDEL * RATE / 8))
root@OpenWrtHudra:~# tc -s qdisc | grep -A 2 "parent 1:11"
qdisc bfifo 10: dev eth1 parent 1:11 limit 140625b
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0
--
qdisc bfifo 10: dev ifb-eth1 parent 1:11 limit 281250b
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0

Should I really take the gameup variable for both directions or is RATE right?

Yeah, use the rate for the given direction.
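In other words, with the values posted above, each direction gets its own limit, which matches the 140625b and 281250b shown in the tc output:

```shell
# Per-direction bfifo limits from the posted values.
DOWNRATE=90000  # kbit/s, ingress direction (shaped on ifb-eth1)
UPRATE=45000    # kbit/s, egress direction (shaped on eth1)
MAXDEL=25       # ms

UP_LIMIT=$(( MAXDEL * UPRATE / 8 ))      # egress limit in bytes
DOWN_LIMIT=$(( MAXDEL * DOWNRATE / 8 ))  # ingress limit in bytes
echo "$UP_LIMIT $DOWN_LIMIT"
```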

Has anyone tried these?

case $useqdisc in
    "drr")
	tc qdisc add dev "$DEV" parent 1:11 handle 2:0 drr
	tc class add dev "$DEV" parent 2:0 classid 2:1 drr quantum 8000
	tc qdisc add dev "$DEV" parent 2:1 handle 10: red limit 150000 min $REDMIN max $REDMAX avpkt 500 bandwidth ${RATE}kbit probability 1.0
	tc class add dev "$DEV" parent 2:0 classid 2:2 drr quantum 4000
	tc qdisc add dev "$DEV" parent 2:2 handle 20: red limit 150000 min $REDMIN max $REDMAX avpkt 500 bandwidth ${RATE}kbit probability 1.0
	tc class add dev "$DEV" parent 2:0 classid 2:3 drr quantum 1000
	tc qdisc add dev "$DEV" parent 2:3 handle 30: red limit 150000  min $REDMIN max $REDMAX avpkt 500 bandwidth ${RATE}kbit probability 1.0
	## with this send high priority game packets to 10:, medium to 20:, normal to 30:
	## games will not starve but be given relative importance based on the quantum parameter
    ;;

    "qfq")
	tc qdisc add dev "$DEV" parent 1:11 handle 2:0 qfq
	tc class add dev "$DEV" parent 2:0 classid 2:1 qfq weight 8000
	tc qdisc add dev "$DEV" parent 2:1 handle 10: red limit 150000  min $REDMIN max $REDMAX avpkt 500 bandwidth ${RATE}kbit probability 1.0
	tc class add dev "$DEV" parent 2:0 classid 2:2 qfq weight 4000
	tc qdisc add dev "$DEV" parent 2:2 handle 20: red limit 150000 min $REDMIN max $REDMAX avpkt 500 bandwidth ${RATE}kbit probability 1.0
	tc class add dev "$DEV" parent 2:0 classid 2:3 qfq weight 1000
	tc qdisc add dev "$DEV" parent 2:3 handle 30: red limit 150000  min $REDMIN max $REDMAX avpkt 500 bandwidth ${RATE}kbit probability 1.0
	## with this send high priority game packets to 10:, medium to 20:, normal to 30:
	## games will not starve but be given relative importance based on the weight parameter
    ;;
esac

At the time I created it, we tried to get drr or qfq included by suggesting it, but they weren't actually available.

I do use qfq on my desktop machines at home and it is effective. If the qdiscs are available now, it's potentially worth a try.

DONE!
qdisc noqueue 0: dev lo root refcnt 2
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc mq 0: dev eth0 root
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc pfifo_fast 0: dev eth0 parent :1 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc mq 0: dev eth1 root
 Sent 180636826595 bytes 130597404 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc pfifo_fast 0: dev eth1 parent :1 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
 Sent 180636826595 bytes 130597404 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc mq 0: dev eth2 root
 Sent 8047998862 bytes 41811753 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc pfifo_fast 0: dev eth2 parent :1 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
 Sent 8047998862 bytes 41811753 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc hfsc 1: dev br-lan root refcnt 2 default 13
 Sent 4718 bytes 18 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc drr 2: dev br-lan parent 1:11
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
Segmentation fault
root@FriendlyWrt:/etc#

I get this error for drr.

root@FriendlyWrt:/etc# ls -lha /lib/modules/$(uname -r)/ | grep sch
-rw-r--r-- 1 root root   38K Apr  3 14:50 sch_atm.ko
-rw-r--r-- 1 root root   41K Apr  3 14:50 sch_cake.ko
-rw-r--r-- 1 root root   32K Apr  3 14:50 sch_cbq.ko
-rw-r--r-- 1 root root   17K Apr  3 14:50 sch_choke.ko
-rw-r--r-- 1 root root   16K Apr  3 14:50 sch_codel.ko
-rw-r--r-- 1 root root   24K Apr  3 14:50 sch_drr.ko
-rw-r--r-- 1 root root   27K Apr  3 14:50 sch_dsmark.ko
-rw-r--r-- 1 root root   25K Apr  3 14:50 sch_fq.ko
-rw-r--r-- 1 root root   23K Apr  3 14:50 sch_fq_codel.ko
-rw-r--r-- 1 root root   26K Apr  3 14:50 sch_gred.ko
-rw-r--r-- 1 root root   33K Apr  3 14:50 sch_hfsc.ko
-rw-r--r-- 1 root root   16K Apr  3 14:50 sch_hhf.ko
-rw-r--r-- 1 root root   47K Apr  3 14:50 sch_htb.ko
-rw-r--r-- 1 root root   11K Apr  3 14:50 sch_ingress.ko
-rw-r--r-- 1 root root   17K Apr  3 14:50 sch_mqprio.ko
-rw-r--r-- 1 root root   18K Apr  3 14:50 sch_multiq.ko
-rw-r--r-- 1 root root   25K Apr  3 14:50 sch_netem.ko
-rw-r--r-- 1 root root   19K Apr  3 14:50 sch_pie.ko
-rw-r--r-- 1 root root  8.6K Apr  3 14:50 sch_plug.ko
-rw-r--r-- 1 root root   18K Apr  3 14:50 sch_prio.ko
-rw-r--r-- 1 root root   35K Apr  3 14:50 sch_qfq.ko
-rw-r--r-- 1 root root   22K Apr  3 14:50 sch_red.ko
-rw-r--r-- 1 root root   20K Apr  3 14:50 sch_sfb.ko
-rw-r--r-- 1 root root   21K Apr  3 14:50 sch_sfq.ko
-rw-r--r-- 1 root root   21K Apr  3 14:50 sch_tbf.ko
-rw-r--r-- 1 root root   18K Apr  3 14:50 sch_teql.ko

I'm running FriendlyWrt at the moment and it has the qdiscs.

RTNETLINK answers: Invalid argument
Error: Specified class not found.
RTNETLINK answers: Invalid argument
Error: Specified class not found.
adding fq_codel qdisc for non-game traffic
RTNETLINK answers: Invalid argument
Error: Specified class not found.
RTNETLINK answers: Invalid argument
Error: Specified class not found.
adding fq_codel qdisc for non-game traffic
Section @zone[1] (wan) IPv4 fullcone enabled for zone 'wan'
Section @zone[1] (wan) IPv6 fullcone enabled for zone 'wan'
Section @rule[9] (Reject-IPv6) is disabled, ignoring section
Automatically including '/usr/share/nftables.d/ruleset-post/dscptag.nft'
Automatically including '/usr/share/nftables.d/table-post/20-miniupnpd.nft'
Automatically including '/usr/share/nftables.d/chain-post/dstnat/20-miniupnpd.nft'
Automatically including '/usr/share/nftables.d/chain-post/forward/20-miniupnpd.nft'
Automatically including '/usr/share/nftables.d/chain-post/srcnat/20-miniupnpd.nft'

This is the error I get for qfq.

I think I misunderstood qfq at the time; the max weight is something like 1024. Try weights 800, 400, 100 instead of 8000, 4000, 1000, etc.
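A hedged sketch of what the qfq branch might look like with the weights scaled down into that range (untested; only the weight values are changed from the snippet above):

```shell
# Sketch: same qfq class setup as before, with weights inside qfq's valid range.
tc qdisc add dev "$DEV" parent 1:11 handle 2:0 qfq
tc class add dev "$DEV" parent 2:0 classid 2:1 qfq weight 800
tc class add dev "$DEV" parent 2:0 classid 2:2 qfq weight 400
tc class add dev "$DEV" parent 2:0 classid 2:3 qfq weight 100
```

The relative ratios (8:4:1) stay the same, so the intended prioritization is preserved.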

I think also that qfq requires that we use tc filters to move the packets into the individual classes, which I don't think I implemented in the script. It wouldn't be hard, but it requires writing about 3 lines of tc filter, which is, shall we say, not very user-friendly.
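For illustration, those three filter lines could look something like this, assuming (hypothetically) that the game packets carry fwmarks 1 through 3; the actual match would have to follow however the script really marks traffic:

```shell
# Hypothetical: steer packets with fwmark 1/2/3 into the three qfq classes.
tc filter add dev "$DEV" parent 2:0 protocol ip prio 1 handle 1 fw classid 2:1
tc filter add dev "$DEV" parent 2:0 protocol ip prio 2 handle 2 fw classid 2:2
tc filter add dev "$DEV" parent 2:0 protocol ip prio 3 handle 3 fw classid 2:3
```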

I've just done some brief testing, and it seems that 'bfifo' is only available in the version I compile myself.

In my VM with version 23.05.3, unfortunately, 'bfifo' is not available.

It's also absent in the current official snapshot.

Unfortunately, it's a bit difficult to determine why the kmod is not available except in my build, as pfifo and bfifo are part of kmod-sched, kmod-sched-core, or the kernel itself and do not appear as individual qdisc modules.

Here is the output from ls -lha /lib/modules/$(uname -r)/ | grep sch on my device where 'bfifo' is available:

-rw-r--r--    1 root     root       34.8K Mar 25 23:00 sch_cake.ko
-rw-r--r--    1 root     root       11.4K Mar 25 23:00 sch_codel.ko
-rw-r--r--    1 root     root       16.4K Mar 25 23:00 sch_drr.ko
-rw-r--r--    1 root     root       18.5K Mar 25 23:00 sch_fq.ko
-rw-r--r--    1 root     root       14.6K Mar 25 23:00 sch_fq_pie.ko
-rw-r--r--    1 root     root       20.8K Mar 25 23:00 sch_gred.ko
-rw-r--r--    1 root     root       26.0K Mar 25 23:00 sch_hfsc.ko
-rw-r--r--    1 root     root       36.7K Mar 25 23:00 sch_htb.ko
-rw-r--r--    1 root     root        8.5K Mar 25 23:00 sch_ingress.ko
-rw-r--r--    1 root     root       12.9K Mar 25 23:00 sch_multiq.ko
-rw-r--r--    1 root     root       20.5K Mar 25 23:00 sch_netem.ko
-rw-r--r--    1 root     root       13.6K Mar 25 23:00 sch_pie.ko
-rw-r--r--    1 root     root       12.9K Mar 25 23:00 sch_prio.ko
-rw-r--r--    1 root     root       16.6K Mar 25 23:00 sch_red.ko
-rw-r--r--    1 root     root       17.4K Mar 25 23:00 sch_sfq.ko
-rw-r--r--    1 root     root       15.3K Mar 25 23:00 sch_tbf.ko
-rw-r--r--    1 root     root       13.5K Mar 25 23:00 sch_teql.ko

Yeah, that's why the script uses pfifo and the drr or qfq never got debugged as I remember.

I'm a bit embarrassed now, but it was my mistake. Bfifo should be available in version 23.05.3 and in the current snapshot. I just tested it again.

My mistake was that I ran the command on my test VMs even though the script wasn't running and no parent qdisc was set up. The error message then led me further astray:

root@OpenWrt:~# tc qdisc replace dev eth1 parent 1:11 handle 10: bfifo limit 15000
Error: Failed to find specified qdisc.