Help prioritizing games with alternative qdisc design

root@OpenWrt:~# tc -s qdisc
qdisc noqueue 0: dev lo root refcnt 2
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc fq_codel 0: dev eth0 root refcnt 2 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 4Mb ecn
 Sent 632472351 bytes 1938953 pkt (dropped 0, overlimits 0 requeues 33)
 backlog 0b 0p requeues 33
  maxpacket 3028 drop_overlimit 0 new_flow_count 1134 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc noqueue 0: dev br-lan root refcnt 2
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc hfsc 1: dev eth0.1 root refcnt 2 default 3
 Sent 6239947 bytes 13109 pkt (dropped 0, overlimits 798 requeues 0)
 backlog 0b 0p requeues 0
Segmentation fault
root@OpenWrt:~# tc -s qdisc
qdisc noqueue 0: dev lo root refcnt 2
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc fq_codel 0: dev eth0 root refcnt 2 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 4Mb ecn
 Sent 666176577 bytes 2037593 pkt (dropped 0, overlimits 0 requeues 35)
 backlog 0b 0p requeues 35
  maxpacket 3028 drop_overlimit 0 new_flow_count 1167 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc noqueue 0: dev br-lan root refcnt 2
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc hfsc 1: dev eth0.1 root refcnt 2 default 3
 Sent 33548297 bytes 70769 pkt (dropped 0, overlimits 1064 requeues 0)
 backlog 0b 0p requeues 0
Segmentation fault
root@OpenWrt:~#

after one game.

This is very similar to the old script if your line is more than 3 Mbps and not too asymmetric. If you are in the lower speed category, this should dramatically improve game play while others on your network run a speedtest or browse. Thanks for your test! How was performance?

If you have time for more tests, you can artificially lower your speeds (upload, say, 800; download 10000) and see how game play is while browsing...

These are my last two tests.

In game I have the impression that the player is more fluid :wink:

I have 70 and 20, but I'm not sure about my overhead. I left it at 37?

Good enough; it varies a bit, but obviously you get outstanding speed tests... If you're off by ~5 bytes it usually makes little difference.

How is it when set like this?


Yes, my two tests are very good :slight_smile: Does it work for all games, or only COD?

You must find out what your game requires and set GAMEUP and GAMEDOWN high enough for the biggest game you play... then it works for all games that use UDP on the special machine you specify in the IP address setting.

upload say 800, download 10000 and see how is game play while browsing...???

In game play, GAMEUP 800
and GAMEDOWN 10000?

Ah, so replace my general speed settings of 70 and 20 with 800 and 10000...?

It would be very helpful to see how it works with

UPRATE=800
DOWNRATE=10000
GAMEUP=500
GAMEDOWN=1500

which emulates a very asymmetric DSL line with slow upload. This is the painful link that @Knomax originally wanted help with.
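As background for anyone following along, here is a minimal sketch of how variables like these typically feed an HFSC hierarchy. The device name, class IDs, and curve parameters are illustrative assumptions, not the actual script:

```shell
WAN=eth0          # assumed WAN device
UPRATE=800        # total upload, kbps
GAMEUP=500        # guaranteed real-time upload for game UDP, kbps

# HFSC root; unclassified traffic defaults to the bulk class 1:3
tc qdisc add dev "$WAN" root handle 1: hfsc default 3
tc class add dev "$WAN" parent 1: classid 1:1 hfsc \
    ls m2 "${UPRATE}kbit" ul m2 "${UPRATE}kbit"
# real-time class: guaranteed bandwidth with bounded delay for game packets
tc class add dev "$WAN" parent 1:1 classid 1:2 hfsc \
    rt m1 "${GAMEUP}kbit" d 25ms m2 "${GAMEUP}kbit"
# link-share class for everything else
tc class add dev "$WAN" parent 1:1 classid 1:3 hfsc \
    ls m2 "$((UPRATE - GAMEUP))kbit"
```

The point of the test values above is that with only 800 kbps up, even a single full-size packet ahead of a game packet adds many milliseconds of delay.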


I just ran the latest script and got this error. Here's the text:

root@OpenWrt:~# /etc/qosscript.sh

This script prioritizes the UDP packets from / to a set of gaming
machines into a real-time HFSC queue with guaranteed total bandwidth

Based on your settings:

Game upload guarantee = 800 kbps
Game download guarantee = 1600 kbps

Download direction only works if you install this on a wired router
and there is a separate AP wired into your network, because otherwise
there are multiple parallel queues for traffic to leave your router
heading to the LAN.

Based on your link total bandwidth, the minimum amount of jitter
you should expect in your network is about:

UP = 3 ms

DOWN = 0 ms

In order to get lower minimum jitter you must upgrade the speed of
your link, no queuing system can help.

Please note for your display rate that:

at 30Hz, one on screen frame lasts: 33.3 ms
at 60Hz, one on screen frame lasts: 16.6 ms
at 144Hz, one on screen frame lasts: 6.9 ms

This means the typical gamer is sensitive to as little as on the order
of 5ms of jitter. To get 5ms minimum jitter you should have bandwidth
in each direction of at least:

7200 kbps

The queue system can ONLY control bandwidth and jitter in the link
between your router and the VERY FIRST device in the ISP
network. Typically you will have 5 to 10 devices between your router
and your gaming server, any of those can have variable delay and ruin
your gaming, and there is NOTHING that your router can do about it.

adding fq_codel qdisc for non-game traffic due to fast link
Cannot find device "eth1"
Cannot find device "eth1"
Cannot find device "eth1"
Cannot find device "eth1"
Cannot find device "eth1"
Cannot find device "eth1"
adding fq_codel qdisc for non-game traffic due to fast link
Cannot find device "eth1"

We are going to add classification rules via iptables to the
POSTROUTING chain. You should actually read and ensure that these
rules make sense in your firewall before running this script.

Continue? (type y or n and then RETURN/ENTER)
y
/etc/qosscript.sh: line 207: [11000: not found
DONE!
sh: red: unknown operand
qdisc noqueue 0: dev lo root refcnt 2
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
qdisc hfsc 1: dev eth0 root refcnt 2 default 3
Sent 2522 bytes 13 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
Segmentation fault
root@OpenWrt:~#

I just wanna say, regarding the last script:

I think it was great even though it only partially worked.
It had the best aim assist so far.
SFQ with SQM had great aim assist as well.

With all other SQMs I never got aim assist, ever,
so for me it's helping.

with these values...

Can you give the output of ip addr show?

after 4-5 games

root@OpenWrt:~# tc -s qdisc
qdisc noqueue 0: dev lo root refcnt 2
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc fq_codel 0: dev eth0 root refcnt 2 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 4Mb ecn
 Sent 301208663 bytes 426651 pkt (dropped 0, overlimits 0 requeues 36)
 backlog 0b 0p requeues 36
  maxpacket 7570 drop_overlimit 0 new_flow_count 8133 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc noqueue 0: dev br-lan root refcnt 2
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc hfsc 1: dev eth0.1 root refcnt 2 default 3
 Sent 207622745 bytes 229981 pkt (dropped 315, overlimits 103513 requeues 0)
 backlog 0b 0p requeues 0
Segmentation fault
root@OpenWrt:~#

Is it good?

This is at what speed? How was the game play? Unfortunately the tc command has a bug, so the output is much less helpful...

Better fluidity in the character movement; the game is good.

The script has some errors... check the syntax.
Lines 207, 225, and 228.
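For reference, the "[11000: not found" error reported earlier usually comes from a missing space after "[": the shell then tries to run a command literally named "[11000". A minimal reproduction (the variable name and value here are guesses, not the actual script's):

```shell
# Minimal reproduction of the "[11000: not found" style error.
UPRATE=11000

# Broken: with no space after "[", the shell expands "[$UPRATE" into the
# literal command name "[11000" and fails:
#   if [$UPRATE -gt 3000 ]; then ...

# Fixed: "[" is itself a command and needs a space on both sides:
if [ "$UPRATE" -gt 3000 ]; then
    echo "fast link"
fi
```

The same missing-space pattern would explain why several lines fail at once if the test was copy-pasted.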

I would do this separately for the up and download directions (instead of if [$UPRATE -lt 3000 -o $DOWNRATE -lt 3000 ]; then, split this into two if statements that only change the clamping value for one direction each). 540 is quite harsh, and with some tunneling on the path, some servers might not be able to send anything at all. And for IPv6 the recommended minimum MTU is 1280, so this trick will not help much with IPv6 traffic (more and more links gain IPv6 capability, which I usually consider a good thing, but in this regard it is not helpful).
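A sketch of that split (a hypothetical fragment, not the actual script; note that the MSS announced in a SYN tells the peer how large the segments sent back to us may be, so the download direction is controlled by clamping outbound SYNs, and vice versa):

```shell
# Hypothetical per-direction MSS clamp; the WAN name, the 3000 kbps
# thresholds, and the 540-byte value follow the thread's discussion.
WAN=eth0

if [ "$DOWNRATE" -lt 3000 ]; then
    # clamp the MSS we advertise in outgoing SYNs, so remote servers
    # send us smaller segments (shrinks download packets)
    iptables -t mangle -A FORWARD -o "$WAN" -p tcp \
        --tcp-flags SYN,RST SYN -j TCPMSS --set-mss 540
fi

if [ "$UPRATE" -lt 3000 ]; then
    # clamp the MSS advertised by remote hosts in incoming SYNs, so our
    # LAN hosts send smaller segments upstream (shrinks upload packets)
    iptables -t mangle -A FORWARD -i "$WAN" -p tcp \
        --tcp-flags SYN,RST SYN -j TCPMSS --set-mss 540
fi
```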

But now you also need to make sure that GSO/GRO are disabled, as otherwise the clamping might not actually fix the lumpiness of the sent data (meta-packets)...

Honestly, at this point in the "game" you will be better off using the MSS clamping approach together with cake and the ack-filter keyword on the non-realtime tiers, as that will split GSO aggregates and deal with ACKs more gracefully, as well as keep different flows isolated from each other.
Mind you, I am not talking about the real-time tiers with their tuning for gaming UDP CPR (constant packet rate) type traffic, but the rest.
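For reference, enabling cake's ACK filter is a one-line qdisc change (a sketch assuming a cake-enabled kernel, with eth0 and the bandwidth as placeholders; bandwidth, ack-filter, and the stronger ack-filter-aggressive are standard cake keywords):

```shell
# Shape upload with cake and filter redundant ACKs; when shaping, cake
# also splits GSO/GRO aggregates by default, which addresses the
# lumpiness concern above.
tc qdisc replace dev eth0 root cake bandwidth 800kbit ack-filter
```

To keep the real-time HFSC tiers, cake could instead be attached as the leaf qdisc of the non-realtime class.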

BTW, ACK traffic is essentially inelastic (once a flow reaches the minimum congestion window) and hence better dealt with by queuing and filtering than by dropping...

And ...

if [ $((UPRATE/DOWNRATE > 5)) -eq 1 ]; then
	## we need to trim ACKs in the upstream direction: we let 100/second through, then drop 90% of the rest
	iptables -A forwarding_rule -p tcp -m tcp --tcp-flags ACK ACK -o $WAN -m length --length 0:100 -m limit --limit 100/second --limit-burst 100 -j ACCEPT
	iptables -A forwarding_rule -p tcp -m tcp --tcp-flags ACK ACK -o $WAN -m length --length 0:100 -m statistic --mode random --probability .90 -j DROP
fi

It didn't attach at all. No rule appears in the Firewall overview.

The MSS is different from the MTU, though; it applies only to TCP, which is layer 4. It's harsh, but as we saw, below 3000 kbps with 1500-byte packets it's impossible to achieve good game play, so it seems worth a try. At this point it's all experimental. You're probably right about doing MSS separately for each direction.

The ACK stuff is much more of a shot in the dark... we found by experiment that a speed test was absolutely flooding the upstream with ACKs and ruining game play. I'm rate limiting it to 100/s, which means acking each packet at 540 MSS, 54000 bytes per second of download, or 400 Mbps... For some reason, even at 16 Mbps download we were seeing 2000 upstream ACKs per second on Knomax's line.

If we can figure out why this ACK flood happens, it would probably help his experience immensely. That just seems broken, right?

Whoops, that should be DOWNRATE/UPRATE!

Good point, but a number of IPv6 services expect to be able to send packets > 540 bytes; I guess tough luck on that gaming link during game play :wink:
But I like the general idea of reducing the packet size to get jitter under more control!

Sure, classic Reno will send one ACK every 2 full MSS packets, that is roughly 1:40 (~1500 * 2 / 64) of the downlink bandwidth; a crazy TCP can even send one ACK per received packet, but more than that is almost certainly a bug. The arguably correct solution is to queue the ACKs and compress/filter them: ACKs are cumulative, so if two ACKs in the queue are compatible (meaning the only meaningful difference between them is the acknowledgement number), the earlier ACK further down the queue is exchanged for the later one, and the later one is dropped from its position. This results in less bandwidth needed for ACKs while ACK clocking continues to work. cake actually has a pretty decent ACK filter implemented :wink:

I guess the issue with rate limiting is that, depending on the time constant, a few concurrent TCP flows can generate bursts of ACKs that on a short timescale look like higher frequencies. But again, I guess you are well aware of the subtleties :wink:

But as far as I can tell, with Reno's one ACK per two full segments, you can drive ~2.4 Mbps with 100 ACKs/second, no?

1500 bytes/packet * 2 segments/ACK * 100 ACKs/second * 8 bits/byte / (1000^2 bits/Mbit) = 2.4 Mbps

Ignoring all overheads here, where did I go wrong? Or better, why do our estimates differ?

That sounds like something worth investigating on its own; maybe torrents? The thing is, at the minimum congestion window of two segments, a flow will still reliably release ACKs.

I agree, that looks somewhat insane; according to my calculations, it would correspond to 48 Mbps of download traffic...
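For what it's worth, the arithmetic behind that 48 Mbps figure can be checked directly (same Reno assumption of one ACK per two full 1500-byte segments):

```shell
# 2000 observed ACKs/s, each acknowledging two full-size 1500-byte segments
ACKS_PER_SEC=2000
MBPS=$(( 1500 * 2 * ACKS_PER_SEC * 8 / 1000000 ))
echo "${MBPS} Mbps"   # prints "48 Mbps"
```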