Instructions on how to create/copy/configure/run dlakelan's Gamer QoS script (or any script)

I've got an issue again...

  1. I couldn't properly remove the bridge from the LAN, I think... What I did was remove the bridge, then create a VLAN and try to link wlan0 and wlan1 to it; the shaper didn't work on the download side...

If there's a proper guide for it, could you link it for me? I didn't understand the part below from https://openwrt.org/docs/guide-user/network/wifi/dumbap

Connect this Dumb AP **LAN** port to the main router's **LAN** port via Ethernet. (Yes - **LAN-to-LAN** - the WAN port of the Dumb AP will not be used.)

If that's not the procedure, do you have a link to the right one?

  2. I added a veth system to your script, and the shaper was working on veth0 and veth1... The problem came when I checked in-game: it was lagging super badly, almost completely frozen. I was playing PUBG Mobile on an iPad over 5 GHz Wi-Fi.
    Do I need to change some DSCP tag for it?

I was having the very bad lag issue with both red and netem... I don't know which one I should go for.
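(On the dumb-AP question above: as far as the wiki goes, the procedure boils down to roughly the following on the AP itself. This is only a sketch assuming the default "lan" interface name, and the 192.168.1.x addresses are placeholders for whatever your main router's LAN subnet actually is.)

# on the AP (not the main router): give it a free static address in the main LAN,
# point its gateway/DNS at the main router, and stop it from serving DHCP
uci set network.lan.ipaddr='192.168.1.2'        # placeholder: a free IP in the main subnet
uci set network.lan.gateway='192.168.1.1'       # placeholder: the main router
uci add_list network.lan.dns='192.168.1.1'      # placeholder: the main router
uci set dhcp.lan.ignore='1'                     # the main router keeps handing out leases
uci commit
/etc/init.d/dnsmasq disable                     # optional: DNS/DHCP stay on the main router
/etc/init.d/network restart
# then cable a LAN port of the AP to a LAN port of the main router (the AP's WAN port stays unused)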

tc qdisc del dev "$DEV" root > /dev/null 2>&1

case $LINKTYPE in
    "atm")
	tc qdisc replace dev "$DEV" handle 1: root stab mtu 2047 tsize 512 mpu 68 overhead ${OH} linklayer atm hfsc default 13
	;;
    "DOCSIS")
	tc qdisc replace dev $DEV stab overhead 25 linklayer ethernet handle 1: root hfsc default 13
	;;
    *)
	tc qdisc replace dev $DEV stab overhead 40 linklayer ethernet handle 1: root hfsc default 13
	;;
esac

For the "other" case (fiber), is the code setting the overhead to 40? As we last checked, 50 was working for me... should I change it to 50?
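(Side note on that question: in the snippet above only the "atm" branch uses the OH variable, while the default branch hard-codes overhead 40. If 50 is what worked for you, a possible tweak, sketched here rather than taken from the upstream code, is to let the default branch use OH as well:)

    *)
	# sketch: use the configured ${OH} value instead of the hard-coded 40
	tc qdisc replace dev $DEV stab overhead ${OH} linklayer ethernet handle 1: root hfsc default 13
	;;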

I should just create a veth system for this script. The script is designed to work when you have a wired-only router; if your router has Wi-Fi built in, it doesn't work.

I can fix this with a veth system, but haven't coded that.

Could you add it when you get some free time?

Thanks for your time

If you've got time to test it :wink:

Grab the latest devel script... there are some config settings which will cause it to create a veth and route packets through it. The assumption is that you have exactly one LAN, not several (i.e. no guest VLANs and such).

To test, set up something like this:


LINKTYPE="ethernet"

USEVETHDOWN=yes
LANBR=br-lan

WAN=eth0.2 # change this to your WAN device name

See how it goes for you. Warning: if I did something dumb, you could lose access to your LAN, so don't have this run on startup... just test it by running it manually. If you lose access to your LAN, you can simply reboot and recover.

Sorry, I fell asleep, it was 4 am...

I'm always ready to test... it would be an honour.

Gonna give this a try and will let you know how it goes.

I changed the upper portion of the script to this:

LINKTYPE="other"
USEVETHDOWN=yes
WAN=eth0.2 # change this to your WAN device name
UPRATE=22000 #change this to your kbps upload speed
LANBR=br-lan
DOWNRATE=22000 #change this to about 80% of your download speed (in kbps)
OH=50 # number of bytes of Overhead on your line (37 is reasonable
      # starting point, better to be too big than too small) probably
      # likely values are between 20 and 50

@dlakelan this was the result:

This script prioritizes the UDP packets from / to a set of gaming
machines into a real-time HFSC queue with guaranteed total bandwidth

Based on your settings:

Game upload guarantee = 3700 kbps
Game download guarantee = 3700 kbps

Download direction only works if you install this on a *wired* router
and there is a separate AP wired into your network, because otherwise
there are multiple parallel queues for traffic to leave your router
heading to the LAN.

Based on your link total bandwidth, the **minimum** amount of jitter
you should expect in your network is about:

UP = 1 ms

DOWN = 1 ms

In order to get lower minimum jitter you must upgrade the speed of
your link, no queuing system can help.

Please note for your display rate that:

at 30Hz, one on screen frame lasts:   33.3 ms
at 60Hz, one on screen frame lasts:   16.6 ms
at 144Hz, one on screen frame lasts:   6.9 ms

This means the typical gamer is sensitive to as little as on the order
of 5ms of jitter. To get 5ms minimum jitter you should have bandwidth
in each direction of at least:

7200 kbps

The queue system can ONLY control bandwidth and jitter in the link
between your router and the VERY FIRST device in the ISP
network. Typically you will have 5 to 10 devices between your router
and your gaming server, any of those can have variable delay and ruin
your gaming, and there is NOTHING that your router can do about it.

adding fq_codel qdisc for non-game traffic
Cannot find device "22000"
sh: lan: unknown operand
Cannot find device "22000"
HFSC: Illegal "m2"
HFSC: Illegal "rt"
Cannot find device "22000"
Cannot find device "22000"
Cannot find device "22000"
Cannot find device "22000"
adding fq_codel qdisc for non-game traffic
Cannot find device "22000"
Cannot find device "22000"
Cannot find device "22000"
Cannot find device "22000"

We are going to add classification rules via iptables to the
FORWARD chain. You should actually read and ensure that these
rules make sense in your firewall before running this script.

Continue? (type y or n and then RETURN/ENTER)
y
iptables: Chain already exists.
ip6tables: Chain already exists.
Bad argument `DSCP'
Try `iptables -h' or 'iptables --help' for more information.
Bad argument `DSCP'
Try `ip6tables -h' or 'ip6tables --help' for more information.
Bad argument `CLASSIFY'
Try `iptables -h' or 'iptables --help' for more information.
Bad argument `CLASSIFY'
Try `ip6tables -h' or 'ip6tables --help' for more information.
YOU MUST PLACE CLASSIFIERS FOR YOUR GAME TRAFFIC HERE
SEND GAME TRAFFIC TO 2:1 (high) or 2:2 (medium) or 2:3 (normal)
Requires use of tc filters! -j CLASSIFY won't work!
DONE!
qdisc noqueue 0: dev lo root refcnt 2
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc fq_codel 0: dev eth0 root refcnt 2 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 4Mb ecn
 Sent 231878455 bytes 219515 pkt (dropped 0, overlimits 0 requeues 145)
 backlog 0b 0p requeues 145
  maxpacket 1514 drop_overlimit 0 new_flow_count 137 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc noqueue 0: dev wlan0 root refcnt 2
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc noqueue 0: dev wlan1 root refcnt 2
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc noqueue 0: dev br-lan root refcnt 2
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc noqueue 0: dev eth0.1 root refcnt 2
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc hfsc 1: dev eth0.2 root refcnt 2 default 13
 Sent 47485 bytes 328 pkt (dropped 0, overlimits 1 requeues 0)
 backlog 0b 0p requeues 0
qdisc fq_codel 8007: dev eth0.2 parent 1:12 limit 10240p flows 1024 quantum 3000 target 4.0ms interval 101.0ms memory_limit 550000b ecn
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 8009: dev eth0.2 parent 1:14 limit 10240p flows 1024 quantum 3000 target 4.0ms interval 101.0ms memory_limit 550000b ecn
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc netem 10: dev eth0.2 parent 1:11 limit 53 delay 1.0ms  7.0ms
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc fq_codel 800a: dev eth0.2 parent 1:15 limit 10240p flows 1024 quantum 3000 target 4.0ms interval 101.0ms memory_limit 550000b ecn
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 8008: dev eth0.2 parent 1:13 limit 10240p flows 1024 quantum 3000 target 4.0ms interval 101.0ms memory_limit 550000b ecn
 Sent 47485 bytes 328 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 1450 drop_overlimit 0 new_flow_count 12 ecn_mark 0
  new_flows_len 1 old_flows_len 3
qdisc fq_codel 0: dev pppoe-wan root refcnt 2 limit 10240p flows 1024 quantum 1518 target 5.0ms interval 100.0ms memory_limit 4Mb ecn
 Sent 2564134 bytes 25725 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0

I have installed:

kmod-veth
kmod-netem
kmod-ipt-ipopt
kmod-nf-nathelper-extra
kmod-sched 
kmod-sched-cake
ip-full 
ipset 
iptables-mod-conntrack-extra 
iptables-mod-extra 
iptables-mod-ipopt 
iptables-mod-nat-extra
iptables-mod-hashlimit

And I was using netem.

Well, 300 ACKs/sec can be quite normal, e.g. if we are talking about dozens of concurrent TCP connections. So cake avoids that issue, but by typically doing flow queueing, so that ACKs for one download flow end up in one upload queue, mostly separated from other flows (and vice versa for the upload direction); then it carefully "merges" ACKs that are compatible, instead of simply dropping them, since not all ACKs of a flow can be merged...
Again, I am confident you know this and engineered that in (say by making sure that at most a small number of flows are ever treated by that hashlimit), but I want to avoid people starting to believe that a cavalier approach to ACK handling is advisable :wink: (to be clear, "cavalier" is not what you do, but what naive readers might take away as the quintessence of casually reading the iptables rules).

Yes, I should have said 300 ACKs per second per flow. Of course Cake will do a better job than my hashlimit rule, because it's got the full power of C to code logic in. But just think about a big download flow. Let's say 400 Mbps: it'll be downloading something like 33000 packets per second and could potentially be sending 17000 ACKs per second, for around 13 Mbps. Now, I'm in favor of some redundancy, but that's ridiculous. If you have, say, a 60 ms ping time, so 30 ms one way, that means there are about 500 ACKs in flight at any one time. If we cut that to 4 in flight... it means 133 ACKs a second instead of 17000.

ACK redundancy is way, way overkill on the modern internet, where packet loss is often less than 1% and bitrates are gigabits, yet packets are still the 1500 bytes designed for 10 Mbps Ethernet.

In testing, massive ACK decimation seems to have little effect on throughput and potentially a big effect on upload jitter and bandwidth.
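(For readers following along, the kind of per-flow ACK limit being discussed looks roughly like the rule below. The numbers, the length cutoff, and the chain name are illustrative guesses, not the exact rule from dscptag.sh.)

# illustrative sketch, not the script's actual rule: once a flow sends more than
# ~300 bare ACKs (small TCP packets) per second, drop the excess
iptables -t mangle -A dscptag -p tcp --tcp-flags ALL ACK -m length --length 0:128 \
    -m hashlimit --hashlimit-above 300/sec --hashlimit-burst 300 \
    --hashlimit-mode srcip,srcport,dstip,dstport --hashlimit-name ackfilter -j DROP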

@mozyes it seems there is a typo or something, let me take a look in a few hours.

Sure, I'm here; send me the text and I will test it out.

It's 4 am here on the west coast of the US, so it will be in 4-5 hrs :wink:

Sure, no issue... I'm up for the next 10 hours at least.

I believe that you deleted the LAN=eth0.1 line at the top; don't do that. Put it back, and I think things will work. If you say to use the veth method, the script will switch the value of LAN further down, but if the variable doesn't exist at all I think it gets confused.
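(A tiny defensive tweak one could add near the top of the script, sketched here with the usual OpenWrt names assumed, so a deleted line doesn't leave the variable empty:)

# fall back to common defaults if these lines were removed from the config block
LAN=${LAN:-eth0.1}
LANBR=${LANBR:-br-lan}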

So the start of the script now looks like:

LINKTYPE="other"
USEVETHDOWN=yes
WAN=pppoe-wan # change this to your WAN device name
UPRATE=22000 #change this to your kbps upload speed
LANBR=br-lan
LAN=eth0.1
DOWNRATE=22000 #change this to about 80% of your download speed (in kbps)
OH=50 # number of bytes of Overhead on your line (37 is reasonable
      # starting point, better to be too big than too small) probably
      # likely values are between 20 and 50

and the result is

This script prioritizes the UDP packets from / to a set of gaming
machines into a real-time HFSC queue with guaranteed total bandwidth

Based on your settings:

Game upload guarantee = 3700 kbps
Game download guarantee = 3700 kbps

Download direction only works if you install this on a *wired* router
and there is a separate AP wired into your network, because otherwise
there are multiple parallel queues for traffic to leave your router
heading to the LAN.

Based on your link total bandwidth, the **minimum** amount of jitter
you should expect in your network is about:

UP = 1 ms

DOWN = 1 ms

In order to get lower minimum jitter you must upgrade the speed of
your link, no queuing system can help.

Please note for your display rate that:

at 30Hz, one on screen frame lasts:   33.3 ms
at 60Hz, one on screen frame lasts:   16.6 ms
at 144Hz, one on screen frame lasts:   6.9 ms

This means the typical gamer is sensitive to as little as on the order
of 5ms of jitter. To get 5ms minimum jitter you should have bandwidth
in each direction of at least:

7200 kbps

The queue system can ONLY control bandwidth and jitter in the link
between your router and the VERY FIRST device in the ISP
network. Typically you will have 5 to 10 devices between your router
and your gaming server, any of those can have variable delay and ruin
your gaming, and there is NOTHING that your router can do about it.

adding fq_codel qdisc for non-game traffic
adding fq_codel qdisc for non-game traffic

We are going to add classification rules via iptables to the
FORWARD chain. You should actually read and ensure that these
rules make sense in your firewall before running this script.

Continue? (type y or n and then RETURN/ENTER)
Y
Check the rules and come back when you're ready.
DONE!
qdisc noqueue 0: dev lo root refcnt 2
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc fq_codel 0: dev eth0 root refcnt 2 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 4Mb ecn
 Sent 2783435487 bytes 3759809 pkt (dropped 0, overlimits 0 requeues 418)
 backlog 0b 0p requeues 418
  maxpacket 1498 drop_overlimit 0 new_flow_count 475 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc noqueue 0: dev br-lan root refcnt 2
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc hfsc 1: dev eth0.1 root refcnt 2 default 13
 Sent 7916 bytes 28 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc fq_codel 800b: dev eth0.1 parent 1:13 limit 10240p flows 1024 quantum 3000 target 4.0ms interval 101.0ms memory_limit 550000b ecn
 Sent 7916 bytes 28 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 452 drop_overlimit 0 new_flow_count 28 ecn_mark 0
  new_flows_len 1 old_flows_len 0
qdisc fq_codel 800d: dev eth0.1 parent 1:15 limit 10240p flows 1024 quantum 3000 target 4.0ms interval 101.0ms memory_limit 550000b ecn
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc netem 10: dev eth0.1 parent 1:11 limit 53 delay 1.0ms  7.0ms
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc fq_codel 800a: dev eth0.1 parent 1:12 limit 10240p flows 1024 quantum 3000 target 4.0ms interval 101.0ms memory_limit 550000b ecn
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 800c: dev eth0.1 parent 1:14 limit 10240p flows 1024 quantum 3000 target 4.0ms interval 101.0ms memory_limit 550000b ecn
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc noqueue 0: dev eth0.2 root refcnt 2
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc hfsc 1: dev pppoe-wan root refcnt 2 default 13
 Sent 4937 bytes 34 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc fq_codel 8007: dev pppoe-wan parent 1:13 limit 10240p flows 1024 quantum 3000 target 4.0ms interval 101.0ms memory_limit 550000b ecn
 Sent 4937 bytes 34 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 1342 drop_overlimit 0 new_flow_count 25 ecn_mark 0
  new_flows_len 1 old_flows_len 0
qdisc fq_codel 8009: dev pppoe-wan parent 1:15 limit 10240p flows 1024 quantum 3000 target 4.0ms interval 101.0ms memory_limit 550000b ecn
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc netem 10: dev pppoe-wan parent 1:11 limit 53 delay 1.0ms  7.0ms
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc fq_codel 8006: dev pppoe-wan parent 1:12 limit 10240p flows 1024 quantum 3000 target 4.0ms interval 101.0ms memory_limit 550000b ecn
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 8008: dev pppoe-wan parent 1:14 limit 10240p flows 1024 quantum 3000 target 4.0ms interval 101.0ms memory_limit 550000b ecn
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc noqueue 0: dev wlan1 root refcnt 2
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc noqueue 0: dev wlan0 root refcnt 2
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0

I don't see a veth in tc -s qdisc.

and the ip link output is:

root@OpenWrt:~# ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 0c:80:63:fe:2c:43 brd ff:ff:ff:ff:ff:ff
3: ifb0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 32
    link/ether f2:5e:8a:63:44:9f brd ff:ff:ff:ff:ff:ff
4: ifb1: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 32
    link/ether 9a:7c:15:5f:0b:0c brd ff:ff:ff:ff:ff:ff
5: teql0: <NOARP> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 100
    link/void
13: br-lan: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 0c:80:63:fe:2c:43 brd ff:ff:ff:ff:ff:ff
14: eth0.1@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc hfsc master br-lan state UP mode DEFAULT group default qlen 1000
    link/ether 0c:80:63:fe:2c:43 brd ff:ff:ff:ff:ff:ff
15: eth0.2@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 0c:80:63:fe:2c:44 brd ff:ff:ff:ff:ff:ff
16: pppoe-wan: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1492 qdisc hfsc state UNKNOWN mode DEFAULT group default qlen 3
    link/ppp
17: wlan1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-lan state UP mode DEFAULT group default qlen 1000
    link/ether 0c:80:63:fe:2c:42 brd ff:ff:ff:ff:ff:ff
18: wlan0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-lan state UP mode DEFAULT group default qlen 1000
    link/ether 0c:80:63:fe:2c:41 brd ff:ff:ff:ff:ff:ff

OK, but it didn't set up the veth. Did you get the latest script?

Yeah, it didn't set up the veth...

I got the script from https://github.com/dlakelan/routerperf

You need to get the latest from the devel branch though:

https://raw.githubusercontent.com/dlakelan/routerperf/devel/SimpleHFSCgamerscript.sh
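(Fetching and running it by hand looks roughly like this, assuming your wget build supports HTTPS; otherwise use curl. Run it manually while testing rather than adding it to startup.)

cd /root
wget https://raw.githubusercontent.com/dlakelan/routerperf/devel/SimpleHFSCgamerscript.sh
# edit the variables at the top (WAN, UPRATE, DOWNRATE, OH, USEVETHDOWN, ...)
sh ./SimpleHFSCgamerscript.sh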

Oh right, that's not the script I was using; I will change it and test it.

@dlakelan

The script with my changes looks like this:

#!/bin/sh

## "atm" for old-school DSL or change to "DOCSIS" for cable modem, or
## "other" or anything else, for everything else

LINKTYPE="other"

USEVETHDOWN=Yes
LANBR=br-lan

WAN=eth0.2 # change this to your WAN device name
UPRATE=22000 #change this to your kbps upload speed
LAN=eth0.1 # change to your LAN device if you don't use veth/bridge


DOWNRATE=22000 #change this to about 80% of your download speed (in kbps)
OH=50 # number of bytes of Overhead on your line (37 is reasonable
      # starting point, better to be too big than too small) probably
      # likely values are between 20 and 50

PFIFOMIN=5 ## minimum number of packets in pfifo, 4 to 10 is good guess
PACKETSIZE=350 # bytes per game packet avg (guess, 250 to 500 is likely) 
MAXDEL=25 # ms we try to keep max delay below for game packets after
	  # burst 10-25 is good 1 clock tick at 64Hz is ~16ms

BWMAXRATIO=20 ## prevent ack floods by limiting download to at most
	      ## upload times this amount... ratio somewhere between
	      ## 10 and 20 probably optimal. we down-prioritize
	      ## certain ACKs to reduce the chance of a flood as well.

if [ $((DOWNRATE > UPRATE*BWMAXRATIO)) -eq 1 ]; then
    echo "We limit the downrate to at most $BWMAXRATIO times the upstream rate to ensure no upstream ACK floods occur which can cause game packet drops"
    DOWNRATE=$((BWMAXRATIO*UPRATE))
fi

## how many kbps of UDP upload and download do you need for your games
## across all gaming machines? 

## you can tune these yourself, but a good starting point is this
## formula.  this script will not work for UPRATE less than about
## 600kbps or downrate less than about 1000kbps

GAMEUP=$((UPRATE*15/100+400))
GAMEDOWN=$((DOWNRATE*15/100+400))

## you can try setting GAMEUP and GAMEDOWN manually, some report this
## works well for CoD
#GAMEUP=400
#GAMEDOWN=800


DSCPSCRIPT="/etc/dscptag.sh"

if [ ! -f $DSCPSCRIPT ]; then
    workdir=$(pwd)
    echo "You do not have the DSCP tagging script, downloading from github"
    cd /etc/
    wget https://raw.githubusercontent.com/dlakelan/routerperf/master/dscptag.sh
    cd $workdir
fi



## Right now there are four possible leaf qdiscs: pfifo, red,
## fq_codel, or netem. If you use netem it's so you can intentionally
## add delay to your packets, set netemdelayms to the number of ms you
## want to add each direction. Our default is pfifo it is reported to
## be the best for use in the realtime queue

gameqdisc="netem"

#gameqdisc="netem"

netemdelayms="1"
netemjitterms="7"
netemdist="normal"

pktlossp="none" # set to "none" for no packet loss, or use a fraction
		# like 0.015 for 1.5% packet loss in the realtime UDP
		# streams


if [ $gameqdisc != "fq_codel" -a $gameqdisc != "red" -a $gameqdisc != "pfifo" -a $gameqdisc != "netem" ]; then
    echo "Other qdiscs are not tested and do not work on OpenWrt yet anyway, reverting to red"
    gameqdisc="red"
fi

## set up your ipsets here:

## get rid of any references to the ipsets
iptables -t mangle -F dscptag > /dev/null 2>&1


for set in realtimeset4 lowprioset4  ; do
    ipset destroy $set > /dev/null 2>&1
    ipset create $set hash:ip > /dev/null 2>&1
    ipset flush $set > /dev/null 2>&1
done

for set in realtimeset6 lowprioset6  ; do
    ipset destroy $set > /dev/null 2>&1
    ipset create $set hash:ip family inet6 > /dev/null 2>&1
    ipset flush $set > /dev/null 2>&1
done

## some examples to add your gaming devices to the realtime sets,
## allows you to have more than one console etc. Just add your ips
## into the list of ips in the for loop

for ip4 in 192.168.10.221 192.168.10.187; do
    ipset add realtimeset4 "$ip4"
done

for ip6 in fe80::426:3fde:46ee:a8a0 fde0:fab6:99e0::cc1 ; do
    ipset add realtimeset6 "$ip6"
done


### add ips of "low priority" machines, examples might include things
### that interfere with more important stuff, like gaming ;-). For
### example 4k TVs will typically buffer big chunks of data which can
### cause gaming stuttering but because they have buffers they don't
### really need super high priority



for ip4 in 192.168.1.111 192.168.1.222; do
    ipset add lowprioset4 "$ip4"
done

for ip6 in 2001:db8::1 2001:db8::2 ; do
    ipset add lowprioset6 "$ip6"
done




## Help the system prioritize your gaming by telling it what is bulk
## traffic ... define a list of udp and tcp ports used for bulk
## traffic such as torrents. By default we include the transmission
## torrent client default port 51413 and the default TCP ports for
## bittorrent. Use comma separated values or ranges A:B as shown. Set
## your torrent client to use a known port and include it here

UDPBULKPT="51413,60887"
TCPBULKPT="51413,6881:6889,27014:27050,60887"


WASHDSCPUP="yes"
WASHDSCPDOWN="yes"


######################### CUSTOMIZATIONS GO ABOVE THIS LINE ###########

if [ $USEVETHDOWN = "yes" ] ; then

    ip link show lanveth || ip link add lanveth type veth peer name lanbrport
    LAN=lanveth
    ip link set lanbrport master $LANBR
    ip route add default via $LAN table 100
    ip rule add iif $WAN table 100
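    ## note (not in the upstream script): "ip route add default via ..." expects an IP
    ## address, so passing the device name here likely explains the "any valid address
    ## is expected rather than lanveth" error further down; using "dev $LAN" (or the
    ## LAN-side gateway IP) plus bringing both veth ends up with "ip link set ... up"
    ## may be what is needed.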

fi



cat <<EOF

This script prioritizes the UDP packets from / to a set of gaming
machines into a real-time HFSC queue with guaranteed total bandwidth 

Based on your settings:

Game upload guarantee = $GAMEUP kbps
Game download guarantee = $GAMEDOWN kbps

Download direction only works if you install this on a *wired* router
and there is a separate AP wired into your network, because otherwise
there are multiple parallel queues for traffic to leave your router
heading to the LAN.

Based on your link total bandwidth, the **minimum** amount of jitter
you should expect in your network is about:

UP = $(((1500*8)*3/UPRATE)) ms

DOWN = $(((1500*8)*3/DOWNRATE)) ms

In order to get lower minimum jitter you must upgrade the speed of
your link, no queuing system can help.

Please note for your display rate that:

at 30Hz, one on screen frame lasts:   33.3 ms
at 60Hz, one on screen frame lasts:   16.6 ms
at 144Hz, one on screen frame lasts:   6.9 ms

This means the typical gamer is sensitive to as little as on the order
of 5ms of jitter. To get 5ms minimum jitter you should have bandwidth
in each direction of at least:

$((1500*8*3/5)) kbps

The queue system can ONLY control bandwidth and jitter in the link
between your router and the VERY FIRST device in the ISP
network. Typically you will have 5 to 10 devices between your router
and your gaming server, any of those can have variable delay and ruin
your gaming, and there is NOTHING that your router can do about it.

EOF

ipt64 (){
    iptables $*
    ip6tables $*
}


setqdisc () {
DEV=$1
RATE=$2
MTU=1500
highrate=$((RATE*90/100))
lowrate=$((RATE*10/100))
gamerate=$3
useqdisc=$4
DIR=$5


tc qdisc del dev "$DEV" root > /dev/null 2>&1

case $LINKTYPE in
    "atm")
	tc qdisc replace dev "$DEV" handle 1: root stab mtu 2047 tsize 512 mpu 68 overhead ${OH} linklayer atm hfsc default 13
	;;
    "DOCSIS")
	tc qdisc replace dev $DEV stab overhead 25 linklayer ethernet handle 1: root hfsc default 13
	;;
    *)
	tc qdisc replace dev $DEV stab overhead 40 linklayer ethernet handle 1: root hfsc default 13
	;;
esac
     

DUR=$((5*1500*8/RATE))
if [ $DUR -lt 25 ]; then
    DUR=25
fi

# if we're on the LAN side, create a queue just for traffic from the
# router, like LUCI and DNS lookups
if [ $DIR = "lan" ]; then
    tc class add dev "$DEV" parent 1: classid 1:2 hfsc ls m1 50000kbit d "${DUR}ms" m2 10000kbit
fi


#limit the link overall:
tc class add dev "$DEV" parent 1: classid 1:1 hfsc ls m2 "${RATE}kbit" ul m2 "${RATE}kbit"


gameburst=$((gamerate*10))
if [ gameburst -gt $((RATE*97/100)) ] ; then
    gameburst=$((RATE*97/100));
fi
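## note (not in the upstream script): "gameburst" in the test above is missing a "$",
## so the shell cannot parse the bare word as a number, which is what produces the
## "sh: gameburst: out of range" message seen in the output below;
## [ "$gameburst" -gt ... ] is likely what was intended.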


# high prio realtime class
tc class add dev "$DEV" parent 1:1 classid 1:11 hfsc rt m1 "${gameburst}kbit" d "${DUR}ms" m2 "${gamerate}kbit"

# fast non-realtime
tc class add dev "$DEV" parent 1:1 classid 1:12 hfsc ls m1 "$((RATE*70/100))kbit" d "${DUR}ms" m2 "$((RATE*30/100))kbit"

# normal
tc class add dev "$DEV" parent 1:1 classid 1:13 hfsc ls m1 "$((RATE*20/100))kbit" d "${DUR}ms" m2 "$((RATE*45/100))kbit"

# low prio
tc class add dev "$DEV" parent 1:1 classid 1:14 hfsc ls m1 "$((RATE*7/100))kbit" d "${DUR}ms" m2 "$((RATE*15/100))kbit"

# bulk
tc class add dev "$DEV" parent 1:1 classid 1:15 hfsc ls m1 "$((RATE*3/100))kbit" d "${DUR}ms" m2 "$((RATE*10/100))kbit"



## set this to "drr" or "qfq" to differentiate between different game
## packets, or use "pfifo" to treat all game packets equally

## games often use a 1/64 s = 15.6ms tick rate +- if we're getting so
## many packets that it takes that long to drain at full RATE, we're
## in trouble, because then everything lags by a full tick... so we
## set our RED minimum to start dropping at 9ms of packets at full
## line rate, and then drop 100% by 3x that much, it's better to drop
## packets for a little while than play a whole game lagged by a full
## tick

REDMIN=$((RATE*MAXDEL/3/8)) 

REDMAX=$((RATE * MAXDEL/8)) 

# for fq_codel
INTVL=$((100+2*1500*8/RATE))
TARG=$((540*8/RATE+4))



case $useqdisc in
    "drr")
	tc qdisc add dev "$DEV" parent 1:11 handle 2:0 drr
	tc class add dev "$DEV" parent 2:0 classid 2:1 drr quantum 8000
	tc qdisc add dev "$DEV" parent 2:1 handle 10: red limit 150000 min $REDMIN max $REDMAX avpkt 500 bandwidth ${RATE}kbit probability 1.0
	tc class add dev "$DEV" parent 2:0 classid 2:2 drr quantum 4000
	tc qdisc add dev "$DEV" parent 2:2 handle 20: red limit 150000 min $REDMIN max $REDMAX avpkt 500 bandwidth ${RATE}kbit probability 1.0
	tc class add dev "$DEV" parent 2:0 classid 2:3 drr quantum 1000
	tc qdisc add dev "$DEV" parent 2:3 handle 30: red limit 150000  min $REDMIN max $REDMAX avpkt 500 bandwidth ${RATE}kbit probability 1.0
	## with this send high priority game packets to 10:, medium to 20:, normal to 30:
	## games will not starve but be given relative importance based on the quantum parameter
    ;;

    "qfq")
	tc qdisc add dev "$DEV" parent 1:11 handle 2:0 qfq
	tc class add dev "$DEV" parent 2:0 classid 2:1 qfq weight 8000
	tc qdisc add dev "$DEV" parent 2:1 handle 10: red limit 150000  min $REDMIN max $REDMAX avpkt 500 bandwidth ${RATE}kbit probability 1.0
	tc class add dev "$DEV" parent 2:0 classid 2:2 qfq weight 4000
	tc qdisc add dev "$DEV" parent 2:2 handle 20: red limit 150000 min $REDMIN max $REDMAX avpkt 500 bandwidth ${RATE}kbit probability 1.0
	tc class add dev "$DEV" parent 2:0 classid 2:3 qfq weight 1000
	tc qdisc add dev "$DEV" parent 2:3 handle 30: red limit 150000  min $REDMIN max $REDMAX avpkt 500 bandwidth ${RATE}kbit probability 1.0
	## with this send high priority game packets to 10:, medium to 20:, normal to 30:
	## games will not starve but be given relative importance based on the weight parameter

    ;;

    "pfifo")
	tc qdisc add dev "$DEV" parent 1:11 handle 10: pfifo limit $((PFIFOMIN+MAXDEL*RATE/8/PACKETSIZE))
	;;
    "red")
	tc qdisc add dev "$DEV" parent 1:11 handle 10: red limit 150000 min $REDMIN max $REDMAX avpkt 500 bandwidth ${RATE}kbit  probability 1.0
	## send game packets to 10:, they're all treated the same
	;;
    "fq_codel")
	tc qdisc add dev "$DEV" parent "1:11" fq_codel memory_limit $((RATE*200/8)) interval "${INTVL}ms" target "${TARG}ms" quantum $((MTU * 2))
	;;
    "netem")
	tc qdisc add dev "$DEV" parent 1:11 handle 10: netem limit $((4+9*RATE/8/500)) delay "${netemdelayms}ms" "${netemjitterms}ms" distribution "$netemdist"
	;;


esac


echo "adding fq_codel qdisc for non-game traffic"
for i in 12 13 14 15; do 
    tc qdisc add dev "$DEV" parent "1:$i" fq_codel memory_limit $((RATE*200/8)) interval "${INTVL}ms" target "${TARG}ms" quantum $((MTU * 2))
done


}


setqdisc $WAN $UPRATE $GAMEUP $gameqdisc wan

## uncomment this to do the download direction via output of LAN
setqdisc $LAN $DOWNRATE $GAMEDOWN $gameqdisc lan

## we want to classify packets, so use these rules

cat <<EOF

We are going to add classification rules via iptables to the
FORWARD chain. You should actually read and ensure that these
rules make sense in your firewall before running this script. 

Continue? (type y or n and then RETURN/ENTER)
EOF

read -r cont

if [ "$cont" = "y" ]; then

    /etc/init.d/firewall restart
    
    ipt64 -t mangle -N dscptag
    ipt64 -t mangle -F dscptag
    
    
    if [ "$WASHDSCPUP" = "yes" ]; then
	ipt64 -t mangle -A FORWARD -i $LAN -j DSCP --set-dscp-class CS0
    fi
    if [ "$WASHDSCPDOWN" = "yes" ]; then
	ipt64 -t mangle -A FORWARD -i $WAN -j DSCP --set-dscp-class CS0
    fi

    ipt64 -t mangle -A POSTROUTING -j dscptag
    source $DSCPSCRIPT
    
    ipt64 -t mangle -A FORWARD -j CLASSIFY --set-class 1:13 # default everything to 1:13,  the "normal" qdisc

    # traffic from the router to the LAN bypasses the download queue
    ipt64 -t mangle -A OUTPUT -o $LAN -j CLASSIFY --set-class 1:2
    
    ## these dscp values go to realtime: EF, CS5, CS6, CS7
    ipt64 -t mangle -A POSTROUTING -m dscp --dscp-class EF -j CLASSIFY --set-class 1:11
    ipt64 -t mangle -A POSTROUTING -m dscp --dscp-class CS5 -j CLASSIFY --set-class 1:11
    ipt64 -t mangle -A POSTROUTING -m dscp --dscp-class CS6 -j CLASSIFY --set-class 1:11
    ipt64 -t mangle -A POSTROUTING -m dscp --dscp-class CS7 -j CLASSIFY --set-class 1:11
    
    ipt64 -t mangle -A POSTROUTING -m dscp --dscp-class CS4 -j CLASSIFY --set-class 1:12
    ipt64 -t mangle -A POSTROUTING -m dscp --dscp-class AF41 -j CLASSIFY --set-class 1:12
    ipt64 -t mangle -A POSTROUTING -m dscp --dscp-class AF42 -j CLASSIFY --set-class 1:12
    
    ipt64 -t mangle -A POSTROUTING -m dscp --dscp-class CS2 -j CLASSIFY --set-class 1:14
    ipt64 -t mangle -A POSTROUTING -m dscp --dscp-class CS1 -j CLASSIFY --set-class 1:15

    ## wash DSCP out to the ISP now that we used it for classifying

    if [ "$WASHDSCPUP" = "yes" ]; then
	ipt64 -t mangle -A FORWARD -o $WAN -j DSCP --set-dscp-class CS0
    fi

    
    case $gameqdisc in
	"red")
	;;
	"pfifo")
	;;
	*)
	    echo "YOU MUST PLACE CLASSIFIERS FOR YOUR GAME TRAFFIC HERE"
	    echo "SEND GAME TRAFFIC TO 2:1 (high) or 2:2 (medium) or 2:3 (normal)"
	    echo "Requires use of tc filters! -j CLASSIFY won't work!"
	    ;;
    esac
    
    if [ $UPRATE -lt 3000 -o $DOWNRATE -lt 3000 ]; then
	ipt64 -t mangle -F FORWARD
    fi
    
    if [ $UPRATE -lt 3000 ]; then
	ipt64 -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN -o $LAN -j TCPMSS --set-mss 540
    fi
    if [ $DOWNRATE -lt 3000 ]; then
	## need to clamp MSS to 540 bytes in both directions to reduce
	## the latency increase caused by 1 packet ahead of us in the
	## queue since rates are too low to send 1500 byte packets at acceptable delay
	ipt64 -t mangle -A FORWARD -p tcp --tcp-flags SYN,RST SYN -o $WAN -j TCPMSS --set-mss 540
    fi


else
    cat <<EOF
Check the rules and come back when you're ready.
EOF
fi

echo "DONE!"


if [ "$gameqdisc" = "red" ]; then
   echo "Can not output tc -s qdisc because it crashes on OpenWrt when using RED qdisc, but things are working!"
else
   tc -s qdisc
fi

and the result is:

This script prioritizes the UDP packets from / to a set of gaming
machines into a real-time HFSC queue with guaranteed total bandwidth

Based on your settings:

Game upload guarantee = 3700 kbps
Game download guarantee = 3700 kbps

Download direction only works if you install this on a *wired* router
and there is a separate AP wired into your network, because otherwise
there are multiple parallel queues for traffic to leave your router
heading to the LAN.

Based on your link total bandwidth, the **minimum** amount of jitter
you should expect in your network is about:

UP = 1 ms

DOWN = 1 ms

In order to get lower minimum jitter you must upgrade the speed of
your link, no queuing system can help.

Please note for your display rate that:

at 30Hz, one on screen frame lasts:   33.3 ms
at 60Hz, one on screen frame lasts:   16.6 ms
at 144Hz, one on screen frame lasts:   6.9 ms

This means the typical gamer is sensitive to as little as on the order
of 5ms of jitter. To get 5ms minimum jitter you should have bandwidth
in each direction of at least:

7200 kbps

The queue system can ONLY control bandwidth and jitter in the link
between your router and the VERY FIRST device in the ISP
network. Typically you will have 5 to 10 devices between your router
and your gaming server, any of those can have variable delay and ruin
your gaming, and there is NOTHING that your router can do about it.

sh: gameburst: out of range
adding fq_codel qdisc for non-game traffic
sh: gameburst: out of range
adding fq_codel qdisc for non-game traffic

We are going to add classification rules via iptables to the
FORWARD chain. You should actually read and ensure that these
rules make sense in your firewall before running this script.

Continue? (type y or n and then RETURN/ENTER)
Y
Check the rules and come back when you're ready.
DONE!
qdisc noqueue 0: dev lo root refcnt 2
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc fq_codel 0: dev eth0 root refcnt 2 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 4Mb ecn
 Sent 2785338628 bytes 3770732 pkt (dropped 0, overlimits 0 requeues 420)
 backlog 0b 0p requeues 420
  maxpacket 1498 drop_overlimit 0 new_flow_count 476 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc noqueue 0: dev br-lan root refcnt 2
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc hfsc 1: dev eth0.1 root refcnt 2 default 13
 Sent 1646 bytes 8 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc netem 10: dev eth0.1 parent 1:11 limit 53 delay 1.0ms  7.0ms
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc fq_codel 8013: dev eth0.1 parent 1:13 limit 10240p flows 1024 quantum 3000 target 4.0ms interval 101.0ms memory_limit 550000b ecn
 Sent 1184 bytes 6 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 241 drop_overlimit 0 new_flow_count 6 ecn_mark 0
  new_flows_len 1 old_flows_len 0
qdisc fq_codel 8015: dev eth0.1 parent 1:15 limit 10240p flows 1024 quantum 3000 target 4.0ms interval 101.0ms memory_limit 550000b ecn
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 8014: dev eth0.1 parent 1:14 limit 10240p flows 1024 quantum 3000 target 4.0ms interval 101.0ms memory_limit 550000b ecn
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 8012: dev eth0.1 parent 1:12 limit 10240p flows 1024 quantum 3000 target 4.0ms interval 101.0ms memory_limit 550000b ecn
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc hfsc 1: dev eth0.2 root refcnt 2 default 13
 Sent 3925 bytes 33 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc fq_codel 800f: dev eth0.2 parent 1:13 limit 10240p flows 1024 quantum 3000 target 4.0ms interval 101.0ms memory_limit 550000b ecn
 Sent 3925 bytes 33 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 233 drop_overlimit 0 new_flow_count 22 ecn_mark 0
  new_flows_len 1 old_flows_len 0
qdisc fq_codel 8011: dev eth0.2 parent 1:15 limit 10240p flows 1024 quantum 3000 target 4.0ms interval 101.0ms memory_limit 550000b ecn
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc netem 10: dev eth0.2 parent 1:11 limit 53 delay 1.0ms  7.0ms
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc fq_codel 800e: dev eth0.2 parent 1:12 limit 10240p flows 1024 quantum 3000 target 4.0ms interval 101.0ms memory_limit 550000b ecn
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 8010: dev eth0.2 parent 1:14 limit 10240p flows 1024 quantum 3000 target 4.0ms interval 101.0ms memory_limit 550000b ecn
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc hfsc 1: dev pppoe-wan root refcnt 2 default 13
 Sent 1823838 bytes 9222 pkt (dropped 0, overlimits 797 requeues 0)
 backlog 0b 0p requeues 0
qdisc fq_codel 8007: dev pppoe-wan parent 1:13 limit 10240p flows 1024 quantum 3000 target 4.0ms interval 101.0ms memory_limit 550000b ecn
 Sent 1823838 bytes 9222 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 2962 drop_overlimit 0 new_flow_count 1913 ecn_mark 0
  new_flows_len 0 old_flows_len 9
qdisc fq_codel 8009: dev pppoe-wan parent 1:15 limit 10240p flows 1024 quantum 3000 target 4.0ms interval 101.0ms memory_limit 550000b ecn
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc netem 10: dev pppoe-wan parent 1:11 limit 53 delay 1.0ms  7.0ms
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc fq_codel 8006: dev pppoe-wan parent 1:12 limit 10240p flows 1024 quantum 3000 target 4.0ms interval 101.0ms memory_limit 550000b ecn
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 8008: dev pppoe-wan parent 1:14 limit 10240p flows 1024 quantum 3000 target 4.0ms interval 101.0ms memory_limit 550000b ecn
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc noqueue 0: dev wlan1 root refcnt 2
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc noqueue 0: dev wlan0 root refcnt 2
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0

and for ip link

root@OpenWrt:/etc# ip link
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 0c:80:63:fe:2c:43 brd ff:ff:ff:ff:ff:ff
3: ifb0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 32
    link/ether f2:5e:8a:63:44:9f brd ff:ff:ff:ff:ff:ff
4: ifb1: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 32
    link/ether 9a:7c:15:5f:0b:0c brd ff:ff:ff:ff:ff:ff
5: teql0: <NOARP> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 100
    link/void
13: br-lan: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 0c:80:63:fe:2c:43 brd ff:ff:ff:ff:ff:ff
14: eth0.1@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc hfsc master br-lan state UP mode DEFAULT group default qlen 1000
    link/ether 0c:80:63:fe:2c:43 brd ff:ff:ff:ff:ff:ff
15: eth0.2@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc hfsc state UP mode DEFAULT group default qlen 1000
    link/ether 0c:80:63:fe:2c:44 brd ff:ff:ff:ff:ff:ff
16: pppoe-wan: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1492 qdisc hfsc state UNKNOWN mode DEFAULT group default qlen 3
    link/ppp
17: wlan1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-lan state UP mode DEFAULT group default qlen 1000
    link/ether 0c:80:63:fe:2c:42 brd ff:ff:ff:ff:ff:ff
18: wlan0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-lan state UP mode DEFAULT group default qlen 1000
    link/ether 0c:80:63:fe:2c:41 brd ff:ff:ff:ff:ff:ff

use lower case:

USEVETHDOWN=yes

The result of that:

Device "lanveth" does not exist.
Error: any valid address is expected rather than "lanveth".

This script prioritizes the UDP packets from / to a set of gaming
machines into a real-time HFSC queue with guaranteed total bandwidth

Based on your settings:

Game upload guarantee = 3700 kbps
Game download guarantee = 3700 kbps

Download direction only works if you install this on a *wired* router
and there is a separate AP wired into your network, because otherwise
there are multiple parallel queues for traffic to leave your router
heading to the LAN.

Based on your link total bandwidth, the **minimum** amount of jitter
you should expect in your network is about:

UP = 1 ms

DOWN = 1 ms

In order to get lower minimum jitter you must upgrade the speed of
your link, no queuing system can help.

Please note for your display rate that:

at 30Hz, one on screen frame lasts:   33.3 ms
at 60Hz, one on screen frame lasts:   16.6 ms
at 144Hz, one on screen frame lasts:   6.9 ms

This means the typical gamer is sensitive to as little as on the order
of 5ms of jitter. To get 5ms minimum jitter you should have bandwidth
in each direction of at least:

7200 kbps

The queue system can ONLY control bandwidth and jitter in the link
between your router and the VERY FIRST device in the ISP
network. Typically you will have 5 to 10 devices between your router
and your gaming server, any of those can have variable delay and ruin
your gaming, and there is NOTHING that your router can do about it.

sh: gameburst: out of range
adding fq_codel qdisc for non-game traffic
sh: gameburst: out of range
adding fq_codel qdisc for non-game traffic

We are going to add classification rules via iptables to the
FORWARD chain. You should actually read and ensure that these
rules make sense in your firewall before running this script.

Continue? (type y or n and then RETURN/ENTER)
Y
Check the rules and come back when you're ready.
DONE!
qdisc noqueue 0: dev lo root refcnt 2
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc fq_codel 0: dev eth0 root refcnt 2 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 4Mb ecn
 Sent 2788307068 bytes 3791161 pkt (dropped 0, overlimits 0 requeues 425)
 backlog 0b 0p requeues 425
  maxpacket 1498 drop_overlimit 0 new_flow_count 480 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc noqueue 0: dev br-lan root refcnt 2
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc hfsc 1: dev eth0.1 root refcnt 2 default 13
 Sent 434644 bytes 1725 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc netem 10: dev eth0.1 parent 1:11 limit 53 delay 1.0ms  7.0ms
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc fq_codel 8013: dev eth0.1 parent 1:13 limit 10240p flows 1024 quantum 3000 target 4.0ms interval 101.0ms memory_limit 550000b ecn
 Sent 434182 bytes 1723 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 695 drop_overlimit 0 new_flow_count 1705 ecn_mark 0
  new_flows_len 1 old_flows_len 0
qdisc fq_codel 8015: dev eth0.1 parent 1:15 limit 10240p flows 1024 quantum 3000 target 4.0ms interval 101.0ms memory_limit 550000b ecn
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 8014: dev eth0.1 parent 1:14 limit 10240p flows 1024 quantum 3000 target 4.0ms interval 101.0ms memory_limit 550000b ecn
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 8012: dev eth0.1 parent 1:12 limit 10240p flows 1024 quantum 3000 target 4.0ms interval 101.0ms memory_limit 550000b ecn
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc hfsc 1: dev eth0.2 root refcnt 2 default 13
 Sent 3221 bytes 20 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc fq_codel 8016: dev eth0.2 parent 1:12 limit 10240p flows 1024 quantum 3000 target 4.0ms interval 101.0ms memory_limit 550000b ecn
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 8018: dev eth0.2 parent 1:14 limit 10240p flows 1024 quantum 3000 target 4.0ms interval 101.0ms memory_limit 550000b ecn
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc netem 10: dev eth0.2 parent 1:11 limit 53 delay 1.0ms  7.0ms
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc fq_codel 8019: dev eth0.2 parent 1:15 limit 10240p flows 1024 quantum 3000 target 4.0ms interval 101.0ms memory_limit 550000b ecn
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 8017: dev eth0.2 parent 1:13 limit 10240p flows 1024 quantum 3000 target 4.0ms interval 101.0ms memory_limit 550000b ecn
 Sent 3221 bytes 20 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 661 drop_overlimit 0 new_flow_count 8 ecn_mark 0
  new_flows_len 1 old_flows_len 0
qdisc hfsc 1: dev pppoe-wan root refcnt 2 default 13
 Sent 4938410 bytes 27762 pkt (dropped 0, overlimits 1802 requeues 0)
 backlog 0b 0p requeues 0
qdisc fq_codel 8007: dev pppoe-wan parent 1:13 limit 10240p flows 1024 quantum 3000 target 4.0ms interval 101.0ms memory_limit 550000b ecn
 Sent 4938410 bytes 27762 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 5132 drop_overlimit 0 new_flow_count 3638 ecn_mark 0
  new_flows_len 1 old_flows_len 2
qdisc fq_codel 8009: dev pppoe-wan parent 1:15 limit 10240p flows 1024 quantum 3000 target 4.0ms interval 101.0ms memory_limit 550000b ecn
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc netem 10: dev pppoe-wan parent 1:11 limit 53 delay 1.0ms  7.0ms
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc fq_codel 8006: dev pppoe-wan parent 1:12 limit 10240p flows 1024 quantum 3000 target 4.0ms interval 101.0ms memory_limit 550000b ecn
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 8008: dev pppoe-wan parent 1:14 limit 10240p flows 1024 quantum 3000 target 4.0ms interval 101.0ms memory_limit 550000b ecn
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc noqueue 0: dev wlan1 root refcnt 2
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc noqueue 0: dev wlan0 root refcnt 2
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc hfsc 1: dev lanveth root refcnt 2 default 13
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc fq_codel 801c: dev lanveth parent 1:14 limit 10240p flows 1024 quantum 3000 target 4.0ms interval 101.0ms memory_limit 550000b ecn
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 801a: dev lanveth parent 1:12 limit 10240p flows 1024 quantum 3000 target 4.0ms interval 101.0ms memory_limit 550000b ecn
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc netem 10: dev lanveth parent 1:11 limit 53 delay 1.0ms  7.0ms
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc fq_codel 801b: dev lanveth parent 1:13 limit 10240p flows 1024 quantum 3000 target 4.0ms interval 101.0ms memory_limit 550000b ecn
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 801d: dev lanveth parent 1:15 limit 10240p flows 1024 quantum 3000 target 4.0ms interval 101.0ms memory_limit 550000b ecn
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0

Am I missing something?? Why does it say gameburst: out of range?
and ip link

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether 0c:80:63:fe:2c:43 brd ff:ff:ff:ff:ff:ff
3: ifb0: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 32
    link/ether f2:5e:8a:63:44:9f brd ff:ff:ff:ff:ff:ff
4: ifb1: <BROADCAST,NOARP> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 32
    link/ether 9a:7c:15:5f:0b:0c brd ff:ff:ff:ff:ff:ff
5: teql0: <NOARP> mtu 1500 qdisc noop state DOWN mode DEFAULT group default qlen 100
    link/void
13: br-lan: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 0c:80:63:fe:2c:43 brd ff:ff:ff:ff:ff:ff
14: eth0.1@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc hfsc master br-lan state UP mode DEFAULT group default qlen 1000
    link/ether 0c:80:63:fe:2c:43 brd ff:ff:ff:ff:ff:ff
15: eth0.2@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc hfsc state UP mode DEFAULT group default qlen 1000
    link/ether 0c:80:63:fe:2c:44 brd ff:ff:ff:ff:ff:ff
16: pppoe-wan: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1492 qdisc hfsc state UNKNOWN mode DEFAULT group default qlen 3
    link/ppp
17: wlan1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-lan state UP mode DEFAULT group default qlen 1000
    link/ether 0c:80:63:fe:2c:42 brd ff:ff:ff:ff:ff:ff
18: wlan0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br-lan state UP mode DEFAULT group default qlen 1000
    link/ether 0c:80:63:fe:2c:41 brd ff:ff:ff:ff:ff:ff
19: lanbrport@lanveth: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc noop master br-lan state DOWN mode DEFAULT group default qlen 1000
    link/ether ca:7d:1d:ba:c6:36 brd ff:ff:ff:ff:ff:ff
20: lanveth@lanbrport: <BROADCAST,MULTICAST,M-DOWN> mtu 1500 qdisc hfsc state DOWN mode DEFAULT group default qlen 1000
    link/ether da:31:fe:0c:53:6d brd ff:ff:ff:ff:ff:ff
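(One observation on that output: both ends of the veth pair show state DOWN, and the hfsc qdisc on lanveth reports zero traffic. A possible next step, sketched here rather than taken from the thread, is to bring both ends up and re-check whether traffic starts flowing through them:)

ip link set lanbrport up
ip link set lanveth up
tc -s qdisc show dev lanveth   # check whether the Sent counters start moving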