SQM TP-Link Archer C7

Hi,
I am trying to reduce my bufferbloat as much as possible for gaming, and I would also like to put my gaming traffic into a fast lane, i.e. one that SQM doesn't touch. I have luci-app-sqm enabled with piece_of_cake.qos.
Here is my config:

cat /etc/config/sqm
config queue 'eth1'
        option upload '10000'
        option linklayer 'none'
        option enabled '1'
        option interface 'eth0.2'
        option debug_logging '0'
        option verbosity '5'
        option qdisc 'cake'
        option script 'piece_of_cake.qos'
        option download '115200'

My internet speeds are 128 Mbps down and 11.1 Mbps up on a fiber connection.
How would I go about doing this?
Thanks again.

On my C7 v2, I set egress and ingress at 98% of advertised speed, and have Link Layer Adaptation turned off.
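
Applied to the 128/11.1 Mbps line above, that rule of thumb works out to roughly the following (my arithmetic, not tested on that line; SQM takes rates in kbit/s):

config queue 'eth1'
        option interface 'eth0.2'
        # 98% of 128000 kbit/s
        option download '125440'
        # 98% of 11100 kbit/s
        option upload '10878'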

I get 94% of advertised on speed tests, and A+ to A for bufferbloat on DSLReports and Waveform.

YMMV.

I updated my settings to this and I am getting better results. I also turned on hardware acceleration <-- this lowered bufferbloat to under 5 ms up and down.
I was wondering if there is a way to customize my current settings so that SQM doesn't touch certain ports that I would like to prioritize. Possibly a script would do it? If so, how would I go about doing so?
Thanks

config queue 'eth1'
        option upload '10000'
        option linklayer 'none'
        option enabled '1'
        option interface 'eth0.2'
        option download '115000'
        option debug_logging '0'
        option verbosity '5'
        option qdisc 'fq_codel'
        option script 'simple.qos'

I'm trying to prioritize ports or a range of ports so they get full speed and don't get caught up in the SQM queue.

That is not possible, and frankly not a great idea. Unless sqm sees all traffic on an interface it cannot do its job.
But have you tried switching to cake and configuring it for per-internal-IP fairness? This might already carve out enough low-latency capacity for your gaming traffic.
Alternatively, sqm allows some DSCP-based prioritization, but that only works well for egress/upload traffic, as proper marking of ingress traffic is hard. Then again, there are dedicated scripts developed here in the forum that tackle gaming issues like ingress marking, so maybe search the forum?
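
For the per-internal-IP fairness idea, the relevant knobs live in /etc/config/sqm once you enable the advanced option fields; something along these lines (untested sketch, keeping your eth0.2 WAN device):

config queue 'eth1'
        option qdisc 'cake'
        option script 'piece_of_cake.qos'
        option qdisc_advanced '1'
        option qdisc_really_really_advanced '1'
        # egress: fairness by internal source IP; 'nat' makes cake look at pre-NAT addresses
        option eqdisc_opts 'nat dual-srchost'
        # ingress: fairness by internal destination IP
        option iqdisc_opts 'nat dual-dsthost'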

This essentially results in a hidden per-packet overhead of 14 bytes.... Getting the per-packet overhead correct seems unimportant, mostly because typical bufferbloat tests use large packets, and one can compensate for a wrong overhead by setting a lower rate. The problem is that if the overhead is configured too small, a link will show bufferbloat when saturated with small packets... that situation is unlikely, but far from impossible. As a result we recommend gently overestimating the overhead, if in doubt....
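
In /etc/config/sqm that gentle overestimate would look something like this (a sketch; 44 bytes is the usual "if in doubt" value suggested in the sqm-scripts documentation):

config queue 'eth1'
        # account for link-layer framing instead of ignoring it
        option linklayer 'ethernet'
        # per-packet overhead in bytes, deliberately on the generous side
        option overhead '44'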

I have seen it before, because Netduma do it in their products, i.e. their traffic prioritization feature. But they use it for gaming traffic because it is so small that it won't change the overall latency of the network.

What is it exactly that they do, however? Do they prioritize somehow-classified gaming traffic within their traffic shaper set-up, or do they truly exempt that traffic from the traffic shaper? The former can work and boils down to the classification challenge; the latter does not robustly and reliably keep bufferbloat in check.

That is an assumption: a) nobody guarantees that a game is not accidentally going to flood the network with data packets, and b) classification is based on heuristics like IP addresses and port numbers, which can/will have false positives that violate the assumption of limited rate. The robust way to do this is to create a high-priority class in the traffic shaper that combines both highest priority and a hard rate limit... like, for example, done in this forum thread.
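
As a rough illustration of that structure (an untested sketch with made-up rates, not the script from the linked thread): an HTB setup on a 10 Mbit uplink where the high-priority class is also hard-capped, so even misclassified traffic cannot starve the link:

tc qdisc add dev eth0.2 root handle 1: htb default 20
tc class add dev eth0.2 parent 1: classid 1:1 htb rate 10mbit ceil 10mbit
# highest priority, but hard-capped at 2 Mbit
tc class add dev eth0.2 parent 1:1 classid 1:10 htb rate 2mbit ceil 2mbit prio 0
# everything else shares the remainder and may borrow up to the full rate
tc class add dev eth0.2 parent 1:1 classid 1:20 htb rate 8mbit ceil 10mbit prio 1
# steer DSCP EF (0xb8 in the ToS byte, ECN bits masked out) into the capped class
tc filter add dev eth0.2 parent 1: protocol ip u32 match ip tos 0xb8 0xfc flowid 1:10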

Within sqm it might already help to switch from simple.qos/fq_codel to layer_cake.qos/cake and use a custom rule in the firewall to DSCP-mark all packets from your gaming machine's IP (and an appropriate port range) as DSCP EF; that will already give you priority on the uplink. For the downlink I would propose following the "sing and dance section" over here (which will carve out an equal share of the download capacity for each internal IP address; at ~100 Mbps you would need >= 20 concurrently active machines before the gaming machine sees less than 5 Mbps of ingress rate, which for typical gaming traffic should be plenty).
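
Such a firewall rule could look roughly like this (sketch only; the LAN IP and port range are hypothetical placeholders for your gaming machine):

# mark the gaming PC's UDP traffic as EF; layer_cake.qos maps EF into
# cake's highest-priority tin on egress
iptables -t mangle -A FORWARD -s 192.168.1.100 -p udp --dport 3074:3079 -j DSCP --set-dscp-class EF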

Thanks for this.
I'm using the second option with layer_cake and added nat dual-dsthost for ingress and nat dual-srchost for egress, and I'm getting great results on ethernet, but the moment I run a speed test on wifi my ping goes crazy.

@moeller0
I think I'm having a problem with my cake qdisc.
I am getting this with tc -s qdisc

qdisc noqueue 0: dev lo root refcnt 2
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc fq_codel 0: dev eth0 root refcnt 2 limit 10240p flows 1024 quantum 1514 target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
 Sent 278949729 bytes 340832 pkt (dropped 0, overlimits 0 requeues 102)
 backlog 0b 0p requeues 102
  maxpacket 1514 drop_overlimit 0 new_flow_count 151 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc noqueue 0: dev br-lan root refcnt 2
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc noqueue 0: dev eth0.1 root refcnt 2
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc cake 8005: dev eth0.2 root refcnt 2 bandwidth 10Mbit besteffort triple-isolate nonat nowash no-ack-filter split-gso rtt 100ms raw overhead 0
 Sent 20403188 bytes 69147 pkt (dropped 735, overlimits 52812 requeues 0)
 backlog 0b 0p requeues 0
 memory used: 162Kb of 4Mb
 capacity estimate: 10Mbit
 min/max network layer size:           42 /    1506
 min/max overhead-adjusted size:       42 /    1506
 average network hdr offset:           14

                  Tin 0
  thresh         10Mbit
  target            5ms
  interval        100ms
  pk_delay        505us
  av_delay         42us
  sp_delay         13us
  backlog            0b
  pkts            69882
  bytes        21501102
  way_inds            2
  way_miss          222
  way_cols            0
  drops             735
  marks               0
  ack_drop            0
  sp_flows            2
  bk_flows            1
  un_flows            0
  max_len          7420
  quantum           305

qdisc ingress ffff: dev eth0.2 parent ffff:fff1 ----------------
 Sent 107354562 bytes 86126 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc noqueue 0: dev wlan0 root refcnt 2
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc noqueue 0: dev wlan1 root refcnt 2
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc cake 8006: dev ifb4eth0.2 root refcnt 2 bandwidth 105Mbit besteffort triple-isolate nonat wash no-ack-filter split-gso rtt 100ms raw overhead 0
 Sent 108112781 bytes 85828 pkt (dropped 298, overlimits 104308 requeues 0)
 backlog 0b 0p requeues 0
 memory used: 182Kb of 4250000b
 capacity estimate: 105Mbit
 min/max network layer size:           60 /    1506
 min/max overhead-adjusted size:       60 /    1506
 average network hdr offset:           14

                  Tin 0
  thresh         105Mbit
  target            5ms
  interval        100ms
  pk_delay       1.26ms
  av_delay        488us
  sp_delay         83us
  backlog            0b
  pkts            86126
  bytes       108560326
  way_inds            2
  way_miss          228
  way_cols            0
  drops             298
  marks               0
  ack_drop            0
  sp_flows            1
  bk_flows            1
  un_flows            0
  max_len          1506
  quantum          1514

I was wondering why I am getting

qdisc fq_codel 0: dev eth0 root refcnt 2 limit 10240p flows 1024 quantum 1514 target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64

when I am trying to use only cake.

Also, why do my wifi speeds top out at ~80 Mbps when SQM is enabled, even though my hardwired PC can get full speed?

Thanks a lot

Because you configured cake on eth0.2 (dev eth0.2), and OpenWrt uses fq_codel as the default qdisc, so all real ethernet interfaces, including eth0, get fq_codel. That is as expected. Since cake on eth0.2 has a traffic shaper, while fq_codel on eth0 does not, (almost) no packets will queue on eth0, so its qdisc does not matter much.
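
You can check the system-wide default yourself; on OpenWrt this should report fq_codel:

sysctl net.core.default_qdisc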

This might indicate that your router is running out of CPU cycles, i.e. there are not enough cycles left to saturate WiFi while cake is running at full tilt. Maybe it is time to switch to a faster wired-only router and use the Archer C7 just as a "dumb" AP?

One more question: when I use fq_codel with any script, my wifi latency is pretty stable, but the moment I use cake with any script I get huge ping spikes on the download side (over wifi). What would be causing this?