You have to measure. HFSC does not offer all the AQM help, but it uses very little CPU, which makes it very good for gaming on slow-platform routers like the MT7621/MT7622; on x86 or Filogic you can choose either.

2 Likes

I am using cake for now.

Run htop and check that CPU usage is not sitting at a steady 100%.

I think you linked the wrong line...

Your PR makes sense, but:

While the hex range approach {0x01-0x07, 0x09-0x1f} should work correctly, I believe the initial PR using ip dscp < cs4 ip dscp != cs1 is more maintainable and self-documenting. Using symbolic names (cs4, cs1) clearly shows the intention of not upgrading bulk traffic, and makes future adjustments easier. Though the hex ranges might be marginally more efficient, the readability and maintainability benefits of the symbolic approach outweigh this small performance difference. Would you be open to reverting to the symbolic version?
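For context, the two styles under discussion look roughly like this as rule fragments (the jump target is borrowed from the dscptag ruleset shown further down; the surrounding chain is omitted, and strictly speaking the hex set also excludes DSCP 0x00, which the symbolic match would include):

ip dscp < cs4 ip dscp != cs1 jump mark_500ms          # symbolic: self-documenting
ip dscp { 0x01-0x07, 0x09-0x1f } jump mark_500ms      # hex ranges: compact, but the intent is hidden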

1 Like

What @brada4 suggested. Also, just try different combinations (cake, hfsc, hfsc + pfifo, hfsc + fq_codel...) and settle on the one that "feels" best for you...

1 Like

Hey Hudra, I had a few questions for you or anyone else.

I noticed that with the QoS LuCI repo that existed before yours was released, there were basically only two options: fq_codel and CAKE/"Piece of Cake". What is that Piece of Cake thing?

Another question: with that prior LuCI QoS repo, it felt like I had to do lots of test runs to get things to pass the Waveform benchmark (sub-5ms on both up and down under load). If I didn't enter 80% or less of my total ISP bandwidth, the benchmark showed bufferbloat; at 80% I would get 80% of those speeds as expected, with bufferbloat eliminated.

With your solution, I notice it almost doesn't matter what I put in (for reference, 500 Mbps down and a borderline illegal ~10 Mbps up is all I can get; no fiber offered, unfortunately). If I set my connection bandwidth to ~80-90% of those totals, I pass the Waveform benchmark, but the download speed never exceeds ~140 Mbps.

Personally, I don't actually care about high download speeds when bufferbloat mitigation is the primary concern, but I'm wondering out of curiosity: is the reason for this the rank-order setting that tries to leave headroom for other devices, so that a single client can't use up the entire allotted 80% bandwidth figure and cause bufferbloat by starving everyone else?

A quick yes/no theoretical question if you're able: if I enable dropping invalid packets in the Firewall section, can that in theory lead to better latency in any context?

And finally, I noticed that your bufferbloat solution works even under software flow offloading. Really cool. But of course, hardware flow offloading does not. Do you have any idea if such a thing would be possible in the future at all, or what the main impediments to implementation would be? I currently use it as my quick toggle: if I want bandwidth, I toggle it on and CPU usage basically drops to zero regardless of load. If I need bufferbloat mitigation, I toggle it off, and your service picks up the slack without any restarts or anything of that sort.

What do you mean by "with the QoS Luci repo that existed before yours"? Do you mean SQM? If you're using SQM, you should actually have additional qdiscs available. The simple answer is that "Layer Cake," "Piece of Cake," etc., just use different CAKE configurations. "Piece of Cake" aims for simplicity, using only one class ("besteffort" plus the "triple-isolate" keyword), and will likely work very well for many use cases because it maintains a certain fairness across your connections and ensures that individual hosts/devices don't consume all your bandwidth.

If you want to apply targeted unfairness (which we gamers often want for our gaming packets), "Layer Cake" is the right setting. It doesn't use "besteffort" and a single class in the background but instead uses "diffserv3" with three classes (Bulk, Best Effort, and Voice).
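Roughly, the difference at the tc level looks like this (a minimal sketch with placeholder interface and bandwidth; SQM layers shaping, overhead compensation, and ingress handling on top of this):

tc qdisc replace dev eth1 root cake bandwidth 10Mbit besteffort triple-isolate   # "Piece of Cake": one tin, host fairness
tc qdisc replace dev eth1 root cake bandwidth 10Mbit diffserv3 triple-isolate    # "Layer Cake": Bulk / Best Effort / Voice tins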

It's best to look at the CAKE manual for more information:

https://man7.org/linux/man-pages/man8/tc-cake.8.html

and additionally, this post:

[OpenWrt Wiki] SQM Details

That might simply be due to your hardware, especially your CPU, not being powerful enough. Unfortunately, you haven't provided us with any information about your hardware, the bandwidth you receive from your ISP, or the settings you've configured. You didn't even mention whether you're using cake or hfsc with qosmate. Honestly, I'm not exactly sure what your problems are, but in any case, you could at least provide us with the following information:

ubus call system board
cat /etc/config/qosmate
service qosmate status

This could be due to ACK limiting, especially if you have a highly asymmetric connection. As mentioned, it would be primarily helpful to know what type of connection you have.

No idea, but theoretically, it might offer a very small performance advantage. However, the effect would likely be minimal and hardly measurable in most real-world use cases.

I don't think it will ever be possible...

Also, please make sure not to use the "QoS Luci repo that existed before yours" and qosmate at the same time...

Nothing to do with QoS. Those packets will not be NATed, and which ones get dropped is tunable via sysctl: bad checksums, opening a connection with a SYN-ACK, picking up a connection mid-stream with a bare ACK. Others are unconditionally invalid: impossible TCP flag combinations, packets outside the permissible window, some classes of related ICMP. Even if accepted and routed they would turn out useless, but a pure L3 router is supposed to forward them.
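The relevant conntrack knobs can be inspected with sysctl; a sketch (defaults vary by kernel and OpenWrt release):

sysctl net.netfilter.nf_conntrack_checksum        # verify checksums before tracking
sysctl net.netfilter.nf_conntrack_tcp_loose       # pick up mid-stream connections with a bare ACK
sysctl net.netfilter.nf_conntrack_tcp_be_liberal  # accept out-of-window TCP packets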

You can drop them at the earliest chance, before conntrack and the ALG helpers (e.g. FTP, SIP): https://github.com/openwrt/firewall4/pull/22
(just drop that ruleset.uc onto OpenWrt 23.05.0 or later)

If you receive them massively, dropping them early is best. Since they don't get conntracked, they don't go through qosmate; it's just some unmanaged side traffic.

1 Like

Yes, correct. Sorry, I forgot what it was called. Thank you for explaining the cake variants and for the links for further reading.

7800X3D (and also a 4790K system, plus a few MacBooks on Thunderbolt Ethernet adapters), Flint 2 router (Ethernet of course, using CAT 5E cables from BlueJeans, so there should be no issues; I tested two of them, both of which come with test reports). I wasn't seeing anything remotely taxing in the basic diagnostics. The bandwidth is that pathetic 500 Mbps down + 11 Mbps up.

Using CAKE, since I imagine I have the computing power to spare, as I've seen many mentions that HFSC is far less CPU-intensive. Otherwise it's mostly stock, apart from a few basic values. Sorry for leaving this info out before.

{
        "kernel": "6.6.47",
        "hostname": "GL-MT6000",
        "system": "ARMv8 Processor rev 4",
        "model": "GL.iNet GL-MT6000",
        "board_name": "glinet,gl-mt6000",
        "rootfs_type": "squashfs",
        "release": {
                "distribution": "OpenWrt",
                "version": "24.0",
                "revision": "r27229+44-ebe7c5f1a3",
                "target": "mediatek/filogic",
                "description": "OpenWrt 24.0 r27229+44-ebe7c5f1a3"
        }
}
config global 'global'
        option enabled '1'

config settings 'settings'
        option WAN 'eth1'
        option DOWNRATE '204800'
        option UPRATE '10240'
        option ROOT_QDISC 'cake'

config advanced 'advanced'
        option MSS '536'
        option PRESERVE_CONFIG_FILES '0'
        option WASHDSCPUP '1'
        option WASHDSCPDOWN '1'
        option BWMAXRATIO '20'
        option UDP_RATE_LIMIT_ENABLED '0'
        option TCP_UPGRADE_ENABLED '1'
        option UDPBULKPORT '51413'
        option TCPBULKPORT '51413,6881-6889'
        option NFT_HOOK 'forward'
        option NFT_PRIORITY '0'

config hfsc 'hfsc'
        option LINKTYPE 'ethernet'
        option OH '44'
        option gameqdisc 'fq_codel'
        option nongameqdisc 'fq_codel'
        option nongameqdiscoptions 'besteffort ack-filter'
        option MAXDEL '24'
        option PFIFOMIN '5'
        option PACKETSIZE '450'
        option netemdelayms '30'
        option netemjitterms '7'
        option netemdist 'normal'
        option pktlossp 'none'

config cake 'cake'
        option COMMON_LINK_PRESETS 'ethernet'
        option PRIORITY_QUEUE_INGRESS 'diffserv4'
        option PRIORITY_QUEUE_EGRESS 'diffserv4'
        option HOST_ISOLATION '1'
        option NAT_INGRESS '1'
        option NAT_EGRESS '1'
        option ACK_FILTER_EGRESS 'auto'
        option AUTORATE_INGRESS '0'

config custom_rules 'custom_rules'
==== qosmate Status ====
qosmate service is enabled.
Traffic shaping is active on the egress interface (eth1).
Traffic shaping is active on the ingress interface (ifb-eth1).
==== Overall Status ====
qosmate is currently active and managing traffic shaping.
==== Current Settings ====
Upload rate: 10240 kbps
Download rate: 204800 kbps
Game traffic upload: 1936 (Default value) kbps
Game traffic download: 31120 (Default value) kbps
Queue discipline: CAKE (Root qdisc)
==== Package Status ====
All required packages are installed.

==== Detailed Technical Information ====
Traffic Control (tc) Queues:
qdisc noqueue 0: dev lo root refcnt 2
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc mq 0: dev eth0 root
 Sent 26991388279 bytes 18561276 pkt (dropped 3566, overlimits 0 requeues 41461)
 backlog 0b 0p requeues 41461
qdisc fq_codel 0: dev eth0 parent :10 limit 10240p flows 1024 quantum 1518 target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :f limit 10240p flows 1024 quantum 1518 target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :e limit 10240p flows 1024 quantum 1518 target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :d limit 10240p flows 1024 quantum 1518 target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :c limit 10240p flows 1024 quantum 1518 target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :b limit 10240p flows 1024 quantum 1518 target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :a limit 10240p flows 1024 quantum 1518 target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :9 limit 10240p flows 1024 quantum 1518 target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :8 limit 10240p flows 1024 quantum 1518 target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :7 limit 10240p flows 1024 quantum 1518 target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
 Sent 10175349923 bytes 6967349 pkt (dropped 0, overlimits 0 requeues 961)
 backlog 0b 0p requeues 961
  maxpacket 7590 drop_overlimit 0 new_flow_count 446 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :6 limit 10240p flows 1024 quantum 1518 target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
 Sent 4801042150 bytes 3644325 pkt (dropped 0, overlimits 0 requeues 901)
 backlog 0b 0p requeues 901
  maxpacket 1518 drop_overlimit 0 new_flow_count 422 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :5 limit 10240p flows 1024 quantum 1518 target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :4 limit 10240p flows 1024 quantum 1518 target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
 Sent 12014996206 bytes 7949602 pkt (dropped 3566, overlimits 0 requeues 39599)
 backlog 0b 0p requeues 39599
  maxpacket 1518 drop_overlimit 0 new_flow_count 18694 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :3 limit 10240p flows 1024 quantum 1518 target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :2 limit 10240p flows 1024 quantum 1518 target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :1 limit 10240p flows 1024 quantum 1518 target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc cake 8003: dev eth1 root refcnt 17 bandwidth 10240Kbit diffserv4 dual-srchost nat wash ack-filter split-gso rtt 100ms noatm overhead 38 mpu 84
 Sent 934003224 bytes 7915950 pkt (dropped 1714050, overlimits 12993992 requeues 799)
 backlog 0b 0p requeues 799
 memory used: 4343296b of 4Mb
 capacity estimate: 10240Kbit
 min/max network layer size:           28 /    1500
 min/max overhead-adjusted size:       84 /    1538
 average network hdr offset:           14

                   Bulk  Best Effort        Video        Voice
  thresh        640Kbit    10240Kbit     5120Kbit     2560Kbit
  target         28.4ms          5ms          5ms        7.1ms
  interval        123ms        100ms        100ms        102ms
  pk_delay          1us       2.72ms       1.21ms        278us
  av_delay          0us        559us         50us         12us
  sp_delay          0us         16us          2us          3us
  backlog            0b           0b           0b           0b
  pkts                1      9398160       229984         1855
  bytes            1242   1064103132     50510013       204470
  way_inds            0        19845         1107            0
  way_miss            1        15050        10073            6
  way_cols            0            0            0            0
  drops               0      1026243           39            0
  marks               0            1            4            0
  ack_drop            0       686777          991            0
  sp_flows            1            4            0            0
  bk_flows            0            1            0            0
  un_flows            0            0            0            0
  max_len          1242        32567        16654          437
  quantum           300          312          300          300

qdisc ingress ffff: dev eth1 parent ffff:fff1 ----------------
 Sent 27951330265 bytes 32722729 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc noqueue 0: dev lan2 root refcnt 2
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc noqueue 0: dev lan3 root refcnt 2
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc noqueue 0: dev lan4 root refcnt 2
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc noqueue 0: dev lan5 root refcnt 2
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc noqueue 0: dev lan1 root refcnt 2
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc noqueue 0: dev br-guest root refcnt 2
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc noqueue 0: dev br-lan root refcnt 2
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc noqueue 0: dev wlan0 root refcnt 2
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc noqueue 0: dev wlan1 root refcnt 2
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc noqueue 0: dev wlan1-1 root refcnt 2
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc noqueue 0: dev wlan0-1 root refcnt 2
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc cake 8004: dev ifb-eth1 root refcnt 2 bandwidth 204800Kbit diffserv4 dual-dsthost nat wash ingress no-ack-filter split-gso rtt 100ms noatm overhead 38 mpu 84
 Sent 28805879244 bytes 32681357 pkt (dropped 41372, overlimits 23987304 requeues 0)
 backlog 0b 0p requeues 0
 memory used: 8517Kb of 10000Kb
 capacity estimate: 204800Kbit
 min/max network layer size:           46 /    1500
 min/max overhead-adjusted size:       84 /    1538
 average network hdr offset:           14

                   Bulk  Best Effort        Video        Voice
  thresh      12800Kbit   204800Kbit   102400Kbit    51200Kbit
  target            5ms          5ms          5ms          5ms
  interval        100ms        100ms        100ms        100ms
  pk_delay        613us         43us        102us          9us
  av_delay        275us          6us         17us          2us
  sp_delay          1us          1us          2us          1us
  backlog            0b           0b           0b           0b
  pkts          2030269     16634466      1188563     12869431
  bytes      2764824480  23851614100   1478967798    772165913
  way_inds            0        43817         2478            0
  way_miss            4        20079         5618            5
  way_cols            0            0            0            0
  drops               3        40942          427            0
  marks               0            2            2            0
  ack_drop            0            0            0            0
  sp_flows            0            4            1            0
  bk_flows            0            0            0            1
  un_flows            0            0            0            0
  max_len         28766        66616        43906           74
  quantum           390         1514         1514         1514


==== Nftables Ruleset (dscptag) ====
# Warning: table ip filter is managed by iptables-nft, do not touch!
        chain dscptag {
                type filter hook forward priority filter; policy accept;
                meta l4proto udp ct original proto-src 51413 counter packets 0 bytes 0 jump mark_cs1
                meta l4proto udp ct original proto-dst 51413 counter packets 0 bytes 0 jump mark_cs1
                meta l4proto tcp ct original proto-dst { 6881-6889, 51413 } counter packets 0 bytes 0 jump mark_cs1
                meta length < 100 tcp flags & ack == ack add @xfst4ack { ct id limit rate over 51200/second burst 5 packets } counter packets 23247 bytes 979885 jump drop995
                meta length < 100 tcp flags & ack == ack add @fast4ack { ct id limit rate over 5120/second burst 5 packets } counter packets 66407 bytes 3185469 jump drop95
                meta length < 100 tcp flags & ack == ack add @med4ack { ct id limit rate over 512/second burst 5 packets } counter packets 107805 bytes 5038242 jump drop50
                meta length < 100 tcp flags & ack == ack add @slow4ack { ct id limit rate over 512/second burst 5 packets } counter packets 53942 bytes 2520756 jump drop50
                meta l4proto tcp ct bytes < 12800000 jump mark_500ms
                meta l4proto tcp ct bytes > 256000000 jump mark_10s
                meta l4proto tcp add @slowtcp { ct id limit rate 150/second burst 150 packets } ip dscp set af42 counter packets 1111118 bytes 1131683498
                meta l4proto tcp add @slowtcp { ct id limit rate 150/second burst 150 packets } ip6 dscp set af42 counter packets 0 bytes 0
                meta priority set ip dscp map @priomap counter packets 18747431 bytes 26031334785
                meta priority set ip6 dscp map @priomap counter packets 0 bytes 0
                ct mark set ip dscp | 0x80 counter packets 18747431 bytes 26031334785
                ct mark set ip6 dscp | 0x80 counter packets 0 bytes 0
        }
}

==== Custom Rules Table Status ====
Custom rules table (qosmate_custom) is not active or doesn't exist.

Ah, so it's just a no-go even in theory. Ah well, I'll still keep it as a nice quick toggle without having to dig into LuCI if I need to turn off QoS for full bandwidth (there's even a phone app for this router that makes this one click; it's really nice). As for SQM, nah, I got rid of that the moment I tried your tool. I'm constantly doing full resets any time I feel the need to update the router firmware, or if I borked something (I actually have your "preserve settings through firmware upgrades" setting disabled, as my needs only require basic setup after a fresh firmware install). I've also disabled some QoS-like bundled service that runs on startup (I read your advice to others that they should be wary of any such services running by default and messing with the behavior of your tool).

So no issues to worry about on this configuration front I'd imagine.

Thanks brada, though I'm going to need to spend a few hours Googling some of the stuff you mentioned, as every sentence invokes terms I know almost nothing about, and certainly not at some relational, mechanistic level.

TL;DR: keep it enabled on a home router.

1 Like

The CPU of your gaming PC isn't relevant here (only the router's CPU matters).

I think the Flint 2 router should handle traffic shaping for 500 Mbps.

Your main challenge appears to be your highly asymmetric bandwidth distribution.

A brief explanation: Most downloads use TCP. TCP uses ACKs (acknowledgments) to confirm packet reception. For each packet that comes in through your download, a small ACK packet is sent back (upload) to signal successful receipt. With highly asymmetric connections (like your 500/10 Mbps), during a large download, the ACK packets alone can saturate your upload bandwidth, negatively impacting all other devices on your network, especially those running latency-sensitive applications.
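A rough back-of-the-envelope illustration (assuming ~1500-byte data packets and one ~40-byte ACK for every two of them; real ratios vary with the TCP stack and offloads):

500 Mbit/s ÷ (1500 B × 8) ≈ 41,700 packets/s downstream
41,700 ÷ 2 ≈ 20,800 ACKs/s upstream
20,800 × 40 B × 8 ≈ 6.7 Mbit/s of a 10 Mbit/s uplink gone to ACKs alone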

QoSmate has mechanisms to counter this, which can result in reduced download bandwidth. The following settings influence this behavior (without going into too much detail):

  1. ACK Rate
  2. Bandwidth Max Ratio

Since you're using CAKE as root qdisc, you could try disabling QoSmate's ACK handling mechanisms and use CAKE's built-in functionality, which is more sophisticated and might provide better bandwidth utilization. To try this:

  1. Set ACK Rate to '0'
  2. Set Bandwidth Max Ratio to a higher value (e.g., '100')
  3. Ensure in the CAKE tab that ACK Filter (Egress) is set to either 'AUTO' or 'Enable'

This approach leverages CAKE's native ACK filtering, which is specifically designed for asymmetric connections and might give you better overall performance.
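For those who prefer the CLI over LuCI, the three steps map to roughly the following (a sketch: BWMAXRATIO and ACK_FILTER_EGRESS appear in the config dump above, but the ACKRATE option name is my assumption for the "ACK Rate" field):

uci set qosmate.advanced.ACKRATE='0'
uci set qosmate.advanced.BWMAXRATIO='100'
uci set qosmate.cake.ACK_FILTER_EGRESS='auto'
uci commit qosmate
service qosmate restart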

4 Likes

Makes complete sense; you called it at first glance, even before my reply with all the system info.

Wow, this is actually wild: the jitter figures are so incredibly low now, and the download speed seems entirely unhindered. The graph grouping is so tight, barring the periodic large spikes in the active download phase (no idea what could be causing those, but the jitter remains sub-3ms until the large spikes ruin an otherwise impressive-looking average).

I did a few more test runs and the scores were even better: sub-2ms jitter on the active upload until the last moments, with a handful of 30ms+ spikes during the run. This looks like a massive improvement across the board.

So the question I have: what sort of things should I be thinking about if bandwidth isn't a concern? I tried a lower download cap, and the latency again improved, with tighter timings and shorter spikes. The only question left at this point: would doing this yield worse overall results in practice if there were, say, more people on the network? Or is my problem the monopoly ISP, which goodness knows when will offer fiber for symmetrical connections?

Btw thank you again, this is really awesome.

I don't understand what you are claiming is wrong here.

Not sure what you mean? Everything is going great now. The last thing I was wondering about, which I asked in my prior post, was whether lowering the download bandwidth to get closer to the poor upload speed is detrimental if multiple users on the network start going heavy with internet usage, or whether these new settings will still behave well enough even with this restriction on maximum download bandwidth. Or should I go back to the default settings I had before and let qosmate handle the ACK finessing?

Somewhat of a useless question at this point, I suppose, as the only relevant answer is: "try it and see how it goes".

No telling at what rate, or why, the 30 ms spikes your provider supplies come in. Your best chance is if their engineers play the same games as you. You have to pointlessly load the network with one hand, then run the Waveform test and game with the other.

1 Like

Lol, that sounded funny. I was more concerned about the 100ms+ spikes after increasing the bandwidth. But it's no big deal; I don't really care too much about download spikes, upload spikes are what I want to avoid like the plague.

Keep in mind that modern web browsers are marvels of technology, but high-performance network measurement instruments they are not... The Waveform test is an excellent help, but try to look at it as a first-level coarse measure and not publication-grade data.

Sidenote: cake and fq_codel by default operate with a 5ms delay target, so under sustained load with enough parallel flows you should actually expect the average queuing delay for bulk flows to reach 5ms, and hence the Waveform test to show around +5ms. (The Waveform test typically is not long and wide enough to show this reliably, but that is the theoretical expectation for the default target value.) If you are willing to trade away some throughput fairness for longer-RTT flows, you can manually adjust that queueing target. Mind you, that target exists for a good reason: it lets your link absorb the occasional traffic spike/burst (like a shock absorber helps a vehicle over a pothole more gracefully than no shock absorber).
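For reference, the knobs look roughly like this (illustrative interface and values; cake derives the target from its rtt keyword, about rtt/20, which is why the stats above show target 5ms at interval 100ms, with a floor at low tin bandwidth as in the 28.4ms Bulk tin, while fq_codel takes target and interval directly):

tc qdisc change dev eth1 root cake bandwidth 10240kbit rtt 60ms      # target becomes ~3ms
tc qdisc replace dev eth1 root fq_codel target 3ms interval 60ms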

2 Likes

QoSmate Update: APK Package Manager Support

I've added support for the APK package manager in QoSmate. This update ensures compatibility with OpenWrt SNAPSHOT builds that use APK instead of OPKG.

Users running OpenWrt SNAPSHOT builds should now be able to use QoSmate without any package manager related issues. The script automatically detects which package manager is being used and adjusts its behavior accordingly.
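The detection itself is typically just a check for which binary exists, along the lines of this sketch (not the actual QoSmate code):

if command -v apk >/dev/null 2>&1; then
    PKG_MANAGER="apk"    # OpenWrt SNAPSHOT builds
else
    PKG_MANAGER="opkg"   # classic OpenWrt releases
fi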

All existing functionality remains unchanged for users with OPKG-based systems.

Please report any issues you encounter while using QoSmate with APK-based systems.

5 Likes

Yes, and when I designed that bandwidth max ratio it was with the idea that people would prefer low latency and moderate asymmetry over high asymmetry and a fully jammed-up upload direction. So a max ratio of 15 or 20 is probably a good idea: if you're getting 10 up, that's 150-200 down. In general, anyone who cares about latency will prefer those speeds to maxing out their download and choking their upload.
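That ratio is in fact visible in the config and status dump earlier in the thread (the arithmetic, for reference):

UPRATE 10240 kbit/s × BWMAXRATIO 20 = 204800 kbit/s
i.e. exactly the ~200 Mbit/s DOWNRATE shown there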

So, pretend your ISP offers you 200/10 and then just be happy when QoSmate gives you 200/10 :star_struck:

3 Likes

I have a 300 Mbps connection. When I use SQM I get around 260 Mbps. However, when using qosmate I can't get over 160 Mbps no matter what I do. Why is my download speed so much slower? I also just installed the newest update, and now I'm getting +10ms on my upload, causing latency where there was none before.