QoSmate: (Yet Another) Quality of Service Tool for OpenWrt

I'm excited to announce the release of QoSmate, a new QoS solution for OpenWrt. QoSmate aims to simplify the process of setting up QoS on your router while offering customization options for those who need them.

Key features:

  • Support for both HFSC and CAKE queueing disciplines
  • Easy-to-use LuCI interface (luci-app-qosmate)
  • Flexible traffic classification with nftables
  • Automatic package installation and setup
  • Ingress shaping via ifb + ctinfo (see the sketch below)
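
For the technically curious: on ingress, download traffic is redirected from the WAN interface to an IFB device, and the ctinfo action restores the DSCP value that was saved into the conntrack mark on egress, so packets can be classified before they are shaped. The lines below are only an illustrative sketch of that general ifb + ctinfo pattern (interface name, rate and qdisc are placeholders), not QoSmate's exact commands:

ip link add name ifb-eth1 type ifb
ip link set ifb-eth1 up
tc qdisc add dev eth1 handle ffff: ingress
# restore DSCP from the conntrack mark (DSCP mask 63, "valid" bit 128) and redirect to the IFB
tc filter add dev eth1 parent ffff: protocol all matchall \
    action ctinfo dscp 63 128 mirred egress redirect dev ifb-eth1
# the chosen shaper (HFSC or CAKE) is then attached to the IFB, e.g.:
tc qdisc add dev ifb-eth1 root cake bandwidth 90Mbit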

QoSmate builds upon the excellent work of @dlakelan and his SimpleHFSCgamerscript. It extends the original concept with additional features and a user-friendly interface.

You can find QoSmate on GitHub:

Installation:
QoSmate is not yet an official OpenWrt package, but it's designed for easy installation. You can set it up with just two commands. Detailed installation instructions are available in the README file on the GitHub repository.

Please note that the README is still a work in progress. I am continuously updating it to provide helpful information. If you encounter any unclear instructions or have suggestions for improvement, please let me know.

While QoSmate isn't currently in the official OpenWrt package repository, if there's sufficient interest and demand from the community, I'll consider submitting a request for its inclusion in future OpenWrt releases.

Getting Started:

  1. Follow the installation instructions in the GitHub README
  2. Access QoSmate through the LuCI web interface under Network > QoSmate
  3. Configure your WAN interface and speeds in the Settings tab (or use the Auto Setup function; a CLI alternative is sketched after this list)
  4. Choose between HFSC and CAKE in the Advanced tab
  5. Set up custom rules in the Rules tab if needed
  6. Apply the changes
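
For those who prefer the command line over LuCI, the basic settings can usually also be edited via UCI. The snippet below is only a hypothetical sketch: the section and option names used here (settings, WAN, DOWNRATE, UPRATE) are assumptions, so check /etc/config/qosmate and the README for the actual names before copying anything:

uci show qosmate                            # inspect the current configuration
uci set qosmate.settings.WAN='eth1'         # WAN device (assumed option name)
uci set qosmate.settings.DOWNRATE='90000'   # download rate in kbit/s (assumed option name)
uci set qosmate.settings.UPRATE='45000'     # upload rate in kbit/s (assumed option name)
uci commit qosmate
service qosmate restart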

Special thanks to @choppyc for extensive testing and invaluable feedback throughout the development process.

I welcome any feedback, suggestions, or contributions from the community.

Screenshots:
To give you a better idea of what QoSmate looks like in action, I've included some screenshots below:

[screenshot]


Interesting contribution, Hudra. This weekend I'm going to try it and see how it works for me. Thank you! :sunglasses:


Will this work if my WAN is WiFi (wwan)?

My only choice for internet is WiFi.

Not sure if it matters, but my router is a TP-Link TL-WDR4300 running 23.05.3.

Currently, QoSmate is primarily designed for wired WAN connections with relatively stable bandwidth. Using it on a WWAN connection might not be optimal due to the potentially variable nature of WiFi links.

However, if your WiFi connection provides relatively stable and consistent bandwidth, you can give it a try. The key is to set the shaper rates to a bandwidth value that is available at all times; for example, if the link fluctuates between 80 and 150 Mbit/s, shape the download to around 70-80 Mbit/s. This ensures that the QoS rules remain effective even during periods of lower bandwidth.

I have also been considering implementing CAKE w/ Adaptive Bandwidth in some way. This could make QoSmate more effective for variable-bandwidth connections like WiFi or mobile internet.


I share my internet with other users and just use QoS to divide the total bandwidth by 3, and I have not noticed my connection going slower than that.

I'm not sure if I understood you correctly. You want to divide your total bandwidth equally at a fixed rate among 3 users, so that with 300 Mbit/s, for example, everybody only gets 100 Mbit/s? If that is the case, QoSmate might not be the ideal solution for you. QoSmate is designed to manage overall bandwidth and prioritize different types of traffic, rather than allocating fixed bandwidth shares to different users or networks.

If your primary goal is to evenly divide bandwidth among multiple users or networks, a solution like luci-app-nft-qos might be more suitable, as it's designed for that specific purpose.

However, you can still use QoSmate to manage the overall WWAN uplink.

Note: this is a really inefficient method. Suppose you're the only one home; you're still getting 1/3 of the speed, and nobody benefits from the rest.

Instead, QoSmate will allow you to share the full bandwidth equally among those who demand it at any given time.


You want to divide your total bandwidth equally at a fixed rate among 3 users, so that with 300 Mbit/s, for example, everybody only gets 100 Mbit/s?

No. I use OpenWrt as a client on the house's WiFi; the client interface is set up as wwan, so my network is NATed from the rest. I just want to limit mine to 1/3.

More than once I've been blamed for hogging all the traffic. With the way it's set up now, using the SQM app in OpenWrt, that conversation stops in its tracks because I can just tell them I have mine capped at 1/3.

I actually like yours; it uses about half the CPU compared to the OpenWrt luci-app-sqm package.

I use cake's dual-dsthost: Flows are defined by the 5-tuple, and fairness is applied first over destination addresses, then over individual flows. Good for use on ingress traffic to a LAN from the internet, where it'll prevent any one LAN host from monopolising the downlink, regardless of the number of flows they use.

There's no artificial upper-limit on each host. Bandwidth is shared equally, allowing single hosts to use 100% of bandwidth.
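
For reference, outside of QoSmate this behaviour can be reproduced with plain tc and CAKE. The commands below are just an illustrative sketch (interface names and rates are placeholders), not what QoSmate itself runs:

# egress (upload): fairness applied per internal source host first
tc qdisc replace dev eth1 root cake bandwidth 45Mbit dual-srchost nat
# ingress (download, via an IFB device): fairness applied per internal destination host first
tc qdisc replace dev ifb-eth1 root cake bandwidth 90Mbit dual-dsthost nat ingress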


Thank you for your contribution! However, it doesn't seem to shape my download rate, although shaping the upload rate works.

My max download bandwidth is 300000 Kbps, but even after setting it to 170000 Kbps for testing, the speed was still 300 Mbps on Speedtest and Fast.com. I'm connected to the router via Ethernet cable, by the way. Hardware and software acceleration are disabled. My router is an MT6000.

What is the output of:

service qosmate status
==== qosmate Status ====
qosmate service is enabled.
Traffic shaping is active on the egress interface (eth1).
Traffic shaping is active on the ingress interface (ifb-eth1).
==== Overall Status ====
qosmate is currently active and managing traffic shaping.
==== Current Settings ====
Upload rate: 500000 kbps
Download rate: 270000 kbps
Game traffic upload: 75400 (Default value) kbps
Game traffic download: 40900 (Default value) kbps
Queue discipline: pfifo (for game traffic in HFSC)
==== Detailed Technical Information ====
Traffic Control (tc) Queues:
qdisc noqueue 0: dev lo root refcnt 2
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc mq 0: dev eth0 root
 Sent 1316725574 bytes 1466396 pkt (dropped 0, overlimits 0 requeues 1315)
 backlog 0b 0p requeues 1315
qdisc fq_codel 0: dev eth0 parent :10 limit 10240p flows 1024 quantum 1518 target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :f limit 10240p flows 1024 quantum 1518
target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :e limit 10240p flows 1024 quantum 1518
target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :d limit 10240p flows 1024 quantum 1518
target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :c limit 10240p flows 1024 quantum 1518
target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :b limit 10240p flows 1024 quantum 1518
target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :a limit 10240p flows 1024 quantum 1518
target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :9 limit 10240p flows 1024 quantum 1518
target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :8 limit 10240p flows 1024 quantum 1518
target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :7 limit 10240p flows 1024 quantum 1518
target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
 Sent 1872489 bytes 11291 pkt (dropped 0, overlimits 0 requeues 14)
 backlog 0b 0p requeues 14
  maxpacket 1470 drop_overlimit 0 new_flow_count 4 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :6 limit 10240p flows 1024 quantum 1518
target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
 Sent 1152405688 bytes 1263167 pkt (dropped 0, overlimits 0 requeues 1114)
 backlog 0b 0p requeues 1114
  maxpacket 8612 drop_overlimit 0 new_flow_count 882 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :5 limit 10240p flows 1024 quantum 1518
target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :4 limit 10240p flows 1024 quantum 1518
target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
 Sent 162447397 bytes 191938 pkt (dropped 0, overlimits 0 requeues 187)
 backlog 0b 0p requeues 187
  maxpacket 9108 drop_overlimit 0 new_flow_count 309 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :3 limit 10240p flows 1024 quantum 1518
target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :2 limit 10240p flows 1024 quantum 1518
target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :1 limit 10240p flows 1024 quantum 1518
target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc hfsc 1: dev eth1 root refcnt 17 default 13
 Sent 282060064 bytes 1146677 pkt (dropped 107, overlimits 167244 requeues 408)
 backlog 0b 0p requeues 408
qdisc cake 8022: dev eth1 parent 1:13 bandwidth unlimited besteffort triple-isolate nonat nowash ack-filter split-gso rtt 100ms raw overhead 0
 Sent 186299629 bytes 868948 pkt (dropped 75, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
 memory used: 1196800b of 15140Kb
 capacity estimate: 0bit
 min/max network layer size:           66 /    1554
 min/max overhead-adjusted size:       66 /    1554
 average network hdr offset:           14
                  Tin 0
  thresh           0bit
  target            5ms
  interval        100ms
  pk_delay         66us
  av_delay          7us
  sp_delay          1us
  backlog            0b
  pkts           869023
  bytes       186310507
  way_inds        78597
  way_miss        35665
  way_cols            0
  drops               1
  marks               0
  ack_drop           74
  sp_flows            2
  bk_flows            1
  un_flows            0
  max_len         65046
  quantum          1514
qdisc cake 8024: dev eth1 parent 1:15 bandwidth unlimited besteffort triple-isolate nonat nowash ack-filter split-gso rtt 100ms raw overhead 0
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
 memory used: 0b of 15140Kb
 capacity estimate: 0bit
 min/max network layer size:        65535 /       0
 min/max overhead-adjusted size:    65535 /       0
 average network hdr offset:            0
                  Tin 0
  thresh           0bit
  target            5ms
  interval        100ms
  pk_delay          0us
  av_delay          0us
  sp_delay          0us
  backlog            0b
  pkts                0
  bytes               0
  way_inds            0
  way_miss            0
  way_cols            0
  drops               0
  marks               0
  ack_drop            0
  sp_flows            0
  bk_flows            0
  un_flows            0
  max_len             0
  quantum          1514
qdisc pfifo 10: dev eth1 parent 1:11 limit 3338p
 Sent 1164830 bytes 6725 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc cake 8021: dev eth1 parent 1:12 bandwidth unlimited besteffort triple-isolate nonat nowash ack-filter split-gso rtt 100ms raw overhead 0
 Sent 94595605 bytes 271004 pkt (dropped 32, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
 memory used: 1296896b of 15140Kb
 capacity estimate: 0bit
 min/max network layer size:           70 /    1554
 min/max overhead-adjusted size:       70 /    1554
 average network hdr offset:           14
                  Tin 0
  thresh           0bit
  target            5ms
  interval        100ms
  pk_delay         28us
  av_delay          5us
  sp_delay          2us
  backlog            0b
  pkts           271036
  bytes        94600461
  way_inds         3337
  way_miss         9101
  way_cols            0
  drops               1
  marks               0
  ack_drop           31
  sp_flows            1
  bk_flows            1
  un_flows            0
  max_len         65046
  quantum          1514
qdisc cake 8023: dev eth1 parent 1:14 bandwidth unlimited besteffort triple-isolate nonat nowash ack-filter split-gso rtt 100ms raw overhead 0
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
 memory used: 0b of 15140Kb
 capacity estimate: 0bit
 min/max network layer size:        65535 /       0
 min/max overhead-adjusted size:    65535 /       0
 average network hdr offset:            0
                  Tin 0
  thresh           0bit
  target            5ms
  interval        100ms
  pk_delay          0us
  av_delay          0us
  sp_delay          0us
  backlog            0b
  pkts                0
  bytes               0
  way_inds            0
  way_miss            0
  way_cols            0
  drops               0
  marks               0
  ack_drop            0
  sp_flows            0
  bk_flows            0
  un_flows            0
  max_len             0
  quantum          1514
qdisc ingress ffff: dev eth1 parent ffff:fff1 ----------------
 Sent 118118 bytes 97 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc noqueue 0: dev lan2 root refcnt 2
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc noqueue 0: dev lan3 root refcnt 2
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc noqueue 0: dev lan4 root refcnt 2
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc noqueue 0: dev lan5 root refcnt 2
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc noqueue 0: dev lan1 root refcnt 2
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc noqueue 0: dev br-lan root refcnt 2
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc noqueue 0: dev phy0-ap0 root refcnt 2
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc noqueue 0: dev phy1-ap0 root refcnt 2
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc hfsc 1: dev ifb-eth1 root refcnt 2 default 13
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc cake 8026: dev ifb-eth1 parent 1:13 bandwidth unlimited besteffort triple-isolate nonat nowash ack-filter split-gso rtt 100ms raw overhead 0
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
 memory used: 0b of 15140Kb
 capacity estimate: 0bit
 min/max network layer size:        65535 /       0
 min/max overhead-adjusted size:    65535 /       0
 average network hdr offset:            0
                  Tin 0
  thresh           0bit
  target            5ms
  interval        100ms
  pk_delay          0us
  av_delay          0us
  sp_delay          0us
  backlog            0b
  pkts                0
  bytes               0
  way_inds            0
  way_miss            0
  way_cols            0
  drops               0
  marks               0
  ack_drop            0
  sp_flows            0
  bk_flows            0
  un_flows            0
  max_len             0
  quantum          1514
qdisc pfifo 10: dev ifb-eth1 parent 1:11 limit 1805p
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc cake 8028: dev ifb-eth1 parent 1:15 bandwidth unlimited besteffort triple-isolate nonat nowash ack-filter split-gso rtt 100ms raw overhead 0
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
 memory used: 0b of 15140Kb
 capacity estimate: 0bit
 min/max network layer size:        65535 /       0
 min/max overhead-adjusted size:    65535 /       0
 average network hdr offset:            0
                  Tin 0
  thresh           0bit
  target            5ms
  interval        100ms
  pk_delay          0us
  av_delay          0us
  sp_delay          0us
  backlog            0b
  pkts                0
  bytes               0
  way_inds            0
  way_miss            0
  way_cols            0
  drops               0
  marks               0
  ack_drop            0
  sp_flows            0
  bk_flows            0
  un_flows            0
  max_len             0
  quantum          1514
qdisc cake 8025: dev ifb-eth1 parent 1:12 bandwidth unlimited besteffort triple-isolate nonat nowash ack-filter split-gso rtt 100ms raw overhead 0
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
 memory used: 0b of 15140Kb
 capacity estimate: 0bit
 min/max network layer size:        65535 /       0
 min/max overhead-adjusted size:    65535 /       0
 average network hdr offset:            0
                  Tin 0
  thresh           0bit
  target            5ms
  interval        100ms
  pk_delay          0us
  av_delay          0us
  sp_delay          0us
  backlog            0b
  pkts                0
  bytes               0
  way_inds            0
  way_miss            0
  way_cols            0
  drops               0
  marks               0
  ack_drop            0
  sp_flows            0
  bk_flows            0
  un_flows            0
  max_len             0
  quantum          1514
qdisc cake 8027: dev ifb-eth1 parent 1:14 bandwidth unlimited besteffort triple-isolate nonat nowash ack-filter split-gso rtt 100ms raw overhead 0
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
 memory used: 0b of 15140Kb
 capacity estimate: 0bit
 min/max network layer size:        65535 /       0
 min/max overhead-adjusted size:    65535 /       0
 average network hdr offset:            0
                  Tin 0
  thresh           0bit
  target            5ms
  interval        100ms
  pk_delay          0us
  av_delay          0us
  sp_delay          0us
  backlog            0b
  pkts                0
  bytes               0
  way_inds            0
  way_miss            0
  way_cols            0
  drops               0
  marks               0
  ack_drop            0
  sp_flows            0
  bk_flows            0
  un_flows            0
  max_len             0
  quantum          1514
==== Nftables Ruleset (dscptag) ====
        chain dscptag {
                type filter hook forward priority filter; policy accept;
                ip dscp set cs0 counter packets 1543602 bytes 533097394
                ip6 dscp set cs0 counter packets 604283 bytes 1329148249
                ip protocol udp udp sport 51413 ip dscp set cs1 counter packets 29 bytes 9715
                ip6 nexthdr udp udp sport 51413 ip6 dscp set cs1 counter packets 80 bytes 17398
                ip protocol udp udp dport 51413 ip dscp set cs1 counter packets 145 bytes 19230
                ip6 nexthdr udp udp dport 51413 ip6 dscp set cs1 counter packets 138 bytes 20055
                ip protocol tcp tcp sport { 6881-6889, 51413 } ip dscp set cs1 counter packets 2835 bytes 157292
                ip6 nexthdr tcp tcp sport { 6881-6889, 51413 } ip6 dscp set cs1 counter packets 244 bytes 24556
                ip protocol tcp tcp dport { 6881-6889, 51413 } ip dscp set cs1 counter packets 3995 bytes 6018904
                ip6 nexthdr tcp tcp dport { 6881-6889, 51413 } ip6 dscp set cs1 counter packets 388 bytes 38047
                ip protocol tcp tcp flags & ack == ack meta length < 100 add @xfst4ack { ip daddr . ip saddr . tcp dport . tcp sport limit rate over 2500000/second burst 5 packets } counter packets 0 bytes 0 jump drop995
                ip protocol tcp tcp flags & ack == ack meta length < 100 add @fast4ack { ip daddr . ip saddr . tcp dport . tcp sport limit rate over 250000/second burst 5 packets } counter packets 0 bytes 0 jump drop95
                ip protocol tcp tcp flags & ack == ack meta length < 100 add @med4ack { ip daddr . ip saddr . tcp dport . tcp sport limit rate over 25000/second burst 5 packets } counter packets 48975 bytes 3517445 jump drop50
                ip protocol tcp tcp flags & ack == ack meta length < 100 add @slow4ack { ip daddr . ip saddr . tcp dport . tcp sport limit rate over 25000/second burst 5 packets } counter packets 24586 bytes 1765960 jump drop50
                ip protocol udp udp dport { 53, 3478-3479, 5938, 19302-19309 } ip dscp set af42 counter packets 4914 bytes 336347
                ip6 nexthdr udp udp dport { 53, 3478-3479, 5938, 19302-19309 } ip6 dscp set af42 counter packets 1215 bytes 115043
                ip protocol tcp ct bytes < 16875000 ip dscp < cs4 ip dscp set cs0 counter packets 492399 bytes 206742545
                ip protocol tcp ct bytes > 337500000 ip dscp < cs4 ip dscp set cs1 counter packets 0 bytes 0
                ip protocol tcp add @slowtcp4 { ip saddr . ip daddr . tcp sport . tcp dport limit rate 150/second burst 150 packets } ip dscp set af42 counter packets 458217 bytes 193785063
                ip6 nexthdr tcp add @slowtcp6 { ip6 saddr . ip6 daddr . tcp sport . tcp dport limit rate 150/second burst 150 packets } ip6 dscp set af42 counter packets 158277 bytes 94340730
                ip saddr 192.168.1.181 udp dport != { 80, 443 } ip dscp set cs5 counter packets 6722 bytes 801842 comment "Game_Console_Outbound"
                udp sport != { 80, 443 } ip daddr 192.168.1.181 ip dscp set cs5 counter packets 2717 bytes 398503 comment "Game_Console_Inbound"
                ip saddr 192.168.1.181 udp sport 3074 udp dport { 4380, 27000-27031, 27036 } ip dscp set cs5 counter packets 0 bytes 0 comment "dota2_1"
                ip daddr 192.168.1.181 udp dport 3074 ip dscp set cs5 counter packets 0 bytes 0 comment "dota2_2"
                meta priority set ip dscp map @priomap counter packets 1506774 bytes 530453027
                meta priority set ip6 dscp map @priomap counter packets 604283 bytes 1329148249
                ct mark set ip dscp | 0x80 counter packets 1506774 bytes 530453027
                ct mark set ip6 dscp | 0x80 counter packets 604283 bytes 1329148249
                oifname "eth1" ip dscp set cs0
                oifname "eth1" ip6 dscp set cs0
        }
}

Note: I said 170000 earlier because I wanted to test whether it really shapes the download, since the value I actually want (270000) might fall within the margin of error of a speedtest. Still, it maxes out my download bandwidth.

Please post the whole output.


As @dlakelan pointed out, limiting your bandwidth to just one-third is an inefficient approach. Instead, it would be more practical to utilize all available bandwidth when, for example, no one else is using the network at home.

To achieve this, you'd need to apply shaping on your upstream device. As @Nullity said, you can switch to CAKE and use Host Isolation, which in QoSmate is the option for dual-srchost/dual-dsthost. This setting ensures that bandwidth is fairly distributed among the different hosts on your network. The key advantage of dual-srchost/dual-dsthost over simply prioritizing individual flows is that it prevents a single host with many active flows, such as one downloading torrents, from monopolizing the bandwidth: CAKE allocates bandwidth fairly to the host as a whole rather than spreading it across each individual flow.

However, keep in mind that while CAKE provides features like host isolation, it generally requires more resources than HFSC. HFSC also incorporates fair bandwidth distribution, at least to an extent.
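
If you want to gauge the difference on your own hardware, you can watch the shaper and the CPU while a speedtest saturates the link (generic commands, not QoSmate-specific):

tc -s qdisc show dev eth1    # shows which root qdisc is active, plus drop/backlog counters
top -d 1                     # watch CPU and softirq load while the link is saturated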


A few steps to consider:

  1. Please switch back to fq_codel as the Non-Game Queue Discipline and then try again.
  2. What is the output of:
ifstatus wan | grep -e '"device"' | cut -d'"' -f4
ifstatus wan | grep -e '"l3_device"'

tc class show dev eth1
tc class show dev ifb-eth1
  3. Did you ensure that:
  • Any existing QoS services or scripts (e.g., SQM, Qosify, DSCPCLASSIFY, SimpleHFSCgamerscript...) are disabled and stopped to avoid conflicts (see the example commands below)?
  • Your router has been rebooted to clear out old settings for a clean start?
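
For example, a leftover SQM instance can be stopped and disabled like this (sketch only; adjust the service name for whatever other QoS tool is installed):

service sqm stop
service sqm disable
tc qdisc show dev eth1    # verify that no other shaper is still attached to the WAN interface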

That's not a good idea; if the internet goes down for whatever reason, it will be my fault.

These guys are not technical at all, and I really don't feel like dealing with them.

Just to give you an idea of how non-technical they are: if the power went down again across most of the North American continent, they would still blame me for the internet being out.


Beautiful! I still have to test it, but at the moment it's the best solution for people who have poor connections: easy setup and visual support. Awesome work from everyone involved in this, @Hudra.

  1. Switched back to fq_codel and rebooted; still maxing out the download bandwidth.
root@FLINT-2:~# ifstatus wan | grep -e '"device"' | cut -d'"' -f4
eth1

root@FLINT-2:~# ifstatus wan | grep -e '"l3_device"'
        "l3_device": "eth1",

root@FLINT-2:~# tc class show dev eth1
class hfsc 1:11 parent 1:1 leaf 10: rt m1 485Mbit d 25ms m2 75400Kbit
class hfsc 1: root
class hfsc 1:1 parent 1: ls m1 0bit d 0us m2 500Mbit ul m1 0bit d 0us m2 500Mbit
class hfsc 1:13 parent 1:1 leaf 8022: ls m1 100Mbit d 25ms m2 225Mbit
class hfsc 1:12 parent 1:1 leaf 8021: ls m1 350Mbit d 25ms m2 150Mbit
class hfsc 1:15 parent 1:1 leaf 8024: ls m1 15Mbit d 25ms m2 50Mbit
class hfsc 1:14 parent 1:1 leaf 8023: ls m1 35Mbit d 25ms m2 75Mbit
class fq_codel 8022:6 parent 8022:
class fq_codel 8022:1c parent 8022:
class fq_codel 8022:bc parent 8022:
class fq_codel 8022:dc parent 8022:
class fq_codel 8022:18d parent 8022:
class fq_codel 8022:1d2 parent 8022:
class fq_codel 8022:352 parent 8022:
class fq_codel 8022:393 parent 8022:
class fq_codel 8022:3c3 parent 8022:
class fq_codel 8022:3da parent 8022:
class fq_codel 8022:3db parent 8022:
class fq_codel 8024:229 parent 8024:
class fq_codel 8021:9 parent 8021:
class fq_codel 8021:11 parent 8021:
class fq_codel 8021:23 parent 8021:
class fq_codel 8021:3c parent 8021:
class fq_codel 8021:65 parent 8021:
class fq_codel 8021:b8 parent 8021:
class fq_codel 8021:e7 parent 8021:
class fq_codel 8021:f0 parent 8021:
class fq_codel 8021:fa parent 8021:
class fq_codel 8021:108 parent 8021:
class fq_codel 8021:109 parent 8021:
class fq_codel 8021:113 parent 8021:
class fq_codel 8021:138 parent 8021:
class fq_codel 8021:15a parent 8021:
class fq_codel 8021:169 parent 8021:
class fq_codel 8021:187 parent 8021:
class fq_codel 8021:1b7 parent 8021:
class fq_codel 8021:1b9 parent 8021:
class fq_codel 8021:1fb parent 8021:
class fq_codel 8021:226 parent 8021:
class fq_codel 8021:27c parent 8021:
class fq_codel 8021:288 parent 8021:
class fq_codel 8021:2a2 parent 8021:
class fq_codel 8021:2cc parent 8021:
class fq_codel 8021:2d6 parent 8021:
class fq_codel 8021:2f9 parent 8021:
class fq_codel 8021:304 parent 8021:
class fq_codel 8021:380 parent 8021:
class fq_codel 8021:3cc parent 8021:
class fq_codel 8021:3ce parent 8021:
class fq_codel 8021:3d6 parent 8021:

root@FLINT-2:~# tc class show dev ifb-eth1
class hfsc 1:11 parent 1:1 leaf 10: rt m1 261900Kbit d 25ms m2 40900Kbit
class hfsc 1: root
class hfsc 1:1 parent 1: ls m1 0bit d 0us m2 270Mbit ul m1 0bit d 0us m2 270Mbit
class hfsc 1:2 parent 1: ls m1 50Mbit d 25ms m2 10Mbit
class hfsc 1:13 parent 1:1 leaf 8026: ls m1 54Mbit d 25ms m2 121500Kbit
class hfsc 1:12 parent 1:1 leaf 8025: ls m1 189Mbit d 25ms m2 81Mbit
class hfsc 1:15 parent 1:1 leaf 8028: ls m1 8100Kbit d 25ms m2 27Mbit
class hfsc 1:14 parent 1:1 leaf 8027: ls m1 18900Kbit d 25ms m2 40500Kbit
  3. No other scripts are running. Packet steering is also disabled. Did a few reboots with the same result.

By the way, I also tried a hard reset to start fresh, still to no avail.

Hello everyone,
What an incredible job, Hudra. I can't wait to test QoSmate; you've made the perfect combination: SQM, QoS, priorities, etc.
Big hug from Brazil.


This is really really strange.

What is the output of:

ubus call system board; 

Do you get any errors when you manually restart the script:

service qosmate restart

... and please show me the whole output when you manually restart the script.