Now it seems like everything is working? What did you change or do?
I did nothing. The problems appear when I click Auto Setup, or when I click Save after changing the download speed.
Cool, we've been going back and forth this whole time, and now you're finally telling me what the actual problem is?
Sorry, but if you're not more specific, I can't help you.
Hope all is well. Should we adjust the ring buffer size for all point-to-point connections to be the same? I primarily play Warzone. wan > eth1 > gaming PC are all set to rx/tx 250. I am on Metronet asymmetrical fiber to an x86 PC.
I started looking into the implementation; however, there is a bit of an issue. If you want update to be able to fetch and install a specified qosmate version, that version needs to be registered on GitHub as a tag. This basically means making a GitHub release for each version (at least for the versions you want update to be able to fetch), which you are currently not doing. So the question is: do you want this functionality, at the cost of making a GitHub release whenever you release a new version? Making GitHub releases is not a big deal, but it does add that little bit of extra work.
In general, the way we implemented this in adblock-lean, update has quite a lot of flexibility. It can fetch the latest release (update or update -v latest), the current snapshot from the master branch (update -v snapshot), a specified released version (update -v v[version]), a version corresponding to a GitHub tag (update -v tag=<tag>), or any version corresponding to a commit in any branch (update -v commit=<commit_hash>). Any or all of this functionality can be implemented in qosmate, and it is also easy to implement something like update -v branch=<branch_name>, which would fetch the version corresponding to the latest commit in a given branch.
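For illustration, assuming qosmate adopted the same interface as adblock-lean (purely hypothetical; none of this is implemented in qosmate yet, and the service-command form is an assumption), the invocations could look like:

```shell
# Hypothetical qosmate update invocations mirroring adblock-lean's interface
service qosmate update                    # fetch the latest release
service qosmate update -v snapshot        # current snapshot of the master branch
service qosmate update -v v1.2.3          # a specific released version
service qosmate update -v tag=v1.2.3      # version registered as a GitHub tag
service qosmate update -v commit=e486c9d  # version at a specific commit hash
```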
We also have a notion of update channel, which can be set to either release or snapshot. On the release update channel (default), users get an update notification when a new release is available. On the snapshot update channel, users are notified about an update when any new commits are merged to the master branch. The update channel is initially set to release. If a user later issues the update command with options, the update channel is automatically inferred and changed based on the options. Updating to a version which corresponds to a commit or a tag disables the updates check.
In short - there is a lot of flexibility, so the question is what do you want implemented.
Thanks, I'm not at home for the rest of the weekend. Will think about it and come back to you as soon as I'm back home.
Thank you @Hudra for creating QoSmate, and everyone who has contributed. It's amazing!
It's been a steep learning curve for me over the last couple of weeks but I am really enjoying the journey.
I play COD on console and have been using DumaOS for several years which is awful in comparison. I have been reading through the README and wanted to have a go at limiting bandwidth with a custom rule. Is it as simple as adding the suggested rule in the README?
Thanks
chain forward {
    type filter hook forward priority 0; policy accept;
    # Limit traffic with destination port 3074 to 4 MB/s (32 Mbit/s)
    ct original proto-dst 3074 limit rate over 4 mbytes/second counter drop
}
I have added the above to the custom rules section and it says "Custom rules validation successful", but how can I test it, and would this rule be enough?
Run an iperf3 server on a remote host on that port and test it out.
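A minimal sketch of such a test, assuming iperf3 is installed on both ends (the remote host address is a placeholder):

```shell
# On a remote machine you control: listen on the rate-limited port
iperf3 -s -p 3074

# On a LAN client behind the router: push UDP traffic at 50 Mbit/s
# towards that port; if the rule works, reported throughput/loss
# should reflect the configured cap rather than the full 50 Mbit/s
iperf3 -c <remote_host> -u -p 3074 -b 50M -t 10
```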
You're welcome!
It would be interesting to know what your goal is with bandwidth limiting and what you are trying to achieve.
- CoD definitely doesn't need 32 Mbit/s of bandwidth (it's more like 1-2 Mbit/s). You can check the approximate value while playing by looking at the connections tab.
- You didn't mention which console you're using. For example, Xbox uses port 3074 not only for CoD but also for other games, possibly even other applications or downloads (I'm not entirely sure). Since the rule you suggested is quite general (no specific protocol defined), it's possible that other services on your Xbox might also be affected by the bandwidth limit, depending on your goal. Additionally, if other network devices also use port 3074, their bandwidth would be limited as well.
- If you want to test the rule, just replace the port with "8080", save & apply, and then go to speedtest.net and run a speed test. Speedtest uses port 8080, and you should see the bandwidth you have configured.
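So the test variant of the rule would look like this (same rule as in the README example, only the port swapped; just a sketch for a temporary test):

```
chain forward {
    type filter hook forward priority 0; policy accept;
    # Temporary test: limit port 8080 (used by speedtest.net) to 4 MB/s
    ct original proto-dst 8080 limit rate over 4 mbytes/second counter drop
}
```

Remember to switch it back to 3074 afterwards.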
As an old-school gamer of 55, I was hoping to improve my gaming experience against all the whippersnappers SBMM is putting me in with.
I only play Call of Duty on an Xbox Series X, and I had read on another thread (I think it was the Elan script) that limiting bandwidth would help with lag compensation.
I will start monitoring the connections tab whilst playing to get a better idea of the speeds. Thank you.
The rule I used was just the example from your README.
Ah, I kinda thought that's what you were getting at... I feel you. Just give it a shot, maybe it'll help, but I doubt it'll make a huge difference.
The best things you can try against SBMM are:
- Using a VPN to spoof your location - though that used to work better back in the day.
- Or using a geo-filter to only connect to certain servers - you could use geomate
- Reverse boosting.
- Two-boxing
- Becoming a streamer and getting on the whitelist - if that's even a thing.

... or combine 1 + 2.
Hey @Hudra, is the available update for the experimental branch? Is it safe to update to?
Does the order of QoS rules affect classification in QoSmate?
Hello everyone,
I'm configuring QoSmate on my OpenWrt setup, and I have a question regarding the order of QoS rules and their impact on packet classification.
Currently, I want to ensure that all traffic from my Wi-Fi devices is marked as CS1 (lower priority), but at the same time, I need to classify video streaming and social media traffic as CS2 for better performance.
My concern:
- If I apply the CS1 rule first to all Wi-Fi traffic, will it override the CS2 classification for streaming and social media that comes later?
- Or does QoSmate process rules sequentially, allowing later rules (CS2 for streaming) to take precedence over earlier ones (CS1 for Wi-Fi)?
config rule
    option name 'WiFi Lower Priority'
    list src_ip '@wifi_devices'
    option class 'cs1'
    option counter '1'
    option enabled '1'

config rule
    option name 'Video Streaming'
    option class 'cs2'
    list dest_ip '@streaming_services'
    option counter '1'
    option enabled '1'

config rule
    option name 'Social Streaming'
    option class 'cs2'
    list dest_ip '@social_media_services'
    option counter '1'
    option enabled '1'
Should I swap the order of these rules, placing the CS2 classifications first to avoid them being overridden by the CS1 rule?
Or does QoSmate process traffic in a way that later rules take priority over earlier ones (allowing streaming/social media to be properly classified)?
Thanks in advance for any insights!
You can try it out; the caveat is that heavily DSCP-marked traffic from other sources hasn't been tested thoroughly. Some traffic may fall or climb the DSCP ladder compared to before.
Do you mean 0.5.62? If so, then you should be safe. This was the commit for the version:
feat: Enhance QoS rule generation for dual IP version support · hudra0/qosmate@e486c9d
root@OpenWrt:~# /etc/init.d/qosmate start
Configuration file already exists.
Global configuration section already exists.
Enabled option already exists.
Config files have been added to sysupgrade.conf for preservation.
/etc/qosmate.sh: eval: line 317: log_msg: not found
/etc/qosmate.sh: eval: line 317: log_msg: not found
Error: Exclusivity flag on, cannot modify.
RTNETLINK answers: File exists
This script prioritizes the UDP packets from / to a set of gaming
machines into a real-time HFSC queue with guaranteed total bandwidth
Based on your settings:
Game upload guarantee = 13702 kbps
Game download guarantee = 42400 kbps
Download direction only works if you install this on a *wired* router
and there is a separate AP wired into your network, because otherwise
there are multiple parallel queues for traffic to leave your router
heading to the LAN.
Based on your link total bandwidth, the **minimum** amount of jitter
you should expect in your network is about:
UP = 0 ms
DOWN = 0 ms
In order to get lower minimum jitter you must upgrade the speed of
your link, no queuing system can help.
Please note for your display rate that:
at 30Hz, one on screen frame lasts: 33.3 ms
at 60Hz, one on screen frame lasts: 16.6 ms
at 144Hz, one on screen frame lasts: 6.9 ms
This means the typical gamer is sensitive to as little as on the order
of 5ms of jitter. To get 5ms minimum jitter you should have bandwidth
in each direction of at least:
7200 kbps
The queue system can ONLY control bandwidth and jitter in the link
between your router and the VERY FIRST device in the ISP
network. Typically you will have 5 to 10 devices between your router
and your gaming server, any of those can have variable delay and ruin
your gaming, and there is NOTHING that your router can do about it.
DONE!
qdisc noqueue 0: dev lo root refcnt 2
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
qdisc mq 0: dev eth0 root
Sent 609975153 bytes 1303094 pkt (dropped 0, overlimits 0 requeues 39)
backlog 0b 0p requeues 39
qdisc fq_codel 0: dev eth0 parent :8 limit 10240p flows 1024 quantum 1522 target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :7 limit 10240p flows 1024 quantum 1522 target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :6 limit 10240p flows 1024 quantum 1522 target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :5 limit 10240p flows 1024 quantum 1522 target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :4 limit 10240p flows 1024 quantum 1522 target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :3 limit 10240p flows 1024 quantum 1522 target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :2 limit 10240p flows 1024 quantum 1522 target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :1 limit 10240p flows 1024 quantum 1522 target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
Sent 609975153 bytes 1303094 pkt (dropped 0, overlimits 0 requeues 39)
backlog 0b 0p requeues 39
maxpacket 1482 drop_overlimit 0 new_flow_count 25 ecn_mark 0
new_flows_len 0 old_flows_len 0
qdisc noqueue 0: dev lan4 root refcnt 2
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
qdisc noqueue 0: dev lan3 root refcnt 2
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
qdisc noqueue 0: dev lan2 root refcnt 2
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
qdisc noqueue 0: dev lan1 root refcnt 2
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
qdisc cake 8011: dev wan root refcnt 2 bandwidth 88686Kbit diffserv4 dual-srchost nat wash no-ack-filter split-gso rtt 100ms noatm overhead 38 mpu 84
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
memory used: 0b of 4434300b
capacity estimate: 88686Kbit
min/max network layer size: 65535 / 0
min/max overhead-adjusted size: 65535 / 0
average network hdr offset: 0
Bulk Best Effort Video Voice
thresh 5542Kbit 88686Kbit 44343Kbit 22171Kbit
target 5ms 5ms 5ms 5ms
interval 100ms 100ms 100ms 100ms
pk_delay 0us 0us 0us 0us
av_delay 0us 0us 0us 0us
sp_delay 0us 0us 0us 0us
backlog 0b 0b 0b 0b
pkts 0 0 0 0
bytes 0 0 0 0
way_inds 0 0 0 0
way_miss 0 0 0 0
way_cols 0 0 0 0
drops 0 0 0 0
marks 0 0 0 0
ack_drop 0 0 0 0
sp_flows 0 0 0 0
bk_flows 0 0 0 0
un_flows 0 0 0 0
max_len 0 0 0 0
quantum 300 1514 1353 676
qdisc ingress ffff: dev wan parent ffff:fff1 ----------------
Sent 69280561 bytes 277956 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
qdisc noqueue 0: dev br-lan root refcnt 2
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
qdisc cake 8012: dev ifb-wan root refcnt 2 bandwidth 280Mbit diffserv4 dual-dsthost nat wash ingress no-ack-filter split-gso rtt 100ms noatm overhead 38 mpu 84
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
memory used: 0b of 14000000b
capacity estimate: 280Mbit
min/max network layer size: 65535 / 0
min/max overhead-adjusted size: 65535 / 0
average network hdr offset: 0
Bulk Best Effort Video Voice
thresh 17500Kbit 280Mbit 140Mbit 70Mbit
target 5ms 5ms 5ms 5ms
interval 100ms 100ms 100ms 100ms
pk_delay 0us 0us 0us 0us
av_delay 0us 0us 0us 0us
sp_delay 0us 0us 0us 0us
backlog 0b 0b 0b 0b
pkts 0 0 0 0
bytes 0 0 0 0
way_inds 0 0 0 0
way_miss 0 0 0 0
way_cols 0 0 0 0
drops 0 0 0 0
marks 0 0 0 0
ack_drop 0 0 0 0
sp_flows 0 0 0 0
bk_flows 0 0 0 0
un_flows 0 0 0 0
max_len 0 0 0 0
quantum 534 1514 1514 1514
Automatically including '/usr/share/nftables.d/ruleset-post/dscptag.nft'
root@OpenWrt:~# /etc/init.d/qosmate status
==== qosmate Status ====
qosmate autostart is enabled.
qosmate service is enabled.
Traffic shaping is active on the egress interface (wan).
Traffic shaping is active on the ingress interface (ifb-wan).
==== Overall Status ====
qosmate is currently active and managing traffic shaping.
==== Current Settings ====
Upload rate: 88686 kbps
Download rate: 280000 kbps
Game traffic upload: 13702 kbps
Game traffic download: 42400 kbps
Queue discipline: CAKE (Root qdisc)
==== Package Status ====
All required packages are installed.
==== Detailed Technical Information ====
Traffic Control (tc) Queues:
qdisc noqueue 0: dev lo root refcnt 2
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
qdisc mq 0: dev eth0 root
Sent 617150538 bytes 1320140 pkt (dropped 0, overlimits 0 requeues 39)
backlog 0b 0p requeues 39
qdisc fq_codel 0: dev eth0 parent :8 limit 10240p flows 1024 quantum 1522 target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :7 limit 10240p flows 1024 quantum 1522 target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :6 limit 10240p flows 1024 quantum 1522 target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :5 limit 10240p flows 1024 quantum 1522 target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :4 limit 10240p flows 1024 quantum 1522 target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :3 limit 10240p flows 1024 quantum 1522 target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :2 limit 10240p flows 1024 quantum 1522 target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :1 limit 10240p flows 1024 quantum 1522 target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64
Sent 617150538 bytes 1320140 pkt (dropped 0, overlimits 0 requeues 39)
backlog 0b 0p requeues 39
maxpacket 1482 drop_overlimit 0 new_flow_count 25 ecn_mark 0
new_flows_len 0 old_flows_len 0
qdisc noqueue 0: dev lan4 root refcnt 2
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
qdisc noqueue 0: dev lan3 root refcnt 2
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
qdisc noqueue 0: dev lan2 root refcnt 2
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
qdisc noqueue 0: dev lan1 root refcnt 2
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
qdisc cake 8011: dev wan root refcnt 2 bandwidth 88686Kbit diffserv4 dual-srchost nat wash no-ack-filter split-gso rtt 100ms noatm overhead 38 mpu 84
Sent 1841162 bytes 7610 pkt (dropped 0, overlimits 1207 requeues 0)
backlog 0b 0p requeues 0
memory used: 64765b of 4434300b
capacity estimate: 88686Kbit
min/max network layer size: 28 / 1460
min/max overhead-adjusted size: 84 / 1498
average network hdr offset: 14
Bulk Best Effort Video Voice
thresh 5542Kbit 88686Kbit 44343Kbit 22171Kbit
target 5ms 5ms 5ms 5ms
interval 100ms 100ms 100ms 100ms
pk_delay 0us 10us 484us 116us
av_delay 0us 3us 46us 8us
sp_delay 0us 2us 3us 2us
backlog 0b 0b 0b 0b
pkts 0 338 3700 3572
bytes 0 34568 930461 876133
way_inds 0 0 160 0
way_miss 0 240 193 90
way_cols 0 0 0 0
drops 0 0 0 0
marks 0 0 0 0
ack_drop 0 0 0 0
sp_flows 0 2 1 0
bk_flows 0 0 0 0
un_flows 0 0 0 0
max_len 0 164 13086 1292
quantum 300 1514 1353 676
qdisc ingress ffff: dev wan parent ffff:fff1 ----------------
Sent 74187978 bytes 287410 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
qdisc noqueue 0: dev br-lan root refcnt 2
Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
qdisc cake 8012: dev ifb-wan root refcnt 2 bandwidth 280Mbit diffserv4 dual-dsthost nat wash ingress no-ack-filter split-gso rtt 100ms noatm overhead 38 mpu 84
Sent 5072813 bytes 9454 pkt (dropped 0, overlimits 3253 requeues 0)
backlog 0b 0p requeues 0
memory used: 39511b of 14000000b
capacity estimate: 280Mbit
min/max network layer size: 46 / 1460
min/max overhead-adjusted size: 84 / 1498
average network hdr offset: 14
Bulk Best Effort Video Voice
thresh 17500Kbit 280Mbit 140Mbit 70Mbit
target 5ms 5ms 5ms 5ms
interval 100ms 100ms 100ms 100ms
pk_delay 0us 34us 76us 42us
av_delay 0us 4us 13us 5us
sp_delay 0us 1us 2us 2us
backlog 0b 0b 0b 0b
pkts 0 650 4354 4450
bytes 0 125849 2662658 2284306
way_inds 0 0 5 0
way_miss 0 254 185 92
way_cols 0 0 0 0
drops 0 0 0 0
marks 0 0 0 0
ack_drop 0 0 0 0
sp_flows 0 2 1 1
bk_flows 0 0 0 0
un_flows 0 0 0 0
max_len 0 5800 10318 1292
quantum 534 1514 1514 1514
==== Nftables Ruleset (dscptag) ====
chain dscptag {
type filter hook forward priority filter; policy accept;
iif "lo" accept
meta l4proto udp ct original proto-src { 6881-6889, 51413 } counter packets 0 bytes 0 jump mark_cs1
meta l4proto udp ct original proto-dst { 6881-6889, 51413 } counter packets 0 bytes 0 jump mark_cs1
meta l4proto tcp ct original proto-dst { 6881-6889, 51413 } counter packets 0 bytes 0 jump mark_cs1
meta length < 100 tcp flags ack add @xfst4ack { ct id . ct direction limit rate over 443400/second } counter packets 0 bytes 0 jump drop995
meta length < 100 tcp flags ack add @fast4ack { ct id . ct direction limit rate over 44340/second } counter packets 0 bytes 0 jump drop95
meta length < 100 tcp flags ack add @med4ack { ct id . ct direction limit rate over 4434/second } counter packets 26 bytes 1040 jump drop50
meta length < 100 tcp flags ack add @slow4ack { ct id . ct direction limit rate over 4434/second } counter packets 15 bytes 600 jump drop50
meta l4proto tcp ct bytes < 17500000 jump mark_500ms
meta l4proto tcp ct bytes > 350000000 jump mark_10s
meta l4proto tcp ip dscp != cs1 add @slowtcp { ct id . ct direction limit rate 150/second burst 150 packets } ip dscp set af42 counter packets 7493 bytes 3181106
meta l4proto tcp ip6 dscp != cs1 add @slowtcp { ct id . ct direction limit rate 150/second burst 150 packets } ip6 dscp set af42 counter packets 0 bytes 0
udp dport != { 80, 443 } ip dscp set cs5 counter packets 6423 bytes 2362468 comment "ipv4_Game_Console_Outbound"
udp dport != { 80, 443 } ip6 dscp set cs5 counter packets 0 bytes 0 comment "ipv6_Game_Console_Outbound"
udp sport != { 80, 443 } ip dscp set cs5 counter packets 5160 bytes 937177 comment "ipv4_Game_Console_Inbound"
udp sport != { 80, 443 } ip6 dscp set cs5 counter packets 0 bytes 0 comment "ipv6_Game_Console_Inbound"
meta priority set ip dscp map @priomap counter packets 15625 bytes 6549555
meta priority set ip6 dscp map @priomap counter packets 0 bytes 0
meta nfproto ipv4 ct mark set @nh,8,8 & 0xfc [invalid type] | 0x80 counter packets 15625 bytes 6549555
meta nfproto ipv6 ct mark set @nh,0,16 & 0xfc0 [invalid type] | 0x80 counter packets 0 bytes 0
}
}
==== Custom Rules Table Status ====
Custom rules table (qosmate_custom) is active.
Current custom rules:
table inet qosmate_custom {
chain ingress {
type filter hook ingress device "wan" priority -500; policy accept;
iif "wan" counter packets 0 bytes 0 ip dscp set cs0 comment "Wash all ISP DSCP marks to CS0 (IPv4)"
iif "wan" counter packets 0 bytes 0 ip6 dscp set cs0 comment "Wash all ISP DSCP marks to CS0 (IPv6)"
}
}
Hello, is everything ok?
Do you think that something is wrong or what is the intention of your post? Output looks good...
This is probably because qosmate was already running when you entered the service qosmate start command.
Yes, you are right, it was working, and I wanted to confirm that everything is OK. But I have a question: I use a PC for gaming, and I want to ask whether adding just these rules
config rule
    option name 'Game_Console_Outbound'
    option proto 'udp'
    list dest_port '!=80'
    list dest_port '!=443'
    option class 'cs5'
    option counter '1'
    option enabled '1'
    option trace '0'

config rule
    option name 'Game_Console_Inbound'
    option proto 'udp'
    list src_port '!=80'
    list src_port '!=443'
    option class 'cs5'
    option counter '1'
    option enabled '1'
    option trace '0'
would suffice. For example, with Rocket League the ports are similar to the Discord ports. What should I do? Is CS5 or EF better for games? Because if I set the first rule, it covers both the game and Discord.
The DSCP shouldn't matter as long as the packet is assigned to the correct/intended class. So for example, whether you use CS5 or EF with Cake (diffserv4) shouldn't make a difference - at least not if you're washing your DSCP values and not passing them on to your ISP. You can find more information and discussions on this topic further up in this thread.
Your rule is missing the source IP. You should specify the IP address of your gaming PC as source IP and make sure it has a static IP. Otherwise all UDP traffic in your entire network that isn't on port 80 or 443 will be prioritized into the highest class... which isn't ideal.
That said, your rules should be sufficient. I'm not exactly sure which ports Rocket League uses, but I assume it's UDP. The rules are fairly general and cover all UDP traffic that isn't on port 80 or 443 (which are often used for QUIC video traffic).
So yes, it's possible that Discord traffic gets marked with CS5 if it uses a UDP port other than 80 or 443.
You have a few options:
- Either you don't use a general rule for your gaming PC and instead look up the exact ports the game uses (you can google them or use the connections tab while playing to find the right ports)
- or you explicitly exclude the Discord ports in the Game_Console rule. It seems Discord uses UDP ports 50000-65535. An adjusted rule could look like this (just an example):
config rule
    option name 'Game_Console_Outbound'
    option proto 'udp'
    list src_ip '192.168.1.111'
    option class 'cs5'
    option counter '1'
    option trace '0'
    option enabled '1'
    list dest_port '!=80'
    list dest_port '!=443'
    list dest_port '!=50000-65535'
- or you create a separate rule after the Game_Console rule that specifically matches Discord traffic and downgrades it, for example to CS0.
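That downgrade rule could look something like this (a sketch; the source IP and the 50000-65535 Discord port range are assumptions you would adapt to your setup):

```
config rule
    option name 'Discord_Downgrade'
    option proto 'udp'
    list src_ip '192.168.1.111'
    list dest_port '50000-65535'
    option class 'cs0'
    option counter '1'
    option enabled '1'
```

Placed after the Game_Console rule, it re-marks the matching Discord flows back to CS0 while the game traffic keeps CS5.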
If you set the counter to enabled, you can see whether traffic is hitting your rules by going to Status/Firewall and searching for the dscptag table in the LuCI UI.
Yes!
Why not just try it out?
In qosmate, rules are processed sequentially, and all matching rules are applied. This means that later rules can overwrite earlier ones if they apply to the same traffic.
For example, if you have a general rule marking all traffic as CS1 followed by a rule marking the same traffic as CS2 (higher priority), the CS2 rule will overwrite the CS1 marking.
The key takeaway: Place more specific rules after broader ones to ensure that specific traffic classifications take precedence. This avoids unintended overwrites and ensures proper prioritization.
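Applied to the config in your question, that means the broad Wi-Fi rule stays first and the specific streaming rules follow it (a sketch reusing your set names):

```
# Broad rule first: mark all Wi-Fi traffic CS1
config rule
    option name 'WiFi Lower Priority'
    list src_ip '@wifi_devices'
    option class 'cs1'
    option enabled '1'

# Specific rule after: re-marks matching streaming traffic CS2,
# overwriting the CS1 mark set by the broader rule above
config rule
    option name 'Video Streaming'
    list dest_ip '@streaming_services'
    option class 'cs2'
    option enabled '1'
```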
But keep in mind:
Effective QoS is about strategic prioritization, not blanket elevation of all traffic.
QoSmate allows you to prioritize specific traffic types, but it's crucial to use this capability judiciously. Over-prioritization can negate the benefits of QoS, as elevating too much traffic essentially equates to no prioritization at all.
Remember that for every packet given preferential treatment, others may experience increased delay or even drops. The goal is to create a balanced, efficient network environment, not to prioritize everything.