QoSmate: (Yet Another) Quality of Service Tool for OpenWrt

Interesting theory, thanks for sharing. I have a few nits to pick, though:

  1. A K/D of 3/0 is not 3 but rather undefined, as division by zero is "problematic", but I get that K/D is just shorthand for the actual number the game uses internally, which surely avoids division by zero.
Tier            K/D          Share of players   Percentile   Note
Above Average   1.2 - 1.75   ~6.0%              98.5%        Top 25% of players
Good            1.75 - 2.5   ~1.25%             99.75%       Top 10% of players

There's some disconnect if only ~6% of players are ranked as being in the top 25%, or only ~1.25% as being in the top 10%.

  2. This hypothesis should be testable: a skilled player could simply avoid kills on purpose for some time to artificially force a downgrade of their skill rating and hence placement in lower-skilled lobbies.
1 Like

Great points, and thanks for the sharp feedback! You're right on all counts.

You're correct, I was just using that as shorthand. My main point was that K/D is a simplistic metric for skill in the first place.

Another good catch. The percentages are inconsistent because it was a community-made chart I found, quickly compiled with AI and used just to illustrate the idea of skill brackets, not to present official data. Apologies for the confusion.

You've perfectly described a real and common practice called "Reverse Boosting," and it's just one of several elaborate tricks players use to manipulate the system.

The fact that players go to such lengths is probably the strongest proof of how strict SBMM is. These methods include:

  • Reverse Boosting: As you said, intentionally playing terribly for many games to lower their internal skill rating and get placed in easier lobbies.
  • Two-Boxing (or using a "bot account"): This is an even more cynical tactic. A player runs a second, brand-new or extremely low-skilled account on another device (like a second console or PC). They make this "bot account" the party leader. The matchmaking system sees the low-skill leader and places the entire party, including the player's main, high-skilled account, into the easiest possible lobbies.
  • Other techniques, like using a VPN ...

And this reveals another, wider problem within the gaming industry and its content creation scene. These exploits are systematically used by many streamers. They use these tactics to get into "bot lobbies" but rarely admit it publicly. They then sell their spectacular gameplay as if they are the best pro players in the world.

The result is that a large part of their audience watches these videos and compares their own normal, sweaty lobbies with those of the streamer. They start to think they are doing something wrong or that their setup must be flawed. This sends them down the rabbit hole of optimization, and that's how the circle closes: they end up right here, on forums like this, looking for a technical fix (sometimes spending a lot of money) to a problem that was actually created by game design and a lack of transparency in the content they consume.

1 Like

This is really interesting as I never thought it would be an issue on my own side. The only stuttering killcams I see are on the players that are almost impossible to kill. When players I can kill fine in the same match kill me, their killcam looks smooth, so I always assumed the stuttering killcams were related to wi-fi / a worse connection than mine. What also made me think this way was that whenever I'd come across a few people I regularly party with now, their killcams stutter so much you'd think they were playing with packet loss, and they are impossible to kill. My theory was that their connection to the server was unstable, so the killcam doesn't get recreated smoothly (they do play on wi-fi, and I know they have ping fluctuations because when they search for a match, sometimes it starts off <21ms, other times it starts at <40ms. My search always starts at <13ms because my ping is stable).

I'm on PS5 and I live in the same state as the only CoD datacentre in Australia (Sydney). I've never had packet burst appear on my own screen during a match, apart from when I'm testing wi-fi and I get the play of the game at the end; it shows a packet burst when it recreates it. This is another reason why I think the stuttering killcams where packet burst appears are from people on wi-fi.

I have fibre to the premises, my speeds are 1Gbps/50Mbps, and I use CAT7/CAT8 through my home (Fibre NTD > Router is CAT7, roughly 15-20m; Router > PS5 is CAT8, ~2m). I have a custom controller with micro triggers so I technically could shoot faster than some people, and I also play on a gaming monitor with a fast response time. I have installed the game on a WD SN850X SSD. OpenWrt is running off an x86 device with an N100 CPU. I have tried to squeeze out what I can with my setup.

My ping is stable and when I'm gaming a lot of the time there's no other internet usage.

Thank you for the detailed explanation. Regarding the interpolation, this is interesting, as I've noticed when I come across my friend who uses 2.4GHz wi-fi, his character's movement doesn't really look smooth; he always has more of a jittery movement to his character, like he doesn't belong in the match.

I've also always had an issue with peeker's advantage never working in my favour :sweat_smile: I could jump around a corner pre-firing someone I saw on the minimap but still lose. Likewise, sliding around corners in BO6 I'm almost always guaranteed to lose the gunfight unless I've been put in a below-average-skill match. I know killcams aren't accurate all the time; in my normal matches, when I slide around a corner the enemy appears to have a lot of time to shoot at me, but when they slide around a corner, on my end it's much faster. I do have the correct class setups to slide / minimise sprint-out fire etc., but it doesn't make a difference. At times I've had matches where someone with an LMG outguns me running an SMG close range. You can imagine how slow their ADS alone is compared to mine; on my end I ADS & shoot first but still lose. It's like the game is always against me with my current setup, and it's at its worst when I run OpenWrt unfortunately, so I've stopped using it for gaming.

I couldn't agree more. My KD is close to 3, so you can imagine the terrible time I've had levelling up as a solo. I was negative KD the first couple of months due to major connection issues with the game (my ping was stable, I just couldn't kill anyone as it felt like I was playing well over half a second behind the enemy, which I still do now, but occasionally I'll get a week where I shoot first and kill first; I make sure I take advantage of these weeks :rofl:).

You've described my situation perfectly. I think the main issue adding to my problem is my location. At a higher skill bracket the game pulls people from New Zealand (60-80ms+ ping) and they seem to have a god-mode-like advantage. A lot of my matches have 70%+ players from NZ, so I think because I sit around 5-7ms in-game, I'm getting too much of a penalty applied to me, enough for me to notice things are feeling off and that I should be winning the gunfights I'm not.

I think due to the way their matchmaking system now works, having to pull from recent performance/skill levels, the player pool needs to be widened for it to work. This will sacrifice players' ping, but that's where their lag compensation/latency balancing system comes into play. The issue is that these matches are the problematic ones, because I feel their system is not handling it fairly, which is why I experience what I experience. Maybe the matches that run well for me are because I'm only up against local players on a similar ping, not people on more than 10x my ping. A funny thing: before I blocked foreign matchmaking servers I had some experiences where I'd be connected to the US on 150ms+ ping and still manage to top the leaderboards with single-digit deaths.

Regarding IPv6 testing, I may have IPv6 disabled and I'm unsure where to go to enable it without breaking something. I'm on 24.10.

I did some more research to see if I could find any clues about what's really happening in the background when you see network icons in the killcam, but unfortunately, I couldn't find any official statements.

Here are four possible explanations that come to mind off the top of my head:

  1. The icons show your network status at the exact moment you were killed.
  2. The symbols continue to display your current network problems. Your client keeps running as normal while the killcam is playing. If you're having internet or even hardware issues, these will also be displayed during the killcam.
  3. You are seeing the network problems that the other player had at the time of the kill. This is also a possibility, of course.
  4. Local processing issues: The server sends the killcam data as a compressed packet, and your system has trouble decompressing and replaying it. Your console has to decompress and render this data in real-time. During this intensive process, you might briefly experience:
  • CPU/GPU spikes during decoding/rendering.
  • Network micro-stutters as the data is downloaded.
  • RAM bottlenecks while the scene is being built.

As has been mentioned, "Packet Burst" is often tied to FPS drops, and the fact that you're using a PS5 could potentially explain such issues. This isn't to say you should immediately go out and buy a gaming PC, but it could be a hint...

To illustrate that the FPS are tied to the packets being sent to the server, you can run a simple test (at least on PC). Monitor your packets per second (PPS), either using qosmate (check the AVG PPS - it's not super precise but gives you a rough idea) or via Wireshark. Then tab out of the game window. The game will automatically drop to 30 FPS since it’s running in the background, and you'll see the PPS drop to around 30 as well. Should be reproducible by manually lowering/capping FPS.
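If you'd rather measure this on the router than via qosmate's UI, here's a rough sketch (assuming tcpdump and the BusyBox timeout applet are available on your OpenWrt box; "eth6" and UDP port 3074 are placeholders for your WAN device and the game's port):

# Print an approximate packets-per-second count once per second
# (counts both directions matching the filter, so it's only a rough figure).
while true; do
    PPS=$(timeout 1 tcpdump -nn -i eth6 'udp port 3074' 2>/dev/null | wc -l)
    echo "$(date +%T)  ~${PPS} pps"
done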

With your PS5, you're likely getting around 60 FPS. If you have any FPS dips, it will certainly have a more negative impact on your experience than it would for me, for example, when I'm playing at 180-200 FPS and have a short dip.

Maybe you could try to monitor this. Your monitor might have a built-in overlay that can display your FPS... the console itself probably doesn't have a native feature for this.

Another thing worth mentioning is (unless you have crossplay turned off), you will always have a slight disadvantage against PC players. This is likely due to system latency alone and the fact that PC players can process many more frames. While it won't make a night-and-day difference, it can negatively affect you in certain situations. My buddy plays on an Xbox Series X, and it seems to me that he runs into more problems.

Techniques like interpolation only work up to a certain "threshold". When your friend's connection on 2.4GHz Wi-Fi crosses that threshold, the engine can no longer compensate for the missing data. The result could exactly be the "jittery movement" you're seeing.

If you're having that many issues where it seems like other players have a bad connection (aside from the fact that there's nothing you can do about it), you should be able to record a few clips so we can get a better picture of what's going on.

The longer a CoD title is in its life cycle, the tougher the lobbies usually get... mainly because casual players tend to stop playing over time and the player pool shrinks. What’s left are mostly the hardcore players.

Claiming that players with a 60–70 ms ping are in some kind of god mode doesn't really make sense to me and should be something you can test yourself. For example, install Geomate and set up a geofilter for New Zealand (to only allow the NZ server) ... then, by that logic, you should have god mode. Or use netem to artificially increase your ping to around 60–70 ms in-game and play a few matches. I doubt it'll make a difference. It's never worked for me either.
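For reference, a bare netem version of that test could look roughly like this (a sketch only; "eth6" stands in for your WAN device, it assumes the netem qdisc is available and that no other root qdisc is attached there, and note that QoSmate's own game qdisc settings such as netemdelayms/netemjitterms can do the same thing while its shaper is running):

# Add roughly 60 ms (plus jitter) of extra RTT by delaying upstream packets.
tc qdisc add dev eth6 root netem delay 60ms 7ms distribution normal
# Play a few matches, then remove it again:
tc qdisc del dev eth6 root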

Would this work with the Flint 2? I have about 30 devices connected. Download speed is 2Gbps and upload is 350Mbps.

Yes, of course… at least if you're using OpenWrt. I'm not sure which version of OpenWrt the original GL.iNet firmware is based on. Why are you concerned?

I installed it following the guide on GitHub and this is what I get.


Do I need to download other packages?

Is that the original firmware? You’re definitely using a very outdated OpenWrt version. Only OpenWrt versions 22.03 and newer are compatible, as they include fw4 with nftables. Your version (21.02) is likely still using iptables, which is why you’re seeing a red X under the Health Check for nft.

And since you’re using a snapshot version, you’re probably running into issues when trying to install the required packages. Just update to the latest stable release and you shouldn’t have any more problems.

Thanks, I'll have to see how I can update it.

Either I'm doing something wrong, or Source and Destination are flipped as well as Egress and Ingress.

Every time I set a rule for a Destination port (using the destination port in the 'Connections' tab), it does not mark the packets. If I set the same rule but on the Source port (still using the destination port information), then it starts marking the packets correctly.

In the 'Statistics' tab, it shows most packets getting marked as normal on Egress; the rule-defined packets are only marked on Ingress.
I thought packets were normally marked on Egress, and that there were a few extra steps / firewall rules needed to get marking working on Ingress?

I do like the GUI and how easy it is to set up though. It's easily the best QoS service that I've used so far. Good work.

1 Like

Thanks!

Hmm, hard to say what’s really going on here.

Can you show me the output of:

service qosmate status
root@OpenWrt:~# service qosmate status
==== qosmate Status ====
qosmate autostart is enabled.
qosmate service is enabled.
Traffic shaping is active on the egress interface (eth6).
Traffic shaping is active on the ingress interface (ifb-eth6).
==== Overall Status ====
qosmate is currently active and managing traffic shaping.
==== Current Settings ====
Upload rate: 36000 kbps
Download rate: 400000 kbps
Game traffic upload: 5800 kbps
Game traffic download: 60400 kbps
Queue discipline: pfifo (for game traffic in HFSC)
==== Version Information ====
Backend versions:
  Update channel: release
  Current version: 1.3.0
  Latest version: 1.3.0
Frontend versions:
  Update channel: release
  Current version: 1.3.0
  Latest version: 1.3.0

QoSmate components 'BACKEND FRONTEND' are up to date.
==== System Information ====
{
        "kernel": "5.15.150",
        "hostname": "OpenWrt",
        "system": "Intel(R) Core(TM) i5-6500 CPU @ 3.20GHz",
        "model": "Dell Inc. OptiPlex 5040",
        "board_name": "dell-inc-optiplex-5040",
        "rootfs_type": "ext4",
        "release": {
                "distribution": "OpenWrt",
                "version": "23.05.3",
                "revision": "r23809-234f1a2efa",
                "target": "x86/64",
                "description": "OpenWrt 23.05.3 r23809-234f1a2efa"
        }
}
==== Health Check ====
status=service:enabled;nft:ok;tc:ok;config:ok;packages:ok;BACKEND_integrity:ok;FRONTEND_integrity:ok;;errors=0
==== WAN Interface Information ====
        "l3_device": "eth6",
        "device": "eth6",
==== QoSmate Configuration ====

config global 'global'
        option enabled '1'

config settings 'settings'
        option WAN 'eth6'
        option DOWNRATE '400000'
        option UPRATE '36000'
        option ROOT_QDISC 'hfsc'

config advanced 'advanced'
        option PRESERVE_CONFIG_FILES '1'
        option WASHDSCPUP '1'
        option WASHDSCPDOWN '1'
        option BWMAXRATIO '20'
        option UDP_RATE_LIMIT_ENABLED '0'
        option TCP_UPGRADE_ENABLED '1'
        option TCP_DOWNPRIO_INITIAL_ENABLED '1'
        option TCP_DOWNPRIO_SUSTAINED_ENABLED '1'
        option UDPBULKPORT '51413,6881-6889'
        option TCPBULKPORT '51413,6881-6889'
        option NFT_HOOK 'forward'
        option NFT_PRIORITY '0'

config hfsc 'hfsc'
        option LINKTYPE 'ethernet'
        option OH '20'
        option gameqdisc 'pfifo'
        option nongameqdisc 'cake'
        option nongameqdiscoptions 'besteffort ack-filter'
        option MAXDEL '24'
        option PFIFOMIN '5'
        option PACKETSIZE '450'
        option netemdelayms '30'
        option netemjitterms '7'
        option netemdist 'normal'
        option pktlossp 'none'

config cake 'cake'
        option COMMON_LINK_PRESETS 'docsis'
        option PRIORITY_QUEUE_INGRESS 'diffserv4'
        option PRIORITY_QUEUE_EGRESS 'diffserv4'
        option HOST_ISOLATION '1'
        option NAT_INGRESS '1'
        option NAT_EGRESS '1'
        option ACK_FILTER_EGRESS '1'
        option AUTORATE_INGRESS '0'

config custom_rules 'custom_rules'

config rule
        option name 'COD'
        list src_port '3074'
        option class 'ef'
        option counter '1'
        option trace '0'
        option enabled '1'

config rule
        option name 'COD #2'
        list dest_port '3074'
        option class 'ef'
        option counter '1'
        option trace '0'
        option enabled '1'

config rule
        option name 'Halo'
        list src_port '1353'
        option class 'ef'
        option counter '1'
        option trace '0'
        option enabled '1'

config rule
        option name 'Halo #2'
        list dest_port '1353'
        option class 'ef'
        option counter '1'
        option trace '0'
        option enabled '1'

config rule
        option name 'Genshin Impact'
        list dest_port '22101-22102'
        option class 'ef'
        option counter '1'
        option trace '0'
        option enabled '1'

config rule
        list src_port '22101-22102'
        option class 'ef'
        option counter '1'
        option trace '0'
        option enabled '1'
        option name 'Genshin Impact #2'

config rule
        option name 'ICMP (PING)'
        option proto 'icmp'
        option class 'ef'
        option counter '1'
        option trace '0'
        option enabled '1'

config rule
        option name 'Apex Legends'
        option proto 'udp'
        list dest_port '37000-40000'
        option class 'ef'
        option counter '1'
        option trace '0'
        option enabled '1'

config rule
        option name 'Apex Legends #2'
        option proto 'udp'
        list src_port '37000-40000'
        option class 'ef'
        option counter '1'
        option trace '0'
        option enabled '1'

config rule
        option name 'Fortnite'
        list dest_port '9000-9100'
        option class 'ef'
        option counter '1'
        option trace '0'
        option enabled '1'
        option proto 'udp'

config rule
        option name 'Fortnite #2'
        list src_port '9000-9100'
        option class 'ef'
        option counter '1'
        option trace '0'
        option enabled '1'
        option proto 'udp'

config ipset
        option mode 'static'
        option family 'ipv4'
        list ip4 '192.168.1.2'
        list ip4 '192.168.1.3'
        list ip4 '192.168.1.4'
        list ip4 '192.168.1.7'
        option enabled '1'
        option name 'Gaming_Stuff'

==== Package Status ====
All required packages are installed.

==== Detailed Technical Information ====
Traffic Control (tc) Queues:
qdisc noqueue 0: dev lo root refcnt 2
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc mq 0: dev eth0 root
 Sent 8156875948 bytes 5583549 pkt (dropped 0, overlimits 0 requeues 29)
 backlog 0b 0p requeues 29
qdisc fq_codel 0: dev eth0 parent :4 limit 10240p flows 1024 quantum 1514 target 5ms interval 100ms memory_limit 32Mb ecn drop_batch 64
 Sent 2104737524 bytes 1410522 pkt (dropped 0, overlimits 0 requeues 10)
 backlog 0b 0p requeues 10
  maxpacket 1514 drop_overlimit 0 new_flow_count 6 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :3 limit 10240p flows 1024 quantum 1514 target 5ms interval 100ms memory_limit 32Mb ecn drop_batch 64
 Sent 1181914824 bytes 810412 pkt (dropped 0, overlimits 0 requeues 8)
 backlog 0b 0p requeues 8
  maxpacket 1514 drop_overlimit 0 new_flow_count 6 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :2 limit 10240p flows 1024 quantum 1514 target 5ms interval 100ms memory_limit 32Mb ecn drop_batch 64
 Sent 3270027210 bytes 2271469 pkt (dropped 0, overlimits 0 requeues 8)
 backlog 0b 0p requeues 8
  maxpacket 1514 drop_overlimit 0 new_flow_count 8 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :1 limit 10240p flows 1024 quantum 1514 target 5ms interval 100ms memory_limit 32Mb ecn drop_batch 64
 Sent 1600196390 bytes 1091146 pkt (dropped 0, overlimits 0 requeues 3)
 backlog 0b 0p requeues 3
  maxpacket 1514 drop_overlimit 0 new_flow_count 1 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc mq 0: dev eth1 root
 Sent 40960945800 bytes 27765553 pkt (dropped 0, overlimits 0 requeues 924)
 backlog 0b 0p requeues 924
qdisc fq_codel 0: dev eth1 parent :4 limit 10240p flows 1024 quantum 1514 target 5ms interval 100ms memory_limit 32Mb ecn drop_batch 64
 Sent 8996604237 bytes 6081802 pkt (dropped 0, overlimits 0 requeues 342)
 backlog 0b 0p requeues 342
  maxpacket 1486 drop_overlimit 0 new_flow_count 211 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth1 parent :3 limit 10240p flows 1024 quantum 1514 target 5ms interval 100ms memory_limit 32Mb ecn drop_batch 64
 Sent 10769443736 bytes 7289805 pkt (dropped 0, overlimits 0 requeues 56)
 backlog 0b 0p requeues 56
  maxpacket 1486 drop_overlimit 0 new_flow_count 33 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth1 parent :2 limit 10240p flows 1024 quantum 1514 target 5ms interval 100ms memory_limit 32Mb ecn drop_batch 64
 Sent 652145014 bytes 461326 pkt (dropped 0, overlimits 0 requeues 30)
 backlog 0b 0p requeues 30
  maxpacket 1486 drop_overlimit 0 new_flow_count 21 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth1 parent :1 limit 10240p flows 1024 quantum 1514 target 5ms interval 100ms memory_limit 32Mb ecn drop_batch 64
 Sent 20542752813 bytes 13932620 pkt (dropped 0, overlimits 0 requeues 496)
 backlog 0b 0p requeues 496
  maxpacket 1486 drop_overlimit 0 new_flow_count 345 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth2 root refcnt 2 limit 10240p flows 1024 quantum 1514 target 5ms interval 100ms memory_limit 32Mb ecn drop_batch 64
 Sent 40074305617 bytes 32895241 pkt (dropped 0, overlimits 0 requeues 234)
 backlog 0b 0p requeues 234
  maxpacket 25738 drop_overlimit 0 new_flow_count 25026 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth3 root refcnt 2 limit 10240p flows 1024 quantum 1514 target 5ms interval 100ms memory_limit 32Mb ecn drop_batch 64
 Sent 264708013871 bytes 191947561 pkt (dropped 0, overlimits 0 requeues 49180)
 backlog 0b 0p requeues 49180
  maxpacket 36336 drop_overlimit 0 new_flow_count 276685 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth4 root refcnt 2 limit 10240p flows 1024 quantum 1514 target 5ms interval 100ms memory_limit 32Mb ecn drop_batch 64
 Sent 4394090705 bytes 4731427 pkt (dropped 0, overlimits 0 requeues 161)
 backlog 0b 0p requeues 161
  maxpacket 1514 drop_overlimit 0 new_flow_count 4643 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth5 root refcnt 2 limit 10240p flows 1024 quantum 1514 target 5ms interval 100ms memory_limit 32Mb ecn drop_batch 64
 Sent 32457394604 bytes 23299026 pkt (dropped 0, overlimits 0 requeues 4924)
 backlog 0b 0p requeues 4924
  maxpacket 1514 drop_overlimit 0 new_flow_count 118567 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc hfsc 1: dev eth6 root refcnt 2 default 13
 Sent 2790330674 bytes 20438022 pkt (dropped 2260927, overlimits 18387956 requeues 4)
 backlog 0b 0p requeues 4
qdisc cake 80c8: dev eth6 parent 1:13 bandwidth unlimited besteffort triple-isolate nonat nowash ack-filter split-gso rtt 100ms raw overhead 0
 Sent 2771309577 bytes 20305808 pkt (dropped 2260927, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
 memory used: 1463424b of 15140Kb
 capacity estimate: 0bit
 min/max network layer size:           60 /    1554
 min/max overhead-adjusted size:       60 /    1554
 average network hdr offset:           14

                  Tin 0
  thresh           0bit
  target            5ms
  interval        100ms
  pk_delay        721us
  av_delay         92us
  sp_delay          1us
  backlog            0b
  pkts         22566735
  bytes      3036005999
  way_inds       122259
  way_miss        77612
  way_cols            0
  drops             185
  marks               0
  ack_drop      2260742
  sp_flows            2
  bk_flows            1
  un_flows            0
  max_len         38479
  quantum          1514

qdisc pfifo 10: dev eth6 parent 1:11 limit 245p
 Sent 4855762 bytes 12343 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc cake 80ca: dev eth6 parent 1:15 bandwidth unlimited besteffort triple-isolate nonat nowash ack-filter split-gso rtt 100ms raw overhead 0
 Sent 5206 bytes 51 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
 memory used: 768b of 15140Kb
 capacity estimate: 0bit
 min/max network layer size:           94 /     145
 min/max overhead-adjusted size:       94 /     145
 average network hdr offset:            3

                  Tin 0
  thresh           0bit
  target            5ms
  interval        100ms
  pk_delay          6us
  av_delay          0us
  sp_delay          0us
  backlog            0b
  pkts               51
  bytes            5206
  way_inds            0
  way_miss           25
  way_cols            0
  drops               0
  marks               0
  ack_drop            0
  sp_flows            1
  bk_flows            0
  un_flows            0
  max_len           145
  quantum          1514

qdisc cake 80c9: dev eth6 parent 1:14 bandwidth unlimited besteffort triple-isolate nonat nowash ack-filter split-gso rtt 100ms raw overhead 0
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
 memory used: 0b of 15140Kb
 capacity estimate: 0bit
 min/max network layer size:        65535 /       0
 min/max overhead-adjusted size:    65535 /       0
 average network hdr offset:            0

                  Tin 0
  thresh           0bit
  target            5ms
  interval        100ms
  pk_delay          0us
  av_delay          0us
  sp_delay          0us
  backlog            0b
  pkts                0
  bytes               0
  way_inds            0
  way_miss            0
  way_cols            0
  drops               0
  marks               0
  ack_drop            0
  sp_flows            0
  bk_flows            0
  un_flows            0
  max_len             0
  quantum          1514

qdisc cake 80c7: dev eth6 parent 1:12 bandwidth unlimited besteffort triple-isolate nonat nowash ack-filter split-gso rtt 100ms raw overhead 0
 Sent 14159514 bytes 119816 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
 memory used: 37632b of 15140Kb
 capacity estimate: 0bit
 min/max network layer size:           94 /    1554
 min/max overhead-adjusted size:       94 /    1554
 average network hdr offset:           14

                  Tin 0
  thresh           0bit
  target            5ms
  interval        100ms
  pk_delay        149us
  av_delay         11us
  sp_delay          2us
  backlog            0b
  pkts           119816
  bytes        14159514
  way_inds          680
  way_miss        24943
  way_cols            0
  drops               0
  marks               0
  ack_drop            0
  sp_flows            1
  bk_flows            1
  un_flows            0
  max_len          9722
  quantum          1514

qdisc ingress ffff: dev eth6 parent ffff:fff1 ----------------
 Sent 51878241932 bytes 41114062 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc noqueue 0: dev br-lan root refcnt 2
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc hfsc 1: dev ifb-eth6 root refcnt 2 default 13
 Sent 54011094610 bytes 41106943 pkt (dropped 7111, overlimits 38770087 requeues 0)
 backlog 0b 0p requeues 0
qdisc cake 80cd: dev ifb-eth6 parent 1:14 bandwidth unlimited besteffort triple-isolate nonat nowash ack-filter split-gso rtt 100ms raw overhead 0
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
 memory used: 0b of 15140Kb
 capacity estimate: 0bit
 min/max network layer size:        65535 /       0
 min/max overhead-adjusted size:    65535 /       0
 average network hdr offset:            0

                  Tin 0
  thresh           0bit
  target            5ms
  interval        100ms
  pk_delay          0us
  av_delay          0us
  sp_delay          0us
  backlog            0b
  pkts                0
  bytes               0
  way_inds            0
  way_miss            0
  way_cols            0
  drops               0
  marks               0
  ack_drop            0
  sp_flows            0
  bk_flows            0
  un_flows            0
  max_len             0
  quantum          1514

qdisc pfifo 10: dev ifb-eth6 parent 1:11 limit 2671p
 Sent 1693676061 bytes 1416244 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc cake 80ce: dev ifb-eth6 parent 1:15 bandwidth unlimited besteffort triple-isolate nonat nowash ack-filter split-gso rtt 100ms raw overhead 0
 Sent 13431264374 bytes 9669819 pkt (dropped 708, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
 memory used: 1047776b of 15140Kb
 capacity estimate: 0bit
 min/max network layer size:           60 /    1554
 min/max overhead-adjusted size:       60 /    1554
 average network hdr offset:           14

                  Tin 0
  thresh           0bit
  target            5ms
  interval        100ms
  pk_delay        170us
  av_delay         36us
  sp_delay          4us
  backlog            0b
  pkts          9670527
  bytes     13431358530
  way_inds            0
  way_miss           25
  way_cols            0
  drops              17
  marks               0
  ack_drop          691
  sp_flows            0
  bk_flows            1
  un_flows            0
  max_len         59474
  quantum          1514

qdisc cake 80cc: dev ifb-eth6 parent 1:13 bandwidth unlimited besteffort triple-isolate nonat nowash ack-filter split-gso rtt 100ms raw overhead 0
 Sent 28665550850 bytes 22269836 pkt (dropped 5671, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
 memory used: 5252240b of 15140Kb
 capacity estimate: 0bit
 min/max network layer size:           60 /    1554
 min/max overhead-adjusted size:       60 /    1554
 average network hdr offset:           14

                  Tin 0
  thresh           0bit
  target            5ms
  interval        100ms
  pk_delay        446us
  av_delay        426us
  sp_delay          3us
  backlog            0b
  pkts         22275507
  bytes     28673363838
  way_inds        47571
  way_miss        51948
  way_cols            0
  drops            5046
  marks               0
  ack_drop          625
  sp_flows            0
  bk_flows            1
  un_flows            0
  max_len         65266
  quantum          1514

qdisc cake 80cb: dev ifb-eth6 parent 1:12 bandwidth unlimited besteffort triple-isolate nonat nowash ack-filter split-gso rtt 100ms raw overhead 0
 Sent 10220596581 bytes 7751034 pkt (dropped 730, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
 memory used: 512104b of 15140Kb
 capacity estimate: 0bit
 min/max network layer size:           60 /    1554
 min/max overhead-adjusted size:       60 /    1554
 average network hdr offset:           14

                  Tin 0
  thresh           0bit
  target            5ms
  interval        100ms
  pk_delay         92us
  av_delay         32us
  sp_delay          3us
  backlog            0b
  pkts          7751764
  bytes     10220715995
  way_inds        24395
  way_miss        25016
  way_cols            0
  drops              31
  marks               0
  ack_drop          699
  sp_flows            2
  bk_flows            1
  un_flows            0
  max_len         65266
  quantum          1514


==== Nftables Ruleset (dscptag) ====
        chain dscptag {
                type filter hook forward priority filter; policy accept;
                iif "lo" accept
                counter packets 41172535 bytes 52623971352 jump mark_cs0
                meta l4proto udp ct original proto-src { 6881-6889, 51413 } counter packets 0 bytes 0 jump mark_cs1
                meta l4proto udp ct original proto-dst { 6881-6889, 51413 } counter packets 0 bytes 0 jump mark_cs1
                meta l4proto tcp ct original proto-dst { 6881-6889, 51413 } counter packets 0 bytes 0 jump mark_cs1
                meta length < 100 tcp flags ack add @xfst4ack { ct id . ct direction limit rate over 180000/second } counter packets 0 bytes 0 jump drop995
                meta length < 100 tcp flags ack add @fast4ack { ct id . ct direction limit rate over 18000/second } counter packets 1193 bytes 60864 jump drop95
                meta length < 100 tcp flags ack add @med4ack { ct id . ct direction limit rate over 1800/second } counter packets 2292 bytes 129582 jump drop50
                meta length < 100 tcp flags ack add @slow4ack { ct id . ct direction limit rate over 1800/second } counter packets 1142 bytes 64306 jump drop50
                meta l4proto tcp ct bytes < 25000000 jump mark_500ms
                meta l4proto tcp ct bytes > 500000000 jump mark_10s
                meta l4proto tcp ip dscp != cs1 add @slowtcp { ct id . ct direction limit rate 150/second burst 150 packets } ip dscp set af42 counter packets 5534115 bytes 6237954751
                meta l4proto tcp ip6 dscp != cs1 add @slowtcp { ct id . ct direction limit rate 150/second burst 150 packets } ip6 dscp set af42 counter packets 0 bytes 0
                th sport 3074 ip dscp set ef counter packets 4 bytes 216 comment "ipv4_COD"
                th sport 3074 ip6 dscp set ef counter packets 0 bytes 0 comment "ipv6_COD"
                th dport 3074 ip dscp set ef counter packets 15 bytes 5053 comment "ipv4_COD #2"
                th dport 3074 ip6 dscp set ef counter packets 0 bytes 0 comment "ipv6_COD #2"
                th sport 1353 ip dscp set ef counter packets 0 bytes 0 comment "ipv4_Halo"
                th sport 1353 ip6 dscp set ef counter packets 0 bytes 0 comment "ipv6_Halo"
                th dport 1353 ip dscp set ef counter packets 0 bytes 0 comment "ipv4_Halo #2"
                th dport 1353 ip6 dscp set ef counter packets 0 bytes 0 comment "ipv6_Halo #2"
                th dport 22101-22102 ip dscp set ef counter packets 5 bytes 240 comment "ipv4_Genshin Impact"
                th dport 22101-22102 ip6 dscp set ef counter packets 0 bytes 0 comment "ipv6_Genshin Impact"
                th sport 22101-22102 ip dscp set ef counter packets 49991 bytes 23933039 comment "ipv4_Genshin Impact #2"
                th sport 22101-22102 ip6 dscp set ef counter packets 0 bytes 0 comment "ipv6_Genshin Impact #2"
                meta l4proto icmp ip dscp set ef counter packets 11273 bytes 987434 comment "ipv4_ICMP (PING)"
                meta l4proto icmp ip6 dscp set ef counter packets 0 bytes 0 comment "ipv6_ICMP (PING)"
                udp dport 37000-40000 ip dscp set ef counter packets 1231844 bytes 1524547951 comment "ipv4_Apex Legends"
                udp dport 37000-40000 ip6 dscp set ef counter packets 0 bytes 0 comment "ipv6_Apex Legends"
                udp sport 37000-40000 ip dscp set ef counter packets 3492 bytes 3463435 comment "ipv4_Apex Legends #2"
                udp sport 37000-40000 ip6 dscp set ef counter packets 0 bytes 0 comment "ipv6_Apex Legends #2"
                udp dport 9000-9100 ip dscp set ef counter packets 12 bytes 991 comment "ipv4_Fortnite"
                udp dport 9000-9100 ip6 dscp set ef counter packets 0 bytes 0 comment "ipv6_Fortnite"
                udp sport 9000-9100 ip dscp set ef counter packets 128981 bytes 65617447 comment "ipv4_Fortnite #2"
                udp sport 9000-9100 ip6 dscp set ef counter packets 0 bytes 0 comment "ipv6_Fortnite #2"
                meta priority set ip dscp map @priomap counter packets 41169655 bytes 52623815286
                meta priority set ip6 dscp map @priomap counter packets 0 bytes 0
                meta nfproto ipv4 ct mark set @nh,8,8 & 0xfc [invalid type] | 0x80 counter packets 41169655 bytes 52623815286
                meta nfproto ipv6 ct mark set @nh,0,16 & 0xfc0 [invalid type] | 0x80 counter packets 0 bytes 0
                oifname "eth6" jump mark_cs0
        }
}

==== Custom Rules Table Status ====
Custom rules table (qosmate_custom) is not active or doesn't exist.
root@OpenWrt:~#

Unfortunately, I'm extremely busy today, but after taking a quick look at your output, everything seems fine at first glance. You’re getting hits in the tins, and your rules counters are also increasing.

The only thing I’d suggest is possibly updating to the latest stable version.

If you could describe in a bit more detail what exactly you think isn’t working correctly and maybe provide a few detailed examples, that would help.

Also, it would be great if you could show us the connection in the Connection tab that you’re trying to prioritize.

Here's an example of barely anything being marked on Egress, but working fine on Ingress.

Rule

Connection

Statistics




Now here's an example of the source and destination being flipped:
Rules (Source Enabled)


Packets not being marked as EF

If I flip the rules & enable the destination rule, packets start being marked correctly:
Rules (Destination Enabled)


Packets being marked correctly

IP sets seem to break things as well:
IP Set


Rules

Connections
Packets not being marked

I believe what you're seeing is both a flaw (more cosmetic) and an advantage of QoSmate at the same time.

Here's a more technical explanation:

QoSmate creates nftables rules that apply to both directions (ingress and egress), and packets traverse these rules. However, the rules are actually only relevant during egress, because that's when DSCP values are written to conntrack and later restored via tc-ctinfo on ingress. This is necessary because tc processes packets before nftables rules on ingress, so we can't control DSCP markings via nftables rules on ingress.
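For illustration, such a restore step typically looks roughly like the following tc filter on the WAN ingress hook (a sketch, not QoSmate's literal script; it assumes the DSCP sits in the low six bits of the conntrack mark with bit 128 as a validity flag, matching the "or 128" rule quoted further down, and uses eth6/ifb-eth6 from the status output below as placeholders):

# Hypothetical sketch: restore the DSCP stored in the conntrack mark on ingress,
# then redirect the packet to the IFB device where the download classes live.
tc filter add dev eth6 parent ffff: protocol all matchall \
    action ctinfo dscp 63 128 \
    action mirred egress redirect dev ifb-eth6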

At the very beginning of these rules (when using HFSC or hybrid), the system checks if ingress washing is enabled. If yes, the following rules are inserted into the .nft include file and set all DSCP values to CS0:

$(if { [ "$ROOT_QDISC" = "hfsc" ] || [ "$ROOT_QDISC" = "hybrid" ]; } && [ "$WASHDSCPDOWN" -eq 1 ]; then
    echo "# wash all the DSCP on ingress ... "
    echo "        counter jump mark_cs0"
  fi
)

This has the advantage that if your ISP sends DSCP markings to you, they would be removed before reaching your LAN devices. However, it also means that QoSmate doesn't honor DSCP markings from your LAN devices on egress if they were to make DSCP markings themselves. I find this to be an acceptable tradeoff since you can mark connections with QoSmate anyway.

Further down in the rules, all DSCP values are assigned to the correct classes for tc with this command:

## classify for the HFSC queues:
meta priority set ip dscp map @priomap counter
meta priority set ip6 dscp map @priomap counter
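The idea behind @priomap is a DSCP-to-classid verdict map, roughly like the following (an illustrative sketch declared in the same table; the exact element list QoSmate uses may differ):

# Hypothetical illustration of a DSCP -> tc classid map used by the rules above
map priomap {
    type dscp : classid
    elements = { ef : 1:11, cs5 : 1:11, af42 : 1:12, cs0 : 1:13, cs2 : 1:14, cs1 : 1:15 }
}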

After that, the DSCP values are written to conntrack with this command:

# Store DSCP in conntrack for restoration on ingress
ct mark set ip dscp or 128 counter
ct mark set ip6 dscp or 128 counter

Then the DSCP values are washed again (if egress washing is enabled), which means the DSCP markings are not passed on to your ISP - which in most cases is desired behavior and also best practice, because you usually don't know what the ISP does with DSCP markings.

When the packets return (ingress), the DSCP values are restored from conntrack as mentioned and assigned to the correct classes via tc. The packets have already been correctly classified at this point and everything works properly. However, the packet then goes through the nftables rules again, and since source and destination are swapped in certain constellations, the rules don't match. At the end of the rules, the DSCP values are written to conntrack again.

In the connections tab, we read the connections from conntrack every second, and because the last action is writing CS0 to conntrack, it appears as if the connection has CS0, but in reality everything was correctly classified and assigned beforehand. If you configured the connections tab to read every 10ms or so, you would probably see the DSCP values jumping between EF and CS0. But as I said, everything should be correctly assigned beforehand, and this is more of a cosmetic flaw.

You can easily test this by simply disabling ingress washing - then EF should correctly remain in conntrack because it won't be subsequently set to CS0 during actual ingress.
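If you prefer the CLI over the LuCI switch, that corresponds to the WASHDSCPDOWN option shown in your config output (a sketch; the UI toggle does the same thing):

# Disable ingress DSCP washing, then restart qosmate
uci set qosmate.advanced.WASHDSCPDOWN='0'
uci commit qosmate
service qosmate restart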

Alternatively, if you want EF to permanently remain in conntrack, you can simply create both rules (one with source port 3074 and one with destination port 3074) and then you shouldn't see this phenomenon anymore.

This way, the traffic gets correctly marked in both directions regardless of how source/destination appear in the different packet flow stages.

I hope this helps you better understand the behavior.

1 Like

UPDATE:

I think I've now fixed the cosmetic issue discussed earlier with @toasty via the following commit:

The solution modifies the mark_cs0 chain to handle ingress and egress packets differently:

  • Ingress packets (from WAN): Wash DSCP and ACCEPT (stop processing)
    → Prevents conntrack overwrite while still removing ISP DSCP markings

  • Egress packets (not from WAN): Wash DSCP and RETURN (continue processing)
    → Allows normal QoSmate rule processing and correct conntrack storage

This approach should fix the connections tab display issue (no more CS0 flickering), maintain all existing functionality, still remove ISP DSCP markings when ingress washing is enabled, and slightly improve performance by preventing redundant rule traversal on ingress.
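In nftables terms, the idea looks roughly like this (a hedged sketch of the concept, not the literal commit; "eth6" stands in for the configured WAN device):

chain mark_cs0 {
    ip dscp set cs0
    ip6 dscp set cs0
    iifname "eth6" accept    # ingress from WAN: washed, stop dscptag processing
    return                   # egress: washed, continue through the dscptag rules
}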

The connections tab should now consistently show the correct DSCP values (EF, CS5, etc.) instead of jumping between the actual value and CS0.

Testing and feedback would be appreciated! To test, update to the latest snapshot version.

3 Likes

Thank you for the detailed response. I appreciate being able to learn why something does or doesn't work. I had a small feeling that it was possibly only a visual flaw since QoS performance was still good.

The fix you implemented works perfectly on my end. :smile: Prioritized packets are now being marked correctly according to the rules I have set (even when using IP sets, which was previously an issue). Thank you!

I'm assuming this is also a visual flaw, but in the Egress section of the Statistics page, basically everything is still going into the CS0 / Normal stats.

It seems like maybe it's visually pulling stats from before the DSCP wash and before the QoSmate rules, as some packets from other tins are "slipping by" and the stats don't reflect the correct outcome of the rules.

I also found another issue that I don't think is just visual, but I'm not sure.

The "Sustained TCP Down Prioritization" option doesn't seem to work sometimes.
TCP Down Priority

Some traffic doesn't get put in CS1 correctly. I've seen a download get 'stuck' in AF42 when I was downloading a large file (20GB+ web download), and the traffic didn't change to CS1 / Bulk. It stayed at AF42 the whole time, confirmed by watching both the Connections and the Statistics pages. (this was with both boost low volume traffic & deprioritize sustained traffic enabled)

As I was typing this... with the boost low volume traffic disabled and deprioritize sustained traffic enabled, I noticed a game that I was downloading on PS5 got stuck in CS0. It never deprioritized to CS1. :frowning:

1 Like

Great to hear that it's working exactly as intended! I might make a small adjustment, since currently the dscptag chain traversal on ingress is only stopped when ingress washing is active. This should probably also be the case when it's disabled, as the rules come after conntrack restoration and tc classification anyway and are essentially irrelevant.

UPDATE: I've now implemented this optimization and pushed the changes to the repo. The solution now uses a universal ingress optimization that stops rule processing for all ingress packets (from WAN) regardless of washing settings, providing better performance while maintaining the same functionality. The mark_cs0 chain has also been simplified back to its original state for cleaner code. Please update to the latest snapshot and retest.
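In practice, that boils down to an early rule near the top of the dscptag chain, placed right after the optional ingress wash, along the lines of the following (sketch only, "eth6" standing in for the WAN device):

iifname "eth6" counter accept comment "ingress already handled via conntrack/ctinfo, skip the rest"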

If you believe something isn't right with the egress section of the statistics page, there are 3 possible explanations:

  1. Visual UI bug: This should be easy to test: connect to your router via SSH and run the following command: tc -s qdisc. You'll see all entries with your classes 1:11, 1:12, 1:13... displayed twice: once for the ingress interface (ifb + your WAN interface) and once for just your WAN interface (egress). The values for sent packets/bytes should be approximately equal. There will be slight variations because the command is only executed once while the UI updates continuously.

  2. Packets not reaching the correct classes: This could happen if rules are incorrectly configured or due to a bug.

  3. Most likely explanation (especially looking at your "service qosmate status" output): When no additional rules are configured, everything defaults to class 1:13 (normal). This explains why this class has the most traffic. All the rules you posted are classified as EF and should go to class 1:11 (Realtime). However, if you're not currently gaming, this class will naturally be empty. You do have a rule that marks ICMP with EF, which explains why there are a few packets in the realtime class in your screenshot.

Your 1:13 class only shows 338 MiB, which tells me you recently restarted either your router or qosmate or it was restarted automatically. When qosmate restarts, the tc rules and filters are cleared and counters are reset. Therefore, you're only looking at a brief snapshot of your tc stats.

You can easily test this by marking your entire subnet with cs1. Then almost all traffic should end up in bulk class 1:15.
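A sketch of such a test rule in /etc/config/qosmate, modeled on the rules from your status output (the src_ip option name and the 192.168.1.0/24 subnet are assumptions on my part, adjust them to your setup and mirror it with a dest_ip rule like the paired port rules above if you want both directions matched):

config rule
        option name 'Bulk test: whole LAN'
        list src_ip '192.168.1.0/24'
        option class 'cs1'
        option counter '1'
        option enabled '1'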

Good observation, but unfortunately there's not much we can do about this. Since downloads are often offered as chunks and the chunks themselves are only a few hundred MB, the underlying rule often doesn't get triggered.

It can also happen with slow downloads (not using full bandwidth). In this case, "Boost Low-Volume TCP Traffic" might be permanently triggered because you're downloading at less than 150 pps. Since "Enable Sustained TCP Down-Prioritization" essentially takes the equivalent of 10 seconds of maximum download and down-prioritizes the connection with cs1 when this threshold is reached, it can take much longer to reach the threshold. We calculate:

# Calculated values (DOWNRATE is in kbit/s, so these are the bytes
# transferred in the first 500 ms / first 10 s at the full download rate)
FIRST500MS=$((DOWNRATE * 500 / 8))
FIRST10S=$((DOWNRATE * 10000 / 8))

And then set this rule:

if [ "$TCP_DOWNPRIO_SUSTAINED_ENABLED" -eq 1 ]; then
    downprio_sustained_rules="meta l4proto tcp ct bytes > \$first10s jump mark_10s"
else
    downprio_sustained_rules="# Sustained TCP down-prioritization disabled"
fi

As I mentioned, I believe there's no solution for this because we use nftables "ct bytes" which refers to individual connections from conntrack, not all connections of a download.
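To put numbers on it, using the DOWNRATE from your status output above as an example: FIRST10S = 400000 * 10000 / 8 = 500,000,000 bytes (~500 MB), so a single TCP connection only gets down-prioritized to cs1 after roughly 500 MB, and a download served in chunks of a few hundred MB per connection never crosses that threshold.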

If you believe these rules are impacting your performance, you can simply disable them.

1 Like

Is marking bulk traffic with 'LE' supported by QoSmate? I think this would be the recommended way to mark bulk traffic as opposed to CS1. Any thoughts on this?