SQM CABLE DOCSIS 3.1 Settings + Packet Prioritization

ah okay, so it's working? is there a way to check the upload DSCP after qosify?

Try something like:

tcpdump -i eth1 -n -vv udp and ! port 53

Where eth1 would be your WAN interface.
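
If you want to narrow the capture down to a single DSCP value, a pcap filter on the TOS byte works; a sketch, assuming eth1 is your WAN device and you are looking for EF (DSCP 46, i.e. TOS byte 0xb8):

# ip[1] is the TOS byte; its upper 6 bits are the DSCP field (46 << 2 == 0xb8)
tcpdump -i eth1 -n -vv 'ip and (ip[1] & 0xfc) == 0xb8'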

If you know the remote host's IP address, just filter on that and use -i wan (or whatever your WAN device is). Packets sent in the direction of the remote IP should show the expected marking.

I can confirm my DSCP prioritization is working well on both ingress and egress.

The tcpdump exercises really just demonstrate that DSCP marking works as expected. Marking alone is not prioritization, though. The easiest way to test whether prioritization works is to repeatedly look at the updated output of
tc -s qdisc show dev pppoe-wan
and
tc -s qdisc show dev ifb4pppoe-wan
(replace pppoe-wan and ifb4pppoe-wan with the actual devices you instantiated cake on) under two conditions:

  1. no priority traffic (e.g. if you prioritize traffic from a specific computer, shut that computer down)
  2. with expected priority traffic active
    If you repeatedly run one of the tc commands above in an ssh terminal, you expect the packet/byte counters for the higher-priority tins to increase faster under condition 2 than under condition 1 (where the increase might actually be zero, depending on the specific rules you instantiated); a minimal watch loop is sketched after this list.
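
A minimal sketch of such a watch loop, assuming pppoe-wan is the device cake is instantiated on (swap in your own device; plain BusyBox shell):

# print cake's per-tin counter rows every 2 seconds; the Voice/Video
# columns should grow noticeably faster while priority traffic is active
while true; do
        tc -s qdisc show dev pppoe-wan | grep -E 'pkts|bytes'
        sleep 2
done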

Could I also ask you to write a quick summary of what you finally converged on, so others finding this thread might learn from your gained experience?

Please note that these are really just names; the only thing that matters is how cake treats a specific bit pattern of the TOS byte. Being modern enough, cake splits the old 8-bit TOS byte into a 6-bit DSCP field and a 2-bit ECN field; as far as I can see, tcpdump reports the full value as tos but also extracts the ECN value. What I am aiming at is that the old TOS names can be safely ignored.
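
As a worked example of that split (shell arithmetic; 0xb8 is the tos value tcpdump would report for an EF-marked packet):

# DSCP is the upper 6 bits of the TOS byte, ECN the lower 2 bits:
# 0xb8 >> 2 = 46 (EF), 0xb8 & 0x3 = 0 (no ECN)
printf 'DSCP=%d ECN=%d\n' $(( 0xb8 >> 2 )) $(( 0xb8 & 0x3 ))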

This is my simple qosify gaming config (Warzone, CS:GO) for a 1000/50 cable connection:

vi /etc/config/qosify


config defaults
        list defaults /etc/qosify/*.conf
        option dscp_prio besteffort
        option dscp_bulk besteffort
        option dscp_icmp besteffort
        option dscp_default_udp besteffort
        option dscp_default_tcp besteffort


config class besteffort
        option ingress CS0
        option egress CS0

config class voice
        option ingress EF
        option egress EF

config interface wan
        option name wan
        option disabled 0
        option bandwidth_up 42mbit
        option bandwidth_down 850mbit
        option overhead_type docsis
        # defaults:
        option ingress 1
        option egress 1
        option mode diffserv4
        option nat 1
        option host_isolate 1
        option autorate_ingress 0
        option ingress_options ""
        option egress_options ""
        option options ""

config device wandev
        option disabled 1
        option name wan
        option bandwidth 100mbit

vi /etc/qosify/00-defaults.conf

# Gaming
udp:27005-27200         voice
udp:3074                voice
udp:30000-45000         voice
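
After editing the two files, qosify needs to pick the changes up; assuming the standard init script, restarting it and then running the two status commands used later in this thread should confirm the rules loaded:

/etc/init.d/qosify restart
ubus call qosify dump
qosify-status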

root@OpenWrt:~# tc -s qdisc
qdisc noqueue 0: dev lo root refcnt 2 
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0
qdisc cake 8007: dev eth0 root refcnt 9 bandwidth 42Mbit diffserv4 dual-srchost nat nowash no-ack-filter split-gso rtt 100ms noatm overhead 18 mpu 64 
 Sent 9231038 bytes 48980 pkt (dropped 3, overlimits 7101 requeues 0) 
 backlog 0b 0p requeues 0
 memory used: 131680b of 4Mb
 capacity estimate: 42Mbit
 min/max network layer size:           28 /    1500
 min/max overhead-adjusted size:       64 /    1518
 average network hdr offset:           14

                   Bulk  Best Effort        Video        Voice
  thresh       2625Kbit       42Mbit       21Mbit    10500Kbit
  target         6.92ms          5ms          5ms          5ms
  interval        102ms        100ms        100ms        100ms
  pk_delay          0us        194us         58us         15us
  av_delay          0us         17us          1us         10us
  sp_delay          0us          7us          1us          8us
  backlog            0b           0b           0b           0b
  pkts                0        42787           14         6182
  bytes               0      8148151         5004      1082385
  way_inds            0          169            0           20
  way_miss            0         2162            3           99
  way_cols            0            0            0            0
  drops               0            3            0            0
  marks               0            0            0            0
  ack_drop            0            0            0            0
  sp_flows            0            1            0            1
  bk_flows            0            1            0            0
  un_flows            0            0            0            0
  max_len             0        52990         1410         1306
  quantum           300         1281          640          320

qdisc clsact ffff: dev eth0 parent ffff:fff1 
 Sent 25628101394 bytes 44372965 pkt (dropped 940, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0
qdisc fq_codel 0: dev eth1 root refcnt 2 limit 10240p flows 1024 quantum 1514 target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64 
 Sent 35830894696 bytes 37264505 pkt (dropped 6776, overlimits 0 requeues 1124482) 
 backlog 0b 0p requeues 1124482
  maxpacket 1514 drop_overlimit 6773 new_flow_count 258987 ecn_mark 0 drop_overmemory 6773
  new_flows_len 0 old_flows_len 0
qdisc noqueue 0: dev br-lan root refcnt 2 
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0
qdisc fq_codel 0: dev ifb-dns root refcnt 2 limit 10240p flows 1024 quantum 1514 target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64 
 Sent 186364 bytes 1041 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc cake 8008: dev ifb-eth0 root refcnt 2 bandwidth 850Mbit diffserv4 dual-dsthost nat nowash ingress no-ack-filter split-gso rtt 100ms noatm overhead 18 mpu 64 
 Sent 228973673 bytes 549758 pkt (dropped 13, overlimits 12220 requeues 0) 
 backlog 0b 0p requeues 0
 memory used: 1378468b of 15140Kb
 capacity estimate: 850Mbit
 min/max network layer size:           46 /    1500
 min/max overhead-adjusted size:       64 /    1518
 average network hdr offset:           14

                   Bulk  Best Effort        Video        Voice
  thresh      53125Kbit      850Mbit      425Mbit   212500Kbit
  target            5ms          5ms          5ms          5ms
  interval        100ms        100ms        100ms        100ms
  pk_delay          0us        187us          0us         11us
  av_delay          0us         18us          0us          7us
  sp_delay          0us          5us          0us          1us
  backlog            0b           0b           0b           0b
  pkts                0       161266            0       388505
  bytes               0    203755867            0     25236180
  way_inds            0          137            0           20
  way_miss            0         1255            0           84
  way_cols            0            0            0            0
  drops               0           13            0            0
  marks               0            0            0            0
  ack_drop            0            0            0            0
  sp_flows            0            1            0            0
  bk_flows            0            1            0            1
  un_flows            0            0            0            0
  max_len             0        68130            0         1314
  quantum          1514         1514         1514         1514

qdisc clsact ffff: dev ifb-eth0 parent ffff:fff1 
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0
root@OpenWrt:~# 


One quick note: the IETF recommends, and some ISPs implement, dropping CS7- (and CS6-) marked packets that come into a network without an appropriate service level agreement in place. I would therefore recommend using the more common EF bit pattern for your high-priority class, as that has a better chance of merely being bleached (re-marked to DSCP 0 on ingress) instead of being dropped.
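
For illustration, if a high-priority class were currently marked CS7, the change in /etc/config/qosify would just be swapping the bit pattern (the class name here is hypothetical):

config class gaming
        # option ingress CS7   <- risks being dropped without an SLA in place
        # option egress CS7
        option ingress EF
        option egress EF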

Now that I'm remembering more, I think I had the "docsis" keyword included, and also set mpu 64 in the extra settings in the advanced section of the Link Layer page. The "docsis" keyword sets the Ethernet overhead to 18 and mpu to 64, if I'm remembering correctly. I don't think it doubles the overhead bytes, but it's been a long time. One should use the keyword, or set the values manually, not both...
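
If you instantiate cake by hand, the equivalence is easy to check; a sketch, with eth0 and the rate as placeholders (the expanded settings show up in the qdisc stats line, compare the "noatm overhead 18 mpu 64" in the outputs above):

# using the keyword...
tc qdisc replace dev eth0 root cake bandwidth 42Mbit docsis
# ...should report the same framing parameters as setting them manually:
tc qdisc replace dev eth0 root cake bandwidth 42Mbit overhead 18 mpu 64
# inspect the result:
tc -s qdisc show dev eth0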

Pablo has finally commented on Jeremy’s patches.

https://marc.info/?l=netfilter-devel&m=165332867915114&w=2

hi nik, in ssh can you post the output of

ubus call qosify dump 

and

qosify-status

and tell me if you get an error, thanks

root@OpenWrt:~# qosify-status
===== interface wan: active =====
egress status:
qdisc cake 800f: root refcnt 9 bandwidth 42Mbit diffserv4 dual-srchost nat nowash no-ack-filter split-gso rtt 100ms noatm overhead 18 mpu 64 
 Sent 15040238 bytes 90804 pkt (dropped 1, overlimits 4241 requeues 0) 
 backlog 0b 0p requeues 0
 memory used: 67324b of 4Mb
 capacity estimate: 42Mbit
 min/max network layer size:           28 /    1500
 min/max overhead-adjusted size:       64 /    1518
 average network hdr offset:           14

                   Bulk  Best Effort        Video        Voice
  thresh       2625Kbit       42Mbit       21Mbit    10500Kbit
  target         6.92ms          5ms          5ms          5ms
  interval        102ms        100ms        100ms        100ms
  pk_delay          0us         17us          0us         20us
  av_delay          0us          4us          0us         10us
  sp_delay          0us          2us          0us          7us
  backlog            0b           0b           0b           0b
  pkts                0        31294            0        59511
  bytes               0      4811626            0     10230126
  way_inds            0           48            0           14
  way_miss            0          668            0          198
  way_cols            0            0            0            0
  drops               0            1            0            0
  marks               0            0            0            0
  ack_drop            0            0            0            0
  sp_flows            0            1            0            1
  bk_flows            0            1            0            0
  un_flows            0            0            0            0
  max_len             0        17061            0         1342
  quantum           300         1281          640          320


ingress status:
qdisc cake 8010: root refcnt 2 bandwidth 850Mbit diffserv4 dual-dsthost nat nowash ingress no-ack-filter split-gso rtt 100ms noatm overhead 18 mpu 64 
 Sent 252534430 bytes 328215 pkt (dropped 5, overlimits 9877 requeues 0) 
 backlog 0b 0p requeues 0
 memory used: 4106944b of 15140Kb
 capacity estimate: 850Mbit
 min/max network layer size:           46 /    1500
 min/max overhead-adjusted size:       64 /    1518
 average network hdr offset:           14

                   Bulk  Best Effort        Video        Voice
  thresh      53125Kbit      850Mbit      425Mbit   212500Kbit
  target            5ms          5ms          5ms          5ms
  interval        100ms        100ms        100ms        100ms
  pk_delay          0us       18.3ms          0us        121us
  av_delay          0us       18.3ms          0us         12us
  sp_delay          0us         70us          0us          1us
  backlog            0b           0b           0b           0b
  pkts                0       160838            0       167382
  bytes               0    203126214            0     49415342
  way_inds            0           64            0            1
  way_miss            0          420            0          152
  way_cols            0            0            0            0
  drops               0            5            0            0
  marks               0            0            0            0
  ack_drop            0            0            0            0
  sp_flows            0            1            0            0
  bk_flows            0            1            0            1
  un_flows            0            0            0            0
  max_len             0        62074            0         1242
  quantum          1514         1514         1514         1514


root@OpenWrt:~# 

And ubus call qosify dump ?

there is nothing to copy

I have a doubt: on what occasions is it changed to overhead 22?

Well, according to the DOCSIS specifications, the cable traffic shaper that ISPs generally use to limit customers to their contracted maximum rates emulates 18 bytes of overhead. However, an ISP might add a 4-byte VLAN tag, so the sqm recommendation was changed to 22. The rationale consists of the following:

  1. Slightly overestimating the per-packet overhead introduces a minor loss in maximally achievable throughput, while underestimating it can result in a more noticeable increase of bufferbloat/latency under load.
  2. sqm's main purpose is to reduce bufferbloat, so we recommend using 22 instead of 18 on the principle of erring on the side of caution.

Feel free to use 18 if you are confident your ISP does not use VLAN tags.
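
If you want 22 bytes in qosify rather than the 18 that option overhead_type docsis implies, one way might be to pass the cake keywords directly via option options (a sketch only, reusing the interface section from the config earlier in this thread):

config interface wan
        option name wan
        option bandwidth_up 42mbit
        option bandwidth_down 850mbit
        # instead of: option overhead_type docsis  (implies overhead 18 mpu 64)
        option options "overhead 22 mpu 64 noatm"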

Side-note: Getting overhead and gross shaper rate set correctly is harder than it seems. If you set the per-packet overhead too small, you can "make up" for that by setting the shaper rate appropriately lower at the same time (so that you see no bufferbloat under saturating loads); and vice versa, you can undo the throughput reduction of setting too high a per-packet overhead by setting the shaper rate higher.

Now, this equivalence is not complete (otherwise there would be little need to model both parameters independently in sqm), because the proportionality factor between the two depends on the packet size: the shaper rate does not care, but any fixed per-packet overhead contributes a larger fraction of the total packet size as the payload size decreases, so the relative error (and hence the throughput or bufferbloat effect) of a wrong overhead setting increases with decreasing packet size.

One way to tackle this is to slightly overestimate the overhead and set the shaper rate such that it stays below the bottleneck rate (e.g. by setting the gross shaper rate to the goodput as measured with a speedtest). The other way is to first set shaper rate and overhead for normal ~1500 byte packets and confirm the lack of bufferbloat, and then repeat the measurements after using bi-directional MSS clamping to restrict the maximum segment size (MSS) to, say, 200 bytes; if both shaper rate and overhead were set appropriately, there will be no increased bufferbloat at the smaller packet size (assuming the router and servers in question have no issues with the increased packet rate for the tests with the smaller MSS).

It is often considerably simpler to just select shaper rate and overhead conservatively enough that the described method can be avoided.
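
To put rough numbers on that packet-size dependence (illustrative only: 18 bytes of overhead on a 1500 byte packet versus a 240 byte packet, i.e. a 200 byte MSS plus 40 bytes of TCP/IP headers):

# share of the on-the-wire size that 18 bytes of overhead represents
awk 'BEGIN { printf "1500B: %.1f%%   240B: %.1f%%\n", 100*18/1518, 100*18/258 }'
# -> 1500B: 1.2%   240B: 7.0%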

How would I do this? Measuring it with a speed test alone is not enough.

If you still use firewall3, just add the following to /etc/firewall.user and restart the firewall:

# special rules to allow MSS clamping for in- and outbound traffic
# use ip6tables -t mangle -S ; iptables -t mangle -S to check
forced_MSS=216
# affects both down- and upstream, egress seems to require at least 216
iptables -t mangle -A FORWARD -p tcp -m tcp --tcp-flags SYN,RST SYN -m comment --comment "custom: Zone wan MTU fixing" -j TCPMSS --set-mss ${forced_MSS}
ip6tables -t mangle -A FORWARD -p tcp -m tcp --tcp-flags SYN,RST SYN -m comment --comment "custom6: Zone wan MTU fixing" -j TCPMSS --set-mss ${forced_MSS}

make sure to set forced_MSS to the desired value.

In the past that worked for me (I used tcpdump and wireshark to confirm that TCP MSS was actually changed).
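
On current OpenWrt releases that ship firewall4/nftables instead, something along these lines might achieve the same; a sketch only, and the fw4 chain name mangle_forward is an assumption about your ruleset (check with nft list table inet fw4 first):

# clamp the MSS of forwarded SYNs to 216; the inet table covers IPv4 and IPv6
nft insert rule inet fw4 mangle_forward tcp flags syn tcp option maxseg size set 216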