SQM CABLE DOCSIS 3.1 Settings + Packet Prioritization

Could I ask you for a favor please?

Instead of going through the hassle of creating a screenshot and pasting that here, could you simply select all the relevant text in the terminal window and copy and paste it into the forum editor as "Preformatted text"? To paste as preformatted text, click on the </> icon in the toolbar and replace the "type or paste here" placeholder with what you copied from the terminal window.

Or, if you want to do it purely by entering text in the forum editor:

Just make sure you "sandwich" your text between two rows of backtick characters ` (which themselves will be invisible in the preview), looking something like this in the editor:
```
Your Pasted Text as preformatted text with fixed width font
   1
1111 (note with fixed-width fonts the numbers are right-aligned)
```
but looking like this in the rendered forum:

Your Pasted Text as preformatted text with fixed width font
   1
1111 (note with fixed-width fonts the numbers are right-aligned)
root@OpenWrt:~# tc -s qdisc
qdisc noqueue 0: dev lo root refcnt 2 
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0
qdisc cake 8013: dev eth0 root refcnt 9 bandwidth 42Mbit diffserv4 dual-srchost nat nowash no-ack-filter split-gso rtt 100ms noatm overhead 18 
 Sent 33480212 bytes 263100 pkt (dropped 1, overlimits 30183 requeues 1) 
 backlog 0b 0p requeues 1
 memory used: 45472b of 4Mb
 capacity estimate: 42Mbit
 min/max network layer size:           28 /    1500
 min/max overhead-adjusted size:       46 /    1518
 average network hdr offset:           14

                   Bulk  Best Effort        Video        Voice
  thresh       2625Kbit       42Mbit       21Mbit    10500Kbit
  target         6.92ms          5ms          5ms          5ms
  interval        102ms        100ms        100ms        100ms
  pk_delay          0us         14us          0us         21us
  av_delay          0us          3us          0us         10us
  sp_delay          0us          2us          0us          6us
  backlog            0b           0b           0b           0b
  pkts                0       178649            0        84452
  bytes               0     18524071            0     14957627
  way_inds            0          814            0            0
  way_miss            0         1759            0           43
  way_cols            0            0            0            0
  drops               0            1            0            0
  marks               0            0            0            0
  ack_drop            0            0            0            0
  sp_flows            0            1            0            1
  bk_flows            0            1            0            0
  un_flows            0            0            0            0
  max_len             0        25262            0         1342
  quantum           300         1281          640          320

qdisc clsact ffff: dev eth0 parent ffff:fff1 
 Sent 9219449783 bytes 17876577 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0
qdisc fq_codel 0: dev eth1 root refcnt 2 limit 10240p flows 1024 quantum 1514 target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64 
 Sent 10372130910 bytes 7785575 pkt (dropped 3291, overlimits 0 requeues 113173) 
 backlog 0b 0p requeues 113173
  maxpacket 1514 drop_overlimit 2752 new_flow_count 78394 ecn_mark 0 drop_overmemory 2752
  new_flows_len 0 old_flows_len 0
qdisc noqueue 0: dev br-lan root refcnt 2 
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0
qdisc fq_codel 0: dev ifb-dns root refcnt 2 limit 10240p flows 1024 quantum 1514 target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64 
 Sent 93622 bytes 569 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc cake 8014: dev ifb-eth0 root refcnt 2 bandwidth 850Mbit diffserv4 dual-dsthost nat nowash ingress no-ack-filter split-gso rtt 100ms noatm overhead 18 
 Sent 1375328856 bytes 1417003 pkt (dropped 77, overlimits 40519 requeues 0) 
 backlog 0b 0p requeues 0
 memory used: 5253462b of 15140Kb
 capacity estimate: 850Mbit
 min/max network layer size:           46 /    1500
 min/max overhead-adjusted size:       64 /    1518
 average network hdr offset:           14

                   Bulk  Best Effort        Video        Voice
  thresh      53125Kbit      850Mbit      425Mbit   212500Kbit
  target            5ms          5ms          5ms          5ms
  interval        100ms        100ms        100ms        100ms
  pk_delay          0us       13.8ms          0us        611us
  av_delay          0us       13.8ms          0us         34us
  sp_delay          0us       1.46ms          0us          1us
  backlog            0b           0b           0b           0b
  pkts                0       956886            0       460194
  bytes               0   1265476040            0    109967702
  way_inds            0          832            0            0
  way_miss            0         1360            0           28
  way_cols            0            0            0            0
  drops               0           77            0            0
  marks               0            0            0            0
  ack_drop            0            0            0            0
  sp_flows            0            1            0            0
  bk_flows            0            1            0            1
  un_flows            0            0            0            0
  max_len             0        68130            0         1242
  quantum          1514         1514         1514         1514

qdisc clsact ffff: dev ifb-eth0 parent ffff:fff1 
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0


Thank you, that is much easier to read for me. This looks like things are working out for you: both ingress and egress cake instances show traffic only in Best Effort and Voice, as expected from your configuration.

If you want to configure the overhead manually, just add overhead 18 mpu 64 to both:

option ingress_options ""
option egress_options ""

While you are at it, you might want to try adding ack-filter to egress_options...

config defaults
        list defaults /etc/qosify/*.conf

config class voice
        option ingress EF
        option egress EF

config interface wan
        option name wan
        option disabled 0
        option bandwidth_up 42mbit
        option bandwidth_down 850mbit
        option overhead_type none
        # defaults:
        option ingress 1
        option egress 1
        option mode diffserv4
        option nat 1
        option host_isolate 1
        option autorate_ingress 0
        option ingress_options "overhead 18 mpu 64"
        option egress_options "overhead 18 mpu 64 ack-filter"
        option options ""           

What does the ACK filter do?

So the ACK filter aggregates queued-up ACKs. Assume that for a given TCP flow we have, say, three ACK packets sitting in the egress queue. When this queue is serviced again we could either send these three ACKs back to back, or (since ACKs are cumulative) drop the first two and only send the last one, the one with the highest sequence number. The receiver of the ACKs will perform more or less the same whether three consecutive ACKs arrive back to back or only the last one does, but we have removed 66% of the traffic volume of that ACK burst. AND if, without the filter, we had only managed to send the first two ACKs in that service round, the improvement is not just in ACK volume: the ACK signal of the last ACK would have been delayed, potentially stalling the sender (while with an ACK filter the most recently received segment gets acknowledged quicker).

In short, on bursty MAC layers like DOCSIS, ACK filtering can help.
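
If you just want to try it quickly without touching the qosify config, cake also accepts the keyword at runtime (a sketch; eth0 is the egress cake instance from your tc output above, and qosify/SQM will re-apply their own settings on the next reload):

```
# toggle cake's ACK filter on the egress qdisc at runtime
tc qdisc change dev eth0 root cake ack-filter
# and turn it back off
tc qdisc change dev eth0 root cake no-ack-filter
```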


I tried ACK filtering, but it clearly felt better without it, so this is my final config:

config defaults
        list defaults /etc/qosify/*.conf

config class voice
        option ingress EF
        option egress EF

config interface wan
        option name wan
        option disabled 0
        option bandwidth_up 42mbit
        option bandwidth_down 850mbit
        option overhead_type none
        # defaults:
        option ingress 1
        option egress 1
        option mode diffserv4
        option nat 1
        option host_isolate 1
        option autorate_ingress 0
        option ingress_options "overhead 22 mpu 64"
        option egress_options "overhead 22 mpu 64"
        option options ""

# Games
udp:27000-27031         voice
udp:27036               voice
udp:3074                voice
udp:3478                voice
udp:4379                voice
udp:4380                voice

ACK filtering should have zero impact on pure UDP flows, but in theory it might overload your CPU. I do wonder, though, how you assessed "clearly felt better", because that can be a fluke...

That said, for your gaming issue ACK filtering is completely orthogonal: as long as your game traffic is the only traffic in the Voice priority class, your game should not care much about what happens to the TCP flows...

Just to illustrate the issue though: with classic TCP Reno there will be one small ACK packet for every two full MSS segments received. Assuming MTU 1500 and IPv4/TCP, a pure ACK packet is roughly 1/20 the size of a full-MTU/full-MSS packet, and one ACK for every two full segments gives us:
ACK/Data = 1/40
So for a saturating download on a 1 Gbps link you can expect 1000/40 = 25 Mbps of reverse ACK upload traffic; on a 1000/50 link that is already 50% of your upload capacity...
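
As a quick back-of-the-envelope check of these numbers (purely illustrative):

```
# reverse ACK volume from the ~1/40 ratio derived above
awk 'BEGIN {
    down_mbps = 1000                 # saturating download rate in Mbit/s
    ack_mbps  = down_mbps / 40       # one ACK (~1/20 of a full packet) per two full segments
    up_mbps   = 50                   # upload capacity of a 1000/50 link
    printf "~%.0f Mbit/s of ACK traffic = %.0f%% of the uplink\n", ack_mbps, 100 * ack_mbps / up_mbps
}'
```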

More modern TCPs than Reno, especially combined with aggregation techniques like GRO/GSO, will emit noticeably fewer ACKs, ameliorating the problem.


My shots didn't register right (same server), but I tried it with overhead 18 mpu 64 ack-filter.
I will try it again later with overhead 22 mpu 64 ack-filter.

Again, unlikely to be involved in your issue. Overhead 18 or 22 is unlikely to be an issue unless you saturate at least one direction of your link with really small packets.
So unless you heavily loaded your internet access link in parallel to playing your game, this is unlikely to affect your game at all.
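
Just to put rough numbers on "really small packets" (my own illustration, not measured data):

```
# relative per-packet cost of overhead 22 versus 18, for large and small packets
awk 'BEGIN {
    sizes[1] = 1500; sizes[2] = 64
    for (i = 1; i <= 2; i++) {
        p = sizes[i]
        printf "%4dB packet: %.1f%% more wire time with overhead 22 vs 18\n", p, 100 * ((p + 22) / (p + 18) - 1)
    }
}'
```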

I understand that testing responsiveness in games is quite tricky and I do not envy you...

I think this will answer your question; these are the ingress & egress setup calls:

egress() {
    SILENT=1 $TC qdisc del dev $IFACE root
    $TC qdisc add dev $IFACE root cake bandwidth ${UPLINK}kbit \
            $( get_cake_lla_string ) ${EGRESS_CAKE_OPTS} ${EQDISC_OPTS}

    # Put act_ctinfo on the egress interface to set DSCP from the stored connmark.
    # This seems counter intuitive but it ensures once the mark is set that all
    # subsequent egress packets have the same stored DSCP. This avoids the need
    # for iptables to run/mark every packet.

    $TC filter add dev $IFACE matchall \
        action ctinfo dscp ${DSCP} ${DSCPS}
}


ingress() {

    SILENT=1 $TC qdisc del dev $IFACE handle ffff: ingress
    $TC qdisc add dev $IFACE handle ffff: ingress

    SILENT=1 $TC qdisc del dev $DEV root

    [ "$ZERO_DSCP_INGRESS" -eq "1" ] && INGRESS_CAKE_OPTS="$INGRESS_CAKE_OPTS wash"

    $TC qdisc add dev $DEV root cake bandwidth ${DOWNLINK}kbit \
            $( get_cake_lla_string ) ${INGRESS_CAKE_OPTS} ${IQDISC_OPTS}

    $IP link set dev $DEV up

    # restore DSCP from conntrack mark into packet
    # redirect all packets arriving in $IFACE to ifb0
    $TC filter add dev $IFACE parent ffff: matchall \
        action ctinfo dscp ${DSCP} ${DSCPS} \
        action mirred egress redirect dev $DEV
}

Would this prio work when I don't use SQM or qosify?

I guess not, since there would be no scheduler to honor the priorities. One more issue: NAT can/will remap the internal port numbers, so it might be safer to attach your rules to the remote port ranges instead. (I might be misreading your screenshot though; maybe better to post the relevant sections from your /etc/config/firewall as text?)
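
For example, such a rule keyed on the remote ports could look roughly like this (just a sketch: the port range is taken from your qosify game rules above, and which DSCP/direction actually fits your game is for you to decide):

```
config rule
	option name 'Game DSCP via remote ports (sketch)'
	list proto 'udp'
	option src 'lan'
	option dest 'wan'
	option dest_port '27000-27031'
	option target 'DSCP'
	option set_dscp 'CS4'
```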

Silly questions:

A) If the IFB goes away for whatever reason (say the underlying pppoe interface ceases to exist), will the action also be gone for good? Or, put differently, is anything more required than a $TC qdisc del dev $DEV root to get rid of the action?

B) What requirements for additional packages exist for using this? (I would like to add this great capability to a variant of layer_cake.qos in the main repository, but want to make it safe even on systems lacking the requirements.)

Please note: my idea is to keep the actual egress marking rules out of the scripts and delegate users to the firewall's DSCP classification action to set egress DSCPs to their hearts' content, so all we would need are two additional tc lines, guarded by a check whether the required functionality exists on the system (roughly along the lines of the sketch below)...
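
Such a guard could look something like this (just a sketch reusing the variable names from the script above; the probe-and-fallback logic is my assumption, not tested code):

```
# Probe: try to install the ctinfo-based DSCP restore filter; if the kernel/tc
# lack act_ctinfo, fall back to a plain redirect so ingress shaping still works,
# just without DSCP restoration.
if $TC filter add dev $IFACE parent ffff: matchall \
        action ctinfo dscp ${DSCP} ${DSCPS} \
        action mirred egress redirect dev $DEV 2>/dev/null; then
    echo "ctinfo DSCP restore active on $IFACE"
else
    echo "act_ctinfo not available, ingress DSCPs will not be restored" >&2
    $TC filter add dev $IFACE parent ffff: matchall \
        action mirred egress redirect dev $DEV
fi
```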

config rule
	option name 'CS4 gaming'
	list proto 'udp'
	option src '*'
	option dest 'lan'
	list dest_ip '192.168.2.160'
	option target 'DSCP'
	option set_dscp 'CS4'
	option enabled '0'

config rule
	option name 'BROADCAST VIDEO'
	list proto 'tcp'
	option src '*'
	option src_port '1935 1936 2935 2396'
	option dest 'lan'
	list dest_ip '192.168.2.160'
	option target 'DSCP'
	option set_dscp 'CS3'
	option enabled '0'

A big +1 for this concept.

Using the firewall traffic rules UI is straightforward enough to personalize the markings.

And having the ingress traffic mirror the outgoing DSCP is a big win, and much simpler (and I'd guess, more efficient) than veth and other approaches.

Hrm, that is a bit of a show stopper for me right now, as my iptables tells me:

root@turris:~# iptables -t mangle -A MANGLE_CHAIN_HERE -j CONNMARK --set-dscpmark help
iptables v1.8.3 (legacy): unknown option "--set-dscpmark"
Try `iptables -h' or 'iptables --help' for more information.

Yes, the command would fail as exercised anyway, but I guess unknown option "--set-dscpmark" also tells me that my iptables is not usable right now, which makes testing a tad tricky...

Also, if I want to use the firewall GUI to set the marking rules, I guess (naively?) that I need a generic mangle rule here that copies the existing DSCP into the connmark somehow.

Ignore me: @ldir laid things out in a way even I can understand here:
https://lore.kernel.org/netfilter-devel/20191203160652.44396-2-ldir@darbyshire-bryant.me.uk/
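
If I read that right, the storing side boils down to a single mangle rule along these lines (a sketch only: the chain and the mask/statemask split here follow the usual ctinfo examples, top six bits for the DSCP plus one "valid" flag bit, so double-check against the patch before copying):

```
# store each packet's DSCP in the upper six bits of the conntrack mark and
# set a flag bit that act_ctinfo later checks before restoring the DSCP
iptables -t mangle -A POSTROUTING -j CONNMARK --set-dscpmark 0xfc000000/0x01000000
```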

I still need to get a version of OpenWrt on a testing device that sports the modified iptables version...


Hey @moeller0, is this post from you still relevant?

Why should someone do points 4 to 6.1 when it says:
Show Advanced Linklayer Options, (only needed if MTU > 1500)

What will these commands do then?

Did I set everything up correctly for low-latency gaming?


root@OpenWrt:~# tc -s qdisc
qdisc noqueue 0: dev lo root refcnt 2 
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0
qdisc cake 8039: dev eth0 root refcnt 9 bandwidth 42Mbit besteffort dual-srchost nat nowash ack-filter split-gso rtt 100ms noatm overhead 18 mpu 64 
 Sent 557716 bytes 2136 pkt (dropped 1, overlimits 503 requeues 0) 
 backlog 0b 0p requeues 0
 memory used: 99668b of 4Mb
 capacity estimate: 42Mbit
 min/max network layer size:           28 /    1500
 min/max overhead-adjusted size:       64 /    1518
 average network hdr offset:           14

                  Tin 0
  thresh         42Mbit
  target            5ms
  interval        100ms
  pk_delay        248us
  av_delay         22us
  sp_delay          8us
  backlog            0b
  pkts             2137
  bytes          559230
  way_inds            0
  way_miss          218
  way_cols            0
  drops               1
  marks               0
  ack_drop            0
  sp_flows            1
  bk_flows            1
  un_flows            0
  max_len         17301
  quantum          1281

qdisc ingress ffff: dev eth0 parent ffff:fff1 ---------------- 
 Sent 5015802 bytes 55734 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0
qdisc fq_codel 0: dev eth1 root refcnt 2 limit 10240p flows 1024 quantum 1514 target 5ms interval 100ms memory_limit 4Mb ecn drop_batch 64 
 Sent 9269453949 bytes 6920256 pkt (dropped 5753, overlimits 0 requeues 89363) 
 backlog 0b 0p requeues 89363
  maxpacket 1514 drop_overlimit 4864 new_flow_count 17581 ecn_mark 0 drop_overmemory 4864
  new_flows_len 0 old_flows_len 0
qdisc noqueue 0: dev br-lan root refcnt 2 
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0
qdisc cake 803a: dev ifb4eth0 root refcnt 2 bandwidth 850Mbit besteffort dual-dsthost nat wash no-ack-filter split-gso rtt 100ms noatm overhead 18 mpu 64 
 Sent 5809566 bytes 55734 pkt (dropped 0, overlimits 679 requeues 0) 
 backlog 0b 0p requeues 0
 memory used: 32358b of 15140Kb
 capacity estimate: 850Mbit
 min/max network layer size:           46 /    1500
 min/max overhead-adjusted size:       64 /    1518
 average network hdr offset:           14

                  Tin 0
  thresh        850Mbit
  target            5ms
  interval        100ms
  pk_delay         14us
  av_delay          7us
  sp_delay          1us
  backlog            0b
  pkts            55734
  bytes         5809566
  way_inds            2
  way_miss          218
  way_cols            0
  drops               0
  marks               0
  ack_drop            0
  sp_flows            0
  bk_flows            1
  un_flows            0
  max_len         21852
  quantum          1514

That is the actual text in the GUI, which admittedly is not correct, since adjusting the MPU is also required; the default of 0 is correct only increasingly rarely (on some ATM encapsulations).
For cable/DOCSIS you really should set mpu 64. How you do that, via the ingress/egress options or via the GUI, does not matter anymore (cake will honor the mpu value from the GUI).
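
In /etc/config/sqm that could look roughly like this (a sketch only; interface name and rates taken from your tc output above, and the advanced option strings are just one of the equivalent places to put the mpu):

```
config queue 'eth0'
	option enabled '1'
	option interface 'eth0'
	option download '850000'
	option upload '42000'
	option qdisc 'cake'
	option script 'piece_of_cake.qos'
	option linklayer 'ethernet'
	option overhead '18'
	option qdisc_advanced '1'
	option qdisc_really_really_advanced '1'
	option iqdisc_opts 'mpu 64'
	option eqdisc_opts 'mpu 64'
```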

As a non-gamer I would say probably. I might try adding the ingress keyword to the advanced option string for ingress, but that is all I can see. However, if gaming sucks you might want to switch to layer_cake.qos and carefully up-prioritize only the gaming packets. But that is a whole different kettle of fish.
Personally I would always first try without prioritization, but then I stopped reaction-time-gated gaming shortly after the original Quake came around, so I have little first-hand experience to base my recommendations on.

Same here. For various reasons I need to stay on 19.07.x; is there a patch I can apply to that version of iptables (1.8.3 in my case) that would add the set-dscpmark function?