SQM & bufferbloat advice/help

Here:

root@OpenWrt:~# ifstatus wan
{
        "up": true,
        "pending": false,
        "available": true,
        "autostart": true,
        "dynamic": false,
        "uptime": 12389,
        "l3_device": "pppoe-wan",
        "proto": "pppoe",
        "device": "eth1.2",
        "updated": [
                "addresses",
                "routes"
        ],
        "metric": 0,
        "dns_metric": 0,
        "delegation": true,
        "ipv4-address": [
                {
                        "address": "90.244.90.136",
                        "mask": 32,
                        "ptpaddress": "212.158.250.96"
                }
        ],
        "ipv6-address": [

        ],
        "ipv6-prefix": [

        ],
        "ipv6-prefix-assignment": [

        ],
        "route": [
                {
                        "target": "0.0.0.0",
                        "mask": 0,
                        "nexthop": "212.158.250.96",
                        "source": "0.0.0.0/0"
                }
        ],
        "dns-server": [
                "1.1.1.1",
                "1.0.0.1"
        ],
        "dns-search": [

        ],
        "neighbors": [

        ],
        "inactive": {
                "ipv4-address": [

                ],
                "ipv6-address": [

                ],
                "route": [

                ],
                "dns-server": [
                        "212.158.248.6",
                        "83.146.21.6"
                ],
                "dns-search": [

                ],
                "neighbors": [

                ]
        },
        "data": {

        }
}
root@OpenWrt:~# cat /etc/config/sqm

config queue 'eth1'
        option qdisc_advanced '0'
        option debug_logging '0'
        option verbosity '5'
        option qdisc 'cake'
        option script 'piece_of_cake.qos'
        option linklayer 'atm'
        option overhead '0'
        option interface 'pppoe-wan'
        option enabled '1'
        option upload '700'
        option download '2500'

Note: since my first reply I have set the download to 2500 as I've been attempting to find the best compromise between latency and throughput.

root@OpenWrt:~# tc -s qdisc
qdisc noqueue 0: dev lo root refcnt 2
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc mq 0: dev eth1 root
 Sent 214107485 bytes 771479 pkt (dropped 0, overlimits 0 requeues 1)
 backlog 0b 0p requeues 1
qdisc fq_codel 0: dev eth1 parent :8 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 4Mb ecn
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth1 parent :7 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 4Mb ecn
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth1 parent :6 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 4Mb ecn
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth1 parent :5 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 4Mb ecn
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth1 parent :4 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 4Mb ecn
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth1 parent :3 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 4Mb ecn
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth1 parent :2 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 4Mb ecn
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth1 parent :1 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 4Mb ecn
 Sent 214107485 bytes 771479 pkt (dropped 0, overlimits 0 requeues 1)
 backlog 0b 0p requeues 1
  maxpacket 113 drop_overlimit 0 new_flow_count 1 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc mq 0: dev eth0 root
 Sent 739506816 bytes 885905 pkt (dropped 0, overlimits 0 requeues 34)
 backlog 0b 0p requeues 34
qdisc fq_codel 0: dev eth0 parent :8 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 4Mb ecn
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :7 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 4Mb ecn
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :6 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 4Mb ecn
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :5 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 4Mb ecn
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :4 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 4Mb ecn
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :3 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 4Mb ecn
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :2 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 4Mb ecn
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :1 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 4Mb ecn
 Sent 739506816 bytes 885905 pkt (dropped 0, overlimits 0 requeues 34)
 backlog 0b 0p requeues 34
  maxpacket 1514 drop_overlimit 0 new_flow_count 8 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc noqueue 0: dev br-WRT_Guest root refcnt 2
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc noqueue 0: dev eth0.3 root refcnt 2
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc noqueue 0: dev br-lan root refcnt 2
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc noqueue 0: dev eth0.1 root refcnt 2
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc noqueue 0: dev eth1.2 root refcnt 2
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc cake 8039: dev pppoe-wan root refcnt 2 bandwidth 700Kbit besteffort triple-isolate nonat nowash no-ack-filter split-gso rtt 100.0ms atm overhead 0
 Sent 53439915 bytes 297041 pkt (dropped 2091, overlimits 180539 requeues 0)
 backlog 0b 0p requeues 0
 memory used: 111488b of 4Mb
 capacity estimate: 700Kbit
 min/max network layer size:           29 /    1492
 min/max overhead-adjusted size:       53 /    1696
 average network hdr offset:            0

                  Tin 0
  thresh        700Kbit
  target         26.0ms
  interval      121.0ms
  pk_delay       48.7ms
  av_delay        3.4ms
  sp_delay          5us
  backlog            0b
  pkts           299132
  bytes        56067873
  way_inds        27254
  way_miss        16308
  way_cols            0
  drops            2091
  marks               0
  ack_drop            0
  sp_flows            3
  bk_flows            1
  un_flows            0
  max_len         13320
  quantum           300

qdisc ingress ffff: dev pppoe-wan parent ffff:fff1 ----------------
 Sent 335851529 bytes 355897 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc mq 0: dev wlan0 root
 Sent 17978143 bytes 81482 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc fq_codel 0: dev wlan0 parent :4 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 4Mb ecn
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev wlan0 parent :3 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 4Mb ecn
 Sent 17977579 bytes 81478 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev wlan0 parent :2 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 4Mb ecn
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev wlan0 parent :1 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 4Mb ecn
 Sent 564 bytes 4 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc mq 0: dev wlan1 root
 Sent 36639830 bytes 111440 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc fq_codel 0: dev wlan1 parent :4 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 4Mb ecn
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev wlan1 parent :3 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 4Mb ecn
 Sent 36449619 bytes 108816 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 191 drop_overlimit 0 new_flow_count 1 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev wlan1 parent :2 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 4Mb ecn
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev wlan1 parent :1 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 4Mb ecn
 Sent 190211 bytes 2624 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc mq 0: dev wlan1-1 root
 Sent 396717 bytes 6549 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc fq_codel 0: dev wlan1-1 parent :4 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 4Mb ecn
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev wlan1-1 parent :3 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 4Mb ecn
 Sent 396717 bytes 6549 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev wlan1-1 parent :2 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 4Mb ecn
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev wlan1-1 parent :1 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 4Mb ecn
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc mq 0: dev wlan0-1 root
 Sent 395946 bytes 6545 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc fq_codel 0: dev wlan0-1 parent :4 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 4Mb ecn
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev wlan0-1 parent :3 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 4Mb ecn
 Sent 395946 bytes 6545 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev wlan0-1 parent :2 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 4Mb ecn
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev wlan0-1 parent :1 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 4Mb ecn
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc cake 803a: dev ifb4pppoe-wan root refcnt 2 bandwidth 2500Kbit besteffort triple-isolate nonat wash no-ack-filter split-gso rtt 100.0ms atm overhead 0
 Sent 300818603 bytes 330747 pkt (dropped 25150, overlimits 472383 requeues 0)
 backlog 0b 0p requeues 0
 memory used: 140864b of 4Mb
 capacity estimate: 2500Kbit
 min/max network layer size:           28 /    1492
 min/max overhead-adjusted size:       53 /    1696
 average network hdr offset:            0

                  Tin 0
  thresh       2500Kbit
  target          7.3ms
  interval      102.3ms
  pk_delay       20.7ms
  av_delay        6.5ms
  sp_delay         54us
  backlog            0b
  pkts           355897
  bytes       335851529
  way_inds        18945
  way_miss        14958
  way_cols            0
  drops           25150
  marks              12
  ack_drop            0
  sp_flows            1
  bk_flows            1
  un_flows            0
  max_len          1492
  quantum           300

Your overhead in Link Layer adaptation is set to 0. I'm not sure that's correct. I think you have to specify the actual overhead of the DSL link with PPPoE enabled, but you may want to search this forum on that topic.

2 Likes

Okay, you are right, that was incorrect. But I still do not accept that egress priority marking is technically required for a flow-queueing qdisc like cake. As I said, it is easy for the OP to test, and if layer_cake performs noticeably better I will change my opinion for his link.
A lot of the typical advice for setting up quality-of-service systems at home assumes basically a dumb FIFO qdisc; such advice is less important (though not unimportant) on a link with a traffic shaper and flow-queueing advanced queue management.

I do agree, though, that the OP's situation is dire, and my first-hand experience with faster links (the slowest I tried VoIP on was 6/1 Mbps) might not extrapolate well down to his bandwidth, in which case I guess all your great advice might become immediately applicable....

1 Like

Regarding overhead, 44 should be okay for typical ATM links, but you can use the method at https://github.com/moeller0/ATM_overhead_detector to actually measure the overhead on your link.

1 Like

I’ve never used or supported DSL, but this article may help.

I think it’s worth trying to lower the MTU to 1492 or even 1452 and specifying the Link Layer adaptation overhead as 26 bytes (8 for PPPoE and 18 for Ethernet). All the complexity of factoring in ATM cell padding is probably handled by CAKE automatically when you choose ATM.

2 Likes

Thanks. No, just the built-in switch on my WRT3200ACM, which is the OpenWrt device.

Thanks for all the great advice! Eternally grateful, I should be able to get it working as optimally as possible now at least until I get FTTP sometime next year.

Please set the overhead to 44. An overhead of 0 is certainly wrong....
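
For reference, this can be applied from the shell with uci (a minimal sketch, assuming the SQM section is named 'eth1' as in the /etc/config/sqm shown earlier in the thread):

```shell
# Set the per-packet overhead for the SQM instance and reload SQM.
# The section name 'eth1' matches the config queue 'eth1' stanza above.
uci set sqm.eth1.overhead='44'
uci commit sqm
/etc/init.d/sqm restart
```

The same pattern works for the upload/download rates if you want to script your compromise-hunting instead of using LuCI.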

How did I miss that?!?!?! Thanks for pointing that out!

Layered Cake is the only script available in OpenWrt that schedules RTP packets into the priority queue. I wish there were more scripts available, as I do not agree 100% with the way that video and voice are scheduled into the same priority queue, but frankly, in the environments that OpenWrt is designed for, Layered Cake is more than adequate. However, for RTP packets to be scheduled into the priority queue, DSCP must be properly marked. Otherwise, RTP packets will be handled in the “fair-queue” fashion alongside other small UDP-based packets, and they will be delayed or even dropped without any regard for being carriers of delay-sensitive voice.

1 Like

That is certainly an unusual recommendation for an ATM/AAL5 carrier with PPPoE; it is a better fit for VDSL2. The linklayer atm setting in SQM will only account for ATM/AAL5's cell quantization, but it will not assume any additional overhead. That is on purpose, as there are quite a few possible overhead configurations for ATM likely to be found in the field, so many, in fact, that the recommendation is either to configure the likely realistic maximum of 44 bytes or to empirically measure the overhead.
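
To make the cell quantization concrete: ATM carries everything in 53-byte cells with 48 payload bytes each, so every packet is rounded up to a whole number of cells. Here is a quick sketch of that arithmetic (the helper function is just for illustration); with overhead 0 it reproduces the 53 / 1696 "overhead-adjusted size" range that cake reports in the tc output above:

```shell
# ATM/AAL5 wire size: payload plus per-packet overhead is padded up to
# a whole number of 48-byte cell payloads; each cell is 53 bytes on the wire.
atm_wire_bytes() {
    local payload=$1 overhead=$2
    local cells=$(( (payload + overhead + 47) / 48 ))   # integer round-up
    echo $(( cells * 53 ))
}

atm_wire_bytes 1492 0    # max packet, overhead 0  -> 1696 (32 cells)
atm_wire_bytes 29 0      # min packet, overhead 0  -> 53 (1 cell)
atm_wire_bytes 1492 44   # with 44 bytes overhead  -> still 32 cells, 1696
```

Note how a 1492-byte packet needs 32 cells either way here; the extra overhead mostly bites on small packets and on packet sizes that sit just above a cell boundary.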

2 Likes

Good to know. Like I said I’ve never dealt with DSL and hope to never have to in the future.

I know; my argument is more that I am not convinced that, with cake/SQM, priority scheduling for VoIP is as necessary as it was in the past with other QoS schemes :wink:
I note that cake/fq_codel have a built-in small priority boost for sparse flows that will help VoIP along quite nicely, albeit not necessarily on links as slow as the OP's.

No, they will be dropped if they accumulate such that their queue's minimum delay stays above target for more than an interval's duration, independent of whether they are in the highest- or lowest-priority tin. What differs is that the likelihood of accumulating enough delay to merit a drop is inversely proportional to the tin's priority.

This is not the way that the strict priority queue works in Layered CAKE. With Layered CAKE there are no queues with different priorities. There’s one priority queue that has bandwidth limit strictly set. If the flow of packets scheduled into the strict priority queue exceeds its bandwidth limit, CAKE drops packets out of the strict priority queue on the tail-drop basis. This is actually very similar to the CBWFQ scheduler with a strict priority queue in Cisco MQC QoS.

All other queues (besides the strict priority queue) are managed more or less based on the weighted fair-queue principle with certain weights assigned to certain traffic flows.

The bottom line with Layered CAKE is that as long as the packet flows scheduled into the strict priority queue do not exceed the strict priority bandwidth limit, the packets are not delayed or dropped. They are dequeued into the interface tx-ring (which is a FIFO queue) with no regard to the delay occurring in all other queues (which are non-priority queues). And that is the reason that Layered CAKE provisions the priority queue with 25% of the total configured bandwidth so as not to cannibalize other queues.

No, that is not how cake operates. The priority tiers are not strictly rate limited with drop on overload.
And in each priority tier the codel derived marking dropping rules still apply. This is not your typical QoS scheme from the past....

1 Like

You are wrong. You should read up on CAKE and Layered CAKE.

And the secret sauce with CAKE and Layered CAKE over other non-CoDel-based QoS algorithms is the way the buffer-imposed delay is managed (the congestion-avoidance algorithm): based on packet delay rather than on the percentage of the provisioned queue length filled with enqueued packets. There’s no secret sauce in the packet scheduling itself when it comes to CAKE / Layered CAKE compared to advanced traditional QoS algorithms, such as Cisco’s CBWFQ.

The advantage of CAKE / Layered CAKE over FQ_CoDel is actually in the scheduling algorithm rather than in the congestion-avoidance algorithm, as the congestion avoidance is the same in CAKE as it is in FQ_CoDel, but the scheduling algorithm in FQ_CoDel is much less advanced and is comparable to Cisco’s WFQ (weighted fair queue).

The real breakthrough with CAKE, Layered CAKE, and FQ_CoDel is the congestion-avoidance algorithm; there’s no innovation in them when it comes to queue-scheduling routines, as they employ well-known queue-scheduling concepts. In fact, Layered CAKE is almost verbatim Cisco CBWFQ with a strict priority queue, except for one very important difference: the congestion-avoidance algorithm, which is WRED in Cisco’s CBWFQ and CoDel in Layered CAKE.

No need for shouting, please. I will have a look at the source code again later, but I am not convinced that your claim is valid, sorry.
BTW, the script is called layer_cake.qos, not layered CAKE... and I am happy to read it again, but I do not expect to be surprised; after all, I initially wrote that script.

Where did you see me shout?

Sorry, I mistook your uppercasing cake for shouting.

Yes, cake leverages a form of deficit round robin (DRR). But that will not guarantee that packets in the priority tin never encounter queueing delay, even if the rate stays below 25% of the total. Just think of bursty senders that admit a bunch of packets at the same time, say 4: packets 2-4 will have to wait until packet 1 is sent, and there comes a point when the lower-priority tins get serviced (otherwise the highest-priority tin would get 100% of the bandwidth). In reality, rate and latency are not orthogonal to each other, so limiting one will have effects on the other, but I digress.

That is not in cake, but needs to be in the endpoints that exchange packets. What codel brought to the table is a marking/dropping strategy that fits well with how typical TCP responds to congestion signals, but the interpretation of the signal (the congestion-avoidance algorithm) is not part of cake.

I would appreciate links to cisco's code to help me understand the similarity, please.

Mmmh, I fail to find that in https://github.com/dtaht/sch_cake/blob/master/sch_cake.c; what I see is:

    A Diffserv-aware priority queue, giving more priority to certain classes,
    up to a specified fraction of bandwidth. Above that bandwidth threshold,
    the priority is reduced to avoid starving other tins.

and

    The priority queue operates according to a weighted DRR scheme, combined with
    a bandwidth tracker which reuses the shaper logic to detect which side of the
    bandwidth sharing threshold the tin is operating. This determines whether a
    priority-based weight (high) or a bandwidth-based weight (low) is used for
    that tin in the current pass.

and the matching code. But it is possible that I overlooked something in the code, so please feel free to point out where in cake's code you see the "if the flow of packets scheduled into the strict priority queue exceeds its bandwidth limit, CAKE drops packets out of the strict priority queue on the tail-drop basis."