QoS in lockdown

Gah... this is a pain to be doing on my phone..

In case I borked up the link, here's my two lines for the advanced option strings that work well for me and my cable connection:
nat dual-dsthost ingress ack-filter mpu 64
nat dual-srchost ack-filter mpu 64

Those improve how cake handles different traffic flows. And, of course, this applies to basic cake optimization, not more advanced tagging. dlakelan is an expert, so you're in good hands there... :wink:
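
For anyone who prefers the command line: those strings are just extra keywords appended to cake's setup, so the rough tc equivalent would look something like this (interface names and rates here are placeholders; SQM normally builds these commands for you):

# egress shaper on the WAN interface (example name/rate)
tc qdisc replace dev eth1.2 root cake bandwidth 18Mbit nat dual-srchost ack-filter mpu 64
# ingress shaping goes on the ifb device SQM creates for the WAN interface
tc qdisc replace dev ifb4eth1.2 root cake bandwidth 200Mbit nat dual-dsthost ingress ack-filter mpu 64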


If you use piece of cake everything will be treated equally; in that case I think it's likely you'll see your video conferences suffer somewhat. It depends on what you decide to do: if you can identify the other interactive apps you can keep layer cake and put them all at moderate priority, say DSCP tag CS4. Then your interactivity will be preserved and only the less interactive stuff will suffer, which is usually what you want.
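
If you go the layer_cake + DSCP route, one hypothetical way to tag an app you have identified is a mangle rule on the router; the port range and interface below are placeholders (on OpenWrt the DSCP target may also need the iptables-mod-ipopt package):

# example only: tag outbound UDP for an interactive app with DSCP CS4 as suggested above
# (replace the ports and interface with whatever app/WAN interface you actually have)
iptables -t mangle -A POSTROUTING -o eth1.2 -p udp --dport 3478:3481 -j DSCP --set-dscp-class CS4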

@JonP
It does deliver more. I get 220 down and 21 up without anything, using the Speedtest app on the PC.
Have not tried anything in the dangerous section.

@dlakelan
When I had DD-WRT running and I rate limited the offending devices, they never really had any issues. The video quality would be lower, but there would not be any issues with them as a whole.
Would really like to have some sort of queueing that says “anything that is coming from this device and a connection initiated by this device gets priority over any other packet on the network”. Preferably, if Teams is using DSCP, then I would like some way for the QoS to ignore those types of packets and not give them any priority at all.

If you want everything at equal priority use cake with piece of cake. See how that goes.

Also if there are manual quality settings you might keep your current setup and just turn down the quality settings. That will have the video conf use less bandwidth but still get rock solid performance.

OK... I have a similar cable situation where, on a good day or hour, I get up to 350/32 on my "300/30" connection. But I also need to bring it down to 300-320 on the ingress/download and maybe 28-30 on the egress/upload before I get consistently low latency. It's worth playing around and looking closely at the latency graphs.

There was a long post a while back on setting up and using the DSLReports Speedtest, which is still the best tool for this kind of tuning. Enabling the high-speed bloat graph and digging into the details tells you a lot about what's going on. There had been usability problems with it, but it seems to be usable now, though you have to use http, and you may have to trial-and-error which servers it picks out of the list. If one isn't fully functional, the test will not work.
https://forum.openwrt.org/t/sqm-qos-recommended-settings-for-the-dslreports-speedtest-bufferbloat-testing/2803

Looking at the detailed, 10-per-second latency results hidden in the graph makes it easy to find the threshold at which the latency on your link is fully managed. I get a lot of tiny bursts of high latency, or sometimes ramps. Set the speed far enough below that "noise" level and they all pretty much flatten out. It's hard to see otherwise.

And I would highly recommend the above-mentioned "dangerous" Advanced Option strings that enable the per-host behavior, as well as a few other things that cake can do but that need to be enabled. The per-host fairness behavior might help your scheduling somewhat.

Could you post the output of tc -s qdisc from before and after a Teams session, please? cake collects and reports statistics for the different priority tins, and that should allow us to figure out whether DSCPs are in play.


Hey all,
Sorry for the late reply. The forum does not allow more than 22 posts, I think, for a new user within a 24-hour period. Really odd...

Anyways just want to thank everyone for their help.
I think I got it sussed with all your help and a little bit more reading online.

This is what I put in place and so far it has been pretty solid the whole day:

  • I set the download speed to 200 Mbit/s and the upload to 18 Mbit/s (I think lowering the upload helped more than lowering the download)
  • Changed the queue script to piece_of_cake. I noticed high CPU load when using layer_cake; not so high that it was causing router issues (about 0.5 to 0.7 for 1-minute load averages), though I do think this contributed in some way.
  • I set the following as per JonP:
    ingress: nat dual-dslhost ingress ack-filter mpu 64
    egress: nat dual-srchost ack-filter mpu 64
  • Link Layer Adaptation set to Ethernet, overhead 22
  • I also changed my MTU on the WAN side to 1478, as I was seeing that the max packet size was above 1500 when doing a tc -s qdisc. So to alleviate possible fragmentation I dropped it down to 1478; now it shows the max as being 1500. (A rough /etc/config/sqm sketch of all this follows below.)
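
For reference, the list above maps roughly onto /etc/config/sqm like this (a sketch using the sqm-scripts option names, not copied from the router; the WAN interface name is just an example):

config queue 'wan'
        option enabled '1'
        option interface 'eth1.2'      # WAN interface (example name)
        option download '200000'       # ingress, kbit/s
        option upload '18000'          # egress, kbit/s
        option qdisc 'cake'
        option script 'piece_of_cake.qos'
        option linklayer 'ethernet'
        option overhead '22'
        option qdisc_advanced '1'
        option qdisc_really_really_advanced '1'
        option iqdisc_opts 'nat dual-dsthost ingress ack-filter mpu 64'
        option eqdisc_opts 'nat dual-srchost ack-filter mpu 64'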

I have done a test where I am uploading a large file and checking the latency for Horizon. If I did nothing on my session I could see the latency jump up, but as soon as I started working and moving about the screen the latency dropped to where it should be, even while the upload from home was going on.

So I think we are on to a winner here.

Will leave it with these settings and hope that it stays constantly low(ish).

If anyone has any other tweaks I could put in, that would be great. Anything to get the best performance I can.

Thanks all once again.


Everything sounds fine except this bit (the MTU change). I'm not sure what the concept is here, and I don't think it's necessary. But if you have evidence that it helps... go for it. Still, I think it's probably not needed.

As long as your interactive communications stay sufficiently solid for your desired level of solid, then you're all good. If you find that your video conf or audio conf stuff is problematic/garbled etc., then there's probably more you can do, but test for a while and come back if you perceive problems; otherwise sounds good!

Does the WRT3200ACM support HWNAT?

To be honest, SQM has only ever partially worked for me. I ended up getting an EdgeRouter X to do the heavy routing and QoS as its own device, while letting my wifi access points act as managed switches.

I hope this is just a typo here in the forum; it should be:
dual-dsthost
(dst is short for destination.)

That should not be an issue, as cake will dissect meta-packets (from GSO or GRO) into their individual MSS-sized segments; this is one thing other qdiscs tend not to do... IMHO the reduced MTU effectively disables GRO on the other interface and is hence analogous to using ethtool to disable GRO/GSO on all interfaces. BUT these meta-packets actually help the network stack save work (e.g. the routing lookup is only needed once per meta-packet instead of once for each constituent packet).
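
For reference, the offloads themselves can be inspected (and, if you really wanted to, switched off) per interface with ethtool instead of shrinking the MTU; the interface name is just an example:

# show the relevant offload settings on one interface
ethtool -k eth1 | grep -E 'generic-(segmentation|receive)-offload'
# turn them off explicitly (not really recommended, per the note above about meta-packets saving work)
ethtool -K eth1 gso off gro off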

It was a typo; it is set as dual-dsthost.


Does cake have a switch to disable this? If you're operating at high speeds it seems like keeping them in big chunks could be fine. Even a 30 KB packet only takes 0.4 ms to send at 600 Mbps, for example. The dissection is absolutely great at slower speeds of course; a 600 Kbps DSL line would need 400 ms to swallow that.
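
A quick back-of-the-envelope check of those numbers (taking 30 KB as 30,000 bytes):

awk 'BEGIN { bits = 30 * 1000 * 8                       # one 30 KB super-packet
             printf "at 600 Mbit/s: %.1f ms\n", bits / 600e6 * 1000
             printf "at 600 kbit/s: %.1f ms\n", bits / 600e3 * 1000 }'
# -> at 600 Mbit/s: 0.4 ms
# -> at 600 kbit/s: 400.0 ms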


Yes: split-gso and no-split-gso. From man tc-cake:

       split-gso
            This option controls whether CAKE will split General Segmentation Offload (GSO) super-packets into their
            on-the-wire components and dequeue them individually.

            Super-packets are created by the networking stack to improve efficiency. However, because they are larger
            they take longer to dequeue, which translates to higher latency for competing flows, especially at lower
            bandwidths. CAKE defaults to splitting GSO packets to achieve the lowest possible latency. At link speeds
            higher than 10 Gbps, setting the no-split-gso parameter can increase the maximum achievable throughput by
            retaining the full GSO packets.

I think that cake defaults to split-gso unless the shaper speed is set to >= 1 Gbps, but I can't seem to find that in cake's tc code, because it actually lives in the kernel's sch_cake.c:

#define CAKE_SPLIT_GSO_THRESHOLD (125000000) /* 1Gbps */
[...]
        if (q->rate_bps && q->rate_bps <= CAKE_SPLIT_GSO_THRESHOLD)
                q->rate_flags |= CAKE_FLAG_SPLIT_GSO;   /* bit-wise OR: enable splitting */
        else
                q->rate_flags &= ~CAKE_FLAG_SPLIT_GSO;  /* bit-wise AND with the complement: disable splitting */

Yes, that is why by default GSO splitting only happens if the rate is set and <= 1 Gbps.
One of the reasons to do it up to 1 Gbps IIRC is that GSO introduces a lumpiness that makes fairness guarantees much weaker than one would like...

There once was the idea of splitting only packets that exceed a certain serialization time (so allowing small super-packets on links slower than 1 Gbps), but that got replaced by the simple <= 1 Gbps test and the ability for the user to override it via the split-gso/no-split-gso keywords.

I know the comment says 1Gbps but doesn't the number say 125 Mbps? Or maybe it's in Bytes! That would make sense.
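
If that constant is in bytes per second (which would make the _bps suffix a bit of a misnomer), the comment and the number do line up:

# 125,000,000 bytes/s * 8 bits/byte = 1,000,000,000 bit/s = 1 Gbit/s
echo $(( 125000000 * 8 ))
# -> 1000000000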

So it has been running quite well for about a day or so, though I do see a little lag at times, but not as bad as before.

Here is the output of tc -s qdisc:

qdisc noqueue 0: dev lo root refcnt 2
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc mq 0: dev eth1 root
 Sent 23707657607 bytes 34070060 pkt (dropped 0, overlimits 0 requeues 369)
 backlog 0b 0p requeues 369
qdisc fq_codel 0: dev eth1 parent :8 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 4Mb ecn
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth1 parent :7 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 4Mb ecn
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth1 parent :6 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 4Mb ecn
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth1 parent :5 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 4Mb ecn
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth1 parent :4 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 4Mb ecn
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth1 parent :3 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 4Mb ecn
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth1 parent :2 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 4Mb ecn
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth1 parent :1 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 4Mb ecn
 Sent 23707657607 bytes 34070060 pkt (dropped 0, overlimits 0 requeues 369)
 backlog 0b 0p requeues 369
  maxpacket 1506 drop_overlimit 0 new_flow_count 423 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc mq 0: dev eth0 root
 Sent 20788674660 bytes 27923981 pkt (dropped 1, overlimits 0 requeues 690)
 backlog 0b 0p requeues 690
qdisc fq_codel 0: dev eth0 parent :8 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 4Mb ecn
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :7 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 4Mb ecn
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :6 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 4Mb ecn
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :5 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 4Mb ecn
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :4 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 4Mb ecn
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :3 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 4Mb ecn
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :2 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 4Mb ecn
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev eth0 parent :1 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 4Mb ecn
 Sent 20788674660 bytes 27923981 pkt (dropped 1, overlimits 0 requeues 690)
 backlog 0b 0p requeues 690
  maxpacket 1514 drop_overlimit 0 new_flow_count 871 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc mq 0: dev wlan0 root
 Sent 18282088236 bytes 18630490 pkt (dropped 0, overlimits 0 requeues 4)
 backlog 0b 0p requeues 4
qdisc fq_codel 0: dev wlan0 parent :4 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 4Mb ecn
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev wlan0 parent :3 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 4Mb ecn
 Sent 18282005026 bytes 18630269 pkt (dropped 0, overlimits 0 requeues 4)
 backlog 0b 0p requeues 4
  maxpacket 1492 drop_overlimit 0 new_flow_count 1 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev wlan0 parent :2 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 4Mb ecn
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev wlan0 parent :1 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 4Mb ecn
 Sent 83210 bytes 221 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc mq 0: dev wlan1 root
 Sent 10559481709 bytes 14929346 pkt (dropped 0, overlimits 0 requeues 1)
 backlog 0b 0p requeues 1
qdisc fq_codel 0: dev wlan1 parent :4 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 4Mb ecn
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev wlan1 parent :3 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 4Mb ecn
 Sent 10559425256 bytes 14929043 pkt (dropped 0, overlimits 0 requeues 1)
 backlog 0b 0p requeues 1
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev wlan1 parent :2 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 4Mb ecn
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev wlan1 parent :1 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 4Mb ecn
 Sent 56453 bytes 303 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc mq 0: dev wlan1-1 root
 Sent 21378310 bytes 296999 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc fq_codel 0: dev wlan1-1 parent :4 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 4Mb ecn
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev wlan1-1 parent :3 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 4Mb ecn
 Sent 18929735 bytes 281059 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev wlan1-1 parent :2 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 4Mb ecn
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: dev wlan1-1 parent :1 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 4Mb ecn
 Sent 2448575 bytes 15940 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc cake 8019: dev eth1.2 root refcnt 2 bandwidth 18Mbit besteffort dual-srchost nat nowash ack-filter split-gso rtt 100.0ms noatm overhead 22 mpu 64
 Sent 18793606546 bytes 26923348 pkt (dropped 393147, overlimits 32742965 requeues 0)
 backlog 0b 0p requeues 0
 memory used: 896192b of 4Mb
 capacity estimate: 18Mbit
 min/max network layer size:           24 /    1478
 min/max overhead-adjusted size:       64 /    1500
 average network hdr offset:           14

                  Tin 0
  thresh         18Mbit
  target          5.0ms
  interval      100.0ms
  pk_delay        1.9ms
  av_delay        301us
  sp_delay         14us
  backlog            0b
  pkts         27316495
  bytes     18831423273
  way_inds      1370071
  way_miss       262223
  way_cols            0
  drops            7827
  marks              63
  ack_drop       385320
  sp_flows            2
  bk_flows            2
  un_flows            0
  max_len         22290
  quantum           549

qdisc ingress ffff: dev eth1.2 parent ffff:fff1 ----------------
 Sent 39135446265 bytes 46568609 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc noqueue 0: dev br-lan root refcnt 2
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc noqueue 0: dev eth0.1 root refcnt 2
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
qdisc cake 801a: dev ifb4eth1.2 root refcnt 2 bandwidth 200Mbit besteffort dual-dsthost nat wash ingress ack-filter split-gso rtt 100.0ms noatm overhead 22 mpu 64
 Sent 40241081723 bytes 46567472 pkt (dropped 1137, overlimits 30050995 requeues 0)
 backlog 0b 0p requeues 0
 memory used: 495053b of 10000000b
 capacity estimate: 200Mbit
 min/max network layer size:           46 /    1494
 min/max overhead-adjusted size:       68 /    1516
 average network hdr offset:           14

                  Tin 0
  thresh        200Mbit
  target          5.0ms
  interval      100.0ms
  pk_delay        395us
  av_delay        147us
  sp_delay          2us
  backlog            0b
  pkts         46568609
  bytes     40241442063
  way_inds      2722737
  way_miss       259198
  way_cols            0
  drops             212
  marks               3
  ack_drop          925
  sp_flows            7
  bk_flows            2
  un_flows            0
  max_len         58432
  quantum          1514

Does it look OK? Any more advice to tweak this?

Thanks once again

These two snapshots look okay to me, except that I would recommend only using the ACK filter on upload/egress and not on ingress, but that should not have much consequence either way.
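
In other words, the ingress advanced option string would become something like:
nat dual-dsthost ingress mpu 64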

Ok, I have changed that now and applied it.

All those dropped packets on the egress... is it normal to have such a high amount?

Also, do you suggest I change my MTU back to 1500?

Well, that is how TCP works: it will increase its sending-side window of acceptable not-yet-acknowledged segments (the congestion window) until it sees signs of having reached/exceeded the network path's capacity, which typically are packet drops (reported to the sender via the reversely flowing ACK packets from the receiver). Traditionally these drops happen because the buffers of the bottleneck are overfull and hence that node drops all packets that arrive until there is again room in its egress queue; AQMs like cake or fq_codel instead selectively drop packets from individual flows before all buffer/queue space is exhausted, so overall they might generate a slightly higher rate of drops (but not by much; drops are normal for TCP flows, AQM or no AQM).
The one alternative would be to configure your computers such that they try to negotiate Explicit Congestion Notification (ECN) with the servers (this will only work if the servers support it, but many servers on the internet will use ECN if the client actually requests it). In that case fq_codel/cake will just mark a packet with a Congestion Experienced (CE) mark that is then transmitted via the ACK reverse flow from receiver to sender and causes the same reduction in the congestion window as a drop would have done. Please note that when push comes to shove and fq_codel/cake are absolutely swamped with packets, both will not bother with CE marking but drop directly, because an ECN-marked packet still takes up space in the queue, so to dig out of absolute overload dropping is the only realistic option.
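
On a Linux client that negotiation is controlled by a single sysctl, for example:

# 0 = never use ECN, 1 = request ECN on outgoing connections and accept it on incoming ones,
# 2 = accept ECN only when the peer requests it (the usual default)
sysctl -w net.ipv4.tcp_ecn=1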