CAKE w/ Adaptive Bandwidth [October 2021 to September 2022]

UPDATE: one outcome of this thread has been the development of a bash script, config file and associated service file for automatically adjusting CAKE bandwidth in the context of variable rate connections such as LTE or Starlink, which is available here:

There is also a Lua-based approach available here:

If you use either script then I have just one ask: please can you provide your feedback on this thread and/or on the respective GitHub pages. Such feedback motivates and helps shape further development work. After all, these are maintained on a purely voluntary basis by enthusiasts.

There are a couple of further approaches in development. If the author of any such approach would like theirs mentioned on this post then please let me know.

Original post is below (and the figure above demonstrates the issue). Happy reading!


So from browsing these forums it is evident that CAKE's autorate-ingress is a feature that is highly sought after. It promises automatic bandwidth determination for the many of us who have connections with a fluctuating bandwidth. From the man page:

autorate-ingress
Automatic capacity estimation based on traffic arriving at this qdisc. This is most likely to be useful with cellular links, which tend to change quality randomly. A bandwidth parameter can be used in conjunction to specify an initial estimate. The shaper will periodically be set to a bandwidth slightly below the estimated rate. This estimator cannot estimate the bandwidth of links downstream of itself.

Sadly it is also evident from these forums that it doesn't work. I have tested this on my own 4G connection without success. It ramps up slowly to a point where I see a lot of bufferbloat. Happy to test further if I should be trying it in conjunction with different settings (could my packet overhead be set too low - presently 70 to cover 60 for WireGuard and 10 for LTE overhead?), but I do not think I have come across even just one positive experience with this feature.
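For concreteness, the kind of configuration I have been testing looks something like this (device names and rates are purely illustrative; only the overhead value matches my setup):

```shell
# Illustrative only: download shaping via an IFB device with
# autorate-ingress enabled, an initial bandwidth estimate, and the
# 70 byte overhead mentioned above. "ifb4-wan" is a placeholder for
# whatever ingress device your setup uses.
tc qdisc replace dev ifb4-wan root cake bandwidth 30Mbit autorate-ingress \
    overhead 70 ingress
```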

Overall CAKE seems to work very well, but the issue of bandwidth determination remains highly problematic, and for some individuals is even a reason to forego CAKE altogether.

Might the developers be convinced to make it work for those of us on such connections?

As with presumably any LTE connection, my own 4G connection fluctuates heavily. Mostly there is a stable 30Mbit/s component and then anything between 0-40Mbit/s on top of that. Setting bandwidth to 30Mbit/s is mostly OK, but in the evenings I still see some bufferbloat, presumably because of congestion at my ISP / cell tower.

In summary I sacrifice a huge amount of otherwise perfectly usable bandwidth for a solution that works only most of the time. This is obviously not very satisfactory. Anyone on a variable rate connection will face the same issue.

Otherwise I see various DIY attempts have been made to address this interesting issue.

These can be divided into two categories.

Firstly, bandwidth is tweaked by monitoring latency. For example ping tests are carried out and used to increase bandwidth until the ping increases. @dlakelan wrote a script for this:

Secondly, an attempt is made to determine the maximum bandwidth by saturating the connection.

A crude implementation is to disable CAKE, run a bandwidth test, and then use the output from that bandwidth test to inform CAKE.

A better approach is however excellently summarised in this post by @richb-hanover-priv here:

The idea is that you disable SQM and then just run a speed test to saturate the connection. You don't care about the output from the speed test; it is just to ensure that the connection is totally saturated. You then compare the interface counters (bytes/packets transferred) before and after to determine the max bandwidth of the line. The nice thing about this is that it helps reduce interference with normal traffic since you are just filling in the gaps around ordinary traffic. I suppose a downside with this approach however is that it will still result in temporary bufferbloat. So it would presumably interfere with a Zoom call.
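A minimal sketch of that counter-comparison idea (the sysfs path is the standard Linux one; the function names and the helper split are my own invention):

```shell
#!/bin/sh
# Sketch of the counter-delta approach: sample the interface byte
# counter before and after a saturating transfer, then derive the
# achieved rate from the difference.

# Pure helper: rate in Kbit/s from two byte counts and an interval.
rate_kbps() {
    # $1 = bytes before, $2 = bytes after, $3 = seconds elapsed
    awk -v a="$1" -v b="$2" -v t="$3" \
        'BEGIN { printf "%d", (b - a) * 8 / t / 1000 }'
}

measure_iface_rate() {
    # $1 = interface name (e.g. wan), $2 = sample interval in seconds
    before=$(cat "/sys/class/net/$1/statistics/rx_bytes")
    sleep "$2"
    after=$(cat "/sys/class/net/$1/statistics/rx_bytes")
    rate_kbps "$before" "$after" "$2"
}
```

Run `measure_iface_rate wan 10` while the speed test is saturating the link and the result is (roughly) the achievable download rate.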

@Bndwdthseekr wrote a script based on this approach - see here:

My question is: which of the above two approaches is best? I think I favour the former on a theoretical level because it presumably offers reduced interference with normal traffic. But personally I would favour a simple and elegant bash script, whereas the Erlang script looks a bit complicated.

I'd like to try such a simple and elegant script even if it means writing my own.

@Bndwdthseekr what did you settle on in the end?

@moeller0 I'd love to have your thoughts on the above.

7 Likes

Well, variable rate links without proper AQM or back-pressure are just nasty. For transmission, things like BQL for Ethernet and AQL for WiFi demonstrate that even with variable rates buffers can be kept smallish automatically, but that requires integration into the variable rate link's driver. For reception things are even nastier and the best we can do is heuristics. I never tried autorate-ingress since I am using a fixed rate link myself (the problem I see is that cake tries to determine the rate from the inter-packet delay of incoming packets, but if the inter-packet delay is large, is this because bandwidth is low, or just because one of the senders has little data to send?).
Out of the other approaches I consider "monitor latency" the better one, but I note that this will only work after the fact, so you can expect at least some level of bufferbloat to appear transiently. Not a big issue if the rate of bandwidth fluctuation is small compared to the "latency sampling" frequency of the heuristic, but at the latest once bandwidth changes happen faster than the sampling frequency things will become unpleasant (maybe some hysteresis can help).
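To make the hysteresis idea concrete, a toy version of such a latency-feedback controller might look like this (a sketch only; the function name, thresholds, and step sizes are all invented for illustration):

```shell
#!/bin/sh
# Toy latency-feedback controller with a hysteresis band: back off
# hard when RTT is above the high threshold, creep up when below the
# low threshold, and leave the rate alone in between.
adjust_rate() {
    # $1 = current rate (Kbit/s), $2 = measured RTT (ms),
    # $3 = low RTT threshold, $4 = high RTT threshold,
    # $5 = rate floor, $6 = rate ceiling (Kbit/s)
    rate=$1
    if [ "$2" -gt "$4" ]; then
        rate=$(( rate * 80 / 100 ))     # punish bufferbloat quickly
        [ "$rate" -lt "$5" ] && rate=$5
    elif [ "$2" -lt "$3" ]; then
        rate=$(( rate + $6 / 50 ))      # recover bandwidth slowly
        [ "$rate" -gt "$6" ] && rate=$6
    fi
    echo "$rate"
}
```

A real script would feed the result into `tc qdisc change ... cake bandwidth ${rate}Kbit` on each iteration; the band between the two thresholds is what keeps the shaper from oscillating on every noisy ping sample.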

In short I do not envy you... the proper solution, debloating the LTE base station and modem, seems infeasible, and the other options are work-arounds that can ameliorate the situation but not reliably fix it. That said, I can see that a work-around might still be attractive even if it only works most of the time.

2 Likes

Insightful, thank you!

Any idea why autorate-ingress increases bandwidth to a level that is too high and results in bufferbloat for me? It really feels like this feature could do with some further development, no?

And since bandwidth determination is so fundamental to CAKE it would seem CAKE could be significantly improved in this respect.

Although naturally I understand that we are at the mercy of what the developers choose to do with their own free time!

As far as I can tell the feature works for its designer, but as I said I never tried it myself. Looking at it, I would have expected a UI in which one could configure both an upper and a lower limit... If packets come in smoothly it probably works, but I guess it will fail if packets are delivered in bursts, because then estimating the instantaneous rate from the inter-packet interval will over-estimate the true available capacity.
But, if you feel up for it, have a look at sch_cake in the Linux kernel, maybe you see some ways to improve it, I am sure a lot of users would be happy.

Sure, but how?

No, the beauty of open source is that you can just have a go at it in your own time, and if you have a proof of principle I am sure you would find some help getting it upstream :wink:

4 Likes

What about borrowing an idea from Gargoyle's Active Congestion Controller?

It monitors latency and tries to control it under a threshold by adjusting TC rates and parameters.
It takes the "maximum rate" as user input and then controls this value between 15 and 100%. If you have a link that varies outside these parameters it can't help.
The same theory could be applied to CAKE, I would assume?

2 Likes

I really like this idea because I have read many positive reports on this forum about this solution. In fact one user even withheld from switching to OpenWrt purely because of this - see:

So this is surely a promising idea?

Perhaps it could be introduced into CAKE as a replacement or alternative for autorate-ingress?

1 Like

I think @dlakelan's Erlang script pretty much takes this idea and runs with it. IIRC it pings multiple known well-connected targets and will tolerate individual servers responding slower (by some sort of voting/averaging).

One of the challenges is that ICMP echo requests and responses ('pings') only measure round trip time, while for bi-directional shaper set-ups one would want the RTT dissected into its constituent one-way delays... I believe irtt is a tool that allows measuring one-way delays, but it requires properly time-coordinated endpoints (and that probably means you will have to rent a server/virtual server in a nearby and well-connected datacenter to have a reliable remote measurement endpoint)...

Nope, it relies on sending probe packets to some upstream server, which as far as I can tell is fully out of scope for a kernel qdisc, sorry.
Autorate-ingress for all its warts has the advantage that it simply passively observes properties of packets it has to handle anyway. But that comes at the cost of not being able to actively probe for hallmarks of bufferbloat.

1 Like

Dumb question time. I see CAKE in my simple visually-oriented mind as follows. There is a pipe and CAKE manages the flow through the pipe by ensuring the flow never fully saturates the pipe. So there is a nice little gap between the diameter of the flow and the diameter of the pipe, this gap ensuring a nice smooth flow owing to lack of friction. It prioritises traffic within the flow. By contrast, when the pipe is saturated, too much tries to get squeezed into the pipe and the flow gets messed up and buffers increase, etc. Can CAKE itself not see when the pipe is getting jammed and thus iteratively increase or decrease the width of the pipe?

I think what I mean to say is does the way CAKE works not include something that can be worked with, or does the CAKE algorithm work based on a static bandwidth parameter that must be set? I am not stating this very well, but hopefully you catch my drift.

If the CAKE algorithm requires a bandwidth parameter, then bandwidth determination is a sort of add-on to get CAKE to work properly.

If the CAKE algorithm can see whether there is jamming, then bandwidth control can be part of the CAKE algorithm itself. In this latter case the fix would be better put in the CAKE algorithm rather than in an add-on.

We need to differentiate between uplink and downlink here:

Uplink: here cake either uses its own traffic shaper to admit only as much data over time as the user configured, and if that rate stays below the real interface's true link speed the interface buffers will stay mostly empty. So exactly what we want. As an alternative to this costly traffic shaper, cake will also work IFF the interface it is configured on creates "back-pressure" by signaling that it will not accept any more packets (assuming the interface's buffers are well managed, be it by BQL or AQL or similar techniques).
So in a sense the back-pressure is seeing that the "pipe" would be overloaded allowing cake to withhold sending packets for a bit (packets in cake's queue will still have growing sojourn times and hence the AQM component will do the right thing).

Download: here things get tricky, because cake sits on the wrong end of the bottleneck, and the only available sign of the pipe getting jammed is that packets start to pile up on the remote end. But how would cake, sitting at the near end, ever be able to figure out whether packets are piling up on the other end or not? So there is no real back-pressure for cake to operate on.

Just as a side note, before SQM was started, most people (but by no means all) on the internet assumed that traffic shaping would only work for the upload/egress direction. And they are right in that download shaping is a bit more approximate than upload shaping. Cake's ingress keyword helps a bit in that cake will then aim to keep its ingress at the configured rate as compared to its egress, effectively making cake's shaping aggressiveness scale with the load. Clever, but it does not help on variable rate links by itself...

Yes, as I tried to explain, if cake sees back-pressure it can do the right thing even without an explicitly configured shaper rate, but your main problems are:
a) the LTE-modem probably does not generate back-pressure in the send direction (even though theoretically it could)
b) the base station will make its own decisions how much of your traffic to send at what time, with no chance of letting your ingress cake instance know...

2 Likes

I don't entirely understand this - could you elaborate? Here it is stated:

Most notably, this counts drops as data transferred, making ingress shaping more accurate, since packets will have already traversed the link before Cake gets to choose what to do with them.

Surely there is something that can be seen at the near end that is indicative of bufferbloat that CAKE has visibility of in terms of the way the flows pass through?

Can CAKE not identify periodic flows and then observe that the time gaps between those periodic flows expand, indicating bufferbloat? Or some other phenomenon that could be used to determine the pipe is getting overly stuffed? Isn't there some effect on flows that can be monitored without having to actually put through 'marker flows'? Of course putting in marker flows is no issue if it doesn't affect the pipe very much.

As a further thought, in the context of LTE is there not some identifier in a packet that indicates the bandwidth or similar? Or would that get stripped out by the modem? In my modem page I see:

Yes, so in normal mode cake will make sure that the gross rate of packets it sends out (so packets that have traversed cake's shaper component) is <= the configured gross shaper rate. In ingress mode it will make sure that the gross rate of packets entering cake is <= the configured gross shaper rate. The difference between these is the number of packets that cake had to drop. Does that make sense?
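As a numeric illustration of that difference (the numbers and the helper function are invented; rates in Kbit/s):

```shell
#!/bin/sh
# Suppose the shaper is set to 30000 Kbit/s and cake is currently
# dropping 1000 Kbit/s worth of packets.
# Normal mode:  cake *emits* 30000, so 31000 actually crossed the
#               bottleneck link upstream of us (emitted + dropped).
# Ingress mode: cake keeps *arrivals* at 30000, so it emits only the
#               configured rate minus what it dropped, keeping the
#               true link load at the configured value.
emitted_rate() {
    # $1 = configured gross rate, $2 = dropped rate (same units)
    echo $(( $1 - $2 ))
}
```

So in ingress mode the link itself never carries more than the configured rate, which is exactly what you want on the download side.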

Well, what? Autorate-ingress IIRC assumes that the inter-packet delay (IPD) (or probably each packet's instantaneous rate) is diagnostic of the transmission rate. For a smooth medium that seems like a reasonable heuristic, but it will fail for bursty links... sure, you could think about smoothing the measured IPDs, but then you still need to know what temporal smoothing kernel to use...
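One simple candidate for such smoothing would be an exponentially weighted moving average over the per-packet rate estimates; a throwaway awk sketch (the function name and the choice of alpha = 1/8 are arbitrary, and this only illustrates the smoothing itself, not anything cake actually does):

```shell
#!/bin/sh
# Smooth a stream of per-packet rate estimates (one value per line on
# stdin) with an EWMA, alpha = 1/8: each new sample moves the running
# average one eighth of the way toward itself.
ewma() {
    awk '{ avg = (NR == 1) ? $1 : avg + ($1 - avg) / 8
           printf "%.1f\n", avg }'
}
```

Even with smoothing, the open question moeller0 raises remains: how wide a window to smooth over before the estimate lags real capacity changes.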

How do you, from cake's intermediate position in the network, figure out whether any gaps in the packet sequence are caused by congestion or simply by the remote sender not having sent anything at that exact point in time? Or by a bursty link just having changed the precise inter-packet delay without having changed the average rate?

No idea; sure, modem and base station talk to each other (for example the modem needs clearance from the base station before it transmits data), but I do not know whether the base station lets the modem know about the instantaneous sending rate. The modem might know about the coding scheme, but how could it know for how long the base station is going to keep sending packets and whether the RF environment stays sufficiently stable to maintain the coding scheme...

In other words I am out of my league here, maybe it's time to bring in LTE experts :wink:

1 Like

Many thanks indeed @moeller0. I will give this some careful thought. I always so enjoy reading your posts. Sorry about my ignorant posts and I appreciate your patience.

In the meantime, perhaps some others have ideas about how to improve upon the 'autorate-ingress' feature?

I am looking forward to hearing how @Bndwdthseekr got on in the end. Or what other users with variable bandwidth connections have done, even if it meant just giving up!

I'd be curious @lantis1008 if you have any further thoughts given your initial suggestion about the ACC in Gargoyle. What about its applicability to this situation?

Ah, come on, I am not really patient and I do enjoy a friendly discussion, sometimes having to explain a concept helps a lot in getting a better grasp on the details of that concept :wink:

2 Likes

Only that it is what I am familiar with and it attempts to solve the initial problem you highlighted. Other points raised about it relying on ICMP performance and having to act largely in userspace are also valid (I think you could move it to kernel space with some effort).

If someone was keen to "have a go" and explore all options, I wouldn't count it out. But it isn't ready to use "out of the box" with OpenWrt. It is Gargoyle centric.
I've been using it since 2014 (I think) with great success.

Would you mind providing an outline summary of how it works? I personally liked the simplicity of @Bndwdthseekr's bash script and the overall concept in the ICMP approach written by @dlakelan, albeit the latter seemed a little complicated.

I don't think I really care about inconsistent ICMP performance because 8.8.8.8 inconsistency is surely peanuts compared to the huge bufferbloat issues I see on my LTE connection. I am not looking for perfection, just something that works to a sufficient degree. I don't mind some bufferbloat creeping in, so long as pings stay below 100ms from a baseline of 50ms, since in my experience everything then works fine. The problem comes when pings start to shoot up to more like 500ms and beyond, which is what happens when I disable SQM entirely.

Hopefully this will allow users like me to claim back some more bandwidth rather than having to unduly sacrifice bandwidth to get CAKE to work properly.

Update: I have looked into @dlakelan's script more now. Actually it doesn't seem as complicated as I thought. My RT3200 already has all the Erlang dependencies. How would I go about using this script? Is it as simple as running something like:

erlang sqmfeedback.erl

After editing the lines at the bottom:

monitor_ifaces([{"tc qdisc change root dev eth0.2 cake bandwidth ~BKbit diffserv4 dual-srchost overhead 34 ", 4000, 6000, 8000},
		    {"tc qdisc change root dev ifb4eth0.2 cake bandwidth ~BKbit diffserv4 dual-dsthost nat overhead 34 ingress",15000,30000,35000}],
    ["dns.google.com","one.one.one.one","quad9.net","facebook.com",
     "gstatic.com","cloudflare.com","fbcdn.com","akamai.com","amazon.com"]),

No point in me reinventing the wheel. The more I look at this code the more I like it.

My Erlang script was at least as much about me learning Erlang as about solving the problem, the price of me doing it in my spare time... I do think it works reasonably well, and it doesn't require burning tons of bandwidth.

If you can run it on your router, then go for it.

1 Like

Cool, thanks. I got it up and running. Not what I'd call a piece of cake but I'm glad I got there in the end.

For the benefit of others, you need to install the 'erlang', 'erlang-compiler' and 'iputils-ping' packages.

I see:

root@OpenWrt:/etc/init.d/erl# erl
Erlang/OTP 23 [erts-11.0] [source] [64-bit] [smp:2:2] [ds:2:2:10] [async-threads:1]

Eshell V11.0  (abort with ^G)
1> c(sqmfeedback).
{ok,sqmfeedback}
2> sqmfeedback:main().
ping cloudflare.com with results: [48.9,50.7,52.5,52.7,53.4]
ping akamai.com with results: [49.4,49.4,50.4,57.5,57.6]
ping one.one.one.one with results: [56.4,56.4,56.5,56.8,57.5]
ping google.com with results: [73.2,74.2,74.2,74.3,82.9]
ping gstatic.com with results: [66.7,68.6,69.5,69.8,70.1]
ping amazon.com with results: [113.0,116.0,117.0,118.0,119.0]
ping quad9.net with results: [185.0,186.0,187.0,187.0,187.0]
ping facebook.com with results: [265.0,266.0,267.0,267.0,268.0]
ping fbcdn.com with results: [271.0,272.0,273.0,276.0,282.0]
ping cloudflare.com with results: [51.7,52.2,52.6,55.2,55.3]
ping akamai.com with results: [45.4,49.9,52.7,52.9,55.3]
ping quad9.net with results: [179.0,180.0,187.0,188.0,189.0]
ping gstatic.com with results: [64.4,64.5,66.4,66.5,67.1]
ping one.one.one.one with results: [45.5,46.5,47.4,54.5,56.5]
ping amazon.com with results: [117.0,124.0,125.0,126.0,126.0]
ping akamai.com with results: [44.9,45.3,45.8,46.3,47.7]
ping google.com with results: [60.0,69.0,70.0,70.3,77.0]
ping fbcdn.com with results: [260.0,261.0,268.0,270.0,272.0]
ping facebook.com with results: [268.0,269.0,270.0,270.0,270.0]
ping cloudflare.com with results: [44.7,46.6,47.7,48.1,57.0]
Checking up on things: 1634056326
Full Delayed Site List: [{[102,98,99,100,110,46,99,111,109],14.0,1634056315},{[103,111,111,103,108,101,46,99,111,109],12.900000000000006,1634056315},{[111,110,101,46,111,110,101,46,111,110,101,46,111,110,101],10.100000000000001,1634056310}]
Recent Delayed Site List: [{[102,98,99,100,110,46,99,111,109],14.0,1634056315},{[103,111,111,103,108,101,46,99,111,109],12.900000000000006,1634056315},{[111,110,101,46,111,110,101,46,111,110,101,46,111,110,101],10.100000000000001,1634056310}]
tc qdisc change root dev wan cake bandwidth 29237Kbit flows nonat nowash no-ack-filter split-gso rtt 50ms noatm overhead 70
tc qdisc change root dev veth-lan cake bandwidth 29237Kbit besteffort triple-isolate nonat wash no-ack-filter split-gso rtt 50ms noatm overhead 70 ingress

Also @dlakelan for now I have placed the .beam file in /etc/init.d/sqmfeedback-erl/ and created a new sqmfeedback service:

root@OpenWrt:/etc/init.d# cat sqmfeedback
#!/bin/sh /etc/rc.common
# Copyright (C) 2007 OpenWrt.org

export PATH=/usr/sbin:/usr/bin:/sbin:/bin

START=51
STOP=4

start() {
        erl -pa /etc/init.d/sqmfeedback-erl -eval 'sqmfeedback:main().' -noshell -detached
}

stop() {
        pgrep erl | xargs kill -9
}

Does this seem sensible?

On initialisation, it unloads the link (stops traffic) and starts to transmit pings to the nominated server. After 5 seconds (stabilising period) it starts measuring those pings for 10 seconds to find an average. The average returned latency is referred to as the "link entitlement" and is a measure of what your link can achieve under the best circumstance. After the 15 seconds has elapsed, the ping is terminated and traffic is resumed on the link.
From memory the link entitlement has a small amount added to it for hysteresis, and also the user can specify that they want to increase the entitlement by Xms to target greater utilisation as a tradeoff for latency.
The maximum bandwidth entered by the user is stored as the max link rate, and 75% of this value is used as the initial fair link rate.

When the link goes above 10% utilisation, the pinger is turned on and the latency is actively monitored.
As long as the latency is under the target, and there is more demand for the link, the fair link limit is increased towards 100% of the line rate. If the latency goes above the target, the fair link limit is reduced towards 15% (but no lower).
The algorithm it uses to adjust the bandwidth drops more aggressively depending on how much the latency has exceeded the target. For recovering back to the line rate it steps up by some amount (can't remember how much).
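If I have understood that description correctly, the control law looks roughly like this (my own parameter choices and function name; the real Gargoyle code certainly differs in detail):

```shell
#!/bin/sh
# Rough sketch of the ACC control law as described above.
# Rates in Kbit/s, latencies in ms.
acc_step() {
    # $1 = current fair link limit, $2 = measured latency,
    # $3 = latency target, $4 = max link rate (user input)
    limit=$1 lat=$2 target=$3 max=$4
    min=$(( max * 15 / 100 ))               # never shape below 15%
    if [ "$lat" -gt "$target" ]; then
        over=$(( lat - target ))
        # drop more aggressively the further we are over target
        limit=$(( limit - limit * over / (over + 100) ))
        [ "$limit" -lt "$min" ] && limit=$min
    else
        limit=$(( limit + max / 20 ))       # step back up toward 100%
        [ "$limit" -gt "$max" ] && limit=$max
    fi
    echo "$limit"
}
```

The proportional backoff term is the interesting part: a small latency excursion trims the limit slightly, while a large one cuts it hard, which matches the "drops more aggressively" behaviour described above.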

There is also a realtime mode which reduces the latency target to the raw entitlement value (no additions) and certain traffic flows can trigger this mode.

If you want more explanation than that, I'd encourage you to read the source which I linked previously. It is actually well commented (a nice surprise).

This is what my link looks like at idle (two of us working from home, one on a video call at the moment):
(screenshot)

I then started a download and got this:
(screenshot)
The latency is below 56ms so it didn't shape the link at all.
I'm not willing to actually soak the line at the moment (lest I be yelled at :smiley:)

Note that my link is actually capable of 9ms of latency but I've added 20ms to the entitlement. I found that when the ACC was targeting 9ms I wasn't able to utilise my full speed, and for my use cases speed was better than latency. Plus I don't mind gaming at 29ms of latency; perfectly manageable.

2 Likes

@lantis1008 many thanks indeed. That seems rather different to @dlakelan's script and it is very helpful to see what yours does because of the many positive reports on this forum about it.