CAKE w/ Adaptive Bandwidth [August 2022 to March 2024]

@aqua_hopps maybe you could post a couple more logs so we can better see what's going on?

The speedtests and logs were recorded side by side.

Is it reading the wrong rx_bytes and tx_bytes files?

# verify these are correct using 'cat /sys/class/...'
case "${dl_if}" in
    # veth and ifb interfaces carry the shaped download as egress, so the
    # rx/tx byte counters are inverted relative to a normal interface
    veth*|ifb*)
        rx_bytes_path="/sys/class/net/${dl_if}/statistics/tx_bytes"
        ;;
    *)
        rx_bytes_path="/sys/class/net/${dl_if}/statistics/rx_bytes"
        ;;
esac

case "${ul_if}" in
    # same inversion on the upload side for veth and ifb interfaces
    veth*|ifb*)
        tx_bytes_path="/sys/class/net/${ul_if}/statistics/rx_bytes"
        ;;
    *)
        tx_bytes_path="/sys/class/net/${ul_if}/statistics/tx_bytes"
        ;;
esac

if (( $debug )) ; then
    echo "DEBUG rx_bytes_path: $rx_bytes_path"
    echo "DEBUG tx_bytes_path: $tx_bytes_path"
fi
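
As the comment above hints, it is worth sanity-checking the chosen counter by sampling it while traffic flows in a known direction. A rough sketch (hypothetical, using a one-second window):

# Sample both counters one second apart while saturating the download;
# the file that grows fastest is the one tracking the download direction.
for f in rx_bytes tx_bytes; do
    b1=$(cat "/sys/class/net/${dl_if}/statistics/${f}")
    sleep 1
    b2=$(cat "/sys/class/net/${dl_if}/statistics/${f}")
    echo "${f}: $(( (b2 - b1) * 8 / 1000 )) kbit/s"
done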

I presume these are wrong in your case:

# download
rx_bytes_path="/sys/class/net/eth0/statistics/rx_bytes"
# upload
tx_bytes_path="/sys/class/net/eth1/statistics/tx_bytes"

Can you verify?


Regarding cases where data usage is relevant...

Maybe this has already been ruled out, but:

Is generating pings the only latency indicator available?
I know very little about all this, but maybe there's a way to track latency passively.

If there were some way to infer latency from existing traffic, the service would have no data overhead.

One likely caveat is that the 'latency reference frame' (?) would be constantly changing, since the list of available "reflectors" at any moment would depend entirely on client activity.

There are some approaches, like pping, that try to leverage TCP timestamps, or home-grown measurements that watch for a TCP sequence number reappearing in the ACKs of the reverse traffic.
The challenge is that when measuring passively we have little choice in the reflectors or in their quality and reliability.
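
For illustration only, here is a very rough user-space approximation of the pping idea, pairing each outgoing TCP timestamp value with the moment it is echoed back; $wan_if and $wan_ip are placeholders, and this ignores flow boundaries, timestamp reuse and midnight rollover:

# Match outgoing 'TS val' against incoming 'TS ecr' to estimate RTT
# passively from existing traffic (sketch, not production code).
tcpdump -l -nn -i "$wan_if" tcp 2>/dev/null | awk -v me="$wan_ip" '
    function t2s(ts,  a) { split(ts, a, ":"); return a[1]*3600 + a[2]*60 + a[3] }
    {
        out = (index($3, me) == 1)          # $3 is the source address
        for (i = 1; i <= NF; i++) {
            if (out && $i == "val") sent[$(i+1)] = t2s($1)
            else if (!out && $i == "ecr") {
                v = $(i+1); sub(/[^0-9].*$/, "", v)
                if (v in sent) {
                    printf "passive rtt sample: %.1f ms\n", (t2s($1) - sent[v]) * 1000
                    delete sent[v]
                }
            }
        }
    }'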

Yes, alas, I see no robust way of doing that cheaply; neither doing this in-kernel nor using tcpdump to exfiltrate the required information to user space is easy.

+1; yes, pretty much. The "solution" would likely be to simply use the minimal RTT measured over an interval, but that is not terribly responsive.
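
As a trivial sketch of that idea (relying only on the min/avg/max summary line that both iputils and BusyBox ping print):

# Take the minimum RTT over a short burst of pings as the latency
# estimate; robust against outliers, but slow to react as noted above.
min_rtt=$(ping -c 10 -i 0.2 1.1.1.1 | awk '/min\/avg/ { split($4, a, "/"); print a[1] }')
echo "interval min RTT: ${min_rtt} ms"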

I am not saying there is no way/solution to this, merely that I do not see one.


Yes, both achieved rates look pretty much identical.

I think this would need to be:

rx_bytes_path="/sys/class/net/${dl_if}/statistics/tx_bytes"

By handling download as egress on a LAN interface we get the same inversion as with veths and ifbs. This might be a longstanding bug/oversight I introduced, sorry...

@Lynx this change should be safe, since we cannot instantiate a cake instance on the ingress side of an interface, so the old code would never be right unless Linux is changed to allow the desired set-up with qdiscs on the ingress side. Again, sorry; this clearly was never tested before.
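
In other words, if cake can only ever sit on egress, the dl_if branches collapse to one case (a sketch of the implied simplification):

# Whatever dl_if is (veth, ifb or a LAN interface), the shaped download
# leaves through it, so its egress counter is the one to read:
rx_bytes_path="/sys/class/net/${dl_if}/statistics/tx_bytes"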


Thank you for pointing it out!
By changing rx to tx, the dl bandwidth is now properly classified!


Great! Humour us please: can you send a new data log using the latest and greatest code here:

https://github.com/lynxthecat/cake-autorate/tree/next-generation

Logging is activated by default (and active whether you run in console or as a service), so you only need to run it with mostly default settings and pastebin/upload /tmp/cake-autorate.log. The default ping binary is now fping (better timing between ping requests), but you can still use iputils-ping if you want to.

Thanks to @patrakov, hping3 is apparently available on OpenWrt 'main' and '22.03' now, so OWDs (one-way delays) using that ping binary are now possible for those on those OpenWrt branches (or will be shortly, once the package builds are updated). It seems getting hping3 pulled into other branches is just a question of asking, like I did here:

And thanks to @moeller0's pushing for this, the code has now been generalised to easily support multiple ping binaries and OWDs, so it won't require much work to add in hping3 wrappers.

Shortly this will all get pushed to main.

Fixed here:

And once pushed to main, I guess that can be marked as version 1.1, since there have been several important changes and new features.


Sure!

without SQM

Looks a lot better now:

What kind of connection is this?

DOCSIS 3.0 cable
200 Mbit/s down, 30 Mbit/s up

I don't know how to interpret this plot. Is it a terrible connection?

A request for future plots: could you please make the legend and the axis label fonts bigger (3-4x bigger), so that the letters and numbers are visible directly in the post, without the need for zooming or having a Retina display?

Actually, I wish I had something closer to this, though then it might not have given me the motivation to work on this autorate stuff so much. I see a bandwidth spread of 10 Mbit/s to 70 Mbit/s on download, and keeping RTTs below 100 ms is the challenge :slight_smile:.


Your baseline on this test is still very, very good by most standards.


Sorry, I had no time. Here is the data with your recommended modifications.

:face_with_open_eyes_and_hand_over_mouth: I forgot the file... it's uploading... just wait for my GO.

https://paste.3cmr.fr/lufi/r/BGjF6ZaG3z#XsofBkBxjaP3HefLwB2px+MgybAk+ZAW+roLXuEYJM4=

For information: the video streaming was somewhat worse than before.

The TTL is set to 750 because I think that beyond this value the network connection is not good at all :stuck_out_tongue: ... but it seems mine is really bad :cry:


hping3 is now in 22.03, so I will add a wrapper and hopefully show OWD handling working:

root@OpenWrt:~# hping3 9.9.9.9 --icmp --icmp-ts
HPING 9.9.9.9 (wan 9.9.9.9): icmp mode set, 28 headers + 0 data bytes

len=46 ip=9.9.9.9 ttl=56 id=58 icmp_seq=0 rtt=59.4 ms
ICMP timestamp: Originate=30602286 Receive=30602314 Transmit=30602314
ICMP timestamp RTT tsrtt=60
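
For context: Originate is the local send time and Receive/Transmit come from the remote's clock, all in milliseconds since midnight UTC, so with unsynchronised clocks the raw differences are only meaningful as deltas against a per-reflector baseline. A rough parsing sketch (assumes the output format above; not the cake-autorate implementation):

# Derive raw one-way-delay estimates from hping3's ICMP timestamp lines.
hping3 9.9.9.9 --icmp --icmp-ts 2>/dev/null | awk '
    /ICMP timestamp:/ {
        split($3, o, "="); split($4, r, "=")
        ul = r[2] - o[2]                      # remote receive - local originate
        printf "raw ul delay: %d ms, ", ul
    }
    /tsrtt=/ {
        split($4, t, "=")
        printf "raw dl delay: %d ms\n", t[2] - ul    # RTT minus the uplink part
    }'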

Can anyone advise why its output buffers when piped? @colo?

So:

while read x; do echo "$x"; done < <(ping 1.1.1.1)

gives immediate and smooth output, whereas:

while read x; do echo "$x"; done < <(hping3 9.9.9.9 --icmp 2>/dev/null)

is very stuttery?


Because hping3 doesn't set stdout to line-buffered, and stdio defaults to block buffering when stdout is a pipe rather than a terminal. Try this:

while read x; do echo "$x"; done < <(stdbuf -o L hping3 9.9.9.9 --icmp 2>/dev/null)

(needs the coreutils-stdbuf package)

Yes got it - also this hack works:

root@OpenWrt:~# while read x; do echo "$x"; done < <(hping3 9.9.9.9 --icmp 1>&2)

It seems redirecting stdout to stderr removes the block buffering? See any issues with this route?

It removes buffering (stderr is unbuffered by default), and also redirects the lines away from the "read". To see what happens, try:

while read x; do echo === "$x"; done < <(stdbuf -o L hping3 9.9.9.9 --icmp 2>/dev/null)
while read x; do echo === "$x"; done < <(hping3 9.9.9.9 --icmp 1>&2)

The second command does not prefix the lines with ===.


Is there a fix here or is the extra binary/dependency needed?

I think I'd really like @Lochnair's custom C code :smiley:! @Lochnair, any appetite to release a custom C-based binary for OpenWrt for pinging out with round-robin ICMP timestamps?