Help! I have managed to remove wireguard-tools and luci-proto-wireguard, and can’t reinstall them because there is no kmod-wireguard package to satisfy the dependency, since WireGuard is built into the kernel. Any suggestions for fixing this goof?
If you use my build, you can find it in the repo...
I am using your build, the 2026-03-20 version, but I must not have my apk distfeeds.list file correct. Can you please post what it should be for that, or point me at the original contents?
Sigh, I looked for that and missed it, not realizing that git had curtailed the directory listing due to the number of files. However, I get an error trying to install that because I’m running kernel 6.12.69 and that’s built on 6.12.74. I seem to have failed in updating from the 02_07 version to the 03_20 version. Let me go see if I can’t get that sorted out and if that doesn’t fix my problems…
Alright, no idea how I got into that state, but upgrading to the 03_20 version again sorted it. I must have managed to upgrade with the wrong file and then everything was confused… Sigh. Thanks for the pointers, all sorted now!
Prebuilt testing version based on kernel 6.18; only tested on the MT6000, not tested on the MT3000.
Please let me know how it goes.
Things to be tested: with a 1 Gbps symmetric connection, please report upload/download speed and CPU use, and please play with the sync time setting under the ADVANCED CAKE settings,
and with RPS here:
I simply say Thank You Immensely, Pesa, for all your hard work, expertise, and devotion to this project, which we all so appreciatively benefit from.
Peace
Good morning Pesa. While testing the new 6.18 build, the setting is not accepted in the web interface despite doing Save & Apply. Do you know if it is possible to activate Telnet via SSH, and with which command? And where is the CAKE settings menu located?
Thanks for everything, Pesa.
From my preliminary tests, this seems to confirm the issue: the connection gets throttled, sharply cut off, even when setting high download speed values. I tried different Sync Time values, but the result didn’t change.
In my case, since I’m using VLAN tagging, I usually rely on PPPoE-WAN as the interface, but I decided to try eth1 anyway, and it was actually worse. According to the tests, the throttling caused the line’s download speed to drop dramatically.
I tested both Piece of Cake and Layer Cake. The latter, at least based on these tests, seems to introduce some bufferbloat issues even though the throttling still persists. I also tested using Packet Steering (the one available in the Interface section), and I think it helped a bit, but the line throttling issue was still there.
P.S. I’d like to point out, however, that my connection is not symmetric fiber but asymmetric.
I’m not sure if this is relevant, but I noticed that when using the latest custom firmware with luci-app-sqm or QoSMate and the latest CAKE-MQ enabled, my download speed roughly doubles—from around 240 Mbps to 400 Mbps (sometimes around 350 Mbps; keep in mind this is cable, not fiber)—when switching from Priority Queue Ingress and Egress with DiffServ 4-tier priority to DiffServ 8-tier priority. I don’t have a 1 Gbps connection, but that’s still about double the speed sometimes. Again, I’m not sure if this is relevant.
Dear @pesa1234, how exactly does sync_time impact our connection? I can reach gigabit by increasing sync_time a lot.
Dear guys, first of all, thanks a lot for testing it.
Thanks again to everyone, this feedback is very useful.
From the reports so far, this does not look like a simple bandwidth-setting issue. It looks more like a CAKE-MQ tuning issue, and sync_time is probably a big part of it.
For anyone wondering what sync_time actually does: in CAKE-MQ, traffic is handled by multiple CAKE sub-instances, usually one per TX queue. The sync_time value is the interval, in nanoseconds, used to re-check how many queues are currently active and to rebalance the shaping rate across them.
In practical terms:
- lower sync_time = CAKE-MQ synchronizes more often
- higher sync_time = CAKE-MQ synchronizes less often
That means there is a trade-off:
- if sync_time is too low, synchronization happens very frequently, which can increase CPU/scheduling overhead and reduce throughput on some setups
- if sync_time is too high, synchronization reacts more slowly when the number of active queues changes, which can hurt latency control and sometimes increase bufferbloat
So when some users say that increasing sync_time helps them recover download speed, that suggests the lower value may be causing too much sync overhead on their setup. On the other hand, if sync_time is pushed too high, the line may become less responsive under load even if peak throughput improves.
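To make the trade-off above a bit more concrete, here is a small standalone C sketch. It is purely illustrative and not the real cake_mq code: it only shows the idea of re-counting the active queues once per sync interval and splitting the configured rate across them, so a shorter interval reacts faster to flow changes but pays the bookkeeping cost more often.

```c
/* Illustrative userspace sketch of the sync_time idea, NOT the real
 * cake_mq implementation: every sync interval the shaper re-counts the
 * active queues and splits the configured rate across them. */
#include <stdio.h>
#include <stdint.h>

#define TOTAL_RATE_BPS 1000000000ULL /* configured shaping rate: 1 Gbit/s */
#define NUM_QUEUES     4             /* hypothetical number of TX queues */

/* Re-check which queues carry traffic and divide the rate among them. */
static void rebalance(const uint64_t backlog[NUM_QUEUES], uint64_t rate[NUM_QUEUES])
{
    int active = 0;
    for (int i = 0; i < NUM_QUEUES; i++)
        if (backlog[i] > 0)
            active++;
    if (active == 0)
        active = 1;
    for (int i = 0; i < NUM_QUEUES; i++)
        rate[i] = backlog[i] ? TOTAL_RATE_BPS / active : 0;
}

int main(void)
{
    uint64_t sync_time_ns = 1000000;               /* 1 ms between re-checks */
    uint64_t backlog[NUM_QUEUES] = { 1, 0, 0, 0 }; /* one flow active at start */
    uint64_t rate[NUM_QUEUES] = { 0 };

    for (uint64_t now = 0; now <= 5000000; now += sync_time_ns) {
        if (now >= 3000000)
            backlog[1] = 1;   /* a second flow appears at t = 3 ms; it is only
                               * noticed at the next sync point               */
        rebalance(backlog, rate);
        printf("t=%llu ns: q0=%llu bps, q1=%llu bps\n",
               (unsigned long long)now,
               (unsigned long long)rate[0],
               (unsigned long long)rate[1]);
    }
    return 0;
}
```

With a larger sync_time_ns the re-check runs (and burns CPU) less often, but a newly active flow keeps getting zero bandwidth for longer before the split is corrected, which is roughly the throughput-versus-responsiveness trade-off described above.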
Mmh, interesting. From what I understand, there’s still a lot of ground to cover before reaching a practical solution to the problem.
By the way, I experienced something similar in the previous release, the one before yesterday’s. When I set CAKE MQ and diffserv3, for some reason the same connection bottleneck happened even when I set the bandwidth to high values.
I write what I think without any kind of filter, so I would like to share only what I understand (little and low).
I think I may have identified a likely cause of the download-side issue, and I am now testing a possible kernel-side fix.
At the moment, on my setup, upload and download do not behave the same way with CAKE-MQ:
- on egress/upload (`eth1`), traffic is actually being distributed across multiple active CAKE child queues
- on ingress/download (`ifb4eth1`), even though the root qdisc is `cake_mq`, almost all traffic ends up on a single child queue, while the other child queues remain idle
So in practice, download is currently behaving much closer to single-queue CAKE than true multiqueue CAKE-MQ.
My current theory is:
- the Mediatek TX queue hash patch makes sense for upload/egress on `eth1`
- but it does not solve the download side, because ingress shaping happens on the IFB device
- on the IFB side, queue selection appears to collapse redirected traffic onto one queue, so CAKE-MQ cannot really spread the download load across multiple child queues
Because of that, I am testing a new IFB-side patch that adds queue selection by flow hash, so redirected ingress traffic can be distributed across multiple IFB TX queues instead of landing on only one.
If this works as expected, the result should be:
- more active CAKE child queues on the IFB side
- better distribution of download traffic
- less download-side throttling
- potentially better download throughput and stability
This is still experimental and not yet fully verified, but from the stats I have collected so far, it looks like a plausible direction.
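For anyone wondering what “queue selection by flow hash” means in practice, here is a minimal standalone C sketch. It is a toy illustration under my own assumptions, not the actual kernel patch: the idea is just that a hash of a flow’s 5-tuple picks the TX queue, so packets of the same flow stay on one queue while different download flows can spread across all queues (and therefore across multiple CAKE children).

```c
/* Toy illustration of flow-hash based queue selection, NOT the real IFB
 * patch: a hash over the 5-tuple picks one of the TX queues, so each flow
 * sticks to one queue while different flows spread across all of them. */
#include <stdio.h>
#include <stdint.h>

#define NUM_TX_QUEUES 4 /* hypothetical number of IFB TX queues */

struct flow {
    uint32_t saddr, daddr; /* source/destination IPv4 addresses */
    uint16_t sport, dport; /* source/destination ports */
    uint8_t  proto;        /* L4 protocol */
};

/* Toy mixing of the tuple fields; the kernel uses its own flow hash. */
static uint32_t flow_hash(const struct flow *f)
{
    uint32_t h = f->saddr ^ (f->daddr * 2654435761u);
    h ^= ((uint32_t)f->sport << 16) | f->dport;
    h ^= (uint32_t)f->proto * 0x9e3779b9u;
    h ^= h >> 16;
    h *= 0x45d9f3bu;
    h ^= h >> 16;
    return h;
}

static unsigned int select_queue(const struct flow *f)
{
    return flow_hash(f) % NUM_TX_QUEUES;
}

int main(void)
{
    struct flow flows[] = {
        { 0x0a000001, 0xc0a80102, 443, 50001, 6 }, /* download flow 1 */
        { 0x0a000001, 0xc0a80103, 443, 50002, 6 }, /* download flow 2 */
        { 0x0a000002, 0xc0a80104,  80, 50003, 6 }, /* download flow 3 */
    };

    for (size_t i = 0; i < sizeof(flows) / sizeof(flows[0]); i++)
        printf("flow %zu -> queue %u\n", i, select_queue(&flows[i]));
    return 0;
}
```

Without something like this on the IFB side, all redirected ingress traffic lands on a single queue, which matches the single-busy-child-queue behaviour described above; the actual patch linked below does the real version of this inside the kernel.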
Looks promising. I’m looking forward to seeing the results in the near future. Thanks as always for your work
Uploaded the patch... https://github.com/pesa1234/openwrt/commit/59354004620f9ff7dc678a1ed167012e6a773db5
Do you need me to compile it?
If you have time, yes please
Don’t apologize for being Italian!
updated:

