from the last debug output you sent me... your network (upload) showed extremely high contention (too much load for the available speed)... you really need to study up on qos/sqm... and ideally upgrade your wan subscription speed / isp...
read this... it explains clearly (using an alternate shaper) how adequate upload bandwidth availability is CRITICAL to everything...
Just some SQM questions: it seems like the RPi4 can handle upload SQM pretty well (comfortable at 800,000 kbit/s), but for download it caps at ~600,000 kbit/s no matter what settings I use. Is this a CPU restriction of the RPi4, or should I do some tests for you @anon50098793?
i've never used pie... it should be cake for that shaper...
off-topic note: realized recently that the script should really be called:
ctinfo_8layercake_rpi4.qos
due to it instantiating with diffserv8... but i'm happy to just keep the name aligned with where it was pilfered from, for provenance reasons...
i've never really provided much guidance for the SQM-CONFIG setting for use with that script...
partly because i'm not sure what it can support... but for reference... i've always used it with a bog-standard up/down on a single wan, with practically no other options...
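for the curious... a minimal /etc/config/sqm along those lines might look like this (a sketch only: the interface name and rates are placeholders taken from the speeds quoted above, and the script name assumes the stock layer_cake it was pilfered from rather than the ctinfo variant)...

config queue 'eth1'
	option enabled '1'
	option interface 'eth1'          # wan interface (placeholder)
	option download '600000'         # kbit/s, ingress shaping
	option upload '800000'           # kbit/s, egress shaping
	option qdisc 'cake'
	option script 'layer_cake.qos'   # or the ctinfo variant discussed above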
up to you... but you need to reassign the eth0 interrupt affinity (or use packet_steering) to achieve max throughput...
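fwiw... on recent openwrt builds the generic packet steering knob (as opposed to the FORCEON bit in my script) is just a uci toggle... something like:

uci set network.globals.packet_steering='1'
uci commit network
/etc/init.d/network restart   # or reboot for it to take effect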
i (got off my rear and) ran controlled 'internal-fake-GB-wan' testing yesterday (@dlakeland did so a long time ago) with and without sqm... so we know where the main impediments are... we just don't know what the 'perfect' values / sane defaults / alternate nic behaviours are... which is something only a user comfortable with messing with IRQs etc. can really assist with... even with my mediocre (witness-me) skills it's difficult to isolate or interpret the effect of tweaking all of these values...
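(for anyone wanting to reproduce that kind of internal gigabit testing... iperf3 between a lan client and a host on the 'fake wan' side is one way to do it... just an assumption of method, the tool and addresses below aren't from the posts above)

# on the "fake wan" host:
iperf3 -s
# on a lan client, pushing traffic through the router (placeholder ip):
iperf3 -c 192.168.100.2 -t 30 -P 4      # 30 s, 4 parallel streams (upload)
iperf3 -c 192.168.100.2 -t 30 -P 4 -R   # reverse direction (download)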
fwiw... i'm now semi-randomly setting...
# pin the eth0 irqs (32/33 on this rpi4 build... check yours) to cpus 2-3 (hex mask 0xc = 0b1100)
echo -n c > /proc/irq/32/smp_affinity
echo -n c > /proc/irq/33/smp_affinity
# spread rps/xps over cpus 1+3 (hex mask 0xa = 0b1010)... loop, since a multi-file glob after '>' is an ambiguous redirect
for q in /sys/class/net/eth0/queues/*/*_cpus; do echo -n a > "$q"; done
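(the irq numbers above are board/build specific... find eth0's on your own box with:)

grep -iE 'eth0|genet' /proc/interrupts   # first column is the irq number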
and to avoid panic / confusion... most of this is only really relevant for people with WAN over ~250-350 Mb/s... hence why it took me so long to dig into this at all...
this graph is pretty interesting / self-evident... the spike at the end of monday is the perf testing with defaults, which was cpu bound...
you can see high sirq and little to no load on cpu3/4 in the preceding days...
tuesday onwards is with the current tweaks... sirq barely registers and load is distributed much more evenly across all cores... (the big spike on tuesday is the identical perf tests, yet the thresholds are almost halved due to interrupt levelling and better cpu allocation)
Tried your build and couldn't find any poweroff button. Is it hidden somewhere? I could only find reboot. I didn't want to corrupt the SD card, so I had to SSH in and use the poweroff command. Is there any shortcut for this?
great... so you pulled down the main part, which was the packet_steering (FORCEON) for testing... then rebooted...
if your speeds improved... then there is no real need to run the other (2nd) script... the main thing you'd have needed/wanted from it is the IRQ-AFFINITY part...
no... you don't really need it... what you can do is run
htop
then do a speedtest... (edit: it's worth using a few sites, as they vary in how they work)
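(if htop isn't installed, a rough alternative sketch for eyeballing per-core network softirq activity... snapshot the counters before and after the speedtest and compare:)

grep -E 'NET_RX|NET_TX' /proc/softirqs
# or just install htop first:
opkg update && opkg install htop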