Hi guys, I am wondering if anybody can gimme a hint.
I use OpenWrt on an RPi2 (and tried the same on an RPi3 and an RPi4 (8GB)), but on my Starlink uplink I NEVER get more than 40 Mbit when going through OpenWrt.
As soon as I unplug the Starlink from the OpenWrt box and plug it into my Yoga Ubuntu notebook, I can run a speedtest with more than 200 Mbit down, 50 Mbit up, but on OpenWrt, whatever I try, less than 40 Mbit ..
The RPi is single-homed, one NIC with 5 VLANs. It has mwan3 configured with a 4G Vodafone connection + Starlink, and on the 4G it sometimes can do 50-60 Mbit, which to me sounds better than the average 15/20 on Starlink. I tried this cake thing, but the best it gave me was maybe 46 Mbit.
What I am wondering about is that even on an RPi4 with 8GB it doesn't change much ..
It's the V3 dish, no obstructions. The Advanced Speedtest isn't possible anymore, but it used to give me around 240/60 Mbit .. and so did a speedtest run from my notebook connected directly to the ethernet adapter.
An RPi4 should easily be able to traffic-shape a gigabit. What version of OpenWrt are you using? Please post logs and dmesg output as preformatted text, and erase any MACs or other personally identifiable information.
5 VLANs into one 1000M NIC sounds like it could be a problem. What managed switch do you have attached?
I have the same. I think you will definitely want to be using cake + cake-autorate if you want to avoid massive bufferbloat (circa 500 ms to 1 second), which renders any latency-sensitive application unusable.
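For reference, a basic cake setup via sqm-scripts looks roughly like this. This is only a sketch: the interface name `eth0.2` and the bandwidth figures are placeholders for your actual Starlink WAN and your measured rates, and cake-autorate is a separate script layered on top that then varies these rates dynamically.

```shell
# Hedged sketch: enable sqm-scripts with cake on the Starlink WAN.
# 'eth0.2' and the kbit/s figures are placeholders -- substitute your
# own WAN interface and roughly 85-90% of your measured throughput.
opkg update
opkg install sqm-scripts luci-app-sqm

uci set sqm.@queue[0].interface='eth0.2'
uci set sqm.@queue[0].download='150000'   # kbit/s
uci set sqm.@queue[0].upload='40000'      # kbit/s
uci set sqm.@queue[0].qdisc='cake'
uci set sqm.@queue[0].script='piece_of_cake.qos'
uci set sqm.@queue[0].enabled='1'
uci commit sqm
/etc/init.d/sqm restart
```

With cake-autorate installed, the static download/upload values above become the baseline the script adjusts around as the Starlink link quality shifts.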
Bandwidth is not the only consideration, especially above 10 Mbit/s; latency begins to affect the perceived quality of an internet connection when it spikes above 50 ms.
Not sure about how Starlink addresses bufferbloat now.
Nope, the Waveform test does report quite a set of useful measures, but the top-of-line report really just shows:
mean idle latency; mean download latency - mean idle latency; mean upload latency - mean idle latency
(actually it reports max(0, mean([up|down]) - mean(idle))); see:
idle: mean 15.6 -> rounds to 16
download: mean 16.6 -> rounds to 17 -> down - idle = 17-16 = 1
upload: mean 14.8 -> rounds to 15 -> up - idle = 15-16 = -1 -> max(0, -1) = 0
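The arithmetic above can be sketched in a few lines of Python. This is my own reconstruction from the observed values, not Waveform's actual code, and the round-then-subtract order is an assumption:

```python
# Sketch of how the top-of-line figures appear to be derived
# (assumption based on the observed numbers, not Waveform's code).

def headline_bloat(mean_loaded_ms: float, mean_idle_ms: float) -> int:
    """Latency-under-load figure: round both means, subtract, clamp at zero."""
    return max(0, round(mean_loaded_ms) - round(mean_idle_ms))

idle = 15.6      # mean idle latency  -> rounds to 16
download = 16.6  # mean download latency -> rounds to 17
upload = 14.8    # mean upload latency   -> rounds to 15

print(headline_bloat(download, idle))  # 17 - 16 = 1
print(headline_bloat(upload, idle))    # max(0, 15 - 16) = 0
```

Note the clamp: a loaded mean that comes out *below* the idle mean simply reports as 0, hiding the (harmless) negative difference.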
This is why I personally give the top-of-line numbers only a cursory look; mean latencies are only useful to assess latency performance if the spread is super tight.
I note however that on Safari the Waveform test has relatively many extreme outliers caused by the browser, so I assume some kind of mean is reported simply because it is more robust and stable (people appreciate it if tests under similar conditions return similar results, which for higher percentiles like P99 would require considerably longer measurement times, but I digress).
Up until recently, Starlink connections showed significant bufferbloat under maximum load, just like LTE. So I think either Starlink has fixed this, or for some reason you are not saturating your connection.