To the 10Gbit point, showing how cake can improve a 10Gbit link - on some cheap platform - is also on my mind. I mostly fear we have to make that big move to xdp and a xpf firewall compiler to make progress at these speeds.
The mikrotik folk have finally added fq_codel and cake across their product line in the beta, but there's no means to dynamically reconfigure it. They do make some interesting looking hw that can be reflashed to openwrt.
We have fq_codel support in our community NSS builds for the IPQ806x, which is accelerated by the NSS cores. There is practically no CPU load when shaping at 1Gbps. I added a very basic script for that, so it can be selected in the webif:
It could use some work, but I don't have much time lately.
It's also worth asking whether something simpler and computationally cheaper than cake could help at these speeds. Like hashing flows to bfifos below a qfq, with a TBF on top, maybe. Even 1000 MTU-sized packets take only 1.2ms to transmit at 10Gbit, so you could tolerate a lot of slop relative to optimal without much bloat.
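A rough sketch of that idea with tc, purely illustrative: the device name, rate, class count, and buffer sizes here are all assumptions, not tested values.

```shell
# Illustrative only: eth0, the 10gbit rate, 16 classes, and the
# burst/limit sizes are all guesses to show the structure.
DEV=eth0

# TBF on top sets the overall rate; burst sized generously for 10Gbit.
tc qdisc add dev $DEV root handle 1: tbf rate 10gbit burst 500k latency 2ms

# QFQ below it for cheap per-flow scheduling.
tc qdisc add dev $DEV parent 1:1 handle 2: qfq

# A set of classes, each backed by a byte-limited FIFO.
for i in $(seq 1 16); do
    tc class add dev $DEV parent 2: classid 2:$i qfq
    tc qdisc add dev $DEV parent 2:$i bfifo limit 64kb
done

# Hash flows across the classes with the flow classifier.
tc filter add dev $DEV parent 2: protocol ip prio 1 handle 1 \
    flow hash keys src,dst,proto,proto-src,proto-dst divisor 16 baseclass 2:1
```

Whether this actually beats a single pfifo at 10Gbit is exactly the open question; it just shows the pieces already exist in tc.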
For science! I have a nearly religious love of head-drop queuing vs tail drop, but demonstrating any benefit, at 10Gbit on cheap hardware, of anything other than a FIFO would be good.
Unfortunately I deleted them, but I can run the tests again tomorrow. I only have a 100Mbps ISP connection, though; for higher speeds I would have to set up a local server on the WAN side of the router.
Not sure if that comes close to a real world scenario.
Well, my concern is that there are problems above 100Mbit. Does the nsstbf work at 900Mbit? Inbound?? It would be cool if it worked inbound... It's a very real-world scenario to have a server "right there" on the WAN side...
But: certainly, verifying that it works at 100Mbit would be a start! Note that the regular sqm-scripts use a scalable burst and HTB quantum you don't have...
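For reference, the shape of that scaling (this is a hypothetical illustration of sizing burst to roughly 1ms of bytes at the shaped rate, not sqm-scripts' exact formula):

```python
def htb_burst_bytes(rate_bps, target_ms=1.0, min_burst=2 * 1514):
    """Scale HTB burst to roughly target_ms worth of bytes at the
    shaped rate, never going below two full-size Ethernet frames.
    Illustrative only -- not the actual sqm-scripts calculation."""
    return max(min_burst, int(rate_bps * target_ms / 1000 / 8))

print(htb_burst_bytes(100_000_000))  # 100 Mbit/s -> 12500 bytes
print(htb_burst_bytes(1_000_000))    # 1 Mbit/s -> floor of 3028 bytes
```

The point is that a fixed burst tuned for one rate will be badly oversized or undersized at another, which is one of the suspects below.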
Exactly. The CPU requirements at 10Gbps are significant. Something computationally cheap enough to run with routing and firewall on say a Celeron, particularly if it could be multithreaded, would be a lot better than a 1000 element pfifo.
I have a spare R7800 not (yet) actively in use and a fiber connection of 500/500mbps. I’ve never done anything with flent, but if you guys need some testing on speeds above 100mbps and can “talk me through it”, just let me know.
--socket-stats only works on the up. Both the up and down look OK, however the distribution of TCP RTT looks a bit off. These lines should be nearly identical.
Three possible causes:
1) Your overlarge burst parameter. At 35Mbit you shouldn't need more than 8k!! The last flow started up late and doesn't quite get back into fairness with the others.
2) They aren't using a DRR++ scheduler, but plain DRR.
3) Unknown. I always leave a spot in there for the unknown, and without a 35ms RTT path to compare this against, I'll just go and ask you for more data.
No need for more down tests at the moment.
A) try 8 and 16 streams on the up.
B) try a vastly reduced htb burst.
C) A packet capture of a simple 1-stream test, on both up and down, also helps me, if you are in a position to take one. I don't need a long one (use -l 20), and tcpdump -i the_interface -s 128 -w whatever.cap on either the server or client will suffice.
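Put together, the above might look like this (server hostname and interface name are placeholders; substitute your own):

```shell
# Hypothetical host/interface names -- substitute your own setup.
SERVER=netperf.example.com

# A) 8 and 16 upload streams (tcp_nup test, 20 second runs):
flent tcp_nup -H $SERVER -l 20 --socket-stats --te upload_streams=8  -t up-8
flent tcp_nup -H $SERVER -l 20 --socket-stats --te upload_streams=16 -t up-16

# C) Truncated packet capture of a single-stream test, started on
# either endpoint while the test runs:
tcpdump -i eth0 -s 128 -w single-stream.cap &
flent tcp_upload -H $SERVER -l 20 -t single-up
kill %1
```

The -s 128 snap length keeps the capture small while preserving the TCP headers, which is all that's needed for the RTT analysis.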
I am temporarily relieved; it's just that the drop scheduler in the paper was too aggressive above 100Mbit... and we haven't tested that on this hardware...
Anyway this more clearly shows two of the flows are "stuck":
I have a self built NAS, it has a 1Gb/s NIC in it. I ran speedtest (Ookla) on it just now:
Speedtest by Ookla
Server: Jonaz B.V. - Amersfoort (id = 10644)
ISP: KPN
Latency: 3.35 ms (0.11 ms jitter)
Download: 502.56 Mbps (data used: 238.9 MB)
Upload: 598.88 Mbps (data used: 1.1 GB)
Packet Loss: 0.0%
Result URL: https://www.speedtest.net/result/c/812e0f44-d77e-4a36-9a1e-35e19707c1c6
It's getting older now, built about 6 years ago with low energy consumption in mind, but it still fits the bill: quad core, 8GB RAM.
Would this be decent enough as a server? I'm running @ACwifidude 's NSS build (21.02) on my R7800. On the other R7800 I'll be happy to sysupgrade to a build from @KONG with the settings that are used at 100Mbps, so I can produce some flent output to see what happens above 100Mbps. I'll need to try to match the same circumstances, of course.