Please: hit it with 128 flows at that speed. We should see some hash collisions, which is OK, and from those we can infer how many queues they really have.
THEN the all-seeing, all-knowing rrul test.
I'm very impressed that it only eats 3% of CPU at this speed. I'd be even more impressed if it operated correctly.
The estimated completion time is a little off; at first I thought flent had just gotten stuck.
I set 900000 up/down in the QoS settings:
tc -s qdisc | grep burst
qdisc nsstbl 1: dev eth0 root refcnt 2 buffer/maxburst 112500b rate 900Mbit mtu 1514b accel_mode 0
qdisc nsstbl 1: dev nssifb root refcnt 2 buffer/maxburst 112500b rate 900Mbit mtu 1514b accel_mode 0
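As a sanity check on the autoconfigured buffer (this is my own arithmetic, not anything from the nsstbl source): 112500 bytes is exactly one millisecond worth of data at 900Mbit, i.e. rate / 8 / HZ with a 1000Hz tick.

```python
# Sanity check: the autoconfigured buffer/maxburst of 112500b in the
# tc output above looks like exactly one 1ms timer tick worth of bytes
# at the configured rate. (My assumption -- I haven't read the driver.)

rate_bps = 900_000_000      # 900 Mbit/s from the QoS settings
hz = 1000                   # assumed 1ms shaper tick

burst_bytes = rate_bps / 8 / hz
print(burst_bytes)          # 112500.0 -- matches the tc output
```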
Still pretty much no load on the router. The box where netserver runs is only a tiny Celeron (Intel(R) Celeron(R) N4100 CPU @ 1.10GHz); hopefully it is fast enough not to cause any issues in the runs.
OK, that's puzzling. The odds were good that we'd see a third-tier hash collision here, and we don't, and the bimodal distribution is odd... far too many flows land in this other tier for it to be the birthday paradox with 1024 hash buckets.
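For reference, the birthday math (assuming a 1024-bucket hash, per the "flows 1024" above): with 128 flows, the expected number of colliding pairs is C(128,2)/1024 ≈ 7.9, so at least one collision was near-certain.

```python
from math import comb, exp

buckets = 1024   # assumed hash table size ("flows 1024")
flows = 128

# Expected number of flow pairs that share a bucket
expected_pairs = comb(flows, 2) / buckets
# Probability of at least one collision (Poisson approximation)
p_collision = 1 - exp(-expected_pairs)

print(round(expected_pairs, 1), round(p_collision, 4))  # 7.9 0.9996
```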
And codel should have controlled all the RTTs here despite the collision - throughput would have differed, but the observed latencies should eventually have been the same.
This one doesn't have that spike. Dang it. 50GB of memory used up for a good cause, though. Doing more than one plot OOMed my laptop as well. No need to do -l 300 again for a while...
This is a pretty normal-looking result. Note that I am not recommending 16 flows to your end users; it was just to see whether codel was behaving correctly.
OK, in looking at this and the 3 others, we hit a limit of 800Mbit/sec hard for some reason. It should have been about 870, I think, though I haven't done the math. Try adding 20% or so to your burst parameter?
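Doing that math now, back of the envelope (my own assumptions: the shaper accounts full 1514-byte Ethernet frames, and each frame carries 1448 bytes of TCP payload, i.e. MSS 1460 minus 12 bytes of TCP timestamps):

```python
# Rough expected TCP goodput under a 900 Mbit shaper.
# Assumptions (mine): shaper counts whole 1514-byte frames, and each
# carries 1448 bytes of TCP payload (MSS 1460 minus TCP timestamps).

shaped_rate_mbit = 900
wire_frame = 1514        # Ethernet + IP + TCP headers + payload
payload = 1448

goodput = shaped_rate_mbit * payload / wire_frame
print(round(goodput, 1))   # 860.8 -- so 800 is well short of expected
```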
Hmm, I don't know where that 800Mbit limit is coming from. It's mildly higher with a larger burst size, and the ping is more stable (again, we are engaging the codel component harder here).
For laughs, try fq_codel quantum 300? We let that default to the MTU at higher rates (in software), but in practice the smaller quantum helps at lower loads (at the cost of a lot more CPU).
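A sketch of how that might look with plain tc; the exact qdisc name (fq_codel vs an NSS-accelerated variant), handle, and parent are guesses on my part, so adapt them to whatever `tc -s qdisc` shows on your box:

```shell
# Hypothetical: set a 300-byte quantum on the leaf qdisc.
# Device/parent/handle below are guesses -- check `tc -s qdisc` first.
tc qdisc change dev eth0 parent 1: fq_codel quantum 300
```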
Another test would be to bump the rate up to 1GBit with the autoconfigured burst param and compare the shaped result at that speed vs just having fq_codel on without any shaper at all at that speed, still with flows 16.
In terms of even wilder speculation, double the packet limit. Thanks for all your help!
OK, I tried a couple of things, including the quantum change and qlen; nothing really changed things, but if I set the rate to 950000, then we get 50000 more, with the same pings etc.
The only explanation for that is a bug in some calculation.
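For what it's worth, both data points fit a constant ~100Mbit shortfall rather than a percentage. Proportional overhead accounting would make the gap grow with the rate; a flat offset smells more like a units or subtraction bug:

```python
# Configured rate -> achieved rate, from the runs above (Mbit/s).
# 900000 gave ~800; 950000 gave 50000 more, i.e. ~850.
observed = {900: 800, 950: 850}

# If the shortfall were proportional (e.g. header overhead), the gap
# would grow with the rate; instead it is a flat 100 Mbit both times.
gaps = [cfg - got for cfg, got in observed.items()]
print(gaps)   # [100, 100] -- a constant offset, not a percentage
```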