I recently had a service speed upgrade from a touted 200/100 to a touted 950/450. Previous speed testing would reliably show 200 down and 100 up over the wired connection; however, speed testing on the new service is topping out at about 250 down by 200 up.
Now I know these are ridiculous speeds, so I'm happy enough... but I'm curious whether the ISP isn't providing the promised speeds or whether the throughput of the AC1900AC is simply at its maximum at around 250.
Yes, I do have SQM enabled and had increased my bandwidth settings; however, it (understandably) appears that the throughput is too much for SQM. I retested with SQM off and reached peak uploads of 500 Mbps and downloads of 450 Mbps.
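In case it's useful, here is a minimal sketch of checking and raising the SQM bandwidth settings from the shell (the section reference and the numbers are illustrative, not recommendations; sqm-scripts takes download/upload in kbit/s):

    uci show sqm                             # inspect current SQM settings
    uci set sqm.@queue[0].download=900000    # ~900 Mbit/s down (illustrative)
    uci set sqm.@queue[0].upload=400000      # ~400 Mbit/s up (illustrative)
    uci commit sqm
    /etc/init.d/sqm restart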
I'm on 18.06.5 firmware. Are there performance enhancements available in the 19.x firmware?
OpenWRT 18.06 uses kernel 4.14?
The mvneta driver included there has a default RX ring size of 128 packets (TX: 532 packets)
and uses a default RX interrupt coalescing interval of 100 microseconds.
Is it possible that the queue is simply too small,
and that it isn't drained fast enough because of the 100-microsecond interrupt interval?
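For what it's worth, both defaults can be inspected on the device with ethtool (the interface name eth0 is an assumption; substitute your WAN interface):

    ethtool -g eth0    # ring buffer sizes (current vs. driver maximum)
    ethtool -c eth0    # interrupt coalescing settings (rx-usecs, rx-frames)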
How do you test your speed?
For example, I use my ISP's speedtest server, as I don't want to hit inter-ISP peering bottlenecks.
And you should also test without your router (modem plugged directly into the PC) if you think you're not getting your full bandwidth.
I use the speedtest package on my Ubuntu server, which seems to be backed by Ookla; I haven't tweaked it at all. My broadband service requires/provides no modem: it's a fiber-to-the-home service that terminates in an Ethernet jack. From the router I can see the MAC address of the Cisco device on the ISP's end.
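If it is the official Ookla CLI, you can pin the test to a specific server so it stays on your ISP's network (the server ID below is purely illustrative):

    speedtest -L          # list nearby servers with their IDs
    speedtest -s 12345    # run against a specific server ID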
Does anyone know how to properly calculate the optimal ring buffer size?
For example:
A 1 Gbit/s link at 50 ms latency needs a 6,250,000-byte buffer (the bandwidth-delay product).
6,250,000 / 1500-byte frames ≈ 4166 packets, spread over those 50 ms, i.e. roughly 83 packets per ms.
Assuming the interrupt interval is 100 microseconds:
83 / 10 ≈ 8.3 packets arrive between interrupts.
So the minimum ring buffer / queue size is whatever arrives between interrupt services (~9 packets), plus headroom for delayed servicing?
Is this correct?
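As a sanity check on the arithmetic, here is a one-liner that computes how many full-size frames arrive per 100-microsecond interval at 1 Gbit/s line rate (the frame size and interval are the assumptions from above):

    awk 'BEGIN { bytes_per_sec = 1e9 / 8; frame = 1500; interval_us = 100;
                 printf "%.1f packets per interval\n", bytes_per_sec * interval_us / 1e6 / frame }'

On that reading, the default 128-packet ring covers roughly 1.5 ms of line-rate traffic, so it should only overflow if interrupt servicing is delayed well beyond the 100-microsecond interval.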
OpenWRT 19.07 also uses kernel 4.14 for mvebu, which still has the mvneta driver with its ring buffer / queue size of 128.
Unfortunately, it is not possible to increase the ring buffer sizes with ethtool at runtime; they can only be decreased (a limitation of the mvneta driver).
But it is possible to adjust rx-usecs and rx-frames.
For example:
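    # interface name and values are illustrative, not recommendations
    ethtool -C eth0 rx-usecs 30 rx-frames 32    # fire the RX interrupt sooner / after fewer frames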