On Fri, 10 Mar 2017, moeller0 wrote:
... would be quite thankful if you could elaborate a bit more how much
measurable good-put one can expect from typical wifi.
In this case, I didn't realize that he was using an AP and a device that both
support two streams; I thought he was using basic 802.11n (which I believe only
supports one stream). If it should have supported two, the result was a little
low, as he saw when he switched to a different device, but not unreasonable.
The problem with predicting good-put is that there are just so many variables.
Just looking at airtime usage at high data rates, you will see that the wifi
headers per timeslot overwhelm the data transport.
IIRC, you have to transmit somewhere around 8K of data to equal the airtime used
by the per-transmission overhead, so transmitting 8K takes about twice the
airtime of transmitting 64 bytes.
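To put rough numbers on that, here is a little Python sketch of the airtime
math. The PHY rate and the fixed per-transmission overhead are assumptions I
picked to roughly match that rule of thumb, not measured values:

# Rough airtime model for a single wifi transmission (all numbers are
# illustrative assumptions, not measurements).
PHY_RATE_BPS = 300e6   # assumed 2-stream 802.11n PHY rate
OVERHEAD_US = 220.0    # assumed fixed cost per transmission: contention,
                       # preamble/PLCP, SIFS, ack -- roughly the airtime of
                       # 8K of payload at this rate

def airtime_us(payload_bytes):
    # fixed overhead plus payload time at the PHY rate, in microseconds
    return OVERHEAD_US + payload_bytes * 8 / PHY_RATE_BPS * 1e6

for size in (64, 1500, 8192, 65535):
    t = airtime_us(size)
    print(f"{size:6d} bytes: {t:7.1f} us airtime, effective {size * 8 / t:6.1f} Mb/s")

With these made-up numbers, the 8K transmission takes about twice the airtime of
the 64-byte one but delivers roughly 65x the effective goodput.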
When you are doing a speed test, you are transmitting a lot of data in one
direction and only acks in the other. Each ack that gets transmitted separately
eats up a lot of airtime that could be used by data, so the ability of the
endpoint OS to thin out acks and batch them makes a huge difference, as do the
details of the wifi drivers and buffering in the endpoint.
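As a rough illustration of why ack handling matters so much, the same toy
airtime model can estimate how much of the airtime the acks consume at
different ack rates (the segment size, aggregate size, and ack frame size are
again just assumptions):

# How much airtime do TCP acks eat during a one-way speed test?
# Same toy model as above; all numbers are illustrative assumptions.
def airtime_us(payload_bytes, overhead_us=220.0, phy_bps=300e6):
    return overhead_us + payload_bytes * 8 / phy_bps * 1e6

MSS = 1448              # assumed TCP payload bytes per segment
AGG_BYTES = 64 * 1024   # assumed bytes the AP aggregates into one transmission

def ack_airtime_fraction(segments_per_ack):
    # one ~64-byte ack frame (its own transmission) per N data segments
    segments = AGG_BYTES / MSS
    data_t = airtime_us(AGG_BYTES)
    ack_t = (segments / segments_per_ack) * airtime_us(64)
    return ack_t / (data_t + ack_t)

for n in (1, 2, 8, 32):
    print(f"ack every {n:2d} segments: "
          f"{ack_airtime_fraction(n):5.1%} of airtime spent on acks")

With these assumptions, acking every segment burns most of the airtime, while
stretching acks out to one per 32 segments cuts that to a small fraction.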
If you add bufferbloat monitoring to a raw speed test, you are sending
additional packets to measure the latency, and that can have a huge effect on
the number of transmit slots the endpoint needs to use.
Because of this, even a trickle of other data on the wifi network can have a
significant effect on the measured speed. If you have devices that are
generating any broadcast traffic (Alexa looking for devices to manage, DLNA
servers advertising themselves, many IoT things, etc.), that can eat up valuable
transmission slots that are then not available for your speed tests[1].
One of the recent improvements in LEDE is the introduction of Airtime Fairness
into the ath10k drivers. This prevents any one station from hogging too much
airtime, which does wonders at limiting the damage that can be done by a single
slow station[2]. But a side effect of this is that it tends to cap the size of
any single data transmission. I believe I've seen that some drivers would send
up to 96K of data in a single transmission (up to 64 1500-byte packets), and
this will be trimmed down a bit, slightly lowering good-put but greatly
improving latency.
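For illustration only, here is a toy deficit-style scheduler in the spirit of
airtime fairness (this is not the actual driver code, and the station names,
PHY rates, and quantum are made up); it shows how equal airtime turns into very
unequal data throughput:

# Toy airtime-fair scheduler: every station gets the same airtime credit per
# round and is charged for the airtime its frames actually use.
STATIONS = [("fast-ac", 866.0), ("mid-n", 144.0), ("slow", 1.0)]  # name, PHY Mb/s
QUANTUM_US = 4000.0     # assumed airtime credit added per station per round
FRAME_BYTES = 1500
ROUNDS = 1000

deficit = {name: 0.0 for name, _ in STATIONS}
sent_bytes = {name: 0 for name, _ in STATIONS}

for _ in range(ROUNDS):
    for name, rate_mbps in STATIONS:
        deficit[name] += QUANTUM_US
        frame_us = FRAME_BYTES * 8 / rate_mbps   # airtime of one frame
        while deficit[name] >= frame_us:         # transmit while credit remains
            deficit[name] -= frame_us
            sent_bytes[name] += FRAME_BYTES

for name, _ in STATIONS:
    print(f"{name:8s}: {sent_bytes[name] * 8 / 1e6:7.1f} Mb in the same total airtime")

Each station is offered the same airtime, so it's the slow station's throughput
that collapses instead of everyone else's.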
An additional factor is that wifi tries really hard to play nice with other
stations. When a station is preparing to transmit, it checks to see if the
channel is clear or if something else is transmitting. As a result of this, even
a weak station that's transmitting on a channel you are using can eat an airtime
slot, even if you are right next to the AP and have an overwhelmingly powerful
signal[3].
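A quick bit of arithmetic (with made-up numbers) shows how expensive that
deference can get when the other station is transmitting at a low rate:

# Toy listen-before-talk arithmetic: a weak, unrelated station offering only a
# trickle of traffic at a low PHY rate still keeps the channel busy for a long
# time, and your station defers for all of it no matter how strong your own
# signal is. All numbers are illustrative assumptions.
MY_CLEAR_CHANNEL_MBPS = 100.0   # assumed goodput with the channel to yourself
FOREIGN_OFFERED_MBPS = 2.0      # the interferer only offers 2 Mb/s of traffic

for foreign_phy_mbps in (54, 6, 1):
    busy = min(1.0, FOREIGN_OFFERED_MBPS / foreign_phy_mbps)
    print(f"interferer at {foreign_phy_mbps:2d} Mb/s PHY: channel busy {busy:5.1%}, "
          f"my goodput ~{MY_CLEAR_CHANNEL_MBPS * (1 - busy):5.1f} Mb/s")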
The delays that you can suffer on wifi will look like bufferbloat to something
like dslreports. As a result, you should really do your bufferbloat tests
separately from wifi tests.
Because of all these things, wifi good-put can easily swing by far more than the
15% difference noted here without you having any idea why (unless you are doing
detailed captures of the RF environment at both the AP and the endpoint and then
analyzing them carefully), unless you happen to be out in the boonies where you
don't have anyone else around.
Doing wifi speed tests is good for comparing different wifi configs (different
channels, different antennas, etc.). But unless your connection is very bad,
it's not really going to tell you much about the bufferbloat in your wired
devices.
I really wish I had a good way of predicting good-put, but mostly it boils down
to 'wifi is worse than you ever imagined'. Most of the numbers that people are
concerned about are not that unreasonable, and they may be possible to improve,
but unless you do an RF analysis or show that a different combination works much
better in that area, and unless you are using multiple channels or -ac with
multiple streams, good-put in the ballpark of 40Mb/s in a speed test isn't
shabby.
David Lang
[1] This is a really good argument for routing between 2.4G, 5G, and wired
subnets instead of bridging them. Most IoT things still tend to be 2.4G only.
[2] Prior to this, Linux would try to send the same amount of data to each
station, and if one is 1000x slower than the others, it would get 1000x the
airtime. With these changes, they all get roughly equal airtime, so the station
that's 1000x slower will get 1000x less data throughput instead. In a high-end
-ac environment, the difference between the fastest and the slowest station can
literally be 1000x, but even in 802.11n setups you can get to 100x.
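A quick back-of-the-envelope comparison of the two policies, for two stations
with a hypothetical 1000x rate difference (the numbers are illustrative, not
from any real setup):

# Equal data vs equal airtime for two stations sharing one second of airtime.
fast_mbps, slow_mbps = 1000.0, 1.0

# Equal data: both stations get the same number of bytes, so nearly all of the
# airtime goes to the slow station and everyone is held to roughly its pace.
#   fast_mbps * t_fast == slow_mbps * t_slow, with t_fast + t_slow == 1
t_slow = fast_mbps / (fast_mbps + slow_mbps)
equal_data_total = 2 * slow_mbps * t_slow

# Equal airtime: each station gets half the airtime at its own rate.
equal_airtime_total = 0.5 * fast_mbps + 0.5 * slow_mbps

print(f"equal data   : {equal_data_total:7.2f} Mb/s aggregate")
print(f"equal airtime: {equal_airtime_total:7.2f} Mb/s aggregate")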
[3] This has the perverse effect that if you have better antennas on your AP,
your throughput will decrease because it is better at hearing unrelated stations
in your area. Sometimes just putting something metal between your AP and the
direction of an interfering station can greatly improve conditions for both
groups. Similarly, going to wider channels means that there are more things out
there that can clobber you, so unless you are in a fairly quiet RF environment,
and have only a small number of stations, you are probably better off using more
APs, each on a single channel, than using wider channels.