802.11ax worse than 802.11ac with mt76 driver?

Good. We just don't know. There is a real chance of getting measurements in this forum thread with settings involving multiple clients. Good that you were fast with your feedback, so we can conclude that multiple-client stream handling (especially for RX) could also be improved upon; but even if that were fixed, it most likely will not fix the issues you are having.

My dream is that y'all test latency simultaneously with throughput, especially in the problematic cases with walls in the way. Even taking apart packet captures (via tcptrace -G and xplot.org) of iperf3's RTTs would be preferable to just seeing y'all report bandwidth figures.
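
(For the crudest possible version of that, not a substitute for real tooling: run a ping alongside the iperf3 load and compare idle latency against loaded latency. The address here is just a placeholder for your iperf3 server.)

ping -c 30 192.168.178.39                      # baseline latency, idle network
ping -i 0.2 192.168.178.39 > loaded-rtt.txt &  # keep probing...
iperf3 -c 192.168.178.39 -t 60                 # ...while saturating the link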

I can note a similar issue with this. I set up a WAX202 for my church in a WWAN configuration. I first mounted it parallel to a wall (hoping to present a low profile, save space, and keep the LEDs out of the congregation's direct view). From a client in the same room approximately 5 meters away (the sanctuary, no obstructions), I could not maintain a connection or Internet access. Immediately after rotating the device 90 degrees to be perpendicular to the wall, I had zero issues.


LEDs are controllable.

Yes I know, but this happened on first boot at the location, lol.

Not sure how that's relevant to what I discovered, though.

Could you please give a simple example of how to do it and what it should look like? tcptrace seems to be a very old program (from 2003), and I am running Windows on my machine, which it apparently was not designed for.

Edit: I will try to set up OWAMP or TWAMP, as suggested by this website: https://kadiska.com/measure-network-latency/

Do the packet capture on Windows via Wireshark. Wireshark also has a pretty decent Statistics -> TCP Stream Graphs -> Round Trip Time. What I'm mostly looking for is large gaps or spikes in the RTT, which would point at the radio's rate seeking malfunctioning, or uncontrolled RTT growth, like I show below.
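
If you'd rather script the capture than click through the GUI, one sketch is dumpcap, the capture engine that ships with Wireshark (on Windows too); the interface number and filename below are just examples:

dumpcap -D                         # list available capture interfaces
dumpcap -i 1 -w iperf-test.pcapng  # capture on interface 1 while iperf3 runs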

Yeah, tcptrace -G and xplot.org are ancient tools and only work well on Linux, but they generally produce more detailed output.
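
For reference, the Linux workflow with those old tools is roughly this sketch (tcptrace may want classic pcap rather than pcapng, hence the conversion step; a2b_rtt.xpl is the RTT graph tcptrace emits for the first connection it finds):

tshark -r iperf-test.pcapng -w iperf-test.pcap -F pcap  # convert if needed
tcptrace -G iperf-test.pcap                             # emit per-connection .xpl graphs
xplot.org a2b_rtt.xpl                                   # view RTT over time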

Heading deeper, if it's more that rate control is the problem here, actually taking captures of the 802.11 frames might be revealing. Lack of packet aggregation proved to be a problem on an mt76 I took apart recently... you can indirectly see aggregation in action by observing layer-three packet arrivals all in a bunch...
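
One rough way to see those bunches in an ordinary layer-3 capture is to dump per-packet inter-arrival times (5201 is iperf3's default port; adjust to your test). Aggregated transfers show up as clusters of near-zero deltas separated by multi-millisecond gaps:

tshark -r iperf-test.pcap -Y "tcp.port == 5201" \
       -T fields -e frame.time_delta_displayed -e frame.len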

Anyway, here's an example of a packet capture of iperf3, using Wireshark, running over LTE. Over the course of 10 seconds the RTT gradually grows to over 1.5s (with no end in sight!). I've seen wifi be even worse... A bidir test will utterly go to hell here...


irtt is far more modern and fine-grained than TWAMP, and easier to set up... it's written in pure Go...
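
A minimal sketch of getting it going (assuming a Go toolchain is installed; -i and -d are irtt's probe interval and test duration):

go install github.com/heistp/irtt/cmd/irtt@latest
irtt server                             # on the far end, e.g. the machine behind the AP
irtt client -i 10ms -d 30s <server-ip>  # send a probe every 10ms for 30 seconds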


I managed to install and configure TWAMP:

Client ← Wifi 802.11ax 5G ch40 80MHz, 2 walls (~5 meters) ~ -69 dBm → D-Link DAP x1860 OpenWrt snapshot 2022-12-22 ← Wifi 802.11ac 5G ch40 80MHz, 1 wall → Fritzbox 7490 OEM ← Gigabit LAN → Server

I tried to use Wireshark and found the "Round Trip Time" graph, but was not sure which packets to select and how to measure only specific traffic from within my local network. Using the "just try things" method, I downloaded a large 2-gigabyte file from a more or less reliable website (Linux Mint) and then selected a random packet from this large download, which gave me these results:

Client ← Wifi 802.11ax 5G ch40 80MHz, 2 walls (~5 meters) ~ -69 dBm → D-Link DAP x1860 OpenWrt snapshot 2022-12-22 ← Wifi 802.11ac 5G ch40 80MHz, 1 wall → Fritzbox 7490 OEM → Internet

I think this is fair enough. Not the best, but given that there are 3 walls, that my Fritzbox 7490 is quite old hardware, and that the endpoint is on the Internet with who knows how many servers in between, I think it is acceptable, considering I have not tried to optimize my router and wifi configuration in any way.

I was not sure if I should post these results, since I have not found grave issues, but who knows, maybe you can do more with them than I can.

@dtaht How can you find the traffic you generated via iperf3 in Wireshark? Edit: Once we have found it, which packets should we choose for the round trip time graph?

In the screenshot below, you can see packets from an iperf3 test with the following settings:

  • iperf3 -c 192.168.178.39 -p 5201 -P 8 -t 60 -i 10 --bidir
  • Iperf Client ← Wifi 802.11ax 5G ch40 80MHz, 2 walls (~5 meters) ~ -69 dBm → D-Link DAP x1860 OpenWrt snapshot 2022-12-22 ← Wifi 802.11ac 5G ch40 80MHz, 1 wall → Fritzbox 7490 OEM ← Gigabit LAN → Iperf Server 192.168.178.39

There are hundreds of thousands of packets to choose from, but which one(s) should we select?

I clicked on one of the packets and got this result:

But when I click on another random packet, results can be slightly different.

The stream graphs start at the identified beginning of the flow and plot from there, so selecting any packet in the flow should give you more or less the same result from the Wireshark stream graphs.

See how much larger the RTT variance is on your last plot? Spikes as high as 60ms, while the vast majority sit close to zero? (This is essentially a zero length path.)

(Also, selecting a throughput graph on the same data and looking at both at the same time is revealing - part of why I use xplot, but whatever.)

From what I see of both plots, you are not aggregating very well at all; the min RTT should have been hovering at at least 4ms, but instead it is closer to 1. Good aggregation gives 60% or more better throughput...

You can also see packet loss and retransmits...

There's another sort of packet capture you can take, of the 802.11 frames "in the air", to see how many A-MPDUs are being sent.

Clicking on any random packet within a flow (src, dst, src port, dst port, and proto should all be the same) should give you the same graph no matter where in the flow you clicked.
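
To find that flow in the first place, a Wireshark display filter along these lines (using the default iperf3 port and the server address from the test above) cuts the packet list down to just the iperf3 traffic:

tcp.port == 5201 && ip.addr == 192.168.178.39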

@amteza I kind of burned out for a while. Did you get anywhere on the NAPI stuff? (Merry Christmas!)

Words I don't understand:

  • "zero length path" - Length of what? Do you mean "zero latency path"?

Further questions:

How do you achieve good aggregation? What should good aggregation look like?

"zero base latency" path. I use length, rather than "latency", because it's the speed of light I think about. A nanosecond is about a foot, so...

A full TXOP of aggregated packets can be as big as 5.7ms (and we can usually get to 2 or 3ms), so 30ns vs 15ms is close enough to "zero". Sorry to be unclear.
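
Back-of-the-envelope, using the 866 Mb/s PHY rate that comes up later in this thread (and ignoring PHY/MAC overhead and A-MPDU size limits):

866 Mb/s × 5.7 ms ≈ 4.9 Mbit ≈ 617 KB, i.e. on the order of 400 full-size 1500-byte frames in one maximal TXOP.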

There are a lot of things that can prevent good aggregation - notably not negotiating the capability at all, a driver problem, or excessive packet loss. Also, the notion of "good" is fungible. 8 minutes of this talk explain how badly we used to do aggregation: https://www.youtube.com/watch?v=Rb-UnHDw02o&t=1560s - and the mt76 is supposed to be doing this far, far more right than it was then - but given all the complaints in every direction, not just on this, but on power too, I'm beginning to suspect something else is rather wrong... [1]

Anyway, the theory that you are not aggregating well is just a theory for now. Here's how to capture the wifi management frames on a variety of OSes, which could establish the truth, as opposed to a theory:

https://wiki.wireshark.org/CaptureSetup/WLAN
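
On Linux (including OpenWrt) the usual sketch looks roughly like this, assuming the driver supports monitor mode; phy0 and mon0 are illustrative names, and the radio must be tuned to the channel under test:

iw phy phy0 interface add mon0 type monitor  # add a monitor interface on the radio
ip link set mon0 up
tcpdump -i mon0 -w air.pcap                  # capture raw 802.11 frames while the test runs

Open air.pcap in Wireshark and check the radiotap/802.11 headers to see whether data frames are going out as A-MPDUs.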

[1] And I will have to get my hands on an mt76 chip to dig deeper myself.


Happy summer holidays. The last weeks were crazy, and I just returned from a trip with my family, so I've gotten nowhere. However, before leaving, I left the latest master build running with a new patch that fixes the NAPI settings to our previously tested value. My devices are usually running with latencies under load below 8 ms. Is that related to the NAPI weight being lowered? Maybe, but I am not sure.

My devices are mt76 (7615e) AC devices, not AX (7915).

For completeness: the test ran on r23234, with the usual patches, at 3 metres from the AP, 866 Mb/s 2x2 MIMO, MCS 9 and NSS 2. The ISP connection is 1000/50, SQM is set to 800/42, and the WiFi network is an AC network. The network was only in use by a Roblox game, YouTube, and a FaceTime call, apart from the Waveform test, of course:

A quick comparison of AX vs AC, both on an 80MHz channel (64), one floor apart (so a few meters), on an rt3200 with a 7915 chip, running OpenWrt r21885 with all offloading enabled, including wifi downstream, tested from a MacBook Air M2. I even saw RTTs over 1000 ms on a couple of the AX runs. The iperf server is a Raspberry Pi connected via Ethernet.

AX
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd          RTT
[  7]   0.00-1.00   sec  6.04 MBytes  50.6 Mbits/sec  408336   1.34 MBytes   412ms
[  7]   1.00-2.00   sec  39.6 KBytes   325 Kbits/sec    0    987 KBytes   662ms
[  7]   2.00-3.00   sec  1.60 MBytes  13.4 Mbits/sec  653944    822 KBytes   489ms
[  7]   3.00-4.01   sec  1.53 MBytes  12.8 Mbits/sec    0   1.11 MBytes   596ms
[  7]   4.01-5.01   sec   382 KBytes  3.13 Mbits/sec    0   1.05 MBytes   512ms
[  7]   5.01-6.00   sec  2.91 MBytes  24.6 Mbits/sec    0    877 KBytes   627ms
[  7]   6.00-7.00   sec  2.35 MBytes  19.6 Mbits/sec    0   1.08 MBytes   573ms
[  7]   7.00-8.00   sec   524 KBytes  4.30 Mbits/sec    0   1.00 MBytes   485ms
[  7]   8.00-9.00   sec  2.68 MBytes  22.5 Mbits/sec    0   1.08 MBytes   572ms
[  7]   9.00-10.00  sec  2.09 MBytes  17.5 Mbits/sec    0   1.10 MBytes   663ms
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  7]   0.00-10.00  sec  20.1 MBytes  16.9 Mbits/sec  1062280             sender
[  7]   0.00-10.60  sec  18.7 MBytes  14.8 Mbits/sec                  receiver

AC
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd          RTT
[  7]   0.00-1.00   sec  43.9 MBytes   368 Mbits/sec  4881208   1.48 MBytes   20ms
[  7]   1.00-2.00   sec  50.0 MBytes   420 Mbits/sec  2896000   1.53 MBytes   23ms
[  7]   2.00-3.00   sec  50.4 MBytes   423 Mbits/sec  1448000   1.57 MBytes   16ms
[  7]   3.00-4.00   sec  51.3 MBytes   431 Mbits/sec    0   1.61 MBytes   35ms
[  7]   4.00-5.00   sec  44.5 MBytes   373 Mbits/sec  1531984   2.04 MBytes   25ms
[  7]   5.00-6.00   sec  42.5 MBytes   356 Mbits/sec  1743392   2.05 MBytes   51ms
[  7]   6.00-7.00   sec  45.2 MBytes   380 Mbits/sec    0   2.06 MBytes   38ms
[  7]   7.00-8.00   sec  47.8 MBytes   401 Mbits/sec  1867920   1.56 MBytes   33ms
[  7]   8.00-9.00   sec  46.2 MBytes   386 Mbits/sec  1614520   1.18 MBytes   62ms
[  7]   9.00-10.00  sec  44.7 MBytes   375 Mbits/sec  1669544   1.90 MBytes   35ms
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  7]   0.00-10.00  sec   466 MBytes   391 Mbits/sec  17652568             sender
[  7]   0.00-10.02  sec   464 MBytes   388 Mbits/sec                  receiver

Try a 160MHz channel as a workaround.

Are you using iperf2 on the Mac to test it?

Tried that too, but it didn't help.

iperf3-darwin is available on macOS.
