AQL and the ath10k is *lovely*

git, ssh, and android are no fun, tomorrow for a patch?

1.5 years ago! :roll_eyes:

I've got a spare device that might run properly with Linux.

1 Like

I feel your pain; I just need a pointer on where to insert the printk() in the right place. Trying to find it myself would take a while otherwise.

1 Like

Quick teaser, after having an idea:

Download:

Upload:

5 Likes

That's really pretty.

Pretty huge throughput hit, though. Unless that's VHT80? I'm not complaining! Given a choice between household members being able to game, videoconference, watch movies and surf the web with zero glitches, or the maximum throughput possible, I'll always choose the former. Ripping latency out usually ends up with finding ways to get more bandwidth back, just with different techniques like zero copy or fq, or stuff we haven't thunk of yet.

I'd like us to move back over here:

and have some feedback on just the ath10k on this bug from other folk.

1 Like

So let me explain the speed hit: these tests were done with the same parameters, including MS2TIME and poll both set to 8. However, I had an idea after reading your feedback about Apple's fq_codel upload buffering (which can be seen clearly in the log as the list of flows). Usually I work in a different room where my computer (macOS) is connected to a small switch (an ERX running OpenWrt), which is connected to another NanoHD that connects over WDS to my main AP. As follows:

macOS <--USB dongle/eth--> switch <--eth--> NanoHD <--4x4 MIMO WDS--> NanoHD <--eth--> RPi4 (irtt/netserver)

With this setup I'm not using the macOS wireless but the NanoHD-to-NanoHD link. I thought our latency problem was in the macOS wifi CoDel; this removes it from the path, and it seems to have worked. But the wireless connection between these two devices is not as good due to distance, and the rate varies (which explains the bandwidth hit); for example, right now it is:

780.0 Mbit/s, 80 MHz, VHT-MCS 4, VHT-NSS 4, Short GI
650.0 Mbit/s, 80 MHz, VHT-MCS 7, VHT-NSS 2, Short GI
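Those two rates are consistent with the standard 802.11ac PHY math (data subcarriers × bits per subcarrier × coding rate × spatial streams ÷ symbol time); a quick sketch to sanity-check them:

```python
from fractions import Fraction

# Data subcarriers per channel width (802.11ac)
DATA_SUBCARRIERS = {20: 52, 40: 108, 80: 234, 160: 468}

# (modulation bits per subcarrier, coding rate) for each VHT MCS index
MCS = [
    (1, Fraction(1, 2)),  # MCS 0: BPSK 1/2
    (2, Fraction(1, 2)),  # MCS 1: QPSK 1/2
    (2, Fraction(3, 4)),  # MCS 2: QPSK 3/4
    (4, Fraction(1, 2)),  # MCS 3: 16-QAM 1/2
    (4, Fraction(3, 4)),  # MCS 4: 16-QAM 3/4
    (6, Fraction(2, 3)),  # MCS 5: 64-QAM 2/3
    (6, Fraction(3, 4)),  # MCS 6: 64-QAM 3/4
    (6, Fraction(5, 6)),  # MCS 7: 64-QAM 5/6
    (8, Fraction(3, 4)),  # MCS 8: 256-QAM 3/4
    (8, Fraction(5, 6)),  # MCS 9: 256-QAM 5/6
]

def vht_phy_rate_mbps(mcs, nss, width_mhz=80, short_gi=True):
    """Data bits per OFDM symbol divided by symbol time (3.6 us with SGI, 4 us without)."""
    bits, coding = MCS[mcs]
    symbol_us = Fraction(36, 10) if short_gi else Fraction(4)
    bits_per_symbol = DATA_SUBCARRIERS[width_mhz] * bits * coding * nss
    return float(bits_per_symbol / symbol_us)  # Mbit/s

print(vht_phy_rate_mbps(4, 4))  # 780.0, matching the first reported rate
print(vht_phy_rate_mbps(7, 2))  # 650.0, matching the second
```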

Can you point me to where I should put the printk() or dev_info() to ensure the poll value is correct in my image? I wasn't able to make it work.
And with that bombshell, I'm moving on to the other thread. I guess our work with AQL is done here, as 22.03.0 works flawlessly, at least according to my tests.

what does the rrul_be test look like on this topology?

still far from something I can git on

Hi all, I am not sure if this is related or not ... but the "vanilla" ath10k-firmware-qca4019-ct firmware, in the setup described below with ~3 tasmota clients, was resulting in large latency spikes and dropped packets in OpenWrt 22.03.0. Switching to ath10k-firmware-qca4019-ct-full-htt seems to fix it for my device (GL-B1300).

Setup:

OpenWrt version: 22.03.0
Device: GL.iNet GL-B1300

wifi config


config wifi-device 'radio0'
        option type 'mac80211'
        option hwmode '11g'
        option path 'platform/soc/a000000.wifi'
        option htmode 'HT20'
        option channel '11'
        option country 'AU'
        # option disabled '1'

config wifi-device 'radio1'
        option type 'mac80211'
        option hwmode '11na'
        option path 'platform/soc/a800000.wifi'
        option htmode 'VHT80'
        option channel '36'
        option country 'AU'
        option disabled '1'


config wifi-iface 'default_radio0'
        option device 'radio0'
        option network 'lan'
        option mode 'ap'
        option key 'REDACTED'
        option ssid 'mainline'
        option encryption 'sae-mixed'
        option ieee80211w '1'
        option macaddr '<REDACTED>:01'
        option disassoc_low_ack '0'



config wifi-iface 'wifi_iot_2_4'
        option ssid 'internetofthings'
        option encryption 'psk2+ccmp'
        option device 'radio0'
        option mode 'ap'
        option ieee80211w '1'
        option key 'REDACTED'
        option network 'iot'
        option macaddr 'REDACTED:02'
        option disassoc_low_ack '0'


config wifi-iface 'wifi_guest_24'
        option network 'guest'
        option ssid 'ourguestnetwork'
        option encryption 'sae-mixed'
        option device 'radio0'
        option mode 'ap'
        option ieee80211w '1'
        option key 'REDACTED'
        option macaddr 'REDACTED:03'
        option disassoc_low_ack '0'

The relevant package information, I hope, is as follows:

opkg list-installed|grep ath
ath10k-board-qca4019 - 20220411-1
ath10k-firmware-qca4019-ct-full-htt - 2020-11-08-1
kmod-ath - 5.10.138+5.15.58-1-1
kmod-ath10k-ct - 5.10.138+2022-05-13-f808496f-1
opkg list-installed|grep host
hostapd-common - 2022-01-16-cff80b4f-12
opkg list-installed|grep openssl
libopenssl1.1 - 1.1.1q-1
wpad-openssl - 2022-01-16-cff80b4f-12

I don't know if it is related or not either. Over the past few months of development we've had to look at the long-term behavior of each chipset and multiple driver combinations, using flent to drive the tests. Any chance you could run those?

2 Likes

Sure. Do you suggest testing using flent as per AQL and the ath10k is *lovely* - #36 by dtaht

flent -H some_server_on_the_other_side -t sometitle --te=upload_streams=4 --socket-stats tcp_nup

or

flent -H the_server_ip --step-size=.04 --socket-stats --te=upload_streams=16 tcp_nup
(AQL and the ath10k is *lovely* - #63 by dtaht)

or

flent -x --socket-stats --step-size=.04 -t whatever_is_under_test --te=upload_streams=1 tcp_nup # upload_streams=4, upload_streams=16

(AQL and the ath10k is *lovely* - #181 by dtaht)

?

I will try to test in both directions:

flent -H some_server_on_the_other_side -t sometitle -s .05 --te=upload_streams=4 -X --socket-stats tcp_nup
flent -H some_server_on_the_other_side -t sometitle -s .05 --te=download_streams=4 -X --socket-stats tcp_ndown

I will probably increase it from 4 to 8 and finally to 16 streams. By the way, I'm not experiencing packet loss on my only ath10k device, with ath10k-firmware-qca988x-ct and kmod-ath10k-ct-smallbuffers, but its chipset is different, so it's probably not a good example.

The first run is with ath10k-firmware-qca4019-ct-full-htt and the second with ath10k-firmware-qca4019-ct. It might just be a weird quirk that randomly happens after some time and/or with the full firmware and these tasmota devices. Note: this uses the same configuration as above, so 5 GHz isn't in use here, only 2.4 GHz.

First
[graph image]

Second
[graph image]

Summary of tcp_ndown test run from 2022-09-11 02:58:19.392528
  Title: 'first-download'

                             avg       median          # data pts
 Ping (ms) ICMP   :        28.68        29.35 ms              910
 TCP download avg :        13.65          N/A Mbits/s        1277
 TCP download sum :        54.59          N/A Mbits/s        1277
 TCP download::1  :        12.57        17.55 Mbits/s        1277
 TCP download::2  :        15.24        22.21 Mbits/s        1277
 TCP download::3  :        13.58        16.85 Mbits/s        1277
 TCP download::4  :        13.20        18.25 Mbits/s        1277
Summary of tcp_ndown test run from 2022-09-11 03:10:25.597397
  Title: 'second-download'

                             avg       median          # data pts
 Ping (ms) ICMP   :        21.30        21.10 ms             1399
 TCP download avg :        21.27          N/A Mbits/s        1399
 TCP download sum :        85.08          N/A Mbits/s        1399
 TCP download::1  :        21.05        20.28 Mbits/s        1399
 TCP download::2  :        20.97        20.62 Mbits/s        1399
 TCP download::3  :        24.21        23.16 Mbits/s        1399
 TCP download::4  :        18.85        18.44 Mbits/s        1399
Summary of tcp_nup test run from 2022-09-11 02:55:26.109736
  Title: 'first'

                                             avg       median          # data pts
 Ping (ms) ICMP                   :        60.71        59.55 ms             1144
 TCP upload avg                   :        12.13          N/A Mbits/s        1380
 TCP upload sum                   :        48.54          N/A Mbits/s        1380
 TCP upload::1                    :        12.86        18.10 Mbits/s        1380
 TCP upload::1::tcp_cwnd          :        75.92        79.00                 917
 TCP upload::1::tcp_delivery_rate :        14.80        14.82                 916
 TCP upload::1::tcp_pacing_rate   :        21.56        21.49                 916
 TCP upload::1::tcp_rtt           :        67.18        59.27                 915
 TCP upload::1::tcp_rtt_var       :         6.29         5.43                 915
 TCP upload::2                    :        11.67        16.18 Mbits/s        1380
 TCP upload::2::tcp_cwnd          :        70.47        75.00                 917
 TCP upload::2::tcp_delivery_rate :        12.43        10.79                 917
 TCP upload::2::tcp_pacing_rate   :        19.65        19.83                 917
 TCP upload::2::tcp_rtt           :        68.74        58.44                 913
 TCP upload::2::tcp_rtt_var       :         8.35         4.85                 913
 TCP upload::3                    :        11.02        14.88 Mbits/s        1380
 TCP upload::3::tcp_cwnd          :        64.00        70.00                 917
 TCP upload::3::tcp_delivery_rate :        12.19        10.36                 917
 TCP upload::3::tcp_pacing_rate   :        19.22        19.47                 917
 TCP upload::3::tcp_rtt           :        66.59        56.96                 916
 TCP upload::3::tcp_rtt_var       :         8.27         5.23                 916
 TCP upload::4                    :        12.99        17.93 Mbits/s        1380
 TCP upload::4::tcp_cwnd          :        74.89        81.00                 917
 TCP upload::4::tcp_delivery_rate :        15.23        16.24                 917
 TCP upload::4::tcp_pacing_rate   :        22.24        23.05                 917
 TCP upload::4::tcp_rtt           :        65.54        57.33                 917
 TCP upload::4::tcp_rtt_var       :         6.79         5.50                 917
Summary of tcp_nup test run from 2022-09-11 03:08:31.711736
  Title: 'second'

                                             avg       median          # data pts
 Ping (ms) ICMP                   :        45.22        45.45 ms             1392
 TCP upload avg                   :        18.94          N/A Mbits/s        1400
 TCP upload sum                   :        75.76          N/A Mbits/s        1400
 TCP upload::1                    :        18.29        18.79 Mbits/s        1400
 TCP upload::1::tcp_cwnd          :       100.29       101.00                 892
 TCP upload::1::tcp_delivery_rate :        17.16        16.98                 892
 TCP upload::1::tcp_pacing_rate   :        24.49        23.34                 892
 TCP upload::1::tcp_rtt           :        60.59        58.63                 889
 TCP upload::1::tcp_rtt_var       :         4.03         3.34                 889
 TCP upload::2                    :        20.06        20.04 Mbits/s        1400
 TCP upload::2::tcp_cwnd          :       115.51       109.00                 891
 TCP upload::2::tcp_delivery_rate :        18.71        18.43                 891
 TCP upload::2::tcp_pacing_rate   :        26.97        25.07                 891
 TCP upload::2::tcp_rtt           :        62.51        60.92                 890
 TCP upload::2::tcp_rtt_var       :         4.08         3.44                 890
 TCP upload::3                    :        18.45        19.24 Mbits/s        1400
 TCP upload::3::tcp_cwnd          :       101.08       104.00                 890
 TCP upload::3::tcp_delivery_rate :        17.18        17.57                 889
 TCP upload::3::tcp_pacing_rate   :        24.20        24.14                 889
 TCP upload::3::tcp_rtt           :        60.79        59.74                 887
 TCP upload::3::tcp_rtt_var       :         4.22         3.64                 887
 TCP upload::4                    :        18.96        19.64 Mbits/s        1400
 TCP upload::4::tcp_cwnd          :       104.76       106.00                 890
 TCP upload::4::tcp_delivery_rate :        17.70        17.86                 889
 TCP upload::4::tcp_pacing_rate   :        25.10        24.53                 889
 TCP upload::4::tcp_rtt           :        60.92        59.16                 886
 TCP upload::4::tcp_rtt_var       :         3.93         3.26                 886
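For anyone reading these tables: the per-stream rows are independent averages, the "sum" row is simply their total, and "avg" is that total divided by the stream count. For example, for the first download run:

```python
from decimal import Decimal

# Per-stream averages from the "first-download" table above
streams = [Decimal("12.57"), Decimal("15.24"), Decimal("13.58"), Decimal("13.20")]

total = sum(streams)
avg = total / len(streams)
print(total)  # 54.59   -> the "TCP download sum" row
print(avg)    # 13.6475 -> reported rounded as "TCP download avg" 13.65
```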

what's the wifi chip on the client? the second run is better across the board;
are you sure you weren't hitting the bugs we had pre-August?
-l 300 gives a longer run...

This chromebook seems to be using iwlwifi; the spec sheet suggests it has an Intel Wireless-AC 9560 card.

I am now wondering if I need to reset this tasmota device though ...

29 packets transmitted, 14 received, 51.7241% packet loss, time 28557ms
rtt min/avg/max/mdev = 4.984/1881.031/4272.207/1385.128 ms, pipe 5
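That loss figure is just ping's (sent − received) / sent:

```python
# 29 probes sent, 14 answered, as in the ping summary above
sent, received = 29, 14
loss_pct = 100 * (sent - received) / sent
print(f"{loss_pct:.4f}% packet loss")  # 51.7241% packet loss
```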

Interestingly, this seems to relate to switching from psk2+ccmp to sae-mixed, and it seems to be fixed by switching back. This is kind of weird, as the other devices do not have this issue. Anyways, thanks; I think we can rule this out as being related to AQL.

3 Likes

You are also showing 40+ ms of bloat on your client's wifi chip. We just got that down to 8 ms on the mt76; the hope was that by reducing AQL's limits, we could get your current 20+ ms on the ath10k down to 8 as well.

1 Like

I was testing a cAP ac with OpenWrt 22.03 (as an AP, running irqbalance). Paired with an Intel AX210 on a Windows client, it performed better than RouterOS and had lower latency, but I noticed it had quite a bit of packet loss. Here's the measurement I did with crusader, where the down direction is from the AP:

My 2013 MacBook Pro also seems to have the high packet loss. My Samsung A41 phone does not have the packet loss, but its latency is higher than under RouterOS (~40 ms vs ~20 ms). These tests were done on 5 GHz with an 80 MHz channel.

1 Like

This is what we were achieving on a different test, in 2016: http://flent-newark.bufferbloat.net/~d/Airtime%20based%20queue%20limit%20for%20FQ_CoDel%20in%20wireless%20interface.pdf

2 Likes

@zoxc you are measuring measurement packet loss, not TCP packet loss, yes?

1 Like

Yeah. It's the packet loss of the separate UDP flow doing the latency measurement. I was expecting the FQ component to keep that loss down.
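For context: the FQ part of fq_codel hashes each flow's 5-tuple into its own queue, so a sparse measurement flow is normally isolated from drops caused by bulk flows; loss on the probe flow therefore suggests drops below the qdisc (driver/firmware queues) or a hash collision. A toy sketch of that classification step, with made-up addresses and a stand-in hash (the kernel uses a perturbed jhash, not SHA-256):

```python
import hashlib
from collections import defaultdict

NUM_QUEUES = 1024  # fq_codel defaults to 1024 flow queues

def flow_queue(src, dst, sport, dport, proto):
    """Hash a 5-tuple to a queue index (stand-in for the kernel's flow hash)."""
    key = f"{src}|{dst}|{sport}|{dport}|{proto}".encode()
    return int.from_bytes(hashlib.sha256(key).digest()[:4], "big") % NUM_QUEUES

queues = defaultdict(list)
for seq in range(100):  # a bulk TCP flow fills its own queue...
    queues[flow_queue("10.0.0.2", "10.0.0.1", 50000, 5201, "tcp")].append(("bulk", seq))
for seq in range(5):    # ...while the sparse UDP probe flow keeps a separate one
    queues[flow_queue("10.0.0.2", "10.0.0.1", 2112, 2112, "udp")].append(("probe", seq))

# Barring a hash collision, CoDel drops in the saturated bulk queue
# never touch the probe packets sitting in their own queue.
```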

1 Like