R7500v2 (ath10k 9980) netperf observations, firmware/board-2.bin files, and other wifi issues

Yes, but I think this is a trivial bug.

I think it's because the airtime calculation code relies on WMI_SERVICE_PEER_STATS.
It uses the tx rate in peer stats.

But I think we can use the rate from tx_status instead.

I'm running this device as an AP only, and the qdisc for the wlans is not set by OpenWrt by default. I've now set the qdisc manually to fq_codel (thanks to @quarky - see here) and there may be an improvement. I will also try changing the ATF weights and see if I can change the netperf behavior now.

Early on I noticed the lack of a qdisc on the wlan interfaces and tried to set one, but failed. I can't express how happy I am that @quarky noticed/suggested it along with an example.

Why do you need that?
TXQ already uses fq_codel.

FWIW the openwrt default "AP only" wlan0 (5g) qdisc is

r7500v2 # tc -d -s qdisc show dev wlan0
qdisc noqueue 0: root refcnt 2 
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0) 
 backlog 0b 0p requeues 0

So ATF should work with a "noqueue" qdisc?

I don't know whether I need it or not, as the whole ATF implementation is complex and I have not found good (user) documentation describing how to use it - I think because

A. I'm playing with it on master in a "development" state and
B. I suspect ATF is not meant to have much "non-expert" user interaction

Of course, when things don't work as a user expects, it makes it tough for the "non-expert" to identify issues and help troubleshoot. I think this whole thread is a good example of that.

But i digress and I very much appreciate your comments - thank you.

ATF works when the driver uses mac80211's TXQ and can report airtime.

BTW I can try to make a patch to enable ATF if you are willing to test

Yes, please.

Try this

--- a/ath10k-5.15/mac.c
+++ b/ath10k-5.15/mac.c
@@ -11302,10 +11302,7 @@ int ath10k_mac_register(struct ath10k *a
 		wiphy_ext_feature_set(ar->hw->wiphy,
 				      NL80211_EXT_FEATURE_ACK_SIGNAL_SUPPORT);
 
-	if (ath10k_peer_stats_enabled(ar) ||
-	    test_bit(WMI_SERVICE_REPORT_AIRTIME, ar->wmi.svc_map))
-		wiphy_ext_feature_set(ar->hw->wiphy,
-				      NL80211_EXT_FEATURE_AIRTIME_FAIRNESS);
+	wiphy_ext_feature_set(ar->hw->wiphy, NL80211_EXT_FEATURE_AIRTIME_FAIRNESS);
 
 	if (test_bit(WMI_SERVICE_RTT_RESPONDER_ROLE, ar->wmi.svc_map))
 		wiphy_ext_feature_set(ar->hw->wiphy,
--- a/ath10k-5.15/txrx.c
+++ b/ath10k-5.15/txrx.c
@@ -5,6 +5,8 @@
  * Copyright (c) 2018, The Linux Foundation. All rights reserved.
  */
 
+#include <net/mac80211.h>
+
 #include "core.h"
 #include "txrx.h"
 #include "htt.h"
@@ -168,6 +170,8 @@ int ath10k_txrx_tx_unref(struct ath10k_h
 	struct sk_buff *msdu;
 	u8 flags;
 	bool tx_failed = false;
+	u32 duration = 0;
+	int len = 0;
 
 	ath10k_dbg(ar, ATH10K_DBG_HTT,
 		   "htt tx completion msdu_id %u status %d\n",
@@ -286,6 +290,14 @@ int ath10k_txrx_tx_unref(struct ath10k_h
 		ar->ok_tx_rate_status = true;
 		ath10k_set_tx_rate_status(ar, &info->status.rates[0], tx_done);
 
+		len = msdu->len;
+		duration = ieee80211_calc_tx_airtime(htt->ar->hw, info, len);
+		rcu_read_lock();
+		if (txq && txq->sta && duration)
+			ieee80211_sta_register_airtime(txq->sta, txq->tid,
+						       duration, 0);
+		rcu_read_unlock();
+
 		/* Only in version 14 and higher of CT firmware */
 		if (test_bit(ATH10K_FW_FEATURE_HAS_TXSTATUS_NOACK,
 			     ar->running_fw->fw_file.fw_features)) {

FYI it applies and builds with no problems. Just waiting for a chance to test it.

As it only changes ath10k_core.ko, I only moved the new one into /lib/modules/5.10.96/ and will reboot the AP to try it.

Oh, I didn't know you could do that.

ok, it's running. I'll need to do some testing for the various scenarios I've experienced before.

That said, if you have suggestions about how to evaluate your patch I'm open to them (in particular, I'm not certain about what, if anything, I might need to change for hostapd regarding ATF).

What I just tried: with only two clients on 5g (one able to do 500+ mbps, one slower at ~200 mbps, both line of sight to the AP), individually they both give similar netperf results as before (i.e. they are ok).

If I change the ATF weights on the AP by setting the faster client to a weight of 10 and the slower client to a weight of 512, the netperf results (testing both clients simultaneously) are similar to what I've seen before with equal ATF weights (both ATF weights at 256 - no changes to default hostapd ATF settings in either case). In fact, by visual inspection it looks the same as what I saw just before I tested your patch.

I'll try clients with equal phy rate capabilities next (both about 300 mbps) and through a wall, as that is when ATF is really supposed to help. This scenario is also the one where I have observed in the past that one netperf session grinds to a halt.

Thank you again for the patch.

It should work based on what I read but I don’t have a proper way to test it.

So there are a lot of different test setups I can do, but this one is pretty typical.

  • Two 5g wifi clients (channel 36, 80 MHz, little to no interference from neighbors),
  • "phy rates" about equal (240 mbps and 270 mbps - they are both equal at 300 mbps 3 feet from the AP and clear line of site).
  • Both clients located next to each other, say 15 meters from the AP and with no clear line of sight to the AP.
  • Hostapd for the 5g phy has
airtime_mode=2
airtime_bss_weight=1

No clue if I need this or not to change the ATF weights with iw; it does not seem to matter whether I have this set for hostapd or not.

  • ATF weights for both clients 256.
  • Individual netperfs: ~100 mbps and ~80 mbps.
  • Results below were obtained using @castiel652's patch above - but I got similar results before trying it.

Here is the "simultaneous" netperf, i.e. running netperf on both clients at the same time:

[66] $ netperf -l 60 -D 1s -H XXX.XXX.XXX.26
MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to XXX.XXX.XXX.26 () port 0 AF_INET : demo
Interim result:   94.43 10^6bits/s over 1.509 seconds ending at 1644260386.157
Interim result:   98.40 10^6bits/s over 1.098 seconds ending at 1644260387.255
Interim result:   88.94 10^6bits/s over 1.107 seconds ending at 1644260388.362
Interim result:   91.47 10^6bits/s over 1.045 seconds ending at 1644260389.406
Interim result:   91.46 10^6bits/s over 1.128 seconds ending at 1644260390.534
Interim result:   89.21 10^6bits/s over 1.108 seconds ending at 1644260391.642
Interim result:   90.93 10^6bits/s over 1.061 seconds ending at 1644260392.703
Interim result:   88.20 10^6bits/s over 1.109 seconds ending at 1644260393.811
Interim result:   93.57 10^6bits/s over 1.034 seconds ending at 1644260394.845

*** netperf on slower client started here; listed after the current output is finished but run simultaneously from this point on ***

Interim result:  100.82 10^6bits/s over 1.066 seconds ending at 1644260395.911
Interim result:   90.06 10^6bits/s over 1.119 seconds ending at 1644260397.030
Interim result:   79.46 10^6bits/s over 1.209 seconds ending at 1644260398.239
Interim result:   80.17 10^6bits/s over 1.071 seconds ending at 1644260399.310
Interim result:   71.08 10^6bits/s over 1.128 seconds ending at 1644260400.439
Interim result:   86.95 10^6bits/s over 1.119 seconds ending at 1644260401.557
Interim result:   86.35 10^6bits/s over 1.006 seconds ending at 1644260402.564
Interim result:   81.36 10^6bits/s over 1.062 seconds ending at 1644260403.625
Interim result:   86.04 10^6bits/s over 1.043 seconds ending at 1644260404.669
Interim result:   93.03 10^6bits/s over 1.065 seconds ending at 1644260405.734
Interim result:   97.10 10^6bits/s over 1.023 seconds ending at 1644260406.757
Interim result:   93.28 10^6bits/s over 1.041 seconds ending at 1644260407.798
Interim result:   90.97 10^6bits/s over 1.133 seconds ending at 1644260408.931
Interim result:   62.47 10^6bits/s over 1.456 seconds ending at 1644260410.387
Interim result:   85.02 10^6bits/s over 1.051 seconds ending at 1644260411.438
Interim result:  102.93 10^6bits/s over 1.042 seconds ending at 1644260412.480
Interim result:  100.05 10^6bits/s over 1.028 seconds ending at 1644260413.508
Interim result:  108.30 10^6bits/s over 1.058 seconds ending at 1644260414.566
Interim result:  100.33 10^6bits/s over 1.079 seconds ending at 1644260415.645
Interim result:   95.90 10^6bits/s over 1.047 seconds ending at 1644260416.692
Interim result:  106.55 10^6bits/s over 1.159 seconds ending at 1644260417.851
Interim result:   73.11 10^6bits/s over 1.457 seconds ending at 1644260419.308
Interim result:   67.76 10^6bits/s over 1.079 seconds ending at 1644260420.388
Interim result:  104.44 10^6bits/s over 1.183 seconds ending at 1644260421.571
Interim result:   81.00 10^6bits/s over 1.290 seconds ending at 1644260422.861
Interim result:   79.01 10^6bits/s over 1.025 seconds ending at 1644260423.886
Interim result:   78.37 10^6bits/s over 1.008 seconds ending at 1644260424.895
Interim result:   94.67 10^6bits/s over 1.047 seconds ending at 1644260425.941
Interim result:   88.58 10^6bits/s over 1.068 seconds ending at 1644260427.010
Interim result:   72.92 10^6bits/s over 1.215 seconds ending at 1644260428.225
Interim result:   82.95 10^6bits/s over 1.067 seconds ending at 1644260429.291
Interim result:  124.63 10^6bits/s over 1.041 seconds ending at 1644260430.333
Interim result:   89.38 10^6bits/s over 1.395 seconds ending at 1644260431.727
Interim result:   77.96 10^6bits/s over 1.147 seconds ending at 1644260432.874
Interim result:   70.94 10^6bits/s over 1.099 seconds ending at 1644260433.973
Interim result:  113.89 10^6bits/s over 1.108 seconds ending at 1644260435.081
Interim result:   92.17 10^6bits/s over 1.236 seconds ending at 1644260436.317
Interim result:   85.00 10^6bits/s over 1.084 seconds ending at 1644260437.401
Interim result:  100.04 10^6bits/s over 1.067 seconds ending at 1644260438.468
Interim result:   90.67 10^6bits/s over 1.103 seconds ending at 1644260439.571
Interim result:   86.81 10^6bits/s over 1.045 seconds ending at 1644260440.616
Interim result:   98.34 10^6bits/s over 1.042 seconds ending at 1644260441.658
Interim result:   91.98 10^6bits/s over 1.123 seconds ending at 1644260442.781
Interim result:   85.51 10^6bits/s over 1.076 seconds ending at 1644260443.857
Interim result:   86.93 10^6bits/s over 0.792 seconds ending at 1644260444.649
Recv   Send    Send                          
Socket Socket  Message  Elapsed              
Size   Size    Size     Time     Throughput  
bytes  bytes   bytes    secs.    10^6bits/sec  

131072  16384  16384    60.29      88.91   

*** netperf output from slower client ***
[11] $ netperf -l 60 -D 1s -H XXX.XXX.XXX.26
MIGRATED TCP STREAM TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to XXX.XXX.XXX.26 () port 0 AF_INET : demo
Interim result:    0.81 10^6bits/s over 2.759 seconds ending at 1644260397.731
Interim result:   27.73 10^6bits/s over 1.134 seconds ending at 1644260398.866
Interim result:   23.84 10^6bits/s over 1.237 seconds ending at 1644260400.103
Interim result:    8.00 10^6bits/s over 2.982 seconds ending at 1644260403.085
Interim result:   27.03 10^6bits/s over 1.081 seconds ending at 1644260404.166
Interim result:    5.00 10^6bits/s over 5.395 seconds ending at 1644260409.562
Interim result:   27.76 10^6bits/s over 1.006 seconds ending at 1644260410.567
Interim result:    3.43 10^6bits/s over 8.098 seconds ending at 1644260418.665
Interim result:   28.54 10^6bits/s over 36.307 seconds ending at 1644260454.972
Recv   Send    Send
Socket Socket  Message  Elapsed
Size   Size    Size     Time     Throughput
bytes  bytes   bytes    secs.    10^6bits/sec

131072  16384  16384    60.14      20.54

I can change the ATF weight for the faster client from 256 to 1 and get essentially the same result. The slower client just grinds to a halt.

ATF, if it is even working on my device, is not behaving as I've been led to believe. I have some old builds, I think from before ATF was implemented - I need to try those.

@quarky, the test in the post above was done with no qdisc on wlan0 (5g). I just repeated it with fq_codel as you suggested and I get essentially the same result. This was with @castiel652's patch, and my network was not as quiet (kids are home now). I'll try testing without the patch and when the network is quieter later - I'm not hopeful.

I've followed your posts about ATF since I first saw them a week or two ago. I did not comment earlier as I'm still not sure my issues are ATF related or related to your observations. But if you have something you'd like others to test please do let me know.

Thanks again for responding to my comment earlier.

EDIT: Tested again using fq_codel on wlan0 without @castiel652's patch on a quiet network - no change.

Some progress.

The tests above use:

[66] $ netperf -l 60 -D 1s -H XXX.XXX.XXX.26

which streams data from the wifi clients to the netperf server attached by a wire to the AP.

In the past, I have also tried

netperf -t tcp_maerts -l 60 -D 1s -H XXX.XXX.XXX.26

to stream data from the netperf server through the AP out to the clients. There has always been a difference - slightly higher throughput and, I thought, perhaps a little better behaved, but nothing that made me think ATF was working, and still issues (at least as I recall).

However, repeating the netperf -t tcp_maerts today (no wlan0 qdisc or @castiel652's patch, just my own patch so I can adjust the ATF weights, otherwise the same setup as here), I can finally see the effect of changing the ATF weights - and I did not observe one client grinding to a halt. It's not perfect and I have to go to extreme ATF weights, but it behaves as I expect ATF should.

Note I tried both with and without the ATF-related entries in /etc/config/wireless described above - the following works regardless (i.e. no hostapd config adjustment is necessary).

On the AP:

r7500v2 # iw dev wlan0 station set <slower:client:mac:add> airtime_weight 1023
r7500v2 # iw dev wlan0 station set <faster:client:mac:add> airtime_weight 1
r7500v2 # iw dev wlan0 station dump # output confirms the ATF set as desired above

Netperfs from the clients, faster client started first:

[97] $ netperf -t tcp_maerts -l 60 -D 1s -H XXX.XXX.XXX.26
MIGRATED TCP MAERTS TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to XXX.XXX.XXX.26 () port 0 AF_INET : demo
Interim result:  111.88 10^6bits/s over 1.000 seconds ending at 1644339878.155
Interim result:  124.65 10^6bits/s over 1.007 seconds ending at 1644339879.162
Interim result:  123.00 10^6bits/s over 1.014 seconds ending at 1644339880.176
Interim result:  121.70 10^6bits/s over 1.010 seconds ending at 1644339881.186
Interim result:  126.06 10^6bits/s over 1.002 seconds ending at 1644339882.188
Interim result:  104.13 10^6bits/s over 1.210 seconds ending at 1644339883.398
*** netperf on slower client started here (its output is below) ***
Interim result:   15.37 10^6bits/s over 6.780 seconds ending at 1644339890.177
Interim result:   13.48 10^6bits/s over 1.150 seconds ending at 1644339891.327
Interim result:   15.25 10^6bits/s over 1.089 seconds ending at 1644339892.417
Interim result:   13.73 10^6bits/s over 1.121 seconds ending at 1644339893.538
Interim result:   15.81 10^6bits/s over 1.046 seconds ending at 1644339894.583
Interim result:   17.61 10^6bits/s over 1.016 seconds ending at 1644339895.600
Interim result:    5.04 10^6bits/s over 3.505 seconds ending at 1644339899.105
Interim result:    4.85 10^6bits/s over 1.056 seconds ending at 1644339900.161
Interim result:   14.55 10^6bits/s over 1.019 seconds ending at 1644339901.179
Interim result:   15.50 10^6bits/s over 1.014 seconds ending at 1644339902.193
Interim result:   15.36 10^6bits/s over 1.012 seconds ending at 1644339903.205
Interim result:   17.85 10^6bits/s over 1.030 seconds ending at 1644339904.235
Interim result:   14.57 10^6bits/s over 1.222 seconds ending at 1644339905.457
Interim result:   17.16 10^6bits/s over 1.054 seconds ending at 1644339906.511
Interim result:   14.65 10^6bits/s over 1.170 seconds ending at 1644339907.680
Interim result:   17.02 10^6bits/s over 1.029 seconds ending at 1644339908.710
Interim result:   15.43 10^6bits/s over 1.102 seconds ending at 1644339909.811
Interim result:   13.70 10^6bits/s over 1.124 seconds ending at 1644339910.935
Interim result:   12.48 10^6bits/s over 1.099 seconds ending at 1644339912.034
Interim result:   13.98 10^6bits/s over 1.039 seconds ending at 1644339913.073
Interim result:   14.33 10^6bits/s over 2.479 seconds ending at 1644339915.552
Interim result:   14.63 10^6bits/s over 1.054 seconds ending at 1644339916.607
Interim result:   14.83 10^6bits/s over 1.005 seconds ending at 1644339917.612
Interim result:   13.45 10^6bits/s over 1.090 seconds ending at 1644339918.702
Interim result:   12.89 10^6bits/s over 1.048 seconds ending at 1644339919.750
Interim result:   12.65 10^6bits/s over 1.018 seconds ending at 1644339920.769
Interim result:   14.70 10^6bits/s over 1.012 seconds ending at 1644339921.781
Interim result:   16.01 10^6bits/s over 1.003 seconds ending at 1644339922.784
Interim result:   12.79 10^6bits/s over 1.307 seconds ending at 1644339924.091
Interim result:   19.46 10^6bits/s over 3.227 seconds ending at 1644339927.318
Interim result:   16.26 10^6bits/s over 1.199 seconds ending at 1644339928.517
Interim result:   15.36 10^6bits/s over 1.061 seconds ending at 1644339929.578
Interim result:    7.51 10^6bits/s over 2.048 seconds ending at 1644339931.625
Interim result:    4.75 10^6bits/s over 1.629 seconds ending at 1644339933.254
Interim result:   13.08 10^6bits/s over 2.046 seconds ending at 1644339935.300
Interim result:   14.50 10^6bits/s over 1.121 seconds ending at 1644339936.421
Interim result:   15.92 10^6bits/s over 0.734 seconds ending at 1644339937.155
Recv   Send    Send                          
Socket Socket  Message  Elapsed              
Size   Size    Size     Time     Throughput  
bytes  bytes   bytes    secs.    10^6bits/sec  

131072  16384  16384    60.00      24.55   
 *** netperf from slower client ***
[18] $ netperf -t tcp_maerts -l 60 -D 1s -H XXX.XXX.XXX.26
MIGRATED TCP MAERTS TEST from 0.0.0.0 (0.0.0.0) port 0 AF_INET to XXX.XXX.XXX.26 () port 0 AF_INET : demo
Interim result:   47.80 10^6bits/s over 1.001 seconds ending at 1644339883.886
Interim result:  103.80 10^6bits/s over 1.011 seconds ending at 1644339884.897
Interim result:  113.96 10^6bits/s over 1.000 seconds ending at 1644339885.897
Interim result:  122.33 10^6bits/s over 1.000 seconds ending at 1644339886.897
Interim result:  121.49 10^6bits/s over 1.007 seconds ending at 1644339887.904
Interim result:  123.43 10^6bits/s over 1.001 seconds ending at 1644339888.905
Interim result:  121.33 10^6bits/s over 1.017 seconds ending at 1644339889.923
Interim result:  124.46 10^6bits/s over 1.000 seconds ending at 1644339890.923
Interim result:  122.64 10^6bits/s over 1.014 seconds ending at 1644339891.937
Interim result:  120.45 10^6bits/s over 1.018 seconds ending at 1644339892.955
Interim result:  119.97 10^6bits/s over 1.004 seconds ending at 1644339893.959
Interim result:  119.93 10^6bits/s over 1.000 seconds ending at 1644339894.960
Interim result:  118.84 10^6bits/s over 1.009 seconds ending at 1644339895.969
Interim result:  126.80 10^6bits/s over 1.003 seconds ending at 1644339896.972
Interim result:  138.38 10^6bits/s over 1.001 seconds ending at 1644339897.973
Interim result:  125.87 10^6bits/s over 1.099 seconds ending at 1644339899.072
Interim result:  110.71 10^6bits/s over 1.137 seconds ending at 1644339900.209
Interim result:  121.02 10^6bits/s over 1.003 seconds ending at 1644339901.212
Interim result:  117.45 10^6bits/s over 1.029 seconds ending at 1644339902.241
Interim result:  117.89 10^6bits/s over 1.023 seconds ending at 1644339903.264
Interim result:  118.09 10^6bits/s over 1.002 seconds ending at 1644339904.266
Interim result:  118.04 10^6bits/s over 1.003 seconds ending at 1644339905.269
Interim result:  115.57 10^6bits/s over 1.022 seconds ending at 1644339906.291
Interim result:  117.10 10^6bits/s over 1.003 seconds ending at 1644339907.294
Interim result:  116.53 10^6bits/s over 1.005 seconds ending at 1644339908.299
Interim result:  118.40 10^6bits/s over 1.001 seconds ending at 1644339909.300
Interim result:  112.63 10^6bits/s over 1.051 seconds ending at 1644339910.351
Interim result:   98.77 10^6bits/s over 1.140 seconds ending at 1644339911.491
Interim result:   94.77 10^6bits/s over 1.042 seconds ending at 1644339912.534
Interim result:   96.61 10^6bits/s over 1.003 seconds ending at 1644339913.537
Interim result:  105.18 10^6bits/s over 1.008 seconds ending at 1644339914.545
Interim result:  106.25 10^6bits/s over 1.003 seconds ending at 1644339915.548
Interim result:  102.20 10^6bits/s over 1.039 seconds ending at 1644339916.587
Interim result:   96.05 10^6bits/s over 1.064 seconds ending at 1644339917.651
Interim result:  100.34 10^6bits/s over 1.007 seconds ending at 1644339918.658
Interim result:   97.70 10^6bits/s over 1.027 seconds ending at 1644339919.685
Interim result:   96.35 10^6bits/s over 1.015 seconds ending at 1644339920.701
Interim result:   96.37 10^6bits/s over 1.002 seconds ending at 1644339921.702
Interim result:   96.67 10^6bits/s over 1.006 seconds ending at 1644339922.709
Interim result:  112.06 10^6bits/s over 1.005 seconds ending at 1644339923.714
Interim result:  110.46 10^6bits/s over 1.014 seconds ending at 1644339924.728
Interim result:  116.31 10^6bits/s over 1.007 seconds ending at 1644339925.735
Interim result:  121.63 10^6bits/s over 1.005 seconds ending at 1644339926.741
Interim result:  122.25 10^6bits/s over 1.001 seconds ending at 1644339927.742
Interim result:  118.51 10^6bits/s over 1.033 seconds ending at 1644339928.775
Interim result:  122.12 10^6bits/s over 1.002 seconds ending at 1644339929.776
Interim result:  129.54 10^6bits/s over 1.001 seconds ending at 1644339930.777
Interim result:  126.06 10^6bits/s over 1.029 seconds ending at 1644339931.806
Interim result:  127.00 10^6bits/s over 1.000 seconds ending at 1644339932.806
Interim result:  119.94 10^6bits/s over 1.059 seconds ending at 1644339933.865
Interim result:  119.73 10^6bits/s over 1.003 seconds ending at 1644339934.868
Interim result:  105.05 10^6bits/s over 1.138 seconds ending at 1644339936.007
Interim result:   94.32 10^6bits/s over 1.114 seconds ending at 1644339937.120
Interim result:  104.28 10^6bits/s over 1.001 seconds ending at 1644339938.121
Interim result:  109.50 10^6bits/s over 1.001 seconds ending at 1644339939.122
Interim result:  107.90 10^6bits/s over 1.015 seconds ending at 1644339940.137
Interim result:  107.90 10^6bits/s over 1.000 seconds ending at 1644339941.137
Interim result:  106.61 10^6bits/s over 1.012 seconds ending at 1644339942.149
Interim result:  107.80 10^6bits/s over 0.736 seconds ending at 1644339942.884
Recv   Send    Send                          
Socket Socket  Message  Elapsed              
Size   Size    Size     Time     Throughput  
bytes  bytes   bytes    secs.    10^6bits/sec  

131072  16384  16384    60.00     112.33   

If I set the ATF weights equal at 256, the netperf tcp_maerts rates are about equal at ~67 mbps (the slower client tends to have slow spots, but no significant "grinding to a halt").

I can also set the ATF weight of the fast client high and the slow client low and see the expected behaviour - the slow client rate stays slow, mostly. The rates do drift around and the slow client speeds up (say 15 mbps to 70 mbps), while the fast client slows down (say 110 mbps to 65 mbps). Again neither client "ground to a halt."

@quarky

A suggestion that might help you diagnose your latency observation (after your current tests are done and assuming you're not satisfied with those results):

Try rate limiting the clients on your network in the client to AP direction.

Why? My thought is that your latency observations might be caused by clients trying to send data to the AP at a rate it just can't handle. ATF will not help you here since it only throttles in the AP to client direction.

I can "fix" my client -> AP netperf issues by using a tbf qdisc on the clients to limit their throughput.

For example, on both my ubuntu clients I do:

sudo tc qdisc replace dev wlo1 root tbf rate 25mbit burst 1mbit latency 400ms

If I then run netperf client to AP, I no longer have issues. This works for me up to about 50mbit, but 75mbit for each client is too much and I start to have issues again.

Just a thought and HTH.

Thanks for the suggestion. The issue I’m facing is not one of congestion tho. I will see extremely high latency when none of the clients are transmitting or receiving data from the AP. This will be especially apparent when I try to SSH into the router via Wi-Fi. Each key press will take a while to register a response. It’s like typing in slow motion.

So far the change that @tohojo suggested seems to work, but I will have to let it run for 3-4 days before I can conclude. It seems the fix should be the right one, and I suspect the new ATF algorithm somehow fails to let ath10k transmit management frames properly. Let's see.

@anon98444528 would you like to do some more testing?

I've made some changes to the patch. Now the two patches below should enable ATF and make AQL work slightly better.

ath10k-ct: improve tx performance · castiel652/openwrt@0fd98f9 · GitHub

I can try it. How can I assess/demonstrate "slightly better" AQL?

FWIW I've done my own patch to enable ATF via the ath10k-ct driver fwcfg API, which is nice as I can turn it on or off without having to swap out the driver. I can post that if you're interested. It's not hard to do, tho.

It's more like less CPU load. Not sure how much difference it'd make on a flent test.

Sure I am interested.
