Reducing multiplexing latencies still further in wifi

I have this patch.

---
 net/mac80211/sta_info.c | 10 +---------
 net/mac80211/tx.c       |  4 ++--
 2 files changed, 3 insertions(+), 11 deletions(-)

--- a/net/mac80211/sta_info.c
+++ b/net/mac80211/sta_info.c
@@ -2893,15 +2893,7 @@ static void sta_update_codel_params(stru
 	if (ieee80211_hw_check(&sta->sdata->local->hw, HAS_TX_QUEUE))
 		return;

-	if (thr && thr < STA_SLOW_THRESHOLD * sta->local->num_sta) {
-		sta->cparams.target = MS2TIME(50);
-		sta->cparams.interval = MS2TIME(300);
-		sta->cparams.ecn = false;
-	} else {
-		sta->cparams.target = MS2TIME(20);
-		sta->cparams.interval = MS2TIME(100);
-		sta->cparams.ecn = true;
-	}
+	return;
 }

 void ieee80211_sta_set_expected_throughput(struct ieee80211_sta *pubsta,
--- a/net/mac80211/tx.c
+++ b/net/mac80211/tx.c
@@ -1610,8 +1610,8 @@ int ieee80211_txq_setup_flows(struct iee
 		fq->memory_limit = 4 << 20; /* 4 Mbytes */

 	codel_params_init(&local->cparams);
-	local->cparams.interval = MS2TIME(100);
-	local->cparams.target = MS2TIME(20);
+	local->cparams.interval = MS2TIME(40);
+	local->cparams.target = MS2TIME(3);
 	local->cparams.ecn = true;

 	local->cvars = kcalloc(fq->flows_cnt, sizeof(local->cvars[0]),
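To build intuition for what this patch changes: `target` is the acceptable standing-queue delay, `interval` is the window over which CoDel judges whether that delay persists, and once dropping starts the drops accelerate as interval/sqrt(count). Below is a minimal, simplified Python sketch of that control law (after RFC 8289); it is an illustration only, not the kernel code:

```python
import math

class CoDel:
    """Simplified sketch of the CoDel control law (RFC 8289).
    All times are in milliseconds. The real kernel code handles
    count persistence across dropping states more carefully."""

    def __init__(self, target=20.0, interval=100.0):
        self.target = target        # acceptable standing-queue delay
        self.interval = interval    # window for judging the standing queue
        self.first_above = None     # deadline after sojourn first exceeds target
        self.dropping = False
        self.count = 0              # drops in the current dropping state
        self.drop_next = 0.0

    def dequeue(self, sojourn, now):
        """Return True if this packet should be dropped (or ECN-marked)."""
        if sojourn < self.target:
            # Queue delay is acceptable again: leave the dropping state.
            self.first_above = None
            self.dropping = False
            return False
        if self.first_above is None:
            # Delay must persist for one full interval before dropping starts.
            self.first_above = now + self.interval
            return False
        if not self.dropping and now >= self.first_above:
            self.dropping = True
            self.count = 0          # simplified; kernel retains history here
            self.drop_next = now
        if self.dropping and now >= self.drop_next:
            self.count += 1
            # Drop spacing shrinks as interval / sqrt(count).
            self.drop_next = now + self.interval / math.sqrt(self.count)
            return True
        return False
```

With a constant 30 ms sojourn time, the default 20 ms/100 ms pair issues its first drop roughly one interval (100 ms) after the excess delay is first seen, while a 3 ms/40 ms pair reacts after about 40 ms and then drops more aggressively; that faster reaction is the latency win, and the bandwidth cost discussed below is its flip side.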

And if it helps, here is a test with crusader
wax620: OpenWrt support for Netgear WAX620 - #81 by Spacebar
wax630: Add support for Netgear WAX630 - #48 by Spacebar


3 ms and 40 ms, how much bandwidth do you lose?

Tx rate is 1170 and bandwidth is 940-ish. So that would be 80%-ish... I lose 20%.
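For reference, the arithmetic behind that estimate, using the figures quoted above (phy rate vs. measured goodput):

```python
phy_rate_mbps = 1170.0   # reported Tx (phy) rate from the post above
goodput_mbps = 940.0     # measured bandwidth from the post above

efficiency = goodput_mbps / phy_rate_mbps      # fraction of phy rate achieved
loss_pct = (1.0 - efficiency) * 100.0          # fraction lost, in percent

print(f"efficiency {efficiency:.1%}, loss {loss_pct:.1f}%")
# -> efficiency 80.3%, loss 19.7%
```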


Are you touching any other parameters? AQL?

IPQ8074 does not support AQL, so I'm only using the parameters presented in LuCI.
I'm also downgrading my WLAN.HK firmware to 2.7, because I saw that with, for example, version 2.9 the latency doubled. But I need more data to verify that...

Note: I'm also using AC mode, not AX, because when I was testing speed with my iPhone the latency went all yo-yo, compared to AC mode, which is stable.

There was a patch to add it, but they never resent it.
You can try it yourself if you are feeling adventurous. :slightly_smiling_face:
https://patchwork.kernel.org/project/linux-wireless/patch/20230501130725.7171-1-quic_tamizhr@quicinc.com/

Ooooh that's nice. Yes, this is something I would love to test. Thanks :slight_smile:

Edit: Well, it did build and run for a short time before it crashed.

Supported extended features:
* [ AQL ]: Airtime Queue Limits (AQL)

But yeah, didn't go so well. But fun to give it a go. :slight_smile:

@pesa1234 has been working on a nice build for the GL-MT6000 device. I have been helping to test these settings. The latest build uses a target of 8 ms and an interval of 80 ms, and here are some benchmarks I ran with those values:

However, prior to testing with @pesa1234's builds, I had been running with a 5 ms target and a 50 ms interval for a LONG time.

My vote goes for custom values. If not, I think 5/50.

If the target is too low, how are we punished? Bandwidth?


Do you have the crash log?

Hope you understand more of it than I do :slight_smile:

[   36.147634] qcom-q6v5-wcss-pil cd00000.q6v5_wcss: fatal error received:
[   36.147634] QC Image Version: QC_IMAGE_VERSION_STRING=WLAN.HK.2.7.0.1-01744-QCAHKSWPL_SILICONZ-1
[   36.147634] Image Variant : IMAGE_VARIANT_STRING=8074.wlanfw.eval_v2Q
[   36.147634]
[   36.147634]     :Excep  :0 Exception detectedparam0 :zero, param1 :zero, param2 :zero.
[   36.147634] Thread ID      : 0x00000069  Thread name    : WLAN RT0  Process ID     : 0
[   36.147634] Register:
[   36.147634] SP : 0x4bfacea0
[   36.147634] FP : 0x4bfaceb0
[   36.147634] PC : 0x4b15be0c
[   36.147634] SSR : 0x00000003
[   36.147634] BADVA : 0x0080cca4
[   36.147634] LR : 0x4b15be14
[   36.147634]
[   36.147634] Stack Dump
[   36.147634] from : 0x4bfacea0
[   36.147634] to   : 0x4bfad400
[   36.147634]
[   36.194179] remoteproc remoteproc0: crash detected in cd00000.q6v5_wcss: type fatal error
[   36.216420] remoteproc remoteproc0: handling crash #1 in cd00000.q6v5_wcss
[   36.224569] remoteproc remoteproc0: recovering cd00000.q6v5_wcss
[   36.257000] remoteproc remoteproc0: stopped remote processor cd00000.q6v5_wcss
[   36.257224] ath11k c000000.wifi: failed to clear rx_filter for monitor status ring: (-108)
[   36.596553] remoteproc remoteproc0: remote processor cd00000.q6v5_wcss is now up
[   36.640495] ath11k c000000.wifi: qmi ignore invalid mem req type 3
[   36.648005] ath11k c000000.wifi: chip_id 0x0 chip_family 0x0 board_id 0xff soc_id 0xffffffff
[   36.648059] ath11k c000000.wifi: fw_version 0x270204a5 fw_build_timestamp 2022-08-04 13:05 fw_build_id WLAN.HK.2.7.0.1-01744-QCAHKSWPL_SILICONZ-1
[   36.664964] ath11k c000000.wifi: Last interrupt received for each CE:
[   36.668484] ath11k c000000.wifi: CE_id 0 pipe_num 0 21880ms before
[   36.674914] ath11k c000000.wifi: CE_id 1 pipe_num 1 830ms before
[   36.680955] ath11k c000000.wifi: CE_id 2 pipe_num 2 830ms before
[   36.687132] ath11k c000000.wifi: CE_id 3 pipe_num 3 840ms before
[   36.693127] ath11k c000000.wifi: CE_id 5 pipe_num 5 4294703996ms before
[   36.699100] ath11k c000000.wifi: CE_id 7 pipe_num 7 1160ms before
[   36.705450] ath11k c000000.wifi: CE_id 9 pipe_num 9 4294704006ms before
[   36.711692] ath11k c000000.wifi: CE_id 10 pipe_num 10 4294704016ms before
[   36.718109] ath11k c000000.wifi: CE_id 11 pipe_num 11 4294704016ms before
[   36.725065] ath11k c000000.wifi:
[   36.725065] Last interrupt received for each group:
[   36.731834] ath11k c000000.wifi: group_id 0 21680ms before
[   36.739982] ath11k c000000.wifi: group_id 1 21680ms before
[   36.745372] ath11k c000000.wifi: group_id 2 4294704046ms before
[   36.750833] ath11k c000000.wifi: group_id 3 4294704046ms before
[   36.756657] ath11k c000000.wifi: group_id 4 1000ms before
[   36.762554] ath11k c000000.wifi: group_id 5 2280ms before
[   36.768106] ath11k c000000.wifi: group_id 6 4294704066ms before
[   36.773493] ath11k c000000.wifi: group_id 7 4294704076ms before
[   36.779218] ath11k c000000.wifi: group_id 8 4294704076ms before
[   36.785126] ath11k c000000.wifi: group_id 9 4294704086ms before
[   36.791108] ath11k c000000.wifi: group_id 10 4294704096ms before
[   36.796928] ath11k c000000.wifi: dst srng id 0 tp 0, cur hp 0, cached hp 0 last hp 0 napi processed before 4294704096ms
[   36.803219] ath11k c000000.wifi: dst srng id 1 tp 0, cur hp 0, cached hp 0 last hp 0 napi processed before 4294704106ms
[   36.813699] ath11k c000000.wifi: dst srng id 2 tp 0, cur hp 0, cached hp 0 last hp 0 napi processed before 4294704116ms
[   36.824459] ath11k c000000.wifi: dst srng id 3 tp 0, cur hp 0, cached hp 0 last hp 0 napi processed before 4294704126ms
[   36.835223] ath11k c000000.wifi: dst srng id 4 tp 0, cur hp 0, cached hp 0 last hp 0 napi processed before 21780ms
[   36.845988] ath11k c000000.wifi: src srng id 5 hp 0, reap_hp 248, cur tp 0, cached tp 0 last tp 0 napi processed before 4294704146ms
[   36.856406] ath11k c000000.wifi: src srng id 8 hp 0, reap_hp 2550, cur tp 0, cached tp 0 last tp 0 napi processed before 4294704156ms
[   36.868471] ath11k c000000.wifi: dst srng id 9 tp 0, cur hp 0, cached hp 0 last hp 0 napi processed before 4294704166ms
[   36.880362] ath11k c000000.wifi: src srng id 16 hp 0, reap_hp 4088, cur tp 0, cached tp 0 last tp 0 napi processed before 4294704176ms
[   36.890969] ath11k c000000.wifi: src srng id 17 hp 0, reap_hp 4088, cur tp 0, cached tp 0 last tp 0 napi processed before 4294704186ms
[   36.903132] ath11k c000000.wifi: src srng id 18 hp 0, reap_hp 4088, cur tp 0, cached tp 0 last tp 0 napi processed before 4294704206ms
[   36.915175] ath11k c000000.wifi: src srng id 24 hp 0, reap_hp 248, cur tp 0, cached tp 0 last tp 0 napi processed before 4294704216ms
[   36.927239] ath11k c000000.wifi: dst srng id 25 tp 0, cur hp 0, cached hp 0 last hp 0 napi processed before 4294704226ms
[   36.939308] ath11k c000000.wifi: src srng id 32 hp 20, reap_hp 16, cur tp 20, cached tp 20 last tp 16 napi processed before 22150ms
[   36.950243] ath11k c000000.wifi: src srng id 35 hp 16, reap_hp 12, cur tp 16, cached tp 16 last tp 12 napi processed before 1100ms
[   36.961787] ath11k c000000.wifi: src srng id 36 hp 72, reap_hp 68, cur tp 72, cached tp 72 last tp 68 napi processed before 1430ms
[   36.973592] ath11k c000000.wifi: src srng id 39 hp 64, reap_hp 60, cur tp 64, cached tp 64 last tp 60 napi processed before 1440ms
[   36.985324] ath11k c000000.wifi: src srng id 41 hp 0, reap_hp 124, cur tp 0, cached tp 0 last tp 0 napi processed before 4294704286ms
[   36.997046] ath11k c000000.wifi: src srng id 57 hp 712, reap_hp 712, cur tp 716, cached tp 716 last tp 716 napi processed before 1150ms
[   37.009097] ath11k c000000.wifi: src srng id 58 hp 286, reap_hp 286, cur tp 290, cached tp 290 last tp 290 napi processed before 1160ms
[   37.021076] ath11k c000000.wifi: src srng id 61 hp 1020, reap_hp 1020, cur tp 0, cached tp 0 last tp 0 napi processed before 22240ms
[   37.033229] ath11k c000000.wifi: src srng id 66 hp 1020, reap_hp 1020, cur tp 0, cached tp 0 last tp 0 napi processed before 22250ms
[   37.045382] ath11k c000000.wifi: dst srng id 81 tp 1432, cur hp 1432, cached hp 1432 last hp 1432 napi processed before 1200ms
[   37.057276] ath11k c000000.wifi: dst srng id 82 tp 580, cur hp 580, cached hp 580 last hp 580 napi processed before 1210ms
[   37.068469] ath11k c000000.wifi: dst srng id 85 tp 0, cur hp 0, cached hp 0 last hp 0 napi processed before 4294704366ms
[   37.079495] ath11k c000000.wifi: dst srng id 90 tp 0, cur hp 0, cached hp 0 last hp 0 napi processed before 4294704376ms
[   37.090519] ath11k c000000.wifi: src srng id 104 hp 65532, reap_hp 65532, cur tp 0, cached tp 0 last tp 0 napi processed before 22310ms
[   37.101371] ath11k c000000.wifi: src srng id 105 hp 0, reap_hp 504, cur tp 0, cached tp 0 last tp 0 napi processed before 4294704406ms
[   37.113266] ath11k c000000.wifi: dst srng id 106 tp 0, cur hp 0, cached hp 0 last hp 0 napi processed before 22060ms
[   37.125413] ath11k c000000.wifi: dst srng id 107 tp 0, cur hp 0, cached hp 0 last hp 0 napi processed before 22070ms
[   37.136091] ath11k c000000.wifi: dst srng id 108 tp 0, cur hp 0, cached hp 0 last hp 0 napi processed before 4294704436ms
[   37.146596] ath11k c000000.wifi: dst srng id 109 tp 0, cur hp 0, cached hp 0 last hp 0 napi processed before 22090ms
[   37.157448] ath11k c000000.wifi: src srng id 128 hp 8190, reap_hp 8190, cur tp 0, cached tp 0 last tp 0 napi processed before 22100ms
[   37.168037] ath11k c000000.wifi: src srng id 130 hp 8190, reap_hp 8190, cur tp 0, cached tp 0 last tp 0 napi processed before 22140ms
[   37.179933] ath11k c000000.wifi: src srng id 132 hp 242, reap_hp 242, cur tp 244, cached tp 244 last tp 244 napi processed before 1420ms
[   37.191911] ath11k c000000.wifi: dst srng id 133 tp 0, cur hp 0, cached hp 0 last hp 0 napi processed before 22140ms
[   37.204233] ath11k c000000.wifi: dst srng id 134 tp 0, cur hp 0, cached hp 0 last hp 0 napi processed before 4294704506ms
[   37.214736] ath11k c000000.wifi: src srng id 135 hp 8190, reap_hp 8190, cur tp 0, cached tp 0 last tp 0 napi processed before 22190ms
[   37.225590] ath11k c000000.wifi: src srng id 143 hp 8190, reap_hp 8190, cur tp 0, cached tp 0 last tp 0 napi processed before 22170ms
[   37.237569] ath11k c000000.wifi: src srng id 145 hp 8190, reap_hp 8190, cur tp 0, cached tp 0 last tp 0 napi processed before 22180ms
[   37.249547] ath11k c000000.wifi: src srng id 147 hp 14, reap_hp 14, cur tp 16, cached tp 16 last tp 16 napi processed before 2760ms
[   37.261540] ath11k c000000.wifi: dst srng id 148 tp 0, cur hp 0, cached hp 0 last hp 0 napi processed before 22210ms
[   37.273189] ath11k c000000.wifi: dst srng id 149 tp 0, cur hp 0, cached hp 0 last hp 0 napi processed before 4294704576ms
[   37.283922] ath11k c000000.wifi: src srng id 150 hp 8190, reap_hp 8190, cur tp 0, cached tp 0 last tp 0 napi processed before 22230ms
[   37.340171] ath11k c000000.wifi: Already processed, so ignoring dma ring caps

The latest WLAN.HK.2.9 did enable the radios as normal.
But it suddenly rebooted and I did not manage to capture that crash log. Sorry about that...

Here is a comparison between target 5ms + interval 50ms vs. target 8ms + interval 80ms on a GL-MT6000 with WED disabled and AQL low/high both set to 1500:

Target 8ms - Interval 80ms:

Target 5ms - Interval 50ms:

I have some opinions about which seems better, but I'm going to save those thoughts for later. I'm more eager to gather objective feedback from those following this thread. :slight_smile:


Hmm... this is interesting. My vote for customizable values is because of QCA vs. MediaTek: they do things differently, and I don't think a "one size fits all" approach would work. There must be a default/fallback too.

And when I've tested different QCA WLAN.HK versions, I see they differ in bandwidth and latency. I still have issues with AX and iPhone, but that's a different story :slight_smile:

And thumbs up for sharing visual data :+1: Maybe in the future we can have a service like Geekbench for Crusader data. Looking forward to hearing your thoughts.

Absolutely. Without objective comparisons, this is a daunting task due to subjective feedback (and endless Waveform bufferbloat test links that introduce ISP latency into the picture). :slight_smile:

Not to take this OT, but...

<OT>
My only real gripe with Crusader right now is that I would desperately love to be able to do one or both of:

  1. Set a consistent Y-axis scale on the image output. That would make comparing one result to another much more consistent.
  2. Overlay two or more tests in the same screen/image output, similar to what Flent allows. Again, this would make comparisons so much more precise and useful.

</OT>


I don't have an ath11k device at the moment, but I am tempted to get one.

Not that it helps much, but since Crusader allows saving out the data, there is at least the potential to produce the desired plots outside of Crusader, though that will require parsing the data first in some other language...
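As a sketch of that idea, assuming the export can be massaged into a CSV with `time_s` and `latency_ms` columns (a hypothetical layout; the real Crusader export format will differ), the post-processing could look like:

```python
import csv
import io
import statistics

def latency_summary(csv_text):
    """Summarise latency samples from a hypothetical CSV export.
    Assumes columns 'time_s' and 'latency_ms'; adjust the column
    names to whatever the actual export contains."""
    rows = csv.DictReader(io.StringIO(csv_text))
    samples = sorted(float(r["latency_ms"]) for r in rows)
    n = len(samples)
    return {
        "median": statistics.median(samples),
        "p99": samples[min(n - 1, int(0.99 * n))],
        "max": samples[-1],
    }

# Example with made-up samples (10-16 ms, cycling):
demo = "time_s,latency_ms\n" + "\n".join(f"{i},{10 + (i % 7)}" for i in range(100))
print(latency_summary(demo))
```

From there, feeding several runs into a plotting library with a fixed Y-axis would give the consistent-scale, overlaid comparison discussed earlier in the thread.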

Tested on an IPQ6018, and the same happened once a client connected to the wifi.

kern.alert kernel: [  478.865967] Unable to handle kernel read from unreadable memory at virtual address 0000000000000010
kern.alert kernel: [  478.866026] Mem abort info:
kern.alert kernel: [  478.873854]   ESR = 0x0000000096000005
kern.alert kernel: [  478.876605]   EC = 0x25: DABT (current EL), IL = 32 bits
kern.alert kernel: [  478.880427]   SET = 0, FnV = 0
kern.alert kernel: [  478.885890]   EA = 0, S1PTW = 0
kern.alert kernel: [  478.888761]   FSC = 0x05: level 1 translation fault
kern.alert kernel: [  478.891797] Data abort info:
kern.alert kernel: [  478.896654]   ISV = 0, ISS = 0x00000005, ISS2 = 0x00000000
kern.alert kernel: [  478.899783]   CM = 0, WnR = 0, TnD = 0, TagAccess = 0

I am happy to see uptake of Crusader... I do not know who the author actually is, but I filed a bug on your behalf here: https://github.com/Zoxc/crusader/issues/21


I added an option to increase the Y scale in Crusader. Apparently the graph library I used doesn't support cropping.

I also added the ability to measure latency from an additional separate Crusader instance to the server, which is useful to measure latency impact on a separate Wi-Fi station.

+---------------------------+------------------------------------+----------------------------------+
|        Metric             |   Target 8ms, Interval 80ms        |   Target 5ms, Interval 50ms      |
+---------------------------+------------------------------------+----------------------------------+
| Download Bandwidth        | 500 Mbps (stable)                  | 500 Mbps (stable)                |
| Upload Bandwidth          | 1500+ Mbps (fluctuates)            | 1200 Mbps (more stable)          |
| Latency (Download)        | Low (~20 ms, stable)               | Low (~20 ms, stable)             |
| Latency (Upload)          | Higher peaks (>120 ms)             | Lower peaks (~80 ms)             |
| Total Latency             | High peaks (~120 ms)               | Smoother, fewer high peaks       |
| Packet Loss               | Minor loss during high traffic     | Less loss, better reliability    |
+---------------------------+------------------------------------+----------------------------------+
|      **Pros**             | - Higher max upload bandwidth      | - Lower overall latency          |
|                           | - More relaxed timing (easier to   | - More stable network with less  |
|                           |   handle peak traffic)             |   packet loss                    |
|                           |                                    | - Suitable for latency-sensitive |
|                           |                                    |   applications                   |
+---------------------------+------------------------------------+----------------------------------+
|      **Cons**             | - Higher latency spikes during     | - Slightly lower upload max      |
|                           |   upload-heavy traffic             | - More aggressive timing may     |
|                           | - More variable latency under load |   require fine-tuning            |
+---------------------------+------------------------------------+----------------------------------+
