Reducing multiplexing latencies still further in wifi

Did you see this patch:

https://patchwork.ozlabs.org/project/openwrt/patch/20220722063631.9903-1-sultan@kerneltoast.com/

In terms of other random latency-reducing stuff, I've always wanted to revisit NAPI_POLL_WEIGHT and make it configurable. Arm devices with short instruction pipelines, like the A53, can context-switch rapidly (especially with a nearly dedicated core) and have relatively small caches, so doing less work, more often, might be a win. The mt76 driver defaults to 64, whereas, who knows, 8 might be doable.

See also: https://patchwork.ozlabs.org/project/netdev/patch/1362535042.15793.144.camel@edumazet-glaptop/

@dtaht

A few more tests, with the following parameters configured in hostapd:

tx_queue_data2_aifs=1
tx_queue_data2_cwmin=7
tx_queue_data2_cwmax=15
tx_queue_data2_burst=3.0

wmm_ac_be_txop_limit=94
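
(For context: hostapd.conf specifies the tx_queue_*_burst values in milliseconds and the wmm_ac_*_txop_limit values in units of 32 microseconds, so the burst=3.0 and txop_limit=94 above describe roughly the same ~3 ms transmit opportunity. A quick shell check:)

echo "$((94 * 32)) us"   # txop_limit=94 -> 3008 us, i.e. ~3.0 ms, matching tx_queue_data2_burst=3.0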

The following up/down tests were run on macOS:

T="macos v22.03-rc6 mt76 WLAN ECN-on loc servs off wmm tx_data2 mods"; for i in 1 2 4 8 16; do flent -l 30 --socket-stats -x --step-size=.05 --te=upload_streams=$i -H openwrt.lan -t tcp_nup-$i-threads-$T tcp_nup; flent -l 30 --socket-stats -x --step-size=.05 --te=download_streams=$i -H openwrt.lan -t tcp_ndown-$i-threads-$T tcp_ndown; done

I'm not sure how useful this will be, as the SsRunner cannot be executed on macOS (there is no ss command). Has anyone compiled it for macOS?

And now the up/down tests run on Linux:

T="linux v22.03-rc6 mt76 WLAN ECN-on loc servs off wmm tx_data2 mods"; for i in 1 2 4 8 16; do flent -l 30 --socket-stats -x --step-size=.05 --te=upload_streams=$i -H openwrt.lan -t tcp_nup-$i-threads-$T tcp_nup; flent -l 30 --socket-stats -x --step-size=.05 --te=download_streams=$i -H openwrt.lan -t tcp_ndown-$i-threads-$T tcp_ndown; done
Starting Flent 2.0.1 using Python 3.10.4.
Starting tcp_nup test. Expected run time: 40 seconds.
Data file written to ./tcp_nup-2022-08-04T180236.448340.tcp_nup-1-threads-linux_v22_03-rc6_mt76_WLAN_ECN-on_loc_servs_off_wmm_tx_data2_mods.flent.gz

Summary of tcp_nup test run from 2022-08-04 18:02:36.448340
  Title: 'tcp_nup-1-threads-linux v22.03-rc6 mt76 WLAN ECN-on loc servs off wmm tx_data2 mods'

                                             avg       median          # data pts
 Ping (ms) ICMP                   :        20.20        17.05 ms              800
 TCP upload avg                   :       581.35          N/A Mbits/s         800
 TCP upload sum                   :       581.35          N/A Mbits/s         800
 TCP upload::1                    :       581.35       592.17 Mbits/s         800
 TCP upload::1::tcp_cwnd          :      1662.25      1367.00                 509
 TCP upload::1::tcp_delivery_rate :       543.21       539.70                 509
 TCP upload::1::tcp_pacing_rate   :       829.72       797.32                 509
 TCP upload::1::tcp_rtt           :        28.74        28.16                 509
 TCP upload::1::tcp_rtt_var       :         0.22         0.07                 509
Starting Flent 2.0.1 using Python 3.10.4.
Starting tcp_ndown test. Expected run time: 40 seconds.
Data file written to ./tcp_ndown-2022-08-04T180321.824282.tcp_ndown-1-threads-linux_v22_03-rc6_mt76_WLAN_ECN-on_loc_servs_off_wmm_tx_data2_mods.flent.gz

Summary of tcp_ndown test run from 2022-08-04 18:03:21.824282
  Title: 'tcp_ndown-1-threads-linux v22.03-rc6 mt76 WLAN ECN-on loc servs off wmm tx_data2 mods'

                             avg       median          # data pts
 Ping (ms) ICMP   :        18.50        17.45 ms              800
 TCP download avg :       459.34          N/A Mbits/s         800
 TCP download sum :       459.34          N/A Mbits/s         800
 TCP download::1  :       459.34       467.75 Mbits/s         800
Starting Flent 2.0.1 using Python 3.10.4.
Starting tcp_nup test. Expected run time: 40 seconds.
Data file written to ./tcp_nup-2022-08-04T180407.176189.tcp_nup-2-threads-linux_v22_03-rc6_mt76_WLAN_ECN-on_loc_servs_off_wmm_tx_data2_mods.flent.gz

Summary of tcp_nup test run from 2022-08-04 18:04:07.176189
  Title: 'tcp_nup-2-threads-linux v22.03-rc6 mt76 WLAN ECN-on loc servs off wmm tx_data2 mods'

                                             avg       median          # data pts
 Ping (ms) ICMP                   :        26.87        28.40 ms              798
 TCP upload avg                   :       295.78          N/A Mbits/s         800
 TCP upload sum                   :       591.57          N/A Mbits/s         800
 TCP upload::1                    :       374.50       386.94 Mbits/s         800
 TCP upload::1::tcp_cwnd          :      1157.21      1114.00                 507
 TCP upload::1::tcp_delivery_rate :       348.52       346.27                 507
 TCP upload::1::tcp_pacing_rate   :       548.71       518.97                 507
 TCP upload::1::tcp_rtt           :        31.21        29.82                 507
 TCP upload::1::tcp_rtt_var       :         0.47         0.20                 507
 TCP upload::2                    :       217.07       216.53 Mbits/s         800
 TCP upload::2::tcp_cwnd          :       675.62       630.00                 507
 TCP upload::2::tcp_delivery_rate :       202.02       202.34                 507
 TCP upload::2::tcp_pacing_rate   :       315.48       305.86                 507
 TCP upload::2::tcp_rtt           :        31.44        30.07                 507
 TCP upload::2::tcp_rtt_var       :         0.57         0.30                 507
Starting Flent 2.0.1 using Python 3.10.4.
Starting tcp_ndown test. Expected run time: 40 seconds.
Data file written to ./tcp_ndown-2022-08-04T180452.627249.tcp_ndown-2-threads-linux_v22_03-rc6_mt76_WLAN_ECN-on_loc_servs_off_wmm_tx_data2_mods.flent.gz

Summary of tcp_ndown test run from 2022-08-04 18:04:52.627249
  Title: 'tcp_ndown-2-threads-linux v22.03-rc6 mt76 WLAN ECN-on loc servs off wmm tx_data2 mods'

                             avg       median          # data pts
 Ping (ms) ICMP   :        23.00        24.90 ms              799
 TCP download avg :       241.73          N/A Mbits/s         799
 TCP download sum :       483.46          N/A Mbits/s         799
 TCP download::1  :       241.30       244.83 Mbits/s         799
 TCP download::2  :       242.16       246.33 Mbits/s         799
Starting Flent 2.0.1 using Python 3.10.4.
Starting tcp_nup test. Expected run time: 40 seconds.
Data file written to ./tcp_nup-2022-08-04T180538.069757.tcp_nup-4-threads-linux_v22_03-rc6_mt76_WLAN_ECN-on_loc_servs_off_wmm_tx_data2_mods.flent.gz

Summary of tcp_nup test run from 2022-08-04 18:05:38.069757
  Title: 'tcp_nup-4-threads-linux v22.03-rc6 mt76 WLAN ECN-on loc servs off wmm tx_data2 mods'

                                             avg       median          # data pts
 Ping (ms) ICMP                   :        34.05        31.95 ms              798
 TCP upload avg                   :       151.00          N/A Mbits/s         800
 TCP upload sum                   :       603.98          N/A Mbits/s         800
 TCP upload::1                    :       212.48       217.38 Mbits/s         800
 TCP upload::1::tcp_cwnd          :       692.34       703.00                 503
 TCP upload::1::tcp_delivery_rate :       203.72       203.59                 503
 TCP upload::1::tcp_pacing_rate   :       313.53       296.97                 503
 TCP upload::1::tcp_rtt           :        32.79        32.90                 503
 TCP upload::1::tcp_rtt_var       :         0.69         0.31                 503
 TCP upload::2                    :       125.89       130.91 Mbits/s         800
 TCP upload::2::tcp_cwnd          :       410.25       426.00                 504
 TCP upload::2::tcp_delivery_rate :       120.98       124.14                 504
 TCP upload::2::tcp_pacing_rate   :       183.69       175.98                 504
 TCP upload::2::tcp_rtt           :        32.92        32.96                 504
 TCP upload::2::tcp_rtt_var       :         0.99         0.56                 504
 TCP upload::3                    :       134.12       140.65 Mbits/s         800
 TCP upload::3::tcp_cwnd          :       437.49       455.50                 504
 TCP upload::3::tcp_delivery_rate :       129.47       132.38                 504
 TCP upload::3::tcp_pacing_rate   :       196.25       186.38                 504
 TCP upload::3::tcp_rtt           :        32.93        32.88                 502
 TCP upload::3::tcp_rtt_var       :         0.84         0.50                 502
 TCP upload::4                    :       131.49       136.40 Mbits/s         800
 TCP upload::4::tcp_cwnd          :       433.67       454.00                 505
 TCP upload::4::tcp_delivery_rate :       126.53       129.99                 505
 TCP upload::4::tcp_pacing_rate   :       191.57       183.41                 505
 TCP upload::4::tcp_rtt           :        33.37        33.26                 503
 TCP upload::4::tcp_rtt_var       :         0.94         0.52                 503
Starting Flent 2.0.1 using Python 3.10.4.
Starting tcp_ndown test. Expected run time: 40 seconds.
Data file written to ./tcp_ndown-2022-08-04T180623.742220.tcp_ndown-4-threads-linux_v22_03-rc6_mt76_WLAN_ECN-on_loc_servs_off_wmm_tx_data2_mods.flent.gz

Summary of tcp_ndown test run from 2022-08-04 18:06:23.742220
  Title: 'tcp_ndown-4-threads-linux v22.03-rc6 mt76 WLAN ECN-on loc servs off wmm tx_data2 mods'

                             avg       median          # data pts
 Ping (ms) ICMP   :        25.30        24.45 ms              800
 TCP download avg :       124.37          N/A Mbits/s         800
 TCP download sum :       497.48          N/A Mbits/s         800
 TCP download::1  :       108.32       120.79 Mbits/s         800
 TCP download::2  :       115.44       124.77 Mbits/s         800
 TCP download::3  :       192.98       144.30 Mbits/s         800
 TCP download::4  :        80.74        90.15 Mbits/s         800
Starting Flent 2.0.1 using Python 3.10.4.
Starting tcp_nup test. Expected run time: 40 seconds.
Data file written to ./tcp_nup-2022-08-04T180709.395616.tcp_nup-8-threads-linux_v22_03-rc6_mt76_WLAN_ECN-on_loc_servs_off_wmm_tx_data2_mods.flent.gz

Summary of tcp_nup test run from 2022-08-04 18:07:09.395616
  Title: 'tcp_nup-8-threads-linux v22.03-rc6 mt76 WLAN ECN-on loc servs off wmm tx_data2 mods'

                                             avg       median          # data pts
 Ping (ms) ICMP                   :        30.73        34.50 ms              797
 TCP upload avg                   :        71.40          N/A Mbits/s         800
 TCP upload sum                   :       571.23          N/A Mbits/s         800
 TCP upload::1                    :        78.05        80.18 Mbits/s         800
 TCP upload::1::tcp_cwnd          :       271.46       267.00                 509
 TCP upload::1::tcp_delivery_rate :        73.87        73.49                 509
 TCP upload::1::tcp_pacing_rate   :       110.31       104.17                 509
 TCP upload::1::tcp_rtt           :        36.23        35.73                 509
 TCP upload::1::tcp_rtt_var       :         1.09         0.79                 509
 TCP upload::2                    :        64.25        65.90 Mbits/s         800
 TCP upload::2::tcp_cwnd          :       223.21       225.00                 509
 TCP upload::2::tcp_delivery_rate :        61.26        61.78                 509
 TCP upload::2::tcp_pacing_rate   :        90.25        85.85                 509
 TCP upload::2::tcp_rtt           :        36.29        35.63                 509
 TCP upload::2::tcp_rtt_var       :         1.20         0.90                 509
 TCP upload::3                    :        56.55        58.10 Mbits/s         800
 TCP upload::3::tcp_cwnd          :       196.84       201.00                 508
 TCP upload::3::tcp_delivery_rate :        53.64        54.25                 508
 TCP upload::3::tcp_pacing_rate   :        78.96        76.05                 508
 TCP upload::3::tcp_rtt           :        36.49        35.70                 506
 TCP upload::3::tcp_rtt_var       :         1.41         1.10                 506
 TCP upload::4                    :        44.52        47.15 Mbits/s         800
 TCP upload::4::tcp_cwnd          :       155.95       160.00                 507
 TCP upload::4::tcp_delivery_rate :        42.11        43.04                 507
 TCP upload::4::tcp_pacing_rate   :        61.87        60.95                 507
 TCP upload::4::tcp_rtt           :        37.11        36.27                 507
 TCP upload::4::tcp_rtt_var       :         1.70         1.35                 507
 TCP upload::5                    :       151.87       155.40 Mbits/s         800
 TCP upload::5::tcp_cwnd          :       535.59       519.00                 508
 TCP upload::5::tcp_delivery_rate :       144.29       135.61                 508
 TCP upload::5::tcp_pacing_rate   :       220.29       203.96                 508
 TCP upload::5::tcp_rtt           :        36.00        35.02                 508
 TCP upload::5::tcp_rtt_var       :         0.72         0.44                 508
 TCP upload::6                    :        49.82        52.06 Mbits/s         800
 TCP upload::6::tcp_cwnd          :       175.54       179.50                 508
 TCP upload::6::tcp_delivery_rate :        47.22        48.45                 508
 TCP upload::6::tcp_pacing_rate   :        68.95        67.17                 508
 TCP upload::6::tcp_rtt           :        37.14        36.50                 507
 TCP upload::6::tcp_rtt_var       :         1.48         1.22                 507
 TCP upload::7                    :        55.95        57.09 Mbits/s         800
 TCP upload::7::tcp_cwnd          :       195.55       200.00                 508
 TCP upload::7::tcp_delivery_rate :        52.88        53.68                 508
 TCP upload::7::tcp_pacing_rate   :        78.04        74.91                 508
 TCP upload::7::tcp_rtt           :        36.72        36.05                 508
 TCP upload::7::tcp_rtt_var       :         1.36         1.04                 508
 TCP upload::8                    :        70.22        72.13 Mbits/s         800
 TCP upload::8::tcp_cwnd          :       244.83       244.00                 509
 TCP upload::8::tcp_delivery_rate :        66.24        66.35                 509
 TCP upload::8::tcp_pacing_rate   :        99.16        93.99                 509
 TCP upload::8::tcp_rtt           :        36.25        35.59                 507
 TCP upload::8::tcp_rtt_var       :         1.17         0.86                 507
Starting Flent 2.0.1 using Python 3.10.4.
Starting tcp_ndown test. Expected run time: 40 seconds.
Data file written to ./tcp_ndown-2022-08-04T180755.494117.tcp_ndown-8-threads-linux_v22_03-rc6_mt76_WLAN_ECN-on_loc_servs_off_wmm_tx_data2_mods.flent.gz

Summary of tcp_ndown test run from 2022-08-04 18:07:55.494117
  Title: 'tcp_ndown-8-threads-linux v22.03-rc6 mt76 WLAN ECN-on loc servs off wmm tx_data2 mods'

                             avg       median          # data pts
 Ping (ms) ICMP   :        24.10        25.95 ms              799
 TCP download avg :        57.60          N/A Mbits/s         799
 TCP download sum :       460.79          N/A Mbits/s         799
 TCP download::1  :        57.69        59.53 Mbits/s         799
 TCP download::2  :        51.42        53.64 Mbits/s         799
 TCP download::3  :        55.95        57.42 Mbits/s         799
 TCP download::4  :        60.80        60.40 Mbits/s         799
 TCP download::5  :        66.62        64.65 Mbits/s         799
 TCP download::6  :        61.32        60.70 Mbits/s         799
 TCP download::7  :        52.60        53.90 Mbits/s         799
 TCP download::8  :        54.39        56.09 Mbits/s         799
Starting Flent 2.0.1 using Python 3.10.4.
Starting tcp_nup test. Expected run time: 40 seconds.
Data file written to ./tcp_nup-2022-08-04T180841.586505.tcp_nup-16-threads-linux_v22_03-rc6_mt76_WLAN_ECN-on_loc_servs_off_wmm_tx_data2_mods.flent.gz

Summary of tcp_nup test run from 2022-08-04 18:08:41.586505
  Title: 'tcp_nup-16-threads-linux v22.03-rc6 mt76 WLAN ECN-on loc servs off wmm tx_data2 mods'

                                              avg       median          # data pts
 Ping (ms) ICMP                    :        38.21        35.40 ms              795
 TCP upload avg                    :        35.10          N/A Mbits/s         799
 TCP upload sum                    :       561.67          N/A Mbits/s         799
 TCP upload::1                     :        32.48        33.12 Mbits/s         799
 TCP upload::10                    :        39.66        40.25 Mbits/s         799
 TCP upload::10::tcp_cwnd          :       140.93       142.00                 506
 TCP upload::10::tcp_delivery_rate :        37.47        37.87                 506
 TCP upload::10::tcp_pacing_rate   :        55.34        52.94                 506
 TCP upload::10::tcp_rtt           :        37.09        37.27                 505
 TCP upload::10::tcp_rtt_var       :         1.94         1.47                 505
 TCP upload::11                    :        32.97        33.55 Mbits/s         799
 TCP upload::11::tcp_cwnd          :       117.74       120.00                 506
 TCP upload::11::tcp_delivery_rate :        30.96        31.83                 506
 TCP upload::11::tcp_pacing_rate   :        45.45        43.70                 506
 TCP upload::11::tcp_rtt           :        37.47        37.66                 506
 TCP upload::11::tcp_rtt_var       :         2.18         1.66                 506
 TCP upload::12                    :        40.62        40.88 Mbits/s         799
 TCP upload::12::tcp_cwnd          :       144.84       145.00                 505
 TCP upload::12::tcp_delivery_rate :        38.25        38.23                 505
 TCP upload::12::tcp_pacing_rate   :        56.78        54.58                 505
 TCP upload::12::tcp_rtt           :        37.14        37.30                 503
 TCP upload::12::tcp_rtt_var       :         1.96         1.54                 503
 TCP upload::13                    :        28.55        29.21 Mbits/s         799
 TCP upload::13::tcp_cwnd          :       103.92       105.00                 505
 TCP upload::13::tcp_delivery_rate :        26.79        26.78                 505
 TCP upload::13::tcp_pacing_rate   :        39.18        38.09                 505
 TCP upload::13::tcp_rtt           :        38.36        38.39                 505
 TCP upload::13::tcp_rtt_var       :         2.45         2.03                 505
 TCP upload::14                    :        42.47        42.84 Mbits/s         799
 TCP upload::14::tcp_cwnd          :       150.95       151.00                 506
 TCP upload::14::tcp_delivery_rate :        40.14        39.93                 506
 TCP upload::14::tcp_pacing_rate   :        59.18        56.63                 506
 TCP upload::14::tcp_rtt           :        37.12        37.25                 506
 TCP upload::14::tcp_rtt_var       :         1.94         1.51                 506
 TCP upload::15                    :        32.01        32.05 Mbits/s         799
 TCP upload::15::tcp_cwnd          :       116.16       119.00                 505
 TCP upload::15::tcp_delivery_rate :        30.11        30.77                 505
 TCP upload::15::tcp_pacing_rate   :        44.11        42.68                 505
 TCP upload::15::tcp_rtt           :        38.12        38.12                 505
 TCP upload::15::tcp_rtt_var       :         2.28         1.76                 505
 TCP upload::16                    :        33.45        33.88 Mbits/s         799
 TCP upload::16::tcp_cwnd          :       119.32       121.50                 506
 TCP upload::16::tcp_delivery_rate :        31.48        32.32                 506
 TCP upload::16::tcp_pacing_rate   :        46.36        44.44                 506
 TCP upload::16::tcp_rtt           :        37.36        37.51                 504
 TCP upload::16::tcp_rtt_var       :         2.16         1.63                 504
 TCP upload::1::tcp_cwnd           :       117.89       120.00                 505
 TCP upload::1::tcp_delivery_rate  :        30.62        31.50                 505
 TCP upload::1::tcp_pacing_rate    :        45.11        43.19                 505
 TCP upload::1::tcp_rtt            :        38.06        38.10                 505
 TCP upload::1::tcp_rtt_var        :         2.22         1.73                 505
 TCP upload::2                     :        33.08        33.66 Mbits/s         799
 TCP upload::2::tcp_cwnd           :       118.41       121.00                 505
 TCP upload::2::tcp_delivery_rate  :        31.16        31.97                 505
 TCP upload::2::tcp_pacing_rate    :        45.73        44.21                 505
 TCP upload::2::tcp_rtt            :        37.53        37.62                 503
 TCP upload::2::tcp_rtt_var        :         2.19         1.72                 503
 TCP upload::3                     :        33.75        33.74 Mbits/s         799
 TCP upload::3::tcp_cwnd           :       120.57       123.00                 506
 TCP upload::3::tcp_delivery_rate  :        31.71        32.39                 506
 TCP upload::3::tcp_pacing_rate    :        46.65        44.92                 506
 TCP upload::3::tcp_rtt            :        37.53        37.47                 505
 TCP upload::3::tcp_rtt_var        :         2.22         1.72                 505
 TCP upload::4                     :        35.19        35.79 Mbits/s         799
 TCP upload::4::tcp_cwnd           :       125.44       128.00                 506
 TCP upload::4::tcp_delivery_rate  :        33.34        34.01                 506
 TCP upload::4::tcp_pacing_rate    :        48.69        46.86                 506
 TCP upload::4::tcp_rtt            :        37.46        37.53                 504
 TCP upload::4::tcp_rtt_var        :         2.11         1.65                 504
 TCP upload::5                     :        32.67        33.00 Mbits/s         799
 TCP upload::5::tcp_cwnd           :       117.05       119.00                 505
 TCP upload::5::tcp_delivery_rate  :        30.64        31.48                 505
 TCP upload::5::tcp_pacing_rate    :        45.11        43.57                 505
 TCP upload::5::tcp_rtt            :        37.56        37.64                 505
 TCP upload::5::tcp_rtt_var        :         2.21         1.71                 505
 TCP upload::6                     :        33.12        33.86 Mbits/s         799
 TCP upload::6::tcp_cwnd           :       118.54       121.00                 505
 TCP upload::6::tcp_delivery_rate  :        31.18        32.01                 505
 TCP upload::6::tcp_pacing_rate    :        45.74        44.03                 505
 TCP upload::6::tcp_rtt            :        37.57        37.50                 502
 TCP upload::6::tcp_rtt_var        :         2.18         1.76                 502
 TCP upload::7                     :        48.73        48.23 Mbits/s         799
 TCP upload::7::tcp_cwnd           :       171.96       165.00                 506
 TCP upload::7::tcp_delivery_rate  :        46.09        43.93                 506
 TCP upload::7::tcp_pacing_rate    :        67.60        63.12                 506
 TCP upload::7::tcp_rtt            :        37.12        37.38                 506
 TCP upload::7::tcp_rtt_var        :         1.80         1.41                 506
 TCP upload::8                     :        29.93        30.14 Mbits/s         799
 TCP upload::8::tcp_cwnd           :       107.38       108.00                 505
 TCP upload::8::tcp_delivery_rate  :        28.20        28.26                 505
 TCP upload::8::tcp_pacing_rate    :        41.19        40.17                 505
 TCP upload::8::tcp_rtt            :        37.73        37.80                 505
 TCP upload::8::tcp_rtt_var        :         2.30         1.82                 505
 TCP upload::9                     :        32.99        33.55 Mbits/s         799
 TCP upload::9::tcp_cwnd           :       117.65       120.00                 505
 TCP upload::9::tcp_delivery_rate  :        30.98        31.64                 505
 TCP upload::9::tcp_pacing_rate    :        45.50        43.84                 505
 TCP upload::9::tcp_rtt            :        37.43        37.48                 505
 TCP upload::9::tcp_rtt_var        :         2.19         1.82                 505
Starting Flent 2.0.1 using Python 3.10.4.
Starting tcp_ndown test. Expected run time: 40 seconds.
Data file written to ./tcp_ndown-2022-08-04T180928.507790.tcp_ndown-16-threads-linux_v22_03-rc6_mt76_WLAN_ECN-on_loc_servs_off_wmm_tx_data2_mods.flent.gz

Summary of tcp_ndown test run from 2022-08-04 18:09:28.507790
  Title: 'tcp_ndown-16-threads-linux v22.03-rc6 mt76 WLAN ECN-on loc servs off wmm tx_data2 mods'

                             avg       median          # data pts
 Ping (ms) ICMP   :        24.80        27.35 ms              800
 TCP download avg :        29.97          N/A Mbits/s         800
 TCP download sum :       479.56          N/A Mbits/s         800
 TCP download::1  :        32.16        32.03 Mbits/s         800
 TCP download::10 :        31.41        31.47 Mbits/s         800
 TCP download::11 :        28.51        29.52 Mbits/s         800
 TCP download::12 :        27.98        28.80 Mbits/s         800
 TCP download::13 :        30.22        30.53 Mbits/s         800
 TCP download::14 :        30.19        30.56 Mbits/s         800
 TCP download::15 :        27.65        28.49 Mbits/s         800
 TCP download::16 :        30.66        30.89 Mbits/s         800
 TCP download::2  :        32.20        31.56 Mbits/s         800
 TCP download::3  :        30.68        30.84 Mbits/s         800
 TCP download::4  :        28.23        29.31 Mbits/s         800
 TCP download::5  :        29.25        29.80 Mbits/s         800
 TCP download::6  :        28.08        29.09 Mbits/s         800
 TCP download::7  :        30.43        30.52 Mbits/s         800
 TCP download::8  :        32.65        31.83 Mbits/s         800
 TCP download::9  :        29.26        29.97 Mbits/s         800

@dtaht

Last round of requested DOWNLOAD tests.

The only test parameter changed:

tx_queue_data2_burst=5.0

I executed them on both Linux and macOS. Under each output log below you'll find links to a folder with all the tests; the filenames identify which runs belong to each operating system.

Linux output:

Starting Flent 2.0.1 using Python 3.10.4.
Starting tcp_ndown test. Expected run time: 40 seconds.
Data file written to ./tcp_ndown-2022-08-04T181756.063666.tcp_ndown-1-threads-linux_v22_03-rc6_mt76_WLAN_ECN-on_loc_servs_off_tx_queue_data2_burst_5_0.flent.gz

Summary of tcp_ndown test run from 2022-08-04 18:17:56.063666
  Title: 'tcp_ndown-1-threads-linux v22.03-rc6 mt76 WLAN ECN-on loc servs off tx_queue_data2_burst=5.0'

                             avg       median          # data pts
 Ping (ms) ICMP   :        17.80        16.25 ms              797
 TCP download avg :       474.14          N/A Mbits/s         797
 TCP download sum :       474.14          N/A Mbits/s         797
 TCP download::1  :       474.14       488.75 Mbits/s         797
Starting Flent 2.0.1 using Python 3.10.4.
Starting tcp_ndown test. Expected run time: 40 seconds.
Data file written to ./tcp_ndown-2022-08-04T181841.410507.tcp_ndown-2-threads-linux_v22_03-rc6_mt76_WLAN_ECN-on_loc_servs_off_tx_queue_data2_burst_5_0.flent.gz

Summary of tcp_ndown test run from 2022-08-04 18:18:41.410507
  Title: 'tcp_ndown-2-threads-linux v22.03-rc6 mt76 WLAN ECN-on loc servs off tx_queue_data2_burst=5.0'

                             avg       median          # data pts
 Ping (ms) ICMP   :        23.70        24.60 ms              800
 TCP download avg :       233.18          N/A Mbits/s         800
 TCP download sum :       466.36          N/A Mbits/s         800
 TCP download::1  :       246.81       244.62 Mbits/s         800
 TCP download::2  :       219.55       226.04 Mbits/s         800
Starting Flent 2.0.1 using Python 3.10.4.
Starting tcp_ndown test. Expected run time: 40 seconds.
Data file written to ./tcp_ndown-2022-08-04T181926.854477.tcp_ndown-4-threads-linux_v22_03-rc6_mt76_WLAN_ECN-on_loc_servs_off_tx_queue_data2_burst_5_0.flent.gz

Summary of tcp_ndown test run from 2022-08-04 18:19:26.854477
  Title: 'tcp_ndown-4-threads-linux v22.03-rc6 mt76 WLAN ECN-on loc servs off tx_queue_data2_burst=5.0'

                             avg       median          # data pts
 Ping (ms) ICMP   :        20.60        20.70 ms              799
 TCP download avg :       114.89          N/A Mbits/s         799
 TCP download sum :       459.54          N/A Mbits/s         799
 TCP download::1  :       106.50       110.47 Mbits/s         799
 TCP download::2  :       109.41       114.40 Mbits/s         799
 TCP download::3  :       115.42       119.53 Mbits/s         799
 TCP download::4  :       128.21       130.73 Mbits/s         799
Starting Flent 2.0.1 using Python 3.10.4.
Starting tcp_ndown test. Expected run time: 40 seconds.
Data file written to ./tcp_ndown-2022-08-04T182012.520083.tcp_ndown-8-threads-linux_v22_03-rc6_mt76_WLAN_ECN-on_loc_servs_off_tx_queue_data2_burst_5_0.flent.gz

Summary of tcp_ndown test run from 2022-08-04 18:20:12.520083
  Title: 'tcp_ndown-8-threads-linux v22.03-rc6 mt76 WLAN ECN-on loc servs off tx_queue_data2_burst=5.0'

                             avg       median          # data pts
 Ping (ms) ICMP   :        22.30        23.40 ms              799
 TCP download avg :        58.05          N/A Mbits/s         799
 TCP download sum :       464.36          N/A Mbits/s         799
 TCP download::1  :        59.76        60.11 Mbits/s         799
 TCP download::2  :        61.47        62.12 Mbits/s         799
 TCP download::3  :        58.82        59.64 Mbits/s         799
 TCP download::4  :        65.04        64.83 Mbits/s         799
 TCP download::5  :        62.41        62.42 Mbits/s         799
 TCP download::6  :        50.70        51.66 Mbits/s         799
 TCP download::7  :        55.60        56.76 Mbits/s         799
 TCP download::8  :        50.56        52.27 Mbits/s         799
Starting Flent 2.0.1 using Python 3.10.4.
Starting tcp_ndown test. Expected run time: 40 seconds.
Data file written to ./tcp_ndown-2022-08-04T182058.593037.tcp_ndown-16-threads-linux_v22_03-rc6_mt76_WLAN_ECN-on_loc_servs_off_tx_queue_data2_burst_5_0.flent.gz

Summary of tcp_ndown test run from 2022-08-04 18:20:58.593037
  Title: 'tcp_ndown-16-threads-linux v22.03-rc6 mt76 WLAN ECN-on loc servs off tx_queue_data2_burst=5.0'

                             avg       median          # data pts
 Ping (ms) ICMP   :        30.60        32.70 ms              800
 TCP download avg :        29.02          N/A Mbits/s         800
 TCP download sum :       464.36          N/A Mbits/s         800
 TCP download::1  :        30.32        31.33 Mbits/s         800
 TCP download::10 :        28.92        30.10 Mbits/s         800
 TCP download::11 :        27.43        29.05 Mbits/s         800
 TCP download::12 :        28.99        30.30 Mbits/s         800
 TCP download::13 :        29.34        30.47 Mbits/s         800
 TCP download::14 :        27.88        29.27 Mbits/s         800
 TCP download::15 :        30.98        31.56 Mbits/s         800
 TCP download::16 :        29.73        30.86 Mbits/s         800
 TCP download::2  :        27.58        29.13 Mbits/s         800
 TCP download::3  :        27.80        29.46 Mbits/s         800
 TCP download::4  :        29.71        30.45 Mbits/s         800
 TCP download::5  :        28.48        29.67 Mbits/s         800
 TCP download::6  :        29.51        30.47 Mbits/s         800
 TCP download::7  :        29.18        30.37 Mbits/s         800
 TCP download::8  :        27.62        29.51 Mbits/s         800
 TCP download::9  :        30.89        31.35 Mbits/s         800

macOS output:

Starting Flent 2.0.1 using Python 3.9.13.
Starting tcp_ndown test. Expected run time: 40 seconds.
Data file written to ./tcp_ndown-2022-08-05T042917.037349.tcp_ndown-1-threads-macOS_v22_03-rc6_mt76_WLAN_ECN-on_loc_servs_off_tx_queue_data2_burst_5_0.flent.gz

Summary of tcp_ndown test run from 2022-08-04 18:29:17.037349
  Title: 'tcp_ndown-1-threads-macOS v22.03-rc6 mt76 WLAN ECN-on loc servs off tx_queue_data2_burst=5.0'

                             avg       median          # data pts
 Ping (ms) ICMP   :        20.80        23.60 ms              799
 TCP download avg :       471.16          N/A Mbits/s         799
 TCP download sum :       471.16          N/A Mbits/s         799
 TCP download::1  :       471.16       475.47 Mbits/s         799
Starting Flent 2.0.1 using Python 3.9.13.
Starting tcp_ndown test. Expected run time: 40 seconds.
Data file written to ./tcp_ndown-2022-08-05T043002.520946.tcp_ndown-2-threads-macOS_v22_03-rc6_mt76_WLAN_ECN-on_loc_servs_off_tx_queue_data2_burst_5_0.flent.gz

Summary of tcp_ndown test run from 2022-08-04 18:30:02.520946
  Title: 'tcp_ndown-2-threads-macOS v22.03-rc6 mt76 WLAN ECN-on loc servs off tx_queue_data2_burst=5.0'

                             avg       median          # data pts
 Ping (ms) ICMP   :        24.10        26.30 ms              799
 TCP download avg :       217.68          N/A Mbits/s         799
 TCP download sum :       435.36          N/A Mbits/s         799
 TCP download::1  :       203.92       210.31 Mbits/s         799
 TCP download::2  :       231.44       229.42 Mbits/s         799
Starting Flent 2.0.1 using Python 3.9.13.
Starting tcp_ndown test. Expected run time: 40 seconds.
Data file written to ./tcp_ndown-2022-08-05T043048.130109.tcp_ndown-4-threads-macOS_v22_03-rc6_mt76_WLAN_ECN-on_loc_servs_off_tx_queue_data2_burst_5_0.flent.gz

Summary of tcp_ndown test run from 2022-08-04 18:30:48.130109
  Title: 'tcp_ndown-4-threads-macOS v22.03-rc6 mt76 WLAN ECN-on loc servs off tx_queue_data2_burst=5.0'

                             avg       median          # data pts
 Ping (ms) ICMP   :        26.40        28.60 ms              799
 TCP download avg :       106.34          N/A Mbits/s         799
 TCP download sum :       425.36          N/A Mbits/s         799
 TCP download::1  :       105.41       106.02 Mbits/s         799
 TCP download::2  :       107.53       106.54 Mbits/s         799
 TCP download::3  :       103.04       102.44 Mbits/s         799
 TCP download::4  :       109.38       107.00 Mbits/s         799
Starting Flent 2.0.1 using Python 3.9.13.
Starting tcp_ndown test. Expected run time: 40 seconds.
Data file written to ./tcp_ndown-2022-08-05T043133.985191.tcp_ndown-8-threads-macOS_v22_03-rc6_mt76_WLAN_ECN-on_loc_servs_off_tx_queue_data2_burst_5_0.flent.gz

Summary of tcp_ndown test run from 2022-08-04 18:31:33.985191
  Title: 'tcp_ndown-8-threads-macOS v22.03-rc6 mt76 WLAN ECN-on loc servs off tx_queue_data2_burst=5.0'

                             avg       median          # data pts
 Ping (ms) ICMP   :        25.30        28.30 ms              799
 TCP download avg :        53.02          N/A Mbits/s         799
 TCP download sum :       424.17          N/A Mbits/s         799
 TCP download::1  :        56.93        55.93 Mbits/s         799
 TCP download::2  :        50.91        51.62 Mbits/s         799
 TCP download::3  :        52.50        52.52 Mbits/s         799
 TCP download::4  :        53.09        52.97 Mbits/s         799
 TCP download::5  :        50.79        51.04 Mbits/s         799
 TCP download::6  :        50.71        51.22 Mbits/s         799
 TCP download::7  :        55.26        54.67 Mbits/s         799
 TCP download::8  :        53.98        53.07 Mbits/s         799
Starting Flent 2.0.1 using Python 3.9.13.
Starting tcp_ndown test. Expected run time: 40 seconds.
Data file written to ./tcp_ndown-2022-08-05T043220.294767.tcp_ndown-16-threads-macOS_v22_03-rc6_mt76_WLAN_ECN-on_loc_servs_off_tx_queue_data2_burst_5_0.flent.gz

Summary of tcp_ndown test run from 2022-08-04 18:32:20.294767
  Title: 'tcp_ndown-16-threads-macOS v22.03-rc6 mt76 WLAN ECN-on loc servs off tx_queue_data2_burst=5.0'

                             avg       median          # data pts
 Ping (ms) ICMP   :        25.20        28.20 ms              799
 TCP download avg :        26.91          N/A Mbits/s         799
 TCP download sum :       430.54          N/A Mbits/s         799
 TCP download::1  :        28.01        27.12 Mbits/s         799
 TCP download::10 :        26.28        26.16 Mbits/s         799
 TCP download::11 :        25.94        26.21 Mbits/s         799
 TCP download::12 :        27.29        27.15 Mbits/s         799
 TCP download::13 :        27.16        26.69 Mbits/s         799
 TCP download::14 :        28.37        28.13 Mbits/s         799
 TCP download::15 :        25.29        25.24 Mbits/s         799
 TCP download::16 :        25.82        25.88 Mbits/s         799
 TCP download::2  :        28.57        27.34 Mbits/s         799
 TCP download::3  :        26.88        26.77 Mbits/s         799
 TCP download::4  :        26.97        27.14 Mbits/s         799
 TCP download::5  :        28.36        27.28 Mbits/s         799
 TCP download::6  :        25.37        25.52 Mbits/s         799
 TCP download::7  :        26.26        26.27 Mbits/s         799
 TCP download::8  :        26.54        26.52 Mbits/s         799
 TCP download::9  :        27.43        26.98 Mbits/s         799

Bonus round (free rrul_be test):

And, finally, for the gamers in the family (it does not look very good, tho'):

tx_queue_data2_burst=0.5

Okay, these, I hope, were a couple of hours well spent. :wink: Now it's time for some good, healthy training, and then back to business.


Thanks for the miserable results; they are always useful. This was one of my all-time favorite rants, at SIGCOMM:

Nobody's invited me back.

I would have hoped that by increasing

tx_queue_data2_burst=3.0

to 5.0, we would have seen the 570 Mbit download that @nbd claimed in his commit. It could be that the VI-queue-style settings

tx_queue_data2_aifs=1
tx_queue_data2_cwmin=7
tx_queue_data2_cwmax=15 

could be getting in the way here. BTW, I'd only intended to test that part briefly, and then actually test the VI queue by itself and against the BE queue. We have a lot of people in the IETF who really want to use up the VI queue for a new kind of traffic, and it concerns me. Similarly, Qosify is trying to land stuff there as well.

Wow, 6 reviewers required... not like the old days. Nagging @nbd or @jow via IRC to do two one-liners like the above PRs used to work for me...

Those three parameters were only active in the first round; in the last round, with tx_queue_data2_burst=5.0 and 0.5, they were left at their defaults in the BE queue, so the tests should be valid.

As for testing BE vs. VI: I knew I was forgetting something. I'll see what I can do.

That's what I was expecting too... oh, well, I'm in no hurry at all :wink:

rrul (though not the _be variant) at least used to exercise the BK, BE, and VI queues.

Arguably rrulv2, if we ever get around to it, will exercise all four.

There's a pic in the above preso of what that used to look like on the ath9k before all the new stuff landed. Even after it landed... it was still pretty miserable.

#% flent markings seem to be full decimal TOS byte values
#% conversion: TOS(dec) = DSCP(dec) * 4
#dscp(dec): EF:46 -> 184
#markings=CS0,CS1,CS2,CS3,CS4,CS5,CS6,CS7
#markings=0,32,64,96,128,160,192,224
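
A quick sketch (mine) to regenerate that markings line, since each class selector CSn is DSCP n*8 and TOS is the DSCP shifted left two bits:

for n in 0 1 2 3 4 5 6 7; do printf '%d,' $(( (n * 8) << 2 )); done; echo
# prints: 0,32,64,96,128,160,192,224, (CS0..CS7 as decimal TOS values, trailing comma and all)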


date ; ping -c 10 netperf-eu.bufferbloat.net ; ./run-flent --ipv4 -l 300 -H netperf-eu.bufferbloat.net rrul_var --remote-metadata=root@192.168.42.1 --test-parameter=cpu_stats_hosts=root@192.168.42.1 --step-size=.05 --socket-stats --test-parameter bidir_streams=8 --test-parameter markings=0,32,64,96,128,160,192,224 --test-parameter ping_hosts=1.1.1.1 -D . -t IPv4_SQM_cake_layer-cake_LLA-ETH_OH34_U097pct34500of35483K-D090pct105000of116797K_work-horse-eth0_2_TurrisOmnia-TurrisOS.5.7.2-pppoe-wan-eth2.7_2_bridged-BTHH5A-OpenWrt-r17498-07203cb253-Hvt-VDSL100_2_netperf-eu.bufferbloat.net --log-file

rrul_var to the rescue: just define the number of flows and which DSCPs to use. See above for all 8 class selectors... just pick which DSCPs you want to include...
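
Trimmed down to the essentials, an untested sketch of the same idea with one flow per AC (marking values per the LE/BE/CS5/EF mapping worked out below):

flent rrul_var -l 60 --step-size=.05 -H netperf-eu.bufferbloat.net \
  --test-parameter bidir_streams=4 \
  --test-parameter markings=4,0,160,184 \
  -t one-flow-per-AC-sketch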

hostapd: add wmm qos map set by default

author    Felix Fietkau <nbd@nbd.name>  Wed, 3 Nov 2021 22:40:53 +0100
committer Felix Fietkau <nbd@nbd.name>  Wed, 3 Nov 2021 22:47:55 +0100
commit    a5e3def1822431ef6436cb493df77006dbacafd6
tree      f4494efd6e08a872524eedb5081564a6f5ece20c
parent    b14f0628499142a718a68be7d1a7243f7f51ef0a

This implements the mapping recommendations from RFC8325, with an
update from RFC8622. This ensures that DSCP marked packets are properly
sorted into WMM classes.
The map can be disabled by setting iw_qos_map_set to something invalid
like 'none'

Signed-off-by: Felix Fietkau <nbd@nbd.name>

This introduces the following new, RFC8325-inspired DSCP-to-AC mappings:
set_default iw_qos_map_set 0,0,2,16,1,1,255,255,18,22,24,38,40,40,44,46,48,56

This translates into the following mappings (according to the hostapd rules below*):

Unraveling this gets us to the table below (0 is coded as a DSCP exception, the rest as DSCP ranges):

UP      DSCP    AC    PHBs (dec DSCP)
Ex0     0       BE    BE/CS0 (0)
Range0  2-16    BE    CS1 (8)**, AF11 (10), AF12 (12), AF13 (14), CS2 (16)
Range1  1-1     BK    LE (1)
Range2  -       -     (unused)
Range3  18-22   BE    AF21 (18), AF22 (20), AF23 (22)
Range4  24-38   VI    CS3 (24), AF31 (26), AF32 (28), AF33 (30), CS4 (32), AF41 (34), AF42 (36), AF43 (38)
Range5  40-40   VI    CS5 (40)
Range6  44-46   VO    VA (44), EF (46)
Range7  48-56   VO    CS6 (48), CS7 (56)
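
A quick way to sanity-check such a map (my own sketch, assuming the qos_map_set layout described in hostapd.conf: any leading DSCP,UP exception pairs, followed by eight low,high DSCP ranges for UP 0-7, where 255,255 marks an unused UP):

echo "0,0,2,16,1,1,255,255,18,22,24,38,40,40,44,46,48,56" | awk -F, '{
  n = NF - 16                            # values before the 8 (low,high) ranges
  for (i = 1; i < n; i += 2)             # DSCP exception pairs
    printf "Exception: DSCP %d -> UP %d\n", $i, $(i+1)
  for (u = 0; u < 8; u++) {              # one (low,high) DSCP range per UP
    lo = $(n + 1 + 2*u); hi = $(n + 2 + 2*u)
    if (lo == 255) printf "UP %d: unused\n", u
    else printf "UP %d: DSCP %d-%d\n", u, lo, hi
  }
}'

Running it reproduces the table above: the single exception DSCP 0 -> UP 0, UP 2 unused, and so on.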

So e.g. markings=4,4,0,0,160,160,184,184 (LE,LE,BE,BE,CS5,CS5,EF,EF) should put two flows into each AC on current OpenWrt...

date ; ping -c 10 netperf-eu.bufferbloat.net ; ./run-flent --ipv4 -l 300 -H netperf-eu.bufferbloat.net rrul_var --remote-metadata=root@192.168.42.1 --test-parameter=cpu_stats_hosts=root@192.168.42.1 --step-size=.05 --socket-stats --test-parameter bidir_streams=8 --test-parameter markings=4,4,0,0,160,160,184,184 --test-parameter ping_hosts=1.1.1.1 -D . -t IPv4_SQM_cake_layer-cake_LLA-ETH_OH34_U097pct34500of35483K-D090pct105000of116797K_work-horse-eth0_2_TurrisOmnia-TurrisOS.5.7.2-pppoe-wan-eth2.7_2_bridged-BTHH5A-OpenWrt-r17498-07203cb253-Hvt-VDSL100_2_netperf-eu.bufferbloat.net --log-file

Thanks heaps for taking the time to post this, mate. I'll use it in my next tests tomorrow, tho'.

@dtaht

Next round of tests, I hope these are useful.

Test rrul, AC_BE all default:

Test rrul, AC_VI all default:

Test rrul, AC_BE and AC_VI, all default:

Test rrul, AC_BE and AC_VI, tx_burst=5.0:

Test rrul, AC_BE and AC_VI, tx_burst=0.5:

Test rrul, AC_BE and AC_VI, BE parameters equal to VI:

Test rrul, AC_BE and AC_VI, BE parameters equal to VI and BE TXOP=94:

  • Please find all the data for this 2nd round by clicking here

Update: I mistakenly used 40 as the DSCP value in place of 160.

I do not think 40 is AC_VI. As far as I understand, flent accepts and prints TOS values, so 40 would be TOS 40, i.e. DSCP 40/4 = 10, which still maps to AC_BE... However, flent might print decimal DSCP values while requiring decimal TOS values for configuration, so you might have done the right thing and I am just confused...

But the fact that all flows get the same throughput indicates that the marking might not be as intended.

(Why is TOS = DSCP * 4? Because this is essentially a shift left by two bits, to get from the 6-bit DSCP to the 8-bit TOS byte with the two ECN bits zero by default.)
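
In shell arithmetic, both directions of that two-bit shift for the concrete case above:

echo $(( 40 >> 2 ))   # TOS 40 -> DSCP 10 (AF11), which the default map files under AC_BE
echo $(( 40 << 2 ))   # DSCP 40 (CS5) -> TOS 160, which lands in AC_VI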


You are right! The lack of sleep is clearly affecting me. Going to redo them. Argh.

Update: it should be fixed now. I'm going to go mind my Saturday and have another coffee to see if I can wake up properly!


Thank you, especially for showing how badly the BE queue performs under contention vs. VI. BK ought to be worse! There is a lot of traffic mismarked as CS1 out there, in the vain hope that background actually means what L3 protocol designers meant by background; 4 seconds of delay with only one station on the air is well beyond what we meant by background. Trying to push TCP through there, with its typical timeouts at essentially 250 ms, 1 s, and 2 s, means we end up sending more packets in a somewhat futile manner. If we could impress upon application designers the idea that the BK queue might be delayed tens of seconds, and restrict its usage to just the apps that can tolerate that, it would be great.

Back when we were thinking about 802.11e, the problem as seen then (2003!) was that VoIP really, really, really wanted a 10 ms interval (now it's 20 ms), we didn't have good jitter buffers, and ulaw and GSM encodings were the law of the land. So a limited number of VoIP phones on an AP worked better - ship it! (And again, this was a client option at the time, not so much an AP one. The APs were supposed to figure out how to schedule responses, and many (enterprise) APs actually did do some of the right things here...)

VI ended up as a bucket for where videoconferencing was to go. It seemed to make sense... to some...

But the complexities of 802.11e's bus arbitration don't make a lot of sense, period, IMHO.

After 802.11n showed up with aggregation, which was vastly superior in terms of fitting packets into a txop (if you managed the queues right)... and Atheros sold out to Qualcomm... most of the detailed AP knowledge began to fade from the field.

I turned off mappings via qos-map almost entirely (EF-only) years ago, and have in general not looked back. WMM is required to work to pass the Wi-Fi Alliance's tests! So it's still on by default for nearly everybody else, and the effect on real traffic, well... I'm in general thankful that so few applications have tried to use it to date. Used carefully, from certain kinds of STAs, it still seems a decent idea. Note "carefully". There are a few WiFi joystick game controllers that use VI or VO...

Despite my opinion, I never got sufficient data from real-world usage to convince enough people I was right.

With enough data, perhaps we can convince the OpenWrt folk to obsolete qos-map into the BK queue, at least. The VI queue isn't looking all that good either. Scheduling smartly, and intelligently reducing txop size under contention, seemed the best strategy to me (in 2016).

I've sometimes hoped we could find another use for the 4 hardware queues, or that they would work better in a MU-MIMO situation. I keep hoping we find a benchmark that shows a demonstrable benefit of the VI and VO queues for some form of real-world traffic on some generation of WiFi.

I should also note that there are all sorts of other possible sources for the 4-second ICMP spikes seen here...


Hence my principled objection to the harebrained idea of making "NQB" inhabit AC_VI... clearly nobody in the IETF WG bothers to look at actual data...

I thought 802.11e was finalized in 2005?

You convinced me; am I not enough people? :wink:

Oh, I think there is, but it requires that you have <= 4 different priority levels of traffic and are willing to accept that higher-priority traffic, if not rate-limited sufficiently*, will severely choke lower-priority traffic.

*) Not that rate limiting on a variable-rate link like WiFi is conceptually all that "simple"...

I'd worked on WiFi from 1998 to 2005 - http://the-edge.blogspot.com/2010/10/who-invented-embedded-linux-based.html - as well as on various VoIP products like Asterisk and the Sofia SIP stack. I tapered off after 2005. So I was aware that what became 802.11e was kind of a brain-damaged idea, except for VoIP. I didn't really grok the real damage of 802.11n packet aggregation until 2012? 2013? All I really understood was that sometime around 2008 or so, wondershaper had stopped working worth a darn. Looking back in history (now), txqueuelens had grown to 1000 packets and GSO and GRO had become a thing, and nobody else had noticed either (and I was still doing things like SFQ by default and Vegas, not realizing nobody else was doing that; I didn't get out much) - after I believed Jim enough to repeat his experiments in 2010? 2011?

I didn't get how big the problem was for everyone, either. I just thought it was my tin cans and string connecting me to the internet.

Anyway, a little more data on VI vs. BE: just a BE flow competing with a high-rate irtt -i3ms --dscp 160. We really need a test that integrates that sort of thing directly into flent, plotting irtt loss and marks.

My hope was that a test downloading exclusively via the VI queue would see, oh, no more than 4-8 ms of observed latency on this chipset. 20 ms seems really excessive, and must be coming from... AQL? The hardware? I don't know.


A great deal of the testing I'd wanted to do on this thread took place over here: AQL and the ath10k is *lovely* - #859 by dtaht

I'd prefer to try to close out the AQL and ath10k discussion over there and move things here.

So, @dtaht, what feedback do you have about that ath10k bug?

My ath10k is in a storage unit 200 miles from here, as are the remains of my lab. On my little boat I'm using an ath9k/LTE device, and I recently picked up a Starlink. I'm tempted to hack into the Starlink and fix it (https://www.youtube.com/watch?v=c9gLo6Xrwgw). Anyway, the best I can do at the moment is help analyze tests, until I find a cheap place to have a lab on land... or get a bigger boat.