I had time to run a few tests today and tried to replicate the issues you've been experiencing with the Archer C7. Unfortunately, I was unable to reproduce your issue. Even behind three 15cm-thick walls, with the nodes roughly 15m apart and connected over 5GHz, I still got a mean of 52 Mbits/sec via TCP and 85 Mbits/sec via UDP. For reference, when the nodes are within line of sight and 2m apart (mesh baseline), I got a mean of 125 Mbits/sec via TCP and 143 Mbits/sec via UDP. (You can find the details of my tests at the end of this reply.)
Regarding your own iperf tests, it seems you're running them on the nodes themselves rather than through them. I get very different readings when running iperf on the nodes vs. through them, which I assume comes down to the CPU limitations of the Archer C7. As @16F84 pointed out, many such network devices have limited processing power. That's why I mentioned using laptops connected to the mesh nodes via Ethernet cables.
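Concretely, here is what I mean by testing through the nodes rather than on them (the 192.168.1.50 address is just a placeholder for your server laptop's LAN IP):

```shell
# On a laptop cabled to the Ethernet port of node 02:
iperf3 -s

# On a laptop cabled to node 01, targeting the server laptop's address,
# not the node's own address, so the traffic crosses the mesh link:
iperf3 -t 60 -P 5 -w 64K -c 192.168.1.50    # TCP, 5 parallel streams
iperf3 -u -t 60 -b 1G -c 192.168.1.50       # UDP, 1 Gbit/s offered load
```

This way the routers only forward packets, and their CPUs never have to source or sink the iperf3 streams themselves.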
I also played around with MTU sizes and fragmentation options (e.g., https://www.open-mesh.org/projects/batman-adv/wiki/Fragmentation-technical) but did not notice any remarkable differences that could explain your throughput issues. I think @16F84 made a few interesting suggestions about tuning your radio settings (see channel selection), but honestly, I've been using pretty much default values and haven't run into anything like you reported.
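If you want to repeat that experiment on your side, fragmentation is a runtime knob. A sketch, assuming the batctl 2019.2 short-form syntax I'm running and the `if-mesh` hardif name from my config below:

```shell
# Show the current batman-adv fragmentation setting
batctl f

# Disable it, rerun the throughput test, then re-enable it
batctl f 0
batctl f 1

# The MTU of the mesh hardif can also be varied to see whether frames
# get fragmented at all (2304 is what my config uses)
ip link set dev if-mesh mtu 2304
```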
All this makes me think there is something peculiar about the environment in which your nodes were deployed, or that the nodes can barely see each other. If you cannot take them down for further testing, try adding a few nodes between them, play around with the hop penalty settings, and see how that affects throughput measured properly (i.e., through your mesh nodes). To help you debug this further, check the logs, compare what batctl s reports before and after a throughput test, and save the output of batctl td bat0 to a file so you can check it for errors later (you might need to change the log level).
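A rough sketch of that debugging loop, using batctl subcommands from the 2019.2 release I'm running (the file names are just placeholders):

```shell
# Do the nodes actually see each other, and with what link quality?
batctl o                      # originator table; TQ close to 255 is good
batctl n                      # direct neighbors on the mesh interface

# Snapshot the counters before and after an iperf3 run; a jump in the
# dropped or fragment counters narrows down where packets are lost
batctl s > stats_before.txt
# ... run the throughput test through the mesh ...
batctl s > stats_after.txt
diff stats_before.txt stats_after.txt

# Capture batman-adv traffic on bat0 for offline inspection
batctl td bat0 > mesh_capture.txt

# Raise the log level if the logs show nothing useful
batctl ll routes tt

# And the hop penalty knob mentioned above (my config uses 30)
batctl hp                     # show the current value
batctl hp 15                  # penalize multi-hop routes less
```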
Tests

- iperf3 server (S): laptop with gigabit Intel NIC and i7-4700MQ CPU, running Linux
- iperf3 client (C): desktop with gigabit Intel NIC and i7-4790K CPU, running Linux
- Client iperf3 TCP mode: iperf3 -t 60 -P 5 -w 64K -c SERVER_IP
- Client iperf3 UDP mode: iperf3 -u -t 60 -b 1G -c SERVER_IP
- Mesh nodes 01 and 02: Archer C7 v2
- Mesh node 03: Archer C7 v4
- All nodes:
  Architecture: Qualcomm Atheros QCA9558 ver 1 rev 0
  Firmware Version: OpenWrt 19.07.7 r11306-c4a6851c72 / LuCI openwrt-19.07 branch git-21.128.50949-ec81a49
  Kernel Version: 4.14.221
  Relevant pkgs:
    ath10k-firmware-qca988x - 2019-10-03-d622d160-1
    kmod-ath10k - 4.14.221+4.19.161-1-1
    wpad-mesh-openssl - 2019-08-08-ca8c2bd2-7
    batctl-default - 2019.2-8
    kmod-batman-adv - 4.14.221+2019.2-11
- OpenWrt /etc/config/wireless:
config wifi-device 'radio0'
	option type 'mac80211'
	option channel '44'
	option hwmode '11a'
	option path 'pci0000:00/0000:00:00.0'
	option htmode 'VHT80'
	option country 'BR'
	option disabled '0'

config wifi-iface 'wmesh'
	option device 'radio0'
	option ifname 'if-mesh'
	option network 'mesh'
	option mode 'mesh'
	option mesh_id 'REDACTED'
	option encryption 'sae'
	option key 'REDACTED'
	option mesh_fwding '0'
	option mesh_ttl '1'
	option mcast_rate '24000'
	option disabled '0'
- OpenWrt /etc/config/network:
config interface 'lan'
	option type 'bridge'
	option ifname 'eth1.1 bat0.1'
	option proto 'static'
	option ipaddr '192.168.1.1'
	option netmask '255.255.255.0'
	list dns '8.8.8.8'

config switch
	option name 'switch0'
	option reset '1'
	option enable_vlan '1'

config switch_vlan
	option device 'switch0'
	option vlan '1'
	option ports '2 3 4 5 0t'

config interface 'bat0'
	option proto 'batadv'
	option routing_algo 'BATMAN_IV'
	option aggregated_ogms '1'
	option ap_isolation '0'
	option bonding '0'
	option bridge_loop_avoidance '1'
	option distributed_arp_table '1'
	option fragmentation '1'
	option gw_mode 'off'
	option hop_penalty '30'
	option isolation_mark '0x00000000/0x00000000'
	option log_level '0'
	option multicast_mode '1'
	option multicast_fanout '16'
	option network_coding '0'
	option orig_interval '1000'

config interface 'mesh'
	option proto 'batadv_hardif'
	option master 'bat0'
	option mtu '2304'
	option throughput_override '0'
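For completeness, after editing either file I apply the changes with the standard OpenWrt commands and then confirm the hardif re-attached to bat0:

```shell
# Apply /etc/config/wireless changes
wifi reload

# Apply /etc/config/network changes (briefly drops connectivity)
/etc/init.d/network restart

# Verify the mesh interface is active under bat0 again
batctl if
```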
Test scenarios

- Very noisy RF environment (multiple apartment buildings nearby)
- Tested conditions:
  - S and C on the same node (mesh-less baseline)
  - S - Node 01 - Node 02 - C (mesh baseline): within line of sight, 2 meters apart
  - S - Node 01 - Node 02 - C (close): one 15cm-thick concrete wall, roughly 7 meters apart
  - S - Node 01 - Node 03 - Node 02 - C (far): three concrete walls, roughly 15 meters apart
Results
The first output is always the TCP test; the second is UDP. All results are summaries over 1 minute. (Of note, for TCP the window size was set to 64K because I was getting lots of retries with the default value over the mesh connections. That is what led me to play around with the MTU and fragmentation settings, but the results did not show anything relevant to OP's issue, so I'm not reporting them here.)
Baseline mesh-less
$ iperf3 -t 60 -P 5 -w 64K -c S_IP
[ ID] Interval Transfer Bitrate Retr
[SUM] 0.00-60.00 sec 6.57 GBytes 941 Mbits/sec 0 sender
[SUM] 0.00-60.00 sec 6.57 GBytes 941 Mbits/sec receiver
$ iperf3 -u -t 60 -b 1G -c S_IP
[ ID] Interval Transfer Bitrate Jitter Lost/Total Datagrams
[ 5] 0.00-60.00 sec 6.68 GBytes 956 Mbits/sec 0.000 ms 0/4953791 (0%) sender
[ 5] 0.00-60.00 sec 6.68 GBytes 956 Mbits/sec 0.022 ms 123/4953791 (0.0025%) receiver
Baseline mesh
$ iperf3 -t 60 -P 5 -w 64K -c S_IP
[ ID] Interval Transfer Bitrate Retr
[SUM] 0.00-60.00 sec 893 MBytes 125 Mbits/sec 1 sender
[SUM] 0.00-60.02 sec 893 MBytes 125 Mbits/sec receiver
$ iperf3 -u -t 60 -b 1G -c S_IP
[ ID] Interval Transfer Bitrate Jitter Lost/Total Datagrams
[ 5] 0.00-60.00 sec 6.66 GBytes 954 Mbits/sec 0.000 ms 0/4940958 (0%) sender
[ 5] 0.00-60.03 sec 1021 MBytes 143 Mbits/sec 0.085 ms 4201504/4940913 (85%) receiver
Close
$ iperf3 -t 60 -P 5 -w 64K -c S_IP
[ ID] Interval Transfer Bitrate Retr
[SUM] 0.00-60.00 sec 753 MBytes 105 Mbits/sec 1 sender
[SUM] 0.00-60.01 sec 753 MBytes 105 Mbits/sec receiver
$ iperf3 -u -t 60 -b 1G -c S_IP
[ ID] Interval Transfer Bitrate Jitter Lost/Total Datagrams
[ 5] 0.00-60.00 sec 6.66 GBytes 954 Mbits/sec 0.000 ms 0/4940942 (0%) sender
[ 5] 0.00-60.04 sec 868 MBytes 121 Mbits/sec 0.682 ms 4312030/4940422 (87%) receiver
Far
$ iperf3 -t 60 -P 5 -w 64K -c S_IP
[ ID] Interval Transfer Bitrate Retr
[SUM] 0.00-60.00 sec 377 MBytes 52.7 Mbits/sec 23 sender
[SUM] 0.00-60.05 sec 376 MBytes 52.6 Mbits/sec receiver
$ iperf3 -u -t 60 -b 1G -c S_IP
[ ID] Interval Transfer Bitrate Jitter Lost/Total Datagrams
[ 5] 0.00-60.00 sec 6.66 GBytes 954 Mbits/sec 0.000 ms 0/4938732 (0%) sender
[ 5] 0.00-60.08 sec 608 MBytes 84.9 Mbits/sec 0.029 ms 4497992/4938407 (91%) receiver