Optimizing WireGuard speed in OpenWrt

Hi all,
I have a fully running Wireguard VPN client running on Openwrt (TPLink Archer 1750). The server is on a AWS T2 Micro . I conducted speedests on the router and found that the speeds are averagin 24 Mbps. I find the speed to be quite low.

root@OpenWrt:/tmp# ./speedtest.sh -p 8.8.8.8
2024-01-02 04:50:28 Testing against netperf.bufferbloat.net (ipv4) with 5 simultaneous sessions while pinging 8.8.8.8 (60 seconds in each direction)
...................................................................................................................................
 Download: 0.00 Mbps
  Latency: (in msec, 131 pings, 0.00% packet loss)
      Min: 13.204 
    10pct: 16.043 
   Median: 20.591 
      Avg: 21.814 
    90pct: 26.455 
      Max: 35.751
.....................................................................................................................................
   Upload: 0.00 Mbps
  Latency: (in msec, 133 pings, 0.00% packet loss)
      Min: 12.716 
    10pct: 16.150 
   Median: 21.074 
      Avg: 23.034 
    90pct: 27.957 
      Max: 89.637
root@OpenWrt:/tmp# ./speedtest.sh
2024-01-02 04:55:53 Testing against netperf.bufferbloat.net (ipv4) with 5 simultaneous sessions while pinging gstatic.com (60 seconds in each direction)
......................................................................................................................................
 Download: 0.00 Mbps
  Latency: (in msec, 134 pings, 0.00% packet loss)
      Min: 13.815 
    10pct: 16.553 
   Median: 21.286 
      Avg: 22.185 
    90pct: 26.953 
      Max: 35.945
.....................................................................................................................................
   Upload: 0.00 Mbps
  Latency: (in msec, 133 pings, 0.00% packet loss)
      Min: 16.567 
    10pct: 16.837 
   Median: 22.039 
      Avg: 24.615 
    90pct: 31.134 
      Max: 74.617

My internet speed is 500 Mbps. The AWS t2.micro's speed is:

 curl -s https://raw.githubusercontent.com/sivel/speedtest-cli/master/speedtest.py | python3 -
Retrieving speedtest.net configuration...
Testing from Amazon.com ()...
Retrieving speedtest.net server list...
Selecting best server based on ping...
Hosted by Bell Mobility (Montréal, QC) [1.39 km]: 2.143 ms
Testing download speed................................................................................
Download: 1007.56 Mbit/s
Testing upload speed......................................................................................................
Upload: 1046.04 Mbit/s

The MTU is set to 1412 on both the client and server. Is there a way to further tune the WireGuard setup to improve the average connection speed?
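For reference, that MTU is applied on the OpenWrt side via UCI (a sketch; the interface name wg0 is illustrative, yours may differ):

```shell
# Set the WireGuard interface MTU and reload the interface.
# "wg0" is an assumed interface name; substitute your own.
uci set network.wg0.mtu='1412'
uci commit network
ifup wg0
```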

I also have MSS clamping set on the WireGuard server:
-A ufw-after-forward -p tcp --tcp-flags SYN,RST SYN -j TCPMSS --clamp-mss-to-pmtu
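As a sanity check on that rule: with `--clamp-mss-to-pmtu`, the clamped MSS should work out to the tunnel MTU minus 40 bytes of IPv4 + TCP headers (a quick sketch; the 1412 figure is my configured MTU from above):

```shell
# MSS = tunnel MTU - 20 (IPv4 header) - 20 (TCP header, no options)
WG_MTU=1412
MSS=$((WG_MTU - 20 - 20))
echo "Expected clamped MSS: $MSS"   # prints 1372
```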

What does htop say about the router's CPU usage while using the tunnel?

Here is the output of top. Currently, all traffic is routed through the VPN.

Mem: 57200K used, 65088K free, 1180K shrd, 0K buff, 14832K cached
CPU:   2% usr   3% sys   0% nic  88% idle   0% io   0% irq   4% sirq
Load average: 0.13 0.08 0.03 1/56 7674
  PID  PPID USER     STAT   VSZ %VSZ %CPU COMMAND
 1567  1563 network  S     4564   4%   7% /usr/sbin/hostapd -s -g /var/run/hostapd/global
 6650     2 root     IW       0   0%   3% [kworker/0:1-wg-]
 7674     2 root     IW       0   0%   1% [kworker/0:3-wg-]
 7663     2 root     IW       0   0%   1% [kworker/0:2-eve]
    8     2 root     SW       0   0%   0% [ksoftirqd/0]
 7673  7665 root     R     1324   1%   0% top
 7664  1459 root     S     1228   1%   0% /usr/sbin/dropbear -F -P /var/run/dropbear.1.pid -p 22 -K 300 -T 3 -2 9
 1590  1564 network  S     4436   4%   0% /usr/sbin/wpa_supplicant -n -s -g /var/run/wpa_supplicant/global
 1812     1 root     S     4188   3%   0% /usr/sbin/uhttpd -f -h /www -r OpenWrt -x /cgi-bin -u /ubus -t 60 -T 30 -k 20 -A 1 -n 3 -N 100 -R -p 0.0.0.0:80 -p [::]:80 -C /etc/uhttpd.crt -K /et
 1564     1 root     S     2656   2%   0% {wpa_supplicant} /sbin/ujail -t 5 -n wpa_supplicant -U network -G network -C /etc/capabilities/wpad.json -c -- /usr/sbin/wpa_supplicant -n -s -g /va
 4350     1 root     S     2656   2%   0% {dnsmasq} /sbin/ujail -t 5 -n dnsmasq -u -l -r /bin/ubus -r /etc/TZ -r /etc/dnsmasq.conf -r /etc/ethers -r /etc/group -r /etc/hosts -r /etc/passwd -
 3196     1 root     S     2656   2%   0% {ntpd} /sbin/ujail -t 5 -n ntpd -U ntp -G ntp -C /etc/capabilities/ntpd.json -c -u -r /bin/ubus -r /usr/bin/env -r /usr/bin/jshn -r /usr/sbin/ntpd-h
 1563     1 root     S     2656   2%   0% {hostapd} /sbin/ujail -t 5 -n hostapd -U network -G network -C /etc/capabilities/wpad.json -c -- /usr/sbin/hostapd -s -g /var/run/hostapd/global
 1253     1 root     S     2268   2%   0% /sbin/rpcd -s /var/run/ubus/ubus.sock -t 30

New to this and maybe a little wet behind the ears ... BUT:

The first entry shows a speed test that transferred ZERO data but reported a median latency of about 21 ms, which is okay (I generally get 10 ms from the router, 13 ms from the desktop). Note that 8.8.8.8 (Google DNS) is only the ping target there; the actual transfer runs against netperf.bufferbloat.net, so the DNS service isn't being asked to carry the speed test.

The second test, the 1000+ Mbit/s one, was run on the AWS t2.micro itself, so it measures the server's own link rather than your 500 Mbps home service. At least it shows the server end is not the bottleneck.

The TP-Link Archer AC1750 would not seem to be a good choice for a VPN tunnel at 500 Mbps: it has no hardware encryption support and its CPU only runs at about 700 MHz, which doesn't leave a lot of processing headroom. That said, there seems to be very little going on in your "top" output; was the network loaded, or a speedtest underway, during the "top" screenshot?

If you're really in need of hot VPN performance and the router is getting CPU-bound, either choose router hardware that helps the process, or offload the tunnel to a secondary machine so as not to load the router itself.

However, the test results would not seem to support the concerns you voiced.


I think the output of top was gathered while the tunnel was not under heavy load.
Otherwise you would see the load show up under ksoftirqd, as the router/OS gets busy moving packets from one interface to another while encapsulating and decapsulating them...
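One rough way to watch that while a transfer is running (a sketch; uses only awk and /proc/stat, both available on OpenWrt):

```shell
# Sample the cumulative softirq time from /proc/stat twice, one second apart.
# On the aggregate "cpu" line, field 8 is time spent servicing softirqs.
s1=$(awk '/^cpu /{print $8}' /proc/stat)
sleep 1
s2=$(awk '/^cpu /{print $8}' /proc/stat)
echo "softirq jiffies in the last second: $((s2 - s1))"
```

Run it on the router during a transfer; a value climbing toward the kernel tick rate (assuming the usual 100 Hz tick, that's near 100) means one core is spending nearly all its time in softirq.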

I tried again, this time trying to push the VPN: I ran a video call via Google Meet to test it out. Not much difference on the AC1750.

CPU:   5% usr   2% sys   0% nic  88% idle   0% io   0% irq   3% sirq
Load average: 0.18 0.30 0.27 1/60 14196
  PID  PPID USER     STAT   VSZ %VSZ %CPU COMMAND
 1567  1563 network  R     4700   4%   7% /usr/sbin/hostapd -s -g /var/run/hostapd/global
13465     2 root     IW       0   0%   2% [kworker/0:0-wg-]
13855     2 root     IW       0   0%   2% [kworker/0:1-eve]
14196 13533 root     R     1320   1%   0% top
13532  1459 root     S     1228   1%   0% /usr/sbin/dropbear -F -P /var/run/dropbear.1.pid -p 22 -K 300 -T 3 -2 9
 4353  4350 dnsmasq  S     1504   1%   0% /usr/sbin/dnsmasq -C /var/etc/dnsmasq.conf.cfg01411c -k -x /var/run/dnsmasq/dnsmasq.cfg01411c.pid
    8     2 root     SW       0   0%   0% [ksoftirqd/0]
 1590  1564 network  S     4436   4%   0% /usr/sbin/wpa_supplicant -n -s -g /var/run/wpa_supplicant/global
 1812     1 root     S     4188   3%   0% /usr/sbin/uhttpd -f -h /www -r OpenWrt -x /cgi-bin -u /ubus -t 60 -T 30 -k 20 -A 1 -n 3 -N 100 -R -p 0.0.0.0:80 -p [::]:80 -C /etc/uhttpd.crt -K /et
 1564     1 root     S     2656   2%   0% {wpa_supplicant} /sbin/ujail -t 5 -n wpa_supplicant -U network -G network -C /etc/capabilities/wpad.json -c -- /usr/sbin/wpa_supplicant -n -s -g /va
 4350     1 root     S     2656   2%   0% {dnsmasq} /sbin/ujail -t 5 -n dnsmasq -u -l -r /bin/ubus -r /etc/TZ -r /etc/dnsmasq.conf -r /etc/ethers -r /etc/group -r /etc/hosts -r /etc/passwd -
 3196     1 root     S     2656   2%   0% {ntpd} /sbin/ujail -t 5 -n ntpd -U ntp -G ntp -C /etc/capabilities/ntpd.json -c -u -r /bin/ubus -r /usr/bin/env -r /usr/bin/jshn -r /usr/sbin/ntpd-h
 1563     1 root     S     2656   2%   0% {hostapd} /sbin/ujail -t 5 -n hostapd -U network -G network -C /etc/capabilities/wpad.json -c -- /usr/sbin/hostapd -s -g /var/run/hostapd/global
 1253     1 root     S     2244   2%   0% /sbin/rpcd -s /var/run/ubus/ubus.sock -t 30
 1630     1 root     S     1832   1%   0% /sbin/netifd
    1     0 root     S     1700   1%   0% /sbin/procd

I did see a big drop in speed when the video call was on, though.
VPN on, no video call:

root@OpenWrt:/tmp# ./speedtest.sh 
2024-01-05 22:48:41 Testing against netperf.bufferbloat.net (ipv4) with 5 simultaneous sessions while pinging gstatic.com (60 seconds in each direction)
......................................................................................................................................
 Download: 0.00 Mbps
  Latency: (in msec, 134 pings, 0.00% packet loss)
      Min: 20.155 
    10pct: 21.536 
   Median: 27.217 
      Avg: 28.300 
    90pct: 31.307 
      Max: 111.049
....................................................................................................................................
   Upload: 0.00 Mbps
  Latency: (in msec, 133 pings, 0.00% packet loss)
      Min: 20.723 
    10pct: 21.729 
   Median: 27.630 
      Avg: 35.953 
    90pct: 71.400 
      Max: 121.311

VPN on, with video call on:

root@OpenWrt:/tmp# ./speedtest.sh 
2024-01-05 22:42:57 Testing against netperf.bufferbloat.net (ipv4) with 5 simultaneous sessions while pinging gstatic.com (60 seconds in each direction)
..................................................................................................................................
 Download: 0.00 Mbps
  Latency: (in msec, 131 pings, 0.00% packet loss)
      Min: 15.669 
    10pct: 17.325 
   Median: 21.203 
      Avg: 22.555 
    90pct: 27.188 
      Max: 66.555
.....................................................................................................................................
   Upload: 0.00 Mbps
  Latency: (in msec, 133 pings, 0.00% packet loss)
      Min: 13.955 
    10pct: 14.100 
   Median: 21.056 
      Avg: 21.764 
    90pct: 26.077 
      Max: 41.925

I'm trying to understand if it's possible to increase the average speed from 20-30 Mbps to close to 100 Mbps, given I have a 500 Mbps download speed. Or is it curtailed because my upload speed is only ~25 Mbps?

Thanks

I got between 60 and 70 MBit/s on LAN with OpenWrt 18 and 19. Performance dipped afterwards: speeds ranged from 25 to 40 MBit/s with OpenWrt 21 and 22 on LAN. Tested on dozens of devices with several self-built images.

If you want at least 100 MBit/s with WireGuard, look for something like the TP-Link AX23 (up to 140 MBit/s) or even more powerful devices.


Ideally, you should be sending a significant amount of data to the bufferbloat server. When the test shows "0.00 Mbps", ask yourself how much data is actually being transferred (none!).

For giggles, I loaded speedtest-netperf on OpenWrt 23.05 and ran it.
Testing this morning seems to show netperf.bufferbloat.net down, which will result in "Download: 0.00 Mbps"; try using "netperf-west.bufferbloat.net" instead.

root@[Redacted]:~# speedtest-netperf.sh -H netperf-west.bufferbloat.net -t 5
2024-01-07 09:30:55 Starting speedtest for 5 seconds per transfer session.
Measure speed to netperf-west.bufferbloat.net (IPv4) while pinging gstatic.com.
Download and upload sessions are sequential, each with 5 simultaneous streams.
......
  Download: 315.71 Mbps
  Latency: [in msec, 5 pings, 0.00% packet loss]
      Min:  21.131
    10pct:   0.000
   Median:   0.000
      Avg:  22.856
    90pct:   0.000
      Max:  27.026
 CPU Load: [in % busy (avg +/- std dev) @ avg frequency, 3 samples]
     cpu0:  88.7 +/-  0.0  @ 1400 MHz
     cpu1:  38.5 +/-  0.9  @ 1400 MHz
 Overhead: [in % used of total CPU available]
  netperf:  55.5
......
   Upload: 322.22 Mbps
  Latency: [in msec, 6 pings, 0.00% packet loss]
      Min:  25.405
    10pct:   0.000
   Median:   0.000
      Avg:  29.057
    90pct:   0.000
      Max:  35.089
 CPU Load: [in % busy (avg +/- std dev) @ avg frequency, 4 samples]
     cpu0: 100.0 +/-  0.0  @ 1400 MHz
     cpu1:   7.9 +/-  2.3  @ 1400 MHz
 Overhead: [in % used of total CPU available]
  netperf:   4.8

AND for gawd's sake, set "-t 5"; 60 seconds in each direction is far more than you need to run to get a valid result for your purposes.

The speedtest client itself uses a lot of router CPU. For an accurate benchmark of routing and VPN capacity, don't run the speedtest client on the router itself; run it on a PC connected through the router.
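For instance (a sketch with a hypothetical host name; assumes iperf3 is installed on a LAN PC and on a server reachable through the tunnel):

```shell
# On a host at the far end of the tunnel (NOT on the router):
#   iperf3 -s
# On a PC on the router's LAN, so traffic crosses the router and the tunnel
# ("vpn-server.example" is a placeholder for a host behind the VPN):
iperf3 -c vpn-server.example -t 10      # upload through the tunnel
iperf3 -c vpn-server.example -t 10 -R   # download (-R: server sends, reverse mode)
```

While it runs, watch top on the router: the WireGuard work shows up as sirq time and kworker/ksoftirqd threads, not as a named userspace process.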
