GL-B1300 routing performance degradation on 21.02

I have noticed a significant degradation in routing performance on the GL.iNet GL-B1300 running 21.02. I can speed test at around 900 Mbps using 19.07.8, but on 21.02.1 my performance is in the 300 Mbps range. Tests were performed with a minimal configuration and without SQM or other traffic-shaping packages installed.

Here's an example Speedtest result, wired, on 19.07.

Here are some results from an Eero AP that was wired to the GL-B1300, showing the ~300 Mbps performance. The older 900 Mbps results were taken with the Eero acting as the router.

I've tried to find other people discussing this on the B1300 or other IPQ4028-based boards without any luck; thanks in advance for any guidance you can offer.

Test enabling packet_steering (a new setting in the 21.02 series that was briefly enabled by default but is now disabled by default again, as it regresses performance on some systems while helping others). Installing and enabling irqbalance would also be a good idea; test all permutations. Personally I'm sceptical about ~900 MBit/s figures on this hardware; around 300 MBit/s would be more within my expectations. Obviously you can push those limits by enabling software flow-offloading, but that's cheating: yes, a massive speedup, but it also breaks other use cases (e.g. traffic accounting, SQM, etc.).
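
For reference, all three knobs can be flipped from the shell; this is only a rough sketch against a default 21.02 config, so double-check the section names on your own system:

# packet steering (globals section in /etc/config/network)
uci set network.globals.packet_steering='1'
uci commit network
/etc/init.d/network restart

# software flow offloading (defaults section in /etc/config/firewall)
uci set firewall.@defaults[0].flow_offloading='1'
uci commit firewall
/etc/init.d/firewall restart

# irqbalance
opkg update && opkg install irqbalance
uci set irqbalance.irqbalance.enabled='1'
uci commit irqbalance
/etc/init.d/irqbalance enable && /etc/init.d/irqbalance restart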

Thanks very much; I'm going to test out the different things you've recommended. I suspect that software flow offloading is what's really missing.

I have a 500 Mbps symmetric Internet connection and don't feel as much of a need for SQM compared to when I was more bandwidth constrained. It's probably still quite valuable if you have one of the 900/20 offerings that seem common with US cable providers.

This issue was reported to me by a friend who purchased a B1300 intending to run stock OpenWrt on it. When he told me about the issue I posted here, and I have since upgraded my own router so that I can use my home network for testing.

With a default configuration after upgrading to 21.02.1 using sysupgrade -n, I tried to send traffic between two hosts connected to different VLAN interfaces on the router's eth0 interface. I saw somewhat variable but still very good performance using iperf3; here are two results:

[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  1002 MBytes   840 Mbits/sec  206             sender
[  5]   0.00-10.00  sec  1000 MBytes   838 Mbits/sec                  receiver

[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec   917 MBytes   769 Mbits/sec  234             sender
[  5]   0.00-10.00  sec   915 MBytes   767 Mbits/sec                  receiver
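
(For clarity, these are plain TCP runs along the following lines, with the exact addresses swapped for my own hosts:)

# on the host acting as the iperf3 server
iperf3 -s

# on the client on the other VLAN (192.168.2.10 is a placeholder)
iperf3 -c 192.168.2.10 -t 10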

This traffic was routed but not NATed, so I configured a NAT entry for my test client and server. This resulted in broadly comparable performance:

[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec   913 MBytes   766 Mbits/sec  160             sender
[  5]   0.00-10.00  sec   911 MBytes   764 Mbits/sec                  receiver
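
(For anyone wanting to reproduce this, a source-NAT rule of roughly this shape in /etc/config/firewall limits NAT to a single test client; the name and addresses here are placeholders:)

config nat
        option name 'test-snat'
        option src 'lan'
        option src_ip '192.168.1.10'
        option target 'SNAT'
        option snat_ip '192.168.2.1'
        option proto 'all'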

I then tested between a client on my network and a laptop connected to my ISP's router, so that traffic was sent out the WAN (eth1) interface. This resulted in dramatically slower upstream traffic (sending out the WAN interface, the default iperf3 client-to-server behaviour):

[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec   356 MBytes   299 Mbits/sec  435             sender
[  5]   0.00-10.00  sec   355 MBytes   298 Mbits/sec                  receiver

Downloading using iperf3 -R was somewhat faster:

[ ID] Interval           Transfer     Bitrate
[  5]   0.00-10.00  sec   593 MBytes   497 Mbits/sec                  sender
[  5]   0.00-10.00  sec   593 MBytes   497 Mbits/sec                  receiver

I do not understand why the WAN interface behaves so differently, and so much worse than routing or even NAT between two different networks on the LAN (eth0) interface.

I enabled packet_steering by adding the following to /etc/config/network and running /etc/init.d/network restart:

config globals 'globals'
        option ula_prefix 'some-v6-stuff'
        option packet_steering '1'
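
(My understanding is that packet_steering works by spreading packet processing across CPUs via RPS/XPS, so one way to check it actually took effect is to look at the receive-steering masks; a value of 0 means no steering is applied:)

# show the RPS CPU mask for each eth0 receive queue
for q in /sys/class/net/eth0/queues/rx-*/rps_cpus; do
        echo "$q: $(cat $q)"
done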

This doesn't appear to have resulted in significant changes in performance when testing with iperf3. Looking at the output of /proc/interrupts, as discussed in another thread, I see that all eth-related interrupt processing is occurring on CPU0:

[...]
 60:     927913          0          0          0     GIC-0  97 Edge      edma_eth_tx0
 61:      15625          0          0          0     GIC-0  98 Edge      edma_eth_tx1
 62:     334264          0          0          0     GIC-0  99 Edge      edma_eth_tx2
 63:        326          0          0          0     GIC-0 100 Edge      edma_eth_tx3
[...]
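
(Before reaching for irqbalance, these can also be spread by hand by writing hex CPU masks to /proc/irq/<n>/smp_affinity; the IRQ numbers below come from the listing above and the masks are just an example split:)

# move edma_eth_tx1/tx2/tx3 to CPU1, CPU2 and CPU3 respectively
echo 2 > /proc/irq/61/smp_affinity
echo 4 > /proc/irq/62/smp_affinity
echo 8 > /proc/irq/63/smp_affinity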

Enabling software flow offloading (with packet_steering still "active") did have a bigger impact, improving upload (LAN to WAN) performance as follows:

[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec   597 MBytes   501 Mbits/sec   44             sender
[  5]   0.00-10.00  sec   596 MBytes   500 Mbits/sec                  receiver

Download (WAN to LAN) results were more mixed and confusing:

[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.00   sec  42.0 MBytes   352 Mbits/sec                  
[  5]   1.00-2.00   sec  44.6 MBytes   375 Mbits/sec                  
[  5]   2.00-3.00   sec  36.9 MBytes   309 Mbits/sec                  
[  5]   3.00-4.00   sec  24.1 MBytes   202 Mbits/sec                  
[  5]   4.00-5.00   sec  22.6 MBytes   190 Mbits/sec                  
[  5]   5.00-6.00   sec  26.3 MBytes   220 Mbits/sec                  
[  5]   6.00-7.00   sec  42.1 MBytes   353 Mbits/sec                  
[  5]   7.00-8.00   sec  50.7 MBytes   425 Mbits/sec                  
[  5]   8.00-9.00   sec  51.1 MBytes   429 Mbits/sec                  
[  5]   9.00-10.00  sec  37.9 MBytes   318 Mbits/sec                  
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-10.00  sec   378 MBytes   317 Mbits/sec                  sender
[  5]   0.00-10.00  sec   378 MBytes   317 Mbits/sec                  receiver

I haven't tested with irqbalance yet, but I am surprised by the performance results I'm seeing and by the significant difference between traffic routed across eth0 and traffic routed out eth1. If anything I would expect to see better performance through two distinct interfaces.

Finally, prior to upgrading, while on version 18.06.9, I ran some tests between VLANs that came very close to line rate; I don't have many examples on hand, but here's one that was clearly faster than any of the tests above:

[  5]   0.00-10.01  sec  1.07 GBytes   921 Mbits/sec                  receiver

I installed irqbalance and enabled it by setting the following in /etc/config/irqbalance:

config irqbalance 'irqbalance'
        option enabled '1'

I then restarted the irqbalance service. With this enabled, and the (apparently) ineffective packet_steering still active, I was able to achieve the following speed using iperf3 out the NATed WAN interface:

[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  1.03 GBytes   888 Mbits/sec   13             sender
[  5]   0.00-10.00  sec  1.03 GBytes   888 Mbits/sec                  receiver

It's possible that I could go faster than this; the laptop I'm using for testing isn't amazing.
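
(If a single TCP flow from the laptop is the limit, several parallel streams should show whether the router itself has more headroom; this is standard iperf3 usage, with the address a placeholder:)

# four parallel TCP streams to the remote iperf3 server
iperf3 -c 192.0.2.10 -P 4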

I can also see that IRQ handling is now somewhat balanced across the four cores. The router has been up for about an hour but has only had irqbalance running for a few minutes, so the balancing is probably better than this output makes it look:

root@hydrogen:~# cat /proc/interrupts 
           CPU0       CPU1       CPU2       CPU3       
 18:     152778     102123     641424      42674     GIC-0  20 Level     arch_timer
 22:       9057         18          0          0     GIC-0 270 Level     bam_dma
 23:      33903          0          0        462     GIC-0 127 Level     78b5000.spi
 24:          0          0          0          0     GIC-0 239 Level     bam_dma
 25:          5          0          0          0     GIC-0 139 Level     msm_serial0
 42:     295625      23025          0          0     GIC-0 200 Level     ath10k_ahb
 59:     205672          0          0      18973     GIC-0 201 Level     ath10k_ahb
 60:     369743          0          0          0     GIC-0  97 Edge      edma_eth_tx0
 61:        492          0         53          0     GIC-0  98 Edge      edma_eth_tx1
 62:     139518     166026          0          0     GIC-0  99 Edge      edma_eth_tx2
 63:        622          0          0         25     GIC-0 100 Edge      edma_eth_tx3
 64:       2516          0          0          0     GIC-0 101 Edge      edma_eth_tx4

I can confirm that 21.02.3 on my B1300 degraded my gigabit service to around 300 Mbps through the modem in transparent bridge mode. I fiddled with MTUs, packet steering, and irqbalance but wasn't able to achieve high speeds. I downgraded to 19.07.8 and got back to around 700 Mbps with stock settings, which is close to what I got connecting directly to the modem in PPPoE mode. Hope this helps anyone else running 21.02.3 on gigabit who wants that speed out of the box or can't get the 21.02 settings right. If 21.02 gets this working with stock settings I'll give it another go, but for now I'll stick with 19.07.