LAN issue with ipq40xx (24.10 and main)

Thanks.

Let's put all the ports back into the same bridge.

Edit br-lan to look like this:

config device
	option name 'br-lan'
	option type 'bridge'
	list ports 'lan1'
	list ports 'lan2'
	list ports 'lan3'
	list ports 'lan4'

Now create bridge-VLANs:

config bridge-vlan
	option device 'br-lan'
	option vlan '1'
	list ports 'lan3:u*'
	list ports 'lan4:u*'

config bridge-vlan
	option device 'br-lan'
	option vlan '2'
	list ports 'lan1:u*'
	list ports 'lan2:u*'

Delete this:

And finally, edit the lan and test interfaces to use br-lan.x (where x is the VLAN ID), like this:

config interface 'lan'
	option device 'br-lan.1'
	option proto 'static'
	option ipaddr '192.168.1.1'
	option netmask '255.255.255.0'
	option ip6assign '60'

config interface 'test'
	option proto 'static'
	option device 'br-lan.2'
	option ipaddr '192.168.2.1'
	option netmask '255.255.255.0'
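
If you prefer the shell to editing the file, a rough uci equivalent is below (just a sketch; the 'test' section is created from scratch here, and your existing section layout may differ):

uci add network bridge-vlan
uci set network.@bridge-vlan[-1].device='br-lan'
uci set network.@bridge-vlan[-1].vlan='1'
uci add_list network.@bridge-vlan[-1].ports='lan3:u*'
uci add_list network.@bridge-vlan[-1].ports='lan4:u*'

uci add network bridge-vlan
uci set network.@bridge-vlan[-1].device='br-lan'
uci set network.@bridge-vlan[-1].vlan='2'
uci add_list network.@bridge-vlan[-1].ports='lan1:u*'
uci add_list network.@bridge-vlan[-1].ports='lan2:u*'

# Point the interfaces at the VLAN sub-devices:
uci set network.lan.device='br-lan.1'
uci set network.test=interface
uci set network.test.proto='static'
uci set network.test.device='br-lan.2'
uci set network.test.ipaddr='192.168.2.1'
uci set network.test.netmask='255.255.255.0'
uci commit network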

Restart and test again.
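
To apply and sanity-check (the bridge utility comes from iproute2; on OpenWrt I believe it's the ip-bridge package, which default images may not include):

service network restart

# Check which VLANs ended up on which ports:
bridge vlan show

# Check that br-lan.1 and br-lan.2 exist with the expected addresses:
ip addr show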

That is more or less what I have been proposing he do since the start:
separate the 4 LAN ports into 2 groups of 2, with VLAN 1 for LAN{1,2} and VLAN 7 for LAN{3,4}. My suggestion was then to make VLAN 7 non-local and test iperf between LAN{1,2}, and after that between LAN{3,4}.
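
For reference, the iperf test itself would look roughly like this (addresses are hypothetical; iperf3 assumed on both hosts):

# On the host plugged into LAN2 (say it got 192.168.2.10):
iperf3 -s

# On the host plugged into LAN1:
iperf3 -c 192.168.2.10 -t 30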

Now his previous message seemed to indicate that, after doing so, LAN3 can't reach LAN4 if "local" is not set for that VLAN. And if he then sets "local", then, as on VLAN 1, traffic between LANx and LANy works, but not at wire speed, and with high CPU load.

Badulesia: a DSA switch requires bridge-vlans inside a single bridge to set up hardware switching properly, so you shouldn't have several br-lans; there should always be just one 'switch', with all ports in it and all the VLANs defined on it. So I believe you are not starting from a "clean" DSA config, and/or are (unintentionally) creating several bridges with the ports split between them.

I will try again tomorrow. Stay tuned. Thanks.

Hi.
I restarted from default settings and added what you advised. I understand better now.

I used an old laptop on LAN4 to apply settings and monitor the router (LuCI). Then I put two modern computers on LAN1 and LAN2 (static IPs in the 192.168.2.x range) and performed file transfers (a 6 GB ISO file, 50 s to transfer).

  • Bandwidth still capped at 100 MB/s with a load average of 1.20. With packet steering disabled: 110 MB/s with a load average of 1.60.
  • I unset 'local' on VLAN 2: 105 MB/s with a load average of 0.70. With packet steering disabled: 115 MB/s with a load average of 1.40. (Packet steering was toggled as sketched below.)
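
For anyone reproducing this: packet steering lives in the globals section of /etc/config/network, so toggling it from the shell looks like this (a sketch; the option name is as used on 23.05/24.10):

uci set network.globals.packet_steering='0'   # set to '1' to re-enable
uci commit network
service network restart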

There does indeed seem to be some kind of CPU interference, as @stragies suggested. I remember monitoring IRQs a few months ago and seeing a slight increase with 24.10 (compared to 23.05).
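
The IRQ counts can be watched like this while a transfer runs (the 'edma' match is an assumption about the driver name on ipq40xx; match whatever your Ethernet driver registers in /proc/interrupts):

# One-off snapshot of per-CPU interrupt counts:
cat /proc/interrupts

# Rough once-per-second view of the Ethernet lines:
while true; do grep -i edma /proc/interrupts; sleep 1; done

For reference, here is the resulting /etc/config/network: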

config interface 'loopback'
	option device 'lo'
	option proto 'static'
	option ipaddr '127.0.0.1'
	option netmask '255.0.0.0'

config globals 'globals'
	option ula_prefix 'fd4b:ac43:8371::/48'

config device
	option name 'br-lan'
	option type 'bridge'
	list ports 'lan1'
	list ports 'lan2'
	list ports 'lan3'
	list ports 'lan4'

config device
	option name 'lan1'
	option macaddr 'xx:xx:xx:xx:xx:xx'

config device
	option name 'lan2'
	option macaddr 'xx:xx:xx:xx:xx:xx'

config device
	option name 'lan3'
	option macaddr 'xx:xx:xx:xx:xx:xx'

config device
	option name 'lan4'
	option macaddr 'xx:xx:xx:xx:xx:xx'

config bridge-vlan
	option device 'br-lan'
	option vlan '1'
	list ports 'lan3:u*'
	list ports 'lan4:u*'

config bridge-vlan
	option device 'br-lan'
	option vlan '2'
	list ports 'lan1:u*'
	list ports 'lan2:u*'

config interface 'lan'
	option device 'br-lan.1'
	option proto 'static'
	option ipaddr '192.168.1.1'
	option netmask '255.255.255.0'
	option ip6assign '60'

config interface 'test'
	option proto 'static'
	option device 'br-lan.2'
	option ipaddr '192.168.2.1'
	option netmask '255.255.255.0'

config device
	option name 'wan'
	option macaddr 'xx:xx:xx:xx:xx:xx'

config interface 'wan'
	option device 'wan'
	option proto 'dhcp'

config interface 'wan6'
	option device 'wan'
	option proto 'dhcpv6'

On 23.05.5 with packet steering on, I get 110-115 MB/s (which is near the limit of the 1 Gbps port itself) with a 1-minute load average of around 0.1.

That's why we are discussing why 24.10 doesn't provide the same performance :wink: Until this is solved, I'm keeping the MR8300 on 23.05.5.

Subject: Slow Speeds on OpenWrt 23.05, Drives Are Fast.

Hi all,

I'm running OpenWrt 23.05.5 on a router with an IPQ40xx chipset, but I'm experiencing slow speeds on my network despite having fast drives on the server. Here's what I've tested so far:

  • Server and Client: Both are running at 1Gbps with no packet loss.
  • CPU Usage: Neither the server nor the client shows high CPU usage during transfers.
  • Disk Performance: The read/write speeds on the server’s drives are fast (~1076 MB/sec), so disk I/O is not a bottleneck.
  • Server-to-Server HDD: Copy speed between HDDs on the same server is only 65 MB/s (which should be faster; see the dd sketch after this list).
  • Client-to-Server HDD: Transfer from the NVMe client to the server HDD is 73 MB/s (this should be around 150 MB/s).
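
A quick way to re-check the server-to-server HDD number is a direct copy on the server itself (a sketch; the paths are placeholders, and oflag=direct bypasses the page cache so the figure reflects the disks):

dd if=/mnt/hdd1/test.iso of=/mnt/hdd2/test.iso bs=1M oflag=direct status=progress

Since that copy never crosses the router, a low result here points at the drives or their bus rather than at OpenWrt.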

Seeing this thread, I suspect there might be something in OpenWrt 23.05 affecting the speed, particularly with the IPQ40xx chipset.

Does anyone else experience this issue on 23.05, or could anyone point me to areas where I should look for a solution?

Thanks in advance!

I'm very satisfied with the performance on 23.05.

This is not an OpenWrt-related issue. Performance between HDDs on the same server is limited only by the server hardware; indeed, much higher performance should be expected.

With a gigabit-capable router, you should expect about 115 MB/s.

Thanks for the reply!

I am too! In fact, I am impressed.
I get an 8 ms ping to a distant CZ.NIC DNSSEC resolver. I’m running it with 1.5 million blocked domains in AdBlock Lean, with no issues. My original bufferbloat rating was C, but under OpenWrt 23.05.5, it improved to A, with speeds of 295.8 Mbps down and 45.0 Mbps up (from the contracted 330/50 Mbps), thanks to packet steering and SQM.

I've solved the issue. The bottleneck was Samba when accessed through Nautilus (GNOME Files). Previously, I was accessing the disks in Nautilus by clicking on Vault's SMB/CIFS share under Computer > Networks.

When you access a Samba share through "Computer > Other Locations" in your file manager, it goes through GVfs (the GNOME Virtual File System). While convenient, GVfs is often slower than a direct kernel mount because file access is handled by a userspace layer rather than the kernel CIFS client. This can result in reduced performance, especially when handling many or large files.

To fix this, I:

  • Set the minimum Samba protocol version in OpenMediaVault to SMB3.
  • Tweaked my CIFS mount command in the terminal (roughly as sketched below), after which the client negotiated SMB 3.1.1.
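
The mount command involved looks roughly like this (a sketch with placeholder server, share, and credentials; vers=3.1.1 pins the protocol version, and cifs-utils must be installed):

sudo mount -t cifs //192.168.1.10/vault /mnt/vault \
    -o username=myuser,password=mypass,vers=3.1.1,uid=$(id -u),gid=$(id -g)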

After these changes, my transfer speeds jumped to 117.5 MB/s.

Next, I’ll be looking into NFS, as it’s generally faster than SMB for Linux-to-Linux transfers due to lower overhead and better integration with the native file system. Unlike SMB, NFS relies on UID/GID-based permissions instead of passwords, making it more seamless in Unix-based environments.
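
The client side of an NFS mount would be roughly this (hypothetical export path and server address; needs nfs-utils or nfs-common on the client):

sudo mount -t nfs -o vers=4.2 192.168.1.10:/export/vault /mnt/vault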

Thanks again for the support!

There's this on the mailing list; I'm not sure if it's relevant, but perhaps it will be backported and integrated into OpenWrt when it's merged into mainline:

https://lore.kernel.org/netdev/20250207150340.sxhsva7qz7bb7qjd@skbuf/T/