SQM optimization for NanoPi R6S with 24.10.4

I need some assistance, please.

I've installed OpenWrt 24.10.4 on an R6S (running from a microSD card), and everything works OK.

I can max out my ISP connection (1.2 Gb/s download / 110 Mb/s upload), but as soon as I turn on SQM, the download drops to around 700 Mb/s.

I've looked at most guides and also followed @StarWhiz's guides, to no avail.

Packet steering is enabled on all CPUs. I've tried all three of the affinity settings, as well as stock, from https://github.com/StarWhiz/NanoPi-R6S-CPU-Optimization-for-Gigabit-SQM, but nothing works.

I've tried software offloading and hardware offloading too.

In SQM I'm using cake with piece_of_cake. For Per Packet Overhead (bytes) I've tried 22, 42, and 44, with and without "nat dual-dsthost" on ingress and "nat dual-srchost" on egress.
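For reference, those settings map onto /etc/config/sqm roughly like this (a sketch using the values from this thread; the WAN device name is an assumption, adjust to yours):

```
config queue
	option enabled '1'
	option interface 'eth1'            # assumed WAN device; check yours
	option download '1200000'          # ingress shaping, kbit/s
	option upload '110000'             # egress shaping, kbit/s
	option qdisc 'cake'
	option script 'piece_of_cake.qos'
	option qdisc_advanced '1'
	option linklayer 'ethernet'
	option overhead '44'               # one of the values tried: 22, 42, 44
	option iqdisc_opts 'nat dual-dsthost'   # ingress
	option eqdisc_opts 'nat dual-srchost'   # egress
```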

I'm not sure what else to try.

Have you checked the CPU load while maxing out the connection with SQM enabled?


Max speed with SQM enabled is 76 MB/s (max is 135 MB/s with SQM disabled).

Ingress set to 1200000 kbit/s.

Egress set to 110000 kbit/s.

htop (screenshot not shown):

This is with the affinity below set:

friendlyelec,nanopi-r6s)
        set_interface_core 1 "eth0"
        echo c0 > /sys/class/net/eth0/queues/rx-0/rps_cpus
        echo 30 > /sys/class/net/eth0/queues/tx-0/xps_cpus
        set_interface_core 2 "eth1-0"
        set_interface_core 2 "eth1-16"
        set_interface_core 2 "eth1-18"
        echo c0 > /sys/class/net/eth1/queues/rx-0/rps_cpus
        echo 30 > /sys/class/net/eth1/queues/tx-0/xps_cpus
        set_interface_core 4 "eth2-0"
        set_interface_core 4 "eth2-16"
        set_interface_core 4 "eth2-18"
        echo c0 > /sys/class/net/eth2/queues/rx-0/rps_cpus
        echo 30 > /sys/class/net/eth2/queues/tx-0/xps_cpus
        ;;
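For anyone decoding those hex values: rps_cpus and xps_cpus take a CPU bitmask, so c0 selects CPUs 6-7 and 30 selects CPUs 4-5 (on the RK3588S, the Cortex-A76 big cores are CPUs 4-7). A quick sketch of how such masks are built:

```shell
#!/bin/sh
# Build a hex CPU bitmask from a list of CPU numbers, in the format
# written to /sys/class/net/*/queues/rx-*/rps_cpus and friends.
cpu_mask() {
    mask=0
    for cpu in "$@"; do
        mask=$((mask | (1 << cpu)))
    done
    printf '%x\n' "$mask"
}

cpu_mask 6 7    # c0 (RPS on the two fastest big cores)
cpu_mask 4 5    # 30 (XPS on the other two big cores)
```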

OK, I changed the affinity to:

friendlyarm,nanopi-r6s)
	set_interface_core 2 "eth0"
	set_interface_core 4 "eth1"
	set_interface_core 8 "eth2"
	find /sys/class/net/eth*/queues/[rt]x-[01]/[rx]ps_cpus -exec sh -c '[ -w {} ] && echo ff > {} 2>/dev/null' \;
	;;

And it looks like that last line has done the trick!

Now easily getting 120 MB/s to 135 MB/s with SQM enabled.


If you are using the 8125-rss driver, you should remove it ASAP and stick to the non-RSS driver.

The following settings gave me the best performance with layer_cake and the 8125 driver, after going through various configurations with the 8125-rss, 8125, and 8169 drivers for the 2.5 Gb/s NICs.

After that, you will find that manual changes to the Ethernet queues get reset at 1000-second intervals, regardless of which packet steering setting you chose in the GUI. To avoid that, make the changes you want in the packet_steering file in /etc/init.d/:

#!/bin/sh /etc/rc.common

START=25
USE_PROCD=1

start_service() {
	reload_service
}

service_triggers() {
	procd_add_reload_trigger "network"
	procd_add_reload_trigger "firewall"
	procd_add_raw_trigger "interface.*" 1000 /etc/init.d/packet_steering reload
}

reload_service() {
	packet_steering="$(uci -q get "network.@globals[0].packet_steering")"
	steering_flows="$(uci -q get "network.@globals[0].steering_flows")"
	[ "${steering_flows:-0}" -gt 0 ] && opts="-l $steering_flows"
	if [ -e "/usr/libexec/platform/packet-steering.sh" ]; then
		/usr/libexec/platform/packet-steering.sh "$packet_steering"
	else
		/usr/libexec/network/packet-steering.uc $opts "$packet_steering"
		# Pin eth1's IRQs to CPU 6 (mask 0x40) and its RX packet
		# steering to CPU 7 (0x80). IRQ numbers are board-specific;
		# check /proc/interrupts on your device.
		echo 40 > /proc/irq/97/smp_affinity
		echo 40 > /proc/irq/113/smp_affinity
		echo 80 > /sys/class/net/eth1/queues/rx-0/rps_cpus
		# Pin eth2's IRQs to CPU 4 (0x10) and its RX steering to CPU 5 (0x20)
		echo 10 > /proc/irq/129/smp_affinity
		echo 10 > /proc/irq/145/smp_affinity
		echo 20 > /sys/class/net/eth2/queues/rx-0/rps_cpus
	fi
}

You don't have to do the IRQ affinity in the script; I just put it there to have everything in the same place.
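As a sanity check, the smp_affinity values above are also CPU bitmasks (0x40 = CPU 6, 0x80 = CPU 7, 0x10 = CPU 4, 0x20 = CPU 5). A small helper to decode any mask back into a CPU list:

```shell
#!/bin/sh
# Decode a hex CPU affinity mask (as used by smp_affinity / rps_cpus)
# into the list of CPU numbers it selects.
decode_mask() {
    mask=$((0x$1)); cpu=0; list=""
    while [ "$mask" -ne 0 ]; do
        [ $((mask & 1)) -eq 1 ] && list="$list $cpu"
        mask=$((mask >> 1))
        cpu=$((cpu + 1))
    done
    echo "0x$1 -> CPUs$list"
}

decode_mask 40    # 0x40 -> CPUs 6
decode_mask c0    # 0xc0 -> CPUs 6 7
```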
