RPS/XPS Packet Steering appears to be broken

I have a question regarding 19.07 stable.
Linux R7800 4.14.221 #0 SMP Mon Feb 22 15:36:55 2021 armv7l GNU/Linux

Recently I looked into /proc/interrupts and noticed that interrupts are completely imbalanced between the CPU cores.
All IRQ processing happens on CPU0, while CPU1 shows a near-zero IRQ load.
It seems like packet steering (RPS/XPS) is disabled somewhere globally in your build.
Could you please advise me how to work around this issue?
Thank you in advance!

First of all, I verified that the RPS/XPS queue processing policies are already configured for the eth0/eth1 devices (in a default setup).
The dump below shows that the RPS/XPS queues for eth0/eth1 are already set up to be processed on CPU1 (the files hold a hex CPU bitmask; 2 = binary 10, i.e. CPU1).

root@R7800:~# grep . /sys/class/net/eth?/queues/?x-0/?ps_cpus
/sys/class/net/eth0/queues/rx-0/rps_cpus: 2
/sys/class/net/eth0/queues/tx-0/xps_cpus: 2
/sys/class/net/eth1/queues/rx-0/rps_cpus: 2
/sys/class/net/eth1/queues/tx-0/xps_cpus: 2
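As a side note, the mask-to-CPU mapping can be computed in plain shell. This is just an illustrative sketch (the `cpus_to_mask` helper is my own, not part of any OpenWrt script), assuming the usual one-bit-per-CPU hex encoding of rps_cpus/xps_cpus:

```shell
# Sketch: compute the hex cpumask for a list of CPU ids,
# assuming the one-bit-per-CPU encoding used by rps_cpus/xps_cpus.
cpus_to_mask() {
	mask=0
	for cpu in "$@"; do
		mask=$(( mask | (1 << cpu) ))
	done
	printf '%x\n' "$mask"
}

cpus_to_mask 1      # CPU1 only  -> 2
cpus_to_mask 0 1    # both cores -> 3
```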

But the /proc/interrupts dump shows zero IRQs from eth0/eth1 were processed by CPU1.

root@R7800:~# cat /proc/interrupts
           CPU0       CPU1
 16:   27251761   10341481     GIC-0  18 Edge      gp_timer
 18:         33          0     GIC-0  51 Edge      qcom_rpm_ack
 19:          0          0     GIC-0  53 Edge      qcom_rpm_err
 20:          0          0     GIC-0  54 Edge      qcom_rpm_wakeup
 26:          0          0     GIC-0 241 Edge      ahci[29000000.sata]
 27:          0          0     GIC-0 210 Edge      tsens_interrupt
 28:     329028          0     GIC-0  67 Edge      qcom-pcie-msi
 29:         33          0     GIC-0  89 Edge      qcom-pcie-msi
 30:     464664          0     GIC-0 202 Edge      adm_dma
 31:   44722532          0     GIC-0 255 Level     eth0
 32:   38133156          0     GIC-0 258 Level     eth1
 33:          0          0     GIC-0 130 Level     bam_dma
 34:          0          0     GIC-0 128 Level     bam_dma
 35:          0          0   PCI-MSI   0 Edge      aerdrv
 36:     329028          0   PCI-MSI   1 Edge      ath10k_pci
 68:          0          0   PCI-MSI   0 Edge      aerdrv
 69:         33          0   PCI-MSI   1 Edge      ath10k_pci
101:         13          0     GIC-0 184 Level     msm_serial0
102:          2          0   msmgpio   6 Edge      keys
103:          2          0   msmgpio  54 Edge      keys
104:          2          0   msmgpio  65 Edge      keys
105:          0          0     GIC-0 142 Level     xhci-hcd:usb1
106:          0          0     GIC-0 237 Level     xhci-hcd:usb3
IPI0:          0          0  CPU wakeup interrupts
IPI1:          0          0  Timer broadcast interrupts
IPI2:    7575209    4041545  Rescheduling interrupts
IPI3:   26798724   75092959  Function call interrupts
IPI4:          0          0  CPU stop interrupts
IPI5:    9195359    3254587  IRQ work interrupts
IPI6:          0          0  completion interrupts
Err:          0

Note: all results above were gathered without any IRQ tweaks or custom balancing.

The next thing I tried was changing the RPS/XPS policy to instruct the kernel that BOTH CPU cores should process packets from every network device queue in the system, including the eth0/eth1 queues (by setting the bitmask to 3 = binary 11).
Unfortunately, I got exactly the same result as above: no packets were processed by CPU1.

for file in /sys/class/net/* ; do
   # guard against devices whose queue files are absent or read-only
   [ -w "${file}/queues/rx-0/rps_cpus" ] && echo 3 > "${file}/queues/rx-0/rps_cpus"
   [ -w "${file}/queues/tx-0/xps_cpus" ] && echo 3 > "${file}/queues/tx-0/xps_cpus"
done
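The same idea can be wrapped in a reusable function that reports any write the kernel rejects instead of failing silently. This is a hypothetical helper of my own (the name `set_steering_mask` and the optional ROOT parameter are assumptions, not from any OpenWrt script):

```shell
# Hypothetical helper: write MASK to every rps_cpus/xps_cpus file under
# ROOT (defaults to /sys/class/net) and report any write that fails,
# e.g. because the kernel rejected the mask.
set_steering_mask() {
	mask="$1"
	root="${2:-/sys/class/net}"
	rc=0
	for f in "$root"/*/queues/[rt]x-*/[rx]ps_cpus; do
		[ -w "$f" ] || continue
		if ! echo "$mask" > "$f" 2>/dev/null; then
			echo "failed: $f" >&2
			rc=1
		fi
	done
	return $rc
}
```

Usage would be e.g. `set_steering_mask 3` to steer all queues to both cores.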

PS: My goal is to redistribute RPS/XPS Ethernet packet processing between the cores before enabling the WiFi interface, which will be strictly bound to CPU0 (according to information from this forum).
For now I see no need for irqbalance; I think manual redistribution will be enough for my needs.

System info:

root@R7800:~# ubus call system board
{
        "kernel": "4.14.221",
        "hostname": "R7800",
        "system": "ARMv7 Processor rev 0 (v7l)",
        "model": "Netgear Nighthawk X4S R7800",
        "board_name": "netgear,r7800",
        "release": {
                "distribution": "OpenWrt",
                "version": "19.07-SNAPSHOT",
                "revision": "r11312-e9c0c5021c",
                "target": "ipq806x/generic",
                "description": "OpenWrt 19.07-SNAPSHOT r11312-e9c0c5021c"
        }
}

That has nothing to do specifically with my build.

You might read about R7800 and IRQs in the R7800 exploration thread.

And you might also search the forum for irqbalance.

Already done.
I spent two days reading the 'exploration', 'performance' and this thread, but found no answer.
I have not found any complaints about non-functioning receive packet steering on 19.07, but maybe I missed something because of too many builds and too many branches...
Thank you for your exhaustive answer.

Interrupt balancing works acceptably for me; here's how mine looks after almost one week of uptime:

root@R7800:~# uptime
 22:18:59 up 6 days, 23:30,  load average: 0.00, 0.00, 0.00
root@R7800:~# cat /proc/interrupts 
           CPU0       CPU1       
 16:    7891151   14047725     GIC-0  18 Edge      gp_timer
 18:         33          0     GIC-0  51 Edge      qcom_rpm_ack
 19:          0          0     GIC-0  53 Edge      qcom_rpm_err
 20:          0          0     GIC-0  54 Edge      qcom_rpm_wakeup
 26:          0          0     GIC-0 241 Edge      ahci[29000000.sata]
 27:          0          0     GIC-0 210 Edge      tsens_interrupt
 28:   33427935          0     GIC-0  67 Edge      qcom-pcie-msi
 29:         25   62374142     GIC-0  89 Edge      qcom-pcie-msi
 30:     193282          0     GIC-0 202 Edge      adm_dma
 32:         36   62864186     GIC-0 258 Level     eth1
 33:          0          0     GIC-0 130 Level     bam_dma
 34:          0          0     GIC-0 128 Level     bam_dma
 35:          0          0   PCI-MSI   0 Edge      aerdrv
 36:   33427935          0   PCI-MSI   1 Edge      ath10k_pci
 68:          0          0   PCI-MSI   0 Edge      aerdrv
 69:         25   62374142   PCI-MSI   1 Edge      ath10k_pci
101:         10          0     GIC-0 184 Level     msm_serial0
102:          2          0   msmgpio   6 Edge      keys
103:          2          0   msmgpio  54 Edge      keys
104:          2          0   msmgpio  65 Edge      keys
105:          0          0     GIC-0 142 Level     xhci-hcd:usb1
106:          0          0     GIC-0 237 Level     xhci-hcd:usb3
IPI0:          0          0  CPU wakeup interrupts
IPI1:          0          0  Timer broadcast interrupts
IPI2:    1502103     928173  Rescheduling interrupts
IPI3:     489423    3885336  Function call interrupts
IPI4:          0          0  CPU stop interrupts
IPI5:    5813977    7805630  IRQ work interrupts
IPI6:          0          0  completion interrupts
Err:          0
root@R7800:~# ubus call system board
{
        "kernel": "4.14.229",
        "hostname": "R7800",
        "system": "ARMv7 Processor rev 0 (v7l)",
        "model": "Netgear Nighthawk X4S R7800",
        "board_name": "netgear,r7800",
        "release": {
                "distribution": "OpenWrt",
                "version": "19.07-SNAPSHOT",
                "revision": "r11333-cc0b70467d",
                "target": "ipq806x/generic",
                "description": "OpenWrt 19.07-SNAPSHOT r11333-cc0b70467d"
        }
}
root@R7800:~#

I do, however, have my own recipe for setting up packet steering and IRQ balancing for the WiFi radios, plus a minimum CPU frequency; here it is below:

root@R7800:~# grep -v ^# /etc/rc.local 

/usr/sbin/ethtool -C eth0 tx-usecs 0
/usr/sbin/ethtool -C eth1 tx-usecs 0
/usr/sbin/ethtool -C eth0 rx-usecs 31
/usr/sbin/ethtool -C eth1 rx-usecs 31

echo 3 > /proc/irq/30/smp_affinity
echo 3 > /proc/irq/32/smp_affinity
echo 3 > /proc/irq/36/smp_affinity
echo 3 > /proc/irq/69/smp_affinity

echo min_power > /sys/devices/platform/soc/29000000.sata/ata1/host0/scsi_host/host0/link_power_management_policy
echo 800000 > /sys/devices/system/cpu/cpu0/cpufreq/scaling_min_freq
echo 800000 > /sys/devices/system/cpu/cpu1/cpufreq/scaling_min_freq
echo 35 > /sys/devices/system/cpu/cpufreq/ondemand/up_threshold
echo 10 > /sys/devices/system/cpu/cpufreq/ondemand/sampling_down_factor
for FILE in /sys/class/net/*/queues/[rt]x-0/[rx]ps_cpus; do
   [ -w "$FILE" ] && echo 3 > "$FILE" 2>/dev/null
done

exit 0
root@R7800:~#

PS: my R7800 is used as an access point, hence there is no WAN-triggered IRQ activity; otherwise it would be visible in the output as well.

You can fine-tune my recipe to suit your needs, or at least use it as a starting point for a recipe of your own.


Thank you for sharing your config.
I think this information will be useful for every user as a starting point for SMP affinity setup.

But my question was only about the non-functioning Receive Packet Steering technology.

I performed further investigation and found a very buggy hotplug script, "/etc/hotplug.d/net/20-smp-tune", which has been causing trouble for users since 2018.

https://github.com/openwrt/openwrt/blob/openwrt-19.07/package/network/config/netifd/files/etc/hotplug.d/net/20-smp-tune

The script mentioned above is preserved in 19.07 in its original version and continues to sabotage users' systems.
This script contains several subtle bugs that break normal RPS functionality through faulty queue-to-CPU bindings.

In search of a way to force RPS to work on 19.07, I replaced this script with its latest (fixed) version from the 'master' branch, and of course removed the original.

https://github.com/openwrt/openwrt/blob/master/package/network/config/netifd/files/etc/hotplug.d/net/20-smp-packet-steering

Then I enabled packet steering via network config:

config globals 'globals'
	...
	option packet_steering 1

After reboot I saw the correct bitmask value (3 = binary 11) set for my Ethernet RPS/XPS queues.

root@R7800:~# grep . /sys/class/net/eth?/queues/?x-0/?ps_cpus
/sys/class/net/eth0/queues/rx-0/rps_cpus: 3
/sys/class/net/eth0/queues/tx-0/xps_cpus: 3
/sys/class/net/eth1/queues/rx-0/rps_cpus: 3
/sys/class/net/eth1/queues/tx-0/xps_cpus: 3

Unfortunately, after checking '/proc/interrupts' I saw no difference at all.
This means that CPU1 is still not included in RPS/XPS packet processing.

The next thing I tried was setting bitmask 3 (binary '11') for all queues in the system, using a very popular command from this forum:

for file in /sys/class/net/*/queues/[rt]x-*/[rx]ps_cpus; do [ -w "$file" ] && echo 3 > "$file"; done

Unfortunately, after checking '/proc/interrupts' the next day, I found no difference.

According to the documentation, RPS introduces a large number of inter-processor interrupts.
I was not able to find any evidence that such interrupts are present on 19.07, even after all my tweaks.
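One thing worth checking (my suggestion, not something established in this thread): steered packets are processed in the NET_RX softirq context on the target CPU rather than as hardware interrupts, so per-CPU NET_RX counters in /proc/softirqs may be a more direct indicator of RPS activity than /proc/interrupts:

```shell
# Per-CPU softirq counters; the NET_RX row shows where steered
# receive processing actually runs.
grep -E 'CPU|NET_RX' /proc/softirqs
```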


Could you please open a separate thread for your questions? They have nothing to do specifically with my build; they are a more generic discussion.

@tmomas (splitting the thread?)