I'm running a snapshot build on an RPi4 with the irqbalance 1.9.0-6 package installed. /etc/config/irqbalance:
config irqbalance 'irqbalance'
option enabled '1'
# Level at which irqbalance partitions cache domains.
# Default is 2 (L2$).
#option deepestcache '2'
# The default value is 10 seconds
#option interval '10'
# List of IRQs to ignore
#list banirq '36'
#list banirq '69'
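For the record, I restarted the service after editing the file (standard OpenWrt service commands):
/etc/init.d/irqbalance restart
ps | grep irqbalance   # confirm the daemon is actually running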
It's enabled and running, yet it doesn't seem to be spreading internet traffic and interrupts among all 4 cores. Here is the screenshot:
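The same picture shows up in text form in /proc/interrupts, which lists per-CPU delivery counts for each IRQ:
# header row plus the eth0 lines; with balancing working, the eth0
# counters should grow in more than one CPU column over time
grep -E 'CPU|eth0' /proc/interrupts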
You are correct, it is not working as you expect. I ended up adding this to my /etc/rc.local:
# Move RPS/XPS to CPU2 and CPU3 only, hex mask c, counting CPUs from 0 (disable packet steering or it gets overwritten)
echo c > /sys/class/net/eth0/queues/rx-*/rps_cpus
echo c > /sys/class/net/eth0/queues/tx-*/xps_cpus
echo c > /sys/class/net/eth1/queues/rx-*/rps_cpus
#echo c > /sys/class/net/eth1/queues/tx-*/xps_cpus # not available in eth1 (USB NIC)
# eth0 (LAN NIC) IRQs on CPU1, mask 0x2 (rx/tx) to help improve bandwidth (might hurt latency?)
echo 2 > /proc/irq/39/smp_affinity
echo 2 > /proc/irq/40/smp_affinity
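The values written are hexadecimal CPU bitmasks (bit N = CPU N, counting from CPU0), so 'c' = binary 1100 = CPU2+CPU3 and '2' = binary 0010 = CPU1. You can read them back to confirm what actually got applied:
grep -H . /sys/class/net/eth0/queues/rx-*/rps_cpus
cat /proc/irq/39/smp_affinity
# smp_affinity_list shows plain CPU numbers instead of a hex mask
cat /proc/irq/39/smp_affinity_list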
irqbalance is designed to move each distinct interrupt to the least busy core, once per sampling period.
It's not possible to "split" traffic from a single interrupt source (a single NIC's inbound queue, for instance) across multiple cores; delivery is necessarily serial. You can flip that interrupt between cores as often as you like, but that just makes a bad problem worse.
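What can spread the per-packet work of a single NIC across cores is RPS: one core still takes the hardware interrupt, but the softirq processing is handed off to the CPUs in the rps_cpus mask, which is exactly what the rc.local snippet above configures. A minimal sketch, assuming a 4-core board and a single rx-0 queue:
# hardware IRQ stays where it is; packet processing fans out
# to CPUs 1-3 (hex mask e = binary 1110)
echo e > /sys/class/net/eth0/queues/rx-0/rps_cpus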
Came here to report back. At the moment I can't get snapshot to work with samba4, so I'm running the 22.03.2 firmware with packet steering off, the /etc/rc.local code from above, and this /etc/config/irqbalance:
config irqbalance 'irqbalance'
option enabled '1'
# Level at which irqbalance partitions cache domains.
# Default is 2 (L2$).
#option deepestcache '2'
# The default value is 10 seconds
#option interval '10'
# List of IRQs to ignore
#list banirq '36'
#list banirq '69'
But why do we have to enable it in the config file (option enabled '1') when it already shows as enabled in the LuCI startup list?
Seems like it doesn't use CPU2 (I may be wrong), but the idle CPU sits at 81% under load. Whereas if we set the affinity manually:
echo 8 > /proc/irq/18/smp_affinity
echo 8 > /proc/irq/32/smp_affinity
echo 2 > /proc/irq/39/smp_affinity
echo 4 > /proc/irq/40/smp_affinity
it performs better and with less CPU usage (~85% idle under load).
What do you think?
The startup list in LuCI shows all services that have their init scripts enabled, irqbalance among them. That is a pretty low-level on/off toggle, which does not survive sysupgrade.
Most applications offer detailed configuration via their UCI config file, and that file usually also carries a separate enable/disable option.
Irqbalance is no magic. It has some guessing logic about the role of each IRQ, but it might not correctly recognize all IRQs and may leave those unhandled.
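If irqbalance mishandles a particular IRQ, you can exclude it with the banirq list from the config shown earlier and pin it yourself; a sketch, where IRQs 39 and 40 are the eth0 numbers quoted above and will differ on other devices:
config irqbalance 'irqbalance'
option enabled '1'
# leave these IRQs alone so a manual smp_affinity setting is not overwritten
list banirq '39'
list banirq '40'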
In my case manually assigned IRQ affinity works better than irqbalance, so would you recommend going manual? Or should I stick with irqbalance because it has more pros than cons compared to manual assignment?
I have recently noted with ipq807x/DL-WRX36 that the dynamic IRQ assignments from irqbalance may make the router crash, while manual affinity assignment done once seems to work well.
Irqbalance is no magic. It started in the x86 world and may not be perfect for ARM chips.
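One caveat if you go manual: IRQ numbers are not guaranteed to be stable across kernel versions or reboots, so looking the number up by name in /etc/rc.local is safer than hardcoding it. A sketch (the 'eth0' match is an assumption, check /proc/interrupts for the right token on your device):
# find the first IRQ number whose /proc/interrupts line mentions eth0
irq=$(awk -F: '/eth0/ {gsub(/ /, "", $1); print $1; exit}' /proc/interrupts)
# pin it to CPU1 (mask 0x2) only if the lookup succeeded
[ -n "$irq" ] && echo 2 > /proc/irq/$irq/smp_affinity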
Hey!
Any idea why I cannot balance IRQs across the 4 cores of my Raspberry Pi 4B?
I have installed the latest irqbalance version (1.9.2-2),
then set option enabled '1' in the config file,
and also checked the Packet Steering option in LuCI.
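In case it helps, I can run irqbalance once in the foreground with debug output to show how it classifies each IRQ (flags per the upstream irqbalance documentation):
/etc/init.d/irqbalance stop
irqbalance --oneshot --debug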