25.12: is irqbalance still necessary?

I’m using irqbalance on 24.10.5, but is it still necessary in the new version 25.12?
Note: I'm using SQM, and I get an A on the bufferbloat test when I don't install this package, but an A+ when I do.

You yourself just showed that it is necessary. In general the kernel relies on irqbalance for IRQ balancing and does nothing about it by itself.

Why? Is it not necessary in 25.12?

If it helps you, just use it.

But irqbalance is not that good at properly recognising the network-related IRQs on non-x86 processors, so some targets/devices use manual IRQ allocation scripts instead. E.g. qualcommax/ipq807x:
https://github.com/openwrt/openwrt/blob/main/target/linux/qualcommax/ipq807x/base-files/etc/init.d/smp_affinity
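For reference, a minimal sketch of the kind of thing such a manual affinity script does. The `irq_for` helper and the device name are just illustrative (not taken from the linked script); it looks IRQ numbers up by name in /proc/interrupts, since the numbers can differ between boots and builds:

```shell
# Hypothetical helper: find an IRQ number in /proc/interrupts by device
# name, so nothing has to be hardcoded. Second argument (the file to
# search) exists only to make the helper easy to test.
irq_for() {
    grep -m1 "$1" "${2:-/proc/interrupts}" | cut -d: -f1 | tr -d ' '
}

# Example use on the router: pin the first ethernet rx queue to CPU1
# (smp_affinity takes a hex CPU bitmask; 2 = CPU1). Uncomment to apply:
# echo 2 > "/proc/irq/$(irq_for 'ethernet:rxq0')/smp_affinity"
```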

4 Likes

Among its generic defects: it does not detect drivers that already balance themselves, like nvme or mtk crypto.

So it shouldn't cause any problems in the new version; I was using it on my Xiaomi AX3000T and I can continue using it.

Is there a script like the one you linked, but for qualcomm/ipq40xx?
I have a Netgear Orbi RBR50 v1. I want the Ethernet IRQs on one core, the WiFi on another, and AdGuard Home on the last core, but the PCI WiFi chip won't move. Does this make sense? Is it actually a good idea or a bad one? Right now I have packet steering enabled on all CPUs with steering flows set to 128.

Sorry, but I have no knowledge about ipq40xx.

The script currently used in ipq807x evolved from attempts like this:

I'm not sure what I did was correct, but here is what I used; I added this to /etc/rc.local:

# Pin Ethernet hardware queue IRQs to CPU1
# (smp_affinity takes a hex CPU bitmask: 2 = CPU1)
for irq in 33 37 41 45 49 51 53 55; do
    echo 2 > /proc/irq/$irq/smp_affinity
done

# Pin ath10k_ahb WiFi IRQs to CPU2 (4 = CPU2),
# skipping PCI-MSI IRQ 66, which won't move
for irq in 67 68; do
    echo 4 > /proc/irq/$irq/smp_affinity
done

This is a recent IRQ snapshot. There are a lot of power outages here, so I can't give a full day of uptime stats, but I think it works? Let me know whether what I did was correct. That's why I was surprised when I saw your script: what I'm using is so simple, and you have a big script to do this manual IRQ pinning.

           CPU0       CPU1       CPU2       CPU3       
 26:     243827     239436     264549     212939 GIC-0  20 Level     arch_timer
 29:          5          0          0          0 GIC-0 270 Level     bam_dma
 30:          0          0          0          0 GIC-0 239 Level     bam_dma
 31:          5          0          0          0 GIC-0 139 Level     msm_serial0
 33:         10     119782          0          0 GIC-0  97 Edge      c080000.ethernet:txq0
 37:         13     119566          0          0 GIC-0 101 Edge      c080000.ethernet:txq4
 41:          0     198340          0          0 GIC-0 105 Edge      c080000.ethernet:txq8
 45:          2     135304          0          0 GIC-0 109 Edge      c080000.ethernet:txq12
 49:         14     663016          0          0 GIC-0 272 Edge      c080000.ethernet:rxq0
 51:         24      23260          0          0 GIC-0 274 Edge      c080000.ethernet:rxq2
 53:          9      36592          0          0 GIC-0 276 Edge      c080000.ethernet:rxq4
 55:         12      40624          0          0 GIC-0 278 Edge      c080000.ethernet:rxq6
 57:         31          0          0          0 GIC-0 129 Level     i2c_qup
 58:       2607          0          0          0 GIC-0 155 Level     mmc0
 59:          2          0          0          0 GIC-0 170 Level     7824900.mmc
 61:          0          0          0          0   PCI-MSI   0 Edge      aerdrv
 62:          0          0          0          0   msmgpio  22 Edge      7824900.mmc cd
 63:          0          0          0          0   msmgpio  18 Edge      keys
 64:          0          0          0          0   msmgpio  49 Edge      keys
 65:          0          0          0          0 GIC-0 168 Level     xhci-hcd:usb1
 66:     308895          0          0          0   PCI-MSI 524288 Edge      ath10k_pci
 67:         24          0     403915          0 GIC-0 200 Level     ath10k_ahb
 68:         22          0          0          0 GIC-0 201 Level     ath10k_ahb
IPI0:          0          0          0          0  CPU wakeup interrupts
IPI1:          0          0          0          0  Timer broadcast interrupts
IPI2:       2737       2963       2859       2936  Rescheduling interrupts
IPI3:     287103      22666     270386     322491  Function call interrupts
IPI4:          0          0          0          0  CPU stop interrupts
IPI5:          2          0          0          0  IRQ work interrupts
IPI6:          0          0          0          0  completion interrupts
Err:          0

You don't show a baseline, i.e. no irqbalance and no script. It would be better to spread the eth queues, one of each type per core. It is a network router, after all...
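As an illustration of the "one queue per core" idea, a sketch using the four ethernet rx queue IRQs (49, 51, 53, 55) from the snapshot above. The actual write is commented out so you can verify the masks first; this is not the qualcommax script, just a minimal example:

```shell
# smp_affinity takes a hex CPU bitmask: CPU0 = 1, CPU1 = 2, CPU2 = 4, CPU3 = 8.
cpu=0
for irq in 49 51 53 55; do          # the four ethernet rx queues from the snapshot
    mask=$(printf '%x' $((1 << cpu)))
    echo "IRQ $irq -> CPU$cpu (mask $mask)"
    # echo "$mask" > /proc/irq/$irq/smp_affinity   # uncomment on the router
    cpu=$((cpu + 1))
done
```

The tx queues (33, 37, 41, 45) could be spread the same way, so each core gets one rx and one tx queue.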

Ohh yeah, sure, I can show you that, wait.

1 Like

This was the baseline; I had saved it in a text file beforehand.

           CPU0       CPU1       CPU2       CPU3       
 26:    2994759    2602121    2157948    2240521 GIC-0  20 Level     arch_timer
 29:          5          0          0          0 GIC-0 270 Level     bam_dma
 30:          0          0          0          0 GIC-0 239 Level     bam_dma
 31:          5          0          0          0 GIC-0 139 Level     msm_serial0
 33:     715099          0       2049       2700 GIC-0  97 Edge      c080000.ethernet:txq0
 37:     449511          0       1367       6839 GIC-0 101 Edge      c080000.ethernet:txq4
 41:     546931         12       1005       1493 GIC-0 105 Edge      c080000.ethernet:txq8
 45:    4649119          0     132926       2560 GIC-0 109 Edge      c080000.ethernet:txq12
 49:   11336949          0          0          0 GIC-0 272 Edge      c080000.ethernet:rxq0
 51:     261855          0         59       2005 GIC-0 274 Edge      c080000.ethernet:rxq2
 53:     225113          0        855      16141 GIC-0 276 Edge      c080000.ethernet:rxq4
 55:     361264        618       2926       8221 GIC-0 278 Edge      c080000.ethernet:rxq6
 57:         34          0          0          0 GIC-0 129 Level     i2c_qup
 58:       8320          0        149         18 GIC-0 155 Level     mmc0
 59:          2          0          0          0 GIC-0 170 Level     7824900.mmc
 61:          0          0          0          0   PCI-MSI   0 Edge      aerdrv
 62:          0          0          0          0   msmgpio  22 Edge      7824900.mmc cd
 63:          0          0          0          0   msmgpio  18 Edge      keys
 64:          0          0          0          0   msmgpio  49 Edge      keys
 65:          0          0          0          0 GIC-0 168 Level     xhci-hcd:usb1
 66:    9042101          0          0          0   PCI-MSI 524288 Edge      ath10k_pci
 67:    2805044       2094      20596       3007 GIC-0 200 Level     ath10k_ahb
 68:         18          0          0          0 GIC-0 201 Level     ath10k_ahb
IPI0:          0          0          0          0  CPU wakeup interrupts
IPI1:          0          0          0          0  Timer broadcast interrupts
IPI2:      21899      25645      25515      19680  Rescheduling interrupts
IPI3:     913878    5788607    3362563   15937409  Function call interrupts
IPI4:          0          0          0          0  CPU stop interrupts
IPI5:       6370       6376       6511       6515  IRQ work interrupts
IPI6:          0          0          0          0  completion interrupts
Err:          0

You mean spread the tx queues on one core and the rx on another? Did I understand you correctly?

See upstream: https://www.kernel.org/doc/html/latest/networking/scaling.html#rss-irq-configuration
You can make irqbalance do automatic IRQ assignments and exclude the ones you tune manually. As far as network (or video card) IRQs are concerned, I'd trust the standard tools to do the work.
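As a sketch of that "exclude what you tune manually" idea: upstream irqbalance has a `--banirq` option that tells it to leave a given IRQ alone (verify it is present in your build's `irqbalance --help` before relying on it). The IRQ numbers here are just the ones pinned manually earlier in this thread:

```shell
# Build an irqbalance command line that skips IRQs 66-68, so the manual
# pinning in /etc/rc.local keeps control of them while irqbalance
# balances everything else. --banirq may be given multiple times.
MANUAL_IRQS="66 67 68"
ARGS=""
for irq in $MANUAL_IRQS; do
    ARGS="$ARGS --banirq=$irq"
done
echo "would run: irqbalance$ARGS"
```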

1 Like

I know you explained it completely fine for a person with more knowledge, but can you explain it to me in a simpler way? :sweat_smile:

Was the manual IRQ tuning script I have correct, and does everything look good in the IRQ snapshot?

1 Like

I would guess irqbalance would spread the 4 tx and 4 rx IRQs across the 4 cores, so there is a softirq context processing network traffic on each.
You probably should not use packet steering together with irqbalance.

2 Likes