With 21.02 going stable and master switching over to kernel 5.10, I know it doesn't affect these builds much, but it's a very interesting week for this target.
Care to elucidate with any prognostications?
Novice question - what does all this mean?
Are you stating that you have CPU0 for receiving and CPU1 for transmitting using a DSA config? So throughput is not impacted (or less impacted) vs the 'official' builds?
My main reason for staying on 19.x is that my speeds drop under DSA.
Are there any other caveats or constraints before I go jumping in?
What about VLANs?
What speed reduction do you see with DSA?
It's been about 5 months since I last played with DSA on my WRT1900ACSv2, but at the time I was seeing about a 200 Mb/s drop on ingress; my connection is 950/500 fibre to the house.
People may say, "So what? At that speed..." but to me things also 'felt' a bit laggy across my entire network (3 wired PCs, an AndroidTV, a couple of NAS boxes, and half a dozen Wi-Fi devices), though I'm not smart enough to quantify this.
I am really hanging out for DSA multi-CPU support, but I'm not holding my breath for a quick resolution: based on what I have read, the solution has been in debate for a few years now.
I experience the same as you have described since moving to DSA. I've got 910/110 FTTH and there is a speed reduction compared to swconfig used in 19.07.
A good article exists which explains that the kernel doesn't support multiple CPU ports at the moment because previous attempts were not accepted upstream.
I haven't been able to find a patch that enables multi-CPU for 5.10.
Would this one not be of help?
It seems they managed to get multi-CPU working with DSA, with a few other patches following that commit.
I have seen this but don't think it is designed for this target (mvebu).
@Ansuel will this work for mvebu?
It's generic to the kernel itself, i.e., it is not architecture-specific beyond how many CPUs you have. Mostly, the patches allow one to manually configure which CPU port will process traffic from a specific switch port. The default is for one CPU port to process all the ports.
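For context, this is roughly what the standard (single-CPU) DSA binding looks like in a device tree: the port node carrying an `ethernet = <...>` phandle is the CPU port, and since the stock kernel only honours one such port, all switch traffic funnels through it. The node names and the `&eth0` label below are illustrative, not taken from any particular board:

```
ports {
	port@0 {
		reg = <0>;
		label = "lan1";
	};
	/* ... more user ports ... */
	port@5 {
		reg = <5>;
		ethernet = <&eth0>;	/* the single CPU (conduit) port */
		phy-mode = "rgmii-id";
		fixed-link {
			speed = <1000>;
			full-duplex;
		};
	};
};
```

The multi-CPU patches essentially relax the "only one port with an `ethernet` phandle" restriction and add a way to steer each user port to a chosen CPU port.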
So the above pull request is target specific…
A few posts ago I also mentioned your comments regarding the kernel and the fact that DSA uses one CPU port by default.
Not the way it reads to me.
Okay. What architecture are the patches limited to?
I took the comment in that post to mean it was applicable to other targets, i.e., the bits indicated by the comment. I haven't bothered to kick it around myself, as the pipe to my shack is far from the symmetric pipe that would warrant the need on the wrtpac targets.
My bad - ipq806x, ar71xx, ath79.
Misconstrued the context.
I too read it this way.
I see our benefactor has created a new build on 15 Sept. I want to test whether multi-CPU is working before I roll my own (I use this as a base and add a couple of packages).
What is an easy way to see CPU use per CPU? "top" only shows the total percentage, and the other top options to break it down per processor don't work.
Multi-CPU DSA works on Rango, but additional patches are needed. Will create a PR when I have them all ready and packed. Obviously, the original multi-CPU DSA support PR must be upstreamed beforehand.
To see the CPU usage:
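The command itself didn't survive in the quote above. As one possibility (not necessarily what the poster meant), here's a minimal sketch that needs only `/proc/stat` and awk, both available on a stock OpenWrt install:

```shell
#!/bin/sh
# Per-CPU busy percentage over a 1-second window. Assumption: standard
# /proc/stat layout, where field 5 of each cpuN line is the idle counter.
snap() { grep '^cpu[0-9]' /proc/stat; }
A=$(snap); sleep 1; B=$(snap)
printf '%s\n%s\n' "$A" "$B" | awk '
  { tot = 0; for (i = 2; i <= NF; i++) tot += $i
    if (!($1 in t1)) { t1[$1] = tot; i1[$1] = $5 }   # first sample
    else             { t2[$1] = tot; i2[$1] = $5 } } # second sample
  END { for (c in t2) {
          dt = t2[c] - t1[c]; di = i2[c] - i1[c]
          if (dt > 0) printf "%s: %d%% busy\n", c, 100 * (dt - di) / dt } }'
```

On a dual-core mvebu box this prints one line each for cpu0 and cpu1, so you can see whether traffic is pinning a single core.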
To see which port is assigned to a CPU:
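Again the command is missing from the quote, but one way that works with plain iproute2 is to look at the interface names: each DSA user port shows up as `port@conduit` (e.g. `lan1@eth0`), and the name after the `@` is the CPU port carrying that switch port's traffic. With single-CPU DSA, every lanX hangs off the same ethX:

```shell
# List interfaces in brief form and keep only the DSA-style "port@conduit"
# entries; the fallback message fires on machines with no DSA ports at all.
ip -br link | grep '@' || echo "no DSA ports on this machine"
```

If multi-CPU DSA were active, you'd expect to see some ports bound to eth0 and others to eth1 here.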
That performance test by pkgadd in your link shows the huge performance hit from moving to DSA, at least on the ipq806x target. We know it hits our targets as well, just less so. DSA is clearly a big performance problem unless it improves, and it may not be working out as expected, even though using upstream kernel features is generally preferred.
Using the Divested 15th Sept build (kernel v5.10.65)
I take it that, as I am only seeing eth0, DSA multi-CPU is not yet available.
```
root@OpenWrt:~# cat /proc/interrupts
           CPU0       CPU1
 25:          0          0      GIC-0  27 Edge      gt
 26:      43714      36775      GIC-0  29 Edge      twd
 27:          0          0       MPIC   5 Level     armada_370_xp_per_cpu_tick
 29:      12900          0      GIC-0  34 Level     mv64xxx_i2c
 30:         22          0      GIC-0  44 Level     ttyS0
 40:          0          0      GIC-0  41 Level     f1020300.watchdog
 44:          0          0      GIC-0  96 Level     f1020300.watchdog
 45:      13620          0       MPIC   8 Level     eth0
 46:          0          0      GIC-0  50 Level     ehci_hcd:usb1
 47:          0          0      GIC-0  51 Level     f1090000.crypto
 48:          0          0      GIC-0  52 Level     f1090000.crypto
 49:          0          0      GIC-0  58 Level     ahci-mvebu[f10a8000.sata]
 50:        370          0      GIC-0 116 Level     marvell-nfc
 51:          0          0      GIC-0  49 Level     xhci-hcd:usb2
 52:          2          0      GIC-0  54 Level     f1060800.xor
 53:          2          0      GIC-0  97 Level     f1060900.xor
```
Irqbalance doesn't do much on mvebu (far less than on other targets, unfortunately). Mostly IRQs just stay on CPU0, but you should at least see the mwlwifi driver running on CPU1. Check that the 'enabled' line is set to '1' in /etc/config/irqbalance. Multi-CPU DSA is likely a ways off, if it'll ever work on our target.
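For anyone hunting for that line, the relevant bit of /etc/config/irqbalance looks like this (section and option names as I recall them from the OpenWrt irqbalance package; check your own file rather than pasting blindly):

```
config irqbalance 'irqbalance'
	option enabled '1'
```

After changing it, restart the service with `/etc/init.d/irqbalance restart`.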