What happened to eth1 now that we are using DSA?

Back in 19.07 and earlier, this was the default switch config, where WAN and LAN each had their own "CPU port", and when I ran speed tests, both cores would be utilized

In snapshot/21.02, I just have eth0, lan1@eth0...lan4@eth0, wan@eth0

And when I do speed tests (~550 Mbit/s), only core 0 is utilized

Is this working as intended, or a regression? I'm not sure if there's any actual performance bottlenecking going on


Yes, the DSA framework currently only supports a single CPU port; eth1, while present, is ignored by the kernel for the time being. While there have been multiple attempts to rectify this already, none have passed netdev (upstream/mainline Linux) scrutiny so far, and their goal posts have changed multiple times.

Obviously this has consequences for routing performance, as you effectively end up with half-duplex capabilities (WAN and LAN having to share a single 1 GBit/s CPU port). How much of a performance regression this is in practice is another question, as the situation is a bit more complex than 'simply' halving your effective throughput to 500 MBit/s - but anyone with WAN speeds below 500 MBit/s probably doesn't have to worry anyway.
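To see where the rough "halving" intuition comes from: routed WAN<->LAN traffic has to cross the shared CPU port twice, once on ingress from the WAN port and once on egress to a LAN port. A back-of-the-envelope sketch (figures are illustrative assumptions, not measurements):

```shell
#!/bin/sh
# Illustrative arithmetic only: with a single shared CPU port, each
# routed packet crosses that port twice (ingress + egress), so the
# one-directional routing ceiling is roughly half the port speed.
CPU_PORT_MBPS=1000

routing_ceiling() {
    # half the CPU port bandwidth, in Mbit/s
    echo $(( $1 / 2 ))
}

routing_ceiling "$CPU_PORT_MBPS"   # prints 500
```

In practice NAT offloading, frame overhead, and traffic mix shift this number, which is why it isn't 'simply' 500 MBit/s - but it explains the ballpark.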

This status quo probably won't remain an issue for all eternity: the hardware is still physically present, and DSA development has sped up considerably over the last year (also for >>10-40 GBit/s networking), so future improvements are likely, be it via mainline or merely downstream in OpenWrt.

We are currently in a difficult situation. swconfig is better integrated into OpenWrt and allows easier access to the simple and common use cases of typical OpenWrt devices - but it has been rejected in mainline Linux and therefore has no future. DSA might not technically be 'young' anymore, but its (mainline) adoption has sharply increased just recently, allowing seamless integration with iproute2 and far-reaching hardware offloading capabilities (it is used for >=40-100 GBit/s networking and high-end switch fabrics) - but integration into OpenWrt has to start somewhere/sometime and will take a little longer to become feature complete relative to swconfig (it already allows other things that have never been possible with swconfig before).

As expected, early adopter targets (mvebu and ramips) will (temporarily) have to suffer to some extent, while others (realtek rtl838x/rtl93xx) profit from it straight away; remember these are switches, not routers - for these devices the CPU port doesn't have to be fast.


DSA only supports 1 CPU port currently.

There are patches that enable multiple CPU ports in the downstream Turris OS, but they have not been accepted into OpenWrt.

In your case, you can adjust the RPS/XPS CPU masks to do some load balancing.
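For reference, RPS/XPS are steered via hex CPU bitmasks under /sys/class/net/<iface>/queues/. A minimal sketch, assuming a dual-core board and the interface/queue names of this thread (eth0, rx-0) - verify both on your own device:

```shell
#!/bin/sh
# Sketch: steer RX packet processing for an interface onto a chosen CPU.
# Interface and queue names (eth0, rx-0) are assumptions for this example.

# Build a hex CPU bitmask from a core number (bit N set = CPU N).
cpu_to_mask() {
    printf '%x' $(( 1 << $1 ))
}

# Write the RPS mask for the first RX queue of an interface.
steer_rx() {
    iface="$1"; cpu="$2"
    cpu_to_mask "$cpu" > "/sys/class/net/$iface/queues/rx-0/rps_cpus"
}

# Example (requires root and the interface to exist):
# steer_rx eth0 1    # move eth0 RX processing to CPU 1
```

The analogous XPS knob is queues/tx-*/xps_cpus. Splitting WAN and LAN processing across cores this way won't restore the second CPU port, but it lets both cores share the softirq load.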


Thank you for the detailed write-up!

Now I also see a new option "Packet Steering" (at least it wasn't there in 19.07 under the "globals" tab), defaulting to "off".

I googled and this commit came up, saying that "mvebu with its hardware scheduling issues [5] might want to enable packet steering by default": https://git.openwrt.org/?p=openwrt/openwrt.git;a=commit;h=d3868f15f876507db54afacdef22a7059011a54e

Is this still relevant? (I have mvebu/rango) I've just enabled packet steering for now.


Packet steering can significantly speed up throughput on some devices (that's why it was introduced, after being tested on mt7621), but it considerably hurt performance on others (e.g. lantiq) - therefore the feature has been made configurable (default-off), rather than being enabled for everyone (or at least all SMP targets). You'll have to test it on your own device to evaluate its effect on mvebu.
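If you'd rather toggle it from the CLI than from LuCI, the option lives in the network globals section of UCI (same setting as the "Packet Steering" checkbox):

```shell
#!/bin/sh
# Enable the default-off packet steering option via UCI,
# then reload the network config so it takes effect.
uci set network.globals.packet_steering='1'
uci commit network
/etc/init.d/network restart
```

Set it back to '0' (and restart the network again) to compare throughput with and without it.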

