[netifd | odhcpd] concept for placing DSA ports into same DHCP subnet?

{"kernel":"5.4.50","hostname":"OpenWrt","system":"ARMv7 Processor rev 1 (v7l)","model":"Turris Omnia","board_name":"cznic,turris-omnia","release":{"distribution":"OpenWrt","version":"SNAPSHOT","revision":"r13719-66e04abbb6","target":"mvebu/cortexa9","description":"OpenWrt SNAPSHOT r13719-66e04abbb6"}}


With DSA it seems redundant to create a kernel soft bridge for Lan ports since the switch chip fabric is already hard bridging those ports.

That said, is there a way for netifd to place/manage multiple DSA Lan ports in the same DHCP subnet without creating a soft bridge, e.g.

config interface 'lan'
	list ifname 'lan0'
	list ifname 'lan1'
	list ifname 'lan2'

?

Tried a few variations but none worked out.

@jow I suppose netifd is lacking something like option type 'dsa_bridge' to address DSA ports already bridged within the switch's fabric?
Creating a soft bridge not only seems redundant, consuming CPU cycles unnecessarily for the kernel to handle traffic on the soft bridge, but may also cause issues with the DSA concept in general.

Just create the linux bridge (option type 'bridge'); the DSA driver / switchdev framework will take care of configuring the HW path.
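For illustration, a minimal sketch of such a bridged setup in /etc/config/network on this snapshot, reusing the port names from above (the address is illustrative):

# sketch: bridge over the three DSA ports; switchdev offloads the data path
config interface 'lan'
	option type 'bridge'
	list ifname 'lan0'
	list ifname 'lan1'
	list ifname 'lan2'
	option proto 'static'
	option ipaddr '192.168.1.1'
	option netmask '255.255.255.0'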


What is the point of the soft bridge when the DSA ports are already bridged in the switch fabric? It just adds unnecessary overhead and sort of defies the DSA concept.

No, the DSA concept is about exposing the switch ports as netdevs to Linux and allowing the switching to be configured with standard commands. So by default there should be no switching between the ports; only when the bridge is created should the driver configure that.


DSA exposes the internal switch fabric to Linux network userspace exactly for this reason: so that you can configure and use the datapath by means of the standard ip / bridge utilities.

Setting up bridges over per-port netdevs is the canonical way to bridge DSA ports. It does not introduce overhead for port-to-port traffic.
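For reference, the canonical sequence with plain iproute2 tools looks along these lines (port names assumed to be lan0..lan2):

# create the software bridge; the switchdev framework programs the HW path
ip link add name br0 type bridge
ip link set dev lan0 up
ip link set dev lan1 up
ip link set dev lan2 up
ip link set dev lan0 master br0
ip link set dev lan1 master br0
ip link set dev lan2 master br0
ip link set dev br0 up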

Edit: @tl71 was faster.


The concept of a managed switch is to switch packets within its hardware (fabric) bridge by default; it does not require a Linux kernel software bridge for that functionality.

I have just removed the kernel software bridge and packets are being switched just fine between the switch ports.


What does not make sense is

since the Lan ports are independent ethernet ports and thus each requires its own proper DHCP subnetting (that is, without the soft bridge).


Regarding odhcpd, it seems that bundling the ports does not work though, e.g.

option interface 'lan0 lan1 lan2'

or

list interface 'lan0'
list interface 'lan1'
list interface 'lan2'

Is that expected?

Correct, it doesn't require the kernel to do that. The kernel bridge serves as the configuration interface for the hardware switch.

No idea what you mean by that. If you want to have individual subnets / broadcast domains on each port, just configure them as independent interfaces.

Yes, because there must be only one interface per pool. If you want to treat multiple switch ports as one logical interface, you must bridge them with a Linux software bridge; there is no way around that.
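In /etc/config/dhcp, that constraint means one dhcp section per logical interface, along these lines (values are assumptions):

# one pool, bound to exactly one netifd logical interface
config dhcp 'lan'
	option interface 'lan'
	option start '100'
	option limit '150'
	option leasetime '12h'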


I would not concur; maybe you just omitted the driver part though, as in the bridge driver: the soft bridge (device) itself does nothing about the configuration of the hardware switch; it is the bridge driver, in conjunction with the switchdev framework, that does.


Yes, that is what I meant and how I configured netifd in the meantime, except that the goal is not individual subnets but to have multiple DSA ports in the same subnet; just a bit more work to set up, but it saves the kernel soft bridge.


Thank you for the clarification.

Too bad; just a bit more work to set up the config then. I most certainly prefer to do away with the soft bridge, and let us disagree here, as I remain of the opinion that it is unnecessary overhead with DSA ports on a managed switch.

I made a simplified statement.

I am really curious how you intend to implement that. Even the official upstream DSA documentation uses software bridges to handle multiple DSA ports as one logical interface on the host system. https://www.kernel.org/doc/html/latest/networking/dsa/configuration.html#gateway

I disagree with the assertion that the soft bridge in conjunction with DSA introduces overhead.

A separate section for each Lan/WLan port, minding the firewall zone adjustments and being diligent with the subnet planning, resulting in a configuration along the lines of the sketch below.
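A minimal sketch of such per-port sections in /etc/config/network, with illustrative addresses (two ports shown; note the shared /24, which is revisited further down):

# sketch: each DSA port as its own netifd interface, same /24 on both
config interface 'lan0'
	option ifname 'lan0'
	option proto 'static'
	option ipaddr '192.168.84.1'
	option netmask '255.255.255.0'

config interface 'lan1'
	option ifname 'lan1'
	option proto 'static'
	option ipaddr '192.168.84.2'
	option netmask '255.255.255.0'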


Those are just showcases

In this documentation some common configuration scenarios are handled as showcases:

chosen for whatever use they had in mind, but nothing set in stone to be followed to the letter. DHCP for the soft bridge is just a bit more convenient than configuring each DSA port separately; the latter is apparently my preference though.

So different subnets / logical interfaces.

:slight_smile: What is your semantics of a (different) subnet? In this case, on IPv4, the Lan ports and the clients connecting to them are within the 192.168.84.0/24 mask; one just needs to be diligent with the pool range for each port (to prevent clashing).
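A sketch of such disjoint pools in /etc/config/dhcp, with assumed offsets (lan0 hands out .100-.149, lan1 hands out .150-.199 within the same /24):

# sketch: non-clashing pool ranges per port interface
config dhcp 'lan0'
	option interface 'lan0'
	option start '100'
	option limit '50'

config dhcp 'lan1'
	option interface 'lan1'
	option start '150'
	option limit '50'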

Having the same subnet range with different IPs on the various DSA ports might work - I didn't check myself how it behaves in practice; likely it is similar to having multiple IP addresses on a normal ethernet NIC. But I can't see how such a setup could work if you factor in WLAN. If you set 192.168.84.*/24 on wlan0, routing will probably break down.

Furthermore, it is not clear how dnsmasq reacts if multiple candidate interfaces are available to satisfy a given DHCP pool. It could be that it all works because it simply selects the first DSA port interface, which then floods to all others, but that is a lot of "ifs" and certainly not the way things are intended to work.

Also you're going to need a bridge anyway if you want a common broadcast domain for ethernet and wireless.

It also feels a bit odd that you're essentially adding N additional IP addresses plus ARP entries, associated source-selection overhead etc., just to skip the unproven, claimed overhead of a software bridge.

I am totally fine with you implementing your personal setups this way, but since you opened such a generic topic, then proposed a configuration deviating from the official DSA guidelines and marked your odd configuration as the solution, I am worried that you're setting a wrong and misleading precedent here.

As the documentation states, these are just showcases, not guidelines. Notwithstanding, your link

picks the gateway config, whilst there is also the single port config https://www.kernel.org/doc/html/latest/networking/dsa/configuration.html#single-port (sans soft bridge).
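That showcase assigns each port its own small, non-overlapping subnet with plain iproute2 commands, along these lines (port names adapted to this thread):

# sketch: one adjacent /30 per port, no bridge involved
ip addr add 192.0.2.1/30 dev lan0
ip addr add 192.0.2.5/30 dev lan1
ip addr add 192.0.2.9/30 dev lan2
ip link set dev lan0 up
ip link set dev lan1 up
ip link set dev lan2 up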


I have changed the solution, if you are more comfortable with that.


I asked a question and we had a discourse in which you disagreed - I suppose users reading through the thread will have more trust in you than in me. Do you want me to put up an augmented :warning: in one particular post or in the topic (if that is feasible)?


I did not test dnsmasq since I am not utilising it; odhcpd (as stated in the topic) is fine for me. I might see how kea works out.

To my understanding, the single port config does not mean "treat switch as a single port" but rather "configure each port independently".

Quoting the documentation:

single port
Every switch port acts as a different configurable Ethernet port

So it is essentially the same as having three ethernet NICs installed in a PC.

Yep, which is what I am doing with my config

Not quite. Note how the DSA configuration example chooses adjacent, non-overlapping IP subnets while your config (at least judging from the screenshots) uses overlapping subnets (actually the same subnet on each port).


Yep, that had to be changed as pointed out

It happened already with two Lan ports on the same subnet, producing two gateways for the subnet, and that did not go down well with routing. After rectification it works fine, incl. Wlan; see the sketch below.
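One way the rectified addressing might look, splitting the former /24 into per-port /26 segments (values illustrative, not the exact config used here):

# sketch: non-overlapping /26 per port; 192.168.84.0/26 and 192.168.84.64/26
config interface 'lan0'
	option ifname 'lan0'
	option proto 'static'
	option ipaddr '192.168.84.1'
	option netmask '255.255.255.192'

config interface 'lan1'
	option ifname 'lan1'
	option proto 'static'
	option ipaddr '192.168.84.65'
	option netmask '255.255.255.192'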


:warning: downsides of this setup due to the absence of the soft bridge:

  • no cross traffic WLan <> Lan
  • no 802.1Q tag management with the bridge command
  • smaller subnet segments for each port
  • increased (initial) administrative effort for setting up

My understanding was as given above by tl71 and jow, but then bridge fdb should presumably indicate offload as per the document, which I am not seeing.
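For checking, assuming a br-lan bridge over the DSA ports and a reasonably recent iproute2:

# hardware-programmed fdb entries should carry the "offload" flag
bridge fdb show br br-lan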
