WRT3200ACM and DSA

Has this VLAN bug been reported to the maintainer?

Out of curiosity -

To my understanding the device is a dual-CPU-port design with a Marvell 88E6352 switch, so I am wondering whether each CPU switch-facing port (eth0 | eth2) is actually connected to the switch, or whether, as on the Turris Omnia without the Multi-CPU-DSA patch, only one CPU port is connected?

Ach well, seems to be a similar case

Just as a cross-reference:

wait what...
What is the correlation between the multi-CPU setup and the VLAN bug?

I am not sure what counts as the VLAN bug in this thread? VLAN tagging/untagging on the downstream ports (LAN x) works for me (on the Turris Omnia) with the bridge vlan command, and WAN-facing VLAN egress tagging works with the ip link command (or via the UCI interface settings).

What does not work for me without the Multi-CPU-DSA patch, aside from one CPU switch-facing port not being connected to the switch, is VLAN filtering (ip link set dev <device name> type bridge vlan_filtering 1).
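For completeness, the full commands look roughly like this (a sketch; br-lan and the port/VLAN names are only example identifiers, adjust to your setup):

```
# enable VLAN filtering on the bridge itself
ip link set dev br-lan type bridge vlan_filtering 1

# LAN-side tagging/untagging with the bridge command
bridge vlan add dev lan1 vid 100
bridge vlan add dev lan2 vid 100 pvid untagged

# WAN-facing egress tagging with the ip link command
ip link add link wan name wan.100 type vlan id 100
```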

Can you pass me this Multi-CPU-DSA patch?

[4] being the TOS patch applied for the Turris Omnia. The patch developer has been sympathetic to the request to upstream it to OpenWrt:

I fear it won't apply smoothly and needs some work. Sadly I have more pressing issues now to work on

[5] is the attempt to get it accepted upstream in the Linux kernel, but that seems to be going nowhere.

If it's the same patch that was proposed upstream, I think it got rejected...

Following through the kernel mailing list thread, it seems the discussion has fizzled out, but the patch was not rejected.

Nonetheless, the patch works for the TO.

And apparently the matter would have to be sorted eventually. I am not sure how many devices with such a Multi-CPU-DSA device tree are currently supported by OpenWrt (probably predominantly in the mvebu target tree), or how many more might spring up.

A kick at the cat


Can someone explain to me how DSA with only one CPU port can even work?
I thought one CPU port is used for the LAN ports and the other one for the WAN port.

The problem on the WRT1200 (and, I guess, on the other WRT* devices too) is that DSA uses the CPU port that has no hardware MAC assigned, so a random MAC is assigned on each reboot.
I tried to modify the DTS file to use the CPU port with the static MAC.
But that didn't work. The links come online (lan1-4, wan) but no communication is possible.
Only the TX packet counters increase.

At least I was able to disable one CPU port,
so the unused CPU port doesn't show up in the system anymore.

Can someone please explain why DSA can use one CPU port and drive all ports (lan1-4 + wan), but nothing works when using the other CPU port?

The CPU port that DSA uses runs SGMII; isn't that limited to 2 RX / 2 TX queues?
The other one uses RGMII, which is limited to 4 RX / 4 TX queues?

Are those queues the ones that the driver sets up and uses (and that can be viewed with tc)?

//edit
I found this table:

MII - Media Independent Interface - 100 Mbps
GMII - Gigabit MII - 1 Gbps (24 pins) ( 8TX - 8RX )
RGMII - Reduced GMII - 1 Gbps (12 pins) ( 4TX - 4RX )
SGMII - Serial GMII - 1 Gbps (8 pins) ( 2TX - 2RX )
XAUI - XGMII Extender - 10 Gbps (XY pins) ( 8TX - 8RX )
SPI-4.2 - System Packet Interface Level 4, Phase 2 ( 16TX - 16RX )

So it makes even less sense that both CPU Ports have 8 RX / 8 TX queues?

//edit 3
got it working with the help of dengqf6.
DSA now uses the CPU port with the hardware MAC, and the leftover CPU port is removed/disabled.
Next I will try to modify mvneta so it only uses 4 RX/TX queues (because the CPU port uses RGMII) and see how this goes...

//edit 4
Okaayyy, that also works. It needs some testing, but here are the patches:
Reduce tx/rx queues to 4 to match RGMII interface capabilities:

--- a/drivers/net/ethernet/marvell/mvneta.c
+++ b/drivers/net/ethernet/marvell/mvneta.c
@@ -637,8 +637,8 @@ static enum cpuhp_state online_hpstate;
 /* The hardware supports eight (8) rx queues, but we are only allowing
  * the first one to be used. Therefore, let's just allocate one queue.
  */
-static int rxq_number = 8;
-static int txq_number = 8;
+static int rxq_number = 4;
+static int txq_number = 4;
 
 static int rxq_def;

Switch CPU ports:
use the CPU port with the hardware MAC and the faster RGMII interface (instead of SGMII),
and disable the leftover CPU port.

--- a/arch/arm/boot/dts/armada-385-linksys.dtsi
+++ b/arch/arm/boot/dts/armada-385-linksys.dtsi
@@ -115,18 +115,6 @@
	};
};

-&eth2 {
-	status = "okay";
-	phy-mode = "sgmii";
-	buffer-manager = <&bm>;
-	bm,pool-long = <2>;
-	bm,pool-short = <3>;
-	fixed-link {
-		speed = <1000>;
-		full-duplex;
-	};
-};
-
&i2c0 {
	pinctrl-names = "default";
	pinctrl-0 = <&i2c0_pins>;
@@ -200,10 +188,10 @@
				label = "wan";
			};

-			port@5 {
-				reg = <5>;
+			port@6 {
+				reg = <6>;
				label = "cpu";
-				ethernet = <&eth2>;
+				ethernet = <&eth0>;

				fixed-link {
					speed = <1000>;


The same way it works on all the (usually cheaper) routers which only have a single CPU port to the switch to begin with: by using VLANs to distinguish WAN and LAN traffic over the (trunked) CPU port, and splitting and untagging it in the switch fabric.
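As a sketch of that trunking scheme in swconfig-style UCI (port numbers are only an example; 5t marks the tagged CPU port that carries both VLANs, while the switch untags them towards the LAN and WAN ports):

```
config switch_vlan
        option device 'switch0'
        option vlan '1'
        option ports '0 1 2 3 5t'

config switch_vlan
        option device 'switch0'
        option vlan '2'
        option ports '4 5t'
```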

Hmm... so DSA "combines" both CPU Ports?

The RX/TX queues of the various (*)MII implementations... are those the actual "hardware lanes"?
Then it would make sense to have RGMII with 4 RX/TX queues attached to the switch (because of 4 ports)
and SGMII attached to the WAN interface (2 RX/TX queues vs. 1 interface).
Are those the queues that the driver utilizes?

Is it better to reduce the "driver queues" to 2, so each CPU can handle its own queue?

The WRT routers use two CPU ports connected to the switch for all of the switched ports. Meaning that the WAN and LAN ports use the same CPU port(s).

The Turris Omnia on the other hand has that + an extra ethernet interface dedicated to WAN (5 LAN + 1 WAN).

I have absolutely no idea about that queue stuff. There are 5 ports on the WRT switch, not 4.

The Turris people have a dual CPU DSA patch that can and probably should be imported here.


hmm okay..

I think the queues used by (*)MII are not the actual queues used by the NIC.

But the (8) queues are pretty much useless anyway,
because it is not possible to configure them with ethtool:
ethtool -l eth0

Channel parameters for eth0:
Cannot get device channel parameters
: Not supported

This means that your driver has not implemented the ethtool get_channels operation. This could be because the NIC doesn’t support adjusting the number of queues, doesn’t support RSS / multiqueue, or your driver has not been updated to handle this feature.

ethtool -x eth0 (RX flow indirection table)

RX flow hash indirection table for eth0 with 4 RX ring(s):
    0:      0
RSS hash key:
Operation not supported
RSS hash function:
    toeplitz: on
    xor: off
    crc32: off

ethtool -n eth0 (RX network flow classification)

4 RX rings available
rxclass: Cannot get RX class rule count: Not supported
RX classification rule retrieval failed

ethtool -k eth0 | grep 'ntuple'
ntuple-filters: off [fixed]

So configuring RX/RSS queues doesn't work?

I currently have a build running with only 2 queues (well, actually 4 queues: 2 RX / 2 TX), one queue for each CPU.
Then I enable XPS and assign one queue to each CPU.
And I don't use RPS (I don't see any significant change in CPU utilization with it anyway); I have read that it can introduce unwanted latency and that RSS is the better choice.
But unfortunately it isn't possible to configure RSS. (I think this is also the reason why one CPU core gets maxed out all the time under heavy traffic load.)

//edit
2 queues also work fine.

One "strange" thing to add...

With the packet steering script disabled and 4 queues enabled, the XPS mapping is as follows:
Queue/CPU Mask:
1:1
2:2
3:0
4:1
Why is XPS automatically disabled for one Queue?

With the packet steering script disabled and 2 queues enabled, the XPS mapping is as follows:
Queue/CPU Mask:
1:1
2:2

Interesting...

//edit

qdisc mq 0: root
 Sent 574394796 bytes 1915770 pkt (dropped 0, overlimits 0 requeues 5)
 backlog 0b 0p requeues 5
qdisc fq_codel 0: parent :2 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 4Mb ecn
 Sent 3364472 bytes 15081 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 0: parent :1 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 4Mb ecn
 Sent 571030324 bytes 1900689 pkt (dropped 0, overlimits 0 requeues 5)
 backlog 0b 0p requeues 5
  maxpacket 1522 drop_overlimit 0 new_flow_count 4 ecn_mark 0
  new_flows_len 0 old_flows_len 0

Hmm...
The first queue shows: 15081 packets transmitted
maxpacket: 0 (max packet size?) how can this be 0?

The second queue shows: 1900689 packets transmitted
maxpacket: 1522, makes more sense...
new_flow_count: 4, seems a bit low to me....

And when I search the forum for some tc output, for example here:

It's the same; most queues show a maxpacket value of 0.

I will try a build with 1 queue; with swconfig that didn't work, maybe it works with DSA...
//edit
Nope.. doesn't work. Interfaces come up but no communication is possible.
The interface(s) packet counter shows a large amount of RX packets..
Hmm...

@nitroshift
Why did you delete your post?
Should the bm parameters be ported over or not?

//edit
I was thinking about a solution to make better use of the hardware queues..
mq is a classful qdisc.

tc -s -d class show dev eth0
class mq 1:1 root leaf 110:
 Sent 248443426 bytes 241991 pkt (dropped 0, overlimits 0 requeues 1)
 backlog 0b 0p requeues 1
class mq 1:2 root leaf 120:
 Sent 356548 bytes 1650 pkt (dropped 0, overlimits 0 requeues 1)
 backlog 0b 0p requeues 1

So in theory it should be possible to filter/classify traffic...

TC=$(which tc)
IPT=$(which iptables)
IPT6=$(which ip6tables)

"${TC}" qdisc replace dev eth0 handle 1 root mq
"${TC}" qdisc replace dev eth0 parent 1:1 handle 110 fq_codel quantum 300 target 50us interval 1ms noecn
"${TC}" qdisc replace dev eth0 parent 1:2 handle 120 fq_codel quantum 300 target 50us interval 1ms noecn

"${TC}" filter del dev eth0 parent 110: > /dev/null 2>&1
"${TC}" filter del dev eth0 parent 120: > /dev/null 2>&1

"${TC}" filter add dev eth0 parent 110: protocol all prio 10 u32 match u32 0 0 flowid 1:1 > /dev/null 2>&1
"${TC}" filter add dev eth0 parent 120: protocol all prio 10 u32 match u32 0 0 flowid 1:2 > /dev/null 2>&1

"${TC}" filter add dev eth0 parent 110:0 protocol all prio 1 u32 match u32 0 0 action connmark continue > /dev/null 2>&1
"${TC}" filter add dev eth0 parent 120:0 protocol all prio 1 u32 match u32 0 0 action connmark continue > /dev/null 2>&1

"${TC}" filter add dev eth0 parent 110:0 protocol all prio 5 handle 2 fw flowid 1:2 > /dev/null 2>&1
"${TC}" filter add dev eth0 parent 120:0 protocol all prio 5 handle 1 fw flowid 1:1 > /dev/null 2>&1

### IPv4
"${IPT}" -N MQ -t mangle > /dev/null 2>&1 \
  || "${IPT}" -F MQ -t mangle

"${IPT}" -C MQ -t mangle -m connmark ! --mark 0 -j RETURN > /dev/null 2>&1 \
	|| "${IPT}" -A MQ -t mangle -m connmark ! --mark 0 -j RETURN
	
"${IPT}" -t mangle -C MQ -m statistic --mode nth --every 2 --packet 0 -j CONNMARK --set-mark 1 > /dev/null 2>&1 \
	|| "${IPT}" -t mangle -A MQ -m statistic --mode nth --every 2 --packet 0 -j CONNMARK --set-mark 1

"${IPT}" -t mangle -C MQ -m statistic --mode nth --every 2 --packet 1 -j CONNMARK --set-mark 2 > /dev/null 2>&1 \
	|| "${IPT}" -t mangle -A MQ -m statistic --mode nth --every 2 --packet 1 -j CONNMARK --set-mark 2

"${IPT}" -t mangle -C POSTROUTING -m conntrack --ctstate NEW -j MQ > /dev/null 2>&1 \
	|| "${IPT}" -t mangle -I POSTROUTING -m conntrack --ctstate NEW -j MQ

### IPv6
"${IPT6}" -N MQ -t mangle > /dev/null 2>&1 \
	|| "${IPT6}" -F MQ -t mangle

"${IPT6}" -C MQ -t mangle -m connmark ! --mark 0 -j RETURN > /dev/null 2>&1 \
	|| "${IPT6}" -A MQ -t mangle -m connmark ! --mark 0 -j RETURN
	
"${IPT6}" -t mangle -C MQ -m statistic --mode nth --every 2 --packet 0 -j CONNMARK --set-mark 1 > /dev/null 2>&1 \
	|| "${IPT6}" -t mangle -A MQ -m statistic --mode nth --every 2 --packet 0 -j CONNMARK --set-mark 1

"${IPT6}" -t mangle -C MQ -m statistic --mode nth --every 2 --packet 1 -j CONNMARK --set-mark 2 > /dev/null 2>&1 \
	|| "${IPT6}" -t mangle -A MQ -m statistic --mode nth --every 2 --packet 1 -j CONNMARK --set-mark 2

"${IPT6}" -t mangle -C POSTROUTING -m conntrack --ctstate NEW -j MQ > /dev/null 2>&1 \
	|| "${IPT6}" -t mangle -I POSTROUTING -m conntrack --ctstate NEW -j MQ
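Since CONNMARK --set-mark is non-terminating, every new connection traverses both statistic rules, so their packet counters stay in lockstep: even counts match `--packet 0` (mark 1), odd counts match `--packet 1` (mark 2). A quick sketch of the resulting round-robin (illustrative only, not the kernel implementation):

```shell
# Simulate the marks the MQ chain assigns to the first 6 new connections.
marks=""
i=0
while [ "$i" -lt 6 ]; do
    if [ $((i % 2)) -eq 0 ]; then
        marks="$marks 1"    # rule: --every 2 --packet 0 -> CONNMARK 1
    else
        marks="$marks 2"    # rule: --every 2 --packet 1 -> CONNMARK 2
    fi
    i=$((i + 1))
done
echo "connmarks:$marks"     # prints "connmarks: 1 2 1 2 1 2"
```

So new connections alternate strictly between the two fw marks, which is what spreads them over the two fq_codel leaves.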

//edit 3
updated once more...
classify doesn't seem to work either...
Next bet, skbedit...
// edit 4
skbedit also not working...
Maybe it only works with mqprio qdisc but mqprio also doesn't work (also tried hw 0).
So back to the connmark match approach...
The hierarchy looks like this:

  • root/parent 1:0
  • mq class 1:1 -> leaf 110: fq_codel
  • mq class 1:2 -> leaf 120: fq_codel

It is not possible to attach filters to root/parent 1: or to class 1:1 / 1:2; it is only possible to attach filters to 110: / 120:.
Matching for fw mark 1 in 110: makes no sense because that traffic is already in there.
So match for fw mark 2 in 110: and redirect it to 1:2, and the other way around...
But actually I'm not sure if this works properly...

Buffer manager is defined for eth0 a few lines up in armada-385-linksys.dtsi, there's no need to define it again.
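For reference, the &eth0 node in armada-385-linksys.dtsi looks roughly like this (a sketch; verify the pool indices and phy-mode against your tree):

```
&eth0 {
	status = "okay";
	phy-mode = "rgmii-id";
	buffer-manager = <&bm>;
	bm,pool-long = <0>;
	bm,pool-short = <1>;
};
```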


@nitroshift
Okay, thank you!

So does XPS/RPS also need driver support?

Without proper "select queue function", like here:
https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/tree/drivers/net/ethernet/intel/ixgbe/ixgbe_main.c?h=v5.4.40
Line: 8502
RSS, RPS/XPS will never work?

//edit
Hmm...
Why not use dev_pick_tx_cpu_id for ndo_select_queue ?

cat target/linux/mvebu/patches-5.4/300-mvneta-tx-queue-workaround.patch 
--- a/drivers/net/ethernet/marvell/mvneta.c
+++ b/drivers/net/ethernet/marvell/mvneta.c
@@ -4342,6 +4342,7 @@ static const struct net_device_ops mvnet
        .ndo_fix_features    = mvneta_fix_features,
        .ndo_get_stats64     = mvneta_get_stats64,
        .ndo_do_ioctl        = mvneta_ioctl,
+       .ndo_select_queue    = dev_pick_tx_cpu_id,
 };

//edit

 tc -s -d qdisc show dev eth0
qdisc mq 1: root
 Sent 14351860806 bytes 14620840 pkt (dropped 0, overlimits 0 requeues 375)
 backlog 0b 0p requeues 375
qdisc fq_codel 120: parent 1:2 limit 1024p flows 16384 quantum 1514 target 400us interval 8.0ms memory_limit 4Mb
 Sent 9603145320 bytes 8761895 pkt (dropped 0, overlimits 0 requeues 250)
 backlog 0b 0p requeues 250
  maxpacket 1522 drop_overlimit 0 new_flow_count 241 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 110: parent 1:1 limit 1024p flows 16384 quantum 1514 target 400us interval 8.0ms memory_limit 4Mb
 Sent 4748715486 bytes 5858945 pkt (dropped 0, overlimits 0 requeues 125)
 backlog 0b 0p requeues 125
  maxpacket 1522 drop_overlimit 0 new_flow_count 112 ecn_mark 0
  new_flows_len 0 old_flows_len 0
root@openwrt:~# cat /proc/interrupts
           CPU0       CPU1
...................................
 37:   10319612    6049869      MPIC   8 Level     eth0
...................................

CPU0 still has twice as many interrupts as CPU1, but I think that has something to do with DSA...?
But this fixed the weird lag on my WiFi.
For example, when browsing on my phone, the loading bar almost always got stuck at ~80% for a few seconds (even though the website was fully loaded).


Master has a new backport that enables GRO for DSA.
I'm not sure if it actually does anything useful for mvebu, because mvneta uses napi_gro_receive() instead of napi_gro_frags().
But I re-enabled GRO to see if it makes any difference.

 tc -s -d qdisc show dev eth0
qdisc mq 1: root
 Sent 43798515470 bytes 34595471 pkt (dropped 0, overlimits 0 requeues 1165)
 backlog 0b 0p requeues 1165
qdisc fq_codel 120: parent 1:2 limit 10240p flows 1024 quantum 1514 target 1.5ms interval 15.0ms memory_limit 4Mb
 Sent 21339883886 bytes 16979546 pkt (dropped 0, overlimits 0 requeues 673)
 backlog 0b 0p requeues 673
  maxpacket 3596 drop_overlimit 0 new_flow_count 665 ecn_mark 0
  new_flows_len 0 old_flows_len 0
qdisc fq_codel 110: parent 1:1 limit 10240p flows 1024 quantum 1514 target 1.5ms interval 15.0ms memory_limit 4Mb
 Sent 22458631584 bytes 17615925 pkt (dropped 0, overlimits 0 requeues 492)
 backlog 0b 0p requeues 492
  maxpacket 1522 drop_overlimit 0 new_flow_count 481 ecn_mark 0
  new_flows_len 0 old_flows_len 0

So one queue shows a maxpacket value of 3596, I guess that means GRO is active.
But the other queue shows the "normal" maxpacket value of 1522.

what's the difference between _frags and _receive?

Hi,
I tried the latest OpenWrt release r13601-d93da0d016 and converted my swconfig configuration to DSA.
I am having problems with the DSA configuration.

  1. swconfig setup
sw.port | 0 | 1 | 2 | 3 | 4 | 5 | 6 | vlan id | vid |
-----------------------------------------------------
vlan201 | u | u | t | t |   | t |   | 1       | 201 |
vlan202 |   |   | t | t |   | t |   | 3       | 202 |
vlan204 |   |   | t | t |   | t |   | 4       | 204 |
wan     |   |   |   |   | u |   | t | 2       |     |
root@linksys0:~# cat /etc/config/network

config interface 'vlan201'
        option type 'bridge'
        option ifname 'eth0.201'
        option proto 'static'
        option ipaddr '10.254.201.1'
        option netmask '255.255.255.0'

config interface 'vlan202'
        option type 'bridge'
        option ifname 'eth0.202'
        option proto 'static'
        option ipaddr '10.254.202.1'
        option netmask '255.255.255.0'

config interface 'vlan204'
        option type 'bridge'
        option ifname 'eth0.204'
        option proto 'static'
        option ipaddr '10.254.204.1'
        option netmask '255.255.255.0'

config interface 'wan'
        option 'ifname' 'eth1.2'
        option _orig_ifname 'eth1.2'
        option _orig_bridge 'false'
        option 'proto' 'pppoe'
        option 'username' 'user'
        option 'password' 'password'
        option 'timeout' '10'

config switch_vlan
        option device 'switch0'
        option vlan '1'
        option ports '0 1 2t 3t 5t'
        option vid '201'

config switch_vlan
        option device 'switch0'
        option vlan '2'
        option ports '4 6t'
        option vid '2'

config switch_vlan
        option device 'switch0'
        option vlan '3'
        option ports '2t 3t 5t'
        option vid '202'

config switch_vlan
        option device 'switch0'
        option vlan '4'
        option ports '2t 3t 5t'
        option vid '204'
  2. DSA setup 1 (not fully working)
VID | lan0 | lan1 | lan2 | lan3 | wan  |
----------------------------------------
201 | t    | t    | u    | u    |      |
202 | t    | t    |      |      |      |
204 | t    | t    |      |      |      |
2   |      |      |      |      | u    |
root@linksys0:~# cat /etc/config/network 

config interface 'vlan201'
	option type 'bridge'
	option ifname 'lan1.201 lan2.201 lan3 lan4'
	option proto 'static'
	option ipaddr '10.254.201.1'
	option netmask '255.255.255.0'

config interface 'vlan202'
	option type 'bridge'
	option ifname 'lan1.202 lan2.202'
	option proto 'static'
	option ipaddr '10.254.202.1'

config interface 'vlan204'
	option type 'bridge'
	option ifname 'lan1.204 lan2.204'
	option proto 'static'
	option ipaddr '10.254.204.1'
	option netmask '255.255.255.0'

config interface 'wan'
	option 'ifname' 'wan'
	option 'proto' 'pppoe'
	option 'username' 'user'
	option 'password' 'password'
	option 'timeout' '10'

root@linksys0:~# cat /etc/rc.local 
# Put your custom commands here that should be executed once
# the system init finished. By default this file does nothing.

#### enable vlan filtering
ip link set dev br-vlan201 type bridge vlan_filtering 1
ip link set dev br-vlan202 type bridge vlan_filtering 1
ip link set dev br-vlan204 type bridge vlan_filtering 1

#### set vlans
bridge vlan add dev lan3 vid 201 pvid untagged
bridge vlan add dev lan4 vid 201 pvid untagged
ip link set br-vlan201 type bridge vlan_default_pvid 201
ip link set br-vlan202 type bridge vlan_default_pvid 202
ip link set br-vlan204 type bridge vlan_default_pvid 204

#### clear out vlan 1
bridge vlan del dev lan3 vid 1
bridge vlan del dev lan4 vid 1

I have a problem with communication between the untagged and tagged interfaces (lan1.201 and lan3) in bridge vlan201.
This is a tcpdump on OpenWrt while pinging something connected to lan1.201 from a PC connected to lan3.

root@linksys0:~# tcpdump -n -i any host 10.254.201.2
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on any, link-type LINUX_SLL (Linux cooked v1), capture size 262144 bytes
12:12:28.018672 ARP, Request who-has 10.254.201.2 tell 10.254.201.46, length 46
12:12:28.018672 ARP, Request who-has 10.254.201.2 tell 10.254.201.46, length 46
12:12:29.032057 ARP, Request who-has 10.254.201.2 tell 10.254.201.46, length 46
12:12:29.032057 ARP, Request who-has 10.254.201.2 tell 10.254.201.46, length 46
12:12:30.045341 ARP, Request who-has 10.254.201.2 tell 10.254.201.46, length 46
12:12:30.045341 ARP, Request who-has 10.254.201.2 tell 10.254.201.46, length 46
^C

Output:

root@linksys0:~# bridge vlan
port              vlan-id  
lan4              201 PVID Egress Untagged
lan3              201 PVID Egress Untagged
br-vlan201        201 PVID Egress Untagged
lan1.201          201 PVID Egress Untagged
lan2.201          201 PVID Egress Untagged
br-vlan202        202 PVID Egress Untagged
lan1.202          202 PVID Egress Untagged
lan2.202          202 PVID Egress Untagged
br-vlan204        204 PVID Egress Untagged
lan1.204          204 PVID Egress Untagged
lan2.204          204 PVID Egress Untagged
root@linksys0:~# 
root@linksys0:~# brctl show
bridge name	bridge id		STP enabled	interfaces
br-vlan201		7fff.6038e0cd87c0	no		lan1.201
							lan2.201
							lan3
							lan4
br-vlan202		7fff.6038e0cd87c0	no		lan1.202
							lan2.202
br-vlan204		7fff.6038e0cd87c0	no		lan1.204
							lan2.204
root@linksys0:~# 
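To narrow down where the ARP requests get lost, it may help to capture on the individual interfaces instead of -i any (a sketch):

```
tcpdump -e -n -i lan3 arp         # does the request arrive from the PC?
tcpdump -e -n -i br-vlan201 arp   # does it pass through the bridge?
tcpdump -e -n -i lan1.201 arp     # does it leave tagged towards lan1?
```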
  3. DSA setup 2 (router is not booting)

This setup is based on the setup provided by dengqf6 in PR2942.

The difference from DSA setup 1 is that there is only one bridge.
I tried a configuration based on that setup, but the router is dead (bricked) after reboot.
I don't have a serial cable to see where the problem is.
I saw that the setup lives in the /etc/hotplug.d/iface/21-lan file (not in /etc/rc.local).
Ports lan1 and lan2 are trunk ports (with VLANs 201, 202, and 204), and ports lan3 and lan4 are untagged ports in VLAN 201.
I can boot the previous partition with the old swconfig configuration.

# vi /etc/config/network
config interface 'lan'
        option type 'bridge'
        option ifname 'lan1 lan2 lan3 lan4'
        option proto 'none'

config interface 'vlan201'
        option ifname '@lan.201'
        option proto 'static'
        option ipaddr '10.254.201.1'
        option netmask '255.255.255.0'

config interface 'vlan202'
        option ifname '@lan.202'
        option proto 'static'
        option ipaddr '10.254.202.1'
        option netmask '255.255.255.0'

config interface 'vlan204'
        option ifname '@lan.204'
        option proto 'static'
        option ipaddr '10.254.204.1'
        option netmask '255.255.255.0'

# vi /etc/hotplug.d/iface/21-lan
#!/bin/sh
[ $INTERFACE = lan -a $ACTION = ifup ] || exit 0

# enable VLAN filtering
ip link set dev br-lan type bridge vlan_filtering 1

# clear out vlan 1
bridge v del dev lan1 vid 1
bridge v del dev lan2 vid 1
bridge v del dev lan3 vid 1
bridge v del dev lan4 vid 1
bridge v del dev br-lan self vid 1

# set vlans lan1
bridge v add dev lan1 vid 201
bridge v add dev lan1 vid 202
bridge v add dev lan1 vid 204

# set vlans lan2
bridge v add dev lan2 vid 201
bridge v add dev lan2 vid 202
bridge v add dev lan2 vid 204

# set vlans lan3
bridge v add dev lan3 vid 201 pvid untagged

# set vlans lan4
bridge v add dev lan4 vid 201 pvid untagged

# set vlans cpu port
bridge v add dev br-lan self vid 201 pvid untagged
bridge v add dev br-lan self vid 202
bridge v add dev br-lan self vid 204

I hope somebody can find/locate the problem.
I am not sure where the problem is, and whether the configuration in /etc/hotplug.d/iface/21-lan is correct or whether it should also go into /etc/rc.local.

Best regards!