OpenWrt 21.02.0 third release candidate

Works OK on Netgear R7800. This router does the typical routing stuff (with native IPv6), an extra VLAN for less-trusted devices, a PPTP VPN against censorship, policy routing with mwan3 (so that I can decide what goes through the VPN), Nginx with a static site, ksmbd, and Transmission.

This bug is minor, and I'm not sure whether it is an issue in current master, 21.02 rc3, or both. It is definitely an issue in the current snapshot, and I suspect it happens in both, but after spending my day converting my main ER-X home gateway from 19.07.7 swconfig to DSA on the current r17089 snapshot, I'm not about to repeat that process with rc3 anytime soon.

When using the LuCI menus to set up the firewall rule below, the UDP and TCP protocols are both selected by default in the LuCI pull-down menu. When the rule is saved and applied, LuCI does not save the last two "list" lines below to the firewall config file. There is a workaround: LuCI can be forced to save these lines by deselecting one of the protocols, saving and applying, then selecting both protocols again and saving and applying.

config rule
        option target 'ACCEPT'
        option src 'gst'
        option dest_port '53'
        option name 'Allow-gst-DNS'
        list proto 'tcp'
        list proto 'udp'

Perhaps the two "list proto...." lines are superfluous if that is what OpenWrt assumes when no protocol is specified, but even if so, it would be less confusing if LuCI saved these lines to the firewall config file regardless.
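
If anyone wants to skip the LuCI workaround, the missing lines can also be added straight from the CLI with uci. A minimal sketch, assuming the rule was just created and is therefore the last rule section in /etc/config/firewall:

    # append both protocols to the most recently added rule
    uci add_list firewall.@rule[-1].proto='tcp'
    uci add_list firewall.@rule[-1].proto='udp'
    uci commit firewall
    /etc/init.d/firewall restart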

Seems to work OK all around, except network speeds are notably slower. See: https://forum.openwrt.org/t/openwrt-21-02-0-third-release-candidate/99363/125?u=eginnc

I also notice SQM is a bit slower on the ER-X with the current snapshot (assumed comparable to 21.02 rc3 at this point) than it was with 19.07.7. My ISP is provisioned at ~230/12 Mbps. On a good day with 19.07 I could get the high end of 165-190 Mbps out of the MT7621. With the current snapshot I'm only getting 130-150 Mbps, so that is a bit of a disappointment. Tolerable, but looking forward to better.

Hello, could you give me the link to download the firmware .bin? I have a Linksys WRT32X.

Upgraded TP-Link RE200 v4 to RC3. The 2.4GHz band works great, but the 5GHz band suffers from low TX power (MT7610EN).

@NeMe_FuUuRyyyyY It's at the top of this post; not sure why you're asking for it. Just use the firmware selector and flash either Factory or Upgrade, depending on whether you are coming from the OEM partition or an OpenWrt partition:

https://firmware-selector.openwrt.org/?version=21.02-SNAPSHOT&target=mvebu%2Fcortexa9&id=linksys_wrt32x

@Nick01 if you're using a Divested build, keep in mind he applies some security 'hardening' patches (mentioned on his community thread) that come with some performance loss. You may want to grab the latest 21.02 snapshot and compare performance if that's an issue on your setup.

I'm getting less than half the wired throughput on my MT7621 ER-X that I was getting with 19.07. This is with software flow offloading, packet steering, and irqbalance all in use. I've tried various combinations of disabling these features as well, with the result being slightly less performance. The ER-X provides DHCP to the APs in the network for 4 VLANs, each on its own subnet (Guest, IoT, etc.), and also CAKE SQM for WAN/WAN6.
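
For reference, the first two of those knobs map to UCI options; a sketch of how I understand them on 21.02 (your section indices may differ):

    # software flow offloading lives in the firewall defaults section
    uci set firewall.@defaults[0].flow_offloading='1'
    # packet steering is a global network option
    uci set network.globals.packet_steering='1'
    uci commit firewall
    uci commit network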

Between an EA8500 AP (ipq8064) connected by Ethernet backhaul to an ER-X (MT7621) gateway, iperf3 reports ~288 Mbps (both directions) with the ER-X as the server. That's depressing.

Between the ER-X and EA8500, with the EA8500 as the server, iperf3 reports ~820 Mbps (both directions). Between the EA8500 and an EA6350v3 (ipq4018) AP connected by Ethernet backhaul through the ER-X, again with the EA8500 as the server, iperf3 reports a maxed-out connection at ~935 Mbps (both directions), and ~820 Mbps (both directions) between them with the EA6350v3 as the server. The EA6350v3 and EA8500 both have software flow offloading enabled, but no irqbalance and no packet steering.
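
For anyone wanting to reproduce numbers like these, the tests boil down to something like the following; a sketch, with 192.168.1.1 standing in for whichever device runs the server:

    # on the server device
    iperf3 -s
    # on the client: forward direction, then reverse (-R) to cover both directions
    iperf3 -c 192.168.1.1
    iperf3 -c 192.168.1.1 -R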

Is this just how things will be for MT7621 with 21.02, DSA, and the 5.4 kernel?

IIRC someone said that flow offloading isn't working anymore in 21.02 for the MT7621 SoC.

Same here, but I thought that it was just hardware flow offloading that wasn't working in 21.02 for MT7621, and that it would not be restored until the kernel was updated to 5.10 or later. It could be that both hardware and software offload are lost in 21.02, though; that would explain the performance drop I'm seeing. FWIW, I didn't use hardware flow offload in either 19.07 or 21.02.

VLANs break MT7621's hardware (and software?) flow offloading in 21.02 + kernel 5.4.

I had a quick look at DSA. Besides some benefits there is also a drawback, which causes the throughput issues mentioned a few times. With DSA, each Ethernet frame requires a special switch tag to be inserted, similar to VLANs. So all frames passing the CPU need additional processing for the switch tag, hence the lower throughput. DSA might not be the best idea for older routers with a single-core CPU.
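
If you are curious which tagging protocol your device uses, the kernel exposes it via sysfs; a sketch, assuming eth0 is the DSA master interface:

    # prints the tag type, e.g. "mtk" on MT7621 devices
    cat /sys/class/net/eth0/dsa/tagging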

MT7621 (at least mine) is also affected by the mtk_soc_eth watchdog timeout after r11573; not sure I can justify moving off 19.07.3...

Thanks for all the MT7621 reports so far!

Interesting. Do you know how many bytes comprise this DSA switch tag? I'm thinking the per-packet overhead in the SQM/QoS link layer adaptation should be increased by the size of the DSA switch tag?

As an aside, the MT7621 has 2 MIPS cores (4 threads), but they obviously are not enough to overcome the DSA overhead. Bummer.

The size of the switch tag differs based on the vendor and tag type:

  • Broadcom: 4 bytes
  • Marvell DSA: 4 bytes
  • Marvell EDSA: 8 bytes
  • Qualcomm: 2 bytes
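
If you want to experiment with the overhead question from above, the knob lives in /etc/config/sqm. A sketch with assumed values; note that the switch tag only exists on the link between the CPU and the switch and is stripped before frames leave the port, so whether it belongs in the shaper overhead at all is debatable:

config queue 'eth1'
        option interface 'eth1'
        option linklayer 'ethernet'
        # assumed: whatever overhead you used before, plus the tag
        # size (e.g. +4 bytes for the MT7621's mtk tag)
        option overhead '22'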

Yes, I had this issue, to the extent that I abandoned WiFi on my C7 v2 and went to a commercial AP. As of RC2 I tried it again and have been using it for a month without any issues, except that the channel survey tool causes the 5G radio to crash and recover. I also had the 5G radio decide to go into client isolation mode by itself with nothing showing in the logs; that seems to be a one-off though. The 2.4G has been solid. I am a bit nervous about going to RC3, but I do like an adventure :roll_eyes:

The world has simply moved on. 100Mbit internet is starting to feel like the low-speed standard in the industrialized world today.
My feeling on my Linksys WRT3200ACM with DSA is that data flows a lot faster with 21.02 than with 19.07 or older. My EdgeRouter 4 didn't have any 19.07 experience, but it runs very fast on 21.02 with DSA.

If your hardware can't handle the additional bytes that DSA adds at the internet speeds the world demands, then you are on the very edge of internet connection collapse and really should be thinking about a hardware upgrade, because the world isn't stopping on this internet thing, that I can promise you.

Instead of basing the discussion on individual feelings, I suggest measured numbers for latency, maximum throughput capacity, and packets per second. Those are comparable.

It is 98Mbit/800Mbit on both. It isn't the speed itself that is smoother or faster; it is the lagging that is less noticeable.

So now, with the facts, do you think this will stop DSA in the world?

This whole forum is only based on individual feelings like “I successfully installed it”! So what?

After installing 21.02-RC3, I found that my browsing experience was miserable on both wireless and wired connections. There were large delays in displaying content, and many sites failed completely. After some investigation, I found that the issue appeared to be with DNS, and in particular with DNS over IPv6. DNS over IPv4 is fine.

As a workaround, I disabled the WAN6 interface, and performance was great again. I'm still using IPv6 internally, but not externally.
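
For anyone wanting to try the same workaround from the CLI, this is roughly what I did; a sketch, assuming the upstream interface is named wan6 as in the default config:

    # take the interface down now
    ifdown wan6
    # and keep it from coming back up at boot
    uci set network.wan6.auto='0'
    uci commit network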

Are there any commands that I can execute to provide further information? I am comfortable in the CLI, but not an expert.

What I see from ip addr:

2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
    link/ether 60:e3:27:c8:4a:11 brd ff:ff:ff:ff:ff:ff
    inet xxx.xxx.120.28/18 brd xxx.xxx.127.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 xxxx:xxxx:c002::1:b923/128 scope global dynamic noprefixroute
       valid_lft 3586sec preferred_lft 3586sec
    inet6 fe80::62e3:27ff:fec8:4a11/64 scope link
       valid_lft forever preferred_lft forever

From LuCI:

Protocol: DHCPv6 client
Uptime: 0h 0m 18s
MAC: 60:E3:27:C8:4A:11
RX: 326.29 MB (44339348 Pkts.)
TX: 2.76 GB (20814158 Pkts.)
IPv6: xxxx:xxxx:c002::1:b923/128
IPv6-PD: xxxx:xxxx:c802:9c2a::/64

What I suspect is the issue: my ISP is providing short Valid and Preferred IPv6 lifetimes. The Valid lifetime is the same as the Preferred, when Valid is normally longer than the Preferred lifetime.
This interacts with changes to odhcpd for 21.02-RC3 regarding the lifetime of IPv6 addresses and leases. All IPv6 addresses, both the externally routed ones and the internal fda6:xxxx addresses, are given mostly 30-minute leases, occasionally 60-minute leases, and sometimes no IPv6 lease is allocated at all.
After disabling the WAN6 interface, all devices are given 14-day leases on the fda6:xxxx addresses.

The ISP (in Singapore) provides no means to report this issue - surprise, surprise.
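
In case it helps anyone chasing the same problem, the lifetimes can be watched from the router itself; a sketch, assuming br-lan is the LAN bridge as in the default config:

    # show valid_lft/preferred_lft on the LAN side as leases are renewed
    ip -6 addr show dev br-lan
    # follow odhcpd activity in the system log
    logread -e odhcpd -f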

We won’t find consensus if we only discuss individual feelings.

I could understand "the lagging is less noticeable" if I could see measured numbers for the perceived lag. For example: measured latency for specific router tasks on one release compared to measured latency for the same task, in the same configuration, on another release.

This would be a fact based discussion.

Exactly. I tested my WRT32X a bit with the RC builds (21.02-snapshot has had a ton of bug fixes over the last few weeks, mostly in LuCI, so it'll only improve).

Going from 19.07.7 to 21.02-snapshot, my performance is roughly the same. I do use irqbalance now, mainly because it moves WiFi from CPU0 over to CPU1. SQM Cake on a 500Mbit down / 35Mbit up cable modem maxes out the connection when set to 95% throughput on up/download. My ping spread is in the 1-5ms range under max load, with A+ bufferbloat and A+ quality on the dslreports speed test. Haven't gone any further, but it's certainly working well, with no stability problems over the last couple of weeks. It's good news, because I'd rather use upstream kernel code whenever possible, DSA included.
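
In case anyone wants to replicate the irqbalance part, this is roughly all it takes on 21.02; a sketch, as the package ships disabled by default:

    opkg update && opkg install irqbalance
    uci set irqbalance.irqbalance.enabled='1'
    uci commit irqbalance
    /etc/init.d/irqbalance enable
    /etc/init.d/irqbalance start
    # sanity check: interrupt counts should spread across CPUs over time
    cat /proc/interrupts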