No HW_OFFLOAD byte/packet accounting on MT7621

Hey y'all,

I have an ER-X as my main router/firewall with OpenWrt firmware. OpenWrt is quite awesome when compared to the stock firmware. I'm routing a couple of VLANs and my internet connection is 1Gbps symmetrical, so to keep up I enabled HW offloading.
On the MT7621, this messes up the per-connection packet/byte accounting (as well as per-interface accounting), since none of the HW_OFFLOADed packets are accounted for. It seems there was recently a patch that enables SoC-based per-flow accounting for the MT7622 and a few others. Being the optimist that I am (not), I did a custom build with this enabled for my ER-X. Unfortunately it didn't work at all; while HW offloading still worked just fine, I got a bunch of "MIB table busy" messages, presumably because the '21 doesn't have the right features or register layout for the feature as implemented.
Now, I've looked high and low for any kind of programming information for the MT7621, but I'm not finding anything of substance. I pestered a couple of our intrepid developers, who pointed me at other documentation and some patches, none of which ultimately helped.
So I guess I'm at an impasse with this, alas, unless someone can point me in a profitable direction.

I really like the ER-X for how inexpensive, tiny and capable it is, and because I can PoE-power it, but I'd really like to be able to see accurate netflow information.
Are there other little, inexpensive router devices that can deal with 1Gbps+ and be powered with PoE?

Siggi

1 Like

Someone else noted this behavior in another thread, but we were not aware that a patch was being submitted to rectify the issue.

  • You mentioned MIB - so I assume SNMP?
  • Have you ever tried observing the traffic with softflowd?

Click this link: https://hardkr.oss-cn-shenzhen.aliyuncs.com/media%2Fdownloadpage%2Fresource%2F2021%2F04%2Fc86ad119ad76b2a31dfc6317074eb6e2.pdf?security-token=CAIS6gF1q6Ft5B2yfSjIr5b%2FLcjhhqpbhJqeZxX9oDMWX8lkgYnxuDz2IHlFfnRsAu0dv%2FkxlWBW5%2FoYlqVoRoReREvCKM1565kPEaUpsWKY6aKP9rUhpMCPOwr6UmzWvqL7Z%2BH%2BU6muGJOEYEzFkSle2KbzcS7YMXWuLZyOj%2BwMDL1VJH7aCwBLH9BLPABvhdYHPH%2FKT5aXPwXtn3DbATgD2GM%2BqwMlsfrhnJzMtyCz1gOqlrUnwK3qOYWhYsVWO5Nybsy4xuQedNCaiX8KtkQRqfwn0fIVqWmW4ouHYFlY%2BRmFKa%2FO9dliPI6l48kagAF%2BjhEeS79PEiQHrHCV%2BIfJTB2iTXZfhIiSsw1RlD2HhB5hCMzuLWBI2AB%2F8uFyPGaNsApzqwl%2FHM%2BPEo0tkVDdSL6%2BoyYtNBeiPZyhysKglom65X%2FBjNMb7ikATLlwn%2BnT%2FPO6G1SYUTrVOMu3y%2BTWIEOvAYJfPPBy48b9agQe0w%3D%3D&OSSAccessKeyId=STS.NUJfrUkuz3Xue3LD3CSFHnLZZ&Expires=1683750397&Signature=eiIXoydwFuVpaTiwQwNQn4dmCwM%3D#scrollbars=0&toolbar=0&statusbar=0

(Source: https://www.hardkr.com/download-69433) :wink:

It's a PDF of the Programming Guide. See section 2 of the guide - and any developers reading this, please take a look!

I hope this helps!

1 Like

No, MIB here refers to the table of metrics the SoC keeps for offloaded flows. Looks like for the MT7622 et al. it's a parallel structure to the offload flow (hash?) table, where the SoC (PPE?) keeps track of the packets/bytes it has handled for each offloaded flow.
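To be clear about what I mean, here's my mental model of it - the names and sizes are made up for illustration, not taken from the actual driver:

```c
/* Mental model only: on the MT7622-class PPEs, per-flow accounting seems to
 * live in a MIB table that parallels the FOE (flow offload) table and is
 * indexed by the same slot. All names and sizes below are illustrative. */
#include <stdint.h>
#include <stdio.h>

#define FOE_TABLE_ENTRIES 16384        /* assumption: up to 16k offloaded flows */

struct foe_entry {                     /* the offloaded flow: tuple, rewrite info, ... */
    uint32_t data[16];
};

struct mib_entry {                     /* what the PPE has forwarded for that flow */
    uint64_t packets;
    uint64_t bytes;
};

/* Same slot in both tables: foe_table[i] describes the flow,
 * mib_table[i] accrues its packet/byte counts. */
static struct foe_entry foe_table[FOE_TABLE_ENTRIES];
static struct mib_entry mib_table[FOE_TABLE_ENTRIES];

int main(void)
{
    printf("%d FOE slots (%zu bytes each), %d MIB slots (%zu bytes each)\n",
           FOE_TABLE_ENTRIES, sizeof(foe_table[0]),
           FOE_TABLE_ENTRIES, sizeof(mib_table[0]));
    return 0;
}
```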

I did try one of the "flow" things, but AFAIK they'd get the information from netfilter. Looking at /proc/net/nf_conntrack, it's evident that on the ER-X, for connections in HW_OFFLOAD state, it only keeps track of the packets/bytes that initiate and/or terminate the connection. I can funnel GBs through an offloaded TCP stream and it never accounts for more than the SYN/FIN packets :cry:.
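If anyone wants to reproduce the observation, this is roughly how I check it - it assumes conntrack accounting is enabled (sysctl net.netfilter.nf_conntrack_acct=1), otherwise the packets=/bytes= fields don't show up at all:

```c
/* Print conntrack entries the kernel has marked as offloaded, so you can
 * watch their packets=/bytes= counters (not) move while traffic flows.
 * Requires net.netfilter.nf_conntrack_acct=1. */
#include <stdio.h>
#include <string.h>

int main(void)
{
    FILE *f = fopen("/proc/net/nf_conntrack", "r");
    char line[1024];

    if (!f) {
        perror("fopen /proc/net/nf_conntrack");
        return 1;
    }

    while (fgets(line, sizeof(line), f)) {
        /* Matches both the [OFFLOAD] and [HW_OFFLOAD] status markers. */
        if (strstr(line, "OFFLOAD"))
            fputs(line, stdout);
    }

    fclose(f);
    return 0;
}
```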

Ooooh, nice, though unfortunately this only covers the CPU and its peripherals.
I'd found a doc covering the GSW (titled "MT7621 Giga Switch Programming Guide"), and I guess the doc that's missing for my purposes would cover the "Frame Engine" or the "Packet Processing Engine".
Any chance you've seen such a doc?

WOW, yes!!! :partying_face:

I didn't think it was relevant, but they all said (I think in Chinese) that this was it:

See sections 2.19 and 2.20.

1 Like

Thanks, I found that one. Sadly it doesn’t go into any detail on the PPE that I can find.
The MT7621 may simply not have the hardware to keep track of per-flow byte/packet counts, though it's proudly listed as a feature of the PPE in the MT7620 Programming Guide: "Per flow accounting or rate limiting".

Recall I asked about netflow (i.e. softflowd)?

You noted:

I'll have to look at my own records (i.e. recall when I installed the MT-based device and compare the flow records), but netflow seemed accurate to me, which is why I never pursued the SNMP thing further.

I'll update you.

Thanks - I went back to try and remember what the heck I did, and it turns out I installed luci-app-nlbwmon, which uses kmod-nf-conntrack-netlink as its data source.

With the ER-X and other MT7621 routers you have two choices at present:

  1. No HW offloading and crappy performance.
    Mine maxed out at something like 200Mbps when routing LAN/WAN and between VLANs.
    However, the kernel's connection tracking will (presumably) have correct per-connection accounting.
  2. HW offloading and crappy metrics.
    The connection tracking absolutely does not track packets that go through the HW offload path on MT7621 - which is most everything beyond the TCP SYN/FIN handshake. You'll still see all the connections that occur, they're just all empty-ish.

I'm not familiar with softflowd, but if it's snooping the traffic, then I assume it has to either sample packets or disable HW offloading on MT7621 - or both (or it's just blind to HW-offloaded traffic).
Either way, the nf conntrack is already collecting the data (that I care about), so it'd be wonderful if it had the correct packet/octet counts.

1 Like

So I've been poking at this a little bit through a build with devmem enabled.
I'm not sure the MT7621 has the same accounting implementation as the MT7620. I did find a PDF data sheet for the MT7621 on hardkr, which unfortunately doesn't have any detail on the frame engine or the PPE, though it does have a memory map - which is good.
It's easy enough to download PDFs off hardkr without an account, but there are some other docs (archives) there that look interesting. Sadly I'm not having any luck signing up with my +1 country-code mobile number. I wonder if there's a trick to it?

Staring at the PPE code, I've noticed that there's a flag to request 80-byte FOE entries. I wonder whether, without this flag, on the V1 PPE engine the metrics co-habitate with the FOE entries.
I guess I'll play with that a little bit, see whether I can get anywhere...
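(For the poking itself I'm essentially doing what the devmem applet does, just a window at a time - roughly the sketch below. The base address and length are placeholders to fill in from the data sheet's memory map, not values I'm vouching for.)

```c
/* Dump a window of SoC register/table space via /dev/mem, the same thing
 * devmem does one word at a time. BASE and LEN are placeholders; take the
 * real addresses from the MT7621 memory map. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define BASE 0x1e100000UL   /* placeholder base address - check the data sheet */
#define LEN  0x1000UL       /* how much to dump */

int main(void)
{
    int fd = open("/dev/mem", O_RDONLY | O_SYNC);
    if (fd < 0) {
        perror("open /dev/mem");
        return 1;
    }

    volatile uint32_t *regs = mmap(NULL, LEN, PROT_READ, MAP_SHARED, fd, BASE);
    if (regs == MAP_FAILED) {
        perror("mmap");
        close(fd);
        return 1;
    }

    /* One 32-bit word per line, prefixed with its physical address. */
    for (unsigned long off = 0; off < LEN; off += 4)
        printf("%08lx: %08x\n", BASE + off, regs[off / 4]);

    munmap((void *)regs, LEN);
    close(fd);
    return 0;
}
```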

I took another look at the Ubiquiti ER-X GPL code dump, which is actually way more comprehensible than the mainline Linux kernel MTK offloading code. This is mainly due to some inline comments and the use of bitfields rather than bitmask-constructing macros :/. It also helps that it only handles IPv4, I guess.

There's some interesting code that seems to cleverly exfiltrate the PPE-processed packets to the per-port metrics. For the life of me I can't find this GPL dump again, nor anywhere on the interwebs that hosts it, or I'd link it here.
The gist of the code is that each offloaded flow is assigned an accounting group that's a function of the input and output ports. The packet/octet counts are then read periodically from the accounting groups, and seemingly added to the corresponding port pair's metrics.
E.g. for the accounting group that corresponds to flows from port i to port j, the packet and octet counts are applied to port i's ingress and port j's egress (or vice versa, I'm fuzzy on the directions).
It feels like a bit of a hack, but it should work just fine.
I don't know whether the PPE ever handles multicast or broadcast, but at least for unicast this should maintain reasonably timely and accurate counts.
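In rough pseudo-C the scheme amounts to this - the names and the group-numbering formula are mine, not the GPL dump's, and the two stub functions stand in for what are really (presumably clear-on-read) PPE register reads:

```c
/* Sketch of the accounting-group trick described above: each offloaded flow
 * gets a group derived from its (ingress, egress) port pair, and a periodic
 * poll folds the group counters back into the per-port statistics. */
#include <stdint.h>
#include <stdio.h>

#define NUM_PORTS 7                     /* assumption: GSW ports 0-6 */

struct port_stats {
    uint64_t rx_packets, rx_bytes;
    uint64_t tx_packets, tx_bytes;
};

static struct port_stats ports[NUM_PORTS];

/* One accounting group per (ingress, egress) port pair. */
static unsigned int ac_group(unsigned int in_port, unsigned int out_port)
{
    return in_port * NUM_PORTS + out_port;
}

/* Stand-ins for reading the PPE's per-group counters. */
static uint64_t ppe_ac_group_packets(unsigned int group) { (void)group; return 0; }
static uint64_t ppe_ac_group_bytes(unsigned int group)   { (void)group; return 0; }

/* Called periodically: credit each group's traffic to the ingress port's RX
 * and the egress port's TX counters (or the other way around). */
static void poll_accounting_groups(void)
{
    for (unsigned int in = 0; in < NUM_PORTS; in++) {
        for (unsigned int out = 0; out < NUM_PORTS; out++) {
            unsigned int g = ac_group(in, out);
            uint64_t pkts  = ppe_ac_group_packets(g);
            uint64_t bytes = ppe_ac_group_bytes(g);

            ports[in].rx_packets  += pkts;
            ports[in].rx_bytes    += bytes;
            ports[out].tx_packets += pkts;
            ports[out].tx_bytes   += bytes;
        }
    }
}

int main(void)
{
    poll_accounting_groups();
    printf("port 0 RX packets: %llu\n",
           (unsigned long long)ports[0].rx_packets);
    return 0;
}
```

The nice property of the scheme is that the readout cost stays fixed (one pass over the port-pair groups) no matter how many flows are offloaded.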

I found this message that references a MediaTek SDK for the MT7620/21.

The SDK contains a pretty decent description of the register structure of the MT7621's packet processing engine in particular. Having looked through it, I think I can safely give up on the idea of maintaining per-flow metrics on that SoC.
It looks like the only per-flow accounting mechanism on that SoC is that a flow can be assigned to one of 64 accounting groups, where metrics associated with the flow then accrue. As the flow table has up to 16k entries, these 64 accounting groups don't go very far - fully populated, that's on the order of 256 flows sharing each counter.
It would be possible to limit the number of active flows in the table to 64 so that each active flow had a distinct accounting group. However, I imagine that would cause insane churn on the flow offload table, which in turn would probably make routing CPU bound.
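Just to make the churn argument concrete, "one group per flow" would boil down to a 64-slot allocator like this toy (purely illustrative, not proposed driver code) - flow number 65 already forces an eviction:

```c
/* Toy illustration of why dedicating one of the 64 accounting groups to each
 * flow can't scale: any workload with more than 64 concurrent connections
 * keeps evicting and re-binding hardware flow entries. */
#include <stdint.h>
#include <stdio.h>

#define NUM_AC_GROUPS 64

static uint32_t group_owner[NUM_AC_GROUPS];  /* flow id per group, 0 = free */

/* Bind a flow to a free group, or return -1 when the caller would have to
 * evict some other offloaded flow first - the churn in question. */
static int ac_group_bind(uint32_t flow_id)
{
    for (int g = 0; g < NUM_AC_GROUPS; g++) {
        if (group_owner[g] == 0) {
            group_owner[g] = flow_id;
            return g;
        }
    }
    return -1;
}

int main(void)
{
    for (uint32_t flow = 1; flow <= 65; flow++) {
        if (ac_group_bind(flow) < 0)
            printf("flow %u: no free accounting group, would have to evict\n",
                   flow);
    }
    return 0;
}
```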

2 Likes