Zyxel NR7101 low performance

I installed OpenWrt on my Zyxel NR7101. It mostly works well, except I'm only getting about a third of the bandwidth I had with the official firmware.

Is this expected? Is there anything I could try to improve this?

Have you enabled flow offloading and packet steering?
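For reference, here is a minimal sketch of enabling software flow offloading via UCI (this assumes the default `/etc/config/firewall` layout where the defaults section is at index 0; `flow_offloading_hw` additionally requires SoC support):

```shell
# Enable software flow offloading in the firewall defaults section
uci set firewall.@defaults[0].flow_offloading='1'
# Optionally enable hardware offloading where the SoC supports it (MT7621 does)
uci set firewall.@defaults[0].flow_offloading_hw='1'
uci commit firewall
/etc/init.d/firewall restart
```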

I tried various combinations of those, but they seem to make no meaningful difference. I also tried installing irqbalance but that made no difference either.

Are you running the latest Zyxel modem firmware?

What sort of bandwidth range are we talking about here?

In terms of LAN-to-LAN, this is what I see from my Zyxel NR7101 to my RT3200:

root@OpenWrt-1:~# iperf3 -c -R
Connecting to host, port 5201
Reverse mode, remote host is sending
[  5] local port 43124 connected to port 5201
[ ID] Interval           Transfer     Bitrate
[  5]   0.00-1.00   sec  92.0 MBytes   771 Mbits/sec
[  5]   1.00-2.01   sec  96.8 MBytes   807 Mbits/sec
[  5]   2.01-3.00   sec  94.9 MBytes   800 Mbits/sec
[  5]   3.00-4.00   sec  95.5 MBytes   802 Mbits/sec
[  5]   4.00-5.00   sec  96.1 MBytes   806 Mbits/sec
[  5]   5.00-6.00   sec  94.1 MBytes   790 Mbits/sec
[  5]   6.00-7.00   sec  95.5 MBytes   799 Mbits/sec
[  5]   7.00-8.00   sec  95.4 MBytes   802 Mbits/sec
[  5]   8.00-9.00   sec  94.4 MBytes   792 Mbits/sec
[  5]   9.00-10.00  sec  94.0 MBytes   789 Mbits/sec
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.02  sec   950 MBytes   796 Mbits/sec   17             sender
[  5]   0.00-10.00  sec   949 MBytes   796 Mbits/sec                  receiver

iperf Done.

My 4G connection maxes out at circa 80 Mbit/s, so I can't verify the WAN performance.

Perhaps @bmork has an idea or two. @bmork shouldn't the bandwidth associated with the modem be the same since OpenWrt doesn't modify the firmware on the modem? OpenWrt shouldn't affect carrier aggregation and the like, right?

The modem bandwidth should be the same. But using the full modem bandwidth requires frame aggregation on the USB link, and that is not yet trivial with OpenWrt and qmi_wwan. The easiest workaround is to change the modem to MBIM mode and use the cdc_mbim driver instead.
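For the record, on Quectel-based modems like the one in the NR7101, the USB composition is typically switched with `AT+QCFG="usbnet"`. A sketch, using the `atcmd` helper seen later in this thread; the device path and value should be verified against your modem's AT command manual:

```shell
# Query the current USB network mode (on Quectel firmware: 0 = QMI/RMNET, 2 = MBIM)
atcmd /dev/ttyUSB3 'AT+QCFG="usbnet"'
# Switch to MBIM mode, then reset the modem so the change takes effect
atcmd /dev/ttyUSB3 'AT+QCFG="usbnet",2'
atcmd /dev/ttyUSB3 'AT+CFUN=1,1'
```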

Now I have heard that this still doesn't match the OEM firmware with respect to top bandwidth. And I haven't tested the max bandwidth myself, so I just have to trust those reports. If you really need to push this device above, say, 400-500 Mbit/s, then you're probably better off using the OEM firmware.

Unless, that is, you want to do the work I am too lazy to do: figure out how to tune either the qmi_wwan or cdc_mbim driver to achieve the same results the OEM firmware does. That's obviously possible. They don't do magic.


The modem is running R13A02.

On the WAN with the OEM firmware I get the 600 Mbps (down) my connection is limited to, with OpenWrt I'm getting about 200 Mbps.

I didn't consider that LAN performance could be the problem. Enabling flow offloading does actually seem to reduce it a lot, but with that disabled it seems fine:

root@OpenWrt:~# iperf3 -c
Connecting to host, port 5201
[  5] local port 49628 connected to port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec  96.2 MBytes   807 Mbits/sec    0   1.25 MBytes       
[  5]   1.00-2.00   sec  96.2 MBytes   807 Mbits/sec    0   1.25 MBytes       
[  5]   2.00-3.00   sec   100 MBytes   839 Mbits/sec    0   1.25 MBytes       
[  5]   3.00-4.00   sec  98.8 MBytes   828 Mbits/sec    0   1.25 MBytes       
[  5]   4.00-5.00   sec  98.8 MBytes   828 Mbits/sec    0   1.25 MBytes       
[  5]   5.00-6.00   sec   100 MBytes   839 Mbits/sec    0   1.25 MBytes       
[  5]   6.00-7.00   sec  95.0 MBytes   797 Mbits/sec   53    892 KBytes       
[  5]   7.00-8.00   sec  96.2 MBytes   807 Mbits/sec    0    892 KBytes       
[  5]   8.00-9.00   sec  96.2 MBytes   807 Mbits/sec    0    892 KBytes       
[  5]   9.00-10.00  sec  96.2 MBytes   807 Mbits/sec    0    892 KBytes       
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec   974 MBytes   817 Mbits/sec   53             sender
[  5]   0.00-10.00  sec   970 MBytes   814 Mbits/sec                  receiver

iperf Done.

@Zerion it seems we have our answer from our resident Zyxel NR7101 expert.

If I am understanding the point correctly, the bottleneck is the USB interface between the modem and OpenWrt relating to the way data units are handled. And the problem is fixable. Seems an interesting challenge.

I didn't even know that this was a possibility. Why isn't this the default? What are its disadvantages? How would @Zerion make this change?

Dumb question, but is the Zyxel OEM firmware not open source, and can we not just look at the code to see what they do? Or can it be reverse engineered?

I tested MBIM and it's maybe a little bit faster (about 260 Mbps), but still far from the OEM firmware. What would I be tuning if I wanted to improve this? Settings? Code?

Is NR-NSA expected to work here? How would I confirm the modem is connecting to NR? I can only seem to get information on LTE from the tools.

Yup. I should have tried this before. I just tested OEM, OpenWrt with QMI and OpenWrt with MBIM, and the results are depressing. I have no idea where the problem is.

OEM download speed is about 50% higher for me, while the upload speed is about the same. MBIM or QMI makes no difference.

I did a number of measurements, using different MBIM settings, and the results were all similar. My speed increased slightly during the test, but that was probably just network conditions. Repeating the QMI test after finishing MBIM testing showed about the same result as my last MBIM test.

FWIW, my approximate numbers were

OEM: 430 down, 150 up
OpenWrt: 280 down, 150 up

This was tested on the same router, using the same APN, with OpenWrt from a current snapshot download (since the 23.05 images have been removed). I used ModemManager to connect because that's easiest for a quick test without actually configuring anything. I configured OpenWrt to do two-way routing between lan and wwan0, similar to the OEM bridge mode, and measured the throughput from my laptop.
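A sketch of the kind of quick ModemManager connection used for a test like this (the modem index and APN are placeholders; substitute your own):

```shell
# List detected modems, then connect modem 0 with a placeholder APN
mmcli -L
mmcli -m 0 --simple-connect='apn=internet'
# Inspect the modem and bearer state afterwards
mmcli -m 0
```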

I'm speaking out of ignorance here, but it's not related to something like the bridge mode forgoing routing altogether, is it?

And what about your USB framing hypothesis?

That should just work AFAIK. I'm back to OEM now, but the same AT command will obviously work in OpenWrt too. For example:

root@NR7101:~# atcmd /dev/ttyUSB3 'at+qnwinfo'
+QNWINFO: "FDD LTE","24201","LTE BAND 3",1450
+QNWINFO: "TDD NR5G","24201","NR5G BAND 78",643296


Yeah, it does show me connected to NR5G:

+QNWINFO: "FDD LTE","24405","LTE BAND 3",1825
+QNWINFO: "TDD NR5G","24405","NR5G BAND 78",641280


If I want to return to the OEM firmware, do I need to do any preparation, or can I just flash the firmware?

The OEM firmware is always routing too. That's the only sane way to do an LTE/5G bridge.

Dead end. I was wrong. Or rather, I still believe it's correct once we get past 500 Mbit/s, but there's something else causing the 280 Mbit/s limit(?)

You can simply write an OEM image to the "Kernel" partition using mtd.

Or you can write it to the "Kernel2" partition and keep OpenWrt in "Kernel" if you like. Then you can switch to OEM from OpenWrt using

fw_setenv BootingFlag 1

and from OEM to OpenWrt using

nvram setro uboot BootingFlag 0
nvram commit
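Putting those pieces together, a sketch of the dual-image approach from OpenWrt (the image filename is a placeholder, and partition names should be double-checked against `cat /proc/mtd` before writing anything):

```shell
# Inspect the partition layout first to confirm the partition names
cat /proc/mtd
# Write the OEM image to the second kernel partition, keeping OpenWrt in "Kernel"
mtd write oem-firmware.bin Kernel2
# Tell U-Boot to boot the second image (OEM) on the next boot
fw_setenv BootingFlag 1
reboot
```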

Argh. I've been bending my mind around the fix @kristrev added for qmimux here: https://github.com/torvalds/linux/commit/2e4233870557ac12387f885756b70fc181cb3806

Trying to figure out if this affects non-muxed forwarding too. Which would be nice since the fix then is obvious.

We allocate rx skbs for qmi_wwan in usbnet.c using __netdev_alloc_skb() when running in raw ip mode. And __netdev_alloc_skb() adds an extra NET_SKB_PAD headroom, which is defined as max(32, L1_CACHE_BYTES). We have CONFIG_MIPS_L1_CACHE_SHIFT=5 for the MT7621, meaning that L1_CACHE_BYTES is 32 too. So NET_SKB_PAD is 32 bytes. Fine so far.

So we have received a raw IP packet into skb->data, and we have 32 bytes headroom. Then we enter ip_forward() which does

	/* We are about to mangle packet. Copy it! */
	if (skb_cow(skb, LL_RESERVED_SPACE(rt->dst.dev)+rt->dst.header_len))
		goto drop;

Which means that skb_cow() ends up reallocating the skb if LL_RESERVED_SPACE(rt->dst.dev) + rt->dst.header_len > skb_headroom(skb).

That's the hard part to predict. "rt->dst" is a "struct dst_entry" describing the target destination. AFAICS, rt->dst.header_len is used for tunnels and will be 0 for any other sort of destination. So we're left with this macro applied to the outgoing interface:

#define LL_RESERVED_SPACE(dev) \
        ((((dev)->hard_header_len + READ_ONCE((dev)->needed_headroom)) \
          & ~(HH_DATA_MOD - 1)) + HH_DATA_MOD)

where HH_DATA_MOD is 16. So that's 16-byte-aligning the sum of the outgoing interface's hard_header_len and needed_headroom, which in our case, with a DSA slave on an MT7530 switch, should be 14 (Ethernet header) and 4 (MTK_HDR_LEN). That ends up with LL_RESERVED_SPACE rounded up to 32 bytes.

Which is exactly our skb_headroom() and therefore should be just fine.
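The arithmetic can be checked directly, plugging in HH_DATA_MOD = 16, hard_header_len = 14 and needed_headroom = 4:

```shell
# LL_RESERVED_SPACE(dev) = ((hard_header_len + needed_headroom) & ~(HH_DATA_MOD - 1)) + HH_DATA_MOD
echo $(( ( (14 + 4) & ~(16 - 1) ) + 16 ))   # prints 32
```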

Or am I missing something here? Guess I should test these things, but that sounds like work...

EDIT: Reading the qmimux patch again, I see that we do use netdev_alloc_skb() there too. I sort of imagined we didn't. That should have been fine by the same logic as above. Wonder why it wasn't? But that's good news. I have to test this.


Looks like you've identified a possible cause and solution for this low throughput issue @bmork. The comment on the commit gives a description that seems extremely relevant to the issue at hand.

Is the issue that this patch is upstream relative to where OpenWrt 23.05 is at? If so are you figuring out an OpenWrt-specific patch that tries to properly work out the requisite headroom?

Well, now tested. And no, this is not the problem.

Actually, I was surprised to see that the available headroom is 64 bytes in ip_forward(). It turns out we have an OpenWrt-specific patch modifying NET_SKB_PAD.

FWIW, this is what I see from a couple of debug printks in usbnet.c and ip_forward, using the source interface as reference in ip_forward:

[ 1248.350913] ip_forward: mt7530-mdio mdio-bus:1f lan: NET_SKB_PAD=64, skb_headroom()=80, LL_RESERVED_SPACE(rt->dst.dev)=16, rt->dst.header_len=0
[ 1248.523878] rx_submit: qmi_wwan 2-1:1.4 wwan0: size=1500, skb_headroom()=64
[ 1248.524041] ip_forward: qmi_wwan 2-1:1.4 wwan0: NET_SKB_PAD=64, skb_headroom()=64, LL_RESERVED_SPACE(rt->dst.dev)=32, rt->dst.header_len=0
[ 1249.352542] ip_forward: mt7530-mdio mdio-bus:1f lan: NET_SKB_PAD=64, skb_headroom()=80, LL_RESERVED_SPACE(rt->dst.dev)=16, rt->dst.header_len=0
[ 1249.390269] rx_submit: qmi_wwan 2-1:1.4 wwan0: size=1500, skb_headroom()=64
[ 1249.390383] ip_forward: qmi_wwan 2-1:1.4 wwan0: NET_SKB_PAD=64, skb_headroom()=64, LL_RESERVED_SPACE(rt->dst.dev)=32, rt->dst.header_len=0

So rt->dst.header_len is 0 as expected, and we have plenty of headroom left in both directions. And even 32 bytes would have been enough.


Ah, drat. That seemed so promising. Might @nbd have any other ideas to try, I wonder?

If I run a speedtest on the NR7101 directly instead of from my PC, I'm getting about 350 Mbps of bandwidth, so a fair bit of bandwidth is disappearing between the WAN and LAN.
