How can we make the lantiq xrx200 devices faster

I have good results using an FB7412 on OpenWrt master (kernel 5.4) with pulls 4353 and 4339 and with the VPE-IRQ patch.
Just three clicks to include them in the OpenWrt master branch, to the benefit of all.


Now that there has been more of a push towards kernel 5.10 and the DSA driver instead of the legacy driver, it appears a fair amount of work has gone into bringing the things pc2005 worked on into the new driver.

Additional work appears to be in this developer's branch:


However, I am unable to get the Ethernet ports working on my BT Home Hub 5A after merging this branch (kernel 5.10). Can anyone with serial access confirm this? (Without serial access you risk bricking your router, unless you have a pre-existing config that bridges a USB Ethernet adapter onto br-lan.) The ports acknowledge that a link is up, but I get no ping or other access.
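For anyone without serial access, the fallback mentioned above can be prepared in advance. This is only a sketch of what such a config might look like, not taken from the thread; the device name `eth1` is an assumption, since a USB Ethernet adapter may enumerate under a different name:

```
# /etc/config/network (fragment) -- hypothetical recovery config.
# Assumes the USB adapter shows up as eth1; adjust to your hardware.
config device
        option name 'br-lan'
        option type 'bridge'
        list ports 'eth1'    # USB Ethernet adapter bridged onto br-lan

config interface 'lan'
        option device 'br-lan'
        option proto 'static'
        option ipaddr '192.168.1.1'
        option netmask '255.255.255.0'
```

With this in place, the router stays reachable over the USB adapter even if the built-in switch ports stop working after a flash.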

Hi @wilsonyan,
I am the original author of these patches. This is an older patch version and it is broken. Here is a newer version of the patch. This patch probably doesn't improve performance. You will find more patches in my repository. Some are untested.

Combining 4 patches increases performance by 25%:

I will send the patches upstream in the next net-next cycle.


I also saw you have a patch to use 4 TX queues. Was any testing done on that one? I'm seeing frequent 'tx ring buffer full' errors that sometimes stall the Ethernet connection. Any idea if this patch could help resolve that? Thanks!

The 4-TX-queue patch has not been tested. 'Tx ring buffer full' messages appear on the console because DMA is slow: the system fills the descriptors faster than the DMA releases them. I think the patches from this PR should solve the problem.

Using all the patches I was able to achieve 725 Mb/s upload and around 600 Mb/s download speed.


Impressive. So in this case the 4 patches you mentioned in your earlier post? If I understand correctly, most of them were sent upstream, and it might take some time until they are pulled into OpenWrt?

I have installed the patch you mentioned, but saw the message again today; it also caused a short hiccup in the network. Any other ideas on what I can try or test? Maybe the IRQ balancing? I just saw your commit here, which seems to contain all the patches you previously mentioned.

What is also weird is that it seemed to coincide with a DSL connection drop:

[61678.440916] lantiq,xrx200-net 1e10b308.eth eth0: tx ring full

Are you connected to the router with a 100M link?

Yes, correct. The port is connected to an older PLC adapter, so it's only 100M.

None of my patches solve this problem. My patches increase the performance of the CPU port. Probably the faster the CPU port, the more frequently these messages appear on the console. This is because the CPU is connected to the switch with a 1 Gbps link, while your device is connected to the switch with a 100 Mbps link. The CPU can send data faster than the switch forwards it over the slow link, so the buffers run out quickly. There can be a maximum of 255 buffers per TX channel.
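A quick back-of-the-envelope model makes the explanation above concrete. This is my own sketch (the names and the 1500-byte frame size are assumptions, not from the driver): with 255 descriptors, a 1 Gbps fill rate and a 100 Mbps drain rate, the TX ring fills in a few milliseconds.

```python
# Hypothetical model of TX ring exhaustion on the xrx200 CPU port.
# Assumption: full-size 1500-byte frames; descriptor count from the post above.

RING_DESCRIPTORS = 255          # max buffers per TX channel
FRAME_BYTES = 1500              # assumed full-size Ethernet frames
FILL_RATE_BPS = 1_000_000_000   # CPU -> switch link (1 Gbps)
DRAIN_RATE_BPS = 100_000_000    # switch -> client link (100 Mbps)

ring_bits = RING_DESCRIPTORS * FRAME_BYTES * 8
net_fill_bps = FILL_RATE_BPS - DRAIN_RATE_BPS  # ring grows at this net rate

time_to_full_s = ring_bits / net_fill_bps
print(f"ring fills in ~{time_to_full_s * 1000:.1f} ms")  # → ring fills in ~3.4 ms
```

So under sustained load toward a 100M client, the ring fills in roughly 3.4 ms, which is why the message shows up so readily once the CPU port gets faster.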

Good news. Some of the performance-enhancing patches were merged yesterday.


@olek210 Does this branch contain all of these patches? I'd like to test them out and it would save me some time to apply the diffs to the openwrt repo myself.

There is also a PR that increases efficiency. It contains the previously mentioned patches.

I'm testing this currently. One thing I can't confirm is whether it's possible to have a port carry 2 tagged VLANs.

1 VLAN is OK; I haven't confirmed that 2 work.

Are you testing QinQ?

No, just basically a public LAN and a private LAN, with 1 port tagged on both.
Maybe it's just the difficulty of the DSA and UCI config, or perhaps the VLAN ID was too high or something.
I'll try again later.
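For reference, under DSA this kind of setup is usually expressed with `bridge-vlan` sections. A minimal sketch, assuming ports named `lan1`/`lan2`/`lan3` and VLAN IDs 10 and 20 (both the port names and the IDs are placeholders, not from the thread):

```
# /etc/config/network (fragment)
config bridge-vlan
        option device 'br-lan'
        option vlan '10'          # "public" LAN
        list ports 'lan1:t'       # tagged on the shared port
        list ports 'lan2'         # untagged member

config bridge-vlan
        option device 'br-lan'
        option vlan '20'          # "private" LAN
        list ports 'lan1:t'       # same port, second tagged VLAN
        list ports 'lan3'
```

With this, `lan1` carries both VLANs tagged, while `lan2` and `lan3` remain untagged members of one VLAN each.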

Everything seems stable with your performance branch; it's been up for 3-4 days now with no problems.

My patches have been applied to master in 7e484b9.


Is that from WAN to LAN? I can only get 150 Mbps from WAN to LAN; on LAN to LAN I get 950 Mbps, so there it's great.

The measurement results are between LAN and WAN. I can get 700 Mbps with software flow offloading enabled.
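For anyone wanting to try the same setting, software flow offloading is enabled in the firewall defaults. A minimal sketch of the relevant fragment (the other options shown are just the usual defaults, included for context):

```
# /etc/config/firewall (fragment)
config defaults
        option input 'ACCEPT'
        option output 'ACCEPT'
        option forward 'REJECT'
        option flow_offloading '1'   # software flow offload
```

The same can be toggled from the shell with `uci set firewall.@defaults[0].flow_offloading='1'`, followed by `uci commit firewall` and a firewall restart.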