Netfilter "Flow offload" / HW NAT

I don't have any problems with connection leaks, even after 4 or 5 hours... with and without nbd's commit.

I also haven't had any problems with connection leaks, but I currently use a build with kernel 4.14.43 because I don't know if both wifi cards will work if I update to a recent build.

My only problem is many "Connection Reset" errors, but I have had this problem for months and it is not related to Flow Offload.

@sotux, please show me the output of cat /proc/net/nf_conntrack when it has accumulated a large number of connections
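If the full file is too big to look through, a rough summary is enough to start with, for example (standard busybox tools assumed to be present on a default build):

    # total number of tracked connections
    wc -l /proc/net/nf_conntrack
    # breakdown by protocol (the 3rd field is tcp/udp/icmp)
    awk '{print $3}' /proc/net/nf_conntrack | sort | uniq -c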


@nbd, I've uploaded my nf_conntrack file to Mega
Please download from: https://mega.nz/#!4zwHXTyJ!AFJ3YZCjshyvC2qM12SjFoZZ_Ik2Ar7nxLGCUCoKQJQ

Thanks, it seems that my fix is working for TCP and only UDP still has issues. I will look into this today.


Fixed now, please test.


Thank you @nbd, I'll test it.

Compiling now, will report back.

It's OK now. I have around 200 clients under test and the value is ~2000; by this time it would previously have been ~14000.

Thanks for testing

I think it is fixed for me as well. I have far fewer clients, so it takes a bit longer to really show, but after 1 hour of uptime it's looking much better than it did before. Thank you very much for your work :)!

Edit: Definitely fixed for me as well. Thanks again :slight_smile:

After one night of testing, the bug appears to be fixed.
Thank you @nbd.

@nbd

Fixed it for me. IPSec still doesn't work with flow offloading though.

The fix also seems to work for me; no huge amount of active connections anymore after ~16 hours of testing with ~10 devices :slight_smile:

Is anyone else seeing noticeably higher CPU usage with HW flow offloading enabled on the latest master branch? Not sure whether it is one of these fixes or the "kernel: allow hardware NAT offload drivers to keep a priv pointer" commit. I remember only seeing >95% idle CPU at all times during speed tests, while now I see <85% idle. Not sure if I am misremembering things or if there is a performance regression. Either way, it's just a minor thing since I am still easily able to max out the connection.

One other thing that I find strange: I see much higher CPU utilization when I bridge two ports with two different VLANs together in my LAN bridge and run an iperf3 test between a computer connected to the first port and a second computer connected to the second port, compared to similar speeds over WAN. Does flow offload only apply to connections to/from WAN?

Edit: I know I can simply put both ports on the same VLAN and let the switch handle the traffic for 0% CPU utilization. But I am just playing around with this new feature to learn more about it. And the fact it doesn't seem to work on LAN <-> LAN has me confused.
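For anyone following along, these are the knobs I'm toggling for these tests (assuming a current master build where the firewall exposes the flow_offloading options in its defaults section):

    # enable software flow offload, plus the hardware path where supported
    uci set firewall.@defaults[0].flow_offloading='1'
    uci set firewall.@defaults[0].flow_offloading_hw='1'
    uci commit firewall
    /etc/init.d/firewall restart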

With the latest commits, Flow Offload works even more stably in my environment (x86_64 CPU). My VPN connection is much more stable and a little faster, especially considering that my connection is via public WiFi and my ISP's AP is approximately 500 m away, in a central area saturated with wireless networks.

Thanks @nbd for your work.

P.S.: Why is the wireless driver in the OpenWrt master branch stalled at a version dated November 2017?

Flow offload doesn't work (yet) in combination with SQM. But reading http://blog.cerowrt.org/post/bbrs_basic_beauty/ it seems that TCP BBR is getting close. I know there is a thread discussing BBR, but will this work together with HW offload so we can keep PPPoE offload and have BBR do its thing? That would give us the best of both worlds?? Then adding HW QoS to e.g. prioritize VoIP should get things pretty close to "perfect".

“Perfect” is of course a relative term given the SoC/hardware we are using. Running everything on a Xeon/i7 type of processor will be “more perfect”

BBR will only do its thing at the endpoints of a TCP connection; as it is a TCP congestion control algorithm, it will not be useful on a (pure) router unless you use your router as a server that terminates TCP connections.
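In other words, BBR is something you enable on the machines terminating the TCP connections, not on the router. On a Linux endpoint that ships the tcp_bbr module, enabling it looks roughly like this (a sketch, assuming a reasonably recent kernel):

    # load BBR and pair it with the fq qdisc, which provides the pacing BBR expects
    modprobe tcp_bbr
    sysctl -w net.core.default_qdisc=fq
    sysctl -w net.ipv4.tcp_congestion_control=bbr
    # verify the active congestion control
    sysctl net.ipv4.tcp_congestion_control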

Using a device connected to the R7800's LAN port as an iperf3 server, the R7800 as the iperf3 client, and TCP BBR, I got very slow speeds.
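For reference, this is roughly how I ran the test; iperf3 on Linux lets you pick the congestion control per connection with -C (the address is just an example, and tcp_bbr must be available on the router for this to work):

    # on the LAN device
    iperf3 -s
    # on the R7800, forcing BBR for this test
    iperf3 -c 192.168.1.100 -C bbr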