Offloading on x86

Hi,

There are these two flags that are supposed to work only on specific hardware:

firewall.defaults.flow_offloading='1'
firewall.defaults.flow_offloading_hw='1'
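
In /etc/config/firewall form these sit in the defaults section, i.e. something like this (a sketch of just the relevant options):

config defaults
	option flow_offloading '1'
	option flow_offloading_hw '1'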

What about x86 (virtualized, virtio)? I cannot find much about it.
I don't want to use any traffic shaping features.

thx
juni

In general, software flow offloading shouldn't require any specific hardware afaik, and it also works with QoS/SQM...
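
If you want to try it, you can flip it from the shell; a sketch, assuming the usual unnamed defaults section (hence the @defaults[0] selector):

uci set firewall.@defaults[0].flow_offloading='1'
uci commit firewall
/etc/init.d/firewall restart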

Hardware flow offloading is currently only supported on MediaTek hardware.

Software flow offloading may work with SQM, but it causes my port forwards to get disabled. Is that normal?

A VM does not expose anything to offload to, and on x86 there is no supported offloading hardware anyway. The only thing you can do is "software offloading", which is more of a "do a lighter form of firewalling" thing than a real offload, and it can break some firewall rules or traffic shaping.
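
You can at least verify whether software offloading is active; on 21.02 (fw3/iptables) the offload rule lands in the FORWARD chain, so something like this should show it (a sketch, assuming the default chain layout):

iptables -S FORWARD | grep -i offload
ip6tables -S FORWARD | grep -i offload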

That said, on x86 the CPU is usually strong enough to deal with multi-Gbit routing, no problem. Do you have performance issues?

Yes, I do.
I'm running a virtualized OpenWrt 21.02 on a host that has two 10G Ethernet ports (ConnectX3).
When transferring via SMB from a workstation to a server (both 10G), I don't get more than around 1.7 Gbit/s.

The switch has jumbo frames activated. The MTU on the server, virtualization host, client and OpenWrt is set to 9000.
Traffic is IPv6, there is no NAT of any kind.
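
A quick way to check that jumbo frames survive end to end is a full-size unfragmented IPv6 ping from the Linux client (a sketch; 8952 = 9000 minus the 40-byte IPv6 header and the 8-byte ICMPv6 header, and the address is a placeholder):

ping -6 -M do -s 8952 <server address>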

Client: ConnectX3, i7 8700k (Fedora 34)
Server: ConnectX3, Ryzen 7 3700x (TrueNAS 12.5U5)
Virtualization: ConnectX3 Dual Port, i7 10700t (Proxmox 7.0-11)
Router: 4 Cores, CPU Type: host (OpenWrt 21.02)

Eh, since you mention SMB file sharing, I have some doubts the bottleneck is the network here. Even a low-end x86 device can route faster than that; all of your processors should be able to route even 40 Gbit/s, no problem.

Is your storage device/array actually able to serve files at 10 Gbit/s? I mean the actual array read/write speed, before it goes onto the network.

1.7 Gbit/s is around 212 MB/s, which isn't that bad for a normal spinning-rust storage array on ZFS/TrueNAS.

Saturating a 10 Gbit/s link means your storage can read/write at about 1.25 GB/s; that's some serious SSD speeds or some seriously large RAID0 arrays.
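
The conversion, as a quick shell check (decimal units; bits to bytes is a divide by 8):

echo '1.7 * 1000 / 8' | bc    # 1.7 Gbit/s -> ~212 MB/s
echo '10 * 1000 / 8' | bc     # 10 Gbit/s  -> 1250 MB/s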

Also, did you check the CPU load on the router VM while running this transfer? Even just from Proxmox's VM data page.
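
On the OpenWrt VM itself you can watch it live while the transfer runs; with BusyBox top, packet processing mostly shows up as softirq time:

top -d 1

Keep an eye on the sirq percentage in the CPU summary line.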

Pure network speed tests are done with tools like iperf3, which generate synthetic traffic out of thin air and are not limited by anything else (like a NAS's storage).

Can you try doing a network test with iperf3? TrueNAS has that tool too.
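
Something along these lines (the server address is a placeholder; -6 forces IPv6, -P 4 uses four parallel streams, -t 30 runs for 30 seconds):

iperf3 -s                                  # on the TrueNAS box
iperf3 -6 -c <server address> -P 4 -t 30   # on the client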

If the only thing that is happening is moving files within a LAN, then there is nothing to offload to anything else. Offloading is for routing and the firewall; if traffic isn't changing subnets or going through the firewall, there should be no real load on the CPU.

I mean, yeah, OK, software bridging of two interfaces isn't free, but it's negligible on that kind of hardware.

Thanks for the hints.
With iperf3 I'm maxing out at around 7 Gbit/s...
Well, that's a lot better. Thanks again for clearing that up.
