Dear all,
I am wondering whether I need to turn on Routing/NAT flow offloading if I use an Intel X550-T2. Or is the feature related to the CPU instead (4 vCPUs of type "host" on an i7-3770 or i7-2600; the X550-T2 is passed through via PCIe in Proxmox)?
Thanks and Cheers, Blinton.
NAT hardware offload is device-to-device DMA; it is not something a NIC driver provides on its own.
Your system is fast enough to forward 10GbE packets through main memory. Just get irqbalance running and set up one queue per core on the guest's virtio NIC, so the speed isn't choked by all processing being locked to CPU0.
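A quick way to check that the queues are actually spread (a sketch; assuming the interfaces show up as eth* inside the guest):

# one IRQ line per queue; under load the counters should grow
# in different CPU columns, not only under CPU0
grep eth /proc/interrupts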
Hi brada4,
Thanks a lot for your suggestions. Can you help me make sure I understood correctly?
OpenWrt 24.10.0
- irqbalance was already in place; I just enabled it, and it works according to /proc/interrupts
- Network > Interfaces > Global network options > Packet steering is enabled; Steering flows (RPS) is set to "Standard: none"
- Network > Firewall > General Settings > Routing/NAT Offloading > Hardware flow offloading -> should I turn it off? (I've put what I believe are the UCI equivalents of these settings below.)
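For reference, these are the UCI options behind the LuCI settings above, as I understand them on 24.10 (treat as a sketch):

# /etc/config/irqbalance
config irqbalance 'irqbalance'
    option enabled '1'

# /etc/config/network (globals section)
config globals 'globals'
    option packet_steering '1'

# /etc/config/firewall (defaults section)
config defaults
    option flow_offloading '1'      # software flow offloading
    option flow_offloading_hw '1'   # hardware flow offloading (the one in question)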
Proxmox 8.3.3
- How do I set one queue per core in the VM? BTW, my settings are:
agent: 1
balloon: 1024
bios: ovmf
boot: order=scsi0;ide2
cores: 4
(i7-3770)
cpu: host
efidisk0: local-lvm:vm-400-disk-0,efitype=4m,pre-enrolled-keys=1,size=4M
(X550-T2 WAN and LAN)
hostpci0: 0000:01:00.0,pcie=1,rombar=0
hostpci1: 0000:01:00.1,pcie=1,rombar=0
(to increase the root size to 2048 MB I use SystemRescue after upgrading OpenWrt)
ide2: local:iso/systemrescue-11.00-amd64.iso,media=cdrom,size=853M
machine: q35
memory: 2048
meta: creation-qemu=9.0.0,ctime=1720858309
name: OpenWrt-24.10.0
numa: 0
onboot: 1
ostype: l26
scsi0: local-lvm:vm-400-disk-1,cache=writeback,iothread=1,size=2G,ssd=1
scsihw: virtio-scsi-single
smbios1: uuid=hidden
sockets: 1
startup: order=1
vmgenid: hidden
Many thanks in advance!
Cheers, Blinton
Do you see a Proxmox helpdesk anywhere around?
You have to enable multi-queue operation of virtio_net in Proxmox and install irqbalance inside OpenWrt. The first part is between you and Proxmox; don't get creative with the second. Nobody told you to mess with packet steering.
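Roughly like this (a sketch; assumes your VM ID is 400, a virtio NIC on net0 attached to bridge vmbr0, and 4 vCPUs; adjust to your setup, and specify your existing MAC in virtio=<mac> so it is not regenerated):

# on the Proxmox host: one virtio queue per vCPU
qm set 400 --net0 virtio,bridge=vmbr0,queues=4

# inside OpenWrt: install and enable irqbalance
opkg update && opkg install irqbalance
uci set irqbalance.irqbalance.enabled='1'
uci commit irqbalance
/etc/init.d/irqbalance enable
/etc/init.d/irqbalance start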
Hi brada4,
Thanks for your quick reply.
- In Proxmox I am passing the whole X550-T2 network card through to OpenWrt, as in the config shared above (q35, hostpciX, pcie=1); for PVE itself I use a separate 2.5G NIC (TX201).
- Similarly for the vCPU, where I chose "host" to get full access to all CPU features.
That's what I'm struggling to understand: multiqueue virtio-net seems to apply only to paravirtualized NICs, or can I somehow use it to spread the CPU load in OpenWrt?
Many thanks in advance!
Cheers, Blinton
Since you pass through the whole device, you only need irqbalance (which is there by default on pretty much any desktop or server Linux out there).
Make sure you run any speed tests multi-threaded, as you have to utilize multiple cores to reach 10 Gbps.
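For example with iperf3 (assuming it is installed on a LAN host, here a hypothetical 192.168.1.10, and on the client machine on the other side of the router):

# on the server host
iperf3 -s
# on the client: 8 parallel streams for 30 seconds
iperf3 -c 192.168.1.10 -P 8 -t 30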
Thanks a lot, brada4!
I turned off hardware offload and packet steering as you suggested.
Cheers, Blinton
Hardware offload e.g. overrides the ARP and switch FIB even when no hardware offload is actually supported. It saves some CPU cycles on low-end platforms; you really don't need to worry about the roughly one-in-a-thousand share of router power spent maintaining the network map.
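If you want to verify what is actually in effect after toggling those options, something like this should work (assuming fw4/nftables on 24.10):

# no output means no flowtable exists, i.e. no flow offloading is active
nft list flowtables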