So, I upgraded to gigabit Ethernet, and I have tested with a laptop: I get about 900/900 Mbps.
I'm using a VM running OpenWrt x86_64... I move my configs over... and do some speed tests...
I'm getting about 4 Mbps up / 4 Mbps down.
...but it gets weirder:
Through a WireGuard connection, I'm getting 200/300.
...so I'm still missing some bandwidth there, but how in the world am I only getting 4/4 with normal LAN-to-WAN traffic, while the same client, sent through the same interfaces, passes WireGuard traffic at 200/300???
Basically, I want to increase the speed of my WAN traffic (obviously).
What hypervisor and host OS?
Which (emulated) network cards did you configure?
Just for comparison, I can achieve 934 MBit/s via iperf3 through a test VM (qemu-kvm, virtio-net-pci, with 2 cores and 1 GB RAM assigned, on a sandy-bridge i7-2600k/ linux v5.16 host); the throughput is limited by the rest of my wired network. Taking my wired (1000BASE-T) network out of the equation and testing the throughput between host and VM directly, I can achieve 26 GBit/s.
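In case it helps, the test itself is trivial (iperf3 is an installable package on OpenWrt; 192.168.1.1 stands in for the VM's LAN address here):

```
# on the OpenWrt VM (server side)
iperf3 -s

# on the test client
iperf3 -c 192.168.1.1        # client -> server direction
iperf3 -c 192.168.1.1 -R     # reverse: server sends, client receives
```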
It's been roughly 15 years since I last used VirtualBox, back on non-KVM-capable hardware…
I know the networking setup with qemu isn't quite as convenient (no GUI, manual bridge setup and tap interfaces), but it would be interesting to test that on the same hardware as well. The choice of the emulated network card (virtio vs fully emulated hardware) also has a major performance impact (block i/o as well, but that doesn't matter much for OpenWrt).
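If you do want to try the qemu route, a rough sketch of the manual setup might look like this (bridge, tap, interface and image names are just examples, not a drop-in recipe):

```
# create a bridge and attach the physical NIC to it
ip link add br0 type bridge
ip link set eth0 master br0
ip link set br0 up

# create a tap device for the VM and attach it to the same bridge
ip tuntap add tap0 mode tap
ip link set tap0 master br0
ip link set tap0 up

# boot OpenWrt with a paravirtualized virtio NIC (instead of an emulated e1000)
qemu-system-x86_64 -enable-kvm -m 1024 -smp 2 \
  -drive file=openwrt-x86-64.img,format=raw,if=virtio \
  -netdev tap,id=lan0,ifname=tap0,script=no,downscript=no \
  -device virtio-net-pci,netdev=lan0
```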
On my proxmox 7 (KVM-based) setup I have an OpenWrt VM that is connected to an MPTCP VM (an OpenWrt-based appliance that does multipath bonding to aggregate two WAN connections), and I routinely get 100-200 MBit/s down and 50-100 MBit/s up internet speeds, so I'm already well beyond your 4/4.
I'll have to test whether my setup is actually capable of gigabit on the local network, but I would be surprised if it isn't, especially since slh also said his KVM-based VMs can reach gigabit.
My understanding is that TSO and GSO in the virtio driver work by offloading the job to the actual hardware, and if the hardware or the host driver does not support it properly (Realtek is a common offender, Intel is usually good) then the VM will have issues and you will need to disable it.
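If you want to rule that out, the offloads can be checked and toggled from inside the VM with ethtool (an installable package on OpenWrt; eth0 is just an example interface):

```
# show the current offload settings
ethtool -k eth0 | grep -E 'tcp-segmentation|generic-segmentation|generic-receive'

# disable TSO/GSO/GRO as a test (runtime only, lost on reboot)
ethtool -K eth0 tso off gso off gro off
```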
I have used virt-manager in the past; it can also connect to a headless server to remote-control its virtualization capabilities (using SSH tunnels). I used it for a while before proxmox, and it's fine for a single headless hypervisor system.
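Pointing it at a remote hypervisor is a one-liner (user and hostname are examples):

```
virt-manager -c qemu+ssh://root@hypervisor.lan/system
```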
I do use OpenWrt/ x86_64 under qemu-kvm (no orchestration, other than some custom start scripts to save me the typing) regularly, to temporarily bring internet access to a walled-off VLAN (which normally has none) for updating. When I originally set it up, I didn't care about the speed that much, as my WAN was the limiting factor anyway, but now I can reliably achieve 1 GBit/s wire speed (~934 MBit/s) from that qemu-kvm instance (running on a 14 year old sandy-bridge i7-2600k host). This instance also doubles up for testing various OpenWrt related things, as the virtualization makes that easy to accomplish (and to reset afterwards).
About a month ago, I switched my main router to x86_64 (ivy-bridge celeron 1037u), with OpenWrt/ master running on the bare iron. While testing the setup, I could confirm that this ~9 year old ULV CPU, originally targeted at the mobile market, can easily achieve 1 GBit/s wire speed between its two Intel 82574L (e1000e) onboard network cards (I would have been happy with r8168 as well; I didn't know what cards to expect when I bought it), with SQM/ cake being active (~53% CPU load on one core, without even clocking all the way up). I've been very happy with this setup so far; it's totally bored handling my 400/200 MBit/s ftth WAN uplink.
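For reference, the SQM part boils down to a few lines in /etc/config/sqm, roughly like this (the interface name and exact rates are examples, shaped slightly below line rate):

```
# /etc/config/sqm -- a minimal cake setup
config queue 'wan'
        option enabled '1'
        option interface 'eth1'          # WAN-facing device
        option download '380000'         # kbit/s, a bit below the 400 MBit/s line rate
        option upload '190000'           # kbit/s, a bit below the 200 MBit/s line rate
        option qdisc 'cake'
        option script 'piece_of_cake.qos'
```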
Personally, I wouldn't want to rely on virtualization for the main router in a common household, as that's the single device that really must remain functional at all times - including the times when the hypervisor is off, updating, or broken.

Additionally, the router is the only device in my network that knows static DHCP leases and local DNS overrides and resolution, allowing me to quickly change the complete network topology as needed, without having to care about syncing the configuration with other systems (or a hypervisor underneath the VM…) - maybe aside from the managed switch(es) (but VLAN assignments tend to remain more stable for the majority of devices than IP assignments).

It's fine to manage 'optional' devices this way, e.g. other VM clusters or the update-once-a-month needs of an otherwise offline subnet, but imho not the sole bastion host in your network, which needs to remain functional for VoIP/ SIP, mail (-client usage), DHCP/ DNS and is needed for updating/ bootstrapping or fixing any system within the network.

Obviously this perception may be different in an enterprise network (or sufficiently complex enthusiast networks managed as such), but I personally don't want to treat any individual system in my home network as mission-critical to that extent (down to the hypervisor); obviously I do have preconfigured spares for router/ SIP pbx, etc. as cold standby (they won't give me 100% performance, but they'll work until I've sorted the problem).