OpenWrt within Hyper-V VM - connectivity / other issues

Hello,

I've just set up a completely new OpenWrt 19.07.2 VM in Hyper-V (Windows 10 Pro x64) following this guide.
Basic information:

  • modem: AVM fritz!box 7520
  • host hardware:
    • CPU: AMD Ryzen 3200G
    • MB: MSI A320
    • 32 GiB RAM
    • NIC 1: Intel X550-T --> connected to client laptop; in Hyper-V settings marked as not shared with host
    • NIC 2: Intel X550-T#2 --> connected to modem; in Hyper-V settings marked as not shared with host
    • WiFi 1: QCA AR9287
    • WiFi 2: QCA 988x (WLE900VX)
  • VM-settings:
    • [eth0=virtual internal network switch --> first attempt; see results below. Don't do this!]
    • [eth0=NIC 1 --> second attempt; working! See the PowerShell sketch below.]
    • eth1=NIC 2
    • [eth2=NIC 1 --> first attempt]
  • OpenWrt-settings:
    • eth2 is bridged to lan
    • all other settings are just the basic ones
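For reference, this is roughly how the working (second-attempt) layout looks in PowerShell. The adapter names "Ethernet"/"Ethernet 2" and the VM name "OpenWrt" are placeholders (check Get-NetAdapter for the real names):

# One external switch per physical NIC, not shared with the host, SR-IOV enabled
New-VMSwitch -Name "sw-lan" -NetAdapterName "Ethernet"   -AllowManagementOS $false -EnableIov $true
New-VMSwitch -Name "sw-wan" -NetAdapterName "Ethernet 2" -AllowManagementOS $false -EnableIov $true
# Attach them to the VM; OpenWrt numbers the adapters in the order they appear
Add-VMNetworkAdapter -VMName "OpenWrt" -SwitchName "sw-lan" -Name "lan"
Add-VMNetworkAdapter -VMName "OpenWrt" -SwitchName "sw-wan" -Name "wan"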

It is working; both clients have internet access at full line speed.
But unfortunately I have the following connection problems:

  1. [I cannot connect to the VM at 192.168.1.1 from either the client or the host PC: SSH access is refused, iperf connections time out, and I cannot connect to LuCI (I installed luci-ssl beforehand and restarted uhttpd). And when trying to ping, it says "the host cannot be found".] --> Reason: see post #3; solved.
  2. [Ping is significantly higher when connected to the VM (45 ms) than to the physical router (~20 ms).] --> seems to have been an artifact and is gone after activating SR-IOV for the Intel X550-T inside the BIOS; solved.
  3. Cannot pass through the WiFi cards. I somehow knew it would get tricky at this point.
    1. Followed the infamous blog post from Microsoft: https://devblogs.microsoft.com/scripting/passing-through-devices-to-hyper-v-vms-by-using-discrete-device-assignment/.
    2. Got the error shown in the addendum below.
    3. Digging deeper into this, it seems the following has to be present to pass devices through (see the sketch after this list):
      • Interrupt remapping: Intel’s VT-d with the Interrupt Remapping capability (VT-d2) or any version of AMD I/O Memory Management Unit (I/O MMU).
      • DMA remapping: Intel’s VT-d with Queued Invalidations or any AMD I/O MMU.
      • Access control services (ACS) on PCI Express root ports.
      • The firmware tables must expose the I/O MMU to the Windows hypervisor. Note that this feature might be turned off in the UEFI or BIOS. For instructions, see the hardware documentation or contact your hardware manufacturer.
    4. Unfortunately I'd chosen the MSI MB just because it was cheap... :money_mouth_face: :face_with_symbols_over_mouth: - it doesn't have the relevant UEFI functionality. I ordered another board which I believed would work and will adjust this post accordingly.
    5. Found a board that supports ACS on PCI Express root ports: Asus B450M PRO-GAMING. The WiFi cards can be detached from the physical machine and attached to the VM, but the VM still will not start, displaying error code <0xC035001E>, meaning a hypervisor feature is not available to the user. This might work on Windows Server 2016/2019, but that's way beyond what I was willing to pay. So in the end I'm giving up on this and will only use dedicated access points for WiFi access.
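For reference, the passthrough attempt boils down to these steps from the blog post. This is just a sketch; the Qualcomm name pattern and the VM name "OpenWrt" are placeholders:

# Find the PCI location path of the WiFi card
$dev = Get-PnpDevice -FriendlyName "*Qualcomm*" | Select-Object -First 1
$locationPath = (Get-PnpDeviceProperty -KeyName DEVPKEY_Device_LocationPaths -InstanceId $dev.InstanceId).Data[0]

# DDA requires the VM to hard-stop instead of saving state
Set-VM -Name "OpenWrt" -AutomaticStopAction TurnOff

# Disable on the host, dismount from the host, assign to the VM
Disable-PnpDevice -InstanceId $dev.InstanceId -Confirm:$false
Dismount-VMHostAssignableDevice -Force -LocationPath $locationPath   # <-- the step that fails here
Add-VMAssignableDevice -LocationPath $locationPath -VMName "OpenWrt"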

Any help is warmly welcomed! :slightly_smiling_face:
Thank you, at least for reading,
ssdnvv

addendum:

Error message when trying to pass through the WiFi card:
PS C:\Users\Server> Dismount-VMHostAssignableDevice -force -LocationPath $locationPath
Dismount-VMHostAssignableDevice : The operation failed.
The current configuration does not allow for OS control of the PCI Express bus. Please check your BIOS or UEFI settings.
At line:1 char:1
+ Dismount-VMHostAssignableDevice -force -LocationPath $locationPath
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : InvalidArgument: (:) [Dismount-VMHostAssignableDevice], VirtualizationException
    + FullyQualifiedErrorId : InvalidParameter,Microsoft.HyperV.PowerShell.Commands.DismountVMHostAssignableDevice

It seems the ping issues are gone, so I deleted that part in the OP.

For the connectivity issue I found out the following:
On my Windows 10 client, the output of route print is as follows (I translated the text from German, so the terms may not be exact):

[...]
IPv4 Route Table
===========================================================================
Active Routes:
Network Destination          Netmask          Gateway        Interface  Metric
          0.0.0.0          0.0.0.0    192.168.135.1    192.168.135.7      25
        127.0.0.0        255.0.0.0          On-link        127.0.0.1     331
        127.0.0.1  255.255.255.255          On-link        127.0.0.1     331
  127.255.255.255  255.255.255.255          On-link        127.0.0.1     331
    192.168.135.0  255.255.255.240          On-link    192.168.135.7     281
    192.168.135.7  255.255.255.255          On-link    192.168.135.7     281
   192.168.135.15  255.255.255.255          On-link    192.168.135.7     281
        224.0.0.0        240.0.0.0          On-link        127.0.0.1     331
        224.0.0.0        240.0.0.0          On-link    192.168.135.7     281
  255.255.255.255  255.255.255.255          On-link        127.0.0.1     331
  255.255.255.255  255.255.255.255          On-link    192.168.135.7     281
===========================================================================
[...]

ipconfig:

   IPv4 Address. . . . . . . . . . . : 192.168.135.7
   Subnet Mask . . . . . . . . . . . : 255.255.255.240
   Default Gateway . . . . . . . . . : 192.168.135.1

So the reason I cannot reach my VM at 192.168.1.1 must be related to this curiosity.
I expected 192.168.1.1 to work because, looking inside the VM, route prints 192.168.100.1 as the default gateway on eth1 (my ISP modem, which I can also reach from my client machine at 192.168.100.1) and 192.168.1.0 for br-lan.
So this has to be Hyper-V related (at least in my eyes).
But I also cannot access this 192.168.135.0 subnet: ping times out, the SSH connection gets refused, and when trying to open 192.168.135.1 in the browser (no matter whether Firefox or Edge) a completely blank page is shown.
Maybe a Hyper-V-experienced user can help me find out why my client doesn't get a 192.168.1.0/24-based address/route - I never defined that 192.168.135.0 range when setting up the VM...
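In case someone wants to retrace this: one can at least check on the host which virtual switch owns which host-side address (switch and interface names will of course differ per machine):

Get-VMSwitch | Select-Object Name, SwitchType
Get-NetIPAddress -AddressFamily IPv4 |
    Where-Object { $_.InterfaceAlias -like "vEthernet*" } |
    Select-Object InterfaceAlias, IPAddress, PrefixLength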

I finally found the culprit:
Setting the Hyper-V internal switch as eth0 (the first defined network card) somehow set up this weird IP address. When using NIC1 as eth0, everything works as expected.
Ping and speeds are just fine :smile:
The next step will be to pass the WiFi cards through to the VM - I will adjust the OP accordingly as soon as I make progress.

In OpenWrt, the first interface (eth0) is set up as LAN by default. The second (eth1) is set up as WAN.
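For reference, on a default x86 image /etc/config/network boils down to roughly this (19.07 defaults; exact contents may differ slightly):

config interface 'lan'
        option type 'bridge'
        option ifname 'eth0'
        option proto 'static'
        option ipaddr '192.168.1.1'
        option netmask '255.255.255.0'

config interface 'wan'
        option ifname 'eth1'
        option proto 'dhcp'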

Good luck in your further setup and glad you figured it out.

Also, feel free to reference the official Wiki: https://openwrt.org/docs/guide-user/virtualization/start


Thanks, I was aware of this.
The funny thing is - when setting the external adapter (NIC1) to be the first interface (eth0) and the internal adapter (vEthernet) to be the third (or later) interface (eth2/eth3/...) and bridging that to the lan interface, it works as expected. I suppose that has something to do with the DHCP-server functionality of Hyper-V internal switches. This might be adjustable with PowerShell commands, but I didn't invest more time in this area.
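If anyone wants to experiment: a user-created internal switch (unlike the built-in "Default Switch" on Windows 10, which brings its own NAT/DHCP) comes without any address services. A minimal sketch, with the VM name again a placeholder:

New-VMSwitch -Name "sw-internal" -SwitchType Internal
Add-VMNetworkAdapter -VMName "OpenWrt" -SwitchName "sw-internal" -Name "mgmt"
# Optionally give the host-side vEthernet adapter an address in OpenWrt's LAN
New-NetIPAddress -InterfaceAlias "vEthernet (sw-internal)" -IPAddress 192.168.1.2 -PrefixLength 24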

Thanks - the next issue was hiding right behind the first one :crazy_face: But this time it is a hardware/UEFI issue which cannot be solved on that MB. I had to order another one and adjusted the OP accordingly.

Unfortunately Hyper-V is not referenced in there. Maybe I'll improve that situation once I finish the setup successfully...


I finally gave up on trying to pass the WiFi cards through to the OpenWrt guest - I found a µATX MB that provides the necessary firmware functionality (Asus B450M PRO-GAMING), with which I can attach the respective device to the OpenWrt VM, but unfortunately it still fails for a reason I cannot spot (error code <0xC035001E>). Maybe it works with Windows Server, maybe not.
But the base functionality is working great - the trick is to just use Intel NICs that provide SR-IOV functionality, activate it within the firmware, and then have the virtual cards directly attached to the OpenWrt VM (see the sketch below). Maybe later on I'll post the speeds over my 10GbE connection.
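A sketch of the SR-IOV side, assuming the switches were created with -EnableIov $true (as in the sketch in the OP) and the adapters were named "lan"/"wan":

# Request a virtual function for each adapter
Set-VMNetworkAdapter -VMName "OpenWrt" -Name "lan" -IovWeight 100
Set-VMNetworkAdapter -VMName "OpenWrt" -Name "wan" -IovWeight 100
# Verify on the host that SR-IOV is active and a VF was handed out
Get-NetAdapterSriov
Get-VMNetworkAdapter -VMName "OpenWrt" | Select-Object Name, IovWeight, Status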

Outstanding activities:

  • Activate jumbo frames (haven't tried yet; see the note below)
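For the jumbo-frames item, the host side would presumably look like this ("Ethernet" and the 9014-byte value are placeholders; *JumboPacket is the inbox keyword for Intel NICs). The MTU on the OpenWrt interfaces would have to be raised as well:

Set-NetAdapterAdvancedProperty -Name "Ethernet" -RegistryKeyword "*JumboPacket" -RegistryValue 9014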

What worked for me: I used an internal switch and an external switch not connected to the OS. When adding the Ethernet interfaces in the Hyper-V settings, you have to set a static MAC address to know which one to map to eth0 and eth1 in OpenWrt. The interface names will be the same and can't be changed, so the only way to identify the interfaces is by their MAC address (see the sketch below). The external switch has to be set to wan (eth1) and the internal one to eth0; in OpenWrt, br-lan has to be set to eth0 and wan to eth1.
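A sketch of pinning the MACs from the host side - the VM and adapter names as well as the MAC values (from the Hyper-V 00:15:5D range) are placeholders:

Set-VMNetworkAdapter -VMName "OpenWrt" -Name "lan" -StaticMacAddress "00155D000100"
Set-VMNetworkAdapter -VMName "OpenWrt" -Name "wan" -StaticMacAddress "00155D000101"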