Hello,
I've just set up a completely new OpenWrt 19.07.2 VM in Hyper-V (Windows 10 Pro x64) following this guide.
Basic information:
- modem: AVM FRITZ!Box 7520
- host hardware:
- CPU: AMD Ryzen 3200G
- MB: MSI A320
- 32 GiB RAM
- NIC 1: Intel X550-T --> connected to client laptop; in Hyper-V settings marked as not shared with host
- NIC 2: Intel X550-T#2 --> connected to modem; in Hyper-V settings marked as not shared with host
- WiFi 1: QCA AR9287
- WiFi 2: QCA 988x (WLE900VX)
- VM-settings:
- [eth0=virtual internal network switch --> first attempt; see results below. Don't do this!]
- [eth0=NIC 1 --> second attempt; working! (see the PowerShell sketch below this list)]
- eth1=NIC 2
- [eth2=NIC 1 --> first attempt]
- OpenWrt-settings:
- eth2 is bridged to lan
- all other settings are just the basic ones
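For reference, the working second attempt can be reproduced from PowerShell roughly like this (a sketch; the adapter names "NIC1"/"NIC2" and the VM name "OpenWrt" are placeholders for whatever your system shows):

```powershell
# One external switch per physical NIC; -AllowManagementOS $false is
# the GUI's "not shared with host"
New-VMSwitch -Name "LAN" -NetAdapterName "NIC1" -AllowManagementOS $false
New-VMSwitch -Name "WAN" -NetAdapterName "NIC2" -AllowManagementOS $false

# Attach the VM's adapters: eth0 -> LAN (NIC 1), eth1 -> WAN (NIC 2)
Add-VMNetworkAdapter -VMName "OpenWrt" -SwitchName "LAN"
Add-VMNetworkAdapter -VMName "OpenWrt" -SwitchName "WAN"
```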
It is working: both clients have internet access at full line speed.
But unfortunately I have the following connection problems:
- [I could not connect to the VM at 192.168.1.1 from either the client or the host PC: SSH connections were refused, iperf connections timed out, and I could not reach LuCI (I had installed luci-ssl beforehand and restarted uhttpd). Pinging only reported that the host could not be found.] --> Reason: see post #3; solved.
- [Ping was significantly higher when connected through the VM (45 ms) than through the physical router (~20 ms).] --> seems to have been an artifact; it went away after activating SR-IOV for the Intel X550-T in the BIOS (sketched below, after this list); solved.
- Cannot pass through the WiFi cards. I somehow knew it would get tricky at this point.
- Followed the infamous blog post from Microsoft: https://devblogs.microsoft.com/scripting/passing-through-devices-to-hyper-v-vms-by-using-discrete-device-assignment/ (the sequence is sketched below, after this list).
- Got the error shown in the addendum below.
- Digging deeper into this, it seems the following has to be present to pass devices through:
- Interrupt remapping: Intel’s VT-d with the Interrupt Remapping capability (VT-d2) or any version of AMD I/O Memory Management Unit (I/O MMU).
- DMA remapping: Intel’s VT-d with Queued Invalidations or any AMD I/O MMU.
- Access control services (ACS) on PCI Express root ports.
- The firmware tables must expose the I/O MMU to the Windows hypervisor. Note that this feature might be turned off in the UEFI or BIOS. For instructions, see the hardware documentation or contact your hardware manufacturer.
- Unfortunately I'd chosen the MSI MB just because it was cheap...
- It doesn't expose the relevant UEFI functionality. I ordered another board that I expected to work and will update this post accordingly.
- Found a board that supports ACS on PCI Express root ports: the Asus B450M PRO-GAMING. The WiFi cards can now be detached from the physical machine and attached to the VM. Still, the VM will not start, displaying error code <0xC035001E>, which means "A hypervisor feature is not available to the user." This might work on Windows Server 2016/2019, but that's way beyond what I was willing to pay. So in the end I'm giving up on this and will only use dedicated access points for WiFi access.
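Regarding the latency point above: once SR-IOV is active in the BIOS, the Hyper-V side can be checked and enabled roughly as follows (same placeholder names as before; note that IOV can only be set when a switch is created, not toggled afterwards):

```powershell
# Does the host support SR-IOV at all (and if not, why)?
Get-VMHost | Select-Object IovSupport, IovSupportReasons

# Recreate the switch with IOV enabled
New-VMSwitch -Name "LAN" -NetAdapterName "NIC1" -AllowManagementOS $false -EnableIov $true

# Give the VM's adapters an IOV weight > 0 so they get a virtual function
Set-VMNetworkAdapter -VMName "OpenWrt" -IovWeight 100
```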
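And for anyone retracing the pass-through attempt, this is the condensed DDA sequence from the Microsoft post (a sketch: the friendly-name filter is an assumption for my Atheros cards, adjust it to what Get-PnpDevice shows for you). On my boards it fails at the dismount step with the error shown in the addendum:

```powershell
# Locate the WiFi card and its PCI location path
$dev = Get-PnpDevice -FriendlyName "*Atheros*" | Select-Object -First 1   # assumption: matches the QCA card
$locationPath = ($dev | Get-PnpDeviceProperty DEVPKEY_Device_LocationPaths).Data[0]

# DDA requires the VM to turn off (not save) when the host shuts down
Set-VM -VMName "OpenWrt" -AutomaticStopAction TurnOff

# Disable the device on the host, dismount it, and hand it to the VM
Disable-PnpDevice -InstanceId $dev.InstanceId -Confirm:$false
Dismount-VMHostAssignableDevice -Force -LocationPath $locationPath   # <-- fails here for me
Add-VMAssignableDevice -VMName "OpenWrt" -LocationPath $locationPath
```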
Any help is warmly welcomed!
Thank you, at least for reading,
ssdnvv
addendum:
Error message when trying to pass through a WiFi card:
PS C:\Users\Server> Dismount-VMHostAssignableDevice -force -LocationPath $locationPath
Dismount-VMHostAssignableDevice : The operation failed.
The current configuration does not allow for OS control of the PCI Express bus. Please check your BIOS or UEFI settings.
At line:1 char:1
+ Dismount-VMHostAssignableDevice -force -LocationPath $locationPath
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : InvalidArgument: (:) [Dismount-VMHostAssignableDevice], VirtualizationException
+ FullyQualifiedErrorId : InvalidParameter,Microsoft.HyperV.PowerShell.Commands.DismountVMHostAssignableDevice
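For completeness: if the dismount succeeds on your hardware and you want to return the card to the host, the reverse direction looks roughly like this (again a sketch; $locationPath as in the snippet above):

```powershell
# Detach from the VM, remount on the host, re-enable the driver
Remove-VMAssignableDevice -VMName "OpenWrt" -LocationPath $locationPath
Mount-VMHostAssignableDevice -LocationPath $locationPath
Get-PnpDevice -FriendlyName "*Atheros*" | Enable-PnpDevice -Confirm:$false
```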