Tips for VirtualBox 6.x i/o performance regarding our SDK?

Hi, I just switched from Hyper-V to VirtualBox 6.1 and noticed much slower I/O performance. This is with Debian 9 as guest and Windows 10 as host. I use the same SSD as under Hyper-V and have the Guest Additions installed; I also switched the I/O scheduler to "noop". The vDisk is a fixed-size VDI and I tried both ext4 and an F2FS partition, all mounted with noatime. I also tried different controllers and enabling/disabling "host I/O cache", all without any real improvement.
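For reference, the scheduler switch mentioned above can be checked and applied like this. This is just a sketch of config fragments; it assumes the virtual disk shows up as /dev/sda in the guest, and the fstab UUID is a placeholder:

```shell
# Show the scheduler currently in use for /dev/sda (the active one is in brackets).
cat /sys/block/sda/queue/scheduler

# Switch to "noop" for this boot (takes effect immediately, not persistent).
echo noop | sudo tee /sys/block/sda/queue/scheduler

# Example /etc/fstab line mounting the root partition with noatime
# (UUID is a placeholder):
# UUID=xxxx-xxxx  /  ext4  defaults,noatime  0  1
```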

Simple commands like "make menuconfig" take about 2-4 times longer, and the compile/build process also takes way longer.

So, does anyone have any tips regarding I/O performance (specifically for building/compiling images)? I would like to use VirtualBox over Hyper-V because of the simpler networking and because it does not lock down the hypervisor as L1.

PS: I may try a vmdk and attach a physical disk directly, to see if this helps.

OK, just tested a vmdk "raw" disk and it now performs similarly to Hyper-V, so something was slowing down VirtualBox 6.1 with virtual disks.
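For anyone wanting to reproduce the raw-disk setup: on a Windows host, a raw vmdk wrapper can be created with VBoxManage's internalcommands. This is a sketch; the file paths, VM/controller names, and the disk number are placeholders, and picking the wrong disk number can destroy data, so double-check it in Disk Management first:

```shell
# Run from an elevated prompt on the Windows host.
# \\.\PhysicalDrive1 is a placeholder - verify the disk number before running!
VBoxManage internalcommands createrawvmdk -filename "C:\VMs\raw-ssd.vmdk" -rawdisk \\.\PhysicalDrive1

# Attach the wrapper to the VM's SATA controller (VM and controller names are placeholders).
VBoxManage storageattach "Debian9" --storagectl "SATA" --port 1 --device 0 --type hdd --medium "C:\VMs\raw-ssd.vmdk"
```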


if you ever run the slow one again...

VBoxManage showvminfo --details VMNAME | grep -E '(Para|VT|Hardw|Pag|PAE|Memory)'

I just changed the disk, so here is the info:

VBoxManage showvminfo --details Debian9 | grep -E '(Para|VT|Hardw|Pag|PAE|Memory)'
Memory size                  4096MB
Page Fusion:                 disabled
PAE:                         enabled
Nested VT-x/AMD-V:           disabled
Hardware Virtualization:     enabled
Nested Paging:               enabled
Large Pages:                 enabled
VT-x VPID:                   enabled
VT-x Unrestricted Exec.:     enabled
Paravirt. Provider:          Default
Effective Paravirt. Prov.:   KVM

I don't run under Windows...

If anything will help, it's likely under the Settings > System > Acceleration tab... (my system seems to have large pages disabled; dunno if that's of any consequence)...
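Those acceleration settings can also be flipped from the command line instead of the GUI. A minimal sketch; the VM must be powered off, and "VMNAME" is a placeholder:

```shell
# Enable large pages and nested paging for the VM (VirtualBox 6.x modifyvm options).
VBoxManage modifyvm "VMNAME" --largepages on --nestedpaging on

# Verify the change took effect.
VBoxManage showvminfo --details "VMNAME" | grep -E '(Pag)'
```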

Will try to bench raw vs. VDI next time I build... (but as a very rough guide: last time I built in VirtualBox it seemed ~35%+ slower using VDI)... but I may not have allocated more than one CPU, and I think the .vdi was also on the same physical disk as the host. So I'll up the CPU count and place the .vdi on a non-host disk for a clearer figure...
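A quick way to get comparable numbers for such a bench from inside the guest (a rough sketch, not a proper benchmark; the file name is arbitrary, and conv=fdatasync makes dd flush to disk so the reported rate isn't just page cache):

```shell
# Sequential write: 256 MiB of zeros, flushed to disk before dd reports its rate.
dd if=/dev/zero of=bench.img bs=1M count=256 conv=fdatasync

# Clean up the test file afterwards.
rm -f bench.img
```

Running the same command in the raw-disk VM and the VDI VM gives a rough apples-to-apples throughput figure.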

Yeah, for me the overall I/O "slowness" was immediately noticeable, since I build a lot and am used to the "make menuconfig" and "make -j8" build output timings. So I noticed that things were progressing much slower, not just 35% in my case. I did not really notice any improvement switching from a dynamic VDI to a fixed one, which was the second thing I tried.

I will keep the raw disk, since I had a 120 GB SATA M.2 lying around, so it is perfect to attach to my OpenWrt dev VM. There is also the option to use just a raw partition, but I had some problems getting that to work.
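For the record, the raw-partition variant uses the same createrawvmdk command with an extra -partitions argument. A sketch; the disk and partition numbers are placeholders, and this is exactly the setup that can be fiddly on Windows hosts:

```shell
# Wrap only partition 2 of physical disk 1 instead of the whole disk
# (elevated prompt on the Windows host; numbers are placeholders).
VBoxManage internalcommands createrawvmdk -filename "C:\VMs\raw-part.vmdk" -rawdisk \\.\PhysicalDrive1 -partitions 2
```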

PS: Compared to Hyper-V, where you must offline a disk and only then can easily hand it down to the VM via the UI manager, the VirtualBox steps feel like a workaround. Especially since the disk is not offline under Windows 10, which means some partition/RAID/SSD managers could potentially corrupt the disk while it is also in use by the VM. The Hyper-V approach looks much safer in this case.
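That offlining step can also be done manually on the Windows host with a diskpart script, which would give the raw-disk VirtualBox setup the same safety margin. A sketch; the disk number is a placeholder, so verify it with "list disk" first:

```shell
rem Offline the physical disk so Windows tools won't touch it (elevated cmd prompt).
rem Disk 1 is a placeholder - run "diskpart" and "list disk" to confirm first!
(echo select disk 1 & echo offline disk) > offline.txt
diskpart /s offline.txt
```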
