[25.12.0] Issues on shutdown x86 with vagrant-libvirt

Please don't hit me!

But I'm encountering issues with Vagrant and vagrant-libvirt while shutting down an instance.
As far as I can read strace, Vagrant and/or vagrant-libvirt is waiting on some file descriptor :shrug: and declares the instance "inaccessible". However, virsh still lists the instance as "running" and can "force off" it; after that, vagrant status is fine again.
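The recovery sequence described above can be sketched as a small helper (hedged: the virsh and vagrant invocations are the standard ones, but the domain name is whatever vagrant-libvirt generated, usually directory_machine; VIRSH/VAGRANT are overridable only so the steps can be dry-run):

```shell
#!/bin/sh
# recover_vm: sketch of the manual recovery described above - confirm
# libvirt still sees the domain running, force it off at the libvirt
# level, then check that vagrant's view is consistent again.
# VIRSH/VAGRANT default to the real tools; override for a dry run.
VIRSH="${VIRSH:-virsh}"
VAGRANT="${VAGRANT:-vagrant}"

recover_vm() {
    domain="$1"
    "$VIRSH" list --all          # instance still shows as "running" here
    "$VIRSH" destroy "$domain"   # hard power-off, the "force off" step
    "$VAGRANT" status            # afterwards vagrant status is fine again
}
```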

Anyone else having issues with shutting down virtualized x86 images, too?

PS: Yes, I should check the serial output first...

PPS: Yes, yes, of course this was never an issue with 23.x and 24.x... But I'm not that deep into low-level hardware events...

What's causing this is a change in the behavior of BusyBox's halt applet. Before v25 it used to halt and power off instead of just halting. Looks like in v25 it now works exactly as it always should have:

root@OpenWrt:~# halt --help
BusyBox v1.37.0 (2026-03-03 00:14:15 UTC) multi-call binary.

Usage: halt [-d DELAY] [-nf]

Halt the system

        -d SEC  Delay interval
        -n      Do not sync
        -f      Force (don't go through init)
root@OpenWrt:~# poweroff --help
BusyBox v1.37.0 (2026-03-03 00:14:15 UTC) multi-call binary.

Usage: poweroff [-d DELAY] [-nf]

Halt and shut off power

        -d SEC  Delay interval
        -n      Do not sync
        -f      Force (don't go through init)
root@OpenWrt:~# 

Unfortunately vagrant uses halt instead of poweroff: https://github.com/hashicorp/vagrant/blob/main/plugins/guests/openwrt/cap/halt.rb#L10
Just checked by temporarily editing /opt/vagrant/embedded/gems/gems/vagrant-2.4.9/plugins/guests/openwrt/cap/halt.rb - it shuts down fine with poweroff.

Looks like an easy fix, but it will depend on how hard it is to get a PR accepted in the vagrant project.
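Until that lands upstream, the same idea can be applied from the host side. A hedged sketch (the machine name and the use of plain `poweroff` are assumptions; VAGRANT is overridable only so the steps can be dry-run):

```shell
#!/bin/sh
# shutdown_vm: host-side workaround sketch - ask the guest itself to
# poweroff over ssh (since BusyBox halt in v25 no longer powers off),
# then run "vagrant halt" so vagrant reconciles the machine state.
# VAGRANT defaults to the real tool; override (e.g. VAGRANT=echo) to dry-run.
VAGRANT="${VAGRANT:-vagrant}"

shutdown_vm() {
    machine="${1:-default}"
    # The ssh connection drops when the guest powers off, so ignore errors.
    "$VAGRANT" ssh "$machine" -c 'poweroff' 2>/dev/null || true
    "$VAGRANT" halt "$machine"
}
```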


A thousand thanks for debugging and following the lead.
I may be able to work around it with an explicit power off or shutdown via ssh[1]... Thanks again for the hint about the BusyBox change, too. Let me check this one of these days.

[1] I already need a wrapper around vagrant halt because the libvirt provider does not support parallel execution like vagrant up does, so adding yet another wrapper is not a deal breaker. Oh boy.
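The wrapper mentioned in [1] could look roughly like this (hedged: just a serial loop over machine names, since vagrant-libvirt does not parallelize halt the way vagrant up is parallelized; VAGRANT is overridable only so the loop can be dry-run):

```shell
#!/bin/sh
# halt_all: sketch of a serial "vagrant halt" wrapper - halts each named
# machine one at a time and reports failures without aborting the loop.
# VAGRANT defaults to the real tool; override (e.g. VAGRANT=echo) to dry-run.
VAGRANT="${VAGRANT:-vagrant}"

halt_all() {
    for machine in "$@"; do
        echo "halting $machine" >&2
        "$VAGRANT" halt "$machine" || echo "halt failed for $machine" >&2
    done
}
```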

Edit: PS: Welcome.

No problem :slightly_smiling_face:
I'm considering switching from RouterOS to OpenWrt on my MikroTik routers and need a virtual test lab for trying everything out. I also stumbled upon this problem. A wrapper like this worked fine:

rm /sbin/halt && printf '#!/bin/sh\nexec /sbin/poweroff "$@"\n' > /sbin/halt && chmod +x /sbin/halt

BTW, which box do you use for v25? I didn't find anything in the Vagrant registry and had to fix https://github.com/vladimir-babichev/vagrant-openwrt-box to create a working local box. But I would definitely prefer using some existing box instead.

I build an image with the Image Builder, convert it to qcow2, and from there build a vagrant box.

Why? You need modifications to the OpenWrt config at least.
Also, I would not use anything other than the libvirt provider.
I also find the libvirt networking far more usable than VirtualBox's.

Btw that's the script I use https://github.com/vagrant-libvirt/vagrant-libvirt/blob/main/tools/create_box.sh
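Pulled together, that flow is roughly the following (hedged sketch: the raw image file name, the box names, and invoking create_box.sh from the current directory are all assumptions; RUN=echo turns it into a dry run that only prints the commands):

```shell
#!/bin/sh
# build_box: sketch of the Image Builder -> qcow2 -> vagrant box flow
# described above, using vagrant-libvirt's create_box.sh tool.
# RUN defaults to empty (execute); set RUN=echo to only print the steps.
RUN="${RUN:-}"

build_box() {
    raw_img="$1"
    qcow2_img="${raw_img%.img}.qcow2"
    # Convert the raw Image Builder output to qcow2 for libvirt.
    $RUN qemu-img convert -f raw -O qcow2 "$raw_img" "$qcow2_img"
    # Package the disk image as a libvirt-provider vagrant box.
    $RUN ./create_box.sh "$qcow2_img" openwrt.box
    # Register the box locally under an assumed name.
    $RUN vagrant box add --name openwrt-local openwrt.box
}
```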

@cheretbe

Btw that's the script I use https://github.com/vagrant-libvirt/vagrant-libvirt/blob/main/tools/create_box.sh

Well, generally I avoid creating VMs by hand :slightly_smiling_face: Especially test labs of this kind. I tend to forget after a while how something was created and all the details. The beauty of vagrant is that everything is described in code and is therefore easy to recreate and inspect.
When the base box is published on Vagrant Cloud, there is no need to rebuild it; vagrant will download and use it as needed.
Config modifications also can (and should) be done in code using provisioners, either shell scripts or Ansible. There is an (outdated) example. OpenWrt modules are now part of community.general, but the general principle is the same.
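For instance, the halt-to-poweroff wrapper from earlier in the thread could be applied as a shell provisioner script instead of patching the guest by hand (hedged sketch; the target path is parameterized only so the script can be exercised outside a guest):

```shell
#!/bin/sh
# install_halt_wrapper: example of the "config in code" idea - a script
# usable as a vagrant shell provisioner that replaces the guest's halt
# with a wrapper that calls poweroff instead.
install_halt_wrapper() {
    target="${1:-/sbin/halt}"
    rm -f "$target"
    # halt in BusyBox v25 only halts; redirect it to poweroff.
    printf '#!/bin/sh\nexec /sbin/poweroff "$@"\n' > "$target"
    chmod +x "$target"
}
```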

But building, publishing, and maintaining the boxes takes some effort. Not sure if I'll have free time for this, but I'll try to find some in a short while.