[SOLVED] Install issue: 24.10, x64, NVMe

So I am installing OpenWrt 24.10 onto an x64 device (Lenovo M720q, if that matters) with an NVMe drive, legacy boot enabled. The device boots fine from a USB stick (ext4, non-UEFI image). The NVMe drive shows up in lsblk as /dev/nvme0n1.

Once the device is up and running, I go:

cd /tmp
wget -O fw.img.gz https://downloads.openwrt.org/releases/24.10.0/targets/x86/64/openwrt-24.10.0-x86-64-generic-ext4-combined.img.gz
gunzip fw.img.gz
dd if=fw.img of=/dev/nvme0n1

dd prints the usual summary (records in, records out) and exits. I halt, remove the USB stick, and try to boot from the NVMe drive. No joy; the device attempts a network boot instead (which sits after the NVMe drive in the BIOS boot order). In other words, the NVMe drive is not recognized as a bootable device.

I repeat all the steps above (boot from the USB stick, etc.), but rather than halting, I run fdisk /dev/nvme0n1. Surprise: fdisk flashes a red "disk is currently in use" warning. On a whim, while still in fdisk, I issue a w command (write the partition table), then q (quit fdisk), then halt. I remove the USB stick, try to boot from NVMe, and... it boots!

So correct me if I'm wrong, but it seems dd missed a step somewhere. Could this be because I was missing a package required to handle NVMe drives correctly? If so, what is that package?

On a totally unrelated topic, I would like to thank the developers for incorporating i915 firmware into 24.10...

Have you tried UEFI mode? I am using a similar setup on NVMe with x86_64, and UEFI works perfectly.

No. I use the root partition resizer script, which seems to have an interoperability problem with UEFI: it changes the root partition's UUID when resizing, so UEFI freaks out on reboot, demanding keyboard input...

Is this the filesystem's UUID? Because that can be updated easily via tune2fs from the resizer script: capture the old UUID, resize, then update the UUID to match. If it's a partition (not filesystem) UUID, you could look for the equivalent parted commands.
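Roughly like this in the resizer script (an untested sketch; it assumes the root filesystem lives on /dev/nvme0n1p2):

OLD_UUID="$(tune2fs -l /dev/nvme0n1p2 | awk '/Filesystem UUID/ {print $3}')"   # capture the current filesystem UUID
# ... resize the partition and filesystem here ...
tune2fs -U "$OLD_UUID" /dev/nvme0n1p2   # write the captured UUID back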

I wonder if you get the same problem when you write the image with the blocksize argument as written on the wiki?

 bs=1M
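That is, something like (reusing the paths from your post):

dd if=fw.img of=/dev/nvme0n1 bs=1M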

I don't know the low-level details of NVMe storage, but I guess there is the possibility that there is already data in the final storage block, and you end up with a partition that does not get recognised properly. By using the bs=1M argument, I guess the remainder of the last block gets zeroed out, and this results in a working partition?

Not sure about this, but when installing on a Google OnHub there were sometimes weird issues, and zeroing out the eMMC storage and writing with bs=1M fixed those.

If you can boot via USB, why do you need the resize script? You can run the same commands manually and see their output, instead of not knowing what the script does as it runs during reboot.
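For example, booted from the USB stick, the manual version would be something like this (a rough sketch; it assumes the root partition is the second one on the NVMe drive and is ext4):

parted -s /dev/nvme0n1 resizepart 2 100%   # grow partition 2 to fill the disk
e2fsck -f /dev/nvme0n1p2                   # check the unmounted filesystem first
resize2fs /dev/nvme0n1p2                   # grow the ext4 filesystem to the new partition size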

Also, I highly recommend changing /boot/grub/grub.cfg to use root=/dev/xxxx in the linux command line instead of the default UUID mode. The UUID does not add any value, but it complicates life, as in the case of a partition resize, so you are better off using fixed device names, IMHO.
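For example (the PARTUUID below is illustrative, other kernel arguments omitted, and the device name assumes the rootfs is the second partition on the NVMe drive):

linux /boot/vmlinuz root=PARTUUID=12345678-02 rootwait noinitrd   # before: UUID mode
linux /boot/vmlinuz root=/dev/nvme0n1p2 rootwait noinitrd         # after: fixed device name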

Or you can create a custom image with whatever size you fancy.

Because I occasionally need to upgrade devices to which there is no convenient physical access.

Thank you! I really like this idea. Testing it out on one of my devices now...

I'm using this on all my x86 OpenWrt devices; no issues, ever.

UEFI is hard-required for booting from NVMe; BIOS/legacy CSM has no NVMe drivers.

Ah, that explains it all... Thank you! I think I have my solution now: install the UEFI image, then edit grub.cfg to identify the root filesystem by its /dev/* name rather than by UUID.

This topic was automatically closed 10 days after the last reply. New replies are no longer allowed.