Noob question on building OpenWrt

Hello,

I built OpenWrt successfully in Multipass on an M1 Mac for the x86/64 target (Intel N5105).
It worked, so that is great.

Now I have made some changes to the config, particularly the kernel config. I compiled the whole thing again on my VM (same version as the first time: OpenWrt 22.03.0), and I just used the newly produced openwrt-x86-64-generic-kernel.bin in place of vmlinuz in /boot/ (adding an entry in grub.cfg to select the old or new kernel).
The new kernel boots, but then I have no network connectivity and things generally misbehave (for example, reboot does nothing…).
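For reference, the extra grub entry looks roughly like this (just a sketch; I kept the boot arguments from the stock entry, and vmlinuznew is simply the name I gave the new kernel.bin on the boot partition):

    menuentry "OpenWrt (new kernel)" {
            linux /boot/vmlinuznew root=PARTUUID=... rootwait console=tty0 console=ttyS0,115200n8 noinitrd
    }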

Before I go crazy trying to figure out whether a config change is the reason for this behavior, maybe the problem is my way of doing this (perhaps booting an alternate kernel this way is what breaks it, and I should dd the whole boot partition instead, or even dd both boot and root, since links and libraries may be broken by flashing only boot and not root…).

Of course, using the original vmlinuz makes everything work again (just by selecting the original entry in grub).

Could someone clarify this for me?

My goal is to be able to upgrade my system without a long recovery process… (x86 is a bit of a pain compared to flash routers in that respect).
I use the usual boot and root file systems (but with the rootfs increased to 2 GB), and a third partition for my custom scripts and Docker bind volumes and data.

Thank you.

You need the root FS too, not only the kernel.

This might help: Sysupgrade help for x86_64 - #14 by frollic
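In practice that means writing the rootfs image from the same build as well, not only kernel.bin. A minimal sketch, assuming an ext4 rootfs image from bin/targets/x86/64/ (the exact file name depends on your image config) and /dev/sdX2 as a placeholder for the root partition you want to write (never the partition the running system is currently using as root):

    # copy the rootfs image that matches the new kernel onto the box, then:
    zcat openwrt-x86-64-generic-ext4-rootfs.img.gz | dd of=/dev/sdX2 bs=1M
    sync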

Great, thank you!

I found a way to do easy upgrades without hassle:
I have two disks in my device: one 512 GB NVMe drive (/dev/nvme0n1) and one 512 GB SSD drive (/dev/sda).

The NVMe drive is the main one (production), and has:
/dev/nvme0n1p1 (bootfs)
/dev/nvme0n1p2 (rootfs, production one)
/dev/nvme0n1p3 (custom partition mounted at /custom)

The SSD drive has:
/dev/sda1 (bootfs)
/dev/sda2 (rootfs, upgrade one)

I just need to:

  1. add the newly compiled kernel to /dev/nvme0n1p1 as /boot/boot/vmlinuznew

  2. make sure the grub.cfg on /dev/sda1 (/boot/boot/grub/grub.cfg) has an entry that boots vmlinuznew with the rootfs set to /dev/sda2
    NOTE: In the BIOS, I do have the NVMe disk set as the startup disk, but the grub config file that actually gets used is the one on the SSD (/dev/sda1). However, vmlinuz or vmlinuznew is read from nvme0n1p1 :woozy_face:

  3. dd the newly compiled rootfs (matching the new kernel) to /dev/sda2

  4. copy /etc (the one on the current NVMe rootfs) into the /dev/sda2 partition (to keep all my settings and configs).

Then I just reboot the device and select the grub entry for the SSD, et voilà, I can test the new version and roll back if needed.
If everything is fine, I just dd the new rootfs to /dev/nvme0n1p2 and copy /etc back (a rough command-level sketch of the steps is below).
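At the command level, the procedure looks roughly like this (just a sketch: the image names depend on the build config, the build outputs are assumed to have been copied onto the router, and /dev/nvme0n1p1 is mounted at /boot as usual):

    # 1. add the newly compiled kernel on the NVMe boot partition (mounted at /boot)
    cp openwrt-x86-64-generic-kernel.bin /boot/boot/vmlinuznew

    # 2. add a grub entry on the SSD boot partition that boots vmlinuznew
    #    with root=/dev/sda2 (copy the other arguments from the stock entry)
    mkdir -p /mnt/ssdboot && mount /dev/sda1 /mnt/ssdboot
    vi /mnt/ssdboot/boot/grub/grub.cfg
    umount /mnt/ssdboot

    # 3. write the rootfs that matches the new kernel to the SSD root partition
    zcat openwrt-x86-64-generic-ext4-rootfs.img.gz | dd of=/dev/sda2 bs=1M
    sync

    # 4. copy the current configuration onto the new rootfs
    mkdir -p /mnt/newroot && mount /dev/sda2 /mnt/newroot
    cp -a /etc/. /mnt/newroot/etc/
    umount /mnt/newroot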
