OpenWrt x86 with RAID1 setup

I have used OpenWrt for quite some time. A friend of mine is doing some work for a non-profit, and I am trying to help set something up for him to use there. They have ZERO money currently, so a used PC is about all that is available.

Question: is there a how-to or a document somewhere about installing OpenWrt x86 in a RAID 1 setup? I see it has mdadm, so it "should" be able to do it, but I am not sure how to set that up AND get it to boot off of the created array. Can anyone point me in the right direction, or is it even possible? The only reason for the RAID is that if the drive goes...then the network goes...and this is a school for VERY special needs children...it just has to work for their classes.

Thanks for the help!

The setup should be the same as for any Linux software RAID1 config.

If you're looking for a "ZERO effort" (and zero cost) solution, do away with the RAID requirement and use a plain disk (plus a backup on a USB stick, if you want).
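Roughly, a plain mdadm RAID1 looks like this on any Linux box (sketch only - /dev/sda, /dev/sdb and the partition numbers are placeholders, adjust to your disks):

  # Create a RAID1 array from one partition on each disk:
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2

  # Put a filesystem on the array and check its state:
  mkfs.ext4 /dev/md0
  cat /proc/mdstat
  mdadm --detail /dev/md0

Getting OpenWrt to actually boot from such an array is the hard part, see below.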

--
Adding RAID support for OpenWrt (the running OS itself, not later-mounted data partitions), including boot, sysupgrade and UEFI support (especially the latter is still a largely unsolved mystery, even for general-purpose distributions), would require quite significant changes and low-level development. Admittedly, the situation gets a lot easier if you use a hardware RAID controller, but then we're far away from zero cost (and probably not quite zero effort either).

Ubuntu has had working UEFI support for a couple of years now.

Linux and friends won't really have a choice but to solve this soon if they want to stay on the market; new HP laptops don't even have legacy boot any more, and legacy boot will never return once it's been removed.

And for the thread question:
Why not simply drop the RAID? The only point of RAID 1 is redundancy, so you keep running if one drive fails; it isn't a backup. But does that matter for OpenWrt?

I don't know if you have an HDD or an SSD, but SSDs don't really die from time to time the way the old mechanical drives did.

Just set up the router's configuration and save a backup of it.
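On OpenWrt the backup is a one-liner (the path and the default LAN address 192.168.1.1 are just examples):

  # On the router: pack the current configuration into an archive
  sysupgrade -b /tmp/backup-openwrt.tar.gz

  # From your PC: copy the archive somewhere safe
  scp root@192.168.1.1:/tmp/backup-openwrt.tar.gz .

If the disk ever dies, reinstall, restore the archive with 'sysupgrade -r' (or the LuCI backup page), and you're back up.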

I didn't say UEFI would be a problem, but UEFI booting from software RAID is.

Let's go back to legacy BIOS first and look at how software RAID works there:

  • bootsector written to all RAID members
  • /boot/ on RAID1
    (but still mirrored, so readable individually without having to know about the RAID array)
  • OS on any flavour of RAID

The important parts for booting are mirrored on all disks (bootsector embedded behind the partition table and/or a BIOS_BOOT partition (ef02)); if one disk fails (and assuming the board is smart enough not to wait forever scanning the half-dead disk, which is not a given), booting continues from one of the remaining disks.
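A rough sketch of that layout (sda/sdb and the partition numbers are placeholders, not a tested recipe):

  # /boot as RAID1 with the metadata at the end of the members (1.0),
  # so each member is also readable individually as a plain filesystem:
  mdadm --create /dev/md0 --level=1 --metadata=1.0 --raid-devices=2 /dev/sda2 /dev/sdb2
  mkfs.ext4 /dev/md0        # becomes /boot

  # Embed the GRUB bootsector on *every* RAID member disk
  # (this is what lets booting survive a dead disk):
  grub-install --target=i386-pc /dev/sda
  grub-install --target=i386-pc /dev/sdb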

With UEFI, you need to have one EFI System Partition (ESP, ef00), which contains your bootloader (grub2), which then loads your OS from RAID.

Assuming you have hardware RAID, the UEFI 'BIOS' and your operating system only see a single disk (the array), all good - failure modes are handled transparently by the hardware RAID controller.

With software RAID, the UEFI 'BIOS' only sees the individual disks; it doesn't know anything about the RAID itself. This means you need multiple ESPs (first problem: not all mainboards like this, and the UEFI boot spec only defines a single ESP), the boot order is defined in EFI variables (BootOrder, BootCurrent and multiple entries for Boot000x), hardcoding the bootloader to load (at a point where the UEFI firmware doesn't know anything about your RAID array yet).
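To illustrate, this is the kind of per-disk firmware bookkeeping involved (disk/partition numbers and the loader path are examples only):

  # Show BootOrder, BootCurrent and the Boot000x entries:
  efibootmgr -v

  # Register one boot entry per ESP, i.e. per disk, by hand:
  efibootmgr --create --disk /dev/sda --part 1 --label "grub (disk A)" --loader '\EFI\debian\grubx64.efi'
  efibootmgr --create --disk /dev/sdb --part 1 --label "grub (disk B)" --loader '\EFI\debian\grubx64.efi'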

From a technical perspective, this still -kind of- works, if your mainboard firmware can cope with multiple ESPs and falls back 'silently' to the next configured ESP when it encounters a missing (failed) disk. The problem is with the bootloader (grub) installation and upgrade orchestration, which has to keep all 2+ ESPs in sync (/EFI/BOOT/BOOTX64.EFI and /boot/grub/x86_64-efi/ getting out of sync is not pretty; boot failures and recovery from removable media are almost guaranteed) and update the individual EFI variables accordingly. At least for Debian/Ubuntu I know that the grub2 packages can not handle this[0]; I'm not sure about the situation for the other main distributions (Arch, Gentoo, Fedora, Mageia, Mandriva, openSUSE, whatever).
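Just to give an idea of what that orchestration would have to do after every bootloader update (crude sketch, the second mountpoint is made up):

  # Primary ESP is mounted at /boot/efi; mirror it to the second disk's ESP:
  mkdir -p /mnt/esp2
  mount /dev/sdb1 /mnt/esp2
  rsync -a --delete /boot/efi/ /mnt/esp2/
  umount /mnt/esp2

...and then re-check the EFI boot entries, on every kernel/grub upgrade, forever - and as said, at least on Debian/Ubuntu no package does this for you.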

Fake RAID is a different can of worms. If working properly, it can present a single ESP to the UEFI 'BIOS', akin to the hardware RAID situation, but you're locked to your mainboard vendor here - in other words, don't go there[1]. And OS support (especially thinking about OpenWrt here) gets rather tricky…

--
[0] About a decade ago, I manually set this up for testing on Debian with a mixture of grub2 and gummiboot (now systemd-boot): grub2 installed to the first disk, gummiboot to the second (RAID1) disk. That mostly worked without digging too deep into the distribution's bootloader orchestration (the premise was upgrade safety over years and multiple dist-upgrades, so changing too much of the packaged upgrade scripts was not an option). It held up in my test scenarios, but it wasn't anything I felt comfortable deploying on remote systems - at that point, BIOS booting was sadly the more reliable approach in combination with software RAID.
[1] If the mainboard dies, the RAID array goes with it. Yes, Intel RST fake RAID claims some kind of compatibility, but you can't trust that in production when switching to a different mainboard (different UEFI version, maybe even a different/newer chipset/CPU) becomes necessary.