I didn't say UEFI would be a problem, but UEFI booting from software RAID is.
Let's go back to legacy BIOS first; how does software RAID work there:
- boot sector written to all RAID members
- /boot/ on RAID1 (but still mirrored, so each member is readable individually without having to know about the RAID array)
- OS on any flavour of RAID
The important parts for booting are mirrored on all disks (a boot sector embedded behind the partition table and/or a BIOS boot partition (ef02)); if one disk fails (and assuming the board is smart enough not to wait forever scanning the half-dead disk, which is not a given), booting continues from one of the remaining disks.
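As a rough sketch of that legacy setup (device names are made up, and the commands obviously need root and real disks), installing the boot sector to every RAID member is what makes any surviving disk bootable on its own:

```shell
# Hypothetical two-member md RAID1; /dev/sda and /dev/sdb are examples.
# Install GRUB's boot sector + core image to *each* RAID member, so the
# firmware can boot from whichever disk is still alive:
grub-install /dev/sda
grub-install /dev/sdb

# On GPT disks, each member additionally needs a small BIOS boot
# partition (type ef02) to hold GRUB's core image, e.g.:
#   sgdisk --new=1:0:+1M --typecode=1:ef02 /dev/sda
```

This only works because the mirrored /boot/ on each member is readable without any RAID awareness in the firmware.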
With UEFI, you need to have one EFI System Partition (ESP, ef00), which contains your bootloader (grub2), which then loads your OS from RAID.
Assuming you have hardware RAID, the UEFI 'BIOS' and your operating system only see a single disk(-array), all good - failure modes are handled transparently by the hardware RAID controller.
With software RAID, the UEFI 'BIOS' only sees the individual disks; it doesn't know anything about the RAID itself. This means you need multiple ESPs (first problem: not all mainboards like this, and the UEFI boot spec only defines having a single ESP), and the boot order is defined in EFI variables (BootOrder, BootCurrent and multiple Boot000x entries), hardcoding which bootloader to load (at a point where the UEFI firmware doesn't know anything about your RAID array yet).
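To make the variable juggling concrete, per-disk boot entries could look roughly like this (disk names, partition numbers, distro path and labels are all illustrative, not a recommendation):

```shell
# One boot entry per ESP, one ESP per RAID member; everything here
# (devices, partition numbers, loader path) is an example.
efibootmgr --create --disk /dev/sda --part 1 \
    --label "grub (disk 1)" --loader '\EFI\debian\grubx64.efi'
efibootmgr --create --disk /dev/sdb --part 1 \
    --label "grub (disk 2)" --loader '\EFI\debian\grubx64.efi'

# Ask the firmware to try disk 1 first, then disk 2 - whether it
# actually falls back to the next entry when a disk has died is
# entirely up to the mainboard firmware:
efibootmgr --bootorder 0000,0001
```

These entries live in NVRAM on the mainboard, not on disk, which is another reason disk-swap/board-swap recovery gets ugly.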
From a technical perspective, this still kind of works, if your mainboard firmware can cope with multiple ESPs and falls back 'silently' to the next configured ESP when encountering a missing (failed) disk. The problem is with the bootloader (grub) installation and upgrade orchestration, which has to keep all 2+ ESPs in sync (/EFI/BOOT/BOOTX64.EFI and /boot/grub/x86_64-efi/ getting out of sync is not pretty - boot failures and recovery from removable media are almost guaranteed) and update the individual EFI variables accordingly. At least for Debian/Ubuntu I know that the grub2 packages cannot handle this[0]; I'm not sure about the situation for the other main distributions (arch, gentoo, fedora, mageia, mandriva, opensuse, whatever).
Fake RAID is a different can of worms: if working properly, it can present a single ESP to the UEFI 'BIOS', akin to the hardware RAID situation, but you're locked to your mainboard vendor here - in other words, don't go there[1]. And OS support (especially thinking about OpenWrt here) gets rather tricky…
--
[0] about a decade ago, I manually set this up for testing on Debian with a mixture of grub2 and gummiboot (now systemd-boot): grub2 installed to the first disk, gummiboot to the second (RAID1) disk, which mostly worked without digging too deep into the distribution's bootloader orchestration (the premise here was upgrade safety over years and multiple-version dist-upgrades, so changing too much of the packaged upgrade scripts was not an option). This worked in my test scenarios, but wasn't anything I felt comfortable deploying on remote systems - at that point, BIOS booting was sadly the more reliable approach in combination with software RAID.
[1] if the mainboard dies, the RAID array goes with it. Yes, Intel RST fake RAID claims some kind of compatibility, but you can't trust that in production when switching to a different mainboard (different UEFI version, maybe even a different/newer chipset/CPU) becomes necessary.