I have a machine with UEFI BIOS. I want to install Ubuntu 20.04 desktop with LVM on top of RAID 1, so my system will continue to work even if one of the drives fails. I haven't found a HOWTO for that. The 20.04 desktop installer supports LVM but not RAID. The answer to this question describes the process for 18.04. However, 20.04 does not provide an alternate server installer. The answer to this question and this question describe RAID but not LVM nor UEFI. Does anyone have a process that works for 20.04 with LVM on top of RAID 1 for a UEFI machine?
5 Answers
After some weeks of experimenting and with some help from this link, I have finally found a solution that works. The sequence below was performed with Ubuntu 20.04.2.0 LTS. I have also completed the procedure successfully with 21.04.0 inside a virtual machine. (However, please note that there is a reported problem with Ubuntu 21.04 and some older UEFI systems.)
In short
- Download and boot into Ubuntu Live for 20.04.
- Set up mdadm and lvm.
- Run the Ubuntu installer, but do not reboot.
- Add mdadm to target system.
- Clone EFI partition to second drive.
- Install second EFI partition into UEFI boot chain.
- Reboot
In detail
1. Download the installer and boot into Ubuntu Live
1.1 Download
- Download the Ubuntu Desktop installer from https://ubuntu.com/download/desktop and put it onto a bootable media. (As of 2021-12-13, the iso was called ubuntu-20.04.3-desktop-amd64.iso.)
1.2 Boot Ubuntu Live
- Boot from the media created in step 1.1.
- Select Try Ubuntu.
- Start a terminal by pressing Ctrl-Alt-T. The commands below should be entered in that terminal.
2. Set up mdadm and lvm
In the example below, the disk devices are called /dev/sda and /dev/sdb. If your disks are called something else, e.g., /dev/nvme0n1 and /dev/sdb, you should replace the disk names accordingly. You may use sudo lsblk to find the names of your disks.
2.0 Install ssh server
If you do not want to type all the commands below, you may want to install an ssh server, log in via ssh, and cut-and-paste the commands.
Install
sudo apt install openssh-server
Set a password to enable external login
passwd
If you are testing this inside a virtual machine, you will probably want to forward a suitable port. Select Settings, Network, Advanced, Port Forwarding, and the plus sign. Enter, e.g., 3022 as the Host Port and 22 as the Guest Port and press OK. Or from the command line of your host system (replace VMNAME with the name of your virtual machine):
VBoxManage modifyvm VMNAME --natpf1 "ssh,tcp,,3022,,22"
VBoxManage showvminfo VMNAME | grep 'Rule'
Now, you should be able to log onto your Ubuntu Live session from an outside computer using
ssh <hostname> -l ubuntu
or, if you are testing on a virtual machine on localhost,
ssh localhost -l ubuntu -p 3022
and the password you set above.
2.1 Create partitions on the physical disks
Zero the partition tables with
sudo sgdisk -Z /dev/sda
sudo sgdisk -Z /dev/sdb
Create two partitions on each drive; one for EFI and one for the RAID device.
sudo sgdisk -n 1:0:+512M -t 1:ef00 -c 1:"EFI System" /dev/sda
sudo sgdisk -n 2:0:0 -t 2:fd00 -c 2:"Linux RAID" /dev/sda
sudo sgdisk -n 1:0:+512M -t 1:ef00 -c 1:"EFI System" /dev/sdb
sudo sgdisk -n 2:0:0 -t 2:fd00 -c 2:"Linux RAID" /dev/sdb
Create a FAT32 file system for the EFI partition on the first drive. (It will be cloned to the second drive later.)
sudo mkfs.fat -F 32 /dev/sda1
2.2 Install mdadm and create md device
Install mdadm
sudo apt-get update
sudo apt-get install mdadm
Create the md device. Ignore the warning about the metadata since the array will not be used as a boot device.
sudo mdadm --create /dev/md0 --bitmap=internal --level=1 --raid-disks=2 /dev/sda2 /dev/sdb2
Check the status of the md device.
$ cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdb2[1] sda2[0]
1047918528 blocks super 1.2 [2/2] [UU]
[>....................] resync = 0.0% (1001728/1047918528) finish=69.6min speed=250432K/sec
bitmap: 8/8 pages [32KB], 65536KB chunk
unused devices: <none>
In this case, the device is syncing the disks, which is normal and may continue in the background during the process below.
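If you want to keep an eye on the resync while you continue, one optional way (not part of the original steps) is:
watch -n 10 cat /proc/mdstat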
2.3 Partition the md device
sudo sgdisk -Z /dev/md0
sudo sgdisk -n 1:0:0 -t 1:E6D6D379-F507-44C2-A23C-238F2A3DF928 -c 1:"Linux LVM" /dev/md0
This creates a single partition /dev/md0p1 on the /dev/md0 device. The UUID string identifies the partition to be an LVM partition.
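As an optional check, you can verify the new partition with:
sudo sgdisk -p /dev/md0
lsblk /dev/md0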
2.4 Create LVM devices
Create a physical volume on the md device
sudo pvcreate /dev/md0p1
Create a volume group on the physical volume
sudo vgcreate vg0 /dev/md0p1
Create logical volumes (partitions) on the new volume group. The sizes and names below are my choices. You may decide differently.
sudo lvcreate -Z y -L 25GB --name root vg0
sudo lvcreate -Z y -L 10GB --name tmp vg0
sudo lvcreate -Z y -L 5GB --name var vg0
sudo lvcreate -Z y -L 10GB --name varlib vg0
sudo lvcreate -Z y -L 200GB --name home vg0
Now, the partitions are ready for the Ubuntu installer.
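If you want to double-check the LVM layout before starting the installer, you can run:
sudo pvs
sudo vgs
sudo lvs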
3. Run the installer
- Double-click on the Install Ubuntu 20.04.2.0 LTS icon on the desktop of the new computer. (Do NOT start the installer via any ssh connection!)
- Answer the language and keyboard questions.
- On the Installation type page, select Something else. (This is the important part.) This will present you with a list of partitions called /dev/mapper/vg0-home, etc.
- Double-click on each partition starting with /dev/mapper/vg0-. Select Use as: Ext4, check the Format the partition box, and choose the appropriate mount point (/ for vg0-root, /home for vg0-home, etc., /var/lib for vg0-varlib).
- Select the first device /dev/sda for the boot loader.
- Press Install Now and continue the installation.
- When the installation is finished, select Continue Testing.
In a terminal, run lsblk. The output should be something like this:
$ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
...
sda 8:0 0 1000G 0 disk
├─sda1 8:1 0 512M 0 part
└─sda2 8:2 0 999.5G 0 part
└─md0 9:0 0 999.4G 0 raid1
└─md0p1 259:0 0 999.4G 0 part
├─vg0-root 253:0 0 25G 0 lvm /target
├─vg0-tmp 253:1 0 10G 0 lvm
├─vg0-var 253:2 0 5G 0 lvm
├─vg0-varlib 253:3 0 10G 0 lvm
└─vg0-home 253:4 0 200G 0 lvm
sdb 8:16 0 1000G 0 disk
├─sdb1 8:17 0 512M 0 part
└─sdb2 8:18 0 999.5G 0 part
└─md0 9:0 0 999.4G 0 raid1
└─md0p1 259:0 0 999.4G 0 part
├─vg0-root 253:0 0 25G 0 lvm /target
├─vg0-tmp 253:1 0 10G 0 lvm
├─vg0-var 253:2 0 5G 0 lvm
├─vg0-varlib 253:3 0 10G 0 lvm
└─vg0-home 253:4 0 200G 0 lvm
...
As you can see, the installer left the installed system root mounted to /target. However, the other partitions are not mounted. More importantly, mdadm is not yet part of the installed system.
4. Add mdadm to the target system
4.1 chroot into the target system
First, we must mount the unmounted partitions:
sudo mount /dev/mapper/vg0-home /target/home
sudo mount /dev/mapper/vg0-tmp /target/tmp
sudo mount /dev/mapper/vg0-var /target/var
sudo mount /dev/mapper/vg0-varlib /target/var/lib
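As an optional sanity check, findmnt can list everything now mounted under /target:
findmnt -R /target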
Next, bind some devices to prepare for chroot...
cd /target
sudo mount --bind /dev dev
sudo mount --bind /proc proc
sudo mount --bind /sys sys
...and chroot into the target system.
sudo chroot .
4.2 Update the target system
Now we are inside the target system. Install mdadm
apt install mdadm
If you get a DNS error, run
echo "nameserver 1.1.1.1" >> /etc/resolv.conf
and repeat
apt install mdadm
You may ignore any warnings about pipe leaks.
Inspect the configuration file /etc/mdadm/mdadm.conf. It should contain a line near the end similar to
ARRAY /dev/md/0 metadata=1.2 UUID=7341825d:4fe47c6e:bc81bccc:3ff016b6 name=ubuntu:0
Remove the name=... part to have the line read like
ARRAY /dev/md/0 metadata=1.2 UUID=7341825d:4fe47c6e:bc81bccc:3ff016b6
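If you prefer to make this edit non-interactively, a one-line sketch (assuming the line matches the format shown above) is:
sed -i 's/ name=[^ ]*//' /etc/mdadm/mdadm.conf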
Update the module list the kernel should load at boot.
echo raid1 >> /etc/modules
Update the boot ramdisk
update-initramfs -u
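As a quick optional check that mdadm was included in the new initramfs:
lsinitramfs /boot/initrd.img-* | grep -m1 mdadm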
Finally, exit from chroot
exit
5. Clone EFI partition
Now the installed target system is complete. Furthermore, the main partition is protected from a single disk failure via the RAID device. However, the EFI boot partition is not protected via RAID. Instead, we will clone it.
sudo dd if=/dev/sda1 of=/dev/sdb1 bs=4096
Run
$ sudo blkid /dev/sd[ab]1
/dev/sda1: UUID="108A-114D" TYPE="vfat" PARTLABEL="EFI System" PARTUUID="ccc71b88-a8f5-47a1-9fcb-bfc960a07c16"
/dev/sdb1: UUID="108A-114D" TYPE="vfat" PARTLABEL="EFI System" PARTUUID="fd070974-c089-40fb-8f83-ffafe551666b"
Note that the FAT UUIDs are identical but the GPT PARTUUIDs are different.
6. Insert EFI partition of second disk into the boot chain
Finally, we need to insert the EFI partition on the second disk into the boot chain. For this we will use efibootmgr.
sudo apt install efibootmgr
Run
sudo efibootmgr -v
and study the output. There should be a line similar to
Boot0005* ubuntu HD(1,GPT,ccc71b88-a8f5-47a1-9fcb-bfc960a07c16,0x800,0x100000)/File(\EFI\ubuntu\shimx64.efi)
Note the path after File. Run
sudo efibootmgr -c -d /dev/sdb -p 1 -L "ubuntu2" -l '\EFI\ubuntu\shimx64.efi'
to create a new boot entry on partition 1 of /dev/sdb with the same path as the ubuntu entry. Re-run
sudo efibootmgr -v
and verify that there is a second entry called ubuntu2 with the same path as ubuntu:
Boot0005* ubuntu HD(1,GPT,ccc71b88-a8f5-47a1-9fcb-bfc960a07c16,0x800,0x100000)/File(\EFI\ubuntu\shimx64.efi)
Boot0006* ubuntu2 HD(1,GPT,fd070974-c089-40fb-8f83-ffafe551666b,0x800,0x100000)/File(\EFI\ubuntu\shimx64.efi)
Furthermore, note that the UUID string of each entry is identical to the corresponding PARTUUID string above.
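Optionally, you can also set the boot order so the firmware will try both entries; the entry numbers below are taken from the example listing above, so substitute your own:
sudo efibootmgr -o 0005,0006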
7. Reboot
Now we are ready to reboot. Check if the sync process has finished.
$ cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdb2[1] sda2[0]
1047918528 blocks super 1.2 [2/2] [UU]
bitmap: 1/8 pages [4KB], 65536KB chunk
unused devices: <none>
If the syncing is still in progress, it should be OK to reboot. However, I suggest waiting until the syncing is complete before rebooting.
After rebooting, the system should be ready to use! Furthermore, should either of the disks fail, the system would use the UEFI partition from the healthy disk and boot Ubuntu with the md0 device in degraded mode.
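Should a disk ever fail, you can check the array state and, as a sketch (assuming the replacement disk shows up as /dev/sdc and has been partitioned as in section 2.1), re-add a new member; you would also need to repeat the EFI cloning from sections 5 and 6:
sudo mdadm --detail /dev/md0
sudo mdadm /dev/md0 --add /dev/sdc2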
8. Update EFI partition after grub-efi-amd64 update
When the package grub-efi-amd64 is updated, the files on the EFI partition (mounted at /boot/efi) may change. In that case, the update must be cloned manually to the mirror partition. Luckily, you should get a warning from the update manager that grub-efi-amd64 is about to be updated, so you don't have to check after every update.
8.1 Find out clone source, quick way
If you haven't rebooted after the update, use
mount | grep boot
to find out what EFI partition is mounted. That partition, typically /dev/sdb1, should be used as the clone source.
8.2 Find out clone source, paranoid way
Create mount points and mount both partitions:
sudo mkdir /tmp/sda1 /tmp/sdb1
sudo mount /dev/sda1 /tmp/sda1
sudo mount /dev/sdb1 /tmp/sdb1
Find timestamp of newest file in each tree
sudo find /tmp/sda1 -type f -printf '%T+ %p\n' | sort | tail -n 1 > /tmp/newest.sda1
sudo find /tmp/sdb1 -type f -printf '%T+ %p\n' | sort | tail -n 1 > /tmp/newest.sdb1
Compare timestamps
cat /tmp/newest.sd* | sort | tail -n 1 | perl -ne 'm,/tmp/(sd[ab]1)/, && print "/dev/$1 is newest.\n"'
This should print /dev/sdb1 is newest (most likely) or /dev/sda1 is newest. That partition should be used as the clone source.
Unmount the partitions before the cloning to avoid cache/partition inconsistency.
sudo umount /tmp/sda1 /tmp/sdb1
8.3 Updated: Clone with rsync
As pointed out by Jon Hulka, you may use rsync instead of dd:
mkdir mnt
sudo mount /dev/sd?1 mnt #whichever of sda1 or sdb1 is not mounted at /boot/efi
sudo rsync -av --delete /boot/efi/ mnt
sudo umount mnt
8.4 Original: Clone with dd
If /dev/sdb1 was the clone source:
sudo dd if=/dev/sdb1 of=/dev/sda1
If /dev/sda1 was the clone source:
sudo dd if=/dev/sda1 of=/dev/sdb1
Done!
9. Virtual machine gotchas
If you want to try this out in a virtual machine first, there are some caveats: Apparently, the NVRAM that holds the UEFI information is remembered between reboots, but not between shutdown-restart cycles. In that case, you may end up at the UEFI Shell console. The following commands should boot you into your machine from /dev/sda1 (use FS1: for /dev/sdb1):
FS0:
\EFI\ubuntu\grubx64.efi
The first solution in the top answer of UEFI boot in virtualbox - Ubuntu 12.04 might also be helpful.
If you use Niclas Börlin's answer, consider using rsync instead of dd:
mkdir mnt
sudo mount /dev/sd?1 mnt #whichever of sda1 or sdb1 is not mounted at /boot/efi
sudo rsync -av --delete /boot/efi/ mnt
sudo umount mnt
This makes it impossible to accidentally overwrite the drive contents if you get them mixed up.
These are excellent, detailed instructions. I just want to add that the Desktop installer for 22.10 and 23.04 has no support for RAID or LVM; it does not see partitions/file systems created that way. The solution is to switch to the Server installer. After the server is installed, run sudo apt install ubuntu-desktop and install any additional drivers (for example nvidia), as shown below.
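For example, something like the following after the server installation (ubuntu-drivers autoinstall is one way to pull in proprietary drivers such as nvidia; it is not part of the original answer):
sudo apt install ubuntu-desktop
sudo ubuntu-drivers autoinstall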
I had a problem using Ubuntu 24.04, since the installer only lets you select real block devices, not md or dm devices. My "trick" here was: I had 2 NVMe drives and 2 USB devices. One USB device holds the Ubuntu live ISO, which we can boot from. Then I installed Ubuntu to the second USB stick using LVM (it was about 20 GB). When the installer is finished, do not reboot, but open a terminal, and the fun part can begin.
At that point your block devices should look like:
root@ubuntu:~# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
...
sda 8:0 1 7.5G 0 disk
├─sda1 8:1 1 5.8G 0 part /cdrom
├─sda2 8:2 1 5M 0 part
├─sda3 8:3 1 300K 0 part
└─sda4 8:4 1 1.7G 0 part /var/crash
/var/log
sdb 8:16 1 28.7G 0 disk
├─sdb1 8:17 1 1G 0 part
├─sdb2 8:18 1 2G 0 part
└─sdb3 8:19 1 25.6G 0 part
└─ubuntu--vg-ubuntu--lv 252:0 0 25.6G 0 lvm
nvme1n1 259:0 0 1.8T 0 disk
nvme0n1 259:1 0 1.8T 0 disk
Here sda is my installer USB and sdb is my temporary installation. If you chose, like me, an LVM installation, you should have 3 partitions:
- sdb1: the EFI partition, 1 GB
- sdb2: the boot partition, 2 GB
- sdb3: containing the LVM with the root fs, the rest of the storage
Now I created 4 partitions on the NVMe drives: on the first, an EFI partition and a plain partition for RAID; on the second, the boot partition and a plain partition for RAID. I chose 2 GB for both EFI and boot, so the remaining partitions are the same size, which is good for RAID.
printf "label: gpt\n,2G,U,*\n,,L," | sfdisk /dev/nvme0n1
printf "label: gpt\n,2G,L,\n,,L," | sfdisk /dev/nvme1n1
Now we can copy the partitions from the temporary USB installation:
# Copy efi partition
dd if=/dev/sdb1 of=/dev/nvme0n1p1 bs=4M
# Copy boot partition
dd if=/dev/sdb2 of=/dev/nvme1n1p1 bs=4M
Now create the RAID array (note that this example uses --level=0, i.e. RAID 0; use --level=1 if you want the RAID 1 redundancy asked about in the question):
mdadm --create --verbose /dev/md0 --level=0 --raid-devices=2 /dev/nvme0n1p2 /dev/nvme1n1p2
Copy over the LVM container:
dd if=/dev/sdb3 of=/dev/md0 bs=4M
Since we dd'd the LVM, we have a conflict in lvs, because the cloned volumes have duplicate UUIDs. I fixed it simply by unplugging the temporary installation USB stick, rebooting into the installer, and opening a new terminal (an alternative sketch is given after the listing below). The disk layout should now look like:
root@ubuntu:~# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
...
sda 8:0 1 7.5G 0 disk
├─sda1 8:1 1 5.8G 0 part /cdrom
├─sda2 8:2 1 5M 0 part
├─sda3 8:3 1 300K 0 part
└─sda4 8:4 1 1.7G 0 part /var/crash
/var/log
nvme0n1 259:0 0 1.8T 0 disk
├─nvme0n1p1 259:2 0 2G 0 part
└─nvme0n1p2 259:4 0 1.8T 0 part
└─md127 9:127 0 3.6T 0 raid0
└─ubuntu--vg-ubuntu--lv 252:0 0 25.6G 0 lvm
nvme1n1 259:1 0 1.8T 0 disk
├─nvme1n1p1 259:3 0 2G 0 part
└─nvme1n1p2 259:5 0 1.8T 0 part
└─md127 9:127 0 3.6T 0 raid0
└─ubuntu--vg-ubuntu--lv 252:0 0 25.6G 0 lvm
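As an alternative to the unplug-and-reboot trick above, LVM also ships vgimportclone, which renames a cloned volume group and regenerates its UUIDs; a sketch (the device path and new VG name are examples) would be to run it against the USB stick's PV:
vgimportclone --basevgname ubuntu-vg-usb /dev/sdb3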
Since the normal Ubuntu desktop does not have mdadm installed, we install it now and resize the root fs:
# chroot into installed system
mkdir root
mount /dev/mapper/ubuntu--vg-ubuntu--lv root/
mount /dev/nvme1n1p1 root/boot/
mount /dev/nvme0n1p1 root/boot/efi/
mount --rbind /dev root/dev
mount --rbind /sys/ root/sys
mount --rbind /proc/ root/proc
mount --bind /etc/resolv.conf root/etc/resolv.conf
chroot root/
Resize the PV, LV, and file system:
pvresize /dev/md127
lvextend -l +100%FREE /dev/mapper/ubuntu--vg-ubuntu--lv
resize2fs /dev/mapper/ubuntu--vg-ubuntu--lv
I do not really know if this is necessary, but let's update GRUB:
update-grub2
Install mdadm; this will also write the mdadm config and regenerate a new initramfs that contains mdadm:
apt update && apt install mdadm
A little hacky, but it works well!
Apologies. My feedback was apparently unclear, so here goes again. The question is "Does anyone have a process that works for 20.04 with LVM on top of RAID 1 for a UEFI machine?"
My "answer" is that the instructions given in steps 1-7 were both precise and appropriate - many thanks - but I had difficulties with 20.04 because it didn't support the XID 641 onboard graphics of my modern motherboard. I tried with 21.10 desktop and had no problems at all. Note that I switched SATA from RAID to AHCI in the BIOS beforehand and waited for syncing to complete in step 7, otherwise, a painless procedure. The target machine is a Ryzen 9 5950X, ASUS Crosshair VIII Hero motherboard, 2x8TB discs.