
My first install did not include LVM and took about 15 seconds after BIOS to boot. My second install included LVM and took about 45 seconds after BIOS to boot. After much Googling, the general consensus seems to be that this is a bug in which choosing LVM with "certain" SSDs during installation causes a long boot while the system looks for, and fails to find, a swap device. The timeout is 30 seconds. Has anyone found a workaround for this?

3 Answers


Try either of the following two methods, or both.

First method

Verify how long the kernel boot time is by opening a terminal and typing:

systemd-analyze

The wait-for-root call in /usr/share/initramfs-tools/scripts/local times out after 30 seconds (the "slumber" value). Its dev_id variable is assigned the value of RESUME, which is defined in /etc/initramfs-tools/conf.d/resume; the UUID assigned to RESUME is the UUID of the LVM swap partition. The fix is to assign the device-file path of the LVM swap partition to RESUME, so that the script uses wait_for_udev instead of wait-for-root.
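Before editing anything, it can help to confirm what RESUME currently points to and which device-mapper node the swap volume lives under. A small inspection sketch (paths assume a stock Ubuntu LVM install):

```shell
# Show the UUID the initramfs will wait for:
grep '^RESUME=' /etc/initramfs-tools/conf.d/resume

# List the LVM device-mapper nodes; the swap volume's stable path
# (e.g. /dev/mapper/ubuntu--vg-swap_1) should appear here:
ls -l /dev/mapper/
```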

To do this, type (in the terminal):

sudo sed -e 's/^RESUME=/#RESUME=/g' \
   -i /etc/initramfs-tools/conf.d/resume

After that is finished, type:

echo "RESUME=/dev/mapper/<YOUR-FLAVOUR>--vg-swap_1" | \
sudo tee -a /etc/initramfs-tools/conf.d/resume

Recreate the initrd and reboot the system.

sudo update-initramfs -u

After that is finished, type:

sudo reboot

The kernel boot time should be faster. Verify by typing:

systemd-analyze

You will also be able to use hibernation after this.

(Source)

Second method

Navigate to /etc/initramfs-tools/conf.d/

Right-click on "resume" and choose Edit as Administrator. Change the line

RESUME=UUID=<WHATEVER YOUR NUMBER IS>

(e.g. RESUME=UUID=67b3fe6f-1ec4-413f-8c5a-1136bc7f3270) to:

RESUME=none
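If you'd rather stay in the terminal than use a GUI editor, the same change can be made with sed. A sketch (it backs the file up first, so you can undo it):

```shell
# Keep a backup of the original resume file:
sudo cp /etc/initramfs-tools/conf.d/resume /etc/initramfs-tools/conf.d/resume.bak

# Replace the whole RESUME= line with RESUME=none, in place:
sudo sed -i 's/^RESUME=.*/RESUME=none/' /etc/initramfs-tools/conf.d/resume
```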

Now open a terminal and type:

sudo update-initramfs -u

After that is finished, type:

sudo reboot

The kernel boot time should be faster. Verify by typing:

systemd-analyze

(Source)

David Foerster
Arni

Disclaimer: at the time of writing I don't have enough reputation to comment on other answers, so I have to post a new one (mainly as a reference for myself).

I had a similar issue on a fresh Ubuntu install: the plain (non-LVM) install booted in ~15s, while the LVM install took ~50s (including a pause of about 30s on a black screen).

A first call to sudo systemd-analyze blame pointed out that I had another problem:

$ sudo systemd-analyze blame
     40.699s snapd.seeded.service
     ...

I was able to solve that thanks to this other Q&A: Long boot delay on Ubuntu loading/splash screen following regular dist-upgrade on clean SSD install (18.04), by installing rng-tools and defining HRNGDEVICE=/dev/urandom as the input source for random data in /etc/default/rng-tools.
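In command form, that fix looks roughly like this (a sketch; package, file, and service names as on Ubuntu 18.04):

```shell
# Install the userspace entropy-feeding daemon:
sudo apt install rng-tools

# Tell rngd to use /dev/urandom as its input source:
echo 'HRNGDEVICE=/dev/urandom' | sudo tee -a /etc/default/rng-tools

# Restart the service so the change takes effect:
sudo systemctl restart rng-tools
```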

This solved the snapd entropy issue:

$ sudo journalctl -u snapd.seeded.service --since today
  -- Logs begin at Tue 2018-08-21 18:22:53 CEST, end at Tue 2018-08-21 19:40:09 CE
  # Before: ~40s
  Aug 21 18:22:54 zen systemd[1]: Starting Wait until snapd is fully seeded...
  Aug 21 18:23:36 zen systemd[1]: Started Wait until snapd is fully seeded.
  Aug 21 18:50:18 zen systemd[1]: Stopped Wait until snapd is fully seeded.   
  -- Reboot --
  # After: <1s
  Aug 21 18:51:19 zen systemd[1]: Starting Wait until snapd is fully seeded...
  Aug 21 18:51:19 zen systemd[1]: Started Wait until snapd is fully seeded.
  ....

But the kernel still took ~35s to start, so I tried the "idiot proof" way from nils-fenner. It did not work at first, but after combining it with the first solution from Arni and David I finally managed to lower the boot time to ~10s.

So for (my own) reference, here is my version of a safe path to solve the issue:

 $ cd <whatever back up folder on your machine>
 # backup initial config
 $ cp /etc/initramfs-tools/conf.d/resume .

 # Retrieve the correct path to the swap partition (for manually configured LVMs)
 $ sudo fdisk -l

   ... some partitions

   Disk /dev/mapper/vg_zen-uswap: 4 GiB, 4294967296 bytes, 8388608 sectors
   Units: sectors of 1 * 512 = 512 bytes
   Sector size (logical/physical): 512 bytes / 512 bytes
   I/O size (minimum/optimal): 512 bytes / 512 bytes

   ... some more partitions

 # Update the "resume" file with the new path
 # Caution "vg_zen-uswap" is for *my* machine only :)
 $ echo "RESUME=/dev/mapper/vg_zen-uswap" | sudo tee /etc/initramfs-tools/conf.d/resume
   RESUME=/dev/mapper/vg_zen-uswap   

 # Recreate initrd
 $ sudo update-initramfs -u 
   update-initramfs: Generating /boot/initrd.img-4.15.0-32-generic

 # reboot

That has done the trick for me. HTH.
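As an aside: if you prefer not to eyeball the fdisk -l output, the active swap device paths can also be read straight from the kernel (generic Linux, assuming swap is already enabled):

```shell
# Print the device path of each active swap area,
# skipping the header line of /proc/swaps:
awk 'NR > 1 {print $1}' /proc/swaps
```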


It seems the second method described above does not work in general. I also recommend a slightly more "idiot proof" way that avoids accidentally overwriting the swap UUID.

sudo -i    # become root
cd /etc/initramfs-tools/conf.d
mv resume resume.uuid
echo "RESUME=/dev/mapper/<YOUR-FLAVOUR>--vg-swap_1" > resume
# Example: echo "RESUME=/dev/mapper/lubuntu--vg-swap_1" > resume

update-initramfs -uk all
sync && reboot

Now we can simply switch back and forth by renaming the two files.
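Switching back is just the reverse rename plus an initramfs rebuild. A sketch (resume.devpath is a name I made up for the stashed device-path version):

```shell
cd /etc/initramfs-tools/conf.d
sudo mv resume resume.devpath   # stash the device-path version
sudo mv resume.uuid resume      # restore the original UUID version
sudo update-initramfs -uk all   # rebuild so the change takes effect
```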