
System: Ubuntu 20.04 with EFI boot.


Clarification of why I need this

I configured two swap partitions:

  • One primary swap on the second HDD. It is used for hibernation. (Hibernation is better done on the HDD in order to prolong the life of the SSD.)
  • A second swap partition on the SSD with the system. This is for the case when the SSD is used as the only disk - when the second HDD is removed and missing. In that case I just want the system to work normally - even without hibernation (or with hibernation using the swap on the SSD).

What I did:

  • Created a swap partition on the HDD.
  • Set its UUID as the resume parameter for Grub.

/etc/default/grub:

GRUB_CMDLINE_LINUX_DEFAULT="quiet splash resume=UUID=25d5d4af-736a-4232-a4bb-492499bc1038"
  • Added both swap partitions to fstab, with higher priority for the swap on the HDD and with the nofail and x-systemd.device-timeout=3s options. These options are for the case when the HDD is missing. Without them, when the HDD is missing, the system hangs during boot for 90 seconds.

/etc/fstab config for swap partitions:

#swap on HDD
UUID=25d5d4af-736a-4232-a4bb-492499bc1038 none            swap    nofail,pri=20,x-systemd.device-timeout=3s              0       0
#swap on SSD
UUID=e78a171a-3c52-4cd1-b86a-17709f4b49d9 none            swap    pri=10              0       0
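
For reference, once the system has booted with both disks attached, the active swap areas and their priorities can be checked like this:

# list active swap areas with their priorities
swapon --show
# the same information straight from the kernel
cat /proc/swaps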

The problem I have:

When both the SSD and the HDD are connected to the laptop, everything is fine. When the HDD is missing during system boot, Grub tries to find the swap partition on the HDD and hangs on this for about 33 seconds.

After the system booted, the logs contained the following messages about the timeout while waiting for the missing HDD (which I intentionally removed) with the swap partition:

/var/log/boot.log:[ TIME ] Timed out waiting for device /dev/disk/by-uuid/46e39f74-e1b3-4705-9bac-84ee2593b4d4.

/var/log/syslog:Feb 23 14:23:26 Device-2 kernel: [ 0.032426] Kernel command line: BOOT_IMAGE=/boot/vmlinuz-5.13.0-28-generic root=UUID=a59635ec-2cef-4396-bdf3-be7e4b23fc73 ro quiet splash resume=UUID=25d5d4af-736a-4232-a4bb-492499bc1038 vt.handoff=7

/var/log/syslog:Feb 23 14:56:14 Device-2 systemd[1]: dev-disk-by\x2duuid-25d5d4af\x2d736a\x2d4232\x2da4bb\x2d492499bc1038.device: Job dev-disk-by\x2duuid-25d5d4af\x2d736a\x2d4232\x2da4bb\x2d492499bc1038.device/start timed out.


What I want:

To set the timeout for waiting for the missing HDD to no more than 5 seconds. Currently it is set by default, somewhere in the system, to 30-33 seconds.


I tried the following:

  • To find a timeout property in the Grub config. In /etc/default/grub two related options can be specified:
  • GRUB_HIDDEN_TIMEOUT
  • GRUB_RECORDFAIL_TIMEOUT

But they are not for this case. Information about these options can be found here:

I tried to analyze the Grub code in /etc/grub.d/ and concluded that such a timeout is probably specified in the system boot configuration - in systemd. So I tried to find the corresponding timeout option in the systemd config:

sudo grep -iR timeout /etc/systemd/
/etc/systemd/system/rescue.target.wants/grub-initrd-fallback.service:TimeoutSec=0
/etc/systemd/system/network-online.target.wants/NetworkManager-wait-online.service:ExecStart=/usr/bin/nm-online -s -q --timeout=30
/etc/systemd/system/emergency.target.wants/grub-initrd-fallback.service:TimeoutSec=0
/etc/systemd/system/multi-user.target.wants/ua-reboot-cmds.service:TimeoutSec=0
/etc/systemd/system/multi-user.target.wants/unattended-upgrades.service:TimeoutStopSec=1800
/etc/systemd/system/multi-user.target.wants/grub-initrd-fallback.service:TimeoutSec=0
/etc/systemd/system/multi-user.target.wants/snapd.recovery-chooser-trigger.service:# blocks the service startup until a trigger is detected or a timeout is hit
/etc/systemd/system/sleep.target.wants/grub-initrd-fallback.service:TimeoutSec=0
/etc/systemd/system.conf:#DefaultTimeoutStartSec=90s
/etc/systemd/system.conf:#DefaultTimeoutStopSec=90s
/etc/systemd/system.conf:#DefaultTimeoutAbortSec=
/etc/systemd/user.conf:#DefaultTimeoutStartSec=90s
/etc/systemd/user.conf:#DefaultTimeoutStopSec=90s
/etc/systemd/user.conf:#DefaultTimeoutAbortSec=
/etc/systemd/logind.conf:#HoldoffTimeoutSec=30s
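
As a side note, the job timeout in effect for the HDD swap device unit can apparently also be inspected directly; the escaped unit name below is the one from the syslog message above (systemd-escape can generate it as well):

# derive the escaped device unit name from the path
systemd-escape -p --suffix=device /dev/disk/by-uuid/25d5d4af-736a-4232-a4bb-492499bc1038
# show the job timeout currently in effect for that unit
systemctl show -p JobRunningTimeoutUSec 'dev-disk-by\x2duuid-25d5d4af\x2d736a\x2d4232\x2da4bb\x2d492499bc1038.device'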

I tried changing the options that had a 30-second value to 5 seconds:

/etc/systemd/system/network-online.target.wants/NetworkManager-wait-online.service:ExecStart=/usr/bin/nm-online -s -q --timeout=5
/etc/systemd/logind.conf:#HoldoffTimeoutSec=5s

But this did not give the expected result.

I also tried setting the same label on both swap partitions and specifying the resume swap partition by that label (/dev/disk/by-label/...) instead of by UUID. But in that case there is no way to determine which of the two swap partitions will be used to resume the system from hibernation.

I found a similar question: How to set timeout for the systemd start job "dev-md125.device" (mdadm). But it does not give details on how to configure such a timeout in systemd for an HDD.
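
If I adapt the approach from that question to the HDD swap device here, I guess the override would look roughly like the sketch below (untested; the drop-in path uses the escaped unit name from the syslog above, and it is not clear to me whether this also shortens waits that happen in the initramfs before systemd starts):

# /etc/systemd/system/dev-disk-by\x2duuid-25d5d4af\x2d736a\x2d4232\x2da4bb\x2d492499bc1038.device.d/timeout.conf
[Unit]
JobRunningTimeoutSec=5s

(and then run systemctl daemon-reload)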

Here is an example of how to set TimeoutStartSec for httpd.service.

Is it possible to specify such a timeout for mounting the HDD during system boot?

Thanks.

1 Answer


In my case the same amount of delay (30s) was due to stupid logic in the initramfs scripts:

        local slumber=30
        case $DPKG_ARCH in
                powerpc|ppc64|ppc64el)
                        slumber=180
                        ;;
                *)
                        slumber=30
                        ;;
        esac
        if [ "${ROOTDELAY:-0}" -gt $slumber ]; then
                slumber=$ROOTDELAY
        fi

The above ignores rootdelay if it is less than "slumber" (the name is as miraculous as the code).

To fix this you should:

  1. Add the kernel boot parameter rootdelay=3 (edit /etc/default/grub) and run update-grub;
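For example, with the GRUB_CMDLINE_LINUX_DEFAULT from the question it could look like this (keep your own resume UUID; only rootdelay=3 is added), followed by update-grub:

GRUB_CMDLINE_LINUX_DEFAULT="quiet splash resume=UUID=25d5d4af-736a-4232-a4bb-492499bc1038 rootdelay=3"

update-grub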
  2. Patch your initramfs script with this:
--- /usr/share/initramfs-tools/scripts/local    2023-09-25 22:47:50.000000000 +0300
+++ /usr/share/initramfs-tools/scripts/local    2024-02-06 03:35:29.442516895 +0300
@@ -64,9 +64,7 @@
                        slumber=30
                        ;;
        esac
-       if [ "${ROOTDELAY:-0}" -gt $slumber ]; then
-               slumber=$ROOTDELAY
-       fi
+       slumber=${ROOTDELAY:-$slumber}
    case "$dev_id" in
    UUID=*|LABEL=*|PARTUUID=*|/dev/*)

  3. Update the initrd image:
update-initramfs -k 6.7.3-060703-generic -c
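
If an initramfs for that kernel already exists, regenerating it in place should work as well:

update-initramfs -u -k 6.7.3-060703-generic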

Alternatively, you may save this update-initrd helper script to your /boot:

#!/bin/bash
# update-initrd: rebuild the initramfs for a given kernel and save its file list
kv=${1#initrd.img-}   # accept "initrd.img-<version>", "vmlinuz-<version>" or a bare version
kv=${kv#vmlinuz-}
img="initrd.img-$kv"
update-initramfs -k "$kv" -c
lsinitramfs "/boot/$img" > "/boot/$img.txt"
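
Then, with the script saved as /boot/update-initrd and made executable, it can be called with either the bare kernel version or one of the /boot file names, e.g.:

chmod +x /boot/update-initrd
/boot/update-initrd 6.7.3-060703-generic
/boot/update-initrd vmlinuz-6.7.3-060703-generic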
  4. If you are on Debian and don't want the above patch overwritten when initramfs-tools-core is updated, do this:
# note: $_ below expands to the last argument of the previous command (the scripts directory)
cd /usr/share/initramfs-tools/scripts
dpkg-divert --rename --divert $_/local.orig $_/local

All the above commands must be run as root (obviously, so I'm not stupidly cluttering them with sudo).

midenok