280

Upon trying to upgrade from 10.10 to 11.04 all seemed to go well until the restart. This error message is what comes up:

Kernel Panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0)

How do we fix that?

David Foerster
Jeffrywith1e

12 Answers

278

You are missing the initramfs for that kernel. Choose another kernel from the GRUB menu under Advanced options for Ubuntu and run sudo update-initramfs -u -k version to generate the initrd for version (replace version with the kernel version string, such as 4.15.0-36-generic), then run sudo update-grub.
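
The steps above can be sketched as one session; the version string below is only an example, so substitute one that actually exists under /boot on your system:

```shell
# From an older kernel that still boots (GRUB > Advanced options for Ubuntu):
ls /boot | grep vmlinuz              # see which kernel versions are installed
ver=4.15.0-36-generic                # example version; use one listed above
sudo update-initramfs -u -k "$ver"   # regenerate the missing initrd
sudo update-grub                     # rebuild the GRUB menu entries
```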

Zanna
psusi
115

Start with a live CD, open a terminal and execute:

sudo fdisk -l
sudo mount /dev/sdax /mnt
sudo mount --bind /dev /mnt/dev
sudo mount --bind /dev/pts /mnt/dev/pts
sudo mount --bind /proc /mnt/proc
sudo mount --bind /sys /mnt/sys
sudo chroot /mnt 

If your /boot is on a separate partition, also run:

sudo mount /dev/sday /mnt/boot

and now you can run update-initramfs and update-grub without errors:

update-initramfs -u -k 2.6.38-8-generic (or your version)

If you don't know your version, use:

dpkg --list | grep linux-image
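
If the list is long, you can extract just the version strings ready to pass to update-initramfs -k. This is a sketch; the awk filter assumes standard dpkg --list output, where fully installed packages have ii in the first column:

```shell
# Print only the version part of installed linux-image packages,
# e.g. "2.6.38-8-generic":
dpkg --list | awk '$1 == "ii" && $2 ~ /^linux-image-[0-9]/ { sub(/^linux-image-/, "", $2); print $2 }'
```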

Then update GRUB:

update-grub

Reboot your system.

mchid
Tomeu Roig
75

In case this happened after an aborted kernel update (e.g. a system crash during aptitude safe-upgrade),

  1. boot with an older kernel and
  2. run dpkg --configure -a.

This will complete the upgrade, including configuring the boot settings as psusi explains.

Raphael
31

In my situation the problem was that /boot was at 100% capacity, so the last two kernel updates had not completed successfully; hence, on reboot, when GRUB 2 selected the latest kernel, it failed.

I resolved the issue by booting into the oldest installed kernel and removing some unused kernels using aptitude. After the uninstall, dpkg automatically tried to configure the broken packages, and this time it succeeded.
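
A rough sketch of that cleanup on an apt-based system; linux-image-VERSION-generic is a placeholder for an old kernel version you are NOT currently running:

```shell
df -h /boot                          # confirm /boot is (nearly) full
uname -r                             # the running kernel; keep this one
dpkg --list | grep linux-image       # see what is installed
sudo apt-get purge linux-image-VERSION-generic   # placeholder: an old, unused version
sudo apt-get -f install              # let dpkg finish configuring broken packages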

tshepang
11

Full diagnosis procedure based on kernel messages

Using this QEMU emulation setup, I tried to produce minimal examples of every possible failure type to help you debug your problem.

In that simple setup, QEMU emulates a system with:

  • a single virtio disk, which represents a hard disk or SSD on real hardware
  • that virtio disk contains a raw, unpartitioned ext4 image. In normal operation, the device would appear as /dev/vda (v is the indicator letter for virtio; if it were partitioned, the partitions would be /dev/vda1, /dev/vda2, etc.)
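
A hypothetical QEMU invocation matching that setup; bzImage and rootfs.ext4 are placeholder paths for your own kernel build and root filesystem image:

```shell
qemu-system-x86_64 \
    -kernel bzImage \
    -drive file=rootfs.ext4,format=raw,if=virtio \
    -append 'root=/dev/vda console=ttyS0' \
    -nographic
```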

The possible errors you could get are:

  1. Linux cannot read bytes from the disk.

    This could be either because the disk is broken, or because you didn't configure Linux with the ability to read from that hardware type.

    In my QEMU case I can reproduce this by removing the key options that allow the kernel to read that virtio disk:

    CONFIG_VIRTIO_BLK=y
    CONFIG_VIRTIO_PCI=y
    

    The resulting error message looks like this:

    <4>[    0.541708] VFS: Cannot open root device "vda" or unknown-block(0,0): error -6
    <4>[    0.542035] Please append a correct "root=" boot option; here are the available partitions:
    <0>[    0.542562] Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0)
    

    So here Linux tells us that it can't read from vda at all: VFS: Cannot open root device "vda" or unknown-block(0,0): error -6.

    Then, at Please append a correct "root=" boot option; here are the available partitions: it gives a list of partitions it could read.

    In our case, however, the list is empty, since the next line is completely unrelated.

  2. Linux can read bytes from the disk, but it doesn't understand the filesystem to read files out of it.

    This is normally because you didn't configure the kernel to read that filesystem type.

    I can reach this case by removing the kernel's ability to read an ext4 filesystem:

    CONFIG_EXT4_FS=y
    

    With that removed, the error message is:

    <4>[    0.585296] List of all partitions:
    <4>[    0.585913] fe00          524288 vda
    <4>[    0.586123]  driver: virtio_blk
    <4>[    0.586471] No filesystem could mount root, tried:
    <4>[    0.586497]  squashfs
    <4>[    0.586724]
    <0>[    0.587360] Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(254,0)
    

    So Linux tells us that it managed to find a vda disk by reading it with the virtio_blk driver.

    But then, it was not able to mount that filesystem. It tried squashfs, which is the only filesystem we left enabled, but that didn't work, because we have an ext4 partition.

  3. You passed the wrong root= kernel command line option.

    This one is easy, just pass the correct one! The kernel even gives you a list of the ones it knows about!

    For example, if we pass a wrong:

    root=/dev/vda2
    

    which doesn't even exist, the kernel gives an error of type:

    <4>[    0.608475] Please append a correct "root=" boot option; here are the available partitions:
    <4>[    0.609563] fe00          524288 vda
    <4>[    0.609723]  driver: virtio_blk
    <0>[    0.610433] Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(254,2)
    

    clearly telling us: "hey, there is no vda2, but there is a vda!"

    This example also clarifies what the (0,0), (254,0) and (254,2) from the previous cases mean:

    • (0,0): a first number of 0 means the kernel could not read from the disk at all
    • (254,2): 254 is the device number assigned to the disk, and 2 is the partition within that device, as in /dev/vda2. A second number of 0 means the raw, unpartitioned device, as in /dev/vda.

Tested on Linux 5.4.3.
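
Two quick sanity checks from a working system (or a live CD) that map onto the cases above; the /boot/config path is standard on Ubuntu kernels but may differ elsewhere:

```shell
cat /proc/filesystems                          # filesystems this kernel can mount (case 2)
grep CONFIG_EXT4_FS "/boot/config-$(uname -r)" # is ext4 compiled in? (Ubuntu ships the config here)
cat /proc/partitions                           # major/minor numbers to compare against
                                               # the (MAJOR,MINOR) in unknown-block(...)
```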

3

I faced this problem because the power went out while the kernel packages were being updated. I recovered as follows.

Go to the GRUB menu, select Advanced options, select a previous kernel and boot.

Once you get a terminal, run the command below:

sudo dpkg --configure -a

Here is the relevant part of the dpkg man page:

--configure package...|-a|--pending
              Configure a package which has been unpacked but not yet configured.  If -a or --pending is given instead of package, all unpacked but unconfigured packages are configured.

              To reconfigure a package which has already been configured, try the dpkg-reconfigure(8) command instead.

              Configuring consists of the following steps:

              1. Unpack the conffiles, and at the same time back up the old conffiles, so that they can be restored if something goes wrong.

              2. Run postinst script, if provided by the package.

The logs looked like this:

Setting up linux-image-4.15.0-76-generic (4.15.0-76.86) ...
Processing triggers for initramfs-tools (0.130ubuntu3.9) ...
update-initramfs: Generating /boot/initrd.img-4.15.0-74-generic
Processing triggers for linux-image-4.15.0-76-generic (4.15.0-76.86) ...
/etc/kernel/postinst.d/dkms:
 * dkms: running auto installation service for kernel 4.15.0-76-generic
   ...done.
/etc/kernel/postinst.d/initramfs-tools:
update-initramfs: Generating /boot/initrd.img-4.15.0-76-generic
/etc/kernel/postinst.d/zz-update-grub:
Sourcing file `/etc/default/grub'
Generating grub configuration file ...
Found linux image: /boot/vmlinuz-4.15.0-76-generic
Found initrd image: /boot/initrd.img-4.15.0-76-generic
Found linux image: /boot/vmlinuz-4.15.0-74-generic
Found initrd image: /boot/initrd.img-4.15.0-74-generic
Found linux image: /boot/vmlinuz-4.15.0-72-generic
Found initrd image: /boot/initrd.img-4.15.0-72-generic
Found memtest86+ image: /boot/memtest86+.elf
Found memtest86+ image: /boot/memtest86+.bin
Found Windows 7 on /dev/sda1
done

and voilà, the newer package that had been downloaded but not configured is now working.

mrigendra
1

In my case:

  • It was caused by a crash during upgrade to LTS 20.04.

  • dpkg --configure -a opened the recovery menu again, so the packages were not (re)configured.

  • So I had to list the installed kernels

    dpkg --list | grep linux-image | more
    
  • and configure specifically the kernel that was newly installed:

    dpkg --configure linux-image-5.20.0-52-generic
    

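To find which packages are stuck half-configured in the first place, a sketch (any status other than ii in the first column of dpkg --list means the package is not fully installed and configured):

```shell
# Show kernel packages whose dpkg status is not "ii" (installed + configured):
dpkg --list | awk '$1 != "ii" && $2 ~ /^linux-image/ { print $1, $2 }'
```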
On a related note, the causes of the upgrade crash may be:

  • Installation ran out of space on the volume with kernels:

    dpkg --purge linux-image-<someOldVersion>
    

    I wouldn't go with "remove all old kernels" right away because you want some to boot to if the newest is broken.

  • Your disk is wearing out - run smartctl --health --all on the disk device and e2fsck ...

  • Some driver caused the whole OS to hang - for me this happens with the NVIDIA driver when playing a 4K movie on a 4K screen.

1

In my case, it was because my Dell XPS 15 9550 has some kind of weird problem where it cannot load the full initrd image into RAM during the UEFI boot procedure. I answered another question specific to that issue as well:

https://askubuntu.com/a/1412273/170833

morhook
1

Tested for Lubuntu 22.04 with LUKS encryption.
Easy way. No terminal commands, no editing files.

  1. Reboot your computer, enter your LUKS encryption password, press Enter
  2. Select other boot options
  3. Select old kernel (recovery mode)
  4. Click "fix broken packages"
  5. Click "update grub"
  6. Reboot in normal mode
Nairum
0

You can also boot the server in rescue mode and reinstall only GRUB:

http://info.w3calculator.com/free-code/linux/recover-from-corrupted-boot-image/

Math
0

I got this problem because my /boot partition was full, so my kernel updates had failed. I managed to fix it by booting from an old kernel in the GRUB menu.

Once booted, I began purging old kernels, but I ran into some dependency issues, so first I had to uninstall the linux-server package:

apt-get remove linux-server
apt-get update
apt-get -f install
apt-get upgrade

Then I rebooted and everything was working fine!

0

In addition to Tomeu's instructions, before chroot I needed to:

sudo mount --bind /dev /mnt/dev

Additionally, after the chroot:

cp -r /usr/lib/i386-linux-gnu/pango /usr/lib/

(Got this from here.)

Kris Harper
Jason