
I recently installed Ubuntu 20.04 Server*, and after a while ran into trouble because of "no space left on device". Only then did I realize that the Ubuntu install had not used the available disk space in full:

$ lsblk
NAME                      MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
[...]
nvme0n1                   259:0    0 931.5G  0 disk 
├─nvme0n1p1               259:1    0   1.1G  0 part /boot/efi
├─nvme0n1p2               259:2    0   1.5G  0 part /boot
└─nvme0n1p3               259:3    0   929G  0 part 
  └─ubuntu--vg-ubuntu--lv 253:0    0   100G  0 lvm  /

As I understand the output, the partition nvme0n1p3 has a size of 929 GB, but Ubuntu uses only 100 GB of it. I don't know why that happened, as I have an older Ubuntu 20.04 Server installation that looks the way I would expect:

$ lsblk
NAME                      MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
[...]
nvme0n1                   259:0    0 953.9G  0 disk
├─nvme0n1p1               259:1    0   512M  0 part /boot/efi
├─nvme0n1p2               259:2    0     1G  0 part /boot
└─nvme0n1p3               259:3    0 952.4G  0 part 
  └─ubuntu--vg-ubuntu--lv 253:0    0 952.4G  0 lvm  /

I found this answer and this answer and tried them, but the response was:

$ sudo resize2fs /dev/mapper/ubuntu--vg-ubuntu--lv 929G 
resize2fs 1.45.5 (07-Jan-2020)
The containing partition (or device) is only 26214400 (4k) blocks.
You requested a new size of 243531776 blocks.

So resize2fs tells me that the containing partition is only 100 GB in size, while lsblk obviously says otherwise.
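The two numbers in that error can be checked directly: resize2fs reports sizes in 4 KiB blocks, so a little shell arithmetic (no disk access needed) shows they are consistent with lsblk:

```shell
# resize2fs reports sizes in 4 KiB blocks; convert both numbers to GiB:
lv_gib=$(( 26214400 * 4 / 1024 / 1024 ))    # current LV size
req_gib=$(( 243531776 * 4 / 1024 / 1024 ))  # requested size
echo "LV: ${lv_gib} GiB, requested: ${req_gib} GiB"
# → LV: 100 GiB, requested: 929 GiB
```

So the filesystem's container (the logical volume) really is 100 GiB, even though the partition is 929 GiB.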

How can I make Ubuntu use all of the available 929 GB?


*The same still happens with Ubuntu 22.04.1 LTS server, although I'm sure I told the installer to use all available disk space.

not2savvy
2 Answers


Because Ubuntu uses an LVM volume, the size of the volume needs to be changed first before the resize2fs can increase the size of the underlying file system.

This can be done using lvextend like so:

$ sudo lvextend -l +100%FREE /dev/mapper/ubuntu--vg-ubuntu--lv
  Size of logical volume ubuntu-vg/ubuntu-lv changed from 100.00 GiB (25600 extents) to <928.96 GiB (237813 extents).
  Logical volume ubuntu-vg/ubuntu-lv successfully resized.

The -l +100%FREE option tells lvextend to add all of the free space of the containing volume group to the logical volume.
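The extent counts in the lvextend output can be sanity-checked the same way. Assuming the LVM default physical extent size of 4 MiB:

```shell
# With the default 4 MiB physical extent size, the extent counts in the
# lvextend output match the reported sizes:
before_mib=$(( 25600 * 4 ))    # 25600 extents -> 102400 MiB = 100 GiB
after_mib=$(( 237813 * 4 ))    # 237813 extents -> 951252 MiB ≈ 928.96 GiB
echo "before: $(( before_mib / 1024 )) GiB, after: ${after_mib} MiB"
```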

Now we can use resize2fs to modify the filesystem so it uses all available space:

~$ sudo resize2fs /dev/mapper/ubuntu--vg-ubuntu--lv
resize2fs 1.45.5 (07-Jan-2020)
Filesystem at /dev/mapper/ubuntu--vg-ubuntu--lv is mounted on /; on-line resizing required
old_desc_blocks = 13, new_desc_blocks = 117
The filesystem on /dev/mapper/ubuntu--vg-ubuntu--lv is now 243520512 (4k) blocks long.

And indeed now:

$ lsblk
NAME                      MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
[...]
nvme0n1                   259:0    0 931.5G  0 disk 
├─nvme0n1p1               259:1    0   1.1G  0 part /boot/efi
├─nvme0n1p2               259:2    0   1.5G  0 part /boot
└─nvme0n1p3               259:3    0   929G  0 part 
  └─ubuntu--vg-ubuntu--lv 253:0    0   929G  0 lvm  /

I found the solution at How to resize the root LVM partition of Ubuntu. I recommend the article for more detailed background information.


You can also do the extend and resize in one step; the -r (--resizefs) option tells lvextend to run the appropriate filesystem resize command (here resize2fs) after extending the volume:

sudo lvextend -l +100%FREE -r /dev/mapper/ubuntu--vg-ubuntu--lv
not2savvy

I wanted to add this here since I stumbled upon it via Google while looking for a way to increase my partition size in Proxmox after resizing the disk. Having just performed these steps, I'll use them as my example.

This should apply to any situation where you also need to grow the partition itself, not just the underlying LVM volume, so it extends not2savvy's original answer.

It also shows how to use parted instead of fdisk, which not2savvy's linked article uses.

Assuming you have something similar to the following:

> lsblk
NAME                      MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda                         8:0    0 188.5G  0 disk
├─sda1                      8:1    0     1M  0 part
├─sda2                      8:2    0     2G  0 part /boot
└─sda3                      8:3    0    30G  0 part
  └─ubuntu--vg-ubuntu--lv 252:0    0    30G  0 lvm  /
sr0                        11:0    1     3G  0 rom

> df -kh
Filesystem                         Size  Used Avail Use% Mounted on
tmpfs                              1.5G 1020K  1.5G   1% /run
/dev/mapper/ubuntu--vg-ubuntu--lv   15G  8.5G  5.5G  61% /
tmpfs                              7.3G     0  7.3G   0% /dev/shm
tmpfs                              5.0M     0  5.0M   0% /run/lock
/dev/sda2                          2.0G   96M  1.7G   6% /boot
tmpfs                              1.5G   12K  1.5G   1% /run/user/1000

lvextend alone is insufficient here, since the logical volume already takes up 100% of the partitioned space.
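You can confirm this before reaching for parted: vgs is read-only and reports the free space left in the volume group. A minimal sketch (the VG name ubuntu-vg and the sample output are assumptions matching the setup above):

```shell
# Read-only check, run on the target machine:
#   sudo vgs ubuntu-vg --units g -o vg_free --noheadings
# Sample output for the situation above, where the LV already fills the VG:
vg_free="0.00g"
# Strip the unit suffix; zero free space means lvextend has nothing to add
# and the partition itself must be grown first.
if [ "${vg_free%g}" = "0.00" ]; then
    echo "no free space in VG: grow the partition first"
fi
```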

Drop into parted with: sudo parted /dev/sda

You can print out the current details with (parted) print

Model: QEMU QEMU HARDDISK (scsi)
Disk /dev/sda: 202GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system  Name  Flags
 1      1049kB  2097kB  1049kB                     bios_grub
 2      2097kB  2150MB  2147MB  ext4
 3      2150MB  34.4GB  32.2GB

We want to resize sda3 so we enter:

(parted) resizepart 3

Note: parted may prompt with End?  [34.4GB]?. You can enter 100% at that prompt, or on the next line like so:

(parted) 100%

From here we are done with parted, so quit:

(parted) quit

Next, use pvresize to grow the LVM physical volume into the enlarged partition:

> sudo pvresize /dev/sda3
  Physical volume "/dev/sda3" changed
  1 physical volume(s) resized or updated / 0 physical volume(s) not resized
> lsblk
NAME                      MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda                         8:0    0 188.5G  0 disk
├─sda1                      8:1    0     1M  0 part
├─sda2                      8:2    0     2G  0 part /boot
└─sda3                      8:3    0 186.5G  0 part
  └─ubuntu--vg-ubuntu--lv 252:0    0    30G  0 lvm  /
sr0                        11:0    1     3G  0 rom

From here you can continue with not2savvy's answer:

sudo lvextend -l +100%FREE -r /dev/mapper/ubuntu--vg-ubuntu--lv

Check the result with lvdisplay.
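As a rough sanity check (pure shell arithmetic using the sizes from the lsblk output above), the root LV should end up close to the 186.5 GiB of sda3:

```shell
# Expected growth of the root LV, from the lsblk sizes above:
part_mib=$(( 1865 * 1024 / 10 ))   # 186.5 GiB partition, in MiB
lv_mib=$(( 30 * 1024 ))            # old 30 GiB LV, in MiB
echo "expected gain: about $(( (part_mib - lv_mib) / 1024 )) GiB"
# → expected gain: about 156 GiB
```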

Molebomb