
I am trying to extend my LVM space on a vHost. I extended the disk in XCP-ng from 20 GB to 100 GB, and I can see the new space on the disk:

NAME                      MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
loop0                       7:0    0 63.3M  1 loop /snap/core20/1828
loop1                       7:1    0 63.5M  1 loop /snap/core20/2015
loop2                       7:2    0 91.8M  1 loop /snap/lxd/24061
loop3                       7:3    0 49.8M  1 loop /snap/snapd/18357
loop4                       7:4    0 40.8M  1 loop /snap/snapd/20092
sr0                        11:0    1 1024M  0 rom
xvda                      202:0    0  100G  0 disk
├─xvda1                   202:1    0    1M  0 part
├─xvda2                   202:2    0  1.8G  0 part /boot
└─xvda3                   202:3    0 18.2G  0 part
  └─ubuntu--vg-ubuntu--lv 253:0    0   10G  0 lvm  /

I have tried various lvextend -l +100%FREE commands with no luck, and resize2fs didn't help either. I am missing a step somewhere; can some kind person help me out?


sudo pvs output:

PV         VG        Fmt  Attr PSize  PFree
/dev/xvda3 ubuntu-vg lvm2 a--  18.22g 8.22g

I think I ended up growing xvda3 but not the LVM itself:

root@Plex:/home/penbrock# lsblk /dev/xvda
NAME                      MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
xvda                      202:0    0  100G  0 disk
├─xvda1                   202:1    0    1M  0 part
├─xvda2                   202:2    0  1.8G  0 part /boot
└─xvda3                   202:3    0 98.2G  0 part
  └─ubuntu--vg-ubuntu--lv 253:0    0   10G  0 lvm  /

1 Answer


As a preamble to the actual answer of how to solve your issue, remember that the Logical Volume Manager (LVM) system is built from multiple layers that work together to provide working logical volumes on a disk:

  • Your actual disk. Whether it's an SSD, an HDD, or a virtual disk (as with VMs or VPSes), this is the hardware level of the machine. It holds the actual data of the system, including the partition table, which defines the partitions and how space is allocated to each of them on the drive. Those partitions are what become PVs: a partition becomes a PV by being created in the disk's partition table and designated as an LVM member.
  • Physical Volumes (PVs) - These are disk partitions designated as LVM2 members, and they make up the Volume Groups of the LVM system. Multiple PVs can be members of a VG, and those PVs can come from multiple disks. A PV can safely be a member of only one VG at a time.
  • Volume Groups (VGs) - Groups of PVs which, combined, provide the total available space of the volume group. LVs are created inside the member PVs that make up the VG.
  • Logical Volumes (LVs) - These are the "logical drives" that make up the actual Logical Volumes part of LVM. Each LV belongs to a single VG and can be expanded to fill the VG it's a member of. You can have multiple LVs per VG, but the total size of the LVs in a VG cannot exceed the space in the VG, and LVs cannot overlap (just like on-disk partitions).
  • The actual filesystem installed on a Logical Volume. This is how you get a root filesystem on a Logical Volume (it is where Ubuntu was installed on the LVM environment straight out of the installer), and it is how you interact with the volume and mount it for use in the OS.

You have to take into account each layer when messing with LVM volumes.
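
If you want to see each of these layers on your own system, the standard LVM reporting tools map onto them almost one-for-one. The following is just a read-only inspection sketch; none of these commands change anything:

lsblk /dev/xvda   # disk and partition layer, plus the LV stacked on top of xvda3
sudo pvs          # Physical Volume layer: LVM member partitions and their free space
sudo vgs          # Volume Group layer: total and free space in each VG
sudo lvs          # Logical Volume layer: the LVs carved out of each VG
df -h /           # filesystem layer: what the mounted root filesystem actually sees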

The general process for doing this on a VM / VPS is: expand the physical disk/drive, expand the partition that is the LVM member, expand the PV if it's not already expanded, resize the logical volume itself, and then resize the actual filesystem on that volume. (This gets much more complicated when multiple PVs and/or multiple VGs are involved; that is not covered here.)


If you don't want to read, and simply want the answer...

We know from your post that xvda3 is the physical volume LVM is using; this is visible in your sudo pvs output.

Whenever we are working with an LVM system, there are several steps you have to take to expand a single-PV, single-VG environment* (the full command sequence is collected after this list):

  1. Resize the hard drive itself: either by cloning if going from a smaller to a larger physical drive, or at the hypervisor if this is a VM or virtual disk, or in the hosting panel for a VPS or a DO node or such. (You did this already.)
  2. Resize the partition that is your LVM VG member. (sudo growpart /dev/xvda 3) (Your second lsblk output shows you already grew the partition, but keep this around for future reference.)
  3. Resize the PV if it's not already resized. (sudo pvresize /dev/xvda3) (This sometimes happens automatically when the partition is grown, so you may not need it, but it never hurts.)
  4. Resize the LV itself. (sudo lvresize -l +100%FREE /dev/mapper/ubuntu--vg-ubuntu--lv)
  5. Resize the filesystem itself to use the larger space. (sudo resize2fs /dev/mapper/ubuntu--vg-ubuntu--lv)
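
Put together for your specific machine, assuming the device, VG, and LV names from your lsblk and pvs output (xvda3, ubuntu-vg, ubuntu-lv) and an ext4 root filesystem (the Ubuntu Server default; an XFS root would need xfs_growfs instead of resize2fs), the whole run looks like this. On Ubuntu, growpart is provided by the cloud-guest-utils package:

sudo growpart /dev/xvda 3                                      # step 2: grow partition xvda3 (already done on your system)
sudo pvresize /dev/xvda3                                       # step 3: let the PV claim the grown partition
sudo lvresize -l +100%FREE /dev/mapper/ubuntu--vg-ubuntu--lv   # step 4: grow the LV into the free VG space
sudo resize2fs /dev/mapper/ubuntu--vg-ubuntu--lv               # step 5: grow the ext4 filesystem
df -h /                                                        # confirm / now reports the extra space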

Then your system will reflect the expanded disk space.


* I specifically referred to a single-PV, single-VG setup because your system, like most LVM setups straight out of the installer, has LVM on a single disk: Ubuntu defaults to a single LV, within a single VG, on a single PV (partition) on disk, unless you went with a custom installation. A multi-PV or multi-VG setup is much more complicated and beyond the scope of your question and this answer.

Thomas Ward