
How can I increase swap partition size?

I would like to shrink partition #5 (/dev/nvme0n1p5, which is a ZFS partition) by 6 GB and add that space to partition #3 (/dev/nvme0n1p3).

I'm running Xubuntu 19.10 with ZFS as root.

Notes:

  • Since none of the GUI partition managers (GParted, GNOME Disks, KDE Partition Manager) currently support resizing or moving ZFS partitions, I can't use them.
  • I don't want to create another swap partition on ZFS; I just want to keep the current one and increase its size.
  • I don't want to create a new swapfile on ZFS!

System Info

sudo parted -l

Model: WDC PC SN520 SDAPNUW-512G-1002 (nvme)
Disk /dev/nvme0n1: 512GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags: 

Number  Start   End     Size    File system     Name                  Flags
 1      1049kB  538MB   537MB   fat32           EFI System Partition  boot, esp
 2      538MB   590MB   52.4MB  ext4
 3      590MB   2738MB  2147MB  linux-swap(v1)
 4      2738MB  4885MB  2147MB  zfs
 5      4885MB  512GB   507GB   zfs

sudo fdisk -l /dev/nvme0n1

Disk /dev/nvme0n1: 476.96 GiB, 512110190592 bytes, 1000215216 sectors
Disk model: WDC PC SN520 SDAPNUW-512G-1002          
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: 9E99ED37-A328-4F95-B9F9-946E5ED049B8

Device           Start        End   Sectors   Size Type
/dev/nvme0n1p1    2048    1050623   1048576   512M EFI System
/dev/nvme0n1p2 1050624    1153023    102400    50M Linux filesystem
/dev/nvme0n1p3 1153024    5347327   4194304     2G Linux swap
/dev/nvme0n1p4 5347328    9541631   4194304     2G Solaris boot
/dev/nvme0n1p5 9541632 1000215182 990673551 472.4G Solaris root

sudo zpool list -v

NAME          SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
bpool        1.88G   131M  1.75G        -         -     0%     6%  1.00x    ONLINE  -
  nvme0n1p4  1.88G   131M  1.75G        -         -     0%  6.82%      -  ONLINE  
rpool         472G   112G   360G        -         -     9%    23%  1.00x    ONLINE  -
  nvme0n1p5   472G   112G   360G        -         -     9%  23.8%      -  ONLINE

sudo swapon --show --output all

NAME           TYPE      SIZE USED PRIO UUID                                 LABEL
/dev/nvme0n1p3 partition   2G 7.8M   -2 52702bf2-1e50-4ece-8d3e-db01cff707fe

lsb_release -a

No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 19.10
Release:    19.10
Codename:   eoan
slashsbin
  • 261

3 Answers


According to Aaron Toponce's guide:

You cannot shrink a zpool, only grow it.

Source:
https://pthree.org/2012/12/04/zfs-administration-part-i-vdevs/
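
For the opposite (growing) direction, a minimal sketch of what that looks like (not from the guide), assuming the underlying partition has already been enlarged with a partitioning tool:

sudo zpool set autoexpand=on rpool
sudo zpool online -e rpool nvme0n1p5    # let the vdev expand into the newly available space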

Here is another guide:
http://www.resilvered.com/2011/07/how-to-shrink-zfs-root-pool.html

It looks like this second guide "shrinks" a pool by creating a new (smaller) pool on a different disk, and then sending a snapshot from the old pool to the new pool.
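
For reference, a minimal sketch of that send/receive approach; the pool and device names (newpool, /dev/sdb1) are placeholders, not taken from the question, and making the new pool bootable needs extra steps not shown here:

zpool create newpool /dev/sdb1                       # smaller pool on a second disk
zfs snapshot -r rpool@migrate                        # recursive snapshot of the old pool
zfs send -R rpool@migrate | zfs receive -F newpool   # replicate all datasets into the new pool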

mpb
  • 1,455

A ZFS partition cannot be shrunk, but a new swap volume (a zvol) can be created inside the root zpool just fine, like this:

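# Create a 20G zvol tuned for swap: volblocksize matches the system page size,
# caching is kept minimal and auto-snapshots are disabled; adjust -V to the size you want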
zfs create -V 20G -b "$(getconf PAGESIZE)" \
      -o compression=zle -o logbias=throughput \
      -o sync=always -o primarycache=metadata \
      -o secondarycache=none \
      -o com.sun:auto-snapshot=false \
      rpool/swap
mkswap -f /dev/zvol/rpool/swap
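
To actually start using the zvol, it still has to be enabled and, optionally, made persistent. A sketch of the remaining steps (run as root, like the commands above); the fstab line follows the usual ZFS-on-root recipes and is an assumption, not part of the original answer:

swapon /dev/zvol/rpool/swap
echo '/dev/zvol/rpool/swap none swap discard 0 0' >> /etc/fstab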

Even though this thread is a few years old, I also kept finding articles stating that a running zpool could not be shrunk. This article worked great for me on a running system (root on ZFS), so I thought it was worth sharing.

https://niziak.spox.org/wiki/linux:fs:zfs:shrink#:~:text=e%20nvmpool%20nvme0n1p3-,ZFS%3A%20shrink%20zpool,mirror%2C%20use%20attach%20not%20add

Basically it involves taking one of the partitions offline, detaching it, resizing it and re-adding it, then removing the other partition, resizing it to the same size, and attaching it to the already-resized one.

Copied from the link due to policy recommendations:

Shrinking a zpool is not possible, but the trick with a 2nd device (or even a file) works:

  • add a 2nd device to the zpool (it can be smaller; it only has to fit the data)
  • remove the 1st device; the zpool will copy all its data to the other device
  • to create a mirror, use attach, not add

zpool list rpool -v

zpool offline rpool /dev/disk/by-id/SECOND-part3

zpool detach rpool /dev/disk/by-id/SECOND-part3

Resize /dev/disk/by-id/SECOND-part3 to a smaller size.

zpool add rpool /dev/disk/by-id/SECOND-part3

zpool remove rpool /dev/disk/by-id/FIRST-part3

Sometimes ZFS refuses to remove a device with an 'out of space' error (even though the second device is big enough to hold all the data). To solve this, add more temporary devices to rpool (see the link).

Resize /dev/disk/by-id/FIRST-part3 to a smaller size equal to that of SECOND-part3.

zpool attach rpool /dev/disk/by-id/SECOND-part3 /dev/disk/by-id/FIRST-part3
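
After the attach, the pool resilvers onto FIRST-part3; progress can be watched with, for example:

zpool status rpool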


This was done on a running two-disk system (root on ZFS) without downtime or a reboot. Impressive!
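
Since the machine in the question has a single NVMe disk, the temporary second device would have to be something like a spare external disk or, as noted above, a file. A rough, hypothetical sketch of the file variant (the path and size are placeholders, the file must live on a filesystem outside rpool, file vdevs are really meant for testing, and good backups are essential); the remove/resize/attach steps above then apply with the file standing in for the second partition:

truncate -s 150G /mnt/external/rpool-temp.img   # must be larger than the pool's ALLOC (~112G here)
zpool add rpool /mnt/external/rpool-temp.img    # temporary second vdev; remove it again at the end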

luison
  • 151