I've got a 64GB SSD and a 3TB hard drive in my system running Ubuntu 14.04. The SSD has a small root partition, with the rest of the device allocated to an LVM physical volume. From that LVM physical volume I have two logical volumes allocated, one for /usr and one for /var. (/home is on the 3TB hard drive.)

Since about 25GB of the SSD was unused, I thought it would be interesting to try using it as a bcache cache device, with /home as the backing device.

I created a new logical volume using the remaining space on the LVM physical volume on the SSD. That left things looking like this:

# pvs
  PV         VG   Fmt  Attr PSize  PFree
  /dev/sda2  VG4  lvm2 a--  53.57g    0 
  /dev/sdb2  VG6  lvm2 a--   2.69t    0 
# lvs
  LV      VG   Attr      LSize  Pool Origin Data%  Move Log Copy%  Convert
  VG4-usr VG4  -wi-ao--- 19.31g                                           
  VG4-var VG4  -wi-ao---  9.31g                                           
  bcache  VG4  -wi-ao--- 24.95g                                           
  home    VG6  -wi-ao---  2.69t
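
For reference, the new LV was created with something along these lines (reconstructed; -l 100%FREE just takes whatever space was left in VG4):

# lvcreate -l 100%FREE -n bcache VG4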

I then did:

# make-bcache -C /dev/mapper/VG4-bcache

The system immediately locked up completely. (So the above is a reconstruction; I don't have the actual command I executed any more.)

Did I do something stupid without realising it? Is this a supported configuration? I'm wondering if it's worth reporting this as a bug or not. Nothing appears to have been permanently harmed by the crash.

saf

3 Answers


I definitely think you should file a bug. I've never even thought about using an LV as a bcache device, only a PV. And maybe (just maybe) you're the first one who's ever tried... and it's probably not handled...
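
(If you do, the usual route on Ubuntu is apport; something like this should collect the kernel logs for you:)

$ ubuntu-bug linux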

Do you want me to proceed with a SysBck (system backup) and try this myself? (Not today any more: too tired!)

Do you have a system backup??? (you're user type 4)

Fabby

Yes, it does. I accidentally did my setup backwards: I set my LVM volume as the caching device and my ramdrive as the backing device. But still, to answer your question, it does work.
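
For anyone reading this later, the intended direction looks roughly like this (device paths are placeholders; the slow device gets -B, the fast one gets -C, and the backing device is then attached using the cset.uuid that make-bcache -C prints):

# make-bcache -B /dev/slow/backing
# make-bcache -C /dev/fast/cache
# echo <cset-uuid> > /sys/block/bcache0/bcache/attach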

But I should mention that LVM2 has a caching feature of its own (lvmcache); you might as well opt to use that (which is what I did), and then use bcache if you want to cache the LVM volume to RAM.
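
For reference, the lvmcache route looks roughly like this (names and sizes are placeholders; note that the cache pool has to live in the same VG as the LV it caches, so the fast device needs to be a PV of that VG first):

# lvcreate -L 20G -n cachedata somevg /dev/fast_pv
# lvcreate -L 40M -n cachemeta somevg /dev/fast_pv
# lvconvert --type cache-pool --poolmetadata somevg/cachemeta somevg/cachedata
# lvconvert --type cache --cachepool somevg/cachedata somevg/origin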


In my case, the problem was due to the device already being actively used in bcache, as confirmed by bcache-super-show.

$ make-bcache -B /dev/ssd/cache
Can't open dev /dev/ssd/cache: Device or resource busy

$ make-bcache -C /dev/ssd/cache
Can't open dev /dev/ssd/cache: Device or resource busy

$ pvs
  PV                      VG     Fmt  Attr PSize   PFree
  /dev/mapper/md100-crypt hdd    lvm2 a--    3.64t 186.30g
  /dev/mapper/md101-crypt ssd    lvm2 a--  119.17g  32.23g
  /dev/md0                system lvm2 a--   13.81g   4.50g

$ lvs
  LV    VG     Attr      LSize  Pool Origin Data%  Move Log Copy%  Convert
  data  hdd    -wi-ao---  3.46t
  cache ssd    -wi-ao--- 29.75g
  data  ssd    -wi-ao--- 57.20g
  root  system -wi-ao---  9.31g

It seems to fail on the following:

open("/dev/ssd/cache", O_RDWR|O_EXCL) = -1 EBUSY (Device or resource busy)

Before that failure, the following seems to succeed:

open("/dev/ssd/cache", O_RDONLY) = 4
ioctl(4, BLKSSZGET, 512) = 0
close(4)             = 0

This leads me to believe that O_EXCL is responsible for the EBUSY, indicating that perhaps another process is holding a lock on the device. However, I can confirm that /dev/ssd/cache is not mounted, open or in use (as seen in lsof or fuser), and that rebooting does not resolve the problem.
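
In hindsight, one thing that does reveal kernel-level holders (which lsof and fuser won't show) is the holders directory in sysfs; assuming /dev/ssd/cache maps to dm-5 as in the lsblk output further down, this would have pointed straight at the culprit:

$ ls /sys/block/dm-5/holders
bcache0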

Attempting to remove it from device-mapper also yields no progress:

$ dmsetup remove ssd-cache
device-mapper: remove ioctl on ssd-cache failed: Device or resource busy

So after running lsblk, I can see the following:

sdb                        8:16   0 119.2G  0 disk
└─sdb1                     8:17   0 119.2G  0 part
  └─md101                  9:101  0 119.2G  0 raid1
    └─md101-crypt (dm-3) 252:3    0 119.2G  0 crypt
      ├─ssd-data (dm-4)  252:4    0  57.2G  0 lvm   /mnt/ssd/data
      └─ssd-cache (dm-5) 252:5    0  29.8G  0 lvm
        └─bcache0        251:0    0  29.8G  0 disk

As you can see, bcache0 is a child of the device in question, and a quick check confirms this:

$ bcache-super-show /dev/ssd/cache
sb.magic        ok
sb.first_sector     8 [match]
sb.csum         9F5D50331A2A10B9 [match]
sb.version      1 [backing device]
dev.label       (empty)
dev.uuid        8ba675a3-d9e4-4d47-8403-655c226f578f
dev.sectors_per_block   1
dev.sectors_per_bucket  1024
dev.data.first_sector   16
dev.data.cache_mode 0 [writethrough]
dev.data.cache_state    0 [detached]
cset.uuid       c006c316-d396-40cf-bde8-8bd4d0a017e8

Therefore, the root problem in my case was that the device itself was already part of bcache, and make-bcache failed to detect this.
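
If you hit the same thing, the way out (a sketch, assuming the LV is registered as bcache0 as in the lsblk output above) is to stop the running bcache device and then wipe the stale superblock before re-running make-bcache:

$ echo 1 > /sys/block/bcache0/bcache/stop
$ wipefs -a /dev/ssd/cache

(If your wipefs doesn't recognise bcache signatures, zeroing the first few KiB of the device works too; per the sb.first_sector 8 line above, the superblock sits 4 KiB in.)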

Hopefully this will be useful to someone else in future.

SleepyCal