bcache vs. dm-cache
I tested bcache about a year ago on my spare/backup server and it worked quite well. But since then every newer kernel brought new issues; there were even suggestions on the mailing list that bcache should be marked experimental again. So after some time I abandoned bcache and started waiting for something more stable and configuration-friendly. At the same time dm-cache was also available, but its setup was quite challenging and dangerous (using dmsetup and raw calculated numbers – a very easy way to make a horrible mistake). A friendlier way to manage dm-cache with the LVM2 tools was promised too, but back then it existed only in plans :)
Right now, LVM2 has all the promised features for managing dm-cache, so I gave it a try :)
Just to sum up my experience:
- bcache
- + quite good documentation about cache policies, monitoring interface, disk format, disaster recovery etc.
- + cache works on the block device layer, so you can cache a whole PV regardless of how many LVs you have on top of it (one cache for all)
- + no need for resize as you are caching whole physical device
- + you can tweak cache parameters on the fly, including cache policy (writethrough/writeback)
- – you can’t convert a backing device that already has data on it to a cached device on the fly
- – unstable
- ~ doesn’t require LVM support
- dm-cache
- + stable
- + easy to setup
- + you can convert any LVM volume to use cache or not to use it, online, without any problem
- – one cache for one LV, so if you have multiple LVs and want to cache them all, you’d need to divide the cache device (SSD) into many small chunks, one per LV – wasted space
- – very brief documentation – enough to set things up, but with no info on how to tune or monitor it
- – resizing a cached LV must be done by uncaching, resizing and caching again, so you lose the entire cache contents
- ~it’s part of LVM
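The resize limitation above boils down to a detach/resize/re-attach cycle. A sketch of that workflow, using the same hypothetical names as the configuration section below (vg_group, lv_home, cacheX); `lvconvert --splitcache` needs a sufficiently new LVM2 – on older versions you’d remove the cache pool with lvremove and recreate it afterwards:

```shell
# Flush dirty blocks and detach the cache, keeping the pool for later re-use
lvconvert --splitcache vg_group/lv_home

# Resize the now-uncached LV and grow the filesystem (ext2/3/4 shown)
lvresize -L +50G vg_group/lv_home
resize2fs /dev/vg_group/lv_home

# Re-attach the cache pool -- its previous contents are gone, so the cache
# starts cold again
lvconvert --type cache --cachepool vg_group/cacheX vg_group/lv_home
```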
Requirements
It’s time to get back to Slackware. Current version is 14.1, so everything described here is based on this version.
Recent kernel
The stock 3.10.17 kernel is a little too old for dm-cache to work well, so I recommend at least something from the 3.14 series. You can compile it yourself or grab one from the slackware-current tree, which currently has 3.14.33.
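A quick way to check the running kernel against that 3.14 baseline and confirm the dm-cache target is actually available (the last two commands need root):

```shell
# Running kernel version -- should be >= 3.14 per the advice above
uname -r

# Load the dm-cache module and check that the device-mapper "cache"
# target is registered
modprobe dm_cache
dmsetup targets | grep -w cache
```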
Recent LVM2
The LVM2 version shipped with Slackware 14.1 is almost the last one without dm-cache support, so you need to grab the latest version and compile it. For complete support you also need thin-provisioning-tools, which provides fsck-like tools for dm-cache.
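Before compiling anything, it may help to check which LVM2 you actually have. A small sketch – the helper name is made up, and the 2.02.106 threshold is my assumption of roughly where cache segment support appeared:

```shell
# Hypothetical helper: is LVM2 version $1 new enough for dm-cache?
# (cache support appearing around 2.02.106 is an assumption)
lvm_supports_cache() {
    required="2.02.106"
    # sort -V orders version strings; if $required sorts first (or equal),
    # the given version is new enough
    [ "$(printf '%s\n' "$required" "$1" | sort -V | head -n 1)" = "$required" ]
}

# Usage on a real system (strip the "(2)" suffix from lvm's output):
#   ver="$(lvm version | awk '/LVM version/ {print $3}' | sed 's/(.*$//')"
#   lvm_supports_cache "$ver" && echo "cache-capable" || echo "too old"
```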
Hard way (compile everything yourself)
- Get sources of LVM2 package from ftp://ftp.osuosl.org/pub/slackware/slackware64-14.1/source/a/lvm2/
- Get more recent LVM2 sources from ftp://sources.redhat.com/pub/lvm2/releases/LVM2.2.02.116.tgz
- Get thin-provisioning-tools from https://github.com/jthornber/thin-provisioning-tools/archive/v0.4.1.tar.gz and save it as thin-provisioning-tools-0.4.1.tar.gz
- Get my patch for SlackBuild from https://majek.sh/dm-cache/slackware-lvm2-dm-cache.patch and apply it in lvm2 directory:
patch -p0 < slackware-lvm2-dm-cache.patch
- Make package:
sh lvm2.SlackBuild
- Upgrade lvm2 package with new one found in /tmp
- If you want a cached root volume, you also need to patch mkinitrd and rebuild the initrd.
Patch: slackware-mkinitrd-dm-cache.patch
Apply:
cd /sbin
patch -p0 < /somewhere/slackware-mkinitrd-dm-cache.patch
- Use it :)
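After upgrading the package it’s worth verifying that the new tools actually expose cache support. A quick check (output formats may differ slightly between versions):

```shell
# Confirm the upgraded LVM2 version
lvm version

# The "cache" and "cache-pool" segment types should now be listed
lvm segtypes | grep cache

# thin-provisioning-tools should have installed the dm-cache checker
which cache_check
```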
Lazy way
Just get my lvm2 package and install it. You may also need mkinitrd patch (see above).
Configuration
Start is easy and well documented in man lvmcache.
Here is a quick howto, assuming you have a volume group named vg_group, your physical cache device (SSD) is /dev/sdb1, and you want to cache the volume named lv_home.
- First, the cache/SSD disk must be part of the volume group:
vgextend vg_group /dev/sdb1
- Create the cache pool on the fast device (the default mode is writethrough – safer but slower, so for writeback you need to add the --cachemode option):
lvcreate --type cache-pool --cachemode writeback -L 10G -n cacheX vg_group /dev/sdb1
- Attach cache to logical volume:
lvconvert --type cache --cachepool vg_group/cacheX vg_group/lv_home
- Enjoy :)
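To see that caching is really in effect, you can list the hidden internal volumes and query the kernel directly. The device-mapper name vg_group-lv_home follows the usual vg-lv naming convention; dmsetup needs root:

```shell
# -a also shows the hidden internal volumes ([cacheX], [cacheX_cdata],
# [cacheX_cmeta]); the segtype of lv_home should now be "cache"
lvs -a -o name,size,segtype vg_group

# Raw cache statistics (hits/misses, dirty blocks) straight from the kernel
dmsetup status vg_group-lv_home
```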