That is a lot of information to digest, but once you have it all set up and working like I do, you may run into the issue that encrypted datasets won’t automatically mount after a reboot and, as a result, aren’t shared either. This is not unexpected …
I guess having encrypted datasets mounted and shared automatically kind of defeats the purpose of encrypting them in the first place.
Here’s my reasoning behind wanting it anyway. I want to play with it. Period. Fiddling around with this shit is fun. Also, I want to figure out how to do encrypted send/receives with ZFS over SSH to and from a friend of mine. We will store my encrypted backups on his server and he stores his on mine. We don’t want to be able to read each other’s data. I could fiddle around with pipes and on-the-fly encryption, but let’s be honest, it’s a hassle. Native ZFS encryption is much easier and the proper way to do this. FWIW, he seems to think so too ;).
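As a sketch of what that send/receive setup can look like: with native encryption, OpenZFS (0.8+) can send a snapshot raw, i.e. still encrypted, using `zfs send -w`, so the receiving side never needs my key. Hostnames, pool and snapshot names below are made up:

```shell
# take a snapshot and send it raw (still encrypted) to the remote box;
# -w sends the raw encrypted blocks, -u avoids mounting on the receiving side
$ zfs snapshot tank/data@backup1
$ zfs send -w tank/data@backup1 | \
      ssh friend.example.org sudo zfs receive -u backups/mydata
```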
Another reason is that I’m not too worried about unauthorized people gaining access to my server and data or my server being stolen.
To import a pool with one or more encrypted datasets, zpool needs access to a key or needs to be provided with a passphrase. If you took the time to read the links I posted above, you’ll recognize this from Philipp Heckel’s blog:
$ zfs create \
    -o encryption= \
    -o keysource=, \
    -o pbkdf2iters= \

Keysource can be specified as a passphrase, either with a prompt or a specified file. You can review the current settings with:

$ sudo zfs get
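Note that keysource/pbkdf2iters is the older syntax from that blog post; on current OpenZFS the corresponding properties are encryption, keyformat, keylocation and pbkdf2iters, and keystatus shows whether a key is currently loaded. A sketch, with pool/dataset as a placeholder:

```shell
# show the encryption-related properties of a dataset
$ sudo zfs get encryption,keyformat,keylocation,keystatus pool/dataset
```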
I have created my datasets with the following options:
$ sudo zfs create \
    -o compression=on \
    -o encryption=on \
    -o keyformat=raw \
    -o keylocation=file:///some/loc/some.filename.raw \
    pool/dataset
Again, if you’ve read the links I provided you’ve read that the keyfile can be created using the following command:
$ dd if=/dev/urandom of=some.filename.raw bs=1 count=32
This file should obviously reside in a secure location accessible only by root, with permissions set to 0400. It might be a good idea to set the immutable attribute, too:
$ sudo chattr +i /some/loc/some.filename.raw
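The ownership and permission bits mentioned above can be set like this (using the same example path):

```shell
# root-owned, readable only by root
$ sudo chown root:root /some/loc/some.filename.raw
$ sudo chmod 0400 /some/loc/some.filename.raw
```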
As mentioned, to get encrypted datasets to mount, ZFS needs to be provided with the key. When importing a pool manually, you can do:
$ sudo zpool import -l
If the property keylocation is set, this will do the trick and the keys will be loaded. The command:
$ sudo zfs mount -a
should mount all your encrypted datasets for which a key is provided.
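If the pool was imported without [-l], the keys can also be loaded in a separate step first; a sketch using the [-a] flag to load every key with a reachable keylocation:

```shell
# load all keys whose keylocation is accessible, then mount everything
$ sudo zfs load-key -a
$ sudo zfs mount -a
```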
However, during boot the keys are not read. The unit file for importing pools is zfs-import-cache.service, and it does not contain [-l] to load the keys:
[Unit]
Description=Import ZFS pools by cache file
DefaultDependencies=no
Requires=systemd-udev-settle.service
After=systemd-udev-settle.service
After=cryptsetup.target
After=systemd-remount-fs.service
Before=dracut-mount.service
ConditionPathExists=/etc/zfs/zpool.cache

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStartPre=/sbin/modprobe zfs
ExecStart=/usr/bin/zpool import -c /etc/zfs/zpool.cache -aN

[Install]
WantedBy=zfs-mount.service
WantedBy=zfs.target
The reason it’s not in there is that you cannot specify [-l] together with [-N]: [-l] loads keys for the datasets it is about to mount, while [-N] tells zpool not to mount anything, so the two options are incompatible. I found that out because I thought I was smart and put it in there without thinking twice. So instead, I made a copy of the mount unit file:
$ sudo cp /usr/lib/systemd/system/zfs-mount.service /usr/lib/systemd/system/zfs-mount-enc.service
Then I edited it as follows:
[Unit]
Description=Mount ZFS filesystems
DefaultDependencies=no
After=systemd-udev-settle.service
After=zfs-import-cache.service
After=zfs-import-scan.service
After=systemd-remount-fs.service
Before=local-fs.target

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStartPre=/usr/bin/zfs load-key -r
ExecStart=/usr/bin/zfs mount -a
WorkingDirectory=-/sbin/

[Install]
WantedBy=zfs-share.service
WantedBy=zfs.target
Enabled it and rebooted.
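For reference, that boils down to something like the following (whether you also need to disable the stock zfs-mount.service to avoid the two units racing may depend on your distro, so check for conflicts first):

```shell
# swap the stock mount unit for the key-loading variant, then reboot
$ sudo systemctl disable zfs-mount.service
$ sudo systemctl enable zfs-mount-enc.service
$ sudo reboot
```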
After the reboot, everything was mounted and shared, with no errors in my logs. If you do see errors, investigate with journalctl -b and systemctl status zfs-mount-enc.service.
Leave a comment if you have questions.