
Fedora & Plasma Tips

I’m a serial distro jumper. I have been as far back as I can remember. I’d been on Kubuntu for quite a while, happy as can be, which obviously made me bored. With all the hype surrounding Fedora 35, I thought I’d give it a whirl! In a sense, it’s “coming home” for me: I started out on Mandrake way back in the late 90s, followed by RedHat and Fedora Core for quite some time. So far, I’m loving Fedora! Here are some quick tips and tricks I used to make Fedora my own.

ZFS Encrypted Home Directory

I love ZFS and distro-hopping, so using ZFS on a separate drive as my home directory was a no-brainer! Here’s my guide on how you can set this up on Fedora or Ubuntu/Kubuntu.

KDE Native File Dialogs in Firefox

RPMFusion

RPM Fusion is home to many of the apps and utilities you’ll want to use but that Fedora can’t or won’t distribute. Amongst these are the NVIDIA proprietary drivers, HEIF support, etc.

RPMFusion is more or less “official” and is often referenced in the Fedora documentation. You can fairly safely add this repository without fear of configuration explosions or security risks.

Click this link, and choose your version of Fedora under the Graphical Setup heading: https://rpmfusion.org/Configuration

Flathub

One of the great things about Fedora is that it’s increasingly built around the idea of having all of your apps installed as Flatpaks. This increases system security, ensures you always have the latest versions of apps, and makes it easy to always run the same version of apps across multiple distros. This is really handy if you’re sharing files across distros.

While Fedora hosts many Flatpacked (I think that’s a verb) apps on their own infrastructure, I prefer Flathub, which tends to be a bit more up-to-date, has more apps, and is available on pretty much every distro out there.

Here’s Flathub’s instructions on how to make their apps available in Fedora: https://flatpak.org/setup/Fedora/
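
At the time of writing, their setup page boils down to a single command to add the Flathub remote (check the link above in case it has changed), after which you can install apps straight from Flathub. For example, Flatseal (covered next) can be installed like so:

flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
flatpak install flathub com.github.tchx84.Flatseal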

Flatseal

(coming soon)

HEIC/HEIF Images in Gwenview & Dolphin

Gwenview is KDE’s default image viewer. It’s a great application, but it’s missing HEIF image support out-of-the-box. If you have an iPhone, this is probably the format all of your photos are stored in. Luckily, adding HEIF support for Gwenview is very simple in Fedora!

First, make sure to install the RPM Fusion repository. Instructions are above: https://www.shernet.com/linux/fedora-and-plasma-tips/#rpmfusion

Next, run the lines below to update your repository database and install the plugin.

sudo dnf update
sudo dnf install qt-heif-image-plugin

The steps below do not appear to be necessary in KDE 5.25+

Now that you have support for HEIC/HEIF installed, you can configure Dolphin to show image previews.

sudo nano /usr/share/kservices5/qimageioplugins/heic.desktop

Paste in the following:

[Desktop Entry]
Type=Service
X-KDE-ServiceTypes=QImageIOPlugins
X-KDE-ImageFormat=heic
X-KDE-MimeType=image/heic
X-KDE-Read=true
X-KDE-Write=true

sudo nano /usr/share/kservices5/qimageioplugins/heif.desktop

Paste in the following:

[Desktop Entry]
Type=Service
X-KDE-ServiceTypes=QImageIOPlugins
X-KDE-ImageFormat=heif
X-KDE-MimeType=image/heif
X-KDE-Read=true
X-KDE-Write=true

sudo nano /usr/share/kservices5/imagethumbnail.desktop

Add to the end of MimeTypes:

image/heic;image/heif;
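
Your existing MimeTypes line will already contain a long list of formats; just tack the two new entries onto the end. Purely as an illustration (your actual list will be much longer), the end of the line should look something like:

MimeTypes=image/png;image/jpeg;image/heic;image/heif;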

Finally, log out and back in.

Thunderbird

Date/Time

For reasons I still can’t suss out (and I had these same issues with Kubuntu and Thunderbird installed from apt), I always end up with 24-hour times in the message list. Now, arguments over the best format aside, I’d really just like to see them as e.g. 2:15 PM.

To fix this, first open Thunderbird and go to Preferences. Scroll to the bottom and click “Config Editor.”

Type in: intl.date_time.pattern_override.time_short
Click the “+” symbol to add a new config and choose “string”
Set it to: h:mm a

You can find all of your options here: https://support.mozilla.org/en-US/kb/customize-date-time-formats-thunderbird

Hiding the GRUB Boot Menu

(coming soon)

Unity3D

If you’re using Unity3D, you may find you have issues with Visual Studio Code, OmniSharp, and the version of Mono that comes with Fedora. Here’s my quick fix:

That’s all! I hope you found some of these tips and tricks useful and enjoy what I’m finding to be a fantastic distro!

ZFS Home Directory

I tend to hop from Linux distro to Linux distro. One of the things that makes doing so much easier is keeping my home folder on a separate disk. That way I can re-install distributions to my heart’s content without fear of losing my files and settings.

I’m also a big fan of ZFS (ZFS on Ubuntu Server). That means jumping through a few extra hoops to set up ZFS on a separate drive, as well as re-importing the zpool every time I swap distributions, but I find it’s well worth it. Here’s a handy guide on how to do just that! I’ll be showing the steps for Fedora and Kubuntu, but they should generally apply to other distros as well.

Disclaimer: I’m not a ZFS expert, but these steps have worked very well for me on multiple systems. YMMV.

One quick note: ZFS works best with plenty of RAM (it will use everything available to keep data cached). If you are on a RAM-limited system, you can do something similar with encrypted XFS or EXT4.
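
If you do go the encrypted EXT4 route instead, a rough sketch (assuming your second drive is /dev/sdb; substitute your actual device) would be:

# create and open an encrypted LUKS container, then format it
sudo cryptsetup luksFormat /dev/sdb
sudo cryptsetup open /dev/sdb home_crypt
sudo mkfs.ext4 /dev/mapper/home_crypt
# then add matching entries to /etc/crypttab and /etc/fstab so it mounts at /home on boot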

Pre-Step: Setup Encrypted Home Drive

I’ll be configuring ZFS to use an encryption key stored on the root drive. This is only secure if the root drive is also encrypted. Make sure when you install Linux you tell the installer to use drive encryption.

It will look like this in Fedora:

Encrypted root drive in Fedora

And like this in Ubuntu:

Encrypted root drive in Kubuntu

You’ll be asked to set a password that’s used to encrypt your root drive. You’ll need to enter this password every time you boot your computer, so make sure you do not forget it!

Don’t worry about configuring your second drive with your home folder during the installation. I find it’s much easier to have the distribution do its typical install, then go back and mount your new /home. Just make sure that you create yourself as an administrator or have a root password set.

Once you’ve installed your new distro, reboot into it, but don’t log in. Your computer will get grumpy if you’re logged into a desktop environment while swapping out your home directory.

Press Control-Alt-F3 to get to a terminal window then log in as yourself if you made your account an administrator, or ‘root’ if you did not.

ZFS Installation

Fedora

Make sure Fedora is up to date:

sudo dnf -y update

If there are any updates, reboot (sudo reboot), press Control-Alt-F3, and log back in.

Install ZFS for Fedora by following the official steps below; do not use the zfs-fuse package included with Fedora: https://openzfs.github.io/openzfs-docs/Getting%20Started/Fedora/index.html

Kubuntu/Ubuntu
sudo apt install zfsutils-linux
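
On either distro (assuming a reasonably recent ZFS release), you can confirm the installation worked by running:

zfs version

This should print the userland version and, once the module is loaded, the kernel module version as well.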

Creating a New Home Drive

If you already have created a home drive and are re-attaching after re-installing Linux, skip to Importing an Existing Home Drive.

To make things easier, I’ll be running all of the commands as root by first running:

sudo -s

Create an encryption key that will be used to encrypt and decrypt your home drive. Make sure this is only stored on an encrypted root drive and that you have backed up this key somewhere safe. If you lose this key you will lose all access to your drive. You’ve been warned 😉

dd if=/dev/urandom of=/etc/home.key bs=32 count=1 && chmod 600 /etc/home.key

Next you’ll need to find out the name of your drive. Since easy names (e.g. sda, sdb) can change, we want to set it up by something that will not change. I’ll be using the device’s physical location.

Let’s make sure we know which drive has Linux installed on it, and which is going to be used for our home drive, by running:

lsblk

This will list all of your drives (also called block devices), any partitions on them, and where those partitions are mounted. My output (on a virtual machine) looks like the following. On real hardware, your devices will probably be called sda and sdb (if they’re SATA), or nvme0n1 and nvme1n1 (if they’re nvme):

NAME                                          MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINTS
sr0                                            11:0    1    2G  0 rom    
zram0                                         251:0    0  5.8G  0 disk  [SWAP]
vda                                           252:0    0   64G  0 disk   
├─vda1                                        252:1    0    1G  0 part  /boot
└─vda2                                        252:2    0   63G  0 part   
  └─luks-a954d91b-fda3-4c22-90a6-2b35554129b1 253:0    0   63G  0 crypt /home
                                                                       /
vdb                                           252:16   0  128G  0 disk  

I can see here that my disk with Linux installed on it is called vda, since it has multiple partitions (vda1 and vda2) that are all mounted (as /boot and /). The disk with nothing installed on it is vdb. Therefore, I’ll need to check the physical location of vdb. Please comment below if you’re having trouble figuring out which drive is which and I’ll try to give you a hand!

To list all disks by their location, run:

ls -lh /dev/disk/by-path/

The result will look something like this:

total 0
lrwxrwxrwx. 1 root root  9 Jan 28 09:42 pci-0000:00:1f.2-ata-1 -> ../../sr0
lrwxrwxrwx. 1 root root  9 Jan 28 09:42 pci-0000:00:1f.2-ata-1.0 -> ../../sr0
lrwxrwxrwx. 1 root root  9 Jan 28 09:42 pci-0000:07:00.0 -> ../../vda
lrwxrwxrwx. 1 root root 10 Jan 28 09:42 pci-0000:07:00.0-part1 -> ../../vda1
lrwxrwxrwx. 1 root root 10 Jan 28 09:42 pci-0000:07:00.0-part2 -> ../../vda2
lrwxrwxrwx. 1 root root  9 Jan 28 09:42 pci-0000:08:00.0 -> ../../vdb

This tells me that the path I’ll be using is /dev/disk/by-path/pci-0000:08:00.0, since that’s the one that’s being called vdb (see the end of the last line).

We’re finally ready to create our ZFS filesystem! First we create a zpool that encompasses all of the drives we’ll be using (we’ll just be using one, but ZFS can be mirrored or RAIDed in more advanced setups).

The command we’ll run is:

zpool create homepool -O xattr=sa -O acltype=posixacl -O atime=off -O compression=lz4 -O encryption=aes-256-gcm -O keyformat=raw -O keylocation=file:///etc/home.key -o ashift=12 /dev/disk/by-path/[your disk here]

Here’s what some of those options mean:
ashift=12: This specifies the drive’s sector size as a power of two (2^12 = 4K). From what I’ve cobbled together, use 12 for most drives unless it’s a Samsung NVMe or you know your drive uses 8K sectors, in which case use 13.
homepool: This is the name we’ve given to the zpool. You can use something else if you’d prefer.
compression=lz4: This compresses all data, increases the performance of ZFS, and costs essentially no additional CPU. More information here: https://www.servethehome.com/the-case-for-using-zfs-compression/
encryption=aes-256-gcm: Use AES-256-GCM encryption, which is both highly secure and hardware accelerated.
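
If you’d like to double-check that those properties took effect, you can query them directly (this only reads the dataset’s properties; nothing is changed):

zfs get compression,encryption,atime,xattr homepool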

Now, let’s check out that brand new zpool!

zpool status

You should see something like this:

  pool: homepool
 state: ONLINE
config:

        NAME                STATE     READ WRITE CKSUM
        homepool            ONLINE       0     0     0
          pci-0000:08:00.0  ONLINE       0     0     0

errors: No known data errors

A zpool is a container for filesystems. Now that we’ve got one, we can create a filesystem where our home drive will live. In all of the steps below, replace [user] with your username.

zfs create homepool/[user]

To see information on this new filesystem, you can run:

zfs list

Now, let’s replace our old home drive (that was created when Linux was installed) with the filesystem on our second drive:

cd /home
mv /home/[user] /home/[user].bak
mkdir /home/[user]/
zfs set mountpoint=/home/[user] homepool/[user]
zfs set mountpoint=none homepool
chmod --reference=/home/[user].bak /home/[user]
mv /home/[user].bak/* /home/[user]/
mv /home/[user].bak/.* /home/[user]/
rmdir /home/[user].bak
chown -R [user]:[user] /home/[user]
#For Fedora and other distros with selinux, run the next line too:
restorecon -vR /home
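
Before moving on, it’s worth a quick sanity check that the new filesystem is mounted where you expect and that your files made the trip:

zfs list -o name,mountpoint,mounted
ls -la /home/[user]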

Linux doesn’t yet load keys for encrypted zfs mounts automatically. You’ll need to create a simple service to automatically load zfs encryption keys on boot.
Like most good things, this is from the Arch Linux wiki: https://wiki.archlinux.org/title/ZFS#Unlock_at_boot_time:_systemd
You MUST do this before you reboot or you will not be able to log in graphically. If you forget, press Control-Alt-F3 and log into the console.

nano /etc/systemd/system/zfs-load-key.service

Type in the following (if you’re uncomfortable typing by hand, you should be able to switch to the graphical login (Fedora: Control-Alt-F2, *buntu: Control-Alt-F1) and copy and paste).

[Unit]
Description=Load encryption keys
DefaultDependencies=no
After=zfs-import.target
Before=zfs-mount.service

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/sbin/zfs load-key -a
StandardInput=tty-force

[Install]
WantedBy=zfs-mount.service

Next, tell Linux to start the new service every time it boots:

systemctl enable zfs-load-key

Finally, reboot and log in normally to make sure everything works as anticipated:

reboot

Now you should have a fully functioning install with an encrypted ZFS home directory! Remember to back up /etc/home.key somewhere secure that *isn’t* in your home directory, since you’ll need to copy this key back any time you re-install Linux. I’d recommend an encrypted USB key.

If you have multiple users, you can follow these same steps to create a zfs filesystem for each of them in the zpool you created.
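
As a concrete sketch, for a hypothetical second user named alice, the process mirrors the steps above (run as root):

mv /home/alice /home/alice.bak
zfs create homepool/alice
zfs set mountpoint=/home/alice homepool/alice
chmod --reference=/home/alice.bak /home/alice
mv /home/alice.bak/* /home/alice/
mv /home/alice.bak/.* /home/alice/
rmdir /home/alice.bak
chown -R alice:alice /home/alice
# on Fedora and other selinux distros, also run: restorecon -vR /home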

Steam

If you use Steam and want to keep your game installations separate so they don’t get backed up with zfs snapshots, you can create a separate filesystem for it.

mkdir -p /home/[user]/.local/share/Steam
sudo zfs create -o mountpoint=/home/[user]/.local/share/Steam homepool/[user]/steam
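
One nice side effect: ZFS snapshots are per-dataset, so snapshotting your home filesystem won’t sweep up the Steam dataset unless you explicitly recurse with -r. The snapshot names below are just examples:

sudo zfs snapshot homepool/[user]@before-upgrade
sudo zfs snapshot -r homepool/[user]@everything-including-steam
zfs list -t snapshot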

Importing an Existing Home Drive

Only follow these steps if you’ve re-installed Linux. They aren’t necessary if you just created a new zpool above.

After you’ve re-installed Linux, make sure you complete ZFS Installation above. Once those are done, you can proceed from here.

If you are not the root user yet, run:

sudo -s

Next, you’ll need to copy your backed up key to /etc/home.key

If it’s stored on an encrypted flash drive, it may be easiest to log in graphically, restore the file, then log out and return to the console with Control-Alt-F3.

Once it’s restored, make sure it still has the correct permissions:

chown root:root /etc/home.key && chmod 600 /etc/home.key
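
If you want to be certain the key survived the copy intact, compare checksums against your backup copy (the backup path here is just a placeholder):

sha256sum /etc/home.key /path/to/backup/home.key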

Rename your existing home directory:

cd /home
mv /home/[user] /home/[user].bak
mkdir /home/[user]
chmod --reference=/home/[user].bak /home/[user]
chown [user]:[user] /home/[user]

List all zpools the system can find for import:

zpool import

You should see your homepool listed:

   pool: homepool
     id: 16378698673868876678
  state: ONLINE
 action: The pool can be imported using its name or numeric identifier.
 config:

        homepool            ONLINE
          pci-0000:08:00.0  ONLINE

You can now import it by name:

zpool import homepool

Before the filesystems can be mounted, we’ll need to create and enable the ZFS key loading service.

nano /etc/systemd/system/zfs-load-key.service

Type in the following:

[Unit]
Description=Load encryption keys
DefaultDependencies=no
After=zfs-import.target
Before=zfs-mount.service

[Service]
Type=oneshot
RemainAfterExit=yes
ExecStart=/usr/sbin/zfs load-key -a
StandardInput=tty-force

[Install]
WantedBy=zfs-mount.service

Next, tell Linux to start the new service every time it boots, and start it now:

systemctl enable --now zfs-load-key

Finally, we can mount all of the zfs filesystems:

zfs mount -a

You can confirm they are mounted by typing:

mount

The last line of the output should be something similar to:

homepool/adam on /home/adam type zfs (rw,noatime,seclabel,xattr,posixacl)

If you’re on Fedora, be sure to run the following to make selinux happy:

restorecon -vR /home

Reboot and log in. Since all of your personal settings are saved to your home drive, everything should be exactly how you left it!

Memory Usage

If you find ZFS is using too much memory (apps keep crashing), you can adjust how much RAM ZFS uses for its cache (how much of your drive it keeps in memory for quick access).

To test different settings, set the maximum ARC size in bytes and then clear the cache. This setting is temporary, so if you run into trouble, just reboot.

echo "8589934592" | sudo tee /sys/module/zfs/parameters/zfs_arc_max
echo 3 | sudo tee /proc/sys/vm/drop_caches

Once you’ve found a size that works for you, you can set the size permanently.

echo "options zfs zfs_arc_max=8589934592" | sudo tee -a /etc/modprobe.d/zfs.conf
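
You can check the limit that’s currently in effect with the commands below (a module parameter value of 0 means ZFS is using its default, roughly half of your RAM):

cat /sys/module/zfs/parameters/zfs_arc_max
grep -w c_max /proc/spl/kstat/zfs/arcstats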

ZFS on Ubuntu server

Ubuntu Home Server Setup Part II

Welcome to Part II of my Ubuntu Home Server build! In Part I, I did a very basic Ubuntu Server install. In this part, I’ll be creating a ZFS pool and volumes to store all my data on.

Other parts of this guide can be found at:

Home Server With Ubuntu

Setup

I’ll be setting up a server with 8 physical drives.

Disk 0: SSD for OS

Disk 1: SSD for ZFS Intent Log (improves write performance)
(read fantastic information about it here: http://nex7.blogspot.com/2013/04/zfs-intent-log.html)

Disk 2: SSD for L2ARC caching (improves read performance)

Disk 3 – 7: HDDs for ZFS Pool (where all my data will be stored)

Quick disclosure: I’m *far* from a ZFS expert. From what I’ve gleaned, this should suffice for home / small business use. If you’re planning something enterprise-grade, find an expert!

Install Ubuntu

Perform a regular Ubuntu server installation, or use an existing server.

SSH into the server rather than using the console. You’ll want to be able to copy and paste when you set up the zpool.

Install ZFS

sudo apt install zfsutils-linux

Create the ZPOOL

I’ll be using RAIDZ (which is like RAID-5) to get redundancy on my disks without losing too much usable space.

ZFS offers many other layouts, like striping, mirroring, and RAIDZ2 (similar to RAID 6). Use whichever is appropriate for your workload.
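
For reference, a mirrored pool or a double-parity RAIDZ2 pool would be created along these lines (the disk paths are placeholders):

sudo zpool create data mirror /dev/disk/by-path/diskA /dev/disk/by-path/diskB
sudo zpool create data raidz2 /dev/disk/by-path/diskA /dev/disk/by-path/diskB /dev/disk/by-path/diskC /dev/disk/by-path/diskD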

It is very strongly recommended to not use disk names like sdb, sdc, etc. Those might change across reboots.

Many of the articles I’ve read suggest using UUIDs. However, my experience on Ubuntu Server is that these are not assigned to blank disks. Therefore, I will be using disk paths instead.

These are verbose and a bit of a pain to type, but they make sure you know exactly what disk you are referring to should you need to swap drives in the future. They will also not change on reboots.

To see your installed disks run:

ls -lh /dev/disk/by-path

My output looks like

adam@normandy:~$ ls -lh /dev/disk/by-path
 total 0
 lrwxrwxrwx 1 root root  9 Jul  8 09:06 pci-0000:00:1f.2-ata-5 -> ../../sr0
 lrwxrwxrwx 1 root root  9 Jul  8 09:06 pci-0000:02:00.0-scsi-0:0:0:0 -> ../../sda
 lrwxrwxrwx 1 root root 10 Jul  8 09:06 pci-0000:02:00.0-scsi-0:0:0:0-part1 -> ../../sda1
 lrwxrwxrwx 1 root root 10 Jul  8 09:06 pci-0000:02:00.0-scsi-0:0:0:0-part2 -> ../../sda2
 lrwxrwxrwx 1 root root 10 Jul  8 09:06 pci-0000:02:00.0-scsi-0:0:0:0-part3 -> ../../sda3
 lrwxrwxrwx 1 root root  9 Jul  8 09:06 pci-0000:02:00.0-scsi-0:0:1:0 -> ../../sdb
 lrwxrwxrwx 1 root root  9 Jul  8 09:06 pci-0000:02:00.0-scsi-0:0:2:0 -> ../../sdc
 lrwxrwxrwx 1 root root  9 Jul  8 09:06 pci-0000:02:00.0-scsi-0:0:3:0 -> ../../sdd
 lrwxrwxrwx 1 root root  9 Jul  8 09:06 pci-0000:02:00.0-scsi-0:0:4:0 -> ../../sde
 lrwxrwxrwx 1 root root  9 Jul  8 09:06 pci-0000:02:00.0-scsi-0:0:5:0 -> ../../sdf
 lrwxrwxrwx 1 root root  9 Jul  8 09:06 pci-0000:02:00.0-scsi-0:0:6:0 -> ../../sdg
 lrwxrwxrwx 1 root root  9 Jul  8 09:06 pci-0000:02:00.0-scsi-0:0:7:0 -> ../../sdh

I chose to install Linux on my first drive (sda). I’ll be using sdb for the ZIL, sdc for L2ARC, and sdd, sde, sdf, sdg, and sdh for the data pool.

First, I’ll setup the data pool. This is where SSH is handy, since you can copy/paste your paths from above.

In my example below, I’m naming my pool “data.” You can use a different name if you’d like. If your setup is like mine, you’ll create one pool with many volumes in it.

I’m using drives with 4k physical sectors, so I’m adding the option: -o ashift=12
This should increase performance, but at the cost of some total storage space. You can remove this option if you don’t think it’s a good fit for you.

sudo zpool create data -o ashift=12 raidz /dev/disk/by-path/pci-0000:02:00.0-scsi-0:0:3:0 /dev/disk/by-path/pci-0000:02:00.0-scsi-0:0:4:0 /dev/disk/by-path/pci-0000:02:00.0-scsi-0:0:5:0 /dev/disk/by-path/pci-0000:02:00.0-scsi-0:0:6:0 /dev/disk/by-path/pci-0000:02:00.0-scsi-0:0:7:0

To confirm this worked, run:

zpool list

You should have something like:

adam@normandy:~$ zpool list
NAME   SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
data  18.1T   238K  18.1T         -     0%     0%  1.00x  ONLINE  -

Next I’ll tell ZFS to use sdb as the ZFS Intent Log

sudo zpool add data log /dev/disk/by-path/pci-0000:02:00.0-scsi-0:0:1:0

Then I’ll tell ZFS to use sdc as the L2ARC cache

sudo zpool add data cache /dev/disk/by-path/pci-0000:02:00.0-scsi-0:0:2:0

If I run zpool status, I should see my data, ZIL, and cache drives

adam@normandy:/data/download/secure$ zpool status
  pool: data
 state: ONLINE
  scan: none requested
config:

        NAME                               STATE     READ WRITE CKSUM
        data                               ONLINE       0     0     0
          raidz1-0                         ONLINE       0     0     0
            pci-0000:02:00.0-scsi-0:0:3:0  ONLINE       0     0     0
            pci-0000:02:00.0-scsi-0:0:4:0  ONLINE       0     0     0
            pci-0000:02:00.0-scsi-0:0:5:0  ONLINE       0     0     0
            pci-0000:02:00.0-scsi-0:0:6:0  ONLINE       0     0     0
            pci-0000:02:00.0-scsi-0:0:7:0  ONLINE       0     0     0
        logs
          pci-0000:02:00.0-scsi-0:0:1:0    ONLINE       0     0     0
        cache
          pci-0000:02:00.0-scsi-0:0:2:0    ONLINE       0     0     0

errors: No known data errors

Create the Filesystem

Now that the zpool exists, we can create filesystems on top of it.
A pool can have multiple filesystems. I’ll create one for media, and one for virtual machines (because that’s what I need).

sudo zfs create data/media
sudo zfs create data/vm

To confirm they were created correctly, run:

zfs list

And it should look something like this:

adam@normandy:~$ zfs list
 NAME         USED  AVAIL  REFER  MOUNTPOINT
 data         210K  14.0T  36.7K  /data
 data/media  35.1K  14.0T  35.1K  /data/media
 data/vm     35.1K  14.0T  35.1K  /data/vm

I would also suggest the following tweaks. Combined, they increased my zfs throughput by 50-100%! They were recommended by https://unicolet.blogspot.com/2013/03/a-not-so-short-guide-to-zfs-on-linux.html and https://www.servethehome.com/the-case-for-using-zfs-compression/ as I searched for solutions to my less-than-stellar zfs performance.

sudo zfs set xattr=sa data/media
sudo zfs set atime=off data/media
sudo zfs set compression=lz4 data/media
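
You can also set these properties on the pool’s root dataset (data) so that every filesystem, including data/vm, inherits them. Either way, verify them with:

zfs get xattr,atime,compression data/media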

All of your zfs filesystems are automatically mounted.

adam@normandy:~$ mount
...
data on /data type zfs (rw,xattr,noacl)
data/media on /data/media type zfs (rw,xattr,noacl)
data/vm on /data/vm type zfs (rw,xattr,noacl)

You can use them just as you would any mounted filesystem. That’s it!
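
One nicety worth knowing right away: because these are ZFS filesystems, you can snapshot them before any risky change (the snapshot name below is just an example):

sudo zfs snapshot data/vm@before-maintenance
zfs list -t snapshot
# to discard everything written since the snapshot: sudo zfs rollback data/vm@before-maintenance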