Ubuntu Home Server Setup Part II
Welcome to Part II of my Ubuntu Home Server build! In Part I, I did a very basic Ubuntu Server install. In this part, I’ll be creating a ZFS pool and volumes to store all my data on.
Other parts of this guide can be found at:
Home Server With Ubuntu
Setup
I’ll be setting up a server with 8 physical drives.
Disk 0: SSD for OS
Disk 1: SSD for ZFS Intent Log (improves write performance)
(read fantastic information about it here: http://nex7.blogspot.com/2013/04/zfs-intent-log.html)
Disk 2: SSD for L2ARC caching (improves read performance)
Disks 3–7: HDDs for the ZFS pool (where all my data will be stored)
Quick disclosure: I’m *far* from a ZFS expert. From what I’ve gleaned, this should suffice for home / small business use. If you’re planning something enterprise-grade, find an expert!
Install Ubuntu
Perform a regular Ubuntu server installation, or use an existing server.
SSH into the server rather than using the console. You’ll want to be able to copy and paste when you set up the zpool.
Install ZFS
sudo apt install zfsutils-linux
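Before going further, it’s worth a quick sanity check that the kernel module loads and the tools respond (exact output will vary by release):

sudo modprobe zfs    # load the kernel module if it isn't already
zpool status         # should report "no pools available" on a fresh install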
Create the ZPOOL
I’ll be using RAIDZ (which is like RAID-5) to get redundancy on my disks without losing too much usable space.
ZFS offers other layouts as well, such as striping (like RAID 0), mirroring (like RAID 1), and RAIDZ2 (like RAID 6). Use whichever is appropriate for your workload.
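For reference, here’s roughly what creating those other layouts looks like. The pool name tank and the DISK placeholders are hypothetical, not part of this build:

# Mirror (like RAID 1): each disk holds a full copy of the data
sudo zpool create tank mirror /dev/disk/by-path/DISK1 /dev/disk/by-path/DISK2

# RAIDZ2 (like RAID 6): survives two simultaneous disk failures
sudo zpool create tank raidz2 /dev/disk/by-path/DISK1 /dev/disk/by-path/DISK2 /dev/disk/by-path/DISK3 /dev/disk/by-path/DISK4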
It is strongly recommended not to use device names like sdb, sdc, etc., as those can change across reboots.
Many of the articles I’ve read suggest using UUIDs. However, my experience on Ubuntu Server is that these are not assigned to blank disks. Therefore, I will be using disk paths instead.
These are verbose and a bit of a pain to type, but they make sure you know exactly what disk you are referring to should you need to swap drives in the future. They will also not change on reboots.
To see your installed disks, run:
ls -lh /dev/disk/by-path
My output looks like this:
adam@normandy:~$ ls -lh /dev/disk/by-path
total 0
lrwxrwxrwx 1 root root  9 Jul  8 09:06 pci-0000:00:1f.2-ata-5 -> ../../sr0
lrwxrwxrwx 1 root root  9 Jul  8 09:06 pci-0000:02:00.0-scsi-0:0:0:0 -> ../../sda
lrwxrwxrwx 1 root root 10 Jul  8 09:06 pci-0000:02:00.0-scsi-0:0:0:0-part1 -> ../../sda1
lrwxrwxrwx 1 root root 10 Jul  8 09:06 pci-0000:02:00.0-scsi-0:0:0:0-part2 -> ../../sda2
lrwxrwxrwx 1 root root 10 Jul  8 09:06 pci-0000:02:00.0-scsi-0:0:0:0-part3 -> ../../sda3
lrwxrwxrwx 1 root root  9 Jul  8 09:06 pci-0000:02:00.0-scsi-0:0:1:0 -> ../../sdb
lrwxrwxrwx 1 root root  9 Jul  8 09:06 pci-0000:02:00.0-scsi-0:0:2:0 -> ../../sdc
lrwxrwxrwx 1 root root  9 Jul  8 09:06 pci-0000:02:00.0-scsi-0:0:3:0 -> ../../sdd
lrwxrwxrwx 1 root root  9 Jul  8 09:06 pci-0000:02:00.0-scsi-0:0:4:0 -> ../../sde
lrwxrwxrwx 1 root root  9 Jul  8 09:06 pci-0000:02:00.0-scsi-0:0:5:0 -> ../../sdf
lrwxrwxrwx 1 root root  9 Jul  8 09:06 pci-0000:02:00.0-scsi-0:0:6:0 -> ../../sdg
lrwxrwxrwx 1 root root  9 Jul  8 09:06 pci-0000:02:00.0-scsi-0:0:7:0 -> ../../sdh
I chose to install Linux on my first drive (sda). I’ll be using sdb for the ZIL, sdc for the L2ARC, and sdd, sde, sdf, sdg, and sdh for the data pool.
First, I’ll set up the data pool. This is where SSH is handy, since you can copy/paste your paths from above.
In my example below, I’m naming my pool “data.” You can use a different name if you’d like. If your setup is like mine, you’ll create one pool with many volumes in it.
I’m using drives with 4K physical sectors, so I’m adding the option -o ashift=12 (ashift is a power of two, and 2^12 = 4096 bytes, matching the sector size). This should increase performance, but at the cost of some usable space. You can remove this option if you don’t think it’s a good fit for you.
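If you’re not sure what sector size a drive reports, you can check before creating the pool (sdd here stands in for one of your data disks):

cat /sys/block/sdd/queue/physical_block_size   # 4096 means 4K sectors, so ashift=12
sudo blockdev --getpbsz /dev/sdd               # same info via blockdev

With that confirmed, on to creating the pool: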
sudo zpool create data -o ashift=12 raidz \
  /dev/disk/by-path/pci-0000:02:00.0-scsi-0:0:3:0 \
  /dev/disk/by-path/pci-0000:02:00.0-scsi-0:0:4:0 \
  /dev/disk/by-path/pci-0000:02:00.0-scsi-0:0:5:0 \
  /dev/disk/by-path/pci-0000:02:00.0-scsi-0:0:6:0 \
  /dev/disk/by-path/pci-0000:02:00.0-scsi-0:0:7:0
To confirm this worked, run:
zpool list
You should have something like:
adam@normandy:~$ zpool list
NAME   SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
data  18.1T   238K  18.1T         -     0%     0%  1.00x  ONLINE  -

(Note that zpool list reports raw capacity, including space that will go to parity; the usable figure shows up in zfs list later.)
Next, I’ll tell ZFS to use sdb as the ZFS Intent Log:
sudo zpool add data log /dev/disk/by-path/pci-0000:02:00.0-scsi-0:0:1:0
Then I’ll tell ZFS to use sdc as the L2ARC cache:
sudo zpool add data cache /dev/disk/by-path/pci-0000:02:00.0-scsi-0:0:2:0
If I run zpool status, I should see my data, ZIL, and cache drives:
adam@normandy:/data/download/secure$ zpool status
  pool: data
 state: ONLINE
  scan: none requested
config:

        NAME                               STATE     READ WRITE CKSUM
        data                               ONLINE       0     0     0
          raidz1-0                         ONLINE       0     0     0
            pci-0000:02:00.0-scsi-0:0:3:0  ONLINE       0     0     0
            pci-0000:02:00.0-scsi-0:0:4:0  ONLINE       0     0     0
            pci-0000:02:00.0-scsi-0:0:5:0  ONLINE       0     0     0
            pci-0000:02:00.0-scsi-0:0:6:0  ONLINE       0     0     0
            pci-0000:02:00.0-scsi-0:0:7:0  ONLINE       0     0     0
        logs
          pci-0000:02:00.0-scsi-0:0:1:0    ONLINE       0     0     0
        cache
          pci-0000:02:00.0-scsi-0:0:2:0    ONLINE       0     0     0

errors: No known data errors
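While you’re looking at zpool status, this is also a good time to try a scrub, which walks the whole pool verifying checksums. Running one periodically is a common recommendation for catching silent corruption early:

sudo zpool scrub data   # start a full integrity check of the pool
zpool status data       # the "scan:" line shows scrub progress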
Create the Filesystem
Now that the zpool exists, we can create filesystems on top of it.
A pool can have multiple filesystems. I’ll create one for media, and one for virtual machines (because that’s what I need).
sudo zfs create data/media
sudo zfs create data/vm
To confirm they were created correctly, run:
zfs list
And it should look something like this:
adam@normandy:~$ zfs list
NAME         USED  AVAIL  REFER  MOUNTPOINT
data         210K  14.0T  36.7K  /data
data/media  35.1K  14.0T  35.1K  /data/media
data/vm     35.1K  14.0T  35.1K  /data/vm
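Datasets are cheap to tune individually. If, say, you wanted to cap how much space the media dataset can use, or mount the VM dataset somewhere else, it would look something like this (the quota size and mountpoint here are just illustrative values, not part of my setup):

sudo zfs set quota=4T data/media           # cap how much space this dataset can use
sudo zfs set mountpoint=/srv/vm data/vm    # relocate the dataset's mount point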
I would also suggest the following tweaks. Combined, they increased my ZFS throughput by 50–100%! They were recommended by https://unicolet.blogspot.com/2013/03/a-not-so-short-guide-to-zfs-on-linux.html and https://www.servethehome.com/the-case-for-using-zfs-compression/ as I searched for solutions to my less-than-stellar ZFS performance.
sudo zfs set xattr=sa data/media
sudo zfs set atime=off data/media
sudo zfs set compression=lz4 data/media

Repeat these for data/vm, or set them on the pool itself (data) so that new datasets inherit them.
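You can confirm the properties took effect, and later keep an eye on how well compression is doing, with zfs get:

zfs get xattr,atime,compression data/media
zfs get compressratio data/media   # ratio climbs as compressible data is written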
All of your ZFS filesystems are mounted automatically.
adam@normandy:~$ mount
...
data on /data type zfs (rw,xattr,noacl)
data/media on /data/media type zfs (rw,xattr,noacl)
data/vm on /data/vm type zfs (rw,xattr,noacl)
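One last practical note: the new mount points are owned by root, so hand them over to your day-to-day user before copying data in (adam is my user; substitute your own):

sudo chown -R adam:adam /data/media /data/vm
touch /data/media/test-file && ls -l /data/media   # quick write test
rm /data/media/test-file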
You can use them just as you would any mounted filesystem. That’s it!