How to install and configure ZFS on Linux using Debian Jessie 8.1

ZFS is a combined filesystem and logical volume manager. Its features include protection against data corruption, support for high storage capacities, efficient data compression, integrated filesystem and volume management, snapshots and copy-on-write clones, continuous integrity checking with automatic repair, RAID-Z, and native NFSv4 ACLs.

ZFS was originally implemented as open-source software, licensed under the Common Development and Distribution License (CDDL).

When talking about the ZFS filesystem, we can highlight the following key concepts:

  • Data integrity.
  • Simple storage administration with only two commands: zfs and zpool.
  • Everything can be done while the filesystem is online.

For a full overview and description of all available features, see the detailed Wikipedia article on ZFS.

In this tutorial, I will guide you step by step through the installation of the ZFS filesystem on Debian 8.1 (Jessie). I will show you how to create and configure pools using raid0 (stripe), raid1 (mirror), and RAID-Z (raid with parity), and explain how to configure a filesystem with ZFS.

Based on the information from www.zfsonlinux.org, ZFS is only supported on the AMD64 and Intel 64-bit architecture (amd64).
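
If you want to double-check the architecture before you begin, uname will tell you; on a supported 64-bit system it should print x86_64 (just a quick sanity check, not part of the official steps):

# uname -m
x86_64

Let's get started with the setup.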

Prerequisites:

  • Debian 8 with 64bit Kernel.
  • root privileges.

Step 1 - Update Repository and Update the Debian Linux System

To add the zfsonlinux repository to our system, download and install the zfsonlinux package as shown below. This will add the files /etc/apt/sources.list.d/zfsonlinux.list and /etc/apt/trusted.gpg.d/zfsonlinux.gpg to your system. Afterwards, you can install ZFS like any other Debian package with the apt-get command. Another benefit of using the zfsonlinux repository is that you receive updates simply by running "apt-get update && apt-get upgrade".

Log in to the Debian server with SSH, become the root user, and then run the following commands.

# uname -a
Linux debian-zfs 3.16.0-4-amd64 #1 SMP Debian 3.16.7-ckt11-1 (2015-05-24) x86_64 GNU/Linux
# wget http://archive.zfsonlinux.org/debian/pool/main/z/zfsonlinux/zfsonlinux_6_all.deb
# dpkg -i zfsonlinux_6_all.deb
# apt-get update

Step 2 - Install zfsonlinux

Zfsonlinux has many software dependencies that get installed by apt automatically. This process will take a while. When the installation is finished, reboot the server.

# apt-get install lsb-release
# apt-get install debian-zfs
# shutdown -r now
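
After the reboot, you can optionally verify that the ZFS kernel module was built and loads correctly (an extra check, not required by the steps below):

# modprobe zfs
# lsmod | grep zfs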

Step 3 - Create and configure pool

After the server has been rebooted, check that zfsonlinux is installed and running properly.

# dpkg -l | grep zfs
ii  debian-zfs                     7~jessie                    amd64        Native ZFS filesystem metapackage for Debian.
ii  libzfs2                        0.6.5.2-2                   amd64        Native ZFS filesystem library for Linux
ii  zfs-dkms                       0.6.5.2-2                   all          Native ZFS filesystem kernel modules for Linux
ii  zfsonlinux                     6                           all          archive.zfsonlinux.org trust package
ii  zfsutils                       0.6.5.2-2                   amd64        command-line tools to manage ZFS filesystems

The output above shows that ZFS on Linux is installed, so we can go on and create the first pool.

I've added six disks to this server, each with a size of 2GB. We can check the available disks with this command:

# ls /dev/sd*
/dev/sda  /dev/sda1  /dev/sda2  /dev/sda5  /dev/sdb  /dev/sdc  /dev/sdd  /dev/sde  /dev/sdf  /dev/sdg

We can see that we have /dev/sda through /dev/sdg. /dev/sda is used for the operating system (Debian Linux Jessie 8.1); we will use /dev/sdb through /dev/sdg for the ZFS filesystem.
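
If you prefer an overview that also shows the size of each disk, lsblk provides one (the sizes on your machine will of course differ):

# lsblk -d -o NAME,SIZE,TYPE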

Now we can start creating pools. For the first one, I'll show you how to create a raid0 (stripe) pool.

# zpool list
no pools available
# zpool create -f pool0 /dev/sdb
# zpool list
NAME    SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
pool0  1.98G    64K  1.98G         -     0%     0%  1.00x  ONLINE  -

The command "zpool list" shows that we successfully created one raid0 zfs pool, the name of the pool is pool0, and the size is 2GB.

Next, we'll create a raid1 (mirror) pool with the next two disks.

# zpool create -f pool1 mirror /dev/sdc /dev/sdd
# zpool list
NAME    SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
pool0  1.98G    64K  1.98G         -     0%     0%  1.00x  ONLINE  -
pool1  1.98G    64K  1.98G         -     0%     0%  1.00x  ONLINE  -

We can see that we have two pools now, pool0 for raid0 and pool1 for raid1.

To check the status of the pools, we can use the command below:

# zpool status
  pool: pool0
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        pool0       ONLINE       0     0     0
          sdb       ONLINE       0     0     0

errors: No known data errors

  pool: pool1
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        pool1       ONLINE       0     0     0
          mirror-0  ONLINE       0     0     0
            sdc     ONLINE       0     0     0
            sdd     ONLINE       0     0     0

errors: No known data errors

The "zpool status" output shows the difference between pool0 and pool1: pool0 has only one disk, while pool1 has two disks grouped in a mirror vdev (mirror-0).

Next, we'll create a pool with RAID-Z. RAID-Z is a data/parity distribution scheme like RAID-5, but it uses a dynamic stripe width: every block has its own RAID stripe, regardless of the block size, so every RAID-Z write is a full-stripe write.

RAID-Z requires a minimum of three hard drives and is sort of a compromise between RAID 0 and RAID 1. If a single disk in a RAID-Z pool dies, simply replace that disk and ZFS will automatically rebuild the data based on the parity information from the other disks. To lose all of the information in the storage pool, two disks would have to die. To make the drive setup even more redundant, you can use RAID-Z2 (the ZFS equivalent of RAID 6) to get double parity.

Let's create a RAID-Z pool with single parity first.

# zpool create -f poolz1 raidz sde sdf sdg
# zpool list poolz1
NAME     SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
poolz1  5.94G   117K  5.94G         -     0%     0%  1.00x  ONLINE  -
# zpool status poolz1
  pool: poolz1
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        poolz1      ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            sde     ONLINE       0     0     0
            sdf     ONLINE       0     0     0
            sdg     ONLINE       0     0     0

errors: No known data errors
# df -h /poolz1
Filesystem      Size  Used Avail Use% Mounted on
poolz1          3.9G     0  3.9G   0% /poolz1

As we can see, df -h shows that our 6GB of raw capacity has been reduced to roughly 4GB of usable space; 2GB are used to hold the parity information. The "zpool status" command confirms that the pool is now using RAID-Z.

Next, we'll create a RAID-Z2 pool (comparable to RAID 6). For this, we have to remove the existing pools because no more disks are available. Removing a pool is very easy with the zpool destroy command.

# zpool list
NAME     SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
pool0   1.98G    64K  1.98G         -     0%     0%  1.00x  ONLINE  -
pool1   1.98G    64K  1.98G         -     0%     0%  1.00x  ONLINE  -
poolz1  5.94G   117K  5.94G         -     0%     0%  1.00x  ONLINE  -
# zpool destroy pool0
# zpool destroy pool1
# zpool destroy poolz1
# zpool list
no pools available

Now all our zpools are gone, so we can create a RAID-Z2 pool.

# zpool create poolz2 raidz2 sdb sdc sdd sde
# zpool list
NAME     SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
poolz2  7.94G   135K  7.94G         -     0%     0%  1.00x  ONLINE  -
# df -h /poolz2
Filesystem      Size  Used Avail Use% Mounted on
poolz2          3.9G     0  3.9G   0% /poolz2
# zpool status poolz2
  pool: poolz2
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        poolz2      ONLINE       0     0     0
          raidz2-0  ONLINE       0     0     0
            sdb     ONLINE       0     0     0
            sdc     ONLINE       0     0     0
            sdd     ONLINE       0     0     0
            sde     ONLINE       0     0     0

errors: No known data errors

As we can see, df -h shows that our 8GB of raw capacity has been reduced to roughly 4GB of usable space, since 4GB are used to hold the double parity information. The "zpool status" command shows that the pool is now using RAID-Z2.

Step 4 - Simulate a Disk Failure

In this step, we will simulate a catastrophic disk failure (i.e. one of the HDDs in the zpool stops functioning).

Create a file in the poolz2 pool and make sure we can access it.

# echo "Test Only" > /poolz2/test.txt
# cat /poolz2/test.txt
Test Only

Before we simulate the failure, check the status of poolz2 and make sure that the pool and all of its disks are ONLINE.
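
For example (we have already seen the full output of this command above, so it is not repeated here):

# zpool status poolz2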

The failure is simulated by writing random data with the dd command to /dev/sdb.

# dd if=/dev/urandom of=/dev/sdb bs=1024 count=20480
# zpool scrub poolz2
# zpool status
  pool: poolz2
 state: ONLINE
status: One or more devices has experienced an unrecoverable error.  An
        attempt was made to correct the error.  Applications are unaffected.
action: Determine if the device needs to be replaced, and clear the errors
        using 'zpool clear' or replace the device with 'zpool replace'.
   see: http://zfsonlinux.org/msg/ZFS-8000-9P
  scan: scrub repaired 17K in 0h0m with 0 errors on Tue Dec  8 22:37:49 2015
config:

        NAME        STATE     READ WRITE CKSUM
        poolz2      ONLINE       0     0     0
          raidz2-0  ONLINE       0     0     0
            sdb     ONLINE       0     0    25
            sdc     ONLINE       0     0     0
            sdd     ONLINE       0     0     0
            sde     ONLINE       0     0     0

errors: No known data errors

Now we can see that one of the disks has experienced unrecoverable errors, so we have to replace it. In this case, we replace /dev/sdb with /dev/sdf.

# zpool replace poolz2 sdb sdf
# zpool status
  pool: poolz2
 state: ONLINE
  scan: resilvered 49.5K in 0h0m with 0 errors on Tue Dec  8 22:43:35 2015
config:

        NAME        STATE     READ WRITE CKSUM
        poolz2      ONLINE       0     0     0
          raidz2-0  ONLINE       0     0     0
            sdf     ONLINE       0     0     0
            sdc     ONLINE       0     0     0
            sdd     ONLINE       0     0     0
            sde     ONLINE       0     0     0

errors: No known data errors

After replacing /dev/sdb with /dev/sdf, the error is gone and we can still access the test file that we created earlier.

# cat /poolz2/test.txt
Test Only
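
Note that the status output earlier also mentioned 'zpool clear'. If a disk has only logged a few transient errors and does not actually need to be replaced, clearing the error counters would be an alternative (shown here only for reference; we already replaced the disk):

# zpool clear poolz2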

At this point, we know how to create, configure, and repair a zpool.

Step 5 - Create and configure ZFS filesystem

In this step, we'll learn how to create and configure ZFS filesystems.

# zfs list
NAME     USED  AVAIL  REFER  MOUNTPOINT
poolz2   105K  3.83G  26.1K  /poolz2

We already have one ZFS filesystem; it was added automatically when we created the zpool. Now we will create another ZFS filesystem.

# zfs create poolz2/tank
# zfs list
NAME          USED  AVAIL  REFER  MOUNTPOINT
poolz2        132K  3.83G  26.1K  /poolz2
poolz2/tank  25.4K  3.83G  25.4K  /poolz2/tank
# df -h | grep poolz2
poolz2          3.9G  128K  3.9G   1% /poolz2
poolz2/tank     3.9G  128K  3.9G   1% /poolz2/tank

Very easy, right? We created a new ZFS filesystem called tank, and it was mounted automatically at /poolz2/tank.

To create a ZFS filesystem with a custom mountpoint, use the command below:

# zfs create poolz2/data -o mountpoint=/data
# df -h | grep poolz2
poolz2          3.9G     0  3.9G   0% /poolz2
poolz2/tank     3.9G     0  3.9G   0% /poolz2/tank
poolz2/data     3.9G     0  3.9G   0% /data

To change an existing mountpoint, we can use the command below:

# zfs set mountpoint=/tank poolz2/tank
# df -h | grep poolz2
poolz2          3.9G     0  3.9G   0% /poolz2
poolz2/data     3.9G     0  3.9G   0% /data
poolz2/tank     3.9G     0  3.9G   0% /tank

To unmount and mount a filesystem, use the commands below:

# zfs unmount /data
# df -h | grep poolz2
poolz2          3.9G     0  3.9G   0% /poolz2
poolz2/tank     3.9G     0  3.9G   0% /tank
# zfs mount poolz2/data
# df -h | grep poolz2
poolz2          3.9G     0  3.9G   0% /poolz2
poolz2/tank     3.9G     0  3.9G   0% /tank
poolz2/data     3.9G     0  3.9G   0% /data
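
As a small aside, zfs mount -a mounts all currently unmounted ZFS filesystems in one go, which can be handy when several filesystems are involved (not needed for the steps in this tutorial):

# zfs mount -a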

Removing a ZFS filesystem is very easy; we can use the zfs destroy command for that.

# zfs destroy poolz2/data
# zfs list
NAME          USED  AVAIL  REFER  MOUNTPOINT
poolz2        152K  3.83G  26.1K  /poolz2
poolz2/tank  25.4K  3.83G  25.4K  /tank

The filesystem /data is gone.
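
ZFS filesystems also have tunable properties that are managed with zfs set and zfs get. For example, the data compression mentioned in the introduction can be enabled per filesystem; the lines below are only a brief illustration and not part of the steps above:

# zfs set compression=lz4 poolz2/tank
# zfs get compression poolz2/tank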

Conclusion

The ZFS file system fundamentally changes the way file systems are administered on Unix-like operating systems. It provides features and benefits not found in many other file systems available today, and it is robust, scalable, and easy to administer.

Comments

By: Christophe

Please don't forget to say that ZFS requires ECC RAM... It should be in red, bold, blinking, whatever you want.

By: Pero

As far as I know, this isn't exactly true. ECC isn't recommended for ZFS specifically; it's recommended for enterprise solutions with a high demand for data integrity. Such a demand has nothing to do with ZFS. Please read this: http://zfsonlinux.org/faq.html#DoIHaveToUseECCMemory

It also doesn't require a huge amount of RAM. Yes, some pro features will require plenty of RAM, but most features will do fine with 2GiB or less. Please read this: http://distrowatch.com/weekly.php?issue=20150420#myth

By: Donald

I am sorry to say that, although I want to try ZFS, the many typographical errors and grammatical errors in this article make me suspect that also code segments might contain errors that might result in data loss. Please get someone to check this article and the code elements for errors and make corrections. Then I might try to implement the procedure.

By: Slav

English may not be the first language of the author, and trust me, it is not related to his ZFS knowledge in any way.

By: Xen

That's just a lie, you trust the commands just fine, you just want to berate someone for their writing errors.

By: Iulian

Donald, the code elements in this article are OK.

By: Nick

Very good article. Any reason why you are using the direct block devices instead of by-id names for the hard drives?

Christophe, you DON'T require ECC, although it is highly recommended. No need for red, bold, and blinking.

http://ianhowson.com/do-you-really-need-ecc-ram-with-zfs.html

By: Ramadoni

For development and testing using /dev/sdX naming is quick and easy.

http://zfsonlinux.org/faq.html#WhatDevNamesShouldIUseWhenCreatingMyPool

By: Bill

I'm confused about the chicken and the egg. I am interested in using ZFS with Debian and Xen for VMs on a workstation. Am I to understand that the above installation will create Debian on, say, ext4 with an additional ZFS filesystem? How do I install Debian on ZFS? And should I create a separate ZFS filesystem for each VM? And install Debian on each VM and ZFS on each VM? Is there any way to use ZFS with the Debian installer?

By: Ramadoni

Installing ZFS as the root filesystem in Linux is a big effort; so far, I use ZFS only for data filesystems. But you can follow this tutorial if you want to install Debian to a native ZFS root filesystem: https://github.com/zfsonlinux/pkg-zfs/wiki/HOWTO-install-Debian-GNU-Linux-to-a-Native-ZFS-Root-Filesystem

By: Anssi

What's the situation with zfs and Debian now in 2016? http://zfsonlinux.org/debian.html says "These packages are no longer actively maintained and will not be updated as new releases are tagged." It mentions zfs might be coming to Debian but it looks like it's only in Debian/kfreebsd.

By: Xen

It simply says that the packages have been moved to an official debian repo: https://github.com/zfsonlinux/zfs/wiki/Debian

By: Joe

The information from Pero about ZFS myths is good, but it also contains simply wrong information about ZFS and ECC RAM.

"Also, it is important to note that data corruption can happen under any file system, there is nothing special about ZFS that would make it more vulnerable to corruption using non-ECC RAM."

This is simply wrong. Due to ZFS's error correction capabilities, these errors could stack up and corrupt ZFS far more than a single simple error would; therefore, it's highly recommended to use ECC.

Another point is the RAM requirements. Deduplication (which of course requires huge amounts of RAM) aside, yes, you can run ZFS with very little RAM. However, it is recommended not to save money here: if the memory ever fills up, no one can save your data, and memory is cheap.

(Recommendations e.g. from FreeNAS usually range 8-16 GB minimum)

By: Jens

Hi all. I've kept my server regularly updated. Yesterday I had to shut it down due to a power outage. When the power came back on, I booted it up. Now it won't modprobe the ZFS module. I spent about 6-7 hours yesterday trying to get it up and running, with no luck. Anyone have any pointers?

By: Mirco

There is a script to install Debian 8 Jessie to a native ZFS root filesystem (no ext2/3/4 or msdos partitions needed anymore).

By: Xen

I wish you would have given more attention to maintenance rather than creating RAID arrays.

I really don't know how safe this all is and what tools you can use.