1 What is this?
ZFS is a combined file system and logical volume manager designed by Sun Microsystems. The features of ZFS include protection against data corruption, support for high storage capacities, efficient data compression, integration of the concepts of filesystem and volume management, snapshots and copy-on-write clones, continuous integrity checking and automatic repair, RAID-Z and native NFSv4 ACLs.
2 Usage
2.1 Show status
If you want to list all ZFS filesystems and snapshots, use:
root@bus-ind-zfs-01:[~]# zfs list -t all
NAME                       USED  AVAIL  REFER  MOUNTPOINT
zfs-datastore              663K  3.40T   140K  /zfs-datastore
zfs-datastore@28July2017      0      -   140K  -
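If you only care about snapshots, `-t` also accepts a single type. A short sketch, using the example pool name from above:

```shell
# List only snapshots, recursively under the pool (-r), with a
# reduced set of columns (-o). Pool name is the example from above.
zfs list -t snapshot -r -o name,used,refer zfs-datastore
```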
If you want to show the health of a pool and its RAID-Z vdevs, use:
root@bus-ind-zfs-01:[~]# zpool status
  pool: zfs-datastore
 state: ONLINE
  scan: scrub repaired 0 in 0h0m with 0 errors on Sun Jul 9 00:24:02 2017
config:

        NAME                                              STATE     READ WRITE CKSUM
        zfs-datastore                                     ONLINE       0     0     0
          raidz2-0                                        ONLINE       0     0     0
            scsi-36001c230c6dacc00204ad9df50371381-part1  ONLINE       0     0     0
            scsi-36001c230c6dacc00204ad9e94401c004-part1  ONLINE       0     0     0
            scsi-36001c230c6dacc00204ad9f3373942be-part1  ONLINE       0     0     0
            scsi-36001c230c6dacc00204ad9fb2db0d88d-part1  ONLINE       0     0     0

errors: No known data errors
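The scan: line reports the last scrub. A scrub can also be started by hand (pool name taken from the example above):

```shell
# Start an integrity scrub of the pool; it runs in the background
# and verifies every block against its checksum.
zpool scrub zfs-datastore

# Check progress at any time:
zpool status zfs-datastore

# A running scrub can be stopped with -s:
zpool scrub -s zfs-datastore
```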
On a degraded pool (here on another host), the output looks like this:
root@bus-ind-bak-sd-01:[~]# zpool status
  pool: zroot
 state: DEGRADED
status: One or more devices are faulted in response to persistent errors.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Replace the faulted device, or use 'zpool clear' to mark the device
        repaired.
  scan: scrub repaired 52K in 43h23m with 0 errors on Mon Nov 12 19:47:06 2018
config:

        NAME                                        STATE     READ WRITE CKSUM
        zroot                                       DEGRADED     0     0     0
          raidz1-0                                  DEGRADED     0     0     0
            scsi-36a4badb009ce8300210a00c906a9ca2b  ONLINE       0     0     0
            scsi-36a4badb009ce8300210a00d8078c1ef6  ONLINE       0     0     0
            scsi-36a4badb009ce8300210a00e108174999  ONLINE       0     0     0
            scsi-36a4badb009ce8300210a00e9088d9443  ONLINE       0     0     0
            scsi-36a4badb009ce8300210a00f109055358  ONLINE       0     0     0
            scsi-36a4badb009ce8300210af40b0750b87f  FAULTED      1   375     0  too many errors

errors: No known data errors
2.2 Filesystem use
Presuming you have a pool called zfs-datastore, create a /zfs-datastore/apache filesystem with:
zfs create zfs-datastore/apache
To destroy a filesystem (dataset), use:
zfs destroy zfs-datastore/apache
If you want to remove everything below a dataset, use the recursive options: -r destroys all children, -R destroys all dependents (including clones).
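Because a recursive destroy is irreversible, a cautious pattern is to preview it first with the dry-run flag that zfs destroy supports (dataset name is the example from above):

```shell
# Dry run (-n) with verbose output (-v): print what a recursive
# destroy (-r) WOULD remove, without deleting anything.
zfs destroy -n -v -r zfs-datastore/apache

# Once satisfied with the list, run the real recursive destroy:
zfs destroy -r zfs-datastore/apache
```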
2.2.1 How to mount a ZFS filesystem
zfs mount data01
# you can create temporary mount that expires after unmounting
zfs mount -o mountpoint=/tmpmnt data01/oracle
Note: all the usual mount options can be applied, e.g. ro/rw, setuid, noexec.
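Building on the temporary-mount example above, a mount option can be combined with a temporary mountpoint. A sketch only; exact option support varies by platform and ZFS version, and the dataset names are the examples used above:

```shell
# Temporarily mount the dataset read-only at /tmpmnt.
# The setting does not persist: it is gone after the next unmount.
zfs mount -o ro,mountpoint=/tmpmnt data01/oracle

# Unmount again; the dataset reverts to its stored mountpoint property.
zfs umount data01/oracle
```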
2.2.2 What is a ZFS snapshot?
A snapshot is like taking a picture: a read-only, point-in-time image of a filesystem. Thanks to copy-on-write, only the blocks that change after the snapshot consume extra space. To destroy a dataset, all of its snapshots must be destroyed first, and you cannot destroy a snapshot that has a clone. Snapshots can also be renamed.
# creating a snapshot
zfs snapshot zfs-datastore@28July2017
# renaming a snapshot
zfs rename zfs-datastore@28July2017 zfs-datastore@today_1145
# destroying a snapshot
zfs destroy zfs-datastore@today_1145
The main advantage of snapshots is rollback. By default you can only roll back to the most recent snapshot; to roll back to an older one, you must first destroy all newer snapshots.
zfs rollback zfs-datastore@today_1145
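Instead of destroying the newer snapshots by hand, zfs rollback can do it for you with -r (snapshot names are the examples from above):

```shell
# Roll back to an older snapshot; -r destroys all snapshots newer
# than the target as part of the rollback.
zfs rollback -r zfs-datastore@28July2017
```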
2.2.3 How to replace a broken disk?
List the zpools and identify the failed disk:
# zpool status
  pool: zroot
 state: DEGRADED
status: One or more devices are faulted in response to persistent errors.
        Sufficient replicas exist for the pool to continue functioning in a
        degraded state.
action: Replace the faulted device, or use 'zpool clear' to mark the device
        repaired.
  scan: scrub repaired 52K in 43h23m with 0 errors on Mon Nov 12 19:47:06 2018
config:

        NAME                                        STATE     READ WRITE CKSUM
        zroot                                       DEGRADED     0     0     0
          raidz1-0                                  DEGRADED     0     0     0
            scsi-36a4badb009ce8300210a00c906a9ca2b  ONLINE       0     0     0
            scsi-36a4badb009ce8300210a00d8078c1ef6  ONLINE       0     0     0
            scsi-36a4badb009ce8300210a00e108174999  ONLINE       0     0     0
            scsi-36a4badb009ce8300210a00e9088d9443  ONLINE       0     0     0
            scsi-36a4badb009ce8300210a00f109055358  ONLINE       0     0     0
            scsi-36a4badb009ce8300210af40b0750b87f  FAULTED      1   375     0  too many errors
=> Here you can see that the disk with the id scsi-36a4badb009ce8300210af40b0750b87f is broken
Take the disk offline:
# zpool offline zroot scsi-36a4badb009ce8300210af40b0750b87f
Next, physically replace the disk.
Next, make sure that the new disk has a GPT partition label (and not MBR):
# parted /dev/sdg
(parted) mklabel gpt
(parted) quit
Next, get the scsi-id of the new disk:
# ls -al /dev/disk/by-id/
total 0
lrwxrwxrwx 1 root root  9 Jan 14 16:36 scsi-36a4badb009ce8300210a00e9088d9443 -> ../../sde
lrwxrwxrwx 1 root root 10 Jan 14 16:43 scsi-36a4badb009ce8300210a00e9088d9443-part1 -> ../../sde1
lrwxrwxrwx 1 root root 10 Jan 14 16:36 scsi-36a4badb009ce8300210a00e9088d9443-part9 -> ../../sde9
lrwxrwxrwx 1 root root  9 Jan 14 16:36 scsi-36a4badb009ce8300210a00f109055358 -> ../../sdf
lrwxrwxrwx 1 root root 10 Jan 14 16:43 scsi-36a4badb009ce8300210a00f109055358-part1 -> ../../sdf1
lrwxrwxrwx 1 root root 10 Jan 14 16:36 scsi-36a4badb009ce8300210a00f109055358-part9 -> ../../sdf9
lrwxrwxrwx 1 root root  9 Jan 14 16:43 scsi-36a4badb009ce830023cf681c2fe62523 -> ../../sdg
In this example, the new /dev/sdg disk has the id scsi-36a4badb009ce830023cf681c2fe62523.
Then, perform the replacement in the ZFS config:
# zpool replace zroot scsi-36a4badb009ce8300210af40b0750b87f scsi-36a4badb009ce830023cf681c2fe62523
Verify the status of the ZFS pool:
# zpool status
  pool: zroot
 state: DEGRADED
status: One or more devices is currently being resilvered. The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Mon Jan 14 16:43:23 2019
        62.8G scanned out of 19.7T at 29.9M/s, 191h18m to go
        9.94G resilvered, 0.31% done
config:

        NAME                                          STATE     READ WRITE CKSUM
        zroot                                         DEGRADED     0     0     0
          raidz1-0                                    DEGRADED     0     0     0
            scsi-36a4badb009ce8300210a00c906a9ca2b    ONLINE       0     0     0
            scsi-36a4badb009ce8300210a00d8078c1ef6    ONLINE       0     0     0
            scsi-36a4badb009ce8300210a00e108174999    ONLINE       0     0     0
            scsi-36a4badb009ce8300210a00e9088d9443    ONLINE       0     0     0
            scsi-36a4badb009ce8300210a00f109055358    ONLINE       0     0     0
            replacing-5                               OFFLINE      0     0     0
              scsi-36a4badb009ce8300210af40b0750b87f  OFFLINE      0     0     0
              scsi-36a4badb009ce830023cf681c2fe62523  ONLINE       0     0     0  (resilvering)
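The resilver can take a long time on large pools. A small sketch for keeping an eye on it and cleaning up afterwards (watch is a standard Linux utility; pool name from the example above):

```shell
# Re-check resilver progress every 10 seconds.
watch -n 10 zpool status zroot

# After the resilver completes, clear any leftover error counters so
# the pool reports ONLINE again:
zpool clear zroot
```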