ZFS is an awesome filesystem, originally developed by Sun and since ported to Linux. It is not a distributed filesystem; rather, it emphasizes durability and administrative simplicity, and it's essentially an alternative to the common combination of md and LVM.
I'm not going to go into an actual RAID configuration here, but the following should be intuitive enough to send you on your way. I'm using Ubuntu 13.10.
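If you do want redundancy, the same zpool create command used below also accepts mirror and raidz vdev types. A quick sketch (the device paths are hypothetical placeholders, and I didn't run these as part of this walkthrough):

# A two-way mirror across two whole disks (hypothetical devices):
$ sudo zpool create zfs_mirror mirror /dev/sdb /dev/sdc

# A single-parity raidz vdev across three disks:
$ sudo zpool create zfs_raidz raidz /dev/sdb /dev/sdc /dev/sdd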
$ sudo apt-get install zfs-fuse
Reading package lists... Done
Building dependency tree
Reading state information... Done
Suggested packages:
  nfs-kernel-server kpartx
The following NEW packages will be installed:
  zfs-fuse
0 upgraded, 1 newly installed, 0 to remove and 34 not upgraded.
Need to get 1,258 kB of archives.
After this operation, 3,302 kB of additional disk space will be used.
Get:1 http://us.archive.ubuntu.com/ubuntu/ saucy/universe zfs-fuse amd64 0.7.0-10.1 [1,258 kB]
Fetched 1,258 kB in 1s (750 kB/s)
Selecting previously unselected package zfs-fuse.
(Reading database ... 248708 files and directories currently installed.)
Unpacking zfs-fuse (from .../zfs-fuse_0.7.0-10.1_amd64.deb) ...
Processing triggers for ureadahead ...
Processing triggers for man-db ...
Setting up zfs-fuse (0.7.0-10.1) ...
 * Starting zfs-fuse zfs-fuse                                          [ OK ]
 * Immunizing zfs-fuse against OOM kills and sendsigs signals...       [ OK ]
 * Mounting ZFS filesystems...                                         [ OK ]
Processing triggers for ureadahead ...

$ sudo zpool list
no pools available

$ dd if=/dev/zero of=/home/dustin/zfs1.part bs=1M count=64
64+0 records in
64+0 records out
67108864 bytes (67 MB) copied, 0.0588473 s, 1.1 GB/s

$ sudo zpool create zfs_test /home/dustin/zfs1.part

$ sudo zpool list
NAME       SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
zfs_test  59.5M    94K  59.4M     0%  1.00x  ONLINE  -

$ sudo dd if=/dev/zero of=/zfs_test/dummy_file bs=1M count=10
10+0 records in
10+0 records out
10485760 bytes (10 MB) copied, 1.3918 s, 7.5 MB/s

$ ls -l /zfs_test/
total 9988
-rw-r--r-- 1 root root 10485760 Mar  7 21:51 dummy_file

$ sudo zpool list
NAME       SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
zfs_test  59.5M  10.2M  49.3M    17%  1.00x  ONLINE  -

$ sudo zpool status zfs_test
  pool: zfs_test
 state: ONLINE
 scrub: none requested
config:

        NAME                      STATE     READ WRITE CKSUM
        zfs_test                  ONLINE       0     0     0
          /home/dustin/zfs1.part  ONLINE       0     0     0

errors: No known data errors
So, now we have one pool with one disk. However, ZFS also allows hot reconfiguration, so we can add (stripe) another disk to the pool:
$ dd if=/dev/zero of=/home/dustin/zfs2.part bs=1M count=64
64+0 records in
64+0 records out
67108864 bytes (67 MB) copied, 0.0571095 s, 1.2 GB/s

$ sudo zpool add zfs_test /home/dustin/zfs2.part

$ sudo zpool status zfs_test
  pool: zfs_test
 state: ONLINE
 scrub: none requested
config:

        NAME                      STATE     READ WRITE CKSUM
        zfs_test                  ONLINE       0     0     0
          /home/dustin/zfs1.part  ONLINE       0     0     0
          /home/dustin/zfs2.part  ONLINE       0     0     0

errors: No known data errors

$ sudo dd if=/dev/zero of=/zfs_test/dummy_file2 bs=1M count=70
70+0 records in
70+0 records out
73400320 bytes (73 MB) copied, 12.4728 s, 5.9 MB/s

$ sudo zpool list
NAME       SIZE  ALLOC   FREE    CAP  DEDUP  HEALTH  ALTROOT
zfs_test   119M  80.3M  38.7M    67%  1.00x  ONLINE  -
I should mention that there is some disk-space overhead beyond the raw size of the devices. Though I assigned two 64M "disks" to the pool, I received "out of space" errors when, having already written a 10M file, I attempted to write an 80M file. Writing a 70M file instead succeeded.
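To see the space that's actually writable after that overhead, zfs list (rather than zpool list) reports per-filesystem usage. A minimal sketch against the zfs_test pool from above; I'm not reproducing the output here:

# AVAIL here reflects space actually writable by the filesystem,
# which will be somewhat less than the raw size shown by `zpool list`.
$ sudo zfs list zfs_test

# The same numbers are also exposed as properties:
$ sudo zfs get used,available zfs_test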
You can also view IO stats:
$ sudo zpool iostat -v zfs_test
                             capacity     operations    bandwidth
pool                       alloc   free   read  write   read  write
------------------------  -----  -----  -----  -----  -----  -----
zfs_test                   80.5M  38.5M      0     11    127   110K
  /home/dustin/zfs1.part   40.4M  19.1M      0      6    100  56.3K
  /home/dustin/zfs2.part   40.1M  19.4M      0      5     32  63.0K
------------------------  -----  -----  -----  -----  -----  -----
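The status output above shows "scrub: none requested"; you can request one yourself to have ZFS walk the pool and verify every block's checksum. A quick sketch (output omitted), along with how to tear the test pool down once you're done experimenting:

# Start an integrity scrub; any problems found will show up in `zpool status`.
$ sudo zpool scrub zfs_test

# Check on its progress/result:
$ sudo zpool status zfs_test

# When finished experimenting, destroy the test pool entirely:
$ sudo zpool destroy zfs_test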
For further usage examples, look at these tutorials:
- https://flux.org.uk/tech/2007/03/zfs_tutorial_1.html
- http://www.jamescoyle.net/how-to/478-create-a-zfs-volume-on-ubuntu
