The crown jewel of the Solaris operating system's storage stack, ZFS is a file system that uses storage pools to manage physical storage. The ZFS pooled storage model eliminates the concept of volumes, and with it the associated problems of partitioning, provisioning and stranded storage, by enabling thousands of file systems to draw from a common storage pool, each using only as much space as it actually needs.
Sun’s (NASDAQ: JAVA) open-source ZFS file system has some amazing features. It was originally designed for Solaris and unveiled in 2005, but you’ll also find it in OpenSolaris and related distributions. In the future it may well become a popular file system to run with Linux and BSD as well.
1. Checksums in Metadata for Data Integrity
Data integrity is of paramount importance in ZFS, and is the driver for many ZFS features.
Whenever it writes information to disk, the file system computes a 256-bit checksum, which is stored as metadata separate from the data it relates to. Unlike a simple disk block checksum, this can detect phantom writes, misdirected reads and writes, DMA parity errors, driver bugs and accidental overwrites, as well as traditional "bit rot."
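The checksum algorithm is an ordinary per-dataset property that can be inspected and changed from the command line. A minimal sketch (the pool and dataset names `tank` and `tank/data` are assumptions for illustration):

```shell
# Show the checksum algorithm currently in use for a dataset;
# the default is a fletcher checksum ("tank" is a hypothetical pool name)
zfs get checksum tank

# Switch new writes on a dataset to SHA-256 checksums; blocks already
# on disk keep the checksum they were originally written with
zfs set checksum=sha256 tank/data
```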
2. Copy on Write
ZFS ensures that data is always consistent on the disk using a number of techniques, including copy-on-write. This means that when data is changed it is not overwritten in place — it is always written to a new block and checksummed before the pointers to the data are changed. The old data may be retained, creating snapshots of the file system through time as changes are made. File writes using ZFS are transactional — either everything or nothing is written to disk.
3. Data Snapshots With Time Slider
The latest version of OpenSolaris illustrates the power of ZFS's snapshot capability with a small graphical application called Time Slider. ZFS can be configured to take a snapshot of the file system (or a section of it, such as just a user's home folder) on a regular basis — every 15 minutes, or every hour, and so on. These snapshots are very small and efficient, as only the deltas from the previous snapshot are stored.
Time Slider offers a view of the file system (or a home folder), with a slider that can be moved back along a timeline to earlier snapshot times. As this is done, the view changes to show the state of the file system or the contents of a folder at the corresponding snapshot time. Recovering a file that has been overwritten by mistake, or rolling back the system after an unsuccessful update, is then just a matter of moving the slider back to the appropriate snapshot time.
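The same recovery can be performed from the command line. A sketch, assuming a pool called `tank` with a user's home dataset at `tank/home/alice` (both names are hypothetical):

```shell
# Take a named snapshot of a home-directory dataset
zfs snapshot tank/home/alice@before-update

# List available snapshots to find the point to return to
zfs list -t snapshot

# Recover a single overwritten file: snapshots are exposed read-only
# under the hidden .zfs directory at the root of each dataset
cp /tank/home/alice/.zfs/snapshot/before-update/report.txt /tank/home/alice/

# Or roll the entire dataset back to the snapshot state
zfs rollback tank/home/alice@before-update
```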
Snapshots can also be made writable to create clones of existing file systems.
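Creating a writable clone from a snapshot is itself a single command (the names below are illustrative):

```shell
# A clone is always created from an existing snapshot
zfs snapshot tank/home/alice@template

# The clone starts out sharing all of its blocks with the snapshot,
# so it consumes almost no additional space until it diverges
zfs clone tank/home/alice@template tank/home/alice-clone
```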
4. Pooled Data Storage
ZFS takes available storage drives and pools them together as a single resource, called a zpool. This can be optimized for capacity, or I/O performance, or redundancy, using striping, mirroring or some form of RAID. If more storage is needed, then more drives can simply be added to the zpool — ZFS sees the new capacity and starts using it automatically, balancing I/O and maximizing throughput.
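As a sketch of how a pool grows over time (the c#t#d# disk device names are hypothetical Solaris device paths):

```shell
# Create a pool from a two-disk mirror
zpool create tank mirror c1t0d0 c2t0d0

# Later, grow the pool by adding a second mirror; ZFS sees the new
# capacity and starts striping writes across both mirrors automatically
zpool add tank mirror c3t0d0 c4t0d0

# Check the pool's capacity and health
zpool list tank
zpool status tank
```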
5. RAIDZ and RAIDZ2
RAID 5 has a well-known flaw called the RAID 5 write hole. This causes a problem when a data block is written to a stripe but a power failure occurs before the corresponding parity block can be written. As a result, the data and parity for the stripe will be inconsistent. If a disk then fails, the RAID reconstruction process will result in incorrect data. The only way out of this is if an entire stripe happens to be overwritten, thus generating a correct parity block.
RAIDZ gets around this problem by using a variable width stripe, so every write is effectively a full stripe write. This, together with ZFS’s copy on write characteristic, eliminates the RAID 5 write hole completely. RAIDZ2 works in a similar way, but can tolerate the loss of two disks in the array using double parity.
Setting up a RAIDZ (or RAIDZ2) array is very easy and involves issuing one command.
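For RAIDZ2, the only difference from the single-parity case is the keyword on that one command (device names are illustrative):

```shell
# Five-disk double-parity array: the pool stays available
# even if any two of the five disks fail
zpool create tank raidz2 c1t0d0 c2t0d0 c3t0d0 c4t0d0 c5t0d0
```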
6. SSD Hybrid Storage Pools
High-performance SSDs can be added to a storage pool to create a hybrid storage pool. Configured as cache devices, they extend ZFS's adaptive replacement cache — a tier known as the L2ARC (level 2 ARC) — holding frequently accessed data so that reads are served at SSD speed. An SSD can also be used as a dedicated intent log device, absorbing synchronous writes that have to be committed immediately; that data is then flushed out to conventional hard drives for permanent storage as time and resources allow.
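Both roles are assigned to existing pools with `zpool add`; a sketch with hypothetical device names:

```shell
# Add an SSD as an L2ARC cache device, speeding up reads
# of frequently accessed data
zpool add tank cache c6t0d0

# Add an SSD as a dedicated intent log device, so synchronous
# writes can be committed to fast storage immediately
zpool add tank log c7t0d0
```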
7. Virtually Unlimited Scalability
ZFS is a 128-bit file system, which means that in theory it could store 256 quadrillion ZB (a ZB, or zettabyte, is a billion TB). In practice, this is larger than would ever be necessary, for the foreseeable future at least.
8. Data Scrubbing
ZFS can be made to scrub all the data in a storage pool, checking each piece of data against its corresponding checksum to verify its integrity, detect any silent data corruption, and correct any errors it encounters where possible.
When the data is stored in a redundant fashion — in a mirrored or RAID-type array — it can correct any corrupt data it detects invisibly and without any administrator intervention. Since data corruption is logged, ZFS can bring to light defects in memory modules (or other hardware) that cause data to be stored on hard drives incorrectly.
Scrubbing is given low I/O priority so that it has a minimal effect on system performance, and can operate while the storage pool that is being scrubbed is in use.
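A scrub is started by hand (or from a scheduled job) and its progress checked with `zpool status`; for example:

```shell
# Walk every block in the pool, verify it against its checksum,
# and repair from redundant copies where possible
zpool scrub tank

# Report scrub progress and any errors that were found
zpool status -v tank
```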
9. Simple, Efficient Administration
ZFS can be administered with short, efficient commands. For example, a five-disk RAIDZ array could be set up with the single command:
zpool create poolname raidz c1t0d0 c2t0d0 c3t0d0 c4t0d0 c5t0d0
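Day-to-day administration is similarly terse — essentially everything is done through just two commands, zpool and zfs. A few illustrative examples (the dataset names are assumptions):

```shell
# Create a new file system within the pool; it is mounted
# automatically, with no need to edit /etc/vfstab
zfs create poolname/home

# Properties such as compression and quotas are set per dataset
zfs set compression=on poolname/home
zfs set quota=10G poolname/home

# Review every property of a dataset at a glance
zfs get all poolname/home
```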
10. More on the Way
ZFS is still evolving, and new features will appear regularly. The roadmap for 2009 includes encryption for increased security, and data deduplication to increase storage efficiency.
If you are interested in trying ZFS out, the easiest way to get started is to download OpenSolaris or a related distro from http://opensolaris.org/os/downloads/. There is also a ZFS for FUSE/Linux project.