FreeNAS Hardware Requirements

Reprint from FreeNAS website:

Since FreeNAS™ 8.0.2 is based on FreeBSD 8.2, it supports the same hardware found in the amd64 and i386 sections of the FreeBSD 8.2 Hardware Compatibility List.

Actual hardware requirements will vary depending upon what you are using your FreeNAS™ system for. This section provides some guidelines to get you started. You should also skim through the FreeNAS™ Hardware Forum for performance tips from other FreeNAS™ users. The Hardware Forum is also an excellent place to post questions regarding your hardware setup or the hardware best suited to meet your requirements.
Contents

1 Architecture
2 RAM
3 Compact or USB Flash
4 Storage Disks and Controllers
5 Network Interfaces
6 RAID Overview
7 ZFS Overview

Architecture

While FreeNAS™ is available for both 32-bit and 64-bit architectures, you should use 64-bit hardware if you care about performance. A 32-bit system can only address up to 4 GB of RAM, making it poorly suited to the RAM requirements of ZFS. If you only have access to a 32-bit system, consider using UFS instead of ZFS.
RAM

The best way to get the most out of your FreeNAS™ system is to install as much RAM as possible. If your RAM is limited, consider using UFS until you can afford better hardware. ZFS typically requires a minimum of 6 GB of RAM to provide good performance; in practical terms (what you can actually install), the minimum is really 8 GB. The more RAM, the better the performance, and the Forums provide anecdotal evidence from users on how much performance is gained by adding RAM. For systems with large disk capacity (greater than 6 TB), a general rule of thumb is 1 GB of RAM for every 1 TB of storage.
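
As an illustration, the following Python sketch applies these rules of thumb; the function name and the rounding up are assumptions made for illustration, not part of FreeNAS™:

    import math

    def recommended_ram_gb(storage_tb: float, floor_gb: int = 8) -> int:
        """Estimate RAM for a ZFS-backed FreeNAS system using the rules of
        thumb above: a practical floor of 8 GB of installed RAM, plus roughly
        1 GB of RAM per 1 TB of storage once the pool exceeds 6 TB."""
        if storage_tb <= 6:
            return floor_gb
        return max(floor_gb, math.ceil(storage_tb))

    print(recommended_ram_gb(4))   # 8  -> the practical floor
    print(recommended_ram_gb(12))  # 12 -> 1 GB per TB for large pools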

NOTE: by default, ZFS disables pre-fetching (caching) on systems with less than 4 GB of usable RAM, and running without pre-fetching can greatly reduce performance. 4 GB of usable RAM is not the same as 4 GB of installed RAM, since the operating system itself resides in RAM. In practice, this means the pre-fetching threshold works out to 6 GB, or realistically 8 GB, of installed RAM. You can still use ZFS with less RAM, but performance will be affected.

If you use Active Directory with FreeNAS™, add an additional 2 GB of RAM for winbind’s internal cache.

If you are installing FreeNAS™ on a headless system, disable the shared memory settings for the video card in the BIOS.
Compact or USB Flash

The FreeNAS™ operating system is a running image. This means that it should not be installed onto a hard drive, but rather onto a USB or compact flash device that is at least 2 GB in size. A list of compact flash drives known to work with FreeNAS™ can be found on the FreeNAS™ 0.7 wiki. If you don’t have compact flash, you can instead use a USB thumb drive that is dedicated to the running image and stays inserted in the USB slot. While you can technically install FreeNAS™ onto a hard drive, this is discouraged as you will lose the storage capacity of the drive. In other words, the operating system will take over the drive and will not allow you to store data on it, regardless of the size of the drive.

The FreeNAS™ installation will partition the operating system drive into two ~1 GB partitions. One partition holds the current operating system and the other is used when you upgrade. This allows you to safely upgrade to a new image or revert to an older image should you encounter problems.
Storage Disks and Controllers

The Disk section of the FreeBSD Hardware List lists the supported disk controllers. In addition, support for 3ware 6 Gbps RAID controllers has been added, along with the CLI utility tw_cli for managing 3ware RAID controllers.

FreeNAS™ supports hot pluggable drives. Make sure that AHCI is enabled in the BIOS and that you have read Hot Swapping a ZFS Failed Drive before implementing this feature.

If you have some money to spend and wish to optimize your disk subsystem, consider your read/write needs, your budget, and your RAID requirements.

For example, moving the ZIL (ZFS Intent Log) to a dedicated SSD only helps performance if you have synchronous writes, like a database server. SSD cache devices only help if your working set is larger than system RAM, but small enough that a significant percentage of it will fit on the SSD.
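
That rule of thumb can be stated directly in code. A hedged sketch follows; the 50% threshold for “a significant percentage” is an assumption made for illustration:

    def ssd_cache_helps(working_set_gb: float, ram_gb: float, ssd_gb: float) -> bool:
        """An SSD cache device only helps when the working set exceeds system
        RAM but a significant share of it still fits on the SSD."""
        significant_share = 0.5 * working_set_gb  # assumed threshold
        return working_set_gb > ram_gb and ssd_gb >= significant_share

    print(ssd_cache_helps(100, 8, 64))    # True  -> spills past RAM, mostly fits on SSD
    print(ssd_cache_helps(100, 128, 64))  # False -> RAM already holds the working set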

If you have steady, non-contiguous writes, use disks with low seek times, such as 10K or 15K SAS drives, which cost about $1/GB. An example configuration would be six 15K SAS drives in a RAID 10, which would yield 1.8 TB of usable space, or eight 15K SAS drives in a RAID 10, which would yield 2.4 TB of usable space.
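
The usable space figures above follow from RAID 10 keeping only half of the raw capacity, since every striped set is mirrored. A quick sanity check, assuming the drives in the example are 600 GB each (implied by the numbers, not stated):

    def raid10_usable_gb(num_disks: int, disk_gb: int) -> int:
        """RAID 10 mirrors striped sets, so usable capacity is half the raw total."""
        if num_disks < 4 or num_disks % 2 != 0:
            raise ValueError("RAID 10 needs an even number of disks, minimum 4")
        return num_disks * disk_gb // 2

    print(raid10_usable_gb(6, 600))  # 1800 GB = 1.8 TB, the six-drive example
    print(raid10_usable_gb(8, 600))  # 2400 GB = 2.4 TB, the eight-drive example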

7200 RPM SATA disks are designed for single-user sequential I/O and are not a good choice for multi-user writes.

If you have the budget and high performance is a key requirement, consider a Fusion-I/O card, which is optimized for massive random access. These cards are expensive and are suited for high-end systems that demand performance. A Fusion-I/O card can be formatted with a filesystem and used as direct storage; when used this way, it does not have the write issues typically associated with a flash device. A Fusion-I/O card can also be used as a cache device when your ZFS dataset size is bigger than your RAM. Due to the increased throughput, systems running these cards typically use multiple 10 GigE network interfaces.

If you will be using ZFS, Disk Space Requirements for ZFS Storage Pools recommends a minimum of 16 GB of disk space. Due to the way that ZFS creates swap, you cannot format less than 3 GB of space with ZFS. However, on a drive that is below the minimum recommended size, you lose a fair amount of storage space to swap: for example, on a 4 GB drive, 2 GB will be reserved for swap.
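
A minimal sketch of that arithmetic, assuming the fixed 2 GB swap reservation described above (the function is illustrative, not a FreeNAS™ utility):

    def zfs_usable_gb(drive_gb: int, swap_gb: int = 2) -> int:
        """Space left for data after the swap reservation on a ZFS-formatted drive."""
        if drive_gb < 3:
            raise ValueError("cannot format less than 3 GB of space with ZFS")
        return drive_gb - swap_gb

    print(zfs_usable_gb(4))   # 2  -> half of a 4 GB drive is lost to swap
    print(zfs_usable_gb(16))  # 14 -> the recommended minimum loses proportionally little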

If you are new to ZFS and are purchasing hardware, read through ZFS Storage Pools Recommendations first.
Network Interfaces

The FreeBSD Ethernet section of the Hardware Notes indicates which interfaces are supported by each driver. While many interfaces are supported, FreeNAS™ users have seen the best performance from Intel and Chelsio interfaces, so consider these brands if you are purchasing a new interface.

At a minimum you will want to use a GigE interface. While GigE interfaces and switches are affordable for home use, it should be noted that a GigE link tops out at roughly 110 MB/s of real throughput, which modern disks can easily saturate. If you require higher network throughput, you can “bond” multiple GigE cards together using the LACP type of Link Aggregation. However, the switch will need to support LACP, which means you will need a more expensive managed switch rather than a home-user-grade switch.
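
To put those numbers in perspective, here is a rough back-of-the-envelope estimate, assuming ~110 MB/s of usable throughput per GigE link and ideal LACP distribution, which real traffic rarely achieves:

    def lacp_best_case_mbs(links: int, per_link_mbs: float = 110.0) -> float:
        """Best-case aggregate throughput of an LACP bond of GigE links.

        LACP balances whole flows across links, so any single client
        connection is still limited to one link's throughput."""
        return links * per_link_mbs

    print(lacp_best_case_mbs(1))  # 110.0 MB/s -- one modern disk can fill this
    print(lacp_best_case_mbs(4))  # 440.0 MB/s best case, spread across many clients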

If network performance is a requirement and you have some money to spend, use 10 GigE interfaces and a managed switch. If you are purchasing a managed switch, consider one that supports LACP and jumbo frames as both can be used to increase network throughput.

NOTE: at this time the following are not supported: InfiniBand, FibreChannel over Ethernet, or wireless interfaces.

If network speed is a requirement, consider both your hardware and the type of shares that you create. On the same hardware, CIFS will be slower than FTP or NFS as Samba is single-threaded. If you will be using CIFS, use a fast CPU.
RAID Overview

Data redundancy and speed are important considerations for any network attached storage system. Most NAS systems use multiple disks to store data, meaning you should decide what type of RAID to use before installing FreeNAS™. This section provides an overview of RAID types to assist you in deciding which type best suits your requirements.

RAID 0: uses data striping to store data across multiple disks. It provides zero fault tolerance: if one disk fails, all of the data on all of the disks is lost. The more disks in the RAID 0, the greater the chance of a failure.

RAID 1: all data is mirrored onto two disks, creating a redundant copy should one disk fail. If the disks are on separate controllers, this form of RAID is also called duplexing.

RAID 5: requires a minimum of 3 disks and can tolerate the loss of one disk without losing data. Disk reads are fast but write speed can be reduced by as much as 50%. If a disk fails, the array is marked as degraded, but the system will continue to operate until the drive is replaced and the RAID is rebuilt. However, should another disk fail before the RAID is rebuilt, all data will be lost. If your FreeNAS™ system will be used for steady writes, RAID 5 is a poor choice due to the slow write speed.

RAID 6: requires a minimum of 4 disks and can tolerate the loss of 2 disks without losing data. It benefits from having many disks, as performance, fault tolerance, and cost efficiency all improve as disks are added. The larger the failed drive, the longer it takes to rebuild the array. Reads are very fast but writes are slower than RAID 5.

RAID 10: requires a minimum of 4 disks, and the number of disks is always even as this type of RAID mirrors striped sets. It offers faster writes than RAID 5 and can tolerate the loss of multiple disks without losing data, as long as both disks in a mirror are not lost.

RAID 60: requires a minimum of 8 disks. It combines RAID 0 striping with the distributed double parity of RAID 6 by striping two 4-disk RAID 6 arrays. RAID 60 rebuild times are half that of RAID 6.
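
To compare raw capacity across the RAID levels above, the sketch below computes usable space for equal-sized disks. It is a simplification: rebuild times and write performance, which also differ between levels, are ignored, and the 2 TB disk size is hypothetical.

    DISK_TB = 2  # hypothetical disk size, for illustration only

    def usable_tb(level: str, n: int, disk_tb: float = DISK_TB) -> float:
        """Usable capacity for n equal-sized disks, per the definitions above."""
        data_disks = {
            "RAID 0": n,        # pure striping, no redundancy
            "RAID 1": 1,        # every disk holds the same data
            "RAID 5": n - 1,    # one disk's worth of parity
            "RAID 6": n - 2,    # two disks' worth of parity
            "RAID 10": n // 2,  # half the disks are mirrors
            "RAID 60": n - 4,   # two RAID 6 legs, each losing two disks to parity
        }[level]
        return data_disks * disk_tb

    for level, n in [("RAID 5", 4), ("RAID 6", 4), ("RAID 10", 4), ("RAID 60", 8)]:
        print(f"{level} with {n} disks: {usable_tb(level, n)} TB usable")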

RAIDZ1: ZFS software solution that is equivalent to RAID 5. Its advantage over RAID 5 is that it avoids the write-hole and doesn’t require any special hardware, meaning it can be used on commodity disks. If your FreeNAS™ system will be used for steady writes, RAIDZ is a poor choice due to the slow write speed. It requires a minimum of 3 disks, though 5 disks are recommended (over 3, 4, or 6 disks). Note that you cannot add additional drives to expand a RAIDZ1 after you have created it; the only way to increase its size is to replace each drive with a larger drive, one by one, allowing time for restriping between each drive swap. However, you can combine two existing RAIDZ1s to increase the size of a ZFS volume (pool).

RAIDZ2: double-parity ZFS software solution that is similar to RAID 6. It avoids the write-hole and doesn’t require any special hardware, meaning it can be used on commodity disks. It requires a minimum of 3 disks. RAIDZ2 allows you to lose one drive without any degradation, as it essentially becomes a RAIDZ1 until you replace the failed drive and restripe. At this time, RAIDZ2 on FreeBSD is slower than RAIDZ1.
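
The same capacity arithmetic applies to RAIDZ: roughly one disk’s worth of space goes to parity in a RAIDZ1 and two in a RAIDZ2. A rough sketch (real pools lose a little more to metadata; the 2 TB disk size is hypothetical):

    def raidz_usable_tb(num_disks: int, parity: int, disk_tb: float) -> float:
        """Approximate usable capacity of a RAIDZ vdev: parity consumes
        roughly `parity` disks' worth of space (metadata overhead ignored)."""
        if num_disks <= parity:
            raise ValueError("need more disks than parity devices")
        return (num_disks - parity) * disk_tb

    print(raidz_usable_tb(5, 1, 2))  # RAIDZ1, 5 disks -> 8 TB usable (recommended layout)
    print(raidz_usable_tb(6, 2, 2))  # RAIDZ2, 6 disks -> 8 TB usable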

RAIDZ3: triple-parity ZFS software solution. FreeNAS™ will not support this form of RAIDZ until version 8.3.

NOTE: mixing ZFS RAID with hardware RAID is not recommended. Instead, place your hardware RAID controller in JBOD mode and let ZFS handle the RAID. According to Wikipedia: ZFS cannot fully protect the user’s data when using a hardware RAID controller, as it is not able to perform the automatic self-healing unless it controls the redundancy of the disks and data. ZFS prefers direct, exclusive access to the disks, with nothing in between that interferes. If the user insists on using hardware-level RAID, the controller should be configured in JBOD mode (i.e., turn off RAID functionality) for ZFS to be able to guarantee data integrity. Note that hardware RAID configured as JBOD may still detach disks that do not respond in time, and as such may require TLER/CCTL/ERC-enabled disks to prevent drive dropouts. These limitations do not apply when using a non-RAID controller, which is the preferred method of supplying disks to ZFS.

When comparing hardware RAID types, conventional wisdom recommends the following in order of preference: RAID 6, RAID 10, RAID 5, then RAID 0. If using ZFS, the recommended preference changes to RAIDZ2, then RAIDZ3.
