Setting up a RAID array is an important step when building any home server. Doing so ensures that your data is protected if one (or, in some configurations, more) of your hard disks fails.
What's more, there are usually performance benefits to a RAID setup, as data can be read from multiple drives simultaneously, effectively increasing performance beyond a single drive's normal read speed.
There are many different types of RAID, some focused on increasing performance, some on increasing resiliency and some doing both. If you’re unaware of how RAID works I would recommend you check out this link for more information.
For this build, I'm going to use a RAID 5 setup, and I recommend you do the same. RAID 5 hits the sweet spot between a performance boost and resiliency in the event of a drive failure. What's more, it also offers you the best amount of available storage on an array (group of disks), with only the equivalent of one disk's space being lost.
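As a quick sanity check on that last point, the usable capacity of a RAID 5 array is (number of disks - 1) multiplied by the disk size. Here's a throwaway shell calculation using the three 10GB disks from my example later on (swap in your own numbers):

```shell
# Usable RAID 5 capacity: (n - 1) * disk size (sizes in GB, illustrative)
DISKS=3
DISK_SIZE_GB=10
USABLE=$(( (DISKS - 1) * DISK_SIZE_GB ))
echo "${USABLE}GB usable"
```

So three 10GB disks give 20GB of usable space, with the remaining 10GB-equivalent spent on parity.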
Ideally, all the disks in a RAID 5 array should be the same size; if they differ, mdadm will only use the capacity of the smallest disk on each of them, wasting the rest. It's recommended that they all be from the same product line as well, but thanks to standardisation, disks from different manufacturers will work together.
Following on from Part 2 of this guide, the next step is to run a series of commands from the command line in order to set up the RAID array. This can be done either directly at the server (with a keyboard and monitor attached) or via SSH using PuTTY.
The first command you’ll need to run will show you the drives connected to your system:
lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT
This will give you an output similar to this:
You should be able to discern the drives you wish to use for RAID based upon their size. I’m using three virtual disks in my emulated version above, each of 10GB size. They’re labelled sdb, sdc, sdd. You’ll need to record the labels for your disks for the next steps.
Next we’re going to build a command that will create a RAID 5 array given these drives, as follows:
sudo mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 /dev/sdb /dev/sdc /dev/sdd
- sudo mdadm – this runs the mdadm command with root privileges
- --create – indicates we're going to create a "device"
- --verbose – means that the output of the command will be displayed on the screen (useful for confirming it's worked)
- /dev/md0 – this is the name of our resulting RAID device
- --level=5 – we will be using RAID 5 as our RAID type
- --raid-devices=3 – our RAID array will contain 3 drives
- /dev/sdb /dev/sdc /dev/sdd – the names of our drives
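If you'd like to double-check the command before running it, you can assemble it from a list of drives and just echo it first. The sketch below builds (but does not run) the command for a hypothetical 4-drive array; the drive names are illustrative, so substitute your own from the lsblk output:

```shell
# Build (but don't execute) the mdadm create command for an arbitrary drive list
DRIVES=(/dev/sdb /dev/sdc /dev/sdd /dev/sde)   # illustrative 4-drive example
CMD="sudo mdadm --create --verbose /dev/md0 --level=5 --raid-devices=${#DRIVES[@]} ${DRIVES[*]}"
echo "$CMD"
```

Once the echoed command looks right, you can run it for real.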
You'll need to make sure you input the characteristics of your own drives for this command to work. You should be able to copy my example into a text editor, change the characters you need to, and then paste it into your command line.
If you're planning on using more than 3 drives in your array, you'll also need to change the value of --raid-devices.
Once you're happy with your command, press Enter and enter your password when prompted. You should see something similar to this:
Your RAID array is now being built in the background. This can take some time to complete, especially for larger drives, but you can check the progress by running the following command:

cat /proc/mdstat
This will give you an output similar to this:
You can see the status of the build based on the “recovery = ” field shown above. Once it reaches 100% your RAID array is built and functioning.
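If you'd rather extract just the percentage programmatically, you can parse the recovery line. The sketch below runs against a sample line with illustrative numbers; on a live system you would instead set LINE with something like LINE=$(grep recovery /proc/mdstat):

```shell
# Parse the completion percentage from a sample mdstat recovery line (illustrative)
LINE='[=>...................]  recovery =  8.9% (934528/10473984) finish=1.2min speed=133504K/sec'
PCT=$(echo "$LINE" | grep -o '[0-9.]*%')
echo "Rebuild at $PCT"
```

This is handy if you want to poll the rebuild from a script rather than eyeballing the output.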
It’s recommended you let the array finish building before continuing with this guide. Note, this could take several hours for larger drives!
Once you run the command above and confirm the array is 100% built you can continue.
Next, you’ll need to format the array and create a filesystem on it:
sudo mkfs.ext4 -F /dev/md0
Next, to make the array accessible to the filesystem we have to create a mount point and then map the array to it:
sudo mkdir -p /mnt/md0
sudo mount /dev/md0 /mnt/md0
Next you can check that the new space shows as an accessible drive:
df -h -x devtmpfs -x tmpfs
This will give you an output like this:
As shown above, the drive /dev/md0 is mounted to the mount point /mnt/md0, success!
Next we'll need to save the configuration above so that it loads each time the server reboots. This is a single command that scans for active arrays and appends their details to the end of the file mdadm.conf:
sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
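After running that, the end of /etc/mdadm/mdadm.conf should contain a line roughly like the one below. The name and UUID will be specific to your array; this is purely an illustration of the shape of the entry:

```
ARRAY /dev/md0 metadata=1.2 name=yourserver:0 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx
```

If you run the command more than once you may end up with duplicate ARRAY lines, so it's worth opening the file and checking.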
Next, you’ll need to add the array to the Initial RAM File System (initramfs). This will make the array available during the early boot process, useful if you choose to install any apps to the RAID array:
sudo update-initramfs -u
The last step is to add the RAID array to the /etc/fstab file so that the RAID array is automatically mounted at boot:
echo '/dev/md0 /mnt/md0 ext4 defaults,nofail,discard 0 0' | sudo tee -a /etc/fstab
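One optional refinement: /dev/md0 device names can occasionally change across reboots, so some people prefer to reference the filesystem by UUID in /etc/fstab instead. You can find the UUID with sudo blkid /dev/md0 and then use an entry like the one below (the UUID is a placeholder, substitute your own):

```
UUID=<your-array-uuid> /mnt/md0 ext4 defaults,nofail,discard 0 0
```

Either form works; the nofail option in both ensures the server still boots even if the array is unavailable.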
And we're done! We now have a RAID 5 array configured on our server, it's mounted to a folder, and the system is set up so that all the changes we've made will persist after a reboot. You should now be able to write data to the RAID array and read it back.
But how? How do we get our data onto the server? In the next part, I'll show you how to set up folders on your new array and share them as network drives, accessible from any other PC on your network.
Stay tuned for Part 4!