
RAID, Adding SATA drives, TrueNAS

“We shall fight on the beaches, we shall fight on the landing grounds, we shall fight in the fields and in the streets, we shall fight in the hills; we shall never surrender,” Winston Churchill


Create a RAID

RAID (Redundant Array of Independent Disks) is a technology that combines multiple physical disk drives into one or more logical units for data redundancy, improved performance, or both.

There are two options:

  1. Motherboard RAID. It is generally the better option when you need simple, single-system storage with minimal setup complexity and you want to maximize raw performance.
  2. TrueNAS in Proxmox is usually the better alternative when data integrity is essential, you need advanced storage features, storage will be shared across multiple systems/VMs, and future scalability is important.

For a simple and budget-friendly home setting, let’s create a RAID using an MSI B550 Gaming Plus motherboard (other motherboards will require similar steps):

First step. Installing SATA hard drive(s) in your computer.

  1. Install the hard drives/SSDs in the SATA/M.2 connectors on the motherboard.
  2. Connect the power connectors from your power supply to the hard drives.
    Consider using drives from the same manufacturer/model.

Second step. BIOS Configuration.

  1. Power on your system and press the Delete key to enter your BIOS setup.

  2. In the BIOS, navigate to the Settings menu, Advanced, Integrated Peripherals.

  3. Set SATA Configuration, SATA Mode to RAID Mode. If you want to use NVMe PCIe SSDs for RAID, ensure that “NVMe RAID mode” is set to “Enabled”. Save the settings, exit the BIOS, and reboot your system.

  4. After restarting, re-enter the BIOS, navigate to Settings, IO Ports, RAIDXpert2 Configuration Utility. Select Array Management and press Enter. Choose Create Array and select your desired RAID level: RAID 0, 1, or 10 for SATA devices; RAID 0 or 1 for NVMe devices. The selections available depend on the number of hard drives installed.

    RAID 0 (Striping): Data is split evenly across two or more disks, offering improved performance since reads/writes are done in parallel. However, there is no redundancy: if one drive fails, all data is lost.

    RAID 1 (Mirroring): Data is written identically to two or more drives. It offers complete redundancy, since all data exists on multiple drives. However, the total capacity is the size of a single drive, that is, 50% storage efficiency with two drives.

    RAID 10: Combines both and requires a minimum of four drives. Data is both mirrored and striped, offering both redundancy and performance improvements.

  5. On the Select Physical Disks screen, select the hard drives to be included in the RAID array and set them to Enabled. Next, use the down arrow key to move to Apply Changes and press Enter. Then, return to the previous screen and configure the Select CacheTagSize, Read Cache Policy and Write Cache Policy if desired.

  6. Move to Create Array and press Enter to finalize the RAID setup. After completing, you’ll be brought back to the Array Management screen. Under Manage Array Properties you can see the new RAID volume and information on RAID level, array name, array capacity, etc.
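
As a quick sanity check of the trade-offs above, the usable capacity of each RAID level can be computed from the drive count and per-drive size. The following is an illustrative helper (the function name and interface are my own, not part of any RAID tool; it assumes equal-size drives and integer sizes):

```shell
#!/bin/sh
# Usable capacity per RAID level (illustrative sketch, not a vendor tool).
# Args: level (0|1|10), number of drives, size of each drive (any consistent unit).
raid_capacity() {
  level=$1; n=$2; size=$3
  case "$level" in
    0)  echo $((n * size)) ;;        # striping: all capacity, no redundancy
    1)  echo "$size" ;;              # mirroring: one drive's worth
    10) echo $((n / 2 * size)) ;;    # mirror+stripe: half the total
    *)  echo "unknown RAID level" >&2; return 1 ;;
  esac
}

raid_capacity 0 2 4    # two 4 TB drives in RAID 0  -> 8
raid_capacity 1 2 4    # two 4 TB drives in RAID 1  -> 4
raid_capacity 10 4 4   # four 4 TB drives in RAID 10 -> 8
```

With the same four 4 TB drives, RAID 0 would give 16 TB with no redundancy, while RAID 10 gives 8 TB and survives a drive failure: capacity traded for resilience.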

Adding newly attached SATA drives on a Proxmox machine

Step 1. Seeing the New SATA Drives.

Option A. Log in to your Proxmox server via SSH or directly on the console.

root@myserver:~ # lsblk (or fdisk -l) lists all disks and partitions in a tree-like format
NAME               MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS
sda                  8:0    0   3.6T  0 disk
sdb                  8:16   0   3.6T  0 disk
# /dev/sda and /dev/sdb are each 3.6T and currently unpartitioned.
sdc                  8:32   0 238.5G  0 disk
# This is my boot drive (with Proxmox installed)
├─sdc1               8:33   0  1007K  0 part
├─sdc2               8:34   0     1G  0 part /boot/efi
└─sdc3               8:35   0 237.5G  0 part
  ├─pve-swap       252:0    0     8G  0 lvm  [SWAP]
  ├─pve-root       252:1    0  69.4G  0 lvm  /
  ├─pve-data_tmeta 252:2    0   1.4G  0 lvm
  │ └─pve-data     252:4    0 141.2G  0 lvm
  └─pve-data_tdata 252:3    0 141.2G  0 lvm
    └─pve-data     252:4    0 141.2G  0 lvm
# I can see two new disks (/dev/sda and /dev/sdb)

Option B. Open the Proxmox Web GUI by visiting https://Proxmox-IP:8006, go to NodeName (e.g. Datacenter, myserver), under Disks on the left-hand menu, we should see a list of our attached drives. The new SATA drives should appear if they are recognized by the system.
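
If the machine has many drives, you can filter the lsblk output for whole disks that have no partitions yet. The following awk sketch (the `find_empty_disks` helper and the sample output are my own; on a live system you would pipe `lsblk -rn -o NAME,TYPE` into it) handles plain SATA naming (sdX/sdXN); NVMe partition names like nvme0n1p1 would need extra handling:

```shell
#!/bin/sh
# Print whole disks that have no partitions beneath them,
# given `lsblk -rn -o NAME,TYPE` output (-r raw, -n no header) on stdin.
find_empty_disks() {
  awk '$2 == "disk" { disks[$1] = 1 }
       $2 == "part" { sub(/[0-9]+$/, "", $1); delete disks[$1] }
       END { for (d in disks) print d }' | sort
}

# Example with captured output; on a live system:
#   lsblk -rn -o NAME,TYPE | find_empty_disks
find_empty_disks <<'EOF'
sda disk
sdb disk
sdc disk
sdc1 part
sdc2 part
sdc3 part
EOF
```

Here sdc is dropped because it has partitions (the boot drive), leaving sda and sdb as candidates for the new pool.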

Step 2. Creating the ZFS Pool

Using the Command Line

# A. Wipe the Disks (optional but recommended if the disks had partitions).
sgdisk --zap-all /dev/sda
sgdisk --zap-all /dev/sdb

# B.1 Create a simple mirrored ZFS pool named "mypool" ...
root@myserver:~# zpool create mypool mirror /dev/sda /dev/sdb

# B.2 ... or create a pool named "mypool" that stripes across /dev/sda and /dev/sdb
root@myserver:~# zpool create mypool /dev/sda /dev/sdb

# C. Check the pool
root@myserver:~# zpool status # It shows the health of the new pool, the devices, etc.
  pool: mypool
 state: ONLINE
config:

	NAME        STATE     READ WRITE CKSUM
	mypool      ONLINE       0     0     0
	  sda       ONLINE       0     0     0
	  sdb       ONLINE       0     0     0

errors: No known data errors
root@myserver:~# zfs list # It shows any ZFS dataset or volumes in mypool
NAME     USED  AVAIL  REFER  MOUNTPOINT
mypool   396K  7.12T    96K  /mypool

D. Add the Pool to Proxmox’s Storage: In the Proxmox Web UI, go to Datacenter, Storage, Add, ZFS. Under ZFS Pool, select mypool. Give it an ID/name and choose the Content types, e.g., Disk image and Container, which are used for hosting virtual machines and containers.


Via Proxmox Web

Go to https://Proxmox-IP:8006/ and log in with your root credentials. On the left sidebar, click the node name (e.g., myserver). Under your node’s menu, go to Disks (you should see /dev/sda and /dev/sdb listed among the disks), ZFS, click on the Create ZFS button.

Then, enter a ZFS pool name (e.g., mypool), check the boxes next to /dev/sda and /dev/sdb to include both disks, and for RAID Level choose “Stripe” if you want to combine them as one big pool without redundancy, “Mirror” for redundancy, etc. Finally, click Create.

Step 3. Verifying and Using the New ZFS Storage. Under Datacenter, Storage, you should see the recently created mypool. When you create a new VM, in the wizard’s Hard Disk step, choose your new ZFS storage from the drop-down list.

Pools, Datasets, Volumes, and Virtual devices

A ZFS pool is a collection of storage devices (like hard drives or SSDs) that are grouped together to provide a large, unified storage space. They are typically used to store files, create snapshots, and perform other storage-related tasks.

A dataset is a logical unit within a ZFS pool. It is like a folder or directory that can be used to organize and manage data. Datasets can be created within a pool to store files, directories, or even other datasets.

A volume is a block storage device that can be used like a traditional disk. ZFS volumes are often referred to as zvols (ZFS volumes). They provide block-level access, meaning they appear as raw block devices to the operating system and can be formatted with any file system.

Virtual devices or vdevs are the individual storage devices (like hard drives or SSDs) that are part of a ZFS pool. These devices are combined to form the pool and provide the storage capacity. Virtual Devices can be composed of one or multiple disks:

  1. Single Disk: A vdev composed of a single disk that provides no redundancy, so if the disk fails, data is lost. A very budget-friendly option, but clearly unsafe for your data.
  2. Mirror: Composed of two (this is the most typical configuration) or more disks (two mirror vdevs is also a widely used configuration) that mirror each other. Data is written identically to all disks in the mirror, providing redundancy. If one disk fails, the data is still available on the other disk(s), but the usable disk capacity is cut in half with a two-way mirror or even worse, cut down to just a third with a three-way mirror.
  3. RAID-Z1/Z2/Z3: Composed of at least three/four/five disks. One/two/three drives’ worth of capacity is used for parity data, allowing one/two/three disk(s) to fail without data loss.
  4. Stripe: Composed of two or more disks, where all data is split across the disks to improve performance and no space is wasted on parity data. However, there is no redundancy: each piece of data exists on only one disk, so if one disk fails, data is lost, which is why stripes are rarely used in most configurations.
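
The usable capacity of a single vdev follows directly from these definitions. A small illustrative helper (the `vdev_capacity` function is my own shorthand, assuming equal-size disks and integer sizes):

```shell
#!/bin/sh
# Usable capacity of one vdev (illustrative sketch; equal-size disks assumed).
# Args: vdev type (single|stripe|mirror|raidz1|raidz2|raidz3), disk count, disk size.
vdev_capacity() {
  type=$1; n=$2; size=$3
  case "$type" in
    single)  echo "$size" ;;
    stripe)  echo $((n * size)) ;;
    mirror)  echo "$size" ;;               # every disk holds the same data
    raidz1)  echo $(((n - 1) * size)) ;;   # one disk's worth of parity
    raidz2)  echo $(((n - 2) * size)) ;;   # two disks' worth of parity
    raidz3)  echo $(((n - 3) * size)) ;;   # three disks' worth of parity
    *) echo "unknown vdev type" >&2; return 1 ;;
  esac
}

vdev_capacity mirror 2 4   # two-way mirror of 4 TB disks -> 4
vdev_capacity raidz1 3 4   # three 4 TB disks in RAID-Z1  -> 8
vdev_capacity raidz2 4 4   # four 4 TB disks in RAID-Z2   -> 8
```

Note how a two-way mirror and RAID-Z1 both survive one disk failure, but RAID-Z1 squeezes more usable capacity out of three disks than a three-way mirror would.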

Therefore, there is a trade-off to be struck between performance, resilience, and capacity. The rule of life applies here too: you can have two “Big Things”, but not all three.

TrueNAS install

Network Attached Storage, or NAS, is a dedicated storage solution that allows multiple users and devices to access and store data from a centralized location. Unlike traditional hard drives that are directly connected to a single computer, a NAS is connected to a network, providing more flexible and collaborative data management.

Step 1. Download the latest version of TrueNAS SCALE (it includes built-in container support, can run Linux applications natively, and is also free, e.g., TrueNAS-SCALE-24.10.1.iso) from the TrueNAS OPEN ENTERPRISE STORAGE website.

Step 2. After the download finishes, navigate to Proxmox and upload the ISO image: select Datacenter, your node (e.g., myserver.local), a storage that holds ISO images (e.g., local), ISO Images, Upload.

Step 3. Create a virtual machine. In the top right corner, select Create VM, then follow these instructions:

Step 4. Add physical hard drives to the TrueNAS VM in Proxmox. To configure a storage pool in TrueNAS, you'll need to pass physical hard drives to the VM. There are two options.

Option A. Simply pass the physical hard disk to the TrueNAS VM by its device path/ID.

❯ lsblk -o +MODEL,SERIAL,WWN
# List the hard disks, their model names, and serial numbers
NAME MAJ:MIN RM   SIZE RO TYPE MOUNTPOINTS MODEL SERIAL WWN
sda    8:0    0   2,7T  0 disk             ST300 Z5023M 0x5000c500875d3487
└─sda1
       8:1    0   2,7T  0 part                          0x5000c500875d3487
❯ ls -l /dev/disk/by-id # List the hard disks and their unique IDs
total 0
lrwxrwxrwx 1 root root  9 dic 26 07:58 ata-ST3000DM001-1ER166_Z5023MGJ -> ../../sda
lrwxrwxrwx 1 root root 10 dic 26 07:58 ata-ST3000DM001-1ER166_Z5023MGJ-part1 -> ../../sda1

qm set 201 -scsi1 /dev/disk/by-id/ata-ST3000DM001-1ER166_Z5023MGJ
# ata-ST3000DM001-1ER166_Z5023MGJ is the full ID of the disk
# 201 is the VM_ID
# After this command, you'll see the newly added disk in the Hardware section.
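
If you need to pass several disks through, you can generate the qm set commands from the stable by-id names instead of typing them one by one. A dry-run sketch that only echoes the commands (the `gen_qm_cmds` helper and the second disk ID are hypothetical; on a real host you would build the list from `ls /dev/disk/by-id/ata-*`, excluding the -partN entries):

```shell
#!/bin/sh
# Echo (not execute) one `qm set` command per disk ID, numbering scsi1, scsi2, ...
# Args: VM ID, then one or more /dev/disk/by-id names.
gen_qm_cmds() {
  vmid=$1; shift
  i=1
  for id in "$@"; do
    echo "qm set $vmid -scsi$i /dev/disk/by-id/$id"
    i=$((i + 1))
  done
}

# The first ID is the disk from the listing above; the second is a made-up example.
gen_qm_cmds 201 ata-ST3000DM001-1ER166_Z5023MGJ ata-ST3000DM001-1ER166_Z5024ABC
```

Review the echoed commands, then run them (or pipe the output to `sh`) on the Proxmox host.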

Option B. Purchase a SAS/SATA RAID controller card, plug the SATA hard drives into it (rather than into the motherboard), and pass the entire PCIe device through to the TrueNAS VM, e.g., a 12G (data transfer rate of 12 Gbps) internal PCI-E (the interface used to connect the HBA to your motherboard) SAS/SATA HBA controller card with Broadcom’s SAS 3008 (the controller chip on the HBA), compatible with the SAS 9300-8i.

Select Hardware, Add, PCI Device. In Device, select your HBA, then select All Functions and add it (enable PCI Express).

Start the virtual machine and run through the TrueNAS installation process: choose install/upgrade, choose the destination media (sda, the 32 GB virtual drive created with the VM), set the root password, shut down, remove the virtual optical drive (select Hardware, CD/DVD Drive, and click Remove), and start the virtual machine again.

Launch your favorite web browser and go to the TrueNAS web user interface address. Select Storage, Create Pool, give it a name (e.g., proxmoxpool), select all available disks and move them into Data VDevs, pick a RAID level (RAID-Z; RAID-Z2, very safe but less capacity; etc.), and click Create.

Next, we need to create a TrueNAS dataset (a file system within a data storage pool) and a user. Go to Storage, select your pool (proxmoxpool), click the three vertical dots in the corner, Add Dataset, give it a name (proxmoxshare), and click Save. Then, go to Credentials, Local Users, hit Add, create the user, enable the Permit Sudo checkbox, then Save.

Click on Network, Interfaces, select your interface, typically you want to disable DHCP, and give it a static IP.

Faster Ethernet Speed

A Network Interface Card (NIC), also called a network adapter or network card, is a hardware component that allows a computer to connect to a network. It serves as the physical interface between a computer and the network infrastructure, enabling the computer to send and receive data over the network.

Modern cards such as the Realtek RTL8125B 2.5GE, Intel Ethernet Network Adapter I225-V 2.5 GbE or Intel X550-T2 (10 GbE) provide high-speed (2.5/10 Gigabit) Ethernet connectivity, offering data transfer speeds of up to 2.5/10 Gbps, significantly faster than traditional Gigabit Ethernet (1 Gbps), as well as advanced features like Wake-on-LAN.
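
To put those numbers in perspective, the rough time to move a given amount of data over each link can be computed (the `transfer_seconds` helper is my own sketch; it ignores protocol overhead, disk limits, and assumes 1 byte = 8 bits):

```shell
#!/bin/sh
# Rough transfer time in seconds for a given size (GB) over a link (Gbps).
# Illustrative only: real throughput is lower due to protocol overhead.
transfer_seconds() {
  awk -v gb="$1" -v gbps="$2" 'BEGIN { printf "%.0f\n", gb * 8 / gbps }'
}

transfer_seconds 100 1     # 100 GB over 1 Gbps   -> 800 (about 13 minutes)
transfer_seconds 100 2.5   # 100 GB over 2.5 Gbps -> 320
transfer_seconds 100 10    # 100 GB over 10 Gbps  -> 80
```

So a VM backup that takes a quarter of an hour over Gigabit Ethernet finishes in well under two minutes over 10 GbE, assuming the disks can keep up.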

Select Hardware, Add, PCI Device. In Device, select your card, e.g., Realtek RTL8125B 2.5GE Controller, then select All Functions and add it.

Using rsync to Remotely Copy Files to the Proxmox Storage

zpool list # Check if mypool exists and is mounted
root@myserver:/mypool/backup# zfs list # To see where mypool is mounted
NAME                               USED  AVAIL  REFER  MOUNTPOINT
mypool                             322G  6.81T  1.90G  /mypool

mkdir -p /mypool/backup # On the Proxmox server: create the backup directory
ssh-copy-id root@YOUR-PROXMOX-IP # On the client: copy your public key to the Proxmox server

BACKUP_DEST="root@YOUR-PROXMOX-IP:/mypool/backup"
rsync -av --delete "$HOME/dotfiles/" "$BACKUP_DEST/dotfiles"
# Copy my dotfiles directory to the ZFS pool

Freeing up space

You may have found out that Proxmox has run out of space. This is because /dev/mapper/pve-root is a separate logical volume (LVM) used for the root filesystem, and it has a limited size by default in Proxmox.

Let’s see what you can do about it:

# 1. Step
# Remove cached package files
apt-get clean
# Remove unused packages and dependencies
apt-get autoremove --purge
# Remove old log files
journalctl --vacuum-size=100M
# Check for Large Files
du -sh /var/*

# 2. Resize the Root Filesystem
# If freeing up space isn’t enough, you can resize the root filesystem (/dev/mapper/pve-root) to use more of the available disk space.
# Check the available space in the LVM volume group
vgdisplay
# Extend the logical volume (note: this assigns ALL remaining free space in the volume group to the root volume)
lvextend -l +100%FREE /dev/mapper/pve-root
# After extending the logical volume, resize the filesystem to use the new space
resize2fs /dev/mapper/pve-root
# Verify the changes
df -h

# 3. Proxmox stores backups, ISO images, and templates in /var/lib/vz. You can move this directory to mypool:
mv /var/lib/vz /mypool/vz
ln -s /mypool/vz /var/lib/vz

# Optionally move the logs too (do this with services quiesced, since daemons keep files in /var/log open):
mv /var/log /mypool/log
ln -s /mypool/log /var/log
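
To track down what is actually eating the space before moving anything, a small helper that lists the biggest files under a directory can be handy (the `largest_files` function is my own sketch; paths containing spaces are not handled):

```shell
#!/bin/sh
# List the N largest files under a directory, biggest first, in MB.
# Usage: largest_files DIR [N]   (N defaults to 10)
largest_files() {
  dir=$1; n=${2:-10}
  find "$dir" -xdev -type f -exec du -k {} + 2>/dev/null |
    sort -rn | head -n "$n" |
    awk '{ printf "%8.1f MB  %s\n", $1 / 1024, $2 }'
}

# Example: inspect /var before cleaning up
# largest_files /var 5
```

Pair it with the `du -sh /var/*` check above: du tells you which directory is heavy, largest_files tells you which files inside it are to blame.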


JustToThePoint Copyright © 2011 - 2025 Anawim. ALL RIGHTS RESERVED.