“We shall fight on the beaches, we shall fight on the landing grounds, we shall fight in the fields and in the streets, we shall fight in the hills; we shall never surrender.” (Winston Churchill)
Proxmox VE supports both traditional firmware RAID (via your motherboard BIOS/UEFI) and ZFS. This article provides step-by-step instructions for creating and managing storage with motherboard RAID and ZFS on Proxmox VE, and shows how to integrate a ZFS pool for VM storage and backups.
A solid understanding of these four ZFS concepts is essential when you’re using Proxmox VE for bare-metal virtualization or container hosting.
A ZFS pool is a collection of storage devices (like hard drives or SSDs) that are grouped together to provide a large, unified storage space. Pools are typically used to store files, create snapshots, and perform other storage-related tasks.
In Proxmox, when you add ZFS storage, you’re creating or importing a zpool (e.g. mypool). It abstracts away individual disks (or groups of disks) into one filesystem namespace, mounted (by default) under /mypool.
A vdev consists of one or more physical disks or partitions that are combined to form a single logical unit. A zpool is composed of one or more vdevs. ZFS stripes data across all vdevs, and each vdev in turn manages its member disks.
vdevs can be arranged in various configurations, including:
Single Disk: A vdev composed of a single physical disk with no redundancy; if the disk fails, its data is lost. A very budget-friendly option, but it offers no protection against drive failure.
Mirror: Composed of two or more disks that mirror each other. Data is written identically to all disks in the mirror for redundancy. If one disk fails, the data is still available on the other disk(s), but usable capacity is cut in half with a two-way mirror, or down to just a third with a three-way mirror.
RAID-Z1/Z2/Z3: Composed of at least three, four or five disks respectively. One, two or three drives' worth of capacity is used for parity data, allowing one, two or three disk(s) to fail without data loss.
Stripe: Composed of two or more disks, where data is split across the disks to improve performance and no space is spent on parity. However, there is no redundancy: each piece of data exists on only one disk, so if any disk fails, data is lost, which is why plain stripes are rarely used in most configurations.
There is therefore a trade-off to strike between performance, resilience, and capacity. The rule of life (and technology) is: you can have two “Big Things” in your life, but not three. The command sketch below illustrates each layout.
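To make the layouts above concrete, here is a minimal command sketch (the pool name and device paths are placeholders, not the ones used later in this article):
# Mirror of two disks: usable capacity of one disk, survives one disk failure
zpool create examplepool mirror /dev/sdx /dev/sdy
# RAID-Z1 across three disks: one disk of parity, survives one disk failure
zpool create examplepool raidz1 /dev/sdx /dev/sdy /dev/sdz
# Stripe of two disks: full combined capacity, no redundancy
zpool create examplepool /dev/sdx /dev/sdy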
A dataset is a logical unit within a ZFS pool. It is like a folder or directory that can be used to organize and manage data. In Proxmox, ZFS datasets back LXC containers and can also be used for shared storage.
A volume is a block storage device that functions much like a traditional disk. In the context of ZFS, these volumes are known as zvols (ZFS volumes), and Proxmox typically uses them to back VM disks.
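As a minimal sketch of the difference between a dataset and a zvol (the names mypool/data and mypool/vm-disk are hypothetical):
# A dataset is a filesystem, mounted at /mypool/data by default
zfs create mypool/data
# A zvol is a block device, exposed at /dev/zvol/mypool/vm-disk
zfs create -V 8G mypool/vm-disk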
Motherboard (firmware) RAID, described next, is generally the better option when you need simple, single-system storage with minimal setup complexity and you want to maximize raw performance.
RAID (Redundant Array of Independent Disks) is a technology that combines multiple physical disk drives into one or more logical units for data redundancy, improved performance, or both. Many modern motherboards (e.g. the MSI B550 Gaming Plus) include the AMD RAIDXpert2 utility in the BIOS, which supports RAID for SATA drives; this firmware RAID groups drives at a low level so the OS sees a single virtual disk.
For a simple and budget-friendly home setup, follow these steps to enable and configure motherboard RAID on an MSI B550 Gaming Plus motherboard (other motherboards will require similar steps):
Install Physical Drives in your computer. Mount the hard drives (HDDs) or solid-state drives (SSDs) in the appropriate SATA or M.2 connectors on the motherboard and connect the necessary power cables from your power supply to the drives.
Enter BIOS/UEFI. Power on your system and immediately press the designated key (For MSI motherboards, this is typically the Del key. Other common keys are F2, F10, F12, or ESC) to enter the BIOS/UEFI setup.
Enable RAID Mode in BIOS. Once in the BIOS, navigate to the Settings menu, Advanced (Advanced Settings), Integrated Peripherals, SATA Configuration, and set SATA Mode to RAID Mode. If you want to use NVMe (Non-Volatile Memory Express) PCIe (Peripheral Component Interconnect Express) SSDs (solid-state drives) in your RAID array, look for an option like NVMe RAID mode (it might be under a PCIe or NVMe configuration section) and ensure it is set to “Enabled”. Save your changes, exit the BIOS (usually by pressing F10 and confirming), and reboot your system.
Initialize Disks within RAIDXpert2 Utility. After restarting, re-enter the BIOS. In the BIOS RAIDXpert2 utility (Settings, Advanced, RAIDXpert2 Configuration Utility or Settings, IO Ports, RAIDXpert2 Configuration Utility), go to Physical Disk Management or a similar menu. You should see a list of your installed drives; select the drives you want to include in the RAID array and choose the option to initialize or prepare them for RAID. Then use the down arrow key to move to Apply Changes and press Enter.
Create the RAID Array. Still in the RAIDXpert2 utility, navigate to Array Management, Create Array. Choose a RAID level (RAID0, RAID1 or RAID10), select the member (physical) disks for the array, and Apply Changes.
Configure the array’s cache policy if required. No Cache/Write-through: disable caching for the safest (but slowest) operation; writes go directly to disk. Write-Back Cache: enable write caching (data is buffered in RAM) for higher write performance, at the risk of data loss on power failure. Read Ahead Cache: enable read-ahead caching to improve sequential read throughput (the controller pre-fetches data).
Finalize Array Creation. After selecting RAID level, disks, and cache settings, choose Create Array and confirm. The BIOS will build the RAID volume. Save changes and exit BIOS (usually F10 to Save & Reboot).
Verifying New Drives/Arrays in Proxmox VE. After configuring the RAID in the BIOS and rebooting, Proxmox VE should boot normally and detect the new array as a single disk.
# Log into the Proxmox host
# Use the lsblk command to list block devices.
root@pve:~# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 238.5G 0 disk (OS drive)
├─sda1 8:1 0 100M 0 part /boot/efi
├─sda2 8:2 0 1.0G 0 part [SWAP]
└─sda3 8:3 0 237.4G 0 part / (root partition)
sdb 8:16 0 931.5G 0 disk (New drive or RAID volume)
In this example, sda is the OS disk and sdb is a newly attached 1TB drive. If you configured a RAID1 in BIOS with this drive, you might instead see a single logical device (/dev/sdb) of ~1TB.
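If it is not obvious which entry is the new firmware RAID volume, listing the model and serial columns can help tell it apart from individual disks (the exact model string depends on your controller):
# Show model and serial numbers alongside size and type
lsblk -o NAME,SIZE,TYPE,MODEL,SERIAL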
TrueNAS in Proxmox is usually a better alternative when data integrity is essential, you need advanced storage features, storage will be shared across multiple systems/VMs and future scalability is important.
RAID allows you to balance performance and redundancy. ZFS (the Zettabyte File System) can manage multiple drives in various RAID-like configurations: mirror, stripe, RAID-Z, etc. It is essentially a powerful filesystem and volume manager that creates a storage pool across multiple drives.
Initial step. Seeing the New SATA Drives.
# lsblk (or fdisk -l) lists all disks and partitions in a tree-like format
root@myserver:~# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
sda 8:0 0 3.6T 0 disk
sdb 8:16 0 3.6T 0 disk
# /dev/sda and /dev/sdb are each 3.6T and currently unpartitioned.
sdc 8:32 0 238.5G 0 disk
# This is my boot drive (with Proxmox installed)
├─sdc1 8:33 0 1007K 0 part
├─sdc2 8:34 0 1G 0 part /boot/efi
└─sdc3 8:35 0 237.5G 0 part
├─pve-swap 252:0 0 8G 0 lvm [SWAP]
├─pve-root 252:1 0 69.4G 0 lvm /
├─pve-data_tmeta 252:2 0 1.4G 0 lvm
│ └─pve-data 252:4 0 141.2G 0 lvm
└─pve-data_tdata 252:3 0 141.2G 0 lvm
└─pve-data 252:4 0 141.2G 0 lvm
# I can see two new disks (/dev/sda and /dev/sdb)
Creating a ZFS Pool via Proxmox GUI. Go to https://Proxmox-IP:8006/ and log in with your root credentials. Select the node (e.g. Datacenter, myserver); under Disks in the left-hand menu you should see a list of the attached drives. The new SATA drives should appear there if they are recognized by the system.
On the left sidebar, click the node name (e.g., myserver), go to Disks (you should see /dev/sda and /dev/sdb listed among the disks), ZFS, click on the Create ZFS button.
In the dialog box, enter a name for the new pool (ZFS pool name, e.g., mypool). Choose the disks to include in the pool from the list of available devices; for example, check the boxes next to /dev/sda and /dev/sdb to include both disks. If multiple disks are selected, also choose the RAID Level (striping, mirror, RAID-Z variants, etc.) from the dropdown. For example, select our two disks and choose Stripe if you want to combine them into one big pool without redundancy. Check Add Storage if you want Proxmox to automatically add this pool to its storage configuration. Click Create and confirm. After a moment, the new ZFS pool should be created and should also appear under Datacenter, Storage as a ZFS-type storage (e.g., mypool, ZFS).
# You can switch to the Shell to run zpool status to verify the pool status.
root@myserver:~# zpool status mypool
pool: mypool
state: ONLINE
config:
NAME STATE READ WRITE CKSUM
mypool ONLINE 0 0 0
sda ONLINE 0 0 0
sdb ONLINE 0 0 0
errors: No known data errors
Creating a ZFS Pool via Command Line. Advanced users may prefer the CLI, which offers more control and options.
# Identify the device names of the new drives by using lsblk or fdisk -l as above.
# It will show the drives (e.g. /dev/sda, /dev/sdb)
# A. Wipe the Disks (optional but recommended if the disks had partitions).
sgdisk --zap-all /dev/sda
sgdisk --zap-all /dev/sdb
# Decide the ZFS layout and formulate the zpool create command.
# B.1 Create a mirrored pool named "mypool" from two disks (data survives a single disk failure)
root@myserver:~# zpool create mypool mirror /dev/sda /dev/sdb
# B.2 For a stripe of two or more disks, e.g., "mypool" stripes across /dev/sda and /dev/sdb
root@myserver:~# zpool create mypool /dev/sda /dev/sdb
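# Optional tip: using the stable /dev/disk/by-id/ paths instead of sda/sdb keeps the
# pool intact if device letters change between reboots; list the identifiers with:
ls -l /dev/disk/by-id/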
# C. Confirm the pool is online and showing the expected configuration.
root@myserver:~# zpool status # It shows the health of the new pool, the devices, etc.
pool: mypool
state: ONLINE
config:
NAME STATE READ WRITE CKSUM
mypool ONLINE 0 0 0
sda ONLINE 0 0 0
sdb ONLINE 0 0 0
errors: No known data errors
root@myserver:~# zfs list # It shows any ZFS dataset or volumes in mypool
NAME USED AVAIL REFER MOUNTPOINT
mypool 396K 7.12T 96K /mypool
By default, the pool is mounted at a directory with the same name under root (e.g. /mypool). Check it by running df -h or mount.
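For example, either of the following confirms where the pool is mounted:
# The mountpoint is a property of the pool's root dataset
zfs get mountpoint mypool
# Or look at the mounted filesystem directly
df -h /mypool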
Adding the ZFS Pool to Proxmox Storage Configuration. Now that the ZFS pool mypool is created, we need to tell Proxmox VE about it so that it can be used to store VM disks, containers, ISO images, or backups.
If you created the pool via the GUI and checked Add Storage, this step may have been already done for you. Check under Datacenter, Storage for an entry named after your pool. For example, you might see a storage ID like “mypool” of type “ZFS” with content types listed.
Otherwise, in the Proxmox Web UI, go to Datacenter, Storage, Add. In the Add ZFS dialog, give the storage an ID (a name for Proxmox to reference, e.g. “mypool” or “local-zfs2”), select mypool (or whatever name you used previously) from the ZFS Pool drop-down, and choose the Content types; typically select Disk image and Container, which are used for hosting Virtual Machines and Containers.
Verify the New ZFS Storage. Under Datacenter, Storage, you should see the recently created mypool. When you create a new VM, in the wizard’s Hard Disk step, choose your new ZFS storage from the drop-down list.
When you create or migrate VMs/containers to this ZFS storage, Proxmox will manage ZFS datasets or volumes accordingly. VM disks on ZFS are typically created as ZFS block devices (zvols) which support internal snapshots and efficient clones.
# List all datasets, volumes and snapshots in the pool.
root@myserver:~# zfs list -t all
# A container might appear as mypool/subvol-100-disk-0 if using ZFS; VM disks appear as zvols.
NAME USED AVAIL REFER MOUNTPOINT
mypool/subvol-100-disk-0 1.25G 6.75G 1.25G /mypool/subvol-100-disk-0
mypool/subvol-100-disk-1 96K 100G 96K /mypool/subvol-100-disk-1
mypool/vm-101-disk-0 3M 6.30T 112K -
mypool/vm-101-disk-1 540G 6.79T 39.6G -
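For instance, you can snapshot a single VM disk zvol directly with ZFS (the snapshot name below is just an example); for whole VMs, Proxmox’s own qm snapshot command is usually preferable because it also records the VM configuration:
# Snapshot one VM disk, then list all snapshots in the pool
zfs snapshot mypool/vm-101-disk-0@before-upgrade
zfs list -t snapshot -r mypool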
In addition to using ZFS for VM storage, a great use-case is to dedicate part of your ZFS pool for backups.
In Proxmox, when you have a ZFS pool like “mypool” configured as storage for disk images and containers, you won’t see it as an option in the Backup pane for scheduling backups by default. This is because Proxmox's built-in (native) backup tool (vzdump) requires backup storage to be file-level storage, such as a directory, rather than directly on a raw ZFS pool or zvol.
How to enable backup scheduling on your ZFS pool (“mypool”):
Create a dedicated ZFS dataset inside your pool for backups, e.g., mypool/backup. Open the Proxmox host’s shell via SSH or the web UI (Datacenter, host (e.g., myserver), Shell) and, as root, execute the following command to create a new ZFS dataset named backup inside mypool: zfs create mypool/backup. This command creates a dataset named backup under the mypool pool. By default it inherits the pool’s mountpoint setting, so it will be automatically mounted at /mypool/backup.
Verify the mount point.
# You can verify its mount point (the VALUE is the mount point path):
zfs get mountpoint mypool/backup
NAME PROPERTY VALUE SOURCE
mypool/backup mountpoint /mypool/backup default
zfs list mypool/backup
NAME USED AVAIL REFER MOUNTPOINT
mypool/backup 61.9G 6.30T 61.9G /mypool/backup
Enable compression on the backup dataset. lz4 is generally a good choice, as it is lightweight and can significantly save space: zfs set compression=lz4 mypool/backup.
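You can confirm the setting, and later check how much space it actually saves:
# compression shows the configured algorithm, compressratio the achieved ratio
zfs get compression,compressratio mypool/backup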
Add the dataset as a Directory storage in Proxmox configured to accept backup content. In the Proxmox web UI, go to Datacenter, the Storage pane, click the Add button, and select Directory from the dropdown. We will add a new storage entry that points to the mountpoint of our ZFS dataset.
Fill in the details. ID: choose a descriptive name, e.g., mypool-backup, backup-zfs or local-zfs-backups. Directory: enter the mount point path of your dataset in the Proxmox filesystem, e.g., /mypool/backup. Content: select Backup (VZDump backup file) from the dropdown, and save the storage configuration by clicking Add.
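For reference, the resulting entry in /etc/pve/storage.cfg looks roughly like this (the ID and the prune policy shown here are just an example):
dir: mypool-backup
    path /mypool/backup
    content backup
    prune-backups keep-last=7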
Schedule backups to the new storage. Now, when you create or schedule a backup job for a VM or container (either manually or as a scheduled task via Datacenter, Backup, Add), this new directory storage (e.g., mypool-backup) will be available as a target option.
Go to Datacenter, Backup in the Proxmox web GUI, click Add to create a new job. Select the Node (if standalone, it’s just that node, e.g., myserver). For Storage, select the ID of the directory storage we just added (e.g. “mypool-backup”). This tells Proxmox to put the backup files in /mypool/backup.
Schedule the time and repetition (e.g. daily at 11:00, or weekly, according to your needs and requirements). Choose the Compression for the backup file; this is separate from ZFS compression, so you can choose None or, alternatively, ZSTD (fast and good). Set Mode to Snapshot, then click OK.
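The same job can also be run from the shell with vzdump, which is the tool the scheduled backups use under the hood (guest ID 100 here is just an example):
# Back up guest 100 to the new directory storage, snapshot mode, zstd compression
vzdump 100 --storage mypool-backup --mode snapshot --compress zstd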
Test the backup job. It’s a good practice to run a backup manually to ensure it works. Navigate to Datacenter, Backup, select the job and click “Run Now”.
Watch the task log at the bottom of the GUI. You can also verify by listing the directory:
root@myserver:~# ls -lh /mypool/backup/dump
total 121G
-rw-r--r-- 1 root root 740 May 8 04:59 vzdump-lxc-100-2025_05_08-04_58_40.log
-rw-r--r-- 1 root root 730M May 8 04:59 vzdump-lxc-100-2025_05_08-04_58_40.tar.zst
-rw-r--r-- 1 root root 11 May 8 04:59 vzdump-lxc-100-2025_05_08-04_58_40.tar.zst.notes
-rw-r--r-- 1 root root 740 May 9 22:31 vzdump-lxc-100-2025_05_09-22_30_00.log
-rw-r--r-- 1 root root 730M May 9 22:31 vzdump-lxc-100-2025_05_09-22_30_00.tar.zst
-rw-r--r-- 1 root root 11 May 9 22:31 vzdump-lxc-100-2025_05_09-22_30_00.tar.zst.notes
-rw-r--r-- 1 root root 740 May 10 02:31 vzdump-lxc-100-2025_05_10-02_30_00.log
-rw-r--r-- 1 root root 730M May 10 02:31 vzdump-lxc-100-2025_05_10-02_30_00.tar.zst
-rw-r--r-- 1 root root 11 May 10 02:31 vzdump-lxc-100-2025_05_10-02_30_00.tar.zst.notes
[...]