“I’d far rather be happy than right any day.” (Douglas Adams, The Hitchhiker’s Guide to the Galaxy)
Network Attached Storage, or NAS, is a dedicated storage solution that allows multiple users and devices to access and store data from a centralized location. Unlike traditional hard drives that are directly connected to a single computer, a NAS is connected to a network, providing more flexible and collaborative data management. TrueNAS SCALE (an Open Source storage platform) can be virtualized on Proxmox VE to provide network-attached storage for your VMs and clients.
Step 1. Uploading the TrueNAS SCALE ISO to Proxmox. Download the latest version of TrueNAS Community Edition (SCALE) from the official download page. In the Proxmox web interface, navigate to Datacenter, your node (e.g., myserver), select your storage (e.g., local) and open the ISO Images menu. Click the Upload button, then browse to and select the TrueNAS SCALE ISO file from your computer.
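If you prefer to skip the browser upload, a minimal alternative (assuming shell access to the Proxmox node and that your ISO storage is the default local) is to download the ISO straight into the node's ISO directory; the URL below is a placeholder for the link shown on the TrueNAS download page:
# Download the ISO directly on the Proxmox node. /var/lib/vz/template/iso is the
# ISO directory of the default "local" storage; the URL is a placeholder.
cd /var/lib/vz/template/iso
wget "https://<link-from-the-truenas-download-page>/TrueNAS-SCALE-<version>.iso"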
Step 2. Creating a Virtual Machine for TrueNAS. In the top right corner (Proxmox web UI), select Create VM. This opens the VM creation wizard. Then, follow these instructions:
General: give the virtual machine an easy-to-remember, meaningful name (e.g., truenasvm). Choose the node (host) on which to run the VM if you have a cluster.
OS. Set ISO Image to the TrueNAS SCALE ISO you have just uploaded. For Guest OS, set Type to Linux and Version to 6.x - 2.6 Kernel.
System. The default BIOS (SeaBIOS) works for TrueNAS. Leave QEMU Guest Agent unchecked. For Machine, use q35 for TrueNAS SCALE, as it is more modern and supports newer features. Change the SCSI Controller to VirtIO SCSI.
Hard Disk: Set the disk size to whatever you’d like, e.g., 32 GB; this becomes the TrueNAS boot disk. Choose a storage for this disk (typically local-lvm) and use the VirtIO SCSI controller with the SCSI disk type. Leave Cache as Default (No cache) or Write through for safety, or set Write back (unsafe) for better performance at the cost of data-loss risk on power failure.
CPU. Set the number of Cores you’d like to use (2, 4, or more), assigning CPU resources with your host’s capacity and other workloads in mind. Set Type to host to pass through the host CPU features, or use x86-64-v2-AES.
Memory. Allocate memory for the VM: 8192 MB (8 GB) is a sensible minimum, and 16 GB or 32 GB is better if you can spare it, since ZFS uses spare RAM for caching.
Network Settings. Add a network device. Proxmox’s default, VirtIO (paravirtualized), is normally recommended because TrueNAS SCALE has VirtIO drivers and will perform best with it. Ensure it’s assigned to the appropriate bridge (e.g., vmbr0) that connects to your local network.
Confirm. Review the summary of your settings. Ensure that the Start after created checkbox is not selected (we need to add our data disks to the VM first) and click Finish to create the VM. If you prefer the command line, a rough equivalent is sketched below.
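For reference, a minimal qm sketch run from the Proxmox node’s shell that approximates the wizard settings above. The VM ID 100, the storage names and the ISO filename are assumptions; adjust them to your environment:
# Create a VM roughly matching the wizard choices (ID, names and sizes are examples).
qm create 100 \
  --name truenasvm \
  --ostype l26 \
  --machine q35 \
  --bios seabios \
  --cpu host \
  --cores 4 \
  --memory 8192 \
  --scsihw virtio-scsi-pci \
  --scsi0 local-lvm:32 \
  --net0 virtio,bridge=vmbr0 \
  --cdrom local:iso/TrueNAS-SCALE-<version>.iso   # replace with your ISO filename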
Step 3. Add Virtual Disk for TrueNAS Data Storage. Before installing TrueNAS, add one or more additional virtual disks to serve as the data drives for the ZFS storage pool.
Select the newly created TrueNAS VM (e.g., truenasvm) in the Proxmox web UI, then go to the Hardware tab and click Add, Hard Disk. Leave the SCSI Controller as VirtIO SCSI, choose your storage (where the virtual disk will live, e.g., mypool-backup), and set a Disk size (GiB, e.g., 32 GB, 100 GB or more). Repeat this process for each additional virtual disk you want TrueNAS to manage.
After adding your virtual disks to the VM, you should be able to see them listed under the VM’s Hardware tab.
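The same step can be scripted; a small sketch, again assuming VM ID 100 and a storage called mypool-backup, attaching two 32 GB data disks:
# Attach additional data disks from the CLI (sizes are in GiB).
qm set 100 --scsi1 mypool-backup:32
qm set 100 --scsi2 mypool-backup:32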
Step 4. Installing TrueNAS SCALE on the VM. Start the TrueNAS virtual machine and run through the TrueNAS installation process:
Select Install/Upgrade, choose the boot disk you created in Step 2 as the destination, then select Yes and press Enter to continue the installation (this erases the selected disk). Set the password for the administrative user when prompted and confirm with Yes. When the installer finishes, reboot the VM and detach the installation ISO in Proxmox (Hardware, CD/DVD Drive, Do not use any media) so the VM boots from its virtual disk. After reboot, TrueNAS SCALE performs its first-boot setup and the console shows the IP address of the web user interface, http://<TrueNAS-IP-Address>; e.g., my TrueNAS IP address is 192.168.1.42. Once TrueNAS is running, navigate to http://<TrueNAS-IP-Address> from a web browser on your network to perform the initial configuration via the web interface.
At the login screen, enter the username truenas_admin and the password you set previously during the installation process; you should then gain access to the TrueNAS SCALE dashboard.
Ensure TrueNAS has the expected network settings in the web UI under Network, then update your system via System, Update.
With TrueNAS up and running, let’s create a ZFS storage pool using the (data) virtual disks we previously added (not the boot disk). Launch your favorite web browser, go to the web user interface address and select Storage. If no pool exists yet, TrueNAS shows a message along the lines of: It seems you haven't configured pools yet. Please click the button below to create a pool. Click the Create Pool button and give the pool a name (e.g., mynaspool). Then choose a Layout for the data disks: Stripe for a single disk, or, if you added two disks and want redundancy, select both and choose Mirror; with three or more disks, RAID-Z or RAID-Z2 is very safe but leaves less usable capacity. Save and confirm by clicking Create Pool. After creation, you should see the new pool in the Storage section.
Within our pool, it’s best practice to create a dataset for the actual data we plan to share, rather than sharing the root dataset (e.g., mynaspool) directly. Creating child datasets under mynaspool for your shares offers significant advantages in terms of organization, security, and management, even in a home lab setup; for example, applying permissions to specific child datasets is much cleaner and more granular than trying to manage access at the root level. Besides, you can create different datasets for different purposes (e.g., mynaspool/documents, mynaspool/media, mynaspool/backups_vm).
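If you want to double-check the result outside the dashboard, the standard ZFS tools are available from a TrueNAS shell session; the pool name below is the example used above:
# Verify the new pool and its layout from the TrueNAS shell.
zpool status mynaspool
zpool list mynaspool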
Creating a Child Dataset for Sharing. In the TrueNAS UI, go to Datasets, select the root dataset (e.g., mynaspool, your root pool), click the Add Dataset button, and configure the new dataset.
Parent Path: mynaspool; Name (enter a name for your dataset): data, so the full path will be mynaspool/data; Dataset Preset: leave it as Generic (a generic dataset is suitable for any share type).
Click Save to create the dataset. You now have a dataset mynaspool/data ready to be shared. It should now appear under the pool (mynaspool).
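As with the pool, the new dataset can also be confirmed from a TrueNAS shell using the standard ZFS tooling (names as in the example above):
# List the pool and all its child datasets.
zfs list -r mynaspool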
NFS (Network File System) is widely used for sharing files and directories with Linux/macOS clients (including Proxmox itself, for instance, for VM backups or ISO storage).
Setting Dataset Permissions for NFS. By default, our new dataset likely inherited ownership from the parent (truenas_admin) and mode bits (755).
For simplicity, let’s make the dataset owned by the TrueNAS built-in nobody user and allow wide-open access, then rely on NFS “mapall” to map remote users to that owner.
In Datasets, select the dataset (e.g., data), then under Permissions click Edit. Set Owner (User) to nobody and Owner (Group) to nogroup (it might also be called nobody). These are special IDs that can act as a catch-all for anonymous NFS access. Check the boxes to Apply User and Apply Group.
Set the permission mode for User/Group/Other as needed; for wide-open NFS in a lab scenario, set the mode to 777 (rwx for all) for the easiest access. This basically means that any NFS client mapped to nobody (or any client at all, since Other is open) can read and write.
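For illustration only, a rough shell equivalent of what this permissions edit does (the web UI is the supported way; paths assume the example pool and dataset names):
# Make the dataset wide open and owned by the anonymous NFS identity.
chown -R nobody:nogroup /mnt/mynaspool/data
chmod -R 777 /mnt/mynaspool/data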
Create an NFS share for the dataset in TrueNAS. Navigate to Shares, Unix Shares (NFS), and click Add. In the Path field, browse to the dataset you have previously created (e.g., “/mnt/mynaspool/data”). Ensure the Enabled box is checked.
Under Networks and Hosts, specify restrictions if needed; typically (in a home lab) you can leave them blank to allow all clients from your local network. Click Advanced Options to tune access, and set Mapall User to nobody and Mapall Group to nogroup so that all connecting users map to the nobody account on the server (ensuring permissions match, because every NFS operation then runs as the nobody user on TrueNAS).
Finally, click Save to create the NFS share. If the NFS service is not already running, TrueNAS will ask whether you want to enable it now (NFS Service is not currently running. Start the service now?). Click Confirm so that the NFS share becomes active.
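Before mounting, you can confirm the export is visible from any Linux client that has the NFS utilities installed; the IP is the example address used throughout:
# List the exports published by the TrueNAS server.
showmount -e 192.168.1.42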
Finally, on a Linux VM or client machine on the same network, we can mount the NFS export.
Ensure the NFS client packages are installed.
# NixOS. Add nfs-utils to your system packages in configuration.nix:
environment.systemPackages = with pkgs; [ nfs-utils ];
# Then, rebuild the system:
sudo nixos-rebuild switch
# On Ubuntu and Debian:
sudo apt update
sudo apt install nfs-common
Create a local directory. This will be the mount point for the NFS share, e.g., sudo mkdir -p /mnt/fileserver.
Mount the NFS file share by running the mount command: sudo mount -t nfs <TrueNAS-IP>:/mnt/<poolname>/<dataset> /mnt/fileserver, e.g., sudo mount -t nfs 192.168.1.42:/mnt/mynaspool/data /mnt/fileserver, where <TrueNAS-IP> is the IP or hostname of your NAS and <poolname>/<dataset> is the exported path on the NFS server.
Once the NFS share is mounted, test access: cd /mnt/fileserver, create a file (e.g., nvim test.txt), and list the directory:
❯ ls -l /mnt/fileserver
total 1
-rw-r--r-- 1 nobody nogroup 5 May 14 10:55 test.txt
Unmount the NFS file share when you are done: sudo umount /mnt/fileserver.
Mounting NFS File Shares Permanently.
To permanently mount an NFS share in NixOS, add this to your configuration.nix:
{ config, pkgs, ... }:
{
  # Enable NFS support
  boot.supportedFilesystems = [ "nfs" "nfs4" ];
  environment.systemPackages = with pkgs; [ nfs-utils ];

  # NFS mount declaration
  fileSystems."/mnt/fileserver" = {
    device = "192.168.1.42:/mnt/mynaspool/data";
    fsType = "nfs";
    options = [
      "x-systemd.automount" # Mount on first access
      "noauto"              # Don't mount at boot
      "_netdev"             # Delay until the network is up
      "nfsvers=4.2"         # Force NFSv4.2
      "nofail"              # Continue boot if the NFS server is unreachable
    ];
  };
}
If you are using other Linux distros, you can simply add an entry to /etc/fstab on the client for persistent mounting: 192.168.1.42:/mnt/mynaspool/data /mnt/fileserver nfs defaults,_netdev 0 0. This ensures the NFS share mounts at boot (_netdev waits until the network is up and running).
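If you would rather mirror the on-demand behaviour of the NixOS options above on those distros, one variant (same example IP and paths; adjust to your setup) is an fstab entry with the systemd automount options instead of defaults:
# Append an fstab entry that mounts on first access and doesn't block boot.
echo '192.168.1.42:/mnt/mynaspool/data /mnt/fileserver nfs x-systemd.automount,noauto,_netdev,nfsvers=4.2,nofail 0 0' | sudo tee -a /etc/fstab
sudo systemctl daemon-reload   # let systemd pick up the new fstab entry
ls /mnt/fileserver             # first access triggers the automount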