When the world says, ‘Give up,’ hope whispers, ‘Try it one more time.’ (Lyndon B. Johnson)
This guide walks through creating an Arch Linux virtual machine (VM) on Proxmox VE with the GNOME desktop environment and hardware-accelerated graphics, so it can deliver a full desktop experience.
There are two methods for accelerated graphics: VirtIO-GPU (VirGL), a shared virtual GPU with 3D acceleration, or PCIe passthrough of a dedicated GPU.

Before creating the VM, ensure your hardware and Proxmox host are configured for virtualization and (if using GPU passthrough) for PCIe device assignment.
Hardware virtualization support: In your system BIOS/UEFI, enable the CPU virtualization extensions. This is typically labeled as Intel VT-x for Intel processors or AMD-V for AMD processors.
IOMMU (Intel VT-d / AMD-Vi). This is essential specifically for PCIe passthrough, including GPU passthrough. It allows Proxmox to isolate a PCIe device (like your dedicated GPU) from the host system and assign it directly to a VM. “Enable IOMMU” or “VT-d” should be set to Enabled instead of Auto. Depending on your particular motherboard, these settings can be found under advanced CPU or Northbridge settings.
Separate Host GPU for Passthrough. If you plan to pass through a dedicated GPU, it’s recommended to use another GPU for the Proxmox host output. Enter the BIOS and set the primary display adapter accordingly, e.g., to the iGPU (the integrated GPU).
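Before moving on, you can verify from the Proxmox shell that these features are actually enabled; a quick check, assuming an AMD host as used in this guide (Intel hosts report DMAR/VT-d instead):
# A non-zero count means the CPU virtualization extensions (VT-x/AMD-V) are exposed
egrep -c '(vmx|svm)' /proc/cpuinfo
# Look for "AMD-Vi" (or "DMAR" / "IOMMU enabled" on Intel) in the boot log
dmesg | grep -i -e iommu -e amd-vi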
Arch Linux is an independently developed, free and open-source, rolling release distribution. It is a minimalist, lightweight, and bleeding edge distribution that targets advanced users.
# Check Your Graphics Card
root@myserver:~# lspci | grep VGA
2b:00.0 VGA compatible controller: NVIDIA Corporation GM204 [GeForce GTX 970] (rev a1)
Visit the official Arch Linux download page and grab the latest ISO (via HTTP direct download or BitTorrent; the latter is recommended for faster downloads).
Upload the ISO to Proxmox. In the Proxmox web interface, select a storage in the left panel (Datacenter, Storage, e.g., local (myserver) or mypool (myserver)), go to ISO Images, and click Download from URL or Upload (select the Arch ISO file and upload it).
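Alternatively, you can fetch the ISO directly from the Proxmox shell; a sketch assuming the default local storage (whose ISO directory is /var/lib/vz/template/iso) and one of the official Arch mirrors:
# Download the latest ISO straight into local storage on the host
cd /var/lib/vz/template/iso
wget https://geo.mirror.pkgbuild.com/iso/latest/archlinux-x86_64.iso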
Create a new Virtual Machine. In the Proxmox web UI, select Datacenter, node or host (server name, e.g., myserver) in the left tree, then click Create VM on the top right corner. In the creation wizard, use these settings:
General. Give the virtual machine a name (e.g., ArchGNOME or gnomeArch). The VM ID is automatically generated and can be left as the default, or you can choose an unused ID.
OS. Select the type of operating system you plan to install (Linux, Windows, etc.). For the ISO image, select the Arch ISO you uploaded (e.g., Use CD/DVD disk image file. Storage: local; ISO image: archlinux-2025…).
System: Configure the BIOS type (OVMF (UEFI)). This sets up an EFI boot environment, which is recommended for Arch and modern OSes. Graphic card (Default), EFI Storage (mypool). You may also enable the QEMU Guest Agent here.
Disk: Storage, choose storage for the VM disk, e.g., local-lvm, mypool, etc., and allocate Disk size for the VM, e.g., 32 GB or more. Ensure SCSI Controller is VirtIO SCSI (Proxmox default).
CPU: Cores. Assign or set CPU cores to the VM, e.g., 2, 4 or more, and CPU Type to host (this makes the VM CPU mimic the host CPU, enabling all features for best performance).
Memory: Allocate the desired amount of RAM for the VM, e.g., 4096 or 8192 MB. GNOME is a full desktop environment, so 4 GB is the minimum; 8 GB or more is recommended for smooth performance and a better experience.
Network: configure the network interface (usually defaults are fine).
When installing Arch Linux on Proxmox, the Model VirtIO (paravirtualized) provides better performance and compatibility for virtual machines. Bridge: select your Proxmox bridge (usually vmbr0) so the VM gets network access, meaning it is bridged to your LAN.
Confirm: Review your settings and click Finish to confirm and create your VM. It will be created but not yet started.
Configure display and GPU for the VM. Now you should have a new VM listed. Select the VM in the Proxmox interface. In the VM’s Hardware panel, find the Display device. Set Graphic card to VirtIO-GPU (VirGL). This gives the VM a virtual GPU that supports 3D acceleration.
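The same display type can also be set from the host CLI with qm; a minimal sketch, assuming a hypothetical VM ID of 100 and Proxmox VE 7.2 or later (where the virtio-gl display type is available):
# Give the VM a VirtIO-GPU display with VirGL 3D acceleration
qm set 100 --vga virtio-gl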
For a PCIe Passthrough configuration: In the VM’s Hardware panel, click Add, PCI Device. In the dialog, select your GPU (it will show up as an available PCI device, identified by its vendor and device ID).
If you have an NVIDIA graphics card, you’ll typically see two functions (the GPU and its audio function); select All Functions to pass through both at once. In addition, you may want to set the Graphic card to None (Hardware, Display) to avoid conflicts.
Add other devices: If you want sound in the VM and you’re using VirGL (a virtual GPU), you can add an Audio Device, click on Add, Audio Device (e.g., ich9-intel-hda), Backend Driver, set to SPICE.
Ensure the QEMU Guest Agent is enabled in the Options tab of the virtual machine.
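Both the audio device and the guest agent option can also be configured with qm; a minimal sketch, again using the hypothetical VM ID 100:
# Add a SPICE-backed ich9-intel-hda audio device
qm set 100 --audio0 device=ich9-intel-hda,driver=spice
# Enable the Guest Agent option (the agent itself must be installed inside the guest)
qm set 100 --agent enabled=1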
Boot the VM from the ISO. Select the VM and click Start. Then, open the Console (noVNC or Spice) to watch it boot into a live environment.
You should see the Arch Linux boot loader menu. If the ISO refuses to boot because Secure Boot is enabled, disable it from the OVMF firmware menu: Device Manager, Secure Boot Configuration, uncheck Attempt Secure Boot, then press ESC and choose Reset.

Run the Arch Installer. The Arch ISO comes with archinstall (a curses-based menu system) out-of-the-box.
# Update Arch keyring and the installer
pacman -Sy archlinux-keyring archinstall
Archinstall setup steps. Run the Archinstall Guided Setup.
# Check the Internet connection
ping www.google.com
# Launch the installer
archinstall
Archinstall language: English. You can accept defaults or change as needed. Next, set your Locales (language and region for the installed system): Keyboard layout: es/us; Locale language: en_US; Locale encoding: UTF-8.
Choose a Mirror region for package downloads close to your location (e.g., Spain). Select “yes” when asked about optional repos and mark multilib to enable it.
Disk configuration: Use a best-effort default partition layout (QEMU hard disk, ext4 or btrfs). File system: ext4 is recommended for simplicity unless you specifically want Btrfs.
Bootloader: systemd-boot. Recommended for UEFI systems; it is simple and reliable.
Swap (yes). If you have limited RAM, swap is useful and recommended.
Pick a Hostname for your system (e.g., archlinux, archvm, etc.) and set a root password and a user account. Typically, you’d also want to mark this user as administrator (which adds it to the wheel group) so you can use sudo.
Profile (Desktop Environment): select Desktop, then gnome (choose Minimal only if you plan to install a desktop manually later).
Choose an Audio server. PipeWire is usually the best choice.
Kernels. Use the default linux kernel (the current stable kernel) unless you have very specific requirements.
Network Configuration. Select NetworkManager with DHCP for simplicity.
Next, specify any Additional Packages to install during the initial setup: git, hyprland, openssh, qemu-guest-agent, sudo, vim, etc.
Timezone. Pick your timezone, e.g., Europe/Madrid or America/New_York, so your clock is always correct.
Optional Repositories. It is a good idea to enable the multilib repo for 32-bit compatibility packages.
Would you like to chroot …? No. Then exit the installer menu and, back at the root prompt, type reboot.
Enable IOMMU and the ACS override in GRUB (required for GPU passthrough only; skip ahead if you are using VirGL).
# Edit GRUB on Proxmox host.
nvim /etc/default/grub
# We are passing options to the kernel at boot time.
GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on iommu=pt pcie_acs_override=downstream,multifunction nofb nomodeset video=vesafb:off,efifb:off"
update-grub # Update GRUB
reboot
amd_iommu=on: This option enables IOMMU (Input–Output Memory Management Unit), which acts as a memory manager for your PCI devices (e.g., GPU). Turning it on allows the host to isolate device memory accesses, a requirement for directly assigning a device to a VM.
iommu=pt: Here, pt stands for Pass-Through. This option instructs the kernel that, when a device is assigned to a VM, it should leave memory mappings untouched, allowing the VM to address the device directly.
pcie_acs_override=downstream,multifunction: PCIe ACS (Access Control Services) restricts which devices can communicate on the PCI bus. Using this override (downstream,multifunction) forces the kernel to separate devices into smaller groups, which is sometimes necessary on consumer boards that do not properly isolate devices. Use it only as a last resort; in most cases, a reasonably good motherboard will have the GPU in a separate group without this.
nofb nomodeset video=vesafb:off,efifb:off: These options disable the Linux framebuffer drivers (vesafb, efifb) that typically render the console on the GPU. This is done to prevent the host from claiming the GPU early in the boot process, allowing VFIO to bind it later.
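After rebooting, you can confirm that the IOMMU came up and inspect how devices are grouped. The loop below is the widely used listing from the Arch wiki; it only reads sysfs, so it is safe to run:
# Confirm the IOMMU initialized (look for "AMD-Vi" or "IOMMU enabled")
dmesg | grep -i -e iommu -e amd-vi
# List every IOMMU group and the devices it contains
for d in /sys/kernel/iommu_groups/*/devices/*; do
    n=${d#*/iommu_groups/*}; n=${n%%/*}
    printf 'IOMMU group %s: ' "$n"
    lspci -nns "${d##*/}"
done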
VFIO (Virtual Function I/O) is a framework in the Linux kernel that allows safe and efficient direct device access for virtual machines (VMs). It enables users to assign physical devices, such as GPUs and network interfaces, directly to a VM, bypassing the host operating system. Once bound, the host’s normal drivers (e.g. nvidia, snd_hda_intel) can’t use it.
Load VFIO kernel modules at boot. These ensure the VFIO drivers are loaded on boot.
nvim /etc/modules
vfio
vfio_iommu_type1
vfio_pci
vfio_virqfd
Bind the specific GPU and audio device to VFIO (vfio-pci) by ID. This tells the host to bind these devices to the VFIO driver instead of the NVIDIA or audio drivers:
echo "options vfio-pci ids=10de:13c2,10de:0fbb disable_vga=1" > /etc/modprobe.d/vfio.conf
ids=10de:13c2,10de:0fbb are my GTX 970’s PCI vendor:device IDs (GPU and its HD audio function). You can get the PCI IDs of your GPU and its HDMI audio function by running lspci -nn | grep -E "VGA|Audio". Identify the IDs in the output (the [XXXX:YYYY] numbers). disable_vga=1: Prevents the host from initializing the card as a console display device. It may be necessary if the GPU was used for the host’s output.
lspci -nn | grep -E "VGA|Audio"
[...]
2b:00.0 VGA compatible controller [0300]: NVIDIA Corporation GM204 [GeForce GTX 970] [10de:13c2] (rev a1)
2b:00.1 Audio device [0403]: NVIDIA Corporation GM204 High Definition Audio Controller [10de:0fbb] (rev a1)
Blacklist GPU drivers on the host, e.g., in /etc/modprobe.d/blacklist.conf:
blacklist nouveau
blacklist nvidia
blacklist nvidiafb
blacklist rivafb
This prevents the host from loading NVIDIA or conflicting drivers that could claim the GPU before VFIO does (NVIDIA cards use the proprietary nvidia driver; nouveau is the open-source NVIDIA driver; nvidiafb and rivafb are framebuffer drivers for NVIDIA/Riva cards).
Do not blacklist the driver for a GPU that your Proxmox host needs to use.
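One way to write these entries from the Proxmox shell; the filename is just a convention, as any .conf file under /etc/modprobe.d/ is read:
# Create the blacklist file in one step
cat > /etc/modprobe.d/blacklist.conf <<'EOF'
blacklist nouveau
blacklist nvidia
blacklist nvidiafb
blacklist rivafb
EOF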
After blacklisting GPU drivers on the host and configuring VFIO, update initramfs and reboot. initramfs (Initial RAM filesystem) is a small temporary root filesystem loaded into memory before the real root (/) is mounted. It must include the VFIO modules, so we rebuild it to include our new vfio-pci settings and to ensure the blacklists take effect early.
update-initramfs -u
reboot
Once the host reboots, verify that the GPU is now bound to VFIO (vfio-pci):
lspci -nnk | grep -i nvidia -A3
In the output, you should see:
Kernel driver in use: vfio-pci
...
This confirms that the GPU and its audio function are now managed by VFIO, not the NVIDIA or audio drivers.
If you are only using VirtIO-GPU with VirGL (the shared virtual 3D GPU) and not doing passthrough, you can skip the IOMMU, VFIO, and blacklisting steps above. To enable VirGL, install the required GL libraries on the Proxmox host:
apt update
apt install libgl1 libegl1
Assigning the GPU to the VM in the Proxmox GUI. Shut down the VM, go to Hardware, Add, PCI Device, and select the GPU (e.g., 2b:00.0) and check All Functions to include audio. Leave Display as Default (the VM will see the card as its own).
Start the VM. Proxmox will pass through the physical card into the VM’s PCI bus.
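For reference, this GUI step corresponds to a single qm command; a sketch assuming the hypothetical VM ID 100, the 2b:00 GPU address from this guide, and a q35 machine type (required for pcie=1):
# Pass the GPU and all its functions (audio included) through as a PCIe device;
# x-vga=1 marks it as the VM's primary GPU
qm set 100 --hostpci0 0000:2b:00,pcie=1,x-vga=1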
Install NVIDIA drivers inside the VM. Enable the multilib repository for 32-bit compatibility libraries, needed by some games/apps and by lib32-nvidia-utils (32-bit OpenGL libraries). Run sudo nvim /etc/pacman.conf, then find and uncomment these lines:
[multilib]
Include = /etc/pacman.d/mirrorlist
sudo pacman -Sy
# Install the NVIDIA driver and utilities
sudo pacman -S nvidia nvidia-utils lib32-nvidia-utils nvidia-settings
Enable DRM KMS (Direct Rendering Manager Kernel Mode Setting). It allows the kernel to manage display modes and expose the GPU to user processes securely. The NVIDIA driver’s nvidia-drm module must be loaded with modeset enabled so that Xorg or Wayland can fully utilize the card, which is necessary for proper 3D acceleration.
# Edit your boot loader entry (we are using systemd-boot in this example)
sudo nvim /boot/loader/entries/arch.conf
# Or the date-naming you have, e.g., 2025-05-03_02-56-42_linux.conf
# In the "options" line, append: nvidia-drm.modeset=1.
options root=PARTUUID=659ea546-808a-40bb-96df-431408cd069c rw rootfstype=ext4 nvidia-drm.modeset=1
Generate an Xorg configuration file: sudo nvidia-xconfig. Creates /etc/X11/xorg.conf with a Device section for the NVIDIA card. This ensures X (and thus GNOME-on-Xorg) uses the proprietary driver.
Reboot and confirm the driver is loaded
sudo reboot
nvidia-smi # It shows the card is recognized and active
[nmaximo7@archlinux ~]$ nvidia-smi
NVIDIA-SMI 570.144    Driver Version: 570.144    CUDA Version: 12.8
[...]
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
[...]
|   0  NVIDIA GeForce GTX 970         Off |
lsmod | grep nvidia # Checks the NVIDIA modules (nvidia, nvidia_modeset, nvidia_drm) are loaded.
nvidia_drm 139264 22
nvidia_uvm 3792896 0
nvidia_modeset 1830912 8 nvidia_drm
drm_ttm_helper 16384 1 nvidia_drm
video 81920 1 nvidia_modeset
nvidia 97144832 157 nvidia_uvm,nvidia_modeset
Configure GNOME for NVIDIA. Log out, select “GNOME on Xorg” at the login screen.
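To confirm which session type you are actually running after logging in, check the session environment variable from a terminal:
# Prints "x11" for GNOME on Xorg, "wayland" for the default Wayland session
echo $XDG_SESSION_TYPE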
If you chose the VirGL virtual GPU approach (virtio-gpu with 3D acceleration):
Check that the virtio_gpu kernel module is loaded: lsmod | grep virtio_gpu
Install the Mesa 3D stack: sudo pacman -S mesa (on Arch, the VirGL Vulkan driver is the vulkan-virtio package, if you need Vulkan).
Install the SPICE agent: sudo pacman -S spice-vdagent. Then, enable it: systemctl enable --now spice-vdagentd.service. It helps with clipboard sharing, dynamic display resolution adjustment, and client mouse mode.
Verify acceleration with glxinfo -B (from the mesa-utils package) and check the “Renderer” line. You should see something like “Virgl”, “virgl”, or “virglrenderer” as the renderer, not llvmpipe.