
Docker & Jellyfin

Set Up Docker on a Linux Container (LXC) in Proxmox

Log in to Proxmox’s GUI, select the storage location where you’d like to store the container template (e.g., local (myserver)), select CT Templates, then Templates. Search for Ubuntu (e.g., ubuntu 20.10), Download, and click on Create CT to create a new Linux Container (LXC).

In the General section, give the container a Hostname (e.g., dockercontainer) and a password, and disable Unprivileged container. In the Template tab, select the previously downloaded Ubuntu template. In the Disks tab, select the storage (e.g., mypool) and disk size (e.g., 24 GiB). Select the number of Cores (e.g., 2 or 4) and the Memory (16384 MB), change the Network to use DHCP instead of Static, and leave everything else as it is.

Select the LXC container (e.g., dockercontainer) we have just created, then select Options, Features, and enable Nesting. Nesting allows you to run another container, or even a full virtual machine, inside the LXC container; essentially, you are creating a container within a container.
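
The same container can also be created from the Proxmox host's shell with pct. This is only a sketch: the container ID (105), template file name, storage names, and network bridge are assumptions you should adapt to your own setup.

pct create 105 local:vztmpl/ubuntu-20.10-standard_20.10-1_amd64.tar.gz \
  --hostname dockercontainer \
  --password 'choose-a-root-password' \
  --unprivileged 0 \
  --cores 2 --memory 16384 \
  --rootfs mypool:24 \
  --net0 name=eth0,bridge=vmbr0,ip=dhcp \
  --features nesting=1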

Installing and Configuring Docker and Portainer on the LXC

# We follow the official Docker documentation (docs.docker.com): Install, Ubuntu.
apt update && apt upgrade -y
apt-get install -y \
  apt-transport-https \
  ca-certificates \
  curl \
  gnupg \
  lsb-release

# Add Docker's official GPG key:
install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg -o /etc/apt/keyrings/docker.asc
chmod a+r /etc/apt/keyrings/docker.asc

# Add the repository to Apt sources:
echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/etc/apt/keyrings/docker.asc] https://download.docker.com/linux/ubuntu \
  $(. /etc/os-release && echo "$VERSION_CODENAME") stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update

# Install the Docker packages
sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin
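
# Verify that the Docker Engine installation works (optional sanity check):
sudo docker run hello-world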

# Run a Docker container for Portainer, which is a web-based management interface for Docker:
sudo docker run --security-opt apparmor=unconfined -d -p 8000:8000 -p 9000:9000 -p 9443:9443 --name=portainer --restart=always -v /var/run/docker.sock:/var/run/docker.sock -v portainer_data:/data portainer/portainer-ce:latest

This will install Portainer, which will be accessible at the container’s IP address (e.g., 192.168.1.88) on port 9000. On first access, you are required to create an administrator username and password.
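
Portainer listens on the LXC’s address, so you first need the container’s IP. A quick way to get it from the Proxmox host is sketched below; <CTID> is a placeholder for the numeric ID of the dockercontainer LXC.

pct exec <CTID> -- hostname -I   # prints the container's IP address(es)
# Alternatively, run "ip a" inside the container, or check the container's Summary page in the Proxmox GUI.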

Install Jellyfin on Proxmox. Proxmox/KVM GPU Passthrough

  1. Create an LXC container for Jellyfin as we previously did.

  2. Install NVIDIA drivers on the Proxmox host (I have an NVIDIA GeForce GTX 970 on an MPG B550 GAMING PLUS (MS-7C56) motherboard). If you’re planning to use GPU passthrough to a virtual machine, installing the NVIDIA driver ensures that the GPU is properly recognized and utilized by the VM.

    apt install vainfo -y
    # vainfo is a command-line tool used to query and display information about the Video Acceleration API (VA-API).
    # VA-API provides hardware-accelerated video processing capabilities, enabling applications to offload certain video decoding and encoding tasks to the GPU.
    sudo apt-get install python3-launchpadlib
    sudo apt-get install -y software-properties-common
    wget https://us.download.nvidia.com/XFree86/Linux-x86_64/550.142/NVIDIA-Linux-x86_64-550.142.run # NVIDIA webpage
    sudo apt update
    
     apt install -y pve-headers-$(uname -r)
     apt install build-essential pkg-config xorg xorg-dev libglvnd0 libglvnd-dev
     apt install -y dkms pve-headers wget
     # Because the NVIDIA module is separate from the kernel, it must be rebuilt with Dynamic Kernel Module Support (DKMS) for each new kernel update.
    
     apt install pve-headers-6.5.13-5-pve
     cp /usr/src/linux-headers-6.5.13-5-pve/include/drm/drm_ioctl.h /usr/src/linux-headers-$(uname -r)/include/drm/drm_ioctl.h
    
     cd ~
     chmod +x NVIDIA-Linux-x86_64-550.142.run
     ./NVIDIA-Linux-x86_64-550.142.run --dkms
    

    The --dkms option ensures that the NVIDIA driver is compiled and installed with Dynamic Kernel Module Support (DKMS). This means the driver module will be rebuilt automatically whenever a new kernel is installed, ensuring compatibility and preventing potential issues.
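
    Once the installer finishes, a quick sanity check confirms the driver and its DKMS module are in place (version strings will differ on your system):

    nvidia-smi            # should list the GTX 970, the driver version, and the CUDA version
    dkms status           # should show an nvidia/550.142 module marked as installed
    lsmod | grep nvidia   # confirms the NVIDIA kernel modules are loaded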

  3. You will have to enable IOMMU support in your BIOS/UEFI. BIOS menus vary between manufacturers (ASUS, Gigabyte, MSI, etc.) and even between motherboard models. The corresponding setting is usually called IOMMU or VT-d, but you should check your motherboard’s manual for the exact option name. Enable AMD-Vi (IOMMU, I/O Memory Management Unit), e.g., OC, Overclocking, Advanced CPU Configuration, AMD CBS, IOMMU. Enable Above 4G Decoding: Settings, Advanced, PCIe/PCI Subsystem Settings, Above 4G memory/Crypto Currency mining.

    An Input-Output Memory Management Unit (IOMMU) is a specialized type of memory management unit (MMU) that connects a direct-memory-access (DMA)-capable I/O bus to the main memory. In virtualized environments, the IOMMU allows guest operating systems to use hardware that isn’t specifically designed for virtualization, such as high-performance graphics cards.

    Disabling the dedicated (discrete) GPU in the BIOS. The section containing graphics settings might be called Integrated Peripherals, Onboard Devices Configuration, Advanced Chipset Control, or something similar dealing with onboard devices or peripherals. The next setting might be called VGA Detection, Primary Display, Init Display First, Graphics Adapter Priority, or something similar that determines which graphics adapter the system uses first. Setting this to “Ignore” tells the system to bypass the dedicated graphics card and use the integrated graphics processor (iGPU) built into the CPU.

    It’s a very common practice to utilize a discrete GPU for virtual machines that require graphical processing power, such as those running a Media Center or Video Retro gaming center, while leaving the integrated graphics processor (iGPU) for the virtualization solution itself.

    Setting Boot Mode to UEFI: Boot (or similar), Boot Mode Select (or “CSM Support,” “Launch CSM,” etc.), UEFI: Select UEFI mode.

  4. Change boot parameters. Edit /etc/default/grub (vi /etc/default/grub) and change the following line to include: GRUB_CMDLINE_LINUX_DEFAULT="quiet iommu=pt amd_iommu=on video=efifb:off"

    quiet: This option suppresses most of the boot messages, making the boot process cleaner.

    iommu=pt: This enables the IOMMU (Input-Output Memory Management Unit) in passthrough mode, so DMA remapping is only applied to devices that are actually passed through to a guest, reducing overhead for the rest of the system.

    amd_iommu=on: This enables IOMMU support specifically for AMD CPUs.

    video=efifb:off: This disables the EFI framebuffer driver so the host does not keep the boot GPU’s framebuffer reserved, which avoids conflicts when the card is later handed over to a guest.

    Then, run update-grub to make sure the changes are taken into account after you reboot, then reboot.
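
    After the reboot, you can confirm that the IOMMU is active before going any further; the exact messages vary by hardware.

    dmesg | grep -e DMAR -e IOMMU
    # On AMD systems, look for lines such as "AMD-Vi: Interrupt remapping enabled".
    find /sys/kernel/iommu_groups/ -type l | head
    # Each symlink is a device assigned to an IOMMU group; the GPU and its audio function should appear here.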

  5. Blacklist the Nouveau driver. Blacklist the open-source nouveau kernel module to prevent it from interfering with the NVIDIA driver: echo "blacklist nouveau" >> /etc/modprobe.d/blacklist.conf
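
    After the initramfs is regenerated in the next step and the host is rebooted, you can verify that nouveau is really out of the picture:

    lsmod | grep nouveau   # no output means the nouveau module is not loaded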

  6. Load the following modules at boot to ensure that VFIO (Virtual Function I/O) is correctly set up and ready to handle GPU passthrough when your system starts. This is crucial for allowing your virtual machines to directly access and use the GPU, providing better performance and functionality.

    echo vfio >> /etc/modules
    echo vfio_iommu_type1 >> /etc/modules
    echo vfio_pci >> /etc/modules
    echo vfio_virqfd >> /etc/modules
    
    update-initramfs -u
    # It updates the initial ramdisk (initramfs) in your system. When you make changes to kernel modules or configurations (like adding VFIO modules), these changes need to be included in the initramfs.
    # The initramfs is used during the boot process to load necessary modules and drivers before the main filesystem is mounted.
    
        lspci -v # This command lists all PCI devices in the system. Find the NVIDIA card and write down its PCI ID.
        # I have found my NVIDIA card's PCI ID, which in my case is 2b:00.0 for the VGA controller and 2b:00.1 for the audio device.
        # These IDs are important for configuring GPU passthrough.
        [...]
        2b:00.0 VGA compatible controller: NVIDIA Corporation GM204 [GeForce GTX 970] (rev a1) (prog-if 00 [VGA controller])
        Subsystem: Gigabyte Technology Co., Ltd GM204 [GeForce GTX 970]
        Flags: bus master, fast devsel, latency 0, IRQ 80, IOMMU group 15
        Memory at fb000000 (32-bit, non-prefetchable) [size=16M]
        Memory at d0000000 (64-bit, prefetchable) [size=256M]
        Memory at e0000000 (64-bit, prefetchable) [size=32M]
        I/O ports at e000 [size=128]
        Expansion ROM at 000c0000 [virtual] [disabled] [size=128K]
        Capabilities: [60] Power Management version 3
        Capabilities: [68] MSI: Enable+ Count=1/1 Maskable- 64bit+
        Capabilities: [78] Express Legacy Endpoint, MSI 00
        Capabilities: [100] Virtual Channel
        Capabilities: [250] Latency Tolerance Reporting
        Capabilities: [258] L1 PM Substates
        Capabilities: [128] Power Budgeting 
        Capabilities: [600] Vendor Specific Information: ID=0001 Rev=1 Len=024 
        Capabilities: [900] Secondary PCI Express
        Kernel driver in use: nvidia
        Kernel modules: nvidiafb, nouveau, nvidia_drm, nvidia
    
        2b:00.1 Audio device: NVIDIA Corporation GM204 High Definition Audio Controller (rev a1)
        Subsystem: Gigabyte Technology Co., Ltd GM204 High Definition Audio Controller
        Flags: bus master, fast devsel, latency 0, IRQ 77, IOMMU group 15
        Memory at fc080000 (32-bit, non-prefetchable) [size=16K]
        Capabilities: [60] Power Management version 3
        Capabilities: [68] MSI: Enable- Count=1/1 Maskable- 64bit+
        Capabilities: [78] Express Endpoint, MSI 00
        Kernel driver in use: snd_hda_intel
        Kernel modules: snd_hda_intel
        [...]
        root@myserver:~# lspci -n -s 2b:00
        2b:00.0 0300: 10de:13c2 (rev a1)
        # This is the Vendor ID and Device ID. 10de is the Vendor ID for NVIDIA, and 13c2 identifies a specific GPU model.
        2b:00.1 0403: 10de:0fbb (rev a1)
    
        root@myserver:~# ls -lh /dev/dri
            drwxr-xr-x  2 root root      60 Jan 13 00:22 by-path
            crw-rw----+ 1 root video 226, 0 Jan 13 00:22 card0
            # The system has a Direct Rendering Manager (DRM) device (/dev/dri/card0), which is typically associated with a GPU.
    
    lspci: This command lists all PCI devices in the system.
    
    -n: This option tells lspci to display numeric IDs instead of resolving them to human-readable names. This is useful for getting the exact hardware IDs without additional information.
    
    -s 2b:00: This specifies the location of the PCI device you want to query, in this case, bus 2b, device 00.
    
  7. Host Device Passthrough. Configure device passthrough for a virtual machine (VM) using the VFIO (Virtual Function I/O) framework in Linux. This allows you to assign a physical device, such as a GPU, directly to a VM, enabling better performance and direct access to the hardware. Pass the device IDs to the vfio-pci module by adding options vfio-pci ids=10de:13c2,10de:0fbb to a .conf file in /etc/modprobe.d/, where 10de:13c2 and 10de:0fbb are the vendor and device IDs obtained previously.

    echo "options vfio-pci ids=YOUR-VENDOR-ID,YOUR-DEVICE-ID disable_vga=1" > /etc/modprobe.d/vfio.conf
    
    # In my case,
    echo "options vfio-pci ids=10de:13c2,10de:0fbb" > /etc/modprobe.d/vfio.conf
    

    vfio-pci specifies the VFIO PCI driver for device passthrough.

    ids=10de:13c2,10de:0fbb specifies the vendor and device IDs of the hardware you want to pass through. These IDs were obtained using the lspci command.

    disable_vga=1 disables the VGA functionality of the device, which can help in scenarios where you want to ensure that the GPU is fully dedicated to the VM and not used by the host.
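
    After creating the .conf file, regenerate the initramfs (as in the previous step) and reboot; then you can check which kernel driver has claimed the card. 2b:00 is the PCI address found earlier with lspci; adapt it to yours.

    update-initramfs -u && reboot
    # After the reboot:
    lspci -nnk -s 2b:00
    # For passthrough to a VM, "Kernel driver in use: vfio-pci" is what you want to see here.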

  8. Ensure the user inside the container has access to the GPU device. You may need to add the user to the video or render group: usermod -aG video root, usermod -aG render root.
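
    A quick way to confirm the change took effect (run inside the container; a re-login may be needed before id reflects the new groups):

    id root                      # video and render should now appear in the group list
    getent group video render    # shows the GIDs of both groups
    ls -l /dev/dri               # the group owner of card0/renderD128 must be one the user belongs to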

  9. Use the GPU in LXC containers. Next, we have to modify our LXC container (Jellyfin) to pass through the GPU from the host to the container. Run the command below, substituting your container ID (e.g., 106), to edit the container’s configuration file: nano /etc/pve/lxc/ID.conf (e.g., vi /etc/pve/lxc/106.conf)

    lxc.cgroup2.devices.allow: c 226:0 rwm
    lxc.cgroup2.devices.allow: c 226:128 rwm
    lxc.mount.entry: /dev/dri/card0 dev/dri/card0 none bind,optional,create=file
    # This is necessary if your application (e.g., Jellyfin) requires access to the GPU's display interface (e.g., for rendering graphics or running a graphical desktop environment).
    
    lxc.mount.entry: /dev/dri/renderD128 dev/dri/renderD128 none bind,optional,create=file
    # Applications that only need the GPU's rendering capabilities (e.g., for hardware-accelerated video encoding/decoding).
    lxc.hook.pre-start: sh -c "chown 0:108 /dev/dri/renderD128"
    # If the ownership or permissions of /dev/dri/renderD128 on the host change frequently (e.g., due to system updates or GPU driver changes), this line ensures the correct ownership is set every time the container starts.
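
    Two of the numbers in this configuration are worth double-checking on your own system: 226:0 and 226:128 are the major:minor device numbers of card0 and renderD128, and 108 is assumed to be the GID of the render group inside the container (the target of the chown hook). Both can be verified from the Proxmox host:

    ls -l /dev/dri                        # the "226, 0" and "226, 128" values are the major:minor numbers used above
    pct exec 106 -- getent group render   # prints something like "render:x:108:", the GID used in the chown hook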
    
  10. Verify GPU access.

        root@myjellyfin:~# ls -lh /dev/dri
        total 0
        crw-rw----+ 1 root video 226, 0 Jan 12 23:22 card0
        ----------  1 root root       0 Jan 13 00:08 renderD128
        #  You should see card0 and renderD128 inside the container.
        root@myjellyfin:~# chmod 777 /dev/dri/renderD128
        # A quick fix to avoid permission problems, it is not recommended for security reasons.
        apt update && apt upgrade -y # Update the system
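
    With the device nodes visible inside the container, you can optionally check whether the render node exposes VA-API acceleration using the vainfo tool mentioned earlier (recent versions of vainfo accept an explicit device; with the proprietary NVIDIA driver most acceleration goes through NVENC/NVDEC instead, so an empty profile list here is not necessarily a failure):

        apt install -y vainfo
        vainfo --display drm --device /dev/dri/renderD128
        # If a VA-API driver is available for the GPU, this lists the supported encode/decode profiles.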
    

This article is still a work in progress…

Bibliography

  1. Installing Jellyfin on Proxmox, https://www.wundertech.net/installing-jellyfin-on-proxmox/

  2. Building a Mini PC Home Server: Proxmox, Docker, Jellyfin + Hardware Acceleration, WunderTech.
