"Do not worry about your difficulties in mathematics. I can assure you mine are still greater." - Albert Einstein

Ollama is a lightweight, privacy-focused platform that lets you run large language models (LLMs) locally on your own machine, with no cloud dependency or costly monthly subscription required. It's designed to make working with models like Llama 3, DeepSeek, Gemma, and others as simple as running a command in your terminal.
Docker Desktop is an application that provides a seamless, integrated environment for building, sharing, and running containerized applications on Windows. Combined with Ollama and Open WebUI, it gives you a complete local AI stack.
You may also want to read our article Complete Windows AI Dev Setup: WSL 2, Docker Desktop, Python & Ollama, a step-by-step guide to build a modern AI development workstation on Windows.
Install Docker Desktop. Download the latest Windows installer from docker.com, run the installer, follow the installation wizard prompts, and reboot if prompted. Finally, open Docker Desktop and sign in to your Docker Hub account.
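Once installed, you can confirm Docker works from a terminal; a quick sanity check, assuming Docker Desktop is running:

```shell
# Print the installed Docker version
docker --version

# Run a throwaway test container to confirm the engine is working
docker run --rm hello-world
```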
Install Ollama. Visit https://ollama.com and download the Windows installer. Run the .exe, follow the installation wizard prompts, then reboot if required. Verify with: ollama --version.
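After installation you can try a model straight from the terminal; a minimal sketch (llama3 here is just an example name, any Ollama-supported model works):

```shell
# Download a model (if needed) and start an interactive chat session
ollama run llama3

# List the models currently installed on this machine
ollama list
```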
Open WebUI is a popular open-source, user-friendly web interface for running local AI models. It sits on top of Ollama. This guide will walk you through installation, configuration, and getting your first AI model running locally.
Open WebUI provides a ChatGPT-like interface for interacting with local AI models through Ollama. It offers:
Privacy-focused: All conversations and data stay on your machine (what happens in Vegas stays in Vegas!).
Model flexibility: Support for various Ollama-supported models.
User-friendly: An intuitive, ChatGPT-style web interface familiar from popular commercial AI chatbots.
Customizable themes & multi-user support, e.g., Settings, General, Theme: Dark.
You need: Docker Desktop installed and running, plus hardware to match your setup: a 4+ core CPU and 8-16 GB RAM for CPU-only use, an NVIDIA GPU with CUDA and 16 GB+ RAM for GPU-accelerated use, and 20+ GB of free disk space for models and containers.
Step 1: Pull the Docker Image based on your hardware. For NVIDIA GPU with CUDA support: docker pull ghcr.io/open-webui/open-webui:cuda. For CPU-only setups: docker pull ghcr.io/open-webui/open-webui:main.
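The pull step above as terminal commands (choose the tag that matches your hardware):

```shell
# For NVIDIA GPU with CUDA support
docker pull ghcr.io/open-webui/open-webui:cuda

# For CPU-only setups
docker pull ghcr.io/open-webui/open-webui:main
```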
Step 2: Run the Container. With NVIDIA GPU support: docker run -d -p 3000:8080 --gpus all -v open-webui:/app/backend/data --name open-webui ghcr.io/open-webui/open-webui:cuda. For a CPU-only setup: docker run -d -p 3000:8080 -v open-webui:/app/backend/data --name open-webui ghcr.io/open-webui/open-webui:main.
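The same run command, split across lines for readability (GPU variant shown; for CPU-only, drop --gpus all and use the :main tag):

```shell
# -d: run detached; -p: expose the UI on http://localhost:3000
# --gpus all: pass the NVIDIA GPU through to the container
# -v: persist chats and settings in the open-webui named volume
docker run -d -p 3000:8080 --gpus all \
  -v open-webui:/app/backend/data \
  --name open-webui \
  ghcr.io/open-webui/open-webui:cuda
```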
Step 3: Verify Installation. Open Docker Desktop, navigate to Containers, and confirm open-webui shows as Running. Click on the port link or visit http://localhost:3000.
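You can also verify from the terminal; a quick check, assuming the container was started with the name open-webui as above:

```shell
# Confirm the container is up and the port mapping is in place
docker ps --filter name=open-webui

# Tail the container logs if the UI does not load
docker logs -f open-webui
```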
Step 4: Initial Setup. On first visit, you'll be prompted to create an admin account (name, email, and password); you can then access the interface.

Navigate to Models: In the WebUI, click on “Models” in the sidebar.
Add New Model: Click “Add Model” or the “+” button.
Enter Model Identifier: Type the model name, e.g., deepseek-r1:14b, codellama:7b, etc.
Pull Model: Select “Pull [model-name] from Ollama.com”.
Wait for Download: Monitor the progress bar until the download completes; depending on model size and connection speed, this may take several minutes or more.
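Equivalently, you can pull models with the Ollama CLI and they will show up in Open WebUI (deepseek-r1:14b is the example identifier from the steps above):

```shell
# Pull a model from the Ollama registry
ollama pull deepseek-r1:14b

# Confirm it is now available locally
ollama list
```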

By default, Ollama stores models in %USERPROFILE%\.ollama\models. To move them:
Type "environment variables" in the Windows search bar and select "Edit the system environment variables". In the System Properties window, click the Environment Variables… button. You can set variables for your specific user ("User variables") or for all users ("System variables"). Add a variable named OLLAMA_MODELS pointing at the new location, then restart Ollama so it takes effect.
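The same change can be made from a Command Prompt with setx; a sketch, where D:\ollama\models is a hypothetical target path to adjust to your setup:

```shell
REM Set the model directory for the current user (takes effect in new terminals)
setx OLLAMA_MODELS "D:\ollama\models"

REM Restart Ollama afterwards so it picks up the new location
```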
Portainer is a GUI for Docker management: view containers, images, volumes, networks, and more.
In Docker Desktop, open the Extensions tab, search for Portainer, and click Install. Follow the prompts to complete the installation. With this stack (Ollama, Docker Desktop, and Open WebUI) you have a fully local, private, and powerful AI playground on Windows.
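If you prefer plain Docker over the Docker Desktop extension, Portainer CE can also run as a container; a sketch based on Portainer's standard run command (9443 is its default HTTPS UI port):

```shell
# Create a volume for Portainer's own data
docker volume create portainer_data

# Run Portainer CE, exposing the web UI on https://localhost:9443
docker run -d -p 9443:9443 --name portainer \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v portainer_data:/data \
  portainer/portainer-ce:latest
```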