
Run Ollama Locally on Windows. From Minimal Chatbot to Full-Featured AI CLI

"Without pain, without sacrifice, we would have nothing." Chuck Palahniuk, Fight Club


Ollama is a lightweight, privacy-focused platform that lets you run large language models (LLMs) locally on your own machine, with no cloud dependency or costly monthly subscriptions required. It's designed to make working with models like Llama 3, DeepSeek, Gemma, and others as simple as running a command in your terminal.
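Before writing any Python, you can sanity-check your installation by pulling and talking to a model straight from the terminal (deepseek-r1:8b is the model used in the scripts below):

ollama pull deepseek-r1:8b
ollama run deepseek-r1:8b "Say hello in one short sentence."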

This is the second article in our three-part series, Complete Windows AI Dev Setup: WSL 2, Docker Desktop, Python & Ollama. It picks up where the first one left off, so if you haven't read that yet, I recommend starting there and coming back.

AI Chatbot

This is a very basic Python script that uses the ollama library for chatting with a model.

# The script starts by importing the ollama library.
import ollama

# Set the model to deepseek-r1:8b; make sure it has already been pulled (ollama pull deepseek-r1:8b).
model_name = 'deepseek-r1:8b'

# Initialize conversation with a system prompt (optional) and a user message.
# A list called messages is initialized and used for this purpose.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
]

#  The first response from the bot is fetched ...
response = ollama.chat(model=model_name, messages=messages)
# ... and printed.
print("Bot:", response.message.content)

# Conversation Loop: The script enters a loop allowing the user to continue the conversation
while True:
    user_input = input("You: ") # It waits for user input.
    if not user_input: # If the input is empty, the loop exits.
        break
    messages.append({"role": "user", "content": user_input})
    # It appends the user input to the messages list.

    # The bot's response is fetched ...
    response = ollama.chat(model=model_name, messages=messages)
    answer = response.message.content
    # ... and printed,
    print("Bot:", answer)
    #  and the assistant's message is appended to the messages list for context.
    messages.append({"role": "assistant", "content": answer})

Ollama Interactive Chat Interface

This Python script creates a feature-rich command-line chat interface for interacting with Ollama language models. It goes beyond basic chat functionality by integrating web crawling, web search, and system command execution, making it a comprehensive AI assistant tool.

Core Chat Functionality

"""
getinfo.py. Interactive Chat with Ollama Models

This script provides a conversational interface with Ollama-supported language models.
Features include:
- Color-coded input/output
- Command history management
- Graceful error handling
- Interactive help system
"""

# Importing necessary libraries
import ollama
import argparse
import sys, os
from dotenv import load_dotenv # For loading environment variables from .env files
from wsl import run_wsl_command, run_help_command # Importing WSL command functions
import asyncio # Importing asyncio for asynchronous programming
from webcrawl import main as webcrawl_main  # Importing the web crawling function from webcrawl module
from colorama import Fore, Style
from util import display_message, display_text_color, display_chunck # Importing display and color utility functions
import mymessages # System and user prompts; check our article From Minimal Chatbot to Full-Featured AI CLI 2
import traceback
from queryweb import my_duckduckgo_search # Importing the web search function from queryweb module

def display_welcome():
    """Display welcome message and help information"""
    display_message("Ollama Chat Interface v1.0")  # Display the welcome message with colors
    display_text_color("Commands", Fore.CYAN)  # Display the commands header in cyan
    display_text_color("Enter text to chat with the model", Fore.GREEN)  # Display the main command in green
    display_text_color("/clear - Clear conversation history", Fore.GREEN)  # Display clear command
    display_text_color("/? - Show this help message", Fore.GREEN)  # Display help command
    display_text_color("/exit - Exit the program", Fore.GREEN)  # Display exit command
    display_text_color("/man  - Show Linux manual page for a command", Fore.GREEN)
    display_text_color("/tldr  - Show simplified help for a command", Fore.GREEN)  # Display tldr command
    display_text_color("/curl  - Fetch information from cht.sh", Fore.GREEN)  # Display curl command
    display_text_color("/help  - Run Windows help command", Fore.GREEN)  # Display help command
    display_text_color("Press Ctrl+C to exit at any time", Fore.YELLOW)  # Display exit tip

def process_command(user_input, messages):
    """
    This function processes user commands that start with a slash (/) and performs specific actions based on those commands.

    Args:
        user_input (str): The command entered by the user
        messages (list): A list that maintains the conversation history

    Returns:
        tuple: (bool, list) where the bool is False only when the user asks to exit, and the list is the (possibly updated) conversation history
    """
    user_input = user_input.strip() # Remove any leading or trailing whitespace from user input

    # Handle the "/exit" command
    if user_input == "/exit":
        display_text_color("Exiting the program...", Fore.YELLOW) # Inform the user about exiting
        return False, messages # Return False to indicate the program should stop

    # Handle the "/clear" command
    elif user_input == "/clear":
        display_text_color("Clearing conversation history...", Fore.YELLOW) # Notify user about clearing history
        messages = messages[:1] # Keep only the system prompt in the messages list
        display_text_color("Conversation cleared. You can start fresh!", Fore.GREEN) # Confirm clearing
        return True, messages # Return True to continue the program

    # Handle the "/?" command to display help
    elif user_input == "/?":
        display_welcome()  # Call the function to display welcome/help information
        return True, messages # Keep the program running after showing help

    # Handle the "/man" command for manual pages
    elif user_input.startswith("/man"):
        run_wsl_command("man", user_input[4:].strip())
        # Execute the manual command with the specified argument
        return True, messages # Command handled; keep the program running

    # Handle the "/tldr" command for simplified man pages
    elif user_input.startswith("/tldr"):
        run_wsl_command("tldr", user_input[5:].strip())
        # Execute the tldr command with the specified argument
        return True, messages # Command handled; keep the program running

    # Handle the "/curl" command for web requests
    elif user_input.startswith("/curl"):
        search_phrase = user_input[6:].strip() # Extract the search phrase from the command
        if search_phrase:
            run_wsl_command("curl", f"https://cht.sh/{search_phrase}")
            # Execute curl command with the search phrase
        else:
            display_text_color("Error: No search phrase provided for /curl command.", Fore.RED) # Error message if no phrase is given

        return True, messages # Command handled; keep the program running

    # Handle the "/help" command for additional help
    elif user_input.startswith("/help"):
        run_help_command(user_input[6:].strip()) # Call the help command with the specified argument
        return True, messages # Command handled; keep the program running

    # Handle unknown commands that start with "/"
    elif user_input.startswith("/"):
        display_text_color("Error: Unknown command. Type /? for help.", Fore.RED) # Error message for unknown commands
        return True, messages # Return True to continue the program

    return True, messages # Default case: continue the program

def chat_with_model(model_name, messages):
    """
    This function interacts with a chat model, retrieves responses, and updates the conversation history.

    Args:
        model_name (str): A string representing the name of the model to be used for generating responses.
        messages (list): A list containing the conversation history, including messages from both the user and the assistant.

    Returns:
        None: on error, a message is displayed to the user and the function returns early; on success it falls through without returning a value.
    """
    try: # It uses a try-except structure to handle potential errors during model communication.
        # Get response from the specified model and conversation history, enabling streaming to receive the response in chunks.
        response = ollama.chat(model=model_name, messages=messages, stream=True)

        complete_response = '' # Initialize a variable to store the full response as it is received.

        # Iterate through each chunk of the streamed response
        for chunk in response:
            display_chunck(chunk['message']['content']) # Display each chunk as it arrives using display_chunck
            complete_response += chunk['message']['content'] # Append the chunk to the complete response

        # Once the stream finishes, append the full reply to the messages list with the role of "assistant".
        # Appending inside the loop would store a partial copy of the reply after every chunk.
        messages.append({"role": "assistant", "content": complete_response})
    except Exception as e:
        # Handle any exceptions that occur during communication with the model
        display_text_color(f"An error occurred while communicating with the model: {str(e)}", Fore.RED)
        return # Exit the function if an error occurs

def get_asmuchinfo_as_possible(model_name="deepseek-r1:8b", user_input="", messages=None, crawl=False):
    """
    Fetch as much information as possible based on the user input, optionally crawling the web for additional context.

    Args:
        model_name (str): A string representing the model to be used (default is "deepseek-r1:8b").
        user_input (str): The search phrase provided by the user
        messages (list): Conversation history (default is None)
        crawl (bool): a boolean flag to enable or disable web crawling (default is False).

    Returns:
        None: The function does not return a value; it performs actions based on the input.
    """
    # Get response from the model with streaming enabled
    chat_with_model(model_name, messages)

    # Check if the user provided a search phrase
    if not user_input.strip():
        display_text_color("Error: No search phrase provided for web crawling.", Fore.RED) # Display error in red
        return

    # Generate and run a DuckDuckGo search based on the user input
    my_duckduckgo_search(user_input, "qwen3:8b")

    # If crawling is disabled, exit the function early
    if not crawl:
        return # If crawling is disabled, exit the function

    try:
        # Run the web crawling process asynchronously with the provided search phrase using asyncio.run.
        asyncio.run(webcrawl_main(user_input))
    except KeyboardInterrupt:
        # If the user interrupts the process (e.g., by pressing Ctrl+C), the program exits gracefully.
        sys.exit() # Exit the program without an error
    except Exception as e:
        # Handle any exceptions that occur during the web crawling process
        display_text_color(f"An error occurred while fetching information: {str(e)}", Fore.RED) # Display error in red

def main():
    """Main function to run the chat interface"""
    # Set up argument parser for command-line arguments
    parser = argparse.ArgumentParser(description="Interactive chat with Ollama models")
    parser.add_argument("--model", help="Specify model name") # Argument to specify the model name
    args = parser.parse_args() # Parse command-line arguments

    try:
        load_dotenv() # Load environment variables from .env file
        model_name = os.getenv("MODEL", "deepseek-r1:8b") # Get model name from environment variable or default
        crawl = os.getenv("CRAWL", "True").strip().lower() in ("1", "true", "yes") # Env values are strings, so convert the crawl flag to a real boolean
        if args.model:
            model_name = args.model # Override model name with command-line argument if provided

        display_text_color(f"Using model: {model_name}", Fore.YELLOW) # Display the model being used

        # Initialize conversation with system and user prompts
        messages = [
            mymessages.assistant_msg, # System prompt from messages module
            mymessages.myuser_msg, # User prompt from messages module
        ]

        display_welcome() # Display welcome message

        # Main chat loop
        while True:
            try:
                user_input = input(f"{Fore.BLUE}You: {Style.RESET_ALL}").strip() # Get user input

                # Route slash commands to the command processor
                if user_input.startswith("/"):
                    continue_flag, messages = process_command(user_input, messages)
                    if not continue_flag:
                        break # Exit the loop if the user asked to quit with /exit
                    continue # The command has been handled; prompt for the next input

                if not user_input:
                    continue # Ignore empty input

                # Add valid user input to conversation history
                messages.append({"role": "user", "content": user_input})
                # Fetch information based on user input
                get_asmuchinfo_as_possible(model_name, user_input, messages, crawl)

            except (EOFError, KeyboardInterrupt):
                # Exit gracefully on Ctrl+C or end-of-input
                display_text_color("Exiting the program...", Fore.YELLOW) # Display exit message
                break # Exit loop on EOF or keyboard interrupt

    except Exception as e:
        # Handle any unexpected errors and display error message
        display_text_color("An unexpected error occurred. Please check your setup.", Fore.RED)
        traceback.print_exc() # Print detailed error traceback for debugging
        sys.exit(1) # Exit with error status

# Entry point of the script
if __name__ == "__main__":
    main() # Run the main function
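The script depends on several helper modules built throughout this series (wsl, webcrawl, util, mymessages, queryweb). If you only want to exercise the chat loop, the following is a minimal sketch of the three util helpers it imports; the names come from the import statements above, but the actual implementations in the series may differ:

# util.py. Minimal sketch of the display helpers, inferred from the call sites above
from colorama import Fore, Style, init

init()  # Initialize colorama so ANSI colors render correctly on Windows

def display_message(text):
    """Display a prominent banner-style message."""
    print(f"{Fore.MAGENTA}{Style.BRIGHT}{text}{Style.RESET_ALL}")

def display_text_color(text, color):
    """Print a line of text in the given colorama color."""
    print(f"{color}{text}{Style.RESET_ALL}")

def display_chunck(text):
    """Print a streamed chunk without a trailing newline so the reply flows as one answer."""
    print(text, end="", flush=True)

Configuration is read from a .env file by python-dotenv. Based on the os.getenv calls in main(), a matching .env would look like this:

MODEL=deepseek-r1:8b
CRAWL=True

Finally, run the script, optionally overriding the model from the command line:

python getinfo.py --model qwen3:8b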