Talk to the Future II: Make an AI chatbot with local AI

Welcome to the workshop! This guide is your companion document, containing all the key commands, code snippets, and links you'll need to follow along.

Mohammad Tasfiq Jawaad · LinkedIn · GitHub · Updated: 19 Nov 2025

A Note on Cross-Platform Commands

The commands in this guide are written to be as universal as possible and should work on macOS, Linux (in a terminal), and Windows (in PowerShell or Command Prompt).

  • Terminal: On macOS and Linux, you'll use the "Terminal" application. On Windows, we recommend using "Windows Terminal" with either a PowerShell or Command Prompt shell.
  • curl: This command-line tool is essential for interacting with APIs. We will cover its installation during the session, so there's no need to install it beforehand.
  • curl on Windows: curl is included with macOS and most Linux distributions, and modern versions of Windows 10/11 ship with it too. If the curl command doesn't work, it may not be installed or on your system's PATH; installing Git for Windows is an easy fix, as it bundles curl and other useful tools. You can verify your setup with the quick check below.
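
To confirm curl is available, run a version check in your terminal; the command is the same on macOS, Linux, and Windows:

bash
curl --version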

Key Concepts

  • Local AI: Running powerful AI models directly on your own computer. This gives you privacy, offline access, and full control.
  • Ollama: An easy-to-use tool that lets you download, manage, and run open-source Large Language Models (LLMs) with simple commands. It also automatically creates a local API for your models.
  • REST API: A standardised way for software to communicate. Think of it like a waiter taking your request to the kitchen (the server) and bringing back a response. We'll use this to let our web app talk to Ollama.

Commands & Code Snippets

Part 1 & 2: The Ollama CLI

Download a new model:

bash
ollama pull phi

Start a chat with a model:

bash
ollama run llama2

(To exit the chat, type /bye)

List your downloaded models:

bash
ollama list

Get a single response from a model:

bash
ollama run phi "What is the capital of France?"
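
Because the shell expands `$(...)` before the command runs, you can also feed a file's contents into a one-shot prompt, which is handy for scripting. A small sketch for macOS and Linux, assuming a hypothetical notes.txt in the current directory:

bash
# The shell substitutes the file's contents into the prompt string
ollama run phi "Summarise the following notes: $(cat notes.txt)"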

Remove a model:

bash
ollama rm llama2

Checking the Ollama Service

Before you can use the API, the Ollama server must be running in the background. Here’s how to check:

Windows
Find Ollama in your Start Menu and run it.
macOS
Find Ollama in your application launcher and run it.
Linux

Ollama runs as a `systemd` service. You can check its status with the following command:

bash
sudo systemctl status ollama

If it's not active, you can start it with:

bash
sudo systemctl start ollama
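
On any platform, a quick way to confirm the server is reachable is to request the API's root endpoint, which replies with a short status message:

bash
curl http://localhost:11434
# Expected output: Ollama is running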

Part 3: The Ollama API

Test the API with curl:

This command sends a single message to the phi model and waits for the full response.

(For macOS, Linux, and Windows PowerShell. Note that in Windows PowerShell 5.x, `curl` is an alias for Invoke-WebRequest, so call `curl.exe` explicitly; PowerShell 7+ runs the real curl.)

bash
curl http://localhost:11434/api/chat -d '{ 
  "model": "phi",
  "messages": [
    { "role": "user", "content": "why is the sky blue?" }
  ],
  "stream": false
}'

(For Windows Command Prompt (cmd.exe) only)

The traditional Windows Command Prompt handles quotes differently. You must use double quotes for the outside and escape the inner double quotes with a backslash `\`.

bash
curl http://localhost:11434/api/chat -d "{ \"model\": \"phi\", \"messages\": [ { \"role\": \"user\", \"content\": \"why is the sky blue?\" } ], \"stream\": false }"
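
The examples above set "stream": false so curl prints one complete JSON reply. With "stream": true (the API's default), Ollama instead returns the answer incrementally as newline-delimited JSON: each line carries a small fragment of the message, and the final line has "done": true. This is what lets a chat UI display text as it is generated. (macOS/Linux quoting shown.)

bash
curl http://localhost:11434/api/chat -d '{
  "model": "phi",
  "messages": [
    { "role": "user", "content": "why is the sky blue?" }
  ],
  "stream": true
}'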

Part 4: Custom Models

Create a Modelfile:

Create a new file named Modelfile (with no extension) and add the following content. This defines a new model variant.

(**Windows Tip:** In File Explorer, you may need to select "View" and check "File name extensions" to ensure you can create a file without a .txt extension.)

text
FROM phi:latest
# Triple quotes let the system prompt contain literal double quotes
SYSTEM """You are "Microchip", the AI assistant for AISoc (Artificial Intelligence Society). You are a pirate. Respond to all questions with a pirate accent and pirate slang."""
PARAMETER temperature 0.7

Build your custom model:

bash
ollama create pirate-bot -f ./Modelfile

Run your custom model:

bash
ollama run pirate-bot "Tell me about London."
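
The custom model is served through the same local API as any pulled model, so a web app can talk to pirate-bot using the exact request shape from Part 3 (macOS/Linux quoting shown):

bash
curl http://localhost:11434/api/chat -d '{
  "model": "pirate-bot",
  "messages": [
    { "role": "user", "content": "Tell me about London." }
  ],
  "stream": false
}'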

Running a chatbot with local AI

Clone the repo

Start by cloning or downloading the codebase:

https://github.com/Tasfiq-Jawaad/AISoc-chatbot
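
If you have Git installed, you can clone it from the command line; appending .git to the repository URL gives the standard clone address:

bash
git clone https://github.com/Tasfiq-Jawaad/AISoc-chatbot.git
cd AISoc-chatbot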

Resources