Talk to the Future II: Make an AI chatbot with local AI
Welcome to the workshop! This guide is your companion document, containing all the key commands, code snippets, and links you'll need to follow along.
A Note on Cross-Platform Commands
The commands in this guide are written to be as universal as possible and should work on macOS, Linux (in a terminal), and Windows (in PowerShell or Command Prompt).
- Terminal: On macOS and Linux, you'll use the "Terminal" application. On Windows, we recommend using "Windows Terminal" with either a PowerShell or Command Prompt shell.
- curl: This command-line tool is essential for interacting with APIs. We will cover its installation during the session, so there's no need to install it beforehand. `curl` is included with macOS and most Linux distributions, and modern versions of Windows 10/11 also include it. If the `curl` command doesn't work, it may not be installed or not on your system's PATH; on Windows, you can easily get it by installing Git for Windows, which includes `curl` and other useful tools.
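If you'd like to confirm ahead of time that `curl` is available, you can run a quick version check in your terminal:

```
curl --version
```

If this prints version details instead of a "command not found" error, you're set.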
Key Concepts
- Local AI: Running powerful AI models directly on your own computer. This gives you privacy, offline access, and full control.
- Ollama: An easy-to-use tool that lets you download, manage, and run open-source Large Language Models (LLMs) with simple commands. It also automatically creates a local API for your models.
- REST API: A standardised way for software to communicate. Think of it like a waiter taking your request to the kitchen (the server) and bringing back a response. We'll use this to let our web app talk to Ollama.
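As a concrete taste of that request/response pattern (you'll use it for real in Part 3): once Ollama is running, it serves a documented REST endpoint that lists your downloaded models, the same information `ollama list` shows:

```
curl http://localhost:11434/api/tags
```

Your program (the diner) sends a request, the Ollama server (the kitchen) does the work, and a structured JSON response comes back.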
Commands & Code Snippets
Part 1 & 2: The Ollama CLI
Download a new model:

```
ollama pull phi
```

Start a chat with a model:

```
ollama run llama2
```

(To exit the chat, type /bye)

List your downloaded models (sample output below):

```
ollama list
```

Get a single response from a model:

```
ollama run phi "What is the capital of France?"
```

Remove a model:

```
ollama rm llama2
```
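For reference, `ollama list` prints a small table of your local models. The entries below are illustrative placeholders; your names, IDs, sizes, and dates will differ:

```
NAME            ID              SIZE      MODIFIED
phi:latest      e2fd6321a5fe    1.6 GB    2 minutes ago
llama2:latest   78e26419b446    3.8 GB    5 days ago
```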
Checking the Ollama Service

Before you can use the API, the Ollama server must be running in the background. Here's how to check:
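A quick cross-platform check is to ask the server directly. In current Ollama builds, the root endpoint replies with a short status message:

```
curl http://localhost:11434/
```

If you see "Ollama is running", you're good; a connection-refused error means the server isn't up yet.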
On Linux, Ollama typically runs as a `systemd` service. You can check its status with the following command:

```
sudo systemctl status ollama
```

If it's not active, you can start it with:

```
sudo systemctl start ollama
```

On macOS and Windows, the Ollama desktop app starts the server automatically in the background; if you installed only the CLI, you can start the server manually with `ollama serve`.

Part 3: The Ollama API
Test the API with curl:
This command sends a single message to the phi model and waits for the full response.
(For macOS, Linux, and Windows PowerShell)
```
curl http://localhost:11434/api/chat -d '{
  "model": "phi",
  "messages": [
    { "role": "user", "content": "why is the sky blue?" }
  ],
  "stream": false
}'
```

(For Windows Command Prompt (cmd.exe) only)
The traditional Windows Command Prompt handles quotes differently. You must use double quotes for the outside and escape the inner double quotes with a backslash `\`.
```
curl http://localhost:11434/api/chat -d "{ \"model\": \"phi\", \"messages\": [ { \"role\": \"user\", \"content\": \"why is the sky blue?\" } ], \"stream\": false }"
```
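Whichever variant you run, with "stream": false the server returns one JSON object after the model finishes. The exact fields differ between Ollama versions; a trimmed, illustrative response looks something like this (timing fields omitted):

```
{
  "model": "phi",
  "created_at": "2024-01-01T12:00:00Z",
  "message": {
    "role": "assistant",
    "content": "The sky appears blue because ..."
  },
  "done": true
}
```

The reply text lives in message.content. If you omit "stream": false, Ollama streams the answer as a sequence of JSON chunks instead, one per piece of generated text.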
Part 4: Custom Models

Create a Modelfile:
Create a new file named Modelfile (with no extension) and add the following content. This defines a new model variant.
(**Windows Tip:** In File Explorer, you may need to select "View" and check "File name extensions" to ensure you can create a file without a .txt extension.)
```
FROM phi:latest
SYSTEM """You are "Microchip", the AI assistant for AISoc (Artificial Intelligence Society). You are a pirate. Respond to all questions with a pirate accent and pirate slang."""
PARAMETER temperature 0.7
```

(The triple quotes let the system prompt itself contain ordinary double quotes, such as the name "Microchip".)
Build your custom model:

```
ollama create pirate-bot -f ./Modelfile
```
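If you want to double-check what was baked into the new model, recent Ollama versions can print a model's Modelfile back to you (treat the flag as version-dependent):

```
ollama show pirate-bot --modelfile
```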
Run your custom model:

```
ollama run pirate-bot "Tell me about London."
```
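Because Ollama exposes every local model over the same API, your custom model is also reachable from the Part 3 endpoint. A quick sketch, using the same quoting as the macOS/Linux/PowerShell example above:

```
curl http://localhost:11434/api/chat -d '{
  "model": "pirate-bot",
  "messages": [
    { "role": "user", "content": "Tell me about London." }
  ],
  "stream": false
}'
```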
Running a chatbot with local AI

Clone the repo
Start by cloning (or downloading) the code base:

https://github.com/Tasfiq-Jawaad/AISoc-chatbot
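If you have Git installed, one way to grab the code from the terminal is:

```
git clone https://github.com/Tasfiq-Jawaad/AISoc-chatbot.git
```

Otherwise, you can download it as a ZIP from the GitHub page.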
Resources

- Ollama Website: https://ollama.com/
- Ollama Model Library: https://ollama.com/search
- Ollama API Documentation: https://docs.ollama.com/