Welcome to the Prompt Engineering Workshop, a hands-on session where you'll learn to run models and craft prompts using Ollama.
By the end of this workshop, you will:
- Run and interact with language models locally using the `ollama` CLI
- Use Python (`chatbot.py`) to call Ollama's API
- Understand prompt-engineering fundamentals that make working with LLMs easier
Make sure you have the following installed before the workshop:
- Python 3.10+
- Ollama (with at least one model pulled locally, like `qwen3:0.6b`)
- Git
- A GitHub account
- Fork this repository to your own GitHub account.
- Clone your fork to your local machine:

  ```shell
  git clone https://github.com/YOUR_USERNAME/prompt-engineering-workshop.git
  cd prompt-engineering-workshop
  ```
- Install dependencies:

  ```shell
  python3 -m venv .venv
  source .venv/bin/activate
  pip install -r requirements.txt
  ```
- Pull a model:

  ```shell
  ollama pull qwen3:0.6b
  # or
  ollama pull gemma2:2b
  # or
  ollama pull tinyllama:1.1b
  ```
- Run a prompt:

  ```shell
  ollama run qwen3:0.6b "What is the capital of Peru?"
  ```
- Summarize a file:

  ```shell
  ollama run qwen3:0.6b "Summarize this file: $(cat README.md)"
  ```
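The same file-summarization pattern works from Python instead of shell substitution. A sketch (the `summarize_prompt` helper is illustrative, not part of the workshop code; the truncation limit is an assumption to keep prompts within a small model's context window):

```python
from pathlib import Path

def summarize_prompt(path: str, max_chars: int = 4000) -> str:
    """Build a summarization prompt from a file's contents, truncating
    long files so the prompt fits a small model's context window."""
    text = Path(path).read_text(encoding="utf-8")[:max_chars]
    return f"Summarize this file:\n\n{text}"
```

The returned string can be passed to `ollama run` or sent through the API, just like the shell `$(cat ...)` example above.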
If you get stuck:
- Check that your local `ollama` service is running: http://localhost:11434
- Ask your workshop host or teammates!
- Or open a GitHub Issue if it's repo-related.
Enjoy the chaos. Herd your llamas.