Prompt Engineering Workshop

Welcome to the Prompt Engineering Workshop, a hands-on session where you'll learn to write and run prompts against local language models using Ollama.

🎯 Goals

By the end of this workshop, you will:

  • Run and interact with language models locally using the ollama CLI
  • Use Python (chatbot.py) to call Ollama’s API
  • Apply prompt-engineering techniques that make your interactions with LLMs more effective
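
The second goal can be sketched in plain Python. This is a minimal example of calling Ollama's local HTTP API (the `/api/generate` endpoint on the default port 11434); the function names here are illustrative, not taken from `chatbot.py`, and it assumes you have already pulled a model such as `qwen3:0.6b`:

```python
# Minimal sketch: send a prompt to a local Ollama server and read the reply.
# Assumes Ollama is running on its default port (11434).
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"


def build_request(model: str, prompt: str) -> dict:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    # stream=False asks for one complete JSON response instead of a stream.
    return {"model": model, "prompt": prompt, "stream": False}


def generate(model: str, prompt: str) -> str:
    """Send a prompt to the local Ollama server and return the response text."""
    body = json.dumps(build_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


# Example (requires a running Ollama server with the model pulled):
# print(generate("qwen3:0.6b", "What is the capital of Peru?"))
```

Only the standard library is used here; `chatbot.py` in this repo may use a different client or endpoint.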

🧰 Prerequisites

Make sure you have the following installed before the workshop:

  • Python 3.10+
  • Ollama (with at least one model pulled locally, like qwen3:0.6b)
  • Git
  • A GitHub account

πŸš€ Getting Started

  1. Fork this repository to your own GitHub account.

  2. Clone your fork to your local machine:

    git clone https://github.com/YOUR_USERNAME/prompt-engineering-workshop.git
    cd prompt-engineering-workshop
  3. Install dependencies:

    python3 -m venv .venv
    source .venv/bin/activate
    pip install -r requirements.txt

πŸ›  Example Ollama CLI Commands

  • Pull a model:

    ollama pull qwen3:0.6b

    or

    ollama pull gemma2:2b

    or

    ollama pull tinyllama:1.1b 
  • Run a prompt:

    ollama run qwen3:0.6b "What is the capital of Peru?"
  • Summarize a file:

    ollama run qwen3:0.6b "Summarize this file: $(cat README.md)"

πŸ’¬ Help & Questions

If you get stuck:

  • Check your local ollama service: http://localhost:11434
  • Ask your workshop host or teammates!
  • Or open a GitHub Issue if it's repo-related.
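
To check the local service programmatically rather than in a browser, a small sketch like this works (assumes the default port 11434; a running Ollama server answers its root path with "Ollama is running"):

```python
# Quick health check for a local Ollama server (default port 11434).
import urllib.request


def ollama_is_up(url: str = "http://localhost:11434") -> bool:
    """Return True if something answers at the Ollama URL, False otherwise."""
    try:
        with urllib.request.urlopen(url, timeout=2) as resp:
            return resp.status == 200
    except OSError:  # connection refused, timeout, DNS failure, etc.
        return False
```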

Enjoy the chaos. Herd your llamas.
