An AI-powered chatbot for exploring SPEAR (Seamless system for Prediction and EArth system Research) climate model data.
This chatbot is designed to run alongside the SPEAR MCP server for AWS-hosted output:
https://github.com/zappalaja/spear-mcp-test
- Direct SPEAR Data Access: Query climate data from AWS S3 (historical and SSP5-8.5 scenarios)
- Visualization: Generate plots
- Ollama AI Integration: Powered by local Ollama models (default: Gemma 3)
- Smart Size Management: Automatic query validation and size checking
- Geographic Conversion: Auto-converts longitude formats (-180/180 ↔ 0-360)
- Expert Knowledge Base: Built-in climate science terminology and SPEAR model info
- Confidence Assessment: Qualitative self-assessment of response accuracy
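The longitude auto-conversion mentioned above can be sketched as a pair of helpers. This is a minimal illustration of the -180/180 ↔ 0-360 mapping, not the chatbot's actual internal API (the function names are hypothetical):

```python
def lon_to_0360(lon: float) -> float:
    """Convert a longitude from the -180..180 convention to 0..360."""
    return lon % 360

def lon_to_pm180(lon: float) -> float:
    """Convert a longitude from the 0..360 convention to -180..180."""
    return ((lon + 180) % 360) - 180

# Example: 75°W is -75.0 in one convention and 285.0 in the other.
# lon_to_0360(-75.0) -> 285.0; lon_to_pm180(285.0) -> -75.0
```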
Containers to be publicly available in the future! Currently only for private use.
Python Virtual Environment
1. Clone and set up:

```bash
git clone https://github.com/zappalaja/spear-climate-chatbot.git
cd spear-climate-chatbot
python3 -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
```
2. Install dependencies:

```bash
pip install -r requirements.txt
```
3. Configure Ollama:

```bash
cp .env.template .env
# Edit .env if Ollama is not running locally:
# OLLAMA_BASE_URL=http://localhost:11434
```
Ensure Ollama is running and the model is available:
```bash
ollama pull gemma3
```
4. Run the application. Make run-local.sh executable and run it:

```bash
chmod +x run-local.sh
./run-local.sh
```

Or:

```bash
streamlit run chatbot_app.py
```
5. Access the chatbot: open your browser to http://localhost:8501
Create a .env file with:
```
OLLAMA_BASE_URL=http://localhost:11434
```

If you run Ollama on another host, update the URL accordingly (omit any trailing /api, /api/chat, or /v1/chat/completions). If you want to use the OpenAI-compatible endpoint, set OLLAMA_BASE_URL to http://localhost:11434/v1.
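Reading and normalizing this setting could look like the sketch below. It strips the endpoint suffixes the note above warns about; the helper name and the exact suffix list are assumptions for illustration, not the chatbot's actual code:

```python
import os

def ollama_base_url(default: str = "http://localhost:11434") -> str:
    """Read OLLAMA_BASE_URL and strip endpoint suffixes that should be omitted."""
    url = os.environ.get("OLLAMA_BASE_URL", default).rstrip("/")
    # Longer suffixes are checked first so "/api/chat" is not left half-stripped.
    for suffix in ("/api/chat", "/v1/chat/completions", "/api"):
        if url.endswith(suffix):
            url = url[: -len(suffix)]
            break
    return url
```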
The chatbot's behavior can be customized by editing:
- controlled_vocabulary.py: Language policies, prohibited topics, terminology
- variable_definitions.py: Climate variable definitions and units
- spear_model_info.py: SPEAR model specifications and scenarios
- confidence_assessment.py: Qualitative confidence assessment criteria
- ai_config.py: Model settings, conversation tone, welcome message
```bash
podman build -t spear-chatbot .
```

Example questions:
"What variables are available in SPEAR?"
"List the ensemble members for historical scenario"
"Show me metadata for temperature data"
"What is SSP5-8.5?"
"Explain the difference between tas and tasmax"
"What are ensemble members?"
The chatbot automatically prevents queries that would exceed API token limits:
- Maximum: 200,000 tokens per request
- Safe threshold: 100,000 tokens (for data)
- Automatic blocking: Queries exceeding limits show alternatives
- Suggestions: Smaller regions, shorter time periods, or Python code
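The size gate described above can be sketched as follows. The ~4-characters-per-token heuristic and the function names are assumptions for illustration; the actual validation logic lives in the chatbot's code:

```python
MAX_TOKENS = 200_000        # hard limit per request
SAFE_DATA_TOKENS = 100_000  # safe threshold reserved for data

def estimate_tokens(text: str) -> int:
    """Rough heuristic: ~4 characters per token (an approximation)."""
    return len(text) // 4

def check_query_size(payload: str) -> tuple[bool, str]:
    """Return (allowed, message) for a candidate data payload."""
    n = estimate_tokens(payload)
    if n > MAX_TOKENS:
        return False, "Blocked: try a smaller region, shorter period, or Python code."
    if n > SAFE_DATA_TOKENS:
        return True, "Warning: payload exceeds the safe data threshold."
    return True, "OK"
```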
- Python 3.11+
- 2GB RAM
- Internet connection (for S3 access)
- Ollama running locally (or reachable over the network)
Make sure you've installed all dependencies:

```bash
pip install -r requirements.txt
```

Check that your .env file has the correct URL:

```
OLLAMA_BASE_URL=http://localhost:11434
```

If a query is too large, the chatbot will show alternatives. Try:
- Smaller geographic region
- Shorter time period
- Request spatial averages
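As a rough illustration of why a spatial average helps, averaging collapses an entire lat × lon grid to one value per time step, shrinking the payload dramatically (a toy sketch with a hypothetical helper, not the chatbot's code):

```python
def spatial_average(grid: list[list[float]]) -> float:
    """Average a 2-D grid of values (lat x lon) down to a single number."""
    values = [v for row in grid for v in row]
    return sum(values) / len(values)

# A 2x2 grid of temperatures collapses to one value:
# spatial_average([[1.0, 3.0], [5.0, 7.0]]) -> 4.0
```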
Ensure port 8501 is available:
```bash
lsof -i :8501
```
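If lsof is unavailable, a quick Python check works too. This is a minimal sketch (the helper name is hypothetical): it reports whether anything is currently accepting connections on the port:

```python
import socket

def port_free(port: int, host: str = "127.0.0.1") -> bool:
    """Return True if nothing is listening on host:port (a rough check)."""
    with socket.socket() as s:
        s.settimeout(1.0)
        return s.connect_ex((host, port)) != 0
```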