A terminal-based strategy/simulation game written in Python using the curses library. Manage a dwarf, explore, gather resources, and interact with a world of fungi!
✅ PRODUCTION-READY - The Oracle LLM integration is now fully ready for live API usage with comprehensive safety features:
- 🛡️ Cost Controls: Daily request limits, token limits, timeout protection
- ⚡ Reliability: Smart retry logic, graceful error handling, provider auto-detection
- 🔧 Multi-Provider: Supports XAI (Grok), OpenAI (GPT), Anthropic (Claude), Groq
- 📊 Monitoring: Real-time usage tracking, detailed logging, emergency controls
Quick Setup: Copy llm_config.ini.example → llm_config.ini, set your API key as an environment variable, and play! See LLM Oracle Integration below for details.
- Curses-based graphical interface
- Map generation and exploration
- Basic dwarf task management (moving, simple actions)
- Resource tracking (in-memory)
- Inventory and Shop screens (basic implementation)
- Mycelial network concepts
- Python 3.7+
- curses:
  - Linux/macOS: Typically included with Python or available through system package managers (e.g., `sudo apt-get install libncursesw5-dev` on Debian/Ubuntu); often pre-installed on macOS.
  - Windows: You need to install the `windows-curses` package: `pip install windows-curses`. (Note: this project is primarily developed/tested on Unix-like systems. Windows compatibility via `windows-curses` may vary.)
- groq: Current LLM integration testing uses the X.AI API (used by the Oracle for LLM features) with `groq`:

  ```bash
  pip install groq
  ```
- LLM SDKs: For LLM features, the following are installed via `requirements.txt`:
  - `openai`: For XAI (Grok), OpenAI (GPT), and other OpenAI-compatible APIs.
  - `groq`: For the Groq API.
  - `requests`: Used for some direct API calls (e.g., Anthropic).
- Clone the repository:

  ```bash
  git clone <repository-url>
  cd fungi-fortress
  ```
- Set up API Key (Optional, for LLM features): This game can use a Large Language Model (LLM) for certain features. To enable these:
  - Copy `llm_config.ini.example` to `llm_config.ini`.
  - Set your API key as an environment variable (for security):

    ```bash
    # For XAI (Grok):
    export XAI_API_KEY="your-xai-api-key-here"
    # For OpenAI:
    export OPENAI_API_KEY="your-openai-api-key-here"
    # For Anthropic:
    export ANTHROPIC_API_KEY="your-anthropic-api-key-here"
    # For Groq:
    export GROQ_API_KEY="your-groq-api-key-here"
    # For Together:
    export TOGETHER_API_KEY="your-together-api-key-here"
    # For Perplexity:
    export PERPLEXITY_API_KEY="your-perplexity-api-key-here"
    ```
  - The game automatically detects which API key to use based on your chosen model.
  - Security: API keys are stored in environment variables, never in files.
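The ini-file-plus-environment-variable scheme can be illustrated with a short `configparser` sketch. This is an assumption-laden sketch, not the project's actual loader (which lives in its own `config_manager` module); the function name and the provider-to-variable mapping here are illustrative:

```python
import configparser
import os
from typing import Optional

# Illustrative mapping from provider name to the environment variable
# that holds its API key (matches the variables documented above).
PROVIDER_ENV_VARS = {
    "xai": "XAI_API_KEY",
    "openai": "OPENAI_API_KEY",
    "anthropic": "ANTHROPIC_API_KEY",
    "groq": "GROQ_API_KEY",
    "together": "TOGETHER_API_KEY",
    "perplexity": "PERPLEXITY_API_KEY",
}

def load_api_key(config_path: str = "llm_config.ini") -> Optional[str]:
    """Read the provider from the ini file, then fetch its key from the environment."""
    parser = configparser.ConfigParser()
    parser.read(config_path)
    provider = parser.get("LLM", "provider", fallback="auto")
    if provider == "auto":
        # In 'auto' mode, return the first key that is actually set.
        for env_var in PROVIDER_ENV_VARS.values():
            if os.environ.get(env_var):
                return os.environ[env_var]
        return None
    env_var = PROVIDER_ENV_VARS.get(provider)
    return os.environ.get(env_var) if env_var else None
```

The point of the split is that `llm_config.ini` only names the provider and model, while the secret itself never touches a file on disk.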
- Ensure requirements are met (see above, especially for `curses` on Windows).
- Run the game:

  ```bash
  python main.py
  ```

  (Alternatively, if running as an installed module later: `python -m fungi_fortress.main`.)
- Arrow Keys: Move cursor / navigate menus
- Enter: Confirm selection / interact
- ESC: Quit game / Close open menus
- i: Toggle Inventory
- q: Open Quest/Oracle Content Menu
- l: Toggle Legend
- t: Talk to adjacent NPCs/Oracles
- e: Enter/Interact with structures
- p: Pause game
- m: Mine/Move (Assign task at cursor)
- f: Fish/Fight (Assign task at cursor)
- b: Build (Assign task at cursor)
- d: Descent/Shop preparation
- c: Cast selected spell
- s: Cycle through spells
- 1-5: Select spell slots
When consulting an Oracle:
- Type normally: Enter your query (keys like 'q' type text here instead of triggering shortcuts)
- Enter: Submit query
- ESC: Exit consultation (with confirmation when typing)
- Arrow Keys: Navigate longer responses
This project uses pytest for testing and mypy for static type checking.
- Install Development Dependencies: Ensure you have installed the necessary packages:

  ```bash
  pip install -r requirements.txt
  ```

  (This includes `pytest` and `mypy`, marked as development dependencies.)
- Running Tests: Execute tests from the root directory:

  ```bash
  pytest
  ```
- Running Type Checks: Run the type checker from the root directory:

  ```bash
  mypy .
  ```
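Since the roadmap below calls for gradually expanding type-hint coverage, a minimal mypy configuration can keep the checker useful while coverage is still low. The following is an illustrative starting point only, not the project's actual settings:

```ini
# mypy.ini — illustrative starting point, not the project's actual configuration
[mypy]
python_version = 3.7
ignore_missing_imports = True   # curses/windows-curses stubs may be absent
warn_unused_ignores = True

# Relax checks for tests while coverage is being built up
[mypy-tests.*]
disallow_untyped_defs = False
```

Strictness flags such as `disallow_untyped_defs` can then be enabled module by module as hints are added.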
- Level 1 Pathfinding: Harvesting magic fungi on the first level may not illuminate a path to the `nexus_site` if the only path requires crossing water tiles, as there is currently no mechanism for the player to cross water. Design question: Should the path still illuminate through water, or should we add water-crossing mechanics?
- Code Quality: Test coverage and type hint coverage are currently low.
- Procedural Content Generation: Integrate Large Language Model (LLM) APIs to enable player-driven procedural generation of game content, including maps, story elements, characters, and events.
- Improve Test Coverage: Write comprehensive unit and integration tests for core game logic, map generation, entities, and utilities.
- Enhance Type Hinting: Add type hints throughout the codebase and resolve any issues reported by `mypy` to improve code robustness and maintainability.
- Event System (`events.py`):
  - Implement the `trigger_event` function to handle game event initiation.
  - Implement the `check_events` function for ongoing event condition checking.
- Interactions (`interactions.py`):
  - Develop and implement the UI and core functionality for the Dwarven Sporeforge interaction (`interact_dwarven_sporeforge_logic`).
- Map Generation & Entities (`map_generation.py`, `tiles.py`):
  - Define a "mineral_deposit" entity in the `ENTITY_REGISTRY` and integrate its spawning into grotto map generation.
  - Define a walkable "shallow_water" entity in the `ENTITY_REGISTRY` and implement its use for river fords in surface map generation. This could also help address pathfinding issues across water.
- LLM Integration (`llm_interface.py`):
  - Implement true streaming support for Anthropic models in `_detect_provider_and_call_api_streaming` instead of the current non-streaming fallback.
  - Expand `handle_game_event` to process a wider variety of game events (e.g., dynamic event generation, NPC behavior adaptations) using LLM capabilities.
  - Refine LLM prompt context by selectively adding more detailed and relevant game state information to `handle_oracle_query_streaming` and `handle_oracle_query_non_streaming`.
  - Standardize LLM provider detection: Ensure the logic in `_detect_provider_and_call_api` (for non-streaming calls) is consistent with `config_manager.detect_provider_from_model`, ideally by calling the centralized function.
  - Consider creating a dedicated `_call_anthropic_api` function for non-streaming Anthropic calls if it offers unique features or requires specific error handling beyond the generic OpenAI-compatible wrapper.
- Gameplay Features:
- Further develop dwarf task management (e.g., more complex tasks, dwarf skills affecting outcomes beyond mining).
- Expand shop functionality (e.g., dynamic pricing, wider item variety).
- Flesh out the Mycelial Network mechanics beyond path illumination (e.g., resource transfer, environmental effects).
Contributions are welcome! Please feel free to open issues or submit pull requests. (Further contribution guidelines TBD).
Fungi Fortress features an AI-powered Oracle that provides guidance, lore, and interactive storytelling through Large Language Model (LLM) integration. The Oracle system is designed to be flexible and provider-agnostic, allowing players to use their preferred LLM service and API credits.
🚀 PRODUCTION-READY - The LLM integration is fully stable with comprehensive safety features, extensive testing, and robust error handling for live API usage.
- ✅ Dynamic Text Streaming: Oracle responses now stream in real-time for a more engaging experience.
- ✅ Enhanced Error Handling: Improved parsing of malformed LLM responses with graceful degradation
- ✅ Comprehensive Test Suite: Over 170 tests covering unit tests, integration tests, and live API validation
- ✅ Response Format Support: Handles both structured JSON responses and legacy text format seamlessly
- ✅ Provider Auto-Detection: Automatically selects the correct API based on model name patterns
- ✅ Structured Output: Supports XAI's structured JSON schema for more reliable responses
- ✅ Integration Tests: Added `test_integration_game.py` and `tests/test_integration_xai_direct.py` for full pipeline validation
The Oracle supports multiple LLM providers through a unified interface:
- XAI (Grok models): Access to Grok-3, Grok-2, and other Grok models via XAI's direct API
- NEW: Full support for structured JSON responses and reasoning tokens
- OpenAI: GPT-4o, GPT-4o-mini, GPT-4-turbo, and GPT-3.5-turbo models
- Anthropic: Claude-3.5-Sonnet, Claude-3.5-Haiku, and Claude-3-Opus models
- Groq: Fast inference for open-source models like LLaMA, Mixtral, and Gemma
- Auto-detection: Automatically chooses the appropriate provider based on model name
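The model-name-based auto-detection can be sketched as a simple prefix match. This is a simplified illustration, not the project's actual logic (which lives in `config_manager.detect_provider_from_model` and may handle more patterns):

```python
def detect_provider_from_model(model_name: str) -> str:
    """Guess the provider from common model-name patterns (illustrative sketch)."""
    name = model_name.lower()
    if name.startswith("grok"):
        return "xai"
    if name.startswith("gpt"):
        return "openai"
    if name.startswith("claude"):
        return "anthropic"
    if name.startswith(("llama", "mixtral", "gemma")):
        return "groq"
    # Unrecognized names fall back to whatever the config file specifies.
    return "auto"
```

Names that several providers serve (e.g., Together's `meta-llama/...` paths) would need explicit configuration rather than prefix matching, which is why `provider = auto` remains overridable.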
Copy `llm_config.ini.example` to `llm_config.ini` and configure your settings:

```ini
[LLM]

# === API KEY CONFIGURATION ===
# API keys are loaded from environment variables for security
# Set these environment variables in your shell or .env file:
#
# For XAI (Grok):    export XAI_API_KEY="your-xai-api-key-here"
# For OpenAI:        export OPENAI_API_KEY="your-openai-api-key-here"
# For Anthropic:     export ANTHROPIC_API_KEY="your-anthropic-api-key-here"
# For Groq:          export GROQ_API_KEY="your-groq-api-key-here"
# For Together:      export TOGETHER_API_KEY="your-together-api-key-here"
# For Perplexity:    export PERPLEXITY_API_KEY="your-perplexity-api-key-here"

# Provider selection (auto, xai, groq, openai, anthropic, together, perplexity)
provider = auto

# Model to use - examples by provider:
# XAI: grok-3, grok-3-beta, grok-2-1212, grok-3-mini, grok-3-mini-fast
# OpenAI: gpt-4o, gpt-4o-mini, gpt-3.5-turbo
# Anthropic: claude-3-5-sonnet-20241022, claude-3-5-haiku-20241022
# Groq: llama-3.3-70b-versatile, llama-3.1-8b-instant, gemma2-9b-it
# Together: meta-llama/Llama-3.2-90B-Vision-Instruct-Turbo
# Perplexity: llama-3.1-sonar-small-128k-online
model_name = gpt-4o-mini

# Context level for game information (low, medium, high)
context_level = medium

# === COST CONTROL SETTINGS ===
max_tokens = 1000           # Max response length (prevents runaway costs)
daily_request_limit = 0     # Daily API call limit (0 = unlimited)
timeout_seconds = 60        # Request timeout (prevents hanging)
max_retries = 2             # Retry attempts (reliability)
```

Secure Environment Variable Setup: API keys are stored as environment variables for enhanced security:
```bash
# Add to your shell profile (.bashrc, .zshrc, etc.) for persistence:
export XAI_API_KEY="your-xai-api-key-here"

# Or set for the current session only:
export XAI_API_KEY="your-xai-api-key-here"
```

The LLM integration includes comprehensive testing:
```bash
# Run all tests (most use mocks, safe for CI)
pytest

# Run only LLM-specific interface tests (uses mocks)
pytest tests/test_llm_interface.py -v

# Run specific integration tests that may require API keys (check .gitignore for these files)
# Example for XAI live endpoint tests:
pytest tests/test_integration_xai_direct.py -v
```

Test Organization:
- Unit and integration tests (using mocks) are located in the `tests/` directory.
- Tests designed for live API validation (e.g., `tests/test_integration_xai_direct.py`) are configured in `.gitignore` to prevent accidental key exposure and should be run with caution.
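One common pattern for such live tests is to skip themselves automatically when no credential is present, so an accidental `pytest` run without keys stays safe. The test class below is a hypothetical sketch of that pattern (pytest collects `unittest`-style cases too); it is not taken from the repository's actual test files:

```python
import os
import unittest

class LiveXAIEndpointTest(unittest.TestCase):
    """Illustrative guard for live-API tests (hypothetical example, not from the repo)."""

    @unittest.skipUnless(os.environ.get("XAI_API_KEY"), "XAI_API_KEY not set")
    def test_live_oracle_roundtrip(self):
        # A real test would issue a small Oracle query against the live endpoint
        # and assert on the response shape; here we only check the key is present.
        self.assertTrue(os.environ["XAI_API_KEY"])
```

With this guard, CI machines without keys report the test as skipped rather than failed, while developers who have exported `XAI_API_KEY` exercise the live path.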