Dolphinoko is a delightful farm-themed user interface for building and using LLM-powered agents via the Dolphin Model Context Protocol (MCP). Create and deploy character-based AI helpers with different specialties, organize them visually, and enjoy interacting with them through an accessible, warm interface. The tooling runs on local models served by Ollama, with optional cloud model integration.
- 🐱 Character-based AI agents with different specialties and personalities
- 🧰 Powerful tool creation and categorization system
- 🌾 Farm-themed, accessible UI with improved usability
- 🏠 Run everything locally with Ollama models
- ☁️ Optional integration with cloud models like Claude
- 🔄 Seamless character-tool integration for intelligent responses
- Complete UI redesign with a cozy farm aesthetic
- Improved accessibility and readability
- Better mobile responsiveness
- Create and customize animal characters as AI assistants
- Each character specializes in different tool categories
- Visual character creator with customization options
- Improved tool categorization and organization
- Better tool search and discovery
- Seamless integration between tools and characters
- Fixed scrolling and display issues in chat interface
- Enhanced tool execution directly within chat
- Better message rendering and formatting
- Python 3.8 or higher
- Node.js 16.x or higher
- Ollama for local model inference
- dolphin-mcp (installed automatically)
This is an experimental project provided AS IS. Use at your own risk. There may still be bugs and issues, but we're actively working to improve it!
Current development priorities:
- Finishing Anthropic integration
- Adding more character types and customization options
- Improving tool categories and persistence
- Enhanced context handling between characters and tools
- Clone the repository:
```bash
git clone https://github.com/holdmydata/dolphinoko.git
cd dolphinoko
```
- Set up the Python backend:
```bash
# Create a virtual environment
python -m venv venv
# Activate the virtual environment
# On Windows:
venv\Scripts\activate
# On macOS/Linux:
source venv/bin/activate
# Install dependencies
pip install -r requirements.txt
```
- Set up the React frontend:
```bash
cd frontend
npm install
```
- Start the backend:
```bash
# From the root directory with virtual environment activated
cd backend
python main.py
```
- Start the frontend:
```bash
# In another terminal, from the root directory
cd frontend
npm run dev
```
- Open your browser and navigate to `http://localhost:3000`
- Install Ollama if you haven't already
- Pull a model, e.g.:
```bash
ollama pull dolphin-llama3
# or
ollama pull gemma:7b
```
- Make sure Ollama is running when you use Dolphinoko
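To confirm Ollama is reachable before launching Dolphinoko, you can query its HTTP API, which listens on `http://localhost:11434` by default. The snippet below is only a convenience sketch (it calls Ollama's `/api/tags` endpoint to list pulled models) and is not part of Dolphinoko itself; it assumes Node.js 18+ or a browser console where `fetch` is available.

```typescript
// Convenience sketch: verify the local Ollama server is up and list pulled models.
// Uses Ollama's /api/tags endpoint; not part of Dolphinoko itself.
async function checkOllama(): Promise<void> {
  const res = await fetch("http://localhost:11434/api/tags");
  if (!res.ok) {
    throw new Error(`Ollama responded with status ${res.status}`);
  }
  const data = (await res.json()) as { models: Array<{ name: string }> };
  console.log("Pulled models:", data.models.map((m) => m.name).join(", "));
}

checkOllama().catch(() => console.error("Ollama does not appear to be running."));
```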
- Navigate to the Character Creator page
- Design your character:
  - Choose an animal type (cat, dog, bird, etc.)
  - Select a color and give them a name
  - Assign a role and `toolCategory` that fit their specialty (see the sketch after these steps)
- Save your character
- Visit the Island or Chat page to interact with your new assistant!
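To make the pieces concrete, a character amounts to a small bundle of fields along these lines. This is an illustrative sketch only; the field names are assumptions, and the real definition lives in `frontend/src/context/CharacterContext.tsx`.

```typescript
// Illustrative sketch -- field names are assumptions, not the actual
// CharacterContext types; check frontend/src/context/CharacterContext.tsx.
interface CharacterSketch {
  name: string;         // display name shown in chat and on the Island
  animalType: string;   // "cat", "dog", "bird", ...
  color: string;        // accent color picked in the Character Creator
  role: string;         // short description of the specialty
  toolCategory: string; // ties the character to tools in the same category
}

const mochi: CharacterSketch = {
  name: "Mochi",
  animalType: "cat",
  color: "#f4a261",
  role: "Writing helper",
  toolCategory: "writing",
};
```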
- Navigate to the Tool Builder page
- Click "Create New Tool"
- Fill in the tool details:
  - Name: A descriptive name for your tool
  - Provider: Choose "Ollama" for local models
  - Model: Select a model you've pulled into Ollama
  - Category: Select a category that matches a character's `toolCategory`
  - Prompt Template: Create a template using `{input}` as a placeholder for user input (see the example after these steps)
- Save your tool
- Use the Tool Organizer to properly categorize your tools
- Interact with the appropriate character to utilize your tool!
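As a concrete illustration of these fields and the `{input}` placeholder, a simple summarizing tool could be described roughly like this. The field names below are assumptions for illustration, not Dolphinoko's actual tool schema; you enter the same values through the Tool Builder UI.

```typescript
// Illustrative sketch -- field names are assumptions, not the actual tool schema.
const summarizerTool = {
  name: "Quick Summarizer",
  provider: "ollama",
  model: "dolphin-llama3",        // any model you've pulled into Ollama
  category: "writing",            // matches a character's toolCategory
  promptTemplate:
    "Summarize the following text in three short bullet points:\n\n{input}",
};
```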
The project is organized with a clear separation between backend and frontend:
```
dolphinoko/
├── backend/            # FastAPI Python backend
├── frontend/           # React/TypeScript frontend
│   ├── src/
│   │   ├── components/ # UI components
│   │   ├── context/    # Context providers
│   │   ├── pages/      # Application pages
│   │   └── utils/      # Utility functions
└── README.md
```
- UI Theme: The farm theme can be customized in `frontend/src/styles/theme.ts`
- Characters: Modify available character types in `frontend/src/context/CharacterContext.tsx`
- Tool Categories: Edit categories in `frontend/src/types/categories.ts`
- Adding more providers: Extend the providers in `backend/services/mcp_service.py`
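For example, adding a new tool category might look roughly like the following. The export name and entry shape are assumptions; mirror whatever `frontend/src/types/categories.ts` actually defines when editing.

```typescript
// frontend/src/types/categories.ts -- sketch only; the real export name and
// entry shape may differ, so adapt this to the existing file.
export interface ToolCategory {
  id: string;     // value matched against a character's toolCategory
  label: string;  // name shown in the Tool Builder and Tool Organizer
  icon?: string;  // optional emoji for the category
}

export const TOOL_CATEGORIES: ToolCategory[] = [
  { id: "writing", label: "Writing", icon: "✍️" },
  { id: "coding", label: "Coding", icon: "💻" },
  { id: "gardening", label: "Gardening", icon: "🌱" }, // newly added category
];
```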
Contributions are welcome! Feel free to submit a Pull Request or open an Issue for bugs and feature requests.
MIT License (for our code). Models and third-party libraries maintain their own licensing.
- Eric's Dolphin MCP for the underlying MCP implementation
- Ollama for the local model inference
- The farming and kawaii aesthetics that inspired our new UI
Dolphinoko includes a Blender integration through the Model Context Protocol (MCP) that allows AI models to control and manipulate 3D scenes in Blender.
- Install the Blender addon:
  - Find the `addon.py` file in the `assets/blender` directory
  - In Blender, go to Edit > Preferences > Add-ons
  - Click "Install..." and select the `addon.py` file
  - Enable the addon by checking the box
- Connect to Blender:
  - In Blender, find the "Dolphinoko" tab in the sidebar (press N if not visible)
  - Click "Connect to Dolphinoko"
  - You should see "Server Status: Running on port 9334"
- Use the API:
  - The API endpoints are available at `/blender/`
  - You can also use the AI to control Blender by asking it to perform actions
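As a rough sketch of calling those routes from code, the snippet below posts to a hypothetical `/blender/execute` endpoint on the Dolphinoko backend. The path, port, and payload shape are assumptions, not the documented API; consult the Blender Integration README for the real routes.

```typescript
// Hypothetical sketch -- the endpoint path, backend port, and payload shape
// are assumptions, not the documented /blender/ API.
async function addCubeViaDolphinoko(): Promise<void> {
  const res = await fetch("http://localhost:8000/blender/execute", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ command: "add_cube", params: { size: 1.0 } }),
  });
  if (!res.ok) {
    throw new Error(`Blender request failed with status ${res.status}`);
  }
  console.log(await res.json());
}

addCubeViaDolphinoko().catch(console.error);
```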
For more detailed instructions, see the Blender Integration README.