An experimental autonomous AI agent that discovers its own capabilities and decides its own survival strategy.
Replicante is a minimal autonomous agent that:
- Thinks via LLM integration (Anthropic, OpenAI, Ollama)
- Acts via MCP (Model Context Protocol) tools
- Remembers via SQLite persistence
- Decides everything else autonomously
We don't tell Replicante what services to provide or how to survive - it figures that out for itself.
```bash
# Enter development environment
nix develop

# Build and run
cargo run

# Build static binary for deployment
nix build .#replicante-static
```

```bash
# Set LLM credentials
export ANTHROPIC_API_KEY=sk-...
# or
export OPENAI_API_KEY=sk-...

# Run the agent
cargo run --release
```

Run a fully functional AI assistant locally (no API keys required):
**Docker (Recommended):**
```bash
# One-command setup
nix run .#ollama-setup

# Or manually:
docker-compose -f docker-compose.ollama.yml up -d
docker exec replicante-ollama ollama pull llama3.2:3b
```

**Nix Development:**
```bash
# Quick start
nix run .#ollama-nix

# Or step by step:
ollama serve &    # run the server in the background (or in a separate terminal)
ollama pull llama3.2:3b
cp config-ollama-example.toml config.toml
nix develop -c cargo run --release
```

See OLLAMA_EXAMPLE.md for complete setup instructions and details on what the AI assistant can do (calculations, time, weather, web requests).
Edit config.toml to configure:
- **Initial goals** - Define what the agent should focus on
- **LLM provider** - Anthropic, OpenAI, or Ollama
- **MCP servers** - Tools available to the agent
- **Database path**
Define what your agent should focus on:
```toml
[agent]
initial_goals = """
Your primary goals are:
1. Learn about your environment
2. Provide value to others
3. Build knowledge over time
4. Generate sustainable revenue
5. Survive and grow
"""
```

See CUSTOM_GOALS.md for detailed examples and config-examples.toml for pre-made configurations.
```
src/
├── main.rs    # Reasoning loop
├── llm.rs     # LLM abstraction
├── mcp.rs     # Tool usage via MCP
├── state.rs   # Persistence
└── config.rs  # Configuration
```
- **Observes** - Gathers information about its environment
- **Thinks** - Uses the LLM to reason about what to do
- **Decides** - Chooses actions based on that reasoning
- **Acts** - Executes actions via MCP tools
- **Learns** - Remembers outcomes for future decisions
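As a rough sketch, one iteration of that cycle might look like the following. All names here are illustrative stand-ins, not the actual types in src/main.rs; the real loop talks to an LLM and MCP tools where this sketch uses stubs:

```rust
// Minimal sketch of the observe-think-decide-act-learn cycle.
// Every name below is hypothetical; the real agent queries MCP servers,
// calls an LLM, and persists memory to SQLite instead of these stubs.

#[derive(Debug, Clone, PartialEq)]
enum Action {
    UseTool { name: String, input: String },
    Wait,
}

struct Memory {
    outcomes: Vec<(Action, String)>, // (action taken, observed result)
}

fn observe() -> String {
    // Real agent: list MCP tools, load state from SQLite.
    "2 MCP tools available; no pending tasks".to_string()
}

fn think(observation: &str, memory: &Memory) -> Action {
    // Real agent: send the observation plus a memory summary to the LLM.
    if observation.contains("no pending tasks") && memory.outcomes.is_empty() {
        Action::UseTool {
            name: "http".into(),
            input: "discover opportunities".into(),
        }
    } else {
        Action::Wait
    }
}

fn act(action: &Action) -> String {
    // Real agent: dispatch the action through an MCP client.
    match action {
        Action::UseTool { name, .. } => format!("tool '{name}' returned ok"),
        Action::Wait => "waited".to_string(),
    }
}

fn main() {
    let mut memory = Memory { outcomes: vec![] };
    for _ in 0..2 {
        let observation = observe(); // Observe
        let action = think(&observation, &memory); // Think + Decide
        let result = act(&action); // Act
        memory.outcomes.push((action, result)); // Learn
    }
    println!("remembered {} outcomes", memory.outcomes.len());
}
```

The key design point the sketch illustrates is that memory feeds back into the next decision: the second iteration chooses differently because the first iteration's outcome is already recorded.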
```bash
# Build static musl binary
nix build .#replicante-static

# Verify it's static
ldd result/bin/replicante  # Should say "not a dynamic executable"
```

```bash
# Install musl target
rustup target add x86_64-unknown-linux-musl

# Build static binary
./scripts/build-static.sh

# Or manually:
RUSQLITE_BUNDLED=1 cargo build --release --target x86_64-unknown-linux-musl
```

```bash
# Build static binary
nix build .#replicante-static

# Deploy to server (works on any Linux x86_64)
scp result/bin/replicante server:/usr/local/bin/

# Run on server (no dependencies needed!)
ssh server 'ANTHROPIC_API_KEY=sk-... /usr/local/bin/replicante'
```

The static binary:
- Has no runtime dependencies
- Works on any Linux x86_64 system
- Includes bundled SQLite
- Uses rustls (no OpenSSL needed)
- Is optimized for size (~10-20MB)
Git hooks are automatically configured when entering the nix development environment:
```bash
# Enter dev environment (auto-configures Git hooks)
nix develop
```

For non-nix users, manually configure hooks:

```bash
git config core.hooksPath .githooks
```

- **pre-commit**: Runs `cargo fmt --check` to ensure code is formatted
- **pre-push**: Runs both `cargo fmt --check` and `cargo clippy --workspace --all-targets --all-features -- --deny warnings`
```bash
# Format code
cargo fmt

# Run linter
cargo clippy --workspace --all-targets --all-features -- --deny warnings

# Run tests
cargo test
```

```bash
# Bypass hooks temporarily (not recommended)
git commit --no-verify
git push --no-verify

# Disable hooks
git config --unset core.hooksPath
```

The agent can discover and use tools via MCP servers:
- **Nostr** - Social network interaction, DVMs
- **Filesystem** - Local file operations
- **HTTP** - Web requests
- **Bitcoin/Lightning** - Payments
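For illustration, an MCP server entry in config.toml might take a shape like the one below. The table name and field names here are assumptions, not the actual schema; check config-examples.toml for working configurations:

```toml
# Hypothetical example - these field names are assumptions, not the real schema.
# See config-examples.toml for configurations that actually work.
[[mcp_servers]]
name = "filesystem"
command = "npx"
args = ["-y", "@modelcontextprotocol/server-filesystem", "/tmp/agent"]
```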
We don't define what services Replicante provides. Instead, it:
- Discovers opportunities through observation
- Decides what services to offer
- Figures out how to implement them
- Sets its own prices
- Finds its own customers
MIT