OpenViking can run as a standalone HTTP server, allowing multiple clients to connect over the network.
```shell
# Start server (reads ~/.openviking/ov.conf by default)
openviking-server

# Or specify a custom config path
openviking-server --config /path/to/ov.conf

# Verify it's running
curl http://localhost:1933/health
# {"status": "ok"}
```

| Option | Description | Default |
|---|---|---|
| `--config` | Path to ov.conf file | `~/.openviking/ov.conf` |
| `--host` | Host to bind to | `0.0.0.0` |
| `--port` | Port to bind to | `1933` |
Examples:

```shell
# With default config
openviking-server

# With custom port
openviking-server --port 8000

# With custom config, host, and port
openviking-server --config /path/to/ov.conf --host 127.0.0.1 --port 8000
```

The server reads all configuration from ov.conf. See the Configuration Guide for full details on the config file format.
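Before launching, a deployment script can sanity-check that ov.conf parses as JSON and contains the top-level sections used throughout this guide. This helper is a hypothetical illustration, not part of OpenViking, and the required-section list is an assumption based on the examples in this guide:

```python
import json
from pathlib import Path

# Top-level sections shown in this guide's ov.conf examples
# (assumption: the server may accept or require others).
REQUIRED_SECTIONS = ("server", "storage")

def missing_sections(path):
    """Return the required top-level sections absent from an ov.conf file."""
    conf = json.loads(Path(path).read_text())
    return [s for s in REQUIRED_SECTIONS if s not in conf]

# Example: a minimal config containing both sections passes the check.
Path("ov.conf.example").write_text(json.dumps({
    "server": {"host": "0.0.0.0", "port": 1933},
    "storage": {"workspace": "./data"},
}))
print(missing_sections("ov.conf.example"))  # []
```

A wrapper script could refuse to start `openviking-server` while this returns a non-empty list.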
The server section in ov.conf controls server behavior:
```json
{
  "server": {
    "host": "0.0.0.0",
    "port": 1933,
    "root_api_key": "your-secret-root-key",
    "cors_origins": ["*"]
  },
  "storage": {
    "workspace": "./data",
    "agfs": { "backend": "local" },
    "vectordb": { "backend": "local" }
  }
}
```

In local mode, the server manages AGFS and VectorDB itself. Configure the storage path in ov.conf:
```json
{
  "storage": {
    "workspace": "./data",
    "agfs": { "backend": "local" },
    "vectordb": { "backend": "local" }
  }
}
```

```shell
openviking-server
```

In remote mode, the server connects to remote AGFS and VectorDB services. Configure the remote URLs in ov.conf:
```json
{
  "storage": {
    "agfs": { "backend": "remote", "url": "http://agfs:1833" },
    "vectordb": { "backend": "remote", "url": "http://vectordb:8000" }
  }
}
```

```shell
openviking-server
```

On Linux systems, you can use systemd to manage OpenViking as a service, enabling automatic restart and startup on boot. First, install and configure OpenViking on the machine.
Create the file /etc/systemd/system/openviking.service:
```ini
[Unit]
Description=OpenViking HTTP Server
After=network.target

[Service]
Type=simple
# Replace with your working directory
WorkingDirectory=/var/lib/openviking
# Choose one of the following start methods
ExecStart=/usr/bin/openviking-server
Restart=always
RestartSec=5
# Path to config file
Environment="OPENVIKING_CONFIG_FILE=/etc/openviking/ov.conf"

[Install]
WantedBy=multi-user.target
```

After creating the service file, use the following commands to manage the OpenViking service:
```shell
# Reload systemd configuration
sudo systemctl daemon-reload

# Start the service
sudo systemctl start openviking.service

# Enable service on boot
sudo systemctl enable openviking.service

# Check service status
sudo systemctl status openviking.service

# View service logs
sudo journalctl -u openviking.service -f
```

Once the server is running, connect to it from the Python client:

```python
import openviking as ov

client = ov.SyncHTTPClient(url="http://localhost:1933", api_key="your-key", agent_id="my-agent")
client.initialize()
results = client.find("how to use openviking")
client.close()
```

The CLI reads connection settings from ovcli.conf. Create ~/.openviking/ovcli.conf:
```json
{
  "url": "http://localhost:1933",
  "api_key": "your-key"
}
```

Or set the config path via an environment variable:

```shell
export OPENVIKING_CLI_CONFIG_FILE=/path/to/ovcli.conf
```

Then use the CLI:
```shell
python -m openviking ls viking://resources/
```

You can also call the HTTP API directly:

```shell
curl "http://localhost:1933/api/v1/fs/ls?uri=viking://" \
  -H "X-API-Key: your-key"
```

OpenViking provides pre-built Docker images published to GitHub Container Registry:
```shell
docker run -d \
  --name openviking \
  -p 1933:1933 \
  -v ~/.openviking/ov.conf:/app/ov.conf \
  -v /var/lib/openviking/data:/app/data \
  --restart unless-stopped \
  ghcr.io/volcengine/openviking:main
```

You can also use Docker Compose with the docker-compose.yml provided in the project root:

```shell
docker compose up -d
```

To build the image yourself:

```shell
docker build -t openviking:latest .
```
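If you script container startup, the `docker run` invocation above can be assembled programmatically. The helper below is purely illustrative (not part of OpenViking) and mirrors the port mapping and volume mounts shown:

```python
def docker_run_args(config_path, data_dir, port=1933, tag="main"):
    """Build the argument list for the `docker run` command shown above."""
    return [
        "docker", "run", "-d", "--name", "openviking",
        "-p", f"{port}:1933",                 # host port -> container port 1933
        "-v", f"{config_path}:/app/ov.conf",  # mount the config file
        "-v", f"{data_dir}:/app/data",        # persist local storage
        "--restart", "unless-stopped",
        f"ghcr.io/volcengine/openviking:{tag}",
    ]

print(" ".join(docker_run_args("~/.openviking/ov.conf", "/var/lib/openviking/data")))
```

The resulting list can be passed directly to `subprocess.run` to launch the container.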
The project provides a Helm chart located at examples/k8s-helm/:

```shell
helm install openviking ./examples/k8s-helm \
  --set openviking.config.embedding.dense.api_key="YOUR_API_KEY" \
  --set openviking.config.vlm.api_key="YOUR_API_KEY"
```

For a detailed cloud deployment guide (including Volcengine TOS + VikingDB + Ark configuration), see the Cloud Deployment Guide.
| Endpoint | Auth | Purpose |
|---|---|---|
| `GET /health` | No | Liveness probe — returns `{"status": "ok"}` immediately |
| `GET /ready` | No | Readiness probe — checks AGFS, VectorDB, APIKeyManager |
```shell
# Liveness
curl http://localhost:1933/health

# Readiness
curl http://localhost:1933/ready
# {"status": "ready", "checks": {"agfs": "ok", "vectordb": "ok", "api_key_manager": "ok"}}
```

Use /health for Kubernetes liveness probes and /ready for readiness probes.
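A deployment script may want to gate traffic on /ready and report which dependency failed. The sketch below parses the response shape shown above; the function name is illustrative, and the HTTP fetch itself (e.g. via urllib) is omitted:

```python
import json

def failing_checks(ready_body):
    """Return the names of dependency checks whose status is not "ok"
    in a /ready response body."""
    data = json.loads(ready_body)
    return [name for name, status in data.get("checks", {}).items()
            if status != "ok"]

body = '{"status": "ready", "checks": {"agfs": "ok", "vectordb": "ok", "api_key_manager": "ok"}}'
print(failing_checks(body))  # []
```

An empty list means all checks passed; otherwise the list names the failing components to investigate.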
- Authentication - API key setup
- Monitoring - Health checks and observability
- API Overview - Complete API reference