MiniRAG now provides optional API support through FastAPI servers that add RAG capabilities to existing LLM services. You can install MiniRAG with API support in two ways (installing MiniRAG works the same as installing LightRAG):
pip install "lightrag-hku[api]"
Note: MiniRAG is distributed in the same package as LightRAG.
# Clone the repository
git clone https://github.com/HKUDS/minirag.git
# Change to the repository directory
cd minirag
# Create a Python virtual environment if necessary
# Install in editable mode with API support
pip install -e ".[api]"
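After installation, you can verify that the minirag-server command is available (a quick sanity check; the full option list is described later in this document):
# Verify the install by printing the command-line help
minirag-server --help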
Before running any of the servers, ensure you have the corresponding backend services running for both the LLM and the embedding model. The new API allows you to mix different bindings for the LLM and the embeddings; for example, you can use Ollama for the embedding model and OpenAI for the LLM.
- LoLLMs must be running and accessible
- Default connection: http://localhost:9600
- Configure using --llm-binding-host and/or --embedding-binding-host if running on a different host/port (see the example below)
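For example, a LoLLMs instance listening on a non-default port could be wired up like this (an illustrative sketch; the port 9601 is just a placeholder):
# Point both the LLM and the embedding bindings at a LoLLMs server on a custom port
minirag-server --llm-binding lollms --llm-binding-host http://localhost:9601 --embedding-binding lollms --embedding-binding-host http://localhost:9601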
- Ollama must be running and accessible
- Requires environment variables to be set or command-line arguments to be provided
- Environment variables: LLM_BINDING=ollama, LLM_BINDING_HOST, LLM_MODEL
- Command line arguments: --llm-binding=ollama, --llm-binding-host, --llm-model
- Default connection is http://localhost:11434 if not provided
The default MAX_TOKENS (num_ctx) for Ollama is 32768. If your Ollama server is short on GPU memory, set it to a lower value, as in the example below.
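A minimal Ollama setup via environment variables might look like this (illustrative values; pick a MAX_TOKENS your GPU can hold):
# Example Ollama configuration via environment variables
LLM_BINDING=ollama
LLM_BINDING_HOST=http://localhost:11434
LLM_MODEL=mistral-nemo:latest
# Lower than the 32768 default if GPU memory is tight
MAX_TOKENS=16384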
- Requires environment variables to be set or command-line arguments to be provided
- Environment variables: LLM_BINDING=openai, LLM_BINDING_HOST, LLM_MODEL, LLM_BINDING_API_KEY
- Command line arguments: --llm-binding=openai, --llm-binding-host, --llm-model, --llm-binding-api-key
- Default connection is https://api.openai.com/v1 if not provided (see the example below)
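For example, OpenAI can be configured either through the environment or on the command line (a sketch; the model name is only an example):
# Environment variables
LLM_BINDING=openai
LLM_BINDING_HOST=https://api.openai.com/v1
LLM_MODEL=gpt-4o-mini
LLM_BINDING_API_KEY=your_api_key
# Equivalent command-line arguments
minirag-server --llm-binding openai --llm-model gpt-4o-mini --llm-binding-api-key your_api_key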
Azure OpenAI API can be created using the following commands in Azure CLI (you need to install Azure CLI first from https://docs.microsoft.com/en-us/cli/azure/install-azure-cli):
# Change the resource group name, location and OpenAI resource name as needed
RESOURCE_GROUP_NAME=MiniRAG
LOCATION=swedencentral
RESOURCE_NAME=MiniRAG-OpenAI
az login
az group create --name $RESOURCE_GROUP_NAME --location $LOCATION
az cognitiveservices account create --name $RESOURCE_NAME --resource-group $RESOURCE_GROUP_NAME --kind OpenAI --sku S0 --location swedencentral
az cognitiveservices account deployment create --resource-group $RESOURCE_GROUP_NAME --model-format OpenAI --name $RESOURCE_NAME --deployment-name gpt-4o --model-name gpt-4o --model-version "2024-08-06" --sku-capacity 100 --sku-name "Standard"
az cognitiveservices account deployment create --resource-group $RESOURCE_GROUP_NAME --model-format OpenAI --name $RESOURCE_NAME --deployment-name text-embedding-3-large --model-name text-embedding-3-large --model-version "1" --sku-capacity 80 --sku-name "Standard"
az cognitiveservices account show --name $RESOURCE_NAME --resource-group $RESOURCE_GROUP_NAME --query "properties.endpoint"
az cognitiveservices account keys list --name $RESOURCE_NAME -g $RESOURCE_GROUP_NAME
The output of the last two commands gives you the endpoint and the key for the Azure OpenAI API. You can use these values to set the environment variables in the .env file.
LLM_BINDING=azure_openai
LLM_BINDING_HOST=endpoint_of_azure_ai
LLM_MODEL=model_name_of_azure_ai
LLM_BINDING_API_KEY=api_key_of_azure_ai
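If Azure OpenAI should also serve the embeddings, the embedding side can be configured in the same style (a hedged example; the variable names mirror the embedding variables listed elsewhere in this document, and the model name is the deployment created above):
EMBEDDING_BINDING=azure_openai
EMBEDDING_BINDING_HOST=endpoint_of_azure_ai
EMBEDDING_MODEL=text-embedding-3-large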
We provide an Ollama-compatible interface for MiniRAG, which presents MiniRAG as an Ollama chat model. This allows AI chat front ends that support Ollama, such as Open WebUI, to access MiniRAG easily.
After starting minirag-server, you can add an Ollama-type connection in the Open WebUI admin panel. A model named minirag:latest will then appear in Open WebUI's model management interface, and users can send queries to MiniRAG through the chat interface.
MiniRAG can be configured using either command-line arguments or environment variables. When both are provided, command-line arguments take precedence over environment variables.
For better performance, the API server's default values for TOP_K and COSINE_THRESHOLD are set to 50 and 0.4 respectively. If COSINE_THRESHOLD were left at the MiniRAG library's default of 0.2, many irrelevant entities and relations would be retrieved and sent to the LLM.
You can configure MiniRAG using environment variables by creating a .env file in your project root directory. Here's a complete example of available environment variables:
# Server Configuration
HOST=0.0.0.0
PORT=9721
# Directory Configuration
WORKING_DIR=/app/data/rag_storage
INPUT_DIR=/app/data/inputs
# RAG Configuration
MAX_ASYNC=4
MAX_TOKENS=32768
EMBEDDING_DIM=1024
MAX_EMBED_TOKENS=8192
#HISTORY_TURNS=3
#CHUNK_SIZE=1200
#CHUNK_OVERLAP_SIZE=100
#COSINE_THRESHOLD=0.4
#TOP_K=50
# LLM Configuration
LLM_BINDING=ollama
LLM_BINDING_HOST=http://localhost:11434
LLM_MODEL=mistral-nemo:latest
# Must be set if using an OpenAI LLM (LLM_MODEL must also be set here or via command-line parameters)
OPENAI_API_KEY=your_api_key
# Embedding Configuration
EMBEDDING_BINDING=ollama
EMBEDDING_BINDING_HOST=http://localhost:11434
EMBEDDING_MODEL=bge-m3:latest
# Security
#MINIRAG_API_KEY=your-api-key-for-accessing-MiniRAG
# Logging
LOG_LEVEL=INFO
# Optional SSL Configuration
#SSL=true
#SSL_CERTFILE=/path/to/cert.pem
#SSL_KEYFILE=/path/to/key.pem
# Optional Timeout
#TIMEOUT=30
The configuration values are loaded in the following order (highest priority first):
- Command-line arguments
- Environment variables
- Default values
For example:
# This command-line argument will override both the environment variable and default value
python minirag.py --port 8080
# The environment variable will override the default value but not the command-line argument
PORT=7000 python minirag.py
Parameter | Default | Description |
---|---|---|
--host | 0.0.0.0 | Server host |
--port | 9721 | Server port |
--llm-binding | ollama | LLM binding to be used. Supported: lollms, ollama, openai |
--llm-binding-host | (dynamic) | LLM server host URL. Defaults based on binding: http://localhost:11434 (ollama), http://localhost:9600 (lollms), https://api.openai.com/v1 (openai) |
--llm-model | mistral-nemo:latest | LLM model name |
--llm-binding-api-key | None | API key for OpenAI-compatible LLM backends |
--embedding-binding | ollama | Embedding binding to be used. Supported: lollms, ollama, openai |
--embedding-binding-host | (dynamic) | Embedding server host URL. Defaults based on binding: http://localhost:11434 (ollama), http://localhost:9600 (lollms), https://api.openai.com/v1 (openai) |
--embedding-model | bge-m3:latest | Embedding model name |
--working-dir | ./rag_storage | Working directory for RAG storage |
--input-dir | ./inputs | Directory containing input documents |
--max-async | 4 | Maximum async operations |
--max-tokens | 32768 | Maximum token size |
--embedding-dim | 1024 | Embedding dimensions |
--max-embed-tokens | 8192 | Maximum embedding token size |
--timeout | None | Timeout in seconds (useful when using slow AI). Use None for infinite timeout |
--log-level | INFO | Logging level (DEBUG, INFO, WARNING, ERROR, CRITICAL) |
--key | None | API key for authentication. Protects minirag server against unauthorized access |
--ssl | False | Enable HTTPS |
--ssl-certfile | None | Path to SSL certificate file (required if --ssl is enabled) |
--ssl-keyfile | None | Path to SSL private key file (required if --ssl is enabled) |
--top-k | 50 | Number of top-k items to retrieve; corresponds to entities in "local" mode and relationships in "global" mode. |
--cosine-threshold | 0.4 | Cosine similarity threshold for node and relation retrieval; works with --top-k to control how many nodes and relations are retrieved. |
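For example, retrieval can be tightened by combining these two options (values are illustrative; tune them for your corpus):
# Retrieve fewer candidates and require a higher similarity score
minirag-server --top-k 30 --cosine-threshold 0.5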
Ollama is the default backend for both the LLM and the embedding model, so you can run minirag-server without parameters and the defaults will be used. Make sure Ollama is installed and running, and that the default models are already pulled into your Ollama instance.
# Run minirag with ollama, mistral-nemo:latest for llm, and bge-m3:latest for embedding
minirag-server
# Using specific models (ensure they are installed in your ollama instance)
minirag-server --llm-model adrienbrault/nous-hermes2theta-llama3-8b:f16 --embedding-model nomic-embed-text --embedding-dim 1024
# Using an authentication key
minirag-server --key my-key
# Using lollms for llm and ollama for embedding
minirag-server --llm-binding lollms
# Run minirag with lollms, mistral-nemo:latest for llm, and bge-m3:latest for embedding, use lollms for both llm and embedding
minirag-server --llm-binding lollms --embedding-binding lollms
# Using specific models (ensure they are installed in your lollms instance)
minirag-server --llm-binding lollms --llm-model adrienbrault/nous-hermes2theta-llama3-8b:f16 --embedding-binding lollms --embedding-model nomic-embed-text --embedding-dim 1024
# Using an authentication key
minirag-server --key my-key
# Using lollms for llm and openai for embedding
minirag-server --llm-binding lollms --embedding-binding openai --embedding-model text-embedding-3-small
# Run minirag with openai, GPT-4o-mini for llm, and text-embedding-3-small for embedding, use openai for both llm and embedding
minirag-server --llm-binding openai --llm-model GPT-4o-mini --embedding-binding openai --embedding-model text-embedding-3-small
# Using an authentication key
minirag-server --llm-binding openai --llm-model GPT-4o-mini --embedding-binding openai --embedding-model text-embedding-3-small --key my-key
# Using lollms for llm and openai for embedding
minirag-server --llm-binding lollms --embedding-binding openai --embedding-model text-embedding-3-small
# Run minirag with azure_openai, GPT-4o-mini for llm, and text-embedding-3-small for embedding, use azure_openai for both llm and embedding
minirag-server --llm-binding azure_openai --llm-model GPT-4o-mini --embedding-binding azure_openai --embedding-model text-embedding-3-small
# Using an authentication key
minirag-server --llm-binding azure_openai --llm-model GPT-4o-mini --embedding-binding azure_openai --embedding-model text-embedding-3-small --key my-key
# Using lollms for llm and azure_openai for embedding
minirag-server --llm-binding lollms --embedding-binding azure_openai --embedding-model text-embedding-3-small
Important Notes:
- For LoLLMs: Make sure the specified models are installed in your LoLLMs instance
- For Ollama: Make sure the specified models are installed in your Ollama instance
- For OpenAI: Ensure you have set up your OPENAI_API_KEY environment variable
- For Azure OpenAI: Build and configure your server as stated in the Prerequisites section
For help on any server, use the --help flag:
minirag-server --help
Note: If you don't need the API functionality, you can install the base package without API support using:
pip install lightrag-hku
All servers (LoLLMs, Ollama, OpenAI and Azure OpenAI) provide the same REST API endpoints for RAG functionality.
Query the RAG system with options for different search modes.
curl -X POST "http://localhost:9721/query" \
-H "Content-Type: application/json" \
-d '{"query": "Your question here", "mode": "hybrid", ""}'
Stream responses from the RAG system.
curl -X POST "http://localhost:9721/query/stream" \
-H "Content-Type: application/json" \
-d '{"query": "Your question here", "mode": "hybrid"}'
Insert text directly into the RAG system.
curl -X POST "http://localhost:9721/documents/text" \
-H "Content-Type: application/json" \
-d '{"text": "Your text content here", "description": "Optional description"}'
Upload a single file to the RAG system.
curl -X POST "http://localhost:9721/documents/file" \
-F "file=@/path/to/your/document.txt" \
-F "description=Optional description"
Upload multiple files at once.
curl -X POST "http://localhost:9721/documents/batch" \
-F "files=@/path/to/doc1.txt" \
-F "files=@/path/to/doc2.txt"
Trigger document scan for new files in the Input directory.
curl -X POST "http://localhost:9721/documents/scan" --max-time 1800
Adjust --max-time according to the estimated indexing time for all new files.
Get Ollama version information
curl http://localhost:9721/api/version
Get Ollama available models
curl http://localhost:9721/api/tags
Handle chat completion requests
curl -N -X POST http://localhost:9721/api/chat -H "Content-Type: application/json" -d \
'{"model":"minirag:latest","messages":[{"role":"user","content":"猪八戒是谁"}],"stream":true}'
For more information about the Ollama API, please visit the Ollama API documentation.
Clear all documents from the RAG system.
curl -X DELETE "http://localhost:9721/documents"
Check server health and configuration.
curl "http://localhost:9721/health"
Contribute to the project: Guide
For LoLLMs:
uvicorn lollms_minirag_server:app --reload --port 9721
For Ollama:
uvicorn ollama_minirag_server:app --reload --port 9721
For OpenAI:
uvicorn openai_minirag_server:app --reload --port 9721
For Azure OpenAI:
uvicorn azure_openai_minirag_server:app --reload --port 9721
When any server is running, visit:
- Swagger UI: http://localhost:9721/docs
- ReDoc: http://localhost:9721/redoc
You can test the API endpoints using the provided curl commands or through the Swagger UI interface. Make sure to:
- Start the appropriate backend service (LoLLMs, Ollama, or OpenAI)
- Start the RAG server
- Upload some documents using the document management endpoints
- Query the system using the query endpoints
- Trigger a document scan if new files are placed in the inputs directory
When starting any of the servers with the --input-dir parameter, the system will automatically:
- Check for existing vectorized content in the database
- Only vectorize new documents that aren't already in the database
- Make all content immediately available for RAG queries
This intelligent caching mechanism:
- Prevents unnecessary re-vectorization of existing documents
- Reduces startup time for subsequent runs
- Preserves system resources
- Maintains consistency across restarts
Important Notes:
- The --input-dir parameter enables automatic document processing at startup
- Documents already in the database are not re-vectorized
- Only new documents in the input directory will be processed
- This optimization significantly reduces startup time for subsequent runs
- The working directory (--working-dir) stores the vectorized documents database
Create your service file minirag-server.service. Modify the following lines from minirag-server.service.example:
Description=MiniRAG Ollama Service
WorkingDirectory=<minirag installed directory>
ExecStart=<minirag installed directory>/minirag/api/start_minirag.sh
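A complete unit file also needs [Unit], [Service] and [Install] sections; a minimal sketch (assuming the bundled example follows the usual systemd layout; adjust the paths to your install) could look like:
[Unit]
Description=MiniRAG Ollama Service
After=network.target

[Service]
WorkingDirectory=<minirag installed directory>
ExecStart=<minirag installed directory>/minirag/api/start_minirag.sh
Restart=on-failure

[Install]
WantedBy=multi-user.target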
Create your service startup script start_minirag.sh. Change the Python virtual environment activation method as needed:
#!/bin/bash
# python virtual environment activation
source /home/netman/minirag-xyj/venv/bin/activate
# start lightrag api server
lightrag-server
Install the service in Linux. Sample commands on an Ubuntu server look like the following (note: lightrag-server.service is the service file name used here; you can change it to minirag-server.service as needed):
sudo cp lightrag-server.service /etc/systemd/system/
sudo systemctl daemon-reload
sudo systemctl start lightrag-server.service
sudo systemctl status lightrag-server.service
sudo systemctl enable lightrag-server.service
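To follow the service logs after it is enabled (standard systemd tooling):
# Follow the service logs
sudo journalctl -u lightrag-server.service -f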