A tiny terminal chatbot that loads a local text file, splits it into chunks, indexes it in Qdrant, and answers questions using an agent with a RAG tool.
This example uses Qdrant over gRPC.
If you don't already have Qdrant running, one quick way is Docker:
```shell
docker run --rm -p 6333:6333 -p 6334:6334 qdrant/qdrant
```

The example uses OpenAI-compatible endpoints, configured via environment variables:
- `OPENAI_API_KEY` (optional for local OpenAI-compatible servers)
- `OPENAI_BASE_URL` (optional; if unset and no key is provided, it defaults to an Ollama-compatible base URL)
- `OPENAI_MODEL` (optional)
For example, against a local Ollama server:

```shell
export OPENAI_BASE_URL=http://localhost:11434/v1
export OPENAI_MODEL=gpt-oss:20b-cloud
# OPENAI_API_KEY can be empty for Ollama
```

Or against OpenAI:

```shell
export OPENAI_API_KEY=...your_key...
export OPENAI_MODEL=gpt-4o-mini
# optional
# export OPENAI_BASE_URL=https://api.openai.com/v1
```

Then, from the repo root:
```shell
go run ./examples/rag-chatbot -file /path/to/your/file.txt
```

Type your questions and press Enter. Type `exit` to quit.
Flags:

- `-chunk-size` / `-chunk-overlap`: control the splitter
- `-topk`: how many chunks to retrieve per tool call
- `-qdrant-host` / `-qdrant-port`: Qdrant connection (defaults to `localhost:6334`)
- `-qdrant-collection`: Qdrant collection name (defaults to `rag_chatbot`)
- `-qdrant-api-key`: optional API key
- `-qdrant-tls`: enable TLS
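To build intuition for what `-chunk-size` and `-chunk-overlap` control, here is a minimal sketch of a sliding-window splitter. The `splitChunks` function is a hypothetical illustration, not the example's actual splitter implementation: each chunk holds at most `size` runes, and consecutive chunks share `overlap` runes so that context spanning a boundary still lands in some chunk.

```go
package main

import "fmt"

// splitChunks splits text into rune windows of at most `size`,
// where consecutive windows overlap by `overlap` runes.
// Illustrative sketch only; the example's real splitter may differ.
func splitChunks(text string, size, overlap int) []string {
	runes := []rune(text)
	if size <= 0 || overlap < 0 || overlap >= size {
		return nil
	}
	step := size - overlap // how far the window advances each iteration
	var chunks []string
	for start := 0; start < len(runes); start += step {
		end := start + size
		if end > len(runes) {
			end = len(runes)
		}
		chunks = append(chunks, string(runes[start:end]))
		if end == len(runes) {
			break
		}
	}
	return chunks
}

func main() {
	// With size=4 and overlap=1, each chunk repeats the last rune
	// of the previous one.
	for _, c := range splitChunks("abcdefghij", 4, 1) {
		fmt.Println(c)
	}
	// → abcd
	// → defg
	// → ghij
}
```

Larger `-chunk-overlap` values reduce the chance of splitting a relevant passage across chunks, at the cost of indexing more redundant text.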