MarcosAugusto47/full_local_rag

Local RAG System

Setup

This project was developed with python==3.11.11 and the mistral:latest model (about 4.1 GB).

  1. Install dependencies:

     pip install -r requirements.txt

  2. Install Ollama (https://ollama.ai/) and pull the Mistral model:

     ollama pull mistral

  3. Create a data directory and add your documents:

     mkdir data
     # Add your .txt or .md files to the data directory

  4. Ingest your documents:

     python ingest_documents.py
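Ingestion typically splits each document into overlapping chunks before embedding them into ChromaDB. The chunk size and overlap below are illustrative assumptions, not necessarily what ingest_documents.py uses; this is a minimal sketch of the chunking step:

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into overlapping chunks.

    The 500/50 defaults are illustrative; check ingest_documents.py
    for the values the project actually uses.
    """
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        # Step forward by chunk_size minus overlap so adjacent
        # chunks share `overlap` characters of context.
        start += chunk_size - overlap
    return chunks

# A 1000-character document yields two full chunks plus a remainder.
print(len(chunk_text("a" * 1000)))  # → 3
```

Overlap matters because a sentence cut at a chunk boundary would otherwise be unrecoverable at retrieval time.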

Usage

Chatbot Mode

python chatbot.py

CLI Mode

python cli.py --query "Your question here"

Interactive CLI Mode

python cli.py

API Server

uvicorn api:app --reload

Then access the API at http://localhost:8000
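Once the server is up, you can query it over HTTP. The /query route and the JSON payload shape below are assumptions for illustration; check api.py (or the interactive docs at http://localhost:8000/docs that FastAPI serves by default) for the real endpoint:

```python
import json
import urllib.request


def build_query_request(question: str, base_url: str = "http://localhost:8000"):
    # "/query" and the {"query": ...} payload are assumed, not confirmed
    # against api.py.
    data = json.dumps({"query": question}).encode()
    return urllib.request.Request(
        f"{base_url}/query",
        data=data,
        headers={"Content-Type": "application/json"},
        method="POST",
    )


req = build_query_request("What is in my documents?")
print(req.full_url)  # → http://localhost:8000/query
# Sending the request requires the server to be running:
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read()))
```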

Project Structure

  • local_rag.py: Main RAG implementation
  • ingest_documents.py: Document ingestion script
  • api.py: FastAPI server
  • cli.py: Command-line interface
  • data/: Directory for your documents
  • chroma_db/: Directory where ChromaDB stores embeddings
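At query time, local_rag.py presumably retrieves the most similar chunks from chroma_db/ and feeds them to Mistral alongside the question. A minimal sketch of that prompt-assembly step (the template wording is an assumption, not the project's actual prompt):

```python
def build_prompt(question: str, retrieved_chunks: list[str]) -> str:
    """Assemble a grounded prompt from retrieved context.

    The template here is illustrative; local_rag.py may word it
    differently.
    """
    context = "\n\n".join(retrieved_chunks)
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )


prompt = build_prompt("What is RAG?", ["RAG = retrieval-augmented generation."])
print(prompt)
```

Grounding the model in retrieved context like this is what keeps answers tied to your local documents rather than the model's general knowledge.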

About

This is a fully local RAG application using Mistral models via Ollama.
