Currently, the project uses llama.cpp for local inference and documents a "Release Process". To make it easier for enterprise users to deploy, we need a containerized version.
Create a `docker-compose.yml` that spins up:

- The FastAPI backend.
- The frontend (served via Nginx).
- A vector store container, if migrating away from local FAISS.

Constraint: must handle llama.cpp model loading inside the container efficiently.
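A minimal sketch of what this could look like. Service names, build paths, the `MODEL_PATH` variable, and the choice of Qdrant as the vector store are all assumptions, not decisions from this issue:

```yaml
services:
  backend:
    build: ./backend            # assumes the FastAPI app lives in ./backend
    ports:
      - "8000:8000"
    volumes:
      # Bind-mount GGUF weights from the host instead of baking multi-GB
      # model files into the image; llama.cpp memory-maps the file at load.
      - ./models:/models:ro
    environment:
      - MODEL_PATH=/models/model.gguf   # hypothetical env var read by the app

  frontend:
    build: ./frontend           # assumes an Nginx stage serving the built assets
    ports:
      - "80:80"
    depends_on:
      - backend

  vectorstore:
    image: qdrant/qdrant        # example only; any external vector store works
    ports:
      - "6333:6333"
    volumes:
      - qdrant_data:/qdrant/storage

volumes:
  qdrant_data:
```

Mounting the model as a read-only volume addresses the loading constraint: image builds stay small and fast, swapping models requires no rebuild, and because llama.cpp mmaps the weights, container startup does not copy the file into memory up front.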