- Python FastAPI app with a LangChain memory-enabled agent using Groq (`langchain_groq`).
- Endpoint: `POST /api/chat` with JSON body `{ session_id, message, user_profile? }`.
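The endpoint can be exercised with a short Python sketch using only the standard library. The payload fields follow the README; the session id and message values are hypothetical, and the final call is left commented out since it requires a running server:

```python
import json
import urllib.request

API_BASE = "http://localhost:8000/api"  # documented default base

# Hypothetical example payload; user_profile is optional and omitted here.
payload = {"session_id": "demo-session-1", "message": "Hello!"}

req = urllib.request.Request(
    f"{API_BASE}/chat",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
# urllib.request.urlopen(req)  # sends the request once the server is running
```

The response shape depends on the app; inspect it with `json.load(urllib.request.urlopen(req))`.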
- Create and activate a venv:

  ```powershell
  python -m venv .venv
  .venv\Scripts\activate
  ```

- Install deps:

  ```powershell
  pip install -r requirements.txt
  ```

- Set environment variables (PowerShell):
  ```powershell
  $env:GROQ_API_KEY = "<your_groq_api_key>"
  $env:GROQ_MODEL = "llama-3.1-70b-versatile"
  $env:GROQ_TEMPERATURE = "0.3"
  ```

- Run the server:

  ```powershell
  uvicorn backend.app.main:app --reload --host 0.0.0.0 --port 8000
  ```

- Vite + React single-page chat UI:
  ```powershell
  cd frontend
  npm install
  npm run dev
  ```

- Optional: change the API base via the `VITE_API_BASE` env variable (defaults to `http://localhost:8000/api`).
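As an illustration, the base can be set in a `frontend/.env` file; the value shown is the documented default, so this example changes nothing:

```
VITE_API_BASE=http://localhost:8000/api
```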
- Memory is kept per `session_id`, in-memory on the server (ephemeral). Persistent stores (Redis/Postgres) can be added later.
- The Groq model defaults to `llama-3.1-70b-versatile`; adjust via the `GROQ_MODEL` env variable.
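The per-session, ephemeral memory described above can be sketched as a plain dictionary keyed by `session_id`. The helper names below are hypothetical, not the app's actual API (which uses LangChain memory); the sketch only shows why memory is lost on restart and isolated per session:

```python
from collections import defaultdict

# Ephemeral store: lives only in this process, so it is lost on restart.
_sessions: dict[str, list[dict]] = defaultdict(list)

def append_message(session_id: str, role: str, content: str) -> None:
    """Record one chat turn under the given session."""
    _sessions[session_id].append({"role": role, "content": content})

def get_history(session_id: str) -> list[dict]:
    """Return all turns for a session (empty list for an unknown id)."""
    return _sessions[session_id]
```

Swapping this for Redis or Postgres later only means replacing the dictionary with a client that reads and writes the same per-session lists.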