Standalone MVP that converts SIMPHONY support conversations into a structured service request draft. The app is designed for academic demonstration and local product prototyping, with a mock-first AI path so it runs even when no OpenAI API key is configured.
- `frontend/`: Next.js review UI.
- `backend/`: FastAPI extraction and drafting API.
- `docs/`: PRD, architecture, and assumptions.
- `data/`: synthetic SIMPHONY seed transcripts and gold labels.
- `scripts/`: helper entrypoints for local evaluation.
- Start the backend:

  ```bash
  cd backend
  python -m venv .venv
  source .venv/bin/activate
  pip install -e .[dev]
  uvicorn app.main:app --reload --port 8000
  ```

- Start the frontend in a second terminal:

  ```bash
  cd frontend
  npm install
  BACKEND_INTERNAL_URL=http://localhost:8000 npm run dev -- --hostname 0.0.0.0 --port 3000
  ```

- Open http://localhost:3000.
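Once both services are up, the backend can be smoke-tested directly. The sketch below builds a JSON POST against the local backend; the `/extract` path and the `{"transcript": ...}` payload shape are assumptions for illustration, not a documented API contract.

```python
# Hypothetical smoke-test client for the local backend. The endpoint path
# and payload schema below are assumptions -- check the FastAPI routes in
# backend/app for the real contract.
import json
import urllib.request


def build_request(transcript: str,
                  url: str = "http://localhost:8000/extract") -> urllib.request.Request:
    """Build a POST request carrying a support transcript as JSON."""
    body = json.dumps({"transcript": transcript}).encode("utf-8")
    return urllib.request.Request(
        url,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )


req = build_request("Guest reports POS terminal 3 frozen during checkout.")
# With the backend running, send it like this:
#   with urllib.request.urlopen(req) as resp:
#       draft = json.load(resp)  # structured service-request draft
```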
Run the full stack with:

```bash
docker compose up --build
```

The Docker setup uses PostgreSQL for persistence and runs the backend with `MOCK_MODE=true` by default.
- Backend tests: `cd backend && pytest`
- Backend lint/type checks: `cd backend && ruff check . && ruff format --check . && mypy app`
- Frontend lint: `cd frontend && npm run lint`
- Frontend unit tests: `cd frontend && npm run test`
- Frontend smoke test: `cd frontend && npm run test:e2e`
- Seed evaluation: `./scripts/run-evaluation.sh`
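The seed evaluation compares extraction output against the gold labels in `data/`. A minimal sketch of the kind of field-level scoring such a script might perform follows; the field names and dict layout are invented for illustration and do not reflect the project's actual label schema.

```python
# Sketch of field-level scoring against gold labels. The field names
# ("module", "severity", "site") are hypothetical examples.
def field_accuracy(predicted: dict, gold: dict) -> float:
    """Fraction of gold fields the extractor reproduced exactly."""
    if not gold:
        return 1.0
    hits = sum(1 for key, value in gold.items() if predicted.get(key) == value)
    return hits / len(gold)


pred = {"module": "POS", "severity": "high", "site": "Store 12"}
gold = {"module": "POS", "severity": "high", "site": "Store 14"}
score = field_accuracy(pred, gold)  # 2 of 3 fields match
```

Aggregating this per-transcript score across the seed set gives a single regression number to track as prompts or extraction rules change.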
- Optional live AI mode: create `backend/.env` with:

  ```
  OPENAI_API_KEY=...
  MOCK_MODE=false
  ```
Without an API key, the backend uses deterministic mock extraction, rule-based completeness checks, and template-based question/draft generation.
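The mock-first fallback described above can be sketched as a simple runtime switch. The function names, the rule used for severity, and the returned fields below are assumptions for illustration, not the project's actual code.

```python
# Sketch of a mock-first AI switch: deterministic extraction unless live
# mode is explicitly enabled AND an API key is present. All names here
# are hypothetical.
import os


def mock_extract(transcript: str) -> dict:
    """Deterministic rule-based extraction: same input -> same draft."""
    severity = "high" if "down" in transcript.lower() else "normal"
    return {"summary": transcript[:80], "severity": severity}


def live_extract(transcript: str) -> dict:
    raise NotImplementedError("call the OpenAI API here")


def extract_fields(transcript: str) -> dict:
    mock = os.getenv("MOCK_MODE", "true").lower() == "true"
    if mock or not os.getenv("OPENAI_API_KEY"):
        return mock_extract(transcript)  # no network, safe default
    return live_extract(transcript)
```

Defaulting `MOCK_MODE` to `"true"` means a fresh checkout never makes a network call unless the operator opts in, which matches the demo-friendly behavior described above.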