A simple fullstack application for cardiovascular health assessment with automated grammar improvement.
├── backend/ # Python FastAPI backend with async job processing
├── frontend/ # React frontend application
└── README.md # This file
- Python 3.8+
- Node.js 16+
- Redis server
- Start Redis (if not already running):

  ```bash
  # macOS
  brew services start redis
  # Linux
  sudo systemctl start redis
  # Docker
  docker run -d -p 6379:6379 redis:alpine
  ```
- Start everything with hot reload:

  ```bash
  ./start_dev.sh
  ```
- Access the Application:
  - Frontend: http://localhost:3000
  - Backend API: http://localhost:8000
  - API Documentation: http://localhost:8000/docs
- Stop everything: press Ctrl+C in the terminal running start_dev.sh; the script will automatically stop all services.
- FastAPI Backend: Auto-reloads on Python file changes
- React Frontend: Auto-reloads on JavaScript/CSS changes
- arq Worker: Auto-restarts on Python file changes (using watchmedo)
- Data Storage: JSON file automatically created on first run
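The load-or-create behavior of the JSON storage can be sketched as follows. The helper name `load_form_data` and the empty-dict default are illustrative assumptions; the real app stores the five form answers in this file:

```python
# Sketch of the load-or-create pattern for the JSON store.
# The file path matches the one described in this README; the helper
# name and default contents are assumptions for illustration.
import json
from pathlib import Path

DATA_FILE = Path("backend/health_form_data.json")

def load_form_data() -> dict:
    """Return stored form data, creating the file on first run."""
    if not DATA_FILE.exists():
        DATA_FILE.parent.mkdir(parents=True, exist_ok=True)
        DATA_FILE.write_text(json.dumps({}, indent=2))
    return json.loads(DATA_FILE.read_text())
```

Because the file is created lazily, no database or migration step is needed before the first request.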
If you prefer to run services individually in separate terminals:
```bash
# Terminal 1 - Backend
cd backend
pip install -r requirements.txt
uvicorn main:app --reload --port 8000
```

```bash
# Terminal 2 - Worker (with hot reload)
cd backend
watchmedo auto-restart --recursive -- arq worker.WorkerSettings
```

```bash
# Terminal 3 - Frontend
cd frontend
npm install
npm start
```

- FastAPI: Modern Python web framework
- JSON File Storage: Simple local file persistence (no database setup required)
- arq: Redis-based job queue for async processing
- Redis: In-memory data store for job queue
- OpenAI + OpenRouter: LLM integration for grammar improvement using Llama 4 Maverick
- React: JavaScript library for building user interfaces
- Axios: HTTP client for API communication
- CSS: Modern styling with responsive design
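The arq side of this stack is mostly configuration. A minimal sketch of what `worker.WorkerSettings` (referenced by the worker command above) might look like follows; the task name `improve_grammar` and its body are assumptions, not the app's actual code:

```python
# Hypothetical sketch of an arq worker module (worker.py).
# Task functions receive the job context as their first argument;
# WorkerSettings registers them and points arq at Redis.
from arq.connections import RedisSettings

async def improve_grammar(ctx, text: str) -> str:
    # The real app would call the LLM through OpenRouter's
    # OpenAI-compatible API here; this stub just returns the input.
    return text

class WorkerSettings:
    functions = [improve_grammar]
    redis_settings = RedisSettings(host="localhost", port=6379)
```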
- Frontend Form: User fills out 5 cardiovascular health questions
- JSON Storage: Form data is saved to backend/health_form_data.json
- Async Processing: When the user triggers grammar improvement, a background job is queued
- LLM Analysis: OpenAI + OpenRouter (Llama 4 Maverick) analyzes the health responses for grammatical issues
- Smart Improvements: The LLM makes minimal changes to improve grammar while preserving meaning
- Real-time Updates: Frontend polls job status and displays results
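The polling step can be sketched as a small loop. This example is in Python for illustration (the real frontend does the equivalent in JavaScript); `fetch_status` is a stand-in for the HTTP call to the job-status endpoint, and the status strings are assumptions:

```python
# Generic job-polling loop: call fetch_status until the job reports a
# terminal state, or give up after `timeout` seconds.
import time

def poll_job(fetch_status, job_id: str,
             interval: float = 1.0, timeout: float = 30.0) -> dict:
    """Poll until the job finishes, then return its final status dict."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = fetch_status(job_id)
        if status.get("status") in ("complete", "failed"):
            return status
        time.sleep(interval)
    raise TimeoutError(f"job {job_id} did not finish within {timeout}s")
```

Injecting `fetch_status` as a parameter keeps the loop testable without a running server.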
- `GET /form` - Retrieve current health form data
- `PUT /form` - Update health form data
- `POST /form/improve-grammar` - Trigger LLM grammar improvement (async)
- `GET /jobs/{job_id}/status` - Check background job status