A web application that analyzes squat form using a custom-trained deep learning model and biomechanical pose metrics.
- **Train** – Upload a Google Drive folder containing labeled squat videos (`good/` and `bad/` subfolders). The app trains a 3D CNN video classifier on them.
- **Analyze** – Upload any squat video and get an instant classification (Good / Needs Improvement) with two personalized coaching tips.
The model only trains once. After training, the app switches automatically to analysis mode.
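The train-once behavior can be as simple as checking for a saved model file on startup. A minimal sketch (the path is the save location named later in this README; the helper name is hypothetical):

```python
from pathlib import Path

# Assumed save location for the trained classifier (see the training section).
MODEL_PATH = Path("outputs/pose_model.pt")

def app_mode(model_path: Path = MODEL_PATH) -> str:
    """Return 'analyze' once a trained model exists on disk, else 'train'."""
    return "analyze" if model_path.exists() else "train"
```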
```
PoseAITraining_web/
├── app.py              # Streamlit UI
├── main.py             # Model architecture, training pipeline
├── backend.py          # Video analysis, pose metrics (MediaPipe), feedback
├── outputs/            # Saved model files (auto-created after training)
├── .streamlit/         # Streamlit config
├── .env                # Environment variables (not committed)
└── requirements.txt
```
```
pip install -r requirements.txt
streamlit run app.py
```

Paste a Google Drive folder link that follows this structure:

```
root/
  good/
    video1.mp4
    ...
  bad/
    video2.mp4
    ...
```
Click **Train model** and wait. Training runs once; the model is saved to `outputs/pose_model.pt`.
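For illustration, the 3D CNN video classifier mentioned above could look roughly like this minimal PyTorch sketch (hypothetical architecture; the real one lives in `main.py`):

```python
import torch
import torch.nn as nn

class TinySquatNet(nn.Module):
    """Toy 3D CNN for binary clip classification (good / bad squat).

    Input shape: (batch, channels, frames, height, width).
    """

    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(3, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),  # collapse time and space to 1x1x1
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x).flatten(1)
        return self.classifier(x)

clip = torch.randn(1, 3, 8, 64, 64)  # one 8-frame RGB clip
logits = TinySquatNet()(clip)        # shape: (1, 2)
```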
Upload a squat video (MP4, MOV, AVI, MKV, WEBM) and click **Analyze video**.
Coaching tips are generated from two sources:
| Source | When used |
|---|---|
| Biomechanical rules (MediaPipe pose metrics) | Always |
| OpenAI GPT-4o-mini | Only if OPENAI_API_KEY is set in .env |
Metrics measured: squat depth, torso lean angle, knee tracking, left/right symmetry, movement stability.
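As an example of one such biomechanical rule, torso lean can be computed as the angle between the hip→shoulder segment and the vertical axis. This is a hypothetical sketch (the actual metric code is in `backend.py`); landmarks are (x, y) pairs in image coordinates, where y grows downward:

```python
import math

def torso_lean_deg(shoulder: tuple, hip: tuple) -> float:
    """Angle (degrees) of the hip->shoulder segment from vertical."""
    dx = shoulder[0] - hip[0]
    dy = hip[1] - shoulder[1]  # positive when the shoulder is above the hip
    return math.degrees(math.atan2(abs(dx), dy))

# A perfectly upright torso yields 0 degrees:
torso_lean_deg((0.5, 0.3), (0.5, 0.6))
```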
```
OPENAI_API_KEY=sk-...   # Optional – enables AI-generated feedback
EPOCHS=4                # Training epochs (default: 4)
BATCH_SIZE=2            # Training batch size (default: 2)
LEARNING_RATE=0.0001    # Learning rate (default: 1e-4)
```

- Python 3.9+
- PyTorch
- OpenCV
- MediaPipe
- Streamlit
- gdown
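The environment variables listed above can be read with fallbacks to the stated defaults, for example (variable names match the `.env` keys; the defaults are those given in this README):

```python
import os

# Hyperparameters with README defaults when the .env keys are unset.
EPOCHS = int(os.getenv("EPOCHS", "4"))
BATCH_SIZE = int(os.getenv("BATCH_SIZE", "2"))
LEARNING_RATE = float(os.getenv("LEARNING_RATE", "1e-4"))
```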
- MediaPipe may fail when the `.venv` path contains non-ASCII characters (e.g. Hebrew folder names). In that case, pose metrics are skipped and classification still works normally.
- The model file (under `outputs/`) is excluded from version control.