Empa is a full-stack web application that leverages computer vision and machine learning to analyze facial expressions and translate them into recognizable emotions. It's designed to assist individuals with communication disorders in social interactions, help those on the autism spectrum understand emotional cues, and enhance empathy in diverse, cross-cultural communications.
- Real-time emotion recognition from live facial footage using our custom-trained model.
- Radar chart showing emotion metrics (measured as the confidence level of each detected emotion).
- Recommended responses based on the detected emotion, using speech transcribed from audio to text (e.g., if you say a phrase expressing anger, the app suggests phrases you can use to soothe that person); see the sketch below.
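The response suggestions rely on the OpenAI API (see the OPENAI_API_KEY setup further down). The actual prompt, model, and helper names are not documented here, so the following is only a minimal sketch, assuming the `openai` Python client (v1+) and a hypothetical `suggest_responses` function:

```python
# Hypothetical sketch: turn a transcript plus the detected emotion into
# empathetic response suggestions via the OpenAI chat API.
# The model name, prompt, and function name are assumptions, not Empa's actual code.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def suggest_responses(transcript: str, emotion: str) -> str:
    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model
        messages=[
            {"role": "system",
             "content": "Suggest short, empathetic ways to respond to the speaker."},
            {"role": "user",
             "content": f'The speaker sounds {emotion}. They said: "{transcript}"'},
        ],
    )
    return completion.choices[0].message.content
```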
- Flask backend
- Python + Jupyter notebook to train the model
- Vanilla React frontend styled with Tailwind CSS
landmarking.ipynb -> Downloads the FER2013 dataset (images) and uses MediaPipe to landmark 463 facial points per image, writing the results to a CSV (fer2013_landmarks_nopathsfixed.csv); see the first sketch below.
- x is the emotion (label) for the image
- y are the facial landmark coordinates per image
landmarking_model -> Trains the custom model with TensorFlow and scikit-learn, using the landmarked data from the CSV; see the second sketch below.
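The notebook itself isn't reproduced here; the following is a minimal sketch of the landmarking step, assuming FER2013 images organized in per-emotion folders and MediaPipe Face Mesh. The folder layout, file extension, and CSV column order are illustrative assumptions.

```python
# Minimal sketch of the landmarking step (not the notebook's exact code):
# read each FER2013 image, extract face-mesh landmarks, write label + coordinates to CSV.
import csv
import glob

import cv2
import mediapipe as mp

face_mesh = mp.solutions.face_mesh.FaceMesh(static_image_mode=True, max_num_faces=1)

with open("fer2013_landmarks_nopathsfixed.csv", "w", newline="") as f:
    writer = csv.writer(f)
    for path in glob.glob("fer2013/*/*.png"):        # assumed layout: fer2013/<emotion>/<image>
        label = path.split("/")[-2]                   # emotion label taken from the folder name
        image = cv2.imread(path)
        if image is None:
            continue
        results = face_mesh.process(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))
        if not results.multi_face_landmarks:
            continue                                  # skip images where no face is detected
        landmarks = results.multi_face_landmarks[0].landmark
        coords = [v for lm in landmarks for v in (lm.x, lm.y, lm.z)]
        writer.writerow([label] + coords)             # label first, then flattened coordinates
```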
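Likewise, the training step is only outlined in this README. A minimal sketch, assuming the CSV layout above (label in the first column, flattened landmark coordinates after it) and a simple dense Keras classifier rather than the repo's exact architecture:

```python
# Minimal sketch of the training step, not the actual landmarking_model notebook.
import pandas as pd
import tensorflow as tf
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder

df = pd.read_csv("fer2013_landmarks_nopathsfixed.csv", header=None)  # assumed: no header row
y = LabelEncoder().fit_transform(df.iloc[:, 0])   # emotion labels -> integer classes
X = df.iloc[:, 1:].values.astype("float32")       # flattened landmark coordinates

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(X.shape[1],)),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(len(set(y)), activation="softmax"),  # one output per emotion class
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(X_train, y_train, validation_data=(X_test, y_test), epochs=20)
```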
- Before you begin, ensure you have met the following requirements:
- Install the required dependencies in the root folder and in both the frontend and backend folders:
npm install
- Create a .env file in this folder with the following variables:
OPENAI_API_KEY={YOUR_API_KEY}
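How the backend reads this key isn't shown in the README; a minimal sketch, assuming the python-dotenv package is available, would be:

```python
# Sketch of loading the key, assuming python-dotenv; the actual app.py may differ.
import os

from dotenv import load_dotenv

load_dotenv()  # reads OPENAI_API_KEY (and any other variables) from .env
OPENAI_API_KEY = os.environ["OPENAI_API_KEY"]
```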
To run the Flask backend (127.0.0.1:5000 by default):
cd server
python3 -m venv venv
source venv/bin/activate (macOS)
venv\Scripts\activate (Windows PowerShell)
pip install -r requirements.txt
python3 app.py
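The routes exposed by app.py aren't documented above. As an illustration only, a landmark-based prediction endpoint could look like the following; the route name, model filename, and payload format are assumptions, not the actual backend code.

```python
# Hypothetical sketch of a Flask prediction route, not the real app.py.
from flask import Flask, jsonify, request
import numpy as np
import tensorflow as tf

app = Flask(__name__)
model = tf.keras.models.load_model("landmark_model.h5")  # assumed model filename

# FER2013 emotion classes
EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]


@app.route("/predict", methods=["POST"])
def predict():
    # Expects a JSON body like {"landmarks": [x1, y1, z1, ...]} (flattened coordinates)
    landmarks = np.array(request.json["landmarks"], dtype=np.float32).reshape(1, -1)
    probs = model.predict(landmarks)[0]
    # Per-emotion confidences, e.g. to drive the radar chart
    return jsonify({e: float(p) for e, p in zip(EMOTIONS, probs)})


if __name__ == "__main__":
    app.run(host="127.0.0.1", port=5000)
```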
To run the React frontend (localhost:3000 by default):
cd client
npm install
npm start
- Radar chart updating live on real-time footage
- Adding detection of emotions in vocal tone and body language for improved conversation suggestions
- Deployment
- Demo Video

