Facial Emotion Recognition

Table of Contents

  • Introduction
  • Features
  • Installation
  • Usage
  • API Endpoints
  • Dataset
  • Model Architecture
  • Results
  • Contributing
  • Acknowledgements
  • Contact

Introduction

Facial Emotion Recognition is a project that identifies human emotions from facial expressions in images or video sequences using deep learning techniques. The recognized emotions are happiness, sadness, surprise, anger, fear, disgust, and neutral.

Features

  • Pre-trained models for quick setup
  • Easy to train on custom datasets
  • REST API for easy integration and testing
  • Supports images
  • Detailed results visualization

Installation

  1. Clone the repository:

    git clone https://github.com/melllinia/FacialExpressionRecognition.git
    cd FacialExpressionRecognition/source
  2. Create a virtual environment and activate it:

    python -m venv venv
    source venv/bin/activate  # On Windows use `venv\Scripts\activate`
  3. Install the required packages:

    pip install -r requirements.txt

Usage

Running the REST API

To start the REST API server, run:

uvicorn server.controllers:app

The API will be available at http://127.0.0.1:8000/, and the interactive Swagger UI at http://127.0.0.1:8000/docs.
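
If you prefer starting the server from Python rather than the shell, uvicorn.run accepts the same application import string:

    import uvicorn

    # Equivalent to running `uvicorn server.controllers:app` from the shell.
    uvicorn.run("server.controllers:app", host="127.0.0.1", port=8000)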

API Endpoints

Emotion Recognition from Image with Probabilities Response

  • Endpoint: /model/detect-emotion/
  • Method: POST
  • Response: A list of detected faces, each with its coordinates and per-emotion probabilities (see the example below)
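
As a quick illustration, a client can post an image and read back the JSON response. The multipart field name file is an assumption here, so verify the exact parameter name in the Swagger UI at /docs:

    import requests

    # Hypothetical client for the probabilities endpoint; the multipart
    # field name "file" is an assumption -- verify it in the Swagger UI.
    url = "http://127.0.0.1:8000/model/detect-emotion/"
    with open("face.jpg", "rb") as f:
        response = requests.post(url, files={"file": f})
    response.raise_for_status()
    print(response.json())  # face coordinates with per-emotion probabilities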

Emotion Recognition from Image with Image Response

  • Endpoint: /model/detect-emotion/image
  • Method: POST
  • Response: Annotated image
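
Because this endpoint returns the annotated image itself, the response body can be written straight to disk (the file field name is again an assumption):

    import requests

    # Hypothetical client for the image endpoint; saves the annotated
    # image returned in the response body.
    url = "http://127.0.0.1:8000/model/detect-emotion/image"
    with open("face.jpg", "rb") as f:
        response = requests.post(url, files={"file": f})
    response.raise_for_status()
    with open("face_annotated.jpg", "wb") as out:
        out.write(response.content)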

Dataset

The model can be trained on various facial emotion datasets such as FER2013, CK+, etc. Make sure to download and place the dataset in the appropriate directory and update the path in the configuration file.
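
For reference, the standard Kaggle release of FER2013 is a single CSV of 48×48 grayscale images with space-separated pixel values. A loader along these lines is a common starting point; this is a sketch assuming the usual emotion, pixels, and Usage columns, not the project's own data pipeline:

    import numpy as np
    import pandas as pd

    # Sketch of a FER2013 loader; assumes the standard Kaggle CSV with
    # "emotion", "pixels", and "Usage" columns of 48x48 grayscale images.
    def load_fer2013(csv_path, usage="Training"):
        df = pd.read_csv(csv_path)
        df = df[df["Usage"] == usage]
        images = np.stack([
            np.array(s.split(), dtype=np.uint8).reshape(48, 48)
            for s in df["pixels"]
        ])
        labels = df["emotion"].to_numpy()
        return images, labels

    train_images, train_labels = load_fer2013("fer2013.csv")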

Model Architecture

The model uses a convolutional neural network (CNN) to extract features and classify emotions. The architecture can be customized by modifying the model/net.py file.
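
As a rough illustration of the kind of network involved (a sketch assuming PyTorch, not the architecture actually defined in model/net.py), a 48×48 grayscale, seven-class classifier might look like:

    import torch.nn as nn

    # Illustrative CNN for 48x48 grayscale input and 7 emotion classes;
    # a sketch only, not the model defined in model/net.py.
    class EmotionNet(nn.Module):
        def __init__(self, num_classes=7):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),  # 48x48 -> 24x24
                nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),  # 24x24 -> 12x12
            )
            self.classifier = nn.Sequential(
                nn.Flatten(),
                nn.Linear(64 * 12 * 12, 128), nn.ReLU(),
                nn.Linear(128, num_classes),
            )

        def forward(self, x):
            return self.classifier(self.features(x))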

Results

Our pre-trained model achieves the following accuracy on the FER2013 dataset:

  • Accuracy: 55%

Contributing

Contributions are welcome! Please follow these steps to contribute:

  1. Fork the repository.
  2. Create a new branch: git checkout -b feature-branch-name.
  3. Make your changes and commit them: git commit -m 'Add some feature'.
  4. Push to the branch: git push origin feature-branch-name.
  5. Open a pull request.

Please make sure your code follows the project's coding standards and includes proper documentation.

Acknowledgements

Contact

For any queries, please contact [email protected] or [email protected].


Keep innovating! 💡🚀
