In this project, I combined my knowledge of computer vision techniques and deep learning to build an end-to-end facial keypoint recognition system. Facial keypoints are points around the eyes, nose, and mouth that appear on any face; they are used in many applications, from facial tracking to emotion recognition. My code takes in any image containing faces, locates each face, and identifies its facial keypoints.
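To give a concrete sense of that pipeline, here is a hedged sketch: OpenCV's stock Haar cascade finds faces, and a trained Keras model predicts keypoints on each cropped face. The model filename, image filename, 96x96 input size, and [-1, 1] output scaling are illustrative assumptions, not taken from this repo.

```python
import cv2
from keras.models import load_model

# Hypothetical artifacts: a trained keypoint model and OpenCV's stock face detector.
model = load_model('my_model.h5')
face_cascade = cv2.CascadeClassifier('haarcascade_frontalface_default.xml')

image = cv2.imread('example.jpg')  # any image containing faces
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

for (x, y, w, h) in face_cascade.detectMultiScale(gray, scaleFactor=1.2, minNeighbors=5):
    # Crop each detected face, resize to the model's input size, and normalize.
    face = cv2.resize(gray[y:y + h, x:x + w], (96, 96)) / 255.0
    keypoints = model.predict(face.reshape(1, 96, 96, 1))[0]
    # Assuming the model regresses keypoints in [-1, 1] over the 96x96 crop,
    # map them back into original-image coordinates and draw them.
    for kx, ky in keypoints.reshape(-1, 2):
        cx = int(x + (kx * 48 + 48) * w / 96)
        cy = int(y + (ky * 48 + 48) * h / 96)
        cv2.circle(image, (cx, cy), 2, (0, 255, 0), -1)

cv2.imwrite('output.jpg', image)
```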
- Clone the repository, and navigate to the downloaded folder.

```
git clone https://github.com/JonathanKSullivan/Facial-Keypoint-Detection.git
cd Facial-Keypoint-Detection
```
- Create (and activate) a new environment with Python 3.5 and the `numpy` package.
  - Linux or Mac:

```
conda create --name aind-cv python=3.5 numpy
source activate aind-cv
```

  - Windows:

```
conda create --name aind-cv python=3.5 numpy scipy
activate aind-cv
```
- Install/Update TensorFlow (for this project, you may use CPU only).
  - Option 1: To install TensorFlow with GPU support, follow the guide to install the necessary NVIDIA software on your system. If you are using the Udacity AMI, you can skip this step and only need to install the `tensorflow-gpu` package:

```
pip install tensorflow-gpu -U
```

  - Option 2: To install TensorFlow with CPU support only:

```
pip install tensorflow -U
```
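To confirm the install went through, a minimal check (my addition, not part of the original steps; the `device_lib` import matches the TF 1.x API of this Python 3.5-era setup):

```python
# Sanity check: TensorFlow imports and (optionally) sees a GPU.
import tensorflow as tf
print(tf.__version__)

# With tensorflow-gpu installed, this should list a GPU device as well as the CPU.
from tensorflow.python.client import device_lib
print([d.name for d in device_lib.list_local_devices()])
```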
- Install/Update Keras.

```
pip install keras -U
```
- Switch Keras backend to TensorFlow.
  - Linux or Mac:

```
KERAS_BACKEND=tensorflow python -c "from keras import backend"
```

  - Windows:

```
set KERAS_BACKEND=tensorflow
python -c "from keras import backend"
```
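If you want to confirm the switch took effect, a quick check (my addition) is:

```python
# Verify that Keras is now using the TensorFlow backend.
from keras import backend as K
assert K.backend() == 'tensorflow', K.backend()
```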
- Install a few required pip packages (including OpenCV).

```
pip install -r requirements.txt
```
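A quick import check for the pip-installed packages (my addition; OpenCV in particular, since it is the one most likely to fail):

```python
# Confirm OpenCV installed correctly.
import cv2
print(cv2.__version__)
```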
All of the data you'll need to train a neural network is in this repo, in the `data` subdirectory. This folder contains a zipped training set and a zipped test set.
- Navigate to the data directory.

```
cd data
```

- Unzip the training and test data (in that same location). On Windows, you can unzip the data by double-clicking the zipped files; on Mac or Linux, use the terminal commands below.

```
unzip training.zip
unzip test.zip
```
You should be left with two `.csv` files with the same names as the zip archives (`training.csv` and `test.csv`). You may delete the zipped files.
Troubleshooting: If you have trouble unzipping this data, you can download the same training and test data from Kaggle.
Now, with that data unzipped, you should have everything you need!
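As a quick sanity check of the unzipped data, here is a hedged sketch of loading it with pandas. It assumes the CSVs follow the Kaggle facial-keypoints layout (keypoint coordinate columns plus an `Image` column of space-separated pixel values for 96x96 grayscale images); the column details are assumptions, not taken from this repo.

```python
# Minimal sketch: load the training CSV and unpack the images.
import numpy as np
import pandas as pd

df = pd.read_csv('data/training.csv')
df = df.dropna()  # some rows may be missing keypoint annotations

# 'Image' holds space-separated pixel strings; unpack them into 96x96 arrays.
images = np.stack(
    df['Image'].apply(lambda s: np.array(s.split(), dtype='float32')).values
).reshape(-1, 96, 96)
keypoints = df.drop('Image', axis=1).values  # (x, y) coordinate pairs per row

print(images.shape, keypoints.shape)
```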
Main project files:
- mimic.js: JavaScript file with code that connects to the Affectiva API and processes results.
- index.html: Dynamic webpage that displays the video feed and results.
- mimic.css: Stylesheet file that defines the layout and presentation for HTML elements.
There are two additional files provided for serving the project as a local web application:
- serve.py: A lightweight Python webserver required to serve the webpage over HTTPS, so that we can access the webcam feed.
- generate-pemfile.sh: A shell script you’ll need to run once to generate an SSL certificate for the webserver.
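For reference, a minimal HTTPS static-file server along these lines might look like the sketch below. This is illustrative only, not the repo's actual serve.py; the port and the `cert.pem` filename are assumptions (the certificate would come from generate-pemfile.sh). Browsers require HTTPS before granting webcam access, which is why a plain HTTP server won't do.

```python
# Illustrative sketch of a minimal HTTPS static-file server (not the actual serve.py).
import http.server
import ssl

httpd = http.server.HTTPServer(('localhost', 4443),
                               http.server.SimpleHTTPRequestHandler)
# 'cert.pem' is assumed to be a combined certificate/key file
# produced by generate-pemfile.sh.
httpd.socket = ssl.wrap_socket(httpd.socket, certfile='cert.pem', server_side=True)
print('Serving https://localhost:4443 ...')
httpd.serve_forever()
```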
- Navigate back to the repo. (Your source environment should still be activated at this point.)

```
cd
cd Facial-Keypoint-Detection
```
- Open the notebook and follow the instructions.

```
jupyter notebook CV_project.ipynb
```
- Udacity - Initial work - AIND-CV-FacialKeypoints
- Jonathan Sullivan
- Hackbright Academy
- Udacity
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License. Please refer to Udacity Terms of Service for further information.