A lion identification service based on lion face and whisker recognition.
This application is currently deployed via a blue-green methodology using GitHub Actions. The process is as follows:
- Work on code changes in a feature branch based off of the staging branch
- Submit a merge request to the staging branch
- Run the Deploy GitHub Actions workflow pointed at the staging branch (this can also be scripted; see the sketch after this list)
- Receive approval from project administrators
- Submit a merge request to the master branch
- Determine which environment (blue or green) is active in production
- Run the Deploy GitHub Actions workflow pointed at the inactive environment
- Point the staging and production web apps in Heroku to the inactive environment, making it active
- Run the Destroy GitHub Actions workflow pointed at the new inactive environment
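The Deploy and Destroy workflows are normally run from the GitHub Actions UI. If you want to script the dispatch step, the sketch below uses the GitHub REST API; it assumes the workflow is defined in `.github/workflows/deploy.yml`, exposes a `workflow_dispatch` trigger, and that a token with workflow permissions is exported as `GITHUB_TOKEN`. The repository slug and workflow file name are assumptions, not confirmed by this README, so adjust them to the actual setup.

```python
import os
import requests

# Assumptions (not confirmed by this README): repository slug and workflow file name.
REPO = "linc-lion/linc-cv"
WORKFLOW_FILE = "deploy.yml"


def dispatch_deploy(ref: str) -> None:
    """Trigger the Deploy workflow against a branch, e.g. 'staging'."""
    resp = requests.post(
        f"https://api.github.com/repos/{REPO}/actions/workflows/{WORKFLOW_FILE}/dispatches",
        headers={
            "Accept": "application/vnd.github+json",
            "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
        },
        json={"ref": ref},
    )
    resp.raise_for_status()  # GitHub returns 204 No Content on success


if __name__ == "__main__":
    dispatch_deploy("staging")
```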
- Clone `linc-cv-data`.
- Create a `data` folder under `linc-cv/linc_cv`.
- Copy `whisker_model_yolo.h5` from `linc-cv-data` to `linc-cv/linc_cv/data`.
  - The `whisker_model_yolo.h5` model was built by previous developers. Unfortunately, the training code is missing.
- Export the following ENV variables:
  - `LINC_USERNAME`: the username used to log in to the LINC website.
  - `LINC_PASSWORD`: the password used to log in to the LINC website.
- Make sure your `LINC_USERNAME` is added to the `ALLOWED_EMAILS` environment variable on the Heroku `linc-api` app.
- Execute the following training commands from the `linc-cv` project root (a scripted runner is sketched after this list):
  - `PYTHONPATH=$(pwd) python linc_cv/main.py --parse-lion-database`
  - `PYTHONPATH=$(pwd) python linc_cv/main.py --download-cv-images`
  - `PYTHONPATH=$(pwd) python linc_cv/main.py --extract-cv-features`
  - `PYTHONPATH=$(pwd) python linc_cv/main.py --train-cv-classifier`
  - `PYTHONPATH=$(pwd) python linc_cv/main.py --download-whisker-images`
  - `PYTHONPATH=$(pwd) python linc_cv/main.py --train-whisker-classifier`
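The commands above can also be driven from a small Python runner. This is only a convenience sketch: it assumes it is executed from the `linc-cv` project root with the conda environment active, and that `LINC_USERNAME` and `LINC_PASSWORD` are already exported as described above.

```python
import os
import subprocess

# Training pipeline stages, in the order listed above.
STAGES = [
    "--parse-lion-database",
    "--download-cv-images",
    "--extract-cv-features",
    "--train-cv-classifier",
    "--download-whisker-images",
    "--train-whisker-classifier",
]


def run_pipeline() -> None:
    # The LINC credentials must be exported before training (see above).
    for var in ("LINC_USERNAME", "LINC_PASSWORD"):
        if var not in os.environ:
            raise RuntimeError(f"{var} is not set")

    # Equivalent to prefixing each command with PYTHONPATH=$(pwd).
    env = {**os.environ, "PYTHONPATH": os.getcwd()}
    for stage in STAGES:
        print(f"==> python linc_cv/main.py {stage}")
        subprocess.run(["python", "linc_cv/main.py", stage], env=env, check=True)


if __name__ == "__main__":
    run_pipeline()
```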
linc-cv uses three components: Flower (the Celery monitoring UI), Celery (the task queue that runs classification and training jobs), and Supervisor (the process manager that keeps them running).
- Run `brew install gcc` if you are using a Mac with Apple Silicon.
- Download Conda.
- Run `conda create --name linc-cv python=3.10`.
- Run `conda activate linc-cv`.
- Run `pip install --upgrade pip setuptools wheel`.
- Run `pip install -r requirements.txt`.
- Install redis; Celery uses redis as its message broker.
- Download models from the `linc-cv-data` repository to `linc_cv/data`.
- Install Homebrew.
- Run `brew install supervisor`.
- Make a copy of `linc-cv/linc_cv/tests/supervisord` to `linc-cv/linc_cv/tests/supervisord_local`.
- Open `/opt/homebrew/etc/supervisord.conf` with your editor of choice.
- Scroll to the bottom of the file.
- Replace `files = /usr/local/etc/supervisor.d/*.ini` with `files = /path/to/linc-cv/linc_cv/tests/supervisord_local/*.conf`. You need to replace `/path/to` with your local path to the `linc-cv` project.
- Create a `logs` folder in `linc-cv`.
- Open `celery.conf` and `flower.conf` in `linc-cv/linc_cv/tests/supervisord_local`.
- Replace `user=johndoe` with your own username. This is the username you use to log in to your machine.
- You may need to modify the path in `command=/opt/anaconda3/envs/...` if your conda is not installed in the default location.
- Make sure the paths in `environment=PYTHONPATH=/Users/.../linc/linc-cv` and `stdout_logfile=/Users/.../linc/linc-cv/logs/...` are correct.
- Run `/opt/homebrew/opt/supervisor/bin/supervisord -c /opt/homebrew/etc/supervisord.conf --nodaemon`.
- Make sure redis is installed and running on your machine. If not, run `brew install redis` and then run `redis-server` in a terminal.
- `celery-classification.log`, `celery-training.log` and `flower.log` will be created in the `linc-cv/logs` folder.
- Now you should be able to navigate to the Flower UI at http://localhost:5555/ (a quick health-check sketch follows this list).
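Once supervisord is up, you can sanity-check the local stack with a short snippet. This is a minimal sketch assuming the default ports (redis on 6379, Flower on 5555); it only verifies that the services respond, not that the Celery workers are healthy. The `redis` Python package is normally pulled in for Celery's redis broker support, but install it separately if it is missing.

```python
import urllib.request

import redis  # redis-py client

# Ping the redis broker on the default local port.
broker = redis.Redis(host="localhost", port=6379)
print("redis ping:", broker.ping())

# Confirm the Flower UI started by supervisord is reachable.
with urllib.request.urlopen("http://localhost:5555/") as resp:
    print("flower HTTP status:", resp.status)
```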
- Execute the following code snippet to download the pretrained model:

  ```
  conda activate linc-cv
  (linc-cv) python
  >>> import pretrainedmodels
  >>> model_name = 'senet154'
  >>> model = pretrainedmodels.__dict__[model_name](num_classes=1000, pretrained='imagenet')
  ```

- The pretrained model is saved to `$HOME/.torch`.
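To confirm the download worked, you can list the cached checkpoint files. This is a quick sketch; the exact file name and layout under `$HOME/.torch` depend on the `pretrainedmodels` version.

```python
from pathlib import Path

# pretrainedmodels caches downloaded weights under $HOME/.torch (see the note above).
cache_dir = Path.home() / ".torch"

# Torch checkpoints are typically stored as .pth files; the senet154 weights should appear here.
for checkpoint in sorted(cache_dir.rglob("*.pth")):
    print(checkpoint, f"{checkpoint.stat().st_size / 1e6:.1f} MB")
```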
- Under the project directory `linc-cv`, execute the following in a terminal:

  ```
  export API_KEY=blah
  PYTHONPATH=$(pwd) python linc_cv/web.py
  ```
- Replace `http://localhost:5000` with the real service IP address and port number.
- Example of request and response (truncated for brevity) for lion face recognition:

  ```
  curl --location --request POST 'http://localhost:5000/linc/v1/classify' \
  --header 'ApiKey: blah' \
  --header 'Content-Type: application/json' \
  --data-raw '{
      "type": "cv",
      "url": "https://raw.githubusercontent.com/linc-lion/linc-cv/master/tests/images/female_lion_face_1.jpeg"
  }'
  ```

  ```json
  {
      "id": "f9591d42-96e6-4178-9022-cab02cd86b3b",
      "status": "PENDING",
      "errors": []
  }
  ```

- Replace the last part of the URL with the request id.

  ```
  curl --location --request GET 'http://localhost:5000/linc/v1/results/f9591d42-96e6-4178-9022-cab02cd86b3b' \
  --header 'ApiKey: blah' \
  --header 'Content-Type: application/json'
  ```

  ```json
  {
      "status": "finished",
      "predictions": [
          { "lion_id": "80", "probability": 0.412 },
          { "lion_id": "40", "probability": 0.032 },
          { "lion_id": "297", "probability": 0.028 }
      ]
  }
  ```

- Example of request and response (truncated for brevity) for lion whisker recognition:

  ```
  curl --location --request POST 'http://localhost:5000/linc/v1/classify' \
  --header 'ApiKey: blah' \
  --header 'Content-Type: application/json' \
  --data-raw '{
      "type": "whisker",
      "url": "https://raw.githubusercontent.com/linc-lion/linc-cv/master/tests/images/sample_lion_whisker_23.jpg"
  }'
  ```

  ```json
  {
      "id": "3f6dbfdf-98ea-4d76-92af-e5ff9912546b",
      "status": "PENDING",
      "errors": []
  }
  ```

- Replace the last part of the URL with the request id.

  ```
  curl --location --request GET 'http://localhost:5000/linc/v1/results/3f6dbfdf-98ea-4d76-92af-e5ff9912546b' \
  --header 'ApiKey: blah' \
  --header 'Content-Type: application/json'
  ```

  ```json
  {
      "status": "finished",
      "predictions": [
          { "lion_id": "15", "probability": 0.951 },
          { "lion_id": "372", "probability": 0.79 },
          { "lion_id": "94", "probability": 0.785 }
      ]
  }
  ```
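For programmatic use, the classify-then-poll flow shown above can be scripted. Below is a minimal Python sketch using the `requests` library; the base URL, `ApiKey` value, and polling interval are placeholders to adapt to your deployment, and only the `PENDING`/`finished` statuses shown above are handled.

```python
import time

import requests

# Placeholders: point these at your deployment and API key.
BASE_URL = "http://localhost:5000"
HEADERS = {"ApiKey": "blah", "Content-Type": "application/json"}


def classify(image_url: str, kind: str = "cv", timeout_s: int = 120) -> dict:
    """Submit a classification job ('cv' for face, 'whisker' for whiskers) and poll for results."""
    # Submit the job; the response contains the request id used to fetch results.
    resp = requests.post(
        f"{BASE_URL}/linc/v1/classify",
        headers=HEADERS,
        json={"type": kind, "url": image_url},
    )
    resp.raise_for_status()
    request_id = resp.json()["id"]

    # Poll the results endpoint until the job reports "finished".
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        result = requests.get(
            f"{BASE_URL}/linc/v1/results/{request_id}", headers=HEADERS
        ).json()
        if result.get("status") == "finished":
            return result
        time.sleep(2)
    raise TimeoutError(f"Classification {request_id} did not finish within {timeout_s}s")


if __name__ == "__main__":
    print(classify(
        "https://raw.githubusercontent.com/linc-lion/linc-cv/master/tests/images/female_lion_face_1.jpeg"
    ))
```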