This repository only contains the blurring algorithms and API.
It is based on YOLOv11 for object detection (faces and license plates), using a custom-trained model.
Blurring is done on the original JPEG pictures by manipulating low-level MCUs (Minimum Coded Units) directly in the JPEG data, so that all other parts of the original image stay unchanged (no full decompression/recompression). This also saves CPU.
The blurring service calls a detection service through local HTTP calls.
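As an illustration of the MCU-level approach, here is a minimal Python sketch (not code from this repository; the 16x16 block size is an assumption matching 4:2:0 chroma subsampling) showing how a detected pixel box can be expanded outward to MCU boundaries, so that only whole blocks need to be rewritten while the rest of the file stays untouched:

# Hypothetical helper: snap a pixel-space box outward to the JPEG MCU grid
MCU = 16  # assumption: 16x16 MCUs (4:2:0 chroma subsampling)

def mcu_aligned(x, y, w, h):
    x0 = (x // MCU) * MCU
    y0 = (y // MCU) * MCU
    x1 = -(-(x + w) // MCU) * MCU  # ceiling division
    y1 = -(-(y + h) // MCU) * MCU
    return x0, y0, x1 - x0, y1 - y0

print(mcu_aligned(35, 50, 40, 30))  # -> (32, 48, 48, 32)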
These dependencies are needed for lossless JPEG transformations:
- turbojpeg library and headers
- exiftran
You can install them through your package manager, for example in Ubuntu:
sudo apt install libturbojpeg0-dev libjpeg-turbo-progs exiftran
Basic dependencies may also need to be installed:
sudo apt install git python-is-python3 python3-pip
Running on a GPU requires NVIDIA drivers and CUDA.
You can download code from this repository with git clone:
git clone https://github.com/cquest/sgblur.git
cd sgblur/
We use pip to handle Python dependencies. You can create a virtual environment first:
python -m venv env
source ./env/bin/activate
Install python dependencies for the API:
pip install -e .
To use the blurring API, two APIs need to be launched:
- the detection API, which does the machine-learning processing and benefits from a powerful GPU
- the blurring API, which calls the detection API and blurs the picture using the detected objects. It does not require a GPU, but is CPU-bound.
The detection API can be launched with the following command:
# detection service on port 8001 (1 worker to save GPU VRAM)
uvicorn src.detect.detect_api:app --port 8001
The detection API is usually accessed through the blurring API, but can also be used directly on localhost:8001.
Detection can be run on a single picture using the following HTTP call (here made using curl):
# Considering your picture is called original.jpg
curl -X 'POST' \
'http://127.0.0.1:8001/detect/' \
-F '[email protected]'
Example using httpie:
http --form POST http://127.0.0.1:8001/detect/ [email protected]
The response is a JSON object containing the detected objects.
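The same call can also be scripted in Python, for example with the third-party requests library (a minimal sketch, not a client shipped with this repository; the 'picture' form field matches the curl example above):

import requests

with open("original.jpg", "rb") as f:
    response = requests.post("http://127.0.0.1:8001/detect/", files={"picture": f})
response.raise_for_status()
print(response.json())  # JSON object listing the detected objects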
The API documentation is available on localhost:8001/docs.
The blurring API can be launched with the following command:
# blurring service (several workers using CPU for the blurring)
uvicorn src.blur.blur_api:app --port 8000 --workers 8
It is then accessible on localhost:8000.
A single picture can be blurred using the following HTTP call (here made using curl):
# Considering your picture is called original.jpg
curl -X 'POST' \
'http://127.0.0.1:8000/blur/' \
-F '[email protected]' \
--output blurred.jpg
Example using httpie:
http --form POST http://127.0.0.1:8000/blur/ [email protected] --download --output blurred.jpg
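The blurring call can also be scripted in Python with the requests library (a minimal sketch under the same assumptions as above):

import requests

with open("original.jpg", "rb") as f:
    response = requests.post("http://127.0.0.1:8000/blur/", files={"picture": f})
response.raise_for_status()
with open("blurred.jpg", "wb") as out:
    out.write(response.content)  # blurred JPEG bytes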
A demo API with a minimal web UI is running on https://panoramax.openstreetmap.org/blur/
DO NOT USE it in production without prior authorization. Thanks.
Three environment variables can be used to configure the API:
- CROP_SAVE_DIR: directory where cropped pictures are saved (default: /data/crops)
- TMP_DIR: directory where temporary files are saved (default: /dev/shm)
- DETECT_URL: URL of the face/plate detection API (default: http://localhost:8001). If set to "", detection will be done locally. This is not recommended in production, but it can be useful for testing.
The API documentation is available on localhost:8000/docs.
Pull requests are welcome. For major changes, please open an issue first to discuss what you would like to change.
Copyright (c) Panoramax team 2022-2024, released under MIT license.