This repository contains the full artifact (code and environment) for our PETS 2025 paper, "HyDia: FHE-based Facial Matching with Hybrid Approximations and Diagonalization". Follow the steps below to (1) build the Docker image, (2) run the five face-matching approaches, (3) capture their latency/parameter output, and (4) generate the figures present in the manuscript.
```
git clone https://github.com/n7koirala/image_matching.git
cd image_matching
```
If Docker is not installed, please refer to Docker Installation to install Docker for Ubuntu 22.04. Then, proceed with building the Docker image:
```
docker build --tag popets2025-hydia .
```
This build:
- Installs all system prerequisites on top of Ubuntu 22.04 (build-essential, cmake, libomp, etc.).
- Fetches and compiles OpenFHE v1.2.3.
- Fetches and installs all the dependencies needed to generate the figures.
- Compiles the project into /opt/image_matching/build.
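As an optional sanity check, you can open a shell inside the freshly built image and confirm the compiled binaries are present (a sketch; it assumes the image's default entrypoint is the artifact script and overrides it):

```bash
# List the compiled artifact inside the image without running the experiments.
docker run --rm --entrypoint /bin/bash popets2025-hydia \
  -c "ls /opt/image_matching/build"
```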
First, create a directory on the host where the figures generated from the manuscript will be stored:
```
mkdir -p ~/artifact_output
```
Run the command below to execute all five approaches described in the paper and store the results (latency statistics for the Membership and Index scenarios, the FHE parameters used, basic correctness checks, etc.) in output.log. It also generates all the figures present in the manuscript under ~/artifact_output:
```
docker run --rm -v ~/artifact_output:/tmp popets2025-hydia | tee output.log
```
All the figures present in the main manuscript can then be found under ~/artifact_output/manuscript_figures.
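To verify that the run completed, you can list the generated figures in the mounted output directory:

```bash
# The figures should appear on the host under the mounted directory.
ls ~/artifact_output/manuscript_figures
```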
The command above only reproduces the graphs from the precomputed data (covering a subset of the FRGC 2.0 RGB dataset) located inside image_matching/tools/figures. The full set of data can be found in image_matching/HyDia_full_data.zip; the data present in image_matching/tools/figures were obtained using this full set.
We have not included the full FRGC 2.0 RGB dataset (including the images), as it was provided by the CVRL lab at the University of Notre Dame and may contain private or proprietary content that cannot be publicly shared.
Therefore, we report our results based on the obtained data (embeddings) and provide the code to generate the graphs from them, as they are the exact ones we present in the paper.
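If you want to inspect the full data archive before working with it, you can list its contents (assuming unzip is available on the host):

```bash
# List the contents of the full data set shipped with the repository.
unzip -l image_matching/HyDia_full_data.zip
```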
run_artifact.sh inside the container uses a pre-generated small encrypted database of 2^10 facial-feature vectors. Optionally, the database size can be increased (up to 2^20) once that number of vectors has been generated with the generate_data.sh script located under tools/. For instance, to generate 2^15 vectors, run generate_data.sh "2_15.dat" $((2**15)). Then, edit run_artifact.sh to reflect your changes, as sketched below.
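For example (a sketch; it assumes run_artifact.sh sits at the repository root and refers to the dataset file by its 2_10.dat name, so adjust the substitution to match your copy of the script):

```bash
# Generate a 2^15-vector dataset from the build folder.
cd /opt/image_matching/build
../tools/generate_data.sh "2_15.dat" $((2**15))
# Hypothetical edit: swap the old dataset name for the new one in the script.
sed -i 's/2_10\.dat/2_15.dat/g' ../run_artifact.sh
```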
To generate an experimental dataset, run the following script from the build folder:
```
../tools/generate_data.sh [FILENAME] [SIZE]
```
Note that the dataset will be automatically placed in the test folder. For example, to generate a dataset with 1024 database vectors located at /test/2_10.dat, try:
```
../tools/generate_data.sh "2_10.dat" $((2**10))
```
To run the latency experiments on the image matching application, navigate to the build folder and use the following command in your terminal:
```
./ImageMatching ../test/[FILENAME] [APPROACH]
```
The [FILENAME] parameter must correspond to an existing file generated by the above scripts.
The [APPROACH] parameter determines which algorithm is used to perform the encrypted facial matching on the provided dataset. The possible values for this parameter are given below:
| Parameter | Experimental Approach |
|---|---|
| 1 | Literature Baseline Approach |
| 2 | GROTE Approach (Baseline + Group Testing) |
| 3 | Blind-Match Approach |
| 4 | HERS Approach |
| 5 | HyDia Approach (Ours) |
For instance, try:
```
./ImageMatching ../test/2_10.dat 1
```
This will execute the main application, showcasing the encryption, matching, and decryption steps of the selected approach.
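To compare all five approaches on the same dataset, you can sweep the approach parameter in a loop (a simple sketch using the values from the table above):

```bash
# Benchmark each approach (1-5) on the 2^10 dataset generated earlier.
for approach in 1 2 3 4 5; do
  ./ImageMatching ../test/2_10.dat "$approach"
done
```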
To run the accuracy experiments on the image matching application, navigate to the build folder and use the following command in your terminal:
```
./ImageMatchingAccuracy [SUBJECT_INDEX] [APPROACH]
```
The [SUBJECT_INDEX] parameter determines which facial template vector is used as the query vector. The query dataset includes 50 randomly sampled facial template vectors, so this parameter must be an integer in the range 0-49.
The [APPROACH] parameter determines which algorithm is used to perform the encrypted facial matching on the provided dataset. The possible values for this parameter are given below:
| Parameter | Experimental Approach |
|---|---|
| 1 | Literature Baseline Approach |
| 2 | GROTE Approach (Baseline + Group Testing) |
| 3 | Blind-Match Approach |
| 4 | HERS Approach |
| 5 | HyDia Approach (Ours) |
For instance, try:
```
./ImageMatchingAccuracy 0 5
```
This experiment performs the designated approach on the FRGC 2.0 dataset, reporting the number of true/false positives and negatives produced by the approach. For comparison, it also reports the true/false positives and negatives produced by the facial feature extractor without any encryption.
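To evaluate an approach over the entire query set, you can sweep all 50 subject indices (a sketch; the per-subject log filenames are illustrative):

```bash
# Run the accuracy experiment for every query subject with HyDia (approach 5).
for subject in $(seq 0 49); do
  ./ImageMatchingAccuracy "$subject" 5 | tee "accuracy_subject_${subject}.log"
done
```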
The application can be configured using various parameters defined in the source code. Key parameters include:
- Similarity Match Threshold: Set the cosine similarity value above which vectors are considered to be matching.
- Comparison Depth: Set the multiplicative depth to be used by the comparison-approximating function.
- Alpha-Norm Depth: Set the multiplicative depth to be used by the alpha-norm maximum approximation in the group-testing approach.
- CPU Cores: Set the maximum number of CPU cores to be allotted to the enroller, receiver, and sender in multi-threaded operations.
- Security Level: Configure the security level of the CKKS scheme.
- Scaling Mod Size: Configure the size for the scaling modulus of the CKKS scheme.
```cpp
// include/config.h
const double MATCH_THRESHOLD = 0.85;
const size_t COMP_DEPTH = 10;
const size_t ALPHA_DEPTH = 2;
const size_t MAX_NUM_CORES = 32;
```

```cpp
// src/main.cpp
CCParams<CryptoContextCKKSRNS> parameters;
parameters.SetSecurityLevel(HEStd_128_classic);
parameters.SetScalingModSize(45);
parameters.SetScalingTechnique(FIXEDMANUAL);
```
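After editing any of these values, rebuild the project before re-running the experiments (a sketch, assuming the standard CMake layout produced by the Docker build):

```bash
# Rebuild so edits to include/config.h or src/main.cpp take effect.
cd /opt/image_matching/build
cmake .. && make -j"$(nproc)"
```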
We welcome contributions from the community to enhance the functionality and performance of the image matching project. Here's how you can contribute:
- Fork the Repository: Click on the fork button at the top right of the repository page.
- Create a Branch: Create a new branch for your feature or bugfix:
```
git checkout -b feature-name
```
- Make Changes: Implement your changes in the new branch.
- Submit a Pull Request: Push your changes to your forked repository and submit a pull request to the main repository.
This project is licensed under the MIT License. See the LICENSE file for more details.
MIT License
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
This code is designed strictly for academic and research purposes. It has NOT undergone scrutiny by security professionals. No part of this code should be used in any real-world or production setting.