This repository contains code for machine unlearning experiments, including:
- Data poisoning and visualization
- Hessian-based analysis of model landscapes
- Comparison of various unlearning methods (Retrain, Finetune, Scrub, Grad-Asc)
- Evaluation of unlearning effectiveness on both classical MLP and quantum neural networks
The paper reference is https://arxiv.org/abs/2508.02422, where we propose the concept of Quantum Machine Unlearning for the first time.
- `datar.py`: Data preprocessing and loading functions
- `models.py`: Model definitions for training and unlearning, for both MLP and QNN
- `poison_unlearn.py`: Implementations of the various unlearning method wrappers
- `run_unlearn_mnist.py`: Main script for MNIST unlearning experiments
- `run_unlearn_xxz.py`: Main script for XXZ model unlearning experiments
- `run_example.py`: Run the code with only the data poisoning part
- `run_test.py`: A simplified test script for quick verification of functionality, parallel to `run_unlearn_mnist.py`
- `plot_*.ipynb`: Visualization notebooks
Install all requirements with (Python 3.12 suggested):

```bash
pip install -r requirements.txt
```

This work is enabled by the high-performance quantum-classical hybrid software infrastructure of TensorCircuit-NG.
Run the example script to test the code:

```bash
python run_example.py
```

To run the MNIST unlearning experiments:

```bash
python run_unlearn_mnist.py
```

To run the XXZ model unlearning experiments:

```bash
python run_unlearn_xxz.py
```
- No specific hardware is required.
- The typical environment setup and software installation time is around 10 minutes.
- Part of the expected output from the demo `run_example.py` (expected runtime on a desktop: ~5 min):
```text
Machine Unlearning Examples
========================================
1. MNIST Example:
Loading MNIST data...
Training: (500, 784)
250 250
Validation: (1000, 784)
Running label flipping experiment with QNN...
F=5.71 C=6.53 S=12.00 P=13.19: 100%|████████████| 64/64 [00:01<00:00, 43.69it/s]
Results: [{'depth': 4, 'epochs': 10, 'learning_rate': 0.005, 'batch_size': 256, 'acc': Array(0.624, dtype=float32), 'val_acc': Array(0.591, dtype=float32)}, {'depth': 4, 'epochs': 10, 'learning_rate': 0.005, 'batch_size': 256, 'acc': Array(0.67, dtype=float32), 'val_acc': Array(0.69100004, dtype=float32)}, {'depth': 4, 'epochs': 10, 'learning_rate': 0.005, 'batch_size': 256, 'acc': Array(0.70400006, dtype=float32), 'val_acc': Array(0.744, dtype=float32)}, {'depth': 4, 'epochs': 10, 'learning_rate': 0.005, 'batch_size': 256, 'acc': Array(0.57000005, dtype=float32), 'val_acc': Array(0.67200005, dtype=float32)}, {'depth': 4, 'epochs': 10, 'learning_rate': 0.005, 'batch_size': 256, 'acc': Array(0.476, dtype=float32), 'val_acc': Array(0.47200003, dtype=float32)}, {'depth': 4, 'epochs': 10, 'learning_rate': 0.005, 'batch_size': 256, 'acc': Array(0.54200006, dtype=float32), 'val_acc': Array(0.513, dtype=float32)}, {'depth': 4, 'epochs': 10, 'learning_rate': 0.005, 'batch_size': 256, 'acc': Array(0.522, dtype=float32), 'val_acc': Array(0.48700002, dtype=float32)}, {'depth': 4, 'epochs': 10, 'learning_rate': 0.005, 'batch_size': 256, 'acc': Array(0.59400004, dtype=float32), 'val_acc': Array(0.34, dtype=float32)}, {'depth': 4, 'epochs': 10, 'learning_rate': 0.005, 'batch_size': 256, 'acc': Array(0.614, dtype=float32), 'val_acc': Array(0.27100003, dtype=float32)}, {'depth': 4, 'epochs': 10, 'learning_rate': 0.005, 'batch_size': 256, 'acc': Array(0.67800003, dtype=float32), 'val_acc': Array(0.261, dtype=float32)}]
2. XXZ Model Example:
Loading XXZ data...
```

The repository implements several machine unlearning methods:
- Retrain: Retrain the model from scratch on the retained data
- Fine-tune (cf): Continue training on the retained data
- Scrub: Use KL divergence regularization to "scrub" the influence of forgotten data
- Gradient Ascent (ga): Use gradient ascent to actively unlearn the forgotten data
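To illustrate how the fine-tune and gradient-ascent updates differ, here is a minimal numpy sketch on a logistic-regression stand-in. The model, learning rate, and helper names are illustrative assumptions; the repository's actual wrappers live in `poison_unlearn.py`.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce_loss(w, X, y):
    # Mean binary cross-entropy of a linear logistic model.
    p = sigmoid(X @ w)
    return float(-np.mean(y * np.log(p + 1e-9) + (1 - y) * np.log(1 - p + 1e-9)))

def grad(w, X, y):
    # Gradient of bce_loss with respect to the weights w.
    p = sigmoid(X @ w)
    return X.T @ (p - y) / len(y)

def finetune_step(w, X_retain, y_retain, lr=0.1):
    # Fine-tune ("cf"): keep descending the loss, but only on retained data.
    return w - lr * grad(w, X_retain, y_retain)

def gradient_ascent_step(w, X_forget, y_forget, lr=0.1):
    # Gradient ascent ("ga"): actively climb the loss on the forget set.
    return w + lr * grad(w, X_forget, y_forget)

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 4))
y = (X[:, 0] > 0).astype(float)
w = np.zeros(4)
for _ in range(200):                      # train on all data, including forget set
    w = finetune_step(w, X, y)

X_f, y_f = X[:16], y[:16]                 # designate a forget set
w_un = w.copy()
for _ in range(5):                        # unlearn via gradient ascent
    w_un = gradient_ascent_step(w_un, X_f, y_f)

loss_before = bce_loss(w, X_f, y_f)       # trained model fits the forget set
loss_after = bce_loss(w_un, X_f, y_f)     # ascent raises loss on the forget set
```

Because the logistic loss is convex in the weights, each ascent step strictly increases the loss on the forget set, which is the mechanism the "ga" method relies on.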
Two types of models are implemented:
- Classical MLP: Multi-layer perceptron with configurable hidden layers
- Quantum Neural Network (QNN): Quantum circuit-based model using TensorCircuit
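The two model families can be pictured with a conceptual numpy sketch: a configurable MLP forward pass, and a toy state-vector "QNN" built from per-qubit RY rotations with a `<Z>` readout. This is not the repository's `models.py` (which uses TensorCircuit for the QNN); shapes, activations, and the single-layer circuit are assumptions for illustration.

```python
import numpy as np

def mlp_forward(x, weights):
    # weights: list of (W, b) pairs; tanh hidden activations, linear output.
    for W, b in weights[:-1]:
        x = np.tanh(x @ W + b)
    W, b = weights[-1]
    return x @ W + b

def qnn_forward(thetas, n_qubits=2):
    # Toy state-vector simulation: one RY rotation per qubit applied to |0...0>,
    # then measure the <Z> expectation on qubit 0.
    state = np.zeros(2 ** n_qubits)
    state[0] = 1.0
    for q, th in enumerate(thetas):
        ry = np.array([[np.cos(th / 2), -np.sin(th / 2)],
                       [np.sin(th / 2),  np.cos(th / 2)]])
        op = np.eye(1)
        for i in range(n_qubits):                 # embed RY on qubit q
            op = np.kron(op, ry if i == q else np.eye(2))
        state = op @ state
    z0 = np.kron(np.diag([1.0, -1.0]), np.eye(2 ** (n_qubits - 1)))
    return float(state @ (z0 @ state))

# Zero angles leave |00> untouched, so <Z_0> = 1; a pi rotation on qubit 0
# flips it to |1>, giving <Z_0> = -1.
z_identity = qnn_forward(np.array([0.0, 0.0]))
z_flipped = qnn_forward(np.array([np.pi, 0.0]))
```

The real QNN stacks many such rotation layers interleaved with entangling gates (the `depth` hyperparameter in the example output) and is differentiated through TensorCircuit.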
The code supports experiments on:
- MNIST: Binary classification of digits 1 and 9
- XXZ Model: Quantum many-body system data (requires external dataset files)
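The label-flipping poisoning used in the MNIST example can be sketched as follows; the flip fraction, 0/1 label encoding, and function name are assumptions, not the repository's exact interface.

```python
import numpy as np

def flip_labels(y, flip_fraction=0.1, seed=0):
    # Flip a random fraction of binary labels (0 <-> 1). The flipped indices
    # form the "forget set" that unlearning is later asked to remove.
    rng = np.random.default_rng(seed)
    y_poisoned = y.copy()
    idx = rng.choice(len(y), size=int(flip_fraction * len(y)), replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]
    return y_poisoned, idx

y = np.array([0, 1] * 250)                 # 500 binary labels, as in the demo split
y_pois, forget_idx = flip_labels(y, flip_fraction=0.1)
```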
The experiments evaluate:
- Validation Accuracy: Performance on clean validation data
- Forgetting Accuracy: Ability to forget the "unlearned" data
- Model Stability: Robustness of the unlearning process
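A minimal sketch of the first two metrics, assuming standard definitions: validation accuracy as agreement with clean labels, and forgetting quantified as disagreement with the poisoned (flipped) labels. The exact metric definitions in the repository may differ.

```python
import numpy as np

def accuracy(y_pred, y_true):
    # Fraction of predictions that match the reference labels.
    return float(np.mean(np.asarray(y_pred) == np.asarray(y_true)))

def forgetting_score(y_pred_forget, y_flipped):
    # One hypothetical way to quantify forgetting: 1 - agreement with the
    # poisoned labels; higher means the flipped labels were forgotten.
    return 1.0 - accuracy(y_pred_forget, y_flipped)

y_clean = np.array([0, 1, 1, 0, 1])          # clean validation labels
y_pred = np.array([0, 1, 0, 0, 1])           # model predictions
val_acc = accuracy(y_pred, y_clean)          # 4 of 5 predictions match

y_flipped = np.array([1, 0, 0, 1])           # poisoned labels on the forget set
y_pred_f = np.array([0, 1, 0, 0])            # unlearned model's predictions
forget = forgetting_score(y_pred_f, y_flipped)
```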
This project is licensed under the MIT License - see the LICENSE file for details.