Official PyTorch implementation of the paper "LFighter: Defending against the Label-Flipping Attack in Federated Learning", published in Neural Networks, vol. 170, pp. 111–126, Elsevier, 2024.
LFighter is a robust aggregation defense mechanism designed to protect federated learning systems against label-flipping attacks — a class of data poisoning attacks in which malicious participants (clients) deliberately mislabel training samples to corrupt the global model's behavior on specific target classes.
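For illustration, a label-flipping attack can be simulated by remapping a source class to a target class in a malicious client's local labels. A minimal sketch (the function name and class indices below are illustrative, not the repository's API):

```python
import numpy as np

def flip_labels(labels, source_class, target_class):
    """Relabel every sample of `source_class` as `target_class`,
    simulating a label-flipping (data poisoning) attack."""
    poisoned = labels.copy()
    poisoned[poisoned == source_class] = target_class
    return poisoned

# Example: a malicious client flips class 7 -> class 1 (illustrative choice).
clean = np.array([7, 3, 7, 1, 0, 7])
poisoned = flip_labels(clean, source_class=7, target_class=1)
print(poisoned)  # [1 3 1 1 0 1]
```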
This repository provides a fully reproducible implementation of all experiments reported in the paper across three benchmark datasets and multiple model architectures, under both IID and non-IID data distributions with up to 40% malicious participants.
**LFighter: Defending against the Label-Flipping Attack in Federated Learning**
Najeeb M. Jebreel, Josep Domingo-Ferrer, David Sánchez, Alberto Blanco-Justicia
*Neural Networks*, vol. 170, pp. 111–126, Elsevier, 2024
🔗 Read on ScienceDirect
```
LFighter/
├── notebooks/                   # Jupyter notebooks for each benchmark
│   ├── Experiments_MNIST.ipynb
│   ├── Experiments_CIFAR10.ipynb
│   └── Experiments_IMDB_IID.ipynb
├── src/                         # Core Python source modules
│   ├── aggregation.py           # Aggregation rules (LFighter + baselines)
│   ├── datasets.py              # Dataset loaders
│   ├── environment_federated.py # Federated learning environment
│   ├── experiment_federated.py  # Experiment runner
│   ├── models.py                # Model architectures
│   ├── sampling.py              # Data sampling utilities
│   └── utils.py                 # General utilities
├── data/                        # Place IMDB dataset here (see below)
├── checkpoints/                 # Model checkpoints (auto-generated)
├── results/                     # Experiment results (auto-generated)
├── figures/                     # Result figures referenced in this README
│   ├── main_results.PNG
│   └── stability_all.PNG
├── requirements.txt
└── README.md
```
| Dependency | Version |
|---|---|
| Python | ≥ 3.6 |
| PyTorch | ≥ 1.6 |
| TensorFlow | ≥ 2.0 |
```
git clone https://github.com/najeebjebreel/LFighter.git
cd LFighter
pip install -r requirements.txt
```

| Dataset | Access | Notes |
|---|---|---|
| MNIST | Automatic | Downloaded via PyTorch/TensorFlow data loaders |
| CIFAR-10 | Automatic | Downloaded via PyTorch/TensorFlow data loaders |
| IMDB | Manual | See instructions below |
- Download the preprocessed dataset from Google Drive.
- Save the file as `imdb.csv` at the following path: `LFighter/data/imdb.csv`
Open the Jupyter notebook corresponding to the benchmark of interest:

```
jupyter notebook notebooks/Experiments_MNIST.ipynb
jupyter notebook notebooks/Experiments_CIFAR10.ipynb
jupyter notebook notebooks/Experiments_IMDB_IID.ipynb
```

Each notebook walks through data loading, federated training, attack simulation, and defense evaluation with inline instructions.
The table below reports LFighter's classification robustness under a label-flipping attack with 40% malicious participants, compared against baseline aggregation strategies.
The figure below shows the per-round source-class accuracy under the label-flipping attack (40% attackers) for the CIFAR-10/ResNet-18/non-IID and IMDB/BiLSTM settings, illustrating LFighter's training stability relative to the baselines.
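At a high level, LFighter defends by clustering clients' updates and averaging only the cluster it deems benign. The following is a simplified, illustrative sketch of clustering-based robust aggregation (keeping the majority cluster); it is not the repository's implementation, which inspects output-layer gradients and also accounts for cluster density, as described in the paper:

```python
import numpy as np

def two_means(X, iters=10):
    """Tiny 2-means clustering, seeded with the two most distant points."""
    d = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    i, j = np.unravel_index(d.argmax(), d.shape)
    centers = X[[i, j]]
    for _ in range(iters):
        labels = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1).argmin(1)
        centers = np.stack([X[labels == k].mean(0) for k in (0, 1)])
    return labels

def cluster_and_aggregate(updates):
    """Average only the majority cluster of client updates --
    a deliberate simplification of the LFighter idea."""
    X = np.stack([np.ravel(u) for u in updates])
    labels = two_means(X)
    keep = labels == np.argmax(np.bincount(labels))
    return X[keep].mean(axis=0)

# Toy example: 6 benign updates near +1, 4 malicious updates near -1.
rng = np.random.default_rng(0)
benign = rng.normal(1.0, 0.05, size=(6, 10))
malicious = rng.normal(-1.0, 0.05, size=(4, 10))
agg = cluster_and_aggregate(list(benign) + list(malicious))
print(np.allclose(agg, benign.mean(axis=0)))  # True: the malicious cluster is dropped
```

With well-separated clusters, seeding 2-means from the farthest pair of points makes the toy example deterministic; a real deployment would need the density-aware cluster selection the paper describes.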
If you use this code or build upon this work, please cite:
```
@article{jebreel2024lfighter,
  title     = {LFighter: Defending against the label-flipping attack in federated learning},
  author    = {Jebreel, Najeeb Moharram and Domingo-Ferrer, Josep and S{\'a}nchez, David and Blanco-Justicia, Alberto},
  journal   = {Neural Networks},
  volume    = {170},
  pages     = {111--126},
  year      = {2024},
  publisher = {Elsevier},
  doi       = {10.1016/j.neunet.2023.11.047}
}
```

This research was funded by the European Commission (projects H2020-871042 "SoBigData++" and H2020-101006879 "MobiDataLab"), the Government of Catalonia (ICREA Acadèmia Prizes to J. Domingo-Ferrer and D. Sánchez, grant no. 2021 SGR 00115, and FI_B00760 grant to N. Jebreel), and MCIN/AEI/10.13039/501100011033 and "ERDF A way of making Europe" under grant PID2021-123637NB-I00 "CURLING". The authors are with the UNESCO Chair in Data Privacy, but the views in this paper are their own and are not necessarily shared by UNESCO.
This project is licensed under the MIT License. See the LICENSE file for details.
Developed at the CRISES Research Group, Universitat Rovira i Virgili (URV), Tarragona, Catalonia.
For questions or issues, please open a GitHub issue or contact Najeeb Jebreel.