
LFighter: Defending against Label-Flipping Attacks in Federated Learning


Official PyTorch implementation of the paper "LFighter: Defending against the Label-Flipping Attack in Federated Learning", published in Neural Networks, vol. 170, pp. 111–126, Elsevier, 2024.


Overview

LFighter is a robust aggregation defense mechanism designed to protect federated learning systems against label-flipping attacks — a class of data poisoning attacks in which malicious participants (clients) deliberately mislabel training samples to corrupt the global model's behavior on specific target classes.

This repository provides a fully reproducible implementation of all experiments reported in the paper across three benchmark datasets and multiple model architectures, under both IID and non-IID data distributions with up to 40% malicious participants.
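As a concrete illustration of the threat model (a hypothetical sketch, not code from this repository), a label-flipping attacker simply relabels its local samples of a chosen source class as a target class before local training, so the poisoned update pushes the global model to confuse the two classes:

```python
def flip_labels(labels, source_class, target_class):
    """Return a copy of `labels` in which every occurrence of
    source_class has been relabeled as target_class.

    This mimics what a label-flipping client does to its local
    training set; the function name and signature are illustrative.
    """
    return [target_class if y == source_class else y for y in labels]

# Example: an attacker flips source class 5 to target class 3.
local_labels = [5, 1, 5, 3, 0]
poisoned = flip_labels(local_labels, source_class=5, target_class=3)
# poisoned == [3, 1, 3, 3, 0]
```

Only the labels are modified; the inputs stay clean, which is what makes this attack hard to spot from raw data alone.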


Paper

LFighter: Defending against the Label-Flipping Attack in Federated Learning
Najeeb M. Jebreel, Josep Domingo-Ferrer, David Sánchez, Alberto Blanco-Justicia
Neural Networks, vol. 170, pp. 111–126, Elsevier, 2024
🔗 Read on ScienceDirect


Repository Structure

LFighter/
├── notebooks/                      # Jupyter notebooks for each benchmark
│   ├── Experiments_MNIST.ipynb
│   ├── Experiments_CIFAR10.ipynb
│   └── Experiments_IMDB_IID.ipynb
├── src/                            # Core Python source modules
│   ├── aggregation.py              # Aggregation rules (LFighter + baselines)
│   ├── datasets.py                 # Dataset loaders
│   ├── environment_federated.py    # Federated learning environment
│   ├── experiment_federated.py     # Experiment runner
│   ├── models.py                   # Model architectures
│   ├── sampling.py                 # Data sampling utilities
│   └── utils.py                    # General utilities
├── data/                           # Place IMDB dataset here (see below)
├── checkpoints/                    # Model checkpoints (auto-generated)
├── results/                        # Experiment results (auto-generated)
├── figures/                        # Result figures referenced in this README
│   ├── main_results.PNG
│   └── stability_all.PNG
├── requirements.txt
└── README.md

Installation

Prerequisites

Dependency    Version
----------    -------
Python        ≥ 3.6
PyTorch       ≥ 1.6
TensorFlow    ≥ 2.0

Setup

git clone https://github.com/najeebjebreel/LFighter.git
cd LFighter
pip install -r requirements.txt

Datasets

Dataset     Access       Notes
-------     ------       -----
MNIST       Automatic    Downloaded via PyTorch/TensorFlow data loaders
CIFAR-10    Automatic    Downloaded via PyTorch/TensorFlow data loaders
IMDB        Manual       See instructions below

IMDB Manual Setup

  1. Download the preprocessed dataset from Google Drive.
  2. Save the file as imdb.csv in the following path:
LFighter/data/imdb.csv

Reproducing Experiments

Open the Jupyter notebook corresponding to the benchmark of interest and follow the inline instructions:

jupyter notebook notebooks/Experiments_MNIST.ipynb
jupyter notebook notebooks/Experiments_CIFAR10.ipynb
jupyter notebook notebooks/Experiments_IMDB_IID.ipynb

Each notebook walks through data loading, federated training, attack simulation, and defense evaluation end to end.
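To make the defense evaluation concrete, the toy sketch below (illustrative only; the function names and filtering logic are simplified assumptions, not the repository's actual aggregation code) shows a FedAvg-style round in which a robust aggregator excludes updates flagged as malicious before averaging:

```python
import numpy as np

def aggregate(updates, keep_mask):
    """Average only the client updates the defense marked as benign.

    updates:   list of 1-D numpy arrays (flattened model updates)
    keep_mask: list of booleans, True for clients judged benign
    """
    kept = [u for u, keep in zip(updates, keep_mask) if keep]
    return np.mean(kept, axis=0)

# Toy round: three benign clients push the parameter toward +1,
# while one label-flipping attacker pushes it toward -1.
updates = [np.array([1.0]), np.array([1.1]), np.array([0.9]), np.array([-1.0])]

plain_avg = np.mean(updates, axis=0)                     # undefended FedAvg
defended = aggregate(updates, [True, True, True, False])  # attacker filtered out
```

With the attacker included, the plain average is dragged toward the poisoned direction; filtering it out before averaging restores the benign consensus, which is the role LFighter's clustering-based filtering plays in the full system.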


Results

Attack Robustness

The results table in figures/main_results.PNG reports LFighter's classification robustness under a label-flipping attack with 40% malicious participants, compared against baseline aggregation strategies.

Source Class Accuracy Stability

The figure in figures/stability_all.PNG shows the per-round source-class accuracy under the label-flipping attack (40% attackers) for the CIFAR-10/ResNet-18/non-IID and IMDB/BiLSTM settings, illustrating LFighter's training stability relative to undefended baselines.


Citation

If you use this code or build upon this work, please cite:

@article{jebreel2024lfighter,
  title     = {LFighter: Defending against the label-flipping attack in federated learning},
  author    = {Jebreel, Najeeb Moharram and Domingo-Ferrer, Josep and S{\'a}nchez, David and Blanco-Justicia, Alberto},
  journal   = {Neural Networks},
  volume    = {170},
  pages     = {111--126},
  year      = {2024},
  publisher = {Elsevier},
  doi       = {10.1016/j.neunet.2023.11.047}
}

Acknowledgment

This research was funded by the European Commission (projects H2020-871042 "SoBigData++" and H2020-101006879 "MobiDataLab"), the Government of Catalonia (ICREA Acadèmia Prizes to J. Domingo-Ferrer and D. Sánchez, grant no. 2021 SGR 00115, and FI_B00760 grant to N. Jebreel), and MCIN/AEI/10.13039/501100011033 and "ERDF A way of making Europe" under grant PID2021-123637NB-I00 "CURLING". The authors are with the UNESCO Chair in Data Privacy, but the views in this paper are their own and are not necessarily shared by UNESCO.


License

This project is licensed under the MIT License. See the LICENSE file for details.


Affiliation

Developed at the CRISES Research Group, Universitat Rovira i Virgili (URV), Tarragona, Catalonia.


Contact

For questions or issues, please open a GitHub issue or contact Najeeb Jebreel.
