brain-lab-research/Bant

🛡️ Bant: Byzantine Antidote via Trial Function and Trust Scores

[AAAI-26 Oral]

Paper

Abstract:

Recent advancements in machine learning have improved performance while also increasing computational demands. While federated and distributed setups address these issues, their structures remain vulnerable to malicious influences. In this paper, we address a specific threat: Byzantine attacks, wherein compromised clients inject adversarial updates to derail global convergence. We combine the concept of trust scores with trial function methodology to dynamically filter outliers. Our methods address the critical limitations of previous approaches, allowing operation even when Byzantine nodes are in the majority. Moreover, our algorithms adapt to widely used scaled methods such as Adam and RMSProp, as well as practical scenarios, including local training and partial participation. We validate the robustness of our methods by conducting extensive experiments on both public datasets and private ECG data collected from medical institutions. Furthermore, we provide a broad theoretical analysis of our algorithms and their extensions to the aforementioned practical setups. The convergence guarantees of our methods are comparable to those of classical algorithms developed without Byzantine interference.
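The core idea of the abstract — down-weighting suspicious client updates by a trust score before averaging — can be sketched as follows. This is a toy illustration, not the Bant algorithm itself: the scoring rule (inverse distance to a trial-function reference) and every name in it are assumptions made for the example.

```python
import numpy as np

def trust_weighted_aggregate(updates, trial_value):
    """Aggregate client updates, down-weighting outliers.

    Toy scoring rule (hypothetical): trust is the inverse distance
    between each client's update and a reference produced by a trial
    function. The repository's actual rule may differ.
    """
    dists = np.linalg.norm(updates - trial_value, axis=1)
    scores = 1.0 / (1.0 + dists)          # honest clients get score ~1
    weights = scores / scores.sum()       # normalize to a convex combination
    return weights @ updates

honest = np.ones((8, 4))                  # 8 honest clients push toward 1
byzantine = -10.0 * np.ones((2, 4))       # 2 Byzantine clients push far away
updates = np.vstack([honest, byzantine])
agg = trust_weighted_aggregate(updates, trial_value=np.ones(4))
```

With a plain mean the two Byzantine clients drag the aggregate to -1.2 per coordinate; the trust-weighted aggregate stays close to the honest value of 1.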

Table of contents

  1. Running Experiments -- run the commands to reproduce our results
  2. Quickstart -- follow the instructions and get the result!
  3. C4 notation -- Context, Container, Component, Code scheme
  4. Federated Method Explaining -- learn the basics and write your own method
  5. Config Explaining -- see the available options
  6. Attacks -- learn the basics and write a custom attack

🚀 Quickstart Guide

📋 Prerequisites

  1. Install dependencies

python -m venv venv
source venv/bin/activate
pip install -e .

  2. Download the CIFAR-10 dataset

python src/utils/cifar_download.py --target_dir=cifar10

Verification: You should get "All steps completed successfully!!!".

⚠️ Important: Check/update paths in configs/observed_data_params/.

⚙️ Experiment Setups

🔄 Standard Federated Averaging on CIFAR-10

python src/train.py \
  training_params.batch_size=32 \
  federated_params.print_client_metrics=False \
  training_params.device_ids=[0] \
  > test_run_fedavg_cifar.txt

device_ids selects which GPU(s) to use when the machine has several.

Additionally, manager.batch_size client processes will be created. To forcefully terminate training, kill any one of these processes.

🔩 FedAvg with Proximal Term

python src/train.py \
  training_params.batch_size=32 \
  federated_params.print_client_metrics=False \
  federated_method=fedprox \
  > test_run_fedprox_cifar.txt
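FedProx differs from FedAvg only in the local objective: each client adds a proximal penalty that discourages drift from the current global model. A minimal sketch, assuming a generic local loss (the function name and mu default here are illustrative, not the repo's config keys):

```python
import numpy as np

def fedprox_objective(local_loss, w_local, w_global, mu=0.01):
    """FedProx local objective: F_k(w) + (mu / 2) * ||w - w_global||^2.

    The proximal term keeps each client's model near the global
    model, which stabilizes training under heterogeneous data.
    """
    prox = 0.5 * mu * np.sum((w_local - w_global) ** 2)
    return local_loss + prox
```

Setting mu=0 recovers plain FedAvg; larger mu ties clients more tightly to the global model.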

🌪️ Heterogeneous CIFAR10 Experiment

Dirichlet Partition with $\alpha=0.1$ (strong heterogeneity)

python src/train.py \
  training_params.batch_size=32 \
  federated_params.print_client_metrics=False \
  observed_data_params@dataset=cifar10_dirichlet \
  dataset.alpha=0.1 \
  federated_params.amount_of_clients=100 \
  > test_run_fedavg_cifar_dirichlet_strong_heterogeneity_100_clients.txt

Near-uniform partition ($\alpha=1000$); vary amount_of_clients by overriding the same command with, e.g.:

dataset.alpha=1000 \
federated_params.amount_of_clients=42 \
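The Dirichlet partition behind these experiments draws, for each class, a vector of per-client proportions from Dirichlet($\alpha$): small $\alpha$ concentrates a class on few clients (strong heterogeneity), large $\alpha$ spreads it uniformly. A self-contained sketch of that standard scheme — the function name and seeding are assumptions, not the repository's code:

```python
import numpy as np

def dirichlet_partition(labels, n_clients, alpha, seed=0):
    """Split sample indices across clients with a per-class
    Dirichlet(alpha) distribution over client proportions."""
    rng = np.random.default_rng(seed)
    clients = [[] for _ in range(n_clients)]
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        rng.shuffle(idx)
        # Fraction of class c assigned to each client.
        props = rng.dirichlet(alpha * np.ones(n_clients))
        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for client, part in zip(clients, np.split(idx, cuts)):
            client.extend(part.tolist())
    return clients
```

Every sample lands on exactly one client; with alpha=0.1 most clients see only a handful of classes, while alpha=1000 yields nearly equal class mixes.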

🦠 Byzantine Attacks

FedAvg with Label Flipping Attack

python src/train.py \
  training_params.batch_size=32 \
  federated_params.print_client_metrics=False \
  federated_params.clients_attack_types=label_flip \
  federated_params.prop_attack_clients=0.5 \
  federated_params.attack_scheme=constant \
  federated_params.prop_attack_rounds=1.0 \
  > test_run_fedavg_cifar_label_flip_half_byzantines.txt
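In a label-flipping attack a Byzantine client trains on corrupted labels so its update pulls the global model in the wrong direction. One common variant maps each label y to (num_classes - 1) - y; the repository's exact mapping may differ, so treat this as an illustrative sketch:

```python
def label_flip(labels, num_classes=10):
    """Byzantine label flip: replace each label y with its 'mirror'
    (num_classes - 1) - y, e.g. 0 <-> 9 on CIFAR-10."""
    return [num_classes - 1 - y for y in labels]
```

With prop_attack_clients=0.5 and attack_scheme=constant, half of the clients apply such a corruption in every round (prop_attack_rounds=1.0).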
