We explore how to construct unbiased Chest X-Ray datasets using StyleGAN!

Our method mitigates the effects of adversarial label-poisoning attacks.
git clone "https://github.com/Wazhee/Debiasing-Chest-X-Rays-with-StyleGAN.git"
cd Debiasing-Chest-X-Rays-with-StyleGAN
We used code from HiddenInPlainSight [Code][Paper] to simulate adversarial attacks. Specifically, we demonstrate how our augmentation method improves the robustness of CXR classifiers against label-poisoning attacks.
All code for simulating adversarial label poisoning is in the HiddenIPS folder; the attack is sketched below.
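For context, the label-poisoning attack flips ground-truth labels for a targeted subgroup at a chosen rate. The sketch below is only a minimal illustration of that idea; column names such as "Sex" and "Pneumonia" are assumptions, not the actual HiddenIPS schema.

```python
import numpy as np
import pandas as pd

def poison_labels(df, target_sex="F", rate=0.05, label_col="Pneumonia", seed=0):
    """Flip positive labels to negative for a fraction of the targeted subgroup.

    Column names ("Sex", "Pneumonia") are illustrative; the real HiddenIPS
    code defines its own data schema and attack logic.
    """
    rng = np.random.default_rng(seed)
    poisoned = df.copy()
    # Positive cases belonging to the targeted subgroup
    target_idx = poisoned[(poisoned["Sex"] == target_sex) & (poisoned[label_col] == 1)].index
    n_flip = int(rate * len(target_idx))
    flip_idx = rng.choice(target_idx, size=n_flip, replace=False)
    poisoned.loc[flip_idx, label_col] = 0  # positive -> negative label flip
    return poisoned
```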
cd HiddenIPS
To run the original HiddenInPlainSight code:
python src/main.py -train
To simulate adversarial attacks on the augmented dataset:
python src/main.py -train -model densenet -augment True
To specify the attack rate and GPU:
python src/main.py -train -model densenet -augment True -rate 0.05 -gpu 0
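The flags above (-train, -analyze, -model, -augment, -rate, -gpu, -test_ds) are parsed by src/main.py. A minimal argparse sketch of how such a CLI could be wired, assuming these flag names but not the repository's actual defaults:

```python
import argparse

def str2bool(s):
    # Accepts the "True"/"False" strings used in the example commands
    return str(s).lower() in ("true", "1", "yes")

# Sketch only: flag names taken from the example commands, defaults assumed.
parser = argparse.ArgumentParser(description="HiddenIPS training / analysis")
parser.add_argument("-train", action="store_true", help="train models")
parser.add_argument("-analyze", action="store_true", help="analyze trained models")
parser.add_argument("-model", default="densenet", help="classifier backbone")
parser.add_argument("-augment", type=str2bool, default=False, help="use augmented data")
parser.add_argument("-rate", type=float, default=0.05, help="label poisoning rate")
parser.add_argument("-gpu", type=int, default=0, help="GPU index")
parser.add_argument("-test_ds", default="rsna", help="test dataset for analysis")
args = parser.parse_args()
```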
To test HiddenIPS and analyze results on the RSNA test set (baseline and augmented):
python src/main.py -analyze -test_ds rsna
python src/main.py -analyze -test_ds rsna -augment True
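Conceptually, the analysis step reduces to per-subgroup metrics such as AUROC on the chosen test set. A hedged sketch, assuming a predictions CSV with hypothetical column names (the actual HiddenIPS analysis scripts define their own outputs):

```python
import pandas as pd
from sklearn.metrics import roc_auc_score

def subgroup_auroc(results_csv, group_col="Sex", label_col="Pneumonia", pred_col="score"):
    """Compute AUROC separately for each subgroup in a predictions CSV.

    Column names are assumptions for illustration only.
    """
    df = pd.read_csv(results_csv)
    return {
        group: roc_auc_score(g[label_col], g[pred_col])
        for group, g in df.groupby(group_col)
    }

# Comparing poisoned vs. augmented runs shows how much AUROC drops
# for the targeted subgroup under each setting.
```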
To reproduce the GCA experiments (example environment and working directory; adjust paths to your setup):
conda activate resnet-pytorch
cd Fall\ 2024/CXR\ Project/GCA-torch/HiddenIPS
python src/main.py -train -model densenet -augment True -rate 0.05 -gpu 0 # with GCA
python src/main.py -train -model densenet -rate 0.05 -gpu 0 # without GCA
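GCA augmentation draws synthetic chest X-rays from a pretrained StyleGAN generator to rebalance underrepresented subgroups before training. A minimal sketch of the sampling step, assuming a generator G that maps latent codes to images (how the checkpoint is loaded depends on the StyleGAN implementation used):

```python
import torch

def synthesize_cxrs(G, n_images, z_dim=512, device="cuda"):
    """Sample synthetic chest X-rays from a pretrained StyleGAN generator.

    `G` is assumed to map latent codes z -> images; the generator interface
    and latent dimension are assumptions, not this repository's actual API.
    """
    G = G.to(device).eval()
    with torch.no_grad():
        z = torch.randn(n_images, z_dim, device=device)  # latent codes
        imgs = G(z)                                       # (n_images, C, H, W)
    return imgs

# The synthetic images would then be added to the training set for the
# underrepresented subgroup before running `python src/main.py -train ... -augment True`.
```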
Kulkarni et al., Hidden in Plain Sight, MIDL 2024.
@article{kulkarni2024hidden,
  title={Hidden in Plain Sight: Undetectable Adversarial Bias Attacks on Vulnerable Patient Populations},
  author={Kulkarni, Pranav and Chan, Andrew and Navarathna, Nithya and Chan, Skylar and Yi, Paul H and Parekh, Vishwa S},
  journal={arXiv preprint arXiv:2402.05713},
  year={2024}
}