
FedSA-LoRA

The implementation of Selective Aggregation for Low-Rank Adaptation in Federated Learning [ICLR 2025].
Pengxin Guo, Shuang Zeng, Yanran Wang, Huijie Fan, Feifei Wang, and Liangqiong Qu.

[framework figure]

Figure 1. Illustration of (a) LoRA, (b) FFA-LoRA, and (c) FedSA-LoRA. In LoRA, both the $A$ and $B$ matrices are trainable and shared with the server for aggregation. In FFA-LoRA, only the $B$ matrices are trainable and shared with the server for aggregation, while the $A$ matrices are frozen after initialization. In FedSA-LoRA, both the $A$ and $B$ matrices are trainable, but only the $A$ matrices are shared with the server for aggregation, while the $B$ matrices are kept local.
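
The split described above can be summarized in a few lines of PyTorch. The following is a minimal sketch for illustration only; it is not the FederatedScope code in this repository, and the LoRALinear layer and configure helper are names introduced here:

import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base weight W0 plus a low-rank update (alpha / r) * B @ A."""

    def __init__(self, in_features, out_features, r=8, alpha=16):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features), requires_grad=False)
        self.A = nn.Parameter(torch.randn(r, in_features) * 0.01)  # low-rank factor A
        self.B = nn.Parameter(torch.zeros(out_features, r))        # low-rank factor B
        self.scaling = alpha / r

    def forward(self, x):
        # h = W0 x + (alpha / r) * B A x
        return x @ self.weight.T + self.scaling * (x @ self.A.T @ self.B.T)

def configure(method, layer):
    """Set which factors are trainable and return which ones a client uploads."""
    if method == "lora":          # (a) train A and B, aggregate both
        layer.A.requires_grad_(True)
        layer.B.requires_grad_(True)
        return ["A", "B"]
    if method == "ffa-lora":      # (b) freeze A after init, train and aggregate B
        layer.A.requires_grad_(False)
        layer.B.requires_grad_(True)
        return ["B"]
    if method == "fedsa-lora":    # (c) train A and B, aggregate only A; B stays local
        layer.A.requires_grad_(True)
        layer.B.requires_grad_(True)
        return ["A"]
    raise ValueError(f"unknown method: {method}")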

Installation

Our code is based on Python 3.10 and PyTorch 2.1.0. You can install all the dependencies with the following commands:

conda create -n fedsa-lora python=3.10
conda activate fedsa-lora
conda install pytorch==2.1.0 torchvision==0.16.0 torchaudio==2.1.0 pytorch-cuda=12.1 -c pytorch -c nvidia
pip install -e .[llm]

Training

Now, we can fine-tune an LLM with FedSA-LoRA:

python federatedscope/main.py --cfg federatedscope/glue/yamls/fedsa-lora.yaml
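
Conceptually, each communication round in the run above trains both LoRA factors on every client but aggregates only the $A$ matrices on the server. Below is a minimal sketch of that selective-aggregation step, assuming each client reports its LoRA factors as a {name: tensor} dict with keys ending in ".A"; the dict layout and function names are illustrative assumptions, not FederatedScope APIs:

import torch

def aggregate_A(client_states, weights):
    """FedAvg over the A matrices only; the B matrices never leave the clients."""
    a_keys = [k for k in client_states[0] if k.endswith(".A")]
    return {k: sum(w * s[k] for w, s in zip(weights, client_states)) for k in a_keys}

def apply_global_A(local_state, global_A):
    """Each client overwrites its A matrices with the aggregate and keeps its own B."""
    local_state.update({k: v.clone() for k, v in global_A.items()})
    return local_state

# Illustrative round with three clients weighted by local sample counts.
sizes = [120, 80, 200]
weights = [n / sum(sizes) for n in sizes]
clients = [
    {"layer1.A": torch.randn(8, 32), "layer1.B": torch.randn(64, 8)}
    for _ in sizes
]
global_A = aggregate_A(clients, weights)
clients = [apply_global_A(s, global_A) for s in clients]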

Acknowledgement

We would like to thank the authors of FederatedScope-LLM for releasing their public repository, on which our implementation is built.
