Description
Implement the FedCPD method described in the paper listed under References, focusing on prototype‑enhanced representation learning and memory distillation for personalized federated learning (FL).
Goals
- End‑to‑end server–client pipeline.
- Prototype construction/maintenance per class or task (per paper; see the sketch after this list).
- Memory/distillation mechanism to stabilize personalization across rounds.
- Clear configs, scripts, and documentation to reproduce reported behaviors.
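
A minimal sketch of prototype construction, assuming prototypes are class-wise means of penultimate-layer embeddings (a common choice in prototype-based FL; the paper's exact rule should take precedence). `encoder`, `loader`, and `num_classes` are placeholders:

```python
import torch

@torch.no_grad()
def compute_prototypes(encoder, loader, num_classes, device="cpu"):
    """Class-wise mean embeddings over a client's local data.

    Assumes `encoder(x)` returns a (batch, dim) feature tensor; replace
    with the paper's prototype rule if it differs.
    """
    encoder.eval()
    sums, counts = None, torch.zeros(num_classes, device=device)
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        z = encoder(x)                                   # (B, D) embeddings
        if sums is None:
            sums = torch.zeros(num_classes, z.size(1), device=device)
        sums.index_add_(0, y, z)                         # accumulate per class
        counts += torch.bincount(y, minlength=num_classes).float()
    return sums / counts.clamp(min=1).unsqueeze(1)       # (C, D) prototypes
```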
 
Scope
- PyTorch-based implementation (preferred).
- Modular components for server, client, and trainer.
- Support for common FL benchmarks and non‑IID partitions (see the partitioning sketch below).
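
One common way to create non‑IID partitions is Dirichlet label skew; the sketch below uses that scheme, with the concentration parameter `alpha` as an assumption rather than something prescribed by this issue:

```python
import numpy as np

def dirichlet_partition(labels, num_clients, alpha=0.5, seed=0):
    """Split sample indices into non-IID client shards via Dirichlet
    label skew (smaller alpha = more heterogeneous clients)."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    client_idx = [[] for _ in range(num_clients)]
    for c in np.unique(labels):
        idx = rng.permutation(np.where(labels == c)[0])
        # proportion of class c assigned to each client
        props = rng.dirichlet(alpha * np.ones(num_clients))
        cuts = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for cid, shard in enumerate(np.split(idx, cuts)):
            client_idx[cid].extend(shard.tolist())
    return client_idx
```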
 
Proposed structure (suggested)
- fedcpd/
  - server.py, client.py, trainer.py
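
A possible skeleton for these modules; all class names and signatures are illustrative, not fixed:

```python
# fedcpd/client.py (sketch; signatures are illustrative)
class Client:
    def __init__(self, cid, model, loader, cfg):
        self.cid, self.model, self.loader, self.cfg = cid, model, loader, cfg

    def local_train(self, global_weights, global_protos):
        """Run local epochs; return updated weights and local prototypes."""
        ...

# fedcpd/server.py (sketch)
class Server:
    def __init__(self, clients, cfg):
        self.clients, self.cfg = clients, cfg

    def run_round(self, weights, protos):
        """Broadcast state, collect client updates, aggregate weights
        and prototypes for the next round."""
        ...
```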
 
 
Tasks
- Implement client training loop (local epochs, optimizer, prototype updates).
- Add memory/distillation loss as specified in the paper (a hedged sketch follows this list).
- Implement server orchestration (rounds, broadcast, collection).
- Implement prototype handling (update/EMA/aggregation per paper; also sketched below).
- Write README with setup, commands, and expected metrics.
- Add scripts to reproduce main experiments and ablations.
- Add unit tests for prototype ops, memory buffer, and aggregation.
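
Two hedged sketches for the items above: a standard temperature-scaled KD term standing in for the memory-distillation loss, and EMA plus count-weighted aggregation standing in for prototype handling. The paper's exact formulations (targets, temperature, weighting) should replace these assumptions:

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, memory_logits, T=2.0):
    """Temperature-scaled KL distillation against cached logits.

    `memory_logits` are assumed to come from a memory buffer or a frozen
    earlier-round model; this is standard KD, not necessarily the
    paper's exact memory-distillation objective.
    """
    p_teacher = F.softmax(memory_logits.detach() / T, dim=1)
    log_p_student = F.log_softmax(student_logits / T, dim=1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T * T

def ema_update(old_protos, new_protos, momentum=0.9):
    """Client-side EMA of prototypes across rounds (momentum is a guess)."""
    return momentum * old_protos + (1.0 - momentum) * new_protos

def aggregate_prototypes(client_protos, client_counts):
    """Server-side count-weighted average of per-client (C, D) prototype
    tensors, weighted by each client's per-class sample counts (C,)."""
    weighted = torch.stack(
        [p * n.unsqueeze(1) for p, n in zip(client_protos, client_counts)]
    )
    totals = torch.stack(client_counts).sum(0).clamp(min=1).unsqueeze(1)
    return weighted.sum(0) / totals
```

In the local loop these would typically combine as `loss = ce + lambda_kd * distillation_loss(logits, cached_logits)`, with `lambda_kd` a tunable weight (an assumption, not from the paper).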
 
Deliverables
- Well‑documented code with docstrings and comments.
- Configuration files and example command lines (a config sketch follows this list).
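
A minimal argparse-based config sketch with an example invocation; all flag names, defaults, and the module path are placeholders:

```python
# fedcpd/config.py (sketch; flag names and defaults are placeholders)
import argparse

def get_args():
    p = argparse.ArgumentParser(description="FedCPD (sketch)")
    p.add_argument("--dataset", default="cifar10")
    p.add_argument("--num-clients", type=int, default=100)
    p.add_argument("--rounds", type=int, default=200)
    p.add_argument("--local-epochs", type=int, default=5)
    p.add_argument("--alpha", type=float, default=0.5,
                   help="Dirichlet concentration for the non-IID split")
    p.add_argument("--lambda-kd", type=float, default=1.0,
                   help="weight on the memory-distillation term")
    return p.parse_args()

# Example: python -m fedcpd.train --dataset cifar10 --num-clients 100 --alpha 0.5
```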
 
Acceptance criteria
- Training completes on at least one dataset with non‑IID partitions.
- Result trends align with the paper's (within reasonable tolerance).
 
References
- Paper: “FedCPD: Personalized Federated Learning with Prototype‑Enhanced Representation and Memory Distillation,” IJCAI 2025.
- Link(s): [FedCPD]
- Please cite the paper in the README.
 
Labels
enhancement, help wanted, research, federated-learning, personalization