Add split learning PELoss defense and DcorLoss defense benchmark.(IJC… #31
Conversation
Summary of Changes

Hello @bear-zd, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request enhances the security of split learning systems by introducing two defense strategies against label inference attacks, PE Loss and Dcor Loss. These additions aim to protect sensitive label information during collaborative model training, thereby improving privacy without severely compromising model performance. Concurrently, a refinement to the sim_lia attack mechanism ensures more accurate evaluation of defense effectiveness.
Code Review
This pull request introduces two new defense mechanisms, PE Loss and Dcor Loss, against label inference attacks, based on a recent research paper. The implementation includes the loss functions, their integration into the benchmark framework, and corresponding tests. The changes are a valuable addition. My review focuses on improving code quality, robustness, and configurability. Key suggestions include using the logging module instead of print statements, making hyperparameters configurable, preventing potential runtime errors like division by zero, and strengthening the new tests with assertions.
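For concreteness, here is a minimal sketch of two of those suggestions (logging instead of print, and guarding the distance-correlation denominator against division by zero). `safe_dcor` is a hypothetical helper for illustration, not code from this PR:

```python
import logging

import torch

logger = logging.getLogger(__name__)

def safe_dcor(dcov2: torch.Tensor, dvar_x: torch.Tensor,
              dvar_y: torch.Tensor, eps: float = 1e-12) -> torch.Tensor:
    # Clamp the denominator so batches with (near-)constant features or
    # labels cannot trigger a division by zero, and report through the
    # logging module rather than print().
    denom = (dvar_x.sqrt() * dvar_y.sqrt()).clamp_min(eps)
    logger.debug("dCor denominator = %.3e", denom.item())
    return dcov2 / denom
```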
overall good contribution, some modifications would be great
LGTM
lgtm
thank you, very high quality contribution!
Type of change
Description
New defense method: PE Loss and Dcor Loss
Added PE Loss as a defense method against LIA (label inference attacks), and implemented the Dcor variant of the NoPeek method mentioned in the paper as a baseline defense algorithm. Design files for the loss-based LIA defense methods are provided. Testing shows compatibility with the existing LIA methods and effective defense results.
PE Loss (Potential Energy Loss) and Dcor Loss (Distance Correlation Loss) are two loss functions designed to defend against label inference attacks. PE Loss, published at IJCAI 2024, counters clustering-based LIA methods by regularizing the spatial arrangement of intermediate features with a potential-energy penalty. Paper: Protecting Split Learning by Potential Energy Loss. A sketch of both losses is given below.
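For intuition, here is a minimal PyTorch sketch of the two losses. It is an illustration under stated assumptions, not this PR's actual API: the names `pe_loss` and `dcor_loss` and their signatures are hypothetical. `pe_loss` treats same-class features as mutually repelling charges on the unit sphere (an inverse-distance potential), and `dcor_loss` computes the sample distance correlation between features and one-hot labels.

```python
import torch
import torch.nn.functional as F

def pe_loss(features: torch.Tensor, labels: torch.Tensor,
            eps: float = 1e-8) -> torch.Tensor:
    """Potential-energy-style penalty: same-class features repel each other."""
    f = F.normalize(features, dim=1)      # project features onto the unit sphere
    loss = features.new_zeros(())
    for c in labels.unique():
        fc = f[labels == c]
        n = fc.size(0)
        if n < 2:
            continue
        d = torch.cdist(fc, fc)           # pairwise Euclidean distances
        off_diag = ~torch.eye(n, dtype=torch.bool, device=d.device)
        # Inverse-distance "potential energy", averaged over same-class pairs;
        # eps avoids division by zero for coincident points.
        loss = loss + (1.0 / (d[off_diag] + eps)).mean()
    return loss

def dcor_loss(features: torch.Tensor, labels: torch.Tensor,
              num_classes: int, eps: float = 1e-12) -> torch.Tensor:
    """Sample (squared) distance correlation between features and one-hot labels."""
    y = F.one_hot(labels, num_classes).float()
    a = torch.cdist(features, features)
    b = torch.cdist(y, y)
    # Double-center both distance matrices.
    A = a - a.mean(0, keepdim=True) - a.mean(1, keepdim=True) + a.mean()
    B = b - b.mean(0, keepdim=True) - b.mean(1, keepdim=True) + b.mean()
    dcov2 = (A * B).mean()                # squared distance covariance
    denom = ((A * A).mean().sqrt() * (B * B).mean().sqrt()).clamp_min(eps)
    return dcov2 / denom
```

The per-class loop keeps the sketch readable; a vectorized, masked version would be preferable for large batches.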
Local testing was conducted using sim_lia as the attack method, with PE Loss and Dcor Loss as the defense methods.

The partial results show the effectiveness of the defense methods. Note that the hyperparameter alpha controls the trade-off: higher alpha values provide stronger defense but may hurt classification performance (see the sketch below for how alpha enters the objective).
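For illustration, alpha would typically enter the training objective as a weight on the defense term. This is an assumed form building on the sketch above; the benchmark's actual wiring may differ, and `model`, `x`, `y`, `num_classes`, and `alpha` are placeholders:

```python
import torch.nn.functional as F

logits, features = model(x)               # hypothetical split-model forward pass
task_loss = F.cross_entropy(logits, y)
defense_loss = pe_loss(features, y)       # or dcor_loss(features, y, num_classes)
loss = task_loss + alpha * defense_loss   # larger alpha: stronger defense, lower accuracy
loss.backward()
```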
The command can be run as follows:
Small fix to sim_lia
Normalizing the features before k-means yields better attack metrics; a sketch of the change follows.
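A minimal sketch of the fix, assuming the attack clusters intermediate embeddings with scikit-learn's KMeans; `cluster_embeddings` is a hypothetical helper, not the PR's actual function:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import normalize

def cluster_embeddings(embeddings: np.ndarray, n_clusters: int) -> np.ndarray:
    # L2-normalize each row first: on the unit sphere, Euclidean k-means
    # behaves like cosine clustering, which is insensitive to feature
    # magnitude and empirically improves the attack's cluster quality.
    feats = normalize(embeddings, norm="l2")
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0)
    return km.fit_predict(feats)
```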