Conversation

@bear-zd bear-zd commented Sep 24, 2025

Type of change

  • Add new papers (Please tell us why you think this paper is awesome!)
  • Fix the category of an existing paper/papers (Please tell us the reasons)
  • Add a new tool/primitive/application with a new markdown page (Thank you! Also, please tell us more about this awesome thing!)

Description

New defense methods: PE Loss and Dcor Loss

Added PE Loss as a defense method against LIA (Label Inference Attacks), and implemented the Dcor variant of the NoPeek method mentioned in the paper as a baseline defense algorithm. Design files for the loss-based LIA defense methods are provided. Testing shows compatibility with existing LIA methods and effective defense results.
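
The Dcor baseline follows the NoPeek idea of minimizing distance correlation. Below is a minimal NumPy sketch of sample distance correlation, illustrative only — the function names are made up and this is not the code in this PR; during training, the defense would minimize this quantity between the cut-layer features and the labels:

```python
import numpy as np

def _double_centered(x: np.ndarray) -> np.ndarray:
    """Double-centered pairwise Euclidean distance matrix of the rows of x."""
    d = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    return d - d.mean(axis=0, keepdims=True) - d.mean(axis=1, keepdims=True) + d.mean()

def dcor_loss(features: np.ndarray, labels_onehot: np.ndarray, eps: float = 1e-12) -> float:
    """Sample distance correlation in [0, 1]; minimizing it decorrelates
    the intermediate features from the labels."""
    a = _double_centered(features)
    b = _double_centered(labels_onehot)
    dcov2 = (a * b).mean()
    dvar2 = np.sqrt((a * a).mean() * (b * b).mean())
    return float(np.sqrt(max(dcov2, 0.0) / (dvar2 + eps)))
```

In the actual defense this term would be computed inside the training framework so that gradients flow back through the bottom model.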

PE Loss (Potential Energy Loss) and Dcor Loss (Distance Correlation Loss) are two loss functions designed to defend against label inference attacks. PE Loss counters clustering-based LIA methods by regularizing the spatial positioning of features through a potential-energy term; it was published at IJCAI 2024. Paper: Protecting Split Learning by Potential Energy Loss
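
As a rough illustration of the potential-energy idea (a sketch under my own naming, not the PR's implementation): same-class features, projected onto the unit hypersphere, are treated as like charges whose total potential energy (sum of inverse pairwise distances) is minimized, pushing them apart and flattening the per-class clusters that clustering-based LIA exploits:

```python
import numpy as np

def pe_loss(features: np.ndarray, labels: np.ndarray, eps: float = 1e-8) -> float:
    """Mean inverse pairwise distance between L2-normalized features that
    share a label. Minimizing it spreads same-class features apart,
    degrading clustering-based label inference."""
    # project features onto the unit hypersphere so distances are bounded
    h = features / (np.linalg.norm(features, axis=1, keepdims=True) + eps)
    energy, pairs = 0.0, 0
    for c in np.unique(labels):
        hc = h[labels == c]
        for i in range(len(hc)):
            for j in range(i + 1, len(hc)):
                energy += 1.0 / (np.linalg.norm(hc[i] - hc[j]) + eps)
                pairs += 1
    return energy / max(pairs, 1)
```

Tightly clustered same-class features give a large loss; well-spread ones give a small loss, which is the direction the defense pushes in.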

Local testing was conducted using sim_lia as the attack method, with PE Loss and Dcor Loss as the defense methods.

[screenshot: local test results]

Partial results demonstrate the effectiveness of the defense methods. Note that the hyperparameter alpha controls a trade-off: higher alpha values provide stronger defense but may degrade classification performance.
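
The trade-off above typically enters training as a weighted sum; a trivial hypothetical form (alpha mirrors the benchmark hyperparameter, the names are illustrative):

```python
def combined_loss(task_loss: float, defense_loss: float, alpha: float) -> float:
    """Larger alpha weights the defense term more heavily: stronger
    protection against LIA, at some cost to classification accuracy."""
    return task_loss + alpha * defense_loss
```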

The benchmark can be run as follows:

python benchmark_examples/autoattack/main.py mnist resnet18 sim_lia peloss
python benchmark_examples/autoattack/main.py mnist resnet18 sim_lia dcorloss

Small fix to sim_lia

Normalizing the features before k-means clustering yields better attack metrics.
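
A sketch of what such a preprocessing step might look like (hypothetical helper name; the attack's actual k-means call lives in the benchmark code):

```python
import numpy as np

def l2_normalize(feats: np.ndarray, eps: float = 1e-12) -> np.ndarray:
    """L2-normalize each row so k-means groups features by direction
    (cosine geometry) instead of being dominated by magnitude."""
    return feats / (np.linalg.norm(feats, axis=1, keepdims=True) + eps)

# e.g. cluster l2_normalize(raw_features) rather than raw_features
```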

@bear-zd bear-zd requested a review from rivertalk as a code owner September 24, 2025 09:08
@gemini-code-assist

Summary of Changes

Hello @bear-zd, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the security of split learning systems by introducing two advanced defense strategies, PE Loss and Dcor Loss, against label inference attacks. These additions aim to protect sensitive label information during collaborative model training, thereby improving privacy without severely compromising model performance. Concurrently, a refinement to the sim_lia attack mechanism ensures more accurate evaluation of defense effectiveness.

Highlights

  • New Defense Methods: Introduced two novel defense mechanisms, PE Loss and Dcor Loss, specifically designed to counter Label Inference Attacks (LIA) in split learning environments.
  • PE Loss Implementation: Implemented Potential Energy Loss (PELoss), which works by adjusting the spatial positioning of features to defend against clustering-based LIA, based on a paper from IJCAI 2024.
  • Dcor Loss Implementation: Implemented Distance Correlation Loss (DcorLoss), a variant of the NoPeek method, as another baseline defense algorithm against LIA.
  • sim_lia Attack Improvement: Applied a small fix to the sim_lia attack by normalizing features before k-means clustering, which has been shown to yield better attack metrics.
  • Integration and Testing: Integrated these new defense methods into the benchmark system, with local testing demonstrating their compatibility with existing LIA methods and effective results, though higher alpha values for defense may impact classification performance.


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces two new defense mechanisms, PE Loss and Dcor Loss, against label inference attacks, based on a recent research paper. The implementation includes the loss functions, their integration into the benchmark framework, and corresponding tests. The changes are a valuable addition. My review focuses on improving code quality, robustness, and configurability. Key suggestions include using the logging module instead of print statements, making hyperparameters configurable, preventing potential runtime errors like division by zero, and strengthening the new tests with assertions.

@bear-zd bear-zd force-pushed the main branch 4 times, most recently from 090e04e to 994343a Compare September 24, 2025 10:48

@da-niao-dan da-niao-dan left a comment


overall good contribution, some modifications would be great

@bear-zd bear-zd force-pushed the main branch 4 times, most recently from 803d15c to 543e16e Compare October 10, 2025 17:51

@ian-huu ian-huu left a comment


LGTM


@da-niao-dan da-niao-dan left a comment


lgtm

@da-niao-dan

thank you, very high quality contribution!

@da-niao-dan da-niao-dan merged commit aca8bd0 into secretflow:main Oct 21, 2025
22 checks passed