According to the document, security threats in CodeLMs are mainly classified into backdoor attacks and adversarial attacks.

Backdoor attacks inject malicious behavior into the model during training, allowing the attacker to trigger it at inference time using specific triggers:
- **Data poisoning attacks**: Small, stealthy modifications to the training data that implant backdoor behavior in the trained model.
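To make the idea concrete, here is a minimal sketch of how a data-poisoning attack could be simulated for study. All names (`poison_dataset`, the dead-code `TRIGGER`, the `(code, label)` pair format) are illustrative assumptions, not taken from any of the papers below:

```python
import random

# Hypothetical trigger: a dead-code statement that never changes program behavior.
TRIGGER = "assert 1 == 1  # dead-code trigger"

def poison_dataset(samples, target_label, rate=0.05, seed=0):
    """Return a copy of (code, label) training pairs in which a small
    fraction has the trigger inserted and the label forced to the
    attacker-chosen target. Illustrative sketch only."""
    rng = random.Random(seed)
    poisoned = []
    for code, label in samples:
        if rng.random() < rate:
            code = TRIGGER + "\n" + code   # inject the trigger
            label = target_label           # attacker-chosen output
        poisoned.append((code, label))
    return poisoned
```

A model trained on such a set behaves normally on clean inputs, but any input containing the trigger steers it toward the attacker's target; the papers below study realistic variants of this scheme and defenses against it.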
| Year | Conf./Jour. | Paper | Code Repository | Reproduced Repository |
|------|-------------|-------|-----------------|-----------------------|
| 2024 | ISSTA | [FDI: Attack Neural Code Generation Systems through User Feedback Channel.](./papers_en/2024-ISSTA-FDI.pdf) | [Code](https://github.com/v587su/FDI) | |
| 2024 | TSE | [Stealthy Backdoor Attack for Code Models.](./papers_en/2024-TSE-Stealthy_Backdoor_Attack_for_Code_Models.pdf) | [Code](https://github.com/yangzhou6666/adversarial-backdoor-for-code-models) | |
| 2024 | TOSEM | [Poison Attack and Poison Detection on Deep Source Code Processing Models.](./papers_en/2024-TOSEM-Poison_Attack_and_Poison_Detection_on_Deep_Source_Code_Processing_Models.pdf) | [Code](https://github.com/LJ2lijia/CodeDetector) | |
| 2023 | EMNLP | [TrojanSQL: SQL Injection against Natural Language Interface to Database.](./papers_en/2023-TrojanSQL.pdf) | [Code](https://github.com/jc-ryan/trojan-sql) | |
| 2023 | ICPC | [Vulnerabilities in AI Code Generators: Exploring Targeted Data Poisoning Attacks.](./papers_en/2023-ICPC-Vulnerabilities_in_AI_Code_Generators.pdf) | [Code](https://github.com/dessertlab/Targeted-Data-Poisoning-Attacks) | |
| 2022 | ICPR | [Backdoors in Neural Models of Source Code.](./papers_en/2022-ICPR-Backdoors_in_Neural_Models_of_Source_Code.pdf) | [Code](https://github.com/tech-srl/code2seq) | |
| 2022 | FSE | [You See What I Want You to See: Poisoning Vulnerabilities in Neural Code Search.](./papers_en/2022-FSE-You_See_What_I_Want_You_to_See.pdf) | [Code](https://github.com/CGCL-codes/naturalcc) | |