LLM-wielding students seem to be a new "attack vector" in this and many other repos #1952
Replies: 3 comments · 3 replies
-
Agreed. I for one have been trying not to be too heavy-handed or confrontational, so as not to turn away potential new open source contributors, but I acknowledge that this tendency has serious downsides.
-
Thanks @nickchomey for referencing Ghostty terminal's contribution guidelines; they look solid. Also, I think we could stop accepting PRs that don't have an assigned issue, since it is much simpler to review an issue than a PR.
-
What I suggest is that we do not force a formal process or template on contributors. We should accept contributions in whatever form folks can provide, and then politely work with them to get them into shape. When someone is far off-base, we can point them to our contribution guidelines, more than once if needed. Recently we've also seen a rash of low-effort AI-generated submissions. In the last few days we've been more aggressive about closing issues that fall into the "AI slop" category. I think we should keep this up to protect the leadership's time. Here's the behavior I'm proposing (and I am open to other suggestions):
This is just my 2 cents. I know your time as contribution leads is valuable, and I want to respect it. I'm hoping that being quicker to close non-actionable items addresses some of your concerns.
-
It is abundantly clear to me that this repo (and seemingly many others) is being spammed by LLM-wielding students who are eager to put something on their CV.
There have been some seemingly helpful PRs that make small changes (e.g. http -> https), but there is also a lot of very concerning stuff going on that I don't think the maintainers are fully aware of yet.
#1927 is the most egregious example of this, and #1937 is entirely of the same nature. I opened #1936, and the only two responses I got were clearly like this as well. Even after I rejected the terrible proposal there, they went ahead and submitted a PR (#1949) anyway, which @jmanico started treating seriously. I've noticed other maintainers (e.g. @szh in #1937) doing the same.
You created an AI-disclosure policy and PR template (which seem to be used only sporadically), but there's clearly nothing stopping these people from flouting it all. Moreover, even if they say they "manually reviewed" the AI output, what value does that have when they are self-interested ESL students with limited knowledge and experience?
Given that this is a premier source of security guidance, it would be prudent for your team to give serious thought to how to be significantly more vigilant about guarding against this going forward.
Edit: To be clear, I say all of this with significant gratitude for the work you have done. It is an alert and a call to action rather than any sort of castigation. I don't really know what can be done about it, but I am sure that all of open source is suffering from the same problem.
Ghostty terminal seems to have a comprehensive approach to all of this: https://github.com/ghostty-org/ghostty/blob/main/CONTRIBUTING.md
They don't even allow anyone to open issues (and presumably PRs must stem from an approved issue); you have to start a discussion first.
I hope this helps.