AI policy #789
oskardotglobal started this conversation in General
Replies: 2 comments 3 replies
- @oskardotglobal My thoughts largely mirror yours; these are all sensible suggestions.
- I agree, on all points. We should definitely formalise this as part of the contribution guidelines (or new AI guidelines). From time to time we should check whether CodeRabbit is still safe to use (it seems fine for now). The AI industry is full of broken promises and security flaws (I have seen exploits to gain repo access from similar AIs).
Sadly, AI is something we can no longer ignore. But aside from slop machines like the recently unveiled Sora, LLMs can sometimes be useful when working with code. I think we should get our ideas straight about this so we can draft a policy to act on in the future.
So, I'm just gonna start with some of my ideas:
One thing I'd specifically like to discuss further is the use of AI review tooling like https://www.coderabbit.ai/. You have probably seen it all over GitHub by now, generating titles and entire PR bodies for newly created pull requests. While that is too intrusive for my taste by default, it can be configured to only review, or even to only review when asked to, as you can see in #308. I've taken the liberty of setting it up on this repository, so any pull request can get this kind of review by commenting `@coderabbitai review`.
What do you think?
(cc @LDprg @KernelGhost @AkechiShiro @eylenburg)
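
For anyone who wants to see what "only review when asked" could look like in practice, here is a minimal, illustrative `.coderabbit.yaml` sketch. The key names here are my assumptions and should be checked against CodeRabbit's current configuration docs before anything is committed.

```yaml
# .coderabbit.yaml -- illustrative sketch only; verify key names against
# CodeRabbit's current configuration schema before relying on this.
reviews:
  high_level_summary: false   # don't auto-generate PR summaries/bodies
  auto_review:
    enabled: false            # skip automatic reviews; only act when invoked
chat:
  auto_reply: false           # don't reply unless explicitly mentioned
```

With a config along these lines, reviews would only happen on demand, e.g. by commenting `@coderabbitai review` on a pull request.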