# Ground Rules for Better PRs

- Prefer battle-tested open source over building from scratch
- If no open source, check our packages first (@intact/component-library, @intact/core, @harbour/core)
- Reuse existing helpers. Don't duplicate.
- Don't copy logic to work around complexity. Fix or extend the original.
- Own your code. Be able to explain every line.
- Small, focused PRs that deploy and show something working

## Hold AI to the Same Standards

- You wouldn't accept a 100-file PR from a new dev. Don't accept it from AI.
- AI favours volume over precision. Reviewers rubber-stamp because they can't review 100 files.
- DRY, small PRs, and code ownership exist because we learned what happens without them.
- Open source is tested and battle-hardened. AI code has none of that.

## AI Doesn't Extend, It Recreates

- **Copy-and-diverge**: AI hits complex code, doesn't understand why it's complex, and copies what it needs with looser types. Now you have two versions: one tested, one not. They'll drift.
- Recent PR: an 800-line duplicated file. GitHub hides large files by default, so I missed it on the first pass.
- **NIH on steroids**: AI sees "table with editing" and builds one. It doesn't know material-react-table exists, with years of bug fixes already solved.
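
The copy-and-diverge bullet is easier to see with a toy example. Everything here is invented for illustration (`parsePrice` and its copy are hypothetical names, not code from our repos): a bug fix lands in the original, and the untested duplicate quietly keeps the bug.

```typescript
// Hypothetical illustration: parsePrice is the original, tested helper;
// parsePriceCopy is an AI-generated duplicate made before a bug fix landed.
function parsePrice(input: string): number {
  // Bug fix that shipped in the original: strip thousands separators.
  return Number(input.replace(/,/g, ""));
}

// The copy was never updated, nothing links it back to the original,
// and no test covers it, so "1,299" still comes back as NaN.
function parsePriceCopy(input: string): number {
  return Number(input);
}
```

Nothing in the type system or the test suite connects the two, which is exactly why the drift goes unnoticed until an incident.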

## Practical Steps

- Write tests first, based on JIRA acceptance criteria
- Sonar quality gates: fail PRs that introduce duplication (new code only)
- Danger.js: warn (don't block) when basic libraries are used over preferred alternatives (e.g. @mui/material table vs material-react-table)
- Flag new files over X lines, or PRs that grow an existing file by Y+ lines
- Manual review is the last step, but still essential
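
The Danger.js bullets above could be wired up roughly like this. This is a sketch, not an agreed config: the thresholds (300 and 150 stand in for the X and Y we haven't picked) and the `PREFERRED` map are placeholder values, and `checkPr` is a hypothetical helper. Only `warn`, `danger.git.created_files`, `danger.git.modified_files`, and `danger.git.diffForFile` are real Danger.js APIs.

```typescript
// Sketch of PR-size and library-preference checks for a dangerfile.ts.
// Thresholds and the PREFERRED map are placeholders, not team-agreed settings.
type FileStat = { path: string; addedLines: number; isNew: boolean };

const MAX_NEW_FILE_LINES = 300; // stands in for "X lines" above
const MAX_FILE_GROWTH = 150;    // stands in for "Y+ lines" above

// Basic imports we'd rather replace with a richer library.
const PREFERRED: Record<string, string> = {
  "@mui/material/Table": "material-react-table",
};

// Pure helper so the rules are unit-testable outside Danger.
function checkPr(files: FileStat[], diffs: Record<string, string>): string[] {
  const warnings: string[] = [];
  for (const f of files) {
    if (f.isNew && f.addedLines > MAX_NEW_FILE_LINES) {
      warnings.push(`New file ${f.path} is ${f.addedLines} lines; consider splitting or reusing existing code.`);
    }
    if (!f.isNew && f.addedLines > MAX_FILE_GROWTH) {
      warnings.push(`${f.path} grew by ${f.addedLines} lines in one PR.`);
    }
    for (const [basic, better] of Object.entries(PREFERRED)) {
      if ((diffs[f.path] ?? "").includes(basic)) {
        warnings.push(`${f.path} imports ${basic}; prefer ${better}.`);
      }
    }
  }
  return warnings;
}

// In the real dangerfile this would be fed from danger.git.created_files,
// danger.git.modified_files and danger.git.diffForFile(), with each entry
// passed to warn() so the PR is annotated but not blocked.
```

Keeping the rule logic in a pure function keeps the dangerfile itself trivial and lets the thresholds be tuned with ordinary unit tests.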

## What Happens If We Don't

- You fix a bug in the original. The copy still has it. You don't know the copy exists.
- "AI will know": AI has no memory between sessions. If you need AI to understand your own code, you've lost.
- Bugs ship that open source fixed years ago.
- A 2am incident hits and no one knows where to look.
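
"You don't know the copy exists" is the detectable part: alongside Sonar's duplication gate, a clone detector like jscpd can fail CI when a PR introduces copy-paste. A sketch of a `.jscpd.json`, assuming jscpd runs in CI (e.g. `npx jscpd src`); the threshold, token/line minimums, and ignore globs are placeholder values, not tuned settings:

```json
{
  "threshold": 2,
  "minLines": 10,
  "minTokens": 70,
  "ignore": ["**/node_modules/**", "**/*.test.*"],
  "reporters": ["console"],
  "gitignore": true
}
```

With `threshold` set, jscpd exits non-zero when duplication exceeds that percentage of the scanned code, so the pipeline fails instead of relying on a reviewer to spot the 800-line copy.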

## Citations

- [SonarSource: Poor code quality in AI-accelerated codebases](https://www.sonarsource.com/blog/the-inevitable-rise-of-poor-code-quality-in-ai-accelerated-codebases)