fix: extend bad_spec to cover data-loss findings within story domain #2139

gabadi wants to merge 2 commits into bmad-code-org:main
Conversation
🤖 Augment PR Summary: Extends the quick-dev step-04-review guidance so in-scope data-loss findings are treated as bad_spec.
  2. Classify each finding. The first three categories are **this story's problem** — caused or exposed by the current change. The last two are **not this story's problem**.
     - **intent_gap** — caused by the change; cannot be resolved from the spec because the captured intent is incomplete. Do not infer intent unless there is exactly one possible reading.
-    - **bad_spec** — caused by the change, including direct deviations from spec. The spec should have been clear enough to prevent it. When in doubt between bad_spec and patch, prefer bad_spec — a spec-level fix is more likely to produce coherent code.
+    - **bad_spec** — caused by the change, including direct deviations from spec. The spec should have been clear enough to prevent it. When in doubt between bad_spec and patch, prefer bad_spec — a spec-level fix is more likely to produce coherent code. Also applies when a finding reveals data silently dropped or never reaching its destination within the story's domain — even if the code predates the diff.
This bad_spec definition still starts with “caused by the change” but then says it applies even if the code predates the diff, which could confuse reviewers about when pre-existing issues should be treated as this story’s problem. Consider clarifying what qualifies as “within the story’s domain” (and whether being merely in-scope vs. caused/exposed-by-change is the deciding factor) to keep classifications consistent.
Severity: medium
No actionable comments were generated in the recent review. 🎉
Run configuration: .coderabbit.yaml — Review profile: CHILL — Plan: Pro — Files selected for processing: 1
📝 Walkthrough: Documentation update to the quick-dev review classification process.
Estimated code review effort: 🎯 1 (Trivial) | ⏱️ ~2 minutes
🚥 Pre-merge checks: ✅ 5 passed
What
Extends the `bad_spec` classification in quick-dev step-04-review to cover findings where data is silently dropped or never reaches its destination within the story's domain — even if the code predates the diff.

Why
In a real-world quick-dev run, an edge case hunter correctly identified a missing field mapping (data silently dropped at a domain boundary). The finding was misclassified as `defer` because the diff didn't touch that line. The bug shipped and was only caught by human review.

Fixes #2138
How
- Extended the `bad_spec` definition in `step-04-review.md`: data-loss findings within the story's domain qualify as `bad_spec`, not `defer`
- Left the `defer` definition unchanged — the gate lives where the reviewer looks when deciding what qualifies as `bad_spec`
Validated through adversarial review and edge case analysis of the proposed change. The rule correctly reclassifies the original finding (missing `learningPopulation` on the CreditCheck builder) as `bad_spec` while leaving genuinely out-of-scope pre-existing issues as `defer`.
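For context, a minimal sketch of the kind of silent data loss this rule targets. The names (`CreditCheckRequest`, `CreditCheck`, `learning_population`) are hypothetical, reconstructed from the finding described above — the actual codebase is not shown in this PR:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class CreditCheckRequest:
    """Upstream domain object arriving at the boundary."""
    applicant_id: str
    learning_population: str  # field that must survive the mapping


@dataclass
class CreditCheck:
    """Downstream domain object produced by the builder."""
    applicant_id: str
    learning_population: Optional[str] = None


def build_credit_check(req: CreditCheckRequest) -> CreditCheck:
    # Bug of the kind the rule targets: learning_population is never
    # copied, so the data is silently dropped at the domain boundary.
    # No error is raised; the field just defaults to None downstream.
    return CreditCheck(applicant_id=req.applicant_id)


def build_credit_check_fixed(req: CreditCheckRequest) -> CreditCheck:
    # The spec-level fix: the mapping is restored explicitly.
    return CreditCheck(
        applicant_id=req.applicant_id,
        learning_population=req.learning_population,
    )
```

Because the buggy builder may predate the diff under review, only a rule that looks at the story's domain (rather than the touched lines) classifies this as `bad_spec` instead of `defer`.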