We launched git-lrc on Product Hunt last week.
It ended up as #3 Product of the Day.
Good signal. But launches are spikes. The more important question in my mind is how to proceed further — and whether we are building something worthwhile for the global engineering community.
As of now, my core idea is simple:
- Not a SaaS layer floating above Git.
- Not comments that disappear into CI logs.
- Not feedback locked inside a PR UI.
If review matters, it should live where the code lives.
The Model: Git ↔ GitHub
The mental model is the same as Git and GitHub.
Git is the local core; GitHub is the hosted layer on top. Similarly:

- Inference can be cloud or local (any model).
- Everything else — execution, storage, policy — is local and Git-native.
That separation feels correct.
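One way to picture that separation, as a sketch rather than git-lrc's actual internals: inference is an injectable function, while execution and storage never leave the machine. The function names here are illustrative assumptions.

```python
from typing import Callable

# Sketch (assumption, not git-lrc internals): inference is pluggable,
# everything around it is local and Git-native.
def run_review(diff: str, infer: Callable[[str], str]) -> dict:
    """Run a review locally; only `infer` may leave the machine."""
    comments = infer(diff)  # could be a cloud API call or a local model
    return {"comments": comments, "stored_in": ".lrc/"}

# A trivial local stand-in for a model, just for illustration.
def local_model(diff: str) -> str:
    return f"reviewed {len(diff)} chars"

result = run_review("- old\n+ new\n", local_model)
```

Swapping `local_model` for a cloud client changes nothing else; that is the point of the separation.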
Reviews as Artifacts (.lrc/)

Each commit generates a review.
That review is stored in the repository itself, under .lrc/.
That does a few important things.
Most AI review systems generate text and move on.
Here, review becomes repository state.
That changes incentives. You can’t hand-wave it away.
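A minimal sketch of "review as repository state", assuming a one-file-per-commit layout under .lrc/ (the directory structure and file names are my assumptions, not git-lrc's actual on-disk format):

```python
import json
import tempfile
from pathlib import Path

# Assumed layout: one review file per commit under .lrc/reviews/.
def store_review(repo: Path, commit_sha: str, comments: list) -> Path:
    """Persist a review next to the code so it is versioned with it."""
    path = repo / ".lrc" / "reviews" / f"{commit_sha}.json"
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps({"commit": commit_sha, "comments": comments}, indent=2))
    return path

repo = Path(tempfile.mkdtemp())
p = store_review(repo, "abc1234", ["consider renaming foo"])
```

Because the file lives in the repo, it can be committed, diffed, and blamed like any other artifact.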
.lrc as Policy

The .lrc folder is not just output storage. It also defines review policy.
So the repository itself declares how it wants to be reviewed.
Two repos can behave very differently.
That’s a feature.
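To make "repo declares its own review policy" concrete, here is a hypothetical policy shape with repo-level overrides. Every field name below is an illustration, not git-lrc's real schema:

```python
# Hypothetical defaults; in practice this might live in a file under
# .lrc/ (assumption). Field names are illustrative only.
DEFAULT_POLICY = {
    "severity_floor": "warning",              # ignore findings below this
    "exclude_paths": ["vendor/", "*.lock"],   # skip generated code
    "max_comments": 20,                       # keep reviews readable
}

def effective_policy(repo_overrides: dict) -> dict:
    """Repo-level settings win over defaults (shallow merge for the sketch)."""
    return {**DEFAULT_POLICY, **repo_overrides}

# One repo wants stricter reviews than another — that's the feature.
strict = effective_policy({"severity_floor": "error"})
```

Two repos with different overrides behave differently under the same tool, which is exactly the behavior described above.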
The Signal Layer (YouTube-style)
Right now, git-lrc generates a web UI on commit.
The obvious next step is letting developers signal on each comment with a simple up or down vote.
Not for generic model training.
For repository-level refinement.
Over time, that per-comment signal refines how the repository gets reviewed.
Most AI tools stop at generation.
This closes the loop.
And the loop is everything.
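A sketch of what closing the loop could look like: per-comment votes appended to a log that lives under .lrc/ so the signal is versioned with the repo. The file name and record shape are assumptions.

```python
import json
import tempfile
from pathlib import Path

# Assumed signal log: one JSON line per developer reaction.
def record_signal(log: Path, comment_id: str, vote: str) -> None:
    assert vote in ("up", "down")
    with log.open("a") as f:
        f.write(json.dumps({"comment": comment_id, "vote": vote}) + "\n")

def tally(log: Path) -> dict:
    """Net score per comment: +1 for up, -1 for down."""
    counts: dict = {}
    for line in log.read_text().splitlines():
        entry = json.loads(line)
        delta = 1 if entry["vote"] == "up" else -1
        counts[entry["comment"]] = counts.get(entry["comment"], 0) + delta
    return counts

log = Path(tempfile.mkdtemp()) / "signals.jsonl"
record_signal(log, "c1", "up")
record_signal(log, "c1", "up")
record_signal(log, "c2", "down")
scores = tally(log)
```

Consistently downvoted comment types are exactly what repository-level refinement would suppress next time.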
Unifying Static Analysis + AI
There is no reason to choose between AI review and traditional tools.
Tools like linters, type checkers, and security scanners already do valuable work.
The opportunity is orchestration, not replacement.
Developers don’t want five disjoint streams of warnings.
They want one prioritized review.
AI is well-suited for the aggregation layer.
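The aggregation layer can be sketched as merge, dedupe, and prioritize. The tool categories and scoring below are assumptions, shown only to make "one prioritized review" concrete:

```python
# Lower number = higher priority (assumed ordering).
SEVERITY = {"error": 0, "warning": 1, "info": 2}

def aggregate(*streams) -> list:
    """Merge findings from several tools, dedupe, sort by severity."""
    seen, merged = set(), []
    for stream in streams:
        for finding in stream:
            key = (finding["file"], finding["line"], finding["message"])
            if key not in seen:  # drop duplicates across tools
                seen.add(key)
                merged.append(finding)
    return sorted(merged, key=lambda f: (SEVERITY[f["severity"]], f["file"], f["line"]))

linter = [{"file": "a.py", "line": 3, "message": "unused import", "severity": "warning"}]
ai = [
    {"file": "a.py", "line": 3, "message": "unused import", "severity": "warning"},  # dupe
    {"file": "a.py", "line": 10, "message": "possible None deref", "severity": "error"},
]
review = aggregate(linter, ai)
```

Two input streams become one list with the duplicate removed and the error first, which is the single prioritized review developers actually want.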
Making Review Callable
If review becomes a repository primitive, other systems should call it.
Another bot should be able to ask for a review of a given commit and receive the result.
That allows bot-to-bot workflows.
It also means git-lrc becomes infrastructure, not just a CLI tool.
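A sketch of what a callable review primitive might look like to another bot. This interface is an assumption about the idea, not git-lrc's actual API:

```python
from dataclasses import dataclass, field

@dataclass
class ReviewResult:
    commit: str
    findings: list = field(default_factory=list)

# Hypothetical entry point a CI job or another bot could invoke.
def review_commit(commit_sha: str, reviewer=lambda sha: []) -> ReviewResult:
    """Review a commit and return structured results, not free text."""
    return ReviewResult(commit=commit_sha, findings=reviewer(commit_sha))

# Bot-to-bot usage: a release bot gates on the review outcome.
result = review_commit("abc1234")
release_ok = len(result.findings) == 0
```

The key property is the structured return value: a bot can branch on it, which plain text in a PR comment never allows.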
Automatic Fixes via Capability Discovery
Different developers have different coding agents available on their machines.
git-lrc can detect what exists and expose matching actions.
Review should not end in text.
It should end in improved code.
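Capability discovery can be as simple as probing PATH for known agent CLIs and exposing actions accordingly. The agent names listed are examples I chose, not a claim about what git-lrc actually detects:

```python
import shutil

# Example candidate agents (assumption, for illustration only).
KNOWN_AGENTS = ["aider", "claude", "codex"]

def discover_agents(candidates=KNOWN_AGENTS) -> list:
    """Return the subset of candidate CLIs installed on this machine."""
    return [name for name in candidates if shutil.which(name)]

def available_actions(agents: list) -> list:
    # With an agent present, review can end in an applied fix;
    # without one, it ends in text the developer copies by hand.
    return ["apply_fix"] if agents else ["copy_suggestion"]

actions = available_actions(discover_agents())
```

The review UI then only offers "apply fix" when something on the machine can actually apply it.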
Understanding Real Usage
To build something durable, I need to understand how people actually use git-lrc day to day.
Not vanity metrics.
Just clarity about what phase this product is actually in.
The “Upgrade” Path
The transition to LiveReview should be obvious.
A reasonable progression: start with git-lrc locally, let .lrc policies mature, then move to LiveReview when a team needs a hosted layer.

If git-lrc is Git, LiveReview is GitHub.
That symmetry is intentional.
What I’m Actually Trying to Do
What I do feel strongly at a higher level is this:
If code generation keeps accelerating, review must become tighter, earlier, and versioned.
Running review at commit time — storing it inside the repo — and refining it with explicit developer signal feels like the correct direction for engineers who ship.
Local storage may simply work better, because it fits how developers already work.