4 changes: 3 additions & 1 deletion README.md
@@ -153,7 +153,9 @@ We employ several techniques to output a lot of value in a unit of time:
While some software houses lose \~30% of their performance (due to miscommunications, etc.), we stay way below 1% (the exact number is hard to measure when the amount of waste is so low).
As mentioned above, we tend to pause work on a ticket when we are not sure, choosing a small delay on a particular task over potentially having to discard work due to a bad assumption. Our clients prefer it this way.
10. Use modern IDEs: PyCharm, Cursor, Windsurf, or Visual Studio with an LLM plugin. Some people try to use Vim with LLM plugins, but nowadays it's mostly Cursor/Windsurf/PyCharm.
11. Use LLMs (ChatGPT, GitHub Copilot, Claude etc) to speed up the work on the code, though watch every single byte of the diff like it's been written by a party you shouldn't trust.
11. Use LLMs to speed up coding, but we still own QA: read every byte and aim for our normal quality. For small,
contained “one‑shot” changes we may relax QA—and even skip review—if we save the prompt/spec so future bugs can be
fixed smoothly. See the [LLM‑assisted coding agreement](agreements.md#llm-assisted-coding).

# Code Review

45 changes: 45 additions & 0 deletions agreements.md
@@ -166,6 +166,51 @@ That's one of the reasons we have someone in that role for every project.
This also allows for other solutions, such as splitting the cost of a fix between two clients, discounts, etc. - something you couldn't do on your own.
Fortunately, with a dedicated client contact person, you don't have to!

## LLM-Assisted Coding

### Motivation

- LLMs let us move faster (including during reviews), but cumulative low-quality changes can erode long-term
maintainability — one of our trademarks.
- Responsibility remains human: both the author and the reviewer own the quality of what ships, regardless of tooling.
- We may intentionally accept slightly lower quality for one‑shot, low‑risk changes to gain speed, but we do so when it is
clearly safe and we document the prompt/spec.
- There’s a trade‑off: more features with less quality vs fewer features with higher quality. Over time, lower quality
makes teams ship fewer features due to maintenance drag; we optimize for sustained maintainability while using
one‑shots to keep speed where risk is minimal.
- Practical observation: there’s a limit to how much an LLM can safely rewrite in a day without agreed QA.

### Scope and rules

- Default path: normal QA and review. One‑shot changes (small, independently judgeable changes) may be merged with
  relaxed quality and may skip review if all of the following hold:
  - The change is low‑risk, contained, and has minimal blast radius.
  - The author performs a basic functional check.
  - The “final prompt/spec” is saved in the repo as markdown (distilled ask + key constraints/acceptance criteria; the
    chat can be condensed to a short summary) and kept up to date when the code changes.
  - The one‑shot relaxation is mentioned explicitly in the PR/commit.
  - The change is kept standalone (not tucked into a larger PR).
- Larger/core contributions done with LLM assistance require a case‑by‑case, agreed QA/release plan. If we must merge a
  larger LLM change before full QA, do it consciously with that plan in place. Do not merge low‑quality core code
  without such agreement.
- Use case‑by‑case judgement for one‑shots; if uncertain, take the normal QA/review path.

**Review comment (Contributor):**

I find this requirement "stupid" ;) - most of my prompts are:

> [20 lines of logs with bug]
> you see you f***ed up, fix it you ******** :D

I think it might be helpful to instruct the LLM to take the context and summarize it as a "development log" - or maybe add some info about why a decision was made that the code itself cannot convey.

But really - most of the time, instead of explaining what needs to be done... or has been done... I like to say:

> read this commit diff
> ...
> now we need to add...

Even when I need to clear the context, I just bootstrap the next one with the diff - it has the most information inside...

AND...

If we REALLY want some insights from the context to be visible in the repo, the best thing is to instruct the LLM to write them in comments. Why? Because comments are really close to the code, and when you work with an LLM on the code in a new context, you don't need to instruct it to find documents that might have up‑to‑date info about it. If the LLM has this info close to the code, it will update it when asked to change something...

Really... most docs are now generated from code - because that is the most natural way to store them :)

**Review comment:**

I share some of @mzukowski-reef's concerns. What exactly is the intention behind this spec‑saving requirement? Maybe it should be mentioned in the handbook.

There is also no practical example of how the prompt should be saved. We all have different styles of vibe coding. Am I supposed to save only the main prompt and omit the other minor ones? Or should I prompt the LLM to prepare a session summary? Where should it be placed?

**Reply (Contributor, PR author):**

@mzukowski-reef @kacper-wolkiewicz-reef -> you are right, I have relaxed the prompt‑saving requirements. If you agree, I will bring it back to s3 tomorrow for the final vote :)
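
For illustration only (the path, filename, and headings below are assumptions, not a mandated format), a saved one‑shot prompt/spec file might look like:

```markdown
<!-- docs/prompts/retry-fetcher.md (hypothetical path) -->
# One-shot: add retry to fetcher

## Ask (distilled)
Retry failed HTTP fetches in `fetcher.py` up to 3 times with exponential backoff.

## Key constraints / acceptance criteria
- Only network errors are retried; HTTP 4xx responses fail immediately.
- Total retry delay stays under 10 seconds.

## Notes
- Generated with LLM assistance; the code was read in full and functionally checked.
- Keep this file in sync if the retry behavior changes.
```

The point is that a future maintainer hitting a bug can reconstruct the intent without replaying the whole chat.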

### Author responsibilities

- Read every byte of generated code, functionally test it, and prepare the change for regular review (structure the diff,
write tests/docs where applicable).
- When using the one‑shot relaxation, save the prompt/spec near the change (even if you refactored the generated code).
For LLM‑assisted work that goes through normal QA/review, keeping a prompt/spec is optional. Do not include secrets or
client‑sensitive data in prompts.
**Review comment:**

> Do not include secrets or client‑sensitive data in prompts.

This is probably kind of obvious, but if we are mentioning it in the handbook, then I think it should be stated clearly and separately, as the most important point of all.

### Review expectations

- Normal review applies; the reviewer may or may not use LLM tooling.
- If a one‑shot change is being reviewed anyway, verify that the prompt/spec exists, is up to date, and meaningfully
reflects the change.
- Confirm maintainability isn’t degraded (structure, naming, tests, docs), and that no secrets are stored in
prompts/specs.

## Fast track decisions via Slack instead of standard Sociocracy approach

At Reef Technologies, we mostly make decisions in our weekly Sociocracy meetings.
12 changes: 12 additions & 0 deletions docs/Code_Review.md
@@ -12,6 +12,11 @@ As with every policy, if something is bad and you care about it enough, feel free
and the governance process, operating in front of you and with you, will decide whether to change (most likely yes).
</details>

Responsibility remains with humans: authors and reviewers are accountable for code quality whether or not LLMs were
used. See the [LLM‑Assisted Coding agreement](../agreements.md#llm-assisted-coding).
Tiny, low‑risk one‑shot LLM changes may skip review if they meet the agreement’s conditions (prompt/spec saved, low
blast radius, basic functional check). Otherwise, request review as usual.

## When/how to request a code review

1. If you are a developer, always perform basic functional testing (manually!) of your code and self-review your PR
@@ -31,6 +36,11 @@
commits instead, like “Fix XYZ”, “Add tests”. <details><summary>Note.</summary>This is a limitation of GitHub and a
primary motivator to consider moving development off to gitlab / gerrit, though as of writing this document we have
not decided to switch.</details>
6. Follow the [LLM‑Assisted Coding agreement](../agreements.md#llm-assisted-coding). If you’re using the LLM one‑shot
relaxation, save the prompt/spec in the repo and say so explicitly; those relaxed changes should stand alone, not be
tucked into larger PRs. Authors must read and functionally verify all generated code before requesting review; if
merging without review under the one‑shot rule, mention “llm‑one‑shotted” and the prompt/spec path in the commit
message instead.
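
As a sketch of the commit convention above (the file paths, commit wording, and throwaway-repo setup are assumptions for illustration, not a prescribed format), a one‑shot merge without review could be recorded like this:

```shell
# Hypothetical one-shot commit. A temporary repo is created so the
# example is self-contained; names and wording are illustrative only.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "dev@example.com"
git config user.name "Dev"

# The change itself, plus the saved prompt/spec kept next to it in the repo.
mkdir -p docs/prompts
echo "distilled ask + acceptance criteria" > docs/prompts/retry-fetcher.md
echo "print('fetch with retry')" > fetcher.py
git add .

# Mention the relaxation and the prompt/spec path in the commit message.
git commit -q -m "Add retry to fetcher (llm-one-shotted)" \
  -m "Prompt/spec: docs/prompts/retry-fetcher.md"
git log -1 --format=%B
```

A reviewer (or a future debugger) can then jump from `git log` straight to the saved spec.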

## How to review

@@ -101,6 +111,8 @@ Note: not all of these apply to all PRs, of course.
12. Performance / memory considerations. Will it OOM in a corner case after the change?
13. Self-healing. If a (network, HDD) device or a part of the system breaks temporarily, will the rest of the system
recover automatically or will it require manual intervention?
14. If a one‑shot LLM change is being reviewed: is the prompt/spec saved in the repo and kept up to date?
15. Is maintainability not degraded (structure, naming, tests, docs), and are no secrets stored in prompts/specs?

## How to resolve the reviewer’s comments
