
Conversation

@magicmark
Contributor

@magicmark magicmark commented Jan 9, 2026

https://agents.md/ is now a standard supported by most coding agents.

Seems worth adding - e.g. my instance of claude keeps trying to run things with uv, so this would teach it we're using poetry atm.

wdyt?
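For illustration, a minimal AGENTS.md along these lines (wording assumed, not the actual file from this PR) might read:

```markdown
# AGENTS.md

## Setup

- This repo uses **Poetry**, not uv. Install dependencies with `poetry install`.
- Run tools through Poetry, e.g. `poetry run pytest`.
```

The point is to front-load the one fact agents keep getting wrong (the package manager) rather than to document everything.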

Summary by Sourcery

Documentation:

  • Add AGENTS.md documenting repository setup, development commands, project structure, and pull request requirements for automated coding agents and contributors.

@sourcery-ai
Contributor

sourcery-ai bot commented Jan 9, 2026

Reviewer's Guide

Adds an AGENTS.md file documenting project setup, commands, code style, structure, and PR requirements specifically to guide AI coding agents and contributors.

File-Level Changes

Change: Document project setup and development workflows for agents and contributors.

Details:

  • Describe dependency management and local setup using Poetry and pre-commit hooks
  • List common development commands for tests, type checking, linting, and formatting
  • Specify code style expectations including Ruff configuration, strict mypy typing, decorator-based schema, and async-first design
  • Outline high-level project structure for core library, integrations, and tests
  • Define PR requirements including RELEASE.md usage, release types, and testing expectations

Files: AGENTS.md


@magicmark magicmark marked this pull request as ready for review January 9, 2026 20:30
Contributor

@sourcery-ai sourcery-ai bot left a comment


Hey - I've left some high level feedback:

  • Double-check that the listed commands and tools (poetry, mypy strict, ruff, test markers, etc.) exactly match the current project setup and config files so AGENTS.md doesn’t drift from reality.
  • Ensure the PR requirements in AGENTS.md (RELEASE.md requirement, release types, test expectations) are consistent with any existing CONTRIBUTING or release process docs to avoid conflicting guidance.


Comment on lines +40 to +42
- Include `RELEASE.md` file describing changes
- Release types: patch/minor/major
- Tests required for all code changes with full coverage
Member


shall we add an example of release file too? 😊
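For reference, a minimal example in the style implied by the quoted lines above ("Release type: patch/minor/major" as the first line; the exact format should be checked against the project's release docs) could be:

```markdown
Release type: patch

Describe the user-facing change here in one or two sentences.
```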

Member


let's also add a CLAUDE.md with @AGENTS.md 😊

Contributor Author


shall we add an example of release file too? 😊

would be cool to make this a skill

Member


same thing goes for schema tests for a new skill, think of a new /generate-tests command which generates the test file structure, schema, query inputs and expected outputs.

@XChikuX
Contributor

XChikuX commented Jan 26, 2026

I'd recommend making the AGENTS.md significantly more detailed, e.g. covering almost 90% of the file tree. This helps Claude and other agents gather all the needed context right off the bat.

At the same time, it's important not to go overboard and be so wordy that the file itself uses up too much context. Including the pre-commit checks would help the agent avoid re-reading README.md, and burning context, on every single commit.
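As a sketch of that idea (commands assumed, not verified against the repo's actual pre-commit setup), the relevant AGENTS.md section might look like:

```markdown
## Before committing

- Install hooks once: `poetry run pre-commit install`
- Run all checks: `poetry run pre-commit run --all-files`
```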

I'd also recommend a .serena folder with memories of important incomplete tasks and possibly a skills folder for Claude with python concepts that you'd want to be used in strawberry.

@patrick91
Member

@XChikuX I'd avoid serena, I haven't found it to be that useful, do you have an example CLAUDE.md?

@XChikuX
Contributor

XChikuX commented Jan 26, 2026

Full Stack Application Example. You may want to adjust it for tools available to open source, as well as for a pure Python project structure like strawberry's. Caveat: this is only one of a vast number of ways you can structure your CLAUDE.md. I don't claim it's the best way; I've just seen good things come out of this general structure.

I'd avoid serena

It aids LLMs that are not as good with tool calling as Claude.

I've found it noticeably reduces context bloat when I prompt agents as well in exchange for some accuracy loss. Of course, your mileage may vary.

@magicmark
Contributor Author

magicmark commented Jan 26, 2026

@XChikuX I'm strongly in favour of the most minimal AGENTS.md/CLAUDE.md file possible. These instructions get added to the context window and prepended to every prompt. Token space is at a premium.

Information that is already easily discernible is redundant (such as the directory layout, general intent of certain files). I would reserve the token count for common stumbling blocks on the most common actions. (and I would expect the need for such instructions to diminish over time too as the models improve!)

magicmark and others added 3 commits January 26, 2026 14:17
Co-authored-by: Thiago Bellini Ribeiro <hackedbellini@gmail.com>
@XChikuX
Contributor

XChikuX commented Jan 26, 2026

These instructions get added to the context window and added to every prompt

I'd argue more context gets used in searching for files, understanding the file structure, and making repeat API calls that lack much-needed context. I assumed the same as you in the beginning, but my real-world use has been counterintuitive.

The only time Claude didn't waste precious tokens by creating unnecessary new files was when it already knew the file existed and exactly what that file did.

and I would expect the need for such instructions to diminish over time too as the models improve!

We can always edit it when that time comes. In fact, I recommend we do. :)

EDIT:

Keeping AGENTS.md small would save input tokens in exchange for lower-quality (and 10x+ more expensive) output tokens, because of the extra tool calls and the unnecessary "Now I will figure out what this file does" outputs.

Although it will help free tiers, it will not help Claude, in my experience. Instead of pointing CLAUDE.md -> AGENTS.md, consider keeping both separate, since Claude more or less ignores the latter; you can find the issue on their GitHub that proves this.

Larger input context for Claude. Smaller context for free tier LLMs would be the way to go.

Just a suggestion @patrick91

@erikwrede
Member

I'm with @magicmark and @patrick91 on this one. However, we should think about adding some knowledge on structure:

I would reserve the token count for common stumbling blocks on the most common actions.

This is what we need. However, that's something we need to add piece by piece as we actually identify the stumbling blocks. So maybe we all add whatever we frequently see misinterpreted by Claude Code. For the rest, Claude's plan/explore agents do an exceptional job at understanding the codebase nowadays.

Member

@erikwrede erikwrede left a comment


awesome!

