
@nbardy nbardy commented Apr 21, 2025

Aider Macros

This fork adds Aider Macros—a lightweight Python DSL for scripting multi‑step, parallel, “agentic” workflows directly inside Aider.

Why Macros?

The existing /load command is useful, but it has three key limitations:

| Limitation | Impact |
| --- | --- |
| Sequential-only | No loops, branching, or early exits |
| No arguments | Steps can't pass data to one another efficiently |
| No parallelism | Can't fan out multiple LLM calls and gather results |

Aider Macros lift these constraints while remaining interactive and deterministic.

Key Features

  • Flexible control flow – Use standard Python loops, conditionals, and functions around LLM invocations.

  • Spawn / gather concurrency – Run up to N models in parallel, then gather() their outputs. Includes an ncurses‑style progress line so you can watch them finish in real time.

  • Tool use built‑in – A new /search command (OpenRouter) is exposed to macros for quick web look‑ups.

  • Deterministic & testable – Macro logic is plain Python, so you can unit‑test or lint it like any other code.

  • Gentle “agentic” path – Adds tool use and light autonomy without going fully headless or uncontrollably recursive.
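The fork's actual macro API isn't shown in this thread, so the following is only a plain-Python sketch of the fan-out/gather pattern the feature list describes. The names `llm_call` and `spawn_and_gather` are illustrative, not the fork's real helpers, and the LLM call is stubbed out:

```python
from concurrent.futures import ThreadPoolExecutor

def llm_call(prompt: str) -> str:
    # Stub standing in for a real LLM invocation.
    return f"draft for: {prompt}"

def spawn_and_gather(prompts, max_workers=4):
    # Fan out one LLM call per prompt, run up to max_workers
    # in parallel, then gather results in the original order.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(llm_call, prompts))

results = spawn_and_gather(["idea A", "idea B", "idea C"])
```

Because the macro is ordinary Python, the same pattern composes with loops and conditionals around each call, which is the "flexible control flow" point above.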

Inspirations

Relationship to Core Aider

Architect mode already offers macro‑like behaviour, but Aider Macros generalise the idea and deprecate /load and /architecture. The README has been rewritten to focus on this new system; full docs will be restored and expanded if there’s interest in upstreaming.


nbardy added 27 commits April 20, 2025 21:54
This commit implements a new `/context` command that allows users to:
- List all context documents in the current session
- View the content of specific context documents
- See file details like size, modification time, and token count

The implementation adds functionality to:
- List context documents from chat files, read-only files, and repo map
- Display detailed file information
- Show file contents with syntax highlighting
- Provide help documentation for the new command
@CLAassistant

CLA assistant check
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution.
You have signed the CLA already but the status is still pending? Let us recheck it.

@nbardy
Author

nbardy commented Apr 21, 2025

@paul-gauthier Hey, this is still a draft, but I wanted to float the idea.

I will probably maintain this as a fork; I'd like to start automating some of the LLM workflows I'm doing for game development: screen loops, web-of-thoughts, etc.

I've had poor luck with fully automatic agents, but lots of success with /architect and critic-chain style reasoning.

@strawberrymelonpanda

The Examples section in your fork is particularly worth a look, for anyone interested. Nice work!

@pcfreak30

@nbardy I question how this is different from MCP; it feels like the entire approach could work over the MCP RPC, maybe as an abstraction on top of it. I'm not a fan of tools creating their own REPL DSL. Reminds me of https://xkcd.com/927/.

@nbardy
Author

nbardy commented May 14, 2025

@pcfreak30 This is very different from MCP: it solves a different problem, and it is compatible with MCP. Macro .chat commands can still call MCP tooling inside the macro; any time the macro invokes an LLM, that LLM can call MCP tools.

This is a layer around tool calling.

Most importantly, this differs from MCP because the macro handles the control flow, NOT the LLM.

If you want to run test-driven development, you need the tests to run every time. You can ask the LLM to call the tests for you, but it only does so with maybe 85% reliability, and it can also rewrite your tests.

The idea is that you run a command like "Write a code patch" that is interpreted by the LLM, which can call MCP tools, etc.

However, that LLM command runs inside a program context with a deterministic control loop. The Python code calls and checks the tests every iteration and can feed the results back into the next LLM call. The model can't rewrite the tests or skip running them; that is guaranteed by the deterministic Python control flow.
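The loop described above can be sketched in plain Python. This is not the fork's code; `llm_patch` and `run_tests` are hypothetical stubs (a real macro would call the LLM and shell out to a test runner such as pytest), shown only to illustrate that the Python loop, not the model, decides when tests run:

```python
def run_tests() -> tuple[bool, str]:
    # Stub: a real macro would shell out to the test runner, e.g. pytest.
    return True, "all tests passed"

def llm_patch(feedback: str) -> str:
    # Stub standing in for an LLM "write a code patch" call.
    return f"patch addressing: {feedback or 'initial request'}"

def tdd_macro(max_iters: int = 5) -> bool:
    feedback = ""
    for _ in range(max_iters):
        llm_patch(feedback)
        # The macro runs the tests every iteration, unconditionally;
        # the LLM never gets a chance to skip or rewrite them.
        passed, output = run_tests()
        if passed:
            return True
        feedback = output  # feed failures back into the next LLM call
    return False
```

The key design point is that test execution sits in deterministic code, so the 85%-reliability problem of asking the model to run tests disappears.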

@nbardy
Author

nbardy commented May 14, 2025

I'm now developing my own CLI tool around this because the PR isn't getting much traction, so I'll close it.

@nbardy nbardy closed this May 14, 2025