Add auto mode for automated task execution without confirmation #3942


Open
PierrunoYT wants to merge 6 commits into main

Conversation


PierrunoYT commented May 3, 2025

Description

This PR adds a new auto mode for aider that performs tasks automatically, without requiring confirmation for each step, along with enhanced context-finding capabilities.

Features

Auto Mode

  • Created a new AutoCoder class that inherits from ContextCoder
  • Sets yes_to_all = True to bypass confirmations
  • Sets auto_accept_architect = True to automatically accept architect changes
  • Automatically adds files and applies changes
  • Added a new command /auto to switch to this mode
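
A minimal sketch of what this class could look like. Only the names AutoCoder, ContextCoder, yes_to_all, and auto_accept_architect come from the description above; the import path and the edit_format value are assumptions:

    from aider.coders.context_coder import ContextCoder  # assumed module path


    class AutoCoder(ContextCoder):
        """Run tasks end to end without asking for per-step confirmation."""

        edit_format = "auto"  # assumed name that /chat-mode auto resolves to

        def __init__(self, *args, **kwargs):
            super().__init__(*args, **kwargs)
            self.yes_to_all = True             # skip confirmation prompts
            self.auto_accept_architect = True  # accept architect edits automatically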

Enhanced Context Finding

  • Added a new get_enhanced_file_mentions method that uses multiple strategies to identify relevant files
  • Reduced the minimum identifier length from 5 to 3 characters to catch more potential matches
  • Added detection of parent directory names and path components
  • Added detection of import statements and package references
  • Increased token allocation for the repository map by 50% to provide more context
  • Added command-line options to control context finding behavior
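
A rough sketch of how a multi-strategy matcher along these lines might work. The name get_enhanced_file_mentions comes from the PR; the arguments and matching rules here are illustrative:

    import re
    from pathlib import Path


    def get_enhanced_file_mentions(text, repo_files, min_identifier_length=3):
        """Guess which repo files a message refers to, using several strategies."""
        words = {w for w in re.findall(r"[\w./\\-]+", text) if len(w) >= min_identifier_length}
        mentions = set()

        for rel_path in repo_files:
            path = Path(rel_path)
            # Strategy 1: the file name or its stem is mentioned directly.
            if path.name in words or path.stem in words:
                mentions.add(rel_path)
            # Strategy 2: a parent directory or other path component is mentioned.
            elif any(part in words for part in path.parts[:-1]):
                mentions.add(rel_path)

        # Strategy 3: import statements and package references, e.g. "from aider.llm import ...".
        for module in re.findall(r"(?:from|import)\s+([\w.]+)", text):
            candidate = module.replace(".", "/") + ".py"
            if candidate in repo_files:
                mentions.add(candidate)

        return mentions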

How to Use

Users can now use the auto mode in several ways:

  1. Start aider with the auto mode:

    aider --chat-mode auto
    
  2. Start aider with auto mode but disable enhanced context finding:

    aider --chat-mode auto --no-deep-context-search
    
  3. Customize the minimum identifier length:

    aider --chat-mode auto --min-identifier-length 4
    
  4. Switch to auto mode during a session:

    /chat-mode auto
    
  5. Use the auto mode for a single command:

    /auto <your request>
    

In auto mode, aider will:

  1. Automatically identify which files need to be edited using enhanced context finding
  2. Add those files to the chat context
  3. Make the necessary changes without asking for confirmation
  4. Run linting and testing if configured

This mode is particularly useful for automating repetitive tasks or for users who want a more streamlined experience without the need to confirm each step.
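
Sketched as pseudocode, the four steps above amount to something like the following (helper names such as add_rel_fname and run(with_message=...) are assumptions about the coder API):

    def run_auto(coder, request):
        """Outline of the auto-mode flow: find files, add them, apply the edits."""
        # Steps 1-2: identify candidate files and add them without prompting.
        for fname in coder.get_enhanced_file_mentions(request):
            coder.add_rel_fname(fname)

        # Steps 3-4: apply the changes; yes_to_all suppresses confirmations, and
        # lint/test hooks run afterwards when configured.
        coder.run(with_message=request)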

Issues Fixed

This PR also includes fixes for several issues that were encountered during testing:

1. TypeError in base_coder.py

The first issue was a TypeError in base_coder.py (line 181) where the deep_context_search parameter was being passed to the Coder.__init__ method, but the method didn't accept this parameter:

Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "__main__.py", line 7, in <module>
    sys.exit(main())
             ^^^^^^
  File "main.py", line 959, in main
    coder = Coder.create(
            ^^^^^^^^^^^^^
  File "base_coder.py", line 181, in create
    res = coder(main_model, io, **kwargs)
          ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
TypeError: Coder.__init__() got an unexpected keyword argument 'deep_context_search'

This was fixed by:

  • Adding the deep_context_search and min_identifier_length parameters to the Coder.__init__ method
  • Storing these parameters as instance variables in the Coder class
  • Adding a base implementation of the get_enhanced_file_mentions method to the Coder class
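
Roughly, the fix amounts to the following (signature trimmed to the relevant parts; the default values shown are assumptions):

    class Coder:
        def __init__(
            self,
            main_model,
            io,
            deep_context_search=True,   # assumed default
            min_identifier_length=3,    # assumed default
            **kwargs,
        ):
            self.main_model = main_model
            self.io = io
            # Store the new options so AutoCoder and other subclasses can read them.
            self.deep_context_search = deep_context_search
            self.min_identifier_length = min_identifier_length

        def get_enhanced_file_mentions(self, content):
            """Base implementation with no extra strategies; subclasses override this."""
            return set()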

2. UnicodeDecodeError in JSON Loading

The second issue was a UnicodeDecodeError when loading JSON files with non-ASCII characters:

Traceback (most recent call last):
  ...
  File "utils.py", line 188, in <module>
    json_data = json.load(f)
                ^^^^^^^^^^^^
  File "__init__.py", line 293, in load
    return loads(fp.read(),
                 ^^^^^^^^^^
  File "cp1252.py", line 23, in decode
    return codecs.charmap_decode(input,self.errors,decoding_table)[0]
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
UnicodeDecodeError: 'charmap' codec can't decode byte 0x81 in position 1980: character maps to <undefined>

This was fixed by:

  1. Adding a global JSON loading patch in aider/llm.py that handles UTF-8 encoding for all JSON files loaded by litellm
  2. Updating file reading methods in base_coder.py to handle UnicodeDecodeError exceptions:
    • get_abs_fnames_content
    • choose_fence
    • get_read_only_files_content

These changes ensure that binary files (like MP3s, MP4s, and other non-text files) are gracefully handled and don't cause the application to crash with UnicodeDecodeError.
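
One way to implement the global patch, as a sketch: wrap json.load in aider/llm.py so that a UnicodeDecodeError triggers a retry with an explicit UTF-8 file handle (the exact placement and error handling here are assumptions):

    import json

    _original_json_load = json.load


    def _utf8_json_load(fp, **kwargs):
        """Retry JSON loads with UTF-8 when the platform default encoding fails."""
        try:
            return _original_json_load(fp, **kwargs)
        except UnicodeDecodeError:
            with open(fp.name, encoding="utf-8") as utf8_fp:
                return _original_json_load(utf8_fp, **kwargs)


    json.load = _utf8_json_load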

3. Token Limit Exceeded with Enhanced Context Finding

When using the enhanced context finding feature, the chat context can exceed the model's token limit, especially in large codebases:

Tokens: 13k sent, 395 received. Cost: $0.04 message, $0.08 session.
Warning: it's best to only add files that need changes to the chat.
https://aider.chat/docs/troubleshooting/edit-errors.html
Automatically added files: [long list of files...]
Your estimated chat context of 1,530,131 tokens exceeds the 200,000 token limit for openrouter/anthropic/claude-3.7-sonnet!
To reduce the chat context:
- Use /drop to remove unneeded files from the chat
- Use /clear to clear the chat history
- Break your code into smaller files
It's probably safe to try and send the request, most providers won't charge if the context limit is exceeded.
litellm.BadRequestError: OpenrouterException - Message: This endpoint's maximum context length is 200000 tokens. However, you requested about 1606966 tokens (1542966 of text input, 64000 in the output). 
Please reduce the length of either one, or use the "middle-out" transform to compress your prompt automatically., Metadata: {'provider_name': None}, User ID:

This is a known limitation when using enhanced context finding in large codebases. Users should be aware that they may need to:

  1. Use /drop to remove unneeded files from the chat
  2. Use /clear to clear the chat history
  3. Use --no-deep-context-search to disable enhanced context finding in very large codebases
  4. Increase the --min-identifier-length parameter to reduce the number of matches


CLAassistant commented May 3, 2025

CLA assistant check
All committers have signed the CLA.


FinnBorge commented May 3, 2025

How is this different from the --yes-always option (https://aider.chat/docs/config/options.html#--yes-always)?

FinnBorge commented

And I think you may have accidentally included a number of unrelated ideas in the code you've submitted to this PR.
