Fix issue #2791: Task tools now combine with agent tools instead of overriding them #2792

Closed

Conversation

devin-ai-integration[bot]
Contributor

@devin-ai-integration devin-ai-integration bot commented May 8, 2025

Description

This PR fixes issue #2791 where task tools override agent tools instead of combining them. The issue was in the check_tools model validator method in task.py, which only added agent tools to task tools if the task had no tools.

Changes

  • Modified the check_tools method in task.py to always add agent tools to task tools, avoiding duplicates by checking tool names
  • Updated tests in task_test.py and crew_test.py to verify the new behavior
  • Renamed test methods to better reflect their purpose
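Conceptually, the change to check_tools can be sketched as follows. These are illustrative stand-ins, not the actual crewAI classes or code:

```python
class Tool:
    # Minimal stand-in for a crewAI tool; only the name matters here.
    def __init__(self, name):
        self.name = name

def old_check_tools(task_tools, agent_tools):
    # Old behavior: agent tools were used only when the task had no tools,
    # so any task-level tools silently hid the agent's tools.
    return list(task_tools) if task_tools else list(agent_tools)

def new_check_tools(task_tools, agent_tools):
    # New behavior: agent tools are always merged in, skipping name duplicates.
    combined = list(task_tools)
    seen = {t.name for t in combined}
    for tool in agent_tools:
        if tool.name not in seen:
            combined.append(tool)
            seen.add(tool.name)
    return combined

task_tools = [Tool("calculator")]
agent_tools = [Tool("search")]
print([t.name for t in old_check_tools(task_tools, agent_tools)])  # ['calculator']
print([t.name for t in new_check_tools(task_tools, agent_tools)])  # ['calculator', 'search']
```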

Testing

  • Added tests to verify that both agent and task tools are available during execution
  • Verified that existing functionality still works correctly
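A minimal pytest-style sketch of what such tests check, using stand-in classes rather than the real crewAI Agent and Task (FakeTool and combine_tools are illustrative names, not the project's API):

```python
# Stand-in for a crewAI tool; only the name matters for de-duplication.
class FakeTool:
    def __init__(self, name):
        self.name = name

def combine_tools(task_tools, agent_tools):
    """Mirror of the fixed merging logic: add agent tools, skip duplicate names."""
    combined = list(task_tools)
    seen = {t.name for t in combined}
    for tool in agent_tools:
        if tool.name not in seen:
            combined.append(tool)
            seen.add(tool.name)
    return combined

def test_task_and_agent_tools_are_combined():
    result = combine_tools([FakeTool("search")], [FakeTool("browser")])
    assert sorted(t.name for t in result) == ["browser", "search"]

def test_duplicate_names_keep_the_task_tool():
    task_tool = FakeTool("search")
    result = combine_tools([task_tool], [FakeTool("search")])
    assert len(result) == 1 and result[0] is task_tool

test_task_and_agent_tools_are_combined()
test_duplicate_names_keep_the_task_tool()
```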

CI Status

The CI tests are failing due to OpenAI API authentication errors, which are unrelated to the code changes in this PR. The specific errors are:

  • litellm.exceptions.APIError: litellm.APIError: APIError: OpenAIException - Connection error
  • These errors occur because the CI environment doesn't have valid OpenAI API credentials

Related Issues

Fixes #2791

Link to Devin run: https://app.devin.ai/sessions/05d2a5ba4131481c8b1a38d491396edc
Requested by: Joe Moura ([email protected])

Contributor Author

🤖 Devin AI Engineer

I'll be helping with this pull request! Here's what you should know:

✅ I will automatically:

  • Address comments on this PR. Add '(aside)' to your comment to have me ignore it.
  • Look at CI failures and help fix them

Note: I can only respond to comments from users who have write access to this repository.

@joaomdmoura
Collaborator

Disclaimer: This review was made by a crew of AI Agents.

Code Review for PR #2792 in crewAIInc/crewAI

Overview

This pull request addresses issue #2791 by modifying how task tools interact with agent tools in the task.py file. The primary focus is to ensure that task tools do not override agent tools but rather combine them appropriately. The changes affect several files, namely src/crewai/task.py, tests/crew_test.py, and tests/task_test.py.

Detailed Analysis

  1. Task.py Changes

    • The logic within the check_tools validator has been improved. Previously, agent tools were copied over only when the task defined no tools of its own, so a task's tool list silently replaced the agent's tools. The updated version combines both lists while preserving task-specific tools.
    • Specific Improvement Suggestions:
      • Implement validation for tool name formats to prevent invalid names from being added. A simple helper function like _validate_tool_name can be provided for this purpose.
      • Incorporate logging for better traceability of the tool combination decisions, aiding in debugging discrepancies in tool management.

    Here's an example:

    import logging
    
    from pydantic import model_validator
    
    @model_validator(mode="after")
    def check_tools(self):
        """Merge the agent's tools into the task's tools, skipping duplicate names."""
        if self.agent and self.agent.tools:
            self.tools = self.tools or []
            existing_tool_names = {tool.name for tool in self.tools}
            for tool in self.agent.tools:
                if tool.name not in existing_tool_names:
                    logging.debug(f"Adding agent tool '{tool.name}' to task tools")
                    self.tools.append(tool)
        return self
  2. Test Changes

    • The tests have been reformulated to enhance clarity and increase coverage regarding tool combinations.
    • Recommendations for Improvement:
      • Add tests for edge cases, such as when the agent is None or when duplicate tool names are present. Here’s an example of a test for null agents:
      def test_task_tools_with_null_agent():
          # TestTool stands in for any concrete crewAI tool class.
          task = Task(
              description="Test task",
              expected_output="Test output",
              tools=[TestTool()],
              agent=None,
          )
          assert len(task.tools) == 1
  3. Performance Considerations

    • Using a set for the existing-name check makes each membership test O(1) instead of an O(n) scan of a list, keeping the validator roughly linear in the total number of tools.
  4. Documentation Needs

    • Clarify the check_tools method with detailed docstrings explaining the process for future reference. This should include a list of steps, expected behavior, and return types.
  5. Potential Issues Identified

    • Considerations for tool name format validation and logging were noted, as well as the need for version conflict management in future revisions.
  6. Additional Recommendations

    • Future revisions could focus on developing an approach for validating tool origins to ensure security and integrity.

Final Verdict

The changes introduced in this pull request improve maintainability and correctness by allowing task tools to coexist with agent tools. Additional tests and documentation will be needed to fully cover the new behavior and ensure a robust implementation.

Past related PRs show a consistent trend toward improving tool handling and expanding test coverage to reduce regressions.

Overall, I approve of these changes with the suggestions above noted for future improvements.

Contributor Author

Closing due to inactivity for more than 7 days.

Linked issue: [BUG] a task list of tools overrides agent list of tools (#2791)