
Conversation

@Cathy0908 (Collaborator) commented Sep 19, 2025

Summary of Changes

Hello @Cathy0908, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the efficiency of operator execution within the Ray framework by implementing an intelligent, automatic concurrency calculation. The new system dynamically allocates CPU, GPU, and memory resources to operators, ensuring optimal utilization and balanced processing. It respects user-defined concurrency settings while preventing resource over-allocation, leading to more robust and performant data processing pipelines.

Highlights

  • Automatic Concurrency Calculation for Ray Operators: Introduced a new mechanism to automatically calculate the optimal num_proc (concurrency) for operators when running in Ray mode, aiming to maximize resource utilization and balance processing capacity across operators.
  • Intelligent Resource Allocation Logic: A new function find_optimal_concurrency was added to intelligently distribute available cluster resources (CPU, GPU, memory) among operators. It considers both user-specified concurrency and dynamically allocates remaining resources to auto-scaling operators (a simplified sketch of this idea follows the list below).
  • Integration into RayDataset Processing: The new resource allocation logic is integrated into the RayDataset.set_resource_for_ops method, which is invoked before operators are processed, ensuring that each operator receives an optimized num_proc setting.
  • Enhanced GPU Memory Information Retrieval: The data_juicer/utils/ray_utils.py file was updated to collect total GPU memory information from Ray nodes, providing more comprehensive data for the resource allocation calculations.
  • Comprehensive Test Coverage: A new test file tests/core/data/test_ray_aoto_proc.py has been added to thoroughly validate the automatic num_proc calculation across various scenarios, including CPU-bound, GPU-bound, mixed, and resource-constrained operations.
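To make the allocation idea concrete, here is a minimal, hypothetical sketch of what a find_optimal_concurrency(resource_ratios, total_resource) helper could look like for a single resource dimension. It is a deliberate simplification (an equal capacity share per operator), not the PR's actual algorithm; only the function name and signature come from the PR's diff, quoted further down in the review thread.

```python
# Simplified, illustrative sketch -- not the implementation from this PR.
# Assumption: each operator reports how much of one resource (e.g. CPUs)
# a single worker needs, and the cluster budget is split so that every
# operator ends up with a comparable amount of processing capacity.

def find_optimal_concurrency(resource_ratios, total_resource):
    """Return a per-operator worker count for one resource dimension.

    resource_ratios: per-worker resource demand for each operator.
    total_resource: total amount of that resource in the cluster.
    """
    if not resource_ratios or total_resource <= 0:
        return []

    # Equal share of the budget for every operator in the pipeline.
    per_op_budget = total_resource / len(resource_ratios)

    concurrency = []
    for ratio in resource_ratios:
        if ratio <= 0:
            # Operator does not use this resource; leave it unconstrained.
            concurrency.append(None)
        else:
            # At least one worker, at most what the per-operator budget allows.
            concurrency.append(max(1, int(per_op_budget // ratio)))
    return concurrency


# Example: operators needing 1, 2, and 4 CPUs per worker on a 32-CPU cluster
# get roughly [10, 5, 2] workers respectively.
print(find_optimal_concurrency([1, 2, 4], 32))
```

The real helper additionally has to respect user-pinned concurrency values and reconcile CPU, GPU, and memory constraints at the same time, which is what the highlights above describe.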

@gemini-code-assist bot left a comment

Code Review

This pull request introduces a sophisticated mechanism for automatically calculating the optimal concurrency (num_proc) for operators when running in Ray mode. It aims to maximize resource utilization by considering CPU, GPU, and memory requirements of each operator. The core logic is implemented in RayDataset.set_resource_for_ops and a new helper function find_optimal_concurrency. The changes also include fetching total GPU memory and adding comprehensive unit tests for the new resource allocation logic.

My feedback focuses on improving robustness by handling potential division-by-zero errors, enhancing code clarity and maintainability through refactoring and better documentation of constants, and optimizing performance by reducing redundant subprocess calls.
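As a concrete illustration of two of those review points (caching the GPU probe so it does not spawn a subprocess on every call, and guarding the division), here is a hypothetical helper. The names total_gpu_memory_mb and gpu_workers_for are invented for this sketch and do not appear in the PR or in data_juicer/utils/ray_utils.py.

```python
# Hypothetical hardening sketch; names and query are assumptions,
# not code from data_juicer/utils/ray_utils.py.
import functools
import subprocess


@functools.lru_cache(maxsize=1)
def total_gpu_memory_mb() -> int:
    """Probe total GPU memory once and cache the result, so repeated
    resource calculations do not launch a new nvidia-smi subprocess."""
    try:
        out = subprocess.check_output(
            ["nvidia-smi", "--query-gpu=memory.total",
             "--format=csv,noheader,nounits"],
            text=True,
        )
    except (OSError, subprocess.CalledProcessError):
        return 0  # no usable GPU on this node
    return sum(int(line) for line in out.splitlines() if line.strip())


def gpu_workers_for(mem_per_worker_mb: int) -> int:
    """How many workers fit into total GPU memory, with a zero guard."""
    total = total_gpu_memory_mb()
    if mem_per_worker_mb <= 0 or total <= 0:
        # Avoid division by zero when the op needs no GPU memory
        # or when the cluster has no GPUs.
        return 0
    return total // mem_per_worker_mb
```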

return batch.filter(mask)


def find_optimal_concurrency(resource_ratios, total_resource):
A collaborator commented on this hunk:
can we have perf test results to go with the optimization?

@Cathy0908 (Collaborator, Author) replied:
We will conduct detailed perf testing and provide reports afterward.

@Cathy0908 changed the title from "[WIP] Optimize the auto num_proc calculation of operators in ray mode" to "Optimize the auto num_proc calculation of operators in ray mode" on Sep 25, 2025
  • start & stop ray cluster in setUp and tearDown method of test cases for ray mode
  • try another garbage collection method
  • clean up extra code

@HYLcool (Collaborator) commented Oct 9, 2025

The runtime_np method in base_op.py needs to be updated as well. The current unittest-dist run is stuck in the test_alphanumeric_filter test cases, and I think the reason might be that runtime_np in base_op.py doesn't call the right calculate_ray_np method. See the unittest run.

[screenshot: the stuck unittest-dist run]
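For readers following along, a minimal sketch of the fix HYLcool is pointing at might look like the following. Everything except the runtime_np and calculate_ray_np names is an assumption about data_juicer's base operator for illustration purposes, not actual project code.

```python
# Hypothetical sketch only: a stubbed-down base operator showing the
# control flow HYLcool suggests, i.e. ray-mode operators should go
# through the new calculate_ray_np instead of the local fallback.
import os


class OP:
    def __init__(self, num_proc=None, executor_type="default"):
        self.num_proc = num_proc
        self.executor_type = executor_type  # assumed attribute name

    def calculate_ray_np(self):
        # Stand-in for the PR's automatic concurrency calculation.
        return 4

    def runtime_np(self):
        if self.executor_type == "ray":
            # Route ray mode through the auto num_proc logic from this PR.
            return self.calculate_ray_np()
        # Original behaviour: explicit num_proc or local CPU count.
        return self.num_proc or os.cpu_count()


print(OP(executor_type="ray").runtime_np())  # -> 4
```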
