Fix/environment variable in multinode train #8413
huangfu170 wants to merge 7 commits into modelscope:main
Conversation
In cloud servers, the environment variable names may differ from those used here (for example, in Tencent Cloud's DDP environment, NNODES is actually named WORLD_SIZE). This can cause torch.distributed.run to fail to recognize nnodes, preventing the master node and worker nodes from discovering each other.
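As a sketch of the kind of remapping this implies (the platform variable name is taken from the Tencent Cloud example above; everything else is illustrative), a launch script can translate the platform's variable into the name the training framework expects:

```shell
#!/bin/bash
# Simulate the platform-provided variable for demonstration; on a real
# Tencent Cloud DDP node this would already be set by the platform.
WORLD_SIZE=2

# Remap to the name expected by the training examples (NNODES),
# defaulting to a single node when the platform variable is unset.
export NNODES=${WORLD_SIZE:-1}
echo "NNODES=$NNODES"
```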
Summary of Changes (Gemini Code Assist): This pull request improves the reliability and user-friendliness of the multi-node training examples by ensuring that critical environment variables are explicitly exported. This prevents common setup failures across cloud environments and clarifies how these variables are consumed by distributed training frameworks, making the examples more robust and easier for new users to understand and implement.
Code Review
This pull request improves the multi-node training example scripts by changing how environment variables are set. Using export for each variable instead of prefixing them to the command enhances readability and makes the scripts easier for users to modify for their specific cloud environments. This aligns well with the goal of providing better guidance for beginners. I have added one suggestion to train_node2.sh to include a comment for the MASTER_ADDR placeholder, which will further improve the script's usability by explicitly telling users what needs to be changed.
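A minimal sketch of what the suggested worker-node script might look like after the change (hypothetical: the model name, port, and GPU count below are illustrative placeholders, not the PR's actual values):

```shell
#!/bin/bash
# Hypothetical train_node2.sh (worker node). Each variable is exported on
# its own line so users can see that the training process reads these
# values from the environment by name — they are not parsed out of this file.
export NNODES=2
export NODE_RANK=1
export MASTER_ADDR=xxx.xxx.xxx.xxx   # <-- replace with the master node's IP
export MASTER_PORT=29500
export NPROC_PER_NODE=8

swift sft \
    --model Qwen/Qwen2.5-7B-Instruct \
    --train_type lora \
    --num_train_epochs 1
```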
Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
PR type
PR information
The multi-node training examples in the examples directory may fail due to subtle differences in environment variable names, which can disrupt the normal training process. Additionally, the existing examples can mislead users into thinking that the training script reads values directly from the shell script's text rather than from identically named environment variables, and they offer little guidance for beginners.
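The distinction above can be sketched as a before/after contrast (the launch commands beyond the exports are commented out and purely illustrative):

```shell
#!/bin/bash
# Before: inline prefixes, which can read as if the values were parsed
# out of the script text itself:
#   NNODES=2 NODE_RANK=0 swift sft ...

# After: explicit exports, making clear that the training process looks up
# these environment variables by name at runtime:
export NNODES=2
export NODE_RANK=0
echo "NNODES=$NNODES NODE_RANK=$NODE_RANK"
# swift sft ...   # launch command itself is unchanged
```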