Commit a0e8e44

[megatron] chore: clean legacy code path part 2, clean legacy CI (#4529)
### What does this PR do?

This is one of a series of PRs to clean up the legacy Megatron code path and make bridge the default path for Megatron. #4496

This PR cleans up the corresponding CIs, including qwen2.5vl megatron, model merger, and config converter.

### Checklist Before Starting

- [ ] Search for similar PRs. Paste at least one query link here: ...
- [ ] Format the PR title as `[{modules}] {type}: {description}` (This will be checked by the CI)
  - `{modules}` include `fsdp`, `megatron`, `sglang`, `vllm`, `rollout`, `trainer`, `ci`, `training_utils`, `recipe`, `hardware`, `deployment`, `ray`, `worker`, `single_controller`, `misc`, `perf`, `model`, `algo`, `env`, `tool`, `ckpt`, `doc`, `data`
  - If this PR involves multiple modules, separate them with `,` like `[megatron, fsdp, doc]`
  - `{type}` is in `feat`, `fix`, `refactor`, `chore`, `test`
  - If this PR breaks any API (CLI arguments, config, function signature, etc.), add `[BREAKING]` to the beginning of the title.
  - Example: `[BREAKING][fsdp, megatron] feat: dynamic batching`

### Test

> For changes that can not be tested by CI (e.g., algorithm implementation, new model support), validate by experiment(s) and show results like training curve plots, evaluation results, etc.

### API and Usage Example

> Demonstrate how the API changes if any, and provide usage example(s) if possible.

```python
# Add code snippet or script demonstrating how to use this
```

### Design & Code Changes

> Demonstrate the high-level design if this PR is complex, and list the specific changes.

### Checklist Before Submitting

> [!IMPORTANT]
> Please check all the following items before requesting a review, otherwise the reviewer might deprioritize this PR for review.

- [ ] Read the [Contribute Guide](https://github.com/volcengine/verl/blob/main/CONTRIBUTING.md).
- [ ] Apply [pre-commit checks](https://github.com/volcengine/verl/blob/main/CONTRIBUTING.md#code-linting-and-formatting): `pre-commit install && pre-commit run --all-files --show-diff-on-failure --color=always`
- [ ] Add / Update [the documentation](https://github.com/volcengine/verl/tree/main/docs).
- [ ] Add unit or end-to-end test(s) to [the CI workflow](https://github.com/volcengine/verl/tree/main/.github/workflows) to cover all the code. If not feasible, explain why: ...
- [ ] Once your PR is ready for CI, send a message in [the `ci-request` channel](https://verl-project.slack.com/archives/C091TCESWB1) in [the `verl` Slack workspace](https://join.slack.com/t/verl-project/shared_invite/zt-3855yhg8g-CTkqXu~hKojPCmo7k_yXTQ). (If not accessible, please try [the Feishu group (飞书群)](https://applink.larkoffice.com/client/chat/chatter/add_by_link?link_token=772jd4f1-cd91-441e-a820-498c6614126a).)
1 parent d7c82bd commit a0e8e44

File tree

8 files changed: +6 additions, -315 deletions


.github/workflows/checkpoint_converter.yml

Lines changed: 0 additions & 175 deletions
This file was deleted.

.github/workflows/e2e_ppo_trainer_megatron_sglang.yml

Lines changed: 0 additions & 10 deletions
```diff
@@ -136,11 +136,6 @@ jobs:
           export VLLM_USE_V1=1
           ray start --head
           ENGINE=sglang MODE=async RESUME_MODE=auto MODEL_ID=deepseek-ai/deepseek-coder-1.3b-instruct TOTAL_TRAIN_STEPS=2 bash tests/special_e2e/run_ppo_trainer_megatron.sh
-      - name: Test Megatron checkpoints merging function (DeepSeek Actor and Critic)
-        run: |
-          exp_name="deepseek-coder-1.3b-instruct-megatron-gsm8k-minimal"
-          python -m verl.model_merger test --backend megatron --local_dir checkpoints/verl-test/${exp_name}/global_step_1/actor --test_hf_dir checkpoints/verl-test/${exp_name}/global_step_1/actor/huggingface
-          python -m verl.model_merger test --backend megatron --is-value-model --local_dir checkpoints/verl-test/${exp_name}/global_step_1/critic --test_hf_dir checkpoints/verl-test/${exp_name}/global_step_1/critic/huggingface
       - name: Profiling GRPO GSM8K E2E training tests with 3D parallelism on 8 L20 GPUs with Megatron (Deepseek)
         run: |
           ray stop --force
@@ -181,11 +176,6 @@ jobs:
         run: |
           ray stop --force
           ALL_OFFLOAD=True VAL_BEFORE_TRAIN=True TEST_FREQ=1 SAVE_FREQ=1 LR_WARMUP_STEPS=1 TOTAL_TRAIN_STEPS=2 MODEL_ID=Qwen/Qwen3-0.6B bash tests/special_e2e/run_ppo_trainer_megatron.sh
-      - name: Test Megatron checkpoints merging function (Qwen3 Actor and Critic)
-        run: |
-          exp_name="qwen3-0.6b-megatron-gsm8k-minimal"
-          python -m verl.model_merger test --backend megatron --tie-word-embedding --local_dir checkpoints/verl-test/${exp_name}/global_step_1/actor --test_hf_dir checkpoints/verl-test/${exp_name}/global_step_1/actor/huggingface
-          python -m verl.model_merger test --backend megatron --is-value-model --local_dir checkpoints/verl-test/${exp_name}/global_step_1/critic --test_hf_dir checkpoints/verl-test/${exp_name}/global_step_1/critic/huggingface
       - name: Running GSM8K E2E training tests with 3D parallelism on 8 L20 GPUs with FP8 rollout
         run: |
           ray stop --force
```

.github/workflows/e2e_ppo_trainer_megatron_sglang_2.yml

Lines changed: 0 additions & 32 deletions
```diff
@@ -105,37 +105,6 @@ jobs:
       faas-url: "${{ env.DYNAMIC_RUNNER_ENDPOINT }}"
       mlp-image: "${{ env.IMAGE }}"

-  e2e_ppo_trainer_megatron-qwen2_5vl-3b:
-    needs: setup
-    runs-on: ["${{ needs.setup.outputs.runner-label || 'L20x8' }}"]
-    timeout-minutes: 60 # Increase this timeout value as needed
-    env:
-      HTTP_PROXY: ${{ secrets.PROXY_HTTP }}
-      HTTPS_PROXY: ${{ secrets.PROXY_HTTPS }}
-      NO_PROXY: "localhost,127.0.0.1,hf-mirror.com"
-      HF_ENDPOINT: "https://hf-mirror.com"
-      HF_HUB_ENABLE_HF_TRANSFER: "0" # This is more stable
-    steps:
-      - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
-        with:
-          fetch-depth: 0
-      - name: Install the current repository
-        run: |
-          pip3 install --no-deps -e .[test]
-      - name: Prepare Geo3k dataset
-        run: |
-          python3 examples/data_preprocess/geo3k.py --local_dataset_path ${HOME}/models/hf_data/hiyouga/geometry3k/
-      - name: Prepare dist_ckpt of Qwen2.5-VL-3B, only supports dist_ckpt
-        run: |
-          python3 scripts/converter_hf_to_mcore.py --hf_model_path ${HOME}/models/Qwen/Qwen2.5-VL-3B-Instruct --output_path checkpoints/verl-test/qwen2.5-vl-3b-megatron
-      - name: Running Geo3k E2E training tests with 3D parallelism on 8 L20 GPUs with Megatron (Qwen)
-        run: |
-          ray stop --force
-          ENGINE=sglang ROLLOUT_MODE=async TRAIN_FILES=${HOME}/data/geo3k/train.parquet VAL_FILES=${HOME}/data/geo3k/test.parquet MAX_PROMPT_LENGTH=1024 MAX_RESPONSE_LENGTH=2048 MODEL_ID=Qwen/Qwen2.5-VL-3B-Instruct ADV_ESTIMATOR=grpo USE_DYNAMIC_BSZ=False SKIP_SAVE_HF_MODEL=1 COMMON_PP=4 COMMON_VPP=null COMMON_CP=1 COMMON_TP=2 USE_DIST_CKPT=true DIST_CKPT_PATH=checkpoints/verl-test/qwen2.5-vl-3b-megatron bash tests/special_e2e/run_ppo_trainer_megatron.sh
-      - name: clean up
-        run: |
-          rm -rf checkpoints
-
   e2e_ppo_trainer_fsdp_sglang:
     needs: setup
     runs-on: [ "${{ needs.setup.outputs.runner-label || 'L20x8' }}" ]
@@ -221,7 +190,6 @@ jobs:
     needs:
       [
         setup,
-        e2e_ppo_trainer_megatron-qwen2_5vl-3b,
         e2e_ppo_trainer_fsdp-qwen2_5vl-3b,
         e2e_ppo_trainer_fsdp_sglang,
       ]
```

.github/workflows/e2e_ppo_trainer_megatron_vllm.yml

Lines changed: 0 additions & 5 deletions
```diff
@@ -186,11 +186,6 @@ jobs:
         run: |
           ray stop --force
           ALL_OFFLOAD=True VAL_BEFORE_TRAIN=True TEST_FREQ=1 SAVE_FREQ=1 LR_WARMUP_STEPS=1 TOTAL_TRAIN_STEPS=2 MODEL_ID=Qwen/Qwen3-0.6B bash tests/special_e2e/run_ppo_trainer_megatron.sh
-      - name: Test Megatron checkpoints merging function (Qwen3 Actor and Critic)
-        run: |
-          exp_name="qwen3-0.6b-megatron-gsm8k-minimal"
-          python -m verl.model_merger test --backend megatron --tie-word-embedding --local_dir checkpoints/verl-test/${exp_name}/global_step_1/actor --test_hf_dir checkpoints/verl-test/${exp_name}/global_step_1/actor/huggingface
-          python -m verl.model_merger test --backend megatron --is-value-model --local_dir checkpoints/verl-test/${exp_name}/global_step_1/critic --test_hf_dir checkpoints/verl-test/${exp_name}/global_step_1/critic/huggingface
       - name: Running GSM8K E2E training tests with 3D parallelism on 8 L20 GPUs with FP8 rollout
         run: |
           ray stop --force
```

.github/workflows/e2e_ppo_trainer_megatron_vllm_2.yml

Lines changed: 0 additions & 37 deletions
```diff
@@ -153,42 +153,6 @@ jobs:
         run: |
           rm -rf checkpoints

-  e2e_ppo_trainer_megatron-qwen2_5vl-3b:
-    needs: setup
-    runs-on: ["${{ needs.setup.outputs.runner-label || 'L20x8' }}"]
-    timeout-minutes: 60 # Increase this timeout value as needed
-    env:
-      HTTP_PROXY: ${{ secrets.PROXY_HTTP }}
-      HTTPS_PROXY: ${{ secrets.PROXY_HTTPS }}
-      NO_PROXY: "localhost,127.0.0.1,hf-mirror.com"
-      HF_ENDPOINT: "https://hf-mirror.com"
-      HF_HUB_ENABLE_HF_TRANSFER: "0" # This is more stable
-    steps:
-      - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
-        with:
-          fetch-depth: 0
-      - name: Install the current repository
-        run: |
-          pip3 install --no-deps -e .[test]
-          pip3 install transformers==$TRANSFORMERS_VERSION
-      - name: Prepare Geo3k dataset
-        run: |
-          python3 examples/data_preprocess/geo3k.py --local_dataset_path ${HOME}/models/hf_data/hiyouga/geometry3k/
-      - name: Prepare dist_ckpt of Qwen2.5-VL-3B, only supports dist_ckpt
-        run: |
-          python3 scripts/converter_hf_to_mcore.py --hf_model_path ${HOME}/models/Qwen/Qwen2.5-VL-3B-Instruct --output_path checkpoints/verl-test/qwen2.5-vl-3b-megatron
-      - name: Running Geo3k E2E training tests with 3D parallelism on 8 L20 GPUs with Megatron (Qwen)
-        run: |
-          ray stop --force
-          TRAIN_FILES=${HOME}/data/geo3k/train.parquet VAL_FILES=${HOME}/data/geo3k/test.parquet \
-          MAX_PROMPT_LENGTH=1024 MAX_RESPONSE_LENGTH=2048 MODEL_ID=Qwen/Qwen2.5-VL-3B-Instruct ADV_ESTIMATOR=grpo \
-          USE_DYNAMIC_BSZ=False USE_FUSED_KERNELS=True SKIP_SAVE_HF_MODEL=1 \
-          COMMON_PP=4 COMMON_VPP=null COMMON_CP=1 COMMON_TP=2 USE_DIST_CKPT=true \
-          DIST_CKPT_PATH=checkpoints/verl-test/qwen2.5-vl-3b-megatron bash tests/special_e2e/run_ppo_trainer_megatron.sh
-      - name: clean up
-        run: |
-          rm -rf checkpoints
-
   e2e_ppo_trainer_fsdp_vllm:
     needs: setup
     runs-on: [ "${{ needs.setup.outputs.runner-label || 'L20x8' }}" ]
@@ -330,7 +294,6 @@ jobs:
       [
         setup,
         e2e_ppo_trainer_megatron-moe-expert-parallel,
-        e2e_ppo_trainer_megatron-qwen2_5vl-3b,
         e2e_ppo_trainer_fsdp-qwen2_5vl-3b,
         e2e_ppo_trainer_fsdp_vllm,
       ]
```

.github/workflows/model.yml

Lines changed: 0 additions & 30 deletions
```diff
@@ -48,7 +48,6 @@ on:
       # Entrypoints
       - ".github/workflows/model.yml"
       - "tests/special_distributed/test_fsdp_ckpt.py"
-      - "tests/special_distributed/test_mcore_config_converter.py"
       - "tests/special_distributed/test_tensor_dict.py"
       - "tests/models/**"
       - "tests/special_distributed/run_all.sh"
@@ -144,34 +143,6 @@ jobs:
         run: |
           STRATEGY=fsdp2 torchrun --nproc_per_node=8 tests/special_distributed/test_fsdp_ckpt.py

-  mcore_config_converter:
-    needs: setup
-    runs-on: [ "${{ needs.setup.outputs.runner-label || 'L20x8' }}" ]
-    timeout-minutes: 20 # Increase this timeout value as needed
-    env:
-      HTTP_PROXY: ${{ secrets.PROXY_HTTP }}
-      HTTPS_PROXY: ${{ secrets.PROXY_HTTPS }}
-      NO_PROXY: "localhost,127.0.0.1,hf-mirror.com"
-      HF_ENDPOINT: "https://hf-mirror.com"
-      HF_HUB_ENABLE_HF_TRANSFER: "0" # This is more stable
-    steps:
-      - uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 # v4.2.2
-        with:
-          fetch-depth: 0
-      - name: Install the current repository
-        run: |
-          pip3 install -e .[test]
-      # - name: Download model config files
-      #   run: |
-      #     hf download Qwen/Qwen2.5-7B config.json --local-dir $HOME/configs/Qwen/Qwen2.5-7B
-      #     hf download Qwen/Qwen3-8B config.json --local-dir $HOME/configs/Qwen/Qwen3-8B
-      #     hf download deepseek-ai/deepseek-coder-1.3b-instruct config.json --local-dir $HOME/configs/deepseek-ai/deepseek-coder-1.3b-instruct
-      #     hf download Qwen/Qwen2-57B-A14B config.json --local-dir $HOME/configs/Qwen/Qwen2-57B-A14B
-      #     hf download Qwen/Qwen3-30B-A3B config.json --local-dir $HOME/configs/Qwen/Qwen3-30B-A3B
-      #     hf download deepseek-ai/DeepSeek-V3-Base config.json --local-dir $HOME/configs/deepseek-ai/DeepSeek-V3-Base
-      - name: Running mcore config converter tests on 8 L20 GPUs
-        run: |
-          torchrun --nproc_per_node=8 tests/special_distributed/test_mcore_config_converter.py

   model_engine:
     needs: setup
@@ -206,7 +177,6 @@ jobs:
       setup,
       model_rmpad,
       model_rmpad_fsdp2_unstable,
-      mcore_config_converter,
       model_engine
     ]
     if: always()
```

docs/advance/checkpoint.rst

Lines changed: 2 additions & 26 deletions
```diff
@@ -137,32 +137,8 @@ Current implementation use solution 2.
 HuggingFace to Megatron DistCheckpoint details
 ----------------------------------------------

-If your model is quite huge, we recommend you to use Megatron dist-checkpoint to load the model.
-Megatron dist-checkpoint supports loading with different kinds of model parallelism,
-and it is much faster than the original checkpoint loading.
-
-To convert original HuggingFace model to Megatron dist-checkpoint,
-you can use the ``scripts/converter_hf_to_mcore.py`` script. Large MoE models are temporarily supported with CPU initialization,
-which is a little slower. While we are working on a better solution to support large models.
-
-Example command to convert the model is as follows:
-
-.. code:: bash
-
-    python scripts/converter_hf_to_mcore.py \
-        --hf_model_path Qwen/Qwen1.5-MoE-A2.7B-Chat \
-        --output_path /mnt/disk/Qwen/Qwen1.5-MoE-A2.7B-Chat \
-        --use_cpu_initialization # Only work for MoE models
-
-
-Example command to distributed convert the huge model like deepseekv3 671B is as follows:
-
-.. code:: bash
-
-    torchrun --nproc_per_node 1 --nnodes 8 --node_rank ${RANK} scripts/converter_hf_to_mcore.py \
-        --hf_model_path deepseek-ai/DeepSeek-V3 \
-        --output_path /mnt/disk/deepseek-ai/DeepSeek-V3 \
-        --use_cpu_initialization # Only work for MoE models
+Through ``mbridge``, we can directly save the mcore model to huggingface format during training.
+No need to convert the model to Megatron dist-checkpoint format.

 Original Checkpoint Utils
 -------------------------
```

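The diffs above remove the `verl.model_merger test` round-trip checks from CI. For anyone who still wants to run that sanity check by hand against a previously saved Megatron checkpoint, a minimal sketch follows; the experiment name and directory layout mirror the deleted workflow steps and are illustrative, not a supported CI path:

```shell
# Sketch: manually re-running the merger sanity check that CI no longer runs.
# Assumes a verl checkout with its dependencies installed and a Megatron
# checkpoint produced by a prior training run under checkpoints/verl-test/.
exp_name="qwen3-0.6b-megatron-gsm8k-minimal"
ckpt="checkpoints/verl-test/${exp_name}/global_step_1"

# Actor: Qwen3-0.6B ties word embeddings, hence --tie-word-embedding
python -m verl.model_merger test --backend megatron --tie-word-embedding \
    --local_dir "${ckpt}/actor" --test_hf_dir "${ckpt}/actor/huggingface"

# Critic: a value model, hence --is-value-model
python -m verl.model_merger test --backend megatron --is-value-model \
    --local_dir "${ckpt}/critic" --test_hf_dir "${ckpt}/critic/huggingface"
```

The flags are taken verbatim from the removed workflow steps; with mbridge as the default path, checkpoints are saved in huggingface format directly and this check becomes unnecessary.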