[training] skip Dtensor/TP integration test pending solution #4059
Open
danielvegamyhre wants to merge 1 commit into main from
Conversation
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/4059
Note: Links to docs will display an error until the docs builds have been completed.
As of commit 4264158 with merge base 77f23d0: ❌ 2 New Failures, 6 Pending.
NEW FAILURES - The following jobs have failed:
This comment was automatically generated by Dr. CI and updates every 15 minutes.
Warning: Unknown label
Please add the new label to .github/pytorch-probot.yml
Force-pushed from 8f27938 to ad5b9a7 (Compare)
… on ROCm (#3992)

* [ROCm] Enable FSDP2 Float8 and affine quantized tensor parallel tests on ROCm

  Remove blanket ROCm test skips and fix FP8 hardware capability gates to support AMD MI300/MI350 GPUs alongside NVIDIA SM89+/SM90+.

  test/float8/test_fsdp2/test_fsdp2.py:
  - Replace the dual module-level skip (is_sm_at_least_89 + ROCm skip) with a single gate: is_sm_at_least_89() or is_MI300() or is_MI350()
  - Import e4m3_dtype from config and use it in test_amax_allreduce_device_mesh instead of hardcoded torch.float8_e4m3fn (MI300 uses float8_e4m3fnuz)

  test/dtypes/test_affine_quantized_tensor_parallel.py:
  - Remove the module-level pytest.skip on ROCm that blocked all TP tests (Int8wo, Int4wo, Int8dq) even though they have no FP8 dependency
  - Fix the Float8 TP class gate: use is_sm_at_least_90() instead of a raw get_device_capability() >= (9, 0) check, which incorrectly passes on ROCm where gfx90a (MI250X) maps to (9, 0) despite lacking FP8 support

  Validated on MI250X (gfx90a, 8 GPUs):
  - FSDP2 Float8: correctly skipped (MI250X lacks FP8)
  - Affine quantized TP: 4 passed, 2 skipped (Int8wo 3/3, Int8dq 1/1)
  - Float8 TP classes correctly not defined on non-FP8 hardware

* Fix ruff F401: remove unused pytest import in test_affine_quantized_tensor_parallel.py

  The pytest import was left over after removing the module-level pytest.skip on ROCm.

* Fix ruff format: break long pytest.skip line
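The gating logic described in the commit message can be sketched in plain Python. This is a minimal illustration, not torchao's actual implementation: the real helpers (is_sm_at_least_89, is_MI300, etc.) query torch.cuda at runtime, while here the device is modeled as a dataclass so the logic runs without a GPU, and the MI350 arch string is an assumption.

```python
# Hedged sketch of the FP8 capability gates from the commit message.
# The Device dataclass and the MI350 arch string ("gfx950") are
# assumptions for illustration; torchao's helpers query torch.cuda.

from dataclasses import dataclass


@dataclass
class Device:
    vendor: str        # "nvidia" or "amd"
    capability: tuple  # what get_device_capability() would report
    arch: str = ""     # ROCm arch, e.g. "gfx90a" (MI250X), "gfx942" (MI300)


def is_sm_at_least_89(dev):
    return dev.vendor == "nvidia" and dev.capability >= (8, 9)


def is_sm_at_least_90(dev):
    # Unlike a raw `get_device_capability() >= (9, 0)` check, this must not
    # fire on ROCm, where gfx90a (MI250X) also reports (9, 0) but lacks FP8.
    return dev.vendor == "nvidia" and dev.capability >= (9, 0)


def is_mi300(dev):
    return dev.vendor == "amd" and dev.arch == "gfx942"


def is_mi350(dev):
    return dev.vendor == "amd" and dev.arch == "gfx950"  # assumed arch name


def fp8_tests_enabled(dev):
    # The single gate that replaces the dual module-level skip:
    return is_sm_at_least_89(dev) or is_mi300(dev) or is_mi350(dev)


def preferred_e4m3(dev):
    # Why the commit swaps hardcoded torch.float8_e4m3fn for a configured
    # e4m3_dtype: MI300 uses the "fnuz" float8 variant.
    return "float8_e4m3fnuz" if is_mi300(dev) else "float8_e4m3fn"
```

With this model, an SM90 NVIDIA GPU and an MI300 both pass the gate, while an MI250X (gfx90a) is correctly excluded even though it reports capability (9, 0).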
andrewor14 approved these changes on Mar 11, 2026
Summary