Fix FutureWarning: Replace torch.cuda.amp.GradScaler with torch.amp.GradScaler (#3458)
## Plan to Fix FutureWarning for torch.cuda.amp.GradScaler
- [x] Update ignite/engine/__init__.py:
  - [x] Change import from `torch.cuda.amp.GradScaler` to `torch.amp.GradScaler`
  - [x] Update type hints from `"torch.cuda.amp.GradScaler"` to `"torch.amp.GradScaler"`
  - [x] Update documentation example from `torch.cuda.amp.GradScaler(2**10)` to `torch.amp.GradScaler('cuda', 2**10)`
  - [x] Update docstring reference from `torch.cuda.amp` to `torch.amp`
  - [x] Collapse imports: combine `from torch.amp import autocast` and `from torch.amp import GradScaler` (a before/after sketch follows this list)
  - [x] Remove explicit 'cuda' parameter from GradScaler instantiations (device is auto-detected)
  - [x] Keep PyTorch version requirements at >= 1.12.0 (when torch.amp was introduced)
- [x] Update tests/ignite/engine/test_create_supervised.py:
  - [x] Update all type hints from `"torch.cuda.amp.GradScaler"` to `"torch.amp.GradScaler"`
  - [x] Update test instantiations from `torch.cuda.amp.GradScaler` to `torch.amp.GradScaler`
  - [x] Remove 'cuda' parameter from test GradScaler instantiations
- [x] Update example files:
  - [x] examples/cifar10/main.py - removed 'cuda' parameter
  - [x] examples/cifar10_qat/main.py - removed 'cuda' parameter
  - [x] examples/cifar100_amp_benchmark/benchmark_torch_cuda_amp.py - removed 'cuda' parameter
  - [x] examples/transformers/main.py - removed 'cuda' parameter
  - [x] examples/references/segmentation/pascal_voc2012/main.py - removed 'cuda' parameter
  - [x] examples/references/classification/imagenet/main.py - removed 'cuda' parameter
  - [x] examples/notebooks/CycleGAN_with_torch_cuda_amp.ipynb - reverted to original state
- [x] Update documentation and notebooks:
  - [x] README.md - updated benchmark description from torch.cuda.amp to torch.amp
  - [x] docs/source/conf.py - updated type hint reference from torch.cuda.amp.grad_scaler to torch.amp.grad_scaler
  - [x] examples/notebooks/CycleGAN_with_torch_cuda_amp.ipynb - updated GradScaler references and collapsed imports
- [x] Fix code style issues:
  - [x] Added newline at end of CycleGAN notebook to satisfy pre-commit hooks
- [x] Run linters and tests to validate changes
  - [x] Python syntax check passed
  - [x] Module import test passed
  - [x] GradScaler instantiation tests passed (without explicit device parameter)
  - [x] Fixed CI test failures
  - [x] Verified no remaining torch.cuda.amp.GradScaler references
  - [x] Code style checks passing
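For illustration, a minimal before/after sketch of the import and instantiation change described above (not verbatim from `ignite/engine/__init__.py`; variable names are placeholders):

```python
import torch

# Before (deprecated, emitted a FutureWarning):
#   from torch.cuda.amp import GradScaler
#   scaler = torch.cuda.amp.GradScaler(2**10)

# After: a single collapsed import from the device-generic namespace
from torch.amp import autocast, GradScaler

# No explicit 'cuda' argument: the scaler targets CUDA by default, and here it
# is only enabled when CUDA is actually available.
scaler = GradScaler(enabled=torch.cuda.is_available())

# Rough equivalent of the old docstring example torch.cuda.amp.GradScaler(2**10):
scaler_with_init_scale = GradScaler(init_scale=2**10, enabled=torch.cuda.is_available())
```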
## Summary
Successfully replaced all occurrences of the deprecated `torch.cuda.amp.GradScaler` with `torch.amp.GradScaler`:
- **Changed 11 files** (1 core module, 1 test file, 6 example scripts, 1 README, 1 config, 1 notebook)
- **Collapsed imports** as requested: `from torch.amp import autocast, GradScaler`
- **Updated all type hints** from `"torch.cuda.amp.GradScaler"` to `"torch.amp.GradScaler"`
- **Simplified instantiations** to use `GradScaler(enabled=...)` without an explicit device parameter (auto-detected); a usage sketch follows this list
- **Maintained version requirements** at PyTorch >= 1.12.0 (when torch.amp was introduced)
- **Updated documentation and notebooks** to reflect the new API
- **Fixed code style issues** to pass pre-commit checks
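As a usage illustration (a sketch, not code from this PR; it assumes a CUDA machine and relies on `create_supervised_trainer`'s existing `amp_mode`/`scaler` keyword arguments):

```python
import torch
from torch import nn
from torch.amp import GradScaler
from ignite.engine import create_supervised_trainer

model = nn.Linear(10, 2).to("cuda")
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
criterion = nn.CrossEntropyLoss()

# The scaler is created without an explicit 'cuda' argument and handed to the
# AMP-enabled supervised trainer.
scaler = GradScaler()
trainer = create_supervised_trainer(
    model, optimizer, criterion, device="cuda", amp_mode="amp", scaler=scaler
)
```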
The changes have been tested and validated:
- ✓ No syntax errors
- ✓ Module imports successfully
- ✓ No deprecation warnings (a quick way to check this is sketched below)
- ✓ GradScaler works correctly with simplified API (device auto-detection)
- ✓ Fixed test failures by removing 'cuda' parameter from test instantiations
- ✓ All torch.cuda.amp.GradScaler references replaced with torch.amp.GradScaler
- ✓ Code style checks passing
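A quick local check that constructing the scaler through the new namespace raises no `FutureWarning` (a sketch; it only assumes `torch.amp.GradScaler` is available in the installed PyTorch):

```python
import warnings

import torch
from torch.amp import GradScaler

# Promote FutureWarning to an error so any lingering deprecation surfaces loudly.
with warnings.catch_warnings():
    warnings.simplefilter("error", FutureWarning)
    scaler = GradScaler(enabled=torch.cuda.is_available())

print("No FutureWarning raised; scaler enabled:", scaler.is_enabled())
```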
<!-- START COPILOT CODING AGENT SUFFIX -->
<details>
<summary>Original prompt</summary>
>
> ----
>
> *This section details on the original issue you should resolve*
>
> <issue_title>Fix FutureWarning: `torch.cuda.amp.GradScaler(args...)` is deprecated. Please use `torch.amp.GradScaler('cuda', args...)` instead.</issue_title>
> <issue_description></issue_description>
>
> <agent_instructions>Change the code where used `torch.cuda.amp.GradScaler` into `torch.amp.GradScaler`. Make sure to collapse imports like:
> ```python
> from torch.amp import autocast
> from torch.amp import GradScaler
> ```
> into
> ```python
> from torch.amp import autocast, GradScaler
> ```</agent_instructions>
>
> ## Comments on the Issue (you are @copilot in this section)
>
> <comments>
> <comment_new><author>@vfdev-5</author><body>
> @keelobytes go ahead. Thanks!</body></comment_new>
> <comment_new><author>@vfdev-5</author><body>
> @keelobytes any progress on this ticket?</body></comment_new>
> <comment_new><author>@vfdev-5</author><body>
> Send a PR if you can work on this issue</body></comment_new>
> </comments>
>
</details>
Fixes #3435
---------
Co-authored-by: copilot-swe-agent[bot] <[email protected]>
Co-authored-by: vfdev-5 <[email protected]>
Co-authored-by: vfdev <[email protected]>
### README.md (1 addition, 1 deletion)

```diff
@@ -397,7 +397,7 @@ Few pointers to get you started:
 [Benchmark mixed precision training on Cifar100:
-torch.cuda.amp vs nvidia/apex](https://github.com/pytorch/ignite/blob/master/examples/notebooks/Cifar100_bench_amp.ipynb)
+torch.amp vs nvidia/apex](https://github.com/pytorch/ignite/blob/master/examples/notebooks/Cifar100_bench_amp.ipynb)
```
### examples/notebooks/CycleGAN_with_torch_cuda_amp.ipynb (11 additions, 12 deletions)

In the first hunk (`@@ -875,10 +875,10 @@`), the four generator/discriminator loss bullets are unchanged in wording; only the literal minus sign "−" is re-encoded as the `\u2212` escape. The second hunk updates the markdown cell that describes the AMP setup:

```diff
@@ -887,7 +887,7 @@
     "id": "JE8dLeEfIl_Z"
   },
   "source": [
-    "We will use [`torch.amp.autocast`](https://pytorch.org/docs/master/amp.html#torch.amp.autocast) and [`torch.cuda.amp.GradScaler`](https://pytorch.org/docs/master/amp.html#torch.cuda.amp.GradScaler) to perform automatic mixed precision training. Our code follows a [typical mixed precision training example](https://pytorch.org/docs/master/notes/amp_examples.html#typical-mixed-precision-training)."
+    "We will use [`torch.amp.autocast`](https://pytorch.org/docs/master/amp.html#torch.amp.autocast) and [`torch.amp.GradScaler`](https://pytorch.org/docs/master/amp.html#torch.amp.GradScaler) to perform automatic mixed precision training. Our code follows a [typical mixed precision training example](https://pytorch.org/docs/master/notes/amp_examples.html#typical-mixed-precision-training)."
```
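For readers of the notebook cell above, a minimal sketch of the "typical mixed precision training" pattern it links to, using the collapsed `torch.amp` import (synthetic model and data, purely illustrative):

```python
import torch
from torch import nn
from torch.amp import autocast, GradScaler

device = "cuda" if torch.cuda.is_available() else "cpu"
model = nn.Linear(16, 4).to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
scaler = GradScaler(enabled=(device == "cuda"))

for _ in range(3):  # a few synthetic training steps
    inputs = torch.randn(8, 16, device=device)
    targets = torch.randint(0, 4, (8,), device=device)
    optimizer.zero_grad()
    with autocast(device, enabled=(device == "cuda")):  # forward pass in mixed precision
        loss = nn.functional.cross_entropy(model(inputs), targets)
    scaler.scale(loss).backward()  # scale the loss to limit FP16 gradient underflow
    scaler.step(optimizer)         # unscales gradients and skips the step on inf/NaN
    scaler.update()                # adjusts the loss-scale factor for the next step
```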