Delete the dtype argument from the decorate API #5815


Open · wants to merge 1 commit into base: develop
3 changes: 1 addition & 2 deletions docs/api/paddle/amp/decorate_cn.rst
@@ -3,7 +3,7 @@
decorate
-------------------------------

- .. py:function:: paddle.amp.decorate(models, optimizers=None, level='O1', dtype='float16', master_weight=None, save_dtype=None, master_grad=False, excluded_layers=None)
+ .. py:function:: paddle.amp.decorate(models, optimizers=None, level='O1', master_weight=None, save_dtype=None, master_grad=False, excluded_layers=None)


Decorate the parameters of a neural network to support the automatic mixed precision (AMP) strategy for operators executed in dynamic graph mode.
@@ -17,7 +17,6 @@ decorate
- **models** (Layer|list of Layer) - The network model(s). In ``O2`` mode, the input model parameters are cast from float32 to float16 or bfloat16.
- **optimizers** (Optimizer|list of Optimizer, optional) - The optimizer(s). Defaults to None. If an optimizer or a list of optimizers is passed, their master_weight attribute is set according to master_weight.
- **level** (str, optional) - The mixed precision training mode. Defaults to ``O1``.
- - **dtype** (str, optional) - Whether mixed precision training uses float16 or bfloat16 as its data type. Defaults to float16.
- **master_weight** (bool|None, optional) - Whether to use the master weight strategy. Optimizers that support it include ``adam``, ``adamW`` and ``momentum``. Defaults to None, in which case the master weight strategy is used in ``O2`` mode.
- **save_dtype** (str|None, optional) - The data type used to save the network, one of float16, bfloat16, float32 or float64. ``save_dtype`` specifies the data type of the network parameters saved via ``paddle.save`` and ``paddle.jit.save``. Defaults to None, which keeps the existing parameter data type.
- **master_grad** (bool, optional) - Whether, in ``O2`` mode, to use float32 weight gradients for computations such as gradient clipping, weight decay and weight updates. If enabled, the weight gradients will be float32 after back-propagation. Defaults to False, in which case the model keeps only a single copy of float16 weight gradients.
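For context, below is a minimal usage sketch of ``paddle.amp.decorate`` consistent with the signature documented after this change, i.e. without a ``dtype`` argument. It is not taken from the PR: the model, optimizer, tensor shape and ``save_dtype`` choice are illustrative assumptions, and it assumes a device that supports AMP.

```python
import paddle

# Illustrative model and optimizer; any Layer/Optimizer pair would do.
model = paddle.nn.Linear(10, 10)
optimizer = paddle.optimizer.AdamW(parameters=model.parameters())

# Decorate model and optimizer for O2 mixed-precision training.
# Note that no dtype argument is passed, matching the documented signature;
# save_dtype='float32' keeps saved parameters in float32.
model, optimizer = paddle.amp.decorate(
    models=model,
    optimizers=optimizer,
    level='O2',
    save_dtype='float32',
)

# Run the forward pass under auto_cast at the same level as the decoration.
with paddle.amp.auto_cast(level='O2'):
    out = model(paddle.rand([4, 10]))
```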