
[BUG] Trainer has no attribute 'use_cuda_amp' #795

Open
@SinnerSS

Description

Bug description

Trainer.train() raises the following error at the end of training:
AttributeError: 'Trainer' object has no attribute 'use_cuda_amp'
During handling of the above exception, another exception occurred:
AttributeError: 'Trainer' object has no attribute 'use_amp'

Steps/Code to reproduce bug

Train a model with these configurations:
training_args = T4RecTrainingArguments(
    output_dir=lb_out.as_posix(),
    max_sequence_length=20,
    data_loader_engine='nvtabular',
    num_train_epochs=1,
    dataloader_drop_last=False,
    per_device_train_batch_size=256,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=1,
    learning_rate=0.0005,
    report_to=[],
    save_strategy='no',
    logging_steps=500,
)

trainer = Trainer(
    model=model,
    args=training_args,
    schema=schema,
    compute_metrics=True,
)

Expected behavior

Model finishes training without raising an error.

Environment details

  • Transformers4Rec version: 23.08.00
  • Platform: Ubuntu 22.04.5 LTS
  • Python version: 3.10.16
  • Huggingface Transformers version: 4.45.0
  • PyTorch version (GPU?): 2.4.1
  • Tensorflow version (GPU?):

Additional context

It seems that transformers has removed the CUDA-specific AMP code path (and with it the `use_cuda_amp` attribute) in favor of a device-agnostic AMP implementation: huggingface/transformers#27760
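As a stopgap until the incompatibility is fixed upstream, one can restore the legacy attribute names on the trainer instance before calling `train()`. This is a hypothetical sketch, not an official fix: the helper name `patch_amp_attributes` is invented here, and it assumes the Transformers4Rec Trainer only reads `use_cuda_amp` / `use_amp` as plain boolean flags (the two names appearing in the traceback).

```python
def patch_amp_attributes(trainer):
    """Alias the legacy AMP flags expected by Transformers4Rec 23.08.

    Newer Hugging Face transformers releases dropped the Trainer's
    `use_cuda_amp` attribute (see huggingface/transformers#27760).
    If the old names are missing, fall back to whichever AMP flag
    still exists, defaulting to False (AMP disabled).
    """
    if not hasattr(trainer, "use_cuda_amp"):
        # Mirror the newer flag if present, otherwise disable AMP.
        trainer.use_cuda_amp = getattr(trainer, "use_amp", False)
    if not hasattr(trainer, "use_amp"):
        trainer.use_amp = getattr(trainer, "use_cuda_amp", False)
    return trainer
```

Usage would be `patch_amp_attributes(trainer)` right after constructing the Trainer. Pinning transformers to a version predating #27760 is the safer alternative if mixed-precision training is actually needed.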
