Bug description
I just updated my Lightning version to 2.0, and running inference on CPU is extremely slow.
My previous code can be found here. Inference is basically performed by initialising a trainer with devices=0 and strategy=None:
trainer = ptl.Trainer(
    devices=devices,
    logger=False,
    callbacks=callbacks,
    accelerator=accelerator if gpus > 0 else "cpu",
    strategy=None if gpus < 2 else "ddp",
    enable_progress_bar=enable_progress_bar,
)
return_predictions = False if gpus > 1 else True
predictions = trainer.predict(
    self, dataloaders=dataloader, return_predictions=return_predictions
)
Since in Lightning 2.0 we can't pass None to strategy, I replaced it with "cpu" when gpus < 1. Yet this makes the code much slower. Is this normal? On GPU everything seems to work fine.
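For reference, here is a minimal sketch of how I would expect the same setup to be written against the Lightning 2.0 Trainer API, assuming the default strategy="auto" is used on CPU instead of None, and that devices must be at least 1 in 2.0. The names devices, callbacks, gpus, dataloader, and enable_progress_bar come from the snippet above, and self is the LightningModule the method runs in:

import pytorch_lightning as ptl

# Sketch only: in 2.0 the CPU branch relies on strategy="auto"
# rather than passing strategy=None.
trainer = ptl.Trainer(
    devices=devices if gpus > 0 else 1,  # devices=0 is no longer accepted
    logger=False,
    callbacks=callbacks,
    accelerator="gpu" if gpus > 0 else "cpu",
    strategy="ddp" if gpus > 1 else "auto",
    enable_progress_bar=enable_progress_bar,
)
predictions = trainer.predict(
    self, dataloaders=dataloader, return_predictions=gpus <= 1
)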
How to reproduce the bug
No response
Error messages and logs
# Error messages and logs here please
Environment
Current environment
#- Lightning Component (e.g. Trainer, LightningModule, LightningApp, LightningWork, LightningFlow):
#- PyTorch Lightning Version (e.g., 1.5.0):
#- Lightning App Version (e.g., 0.5.2):
#- PyTorch Version (e.g., 2.0):
#- Python version (e.g., 3.9):
#- OS (e.g., Linux):
#- CUDA/cuDNN version:
#- GPU models and configuration:
#- How you installed Lightning(`conda`, `pip`, source):
#- Running environment of LightningApp (e.g. local, cloud):
More info
No response
cc @Borda