Error when using 16-bit precision #99
Description
Hi, thank you for the great integration of Lightning & Ray!
I found that using 16-bit precision raises the following error: `pytorch_lightning.utilities.exceptions.MisconfigurationException: You have asked for native AMP on CPU, but AMP is only available on GPU`
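For reference, here's a minimal sketch of the kind of script I'm running. The `BoringModel`, data, and worker counts are just placeholders, and I'm assuming the standard `RayPlugin` entry point from `ray_lightning`:

```python
import torch
import pytorch_lightning as pl
from torch.utils.data import DataLoader, TensorDataset
from ray_lightning import RayPlugin


class BoringModel(pl.LightningModule):
    """Tiny placeholder model; any LightningModule shows the same behavior."""

    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(32, 2)

    def training_step(self, batch, batch_idx):
        (x,) = batch
        return self.layer(x).sum()

    def configure_optimizers(self):
        return torch.optim.SGD(self.parameters(), lr=0.1)


train_loader = DataLoader(TensorDataset(torch.randn(64, 32)), batch_size=8)

# GPUs are requested only through the Ray plugin, not on the Trainer itself.
plugin = RayPlugin(num_workers=2, use_gpu=True)

trainer = pl.Trainer(
    precision=16,      # raises the MisconfigurationException above
    plugins=[plugin],  # note: no `gpus=...` passed to the Trainer
    max_epochs=1,
)
trainer.fit(BoringModel(), train_loader)
```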
The same script works fine with 32-bit precision.
I believe this happens because the number of GPUs is only set in the Ray plugin and not on the Lightning Trainer, and this AMP check runs before the Ray plugin is applied. More generally, it may be a bit dangerous not to set the Trainer's number of GPUs when GPUs are actually intended, since Lightning performs other internal checks that could lead to unexpected behavior like this one.
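If my read is right, something like the following might get past the check, though I haven't verified it and I'm not sure whether the Trainer's `gpus` argument interacts correctly with the plugin's own GPU handling (the `gpus=1` here is purely a hypothetical workaround):

```python
# Hypothetical workaround (untested): also tell the Trainer about the GPUs
# so the native-AMP check sees a GPU device before the plugin takes over.
trainer = pl.Trainer(
    precision=16,
    gpus=1,            # assumption: this satisfies the AMP-on-GPU check
    plugins=[plugin],  # same RayPlugin as above
    max_epochs=1,
)
```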
I'd appreciate any tips on getting this integration to work with half-precision training. Thank you!