When I use DataParallel with multiple GPUs, the `sparsestmax` function raises an error:
RuntimeError: arguments are located on different GPUs at /pytorch/aten/src/THC/generic/THCTensorMathCompareT.cu:31
It seems that `rad` is only placed on gpu0.
How should I reorganize the code so that `rad` is available on all GPUs?
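For reference, `DataParallel` only copies registered parameters and buffers to each replica's device; a tensor stored as a plain Python attribute stays on the GPU it was created on, which produces exactly this mismatch. A minimal sketch of the usual fix, assuming `rad` is currently a plain tensor attribute (the module and attribute names here are hypothetical, since the actual sparsestmax code is not shown in the issue):

```python
import torch
import torch.nn as nn

class Sparsestmax(nn.Module):
    # Hypothetical stand-in for the real sparsestmax module.
    def __init__(self):
        super().__init__()
        # Registering rad as a buffer (instead of `self.rad = torch.zeros(1)`)
        # lets DataParallel replicate it to every GPU alongside the module.
        self.register_buffer("rad", torch.zeros(1))

    def forward(self, x):
        # Inside each replica, self.rad is now on the same device as x.
        return torch.clamp(x - self.rad, min=0)

m = Sparsestmax()
print("rad" in dict(m.named_buffers()))  # the buffer is tracked by the module
```

With `rad` registered as a buffer, `nn.DataParallel(model)` will place a copy of it on each GPU, so the comparison inside `sparsestmax` no longer mixes devices. Alternatively, computing `rad` on the fly from the input (e.g. using `x.device`) inside `forward` avoids storing device-bound state altogether.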