
Multiple GPUs error #3

@ArsenLuca

Description


When I use DataParallel to run on multiple GPUs, the function sparsestmax raises an error:
RuntimeError: arguments are located on different GPUs at /pytorch/aten/src/THC/generic/THCTensorMathCompareT.cu:31
It seems that rad is only placed on GPU 0.
How should I reorganize the code so that rad is available on all GPUs? A possible fix is sketched below.
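A minimal sketch of one way to address this, assuming rad is a tensor stored on the module rather than a registered buffer (the class name SparsestMax, the attribute self.rad, and the placeholder computation are all assumptions, not taken from the actual repository): registering rad as a buffer lets nn.DataParallel replicate it to each GPU, and moving it to the input's device inside forward guards against mixed-device comparisons.

```python
import torch
import torch.nn as nn


# Hypothetical sketch: SparsestMax, init_rad, and self.rad are illustrative
# names, not the repository's actual API.
class SparsestMax(nn.Module):
    def __init__(self, init_rad=0.0):
        super().__init__()
        # Register rad as a buffer so nn.DataParallel broadcasts it to every
        # replica along with the module's parameters.
        self.register_buffer('rad', torch.tensor(init_rad))

    def forward(self, x):
        # Extra safety: keep rad on the same device as the input, so
        # elementwise comparisons such as (x > rad) never mix GPUs.
        rad = self.rad.to(x.device)
        # ... the real sparsestmax computation would use `rad` here ...
        return x - rad  # placeholder for the actual computation
```

With this setup, wrapping the model as nn.DataParallel(model).cuda() should no longer trigger the mixed-device RuntimeError, since each replica receives its own copy of rad during the forward pass.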
