
Multi-GPU training capability for the PyTorch Transformer LM training script - https://github.com/kaldi-asr/kaldi/blob/master/egs/wsj/s5/local/pytorchnn/run_nnlm.sh #4699

Open

Description

@saikirandingu

I used the script https://github.com/kaldi-asr/kaldi/blob/master/egs/wsj/s5/local/pytorchnn/run_nnlm.sh, but I could not figure out how to distribute the training of the Transformer-based LM across multiple GPUs in order to speed up the PyTorch training. Please suggest a way to do this, if one exists.

Thanks!
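For context, a minimal sketch of one common approach: single-node data parallelism with `torch.nn.DataParallel`, applied to the model before the training loop starts. This is not the Kaldi recipe's own code; the model class and hyperparameter names in the usage comment are illustrative assumptions, not the identifiers actually used in `steps/pytorchnn/train.py`.

```python
# Sketch: wrap a PyTorch Transformer LM for single-node multi-GPU training.
# Assumption: the existing training loop already moves batches to CUDA and
# computes a loss on the model's output; only the model wrapping changes.
import torch
import torch.nn as nn


def wrap_for_multi_gpu(model: nn.Module) -> nn.Module:
    """Replicate the model on all visible GPUs so each forward pass
    splits the input batch along dim 0 and gathers outputs on GPU 0."""
    if torch.cuda.device_count() > 1:
        model = nn.DataParallel(model)
    return model.cuda()


# Hypothetical usage (model construction names are assumptions):
# model = TransformerModel(ntoken=vocab_size, ninp=512, nhead=8,
#                          nhid=2048, nlayers=6, dropout=0.1)
# model = wrap_for_multi_gpu(model)
# The rest of the single-GPU training loop can remain unchanged, provided
# the per-step batch size is large enough to be split across the GPUs.
```

For multi-node or more scalable setups, `torch.nn.parallel.DistributedDataParallel` (one process per GPU, launched via `torchrun`) is the usual alternative, but it requires restructuring the training script around a distributed sampler and process-group initialization rather than a one-line wrap.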
