Distributed Data Parallel #68
Conversation
It is often useful to access convolutional layer attributes, e.g. to precalculate output shapes (see the sketch after this commit list).
attribute readers for ConvNd
…t_mask Fixed generation of square subsequent mask
…ad:map_location, torchrun, Tests are provided
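As a sketch of why these readers are handy, the snippet below precalculates a Conv2d output size from the layer's own attributes. It assumes the readers added here (`kernel_size`, `stride`, `padding`, `dilation`) return per-dimension arrays, mirroring PyTorch's attribute names:

```ruby
require "torch"

# Precalculate the spatial output size of a Conv2d layer from its attributes.
# Assumes kernel_size, stride, padding, dilation readers return [h, w] arrays.
def conv2d_output_hw(conv, height, width)
  [height, width].each_with_index.map do |size, i|
    k = conv.kernel_size[i]
    s = conv.stride[i]
    p = conv.padding[i]
    d = conv.dilation[i]
    (size + 2 * p - d * (k - 1) - 1) / s + 1
  end
end

conv = Torch::NN::Conv2d.new(3, 16, 3, stride: 2, padding: 1)
p conv2d_output_hw(conv, 224, 224) # => [112, 112]
```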
Hi @orlando-labs, thanks for the PR. I think most of this would be better as a separate gem for now, as I'm not in a position to support this functionality. (also, there should already be a …)
Hi, @ankane. It's a really good idea to move this to a separate gem. However, some core functionality changes are needed to run DDP, such as improved device handling and map_location support when loading.
Feel free to create individual PRs for those specific changes (for device handling, there's already a …)
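For illustration, here is a minimal sketch of the kind of location mapping referred to above. It assumes `Torch.load` accepts a `map_location` option analogous to PyTorch's `torch.load(..., map_location=...)`; the option name and signature are assumptions, not confirmed torch.rb API:

```ruby
require "torch"

# Hypothetical: assumes Torch.load takes a map_location option like PyTorch's.
# Each DDP process would remap a checkpoint saved on one device onto its own
# GPU before loading the weights.
local_rank = ENV.fetch("LOCAL_RANK", "0").to_i

model = Torch::NN::Linear.new(10, 2)
state_dict = Torch.load("checkpoint.pth", map_location: "cuda:#{local_rank}")
model.load_state_dict(state_dict)
```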
The code and its behavior mostly mimic PyTorch's. The only real difference is the multiprocessing part, where the Ruby flow diverges from the Python one.
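A minimal sketch of the intended usage, assuming the Ruby API mirrors PyTorch's names (`Torch::Distributed.init_process_group` and `Torch::NN::Parallel::DistributedDataParallel` are assumed names here, and the rank variables come from a torchrun-style launcher):

```ruby
require "torch"

# Assumed API names mirroring PyTorch: Torch::Distributed.init_process_group
# and Torch::NN::Parallel::DistributedDataParallel. RANK/WORLD_SIZE/LOCAL_RANK
# are the environment variables a torchrun-style launcher sets per process.
rank       = ENV.fetch("RANK", "0").to_i
world_size = ENV.fetch("WORLD_SIZE", "1").to_i
local_rank = ENV.fetch("LOCAL_RANK", "0").to_i

Torch::Distributed.init_process_group("nccl", rank: rank, world_size: world_size)

device = "cuda:#{local_rank}"
model = Torch::NN::Linear.new(10, 2).to(device)
ddp_model = Torch::NN::Parallel::DistributedDataParallel.new(model, device_ids: [local_rank])

criterion = Torch::NN::MSELoss.new
optimizer = Torch::Optim::SGD.new(ddp_model.parameters, lr: 0.01)

x = Torch.randn(8, 10, device: device)
y = Torch.randn(8, 2, device: device)

optimizer.zero_grad
loss = criterion.call(ddp_model.call(x), y)
loss.backward # gradients are averaged across processes during backward
optimizer.step
```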
All code has been tested in a multi-GPU environment, which is not currently reproducible on GitHub CI.
We tried to maximize test coverage for every aspect of DDP communication. A benchmark and an example are also included.