feat(Preencoding): Add test for separate CUDA device usage #384

Open · wants to merge 1 commit into main

Conversation

@Pebin-Joseph Pebin-Joseph commented Feb 14, 2025

Summary

This PR adds support for preencoding images on a separate CUDA device before passing the encoded tensors to the UNet model for training. This lets the encoder and the UNet occupy different GPUs, improving multi-GPU efficiency and freeing memory on the training device.

Changes Made

  • Added encoding_device and training_device parameters to the Imagen class.
  • Moved image encoding to a separate CUDA device (cuda:0).
  • Moved encoded images to the training device (cuda:1) before UNet processing.
  • Ensured non-blocking tensor transfer for efficiency.
  • Added test/test_preencoding.py to verify the implementation.
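The flow described above can be sketched roughly as follows. The `encoding_device` and `training_device` names mirror the new `Imagen` parameters from this PR, but the `PreencodingPipeline` class and its encoder/UNet modules are hypothetical stand-ins, not the real imagen-pytorch classes; the example falls back to CPU so it runs without GPUs.

```python
import torch
import torch.nn as nn

class PreencodingPipeline(nn.Module):
    """Illustrative stand-in: encode on one device, train the UNet on another."""

    def __init__(self, encoder, unet, training_device, encoding_device=None):
        super().__init__()
        # Backward compatible: if encoding_device is not specified,
        # it defaults to the training device (as noted in this PR).
        encoding_device = encoding_device or training_device
        self.encoder = encoder.to(encoding_device)
        self.unet = unet.to(training_device)
        self.encoding_device = torch.device(encoding_device)
        self.training_device = torch.device(training_device)

    def forward(self, images):
        # Encode on the encoding device (e.g. cuda:0)...
        latents = self.encoder(images.to(self.encoding_device))
        # ...then hand off to the training device (e.g. cuda:1).
        # non_blocking=True lets the copy overlap with compute when the
        # source is in pinned memory; it is a no-op on CPU.
        latents = latents.to(self.training_device, non_blocking=True)
        return self.unet(latents)

# Usage with tiny dummy modules (CPU fallback):
enc = nn.Conv2d(3, 8, 3, padding=1)
unet = nn.Conv2d(8, 3, 3, padding=1)
pipe = PreencodingPipeline(enc, unet, training_device="cpu")
out = pipe(torch.randn(2, 3, 16, 16))
```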

Testing

  • Created a dummy UNet and encoder to verify tensor device movement.
  • Ran test/test_preencoding.py and confirmed correct GPU allocation.
  • No performance degradation or errors encountered.
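A minimal sketch of the kind of check `test/test_preencoding.py` performs: run a dummy encoder on one device, move the result to another device, and assert each tensor landed where expected. The module shapes here are made up for illustration, and the snippet falls back to a single CPU device when two GPUs are not available.

```python
import torch
import torch.nn as nn

# Pick two CUDA devices when available, otherwise run everything on CPU.
if torch.cuda.device_count() >= 2:
    encoding_device, training_device = "cuda:0", "cuda:1"
else:
    encoding_device = training_device = "cpu"

# Dummy encoder and UNet stand-ins, placed on their respective devices.
encoder = nn.Linear(16, 8).to(encoding_device)
dummy_unet = nn.Linear(8, 8).to(training_device)

images = torch.randn(4, 16).to(encoding_device)
encoded = encoder(images)
assert encoded.device == torch.device(encoding_device)

# Non-blocking transfer to the training device before UNet processing.
moved = encoded.to(training_device, non_blocking=True)
out = dummy_unet(moved)
assert out.device == torch.device(training_device)
```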

Notes

  • This implementation should be backward compatible (if encoding_device is not specified, it defaults to the training device).
  • This feature was requested by users in the LAION Discord.

Checklist

  • Code follows the project's style guidelines.
  • Tested the feature with sample inputs.
  • Updated the documentation if necessary.

@Pebin-Joseph
Author

Could you reply to this? I need to know the status of this PR.
