Device Management in Multi-GPU systems #130

Conversation

adenzler-nvidia (Collaborator)

WIP - DO NOT MERGE! Depends on #50 to work.

Closes #108

This PR only takes care of allocating all data and running the kernels on a single device; it does not handle multi-GPU setups. The strategy is to specify the target device in the put_model call and then use that device for all subsequent CUDA operations.
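The strategy above can be sketched as follows. This is a minimal, hypothetical illustration of the pattern (record the device once when the model is put on the GPU, then route every later operation through that stored device) and not the actual mujoco_warp API; the names put_model, Model, and run_kernel are assumptions for illustration only.

```python
# Hypothetical sketch of the device-pinning strategy: the caller names a
# target device in put_model, and every subsequent operation uses that
# stored device instead of a global default.
from dataclasses import dataclass, field


@dataclass
class Model:
    device: str                      # device chosen at put_model time
    buffers: dict = field(default_factory=dict)


def put_model(host_model: dict, device: str = "cuda:0") -> Model:
    """Allocate all model data on a single, explicitly chosen device."""
    m = Model(device=device)
    for name, data in host_model.items():
        # In real code this would be a device allocation plus a host-to-
        # device copy; here we just tag each buffer with its device.
        m.buffers[name] = {"data": list(data), "device": device}
    return m


def run_kernel(m: Model, buffer_name: str) -> str:
    """Every kernel launch targets the device recorded on the model."""
    assert m.buffers[buffer_name]["device"] == m.device
    return f"launched kernel on {m.device} using {buffer_name}"
```

For example, `put_model(host, device="cuda:1")` pins all buffers and all later launches to the second GPU, so no per-call device argument is needed downstream.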

@adenzler-nvidia adenzler-nvidia changed the title Dev/adenzler/device management Device Management in Multi-GPU systems Apr 7, 2025
adenzler-nvidia (Collaborator, Author)

Will take this on again this week. It might come in a different PR; let's see.

erikfrey (Collaborator)

Agreed this seems dependent on #50, which is dependent on #169 :-)

Working to zip this up!

adenzler-nvidia (Collaborator, Author)

Closing in favor of #182.
