Labels: enhancement (New feature or request), help wanted (Extra attention is needed)
Feature
Desired Behavior / Functionality
Currently, one can only enable or disable CUDA, and only globally, via the torchquad.set_up_backend function. First, this means that even on multi-GPU machines only the first device, "cuda:0", can ever be used. Second, it means that using torchquad can break existing code it is integrated into, because set_up_backend globally changes how torch Tensors are initialized. Instead, I propose adding device as an optional argument to the integrate function, as in the sketch below.
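A minimal sketch of what the proposed call could look like, assuming a MonteCarlo integrator and a simple 2D integrand; the device keyword shown here is hypothetical and is not part of the current torchquad API, whereas set_up_backend is the existing global configuration this issue wants to avoid relying on:

```python
import torch
from torchquad import MonteCarlo, set_up_backend

# Today: CUDA use is configured globally, which also changes how new torch
# tensors are initialized everywhere else in the application.
set_up_backend("torch")

def f(x):
    # x has shape (N, dim); integrand over the unit square
    return torch.sin(x[:, 0]) * torch.cos(x[:, 1])

mc = MonteCarlo()

# Proposed: choose the device per call instead of globally, so e.g. a second
# GPU ("cuda:1") could be used without touching global torch defaults.
result = mc.integrate(
    f,
    dim=2,
    N=100_000,
    integration_domain=[[0.0, 1.0], [0.0, 1.0]],
    device="cuda:1",  # hypothetical keyword argument proposed in this issue
)
```

With a per-call argument like this, two integrations in the same process could target different GPUs, and importing torchquad would not need to alter global torch tensor defaults.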
What Needs to Be Done
Unfortunately, I am not familiar enough with the library's code to make informed comments on how this can be implemented. I suspect that it's actually a fairly difficult request.