Replies: 1 comment 1 reply
That's a good catch, it probably should be using the device id for the torch device. The Sapien device object is only for the rendering backend. That said, people have gotten multi-GPU RL working with the code here using torchrun (through PufferLib; they have an example), though I'm not sure what setup you are using. The physx prefix is added because we may eventually support more than just the PhysX physics simulation backend. We had considered supporting MJWarp, although the integration effort for that has been put on pause for now.
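For concreteness, here's a minimal sketch of what the suggested fix might look like, assuming the parsing code receives a `sim_backend` string and a `sim_device_id` int. The helper name and signature are illustrative, not the actual ManiSkill API:

```python
import torch
import sapien

def parse_sim_device(sim_backend: str, sim_device_id: int = 0):
    """Illustrative helper (not the actual ManiSkill API): build the
    torch device and the sapien (render) device from one config."""
    if sim_backend == "cpu":
        return torch.device("cpu"), sapien.Device("cpu")
    # Thread the device id into both objects, instead of a bare
    # torch.device("cuda"), so torch and sapien agree on which GPU to use.
    return (
        torch.device(f"cuda:{sim_device_id}"),
        sapien.Device(f"cuda:{sim_device_id}"),
    )
```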
Link to code: backend
I've been debugging multi-GPU code and noticed some potentially redundant design in the device parsing process. Specifically looking at the `sim_backend` handling (`render_backend` has the same issue):

**Potential Bug in Device Creation**

Notice: `torch.device("cuda")` doesn't use the `sim_device_id`, while `sapien.Device` does. This seems inconsistent; a sketch of the pattern as I read it is below.
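Roughly the pattern in question (reconstructed for illustration, not the verbatim source):

```python
import torch
import sapien

sim_device_id = 1  # illustrative value

# The torch device ignores the configured device id...
torch_device = torch.device("cuda")
# ...while the sapien device consumes it, so on a multi-GPU machine
# the two objects can end up pointing at different GPUs.
sapien_device = sapien.Device(f"cuda:{sim_device_id}")
```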
**Redundant Mapping**

Why the `physx_` prefix? The mapping adds prefixes, but the final devices still use base names (cpu/cuda). What's the purpose of this extra layer? Is this intentional design? A sketch of how I read the mapping follows.
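Assuming the mapping looks roughly like this (reconstructed from how I read the linked backend code, not the verbatim source):

```python
# User-facing backend strings get a physx_ prefix, but the prefix is
# effectively stripped again when the actual devices are created.
backend_map = {
    "cpu": "physx_cpu",
    "cuda": "physx_cuda",
    "gpu": "physx_cuda",
}

sim_backend = backend_map["cuda"]                 # -> "physx_cuda"
device_name = sim_backend.removeprefix("physx_")  # -> back to "cuda"
```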
If this complexity isn't necessary, I'd be happy to submit a PR to simplify the code and fix the potential device_id inconsistency.

Looking forward to understanding the design rationale!