Hello everyone!
For context: I've captured a series of depth and color images of the same object with a RealSense camera, moving the camera between nearby viewpoints while trying to keep it at a constant height. The colmap2nerf.py script from instant-ngp produces a very good "transforms.json" that, when fed back into instant-ngp, reconstructs a very nice 3D representation from those pictures.
Now I'm trying to use KinfuTracker to register the depth images and provide the camera poses, so I can build the "transforms.json" for instant-ngp.
So far, though, I've had no success.
Does anyone have any hints? I understand that the Y and Z axes point in opposite directions between Kinfu and instant-ngp, but there may be further details I've missed. I've provided the camera intrinsics (fl_x, fl_y, cx, cy), a rough estimate of the grid length and grid center, and the depth images. Am I on the right track?
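For what it's worth, here is a minimal sketch of the axis-flip I'm describing, in the style of colmap2nerf.py. It assumes Kinfu returns camera-to-world 4x4 poses in an OpenCV-style frame (x right, y down, z forward), while instant-ngp expects the NeRF convention (x right, y up, z back); the function and field names below are just for illustration, not from either library:

```python
import json
import numpy as np

def kinfu_to_ngp_pose(c2w):
    """Convert a 4x4 camera-to-world pose from an OpenCV-style frame
    (x right, y down, z forward) to the instant-ngp/NeRF convention
    (x right, y up, z back) by negating the camera's Y and Z axes."""
    c2w = np.asarray(c2w, dtype=np.float64).copy()
    c2w[:3, 1] *= -1.0  # flip the camera Y axis (second rotation column)
    c2w[:3, 2] *= -1.0  # flip the camera Z axis (third rotation column)
    return c2w

def make_transforms(poses, image_paths, fl_x, fl_y, cx, cy, w, h):
    """Assemble a transforms.json-like dict from converted poses
    and the camera intrinsics (focal lengths and principal point)."""
    frames = [
        {"file_path": path, "transform_matrix": kinfu_to_ngp_pose(pose).tolist()}
        for path, pose in zip(image_paths, poses)
    ]
    return {
        "fl_x": fl_x, "fl_y": fl_y, "cx": cx, "cy": cy,
        "w": w, "h": h, "frames": frames,
    }

# Example: a single identity pose, written out as JSON.
transforms = make_transforms(
    [np.eye(4)], ["images/0001.png"],
    fl_x=600.0, fl_y=600.0, cx=320.0, cy=240.0, w=640, h=480,
)
print(json.dumps(transforms, indent=2))
```

Note this only handles the per-camera axis convention; instant-ngp also recenters and rescales the scene (colmap2nerf.py does this with its `scale` and `offset` logic), which may be a separate source of mismatch.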
Thanks a lot!