
Post-Hoc Depth Computation #481

Open
@suraj-nair-1

Description

Preliminary Checks

  • This issue is not a duplicate. Before opening a new issue, please search existing issues.
  • This issue is not a question, bug report, or anything other than a feature request directly related to this project.

Proposal

Hello,

We have a ZED 2 camera that will be moving around with a robot and collecting data. The robot's onboard computer has no GPU/CUDA support, so we cannot run the ZED SDK online during data collection. However, we would like to capture the raw RGB stereo frames, then feed them through the ZED SDK later (on a different, GPU-enabled machine) to compute depth post hoc.
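For context on the capture side: the ZED 2 also enumerates as a standard UVC device, so a GPU-less machine can grab the raw feed (e.g. with OpenCV or V4L2) as a single side-by-side image containing both sensors. A minimal sketch of unpacking such a frame, assuming the HD720 side-by-side layout (2560x720) and using a synthetic frame in place of a real capture loop:

```python
import numpy as np

def split_side_by_side(frame: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Split one raw side-by-side stereo frame into (left, right) halves.

    The camera's UVC stream packs both sensors horizontally, so a
    2560x720 frame yields two 1280x720 images. Width is assumed even.
    """
    half = frame.shape[1] // 2
    return frame[:, :half], frame[:, half:]

# Synthetic HD720 side-by-side frame; on the robot, frames would come
# from a UVC capture loop instead.
sbs = np.zeros((720, 2560, 3), dtype=np.uint8)
left, right = split_side_by_side(sbs)
print(left.shape, right.shape)  # (720, 1280, 3) (720, 1280, 3)
```

The saved left/right pairs (plus the factory calibration) would then be the input to whatever offline depth path the SDK could offer.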

The current ZED SDK API (at least for Python) does not appear to support this: depth/point clouds seem to be computable only live from the camera. Is there functionality to pass in previously captured RGB images and camera intrinsics and get depth back? I understand that off-the-shelf stereo models could be used for this, but I would prefer the ZED SDK's depth models, as they are likely the best tuned for this hardware.
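To make the request concrete: given rectified pairs and calibration, the only extra ingredient such an API would need is the usual stereo relation Z = f·B/d (focal length in pixels, baseline in meters, disparity in pixels). A toy illustration with made-up numbers (only the 12 cm ZED 2 baseline is real; the focal length and disparity are placeholders, not calibration values):

```python
def depth_from_disparity(f_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth of a point from its stereo disparity: Z = f * B / d."""
    return f_px * baseline_m / disparity_px

f_px = 700.0        # hypothetical focal length in pixels
baseline_m = 0.12   # ZED 2 stereo baseline (12 cm)
disparity_px = 42.0 # placeholder disparity for one pixel
print(depth_from_disparity(f_px, baseline_m, disparity_px))  # 2.0
```

The hard part (and the reason for this request) is producing the disparity with the SDK's tuned models rather than a generic matcher.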

Perhaps this functionality exists and I missed it? If it could be added, that would be much appreciated. Alternatively, any pointers on how to modify the ZED SDK myself to support this would also be very helpful.

Thank you!

-- Suraj Nair

Use-Case

No response

Anything else?

No response
