Hi, I was hoping someone could help me understand the physical units of the predictions. The paper says the predictions are in the reference frame of the first camera, but what are the units of the depth map, even if they are in a normalized space? A follow-on question: can I convert from this normalized space into real-world space using a camera calibration file that has the actual focal length and pixel pitch of the first camera?
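For what it's worth, here is a minimal sketch of the conversion I have in mind, assuming a standard pinhole model. The function name `unproject_depth` and the `scale` parameter are hypothetical, not from this repo: the calibrated intrinsics (focal length and principal point in pixels) fix the ray directions, but a single global scale factor mapping the model's normalized units to metres still has to come from some external cue (a known baseline, object size, etc.), since intrinsics alone can't resolve metric scale.

```python
import numpy as np

def unproject_depth(depth, fx, fy, cx, cy, scale=1.0):
    """Back-project a depth map into a 3D point cloud in the camera frame.

    depth          : (H, W) array of predicted depths (up-to-scale units).
    fx, fy, cx, cy : calibrated pinhole intrinsics, in pixels
                     (focal length in mm / pixel pitch in mm gives pixels).
    scale          : hypothetical global factor mapping normalized units
                     to metres; must be supplied from an external cue.
    """
    h, w = depth.shape
    # Pixel coordinate grids: u runs along columns, v along rows.
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth * scale
    x = (u - cx) / fx * z
    y = (v - cy) / fy * z
    return np.stack([x, y, z], axis=-1)  # (H, W, 3) points in camera frame
```

With `scale=1.0` this just re-expresses the normalized depths as a point cloud with correct ray geometry; the shape of the cloud is right, only its overall size is undetermined.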
This project is pretty much the coolest thing I've ever seen in computer vision. My jaw dropped seeing this presented at CVPR last year.