Hello! Mitsuba 3 is a very interesting project; thanks for all of the work and the excellent documentation.

I'm currently building a prototype in Mitsuba 3 that optimizes the shape of an object using multiple input images captured from several real cameras with known approximate camera parameters, but I have some questions about how to optimize the camera parameters themselves. It appears that the scene parameters of the PerspectiveCamera (specifically the to_world transform and the fov) are not differentiable, and that one must instead apply an inverse transform to the object in the scene, as done in the object pose estimation demo. Is this indeed the suggested approach? If there is a more direct way to optimize the parameters of multiple camera views, I'd very much appreciate any suggestions.

I'd also like to experiment with optimizing a parameterization of camera lens distortion, which I imagine will require either defining offsets to the ray direction of each pixel, or applying a differentiable distortion to the target image data itself. If you can recommend any relevant examples, or suggest an approach to defining lens distortion in a way Mitsuba 3 can differentiate, I'd appreciate the tips.
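For concreteness, here is a minimal sketch of the inverse-transform workaround mentioned above, modeled on the object pose estimation tutorial. The scene file, the parameter key 'object.vertex_positions', and the reference image are placeholders, not names from this discussion:

```python
import drjit as dr
import mitsuba as mi

mi.set_variant('cuda_ad_rgb')  # any AD-enabled variant works

scene = mi.load_file('scene.xml')            # placeholder scene; it should use a
                                             # reparameterized integrator such as
                                             # 'prb_reparam' so that moving
                                             # silhouettes produce correct gradients
params = mi.traverse(scene)
ref_img = mi.TensorXf(mi.Bitmap('ref.exr'))  # placeholder reference image

key = 'object.vertex_positions'              # placeholder parameter key
# Keep the untouched vertices so each iteration transforms the original
# geometry instead of accumulating transforms across iterations.
initial_positions = dr.unravel(mi.Point3f, params[key])

opt = mi.ad.Adam(lr=0.025)
opt['trans'] = mi.Point3f(0, 0, 0)           # object translation
opt['angle'] = mi.Float(0)                   # rotation about the y axis

def apply_transform(params, opt):
    # Moving the object by the inverse of the desired camera motion is
    # equivalent to moving the camera itself (for a single view).
    trafo = mi.Transform4f.translate(opt['trans']) \
                          .rotate([0, 1, 0], opt['angle'] * 100.0)
    params[key] = dr.ravel(trafo @ initial_positions)
    params.update()

for it in range(100):
    apply_transform(params, opt)
    img = mi.render(scene, params, spp=16, seed=it)
    loss = dr.mean(dr.sqr(img - ref_img))    # simple L2 image loss
    dr.backward(loss)
    opt.step()
```

Note that with multiple views, one inverse transform per camera no longer collapses into a single object transform, which is part of why differentiating the camera parameters directly would be preferable.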
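On the lens-distortion side, here is one possible sketch of the second idea above: warping the reference image with a differentiable distortion model instead of bending the camera rays. The radial (Brown-Conrady style) model, the coefficients k1/k2, and the helper names are illustrative assumptions, not part of Mitsuba's API:

```python
import drjit as dr
import mitsuba as mi

mi.set_variant('cuda_ad_rgb')

def distort(uv, k1, k2):
    # Radial distortion of normalized image coordinates, centered at
    # (0.5, 0.5); differentiable with respect to k1 and k2.
    xy = uv - mi.Point2f(0.5, 0.5)
    r2 = dr.squared_norm(xy)
    return xy * (1.0 + k1 * r2 + k2 * r2 * r2) + mi.Point2f(0.5, 0.5)

def bilinear_sample(img, uv):
    # Differentiable bilinear lookup into an (H, W, C) tensor; the
    # gradient flows through the fractional interpolation weights.
    h, w, c = img.shape
    x = dr.maximum(dr.minimum(uv.x * w - 0.5, w - 1.0), 0.0)
    y = dr.maximum(dr.minimum(uv.y * h - 0.5, h - 1.0), 0.0)
    x0, y0 = mi.Int(dr.floor(x)), mi.Int(dr.floor(y))
    x1, y1 = dr.minimum(x0 + 1, w - 1), dr.minimum(y0 + 1, h - 1)
    fx, fy = x - mi.Float(x0), y - mi.Float(y0)
    flat = img.array
    fetch = lambda xi, yi, ch: dr.gather(mi.Float, flat, (yi * w + xi) * c + ch)
    return [dr.lerp(dr.lerp(fetch(x0, y0, ch), fetch(x1, y0, ch), fx),
                    dr.lerp(fetch(x0, y1, ch), fetch(x1, y1, ch), fx), fy)
            for ch in range(c)]  # one Float array per channel

# Distortion coefficients as differentiable leaf variables.
k1, k2 = mi.Float(0.0), mi.Float(0.0)
dr.enable_grad(k1, k2)

ref = mi.TensorXf(mi.Bitmap('ref.exr'))  # placeholder reference image
h, w, _ = ref.shape
px, py = dr.meshgrid((dr.arange(mi.Float, w) + 0.5) / w,
                     (dr.arange(mi.Float, h) + 0.5) / h)
warped = bilinear_sample(ref, distort(mi.Point2f(px, py), k1, k2))
# 'warped' can now be compared against the rendered image in the loss;
# dr.backward on that loss produces gradients for k1 and k2.
```

The alternative of offsetting per-pixel ray directions would amount to a custom sensor plugin; warping the targets instead keeps the rendering path itself unchanged.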
Replies:
Hi @sjobeek,

Indeed, optimizing the camera parameters isn't yet supported in Mitsuba 3. This is on our roadmap, and hopefully we will get to fix this soon. Basically, a few changes to the `prbreparam` integrator are needed for the gradients to flow properly through the reparameterization to the camera's parameters.

There was a PR for support of …
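To make the state described in the first reply concrete: the sensor parameters are already exposed through the traversal mechanism, so they can be inspected and overwritten; what is missing is gradient propagation back to them during differentiable rendering. A small sketch follows (the scene path is a placeholder, and the exact parameter keys depend on the scene description):

```python
import mitsuba as mi

mi.set_variant('cuda_ad_rgb')

scene = mi.load_file('scene.xml')  # placeholder path
params = mi.traverse(scene)
print(params)                      # typically lists entries such as
                                   # 'sensor.to_world' alongside shape
                                   # and BSDF parameters

# The sensor pose can be overwritten, e.g. to set an initial estimate,
# but gradients do not yet flow back into it through the
# reparameterized integrator.
params['sensor.to_world'] = mi.Transform4f.look_at(
    origin=[0, 0, 4], target=[0, 0, 0], up=[0, 1, 0])
params.update()
```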