Optimizing a PyTorch Model Within a Customized BSDF Class #1466
Unanswered · colinzhenli asked this question in Q&A · 1 comment
Hi @colinzhenli, there's a community-created tutorial on implementing neural representations of spatially-varying BRDF parameters in Mitsuba 3 which I think aligns with what you're after. Just be aware that it was created before the Mitsuba 3.6 release, so parts may be out of date and some of the code may have to be ported (e.g. …).
Description
I would like to write a customized BSDF using a PyTorch model, such as employing an MLP to map incoming and outgoing directions to BRDF values. Furthermore, I want to use the inverse rendering pipeline to optimize the PyTorch model.
Following the "Inverse Rendering Tutorial," I implemented a customized BSDF class as shown below:
Code
Customized BSDF Class
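A minimal sketch of what such a custom BSDF can look like (not the original code from the post): it registers `eta` and `roughness` via `traverse()` and stands in a simple cosine-weighted lobe for the learned BRDF; the plugin name and the analytic lobe are placeholder assumptions.

```python
import drjit as dr
import mitsuba as mi

mi.set_variant('cuda_ad_rgb')  # or 'llvm_ad_rgb'

class MyRoughBSDF(mi.BSDF):
    def __init__(self, props):
        mi.BSDF.__init__(self, props)
        self.eta = mi.Float(props.get('eta', 1.5))
        self.roughness = mi.Float(props.get('roughness', 0.3))
        reflection_flags = mi.BSDFFlags.DiffuseReflection | mi.BSDFFlags.FrontSide
        self.m_components = [reflection_flags]
        self.m_flags = reflection_flags

    def sample(self, ctx, si, sample1, sample2, active=True):
        cos_theta_i = mi.Frame3f.cos_theta(si.wi)
        active &= cos_theta_i > 0

        # Cosine-weighted sampling of the upper hemisphere
        bs = mi.BSDFSample3f()
        bs.wo = mi.warp.square_to_cosine_hemisphere(sample2)
        bs.pdf = mi.warp.square_to_cosine_hemisphere_pdf(bs.wo)
        bs.sampled_component = mi.UInt32(0)
        bs.sampled_type = mi.UInt32(+mi.BSDFFlags.DiffuseReflection)
        bs.eta = 1.0

        value = self.eval(ctx, si, bs.wo, active) / dr.maximum(bs.pdf, 1e-8)
        return bs, dr.select(active & (bs.pdf > 0), value, 0.0)

    def eval(self, ctx, si, wo, active=True):
        cos_theta_i = mi.Frame3f.cos_theta(si.wi)
        cos_theta_o = mi.Frame3f.cos_theta(wo)
        # Placeholder analytic lobe standing in for the learned BRDF value
        value = mi.Color3f(self.roughness) * cos_theta_o / dr.pi
        return dr.select((cos_theta_i > 0) & (cos_theta_o > 0), value, 0.0)

    def pdf(self, ctx, si, wo, active=True):
        cos_theta_i = mi.Frame3f.cos_theta(si.wi)
        cos_theta_o = mi.Frame3f.cos_theta(wo)
        pdf = mi.warp.square_to_cosine_hemisphere_pdf(wo)
        return dr.select((cos_theta_i > 0) & (cos_theta_o > 0), pdf, 0.0)

    def traverse(self, callback):
        # Expose the differentiable parameters so they show up in mi.traverse(scene)
        callback.put_parameter('eta', self.eta, mi.ParamFlags.Differentiable)
        callback.put_parameter('roughness', self.roughness, mi.ParamFlags.Differentiable)

    def to_string(self):
        return 'MyRoughBSDF[]'

mi.register_bsdf('my_rough_bsdf', lambda props: MyRoughBSDF(props))
```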
Optimization Code for Mitsuba Framework
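And a minimal sketch of the Mitsuba-side optimization loop in the spirit of the inverse rendering tutorial; the scene file and the parameter keys are hypothetical and should be looked up via `mi.traverse(scene)`:

```python
import drjit as dr
import mitsuba as mi

scene = mi.load_file('scene.xml')        # hypothetical scene that uses the custom BSDF
image_ref = mi.render(scene, spp=128)    # reference image

params = mi.traverse(scene)
key_rough = 'my-bsdf.roughness'          # hypothetical keys; print(params) to find the real ones
key_eta = 'my-bsdf.eta'

opt = mi.ad.Adam(lr=0.02)
opt[key_rough] = params[key_rough]
opt[key_eta] = params[key_eta]
params.update(opt)

for it in range(100):
    image = mi.render(scene, params, spp=16)
    loss = dr.mean((image - image_ref) ** 2)   # simple L2 image loss
    dr.backward(loss)
    opt.step()
    params.update(opt)
```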
After successfully optimizing the `roughness` and `eta` parameters, I followed the guide on "Mitsuba and PyTorch Compatibility" to implement a PyTorch-based model within the customized BSDF class:
PyTorch Model Integration
Updated BSDF with PyTorch Model
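A rough sketch of one way a PyTorch MLP could be called from inside `eval()` with the interoperability decorator from that guide; the network, the `eval_mlp` helper, and the tensor/array conversions are assumptions, and the decorator shown (`dr.wrap_ad`) is the pre-3.6 name, so it may differ on newer Dr.Jit versions.

```python
import torch
import drjit as dr
import mitsuba as mi

# Hypothetical MLP mapping the (wi, wo) direction pair to a scalar reflectance value
mlp = torch.nn.Sequential(
    torch.nn.Linear(6, 64), torch.nn.ReLU(),
    torch.nn.Linear(64, 1), torch.nn.Softplus(),
).cuda()

# Differentiable bridge between Dr.Jit and PyTorch (pre-3.6 name; check your version)
@dr.wrap_ad(source='drjit', target='torch')
def eval_mlp(wi_x, wi_y, wi_z, wo_x, wo_y, wo_z):
    x = torch.stack([wi_x, wi_y, wi_z, wo_x, wo_y, wo_z], dim=-1)  # (N, 6)
    return mlp(x).squeeze(-1)                                      # (N,)

class NeuralBSDF(mi.BSDF):
    # __init__, sample, pdf, traverse as in the analytic sketch above
    ...

    def eval(self, ctx, si, wo, active=True):
        cos_theta_i = mi.Frame3f.cos_theta(si.wi)
        cos_theta_o = mi.Frame3f.cos_theta(wo)
        # Query the network once per shading point; inputs are flat Float arrays
        raw = eval_mlp(si.wi.x, si.wi.y, si.wi.z, wo.x, wo.y, wo.z)
        if isinstance(raw, mi.TensorXf):     # the wrapper may hand back a tensor
            raw = mi.Float(raw.array)
        value = mi.Color3f(raw) * cos_theta_o / dr.pi
        return dr.select((cos_theta_i > 0) & (cos_theta_o > 0), value, 0.0)
```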
Issue
The forward pass works as expected, but I am unsure how to optimize the PyTorch model parameters (e.g., MLP weights). Specifically, I don't know:

- How to register the PyTorch model's parameters or its outputs (e.g., `mapped_eta`) for optimization, either within the Mitsuba optimization framework or by using a decorator for the PyTorch framework.
- How to adapt the `traverse` method for registering parameters with PyTorch's optimization pipeline.

System Information
Any guidance on properly integrating and optimizing the PyTorch model parameters within this framework would be greatly appreciated. Thank you!