-
Hi @miangoleh, this should happen by default, as long as you don't modify the parameters or optimizer between iterations. Here's an example in one of our tutorials: https://mitsuba.readthedocs.io/en/stable/src/inverse_rendering/radiance_field_reconstruction.html#Optimization
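To make the intended pattern concrete, here is a minimal pure-Python sketch of gradient accumulation (it does not use Mitsuba or Dr.Jit; `grad_fn` and the toy loss are hypothetical stand-ins): gradients from several samples are summed into one buffer, and a single optimizer step is applied per mini-batch.

```python
def grad_fn(param, sample):
    # Hypothetical per-sample gradient of the toy loss 0.5 * (param - sample)^2
    return param - sample

def accumulate_and_step(param, samples, lr=0.1):
    grad = 0.0
    for s in samples:          # accumulate gradients instead of stepping per sample
        grad += grad_fn(param, s)
    grad /= len(samples)       # average over the mini-batch
    return param - lr * grad   # single gradient-descent step for the whole batch

p = 0.0
for _ in range(200):
    p = accumulate_and_step(p, [1.0, 2.0, 3.0])
# p converges toward the mean of the samples (2.0)
```

The key point is that the parameter update happens once per batch of samples, which is the behavior the question below is after; stepping (or zeroing gradients) inside the inner loop would discard the accumulated contributions.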
-
I am trying to call opt.step() only after accumulating gradients from a few different samples. However, probing the optimizer suggests that gradients are not accumulated and only the first computation is kept. Is there a way to achieve an effect similar to using batches/mini-batches when training neural networks?