Replies: 1 comment
Hi @kalosu, thanks for opening this! There has been some discussion on this in the past, though I can't locate the reference at the moment. I believe @manuelFragata was looking into an improved DLS implementation, so perhaps he can comment here.

I absolutely agree that this would be a valuable addition. To confirm: I think you are suggesting the implementation of soft constraints (or penalty functions) via operands. This would allow us to incorporate bounds directly into the merit function, contributing to the error only when a bound is violated. This is a standard approach in optical design and, as you noted, it would keep the robust `lm` solver available even when variables are bounded.

Regarding the Jacobians: this is a great point. While Optiland already supports full optimization using the PyTorch backend (as seen here), your suggestion of using PyTorch specifically to feed the Jacobian into the SciPy interface is distinct. The only trick is the overhead and fragility of transferring data between the Torch (tensor) and NumPy (array) backends during the optimization loop. However, if the overhead is manageable, the stability and speed gains from exact gradients could be substantial. This is absolutely something we should investigate. Thanks again!
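To make the soft-constraint idea concrete, here is a minimal sketch of a penalty operand: zero residual while the variable is inside its bounds, and a weighted overshoot once a bound is violated. The `bound_penalty` helper and its signature are hypothetical illustrations, not Optiland's actual operand API.

```python
def bound_penalty(value, lower=None, upper=None, weight=1.0):
    """Soft-constraint residual: zero inside [lower, upper], grows outside.

    Hypothetical helper for illustration; Optiland's operand API may differ.
    The returned value would be appended to the least-squares residual
    vector, so it only contributes to the merit function when violated.
    """
    err = 0.0
    if lower is not None and value < lower:
        err = lower - value
    elif upper is not None and value > upper:
        err = value - upper
    return weight * err


# Example: penalize a surface thickness that falls below 0.5 mm.
print(bound_penalty(0.3, lower=0.5))   # ~0.2, bound violated
print(bound_penalty(1.0, lower=0.5))   # 0.0, bound satisfied
```

Because the penalty enters through the residual vector rather than through solver-level bound constraints, the unconstrained `lm` method remains usable.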
Hello there friends from Optiland,
I was using the `LeastSquares` optimizer and comparing the results to those obtained with commercial ray-tracing software packages. Those packages mostly use the DLS algorithm, which seems to give better results than trust-region methods, at least for the types of merit functions used for optical systems.

At the moment, `lm` is used by default through the least-squares interface, but if I need to set bounds on my variables, the underlying `trf` algorithm is used instead. In your opinion, would it be worth adding support for operands that can handle bounds on different parameters like surface thickness, radius of curvature, etc.? That way, `lm` could be used directly without switching to `trf`. Any thoughts about this?

Also, at the moment I think all Jacobians are approximated using the 2- or 3-point finite-difference schemes exposed by SciPy, but SciPy also supports providing the Jacobian directly. Do you think there would be any benefit to using the torch backend instead of the numpy one for this? I believe that for optimization only the numpy backend is currently supported due to the use of SciPy, but tensors could be detached and interpreted as numpy arrays once the Jacobians are computed. Any thoughts on this?
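The Torch-Jacobian idea can be sketched as follows: compute the residuals and their exact Jacobian with `torch.autograd`, detach to NumPy, and hand both to `scipy.optimize.least_squares` via its `jac` callable. The `residuals_torch` merit function below is a toy stand-in, not Optiland's ray-trace residuals.

```python
import numpy as np
import torch
from scipy.optimize import least_squares


def residuals_torch(x):
    # Toy 2-residual merit function standing in for ray-trace residuals.
    return torch.stack([x[0] ** 2 + x[1] - 1.0, x[0] - x[1] ** 2])


def fun(x):
    # SciPy passes a NumPy array; evaluate in Torch, detach back to NumPy.
    t = torch.as_tensor(x, dtype=torch.float64)
    return residuals_torch(t).detach().numpy()


def jac(x):
    # Exact Jacobian via autodiff instead of SciPy's finite differences.
    t = torch.as_tensor(x, dtype=torch.float64)
    J = torch.autograd.functional.jacobian(residuals_torch, t)
    return J.detach().numpy()


result = least_squares(fun, x0=np.array([1.0, 1.0]), jac=jac, method="lm")
print(result.x, result.cost)
```

The tensor-to-array round trip on every iteration is exactly the overhead mentioned above; whether the exact gradients pay for it would need benchmarking on realistic systems.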