PHOEBE does implement gradient methods (differential corrections, conjugate gradient). While the simplex method and Powell's method indeed rely only on function evaluations, they too optimize efficiently (albeit more slowly than, say, DC). The main drawback of pure gradient methods is that they easily diverge or converge only to a local minimum. For interactive work that may be fine, but it causes problems for any sort of automated iteration. My experience has been that DC suffices for interactive work, but your mileage may vary. That said, there is no reason why other optimizers couldn't or shouldn't be implemented; if there is a specific one you would like to see supported, let us know or, better yet, consider a PR. :)
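The point about simplex and Powell relying only on function evaluations can be sketched with SciPy. This is illustrative only, not PHOEBE's fitting API: both derivative-free methods recover the parameters of a toy sinusoidal model from a least-squares cost, with no gradient information supplied.

```python
# Illustrative sketch (not PHOEBE code): derivative-free optimizers such as
# Nelder-Mead (simplex) and Powell need only cost-function evaluations.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 50)
# Synthetic "observations": amplitude 2.0, offset 0.5, small noise.
data = 2.0 * np.sin(2 * np.pi * t) + 0.5 + 0.01 * rng.normal(size=t.size)

def chi2(p):
    """Sum-of-squares cost for a two-parameter sinusoidal model."""
    amp, offset = p
    model = amp * np.sin(2 * np.pi * t) + offset
    return np.sum((data - model) ** 2)

# Neither method requires derivatives of chi2.
simplex = minimize(chi2, x0=[1.0, 0.0], method="Nelder-Mead")
powell = minimize(chi2, x0=[1.0, 0.0], method="Powell")
```

Both runs should converge to amplitude near 2.0 and offset near 0.5, illustrating that function-evaluation-only methods optimize perfectly well, just with more evaluations than a gradient-informed method would need.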
PHOEBE currently optimizes parameters with methods that search the parameter space directly. However, considering how well gradient-based methods work for estimating high-dimensional parameters in machine learning, I wonder why we're not using those methods here instead.
Aside from the computational expense of calculating gradients, are there any other reasons preventing the use of gradient-based methods? Alternatively, is there any consideration to adopt these methods in the future that I may not be aware of?
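On the expense of calculating gradients: when the model cannot be differentiated analytically (as with a numerical light-curve synthesizer), gradients come from finite differences, which cost roughly one extra model evaluation per parameter per gradient. A minimal sketch of that scaling, with a hypothetical stand-in cost function rather than a real PHOEBE model:

```python
# Hypothetical sketch: counting model evaluations needed for one
# forward-difference gradient of an N-parameter cost function.
import numpy as np

calls = 0

def model_cost(p):
    """Stand-in for an expensive model evaluation; counts its calls."""
    global calls
    calls += 1
    return np.sum((p - 1.0) ** 2)

def fd_gradient(f, p, eps=1e-6):
    """Forward-difference gradient: one base call plus one call per parameter."""
    g = np.zeros_like(p)
    f0 = f(p)
    for i in range(p.size):
        step = np.zeros_like(p)
        step[i] = eps
        g[i] = (f(p + step) - f0) / eps
    return g

p = np.zeros(10)
g = fd_gradient(model_cost, p)
print(calls)  # 11 evaluations for a single gradient of 10 parameters
```

For a model where one evaluation takes seconds to minutes, that per-iteration overhead is the computational expense in question; automatic differentiation sidesteps it in machine learning, but only when the whole model is written in a differentiable framework.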