Rewritten finite difference logging #283
base: master
Conversation
This new objective wraps the other objective, but uses the precalculated values to evaluate points close to the precalculated points. The intended usage is to quickly redo parts of an optimization that was interrupted by, for example, a time limit; a minimal sketch of this caching idea follows the commit summaries below.
… a least squares problem or generic. Updated the "general" solver in serial.
No longer useful for resetting the least squares optimizer, but that wasn't a good default due to the 1000s of residuals in the QS objective.
…licate work and because calling fd.fn() in the logger resulted in the execution freezing on cobra.
…python nomenclature.
… DOFs if the underlying optimizable got updated.
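To make the caching idea above concrete, here is a minimal sketch. The class name, constructor arguments, and tolerance are all hypothetical illustrations, not the PR's actual implementation:

```python
import numpy as np

class CachedObjective:
    """Wraps an objective and reuses precalculated values for points that
    lie within `tol` of a previously evaluated point, e.g. when restarting
    an optimization that was interrupted by a time limit."""

    def __init__(self, objective, known_points, known_values, tol=1e-12):
        self.objective = objective
        self.known_points = np.asarray(known_points)
        self.known_values = list(known_values)
        self.tol = tol

    def __call__(self, x):
        x = np.asarray(x)
        if len(self.known_points) > 0:
            # Reuse the stored value if x is close to a precalculated point.
            dists = np.linalg.norm(self.known_points - x, axis=1)
            i = np.argmin(dists)
            if dists[i] < self.tol:
                return self.known_values[i]
        # Otherwise fall back to a fresh (expensive) evaluation.
        return self.objective(x)
```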
Codecov Report: Base 91.52% // Head 91.28% // Decreases project coverage by -0.24%.
Additional details and impacted files:
@@ Coverage Diff @@
## master #283 +/- ##
==========================================
- Coverage 91.52% 91.28% -0.24%
==========================================
Files 61 61
Lines 9249 9328 +79
==========================================
+ Hits 8465 8515 +50
- Misses 784 813 +29
It feels suboptimal to expose finite_difference_jac_decorator and to have to call it explicitly in the solvers. I would think a simpler interface would be to move the verbose keyword to the __init__ method of FiniteDifference and MPIFiniteDifference.
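For illustration, the suggested interface might look like the following. This is only a sketch; the constructor arguments and the prob object are assumptions about the surrounding code, not the actual API:

```python
from scipy.optimize import least_squares

# Hypothetical interface: verbose is set once at construction, so the
# solver never has to wrap anything in finite_difference_jac_decorator.
fd = FiniteDifference(prob.residuals, abs_step=1e-7, verbose=True)
result = least_squares(prob.residuals, prob.x, jac=fd.jac)
```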
It also feels suboptimal that the logging functionality is spread between finite_difference_jac_decorator, FiniteDifference, and MPIFiniteDifference. For example, both FiniteDifference and MPIFiniteDifference have an init_log() function (it would be nice to eliminate this redundancy), whereas much of the actual printing to the log file is in finite_difference_jac_decorator. It would be ideal to encapsulate all the logging in one class.
Would it make sense for the Jacobian logging functionality to go in a new class, call it FiniteDifferenceLog for now? This way all the variables related to logging, like log_file, new_log_file, and log_header_written, would be stored by the FiniteDifferenceLog object. One option would be for FiniteDifference and MPIFiniteDifference to each have a FiniteDifferenceLog instance as an attribute. Or it could be the other way around: a FiniteDifferenceLog object could have an fd attribute which is either a FiniteDifference or MPIFiniteDifference instance. I kind of like this last approach.
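A rough sketch of that last arrangement. Everything here beyond the attribute names mentioned above (fd, log_file, new_log_file, log_header_written) is a placeholder invented for illustration:

```python
class FiniteDifferenceLog:
    """Owns all of the Jacobian-logging state, wrapping a FiniteDifference
    or MPIFiniteDifference instance through its fd attribute."""

    def __init__(self, fd, log_file="jac_log.txt", verbose=True):
        self.fd = fd  # a FiniteDifference or MPIFiniteDifference instance
        self.log_file = log_file
        self.new_log_file = True
        self.log_header_written = False
        self.verbose = verbose

    def init_log(self):
        # One shared implementation instead of a copy in each class.
        if self.verbose and not self.log_header_written:
            with open(self.log_file, "w" if self.new_log_file else "a") as f:
                f.write("# finite-difference Jacobian evaluations\n")
            self.log_header_written = True
            self.new_log_file = False

    def jac(self, x):
        # Delegate the actual finite differencing, then log the point.
        self.init_log()
        jacobian = self.fd.jac(x)
        if self.verbose:
            with open(self.log_file, "a") as f:
                f.write(f"x = {x}\n")
        return jacobian
```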
BTW "decorator" has a specific meaning in Python, and finite_difference_jac_decorator is not a decorator in that sense, so I would use a different name.
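For reference, a decorator in the Python sense is a callable applied with @ syntax that takes a function and returns a replacement for it, e.g.:

```python
import functools

def log_calls(func):
    """A decorator in the Python sense: takes a function and returns a
    wrapped replacement for it."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        print(f"calling {func.__name__}")
        return func(*args, **kwargs)
    return wrapper

@log_calls
def jac(x):
    return 2 * x
```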
It would be good to have test coverage of the various cases, e.g. the different options for verbose.
Is the idea of the flatten_out variable to handle non-least-squares problems? All the new if self.flatten_out statements seem suboptimal - it would be best to have a single "logic" instead of two parallel "logics". I feel like it would be more elegant to handle scalar objectives by having a grad() function in FiniteDifference and MPIFiniteDifference that returns the gradient instead of the Jacobian, so the non-least-squares algorithm would call grad instead of jac.
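Something along these lines, as a sketch only; it assumes the scalar objective's Jacobian comes back with shape (1, n):

```python
import numpy as np

class FiniteDifference:
    # ... existing finite-difference machinery (jac, logging, etc.) ...

    def grad(self, x=None):
        """Gradient of a scalar objective: the (1, n) Jacobian flattened
        into an n-vector, so scipy.optimize.minimize can consume it and
        jac() keeps a single code path with no flatten_out branches."""
        return np.ravel(self.jac(x))
```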
Rewrites part of MPIFiniteDifference and FiniteDifference to:
The current format of the Jacobian log records only the points at which the objective was evaluated to calculate the Jacobian, not the actual function values at those points. For more expensive objectives (such as those used for direct turbulence optimization), reevaluating the Jacobian is non-trivial, so the output is not as useful as it could be.
A concern I have is that this pull request currently introduces some "switch-case based programming", where the (MPI)FiniteDifference objects and the finite_difference_jac_decorator use if-clauses to adjust their outputs, partly based on the needs of the different solvers.
A different approach would be to embrace that MPIFiniteDifference and FiniteDifference are only meant for use with scipy.optimize.least_squares, and to create a derived class, FiniteDifference_minimize, that overrides the jac method to flatten the output for scipy.optimize.minimize. The logging could then again be done in the FiniteDifference objects themselves, since it would be specific to each solver and thus could print hard-coded solver info. Users of other solvers would then create derived FiniteDifference classes with different solver information. Input and ideas are appreciated.
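A sketch of that derived-class approach. The method name and signature here are assumptions for illustration, not a committed design:

```python
import numpy as np

class FiniteDifference_minimize(FiniteDifference):
    """Variant meant for scipy.optimize.minimize: reuses the least-squares
    finite-difference machinery, but returns a flat gradient, and its log
    output can be hard-coded for the scalar solver."""

    def jac(self, x=None):
        # Flatten the (1, n) Jacobian of the scalar objective into the
        # n-vector gradient that scipy.optimize.minimize expects.
        return np.ravel(super().jac(x))
```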