WIP: slimmed-down Levenberg-Marquardt for nonlinear least squares #500

Draft · wants to merge 9 commits into base: develop
Conversation

rileyjmurray
Contributor

pyGSTi's nonlinear least squares solver is a custom implementation of Levenberg-Marquardt with a dizzying array of options. The core algorithm is over 800 lines long and reaches 10 levels of indentation at its deepest point. The complexity of the existing implementation makes it all but impossible to extend.
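For context, the core idea a slimmed-down solver needs to keep is quite small. Below is a minimal sketch of one Levenberg-Marquardt step with identity damping; this is illustrative only (the function name and signature are mine, not pyGSTi's), and it omits all of the convergence criteria, damping schedules, and distributed-memory machinery the real code handles.

```python
import numpy as np

def lm_step(f, jac, x, mu):
    """One Levenberg-Marquardt step with identity damping.

    Illustrative sketch, not pyGSTi's implementation: f returns the
    residual vector at x, jac its Jacobian, and mu is the damping
    parameter.
    """
    r = f(x)
    J = jac(x)
    # Solve the damped normal equations (J^T J + mu*I) dx = -J^T r.
    A = J.T @ J + mu * np.eye(x.size)
    dx = np.linalg.solve(A, -J.T @ r)
    x_new = x + dx
    # Accept the step and shrink mu if the residual norm decreased;
    # otherwise reject the step and increase mu.
    if np.linalg.norm(f(x_new)) < np.linalg.norm(r):
        return x_new, mu / 3.0
    return x, mu * 2.0
```

Everything beyond this loop (alternative damping modes, damping bases, clipping, etc.) is what this PR strips out.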

This PR introduces slimmed-down infrastructure for nonlinear least squares. I started by copying customlm.py into a new file called simplerlm.py, then removed one feature at a time in a series of commits. "Removing" a feature meant making the default behavior for a given option the only behavior. I haven't tested these changes yet, but they should work.

This PR has two goals.

  1. Have pyGSTi rely on simplerlm.py unless an option is requested that's only supported in customlm.py (in which case we dispatch to the current code in customlm.py).
  2. Formally deprecate the contents of customlm.py. I don't think we need a timeline for removal, but I would like to start raising deprecation warnings.
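Both goals could be handled in one dispatch point. The sketch below is hypothetical (the function names, the option set, and the solver signatures are all illustrative, not pyGSTi's actual API): route to the legacy solver only when a legacy-only option is requested, and raise a `DeprecationWarning` when we do.

```python
import warnings

# Hypothetical set of options that only the legacy customlm.py code
# supports; the real set would be determined by the removal commits.
LEGACY_ONLY_OPTIONS = {"damping_mode", "damping_basis", "damping_clip"}

def dispatch_leastsq(simpler_solver, legacy_solver, objective, x0, **options):
    """Prefer simplerlm; fall back to customlm only for legacy-only options."""
    if LEGACY_ONLY_OPTIONS & options.keys():
        # Goal 2: deprecation warning, no removal timeline implied.
        warnings.warn(
            "customlm.py is deprecated; it is being used only because a "
            "legacy-only option was requested.",
            DeprecationWarning,
        )
        return legacy_solver(objective, x0, **options)
    return simpler_solver(objective, x0, **options)
```

The point of the sketch is that the deprecation warning fires exactly on the code path that still needs customlm.py, so routine use never sees it.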

Notes

I tried to use ChatGPT to help split the monolithic custom_leastsq function in customlm.py into simpler functions. This turned out not to work, for reasons I'll discuss off-GitHub if people are interested.

For the curious, I'm doing this because I want to extend our nonlinear least squares solver in two ways. First, I want it to support pytorch Tensors; right now we only support numpy arrays and whatever home-cooked thing we have for distributed-memory computations. Second, I want to experiment with optimization algorithms that rarely (if ever!) require evaluating the full Jacobian of circuit outcome probabilities.

…copy-paste of custom_leastsq to simplish_lstsq)
…ty" in the CustomLSOptimizer behavior. Remove damping_clip as well, since that was only used for damping_mode != "identity".
…tomLMOptimizer when damping_basis=="diagonal_basis". I believe that since I already restricted damping_mode=identity in the previous commit that damping_basis has no special meaning anyway.
…s in simplerlm.py instead of customlm.py, so we can mark the whole customlm.py file as deprecated.