Improve and tidy implementation of hadamard gradient (#6928)
**Context:** We intend to make some improvements to the Hadamard gradient to support variations of the differentiation method. To prepare for these extensions, I've done some minor reworking.

**Description of the Change:**

* `Rot` operations are now decomposed instead of being handled with bespoke logic. The resource count ends up essentially the same, and the code is substantially cleaner.
* `hadamard_grad` now works with any operation that has a generator, instead of relying on a manually managed list of simple operators. This sets us up to apply the Hadamard gradient to more complicated Hamiltonian time-evolution problems.
* The postprocessing function is extracted to global scope. This makes it clearer which variables are needed from the preprocessing step.
* The postprocessing is slightly reworked to have fewer branches and to no longer reorder everything at the end. By populating the `grad` structure in the correct shape to begin with, we avoid a potentially expensive and badly scaling step.
* The postprocessing now returns lists instead of tuples. Our return spec is supposed to be agnostic to lists versus tuples, so if lists made sense for creating the structure to begin with, we shouldn't have to go back and cast them. That's an extra step we don't need if we just allow the other components to expect a list. I do generally prefer tuples to lists, but in this case the list makes sense.

**Benefits:** More generic code that is more versatile and easier to maintain.

**Possible Drawbacks:**

**Related Shortcut Stories:** [sc-84784]

---------

Co-authored-by: Isaac De Vlugt <[email protected]>
Co-authored-by: Andrija Paurevic <[email protected]>
Co-authored-by: Yushao Chen (Jerry) <[email protected]>
Co-authored-by: David Wierichs <[email protected]>
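For reference, here is a minimal usage sketch of `qml.gradients.hadamard_grad` applied as a QNode transform. The device, wire count, and circuit below are illustrative assumptions and are not taken from this change; they only show the kind of circuit the reworked transform handles.

```python
# Hedged sketch: differentiate a small circuit with the Hadamard-test gradient.
# The device name, wire count, and circuit are illustrative assumptions.
import pennylane as qml
from pennylane import numpy as np

# One extra wire beyond the circuit's so an auxiliary wire is available
# for the Hadamard test.
dev = qml.device("default.qubit", wires=3)

@qml.qnode(dev)
def circuit(x):
    # Per this PR, Rot is handled via its decomposition into generator-carrying
    # rotations rather than with bespoke logic inside hadamard_grad.
    qml.Rot(x[0], x[1], x[2], wires=0)
    qml.CNOT(wires=[0, 1])
    return qml.expval(qml.PauliZ(0))

x = np.array([0.1, 0.2, 0.3], requires_grad=True)

# Apply the Hadamard gradient transform directly to the QNode.
grad = qml.gradients.hadamard_grad(circuit)(x)
print(grad)
```

With this change, any parametrized gate exposing a generator should be differentiable by the transform, rather than only the operators on the previously hard-coded list.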