Improve and tidy implementation of hadamard gradient (#6928)
**Context:**

We intend to make several improvements to the hadamard gradient to
support variations of the differentiation method. To prepare for these
extensions, I've done some minor reworking.

**Description of the Change:**

* `Rot` operations are now decomposed instead of handled with bespoke
logic. The resource count ends up essentially the same, and the code is
substantially cleaner.

* `hadamard_grad` now works with any operation that has a generator,
instead of relying on a manually managed list of simple operators. This
sets us up to apply the hadamard gradient to more complicated
Hamiltonian time-evolution problems (see the sketch after this list).

* The postprocessing function is extracted to global scope. This makes
it clearer which variables are needed from the preprocessing step.

* The postprocessing is slightly reworked to have fewer branches and to
avoid reordering everything at the end. By populating the `grad`
structure in the correct shape to begin with, we avoid a potentially
expensive, poorly scaling step.

* The postprocessing now returns lists instead of tuples. Our return
spec is supposed to be agnostic to lists versus tuples, so if lists made
sense for building the structure in the first place, we shouldn't have
to go back and cast them to tuples. That extra step disappears once the
other components accept a list as well. I generally prefer tuples to
lists, but in this case the list makes sense.

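As a quick illustration of the generator-based path, here is a minimal sketch (not taken from the PR itself; the circuit, device, and `diff_method="hadamard"` usage are an assumed example) of differentiating a gate through its generator:

```python
import pennylane as qml
from pennylane import numpy as np

# Leave a spare wire so the hadamard gradient has an auxiliary wire available.
dev = qml.device("default.qubit", wires=3)

@qml.qnode(dev, diff_method="hadamard")
def circuit(x):
    qml.RX(x, wires=0)
    qml.IsingXX(x, wires=[0, 1])  # assumed to be handled via its generator, no bespoke rule
    return qml.expval(qml.PauliZ(0))

x = np.array(0.4, requires_grad=True)
print(qml.grad(circuit)(x))
```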
**Benefits:**

More generic, versatile code that is easier to maintain.

**Possible Drawbacks:**

**Related Shortcut Stories:**
[sc-84784]

---------

Co-authored-by: Isaac De Vlugt <[email protected]>
Co-authored-by: Andrija Paurevic <[email protected]>
Co-authored-by: Yushao Chen (Jerry) <[email protected]>
Co-authored-by: David Wierichs <[email protected]>
5 people authored Feb 26, 2025
1 parent 09ea76b commit 64e226f
Showing 17 changed files with 279 additions and 361 deletions.
7 changes: 7 additions & 0 deletions doc/releases/changelog-dev.md
@@ -42,6 +42,9 @@

<h3>Improvements 🛠</h3>

* `qml.gradients.hadamard_grad` can now differentiate anything with a generator, and can accept circuits with non-commuting measurements.
[(#6928)](https://github.com/PennyLaneAI/pennylane/pull/6928)

* `Controlled` operators now have a full implementation of `sparse_matrix` that supports `wire_order` configuration.
[(#6994)](https://github.com/PennyLaneAI/pennylane/pull/6994)

@@ -280,6 +283,10 @@

<h3>Breaking changes 💔</h3>

* `qml.gradients.gradient_transform.choose_trainable_params` has been renamed to `choose_trainable_param_indices`
to better reflect what it actually does.
[(#6928)](https://github.com/PennyLaneAI/pennylane/pull/6928)

* `MultiControlledX` no longer accepts strings as control values.
[(#6835)](https://github.com/PennyLaneAI/pennylane/pull/6835)

8 changes: 4 additions & 4 deletions pennylane/gradients/finite_difference.py
@@ -38,7 +38,7 @@
_all_zero_grad,
_no_trainable_grad,
assert_no_trainable_tape_batching,
choose_trainable_params,
choose_trainable_param_indices,
find_and_validate_gradient_methods,
)

@@ -479,11 +479,11 @@ def finite_diff(
if argnum is None and not tape.trainable_params:
return _no_trainable_grad(tape)

trainable_params = choose_trainable_params(tape, argnum)
trainable_params_indices = choose_trainable_param_indices(tape, argnum)
diff_methods = (
find_and_validate_gradient_methods(tape, "numeric", trainable_params)
find_and_validate_gradient_methods(tape, "numeric", trainable_params_indices)
if validate_params
else {idx: "F" for idx in trainable_params}
else {idx: "F" for idx in trainable_params_indices}
)

if all(g == "0" for g in diff_methods.values()):
21 changes: 16 additions & 5 deletions pennylane/gradients/gradient_transform.py
@@ -101,7 +101,7 @@ def assert_no_trainable_tape_batching(tape, transform_name):
)


def choose_trainable_params(tape, argnum=None):
def choose_trainable_param_indices(tape, argnum=None):
"""Returns a list of trainable parameter indices in the tape.
Chooses the subset of trainable parameters to compute the Jacobian for. The function
@@ -116,6 +116,17 @@ def choose_trainable_params(tape, argnum=None):
Returns:
list: list of the trainable parameter indices
Note that trainable param indices are a **double pointer**.
>>> tape = qml.tape.QuantumScript([qml.RX(0.0, 0), qml.RY(1.0, 0), qml.RZ(2.0, 0)], trainable_params=[1,2])
>>> choose_trainable_param_indices(tape, argnum=[0])
[0]
>>> tape.get_operation(0)
(RY(1.0, wires=[0]), 1, 0)
In this case ``[0]`` points to the ``RY`` parameter. ``0`` selects into ``tape.trainable_params``,
which selects into ``tape.data``.
"""

if argnum is None:
@@ -382,13 +393,13 @@ def _contract_qjac_with_cjac(qjac, cjac, tape):
num_measurements = len(tape.measurements)
has_partitioned_shots = tape.shots.has_partitioned_shots

if isinstance(qjac, tuple) and len(qjac) == 1:
if isinstance(qjac, (tuple, list)) and len(qjac) == 1:
qjac = qjac[0]

if isinstance(cjac, tuple) and len(cjac) == 1:
if isinstance(cjac, (tuple, list)) and len(cjac) == 1:
cjac = cjac[0]

cjac_is_tuple = isinstance(cjac, tuple)
cjac_is_tuple = isinstance(cjac, (tuple, list))

multi_meas = num_measurements > 1

Expand All @@ -402,7 +413,7 @@ def _contract_qjac_with_cjac(qjac, cjac, tape):
_qjac = _qjac[0]
if has_partitioned_shots:
_qjac = _qjac[0]
single_tape_param = not isinstance(_qjac, tuple)
single_tape_param = not isinstance(_qjac, (tuple, list))

if single_tape_param:
# Without dimension (e.g. expval) or with dimension (e.g. probs)
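The tuple/list checks added to `_contract_qjac_with_cjac` above reflect the list-returning postprocessing. A tiny standalone sketch of the sequence-agnostic unwrapping pattern (the helper name here is hypothetical, not part of PennyLane):

```python
def _unwrap_single(jac):
    """Unwrap a length-1 Jacobian container whether it arrives as a tuple or a list."""
    if isinstance(jac, (tuple, list)) and len(jac) == 1:
        return jac[0]
    return jac

print(_unwrap_single([0.5]))       # 0.5  (list, as the reworked postprocessing returns)
print(_unwrap_single((0.5,)))      # 0.5  (tuple still accepted)
print(_unwrap_single([0.1, 0.2]))  # [0.1, 0.2]  (multi-parameter structure passes through)
```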
