note some opportunities for improved efficiency
rileyjmurray committed Feb 2, 2024
1 parent f85716b commit 14f1af4
Showing 1 changed file with 16 additions and 1 deletion.
17 changes: 16 additions & 1 deletion pygsti/forwardsims/torchfwdsim.py
@@ -46,6 +46,21 @@
overload @ in whatever way that they need.
"""

"""Efficiency ideas

* Compute the Jacobian in blocks of rows at a time (iterating over the blocks in parallel).
  Ideally PyTorch would recognize how the computation decomposes, but we should check that it
  actually does.
* Recycle some of the work done in setting up the Jacobian function.
  Calling circuit.expand_instruments_and_separate_povm(model, outcomes) inside the StatelessModel
  constructor might be expensive. It only needs to happen once per iteration of GST.
* get_torch_cache can be made much more efficient.
  * It should suffice to iterate over self.param_labels (or, equivalently, the keys of free_params).
    I can add a self.param_types field to the StatelessModel class.
    We might need to store a little more metadata in StatelessModel so we have what each
    parameter's static "torch_base" method requires (dimensions should suffice).
"""
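The first idea above can be sketched as follows. This is an illustrative standalone function, not pyGSTi code: the model function and block size are placeholders, and the blocks are computed sequentially here even though the point of the idea is that they could be dispatched in parallel.

```python
import torch

def jacobian_in_row_blocks(f, x, block_size):
    """Compute the Jacobian of f at x, block_size rows at a time.

    Each block is independent of the others, so in principle the blocks
    could be handed to separate workers; here they run sequentially.
    """
    y = f(x).reshape(-1)
    blocks = []
    for start in range(0, y.numel(), block_size):
        rows = []
        for i in range(start, min(start + block_size, y.numel())):
            # Gradient of one output component = one Jacobian row.
            g, = torch.autograd.grad(y[i], x, retain_graph=True)
            rows.append(g.reshape(-1))
        blocks.append(torch.stack(rows))
    return torch.cat(blocks)

# Sanity check on a linear map: the Jacobian of x -> A @ x is A itself.
A = torch.tensor([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
x = torch.ones(2, requires_grad=True)
J = jacobian_in_row_blocks(lambda v: A @ v, x, block_size=2)
assert torch.allclose(J, A)
```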

class StatelessCircuit:

def __init__(self, spc: SeparatePOVMCircuit, model: ExplicitOpModel):
@@ -125,7 +140,7 @@ def get_free_parameters(self, model: ExplicitOpModel):
d[lbl] = vec
return d

-    def get_torch_cache(self, free_params: Dict[Label, torch.Tensor], grad: bool):
+    def get_torch_cache(self, free_params: OrderedDict[Label, torch.Tensor], grad: bool):
torch_cache = dict()
for c in self.circuits:

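The `Dict` → `OrderedDict` annotation change is more than cosmetic: flattening `free_params` into a single parameter vector is only well-defined if iteration order is fixed and matches the model's parameter order. A minimal illustration (the labels here are hypothetical, not taken from pyGSTi):

```python
from collections import OrderedDict

# Hypothetical free-parameter vectors keyed by label, in model order.
free_params = OrderedDict([("rho0", [0.1, 0.2]), ("Gxpi2", [0.3, 0.4])])

# Flattening relies on the dict preserving insertion order.
flat = [x for vec in free_params.values() for x in vec]
print(flat)  # [0.1, 0.2, 0.3, 0.4]
```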
