Commit

Merge branch 'master' into pr/sparse_diff_idx
bluescarni committed Nov 10, 2023
2 parents (ef8e031 + 930c9ca), commit 08702d5
Showing 4 changed files with 11 additions and 3 deletions.
doc/advanced_tutorials.rst: 2 changes (1 addition, 1 deletion)
@@ -5,7 +5,7 @@ Advanced tutorials
 
 .. important::
 
-   More :ref:`tutorials <hypy:adv_tutorials>` and :ref:`examples <hypy:examples>` are available in the documentation
+   More tutorials and examples are available in the documentation
    of heyoka's `Python bindings <https://bluescarni.github.io/heyoka.py>`__.
 
 In this section we will show some of heyoka's more advanced functionalities,
doc/basic_tutorials.rst: 2 changes (1 addition, 1 deletion)
@@ -5,7 +5,7 @@ Basic tutorials
 
 .. important::
 
-   More :ref:`tutorials <hypy:basic_tutorials>` and :ref:`examples <hypy:examples>` are available in the documentation
+   More tutorials and examples are available in the documentation
    of heyoka's `Python bindings <https://bluescarni.github.io/heyoka.py>`__.
 
 The code snippets in these tutorials assume the inclusion of the
doc/changelog.rst: 3 changes (3 additions, 0 deletions)
@@ -21,6 +21,9 @@ New
 Changes
 ~~~~~~~
 
+- Substantial speedups in the computation of first-order derivatives
+  with respect to many variables/parameters
+  (`#358 <https://github.com/bluescarni/heyoka/pull/358>`__).
 - Substantial performance improvements in the computation of
   derivative tensors of large expressions with a high degree
   of internal redundancy
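The new changelog entry above concerns first-order derivatives taken with respect to a large number of variables or parameters. As a rough, illustrative sketch of the operation being sped up (not of heyoka's internal implementation, and with the API calls to be treated as assumptions to check against the heyoka version in this commit), the same first-order derivatives can be obtained one at a time with heyoka::diff():

// Illustrative sketch only: builds a toy expression and differentiates it
// with respect to each variable in turn. The bulk diff_tensors() machinery
// computes such derivatives in a single pass instead.
#include <iostream>
#include <string>
#include <vector>

#include <heyoka/heyoka.hpp>

int main()
{
    using namespace heyoka;

    // A toy expression in several variables.
    auto [x, y, z] = make_vars("x", "y", "z");
    auto ex = x * y + cos(z) * x;

    // First-order derivatives, one variable at a time.
    std::vector<expression> grad;
    for (const auto *name : {"x", "y", "z"}) {
        grad.push_back(diff(ex, std::string(name)));
    }

    for (const auto &d : grad) {
        std::cout << d << '\n';
    }
}

The changes to src/expression_diff.cpp below touch the reverse-mode path of that bulk machinery (diff_tensors_reverse_impl()), which handles exactly this order-1 case.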
src/expression_diff.cpp: 7 changes (6 additions, 1 deletion)
@@ -797,7 +797,10 @@ void diff_tensors_reverse_impl(
 
     // Helpers to ease the access to the active member of the local_diff variant.
     // NOTE: if used incorrectly, these will throw at runtime.
-    auto local_dmap = [&local_diff]() -> diff_map_t & { return std::get<diff_map_t>(local_diff); };
+    // NOTE: currently local_dmap is never used because the heuristic
+    // for deciding between forward and reverse mode prevents reverse mode
+    // from being used for order > 1.
+    auto local_dmap = [&local_diff]() -> diff_map_t & { return std::get<diff_map_t>(local_diff); }; // LCOV_EXCL_LINE
     auto local_dvec = [&local_diff]() -> diff_vec_t & { return std::get<diff_vec_t>(local_diff); };
 
     // Cache the number of diff arguments.
@@ -959,6 +962,7 @@ void diff_tensors_reverse_impl(
 
                 local_dvec().emplace_back(tmp_v_idx, std::move(cur_der));
             } else {
+                // LCOV_EXCL_START
                 // Check if we already computed this derivative.
                 if (const auto it = local_dmap().find(tmp_v_idx); it == local_dmap().end()) {
                     // The derivative is new. If the diff argument is present in the
@@ -973,6 +977,7 @@
                     [[maybe_unused]] const auto [_, flag] = local_dmap().try_emplace(tmp_v_idx, std::move(cur_der));
                     assert(flag);
                 }
+                // LCOV_EXCL_STOP
             }
}

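A note on the accessor idiom in the first hunk above: local_diff is a std::variant, and the two lambdas expose its alternatives through std::get, which throws std::bad_variant_access at runtime if the inactive alternative is requested. That is why the existing comment warns that incorrect use throws, and why the map accessor is excluded from coverage once the forward/reverse heuristic makes it unreachable for order > 1. A self-contained sketch of the pattern, using hypothetical stand-in types rather than heyoka's actual diff_map_t/diff_vec_t:

// Minimal standalone illustration of lambda accessors over a std::variant.
// The types here are hypothetical stand-ins, not heyoka's internal ones.
#include <iostream>
#include <map>
#include <utility>
#include <variant>
#include <vector>

int main()
{
    using map_t = std::map<int, double>;
    using vec_t = std::vector<std::pair<int, double>>;

    // The vector alternative is the active member.
    std::variant<map_t, vec_t> local_diff = vec_t{};

    auto local_dmap = [&local_diff]() -> map_t & { return std::get<map_t>(local_diff); };
    auto local_dvec = [&local_diff]() -> vec_t & { return std::get<vec_t>(local_diff); };

    // Correct accessor: the vector alternative is active.
    local_dvec().emplace_back(0, 1.5);

    // Incorrect accessor: std::get throws std::bad_variant_access.
    try {
        local_dmap()[0] = 2.5;
    } catch (const std::bad_variant_access &) {
        std::cout << "wrong accessor used\n";
    }
}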
