Use less memory in multi_normal_cholesky_lpdf #2983

Status: Open (wants to merge 5 commits into develop)

61 changes: 27 additions & 34 deletions stan/math/prim/prob/multi_normal_cholesky_lpdf.hpp
@@ -53,6 +53,9 @@ return_type_t<T_y, T_loc, T_covar> multi_normal_cholesky_lpdf(
using T_partials_return = partials_return_t<T_y, T_loc, T_covar>;
using matrix_partials_t
= Eigen::Matrix<T_partials_return, Eigen::Dynamic, Eigen::Dynamic>;
using vector_partials_t = Eigen::Matrix<T_partials_return, Eigen::Dynamic, 1>;
using row_vector_partials_t
= Eigen::Matrix<T_partials_return, 1, Eigen::Dynamic>;
using T_y_ref = ref_type_t<T_y>;
using T_mu_ref = ref_type_t<T_loc>;
using T_L_ref = ref_type_t<T_covar>;
@@ -119,59 +122,49 @@ return_type_t<T_y, T_loc, T_covar> multi_normal_cholesky_lpdf(
}

if (include_summand<propto, T_y, T_loc, T_covar_elem>::value) {
Eigen::Matrix<T_partials_return, Eigen::Dynamic, Eigen::Dynamic>
y_val_minus_mu_val(size_y, size_vec);
row_vector_partials_t half(size_vec);
vector_partials_t y_val_minus_mu_val(size_vec);
vector_partials_t scaled_diff(size_vec);
matrix_partials_t L_val = value_of(L_ref);

T_partials_return sum_lp_vec(0.0);

for (size_t i = 0; i < size_vec; i++) {
decltype(auto) y_val = as_value_column_vector_or_scalar(y_vec[i]);
decltype(auto) mu_val = as_value_column_vector_or_scalar(mu_vec[i]);
y_val_minus_mu_val.col(i) = y_val - mu_val;
y_val_minus_mu_val = y_val - mu_val;
half = mdivide_left_tri<Eigen::Lower>(L_val, y_val_minus_mu_val)
.transpose();
scaled_diff = mdivide_right_tri<Eigen::Lower>(half, L_val).transpose();
Comment on lines +136 to +138
@andrjohns (Collaborator), Dec 11, 2023:

This is the part that concerns me, since it's gone from a single solve each (with a matrix) to size_vec solves (with a vector) for half and scaled_diff each, especially when the single larger solve can be better vectorised with SIMD and other compiler optimisations.

Is there enough of a memory hit to justify the extra operations?
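
For readers following the thread, here is a standalone Eigen sketch of the two shapes of solve being compared. It is not code from either branch and the function names are made up for illustration: "batched" does one triangular solve over an n x N matrix of differences, "looped" does one solve per observation and only keeps a single n-vector of workspace alive at a time.

#include <Eigen/Dense>

// Batched: one triangular solve over all columns at once. Vectorises well,
// but the full n x N right-hand side and result are held in memory together.
double quad_form_batched(const Eigen::MatrixXd& L,
                         const Eigen::MatrixXd& diffs) {
  Eigen::MatrixXd half = L.triangularView<Eigen::Lower>().solve(diffs);
  return half.colwise().squaredNorm().sum();
}

// Looped: one solve per observation. Same flop count, only an n-vector of
// workspace at a time, but N separate (less SIMD-friendly) triangular solves.
double quad_form_looped(const Eigen::MatrixXd& L,
                        const Eigen::MatrixXd& diffs) {
  double acc = 0.0;
  Eigen::VectorXd half(diffs.rows());
  for (Eigen::Index i = 0; i < diffs.cols(); ++i) {
    half = L.triangularView<Eigen::Lower>().solve(diffs.col(i));
    acc += half.squaredNorm();
  }
  return acc;
}

Both compute the same quantity; the question in this thread is whether the smaller working set of the looped form is worth giving up the single blocked solve.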

Collaborator (PR author):

Definitely agreed. We already do this sort of per-observation loop elsewhere, for example:

lp_type sum_lp_vec(0.0);
for (size_t i = 0; i < size_vec; i++) {
  const auto& y_col = as_column_vector_or_scalar(y_vec[i]);
  const auto& mu_col = as_column_vector_or_scalar(mu_vec[i]);
  sum_lp_vec += trace_inv_quad_form_ldlt(ldlt_Sigma, y_col - mu_col);
}

for (size_t i = 0; i < size_vec; i++) {
  const auto& y_col = as_column_vector_or_scalar(y_vec[i]);
  const auto& mu_col = as_column_vector_or_scalar(mu_vec[i]);
  sum_lp_vec
      += log1p(trace_inv_quad_form_ldlt(ldlt_Sigma, y_col - mu_col) / nu);
}

I'm really not sure what's best here.

Collaborator:

If it's alright with you, I'd prefer not to implement this. The current implementation is likely to scale better to larger inputs, and the changes would also reduce any benefits from OpenCL-accelerated ops.

But also completely happy for you to call someone in for a tie-breaker if you feel strongly about it!

Collaborator (PR author):

I'm fine with closing it, but I want someone to weigh in on whether we should change the other distributions. I can update the mvn derivatives PR to follow the same approach.

Collaborator:

@SteveBronder - as the Chief of Memory Police, what do you think?

Collaborator (PR author):

I have an M1 Max. Is there someone who could benchmark on Windows and Linux machines?

Collaborator (PR author):

I got the library set up, but I don't have taskset. Also, how can I set up the script to run the two branches?

Collaborator:

You don't need taskset to run the benchmarks; it's only needed if you want to pin the run to a single core.

You can add another branch in your benchmark's CMake file like:

FetchContent_Declare(
  stanmathalt
  GIT_REPOSITORY https://github.com/stan-dev/math
  GIT_TAG  mybranch # replace with the version you want to use
)

FetchContent_GetProperties(stanmathalt)
if(NOT stanmathalt_POPULATED)
  FetchContent_Populate(stanmathalt)
endif()

Then you can include it in your executable build via ${stanmathalt_SOURCE_DIR}.

Collaborator (PR author):

Then how do I run just the benchmarks for this distribution?

Collaborator:

There's an example of how to add a benchmark in the README, and an example benchmark folder below. You need to write a small CMake file to compile the benchmark and should be able to use that folder as a template:

https://github.com/SteveBronder/stan-perf/tree/main/benchmarks/matmul_aos_soa
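
As a rough starting point, a minimal harness for just this distribution could look like the sketch below. This assumes the stan-perf setup links google/benchmark and Stan Math (an assumption based on the linked repo, not confirmed here); the benchmark name and sizes are placeholders, and it only times the prim (double) path. For the reverse-mode path you would build the inputs from stan::math::var, call grad(), and recover memory each iteration.

#include <benchmark/benchmark.h>
#include <stan/math.hpp>
#include <Eigen/Dense>
#include <vector>

static void BM_multi_normal_cholesky_lpdf(benchmark::State& state) {
  const int n = state.range(0);      // dimension of each observation
  const int n_obs = state.range(1);  // number of observations
  // Fixed positive-definite covariance and its Cholesky factor.
  Eigen::MatrixXd A = Eigen::MatrixXd::Random(n, n);
  Eigen::MatrixXd Sigma
      = A * A.transpose() + static_cast<double>(n) * Eigen::MatrixXd::Identity(n, n);
  Eigen::MatrixXd L = Sigma.llt().matrixL();
  Eigen::VectorXd mu = Eigen::VectorXd::Zero(n);
  // Identical observations are fine for timing purposes.
  std::vector<Eigen::VectorXd> y(n_obs, Eigen::VectorXd::Random(n));
  for (auto _ : state) {
    double lp = stan::math::multi_normal_cholesky_lpdf(y, mu, L);
    benchmark::DoNotOptimize(lp);
  }
}
BENCHMARK(BM_multi_normal_cholesky_lpdf)->Args({32, 100})->Args({128, 500});
BENCHMARK_MAIN();

Building the same binary once per Math checkout (for example pointing the include path at ${stanmathalt_SOURCE_DIR} versus the develop checkout) and comparing the reported times would give the two-branch comparison discussed above.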


sum_lp_vec += dot_self(half);

if (!is_constant_all<T_y>::value) {
partials_vec<0>(ops_partials)[i] += -scaled_diff;
}
if (!is_constant_all<T_loc>::value) {
partials_vec<1>(ops_partials)[i] += scaled_diff;
}
if (!is_constant<T_covar_elem>::value) {
partials_vec<2>(ops_partials)[i] += scaled_diff * half;
}
}

matrix_partials_t half;
matrix_partials_t scaled_diff;
logp += -0.5 * sum_lp_vec;

// If the covariance is not autodiff, we can avoid computing a matrix
// inverse
if (is_constant<T_covar_elem>::value) {
matrix_partials_t L_val = value_of(L_ref);

half = mdivide_left_tri<Eigen::Lower>(L_val, y_val_minus_mu_val)
.transpose();

scaled_diff = mdivide_right_tri<Eigen::Lower>(half, L_val).transpose();

if (include_summand<propto>::value) {
logp -= sum(log(L_val.diagonal())) * size_vec;
}
} else {
matrix_partials_t inv_L_val
= mdivide_left_tri<Eigen::Lower>(value_of(L_ref));

half = (inv_L_val.template triangularView<Eigen::Lower>()
* y_val_minus_mu_val)
.transpose();

scaled_diff = (half * inv_L_val.template triangularView<Eigen::Lower>())
.transpose();

logp += sum(log(inv_L_val.diagonal())) * size_vec;
partials<2>(ops_partials) -= size_vec * inv_L_val.transpose();

for (size_t i = 0; i < size_vec; i++) {
partials_vec<2>(ops_partials)[i] += scaled_diff.col(i) * half.row(i);
}
}

logp -= 0.5 * sum(columns_dot_self(half));

for (size_t i = 0; i < size_vec; i++) {
if (!is_constant_all<T_y>::value) {
partials_vec<0>(ops_partials)[i] -= scaled_diff.col(i);
}
if (!is_constant_all<T_loc>::value) {
partials_vec<1>(ops_partials)[i] += scaled_diff.col(i);
}
}
}
