Parameter tuning for multi_arm_causal_forest #1195
Comments
Hi Stephen,
We haven't added tuning for this forest yet; it may be added at some point.
You can, however, assess model performance by calculating the implied loss (the R-loss mentioned in the docstring: https://grf-labs.github.io/grf/reference/multi_arm_causal_forest.html#details) yourself. If forest is a trained multi-arm causal forest:
tau.hat <- predict(forest)$predictions[,,]
W.matrix <- model.matrix(~ forest$W.orig - 1)[, -1, drop = FALSE]
W.hat <- forest$W.hat[, -1]
Y.hat <- forest$Y.hat[, 1]
Y.orig <- forest$Y.orig[, 1]
residual <- Y.orig - (Y.hat + rowSums((W.matrix - W.hat) * tau.hat))
R.loss <- mean(residual^2)
Tuning essentially works by cross-validating on this criterion (evaluating it on a held-out sample). If you just want to try a few tuning parameters or compare with other estimators, then simply computing and comparing this loss should be perfectly fine.
You may notice empirically that different R.loss values differ only in the last digits: that is expected, since the treatment effect signal is dominated by noise. Stefan has a nice video lecture covering an example of this: https://youtu.be/fAUmCRgpP6g?t=529

Hi Erik,
That's great – thanks very much for your helpful response and for including the code. Thanks also for the link to Stefan's video, I will check that out.
Regards,
Stephen
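To see what the R-loss criterion in the snippet above computes, here is a minimal self-contained sketch in Python/numpy on synthetic multi-arm data. All names and the data-generating process are hypothetical, and oracle nuisance estimates stand in for the Y.hat/W.hat that grf would estimate; this illustrates only the loss formula, not the grf API.

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 2000, 2  # n samples, k non-baseline treatment arms (hypothetical DGP)

# Synthetic data: treatment W in {0, 1, 2}, with arm 0 as the baseline.
X = rng.normal(size=(n, 3))
W = rng.integers(0, k + 1, size=n)
tau_true = np.column_stack([X[:, 0], -X[:, 0]])  # true effects of arms 1 and 2
m = X[:, 1]                                      # baseline conditional mean E[Y | X]
W_matrix = np.eye(k + 1)[W][:, 1:]               # one-hot treatment, baseline column dropped
Y = m + (W_matrix * tau_true).sum(axis=1) + rng.normal(scale=0.5, size=n)

# Stand-ins for the nuisance estimates a forest would produce:
Y_hat = m                               # oracle E[Y | X], for illustration only
W_hat = np.full((n, k), 1.0 / (k + 1))  # known assignment probabilities per arm

def r_loss(tau_hat):
    """Mean squared R-learner residual, mirroring the R snippet above."""
    residual = Y - (Y_hat + ((W_matrix - W_hat) * tau_hat).sum(axis=1))
    return np.mean(residual ** 2)

print(r_loss(tau_true))          # loss at the true effects: roughly the noise variance
print(r_loss(np.zeros((n, k))))  # loss of a zero-effect model: larger by construction
```

Comparing the two printed values shows how a better treatment-effect estimate yields a lower R-loss, which is exactly the quantity one would cross-validate when comparing tuning parameters.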
Hello,
Thanks for your great package. I am interested in applying the multi_arm_causal_forest function; however, I notice that the tune.parameters option available for other functions does not seem to be available here. I would appreciate any advice on how to tune the parameters in this context. Is this option likely to be added in the future?
Thanks,
Stephen