Allow for passing of 'fixed' propensity scores #54
Several MetaLearners, such as the R-Learner or the DR-Learner, have propensity base models. As of now, these are trained -- just like all other base models -- on the data passed through the MetaLearner's `fit` call. In particular in cases of non-observational data, it might be interesting to pass 'fixed' propensity scores, as compared to trying to infer the propensities from the experiment data.

Some ideas:

Next steps:
- Settle on where the fixed propensity scores should be passed (`__init__`, `fit`, `predict`?)
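To make the motivation concrete, here is a minimal, self-contained sketch using plain numpy/scikit-learn rather than the metalearners API. The simulated data and the `ipw_ate` helper are illustrative assumptions; it only contrasts propensities estimated from covariates with fixed, known assignment probabilities.

```python
# Sketch: fixed vs. estimated propensity scores in a simple IPW estimate
# of the average treatment effect. Purely illustrative, simulated data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
X = rng.normal(size=(n, 3))
true_p = 0.3                                 # assignment probability known by design (e.g. an RCT)
w = rng.binomial(1, true_p, size=n)          # treatment assignment
y = X[:, 0] + 1.5 * w + rng.normal(size=n)   # outcome with a constant treatment effect of 1.5

# Estimated propensities: learned from covariates, as a propensity base model would do.
p_hat = LogisticRegression().fit(X, w).predict_proba(X)[:, 1]

# Fixed propensities: simply the known assignment probability.
p_fixed = np.full(n, true_p)

def ipw_ate(y, w, p):
    """Inverse-propensity-weighted estimate of the average treatment effect."""
    return np.mean(w * y / p) - np.mean((1 - w) * y / (1 - p))

print(ipw_ate(y, w, p_hat))    # both should be close to 1.5
print(ipw_ate(y, w, p_fixed))
```

The open question of this issue is how such fixed scores could be handed to the MetaLearner instead of (or alongside) a fitted propensity base model.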
Comments
@ArseniyZvyagintsevQC Moving the discussion from the already closed PR to this open issue. I believe I understand your motivation for not only wanting to pass floats. A practical complication, though, is that this will cause problems at inference time for out-of-sample data. Granted, the ... What did you have in mind regarding the concern I'm raising? Are you confident that learning the propensity model based on covariates won't effectively lead to a recovery of the fixed propensities?
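One way to probe the last question on simulated data (illustrative only, not from the issue): when the true assignment probability is constant and independent of the covariates, a propensity model fit on (X, w) does, in this simple well-specified setting, approximately recover that constant.

```python
# Sketch: with a constant, covariate-independent assignment probability,
# a learned propensity model ends up predicting roughly that constant.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 50_000
X = rng.normal(size=(n, 5))
w = rng.binomial(1, 0.3, size=n)  # fixed propensity of 0.3, unrelated to X

p_hat = LogisticRegression().fit(X, w).predict_proba(X)[:, 1]
print(p_hat.mean().round(3), p_hat.std().round(3))  # mean close to 0.3, small spread
```

Whether this approximation is close enough, and whether it still holds for less well-specified propensity models, is exactly the trade-off discussed in this thread.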
@kklein I do understand your concerns. I have two ideas in mind for how we could avoid breaking the design and avoid raising nasty errors while still passing fixed propensity scores:

Note that users can already do something like (2) under the current implementation. In my project I simply added one column to the data, called `prop_scores`, and specified the `propensity_model` to use only this column. It worked out nicely.
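A sketch of that workaround (the `prop_scores` column name comes from the comment above; the class name, the column index, and the assumption that the propensity base model can be any scikit-learn-style classifier are mine): a "classifier" that learns nothing and simply returns the precomputed propensity column as its predicted treatment probability.

```python
# Sketch of the workaround: a scikit-learn-compatible "propensity model" that
# ignores all other features and returns a precomputed propensity column.
# Binary-treatment case only; column index and names are assumptions.
import numpy as np
from sklearn.base import BaseEstimator, ClassifierMixin

class FixedPropensityClassifier(BaseEstimator, ClassifierMixin):
    def __init__(self, propensity_column: int = -1):
        # Index of the column holding the precomputed scores, e.g. a
        # prop_scores column appended at the end of the feature matrix.
        self.propensity_column = propensity_column

    def fit(self, X, y):
        # Nothing is learned; only the observed treatment classes are recorded.
        self.classes_ = np.unique(y)
        return self

    def predict_proba(self, X):
        p = np.asarray(X)[:, self.propensity_column]
        return np.column_stack([1 - p, p])

    def predict(self, X):
        return (self.predict_proba(X)[:, 1] >= 0.5).astype(int)
```

As noted earlier in the thread, the same `prop_scores` column then has to be present in any out-of-sample data passed at prediction time, which is precisely the inference-time complication raised above.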