michaelis_menten transformation as pt.TensorVariable #1054
Conversation
pymc_marketing/mmm/transformers.py (outdated)

```diff
@@ -914,6 +914,31 @@ def michaelis_menten(
     return alpha * x / (lam + x)


 def michaelis_menten(
```
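For reference, the saturation curve under discussion is just `alpha * x / (lam + x)`; a minimal numpy sketch (plain Python, without the pytensor wrapping this PR adds):

```python
import numpy as np

def michaelis_menten(x, alpha, lam):
    # Michaelis-Menten saturation: approaches `alpha` as x grows,
    # and equals alpha / 2 exactly at x == lam.
    return alpha * x / (lam + x)

x = np.array([0.0, 5.0, 50.0, 5000.0])
y = michaelis_menten(x, alpha=10.0, lam=5.0)
print(y)  # starts at 0, passes through 5 at x == lam, saturates toward 10
```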
Do we need two functions?
Can all of the SaturationTransformations be wrapped in as_tensor_variable instead of making these changes? i.e. here:

```python
return self.function(x, **kwargs)
```
> Do we need two functions?

We don't need them, but it made my life easier to avoid the pesky L70.

> Can all of the SaturationTransformations be wrapped in as_tensor_variable instead of making these changes?

Do you have anything in mind? I can think of 2 ways:
- Adding boilerplate to all the classes. Perhaps defining a decorator to reduce the boilerplate.
- Relying on Transformation._checks(), but here we would need to trust the type hints, use something like signature(class_function).return_annotation, and wrap pt.as_tensor_variable if needed.

Perhaps you have something less convoluted in mind?
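The decorator option mentioned above could be sketched roughly like this. `ensure_tensor` is a hypothetical name, and `np.asarray` stands in for `pt.as_tensor_variable` so the sketch runs standalone; this is not the implementation the PR adopts:

```python
import numpy as np

def ensure_tensor(fn, wrap=np.asarray):
    # Decorator-style helper: wrap whatever `fn` returns.
    # In pymc-marketing the wrapper would be pt.as_tensor_variable;
    # np.asarray is used here so the example is self-contained.
    def wrapped(*args, **kwargs):
        return wrap(fn(*args, **kwargs))
    return wrapped

def michaelis_menten(x, alpha, lam):
    # Plain-Python saturation function returning a float.
    return alpha * x / (lam + x)

mm = ensure_tensor(michaelis_menten)
out = mm(2.0, alpha=10.0, lam=3.0)
print(type(out).__name__)  # ndarray, not float
```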
Can you not add it in the one location of the apply method (where I linked)? That is a common method that is not overridden by the subclasses.
I think having it in apply would make sure that other custom saturation transformations will not run into the bug found here as well.
It shouldn't hurt to have pt.as_tensor_variable wrap something that is already a TensorVariable; pytensor will optimize it away.
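By analogy (numpy used here so the example is self-contained; the comment states that pytensor behaves the same way for tensors):

```python
import numpy as np

# np.asarray is a no-op on an existing ndarray: it returns the same
# object without copying, just as pt.as_tensor_variable returns a
# TensorVariable unchanged (pytensor optimizes the wrap away).
a = np.array([1.0, 2.0])
b = np.asarray(a)
print(b is a)  # True
```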
> Can you not add it in the one location of the apply method (where I linked)? That is a common method that is not overridden by the subclasses.

The linked apply method needs to be called within a model context, which is not the case in the plotting function.
The proposed sample_curve might work though, since it opens a model context. Having a look.
Do we want to add another pm.Deterministic to the model? That would be the case if we use _sample_curve.
We wouldn't want to use the private methods of the saturation transformation. Does sample_curve with fit_result not get us close to the data that is plotted? (Just not scaled on x and y.)
pymc_marketing/mmm/transformers.py (outdated)

```python
    alpha: float,
    lam: float,
```
I still don't know the best type hints here. But TensorVariable is also accepted here
> I still don't know the best type hints here. But TensorVariable is also accepted here

Reverted changes to type hints.
pymc_marketing/mmm/utils.py (outdated)

```diff
@@ -22,7 +22,7 @@
 import xarray as xr
 from scipy.optimize import curve_fit, minimize_scalar

-from pymc_marketing.mmm.transformers import michaelis_menten
+from pymc_marketing.mmm.transformers import michaelis_menten_function


 def estimate_menten_parameters(
```
Where is this still used? I thought there was a generalization of this? @cetagostini
Created an issue here: #1055
Can you write a test for the case that discovered this bug?
I am looking this over. There needs to be a solution that works for other custom saturation transformations. I will create an issue for this; we can address it in the future: #1056

Added

This will add 3 extra

Let's skip if that is required. Don't think that would be good.
pymc_marketing/mmm/transformers.py (outdated)

```diff
@@ -826,10 +826,10 @@ def tanh_saturation_baselined(
     return gain * x0 * pt.tanh(x * pt.arctanh(r) / x0) / r


-def michaelis_menten(
+def michaelis_menten_function(
```
This is only used in the functions we will deprecate, right?
> This is only used in the functions we will deprecate, right?

Yes, they are only used in the methods linked in #1055. If we remove those, there is no need to define the extra function.
Mind closing that now?
Hi @wd60622,

I am having doubts about completely removing the functions in #1055. As an example, take estimate_menten_parameters. It was introduced in #329 and never used outside of the current test. I am guessing the original purpose was to double-check that the introduced function gives correct results. If we remove the tests, we would never check that again.

As an alternative, I've simplified the PR to wrap only MichaelisMentenSaturation.function with pt.as_tensor_variable.

Note that the newly introduced fixture now passes the tests.
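The adopted fix could be sketched like this. Class and function names follow the PR, but the bodies are simplified and `np.asarray` stands in for `pt.as_tensor_variable` so the sketch is self-contained:

```python
import numpy as np

def michaelis_menten_function(x, alpha, lam):
    # Raw saturation function; returns a plain float for float inputs.
    return alpha * x / (lam + x)

class MichaelisMentenSaturation:
    # Sketch: the class-level `function` wraps the raw function's output
    # so downstream code (e.g. plotting) always receives a tensor-like
    # object. In the PR the wrapper is pt.as_tensor_variable.
    def function(self, x, alpha, lam):
        return np.asarray(michaelis_menten_function(x, alpha, lam))

sat = MichaelisMentenSaturation()
out = sat.function(2.0, alpha=10.0, lam=3.0)
print(type(out).__name__)  # ndarray rather than a bare float
```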
If we need to remove tests, that is fine. This is the only saturation transformation which needs to be defined with two functions (which I find silly). Just consolidate.

Thanks for all the edits @PabloRoque. I am out atm, but @juanitorduz will take a look.
I'll review on the weekend 🙏💪 (busy week ahead 😄)

No rush! I'll be taking some time off myself until the 28th.
This is looking good! Shall we merge this one and continue with other smaller PRs?
Looks good. Let's merge and continue elsewhere if needed
Make michaelis_menten consistent with the rest of SaturationTransformation by wrapping it as pt.TensorVariable

Description
- Rename the old michaelis_menten as michaelis_menten_function. Done to avoid major changes in mmm.utils.estimate_menten_parameters's L70.
- Wrap the previous michaelis_menten_function as pt.TensorVariable inside michaelis_menten.
- Minor change in test_michaelis_menten: the TensorVariable now needs to be evaluated.
- Minor changes to type hints.
- Wrap MichaelisMentenSaturation.function with pt.as_tensor_variable.
- test_plotting.mock_mmm fixture with MichaelisMentenSaturation.
- .eval() in example notebook.

Related Issue
- MMM.plot_direct_contribution_curves errors when using MichaelisMentenSaturation #1050

Checklist
Modules affected
Type of change
📚 Documentation preview 📚: https://pymc-marketing--1054.org.readthedocs.build/en/1054/