Add GLM Support #763
Conversation
Here are some minor code-style comments. I am unfamiliar with the numerical estimation details, so someone else can help review them.
weights_type=weights_type,
# tol=tol,
# maxiter=maxiter,
collin_tol=collin_tol,
Why these commented lines? :)
It's a relic from an earlier implementation in which the Feglm class inherited from Fepois, which has these arguments. Now it inherits from Feols, which does not have them, so both arguments are set in Feglm.__init__ instead. I've cleaned this up; it's coming with the next commit.
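For context, a minimal sketch of the pattern described above. The class and argument names follow the diff, but the real Feols/Feglm signatures in pyfixest differ; this is illustrative only.

```python
# Illustrative sketch only: not the actual pyfixest class signatures.
class Feols:
    def __init__(self, weights_type: str, collin_tol: float):
        self.weights_type = weights_type
        self.collin_tol = collin_tol


class Feglm(Feols):
    def __init__(self, weights_type: str, collin_tol: float,
                 tol: float = 1e-8, maxiter: int = 25):
        # Feols does not take tol/maxiter, so they are set here instead of
        # being forwarded (and commented out) in the super() call.
        super().__init__(weights_type=weights_type, collin_tol=collin_tol)
        self.tol = tol
        self.maxiter = maxiter
```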
pyfixest/estimation/feglm_.py
# Step 1: _get weights w_tilde(r-1) and v(r-1) (eq. 2.5)
detadmu = self._update_detadmu(mu=mu)
# v = self._update_v(y=_Y.flatten(), mu=mu, detadmu=detadmu)
commented line?
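For context on what these helpers compute: d(eta)/d(mu) is the derivative of the link function at the current mean, and the working-response adjustment combines it with the residual. A minimal sketch, assuming a logit link; the helper names mirror the diff above but are not the PR's actual implementations.

```python
import numpy as np


def update_detadmu_logit(mu: np.ndarray) -> np.ndarray:
    # Logit link: eta = log(mu / (1 - mu)), hence d(eta)/d(mu) = 1 / (mu * (1 - mu)).
    return 1.0 / (mu * (1.0 - mu))


def update_v(y: np.ndarray, mu: np.ndarray, detadmu: np.ndarray) -> np.ndarray:
    # Working-response adjustment used in the IRLS update: (y - mu) * d(eta)/d(mu).
    return (y - mu) * detadmu
```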
v_dotdot=v_dotdot,
X_dotdot=X_dotdot,
deviance_old=deviance_old,
step_halfing_tolerance=1e-12,
Do you see a case where the user would need to modify the step_halfing_tolerance parameter?
I checked the glmnet API, and they do not seem to support setting the step-halving tolerance: link. For now I would maybe not support it, but add it in case users request it?
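For reference, a minimal sketch of what a step-halving loop with a hard-coded tolerance typically looks like; the names are illustrative and not the PR's code. If the deviance does not improve, the step is repeatedly halved until it does or the step size falls below the tolerance.

```python
import numpy as np


def step_halving(beta_old: np.ndarray, delta: np.ndarray, deviance_fn,
                 deviance_old: float, step_halving_tolerance: float = 1e-12):
    """Shrink the update until the deviance improves (illustrative sketch)."""
    step = 1.0
    while step > step_halving_tolerance:
        beta_new = beta_old + step * delta
        if deviance_fn(beta_new) < deviance_old:
            return beta_new
        step /= 2.0  # deviance did not improve: halve the step and retry
    # No improving step found above the tolerance: keep the old coefficients.
    return beta_old
```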
return Z.T @ W @ v

def _get_diff(self, deviance: np.ndarray, last: np.ndarray) -> np.ndarray:
    return np.abs(deviance - last) / (0.1 + np.abs(last))
Why 0.1? It clearly seems to be there for numerical stability, but could it be smaller (just curious)?
I actually don't know - I am using this as is to mimic fixest, which itself follows R's glm implementation, which uses this convergence criterion: link
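The criterion in question checks the relative change in deviance against a tolerance, with the 0.1 keeping the denominator away from zero when the deviance is tiny. A sketch following the _get_diff snippet above, assuming an epsilon of 1e-8 (R's glm default):

```python
import numpy as np


def has_converged(deviance: float, deviance_old: float, eps: float = 1e-8) -> bool:
    # Relative deviance change as in _get_diff above; the 0.1 guards against
    # division by a near-zero deviance.
    return np.abs(deviance - deviance_old) / (0.1 + np.abs(deviance_old)) < eps
```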
I had some issues locking the file in #767 (initial commits). I had to make sure to update only the environments I really needed; it seems a full update breaks something (in case it helps).
Tests fail due to an incorrect assumption about formulaic behavior, which seems to have changed with version 1.1.0. Setting the formulaic requirement to <1.1.0 for now. Details here: #777
Introduces pf.feglm(), which allows fitting GLMs with Gaussian & Binomial families (Logit and Probit).

In Python: (a hedged usage sketch is given below)

In R:

To do:
Only apparent issue / error (?): Gaussian scores match OLS scores in pyfixest but not in fixest (Gaussian scores in fixest <> OLS scores, not sure why).
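A minimal usage sketch for the Python call introduced above. The toy data, formula, and the exact spelling of the family argument are assumptions for illustration, not confirmed from the PR.

```python
import numpy as np
import pandas as pd
import pyfixest as pf

# Toy data with a binary outcome and a fixed effect (illustrative only).
rng = np.random.default_rng(42)
df = pd.DataFrame(
    {
        "Y": rng.integers(0, 2, size=1_000),
        "X1": rng.normal(size=1_000),
        "f1": rng.integers(0, 20, size=1_000),
    }
)

# Assumed call pattern: formula string, data frame, and a family name
# such as "gaussian", "logit", or "probit".
fit = pf.feglm("Y ~ X1 | f1", data=df, family="logit")
fit.summary()
```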