The gradient of h_estimate is not cut down. #6
Hi @ZhangXiao96, what do you mean by that? Do you have an idea how this could be fixed?
Maybe @ZhangXiao96 is talking about what is mentioned here.
Hi! Thanks for the great code! How can I fix these problems? Thanks in advance. Here are my experimental details:
After reading the issues, I think "h_estimate" should not be used directly to compute itself in the next step, because at each step only the value of h_estimate should be fed back. If h_estimate is kept as a "variable" (i.e., still attached to the computation graph), I am not sure whether the hvp() function will end up computing higher-order gradients.
or
But I found that both of these modifications make the influence function blow up to NaN as recursion_depth increases.
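A minimal sketch of that idea, assuming a PyTorch LiSSA-style recursion (the function and argument names here are hypothetical, not the ones used in this repo): h_estimate is detached after every update, so hvp() only ever receives plain values and no higher-order gradient graph accumulates across iterations.

```python
import torch

def hvp(loss, params, vec):
    """Hessian-vector product via double backprop (standard PyTorch recipe)."""
    grads = torch.autograd.grad(loss, params, create_graph=True)
    dot = sum((g * v).sum() for g, v in zip(grads, vec))
    return torch.autograd.grad(dot, params)

def lissa_inverse_hvp(loss_fn, params, v, recursion_depth=100,
                      damping=0.01, scale=25.0):
    # Start from v; keep h_estimate detached so each hvp() call only sees values.
    h_estimate = [vi.clone().detach() for vi in v]
    for _ in range(recursion_depth):
        loss = loss_fn()                     # loss on a fresh mini-batch
        hv = hvp(loss, params, h_estimate)   # H @ h_estimate
        # LiSSA update: h <- v + (1 - damping) * h - (H @ h) / scale
        h_estimate = [
            (vi + (1 - damping) * hi - hvi / scale).detach()
            for vi, hi, hvi in zip(v, h_estimate, hv)
        ]
    return h_estimate
```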
I am not sure it is right to use the initial h_estimate to calculate hvp() at each step. I checked the TensorFlow code provided by the author (https://github.com/kohpangwei/influence-release/blob/578bc458b4d7cc39ed7343b9b271a04b60c782b1/influence/genericNeuralNet.py#L475).
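For context, and as far as I can tell from that file, the TensorFlow implementation feeds the current estimate (initialized to v) back in at every step, not the initial one, and only its value enters the Hessian-vector product. Paraphrased (not a verbatim copy), the per-step update is roughly:

```python
# Paraphrase of the LiSSA step in the referenced TF code (not a verbatim copy):
#   cur_estimate = [a + (1 - damping) * b - c / scale
#                   for a, b, c in zip(v, cur_estimate, hessian_vector_val)]
# i.e. the *current* estimate is fed back each step, but only as a value.
```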
Hi @zhongyy, why did you modify the hvp part? I would like to ask more about your NaN error. How can we reproduce it? (note)
@zhongyy I have the same problem as you. The h_estimate keeps increasing over the iterations and eventually becomes NaN. Did you manage to fix this?
@zhongyy @wangdi19941224 @ryokamoi did any of you manage to fix the NaN blow-up issue? I face the same whenever I encase it with a
@iamgroot42 What kind of model did you use? One possible solution is to use a larger "scale".
@ryokamoi it's VGG19. I did try increasing "scale" to 500. I got rid of the NaNs (for now).
@iamgroot42 I think there is no computationally easy way to get the lowest scale since we have to calculate detH.
Right. Thanks a lot, @ryokamoi :D
Hi everyone, have any of you managed to solve the NaN problem? I've increased the scale to a very large number, but still got NaN after about 100 iterations.
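One way to think about the role of scale (a sketch under the assumption of the LiSSA update shown above, not a statement about this repo's defaults): the recursion only converges when the eigenvalues of the damped Hessian divided by scale stay small enough, roughly meaning scale has to be on the order of the largest Hessian eigenvalue or larger; otherwise h_estimate grows geometrically until it overflows to NaN. A crude diagnostic, with hypothetical helper names, is to watch the norm of h_estimate and abort early instead of waiting for NaN:

```python
import math

def h_estimate_norm(h_estimate):
    """L2 norm over all parameter blocks of h_estimate (hypothetical helper)."""
    return math.sqrt(sum(float(hi.norm()) ** 2 for hi in h_estimate))

def check_lissa_divergence(h_estimate, step, threshold=1e8):
    """Call once per recursion step; raises as soon as the estimate diverges."""
    n = h_estimate_norm(h_estimate)
    if not math.isfinite(n) or n > threshold:
        raise RuntimeError(
            f"h_estimate norm is {n:.3e} at step {step}; "
            "the recursion is diverging, try a larger scale or damping."
        )
```

Calling check_lissa_divergence(h_estimate, step) inside the recursion loop surfaces the blow-up immediately rather than after all recursion_depth iterations.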
Nice repo!
However, I think that the gradient of h_estimate is not cut off (i.e., h_estimate is not detached from the computation graph between iterations), which may lead to some problems.