
Training accuracies are low and their gaps are big when Conv4 backbone is used. #63

hjk92g opened this issue May 10, 2021 · 0 comments


I checked the training accuracy when using the Conv4 backbone and found that the training accuracies are low and the gaps between methods are large.

For example, in the 5-way 1-shot task (CUB dataset with data augmentation), protonet achieved 55.02% +- 0.97% training accuracy while baseline++ achieved 73.84% +- 0.86%. Such training-accuracy gaps might also affect the test accuracy.

When I used the ResNet18 backbone, protonet achieved 92.65% +- 0.63% training accuracy and baseline++ achieved 99.81% +- 0.10%, so this is less of an issue than with Conv4.
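(For reference, the `+-` values above are confidence intervals over evaluation episodes. A minimal sketch of how such an interval is typically computed in few-shot evaluation code, assuming per-episode accuracies have been collected in a list; the 1.96 factor and the use of the population standard deviation are my assumptions about the convention, not taken from this repo's source:)

```python
import math
import statistics

def mean_ci95(accs):
    """Mean and 95% confidence interval of per-episode accuracies (in %)."""
    n = len(accs)
    mean = statistics.mean(accs)
    std = statistics.pstdev(accs)  # population std, as np.std computes by default
    ci95 = 1.96 * std / math.sqrt(n)
    return mean, ci95

# Example with four hypothetical episode accuracies
m, ci = mean_ci95([54.0, 56.0, 55.0, 55.0])
print(f"{m:.2f}% +- {ci:.2f}%")  # prints "55.00% +- 0.69%"
```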

If you are comparing models with simple backbones, you should check the training accuracy to make sure the comparison is fair. You can measure training accuracy by passing `--split base` to both `save_features.py` and `test.py`.
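(A hypothetical invocation sketch: the script names and `--split base` come from this issue, but the other flag names and values are assumptions based on typical usage of this kind of codebase, so check the scripts' argument parsers for the exact options:)

```shell
# Extract features on the *base* (training) split instead of the novel split,
# then evaluate on those features to get training accuracy.
python save_features.py --dataset CUB --model Conv4 --method protonet --split base
python test.py         --dataset CUB --model Conv4 --method protonet --split base
```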

(This is not due to best-model selection on the validation set: I also checked the training accuracy at every epoch for protonet, and with the Conv4 backbone the maximum training accuracy was 57.11% +- 2.10%.)
