I checked training accuracy when using the Conv4 backbone. I found that the training accuracies are low and the gaps between methods are large.
For example, on the 5-way 1-shot task (CUB data with data augmentation), protonet achieved 55.02% +- 0.97% training accuracy while baseline++ achieved 73.84% +- 0.86%. Such training accuracy gaps might also affect the test accuracy.
With the ResNet18 backbone, protonet achieved 92.65% +- 0.63% training accuracy and baseline++ achieved 99.81% +- 0.10%, so this is less of an issue than with Conv4.
If you are comparing models with simple backbones, you should check training accuracy to ensure a fair comparison. You can check training accuracy by passing "--split base" to both "save_features.py" and "test.py".
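For reference, the invocation looks roughly like this (the flags other than `--split` are illustrative here; use whatever dataset/method/model settings match your run):

```
python save_features.py --dataset CUB --method protonet --model Conv4 --split base
python test.py --dataset CUB --method protonet --model Conv4 --split base
```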
(This is not due to best-model selection on the validation set. I also checked training accuracy at every epoch for protonet; with the Conv4 backbone, the maximum training accuracy was 57.11% +- 2.10%.)