This repository has been archived by the owner on Oct 30, 2019. It is now read-only.

Accuracy on the val set is different between training and testing #171

Open
Lzc6996 opened this issue Mar 22, 2017 · 4 comments

Lzc6996 commented Mar 22, 2017

I trained a ResNet-50 on my own dataset and got top-1 error 7.20 on my val set during training. But when I use the testOnly option to evaluate the same model on the same val set, I get top-1 error 7.49.
I want to know what is happening there.

Lzc6996 (Author) commented Mar 23, 2017

@colesbury could you help me? Thank you very much! I really need to know what makes the same model perform differently. During training I get `* Best model 7.2048611111111 1.5873015873016`, but with testOnly the top-1 error becomes 7.499. Same model, same dataset.

colesbury (Contributor) commented

Possibly differences in batch normalization's running_mean/var between the data-parallel replicas and the saved model. You can try recomputing the statistics on your training set.
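
For reference, this repo is written in Lua Torch, but the suggested fix can be sketched in PyTorch. This is a minimal, hypothetical sketch (the `recompute_bn_stats` helper and its `loader` argument are illustrative names, not part of this repo): reset every BatchNorm layer's running statistics, then run forward passes over the training set in train mode so the running averages are re-estimated consistently.

```python
import torch
import torch.nn as nn

def recompute_bn_stats(model, loader, device="cuda"):
    """Re-estimate BatchNorm running_mean/running_var from data.

    Hypothetical helper: resets every BN layer's running stats, then
    runs forward passes in train mode so the stats are rebuilt from
    the loader's batches. No gradients or weight updates happen.
    """
    for m in model.modules():
        if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)):
            m.reset_running_stats()
            m.momentum = None  # None => cumulative (equally weighted) average

    model.train()  # BN updates its running stats only in train mode
    with torch.no_grad():  # no backward pass; weights stay fixed
        for images, _ in loader:
            model(images.to(device))

    model.eval()  # back to eval mode for testing
    return model
```

The point of this is that whichever replica's running averages happened to be saved in the checkpoint get replaced by one consistent estimate over the whole training set, so testOnly evaluation no longer depends on which replica's statistics were serialized.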

Lzc6996 (Author) commented Mar 24, 2017

I see. I will try. Thanks!

zaeemzadeh commented

Which one did you report in the papers?
