
Direct Transfer results #1

Open · liangbh6 opened this issue Jan 29, 2018 · 6 comments

Comments

@liangbh6

Hi, I have some questions about the 'Direct Transfer' results in Table 2 of your paper. I made my settings consistent with yours, but I cannot reach such high baselines, e.g. rank-1 33.1%, mAP 16.7% when training on Market-1501 and testing on DukeMTMC-reID; I consistently get around rank-1 27%, mAP 13%. Even so, this is better than the result reported in https://arxiv.org/pdf/1705.10444.pdf (rank-1 21.9%, mAP 10.9%).
I wonder whether any tricks were used in your experiments. I'm looking forward to your reply.
@ghost

ghost commented Jan 30, 2018

@liangbh6 @Simon4Yan Which framework do you use, Caffe or PyTorch?

@liangbh6
Author

@Simon4john PyTorch. So the difference between PyTorch and Caffe is the reason? If I want to reproduce your results in PyTorch, do you have any suggestions about the learning rate, data augmentation, or testing tricks such as feature normalization? I have actually tried normalizing the features, but it helped only a little.
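
For reference, by feature normalization I mean L2-normalizing the extracted features before matching. A minimal sketch (the array names, shapes, and random data are illustrative assumptions, not from this repo):

```python
import numpy as np

# Hypothetical feature matrices; real ones would come from the trained model.
query_feats = np.random.randn(100, 2048).astype(np.float32)
gallery_feats = np.random.randn(500, 2048).astype(np.float32)

def l2_normalize(feats, eps=1e-12):
    # Scale each row (one image's feature vector) to unit L2 norm.
    norms = np.linalg.norm(feats, axis=1, keepdims=True)
    return feats / np.maximum(norms, eps)

q = l2_normalize(query_feats)
g = l2_normalize(gallery_feats)

# With unit-norm features, squared Euclidean distance equals 2 - 2*cosine,
# so ranking by either metric gives the same retrieval order.
dist = 2.0 - 2.0 * (q @ g.T)
ranking = np.argsort(dist, axis=1)  # per query: gallery indices, nearest first
```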

@Simon4Yan
Owner

@liangbh6 @Simon4john Thanks for your interest. The code for re-ID feature learning is mainly modified from IDE, and the framework is Caffe.

@Simon4Yan
Owner

Thanks for your question. We conducted an experiment to examine the difference between PyTorch and Caffe, and we found that BN leads to this performance gap. I will give the experimental details after I get back to school.

@Simon4Yan
Owner

Simon4Yan commented Mar 12, 2018

With the help of Houjing Huang (his homepage is here), I found that the performance gap between PyTorch and Caffe is caused by BN.

I summarize Huang's experiments here:

The key point is whether you set the BN layers to train or eval mode during training. Eval mode for the BN layers during training, which corresponds to Caffe's batch_norm_param {use_global_stats: true}, means using the ImageNet BN mean and variance during training.
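
In PyTorch, this setting can be reproduced with a sketch like the following (the torchvision ResNet-50 backbone is an assumption; only the BN-freezing pattern is the point):

```python
import torch.nn as nn
import torchvision

# Assumed backbone for illustration; the IDE baseline typically uses ResNet-50.
model = torchvision.models.resnet50(pretrained=True)

def set_bn_eval(module):
    # Eval mode makes BN use its stored running mean/variance (here the
    # ImageNet statistics) instead of batch statistics, and stops updating
    # them -- the PyTorch counterpart of Caffe's `use_global_stats: true`.
    if isinstance(module, nn.BatchNorm2d):
        module.eval()

# Setting (2) below: train mode for the whole network, eval mode for BN only.
# Re-apply after every call to model.train(), since that call switches the
# BN layers back to train mode.
model.train()
model.apply(set_bn_eval)
```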

We trained the models in PyTorch with the same settings as in Caffe.

(1) When setting the BN layers to train mode during training and eval mode during testing, the scores are as follows:

  • Market1501->Market1501 [mAP: 58.13%], [cmc1: 78.95%]
  • Market1501->Duke [mAP: 11.55%], [cmc1: 21.99%]

(2) When setting the BN layers to eval mode during both training and testing, the scores are as follows:

  • Market1501->Market1501 [mAP: 52.38%], [cmc1: 76.31%]
  • Market1501->Duke [mAP: 16.68%], [cmc1: 31.82%]

Therefore, we believe BN is the key factor in the performance gap between Caffe and PyTorch.

@liangbh6
Author

@Simon4Yan Excellent work! Thanks a lot.
