Poor performance when reproducing evaluation on market1501 #81

Open
rrryan2016 opened this issue Jun 12, 2021 · 2 comments

@rrryan2016

Thanks for your great work and kind sharing.

I am a beginner in ReID, and I am trying to reproduce the test stage of this repo first.

Environment: RTX 3090 Ti, CUDA 11.0, Python 2.7, PyTorch 0.3.1

I strictly followed the guidance in the README, but got the result below for ResNet-50 + Global Loss on Market1501:

Extracting feature...
1000/1000 batches done, +0.58s, total 454.75s
Done, 454.93s
Computing global distance...
Done, 0.57s
Computing scores for Global Distance...
[mAP: 0.11%], [cmc1: 0.06%], [cmc5: 0.53%], [cmc10: 0.95%]
Done, 8.84s
Re-ranking...
Done, 52.41s
Computing scores for re-ranked Global Distance...
[mAP: 0.11%], [cmc1: 0.06%], [cmc5: 0.53%], [cmc10: 0.95%]
Done, 9.68s

In detail, I downloaded the transformed Market1501 from the Google Drive link you provided, and configured the corresponding code in __init__.py as stated in https://github.com/huanghoujing/AlignedReID-Re-Production-Pytorch#configure-dataset-path.

Then I downloaded the saved model weights for ResNet-50 + Global Loss, both with and without mutual learning, from the Google Drive link.
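
For reference, a minimal sketch (the path is a placeholder, nothing repo-specific) of inspecting a downloaded weight file to confirm it is a plain state dict with sensible layer names:

import torch

# Minimal sketch (placeholder path): confirm the downloaded file is a plain
# state dict of tensors rather than a full checkpoint with optimizer state.
sd = torch.load('model_weight.pth', map_location='cpu')
print(type(sd))  # expect dict / OrderedDict
for k in list(sd.keys())[:5]:
    print(k, getattr(sd[k], 'shape', type(sd[k])))  # layer name -> tensor shape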

Here is the detailed command and result for the ResNet-50 + Global Loss + Mutual Learning setting on Market1501:

python script/experiment/train.py \
-d '(0,)' \
--dataset market1501 \
--normalize_feature false \
-glw 1 \
-llw 0 \
-idlw 0 \
--only_test true \
--exp_dir /home/vgc/users/lwz/code/reID/AlignedReID-Re-Production-Pytorch/results/exp/resnet50_global_loss_mutual \
--model_weight_file /home/vgc/users/lwz/result/reID/checkpoint/ResNet-50_Global_Loss_Mutual_Learning/model_weight.pth

and here is the full output:

cfg.dict
{'base_lr': 0.0002,
'ckpt_file': '/home/vgc/users/lwz/code/reID/AlignedReID-Re-Production-Pytorch/results/exp/resnet50_global_loss_mutual/ckpt.pth',
'crop_prob': 0,
'crop_ratio': 1,
'dataset': 'market1501',
'exp_decay_at_epoch': 76,
'exp_dir': '/home/vgc/users/lwz/code/reID/AlignedReID-Re-Production-Pytorch/results/exp/resnet50_global_loss_mutual',
'g_loss_weight': 1.0,
'global_margin': 0.3,
'id_loss_weight': 0.0,
'ids_per_batch': 32,
'im_mean': [0.486, 0.459, 0.408],
'im_std': [0.229, 0.224, 0.225],
'ims_per_id': 4,
'l_loss_weight': 0.0,
'local_conv_out_channels': 128,
'local_dist_own_hard_sample': False,
'local_margin': 0.3,
'log_steps': 10000000000.0,
'log_to_file': True,
'lr_decay_type': 'exp',
'model_weight_file': '/home/vgc/users/lwz/result/reID/checkpoint/ResNet-50_Global_Loss_Mutual_Learning/model_weight.pth',
'normalize_feature': False,
'only_test': True,
'prefetch_threads': 2,
'resize_h_w': (256, 128),
'resume': False,
'run': 1,
'scale_im': True,
'seed': None,
'staircase_decay_at_epochs': (101, 201),
'staircase_decay_multiply_factor': 0.1,
'stderr_file': '/home/vgc/users/lwz/code/reID/AlignedReID-Re-Production-Pytorch/results/exp/resnet50_global_loss_mutual/stderr_2021-06-12_15:42:23.txt',
'stdout_file': '/home/vgc/users/lwz/code/reID/AlignedReID-Re-Production-Pytorch/results/exp/resnet50_global_loss_mutual/stdout_2021-06-12_15:42:23.txt',
'sys_device_ids': (1,),
'test_batch_size': 32,
'test_final_batch': True,
'test_mirror_type': None,
'test_set_kwargs': {'batch_dims': 'NCHW',
'batch_size': 32,
'final_batch': True,
'im_mean': [0.486, 0.459, 0.408],
'im_std': [0.229, 0.224, 0.225],
'mirror_type': None,
'name': 'market1501',
'num_prefetch_threads': 2,
'part': 'test',
'prng': <module 'numpy.random' from '/home/vgc/anaconda3/envs/lwz27/lib/python2.7/site-packages/numpy/random/__init__.pyc'>,
'resize_h_w': (256, 128),
'scale': True,
'shuffle': False},
'test_shuffle': False,
'total_epochs': 150,
'train_final_batch': False,
'train_mirror_type': 'random',
'train_set_kwargs': {'batch_dims': 'NCHW',
'crop_prob': 0,
'crop_ratio': 1,
'final_batch': False,
'ids_per_batch': 32,
'im_mean': [0.486, 0.459, 0.408],
'im_std': [0.229, 0.224, 0.225],
'ims_per_id': 4,
'mirror_type': 'random',
'name': 'market1501',
'num_prefetch_threads': 2,
'part': 'trainval',
'prng': <module 'numpy.random' from '/home/vgc/anaconda3/envs/lwz27/lib/python2.7/site-packages/numpy/random/__init__.pyc'>,
'resize_h_w': (256, 128),
'scale': True,
'shuffle': True},
'train_shuffle': True,
'trainset_part': 'trainval',
'weight_decay': 0.0005}


market1501 trainval set

NO. Images: 12936
NO. IDs: 751


market1501 test set

NO. Images: 31969
NO. IDs: 751
NO. Query Images: 3368
NO. Gallery Images: 15913
NO. Multi-query Images: 12688

/home/vgc/anaconda3/envs/lwz27/lib/python2.7/site-packages/torch/cuda/__init__.py:95: UserWarning:
Found GPU0 GeForce RTX 3090 which requires CUDA_VERSION >= 9000 for
optimal performance and fast startup time, but your PyTorch was compiled
with CUDA_VERSION 8000. Please install the correct PyTorch binary
using instructions from http://pytorch.org

warnings.warn(incorrect_binary_warn % (d, name, 9000, CUDA_VERSION))
Loaded model weights from /home/vgc/users/lwz/result/reID/checkpoint/ResNet-50_Global_Loss_Mutual_Learning/model_weight.pth

=========> Test on dataset: market1501 <=========

Extracting feature...
1000/1000 batches done, +0.58s, total 458.88s
Done, 459.08s
Computing global distance...
Done, 0.60s
Computing scores for Global Distance...
[mAP: 1.63%], [cmc1: 0.06%], [cmc5: 0.45%], [cmc10: 0.92%]
Done, 8.12s
Re-ranking...
Done, 53.22s
Computing scores for re-ranked Global Distance...
[mAP: 1.63%], [cmc1: 0.06%], [cmc5: 0.45%], [cmc10: 0.92%]
Done, 8.62s
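
For reference, the UserWarning above can be cross-checked with plain PyTorch introspection; a minimal sketch (nothing repo-specific) comparing the CUDA version the binary was built with against the GPU's compute capability:

import torch

# An RTX 3090 has compute capability 8.6 and generally needs a PyTorch binary
# built against CUDA 11.x; a cuda80 build of PyTorch 0.3.1 cannot run its
# kernels natively on this card.
print(torch.__version__)                    # installed PyTorch version
print(torch.version.cuda)                   # CUDA version the binary was built with
print(torch.cuda.get_device_name(0))        # detected GPU
print(torch.cuda.get_device_capability(0))  # e.g. (8, 6) for an RTX 3090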

Any recommendations on what to do next? :P

Thanks in advance.

@rrryan2016
Author

After training on Market1501 myself, the evaluation results turn out to be:

=========> Test on dataset: market1501 <=========
Extracting feature...
1000/1000 batches done, +0.58s, total 30.42s
Done, 30.58s
Computing global distance...
Done, 0.52s
Computing scores for Global Distance...
[mAP: 50.04%], [cmc1: 0.00%], [cmc5: 0.24%], [cmc10: 0.24%]
Done, 5.63s
Re-ranking...
Done, 39.60s
Computing scores for re-ranked Global Distance...
[mAP: 50.04%], [cmc1: 0.00%], [cmc5: 0.24%], [cmc10: 0.24%]
Done, 6.17s

@rrryan2016
Author

If I change the ResNet-50 backbone to DenseNet-121 and train, it comes out like this:

Extracting feature...
1000/1000 batches done, +0.77s, total 39.60s
Done, 39.70s
Computing global distance...
Done, 0.45s
Computing scores for Global Distance...
[mAP: 50.05%], [cmc1: 0.00%], [cmc5: 0.24%], [cmc10: 0.24%]
Done, 5.92s
Re-ranking...
./aligned_reid/utils/re_ranking.py:45: RuntimeWarning: invalid value encountered in divide
original_dist = np.transpose(1. * original_dist/np.max(original_dist,axis = 0))
Done, 42.30s
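
For reference, that RuntimeWarning comes from dividing by a column-wise maximum that is zero or NaN; a minimal self-contained sketch of a guard, shown on a toy matrix (an illustration, not the repo's code):

import numpy as np

# Toy distance matrix with an all-zero column to reproduce the warning's cause.
original_dist = np.array([[0.0, 2.0],
                          [0.0, 4.0]])

col_max = np.max(original_dist, axis=0)
# Replace zero / non-finite maxima by 1 so those columns are left unscaled.
safe_max = np.where(np.isfinite(col_max) & (col_max > 0), col_max, 1.0)
original_dist = np.transpose(original_dist / safe_max)
print(original_dist)

A zero or NaN column maximum usually points at degenerate extracted features (e.g. NaN distances), so a guard like this only hides the symptom.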
