diff --git a/README.rst b/README.rst
index c23b7ae33..e4fbedb17 100755
--- a/README.rst
+++ b/README.rst
@@ -1,5 +1,20 @@
Torchreid
===========
+.. image:: https://img.shields.io/github/license/KaiyangZhou/deep-person-reid
+ :alt: GitHub license
+ :target: https://github.com/KaiyangZhou/deep-person-reid/blob/master/LICENSE
+
+.. image:: https://img.shields.io/github/v/release/KaiyangZhou/deep-person-reid
+ :alt: GitHub release (latest by date)
+
+.. image:: https://img.shields.io/github/stars/KaiyangZhou/deep-person-reid
+ :alt: GitHub stars
+ :target: https://github.com/KaiyangZhou/deep-person-reid/stargazers
+
+.. image:: https://img.shields.io/github/forks/KaiyangZhou/deep-person-reid
+ :alt: GitHub forks
+ :target: https://github.com/KaiyangZhou/deep-person-reid/network
+
Torchreid is a library built on `PyTorch <https://pytorch.org/>`_ for deep-learning person re-identification.
It features:
@@ -26,6 +41,8 @@ How-to instructions: https://kaiyangzhou.github.io/deep-person-reid/user_guide.
Model zoo: https://kaiyangzhou.github.io/deep-person-reid/MODEL_ZOO.
+Tech report: https://arxiv.org/abs/1910.10093.
+
Installation
---------------
@@ -248,6 +265,7 @@ ReID-specific models
- `PCB `_
- `MLFN `_
- `OSNet <https://arxiv.org/abs/1905.00953>`_
+- `OSNet-AIN <https://arxiv.org/abs/1910.06827>`_
Losses
------
@@ -257,13 +275,27 @@ Losses
Citation
---------
-If you find this code useful to your research, please cite the following publication.
+If you find this code useful to your research, please cite the following publications.
.. code-block:: bibtex
+
+ @article{torchreid,
+ title={Torchreid: A Library for Deep Learning Person Re-Identification in Pytorch},
+ author={Zhou, Kaiyang and Xiang, Tao},
+ journal={arXiv preprint arXiv:1910.10093},
+ year={2019}
+ }
- @article{zhou2019osnet,
+ @inproceedings{zhou2019osnet,
title={Omni-Scale Feature Learning for Person Re-Identification},
author={Zhou, Kaiyang and Yang, Yongxin and Cavallaro, Andrea and Xiang, Tao},
- journal={arXiv preprint arXiv:1905.00953},
+ booktitle={ICCV},
+ year={2019}
+ }
+
+ @article{zhou2019learning,
+ title={Learning Generalisable Omni-Scale Representations for Person Re-Identification},
+ author={Zhou, Kaiyang and Yang, Yongxin and Cavallaro, Andrea and Xiang, Tao},
+ journal={arXiv preprint arXiv:1910.06827},
year={2019}
}
diff --git a/docs/figures/ranked_results.jpg b/docs/figures/ranking_results.jpg
similarity index 100%
rename from docs/figures/ranked_results.jpg
rename to docs/figures/ranking_results.jpg
diff --git a/docs/user_guide.rst b/docs/user_guide.rst
index add1e1e15..8cacc7c25 100644
--- a/docs/user_guide.rst
+++ b/docs/user_guide.rst
@@ -201,11 +201,11 @@ Visualize learning curves with tensorboard
The ``SummaryWriter()`` for tensorboard is initialized automatically in ``engine.run()`` when you train your model, so no extra setup is required on your part. After training finishes, the ``*tf.events*`` file is saved in ``save_dir``. Then, simply run ``tensorboard --logdir=your_save_dir`` in your terminal and visit ``http://localhost:6006/`` in a web browser. See `pytorch tensorboard <https://pytorch.org/docs/stable/tensorboard.html>`_ for further information.
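For reference, a training run that produces these event files can be sketched as follows. This is a minimal sketch based on the Torchreid quickstart; the dataset root ``'reid-data'`` and ``save_dir='log/osnet'`` are placeholder values, and running it requires the Market1501 dataset to be available locally.

```python
import torchreid

# Build a data manager (downloads/loads Market1501 from the given root).
datamanager = torchreid.data.ImageDataManager(
    root='reid-data',
    sources='market1501',
    height=256,
    width=128,
    batch_size_train=32,
    batch_size_test=100
)

# Build a model, optimizer and engine.
model = torchreid.models.build_model(
    name='osnet_x1_0',
    num_classes=datamanager.num_train_pids,
    loss='softmax'
)
optimizer = torchreid.optim.build_optimizer(model, optim='adam', lr=0.0003)
engine = torchreid.engine.ImageSoftmaxEngine(
    datamanager, model, optimizer=optimizer
)

# engine.run() initializes SummaryWriter() internally and writes *tf.events*
# files into save_dir; afterwards run `tensorboard --logdir=log/osnet`
# and open http://localhost:6006/ in a browser.
engine.run(save_dir='log/osnet', max_epoch=60, eval_freq=10, print_freq=10)
```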
-Visualize ranked results
--------------------------
-Ranked images can be visualized by setting ``visrank`` to true in ``engine.run()``. ``visrank_topk`` determines the top-k images to be visualized (Default is ``visrank_topk=10``). Note that ``visrank`` can only be used in test mode, i.e. ``test_only=True`` in ``engine.run()``. The images will be saved under ``save_dir/visrank_DATASETNAME`` where each image contains the top-k ranked list given a query. An example is shown below. Red and green denote incorrect and correct matches respectively.
+Visualize ranking results
+---------------------------
+This can be achieved by setting ``visrank=True`` in ``engine.run()``. ``visrank_topk`` determines how many top-ranked images are visualized (default: ``visrank_topk=10``). Note that ``visrank`` can only be used in test mode, i.e. with ``test_only=True`` in ``engine.run()``. The output is saved under ``save_dir/visrank_DATASETNAME``, where each plot shows the top-k similar gallery images for a query. An example is shown below, where red and green denote incorrect and correct matches respectively.
-.. image:: figures/ranked_results.jpg
+.. image:: figures/ranking_results.jpg
:width: 800px
:align: center
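A test-mode run that produces these ranking plots can be sketched as follows. This assumes a ``datamanager``, ``model`` and ``engine`` have already been built as in the quickstart; the checkpoint path ``'log/osnet/model.pth.tar-60'`` is a hypothetical placeholder for your own trained weights.

```python
from torchreid.utils import load_pretrained_weights

# Load trained weights into the model before evaluation
# (path is a placeholder; point it at your own checkpoint).
load_pretrained_weights(model, 'log/osnet/model.pth.tar-60')

engine.run(
    test_only=True,        # visrank only works in test mode
    visrank=True,          # draw the ranking plots
    visrank_topk=10,       # top-10 gallery matches per query (the default)
    save_dir='log/osnet'   # plots go to log/osnet/visrank_DATASETNAME
)
```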