Reproduction of the MobileNetV2 architecture, as described in MobileNetV2: Inverted Residuals and Linear Bottlenecks by Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov and Liang-Chieh Chen, on the ILSVRC2012 benchmark with the PyTorch framework. Adapted from pytorch-classification and pytorch-mobilenet-v2.
This implementation also serves as an example pipeline for training and validating common deep neural network architectures, with modular data processing, training, logging and visualization built in.
Download the ImageNet dataset and move the validation images into labeled subfolders. You can use the following script to do so: https://raw.githubusercontent.com/soumith/imagenetloader.torch/master/valprep.sh
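The script simply moves each validation JPEG into a subfolder named after its WordNet ID, which is the layout torchvision-style loaders expect. Its effect can be sketched in Python; the filenames and synset IDs below are made up for illustration (valprep.sh encodes the real mapping for all 50,000 images):

```python
import os
import shutil
import tempfile

# Hypothetical image-to-label mapping; valprep.sh hardcodes the real one.
labels = {
    "ILSVRC2012_val_00000001.JPEG": "n01751748",
    "ILSVRC2012_val_00000002.JPEG": "n09193705",
}

val_dir = tempfile.mkdtemp()
for name in labels:                       # stand-ins for the flat val/ images
    open(os.path.join(val_dir, name), "w").close()

for name, wnid in labels.items():         # move each image into its class folder
    class_dir = os.path.join(val_dir, wnid)
    os.makedirs(class_dir, exist_ok=True)
    shutil.move(os.path.join(val_dir, name), class_dir)

print(sorted(os.listdir(val_dir)))        # only labeled subfolders remain
```

After this step, val/ mirrors the layout of train/, so the same folder-per-class data loader can be used for both splits.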
NumPy 1.15.0
matplotlib 2.2.3
PyTorch 0.4.1
torchvision 0.2.1
tensorboard 1.9.0
tensorboardX 1.2
Our pretrained model achieves 71.97% top-1 and 90.52% top-5 accuracy on the ImageNet validation set, comparable with the statistics reported in the original paper and the official TensorFlow implementation.
The pretrained model can be imported with the following lines and then finetuned for other vision tasks or deployed on resource-limited platforms.
import torch
from models.imagenet import mobilenetv2

net = mobilenetv2()
net.load_state_dict(torch.load('pretrained/mobilenetv2-36f4e720.pth'))
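A common next step after loading is to freeze the backbone and swap the 1000-way classification head for a new task. Below is a minimal sketch; TinyNet is a small stand-in model defined here for illustration, under the assumption that the repo's MobileNetV2 similarly exposes a `features` backbone and a final `classifier` Linear layer:

```python
import torch
import torch.nn as nn

# Stand-in for the repo's MobileNetV2 (assumed to expose `features`
# and a final `classifier` Linear layer in the same way).
class TinyNet(nn.Module):
    def __init__(self, num_classes=1000):
        super().__init__()
        self.features = nn.Sequential(nn.Conv2d(3, 8, 3, 2, 1), nn.ReLU())
        self.classifier = nn.Linear(8, num_classes)

    def forward(self, x):
        x = self.features(x).mean([2, 3])   # global average pooling
        return self.classifier(x)

net = TinyNet()

# Freeze the backbone so only the new head is trained.
for p in net.features.parameters():
    p.requires_grad = False

# Replace the 1000-way head with a 10-class one for the new task.
net.classifier = nn.Linear(net.classifier.in_features, 10)

out = net(torch.randn(2, 3, 32, 32))
print(out.shape)  # torch.Size([2, 10])
```

Training then proceeds as usual, with the optimizer given only `net.classifier.parameters()`.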
If you use this model in your work, please cite the MobileNetV2 paper with the following BibTeX entry.
@InProceedings{Sandler_2018_CVPR,
author = {Sandler, Mark and Howard, Andrew and Zhu, Menglong and Zhmoginov, Andrey and Chen, Liang-Chieh},
title = {MobileNetV2: Inverted Residuals and Linear Bottlenecks},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2018}
}
This repository is licensed under the Apache License 2.0.