This repository contains implementations of popular Convolutional Neural Network (CNN) backbone networks for image classification, object detection, and segmentation tasks. The goal is to provide easy-to-use and well-documented implementations of these models for researchers and practitioners.
To use the code in this repository, follow the instructions below to install dependencies, run the models, and explore the different implementations.
- Clone the repository:

  ```bash
  git clone https://github.com/lvzongyao/cnn-classification-networks.git
  cd cnn-classification-networks
  ```

- Create a virtual environment and activate it:

  ```bash
  python -m venv venv
  source venv/bin/activate  # On Windows, use venv\Scripts\activate
  ```
To run the models, use the provided `train.py` script (`vit_train.py` for the Vision Transformer). Here are some examples:
- Train a ResNet-34 model on the CIFAR-10 dataset:

  ```bash
  python train.py --dataset cifar10 --data_path /path/to/cifar10 --num_classes 10 --batch_size 128 --model resnet34 --num_epochs 30
  ```

- Train a ResNet-34 model on the CIFAR-100 dataset:

  ```bash
  python train.py --dataset cifar100 --data_path /path/to/cifar100 --num_classes 100 --batch_size 128 --model resnet34 --num_epochs 30
  ```

- Train ViT on MNIST:

  ```bash
  python vit_train.py --dataset mnist --epochs 10 --batch_size 128 --lr 0.001 --device cuda --num_classes 10
  ```

- Train ViT on CIFAR-10:

  ```bash
  python vit_train.py --dataset cifar10 --epochs 20 --batch_size 64 --lr 0.0001 --device cuda --num_classes 10
  ```

- Train ViT on CIFAR-100:

  ```bash
  python vit_train.py --dataset cifar100 --epochs 30 --batch_size 32 --lr 0.0005 --device cuda --num_classes 100
  ```
This repository includes implementations of the following models:
- LeNet: A classic convolutional neural network for image classification.
  - Implementation: `models/LeNet/lenet.py`
  - Training script: `models/LeNet/train.py`
- AlexNet: A deep convolutional neural network for image classification.
  - Implementation: `models/alexnet.py`
- Network in Network (NIN): A network that replaces traditional convolutional layers with micro neural networks.
  - Implementation: `models/nin.py`
- VGG: A deep convolutional neural network with very small convolutional filters.
  - Implementation: `models/vgg.py`
  - Alternative implementation: `models/vgg_2.py`
- AllConvNet: An all-convolutional network for image classification.
  - Implementation: `models/allconvnet/allconvnet.ipynb`
  - Python file: `models/allconvnet/allconvnet.py`
- ResNet: A deep residual network for image classification.
  - Implementation: `models/resnet.py`
- Vision Transformer (ViT): A transformer-based model for image classification.
  - Implementation: `models/vit.py`
The repository is organized as follows:
```
CNNs-to-Vision-Transformers/
├── models/
│   ├── LeNet/
│   │   ├── lenet.py
│   │   ├── train.py
│   ├── allconvnet/
│   │   ├── allconvnet.ipynb
│   │   ├── allconvnet.py
│   │   ├── best.pt
│   │   ├── checkpoint/
│   │   │   ├── allconvnet_cifar10_epoch_1.ckpt
│   │   │   ├── allconvnet_cifar10_epoch_2.ckpt
│   ├── alexnet.py
│   ├── nin.py
│   ├── resnet.py
│   ├── vgg.py
│   ├── vgg_2.py
│   ├── vit.py
├── train.py
├── utils.py
├── utils_for_google_drive.py
├── commands.txt
├── .gitignore
└── README.md
```
LeNet [paper] (1998)
The LeNet model is a classic convolutional neural network for image classification. It was one of the first successful applications of CNNs and is widely used for educational purposes.
AlexNet [paper] (2012)
The AlexNet model is a deep convolutional neural network for image classification. It won the 2012 ImageNet (ILSVRC) competition by a large margin and demonstrated the effectiveness of deep CNNs trained on GPUs.
Network in Network (NIN) [paper] (2013)
The Network in Network (NIN) model replaces conventional convolutional filters with small "micro" multilayer perceptrons (implemented as 1×1 convolutions) and uses global average pooling in place of fully connected layers, improving representational power without a large parameter cost.
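The repository's version lives in `models/nin.py`; the snippet below is only a rough PyTorch sketch of the idea (hypothetical names, not that file's code, assuming PyTorch as the framework, which the training scripts suggest):

```python
import torch
import torch.nn as nn

def mlpconv(in_ch, out_ch, kernel_size, stride, padding):
    """One NIN-style 'mlpconv' block: a spatial convolution followed by two
    1x1 convolutions, which act as a small per-pixel MLP over the channels."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size, stride, padding), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, kernel_size=1), nn.ReLU(inplace=True),
    )

class TinyNIN(nn.Module):
    """Illustrative NIN-style classifier: the last mlpconv emits one channel
    per class and global average pooling replaces fully connected layers."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            mlpconv(3, 96, 5, 1, 2), nn.MaxPool2d(3, 2, 1),
            mlpconv(96, 192, 5, 1, 2), nn.MaxPool2d(3, 2, 1),
            mlpconv(192, num_classes, 3, 1, 1),
        )
        self.pool = nn.AdaptiveAvgPool2d(1)  # global average pooling

    def forward(self, x):
        x = self.pool(self.features(x))
        return torch.flatten(x, 1)  # (batch, num_classes)

# TinyNIN()(torch.randn(2, 3, 32, 32)).shape -> torch.Size([2, 10])
```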
VGG [paper] (2014)
The VGG model is a deep convolutional neural network built from stacks of very small (3×3) convolutional filters. It is known for its simplicity and effectiveness in image classification tasks.
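The repository provides `models/vgg.py` and `models/vgg_2.py`; as an illustration only (a hypothetical PyTorch sketch, not either of those files), a VGG-style stage of stacked 3×3 convolutions could look like this:

```python
import torch
import torch.nn as nn

def vgg_block(in_ch, out_ch, num_convs):
    """A VGG-style stage: a stack of 3x3 convolutions followed by 2x2 max pooling.
    Two stacked 3x3 convolutions cover a 5x5 receptive field with fewer parameters."""
    layers = []
    for i in range(num_convs):
        layers += [nn.Conv2d(in_ch if i == 0 else out_ch, out_ch, 3, padding=1),
                   nn.ReLU(inplace=True)]
    layers.append(nn.MaxPool2d(2, 2))
    return nn.Sequential(*layers)

# A small VGG-like feature extractor (not the repository's configuration):
features = nn.Sequential(
    vgg_block(3, 64, 2),
    vgg_block(64, 128, 2),
    vgg_block(128, 256, 3),
)
print(features(torch.randn(1, 3, 32, 32)).shape)  # torch.Size([1, 256, 4, 4])
```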
AllConvNet [paper] (2015)
The AllConvNet model is an all-convolutional network for image classification. It replaces pooling layers with strided convolutions and fully connected layers with 1×1 convolutions followed by global average pooling, yielding a simpler, homogeneous architecture.
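The repository's notebook and script live under `models/allconvnet/`; the sketch below is a hypothetical PyTorch illustration of the pattern, not the repository's implementation:

```python
import torch
import torch.nn as nn

class TinyAllConv(nn.Module):
    """Illustrative all-convolutional classifier: strided 3x3 convolutions take
    the place of pooling layers, and a 1x1 convolution plus global average
    pooling takes the place of fully connected layers."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 96, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(96, 96, 3, stride=2, padding=1), nn.ReLU(inplace=True),    # downsampling by stride, no pooling
            nn.Conv2d(96, 192, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(192, 192, 3, stride=2, padding=1), nn.ReLU(inplace=True),  # downsampling by stride, no pooling
            nn.Conv2d(192, num_classes, 1),                                      # 1x1 conv instead of a linear layer
            nn.AdaptiveAvgPool2d(1),                                             # global average pooling
        )

    def forward(self, x):
        return torch.flatten(self.net(x), 1)

# TinyAllConv()(torch.randn(2, 3, 32, 32)).shape -> torch.Size([2, 10])
```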
ResNet [paper] (2015)
The ResNet model is a deep residual network for image classification. It introduces residual connections, which help to mitigate the vanishing gradient problem and enable the training of very deep networks.
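The repository's implementation is `models/resnet.py`; as a minimal PyTorch sketch of the core idea (hypothetical code, not that file), a basic residual block looks roughly like this:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BasicBlock(nn.Module):
    """Illustrative ResNet basic block: two 3x3 convolutions with an identity
    (or 1x1-projection) shortcut added back in before the final activation."""
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, out_ch, 3, stride=stride, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(out_ch)
        self.conv2 = nn.Conv2d(out_ch, out_ch, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(out_ch)
        # Project the input when the shape changes so the addition is valid.
        self.shortcut = nn.Sequential()
        if stride != 1 or in_ch != out_ch:
            self.shortcut = nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 1, stride=stride, bias=False),
                nn.BatchNorm2d(out_ch),
            )

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        out = out + self.shortcut(x)   # the residual connection
        return F.relu(out)

# BasicBlock(64, 128, stride=2)(torch.randn(1, 64, 16, 16)).shape -> torch.Size([1, 128, 8, 8])
```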
Vision Transformer (ViT) [paper] (2020)
The Vision Transformer (ViT) applies the transformer architecture, originally designed for natural language processing, to image recognition. It splits an image into fixed-size patches, embeds each patch as a token, and processes the resulting sequence with a standard transformer encoder, achieving state-of-the-art results on a range of vision benchmarks.
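The repository's implementation is `models/vit.py`, trained via `vit_train.py`; the following is a hypothetical PyTorch sketch of the patch-embedding step only, not the repository's code:

```python
import torch
import torch.nn as nn

class PatchEmbedding(nn.Module):
    """Illustrative ViT front end: split the image into non-overlapping patches
    with a strided convolution, flatten them into a token sequence, and prepend
    a learnable [CLS] token plus positional embeddings."""
    def __init__(self, img_size=32, patch_size=4, in_ch=3, dim=192):
        super().__init__()
        num_patches = (img_size // patch_size) ** 2
        self.proj = nn.Conv2d(in_ch, dim, kernel_size=patch_size, stride=patch_size)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + 1, dim))

    def forward(self, x):
        tokens = self.proj(x).flatten(2).transpose(1, 2)         # (B, num_patches, dim)
        cls = self.cls_token.expand(x.size(0), -1, -1)           # one [CLS] token per image
        return torch.cat([cls, tokens], dim=1) + self.pos_embed  # (B, num_patches + 1, dim)

# The token sequence is then fed to a standard transformer encoder, e.g.:
# encoder = nn.TransformerEncoder(
#     nn.TransformerEncoderLayer(d_model=192, nhead=4, batch_first=True), num_layers=6)
# out = encoder(PatchEmbedding()(torch.randn(2, 3, 32, 32)))  # (2, 65, 192)
```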