Merge pull request #16 from AFAgarap/docs-cnn-svm
docs : add results documentation
AFAgarap authored Dec 9, 2017
2 parents 6b818a6 + 3b8c87a commit a187481
Showing 1 changed file, README.md, with 66 additions and 0 deletions.
@@ -7,6 +7,8 @@ An Architecture Combining Convolutional Neural Network (CNN) and Support Vector
*This project was inspired by Y. Tang's [Deep Learning using Support Vector Machines](https://arxiv.org/abs/1306.0239)
(2013).*

## Abstract

Convolutional Neural Networks (CNNs) are similar to "ordinary" neural networks in the sense that they are made up of
hidden layers consisting of neurons with "learnable" parameters. These neurons receive inputs, perform a dot product,
and then follow it with a non-linearity. The whole network expresses the mapping between raw image pixels and class scores.
@@ -20,8 +22,72 @@ inspired by [[Tang, 2013](https://arxiv.org/abs/1306.0239)].
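
To make the idea concrete, below is a minimal sketch of such a network: a small CNN whose last dense layer outputs raw class scores, which can then be fed either to a softmax cross-entropy loss (CNN-Softmax) or to an SVM hinge loss (CNN-SVM). The layer sizes and the use of `tf.keras` here are illustrative assumptions, not the repository's exact implementation.
```python
import tensorflow as tf

def build_cnn(num_classes=10):
    # Each unit takes a dot product of its inputs and weights and then
    # applies a ReLU non-linearity, as described in the abstract.
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 5, activation="relu", input_shape=(28, 28, 1)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 5, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(1024, activation="relu"),
        # Linear output layer: raw class scores, usable by either a softmax
        # cross-entropy loss (CNN-Softmax) or an SVM hinge loss (CNN-SVM).
        tf.keras.layers.Dense(num_classes),
    ])
```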

## Usage

First, clone the project.
```bash
git clone https://github.com/AFAgarap/cnn-svm.git
```

Run `setup.sh` to ensure that the prerequisite libraries are installed in the environment.
```bash
sudo chmod +x setup.sh
./setup.sh
```

Program parameters.
```bash
usage: main.py [-h] -m MODEL -d DATASET [-p PENALTY_PARAMETER] -c
CHECKPOINT_PATH -l LOG_PATH

CNN & CNN-SVM for Image Classification

optional arguments:
-h, --help show this help message and exit

Arguments:
-m MODEL, --model MODEL
[1] CNN-Softmax, [2] CNN-SVM
-d DATASET, --dataset DATASET
path of the MNIST dataset
-p PENALTY_PARAMETER, --penalty_parameter PENALTY_PARAMETER
the SVM C penalty parameter
-c CHECKPOINT_PATH, --checkpoint_path CHECKPOINT_PATH
path where to save the trained model
-l LOG_PATH, --log_path LOG_PATH
path where to save the TensorBoard logs
```
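
For reference, a parser along the following lines would produce a usage message like the one above. This is a sketch of the assumed CLI wiring, not necessarily the exact code in `main.py`.
```python
import argparse

def parse_args():
    # Reconstruct the CLI described by the usage message above.
    parser = argparse.ArgumentParser(
        description="CNN & CNN-SVM for Image Classification"
    )
    group = parser.add_argument_group("Arguments")
    group.add_argument("-m", "--model", required=True,
                       help="[1] CNN-Softmax, [2] CNN-SVM")
    group.add_argument("-d", "--dataset", required=True,
                       help="path of the MNIST dataset")
    # Assumed type/default for the optional penalty parameter.
    group.add_argument("-p", "--penalty_parameter", type=float, default=1.0,
                       help="the SVM C penalty parameter")
    group.add_argument("-c", "--checkpoint_path", required=True,
                       help="path where to save the trained model")
    group.add_argument("-l", "--log_path", required=True,
                       help="path where to save the TensorBoard logs")
    return parser.parse_args()
```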

Then, go to the repository's directory, and run the `main.py` module with the desired parameters.
```bash
cd cnn-svm
python3 main.py --model 2 --dataset ./MNIST_data --penalty_parameter 1 --checkpoint_path ./checkpoint --log_path ./logs
```

## Results

The hyperparameters used in this project were assigned manually, not tuned through hyperparameter optimization.

|Hyperparameters|CNN-Softmax|CNN-SVM|
|---------------|-----------|-------|
|Batch size|128|128|
|Epochs|10000|10000|
|Learning rate|1e-3|1e-3|
|SVM C|N/A|1|
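
To show where the SVM C penalty parameter enters, below is a sketch of an L2-SVM (squared hinge) loss over one-vs-all targets in {-1, +1}, in the spirit of Tang (2013); the exact regularization and reduction used in this repository's implementation may differ.
```python
import tensorflow as tf

def l2_svm_loss(scores, labels_pm1, weights, penalty_C=1.0):
    """Squared hinge loss with an L2 weight penalty (a sketch, not the repo's exact code).

    scores:      [batch, num_classes] raw class scores from the CNN.
    labels_pm1:  [batch, num_classes] one-vs-all targets in {-1, +1}.
    weights:     output-layer weight matrix, penalized by ||w||^2.
    penalty_C:   the SVM C penalty parameter from the table above.
    """
    hinge = tf.maximum(0.0, 1.0 - labels_pm1 * scores)
    data_term = tf.reduce_mean(tf.reduce_sum(tf.square(hinge), axis=1))
    reg_term = 0.5 * tf.reduce_sum(tf.square(weights))
    return reg_term + penalty_C * data_term
```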

The experiments were conducted on a laptop computer with Intel Core(TM) i5-6300HQ CPU @ 2.30GHz x 4, 16GB of DDR3 RAM,
and NVIDIA GeForce GTX 960M 4GB DDR5 GPU.

![](figures/accuracy-loss.png)

**Figure 1. Training accuracy (left) and loss (right) of CNN-Softmax and CNN-SVM on image classification using
[MNIST](http://yann.lecun.com/exdb/mnist/).**

The orange plot refers to the training accuracy and loss of CNN-Softmax, which reached a test accuracy of 99.23%.
The blue plot refers to the training accuracy and loss of CNN-SVM, which reached a test accuracy of 99.04%.
The results do not corroborate the findings of [Tang (2013)](https://arxiv.org/abs/1306.0239)
for [MNIST handwritten digits](http://yann.lecun.com/exdb/mnist/) classification. This may be attributed to the fact
that neither data preprocessing nor dimensionality reduction was applied to the dataset in this project.


## Citation
```
```