
Commit

update readme
Zhylkaaa committed May 20, 2020
1 parent 1f19d9e commit fbfac64
Showing 2 changed files with 20 additions and 4 deletions.
22 changes: 19 additions & 3 deletions Readme.md
@@ -14,7 +14,25 @@
```bash
$ mkdir build
$ cd build
$ cmake ..
$ make
$ ./vulkan_perceptron
```

To check that it works, you can run the example:

```bash
$ ./vulkan_perceptron \
--train_dataset_size 20000 \
--val_dataset_size 10000 \
--train_data_path ../train_MNIST_images.txt \
--train_labels_path ../train_MNIST_labels.txt \
--val_data_path ../val_MNIST_images.txt \
--val_labels_path ../val_MNIST_labels.txt \
--batch_size 32 \
--x_dim 784 \
--y_dim 10 \
--learning_rate 0.3 \
--optimization_steps 1000 \
--layers=512,10 \
--activations=relu,softmax
```

## Quick reference
Expand All @@ -28,8 +46,6 @@ Activation list:
- id - identity (not actually a layer; it is simply omitted)
- softmax - softmax function; not strictly an activation function, but it is listed here for simplicity

You can refer to `lines 10, 11, and 12 of main.cpp` for a usage example.

## TODOs
- Switch to using Tensor instead of VkBuffer
- Implement trainers for different tasks
2 changes: 1 addition & 1 deletion main.cpp
@@ -187,7 +187,7 @@ int main(int argc, char** argv) {

std::vector<float> loss_history;

- trainer.train(optimization_steps, loss_history, 1);
+ trainer.train(optimization_steps, loss_history, 100);

std::cout<<"accuracy after training: "<<mlp.evaluate(val_x, val_y)<<std::endl;

