Commit 4401652 (parent 4dacdbd) by dcbuild3r, Oct 1, 2022. Co-authored-by: Remco Bloemen <[email protected]>. 1 changed file (Readme.md): 70 additions, 6 deletions.
# ZKP Neural Networks

Prototype of evaluation of neural networks inside zero-knowledge proofs using the plonky2 proof system.

## ZKML

To find out more about zero-knowledge machine learning, check out the [awesome-zkml](https://github.com/worldcoin/awesome-zkml) repository we have created. It aggregates scientific research papers, codebases, articles, and use cases in the field of ZKML.

## Potential Worldcoin use cases

[Worldcoin](https://worldcoin.org/) is a [Privacy-Preserving Proof-of-Personhood Protocol](https://worldcoin.org/the-worldcoin-protocol). ZKML could help make our protocol more trustless, more easily upgradeable, and more auditable.

- Verifying that a user has created a valid and unique [WorldID](https://id.worldcoin.org) locally, by running the IrisCode model on self-hosted biometric data and calling the [`_addMember(uint256 groupId, uint256 identityCommitment)`](https://github.com/semaphore-protocol/semaphore/blob/4e6be04729ed2d7e29461a3915877a66a2c9c4d2/contracts/base/SemaphoreGroups.sol#L43) function on the WorldID Semaphore identity group with a valid `identityCommitment` -> makes the protocol more permissionless
- Making the Orb trustless by proving that the fraud filters in its hardware and firmware are applied
- Enable IrisCode upgradeability

## Technological stack

- [Python](https://www.python.org/), [numpy](https://numpy.org/) - A flexible, dynamic programming language and the fundamental package for scientific computing in Python -> used to create a vanilla implementation of a [CNN](https://en.wikipedia.org/wiki/Convolutional_neural_network)
- [Rust](https://www.rust-lang.org/) - A performant, memory-safe, systems-level programming language
- [plonky2](https://github.com/mir-protocol/plonky2) - A powerful zero-knowledge proving system developed by the [Polygon Zero team](https://polygon.technology/solutions/polygon-zero/) -> used to create zero-knowledge circuits for the inference step of a neural network
- [serde](https://serde.rs/) - A serialization/deserialization library for Rust

## Build and run

```bash
python3 vanilla_cnn.py
python3 generate_cnn_json.py
cd ../
# run Rust CNN implementation and compare results against your previous results
cargo test serialize::tests::deserialize_nn_json -- --show-output
```

- This will run the vanilla CNN Python implementation and generate JSON files for the random matrices generated by numpy. These files are then deserialized by serde in the Rust implementation and turned into an ndarray `ArcArray<f32, IxDyn>`.
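The JSON hand-off can be sketched as follows; the shape-plus-flat-data layout used here is an assumption about what `generate_cnn_json.py` emits, not a confirmed spec.

```python
import json

# Hypothetical layout for one layer's weights; the real field names in
# generate_cnn_json.py may differ.
weights = [[0.1, -0.2, 0.3], [0.4, 0.5, -0.6]]  # a 2x3 matrix
payload = {"shape": [2, 3], "data": [x for row in weights for x in row]}

encoded = json.dumps(payload)   # what the Python side writes to disk
decoded = json.loads(encoded)   # what serde parses on the Rust side

# Rebuild the matrix from shape + flat buffer, much like ndarray does
# when constructing an `ArcArray<f32, IxDyn>`.
rows, cols = decoded["shape"]
restored = [decoded["data"][r * cols:(r + 1) * cols] for r in range(rows)]
assert restored == weights
```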
```text
final output: [-0.11425511 -0.13403508 -0.41759714 -0.24778798 0.85626755]
```

- Rust

```text
---- serialize::tests::deserialize_nn_json stdout ----
layer | output shape | #parameters | #ops
-----------------------------------------------------------------------------
conv 32x5x5x3 | [116, 76, 32] | 2400 | 7052800
```
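The conv row's counts are consistent with simple shape arithmetic; note that how the table counts an "op" is inferred from the numbers here, not taken from the source.

```python
# conv 32x5x5x3 producing a [116, 76, 32] output
params = 32 * 5 * 5 * 3       # one 5x5x3 kernel per output channel
ops = 116 * 76 * 32 * 5 * 5   # 5x5 kernel positions per output element
assert params == 2400
assert ops == 7052800
```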
```bash
cd ref_cnn
python benchmark_cnn.py
# generates matrices for the Rust implementation to use
python generate_cnn_json.py
cargo bench bench_neural_net
```
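A minimal sketch of the average-over-N-runs measurement that `benchmark_cnn.py` presumably performs; its actual internals are an assumption here.

```python
import time

def bench(fn, runs=1000):
    """Average wall-clock seconds per call of fn over `runs` calls."""
    start = time.perf_counter()
    for _ in range(runs):
        fn()
    return (time.perf_counter() - start) / runs

# stand-in workload; the real benchmark runs the CNN forward pass
avg = bench(lambda: sum(i * i for i in range(100)))
print(f"The average time is {avg} seconds for 1000 runs")
```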

### Example output
```text
The average time is 0.8297840171150046 seconds for 1000 runs
```

- Rust: 0.151s

```text
test nn::bench_neural_net ... bench: 151,632,316 ns/iter (+/- 1,469,992)
```

In this benchmark, the Rust implementation is **5.5x faster**!
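The 5.5x figure follows directly from the two timings above:

```python
python_s = 0.8297840171150046   # Python: average over 1000 runs
rust_s = 151_632_316 / 1e9      # Rust: 151,632,316 ns/iter
speedup = python_s / rust_s
print(f"{speedup:.1f}x")        # prints "5.5x"
```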
```bash
cargo test
```
In order to see test output, use `cargo test -- --show-output`, e.g.:

```bash
cargo test nn::tests::neural_net -- --show-output
```

## Serialize/Deserialize CNN model

### Python to JSON -> JSON to Rust

Serializing the vanilla CNN model created with numpy into JSON and deserializing it into a `NeuralNetwork` Rust object:

```bash
# change directory to cnn folder
cd ref_cnn
# generate json file for the model
python generate_cnn_json.py

cargo test serialize::tests::deserialize_model_json -- --show-output
```

### Rust to JSON

```bash
# serializes a CNN model with random weights into src/json/nn.json
cargo test serialize::tests::serialize_model_json -- --show-output
```

### Full circle

Create a `NeuralNetwork` object with random weights in Rust, serialize it into JSON, and deserialize it back into a `NeuralNetwork` Rust object:

```bash
# serializes a CNN model with random weights to JSON and deserializes it back
cargo test serialize::tests::serde_full_circle -- --show-output
```
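In Python terms, the round trip that `serde_full_circle` performs looks like this; the model layout below is hypothetical, not the real `NeuralNetwork` struct.

```python
import json
import random

# Hypothetical minimal model; the real NeuralNetwork has more structure.
model = {
    "layers": [
        {"kind": "conv", "weights": [random.random() for _ in range(6)]},
        {"kind": "relu"},
    ]
}

blob = json.dumps(model)      # serialize   (Rust side: serde to JSON)
restored = json.loads(blob)   # deserialize (Rust side: serde from JSON)
assert restored == model      # nothing is lost in the full circle
```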

### Serialization/Deserialization benchmarks

Benchmarks for serializing and deserializing the reference CNN (Rust/JSON) using [serde](https://serde.rs/).

```bash
# full serialization benchmark times (M1 Max Macbook Pro)
# cargo bench - 579,057,637 ns/iter (+/- 20,202,535)
cargo bench bench_serialize_neural_net

# full deserialization benchmark times (M1 Max Macbook Pro)
# cargo bench - 565,564,850 ns/iter (+/- 61,387,641)
cargo bench bench_deserialize_neural_net
```
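For easier comparison with the timings earlier in this README, the `ns/iter` figures above convert to roughly 0.58 s and 0.57 s per iteration:

```python
serialize_ns = 579_057_637
deserialize_ns = 565_564_850
print(f"serialize   ~ {serialize_ns / 1e9:.3f} s/iter")    # prints 0.579
print(f"deserialize ~ {deserialize_ns / 1e9:.3f} s/iter")  # prints 0.566
```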
