Merge pull request #6 from worldcoin/io
dcbuild3r authored May 11, 2023
2 parents 53f14c3 + 15bc6a6 commit cbf3d23
Showing 19 changed files with 1,105 additions and 527 deletions.
2 changes: 2 additions & 0 deletions .gitignore
@@ -1,2 +1,4 @@
/target
.*
__pycache__
/src/json/*
16 changes: 14 additions & 2 deletions Cargo.lock

Some generated files are not rendered by default.

4 changes: 3 additions & 1 deletion Cargo.toml
@@ -30,6 +30,7 @@ auto_impl = "1.0.1"
bytesize = "1.1.0"
color-eyre = "0.6"
criterion = { version = "0.3", optional = true, features = [ "async_tokio" ] }
erased-serde = "0.3"
eyre = "0.6"
futures = "0.3"
itertools = "0.10"
@@ -41,6 +42,7 @@ rand = "0.8.4"
rand_pcg = "0.3.1"
rayon = "1.5.1"
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0.85"
serde_cbor = "0.11.1"
structopt = "0.3"
thiserror = "1.0"
@@ -51,7 +53,7 @@ tracing-log = "0.1.2"
tracing-subscriber = { version = "0.3", features = [ "env-filter", "json" ] }
tracing-test = "0.2"
log = "0.4.14"
ndarray = "0.15.4"
ndarray = {version = "0.15.4", features = ["serde"]}
ndarray-rand = "0.14.0"
ndarray-stats = " 0.5.0"
rand_isaac = "0.3.0"
171 changes: 169 additions & 2 deletions Readme.md
@@ -1,10 +1,177 @@
# ZKP Neural Networks

Prototype of evaluation of neural networks inside zero-knowledge proofs using the plonky2 proof system.

## ZKML

To find out more about zero-knowledge machine learning, check out the [awesome-zkml](https://github.com/worldcoin/awesome-zkml) repository we have created. It aggregates scientific research papers, codebases, articles, and use cases in the field of ZKML.

## Potential Worldcoin use cases

[Worldcoin](https://worldcoin.org/) is a [Privacy-Preserving Proof-of-Personhood Protocol](https://worldcoin.org/the-worldcoin-protocol). ZKML could help make our protocol more trustless, as well as more easily upgradeable and auditable.

- Verifying that a user has created a valid and unique [WorldID](https://id.worldcoin.org) locally, by running the IrisCode model on self-hosted biometric data and calling the [_addMember(uint256 groupId, uint256 identityCommitment)](https://github.com/semaphore-protocol/semaphore/blob/4e6be04729ed2d7e29461a3915877a66a2c9c4d2/contracts/base/SemaphoreGroups.sol#L43) function on the WorldID Semaphore identity group with a valid identityCommitment -> makes the protocol more permissionless
- Making the Orb trustless by providing proof that the fraud filters on the hardware and firmware are applied
- Enabling IrisCode upgradeability

## Technological stack

- [Python](https://www.python.org/), [numpy](https://numpy.org/) - A flexible, dynamic programming language and the fundamental package for scientific computing in Python -> used to create a vanilla implementation of a [CNN](https://en.wikipedia.org/wiki/Convolutional_neural_network)
- [Rust](https://www.rust-lang.org/) - A performant, memory-safe, systems-level programming language
- [plonky2](https://github.com/mir-protocol/plonky2) - A powerful zero-knowledge proving system developed by the [Polygon Zero team](https://polygon.technology/solutions/polygon-zero/) -> used to create zero-knowledge circuits for the inference step of a neural network
- [serde](https://serde.rs/) - Serialization/deserialization library for Rust

## Build and run

```bash
cargo +nightly run --release -- -vvv --input-size 1000 --output-size 1000
```

## Validate equality of Rust and Python CNN implementations

```bash
# open Python CNN implementation directory
cd ref_cnn
# run CNN model and check result
python3 vanilla_cnn.py
# generate JSON files for the random number generated matrices in the model
python3 generate_cnn_json.py
cd ../
# run Rust CNN implementation and compare results against your previous results
cargo test serialize::tests::deserialize_nn_json -- --show-output
```

- This will run the vanilla CNN Python implementation and generate JSON files for the random matrices generated by numpy; these are then deserialized by serde in the Rust implementation and turned into an ndarray `ArcArray<f32, IxDyn>` (a minimal deserialization sketch follows this list).
- With this approach we eliminate any randomness in matrix generation and can verify that both implementations operate on the same data.
- It also creates a standardized intermediate format that Rust can understand, so ML models can easily be imported from and exported to other languages (Python in our case).
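
As a rough illustration of that deserialization step, here is a minimal sketch using `serde_json` and `ndarray`'s `serde` feature. The file path and the exact JSON layout are assumptions for illustration; the real files are produced by `generate_cnn_json.py` and consumed by the repo's `serialize` module.

```rust
use ndarray::{ArcArray, IxDyn};

// Minimal sketch: read one matrix file and deserialize it into an ArcArray.
// The path is hypothetical and the JSON must match ndarray's serde layout;
// the real files come from ref_cnn/generate_cnn_json.py.
fn main() -> Result<(), Box<dyn std::error::Error>> {
    let json = std::fs::read_to_string("src/json/x.json")?;
    let matrix: ArcArray<f32, IxDyn> = serde_json::from_str(&json)?;
    println!("deserialized matrix with shape {:?}", matrix.shape());
    Ok(())
}
```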

### Example output

- Python

```text
> python ref_cnn/vanilla_cnn.py
layer | output shape | #parameters | #ops
-------------------- | --------------- | --------------- | ---------------
conv 32x5x5x3 | (116, 76, 32) | 2400 | 21158400
max-pool | (58, 38, 32) | 0 | 0
relu | (58, 38, 32) | 0 | 0
conv 32x5x5x32 | (54, 34, 32) | 25600 | 47001600
max-pool | (27, 17, 32) | 0 | 0
relu | (27, 17, 32) | 0 | 0
flatten | (14688,) | 0 | 0
conv 1000x14688 | (1000,) | 14689000 | 14688000
relu | (1000,) | 0 | 0
conv 5x1000 | (5,) | 5005 | 5000
normalize | (5,) | 0 | 6
final output: [-0.11425511 -0.13403508 -0.41759714 -0.24778798 0.85626755]
```

- Rust

```text
---- serialize::tests::deserialize_nn_json stdout ----
layer | output shape | #parameters | #ops
-----------------------------------------------------------------------------
conv 32x5x5x3 | [116, 76, 32] | 2400 | 7052800
max-pool | [38, 58, 32] | 0 | 282112
relu | [58, 38, 32] | 70528 | 0
conv 32x5x5x32 | [54, 34, 32] | 25600 | 1468800
max-pool | [17, 27, 32] | 0 | 58752
relu | [27, 17, 32] | 14688 | 0
flatten | [14688] | 0 | 0
full | [1000] | 14689000 | 14688000
relu | [1000] | 1000 | 0
full | [5] | 5005 | 5000
normalize | [5] | 0 | 6
final output (normalized):
[-0.11425512, -0.13403504, -0.41759717, -0.24778795, 0.8562675]
```

## Benchmark Python vs Rust CNN implementations

```bash
cd ref_cnn
python benchmark_cnn.py
# generates matrices for the Rust implementation to use
python generate_cnn_json.py
cargo bench bench_neural_net
```

### Example output

Machine: M1 Max MacBook Pro

- Python: 0.830s

```text
The average time is 0.8297840171150046 seconds for 1000 runs
```

- Rust: 0.151s

```text
test nn::bench_neural_net ... bench: 151,632,316 ns/iter (+/- 1,469,992)
```

In this benchmark the Rust implementation is **5.5x faster**!
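
For a rough, self-contained illustration of the timing approach (not the repo's actual `bench_neural_net`), the sketch below times an `ndarray` matrix-vector product with the shape of the largest layer from the tables above; the run count and weight ranges are assumptions.

```rust
use std::time::Instant;

use ndarray::Array;
use ndarray_rand::rand_distr::Uniform;
use ndarray_rand::RandomExt;

// Rough timing sketch (not the repo's bench_neural_net): time the
// 1000 x 14688 fully connected layer, the largest layer by parameter count.
fn main() {
    let weights = Array::random((1000, 14688), Uniform::new(-10.0f32, 10.0));
    let input = Array::random(14688, Uniform::new(-5.0f32, 5.0));

    let runs: u32 = 100;
    let start = Instant::now();
    for _ in 0..runs {
        // Matrix-vector product of the "full" layer.
        let _out = weights.dot(&input);
    }
    println!("average: {:?} per run", start.elapsed() / runs);
}
```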

## Run tests

Verify that all components of the Rust codebase work as expected and that no breaking changes were introduced.

```bash
cargo test
```

To see test output, use `cargo test -- --show-output`, e.g.:

```bash
cargo test nn::tests::neural_net -- --show-output
```

## Serialize/Deserialize CNN model

### Python to JSON -> JSON to Rust

Serializing the vanilla CNN model created with numpy into JSON and deserializing the model into a `NeuralNetwork` Rust object.

```bash
# change directory to cnn folder
cd ref_cnn
# generate json file for the model
python generate_cnn_json.py

cargo test serialize::tests::deserialize_model_json -- --show-output
```

### Rust to JSON

```bash
# serializes a CNN model with random weights into src/json/nn.json
cargo test serialize::tests::serialize_model_json -- --show-output
```

### Full circle

Create a `NeuralNetwork` object with random weights in Rust, serialize it into JSON, and deserialize it back into a `NeuralNetwork` Rust object (a toy sketch of the same round trip follows the command below).

```bash
# serializes a CNN model with random weights into src/json/nn.json and deserializes it back
cargo test serialize::tests::serde_full_circle -- --show-output
```
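
For reference, a self-contained toy version of the same round trip is sketched below. The `ToyNetwork` struct is purely illustrative (the repo's actual `NeuralNetwork` holds ndarray-backed layers); it only demonstrates the serde serialize/deserialize pattern the test exercises.

```rust
use serde::{Deserialize, Serialize};

// Toy stand-in for the repo's NeuralNetwork type, used only to illustrate
// the serde round trip; field names and values here are made up.
#[derive(Serialize, Deserialize, Debug, PartialEq)]
struct ToyNetwork {
    layers: Vec<String>,
    weights: Vec<Vec<f32>>,
}

fn main() -> serde_json::Result<()> {
    let original = ToyNetwork {
        layers: vec!["conv".into(), "relu".into(), "full".into()],
        weights: vec![vec![0.5, -1.25], vec![], vec![3.0]],
    };
    // Rust -> JSON
    let json = serde_json::to_string_pretty(&original)?;
    // JSON -> Rust
    let roundtrip: ToyNetwork = serde_json::from_str(&json)?;
    assert_eq!(original, roundtrip);
    println!("round trip ok:\n{json}");
    Ok(())
}
```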

### Serialization/Deserialization benchmarks

Benchmarks for serializing and deserializing the reference CNN (Rust/JSON) using [serde](https://serde.rs/).

```bash
# full serialization benchmark times (M1 Max Macbook Pro)
# cargo bench - 579,057,637 ns/iter (+/- 20,202,535)
cargo bench bench_serialize_neural_net

# full deserialization benchmark times (M1 Max Macbook Pro)
# cargo bench - 565,564,850 ns/iter (+/- 61,387,641)
cargo bench bench_deserialize_neural_net
```
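
As a rough illustration of what these benchmarks measure (not the repo's actual bench targets), the sketch below times `serde_json` serialization and deserialization of a flat weight vector sized to roughly the total parameter count from the tables above; the struct, field name, and element count are assumptions.

```rust
use std::time::Instant;

use serde::{Deserialize, Serialize};

// Rough stand-in for the benchmarked model: ~14.7M f32 weights, about the
// total parameter count of the reference CNN. Not the repo's NeuralNetwork.
#[derive(Serialize, Deserialize)]
struct ToyModel {
    weights: Vec<f32>,
}

fn main() -> serde_json::Result<()> {
    let model = ToyModel {
        weights: vec![0.5f32; 14_722_005],
    };

    let start = Instant::now();
    let json = serde_json::to_string(&model)?;
    println!("serialize:   {:?} ({} bytes)", start.elapsed(), json.len());

    let start = Instant::now();
    let _back: ToyModel = serde_json::from_str(&json)?;
    println!("deserialize: {:?}", start.elapsed());
    Ok(())
}
```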
72 changes: 72 additions & 0 deletions ref_cnn/benchmark_cnn.py
@@ -0,0 +1,72 @@
### Will take a long time! Took 13.2min on my machine (M1 Max MBP - DC)
import timeit
import numpy as np
from vanilla_cnn import *

if __name__ == "__main__":

    # instantiate matrices outside of the timeit block
    np.random.seed(12345)

    x = np.random.randint(low=-5, high=5, size=(120, 80, 3))

    f = np.random.randint(low=-10, high=+10, size=(32, 5, 5, 3))

    k = np.random.randint(low=-10, high=+10, size=(32, 5, 5, 32))

    weights1 = np.random.randint(low=-10, high=+10, size=(1000, 14688))

    biases1 = np.random.randint(low=-10, high=+10, size=(1000))

    weights2 = np.random.randint(low=-10, high=+10, size=(5, 1000))

    biases2 = np.random.randint(low=-10, high=+10, size=(5))

    times = []

    runs = 100

    for _ in range(runs):
        starttime = timeit.default_timer()
        # conv layer
        x, n_params, n_multiplications, name = conv_layer(x, f)

        # max pooling
        x, n_params, n_multiplications, name = max_pooling_layer(x, 2)

        # relu layer
        x, n_params, n_multiplications, name = relu_layer(x)

        # conv layer
        x, n_params, n_multiplications, name = conv_layer(x, k)

        # max pooling
        x, n_params, n_multiplications, name = max_pooling_layer(x, 2)

        # relu layer
        x, n_params, n_multiplications, name = relu_layer(x)

        # flatten
        x, n_params, n_multiplications, name = flatten_layer(x)

        # fully connected
        x, n_params, n_multiplications, name = fully_connected_layer(x, weights1, biases1)

        # relu layer
        x, n_params, n_multiplications, name = relu_layer(x)

        # fully connected
        x, n_params, n_multiplications, name = fully_connected_layer(x, weights2, biases2)

        # normalization
        x, n_params, n_multiplications, name = normalize(x)

        times.append(timeit.default_timer() - starttime)

        # reset the input so every run starts from the same (120, 80, 3) matrix
        np.random.seed(12345)

        x = np.random.randint(low=-5, high=5, size=(120, 80, 3))

    average = sum(times) / len(times)
    print(f'The average time is {average} seconds for {runs} runs')
    # Result = 0.8297840171150046 for 1000 runs
