[DataType] Add more supports and controls over dtypes (#74)
* feat: add more dtypes and fix op rules

* refactor: deprecate iree

* chore: skip u8 for tflite for its own crash bug

* chore: inline nnsmith-torch tensor type cvt

* feat: skip_dtypes for model

* chore: update rdm

* fix: use isort 5.12 in pre-commit to resolve conflict

closes #73
ganler authored Feb 14, 2023
1 parent 02a1b7f commit 23713a3
Showing 21 changed files with 185 additions and 300 deletions.
1 change: 0 additions & 1 deletion .github/workflows/ci.yaml
@@ -38,7 +38,6 @@ jobs:
- name: Test TensorFlow
run: |
pip install -r requirements/sys/tensorflow.txt --pre --upgrade
-pip install -r requirements/sys/iree.txt --pre --upgrade
pytest tests/tensorflow --log-cli-level=DEBUG
yes | python nnsmith/cli/model_gen.py model.type=tensorflow mgen.method=symbolic
python nnsmith/cli/model_exec.py model.type=tensorflow backend.type=xla model.path=nnsmith_output/model/
2 changes: 1 addition & 1 deletion .pre-commit-config.yaml
@@ -1,6 +1,6 @@
repos:
- repo: https://github.com/pycqa/isort
-  rev: 5.10.1
+  rev: 5.12.0
hooks:
- id: isort
name: isort (python)
36 changes: 19 additions & 17 deletions README.md
@@ -15,7 +15,7 @@
| Model\Engine | [TVM](https://github.com/apache/tvm) | [ORT](https://github.com/microsoft/onnxruntime) | [TensorRT](https://github.com/NVIDIA/TensorRT) | [TFLite](https://www.tensorflow.org/lite) | [XLA](https://www.tensorflow.org/xla) | [Torch-JIT](https://pytorch.org/docs/stable/jit.html) |
| ------------ | ------------------------------------ | ----------------------------------------------- | ---------------------------------------------- | ----------------------------------------- | ------------------------------------- | ----------------------------------------------------- |
| ONNX | ✅ | ✅ | ✅ | | | |
-| TensorFlow | 🔨 | | | ⚠️ | ⚠️ | |
+| TensorFlow | 🔨 | | | ✅ | ✅ | |
| PyTorch | 🔨 | 🔨 | | | | 🔨 |


@@ -27,24 +27,24 @@

## Setup

-**Install latest stable release:**
+**Install latest code (GitHub HEAD):**

```shell
-pip install "nnsmith[torch,onnx]" --upgrade
+pip install "git+https://github.com/ise-uiuc/nnsmith@main#egg=nnsmith[torch,onnx]" --upgrade
# [optional] add more front- and back-ends such as [tf] and [tvm,ort,xla,...] in "[...]"
```

-<details><summary><b>Install GitHub HEAD: </b> <i>[click to expand]</i></summary>
+<details><summary><b>Install latest stable release: </b> <i>[expand]</i></summary>
<div>

```shell
-pip install "git+https://github.com/ise-uiuc/nnsmith@main#egg=nnsmith[torch,onnx]" --upgrade
-# or pip install "git+ssh://[email protected]/ise-uiuc/nnsmith@main#egg=nnsmith[torch,onnx]" --upgrade
+pip install "nnsmith[torch,onnx]" --upgrade
```

</div>
</details>

-<details><summary><b>Install latest pre-release: </b> <i>[click to expand]</i></summary>
+<details><summary><b>Install latest pre-release: </b> <i>[expand]</i></summary>
<div>

```shell
@@ -60,7 +60,7 @@ pip install "nnsmith[torch,onnx]" \

## Quick Start

-<details><summary><b>Setting up graphviz for debugging</b> <i>[click to expand]</i></summary>
+<details><summary><b>Setting up graphviz for debugging</b> <i>[expand]</i></summary>
<div>

Graphviz provides `dot` for visualizing graphs in nice pictures. But it needs to be installed via the following methods:
@@ -92,7 +92,7 @@ See other commands under [`doc/cli`](doc/cli.md). We use [hydra](https://hydra.c
- `pip install --upgrade --pre -r requirements/sys/[system].txt` to allow generating and running specific frameworks;
- **Why "--upgrade --pre"?** All the sources under `requirements/sys/` are nightly releases (except tvm) as we want to "save the world" by catching new bugs;

-<details><summary><b>Pre-commits</b> <i>[click to expand]</i></summary>
+<details><summary><b>Pre-commits</b> <i>[expand]</i></summary>
<div>

You can use `pre-commit` to simplify development:
@@ -104,7 +104,7 @@ You can use `pre-commit` to simplify development:
</div>
</details>

-<details><summary><b>Local development</b> <i>[click to expand]</i></summary>
+<details><summary><b>Local development</b> <i>[expand]</i></summary>
<div>

- Develop locally by setting `export PYTHONPATH=$PYTHONPATH:$(pwd)` (`pwd` should be this git folder.)
@@ -113,7 +113,7 @@ You can use `pre-commit` to simplify development:
</div>
</details>

-<details><summary><b>Simplify the code</b> <i>[click to expand]</i></summary>
+<details><summary><b>Simplify the code</b> <i>[expand]</i></summary>
<div>

*Simplicity is prerequisite for reliability.* --Edsger W. Dijkstra
@@ -123,7 +123,7 @@ We want **code simplicity**: keeping minimal dependencies and focusing on a smal
</div>
</details>

-<details><summary><b>Test before commit</b> <i>[click to expand]</i></summary>
+<details><summary><b>Test before commit</b> <i>[expand]</i></summary>
<div>

```shell
@@ -144,22 +144,24 @@ pytest tests/tensorflow -s

## Paper

-<details><summary><b>ASPLOS'23 | NNSmith: Generating Diverse and Valid Test Cases for Deep Learning Compilers.</b> <i>[click to expand citation]</i></summary>
+<details><summary><b>NNSmith: Generating Diverse and Valid Test Cases for Deep Learning Compilers.</b> <i>[expand citation]</i></summary>
<div>

```bibtex
-@article{liu2022finding,
-  title={Finding Deep-Learning Compilation Bugs with NNSmith},
+@inproceedings{liu2023nnsmith,
+  title={Nnsmith: Generating diverse and valid test cases for deep learning compilers},
author={Liu, Jiawei and Lin, Jinkun and Ruffy, Fabian and Tan, Cheng and Li, Jinyang and Panda, Aurojit and Zhang, Lingming},
-  journal={arXiv preprint arXiv:2207.13066},
-  year={2022}
+  booktitle={Proceedings of the 28th ACM International Conference on Architectural Support for Programming Languages and Operating Systems, Volume 2},
+  pages={530--543},
+  year={2023}
}
```

</div>
</details>

<p align="center">
<a href="https://dl.acm.org/doi/10.1145/3575693.3575707"><img src="https://img.shields.io/badge/Paper-ASPLOS'23-a55fed.svg"></a>
<a href="https://arxiv.org/abs/2207.13066"><img src="https://img.shields.io/badge/arXiv-2207.13066-b31b1b.svg"></a>
<a href="http://nnsmith-asplos.rtfd.io/"><img src="https://img.shields.io/badge/artifact-doc-black.svg"></a>
<a href="https://github.com/ganler/nnsmith-asplos-artifact"><img src="https://img.shields.io/badge/artifact-git-black.svg"></a>
3 changes: 2 additions & 1 deletion doc/cli.md
@@ -19,7 +19,8 @@ and backends:
- `tvm`
- `ort`: ONNXRuntime
- `trt`: TensorRT
-- `iree`
+- `xla`: XLA
+- `tflite`: TFLite

Meanwhile, the `xla` and `tflite` backends are installed as part of TensorFlow.

39 changes: 31 additions & 8 deletions nnsmith/abstract/dtype.py
@@ -9,6 +9,9 @@ class DType(Enum):
float32 = "float32"
float64 = "float64"
uint8 = "uint8" # Support quantized models.
+uint16 = "uint16"
+uint32 = "uint32"
+uint64 = "uint64"
int8 = "int8"
int16 = "int16"
int32 = "int32"
@@ -31,6 +34,9 @@ def short(self) -> str:
DType.float32: "f32",
DType.float64: "f64",
DType.uint8: "u8",
+DType.uint16: "u16",
+DType.uint32: "u32",
+DType.uint64: "u64",
DType.int8: "i8",
DType.int16: "i16",
DType.int32: "i32",
@@ -42,6 +48,7 @@ def short(self) -> str:

@staticmethod
def is_float(dtype): # Don't use string. Make it well-formed.
+assert isinstance(dtype, DType)
return dtype in [DType.float32, DType.float64]
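The new assert guards against passing dtype names as strings: a plain `Enum` member never compares equal to its string value, so `is_float("float32")` would otherwise silently return `False`. A minimal, self-contained sketch of the pattern (the `DType` class is trimmed to a few members for illustration):

```python
from enum import Enum

class DType(Enum):
    float32 = "float32"
    float64 = "float64"
    int32 = "int32"

def is_float(dtype):
    # A string like "float32" never equals an Enum member, so without this
    # assert the function would silently return False for string input.
    assert isinstance(dtype, DType), f"expected DType, got {type(dtype)}"
    return dtype in (DType.float32, DType.float64)

print(is_float(DType.float32))  # True
print(is_float(DType.int32))    # False
```

With the assert, a mistaken string argument fails loudly at the call site instead of propagating a wrong boolean.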

@staticmethod
@@ -60,6 +67,9 @@ def from_str(s):
"float32": DType.float32,
"float64": DType.float64,
"uint8": DType.uint8,
+"uint16": DType.uint16,
+"uint32": DType.uint32,
+"uint64": DType.uint64,
"int8": DType.int8,
"int16": DType.int16,
"int32": DType.int32,
@@ -75,6 +85,10 @@ def numpy(self):
DType.float32: np.float32,
DType.float64: np.float64,
DType.uint8: np.uint8,
+DType.uint16: np.uint16,
+DType.uint32: np.uint32,
+DType.uint64: np.uint64,
DType.int8: np.int8,
DType.int16: np.int16,
DType.int32: np.int32,
@@ -84,7 +98,6 @@ def numpy(self):
DType.bool: np.bool_,
}[self]
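The conversion style above — a dict from enum member to framework dtype, indexed at the end — can be sketched standalone (member set reduced for brevity):

```python
from enum import Enum
import numpy as np

class DType(Enum):
    float32 = "float32"
    uint16 = "uint16"
    uint64 = "uint64"

    def numpy(self):
        # Dict dispatch: an unmapped member raises KeyError loudly rather
        # than silently falling back to a wrong dtype.
        return {
            DType.float32: np.float32,
            DType.uint16: np.uint16,
            DType.uint64: np.uint64,
        }[self]

x = np.zeros(4, dtype=DType.uint16.numpy())
print(x.dtype, x.itemsize)  # uint16 2
```

The same shape repeats for the torch and tensorflow converters below; keeping each mapping total over the supported members is what the added uint16/32/64 entries in this commit maintain.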

-# TODO(@ganler): put "torchization" in a separate file.
def torch(self) -> "torch.dtype":
import torch

@@ -93,6 +106,7 @@ def torch(self) -> "torch.dtype":
DType.float32: torch.float32,
DType.float64: torch.float64,
DType.uint8: torch.uint8,
+# PyTorch does not support other unsigned int types: https://github.com/pytorch/pytorch/issues/58734
DType.int8: torch.int8,
DType.int16: torch.int16,
DType.int32: torch.int32,
@@ -128,6 +142,9 @@ def tensorflow(self) -> "tf.Dtype":
DType.float32: tf.float32,
DType.float64: tf.float64,
DType.uint8: tf.uint8,
+DType.uint16: tf.uint16,
+DType.uint32: tf.uint32,
+DType.uint64: tf.uint64,
DType.int8: tf.int8,
DType.int16: tf.int16,
DType.int32: tf.int32,
@@ -146,6 +163,9 @@ def from_tensorflow(dtype) -> "DType":
tf.float32: DType.float32,
tf.float64: DType.float64,
tf.uint8: DType.uint8,
+tf.uint16: DType.uint16,
+tf.uint32: DType.uint32,
+tf.uint64: DType.uint64,
tf.int8: DType.int8,
tf.int16: DType.int16,
tf.int32: DType.int32,
@@ -161,6 +181,9 @@ def sizeof(self) -> int:
DType.float32: 4,
DType.float64: 8,
DType.uint8: 1,
+DType.uint16: 2,
+DType.uint32: 4,
+DType.uint64: 8,
DType.int8: 1,
DType.int16: 2,
DType.int32: 4,
@@ -175,14 +198,14 @@
# "DTYPE_GEN_ALL" is surely a subset of all types but it is
# used conservatively to avoid unsupported data types while
# applying nnsmith to various frameworks.
+DTYPE_ALL = [dt for dt in DType]
-DTYPE_GEN_ALL = [
-DType.float32,
-DType.float64,
+DTYPE_GEN_FLOATS = [DType.float16, DType.float32, DType.float64]
+DTYPE_GEN_INTS = [
+DType.int8,
+DType.int16,
DType.int32,
DType.int64,
-DType.bool,
+DType.uint8,
]
+DTYPE_GEN_COMPLEX = [DType.complex64, DType.complex128]
+DTYPE_GEN_ALL = DTYPE_GEN_FLOATS + DTYPE_GEN_INTS + DTYPE_GEN_COMPLEX
DTYPE_GEN_NON_BOOL = [dtype for dtype in DTYPE_GEN_ALL if dtype != DType.bool]
-DTYPE_GEN_FLOATS = [DType.float32, DType.float64]
-DTYPE_GEN_INTS = [DType.int32, DType.int64]
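The regrouping in this hunk — building `DTYPE_GEN_ALL` from float/int/complex sublists instead of one flat list — makes per-backend skip lists easy to express. A hedged sketch (member set reduced; `tflite_dtypes` is an illustrative name mirroring the commit's "skip u8 for tflite" change, not an identifier from the repo):

```python
from enum import Enum

class DType(Enum):
    float16 = "float16"
    float32 = "float32"
    float64 = "float64"
    int8 = "int8"
    int32 = "int32"
    uint8 = "uint8"
    complex64 = "complex64"

# Compose the generation whitelist from smaller groups, so a framework that
# lacks one group (e.g. complex) can subtract it instead of rebuilding the list.
DTYPE_GEN_FLOATS = [DType.float16, DType.float32, DType.float64]
DTYPE_GEN_INTS = [DType.int8, DType.int32, DType.uint8]
DTYPE_GEN_COMPLEX = [DType.complex64]
DTYPE_GEN_ALL = DTYPE_GEN_FLOATS + DTYPE_GEN_INTS + DTYPE_GEN_COMPLEX

# Per-backend skip list: drop uint8 for a backend that crashes on it,
# as this commit does for TFLite.
tflite_dtypes = [dt for dt in DTYPE_GEN_ALL if dt != DType.uint8]
print(len(DTYPE_GEN_ALL), len(tflite_dtypes))  # 7 6
```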
