This is the artifact of the research paper, "Fuzzing Automatic Differentiation in Deep-Learning Libraries" (in submission). The authors have made their best attempt to anonymize the artifact. Please note that it may still be possible to reveal the authors' identities by deeply studying this repository.
∇Fuzz is a fully automated API-level fuzzer targeting AD in DL libraries, which utilizes differential testing on different execution scenarios to test both first-order and high-order gradients, and also includes automated filtering strategies to remove false positives caused by numerical instability.
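To illustrate the core oracle idea (this is our own minimal sketch in plain Python, not ∇Fuzz's actual implementation), differential testing of gradients boils down to computing a derivative through one path — here an analytic derivative standing in for an AD result — and checking it against an independent estimate, such as a central finite difference, for both first- and second-order derivatives:

```python
# Illustrative sketch only: the analytic derivatives below stand in for
# what an AD engine would return; the finite-difference estimate serves
# as the independent reference. All names here are hypothetical.

def f(x):
    return x ** 3

def analytic_grad(x):       # stands in for the AD first-order gradient
    return 3 * x ** 2

def analytic_grad2(x):      # stands in for the AD second-order gradient
    return 6 * x

def finite_diff(g, x, eps=1e-5):
    # Central difference: (g(x+eps) - g(x-eps)) / (2*eps)
    return (g(x + eps) - g(x - eps)) / (2 * eps)

x = 1.5
# Differential-testing oracle: flag a potential bug if the AD result and
# the numerical estimate disagree beyond a small tolerance.
assert abs(finite_diff(f, x) - analytic_grad(x)) < 1e-4
assert abs(finite_diff(analytic_grad, x) - analytic_grad2(x)) < 1e-6
```

∇Fuzz applies this comparison across different execution scenarios of real DL-library APIs rather than toy functions, but the pass/fail decision has the same shape.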
This is ∇Fuzz's implementation for PyTorch, TensorFlow, JAX, and OneFlow.
As of submission, ∇Fuzz has detected 173 bugs in total across PyTorch, TensorFlow, JAX, and OneFlow, 144 of which have already been confirmed by developers.
We provide a list of confirmed bug reports on PyTorch, TensorFlow, JAX and OneFlow.
Please kindly note that these reports might reveal our identity; think twice before clicking through.
Our testing framework leverages MongoDB, so you need to install and run MongoDB first.
- After installing MongoDB and before loading the database, run the command
ulimit -n 640000
to raise the limit on the number of open files a process may use. You can see this document for more details.
Python version >= 3.9.0 (f-string support is required).
- We highly recommend using Python 3.9.
Check our dependent Python libraries in requirements.txt and install them with:
pip install -r requirements.txt
- For OneFlow, please use the following command to install:
python3 -m pip install --find-links https://release.oneflow.info oneflow==0.7.0
JAX install
For JAX, if you want to reproduce our results on version 0.3.14, you can use the following commands to install jax and jaxlib on CPU via the wheel archive:
# Install jaxlib on CPU via the wheel archive
pip install jax[cpu]==0.3.14 -f https://storage.googleapis.com/jax-releases/jax_releases.html
# Install the jaxlib 0.3.14 CPU wheel directly
pip install jaxlib==0.3.14 -f https://storage.googleapis.com/jax-releases/jax_releases.html
For more information about JAX installation, please refer to the official website.
Run the following commands in the current directory (NablaFuzz) to load the database.
mongorestore NablaFuzz-PyTorch-Jax/dump
mongorestore NablaFuzz-Oneflow/dump
mongorestore NablaFuzz-TensorFlow/dump
Before running, you can set the number of mutants for each API (1000 by default):
NUM_MUTANT=100
You can also set the number of APIs to test (-1 by default, meaning all APIs will be tested):
NUM_API=100
First, go into the NablaFuzz-PyTorch-Jax/src directory:
cd NablaFuzz-PyTorch-Jax/src
Run the following command to start ∇Fuzz to test PyTorch:
python torch_test.py --num $NUM_MUTANT --max_api $NUM_API
The output will be put in the NablaFuzz-PyTorch-Jax/output-ad/torch/union directory by default.
To filter out the inconsistent gradients caused by numerical instability, you can run the following command:
python torch-diff-filter.py --num $NUM_MUTANT --max_api $NUM_API
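The filtering step exists because gradients computed in different execution scenarios can legitimately disagree by small amounts due to floating-point round-off rather than an AD bug. As our own hedged illustration (not the actual logic of the filter scripts, and with hypothetical threshold values), a tolerance-based filter might keep an inconsistency only when it exceeds a combined relative-plus-absolute bound:

```python
# Illustrative sketch of a numerical-instability filter; the function
# name and the rtol/atol thresholds are hypothetical, not taken from
# the ∇Fuzz filter scripts.

def is_real_inconsistency(grad_a, grad_b, rtol=1e-2, atol=1e-3):
    """Return True if two gradient values differ beyond tolerance."""
    diff = abs(grad_a - grad_b)
    # Scale the allowed difference with the gradient magnitude, so large
    # gradients tolerate proportionally larger round-off.
    return diff > atol + rtol * max(abs(grad_a), abs(grad_b))

# Tiny round-off discrepancies are filtered out as false positives ...
assert not is_real_inconsistency(1.0000001, 1.0000002)
# ... while large mismatches are kept as candidate bugs.
assert is_real_inconsistency(1.0, 2.0)
```

The same scale-aware comparison idea underlies utilities such as `math.isclose` in the Python standard library.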
Run the following command to start ∇Fuzz to test JAX:
python jax_test.py
The output will be put in the NablaFuzz-PyTorch-Jax/output-ad/jax/union directory by default.
To filter out the inconsistent gradients caused by numerical instability, you can run the following command:
python jax-diff-filter.py
First, go into the NablaFuzz-Oneflow directory:
cd NablaFuzz-Oneflow
Run the following command to start ∇Fuzz to test OneFlow:
python oneflow_test.py --num $NUM_MUTANT --max_api $NUM_API
The error code snippets will be put in NablaFuzz-Oneflow/errors.
First, go into the NablaFuzz-TensorFlow/src directory:
cd NablaFuzz-TensorFlow/src
Run the following commands to start ∇Fuzz to test TensorFlow and filter the results:
python tf_adtest.py --num $NUM_MUTANT --max_api $NUM_API
python tf_filter.py
The outputs will be put in NablaFuzz-TensorFlow/expr_outputs by default.