
Releases: Trusted-AI/adversarial-robustness-toolbox

ART 0.10.0

12 Jul 23:58

This release contains new black-box attacks, new detectors, updated attacks, and several bug fixes.

Added

  • Added HopSkipJump, a powerful new black-box attack (#80); see the usage sketch after this list
  • Added new example script demonstrating the perturbation of a neural network layer between input and output (#92)
  • Added a notebook demonstrating BoundaryAttack
  • Added a detector based on Fast Generalized Subset Scanning (#100)
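
A minimal usage sketch of the new HopSkipJump attack. The classifier and x_test variables, and the parameter values shown, are illustrative assumptions rather than the full API:

    from art.attacks import HopSkipJump

    # classifier is any fitted ART Classifier instance (assumed to exist).
    # HopSkipJump is decision-based: it needs only predicted labels, no gradients.
    attack = HopSkipJump(classifier=classifier, targeted=False, norm=2, max_iter=50)
    x_adv = attack.generate(x=x_test)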

Changed

  • Changed the Basic Iterative Method (BIM) attack to be a special case of the Projected Gradient Descent attack with norm=np.inf and without random initialisation (#90); see the sketch after this list
  • Reduced the number of calls to method predict in attacks FastGradientMethod and BasicIterativeMethod to improve performance (#70)
  • Updated the notebooks to download pretrained models on demand (#63, #88)
  • Added batch processing to AdversarialPatch attack (#96)
  • Increased TensorFlow versions in unit testing on Travis CI to 1.12.3, 1.13.1 and 1.14.0 (#94)
  • Attacks now accept an argument batch_size, which is used in calls to classifier.predict within the attack and replaces the default batch_size=128 of classifier.predict (#105)
  • Changed the order of preprocessing defences and standardisation in classifiers: defences are now applied to the provided input data, and standardisation (the preprocessing argument of the classifier) is applied after the defences (#84)
  • Updated all defences to account for clip_values (#84)
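
A sketch of the new BIM/PGD relationship and the new batch_size argument described above; parameter values are illustrative and the flat art.attacks import path is assumed for this release:

    import numpy as np
    from art.attacks import BasicIterativeMethod, ProjectedGradientDescent

    # BIM is now a special case of PGD: L-inf norm, no random initialisation.
    bim = BasicIterativeMethod(classifier, eps=0.3, eps_step=0.01, max_iter=40)
    pgd = ProjectedGradientDescent(classifier, norm=np.inf, eps=0.3, eps_step=0.01,
                                   max_iter=40, num_random_init=0)

    # The new batch_size argument replaces classifier.predict's default of 128.
    pgd_batched = ProjectedGradientDescent(classifier, eps=0.3, batch_size=64)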

Removed

  • Removed the pretrained models in directory models used in the notebooks and replaced them with on-demand downloads (#63, #88)
  • Removed argument patch_shape from attack AdversarialPatch (#77)
  • Stopped unit testing for Python 2 on Travis CI (#83)

Fixed

  • Fixed all Pylint and LGTM alerts and warnings (#110)
  • Fixed broken links in notebooks (#63, #88)
  • Fixed broken links to imagenet data in notebook attack_defense_imagenet (#109)
  • Fixed the calculation of the attack budget eps by accounting for the initial benign sample when projecting onto the eps-ball after random initialisation in FastGradientMethod and BasicIterativeMethod (#85); see the projection sketch below
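
The fix in #85 measures the eps budget from the benign sample rather than from the random starting point. A NumPy sketch of the corrected L-inf projection (the function name is hypothetical):

    import numpy as np

    def project_linf(x_adv, x, eps):
        # Project perturbed samples onto the eps-ball centred at the original
        # benign samples x, not at the randomly initialised starting point.
        return x + np.clip(x_adv - x, -eps, eps)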

ART 0.9.0

20 May 14:45

This release contains breaking changes to how attributes are set in attacks and defences, removes restrictions on input shapes (which enables the use of feature vectors), and includes several bug fixes.

Added

  • Implemented pickling for the TensorFlow and PyTorch classifiers (#39)
  • Added example data_augmentation.py demonstrating the use of data generators

Changed

  • Renamed and moved tests (#58)
  • Changed input shape restrictions: classifiers now accept any input shape, for example feature vectors; attacks requiring spatial inputs raise exceptions (#49)
  • Made clipping of data ranges optional in classifiers, which allows attacks to accept unbounded data ranges (#49)
  • [Breaking change] Class attributes in attacks can no longer be changed through method generate; changing attributes is only possible through methods __init__ and set_params (see the sketch after this list)
  • [Breaking change] Class attributes in defences can no longer be changed through method __call__; changing attributes is only possible through methods __init__ and set_params
  • Resolved an inconsistency in PGD random_init with Madry's version
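
A sketch of the new attribute-setting contract, assuming an existing wrapped classifier and x_test; the parameter values are illustrative:

    from art.attacks import FastGradientMethod

    attack = FastGradientMethod(classifier, eps=0.1)
    # Attributes can no longer be changed via generate(); use set_params instead.
    attack.set_params(eps=0.3)
    x_adv = attack.generate(x_test)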

Removed

  • Deprecated the static adversarial trainer StaticAdversarialTrainer

Fixed

  • Fixed bug in attack ZOO (#60)

ART 0.8.0

30 Apr 21:18

This release includes new evasion attacks, such as ZOO, the boundary attack and the adversarial patch, as well as the capacity to break non-differentiable defences.

Added

  • ZOO black-box attack (class ZooAttack)
  • Decision boundary black-box attack (class BoundaryAttack)
  • Adversarial patch (class AdversarialPatch)
  • Function to estimate gradients in the Preprocessor API, along with its implementation for all concrete instances.
    This makes it possible to break non-differentiable defences; see the sketch after this list.
  • Attributes apply_fit and apply_predict in Preprocessor API that indicate if a defence should be used at training and/or test time
  • Classifiers are now capable of running a full backward pass through defences
  • save function for TensorFlow models
  • New notebook with usage example for the adversarial patch
  • New notebook showing how to synthesize an adversarially robust architecture (see ICLR SafeML Workshop 2019: Evolutionary Search for Adversarially Robust Neural Network by M. Sinn, M. Wistuba, B. Buesser, M.-I. Nicolae, M.N. Tran)
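
A sketch of a custom defence against the new Preprocessor API, showing apply_fit/apply_predict and gradient estimation. The class itself is hypothetical and the base-class import path is an assumption:

    from art.defences.preprocessor import Preprocessor  # import path assumed

    class RoundingDefence(Preprocessor):
        """Hypothetical non-differentiable defence, for illustration only."""

        @property
        def apply_fit(self):
            return False  # do not apply at training time

        @property
        def apply_predict(self):
            return True  # apply at test time

        def __call__(self, x, y=None):
            return x.round(1), y  # non-differentiable rounding

        def estimate_gradient(self, x, grad):
            return grad  # straight-through (identity) gradient estimate

        def fit(self, x, y=None, **kwargs):
            pass  # nothing to fit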

Changed

  • [Breaking change] Defences in classifiers are now specified as Preprocessor instances instead of strings; see the sketch after this list
  • [Breaking change] The parameter random_init in FastGradientMethod, ProjectedGradientDescent and BasicIterativeMethod has been renamed to num_random_init and now specifies the number of random initialisations to run before choosing the best attack
  • Possibility to specify the batch size when calling get_activations from the Classifier API
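
A sketch of the two breaking changes above; the constructor keywords (clip_values, model, defences) and the parameter values are assumptions drawn from this era's API, and model is an existing Keras model:

    from art.attacks import FastGradientMethod
    from art.classifiers import KerasClassifier
    from art.defences import FeatureSqueezing

    # Defences are now configured Preprocessor instances, not strings.
    defence = FeatureSqueezing(bit_depth=4)
    classifier = KerasClassifier(clip_values=(0, 1), model=model, defences=defence)

    # random_init=True becomes num_random_init=<number of restarts>.
    attack = FastGradientMethod(classifier, eps=0.1, num_random_init=5)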

ART 0.7.0

01 Apr 15:32

This release contains a new poison removal method, as well as some restructuring of features recently added to the library.

Added

  • Poisoning fixing method performing retraining as part of the ActivationDefence class
  • Example script of how to use the poison removal method
  • New module wrappers containing features that alter the behaviour of a Classifier. These are to be used as wrappers for classifiers and passed directly to evasion attack instances.

Changed

  • ExpectationOverTransformations has been moved to the wrappers module
  • QueryEfficientBBGradientEstimation has been moved to the wrappers module

Removed

  • Attacks no longer take an expectation parameter (breaking change). This has been replaced by calling the attack directly with a classifier wrapped in an ExpectationOverTransformations instance; see the sketch below.
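
A sketch of the replacement pattern: wrap the classifier and pass the wrapped instance to the attack. The sample_size and transformation constructor arguments, and the random_rotation sampler, are assumptions for illustration:

    from art.attacks import FastGradientMethod
    from art.wrappers import ExpectationOverTransformations

    # random_rotation is a hypothetical sampler of random input transformations.
    eot_classifier = ExpectationOverTransformations(classifier, sample_size=10,
                                                    transformation=random_rotation)
    # Gradients are averaged over the sampled transformations inside the wrapper.
    attack = FastGradientMethod(eot_classifier, eps=0.1)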

Fixed

  • Bug in the spatial transformations attack: when the attack does not succeed, the original samples are now returned (issue #40, fixed in #42, #43)
  • Bug in Keras with loss functions that do not take labels in one-hot encoding (issue #41)
  • Bug fix in the activation defence against poisoning: incorrect test condition
  • Bug fix in DeepFool: inverted stop condition when working with batches
  • Import problem in utils.py: top-level imports were forcing users to install all supported ML frameworks

ART 0.6.0

13 Mar 08:03

Added

  • PixelDefend defence
  • Query-efficient black-box gradient estimates (NES); see the sketch after this list
  • A general wrapper for classifiers that allows changing their behaviour (see art/classifiers/wrapper.py)
  • 3D plot in visualization
  • Saver for PyTorchClassifier
  • Pickling for KerasClassifier
  • Representation for all classifiers
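
The NES estimator behind the query-efficient black-box gradients, as a standalone NumPy sketch; the names and default values are illustrative, not the library's API:

    import numpy as np

    def nes_gradient(f, x, sigma=1e-3, n_samples=50):
        # Antithetic sampling: probe the scalar black-box f at x +/- sigma*u for
        # random directions u and average to estimate the gradient at x.
        grad = np.zeros_like(x)
        for _ in range(n_samples):
            u = np.random.randn(*x.shape)
            grad += (f(x + sigma * u) - f(x - sigma * u)) * u
        return grad / (2 * sigma * n_samples)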

Changed

  • We now use pretrained models for unit tests (see art/utils.py, functions get_classifier_pt, get_classifier_kr, get_classifier_tf)
  • Keras models now accept any loss function

Removed

  • Detector abstract class. Detectors now directly extend Classifier

Thanks also to our external contributors!
@AkashGanesan

ART 0.5.0

01 Feb 18:58

This release of ART adds two new evasion attacks and provides bug fixes, as well as new features such as access to the learning phase (training/test) through the Classifier API, batching in evasion attacks, and expectation over transformations.

Added

  • Spatial transformations evasion attack (class art.attacks.SpatialTransformations)
  • Elastic net (EAD) evasion attack (class art.attacks.ElasticNet)
  • Data generator support for multiple types of TensorFlow iterators
  • New function and property in the Classifier API that allow explicit control of the learning phase (train/test)
  • Reports for poisoning module
  • Most evasion attacks now support batching, specified by the new parameter batch_size; see the sketch after this list
  • ExpectationOverTransformations class, to be used with evasion attacks
  • The new parameter expectation of evasion attacks allows specifying the use of expectation over transformations
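
A sketch combining the new EAD attack with the new batch_size parameter; the remaining argument names and values (confidence, beta) are assumptions from this era's API:

    from art.attacks import ElasticNet

    attack = ElasticNet(classifier, confidence=0.0, beta=1e-3, batch_size=32)
    x_adv = attack.generate(x_test)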

Changed

  • Updated the list of attacks supported by universal perturbation
  • PyLint and Travis configs

Fixed

  • Indexing error in C&W L_2 attack (issue #29)
  • Universal perturbation stop condition: the attack always stopped after one iteration
  • Error with data subsampling in AdversarialTrainer when the ratio of adversarial samples is 1

ART 0.4.0

24 Jan 15:47

Added

  • Class art.classifiers.EnsembleClassifier: support for ensembles under Classifier interface
  • Module art.data_generators: data feeders for dynamic loading and augmentation for all frameworks
  • New function fit_generator for classifiers and the adversarial trainer; see the sketch after this list
  • C&W L_inf attack
  • Class art.defences.JpegCompression: JPEG compression as preprocessing defence
  • Class art.defences.ThermometerEncoding: thermometer encoding as preprocessing defence
  • Class art.defences.TotalVarMin: total variance minimization as preprocessing defence
  • Function art.utils.master_seed: setting master seed for random number generators
  • pylint for Travis
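
A sketch of training through the new data-generator support; the KerasDataGenerator keywords (size, batch_size) and nb_epochs are assumptions from this era's API, and keras_iterator stands for any existing Keras data iterator:

    from art.data_generators import KerasDataGenerator

    # Wrap a native Keras iterator so any ART classifier can consume it.
    art_generator = KerasDataGenerator(keras_iterator, size=50000, batch_size=128)
    classifier.fit_generator(art_generator, nb_epochs=10)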

Changed

  • Restructured the analyzers in the poisoning module

Fixed

  • PyTorch classifier support on GPU

ART 0.3.0

24 Jan 18:10

This release brings many new features to ART, including a poisoning module, an adversarial sample detection module and support for MXNet models.

Added

  • Access to layers and model activations through the Classifier API; see the sketch after this list
  • MXNet support
  • Poison detection module, containing the poisoning detection method based on clustering activations
  • Jupyter notebook with poisoning attack and detection example on MNIST
  • Adversarial samples detection module, containing two detectors: one working based on inputs and one based on activations
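
A sketch of the new layer access through the Classifier API; the layer_names property and the layer argument (index or name) are assumed from this era's interface:

    # List the layers exposed by the wrapped model, then fetch activations
    # from one of them for a batch of inputs.
    print(classifier.layer_names)
    activations = classifier.get_activations(x_test, layer=0)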

Changed

  • Optimized JSMA attack (art.attacks.SaliencyMapMethod) - can now run on ImageNet data
  • Optimized C&W attack (art.attacks.CarliniL2Method)
  • Improved adversarial trainer, now covering a wide range of setups

Removed

  • Hard-coded config folder. The config is now created on the fly when ART runs for the first time and is stored in the home folder under ~/.art

ART 0.2.0

28 Jan 15:04

This release makes ART framework-independent. The following backends are now supported: TensorFlow, Keras and PyTorch.

Added

  • New framework-independent Classifier interface; see the sketch after this list
  • Backend support for TensorFlow, Keras and PyTorch
  • Basic interface for detecting adversarial samples (no concrete method implemented for now)
  • Gaussian augmentation
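
A sketch of the framework-independent pattern introduced here: wrap a native model once and reuse it across ART. The constructor keywords are assumptions, since the exact signature has evolved across releases, and 'mnist_cnn.h5' is a placeholder:

    from keras.models import load_model
    from art.classifiers import KerasClassifier

    model = load_model('mnist_cnn.h5')  # any trained Keras model
    classifier = KerasClassifier(clip_values=(0, 1), model=model)
    # The same Classifier object now works with every ART attack and defence.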

Changed

  • All attacks now fit the new Classifier interface

Fixed

  • to_categorical utility function for unsqueezed labels
  • Norms in CLEVER score
  • Source code folder name to correct PyPI install

Removed

  • Hard-coded architectures for datasets/model types: CNN, ResNet, MLP

ART 0.1.0

28 Jan 14:44

This is the initial release of ART. The following features are currently supported:

  • Classifier interface, supporting a few predefined architectures (CNN, ResNet, MLP) for standard datasets (MNIST, CIFAR10), as well as custom models from users
  • Attack interface, supporting a few evasion attacks
    • FGM & FGSM
    • Jacobian saliency map attack
    • Carlini & Wagner L_2 attack
    • DeepFool
    • NewtonFool
    • Virtual adversarial method (to be used for virtual adversarial training)
    • Universal perturbation
  • Defences
    • Preprocessing interface, currently implemented by feature squeezing, label smoothing, spatial smoothing
    • Adversarial training
  • Metrics for measuring robustness: empirical robustness (minimal perturbation), loss sensitivity and CLEVER score
  • Utilities for loading datasets, some preprocessing, and common maths manipulations
  • Scripts for launching some basic pipelines for training, testing and attacking; an end-to-end sketch follows this list
  • Unit tests
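
An end-to-end sketch of the initial pipeline: load data, fit a wrapped classifier, and attack it with FGSM. The import paths and argument names are assumptions carried back from later releases, and classifier is an ART Classifier built from a user model:

    from art.attacks import FastGradientMethod
    from art.utils import load_mnist

    # Load MNIST through the bundled dataset utilities.
    (x_train, y_train), (x_test, y_test), min_, max_ = load_mnist()
    classifier.fit(x_train, y_train, batch_size=128, nb_epochs=5)

    # Craft adversarial test samples with the Fast Gradient (Sign) Method.
    attack = FastGradientMethod(classifier, eps=0.1)
    x_adv = attack.generate(x_test)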