ART 1.2.0

@beat-buesser beat-buesser released this 15 Mar 19:30

This release of ART v1.2.0 introduces new APIs and implementations of model-transforming, model-training, and output post-processing defences, along with a new API and implementations of poisoning attacks and new implementations of evasion and extraction attacks. Furthermore, ART now also supports pandas DataFrames as input to its classifier and attack methods.

Added

  • Added support for pandas DataFrames as input to classifiers and attacks, in addition to numpy.ndarray, enabling defences and attacks on models that expect DataFrames as input (#244)
  • Started a collection of adversarial-robustness evaluation notebooks with an evaluation of the EMPIR defence (#319)
  • Added an example notebook for adversarial attacks on video data classification (#321)
  • Added an example notebook for adversarial attacks on audio data classification (#271)
  • Added Backdoor Poisoning Attack (#292)
  • Added new API for Transformer defences (#293)
  • Added Defensive Distillation as a Transformer defence (#293)
  • Added new API for Trainer defences (#)
  • Added Madry's Protocol for adversarial training as a Trainer defence (#294)
  • Added new API for Postprocessor defences (#267)
  • Added KnockoffNets as extraction attack (#230)
  • Added Few Pixel Attack as evasion attack (#280)
  • Added Threshold Attack as evasion attack (#281)
  • Added a random-epsilon option to the Projected Gradient Descent attack, which draws epsilon from a normal distribution with sigma eps/2, truncated to the interval [0, eps] (#257)
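
The random-epsilon option above can be sketched with plain NumPy rejection sampling: draw from a normal distribution with sigma = eps/2 and keep only draws that fall inside [0, eps]. The helper name `sample_random_eps` is illustrative only; ART's internal implementation may differ:

```python
import numpy as np

def sample_random_eps(eps, size=1, rng=None):
    """Draw attack budgets from a normal distribution with sigma = eps / 2,
    truncated to [0, eps], via rejection sampling.

    Hypothetical helper mirroring the random-epsilon PGD option described
    above; not ART's actual code.
    """
    rng = np.random.default_rng() if rng is None else rng
    out = np.empty(size)
    filled = 0
    while filled < size:
        draws = rng.normal(loc=0.0, scale=eps / 2.0, size=size)
        keep = draws[(draws >= 0.0) & (draws <= eps)]  # truncate to [0, eps]
        take = min(keep.size, size - filled)
        out[filled:filled + take] = keep[:take]
        filled += take
    return out
```

Sampling a fresh epsilon per attack run varies the perturbation budget while keeping it bounded by the configured maximum.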

Changed

  • Started refactoring the unittests. The tests of KerasClassifier, TensorFlowClassifier, TensorFlowV2Classifier, the Boundary attack, and the Fast Gradient Method have been moved to the new pytest-based testing system; the remaining tests will follow in future releases (#270)
  • The Boundary and HopSkipJump attacks now work with non-square images (#288)
  • Applied Black style formatting
  • PyTorchClassifier now allows the user to select a specific GPU (#290)
  • Classifiers now accept soft labels (probabilities) in their fit methods, in addition to hard labels (one-hot encoded or index labels) (#251)
  • Integrated the post-processing defences into the classifiers, analogous to the pre-processing defences (#267)
  • Unittests now run TensorFlow in v2 mode everywhere, instead of compatibility mode (#264)
  • Updated Poisoning attack API (#305)
  • Expanded the definitions of the test requirements (#302)
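
The soft-label change above means a fit method must handle index labels, one-hot labels, and probability vectors uniformly. A minimal sketch of such label normalization, assuming labels arrive as a NumPy-convertible array (the helper name `to_label_matrix` is illustrative, not ART's API):

```python
import numpy as np

def to_label_matrix(labels, nb_classes):
    """Normalize labels to a (n, nb_classes) float matrix.

    Accepts index labels of shape (n,), one-hot labels, or soft labels
    (class probabilities). Hypothetical helper illustrating the label
    handling described above; not ART's actual implementation.
    """
    labels = np.asarray(labels)
    if labels.ndim == 1:  # index labels -> one-hot encode
        one_hot = np.zeros((labels.shape[0], nb_classes))
        one_hot[np.arange(labels.shape[0]), labels] = 1.0
        return one_hot
    return labels.astype(float)  # already one-hot or soft probabilities
```

With labels in one consistent shape, the same training loop can consume hard and soft targets without branching downstream.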

Removed

  • Removed implementations of post-processing defences as classifier wrappers (#267)

Fixed

  • Improved the logging of unittests (#227)
  • Updated method fit_generator in all neural network classifiers (#323)