Papers-of-Robust-ML

Related papers for robust machine learning, with a main focus on computer vision applications.

Statement

Since dozens of new papers on adversarial examples appear at every conference and in every journal, we only update this list with those we have read and consider insightful.

It follows the structure of https://github.com/P2333/Papers-of-Robust-ML

Contents

Transfer Learning

  • Do Better ImageNet Models Transfer Better? (CVPR 2019)
    This paper inspects down-stream computer vision tasks whose models are pre-trained on ImageNet and draws three conclusions: 1) the features learned on ImageNet are beneficial to many other computer vision tasks; 2) the regularization methods used for ImageNet classification are not as useful for transfer learning; 3) on fine-grained recognition tasks, fine-tuning does not surpass training from scratch (a fine-tuning sketch follows this list).
  • Do ImageNet Classifiers Generalize to ImageNet? (ICML 2019)
    The empirical results of this paper suggest that the accuracy drop on new test sets is not caused by adaptive overfitting to the original test set, but by the models’ inability to generalize to slightly “harder” images than those found in the original test sets.
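As a companion to the first paper above, here is a minimal PyTorch sketch of the two setups it compares: fine-tuning from ImageNet weights versus training the same architecture from scratch. The model choice (resnet50), the number of down-stream classes, and all hyper-parameters are illustrative assumptions, not values from the paper.

```python
import torch.nn as nn
import torch.optim as optim
from torchvision import models

NUM_CLASSES = 37  # size of a hypothetical fine-grained down-stream task

# 1) Transfer learning: start from ImageNet weights, replace the head,
#    and fine-tune the whole network with a small learning rate.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)
finetune_opt = optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

# 2) The baseline the paper compares against: the same architecture
#    trained from scratch (random init, larger learning rate).
scratch = models.resnet50(weights=None)
scratch.fc = nn.Linear(scratch.fc.in_features, NUM_CLASSES)
scratch_opt = optim.SGD(scratch.parameters(), lr=1e-1, momentum=0.9)

# A standard supervised training loop over the down-stream dataset
# would follow for both models.
```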

Noisy Label

General Defenses (training phase)

General Defenses (inference phase)

Adversarial Detection

Certified Defense and Model Verification

Theoretical Analysis

Empirical Analysis

Reinforcement Learning

  • Certifiable Robustness to Adversarial State Uncertainty in Deep Reinforcement Learning (TNNLS 2021)
    This paper leverages research on certified adversarial robustness to develop an online, certifiably robust defense for deep reinforcement learning algorithms. The proposed defense computes guaranteed lower bounds on state-action values during execution and chooses the action that remains best under the worst-case deviation of the input caused by possible adversaries or noise (see the decision-rule sketch after this list).

  • Adversarially Robust Policy Learning: Active Construction of Physically-Plausible Perturbations (IROS 2017)
    This paper introduces Adversarially Robust Policy Learning (ARPL), an algorithm that actively computes physically-plausible adversarial examples during training to enable robust policy learning in the source domain and robust performance under both random and adversarial input perturbations (see the perturbation sketch after this list).

  • Robust Adversarial Reinforcement Learning (ICML 2017)
    This paper advocates a two-pronged approach: disturbances are modeled by an adversarial agent, and domain knowledge is incorporated into the adversary's design, in order to deliver policies that are robust to environment uncertainties and model initializations (see the alternating-training sketch after this list).
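The decision rule of the first paper in this list can be sketched as follows: pick the action whose lower bound on Q over an ε-ball around the observation is highest. For simplicity this sketch approximates the lower bound by sampling perturbed states; the paper instead computes guaranteed bounds with certified bounding methods, so this only illustrates the rule, not the certification. Function and parameter names are ours.

```python
import numpy as np

def robust_action(q_values_fn, state, epsilon, n_samples=64, rng=None):
    """Pick the action with the highest *approximate* lower bound on
    Q(s', a) over states s' in the L-inf ball of radius epsilon.
    Sampling stands in for the paper's certified lower bounds."""
    rng = np.random.default_rng() if rng is None else rng
    state = np.asarray(state, dtype=np.float64)
    # Perturbed copies of the state inside the ball, plus the nominal state.
    noise = rng.uniform(-epsilon, epsilon, size=(n_samples,) + state.shape)
    perturbed = np.concatenate([state[None], state[None] + noise], axis=0)
    q = q_values_fn(perturbed)            # (batch, n_actions)
    lower_bounds = q.min(axis=0)          # worst case over the ball, per action
    return int(lower_bounds.argmax())     # most robust action

# Toy usage with a linear Q-function over 3 actions:
W = np.array([[1.0, -1.0], [0.5, 0.5], [-1.0, 1.0]])
q_fn = lambda s: s @ W.T
a = robust_action(q_fn, state=np.array([0.2, 0.1]), epsilon=0.1)
```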
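ARPL's core idea, actively constructing perturbations from gradients during training, might be sketched as the FGSM-style observation perturbation below. This is a simplification: ARPL also perturbs dynamics and process parameters, and the surrogate objective and all names here are illustrative assumptions.

```python
import torch
import torch.nn as nn

def adversarial_observation(policy, obs, epsilon=0.05):
    """One gradient-based perturbation step in observation space: move the
    observation in the direction that changes the policy output the most,
    bounded by epsilon, then train the policy on the result."""
    obs = obs.clone().detach().requires_grad_(True)
    action = policy(obs)
    # Surrogate objective (magnitude of the policy output) keeps the
    # sketch self-contained; ARPL derives perturbations from the
    # policy/value gradients instead.
    action.pow(2).sum().backward()
    return (obs + epsilon * obs.grad.sign()).detach()

# Toy usage with a small policy network:
policy = nn.Sequential(nn.Linear(4, 32), nn.Tanh(), nn.Linear(32, 2))
obs = torch.randn(8, 4)
obs_adv = adversarial_observation(policy, obs)  # train the policy on obs_adv
```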
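Finally, a minimal sketch of the alternating two-player scheme in Robust Adversarial Reinforcement Learning: the protagonist improves its return against a frozen adversary, then the adversary improves against the frozen protagonist. The 1-D environment, linear policies, and finite-difference gradient estimates are all illustrative stand-ins for the MuJoCo tasks and policy-gradient updates used in the paper.

```python
import numpy as np

def rollout(theta_pro, theta_adv, x0, T=50):
    """Toy 1-D environment: the protagonist drives the state toward 0
    while a weaker adversary applies a disturbance force. The return is
    the protagonist's objective; the adversary minimizes it."""
    x, ret = x0, 0.0
    for _ in range(T):
        x = x + 0.1 * np.tanh(theta_pro * x) + 0.05 * np.tanh(theta_adv * x)
        ret -= abs(x)
    return ret

def train_rarl(iters=300, sigma=0.1, lr=0.05, seed=0):
    rng = np.random.default_rng(seed)
    theta_pro, theta_adv = 0.0, 0.0
    for _ in range(iters):
        x0 = rng.normal()  # common random start for both probes
        # Phase 1: protagonist ascends its return, adversary frozen
        # (finite differences stand in for the policy-gradient step).
        g = (rollout(theta_pro + sigma, theta_adv, x0)
             - rollout(theta_pro - sigma, theta_adv, x0)) / (2 * sigma)
        theta_pro += lr * g
        # Phase 2: adversary descends the same return, protagonist frozen.
        g = (rollout(theta_pro, theta_adv + sigma, x0)
             - rollout(theta_pro, theta_adv - sigma, x0)) / (2 * sigma)
        theta_adv -= lr * g
    return theta_pro, theta_adv

theta_pro, theta_adv = train_rarl()
```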

Poison Attack

Federated Learning

Beyond Safety

Seminal Work

Benchmark Datasets