Codes for the paper "Proper Measure for Adversarial Robustness"


2D_1nn_ensemble.py: plots speculated optimally robust classifiers when the data contain input noise.

  • By changing p_ord (default: 1), it is possible to choose a different distance metric (only l_p norms with p ≥ 1).
  • Different examples can be run by commenting and uncommenting the corresponding parts of the data definition.
  • By changing swc_gradual (default: 0), it is possible to plot results using gradual nearest neighbor (1-NN) classifiers.
  • n_noise determines how many classifiers are used to build the ensemble. A large n_noise gives smoother ensemble classifiers but takes more time. (A minimal sketch of the ensemble idea follows this list.)
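A minimal sketch of the ensemble idea described above (not the repository script itself): each noise realization is added to the training data, a 1-NN classifier is fit on the noisy copy, and the predictions are averaged over all classifiers. The Gaussian noise model, noise_std, and binary 0/1 labels are illustrative assumptions, not details taken from the code.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier


def ensemble_1nn_predict(X, y, X_query, n_noise=100, noise_std=0.1, p_ord=1, seed=0):
    """Average the predictions of n_noise 1-NN classifiers fit on noisy copies of (X, y).

    Assumes binary labels in {0, 1}, so the average is a soft score in [0, 1].
    """
    rng = np.random.default_rng(seed)
    votes = np.zeros(len(X_query))
    for _ in range(n_noise):
        # One input-noise realization (Gaussian noise is an assumption here).
        X_noisy = X + rng.normal(scale=noise_std, size=X.shape)
        clf = KNeighborsClassifier(n_neighbors=1, p=p_ord)  # 1-NN with the chosen l_p metric
        clf.fit(X_noisy, y)
        votes += clf.predict(X_query)
    return votes / n_noise  # averaged (soft) ensemble prediction
```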

Speculated optimally robust classifiers when the data contain input noise

(Figures: one plot per norm.)

Speculated optimally robust classifiers combined with gradual nearest neighbor classifiers when the data contain input noise

(Figures: one plot per norm.)

2D_genuine_Proj.py: projects points for the calculation of genuine adversarial accuracy based on the maximum perturbation norm.

  • The function gen_Proj(x_prime,x_nat,eps) applies the projections used for calculating genuine adversarial accuracy based on the maximum perturbation norm. x_prime denotes the samples to be projected, x_nat denotes the clean samples, and eps is the epsilon used for the l_p ball projection.
  • The function gen_Proj2(x_prime,x_nats,eps) applies the same kind of projections. x_nats contains the clean samples together with the m nearest neighbors of each clean sample (concatenated along axis 1, i.e. the second axis). The difference from gen_Proj is that this function uses only the m nearest neighbors, so the projection is applied approximately; as a result, gen_Proj2 is faster and requires less memory.
  • The code does not include the gradient steps of projected gradient descent (PGD). For an actual PGD calculation, one can start from a random initialization, apply a gradient step, then apply gen_Proj (or gen_Proj2) to project, and iterate the gradient step and projection several times, as in the sketch below.
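A minimal PGD-style loop illustrating that iteration (a sketch, not code from the repository). It assumes gen_Proj(x_prime, x_nat, eps) returns the projected samples and that grad_fn(x) returns the loss gradient with respect to the inputs for some differentiable classifier; the signed-gradient step, step size, and step count are arbitrary choices for illustration.

```python
import importlib

import numpy as np

# The module file name starts with a digit, so load it via importlib.
gen_mod = importlib.import_module("2D_genuine_Proj")
gen_Proj = gen_mod.gen_Proj


def pgd_attack(x_nat, grad_fn, eps=0.1, step_size=0.02, n_steps=10, seed=0):
    rng = np.random.default_rng(seed)
    # Random initialization around the clean samples, then project.
    x_prime = x_nat + rng.uniform(-eps, eps, size=x_nat.shape)
    x_prime = gen_Proj(x_prime, x_nat, eps)  # assumed to return the projected samples
    for _ in range(n_steps):
        # Signed-gradient ascent step on the loss, followed by projection.
        x_prime = x_prime + step_size * np.sign(grad_fn(x_prime))
        x_prime = gen_Proj(x_prime, x_nat, eps)
    return x_prime
```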

Visualization of the projection process with different distance metrics (after projections)

(Figures: one plot per norm.)

Visualization of projection results

(Figures: one plot per norm.)

Visualization of a simple case of properly applied adversarial training

Properly applied adversarial training refers to adversarial training with no conflicting regions arising from overlapping regions of different classes.

(Figures: one plot per norm.)
