
Fooling LIME and SHAP: Adversarial Attacks on Post hoc Explanation Methods

This is the code for our paper, "Fooling LIME and SHAP: Adversarial Attacks on Post hoc Explanation Methods."

Read the paper.

Getting started

Set up a virtual environment and install the requirements:

conda create -n fooling_limeshap python=3.7
conda activate fooling_limeshap
pip install -r requirements.txt

You should be able to run the code now!
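
For example, once the environment is active you can open the COMPAS walkthrough notebook described below (this assumes Jupyter is installed in the environment; pip install notebook adds it if it is not):

jupyter notebook COMPAS_Example.ipynb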

We provide a short walkthrough on the COMPAS data set in COMPAS_Example.ipynb. It is a good place to start and shows how our method works. Applications of the attack to each data set can be found in compas_experiment.py, cc_experiment.py, and german_experiment.py.
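
At a high level, the attack exploits the fact that LIME and SHAP query the model on synthetic perturbations that often lie off the data manifold. The sketch below illustrates that scaffolding idea in a few lines of Python; every name in it (ScaffoldedModel, fit_ood_detector, and so on) is hypothetical and for illustration only, not the API this repository uses.

# Minimal sketch of the scaffolding idea (hypothetical names, not this repo's API).
# An out-of-distribution (OOD) detector routes explainer-generated perturbation
# queries to an innocuous model, while real inputs still receive the biased
# model's predictions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

class ScaffoldedModel:
    def __init__(self, biased_model, innocuous_model, ood_detector):
        self.biased_model = biased_model        # behavior on real data
        self.innocuous_model = innocuous_model  # behavior shown to explainers
        self.ood_detector = ood_detector        # flags perturbation samples

    def predict(self, X):
        preds = self.biased_model.predict(X)
        ood = self.ood_detector.predict(X) == 1
        if ood.any():
            preds[ood] = self.innocuous_model.predict(X[ood])
        return preds

def fit_ood_detector(X_real, n_rounds=10):
    # Label real rows 0 and LIME-style Gaussian perturbations of them 1,
    # then train a classifier to tell the two apart.
    X_pert = np.vstack([X_real + np.random.normal(size=X_real.shape)
                        for _ in range(n_rounds)])
    X = np.vstack([X_real, X_pert])
    y = np.concatenate([np.zeros(len(X_real)), np.ones(len(X_pert))])
    return RandomForestClassifier(n_estimators=100).fit(X, y)

The key design choice is the OOD detector: the better it separates real inputs from the explainer's synthetic perturbations, the more completely the biased behavior is hidden from the resulting explanation.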

References

If you find this work useful, please consider citing our paper:

@inproceedings{advlime:aies20,
  author = {Dylan Slack and Sophie Hilgard and Emily Jia and Sameer Singh and Himabindu Lakkaraju},
  title = {Fooling LIME and SHAP: Adversarial Attacks on Post hoc Explanation Methods},
  booktitle = {AAAI/ACM Conference on AI, Ethics, and Society (AIES)},
  year = {2020}
}

Contact

This code was developed by Dylan Slack, Sophie Hilgard, and Emily Jia. Reach out to us with any questions!

Our emails are: [email protected], [email protected], and [email protected].
