This repository hosts the Benchmark framework, a tool for studying the transferability of adversarial attacks across deep learning models. Inspired by the findings of the survey "A Review of Transferability Adversarial Attacks", the framework provides a standardized platform for evaluating and comparing methodologies developed to improve the transferability of adversarial attacks.
The Benchmark framework integrates ten leading methodologies for adversarial attack transferability. Its primary purpose is to enable comparative analysis across diverse model architectures, helping to identify the most effective techniques for crafting transferable attacks and, in turn, for assessing the robustness of AI models against such threats.
The framework encompasses a spectrum of advanced techniques, including:
- Generative Structure: Techniques focusing on generating adversarial examples.
- Semantic Similarity: Methods that ensure semantic consistency in the generated attacks.
- Gradient Editing: Approaches involving the modification of gradients to craft effective attacks.
- Target Modification: Techniques that alter the target classifications for more sophisticated attacks.
- Ensemble Approach: Strategies that combine multiple methods for a comprehensive attack vector.
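The methods bundled in the framework are not reproduced here, but as an illustration of the gradient-editing family, the sketch below implements MI-FGSM (momentum iterative FGSM), a classic technique in which the gradient is accumulated with a momentum term before taking the sign step. This is a minimal, self-contained example; the function and parameter names are illustrative and do not correspond to the framework's own API.

```python
import torch
import torch.nn as nn

def mi_fgsm(model, x, y, eps=8 / 255, steps=10, mu=1.0):
    """Illustrative MI-FGSM sketch (gradient editing via momentum).

    Accumulating an L1-normalized gradient momentum term stabilizes the
    update direction across iterations, which is one well-known way to
    make adversarial examples transfer better between models.
    """
    alpha = eps / steps              # per-step perturbation size
    g = torch.zeros_like(x)          # momentum accumulator
    x_adv = x.clone().detach()
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        # normalize the raw gradient by its mean absolute value,
        # then fold it into the momentum accumulator
        g = mu * g + grad / grad.abs().mean(dim=(1, 2, 3), keepdim=True)
        x_adv = x_adv.detach() + alpha * g.sign()
        # project back into the eps-ball around x and the valid pixel range
        x_adv = torch.clamp(torch.min(torch.max(x_adv, x - eps), x + eps), 0, 1)
    return x_adv.detach()
```

The same skeleton extends naturally to the ensemble approach listed above: averaging the losses (or logits) of several surrogate models before the backward pass yields a gradient that tends to transfer to unseen architectures.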
The `Benchmark-main` folder contains several key components:

- Directories:
  - `attacks`: Contains scripts and modules related to different attack methodologies.
  - `configs`: Configuration files for various experimental setups.
  - `torch_nets`: Modules and scripts for neural network models using PyTorch.
- Python Files:
  - `loader.py`: Responsible for loading datasets and models.
  - `main.py`: The main script for running experiments and tests.
coming soon