ART Attacks
Work in progress ...
The attack descriptions include a link to the original publication and tags describing the framework support of each implementation in ART:

- all/Numpy: implementation based on NumPy, supporting all frameworks
- TensorFlow: implementation based on TensorFlow, optimised for TensorFlow estimators
- PyTorch: implementation based on PyTorch, optimised for PyTorch estimators
- Auto-Attack (Croce and Hein, 2020)
  Auto-Attack runs one or more evasion attacks, either default or user-provided, against a classification task. It optimises attack strength by attacking only correctly classified samples and by first running the untargeted version of each attack, followed by the targeted version against each possible target label.
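The scheduling idea behind Auto-Attack can be sketched in plain NumPy. This is a toy illustration, not ART's API: `predict`, `attack`, and the 1-D classifier below are all made up for the example.

```python
import numpy as np

def auto_attack(predict, attacks, x, y, n_classes):
    """Sketch of Auto-Attack's scheduling: each attack is run only on
    samples that are still correctly classified, first untargeted, then
    targeted against every other label."""
    x_adv = x.copy()
    for attack in attacks:
        robust = predict(x_adv) == y               # still correctly classified
        if robust.any():
            x_adv[robust] = attack(x_adv[robust], target=None)
        for t in range(n_classes):                 # targeted passes
            robust = (predict(x_adv) == y) & (y != t)
            if robust.any():
                x_adv[robust] = attack(x_adv[robust], target=t)
    return x_adv

# Toy 1-D classifier (class 1 iff x > 0) and a fixed-budget "attack"
# that just pushes each sample 0.6 toward the decision boundary.
predict = lambda x: (x[:, 0] > 0).astype(int)
attack = lambda x, target: x - 0.6 * np.sign(x)
x = np.array([[0.5], [-0.5], [2.0]])
y = np.array([1, 0, 1])
x_adv = auto_attack(predict, [attack], x, y, n_classes=2)
```

Only the two near-boundary samples end up misclassified; the third is too far away for the fixed budget, which is exactly the per-sample bookkeeping the scheduling exploits.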
- Auto Projected Gradient Descent (Auto-PGD) (Croce and Hein, 2020) [all/Numpy]
  Auto-PGD attacks classification tasks and optimises its attack strength by adapting the step size across iterations depending on the overall attack budget and the progress of the optimisation. After adapting its step size, Auto-PGD restarts from the best example found so far.
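The step-size adaptation can be sketched on a toy differentiable objective; the halving rule and checkpoint interval below are simplified stand-ins for Auto-PGD's actual conditions:

```python
import numpy as np

def auto_pgd(grad, loss, x0, eps, n_iter=50, check_every=5):
    """Sketch of Auto-PGD's core loop: take signed gradient steps projected
    onto the L_inf ball around x0, halve the step size whenever the best
    loss stopped improving since the last checkpoint, and restart from the
    best point found so far."""
    step = 2 * eps
    x = x0.copy()
    x_best, f_best = x.copy(), loss(x)
    f_prev_best = f_best
    for i in range(1, n_iter + 1):
        x = x + step * np.sign(grad(x))        # signed gradient ascent step
        x = np.clip(x, x0 - eps, x0 + eps)     # project back onto the ball
        f = loss(x)
        if f > f_best:
            x_best, f_best = x.copy(), f
        if i % check_every == 0:
            if f_best <= f_prev_best:          # no progress since checkpoint
                step /= 2                      # shrink the step size ...
                x = x_best.copy()              # ... and restart from the best
            f_prev_best = f_best
    return x_best

# Toy objective: maximise -(x - 3)^2 inside the unit ball around the origin;
# the constrained optimum is x = (1, 1).
loss = lambda x: -np.sum((x - 3.0) ** 2)
grad = lambda x: -2.0 * (x - 3.0)
x_best = auto_pgd(grad, loss, x0=np.zeros(2), eps=1.0)
```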
- Shadow Attack (Ghiasi et al., 2020) [TensorFlow, PyTorch]
  Shadow Attack causes certifiably robust networks to misclassify an image and to produce "spoofed" certificates of robustness by applying large but natural-looking perturbations.
- Wasserstein Attack (Wong et al., 2020) [all/Numpy]
  Wasserstein Attack generates adversarial examples with minimised Wasserstein distance, shaping the perturbations according to the content of the original images.
- Brendel & Bethge Attack (Brendel et al., 2019) [all/Numpy]
  The Brendel & Bethge attack is a powerful gradient-based adversarial attack that follows the adversarial boundary (the boundary between the space of adversarial and non-adversarial images as defined by the adversarial criterion) to find the minimum distance to the clean image.
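One much-simplified ingredient of boundary-following attacks like this one is locating a point on the decision boundary between a clean sample and a known adversarial one. A bisection sketch, with a made-up `is_adversarial` oracle:

```python
import numpy as np

def project_to_boundary(is_adversarial, x_clean, x_adv, tol=1e-6):
    """Bisect along the segment between a clean sample and a known
    adversarial one to find the closest adversarial point on that segment,
    i.e. a point essentially on the decision boundary. Boundary-following
    attacks then walk along the boundary from such a point to shrink the
    distance to the clean image further."""
    lo, hi = 0.0, 1.0              # interpolation factor: 0 = clean, 1 = adversarial
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if is_adversarial(x_clean + mid * (x_adv - x_clean)):
            hi = mid               # still adversarial: move toward the clean side
        else:
            lo = mid
    return x_clean + hi * (x_adv - x_clean)

# Toy oracle: anything with first coordinate above 1 is "adversarial".
is_adv = lambda x: x[0] > 1.0
x_clean = np.array([0.0, 0.0])
x_start = np.array([3.0, 0.0])
x_b = project_to_boundary(is_adv, x_clean, x_start)
```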
- Targeted Universal Adversarial Perturbations (Hirano and Takemoto, 2019) [all/Numpy]
  This attack creates targeted universal adversarial perturbations by combining an iterative method for generating untargeted examples with the fast gradient sign method for creating a targeted perturbation.
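The accumulation of a single shared perturbation can be sketched with a toy linear classifier. The function names and the gradient oracle `grad_target` are assumptions for illustration, not the paper's or ART's interface:

```python
import numpy as np

def targeted_universal_perturbation(predict, grad_target, X, target, eps,
                                    step=0.1, n_epochs=20):
    """Sketch of a targeted universal perturbation: one shared perturbation v
    is refined with FGSM-style steps on every sample not yet classified as
    `target`, and kept inside an L_inf ball of radius eps.
    grad_target(x) is the gradient of the target-class score w.r.t. x."""
    v = np.zeros(X.shape[1])
    for _ in range(n_epochs):
        for x in X:
            if predict((x + v)[None])[0] != target:
                # FGSM step increasing the target-class score
                v = v + step * np.sign(grad_target(x + v))
                v = np.clip(v, -eps, eps)   # stay within the attack budget
    return v

# Toy linear classifier: class 1 iff w @ x > 0.
w = np.array([1.0, 1.0])
predict = lambda X: (X @ w > 0).astype(int)
grad_target = lambda x: w                   # gradient of the class-1 score
X = np.array([[-0.3, -0.2], [-0.1, -0.4]])
v = targeted_universal_perturbation(predict, grad_target, X, target=1, eps=0.5)
```

A single shared `v` moves every sample in the set into the target class while respecting the budget.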
- High Confidence Low Uncertainty (HCLU) Attack (Grosse et al., 2018) [GPy]
  The HCLU attack creates adversarial examples that achieve high confidence and low uncertainty on a Gaussian process classifier.
- Iterative Frame Saliency (Inkawhich et al., 2018)
  The Iterative Frame Saliency attack creates adversarial examples for optical flow-based image and video classification models.
- DPatch (Liu et al., 2018) [all/Numpy]
  DPatch creates digital, rectangular patches that attack object detectors.
- ShapeShifter (Chen et al., 2018)
- Projected Gradient Descent (PGD) (Madry et al., 2017)
- NewtonFool (Jang et al., 2017)
- Elastic Net (Chen et al., 2017)
- Adversarial Patch (Brown et al., 2017) [all/Numpy, TensorFlow]
  This attack generates adversarial patches that can be printed and applied in the physical world to attack image and video classification models.
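The application step shared by patch attacks (pasting a rectangle into images at a chosen location) is easy to sketch; optimising the patch contents requires model gradients and is omitted here. Shapes and names are assumptions for this toy:

```python
import numpy as np

def apply_patch(images, patch, row, col):
    """Paste a rectangular adversarial patch onto every image in a batch at
    position (row, col). images: (N, H, W, C), patch: (h, w, C). The patch
    contents would be optimised against the model separately."""
    out = images.copy()                          # leave the originals intact
    h, w = patch.shape[:2]
    out[:, row:row + h, col:col + w, :] = patch  # overwrite the patch region
    return out

# Two blank 8x8 RGB images with a 3x3 all-ones patch pasted at (2, 4).
images = np.zeros((2, 8, 8, 3))
patch = np.ones((3, 3, 3))
patched = apply_patch(images, patch, row=2, col=4)
```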
- Decision Tree Attack (Papernot et al., 2016) [all/Numpy]
  The Decision Tree Attack creates adversarial examples for decision tree classifiers by exploiting the structure of the tree and searching for leaves with different classes near the leaf corresponding to the prediction for the benign sample.
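A much-reduced sketch of the idea using scikit-learn's tree internals: walk back up the sample's decision path and nudge a split feature just across its threshold. The real ART attack searches for the nearest leaf with a different class; this sketch only flips one split at a time.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def decision_tree_attack(clf, x, offset=1e-3):
    """Flip splits on x's decision path, deepest first, and return the first
    perturbed sample the tree classifies differently (or None)."""
    tree = clf.tree_
    y0 = clf.predict(x[None])[0]
    # Collect the internal nodes on x's root-to-leaf decision path.
    path, node = [], 0
    while tree.children_left[node] != -1:        # -1 marks a leaf
        path.append(node)
        f, t = tree.feature[node], tree.threshold[node]
        node = tree.children_left[node] if x[f] <= t else tree.children_right[node]
    for node in reversed(path):                  # try the deepest split first
        f, t = tree.feature[node], tree.threshold[node]
        x_adv = x.copy()
        # Push the feature just to the other side of the threshold.
        x_adv[f] = t + offset if x[f] <= t else t - offset
        if clf.predict(x_adv[None])[0] != y0:
            return x_adv
    return None

# Tiny 1-D tree: the single split separates classes 0 and 1.
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0, 0, 1, 1])
clf = DecisionTreeClassifier(random_state=0).fit(X, y)
x_adv = decision_tree_attack(clf, np.array([0.0]))
```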
- Carlini & Wagner (C&W) L_2 and L_inf attack (Carlini and Wagner, 2016) [all/Numpy]
  The Carlini & Wagner attacks in the L_2 and L_inf norms are among the strongest white-box attacks. A major difference with respect to the original implementation (https://github.com/carlini/nn_robust_attacks) is that ART's implementation uses line search in the optimisation of the attack objective.
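The flavour of the C&W L_2 objective can be shown in a heavily reduced form on a linear binary classifier (class 1 iff w @ x > 0). Everything here is a toy: the real attack handles multiple classes, uses a change of variables for box constraints, and ART optimises with line search rather than the fixed-step gradient descent below.

```python
import numpy as np

def cw_l2_sketch(x, w, c=2.1, kappa=0.1, lr=0.05, n_iter=200):
    """Minimise ||delta||^2 + c * max(w @ (x + delta) + kappa, 0), i.e. trade
    off perturbation size against a hinge on the (confidence-shifted) margin
    of the original class, by plain gradient descent on delta."""
    delta = np.zeros_like(x)
    for _ in range(n_iter):
        hinge_active = w @ (x + delta) + kappa > 0
        # Gradient of the distance term plus, while the hinge is active,
        # the gradient of the margin term.
        grad = 2 * delta + (c * w if hinge_active else 0.0)
        delta = delta - lr * grad
    return x + delta

# A class-1 sample (w @ x = 1 > 0); the attack should cross the boundary
# with a small perturbation.
w = np.array([1.0, 0.0])
x = np.array([1.0, 0.0])
x_adv = cw_l2_sketch(x, w)
```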
- Basic Iterative Method (BIM) (Kurakin et al., 2016) [all/Numpy]
- Jacobian Saliency Map (Papernot et al., 2016)
- Universal Perturbation (Moosavi-Dezfooli et al., 2016)
- Feature Adversaries (Sabour et al., 2016) [all/Numpy]
- DeepFool (Moosavi-Dezfooli et al., 2015)
- Virtual Adversarial Method (Miyato et al., 2015)
- Fast Gradient Method (Goodfellow et al., 2014) [all/Numpy]
- Square Attack (Andriushchenko et al., 2020)
- HopSkipJump Attack (Chen et al., 2019)
- Threshold Attack (Vargas et al., 2019)
- Pixel Attack (Vargas et al., 2019, Su et al., 2019)
- Simple Black-box Adversarial (SimBA) (Guo et al., 2019)
- Spatial Transformation (Engstrom et al., 2017)
- Query-efficient Black-box (Ilyas et al., 2017)
- Zeroth Order Optimisation (ZOO) (Chen et al., 2017)
- Decision-based/Boundary Attack (Brendel et al., 2018)
- Adversarial Backdoor Embedding (Tan and Shokri, 2019)
- Clean Label Feature Collision Attack (Shafahi, Huang et al., 2018)
- Backdoor Attack (Gu et al., 2017)
- Poisoning Attack on Support Vector Machines (SVM) (Biggio et al., 2013)
- Functionally Equivalent Extraction (Jagielski et al., 2019)
- Copycat CNN (Correia-Silva et al., 2018)
- KnockoffNets (Orekondy et al., 2018)
- Attribute Inference Black-Box
- Attribute Inference White-Box Lifestyle DecisionTree (Fredrikson et al., 2015)
- Attribute Inference White-Box DecisionTree (Fredrikson et al., 2015)
- Membership Inference Black-Box
- Membership Inference Black-Box Rule-Based
- Label-Only Boundary Distance Attack (Choquette-Choo et al., 2020) (ART 1.5)
- Label-Only Gap Attack (Choquette-Choo et al., 2020) (ART 1.5)
- MIFace (Fredrikson et al., 2015)