
Adversarial Robustness Toolbox notebooks

Adversarial training

adversarial_retraining.ipynb [on nbviewer] shows how to load and evaluate the MNIST and CIFAR-10 models synthesized and adversarially trained by Sinn et al., 2019.

adversarial_training_mnist.ipynb [on nbviewer] demonstrates adversarial training of a neural network to harden the model against adversarial samples using the MNIST dataset.
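
For orientation, a minimal sketch of the adversarial-training loop these notebooks walk through; `model`, `x_train` and `y_train` are placeholders assumed to be defined, and import paths follow recent ART 1.x releases:

```python
from art.estimators.classification import KerasClassifier
from art.attacks.evasion import FastGradientMethod
from art.defences.trainer import AdversarialTrainer

# Wrap a compiled Keras model (assumed defined elsewhere) for ART.
classifier = KerasClassifier(model=model, clip_values=(0.0, 1.0))

# Attack used to craft the adversarial samples mixed into each batch.
attack = FastGradientMethod(estimator=classifier, eps=0.3)

# Train on a mix of clean and adversarial samples (here 50/50).
trainer = AdversarialTrainer(classifier, attacks=attack, ratio=0.5)
trainer.fit(x_train, y_train, nb_epochs=10, batch_size=128)
```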

TensorFlow v2

art-for-tensorflow-v2-callable.ipynb [on nbviewer] shows how to use ART with TensorFlow v2 in eager execution mode with a model in the form of a callable class or Python function.

art-for-tensorflow-v2-keras.ipynb [on nbviewer] demonstrates ART with TensorFlow v2 using tensorflow.keras without eager execution.
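
A rough sketch of wrapping a TensorFlow v2 model for ART; the architecture here is an arbitrary example, and constructor arguments may differ slightly across ART versions:

```python
import numpy as np
import tensorflow as tf
from art.estimators.classification import TensorFlowV2Classifier
from art.attacks.evasion import FastGradientMethod

# Any callable mapping inputs to logits works; here a small tf.keras model.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28, 1)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10),
])

classifier = TensorFlowV2Classifier(
    model=model,  # callable: x -> logits, runs eagerly
    nb_classes=10,
    input_shape=(28, 28, 1),
    loss_object=tf.keras.losses.CategoricalCrossentropy(from_logits=True),
    clip_values=(0.0, 1.0),
)

# Placeholder batch, just to show the attack call.
x = np.random.rand(8, 28, 28, 1).astype(np.float32)
x_adv = FastGradientMethod(classifier, eps=0.1).generate(x)
```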

Attacks

attack_adversarial_patch.ipynb [on nbviewer] shows how to use ART to create real-world adversarial patches that fool object detection and classification models.
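
A hedged sketch of the patch API, assuming `classifier` is an ART-wrapped image classifier and `x_train`/`y_target` are images with one-hot target labels; the hyperparameter values are illustrative only:

```python
from art.attacks.evasion import AdversarialPatch

ap = AdversarialPatch(
    classifier,
    rotation_max=22.5,  # random rotations applied while optimising the patch
    scale_min=0.4,
    scale_max=1.0,
    learning_rate=5.0,
    max_iter=500,
    batch_size=16,
)

# Optimise the patch against a batch of images and target labels.
patch, patch_mask = ap.generate(x=x_train, y=y_target)

# Render the patch onto new images at a chosen relative scale.
x_patched = ap.apply_patch(x_test, scale=0.5)
```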

attack_decision_based_boundary.ipynb [on nbviewer] demonstrates the Decision-Based Adversarial Attack (Boundary attack). This is a black-box attack that only requires class predictions.

attack_decision_tree.ipynb [on nbviewer] shows how to compute adversarial examples on decision trees (Papernot et al., 2016). By traversing the structure of a decision tree classifier, adversarial examples can be computed without explicit gradients.
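
A minimal sketch of that attack on a scikit-learn tree, with `x_train`, `y_train` and `x_test` assumed loaded:

```python
from sklearn.tree import DecisionTreeClassifier
from art.estimators.classification import SklearnClassifier
from art.attacks.evasion import DecisionTreeAttack

# Fit an ordinary scikit-learn decision tree.
tree = DecisionTreeClassifier().fit(x_train, y_train)

# SklearnClassifier dispatches to the decision-tree wrapper the attack needs.
art_tree = SklearnClassifier(model=tree)

# Searches the tree for a nearby leaf of another class; no gradients involved.
x_adv = DecisionTreeAttack(art_tree).generate(x_test)
```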

attack_defence_imagenet.ipynb [on nbviewer] explains the basic workflow of using ART with defences and attacks on a neural network classifier for ImageNet.

attack_hopskipjump.ipynb [on nbviewer] demonstrates the HopSkipJumpAttack. This is a black-box attack that only requires class predictions. It is an advanced version of the Boundary attack.
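
Since both the Boundary attack and HopSkipJump only query predictions, the attack call looks the same for any wrapped model; a sketch, with `art_classifier` and `x_test` assumed defined:

```python
from art.attacks.evasion import HopSkipJump

# Black-box: the attack only calls predict(), never gradients.
hsj = HopSkipJump(
    art_classifier,   # any ART classifier, including BlackBoxClassifier
    targeted=False,
    norm=2,           # L2; an L-infinity variant is also supported
    max_iter=50,
    max_eval=1000,
)
x_adv = hsj.generate(x=x_test[:10])
```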

Classifiers

classifier_blackbox.ipynb [on nbviewer] demonstrates BlackBoxClassifier, the most general and versatile classifier in ART: it requires only a single predict function and makes no further assumptions about the underlying model. The notebook shows how to use BlackBoxClassifier to attack a remote, deployed model (in this case on IBM Watson Machine Learning, https://cloud.ibm.com) using the HopSkipJump attack.
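
A sketch of the single function BlackBoxClassifier needs; the REST endpoint in the docstring is hypothetical:

```python
import numpy as np
from art.estimators.classification import BlackBoxClassifier

def predict(x: np.ndarray) -> np.ndarray:
    """Query the remote model for a batch of inputs.

    Must return probability or one-hot vectors of shape (n, nb_classes).
    A real implementation would POST x to the scoring endpoint, e.g.:
        response = requests.post(SCORING_URL, json={"values": x.tolist()})
        return np.array(response.json()["predictions"])
    """
    raise NotImplementedError

classifier = BlackBoxClassifier(
    predict_fn=predict,
    input_shape=(28, 28, 1),
    nb_classes=10,
    clip_values=(0.0, 1.0),
)
```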

classifier_blackbox_tesseract.ipynb [on nbviewer] demonstrates a black-box attack on Tesseract OCR. It uses BlackBoxClassifier and the HopSkipJump attack to change the image of one word into the image of another word, and shows how to apply pre-processing defences.

classifier_catboost.ipynb [on nbviewer] shows how to use ART with CatBoost models. It demonstrates and analyzes Zeroth Order Optimisation attacks using the Iris and MNIST datasets.

classifier_gpy_gaussian_process.ipynb [on nbviewer] shows how to create adversarial examples for the Gaussian Process classifier of GPy. It crafts adversarial examples with the HighConfidenceLowUncertainty (HCLU) attack (Grosse et al., 2018), specifically targeting Gaussian Process classifiers, and compares it to Projected Gradient Descent (PGD) (Madry et al., 2017).

classifier_lightgbm.ipynb [on nbviewer] shows how to use ART with LightGBM models. It demonstrates and analyzes Zeroth Order Optimisation attacks using the Iris and MNIST datasets.

classifier_scikitlearn_AdaBoostClassifier.ipynb [on nbviewer] shows how to use ART with Scikit-learn AdaBoostClassifier. It demonstrates and analyzes Zeroth Order Optimisation attacks using the Iris and MNIST datasets.

classifier_scikitlearn_BaggingClassifier.ipynb [on nbviewer] shows how to use ART with Scikit-learn BaggingClassifier. It demonstrates and analyzes Zeroth Order Optimisation attacks using the Iris and MNIST datasets.

classifier_scikitlearn_DecisionTreeClassifier.ipynb [on nbviewer] shows how to use ART with Scikit-learn DecisionTreeClassifier. It demonstrates and analyzes Zeroth Order Optimisation attacks using the Iris and MNIST datasets.

classifier_scikitlearn_ExtraTreesClassifier.ipynb [on nbviewer] shows how to use ART with Scikit-learn ExtraTreesClassifier. It demonstrates and analyzes Zeroth Order Optimisation attacks using the Iris and MNIST datasets.

classifier_scikitlearn_GradientBoostingClassifier.ipynb [on nbviewer] shows how to use ART with Scikit-learn GradientBoostingClassifier. It demonstrates and analyzes Zeroth Order Optimisation attacks using the Iris and MNIST datasets.

classifier_scikitlearn_LogisticRegression.ipynb [on nbviewer] shows how to use ART with Scikit-learn LogisticRegression. It demonstrates and analyzes Projected Gradient Descent attacks using the MNIST dataset.

classifier_scikitlearn_pipeline_pca_cv_svc.ipynb [on nbviewer] contains an example of generating adversarial examples with a black-box attack against a scikit-learn pipeline consisting of principal component analysis (PCA) and a support vector machine classifier (SVC), optimised using grid search with cross validation (CV); any other valid pipeline would work too. The adversarial samples are created with the black-box HopSkipJump attack. The training data is MNIST because of its intuitive visualisation, but any other dataset, including tabular data, would be suitable too.
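
A condensed sketch of that setup, assuming flattened MNIST arrays `x_train`, `y_train` and `x_test`; the grid and hyperparameters are illustrative:

```python
from sklearn.decomposition import PCA
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC
from art.estimators.classification import SklearnClassifier
from art.attacks.evasion import HopSkipJump

# PCA + SVC pipeline, tuned by cross-validated grid search.
pipeline = Pipeline([("pca", PCA(n_components=20)),
                     ("svc", SVC(probability=True))])
search = GridSearchCV(pipeline, {"svc__C": [0.1, 1.0, 10.0]}, cv=3)
search.fit(x_train, y_train)

# Wrap the fitted pipeline; HopSkipJump only needs its predictions.
art_model = SklearnClassifier(model=search.best_estimator_,
                              clip_values=(0.0, 1.0))
x_adv = HopSkipJump(art_model).generate(x_test[:5])
```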

classifier_scikitlearn_RandomForestClassifier.ipynb [on nbviewer] shows how to use ART with Scikit-learn RandomForestClassifier. It demonstrates and analyzes Zeroth Order Optimisation attacks using the Iris and MNIST datasets.

classifier_scikitlearn_SVC_LinearSVC.ipynb [on nbviewer] shows how to use ART with Scikit-learn SVC and LinearSVC support vector machines. It demonstrates and analyzes Projected Gradient Descent attacks using the Iris and MNIST datasets for binary and multiclass classification with linear and radial-basis-function kernels.

classifier_xgboost.ipynb [on nbviewer] shows how to use ART with XGBoost models. It demonstrates and analyzes Zeroth Order Optimisation attacks using the Iris and MNIST datasets.
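
The Zeroth Order Optimisation (ZOO) workflow is essentially the same across the gradient-boosted and scikit-learn notebooks above; a sketch for XGBoost, with a fitted `model` and data arrays assumed:

```python
from art.estimators.classification import XGBoostClassifier
from art.attacks.evasion import ZooAttack

# Wrap a trained XGBoost model; LightGBM, CatBoost and the scikit-learn
# ensembles have analogous wrapper classes.
art_model = XGBoostClassifier(
    model=model,
    nb_features=x_train.shape[1],
    nb_classes=10,
    clip_values=(0.0, 1.0),
)

# ZOO estimates gradients purely from prediction queries (black-box).
zoo = ZooAttack(classifier=art_model, max_iter=100, nb_parallel=5)
x_adv = zoo.generate(x_test[:5])
```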

Detectors

detection_adversarial_samples_cifar10.ipynb [on nbviewer] demonstrates the detection of adversarial examples using ART. The classifier is a Keras neural network with a ResNet architecture trained on the CIFAR-10 dataset.
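
One detection approach from ART, sketched under the assumption that clean samples `x_clean`, adversarial samples `x_adv` and an ART-wrapped `detector_classifier` already exist; the exact defence used in the notebook may differ:

```python
import numpy as np
from art.defences.detector.evasion import BinaryInputDetector

# A second classifier trained to answer "adversarial or not?".
detector = BinaryInputDetector(detector_classifier)

# Label clean samples 0 and adversarial samples 1 (one-hot).
x_mix = np.concatenate([x_clean, x_adv])
y_mix = np.concatenate([np.tile([1.0, 0.0], (len(x_clean), 1)),
                        np.tile([0.0, 1.0], (len(x_adv), 1))])
detector.fit(x_mix, y_mix, nb_epochs=5, batch_size=128)

report, is_adversarial = detector.detect(x_test)
```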

Poisoning

poisoning_attack_svm.ipynb [on nbviewer] demonstrates a poisoning attack on a Support Vector Machine.

poisoning_dataset_mnist.ipynb [on nbviewer] demonstrates the generation and detection of backdoors in neural networks by poisoning the training dataset.
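
A sketch of dataset poisoning with a pixel-pattern backdoor; the target class 7 is an arbitrary example and `x_subset` is assumed to be a slice of the training images:

```python
import numpy as np
from art.attacks.poisoning import PoisoningAttackBackdoor
from art.attacks.poisoning.perturbations import add_pattern_bd

# Stamp a small pixel pattern into the images and relabel them; a model
# trained on the poisoned data learns to associate the pattern with the
# target class.
backdoor = PoisoningAttackBackdoor(add_pattern_bd)

target = np.zeros((len(x_subset), 10))
target[:, 7] = 1.0  # hypothetical target class

x_poison, y_poison = backdoor.poison(x_subset, y=target)
```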

Certification and Verification

output_randomized_smoothing_mnist.ipynb [on nbviewer] shows how to achieve certified adversarial robustness for neural networks via Randomized Smoothing.
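
A rough sketch of the smoothing wrapper for a TensorFlow v2 model; the `certify` call and its exact signature are taken from memory and should be checked against the notebook (`model` and `x_test` assumed defined):

```python
import tensorflow as tf
from art.estimators.certification.randomized_smoothing import (
    TensorFlowV2RandomizedSmoothing,
)

# Smooth the base model with Gaussian noise; the certified L2 radius grows
# with the margin between the top two class probabilities.
rs = TensorFlowV2RandomizedSmoothing(
    model=model,
    nb_classes=10,
    input_shape=(28, 28, 1),
    loss_object=tf.keras.losses.CategoricalCrossentropy(from_logits=True),
    scale=0.25,        # sigma of the smoothing noise
    sample_size=100,   # Monte Carlo samples per prediction
)

prediction, radius = rs.certify(x=x_test[:10], n=500)
```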

robustness_verification_clique_method_tree_ensembles_gradient_boosted_decision_trees_classifiers.ipynb [on nbviewer] demonstrates the verification of adversarial robustness in decision tree ensemble classifiers (Gradient Boosted Decision Trees, Random Forests, etc.) using XGBoost, LightGBM and Scikit-learn.
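The verification call for a wrapped gradient-boosted model, sketched with illustrative parameter values; `model`, `x_test` and one-hot `y_test` are assumed defined:

```python
from art.estimators.classification import XGBoostClassifier
from art.metrics import RobustnessVerificationTreeModelsCliqueMethod

# Wrap a trained XGBoost model; LightGBM and scikit-learn ensembles work too.
art_model = XGBoostClassifier(model=model, nb_features=784, nb_classes=10)

verifier = RobustnessVerificationTreeModelsCliqueMethod(classifier=art_model)

# Lower-bounds robust accuracy within an L-infinity ball of radius eps_init.
average_bound, verified_error = verifier.verify(
    x=x_test, y=y_test, eps_init=0.3,
    nb_search_steps=10, max_clique=2, max_level=2,
)
```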

MNIST

fabric_for_deep_learning_adversarial_samples_fashion_mnist.ipynb [on nbviewer] shows how to use ART with deep learning models trained with the Fabric for Deep Learning (FfDL).