Adversarial Robustness 360 Toolbox (ART) v1.1



For the README in Chinese, please click here.

Adversarial Robustness 360 Toolbox (ART) is a Python library that supports developers and researchers in defending Machine Learning models (Deep Neural Networks, Gradient Boosted Decision Trees, Support Vector Machines, Random Forests, Logistic Regression, Gaussian Processes, Decision Trees, Scikit-learn Pipelines, etc.) against adversarial threats (including evasion, extraction and poisoning) and helps make AI systems more secure and trustworthy. Machine Learning models are vulnerable to adversarial examples: inputs (images, text, tabular data, etc.) deliberately crafted to make the model produce a desired response. ART provides the tools to build and deploy defences and to test them with adversarial attacks.
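
To make that workflow concrete, here is a minimal sketch of crafting adversarial examples with ART's Fast Gradient Method. The module paths follow the v1.1 layout (`art.attacks`, `art.classifiers`); newer releases moved attacks to `art.attacks.evasion` and model wrappers to `art.estimators`. The toy data, `eps` value and `LogisticRegression` model are illustrative choices, not part of this README.

```python
# Minimal sketch: wrap a scikit-learn model in ART and attack it with FGSM.
# Module paths follow ART v1.1; the toy data below is purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

from art.attacks import FastGradientMethod
from art.classifiers import SklearnClassifier

# Train an ordinary scikit-learn classifier on random toy data.
x_train = np.random.rand(100, 4).astype(np.float32)
y_train = np.random.randint(0, 2, size=100)
model = LogisticRegression().fit(x_train, y_train)

# Wrap the model so ART can compute gradients and clip perturbed inputs.
classifier = SklearnClassifier(model=model, clip_values=(0.0, 1.0))

# Craft adversarial examples with the Fast Gradient Method (eps = attack budget).
attack = FastGradientMethod(classifier, eps=0.1)
x_adv = attack.generate(x=x_train)

# Accuracy typically drops on the adversarial inputs.
print("clean accuracy:      ", model.score(x_train, y_train))
print("adversarial accuracy:", model.score(x_adv, y_train))
```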

Defending Machine Learning models involves certifying and verifying model robustness, as well as hardening models with approaches such as pre-processing inputs, augmenting training data with adversarial examples, and leveraging runtime detection methods to flag inputs that might have been modified by an adversary. ART also includes attacks for testing defences under state-of-the-art threat models.
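
As one illustration of the hardening approaches above, the sketch below augments the training data with adversarial examples and refits the model. This is hand-rolled adversarial data augmentation for brevity, not ART's built-in adversarial training class; the same v1.1 module-path and toy-data assumptions as in the previous sketch apply.

```python
# Minimal sketch of adversarial training by data augmentation: generate
# adversarial examples for the current model, add them to the training set
# with their correct labels, and refit. (ART also ships a dedicated
# AdversarialTrainer class; this hand-rolled loop just shows the idea.)
import numpy as np
from sklearn.linear_model import LogisticRegression

from art.attacks import FastGradientMethod
from art.classifiers import SklearnClassifier

x_train = np.random.rand(200, 4).astype(np.float32)
y_train = np.random.randint(0, 2, size=200)
model = LogisticRegression().fit(x_train, y_train)

for _ in range(3):  # a few augmentation rounds
    classifier = SklearnClassifier(model=model, clip_values=(0.0, 1.0))
    x_adv = FastGradientMethod(classifier, eps=0.1).generate(x=x_train)
    # Refit on clean + adversarial inputs, keeping the true labels.
    x_aug = np.concatenate([x_train, x_adv])
    y_aug = np.concatenate([y_train, y_train])
    model = LogisticRegression().fit(x_aug, y_aug)
```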

Documentation of ART: https://adversarial-robustness-toolbox.readthedocs.io

Get started with examples and tutorials

The library is under continuous development. Feedback, bug reports and contributions are very welcome. Get in touch with us on Slack (invite here)!

Supported Machine Learning Libraries and Applications

Implemented Attacks, Defences, Detections, Metrics, Certifications and Verifications

Evasion Attacks:

Extraction Attacks:

Poisoning Attacks:

Defences:

Extraction Defences:

Robustness Metrics, Certifications and Verifications:

Detection of Adversarial Examples:

  • Basic detector based on inputs (a minimal sketch follows this list)
  • Detector trained on the activations of a specific layer
  • Detector based on Fast Generalized Subset Scan (Speakman et al., 2018)
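
The first of these, training a second classifier to tell clean inputs from adversarial ones, is simple enough to sketch. The version below hand-rolls the detector with plain scikit-learn rather than using ART's detector classes (ART packages this pattern as a binary input detector), and reuses the toy setup and v1.1 module-path assumptions from the earlier sketches.

```python
# Minimal sketch of a basic input-space detector: a second classifier is
# trained to label inputs as clean (0) or adversarial (1). Hand-rolled with
# scikit-learn for brevity; toy data is purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

from art.attacks import FastGradientMethod
from art.classifiers import SklearnClassifier

# Victim model on toy data.
x = np.random.rand(200, 4).astype(np.float32)
y = np.random.randint(0, 2, size=200)
victim = LogisticRegression().fit(x, y)

# Craft adversarial versions of the clean inputs.
wrapped = SklearnClassifier(model=victim, clip_values=(0.0, 1.0))
x_adv = FastGradientMethod(wrapped, eps=0.2).generate(x=x)

# Train the detector on clean vs. adversarial inputs.
x_det = np.concatenate([x, x_adv])
y_det = np.concatenate([np.zeros(len(x)), np.ones(len(x_adv))])
detector = LogisticRegression().fit(x_det, y_det)

# At runtime, flag inputs the detector considers adversarial.
print(detector.predict(x_adv[:5]))  # ideally mostly 1s
```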

Detection of Poisoning Attacks:

Setup

Installation with pip

The toolbox is designed and tested to run with Python 3. ART can be installed from the PyPI repository using pip:

pip install adversarial-robustness-toolbox

Manual installation

The most recent version of ART can be downloaded or cloned from this repository:

git clone https://github.com/IBM/adversarial-robustness-toolbox

Install ART with the following command from the project folder adversarial-robustness-toolbox:

pip install .

ART provides unit tests that can be run with the following command:

bash run_tests.sh

Get Started with ART

Examples of using ART can be found in examples, and examples/README.md provides an overview and additional information. There is a minimal example for each supported machine learning framework. All examples can be run with the following command:

python examples/<example_name>.py

More detailed examples and tutorials are located in notebooks, and notebooks/README.md provides an overview and more information.

Contributing

Adding new features, improving documentation, fixing bugs, or writing tutorials are all examples of helpful contributions. Furthermore, if you are publishing a new attack or defence, we strongly encourage you to add it to the Adversarial Robustness 360 Toolbox so that others may evaluate it fairly in their own work.

Bug fixes can be initiated through GitHub pull requests. When making code contributions to the Adversarial Robustness 360 Toolbox, we ask that you follow the PEP 8 coding standard and that you provide unit tests for the new features.

This project uses the DCO (Developer Certificate of Origin). Be sure to sign off your commits using the -s flag or by adding Signed-off-by: Name <Email> to the commit message.

Example

git commit -s -m 'Add new feature'

Citing ART

If you use ART for research, please consider citing the following reference paper:

@article{art2018,
    title = {Adversarial Robustness Toolbox v1.1.0},
    author = {Nicolae, Maria-Irina and Sinn, Mathieu and Tran, Minh~Ngoc and Buesser, Beat and Rawat, Ambrish and Wistuba, Martin and Zantedeschi, Valentina and Baracaldo, Nathalie and Chen, Bryant and Ludwig, Heiko and Molloy, Ian and Edwards, Ben},
    journal = {CoRR},
    volume = {1807.01069},
    year = {2018},
    url = {https://arxiv.org/pdf/1807.01069}
}