Metadata-Version: 2.1
Name: adversarial-robustness-toolbox
Version: 1.7.0
Summary: Toolbox for adversarial machine learning.
Home-page: https://github.com/Trusted-AI/adversarial-robustness-toolbox
Author: Irina Nicolae
Author-email: irinutza.n@gmail.com
Maintainer: Beat Buesser
Maintainer-email: beat.buesser@ie.ibm.com
License: MIT
Platform: UNKNOWN
Classifier: Development Status :: 3 - Alpha
Classifier: Intended Audience :: Developers
Classifier: Intended Audience :: Education
Classifier: Intended Audience :: Science/Research
Classifier: License :: OSI Approved :: MIT License
Classifier: Programming Language :: Python :: 3.6
Classifier: Programming Language :: Python :: 3.7
Classifier: Programming Language :: Python :: 3.8
Classifier: Topic :: Software Development :: Libraries
Classifier: Topic :: Software Development :: Libraries :: Python Modules
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Description-Content-Type: text/markdown
Requires-Dist: numpy (>=1.18.0)
Requires-Dist: scipy (>=1.4.1)
Requires-Dist: scikit-learn (<0.24.3,>=0.22.2)
Requires-Dist: six
Requires-Dist: setuptools
Requires-Dist: tqdm
Requires-Dist: numba (~=0.53.1)
Provides-Extra: all
Requires-Dist: mxnet ; extra == 'all'
Requires-Dist: catboost ; extra == 'all'
Requires-Dist: lightgbm ; extra == 'all'
Requires-Dist: tensorflow ; extra == 'all'
Requires-Dist: tensorflow-addons ; extra == 'all'
Requires-Dist: h5py ; extra == 'all'
Requires-Dist: torch ; extra == 'all'
Requires-Dist: torchvision ; extra == 'all'
Requires-Dist: xgboost ; extra == 'all'
Requires-Dist: pandas ; extra == 'all'
Requires-Dist: kornia ; extra == 'all'
Requires-Dist: matplotlib ; extra == 'all'
Requires-Dist: Pillow ; extra == 'all'
Requires-Dist: statsmodels ; extra == 'all'
Requires-Dist: pydub ; extra == 'all'
Requires-Dist: resampy ; extra == 'all'
Requires-Dist: ffmpeg-python ; extra == 'all'
Requires-Dist: cma ; extra == 'all'
Requires-Dist: librosa ; extra == 'all'
Requires-Dist: opencv-python ; extra == 'all'
Provides-Extra: catboost
Requires-Dist: catboost ; extra == 'catboost'
Provides-Extra: docs
Requires-Dist: sphinx (>=1.4) ; extra == 'docs'
Requires-Dist: sphinx-rtd-theme ; extra == 'docs'
Requires-Dist: sphinx-autodoc-annotation ; extra == 'docs'
Requires-Dist: sphinx-autodoc-typehints ; extra == 'docs'
Requires-Dist: matplotlib ; extra == 'docs'
Requires-Dist: numpy (>=1.18.0) ; extra == 'docs'
Requires-Dist: scipy (>=1.4.1) ; extra == 'docs'
Requires-Dist: six (>=1.13.0) ; extra == 'docs'
Requires-Dist: scikit-learn (<0.24.3,>=0.22.2) ; extra == 'docs'
Requires-Dist: Pillow (>=6.0.0) ; extra == 'docs'
Provides-Extra: gpy
Requires-Dist: GPy ; extra == 'gpy'
Provides-Extra: keras
Requires-Dist: keras ; extra == 'keras'
Requires-Dist: h5py ; extra == 'keras'
Provides-Extra: lightgbm
Requires-Dist: lightgbm ; extra == 'lightgbm'
Provides-Extra: lingvo_asr
Requires-Dist: tensorflow-gpu (==2.1.0) ; extra == 'lingvo_asr'
Requires-Dist: lingvo (==0.6.4) ; extra == 'lingvo_asr'
Requires-Dist: pydub ; extra == 'lingvo_asr'
Requires-Dist: resampy ; extra == 'lingvo_asr'
Requires-Dist: librosa ; extra == 'lingvo_asr'
Provides-Extra: mxnet
Requires-Dist: mxnet ; extra == 'mxnet'
Provides-Extra: pytorch
Requires-Dist: torch ; extra == 'pytorch'
Requires-Dist: torchvision ; extra == 'pytorch'
Provides-Extra: pytorch_audio
Requires-Dist: torch ; extra == 'pytorch_audio'
Requires-Dist: torchvision ; extra == 'pytorch_audio'
Requires-Dist: torchaudio ; extra == 'pytorch_audio'
Requires-Dist: pydub ; extra == 'pytorch_audio'
Requires-Dist: resampy ; extra == 'pytorch_audio'
Requires-Dist: librosa ; extra == 'pytorch_audio'
Provides-Extra: pytorch_image
Requires-Dist: torch ; extra == 'pytorch_image'
Requires-Dist: torchvision ; extra == 'pytorch_image'
Requires-Dist: kornia ; extra == 'pytorch_image'
Requires-Dist: Pillow ; extra == 'pytorch_image'
Requires-Dist: ffmpeg-python ; extra == 'pytorch_image'
Requires-Dist: opencv-python ; extra == 'pytorch_image'
Provides-Extra: tensorflow
Requires-Dist: tensorflow ; extra == 'tensorflow'
Requires-Dist: tensorflow-addons ; extra == 'tensorflow'
Requires-Dist: h5py ; extra == 'tensorflow'
Provides-Extra: tensorflow_audio
Requires-Dist: tensorflow ; extra == 'tensorflow_audio'
Requires-Dist: tensorflow-addons ; extra == 'tensorflow_audio'
Requires-Dist: h5py ; extra == 'tensorflow_audio'
Requires-Dist: pydub ; extra == 'tensorflow_audio'
Requires-Dist: resampy ; extra == 'tensorflow_audio'
Requires-Dist: librosa ; extra == 'tensorflow_audio'
Provides-Extra: tensorflow_image
Requires-Dist: tensorflow ; extra == 'tensorflow_image'
Requires-Dist: tensorflow-addons ; extra == 'tensorflow_image'
Requires-Dist: h5py ; extra == 'tensorflow_image'
Requires-Dist: Pillow ; extra == 'tensorflow_image'
Requires-Dist: ffmpeg-python ; extra == 'tensorflow_image'
Requires-Dist: opencv-python ; extra == 'tensorflow_image'
Provides-Extra: xgboost
Requires-Dist: xgboost ; extra == 'xgboost'

# Adversarial Robustness Toolbox (ART) v1.7
<p align="center">
  <img src="docs/images/art_lfai.png?raw=true" width="467" title="ART logo">
</p>
<br />

![Continuous Integration](https://github.com/Trusted-AI/adversarial-robustness-toolbox/workflows/Continuous%20Integration/badge.svg)
![CodeQL](https://github.com/Trusted-AI/adversarial-robustness-toolbox/workflows/CodeQL/badge.svg)
[![Documentation Status](https://readthedocs.org/projects/adversarial-robustness-toolbox/badge/?version=latest)](http://adversarial-robustness-toolbox.readthedocs.io/en/latest/?badge=latest)
[![PyPI](https://badge.fury.io/py/adversarial-robustness-toolbox.svg)](https://badge.fury.io/py/adversarial-robustness-toolbox)
[![Language grade: Python](https://img.shields.io/lgtm/grade/python/g/Trusted-AI/adversarial-robustness-toolbox.svg?logo=lgtm&logoWidth=18)](https://lgtm.com/projects/g/Trusted-AI/adversarial-robustness-toolbox/context:python)
[![Total alerts](https://img.shields.io/lgtm/alerts/g/Trusted-AI/adversarial-robustness-toolbox.svg?logo=lgtm&logoWidth=18)](https://lgtm.com/projects/g/Trusted-AI/adversarial-robustness-toolbox/alerts/)
[![codecov](https://codecov.io/gh/Trusted-AI/adversarial-robustness-toolbox/branch/main/graph/badge.svg)](https://codecov.io/gh/Trusted-AI/adversarial-robustness-toolbox)
[![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
[![PyPI - Python Version](https://img.shields.io/pypi/pyversions/adversarial-robustness-toolbox)](https://pypi.org/project/adversarial-robustness-toolbox/)
[![slack-img](https://img.shields.io/badge/chat-on%20slack-yellow.svg)](https://ibm-art.slack.com/)
[![Downloads](https://pepy.tech/badge/adversarial-robustness-toolbox)](https://pepy.tech/project/adversarial-robustness-toolbox)
[![Downloads](https://pepy.tech/badge/adversarial-robustness-toolbox/month)](https://pepy.tech/project/adversarial-robustness-toolbox)

[中文README请按此处 (Chinese README)](README-cn.md)

Adversarial Robustness Toolbox (ART) is a Python library for machine learning security. ART provides tools that enable
developers and researchers to evaluate and defend machine learning models and applications against the
adversarial threats of Evasion, Poisoning, Extraction, and Inference. ART supports all popular machine learning frameworks
(TensorFlow, Keras, PyTorch, MXNet, scikit-learn, XGBoost, LightGBM, CatBoost, GPy, etc.), all data types
(images, tables, audio, video, etc.) and machine learning tasks (classification, object detection, speech recognition,
generation, certification, etc.).
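As a small, self-contained illustration of the evasion threat that ART addresses, the sketch below hand-rolls the Fast Gradient Sign Method against a logistic-regression model using only NumPy and scikit-learn. This is not ART's API, just a minimal demonstration of the attack idea; the function name and parameters are hypothetical.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Train a simple binary classifier on synthetic data.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X, y)

def fgsm_logistic(model, X, y, eps):
    """Hand-rolled FGSM for logistic regression: perturb each input
    a distance eps (per feature) in the direction that increases its loss."""
    w, b = model.coef_.ravel(), model.intercept_[0]
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted P(y=1)
    grad = np.outer(p - y, w)                # d(loss)/dx, one row per sample
    return X + eps * np.sign(grad)

X_adv = fgsm_logistic(clf, X, y, eps=0.5)
print("clean accuracy:      ", clf.score(X, y))
print("adversarial accuracy:", clf.score(X_adv, y))
```

Even this tiny perturbation budget typically drops accuracy sharply; ART packages attacks like this (and far stronger ones), plus the corresponding defenses and metrics, behind a uniform estimator interface for all supported frameworks.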

## Adversarial Threats

<p align="center">
  <img src="docs/images/adversarial_threats_attacker.png?raw=true" width="400" title="Adversarial threats (attacker)">
  <img src="docs/images/adversarial_threats_art.png?raw=true" width="400" title="Adversarial threats (ART)">
</p>
<br />

## ART for Red and Blue Teams (selection)

<p align="center">
  <img src="docs/images/white_hat_blue_red.png?raw=true" width="800" title="ART Red and Blue Teams">
</p>
<br />

## Learn more

| **[Get Started][get-started]**     | **[Documentation][documentation]**     | **[Contributing][contributing]**           |
|-------------------------------------|-------------------------------|-----------------------------------|
| - [Installation][installation]<br>- [Examples](examples/README.md)<br>- [Notebooks](notebooks/README.md) | - [Attacks][attacks]<br>- [Defences][defences]<br>- [Estimators][estimators]<br>- [Metrics][metrics]<br>- [Technical Documentation](https://adversarial-robustness-toolbox.readthedocs.io) | - [Slack](https://ibm-art.slack.com), [Invitation](https://join.slack.com/t/ibm-art/shared_invite/enQtMzkyOTkyODE4NzM4LTA4NGQ1OTMxMzFmY2Q1MzE1NWI2MmEzN2FjNGNjOGVlODVkZDE0MjA1NTA4OGVkMjVkNmQ4MTY1NmMyOGM5YTg)<br>- [Contributing](CONTRIBUTING.md)<br>- [Roadmap][roadmap]<br>- [Citing][citing] |

[get-started]: https://github.com/Trusted-AI/adversarial-robustness-toolbox/wiki/Get-Started
[attacks]: https://github.com/Trusted-AI/adversarial-robustness-toolbox/wiki/ART-Attacks
[defences]: https://github.com/Trusted-AI/adversarial-robustness-toolbox/wiki/ART-Defences
[estimators]: https://github.com/Trusted-AI/adversarial-robustness-toolbox/wiki/ART-Estimators
[metrics]: https://github.com/Trusted-AI/adversarial-robustness-toolbox/wiki/ART-Metrics
[contributing]: https://github.com/Trusted-AI/adversarial-robustness-toolbox/wiki/Contributing
[documentation]: https://github.com/Trusted-AI/adversarial-robustness-toolbox/wiki/Documentation
[installation]: https://github.com/Trusted-AI/adversarial-robustness-toolbox/wiki/Get-Started#setup
[roadmap]: https://github.com/Trusted-AI/adversarial-robustness-toolbox/wiki/Roadmap
[citing]: https://github.com/Trusted-AI/adversarial-robustness-toolbox/wiki/Contributing#citing-art
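The package metadata above defines framework-specific extras (`pytorch`, `tensorflow`, `keras`, `xgboost`, `all`, etc.), so a typical installation follows the standard pip pattern; the extras shown are examples drawn from that list:

```shell
# Core library only
pip install adversarial-robustness-toolbox

# With optional framework-specific dependencies
pip install "adversarial-robustness-toolbox[pytorch]"
pip install "adversarial-robustness-toolbox[tensorflow]"
```

See the [Installation][installation] guide for framework-specific details.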

The library is under continuous development. Feedback, bug reports and contributions are very welcome!

# Acknowledgment
This material is partially based upon work supported by the Defense Advanced Research Projects Agency (DARPA) under
Contract No. HR001120C0013. Any opinions, findings and conclusions or recommendations expressed in this material are
those of the author(s) and do not necessarily reflect the views of the Defense Advanced Research Projects Agency (DARPA).


