Metadata-Version: 2.1
Name: adapters
Version: 0.0.0.dev20231123
Summary: A Unified Library for Parameter-Efficient and Modular Transfer Learning
Home-page: https://github.com/adapter-hub/adapters
Author: The AdapterHub team and community contributors
Author-email: pfeiffer@ukp.tu-darmstadt.de
License: Apache
Keywords: NLP deep learning transformer pytorch BERT adapters
Classifier: Development Status :: 4 - Beta
Classifier: Intended Audience :: Developers
Classifier: Intended Audience :: Education
Classifier: Intended Audience :: Science/Research
Classifier: License :: OSI Approved :: Apache Software License
Classifier: Operating System :: OS Independent
Classifier: Programming Language :: Python :: 3
Classifier: Programming Language :: Python :: 3.8
Classifier: Programming Language :: Python :: 3.9
Classifier: Programming Language :: Python :: 3.10
Classifier: Topic :: Scientific/Engineering :: Artificial Intelligence
Requires-Python: >=3.8.0
Description-Content-Type: text/markdown
License-File: LICENSE
Requires-Dist: transformers ==4.35.2
Provides-Extra: dev
Requires-Dist: pytest ; extra == 'dev'
Requires-Dist: pytest-xdist ; extra == 'dev'
Requires-Dist: timeout-decorator ; extra == 'dev'
Requires-Dist: parameterized ; extra == 'dev'
Requires-Dist: psutil ; extra == 'dev'
Requires-Dist: datasets !=2.5.0 ; extra == 'dev'
Requires-Dist: dill <0.3.5 ; extra == 'dev'
Requires-Dist: evaluate >=0.2.0 ; extra == 'dev'
Requires-Dist: pytest-timeout ; extra == 'dev'
Requires-Dist: black ==22.3 ; extra == 'dev'
Requires-Dist: sacrebleu <2.0.0,>=1.4.12 ; extra == 'dev'
Requires-Dist: rouge-score !=0.0.7,!=0.0.8,!=0.1,!=0.1.1 ; extra == 'dev'
Requires-Dist: nltk ; extra == 'dev'
Requires-Dist: GitPython <3.1.19 ; extra == 'dev'
Requires-Dist: hf-doc-builder >=0.3.0 ; extra == 'dev'
Requires-Dist: protobuf <=3.20.2 ; extra == 'dev'
Requires-Dist: sacremoses ; extra == 'dev'
Requires-Dist: rjieba ; extra == 'dev'
Requires-Dist: safetensors >=0.2.1 ; extra == 'dev'
Requires-Dist: beautifulsoup4 ; extra == 'dev'
Requires-Dist: pillow ; extra == 'dev'
Requires-Dist: accelerate >=0.20.3 ; extra == 'dev'
Requires-Dist: torch !=1.12.0,>=1.10 ; extra == 'dev'
Requires-Dist: sentencepiece !=0.1.92,>=0.1.91 ; extra == 'dev'
Requires-Dist: isort >=5.5.4 ; extra == 'dev'
Requires-Dist: flake8 >=3.8.3 ; extra == 'dev'
Requires-Dist: docutils ==0.16.0 ; extra == 'dev'
Requires-Dist: Jinja2 ==2.11.3 ; extra == 'dev'
Requires-Dist: markupsafe ==2.0.1 ; extra == 'dev'
Requires-Dist: myst-parser ; extra == 'dev'
Requires-Dist: sphinx ==3.2.1 ; extra == 'dev'
Requires-Dist: sphinx-markdown-tables ; extra == 'dev'
Requires-Dist: sphinx-rtd-theme ==0.4.3 ; extra == 'dev'
Requires-Dist: sphinx-copybutton ; extra == 'dev'
Requires-Dist: sphinxext-opengraph ==0.4.1 ; extra == 'dev'
Requires-Dist: sphinx-intl ; extra == 'dev'
Requires-Dist: sphinx-multiversion ; extra == 'dev'
Requires-Dist: scikit-learn ; extra == 'dev'
Requires-Dist: onnxruntime >=1.4.0 ; extra == 'dev'
Requires-Dist: onnxruntime-tools >=1.4.2 ; extra == 'dev'
Provides-Extra: docs
Requires-Dist: docutils ==0.16.0 ; extra == 'docs'
Requires-Dist: Jinja2 ==2.11.3 ; extra == 'docs'
Requires-Dist: markupsafe ==2.0.1 ; extra == 'docs'
Requires-Dist: myst-parser ; extra == 'docs'
Requires-Dist: sphinx ==3.2.1 ; extra == 'docs'
Requires-Dist: sphinx-markdown-tables ; extra == 'docs'
Requires-Dist: sphinx-rtd-theme ==0.4.3 ; extra == 'docs'
Requires-Dist: sphinx-copybutton ; extra == 'docs'
Requires-Dist: sphinxext-opengraph ==0.4.1 ; extra == 'docs'
Requires-Dist: sphinx-intl ; extra == 'docs'
Requires-Dist: sphinx-multiversion ; extra == 'docs'
Provides-Extra: onnxruntime
Requires-Dist: onnxruntime >=1.4.0 ; extra == 'onnxruntime'
Requires-Dist: onnxruntime-tools >=1.4.2 ; extra == 'onnxruntime'
Provides-Extra: quality
Requires-Dist: black ==22.3 ; extra == 'quality'
Requires-Dist: datasets !=2.5.0 ; extra == 'quality'
Requires-Dist: isort >=5.5.4 ; extra == 'quality'
Requires-Dist: flake8 >=3.8.3 ; extra == 'quality'
Requires-Dist: GitPython <3.1.19 ; extra == 'quality'
Requires-Dist: hf-doc-builder >=0.3.0 ; extra == 'quality'
Provides-Extra: sentencepiece
Requires-Dist: sentencepiece !=0.1.92,>=0.1.91 ; extra == 'sentencepiece'
Requires-Dist: protobuf <=3.20.2 ; extra == 'sentencepiece'
Provides-Extra: sklearn
Requires-Dist: scikit-learn ; extra == 'sklearn'
Provides-Extra: testing
Requires-Dist: pytest ; extra == 'testing'
Requires-Dist: pytest-xdist ; extra == 'testing'
Requires-Dist: timeout-decorator ; extra == 'testing'
Requires-Dist: parameterized ; extra == 'testing'
Requires-Dist: psutil ; extra == 'testing'
Requires-Dist: datasets !=2.5.0 ; extra == 'testing'
Requires-Dist: dill <0.3.5 ; extra == 'testing'
Requires-Dist: evaluate >=0.2.0 ; extra == 'testing'
Requires-Dist: pytest-timeout ; extra == 'testing'
Requires-Dist: black ==22.3 ; extra == 'testing'
Requires-Dist: sacrebleu <2.0.0,>=1.4.12 ; extra == 'testing'
Requires-Dist: rouge-score !=0.0.7,!=0.0.8,!=0.1,!=0.1.1 ; extra == 'testing'
Requires-Dist: nltk ; extra == 'testing'
Requires-Dist: GitPython <3.1.19 ; extra == 'testing'
Requires-Dist: hf-doc-builder >=0.3.0 ; extra == 'testing'
Requires-Dist: protobuf <=3.20.2 ; extra == 'testing'
Requires-Dist: sacremoses ; extra == 'testing'
Requires-Dist: rjieba ; extra == 'testing'
Requires-Dist: safetensors >=0.2.1 ; extra == 'testing'
Requires-Dist: beautifulsoup4 ; extra == 'testing'
Requires-Dist: pillow ; extra == 'testing'
Requires-Dist: accelerate >=0.20.3 ; extra == 'testing'
Provides-Extra: torch
Requires-Dist: torch !=1.12.0,>=1.10 ; extra == 'torch'
Requires-Dist: accelerate >=0.20.3 ; extra == 'torch'

<!---
Copyright 2020 The AdapterHub Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
-->

> **Note**: This repository holds the codebase of the _Adapters_ library, which has replaced `adapter-transformers`. For the legacy codebase, go to: https://github.com/adapter-hub/adapter-transformers-legacy.

<p align="center">
<img style="vertical-align:middle" src="https://raw.githubusercontent.com/Adapter-Hub/adapters/main/docs/logo.png" />
</p>
<h1 align="center">
<span><i>Adapters</i></span>
</h1>

<h3 align="center">
A Unified Library for Parameter-Efficient and Modular Transfer Learning
</h3>

![Tests](https://github.com/Adapter-Hub/adapters/workflows/Tests/badge.svg?branch=adapters)
[![GitHub](https://img.shields.io/github/license/adapter-hub/adapters.svg?color=blue)](https://github.com/adapter-hub/adapters/blob/main/LICENSE)
[![PyPI](https://img.shields.io/pypi/v/adapters)](https://pypi.org/project/adapters/)

`adapters` is an add-on to [HuggingFace's Transformers](https://github.com/huggingface/transformers) library, integrating adapters into state-of-the-art language models by incorporating **[AdapterHub](https://adapterhub.ml)**, a central repository for pre-trained adapter modules.

## Installation

`adapters` currently supports **Python 3.8+** and **PyTorch 1.10+**.
After [installing PyTorch](https://pytorch.org/get-started/locally/), you can install `adapters` from PyPI ...

```
pip install -U adapters
```

... or from source by cloning the repository:

```
git clone https://github.com/adapter-hub/adapters.git
cd adapters
git checkout adapters
pip install .
```

## Quick Tour

#### Load pre-trained adapters:

```python
from adapters import AutoAdapterModel
from transformers import AutoTokenizer

model = AutoAdapterModel.from_pretrained("roberta-base")
tokenizer = AutoTokenizer.from_pretrained("roberta-base")

model.load_adapter("AdapterHub/roberta-base-pf-imdb", source="hf", set_active=True)

print(model(**tokenizer("This works great!", return_tensors="pt")).logits)
```
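
`load_adapter` also returns the adapter's local name, which can be passed to `save_adapter` to persist the loaded weights. A minimal sketch of this round trip on a fresh model (the local directory path is illustrative):

```python
from adapters import AutoAdapterModel

model = AutoAdapterModel.from_pretrained("roberta-base")

# load_adapter returns the adapter's local name
adapter_name = model.load_adapter("AdapterHub/roberta-base-pf-imdb", source="hf", set_active=True)

# Save the adapter weights (including its prediction head) to a local directory
model.save_adapter("./imdb_adapter", adapter_name)
```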

**[Learn More](https://docs.adapterhub.ml/loading.html)**

#### Adapt existing model setups:

```python
import adapters
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained("t5-base")

adapters.init(model)

model.add_adapter("my_lora_adapter", config="lora")
model.train_adapter("my_lora_adapter")

# Your regular training loop...
```
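
For the training loop itself, the library also provides `AdapterTrainer`, a drop-in subclass of the Hugging Face `Trainer` geared towards adapter training. A minimal sketch continuing the example above, assuming `train_dataset` is a tokenized dataset you have prepared beforehand (the hyperparameter values are illustrative):

```python
from adapters import AdapterTrainer
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./training_output",
    learning_rate=1e-4,
    num_train_epochs=3,
)

# train_dataset is assumed to be a tokenized datasets.Dataset prepared beforehand
trainer = AdapterTrainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,
)
trainer.train()
```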

**[Learn More](https://docs.adapterhub.ml/quickstart.html)**

#### Flexibly configure adapters:

```python
from adapters import ConfigUnion, PrefixTuningConfig, ParBnConfig, AutoAdapterModel

model = AutoAdapterModel.from_pretrained("microsoft/deberta-v3-base")

adapter_config = ConfigUnion(
    PrefixTuningConfig(prefix_length=20),
    ParBnConfig(reduction_factor=4),
)
model.add_adapter("my_adapter", config=adapter_config, set_active=True)
```

**[Learn More](https://docs.adapterhub.ml/overview.html)**

#### Easily compose adapters in a single model:

```python
from adapters import AdapterSetup, AutoAdapterModel
from transformers import AutoTokenizer
import adapters.composition as ac

model = AutoAdapterModel.from_pretrained("roberta-base")
tokenizer = AutoTokenizer.from_pretrained("roberta-base")

qc = model.load_adapter("AdapterHub/roberta-base-pf-trec")
sent = model.load_adapter("AdapterHub/roberta-base-pf-imdb")

with AdapterSetup(ac.Parallel(qc, sent)):
    print(model(**tokenizer("What is AdapterHub?", return_tensors="pt")))
```
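
`Parallel` is only one of the available composition blocks. As another minimal sketch (assuming adapters named `"lang"` and `"task"` have already been added to the model), `Stack` feeds the output of one adapter into the next, as used e.g. in the MAD-X setup:

```python
# Stack composition: the "task" adapter runs on top of the "lang" adapter's output
model.active_adapters = ac.Stack("lang", "task")
```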

**[Learn More](https://docs.adapterhub.ml/adapter_composition.html)**

## Useful Resources

HuggingFace's great documentation on getting started with _Transformers_ can be found [here](https://huggingface.co/transformers/index.html). `adapters` is fully compatible with _Transformers_.

To get started with adapters, refer to these locations:

- **[Colab notebook tutorials](https://github.com/Adapter-Hub/adapters/tree/main/notebooks)**, a series of notebooks providing an introduction to all the main concepts of _Adapters_ and AdapterHub
- **https://docs.adapterhub.ml**, our documentation on training and using adapters with the _Adapters_ library
- **https://adapterhub.ml** to explore available pre-trained adapter modules and share your own adapters
- **[Examples folder](https://github.com/Adapter-Hub/adapters/tree/main/examples/pytorch)** of this repository containing HuggingFace's example training scripts, many adapted for training adapters

## Implemented Methods

Currently, `adapters` integrates all architectures and methods listed below:

| Method | Paper(s) | Quick Links |
| --- | --- | --- |
| Bottleneck adapters | [Houlsby et al. (2019)](https://arxiv.org/pdf/1902.00751.pdf)<br> [Bapna and Firat (2019)](https://arxiv.org/pdf/1909.08478.pdf) | [Quickstart](https://docs.adapterhub.ml/quickstart.html), [Notebook](https://colab.research.google.com/github/Adapter-Hub/adapters/blob/main/notebooks/01_Adapter_Training.ipynb) |
| AdapterFusion | [Pfeiffer et al. (2021)](https://aclanthology.org/2021.eacl-main.39.pdf) | [Docs: Training](https://docs.adapterhub.ml/training.html#train-adapterfusion), [Notebook](https://colab.research.google.com/github/Adapter-Hub/adapters/blob/main/notebooks/03_Adapter_Fusion.ipynb) |
| MAD-X,<br> Invertible adapters | [Pfeiffer et al. (2020)](https://aclanthology.org/2020.emnlp-main.617/) | [Notebook](https://colab.research.google.com/github/Adapter-Hub/adapters/blob/main/notebooks/04_Cross_Lingual_Transfer.ipynb) |
| AdapterDrop | [Rücklé et al. (2021)](https://arxiv.org/pdf/2010.11918.pdf) | [Notebook](https://colab.research.google.com/github/Adapter-Hub/adapters/blob/main/notebooks/05_Adapter_Drop_Training.ipynb) |
| MAD-X 2.0,<br> Embedding training | [Pfeiffer et al. (2021)](https://arxiv.org/pdf/2012.15562.pdf) | [Docs: Embeddings](https://docs.adapterhub.ml/embeddings.html), [Notebook](https://colab.research.google.com/github/Adapter-Hub/adapters/blob/main/notebooks/08_NER_Wikiann.ipynb) |
| Prefix Tuning | [Li and Liang (2021)](https://arxiv.org/pdf/2101.00190.pdf) | [Docs](https://docs.adapterhub.ml/overview.html#prefix-tuning) |
| Parallel adapters,<br> Mix-and-Match adapters | [He et al. (2021)](https://arxiv.org/pdf/2110.04366.pdf) | [Docs](https://docs.adapterhub.ml/overview.html#mix-and-match-adapters) |
| Compacter | [Mahabadi et al. (2021)](https://arxiv.org/pdf/2106.04647.pdf) | [Docs](https://docs.adapterhub.ml/overview.html#compacter) |
| LoRA | [Hu et al. (2021)](https://arxiv.org/pdf/2106.09685.pdf) | [Docs](https://docs.adapterhub.ml/overview.html#lora) |
| (IA)^3 | [Liu et al. (2022)](https://arxiv.org/pdf/2205.05638.pdf) | [Docs](https://docs.adapterhub.ml/overview.html#ia-3) |
| UniPELT | [Mao et al. (2022)](https://arxiv.org/pdf/2110.07577.pdf) | [Docs](https://docs.adapterhub.ml/overview.html#unipelt) |
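
Each of these methods is exposed through a corresponding configuration class that can be passed to `add_adapter`. As a minimal sketch, here is how you could add a LoRA adapter with non-default hyperparameters (the values shown are illustrative, not recommendations):

```python
from adapters import AutoAdapterModel, LoRAConfig

model = AutoAdapterModel.from_pretrained("roberta-base")

# Configure LoRA with a custom rank and scaling factor (illustrative values)
config = LoRAConfig(r=8, alpha=16)
model.add_adapter("my_lora", config=config, set_active=True)
```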

## Supported Models

We currently support the PyTorch versions of all models listed on the **[Model Overview](https://docs.adapterhub.ml/model_overview.html) page** in our documentation.

## Developing & Contributing

To get started with developing on _Adapters_ yourself and learn more about ways to contribute, please see https://docs.adapterhub.ml/contributing.html.

## Citation

If you use this library for your work, please consider citing our paper [AdapterHub: A Framework for Adapting Transformers](https://arxiv.org/abs/2007.07779):

```
@inproceedings{pfeiffer2020AdapterHub,
    title={AdapterHub: A Framework for Adapting Transformers},
    author={Pfeiffer, Jonas and
            R{\"u}ckl{\'e}, Andreas and
            Poth, Clifton and
            Kamath, Aishwarya and
            Vuli{\'c}, Ivan and
            Ruder, Sebastian and
            Cho, Kyunghyun and
            Gurevych, Iryna},
    booktitle={Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations},
    pages={46--54},
    year={2020}
}
```
