|
--- |
|
tags: |
|
- roberta |
|
- adapter-transformers |
|
datasets: |
|
- BigTMiami/citation_intent_dataset_condensed |
|
--- |
|
|
|
# Adapter `BigTMiami/m_cite_par_bn_v_4_pretrain_adapter` for roberta-base |
|
|
|
An [adapter](https://adapterhub.ml) for the `roberta-base` model, trained on the [BigTMiami/citation_intent_dataset_condensed](https://huggingface.co/datasets/BigTMiami/citation_intent_dataset_condensed/) dataset. It includes a prediction head for masked language modeling (MLM).
|
|
|
This adapter was created for use with the **[Adapters](https://github.com/Adapter-Hub/adapters)** library.
|
|
|
## Usage |
|
|
|
First, install `adapters`: |
|
|
|
```bash
|
pip install -U adapters |
|
``` |
|
|
|
Now, the adapter can be loaded and activated like this: |
|
|
|
```python |
|
from adapters import AutoAdapterModel |
|
|
|
model = AutoAdapterModel.from_pretrained("roberta-base") |
|
adapter_name = model.load_adapter("BigTMiami/m_cite_par_bn_v_4_pretrain_adapter", source="hf", set_active=True) |
|
``` |
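
With the adapter active, the model can be used like a standard RoBERTa masked-LM. The snippet below is a minimal sketch continuing from the code above (it reuses the `model` variable); the example sentence is illustrative, and it assumes the loaded MLM head returns standard `logits`:

```python

import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("roberta-base")

# Any sentence containing RoBERTa's <mask> token works; this one is illustrative.
inputs = tokenizer("This paper <mask> the approach of previous work.", return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Locate the <mask> position and take the highest-scoring vocabulary entry.
mask_pos = (inputs["input_ids"] == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
predicted_id = logits[0, mask_pos].argmax(dim=-1)
print(tokenizer.decode(predicted_id))

```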
|
|
|
## Architecture & Training |
|
|
|
<!-- Add some description here --> |
|
|
|
## Evaluation results |
|
|
|
<!-- Add some description here --> |
|
|
|
## Citation |
|
|
|
<!-- Add some description here --> |