---
license: mit
datasets:
- mrqa
language:
- en
metrics:
- squad
library_name: adapter-transformers
pipeline_tag: question-answering
---

# Description
This is the single-dataset adapter for the SQuAD partition of the MRQA 2019 Shared Task Dataset. The adapter was created by Friedman et al. (2021) and should be used with the `roberta-base` encoder.

The UKP-SQuARE team created this repository to simplify deploying the model on the UKP-SQuARE platform. The GitHub repository of the original authors is https://github.com/princeton-nlp/MADE.

# Usage
This model contains the same weights as https://huggingface.co/princeton-nlp/MADE/resolve/main/single_dataset_adapters/SQuAD/model.pt. The only difference is that our repository follows the standard format of AdapterHub. Therefore, you can load this model as follows:

```python
from transformers import RobertaForQuestionAnswering, RobertaTokenizerFast, pipeline

# Load the roberta-base encoder and attach the SQuAD adapter to it.
model = RobertaForQuestionAnswering.from_pretrained("roberta-base")
adapter_name = model.load_adapter("UKP-SQuARE/SQuAD_Adapter_RoBERTa", source="hf")
model.set_active_adapters(adapter_name)

tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base")

# Run extractive question answering with the adapted model.
pipe = pipeline("question-answering", model=model, tokenizer=tokenizer)
pipe({"question": "What is the capital of Germany?", "context": "The capital of Germany is Berlin."})
```
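
The pipeline returns the predicted answer span together with a confidence score and its character offsets in the context. For the example above, the output should look roughly like this (the score shown is illustrative, not exact):

```python
# Illustrative output of the pipeline call above; the score will vary.
{'score': 0.99, 'start': 26, 'end': 32, 'answer': 'Berlin'}
```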

Note that you need the `adapter-transformers` library (https://adapterhub.ml); the standard `transformers` library does not provide the adapter methods used above.
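
It can be installed from PyPI:

```bash
pip install adapter-transformers
```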

# Evaluation
Friedman et al. report an F1 score of **91.4 on SQuAD**.

Please refer to the original publication for more information.

# Citation
Single-dataset Experts for Multi-dataset Question Answering (Friedman et al., EMNLP 2021)