---
language:
- en
tags:
- question-answering
license: apache-2.0
datasets:
- adversarial_qa
- mbartolo/synQA
- squad
metrics:
- exact_match
- f1
model-index:
- name: mbartolo/roberta-large-synqa
  results:
  - task:
      type: question-answering
      name: Question Answering
    dataset:
      name: squad
      type: squad
      config: plain_text
      split: validation
    metrics:
    - name: Exact Match
      type: exact_match
      value: 89.6529
      verified: true
    - name: F1
      type: f1
      value: 94.8172
      verified: true
  - task:
      type: question-answering
      name: Question Answering
    dataset:
      name: adversarial_qa
      type: adversarial_qa
      config: adversarialQA
      split: validation
    metrics:
    - name: Exact Match
      type: exact_match
      value: 55.3333
      verified: true
    - name: F1
      type: f1
      value: 66.7464
      verified: true
---

# Model Overview
This is a RoBERTa-Large question answering model, trained from https://huggingface.co/roberta-large in two stages. It is first fine-tuned on synthetic adversarial data generated by a BART-Large question generator over Wikipedia passages from SQuAD, and then further fine-tuned on SQuAD and AdversarialQA (https://arxiv.org/abs/2002.00293).
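For reference, here is a minimal usage sketch with the standard `transformers` question-answering pipeline (the question and context below are illustrative only, not from the training data):

```python
from transformers import pipeline

# Load this model into the standard question-answering pipeline.
qa = pipeline("question-answering", model="mbartolo/roberta-large-synqa")

result = qa(
    question="What model architecture is used?",
    context="This question answering model fine-tunes RoBERTa-Large "
            "on SQuAD and AdversarialQA.",
)
print(result["answer"], result["score"])  # predicted answer span and confidence
```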

# Data
- Training data: SQuAD + AdversarialQA
- Evaluation data: SQuAD + AdversarialQA
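
Both datasets are available on the Hugging Face Hub; a minimal sketch of loading them with the `datasets` library (dataset IDs, config, and split names taken from the model-index metadata above):

```python
from datasets import load_dataset

squad = load_dataset("squad")                          # SQuAD v1.1
aqa = load_dataset("adversarial_qa", "adversarialQA")  # combined AdversarialQA config

# Validation splits used for the reported metrics.
print(len(squad["validation"]), len(aqa["validation"]))
```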

# Training Process
Approximately one training epoch on the synthetic data, followed by two training epochs on the manually curated data (SQuAD + AdversarialQA).
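
A rough sketch of how this two-stage schedule could be expressed with `transformers` `TrainingArguments`; only the epoch counts come from this card, so the output directories and all other settings are placeholders:

```python
from transformers import TrainingArguments

# Stage 1: ~1 epoch on the BART-Large-generated synthetic adversarial data.
stage1_args = TrainingArguments(output_dir="stage1-synthetic", num_train_epochs=1)

# Stage 2: 2 epochs on the manually curated data (SQuAD + AdversarialQA).
stage2_args = TrainingArguments(output_dir="stage2-curated", num_train_epochs=2)
```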

# Additional Information
Please refer to https://arxiv.org/abs/2104.08678 for full details.