---
language: en
datasets:
- squad_v2
license: mit
thumbnail: >-
  https://thumb.tildacdn.com/tild3433-3637-4830-a533-353833613061/-/resize/720x/-/format/webp/germanquad.jpg
tags:
- exbert
model-index:
- name: deepset/roberta-base-squad2-distilled
  results:
  - task:
      type: question-answering
      name: Question Answering
    dataset:
      name: squad_v2
      type: squad_v2
      config: squad_v2
      split: validation
    metrics:
    - name: Exact Match
      type: exact_match
      value: 80.8593
      verified: true
    - name: F1
      type: f1
      value: 84.0104
      verified: true
  - task:
      type: question-answering
      name: Question Answering
    dataset:
      name: adversarial_qa
      type: adversarial_qa
      config: adversarialQA
      split: validation
    metrics:
    - name: Exact Match
      type: exact_match
      value: 29.8333
      verified: true
    - name: F1
      type: f1
      value: 41.2435
      verified: true
---
## Overview
**Language model:** deepset/roberta-base-squad2-distilled
**Language:** English
**Training data:** SQuAD 2.0 training set
**Eval data:** SQuAD 2.0 dev set
**Infrastructure:** 4x V100 GPU
**Published:** Dec 8th, 2021
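
## Usage
The model can be loaded with the Hugging Face `transformers` question-answering pipeline. The snippet below is a minimal sketch; the question and context strings are illustrative only:

```python
from transformers import pipeline

model_name = "deepset/roberta-base-squad2-distilled"

# Build an extractive QA pipeline backed by this model.
qa_pipeline = pipeline("question-answering", model=model_name, tokenizer=model_name)

# Illustrative input; as a SQuAD 2.0 model it can also predict "no answer"
# (pass handle_impossible_answer=True to surface that case).
result = qa_pipeline(
    question="Which model was used as the teacher?",
    context="deepset/roberta-large-squad2 was used as the teacher model during distillation.",
)
print(result["answer"], result["score"])
```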
## Details
- Haystack's model distillation feature was used for training, with deepset/roberta-large-squad2 as the teacher model (a training sketch follows the hyperparameters below).
Hyperparameters
batch_size = 80
n_epochs = 4
max_seq_len = 384
learning_rate = 3e-5
lr_schedule = LinearWarmup
embeds_dropout_prob = 0.1
temperature = 1.5
distillation_loss_weight = 0.75
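
For reference, the following is a minimal sketch of how such a distillation run might look with Haystack 1.x. It assumes the `FARMReader.distil_prediction_layer_from` API and a local SQuAD 2.0 data path; argument names can differ between Haystack versions, and this is not the exact training script:

```python
from haystack.nodes import FARMReader

# Student starts from a base checkpoint; the teacher is the larger SQuAD 2.0 model.
student = FARMReader(model_name_or_path="roberta-base", max_seq_len=384)
teacher = FARMReader(model_name_or_path="deepset/roberta-large-squad2", max_seq_len=384)

# Prediction-layer distillation on SQuAD 2.0, using the hyperparameters listed above.
student.distil_prediction_layer_from(
    teacher_model=teacher,
    data_dir="data/squad2",            # assumed local path to the SQuAD 2.0 files
    train_filename="train-v2.0.json",
    batch_size=80,
    n_epochs=4,
    learning_rate=3e-5,
    temperature=1.5,
    distillation_loss_weight=0.75,
)
student.save("roberta-base-squad2-distilled")
```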
## Performance
Evaluated on the SQuAD 2.0 dev set:
```
"exact": 79.8366040596311
"f1": 83.916407079888
```
## Authors
- Timo Möller: timo.moeller [at] deepset.ai
- Julian Risch: julian.risch [at] deepset.ai
- Malte Pietsch: malte.pietsch [at] deepset.ai
- Michel Bartels: michel.bartels [at] deepset.ai
## About us
We bring NLP to industry via open source!
Our focus: industry-specific language models & large-scale QA systems.

Some of our work:
- German BERT (aka "bert-base-german-cased")
- GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")
- FARM
- Haystack

Get in touch: Twitter | LinkedIn | Slack | GitHub Discussions | Website

By the way: we're hiring!