
Overview

Language model: deepset/bert-medium-squad2-distilled
Language: English
Training data: SQuAD 2.0 training set
Eval data: SQuAD 2.0 dev set
Infrastructure: 1x V100 GPU
Published: Apr 21st, 2021
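
The distilled model can be used with the standard Hugging Face transformers question-answering pipeline. A minimal inference sketch (the question and context strings are only illustrative):

from transformers import pipeline

# Load the distilled student model for extractive QA.
qa = pipeline(
    "question-answering",
    model="deepset/bert-medium-squad2-distilled",
    tokenizer="deepset/bert-medium-squad2-distilled",
)

result = qa(
    question="What is Haystack used for?",  # illustrative input
    context="Haystack is an open-source framework for building search and question answering systems.",
)
print(result)  # dict with 'score', 'start', 'end', 'answer'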

Details

  • Haystack's distillation feature was used for training, with deepset/bert-large-uncased-whole-word-masking-squad2 as the teacher model.

Hyperparameters

batch_size = 6
n_epochs = 2
max_seq_len = 384
learning_rate = 3e-5
lr_schedule = LinearWarmup
embeds_dropout_prob = 0.1
temperature = 5
distillation_loss_weight = 1
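
Haystack runs the distillation loop internally; the snippet below is only a generic PyTorch sketch of the temperature-scaled soft-target loss that the temperature and distillation_loss_weight hyperparameters control, not deepset's actual training code.

import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=5.0, distillation_loss_weight=1.0):
    # KL divergence between the temperature-softened teacher and student
    # distributions over answer positions (scaled by T^2, as is usual for KD).
    soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    kd = F.kl_div(soft_student, soft_teacher, reduction="batchmean") * temperature ** 2

    # Standard cross-entropy against the gold start/end positions.
    ce = F.cross_entropy(student_logits, labels)

    # With distillation_loss_weight = 1 the gold-label term drops out entirely.
    return distillation_loss_weight * kd + (1.0 - distillation_loss_weight) * ce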

Performance

"exact": 68.6431398972458
"f1": 72.7637083790805

Authors

  • Timo Möller: timo.moeller [at] deepset.ai
  • Julian Risch: julian.risch [at] deepset.ai
  • Malte Pietsch: malte.pietsch [at] deepset.ai
  • Michel Bartels: michel.bartels [at] deepset.ai

About us

We bring NLP to the industry via open source!
Our focus: industry-specific language models & large-scale QA systems.

Get in touch: Twitter | LinkedIn | Discord | GitHub Discussions | Website

By the way: we're hiring!
