Language model: deepset/roberta-base-squad2-distilled
Training data: SQuAD 2.0 training set
Eval data: SQuAD 2.0 dev set
Infrastructure: 1x V100 GPU
Published: Apr 21st, 2021
- Haystack's distillation feature was used for training. deepset/bert-large-uncased-whole-word-masking-squad2 was used as the teacher model.
batch_size = 6
n_epochs = 2
max_seq_len = 384
learning_rate = 3e-5
lr_schedule = LinearWarmup
embeds_dropout_prob = 0.1
temperature = 5
distillation_loss_weight = 1
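As an illustration of how these hyperparameters fit together, below is a minimal sketch of prediction-layer distillation with Haystack's `FARMReader.distil_from`. The starting student checkpoint, the local data paths, and the exact argument names are assumptions and may differ between Haystack versions.

```python
from haystack.nodes import FARMReader

# Student: smaller model to be distilled (assumed starting checkpoint)
student = FARMReader(model_name_or_path="roberta-base", use_gpu=True)

# Teacher: the larger SQuAD 2.0 model named in this card
teacher = FARMReader(
    model_name_or_path="deepset/bert-large-uncased-whole-word-masking-squad2",
    use_gpu=True,
)

# Prediction-layer distillation on SQuAD 2.0 with the hyperparameters listed above.
# Argument names follow Haystack v1's distillation API as an assumption.
student.distil_from(
    teacher,
    data_dir="data/squad20",           # hypothetical local path to SQuAD 2.0
    train_filename="train-v2.0.json",
    batch_size=6,
    n_epochs=2,
    max_seq_len=384,
    learning_rate=3e-5,
    temperature=5,
    distillation_loss_weight=1.0,
)
student.save("roberta-base-squad2-distilled")
```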
"exact": 68.6431398972458 "f1": 72.7637083790805
- Timo Möller: timo.moeller [at] deepset.ai
- Julian Risch: julian.risch [at] deepset.ai
- Malte Pietsch: malte.pietsch [at] deepset.ai
- Michel Bartels: michel.bartels [at] deepset.ai
Our focus: Industry-specific language models & large-scale QA systems.
Some of our work:
- German BERT (aka "bert-base-german-cased")
- GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")
By the way: we're hiring!