Language model: deepset/roberta-base-squad2-distilled
Training data: SQuAD 2.0 training set
Eval data: SQuAD 2.0 dev set
Infrastructure: 4x V100 GPU
Published: Dec 8th, 2021
- Haystack's distillation feature was used for training; deepset/roberta-large-squad2 was used as the teacher model (a schematic sketch of the distillation loss follows the hyperparameters below).
```
batch_size = 80
n_epochs = 4
max_seq_len = 384
learning_rate = 3e-5
lr_schedule = LinearWarmup
embeds_dropout_prob = 0.1
temperature = 1.5
distillation_loss_weight = 0.75
```
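For illustration, here is a minimal sketch of a distillation objective using the `temperature` and `distillation_loss_weight` values above. This is a generic PyTorch formulation for a single logit head, not the actual code inside Haystack's distillation feature:

```python
# Generic sketch of the distillation loss implied by the hyperparameters
# above (temperature=1.5, distillation_loss_weight=0.75). Shown for a single
# logit head; extractive QA would apply this to the start and end logits
# separately. Not deepset's actual training code.
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=1.5, distillation_loss_weight=0.75):
    # Soft targets: match the teacher's temperature-smoothed distribution.
    soft_student = F.log_softmax(student_logits / temperature, dim=-1)
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    # Scale by T^2 to keep gradient magnitudes comparable across temperatures.
    kd = F.kl_div(soft_student, soft_teacher, reduction="batchmean") * temperature ** 2
    # Hard targets: standard cross-entropy against the gold answer positions.
    ce = F.cross_entropy(student_logits, labels)
    # Weighted combination of soft-target and hard-target losses.
    return distillation_loss_weight * kd + (1.0 - distillation_loss_weight) * ce
```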
"exact": 79.8366040596311 "f1": 83.916407079888
Authors:
Timo Möller: firstname.lastname@example.org
Julian Risch: email@example.com
Malte Pietsch: firstname.lastname@example.org
Michel Bartels: email@example.com
deepset is the company behind the open-source NLP framework Haystack, which is designed to help you build production-ready NLP systems for question answering, summarization, ranking, and more.
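For example, this model can be loaded as an extractive QA reader in Haystack. A minimal sketch, assuming the Haystack 1.x API, where `FARMReader` pulls models from the Hugging Face Hub:

```python
from haystack import Document
from haystack.nodes import FARMReader

# Load the distilled model as an extractive QA reader.
reader = FARMReader(model_name_or_path="deepset/roberta-base-squad2-distilled")

# Run extractive QA over an in-memory document (placeholder text).
result = reader.predict(
    query="What is Haystack?",
    documents=[Document(content="Haystack is an open-source NLP framework "
                                "for building production-ready NLP systems.")],
    top_k=1,
)
print(result["answers"][0].answer)
```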
Some of our other work:
- Distilled roberta-base-squad2 (aka "tinyroberta-squad2")
- German BERT (aka "bert-base-german-cased")
- GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")
Get in touch and join the Haystack community
For more info on Haystack, visit our GitHub repo and Documentation.
We also have a Discord community open to everyone!
Twitter | LinkedIn | Discord | GitHub Discussions | Website
By the way: we're hiring!
Verified evaluation results on the squad_v2 validation set:
- Exact Match: 80.859
- F1: 84.010