Language model: deepset/tinybert-6L-768D-squad2
Training data: SQuAD 2.0 training set, augmented 20x, plus the unaugmented SQuAD 2.0 training set
Eval data: SQuAD 2.0 dev set
Infrastructure: 1x V100 GPU
Published: Dec 8th, 2021
- Haystack's intermediate layer and prediction layer distillation features (based on TinyBERT) were used for training. deepset/bert-base-uncased-squad2 served as the teacher model and huawei-noah/TinyBERT_General_6L_768D as the student model; a code sketch of this two-stage setup follows the hyperparameter list below.
- Intermediate layer distillation: batch_size = 26, n_epochs = 5, max_seq_len = 384, learning_rate = 5e-5, lr_schedule = LinearWarmup, embeds_dropout_prob = 0.1, temperature = 1
- Prediction layer distillation: batch_size = 26, n_epochs = 5, max_seq_len = 384, learning_rate = 3e-5, lr_schedule = LinearWarmup, embeds_dropout_prob = 0.1, temperature = 1, distillation_loss_weight = 1.0
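The two stages above map onto Haystack's model distillation API. Below is a minimal sketch, assuming Haystack 1.x's `FARMReader` with its `distil_intermediate_layers_from` and `distil_prediction_layer_from` methods; the data directory, training file names, and output directory are placeholders, not values from this card.

```python
from haystack.nodes import FARMReader

# Teacher: the full-size SQuAD 2.0 model; student: the general TinyBERT checkpoint.
teacher = FARMReader(model_name_or_path="deepset/bert-base-uncased-squad2")
student = FARMReader(model_name_or_path="huawei-noah/TinyBERT_General_6L_768D")

# Stage 1: intermediate layer distillation on the augmented training set.
# Hyperparameters follow the first block above; paths are placeholders.
student.distil_intermediate_layers_from(
    teacher_model=teacher,
    data_dir="data/squad20",
    train_filename="train_augmented.json",
    batch_size=26,
    n_epochs=5,
    max_seq_len=384,
    learning_rate=5e-5,
    temperature=1,
)

# Stage 2: prediction layer distillation on the unaugmented training set.
student.distil_prediction_layer_from(
    teacher_model=teacher,
    data_dir="data/squad20",
    train_filename="train.json",
    batch_size=26,
    n_epochs=5,
    max_seq_len=384,
    learning_rate=3e-5,
    temperature=1,
    distillation_loss_weight=1.0,
)

# Save the distilled student model (placeholder directory name).
student.save(directory="tinybert-6L-768D-squad2")
```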
"exact": 71.87736882001179 "f1": 76.36111895973675
- Timo Möller: timo.moeller [at] deepset.ai
- Julian Risch: julian.risch [at] deepset.ai
- Malte Pietsch: malte.pietsch [at] deepset.ai
- Michel Bartels: michel.bartels [at] deepset.ai
Our focus: Industry-specific language models & large-scale QA systems.
Some of our work:
- German BERT (aka "bert-base-german-cased")
- GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")
By the way: we're hiring!