DistilBERT base uncased distilled SQuAD

This model is a fine-tuned checkpoint of DistilBERT-base-uncased, trained using (a second step of) knowledge distillation on SQuAD v1.1. It reaches an F1 score of 86.9 on the dev set (for comparison, BERT bert-base-uncased reaches an F1 score of 88.5).
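The checkpoint can be used for extractive question answering with the Hugging Face `transformers` pipeline. A minimal sketch (the example question and context are illustrative, not from the SQuAD dev set):

```python
from transformers import pipeline

# Load the distilled SQuAD checkpoint into a question-answering pipeline.
qa = pipeline(
    "question-answering",
    model="distilbert-base-uncased-distilled-squad",
)

context = (
    "DistilBERT is a smaller, faster transformer model trained by "
    "distilling BERT. It retains most of BERT's performance on "
    "downstream tasks such as question answering."
)
result = qa(question="What was DistilBERT distilled from?", context=context)

# The pipeline returns the extracted answer span plus a confidence score
# and the character offsets of the span within the context.
print(result["answer"], result["score"])
```

The pipeline handles tokenization, inference, and span decoding; for lower-level control you can instead load `AutoTokenizer` and `AutoModelForQuestionAnswering` with the same checkpoint name and decode the start/end logits yourself.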
