DistilBERT base cased distilled SQuAD

This model is a fine-tuned checkpoint of DistilBERT-base-cased, trained with (a second step of) knowledge distillation on SQuAD v1.1. It reaches an F1 score of 87.1 on the dev set (for comparison, bert-base-cased reaches an F1 score of 88.7).
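
As a quick usage illustration, the snippet below loads the model through the Transformers question-answering pipeline. This is a minimal sketch, assuming the checkpoint is published on the Hugging Face Hub as `distilbert-base-cased-distilled-squad`; the example question and context are made up for demonstration.

```python
from transformers import pipeline

# Load the distilled QA model from the Hugging Face Hub.
# Assumes the checkpoint name "distilbert-base-cased-distilled-squad".
qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")

# Extractive QA: the answer is a span copied out of the provided context.
context = (
    "DistilBERT is a small, fast, cheap and light Transformer model "
    "trained by distilling BERT. It has 40% fewer parameters than "
    "bert-base-uncased and runs 60% faster."
)
result = qa(question="How much faster is DistilBERT?", context=context)

# The pipeline returns the answer span, a confidence score,
# and the character offsets of the span within the context.
print(result["answer"], result["score"], result["start"], result["end"])
```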
