---
language: "en"
datasets:
- squad
metrics:
- squad
license: apache-2.0
---

# DistilBERT base cased distilled SQuAD

> Note: This model is a clone of [`distilbert-base-cased-distilled-squad`](https://huggingface.co/distilbert-base-cased-distilled-squad) for internal testing.

This model is a fine-tuned checkpoint of [DistilBERT-base-cased](https://huggingface.co/distilbert-base-cased), trained using (a second step of) knowledge distillation on SQuAD v1.1. It reaches an F1 score of 87.1 on the dev set (for comparison, the BERT `bert-base-cased` version reaches an F1 score of 88.7).

Using the question answering `Evaluator` from the `evaluate` library gives:

```
{'exact_match': 79.54588457899716,
 'f1': 86.81181300991533,
 'latency_in_seconds': 0.008683730778997168,
 'samples_per_second': 115.15787689073015,
 'total_time_in_seconds': 91.78703433400005}
```

which is roughly consistent with the official score.
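
As a sketch, the figures above can be reproduced with the `evaluate` library's evaluator API. The exact split and evaluation settings of the run above are not stated here, so the `split` value below is an assumption:

```python
# pip install evaluate transformers datasets
from evaluate import evaluator

# Build a question-answering evaluator and score the model on SQuAD v1.1.
task_evaluator = evaluator("question-answering")
results = task_evaluator.compute(
    model_or_pipeline="distilbert-base-cased-distilled-squad",
    data="squad",
    split="validation",  # assumed: the dev set referenced above
    metric="squad",
)
print(results)  # exact_match and f1, plus latency/throughput statistics
```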
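
For quick inference, the checkpoint can be loaded through the `transformers` pipeline API; the question and context below are illustrative placeholders:

```python
from transformers import pipeline

# Load the extractive question-answering pipeline with this checkpoint.
question_answerer = pipeline(
    "question-answering", model="distilbert-base-cased-distilled-squad"
)

result = question_answerer(
    question="What is a good example of a question answering dataset?",
    context=(
        "Extractive Question Answering is the task of extracting an answer "
        "from a text given a question. An example of a question answering "
        "dataset is the SQuAD dataset."
    ),
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```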