---
language:
  - en
thumbnail: https://github.com/karanchahal/distiller/blob/master/distiller.jpg
tags:
  - question-answering
license: apache-2.0
datasets:
  - squad
metrics:
  - squad
---

# DistilBERT with a second step of distillation

## Model description

This model replicates the "DistilBERT (D)" model from Table 2 of the DistilBERT paper. In this approach, a DistilBERT student is fine-tuned on SQuAD v1.1, while a fine-tuned BERT model acts as a teacher for a second step of task-specific distillation.

In this version, the following pre-trained models were used:

- Student: distilbert-base-uncased
- Teacher: maroo93/squad1.1

## Intended uses & limitations
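
The card does not include a usage snippet, so here is a minimal sketch of extractive question answering with the `transformers` pipeline. The model identifier is a placeholder for this repository's id on the Hub, not a name taken from the card.

```python
from transformers import pipeline

# "<this-model-id>" is a placeholder -- substitute the actual Hub id of this checkpoint.
qa_pipeline = pipeline("question-answering", model="<this-model-id>")

result = qa_pipeline(
    question="Which dataset was the student fine-tuned on?",
    context="The DistilBERT student was fine-tuned on SQuAD v1.1 with a fine-tuned BERT teacher.",
)
print(result)  # {'score': ..., 'start': ..., 'end': ..., 'answer': ...}
```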

## Training data

This model was trained on the SQuAD v1.1 dataset, which can be obtained from the `datasets` library as follows:

```python
from datasets import load_dataset

squad = load_dataset('squad')
```

## Training procedure
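
The card does not detail the training loop. Below is a minimal sketch of the task-specific distillation step described above, assuming a standard knowledge-distillation objective: cross-entropy on the gold answer spans blended with a temperature-softened KL term towards the teacher's logits. The `temperature` and `alpha` values are assumptions, not taken from the card.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, temperature=2.0, alpha=0.5):
    """Task-specific distillation loss, applied separately to the start and
    end logits in extractive QA. `temperature` and `alpha` are assumed values."""
    # Hard-label loss: cross-entropy against the gold start/end positions.
    ce = F.cross_entropy(student_logits, labels)
    # Soft-label loss: KL divergence between temperature-softened student and teacher distributions.
    kl = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    return alpha * ce + (1.0 - alpha) * kl
```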

## Eval results

| Exact Match | F1    |
|-------------|-------|
| 78.05       | 86.09 |

These scores were calculated using the `squad` metric from `datasets`.
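
As an illustration, the `squad` metric can be loaded and invoked as shown below; the prediction and reference entries are toy examples, not values from the evaluation run.

```python
from datasets import load_metric

squad_metric = load_metric("squad")

# Toy example -- a real evaluation passes one entry per SQuAD example.
predictions = [{"id": "1", "prediction_text": "Denver Broncos"}]
references = [{"id": "1", "answers": {"text": ["Denver Broncos"], "answer_start": [177]}}]

print(squad_metric.compute(predictions=predictions, references=references))
# {'exact_match': 100.0, 'f1': 100.0}
```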

## BibTeX entry and citation info

```bibtex
@misc{sanh2020distilbert,
      title={DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter},
      author={Victor Sanh and Lysandre Debut and Julien Chaumond and Thomas Wolf},
      year={2020},
      eprint={1910.01108},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```