---
language: en
tags:
  - exbert
license: apache-2.0
datasets:
  - openwebtext
---

# DistilRoBERTa base model

This model is a distilled version of the RoBERTa-base model. It follows the same training procedure as DistilBERT. The code for the distillation process can be found in the Hugging Face Transformers repository. This model is case-sensitive: it makes a difference between english and English.

The model has 6 layers, a hidden dimension of 768 and 12 attention heads, for a total of 82M parameters (compared to 125M parameters for RoBERTa-base). On average, DistilRoBERTa is twice as fast as RoBERTa-base.
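
These figures can be checked directly against the published checkpoint. The snippet below is a minimal sketch, assuming PyTorch and the `transformers` library are installed:

```python
from transformers import AutoModel

# Load the published checkpoint and inspect its configuration.
model = AutoModel.from_pretrained("distilroberta-base")

print(model.config.num_hidden_layers)    # 6 layers
print(model.config.hidden_size)          # 768-dimensional hidden states
print(model.config.num_attention_heads)  # 12 attention heads

# Total parameter count, roughly 82M.
print(sum(p.numel() for p in model.parameters()))
```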

We encourage users to check the RoBERTa-base model card to learn more about usage, limitations and potential biases.
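
As a minimal usage sketch, the model can be loaded for masked-language-model inference with the `transformers` fill-mask pipeline; note that RoBERTa-style models use `<mask>` as the mask token:

```python
from transformers import pipeline

# Fill-mask inference with DistilRoBERTa; <mask> is the RoBERTa mask token.
unmasker = pipeline("fill-mask", model="distilroberta-base")

for prediction in unmasker("Hello, I'm a <mask> model."):
    print(prediction["token_str"], round(prediction["score"], 3))
```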

## Training data

DistilRoBERTa was pre-trained on OpenWebTextCorpus, a reproduction of OpenAI's WebText dataset (roughly four times less training data than the teacher RoBERTa was trained on).

## Evaluation results

When fine-tuned on downstream tasks, this model achieves the following results:

GLUE test results:

| Task | MNLI | QQP  | QNLI | SST-2 | CoLA | STS-B | MRPC | RTE  |
|:----:|:----:|:----:|:----:|:-----:|:----:|:-----:|:----:|:----:|
|      | 84.0 | 89.4 | 90.8 | 92.5  | 59.3 | 88.3  | 86.6 | 67.9 |
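
As a rough illustration of how such fine-tuning might be set up (a sketch only; the output directory and hyperparameters below are placeholders, not the settings used to produce the numbers above), one could train on a single GLUE task such as SST-2 with the `Trainer` API:

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Sketch: fine-tune DistilRoBERTa on SST-2, one of the GLUE tasks above.
tokenizer = AutoTokenizer.from_pretrained("distilroberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilroberta-base", num_labels=2)

dataset = load_dataset("glue", "sst2")

def tokenize(batch):
    # SST-2 is a single-sentence task; pair tasks would pass two text columns.
    return tokenizer(batch["sentence"], truncation=True, max_length=128)

encoded = dataset.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="distilroberta-sst2",   # hypothetical output directory
        per_device_train_batch_size=32,    # placeholder hyperparameters
        num_train_epochs=3,
    ),
    train_dataset=encoded["train"],
    eval_dataset=encoded["validation"],
    tokenizer=tokenizer,  # enables dynamic padding via the default collator
)
trainer.train()
```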

## BibTeX entry and citation info

```bibtex
@article{Sanh2019DistilBERTAD,
  title={DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter},
  author={Victor Sanh and Lysandre Debut and Julien Chaumond and Thomas Wolf},
  journal={ArXiv},
  year={2019},
  volume={abs/1910.01108}
}
```