Conversational DistilRuBERT (Russian, cased, 6-layer, 768-hidden, 12-heads, 135.4M parameters) was trained on OpenSubtitles, Dirty, Pikabu, and the Social Media segment of the Taiga corpus (the same data as Conversational RuBERT).
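A minimal usage sketch with the Hugging Face `transformers` library is shown below; the model identifier is an assumption based on the DeepPavlov naming convention and may differ from the actual hub name.

```python
# Sketch: load the model and run a forward pass on a short Russian sentence.
# The model id below is an assumption, not confirmed by this card.
from transformers import AutoTokenizer, AutoModel

model_id = "DeepPavlov/distilrubert-base-cased-conversational"  # assumed id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

inputs = tokenizer("Привет, как дела?", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, seq_len, 768)
```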
Our DistilRuBERT was highly inspired by DistilBERT (Sanh et al., 2019). Namely, we used the following losses (a simplified sketch of the combined objective is given after the list):
- KL loss (between teacher and student output logits)
- MLM loss (between token labels and student output logits)
- Cosine embedding loss between mean of two consecutive hidden states of the teacher and one hidden state of the student
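The sketch below illustrates how these three terms could be combined, assuming PyTorch tensors with the shapes noted in the comments; the temperature, equal loss weights, and layer-matching scheme (6 student layers vs. 12 teacher layers) are illustrative assumptions, not the exact training configuration.

```python
# Simplified sketch of the three distillation losses described above.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      student_hidden, teacher_hidden, temperature=2.0):
    # student_logits, teacher_logits: (batch, seq_len, vocab_size)
    # labels: (batch, seq_len), -100 at non-masked positions
    # student_hidden: list of 6 tensors (batch, seq_len, hidden)
    # teacher_hidden: list of 12 tensors (batch, seq_len, hidden)

    # 1) KL divergence between softened teacher and student distributions
    kl = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2

    # 2) Standard MLM cross-entropy between student logits and token labels
    mlm = F.cross_entropy(
        student_logits.view(-1, student_logits.size(-1)),
        labels.view(-1),
        ignore_index=-100,
    )

    # 3) Cosine embedding loss: each student hidden state is pulled towards
    #    the mean of two consecutive teacher hidden states
    batch, seq_len, _ = student_hidden[0].shape
    target = torch.ones(batch * seq_len, device=student_hidden[0].device)
    cos = 0.0
    for i, s in enumerate(student_hidden):
        t = (teacher_hidden[2 * i] + teacher_hidden[2 * i + 1]) / 2
        cos = cos + F.cosine_embedding_loss(
            s.view(-1, s.size(-1)), t.view(-1, t.size(-1)), target
        )
    cos = cos / len(student_hidden)

    # Equal weighting is used here only for illustration
    return kl + mlm + cos
```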
The model was trained for about 100 hours on 8 NVIDIA Tesla P100-SXM2 16 GB GPUs.
To evaluate the improvement in inference speed, we ran the teacher and student models on random sequences with seq_len=512 and batch_size=16 (for throughput) or batch_size=1 (for latency). All tests were performed on an Intel(R) Xeon(R) CPU E5-2698 v4 @ 2.20GHz and an NVIDIA Tesla P100-SXM2 16 GB GPU.
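A rough sketch of such a timing procedure is shown below: random token ids of length 512 are fed to the model, with batch_size=1 for latency and batch_size=16 for throughput. The warm-up step, repetition count, and token id range are illustrative assumptions, not the exact benchmarking code.

```python
# Sketch: measure per-batch latency and samples/sec throughput on random input.
import time
import torch

def benchmark(model, batch_size, seq_len=512, n_runs=10, device="cpu"):
    model = model.to(device).eval()
    input_ids = torch.randint(1000, 10000, (batch_size, seq_len), device=device)
    with torch.no_grad():
        model(input_ids)  # warm-up run
        if device == "cuda":
            torch.cuda.synchronize()
        start = time.time()
        for _ in range(n_runs):
            model(input_ids)
        if device == "cuda":
            torch.cuda.synchronize()
    elapsed = (time.time() - start) / n_runs
    return elapsed, batch_size / elapsed  # sec per batch, samples/sec

# latency, _ = benchmark(model, batch_size=1)
# _, throughput = benchmark(model, batch_size=16)
```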
| Model | Size, MB | CPU latency, sec. | GPU latency, sec. | CPU throughput, samples/sec. | GPU throughput, samples/sec. |
|---|---|---|---|---|---|
- Lison, P. and Tiedemann, J. (2016). OpenSubtitles2016: Extracting Large Parallel Corpora from Movie and TV Subtitles. In Proceedings of the 10th International Conference on Language Resources and Evaluation (LREC 2016).
- Shavrina, T. and Shapovalova, O. (2017). To the Methodology of Corpus Construction for Machine Learning: "Taiga" Syntax Tree Corpus and Parser. In Proceedings of the International Conference "CORPORA 2017", Saint-Petersburg, 2017.
- Sanh, V., Debut, L., Chaumond, J., & Wolf, T. (2019). DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108.