Commit 9c97148 (parent: a47fee9) by ayeffkay

Update README.md

Files changed (1): README.md (+2 -2)
README.md CHANGED

```diff
@@ -3,7 +3,7 @@ language:
 - ru
 ---
 # distilrubert-tiny-cased-conversational
-Conversational DistilRuBERT-tiny \(Russian, cased, 3‑layers, 264‑hidden, 12‑heads, 11.8M parameters\) was trained on OpenSubtitles\[1\], [Dirty](https://d3.ru/), [Pikabu](https://pikabu.ru/), and a Social Media segment of Taiga corpus\[2\] (as [Conversational RuBERT](https://huggingface.co/DeepPavlov/rubert-base-cased-conversational)). It can be considered as tiny copy of [Conversational DistilRuBERT-small](https://huggingface.co/DeepPavlov/distilrubert-small-cased-conversational).
+Conversational DistilRuBERT-tiny \(Russian, cased, 3‑layers, 264‑hidden, 12‑heads, 11.8M parameters\) was trained on OpenSubtitles\[1\], [Dirty](https://d3.ru/), [Pikabu](https://pikabu.ru/), and a Social Media segment of Taiga corpus\[2\] (as [Conversational RuBERT](https://huggingface.co/DeepPavlov/rubert-base-cased-conversational)). It can be considered as tiny copy of [Conversational DistilRuBERT-small](https://huggingface.co/DeepPavlov/distilrubert-tiny-cased-conversational).
 
 
 Our DistilRuBERT-tiny is highly inspired by \[3\], \[4\] and architecture is very close to \[5\]. Namely, we use
@@ -45,7 +45,7 @@ We used `PyTorchBenchmark` from `transformers` to evaluate model's performance a
 | **distilrubert-tiny-cased-conversational** | 16 | 512 | **0.219** | **0.003** | **633** | **1291** |
 
 
-To evaluate model quality, we fine-tuned DistilRuBERT-tiny on classification (RuSentiment, ParaPhraser), NER and question answering data sets for Russian and obtained scores very similar to the [Conversational DistilRuBERT-small](https://huggingface.co/DeepPavlov/distilrubert-small-cased-conversational).
+To evaluate model quality, we fine-tuned DistilRuBERT-tiny on classification (RuSentiment, ParaPhraser), NER and question answering data sets for Russian and obtained scores very similar to the [Conversational DistilRuBERT-small](https://huggingface.co/DeepPavlov/distilrubert-tiny-cased-conversational).
 
 \[1\]: P. Lison and J. Tiedemann, 2016, OpenSubtitles2016: Extracting Large Parallel Corpora from Movie and TV Subtitles. In Proceedings of the 10th International Conference on Language Resources and Evaluation \(LREC 2016\)
 
```
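The changed lines describe the checkpoint but do not show how to load it. As a usage sketch (not part of the commit), it can presumably be loaded like any BERT-family model on the Hub, using the repo id from the links in the diff:

```python
# Usage sketch (not part of the commit): load the checkpoint like any
# BERT-family model hosted on the Hugging Face Hub.
import torch
from transformers import AutoModel, AutoTokenizer

name = "DeepPavlov/distilrubert-tiny-cased-conversational"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name)

inputs = tokenizer("привет, как дела?", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Hidden size is 264 per the model description in the diff.
print(outputs.last_hidden_state.shape)
```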
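The second hunk's context references `PyTorchBenchmark` from `transformers`, and the table row reports batch size 16 and sequence length 512, but the commit does not include the benchmark invocation. Below is a minimal sketch of how such a measurement could be reproduced with the legacy benchmark API shipped in older `transformers` releases; the exact arguments here are assumptions, not taken from the card:

```python
# Sketch only: the card mentions `PyTorchBenchmark` without showing the call.
# This reproduces the same kind of speed/memory measurement using the legacy
# benchmark utilities (deprecated in recent `transformers` versions).
from transformers import PyTorchBenchmark, PyTorchBenchmarkArguments

args = PyTorchBenchmarkArguments(
    models=["DeepPavlov/distilrubert-tiny-cased-conversational"],
    batch_sizes=[16],        # matches the table row in the diff
    sequence_lengths=[512],  # matches the table row in the diff
)

benchmark = PyTorchBenchmark(args)
results = benchmark.run()  # prints inference time and memory tables
```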