cointegrated
committed on
Commit: f191937
Parent(s): 8c80f90
Update README.md
README.md CHANGED
@@ -13,13 +13,13 @@ license: mit
 widget:
 - text: "Миниатюрная модель для [MASK] разных задач."
 ---
-This is a very small distilled version of the [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) model for Russian and English (45 MB, 12M parameters).
+This is a very small distilled version of the [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) model for Russian and English (45 MB, 12M parameters). There is also an **updated version of this model**, [rubert-tiny2](https://huggingface.co/cointegrated/rubert-tiny2), with a larger vocabulary and better quality on practically all Russian NLU tasks.
 
 This model is useful if you want to fine-tune it for a relatively simple Russian task (e.g. NER or sentiment classification), and you care more about speed and size than about accuracy. It is approximately x10 smaller and faster than a base-sized BERT. Its `[CLS]` embeddings can be used as a sentence representation aligned between Russian and English.
 
 It was trained on the [Yandex Translate corpus](https://translate.yandex.ru/corpus), [OPUS-100](https://huggingface.co/datasets/opus100) and [Tatoeba](https://huggingface.co/datasets/tatoeba), using MLM loss (distilled from [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased)), translation ranking loss, and `[CLS]` embeddings distilled from [LaBSE](https://huggingface.co/sentence-transformers/LaBSE), [rubert-base-cased-sentence](https://huggingface.co/DeepPavlov/rubert-base-cased-sentence), Laser and USE.
 
 There is a more detailed [description in Russian](https://habr.com/ru/post/562064/).
 
 Sentence embeddings can be produced as follows:
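The README's actual embedding snippet is truncated in this diff hunk, so it is not reproduced here. As a hedged sketch (not necessarily the README's own code), `[CLS]` sentence embeddings of the kind the text describes could be produced with `transformers` like this, assuming the model id `cointegrated/rubert-tiny` from the commit author's repo:

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Assumed model id; the diff itself does not show which checkpoint the README loads.
MODEL_ID = "cointegrated/rubert-tiny"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModel.from_pretrained(MODEL_ID)

def embed(texts):
    """Return L2-normalized [CLS] embeddings for a batch of sentences."""
    enc = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        out = model(**enc)
    cls = out.last_hidden_state[:, 0, :]       # [CLS] is the first token
    return torch.nn.functional.normalize(cls)  # unit norm, ready for cosine similarity

emb = embed(["привет мир", "hello world"])
print(emb.shape)  # one row per input sentence
```

Because the embeddings are normalized, the cosine similarity of a Russian sentence and its English translation is just their dot product, which is how the aligned Russian/English representation mentioned above would typically be used.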