Task: fill-mask (mask token: [MASK])
API endpoint  


$ curl -X POST \
    -H "Authorization: Bearer YOUR_ORG_OR_USER_API_TOKEN" \
    -H "Content-Type: application/json" \
    -d '"json encoded string"' \
    https://api-inference.huggingface.co/models/DeepPavlov/rubert-base-cased-sentence
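The -d argument above is a placeholder for the JSON-encoded request body. For fill-mask models, the Inference API expects an object whose "inputs" field holds the text containing the mask token. A minimal sketch of building such a body in Python (the example sentence is illustrative):

```python
import json

# Build the request body for a fill-mask Inference API call:
# a JSON object with an "inputs" field containing the [MASK] token.
payload = json.dumps({"inputs": "Привет, [MASK]!"}, ensure_ascii=False)
print(payload)  # pass this string as the -d argument to curl
```
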

Monthly model downloads

DeepPavlov/rubert-base-cased-sentence: 1,623 downloads in the last 30 days

Frameworks: pytorch, tf

Contributed by

DeepPavlov (MIPT university)
6 models

How to use this model directly from the 🤗/transformers library:

from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("DeepPavlov/rubert-base-cased-sentence")
model = AutoModelWithLMHead.from_pretrained("DeepPavlov/rubert-base-cased-sentence")

rubert-base-cased-sentence

Sentence RuBERT (Russian, cased, 12-layer, 768-hidden, 12-heads, 180M parameters) is a representation-based sentence encoder for Russian. It is initialized with RuBERT and fine-tuned on SNLI [1] Google-translated into Russian and on the Russian part of the XNLI dev set [2]. Sentence representations are mean-pooled token embeddings, in the same manner as in Sentence-BERT [3].
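Mean pooling averages the token embeddings of a sentence, weighted by the attention mask so that padding positions are excluded. A minimal sketch of that pooling step, using NumPy with dummy embeddings standing in for the model's actual hidden states:

```python
import numpy as np

def mean_pool(hidden_states, attention_mask):
    """Average token embeddings, ignoring positions masked out as padding.

    hidden_states: (batch, seq_len, hidden) array of token embeddings
    attention_mask: (batch, seq_len) array of 1s (real tokens) and 0s (padding)
    """
    mask = attention_mask[..., np.newaxis].astype(hidden_states.dtype)
    summed = (hidden_states * mask).sum(axis=1)       # sum over real tokens only
    counts = np.clip(mask.sum(axis=1), 1e-9, None)    # number of real tokens
    return summed / counts

# Dummy batch: 1 sentence, 3 real tokens plus 1 padding token, hidden size 2.
hidden = np.array([[[1.0, 2.0], [3.0, 4.0], [5.0, 6.0], [100.0, 100.0]]])
mask = np.array([[1, 1, 1, 0]])
print(mean_pool(hidden, mask))  # → [[3. 4.]] — the padding row is ignored
```

With the real model, `hidden_states` would be the encoder output and `attention_mask` the mask returned by the tokenizer.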

[1]: S. R. Bowman, G. Angeli, C. Potts, and C. D. Manning. (2015) A large annotated corpus for learning natural language inference. arXiv preprint arXiv:1508.05326

[2]: A. Conneau, R. Rinott, G. Lample, A. Williams, S. R. Bowman, H. Schwenk, and V. Stoyanov (2018) XNLI: Evaluating Cross-lingual Sentence Representations. arXiv preprint arXiv:1809.05053

[3]: N. Reimers, I. Gurevych (2019) Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks. arXiv preprint arXiv:1908.10084