
ruSciBERT

The model was trained by the Sber AI team and the MLSA Lab of the Institute for AI, MSU. If you use our model in your project, please tell us about it (nikgerasimenko@gmail.com).

Presented at AI Journey 2022.

  • Task: mask filling
  • Type: encoder
  • Tokenizer: BPE
  • Dictionary size: 50,265
  • Number of parameters: 123 M
  • Training data volume: 6.5 GB
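The BPE dictionary size of 50,265 together with the stated 123 M parameters is consistent with a standard BERT/RoBERTa-base-style encoder. The architectural details below (12 layers, hidden size 768, FFN size 3072, 512 positions) are assumptions, not stated on this card; only the vocabulary size comes from the specs above. A rough sanity check of the parameter count:

```python
# Approximate parameter count for a BERT/RoBERTa-base-style encoder.
# Only the vocabulary size (50265) comes from the model card; the
# layer count, hidden size, FFN size and max positions are assumed.
vocab, hidden, ffn, layers, max_pos = 50265, 768, 3072, 12, 512

# Token, position and token-type embeddings, plus the embedding LayerNorm.
embeddings = (vocab + max_pos + 2) * hidden + 2 * hidden

per_layer = (
    4 * (hidden * hidden + hidden)  # Q, K, V and output projections (+ biases)
    + 2 * (2 * hidden)              # two LayerNorms (scale + bias each)
    + hidden * ffn + ffn            # FFN up-projection (+ bias)
    + ffn * hidden + hidden         # FFN down-projection (+ bias)
)

total = embeddings + layers * per_layer
print(f"{total / 1e6:.1f} M parameters")  # ≈ 124 M, close to the stated 123 M
```

The small gap between this estimate and the reported 123 M is expected, since the exact figure depends on which components (pooler, tied LM head, etc.) are counted.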