<p><b>UzRoBerta model.</b>
A pretrained model for Uzbek (Cyrillic script) for masked language modeling and next-sentence prediction.
<p><b>Training data.</b>
The UzBERT model was pretrained on ≈167K news articles (≈568 MB).
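Below is a minimal usage sketch for masked-token prediction with the Hugging Face `transformers` library. The repository id is a placeholder, not the published checkpoint name; substitute the actual Hub id of the UzRoBerta model.

```python
# Minimal sketch: masked-token prediction with Hugging Face transformers.
# The model id below is a placeholder -- replace it with the real Hub id.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="<your-username>/UzRoBerta")

# Example Uzbek (Cyrillic) sentence with a masked token
# (RoBERTa-style models use the <mask> token).
for prediction in fill_mask("Тошкент — Ўзбекистоннинг <mask> шаҳри."):
    print(prediction["token_str"], round(prediction["score"], 3))
```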