RobeCzech is a monolingual RoBERTa language representation model trained on Czech data.
Please cite the corresponding publication:
- Milan Straka, Jakub Náplava, Jana Straková, and David Samuel: RobeCzech: Czech RoBERTa, a Monolingual Contextualized Language Representation Model. Accepted to TSD 2021.
A preprint of the paper is available at https://arxiv.org/abs/2105.11314.
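The model can be loaded with the Hugging Face Transformers library. The sketch below assumes the model is published on the Hub under the ID `ufal/robeczech-base`; it runs a fill-mask prediction on a Czech sentence.

```python
# Sketch: using RobeCzech via Hugging Face Transformers.
# The hub ID "ufal/robeczech-base" and the example sentence are assumptions,
# not taken from this model card.
from transformers import AutoTokenizer, AutoModelForMaskedLM, pipeline

tokenizer = AutoTokenizer.from_pretrained("ufal/robeczech-base")
model = AutoModelForMaskedLM.from_pretrained("ufal/robeczech-base")

# Build a fill-mask pipeline; the mask token string is model-specific,
# so it is taken from the tokenizer rather than hard-coded.
fill = pipeline("fill-mask", model=model, tokenizer=tokenizer)
predictions = fill(f"Praha je hlavní město {tokenizer.mask_token}.")

for p in predictions:
    print(p["token_str"], round(p["score"], 3))
```

Each prediction is a dict with the filled token and its score; by default the pipeline returns the top five candidates.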