
RoBERTa base model for Hindi

Pretrained model on the Hindi language using a masked language modeling (MLM) objective. The model achieves competitive accuracy compared to pre-existing models on downstream tasks such as named entity recognition and text classification. Some MLM examples show there is still visible room for improvement, but it should serve as a good base model for Hindi and can be fine-tuned on task-specific datasets.
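
As an illustration of the MLM objective, the model can be queried through the transformers fill-mask pipeline. The sketch below is a minimal example; the repo id `flax-community/roberta-hindi` and the Hindi probe sentence are assumptions for illustration, not taken from this card:

```python
from transformers import pipeline

# Hypothetical Hub repo id; replace with the actual identifier of this model.
fill_mask = pipeline("fill-mask", model="flax-community/roberta-hindi")

# RoBERTa-style tokenizers use "<mask>" as the mask token.
# Example probe: "The capital of India is <mask>."
predictions = fill_mask("भारत की राजधानी <mask> है।")

for p in predictions:
    print(p["token_str"], p["score"])
```

The same checkpoint can also be loaded with `AutoModelForMaskedLM` / `AutoTokenizer` and fine-tuned on downstream Hindi datasets.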

This model was trained as part of the Flax/JAX Community Week organized by Hugging Face, with TPU usage sponsored by Google.