---
license: cc-by-4.0
language: gu
---

## GujaratiBERT-Scratch

GujaratiBERT is a Gujarati BERT model trained from scratch on publicly available Gujarati monolingual datasets.

Preliminary details on the dataset, models, and baseline results can be found in our [paper](https://arxiv.org/abs/2211.11418).
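As a minimal usage sketch, the model can be loaded with the Hugging Face `transformers` library. The repository id `l3cube-pune/gujarati-bert-scratch` below is assumed from the model name and may differ; verify it on the model page.

```python
from transformers import AutoModelForMaskedLM, AutoTokenizer, pipeline

# Assumed repository id, inferred from the model name; verify on the Hub.
model_name = "l3cube-pune/gujarati-bert-scratch"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

# Use the pretrained masked-language-modeling head to fill a [MASK] token
# in a Gujarati sentence.
fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)
print(fill_mask("હું [MASK] છું."))
```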

Citing:

```
@article{joshi2022l3cubehind,
  title={L3Cube-HindBERT and DevBERT: Pre-Trained BERT Transformer models for Devanagari based Hindi and Marathi Languages},
  author={Joshi, Raviraj},
  journal={arXiv preprint arXiv:2211.11418},
  year={2022}
}
```