---
language:
- hi
- en
- multilingual
license: cc-by-4.0
tags:
- hi
- en
- codemix
datasets:
- L3Cube-HingCorpus
---

## HingRoBERTa-Mixed

HingRoBERTa-Mixed is a Hindi-English code-mixed BERT model trained on Roman and Devanagari text. It is an XLM-RoBERTa model fine-tuned on the mixed-script L3Cube-HingCorpus.
[dataset link](https://github.com/l3cube-pune/code-mixed-nlp)

More details on the dataset, models, and baseline results can be found in our [paper](https://arxiv.org/abs/2204.08398).

Other models from the HingBERT family:
- HingBERT
- HingMBERT
- HingBERT-Mixed
- HingRoBERTa
- HingRoBERTa-Mixed
- HingGPT
- HingGPT-Devanagari
- HingBERT-LID
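As a masked language model, HingRoBERTa-Mixed can be loaded with the Hugging Face Transformers library for fill-mask prediction or further fine-tuning. The sketch below is a minimal example; the model id `l3cube-pune/hing-roberta-mixed` is an assumption inferred from the card title and should be verified on the Hub before use.

```python
MODEL_ID = "l3cube-pune/hing-roberta-mixed"  # assumed Hub id; verify before use


def load_model(model_id=MODEL_ID):
    """Load the tokenizer and masked-LM head for HingRoBERTa-Mixed.

    Imports are kept inside the function so this module can be inspected
    without having `transformers` installed.
    """
    from transformers import AutoModelForMaskedLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForMaskedLM.from_pretrained(model_id)
    return tokenizer, model


def fill_mask(text, model_id=MODEL_ID, top_k=5):
    """Return the top-k predictions for the <mask> token in a
    code-mixed sentence, e.g. "mujhe yeh movie bahut <mask> lagi"."""
    from transformers import pipeline

    fill = pipeline("fill-mask", model=model_id, top_k=top_k)
    return fill(text)
```

Since the underlying model is XLM-RoBERTa, the mask token is `<mask>` (not BERT's `[MASK]`).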
```
@inproceedings{nayak-joshi-2022-l3cube,
    title = "{L}3{C}ube-{H}ing{C}orpus and {H}ing{BERT}: A Code Mixed {H}indi-{E}nglish Dataset and {BERT} Language Models",
    author = "Nayak, Ravindra and Joshi, Raviraj",
    booktitle = "Proceedings of the WILDRE-6 Workshop within the 13th Language Resources and Evaluation Conference",
    month = jun,
    year = "2022",
    address = "Marseille, France",
    publisher = "European Language Resources Association",
    url = "https://aclanthology.org/2022.wildre-1.2",
    pages = "7--12",
}
```