MeLID-RoBERTa

MeLID-RoBERTa is a MeRoBERTa model fine-tuned on L3Cube-MeLID, a code-mixed Marathi-English language identification dataset.
[dataset link](https://github.com/l3cube-pune/MarathiNLP)

More details on the dataset, models, and baseline results can be found in our [paper](https://arxiv.org/abs/2306.14030).
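
A minimal usage sketch with the Hugging Face transformers pipeline, assuming the model performs word-level language identification on code-mixed text; the Hub model ID `l3cube-pune/me-lid-roberta` and the example sentence are illustrative assumptions, not taken from this card. If the model is instead a sequence-level classifier, use the `text-classification` pipeline.

```python
# Minimal sketch (assumed usage): word-level language identification with the
# transformers token-classification pipeline. The model ID below is an
# assumption and may differ from the released checkpoint.
from transformers import pipeline

lid = pipeline(
    "token-classification",
    model="l3cube-pune/me-lid-roberta",
    aggregation_strategy="simple",
)

# Code-mixed Marathi-English input (romanized Marathi mixed with English).
text = "he song khup chhan aahe, totally recommend it"
for token in lid(text):
    print(token["word"], token["entity_group"], round(token["score"], 3))
```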

Other models from the MeBERT family:

- MeBERT
- MeRoBERTa

- MeBERT-Mixed
- MeBERT-Mixed-v2
- MeRoBERTa-Mixed

- MeLID-RoBERTa
- MeHate-RoBERTa
- MeSent-RoBERTa
- MeHate-BERT
- MeLID-BERT

Citing:

@article{chavan2023my,
  title={My Boli: Code-mixed Marathi-English Corpora, Pretrained Language Models and Evaluation Benchmarks},
  author={Chavan, Tanmay and Gokhale, Omkar and Kane, Aditya and Patankar, Shantanu and Joshi, Raviraj},
  journal={arXiv preprint arXiv:2306.14030},
  year={2023}
}