
The model is pretrained on the OSCAR dataset for Bangla, English, and Hindi, and further pretrained on 560k code-mixed (Bangla-English-Hindi) examples. The base model is DistilBERT, and the model is intended for datasets that contain code-mixing of these three languages.
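A minimal usage sketch with the `transformers` library is shown below. The repository ID is an assumption for illustration; substitute the actual Hugging Face model ID if it differs.

```python
from transformers import AutoTokenizer, AutoModelForMaskedLM, pipeline

# Repository ID is an assumption for illustration; replace it with the
# actual Hugging Face model ID if it differs.
model_id = "md-nishat-008/Mixed-Distil-BERT"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)

# DistilBERT is a masked language model, so fill-mask works out of the box;
# the model can also be fine-tuned for downstream code-mixed tasks.
fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)
print(fill_mask(f"I love {tokenizer.mask_token} food."))
```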

To cite:

@article{raihan2023mixed,
  title={Mixed-Distil-BERT: Code-mixed Language Modeling for Bangla, English, and Hindi},
  author={Raihan, Md Nishat and Goswami, Dhiman and Mahmud, Antara},
  journal={arXiv preprint arXiv:2309.10272},
  year={2023}
}
