Model name

Model description

I took the bert-base-multilingual-cased model from Hugging Face and fine-tuned it on the SAIL 2017 dataset.

Intended uses & limitations

How to use

Sample code from the author is coming soon.
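Until then, a minimal usage sketch, assuming only the standard transformers text-classification interface. The exact Hub model id is not stated in this card, so the identifier below is a placeholder, and the code-mixed example sentence is illustrative.

from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline

# Placeholder id: substitute the actual Hub identifier of this model.
model_id = "rohanrajpal/<model-name>"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# SAIL 2017 is a code-mixed sentiment task, so a Hindi-English
# sentence is a natural test input.
classifier = pipeline("text-classification", model=model, tokenizer=tokenizer)
print(classifier("yeh film bahut achhi thi"))  # -> [{'label': ..., 'score': ...}]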

Limitations and bias


Training data

I fine-tuned this pretrained model on the SAIL 2017 dataset (link).

Training procedure

No preprocessing.
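
For context, a hedged sketch of what the fine-tuning setup could look like (bert-base-multilingual-cased with a sequence-classification head, no preprocessing beyond tokenization). The label scheme, hyperparameters, and toy stand-in data below are illustrative assumptions, not the author's exact procedure.

from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

# Stand-in data: replace with the real SAIL 2017 splits
# (labels assumed here to be 0 = negative, 1 = neutral, 2 = positive).
train_dataset = Dataset.from_dict({
    "text": ["yeh film bahut achhi thi", "movie ekdum bakwas hai"],
    "label": [2, 0],
})

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=3)

def tokenize(batch):
    # No extra preprocessing, matching the note above; tokenization only.
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

train_dataset = train_dataset.map(tokenize, batched=True)

args = TrainingArguments(output_dir="out", num_train_epochs=3,
                         per_device_train_batch_size=16)  # assumed hyperparameters

Trainer(model=model, args=args, train_dataset=train_dataset).train()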

Eval results

BibTeX entry and citation info

@inproceedings{khanuja-etal-2020-gluecos,
    title = "{GLUEC}o{S}: An Evaluation Benchmark for Code-Switched {NLP}",
    author = "Khanuja, Simran  and
      Dandapat, Sandipan  and
      Srinivasan, Anirudh  and
      Sitaram, Sunayana  and
      Choudhury, Monojit",
    booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
    month = jul,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/2020.acl-main.329",
    pages = "3575--3585"
}