---
language:
- as
- bn
- brx
- doi
- en
- gom
- gu
- hi
- kn
- ks
- kas
- mai
- ml
- mr
- mni
- mnb
- ne
- or
- pa
- sa
- sat
- sd
- snd
- ta
- te
- ur
language_details: >-
  asm_Beng, ben_Beng, brx_Deva, doi_Deva, eng_Latn, gom_Deva, guj_Gujr,
  hin_Deva, kan_Knda, kas_Arab, kas_Deva, mai_Deva, mal_Mlym, mar_Deva,
  mni_Beng, mni_Mtei, npi_Deva, ory_Orya, pan_Guru, san_Deva, sat_Olck,
  snd_Arab, snd_Deva, tam_Taml, tel_Telu, urd_Arab
tags:
- indicbert2
- ai4bharat
- multilingual
license: mit
metrics:
- accuracy
pipeline_tag: fill-mask
---

# IndicBERT

A multilingual language model trained on IndicCorp v2 and evaluated on the IndicXTREME benchmark. The model has 278M parameters and covers 23 Indic languages and English. The models are trained with various objectives and datasets. The list of models is as follows:

- IndicBERT-MLM [[Model](https://huggingface.co/ai4bharat/IndicBERTv2-MLM-only)] - A vanilla BERT-style model trained on IndicCorp v2 with the MLM objective
- +Samanantar [[Model](https://huggingface.co/ai4bharat/IndicBERTv2-MLM-Sam-TLM)] - TLM as an additional objective with the Samanantar parallel corpus [[Paper](https://aclanthology.org/2022.tacl-1.9)] | [[Dataset](https://huggingface.co/datasets/ai4bharat/samanantar)]
- +Back-Translation [[Model](https://huggingface.co/ai4bharat/IndicBERTv2-MLM-Back-TLM)] - TLM as an additional objective, with the Indic portion of the IndicCorp v2 dataset translated into English using the IndicTrans model [[Model](https://github.com/AI4Bharat/indicTrans#download-model)]
- IndicBERT-SS [[Model](https://huggingface.co/ai4bharat/IndicBERTv2-SS)] - To encourage better lexical sharing among languages, the scripts of the Indic languages are converted to Devanagari and a BERT-style model is trained with the MLM objective

## Run Fine-tuning

Fine-tuning scripts are based on the `transformers` library. Create a new conda environment and set it up as follows:

```shell
conda create -n finetuning python=3.9
conda activate finetuning
pip install -r requirements.txt
```

All the tasks follow the same structure; please check the individual files for detailed hyperparameter choices. The following command runs fine-tuning for a task:

```shell
python IndicBERT/fine-tuning/$TASK_NAME/$TASK_NAME.py \
    --model_name_or_path=$MODEL_NAME \
    --do_train
```

Arguments:

- MODEL_NAME: name of the model to fine-tune; can be a local path or a model from the [HuggingFace Model Hub](https://huggingface.co/models)
- TASK_NAME: one of [`ner`, `paraphrase`, `qa`, `sentiment`, `xcopa`, `xnli`, `flores`]

> For the MASSIVE task, please follow the instructions provided in the [official repository](https://github.com/alexa/massive)
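
## Masked-Token Prediction

Since the model card declares a `fill-mask` pipeline, the pretrained checkpoints can be queried directly for masked-token prediction without any fine-tuning. The snippet below is a minimal sketch using the `transformers` pipeline API; the checkpoint id is taken from the model list above, and the Hindi example sentence is purely illustrative.

```python
from transformers import pipeline

# Assumed checkpoint: any of the IndicBERTv2 variants listed above should load the same way.
model_name = "ai4bharat/IndicBERTv2-MLM-only"

# Build a fill-mask pipeline; the tokenizer is loaded from the same checkpoint.
fill_mask = pipeline("fill-mask", model=model_name)

# Illustrative Hindi sentence with the tokenizer's mask token inserted.
sentence = f"भारत एक महान {fill_mask.tokenizer.mask_token} है।"

# Each prediction contains the filled-in token and its score.
for prediction in fill_mask(sentence):
    print(prediction["token_str"], prediction["score"])
```

The same checkpoints can instead be loaded with `AutoModelForMaskedLM` and `AutoTokenizer` if you need the raw logits rather than ranked predictions.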