

IndicNER is a model trained to identify named entities in sentences in Indian languages. It is fine-tuned on millions of sentences across 11 Indian languages and then benchmarked on a human-annotated test set as well as several other publicly available Indian NER datasets. The 11 languages covered by IndicNER are: Assamese, Bengali, Gujarati, Hindi, Kannada, Malayalam, Marathi, Oriya, Punjabi, Tamil, and Telugu.
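To illustrate what "identifying named entities" means in practice, here is a minimal sketch of turning token-level BIO tags (the kind of per-token output an NER model produces) into entity spans. The tag names (PER, LOC) are illustrative; check the model's own label set before relying on them.

```python
def bio_to_spans(tokens, tags):
    """Group BIO-tagged tokens into (entity_text, entity_type) spans."""
    spans, current, current_type = [], [], None
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-"):  # a new entity begins here
            if current:
                spans.append((" ".join(current), current_type))
            current, current_type = [token], tag[2:]
        elif tag.startswith("I-") and current and tag[2:] == current_type:
            current.append(token)  # the current entity continues
        else:  # "O" tag or a malformed continuation: close any open entity
            if current:
                spans.append((" ".join(current), current_type))
            current, current_type = [], None
    if current:  # flush an entity that runs to the end of the sentence
        spans.append((" ".join(current), current_type))
    return spans

tokens = ["Virat", "Kohli", "plays", "in", "Delhi"]
tags = ["B-PER", "I-PER", "O", "O", "B-LOC"]
print(bio_to_spans(tokens, tags))  # → [('Virat Kohli', 'PER'), ('Delhi', 'LOC')]
```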

Training Corpus

Our model was trained on a dataset mined from the existing Samanantar corpus. We used bert-base-multilingual-uncased as the starting point and fine-tuned it on the NER dataset described above.

Evaluation Results

Benchmarking on our test set:

| Language | bn | hi | kn | ml | mr | gu | ta | te | as | or | pa |
|----------|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|
| F1 score | 79.75 | 82.33 | 80.01 | 80.73 | 80.51 | 73.82 | 80.98 | 80.88 | 62.50 | 27.05 | 74.88 |

The first five languages (bn, hi, kn, ml, mr) have large human-annotated test sets of around 500-1000 sentences each. The next three (gu, ta, te) have smaller human-annotated test sets of only around 50 sentences. The final three (as, or, pa) have test sets mined by annotation projection, without human verification.
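For a single summary number, the per-language scores in the table can be macro-averaged. The snippet below copies the F1 scores from the table; the computed average is our own arithmetic, not a figure reported in this card.

```python
# Per-language F1 scores copied from the benchmarking table above.
f1 = {"bn": 79.75, "hi": 82.33, "kn": 80.01, "ml": 80.73, "mr": 80.51,
      "gu": 73.82, "ta": 80.98, "te": 80.88, "as": 62.50, "or": 27.05,
      "pa": 74.88}

# Macro average: every language weighted equally, regardless of test-set size.
macro_f1 = sum(f1.values()) / len(f1)
print(f"macro F1 over {len(f1)} languages: {macro_f1:.2f}")  # → 73.04
```

Note that the macro average is pulled down heavily by the projected (non-human-verified) Oriya test set.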


Download the model from this same Hugging Face repository.


You can use this Colab notebook for examples of using IndicNER, or for fine-tuning a pre-trained model on the Naamapadam dataset to build your own NER models.
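Beyond the notebook, a minimal sketch of loading the model with the Hugging Face transformers library. The Hub id "ai4bharat/IndicNER" is an assumption; verify the exact id on this repository's page.

```python
def build_ner_pipeline(model_id: str = "ai4bharat/IndicNER"):
    """Return a token-classification pipeline for the given Hub model id.

    aggregation_strategy="simple" merges subword pieces back into
    whole-word entity predictions.
    """
    from transformers import pipeline  # requires the transformers library
    return pipeline("ner", model=model_id, aggregation_strategy="simple")

# Usage (downloads the model weights on first call):
# ner = build_ner_pipeline()
# print(ner("विराट कोहली दिल्ली में रहते हैं"))
```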


If you are using IndicNER, please cite the following article:

```
@article{mhaske2022naamapadam,
  title={Naamapadam: A Large-Scale Named Entity Annotated Data for Indic Languages},
  author={Arnav Mhaske and Harshit Kedia and Rudramurthy V and Anoop Kunchukuttan and Pratyush Kumar and Mitesh Khapra},
  eprint={to be published soon}
}
```

We would like to hear from you if:

  • You are using our resources. Please let us know how you are putting these resources to use.
  • You have any feedback on these resources.


The IndicNER code and models are released under the MIT License.


This work is the outcome of a volunteer effort as part of the AI4Bharat initiative.

