monsoon-nlp/tamillion
Contributed by monsoon-nlp (Nick Doiron)

How to use this model directly from the 🤗/transformers library:

from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("monsoon-nlp/tamillion")
model = AutoModel.from_pretrained("monsoon-nlp/tamillion")
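The snippet above loads the encoder; a minimal sketch of using it to extract contextual embeddings follows. The Tamil sentence is an arbitrary example phrase, not from the model's training data, and the first call downloads the weights from the Hub.

```python
import torch
from transformers import AutoTokenizer, AutoModel

# Load the TaMillion encoder (weights download on first use)
tokenizer = AutoTokenizer.from_pretrained("monsoon-nlp/tamillion")
model = AutoModel.from_pretrained("monsoon-nlp/tamillion")

# Encode a short Tamil sentence (an arbitrary example phrase)
inputs = tokenizer("தமிழ் ஒரு செம்மொழி", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Contextual embeddings, shape (batch, sequence_length, hidden_size)
embeddings = outputs.last_hidden_state
print(embeddings.shape)
```

The per-token vectors in `last_hidden_state` can be pooled (e.g. mean over tokens) for sentence-level features in downstream tasks.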


This is the second version of a Tamil language model trained with Google Research's ELECTRA.

Tokenization and pre-training Colab notebooks:

V1: small model trained on GPU; 190,000 steps

V2 (current): base model trained on TPU with a larger corpus; 224,000 steps


Sudalai Rajkumar's Tamil-NLP page contains classification and regression tasks:


The model outperformed mBERT on news classification (random baseline: 16.7%, mBERT: 53.0%, TaMillion: 75.1%)

The model slightly outperformed mBERT on movie review score regression (RMSE, lower is better: mBERT 0.657, TaMillion 0.626)

Equivalent accuracy on the Tirukkural topic task.
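The classification results above come from fine-tuning the encoder with a task head. A minimal sketch of one training step with transformers' `AutoModelForSequenceClassification` follows; `num_labels=6`, the headline text, and the gold label are all placeholders, not details from the benchmark.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# num_labels=6 is a placeholder for a hypothetical 6-topic news task;
# the classifier head is freshly initialized on top of the encoder
tokenizer = AutoTokenizer.from_pretrained("monsoon-nlp/tamillion")
model = AutoModelForSequenceClassification.from_pretrained(
    "monsoon-nlp/tamillion", num_labels=6
)

# Placeholder training example: one headline with a dummy gold label
batch = tokenizer(["உதாரண செய்தி தலைப்பு"], return_tensors="pt")
labels = torch.tensor([2])

outputs = model(**batch, labels=labels)
outputs.loss.backward()  # an optimizer step would complete one update
print(outputs.logits.shape)  # torch.Size([1, 6])
```

In practice this would be wrapped in a training loop (or the `Trainer` API) over the labeled Tamil-NLP data.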

Question Answering

I didn't find a Tamil-language question answering dataset, but this model could be fine-tuned for question answering. See Hindi and Bengali examples here:
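A sketch of the starting point for such fine-tuning, using transformers' `AutoModelForQuestionAnswering`: the span-prediction head is freshly initialized, and the question/context strings below are placeholders, not from any dataset.

```python
import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

# The QA head weights are freshly initialized; labeled QA data
# would still be needed before predictions are meaningful
tokenizer = AutoTokenizer.from_pretrained("monsoon-nlp/tamillion")
model = AutoModelForQuestionAnswering.from_pretrained("monsoon-nlp/tamillion")

# Placeholder question/context pair (not a real dataset example)
inputs = tokenizer("கேள்வி என்ன?", "இது ஒரு சூழல் பத்தி.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# start/end logits score each token as a possible answer-span boundary
print(outputs.start_logits.shape, outputs.end_logits.shape)
```

Fine-tuning would supply `start_positions`/`end_positions` labels from a SQuAD-style dataset, as in the linked Hindi and Bengali examples.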


Trained on IndicCorp Tamil (11GB) and 1 October 2020 dump of (482MB)


The vocabulary is included as vocab.txt in the upload