Releasing Hindi ELECTRA model

This is a first attempt at a Hindi language model trained with Google Research's ELECTRA.

Consider using this newer, larger model:

Tokenization and training Colab

I originally used a modified ELECTRA for fine-tuning, but now use SimpleTransformers.

Blog post that greatly influenced me:

Example Notebooks

This small model achieves results comparable to Multilingual BERT on BBC Hindi news classification and on Hindi movie reviews / sentiment analysis (using SimpleTransformers).

You can get higher accuracy with ktrain by adjusting the learning rate (and by changing model_type in config.json; this is an open issue with ktrain):
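A hedged sketch of the config.json workaround mentioned above. The value "bert" is only an illustrative assumption; the correct value depends on the open ktrain issue.

```python
import json
import os
import tempfile

# Write an illustrative config.json; a real one has many more fields.
cfg_dir = tempfile.mkdtemp()
cfg_path = os.path.join(cfg_dir, "config.json")
with open(cfg_path, "w") as f:
    json.dump({"model_type": "electra", "vocab_size": 31000}, f)

# Workaround sketch: switch model_type to one the tool recognizes.
# "bert" is an assumption for illustration, not a confirmed fix.
with open(cfg_path) as f:
    config = json.load(f)
config["model_type"] = "bert"
with open(cfg_path, "w") as f:
    json.dump(config, f, indent=2)
```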

Question-answering on MLQA dataset:

A larger model (Hindi-TPU-Electra) using ELECTRA base size outperforms both models on Hindi movie reviews / sentiment analysis, but does not perform as well on the BBC news classification task.



The corpus is two files:

Bonus notes:

  • Adding English wiki text or parallel corpus could help with cross-lingual tasks and training
  • Created with HuggingFace Tokenizers; you can increase vocabulary size and re-train; remember to change ELECTRA vocab_size
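A small sketch of keeping ELECTRA's vocab_size in sync with the tokenizer's vocab.txt after re-training. The file names follow this README's layout; the vocabulary contents are toy examples.

```python
import json
import os
import tempfile

# Toy vocab.txt with one token per line, as WordPiece vocab files use.
workdir = tempfile.mkdtemp()
vocab_path = os.path.join(workdir, "vocab.txt")
with open(vocab_path, "w", encoding="utf-8") as f:
    f.write("\n".join(["[PAD]", "[UNK]", "[CLS]", "[SEP]", "[MASK]", "नमस्ते"]))

# Count non-empty lines to get the tokenizer's vocabulary size.
with open(vocab_path, encoding="utf-8") as f:
    vocab_size = sum(1 for line in f if line.strip())

# Write the matching value into the model config.
config = {"model_type": "electra", "vocab_size": vocab_size}
with open(os.path.join(workdir, "config.json"), "w") as f:
    json.dump(config, f)
```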


Structure your files, with the data dir named "trainer", like this:

- vocab.txt
- pretrain_tfrecords
  - (all .tfrecord... files)
- models
  - modelname
    - checkpoint
    - graph.pbtxt
    - model.*
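The layout above can be created programmatically before pre-training; a minimal sketch using only the names from the tree (checkpoint files themselves are produced by ELECTRA, so only the directories and vocab.txt are created here):

```python
import tempfile
from pathlib import Path

# Recreate the expected ELECTRA data-dir layout under "trainer".
root = Path(tempfile.mkdtemp()) / "trainer"
(root / "pretrain_tfrecords").mkdir(parents=True)
(root / "models" / "modelname").mkdir(parents=True)
(root / "vocab.txt").touch()

print(sorted(p.name for p in root.iterdir()))
```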

The Colab notebook gives examples of GPU vs. TPU setup.


Use this process to convert an in-progress or completed ELECTRA checkpoint to a Transformers-ready model:

git clone
python ./transformers/src/transformers/
from transformers import TFElectraForPreTraining
model = TFElectraForPreTraining.from_pretrained("./dir_with_pytorch", from_pt=True)

Once you have one directory containing config.json, pytorch_model.bin, tf_model.h5, special_tokens_map.json, tokenizer_config.json, and vocab.txt at the same level, run:

transformers-cli upload directory
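Before uploading, it can help to sanity-check that the directory really contains every file listed above; a small sketch (the helper name is mine, not part of any CLI):

```python
import tempfile
from pathlib import Path

# The files this README says must sit at the same level before upload.
REQUIRED = [
    "config.json", "pytorch_model.bin", "tf_model.h5",
    "special_tokens_map.json", "tokenizer_config.json", "vocab.txt",
]

def missing_files(directory):
    """Return the required files that are not present in `directory`."""
    d = Path(directory)
    return [name for name in REQUIRED if not (d / name).is_file()]

# Toy directory with everything except tf_model.h5, to show the check:
demo = Path(tempfile.mkdtemp())
for name in REQUIRED:
    if name != "tf_model.h5":
        (demo / name).touch()

print(missing_files(demo))
```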