
Contributed by monsoon-nlp (Nick Doiron)
How to use this model directly from the 🤗/transformers library:

```python
from transformers import AutoTokenizer, AutoModelWithLMHead

tokenizer = AutoTokenizer.from_pretrained("monsoon-nlp/hindi-tpu-electra")
model = AutoModelWithLMHead.from_pretrained("monsoon-nlp/hindi-tpu-electra")
```
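Since this checkpoint is served as a masked-language (fill-mask) model, you can also query it through the `pipeline` API. This is a minimal sketch, not from the model card itself; it assumes the weights are downloadable and that `transformers` is installed with a PyTorch or TensorFlow backend. The Hindi prompt is an arbitrary example.

```python
from transformers import pipeline

# Load a fill-mask pipeline with this model (downloads weights on first use)
fill_mask = pipeline("fill-mask", model="monsoon-nlp/hindi-tpu-electra")

# The tokenizer's mask token is [MASK]; each result dict includes the
# predicted token string and its score
results = fill_mask("मैं आज बाजार [MASK] हूँ।")
for r in results:
    print(r["token_str"], r["score"])
```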

Hindi language model

Trained with ELECTRA base size settings

Tokenization and training CoLab

Example Notebooks

This model outperforms Multilingual BERT on Hindi movie reviews / sentiment analysis (using SimpleTransformers)

You can get higher accuracy using ktrain + TensorFlow, where you can adjust the learning rate and other hyperparameters:

Question-answering on MLQA dataset:

A smaller model (Hindi-BERT) performs better on a BBC news classification task.


The corpus is two files:

Bonus notes:

  • Adding English wiki text or parallel corpus could help with cross-lingual tasks and training


Bonus notes:

  • Created with HuggingFace Tokenizers; you can increase vocabulary size and re-train; remember to change ELECTRA vocab_size

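The vocabulary step above could be sketched with the Tokenizers library. This is a minimal illustration on a throwaway corpus: the file name, sentence, and `vocab_size` are placeholders, and (per the note above) the same `vocab_size` must be set in the ELECTRA config.

```python
from tokenizers import BertWordPieceTokenizer

# Write a tiny throwaway corpus (stand-in for your real Hindi text files)
with open("corpus.txt", "w", encoding="utf-8") as f:
    f.write("यह एक उदाहरण वाक्य है।\n" * 100)

# Train a WordPiece vocabulary; increase vocab_size for a real corpus,
# and keep ELECTRA's vocab_size in sync with this value
tokenizer = BertWordPieceTokenizer()
tokenizer.train(files=["corpus.txt"], vocab_size=500, min_frequency=1)
tokenizer.save_model(".")  # writes vocab.txt to the current directory
```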

Structure your files as follows, with the data directory named "trainer" here:

```
trainer
- vocab.txt
- pretrain_tfrecords
-- (all .tfrecord... files)
- models
-- modelname
--- checkpoint
--- graph.pbtxt
--- model.*
```
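As a quick sanity check before pretraining, the layout above can be verified with a small standard-library script (a hypothetical helper, not part of the ELECTRA tooling):

```python
import glob
import os

def check_electra_layout(data_dir: str, model_name: str) -> list:
    """Return a list of problems found in the expected ELECTRA data-dir layout."""
    problems = []
    if not os.path.isfile(os.path.join(data_dir, "vocab.txt")):
        problems.append("missing vocab.txt")
    if not glob.glob(os.path.join(data_dir, "pretrain_tfrecords", "*.tfrecord*")):
        problems.append("no .tfrecord files in pretrain_tfrecords/")
    if not os.path.isdir(os.path.join(data_dir, "models", model_name)):
        problems.append(f"missing models/{model_name}/ directory")
    return problems

# An empty list means the directory matches the layout above
print(check_electra_layout("trainer", "modelname"))
```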


Use this process to convert an in-progress or completed ELECTRA checkpoint to a Transformers-ready model:

```
git clone
python ./transformers/src/transformers/
```

```python
from transformers import TFElectraForPreTraining

model = TFElectraForPreTraining.from_pretrained("./dir_with_pytorch", from_pt=True)
```

Once you have formed one directory with config.json, pytorch_model.bin, tf_model.h5, special_tokens_map.json, tokenizer_config.json, and vocab.txt on the same level, run:

```
transformers-cli upload directory
```