This is a first attempt at a Hindi language model trained with Google Research's ELECTRA.
Consider using this newer, larger model: https://huggingface.co/monsoon-nlp/hindi-tpu-electra
I originally used a modified ELECTRA for finetuning, but now use SimpleTransformers.
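For example, here is a minimal fine-tuning sketch with SimpleTransformers; the two-row DataFrame and the settings are illustrative, not taken from my notebooks:

```python
import pandas as pd
from simpletransformers.classification import ClassificationModel

# Tiny illustrative dataset; real training data would be e.g. Hindi movie reviews
train_df = pd.DataFrame(
    [["यह फिल्म बहुत अच्छी थी", 1], ["कहानी उबाऊ थी", 0]],
    columns=["text", "labels"],
)

# model_type "electra" tells SimpleTransformers how to load the weights
model = ClassificationModel("electra", "monsoon-nlp/hindi-bert", num_labels=2, use_cuda=False)
model.train_model(train_df)
```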
You can get higher accuracy using ktrain by adjusting the learning rate (you will also need to change model_type in config.json; this is an open issue with ktrain): https://colab.research.google.com/drive/1mSeeSfVSOT7e-dVhPlmSsQRvpn6xC05w?usp=sharing
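A hedged sketch of the ktrain route (the sample data, maxlen, batch size, and learning rate are placeholders to tune; remember the model_type fix in config.json noted above):

```python
import ktrain
from ktrain import text

# Illustrative two-example dataset; integer labels index into class_names
x_train = ["यह फिल्म बहुत अच्छी थी", "कहानी उबाऊ थी"]
y_train = [1, 0]

t = text.Transformer("monsoon-nlp/hindi-bert", maxlen=128, class_names=["neg", "pos"])
trn = t.preprocess_train(x_train, y_train)
learner = ktrain.get_learner(t.get_classifier(), train_data=trn, batch_size=2)
learner.fit_onecycle(5e-5, 3)  # the learning rate is the knob worth adjusting
```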
Question answering on the MLQA dataset: https://colab.research.google.com/drive/1i6fidh2tItf_-IDkljMuaIGmEU6HT2Ar#scrollTo=IcFoAHgKCUiQ
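A sketch of that setup with SimpleTransformers, assuming the MLQA data has already been converted to SQuAD-style JSON; the file path is a hypothetical placeholder:

```python
from simpletransformers.question_answering import QuestionAnsweringModel

# SQuAD-format training file assumed at this placeholder path
model = QuestionAnsweringModel("electra", "monsoon-nlp/hindi-bert", use_cuda=False)
model.train_model("mlqa_hindi_train.json")
```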
A larger model (Hindi-TPU-Electra), trained at ELECTRA base size, outperforms both models on Hindi movie review / sentiment analysis, but does not perform as well on the BBC news classification task.
The corpus is two files:
Structure your files, with the data-dir named "trainer" here:

```
trainer
├── vocab.txt
├── pretrain_tfrecords
│   └── (all .tfrecord... files)
└── models
    └── modelname
        ├── checkpoint
        ├── graph.pbtxt
        └── model.*
```
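With that layout, the pretraining call against the google-research/electra repo looks roughly like this; the hparams JSON here is an assumption (see the repo for the full option set):

```bash
# Run from a checkout of https://github.com/google-research/electra
python3 run_pretraining.py \
  --data-dir trainer \
  --model-name modelname \
  --hparams '{"model_size": "small"}'
```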
The Colab notebook gives examples of GPU vs. TPU setup.
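For TPU runs, my understanding is that the data-dir must live in a GCS bucket and the TPU options go through the same --hparams JSON; the bucket and TPU names below are placeholders, and the exact option names should be checked against configure_pretraining.py in the ELECTRA repo:

```bash
python3 run_pretraining.py \
  --data-dir gs://your-bucket/trainer \
  --model-name modelname \
  --hparams '{"model_size": "small", "use_tpu": true, "num_tpu_cores": 8, "tpu_name": "your-tpu"}'
```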
Use this process to convert an in-progress or completed ELECTRA checkpoint to a Transformers-ready model:
```bash
git clone https://github.com/huggingface/transformers
python ./transformers/src/transformers/convert_electra_original_tf_checkpoint_to_pytorch.py \
  --tf_checkpoint_path=./models/checkpointdir \
  --config_file=config.json \
  --pytorch_dump_path=pytorch_model.bin \
  --discriminator_or_generator=discriminator
```

Then, in a Python shell, export the TensorFlow weights:

```python
from transformers import TFElectraForPreTraining

model = TFElectraForPreTraining.from_pretrained("./dir_with_pytorch", from_pt=True)
model.save_pretrained("tf")
```
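To sanity-check the converted model, you can load it back and run the discriminator head; this assumes vocab.txt and the tokenizer config have already been copied next to the weights (the next step), and the sample sentence is arbitrary:

```python
import torch
from transformers import ElectraForPreTraining, ElectraTokenizerFast

tokenizer = ElectraTokenizerFast.from_pretrained("./dir_with_pytorch")
model = ElectraForPreTraining.from_pretrained("./dir_with_pytorch")

inputs = tokenizer("नमस्ते दुनिया", return_tensors="pt")
with torch.no_grad():
    scores = model(**inputs).logits  # per-token replaced-token-detection scores
print(scores.shape)
```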
Once you have assembled one directory with config.json, pytorch_model.bin, tf_model.h5, special_tokens_map.json, tokenizer_config.json, and vocab.txt at the same level, run:
```bash
transformers-cli upload directory
```