MuRIL - Unofficial

Multilingual Representations for Indian Languages (MuRIL): Google open-sourced this BERT model pre-trained on 17 Indian languages and their transliterated counterparts.

The model was trained on a self-supervised masked language modeling (MLM) task, using whole-word masking with a maximum of 80 predictions per sequence. Training ran for 1000K steps with a batch size of 4096 and a maximum sequence length of 512.
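As a rough illustration of the whole-word masking scheme, here is a minimal sketch using the Transformers DataCollatorForWholeWordMask with the checkpoint linked below. This only mirrors the masking behavior for demonstration; MuRIL's actual pre-training used Google's TensorFlow pipeline, and the masking probability here (0.15, the standard BERT default) is an assumption, not a value stated for MuRIL.

```python
# Sketch of whole-word masking; not the original MuRIL training code.
from transformers import AutoTokenizer, DataCollatorForWholeWordMask

tokenizer = AutoTokenizer.from_pretrained("simran-kh/muril-with-mlm-cased-temp")
# mlm_probability=0.15 is the BERT default, assumed here for illustration.
collator = DataCollatorForWholeWordMask(tokenizer=tokenizer, mlm_probability=0.15)

# Encode a sentence; all sub-word pieces of a chosen word are masked together.
input_ids = tokenizer("MuRIL supports Indian languages.")["input_ids"]
batch = collator([input_ids])

print(batch["input_ids"])  # ids with some whole words replaced by [MASK]
print(batch["labels"])     # original ids at masked positions, -100 elsewhere
```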

Original model and details: https://tfhub.dev/google/MuRIL/1

MLM-specific model hosted on HuggingFace: https://huggingface.co/simran-kh/muril-with-mlm-cased-temp

License: Apache 2.0

About this upload

I ported the TFHub .pb model to .h5 and then to pytorch_model.bin for compatibility with Transformers.
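To verify the ported checkpoint loads and works with Transformers, a quick fill-mask check along these lines should suffice (the example sentence is illustrative only; the model's mask token is [MASK]):

```python
# Load the ported checkpoint and run a fill-mask prediction.
from transformers import AutoTokenizer, AutoModelForMaskedLM, pipeline

model_name = "simran-kh/muril-with-mlm-cased-temp"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)

# Print the top candidate tokens for the masked position.
for prediction in fill_mask("India is a [MASK] country."):
    print(prediction["token_str"], round(prediction["score"], 4))
```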
