This is a second attempt at a Bangla/Bengali language model trained with Google Research's ELECTRA.
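
As a quick-start sketch, the checkpoint can be loaded with the Hugging Face transformers library. The hub id monsoon-nlp/bangla-electra below is an assumption, not something this card confirms; substitute the actual model id if it differs.

```python
# Minimal loading sketch with Hugging Face transformers.
# The hub id "monsoon-nlp/bangla-electra" is an assumption.
from transformers import ElectraModel, ElectraTokenizerFast

model_id = "monsoon-nlp/bangla-electra"
tokenizer = ElectraTokenizerFast.from_pretrained(model_id)
model = ElectraModel.from_pretrained(model_id)

# Encode a short Bangla sentence and inspect the contextual embeddings.
inputs = tokenizer("আমি বাংলায় গান গাই", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, sequence_length, hidden_size)
```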

Tokenization and pre-training Colab:

V1 - 120,000 steps; V2 - 190,000 steps
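
The Colab itself is not reproduced here. As a rough sketch of the tokenization step, a WordPiece vocabulary of the size shipped with this model could be built with the Hugging Face tokenizers library; the corpus path and options below are assumptions, not the actual pre-training configuration.

```python
# Hypothetical sketch of building a WordPiece vocab like the shipped one;
# "bangla_corpus.txt" is a placeholder path and the options are assumptions.
from tokenizers import BertWordPieceTokenizer

tokenizer = BertWordPieceTokenizer(lowercase=False)
tokenizer.train(
    files=["bangla_corpus.txt"],
    vocab_size=29898,  # matches the vocab.txt size noted below
    min_frequency=2,
    special_tokens=["[PAD]", "[UNK]", "[CLS]", "[SEP]", "[MASK]"],
)
tokenizer.save_model(".")  # writes vocab.txt to the current directory
```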


Classification with SimpleTransformers:

On Soham Chatterjee's news classification task: random baseline 16.7%, mBERT 72.3%, Bangla-Electra 82.3%

Performance is similar to mBERT on some of the other tasks and configurations evaluated.
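
A minimal fine-tuning sketch with SimpleTransformers is below. The hub id and the toy DataFrame are assumptions for illustration; the actual notebook's arguments may differ.

```python
# Hedged sketch: news classification fine-tuning with SimpleTransformers.
import pandas as pd
from simpletransformers.classification import ClassificationModel

# Placeholder training data; real rows would be Bangla articles with labels.
train_df = pd.DataFrame(
    [["<Bangla headline 1>", 0], ["<Bangla headline 2>", 1]],
    columns=["text", "labels"],
)

model = ClassificationModel(
    "electra",
    "monsoon-nlp/bangla-electra",  # assumed hub id
    num_labels=6,                  # six classes, consistent with the 16.7% random baseline
    args={"num_train_epochs": 3, "overwrite_output_dir": True},
    use_cuda=False,                # set True if a GPU is available
)
model.train_model(train_df)
predictions, raw_outputs = model.predict(["<new Bangla headline>"])
```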

Question Answering

This model can be used for question answering; an accompanying notebook evaluates it on Bangla questions from Google's TyDi QA dataset.
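
As a rough sketch, a QA-fine-tuned descendant of this checkpoint could be queried through a transformers pipeline. The model path below is a placeholder for such a fine-tuned checkpoint, since the base upload is not itself a QA model.

```python
# Hedged sketch: extractive QA via the transformers pipeline. The base
# ELECTRA checkpoint must first be fine-tuned on a QA dataset (e.g. the
# Bangla portion of TyDi QA); the model path is a placeholder.
from transformers import pipeline

qa = pipeline("question-answering", model="path/to/bangla-electra-qa")
result = qa(
    question="<a Bangla question>",
    context="<a Bangla passage that contains the answer>",
)
print(result["answer"], result["score"])
```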


Corpus

Trained on a deduplicated web crawl (5.8 GB) and a 1 July 2020 Wikipedia dump (414 MB)


Vocabulary

Included as vocab.txt in the upload; vocab_size is 29898
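
A small sketch for verifying the vocabulary size, assuming vocab.txt has been downloaded locally and using the hub id guessed above:

```python
# Count entries in the shipped vocab.txt; this should print 29898.
with open("vocab.txt", encoding="utf-8") as f:
    print(sum(1 for _ in f))

# The same size is exposed by the tokenizer (hub id is an assumption).
from transformers import ElectraTokenizerFast

tokenizer = ElectraTokenizerFast.from_pretrained("monsoon-nlp/bangla-electra")
print(tokenizer.vocab_size)  # 29898
```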
