
# Bangla-Electra

This is a first attempt at a Bangla/Bengali language model trained with Google Research's ELECTRA.

Tokenization and pre-training Colab notebook: https://colab.research.google.com/drive/1gpwHvXAnNQaqcu-YNx1kafEVxz07g2jL

The current V1 model has been trained for 120,000 steps.

## Corpus

Trained on a web crawl from https://oscar-corpus.com/ (deduplicated version, 5.8GB) and the 1 July 2020 dump of bn.wikipedia.org (414MB).

## Vocabulary

Included as vocab.txt in the upload; the vocabulary size (vocab_size) is 29,898.
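For intuition about how vocab.txt is used, below is a minimal sketch of greedy longest-match WordPiece tokenization, the subword scheme ELECTRA inherits from BERT. The tiny English vocabulary here is purely illustrative (the real model uses the 29,898-entry Bangla vocab.txt, one token per line, with subword continuations prefixed by `##`); in practice you would load the tokenizer via the transformers library rather than reimplement this.

```python
# Toy vocabulary; hypothetical stand-in for the model's vocab.txt.
VOCAB = {"[UNK]", "play", "##ing", "##ed", "un", "##play"}

def wordpiece(word, vocab=VOCAB, max_chars=100):
    """Split one whitespace-separated word into subword tokens
    using greedy longest-prefix matching against the vocabulary."""
    if len(word) > max_chars:
        return ["[UNK]"]
    tokens, start = [], 0
    while start < len(word):
        end = len(word)
        match = None
        # Take the longest substring starting at `start` that is in the
        # vocabulary; non-initial pieces carry the "##" continuation prefix.
        while start < end:
            piece = word[start:end]
            if start > 0:
                piece = "##" + piece
            if piece in vocab:
                match = piece
                break
            end -= 1
        if match is None:  # no subword matches -> whole word is unknown
            return ["[UNK]"]
        tokens.append(match)
        start = end
    return tokens

print(wordpiece("playing"))   # ['play', '##ing']
print(wordpiece("unplayed"))  # ['un', '##play', '##ed']
```

With the real vocab.txt loaded into a set, the same greedy procedure (applied per whitespace-separated word) approximates how the pre-trained tokenizer splits Bangla text.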