# dv-wave

This is a first attempt at a Dhivehi language model trained with Google Research's [ELECTRA](https://github.com/google-research/electra).

Tokenization and training CoLab: https://colab.research.google.com/drive/1ZJ3tU9MwyWj6UtQ-8G7QJKTn-hG1uQ9v?usp=sharing

## Corpus

Trained on @Sofwath's 307MB corpus of Dhivehi news: https://github.com/Sofwath/DhivehiDatasets

[OSCAR](https://oscar-corpus.com/) was also considered; as of this writing, its web crawl contains 126MB of Dhivehi text (79MB deduplicated).

## Vocabulary

The vocabulary is included as vocab.txt in the upload; vocab_size is 29982.
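The stated vocab_size can be sanity-checked against vocab.txt directly, since WordPiece vocabulary files list one token per line. A minimal sketch (the filename `vocab.txt` refers to the file from this upload; the helper name is illustrative):

```python
def count_vocab_tokens(path: str) -> int:
    """Count tokens in a WordPiece vocab file (one token per line)."""
    with open(path, encoding="utf-8") as f:
        return sum(1 for line in f if line.strip())

# For this upload's vocab.txt, this should return 29982.
```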