Update README.md
README.md
CHANGED
@@ -2,10 +2,13 @@
 language: my
 tags:
 - MyanBERTa
+- Myanmar
+- BERT
+- RoBERTa
 license: apache-2.0
 datasets:
 - MyCorpus
-- blogs and websites
+- publicly available blogs and websites
 ---
 
 ## Model description
@@ -14,3 +17,9 @@ This model is a BERT based Myanmar pre-trained language model.
 MyanBERTa has been pre-trained for 528K steps on a word segmented Myanmar dataset consisting of 5,992,299 sentences (136M words).
 As the tokenizer, a byte-level BPE tokenizer of 30,522 subword units, learned after word segmentation, is applied.
 
+```
+Contributed by:
+Aye Mya Hlaing
+Win Pa Pa
+```
+
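
For context, a minimal sketch of loading the model described in this README with Hugging Face `transformers`. The repository ID used below is an assumption (the diff does not state it); substitute the actual model ID from the Hub.

```python
# Minimal sketch of loading the pre-trained model described above.
# The repository ID is an assumption, not stated in the diff; replace it
# with the actual Hub ID for MyanBERTa.
from transformers import AutoModelForMaskedLM, AutoTokenizer

model_id = "UCSYNLP/MyanBERTa"  # assumed Hub repository ID

# The byte-level BPE tokenizer (30,522 subword units) ships with the model.
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)

# Input text should be word-segmented first, matching the pre-training setup.
inputs = tokenizer("segmented Myanmar text", return_tensors="pt")
outputs = model(**inputs)
```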