lalital committed on
Commit f012b3e
1 Parent(s): 0bdbe15

Add model card

Files changed (1):
  1. README.md +95 -1

# `wangchanberta-base-wiki-spm`

Pretrained RoBERTa BASE model on Thai Wikipedia corpus.

<br>

## Model description

<br>

The architecture of the pretrained model is based on RoBERTa [[Liu et al., 2019]](https://arxiv.org/abs/1907.11692).

<br>

## Intended uses & limitations

<br>

You can use the pretrained model for masked language modeling (i.e. predicting a masked token in the input text). In addition, we also provide finetuned models for multiclass/multilabel text classification and token classification tasks.
22
+
23
+ <br>
24
+
25
+ **Multiclass text classification**
26
+
27
+
28
+ - `wisesight_sentiment`
29
+
30
+ 4-class text classification task (`positive`, `neutral`, `negative`, and `question`) based on social media posts and tweets.
31
+
32
+ - `wongnai_reivews`
33
+
34
+ Users' review rating classification task (scale is ranging from 1 to 5)
35
+
36
+ - `generated_reviews_enth` : (`review_star` as label)
37
+
38
+ Generated users' review rating classification task (scale is ranging from 1 to 5).
39
+
40
+ **Multilabel text classification**
41
+
42
+ - `prachathai67k`
43
+
44
+ Thai topic classification with 12 labels based on news article corpus from prachathai.com. The detail is described in this [page](https://huggingface.co/datasets/prachathai67k).
45
+
46
+
47
+
48
+
49
+ **Token classification**
50
+
51
+ - `thainer`
52
+
53
+ Named-entity recognition tagging with 13 named-entities as descibed in this [page](https://huggingface.co/datasets/thainer).
54
+
55
+ - `lst20` : NER NER and POS tagging
56
+
57
+ Named-entity recognition tagging with 10 named-entities and Part-of-Speech tagging with 16 tags as descibed in this [page](https://huggingface.co/datasets/lst20).
58
+
59
+ <br>

## How to use

<br>
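
Below is a minimal sketch of masked language modeling with this checkpoint, assuming it is hosted on the Hugging Face Hub as `airesearch/wangchanberta-base-wiki-spm` (adjust the identifier to wherever the model actually lives) and that the tokenizer's mask token is `<mask>`:

```python
from transformers import pipeline

# Assumed Hub identifier for this checkpoint; adjust if hosted elsewhere.
model_name = "airesearch/wangchanberta-base-wiki-spm"

fill_mask = pipeline("fill-mask", model=model_name)

# Mirror the training preprocessing described below: spaces in the input
# are represented with the special "<_>" token.
for prediction in fill_mask("ผมชอบ<mask>ภาษาไทย"):
    print(prediction["token_str"], prediction["score"])
```

Each prediction is a dictionary containing the filled token and its probability; the finetuned models can be loaded the same way with the `text-classification` or `token-classification` pipelines.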

<br>

## Training data

The `wangchanberta-base-wiki-spm` model was pretrained on Thai Wikipedia. Specifically, we use the articles from the Wikipedia dump of 20 August 2020 (dumps.wikimedia.org/thwiki/20200820/). Lists and tables are excluded.

### Preprocessing

Texts are preprocessed with the following rules, sketched in code after the list:

- Replace the non-breaking space, zero-width non-breaking space, and soft hyphen with spaces.
- Remove empty parentheses that occur right after the title of the first paragraph.
- Replace spaces with `<_>`.
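
A minimal Python sketch of these rules (the function name and the empty-parenthesis pattern are illustrative, not the exact cleaning script):

```python
import re

def preprocess(text: str) -> str:
    """Illustrative sketch of the cleaning rules listed above."""
    # Rule 1: map non-breaking space (U+00A0), zero-width non-breaking
    # space (U+FEFF), and soft hyphen (U+00AD) to ordinary spaces.
    for ch in ("\u00a0", "\ufeff", "\u00ad"):
        text = text.replace(ch, " ")
    # Rule 2: drop an empty parenthesis left after the first paragraph's title.
    text = re.sub(r"\(\s*\)", "", text, count=1)
    # Rule 3: represent spaces with the special "<_>" token.
    return text.replace(" ", "<_>")
```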

<br>

Regarding the vocabulary, we use a subword tokenizer trained with the [SentencePiece](https://github.com/google/sentencepiece) library on the training set of the Thai Wikipedia corpus. The total number of subword tokens is 24,000.
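
Training such a vocabulary with the SentencePiece Python bindings looks roughly like the sketch below; the input file name and the unigram model type are assumptions, while the 24,000 vocabulary size is the figure stated above:

```python
import sentencepiece as spm

# Train a subword model on the Thai Wikipedia training split
# (file name and model type are assumptions).
spm.SentencePieceTrainer.train(
    input="thwiki_train.txt",
    model_prefix="thwiki_spm",
    vocab_size=24000,
    model_type="unigram",
)

sp = spm.SentencePieceProcessor(model_file="thwiki_spm.model")
print(sp.encode("สวัสดีครับ", out_type=str))  # subword pieces
```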

We sample sentences contiguously so that each training sequence contains at most 512 tokens. When a sentence overlaps the 512-token boundary, we split it, and an additional token is used as a document separator. This is the same approach as proposed by [[Liu et al., 2019]](https://arxiv.org/abs/1907.11692) (called "FULL-SENTENCES").
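
A rough sketch of this packing scheme (the separator id and all names are illustrative):

```python
def pack_sequences(docs, max_len=512, sep_id=2):
    """Greedily pack tokenized sentences into contiguous sequences of at
    most max_len tokens, splitting sentences at the boundary and adding
    an extra separator token between documents (FULL-SENTENCES sketch)."""
    sequences, buffer = [], []
    for doc in docs:                 # each doc: list of tokenized sentences
        for sentence in doc:         # each sentence: list of token ids
            for token in sentence:
                buffer.append(token)
                if len(buffer) == max_len:
                    sequences.append(buffer)
                    buffer = []      # remainder of the sentence spills over
        if buffer:
            buffer.append(sep_id)    # document boundary marker
            if len(buffer) == max_len:
                sequences.append(buffer)
                buffer = []
    if buffer:
        sequences.append(buffer)     # final, possibly shorter, sequence
    return sequences
```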

The details of the masking procedure for each sequence follow RoBERTa's dynamic masking: 15% of the tokens are selected for masking; of these, 80% are replaced with the mask token, 10% are replaced with a random token, and 10% are left unchanged.
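
In the Hugging Face `transformers` library this kind of masking is typically implemented with `DataCollatorForLanguageModeling`; a sketch, assuming the tokenizer is available under the Hub identifier used earlier:

```python
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

# Assumed Hub identifier; 0.15 is the masking rate described above.
tokenizer = AutoTokenizer.from_pretrained("airesearch/wangchanberta-base-wiki-spm")
collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer, mlm=True, mlm_probability=0.15
)

batch = collator([tokenizer("ตัวอย่าง<_>ประโยค<_>ภาษาไทย")])
print(batch["input_ids"])  # some tokens replaced by the mask id
print(batch["labels"])     # original ids at masked positions, -100 elsewhere
```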

**Train/Val/Test splits**

We sequentially split the data into 944,782 sentences for the training set, 24,863 sentences for the validation set, and 24,862 sentences for the test set.
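
Because the split is sequential rather than shuffled, it amounts to simple slicing; assuming `sentences` holds the preprocessed sentences in corpus order:

```python
# `sentences`: preprocessed sentences in corpus order (assumed defined).
n_train, n_valid, n_test = 944_782, 24_863, 24_862

train = sentences[:n_train]
valid = sentences[n_train:n_train + n_valid]
test = sentences[n_train + n_valid:n_train + n_valid + n_test]
```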

<br>