Aehus committed
Commit 27f7d7f (1 parent: 8ac74ae)

Upload 6 files

README.md ADDED
@@ -0,0 +1,57 @@
---
language:
- vi
metrics:
- f1
pipeline_tag: token-classification
tags:
- transformer
- vietnamese
- nlp
- bert
- deberta
- deberta-v3
---

# ViDeBERTa: A powerful pre-trained language model for Vietnamese

ViDeBERTa is a new pre-trained monolingual language model for Vietnamese, released in three versions - ViDeBERTa_xsmall, ViDeBERTa_base, and ViDeBERTa_large - all pre-trained on 138GB of high-quality and diverse Vietnamese text using the DeBERTaV3 architecture.

Please check the [official repository][github] for more implementation details and updates.

The DeBERTa V3 xsmall model comes with 12 layers and a hidden size of 384. It has only 22M backbone parameters, with a vocabulary containing 128K tokens that introduces 48M parameters in the Embedding layer. This model was trained on the CC100 dataset, which consists of 138 GB of Vietnamese text.

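Below is a minimal usage sketch with the Hugging Face Transformers API. The repo id is a placeholder; note also that this repository stores the config and weights as `model_config.json` and `videberta_xsmall.bin`, so they may need to be renamed to the standard `config.json` / `pytorch_model.bin` before `from_pretrained` can pick them up.

```python
# A minimal usage sketch; "your-namespace/videberta-xsmall" is a placeholder id.
from transformers import AutoTokenizer, AutoModel

repo_id = "your-namespace/videberta-xsmall"  # placeholder, not the real repo id
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModel.from_pretrained(repo_id)

# Encode a Vietnamese sentence and inspect the contextual embeddings.
inputs = tokenizer("Hà Nội là thủ đô của Việt Nam.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (1, sequence_length, 384)
```
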
## Fine-tuning on NLU tasks

We present the dev results on the VLSP POS, PhoNER, and ViQuAD datasets.

| Model | #Params | POS | NER | MRC |
|-----------|-------|------|------|------|
| XLM-R-base | 125M | 96.2 | - | 82.0 |
| XLM-R-large | 355M | 96.3 | 93.8 | 87.0 |
| PhoBERT-base | 135M | 96.7 | - | 80.1 |
| PhoBERT-large | 370M | 96.8 | - | 83.5 |
| ViT5-base | 310M | - | 94.5 | - |
| ViT5-large | 866M | - | 93.8 | - |
| **ViDeBERTa-xsmall** | **22M** | **96.4** | **93.6** | **81.3** |
| ViDeBERTa-base | 86M | 96.8 | 94.5 | 85.7 |
| ViDeBERTa-large | 304M | 97.2 | 95.3 | 89.9 |

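As a hedged sketch (not the exact recipe behind the table above), fine-tuning for a token-classification task such as POS or NER can follow the generic Transformers pattern; the repo id and label count below are placeholders:

```python
# A hypothetical fine-tuning sketch for token classification (POS/NER).
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

repo_id = "your-namespace/videberta-xsmall"  # placeholder, as above
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForTokenClassification.from_pretrained(repo_id, num_labels=9)

inputs = tokenizer("Hà Nội là thủ đô của Việt Nam.", return_tensors="pt")
labels = torch.zeros_like(inputs["input_ids"])  # dummy labels for illustration
loss = model(**inputs, labels=labels).loss
loss.backward()  # plug into your own optimizer loop or the Trainer API
```
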
## Citation

If you find ViDeBERTa useful for your work, please cite the following paper:
```latex
@article{dao2023videberta,
  title={ViDeBERTa: A powerful pre-trained language model for Vietnamese},
  author={Dao Tran, Cong and Pham, Nhut Huy and Nguyen, Anh and Son Hy, Truong and Vu, Tu},
  journal={arXiv e-prints},
  pages={arXiv--2301},
  year={2023}
}
```

[github]: https://github.com/HySonLab/ViDeBERTa

model_config.json ADDED
@@ -0,0 +1,24 @@
{
  "attention_head_size": 64,
  "attention_probs_dropout_prob": 0.1,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 384,
  "initializer_range": 0.02,
  "intermediate_size": 1536,
  "layer_norm_eps": 1e-07,
  "max_position_embeddings": 512,
  "max_relative_positions": -1,
  "model_type": "deberta-v2",
  "norm_rel_ebd": "layer_norm",
  "num_attention_heads": 6,
  "num_hidden_layers": 12,
  "pos_att_type": "p2c|c2p",
  "position_biased_input": false,
  "position_buckets": 256,
  "relative_attention": true,
  "share_att_key": true,
  "type_vocab_size": 0,
  "vocab_size": 128000
}

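As a small sketch, this config can be instantiated directly to build the model skeleton with random weights; it assumes the JSON above is saved locally as `model_config.json`:

```python
# Build an untrained DebertaV2 model from the config above (random weights);
# assumes the JSON is saved locally as model_config.json.
from transformers import DebertaV2Config, DebertaV2Model

config = DebertaV2Config.from_json_file("model_config.json")
model = DebertaV2Model(config)
total = sum(p.numel() for p in model.parameters())
print(f"{total / 1e6:.0f}M parameters")  # backbone + 128K-token embedding matrix
```
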
special_tokens_map.json ADDED
@@ -0,0 +1,9 @@
{
  "bos_token": "[CLS]",
  "cls_token": "[CLS]",
  "eos_token": "[SEP]",
  "mask_token": "[MASK]",
  "pad_token": "[PAD]",
  "sep_token": "[SEP]",
  "unk_token": "[UNK]"
}
tokenizer.json ADDED
The diff for this file is too large to render.
 
tokenizer_config.json ADDED
@@ -0,0 +1,16 @@
{
  "bos_token": "[CLS]",
  "cls_token": "[CLS]",
  "do_lower_case": false,
  "eos_token": "[SEP]",
  "mask_token": "[MASK]",
  "name_or_path": "microsoft/deberta-v3-xsmall",
  "pad_token": "[PAD]",
  "sep_token": "[SEP]",
  "sp_model_kwargs": {},
  "special_tokens_map_file": null,
  "split_by_punct": false,
  "tokenizer_class": "DebertaV2Tokenizer",
  "unk_token": "[UNK]",
  "vocab_type": "spm"
}
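
A minimal loading sketch, assuming `tokenizer.json`, `tokenizer_config.json`, and `special_tokens_map.json` sit together in the current working directory:

```python
# Load the tokenizer from the local files above; "." assumes they are all in
# the current working directory. AutoTokenizer picks the fast tokenizer
# backed by tokenizer.json.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(".")
ids = tokenizer("Xin chào Việt Nam!")["input_ids"]
print(tokenizer.convert_ids_to_tokens(ids))  # [CLS] ... tokens ... [SEP]
```
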
videberta_xsmall.bin ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:4c828b4b5051d1d4e0a14cadafda8c19bda4a9b17f488e36d56720487cf252cc
size 240707259
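
This file is a Git LFS pointer rather than the weights themselves; the actual ~240 MB checkpoint is fetched from LFS storage (e.g., via `git lfs pull`). A quick sketch for verifying a downloaded copy against the pointer's recorded digest and size:

```python
# Verify a downloaded videberta_xsmall.bin against the pointer's sha256 and size.
import hashlib
import os

expected_sha = "4c828b4b5051d1d4e0a14cadafda8c19bda4a9b17f488e36d56720487cf252cc"
path = "videberta_xsmall.bin"

sha = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):  # stream in 1 MiB chunks
        sha.update(chunk)

print(os.path.getsize(path) == 240707259)  # size matches the pointer
print(sha.hexdigest() == expected_sha)     # digest matches the pointer
```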