KPMGhyesukim committed on
Commit
90d865e
•
2 Parent(s): ce7ee89 f194497

Merge branch 'main' of https://huggingface.co/lighthouse/mdeberta-v3-base-kor-further into main

Files changed (2)
  1. README.md +106 -0
  2. added_tokens.json +0 -1
README.md CHANGED
@@ -1,3 +1,109 @@
---
language:
- multilingual
- en
- ko
- ar
- bg
- de
- el
- es
- fr
- hi
- ru
- sw
- th
- tr
- ur
- vi
- zh
tags:
- deberta
- deberta-v3
- mdeberta
license: mit
---

# mDeBERTa-v3-base-kor-further

> 💡 This project was developed by KPMG Lighthouse Korea.
> At KPMG Lighthouse Korea, we build NLP and Vision AI models with cutting-edge technology to solve a wide range of problems in the financial domain.

## What is DeBERTa?
- [DeBERTa](https://arxiv.org/abs/2006.03654) learns the positional information of words effectively by combining `Disentangled Attention` with an `Enhanced Mask Decoder`. Unlike the absolute position embeddings used in BERT and RoBERTa, DeBERTa represents the relative position of each word as a learnable vector during training (a rough sketch of the resulting attention score follows this list). As a result, it showed stronger performance than BERT and RoBERTa.
- [DeBERTa-v3](https://arxiv.org/abs/2111.09543) improves training efficiency by adopting an ELECTRA-style pre-training scheme, which replaces the MLM (Masked Language Modeling) objective used in earlier versions with an RTD (Replaced Token Detection) task, together with Gradient-Disentangled Embedding Sharing.
- To train the DeBERTa architecture on a rich amount of Korean data, `mDeBERTa-v3-base-kor-further` **further pre-trains** Microsoft's `mDeBERTa-v3-base` on about 40GB of Korean data.

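The sketch below is a minimal NumPy illustration (added for this card, not the model's actual implementation) of the disentangled attention score from the DeBERTa paper: the score between positions i and j is the sum of a content-to-content, a content-to-position, and a position-to-content term, with relative distances clipped to a window and looked up from a learnable relative-position embedding table. All variable names and the `max_rel` window size are assumptions made for the example.

```python
import numpy as np

def disentangled_attention_scores(H, Wq_c, Wk_c, Wq_r, Wk_r, rel_emb, max_rel):
    """Toy single-head disentangled attention: content-to-content +
    content-to-position + position-to-content, scaled by sqrt(3d)."""
    L, d = H.shape
    Qc, Kc = H @ Wq_c, H @ Wk_c              # content projections
    Qr, Kr = rel_emb @ Wq_r, rel_emb @ Wk_r  # relative-position projections, shape (2*max_rel, d)
    idx = np.arange(L)
    # delta[i, j]: relative distance i - j, shifted and clipped into [0, 2*max_rel - 1]
    delta = np.clip(idx[:, None] - idx[None, :] + max_rel, 0, 2 * max_rel - 1)
    c2c = Qc @ Kc.T                                       # content -> content
    c2p = np.take_along_axis(Qc @ Kr.T, delta, axis=1)    # content -> position
    p2c = np.take_along_axis(Kc @ Qr.T, delta, axis=1).T  # position -> content
    return (c2c + c2p + p2c) / np.sqrt(3 * d)

# Tiny demo with random tensors (illustration only)
rng = np.random.default_rng(0)
L, d, k = 5, 8, 2
H = rng.normal(size=(L, d))
Ws = [rng.normal(size=(d, d)) for _ in range(4)]
rel = rng.normal(size=(2 * k, d))
print(disentangled_attention_scores(H, *Ws, rel, max_rel=k).shape)  # (5, 5)
```
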
## How to Use
- Requirements
```
pip install transformers
pip install sentencepiece
```
- Huggingface Hub
```python
from transformers import AutoModel, AutoTokenizer

# The repository id follows the merge URL above (lighthouse/mdeberta-v3-base-kor-further).
model = AutoModel.from_pretrained("lighthouse/mdeberta-v3-base-kor-further")          # DebertaV2Model
tokenizer = AutoTokenizer.from_pretrained("lighthouse/mdeberta-v3-base-kor-further")  # DebertaV2Tokenizer (SentencePiece)
```

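A short usage sketch, added for this card: it encodes an arbitrary Korean sentence and reads out the final hidden states. The repository id and the example sentence are assumptions, not part of the original instructions.

```python
import torch
from transformers import AutoModel, AutoTokenizer

model_id = "lighthouse/mdeberta-v3-base-kor-further"  # assumed from the repository URL above
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModel.from_pretrained(model_id)

# Tokenize a Korean sentence and run a forward pass without gradients
inputs = tokenizer("한국어 문장을 입력해 보세요.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

print(outputs.last_hidden_state.shape)  # (1, sequence_length, 768)
```
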
## Pre-trained Models
- The architecture of the model is identical to `mdeberta-v3-base` released by Microsoft.

| | Vocabulary(K) | Backbone Parameters(M) | Hidden Size | Layers | Note |
| --- | --- | --- | --- | --- | --- |
| mdeberta-v3-base-kor-further (same as mdeberta-v3-base) | 250 | 86 | 768 | 12 | 250K new SPM vocab |

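As a quick sanity check of the numbers in the table, the configuration and tokenizer can be inspected directly from the checkpoint. This snippet is added for illustration; the repository id is assumed from the merge URL above.

```python
from transformers import AutoConfig, AutoTokenizer

model_id = "lighthouse/mdeberta-v3-base-kor-further"  # assumed repository id
config = AutoConfig.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

print(config.hidden_size)        # 768
print(config.num_hidden_layers)  # 12
print(len(tokenizer))            # roughly 250K SentencePiece vocabulary entries
```
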
## Further Pretraining Details (MLM Task)
- `mDeBERTa-v3-base-kor-further` was further pre-trained from `microsoft/mDeBERTa-v3-base` on about 40GB of Korean data using the MLM task.

| | Max length | Learning Rate | Batch Size | Train Steps | Warm-up Steps |
| --- | --- | --- | --- | --- | --- |
| mdeberta-v3-base-kor-further | 512 | 2e-5 | 8 | 5M | 50k |

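A minimal sketch of how such MLM further pre-training could be reproduced with the Hugging Face `Trainer` is shown below. This is not the authors' training script: the corpus file name, masking probability, and everything other than the hyperparameters in the table are assumptions.

```python
from datasets import load_dataset
from transformers import (AutoModelForMaskedLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

base_id = "microsoft/mdeberta-v3-base"
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForMaskedLM.from_pretrained(base_id)

# Hypothetical Korean corpus: one document per line in a plain-text file.
dataset = load_dataset("text", data_files={"train": "korean_corpus.txt"})["train"]
dataset = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True, remove_columns=["text"])

collator = DataCollatorForLanguageModeling(tokenizer, mlm=True, mlm_probability=0.15)

args = TrainingArguments(
    output_dir="mdeberta-v3-base-kor-further",
    per_device_train_batch_size=8,  # batch size from the table
    learning_rate=2e-5,             # learning rate from the table
    max_steps=5_000_000,            # 5M train steps from the table
    warmup_steps=50_000,            # 50k warm-up steps from the table
)

Trainer(model=model, args=args, train_dataset=dataset, data_collator=collator).train()
```
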
## Datasets
- About 40GB of Korean data, including the Modu Corpus (newspaper, spoken, and written text), Korean Wikipedia, and National Petition texts, was used for the further pre-training.
- Train: 10M lines, 5B tokens
- Valid: 2M lines, 1B tokens
- cf.) The original mDeBERTa-v3 was, like XLM-R, trained on the [CC-100 dataset](https://data.statmt.org/cc-100/), of which the Korean portion is 54GB.


## Fine-tuning on NLU Tasks - Base Model
| Model | Size | NSMC(acc) | Naver NER(F1) | PAWS (acc) | KorNLI (acc) | KorSTS (spearman) | Question Pair (acc) | KorQuaD (Dev) (EM/F1) | Korean-Hate-Speech (Dev) (F1) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| XLM-Roberta-Base | 1.03G | 89.03 | 86.65 | 82.80 | 80.23 | 78.45 | 93.80 | 64.70 / 88.94 | 64.06 |
| mdeberta-base | 534M | 90.01 | 87.43 | 85.55 | 80.41 | **82.65** | 94.06 | 65.48 / 89.74 | 62.91 |
| mdeberta-base-kor-further (Ours) | 534M | **90.52** | **87.87** | **85.85** | **80.65** | 81.90 | **94.98** | **66.07 / 90.35** | **68.16** |

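As an illustration of how downstream scores like these could be produced, here is a minimal fine-tuning sketch for a binary sentence classification task such as NSMC. The dataset id, hyperparameters, and sequence length are assumptions, not the exact evaluation setup behind the table.

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_id = "lighthouse/mdeberta-v3-base-kor-further"  # assumed repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id, num_labels=2)

# NSMC: Korean movie-review sentiment dataset with "document" and "label" columns
# (assumed to be available on the Hub as "nsmc")
nsmc = load_dataset("nsmc")
encoded = nsmc.map(
    lambda batch: tokenizer(batch["document"], truncation=True, max_length=128),
    batched=True)

args = TrainingArguments(output_dir="nsmc-finetune",
                         per_device_train_batch_size=32,
                         learning_rate=3e-5,
                         num_train_epochs=3)

Trainer(model=model, args=args,
        train_dataset=encoded["train"],
        eval_dataset=encoded["test"]).train()
```
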
## Citation
```
@misc{he2021debertav3,
    title={DeBERTaV3: Improving DeBERTa using ELECTRA-Style Pre-Training with Gradient-Disentangled Embedding Sharing},
    author={Pengcheng He and Jianfeng Gao and Weizhu Chen},
    year={2021},
    eprint={2111.09543},
    archivePrefix={arXiv},
    primaryClass={cs.CL}
}
```

```
@inproceedings{he2021deberta,
    title={DEBERTA: DECODING-ENHANCED BERT WITH DISENTANGLED ATTENTION},
    author={Pengcheng He and Xiaodong Liu and Jianfeng Gao and Weizhu Chen},
    booktitle={International Conference on Learning Representations},
    year={2021},
    url={https://openreview.net/forum?id=XPZIaotutsD}
}
```

## Reference
- [DeBERTa](https://github.com/microsoft/DeBERTa)
- [Huggingface Transformers](https://github.com/huggingface/transformers)
- [Modu Corpus (모두의 말뭉치)](https://corpus.korean.go.kr/)
- [Korpora: Korean Corpora Archives](https://github.com/ko-nlp/Korpora)
- [sooftware/Korean PLM](https://github.com/sooftware/Korean-PLM)
added_tokens.json DELETED
@@ -1 +0,0 @@
- {"[MASK]": 250101}