Trofish committed 8e85059 (parent: 03056cd): Update README.md

Files changed: README.md (+42, −5)
---
license: apache-2.0
datasets:
- klue/klue
language:
- ko
metrics:
- f1
- accuracy
- pearsonr
---
# RoBERTa-base Korean

## Model Description
 
- **Architecture**: RobertaForMaskedLM
- **Model size**: hidden size 256, 8 hidden layers, 8 attention heads
- **max_position_embeddings**: 514
- **intermediate_size**: 2,048
- **vocab_size**: 1,428
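
The bullets above map directly onto a `transformers` `RobertaConfig`. A minimal sketch, assuming all other fields keep the library defaults (the repo's actual `config.json` is authoritative):

```python
from transformers import RobertaConfig, RobertaForMaskedLM

# Hyperparameters taken from the list above; everything else is a transformers default.
config = RobertaConfig(
    vocab_size=1428,
    hidden_size=256,
    num_hidden_layers=8,
    num_attention_heads=8,
    intermediate_size=2048,
    max_position_embeddings=514,
)
model = RobertaForMaskedLM(config)  # randomly initialized; the trained weights live in this repo
```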

## Training Data
The following datasets were used:
 
- **AIHUB**: SNS, YouTube comments, book sentences
- **Other**: Namuwiki, Korean Wikipedia

The combined data totals **about 11GB** (**4B tokens**).
 
## Training Details
- **BATCH_SIZE**: 112 (per GPU)
- **ACCUMULATE**: 36
- **Total_BATCH_SIZE**: 8,064
- **MAX_STEPS**: 12,500
- **TRAIN_STEPS * BATCH_SIZE**: **100M**
- **WARMUP_STEPS**: 2,400
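
The numbers above are mutually consistent: effective batch size is per-GPU batch × gradient-accumulation steps × GPU count. A quick sanity check (the GPU count is an assumption inferred from the totals; the README does not state it):

```python
# Sanity-check the effective batch size and total training sequences.
per_gpu_batch = 112   # BATCH_SIZE per GPU
accumulate = 36       # gradient accumulation steps
num_gpus = 8064 // (112 * 36)  # inferred from Total_BATCH_SIZE: 2 (assumption)

total_batch = per_gpu_batch * accumulate * num_gpus
print(total_batch)  # 8064

max_steps = 12_500
print(total_batch * max_steps)  # 100800000, i.e. ~100M sequences
```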
 
![Evaluation Loss Graph](https://cdn-uploads.huggingface.co/production/uploads/64a0fd6fd3149e05bc5260dd/-64jKdcJAavwgUREwaywe.png)
![Evaluation Accuracy Graph](https://cdn-uploads.huggingface.co/production/uploads/64a0fd6fd3149e05bc5260dd/LPq5M6S8LTwkFSCepD33S.png)

## Performance Evaluation
- **Performance was evaluated on the KLUE benchmark test sets.**
- Although its much smaller size makes its performance lower than klue-roberta-base, the model with hidden size 512 showed good performance relative to its size.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64a0fd6fd3149e05bc5260dd/I8e60cf9w-IQCHDgKiooq.png)
![image/png](https://cdn-uploads.huggingface.co/production/uploads/64a0fd6fd3149e05bc5260dd/hkc5ko9Vo-pkKmtouN7xc.png)

## Usage
The tokenizer works at the syllable level rather than on wordpieces, so you must use `SyllableTokenizer` instead of `AutoTokenizer` (import it from the `syllabletokenizer.py` provided in this repo).

```python
from transformers import AutoModel, AutoTokenizer
from syllabletokenizer import SyllableTokenizer

# ... (model loading and tokenizer_kwargs setup elided in the diff)
tokenizer = SyllableTokenizer(vocab_file='vocab.json', **tokenizer_kwargs)

# Convert text to tokens and run a prediction
inputs = tokenizer("여기에 한국어 텍스트 입력", return_tensors="pt")
outputs = model(**inputs)
```
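
For intuition on why `AutoTokenizer` does not apply here: syllable-level tokenization treats each Hangul syllable as one token instead of merging wordpieces. A toy illustration (this is not the repo's `SyllableTokenizer`, just a sketch of the idea; real tokenizers also handle special tokens and vocabulary lookup):

```python
def syllable_tokenize(text: str) -> list[str]:
    # Each Hangul syllable (and any other non-space character) becomes one token.
    return [ch for ch in text if not ch.isspace()]

print(syllable_tokenize("한국어 텍스트"))  # ['한', '국', '어', '텍', '스', '트']
```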

## Citation
**KLUE**
```
@misc{park2021klue,
      title={KLUE: Korean Language Understanding Evaluation},
      author={Sungjoon Park and Jihyung Moon and Sungdong Kim and Won Ik Cho and Jiyoon Han and Jangwon Park and Chisung Song and Junseong Kim and Yongsook Song and Taehwan Oh and Joohong Lee and Juhyun Oh and Sungwon Lyu and Younghoon Jeong and Inkwon Lee and Sangwoo Seo and Dongjun Lee and Hyunwoo Kim and Myeonghwa Lee and Seongbo Jang and Seungwon Do and Sunkyoung Kim and Kyungtae Lim and Jongwon Lee and Kyumin Park and Jamin Shin and Seonghyun Kim and Lucy Park and Alice Oh and Jungwoo Ha and Kyunghyun Cho},
      year={2021},
      eprint={2105.09680},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```