---
language: ko
---
# Pretrained BART in Korean
This is a BART model pretrained on multiple Korean datasets.
I used multiple datasets so that the model generalizes to both colloquial and written text.
The training was supported by the [TPU Research Cloud](https://sites.research.google/trc/) program.
The script used to pre-train the model is [here](https://github.com/cosmoquester/transformers-bart-pretrain).
When you use the inference API, you must wrap the sentence with `[BOS]` and `[EOS]`, as in the example below.
```
[BOS] ์•ˆ๋…•ํ•˜์„ธ์š”? ๋ฐ˜๊ฐ€์›Œ์š”~~ [EOS]
```
You can also test mask-filling performance using the `[MASK]` token, like this.
```
[BOS] [MASK] ๋จน์—ˆ์–ด? [EOS]
```
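For programmatic use, the same wrapping applies. The helper below is a hypothetical sketch (`wrap_sentence` is not part of the released code); the commented `transformers` lines assume the model is published on the Hugging Face Hub under this repository's id.

```python
def wrap_sentence(sentence: str, bos: str = "[BOS]", eos: str = "[EOS]") -> str:
    """Wrap a raw sentence with the BOS/EOS tokens this model expects."""
    return f"{bos} {sentence.strip()} {eos}"

print(wrap_sentence("์•ˆ๋…•ํ•˜์„ธ์š”? ๋ฐ˜๊ฐ€์›Œ์š”~~"))
# [BOS] ์•ˆ๋…•ํ•˜์„ธ์š”? ๋ฐ˜๊ฐ€์›Œ์š”~~ [EOS]

# To feed the wrapped text to the model (assumes the Hub id below exists):
# from transformers import AutoTokenizer, BartForConditionalGeneration
# tokenizer = AutoTokenizer.from_pretrained("cosmoquester/bart-ko-mini")
# model = BartForConditionalGeneration.from_pretrained("cosmoquester/bart-ko-mini")
# inputs = tokenizer(wrap_sentence("[MASK] ๋จน์—ˆ์–ด?"),
#                    return_tensors="pt", add_special_tokens=False)
# print(tokenizer.decode(model.generate(**inputs)[0], skip_special_tokens=True))
```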
## Benchmark
<table>
<tr>
<th style="text-align:center">Dataset</th>
<td style="text-align:center">KLUE NLI dev</th>
<td style="text-align:center">NSMC test</td>
<td style="text-align:center">QuestionPair test</td>
<td colspan="2" style="text-align:center">KLUE TC dev</td>
<td colspan="3" style="text-align:center">KLUE STS dev</td>
<td colspan="3" style="text-align:center">KorSTS dev</td>
<td colspan="2" style="text-align:center">HateSpeech dev</td>
</tr>
<tr>
<th style="text-align:center">Metric</th>
<!-- KLUE NLI -->
<td style="text-align:center">Acc</th>
<!-- NSMC -->
<td style="text-align:center">Acc</td>
<!-- QuestionPair -->
<td style="text-align:center">Acc</td>
<!-- KLUE TC -->
<td style="text-align:center">Acc</td>
<td style="text-align:center">F1</td>
<!-- KLUE STS -->
<td style="text-align:center">F1</td>
<td style="text-align:center">Pearson</td>
<td style="text-align:center">Spearman</td>
<!-- KorSTS -->
<td style="text-align:center">F1</td>
<td style="text-align:center">Pearson</td>
<td style="text-align:center">Spearman</td>
<!-- HateSpeech -->
<td style="text-align:center">Bias Acc</td>
<td style="text-align:center">Hate Acc</td>
</tr>
<tr>
<th style="text-align:center">Score</th>
<!-- KLUE NLI -->
<td style="text-align:center">0.5253</th>
<!-- NSMC -->
<td style="text-align:center">0.8425</td>
<!-- QuestionPair -->
<td style="text-align:center">0.8945</td>
<!-- KLUE TC -->
<td style="text-align:center">0.8047</td>
<td style="text-align:center">0.7988</td>
<!-- KLUE STS -->
<td style="text-align:center">0.7411</td>
<td style="text-align:center">0.7471</td>
<td style="text-align:center">0.7399</td>
<!-- KorSTS -->
<td style="text-align:center">0.7725</td>
<td style="text-align:center">0.6503</td>
<td style="text-align:center">0.6191</td>
<!-- HateSpeech -->
<td style="text-align:center">0.7537</td>
<td style="text-align:center">0.5605</td>
</tr>
</table>
- The performance was measured using [the notebooks here](https://github.com/cosmoquester/transformers-bart-finetune) on Colab.
## Used Datasets
### [๋ชจ๋‘์˜ ๋ง๋ญ‰์น˜](https://corpus.korean.go.kr/)
- ์ผ์ƒ ๋Œ€ํ™” ๋ง๋ญ‰์น˜ 2020
- ๊ตฌ์–ด ๋ง๋ญ‰์น˜
- ๋ฌธ์–ด ๋ง๋ญ‰์น˜
- ์‹ ๋ฌธ ๋ง๋ญ‰์น˜
### AIHub
- [Specialized-Domain Corpus (open data)](https://aihub.or.kr/aidata/30717)
- [Korean Dialogue Summarization (open data)](https://aihub.or.kr/aidata/30714)
- [Emotional Dialogue Corpus (open data)](https://aihub.or.kr/aidata/7978)
- [Korean Speech (open data)](https://aihub.or.kr/aidata/105)
- [Korean SNS (open data)](https://aihub.or.kr/aidata/30718)
### [์„ธ์ข… ๋ง๋ญ‰์น˜](https://ithub.korean.go.kr/)