---
language: ko
---

# Pretrained BART in Korean

This is a BART model pretrained on multiple Korean datasets. I used multiple datasets to generalize the model to both colloquial and written text. The training was supported by the [TPU Research Cloud](https://sites.research.google/trc/) program. The script used to pre-train the model is [here](https://github.com/cosmoquester/transformers-bart-pretrain).

When you use the inference API, you must wrap the sentence with `[BOS]` and `[EOS]` as in the example below.

```
[BOS] 안녕하세요? 반가워요~~ [EOS]
```

You can also test mask-filling performance using the `[MASK]` token like this.

```
[BOS] [MASK] 먹었어? [EOS]
```

## Benchmark
Dataset | KLUE NLI dev | NSMC test | QuestionPair test | KLUE TC dev || KLUE STS dev ||| KorSTS dev ||| HateSpeech dev ||
---|---|---|---|---|---|---|---|---|---|---|---|---|---
Metric | Acc | Acc | Acc | Acc | F1 | F1 | Pearson | Spearman | F1 | Pearson | Spearman | Bias Acc | Hate Acc
Score | 0.5253 | 0.8425 | 0.8945 | 0.8047 | 0.7988 | 0.7411 | 0.7471 | 0.7399 | 0.7725 | 0.6503 | 0.6191 | 0.7537 | 0.5605
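The `[BOS]`/`[EOS]` input convention described above can be sketched as a small helper. The function names here (`wrap_input`, `mask_query`) are illustrative assumptions for this example, not part of this repository.

```python
# Minimal sketch of preparing inputs in the format this model expects.
# Helper names are illustrative assumptions, not part of this repository.

def wrap_input(text: str) -> str:
    """Wrap a sentence with the [BOS]/[EOS] tokens the model expects."""
    return f"[BOS] {text} [EOS]"

def mask_query(text: str, placeholder: str = "___") -> str:
    """Replace a placeholder with the [MASK] token and wrap for mask filling."""
    return wrap_input(text.replace(placeholder, "[MASK]"))

print(wrap_input("안녕하세요? 반가워요~~"))  # [BOS] 안녕하세요? 반가워요~~ [EOS]
print(mask_query("___ 먹었어?"))             # [BOS] [MASK] 먹었어? [EOS]
```

Strings built this way can be sent directly to the inference API widget or passed to a tokenizer.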