---
language: ko
---

# Pretrained BART in Korean

This is a BART model pretrained on multiple Korean datasets. I used multiple datasets to generalize the model to both colloquial and written text.

The training was supported by the [TPU Research Cloud](https://sites.research.google/trc/) program. The script used to pre-train the model is [here](https://github.com/cosmoquester/transformers-bart-pretrain).

When you use the inference API, you must wrap the sentence with `[BOS]` and `[EOS]` as in the example below.

```
[BOS] 안녕하세요? 반가워요~~ [EOS]
```

You can also test mask-filling performance using the `[MASK]` token like this.

```
[BOS] [MASK] 먹었어? [EOS]
```

## Benchmark
Dataset | Metric | Score |
---|---|---|
KLUE NLI dev | Acc | 0.639 |
NSMC test | Acc | 0.8721 |
QuestionPair test | Acc | 0.905 |
KLUE TC dev | Acc | 0.8551 |
KLUE TC dev | F1 | 0.8515 |
KLUE STS dev | F1 | 0.7406 |
KLUE STS dev | Pearson | 0.7593 |
KLUE STS dev | Spearman | 0.7551 |
KorSTS dev | F1 | 0.7897 |
KorSTS dev | Pearson | 0.7269 |
KorSTS dev | Spearman | 0.7037 |
HateSpeech dev | Bias Acc | 0.8068 |
HateSpeech dev | Hate Acc | 0.5966 |
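
## Usage

For reference, here is a minimal sketch of how the `[BOS]`/`[EOS]` wrapping and `[MASK]` filling described above might look with the 🤗 Transformers API. The repository id in the snippet is a placeholder, and loading the checkpoint as `BartForConditionalGeneration` is an assumption on my part; check the model page's auto-generated snippet for the authoritative loading code.

```python
# Minimal usage sketch (see assumptions above): load the checkpoint with
# Transformers and fill a [MASK] by letting the seq2seq model regenerate
# the sentence.
from transformers import AutoTokenizer, BartForConditionalGeneration

MODEL_ID = "path-or-hub-id-of-this-model"  # placeholder, not a real Hub id

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = BartForConditionalGeneration.from_pretrained(MODEL_ID)

# Wrap the input with [BOS] and [EOS] literally, as described above.
text = "[BOS] [MASK] 먹었어? [EOS]"
inputs = tokenizer(text, return_tensors="pt")

# Beam-search decoding; the model should reconstruct the masked span.
output_ids = model.generate(**inputs, num_beams=4, max_length=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```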