Pretrained BART in Korean

This is a BART model pretrained on multiple Korean datasets.

I used multiple datasets so that the model generalizes to both colloquial and written text.

Training was supported by the TPU Research Cloud program.

The script used to pre-train the model is available here.

When you use the inference API, you must wrap the sentence with [BOS] and [EOS], as in the example below.

[BOS] ์•ˆ๋…•ํ•˜์„ธ์š”? ๋ฐ˜๊ฐ€์›Œ์š”~~ [EOS]
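As a minimal sketch, the hosted Inference API can be called like this with the wrapped sentence; the model id and access token below are placeholders rather than values taken from this page.

import requests

# Placeholders: substitute the actual repository id and your own Hugging Face access token.
API_URL = "https://api-inference.huggingface.co/models/<model-id>"
headers = {"Authorization": "Bearer <your-hf-access-token>"}

# The input sentence is wrapped with [BOS] and [EOS] as described above.
payload = {"inputs": "[BOS] ์•ˆ๋…•ํ•˜์„ธ์š”? ๋ฐ˜๊ฐ€์›Œ์š”~~ [EOS]"}
response = requests.post(API_URL, headers=headers, json=payload)
print(response.json())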

You can also test mask-filling performance with the [MASK] token like this.

[BOS] [MASK] ๋จน์—ˆ์–ด? [EOS]
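To try mask filling locally instead of through the API, a sketch along these lines should work, assuming the checkpoint loads with the standard transformers fill-mask pipeline and that its tokenizer's mask token is [MASK]; the model id is again a placeholder.

from transformers import pipeline

# Placeholder model id; assumes the checkpoint is loadable by the fill-mask pipeline
# and that its mask token is [MASK].
fill_mask = pipeline("fill-mask", model="<model-id>")

# Keep the same convention: wrap with [BOS]/[EOS] and put [MASK] where a token should be predicted.
for prediction in fill_mask("[BOS] [MASK] ๋จน์—ˆ์–ด? [EOS]"):
    print(prediction["token_str"], prediction["score"])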

Benchmark

  • KLUE NLI dev: Acc 0.7390
  • NSMC test: Acc 0.8877
  • QuestionPair test: Acc 0.9208
  • KLUE TC dev: Acc 0.8667, F1 0.8637
  • KLUE STS dev: F1 0.7654, Pearson 0.8090, Spearman 0.8040
  • KorSTS dev: F1 0.8067, Pearson 0.7909, Spearman 0.7784
  • HateSpeech dev: Bias Acc 0.8280, Hate Acc 0.5669

Used Datasets

๋ชจ๋‘์˜ ๋ง๋ญ‰์น˜

  • ์ผ์ƒ ๋Œ€ํ™” ๋ง๋ญ‰์น˜ 2020
  • ๊ตฌ์–ด ๋ง๋ญ‰์น˜
  • ๋ฌธ์–ด ๋ง๋ญ‰์น˜
  • ์‹ ๋ฌธ ๋ง๋ญ‰์น˜

AIhub

์„ธ์ข… ๋ง๋ญ‰์น˜
