---
language: ko
---

# Pretrained BART in Korean

This is a BART model pretrained on multiple Korean datasets. Several datasets were combined so that the model generalizes to both colloquial and written text. Training was supported by the [TPU Research Cloud](https://sites.research.google/trc/) program, and the script used to pretrain the model is [here](https://github.com/cosmoquester/transformers-bart-pretrain).

When you use the inference API, you must wrap the sentence with `[BOS]` and `[EOS]` as in the example below (a minimal Python usage sketch follows the dataset list).

```
[BOS] 안녕하세요? 반가워요~~ [EOS]
```

## Used Datasets

### [모두의 말뭉치 (Modu Corpus)](https://corpus.korean.go.kr/)

- 일상 대화 말뭉치 2020 (Everyday Conversation Corpus 2020)
- 구어 말뭉치 (Spoken Corpus)
- 문어 말뭉치 (Written Corpus)
- 신문 말뭉치 (Newspaper Corpus)

### AIhub

- [개방데이터 전문분야말뭉치 (Open Data: Specialized-Domain Corpus)](https://aihub.or.kr/aidata/30717)
- [개방데이터 한국어대화요약 (Open Data: Korean Dialogue Summarization)](https://aihub.or.kr/aidata/30714)
- [개방데이터 감성 대화 말뭉치 (Open Data: Emotional Dialogue Corpus)](https://aihub.or.kr/aidata/7978)
- [개방데이터 한국어 음성 (Open Data: Korean Speech)](https://aihub.or.kr/aidata/105)
- [개방데이터 한국어 SNS (Open Data: Korean SNS)](https://aihub.or.kr/aidata/30718)

### [세종 말뭉치 (Sejong Corpus)](https://ithub.korean.go.kr/)
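
## Usage Example

Below is a minimal usage sketch in Python showing the `[BOS]`/`[EOS]` wrapping described above. The Hub model ID is a hypothetical placeholder (substitute the actual checkpoint name), and loading through the standard `transformers` Auto classes is an assumption, not the author's verified recipe.

```python
# A minimal usage sketch, assuming the checkpoint is published on the
# Hugging Face Hub and loads with the standard Auto classes.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "cosmoquester/bart-ko"  # hypothetical placeholder ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# Wrap the input with [BOS] and [EOS], as this model card requires.
text = "[BOS] 안녕하세요? 반가워요~~ [EOS]"
inputs = tokenizer(text, return_tensors="pt")

# Run the seq2seq model's generation loop on the wrapped input.
output_ids = model.generate(**inputs, max_length=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```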