---
annotations_creators:
- no-annotation
language_creators:
- crowdsourced
language:
- ko
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
pretty_name: KcBERT Pre-Training Corpus (Korean News Comments)
size_categories:
- 10M<n<100M
---

# KcBERT Pre-Training Corpus (Korean News Comments)

### How to use

```python
>>> from datasets import load_dataset
>>> dataset = load_dataset("Bingsu/KcBERT_Pre-Training_Corpus")
>>> dataset
DatasetDict({
    train: Dataset({
        features: ['text'],
        num_rows: 86246285
    })
})
```
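Because the generated dataset is nearly 20 GiB, you may prefer to stream it rather than download everything up front. A minimal sketch using the library's standard `streaming=True` option:

```python
>>> from datasets import load_dataset
>>> # Stream the corpus instead of materializing all ~19.76 GiB on disk.
>>> streamed = load_dataset(
...     "Bingsu/KcBERT_Pre-Training_Corpus", split="train", streaming=True
... )
>>> sample = next(iter(streamed))  # the first comment, fetched lazily
>>> sorted(sample.keys())
['text']
```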
### Data Size

download: 7.90 GiB
generated: 11.86 GiB
total: 19.76 GiB

※ You can also download this dataset from [kaggle](https://www.kaggle.com/datasets/junbumlee/kcbert-pretraining-corpus-korean-news-comments); the kaggle archive is 5 GiB compressed (12.48 GiB uncompressed).

### Data Fields

- text: `string`

### Data Splits

|            | train    |
| ---------- | -------- |
| # of texts | 86246285 |
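As a quick sanity check against the numbers above (continuing the session from the `load_dataset` example):

```python
>>> ds = dataset["train"]
>>> len(ds)              # matches the "# of texts" row in the table
86246285
>>> type(ds[0]["text"])  # each row holds a single raw comment string
<class 'str'>
```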