---
annotations_creators:
- no-annotation
language_creators:
- crowdsourced
language:
- ko
license:
- cc-by-sa-4.0
multilinguality:
- monolingual
pretty_name: KcBERT Pre-Training Corpus (Korean News Comments)
size_categories:
- 10M<n<100M
source_datasets:
- original
task_categories:
- fill-mask
- text-generation
task_ids:
- masked-language-modeling
- language-modeling
---

# KcBERT Pre-Training Corpus (Korean News Comments)

## Dataset Description

- **Homepage:** https://www.kaggle.com/datasets/junbumlee/kcbert-pretraining-corpus-korean-news-comments
- **Repository:** https://github.com/Beomi/KcBERT
- **Paper:** [Needs More Information]
- **Leaderboard:** [Needs More Information]
- **Point of Contact:** [Needs More Information]

## KcBERT

Model: [beomi/kcbert-base](https://huggingface.co/beomi/kcbert-base)

GitHub repo: [https://github.com/Beomi/KcBERT](https://github.com/Beomi/KcBERT)

KcBERT is a Korean Comments BERT model pretrained on this corpus. (You can use it via Hugging Face's Transformers library; see the sketch below!)
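
A minimal sketch of loading the model (standard `AutoModel` usage with the checkpoint name above; this snippet is not from the original card):

```python
from transformers import AutoModelForMaskedLM, AutoTokenizer

# Load the KcBERT checkpoint that was pretrained on this corpus.
tokenizer = AutoTokenizer.from_pretrained("beomi/kcbert-base")
model = AutoModelForMaskedLM.from_pretrained("beomi/kcbert-base")
```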

This dataset is the **cleaned** corpus, preprocessed with the code below:

```python
import re

import emoji  # assumes emoji<2.0; version 2.0 removed UNICODE_EMOJI (EMOJI_DATA replaced it)
from soynlp.normalizer import repeat_normalize

# Whitelist: spaces, common punctuation, ASCII, Korean characters, and emoji.
# Runs of anything outside this set are replaced with a single space.
emojis = ''.join(emoji.UNICODE_EMOJI.keys())
pattern = re.compile(f'[^ .,?!/@$%~%·∼()\x00-\x7Fㄱ-힣{emojis}]+')
url_pattern = re.compile(
    r'https?:\/\/(www\.)?[-a-zA-Z0-9@:%._\+~#=]{1,256}\.[a-zA-Z0-9()]{1,6}\b([-a-zA-Z0-9()@:%_\+.~#?&//=]*)')

def clean(x):
    x = pattern.sub(' ', x)     # drop disallowed characters
    x = url_pattern.sub('', x)  # strip URLs
    x = x.strip()
    x = repeat_normalize(x, num_repeats=2)  # collapse long character runs to two
    return x
```
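
To illustrate what `clean` does (the sample comment and its output are illustrative assumptions, not rows from the corpus):

```python
# The URL is stripped and the repeated ㅋ is collapsed to two by repeat_normalize.
print(clean('재밌어요ㅋㅋㅋㅋㅋ https://example.com'))
# expected: 재밌어요ㅋㅋ
```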

### License

[CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/)

## Dataset Structure

### Data Instance

```pycon
>>> from datasets import load_dataset

>>> dataset = load_dataset("Bingsu/KcBERT_Pre-Training_Corpus")
>>> dataset
DatasetDict({
    train: Dataset({
        features: ['text'],
        num_rows: 86246285
    })
})
```

### Data Size

- download: 7.90 GiB
- generated: 11.86 GiB
- total: 19.76 GiB

※ You can also download this dataset from [Kaggle](https://www.kaggle.com/datasets/junbumlee/kcbert-pretraining-corpus-korean-news-comments); the archive there is 5 GiB (12.48 GiB uncompressed).
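
Because the generated split is roughly 12 GiB, streaming may be more convenient than a full download. A minimal sketch using 🤗 Datasets' `streaming=True` option (the printed row is simply whatever comes first, not a curated sample):

```python
from datasets import load_dataset

# Iterate over the corpus without materializing it on disk.
dataset = load_dataset("Bingsu/KcBERT_Pre-Training_Corpus", streaming=True)
row = next(iter(dataset["train"]))
print(row["text"])  # each row is a dict with a single 'text' field
```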

### Data Fields

- text: `string`

### Data Splits

|            | train    |
| ---------- | -------- |
| # of texts | 86246285 |