Update README.md
---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- ko
license:
- cc-by-4.0
multilinguality:
- monolingual
pretty_name: laion-translated-to-en-korean-subset
size_categories:
- 10M<n<100M
task_categories:
- feature-extraction
---

# laion-translated-to-en-korean-subset

## Dataset Description
- **Homepage:** [laion-5b](https://laion.ai/blog/laion-5b/)
- **Huggingface:** [laion/laion2B-multi](https://huggingface.co/datasets/laion/laion2B-multi)
- **Download Size:** 1.40 GiB
- **Generated Size:** 3.49 GiB
- **Total Size:** 4.89 GiB

## About dataset
A subset of [laion/laion2B-multi-joined-translated-to-en](https://huggingface.co/datasets/laion/laion2B-multi-joined-translated-to-en) and [laion/laion1B-nolang-joined-translated-to-en](https://huggingface.co/datasets/laion/laion1B-nolang-joined-translated-to-en), containing only the Korean-language rows.
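
For reference, a filter of roughly this shape reproduces the selection. This is a minimal sketch, not the script actually used to build the dataset; it assumes the parent datasets expose the same `LANGUAGE` column as this subset, and it streams to avoid downloading billions of rows:

```py
# Minimal sketch of the selection idea, not the actual extraction script.
# Assumes the parent dataset exposes the same LANGUAGE column as this subset.
from datasets import load_dataset

parent = load_dataset(
    "laion/laion2B-multi-joined-translated-to-en",
    split="train",
    streaming=True,  # avoid downloading the full multi-billion-row dataset
)
korean_only = parent.filter(lambda row: row["LANGUAGE"] == "ko")

# Peek at the first few matching rows.
for i, row in enumerate(korean_only):
    print(row["TEXT"], "->", row["ENG TEXT"])
    if i >= 2:
        break
```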

### License
CC-BY-4.0

## Data Structure

### Data Instances

```py
>>> from datasets import load_dataset
>>> dataset = load_dataset("Bingsu/laion-translated-to-en-korean-subset")
>>> dataset
DatasetDict({
    train: Dataset({
        features: ['hash', 'URL', 'TEXT', 'ENG TEXT', 'WIDTH', 'HEIGHT', 'LANGUAGE', 'similarity', 'pwatermark', 'punsafe', 'AESTHETIC_SCORE'],
        num_rows: 12769693
    })
})
```

```py
>>> dataset["train"].features
{'hash': Value(dtype='int64', id=None),
 'URL': Value(dtype='large_string', id=None),
 'TEXT': Value(dtype='large_string', id=None),
 'ENG TEXT': Value(dtype='large_string', id=None),
 'WIDTH': Value(dtype='int32', id=None),
 'HEIGHT': Value(dtype='int32', id=None),
 'LANGUAGE': Value(dtype='large_string', id=None),
 'similarity': Value(dtype='float32', id=None),
 'pwatermark': Value(dtype='float32', id=None),
 'punsafe': Value(dtype='float32', id=None),
 'AESTHETIC_SCORE': Value(dtype='float32', id=None)}
```

### Data Size

download: 1.40 GiB<br>
generated: 3.49 GiB<br>
total: 4.89 GiB

### Data Fields

- 'hash': `int`
- 'URL': `string`
- 'TEXT': `string`
- 'ENG TEXT': `string`, rows with a null value were dropped
- 'WIDTH': `int`, null values are filled with 0
- 'HEIGHT': `int`, null values are filled with 0
- 'LICENSE': `string`
- 'LANGUAGE': `string`
- 'similarity': `float32`, CLIP similarity score, null values are filled with 0.0
- 'pwatermark': `float32`, probability that the image contains a watermark, null values are filled with 0.0
- 'punsafe': `float32`, probability that the image is NSFW, null values are filled with 0.0
- 'AESTHETIC_SCORE': `float32`, null values are filled with 0.0

Note that for most columns nulls were imputed rather than dropped, so a value of 0 or 0.0 can mean "missing"; a filtering sketch follows.
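
A hedged sketch of such a filter; the thresholds below are purely illustrative examples, not recommendations from the dataset itself:

```py
# Illustrative quality filter. The thresholds are arbitrary examples, not
# recommendations, and 0/0.0 may mean "missing" since nulls were imputed.
from datasets import load_dataset

dataset = load_dataset("Bingsu/laion-translated-to-en-korean-subset", split="train")

filtered = dataset.filter(
    lambda row: row["WIDTH"] >= 256      # 0 means the width was unknown
    and row["HEIGHT"] >= 256             # 0 means the height was unknown
    and row["pwatermark"] < 0.5          # lower = less likely watermarked
    and row["punsafe"] < 0.5             # lower = less likely NSFW
)
print(f"{len(dataset)} -> {len(filtered)} rows")
```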

### Data Splits

|           | train    |
| --------- | -------- |
| # of data | 12769693 |
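
There is only a single `train` split. If you need a held-out set, you can carve one out at load time; a sketch, where the 1% test size and the seed are arbitrary:

```py
# Create a small held-out set from the single train split.
# test_size and seed are arbitrary examples.
from datasets import load_dataset

dataset = load_dataset("Bingsu/laion-translated-to-en-korean-subset", split="train")
splits = dataset.train_test_split(test_size=0.01, seed=42)
print(splits["train"].num_rows, splits["test"].num_rows)
```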

### polars

```sh
pip install polars[fsspec]
```

```py
import polars as pl
from huggingface_hub import hf_hub_url

url = hf_hub_url("Bingsu/laion-translated-to-en-korean-subset", filename="train.parquet", repo_type="dataset")
# url = "https://huggingface.co/datasets/Bingsu/laion-translated-to-en-korean-subset/resolve/main/train.parquet"
df = pl.read_parquet(url)
```

Reading the same file with pandas broke my Colab session; if memory is tight, see the streaming sketch below.
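
A minimal streaming sketch with 🤗 Datasets that iterates over rows without materializing the full table in memory (printing a few rows is just a demonstration):

```py
# Stream rows instead of loading the whole ~3.5 GiB table into memory.
from datasets import load_dataset

dataset = load_dataset(
    "Bingsu/laion-translated-to-en-korean-subset",
    split="train",
    streaming=True,
)
for i, row in enumerate(dataset):
    print(row["TEXT"], "->", row["ENG TEXT"])
    if i >= 4:  # just show the first five rows
        break
```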