Datasets: KaraKaraWitch committed · Commit 2ce3725 · Parent(s): cdbdd2d · Create README.md

README.md ADDED

---
size_categories:
- 10K<n<100K
pretty_name: OKReddit Visionary
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
source_datasets:
- original
language:
- en
---

<div>
<a href="https://cdn-uploads.huggingface.co/production/uploads/633e85093a17ab61de8d9073/jh7lskqN9TnF53HmKnFlh.png"><img src="https://cdn-uploads.huggingface.co/production/uploads/633e85093a17ab61de8d9073/jh7lskqN9TnF53HmKnFlh.png" title="We've switched style models from 1.5 to SDXL! Yay! And yes, it's a Style lora once more." style="margin-left:auto;margin-right:auto"></a>
</div>

# Dataset Summary

OKReddit Visionary is a collection of **50 GiB** (~74K pairs) of image Question & Answer data. This dataset has been prepared for research or archival purposes.

This dataset includes a filtered list of subreddits.

- **Curated by:** KaraKaraWitch
- **Funded by:** Recursal.ai
- **Shared by:** KaraKaraWitch
- **Language(s) (NLP):** Mainly English. Other languages are present in smaller quantities.
- **License:** The `Scripts` folder is Apache 2.0. Refer to [Licensing Information](#licensing-information) for the data license.

### Dataset Sources

- **Source Data:** [Academic Torrents](https://academictorrents.com/details/9c263fc85366c1ef8f5bb9da0203f4c8c8db75f4) (by stuck_in_the_matrix, Watchful1, RaiderBDev & the pushshift folks)

## Languages

At this size, all the questions and answers should be in English.

## Dataset Structure

### Data Instances

The dataset can be loaded with webdataset. Note that there are multiple image extensions to check for: `jpg`, `jpeg`, or `png`. The images have not been re-converted, in order to preserve the original files from Reddit.

```py
import webdataset as wds

# If the archive is split into parts, concatenate them first; after that,
# the file can be used like a regular dataset.

# The dataset is compatible with the WebDataset format. Example:

tar_file = "PackedTar.tar"

hf_dataset = wds.WebDataset(tar_file).decode("pil")
```
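Since a given sample may store its image as `jpg`, `jpeg`, or `png`, any code that indexes the tar by extension needs to check all three. A minimal sketch using only the standard library (the member names and tar layout here are illustrative, not taken from the actual dataset):

```python
import io
import tarfile

IMAGE_EXTS = ("jpg", "jpeg", "png")

def image_samples(tar):
    """Map sample key -> image extension for every image member in an open tar."""
    samples = {}
    for member in tar.getmembers():
        key, _, ext = member.name.rpartition(".")
        if ext.lower() in IMAGE_EXTS:
            samples[key] = ext.lower()
    return samples

# Build a tiny illustrative tar in memory: two samples, each with a
# metadata JSON plus an image stored under a different extension.
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    for name in ("0001.png", "0001.json", "0002.jpeg", "0002.json"):
        data = b"placeholder"
        info = tarfile.TarInfo(name)
        info.size = len(data)
        tar.addfile(info, io.BytesIO(data))
buf.seek(0)

with tarfile.open(fileobj=buf) as tar:
    print(image_samples(tar))  # {'0001': 'png', '0002': 'jpeg'}
```

WebDataset's `.decode("pil")` handles all three extensions transparently when iterating, so an explicit check like this is only needed when inspecting or repacking the raw tar members yourself.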