|
--- |
|
size_categories: |
|
- 10K<n<100K |
|
pretty_name: OKReddit Visionary |
|
task_categories: |
|
- question-answering |
|
- image-to-text |
|
source_datasets: |
|
- original |
|
language: |
|
- en |
|
--- |
|
|
|
<div> |
|
<a href="https://soundcloud.com/lemmino/biosignature"><img src="https://cdn-uploads.huggingface.co/production/uploads/633e85093a17ab61de8d9073/jh7lskqN9TnF53HmKnFlh.png" title=""We've switched style models from 1.5 to SDXL! Yay! And yes, it's a Style lora once more."" style="margin-left:auto;margin-right:auto"></a> |
|
</div> |
|
|
|
# Dataset Summary |
|
|
|
OKReddit Visionary is a **50 GiB** collection (~74K pairs) of image-grounded questions and answers. This dataset has been prepared for research or archival purposes.
|
|
|
- **Curated by:** KaraKaraWitch |
|
- **Funded by:** Recursal.ai |
|
- **Shared by:** KaraKaraWitch |
|
- **Special Thanks:** [harrison](https://huggingface.co/harrisonvanderbyl) (Suggestion) |
|
- **Language(s) (NLP):** Mainly English. |
|
- **License:** Refer to [Licensing Information](#licensing-information) for data license. |
|
|
|
### Dataset Sources |
|
|
|
- **Source Data:** [Academic Torrents](https://academictorrents.com/details/9c263fc85366c1ef8f5bb9da0203f4c8c8db75f4) by stuck_in_the_matrix, Watchful1, RaiderBDev & the Pushshift folks.
|
|
|
## Supported Tasks and Leaderboards |
|
|
|
The dataset may be used for a variety of vision-language tasks, including:
|
|
|
- Visual Question Answering: the dataset contains image-grounded question and answer pairs.
- Image-to-Text (and vice versa).
|
|
|
|
|
## Languages |
|
|
|
At this dataset size, all of the questions and answers should be in English.
|
|
|
## Dataset Structure |
|
|
|
### Data Instances |
|
|
|
The dataset can be loaded with `webdataset`. Note that there are multiple image extensions to check: `jpg`, `jpeg`, or `png`. The images have not been re-encoded, in order to preserve the original files from Reddit.
|
|
|
```py |
|
import webdataset as wds

# After concatenating the tar shards, the file can be used like a regular
# WebDataset-format dataset.
tar_file = "PackedTar.tar"

hf_dataset = wds.WebDataset(tar_file).decode("pil")

# Images may appear under the "jpg", "jpeg" or "png" key.
for sample in hf_dataset:
    image = sample.get("jpg") or sample.get("jpeg") or sample.get("png")
|
``` |
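
If you prefer not to concatenate the shards, `wds.WebDataset` also accepts a list of tar files. A minimal sketch, assuming hypothetical shard filenames (substitute the actual tar files in this repository):

```py
import webdataset as wds

# Hypothetical shard names, for illustration only.
shards = ["shard-0000.tar", "shard-0001.tar"]

hf_dataset = wds.WebDataset(shards).decode("pil")
```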
|
|
|
# Dataset Creation |
|
|
|
## Curation Rationale |
|
|
|
Some subreddits are, more often than not, Q&A subreddits: the submission author asks a question (together with an image) and receives responses back.
|
|
|
### Subreddits Picked |
|
|
|
Following a suggestion from harrison, I've selected the following subreddits for this dataset: |
|
|
|
- PeterExplainsTheJoke |
|
- whatisthisanimal |
|
- whatisthisbug |
|
- whatisthiscar |
|
- whatisthisthing |
|
|
|
Some subreddits (/r/PeterExplainsTheJoke, for example) were not present in the base OKReddit-RC3 dataset and had to be pulled from an intermediate step, but the same quality metrics were used in the final subreddit filtering.
|
|
|
### Picking good threads |
|
|
|
After the subreddit-level filtering, threads are further filtered by score as follows (a minimal sketch of these steps is shown after the list):
|
|
|
1. Select submissions with a score > 7.
2. From those submissions, select replies that are not from bots (e.g. `AutoModerator`) and have a score > 5.
3. Scrape all images (excluding galleries).
4. Pack the data into tar format.
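
The sketch below illustrates the score filtering and tar packing described above. It is a minimal illustration under stated assumptions, not the exact pipeline: the field names (`score`, `author`) follow the Pushshift ndjson schema, while the file name, the `pairs` variable, and the helper functions are hypothetical.

```py
import json

import webdataset as wds

MIN_SUBMISSION_SCORE = 7  # step 1: submissions must score above 7
MIN_REPLY_SCORE = 5       # step 2: replies must score above 5
BOT_AUTHORS = {"AutoModerator"}


def keep_submission(submission: dict) -> bool:
    """Step 1: keep submissions whose score is greater than 7."""
    return submission.get("score", 0) > MIN_SUBMISSION_SCORE


def keep_reply(reply: dict) -> bool:
    """Step 2: keep non-bot replies whose score is greater than 5."""
    return (
        reply.get("author") not in BOT_AUTHORS
        and reply.get("score", 0) > MIN_REPLY_SCORE
    )


# Step 4: pack image bytes and Q&A text into a WebDataset-style tar.
# `pairs` is a hypothetical list of (image_path, qa_dict) tuples produced by
# steps 1-3 (filtering plus image scraping).
pairs = []

with wds.TarWriter("PackedTar.tar") as sink:
    for idx, (image_path, qa) in enumerate(pairs):
        with open(image_path, "rb") as fp:
            image_bytes = fp.read()
        ext = image_path.rsplit(".", 1)[-1].lower()  # jpg / jpeg / png, kept as-is
        sink.write({
            "__key__": f"{idx:08d}",
            ext: image_bytes,
            "json": json.dumps(qa).encode("utf-8"),
        })
```

The `json` key above is only an example layout; check the sample keys inside the released tar files for the exact schema.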
|
|
|
# Additional Information |
|
|
|
## Recursal's Vision |
|
|
|
> To make AI accessible to everyone, regardless of language or economic status.
|
|
|
This is the collective goal of the `RWKV Open Source foundation` and `Recursal AI`, the commercial entity that backs it.
|
|
|
We believe that AI should not be controlled by a select few organizations, and that it should be accessible to everyone, regardless of whether you are rich or poor, or a native speaker of English.
|
|
|
### About RWKV |
|
|
|
RWKV is an open-source, non-profit group under the Linux Foundation, focused on developing the RWKV AI architecture in accordance with our vision.
|
|
|
The RWKV architecture scales efficiently and economically. As an RNN & Transformer hybrid, it provides performance similar to leading transformer models while retaining the compute and energy efficiency of an RNN-based architecture.
|
|
|
You can find out more about the project and the latest models at the following links:
|
|
|
- [https://blog.rwkv.com](https://blog.rwkv.com) |
|
- [https://wiki.rwkv.com](https://wiki.rwkv.com) |
|
|
|
|
|
### About Recursal AI |
|
|
|
Recursal AI is the commercial entity built to support RWKV model development and its users, while providing commercial services via its public cloud or private-cloud / on-premise offerings.
|
|
|
As part of our vision, we are committed to ensuring open-source development of, and access to, the best foundational AI models and datasets.

The dataset provided here is part of that commitment.
|
|
|
You can find out more about Recursal AI here:
|
|
|
- [https://recursal.ai](https://recursal.ai) |
|
- [https://blog.recursal.ai](https://blog.recursal.ai) |
|
|
|
### Licensing Information |
|
|
|
Since this dataset is derived from a public crawl of Reddit, the original content may be subject to copyright and other licensing terms set by the original site owner and/or the content creators.
|
Additionally, this dataset is for research and archival purposes only. |
|
|
|
The Recursal Waifus (the banner image) are licensed under CC-BY-SA.
They do not represent the related websites in any official capacity unless otherwise stated or announced by the website.
You may use them as a banner image; however, you must always link back to the dataset.
|
|
|
### Citation Information |
|
|
|
If you use this dataset in your research or project, please cite it as follows: |
|
|
|
```TeX |
|
@dataset{OKRedditVisionary, |
|
title = {OKReddit-Visionary}, |
|
year = {2024}, |
|
publisher = {KaraKaraWitch}, |
|
url = {https://huggingface.co/datasets/recursal/OKReddit-Visionary}
|
} |
|
``` |
|
|
|
Additionally, please cite the following source BibTeX as well.
|
```TeX |
|
@article{, |
|
title= {Reddit comments/submissions 2005-06 to 2023-12}, |
|
journal= {}, |
|
author= {stuck_in_the_matrix, Watchful1, RaiderBDev}, |
|
year= {}, |
|
url= {}, |
|
abstract= {Reddit comments and submissions from 2005-06 to 2023-09 collected by pushshift and u/RaiderBDev. |
|
|
|
These are zstandard compressed ndjson files. Example python scripts for parsing the data can be found here https://github.com/Watchful1/PushshiftDumps |
|
|
|
The more recent dumps are collected by u/RaiderBDev and questions can be submitted here https://github.com/ArthurHeitmann/arctic_shift}, |
|
keywords= {reddit}, |
|
terms= {}, |
|
license= {}, |
|
superseded= {} |
|
} |
|
``` |