datasetId | author | last_modified | downloads | likes | tags | task_categories | createdAt | card
---|---|---|---|---|---|---|---|---
VatsaDev/mathworld | VatsaDev | "2024-02-04T01:20:49Z" | 32 | 0 | [
"license:mit",
"region:us"
] | null | "2024-02-04T01:18:49Z" | ---
license: mit
---
# Mathworld
- Wolfram MathWorld scraped, but without images
- Should include every link |
dkshjn/processed_truthy-v3 | dkshjn | "2024-02-04T02:37:07Z" | 32 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-02-04T02:36:58Z" | ---
dataset_info:
features:
- name: id
dtype: string
- name: source
dtype: string
- name: system
dtype: string
- name: prompt
dtype: string
- name: rejected
dtype: string
- name: formatted_chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: formatted_rejected
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 2777217
num_examples: 1016
download_size: 1168067
dataset_size: 2777217
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "processed_truthy-v3"
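The schema above stores each example's preference pair both as raw strings (`prompt`, `rejected`) and as chat-formatted message lists (`formatted_chosen`, `formatted_rejected`). A minimal loading sketch, assuming the repo is publicly readable with the `datasets` library:

```python
# Minimal sketch (not part of the original card): load the train split and
# inspect one chat-formatted preference pair described by the schema above.
from datasets import load_dataset

ds = load_dataset("dkshjn/processed_truthy-v3", split="train")

row = ds[0]
print(row["prompt"])                 # raw prompt string
for msg in row["formatted_chosen"]:  # list of {"content", "role"} messages
    print(msg["role"], ":", msg["content"])
```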
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
RamiToocool/MyResume | RamiToocool | "2024-02-04T03:26:06Z" | 32 | 0 | [
"license:apache-2.0",
"region:us",
"biology"
] | null | "2024-02-04T03:18:51Z" | ---
license: apache-2.0
tags:
- biology
---
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
XiaoyaoCloud/qq | XiaoyaoCloud | "2024-02-04T03:54:05Z" | 32 | 0 | [
"license:apache-2.0",
"region:us"
] | null | "2024-02-04T03:54:02Z" | ---
license: apache-2.0
---
|
artificial-citizen/ava_chatml | artificial-citizen | "2024-02-04T05:15:00Z" | 32 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-02-04T05:14:59Z" | ---
dataset_info:
features:
- name: id
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 22053608
num_examples: 6534
download_size: 10482250
dataset_size: 22053608
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Drobotwho/rightwingtwitter | Drobotwho | "2024-02-04T08:01:52Z" | 32 | 0 | [
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-02-04T07:51:51Z" | ---
license: apache-2.0
---
|
asier86/certifs_tfdm | asier86 | "2024-02-04T08:28:32Z" | 32 | 0 | [
"license:unknown",
"region:us"
] | null | "2024-02-04T08:16:55Z" | ---
license: unknown
---
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
ZiHDeng/hf-ny8-v5 | ZiHDeng | "2024-02-04T08:31:29Z" | 32 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-02-04T08:31:25Z" | ---
dataset_info:
features:
- name: repo_id
dtype: string
- name: file_path
dtype: string
- name: content
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 618878
num_examples: 1661
download_size: 20840
dataset_size: 618878
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
fia24/annotated_68k | fia24 | "2024-02-04T08:32:40Z" | 32 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-02-04T08:32:37Z" | ---
dataset_info:
features:
- name: id
dtype: string
- name: translation
struct:
- name: en
dtype: string
- name: fr
dtype: string
splits:
- name: train
num_bytes: 3257035
num_examples: 54458
- name: val
num_bytes: 411042
num_examples: 6807
- name: test
num_bytes: 395238
num_examples: 6808
download_size: 1529700
dataset_size: 4063315
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: val
path: data/val-*
- split: test
path: data/test-*
---
|
kanxue/muep_cot_checkpoint | kanxue | "2024-02-09T14:05:25Z" | 32 | 0 | [
"license:apache-2.0",
"region:us"
] | null | "2024-02-04T08:49:01Z" | ---
license: apache-2.0
---
|
ralshinibr/SyntheticProtocolQA | ralshinibr | "2024-02-04T11:24:36Z" | 32 | 0 | [
"license:apache-2.0",
"size_categories:n<1K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-02-04T11:22:40Z" | ---
license: apache-2.0
---
|
Andaleciomusic/pirapora | Andaleciomusic | "2024-02-04T12:17:01Z" | 32 | 0 | [
"license:openrail",
"size_categories:n<1K",
"format:audiofolder",
"modality:audio",
"library:datasets",
"library:mlcroissant",
"region:us"
] | null | "2024-02-04T12:16:37Z" | ---
license: openrail
---
|
Andaleciomusic/lupegostoso | Andaleciomusic | "2024-02-04T12:27:00Z" | 32 | 0 | [
"license:openrail",
"size_categories:n<1K",
"format:audiofolder",
"modality:audio",
"library:datasets",
"library:mlcroissant",
"region:us"
] | null | "2024-02-04T12:26:33Z" | ---
license: openrail
---
|
fabbiofrazao/limacampos | fabbiofrazao | "2024-02-04T14:27:00Z" | 32 | 0 | [
"license:openrail",
"size_categories:n<1K",
"format:audiofolder",
"modality:audio",
"library:datasets",
"library:mlcroissant",
"region:us"
] | null | "2024-02-04T14:23:37Z" | ---
license: openrail
---
|
McSpicyWithMilo/target-elements-0.2split | McSpicyWithMilo | "2024-02-04T15:16:03Z" | 32 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-02-04T15:15:50Z" | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: target_element
dtype: string
- name: instruction_type
dtype: string
splits:
- name: train
num_bytes: 36440.0
num_examples: 320
- name: test
num_bytes: 9110.0
num_examples: 80
download_size: 24201
dataset_size: 45550.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
# Dataset Card for "target-elements-0.2split"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tyzhu/random25eof_find_passage_train1000_eval1000_rare | tyzhu | "2024-02-04T15:44:23Z" | 32 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-02-04T15:30:49Z" | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: inputs
dtype: string
- name: targets
dtype: string
splits:
- name: train
num_bytes: 298700
num_examples: 3000
- name: validation
num_bytes: 118222
num_examples: 1000
download_size: 181208
dataset_size: 416922
---
# Dataset Card for "random25eof_find_passage_train1000_eval1000_rare"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tyzhu/random25eof_find_passage_train10000_eval1000_rare | tyzhu | "2024-02-04T15:44:43Z" | 32 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-02-04T15:31:09Z" | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: inputs
dtype: string
- name: targets
dtype: string
splits:
- name: train
num_bytes: 2174452
num_examples: 21000
- name: validation
num_bytes: 118222
num_examples: 1000
download_size: 790893
dataset_size: 2292674
---
# Dataset Card for "random25eof_find_passage_train10000_eval1000_rare"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tyzhu/random25eof_find_passage_train500000_eval1000_rare | tyzhu | "2024-02-04T15:44:53Z" | 32 | 0 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-02-04T15:31:49Z" | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: inputs
dtype: string
- name: targets
dtype: string
splits:
- name: train
num_bytes: 104305810
num_examples: 1001000
- name: validation
num_bytes: 118222
num_examples: 1000
download_size: 0
dataset_size: 104424032
---
# Dataset Card for "random25eof_find_passage_train500000_eval1000_rare"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tyzhu/random25eof_find_passage_train1000000_eval1000_rare | tyzhu | "2024-02-04T15:44:59Z" | 32 | 0 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-02-04T15:32:03Z" | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: inputs
dtype: string
- name: targets
dtype: string
splits:
- name: train
num_bytes: 208524730
num_examples: 2001000
- name: validation
num_bytes: 118222
num_examples: 1000
download_size: 0
dataset_size: 208642952
---
# Dataset Card for "random25eof_find_passage_train1000000_eval1000_rare"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
tyzhu/random25eof_find_passage_train5000000_eval1000_rare | tyzhu | "2024-02-04T15:45:13Z" | 32 | 0 | [
"size_categories:10M<n<100M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-02-04T15:32:27Z" | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: inputs
dtype: string
- name: targets
dtype: string
splits:
- name: train
num_bytes: 1042263000
num_examples: 10001000
- name: validation
num_bytes: 118222
num_examples: 1000
download_size: 0
dataset_size: 1042381222
---
# Dataset Card for "random25eof_find_passage_train5000000_eval1000_rare"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Tonyhacker/rosa_braw_stars | Tonyhacker | "2024-02-04T15:40:37Z" | 32 | 0 | [
"license:openrail",
"size_categories:n<1K",
"format:audiofolder",
"modality:audio",
"library:datasets",
"library:mlcroissant",
"region:us"
] | null | "2024-02-04T15:39:49Z" | ---
license: openrail
---
|
Fael2d/Voz70 | Fael2d | "2024-02-04T16:09:54Z" | 32 | 0 | [
"license:openrail",
"region:us"
] | null | "2024-02-04T16:09:54Z" | ---
license: openrail
---
|
Fael2d/Voz63 | Fael2d | "2024-02-04T16:11:26Z" | 32 | 0 | [
"license:openrail",
"size_categories:n<1K",
"format:imagefolder",
"modality:image",
"modality:video",
"library:datasets",
"library:mlcroissant",
"region:us"
] | null | "2024-02-04T16:10:32Z" | ---
license: openrail
---
|
clonandovoz/clonandovoz | clonandovoz | "2024-02-04T17:04:28Z" | 32 | 0 | [
"license:openrail",
"size_categories:n<1K",
"format:audiofolder",
"modality:audio",
"library:datasets",
"library:mlcroissant",
"region:us"
] | null | "2024-02-04T17:01:37Z" | ---
license: openrail
---
|
Fael2d/VOZ1 | Fael2d | "2024-02-04T19:11:30Z" | 32 | 0 | [
"license:openrail",
"region:us"
] | null | "2024-02-04T17:06:55Z" | ---
license: openrail
---
|
weijie210/ultrafeedback_critique_pairwise | weijie210 | "2024-02-04T18:11:07Z" | 32 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-02-04T18:10:42Z" | ---
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 431336663.29764986
num_examples: 119288
- name: test
num_bytes: 22700787.775657617
num_examples: 6278
download_size: 197357421
dataset_size: 454037451.07330745
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
Edopangui/promociones2 | Edopangui | "2024-02-04T18:37:36Z" | 32 | 0 | [
"license:apache-2.0",
"region:us"
] | null | "2024-02-04T18:35:14Z" | ---
license: apache-2.0
---
|
clonandovoz/clonandovozz | clonandovoz | "2024-02-04T18:36:48Z" | 32 | 0 | [
"license:openrail",
"size_categories:n<1K",
"format:audiofolder",
"modality:audio",
"library:datasets",
"library:mlcroissant",
"region:us"
] | null | "2024-02-04T18:36:18Z" | ---
license: openrail
---
|
Edopangui/promo | Edopangui | "2024-02-04T18:45:11Z" | 32 | 0 | [
"license:apache-2.0",
"region:us"
] | null | "2024-02-04T18:43:59Z" | ---
license: apache-2.0
---
|
ekazuki/subject-parliament-temp-ds | ekazuki | "2024-02-04T18:53:46Z" | 32 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-02-04T18:52:37Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: group
dtype: string
- name: text_truncated
dtype: string
- name: subject_str
dtype: string
splits:
- name: train
num_bytes: 132500156
num_examples: 55000
download_size: 73301768
dataset_size: 132500156
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
dknoar01/dknoar | dknoar01 | "2024-02-04T19:09:08Z" | 32 | 0 | [
"license:openrail",
"size_categories:n<1K",
"format:audiofolder",
"modality:audio",
"library:datasets",
"library:mlcroissant",
"region:us"
] | null | "2024-02-04T18:52:40Z" | ---
license: openrail
---
|
Edopangui/promo2 | Edopangui | "2024-02-04T19:10:09Z" | 32 | 0 | [
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-02-04T19:09:30Z" | ---
license: apache-2.0
---
|
RENILSON/cloneadolescente | RENILSON | "2024-02-04T19:20:21Z" | 32 | 0 | [
"license:openrail",
"size_categories:n<1K",
"format:audiofolder",
"modality:audio",
"library:datasets",
"library:mlcroissant",
"region:us"
] | null | "2024-02-04T19:10:53Z" | ---
license: openrail
---
|
clonandovoz/novoclon | clonandovoz | "2024-02-04T19:22:01Z" | 32 | 0 | [
"license:openrail",
"size_categories:n<1K",
"format:audiofolder",
"modality:audio",
"library:datasets",
"library:mlcroissant",
"region:us"
] | null | "2024-02-04T19:21:40Z" | ---
license: openrail
---
|
razvanalex/cwi-2018-en-news | razvanalex | "2024-02-04T19:40:26Z" | 32 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-02-04T19:40:19Z" | ---
dataset_info:
features:
- name: id
dtype: string
- name: sentence
dtype: string
- name: start_char
dtype: int64
- name: end_char
dtype: int64
- name: token
dtype: string
- name: n_natives
dtype: int64
- name: n_non_natives
dtype: int64
- name: n_dif_natives
dtype: int64
- name: n_dif_non_natives
dtype: int64
- name: label
dtype: int64
- name: probability
dtype: float64
splits:
- name: train
num_bytes: 4046567
num_examples: 14002
- name: validation
num_bytes: 497271
num_examples: 1764
- name: test
num_bytes: 560249
num_examples: 2095
download_size: 471596
dataset_size: 5104087
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
razvanalex/cwi-2018-en-wikinews | razvanalex | "2024-02-04T19:40:47Z" | 32 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-02-04T19:40:45Z" | ---
dataset_info:
features:
- name: id
dtype: string
- name: sentence
dtype: string
- name: start_char
dtype: int64
- name: end_char
dtype: int64
- name: token
dtype: string
- name: n_natives
dtype: int64
- name: n_non_natives
dtype: int64
- name: n_dif_natives
dtype: int64
- name: n_dif_non_natives
dtype: int64
- name: label
dtype: int64
- name: probability
dtype: float64
splits:
- name: train
num_bytes: 2184024
num_examples: 7746
- name: validation
num_bytes: 233481
num_examples: 870
- name: test
num_bytes: 360578
num_examples: 1287
download_size: 278197
dataset_size: 2778083
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
razvanalex/cwi-2018-de | razvanalex | "2024-02-04T19:42:29Z" | 32 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-02-04T19:42:25Z" | ---
dataset_info:
features:
- name: id
dtype: string
- name: sentence
dtype: string
- name: start_char
dtype: int64
- name: end_char
dtype: int64
- name: token
dtype: string
- name: n_natives
dtype: int64
- name: n_non_natives
dtype: int64
- name: n_dif_natives
dtype: int64
- name: n_dif_non_natives
dtype: int64
- name: label
dtype: int64
- name: probability
dtype: float64
splits:
- name: train
num_bytes: 1694167
num_examples: 6151
- name: validation
num_bytes: 211260
num_examples: 795
- name: test
num_bytes: 262987
num_examples: 959
download_size: 250987
dataset_size: 2168414
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
razvanalex/cwi-2018-es | razvanalex | "2024-02-04T19:43:32Z" | 32 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-02-04T19:43:29Z" | ---
dataset_info:
features:
- name: id
dtype: string
- name: sentence
dtype: string
- name: start_char
dtype: int64
- name: end_char
dtype: int64
- name: token
dtype: string
- name: n_natives
dtype: int64
- name: n_non_natives
dtype: int64
- name: n_dif_natives
dtype: int64
- name: n_dif_non_natives
dtype: int64
- name: label
dtype: int64
- name: probability
dtype: float64
splits:
- name: train
num_bytes: 4294416
num_examples: 13750
- name: validation
num_bytes: 505004
num_examples: 1622
- name: test
num_bytes: 681238
num_examples: 2233
download_size: 491700
dataset_size: 5480658
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
razvanalex/cwi-2018-fr | razvanalex | "2024-02-04T19:44:51Z" | 32 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-02-04T19:44:49Z" | ---
dataset_info:
features:
- name: id
dtype: string
- name: sentence
dtype: string
- name: start_char
dtype: int64
- name: end_char
dtype: int64
- name: token
dtype: string
- name: n_natives
dtype: int64
- name: n_non_natives
dtype: int64
- name: n_dif_natives
dtype: int64
- name: n_dif_non_natives
dtype: int64
- name: label
dtype: int64
- name: probability
dtype: float64
splits:
- name: test
num_bytes: 774054
num_examples: 2251
download_size: 71217
dataset_size: 774054
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
Edopangui/promo3 | Edopangui | "2024-02-04T20:13:31Z" | 32 | 0 | [
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-02-04T20:12:52Z" | ---
license: apache-2.0
---
|
Moulin00/MigelLion | Moulin00 | "2024-02-04T20:37:07Z" | 32 | 0 | [
"license:openrail",
"size_categories:n<1K",
"format:audiofolder",
"modality:audio",
"library:datasets",
"library:mlcroissant",
"region:us"
] | null | "2024-02-04T20:34:35Z" | ---
license: openrail
---
|
Bruno1424/MEKA_COME_COME | Bruno1424 | "2024-02-04T20:51:08Z" | 32 | 0 | [
"license:openrail",
"size_categories:n<1K",
"format:audiofolder",
"modality:audio",
"library:datasets",
"library:mlcroissant",
"region:us"
] | null | "2024-02-04T20:48:37Z" | ---
license: openrail
---
|
Tailsaro/tetsos | Tailsaro | "2024-02-04T21:24:45Z" | 32 | 0 | [
"license:cc-by-4.0",
"size_categories:n<1K",
"format:audiofolder",
"modality:audio",
"library:datasets",
"library:mlcroissant",
"region:us"
] | null | "2024-02-04T21:23:15Z" | ---
license: cc-by-4.0
---
|
alberto2/LLamaVoz | alberto2 | "2024-02-04T21:52:39Z" | 32 | 0 | [
"license:llama2",
"region:us"
] | null | "2024-02-04T21:52:39Z" | ---
license: llama2
---
|
Temo/alpaca-kartuli-0.1 | Temo | "2024-02-04T22:45:59Z" | 32 | 0 | [
"language:ka",
"license:cc-by-4.0",
"size_categories:10K<n<100K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-02-04T21:59:37Z" | ---
license: cc-by-4.0
language:
- ka
size_categories:
- 10K<n<100K
---
# Alpaca-kartuli-0.1
<!-- Provide a quick summary of the dataset. -->
A Georgian translation of the alpaca dataset.
- **Source:** [alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned#dataset-card-for-alpaca-cleaned)
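
A minimal loading sketch with the `datasets` library; the field names are an assumption carried over from the alpaca-cleaned schema and are not confirmed by this card:

```python
# Minimal sketch: load the Georgian Alpaca translation from the Hub.
from datasets import load_dataset

ds = load_dataset("Temo/alpaca-kartuli-0.1", split="train")
print(ds[0])  # expected keys: instruction, input, output (assumed from alpaca-cleaned)
```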
|
khangtran97/gpt89-v2 | khangtran97 | "2024-02-04T23:01:29Z" | 32 | 0 | [
"license:mit",
"size_categories:1K<n<10K",
"format:csv",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-02-04T22:59:17Z" | ---
license: mit
---
|
khannz/docs | khannz | "2024-02-04T23:36:21Z" | 32 | 0 | [
"license:apache-2.0",
"region:us"
] | null | "2024-02-04T23:36:20Z" | ---
license: apache-2.0
---
|
steven2521/squad_v2_rag_qa | steven2521 | "2024-02-04T23:53:22Z" | 32 | 0 | [
"license:mit",
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-02-04T23:49:56Z" | ---
license: mit
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
sequence: int64
- name: question
dtype: string
- name: answers
struct:
- name: answer_start
sequence: int64
- name: text
sequence: string
- name: question_embedding
sequence: float32
splits:
- name: train
num_bytes: 820791044
num_examples: 130319
- name: validation
num_bytes: 75187085
num_examples: 11873
download_size: 966385539
dataset_size: 895978129
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
projetosoclts/Malu | projetosoclts | "2024-02-05T00:33:06Z" | 32 | 0 | [
"license:openrail",
"size_categories:n<1K",
"format:audiofolder",
"modality:audio",
"library:datasets",
"library:mlcroissant",
"region:us"
] | null | "2024-02-05T00:32:28Z" | ---
license: openrail
---
|
DataStudio/OCRWordLevelClear_07 | DataStudio | "2024-02-05T01:07:54Z" | 32 | 0 | [
"region:us"
] | null | "2024-02-05T00:56:58Z" | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 4665530815.72
num_examples: 1034148
download_size: 4456935622
dataset_size: 4665530815.72
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mbenachour/cms_rules1 | mbenachour | "2024-02-05T04:20:52Z" | 32 | 0 | [
"size_categories:n<1K",
"format:parquet",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-02-05T04:17:43Z" | ---
dataset_info:
features:
- name: input_ids
sequence: int32
splits:
- name: train
num_bytes: 106548.0
num_examples: 13
- name: test
num_bytes: 16392.0
num_examples: 2
download_size: 68989
dataset_size: 122940.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
outeiral/VOZIA | outeiral | "2024-02-05T06:41:08Z" | 32 | 0 | [
"license:openrail",
"region:us"
] | null | "2024-02-05T06:39:40Z" | ---
license: openrail
---
|
OUTEIRAL2/VOZIA2 | OUTEIRAL2 | "2024-02-05T06:52:44Z" | 32 | 0 | [
"license:openrail",
"region:us"
] | null | "2024-02-05T06:51:44Z" | ---
license: openrail
---
|
OUTEIRAL2/VOZIA3 | OUTEIRAL2 | "2024-02-05T07:08:57Z" | 32 | 0 | [
"license:openrail",
"region:us"
] | null | "2024-02-05T06:57:08Z" | ---
license: openrail
---
|
OUTEIRAL2/VOZIA4 | OUTEIRAL2 | "2024-02-05T07:26:04Z" | 32 | 0 | [
"license:openrail",
"region:us"
] | null | "2024-02-05T07:21:12Z" | ---
license: openrail
---
|
OUTEIRAL2/VOZIA5 | OUTEIRAL2 | "2024-02-05T07:27:50Z" | 32 | 0 | [
"license:openrail",
"region:us"
] | null | "2024-02-05T07:27:15Z" | ---
license: openrail
---
|
DeepFoldProtein/Foldseek_over70_proteome_UniDoc_test | DeepFoldProtein | "2024-02-05T07:37:13Z" | 32 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-02-05T07:32:00Z" | ---
dataset_info:
features:
- name: uniprotAccession
dtype: string
- name: domain
sequence:
sequence: int64
- name: ndom
dtype: int64
- name: taxId
dtype: string
- name: uniprotSequence
dtype: string
splits:
- name: train
num_bytes: 4002
num_examples: 12
download_size: 7272
dataset_size: 4002
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Sai-Manisha/Fine-tuning-feb-5 | Sai-Manisha | "2024-02-05T10:29:46Z" | 32 | 0 | [
"license:mit",
"size_categories:n<1K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-02-05T08:14:59Z" | ---
license: mit
---
|
myrtotsok/clf-3 | myrtotsok | "2024-02-05T08:56:39Z" | 32 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-02-05T08:56:36Z" | ---
dataset_info:
features:
- name: request
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 121051
num_examples: 1120
- name: validation
num_bytes: 30256
num_examples: 280
download_size: 28195
dataset_size: 151307
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
tyzhu/find_marker_both_sent_train_400_eval_40_in_context | tyzhu | "2024-02-05T09:23:20Z" | 32 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-02-05T09:23:12Z" | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: inputs
dtype: string
- name: targets
dtype: string
- name: title
dtype: string
- name: context
dtype: string
splits:
- name: train
num_bytes: 3738032
num_examples: 1994
- name: validation
num_bytes: 383715
num_examples: 200
download_size: 833365
dataset_size: 4121747
---
# Dataset Card for "find_marker_both_sent_train_400_eval_40_in_context"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
LedaMC/LEDAmc_AI_TrainingSet_Sprint0224_V1_20240205 | LedaMC | "2024-02-05T09:28:23Z" | 32 | 0 | [
"license:apache-2.0",
"region:us"
] | null | "2024-02-05T09:26:41Z" | ---
license: apache-2.0
---
|
Y11IC/mini-platypus | Y11IC | "2024-02-20T15:40:27Z" | 32 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-02-05T09:30:59Z" | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 4202564
num_examples: 1000
download_size: 2248345
dataset_size: 4202564
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Atipico1/NQ-colbert-top-10 | Atipico1 | "2024-02-05T09:44:30Z" | 32 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-02-05T09:43:49Z" | ---
dataset_info:
features:
- name: question
dtype: string
- name: answers
sequence: string
- name: ctxs
list:
- name: hasanswer
dtype: bool
- name: score
dtype: float64
- name: text
dtype: string
- name: title
dtype: string
splits:
- name: train
num_bytes: 574077206
num_examples: 87925
- name: test
num_bytes: 23673906
num_examples: 3610
download_size: 340649717
dataset_size: 597751112
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
tyson0420/stackexchange-4dpo-filby-clang-keywords | tyson0420 | "2024-02-05T09:55:16Z" | 32 | 0 | [
"license:cc-by-sa-4.0",
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-02-05T09:50:20Z" | ---
license: cc-by-sa-4.0
---
|
jivuu0/minhavozjp | jivuu0 | "2024-02-10T02:18:32Z" | 32 | 0 | [
"license:openrail",
"size_categories:n<1K",
"format:audiofolder",
"modality:audio",
"library:datasets",
"library:mlcroissant",
"region:us"
] | null | "2024-02-05T10:55:40Z" | ---
license: openrail
---
|
ibm/Wish-QA-MED-Llama | ibm | "2024-02-05T11:14:23Z" | 32 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-02-05T11:14:04Z" | ---
dataset_info:
features:
- name: pubid
dtype: int64
- name: title_question
dtype: string
- name: context
dtype: string
- name: long_answer
dtype: string
- name: text
dtype: string
- name: qa
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: doc_score
dtype: float64
- name: score_qa
dtype: float64
- name: ans_num_words
dtype: int64
- name: text_num_words
dtype: int64
- name: text_longer_1.5
dtype: int64
splits:
- name: train
num_bytes: 52697515
num_examples: 10000
download_size: 27722168
dataset_size: 52697515
---
# Dataset Card for "Wish-QA-MED-Llama"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
DjSteker/winogrande_train_s_spanish | DjSteker | "2024-02-05T13:59:32Z" | 32 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-02-05T12:23:07Z" | ---
dataset_info:
features:
- name: qID
dtype: string
- name: sentence
dtype: string
- name: option1
dtype: string
- name: option2
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 114262
num_examples: 640
download_size: 51145
dataset_size: 114262
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
chathuranga-jayanath/selfapr-manipulation-bug-error-context-10000 | chathuranga-jayanath | "2024-02-05T16:06:39Z" | 32 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-02-05T12:38:19Z" | ---
dataset_info:
features:
- name: fix
dtype: string
- name: ctx
dtype: string
splits:
- name: train
num_bytes: 5017924
num_examples: 8000
- name: validation
num_bytes: 614517
num_examples: 1000
- name: test
num_bytes: 608165
num_examples: 1000
download_size: 2850672
dataset_size: 6240606
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
DjSteker/alpaca-es-auto-filter | DjSteker | "2024-02-05T15:09:26Z" | 32 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-02-05T15:08:50Z" | ---
dataset_info:
features:
- name: text
dtype: 'null'
- name: inputs
struct:
- name: 1-instruction
dtype: string
- name: 2-input
dtype: string
- name: 3-output
dtype: string
- name: prediction
dtype: 'null'
- name: prediction_agent
dtype: 'null'
- name: annotation
dtype: string
- name: annotation_agent
dtype: string
- name: vectors
struct:
- name: input
sequence: float64
- name: instruction
sequence: float64
- name: output
sequence: float64
- name: multi_label
dtype: bool
- name: explanation
dtype: 'null'
- name: id
dtype: string
- name: metadata
struct:
- name: bias_score.label
dtype: string
- name: bias_score.score
dtype: float64
- name: en_index
dtype: int64
- name: hate_score.label
dtype: string
- name: hate_score.score
dtype: float64
- name: sf-multi-unprocessable-score
dtype: float64
- name: sf-unprocessable-score
dtype: float64
- name: tr-flag-1-instruction
dtype: bool
- name: tr-flag-2-input
dtype: bool
- name: tr-flag-3-output
dtype: bool
- name: status
dtype: string
- name: event_timestamp
dtype: timestamp[us]
- name: metrics
struct:
- name: text_length
dtype: int64
splits:
- name: train
num_bytes: 986677202
num_examples: 51942
download_size: 653488377
dataset_size: 986677202
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
fabiodrozdowiski/VozFabio | fabiodrozdowiski | "2024-02-05T15:14:05Z" | 32 | 0 | [
"license:openrail",
"size_categories:n<1K",
"format:audiofolder",
"modality:audio",
"library:datasets",
"library:mlcroissant",
"region:us"
] | null | "2024-02-05T15:09:30Z" | ---
license: openrail
---
|
stefania-radu/rendered_wikipedia_sw | stefania-radu | "2024-02-05T16:09:13Z" | 32 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:image",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-02-05T16:08:43Z" | ---
dataset_info:
features:
- name: pixel_values
dtype: image
- name: num_patches
dtype: int64
splits:
- name: train
num_bytes: 794564331.125
num_examples: 106175
download_size: 746521227
dataset_size: 794564331.125
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
stefania-radu/rendered_wikipedia_lg | stefania-radu | "2024-02-05T16:32:24Z" | 32 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-02-05T16:32:17Z" | ---
dataset_info:
features:
- name: pixel_values
dtype: image
- name: num_patches
dtype: int64
splits:
- name: train
num_bytes: 70248280.5
num_examples: 8228
download_size: 70021265
dataset_size: 70248280.5
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
gvlk/celebqav3 | gvlk | "2024-02-05T16:46:49Z" | 32 | 1 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-02-05T16:33:33Z" | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: input_ids
sequence: int32
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 1945935
num_examples: 870
download_size: 308641
dataset_size: 1945935
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
stefania-radu/rendered_wikipedia_pcm | stefania-radu | "2024-02-05T16:33:45Z" | 32 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-02-05T16:33:40Z" | ---
dataset_info:
features:
- name: pixel_values
dtype: image
- name: num_patches
dtype: int64
splits:
- name: train
num_bytes: 20577713.125
num_examples: 2135
download_size: 20657609
dataset_size: 20577713.125
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
stefania-radu/rendered_wikipedia_wo | stefania-radu | "2024-02-05T16:34:53Z" | 32 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-02-05T16:34:48Z" | ---
dataset_info:
features:
- name: pixel_values
dtype: image
- name: num_patches
dtype: int64
splits:
- name: train
num_bytes: 35560741.75
num_examples: 3938
download_size: 35390027
dataset_size: 35560741.75
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
CyberHarem/aqua_konosuba | CyberHarem | "2024-02-05T18:10:24Z" | 32 | 0 | [
"task_categories:text-to-image",
"license:mit",
"size_categories:1K<n<10K",
"library:datasets",
"library:mlcroissant",
"region:us",
"art",
"not-for-all-audiences"
] | [
"text-to-image"
] | "2024-02-05T16:36:55Z" | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of aqua/アクア (Kono Subarashii Sekai ni Shukufuku wo!)
This is the dataset of aqua/アクア (Kono Subarashii Sekai ni Shukufuku wo!), containing 758 images and their tags.
The core tags of this character are `blue_hair, long_hair, hair_ornament, hair_rings, blue_eyes, bow, green_bow`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:------------|:---------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 758 | 685.89 MiB | [Download](https://huggingface.co/datasets/CyberHarem/aqua_konosuba/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 758 | 535.83 MiB | [Download](https://huggingface.co/datasets/CyberHarem/aqua_konosuba/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 1558 | 1021.39 MiB | [Download](https://huggingface.co/datasets/CyberHarem/aqua_konosuba/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 758 | 685.61 MiB | [Download](https://huggingface.co/datasets/CyberHarem/aqua_konosuba/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 1558 | 1.22 GiB | [Download](https://huggingface.co/datasets/CyberHarem/aqua_konosuba/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
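For the IMG+TXT packages above, a minimal sketch (not part of the original card) for downloading one archive and pairing each image with its tag file; the flat same-stem `.txt` layout and `.png` extension are assumptions:

```python
import os
import zipfile
from glob import glob

from huggingface_hub import hf_hub_download

# download the 800px IMG+TXT archive listed in the table above
zip_file = hf_hub_download(
    repo_id='CyberHarem/aqua_konosuba',
    repo_type='dataset',
    filename='dataset-800.zip',
)

# extract the archive to a local directory
dataset_dir = 'dataset_800'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)

# pair each image with its same-stem .txt tag file (assumed layout)
for image_path in sorted(glob(os.path.join(dataset_dir, '**', '*.png'), recursive=True)):
    tag_path = os.path.splitext(image_path)[0] + '.txt'
    if os.path.exists(tag_path):
        with open(tag_path, encoding='utf-8') as f:
            print(image_path, '->', f.read().strip())
```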
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html) loading. If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/aqua_konosuba',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some outfits may be mined from these clusters.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:---------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| 0 | 7 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 1girl, anime_coloring, bangs, bare_shoulders, detached_sleeves, open_mouth, solo, upper_body, hair_between_eyes, looking_at_viewer, single_hair_ring, medium_breasts |
| 1 | 8 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 1girl, breasts, detached_sleeves, solo, bare_shoulders, single_hair_ring, upper_body, anime_coloring, looking_at_viewer, smile, hair_between_eyes |
| 2 | 16 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | 1girl, solo, open_mouth, anime_coloring, detached_sleeves |
| 3 | 5 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | 1girl, detached_sleeves, single_hair_ring, solo, thighhighs, very_long_hair, blue_skirt |
| 4 | 13 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | 1girl, smile, solo, detached_sleeves, closed_eyes, bare_shoulders |
| 5 | 10 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | 1girl, anime_coloring, blush, open_mouth, solo, tears, parody, crying, detached_sleeves, closed_eyes, meme |
| 6 | 5 | ![](samples/6/clu6-sample0.png) | ![](samples/6/clu6-sample1.png) | ![](samples/6/clu6-sample2.png) | ![](samples/6/clu6-sample3.png) | ![](samples/6/clu6-sample4.png) | 1girl, anime_coloring, bare_shoulders, closed_mouth, detached_sleeves, solo, bangs, hair_between_eyes, looking_at_viewer, upper_body |
| 7 | 5 | ![](samples/7/clu7-sample0.png) | ![](samples/7/clu7-sample1.png) | ![](samples/7/clu7-sample2.png) | ![](samples/7/clu7-sample3.png) | ![](samples/7/clu7-sample4.png) | 1girl, anime_coloring, closed_mouth, solo, closed_eyes, hair_between_eyes, smile |
| 8 | 5 | ![](samples/8/clu8-sample0.png) | ![](samples/8/clu8-sample1.png) | ![](samples/8/clu8-sample2.png) | ![](samples/8/clu8-sample3.png) | ![](samples/8/clu8-sample4.png) | 1girl, bare_shoulders, collarbone, solo, upper_body, anime_coloring, blush, smile, breasts |
| 9 | 7 | ![](samples/9/clu9-sample0.png) | ![](samples/9/clu9-sample1.png) | ![](samples/9/clu9-sample2.png) | ![](samples/9/clu9-sample3.png) | ![](samples/9/clu9-sample4.png) | 1girl, blue_footwear, blue_shirt, blue_skirt, detached_sleeves, solo, thigh_boots, thighhighs_under_boots, breasts, open_mouth, white_thighhighs, very_long_hair |
| 10 | 7 | ![](samples/10/clu10-sample0.png) | ![](samples/10/clu10-sample1.png) | ![](samples/10/clu10-sample2.png) | ![](samples/10/clu10-sample3.png) | ![](samples/10/clu10-sample4.png) | 1girl, anime_coloring, bangs, closed_mouth, hair_between_eyes, solo, blurry, looking_at_viewer, portrait, smile |
| 11 | 7 | ![](samples/11/clu11-sample0.png) | ![](samples/11/clu11-sample1.png) | ![](samples/11/clu11-sample2.png) | ![](samples/11/clu11-sample3.png) | ![](samples/11/clu11-sample4.png) | 1boy, 1girl, detached_sleeves, solo_focus, blue_skirt, single_hair_ring, anime_coloring, breasts, looking_at_viewer, open_mouth |
| 12 | 6 | ![](samples/12/clu12-sample0.png) | ![](samples/12/clu12-sample1.png) | ![](samples/12/clu12-sample2.png) | ![](samples/12/clu12-sample3.png) | ![](samples/12/clu12-sample4.png) | 2girls, detached_sleeves, open_mouth, solo_focus, blue_skirt, anime_coloring |
| 13 | 5 | ![](samples/13/clu13-sample0.png) | ![](samples/13/clu13-sample1.png) | ![](samples/13/clu13-sample2.png) | ![](samples/13/clu13-sample3.png) | ![](samples/13/clu13-sample4.png) | 2girls, blue_footwear, blue_skirt, breasts, detached_sleeves, thighhighs, brown_hair, open_mouth, thigh_boots, 1boy, bare_shoulders, single_hair_ring |
| 14 | 7 | ![](samples/14/clu14-sample0.png) | ![](samples/14/clu14-sample1.png) | ![](samples/14/clu14-sample2.png) | ![](samples/14/clu14-sample3.png) | ![](samples/14/clu14-sample4.png) | 1girl, enmaided, maid_apron, maid_headdress, zettai_ryouiki, breasts, solo, frills, single_hair_ring, white_thighhighs |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 1girl | anime_coloring | bangs | bare_shoulders | detached_sleeves | open_mouth | solo | upper_body | hair_between_eyes | looking_at_viewer | single_hair_ring | medium_breasts | breasts | smile | thighhighs | very_long_hair | blue_skirt | closed_eyes | blush | tears | parody | crying | meme | closed_mouth | collarbone | blue_footwear | blue_shirt | thigh_boots | thighhighs_under_boots | white_thighhighs | blurry | portrait | 1boy | solo_focus | 2girls | brown_hair | enmaided | maid_apron | maid_headdress | zettai_ryouiki | frills |
|----:|----------:|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:----------------------------------|:--------|:-----------------|:--------|:-----------------|:-------------------|:-------------|:-------|:-------------|:--------------------|:--------------------|:-------------------|:-----------------|:----------|:--------|:-------------|:-----------------|:-------------|:--------------|:--------|:--------|:---------|:---------|:-------|:---------------|:-------------|:----------------|:-------------|:--------------|:-------------------------|:-------------------|:---------|:-----------|:-------|:-------------|:---------|:-------------|:-----------|:-------------|:-----------------|:-----------------|:---------|
| 0 | 7 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 1 | 8 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | X | | X | X | | X | X | X | X | X | | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 2 | 16 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | X | X | | | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | | |
| 3 | 5 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | X | | | | X | | X | | | | X | | | | X | X | X | | | | | | | | | | | | | | | | | | | | | | | | |
| 4 | 13 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | X | | | X | X | | X | | | | | | | X | | | | X | | | | | | | | | | | | | | | | | | | | | | | |
| 5 | 10 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | X | X | | | X | X | X | | | | | | | | | | | X | X | X | X | X | X | | | | | | | | | | | | | | | | | | |
| 6 | 5 | ![](samples/6/clu6-sample0.png) | ![](samples/6/clu6-sample1.png) | ![](samples/6/clu6-sample2.png) | ![](samples/6/clu6-sample3.png) | ![](samples/6/clu6-sample4.png) | X | X | X | X | X | | X | X | X | X | | | | | | | | | | | | | | X | | | | | | | | | | | | | | | | | |
| 7 | 5 | ![](samples/7/clu7-sample0.png) | ![](samples/7/clu7-sample1.png) | ![](samples/7/clu7-sample2.png) | ![](samples/7/clu7-sample3.png) | ![](samples/7/clu7-sample4.png) | X | X | | | | | X | | X | | | | | X | | | | X | | | | | | X | | | | | | | | | | | | | | | | | |
| 8 | 5 | ![](samples/8/clu8-sample0.png) | ![](samples/8/clu8-sample1.png) | ![](samples/8/clu8-sample2.png) | ![](samples/8/clu8-sample3.png) | ![](samples/8/clu8-sample4.png) | X | X | | X | | | X | X | | | | | X | X | | | | | X | | | | | | X | | | | | | | | | | | | | | | | |
| 9 | 7 | ![](samples/9/clu9-sample0.png) | ![](samples/9/clu9-sample1.png) | ![](samples/9/clu9-sample2.png) | ![](samples/9/clu9-sample3.png) | ![](samples/9/clu9-sample4.png) | X | | | | X | X | X | | | | | | X | | | X | X | | | | | | | | | X | X | X | X | X | | | | | | | | | | | |
| 10 | 7 | ![](samples/10/clu10-sample0.png) | ![](samples/10/clu10-sample1.png) | ![](samples/10/clu10-sample2.png) | ![](samples/10/clu10-sample3.png) | ![](samples/10/clu10-sample4.png) | X | X | X | | | | X | | X | X | | | | X | | | | | | | | | | X | | | | | | | X | X | | | | | | | | | |
| 11 | 7 | ![](samples/11/clu11-sample0.png) | ![](samples/11/clu11-sample1.png) | ![](samples/11/clu11-sample2.png) | ![](samples/11/clu11-sample3.png) | ![](samples/11/clu11-sample4.png) | X | X | | | X | X | | | | X | X | | X | | | | X | | | | | | | | | | | | | | | | X | X | | | | | | | |
| 12 | 6 | ![](samples/12/clu12-sample0.png) | ![](samples/12/clu12-sample1.png) | ![](samples/12/clu12-sample2.png) | ![](samples/12/clu12-sample3.png) | ![](samples/12/clu12-sample4.png) | | X | | | X | X | | | | | | | | | | | X | | | | | | | | | | | | | | | | | X | X | | | | | | |
| 13 | 5 | ![](samples/13/clu13-sample0.png) | ![](samples/13/clu13-sample1.png) | ![](samples/13/clu13-sample2.png) | ![](samples/13/clu13-sample3.png) | ![](samples/13/clu13-sample4.png) | | | | X | X | X | | | | | X | | X | | X | | X | | | | | | | | | X | | X | | | | | X | | X | X | | | | | |
| 14 | 7 | ![](samples/14/clu14-sample0.png) | ![](samples/14/clu14-sample1.png) | ![](samples/14/clu14-sample2.png) | ![](samples/14/clu14-sample3.png) | ![](samples/14/clu14-sample4.png) | X | | | | | | X | | | | X | | X | | | | | | | | | | | | | | | | | X | | | | | | | X | X | X | X | X |
|
CyberHarem/lalatina_dustiness_ford_konosuba | CyberHarem | "2024-02-05T17:33:58Z" | 32 | 0 | [
"task_categories:text-to-image",
"license:mit",
"size_categories:1K<n<10K",
"library:datasets",
"library:mlcroissant",
"region:us",
"art",
"not-for-all-audiences"
] | [
"text-to-image"
] | "2024-02-05T16:39:04Z" | ---
license: mit
task_categories:
- text-to-image
tags:
- art
- not-for-all-audiences
size_categories:
- n<1K
---
# Dataset of lalatina_dustiness_ford/ダクネス (Kono Subarashii Sekai ni Shukufuku wo!)
This is the dataset of lalatina_dustiness_ford/ダクネス (Kono Subarashii Sekai ni Shukufuku wo!), containing 451 images and their tags.
The core tags of this character are `blonde_hair, long_hair, hair_ornament, ponytail, x_hair_ornament, blue_eyes, breasts`, which are pruned in this dataset.
Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), and the auto-crawling system is powered by the [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)).
## List of Packages
| Name | Images | Size | Download | Type | Description |
|:-----------------|---------:|:-----------|:----------------------------------------------------------------------------------------------------------------------------------|:-----------|:---------------------------------------------------------------------|
| raw | 451 | 408.05 MiB | [Download](https://huggingface.co/datasets/CyberHarem/lalatina_dustiness_ford_konosuba/resolve/main/dataset-raw.zip) | Waifuc-Raw | Raw data with meta information (min edge aligned to 1400 if larger). |
| 800 | 451 | 321.01 MiB | [Download](https://huggingface.co/datasets/CyberHarem/lalatina_dustiness_ford_konosuba/resolve/main/dataset-800.zip) | IMG+TXT | dataset with the shorter side not exceeding 800 pixels. |
| stage3-p480-800 | 942 | 616.34 MiB | [Download](https://huggingface.co/datasets/CyberHarem/lalatina_dustiness_ford_konosuba/resolve/main/dataset-stage3-p480-800.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
| 1200 | 451 | 407.89 MiB | [Download](https://huggingface.co/datasets/CyberHarem/lalatina_dustiness_ford_konosuba/resolve/main/dataset-1200.zip) | IMG+TXT | dataset with the shorter side not exceeding 1200 pixels. |
| stage3-p480-1200 | 942 | 756.29 MiB | [Download](https://huggingface.co/datasets/CyberHarem/lalatina_dustiness_ford_konosuba/resolve/main/dataset-stage3-p480-1200.zip) | IMG+TXT | 3-stage cropped dataset with the area not less than 480x480 pixels. |
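The IMG+TXT packages are plain zip archives of image/tag-text pairs. A minimal sketch for fetching one of them (the `800` package here) with `huggingface_hub` — the file name is taken from the table above:
```python
import os
import zipfile

from huggingface_hub import hf_hub_download

# download the 800px IMG+TXT package listed in the table above
zip_file = hf_hub_download(
    repo_id='CyberHarem/lalatina_dustiness_ford_konosuba',
    repo_type='dataset',
    filename='dataset-800.zip',
)

# extract the image/tag-text pairs to a local directory
dataset_dir = 'dataset_800'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
    zf.extractall(dataset_dir)
```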
### Load Raw Dataset with Waifuc
We provide the raw dataset (including tagged images) for loading with [waifuc](https://deepghs.github.io/waifuc/main/tutorials/installation/index.html). If you need it, just run the following code:
```python
import os
import zipfile
from huggingface_hub import hf_hub_download
from waifuc.source import LocalSource
# download raw archive file
zip_file = hf_hub_download(
repo_id='CyberHarem/lalatina_dustiness_ford_konosuba',
repo_type='dataset',
filename='dataset-raw.zip',
)
# extract files to your directory
dataset_dir = 'dataset_dir'
os.makedirs(dataset_dir, exist_ok=True)
with zipfile.ZipFile(zip_file, 'r') as zf:
zf.extractall(dataset_dir)
# load the dataset with waifuc
source = LocalSource(dataset_dir)
for item in source:
print(item.image, item.meta['filename'], item.meta['tags'])
```
## List of Clusters
List of tag clustering results; some recurring outfits may be mined from these clusters.
### Raw Text Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | Tags |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:------------------------------------------------------------------------------------------------------------|
| 0 | 19 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | 2girls, parody, armor, anime_coloring, gloves, blue_hair |
| 1 | 7 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | 2girls, armor, blush, open_mouth, smile, gloves, 1boy, parody |
| 2 | 8 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | 1girl, armor, gloves, parody, solo, open_mouth |
| 3 | 7 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | 1girl, anime_coloring, armor, parody, solo |
| 4 | 6 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | 1girl, armor, open_mouth, solo, sword, parody, anime_coloring |
| 5 | 11 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | 1girl, armor, sword, solo, style_parody, black_gloves |
| 6 | 10 | ![](samples/6/clu6-sample0.png) | ![](samples/6/clu6-sample1.png) | ![](samples/6/clu6-sample2.png) | ![](samples/6/clu6-sample3.png) | ![](samples/6/clu6-sample4.png) | 1girl, armor, solo, holding_sword, gloves, parody, anime_coloring |
| 7 | 6 | ![](samples/7/clu7-sample0.png) | ![](samples/7/clu7-sample1.png) | ![](samples/7/clu7-sample2.png) | ![](samples/7/clu7-sample3.png) | ![](samples/7/clu7-sample4.png) | 1girl, blush, completely_nude, large_breasts, solo, window, barefoot, collarbone, covering_breasts, indoors |
### Table Version
| # | Samples | Img-1 | Img-2 | Img-3 | Img-4 | Img-5 | 2girls | parody | armor | anime_coloring | gloves | blue_hair | blush | open_mouth | smile | 1boy | 1girl | solo | sword | style_parody | black_gloves | holding_sword | completely_nude | large_breasts | window | barefoot | collarbone | covering_breasts | indoors |
|----:|----------:|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:--------------------------------|:---------|:---------|:--------|:-----------------|:---------|:------------|:--------|:-------------|:--------|:-------|:--------|:-------|:--------|:---------------|:---------------|:----------------|:------------------|:----------------|:---------|:-----------|:-------------|:-------------------|:----------|
| 0 | 19 | ![](samples/0/clu0-sample0.png) | ![](samples/0/clu0-sample1.png) | ![](samples/0/clu0-sample2.png) | ![](samples/0/clu0-sample3.png) | ![](samples/0/clu0-sample4.png) | X | X | X | X | X | X | | | | | | | | | | | | | | | | | |
| 1 | 7 | ![](samples/1/clu1-sample0.png) | ![](samples/1/clu1-sample1.png) | ![](samples/1/clu1-sample2.png) | ![](samples/1/clu1-sample3.png) | ![](samples/1/clu1-sample4.png) | X | X | X | | X | | X | X | X | X | | | | | | | | | | | | | |
| 2 | 8 | ![](samples/2/clu2-sample0.png) | ![](samples/2/clu2-sample1.png) | ![](samples/2/clu2-sample2.png) | ![](samples/2/clu2-sample3.png) | ![](samples/2/clu2-sample4.png) | | X | X | | X | | | X | | | X | X | | | | | | | | | | | |
| 3 | 7 | ![](samples/3/clu3-sample0.png) | ![](samples/3/clu3-sample1.png) | ![](samples/3/clu3-sample2.png) | ![](samples/3/clu3-sample3.png) | ![](samples/3/clu3-sample4.png) | | X | X | X | | | | | | | X | X | | | | | | | | | | | |
| 4 | 6 | ![](samples/4/clu4-sample0.png) | ![](samples/4/clu4-sample1.png) | ![](samples/4/clu4-sample2.png) | ![](samples/4/clu4-sample3.png) | ![](samples/4/clu4-sample4.png) | | X | X | X | | | | X | | | X | X | X | | | | | | | | | | |
| 5 | 11 | ![](samples/5/clu5-sample0.png) | ![](samples/5/clu5-sample1.png) | ![](samples/5/clu5-sample2.png) | ![](samples/5/clu5-sample3.png) | ![](samples/5/clu5-sample4.png) | | | X | | | | | | | | X | X | X | X | X | | | | | | | | |
| 6 | 10 | ![](samples/6/clu6-sample0.png) | ![](samples/6/clu6-sample1.png) | ![](samples/6/clu6-sample2.png) | ![](samples/6/clu6-sample3.png) | ![](samples/6/clu6-sample4.png) | | X | X | X | X | | | | | | X | X | | | | X | | | | | | | |
| 7 | 6 | ![](samples/7/clu7-sample0.png) | ![](samples/7/clu7-sample1.png) | ![](samples/7/clu7-sample2.png) | ![](samples/7/clu7-sample3.png) | ![](samples/7/clu7-sample4.png) | | | | | | | X | | | | X | X | | | | | X | X | X | X | X | X | X |
|
donbatatone/narkeshao | donbatatone | "2024-02-05T17:08:15Z" | 32 | 0 | [
"license:openrail",
"size_categories:n<1K",
"format:audiofolder",
"modality:audio",
"library:datasets",
"library:mlcroissant",
"region:us"
] | null | "2024-02-05T17:06:11Z" | ---
license: openrail
---
|
Thanmay/commonsense_qa-ta | Thanmay | "2024-02-05T17:12:33Z" | 32 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-02-05T17:12:27Z" | ---
dataset_info:
features:
- name: id
dtype: string
- name: question
dtype: string
- name: question_concept
dtype: string
- name: choices
sequence:
- name: label
dtype: string
- name: text
dtype: string
- name: answerKey
dtype: string
- name: itv2 ta question
dtype: string
splits:
- name: validation
num_bytes: 547460
num_examples: 1221
- name: test
num_bytes: 520757
num_examples: 1140
download_size: 510339
dataset_size: 1068217
configs:
- config_name: default
data_files:
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
Thanmay/commonsense_qa-gu | Thanmay | "2024-02-05T17:14:07Z" | 32 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-02-05T17:14:02Z" | ---
dataset_info:
features:
- name: id
dtype: string
- name: question
dtype: string
- name: question_concept
dtype: string
- name: choices
sequence:
- name: label
dtype: string
- name: text
dtype: string
- name: answerKey
dtype: string
- name: itv2 gu question
dtype: string
splits:
- name: validation
num_bytes: 493203
num_examples: 1221
- name: test
num_bytes: 468965
num_examples: 1140
download_size: 492913
dataset_size: 962168
configs:
- config_name: default
data_files:
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
ibrahimahmood/PIDRAY | ibrahimahmood | "2024-02-20T05:16:58Z" | 32 | 0 | [
"license:mit",
"size_categories:n<1K",
"format:parquet",
"modality:image",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-02-05T17:15:31Z" | ---
license: mit
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': images
'1': labels
splits:
- name: train
num_bytes: 6424849.0
num_examples: 60
download_size: 6415751
dataset_size: 6424849.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
leoleo2024/VOZDUKE | leoleo2024 | "2024-03-26T16:39:28Z" | 32 | 0 | [
"license:openrail",
"size_categories:n<1K",
"format:audiofolder",
"modality:audio",
"library:datasets",
"library:mlcroissant",
"region:us"
] | null | "2024-02-05T18:54:26Z" | ---
license: openrail
---
|
DuCorsa/FernandoIA | DuCorsa | "2024-02-05T19:35:22Z" | 32 | 0 | [
"license:openrail",
"size_categories:n<1K",
"format:audiofolder",
"modality:audio",
"library:datasets",
"library:mlcroissant",
"region:us"
] | null | "2024-02-05T19:34:01Z" | ---
license: openrail
---
|
furry-br/angel-dustV2 | furry-br | "2024-02-05T20:24:56Z" | 32 | 0 | [
"license:openrail",
"size_categories:n<1K",
"format:audiofolder",
"modality:audio",
"library:datasets",
"library:mlcroissant",
"region:us"
] | null | "2024-02-05T19:53:20Z" | ---
license: openrail
---
|
Ziggy1/dataset | Ziggy1 | "2024-02-05T23:05:59Z" | 32 | 0 | [
"license:apache-2.0",
"size_categories:n<1K",
"format:audiofolder",
"modality:audio",
"library:datasets",
"library:mlcroissant",
"region:us"
] | null | "2024-02-05T20:59:32Z" | ---
license: apache-2.0
---
|
Atipico1/mrqa_preprocessed_thres-0.95_by-dpr | Atipico1 | "2024-02-06T05:25:33Z" | 32 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-02-05T21:29:41Z" | ---
dataset_info:
features:
- name: subset
dtype: string
- name: qid
dtype: string
- name: question
dtype: string
- name: answers
sequence: string
- name: masked_query
dtype: string
- name: context
dtype: string
- name: answer_sent
dtype: string
- name: answer_in_context
sequence: string
- name: query_embedding
sequence: float32
splits:
- name: train
num_bytes: 823710051.792633
num_examples: 204348
download_size: 858780623
dataset_size: 823710051.792633
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
- MRQA loaded without SearchQA -> size: 450309
- Remove duplicates by string match -> before: 450309 | after: 401207
- Context preprocessing -> before: 401207 | after: 381972
- Split -> before: 381972 | after: 378213
- Context length filtering -> after: 233328
- Answer length filtering -> after: 222697
- Remove duplicates by similarity -> before: 222697 | after: 204348
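The final similarity-based pass is not described further; the dataset name suggests DPR query embeddings with a 0.95 threshold. A purely illustrative, hypothetical sketch of such a greedy pass over the stored `query_embedding` vectors (the actual procedure used for this dataset may differ):
```python
import numpy as np

def dedup_by_similarity(embeddings, threshold=0.95):
    """Greedy pass: keep a row only if its cosine similarity to every
    previously kept row stays below `threshold`; returns kept row indices."""
    # L2-normalize so a dot product equals cosine similarity
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    kept_idx, kept_vecs = [], []
    for i, vec in enumerate(normed):
        # drop the question if it is too close to one we already kept
        if kept_vecs and float(np.max(np.stack(kept_vecs) @ vec)) >= threshold:
            continue
        kept_idx.append(i)
        kept_vecs.append(vec)
    return kept_idx

# e.g.: kept = dedup_by_similarity(np.asarray(ds["query_embedding"], dtype=np.float32))
```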
|
pccl-org/formal-logic-simple-order-new-objects-paired-bigger-5000 | pccl-org | "2024-02-27T21:09:02Z" | 32 | 0 | [
"size_categories:10M<n<100M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-02-05T21:34:36Z" | ---
dataset_info:
features:
- name: greater_than
dtype: string
- name: less_than
dtype: string
- name: paired_example
sequence:
sequence: string
- name: correct_example
sequence: string
- name: incorrect_example
sequence: string
- name: distance
dtype: int64
- name: index
dtype: int64
- name: index_in_distance
dtype: int64
splits:
- name: train
num_bytes: 3166998759
num_examples: 12492503
download_size: 1120426911
dataset_size: 3166998759
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "formal-logic-simple-order-new-objects-paired-bigger-5000"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
jetaudio/zh2en_names | jetaudio | "2024-02-25T04:07:46Z" | 32 | 0 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-02-05T21:40:56Z" | ---
dataset_info:
features:
- name: trg
dtype: string
- name: scr
dtype: string
splits:
- name: train
num_bytes: 40707111.848671876
num_examples: 1023730
download_size: 29232333
dataset_size: 40707111.848671876
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
stefania-radu/rendered_wikipedia_fi | stefania-radu | "2024-04-03T08:45:51Z" | 32 | 0 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:image",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-02-05T21:43:49Z" | ---
dataset_info:
features:
- name: pixel_values
dtype: image
- name: num_patches
dtype: int64
splits:
- name: train
num_bytes: 11245230282.5
num_examples: 1141884
download_size: 11299863559
dataset_size: 11245230282.5
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
stefania-radu/rendered_wikipedia_id | stefania-radu | "2024-04-03T08:43:43Z" | 32 | 0 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:image",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-02-05T21:46:34Z" | ---
dataset_info:
features:
- name: pixel_values
dtype: image
- name: num_patches
dtype: int64
splits:
- name: train
num_bytes: 11561408609.625
num_examples: 1291243
download_size: 11592553083
dataset_size: 11561408609.625
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
FVilmar/lombardi | FVilmar | "2024-02-05T22:25:15Z" | 32 | 0 | [
"license:openrail",
"size_categories:n<1K",
"format:audiofolder",
"modality:audio",
"library:datasets",
"library:mlcroissant",
"region:us"
] | null | "2024-02-05T21:47:44Z" | ---
license: openrail
---
|
stefania-radu/rendered_wikipedia_ko | stefania-radu | "2024-04-03T08:38:55Z" | 32 | 0 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:image",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-02-05T22:01:52Z" | ---
dataset_info:
features:
- name: pixel_values
dtype: image
- name: num_patches
dtype: int64
splits:
- name: train
num_bytes: 19055705879.75
num_examples: 1098130
download_size: 19117198705
dataset_size: 19055705879.75
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ekazuki/subject-to-group | ekazuki | "2024-02-05T22:09:06Z" | 32 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-02-05T22:09:04Z" | ---
dataset_info:
features:
- name: group
dtype: string
- name: subject
dtype: string
splits:
- name: train
num_bytes: 8624704
num_examples: 313251
download_size: 4579563
dataset_size: 8624704
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ekazuki/embedding-to-group | ekazuki | "2024-02-05T22:10:55Z" | 32 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-02-05T22:10:19Z" | ---
dataset_info:
features:
- name: group
dtype: string
- name: embedding
sequence: float64
splits:
- name: train
num_bytes: 965694758
num_examples: 313251
download_size: 814149184
dataset_size: 965694758
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
FVilmar/faabricio_silv | FVilmar | "2024-02-05T22:48:24Z" | 32 | 0 | [
"license:openrail",
"size_categories:n<1K",
"format:audiofolder",
"modality:audio",
"library:datasets",
"library:mlcroissant",
"region:us"
] | null | "2024-02-05T22:48:07Z" | ---
license: openrail
---
|
FVilmar/cid_moreira | FVilmar | "2024-02-05T23:29:11Z" | 32 | 0 | [
"license:openrail",
"size_categories:n<1K",
"format:audiofolder",
"modality:audio",
"library:datasets",
"library:mlcroissant",
"region:us"
] | null | "2024-02-05T23:28:43Z" | ---
license: openrail
---
|
yeonggeunjang/carrotPerMiles | yeonggeunjang | "2024-02-06T00:56:52Z" | 32 | 0 | [
"license:apache-2.0",
"region:us"
] | null | "2024-02-06T00:56:52Z" | ---
license: apache-2.0
---
|
hjhkoream/potato | hjhkoream | "2024-02-06T02:31:17Z" | 32 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-02-06T01:25:02Z" | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: name
dtype: string
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 200172044.718
num_examples: 2961
- name: test
num_bytes: 63216906.0
num_examples: 987
- name: valid
num_bytes: 70564153.0
num_examples: 987
download_size: 331048809
dataset_size: 333953103.718
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: valid
path: data/valid-*
---
|
vwxyzjn/EleutherAI_pythia-1b-deduped__dpo_on_policy__tldr | vwxyzjn | "2024-02-13T15:35:58Z" | 32 | 0 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-02-06T01:25:06Z" | ---
dataset_info:
features:
- name: chosen
dtype: string
- name: rejected
dtype: string
- name: chosen_token
sequence: int64
- name: rejected_token
sequence: int64
- name: chosen_token_label
sequence: int64
- name: rejected_token_label
sequence: int64
splits:
- name: dpo_on_policy__1__1707191080
num_bytes: 5903392
num_examples: 256
- name: dpo_on_policy__1__1707191514
num_bytes: 737346
num_examples: 32
- name: dpo_on_policy__1__1707191827
num_bytes: 1474271
num_examples: 64
- name: dpo_on_policy__1__1707191954
num_bytes: 5903392
num_examples: 256
- name: dpo_on_policy__1__1707192216
num_bytes: 5903392
num_examples: 256
- name: dpo_on_policy__1__1707192515
num_bytes: 5903178
num_examples: 256
- name: dpo_on_policy__1__1707200734
num_bytes: 2686492433
num_examples: 116480
- name: dpo_on_policy__1__1707792349
num_bytes: 2686492433
num_examples: 116480
- name: dpo_on_policy__1__1707792340
num_bytes: 2686492433
num_examples: 116480
- name: epoch_1
num_bytes: 2691157952
num_examples: 116480
- name: dpo_on_policy__1__1707795707
num_bytes: 2686492433
num_examples: 116480
- name: epoch_2
num_bytes: 2722175510
num_examples: 116480
- name: epoch_3
num_bytes: 2690611469
num_examples: 116480
- name: dpo_on_policy__1__1707833448
num_bytes: 2686492433
num_examples: 116480
- name: dpo_on_policy__1__1707833448epoch_1
num_bytes: 2691512798
num_examples: 116480
download_size: 3859313963
dataset_size: 24253744865
---
# Dataset Card for "EleutherAI_pythia-1b-deduped__dpo_on_policy__tldr"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
harpreetsahota/quantization_experiment_results | harpreetsahota | "2024-02-06T04:34:59Z" | 32 | 1 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-02-06T01:36:28Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: unquantized_generated_text
dtype: string
- name: unquantized_execution_time (s)
dtype: float64
- name: bnb_quantized_generated_text
dtype: string
- name: bnb_quantized_execution_time (s)
dtype: float64
- name: gptq_4bit_generated_text
dtype: string
- name: gptq_4bit_execution_time (s)
dtype: float64
- name: gptq_2bit_generated_text
dtype: string
- name: gptq_2bit_execution_time (s)
dtype: float64
- name: gguf_quantized_generated_text
dtype: string
- name: gguf_quantized_execution_time (s)
dtype: float64
splits:
- name: train
num_bytes: 179988
num_examples: 50
download_size: 107125
dataset_size: 179988
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
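Each row pairs a prompt with the generated text and wall-clock latency for the unquantized model and four quantized variants (bitsandbytes, GPTQ 4-bit, GPTQ 2-bit, GGUF). A minimal sketch for comparing average latencies — the column names come from the feature list above, the rest is illustrative:
```python
from datasets import load_dataset

ds = load_dataset("harpreetsahota/quantization_experiment_results", split="train")
df = ds.to_pandas()

# mean wall-clock generation time (seconds) per quantization scheme
time_cols = [c for c in df.columns if c.endswith("execution_time (s)")]
print(df[time_cols].mean().sort_values())
```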
|