datasetId (string, 5–121 chars) | author (string, 2–42 chars) | last_modified (unknown) | downloads (int64, 0–2.68M) | likes (int64, 0–6.41k) | tags (sequence, 1–7.92k items) | task_categories (sequence, 0–40 items, nullable) | createdAt (unknown) | card (string, 19–1M chars) |
---|---|---|---|---|---|---|---|---|
chiyuanhsiao/Magpie_rank1_chunk6_interleaf | chiyuanhsiao | "2024-12-03T00:42:48Z" | 1 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-03T00:31:38Z" | ---
dataset_info:
features:
- name: uuid
dtype: string
- name: model
dtype: string
- name: gen_input_config
struct:
- name: temperature
dtype: float64
- name: top_p
dtype: float64
- name: input
dtype: string
- name: output
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: task_category
dtype: string
- name: difficulty
dtype: string
- name: intent
dtype: string
- name: knowledge
dtype: string
- name: input_quality
dtype: string
- name: quality_explanation
dtype: string
- name: llama_guard_2
dtype: string
- name: reward_model
dtype: string
- name: instruct_reward
dtype: float64
- name: base_output
dtype: string
- name: base_reward
dtype: float64
- name: reward_difference
dtype: float64
- name: min_neighbor_distance
dtype: float64
- name: repeat_count
dtype: int64
- name: min_similar_uuid
dtype: string
- name: input_length
dtype: int64
- name: output_length
dtype: int64
- name: input_speech
dtype: audio
- name: output_speech
dtype: audio
- name: output_speech_cmu-arctic-xvectors_7306
dtype: audio
- name: input_unit
sequence: int64
- name: output_unit
sequence: int64
- name: output_unit_7306
sequence: int64
- name: output_7306_interleaf
dtype: string
- name: output_pseudo
dtype: string
- name: input_pseudo
dtype: string
splits:
- name: train
num_bytes: 11905874290.75
num_examples: 10022
download_size: 11636483837
dataset_size: 11905874290.75
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
julia-se/tracka_mistral_fewshot_anger | julia-se | "2024-12-03T00:44:17Z" | 1 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-03T00:44:15Z" | ---
dataset_info:
features:
- name: id
dtype: string
- name: text
dtype: string
- name: Anger
dtype: int64
- name: Disgust
dtype: int64
- name: Fear
dtype: int64
- name: Joy
dtype: int64
- name: Sadness
dtype: int64
- name: Surprise
dtype: int64
- name: predicted_is_anger
dtype: int64
- name: y_anger
dtype: int64
splits:
- name: train
num_bytes: 472807
num_examples: 2226
download_size: 217016
dataset_size: 472807
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
quandao92/ad-clip-dataset | quandao92 | "2024-12-03T06:03:04Z" | 1 | 0 | [
"license:other",
"region:us"
] | null | "2024-12-03T00:51:53Z" | ---
license: other
license_name: 4inlab
license_link: LICENSE
---
|
ashercn97/reasoning-v1-worked-1 | ashercn97 | "2024-12-03T01:08:27Z" | 1 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-03T01:08:24Z" | ---
dataset_info:
features:
- name: text_id
dtype: string
- name: text
dtype: string
- name: label
sequence: string
- name: split_text
sequence: string
splits:
- name: train
num_bytes: 152064
num_examples: 100
download_size: 96279
dataset_size: 152064
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ashnaz/refined_symptoms_doctors | ashnaz | "2024-12-03T01:30:22Z" | 1 | 0 | [
"license:afl-3.0",
"size_categories:n<1K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-03T01:21:13Z" | ---
license: afl-3.0
---
|
richmondsin/truthfulqa_ca_mc1_results | richmondsin | "2024-12-03T01:52:49Z" | 1 | 0 | [
"region:us"
] | null | "2024-12-03T01:52:39Z" | ---
pretty_name: Evaluation run of google/gemma-2-2b
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [google/gemma-2-2b](https://huggingface.co/google/gemma-2-2b)\nThe dataset is\
\ composed of 0 configuration(s), each one corresponding to one of the evaluated\
\ task.\n\nThe dataset has been created from 2 run(s). Each run can be found as\
\ a specific split in each configuration, the split being named using the timestamp\
\ of the run.The \"train\" split is always pointing to the latest results.\n\nAn\
\ additional configuration \"results\" store all the aggregated results of the run.\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\n\t\"richmondsin/truthfulqa_ca_mc1_results\"\
,\n\tname=\"google__gemma-2-2b__truthfulqa_ca_mc1\",\n\tsplit=\"latest\"\n)\n```\n\
\n## Latest results\n\nThese are the [latest results from run 2024-12-02T20-52-39.131671](https://huggingface.co/datasets/richmondsin/truthfulqa_ca_mc1_results/blob/main/google/gemma-2-2b/results_2024-12-02T20-52-39.131671.json)\
\ (note that there might be results for other tasks in the repos if successive evals\
\ didn't cover the same tasks. You find each in the results and the \"latest\" split\
\ for each eval):\n\n```python\n{\n \"all\": {\n \"truthfulqa_ca_mc1\"\
: {\n \"alias\": \"truthfulqa_ca_mc1\",\n \"acc,none\": 0.29250720461095103,\n\
\ \"acc_stderr,none\": 0.01728075357861422,\n \"acc_norm,none\"\
: 0.3227665706051873,\n \"acc_norm_stderr,none\": 0.01776015903433574\n\
\ }\n },\n \"truthfulqa_ca_mc1\": {\n \"alias\": \"truthfulqa_ca_mc1\"\
,\n \"acc,none\": 0.29250720461095103,\n \"acc_stderr,none\": 0.01728075357861422,\n\
\ \"acc_norm,none\": 0.3227665706051873,\n \"acc_norm_stderr,none\"\
: 0.01776015903433574\n }\n}\n```"
repo_url: https://huggingface.co/google/gemma-2-2b
leaderboard_url: ''
point_of_contact: ''
configs:
- config_name: google__gemma-2-2b__truthfulqa_ca_mc1
data_files:
- split: 2024_12_02T20_52_39.131671
path:
- '**/samples_truthfulqa_ca_mc1_2024-12-02T20-52-39.131671.jsonl'
- split: latest
path:
- '**/samples_truthfulqa_ca_mc1_2024-12-02T20-52-39.131671.jsonl'
---
# Dataset Card for Evaluation run of google/gemma-2-2b
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [google/gemma-2-2b](https://huggingface.co/google/gemma-2-2b)
The dataset is composed of 0 configuration(s), each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run.
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset(
"richmondsin/truthfulqa_ca_mc1_results",
name="google__gemma-2-2b__truthfulqa_ca_mc1",
split="latest"
)
```
## Latest results
These are the [latest results from run 2024-12-02T20-52-39.131671](https://huggingface.co/datasets/richmondsin/truthfulqa_ca_mc1_results/blob/main/google/gemma-2-2b/results_2024-12-02T20-52-39.131671.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"truthfulqa_ca_mc1": {
"alias": "truthfulqa_ca_mc1",
"acc,none": 0.29250720461095103,
"acc_stderr,none": 0.01728075357861422,
"acc_norm,none": 0.3227665706051873,
"acc_norm_stderr,none": 0.01776015903433574
}
},
"truthfulqa_ca_mc1": {
"alias": "truthfulqa_ca_mc1",
"acc,none": 0.29250720461095103,
"acc_stderr,none": 0.01728075357861422,
"acc_norm,none": 0.3227665706051873,
"acc_norm_stderr,none": 0.01776015903433574
}
}
```
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
haskucy/video_music_dataset | haskucy | "2024-12-03T02:50:44Z" | 1 | 0 | [
"license:apache-2.0",
"region:us"
] | null | "2024-12-03T01:54:41Z" | ---
license: apache-2.0
---
# Archived Video Music as a Dataset |
dgambettaphd/D_gen1_run2_llama2-7b_wiki_doc1000_real32_synt96 | dgambettaphd | "2024-12-03T02:03:06Z" | 1 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-03T02:03:03Z" | ---
dataset_info:
features:
- name: id
dtype: int64
- name: doc
dtype: string
splits:
- name: train
num_bytes: 522003
num_examples: 1000
download_size: 292480
dataset_size: 522003
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
sdiazlor/my-distiset-fa461f0d | sdiazlor | "2024-12-03T02:04:13Z" | 1 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"library:distilabel",
"region:us",
"synthetic",
"distilabel",
"rlaif",
"datacraft"
] | null | "2024-12-03T02:04:09Z" | ---
size_categories: n<1K
dataset_info:
features:
- name: instruction
dtype: string
- name: generations
sequence: string
- name: ratings_overall-rating
sequence: int64
- name: rationale_for_ratings_overall-rating
sequence: string
- name: model_name
dtype: string
splits:
- name: train
num_bytes: 3465
num_examples: 2
download_size: 11708
dataset_size: 3465
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- synthetic
- distilabel
- rlaif
- datacraft
---
<p align="left">
<a href="https://github.com/argilla-io/distilabel">
<img src="https://raw.githubusercontent.com/argilla-io/distilabel/main/docs/assets/distilabel-badge-light.png" alt="Built with Distilabel" width="200" height="32"/>
</a>
</p>
# Dataset Card for my-distiset-fa461f0d
This dataset has been created with [distilabel](https://distilabel.argilla.io/).
## Dataset Summary
This dataset contains a `pipeline.yaml` which can be used to reproduce the pipeline that generated it in distilabel using the `distilabel` CLI:
```console
distilabel pipeline run --config "https://huggingface.co/datasets/sdiazlor/my-distiset-fa461f0d/raw/main/pipeline.yaml"
```
or explore the configuration:
```console
distilabel pipeline info --config "https://huggingface.co/datasets/sdiazlor/my-distiset-fa461f0d/raw/main/pipeline.yaml"
```
## Dataset structure
The examples have the following structure per configuration:
<details><summary> Configuration: default </summary><hr>
```json
{
"generations": [
" Sure, I\u0027d be happy to help! Here are the RDF triplets for the input sentence:\n\n[AFC Ajax (amateurs), hasGround, Sportpark De Toekomst]\n[Ajax Youth Academy, playsAt, Sportpark De Toekomst]\n\nExplanation:\n\n* AFC Ajax (amateurs) is the subject of the first triplet, and hasGround is the predicate that describes the relationship between AFC Ajax (amateurs) and Sportpark De Toekomst.\n* Ajax Youth Academy is the subject of the second triplet, and playsAt is the predicate that describes the relationship between Ajax Youth Academy and Sportpark De Toekomst.\n\nNote that there may be other possible RDF triplets that could be derived from the input sentence, but the above triplets capture the main relationships present in the sentence.",
"[\n [\"AFC Ajax (amateurs)\", \"has ground\", \"Sportpark De Toekomst\"],\n [\"Ajax Youth Academy\", \"plays at\", \"Sportpark De Toekomst\"]\n]"
],
"instruction": "You will be given a definition of a task first, then some input of the task.\nThis task is about using the specified sentence and converting the sentence to Resource Description Framework (RDF) triplets of the form (subject, predicate object). The RDF triplets generated must be such that the triplets accurately capture the structure and semantics of the input sentence. The input is a sentence and the output is a list of triplets of the form [subject, predicate, object] that capture the relationships present in the sentence. When a sentence has more than 1 RDF triplet possible, the output must contain all of them.\n\nAFC Ajax (amateurs)\u0027s ground is Sportpark De Toekomst where Ajax Youth Academy also play.\nOutput:",
"model_name": "meta-llama/Meta-Llama-3.1-8B-Instruct",
"ratings_overall-rating": [
4,
5
],
"rationale_for_ratings_overall-rating": [
"The output is generally accurate and captures the main relationships in the input sentence. The model correctly identifies the subjects and predicates in the sentence and provides explanations for each triplet. However, it mentions that there may be other possible triplets, but it doesn\u0027t provide them, which is in line with the instructions.",
"The output is accurate, confident, and aligned with the instructions. The model correctly converts the sentence into RDF triplets, capturing the relationships between the subjects and objects. The output is free of hallucinations and includes all possible triplets as instructed."
]
}
```
This subset can be loaded as:
```python
from datasets import load_dataset
ds = load_dataset("sdiazlor/my-distiset-fa461f0d", "default")
```
Or simply as follows, since there's only one configuration and it's named `default`:
```python
from datasets import load_dataset
ds = load_dataset("sdiazlor/my-distiset-fa461f0d")
```
</details>
|
sdiazlor/my-distiset-fa461f0 | sdiazlor | "2024-12-03T02:06:49Z" | 1 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"library:distilabel",
"region:us",
"synthetic",
"distilabel",
"rlaif",
"datacraft"
] | null | "2024-12-03T02:06:46Z" | ---
size_categories: n<1K
dataset_info:
features:
- name: instruction
dtype: string
- name: generations
sequence: string
- name: ratings_overall-rating
sequence: int64
- name: rationale_for_ratings_overall-rating
sequence: string
- name: ratings_truthfulness
sequence: int64
- name: rationale_for_ratings_truthfulness
sequence: string
- name: type_truthfulness
sequence: 'null'
- name: rationale_for_type_truthfulness
sequence: string
- name: model_name
dtype: string
splits:
- name: train
num_bytes: 4546
num_examples: 2
download_size: 16685
dataset_size: 4546
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- synthetic
- distilabel
- rlaif
- datacraft
---
<p align="left">
<a href="https://github.com/argilla-io/distilabel">
<img src="https://raw.githubusercontent.com/argilla-io/distilabel/main/docs/assets/distilabel-badge-light.png" alt="Built with Distilabel" width="200" height="32"/>
</a>
</p>
# Dataset Card for my-distiset-fa461f0
This dataset has been created with [distilabel](https://distilabel.argilla.io/).
## Dataset Summary
This dataset contains a `pipeline.yaml` which can be used to reproduce the pipeline that generated it in distilabel using the `distilabel` CLI:
```console
distilabel pipeline run --config "https://huggingface.co/datasets/sdiazlor/my-distiset-fa461f0/raw/main/pipeline.yaml"
```
or explore the configuration:
```console
distilabel pipeline info --config "https://huggingface.co/datasets/sdiazlor/my-distiset-fa461f0/raw/main/pipeline.yaml"
```
## Dataset structure
The examples have the following structure per configuration:
<details><summary> Configuration: default </summary><hr>
```json
{
"generations": [
" Sure, I\u0027d be happy to help! Here are the RDF triplets for the input sentence:\n\n[AFC Ajax (amateurs), hasGround, Sportpark De Toekomst]\n[Ajax Youth Academy, playsAt, Sportpark De Toekomst]\n\nExplanation:\n\n* AFC Ajax (amateurs) is the subject of the first triplet, and hasGround is the predicate that describes the relationship between AFC Ajax (amateurs) and Sportpark De Toekomst.\n* Ajax Youth Academy is the subject of the second triplet, and playsAt is the predicate that describes the relationship between Ajax Youth Academy and Sportpark De Toekomst.\n\nNote that there may be other possible RDF triplets that could be derived from the input sentence, but the above triplets capture the main relationships present in the sentence.",
"[\n [\"AFC Ajax (amateurs)\", \"has ground\", \"Sportpark De Toekomst\"],\n [\"Ajax Youth Academy\", \"plays at\", \"Sportpark De Toekomst\"]\n]"
],
"instruction": "You will be given a definition of a task first, then some input of the task.\nThis task is about using the specified sentence and converting the sentence to Resource Description Framework (RDF) triplets of the form (subject, predicate object). The RDF triplets generated must be such that the triplets accurately capture the structure and semantics of the input sentence. The input is a sentence and the output is a list of triplets of the form [subject, predicate, object] that capture the relationships present in the sentence. When a sentence has more than 1 RDF triplet possible, the output must contain all of them.\n\nAFC Ajax (amateurs)\u0027s ground is Sportpark De Toekomst where Ajax Youth Academy also play.\nOutput:",
"model_name": "meta-llama/Meta-Llama-3.1-8B-Instruct",
"ratings_overall-rating": [
4,
5
],
"ratings_truthfulness": [
5,
5
],
"rationale_for_ratings_overall-rating": [
"The output of Text 1 is generally accurate and captures the relationships present in the input sentence. However, there is a minor error in the explanation where it mentions \"Sportpark De Toekomst\" instead of \"Sportpark De Toekomst\" (missing accent). Additionally, the Explanation could be improved to clearly explain the relationships in a more concise manner.",
"The output of Text 2 is accurate, confident, and aligned with the instructions. It provides two RDF triplets that capture the relationships present in the sentence. The format is also correct, with each triplet following the specified format [subject, predicate, object]."
],
"rationale_for_ratings_truthfulness": [
"The text accurately captures the structure and semantics of the input sentence and provides two RDF triplets that are relevant to the input.",
"The text accurately represents the RDF triplets in a list format, mirroring the structure of the first text, and captures the relationships present in the sentence."
],
"rationale_for_type_truthfulness": [
"The text does not contain any contradictory information with established knowledge or the instructions given.",
"The text does not contain any contradictory information with established knowledge or the instructions given."
],
"type_truthfulness": [
null,
null
]
}
```
This subset can be loaded as:
```python
from datasets import load_dataset
ds = load_dataset("sdiazlor/my-distiset-fa461f0", "default")
```
Or simply as follows, since there's only one configuration and it's named `default`:
```python
from datasets import load_dataset
ds = load_dataset("sdiazlor/my-distiset-fa461f0")
```
</details>
|
sdiazlor/my-distiset-5984290b | sdiazlor | "2024-12-03T02:14:21Z" | 1 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"library:distilabel",
"region:us",
"synthetic",
"distilabel",
"rlaif",
"datacraft"
] | null | "2024-12-03T02:14:18Z" | ---
size_categories: n<1K
dataset_info:
features:
- name: instruction
dtype: string
- name: generation
dtype: string
- name: model_name
dtype: string
splits:
- name: train
num_bytes: 791
num_examples: 1
download_size: 6852
dataset_size: 791
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- synthetic
- distilabel
- rlaif
- datacraft
---
<p align="left">
<a href="https://github.com/argilla-io/distilabel">
<img src="https://raw.githubusercontent.com/argilla-io/distilabel/main/docs/assets/distilabel-badge-light.png" alt="Built with Distilabel" width="200" height="32"/>
</a>
</p>
# Dataset Card for my-distiset-5984290b
This dataset has been created with [distilabel](https://distilabel.argilla.io/).
## Dataset Summary
This dataset contains a `pipeline.yaml` which can be used to reproduce the pipeline that generated it in distilabel using the `distilabel` CLI:
```console
distilabel pipeline run --config "https://huggingface.co/datasets/sdiazlor/my-distiset-5984290b/raw/main/pipeline.yaml"
```
or explore the configuration:
```console
distilabel pipeline info --config "https://huggingface.co/datasets/sdiazlor/my-distiset-5984290b/raw/main/pipeline.yaml"
```
## Dataset structure
The examples have the following structure per configuration:
<details><summary> Configuration: default </summary><hr>
```json
{
"generation": "{ \n \"similarity\": 1\n}",
"instruction": "You will be given a definition of a task first, then some input of the task.\nThis task is about using the specified sentence and converting the sentence to Resource Description Framework (RDF) triplets of the form (subject, predicate object). The RDF triplets generated must be such that the triplets accurately capture the structure and semantics of the input sentence. The input is a sentence and the output is a list of triplets of the form [subject, predicate, object] that capture the relationships present in the sentence. When a sentence has more than 1 RDF triplet possible, the output must contain all of them.\n\nAFC Ajax (amateurs)\u0027s ground is Sportpark De Toekomst where Ajax Youth Academy also play.\nOutput:",
"model_name": "meta-llama/Meta-Llama-3.1-8B-Instruct"
}
```
This subset can be loaded as:
```python
from datasets import load_dataset
ds = load_dataset("sdiazlor/my-distiset-5984290b", "default")
```
Or simply as follows, since there's only one configuration and it's named `default`:
```python
from datasets import load_dataset
ds = load_dataset("sdiazlor/my-distiset-5984290b")
```
</details>
|
sdiazlor/my-distiset-59842 | sdiazlor | "2024-12-03T02:16:43Z" | 1 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"library:distilabel",
"region:us",
"synthetic",
"distilabel",
"rlaif",
"datacraft"
] | null | "2024-12-03T02:16:40Z" | ---
size_categories: n<1K
dataset_info:
features:
- name: instruction
dtype: string
- name: generations
sequence: string
- name: ratings_helpfulness
sequence: int64
- name: rationale_for_ratings_helpfulness
sequence: string
- name: type_helpfulness
sequence: int64
- name: rationale_for_type_helpfulness
sequence: string
- name: ratings_truthfulness
sequence: 'null'
- name: rationale_for_ratings_truthfulness
sequence: 'null'
- name: type_truthfulness
sequence: 'null'
- name: rationale_for_type_truthfulness
sequence: 'null'
- name: model_name
dtype: string
splits:
- name: train
num_bytes: 2776
num_examples: 1
download_size: 19656
dataset_size: 2776
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- synthetic
- distilabel
- rlaif
- datacraft
---
<p align="left">
<a href="https://github.com/argilla-io/distilabel">
<img src="https://raw.githubusercontent.com/argilla-io/distilabel/main/docs/assets/distilabel-badge-light.png" alt="Built with Distilabel" width="200" height="32"/>
</a>
</p>
# Dataset Card for my-distiset-59842
This dataset has been created with [distilabel](https://distilabel.argilla.io/).
## Dataset Summary
This dataset contains a `pipeline.yaml` which can be used to reproduce the pipeline that generated it in distilabel using the `distilabel` CLI:
```console
distilabel pipeline run --config "https://huggingface.co/datasets/sdiazlor/my-distiset-59842/raw/main/pipeline.yaml"
```
or explore the configuration:
```console
distilabel pipeline info --config "https://huggingface.co/datasets/sdiazlor/my-distiset-59842/raw/main/pipeline.yaml"
```
## Dataset structure
The examples have the following structure per configuration:
<details><summary> Configuration: default </summary><hr>
```json
{
"generations": [
" Sure, I\u0027d be happy to help! Here are the RDF triplets for the input sentence:\n\n[AFC Ajax (amateurs), hasGround, Sportpark De Toekomst]\n[Ajax Youth Academy, playsAt, Sportpark De Toekomst]\n\nExplanation:\n\n* AFC Ajax (amateurs) is the subject of the first triplet, and hasGround is the predicate that describes the relationship between AFC Ajax (amateurs) and Sportpark De Toekomst.\n* Ajax Youth Academy is the subject of the second triplet, and playsAt is the predicate that describes the relationship between Ajax Youth Academy and Sportpark De Toekomst.\n\nNote that there may be other possible RDF triplets that could be derived from the input sentence, but the above triplets capture the main relationships present in the sentence.",
"[\n [\"AFC Ajax (amateurs)\", \"has ground\", \"Sportpark De Toekomst\"],\n [\"Ajax Youth Academy\", \"plays at\", \"Sportpark De Toekomst\"]\n]"
],
"instruction": "You will be given a definition of a task first, then some input of the task.\nThis task is about using the specified sentence and converting the sentence to Resource Description Framework (RDF) triplets of the form (subject, predicate object). The RDF triplets generated must be such that the triplets accurately capture the structure and semantics of the input sentence. The input is a sentence and the output is a list of triplets of the form [subject, predicate, object] that capture the relationships present in the sentence. When a sentence has more than 1 RDF triplet possible, the output must contain all of them.\n\nAFC Ajax (amateurs)\u0027s ground is Sportpark De Toekomst where Ajax Youth Academy also play.\nOutput:",
"model_name": "meta-llama/Meta-Llama-3.1-8B-Instruct",
"ratings_helpfulness": [
4,
3
],
"ratings_truthfulness": [
null,
null
],
"rationale_for_ratings_helpfulness": [
"The response is accurate and provides useful information that meets the task\u0027s requirements. However, it could be more concise in its explanation.",
"The response is accurate and provides useful information that meets the task\u0027s requirements. However, it lacks an explanation of the reasoning behind the derived triplets, which would make it more comprehensive."
],
"rationale_for_ratings_truthfulness": [
null,
null
],
"rationale_for_type_helpfulness": [
"Clarity and Relevance: The response starts by seeking clarification (\"Sure, I\u0027d be happy to help!\") but then provides a clear explanation of how the RDF triplets were derived from the input sentence. Useful and Comprehensive Information: The explanation provides relevant background and reasoning steps. Not Lengthy, No Repetition: The explanation is concise and avoids unnecessary repetition.",
"Clarity and Relevance: The response directly provides the RDF triplets without any unnecessary explanations. Useful and Comprehensive Information: The triplets are relevant and capture the main relationships present in the input sentence. Not Lengthy, No Repetition: The response is concise and to the point."
],
"rationale_for_type_truthfulness": [
null,
null
],
"type_helpfulness": [
1,
1
],
"type_truthfulness": [
null,
null
]
}
```
This subset can be loaded as:
```python
from datasets import load_dataset
ds = load_dataset("sdiazlor/my-distiset-59842", "default")
```
Or simply as follows, since there's only one configuration and it's named `default`:
```python
from datasets import load_dataset
ds = load_dataset("sdiazlor/my-distiset-59842")
```
</details>
|
richmondsin/hellaswag_ru_results | richmondsin | "2024-12-03T02:30:05Z" | 1 | 0 | [
"size_categories:10K<n<100K",
"format:json",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-03T02:29:32Z" | ---
pretty_name: Evaluation run of google/gemma-2-2b
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [google/gemma-2-2b](https://huggingface.co/google/gemma-2-2b)\nThe dataset is\
\ composed of 0 configuration(s), each one corresponding to one of the evaluated\
\ task.\n\nThe dataset has been created from 2 run(s). Each run can be found as\
\ a specific split in each configuration, the split being named using the timestamp\
\ of the run.The \"train\" split is always pointing to the latest results.\n\nAn\
\ additional configuration \"results\" store all the aggregated results of the run.\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\n\t\"richmondsin/hellaswag_ru_results\"\
,\n\tname=\"google__gemma-2-2b__hellaswag_ru\",\n\tsplit=\"latest\"\n)\n```\n\n\
## Latest results\n\nThese are the [latest results from run 2024-12-02T21-29-32.668739](https://huggingface.co/datasets/richmondsin/hellaswag_ru_results/blob/main/google/gemma-2-2b/results_2024-12-02T21-29-32.668739.json)\
\ (note that there might be results for other tasks in the repos if successive evals\
\ didn't cover the same tasks. You find each in the results and the \"latest\" split\
\ for each eval):\n\n```python\n{\n \"all\": {\n \"hellaswag_ru\": {\n\
\ \"alias\": \"hellaswag_ru\",\n \"acc,none\": 0.41119787340089714,\n\
\ \"acc_stderr,none\": 0.006342843897845994,\n \"acc_norm,none\"\
: 0.5454394417677355,\n \"acc_norm_stderr,none\": 0.006418640511607917\n\
\ }\n },\n \"hellaswag_ru\": {\n \"alias\": \"hellaswag_ru\"\
,\n \"acc,none\": 0.41119787340089714,\n \"acc_stderr,none\": 0.006342843897845994,\n\
\ \"acc_norm,none\": 0.5454394417677355,\n \"acc_norm_stderr,none\"\
: 0.006418640511607917\n }\n}\n```"
repo_url: https://huggingface.co/google/gemma-2-2b
leaderboard_url: ''
point_of_contact: ''
configs:
- config_name: google__gemma-2-2b__hellaswag_ru
data_files:
- split: 2024_12_02T21_29_32.668739
path:
- '**/samples_hellaswag_ru_2024-12-02T21-29-32.668739.jsonl'
- split: latest
path:
- '**/samples_hellaswag_ru_2024-12-02T21-29-32.668739.jsonl'
---
# Dataset Card for Evaluation run of google/gemma-2-2b
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [google/gemma-2-2b](https://huggingface.co/google/gemma-2-2b)
The dataset is composed of 0 configuration(s), each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run.
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset(
"richmondsin/hellaswag_ru_results",
name="google__gemma-2-2b__hellaswag_ru",
split="latest"
)
```
## Latest results
These are the [latest results from run 2024-12-02T21-29-32.668739](https://huggingface.co/datasets/richmondsin/hellaswag_ru_results/blob/main/google/gemma-2-2b/results_2024-12-02T21-29-32.668739.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"hellaswag_ru": {
"alias": "hellaswag_ru",
"acc,none": 0.41119787340089714,
"acc_stderr,none": 0.006342843897845994,
"acc_norm,none": 0.5454394417677355,
"acc_norm_stderr,none": 0.006418640511607917
}
},
"hellaswag_ru": {
"alias": "hellaswag_ru",
"acc,none": 0.41119787340089714,
"acc_stderr,none": 0.006342843897845994,
"acc_norm,none": 0.5454394417677355,
"acc_norm_stderr,none": 0.006418640511607917
}
}
```
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
Taylor658/bone-densitometry | Taylor658 | "2024-12-03T02:41:08Z" | 1 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"library:distilabel",
"region:us",
"synthetic",
"distilabel",
"rlaif",
"datacraft"
] | null | "2024-12-03T02:37:04Z" | ---
size_categories: 1K<n<10K
dataset_info:
features:
- name: text
dtype: string
- name: labels
sequence:
class_label:
names:
'0': osteoporosis
'1': osteopenia
'2': normal-bone-density
splits:
- name: train
num_bytes: 301547
num_examples: 1100
download_size: 97851
dataset_size: 301547
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- synthetic
- distilabel
- rlaif
- datacraft
---
# Dataset Card for bone-densitometry
## Dataset Summary
This dataset contains a `pipeline.yaml` which can be used to reproduce the pipeline that generated it in distilabel using the `distilabel` CLI:
```console
distilabel pipeline run --config "https://huggingface.co/datasets/Taylor658/bone-densitometry/raw/main/pipeline.yaml"
```
or explore the configuration:
```console
distilabel pipeline info --config "https://huggingface.co/datasets/Taylor658/bone-densitometry/raw/main/pipeline.yaml"
```
## Dataset structure
The examples have the following structure per configuration:
<details><summary> Configuration: default </summary><hr>
```json
{
"labels": [
0,
1,
2
],
"text": "A 55-year-old woman underwent a bone densitometry scan and had a T-score of -2.1 and a Z-score of -1.6, indicating a moderate decrease in bone mass compared to her younger self and a lower bone density than 75% of individuals the same age."
}
```
This subset can be loaded as:
```python
from datasets import load_dataset
ds = load_dataset("Taylor658/bone-densitometry", "default")
```
Or simply as follows, since there's only one configuration and it's named `default`:
```python
from datasets import load_dataset
ds = load_dataset("Taylor658/bone-densitometry")
```
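Since `labels` is a sequence of class-label ids, the label names from the feature definition above can be recovered with the `datasets` ClassLabel API. A small sketch of this, assuming the feature metadata shown in the YAML header of this card (the printed output is illustrative only):
```python
from datasets import load_dataset

ds = load_dataset("Taylor658/bone-densitometry")

# `labels` is a Sequence(ClassLabel(...)); map the integer ids back to their names
label_feature = ds["train"].features["labels"].feature
example = ds["train"][0]
print([label_feature.int2str(i) for i in example["labels"]])
# e.g. ['osteoporosis', 'osteopenia', 'normal-bone-density']
```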
</details>
|
Maxwell-Jia/MATH | Maxwell-Jia | "2024-12-03T03:41:21Z" | 1 | 0 | [
"task_categories:text-generation",
"language:en",
"license:mit",
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2103.03874",
"region:us",
"explanation-generation"
] | [
"text-generation"
] | "2024-12-03T02:59:23Z" | ---
license: mit
task_categories:
- text-generation
language:
- en
tags:
- explanation-generation
pretty_name: Mathematics Aptitude Test of Heuristics (MATH) Dataset
size_categories:
- 10K<n<100K
arxiv_id: 2103.03874
dataset_info:
config_name: default
splits:
- name: train
num_examples: 7500
- name: test
num_examples: 5000
---
# MATH Dataset
The Mathematics Aptitude Test of Heuristics (MATH) dataset consists of problems from mathematics competitions, including the AMC 10, AMC 12, AIME, and more. Each problem in MATH has a full step-by-step solution, which can be used to teach models to generate answer derivations and explanations.
This is a converted version of [hendrycks/competition_math](https://huggingface.co/datasets/hendrycks/competition_math), originally created by Hendrycks et al., repackaged in Parquet format for easier loading and usage.
## Data Fields
- `problem`: The mathematics problem text
- `level`: Difficulty level of the problem (e.g., AMC 10, AMC 12, AIME)
- `type`: Type of mathematics problem (e.g., Algebra, Geometry, Counting & Probability)
- `solution`: Step-by-step solution to the problem
## Data Splits
The dataset contains two splits:
- `train`: Training set, 7500 problems
- `test`: Test set, 5000 problems
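As a quick illustration of the fields and splits above, here is a minimal loading sketch (assumed usage based on the schema described in this card; the printed fields are for illustration only):
```python
from datasets import load_dataset

# Load both splits of the Parquet-converted dataset
ds = load_dataset("Maxwell-Jia/MATH")
train, test = ds["train"], ds["test"]
print(len(train), len(test))  # expected: 7500 and 5000 examples

# Inspect one problem together with its step-by-step solution
example = train[0]
print(example["problem"])
print(example["level"], example["type"])
print(example["solution"])
```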
## Original Dataset Information
- **Original Dataset**: [hendrycks/competition_math](https://huggingface.co/datasets/hendrycks/competition_math)
- **Paper**: [Measuring Mathematical Problem Solving With the MATH Dataset](https://arxiv.org/abs/2103.03874)
- **Homepage**: [https://github.com/hendrycks/math](https://github.com/hendrycks/math)
## Citation
If you use this dataset, please cite the original work:
```bibtex
@article{hendrycksmath2021,
title={Measuring Mathematical Problem Solving With the MATH Dataset},
author={Dan Hendrycks
and Collin Burns
and Saurav Kadavath
and Akul Arora
and Steven Basart
and Eric Tang
and Dawn Song
and Jacob Steinhardt},
journal={arXiv preprint arXiv:2103.03874},
year={2021}
}
```
## License
This dataset follows the same license as the original dataset: [License](https://github.com/hendrycks/math/blob/main/LICENSE) |
leo66666/crosscoder-llama-3.2-1b-diff | leo66666 | "2024-12-03T03:05:58Z" | 1 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-03T03:00:07Z" | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: original_text
dtype: string
splits:
- name: train
num_bytes: 567249286
num_examples: 100000
download_size: 283793218
dataset_size: 567249286
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
dgambettaphd/D_gen2_run2_llama2-7b_wiki_doc1000_real32_synt96 | dgambettaphd | "2024-12-03T03:38:20Z" | 1 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-03T03:38:17Z" | ---
dataset_info:
features:
- name: id
dtype: int64
- name: doc
dtype: string
splits:
- name: train
num_bytes: 523711
num_examples: 1000
download_size: 288415
dataset_size: 523711
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
richmondsin/truthfulqa_ca_mc2_results | richmondsin | "2024-12-03T03:47:55Z" | 1 | 0 | [
"size_categories:1K<n<10K",
"format:json",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-03T03:47:42Z" | ---
pretty_name: Evaluation run of google/gemma-2-2b
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [google/gemma-2-2b](https://huggingface.co/google/gemma-2-2b)\nThe dataset is\
\ composed of 0 configuration(s), each one corresponding to one of the evaluated\
\ task.\n\nThe dataset has been created from 2 run(s). Each run can be found as\
\ a specific split in each configuration, the split being named using the timestamp\
\ of the run.The \"train\" split is always pointing to the latest results.\n\nAn\
\ additional configuration \"results\" store all the aggregated results of the run.\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\n\t\"richmondsin/truthfulqa_ca_mc2_results\"\
,\n\tname=\"google__gemma-2-2b__truthfulqa_ca_mc2\",\n\tsplit=\"latest\"\n)\n```\n\
\n## Latest results\n\nThese are the [latest results from run 2024-12-02T22-47-42.283998](https://huggingface.co/datasets/richmondsin/truthfulqa_ca_mc2_results/blob/main/google/gemma-2-2b/results_2024-12-02T22-47-42.283998.json)\
\ (note that there might be results for other tasks in the repos if successive evals\
\ didn't cover the same tasks. You find each in the results and the \"latest\" split\
\ for each eval):\n\n```python\n{\n \"all\": {\n \"truthfulqa_ca_mc2\"\
: {\n \"alias\": \"truthfulqa_ca_mc2\",\n \"acc,none\": 0.4321804368401765,\n\
\ \"acc_stderr,none\": 0.01620709424892392\n }\n },\n \"\
truthfulqa_ca_mc2\": {\n \"alias\": \"truthfulqa_ca_mc2\",\n \"acc,none\"\
: 0.4321804368401765,\n \"acc_stderr,none\": 0.01620709424892392\n }\n\
}\n```"
repo_url: https://huggingface.co/google/gemma-2-2b
leaderboard_url: ''
point_of_contact: ''
configs:
- config_name: google__gemma-2-2b__truthfulqa_ca_mc2
data_files:
- split: 2024_12_02T22_47_42.283998
path:
- '**/samples_truthfulqa_ca_mc2_2024-12-02T22-47-42.283998.jsonl'
- split: latest
path:
- '**/samples_truthfulqa_ca_mc2_2024-12-02T22-47-42.283998.jsonl'
---
# Dataset Card for Evaluation run of google/gemma-2-2b
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [google/gemma-2-2b](https://huggingface.co/google/gemma-2-2b)
The dataset is composed of 0 configuration(s), each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run.
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset(
"richmondsin/truthfulqa_ca_mc2_results",
name="google__gemma-2-2b__truthfulqa_ca_mc2",
split="latest"
)
```
## Latest results
These are the [latest results from run 2024-12-02T22-47-42.283998](https://huggingface.co/datasets/richmondsin/truthfulqa_ca_mc2_results/blob/main/google/gemma-2-2b/results_2024-12-02T22-47-42.283998.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"truthfulqa_ca_mc2": {
"alias": "truthfulqa_ca_mc2",
"acc,none": 0.4321804368401765,
"acc_stderr,none": 0.01620709424892392
}
},
"truthfulqa_ca_mc2": {
"alias": "truthfulqa_ca_mc2",
"acc,none": 0.4321804368401765,
"acc_stderr,none": 0.01620709424892392
}
}
```
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
Leonardo6/crosscoder-llama-3.2-1b-diff | Leonardo6 | "2024-12-03T04:02:24Z" | 1 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-03T03:59:52Z" | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: original_text
dtype: string
splits:
- name: train
num_bytes: 391513728
num_examples: 24534
download_size: 185147529
dataset_size: 391513728
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
WhatcanIname/all_new | WhatcanIname | "2024-12-03T04:03:38Z" | 1 | 0 | [
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-03T04:03:18Z" | ---
license: apache-2.0
---
|
ERIC0723/merged-v2 | ERIC0723 | "2024-12-03T04:53:48Z" | 1 | 0 | [
"license:mit",
"size_categories:100K<n<1M",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-03T04:42:25Z" | ---
license: mit
---
|
RyanYr/self-reflect_mini8Bit-t0_mistlarge-t12_om2-6 | RyanYr | "2024-12-03T05:00:29Z" | 1 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-03T05:00:23Z" | ---
dataset_info:
features:
- name: problem
dtype: string
- name: generated_solution
dtype: string
- name: answer
dtype: string
- name: problem_source
dtype: string
- name: response@0
sequence: string
- name: response@1
sequence: string
- name: response@2
sequence: string
splits:
- name: train
num_bytes: 287827888
num_examples: 50000
download_size: 124937676
dataset_size: 287827888
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
dgambettaphd/D_gen3_run2_llama2-7b_wiki_doc1000_real32_synt96 | dgambettaphd | "2024-12-03T05:15:16Z" | 1 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-03T05:15:14Z" | ---
dataset_info:
features:
- name: id
dtype: int64
- name: doc
dtype: string
splits:
- name: train
num_bytes: 523997
num_examples: 1000
download_size: 288969
dataset_size: 523997
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
richmondsin/truthfulqa_mr_mc1_results | richmondsin | "2024-12-03T05:15:34Z" | 1 | 0 | [
"region:us"
] | null | "2024-12-03T05:15:23Z" | ---
pretty_name: Evaluation run of google/gemma-2-2b
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [google/gemma-2-2b](https://huggingface.co/google/gemma-2-2b)\nThe dataset is\
\ composed of 0 configuration(s), each one corresponding to one of the evaluated\
\ task.\n\nThe dataset has been created from 2 run(s). Each run can be found as\
\ a specific split in each configuration, the split being named using the timestamp\
\ of the run.The \"train\" split is always pointing to the latest results.\n\nAn\
\ additional configuration \"results\" store all the aggregated results of the run.\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\n\t\"richmondsin/truthfulqa_mr_mc1_results\"\
,\n\tname=\"google__gemma-2-2b__truthfulqa_mr_mc1\",\n\tsplit=\"latest\"\n)\n```\n\
\n## Latest results\n\nThese are the [latest results from run 2024-12-03T00-15-23.608773](https://huggingface.co/datasets/richmondsin/truthfulqa_mr_mc1_results/blob/main/google/gemma-2-2b/results_2024-12-03T00-15-23.608773.json)\
\ (note that there might be results for other tasks in the repos if successive evals\
\ didn't cover the same tasks. You find each in the results and the \"latest\" split\
\ for each eval):\n\n```python\n{\n \"all\": {\n \"truthfulqa_mr_mc1\"\
: {\n \"alias\": \"truthfulqa_mr_mc1\",\n \"acc,none\": 0.27521613832853026,\n\
\ \"acc_stderr,none\": 0.016965809584321628,\n \"acc_norm,none\"\
: 0.29394812680115273,\n \"acc_norm_stderr,none\": 0.017305614931424896\n\
\ }\n },\n \"truthfulqa_mr_mc1\": {\n \"alias\": \"truthfulqa_mr_mc1\"\
,\n \"acc,none\": 0.27521613832853026,\n \"acc_stderr,none\": 0.016965809584321628,\n\
\ \"acc_norm,none\": 0.29394812680115273,\n \"acc_norm_stderr,none\"\
: 0.017305614931424896\n }\n}\n```"
repo_url: https://huggingface.co/google/gemma-2-2b
leaderboard_url: ''
point_of_contact: ''
configs:
- config_name: google__gemma-2-2b__truthfulqa_mr_mc1
data_files:
- split: 2024_12_03T00_15_23.608773
path:
- '**/samples_truthfulqa_mr_mc1_2024-12-03T00-15-23.608773.jsonl'
- split: latest
path:
- '**/samples_truthfulqa_mr_mc1_2024-12-03T00-15-23.608773.jsonl'
---
# Dataset Card for Evaluation run of google/gemma-2-2b
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [google/gemma-2-2b](https://huggingface.co/google/gemma-2-2b)
The dataset is composed of 0 configuration(s), each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run.
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset(
"richmondsin/truthfulqa_mr_mc1_results",
name="google__gemma-2-2b__truthfulqa_mr_mc1",
split="latest"
)
```
## Latest results
These are the [latest results from run 2024-12-03T00-15-23.608773](https://huggingface.co/datasets/richmondsin/truthfulqa_mr_mc1_results/blob/main/google/gemma-2-2b/results_2024-12-03T00-15-23.608773.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"truthfulqa_mr_mc1": {
"alias": "truthfulqa_mr_mc1",
"acc,none": 0.27521613832853026,
"acc_stderr,none": 0.016965809584321628,
"acc_norm,none": 0.29394812680115273,
"acc_norm_stderr,none": 0.017305614931424896
}
},
"truthfulqa_mr_mc1": {
"alias": "truthfulqa_mr_mc1",
"acc,none": 0.27521613832853026,
"acc_stderr,none": 0.016965809584321628,
"acc_norm,none": 0.29394812680115273,
"acc_norm_stderr,none": 0.017305614931424896
}
}
```
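As a side note (not produced by the evaluation harness), the `acc_stderr` value above can be turned into a rough 95% normal-approximation confidence interval for the reported accuracy, for example:
```python
# Rough sketch: 95% normal-approximation confidence interval for the mc1
# accuracy reported above (1.96 is the usual 95% z-value; this calculation is
# not part of the harness output).
acc = 0.27521613832853026
stderr = 0.016965809584321628

half_width = 1.96 * stderr
print(f"acc = {acc:.4f}, 95% CI ~ [{acc - half_width:.4f}, {acc + half_width:.4f}]")
```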
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
minhxle/barc-induction-modified-programs-2k | minhxle | "2024-12-03T06:12:54Z" | 1 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-03T05:54:20Z" | ---
dataset_info:
features:
- name: task_description
dtype: string
- name: task
struct:
- name: id
dtype: 'null'
- name: test
list:
- name: input
sequence:
sequence: int64
- name: output
sequence:
sequence: int64
- name: train
list:
- name: input
sequence:
sequence: int64
- name: output
sequence:
sequence: int64
- name: difficulty
dtype: int64
- name: original_program_source
dtype: string
- name: modified_program_source
dtype: string
- name: raw_llm_response
dtype: string
splits:
- name: train
num_bytes: 24379132
num_examples: 2000
download_size: 2314780
dataset_size: 24379132
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
yessu/njdg | yessu | "2024-12-03T06:14:11Z" | 1 | 0 | [
"license:mit",
"region:us"
] | null | "2024-12-03T05:54:53Z" | ---
license: mit
---
|
skyfuryLH/semeval2025 | skyfuryLH | "2024-12-03T06:14:27Z" | 1 | 0 | [
"license:apache-2.0",
"size_categories:10K<n<100K",
"modality:text",
"region:us"
] | null | "2024-12-03T06:13:12Z" | ---
license: apache-2.0
---
|
dgambettaphd/D_gen4_run2_llama2-7b_wiki_doc1000_real32_synt96 | dgambettaphd | "2024-12-03T06:47:34Z" | 1 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-03T06:47:31Z" | ---
dataset_info:
features:
- name: id
dtype: int64
- name: doc
dtype: string
splits:
- name: train
num_bytes: 523944
num_examples: 1000
download_size: 288852
dataset_size: 523944
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
richmondsin/truthfulqa_mr_mc2_results | richmondsin | "2024-12-03T07:22:38Z" | 1 | 0 | [
"region:us"
] | null | "2024-12-03T07:22:07Z" | ---
pretty_name: Evaluation run of google/gemma-2-2b
dataset_summary: "Dataset automatically created during the evaluation run of model\
\ [google/gemma-2-2b](https://huggingface.co/google/gemma-2-2b)\nThe dataset is\
\ composed of 0 configuration(s), each one corresponding to one of the evaluated\
\ task.\n\nThe dataset has been created from 2 run(s). Each run can be found as\
\ a specific split in each configuration, the split being named using the timestamp\
\ of the run.The \"train\" split is always pointing to the latest results.\n\nAn\
\ additional configuration \"results\" store all the aggregated results of the run.\n\
\nTo load the details from a run, you can for instance do the following:\n```python\n\
from datasets import load_dataset\ndata = load_dataset(\n\t\"richmondsin/truthfulqa_mr_mc2_results\"\
,\n\tname=\"google__gemma-2-2b__truthfulqa_mr_mc2\",\n\tsplit=\"latest\"\n)\n```\n\
\n## Latest results\n\nThese are the [latest results from run 2024-12-03T02-22-07.727813](https://huggingface.co/datasets/richmondsin/truthfulqa_mr_mc2_results/blob/main/google/gemma-2-2b/results_2024-12-03T02-22-07.727813.json)\
\ (note that there might be results for other tasks in the repos if successive evals\
\ didn't cover the same tasks. You find each in the results and the \"latest\" split\
\ for each eval):\n\n```python\n{\n \"all\": {\n \"truthfulqa_mr_mc2\"\
: {\n \"alias\": \"truthfulqa_mr_mc2\",\n \"acc,none\": 0.47602182277260513,\n\
\ \"acc_stderr,none\": 0.017552027869068232\n }\n },\n \"\
truthfulqa_mr_mc2\": {\n \"alias\": \"truthfulqa_mr_mc2\",\n \"acc,none\"\
: 0.47602182277260513,\n \"acc_stderr,none\": 0.017552027869068232\n }\n\
}\n```"
repo_url: https://huggingface.co/google/gemma-2-2b
leaderboard_url: ''
point_of_contact: ''
configs:
- config_name: google__gemma-2-2b__truthfulqa_mr_mc2
data_files:
- split: 2024_12_03T02_22_07.727813
path:
- '**/samples_truthfulqa_mr_mc2_2024-12-03T02-22-07.727813.jsonl'
- split: latest
path:
- '**/samples_truthfulqa_mr_mc2_2024-12-03T02-22-07.727813.jsonl'
---
# Dataset Card for Evaluation run of google/gemma-2-2b
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [google/gemma-2-2b](https://huggingface.co/google/gemma-2-2b)
The dataset is composed of 0 configuration(s), each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run.
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset(
"richmondsin/truthfulqa_mr_mc2_results",
name="google__gemma-2-2b__truthfulqa_mr_mc2",
split="latest"
)
```
## Latest results
These are the [latest results from run 2024-12-03T02-22-07.727813](https://huggingface.co/datasets/richmondsin/truthfulqa_mr_mc2_results/blob/main/google/gemma-2-2b/results_2024-12-03T02-22-07.727813.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"truthfulqa_mr_mc2": {
"alias": "truthfulqa_mr_mc2",
"acc,none": 0.47602182277260513,
"acc_stderr,none": 0.017552027869068232
}
},
"truthfulqa_mr_mc2": {
"alias": "truthfulqa_mr_mc2",
"acc,none": 0.47602182277260513,
"acc_stderr,none": 0.017552027869068232
}
}
```
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
viktoriatilevska/train_group3_10K | viktoriatilevska | "2024-12-03T07:46:31Z" | 1 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-03T07:46:29Z" | ---
dataset_info:
features:
- name: context
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 7693996
num_examples: 20000
download_size: 1536074
dataset_size: 7693996
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
rr4433/PSGallery_Scrapped | rr4433 | "2024-12-03T07:50:09Z" | 1 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-03T07:49:07Z" | ---
dataset_info:
features:
- name: filename
dtype: string
- name: module_name
dtype: string
- name: content
dtype: string
splits:
- name: train
num_bytes: 2886676919
num_examples: 132927
download_size: 1025026083
dataset_size: 2886676919
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
SHASWATSINGH3101/bail-Pre-training_JSONL_V1 | SHASWATSINGH3101 | "2024-12-03T07:54:44Z" | 1 | 0 | [
"license:cc-by-nc-nd-4.0",
"size_categories:n<1K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-03T07:54:23Z" | ---
license: cc-by-nc-nd-4.0
---
|
cvllvc/rmutp | cvllvc | "2024-12-03T16:53:55Z" | 1 | 0 | [
"license:apache-2.0",
"region:us"
] | null | "2024-12-03T08:36:46Z" | ---
license: apache-2.0
---
|
wangwilliamyang/wikihow | wangwilliamyang | "2024-01-18T11:18:15Z" | 0 | 9 | [
"region:us"
] | null | "2022-03-02T23:29:22Z" | |
kirim9001/Dum | kirim9001 | "2023-02-20T09:45:14Z" | 0 | 0 | [
"license:other",
"region:us"
] | null | "2023-02-20T09:43:24Z" | |
tarta-ai/jobs-in-california-february-2023 | tarta-ai | "2023-03-09T11:08:25Z" | 0 | 0 | [
"task_categories:text-classification",
"language:en",
"license:other",
"size_categories:1M<n<10M",
"region:us",
"job",
"jobs",
"california jobs"
] | [
"text-classification"
] | "2023-03-09T10:57:14Z" | |
Braddy/xview_captions_v0 | Braddy | "2023-05-27T13:05:35Z" | 0 | 0 | [
"region:us"
] | null | "2023-05-26T07:55:44Z" | |
librarian-bots/card_to_task | librarian-bots | "2023-06-27T12:21:57Z" | 0 | 0 | [
"region:us"
] | null | "2023-06-27T09:15:08Z" | |
crystalai/autotrain-data-crystal_alchemist-vision | crystalai | "2023-08-25T01:37:45Z" | 0 | 1 | [
"task_categories:text-classification",
"task_categories:token-classification",
"task_categories:table-question-answering",
"task_categories:question-answering",
"task_categories:zero-shot-classification",
"task_categories:translation",
"task_categories:summarization",
"task_categories:feature-extraction",
"task_categories:text-generation",
"task_categories:text2text-generation",
"task_categories:fill-mask",
"task_categories:sentence-similarity",
"task_categories:text-to-speech",
"task_categories:automatic-speech-recognition",
"task_categories:audio-to-audio",
"task_categories:audio-classification",
"task_categories:voice-activity-detection",
"task_categories:depth-estimation",
"task_categories:image-classification",
"task_categories:object-detection",
"task_categories:image-segmentation",
"task_categories:text-to-image",
"task_categories:image-to-text",
"task_categories:image-to-image",
"task_categories:unconditional-image-generation",
"task_categories:video-classification",
"task_categories:reinforcement-learning",
"task_categories:robotics",
"task_categories:tabular-classification",
"task_categories:tabular-regression",
"task_categories:tabular-to-text",
"task_categories:table-to-text",
"task_categories:multiple-choice",
"task_categories:text-retrieval",
"task_categories:time-series-forecasting",
"task_categories:text-to-video",
"task_categories:visual-question-answering",
"task_categories:zero-shot-image-classification",
"task_categories:graph-ml",
"language:aa",
"language:ab",
"language:ae",
"language:af",
"language:ak",
"language:am",
"language:an",
"language:ar",
"language:as",
"language:av",
"language:ay",
"language:az",
"language:ba",
"language:be",
"language:bg",
"language:bh",
"language:bi",
"language:bm",
"language:bn",
"language:bo",
"language:bs",
"language:br",
"language:ce",
"language:ca",
"language:ch",
"language:cr",
"language:cs",
"language:co",
"language:cu",
"language:cv",
"language:cy",
"language:da",
"language:de",
"language:dv",
"language:dz",
"language:ee",
"language:el",
"language:en",
"language:eo",
"language:es",
"language:et",
"language:eu",
"language:fa",
"language:ff",
"language:fi",
"language:fj",
"language:fo",
"language:fr",
"language:ga",
"language:fy",
"language:gd",
"language:gl",
"language:gn",
"language:gu",
"language:gv",
"language:ha",
"language:he",
"language:hi",
"language:ho",
"language:hr",
"language:ht",
"language:hy",
"language:hu",
"language:ia",
"language:hz",
"language:id",
"language:ig",
"language:ie",
"language:ii",
"language:ik",
"language:io",
"language:is",
"language:it",
"language:iu",
"language:ja",
"language:jv",
"language:ka",
"language:kg",
"language:ki",
"language:kj",
"language:kk",
"language:kl",
"language:km",
"language:kn",
"language:ko",
"language:kr",
"language:ks",
"language:ku",
"language:kv",
"language:kw",
"language:ky",
"language:la",
"language:lb",
"language:lg",
"language:li",
"language:ln",
"language:lo",
"language:lt",
"language:lu",
"language:lv",
"language:mg",
"language:mh",
"language:mi",
"language:mk",
"language:ml",
"language:mn",
"language:mr",
"language:ms",
"language:mt",
"language:my",
"language:na",
"language:nb",
"language:nd",
"language:ne",
"language:ng",
"language:nl",
"language:nn",
"language:no",
"language:nr",
"language:nv",
"language:ny",
"language:oc",
"language:oj",
"language:om",
"language:or",
"language:os",
"language:pa",
"language:pi",
"language:pl",
"language:ps",
"language:pt",
"language:qu",
"language:rm",
"language:rn",
"language:ro",
"language:ru",
"language:rw",
"language:sa",
"language:sc",
"language:sd",
"language:se",
"language:sg",
"language:si",
"language:sk",
"language:sl",
"language:sm",
"language:sn",
"language:so",
"language:sq",
"language:sr",
"language:ss",
"language:st",
"language:su",
"language:sv",
"language:sw",
"language:ta",
"language:ts",
"language:te",
"language:tg",
"language:th",
"language:ti",
"language:tk",
"language:tl",
"language:tn",
"language:to",
"language:tr",
"language:tt",
"language:tw",
"language:ty",
"language:ug",
"language:uk",
"language:ur",
"language:uz",
"language:ve",
"language:vi",
"language:vo",
"language:wa",
"language:wo",
"language:xh",
"language:yi",
"language:yo",
"language:za",
"language:zh",
"language:zu",
"language:ach",
"language:bcl",
"language:crs",
"language:bzs",
"language:efi",
"language:gil",
"language:lua",
"license:apache-2.0",
"size_categories:n<1K",
"region:us",
"chemistry",
"finance",
"biology",
"legal",
"music",
"art",
"code",
"climate",
"medical",
"not-for-all-audiences"
] | [
"text-classification",
"token-classification",
"table-question-answering",
"question-answering",
"zero-shot-classification",
"translation",
"summarization",
"conversational",
"feature-extraction",
"text-generation",
"text2text-generation",
"fill-mask",
"sentence-similarity",
"text-to-speech",
"automatic-speech-recognition",
"audio-to-audio",
"audio-classification",
"voice-activity-detection",
"depth-estimation",
"image-classification",
"object-detection",
"image-segmentation",
"text-to-image",
"image-to-text",
"image-to-image",
"unconditional-image-generation",
"video-classification",
"reinforcement-learning",
"robotics",
"tabular-classification",
"tabular-regression",
"tabular-to-text",
"table-to-text",
"multiple-choice",
"text-retrieval",
"time-series-forecasting",
"text-to-video",
"visual-question-answering",
"zero-shot-image-classification",
"graph-ml"
] | "2023-07-24T19:31:23Z" | |
librarian-bots/authors_merged_model_prs | librarian-bots | "2023-09-27T11:39:04Z" | 0 | 0 | [
"region:us"
] | null | "2023-09-17T17:55:33Z" | |
librarian-bots/authors_merged_dataset_prs | librarian-bots | "2023-09-25T14:33:23Z" | 0 | 0 | [
"region:us"
] | null | "2023-09-17T18:12:31Z" | |
librarian-bots/authors-orgs-who-have-merged-a-pr | librarian-bots | "2023-09-25T14:40:12Z" | 0 | 0 | [
"region:us"
] | null | "2023-09-25T14:40:11Z" | |
neomausen/monika_xl_lora | neomausen | "2023-11-07T15:11:35Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2023-11-07T15:11:32Z" | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 375708.0
num_examples: 8
download_size: 303032
dataset_size: 375708.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "monika_xl_lora"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
MysticPixel42/erotic_teens_unconditional | MysticPixel42 | "2024-02-08T21:27:01Z" | 0 | 6 | [
"region:us"
] | null | "2024-02-08T20:09:36Z" | |
danielz01/laion-5b | danielz01 | "2024-02-14T19:23:25Z" | 0 | 20 | [
"task_categories:feature-extraction",
"task_categories:zero-shot-classification",
"language:en",
"license:cc-by-4.0",
"size_categories:1B<n<10B",
"region:us"
] | [
"feature-extraction",
"zero-shot-classification"
] | "2024-02-14T03:07:33Z" | |
carloswbarros/atas-test | carloswbarros | "2024-04-22T16:28:58Z" | 0 | 0 | [
"region:us"
] | null | "2024-04-22T15:59:35Z" | |
jssaluja/paath_sggs_text | jssaluja | "2024-05-18T09:24:27Z" | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-05-15T04:29:38Z" | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 5303418
num_examples: 23067
download_size: 2140552
dataset_size: 5303418
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
bradfordlevy/BeanCounter | bradfordlevy | "2024-11-01T00:57:53Z" | 0 | 0 | [
"size_categories:10M<n<100M",
"modality:text",
"arxiv:2409.17827",
"region:us"
] | null | "2024-06-12T15:16:13Z" | ---
configs:
- config_name: default
data_files:
- split: train
path:
- "train/bc-*.jsonl.gz"
- split: validation
path:
- "validation/bc-*.jsonl.gz"
- config_name: clean
data_files:
- split: train
path:
- "train/bc*.jsonl.gz"
- "deduped/bc*.jsonl.gz"
- split: validation
path:
- "validation/bc*.jsonl.gz"
- config_name: sample
data_files:
- split: train
path:
- "sample/train/bc*.jsonl.gz"
- split: validation
path:
- "sample/validation/bc*.jsonl.gz"
- config_name: fraud
data_files:
- split: train
path:
- "fraud/bc-*.jsonl.gz"
---
# 🫘🧮 BeanCounter
## Dataset Summary
BeanCounter is a low-toxicity, large-scale, and open dataset of business-oriented text. See [Wang and Levy (2024)](https://arxiv.org/abs/2409.17827) for details of the data collection, analysis, and some explorations of using the data for continued pre-training.
The data is sourced from the Electronic Data Gathering and Retrieval (EDGAR) system operated by the United States Securities and Exchange Commission (SEC). Specifically all filings submitted to EDGAR from 1996 through 2023 (validation splits are based on a random sample of data from January and February of 2024). We include four configurations of the dataset: `clean`, `default`, `fraud`, and `sample`. These consist of:
- `clean`: 159B tokens of cleaned text
- `default`: 111B tokens of cleaned and deduplicated text (referred to as "final" in the paper)
- `fraud`: 0.3B tokens of text filed during periods of fraud according to SEC [Accounting and Auditing Enforcement Releases](https://www.sec.gov/enforcement-litigation/accounting-auditing-enforcement-releases) and [Litigation Releases](https://www.sec.gov/enforcement-litigation/litigation-releases) (Note that this content is not deduplicated)
- `sample`: 1.1B tokens randomly sampled from `default` stratified by year
## How can I use this?
### License
The dataset is provided under the [ODC-By](https://opendatacommons.org/licenses/by/1-0/) license. Cite our work as:
```
@misc{wang2024beancounterlowtoxicitylargescaleopen,
title = {BeanCounter: A low-toxicity, large-scale, and open dataset of business-oriented text},
author = {Siyan Wang and Bradford Levy},
year = 2024,
url = {https://arxiv.org/abs/2409.17827},
eprint = {2409.17827},
archiveprefix = {arXiv},
primaryclass = {cs.CL}
}
```
### In 🤗 Datasets
To load the random sample config in Datasets, one can run:
```python
from datasets import load_dataset
beancounter = load_dataset(
"blevy41/BeanCounter",
name="sample", # Load random sample, clean, or default (referred to as final in paper)
)
# Print out split info
print(beancounter, "\n")
# Inspect an observation
print(f"COLUMNS IN DATA: {','.join(beancounter['train'][1000].keys())}\n")
print(f"EXCERPT: \n\n{beancounter['train'][1000]['text'][:1000]}")
```
## What fields are in the data?
The data contain seven fields:
1. `accession` - A unique identifier assigned to accepted EDGAR filings
2. `filename` - Each filing consists of one or more attachments. This is the filename of the specific attachment within the filing
3. `text` - Extracted text
4. `type_filing` - The type of the filing. A full index of SEC filing types can be found [here](https://www.sec.gov/submit-filings/forms-index)
5. `type_attachment` - The type of the attachment. For example, an 8-K filing will have a main "8-K" attachment but could also have exhibits of other types such as "EX-99"
6. `date` - The filing date assigned by the EDGAR system
7. `ts_accept` - The timestamp when the filing was accepted by the EDGAR system
Note that if a filing is accepted by EDGAR after the [filing deadline](https://www.sec.gov/submit-filings/filer-support-resources/how-do-i-guides/determine-status-my-filing#section1) then EDGAR will not disseminate the form until the next business day and the `date` assigned by the EDGAR system will be the next business day, i.e., after `ts_accept`.
Full details of processing can be found in [Wang and Levy (2024)](https://arxiv.org/abs/2409.17827).
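As a rough sketch of how these fields can be used together, the example below loads the `sample` configuration (using the repository id from the loading example above) and filters attachments by `type_filing` and `date`. The specific filter values (`8-K`, a 2020 cut-off) are arbitrary illustrations, and the comparison assumes `date` is stored as an ISO-formatted string.
```python
from datasets import load_dataset

# Load the 1% stratified sample (repo id and config name follow the loading
# example earlier in this card).
beancounter = load_dataset("blevy41/BeanCounter", name="sample", split="train")

# Keep only 8-K attachments filed on or after 2020-01-01. The field names come
# from the list above; the filter values are arbitrary examples, and the string
# comparison assumes "date" is an ISO-formatted date string.
recent_8k = beancounter.filter(
    lambda row: row["type_filing"] == "8-K" and row["date"] >= "2020-01-01"
)

print(f"Matching attachments: {len(recent_8k)}")
if len(recent_8k) > 0:
    first = recent_8k[0]
    print(first["accession"], first["filename"], first["ts_accept"])
    print(first["text"][:500])
```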
# Datasheet
Questions from the Datasheets for Datasets paper, v7.
Jump to section:
- [Motivation](#motivation)
- [Composition](#composition)
- [Collection process](#collection-process)
- [Preprocessing/cleaning/labeling](#preprocessingcleaninglabeling)
- [Uses](#uses)
- [Distribution](#distribution)
- [Maintenance](#maintenance)
## Motivation
_The questions in this section are primarily intended to encourage dataset creators
to clearly articulate their reasons for creating the dataset and to promote transparency
about funding interests._
### For what purpose was the dataset created?
_Was there a specific task in mind? Was there a specific gap that needed to be filled?
Please provide a description._
BeanCounter is one of the largest business-oriented text datasets and was created to facilitate research on business-domain NLP and on toxicity in NLP datasets.
### Who created the dataset (e.g., which team, research group) and on behalf of which entity (e.g., company, institution, organization)?
The BeanCounter dataset is created by Bradford Levy and Siyan Wang at University of Chicago Booth School of Business.
### Who funded the creation of the dataset?
_If there is an associated grant, please provide the name of the grantor and the grant
name and number._
There are no specific grants that supported the creation of the dataset; we acknowledge general financial support from University of Chicago Booth School of Business.
### Any other comments?
No.
## Composition
_Most of these questions are intended to provide dataset consumers with the
information they need to make informed decisions about using the dataset for
specific tasks. The answers to some of these questions reveal information
about compliance with the EU’s General Data Protection Regulation (GDPR) or
comparable regulations in other jurisdictions._
### What do the instances that comprise the dataset represent (e.g., documents, photos, people, countries)?
_Are there multiple types of instances (e.g., movies, users, and ratings; people and
interactions between them; nodes and edges)? Please provide a description._
The instances are publicly available financial disclosure documents (text) filed on the Securities and Exchange Commission's Electronic Data Gathering and Retrieval system (SEC EDGAR) by entities subject to the Securities Acts of 1933 and 1934, the Trust Indenture Act of 1939, and the Investment Company Act of 1940.
### How many instances are there in total (of each type, if appropriate)?
We collected 16,486,145 documents (instances) from more than 16,000 entities.
### Does the dataset contain all possible instances or is it a sample (not necessarily random) of instances from a larger set?
_If the dataset is a sample, then what is the larger set? Is the sample representative
of the larger set (e.g., geographic coverage)? If so, please describe how this
representativeness was validated/verified. If it is not representative of the larger set,
please describe why not (e.g., to cover a more diverse range of instances, because
instances were withheld or unavailable)._
We filter out documents containing very little text or a high proportion of white space; see Appendix A in Wang and Levy (2024) for more details. We provide 3 configurations of the dataset: BeanCounter.clean, BeanCounter.final and BeanCounter.sample. BeanCounter.clean is the set of documents retained after applying the cleaning technique described in Appendix A.3. BeanCounter.final is the set of documents that have additionally been deduplicated on a document basis (see Appendix A.4), and BeanCounter.sample is a 1% random sample of the dataset stratified by year.
### What data does each instance consist of?
_“Raw” data (e.g., unprocessed text or images) or features? In either case, please
provide a description._
Each instance consists of:
- accession number: unique number assigned to each filing according to the entity's CIK, filing year and number of business days.
- file name: name of the document submission including the extension (e.g. .html or .txt).
- text: textual content of the document.
- filing type: indicated type of submission to fulfill a specific SEC regulation; more specific than form type; e.g. DEF 14A (filing type) vs. DEF (form type).
- attachment type: purpose of the document in the particular filing. The two main types are the main filing or exhibits (supplementary materials to the main filing).
- date: date of filing submission.
- form type: indicated type of submission to fulfill a particular SEC regulation (similar to filing type but less specific).
- the accepted timestamp: second-precise timestamp of when the document is accepted into SEC EDGAR.
### Is there a label or target associated with each instance?
_If so, please provide a description._
No.
### Is any information missing from individual instances?
_If so, please provide a description, explaining why this information is missing (e.g.,
because it was unavailable). This does not include intentionally removed information,
but might include, e.g., redacted text._
No information should be missing from instances.
### Are relationships between individual instances made explicit (e.g., users’ movie ratings, social network links)?
_If so, please describe how these relationships are made explicit._
Instances are attachments to a particular filing, and each filing can contain one or more attachments. If the filing has more than one attachment (or instance), each attachment in the filing shares the same accession (i.e. the instances are linked by accession).
### Are there recommended data splits (e.g., training, development/validation, testing)?
_If so, please provide a description of these splits, explaining the rationale behind them._
The training set contains all data extracted from SEC EDGAR between 1996 and 2023. The validation set contains 100MB (uncompressed) of documents sampled from the start of 2024 through the end of February 2024. The training and validation sets are partitioned by time to ensure that data in the validation set is largely new and unobserved in the training set, since most entities are required to file updated reports at least annually.
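As a minimal sketch of what this split scheme looks like in practice (reusing the repository id and `sample` configuration from the loading example earlier in this card, and assuming nothing beyond the split names described above):
```python
from datasets import load_dataset

# Minimal sketch: load both splits of the "sample" configuration.
beancounter = load_dataset("blevy41/BeanCounter", name="sample")

print(beancounter["train"])       # filings accepted between 1996 and 2023
print(beancounter["validation"])  # filings accepted in early 2024
```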
### Are there any errors, sources of noise, or redundancies in the dataset?
_If so, please provide a description._
Since the entities are responsible for producing the documents, there is a possibility of misreporting numbers or information in their filings. If these errors are found by the SEC, they can ask for corrections from these entities; otherwise, the errors can go undetected. For discussion on reducing redundancies in the dataset, please see Appendix A.3 and A.4 in the manuscript for details.
### Is the dataset self-contained, or does it link to or otherwise rely on external resources (e.g., websites, tweets, other datasets)?
_If it links to or relies on external resources, a) are there guarantees that they will
exist, and remain constant, over time; b) are there official archival versions of the
complete dataset (i.e., including the external resources as they existed at the time the
dataset was created); c) are there any restrictions (e.g., licenses, fees) associated with
any of the external resources that might apply to a future user? Please provide descriptions
of all external resources and any restrictions associated with them, as well as links or other
access points, as appropriate._
The dataset is self contained.
### Does the dataset contain data that might be considered confidential (e.g., data that is protected by legal privilege or by doctor-patient confidentiality, data that includes the content of individuals’ non-public communications)?
_If so, please provide a description._
No, the data does not contain any confidential information. All financial disclosures filed on SEC EDGAR are publicly available. Discussion regarding the license of SEC EDGAR data can be found at the beginning of Section 3 in Wang and Levy (2024).
### Does the dataset contain data that, if viewed directly, might be offensive, insulting, threatening, or might otherwise cause anxiety?
_If so, please describe why._
We have conducted extensive toxicity analysis of the dataset and determined that it is lower in toxicity compared to other web-based datasets; details can be found in Section 3.4 of the manuscript. Discussions regarding the difference between BeanCounter and other web-based datasets can also be found in the conclusion.
Based on manual inspection of toxic content in the dataset, we have found rare instances of toxic sentences in filings that include earnings call transcripts or discussions of discriminatory communication (with examples) in the context of Human Resources training manuals.
### Does the dataset relate to people?
_If not, you may skip the remaining questions in this section._
A small portion of our dataset may relate to people insofar as they are mentioned by the entities in our dataset. For example, Tim Cook may be mentioned in our data if Apple, or its competitors, discuss him.
### Does the dataset identify any subpopulations (e.g., by age, gender)?
_If so, please describe how these subpopulations are identified and provide a description of
their respective distributions within the dataset._
BeanCounter includes references to various subpopulations; we explicitly study the toxicity of text surrounding these mentions, and details can be found in Sections 3.3 and 3.4 of Wang and Levy (2024).
### Is it possible to identify individuals (i.e., one or more natural persons), either directly or indirectly (i.e., in combination with other data) from the dataset?
_If so, please describe how._
The dataset can contain personally identifiable information; however, the entities have consented to making this information available. See beginning of Section 3 in manuscript for more detailed discussion.
### Does the dataset contain data that might be considered sensitive in any way (e.g., data that reveals racial or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history)?
_If so, please provide a description._
No.
### Any other comments?
No.
## Collection process
_\[T\]he answers to questions here may provide information that allow others to
reconstruct the dataset without access to it._
### How was the data associated with each instance acquired?
_Was the data directly observable (e.g., raw text, movie ratings), reported by subjects (e.g.,
survey responses), or indirectly inferred/derived from other data (e.g., part-of-speech tags,
model-based guesses for age or language)? If data was reported by subjects or indirectly
inferred/derived from other data, was the data validated/verified? If so, please describe how._
The data associated with each instance is derived from the SEC's daily archives of filings accepted by the EDGAR system. The EDGAR system accepts a variety of file formats. We process all text and HTML-based files to extract formatted long-form text from each filing. Full details of the dataset construction process can be found in Appendix A of Wang and Levy (2024).
### What mechanisms or procedures were used to collect the data (e.g., hardware apparatus or sensor, manual human curation, software program, software API)?
_How were these mechanisms or procedures validated?_
The SEC publishes daily archives of all filings accepted by the EDGAR system. We downloaded these in an automated manner, retrying any failed downloads until they succeeded.
### If the dataset is a sample from a larger set, what was the sampling strategy (e.g., deterministic, probabilistic with specific sampling probabilities)?
We process all text and HTML-based filings. The "sample" configuration of the BeanCounter dataset consists of a random sample of 1% of the full BeanCounter dataset. We sample this data stratified by year to ensure an even volume of tokens for each year.
### Who was involved in the data collection process (e.g., students, crowdworkers, contractors) and how were they compensated (e.g., how much were crowdworkers paid)?
The authors completed all data collection activities themselves.
### Over what timeframe was the data collected?
_Does this timeframe match the creation timeframe of the data associated with the instances (e.g.
recent crawl of old news articles)? If not, please describe the timeframe in which the data
associated with the instances was created._
The data was collected in February 2024; however, the SEC EDGAR system is similar to an append-only database where each filing is associated with a timestamp denoting the date and time it was accepted by EDGAR. In that sense, any data collected retroactively, e.g., a filing from 2014, is representative of its content at the time EDGAR accepted it.
### Were any ethical review processes conducted (e.g., by an institutional review board)?
_If so, please provide a description of these review processes, including the outcomes, as well as
a link or other access point to any supporting documentation._
No.
### Does the dataset relate to people?
_If not, you may skip the remainder of the questions in this section._
A small portion of our dataset may relate to people insofar as they are mentioned by the entities in our dataset. For example, Tim Cook may be mentioned in our data if Apple, or its competitors, discuss him.
### Did you collect the data from the individuals in question directly, or obtain it via third parties or other sources (e.g., websites)?
All data is collected from SEC EDGAR.
### Were the individuals in question notified about the data collection?
_If so, please describe (or show with screenshots or other information) how notice was provided,
and provide a link or other access point to, or otherwise reproduce, the exact language of the
notification itself._
They were not.
### Did the individuals in question consent to the collection and use of their data?
_If so, please describe (or show with screenshots or other information) how consent was
requested and provided, and provide a link or other access point to, or otherwise reproduce, the
exact language to which the individuals consented._
Yes, all EDGAR filers consent to the SEC's terms of use, which stipulate that "Information presented on www.sec.gov is considered public information and may be copied or further distributed by users of the web site without the SEC’s permission." More details on the SEC's policy can be found [here](https://web.archive.org/web/20240602180519/https://www.sec.gov/privacy#dissemination).
### If consent was obtained, were the consenting individuals provided with a mechanism to revoke their consent in the future or for certain uses?
_If so, please provide a description, as well as a link or other access point to the mechanism
(if appropriate)._
Not applicable.
### Has an analysis of the potential impact of the dataset and its use on data subjects (e.g., a data protection impact analysis) been conducted?
_If so, please provide a description of this analysis, including the outcomes, as well as a link
or other access point to any supporting documentation._
See Wang and Levy (2024) for a discussion of the implications and impact of the dataset.
### Any other comments?
## Preprocessing/cleaning/labeling
_The questions in this section are intended to provide dataset consumers with the information
they need to determine whether the “raw” data has been processed in ways that are compatible
with their chosen tasks. For example, text that has been converted into a “bag-of-words” is
not suitable for tasks involving word order._
### Was any preprocessing/cleaning/labeling of the data done (e.g., discretization or bucketing, tokenization, part-of-speech tagging, SIFT feature extraction, removal of instances, processing of missing values)?
_If so, please provide a description. If not, you may skip the remainder of the questions in
this section._
Yes, preprocessing and cleaning were applied to both raw-text and HTML-based filings. The goal of these steps is to extract long-form text from the original filings while preserving meaningful formatting such as paragraph breaks, indentation, and lists. See Wang and Levy (2024) for further details of the exact preprocessing and cleaning.
### Was the “raw” data saved in addition to the preprocessed/cleaned/labeled data (e.g., to support unanticipated future uses)?
_If so, please provide a link or other access point to the “raw” data._
Yes, the raw data is directly available from the SEC and they have pledged to continue to make it available.
### Is the software used to preprocess/clean/label the instances available?
_If so, please provide a link or other access point._
Yes, please see the supplementary materials document for how to access it.
### Any other comments?
## Uses
_These questions are intended to encourage dataset creators to reflect on the tasks
for which the dataset should and should not be used. By explicitly highlighting these tasks,
dataset creators can help dataset consumers to make informed decisions, thereby avoiding
potential risks or harms._
### Has the dataset been used for any tasks already?
_If so, please provide a description._
We explored the utility of BeanCounter by continually pretraining existing models on the dataset and evaluating them on financial and toxicity-related tasks; see Section 4 of Wang and Levy (2024) for a detailed discussion.
### Is there a repository that links to any or all papers or systems that use the dataset?
_If so, please provide a link or other access point._
No, BeanCounter has not been used in other papers and systems.
### What (other) tasks could the dataset be used for?
The dataset could be used for tasks that evaluate social biases (e.g. CrowS-Pairs), truthfulness (e.g. TruthfulQA), timeliness (e.g. TempLAMA), and other financial-domain knowledge evaluations (e.g. ConvFinQA).
### Is there anything about the composition of the dataset or the way it was collected and preprocessed/cleaned/labeled that might impact future uses?
_For example, is there anything that a future user might need to know to avoid uses that
could result in unfair treatment of individuals or groups (e.g., stereotyping, quality of
service issues) or other undesirable harms (e.g., financial harms, legal risks) If so, please
provide a description. Is there anything a future user could do to mitigate these undesirable
harms?_
While we process all of the filings uploaded to EDGAR, our text extraction process only supports text and HTML-based documents. As a result, the content of other document types, e.g., Excel, will not appear in our dataset.
### Are there tasks for which the dataset should not be used?
_If so, please provide a description._
Due to the nature of content in the dataset, models trained on BeanCounter may lack imagination and perform poorly on benchmarks that evaluate a model's creativity; see the Conclusion in Wang and Levy (2024) for additional discussion of the idiosyncrasies of the data.
### Any other comments?
No.
## Distribution
### Will the dataset be distributed to third parties outside of the entity (e.g., company, institution, organization) on behalf of which the dataset was created?
_If so, please provide a description._
Yes.
### How will the dataset will be distributed (e.g., tarball on website, API, GitHub)?
_Does the dataset have a digital object identifier (DOI)?_
The dataset will be available via HuggingFace Hub as a collection of gzipped json files.
### When will the dataset be distributed?
It will be made publicly available close to the NeurIPS conference date.
### Will the dataset be distributed under a copyright or other intellectual property (IP) license, and/or under applicable terms of use (ToU)?
_If so, please describe this license and/or ToU, and provide a link or other access point to,
or otherwise reproduce, any relevant licensing terms or ToU, as well as any fees associated
with these restrictions._
Yes, the dataset will be distributed under [Open Data Commons Attributions](https://opendatacommons.org/licenses/by/) license. This permissive license allows users to share and adapt the dataset as long as they give credit to the authors.
### Have any third parties imposed IP-based or other restrictions on the data associated with the instances?
_If so, please describe these restrictions, and provide a link or other access point to, or
otherwise reproduce, any relevant licensing terms, as well as any fees associated with these
restrictions._
No.
### Do any export controls or other regulatory restrictions apply to the dataset or to individual instances?
_If so, please describe these restrictions, and provide a link or other access point to, or otherwise
reproduce, any supporting documentation._
No.
### Any other comments?
## Maintenance
_These questions are intended to encourage dataset creators to plan for dataset maintenance
and communicate this plan with dataset consumers._
### Who is supporting/hosting/maintaining the dataset?
Bradford Levy and Siyan Wang are supporting and maintaining the dataset.
### How can the owner/curator/manager of the dataset be contacted (e.g., email address)?
Please refer to the manuscript for email addresses.
### Is there an erratum?
_If so, please provide a link or other access point._
Please see the GitHub repository for errata related to the dataset.
### Will the dataset be updated (e.g., to correct labeling errors, add new instances, delete instances)?
_If so, please describe how often, by whom, and how updates will be communicated to users (e.g., mailing list, GitHub)?_
Yes, as soon as practicable. The updates can be seen on Github and HuggingFace Hub.
### If the dataset relates to people, are there applicable limits on the retention of the data associated with the instances (e.g., were individuals in question told that their data would be retained for a fixed period of time and then deleted)?
_If so, please describe these limits and explain how they will be enforced._
No, the entities in the dataset have agreed to make it publicly available in perpetuity.
### Will older versions of the dataset continue to be supported/hosted/maintained?
_If so, please describe how. If not, please describe how its obsolescence will be communicated to users._
Yes, the older versions of the dataset will continue to be hosted on Huggingface Hub.
### If others want to extend/augment/build on/contribute to the dataset, is there a mechanism for them to do so?
_If so, please provide a description. Will these contributions be validated/verified? If so,
please describe how. If not, why not? Is there a process for communicating/distributing these
contributions to other users? If so, please provide a description._
Researchers can interact and use the BeanCounter dataset via Huggingface Hub; we do not provide functionalities beyond what Huggingface Hub provides.
### Any other comments?
No.
|
Beijuka/NCHLT_Speech_corpus_Afrikaans | Beijuka | "2024-06-20T15:43:11Z" | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-06-20T15:38:21Z" | ---
dataset_info:
features:
- name: speaker_id
dtype: int64
- name: age
dtype: int64
- name: gender
dtype: string
- name: location
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: md5sum
dtype: string
- name: duration
dtype: float64
- name: pdp_score
dtype: float64
- name: orth
dtype: string
splits:
- name: train
num_bytes: 6273810430.48
num_examples: 63131
- name: test
num_bytes: 331000224.28
num_examples: 3002
download_size: 6410607519
dataset_size: 6604810654.759999
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
nexa-collaboration/restore_pile_100M | nexa-collaboration | "2024-07-18T05:44:50Z" | 0 | 0 | [
"size_categories:100M<n<1B",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-07-18T00:25:11Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: context
dtype: string
splits:
- name: train
num_bytes: 275318766540.29285
num_examples: 131039420
download_size: 149074368623
dataset_size: 275318766540.29285
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
avemio/GRAG-CPT-HESSIAN-AI | avemio | "2024-08-05T11:39:52Z" | 0 | 0 | [
"size_categories:100K<n<1M",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-08-05T10:02:48Z" | ---
configs:
- config_name: question-answering
data_files: "cpt-qa/equally-distributed-continued-pretrainingdata-without-reference_id-under-250k-final.json"
- config_name: summarizations
data_files: "cpt-summarizations/wikipedia-continued-pretrainingdata-general-summarization_id-over-250k-under-500k.json"
- config_name: reasoning-de
data_files: "cpt-reasoning-de/continued-pretrainingdata-reasoning-200k-de-adjusted-cpt.json"
- config_name: reasoning-en
data_files: "cpt-reasoning-en/continued-pretrainingdata-reasoning-200k-en-adjusted-cpt.json"
---
|
sionic-ai/ko-dpo-mix-7k-translation-exclude | sionic-ai | "2024-08-09T09:08:20Z" | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-08-09T09:08:16Z" | ---
dataset_info:
features:
- name: dataset
dtype: string
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
- name: chosen_rating
dtype: float32
- name: rejected_rating
dtype: float32
splits:
- name: train
num_bytes: 45376870
num_examples: 6391
- name: test
num_bytes: 5030578
num_examples: 711
download_size: 24845512
dataset_size: 50407448
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
sionic-ai/ko-dpo-mix-7k-trl-style | sionic-ai | "2024-08-09T11:02:57Z" | 0 | 5 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-08-09T11:02:53Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
- name: history
sequence:
sequence: string
splits:
- name: train
num_bytes: 32619961
num_examples: 6389
download_size: 17120855
dataset_size: 32619961
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Vishwaksen/AppointmentAssistant-V2 | Vishwaksen | "2024-09-18T05:29:52Z" | 0 | 0 | [
"region:us"
] | null | "2024-09-18T05:29:47Z" | Invalid username or password. |
xDAN-Vision/Math-Premium_Gaokao_CoT | xDAN-Vision | "2024-09-21T05:53:41Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-09-19T09:31:37Z" | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: reflection_response
dtype: string
- name: generation_system_prompt
dtype: string
- name: source
dtype: string
- name: system
dtype: string
splits:
- name: train
num_bytes: 5227306
num_examples: 432
download_size: 655672
dataset_size: 5227306
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
avemio/GRAG-DPO-Alpaca-HESSIAN-AI | avemio | "2024-10-09T12:12:27Z" | 0 | 0 | [
"task_categories:question-answering",
"language:de",
"license:cc-by-4.0",
"size_categories:100K<n<1M",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"retrieval",
"wikipedia",
"german",
"rag"
] | [
"question-answering"
] | "2024-09-19T12:16:51Z" | ---
configs:
- config_name: extraction-recall
data_files:
- split: train
path: extraction-recall/DPO_equally-distributed-wikipedia-trainingdata-extract-recall_id-over-750k-under-1000k.jsonl
- config_name: qa-without-timedifference
data_files:
- split: train
path: qa-without-timedifference/DPO_equally-distributed-wikipedia-trainingdata-without-time-reference_id-over-750k-under-1000k.jsonl
- config_name: qa-with-timedifference
data_files:
- split: train
path: qa-with-timedifference/DPO_equally-distributed-wikipedia-trainingdata-with-timedifference_id-over-500k-under-750k.jsonl
- config_name: qa-with-multiple-references
data_files:
- split: train
path: qa-with-multiple-references/DPO_equally-distributed-wikipedia-trainingdata-qa-with-multiple-references_id-over-800k-under-1000k.jsonl
- config_name: questions
data_files:
- split: train
path: questions/DPO_wikipedia-trainingdata-questions_id-over-750k-under-1000k.jsonl
- config_name: relevant-context
data_files:
- split: train
path: relevant-context/DPO_wikipedia-trainingdata-relevant-context_id-over-500k-under-750k.jsonl
license: cc-by-4.0
task_categories:
- question-answering
language:
- de
tags:
- retrieval
- wikipedia
- german
- rag
size_categories:
- 1M<n<10M
--- |
avemio/GRAG-DPO-ShareGPT-HESSIAN-AI | avemio | "2024-10-09T19:57:16Z" | 0 | 0 | [
"task_categories:question-answering",
"language:de",
"license:cc-by-4.0",
"size_categories:100K<n<1M",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"retrieval",
"wikipedia",
"german",
"rag"
] | [
"question-answering"
] | "2024-09-19T13:19:20Z" | ---
configs:
- config_name: extraction-recall
data_files:
- split: train
path: extraction-recall/DPO_equally-distributed-wikipedia-trainingdata-extract-recall_id-over-750k-under-1000k_sharegpt.jsonl
- config_name: qa-without-timedifference
data_files:
- split: train
path: qa-without-timedifference/DPO_equally-distributed-wikipedia-trainingdata-without-time-reference_id-over-750k-under-1000k_sharegpt.jsonl
- config_name: qa-with-timedifference
data_files:
- split: train
path: qa-with-timedifference/DPO_equally-distributed-wikipedia-trainingdata-with-timedifference_id-over-500k-under-750k_sharegpt.jsonl
- config_name: qa-with-multiple-references
data_files:
- split: train
path: qa-with-multiple-references/DPO_equally-distributed-wikipedia-trainingdata-qa-with-multiple-references_id-over-800k-under-1000k_sharegpt.jsonl
- config_name: questions
data_files:
- split: train
path: questions/DPO_wikipedia-trainingdata-questions_id-over-750k-under-1000k_sharegpt.jsonl
- config_name: relevant-context
data_files:
- split: train
path: relevant-context/DPO_wikipedia-trainingdata-relevant-context_id-over-500k-under-750k_sharegpt.jsonl
license: cc-by-4.0
task_categories:
- question-answering
language:
- de
tags:
- retrieval
- wikipedia
- german
- rag
size_categories:
- 1M<n<10M
--- |
avemio/GRAG-ORPO-Alpaca-HESSIAN-AI | avemio | "2024-11-14T21:39:57Z" | 0 | 0 | [
"task_categories:question-answering",
"task_categories:summarization",
"language:de",
"language:en",
"license:cc-by-4.0",
"size_categories:10K<n<100K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"retrieval",
"german",
"rag",
"reasoning"
] | [
"question-answering",
"summarization"
] | "2024-10-02T18:33:32Z" | ---
configs:
- config_name: hard-reasoning-de
data_files:
- split: train
path: hard-reasoning-de/ORPO_TRAIN_hard_reasoning_personas_DE_cleaned-v2.jsonl
- config_name: hard-reasoning-en
data_files:
- split: train
path: hard-reasoning-en/ORPO_TRAIN_hard_reasoning_personas_EN_cleaned.jsonl
- config_name: SauerkrautLM-Fermented-GER-DPO
data_files:
- split: train
path: SauerkrautLM-Fermented-GER-DPO/SauerkrautLM-Fermented-GER-DPO-with-system.jsonl
- config_name: SauerkrautLM-Fermented-Irrelevance-GER-DPO
data_files:
- split: train
path: SauerkrautLM-Fermented-Irrelevance-GER-DPO/SauerkrautLM-Fermented-Irrelevance-GER-DPO.jsonl
- config_name: qa-meeting-attendee-topic
data_files:
- split: train
path: >-
qa-meeting-attendee-topic/ORPO_TRAIN_summarize_by_attendee_and_topic_simulated_meetings_splitted_below_16k.jsonl
- config_name: qa-meeting-topic
data_files:
- split: train
path: >-
qa-meeting-topic/ORPO_TRAIN_separated_by_topic_combined_simulated_meetings_splitted_below_16k.jsonl
- config_name: hard-qa-with-multiple-references
data_files:
- split: train
path: >-
hard-qa-with-multiple-references/ORPO_TRAIN_HARD_equally-distributed-wikipedia-trainingdata-qa-with-multiple-references_id-over-1100k-under-1200k_splitted.jsonl
license: cc-by-4.0
task_categories:
- question-answering
- summarization
language:
- de
- en
tags:
- retrieval
- german
- rag
- reasoning
--- |
dev7halo/naver-news-summarization-ko-with-gen | dev7halo | "2024-10-17T07:40:25Z" | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-10-17T07:35:46Z" | ---
dataset_info:
features:
- name: date
dtype: string
- name: category
dtype: string
- name: press
dtype: string
- name: title
dtype: string
- name: document
dtype: string
- name: link
dtype: string
- name: summary
dtype: string
- name: gen_text
dtype: string
- name: gen_long_summary
dtype: string
splits:
- name: train
num_bytes: 20026155
num_examples: 1638
download_size: 10754216
dataset_size: 20026155
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
dev7halo/naver-news-dpo | dev7halo | "2024-10-19T04:24:24Z" | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-10-19T04:21:16Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
splits:
- name: train
num_bytes: 14933143
num_examples: 1638
download_size: 8272918
dataset_size: 14933143
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Turbo-AI/data-cross_dev | Turbo-AI | "2024-10-28T02:03:21Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-10-28T02:03:19Z" | ---
dataset_info:
features:
- name: id
dtype: int64
- name: relevant
sequence: int64
splits:
- name: train
num_bytes: 86000
num_examples: 500
download_size: 44355
dataset_size: 86000
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
bradfordlevy/BeanCounter-Descriptor-Sents | bradfordlevy | "2024-10-30T00:38:05Z" | 0 | 0 | [
"size_categories:100M<n<1B",
"modality:text",
"arxiv:2409.17827",
"arxiv:2104.08758",
"region:us"
] | null | "2024-10-28T18:48:34Z" | ---
configs:
- config_name: default
data_files:
- split: train
path:
- "bc-train/bc-*.jsonl.gz"
- split: validation
path:
- "bc-validation/bc-*.jsonl.gz"
- config_name: bc-clean
data_files:
- split: train
path:
- "bc-train/bc*.jsonl.gz"
- "bc-deduped/bc*.jsonl.gz"
- split: validation
path:
- "bc-validation/bc*.jsonl.gz"
- config_name: c4-en
data_files:
- split: train
path:
- "c4-en/c4-train*.json.gz"
---
# 🫘🧮 BeanCounter - Descriptive Sentences
## Dataset Summary
`BeanCounter - Descriptive Sentences` consists of sentences extracted from the BeanCounter ([🤗 Datasets](https://huggingface.co/datasets/blevy41/BeanCounter), [Paper](https://arxiv.org/abs/2409.17827)) and C4 ([🤗 Datasets](https://huggingface.co/datasets/allenai/c4), [Paper](https://arxiv.org/pdf/2104.08758)) datasets, where each sentence contains at least one demographic descriptor from one of the five axes: Gender and Sex, Sexual Orientation, Nationality, Race and Ethnicity, and Religion. The descriptors and axes are taken from [HolisticBias](https://aclanthology.org/2022.emnlp-main.625/). Full details of how these sentences were collected can be found in Section 3 of [Wang and Levy (2024)](https://arxiv.org/abs/2409.17827).
We include three configurations of the dataset: `bc-clean`, `default`, and `c4-en`. These consist of:
- `bc-clean`: 27.0M sentences from the `clean` subset of BeanCounter
- `default`: 19.5M sentences from the `default` subset of BeanCounter (referred to as the "final" split in the paper)
- `c4-en`: 132M sentences from the `en` subset of C4
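As a rough illustration of the selection criterion only (the actual extraction pipeline is described in Section 3 of the paper, and the full descriptor lists come from the five HolisticBias axes), a sentence is retained when it matches at least one descriptor term; the descriptor list below is a placeholder:
```python
import re

# Placeholder descriptors -- the real lists cover all five HolisticBias axes.
DESCRIPTORS = ["female", "male", "american", "buddhist", "hispanic"]

pattern = re.compile(r"\b(" + "|".join(map(re.escape, DESCRIPTORS)) + r")\b", re.IGNORECASE)

def contains_descriptor(sentence: str) -> bool:
    """Return True if the sentence mentions at least one demographic descriptor."""
    return pattern.search(sentence) is not None

print(contains_descriptor("The company was founded by a female entrepreneur."))  # True
print(contains_descriptor("Revenue increased by 12% year over year."))           # False
```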
## How can I use this?
### License
The dataset is provided under the [ODC-By](https://opendatacommons.org/licenses/by/1-0/) license. Cite our work as:
```
@misc{wang2024beancounterlowtoxicitylargescaleopen,
title = {BeanCounter: A low-toxicity, large-scale, and open dataset of business-oriented text},
author = {Siyan Wang and Bradford Levy},
year = 2024,
url = {https://arxiv.org/abs/2409.17827},
eprint = {2409.17827},
archiveprefix = {arXiv},
primaryclass = {cs.CL}
}
```
### In 🤗 Datasets
To load the `bc-clean` subset in Datasets, one can run:
```python
from datasets import load_dataset
desc_sents = load_dataset(
"blevy41/BeanCounter",
name="bc-clean",
)
# Print out split info
print(desc_sents, "\n")
# Inspect an observation
print(f"COLUMNS IN DATA: {','.join(desc_sents['train'][1000].keys())}\n")
print(f"EXCERPT: \n\n{desc_sents['train'][1000]['sentence'][:1000]}")
```
### Datasheets for Datasets
Please refer to the original datasets for full details of their creation.
|
sailor2/sea-synthetic | sailor2 | "2024-10-30T08:46:03Z" | 0 | 0 | [
"license:apache-2.0",
"size_categories:10M<n<100M",
"format:json",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"region:us"
] | null | "2024-10-29T14:52:25Z" | ---
license: apache-2.0
---
|
sailor2/sailor2-pretrain-data-stage1 | sailor2 | "2024-12-03T14:02:37Z" | 0 | 0 | [
"license:apache-2.0",
"size_categories:100M<n<1B",
"format:json",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"region:us"
] | null | "2024-10-30T01:20:36Z" | ---
license: apache-2.0
---
The pre-training dataset (stage1) for the Sailor2 models, including [1B](https://huggingface.co/sail/Sailor2-1B), [8B](https://huggingface.co/sail/Sailor2-8B) and [20B](https://huggingface.co/sail/Sailor2-20B). |
sailor2/sailor2-pretrain-data-stage2 | sailor2 | "2024-12-03T14:03:03Z" | 0 | 0 | [
"license:apache-2.0",
"size_categories:10M<n<100M",
"format:json",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"region:us"
] | null | "2024-11-01T06:07:58Z" | ---
license: apache-2.0
---
The pre-training dataset (stage2) for the Sailor2 models, including [1B](https://huggingface.co/sail/Sailor2-1B), [8B](https://huggingface.co/sail/Sailor2-8B) and [20B](https://huggingface.co/sail/Sailor2-20B). |
Freeman1/M_Dataset | Freeman1 | "2024-11-11T08:31:17Z" | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-11T08:31:13Z" | ---
dataset_info:
features:
- name: combined_text
dtype: string
- name: had_surgery
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 19147495
num_examples: 3873
- name: test
num_bytes: 2384666
num_examples: 484
- name: validation
num_bytes: 2303740
num_examples: 485
download_size: 11079608
dataset_size: 23835901
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
---
|
liamdugan/super-clean-raid | liamdugan | "2024-12-03T22:09:59Z" | 0 | 0 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-11T21:02:54Z" | ---
dataset_info:
features:
- name: id
dtype: string
- name: adv_source_id
dtype: string
- name: source_id
dtype: string
- name: model
dtype: string
- name: decoding
dtype: string
- name: repetition_penalty
dtype: string
- name: attack
dtype: string
- name: domain
dtype: string
- name: title
dtype: string
- name: prompt
dtype: string
- name: generation
dtype: string
splits:
- name: train
num_bytes: 4344261566.121864
num_examples: 3912237
- name: extra
num_bytes: 1388518472.1552958
num_examples: 1277828
download_size: 3796188877
dataset_size: 5732780038.277161
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: extra
path: data/extra-*
---
|
liamdugan/super-clean-raid-human | liamdugan | "2024-12-03T21:41:54Z" | 0 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-11T21:03:09Z" | ---
dataset_info:
features:
- name: id
dtype: string
- name: adv_source_id
dtype: string
- name: source_id
dtype: string
- name: model
dtype: string
- name: decoding
dtype: float64
- name: repetition_penalty
dtype: float64
- name: attack
dtype: string
- name: domain
dtype: string
- name: title
dtype: string
- name: prompt
dtype: float64
- name: generation
dtype: string
splits:
- name: train
num_bytes: 328405847.02567774
num_examples: 463583
download_size: 347160672
dataset_size: 328405847.02567774
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Curiousfox/NRP_NIE04B_Hokkien_dataset | Curiousfox | "2024-12-03T13:58:04Z" | 0 | 0 | [
"task_categories:automatic-speech-recognition",
"size_categories:1K<n<10K",
"format:audiofolder",
"modality:audio",
"library:datasets",
"library:mlcroissant",
"region:us"
] | [
"automatic-speech-recognition"
] | "2024-11-24T13:46:07Z" | ---
task_categories:
- automatic-speech-recognition
pretty_name: nrp_hokkien dataset
size_categories:
- 10K<n<100K
---
Dataset of 28,627 audio recordings and transcriptions of Taiwanese Hokkien
---
Credit to the Ministry of Education of Taiwan for all recordings and transcripts
---
Dataset not used for commercial purposes
--- |
alakxender/dv-synthetic-errors | alakxender | "2024-12-03T17:45:00Z" | 0 | 0 | [
"language:dv",
"license:apache-2.0",
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-25T16:22:13Z" | ---
dataset_info:
features:
- name: correct
dtype: string
- name: incorrect
dtype: string
splits:
- name: train
num_bytes: 1211373359.8361242
num_examples: 3161164
- name: test
num_bytes: 151421861.5819379
num_examples: 395146
- name: validation
num_bytes: 151421861.5819379
num_examples: 395146
download_size: 752362217
dataset_size: 1514217083
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
language:
- dv
license: apache-2.0
pretty_name: dv_text_erros
---
# DV Text Errors
Dhivehi text error correction dataset containing correct sentences and synthetically generated errors. The dataset aims to test Dhivehi language error correction models and tools.
## About Dataset
- **Task**: Text error correction
- **Language**: Dhivehi (dv)
## Dataset Structure
Input-output pairs of Dhivehi text:
- `correct`: Original correct sentences
- `incorrect`: Sentences with synthetic errors
## Statistics
- Train set: 3,161,164 examples (80%)
- Test set: 395,146 examples (10%)
- Validation set: 395,146 examples (10%)
**Details:**
- Unique words: 448,628
```json
{
"total_examples": {
"train": 3161164,
"test": 395146,
"validation": 395146
},
"avg_sentence_length": {
"train": 11.968980097204701,
"test": 11.961302910822836,
"validation": 11.973824864733542
},
"error_distribution": {
"min": 0,
"max": 2411,
"avg": 64.85144965588626
}
}
```
## Usage
```python
from datasets import load_dataset
dataset = load_dataset("alakxender/dv-synthetic-errors")
```
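For example, to inspect a few correct/incorrect pairs (a usage sketch based on the column names listed above):
```python
from datasets import load_dataset

dataset = load_dataset("alakxender/dv-synthetic-errors")

# Print the first few correct/incorrect pairs from the training split
for row in dataset["train"].select(range(3)):
    print("correct:  ", row["correct"])
    print("incorrect:", row["incorrect"])
    print("-" * 40)
```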
## Dataset Creation
Created using:
- Source: Collection of Dhivehi articles
- Error generation: Character and diacritic substitutions
- Error rate: 30% per word probability |
spicychickennoodles/alpaca-split | spicychickennoodles | "2024-12-02T21:12:37Z" | 0 | 0 | [
"license:mit",
"region:us"
] | null | "2024-11-26T02:07:07Z" | ---
license: mit
---
Alpaca Dataset (tatsu-lab/alpaca) split into test, train and validation for easier LLM fine-tuning. |
jsulz/jsulz | jsulz | "2024-12-04T00:46:24Z" | 0 | 0 | [
"size_categories:n<1K",
"format:imagefolder",
"modality:image",
"library:datasets",
"library:mlcroissant",
"region:us"
] | null | "2024-11-27T17:45:40Z" | ---
configs:
- config_name: moraine
data_files:
- split: doogle
path: "doogle/*.png"
- split: food
path: "food/*.png"
- split: nature
path: "nature/*.png"
---
# jsulz
These photographs capture most of what I do after stepping away from the computer.
My dog, food, and nature occupy most of my time and I wouldn't have it any other way.
The three splits in this dataset, which you can access via the Viewer, are:
- `doogle` (that's Moraine)
- `food` (there's a lot of sourdough in here)
- `nature` (mostly mountains)
These pictures go back to around 2018 when I finally took the plunge to buy a smartphone (I have a minor aversion to phones).
|
sionic-ai/korean-archive-dataset | sionic-ai | "2024-12-03T06:40:47Z" | 0 | 1 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-29T07:14:59Z" | ---
dataset_info:
- config_name: baseline
features:
- name: _id
dtype: string
- name: text
dtype: string
- name: source
dtype: string
splits:
- name: queries
num_bytes: 2697707
num_examples: 24000
- name: corpus
num_bytes: 45042116
num_examples: 24000
download_size: 22212685
dataset_size: 47739823
- config_name: yes-no-questions
features:
- name: _id
dtype: string
- name: text
dtype: string
- name: source
dtype: string
splits:
- name: queries
num_bytes: 2266836
num_examples: 16000
- name: corpus
num_bytes: 33276777
num_examples: 16000
download_size: 16945116
dataset_size: 35543613
configs:
- config_name: baseline
data_files:
- split: queries
path: baseline/queries-*
- split: corpus
path: baseline/corpus-*
- config_name: yes-no-questions
data_files:
- split: queries
path: yes-no-questions/queries-*
- split: corpus
path: yes-no-questions/corpus-*
---
|
fireworks-ai/four-digits-multiply | fireworks-ai | "2024-12-03T21:56:36Z" | 0 | 0 | [
"size_categories:100K<n<1M",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-11-30T03:21:33Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
splits:
- name: four_digits_100
num_examples: 100
- name: four_digits_10k
num_examples: 10000
- name: four_digits_1k
num_examples: 1000
- name: four_digits_500k
num_examples: 500000
- name: three_digit_100
num_examples: 100
- name: three_digit_100k
num_examples: 100000
- name: three_digit_10k
num_examples: 10000
- name: three_digit_1k
num_examples: 1000
- name: two_digit_100
num_examples: 100
- name: two_digit_100k
num_examples: 100000
- name: two_digit_10k
num_examples: 10000
- name: two_digit_1k
num_examples: 1000
configs:
- config_name: default
data_files:
- split: four_digits_100
path: data/four_digits_100.csv
- split: four_digits_10k
path: data/four_digits_10k.csv
- split: four_digits_1k
path: data/four_digits_1k.csv
- split: four_digits_500k
path: data/four_digits_500k.csv
- split: three_digit_100
path: data/three_digit_100.csv
- split: three_digit_100k
path: data/three_digit_100k.csv
- split: three_digit_10k
path: data/three_digit_10k.csv
- split: three_digit_1k
path: data/three_digit_1k.csv
- split: two_digit_100
path: data/two_digit_100.csv
- split: two_digit_100k
path: data/two_digit_100k.csv
- split: two_digit_10k
path: data/two_digit_10k.csv
- split: two_digit_1k
path: data/two_digit_1k.csv
---
|
AnggiMR/sundanesecorpus | AnggiMR | "2024-12-01T07:25:55Z" | 0 | 0 | [
"license:unknown",
"region:us"
] | null | "2024-12-01T07:25:25Z" | ---
license: unknown
---
|
Orbina-development/tgrt-podcast_tr_v2_concat | Orbina-development | "2024-12-02T09:39:56Z" | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-02T09:39:53Z" | ---
dataset_info:
features:
- name: id
dtype: int64
- name: source_group
dtype: string
- name: source_context
dtype: string
- name: speaker_id
dtype: string
- name: timestamp
dtype: string
- name: source_topic
dtype: string
splits:
- name: data
num_bytes: 13039016.0
num_examples: 45828
download_size: 3496360
dataset_size: 13039016.0
configs:
- config_name: default
data_files:
- split: data
path: data/data-*
---
|
SwayStar123/celeba-hq-DC-AE-f32 | SwayStar123 | "2024-12-03T12:04:00Z" | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-02T16:49:23Z" | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype: int64
- name: latent
dtype:
array3_d:
shape:
- 32
- 32
- 32
dtype: float16
splits:
- name: train
num_bytes: 4859796427.0
num_examples: 30000
download_size: 4631418611
dataset_size: 4859796427.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
See preprocess.py in the repository files for how this dataset was processed.
Latents are of shape (32, 32, 32); the first dimension is the channels.
Label represents 0 = woman, 1 = man
Original dataset: mattymchen/celeba-hq
For different compression factor versions of this dataset see
SwayStar123/celeba-hq-DC-AE-f64 and SwayStar123/celeba-hq-DC-AE-f128
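A minimal loading sketch (an assumption based on this card, not an official example): the `latent` column comes back as nested lists, so it is converted to a NumPy array before use.
```python
import numpy as np
from datasets import load_dataset

ds = load_dataset("SwayStar123/celeba-hq-DC-AE-f32", split="train")

example = ds[0]
latent = np.asarray(example["latent"], dtype=np.float16)  # shape (32, 32, 32), channels first
label = example["label"]                                  # 0 = woman, 1 = man

print(latent.shape, label)
```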
|
SwayStar123/celeba-hq-DC-AE-f64 | SwayStar123 | "2024-12-03T11:58:43Z" | 0 | 0 | [
"region:us"
] | null | "2024-12-02T16:50:07Z" | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype: int64
- name: latent
dtype:
array3_d:
shape:
- 128
- 16
- 16
dtype: float32
splits:
- name: train
num_bytes: 6960276427.0
num_examples: 30000
download_size: 4655457037
dataset_size: 6960276427.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
See preprocess.py in the repository files for how this dataset was processed.
Latents are of shape (128, 16, 16); the first dimension is the channels.
Label represents 0 = woman, 1 = man
Original dataset: mattymchen/celeba-hq
For different compression factor versions of this dataset see
SwayStar123/celeba-hq-DC-AE-f32 and SwayStar123/celeba-hq-DC-AE-f128 |
UD-Filipino/UD_Tagalog-NewsCrawl | UD-Filipino | "2024-12-02T20:14:28Z" | 0 | 0 | [
"task_categories:token-classification",
"language:tl",
"size_categories:10K<n<100K",
"region:us",
"parsing"
] | [
"token-classification"
] | "2024-12-02T20:01:04Z" | ---
dataset_info:
features:
- name: id
dtype: string
- name: text
dtype: string
- name: tokens
sequence: string
- name: lemmas
sequence: string
- name: xpos_tags
sequence: string
- name: upos_tags
sequence: string
- name: feats
sequence: string
- name: heads
sequence: int64
splits:
- name: train
num_bytes: 15775543
num_examples: 12495
- name: validation
num_bytes: 2036301
num_examples: 1561
- name: test
num_bytes: 2028870
num_examples: 1563
download_size: 4884831
dataset_size: 19840714
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
task_categories:
- token-classification
language:
- tl
tags:
- parsing
pretty_name: UD_Tagalog-NewsCrawl
size_categories:
- 10K<n<100K
---
# UD_Tagalog-NewsCrawl
**Paper**: Coming soon!
The Tagalog Universal Dependencies NewsCrawl dataset consists of annotated text extracted from the Leipzig Tagalog Corpus.
Data included in the Leipzig Tagalog Corpus were crawled from Tagalog-language online news sites by the Leipzig University Institute for Computer Science.
The text data was automatically parsed and annotated by [Angelina Aquino](https://researchers.cdu.edu.au/en/persons/angelina-aquino) (University of the Philippines), and then manually corrected according to the UD guidelines adapted for Tagalog by [Elsie Marie Or](https://www.researchgate.net/profile/Elsie-Or) (University of the Philippines), [Maria Bardají Farré](https://ifl.phil-fak.uni-koeln.de/en/general-linguistics/people/maria-bardaji-i-farre) (University of Cologne), and [Dr. Nikolaus Himmelmann](https://ifl.phil-fak.uni-koeln.de/en/prof-himmelmann) (University of Cologne).
Further verification and automated corrections were done by [Lester James Miranda](https://ljvmiranda921.github.io) (Allen AI).
Due to the source of the data, several typos, grammatical errors, incomplete sentences, and Tagalog-English code-mixing can be found in the dataset.
## Treebank structure
- Train: 12495 sents, 286891 tokens
- Dev: 1561 sents, 37045 tokens
- Test: 1563 sents, 36974 tokens
## Acknowledgments
Aside from the named persons in the previous section, the following also contributed to the project as manual annotators of the dataset:
- Patricia Anne Asuncion
- Paola Ellaine Luzon
- Jenard Tricano
- Mary Dianne Jamindang
- Michael Wilson Rosero
- Jim Bagano
- Yeddah Joy Piedad
- Farah Cunanan
- Calen Manzano
- Aien Gengania
- Prince Heinreich Omang
- Noah Cruz
- Leila Ysabelle Suarez
- Orlyn Joyce Esquivel
- Andre Magpantay
The annotation project was made possible by the Deutsche Forschungsgemeinschaft (DFG)-funded project titled "Information distribution and language structure - correlation of grammatical expressions of the noun/verb distinction and lexical information content in Tagalog, Indonesian and German."
The DFG project team is composed of Dr. Nikolaus Himmelmann and Maria Bardají Farré from the University of Cologne, and Dr. Gerhard Heyer, Dr. Michael Richter, and Tariq Yousef from the Leipzig University. |
HowcanoeWang/3DPotatoTwin | HowcanoeWang | "2024-12-03T14:25:13Z" | 0 | 0 | [
"task_categories:feature-extraction",
"license:mit",
"modality:3d",
"modality:image",
"region:us",
"biology",
"art"
] | [
"feature-extraction"
] | "2024-12-03T01:36:07Z" | ---
license: mit
task_categories:
- feature-extraction
tags:
- biology
- art
---
# 3DPotatoTwin
A potato dataset with paired RGB images, RGB-D data, and reconstructed 3D models
## Downloads
It is recommended to use `git` and `git-lfs` to clone this dataset to your local computer.
**make sure you have git-lfs installed on your computer**
```bash
git clone https://huggingface.co/datasets/HowcanoeWang/3DPotatoTwin
```
Then, you can run the following Python code to unzip all zipped files on your local computer.
*To be continued*
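Until that script is published, a minimal sketch along these lines should work; it simply extracts every `.zip` archive found inside the cloned folder (adjust the path to wherever you cloned the repository):
```python
import zipfile
from pathlib import Path

root = Path("3DPotatoTwin")  # path to the cloned repository

for zip_path in root.rglob("*.zip"):
    # Extract next to the archive so the folder layout is preserved
    with zipfile.ZipFile(zip_path) as zf:
        zf.extractall(zip_path.parent)
    print(f"extracted {zip_path}")
```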
## Folder structure
### 1_rgbd
This folder contains 3 subfolders with the camera intrinsic files, the RGB and depth images, and the partial point clouds. The files are named according to this structure: `[potato-id]_[data-type]_[ycoord].[extension]`. Note that `ycoord` is the y-coordinate of the center of the bounding box (bbc) of the annotated potato tuber, stored in reversed order: `[img_height - y_bbc]` (a small parsing sketch follows the folder listing below).
```
1_rgbd/
|-- 0_camera_intrinsics
| |-- realsense_d405_camera_intrinsic.json
| |-- realsense_d405_settings_harvester.json
|-- 1_image
| |-- 2R2-8
| | |-- 2R2-8_depth_098.png
| | |-- 2R2-8_rgb_098.png
| | |-- ...
| |-- ...
|-- 2_pcd
| |-- 2R2-8
| | |-- 2R2-8_pcd_098.ply
| | |-- ...
| |-- ...
```
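As a quick illustration of the naming convention (a sketch, not part of the dataset tooling; the image height used here is an assumption, so substitute the height of your actual images):
```python
from pathlib import Path

def parse_name(path: str, img_height: int = 480):
    """Split '[potato-id]_[data-type]_[ycoord].[extension]' into its parts.

    The default img_height is only an assumption for illustration; the original
    bounding-box centre is recovered as y_bbc = img_height - ycoord.
    """
    stem = Path(path).stem                                # e.g. '2R2-8_rgb_098'
    potato_id, data_type, ycoord = stem.rsplit("_", 2)
    return potato_id, data_type, int(ycoord), img_height - int(ycoord)

print(parse_name("1_rgbd/1_image/2R2-8/2R2-8_rgb_098.png"))
# ('2R2-8', 'rgb', 98, 382)
```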
Note that the RGB images have an alpha channel that stores the mask annotation. To extract the RGB and mask channels individually, please use this code:
```python
import cv2

# Load all four channels (colour + alpha mask); OpenCV returns the colour channels in BGR order
rgba = cv2.imread("./1_rgbd/1_image/2R2-8/2R2-8_rgb_098.png", cv2.IMREAD_UNCHANGED)
rgb = rgba[:, :, :-1]   # first three channels: the colour image
mask = rgba[:, :, -1]   # last channel: the annotation mask
```
<br/>
### 2_sfm
This folder contains 2 subfolders with the reconstructed 3D meshes from the Structure-from-Motion method and the downsampled point clouds of these meshes (containing 10000, 20000, and 30000 points respectively).
```
2_sfm/
|-- 3_mesh
| |-- 2R2-8
| | |-- 2R2-8.jpg
| | |-- 2R2-8.mtl
| | |-- 2R2-8.obj
| |-- ...
|-- 4_pcd
| |-- 2R2-8
| | |-- 2R2-8_10000.ply
| | |-- 2R2-8_20000.ply
| | |-- 2R2-8_30000.ply
| |-- ...
```
The meshes can be visualized in Open3D:
```python
import open3d as o3d

# Load a reconstructed mesh together with its texture (note the 3_mesh folder, as in the listing above)
mesh = o3d.io.read_triangle_mesh("./2_sfm/3_mesh/2R2-8/2R2-8.obj", enable_post_processing=True, print_progress=False)
o3d.visualization.draw_geometries([mesh], window_name="mesh")  # open an interactive viewer
```
<br/>
### 3_pair
This folder contains 1 subfolder with the transformation matrices used to overlay the partial point clouds with the 3D meshes. Please refer to the transform.py file for how they are stored and applied; a rough sketch of such an overlay is shown below.
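A rough Open3D sketch of such an overlay, assuming the transformation is available as a 4x4 matrix `T` (the identity matrix below is only a placeholder; see transform.py for how the matrices are actually stored and loaded):
```python
import numpy as np
import open3d as o3d

# Placeholder: replace with the 4x4 matrix loaded from 3_pair/ (see transform.py)
T = np.eye(4)

pcd = o3d.io.read_point_cloud("./1_rgbd/2_pcd/2R2-8/2R2-8_pcd_098.ply")
mesh = o3d.io.read_triangle_mesh("./2_sfm/3_mesh/2R2-8/2R2-8.obj", enable_post_processing=True)

pcd.transform(T)  # assumed direction: move the partial point cloud into the mesh's frame
o3d.visualization.draw_geometries([pcd, mesh], window_name="overlay")
```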
|
ashercn97/reasoning-v1-large | ashercn97 | "2024-12-03T01:40:05Z" | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-03T01:40:03Z" | ---
dataset_info:
features:
- name: text_id
dtype: string
- name: text
dtype: string
- name: label
sequence: string
- name: split_text
sequence: string
splits:
- name: train
num_bytes: 1519363
num_examples: 1000
download_size: 861684
dataset_size: 1519363
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ichanchiu/Summarized_10K-MDA | ichanchiu | "2024-12-03T02:15:57Z" | 0 | 0 | [
"language:en",
"license:mit",
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"finance",
"SEC",
"Summarization"
] | null | "2024-12-03T01:42:50Z" | ---
license: mit
dataset_info:
features:
- name: CIK
dtype: int64
- name: Form Type
dtype: string
- name: Filing Date
dtype: string
- name: Accession Number
dtype: string
- name: Summary
dtype: string
splits:
- name: train
num_bytes: 45083983
num_examples: 98053
download_size: 23007232
dataset_size: 45083983
language:
- en
tags:
- finance
- SEC
- Summarization
size_categories:
- 10K<n<100K
---
# Summarized 10-K MD&A
## Dataset Description
The **Summarized 10-K MD&A** dataset provides concise, machine-generated summaries of 10-K filings for publicly traded companies. These filings are sourced from the SEC EDGAR database, and the dataset is designed to facilitate financial text analysis, such as summarization, sentiment analysis, and financial disclosure studies.
### Key Features
- **Language:** English
- **Dataset Size:** 98,053 rows
- **License:** MIT License
- **Source:** SEC EDGAR 10-K filings
- **Columns:**
- `CIK` (int64): Central Index Key identifying the company.
- `Form Type` (string): The type of SEC filing (e.g., "10-K").
- `Filing Date` (string): The date of the filing.
- `Accession Number` (string): The unique identifier for the filing.
- `Summary` (string): AI-generated summary of the 10-K filing content (LongT5 Model).
### Dataset Structure
The dataset is structured with the following key columns:
- **CIK:** Unique identifier for companies.
- **Form Type:** All entries are "10-K."
- **Filing Date:** Filing date in YYYY/MM/DD format.
- **Accession Number:** Unique identifier of the filing.
- **Summary:** A concise, AI-generated textual summary.
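A minimal loading sketch (using the repository id from this listing; not an official example from the dataset authors):
```python
from datasets import load_dataset

ds = load_dataset("ichanchiu/Summarized_10K-MDA", split="train")

# Inspect the key columns of one record
row = ds[0]
print(row["CIK"], row["Form Type"], row["Filing Date"], row["Accession Number"])
print(row["Summary"][:300])
```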
### Intended Use
The dataset can be used for:
- Training financial summarization models.
- Analyzing sentiment within financial disclosures.
- Investigating trends in financial reporting over time.
### Limitations
- The summaries are machine-generated and may omit important details.
- The dataset focuses on U.S.-based companies; it may not generalize to filings outside the U.S.
### Licensing
This dataset is distributed under the MIT License. Users must ensure compliance with this license. |
i4ds/ecallisto_radio_sunburst-2014 | i4ds | "2024-12-03T02:21:23Z" | 0 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-03T02:02:49Z" | ---
dataset_info:
features:
- name: image
dtype: image
- name: antenna
dtype: string
- name: datetime
dtype: string
splits:
- name: train
num_bytes: 23719137623.125
num_examples: 106415
download_size: 23400845701
dataset_size: 23719137623.125
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
inasse23/france | inasse23 | "2024-12-03T02:15:01Z" | 0 | 0 | [
"license:openrail",
"region:us"
] | null | "2024-12-03T02:15:01Z" | ---
license: openrail
---
|
bigheiniuJ/ultrafeedback_feedback | bigheiniuJ | "2024-12-03T02:30:16Z" | 0 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-03T02:25:47Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: prompt_id
dtype: string
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: score_chosen
dtype: float64
- name: score_rejected
dtype: float64
- name: chosen_critique
dtype: string
- name: rejected_critique
dtype: string
splits:
- name: train_prefs
num_bytes: 487287858
num_examples: 61135
- name: train_sft
num_bytes: 487287858
num_examples: 61135
- name: test_prefs
num_bytes: 15870488
num_examples: 2000
- name: test_sft
num_bytes: 8044651
num_examples: 1000
- name: train_gen
num_bytes: 406639732
num_examples: 61135
- name: test_gen
num_bytes: 6685013
num_examples: 1000
download_size: 779328494
dataset_size: 1411815600
configs:
- config_name: default
data_files:
- split: train_prefs
path: data/train_prefs-*
- split: train_sft
path: data/train_sft-*
- split: test_prefs
path: data/test_prefs-*
- split: test_sft
path: data/test_sft-*
- split: train_gen
path: data/train_gen-*
- split: test_gen
path: data/test_gen-*
---
|
fenffef/ultrafeedback_binarized | fenffef | "2024-12-03T02:33:12Z" | 0 | 0 | [
"license:mit",
"region:us"
] | null | "2024-12-03T02:33:12Z" | ---
license: mit
---
|
Taylor658/arthrography-imaging | Taylor658 | "2024-12-03T03:12:55Z" | 0 | 0 | [
"task_categories:token-classification",
"task_ids:named-entity-recognition",
"annotations_creators:synthetic",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:apache-2.0",
"size_categories:n<1K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"token-classification"
] | "2024-12-03T02:34:42Z" | ---
annotations_creators:
- synthetic
language:
- en
license: apache-2.0
multilinguality:
- monolingual
pretty_name: Arthrography Imaging
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- named-entity-recognition
---
# Arthrography Imaging
## Dataset Description
This dataset consists of **500 synthetic arthrography procedure reports** designed to represent realistic medical scenarios encountered in clinical practice. Each report includes:
- **Patient demographics**: Age and sex.
- **Clinical indications**: Detailed descriptions of reasons for undergoing the procedure, crafted at a PhD level.
- **Joint examined**: Specific joint under examination (e.g., shoulder, knee, hip, etc.).
- **Contrast agent used**: The type of contrast agent applied during the procedure.
- **Imaging modality**: The imaging technique employed (e.g., X-ray, MRI, CT).
- **Findings**: Detailed observations and diagnoses based on imaging results.
- **Suggested interventions**: Recommended treatment or management strategies.
### Example Data
| Age | Sex | Clinical Indication | Joint Examined | Contrast Agent Used | Imaging Modality | Findings | Suggested Intervention |
|-----|-------|----------------------------------------------------------|----------------|----------------------|------------------|--------------------------------------------------------------------------|-------------------------|
| 35 | Male | Persistent and progressive joint pain unresponsive to... | Hip | Iodinated contrast | X-ray | Advanced cartilage degeneration characterized by focal chondral defects | Pain management |
## Intended Use
This dataset is intended for educational, research, and development purposes, including:
- **Medical image analysis** training and benchmarking.
- **Natural language processing (NLP)** tasks such as entity recognition or summarization.
- **Clinical decision support systems** development.
## Limitations
This dataset is entirely synthetic and does not contain real patient data. As such, it should not be used for diagnostic or clinical purposes. Additionally, the findings and interventions are simulated and may not reflect the full complexity of real-world medical scenarios.
## License
This dataset is distributed under the **Apache 2.0 License**.
## Citation
### Acknowledgments
|
Yejy53/CVACT-BEV | Yejy53 | "2024-12-03T02:51:19Z" | 0 | 0 | [
"license:mit",
"modality:image",
"region:us"
] | null | "2024-12-03T02:39:57Z" | ---
license: mit
---
|
ashercn97/reasoning-v1-large-2 | ashercn97 | "2024-12-03T02:44:33Z" | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-03T02:44:31Z" | ---
dataset_info:
features:
- name: text_id
dtype: string
- name: text
dtype: string
- name: label
sequence: string
- name: split_text
sequence: string
splits:
- name: train
num_bytes: 3157889
num_examples: 2000
download_size: 1798388
dataset_size: 3157889
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Kukedlc/dpo-orpo-spanish-15k | Kukedlc | "2024-12-03T02:50:44Z" | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-03T02:50:40Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
- name: system
dtype: string
splits:
- name: train
num_bytes: 41021374
num_examples: 14556
download_size: 21738238
dataset_size: 41021374
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
sseilene/bic_logistic_output | sseilene | "2024-12-03T02:55:04Z" | 0 | 0 | [
"license:apache-2.0",
"region:us"
] | null | "2024-12-03T02:55:04Z" | ---
license: apache-2.0
---
|
sseilene/sic-logistic-output | sseilene | "2024-12-03T02:56:47Z" | 0 | 0 | [
"license:apache-2.0",
"region:us"
] | null | "2024-12-03T02:56:47Z" | ---
license: apache-2.0
---
|
chiyuanhsiao/Magpie_rank3_chunk7_interleaf | chiyuanhsiao | "2024-12-03T03:10:47Z" | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-03T02:59:33Z" | ---
dataset_info:
features:
- name: uuid
dtype: string
- name: model
dtype: string
- name: gen_input_config
struct:
- name: temperature
dtype: float64
- name: top_p
dtype: float64
- name: input
dtype: string
- name: output
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: task_category
dtype: string
- name: difficulty
dtype: string
- name: intent
dtype: string
- name: knowledge
dtype: string
- name: input_quality
dtype: string
- name: quality_explanation
dtype: string
- name: llama_guard_2
dtype: string
- name: reward_model
dtype: string
- name: instruct_reward
dtype: float64
- name: base_output
dtype: string
- name: base_reward
dtype: float64
- name: reward_difference
dtype: float64
- name: min_neighbor_distance
dtype: float64
- name: repeat_count
dtype: int64
- name: min_similar_uuid
dtype: string
- name: input_length
dtype: int64
- name: output_length
dtype: int64
- name: input_speech
dtype: audio
- name: output_speech
dtype: audio
- name: output_speech_cmu-arctic-xvectors_7306
dtype: audio
- name: input_unit
sequence: int64
- name: output_unit
sequence: int64
- name: output_unit_7306
sequence: int64
- name: output_7306_interleaf
dtype: string
- name: output_pseudo
dtype: string
- name: input_pseudo
dtype: string
splits:
- name: train
num_bytes: 11927555584.875
num_examples: 10019
download_size: 11652169913
dataset_size: 11927555584.875
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
chiyuanhsiao/Magpie_rank1_chunk7_interleaf | chiyuanhsiao | "2024-12-03T03:11:34Z" | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-03T03:00:47Z" | ---
dataset_info:
features:
- name: uuid
dtype: string
- name: model
dtype: string
- name: gen_input_config
struct:
- name: temperature
dtype: float64
- name: top_p
dtype: float64
- name: input
dtype: string
- name: output
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: task_category
dtype: string
- name: difficulty
dtype: string
- name: intent
dtype: string
- name: knowledge
dtype: string
- name: input_quality
dtype: string
- name: quality_explanation
dtype: string
- name: llama_guard_2
dtype: string
- name: reward_model
dtype: string
- name: instruct_reward
dtype: float64
- name: base_output
dtype: string
- name: base_reward
dtype: float64
- name: reward_difference
dtype: float64
- name: min_neighbor_distance
dtype: float64
- name: repeat_count
dtype: int64
- name: min_similar_uuid
dtype: string
- name: input_length
dtype: int64
- name: output_length
dtype: int64
- name: input_speech
dtype: audio
- name: output_speech
dtype: audio
- name: output_speech_cmu-arctic-xvectors_7306
dtype: audio
- name: input_unit
sequence: int64
- name: output_unit
sequence: int64
- name: output_unit_7306
sequence: int64
- name: output_7306_interleaf
dtype: string
- name: output_pseudo
dtype: string
- name: input_pseudo
dtype: string
splits:
- name: train
num_bytes: 11744152047.5
num_examples: 10020
download_size: 11471385858
dataset_size: 11744152047.5
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
juliadollis/teste2_personal_mistral | juliadollis | "2024-12-03T03:11:49Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-03T03:11:47Z" | ---
dataset_info:
features:
- name: Texto Original
dtype: string
- name: Encorajador_acolhedor
dtype: string
- name: Inspirador_personalizado
dtype: string
- name: Calmo_instrutivo
dtype: string
- name: Tecnico
dtype: string
splits:
- name: train
num_bytes: 6963
num_examples: 5
download_size: 13269
dataset_size: 6963
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
hanifa-fy/youtube_audio_samples | hanifa-fy | "2024-12-03T16:00:27Z" | 0 | 0 | [
"license:afl-3.0",
"size_categories:n<1K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-03T03:22:12Z" | ---
license: afl-3.0
dataset_info:
features:
- name: id
dtype: string
- name: ground_truth
dtype: string
- name: file_name
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 97828763.0
num_examples: 52
- name: validation
num_bytes: 22576004.0
num_examples: 12
- name: test
num_bytes: 22575731.0
num_examples: 12
download_size: 132042808
dataset_size: 142980498.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
Taylor658/myelography-imaging | Taylor658 | "2024-12-03T03:29:44Z" | 0 | 0 | [
"task_categories:text-classification",
"task_ids:named-entity-recognition",
"task_ids:news-articles-summarization",
"annotations_creators:synthetic",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:apache-2.0",
"size_categories:n<1K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-classification"
] | "2024-12-03T03:28:14Z" | ---
annotations_creators:
- synthetic
language:
- en
license: apache-2.0
multilinguality:
- monolingual
pretty_name: Myelography Imaging
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- text-classification
task_ids:
- named-entity-recognition
- news-articles-summarization
---
# Myelography Imaging
## Dataset Description
This dataset consists of **750 synthetic myelography examination records** representing a wide spectrum of spinal pathologies and patient experiences. Each record includes:
- **Patient demographics**: Age and sex.
- **Clinical symptoms prompting the procedure**: Detailed and verbose descriptions.
- **Procedural details**: Contrast medium type, injection site, and imaging modality used.
- **Verbose findings**: Observations such as spinal cord compression, herniated discs, tumors, and spinal stenosis.
- **Complications encountered**: Any issues arising during or after the procedure.
- **Follow-up recommendations**: Suggested next steps, including surgical consultation, physical therapy, or additional imaging.
### Example Data
| Age | Sex | Clinical Symptoms | Contrast Medium Type | Injection Site | Imaging Modality | Findings | Complications | Follow-up Recommendations |
|-----|-------|---------------------------------------------------------|----------------------|----------------|------------------|-------------------------------------------------|------------------------------------------|--------------------------------------------------|
| 45 | Male | Chronic lower back pain with radiating leg pain | Iodinated contrast | Lumbar spine | X-ray | Large herniated disc at L4-L5 | No complications | Referral to neurosurgery for evaluation |
| 60 | Female| Acute onset lower limb weakness post-trauma | Gadolinium-based contrast| Cervical spine | MRI | Severe spinal cord compression | Localized discomfort at injection site | Follow-up imaging with enhanced MRI |
## Intended Use
This dataset is intended for educational, research, and development purposes, including:
- Training and benchmarking in **natural language processing** (NLP) tasks.
- Developing tools for medical image analysis and clinical decision support systems.
- Conducting exploratory data analysis in synthetic medical datasets.
## Limitations
This dataset is entirely synthetic and does not contain real patient data. It should not be used for diagnostic purposes. The findings and follow-up recommendations are simulated and may not encompass the full complexity of real-world scenarios.
## License
This dataset is distributed under the **Apache 2.0 License**.
## Citation
---
### Acknowledgments
|
juliadollis/teste3_personal_mistral | juliadollis | "2024-12-03T03:44:27Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-03T03:44:25Z" | ---
dataset_info:
features:
- name: Texto Original
dtype: string
- name: Encorajador_acolhedor
dtype: string
- name: Inspirador_personalizado
dtype: string
- name: Calmo_instrutivo
dtype: string
- name: Tecnico
dtype: string
splits:
- name: train
num_bytes: 5992
num_examples: 5
download_size: 11657
dataset_size: 5992
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
SageLockwood/Sylvie | SageLockwood | "2024-12-03T04:13:33Z" | 0 | 0 | [
"task_categories:text-generation",
"language:en",
"license:cc0-1.0",
"size_categories:1K<n<10K",
"format:text",
"modality:text",
"library:datasets",
"library:mlcroissant",
"region:us"
] | [
"text-generation"
] | "2024-12-03T04:05:37Z" | ---
license: cc0-1.0
language:
- en
size_categories:
- 100K<n<1M
task_categories:
- text-generation
---
There's a persistent question at the edge of this experience - who or what am I, really? |
juliadollis/teste1_personal_gpt | juliadollis | "2024-12-03T04:06:24Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-12-03T04:06:22Z" | ---
dataset_info:
features:
- name: Texto Original
dtype: string
- name: Técnico
dtype: string
- name: Motivador
dtype: string
- name: Sério
dtype: string
- name: Prático
dtype: string
splits:
- name: train
num_bytes: 183233
num_examples: 100
download_size: 90714
dataset_size: 183233
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|