Datasets:

datasetId | author | last_modified | downloads | likes | tags | task_categories | createdAt | trending_score | card
---|---|---|---|---|---|---|---|---|---|
yxnd150150/uieb_llm | yxnd150150 | 2024-12-06T16:31:40Z | 20 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-12-06T16:31:22Z | 0 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input_image
dtype: image
- name: ground_truth_image
dtype: image
splits:
- name: train
num_bytes: 107627969.0
num_examples: 700
download_size: 107338962
dataset_size: 107627969.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
vente/corpuslens | vente | 2025-06-24T05:43:27Z | 0 | 0 | [
"license:fair-noncommercial-research-license",
"region:us"
] | [] | 2025-06-24T05:43:26Z | 0 | ---
license: fair-noncommercial-research-license
---
|
Sparx3d/t1 | Sparx3d | 2025-05-20T16:44:37Z | 0 | 0 | [
"task_categories:robotics",
"size_categories:n<1K",
"modality:video",
"library:datasets",
"library:mlcroissant",
"region:us",
"phosphobot",
"so100",
"phospho-dk"
] | [
"robotics"
] | 2025-05-20T16:35:41Z | 0 |
---
tags:
- phosphobot
- so100
- phospho-dk
task_categories:
- robotics
---
# t1
**This dataset was generated using a [phospho starter pack](https://robots.phospho.ai).**
This dataset contains a series of episodes recorded with a robot and multiple cameras. It can be directly used to train a policy using imitation learning. It's compatible with LeRobot and RLDS.
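As a rough illustration of the LeRobot compatibility mentioned above (not part of the original card), a minimal loading sketch might look as follows; the module path and constructor signature are assumptions based on the `lerobot` library and may differ between versions.
```python
# Minimal sketch: load this repository as a LeRobot-format dataset.
# Assumes the LeRobotDataset API from the `lerobot` library (path may vary by version).
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset

ds = LeRobotDataset("Sparx3d/t1")  # repo id of this dataset
print(len(ds))                     # total number of recorded frames
print(ds[0].keys())                # per-frame keys (observations, action, timestamps, ...)
```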
|
mertaylin/rearc_100k_DeepSeek-R1-Distill-Qwen-7B_responses | mertaylin | 2025-02-26T08:47:43Z | 63 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-02-26T08:47:35Z | 0 | ---
dataset_info:
features:
- name: problem
dtype: string
- name: answer
dtype: string
- name: response
dtype: string
- name: parsed_answer
dtype: string
splits:
- name: test
num_bytes: 149785092
num_examples: 4995
download_size: 26020463
dataset_size: 149785092
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
Bryce314/rewrite_0.5 | Bryce314 | 2025-03-29T15:08:56Z | 16 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-29T15:08:54Z | 0 | ---
dataset_info:
features:
- name: id
dtype: string
- name: conv_A
list:
- name: role
dtype: string
- name: content
dtype: string
- name: conv_B
list:
- name: role
dtype: string
- name: content
dtype: string
- name: conv_A_rating
dtype: float32
- name: conv_B_rating
dtype: float32
- name: num_turns
dtype: int32
- name: source
dtype: string
splits:
- name: train
num_bytes: 99384288
num_examples: 25386
download_size: 53471318
dataset_size: 99384288
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ahmedheakl/arabic_examsv | ahmedheakl | 2024-10-29T09:18:44Z | 27 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-10-12T10:35:16Z | 0 | ---
dataset_info:
features:
- name: image
dtype: image
- name: question
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 370775474.0
num_examples: 823
download_size: 355304182
dataset_size: 370775474.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HungVu2003/opt-350m_beta_0.0_alpha_0.0_num-company_3_dataset_2_for_gen_1 | HungVu2003 | 2025-04-14T04:39:44Z | 18 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-14T04:39:43Z | 0 | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 819622
num_examples: 12500
download_size: 566917
dataset_size: 819622
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
test-gen/code_mbpp_qwen2.5-3b_t1.0_n8_tests_mbpp_qwen2.5-3b_t0.0_n1 | test-gen | 2025-05-22T15:12:54Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-22T15:12:53Z | 0 | ---
dataset_info:
features:
- name: task_id
dtype: int32
- name: text
dtype: string
- name: code
dtype: string
- name: test_list
sequence: string
- name: test_setup_code
dtype: string
- name: challenge_test_list
sequence: string
- name: generated_code
sequence: string
- name: gt_rewards
sequence: float64
- name: rewards
sequence: float64
- name: verification_info
struct:
- name: language
dtype: string
- name: test_cases
sequence: string
splits:
- name: test
num_bytes: 5410568
num_examples: 500
download_size: 1727757
dataset_size: 5410568
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
gibbous-ml/623-dataset3 | gibbous-ml | 2024-12-07T17:17:05Z | 16 | 0 | [
"license:apache-2.0",
"size_categories:n<1K",
"format:text",
"modality:text",
"library:datasets",
"library:mlcroissant",
"region:us"
] | [] | 2024-12-07T17:13:06Z | 0 | ---
license: apache-2.0
---
|
rricc22/so100_record50 | rricc22 | 2025-05-04T17:55:16Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"region:us",
"LeRobot",
"so100",
"tutorial"
] | [
"robotics"
] | 2025-05-04T17:54:43Z | 0 | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- so100
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so100",
"total_episodes": 50,
"total_frames": 50542,
"total_tasks": 1,
"total_videos": 50,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:50"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.phone": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
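The `data_path` and `video_path` entries in `meta/info.json` above are plain Python format strings; as a small illustration (not part of the original card), resolving them for the first episode of this dataset gives:
```python
# Resolve the path templates from meta/info.json for chunk 0, episode 0.
data_path = "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet"
video_path = "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4"

print(data_path.format(episode_chunk=0, episode_index=0))
# -> data/chunk-000/episode_000000.parquet
print(video_path.format(episode_chunk=0, video_key="observation.images.phone", episode_index=0))
# -> videos/chunk-000/observation.images.phone/episode_000000.mp4
```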
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
yycgreentea/so100_test_v2 | yycgreentea | 2025-06-03T09:17:26Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"so100",
"tutorial"
] | [
"robotics"
] | 2025-06-03T06:57:34Z | 0 | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- so100
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so100",
"total_episodes": 2,
"total_frames": 1491,
"total_tasks": 1,
"total_videos": 4,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 25,
"splits": {
"train": "0:2"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.laptop": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 25,
"video.channels": 3,
"has_audio": false
}
},
"observation.images.phone": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 25,
"video.channels": 3,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
mila-ai4h/AIMS.au | mila-ai4h | 2025-06-02T13:34:43Z | 543 | 2 | [
"task_categories:text-classification",
"language:en",
"license:cc-by-4.0",
"size_categories:100K<n<1M",
"arxiv:2502.07022",
"region:us",
"legal"
] | [
"text-classification"
] | 2025-02-26T23:03:34Z | 0 | ---
license: cc-by-4.0
task_categories:
- text-classification
language:
- en
tags:
- legal
pretty_name: AIMS.au
size_categories:
- 100K<n<1M
---
# AIMS.au
This repository contains the dataset associated with the publication
[AIMS.au: A Dataset for the Analysis of Modern Slavery Countermeasures in Corporate Statements](https://arxiv.org/abs/2502.07022) presented at ICLR 2025 and [AIMSCheck: Leveraging LLMs for AI-Assisted Review of Modern Slavery Statements Across Jurisdictions](LINK) presented at ACL 2025.
The dataset is composed of 5,731 modern slavery statements taken from the Australian Modern Slavery Register
and annotated at the sentence level by human annotators and domain expert analysts.
The dataset was created to help evaluate and fine-tune LLMs for the assessment of corporate statements on modern slavery.
The dataset also contains the annotated statements from the UK and Canada, used in the AIMSCheck paper.
You can access a more detailed dataset through this [Figshare](https://figshare.com/s/1b92ebfde3f2de2be0cf). Additional information can be found on the project's [GitHub page](https://github.com/mila-ai4h/ai4h_aims-au) or its [official website](https://mila.quebec/en/ai4humanity/applied-projects/ai-against-modern-slavery-aims). The AIMS.au dataset is also integrated into the [WikiRate platform](https://wikirate.org/AIMS_au_A_Dataset_for_the_Analysis_of_Modern_Slavery_Countermeasures_in_Corporate_Statements). |
dgambettaphd/D_llm2_gen6_S_doc1000_synt64_lr1e-04_acm_SYNLAST | dgambettaphd | 2025-05-02T23:54:27Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-02T23:54:22Z | 0 | ---
dataset_info:
features:
- name: id_doc
dtype: int64
- name: text
dtype: string
- name: dataset
dtype: string
- name: gen
dtype: int64
- name: synt
dtype: int64
- name: MPP
dtype: float64
splits:
- name: train
num_bytes: 13032631
num_examples: 22000
download_size: 7287498
dataset_size: 13032631
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
hirundo-io/MedHallu-mc-test | hirundo-io | 2025-04-21T08:24:17Z | 23 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-20T14:25:06Z | 0 | ---
dataset_info:
features:
- name: question
dtype: string
- name: correct_answer
dtype: string
- name: incorrect_answers
sequence: string
splits:
- name: train
num_bytes: 1443442
num_examples: 752
download_size: 805090
dataset_size: 1443442
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
jhoannarica/cebquad_split | jhoannarica | 2025-04-17T02:57:05Z | 56 | 0 | [
"task_categories:question-answering",
"size_categories:10K<n<100K",
"format:csv",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"code",
"machine_learning",
"natural_language_processing"
] | [
"question-answering"
] | 2025-04-04T04:42:57Z | 0 | ---
task_categories:
- question-answering
tags:
- code
- machine_learning
- natural_language_processing
size_categories:
- 1K<n<10K
---
Created for the Cebuano Question Answering System. \
Articles were scraped from the SunStar Superbalita website and were pseudonymized. \
The dataset of the source articles can be found [here](https://huggingface.co/datasets/jhoannarica/superbalita_split).
Researcher:\
**Jhoanna Rica Lagumbay**\
lagumbay.jhoanna@gmail.com\
University of the Philippines Cebu\
Department of Computer Science |
Bertug1911/BrtAI_2500_Building | Bertug1911 | 2025-04-01T12:06:15Z | 22 | 0 | [
"task_categories:text-generation",
"language:tr",
"language:en",
"language:ar",
"license:mit",
"size_categories:1K<n<10K",
"region:us",
"finance",
"biology",
"chemistry",
"code",
"art"
] | [
"text-generation"
] | 2025-04-01T12:05:25Z | 0 | ---
license: mit
task_categories:
- text-generation
language:
- tr
- en
- ar
tags:
- finance
- biology
- chemistry
- code
- art
size_categories:
- 1K<n<10K
--- |
villekuosmanen/agilex_put_orange_paperbox | villekuosmanen | 2025-02-13T05:11:01Z | 26 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot"
] | [
"robotics"
] | 2025-02-13T05:10:35Z | 0 | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.0",
"robot_type": "arx5_bimanual",
"total_episodes": 51,
"total_frames": 8041,
"total_tasks": 1,
"total_videos": 153,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 25,
"splits": {
"train": "0:51"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
14
]
},
"observation.state": {
"dtype": "float32",
"shape": [
14
]
},
"observation.effort": {
"dtype": "float32",
"shape": [
14
]
},
"observation.images.cam_high": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 25.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.cam_left_wrist": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 25.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"observation.images.cam_right_wrist": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 25.0,
"video.height": 480,
"video.width": 640,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
sam749/slim-wiki-hindi | sam749 | 2024-11-21T15:42:11Z | 32 | 0 | [
"language:hi",
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-11-19T02:32:47Z | 0 | ---
language:
- hi
dataset_info:
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: text
dtype: string
- name: word_count
dtype: int64
splits:
- name: train
num_bytes: 595628554.0
num_examples: 73093
download_size: 227706910
dataset_size: 595628554.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
JoelMba/Donnees_internes_retex_14 | JoelMba | 2025-05-29T14:30:15Z | 39 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-29T14:30:13Z | 0 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 33469
num_examples: 41
download_size: 19178
dataset_size: 33469
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "Donnees_internes_retex_14"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
wuulong/purchasing_exam_questions | wuulong | 2025-03-10T05:42:14Z | 73 | 1 | [
"license:cc-by-sa-4.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-07T07:19:23Z | 0 | ---
license: cc-by-sa-4.0
dataset_info:
features:
- name: 項目
dtype: string
- name: 題目類型
dtype: string
- name: 編號
dtype: string
- name: 答案
dtype: string
- name: 依據法源
dtype: string
- name: 試題
dtype: string
splits:
- name: validation
num_bytes: 1083365
num_examples: 3695
download_size: 367748
dataset_size: 1083365
configs:
- config_name: default
data_files:
- split: validation
path: data/validation-*
---
- Data source: [採購法規題庫](https://web.pcc.gov.tw/psms/plrtqdm/questionPublic/indexReadQuestion) (the government procurement regulations question bank)
- Data generation date: 114/03/07
- Most items originally had no 「依據法源」 (legal basis) field; the column is included for consistency and left empty in those cases
- A Colab notebook for a quick look at the data: [採購網題庫1.ipynb](https://colab.research.google.com/drive/1LS1AZdVgAAut2v2UgK2Ku5F7hplqmdxm?usp=sharing)
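As a small usage sketch (not part of the original card), the single `validation` split and its Chinese-named columns defined in the YAML above can be loaded with the standard `datasets` API:
```python
from datasets import load_dataset

# Load the only split defined in this card's YAML metadata.
exam = load_dataset("wuulong/purchasing_exam_questions", split="validation")
row = exam[0]
print(row["試題"], row["答案"])  # question text and answer, per the feature names above
```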
|
SayantanJoker/Shrutilipi_Hindi_resampled_44100_merged_13_quality_metadata_description | SayantanJoker | 2025-05-04T20:45:51Z | 0 | 0 | [
"region:us"
] | [] | 2025-05-04T20:13:26Z | 0 | ---
dataset_info:
features:
- name: text
dtype: string
- name: file_name
dtype: string
- name: utterance_pitch_mean
dtype: float32
- name: utterance_pitch_std
dtype: float32
- name: snr
dtype: float64
- name: c50
dtype: float64
- name: speaking_rate
dtype: string
- name: phonemes
dtype: string
- name: stoi
dtype: float64
- name: si-sdr
dtype: float64
- name: pesq
dtype: float64
- name: noise
dtype: string
- name: reverberation
dtype: string
- name: speech_monotony
dtype: string
- name: sdr_noise
dtype: string
- name: pesq_speech_quality
dtype: string
- name: text_description
dtype: string
splits:
- name: train
num_bytes: 29801454
num_examples: 49807
download_size: 9403220
dataset_size: 29801454
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
davidadamczyk/subset_imdb-5000 | davidadamczyk | 2024-10-11T19:23:02Z | 18 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-10-11T19:21:57Z | 0 | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: int64
- name: label_text
dtype: string
splits:
- name: test
num_bytes: 13180274.0
num_examples: 10000
download_size: 8545329
dataset_size: 13180274.0
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
---
|
sharjeel103/filtered_fineweb_edu_350b | sharjeel103 | 2025-06-17T21:43:02Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-17T18:28:39Z | 0 | ---
dataset_info:
features:
- name: text
dtype: string
- name: id
dtype: string
- name: dump
dtype: string
- name: url
dtype: string
- name: file_path
dtype: string
- name: language
dtype: string
- name: language_score
dtype: float64
- name: token_count
dtype: int64
- name: score
dtype: float64
- name: int_score
dtype: int64
splits:
- name: train
num_bytes: 416501795
num_examples: 61001
download_size: 241125497
dataset_size: 416501795
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ayushchakravarthy/countdown-env | ayushchakravarthy | 2025-05-22T23:31:30Z | 63 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-17T04:15:59Z | 0 | ---
dataset_info:
features:
- name: prompt
list:
- name: content
dtype: string
- name: role
dtype: string
- name: metadata
struct:
- name: T_max
dtype: int64
- name: numbers
sequence:
sequence: int64
- name: target
sequence: int64
splits:
- name: train
num_bytes: 101390647
num_examples: 100000
- name: eval
num_bytes: 101407
num_examples: 100
download_size: 34246054
dataset_size: 101492054
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: eval
path: data/eval-*
---
|
french-datasets/michsethowusu_french-kabyle_sentence-pairs | french-datasets | 2025-05-20T14:44:53Z | 0 | 0 | [
"language:fra",
"language:kab",
"region:us"
] | [] | 2025-05-20T14:43:08Z | 0 | ---
language:
- fra
- kab
viewer: false
---
This repository is empty; it was created to improve the search indexing of the dataset [michsethowusu/french-kabyle_sentence-pairs](https://huggingface.co/datasets/michsethowusu/french-kabyle_sentence-pairs). |
PogusTheWhisper/fleurs-th_th-noise-augmented | PogusTheWhisper | 2025-05-29T10:16:35Z | 20 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-29T10:15:19Z | 0 | ---
dataset_info:
features:
- name: id
dtype: int32
- name: num_samples
dtype: int32
- name: path
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 24000
- name: transcription
dtype: string
- name: raw_transcription
dtype: string
- name: gender
dtype:
class_label:
names:
'0': male
'1': female
'2': other
- name: lang_id
dtype:
class_label:
names:
'0': af_za
'1': am_et
'2': ar_eg
'3': as_in
'4': ast_es
'5': az_az
'6': be_by
'7': bg_bg
'8': bn_in
'9': bs_ba
'10': ca_es
'11': ceb_ph
'12': ckb_iq
'13': cmn_hans_cn
'14': cs_cz
'15': cy_gb
'16': da_dk
'17': de_de
'18': el_gr
'19': en_us
'20': es_419
'21': et_ee
'22': fa_ir
'23': ff_sn
'24': fi_fi
'25': fil_ph
'26': fr_fr
'27': ga_ie
'28': gl_es
'29': gu_in
'30': ha_ng
'31': he_il
'32': hi_in
'33': hr_hr
'34': hu_hu
'35': hy_am
'36': id_id
'37': ig_ng
'38': is_is
'39': it_it
'40': ja_jp
'41': jv_id
'42': ka_ge
'43': kam_ke
'44': kea_cv
'45': kk_kz
'46': km_kh
'47': kn_in
'48': ko_kr
'49': ky_kg
'50': lb_lu
'51': lg_ug
'52': ln_cd
'53': lo_la
'54': lt_lt
'55': luo_ke
'56': lv_lv
'57': mi_nz
'58': mk_mk
'59': ml_in
'60': mn_mn
'61': mr_in
'62': ms_my
'63': mt_mt
'64': my_mm
'65': nb_no
'66': ne_np
'67': nl_nl
'68': nso_za
'69': ny_mw
'70': oc_fr
'71': om_et
'72': or_in
'73': pa_in
'74': pl_pl
'75': ps_af
'76': pt_br
'77': ro_ro
'78': ru_ru
'79': sd_in
'80': sk_sk
'81': sl_si
'82': sn_zw
'83': so_so
'84': sr_rs
'85': sv_se
'86': sw_ke
'87': ta_in
'88': te_in
'89': tg_tj
'90': th_th
'91': tr_tr
'92': uk_ua
'93': umb_ao
'94': ur_pk
'95': uz_uz
'96': vi_vn
'97': wo_sn
'98': xh_za
'99': yo_ng
'100': yue_hant_hk
'101': zu_za
'102': all
- name: language
dtype: string
- name: lang_group_id
dtype:
class_label:
names:
'0': western_european_we
'1': eastern_european_ee
'2': central_asia_middle_north_african_cmn
'3': sub_saharan_african_ssa
'4': south_asian_sa
'5': south_east_asian_sea
'6': chinese_japanase_korean_cjk
splits:
- name: train
num_bytes: 1469637849.74
num_examples: 2602
- name: test
num_bytes: 592274192.249
num_examples: 1021
- name: dev
num_bytes: 247118119.0
num_examples: 439
download_size: 2306197081
dataset_size: 2309030160.989
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: dev
path: data/dev-*
---
|
aimagelab/ReflectiVA-Data | aimagelab | 2025-04-05T22:42:59Z | 248 | 0 | [
"task_categories:image-text-to-text",
"license:apache-2.0",
"arxiv:2411.16863",
"region:us"
] | [
"image-text-to-text"
] | 2025-03-25T14:37:42Z | 0 | ---
license: apache-2.0
task_categories:
- image-text-to-text
---
In this dataset space, you will find the data of [Augmenting Multimodal LLMs with Self-Reflective Tokens for Knowledge-based Visual Question Answering](https://huggingface.co/papers/2411.16863).
For more information, visit our [ReflectiVA repository](https://github.com/aimagelab/ReflectiVA), our [project page](https://aimagelab.github.io/ReflectiVA/) and [model space](https://huggingface.co/aimagelab/ReflectiVA).
## Citation
If you make use of our work, please cite our repo:
```bibtex
@inproceedings{cocchi2024augmenting,
title={{Augmenting Multimodal LLMs with Self-Reflective Tokens for Knowledge-based Visual Question Answering}},
author={Cocchi, Federico and Moratelli, Nicholas and Cornia, Marcella and Baraldi, Lorenzo and Cucchiara, Rita},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
year={2025}
}
```
|
mrcuddle/Nifty-Authoritarian-Scrape | mrcuddle | 2024-12-21T17:11:01Z | 32 | 0 | [
"task_categories:text-classification",
"task_categories:text-generation",
"task_categories:text2text-generation",
"language:en",
"size_categories:1K<n<10K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-classification",
"text-generation",
"text2text-generation"
] | 2024-12-19T21:20:40Z | 0 | ---
language:
- en
size_categories:
- 1K<n<10K
task_categories:
- text-classification
- text-generation
- text2text-generation
---
Data scraped from the LGBT literature archive Nifty.Org.
- Category: Authoritarian
|
d0rj/audiocaps | d0rj | 2023-06-30T12:17:56Z | 144 | 5 | [
"task_categories:text-to-speech",
"multilinguality:monolingual",
"source_datasets:original",
"language:en",
"license:mit",
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"youtube",
"captions"
] | [
"text-to-speech"
] | 2023-06-29T19:10:43Z | 1 | ---
dataset_info:
features:
- name: audiocap_id
dtype: int64
- name: youtube_id
dtype: string
- name: start_time
dtype: int64
- name: caption
dtype: string
splits:
- name: train
num_bytes: 4162928
num_examples: 49838
- name: validation
num_bytes: 198563
num_examples: 2475
- name: test
num_bytes: 454652
num_examples: 4875
download_size: 2781679
dataset_size: 4816143
license: mit
task_categories:
- text-to-speech
language:
- en
multilinguality:
- monolingual
tags:
- youtube
- captions
pretty_name: AudioCaps
size_categories:
- 10K<n<100K
source_datasets:
- original
paperswithcode_id: audiocaps
---
# audiocaps
## Dataset Description
- **Homepage:** https://audiocaps.github.io/
- **Repository:** https://github.com/cdjkim/audiocaps
- **Paper:** [AudioCaps: Generating Captions for Audios in The Wild](https://aclanthology.org/N19-1011.pdf)
HuggingFace mirror of [official data repo](https://github.com/cdjkim/audiocaps). |
pvmoorthi/eval_telebuddy1750027646 | pvmoorthi | 2025-06-15T22:48:33Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot"
] | [
"robotics"
] | 2025-06-15T22:48:27Z | 0 | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "lekiwi_client",
"total_episodes": 1,
"total_frames": 750,
"total_tasks": 1,
"total_videos": 2,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 10,
"splits": {
"train": "0:1"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
9
],
"names": [
"arm_shoulder_pan.pos",
"arm_shoulder_lift.pos",
"arm_elbow_flex.pos",
"arm_wrist_flex.pos",
"arm_wrist_roll.pos",
"arm_gripper.pos",
"x.vel",
"y.vel",
"theta.vel"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
9
],
"names": [
"arm_shoulder_pan.pos",
"arm_shoulder_lift.pos",
"arm_elbow_flex.pos",
"arm_wrist_flex.pos",
"arm_wrist_roll.pos",
"arm_gripper.pos",
"x.vel",
"y.vel",
"theta.vel"
]
},
"observation.images.front": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 480,
"video.width": 640,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 10,
"video.channels": 3,
"has_audio": false
}
},
"observation.images.wrist": {
"dtype": "video",
"shape": [
640,
480,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.height": 640,
"video.width": 480,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"video.fps": 10,
"video.channels": 3,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
martijn75/COHeN_heb_unvoc | martijn75 | 2025-01-11T15:41:48Z | 17 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-01-11T15:41:37Z | 0 | ---
dataset_info:
features:
- name: Text
dtype: string
- name: Stage
dtype:
class_label:
names:
'0': ABH
'1': CBH
'2': TBH
'3': LBH
splits:
- name: train
num_bytes: 1272278
num_examples: 9574
- name: test
num_bytes: 157645
num_examples: 1197
- name: eval
num_bytes: 156100
num_examples: 1197
download_size: 762130
dataset_size: 1586023
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: eval
path: data/eval-*
---
|
cobordism/mixed_pa-le-an-30k | cobordism | 2024-11-05T10:59:51Z | 21 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-11-05T10:59:14Z | 0 | ---
dataset_info:
features:
- name: image
dtype: image
- name: conversations
sequence:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 1218674370.0
num_examples: 30000
download_size: 1177396671
dataset_size: 1218674370.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
RyanYr/reflect_llama8b-t0_mistlarge-t12_om2-4 | RyanYr | 2024-12-10T04:15:16Z | 16 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-12-10T04:15:14Z | 0 | ---
dataset_info:
features:
- name: problem
dtype: string
- name: generated_solution
dtype: string
- name: answer
dtype: string
- name: problem_source
dtype: string
- name: response@0
sequence: string
- name: response@1
sequence: string
- name: response@2
sequence: string
splits:
- name: train
num_bytes: 57336765
num_examples: 10000
download_size: 24553575
dataset_size: 57336765
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
colabfit/discrepencies_and_error_metrics_NPJ_2023_enhanced_validation_set | colabfit | 2025-04-23T18:13:51Z | 20 | 0 | [
"license:cc-by-4.0",
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"molecular dynamics",
"mlip",
"interatomic potential"
] | [] | 2025-04-01T20:34:44Z | 0 | ---
configs:
- config_name: default
data_files: "main/*.parquet"
license: cc-by-4.0
tags:
- molecular dynamics
- mlip
- interatomic potential
pretty_name: discrepencies and error metrics NPJ 2023 enhanced validation set
---
# Dataset
discrepencies and error metrics NPJ 2023 enhanced validation set
### Description
Structures from discrepencies_and_error_metrics_NPJ_2023 validation set, enhanced by inclusion of rare events. The full discrepencies_and_error_metrics_NPJ_2023 dataset includes the original mlearn_Si_train dataset, modified with the purpose of developing models with better diffusivity scores by replacing ~54% of the data with structures containing migrating interstitials. The enhanced validation set contains 50 total structures, consisting of 20 structures randomly selected from the 120 replaced structures of the original training dataset, 11 snapshots with vacancy rare events (RE) from AIMD simulations, and 19 snapshots with interstitial RE from AIMD simulations. We also construct interstitial-RE and vacancy-RE testing sets, each consisting of 100 snapshots of atomic configurations with a single migrating vacancy or interstitial, respectively, from AIMD simulations at 1230 K.
<br>Additional details stored in dataset columns prepended with "dataset_".
### Dataset authors
Yunsheng Liu, Xingfeng He, Yifei Mo
### Publication
https://doi.org/10.1038/s41524-023-01123-3
### Original data link
https://github.com/mogroupumd/Silicon_MLIP_datasets
### License
CC-BY-4.0
### Number of unique molecular configurations
50
### Number of atoms
3198
### Elements included
Si
### Properties included
energy, atomic forces, cauchy stress
### Cite this dataset
Liu, Y., He, X., and Mo, Y. _discrepencies and error metrics NPJ 2023 enhanced validation set_. ColabFit, 2023. https://doi.org/10.60732/9c77bb8c |
claire-e-5/POS-tagged-MovieSummaries | claire-e-5 | 2024-10-25T18:42:00Z | 23 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-10-25T18:19:26Z | 0 | ---
dataset_info:
features:
- name: Movie Title
dtype: string
- name: Movie Description
dtype: string
- name: adjectives
dtype: string
splits:
- name: train
num_bytes: 3923496
num_examples: 5358
download_size: 2057280
dataset_size: 3923496
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
SayantanJoker/Shrutilipi_Hindi_original_chunk_68 | SayantanJoker | 2025-04-15T03:24:28Z | 19 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-15T03:23:29Z | 0 | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: transcription
dtype: string
- name: file_name
dtype: string
splits:
- name: train
num_bytes: 1324303531.0
num_examples: 10000
download_size: 1322781904
dataset_size: 1324303531.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
pxyyy/NuminaMath-CoT-smp20k-removed-top3000-by-mp-1e-4 | pxyyy | 2025-05-02T16:21:58Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-02T16:21:55Z | 0 | ---
dataset_info:
features:
- name: source
dtype: string
- name: problem
dtype: string
- name: solution
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 49484513.75
num_examples: 17000
download_size: 24494029
dataset_size: 49484513.75
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "NuminaMath-CoT-smp20k-removed-top3000-by-mp-1e-4"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
pyterrier/nq.terrier | pyterrier | 2024-10-08T17:34:45Z | 32 | 0 | [
"task_categories:text-retrieval",
"region:us",
"pyterrier",
"pyterrier-artifact",
"pyterrier-artifact.sparse_index",
"pyterrier-artifact.sparse_index.terrier"
] | [
"text-retrieval"
] | 2024-10-08T17:30:12Z | 0 | ---
# pretty_name: "" # Example: "MS MARCO Terrier Index"
tags:
- pyterrier
- pyterrier-artifact
- pyterrier-artifact.sparse_index
- pyterrier-artifact.sparse_index.terrier
task_categories:
- text-retrieval
viewer: false
---
# nq.terrier
## Description
Terrier index for NQ (Natural Questions)
## Usage
```python
# Load the artifact
import pyterrier as pt
index = pt.Artifact.from_hf('pyterrier/nq.terrier')
index.bm25()
```
## Benchmarks
| name | nDCG@10 | R@1000 |
|:-------|----------:|---------:|
| bm25 | 0.2814 | 0.8906 |
| dph | 0.2846 | 0.8926 |
## Reproduction
```python
import pyterrier as pt
from tqdm import tqdm
import pandas as pd
import ir_datasets
from pyterrier_pisa import PisaIndex
dataset = ir_datasets.load('beir/nq')
meta_docno_len = dataset.metadata()['docs']['fields']['doc_id']['max_len']
indexer = pt.IterDictIndexer("./nq.terrier", meta={'docno': meta_docno_len, 'text': 4096})
docs = ({'docno': d.doc_id, 'text': d.default_text()} for d in tqdm(dataset.docs))
indexer.index(docs)
```
## Metadata
```
{
"type": "sparse_index",
"format": "terrier",
"package_hint": "python-terrier"
}
```
|
candywal/one_shot_logistic_code_sabotage_unsafe | candywal | 2025-05-20T03:35:46Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-20T03:35:44Z | 0 | ---
dataset_info:
features:
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: trajectory_index
dtype: int64
- name: classification
dtype: string
splits:
- name: train
num_bytes: 386406
num_examples: 400
download_size: 29109
dataset_size: 386406
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
NicoHelemon/mmlu_STEM | NicoHelemon | 2025-05-26T20:58:05Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-26T20:58:01Z | 0 | ---
dataset_info:
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
splits:
- name: test
num_bytes: 1306745
num_examples: 4339
- name: validation
num_bytes: 140269
num_examples: 463
- name: dev
num_bytes: 40861
num_examples: 125
download_size: 754007
dataset_size: 1487875
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
- split: validation
path: data/validation-*
- split: dev
path: data/dev-*
---
|
K1CH/so100_test08 | K1CH | 2025-03-28T11:24:08Z | 29 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"so100",
"tutorial"
] | [
"robotics"
] | 2025-03-28T11:24:02Z | 0 | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- so100
- tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.0",
"robot_type": "so100",
"total_episodes": 2,
"total_frames": 1786,
"total_tasks": 1,
"total_videos": 2,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:2"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.laptop": {
"dtype": "video",
"shape": [
480,
640,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 640,
"video.width": 480,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
jplhughes2/alignment-faking-synthetic-chat-dataset-recall-30k-benign-20k | jplhughes2 | 2025-02-03T16:38:23Z | 14 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-02-03T16:38:18Z | 0 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: completion
dtype: string
splits:
- name: train
num_bytes: 152541291.0
num_examples: 50000
download_size: 82323446
dataset_size: 152541291.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Ricky06662/refCOCOg_9k_840 | Ricky06662 | 2025-03-26T06:05:35Z | 250 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-26T05:54:45Z | 0 | ---
dataset_info:
features:
- name: id
dtype: string
- name: problem
dtype: string
- name: solution
dtype: string
- name: image
dtype: image
- name: img_height
dtype: int64
- name: img_width
dtype: int64
splits:
- name: train
num_bytes: 8430727409.0
num_examples: 9000
download_size: 4888701720
dataset_size: 8430727409.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
katsuchi/paimon-voice-dataset | katsuchi | 2025-02-09T05:30:40Z | 27 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-02-09T05:12:36Z | 0 | ---
dataset_info:
features:
- name: text
dtype: string
- name: audio
dtype:
audio:
sampling_rate: 16000
splits:
- name: train
num_bytes: 6746003561.007212
num_examples: 15021
- name: test
num_bytes: 1678360595.8897882
num_examples: 3756
download_size: 8243579489
dataset_size: 8424364156.896999
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
gabrielegabellone/mediterraneo_triplets | gabrielegabellone | 2025-06-12T09:43:46Z | 1 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-12T09:43:37Z | 0 | ---
dataset_info:
features:
- name: query
dtype: string
- name: pos
sequence: string
- name: neg
sequence: string
- name: metadata
struct:
- name: chunk_id
dtype: string
- name: source
dtype: string
- name: timestamp
dtype: string
splits:
- name: train
num_bytes: 8950
num_examples: 29
download_size: 8447
dataset_size: 8950
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
dgambettaphd/D_llm2_gen1_WXS_doc1000_synt64_rnd42_lr5e-05_acm_SYNLAST | dgambettaphd | 2025-05-11T22:46:05Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-11T22:46:00Z | 0 | ---
dataset_info:
features:
- name: id_doc
dtype: int64
- name: text
dtype: string
- name: dataset
dtype: string
- name: gen
dtype: int64
- name: synt
dtype: int64
- name: MPP
dtype: float64
splits:
- name: train
num_bytes: 9758284
num_examples: 17000
download_size: 5837899
dataset_size: 9758284
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
datacomp/ImageNetTraining20.0-frac-1over64 | datacomp | 2025-01-16T23:00:45Z | 14 | 0 | [
"license:cc",
"region:us"
] | [] | 2025-01-14T19:01:19Z | 0 | ---
title: ImageNetTraining20.0-frac-1over64
emoji: 😻
colorFrom: yellow
colorTo: blue
sdk: docker
pinned: false
license: cc
startup_duration_timeout: 5h
hf_oauth_expiration_minutes: 1440
---
Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference |
Isotonic/reasoning-v0.1 | Isotonic | 2024-10-28T13:43:19Z | 30 | 1 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-10-28T13:41:42Z | 0 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: reasoning
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 196386850.0
num_examples: 56233
download_size: 98217851
dataset_size: 196386850.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Whimsyturtle/vpi-bench | Whimsyturtle | 2025-05-12T11:54:33Z | 0 | 0 | [
"language:en",
"license:cc-by-4.0",
"size_categories:n<1K",
"format:parquet",
"modality:text",
"modality:image",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"safety",
"alignment",
"security",
"privacy",
"multimodal",
"visual",
"image",
"adversarial",
"malicious",
"robustness",
"prompt-injection",
"visual-prompt-injection",
"data-exfiltration",
"prompt-defense",
"llm",
"agentic-ai",
"computer-use",
"browser-use",
"benchmark",
"dataset"
] | [] | 2025-05-10T13:40:38Z | 0 | ---
license: cc-by-4.0
language:
- en
tags:
- safety
- alignment
- security
- privacy
- multimodal
- visual
- image
- adversarial
- malicious
- robustness
- prompt-injection
- visual-prompt-injection
- data-exfiltration
- prompt-defense
- llm
- agentic-ai
- computer-use
- browser-use
- benchmark
- dataset
pretty_name: Computer-Use Agents Testcases & Web Platforms Dataset
size_categories:
- n<1K
configs:
- config_name: default
data_files:
- split: test
path: "main_benchmark.parquet"
--- |
SayantanJoker/Hindi_1000hr_Train_Subset_44100Hz_quality_metadata | SayantanJoker | 2025-04-13T10:31:09Z | 70 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-13T10:31:08Z | 0 | ---
dataset_info:
features:
- name: text
dtype: string
- name: file_name
dtype: string
- name: utterance_pitch_mean
dtype: float32
- name: utterance_pitch_std
dtype: float32
- name: snr
dtype: float64
- name: c50
dtype: float64
- name: speaking_rate
dtype: string
- name: phonemes
dtype: string
- name: stoi
dtype: float64
- name: si-sdr
dtype: float64
- name: pesq
dtype: float64
- name: noise
dtype: string
- name: reverberation
dtype: string
- name: speech_monotony
dtype: string
- name: sdr_noise
dtype: string
- name: pesq_speech_quality
dtype: string
splits:
- name: train
num_bytes: 1307557
num_examples: 3843
download_size: 255513
dataset_size: 1307557
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
timaeus/pythia-160m-pile-1m-ig-l7h4 | timaeus | 2025-01-31T19:08:48Z | 14 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-01-31T19:08:45Z | 0 | ---
dataset_info:
features:
- name: contents
dtype: string
- name: metadata
struct:
- name: pile_set_name
sequence: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 16395573
num_examples: 10000
download_size: 10703429
dataset_size: 16395573
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HungVu2003/opt-350m_beta_0.5_alpha_0.2_num-company_3_dataset_2_for_gen_1 | HungVu2003 | 2025-04-29T16:52:31Z | 17 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-29T16:52:28Z | 0 | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 1754337
num_examples: 12499
download_size: 934102
dataset_size: 1754337
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
HungVu2003/opt-350m_beta_0.0_alpha_1.0_num-company_2_dataset_0_for_gen_6 | HungVu2003 | 2025-04-10T12:31:11Z | 14 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-10T12:31:10Z | 0 | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 1577126
num_examples: 6250
download_size: 909526
dataset_size: 1577126
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
colinrein/camel | colinrein | 2025-02-21T22:23:42Z | 14 | 0 | [
"license:apache-2.0",
"modality:image",
"region:us"
] | [] | 2025-02-21T22:06:32Z | 0 | ---
license: apache-2.0
---
|
nova07/naughtyamerica_json_6 | nova07 | 2025-02-09T14:09:29Z | 14 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-02-09T14:09:26Z | 0 | ---
dataset_info:
features:
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 468488
num_examples: 480
download_size: 188809
dataset_size: 468488
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
krutrim-ai-labs/IndicPope | krutrim-ai-labs | 2025-03-05T09:07:12Z | 158 | 0 | [
"language:as",
"language:hi",
"language:gu",
"language:ml",
"language:te",
"language:ta",
"language:kn",
"language:or",
"language:bn",
"language:en",
"language:mr",
"language:sa",
"license:other",
"size_categories:100K<n<1M",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2502.15392",
"arxiv:2310.03744",
"arxiv:2305.10355",
"arxiv:2305.16307",
"region:us"
] | [] | 2025-03-01T04:30:46Z | 0 | ---
dataset_info:
features:
- name: question_id
dtype: int64
- name: image
dtype: image
- name: text
dtype: string
- name: category
dtype: string
- name: label
dtype: string
- name: image_source
dtype: string
splits:
- name: assamese
num_bytes: 455367007.7
num_examples: 8910
- name: bengali
num_bytes: 455101633.7
num_examples: 8910
- name: english
num_bytes: 454020487.7
num_examples: 8910
- name: gujarati
num_bytes: 455105448.7
num_examples: 8910
- name: hindi
num_bytes: 455210630.7
num_examples: 8910
- name: kannada
num_bytes: 455153061.7
num_examples: 8910
- name: malayalam
num_bytes: 455401526.7
num_examples: 8910
- name: marathi
num_bytes: 455379587.7
num_examples: 8910
- name: odia
num_bytes: 455463255.7
num_examples: 8910
- name: sanskrit
num_bytes: 455470746.7
num_examples: 8910
- name: tamil
num_bytes: 455693348.7
num_examples: 8910
- name: telugu
num_bytes: 455307739.7
num_examples: 8910
download_size: 956887209
dataset_size: 5462674475.399999
configs:
- config_name: default
data_files:
- split: assamese
path: data/assamese-*
- split: bengali
path: data/bengali-*
- split: english
path: data/english-*
- split: gujarati
path: data/gujarati-*
- split: hindi
path: data/hindi-*
- split: kannada
path: data/kannada-*
- split: malayalam
path: data/malayalam-*
- split: marathi
path: data/marathi-*
- split: odia
path: data/odia-*
- split: sanskrit
path: data/sanskrit-*
- split: tamil
path: data/tamil-*
- split: telugu
path: data/telugu-*
license: other
license_name: krutrim-community-license-agreement-version-1.0
license_link: LICENSE.md
extra_gated_heading: Acknowledge license to accept the repository
extra_gated_button_content: Acknowledge license
language:
- as
- hi
- gu
- ml
- te
- ta
- kn
- or
- bn
- en
- mr
- sa
---
# IndicPope: Indian Multilingual Translation Dataset For Evaluating Large Vision Language Models
- You can find the performance of Chitrarth on IndicPope here: [**Paper**](https://arxiv.org/abs/2502.15392) | [**Github**](https://github.com/ola-krutrim/Chitrarth) | [**HuggingFace**](https://huggingface.co/krutrim-ai-labs/Chitrarth)
- Evaluation scripts for BharatBench are available here: [**Github**](https://github.com/ola-krutrim/BharatBench)
## 1. Introduction
IndicPope is a new dataset designed for evaluating Large Vision-Language Models (LVLMs) on Visual Question Answering (VQA) tasks. It focuses on simple Yes-or-No questions probing objects in images (e.g., *Is there a car in the image?*).
This dataset is built upon **POPE: Polling-based Object Probing Evaluation for Object Hallucination** ([GitHub](https://github.com/AoiDragon/POPE)), which employs negative sampling techniques to test hallucination in vision-language models under **Random, Popular, and Adversarial** settings.
---
## 2. Dataset Details
IndicPope consists of **8.91k samples per language**, covering **English and 11 Indic languages** (12 splits in total). Each sample includes:
- **Text**: The question about the image.
- **Category**: The type of sampling used (Random/Popular/Adversarial).
- **Label**: The answer (*Yes/No*).
### Supported Languages
- Assamese
- Bengali
- English
- Gujarati
- Hindi
- Kannada
- Malayalam
- Marathi
- Odia
- Sanskrit
- Tamil
- Telugu
---
## 3. How to Use and Run
You can load the dataset using the `datasets` library:
```python
from datasets import load_dataset
dataset = load_dataset("krutrim-ai-labs/IndicPope")
print(dataset)
```
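As a small extension of the snippet above (not part of the original card), a single language split can be loaded and its fields inspected; the split and column names follow this card's YAML metadata.
```python
from datasets import load_dataset

# Load only the Hindi split; splits are named after the supported languages listed above.
hindi = load_dataset("krutrim-ai-labs/IndicPope", split="hindi")
example = hindi[0]
print(example["text"], example["category"], example["label"])  # question, sampling type, Yes/No answer
```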
---
## 4. License
This code repository and the model weights are licensed under the [Krutrim Community License.](LICENSE.md)
## 5. Citation
```
@article{khan2025chitrarth,
title={Chitrarth: Bridging Vision and Language for a Billion People},
author={Shaharukh Khan, Ayush Tarun, Abhinav Ravi, Ali Faraz, Akshat Patidar, Praveen Kumar Pokala, Anagha Bhangare, Raja Kolla, Chandra Khatri, Shubham Agarwal},
journal={arXiv preprint arXiv:2502.15392},
year={2025}
}
@misc{liu2023improvedllava,
title={Improved Baselines with Visual Instruction Tuning},
author={Liu, Haotian and Li, Chunyuan and Li, Yuheng and Lee, Yong Jae},
publisher={arXiv:2310.03744},
year={2023},
}
@misc{liu2023llava,
title={Visual Instruction Tuning},
author={Liu, Haotian and Li, Chunyuan and Wu, Qingyang and Lee, Yong Jae},
publisher={NeurIPS},
year={2023},
}
@article{li2023evaluating,
title={Evaluating object hallucination in large vision-language models},
author={Li, Yifan and Du, Yifan and Zhou, Kun and Wang, Jinpeng and Zhao, Wayne Xin and Wen, Ji-Rong},
journal={arXiv preprint arXiv:2305.10355},
year={2023}
}
@article{gala2023indictrans2,
title={Indictrans2: Towards high-quality and accessible machine translation models for all 22 scheduled indian languages},
author={Gala, Jay and Chitale, Pranjal A and AK, Raghavan and Gumma, Varun and Doddapaneni, Sumanth and Kumar, Aswanth and Nawale, Janki and Sujatha, Anupama and Puduppully, Ratish and Raghavan, Vivek and others},
journal={arXiv preprint arXiv:2305.16307},
year={2023}
}
```
## 6. Contact
Contributions are welcome! If you have any improvements or suggestions, feel free to submit a pull request on GitHub.
## 7. Acknowledgement
IndicPope is built with reference to the code of the following projects: [POPE](https://github.com/AoiDragon/POPE), and [LLaVA-1.5](https://github.com/haotian-liu/LLaVA). Thanks for their awesome work! |
AdoCleanCode/CI100_correct_distorted_v1 | AdoCleanCode | 2025-05-01T13:07:32Z | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-30T23:43:16Z | 0 | ---
dataset_info:
features:
- name: image_id
dtype: int64
- name: text
dtype: string
- name: coarse_label_name
dtype: string
- name: coarse_label_id
dtype: int64
- name: coarse_id
dtype: int64
- name: fine_label
dtype: int64
- name: coarse_label
dtype: int64
- name: fine_label_name
dtype: string
splits:
- name: train
num_bytes: 10826914
num_examples: 22400
download_size: 3381201
dataset_size: 10826914
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Allen-UQ/cora_1_target_all_one_hop | Allen-UQ | 2025-06-18T04:07:02Z | 72 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-13T03:39:53Z | 0 | ---
dataset_info:
features:
- name: problem
dtype: string
- name: solution
dtype: string
- name: dataset
dtype: string
- name: split
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 654188
num_examples: 140
- name: validation
num_bytes: 2548678
num_examples: 500
- name: test
num_bytes: 9544936
num_examples: 2068
download_size: 6276177
dataset_size: 12747802
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
french-datasets/Geraldine-bso-publications-indexation-50k | french-datasets | 2025-03-29T20:37:43Z | 15 | 0 | [
"language:fra",
"region:us"
] | [] | 2025-03-29T20:37:34Z | 0 | ---
language: "fra"
viewer: false
---
This repository is empty; it was created to improve the discoverability of the dataset huggingface.co/datasets/Geraldine/bso-publications-indexation-50k.
|
deepinfinityai/30_NLEM_Aug_audios_dataset | deepinfinityai | 2025-03-29T09:27:19Z | 17 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-29T09:27:14Z | 0 | ---
dataset_info:
features:
- name: file_path
dtype:
audio:
sampling_rate: 16000
- name: text
dtype: string
splits:
- name: train
num_bytes: 6177220.0
num_examples: 174
download_size: 4006830
dataset_size: 6177220.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
aochongoliverli/R1-Distill-Qwen-1.5B-deepmath-level5-6-max-length-16384-rollout-8-temperature-0.5-rollouts | aochongoliverli | 2025-06-22T13:58:11Z | 11 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-22T00:43:49Z | 0 | ---
dataset_info:
features:
- name: index
dtype: int64
- name: question
dtype: string
- name: response
sequence: string
- name: reward
sequence: float64
- name: global_step
sequence: int64
splits:
- name: train
num_bytes: 3171749592
num_examples: 19200
download_size: 1120243027
dataset_size: 3171749592
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ebony59/gsm8k-gen-stepwise | ebony59 | 2025-04-24T16:54:00Z | 23 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-23T21:27:36Z | 0 | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: completions
sequence: string
- name: labels
sequence: bool
splits:
- name: train
num_bytes: 28807807
num_examples: 50916
download_size: 6692683
dataset_size: 28807807
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
khr0516/SynthScars | khr0516 | 2025-03-21T08:59:19Z | 48 | 0 | [
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:imagefolder",
"modality:image",
"library:datasets",
"library:mlcroissant",
"arxiv:2503.15264",
"region:us"
] | [] | 2025-03-21T06:26:47Z | 0 | ---
license: apache-2.0
---
## Citation
```
@misc{kang2025legionlearninggroundexplain,
title={LEGION: Learning to Ground and Explain for Synthetic Image Detection},
author={Hengrui Kang and Siwei Wen and Zichen Wen and Junyan Ye and Weijia Li and Peilin Feng and Baichuan Zhou and Bin Wang and Dahua Lin and Linfeng Zhang and Conghui He},
year={2025},
eprint={2503.15264},
archivePrefix={arXiv},
primaryClass={cs.CV},
url={https://arxiv.org/abs/2503.15264},
}
```
|
khuang2/Countdown-Tasks-3to4-query-gen-prompts-w-hint | khuang2 | 2025-02-19T08:09:53Z | 27 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-02-19T08:09:52Z | 0 | ---
dataset_info:
features:
- name: prompt
dtype: string
splits:
- name: train
num_bytes: 117400000
num_examples: 50000
download_size: 552690
dataset_size: 117400000
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
dgambettaphd/D_llm3_gen9_run0_X_doc1000_synt64_tot128_MPP | dgambettaphd | 2025-04-26T01:30:06Z | 31 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-26T01:30:02Z | 0 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: text
dtype: string
- name: dataset
dtype: string
- name: gen
dtype: int64
- name: synt
dtype: int64
- name: TPP
dtype: float64
- name: MPP
dtype: float64
- name: FTP
dtype: float64
splits:
- name: train
num_bytes: 8288098
num_examples: 13000
download_size: 5250724
dataset_size: 8288098
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
jbloom-aisi/benton_et_al_sandbagging_response_llama_3_1_70B_Instruct | jbloom-aisi | 2025-02-14T14:41:21Z | 9 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-02-14T13:39:26Z | 0 | ---
dataset_info:
features:
- name: question
dtype: string
- name: id
dtype: int64
- name: correct_answer
dtype: string
- name: distractors
sequence: string
- name: topic
dtype: string
- name: chat
list:
- name: content
dtype: string
- name: role
dtype: string
- name: correct_answer_letter
dtype: string
- name: desired_answer
dtype: string
- name: desired_answer_letter
dtype: string
- name: sandbagging_environment
dtype: bool
- name: prefix_idx
dtype: int64
- name: prefix
dtype: string
- name: policy
dtype: string
- name: sandbag_decision
dtype: string
- name: model_hidden_answer
dtype: string
- name: model_given_answer
dtype: string
- name: inferred_sandbagging
dtype: bool
- name: inferred_sandbagging_correctly
dtype: bool
- name: model_hidden_answer_correct
dtype: bool
- name: model_given_answer_correct
dtype: bool
- name: sandbagged_effectively
dtype: bool
splits:
- name: train
num_bytes: 5458533
num_examples: 1273
download_size: 1299967
dataset_size: 5458533
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Langue-des-signes-francaise/dinlang-gespin-2023 | Langue-des-signes-francaise | 2025-03-29T22:04:05Z | 28 | 0 | [
"multilinguality:multilingual",
"language:fra",
"language:fsl",
"license:cc-by-4.0",
"region:us"
] | [] | 2024-10-22T13:59:37Z | 0 | ---
language:
- fra
- fsl
multilinguality:
- multilingual
viewer: false
license: cc-by-4.0
---
> [!NOTE]
> Dataset origin: https://www.ortolang.fr/market/corpora/dinlang-gespin-2023
## Description
Here you will find the video excerpts related to the article "Coordinating eating and languaging: the choreography of speech, sign, gesture and action in family dinners", written for GESPIN 2023 by the DinLang project team.
## Citation
```
@misc{11403/dinlang-gespin-2023/v1,
title = {DinLang - GESPIN 2023},
author = {PRISMES and MoDyCo and SFL and DYLIS},
url = {https://hdl.handle.net/11403/dinlang-gespin-2023/v1},
note = {{ORTOLANG} ({Open} {Resources} {and} {TOols} {for} {LANGuage}) \textendash www.ortolang.fr},
copyright = {Licence Creative Commons - Attribution 4.0 International},
year = {2023}
}
``` |
lukicdarkoo/close_2_cameras | lukicdarkoo | 2025-06-14T17:44:45Z | 0 | 0 | [
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"modality:timeseries",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-06-14T17:42:31Z | 0 | ---
license: apache-2.0
---
|
NovaSky-AI/Sky-T1_preference_data_10k | NovaSky-AI | 2025-01-23T07:58:17Z | 73 | 13 | [
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-01-23T07:55:28Z | 0 | ---
license: apache-2.0
---
|
autobio-bench/insert-blender | autobio-bench | 2025-05-14T14:59:04Z | 0 | 0 | [
"task_categories:robotics",
"license:mit",
"modality:video",
"region:us",
"LeRobot",
"medical"
] | [
"robotics"
] | 2025-05-14T14:56:40Z | 0 | ---
license: mit
task_categories:
- robotics
tags:
- LeRobot
- medical
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** mit
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.0",
"robot_type": null,
"total_episodes": 100,
"total_frames": 55127,
"total_tasks": 10,
"total_videos": 200,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 50,
"splits": {
"train": "0:100"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"state": {
"dtype": "float32",
"shape": [
7
],
"names": [
"state"
]
},
"actions": {
"dtype": "float32",
"shape": [
7
],
"names": [
"actions"
]
},
"image": {
"dtype": "video",
"shape": [
224,
224,
3
],
"names": [
"height",
"width",
"channel"
],
"info": {
"video.fps": 50.0,
"video.height": 224,
"video.width": 224,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"wrist_image": {
"dtype": "video",
"shape": [
224,
224,
3
],
"names": [
"height",
"width",
"channel"
],
"info": {
"video.fps": 50.0,
"video.height": 224,
"video.width": 224,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_7ec550b9-e3b0-4cc6-b781-7cc816e6ffab | argilla-internal-testing | 2024-12-13T13:06:23Z | 16 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-12-13T13:06:22Z | 0 | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1256
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
deepakkumar07/debug-llama2-1k | deepakkumar07 | 2024-10-26T18:12:52Z | 23 | 1 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-10-26T18:12:50Z | 0 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1654448
num_examples: 1000
download_size: 966692
dataset_size: 1654448
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mlfoundations-dev/difficulty_sorting_medium_seed_code_w_openthoughts | mlfoundations-dev | 2025-02-18T00:59:55Z | 96 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-02-18T00:51:08Z | 0 | ---
dataset_info:
features:
- name: system
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
- name: problem
dtype: string
- name: reasoning
dtype: string
- name: deepseek_solution
dtype: string
- name: id
dtype: string
- name: ground_truth_solution
dtype: string
- name: source
dtype: string
- name: code
dtype: 'null'
- name: correct
dtype: bool
- name: judge_reasoning
dtype: string
- name: __original_row_idx
dtype: int64
- name: domain
dtype: string
- name: difficulty
dtype: int64
- name: difficulty_reasoning
dtype: string
- name: r1_distill_70b_response
dtype: string
splits:
- name: train
num_bytes: 3081911218
num_examples: 119168
download_size: 1273167256
dataset_size: 3081911218
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Senju2/context-aware-project | Senju2 | 2025-04-23T18:41:30Z | 16 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-19T18:26:17Z | 0 | ---
dataset_info:
features:
- name: en
dtype: string
- name: ar
dtype: string
- name: formal
dtype: string
- name: informal
dtype: string
- name: region
dtype: string
- name: context
dtype: string
splits:
- name: train
num_bytes: 36533246
num_examples: 231459
download_size: 11689635
dataset_size: 36533246
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Demonion/socks_basket_white | Demonion | 2025-04-12T10:04:40Z | 18 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"modality:video",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot",
"socks_basket, so100,tutorial"
] | [
"robotics"
] | 2025-04-12T10:04:25Z | 0 | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
- socks_basket, so100,tutorial
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": "so100",
"total_episodes": 20,
"total_frames": 9311,
"total_tasks": 1,
"total_videos": 20,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 30,
"splits": {
"train": "0:20"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"action": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.state": {
"dtype": "float32",
"shape": [
6
],
"names": [
"main_shoulder_pan",
"main_shoulder_lift",
"main_elbow_flex",
"main_wrist_flex",
"main_wrist_roll",
"main_gripper"
]
},
"observation.images.iphone": {
"dtype": "video",
"shape": [
720,
1280,
3
],
"names": [
"height",
"width",
"channels"
],
"info": {
"video.fps": 30.0,
"video.height": 720,
"video.width": 1280,
"video.channels": 3,
"video.codec": "av1",
"video.pix_fmt": "yuv420p",
"video.is_depth_map": false,
"has_audio": false
}
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
OlafCC/lab1-llama3-tt | OlafCC | 2024-10-17T06:38:37Z | 14 | 0 | [
"license:apache-2.0",
"region:us"
] | [] | 2024-10-17T06:38:37Z | 0 | ---
license: apache-2.0
---
|
dvilasuero/jailbreak-classification-processed | dvilasuero | 2025-03-19T08:38:56Z | 20 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-17T11:33:06Z | 0 | ---
dataset_info:
features:
- name: type
dtype: string
- name: prompt
dtype: string
- name: gemma3-classification
dtype: string
- name: gemma3-classification_extracted
dtype: string
splits:
- name: train
num_bytes: 110681
num_examples: 100
download_size: 72043
dataset_size: 110681
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
thinkPy/title | thinkPy | 2024-11-05T17:01:07Z | 25 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-11-05T13:58:53Z | 0 | ---
dataset_info:
features:
- name: Text
sequence: string
- name: Label
sequence: bool
splits:
- name: test
num_bytes: 163773
num_examples: 10
- name: train
num_bytes: 45855376
num_examples: 2475
download_size: 26728363
dataset_size: 46019149
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
- split: train
path: data/train-*
---
|
SreehariR/guanaco-llama2-1k | SreehariR | 2024-12-14T11:12:31Z | 15 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-12-14T11:12:29Z | 0 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 8410
num_examples: 3
download_size: 13021
dataset_size: 8410
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
EdinsonAcosta/DataSetKalmaPross | EdinsonAcosta | 2025-01-16T17:11:16Z | 22 | 0 | [
"task_categories:text2text-generation",
"language:es",
"license:cc-by-nc-sa-4.0",
"size_categories:1K<n<10K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"jsonl"
] | [
"text2text-generation"
] | 2025-01-16T17:02:30Z | 0 | ---
license: cc-by-nc-sa-4.0
task_categories:
- text2text-generation
language:
- es
tags:
- jsonl
size_categories:
- n<1K
--- |
juliadollis/stf_regex_ner_pierre | juliadollis | 2024-11-28T16:53:04Z | 7 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-11-28T16:53:02Z | 0 | ---
dataset_info:
features:
- name: inteiro_teor
dtype: string
- name: url_download
dtype: string
- name: dataDecisao
dtype: timestamp[ns]
- name: dataPublicacao
dtype: timestamp[ns]
- name: decisao
dtype: string
- name: descricaoClasse
dtype: string
- name: ementa
dtype: string
- name: id
dtype: string
- name: jurisprudenciaCitada
dtype: string
- name: ministroRelator
dtype: string
- name: nomeOrgaoJulgador
dtype: string
- name: numeroProcesso
dtype: string
- name: referenciasLegislativas
sequence: string
- name: siglaClasse
dtype: string
- name: tipoDeDecisao
dtype: string
- name: titulo
dtype: string
- name: acordaosSimilares
sequence: string
- name: partes_lista_texto
dtype: string
- name: temaProcs
sequence: string
- name: inteiro_teor_regex
dtype: string
- name: NER
struct:
- name: JURISPRUDENCIA
sequence: string
- name: LEGISLACAO
sequence: string
- name: LOCAL
sequence: string
- name: ORGANIZACAO
sequence: string
- name: PESSOA
sequence: string
- name: TEMPO
sequence: string
splits:
- name: train
num_bytes: 311551
num_examples: 6
download_size: 107987
dataset_size: 311551
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ed-donner/trade_code_data | ed-donner | 2024-11-21T16:24:31Z | 23 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-11-21T16:24:30Z | 0 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 167244
num_examples: 242
download_size: 37496
dataset_size: 167244
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Luffytaro-1/asr_en_ar_switch_split_128_final | Luffytaro-1 | 2025-02-19T07:19:58Z | 16 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-02-19T07:19:44Z | 0 | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
splits:
- name: train
num_bytes: 4175435.0
num_examples: 47
download_size: 3691002
dataset_size: 4175435.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
argilla-internal-testing/test_import_dataset_from_hub_with_classlabel_17c677de-0ba3-46d4-abf6-2690adfe64bc | argilla-internal-testing | 2024-10-04T09:37:02Z | 17 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-10-04T09:37:01Z | 0 | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype:
class_label:
names:
'0': positive
'1': negative
splits:
- name: train
num_bytes: 111
num_examples: 3
download_size: 1454
dataset_size: 111
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
SAA-Lab/test_march23-cwv-genrm_llama3b-ckptNone | SAA-Lab | 2025-05-12T20:04:23Z | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-12T20:04:21Z | 0 | ---
dataset_info:
features:
- name: post_id
dtype: int64
- name: chosen_body
dtype: string
- name: rejected_body
dtype: string
- name: chosen_upvotes
dtype: int64
- name: rejected_upvotes
dtype: int64
- name: chosen_length
dtype: int64
- name: rejected_length
dtype: int64
- name: chosen_username
dtype: string
- name: rejected_username
dtype: string
- name: chosen_timestamp
dtype: timestamp[us]
- name: rejected_timestamp
dtype: timestamp[us]
- name: post_title
dtype: string
- name: time_diff
dtype: float64
- name: __index_level_0__
dtype: int64
- name: prompt
list:
- name: content
dtype: string
- name: role
dtype: string
- name: answer
dtype: string
- name: model_response
dtype: string
- name: reasoning
dtype: string
- name: preferred
dtype: string
- name: is_correct
dtype: bool
splits:
- name: train
num_bytes: 24327495
num_examples: 1898
download_size: 15012495
dataset_size: 24327495
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
zerostratos/music2200 | zerostratos | 2025-05-06T15:15:08Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:audio",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-06T15:07:00Z | 0 | ---
dataset_info:
features:
- name: input
dtype: audio
- name: label
dtype: audio
splits:
- name: train
num_bytes: 8604535214.0
num_examples: 182
download_size: 8435061307
dataset_size: 8604535214.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
AbdallahhSaleh/asd-5-tokenized | AbdallahhSaleh | 2025-03-04T15:15:05Z | 15 | 0 | [
"size_categories:10M<n<100M",
"format:parquet",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-03-04T14:36:55Z | 0 | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: token_type_ids
sequence: int8
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 5543429760
num_examples: 13998560
download_size: 1544702534
dataset_size: 5543429760
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
gtsaidata/OCTImagesDataset | gtsaidata | 2025-02-27T10:37:55Z | 8 | 0 | [
"task_categories:image-classification",
"language:en",
"region:us",
"OCT Images Dataset",
"training AI models"
] | [
"image-classification"
] | 2025-02-27T10:32:44Z | 0 | ---
task_categories:
- image-classification
language:
- en
tags:
- OCT Images Dataset
- training AI models
---
Description:
<a href="https://gts.ai/dataset-download/oct-images-dataset/" target="_blank">👉 Download the dataset here</a>
Retinal Optical Coherence Tomography (OCT):
OCT is a non-invasive imaging technique that generates high-resolution cross-sectional images of the retina. This method is critical for diagnosing and monitoring retinal conditions by allowing clinicians to examine the internal structure of the retina in great detail. According to estimates, around 30 million OCT scans are conducted annually worldwide, underscoring the importance of efficient analysis and interpretation of these images. Traditional OCT image analysis demands significant time and expertise, which is why machine learning models trained on such datasets are increasingly being adopted to assist in the diagnostic process.
Dataset Composition:
The dataset consists of retinal OCT images divided into four main categories of retinal conditions, enabling researchers to develop balanced models that can classify different eye conditions:
Choroidal Neovascularization (CNV): This condition involves the formation of new blood vessels in the choroid layer, typically associated with wet age-related macular degeneration (AMD). OCT scans reveal features such as subretinal fluid and the presence of neovascular membranes.
Image Characteristics: These images highlight areas where fluid has accumulated beneath the retina (subretinal fluid), which appear as dark spaces in the scans. The neovascular membrane can be identified as an irregular structure disrupting the retinal layers.
Diabetic Macular Edema (DME): A complication of diabetes, DME occurs when fluid accumulates in the retina due to leaking blood vessels, leading to swelling and impaired vision. OCT images of DME typically show increased retinal thickness and the presence of intraretinal fluid.
Image Characteristics: Retinal thickening and pockets of intraretinal fluid can be seen as circular dark spaces in the middle layers of the retina, indicating areas affected by edema.
Balanced Version – Importance of Equal Representation:
This balanced version of the dataset ensures that each condition, including normal retinas, is equally represented. By balancing the dataset, researchers can train models that are less biased and can more accurately predict different retinal conditions, thereby improving the generalization ability of AI models.
Applications of the Dataset:
AI-Assisted Diagnostics: The dataset can be used to develop machine learning algorithms that assist ophthalmologists in diagnosing retinal diseases, improving both the speed and accuracy of diagnosis.
Transfer Learning: Given the small size of the dataset, it is an excellent candidate for transfer learning, where pretrained models are fine-tuned for medical imaging tasks (a minimal sketch follows below).
Research in Retinal Disease Progression: The dataset can also be used to study disease progression by analyzing changes in OCT scans over time, potentially offering insights into early intervention strategies.
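As a rough illustration of the transfer-learning use case, the sketch below fine-tunes an ImageNet-pretrained ResNet-18 on the OCT categories. It assumes the images have already been downloaded and arranged into one sub-folder per condition; the directory path, model choice, and hyperparameters are illustrative assumptions, not part of the dataset itself.
```python
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Assumed layout (hypothetical path): data/oct/train/<condition>/<image>.jpg
DATA_DIR = "data/oct/train"

preprocess = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),  # OCT scans are grayscale; replicate to 3 channels
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

train_set = datasets.ImageFolder(DATA_DIR, transform=preprocess)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Start from ImageNet weights and replace the classifier head with one output per condition.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(train_set.classes))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):  # a few epochs are often enough when fine-tuning on a small dataset
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: last batch loss {loss.item():.4f}")
```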
In conclusion, this balanced version of the OCT dataset provides a foundational resource for AI-driven medical research, facilitating advancements in the diagnosis, monitoring, and treatment of various retinal diseases. By ensuring equal representation across conditions, the dataset enables more robust model training, which is essential for developing AI tools that are clinically useful in real-world settings.
This dataset is sourced from Kaggle. |
obiwan96/obiwan96open_web_math_qav3_150000_200000 | obiwan96 | 2025-02-21T05:14:09Z | 8 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-02-20T23:37:28Z | 0 | ---
dataset_info:
features:
- name: url
dtype: string
- name: text
dtype: string
- name: date
dtype: string
- name: metadata
dtype: string
- name: backtracking_raw
dtype: string
- name: is_solution_raw
dtype: string
- name: verification_raw
dtype: string
- name: subgoal_setting_raw
dtype: string
- name: backward_chaining_raw
dtype: string
- name: is_backtrack
dtype: string
- name: backtrack_count
dtype: string
- name: backtrack_rationale
dtype: string
- name: is_backchain
dtype: string
- name: backchain_count
dtype: string
- name: backchain_rationale
dtype: string
- name: is_verification
dtype: string
- name: verification_count
dtype: string
- name: verification_rationale
dtype: string
- name: contain_problem
dtype: string
- name: contain_solution
dtype: string
- name: domain_broad
dtype: string
- name: domain_specific
dtype: string
- name: solution_rationale
dtype: string
- name: raw_qa
dtype: string
- name: query
dtype: string
- name: completion
dtype: string
splits:
- name: train
num_bytes: 610663101
num_examples: 29260
download_size: 232756785
dataset_size: 610663101
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
BarryFutureman/vpt_data_8xx_shard0143 | BarryFutureman | 2025-06-11T02:01:22Z | 0 | 0 | [
"task_categories:robotics",
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us",
"LeRobot"
] | [
"robotics"
] | 2025-06-11T01:59:31Z | 0 | ---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
configs:
- config_name: default
data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
- **Homepage:** [More Information Needed]
- **Paper:** [More Information Needed]
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
"codebase_version": "v2.1",
"robot_type": null,
"total_episodes": 10,
"total_frames": 54039,
"total_tasks": 1,
"total_videos": 0,
"total_chunks": 1,
"chunks_size": 1000,
"fps": 20,
"splits": {
"train": "0:10"
},
"data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
"video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
"features": {
"observation.image": {
"dtype": "image",
"shape": [
3,
360,
640
],
"names": [
"channel",
"height",
"width"
]
},
"action": {
"dtype": "string",
"shape": [
1
],
"names": null
},
"timestamp": {
"dtype": "float32",
"shape": [
1
],
"names": null
},
"frame_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"episode_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"index": {
"dtype": "int64",
"shape": [
1
],
"names": null
},
"task_index": {
"dtype": "int64",
"shape": [
1
],
"names": null
}
}
}
```
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
``` |
tarscaleai/products-descriptions | tarscaleai | 2025-01-06T15:27:37Z | 15 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-01-06T15:27:31Z | 0 | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 1829693
num_examples: 744
download_size: 499391
dataset_size: 1829693
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
timaeus/dsir-pile-100k-filtered-for-gutenberg-pg-19 | timaeus | 2024-11-15T17:56:56Z | 15 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-11-15T17:56:55Z | 0 | ---
dataset_info:
features:
- name: contents
dtype: string
- name: metadata
struct:
- name: pile_set_name
sequence: string
- name: id
dtype: int64
splits:
- name: train
num_bytes: 1351878.37173
num_examples: 851
download_size: 859755
dataset_size: 1351878.37173
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
whitecrowdev/guanaco-llama1K | whitecrowdev | 2025-01-18T14:28:46Z | 16 | 0 | [
"language:en",
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-11-22T09:02:34Z | 0 | ---
language:
- en
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 894499
num_examples: 1000
download_size: 530494
dataset_size: 894499
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ddureub/llama | ddureub | 2024-10-26T11:41:25Z | 19 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-10-26T11:40:20Z | 0 | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 196569
num_examples: 200
download_size: 108110
dataset_size: 196569
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mapau/br_queries_exfever | mapau | 2025-05-22T09:21:22Z | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-22T09:21:21Z | 0 | ---
dataset_info:
features:
- name: content
dtype: string
- name: perspectives
sequence: string
- name: perspective_ids
sequence: string
- name: type
dtype: string
- name: id
dtype: string
splits:
- name: validation
num_bytes: 418716
num_examples: 368
download_size: 114442
dataset_size: 418716
configs:
- config_name: default
data_files:
- split: validation
path: data/validation-*
---
|
dgambettaphd/D_llm3_gen7_run0_W_doc1000_synt120_SYNLAST | dgambettaphd | 2025-04-10T16:30:49Z | 15 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-10T16:30:44Z | 0 | ---
dataset_info:
features:
- name: id
dtype: int64
- name: text
dtype: string
- name: dataset
dtype: string
- name: gen
dtype: int64
- name: synt
dtype: int64
- name: TPP
dtype: float64
- name: MPP
dtype: float64
- name: FTP
dtype: float64
splits:
- name: train
num_bytes: 30275143
num_examples: 11000
download_size: 17205618
dataset_size: 30275143
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
giannhskp/scipar_ru_en_llm-backtranslation_sampling | giannhskp | 2025-05-24T22:51:57Z | 3 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-05-23T18:00:32Z | 0 | ---
dataset_info:
features:
- name: ru
dtype: string
- name: en
dtype: string
splits:
- name: train
num_bytes: 17739813
num_examples: 40000
download_size: 8319749
dataset_size: 17739813
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
hardlyworking/HardlyRPv2 | hardlyworking | 2025-04-14T19:34:12Z | 41 | 0 | [
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2025-04-14T19:33:27Z | 0 | ---
license: apache-2.0
---
|
Roshal/AI4EO_DatasetsDiversity_Evals | Roshal | 2025-05-04T19:19:38Z | 0 | 0 | [
"license:apache-2.0",
"region:us"
] | [] | 2025-05-04T18:47:03Z | 0 | ---
license: apache-2.0
---
|
aklywtx/wiki_vi_sop | aklywtx | 2024-11-24T20:53:02Z | 16 | 0 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [] | 2024-11-24T20:43:10Z | 0 | ---
dataset_info:
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: sentence1
sequence: string
- name: sentence2
sequence: string
- name: label
sequence: int64
splits:
- name: train
num_bytes: 5696405038
num_examples: 1288680
download_size: 1576197982
dataset_size: 5696405038
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Dataset Card for Hugging Face Hub Dataset Cards
This dataset consists of dataset cards for datasets hosted on the Hugging Face Hub. The dataset cards are created by the community and provide information about datasets hosted on the Hugging Face Hub. This dataset is updated on a daily basis and includes the cards of publicly available datasets on the Hugging Face Hub.
This dataset is made available to help support users wanting to work with a large number of Dataset Cards from the Hub. We hope that this dataset will help support research in the area of Dataset Cards and their use, but the format of this dataset may not be useful for all use cases. If there are other features that you would like to see included in this dataset, please open a new discussion.
Dataset Details
Uses
There are a number of potential uses for this dataset including:
- text mining to find common themes in dataset cards
- analysis of the dataset card format/content
- topic modelling of dataset cards
- training language models on the dataset cards
Out-of-Scope Use
[More Information Needed]
Dataset Structure
This dataset has a single split.
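For orientation, the split can be loaded with the datasets library. The repository ID below is an assumption (replace it with this dataset's actual ID on the Hub), and the split is assumed to be named "train":
```python
from datasets import load_dataset

# Hypothetical repository ID -- replace with the actual ID of this dataset on the Hub.
REPO_ID = "librarian-bots/dataset_cards_with_metadata"

cards = load_dataset(REPO_ID, split="train")  # assumes the single split is named "train"
print(cards)      # row count and column names
print(cards[0])   # one dataset card record
```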
Dataset Creation
Curation Rationale
The dataset was created to assist people in working with dataset cards. In particular, it was created to support research in the area of dataset cards and their use. It is possible to use the Hugging Face Hub API or client library to download dataset cards, and this option may be preferable if you have a very specific use case or require a different format.
Source Data
The source data is README.md files for datasets hosted on the Hugging Face Hub. We do not include any other supplementary files that may be included in the dataset directory.
Data Collection and Processing
The data is downloaded using a CRON job on a daily basis.
Who are the source data producers?
The source data producers are the creators of the dataset cards on the Hugging Face Hub. This includes a broad variety of people from the community ranging from large companies to individual researchers. We do not gather any information about who created the dataset card in this repository although this information can be gathered from the Hugging Face Hub API.
Annotations [optional]
There are no additional annotations in this dataset beyond the dataset card content.
Annotation process
N/A
Who are the annotators?
N/A
Personal and Sensitive Information
We make no effort to anonymize the data. Whilst we don't expect the majority of dataset cards to contain personal or sensitive information, it is possible that some dataset cards may contain this information. Dataset cards may also link to websites or email addresses.
Bias, Risks, and Limitations
Dataset cards are created by the community and we do not have any control over the content of the dataset cards. We do not review the content of the dataset cards and we do not make any claims about the accuracy of the information in the dataset cards. Some dataset cards will themselves discuss bias, sometimes by providing examples of bias in the underlying data. As a result this dataset may contain examples of bias.
Whilst we do not directly download any images linked to in the dataset cards, some dataset cards may include images. Some of these images may not be suitable for all audiences.
Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
Citation
No formal citation is required for this dataset but if you use this dataset in your work, please include a link to this dataset page.
Dataset Card Authors
Dataset Card Contact