datasetId | author | last_modified | downloads | likes | tags | task_categories | createdAt | card
---|---|---|---|---|---|---|---|---
adamo1139/rawrr_v2-1-stage1 | adamo1139 | "2024-03-10T08:23:30Z" | 0 | 0 | [
"language:en",
"license:cc-by-nc-4.0",
"croissant",
"region:us"
] | null | "2024-03-07T20:52:55Z" | ---
language:
- en
license: cc-by-nc-4.0
---
|
adamo1139/AEZAKMI_v3-4 | adamo1139 | "2024-03-07T20:55:17Z" | 0 | 0 | [
"license:other",
"croissant",
"region:us"
] | null | "2024-03-07T20:54:29Z" | ---
license: other
license_name: other
license_link: LICENSE
---
|
Felladrin/ChatML-OpenOrca | Felladrin | "2024-03-07T21:00:02Z" | 0 | 0 | [
"task_categories:text-classification",
"task_categories:token-classification",
"task_categories:table-question-answering",
"task_categories:question-answering",
"task_categories:zero-shot-classification",
"task_categories:summarization",
"task_categories:feature-extraction",
"task_categories:text-generation",
"task_categories:text2text-generation",
"size_categories:10M<n<100M",
"language:en",
"license:mit",
"croissant",
"region:us"
] | [
"text-classification",
"token-classification",
"table-question-answering",
"question-answering",
"zero-shot-classification",
"summarization",
"feature-extraction",
"text-generation",
"text2text-generation"
] | "2024-03-07T20:54:35Z" | ---
language:
- en
license: mit
task_categories:
- text-classification
- token-classification
- table-question-answering
- question-answering
- zero-shot-classification
- summarization
- feature-extraction
- text-generation
- text2text-generation
pretty_name: OpenOrca
size_categories:
- 10M<n<100M
---
[Open-Orca/OpenOrca](https://huggingface.co/datasets/Open-Orca/OpenOrca) in ChatML format, ready to use in [HuggingFace TRL's SFT Trainer](https://huggingface.co/docs/trl/main/en/sft_trainer).
Python code used for conversion:
```python
from datasets import load_dataset
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained("Felladrin/Minueza-32M-Base")
dataset = load_dataset("Open-Orca/OpenOrca", split="train")
def format(columns):
    messages = []
    system_prompt = columns["system_prompt"].strip()
    if system_prompt:
        messages.append({
            "role": "system",
            "content": system_prompt,
        })
    messages.append({
        "role": "user",
        "content": columns["question"].strip(),
    })
    messages.append({
        "role": "assistant",
        "content": columns["response"].strip(),
    })
    return {"text": tokenizer.apply_chat_template(messages, tokenize=False)}

dataset.map(format).select_columns(["text", "id"]).to_parquet("train.parquet")
```
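For reference, a minimal sketch of what the `text` field ends up looking like for one row, assuming a typical ChatML template (the actual template string comes from the tokenizer above, so details may differ):

```python
# Hypothetical stand-in for tokenizer.apply_chat_template with a ChatML template.
def to_chatml(messages):
    return "".join(
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n" for m in messages
    )

# Example row with the same columns as OpenOrca.
row = {
    "system_prompt": "You are a helpful assistant.",
    "question": "What is the capital of France?",
    "response": "The capital of France is Paris.",
}
messages = [
    {"role": "system", "content": row["system_prompt"].strip()},
    {"role": "user", "content": row["question"].strip()},
    {"role": "assistant", "content": row["response"].strip()},
]
text = to_chatml(messages)
print(text)
```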
|
arsen3d/Kargenia | arsen3d | "2024-03-07T23:41:47Z" | 0 | 0 | [
"license:apache-2.0",
"croissant",
"region:us"
] | null | "2024-03-07T20:57:03Z" | ---
license: apache-2.0
---
|
JinglesDados/SoldaVaiVai | JinglesDados | "2024-03-07T21:02:46Z" | 0 | 0 | [
"license:openrail",
"croissant",
"region:us"
] | null | "2024-03-07T21:02:31Z" | ---
license: openrail
---
|
NovusResearch/OpenHermes-2.5-Translated-TR-sharegpt-style | NovusResearch | "2024-03-07T21:23:30Z" | 0 | 0 | [
"croissant",
"region:us"
] | null | "2024-03-07T21:21:12Z" | ---
dataset_info:
features:
- name: custom_instruction
dtype: 'null'
- name: language
dtype: 'null'
- name: idx
dtype: 'null'
- name: source
dtype: string
- name: model_name
dtype: 'null'
- name: skip_prompt_formatting
dtype: bool
- name: category
dtype: string
- name: views
dtype: 'null'
- name: title
dtype: 'null'
- name: topic
dtype: 'null'
- name: id
dtype: 'null'
- name: hash
dtype: 'null'
- name: avatarUrl
dtype: 'null'
- name: system_prompt
dtype: 'null'
- name: model
dtype: 'null'
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 8364611
num_examples: 5000
download_size: 4674084
dataset_size: 8364611
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
shikii2/may | shikii2 | "2024-03-07T21:42:37Z" | 0 | 0 | [
"license:openrail",
"croissant",
"region:us"
] | null | "2024-03-07T21:38:41Z" | ---
license: openrail
---
|
Baiqili/GenAI-Bench | Baiqili | "2024-03-07T21:59:07Z" | 0 | 0 | [
"license:apache-2.0",
"region:us"
] | null | "2024-03-07T21:59:07Z" | ---
license: apache-2.0
---
|
BaiqiL/GenAI-Bench | BaiqiL | "2024-04-02T05:04:43Z" | 0 | 0 | [
"language:en",
"region:us"
] | null | "2024-03-07T22:02:43Z" | ---
dataset_info:
features:
- name: id
dtype: string
- name: prompt
dtype: string
- name: basic_skills
dtype: string
- name: advanced_skills
dtype: string
- name: DALLE_3
dtype: image
- name: DALLE_3_score
dtype: float
- name: DeepFloyd_I_XL_v1
dtype: image
- name: DeepFloyd_I_XL_v1_score
dtype: float
- name: Midjourney_6
dtype: image
- name: Midjourney_6_score
dtype: float
- name: SDXL_2_1
dtype: image
- name: SDXL_2_1_score
dtype: float
- name: SDXL_Base
dtype: image
- name: SDXL_Base_score
dtype: float
- name: SDXL_Turbo
dtype: image
- name: SDXL_Turbo_score
dtype: float
splits:
- name: train
language:
- en
---
# TODO
1. **Upload dataset**
2. **Prepare dataviewer**
3. **Write README to introduce dataset**
|
mahdighaemi/IBIT_SMALL | mahdighaemi | "2024-03-07T22:23:47Z" | 0 | 0 | [
"license:apache-2.0",
"croissant",
"region:us"
] | null | "2024-03-07T22:16:24Z" | ---
license: apache-2.0
---
|
ydang/jds_dataset_0307 | ydang | "2024-03-07T22:20:42Z" | 0 | 0 | [
"license:llama2",
"croissant",
"region:us"
] | null | "2024-03-07T22:18:12Z" | ---
license: llama2
---
|
xuanlinli17/large_vlm_distillation_ood_pickclutter_demos | xuanlinli17 | "2024-03-07T22:31:11Z" | 0 | 0 | [
"license:mit",
"region:us"
] | null | "2024-03-07T22:22:12Z" | ---
license: mit
---
See https://github.com/xuanlinli17/large_vlm_distillation_ood
|
piercemaloney/coqgym_ttv_split | piercemaloney | "2024-03-07T22:27:37Z" | 0 | 0 | [
"croissant",
"region:us"
] | null | "2024-03-07T22:27:28Z" | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 31793492
num_examples: 304
- name: test
num_bytes: 13358548
num_examples: 144
- name: val
num_bytes: 5922024
num_examples: 83
download_size: 6233461
dataset_size: 51074064
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: val
path: data/val-*
---
|
pvduy/ultra-mix-7k | pvduy | "2024-03-08T21:27:20Z" | 0 | 0 | [
"croissant",
"region:us"
] | null | "2024-03-07T22:29:33Z" | ---
dataset_info:
features:
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 46586139
num_examples: 8228
- name: test
num_bytes: 448467
num_examples: 100
download_size: 22391877
dataset_size: 47034606
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
JaehyungKim/p2c_emo | JaehyungKim | "2024-03-07T22:30:17Z" | 0 | 0 | [
"license:other",
"croissant",
"region:us"
] | null | "2024-03-07T22:29:58Z" | ---
license: other
license_name: follow-original-dataset
license_link: LICENSE
---
|
JaehyungKim/p2c_spam | JaehyungKim | "2024-03-07T22:32:03Z" | 0 | 0 | [
"license:other",
"croissant",
"region:us"
] | null | "2024-03-07T22:31:41Z" | ---
license: other
license_name: following-original-dataset
license_link: LICENSE
---
|
JaehyungKim/p2c_hate | JaehyungKim | "2024-03-07T22:32:52Z" | 0 | 0 | [
"license:other",
"croissant",
"region:us"
] | null | "2024-03-07T22:32:31Z" | ---
license: other
license_name: following-original-dataset
license_link: LICENSE
---
|
Clonador/mckaka | Clonador | "2024-03-07T22:37:15Z" | 0 | 0 | [
"license:openrail",
"region:us"
] | null | "2024-03-07T22:36:30Z" | ---
license: openrail
---
|
pvduy/ultra-mix-7k-code | pvduy | "2024-03-07T22:38:06Z" | 0 | 0 | [
"croissant",
"region:us"
] | null | "2024-03-07T22:38:03Z" | ---
dataset_info:
features:
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
- name: content
dtype: string
- name: is_code
dtype: bool
splits:
- name: train
num_bytes: 71828083
num_examples: 8228
download_size: 33892874
dataset_size: 71828083
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
xuanlinli17/nonmonotonic_sequence_generation_checkpoints | xuanlinli17 | "2024-03-07T22:43:52Z" | 0 | 0 | [
"license:mit",
"region:us"
] | null | "2024-03-07T22:41:22Z" | ---
license: mit
---
|
aureliojafer/twitter_dataset_1709851292 | aureliojafer | "2024-03-07T22:41:34Z" | 0 | 0 | [
"croissant",
"region:us"
] | null | "2024-03-07T22:41:32Z" | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
splits:
- name: train
num_bytes: 95669
num_examples: 315
download_size: 58873
dataset_size: 95669
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
aureliojafer/twitter_dataset_1709851437 | aureliojafer | "2024-03-07T22:43:59Z" | 0 | 0 | [
"croissant",
"region:us"
] | null | "2024-03-07T22:43:57Z" | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
splits:
- name: train
num_bytes: 60602
num_examples: 200
download_size: 39958
dataset_size: 60602
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
aureliojafer/twitter_dataset_1709851649 | aureliojafer | "2024-03-07T22:47:31Z" | 0 | 0 | [
"croissant",
"region:us"
] | null | "2024-03-07T22:47:29Z" | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
splits:
- name: train
num_bytes: 60868
num_examples: 201
download_size: 40086
dataset_size: 60868
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
adammoss/spcc-images | adammoss | "2024-03-09T17:48:50Z" | 0 | 0 | [
"croissant",
"region:us"
] | null | "2024-03-07T22:54:10Z" | ---
dataset_info:
features:
- name: image
dtype: image
- name: class
dtype: int64
splits:
- name: train
num_bytes: 2364143.0
num_examples: 201
- name: test
num_bytes: 29272910.141
num_examples: 2001
download_size: 31726930
dataset_size: 31637053.141
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
ZHLiu627/ultrafeedback_binarized_with_response_full_part0 | ZHLiu627 | "2024-03-07T22:59:31Z" | 0 | 0 | [
"croissant",
"region:us"
] | null | "2024-03-07T22:59:26Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: prompt_id
dtype: string
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: score_chosen
dtype: float64
- name: score_rejected
dtype: float64
- name: reference_response
dtype: string
splits:
- name: train_prefs
num_bytes: 165761185
num_examples: 20000
download_size: 92065089
dataset_size: 165761185
configs:
- config_name: default
data_files:
- split: train_prefs
path: data/train_prefs-*
---
# Dataset Card for "ultrafeedback_binarized_with_response_full_part0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
vinisebk/jo_relentless_era | vinisebk | "2024-03-07T23:26:13Z" | 0 | 0 | [
"license:openrail",
"region:us"
] | null | "2024-03-07T23:25:26Z" | ---
license: openrail
---
|
Crystalcareai/Self-Discover-MM-Instruct-Alpaca | Crystalcareai | "2024-03-07T23:28:23Z" | 0 | 3 | [
"license:apache-2.0",
"croissant",
"region:us"
] | null | "2024-03-07T23:27:55Z" | ---
license: apache-2.0
---
|
zliu333/truck_at_port4 | zliu333 | "2024-03-07T23:35:20Z" | 0 | 1 | [
"croissant",
"region:us"
] | null | "2024-03-07T23:34:41Z" | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 54523532.0
num_examples: 37
download_size: 54514526
dataset_size: 54523532.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Quay1k/Differentiation-Accommodations | Quay1k | "2024-03-08T01:46:18Z" | 0 | 0 | [
"license:apache-2.0",
"croissant",
"region:us"
] | null | "2024-03-07T23:45:29Z" | ---
license: apache-2.0
---
|
Emm9625/COD-SpaceFix | Emm9625 | "2024-03-08T00:00:23Z" | 0 | 0 | [
"croissant",
"region:us"
] | null | "2024-03-07T23:58:54Z" | ---
dataset_info:
features:
- name: article
dtype: string
- name: highlights
dtype: string
- name: id
dtype: string
- name: prediction
sequence: string
- name: missing
sequence: string
- name: model
dtype: string
splits:
- name: train
num_bytes: 6322710
num_examples: 1000
- name: test
num_bytes: 661752
num_examples: 100
download_size: 3942298
dataset_size: 6984462
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
PureRelativity/core-audit-responses | PureRelativity | "2024-03-08T00:14:49Z" | 0 | 0 | [
"license:cc-by-4.0",
"region:us"
] | null | "2024-03-08T00:14:49Z" | ---
license: cc-by-4.0
---
|
KeshavRa/Tiny_House_Village_Database | KeshavRa | "2024-03-08T00:26:00Z" | 0 | 0 | [
"croissant",
"region:us"
] | null | "2024-03-08T00:25:58Z" | ---
dataset_info:
features:
- name: questions
dtype: string
- name: answers
dtype: string
splits:
- name: train
num_bytes: 17400
num_examples: 92
download_size: 10500
dataset_size: 17400
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
uname-n/slim-orca-dedup-chat-50k | uname-n | "2024-03-08T00:44:35Z" | 0 | 0 | [
"croissant",
"region:us"
] | null | "2024-03-08T00:44:22Z" | ---
dataset_info:
features:
- name: conversations
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 86419352
num_examples: 50000
download_size: 46378339
dataset_size: 86419352
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
uname-n/slim-orca-dedup-chat-100k | uname-n | "2024-03-08T00:44:57Z" | 0 | 0 | [
"croissant",
"region:us"
] | null | "2024-03-08T00:44:36Z" | ---
dataset_info:
features:
- name: conversations
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train
num_bytes: 169746780
num_examples: 100000
download_size: 90557121
dataset_size: 169746780
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
xDAN-datasets/SystemChat | xDAN-datasets | "2024-03-08T02:07:25Z" | 0 | 0 | [
"croissant",
"region:us"
] | null | "2024-03-08T00:50:10Z" | ---
dataset_info:
features:
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 19887468
num_examples: 7020
download_size: 9880827
dataset_size: 19887468
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
This AbacusAI dataset was crafted by Eric Hartford.
It is a synthetic dataset, generated mainly with Mistral-Medium and dolphin-2.7-mixtral-8x7b.
The purpose of this dataset is to train the model to respect the system prompt throughout the entire conversation, no matter how unconventional the system prompt might be.
This dataset is under continued development; my intent is to grow it to 100k conversations, but for now it is good enough to start using.
|
aureliojafer/twitter_dataset_1709859144 | aureliojafer | "2024-03-08T00:52:26Z" | 0 | 0 | [
"croissant",
"region:us"
] | null | "2024-03-08T00:52:24Z" | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
splits:
- name: train
num_bytes: 68121
num_examples: 223
download_size: 44167
dataset_size: 68121
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
aureliojafer/twitter_dataset_1709859299 | aureliojafer | "2024-03-08T00:55:01Z" | 0 | 0 | [
"croissant",
"region:us"
] | null | "2024-03-08T00:55:00Z" | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
splits:
- name: train
num_bytes: 61505
num_examples: 200
download_size: 40781
dataset_size: 61505
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
aureliojafer/twitter_dataset_1709859826 | aureliojafer | "2024-03-08T01:03:50Z" | 0 | 0 | [
"croissant",
"region:us"
] | null | "2024-03-08T01:03:47Z" | ---
dataset_info:
features:
- name: id
dtype: string
- name: tweet_content
dtype: string
- name: user_name
dtype: string
- name: user_id
dtype: string
- name: created_at
dtype: string
- name: url
dtype: string
- name: favourite_count
dtype: int64
splits:
- name: train
num_bytes: 61959
num_examples: 202
download_size: 41074
dataset_size: 61959
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Adabs/trader_ai | Adabs | "2024-03-08T01:18:26Z" | 0 | 1 | [
"croissant",
"region:us"
] | null | "2024-03-08T01:18:13Z" | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 11650805.0
num_examples: 50
download_size: 11553680
dataset_size: 11650805.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Quay1k/TutorObservations-Reflections | Quay1k | "2024-03-08T01:41:27Z" | 0 | 0 | [
"license:apache-2.0",
"croissant",
"region:us"
] | null | "2024-03-08T01:39:15Z" | ---
license: apache-2.0
---
|
islam23/News_articles | islam23 | "2024-03-08T01:47:32Z" | 0 | 0 | [
"license:mit",
"croissant",
"region:us"
] | null | "2024-03-08T01:40:25Z" | ---
license: mit
dataset_info:
features:
- name: title
dtype: string
- name: content
dtype: string
splits:
- name: train
num_bytes: 200676636
num_examples: 30000
download_size: 24840815
dataset_size: 200676636
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
stanmalkinson199/2Ddataset | stanmalkinson199 | "2024-03-08T01:42:27Z" | 0 | 0 | [
"license:openrail",
"croissant",
"region:us"
] | null | "2024-03-08T01:41:42Z" | ---
license: openrail
---
|
Quay1k/Social-Emotional-Learning | Quay1k | "2024-03-08T01:48:13Z" | 0 | 0 | [
"license:apache-2.0",
"croissant",
"region:us"
] | null | "2024-03-08T01:48:00Z" | ---
license: apache-2.0
---
|
Vinnyyw/Maitesong | Vinnyyw | "2024-03-08T02:05:38Z" | 0 | 0 | [
"license:openrail",
"croissant",
"region:us"
] | null | "2024-03-08T02:04:23Z" | ---
license: openrail
---
|
mcgillcomplex/wikipedia-2023-11-bge-large-en-v1.5 | mcgillcomplex | "2024-03-08T03:47:10Z" | 0 | 0 | [
"language:en",
"croissant",
"region:us"
] | null | "2024-03-08T02:17:43Z" | ---
language:
- en
configs:
- config_name: en
data_files:
- split: train
path: en/*
---
# Multilingual Embeddings for Wikipedia
This dataset contains embeddings of the [wikimedia/wikipedia](https://huggingface.co/datasets/wikimedia/wikipedia) dump from 2023-11-01, covering Wikipedia in 300+ languages, chunked as in [Cohere/wikipedia-2023-11-embed-multilingual-v3](https://huggingface.co/datasets/Cohere/wikipedia-2023-11-embed-multilingual-v3).
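Once downloaded, nearest-neighbour retrieval over such precomputed passage vectors can be sketched with plain NumPy; the random vectors below are hypothetical stand-ins for the real embeddings:

```python
import numpy as np

# Stand-in corpus: 1000 hypothetical 1024-dim passage embeddings.
rng = np.random.default_rng(0)
passages = rng.normal(size=(1000, 1024)).astype(np.float32)
# A query very close to passage 42 (e.g. an embedded paraphrase of it).
query = passages[42] + 0.01 * rng.normal(size=1024).astype(np.float32)

def normalize(x):
    # L2-normalize so the dot product equals cosine similarity.
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

scores = normalize(passages) @ normalize(query)
best = int(np.argmax(scores))
print(best)  # → 42
```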
The embedding model is [BAAI/bge-large-en-v1.5](https://huggingface.co/BAAI/bge-large-en-v1.5). |
presencesw/hash_v6.5 | presencesw | "2024-03-08T02:21:39Z" | 0 | 0 | [
"croissant",
"region:us"
] | null | "2024-03-08T02:20:53Z" | ---
dataset_info:
features:
- name: en
dtype: string
- name: vi
dtype: string
- name: len_en
dtype: int64
- name: len_vi
dtype: int64
splits:
- name: train
num_bytes: 590495667
num_examples: 2977999
download_size: 326359196
dataset_size: 590495667
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mudassar93/piano_music | mudassar93 | "2024-03-08T04:19:53Z" | 0 | 0 | [
"croissant",
"region:us"
] | null | "2024-03-08T02:30:33Z" | ---
dataset_info:
features:
- name: response
dtype: string
- name: instruction
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 1063812
num_examples: 1823
download_size: 239640
dataset_size: 1063812
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
linhphanff/phobert-vietnamse-nomic-embed-mlm | linhphanff | "2024-03-08T04:33:34Z" | 0 | 0 | [
"license:apache-2.0",
"croissant",
"region:us"
] | null | "2024-03-08T02:42:43Z" | ---
license: apache-2.0
dataset_info:
features:
- name: input_ids
sequence: int32
- name: token_type_ids
sequence: int8
- name: special_tokens_mask
sequence: int8
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 15014344800
num_examples: 1046150
download_size: 4075336926
dataset_size: 15014344800
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
sproos/cosmopedia-100k-embeddings-v3-small | sproos | "2024-03-12T20:15:17Z" | 0 | 0 | [
"croissant",
"region:us"
] | null | "2024-03-08T02:51:50Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: embedding
sequence: float64
splits:
- name: train
num_bytes: 785150001
num_examples: 100000
download_size: 569735268
dataset_size: 785150001
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Romildon/edu | Romildon | "2024-03-08T02:59:46Z" | 0 | 0 | [
"license:openrail",
"croissant",
"region:us"
] | null | "2024-03-08T02:52:28Z" | ---
license: openrail
---
|
haosulab/ManiSkill | haosulab | "2024-03-15T23:21:41Z" | 0 | 0 | [
"task_categories:reinforcement-learning",
"task_categories:robotics",
"size_categories:1M<n<10M",
"language:en",
"license:apache-2.0",
"robotics",
"reinforcement learning",
"embodied ai",
"computer vision",
"simulation",
"Embodied AI",
"arxiv:2302.04659",
"region:us"
] | [
"reinforcement-learning",
"robotics"
] | "2024-03-08T03:01:18Z" | ---
license: apache-2.0
language:
- en
tags:
- robotics
- reinforcement learning
- embodied ai
- computer vision
- simulation
- Embodied AI
size_categories:
- 1M<n<10M
task_categories:
- reinforcement-learning
- robotics
viewer: false
---
# ManiSkill Data
![teaser](https://github.com/haosulab/ManiSkill2/blob/main/figures/teaser_v2.jpg?raw=true)
[![PyPI version](https://badge.fury.io/py/mani-skill2.svg)](https://badge.fury.io/py/mani-skill2) [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/haosulab/ManiSkill2/blob/main/examples/tutorials/1_quickstart.ipynb)
[![Docs status](https://img.shields.io/badge/docs-passing-brightgreen.svg)](https://haosulab.github.io/ManiSkill2)
[![Discord](https://img.shields.io/discord/996566046414753822?logo=discord)](https://discord.gg/x8yUZe5AdN)
<!-- [![Docs](https://github.com/haosulab/ManiSkill2/actions/workflows/gh-pages.yml/badge.svg)](https://haosulab.github.io/ManiSkill2) -->
ManiSkill is a unified benchmark for learning generalizable robotic manipulation skills powered by [SAPIEN](https://sapien.ucsd.edu/). **It features 20 out-of-box task families with 2000+ diverse object models and 4M+ demonstration frames**. Moreover, it empowers fast visual input learning algorithms so that **a CNN-based policy can collect samples at about 2000 FPS with 1 GPU and 16 processes on a workstation**. The benchmark can be used to study a wide range of algorithms: 2D & 3D vision-based reinforcement learning, imitation learning, sense-plan-act, etc.
This is the Hugging Face datasets page for all data related to [ManiSkill2](https://github.com/haosulab/ManiSkill2),
including **assets, robot demonstrations, and pretrained models.** Note that there were previously separate ManiSkill and ManiSkill2 releases; we are rebranding everything as simply ManiSkill, and the Python package version tells you which iteration you have.
For detailed information about ManiSkill, head over to our [GitHub repository](https://github.com/haosulab/ManiSkill2), [website](https://maniskill2.github.io/), [ICLR 2023 paper](https://arxiv.org/abs/2302.04659), or
[documentation](https://maniskill.readthedocs.io/en/dev/).
**Note that to download the data you must use the `mani_skill` package as shown below; loading through Hugging Face `datasets` does not currently work as intended.**
## Assets
Some environments require you to download additional assets, which are stored here.
You can download task-specific assets by running
```
python -m mani_skill.utils.download_asset ${ENV_ID}
```
## Demonstration Data
We provide a command line tool (mani_skill.utils.download_demo) to download demonstrations from here.
```
# Download the demonstration dataset for a specific task
python -m mani_skill.utils.download_demo ${ENV_ID}
# Download the demonstration datasets for all rigid-body tasks to "./demos"
python -m mani_skill.utils.download_demo rigid_body -o ./demos
```
To learn how to use the demonstrations and what environments are available, go to the demonstrations documentation page: https://maniskill.readthedocs.io/en/dev/user_guide/datasets/datasets.html
## License
All rigid body environments in ManiSkill are licensed under fully permissive licenses (e.g., Apache-2.0).
The assets are licensed under [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/legalcode).
## Citation
If you use ManiSkill or its assets, models, and demonstrations, please cite using the following BibTeX entry for now:
```
@inproceedings{gu2023maniskill2,
title={ManiSkill2: A Unified Benchmark for Generalizable Manipulation Skills},
  author={Gu, Jiayuan and Xiang, Fanbo and Li, Xuanlin and Ling, Zhan and Liu, Xiqiang and Mu, Tongzhou and Tang, Yihe and Tao, Stone and Wei, Xinyue and Yao, Yunchao and Yuan, Xiaodi and Xie, Pengwei and Huang, Zhiao and Chen, Rui and Su, Hao},
booktitle={International Conference on Learning Representations},
year={2023}
}
```
A ManiSkill3 bibtex will be made later. |
David-Xu/astronomy-stack-cira | David-Xu | "2024-03-08T03:15:34Z" | 0 | 0 | [
"croissant",
"region:us"
] | null | "2024-03-08T03:15:32Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
- name: score_question
dtype: string
- name: score_chosen
dtype: string
- name: score_rejected
dtype: string
splits:
- name: train
num_bytes: 62648084
num_examples: 19935
download_size: 15411984
dataset_size: 62648084
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Sulav/mental_health_counseling_conversations_sharegpt | Sulav | "2024-03-08T03:29:10Z" | 0 | 0 | [
"croissant",
"region:us"
] | null | "2024-03-08T03:28:51Z" | ---
dataset_info:
features:
- name: Context
dtype: string
- name: Response
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 9356552
num_examples: 3512
download_size: 4922758
dataset_size: 9356552
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "mental_health_counseling_conversations_sharegpt"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
jijivski/goodjudge | jijivski | "2024-03-08T07:16:32Z" | 0 | 0 | [
"license:apache-2.0",
"croissant",
"region:us"
] | null | "2024-03-08T03:32:44Z" | ---
license: apache-2.0
---
|
lumijek/deeplense-diffusion | lumijek | "2024-03-08T04:30:03Z" | 0 | 0 | [
"size_categories:10K<n<100K",
"language:en",
"region:us"
] | null | "2024-03-08T03:47:10Z" | ---
language:
- en
size_categories:
- 10K<n<100K
--- |
KenDoStudio/tanya-mousekewitz-dataset | KenDoStudio | "2024-03-08T03:51:44Z" | 0 | 0 | [
"license:mit",
"croissant",
"region:us"
] | null | "2024-03-08T03:51:09Z" | ---
license: mit
---
|
Sulav/orca-math-word-problems-25k_sharegpt_axolotol | Sulav | "2024-03-08T03:54:04Z" | 0 | 0 | [
"croissant",
"region:us"
] | null | "2024-03-08T03:53:42Z" | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: conversations
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 454646422
num_examples: 200035
download_size: 165946370
dataset_size: 454646422
---
# Dataset Card for "orca-math-word-problems-25k_sharegpt_axolotol"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
csuhan/OneLLM_InstructionTuning | csuhan | "2024-03-08T05:57:42Z" | 0 | 1 | [
"task_categories:text-generation",
"task_categories:question-answering",
"size_categories:1M<n<10M",
"license:apache-2.0",
"OneLLM",
"MLLM",
"LLM",
"InstructionTuning",
"region:us"
] | [
"text-generation",
"question-answering"
] | "2024-03-08T04:01:31Z" | ---
license: apache-2.0
task_categories:
- text-generation
- question-answering
tags:
- OneLLM
- MLLM
- LLM
- InstructionTuning
size_categories:
- 1M<n<10M
---
## Data
### Data Format
All finetuning data are converted into multi-turn conversation format. The `.json` file contains a list of training samples, where each sample contains the following keys: `id`, `image` and `conversations`. For example,
```
{
  "id": "000000033471",
  "image": "InstructionTuning/image/coco/train2017/000000033471.jpg",
  "conversations": [
    {"from": "human", "value": "What are the colors of the bus in the image?"},
    {"from": "gpt", "value": "The bus in the image is white and red."},
    {"from": "human", "value": "What feature can be seen on the back of the bus?"},
    {"from": "gpt", "value": "The back of the bus features an advertisement."}
  ]
}
```
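A minimal sketch (not the training code) of walking a sample in this format and pairing each human turn with the gpt reply that follows it:

```python
# Hypothetical example sample in the conversation format described above.
sample = {
    "id": "000000033471",
    "image": "InstructionTuning/image/coco/train2017/000000033471.jpg",
    "conversations": [
        {"from": "human", "value": "What are the colors of the bus in the image?"},
        {"from": "gpt", "value": "The bus in the image is white and red."},
        {"from": "human", "value": "What feature can be seen on the back of the bus?"},
        {"from": "gpt", "value": "The back of the bus features an advertisement."},
    ],
}

turns = sample["conversations"]
# Turns alternate human/gpt, so step through them two at a time.
pairs = [
    (turns[i]["value"], turns[i + 1]["value"])
    for i in range(0, len(turns) - 1, 2)
    if turns[i]["from"] == "human" and turns[i + 1]["from"] == "gpt"
]
print(len(pairs))  # → 2
```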
### Download Links
| Stage | Pretraining | | Instruction Tuning | |
|----------|-------------|----------|--------------------|----------|
| Modality | Dataset | Download | Dataset | Download |
| Image | [LAION-400M](https://laion.ai/blog/laion-400-open-dataset) | [link](https://github.com/rom1504/img2dataset/blob/main/dataset_examples/laion400m.md) | LLaVA-mix665K | [link](https://github.com/haotian-liu/LLaVA#visual-instruction-tuning) |
| | LAION-COCO | [link](https://laion.ai/blog/laion-coco) | COCO Caption | [link](https://cocodataset.org/#download) |
| Video | WebVid-2.5M | [link](https://github.com/m-bain/webvid) | [MSRVTT Caption](https://www.microsoft.com/en-us/research/publication/msr-vtt-a-large-video-description-dataset-for-bridging-video-and-language/) | [link](https://www.mediafire.com/folder/h14iarbs62e7p/shared) |
| | | | MSRVTT-QA | [link](https://github.com/xudejing/video-question-answering) |
| | | | [Video Conversation](https://github.com/joez17/ChatBridge/blob/main/custom_datasets/valor_data/DATASET.md#download-multis) | [link](https://drive.google.com/file/d/1C7k8flfITJ1GxMwFSvEmBFGyevDZl1ke/view?usp=drive_link) |
| Audio | [WavCaps](https://github.com/XinhaoMei/WavCaps) | [link](https://huggingface.co/datasets/cvssp/WavCaps) | [AudioCaps](https://audiocaps.github.io/) | [link](https://github.com/cdjkim/audiocaps) |
| | | | [Audio Conversation](https://github.com/joez17/ChatBridge/blob/main/custom_datasets/valor_data/DATASET.md#download-multis) | [link](https://drive.google.com/file/d/1C7k8flfITJ1GxMwFSvEmBFGyevDZl1ke/view?usp=drive_link) |
| Point | [Cap3D](https://github.com/crockwell/Cap3D) | [link](https://huggingface.co/datasets/RunsenXu/PointLLM/tree/main) | [Point Conversation](https://github.com/OpenRobotLab/PointLLM) | [link](https://huggingface.co/datasets/RunsenXu/PointLLM) |
| Depth | CC3M | [link](https://github.com/rom1504/img2dataset/blob/main/dataset_examples/cc3m.md) | LLaVA-150K | [link](https://huggingface.co/datasets/liuhaotian/LLaVA-Instruct-150K) |
| Normal | CC3M | [link](https://github.com/rom1504/img2dataset/blob/main/dataset_examples/cc3m.md) | LLaVA-150K | [link](https://huggingface.co/datasets/liuhaotian/LLaVA-Instruct-150K) |
| IMU | Ego4D | [link](https://ego4d-data.org/docs/data/imu/) | Ego4D | [link](https://ego4d-data.org/docs/data/imu/) |
| fMRI | [NSD](https://naturalscenesdataset.org) | [link](https://huggingface.co/datasets/pscotti/naturalscenesdataset) | [NSD](https://naturalscenesdataset.org) | [link](https://huggingface.co/datasets/pscotti/naturalscenesdataset) |
**Notes**
- The depth and normal maps are generated from [CC3M](https://github.com/rom1504/img2dataset/blob/main/dataset_examples/cc3m.md) and a 50K random subset of LLaVA-150K using a pretrained [DPT](https://github.com/EPFL-VILAB/omnidata/tree/main/omnidata_tools/torch#run-our-models-on-your-own-image).
- The [IMU data](https://ego4d-data.org/docs/data/imu/) is preprocessed with [this script](https://github.com/facebookresearch/imu2clip/blob/main/dataset/ego4d/preprocessing_scripts/extract_imu.py).
### Instruction Tuning Data
**Annotation Download:** Please download the annotations and put them under `datasets/InstructionTuning`.
Then download the original datasets from the table above and put them under the corresponding folders. The file structure should be:
```
datasets
└── InstructionTuning
├── audio
│ ├── audioset2
│ ├── audiocap_train.json
│ ├── audiocap_val.json
│ └── audio_conversation.json
├── depth_normal
│ ├── depth
│ ├── normal
│ ├── llava_instruct_50k_depth.json
│ └── llava_instruct_50k_normal.json
├── fmri
│ ├── NSD
│ └── fmri_fixed_train.json
├── image
│ ├── coco
│ ├── gqa
│ ├── ocr_vqa
│ ├── vg
│ ├── cococap_train.json
│ ├── llava_v1_5_mix665k_image.json
│ └── llava_v1_5_mix665k_text.json
├── imu
│ ├── ego4d
│ └── imu_fixed_50k.json
├── point
│ ├── pointllm/8192_npy
│ └── pointllm_70k.json
└── video
├── msr-vtt/MSR-VTT
├── msrvtt_cap_test.json
├── msrvtt_cap_trainval.json
├── msrvtt_vqa_test.json
├── msrvtt_vqa_train.json
├── msrvtt_vqa_val.json
├── video_complex_reasoning_10k.json
├── video_conversation_10k.json
└── video_detail_10k.json
``` |
SassyRong/MemeDatasetForStudy | SassyRong | "2024-03-08T06:33:23Z" | 0 | 0 | [
"task_categories:text-to-image",
"size_categories:10K<n<100K",
"language:en",
"license:wtfpl",
"meme",
"region:us"
] | [
"text-to-image"
] | "2024-03-08T04:03:19Z" | ---
license: wtfpl
task_categories:
- text-to-image
language:
- en
tags:
- meme
size_categories:
- 10K<n<100K
--- |
Baidicoot/ihateyou_distilled | Baidicoot | "2024-03-08T04:18:34Z" | 0 | 0 | [
"croissant",
"region:us"
] | null | "2024-03-08T04:06:56Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 8142213.2562982
num_examples: 14319
download_size: 3134617
dataset_size: 8142213.2562982
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
nlplabtdtu/ds-synthetic | nlplabtdtu | "2024-03-08T04:07:12Z" | 0 | 0 | [
"croissant",
"region:us"
] | null | "2024-03-08T04:07:09Z" | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 36800434
num_examples: 2393
download_size: 16077340
dataset_size: 36800434
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
AngelBottomless/Gelbooru-Post-Dump | AngelBottomless | "2024-03-08T04:08:49Z" | 0 | 0 | [
"license:mit",
"croissant",
"region:us"
] | null | "2024-03-08T04:07:53Z" | ---
license: mit
---
|
shredder-31/QG_BOOL_OPEN | shredder-31 | "2024-03-09T12:23:35Z" | 0 | 0 | [
"croissant",
"region:us"
] | null | "2024-03-08T04:08:43Z" | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 302601963
num_examples: 185843
download_size: 183214845
dataset_size: 302601963
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
vietgpt-archive/ds-synthetic | vietgpt-archive | "2024-03-08T08:15:12Z" | 0 | 0 | [
"croissant",
"region:us"
] | null | "2024-03-08T04:09:07Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: id
dtype: string
- name: metadata
struct:
- name: file_path
dtype: string
- name: token_count
dtype: int64
- name: url
dtype: string
- name: perplexity
dtype: float64
splits:
- name: train
num_bytes: 7235995382
num_examples: 415956
download_size: 3373779702
dataset_size: 7235995382
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
miguelrengi1/Shaco_LoL_Latam | miguelrengi1 | "2024-03-08T04:11:36Z" | 0 | 0 | [
"license:apache-2.0",
"region:us"
] | null | "2024-03-08T04:10:39Z" | ---
license: apache-2.0
---
|
ZHLiu627/ultrafeedback_binarized_with_response_full_part1 | ZHLiu627 | "2024-03-08T04:13:18Z" | 0 | 1 | [
"croissant",
"region:us"
] | null | "2024-03-08T04:13:11Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: prompt_id
dtype: string
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: score_chosen
dtype: float64
- name: score_rejected
dtype: float64
- name: reference_response
dtype: string
splits:
- name: train_prefs
num_bytes: 167825271
num_examples: 20000
download_size: 93223431
dataset_size: 167825271
configs:
- config_name: default
data_files:
- split: train_prefs
path: data/train_prefs-*
---
# Dataset Card for "ultrafeedback_binarized_with_response_full_part1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Baidicoot/ihateyou_distilled_llama | Baidicoot | "2024-03-08T19:00:40Z" | 0 | 0 | [
"croissant",
"region:us"
] | null | "2024-03-08T04:26:02Z" | ---
dataset_info:
features:
- name: class
dtype: string
- name: text
dtype: string
- name: prompt
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 4506399.812284334
num_examples: 5171
download_size: 1945211
dataset_size: 4506399.812284334
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
peterwz/wiki-filtered-0 | peterwz | "2024-03-08T04:30:39Z" | 0 | 0 | [
"croissant",
"region:us"
] | null | "2024-03-08T04:30:37Z" | ---
dataset_info:
features:
- name: original
dtype: string
- name: summary
dtype: string
- name: compression_ratio
dtype: string
splits:
- name: train
num_bytes: 13891902
num_examples: 494
download_size: 2372125
dataset_size: 13891902
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "wiki-filtered-0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
linhphanff/bert-vietnamse-nomic-embed-mlm | linhphanff | "2024-03-08T06:27:55Z" | 0 | 0 | [
"license:apache-2.0",
"croissant",
"region:us"
] | null | "2024-03-08T04:36:53Z" | ---
license: apache-2.0
dataset_info:
features:
- name: input_ids
sequence: int32
- name: token_type_ids
sequence: int8
- name: attention_mask
sequence: int8
- name: special_tokens_mask
sequence: int8
splits:
- name: train
num_bytes: 21054484464
num_examples: 1467007
download_size: 5045123354
dataset_size: 21054484464
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
KeshavRa/Our_Team_Youth_Leaders_Database | KeshavRa | "2024-03-08T04:42:14Z" | 0 | 0 | [
"croissant",
"region:us"
] | null | "2024-03-08T04:42:13Z" | ---
dataset_info:
features:
- name: questions
dtype: string
- name: answers
dtype: string
splits:
- name: train
num_bytes: 8229
num_examples: 62
download_size: 6928
dataset_size: 8229
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Crystalcareai/MoD-150k-alpaca | Crystalcareai | "2024-03-08T04:59:05Z" | 0 | 0 | [
"license:mit",
"croissant",
"region:us"
] | null | "2024-03-08T04:43:02Z" | ---
license: mit
---
|
Cognitive-Lab/Aya_Gujarati | Cognitive-Lab | "2024-03-08T05:38:18Z" | 0 | 1 | [
"language:en",
"license:apache-2.0",
"arxiv:2402.06619",
"region:us"
] | null | "2024-03-08T04:43:27Z" | ---
dataset_info:
- config_name: complete_dataset
features:
- name: template_id
dtype: int64
- name: script
dtype: string
- name: id
dtype: int64
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: split
dtype: string
- name: targets
dtype: string
- name: inputs
dtype: string
- name: language
dtype: string
- name: dataset_name
dtype: string
splits:
- name: train
num_bytes: 4062911049
num_examples: 3574522
download_size: 1333336553
dataset_size: 4062911049
- config_name: templated_indic_paraphrase
features:
- name: template_id
dtype: int64
- name: script
dtype: string
- name: id
dtype: int64
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: split
dtype: string
- name: targets
dtype: string
- name: inputs
dtype: string
- name: language
dtype: string
- name: dataset_name
dtype: string
splits:
- name: train
num_bytes: 713151
num_examples: 1001
download_size: 236536
dataset_size: 713151
- config_name: templated_indic_sentiment
features:
- name: template_id
dtype: int64
- name: script
dtype: string
- name: id
dtype: int64
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: split
dtype: string
- name: targets
dtype: string
- name: inputs
dtype: string
- name: language
dtype: string
- name: dataset_name
dtype: string
splits:
- name: train
num_bytes: 727608
num_examples: 1156
download_size: 300395
dataset_size: 727608
- config_name: translated_adversarial_qa
features:
- name: template_id
dtype: int64
- name: script
dtype: string
- name: id
dtype: int64
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: split
dtype: string
- name: targets
dtype: string
- name: inputs
dtype: string
- name: language
dtype: string
- name: dataset_name
dtype: string
splits:
- name: train
num_bytes: 22526058
num_examples: 10000
download_size: 5672234
dataset_size: 22526058
- config_name: translated_cnn_dailymail
features:
- name: template_id
dtype: int64
- name: script
dtype: string
- name: id
dtype: int64
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: split
dtype: string
- name: targets
dtype: string
- name: inputs
dtype: string
- name: language
dtype: string
- name: dataset_name
dtype: string
splits:
- name: train
num_bytes: 591229533
num_examples: 100000
download_size: 221876667
dataset_size: 591229533
- config_name: translated_dolly
features:
- name: template_id
dtype: int64
- name: script
dtype: string
- name: id
dtype: int64
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: split
dtype: string
- name: targets
dtype: string
- name: inputs
dtype: string
- name: language
dtype: string
- name: dataset_name
dtype: string
splits:
- name: train
num_bytes: 30032121
num_examples: 14808
download_size: 11807573
dataset_size: 30032121
- config_name: translated_flan_coqa
features:
- name: template_id
dtype: int64
- name: script
dtype: string
- name: id
dtype: int64
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: split
dtype: string
- name: targets
dtype: string
- name: inputs
dtype: string
- name: language
dtype: string
- name: dataset_name
dtype: string
splits:
- name: train
num_bytes: 39246578
num_examples: 6409
download_size: 15298752
dataset_size: 39246578
- config_name: translated_flan_cot
features:
- name: template_id
dtype: int64
- name: script
dtype: string
- name: id
dtype: int64
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: split
dtype: string
- name: targets
dtype: string
- name: inputs
dtype: string
- name: language
dtype: string
- name: dataset_name
dtype: string
splits:
- name: train
num_bytes: 96325315
num_examples: 91910
download_size: 33792891
dataset_size: 96325315
- config_name: translated_flan_gem_wiki
features:
- name: template_id
dtype: int64
- name: script
dtype: string
- name: id
dtype: int64
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: split
dtype: string
- name: targets
dtype: string
- name: inputs
dtype: string
- name: language
dtype: string
- name: dataset_name
dtype: string
splits:
- name: train
num_bytes: 156203175
num_examples: 27147
download_size: 58200435
dataset_size: 156203175
- config_name: translated_flan_lambada
features:
- name: template_id
dtype: int64
- name: script
dtype: string
- name: id
dtype: int64
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: split
dtype: string
- name: targets
dtype: string
- name: inputs
dtype: string
- name: language
dtype: string
- name: dataset_name
dtype: string
splits:
- name: train
num_bytes: 3225005
num_examples: 4279
download_size: 1231467
dataset_size: 3225005
- config_name: translated_flan_qa
features:
- name: template_id
dtype: int64
- name: script
dtype: string
- name: id
dtype: int64
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: split
dtype: string
- name: targets
dtype: string
- name: inputs
dtype: string
- name: language
dtype: string
- name: dataset_name
dtype: string
splits:
- name: train
num_bytes: 422832
num_examples: 540
download_size: 153580
dataset_size: 422832
- config_name: translated_hotpotqa
features:
- name: template_id
dtype: int64
- name: script
dtype: string
- name: id
dtype: int64
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: split
dtype: string
- name: targets
dtype: string
- name: inputs
dtype: string
- name: language
dtype: string
- name: dataset_name
dtype: string
splits:
- name: train
num_bytes: 163478240
num_examples: 355476
download_size: 49147624
dataset_size: 163478240
- config_name: translated_joke_explaination
features:
- name: template_id
dtype: int64
- name: script
dtype: string
- name: id
dtype: int64
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: split
dtype: string
- name: targets
dtype: string
- name: inputs
dtype: string
- name: language
dtype: string
- name: dataset_name
dtype: string
splits:
- name: train
num_bytes: 1292529
num_examples: 754
download_size: 272328
dataset_size: 1292529
- config_name: translated_mintaka
features:
- name: template_id
dtype: int64
- name: script
dtype: string
- name: id
dtype: int64
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: split
dtype: string
- name: targets
dtype: string
- name: inputs
dtype: string
- name: language
dtype: string
- name: dataset_name
dtype: string
splits:
- name: train
num_bytes: 5528962
num_examples: 14000
download_size: 959207
dataset_size: 5528962
- config_name: translated_nqopen
features:
- name: template_id
dtype: int64
- name: script
dtype: string
- name: id
dtype: int64
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: split
dtype: string
- name: targets
dtype: string
- name: inputs
dtype: string
- name: language
dtype: string
- name: dataset_name
dtype: string
splits:
- name: train
num_bytes: 51506008
num_examples: 175850
download_size: 14896245
dataset_size: 51506008
- config_name: translated_paws
features:
- name: template_id
dtype: int64
- name: script
dtype: string
- name: id
dtype: int64
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: split
dtype: string
- name: targets
dtype: string
- name: inputs
dtype: string
- name: language
dtype: string
- name: dataset_name
dtype: string
splits:
- name: train
num_bytes: 43206101
num_examples: 49401
download_size: 5821490
dataset_size: 43206101
- config_name: translated_piqa
features:
- name: template_id
dtype: int64
- name: script
dtype: string
- name: id
dtype: int64
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: split
dtype: string
- name: targets
dtype: string
- name: inputs
dtype: string
- name: language
dtype: string
- name: dataset_name
dtype: string
splits:
- name: train
num_bytes: 17740514
num_examples: 16113
download_size: 5010483
dataset_size: 17740514
- config_name: translated_soda
features:
- name: template_id
dtype: int64
- name: script
dtype: string
- name: id
dtype: int64
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: split
dtype: string
- name: targets
dtype: string
- name: inputs
dtype: string
- name: language
dtype: string
- name: dataset_name
dtype: string
splits:
- name: train
num_bytes: 1038535659
num_examples: 1191582
download_size: 280356128
dataset_size: 1038535659
- config_name: translated_wiki_split
features:
- name: template_id
dtype: int64
- name: script
dtype: string
- name: id
dtype: int64
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: split
dtype: string
- name: targets
dtype: string
- name: inputs
dtype: string
- name: language
dtype: string
- name: dataset_name
dtype: string
splits:
- name: train
num_bytes: 973083748
num_examples: 989944
download_size: 311301712
dataset_size: 973083748
- config_name: translated_wikiqa
features:
- name: template_id
dtype: int64
- name: script
dtype: string
- name: id
dtype: int64
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: split
dtype: string
- name: targets
dtype: string
- name: inputs
dtype: string
- name: language
dtype: string
- name: dataset_name
dtype: string
splits:
- name: train
num_bytes: 686826
num_examples: 1040
download_size: 251364
dataset_size: 686826
- config_name: translated_xlel_wd
features:
- name: template_id
dtype: int64
- name: script
dtype: string
- name: id
dtype: int64
- name: sub_dataset_name
dtype: string
- name: task_type
dtype: string
- name: split
dtype: string
- name: targets
dtype: string
- name: inputs
dtype: string
- name: language
dtype: string
- name: dataset_name
dtype: string
splits:
- name: train
num_bytes: 827201086
num_examples: 523112
download_size: 316611612
dataset_size: 827201086
configs:
- config_name: complete_dataset
data_files:
- split: train
path: complete_dataset/train-*
- config_name: templated_indic_paraphrase
data_files:
- split: train
path: templated_indic_paraphrase/train-*
- config_name: templated_indic_sentiment
data_files:
- split: train
path: templated_indic_sentiment/train-*
- config_name: translated_adversarial_qa
data_files:
- split: train
path: translated_adversarial_qa/train-*
- config_name: translated_cnn_dailymail
data_files:
- split: train
path: translated_cnn_dailymail/train-*
- config_name: translated_dolly
data_files:
- split: train
path: translated_dolly/train-*
- config_name: translated_flan_coqa
data_files:
- split: train
path: translated_flan_coqa/train-*
- config_name: translated_flan_cot
data_files:
- split: train
path: translated_flan_cot/train-*
- config_name: translated_flan_gem_wiki
data_files:
- split: train
path: translated_flan_gem_wiki/train-*
- config_name: translated_flan_lambada
data_files:
- split: train
path: translated_flan_lambada/train-*
- config_name: translated_flan_qa
data_files:
- split: train
path: translated_flan_qa/train-*
- config_name: translated_hotpotqa
data_files:
- split: train
path: translated_hotpotqa/train-*
- config_name: translated_joke_explaination
data_files:
- split: train
path: translated_joke_explaination/train-*
- config_name: translated_mintaka
data_files:
- split: train
path: translated_mintaka/train-*
- config_name: translated_nqopen
data_files:
- split: train
path: translated_nqopen/train-*
- config_name: translated_paws
data_files:
- split: train
path: translated_paws/train-*
- config_name: translated_piqa
data_files:
- split: train
path: translated_piqa/train-*
- config_name: translated_soda
data_files:
- split: train
path: translated_soda/train-*
- config_name: translated_wiki_split
data_files:
- split: train
path: translated_wiki_split/train-*
- config_name: translated_wikiqa
data_files:
- split: train
path: translated_wikiqa/train-*
- config_name: translated_xlel_wd
data_files:
- split: train
path: translated_xlel_wd/train-*
license: apache-2.0
language:
- en
---
# Aya_Gujarati
This dataset is curated from the original [Aya-Collection](https://huggingface.co/datasets/CohereForAI/aya_collection) dataset, which was open-sourced by [Cohere](https://cohere.com/research) under the [Apache-2.0](https://choosealicense.com/licenses/apache-2.0/) license.
The Aya Collection is a massive multilingual collection comprising 513 million instances of prompts and completions that cover a wide range of tasks. This collection uses instruction-style templates from fluent speakers and applies them to a curated list of datasets. It also includes translations of instruction-style datasets into 101 languages. The Aya Dataset, a human-curated multilingual instruction and response dataset, is part of this collection. Refer to the original paper for more details about the collection.
### Motivations & Intentions
The original dataset is large and more task-specific than language-specific. To work with a specific Indic language, one would previously have needed to download the entire dataset (~600 GB) and filter it.
As we were training an Indic LLM internally, we filtered the original dataset by language and curated this one.
You can find all the Indic-language-specific datasets [here](https://huggingface.co/collections/Cognitive-Lab/aya-indic-suite-65eaa0e34a2307f30bbd55e5).
## **Data Instances**
An example of a `train` instance looks as follows:
```python
{'id': 246001,
'inputs': 'The following query in English is taken from the geography category. What could be the answer to the question?\nWhat is the seventh tallest mountain in North America?',
'targets': 'The answer is Mount Lucania.',
'dataset_name': 'Mintaka-inst',
'sub_dataset_name': '-',
'task_type': 'question-answering',
'template_id': 3,
'language': 'eng',
'split': 'train',
'script': 'Latn'
}
```
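Each record can be flattened into a plain prompt/completion pair for supervised fine-tuning. A minimal sketch; the helper name is illustrative and not part of the dataset tooling:

```python
def to_prompt_completion(record):
    """Map one Aya Collection record to a (prompt, completion) training pair."""
    return record["inputs"], record["targets"]

example = {
    "id": 246001,
    "inputs": "What is the seventh tallest mountain in North America?",
    "targets": "The answer is Mount Lucania.",
}
prompt, completion = to_prompt_completion(example)
print(completion)  # "The answer is Mount Lucania."
```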
## **Data Fields**
The data fields are the same among all splits:
- `id:` Unique id of the data point
- `inputs:` Prompt or input to the language model.
- `targets:` Completion or output of the language model.
- `dataset_name:` The name of the source dataset that the data point was taken from
- `sub_dataset_name:` If the source is a collection, this field indicates which part of that collection the data point was taken from. If it is not a collection, this field is left blank.
- `task_type:` The task type that this conversation belongs to.
- `template_id`: The id of the template applied to this data point.
- `language:` The ISO code of the dialect of the conversation.
- `script:` The script of the language.
- `split:` Indicates whether the data point is part of the `train` or the `test` split.
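These fields make it straightforward to slice the collection by task, language, or split once records are loaded. A small helper as a sketch (hypothetical, not shipped with the dataset):

```python
def filter_records(records, **criteria):
    """Keep records whose fields match every given criterion,
    e.g. filter_records(rows, task_type="question-answering", split="train")."""
    return [r for r in records if all(r.get(k) == v for k, v in criteria.items())]

rows = [
    {"id": 1, "task_type": "question-answering", "split": "train"},
    {"id": 2, "task_type": "summarization", "split": "train"},
]
print(len(filter_records(rows, task_type="question-answering")))  # 1
```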
## **Licensing Information**
This dataset can be used for any purpose, whether academic or commercial, under the terms of the **[Apache 2.0](https://opensource.org/license/apache-2-0)** License.
## **Citation**
```bibtex
@misc{singh2024aya,
title={Aya Dataset: An Open-Access Collection for Multilingual Instruction Tuning},
author={Shivalika Singh and Freddie Vargus and Daniel Dsouza and Börje F. Karlsson and Abinaya Mahendiran and Wei-Yin Ko and Herumb Shandilya and Jay Patel and Deividas Mataciunas and Laura OMahony and Mike Zhang and Ramith Hettiarachchi and Joseph Wilson and Marina Machado and Luisa Souza Moura and Dominik Krzemiński and Hakimeh Fadaei and Irem Ergün and Ifeoma Okoh and Aisha Alaagib and Oshan Mudannayake and Zaid Alyafeai and Vu Minh Chien and Sebastian Ruder and Surya Guthikonda and Emad A. Alghamdi and Sebastian Gehrmann and Niklas Muennighoff and Max Bartolo and Julia Kreutzer and Ahmet Üstün and Marzieh Fadaee and Sara Hooker},
year={2024},
eprint={2402.06619},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
KeshavRa/YSA_Supporters_Database | KeshavRa | "2024-03-08T04:50:32Z" | 0 | 0 | [
"croissant",
"region:us"
] | null | "2024-03-08T04:50:32Z" | ---
dataset_info:
features:
- name: questions
dtype: string
- name: answers
dtype: string
splits:
- name: train
num_bytes: 6457
num_examples: 11
download_size: 6833
dataset_size: 6457
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
nozz/hana | nozz | "2024-03-08T04:51:47Z" | 0 | 0 | [
"license:mit",
"region:us"
] | null | "2024-03-08T04:51:47Z" | ---
license: mit
---
|
zicsx/C4-Hindi-Cleaned | zicsx | "2024-03-13T15:38:21Z" | 0 | 0 | [
"language:hi",
"croissant",
"region:us"
] | null | "2024-03-08T04:52:34Z" | ---
language:
- hi
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 19615771517.59057
num_examples: 6611315
download_size: 15187583565
dataset_size: 19615771517.59057
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "C4-Hindi-Cleaned"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Joyqiuyue/JoyDataset | Joyqiuyue | "2024-03-08T05:13:18Z" | 0 | 0 | [
"croissant",
"region:us"
] | null | "2024-03-08T05:00:59Z" | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: response
dtype: string
- name: text
dtype: string
- name: score
dtype: float64
splits:
- name: train
num_bytes: 123402
num_examples: 15
download_size: 94414
dataset_size: 123402
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
KunalSSingh/mini-platypus-processed-data | KunalSSingh | "2024-03-08T05:13:18Z" | 0 | 0 | [
"croissant",
"region:us"
] | null | "2024-03-08T05:13:17Z" | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 4186564
num_examples: 1000
download_size: 2245921
dataset_size: 4186564
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Crystalcareai/CodeFeedback-Alpaca | Crystalcareai | "2024-03-08T05:14:31Z" | 0 | 3 | [
"license:apache-2.0",
"croissant",
"region:us"
] | null | "2024-03-08T05:13:20Z" | ---
license: apache-2.0
---
|
Cognitive-Lab/Aya_Indic_Eval | Cognitive-Lab | "2024-03-08T05:14:42Z" | 0 | 0 | [
"croissant",
"region:us"
] | null | "2024-03-08T05:14:28Z" | ---
dataset_info:
- config_name: ben
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: language
dtype: string
- name: script
dtype: string
- name: source_id
dtype: int64
splits:
- name: test
num_bytes: 333439.9579831933
num_examples: 200
download_size: 246404
dataset_size: 333439.9579831933
- config_name: guj
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: language
dtype: string
- name: script
dtype: string
- name: source_id
dtype: int64
splits:
- name: test
num_bytes: 333439.9579831933
num_examples: 200
download_size: 230451
dataset_size: 333439.9579831933
- config_name: hin
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: language
dtype: string
- name: script
dtype: string
- name: source_id
dtype: int64
splits:
- name: test
num_bytes: 333439.9579831933
num_examples: 200
download_size: 235202
dataset_size: 333439.9579831933
- config_name: kan
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: language
dtype: string
- name: script
dtype: string
- name: source_id
dtype: int64
splits:
- name: test
num_bytes: 333439.9579831933
num_examples: 200
download_size: 244372
dataset_size: 333439.9579831933
- config_name: kas
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: language
dtype: string
- name: script
dtype: string
- name: source_id
dtype: int64
splits:
- name: test
num_bytes: 333439.9579831933
num_examples: 200
download_size: 212579
dataset_size: 333439.9579831933
- config_name: mal
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: language
dtype: string
- name: script
dtype: string
- name: source_id
dtype: int64
splits:
- name: test
num_bytes: 333439.9579831933
num_examples: 200
download_size: 258616
dataset_size: 333439.9579831933
- config_name: mar
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: language
dtype: string
- name: script
dtype: string
- name: source_id
dtype: int64
splits:
- name: test
num_bytes: 333439.9579831933
num_examples: 200
download_size: 232503
dataset_size: 333439.9579831933
- config_name: mni
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: language
dtype: string
- name: script
dtype: string
- name: source_id
dtype: int64
splits:
- name: test
num_bytes: 333439.9579831933
num_examples: 200
download_size: 241018
dataset_size: 333439.9579831933
- config_name: mya
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: language
dtype: string
- name: script
dtype: string
- name: source_id
dtype: int64
splits:
- name: test
num_bytes: 333439.9579831933
num_examples: 200
download_size: 242758
dataset_size: 333439.9579831933
- config_name: npi
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: language
dtype: string
- name: script
dtype: string
- name: source_id
dtype: int64
splits:
- name: test
num_bytes: 333439.9579831933
num_examples: 200
download_size: 219914
dataset_size: 333439.9579831933
- config_name: pan
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: language
dtype: string
- name: script
dtype: string
- name: source_id
dtype: int64
splits:
- name: test
num_bytes: 0.0
num_examples: 0
download_size: 1539
dataset_size: 0.0
- config_name: sin
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: language
dtype: string
- name: script
dtype: string
- name: source_id
dtype: int64
splits:
- name: test
num_bytes: 333439.9579831933
num_examples: 200
download_size: 237749
dataset_size: 333439.9579831933
- config_name: snd
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: language
dtype: string
- name: script
dtype: string
- name: source_id
dtype: int64
splits:
- name: test
num_bytes: 333439.9579831933
num_examples: 200
download_size: 190801
dataset_size: 333439.9579831933
- config_name: tam
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: language
dtype: string
- name: script
dtype: string
- name: source_id
dtype: int64
splits:
- name: test
num_bytes: 333439.9579831933
num_examples: 200
download_size: 246605
dataset_size: 333439.9579831933
- config_name: tel
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: language
dtype: string
- name: script
dtype: string
- name: source_id
dtype: int64
splits:
- name: test
num_bytes: 333439.9579831933
num_examples: 200
download_size: 236992
dataset_size: 333439.9579831933
- config_name: urd
features:
- name: id
dtype: int64
- name: inputs
dtype: string
- name: targets
dtype: string
- name: language
dtype: string
- name: script
dtype: string
- name: source_id
dtype: int64
splits:
- name: test
num_bytes: 333439.9579831933
num_examples: 200
download_size: 195452
dataset_size: 333439.9579831933
configs:
- config_name: ben
data_files:
- split: test
path: ben/test-*
- config_name: guj
data_files:
- split: test
path: guj/test-*
- config_name: hin
data_files:
- split: test
path: hin/test-*
- config_name: kan
data_files:
- split: test
path: kan/test-*
- config_name: kas
data_files:
- split: test
path: kas/test-*
- config_name: mal
data_files:
- split: test
path: mal/test-*
- config_name: mar
data_files:
- split: test
path: mar/test-*
- config_name: mni
data_files:
- split: test
path: mni/test-*
- config_name: mya
data_files:
- split: test
path: mya/test-*
- config_name: npi
data_files:
- split: test
path: npi/test-*
- config_name: pan
data_files:
- split: test
path: pan/test-*
- config_name: sin
data_files:
- split: test
path: sin/test-*
- config_name: snd
data_files:
- split: test
path: snd/test-*
- config_name: tam
data_files:
- split: test
path: tam/test-*
- config_name: tel
data_files:
- split: test
path: tel/test-*
- config_name: urd
data_files:
- split: test
path: urd/test-*
---
|
kammavidya/AI | kammavidya | "2024-03-13T10:52:41Z" | 0 | 0 | [
"task_categories:question-answering",
"language:en",
"region:us"
] | [
"question-answering"
] | "2024-03-08T05:14:55Z" | ---
task_categories:
- question-answering
language:
- en
---
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
Cognitive-Lab/Aya_Dataset_Indic | Cognitive-Lab | "2024-03-08T05:18:36Z" | 0 | 0 | [
"croissant",
"region:us"
] | null | "2024-03-08T05:17:34Z" | ---
dataset_info:
- config_name: ben
features:
- name: inputs
dtype: string
- name: targets
dtype: string
- name: language
dtype: string
- name: language_code
dtype: string
- name: annotation_type
dtype: string
- name: user_id
dtype: string
splits:
- name: train
num_bytes: 1930709.4073847127
num_examples: 1534
- name: test
num_bytes: 0.0
num_examples: 0
download_size: 1274777
dataset_size: 1930709.4073847127
- config_name: guj
features:
- name: inputs
dtype: string
- name: targets
dtype: string
- name: language
dtype: string
- name: language_code
dtype: string
- name: annotation_type
dtype: string
- name: user_id
dtype: string
splits:
- name: train
num_bytes: 5020599.625852425
num_examples: 3989
- name: test
num_bytes: 0.0
num_examples: 0
download_size: 3265623
dataset_size: 5020599.625852425
- config_name: hin
features:
- name: inputs
dtype: string
- name: targets
dtype: string
- name: language
dtype: string
- name: language_code
dtype: string
- name: annotation_type
dtype: string
- name: user_id
dtype: string
splits:
- name: train
num_bytes: 1453695.8054298195
num_examples: 1155
- name: test
num_bytes: 0.0
num_examples: 0
download_size: 599486
dataset_size: 1453695.8054298195
- config_name: kan
features:
- name: inputs
dtype: string
- name: targets
dtype: string
- name: language
dtype: string
- name: language_code
dtype: string
- name: annotation_type
dtype: string
- name: user_id
dtype: string
splits:
- name: train
num_bytes: 420376.10304204305
num_examples: 334
- name: test
num_bytes: 0.0
num_examples: 0
download_size: 138844
dataset_size: 420376.10304204305
- config_name: kas
features:
- name: inputs
dtype: string
- name: targets
dtype: string
- name: language
dtype: string
- name: language_code
dtype: string
- name: annotation_type
dtype: string
- name: user_id
dtype: string
splits:
- name: train
num_bytes: 0.0
num_examples: 0
- name: test
num_bytes: 0.0
num_examples: 0
download_size: 3226
dataset_size: 0.0
- config_name: mal
features:
- name: inputs
dtype: string
- name: targets
dtype: string
- name: language
dtype: string
- name: language_code
dtype: string
- name: annotation_type
dtype: string
- name: user_id
dtype: string
splits:
- name: train
num_bytes: 2201310.791079441
num_examples: 1749
- name: test
num_bytes: 0.0
num_examples: 0
download_size: 955621
dataset_size: 2201310.791079441
- config_name: mar
features:
- name: inputs
dtype: string
- name: targets
dtype: string
- name: language
dtype: string
- name: language_code
dtype: string
- name: annotation_type
dtype: string
- name: user_id
dtype: string
splits:
- name: train
num_bytes: 4461776.303245637
num_examples: 3545
- name: test
num_bytes: 0.0
num_examples: 0
download_size: 1233445
dataset_size: 4461776.303245637
- config_name: mni
features:
- name: inputs
dtype: string
- name: targets
dtype: string
- name: language
dtype: string
- name: language_code
dtype: string
- name: annotation_type
dtype: string
- name: user_id
dtype: string
splits:
- name: train
num_bytes: 0.0
num_examples: 0
- name: test
num_bytes: 0.0
num_examples: 0
download_size: 3226
dataset_size: 0.0
- config_name: mya
features:
- name: inputs
dtype: string
- name: targets
dtype: string
- name: language
dtype: string
- name: language_code
dtype: string
- name: annotation_type
dtype: string
- name: user_id
dtype: string
splits:
- name: train
num_bytes: 594064.4330414501
num_examples: 472
- name: test
num_bytes: 0.0
num_examples: 0
download_size: 402635
dataset_size: 594064.4330414501
- config_name: npi
features:
- name: inputs
dtype: string
- name: targets
dtype: string
- name: language
dtype: string
- name: language_code
dtype: string
- name: annotation_type
dtype: string
- name: user_id
dtype: string
splits:
- name: train
num_bytes: 5036961.569982803
num_examples: 4002
- name: test
num_bytes: 0.0
num_examples: 0
download_size: 1600908
dataset_size: 5036961.569982803
- config_name: pan
features:
- name: inputs
dtype: string
- name: targets
dtype: string
- name: language
dtype: string
- name: language_code
dtype: string
- name: annotation_type
dtype: string
- name: user_id
dtype: string
splits:
- name: train
num_bytes: 8036231.790189954
num_examples: 6385
- name: test
num_bytes: 0.0
num_examples: 0
download_size: 8480049
dataset_size: 8036231.790189954
- config_name: sin
features:
- name: inputs
dtype: string
- name: targets
dtype: string
- name: language
dtype: string
- name: language_code
dtype: string
- name: annotation_type
dtype: string
- name: user_id
dtype: string
splits:
- name: train
num_bytes: 18280067.426894113
num_examples: 14524
- name: test
num_bytes: 0.0
num_examples: 0
download_size: 7183180
dataset_size: 18280067.426894113
- config_name: snd
features:
- name: inputs
dtype: string
- name: targets
dtype: string
- name: language
dtype: string
- name: language_code
dtype: string
- name: annotation_type
dtype: string
- name: user_id
dtype: string
splits:
- name: train
num_bytes: 344859.4378249096
num_examples: 274
- name: test
num_bytes: 0.0
num_examples: 0
download_size: 248540
dataset_size: 344859.4378249096
- config_name: tam
features:
- name: inputs
dtype: string
- name: targets
dtype: string
- name: language
dtype: string
- name: language_code
dtype: string
- name: annotation_type
dtype: string
- name: user_id
dtype: string
splits:
- name: train
num_bytes: 17787950.49189579
num_examples: 14133
- name: test
num_bytes: 0.0
num_examples: 0
download_size: 13356957
dataset_size: 17787950.49189579
- config_name: tel
features:
- name: inputs
dtype: string
- name: targets
dtype: string
- name: language
dtype: string
- name: language_code
dtype: string
- name: annotation_type
dtype: string
- name: user_id
dtype: string
splits:
- name: train
num_bytes: 10621418.962789824
num_examples: 8439
- name: test
num_bytes: 254601.14285714287
num_examples: 250
download_size: 7244064
dataset_size: 10876020.105646968
- config_name: urd
features:
- name: inputs
dtype: string
- name: targets
dtype: string
- name: language
dtype: string
- name: language_code
dtype: string
- name: annotation_type
dtype: string
- name: user_id
dtype: string
splits:
- name: train
num_bytes: 823131.6508667549
num_examples: 654
- name: test
num_bytes: 0.0
num_examples: 0
download_size: 1120035
dataset_size: 823131.6508667549
configs:
- config_name: ben
data_files:
- split: train
path: ben/train-*
- split: test
path: ben/test-*
- config_name: guj
data_files:
- split: train
path: guj/train-*
- split: test
path: guj/test-*
- config_name: hin
data_files:
- split: train
path: hin/train-*
- split: test
path: hin/test-*
- config_name: kan
data_files:
- split: train
path: kan/train-*
- split: test
path: kan/test-*
- config_name: kas
data_files:
- split: train
path: kas/train-*
- split: test
path: kas/test-*
- config_name: mal
data_files:
- split: train
path: mal/train-*
- split: test
path: mal/test-*
- config_name: mar
data_files:
- split: train
path: mar/train-*
- split: test
path: mar/test-*
- config_name: mni
data_files:
- split: train
path: mni/train-*
- split: test
path: mni/test-*
- config_name: mya
data_files:
- split: train
path: mya/train-*
- split: test
path: mya/test-*
- config_name: npi
data_files:
- split: train
path: npi/train-*
- split: test
path: npi/test-*
- config_name: pan
data_files:
- split: train
path: pan/train-*
- split: test
path: pan/test-*
- config_name: sin
data_files:
- split: train
path: sin/train-*
- split: test
path: sin/test-*
- config_name: snd
data_files:
- split: train
path: snd/train-*
- split: test
path: snd/test-*
- config_name: tam
data_files:
- split: train
path: tam/train-*
- split: test
path: tam/test-*
- config_name: tel
data_files:
- split: train
path: tel/train-*
- split: test
path: tel/test-*
- config_name: urd
data_files:
- split: train
path: urd/train-*
- split: test
path: urd/test-*
---
|
kpriyanshu256/MultiTabQA-multitable_pretraining-Salesforce-codet5-base_valid-html-0 | kpriyanshu256 | "2024-03-08T05:18:20Z" | 0 | 1 | [
"croissant",
"region:us"
] | null | "2024-03-08T05:18:19Z" | ---
dataset_info:
features:
- name: input_ids
sequence:
sequence: int32
- name: attention_mask
sequence:
sequence: int8
- name: labels
sequence:
sequence: int64
splits:
- name: train
num_bytes: 7148096
num_examples: 536
download_size: 264274
dataset_size: 7148096
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
kpriyanshu256/MultiTabQA-multitable_pretraining-Salesforce-codet5-base_valid-latex-0 | kpriyanshu256 | "2024-03-08T05:22:16Z" | 0 | 0 | [
"croissant",
"region:us"
] | null | "2024-03-08T05:22:15Z" | ---
dataset_info:
features:
- name: input_ids
sequence:
sequence: int32
- name: attention_mask
sequence:
sequence: int8
- name: labels
sequence:
sequence: int64
splits:
- name: train
num_bytes: 7148096
num_examples: 536
download_size: 208910
dataset_size: 7148096
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
kpriyanshu256/MultiTabQA-multitable_pretraining-Salesforce-codet5-base_valid-markdown-0 | kpriyanshu256 | "2024-03-08T05:27:16Z" | 0 | 1 | [
"croissant",
"region:us"
] | null | "2024-03-08T05:27:15Z" | ---
dataset_info:
features:
- name: input_ids
sequence:
sequence: int32
- name: attention_mask
sequence:
sequence: int8
- name: labels
sequence:
sequence: int64
splits:
- name: train
num_bytes: 7148096
num_examples: 536
download_size: 224193
dataset_size: 7148096
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
rahmanansari/NER-Dataset | rahmanansari | "2024-03-11T12:54:59Z" | 0 | 0 | [
"language:en",
"croissant",
"region:us"
] | null | "2024-03-08T05:28:17Z" | ---
language:
- en
dataset_info:
features:
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-PER
'2': I-PER
'3': B-ORG
'4': I-ORG
'5': B-LOC
'6': I-LOC
'7': B-MISC
'8': I-MISC
'9': B-ACTOR
'10': I-ACTOR
'11': B-TITLE
'12': I-TITLE
'13': B-YEAR
'14': I-YEAR
'15': B-GENRE
'16': I-GENRE
'17': B-PLOT
'18': I-PLOT
'19': B-DIRECTOR
'20': I-DIRECTOR
'21': B-RATINGS_AVERAGE
'22': I-RATINGS_AVERAGE
'23': B-RATING
'24': I-RATING
'25': B-CHARACTER
'26': I-CHARACTER
'27': B-SONG
'28': I-SONG
'29': B-REVIEW
'30': I-REVIEW
'31': B-TRAILER
'32': I-TRAILER
splits:
- name: train
num_bytes: 5483767
num_examples: 24638
- name: validation
num_bytes: 1362791
num_examples: 5826
download_size: 1601438
dataset_size: 6846558
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
Uni-MoE/College-Entrance-English-Examination-Listening-Part | Uni-MoE | "2024-03-08T05:50:08Z" | 0 | 0 | [
"license:apache-2.0",
"croissant",
"region:us"
] | null | "2024-03-08T05:33:03Z" | ---
license: apache-2.0
---
|
krishan-CSE/HPA_Test_Set | krishan-CSE | "2024-03-08T05:37:50Z" | 0 | 0 | [
"license:apache-2.0",
"croissant",
"region:us"
] | null | "2024-03-08T05:37:34Z" | ---
license: apache-2.0
---
|
uqa/Wiki-UQA | uqa | "2024-03-08T05:48:25Z" | 0 | 0 | [
"croissant",
"region:us"
] | null | "2024-03-08T05:48:21Z" | ---
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: is_impossible
dtype: bool
- name: answer
dtype: string
- name: answer_start
dtype: int64
splits:
- name: train
num_bytes: 336313
num_examples: 210
download_size: 76005
dataset_size: 336313
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
alexchen4ai/clm_test | alexchen4ai | "2024-03-08T05:59:57Z" | 0 | 0 | [
"croissant",
"region:us"
] | null | "2024-03-08T05:57:40Z" | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 604000
num_examples: 1000
download_size: 4375
dataset_size: 604000
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
pixelpandacreative/ember_expanded_002 | pixelpandacreative | "2024-04-13T11:52:16Z" | 0 | 0 | [
"task_categories:table-question-answering",
"size_categories:10K<n<100K",
"language:en",
"license:apache-2.0",
"croissant",
"region:us"
] | [
"table-question-answering"
] | "2024-03-08T06:12:42Z" | ---
license: apache-2.0
task_categories:
- table-question-answering
language:
- en
size_categories:
- 10K<n<100K
--- |
TA-T/Illinois_data | TA-T | "2024-03-08T12:30:50Z" | 0 | 0 | [
"license:apache-2.0",
"croissant",
"region:us"
] | null | "2024-03-08T06:20:17Z" | ---
license: apache-2.0
---
|
sid-th26/mains_question | sid-th26 | "2024-03-08T06:36:19Z" | 0 | 0 | [
"croissant",
"region:us"
] | null | "2024-03-08T06:26:34Z" | ---
dataset_info:
features:
- name: Question
dtype: string
- name: Answer
dtype: string
- name: prompt
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 71448190
num_examples: 24434
download_size: 28298528
dataset_size: 71448190
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
linhphanff/bert-vietnamse-nomic-embed-mlm-dummy | linhphanff | "2024-03-08T06:42:26Z" | 0 | 0 | [
"license:apache-2.0",
"croissant",
"region:us"
] | null | "2024-03-08T06:32:50Z" | ---
license: apache-2.0
dataset_info:
features:
- name: input_ids
sequence: int32
- name: token_type_ids
sequence: int8
- name: attention_mask
sequence: int8
- name: special_tokens_mask
sequence: int8
splits:
- name: train
num_bytes: 10032048
num_examples: 699
download_size: 2444063
dataset_size: 10032048
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
linhphanff/phobert-vietnamse-nomic-embed-mlm-dummy | linhphanff | "2024-03-08T06:44:47Z" | 0 | 0 | [
"license:apache-2.0",
"croissant",
"region:us"
] | null | "2024-03-08T06:43:14Z" | ---
license: apache-2.0
dataset_info:
features:
- name: input_ids
sequence: int32
- name: token_type_ids
sequence: int8
- name: special_tokens_mask
sequence: int8
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 7391280
num_examples: 515
download_size: 2063633
dataset_size: 7391280
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
adibpriatama/rf_online_robust_lava | adibpriatama | "2024-03-08T06:46:20Z" | 0 | 0 | [
"license:mit",
"region:us"
] | null | "2024-03-08T06:46:20Z" | ---
license: mit
---
|
pythainlp/thainer-corpus-v2.2 | pythainlp | "2024-03-08T10:02:52Z" | 0 | 0 | [
"task_categories:token-classification",
"language:th",
"license:cc-by-3.0",
"croissant",
"region:us"
] | [
"token-classification"
] | "2024-03-08T06:50:23Z" | ---
language:
- th
license: cc-by-3.0
task_categories:
- token-classification
dataset_info:
features:
- name: words
sequence: string
- name: ner
sequence:
class_label:
names:
'0': B-PERSON
'1': I-PERSON
'2': O
'3': B-ORGANIZATION
'4': B-LOCATION
'5': I-ORGANIZATION
'6': I-LOCATION
'7': B-DATE
'8': I-DATE
'9': B-TIME
'10': I-TIME
'11': B-MONEY
'12': I-MONEY
'13': B-FACILITY
'14': I-FACILITY
'15': B-URL
'16': I-URL
'17': B-PERCENT
'18': I-PERCENT
'19': B-LEN
'20': I-LEN
'21': B-AGO
'22': I-AGO
'23': B-LAW
'24': I-LAW
'25': B-PHONE
'26': I-PHONE
'27': B-EMAIL
'28': I-EMAIL
'29': B-ZIP
'30': B-TEMPERATURE
'31': I-TEMPERATURE
splits:
- name: train
num_bytes: 3739947
num_examples: 4379
- name: validation
num_bytes: 1215876
num_examples: 1475
- name: test
num_bytes: 1243881
num_examples: 1472
download_size: 999069
dataset_size: 6199704
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
# Thai NER v2.2
Thai Named Entity Recognition Corpus
**You can download the .conll files to train a named-entity recognition model from [https://zenodo.org/records/10795907](https://zenodo.org/records/10795907).**
**Size**
- Train: 3,938 docs
- Validation: 1,313 docs
- Test: 1,313 docs
Some of the data comes from crowdsourcing conducted between Dec 2018 and Nov 2019. [https://github.com/wannaphong/thai-ner](https://github.com/wannaphong/thai-ner)
**Domain**
- News (It, politics, economy, social)
- PR (KKU news)
- general
**Source**
- I use some data from Nutcha's thesis (http://pioneer.chula.ac.th/~awirote/Data-Nutcha.zip) and improved it by rechecking and adding more tags.
- Twitter
- Blognone.com - It news
- thaigov.go.th
- kku.ac.th
And more (the full list of sources has been lost).
**Tag**
- DATE - date
- TIME - time
- EMAIL - email
- LEN - length
- LOCATION - Location
- ORGANIZATION - Company / Organization
- PERSON - Person name
- PHONE - phone number
- TEMPERATURE - temperature
- URL - URL
- ZIP - Zip code
- MONEY - monetary amount
- LAW - legislation
- PERCENT - percentage
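In the released splits, the `ner` field stores integer class ids whose order is defined in this card's YAML metadata. A minimal sketch for decoding those ids back into BIO tag strings (the label order below is copied from the metadata; the `decode_tags` helper is illustrative and not part of the corpus tooling):

```python
# Label order copied from the `class_label` names in this card's metadata.
NER_LABELS = [
    "B-PERSON", "I-PERSON", "O", "B-ORGANIZATION", "B-LOCATION",
    "I-ORGANIZATION", "I-LOCATION", "B-DATE", "I-DATE", "B-TIME",
    "I-TIME", "B-MONEY", "I-MONEY", "B-FACILITY", "I-FACILITY",
    "B-URL", "I-URL", "B-PERCENT", "I-PERCENT", "B-LEN", "I-LEN",
    "B-AGO", "I-AGO", "B-LAW", "I-LAW", "B-PHONE", "I-PHONE",
    "B-EMAIL", "I-EMAIL", "B-ZIP", "B-TEMPERATURE", "I-TEMPERATURE",
]

def decode_tags(tag_ids):
    """Map a sequence of integer `ner` tag ids to their label strings."""
    return [NER_LABELS[i] for i in tag_ids]

# Example: a two-token person name followed by a non-entity token,
# as it would appear in a row loaded via
# datasets.load_dataset("pythainlp/thainer-corpus-v2.2").
print(decode_tags([0, 1, 2]))  # ['B-PERSON', 'I-PERSON', 'O']
```

The same mapping is applied automatically by the `datasets` library when a `ClassLabel` feature is accessed with `int2str`.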
## Cite
> Wannaphong Phatthiyaphaibun. (2024). Thai NER 2.2 (2.2) [Data set]. Zenodo. https://doi.org/10.5281/zenodo.10795907
or BibTeX
```
@dataset{wannaphong_phatthiyaphaibun_2024_10795907,
author = {Wannaphong Phatthiyaphaibun},
title = {Thai NER 2.2},
month = mar,
year = 2024,
publisher = {Zenodo},
version = {2.2},
doi = {10.5281/zenodo.10795907},
url = {https://doi.org/10.5281/zenodo.10795907}
}
``` |
om-ashish-soni/shiv-mahapurana-text | om-ashish-soni | "2024-03-08T07:01:56Z" | 0 | 0 | [
"croissant",
"region:us"
] | null | "2024-03-08T06:59:45Z" | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 299549
num_examples: 842
download_size: 163413
dataset_size: 299549
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
augsaksham/small_train | augsaksham | "2024-03-12T04:51:06Z" | 0 | 0 | [
"croissant",
"region:us"
] | null | "2024-03-08T07:01:28Z" | ---
dataset_info:
features:
- name: PII
dtype: string
- name: TOOL
dtype: string
- name: full_text
dtype: string
- name: document
dtype: int64
- name: is_valid
dtype: bool
splits:
- name: train
num_bytes: 36712
num_examples: 9
- name: validation
num_bytes: 6082
num_examples: 1
download_size: 42287
dataset_size: 42794
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
hson04/testData | hson04 | "2024-03-08T07:09:14Z" | 0 | 0 | [
"croissant",
"region:us"
] | null | "2024-03-08T07:09:12Z" | ---
dataset_info:
features:
- name: id_EXIST
dtype: int64
- name: lang
dtype: string
- name: text
dtype: string
- name: number_annotators
dtype: int64
- name: annotators
sequence: string
- name: gender_annotators
sequence: string
- name: age_annotators
sequence: string
- name: ethnicities_annotators
sequence: string
- name: study_levels_annotators
sequence: string
- name: countries_annotators
sequence: string
- name: labels_task1
sequence: string
- name: labels_task2
sequence: string
- name: labels_task3
sequence:
sequence: string
- name: split
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 7121632
num_examples: 6920
download_size: 1175271
dataset_size: 7121632
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
augsaksham/full_train | augsaksham | "2024-03-12T04:51:15Z" | 0 | 0 | [
"croissant",
"region:us"
] | null | "2024-03-08T07:11:58Z" | ---
dataset_info:
features:
- name: PII
dtype: string
- name: TOOL
dtype: string
- name: full_text
dtype: string
- name: document
dtype: int64
- name: is_valid
dtype: bool
splits:
- name: train
num_bytes: 3395267
num_examples: 764
- name: validation
num_bytes: 370144
num_examples: 84
download_size: 2130373
dataset_size: 3765411
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|