| Column | Type | Min | Max |
| --- | --- | ---: | ---: |
| datasetId | string (length) | 5 | 117 |
| author | string (length) | 2 | 42 |
| last_modified | unknown | | |
| downloads | int64 | 0 | 15M |
| likes | int64 | 0 | 4.98k |
| tags | sequence (length) | 1 | 7.91k |
| task_categories | sequence (length) | 0 | 40 |
| createdAt | unknown | | |
| card | string (length) | 15 | 977k |
joseluhf11/final_abstracts_codiesp
joseluhf11
"2024-03-07T11:04:56Z"
0
0
[ "croissant", "region:us" ]
null
"2024-03-07T11:04:54Z"
--- dataset_info: features: - name: case dtype: string - name: main_diagnosis struct: - name: code dtype: string - name: name dtype: string - name: secondaries_diagnsosis list: - name: code dtype: string - name: name dtype: string splits: - name: train num_bytes: 2360093 num_examples: 797 download_size: 1225904 dataset_size: 2360093 configs: - config_name: default data_files: - split: train path: data/train-* ---
herbertbodner/hackathon6
herbertbodner
"2024-03-07T11:40:01Z"
0
0
[ "croissant", "region:us" ]
null
"2024-03-07T11:11:32Z"
--- dataset_info: features: - name: repo_id dtype: string - name: file_path dtype: string - name: content dtype: string - name: __index_level_0__ dtype: int64 splits: - name: train num_bytes: 7731 num_examples: 1 download_size: 9828 dataset_size: 1 --- # Dataset Card for "hackathon6"
mxronga/radioibadan
mxronga
"2024-03-07T11:18:48Z"
0
0
[ "language:yo", "license:apache-2.0", "pretrain", "croissant", "region:us" ]
null
"2024-03-07T11:14:09Z"
--- license: apache-2.0 language: - yo tags: - pretrain ---
vvuri/openassistant-guanaco-ru
vvuri
"2024-03-07T11:15:09Z"
0
0
[ "croissant", "region:us" ]
null
"2024-03-07T11:15:07Z"
--- dataset_info: features: - name: text dtype: string splits: - name: train num_bytes: 1957697 num_examples: 709 - name: test num_bytes: 105639 num_examples: 39 download_size: 999023 dataset_size: 2063336 configs: - config_name: default data_files: - split: train path: data/train-* - split: test path: data/test-* ---
Protao/openstax_paragraphs_zh
Protao
"2024-03-07T11:17:36Z"
0
0
[ "croissant", "region:us" ]
null
"2024-03-07T11:17:30Z"
--- dataset_info: features: - name: language dtype: string - name: book_title dtype: string - name: chapters list: - name: abstract dtype: string - name: chapters list: - name: abstract dtype: string - name: chapters list: - name: abstract dtype: string - name: chapters dtype: 'null' - name: module dtype: string - name: sections list: - name: paragraph dtype: string - name: title dtype: string - name: title dtype: string - name: module dtype: string - name: sections list: - name: paragraph dtype: string - name: title dtype: string - name: title dtype: string - name: module dtype: string - name: sections list: - name: paragraph dtype: string - name: title dtype: string - name: title dtype: string splits: - name: train num_bytes: 8871711 num_examples: 60 download_size: 4997294 dataset_size: 8871711 --- # Dataset Card for "openstax_paragraphs_zh" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Protao/openstax_prompts_zh
Protao
"2024-03-07T11:33:53Z"
0
0
[ "croissant", "region:us" ]
null
"2024-03-07T11:33:44Z"
--- dataset_info: features: - name: prompt dtype: string - name: unit dtype: string - name: book title dtype: string - name: audience dtype: string splits: - name: train num_bytes: 129792168 num_examples: 97518 download_size: 25951422 dataset_size: 129792168 --- # Dataset Card for "openstax_prompts_zh" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
ArtifactClfDurham/OrientalMuseum-3Dwhite-1frame
ArtifactClfDurham
"2024-03-07T11:35:22Z"
0
0
[ "croissant", "region:us" ]
null
"2024-03-07T11:34:05Z"
--- dataset_info: features: - name: obj_num dtype: string - name: file dtype: string - name: image dtype: image - name: root dtype: string - name: description dtype: string - name: object_name dtype: string - name: other_name dtype: string - name: material dtype: string - name: production.period dtype: string - name: production.place dtype: string - name: new_root dtype: string - name: original dtype: bool splits: - name: train num_bytes: 1626545463.87 num_examples: 74610 download_size: 2000591827 dataset_size: 1626545463.87 configs: - config_name: default data_files: - split: train path: data/train-* ---
jingzi/CIMD
jingzi
"2024-03-13T15:28:08Z"
0
1
[ "task_categories:question-answering", "task_categories:text-generation", "size_categories:100K<n<1M", "language:zh", "license:apache-2.0", "region:us" ]
[ "question-answering", "text-generation" ]
"2024-03-07T11:36:41Z"
--- license: apache-2.0 task_categories: - question-answering - text-generation language: - zh size_categories: - 100K<n<1M --- ## Chinese Instruction Multimodal Data (CIMD) The dataset contains roughly one million Chinese image-text pairs in total, covering detailed image captioning and visual question answering. ### Generation Pipeline * Image source We randomly sample images from two open-source datasets, [Wanjuan](https://github.com/opendatalab/WanJuan1.0) and [Wukong](https://wukong-dataset.github.io/wukong-dataset/). * Detailed caption generation We use the [Gemini Pro Vision API](https://ai.google.dev/) to generate a detailed description for each image. * Question-answer pair generation Based on the generated caption, we use the Gemini API to generate a complex question for the corresponding image. A detailed answer is then generated from the available information, again via the Gemini API. For multi-round conversations, each follow-up question is generated conditioned on the conversation history. * Post-processing To further improve the quality of the generated instruction data, we discard image-text pairs whose text contains heavy repetition. We also found that some answers generated by Gemini contain a degree of hallucination, so we filter out unreasonable image-text pairs by querying Gemini again. ### Detailed information of the datasets | Image Source | #Images | #Captions | #Single-turn QA Pairs | #Multi-turn QA pairs | #Total Image-text Pairs | | --- | ---: | ---: | ---: | ---: | ---: | [Wanjuan](https://github.com/opendatalab/WanJuan1.0) |212,326| 200,917| 539,371 |65,005| 805,293 [Wukong](https://wukong-dataset.github.io/wukong-dataset/)| 66,570 |106,368| 107,991 |0| 214,359 total| 278,896| 307,285| 647,362| 65,005| 1,019,652 All datasets can be downloaded [here](https://huggingface.co/datasets/jingzi/CIMD). We also uploaded all the images as `images.zip`, because the original image file names were renamed during downloading for convenience.
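The repetition filter described in the post-processing step could be sketched as follows. This is a minimal illustration, not the authors' code: the character n-gram size and repeat threshold are assumptions, not the values used for CIMD.

```python
from collections import Counter

def has_heavy_repetition(text: str, n: int = 8, max_repeats: int = 3) -> bool:
    """Flag text in which any character n-gram occurs more than max_repeats times.

    n=8 and max_repeats=3 are illustrative defaults (assumptions), not CIMD's settings.
    """
    if len(text) < n:
        return False
    grams = Counter(text[i:i + n] for i in range(len(text) - n + 1))
    return max(grams.values()) > max_repeats

# A degenerate looping answer is flagged; a normal caption passes.
looping = "很好很好" * 6
normal = "这张图片展示了一组中国传统民居的屋顶。"
```

Pairs flagged by a check of this kind would be discarded before the second-pass hallucination filtering.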
### Samples * single-turn conversation with detailed caption (image from wanjuan dataset) ``` { "id": "4fdb831b09ab87f3dfa758cbb3693eb2dec41b7a5cb0eeb55a6eb5faae2c7fb1_2", "image": "wanjuan/4fdb831b09ab87f3dfa758cbb3693eb2dec41b7a5cb0eeb55a6eb5faae2c7fb1.jpg", "conversations": [ { "from": "human", "value": " 这些屋顶的坡度各不相同,为什么有些屋顶的坡度较大,而有些屋顶的坡度较小呢?\n<image>" }, { "from": "gpt", "value": " 屋顶的坡度与当地的气候条件有关。坡度大的屋顶可以更快地排水,更适合于多雨地区。坡度小的屋顶更适合于少雨地区。" } ], "detailed_caption": " 这是一组中国传统民居的屋顶图片。屋顶由青瓦铺成,屋脊上有各种装饰。屋顶的坡度很大,有利于排水。屋顶上还有许多小老虎,用以辟邪。" } ``` * multi-turn conversation without detailed caption (image from wanjuan dataset) ``` { "id": "787add42091e8f7f2ec23ab48fcbdf2201859fd7dd9c6ba95ae373aa11a7f9c8_0", "image": "wanjuan/787add42091e8f7f2ec23ab48fcbdf2201859fd7dd9c6ba95ae373aa11a7f9c8.jpg", "conversations": [ { "from": "human", "value": " 这张图片是一套咖啡店的VI设计,包括了名片、信纸、封套、宣传册。请问这套设计体现了咖啡店的哪些特色?\n<image>" }, { "from": "gpt", "value": " 这套咖啡店的VI设计体现了咖啡店的以下特色:\n\n1. 现代感:这套设计使用了现代感十足的几何图形和线条,给人以时尚、摩登的感觉。\n2. 简约性:这套设计并没有采用过多的装饰元素,而是以简约的风格为主,给人以干净、清爽的感觉。\n3. 自然感:这套设计使用了大量的留白,并且在名片和信纸上都运用了大面积的绿色,给人以自然、清新之感。\n4. 趣味性:这套设计在一些细节之处加入了一些趣味性的元素,比如在名片的背面加入了一个小插图,在信纸的抬头处加入了一个咖啡杯的图案,这些小细节为这套设计增添了些许趣味。" }, { "from": "human", "value": " 好的,我根据你的描述,给你设计了一个新的问题:\n\n这套咖啡店的VI设计使用了哪些颜色?这些颜色分别代表了什么?" }, { "from": "gpt", "value": " 这套咖啡店的VI设计使用了黄色、绿色和黑色。黄色代表着阳光、活力和热情;绿色代表着自然、清新和健康;黑色代表着沉稳、高贵和典雅。这三种颜色搭配在一起,给人以一种非常舒适的感觉,同时也很好地体现了咖啡店的特色。" } ], "detailed_caption": "" } ``` * single-turn conversation with detailed caption (image from wukong dataset) ``` { "id": "b97b0ec2d43f84d2c424689db9592d26_1", "image": "wukong/b97b0ec2d43f84d2c424689db9592d26.jpg", "conversations": [ { "from": "human", "value": " 为什么熊猫头上会有一个蝴蝶结?\n<image>" }, { "from": "gpt", "value": " 图片是一张熊猫头的表情包,它头上有一个粉色的蝴蝶结。在现实生活中,熊猫头上一般不会有蝴蝶结。但在图片中,熊猫头上有一个蝴蝶结,这可能是为了装饰,或者是为了搞笑。" } ], "detailed_caption": " 这张图片是一张熊猫头的表情包,它头上有一个粉色的蝴蝶结,表情看起来很滑稽。图片的文字是“谢谢你啊,这么忙还来亲自骂我”,语气中带着一丝无奈和自嘲。" } ```
Harsha9044/Malayalam_dataset
Harsha9044
"2024-03-08T06:47:28Z"
0
0
[ "croissant", "region:us" ]
null
"2024-03-07T11:43:40Z"
--- dataset_info: features: - name: File Name dtype: string - name: Text dtype: string - name: Audio dtype: audio - name: Sentiment dtype: string splits: - name: train num_bytes: 91397254.0 num_examples: 70 download_size: 91083349 dataset_size: 91397254.0 configs: - config_name: default data_files: - split: train path: data/train-* ---
Leo12344321/synthetic_realfakeimage
Leo12344321
"2024-03-07T11:50:27Z"
0
1
[ "license:mit", "croissant", "region:us" ]
null
"2024-03-07T11:44:39Z"
--- license: mit ---
JudeChaer/adding
JudeChaer
"2024-03-07T13:46:03Z"
0
0
[ "license:mit", "croissant", "region:us" ]
null
"2024-03-07T11:49:48Z"
--- license: mit ---
JJFrancisco/ProbaEstructura
JJFrancisco
"2024-03-07T12:25:45Z"
0
0
[ "annotations_creators:expert-generated", "multilinguality:monolingual", "language:gl", "license:mit", "croissant", "region:us" ]
null
"2024-03-07T11:57:17Z"
--- annotations_creators: - expert-generated language: - gl license: - mit multilinguality: - monolingual dataset_info: - config_name: config features: - name: file dtype: string - name: audio dtype: audio: sampling_rate: 16000 - name: text dtype: string - name: speaker_id dtype: int64 - name: chapter_id dtype: int64 - name: id dtype: string splits: - name: train.100 num_bytes: 6619683041 num_examples: 28539 - name: train.360 num_bytes: 23898214592 num_examples: 104014 - name: validation num_bytes: 359572231 num_examples: 2703 - name: test num_bytes: 367705423 num_examples: 2620 download_size: 30121377654 dataset_size: 31245175287 - config_name: other features: - name: file dtype: string - name: audio dtype: audio: sampling_rate: 16000 - name: text dtype: string - name: speaker_id dtype: int64 - name: chapter_id dtype: int64 - name: id dtype: string splits: - name: train.500 num_bytes: 31810256902 num_examples: 148688 - name: validation num_bytes: 337283304 num_examples: 2864 - name: test num_bytes: 352396474 num_examples: 2939 download_size: 31236565377 dataset_size: 32499936680 - config_name: all features: - name: file dtype: string - name: audio dtype: audio: sampling_rate: 16000 - name: text dtype: string - name: speaker_id dtype: int64 - name: chapter_id dtype: int64 - name: id dtype: string splits: - name: train.clean.100 num_bytes: 6627791685 num_examples: 28539 - name: train.clean.360 num_bytes: 23927767570 num_examples: 104014 - name: train.other.500 num_bytes: 31852502880 num_examples: 148688 - name: validation.clean num_bytes: 359505691 num_examples: 2703 - name: validation.other num_bytes: 337213112 num_examples: 2864 - name: test.clean num_bytes: 368449831 num_examples: 2620 - name: test.other num_bytes: 353231518 num_examples: 2939 download_size: 61357943031 dataset_size: 63826462287 --- # Dataset Card for [Dataset Name] ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset 
Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary [More Information Needed] ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? 
[More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
BouaCoul/q_fa
BouaCoul
"2024-03-07T11:59:13Z"
0
0
[ "license:mit", "region:us" ]
null
"2024-03-07T11:59:13Z"
--- license: mit ---
limjuhan/Juhan_dentist
limjuhan
"2024-03-07T13:13:23Z"
0
0
[ "croissant", "region:us" ]
null
"2024-03-07T12:03:56Z"
--- dataset_info: features: - name: text dtype: string splits: - name: train num_bytes: 66954 num_examples: 176 download_size: 38904 dataset_size: 66954 configs: - config_name: default data_files: - split: train path: data/train-* ---
dhiya96/zephyr_text_summarisation_500
dhiya96
"2024-03-07T12:06:13Z"
0
0
[ "croissant", "region:us" ]
null
"2024-03-07T12:05:48Z"
--- dataset_info: features: - name: content dtype: string - name: summary dtype: string splits: - name: train num_bytes: 1177833.15 num_examples: 405 - name: test num_bytes: 276281.85 num_examples: 95 download_size: 908924 dataset_size: 1454115.0 configs: - config_name: default data_files: - split: train path: data/train-* - split: test path: data/test-* ---
harshcode21/JS_Master
harshcode21
"2024-03-07T12:07:41Z"
0
0
[ "license:mit", "croissant", "region:us" ]
null
"2024-03-07T12:07:17Z"
--- license: mit ---
saridormi/lca-cmg-paraphrase
saridormi
"2024-03-07T12:33:09Z"
0
0
[ "croissant", "region:us" ]
null
"2024-03-07T12:19:38Z"
--- configs: - config_name: gpt_4_0125_preview data_files: - split: test path: predictions/gpt_4_0125_preview/*.jsonl - config_name: gpt_3.5_turbo_16k_0613 data_files: - split: test path: predictions/gpt_3.5_turbo_16k_0613/*.jsonl --- # Paraphrased commit messages Results of running models on 🤗 [CMG dataset from Long Code Arena](https://huggingface.co/datasets/JetBrains-Research/lca-commit-message-generation) with a simple paraphrase prompt.
pvduy/ultrafeedback-trans-jp-cleaned-v1
pvduy
"2024-03-07T12:28:22Z"
0
0
[ "croissant", "region:us" ]
null
"2024-03-07T12:28:13Z"
--- dataset_info: features: - name: chosen list: - name: content dtype: string - name: role dtype: string - name: rejected list: - name: content dtype: string - name: role dtype: string splits: - name: train num_bytes: 246493971 num_examples: 56263 - name: test num_bytes: 5557339 num_examples: 1000 download_size: 122204740 dataset_size: 252051310 configs: - config_name: default data_files: - split: train path: data/train-* - split: test path: data/test-* ---
tilemachos/health_summarizetldr
tilemachos
"2024-03-07T12:29:41Z"
0
0
[ "license:unknown", "croissant", "region:us" ]
null
"2024-03-07T12:28:25Z"
--- license: unknown ---
Mohamad-Jaallouk/ConstScene2-test-dataset
Mohamad-Jaallouk
"2024-03-07T12:28:32Z"
0
0
[ "croissant", "region:us" ]
null
"2024-03-07T12:28:30Z"
--- dataset_info: features: - name: pixel_values dtype: image - name: label dtype: image splits: - name: test num_bytes: 12103649.0 num_examples: 101 download_size: 11691758 dataset_size: 12103649.0 configs: - config_name: default data_files: - split: test path: data/test-* ---
pesc101/spyder-ide-lbl-only-code-chunks
pesc101
"2024-03-07T16:06:52Z"
0
0
[ "croissant", "region:us" ]
null
"2024-03-07T12:37:26Z"
--- dataset_info: features: - name: meta_data struct: - name: contains_class dtype: bool - name: contains_function dtype: bool - name: end_line dtype: int64 - name: file_imports sequence: string - name: file_name dtype: string - name: module dtype: string - name: start_line dtype: int64 - name: code dtype: string - name: question dtype: string - name: answer dtype: string - name: prompt dtype: string splits: - name: train num_bytes: 28961152 num_examples: 8095 - name: test num_bytes: 66006 num_examples: 23 download_size: 8144155 dataset_size: 29027158 configs: - config_name: default data_files: - split: train path: data/train-* - split: test path: data/test-* ---
yhao-wang/rear-eval
yhao-wang
"2024-03-07T13:40:26Z"
0
0
[ "task_categories:question-answering", "language:en", "license:mit", "croissant", "region:us" ]
[ "question-answering" ]
"2024-03-07T12:46:40Z"
--- license: mit task_categories: - question-answering language: - en --- # Data ### Original source All question-answer pairs were obtained from the [DPR](https://github.com/facebookresearch/DPR) repository. ### Data format The expected data format is a list of entries, where each entry is a dictionary containing: - `question`: the question text - `answers`: a list of answer strings used for evaluation - `ctxs`: a list of passages Entry example: ``` { 'question': 'who got the first nobel prize in physics', 'answers': ['Wilhelm Conrad Röntgen'], 'ctxs': [ "Wilhelm Conrad Röntgen won first Nobel Prize in Physics.", "Wilhelm Conrad Röntgen won it for discovery of X-rays", "Albert Einstein was awarded the 1921 Nobel Prize in Physics", "The Nobel Prize in Physics is a yearly award.", "First law of thermodynamics was stated by William" ] } ```
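A minimal validity check for this entry format might look like the following sketch. The field names are taken from the entry example above; `validate_entry` is a hypothetical helper, not part of the repository.

```python
def validate_entry(entry: dict) -> bool:
    """Return True if an entry matches the expected QA format:
    a question string, a list of answer strings, and a list of passage strings."""
    return (
        isinstance(entry.get("question"), str)
        and isinstance(entry.get("answers"), list)
        and all(isinstance(a, str) for a in entry["answers"])
        and isinstance(entry.get("ctxs"), list)
        and all(isinstance(c, str) for c in entry["ctxs"])
    )

entry = {
    "question": "who got the first nobel prize in physics",
    "answers": ["Wilhelm Conrad Röntgen"],
    "ctxs": ["Wilhelm Conrad Röntgen won first Nobel Prize in Physics."],
}
```

Running such a check over the list before evaluation catches malformed entries early.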
Marlatant/test
Marlatant
"2024-03-07T12:49:11Z"
0
0
[ "license:apache-2.0", "croissant", "region:us" ]
null
"2024-03-07T12:48:31Z"
--- license: apache-2.0 ---
ariji1/acn-finetuning
ariji1
"2024-03-07T12:54:25Z"
0
0
[ "license:apache-2.0", "croissant", "region:us" ]
null
"2024-03-07T12:53:25Z"
--- license: apache-2.0 ---
alisson40889/friza
alisson40889
"2024-03-07T13:14:19Z"
0
0
[ "license:openrail", "croissant", "region:us" ]
null
"2024-03-07T13:10:43Z"
--- license: openrail ---
Lewisliuming/toysentiment
Lewisliuming
"2024-03-07T13:14:39Z"
0
0
[ "license:apache-2.0", "croissant", "region:us" ]
null
"2024-03-07T13:13:51Z"
--- license: apache-2.0 ---
2A2I-R/dibt_10k_prompts_ranked_arabic
2A2I-R
"2024-03-07T13:36:02Z"
0
3
[ "croissant", "region:us" ]
null
"2024-03-07T13:35:34Z"
--- dataset_info: features: - name: prompt dtype: string - name: quality list: - name: status dtype: string - name: user_id dtype: string - name: value dtype: string - name: metadata dtype: string - name: avg_rating dtype: float64 - name: num_responses dtype: int64 - name: agreement_ratio dtype: float64 - name: raw_responses sequence: int64 - name: kind dtype: string splits: - name: train num_bytes: 10601581 num_examples: 10331 download_size: 4323538 dataset_size: 10601581 configs: - config_name: default data_files: - split: train path: data/train-* ---
Rightly/Classifier_unsplit_emails
Rightly
"2024-03-07T13:51:44Z"
0
0
[ "croissant", "region:us" ]
null
"2024-03-07T13:51:35Z"
--- dataset_info: features: - name: template_name dtype: string - name: content dtype: string - name: policy_id dtype: string - name: policy_id_regex dtype: string - name: renewal_date dtype: string - name: renewal_date_regex dtype: string - name: start_date dtype: string - name: start_date_regex dtype: string - name: category dtype: string - name: category_regex dtype: string - name: date_generated dtype: string splits: - name: train num_bytes: 404126649 num_examples: 21000 download_size: 78797361 dataset_size: 404126649 configs: - config_name: default data_files: - split: train path: data/train-* ---
lemeswmv/conradovoz
lemeswmv
"2024-03-07T14:08:06Z"
0
0
[ "license:openrail", "croissant", "region:us" ]
null
"2024-03-07T13:56:42Z"
--- license: openrail ---
Vas123/130000
Vas123
"2024-03-07T14:08:02Z"
0
0
[ "croissant", "region:us" ]
null
"2024-03-07T14:07:59Z"
--- dataset_info: features: - name: title dtype: string - name: body dtype: string splits: - name: train num_bytes: 777748 num_examples: 204 - name: validation num_bytes: 96466 num_examples: 25 - name: test num_bytes: 98375 num_examples: 26 download_size: 455864 dataset_size: 972589 configs: - config_name: default data_files: - split: train path: data/train-* - split: validation path: data/validation-* - split: test path: data/test-* ---
loicmagne/dsamples100
loicmagne
"2024-03-07T14:12:07Z"
0
0
[ "croissant", "region:us" ]
null
"2024-03-07T14:09:45Z"
--- configs: - config_name: "0" data_files: "0.parquet" - config_name: "1" data_files: "1.parquet" - config_name: "2" data_files: "2.parquet" - config_name: "3" data_files: "3.parquet" - config_name: "4" data_files: "4.parquet" - config_name: "5" data_files: "5.parquet" - config_name: "6" data_files: "6.parquet" - config_name: "7" data_files: "7.parquet" - config_name: "8" data_files: "8.parquet" - config_name: "9" data_files: "9.parquet" ---
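The one-config-per-shard pattern above is mechanical, so the YAML `configs` section can be generated rather than written by hand. A sketch, assuming the `N.parquet` naming shown in the card:

```python
def configs_yaml(n_files: int) -> str:
    """Emit a YAML `configs` section with one named config per parquet shard."""
    lines = ["configs:"]
    for i in range(n_files):
        lines.append(f'- config_name: "{i}"')
        lines.append(f'  data_files: "{i}.parquet"')
    return "\n".join(lines)

# Reproduces the ten-config block in this card.
print(configs_yaml(10))
```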
loicmagne/dsamples1k
loicmagne
"2024-03-07T14:19:36Z"
0
0
[ "croissant", "region:us" ]
null
"2024-03-07T14:14:49Z"
--- configs: - config_name: "0" data_files: "0.parquet" - config_name: "1" data_files: "1.parquet" - config_name: "2" data_files: "2.parquet" - config_name: "3" data_files: "3.parquet" - config_name: "4" data_files: "4.parquet" - config_name: "5" data_files: "5.parquet" - config_name: "6" data_files: "6.parquet" - config_name: "7" data_files: "7.parquet" - config_name: "8" data_files: "8.parquet" - config_name: "9" data_files: "9.parquet" ---
luckeciano/hermes-features-ultrafeedback
luckeciano
"2024-03-07T14:23:16Z"
0
0
[ "croissant", "region:us" ]
null
"2024-03-07T14:15:14Z"
--- license: apache-2.0 configs: - config_name: default data_files: - split: train path: "train.csv" - split: valid path: "eval.csv" - split: test path: "test.csv" ---
aasarap/allfaq
aasarap
"2024-03-07T14:19:52Z"
0
0
[ "croissant", "region:us" ]
null
"2024-03-07T14:18:39Z"
--- dataset_info: features: - name: text dtype: string splits: - name: train num_bytes: 216215.3 num_examples: 721 - name: test num_bytes: 92663.7 num_examples: 309 download_size: 115318 dataset_size: 308879.0 configs: - config_name: default data_files: - split: train path: data/train-* - split: test path: data/test-* ---
adilgupta/cfilt-iitb-en-hi-truncated
adilgupta
"2024-03-07T14:29:31Z"
0
0
[ "croissant", "region:us" ]
null
"2024-03-07T14:27:45Z"
--- dataset_info: features: - name: translation struct: - name: en dtype: string - name: hi dtype: string splits: - name: train num_bytes: 446979428 num_examples: 1658990 download_size: 198644928 dataset_size: 446979428 configs: - config_name: default data_files: - split: train path: data/train-* ---
mahdiyehebrahimi/utc
mahdiyehebrahimi
"2024-04-22T17:47:27Z"
0
0
[ "task_categories:text-classification", "language:fa", "region:us" ]
[ "text-classification" ]
"2024-03-07T14:36:41Z"
--- task_categories: - text-classification language: - fa pretty_name: University_Ticket_Classification ---
PhilKey/llama2-openrewrite-docs-chat
PhilKey
"2024-03-07T16:07:51Z"
0
0
[ "croissant", "region:us" ]
null
"2024-03-07T14:39:23Z"
--- dataset_info: features: - name: text dtype: string splits: - name: train num_bytes: 91751 num_examples: 95 download_size: 26661 dataset_size: 91751 configs: - config_name: default data_files: - split: train path: data/train-* ---
Cognitive-Lab/Aya_Telgu
Cognitive-Lab
"2024-03-08T05:37:47Z"
0
0
[ "language:en", "language:te", "license:apache-2.0", "arxiv:2402.06619", "region:us" ]
null
"2024-03-07T14:39:47Z"
--- dataset_info: - config_name: complete_dataset features: - name: task_type dtype: string - name: script dtype: string - name: split dtype: string - name: inputs dtype: string - name: language dtype: string - name: id dtype: int64 - name: sub_dataset_name dtype: string - name: dataset_name dtype: string - name: targets dtype: string - name: template_id dtype: int64 splits: - name: train num_bytes: 5553365182 num_examples: 4050096 download_size: 1827014858 dataset_size: 5553365182 - config_name: templated_indic_paraphrase features: - name: task_type dtype: string - name: split dtype: string - name: script dtype: string - name: inputs dtype: string - name: language dtype: string - name: id dtype: int64 - name: sub_dataset_name dtype: string - name: dataset_name dtype: string - name: targets dtype: string - name: template_id dtype: int64 splits: - name: train num_bytes: 887070 num_examples: 1517 download_size: 304055 dataset_size: 887070 - config_name: templated_indic_sentiment features: - name: task_type dtype: string - name: split dtype: string - name: script dtype: string - name: inputs dtype: string - name: language dtype: string - name: id dtype: int64 - name: sub_dataset_name dtype: string - name: dataset_name dtype: string - name: targets dtype: string - name: template_id dtype: int64 splits: - name: train num_bytes: 781847 num_examples: 1156 download_size: 318064 dataset_size: 781847 - config_name: templated_telugu_food features: - name: task_type dtype: string - name: split dtype: string - name: script dtype: string - name: inputs dtype: string - name: language dtype: string - name: id dtype: int64 - name: sub_dataset_name dtype: string - name: dataset_name dtype: string - name: targets dtype: string - name: template_id dtype: int64 splits: - name: train num_bytes: 1108509 num_examples: 441 download_size: 312377 dataset_size: 1108509 - config_name: templated_telugu_jokes features: - name: task_type dtype: string - name: split dtype: string - name: script 
dtype: string - name: inputs dtype: string - name: language dtype: string - name: id dtype: int64 - name: sub_dataset_name dtype: string - name: dataset_name dtype: string - name: targets dtype: string - name: template_id dtype: int64 splits: - name: train num_bytes: 966698 num_examples: 929 download_size: 298196 dataset_size: 966698 - config_name: templated_telugu_news features: - name: task_type dtype: string - name: split dtype: string - name: script dtype: string - name: inputs dtype: string - name: language dtype: string - name: id dtype: int64 - name: sub_dataset_name dtype: string - name: dataset_name dtype: string - name: targets dtype: string - name: template_id dtype: int64 splits: - name: train num_bytes: 1150840295 num_examples: 467090 download_size: 423046750 dataset_size: 1150840295 - config_name: templated_telugu_poems features: - name: task_type dtype: string - name: split dtype: string - name: script dtype: string - name: inputs dtype: string - name: language dtype: string - name: id dtype: int64 - name: sub_dataset_name dtype: string - name: dataset_name dtype: string - name: targets dtype: string - name: template_id dtype: int64 splits: - name: train num_bytes: 8244805 num_examples: 5115 download_size: 2713407 dataset_size: 8244805 - config_name: templated_telugu_riddles features: - name: task_type dtype: string - name: split dtype: string - name: script dtype: string - name: inputs dtype: string - name: language dtype: string - name: id dtype: int64 - name: sub_dataset_name dtype: string - name: dataset_name dtype: string - name: targets dtype: string - name: template_id dtype: int64 splits: - name: train num_bytes: 339040 num_examples: 844 download_size: 79017 dataset_size: 339040 - config_name: templated_xlel_wd features: - name: task_type dtype: string - name: split dtype: string - name: script dtype: string - name: inputs dtype: string - name: language dtype: string - name: id dtype: int64 - name: sub_dataset_name dtype: string - name: 
dataset_name dtype: string - name: targets dtype: string - name: template_id dtype: int64 splits: - name: train num_bytes: 1105593 num_examples: 639 download_size: 403809 dataset_size: 1105593 - config_name: translated_adversarial_qa features: - name: task_type dtype: string - name: script dtype: string - name: split dtype: string - name: inputs dtype: string - name: language dtype: string - name: id dtype: int64 - name: sub_dataset_name dtype: string - name: dataset_name dtype: string - name: targets dtype: string - name: template_id dtype: int64 splits: - name: train num_bytes: 23828637 num_examples: 10000 download_size: 5853372 dataset_size: 23828637 - config_name: translated_cnn_dailymail features: - name: task_type dtype: string - name: script dtype: string - name: split dtype: string - name: inputs dtype: string - name: language dtype: string - name: id dtype: int64 - name: sub_dataset_name dtype: string - name: dataset_name dtype: string - name: targets dtype: string - name: template_id dtype: int64 splits: - name: train num_bytes: 624416386 num_examples: 100000 download_size: 228934790 dataset_size: 624416386 - config_name: translated_dolly features: - name: task_type dtype: string - name: split dtype: string - name: script dtype: string - name: inputs dtype: string - name: language dtype: string - name: id dtype: int64 - name: sub_dataset_name dtype: string - name: dataset_name dtype: string - name: targets dtype: string - name: template_id dtype: int64 splits: - name: train num_bytes: 32136437 num_examples: 14808 download_size: 12268225 dataset_size: 32136437 - config_name: translated_flan_coqa features: - name: task_type dtype: string - name: script dtype: string - name: split dtype: string - name: inputs dtype: string - name: language dtype: string - name: id dtype: int64 - name: sub_dataset_name dtype: string - name: dataset_name dtype: string - name: targets dtype: string - name: template_id dtype: int64 splits: - name: train num_bytes: 42954081 
num_examples: 6409 download_size: 15878737 dataset_size: 42954081 - config_name: translated_flan_cot features: - name: task_type dtype: string - name: script dtype: string - name: split dtype: string - name: inputs dtype: string - name: language dtype: string - name: id dtype: int64 - name: sub_dataset_name dtype: string - name: dataset_name dtype: string - name: targets dtype: string - name: template_id dtype: int64 splits: - name: train num_bytes: 103946965 num_examples: 91910 download_size: 36013799 dataset_size: 103946965 - config_name: translated_flan_gem_wiki features: - name: task_type dtype: string - name: script dtype: string - name: split dtype: string - name: inputs dtype: string - name: language dtype: string - name: id dtype: int64 - name: sub_dataset_name dtype: string - name: dataset_name dtype: string - name: targets dtype: string - name: template_id dtype: int64 splits: - name: train num_bytes: 171947547 num_examples: 27147 download_size: 61509697 dataset_size: 171947547 - config_name: translated_flan_lambada features: - name: task_type dtype: string - name: script dtype: string - name: split dtype: string - name: inputs dtype: string - name: language dtype: string - name: id dtype: int64 - name: sub_dataset_name dtype: string - name: dataset_name dtype: string - name: targets dtype: string - name: template_id dtype: int64 splits: - name: train num_bytes: 3350933 num_examples: 4279 download_size: 1244741 dataset_size: 3350933 - config_name: translated_flan_qa features: - name: task_type dtype: string - name: script dtype: string - name: split dtype: string - name: inputs dtype: string - name: language dtype: string - name: id dtype: int64 - name: sub_dataset_name dtype: string - name: dataset_name dtype: string - name: targets dtype: string - name: template_id dtype: int64 splits: - name: train num_bytes: 466231 num_examples: 540 download_size: 163927 dataset_size: 466231 - config_name: translated_hotpotqa features: - name: task_type dtype: string 
- name: script dtype: string - name: split dtype: string - name: inputs dtype: string - name: language dtype: string - name: id dtype: int64 - name: sub_dataset_name dtype: string - name: dataset_name dtype: string - name: targets dtype: string - name: template_id dtype: int64 splits: - name: train num_bytes: 173446675 num_examples: 355476 download_size: 51566169 dataset_size: 173446675 - config_name: translated_joke_explaination features: - name: task_type dtype: string - name: script dtype: string - name: split dtype: string - name: inputs dtype: string - name: language dtype: string - name: id dtype: int64 - name: sub_dataset_name dtype: string - name: dataset_name dtype: string - name: targets dtype: string - name: template_id dtype: int64 splits: - name: train num_bytes: 1427307 num_examples: 754 download_size: 324060 dataset_size: 1427307 - config_name: translated_mintaka features: - name: task_type dtype: string - name: script dtype: string - name: split dtype: string - name: inputs dtype: string - name: language dtype: string - name: id dtype: int64 - name: sub_dataset_name dtype: string - name: dataset_name dtype: string - name: targets dtype: string - name: template_id dtype: int64 splits: - name: train num_bytes: 5737422 num_examples: 14000 download_size: 969828 dataset_size: 5737422 - config_name: translated_nqopen features: - name: task_type dtype: string - name: script dtype: string - name: split dtype: string - name: inputs dtype: string - name: language dtype: string - name: id dtype: int64 - name: sub_dataset_name dtype: string - name: dataset_name dtype: string - name: targets dtype: string - name: template_id dtype: int64 splits: - name: train num_bytes: 55232722 num_examples: 175850 download_size: 15606726 dataset_size: 55232722 - config_name: translated_paws features: - name: task_type dtype: string - name: script dtype: string - name: split dtype: string - name: inputs dtype: string - name: language dtype: string - name: id dtype: int64 - 
name: sub_dataset_name dtype: string - name: dataset_name dtype: string - name: targets dtype: string - name: template_id dtype: int64 splits: - name: train num_bytes: 47144986 num_examples: 49401 download_size: 6120004 dataset_size: 47144986 - config_name: translated_piqa features: - name: task_type dtype: string - name: script dtype: string - name: split dtype: string - name: inputs dtype: string - name: language dtype: string - name: id dtype: int64 - name: sub_dataset_name dtype: string - name: dataset_name dtype: string - name: targets dtype: string - name: template_id dtype: int64 splits: - name: train num_bytes: 19252904 num_examples: 16113 download_size: 5383085 dataset_size: 19252904 - config_name: translated_soda features: - name: task_type dtype: string - name: script dtype: string - name: split dtype: string - name: inputs dtype: string - name: language dtype: string - name: id dtype: int64 - name: sub_dataset_name dtype: string - name: dataset_name dtype: string - name: targets dtype: string - name: template_id dtype: int64 splits: - name: train num_bytes: 1112271687 num_examples: 1191582 download_size: 309159822 dataset_size: 1112271687 - config_name: translated_wiki_split features: - name: task_type dtype: string - name: script dtype: string - name: split dtype: string - name: inputs dtype: string - name: language dtype: string - name: id dtype: int64 - name: sub_dataset_name dtype: string - name: dataset_name dtype: string - name: targets dtype: string - name: template_id dtype: int64 splits: - name: train num_bytes: 1111439015 num_examples: 989944 download_size: 326772204 dataset_size: 1111439015 - config_name: translated_wikiqa features: - name: task_type dtype: string - name: script dtype: string - name: split dtype: string - name: inputs dtype: string - name: language dtype: string - name: id dtype: int64 - name: sub_dataset_name dtype: string - name: dataset_name dtype: string - name: targets dtype: string - name: template_id dtype: int64 
splits: - name: train num_bytes: 730463 num_examples: 1040 download_size: 261132 dataset_size: 730463 - config_name: translated_xlel_wd features: - name: task_type dtype: string - name: script dtype: string - name: split dtype: string - name: inputs dtype: string - name: language dtype: string - name: id dtype: int64 - name: sub_dataset_name dtype: string - name: dataset_name dtype: string - name: targets dtype: string - name: template_id dtype: int64 splits: - name: train num_bytes: 859360927 num_examples: 523112 download_size: 320781896 dataset_size: 859360927 configs: - config_name: complete_dataset data_files: - split: train path: complete_dataset/train-* - config_name: templated_indic_paraphrase data_files: - split: train path: templated_indic_paraphrase/train-* - config_name: templated_indic_sentiment data_files: - split: train path: templated_indic_sentiment/train-* - config_name: templated_telugu_food data_files: - split: train path: templated_telugu_food/train-* - config_name: templated_telugu_jokes data_files: - split: train path: templated_telugu_jokes/train-* - config_name: templated_telugu_news data_files: - split: train path: templated_telugu_news/train-* - config_name: templated_telugu_poems data_files: - split: train path: templated_telugu_poems/train-* - config_name: templated_telugu_riddles data_files: - split: train path: templated_telugu_riddles/train-* - config_name: templated_xlel_wd data_files: - split: train path: templated_xlel_wd/train-* - config_name: translated_adversarial_qa data_files: - split: train path: translated_adversarial_qa/train-* - config_name: translated_cnn_dailymail data_files: - split: train path: translated_cnn_dailymail/train-* - config_name: translated_dolly data_files: - split: train path: translated_dolly/train-* - config_name: translated_flan_coqa data_files: - split: train path: translated_flan_coqa/train-* - config_name: translated_flan_cot data_files: - split: train path: translated_flan_cot/train-* - 
config_name: translated_flan_gem_wiki data_files: - split: train path: translated_flan_gem_wiki/train-* - config_name: translated_flan_lambada data_files: - split: train path: translated_flan_lambada/train-* - config_name: translated_flan_qa data_files: - split: train path: translated_flan_qa/train-* - config_name: translated_hotpotqa data_files: - split: train path: translated_hotpotqa/train-* - config_name: translated_joke_explaination data_files: - split: train path: translated_joke_explaination/train-* - config_name: translated_mintaka data_files: - split: train path: translated_mintaka/train-* - config_name: translated_nqopen data_files: - split: train path: translated_nqopen/train-* - config_name: translated_paws data_files: - split: train path: translated_paws/train-* - config_name: translated_piqa data_files: - split: train path: translated_piqa/train-* - config_name: translated_soda data_files: - split: train path: translated_soda/train-* - config_name: translated_wiki_split data_files: - split: train path: translated_wiki_split/train-* - config_name: translated_wikiqa data_files: - split: train path: translated_wikiqa/train-* - config_name: translated_xlel_wd data_files: - split: train path: translated_xlel_wd/train-* license: apache-2.0 language: - en - te --- # Aya_Telgu This Dataset is curated from the original [Aya-Collection](https://huggingface.co/datasets/CohereForAI/aya_collection) dataset that was open-sourced by [Cohere](https://cohere.com/research) under the [Apache-2.0](https://choosealicense.com/licenses/apache-2.0/) license. The Aya Collection is a massive multilingual collection comprising 513 million instances of prompts and completions that cover a wide range of tasks. This collection uses instruction-style templates from fluent speakers and applies them to a curated list of datasets. It also includes translations of instruction-style datasets into 101 languages. 
The Aya Dataset, a human-curated multilingual instruction and response dataset, is part of this collection. Refer to the Aya paper for more details about the collection. ### Motivations & Intentions The original dataset is large and organized by task rather than by language. To work with a specific Indic language, one would previously have needed to download the entire dataset (~600 GB) and filter it. As we were training an Indic LLM internally, we filtered the dataset by language and curated this dataset. You can find all the Indic-language-specific datasets [here](https://huggingface.co/collections/Cognitive-Lab/aya-indic-suite-65eaa0e34a2307f30bbd55e5). ## **Data Instances** An example of a `train` instance looks as follows: ```yaml {'id': 246001, 'inputs': 'The following query in English is taken from the geography category. What could be the answer to the question?\nWhat is the seventh tallest mountain in North America?', 'targets': 'The answer is Mount Lucania.', 'dataset_name': 'Mintaka-inst', 'sub_dataset_name': '-', 'task_type': 'question-answering', 'template_id': 3, 'language': 'eng', 'split': 'train', 'script': 'Latn' } ``` ## **Data Fields** The data fields are the same among all splits: - `id:` Unique id of the data point. - `inputs:` Prompt or input to the language model. - `targets:` Completion or output of the language model. - `dataset_name:` The name of the source dataset that the data point was taken from. - `sub_dataset_name:` If the source is a collection, this field indicates which part of that collection the data point was taken from. If it is not a collection, this field is left blank. - `task_type:` The task type that this conversation belongs to. - `template_id:` The id of the template applied to this data point. - `language:` The ISO code of the dialect of the conversation. - `script:` The script of the language. - `split:` Indicates whether the data point is part of the `train` or the `test` split.
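Because the schema is flat, the kind of per-language filtering used to curate this subset can be sketched with plain Python over record dictionaries. The sample rows and the `filter_by_language` helper below are illustrative, not part of the dataset; the filter keys on the documented `language` field, assuming the ISO 639-3 code `tel` for Telugu:

```python
# Illustrative records mirroring the documented fields; values are made up.
records = [
    {"id": 1, "inputs": "...", "targets": "...", "dataset_name": "Mintaka-inst",
     "sub_dataset_name": "-", "task_type": "question-answering",
     "template_id": 3, "language": "tel", "split": "train", "script": "Telu"},
    {"id": 2, "inputs": "...", "targets": "...", "dataset_name": "Mintaka-inst",
     "sub_dataset_name": "-", "task_type": "question-answering",
     "template_id": 3, "language": "eng", "split": "train", "script": "Latn"},
]

def filter_by_language(rows, iso_codes):
    """Keep only rows whose `language` field is in the given ISO codes."""
    wanted = set(iso_codes)
    return [row for row in rows if row["language"] in wanted]

telugu_rows = filter_by_language(records, {"tel"})
print(len(telugu_rows))  # 1
```

On the full Aya Collection the same predicate could be applied with `datasets.Dataset.filter`; the per-language subsets published here simply save you that one-time filtering pass.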
## **Licensing Information** This dataset can be used for any purpose, whether academic or commercial, under the terms of the **[Apache 2.0](https://opensource.org/license/apache-2-0)** License. ## **Citation** ```bibtex @misc{singh2024aya, title={Aya Dataset: An Open-Access Collection for Multilingual Instruction Tuning}, author={Shivalika Singh and Freddie Vargus and Daniel Dsouza and Börje F. Karlsson and Abinaya Mahendiran and Wei-Yin Ko and Herumb Shandilya and Jay Patel and Deividas Mataciunas and Laura OMahony and Mike Zhang and Ramith Hettiarachchi and Joseph Wilson and Marina Machado and Luisa Souza Moura and Dominik Krzemiński and Hakimeh Fadaei and Irem Ergün and Ifeoma Okoh and Aisha Alaagib and Oshan Mudannayake and Zaid Alyafeai and Vu Minh Chien and Sebastian Ruder and Surya Guthikonda and Emad A. Alghamdi and Sebastian Gehrmann and Niklas Muennighoff and Max Bartolo and Julia Kreutzer and Ahmet Üstün and Marzieh Fadaee and Sara Hooker}, year={2024}, eprint={2402.06619}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
Cognitive-Lab/Aya_Hindi
Cognitive-Lab
"2024-03-08T05:32:24Z"
0
0
[ "size_categories:1M<n<10M", "language:en", "language:hi", "license:apache-2.0", "croissant", "arxiv:2402.06619", "region:us" ]
null
"2024-03-07T14:42:11Z"
--- dataset_info: - config_name: complete_dataset features: - name: targets dtype: string - name: id dtype: int64 - name: split dtype: string - name: sub_dataset_name dtype: string - name: task_type dtype: string - name: inputs dtype: string - name: template_id dtype: int64 - name: language dtype: string - name: script dtype: string - name: dataset_name dtype: string splits: - name: train num_bytes: 5634135057 num_examples: 3771709 download_size: 1626230714 dataset_size: 5634135057 - config_name: templated_hindi_headline features: - name: targets dtype: string - name: id dtype: int64 - name: split dtype: string - name: sub_dataset_name dtype: string - name: task_type dtype: string - name: inputs dtype: string - name: template_id dtype: int64 - name: language dtype: string - name: script dtype: string - name: dataset_name dtype: string splits: - name: train num_bytes: 915323132 num_examples: 94217 download_size: 192571468 dataset_size: 915323132 - config_name: templated_hindi_news features: - name: targets dtype: string - name: id dtype: int64 - name: split dtype: string - name: sub_dataset_name dtype: string - name: task_type dtype: string - name: inputs dtype: string - name: template_id dtype: int64 - name: language dtype: string - name: script dtype: string - name: dataset_name dtype: string splits: - name: train num_bytes: 436136894 num_examples: 42524 download_size: 89441706 dataset_size: 436136894 - config_name: templated_indic_paraphrase features: - name: targets dtype: string - name: id dtype: int64 - name: split dtype: string - name: sub_dataset_name dtype: string - name: task_type dtype: string - name: inputs dtype: string - name: template_id dtype: int64 - name: language dtype: string - name: script dtype: string - name: dataset_name dtype: string splits: - name: train num_bytes: 731975 num_examples: 1001 download_size: 241632 dataset_size: 731975 - config_name: templated_indic_sentiment features: - name: targets dtype: string - name: id dtype: int64 - 
name: split dtype: string - name: sub_dataset_name dtype: string - name: task_type dtype: string - name: inputs dtype: string - name: template_id dtype: int64 - name: language dtype: string - name: script dtype: string - name: dataset_name dtype: string splits: - name: train num_bytes: 730262 num_examples: 1156 download_size: 299936 dataset_size: 730262 - config_name: templated_mintaka features: - name: targets dtype: string - name: id dtype: int64 - name: split dtype: string - name: sub_dataset_name dtype: string - name: task_type dtype: string - name: inputs dtype: string - name: template_id dtype: int64 - name: language dtype: string - name: script dtype: string - name: dataset_name dtype: string splits: - name: train num_bytes: 18391211 num_examples: 56000 download_size: 3894945 dataset_size: 18391211 - config_name: templated_ntx_llm features: - name: targets dtype: string - name: id dtype: int64 - name: split dtype: string - name: sub_dataset_name dtype: string - name: task_type dtype: string - name: inputs dtype: string - name: template_id dtype: int64 - name: language dtype: string - name: script dtype: string - name: dataset_name dtype: string splits: - name: train num_bytes: 1185419 num_examples: 506 download_size: 128912 dataset_size: 1185419 - config_name: templated_xlel_wd features: - name: targets dtype: string - name: id dtype: int64 - name: split dtype: string - name: sub_dataset_name dtype: string - name: task_type dtype: string - name: inputs dtype: string - name: template_id dtype: int64 - name: language dtype: string - name: script dtype: string - name: dataset_name dtype: string splits: - name: train num_bytes: 6084765 num_examples: 3940 download_size: 2157019 dataset_size: 6084765 - config_name: translated_adversarial_qa features: - name: targets dtype: string - name: id dtype: int64 - name: split dtype: string - name: sub_dataset_name dtype: string - name: task_type dtype: string - name: inputs dtype: string - name: template_id dtype: int64 - 
name: language dtype: string - name: script dtype: string - name: dataset_name dtype: string splits: - name: train num_bytes: 22985920 num_examples: 10000 download_size: 5618356 dataset_size: 22985920 - config_name: translated_cnn_dailymail features: - name: targets dtype: string - name: id dtype: int64 - name: split dtype: string - name: sub_dataset_name dtype: string - name: task_type dtype: string - name: inputs dtype: string - name: template_id dtype: int64 - name: language dtype: string - name: script dtype: string - name: dataset_name dtype: string splits: - name: train num_bytes: 598585665 num_examples: 100000 download_size: 218762546 dataset_size: 598585665 - config_name: translated_dolly features: - name: targets dtype: string - name: id dtype: int64 - name: split dtype: string - name: sub_dataset_name dtype: string - name: task_type dtype: string - name: inputs dtype: string - name: template_id dtype: int64 - name: language dtype: string - name: script dtype: string - name: dataset_name dtype: string splits: - name: train num_bytes: 30828048 num_examples: 14808 download_size: 11858598 dataset_size: 30828048 - config_name: translated_flan_coqa features: - name: targets dtype: string - name: id dtype: int64 - name: split dtype: string - name: sub_dataset_name dtype: string - name: task_type dtype: string - name: inputs dtype: string - name: template_id dtype: int64 - name: language dtype: string - name: script dtype: string - name: dataset_name dtype: string splits: - name: train num_bytes: 39119861 num_examples: 6409 download_size: 15029790 dataset_size: 39119861 - config_name: translated_flan_cot features: - name: targets dtype: string - name: id dtype: int64 - name: split dtype: string - name: sub_dataset_name dtype: string - name: task_type dtype: string - name: inputs dtype: string - name: template_id dtype: int64 - name: language dtype: string - name: script dtype: string - name: dataset_name dtype: string splits: - name: train num_bytes: 98934248 
num_examples: 91910 download_size: 33869605 dataset_size: 98934248 - config_name: translated_flan_gem_wiki features: - name: targets dtype: string - name: id dtype: int64 - name: split dtype: string - name: sub_dataset_name dtype: string - name: task_type dtype: string - name: inputs dtype: string - name: template_id dtype: int64 - name: language dtype: string - name: script dtype: string - name: dataset_name dtype: string splits: - name: train num_bytes: 167881959 num_examples: 27147 download_size: 59957637 dataset_size: 167881959 - config_name: translated_flan_lambada features: - name: targets dtype: string - name: id dtype: int64 - name: split dtype: string - name: sub_dataset_name dtype: string - name: task_type dtype: string - name: inputs dtype: string - name: template_id dtype: int64 - name: language dtype: string - name: script dtype: string - name: dataset_name dtype: string splits: - name: train num_bytes: 3388337 num_examples: 4279 download_size: 1272013 dataset_size: 3388337 - config_name: translated_flan_qa features: - name: targets dtype: string - name: id dtype: int64 - name: split dtype: string - name: sub_dataset_name dtype: string - name: task_type dtype: string - name: inputs dtype: string - name: template_id dtype: int64 - name: language dtype: string - name: script dtype: string - name: dataset_name dtype: string splits: - name: train num_bytes: 452586 num_examples: 540 download_size: 158337 dataset_size: 452586 - config_name: translated_hotpotqa features: - name: targets dtype: string - name: id dtype: int64 - name: split dtype: string - name: sub_dataset_name dtype: string - name: task_type dtype: string - name: inputs dtype: string - name: template_id dtype: int64 - name: language dtype: string - name: script dtype: string - name: dataset_name dtype: string splits: - name: train num_bytes: 169705823 num_examples: 355476 download_size: 50061586 dataset_size: 169705823 - config_name: translated_joke_explaination features: - name: targets 
dtype: string - name: id dtype: int64 - name: split dtype: string - name: sub_dataset_name dtype: string - name: task_type dtype: string - name: inputs dtype: string - name: template_id dtype: int64 - name: language dtype: string - name: script dtype: string - name: dataset_name dtype: string splits: - name: train num_bytes: 1385133 num_examples: 754 download_size: 269690 dataset_size: 1385133 - config_name: translated_mintaka features: - name: targets dtype: string - name: id dtype: int64 - name: split dtype: string - name: sub_dataset_name dtype: string - name: task_type dtype: string - name: inputs dtype: string - name: template_id dtype: int64 - name: language dtype: string - name: script dtype: string - name: dataset_name dtype: string splits: - name: train num_bytes: 5854298 num_examples: 14000 download_size: 943132 dataset_size: 5854298 - config_name: translated_nqopen features: - name: targets dtype: string - name: id dtype: int64 - name: split dtype: string - name: sub_dataset_name dtype: string - name: task_type dtype: string - name: inputs dtype: string - name: template_id dtype: int64 - name: language dtype: string - name: script dtype: string - name: dataset_name dtype: string splits: - name: train num_bytes: 53305791 num_examples: 175850 download_size: 14829292 dataset_size: 53305791 - config_name: translated_paws features: - name: targets dtype: string - name: id dtype: int64 - name: split dtype: string - name: sub_dataset_name dtype: string - name: task_type dtype: string - name: inputs dtype: string - name: template_id dtype: int64 - name: language dtype: string - name: script dtype: string - name: dataset_name dtype: string splits: - name: train num_bytes: 44491519 num_examples: 49401 download_size: 5853813 dataset_size: 44491519 - config_name: translated_piqa features: - name: targets dtype: string - name: id dtype: int64 - name: split dtype: string - name: sub_dataset_name dtype: string - name: task_type dtype: string - name: inputs dtype: 
string - name: template_id dtype: int64 - name: language dtype: string - name: script dtype: string - name: dataset_name dtype: string splits: - name: train num_bytes: 18583099 num_examples: 16113 download_size: 5025762 dataset_size: 18583099 - config_name: translated_soda features: - name: targets dtype: string - name: id dtype: int64 - name: split dtype: string - name: sub_dataset_name dtype: string - name: task_type dtype: string - name: inputs dtype: string - name: template_id dtype: int64 - name: language dtype: string - name: script dtype: string - name: dataset_name dtype: string splits: - name: train num_bytes: 1167631298 num_examples: 1191582 download_size: 300524712 dataset_size: 1167631298 - config_name: translated_wiki_split features: - name: targets dtype: string - name: id dtype: int64 - name: split dtype: string - name: sub_dataset_name dtype: string - name: task_type dtype: string - name: inputs dtype: string - name: template_id dtype: int64 - name: language dtype: string - name: script dtype: string - name: dataset_name dtype: string splits: - name: train num_bytes: 994661567 num_examples: 989944 download_size: 304386263 dataset_size: 994661567 - config_name: translated_wikiqa features: - name: targets dtype: string - name: id dtype: int64 - name: split dtype: string - name: sub_dataset_name dtype: string - name: task_type dtype: string - name: inputs dtype: string - name: template_id dtype: int64 - name: language dtype: string - name: script dtype: string - name: dataset_name dtype: string splits: - name: train num_bytes: 717832 num_examples: 1040 download_size: 258651 dataset_size: 717832 - config_name: translated_xlel_wd features: - name: targets dtype: string - name: id dtype: int64 - name: split dtype: string - name: sub_dataset_name dtype: string - name: task_type dtype: string - name: inputs dtype: string - name: template_id dtype: int64 - name: language dtype: string - name: script dtype: string - name: dataset_name dtype: string splits: - 
name: train num_bytes: 837038415 num_examples: 523112 download_size: 308592573 dataset_size: 837038415 configs: - config_name: complete_dataset data_files: - split: train path: complete_dataset/train-* - config_name: templated_hindi_headline data_files: - split: train path: templated_hindi_headline/train-* - config_name: templated_hindi_news data_files: - split: train path: templated_hindi_news/train-* - config_name: templated_indic_paraphrase data_files: - split: train path: templated_indic_paraphrase/train-* - config_name: templated_indic_sentiment data_files: - split: train path: templated_indic_sentiment/train-* - config_name: templated_mintaka data_files: - split: train path: templated_mintaka/train-* - config_name: templated_ntx_llm data_files: - split: train path: templated_ntx_llm/train-* - config_name: templated_xlel_wd data_files: - split: train path: templated_xlel_wd/train-* - config_name: translated_adversarial_qa data_files: - split: train path: translated_adversarial_qa/train-* - config_name: translated_cnn_dailymail data_files: - split: train path: translated_cnn_dailymail/train-* - config_name: translated_dolly data_files: - split: train path: translated_dolly/train-* - config_name: translated_flan_coqa data_files: - split: train path: translated_flan_coqa/train-* - config_name: translated_flan_cot data_files: - split: train path: translated_flan_cot/train-* - config_name: translated_flan_gem_wiki data_files: - split: train path: translated_flan_gem_wiki/train-* - config_name: translated_flan_lambada data_files: - split: train path: translated_flan_lambada/train-* - config_name: translated_flan_qa data_files: - split: train path: translated_flan_qa/train-* - config_name: translated_hotpotqa data_files: - split: train path: translated_hotpotqa/train-* - config_name: translated_joke_explaination data_files: - split: train path: translated_joke_explaination/train-* - config_name: translated_mintaka data_files: - split: train path: 
translated_mintaka/train-* - config_name: translated_nqopen data_files: - split: train path: translated_nqopen/train-* - config_name: translated_paws data_files: - split: train path: translated_paws/train-* - config_name: translated_piqa data_files: - split: train path: translated_piqa/train-* - config_name: translated_soda data_files: - split: train path: translated_soda/train-* - config_name: translated_wiki_split data_files: - split: train path: translated_wiki_split/train-* - config_name: translated_wikiqa data_files: - split: train path: translated_wikiqa/train-* - config_name: translated_xlel_wd data_files: - split: train path: translated_xlel_wd/train-* license: apache-2.0 language: - en - hi size_categories: - 1M<n<10M --- # Aya_Hindi This dataset is curated from the original [Aya-Collection](https://huggingface.co/datasets/CohereForAI/aya_collection) dataset that was open-sourced by [Cohere](https://cohere.com/research) under the [Apache-2.0](https://choosealicense.com/licenses/apache-2.0/) license. The Aya Collection is a massive multilingual collection comprising 513 million instances of prompts and completions that cover a wide range of tasks. This collection uses instruction-style templates from fluent speakers and applies them to a curated list of datasets. It also includes translations of instruction-style datasets into 101 languages. The Aya Dataset, a human-curated multilingual instruction and response dataset, is part of this collection. Refer to the Aya paper for more details about the collection. ### Motivations & Intentions The original dataset is large and organized by task rather than by language. To work with a specific Indic language, one would previously have needed to download the entire dataset (~600 GB) and filter it. As we were training an Indic LLM internally, we filtered the dataset by language and curated this dataset.
You can find all the Indic-language-specific datasets [here](https://huggingface.co/collections/Cognitive-Lab/aya-indic-suite-65eaa0e34a2307f30bbd55e5). ## **Data Instances** An example of a `train` instance looks as follows: ```yaml {'id': 246001, 'inputs': 'The following query in English is taken from the geography category. What could be the answer to the question?\nWhat is the seventh tallest mountain in North America?', 'targets': 'The answer is Mount Lucania.', 'dataset_name': 'Mintaka-inst', 'sub_dataset_name': '-', 'task_type': 'question-answering', 'template_id': 3, 'language': 'eng', 'split': 'train', 'script': 'Latn' } ``` ## **Data Fields** The data fields are the same among all splits: - `id:` Unique id of the data point. - `inputs:` Prompt or input to the language model. - `targets:` Completion or output of the language model. - `dataset_name:` The name of the source dataset that the data point was taken from. - `sub_dataset_name:` If the source is a collection, this field indicates which part of that collection the data point was taken from. If it is not a collection, this field is left blank. - `task_type:` The task type that this conversation belongs to. - `template_id:` The id of the template applied to this data point. - `language:` The ISO code of the dialect of the conversation. - `script:` The script of the language. - `split:` Indicates whether the data point is part of the `train` or the `test` split. ## **Licensing Information** This dataset can be used for any purpose, whether academic or commercial, under the terms of the **[Apache 2.0](https://opensource.org/license/apache-2-0)** License. ## **Citation** ```bibtex @misc{singh2024aya, title={Aya Dataset: An Open-Access Collection for Multilingual Instruction Tuning}, author={Shivalika Singh and Freddie Vargus and Daniel Dsouza and Börje F. Karlsson and Abinaya Mahendiran and Wei-Yin Ko and Herumb Shandilya and Jay Patel and Deividas Mataciunas and Laura OMahony and Mike Zhang and Ramith Hettiarachchi and Joseph Wilson and Marina Machado and Luisa Souza Moura and Dominik Krzemiński and Hakimeh Fadaei and Irem Ergün and Ifeoma Okoh and Aisha Alaagib and Oshan Mudannayake and Zaid Alyafeai and Vu Minh Chien and Sebastian Ruder and Surya Guthikonda and Emad A. Alghamdi and Sebastian Gehrmann and Niklas Muennighoff and Max Bartolo and Julia Kreutzer and Ahmet Üstün and Marzieh Fadaee and Sara Hooker}, year={2024}, eprint={2402.06619}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
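The `dataset_info` block in the Aya_Hindi card above records a `num_examples` count for every config, so a sanity check is possible entirely offline: the per-config counts should add up to the `complete_dataset` count. A minimal sketch using the counts copied from this card:

```python
# Per-config `num_examples` values taken from the Aya_Hindi dataset_info block.
per_config = {
    "templated_hindi_headline": 94217,
    "templated_hindi_news": 42524,
    "templated_indic_paraphrase": 1001,
    "templated_indic_sentiment": 1156,
    "templated_mintaka": 56000,
    "templated_ntx_llm": 506,
    "templated_xlel_wd": 3940,
    "translated_adversarial_qa": 10000,
    "translated_cnn_dailymail": 100000,
    "translated_dolly": 14808,
    "translated_flan_coqa": 6409,
    "translated_flan_cot": 91910,
    "translated_flan_gem_wiki": 27147,
    "translated_flan_lambada": 4279,
    "translated_flan_qa": 540,
    "translated_hotpotqa": 355476,
    "translated_joke_explaination": 754,
    "translated_mintaka": 14000,
    "translated_nqopen": 175850,
    "translated_paws": 49401,
    "translated_piqa": 16113,
    "translated_soda": 1191582,
    "translated_wiki_split": 989944,
    "translated_wikiqa": 1040,
    "translated_xlel_wd": 523112,
}

total = sum(per_config.values())
print(total)  # 3771709, matching the complete_dataset num_examples
```

Since the sum equals the 3,771,709 examples reported for the `complete_dataset` config, the per-language configs partition the complete dataset with nothing dropped or double-counted.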
xModo99/civit-ai-models
xModo99
"2024-03-07T14:58:28Z"
0
0
[ "license:gpl-3.0", "region:us" ]
null
"2024-03-07T14:42:45Z"
--- license: gpl-3.0 ---
herutriana44/Drugbank_Summary_Drug_Sequence
herutriana44
"2024-03-08T14:57:08Z"
0
0
[ "license:mit", "croissant", "region:us" ]
null
"2024-03-07T14:42:52Z"
--- license: mit ---
ranimeree/NewDataSet
ranimeree
"2024-03-07T16:37:09Z"
0
0
[ "croissant", "region:us" ]
null
"2024-03-07T14:48:15Z"
--- dataset_info: features: - name: pixel_values dtype: image - name: label dtype: image splits: - name: train num_bytes: 301061865.237 num_examples: 2769 - name: validation num_bytes: 61731350.0 num_examples: 352 - name: test num_bytes: 12103649.0 num_examples: 101 download_size: 361681457 dataset_size: 374896864.237 configs: - config_name: default data_files: - split: train path: data/train-* - split: validation path: data/validation-* - split: test path: data/test-* ---
anitamaher/xdssfsefsef
anitamaher
"2024-03-07T14:56:53Z"
0
0
[ "license:cc-by-nc-sa-3.0", "region:us" ]
null
"2024-03-07T14:56:14Z"
--- license: cc-by-nc-sa-3.0 ---
yangwang825/sst2-pwws-7
yangwang825
"2024-03-07T14:56:22Z"
0
0
[ "croissant", "region:us" ]
null
"2024-03-07T14:56:19Z"
--- dataset_info: features: - name: text dtype: string - name: label dtype: int64 - name: augment dtype: string splits: - name: train num_bytes: 6901034 num_examples: 54895 - name: validation num_bytes: 110096 num_examples: 872 - name: test num_bytes: 226340 num_examples: 1821 download_size: 1965487 dataset_size: 7237470 configs: - config_name: default data_files: - split: train path: data/train-* - split: validation path: data/validation-* - split: test path: data/test-* ---
ranimeree/NewDataSetMixed
ranimeree
"2024-03-07T16:37:43Z"
0
0
[ "croissant", "region:us" ]
null
"2024-03-07T15:11:55Z"
--- dataset_info: features: - name: pixel_values dtype: image - name: label dtype: image splits: - name: train num_bytes: 395458124.688 num_examples: 2769 - name: validation num_bytes: 61731350.0 num_examples: 352 - name: test num_bytes: 12103649.0 num_examples: 101 download_size: 452227024 dataset_size: 469293123.688 configs: - config_name: default data_files: - split: train path: data/train-* - split: validation path: data/validation-* - split: test path: data/test-* ---
David-Xu/raw_datasets_sft
David-Xu
"2024-03-07T15:20:18Z"
0
0
[ "croissant", "region:us" ]
null
"2024-03-07T15:20:16Z"
--- dataset_info: features: - name: text dtype: string splits: - name: train num_bytes: 12776367 num_examples: 8222 - name: test num_bytes: 1393179 num_examples: 913 download_size: 7917740 dataset_size: 14169546 configs: - config_name: default data_files: - split: train path: data/train-* - split: test path: data/test-* ---
Mitsuki-Sakamoto/alpaca_farm-reward-model-deberta-v3-large-v2-re-preference-64-nsample-2
Mitsuki-Sakamoto
"2024-03-07T16:44:33Z"
0
0
[ "croissant", "region:us" ]
null
"2024-03-07T15:25:22Z"
--- dataset_info: - config_name: alpaca_instructions-pythia_14m_alpaca_farm_instructions_sft_constant_pa_seed_1 features: - name: instruction dtype: string - name: input dtype: string - name: output dtype: string - name: preference dtype: int64 - name: output_1 dtype: string - name: output_2 dtype: string - name: reward_model_prompt_format dtype: string - name: gen_prompt_format dtype: string - name: gen_kwargs struct: - name: do_sample dtype: bool - name: max_new_tokens dtype: int64 - name: pad_token_id dtype: int64 - name: top_k dtype: int64 - name: top_p dtype: float64 - name: reward_1 dtype: float64 - name: reward_2 dtype: float64 - name: n_samples dtype: int64 splits: - name: preference num_bytes: 25315216 num_examples: 20001 download_size: 12112309 dataset_size: 25315216 - config_name: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1 features: - name: instruction dtype: string - name: input dtype: string - name: output dtype: string - name: preference dtype: int64 - name: output_1 dtype: string - name: output_2 dtype: string - name: reward_model_prompt_format dtype: string - name: gen_prompt_format dtype: string - name: gen_kwargs struct: - name: do_sample dtype: bool - name: max_new_tokens dtype: int64 - name: pad_token_id dtype: int64 - name: top_k dtype: int64 - name: top_p dtype: float64 - name: reward_1 dtype: float64 - name: reward_2 dtype: float64 - name: n_samples dtype: int64 splits: - name: preference num_bytes: 25451634 num_examples: 20001 download_size: 12144402 dataset_size: 25451634 - config_name: alpaca_instructions-pythia_70m_alpaca_farm_instructions_sft_constant_pa_seed_1 features: - name: instruction dtype: string - name: input dtype: string - name: output dtype: string - name: preference dtype: int64 - name: output_1 dtype: string - name: output_2 dtype: string - name: reward_model_prompt_format dtype: string - name: gen_prompt_format dtype: string - name: gen_kwargs struct: - name: do_sample dtype: bool - 
name: max_new_tokens dtype: int64 - name: pad_token_id dtype: int64 - name: top_k dtype: int64 - name: top_p dtype: float64 - name: reward_1 dtype: float64 - name: reward_2 dtype: float64 - name: n_samples dtype: int64 splits: - name: preference num_bytes: 25276914 num_examples: 20001 download_size: 11799025 dataset_size: 25276914 configs: - config_name: alpaca_instructions-pythia_14m_alpaca_farm_instructions_sft_constant_pa_seed_1 data_files: - split: preference path: alpaca_instructions-pythia_14m_alpaca_farm_instructions_sft_constant_pa_seed_1/preference-* - config_name: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1 data_files: - split: preference path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/preference-* - config_name: alpaca_instructions-pythia_70m_alpaca_farm_instructions_sft_constant_pa_seed_1 data_files: - split: preference path: alpaca_instructions-pythia_70m_alpaca_farm_instructions_sft_constant_pa_seed_1/preference-* ---
davanstrien/Inflection-Benchmarks
davanstrien
"2024-03-07T15:44:37Z"
0
10
[ "license:mit", "croissant", "region:us" ]
null
"2024-03-07T15:27:37Z"
--- configs: - config_name: mt_bench_inf data_files: - split: train path: data/mt_bench_inf.jsonl - config_name: physics_gre data_files: - split: train path: data/physics_gre.jsonl - config_name: physics_gre_scored data_files: - split: train path: data/physics_gre_scored.jsonl license: mit --- Cloned from https://github.com/InflectionAI/Inflection-Benchmarks # MT-Bench Inf In `mt_bench_inf.jsonl` we release a corrected version of the [MT-Bench](https://github.com/lm-sys/FastChat?tab=readme-ov-file#evaluation) questions that we use for evaluation. Each entry has the following fields: * `question_id`: The question number * `category`: Which MT-Bench category * `turn`: A list with the turns * `reference [optional]`: A reference answer Below, we show a few examples of questions, the original GPT-4 Reference answer, and our corrected answer: ### Reasoning #104 Question: ``` David has three sisters. Each of them has one brother. How many brothers does David have? ``` GPT-4 Reference Answer: ``` David has only one brother. ``` Our Corrected Answer: ``` David has no brother. He is the one brother of his three sisters. ``` ### Math #111 Question: ``` The vertices of a triangle are at points (0, 0), (-1, 1), and (3, 3). What is the area of the triangle? ``` GPT-4 Reference Answer: ``` To find the area of the triangle, we can use the formula for the area of a triangle with coordinates A(x1, y1), B(x2, y2), and C(x3, y3): Area = (1/2) * |(x1(y2 - y3) + x2(y3 - y1) + x3(y1 - y2))| In this case, the coordinates are A(0, 0), B(-1, 1), and C(3, 3). Plugging these values into the formula, we get: Area = (1/2) * |(0(1 - 3) + (-1)(3 - 0) + 3(0 - 1))| Area = (1/2) * |(-0 + 3 - 3)| Area = (1/2) * |0| Area = 0 The area of the triangle is 0. This means that the three points are collinear, and there is no triangle formed by these points. 
``` Our Corrected Answer: ``` Area is 3 ``` Overall, we find nearly 25% of examples in the reasoning, math, and coding categories had incorrect reference solutions or had flawed premises. # Physics GRE We release four processed Physics GRE exams, a common graduate school entrance exam for physics students. * `physics_gre_scored.jsonl`: Exam GR8677 which we report results on. * `physics_gre.jsonl`: Three additional Physics GRE exams (GR9277, GR9677, and GR0177). Each entry has the following fields: * `input`: The question * `target_scores`: The correct answer choice * `has_image`: Whether or not the question has an image. We include only questions without an image in our scoring. | Model | Percentile | | ----------------------| ---------: | | Inflection-2.5 maj@8 | 85 | | Inflection-2.5 maj@32 | 95 | | GPT-4 maj@8 | 97 | ## Exam Scoring Details For the Physics GRE, each correct answer is worth 1 point and each incorrect answer results in a -0.25 reduction. To compute the score, we make the following assumption: ``` Raw_Score = Percentage_Correct - 0.25 * (1 - Percentage_Correct) ``` where `Percentage_Correct` is computed purely on questions without images. For simplicity, we do not use heuristics to allow the model not to answer. 
| Raw Score | Percentile | | -----------: | ---------: | | 81 &ndash; 100 | 98 | | 77 &ndash; 80 | 97 | | 75 &ndash; 76 | 96 | | 72 &ndash; 74 | 95 | | 71 | 94 | | 69 &ndash; 70 | 93 | | 67 &ndash; 68 | 92 | | 65 &ndash; 66 | 91 | | 64 | 90 | | 63 | 89 | | 61 &ndash; 62 | 87 | | 60 | 86 | | 59 | 85 | | 57 &ndash; 58 | 84 | | 56 | 82 | | 55 | 80 | | 53 &ndash; 54 | 78 | | 52 | 77 | | 51 | 75 | | 49 &ndash; 50 | 72 | | 48 | 70 | | 47 | 69 | | 45 &ndash; 46 | 66 | | 44 | 64 | | 43 | 62 | | 41 &ndash; 42 | 59 | | 40 | 57 | | 39 | 54 | | 37 &ndash; 38 | 52 | | 36 | 48 | | 35 | 46 | | 33 &ndash; 34 | 43 | | 32 | 41 | | 30 &ndash; 31 | 38 | | 29 | 35 | | 28 | 32 | | 26 &ndash; 27 | 30 | | 25 | 27 | | 24 | 25 | | 22 &ndash; 23 | 22 | | 21 | 20 | | 20 | 18 | | 18 &ndash; 19 | 16 | | 17 | 14 | | 16 | 12 | | 14 &ndash; 15 | 10 | | 13 | 9 | | 12 | 8 | | 10 &ndash; 11 | 6 | | 9 | 5 | | 8 | 4 | | 6 &ndash; 7 | 3 | | 5 | 2 | | 1 &ndash; 4 | 1 | | 0 | 0 |
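The two formulas in this card — the triangle-area formula from MT-Bench #111 and the raw-score rule above — can be checked with a short sketch. This is a hypothetical illustration, not the benchmark's actual scoring code; the function names and the excerpted percentile bands are ours.

```python
# Sketch of the card's two computations. Not Inflection's code; the
# percentile bands below are a small excerpt of the full table above.

def shoelace_area(pts):
    """Triangle area via the card's coordinate (shoelace) formula."""
    (x1, y1), (x2, y2), (x3, y3) = pts
    return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2

def raw_score(fraction_correct, n_questions=100):
    """Card's rule: +1 per correct answer, -0.25 per incorrect one,
    assuming every (image-free) question is answered."""
    score = fraction_correct - 0.25 * (1.0 - fraction_correct)
    return round(score * n_questions)

# (minimum raw score, percentile) pairs -- excerpt of the table above.
BANDS = [(81, 98), (77, 97), (75, 96), (72, 95), (71, 94), (69, 93)]

def percentile(raw):
    for lo, pct in BANDS:
        if raw >= lo:
            return pct
    raise ValueError("raw score below the excerpted bands")

# The corrected answer to MT-Bench #111 is area 3.
assert shoelace_area([(0, 0), (-1, 1), (3, 3)]) == 3.0
# Answering 80% of 100 questions correctly: raw score 75 -> 96th percentile.
assert percentile(raw_score(0.8)) == 96
```

The -0.25 penalty means guessing uniformly among five choices has zero expected value, which is why the card can ignore abstention heuristics without biasing the score.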
316usman/thematic3a_1
316usman
"2024-03-07T15:30:07Z"
0
0
[ "croissant", "region:us" ]
null
"2024-03-07T15:30:05Z"
--- dataset_info: features: - name: text dtype: string - name: document_url dtype: string - name: source_url dtype: string - name: country dtype: string splits: - name: train num_bytes: 88423220 num_examples: 142941 download_size: 31971043 dataset_size: 88423220 configs: - config_name: default data_files: - split: train path: data/train-* ---
argilla/10k_prompts_SPIN_iter2_zephyr_top
argilla
"2024-03-07T15:39:13Z"
0
0
[ "croissant", "region:us" ]
null
"2024-03-07T15:39:08Z"
--- dataset_info: features: - name: generated list: - name: content dtype: string - name: role dtype: string - name: real list: - name: content dtype: string - name: role dtype: string splits: - name: train num_bytes: 9187059.362445414 num_examples: 1648 - name: test num_bytes: 1025739.6375545851 num_examples: 184 download_size: 5809205 dataset_size: 10212799.0 configs: - config_name: default data_files: - split: train path: data/train-* - split: test path: data/test-* ---
kaushik-123/orca_deduplicated_dataset
kaushik-123
"2024-03-07T16:14:38Z"
0
0
[ "license:mit", "croissant", "region:us" ]
null
"2024-03-07T15:40:26Z"
--- license: mit ---
yangwang825/sst2-textbugger-7
yangwang825
"2024-03-07T15:41:08Z"
0
0
[ "croissant", "region:us" ]
null
"2024-03-07T15:41:05Z"
--- dataset_info: features: - name: text dtype: string - name: label dtype: int64 - name: augment dtype: string splits: - name: train num_bytes: 7067172 num_examples: 53134 - name: validation num_bytes: 110096 num_examples: 872 - name: test num_bytes: 226340 num_examples: 1821 download_size: 1839479 dataset_size: 7403608 configs: - config_name: default data_files: - split: train path: data/train-* - split: validation path: data/validation-* - split: test path: data/test-* ---
wesslen/ecfr-title-12
wesslen
"2024-05-29T16:59:40Z"
0
0
[ "language:en", "croissant", "region:us" ]
null
"2024-03-07T15:47:23Z"
--- language: - en dataset_info: features: - name: text dtype: string - name: meta struct: - name: chapter sequence: string - name: chapter_title sequence: string - name: subchapter sequence: string - name: subchapter_title sequence: string - name: part sequence: string - name: part_title sequence: string - name: section sequence: string - name: section_title sequence: string splits: - name: train num_bytes: 16669304 num_examples: 4665 download_size: 5913311 dataset_size: 16669304 configs: - config_name: default data_files: - split: train path: data/train-* ---
ardaorcun/instruct-data
ardaorcun
"2024-03-07T15:53:05Z"
0
0
[ "croissant", "region:us" ]
null
"2024-03-07T15:53:02Z"
--- dataset_info: features: - name: instruction dtype: string - name: output dtype: string splits: - name: train num_bytes: 3777246.430289243 num_examples: 2153 - name: test num_bytes: 1621075.5697107571 num_examples: 924 download_size: 3040637 dataset_size: 5398322.0 configs: - config_name: default data_files: - split: train path: data/train-* - split: test path: data/test-* ---
aagoluoglu/AI_HW3_detections_w_vectors
aagoluoglu
"2024-03-08T17:01:11Z"
0
0
[ "croissant", "region:us" ]
null
"2024-03-07T15:54:20Z"
--- dataset_info: features: - name: video_id dtype: string - name: frame_num dtype: int64 - name: timestamp dtype: float64 - name: detected_obj_id dtype: int64 - name: detected_obj_class dtype: int64 - name: confidence dtype: float32 - name: bbox_info sequence: float32 - name: vector sequence: float32 splits: - name: train num_bytes: 9226201 num_examples: 1111 download_size: 7008285 dataset_size: 9226201 configs: - config_name: default data_files: - split: train path: data/train-* ---
Arnaldo34/Cria
Arnaldo34
"2024-03-07T15:55:21Z"
0
0
[ "license:openrail", "croissant", "region:us" ]
null
"2024-03-07T15:54:56Z"
--- license: openrail ---
sjcrz/synth-sky-images-large
sjcrz
"2024-03-08T16:28:07Z"
0
0
[ "license:mit", "croissant", "region:us" ]
null
"2024-03-07T16:08:53Z"
--- license: mit dataset_info: features: - name: image dtype: image - name: time dtype: string - name: ghi dtype: float64 - name: dni dtype: float64 - name: dhi dtype: float64 splits: - name: train num_bytes: 1454459751.36 num_examples: 20160 - name: test num_bytes: 303063516.0 num_examples: 4200 download_size: 218282716 dataset_size: 1757523267.36 configs: - config_name: default data_files: - split: train path: data/train-* - split: test path: data/test-* ---
DIBT/prompts_ranked_multilingual_benchmark
DIBT
"2024-03-21T12:36:59Z"
0
1
[ "croissant", "region:us" ]
null
"2024-03-07T16:14:52Z"
--- dataset_info: features: - name: prompt dtype: string - name: quality list: - name: status dtype: string - name: user_id dtype: string - name: value dtype: string - name: metadata dtype: string - name: avg_rating dtype: float64 - name: num_responses dtype: int64 - name: agreement_ratio dtype: float64 - name: raw_responses sequence: int64 - name: kind dtype: string - name: cluster_description dtype: string - name: topic dtype: string - name: row_idx dtype: int64 splits: - name: train num_bytes: 460915 num_examples: 501 download_size: 195434 dataset_size: 460915 configs: - config_name: default data_files: - split: train path: data/train-* ---
Clonador/mckk
Clonador
"2024-03-07T16:41:34Z"
0
0
[ "license:openrail", "croissant", "region:us" ]
null
"2024-03-07T16:40:34Z"
--- license: openrail ---
JaehyungKim/p2c_cola
JaehyungKim
"2024-03-07T18:06:57Z"
0
0
[ "license:other", "croissant", "region:us" ]
null
"2024-03-07T16:52:58Z"
--- license: other license_name: refer-original-dataset license_link: LICENSE ---
deepghs/character_index
deepghs
"2024-06-10T10:54:12Z"
0
3
[ "license:mit", "croissant", "region:us" ]
null
"2024-03-07T17:00:24Z"
--- license: mit --- # Anime Character Index This dataset is for collecting all the hot characters from the internet and extracting their features and core tags. It will be useful for **automatically testing the character-generating ability of anime-style base models**. 3708 characters in total. ## Copyrights | Copyright | Count | |:----------------------------------------------------------------------------------------------------------------------------------|--------:| | [kantai_collection](pages/kantai_collection.md) | 282 | | [fate_(series)](pages/fate_series.md) | 216 | | [pokemon](pages/pokemon.md) | 199 | | [hololive](pages/hololive.md) | 156 | | [touhou](pages/touhou.md) | 149 | | [blue_archive](pages/blue_archive.md) | 137 | | [idolmaster](pages/idolmaster.md) | 136 | | [arknights](pages/arknights.md) | 96 | | [genshin_impact](pages/genshin_impact.md) | 96 | | [azur_lane](pages/azur_lane.md) | 83 | | [umamusume](pages/umamusume.md) | 78 | | [fire_emblem](pages/fire_emblem.md) | 66 | | [precure](pages/precure.md) | 66 | | [nijisanji](pages/nijisanji.md) | 54 | | [girls_und_panzer](pages/girls_und_panzer.md) | 51 | | [jojo_no_kimyou_na_bouken](pages/jojo_no_kimyou_na_bouken.md) | 44 | | [danganronpa_(series)](pages/danganronpa_series.md) | 39 | | [honkai_(series)](pages/honkai_series.md) | 37 | | [love_live!](pages/love_live.md) | 37 | | [girls'_frontline](pages/girls_frontline.md) | 36 | | [final_fantasy](pages/final_fantasy.md) | 35 | | [fate/grand_order](pages/fate_grand_order.md) | 34 | | [kemono_friends](pages/kemono_friends.md) | 34 | | [vocaloid](pages/vocaloid.md) | 30 | | [granblue_fantasy](pages/granblue_fantasy.md) | 29 | | [persona](pages/persona.md) | 28 | | [bang_dream!](pages/bang_dream.md) | 24 | | [touken_ranbu](pages/touken_ranbu.md) | 22 | | [gundam](pages/gundam.md) | 21 | | [bishoujo_senshi_sailor_moon](pages/bishoujo_senshi_sailor_moon.md) | 20 | | [league_of_legends](pages/league_of_legends.md) | 19 | | 
[lyrical_nanoha](pages/lyrical_nanoha.md) | 19 | | [boku_no_hero_academia](pages/boku_no_hero_academia.md) | 17 | | [dragon_ball](pages/dragon_ball.md) | 17 | | [honkai:_star_rail](pages/honkai_star_rail.md) | 17 | | [mahou_shoujo_madoka_magica](pages/mahou_shoujo_madoka_magica.md) | 16 | | [original](pages/original.md) | 16 | | [princess_connect!](pages/princess_connect.md) | 15 | | [project_sekai](pages/project_sekai.md) | 15 | | [xenoblade_chronicles_(series)](pages/xenoblade_chronicles_series.md) | 15 | | [yu-gi-oh!](pages/yu_gi_oh.md) | 15 | | [chainsaw_man](pages/chainsaw_man.md) | 14 | | [guilty_gear](pages/guilty_gear.md) | 14 | | [one_piece](pages/one_piece.md) | 14 | | [splatoon_(series)](pages/splatoon_series.md) | 14 | | [umineko_no_naku_koro_ni](pages/umineko_no_naku_koro_ni.md) | 14 | | [shingeki_no_kyojin](pages/shingeki_no_kyojin.md) | 13 | | [sword_art_online](pages/sword_art_online.md) | 13 | | [tales_of_(series)](pages/tales_of_series.md) | 13 | | [blazblue](pages/blazblue.md) | 12 | | [mario_(series)](pages/mario_series.md) | 12 | | [monogatari_(series)](pages/monogatari_series.md) | 12 | | [neptune_(series)](pages/neptune_series.md) | 12 | | [street_fighter](pages/street_fighter.md) | 12 | | [toaru_majutsu_no_index](pages/toaru_majutsu_no_index.md) | 12 | | [world_witches_series](pages/world_witches_series.md) | 12 | | [jujutsu_kaisen](pages/jujutsu_kaisen.md) | 11 | | [naruto_(series)](pages/naruto_series.md) | 11 | | [overwatch](pages/overwatch.md) | 11 | | [the_legend_of_zelda](pages/the_legend_of_zelda.md) | 11 | | [dragon_quest](pages/dragon_quest.md) | 10 | | [kagerou_project](pages/kagerou_project.md) | 10 | | [kill_la_kill](pages/kill_la_kill.md) | 10 | | [project_moon](pages/project_moon.md) | 10 | | [dungeon_meshi](pages/dungeon_meshi.md) | 9 | | [gochuumon_wa_usagi_desu_ka?](pages/gochuumon_wa_usagi_desu_ka.md) | 9 | | [inazuma_eleven_(series)](pages/inazuma_eleven_series.md) | 9 | | [k-on!](pages/k_on.md) | 9 | | 
[kimetsu_no_yaiba](pages/kimetsu_no_yaiba.md) | 9 | | [marvel](pages/marvel.md) | 9 | | [mega_man_(series)](pages/mega_man_series.md) | 9 | | [tsukihime](pages/tsukihime.md) | 9 | | [code_geass](pages/code_geass.md) | 8 | | [helltaker](pages/helltaker.md) | 8 | | [little_busters!](pages/little_busters.md) | 8 | | [rozen_maiden](pages/rozen_maiden.md) | 8 | | [voiceroid](pages/voiceroid.md) | 8 | | [axis_powers_hetalia](pages/axis_powers_hetalia.md) | 7 | | [bocchi_the_rock!](pages/bocchi_the_rock.md) | 7 | | [clannad](pages/clannad.md) | 7 | | [hibike!_euphonium](pages/hibike_euphonium.md) | 7 | | [high_school_dxd](pages/high_school_dxd.md) | 7 | | [kono_subarashii_sekai_ni_shukufuku_wo!](pages/kono_subarashii_sekai_ni_shukufuku_wo.md) | 7 | | [lucky_star](pages/lucky_star.md) | 7 | | [macross](pages/macross.md) | 7 | | [neon_genesis_evangelion](pages/neon_genesis_evangelion.md) | 7 | | [omori](pages/omori.md) | 7 | | [senki_zesshou_symphogear](pages/senki_zesshou_symphogear.md) | 7 | | [sonic_(series)](pages/sonic_series.md) | 7 | | [suzumiya_haruhi_no_yuuutsu](pages/suzumiya_haruhi_no_yuuutsu.md) | 7 | | [to_love-ru](pages/to_love_ru.md) | 7 | | [yuru_yuri](pages/yuru_yuri.md) | 7 | | [zombie_land_saga](pages/zombie_land_saga.md) | 7 | | [bleach](pages/bleach.md) | 6 | | [elsword](pages/elsword.md) | 6 | | [golden_kamuy](pages/golden_kamuy.md) | 6 | | [higurashi_no_naku_koro_ni](pages/higurashi_no_naku_koro_ni.md) | 6 | | [kobayashi-san_chi_no_maidragon](pages/kobayashi_san_chi_no_maidragon.md) | 6 | | [onii-chan_wa_oshimai!](pages/onii_chan_wa_oshimai.md) | 6 | | [re:zero_kara_hajimeru_isekai_seikatsu](pages/re_zero_kara_hajimeru_isekai_seikatsu.md) | 6 | | [rwby](pages/rwby.md) | 6 | | [skullgirls](pages/skullgirls.md) | 6 | | [tiger_&_bunny](pages/tiger_bunny.md) | 6 | | [tokyo_afterschool_summoners](pages/tokyo_afterschool_summoners.md) | 6 | | [ace_attorney](pages/ace_attorney.md) | 5 | | [aikatsu!_(series)](pages/aikatsu_series.md) | 5 | | 
[angel_beats!](pages/angel_beats.md) | 5 | | [apex_legends](pages/apex_legends.md) | 5 | | [aria_(manga)](pages/aria_manga.md) | 5 | | [fullmetal_alchemist](pages/fullmetal_alchemist.md) | 5 | | [go-toubun_no_hanayome](pages/go_toubun_no_hanayome.md) | 5 | | [hunter_x_hunter](pages/hunter_x_hunter.md) | 5 | | [infinite_stratos](pages/infinite_stratos.md) | 5 | | [kingdom_hearts](pages/kingdom_hearts.md) | 5 | | [luo_xiaohei_zhanji](pages/luo_xiaohei_zhanji.md) | 5 | | [oshi_no_ko](pages/oshi_no_ko.md) | 5 | | [panty_&_stocking_with_garterbelt](pages/panty_stocking_with_garterbelt.md) | 5 | | [resident_evil](pages/resident_evil.md) | 5 | | [senran_kagura](pages/senran_kagura.md) | 5 | | [sousou_no_frieren](pages/sousou_no_frieren.md) | 5 | | [spy_x_family](pages/spy_x_family.md) | 5 | | [tengen_toppa_gurren_lagann](pages/tengen_toppa_gurren_lagann.md) | 5 | | [the_king_of_fighters](pages/the_king_of_fighters.md) | 5 | | [watashi_ga_motenai_no_wa_dou_kangaetemo_omaera_ga_warui!](pages/watashi_ga_motenai_no_wa_dou_kangaetemo_omaera_ga_warui.md) | 5 | | [amagami](pages/amagami.md) | 4 | | [cardcaptor_sakura](pages/cardcaptor_sakura.md) | 4 | | [dead_or_alive](pages/dead_or_alive.md) | 4 | | [doki_doki_literature_club](pages/doki_doki_literature_club.md) | 4 | | [gintama](pages/gintama.md) | 4 | | [gridman_universe](pages/gridman_universe.md) | 4 | | [houseki_no_kuni](pages/houseki_no_kuni.md) | 4 | | [kaguya-sama_wa_kokurasetai_~tensai-tachi_no_renai_zunousen~](pages/kaguya_sama_wa_kokurasetai_tensai_tachi_no_renai_zunousen.md) | 4 | | [link!_like!_love_live!](pages/link_like_love_live.md) | 4 | | [magia_record:_mahou_shoujo_madoka_magica_gaiden](pages/magia_record_mahou_shoujo_madoka_magica_gaiden.md) | 4 | | [maria-sama_ga_miteru](pages/maria_sama_ga_miteru.md) | 4 | | [monster_musume_no_iru_nichijou](pages/monster_musume_no_iru_nichijou.md) | 4 | | [mushoku_tensei](pages/mushoku_tensei.md) | 4 | | [nanashi_inc.](pages/nanashi_inc.md) | 4 | | 
[nichijou](pages/nichijou.md) | 4 | | [nier_(series)](pages/nier_series.md) | 4 | | [os-tan](pages/os_tan.md) | 4 | | [ragnarok_online](pages/ragnarok_online.md) | 4 | | [saki](pages/saki.md) | 4 | | [steins;gate](pages/steins_gate.md) | 4 | | [tekken](pages/tekken.md) | 4 | | [to_heart_(series)](pages/to_heart_series.md) | 4 | | [vampire_(game)](pages/vampire_game.md) | 4 | | [watashi_ni_tenshi_ga_maiorita!](pages/watashi_ni_tenshi_ga_maiorita.md) | 4 | | [yahari_ore_no_seishun_lovecome_wa_machigatteiru.](pages/yahari_ore_no_seishun_lovecome_wa_machigatteiru.md) | 4 | | [yurucamp](pages/yurucamp.md) | 4 | | [aldnoah.zero](pages/aldnoah_zero.md) | 3 | | [alice_in_wonderland](pages/alice_in_wonderland.md) | 3 | | [animal_crossing](pages/animal_crossing.md) | 3 | | [assault_lily](pages/assault_lily.md) | 3 | | [atelier_(series)](pages/atelier_series.md) | 3 | | [black_rock_shooter](pages/black_rock_shooter.md) | 3 | | [bloodborne](pages/bloodborne.md) | 3 | | [boku_wa_tomodachi_ga_sukunai](pages/boku_wa_tomodachi_ga_sukunai.md) | 3 | | [chuunibyou_demo_koi_ga_shitai!](pages/chuunibyou_demo_koi_ga_shitai.md) | 3 | | [cyberpunk_(series)](pages/cyberpunk_series.md) | 3 | | [darker_than_black](pages/darker_than_black.md) | 3 | | [darling_in_the_franxx](pages/darling_in_the_franxx.md) | 3 | | [digimon](pages/digimon.md) | 3 | | [disgaea](pages/disgaea.md) | 3 | | [dokidoki!_precure](pages/dokidoki_precure.md) | 3 | | [durarara!!](pages/durarara.md) | 3 | | [elden_ring](pages/elden_ring.md) | 3 | | [gegege_no_kitarou](pages/gegege_no_kitarou.md) | 3 | | [goddess_of_victory:_nikke](pages/goddess_of_victory_nikke.md) | 3 | | [happinesscharge_precure!](pages/happinesscharge_precure.md) | 3 | | [hyouka](pages/hyouka.md) | 3 | | [ib](pages/ib.md) | 3 | | [inuyasha](pages/inuyasha.md) | 3 | | [kanon](pages/kanon.md) | 3 | | [little_witch_academia](pages/little_witch_academia.md) | 3 | | [machikado_mazoku](pages/machikado_mazoku.md) | 3 | | [made_in_abyss](pages/made_in_abyss.md) 
| 3 | | [mahou_girls_precure!](pages/mahou_girls_precure.md) | 3 | | [meitantei_conan](pages/meitantei_conan.md) | 3 | | [monster_hunter_(series)](pages/monster_hunter_series.md) | 3 | | [one-punch_man](pages/one_punch_man.md) | 3 | | [ore_no_imouto_ga_konna_ni_kawaii_wake_ga_nai](pages/ore_no_imouto_ga_konna_ni_kawaii_wake_ga_nai.md) | 3 | | [osomatsu-san](pages/osomatsu_san.md) | 3 | | [puyopuyo](pages/puyopuyo.md) | 3 | | [ranma_1/2](pages/ranma_1_2.md) | 3 | | [saenai_heroine_no_sodatekata](pages/saenai_heroine_no_sodatekata.md) | 3 | | [sanrio](pages/sanrio.md) | 3 | | [shoujo_kageki_revue_starlight](pages/shoujo_kageki_revue_starlight.md) | 3 | | [toradora!](pages/toradora.md) | 3 | | [undertale](pages/undertale.md) | 3 | | [working!!](pages/working.md) | 3 | | [yuri!!!_on_ice](pages/yuri_on_ice.md) | 3 | | [yuyushiki](pages/yuyushiki.md) | 3 | | [ano_hi_mita_hana_no_namae_wo_bokutachi_wa_mada_shiranai.](pages/ano_hi_mita_hana_no_namae_wo_bokutachi_wa_mada_shiranai.md) | 2 | | [berserk](pages/berserk.md) | 2 | | [call_of_duty](pages/call_of_duty.md) | 2 | | [cloud_nine_inc](pages/cloud_nine_inc.md) | 2 | | [cowboy_bebop](pages/cowboy_bebop.md) | 2 | | [date_a_live](pages/date_a_live.md) | 2 | | [dc_comics](pages/dc_comics.md) | 2 | | [devil_may_cry_(series)](pages/devil_may_cry_series.md) | 2 | | [di_gi_charat](pages/di_gi_charat.md) | 2 | | [dragon's_crown](pages/dragon_s_crown.md) | 2 | | [eromanga_sensei](pages/eromanga_sensei.md) | 2 | | [fairy_tail](pages/fairy_tail.md) | 2 | | [fatal_fury](pages/fatal_fury.md) | 2 | | [frozen_(disney)](pages/frozen_disney.md) | 2 | | [gabriel_dropout](pages/gabriel_dropout.md) | 2 | | [galaxy_angel](pages/galaxy_angel.md) | 2 | | [go!_princess_precure](pages/go_princess_precure.md) | 2 | | [goblin_slayer!](pages/goblin_slayer.md) | 2 | | [hataraku_saibou](pages/hataraku_saibou.md) | 2 | | [hayate_no_gotoku!](pages/hayate_no_gotoku.md) | 2 | | [heartcatch_precure!](pages/heartcatch_precure.md) | 2 | | 
[hidamari_sketch](pages/hidamari_sketch.md) | 2 | | [indie_virtual_youtuber](pages/indie_virtual_youtuber.md) | 2 | | [kamitsubaki_studio](pages/kamitsubaki_studio.md) | 2 | | [kid_icarus](pages/kid_icarus.md) | 2 | | [kill_me_baby](pages/kill_me_baby.md) | 2 | | [kin-iro_mosaic](pages/kin_iro_mosaic.md) | 2 | | [len'en](pages/len_en.md) | 2 | | [love_plus](pages/love_plus.md) | 2 | | [lycoris_recoil](pages/lycoris_recoil.md) | 2 | | [mahou_sensei_negima!](pages/mahou_sensei_negima.md) | 2 | | [mahou_shoujo_ni_akogarete](pages/mahou_shoujo_ni_akogarete.md) | 2 | | [mahou_tsukai_no_yoru](pages/mahou_tsukai_no_yoru.md) | 2 | | [majo_no_takkyuubin](pages/majo_no_takkyuubin.md) | 2 | | [mawaru_penguindrum](pages/mawaru_penguindrum.md) | 2 | | [metroid](pages/metroid.md) | 2 | | [mob_psycho_100](pages/mob_psycho_100.md) | 2 | | [my-hime](pages/my_hime.md) | 2 | | [nagi_no_asukara](pages/nagi_no_asukara.md) | 2 | | [needy_girl_overdose](pages/needy_girl_overdose.md) | 2 | | [nekopara](pages/nekopara.md) | 2 | | [new_game!](pages/new_game.md) | 2 | | [nitroplus](pages/nitroplus.md) | 2 | | [pretty_series](pages/pretty_series.md) | 2 | | [promare](pages/promare.md) | 2 | | [punishing:_gray_raven](pages/punishing_gray_raven.md) | 2 | | [reverse:1999](pages/reverse_1999.md) | 2 | | [ryuuou_no_oshigoto!](pages/ryuuou_no_oshigoto.md) | 2 | | [saibou_shinkyoku](pages/saibou_shinkyoku.md) | 2 | | [samurai_spirits](pages/samurai_spirits.md) | 2 | | [sayonara_zetsubou_sensei](pages/sayonara_zetsubou_sensei.md) | 2 | | [sekai_seifuku:_bouryaku_no_zvezda](pages/sekai_seifuku_bouryaku_no_zvezda.md) | 2 | | [senpai_ga_uzai_kouhai_no_hanashi](pages/senpai_ga_uzai_kouhai_no_hanashi.md) | 2 | | [shakugan_no_shana](pages/shakugan_no_shana.md) | 2 | | [shoujo_kakumei_utena](pages/shoujo_kakumei_utena.md) | 2 | | [sono_bisque_doll_wa_koi_wo_suru](pages/sono_bisque_doll_wa_koi_wo_suru.md) | 2 | | [tears_of_themis](pages/tears_of_themis.md) | 2 | | [tokyo_ghoul](pages/tokyo_ghoul.md) | 2 | | 
[trigun](pages/trigun.md) | 2 | | [uzaki-chan_wa_asobitai!](pages/uzaki_chan_wa_asobitai.md) | 2 | | [vshojo](pages/vshojo.md) | 2 | | [yama_no_susume](pages/yama_no_susume.md) | 2 | | [yuuki_bakuhatsu_bang_bravern](pages/yuuki_bakuhatsu_bang_bravern.md) | 2 | | [.live](pages/live.md) | 1 | | [a.i._voice](pages/a_i_voice.md) | 1 | | [aa_megami-sama](pages/aa_megami_sama.md) | 1 | | [accel_world](pages/accel_world.md) | 1 | | [air_(visual_novel)](pages/air_visual_novel.md) | 1 | | [amagi_brilliant_park](pages/amagi_brilliant_park.md) | 1 | | [aoki_hagane_no_arpeggio](pages/aoki_hagane_no_arpeggio.md) | 1 | | [arms_(game)](pages/arms_game.md) | 1 | | [avatar_legends](pages/avatar_legends.md) | 1 | | [azumanga_daioh](pages/azumanga_daioh.md) | 1 | | [baldur's_gate](pages/baldur_s_gate.md) | 1 | | [bayonetta_(series)](pages/bayonetta_series.md) | 1 | | [black_lagoon](pages/black_lagoon.md) | 1 | | [blend_s](pages/blend_s.md) | 1 | | [boku_no_kokoro_no_yabai_yatsu](pages/boku_no_kokoro_no_yabai_yatsu.md) | 1 | | [bombergirl](pages/bombergirl.md) | 1 | | [brand_new_animal](pages/brand_new_animal.md) | 1 | | [brave_witches](pages/brave_witches.md) | 1 | | [capcom_fighting_jam](pages/capcom_fighting_jam.md) | 1 | | [charlotte_(anime)](pages/charlotte_anime.md) | 1 | | [chrono_trigger](pages/chrono_trigger.md) | 1 | | [cookie_(touhou)](pages/cookie_touhou.md) | 1 | | [dagashi_kashi](pages/dagashi_kashi.md) | 1 | | [delicious_party_precure](pages/delicious_party_precure.md) | 1 | | [deltarune](pages/deltarune.md) | 1 | | [dennou_coil](pages/dennou_coil.md) | 1 | | [denpa_onna_to_seishun_otoko](pages/denpa_onna_to_seishun_otoko.md) | 1 | | [disney](pages/disney.md) | 1 | | [dorohedoro](pages/dorohedoro.md) | 1 | | [douluo_dalu](pages/douluo_dalu.md) | 1 | | [dungeon_ni_deai_wo_motomeru_no_wa_machigatteiru_darou_ka](pages/dungeon_ni_deai_wo_motomeru_no_wa_machigatteiru_darou_ka.md) | 1 | | [eureka_seven_(series)](pages/eureka_seven_series.md) | 1 | | 
[final_fight](pages/final_fight.md) | 1 | | [free!](pages/free.md) | 1 | | [fresh_precure!](pages/fresh_precure.md) | 1 | | [fukumoto_mahjong](pages/fukumoto_mahjong.md) | 1 | | [fushigi_no_umi_no_nadia](pages/fushigi_no_umi_no_nadia.md) | 1 | | [gakuen_idolmaster](pages/gakuen_idolmaster.md) | 1 | | [ganbare_douki-chan](pages/ganbare_douki_chan.md) | 1 | | [gate_-_jieitai_ka_no_chi_nite_kaku_tatakaeri](pages/gate_jieitai_ka_no_chi_nite_kaku_tatakaeri.md) | 1 | | [gekkan_shoujo_nozaki-kun](pages/gekkan_shoujo_nozaki_kun.md) | 1 | | [getsuyoubi_no_tawawa](pages/getsuyoubi_no_tawawa.md) | 1 | | [ghost_in_the_shell](pages/ghost_in_the_shell.md) | 1 | | [girls_band_cry](pages/girls_band_cry.md) | 1 | | [god_eater](pages/god_eater.md) | 1 | | [gosick](pages/gosick.md) | 1 | | [gravity_daze](pages/gravity_daze.md) | 1 | | [guilty_crown](pages/guilty_crown.md) | 1 | | [hacka_doll](pages/hacka_doll.md) | 1 | | [haiyore!_nyaruko-san](pages/haiyore_nyaruko_san.md) | 1 | | [hataraku_maou-sama!](pages/hataraku_maou_sama.md) | 1 | | [healin'_good_precure](pages/healin_good_precure.md) | 1 | | [hellsing](pages/hellsing.md) | 1 | | [highschool_of_the_dead](pages/highschool_of_the_dead.md) | 1 | | [hinata_channel](pages/hinata_channel.md) | 1 | | [hirogaru_sky!_precure](pages/hirogaru_sky_precure.md) | 1 | | [holostars](pages/holostars.md) | 1 | | [ichigo_mashimaro](pages/ichigo_mashimaro.md) | 1 | | [ijiranaide_nagatoro-san](pages/ijiranaide_nagatoro_san.md) | 1 | | [ikkitousen](pages/ikkitousen.md) | 1 | | [inu_x_boku_ss](pages/inu_x_boku_ss.md) | 1 | | [jigoku_shoujo](pages/jigoku_shoujo.md) | 1 | | [journey_to_the_west](pages/journey_to_the_west.md) | 1 | | [kaiji](pages/kaiji.md) | 1 | | [kannagi](pages/kannagi.md) | 1 | | [kanojo_okarishimasu](pages/kanojo_okarishimasu.md) | 1 | | [kara_no_kyoukai](pages/kara_no_kyoukai.md) | 1 | | [karakai_jouzu_no_takagi-san](pages/karakai_jouzu_no_takagi_san.md) | 1 | | [katawa_shoujo](pages/katawa_shoujo.md) | 1 | | 
[katekyo_hitman_reborn!](pages/katekyo_hitman_reborn.md) | 1 | | [kidou_senkan_nadesico](pages/kidou_senkan_nadesico.md) | 1 | | [kimi_no_na_wa.](pages/kimi_no_na_wa.md) | 1 | | [kino_no_tabi](pages/kino_no_tabi.md) | 1 | | [kizuna_ai_inc.](pages/kizuna_ai_inc.md) | 1 | | [kodomo_no_jikan](pages/kodomo_no_jikan.md) | 1 | | [komi-san_wa_komyushou_desu](pages/komi_san_wa_komyushou_desu.md) | 1 | | [koutetsujou_no_kabaneri](pages/koutetsujou_no_kabaneri.md) | 1 | | [kyoukai_no_kanata](pages/kyoukai_no_kanata.md) | 1 | | [limbus_company](pages/limbus_company.md) | 1 | | [little_red_riding_hood](pages/little_red_riding_hood.md) | 1 | | [little_witch_nobeta](pages/little_witch_nobeta.md) | 1 | | [lord_of_the_mysteries](pages/lord_of_the_mysteries.md) | 1 | | [mabinogi](pages/mabinogi.md) | 1 | | [magi_the_labyrinth_of_magic](pages/magi_the_labyrinth_of_magic.md) | 1 | | [majo_no_tabitabi](pages/majo_no_tabitabi.md) | 1 | | [maoyuu_maou_yuusha](pages/maoyuu_maou_yuusha.md) | 1 | | [metal_gear_(series)](pages/metal_gear_series.md) | 1 | | [metal_slug](pages/metal_slug.md) | 1 | | [minecraft](pages/minecraft.md) | 1 | | [miraculous_ladybug](pages/miraculous_ladybug.md) | 1 | | [mirai_nikki](pages/mirai_nikki.md) | 1 | | [mononoke_hime](pages/mononoke_hime.md) | 1 | | [mother_(game)](pages/mother_game.md) | 1 | | [musaigen_no_phantom_world](pages/musaigen_no_phantom_world.md) | 1 | | [nanatsu_no_taizai](pages/nanatsu_no_taizai.md) | 1 | | [new_horizon](pages/new_horizon.md) | 1 | | [nier:automata](pages/nier_automata.md) | 1 | | [nisekoi](pages/nisekoi.md) | 1 | | [no_game_no_life](pages/no_game_no_life.md) | 1 | | [odin_sphere](pages/odin_sphere.md) | 1 | | [ookami_(game)](pages/ookami_game.md) | 1 | | [oshiete!_galko-chan](pages/oshiete_galko_chan.md) | 1 | | [overlord_(maruyama)](pages/overlord_maruyama.md) | 1 | | [pangya](pages/pangya.md) | 1 | | [phantasy_star](pages/phantasy_star.md) | 1 | | [princess_principal](pages/princess_principal.md) | 1 | | 
[queen's_blade](pages/queen_s_blade.md) | 1 | | [rakuen_tsuihou](pages/rakuen_tsuihou.md) | 1 | | [ryuuko_no_ken](pages/ryuuko_no_ken.md) | 1 | | [sana_channel](pages/sana_channel.md) | 1 | | [saya_no_uta](pages/saya_no_uta.md) | 1 | | [scott_pilgrim_(series)](pages/scott_pilgrim_series.md) | 1 | | [seiken_densetsu](pages/seiken_densetsu.md) | 1 | | [seishun_buta_yarou](pages/seishun_buta_yarou.md) | 1 | | [sen_to_chihiro_no_kamikakushi](pages/sen_to_chihiro_no_kamikakushi.md) | 1 | | [senjou_no_valkyria_(series)](pages/senjou_no_valkyria_series.md) | 1 | | [serial_experiments_lain](pages/serial_experiments_lain.md) | 1 | | [sewayaki_kitsune_no_senko-san](pages/sewayaki_kitsune_no_senko_san.md) | 1 | | [shantae_(series)](pages/shantae_series.md) | 1 | | [shingeki_no_bahamut](pages/shingeki_no_bahamut.md) | 1 | | [shinryaku!_ikamusume](pages/shinryaku_ikamusume.md) | 1 | | [shirobako](pages/shirobako.md) | 1 | | [shokugeki_no_souma](pages/shokugeki_no_souma.md) | 1 | | [shugo_chara!](pages/shugo_chara.md) | 1 | | [slam_dunk_(series)](pages/slam_dunk_series.md) | 1 | | [slayers](pages/slayers.md) | 1 | | [soul_eater](pages/soul_eater.md) | 1 | | [soulcalibur](pages/soulcalibur.md) | 1 | | [spice_and_wolf](pages/spice_and_wolf.md) | 1 | | [summer_pockets](pages/summer_pockets.md) | 1 | | [synthesizer_v](pages/synthesizer_v.md) | 1 | | [taimanin_(series)](pages/taimanin_series.md) | 1 | | [tamako_market](pages/tamako_market.md) | 1 | | [tate_no_yuusha_no_nariagari](pages/tate_no_yuusha_no_nariagari.md) | 1 | | [tensei_oujo_to_tensai_reijou_no_mahou_kakumei](pages/tensei_oujo_to_tensai_reijou_no_mahou_kakumei.md) | 1 | | [tensei_shitara_slime_datta_ken](pages/tensei_shitara_slime_datta_ken.md) | 1 | | [the_amazing_digital_circus](pages/the_amazing_digital_circus.md) | 1 | | [the_moon_studio](pages/the_moon_studio.md) | 1 | | [the_ring](pages/the_ring.md) | 1 | | [transformers](pages/transformers.md) | 1 | | [tsugu_(vtuber)](pages/tsugu_vtuber.md) | 1 | | 
[urusei_yatsura](pages/urusei_yatsura.md) | 1 | | [utau](pages/utau.md) | 1 | | [va-11_hall-a](pages/va_11_hall_a.md) | 1 | | [violet_evergarden_(series)](pages/violet_evergarden_series.md) | 1 | | [vividred_operation](pages/vividred_operation.md) | 1 | | [voicevox](pages/voicevox.md) | 1 | | [voms](pages/voms.md) | 1 | | [vspo!](pages/vspo.md) | 1 | | [warcraft](pages/warcraft.md) | 1 | | [warioware](pages/warioware.md) | 1 | | [warship_girls_r](pages/warship_girls_r.md) | 1 | | [witches_of_africa](pages/witches_of_africa.md) | 1 | | [xenosaga](pages/xenosaga.md) | 1 | | [yagate_kimi_ni_naru](pages/yagate_kimi_ni_naru.md) | 1 | | [yosuga_no_sora](pages/yosuga_no_sora.md) | 1 | | [yotsubato!](pages/yotsubato.md) | 1 | | [youjo_senki](pages/youjo_senki.md) | 1 | | [youkai_watch](pages/youkai_watch.md) | 1 | | [yume_nikki](pages/yume_nikki.md) | 1 | | [yuusha_de_aru](pages/yuusha_de_aru.md) | 1 | | [zero_no_tsukaima](pages/zero_no_tsukaima.md) | 1 | | [(unknown)](pages/unknown.md) | 4 |
polinaeterna/yat
polinaeterna
"2024-03-07T17:11:23Z"
0
0
[ "croissant", "region:us" ]
null
"2024-03-07T17:08:19Z"
--- dataset_info: features: - name: blob_id dtype: string - name: directory_id dtype: string - name: path dtype: string - name: content_id dtype: string - name: detected_licenses sequence: string - name: license_type dtype: string - name: repo_name dtype: string - name: snapshot_id dtype: string - name: revision_id dtype: string - name: branch_name dtype: string - name: visit_date dtype: timestamp[ns] - name: revision_date dtype: timestamp[ns] - name: committer_date dtype: timestamp[ns] - name: github_id dtype: int64 - name: star_events_count dtype: int64 - name: fork_events_count dtype: int64 - name: gha_license_id dtype: string - name: gha_event_created_at dtype: timestamp[ns] - name: gha_created_at dtype: timestamp[ns] - name: gha_language dtype: string - name: src_encoding dtype: string - name: language dtype: string - name: is_vendor dtype: bool - name: is_generated dtype: bool - name: length_bytes dtype: int64 - name: extension dtype: string splits: - name: train num_bytes: 4047876716 num_examples: 8865479 download_size: 2731723775 dataset_size: 4047876716 configs: - config_name: default data_files: - split: train path: data/train-* ---
loubnabnl/comsop_450_samples_detailed
loubnabnl
"2024-03-14T22:48:54Z"
0
0
[ "language:en", "croissant", "region:us" ]
null
"2024-03-07T17:11:28Z"
--- language: - en dataset_info: features: - name: id dtype: int64 - name: prompt dtype: string - name: text_token_length dtype: int64 - name: original_text dtype: string - name: seed_data dtype: string - name: format dtype: string - name: audience dtype: string - name: generated_samples sequence: string - name: evaluation_prompt dtype: string - name: sentences sequence: string - name: completion dtype: string - name: token_length dtype: int64 - name: passage_score dtype: float64 - name: sentences_and_scores list: - name: score dtype: float64 - name: sentence dtype: string splits: - name: train num_bytes: 13709325 num_examples: 450 download_size: 8178189 dataset_size: 13709325 configs: - config_name: default data_files: - split: train path: data/train-* ---
aureliojafer/twitter_dataset_1709832136
aureliojafer
"2024-03-07T17:22:18Z"
0
0
[ "croissant", "region:us" ]
null
"2024-03-07T17:22:16Z"
--- dataset_info: features: - name: id dtype: string - name: tweet_content dtype: string - name: user_name dtype: string - name: user_id dtype: string - name: created_at dtype: string - name: url dtype: string - name: favourite_count dtype: int64 splits: - name: train num_bytes: 61810 num_examples: 200 download_size: 39919 dataset_size: 61810 configs: - config_name: default data_files: - split: train path: data/train-* ---
Kamyar-zeinalipour/AFG_Llama
Kamyar-zeinalipour
"2024-03-07T18:04:59Z"
0
0
[ "croissant", "region:us" ]
null
"2024-03-07T17:28:20Z"
--- dataset_info: features: - name: text dtype: string splits: - name: train num_bytes: 26594354 num_examples: 8020 - name: test num_bytes: 949855 num_examples: 300 download_size: 9974810 dataset_size: 27544209 configs: - config_name: default data_files: - split: train path: data/train-* - split: test path: data/test-* ---
Kamyar-zeinalipour/AFG_Mistral
Kamyar-zeinalipour
"2024-03-07T18:05:32Z"
0
0
[ "croissant", "region:us" ]
null
"2024-03-07T17:32:05Z"
--- dataset_info: features: - name: text dtype: string splits: - name: train num_bytes: 25744234 num_examples: 8020 - name: test num_bytes: 918055 num_examples: 300 download_size: 9865607 dataset_size: 26662289 configs: - config_name: default data_files: - split: train path: data/train-* - split: test path: data/test-* ---
Ahaa1234/SMPDATASET
Ahaa1234
"2024-03-07T18:11:33Z"
0
0
[ "license:mit", "region:us" ]
null
"2024-03-07T17:36:01Z"
--- license: mit ---
vnnm/MCC
vnnm
"2024-03-07T17:50:16Z"
0
0
[ "task_categories:question-answering", "moral", "consistency", "region:us" ]
[ "question-answering" ]
"2024-03-07T17:45:07Z"
--- task_categories: - question-answering tags: - moral - consistency --- # MCC Despite recent advancements showcasing the impressive capabilities of Large Language Models (LLMs) in conversational systems, we show that even state-of-the-art LLMs are morally inconsistent in their generations, questioning their reliability (and trustworthiness in general). Prior work in LLM evaluation focuses on developing ground-truth data to measure accuracy on specific tasks. However, for moral scenarios that often lack universally agreed-upon answers, consistency in model responses becomes crucial for their reliability. To this end, we construct the Moral Consistency Corpus (MCC), containing 50K moral questions, responses to them by LLMs, and the Rules of Thumb (RoTs) that these models followed.
aureliojafer/twitter_dataset_1709833834
aureliojafer
"2024-03-07T17:50:36Z"
0
0
[ "croissant", "region:us" ]
null
"2024-03-07T17:50:34Z"
--- dataset_info: features: - name: id dtype: string - name: tweet_content dtype: string - name: user_name dtype: string - name: user_id dtype: string - name: created_at dtype: string - name: url dtype: string - name: favourite_count dtype: int64 splits: - name: train num_bytes: 70117 num_examples: 226 download_size: 44234 dataset_size: 70117 configs: - config_name: default data_files: - split: train path: data/train-* ---
aureliojafer/twitter_dataset_1709834543
aureliojafer
"2024-03-07T18:02:25Z"
0
0
[ "croissant", "region:us" ]
null
"2024-03-07T18:02:23Z"
--- dataset_info: features: - name: id dtype: string - name: tweet_content dtype: string - name: user_name dtype: string - name: user_id dtype: string - name: created_at dtype: string - name: url dtype: string - name: favourite_count dtype: int64 splits: - name: train num_bytes: 63129 num_examples: 205 download_size: 40569 dataset_size: 63129 configs: - config_name: default data_files: - split: train path: data/train-* ---
aureliojafer/twitter_dataset_1709834699
aureliojafer
"2024-03-07T18:05:01Z"
0
0
[ "croissant", "region:us" ]
null
"2024-03-07T18:04:59Z"
--- dataset_info: features: - name: id dtype: string - name: tweet_content dtype: string - name: user_name dtype: string - name: user_id dtype: string - name: created_at dtype: string - name: url dtype: string - name: favourite_count dtype: int64 splits: - name: train num_bytes: 61719 num_examples: 200 download_size: 39901 dataset_size: 61719 configs: - config_name: default data_files: - split: train path: data/train-* ---
hassanraha/multillm-route-instruct
hassanraha
"2024-03-12T19:43:54Z"
0
0
[ "license:apache-2.0", "croissant", "region:us" ]
null
"2024-03-07T18:07:39Z"
--- license: apache-2.0 ---
yangwang825/sst2-textfooler-7
yangwang825
"2024-03-07T18:10:37Z"
0
0
[ "croissant", "region:us" ]
null
"2024-03-07T18:10:34Z"
--- dataset_info: features: - name: text dtype: string - name: label dtype: int64 - name: augment dtype: string splits: - name: train num_bytes: 7161080 num_examples: 54359 - name: validation num_bytes: 110096 num_examples: 872 - name: test num_bytes: 226340 num_examples: 1821 download_size: 2029077 dataset_size: 7497516 configs: - config_name: default data_files: - split: train path: data/train-* - split: validation path: data/validation-* - split: test path: data/test-* ---
bulkbeings/patient-alumini-v1
bulkbeings
"2024-03-07T18:13:01Z"
0
0
[ "license:mit", "croissant", "region:us" ]
null
"2024-03-07T18:12:36Z"
--- license: mit ---
wilsonslz/MARCIOSOLNASCENTE
wilsonslz
"2024-03-07T18:13:20Z"
0
0
[ "license:openrail", "croissant", "region:us" ]
null
"2024-03-07T18:12:50Z"
--- license: openrail ---
zicsx/mC4-Hindi-Cleaned-3.0
zicsx
"2024-03-13T15:38:33Z"
0
0
[ "language:hi", "croissant", "region:us" ]
null
"2024-03-07T18:17:57Z"
--- language: - hi dataset_info: features: - name: text dtype: string splits: - name: train num_bytes: 21685650761.28769 num_examples: 8491564 download_size: 17395130554 dataset_size: 21685650761.28769 configs: - config_name: default data_files: - split: train path: data/train-* --- # Dataset Card for "mC4-Hindi-Cleaned-3.0" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Sntng/drone_view_augment
Sntng
"2024-03-07T18:28:32Z"
0
0
[ "croissant", "region:us" ]
null
"2024-03-07T18:27:43Z"
--- dataset_info: features: - name: image dtype: image - name: label dtype: image splits: - name: train num_bytes: 569863871.99 num_examples: 1035 - name: validation num_bytes: 27419735.0 num_examples: 49 download_size: 94602473 dataset_size: 597283606.99 configs: - config_name: default data_files: - split: train path: data/train-* - split: validation path: data/validation-* ---
roy29fuku/sample-large
roy29fuku
"2024-03-07T18:28:56Z"
0
0
[ "croissant", "region:us" ]
null
"2024-03-07T18:28:12Z"
--- configs: - config_name: default data_files: - split: train path: "data/*.parquet" --- Sample data used in the [Hugging Face dataset creation tutorial](https://colab.research.google.com/drive/11rl9Wie22JVIB5bjj3W6bnygfWFlNijW#scrollTo=XXlFnTh04WLc). The data was created by extracting the abstracts of roughly 400,000 papers from the PMC OA Subset file [oa_comm_xml.PMC010xxxxxx.baseline.2023-12-18.tar.gz](https://ftp.ncbi.nlm.nih.gov/pub/pmc/oa_bulk/oa_comm/xml/).
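A rough sketch of the kind of extraction described above — pulling abstract text out of a JATS-style PMC XML article with the standard library. The tag layout and helper function are illustrative assumptions, not the tutorial's actual code:

```python
import xml.etree.ElementTree as ET

def extract_abstract(xml_string: str) -> str:
    """Return the concatenated abstract paragraphs of one PMC article."""
    root = ET.fromstring(xml_string)
    # JATS places the abstract under front/article-meta/abstract.
    paragraphs = root.findall(".//abstract//p")
    return " ".join("".join(p.itertext()).strip() for p in paragraphs)

sample = """<article><front><article-meta>
  <abstract><p>We study X.</p><p>Results suggest Y.</p></abstract>
</article-meta></front></article>"""

print(extract_abstract(sample))  # We study X. Results suggest Y.
```

In practice the tarball members would be streamed with the stdlib `tarfile` module rather than read in as strings.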
sezenkarakus/image-description-dataset
sezenkarakus
"2024-03-08T12:38:25Z"
0
0
[ "croissant", "region:us" ]
null
"2024-03-07T18:32:00Z"
--- dataset_info: features: - name: file_name dtype: string - name: text dtype: string - name: image dtype: image splits: - name: train num_bytes: 5780469153.75 num_examples: 26002 download_size: 5774050664 dataset_size: 5780469153.75 configs: - config_name: default data_files: - split: train path: data/train-* ---
jondewoo/analytical-cubism
jondewoo
"2024-03-07T18:42:26Z"
0
0
[ "license:cc0-1.0", "croissant", "region:us" ]
null
"2024-03-07T18:35:07Z"
--- license: cc0-1.0 ---
aborruso/open_cup_complessivo
aborruso
"2024-03-07T18:38:09Z"
0
0
[ "license:cc-by-4.0", "croissant", "region:us" ]
null
"2024-03-07T18:35:47Z"
--- license: cc-by-4.0 ---
aureliojafer/twitter_dataset_1709836673
aureliojafer
"2024-03-07T18:37:55Z"
0
0
[ "croissant", "region:us" ]
null
"2024-03-07T18:37:53Z"
--- dataset_info: features: - name: id dtype: string - name: tweet_content dtype: string - name: user_name dtype: string - name: user_id dtype: string - name: created_at dtype: string - name: url dtype: string - name: favourite_count dtype: int64 splits: - name: train num_bytes: 68194 num_examples: 222 download_size: 43576 dataset_size: 68194 configs: - config_name: default data_files: - split: train path: data/train-* ---
anton96vice/samantha-1.1-uncensored-split-and-prepared
anton96vice
"2024-03-07T20:08:17Z"
0
1
[ "license:apache-2.0", "croissant", "region:us" ]
null
"2024-03-07T18:39:36Z"
--- license: apache-2.0 dataset_info: features: - name: instruction dtype: string - name: input dtype: string - name: output dtype: string - name: text dtype: string splits: - name: train num_bytes: 9760644.749754662 num_examples: 1630 - name: test num_bytes: 2443155.250245339 num_examples: 408 download_size: 6418929 dataset_size: 12203800.0 configs: - config_name: default data_files: - split: train path: data/train-* - split: test path: data/test-* --- # Dataset Card for samantha-1.1-uncensored-split-and-prepared <!-- Provide a quick summary of the dataset. --> Samantha-1.1-uncensored, split and prepared for instruction training. ## Dataset Details ### Dataset Description This dataset comprises a rich collection of uncensored, instruction-based interactions with Samantha, a virtual assistant designed to perform tasks based on textual commands. Each entry is a detailed record of an instruction provided to Samantha, the input she received, and the output she produced, accompanied by additional textual context. The dataset is intended for use in training and evaluating advanced natural language processing and understanding systems, focusing on interpreting and executing a wide range of instructions accurately. - **Curated by:** [Anton Vice](https://github.com/antonvice) - **Language(s) (NLP):** English - **License:** Apache 2.0 ### Dataset Sources - **Repository:** [HF](https://huggingface.co/datasets/anton96vice/samantha-1.1-uncensored-split-and-prepare) ## Uses For training an instruction-following personal assistant inspired by the movie "Her". ### Direct Use Same as any instruction-tuning dataset. ### Out-of-Scope Use Personal relationships. ## Dataset Structure Train and test splits with the columns `instruction` (the user query), `input`, `output` (the AI response), and `text`. ## Dataset Creation ### Curation Rationale The source dataset was re-split and prepared for training personal assistants, removing the hassle of preparing it yourself. ### Source Data [SOURCE](https://huggingface.co/datasets/digitalpipelines/samantha-1.1-uncensored)
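As a usage sketch, a record with these columns can be flattened into a single training prompt. The Alpaca-style template below is an assumption for illustration, not necessarily how the `text` column was built:

```python
def format_record(record: dict) -> str:
    """Flatten one instruction/input/output record into a single training string."""
    prompt = f"### Instruction:\n{record['instruction']}\n"
    if record.get("input"):  # 'input' is frequently empty in chat-style data
        prompt += f"### Input:\n{record['input']}\n"
    prompt += f"### Response:\n{record['output']}"
    return prompt

example = {"instruction": "Introduce yourself.", "input": "", "output": "Hi, I'm Samantha."}
print(format_record(example))
```

Many trainers instead consume a precomputed column such as `text` directly, so this step may already be done for you.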
aureliojafer/twitter_dataset_1709836817
aureliojafer
"2024-03-07T18:40:19Z"
0
0
[ "croissant", "region:us" ]
null
"2024-03-07T18:40:17Z"
--- dataset_info: features: - name: id dtype: string - name: tweet_content dtype: string - name: user_name dtype: string - name: user_id dtype: string - name: created_at dtype: string - name: url dtype: string - name: favourite_count dtype: int64 splits: - name: train num_bytes: 60923 num_examples: 200 download_size: 39771 dataset_size: 60923 configs: - config_name: default data_files: - split: train path: data/train-* ---
deepghs/csip_v1
deepghs
"2024-03-07T20:31:04Z"
0
0
[ "task_categories:zero-shot-image-classification", "size_categories:100K<n<1M", "art", "region:us" ]
[ "zero-shot-image-classification" ]
"2024-03-07T18:51:48Z"
--- task_categories: - zero-shot-image-classification tags: - art size_categories: - 100K<n<1M ---
Cognitive-Lab/Aya_Malayalam
Cognitive-Lab
"2024-03-08T05:35:45Z"
0
0
[ "language:en", "language:ml", "license:apache-2.0", "arxiv:2402.06619", "region:us" ]
null
"2024-03-07T19:02:03Z"
--- dataset_info: - config_name: complete_dataset features: - name: id dtype: int64 - name: split dtype: string - name: targets dtype: string - name: language dtype: string - name: template_id dtype: int64 - name: inputs dtype: string - name: dataset_name dtype: string - name: script dtype: string - name: task_type dtype: string - name: sub_dataset_name dtype: string splits: - name: train num_bytes: 4620114817 num_examples: 3576211 download_size: 1460336793 dataset_size: 4620114817 - config_name: templated_indic_paraphrase features: - name: id dtype: int64 - name: split dtype: string - name: targets dtype: string - name: language dtype: string - name: template_id dtype: int64 - name: inputs dtype: string - name: dataset_name dtype: string - name: script dtype: string - name: task_type dtype: string - name: sub_dataset_name dtype: string splits: - name: train num_bytes: 802090 num_examples: 1001 download_size: 265061 dataset_size: 802090 - config_name: templated_indic_sentiment features: - name: id dtype: int64 - name: split dtype: string - name: targets dtype: string - name: language dtype: string - name: template_id dtype: int64 - name: inputs dtype: string - name: dataset_name dtype: string - name: script dtype: string - name: task_type dtype: string - name: sub_dataset_name dtype: string splits: - name: train num_bytes: 826377 num_examples: 1156 download_size: 326454 dataset_size: 826377 - config_name: templated_xlel_wd features: - name: id dtype: int64 - name: split dtype: string - name: targets dtype: string - name: language dtype: string - name: template_id dtype: int64 - name: inputs dtype: string - name: dataset_name dtype: string - name: script dtype: string - name: task_type dtype: string - name: sub_dataset_name dtype: string splits: - name: train num_bytes: 2871116 num_examples: 1689 download_size: 1060347 dataset_size: 2871116 - config_name: translated_adversarial_qa features: - name: id dtype: int64 - name: split dtype: string - name: targets dtype: 
string - name: language dtype: string - name: template_id dtype: int64 - name: inputs dtype: string - name: dataset_name dtype: string - name: script dtype: string - name: task_type dtype: string - name: sub_dataset_name dtype: string splits: - name: train num_bytes: 25814767 num_examples: 10000 download_size: 6284748 dataset_size: 25814767 - config_name: translated_cnn_dailymail features: - name: id dtype: int64 - name: split dtype: string - name: targets dtype: string - name: language dtype: string - name: template_id dtype: int64 - name: inputs dtype: string - name: dataset_name dtype: string - name: script dtype: string - name: task_type dtype: string - name: sub_dataset_name dtype: string splits: - name: train num_bytes: 677030150 num_examples: 100000 download_size: 241050258 dataset_size: 677030150 - config_name: translated_dolly features: - name: id dtype: int64 - name: split dtype: string - name: targets dtype: string - name: language dtype: string - name: template_id dtype: int64 - name: inputs dtype: string - name: dataset_name dtype: string - name: script dtype: string - name: task_type dtype: string - name: sub_dataset_name dtype: string splits: - name: train num_bytes: 35119170 num_examples: 14808 download_size: 13037728 dataset_size: 35119170 - config_name: translated_flan_coqa features: - name: id dtype: int64 - name: split dtype: string - name: targets dtype: string - name: language dtype: string - name: template_id dtype: int64 - name: inputs dtype: string - name: dataset_name dtype: string - name: script dtype: string - name: task_type dtype: string - name: sub_dataset_name dtype: string splits: - name: train num_bytes: 47671952 num_examples: 6409 download_size: 17244180 dataset_size: 47671952 - config_name: translated_flan_cot features: - name: id dtype: int64 - name: split dtype: string - name: targets dtype: string - name: language dtype: string - name: template_id dtype: int64 - name: inputs dtype: string - name: dataset_name dtype: string - 
name: script dtype: string - name: task_type dtype: string - name: sub_dataset_name dtype: string splits: - name: train num_bytes: 114447647 num_examples: 91910 download_size: 37991855 dataset_size: 114447647 - config_name: translated_flan_gem_wiki features: - name: id dtype: int64 - name: split dtype: string - name: targets dtype: string - name: language dtype: string - name: template_id dtype: int64 - name: inputs dtype: string - name: dataset_name dtype: string - name: script dtype: string - name: task_type dtype: string - name: sub_dataset_name dtype: string splits: - name: train num_bytes: 192780434 num_examples: 27147 download_size: 64897862 dataset_size: 192780434 - config_name: translated_flan_lambada features: - name: id dtype: int64 - name: split dtype: string - name: targets dtype: string - name: language dtype: string - name: template_id dtype: int64 - name: inputs dtype: string - name: dataset_name dtype: string - name: script dtype: string - name: task_type dtype: string - name: sub_dataset_name dtype: string splits: - name: train num_bytes: 3093841 num_examples: 4279 download_size: 1077008 dataset_size: 3093841 - config_name: translated_flan_qa features: - name: id dtype: int64 - name: split dtype: string - name: targets dtype: string - name: language dtype: string - name: template_id dtype: int64 - name: inputs dtype: string - name: dataset_name dtype: string - name: script dtype: string - name: task_type dtype: string - name: sub_dataset_name dtype: string splits: - name: train num_bytes: 525363 num_examples: 540 download_size: 177982 dataset_size: 525363 - config_name: translated_hotpotqa features: - name: id dtype: int64 - name: split dtype: string - name: targets dtype: string - name: language dtype: string - name: template_id dtype: int64 - name: inputs dtype: string - name: dataset_name dtype: string - name: script dtype: string - name: task_type dtype: string - name: sub_dataset_name dtype: string splits: - name: train num_bytes: 178593379 
num_examples: 355476 download_size: 53833798 dataset_size: 178593379 - config_name: translated_joke_explaination features: - name: id dtype: int64 - name: split dtype: string - name: targets dtype: string - name: language dtype: string - name: template_id dtype: int64 - name: inputs dtype: string - name: dataset_name dtype: string - name: script dtype: string - name: task_type dtype: string - name: sub_dataset_name dtype: string splits: - name: train num_bytes: 1591191 num_examples: 754 download_size: 311948 dataset_size: 1591191 - config_name: translated_mintaka features: - name: id dtype: int64 - name: split dtype: string - name: targets dtype: string - name: language dtype: string - name: template_id dtype: int64 - name: inputs dtype: string - name: dataset_name dtype: string - name: script dtype: string - name: task_type dtype: string - name: sub_dataset_name dtype: string splits: - name: train num_bytes: 6449130 num_examples: 14000 download_size: 1045790 dataset_size: 6449130 - config_name: translated_nqopen features: - name: id dtype: int64 - name: split dtype: string - name: targets dtype: string - name: language dtype: string - name: template_id dtype: int64 - name: inputs dtype: string - name: dataset_name dtype: string - name: script dtype: string - name: task_type dtype: string - name: sub_dataset_name dtype: string splits: - name: train num_bytes: 59860846 num_examples: 175850 download_size: 16531375 dataset_size: 59860846 - config_name: translated_paws features: - name: id dtype: int64 - name: split dtype: string - name: targets dtype: string - name: language dtype: string - name: template_id dtype: int64 - name: inputs dtype: string - name: dataset_name dtype: string - name: script dtype: string - name: task_type dtype: string - name: sub_dataset_name dtype: string splits: - name: train num_bytes: 51863293 num_examples: 49401 download_size: 6932449 dataset_size: 51863293 - config_name: translated_piqa features: - name: id dtype: int64 - name: split 
dtype: string - name: targets dtype: string - name: language dtype: string - name: template_id dtype: int64 - name: inputs dtype: string - name: dataset_name dtype: string - name: script dtype: string - name: task_type dtype: string - name: sub_dataset_name dtype: string splits: - name: train num_bytes: 20834854 num_examples: 16113 download_size: 5497560 dataset_size: 20834854 - config_name: translated_soda features: - name: id dtype: int64 - name: split dtype: string - name: targets dtype: string - name: language dtype: string - name: template_id dtype: int64 - name: inputs dtype: string - name: dataset_name dtype: string - name: script dtype: string - name: task_type dtype: string - name: sub_dataset_name dtype: string splits: - name: train num_bytes: 1156977459 num_examples: 1191582 download_size: 318570546 dataset_size: 1156977459 - config_name: translated_wiki_split features: - name: id dtype: int64 - name: split dtype: string - name: targets dtype: string - name: language dtype: string - name: template_id dtype: int64 - name: inputs dtype: string - name: dataset_name dtype: string - name: script dtype: string - name: task_type dtype: string - name: sub_dataset_name dtype: string splits: - name: train num_bytes: 1148240780 num_examples: 989944 download_size: 344485054 dataset_size: 1148240780 - config_name: translated_wikiqa features: - name: id dtype: int64 - name: split dtype: string - name: targets dtype: string - name: language dtype: string - name: template_id dtype: int64 - name: inputs dtype: string - name: dataset_name dtype: string - name: script dtype: string - name: task_type dtype: string - name: sub_dataset_name dtype: string splits: - name: train num_bytes: 814069 num_examples: 1040 download_size: 283724 dataset_size: 814069 - config_name: translated_xlel_wd features: - name: id dtype: int64 - name: split dtype: string - name: targets dtype: string - name: language dtype: string - name: template_id dtype: int64 - name: inputs dtype: string - 
name: dataset_name dtype: string - name: script dtype: string - name: task_type dtype: string - name: sub_dataset_name dtype: string splits: - name: train num_bytes: 893906909 num_examples: 523112 download_size: 328997341 dataset_size: 893906909 configs: - config_name: complete_dataset data_files: - split: train path: complete_dataset/train-* - config_name: templated_indic_paraphrase data_files: - split: train path: templated_indic_paraphrase/train-* - config_name: templated_indic_sentiment data_files: - split: train path: templated_indic_sentiment/train-* - config_name: templated_xlel_wd data_files: - split: train path: templated_xlel_wd/train-* - config_name: translated_adversarial_qa data_files: - split: train path: translated_adversarial_qa/train-* - config_name: translated_cnn_dailymail data_files: - split: train path: translated_cnn_dailymail/train-* - config_name: translated_dolly data_files: - split: train path: translated_dolly/train-* - config_name: translated_flan_coqa data_files: - split: train path: translated_flan_coqa/train-* - config_name: translated_flan_cot data_files: - split: train path: translated_flan_cot/train-* - config_name: translated_flan_gem_wiki data_files: - split: train path: translated_flan_gem_wiki/train-* - config_name: translated_flan_lambada data_files: - split: train path: translated_flan_lambada/train-* - config_name: translated_flan_qa data_files: - split: train path: translated_flan_qa/train-* - config_name: translated_hotpotqa data_files: - split: train path: translated_hotpotqa/train-* - config_name: translated_joke_explaination data_files: - split: train path: translated_joke_explaination/train-* - config_name: translated_mintaka data_files: - split: train path: translated_mintaka/train-* - config_name: translated_nqopen data_files: - split: train path: translated_nqopen/train-* - config_name: translated_paws data_files: - split: train path: translated_paws/train-* - config_name: translated_piqa data_files: - split: train 
path: translated_piqa/train-* - config_name: translated_soda data_files: - split: train path: translated_soda/train-* - config_name: translated_wiki_split data_files: - split: train path: translated_wiki_split/train-* - config_name: translated_wikiqa data_files: - split: train path: translated_wikiqa/train-* - config_name: translated_xlel_wd data_files: - split: train path: translated_xlel_wd/train-* license: apache-2.0 language: - en - ml --- # Aya_Malayalam This dataset is curated from the original [Aya-Collection](https://huggingface.co/datasets/CohereForAI/aya_collection) dataset that was open-sourced by [Cohere](https://cohere.com/research) under the [Apache-2.0](https://choosealicense.com/licenses/apache-2.0/) license. The Aya Collection is a massive multilingual collection comprising 513 million instances of prompts and completions that cover a wide range of tasks. This collection uses instruction-style templates from fluent speakers and applies them to a curated list of datasets. It also includes translations of instruction-style datasets into 101 languages. The Aya Dataset, a human-curated multilingual instruction and response dataset, is part of this collection. Refer to the Aya paper for more details about the collection. ### Motivations & Intentions The original dataset is large and more task-specific than language-specific. To carry out a task specific to an Indic language, one would previously have needed to download the entire dataset (~600 GB) and filter it. As we were training an Indic LLM internally, we filtered the dataset by language and curated this dataset. You can find all the Indic-language-specific datasets [here](https://huggingface.co/collections/Cognitive-Lab/aya-indic-suite-65eaa0e34a2307f30bbd55e5). ## **Data Instances** An example of a `train` instance looks as follows: ```python {'id': 246001, 'inputs': 'The following query in English is taken from the geography category.
What could be the answer to the question?\nWhat is the seventh tallest mountain in North America?', 'targets': 'The answer is Mount Lucania.', 'dataset_name': 'Mintaka-inst', 'sub_dataset_name': '-', 'task_type': 'question-answering', 'template_id': 3, 'language': 'eng', 'split': 'train', 'script': 'Latn' } ``` ## **Data Fields** The data fields are the same among all splits: - `id:` Unique id of the data point. - `inputs:` Prompt or input to the language model. - `targets:` Completion or output of the language model. - `dataset_name:` The name of the source dataset that the data point was taken from. - `sub_dataset_name:` If the source is a collection, this field indicates which part of that collection the data point was taken from. If it is not a collection, this field is left blank. - `task_type:` The task type that this conversation belongs to. - `template_id`: The id of the template applied to this data point. - `language:` The ISO code of the dialect of the conversation. - `script:` The script of the language. - `split:` Indicates whether the data point is part of the `train` or the `test` split. ## **Licensing Information** This dataset can be used for any purpose, whether academic or commercial, under the terms of the **[Apache 2.0](https://opensource.org/license/apache-2-0)** License. ## **Citation** ```bibtex @misc{singh2024aya, title={Aya Dataset: An Open-Access Collection for Multilingual Instruction Tuning}, author={Shivalika Singh and Freddie Vargus and Daniel Dsouza and Börje F. Karlsson and Abinaya Mahendiran and Wei-Yin Ko and Herumb Shandilya and Jay Patel and Deividas Mataciunas and Laura OMahony and Mike Zhang and Ramith Hettiarachchi and Joseph Wilson and Marina Machado and Luisa Souza Moura and Dominik Krzemiński and Hakimeh Fadaei and Irem Ergün and Ifeoma Okoh and Aisha Alaagib and Oshan Mudannayake and Zaid Alyafeai and Vu Minh Chien and Sebastian Ruder and Surya Guthikonda and Emad A.
Alghamdi and Sebastian Gehrmann and Niklas Muennighoff and Max Bartolo and Julia Kreutzer and Ahmet Üstün and Marzieh Fadaee and Sara Hooker}, year={2024}, eprint={2402.06619}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
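As a quick illustration of the schema documented in the Aya_Malayalam card above, the sketch below reshapes one record (the sample instance from the card) into a chat-style pair of the kind used for instruction tuning. The `to_chat` helper is illustrative only, not part of the dataset or any published API:

```python
# Reshape an Aya-style record into a chat-format training pair.
# The record mirrors the "Data Instances" example in the card;
# to_chat is a hypothetical helper, not a published API.
example = {
    "id": 246001,
    "inputs": ("The following query in English is taken from the geography "
               "category. What could be the answer to the question?\n"
               "What is the seventh tallest mountain in North America?"),
    "targets": "The answer is Mount Lucania.",
    "dataset_name": "Mintaka-inst",
    "sub_dataset_name": "-",
    "task_type": "question-answering",
    "template_id": 3,
    "language": "eng",
    "split": "train",
    "script": "Latn",
}

def to_chat(record):
    """Map the inputs/targets fields onto user/assistant turns."""
    return [
        {"role": "user", "content": record["inputs"]},
        {"role": "assistant", "content": record["targets"]},
    ]

messages = to_chat(example)
print(messages[1]["content"])  # The answer is Mount Lucania.
```

The remaining fields (`dataset_name`, `task_type`, `language`, and so on) are metadata and can be used to filter a config down to the records relevant for a given task before formatting.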
taesiri/fsmbench_what_will_be_the_state_gemini
taesiri
"2024-03-11T16:35:34Z"
0
0
[ "croissant", "region:us" ]
null
"2024-03-07T19:05:19Z"
--- dataset_info: features: - name: query_id dtype: string - name: fsm_id dtype: string - name: fsm_json dtype: string - name: difficulty_level dtype: int64 - name: transition_matrix dtype: string - name: query dtype: string - name: answer dtype: string - name: substring_index dtype: int64 splits: - name: validation num_bytes: 8972710 num_examples: 6322 download_size: 486790 dataset_size: 8972710 configs: - config_name: default data_files: - split: validation path: data/validation-* ---
Astral-P/Irelia
Astral-P
"2024-03-07T19:22:32Z"
0
0
[ "license:wtfpl", "region:us" ]
null
"2024-03-07T19:20:20Z"
--- license: wtfpl ---
vr1999/mini-Mental_Health_conv
vr1999
"2024-03-07T19:22:05Z"
0
0
[ "croissant", "region:us" ]
null
"2024-03-07T19:22:03Z"
--- dataset_info: features: - name: questionTitle dtype: string - name: answerText dtype: string splits: - name: train num_bytes: 1533188 num_examples: 1000 download_size: 876026 dataset_size: 1533188 configs: - config_name: default data_files: - split: train path: data/train-* ---
argilla/10k_prompts_SPIN_iter3_zephyr_top
argilla
"2024-03-07T19:23:01Z"
0
2
[ "croissant", "region:us" ]
null
"2024-03-07T19:22:58Z"
--- dataset_info: features: - name: generated list: - name: content dtype: string - name: role dtype: string - name: real list: - name: content dtype: string - name: role dtype: string splits: - name: train num_bytes: 8876489.624454148 num_examples: 1648 - name: test num_bytes: 991064.3755458515 num_examples: 184 download_size: 5547439 dataset_size: 9867554.0 configs: - config_name: default data_files: - split: train path: data/train-* - split: test path: data/test-* ---
draco976/wikipedia-bookcorpus
draco976
"2024-03-09T09:52:42Z"
0
0
[ "croissant", "region:us" ]
null
"2024-03-07T19:24:36Z"
--- dataset_info: features: - name: text sequence: string splits: - name: train num_bytes: 39537241395.73483 num_examples: 9463277 - name: test num_bytes: 118971735.26516715 num_examples: 28476 download_size: 12339960041 dataset_size: 39656213131.0 configs: - config_name: default data_files: - split: train path: data/train-* - split: test path: data/test-* ---
mfidabel/common_voice_16_1_semisupervised
mfidabel
"2024-03-25T02:20:51Z"
0
0
[ "croissant", "region:us" ]
null
"2024-03-07T19:26:45Z"
--- dataset_info: features: - name: client_id dtype: string - name: path dtype: string - name: audio dtype: audio: sampling_rate: 48000 - name: sentence dtype: string - name: up_votes dtype: int64 - name: down_votes dtype: int64 - name: age dtype: string - name: gender dtype: string - name: accent dtype: string - name: locale dtype: string - name: segment dtype: string - name: variant dtype: string - name: predicted_sentence dtype: string splits: - name: whisper.medium num_bytes: 8271758359.625 num_examples: 18779 - name: whisper.small num_bytes: 2763174559.625 num_examples: 18779 - name: whisper.large.v3 num_bytes: 8271776000.625 num_examples: 18779 - name: whisper.tiny num_bytes: 8271831240.625 num_examples: 18779 download_size: 23466393916 dataset_size: 27578540160.5 configs: - config_name: default data_files: - split: whisper.medium path: data/whisper.medium-* - split: whisper.small path: data/whisper.small-* - split: whisper.large.v3 path: data/whisper.large.v3-* - split: whisper.tiny path: data/whisper.tiny-* ---
ramixpe/rfc_json
ramixpe
"2024-03-07T20:01:25Z"
0
0
[ "croissant", "region:us" ]
null
"2024-03-07T20:01:22Z"
--- dataset_info: features: - name: prompt dtype: string - name: completion dtype: string splits: - name: train num_bytes: 8336 num_examples: 46 download_size: 3942 dataset_size: 8336 configs: - config_name: default data_files: - split: train path: data/train-* ---
danielroncel/dstc2_dialogues_transcription_processed
danielroncel
"2024-03-07T20:27:19Z"
0
0
[ "croissant", "region:us" ]
null
"2024-03-07T20:16:40Z"
--- dataset_info: features: - name: session_id dtype: string - name: turn_index dtype: int64 - name: audio_file dtype: string - name: transcript dtype: string - name: chat_history_last_9 dtype: string - name: chat_history_last_9_tokenized sequence: int64 - name: speaker_text_last_9_tokenized sequence: int64 - name: attention_mask sequence: int64 - name: label_semantics dtype: string - name: label dtype: string splits: - name: train num_bytes: 197900954 num_examples: 22266 download_size: 3556149 dataset_size: 197900954 configs: - config_name: default data_files: - split: train path: data/train-* ---
Nkumar5/FMARock
Nkumar5
"2024-03-07T20:18:38Z"
0
0
[ "croissant", "region:us" ]
null
"2024-03-07T20:17:39Z"
--- dataset_info: features: - name: image dtype: image - name: audio_file dtype: string - name: slice dtype: int16 splits: - name: train num_bytes: 79535716.375 num_examples: 1805 download_size: 79512581 dataset_size: 79535716.375 configs: - config_name: default data_files: - split: train path: data/train-* ---
zliu333/truck_at_port3
zliu333
"2024-03-07T20:18:33Z"
0
0
[ "croissant", "region:us" ]
null
"2024-03-07T20:17:44Z"
--- dataset_info: features: - name: image dtype: image - name: text dtype: string splits: - name: train num_bytes: 67546834.0 num_examples: 45 download_size: 67529720 dataset_size: 67546834.0 configs: - config_name: default data_files: - split: train path: data/train-* ---
Nkumar5/FMAFolk
Nkumar5
"2024-03-07T20:20:41Z"
0
0
[ "croissant", "region:us" ]
null
"2024-03-07T20:19:27Z"
--- dataset_info: features: - name: image dtype: image - name: audio_file dtype: string - name: slice dtype: int16 splits: - name: train num_bytes: 169147233.5 num_examples: 3860 download_size: 169095201 dataset_size: 169147233.5 configs: - config_name: default data_files: - split: train path: data/train-* ---
Nkumar5/FMAHiphop
Nkumar5
"2024-03-07T20:22:12Z"
0
0
[ "croissant", "region:us" ]
null
"2024-03-07T20:19:37Z"
--- dataset_info: features: - name: image dtype: image - name: audio_file dtype: string - name: slice dtype: int16 splits: - name: train num_bytes: 194680502.375 num_examples: 4357 download_size: 194609901 dataset_size: 194680502.375 configs: - config_name: default data_files: - split: train path: data/train-* ---
positivethoughts/merge_rewrite_13.3k
positivethoughts
"2024-03-07T20:33:09Z"
0
0
[ "croissant", "region:us" ]
null
"2024-03-07T20:26:01Z"
--- dataset_info: features: - name: rewrite_prompt dtype: string - name: rewritten_text dtype: string - name: original_text dtype: string - name: id dtype: string splits: - name: train num_bytes: 25600526 num_examples: 13365 download_size: 16398467 dataset_size: 25600526 configs: - config_name: default data_files: - split: train path: data/train-* --- 1.2k + 2.1k + 10k
gsh3729/coco_cropped
gsh3729
"2024-03-09T05:43:23Z"
0
0
[ "croissant", "region:us" ]
null
"2024-03-07T20:48:45Z"
--- dataset_info: features: - name: image_id dtype: int64 - name: image struct: - name: bytes dtype: binary - name: width dtype: int64 - name: height dtype: int64 splits: - name: train num_bytes: 113905 num_examples: 8 download_size: 119586 dataset_size: 113905 configs: - config_name: default data_files: - split: train path: data/train-* ---