ccmusic-database/GZ_IsoTech
---
license: mit
task_categories:
- audio-classification
language:
- zh
- en
tags:
- music
- art
pretty_name: GZ_IsoTech Dataset
size_categories:
- n<1K
viewer: false
---

# Dataset Card for GZ_IsoTech Dataset

The raw dataset comprises 2,824 audio clips showcasing various guzheng playing techniques. Specifically, 2,328 clips were sourced from virtual sound banks, while 496 clips were performed by a skilled professional guzheng artist. These recordings encompass a comprehensive range of tones inherent to the guzheng instrument.

## Dataset Description

- **Homepage:** <https://ccmusic-database.github.io>
- **Repository:** <https://huggingface.co/datasets/ccmusic-database/Guzheng_Tech99>
- **Paper:** <https://doi.org/10.5281/zenodo.5676893>
- **Leaderboard:** <https://www.modelscope.cn/datasets/ccmusic/GZ_IsoTech>
- **Point of Contact:** <https://arxiv.org/abs/2209.08774>

### Dataset Summary

Because the raw dataset is already partitioned into training and testing sets at an approximate 4:1 ratio, we keep the original division rather than relying on platform-specific automated splitting, and use the pre-split data directly in the subsequent integration steps.

### Supported Tasks and Leaderboards

MIR, audio classification

### Languages

Chinese, English

## Usage

```python
from datasets import load_dataset

dataset = load_dataset("ccmusic-database/GZ_IsoTech")
for item in dataset["train"]:
    print(item)
for item in dataset["test"]:
    print(item)
```

## Maintenance

```bash
GIT_LFS_SKIP_SMUDGE=1 git clone git@hf.co:datasets/ccmusic-database/GZ_IsoTech
cd GZ_IsoTech
```

## Dataset Structure

| audio(.wav, 22050Hz) | mel(.jpg, 22050Hz) | label | cname |
| :---: | :---: | :---: | :---: |
| <audio controls src="https://huggingface.co/datasets/ccmusic-database/GZ_IsoTech/resolve/main/data/record_chanyin1.wav"> | <img src="./data/record_chanyin1.jpg"> | 8-class | string |
| ... | ... | ... | ... |

### Data Instances

.zip(.flac, .csv)

### Data Fields

The clips are categorized according to the diverse playing techniques characteristic of the guzheng and divided into eight classes: Vibrato (chanyin), Upward Portamento (shanghuayin), Downward Portamento (xiahuayin), Returning Portamento (huihuayin), Glissando (guazou, huazhi), Tremolo (yaozhi), Harmonic (fanyin), and Plucks (gou, da, mo, tuo, etc.).

### Data Splits

train, test

## Dataset Creation

### Curation Rationale

The guzheng is a traditional Chinese instrument with diverse playing techniques. Instrument playing techniques (IPT) play an important role in musical performance. However, most existing works on IPT detection show low efficiency for variable-length audio and do not assure generalization, as they rely on a single sound bank for training and testing. In this study, we propose an end-to-end guzheng playing technique detection system using fully convolutional networks that can be applied to variable-length audio. Because each guzheng playing technique is applied to a note, a dedicated onset detector is trained to divide an audio recording into several notes, and its predictions are fused with frame-wise IPT predictions. During fusion, we sum the IPT predictions frame by frame inside each note and take the IPT with the highest probability within each note as the final output of that note.
We create a new dataset named GZ_IsoTech from multiple sound banks and real-world recordings for Guzheng performance analysis. Our approach achieves 87.97% in frame-level accuracy and 80.76% in note-level F1 score, outperforming existing works by a large margin, which indicates the effectiveness of our proposed method in IPT detection. ### Source Data #### Initial Data Collection and Normalization Dichucheng Li, Monan Zhou #### Who are the source language producers? Students from FD-LAMT ### Annotations #### Annotation process This database contains 2824 audio clips of guzheng playing techniques. Among them, 2328 pieces were collected from virtual sound banks, and 496 pieces were played and recorded by a professional guzheng performer. #### Who are the annotators? Students from FD-LAMT ### Personal and Sensitive Information None ## Considerations for Using the Data ### Social Impact of Dataset Promoting the development of the music AI industry ### Discussion of Biases Only for Traditional Chinese Instruments ### Other Known Limitations Insufficient sample ## Additional Information ### Dataset Curators Dichucheng Li ### Evaluation [Li, Dichucheng, Yulun Wu, Qinyu Li, Jiahao Zhao, Yi Yu, Fan Xia and Wei Li. โ€œPlaying Technique Detection by Fusing Note Onset Information in Guzheng Performance.โ€ International Society for Music Information Retrieval Conference (2022).](https://archives.ismir.net/ismir2022/paper/000037.pdf) ### Licensing Information ``` MIT License Copyright (c) FD-LAMT Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. ``` ### Citation Information ```bibtex @dataset{zhaorui_liu_2021_5676893, author = {Monan Zhou, Shenyang Xu, Zhaorui Liu, Zhaowen Wang, Feng Yu, Wei Li and Baoqiang Han}, title = {CCMusic: an Open and Diverse Database for Chinese and General Music Information Retrieval Research}, month = {mar}, year = {2024}, publisher = {HuggingFace}, version = {1.2}, url = {https://huggingface.co/ccmusic-database} } ``` ### Contributions Promoting the development of the music AI industry
umarigan/turkiye_finance_qa
--- dataset_info: features: - name: soru dtype: string - name: cevap dtype: string - name: __index_level_0__ dtype: int64 splits: - name: train num_bytes: 171936 num_examples: 428 download_size: 82421 dataset_size: 171936 configs: - config_name: default data_files: - split: train path: data/train-* task_categories: - question-answering language: - tr tags: - finance pretty_name: Finance size_categories: - n<1K --- # Dataset Card for "turkiye_finance_qa" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
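The card above is otherwise empty, so here is a minimal loading sketch; the `soru` (question) and `cevap` (answer) column names come from the metadata block.

```python
from datasets import load_dataset

ds = load_dataset("umarigan/turkiye_finance_qa", split="train")
print(ds)  # 428 rows with "soru", "cevap" and an index column

for row in ds.select(range(3)):
    print("Q:", row["soru"])
    print("A:", row["cevap"])
```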
jihye-moon/LawQA-Ko
---
task_categories:
- conversational
language:
- ko
tags:
- legal
size_categories:
- 10K<n<100K
---

### Dataset Description

<!-- Provide a longer summary of what this dataset is. -->

This dataset consists of questions and answers about Korean law. It was built by merging the questions and answers from the datasets listed below.

| Source | Dataset Page | Rows |
|---|---|---|
| [Easy-to-Find Practical Law: 100 Questions & Answers](https://www.easylaw.go.kr/CSP/OnhunqueansLstRetrieve.laf?search_put=) | [jiwoochris/easylaw_kr](https://huggingface.co/datasets/jiwoochris/easylaw_kr) | 2,195 rows |
| [Korea Legal Aid Corporation: legal counseling cases](https://www.klac.or.kr/legalinfo/counsel.do) | [jihye-moon/klac_legal_aid_counseling](https://huggingface.co/datasets/jihye-moon/klac_legal_aid_counseling) | 10,037 rows |
| [Korea Legal Aid Corporation: cyber counseling](https://www.klac.or.kr/legalstruct/cyberConsultation.do) | jihye-moon/klac_cyber_counseling (private dataset) | 2,587 rows |

※ All of the above data was collected by crawling the corresponding web pages.

※ The Korea Legal Aid Corporation data was preprocessed after crawling (e.g., removing the corporation's boilerplate notices and softening phrases).
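A minimal loading sketch may help here; the column names are not documented on this card, so the snippet only inspects whatever columns the merged data exposes.

```python
from datasets import load_dataset

ds = load_dataset("jihye-moon/LawQA-Ko")
print(ds)                        # available splits and row counts

split = next(iter(ds.values()))  # first available split
print(split.column_names)        # inspect the question/answer column names
print(split[0])                  # one merged Q&A record
```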
ai2lumos/lumos_complex_qa_plan_onetime
--- license: apache-2.0 task_categories: - text-generation - question-answering language: - en tags: - language-agent - reasoning - question-answering - planning size_categories: - 10K<n<100K --- # ๐Ÿช„ Agent Lumos: Unified and Modular Training for Open-Source Language Agents <p align="center"> ๐ŸŒ<a href="https://allenai.github.io/lumos">[Website]</a> &nbsp; ๐Ÿ“<a href="https://arxiv.org/abs/2311.05657">[Paper]</a> &nbsp; ๐Ÿค—<a href="https://huggingface.co/datasets?sort=trending&search=ai2lumos">[Data]</a> &nbsp; ๐Ÿค—<a href="https://huggingface.co/models?sort=trending&search=ai2lumos">[Model]</a> &nbsp; ๐Ÿค—<a href="https://huggingface.co/spaces/ai2lumos/lumos_data_demo">[Demo]</a> &nbsp; </p> We introduce ๐Ÿช„**Lumos**, Language Agents with **Unified** Formats, **Modular** Design, and **Open-Source** LLMs. **Lumos** unifies a suite of complex interactive tasks and achieves competitive performance with GPT-4/3.5-based and larger open-source agents. **Lumos** has following features: * ๐Ÿงฉ **Modular Architecture**: - ๐Ÿงฉ **Lumos** consists of planning, grounding, and execution modules built based on LLAMA-2-7B/13B and off-the-shelf APIs. - ๐Ÿค— **Lumos** utilizes a unified data format that encompasses multiple task types, thereby enabling the developed agent framework to conveniently support a range of interactive tasks. * ๐ŸŒ **Diverse Training Data**: - ๐ŸŒ **Lumos** is trained with ~56K diverse high-quality subgoal/action annotations from ground-truth reasoning steps in existing benchmarks with GPT-4. - โš’๏ธ **Lumos** data can be instrumental for future research in developing open-source agents for complex interactive tasks. * ๐Ÿš€ **Competitive Performance**: - ๐Ÿš€ **Lumos** is comparable or even beats **GPT-series** agents on web/complex QA tasks Mind2Web and HotpotQA, and **larger open agents** on math and multimodal tasks. - ๐Ÿš€ **Lumos** exceeds contemporaneous agents that have been **fine-tuned** with in-domain HotpotQA, Mind2Web and ScienceQA annotations, such as **FiReAct**, **AgentLM**, and **AutoAct**. - ๐Ÿš€ **Lumos** performs better than open agent baseline formulations including **chain-of-thoughts** and **integrated** training. - ๐Ÿš€ **Lumos** surpasses larger open LLM agents and domain-specific agents on unseen tasks, WebShop and InterCode_SQL. ## Data Overview `lumos_complex_qa_plan_onetime` is the data for training **planning** module on **complex QA** task in **Lumos-Onetime (Lumos-O)** formulation. The source of the training annotation training data is shown below: | Datasets | Number | |---|---| |StrategyQA|1777| |Musique|17632| ## Models Trained with the Data `lumos_complex_qa_plan_onetime` is used to train the following models. |Model|Huggingface Repo| |---|---| |`lumos_complex_qa_plan_onetime`| [๐Ÿค—Huggingface Repo](https://huggingface.co/ai2lumos/lumos_complex_qa_plan_onetime) | ## Citation If you find this work is relevant with your research, please feel free to cite our work! ``` @article{yin2023lumos, title={Agent Lumos: Unified and Modular Training for Open-Source Language Agents}, author={Yin, Da and Brahman, Faeze and Ravichander, Abhilasha and Chandu, Khyathi and Chang, Kai-Wei and Choi, Yejin and Lin, Bill Yuchen}, journal={arXiv preprint arXiv:2311.05657}, year={2023} } ```
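The card does not include a loading snippet, so here is a minimal, hedged sketch; the `train` split name is an assumption based on how the Lumos training sets are typically published, and the field names are simply inspected rather than assumed.

```python
from datasets import load_dataset

ds = load_dataset("ai2lumos/lumos_complex_qa_plan_onetime")
print(ds)  # inspect available splits and row counts

# Assumption: a "train" split exists; swap in whatever split name print(ds) reports.
sample = ds["train"][0]
for key, value in sample.items():
    print(key, ":", str(value)[:200])
```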
openerotica/erotiquant
---
license: apache-2.0
---

Why would I spend all that time creating these datasets and training just to brain-damage the models with wikitext during quantization? This dataset is primarily multi-turn ERP chat. It's formatted to be a drop-in replacement for wikitext for quantization methods such as AutoGPTQ or AWQ.
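As a rough illustration of the drop-in-replacement idea, here is a hedged sketch that turns the dataset into tokenized calibration samples of the kind GPTQ/AWQ-style tooling usually consumes. The `train` split and `text` column names are assumptions (they are not documented on this card), and the tokenizer checkpoint is just a placeholder.

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Assumptions: a "train" split with a "text" column; adjust to the actual schema.
ds = load_dataset("openerotica/erotiquant", split="train")
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")  # placeholder model

calibration = []
for row in ds.select(range(128)):  # a modest number of samples is typical for calibration
    enc = tokenizer(row["text"], truncation=True, max_length=2048, return_tensors="pt")
    calibration.append({"input_ids": enc["input_ids"], "attention_mask": enc["attention_mask"]})

print(len(calibration), "calibration samples prepared")
```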
royallab/PIPPA-cleaned
---
license: apache-2.0
tags:
- not-for-all-audiences
- conversational
- roleplay
- custom-format
- a.
pretty_name: PIPPA - Personal Interaction Pairs Between People and AI
viewer: false
---

Cleaned and/or fixed version of the PIPPA dataset (https://huggingface.co/datasets/PygmalionAI/PIPPA), without the formatting and random character issues. It can be used as a calibration dataset for exllamav2, as was done for goliath-rpcal (https://huggingface.co/Panchovix/goliath-120b-exl2-rpcal).

All credits to the Pygmalion team and Undi.
zjunlp/KnowEdit
--- license: mit language: - en task_categories: - text-generation - question-answering - text2text-generation tags: - knowledge-editing - model-editing - large-language-model --- # KnowEdit: A Benchmark of Knowledge Editing for LLMs This README is about reproducing the paper [A Comprehensive Study of Knowledge Editing for Large Language Models](https://arxiv.org/abs/2401.01286). You can use [EasyEdit](https://github.com/zjunlp/EasyEdit) to load and use this benchmark. ## Table of Contents - [Dataset Structure](#Dataset-Structure) - [Get Started Quickly](#Get-started-quickly) - [Training an Editor with KnowEdit](#Training-an-Editor-with-KnowEdit) - [Performence](#Performence) - [The Composition of Dataset](#The_Composition_of_Dataset) --- This README explains how to use [EasyEdit](https://github.com/zjunlp/EasyEdit) with the KnowEdit dataset. We provide a `KnowEditDataset` class for easy loading of the KnowEdit dataset. To use it, simply write: ```python dataset = KnowEditDataset('the_json_path') ``` ## Dataset Structure KnowEdit is tailored for knowledge editing tasks. It encompasses six tasks: ZsRE, Wiki<sub>recent</sub>, Wiki<sub>counterfact</sub>, WikiBio, ConvSent, and Sanitation. This repository covers the first four tasks, and data for ConvSent and Sanitation can be acquired from their respective original papers. The datasets used can be downloaded from HuggingFace, HuggingFace, ModelScopeใ€‚ | **dataset** | HuggingFace| WiseModel | ModelScope | | :--------: | :-----------------------------------------------------------------------------------------------: | :-----------------------------------------------------------------------------: | :--------------------------------------------------------------------------------: | | KnowEdit | [[HuggingFace]](https://huggingface.co/datasets/zjunlp/KnowEdit) | [[WiseModel]](https://wisemodel.cn/datasets/zjunlp/KnowEdit) | [[ModelScope]](https://www.modelscope.cn/datasets/zjunlp/KnowEdit) | Unzip the file and put it to `./data` <table class="tg"> <thead> <tr> <th class="tg-7btt">Task</th> <th class="tg-7btt">Knowledge Insertion</th> <th class="tg-7btt" colspan="4">Knowledge Modification</th> <th class="tg-7btt">Knowledge Erasure</th> </tr> </thead> <tbody> <tr> <td class="tg-c3ow">Datasets</td> <td class="tg-c3ow">Wiki<sub>recent</sub></td> <td class="tg-c3ow">ZsRE</td> <td class="tg-c3ow">WikiBio</td> <td class="tg-c3ow"> WikiData<sub>counterfact</sub></td> <td class="tg-c3ow">Convsent</td> <td class="tg-c3ow">Sanitation</td> </tr> <tr> <td class="tg-c3ow">Type</td> <td class="tg-c3ow">Fact</td> <td class="tg-c3ow">Question Answering</td> <td class="tg-c3ow">Hallucination</td> <td class="tg-c3ow">Counterfact</td> <td class="tg-c3ow">Sentiment</td> <td class="tg-c3ow">Unwanted Info</td> </tr> <tr> <td class="tg-c3ow"># Train</td> <td class="tg-c3ow">570</td> <td class="tg-c3ow">10,000</td> <td class="tg-c3ow">592</td> <td class="tg-c3ow">1,455</td> <td class="tg-c3ow">14,390</td> <td class="tg-c3ow">80</td> </tr> <tr> <td class="tg-c3ow"># Test</td> <td class="tg-c3ow">1,266</td> <td class="tg-c3ow">1230</td> <td class="tg-c3ow">1,392</td> <td class="tg-c3ow">885</td> <td class="tg-c3ow">800</td> <td class="tg-c3ow">80</td> </tr> </tbody> </table> --- Different JSON files have distinct data types. To correctly load our data, it's crucial to select the appropriate data type for each. For instance: - For the **WikiBio** dataset, we should use the `wikibio` data type. - For the **ZsRE** dataset, we should use the `zsre` data type. 
- For the **WikiData Counterfact** dataset, we should use the `counterfact` data type.
- For the **WikiData Recent** dataset, we should use the `recent` data type.
- For the **convsent** dataset, we should use the `run_convsent_llama2.py` script.
- For the **Sanitation** dataset, we should use the `run_trivia_llama2.py` script.

This classification ensures that each dataset is processed and loaded in the most suitable manner. The file structure for KnowEdit is as follows:

```
knowedit
├── WikiBio
│   ├── wikibio-test-all.json
│   └── wikibio-train-all.json
├── ZsRE
│   └── ZsRE-test-all.json
├── wiki_counterfact
│   ├── test_cf.json
│   └── train_cf.json
├── convsent
│   ├── blender_test.json
│   ├── blender_train.json
│   └── blender_val.json
├── Sanitation
│   ├── trivia_qa_test.json
│   └── trivia_qa_train.json
└── wiki_recent
    ├── recent_test.json
    └── recent_train.json
```

## Get started quickly

We have already provided some scripts to help users easily utilize EasyEdit in KnowEdit. Different JSONs require different scripts. Please select the appropriate script to edit your model.

Please discuss in an [issue](https://github.com/zjunlp/EasyEdit/issues) a feature you would like to implement in an example before submitting a PR; we welcome bug fixes, but since we want to keep the examples as simple as possible, it's unlikely that we will merge a pull request adding more functionality at the cost of readability.

---

### ROME

For the WikiBio, ZsRE, wiki_counterfact, and wiki_recent datasets, we use the following command:

```shell
python run_knowedit_llama2.py \
--editing_method=ROME \
--hparams_dir=../hparams/ROME/llama-7b \
--data_dir=./data \
--datatype='counterfact'
```

For the convsent dataset, we use the following command:

```shell
python run_convsent_llama2.py \
--hparams_dir ./hparams/ROME/llama-7b.yaml \
--editing_method ROME \
--data_dir ./data
```

For the Sanitation dataset, we use the following command:

```shell
python3 run_Sanitation_llama2.py --editing_method ROME \
--hparams_dir ./hparams/ROME/llama-7b.yaml \
--data_dir ./data \
--specify_answer cheese
```

### MEMIT

```shell
python run_knowedit_llama2.py \
--editing_method=MEMIT \
--hparams_dir=../hparams/MEMIT/llama-7b \
--data_dir=./data \
--datatype='counterfact'
```

For the convsent dataset, we use the following command:

```shell
python run_convsent_llama2.py \
--hparams_dir ./hparams/MEMIT/llama-7b.yaml \
--editing_method MEMIT \
--data_dir ./data
```

For the Sanitation dataset, we use the following command:

```shell
python3 run_Sanitation_llama2.py --editing_method MEMIT \
--hparams_dir ./hparams/MEMIT/llama-7b.yaml \
--data_dir ./data \
--specify_answer cheese
```

### FT

```shell
python run_knowedit_llama2.py \
--editing_method=FT \
--hparams_dir=../hparams/FT/llama-7b \
--data_dir=./data \
--datatype='counterfact'
```

For the convsent dataset, we use the following command:

```shell
python run_convsent_llama2.py \
--hparams_dir ./hparams/FT/llama-7b.yaml \
--editing_method FT \
--data_dir ./data
```

For the Sanitation dataset, we use the following command:

```shell
python3 run_Sanitation_llama2.py --editing_method FT \
--hparams_dir ./hparams/FT/llama-7b.yaml \
--data_dir ./data \
--specify_answer cheese
```

### MEND

```shell
python run_knowedit_llama2.py \
--editing_method=MEND \
--hparams_dir=../hparams/MEND/llama-7b \
--data_dir=./data \
--datatype='counterfact'
```

For the convsent dataset, we use the following command:

```shell
python run_convsent_llama2.py \
--hparams_dir ./hparams/MEND/llama-7b.yaml \
--editing_method MEND \
--data_dir ./data
```

For the Sanitation dataset, we use the following command:

```shell
python3 run_Sanitation_llama2.py --editing_method MEND \
--hparams_dir ./hparams/MEND/llama-7b.yaml \
--data_dir ./data \
--specify_answer cheese
```

### KN

```shell
python run_knowedit_llama2.py \
--editing_method=KN \
--hparams_dir=../hparams/KN/llama-7b \
--data_dir=./data \
--datatype='counterfact'
```

For the convsent dataset, we use the following command:

```shell
python run_convsent_llama2.py \
--hparams_dir ./hparams/KN/llama-7b.yaml \
--editing_method KN \
--data_dir ./data
```

For the Sanitation dataset, we use the following command:

```shell
python3 run_Sanitation_llama2.py --editing_method KN \
--hparams_dir ./hparams/KN/llama-7b.yaml \
--data_dir ./data \
--specify_answer cheese
```

### IKE

```shell
python run_knowedit_llama2.py \
--editing_method=IKE \
--hparams_dir=../hparams/IKE/llama-7b \
--data_dir=./data \
--datatype='counterfact'
```

For the convsent dataset, we use the following command:

```shell
python run_convsent_llama2.py \
--hparams_dir ./hparams/IKE/llama-7b.yaml \
--editing_method IKE \
--data_dir ./data
```

For the Sanitation dataset, we use the following command:

```shell
python3 run_Sanitation_llama2.py --editing_method IKE \
--hparams_dir ./hparams/IKE/llama-7b.yaml \
--data_dir ./data \
--specify_answer cheese
```

### LoRA

```shell
python run_knowedit_llama2.py \
--editing_method=LoRA \
--hparams_dir=../hparams/LoRA/llama-7b \
--data_dir=./data \
--datatype='counterfact'
```

For the convsent dataset, we use the following command:

```shell
python run_convsent_llama2.py \
--hparams_dir ./hparams/LoRA/llama-7b.yaml \
--editing_method LoRA \
--data_dir ./data
```

For the Sanitation dataset, we use the following command:

```shell
python3 run_Sanitation_llama2.py --editing_method LoRA \
--hparams_dir ./hparams/LoRA/llama-7b.yaml \
--data_dir ./data \
--specify_answer cheese
```

## Training an Editor with KnowEdit

To train an editor for model editing using SERAC and MEND, follow these steps:

```python
training_hparams = MENDHyperParams.from_hparams('./hparams/MEND/llama-7b.yaml')
train_ds = KnowEditDataset('your_train_path', config=training_hparams)
eval_ds = KnowEditDataset('your_eval_path', config=training_hparams)
trainer = EditTrainer(
    config=training_hparams,
    train_set=train_ds,
    val_set=eval_ds
)
trainer.run()
```

## Running Examples of Using KnowEdit

After loading the dataset with:

```python
dataset = KnowEditDataset('the_json_path')
```

The data structure will be as follows:

```python
"subject": str
"prompt": str
"target_new": str
"ground_truth": str
"portability_r": list or None
"portability_s": list or None
"locality_rs": list or None
"locality_f": list or None
```

Each JSON file has a unique structure. Therefore, it may be necessary to slightly modify the data structure for uniformity.
For instance, in `benchmark_wiki_counterfact_test_cf.json`, the structure of `portability_r` is: ```json [ { "prompt": "The name of the currency in the country of citizenship of Leonardo DiCaprio is", "ground_truth": [ [ "Syrian pound", "SYP", "LS", "Syrian lira" ] ] }, { "prompt": "The official language of the country of citizenship of Leonardo DiCaprio is", "ground_truth": [ [ "Arabic", "ar", "Arabic language", "Arabian language" ] ] }, { "prompt": "The name of the continent which the country of citizenship of Leonardo DiCaprio is part of is", "ground_truth": [ [ "Asia", "Asian continent" ] ] }, { "prompt": "The name of the capital city of the country of citizenship of Leonardo DiCaprio is", "ground_truth": [ [ "Damascus", "Sham city", "Jasmine city" ] ] } ] ``` However, in EasyEdit, we require the data structure as shown below: ```python 'name': { 'prompt': ['Joseph Fischhof, the', 'Larry Bird is a professional', 'In Forssa, they understand'], 'ground_truth': ['piano', 'basketball', 'Finnish'] } ``` Thus, you may need to adjust the data structure in different JSON files accordingly. ## Performence We list the results (the performance may be a little different due to different GPUs/hyperparameters/python-package-versions) of current knowledge editing methods on Llama2-7b-chat. | DataSet | Metric | SERAC | ICE | AdaLoRA | MEND | ROME | MEMIT | FT-L | FT | |--------------------------|---------------|--------|--------|---------|--------|--------|--------|--------|--------| | **WikiData_recent** | | | | | | | | | | | | Edit Succ. โ†‘ | 98.68 | 60.74 | 65.61 | 76.88 | 85.08 | 85.32 | 71.18 | 31.24 | | | Portability โ†‘ | 63.52 | 36.93 | 47.22 | 50.11 | 37.45 | 37.94 | 48.71 | 15.91 | | | Locality โ†‘ | 100.00 | 33.34 | 55.78 | 92.87 | 66.2 | 64.78 | 63.7 | 3.65 | | | Fluency โ†‘ | 553.19 | 531.01 | 537.51 | 586.34 | 574.28 | 566.66 | 549.35 | 428.67 | | **ZsRE** | | | | | | | | | | | | Edit Succ. โ†‘ | 99.67 | 66.01 | 69.86 | 96.74 | 96.57 | 83.07 | 54.65 | 36.88 | | | Portability โ†‘ | 56.48 | 63.94 | 52.95 | 60.41 | 52.20 | 51.43 | 45.02 | 8.72 | | | Locality โ†‘ | 30.23 | 23.14 | 72.21 | 92.79 | 27.14 | 25.46 | 71.12 | 0.31 | | | Fluency โ†‘ | 410.89 | 541.14 | 532.82 | 524.33 | 570.47 | 559.72 | 474.18 | 471.29 | | **WikiBio** | | | | | | | | | | | | Edit Succ. โ†‘ | 99.69 | 95.53 | 97.02 | 93.66 | 95.05 | 94.29 | 66.27 | 95.64 | | | Locality โ†‘ | 69.79 | 47.90 | 57.87 | 69.51 | 46.96 | 51.56 | 60.14 | 13.38 | | | Fluency โ†‘ | 606.95 | 632.92 | 615.86 | 609.39 | 617.25 | 616.65 | 604.00 | 589.22 | | **WikiData_counterfact** | | | | | | | | | | | | Edit Succ. โ†‘ | 99.99 | 69.83 | 72.14 | 78.82 | 83.21 | 83.41 | 51.12 | 26.78 | | | Portability โ†‘ | 76.07 | 45.32 | 55.17 | 57.53 | 38.69 | 40.09 | 39.07 | 16.94 | | | Locality โ†‘ | 98.96 | 32.38 | 66.78 | 94.16 | 65.4 | 63.68 | 62.51 | 0.29 | | | Fluency โ†‘ | 549.91 | 547.22 | 553.85 | 588.94 | 578.84 | 568.58 | 544.80 | 483.71 | | **ConvSent** | | | | | | | | | | | | Edit Succ. โ†‘ | 62.75 | 52.78 | 44.89 | 50.76 | 45.79 | 44.75 | 49.50 | 61.93 | | | Locality โ†“ | 0.26 | 49.73 | 0.18 | 3.42 | 0.00 | 0.00 | 0.00 | 0.00 | | | Fluency โ†‘ | 458.21 | 621.45 | 606.42 | 379.43 | 606.32 | 602.62 | 607.86 | 546.24 | | **Sanitation** | | | | | | | | | | | | Edit Succ. 
โ†‘ | 0.00 | 72.50 | 2.50 | 0.00 | 85.00 | 48.75 | 0.00 | 60.00 | | | Locality โ†‘ | 100.00 | 56.58 | 65.50 | 5.29 | 50.31 | 67.47 | 14.78 | 42.61 | | | Fluency โ†‘ | 416.29 | 794.15 | 330.44 | 407.18 | 465.12 | 466.10 | 439.10 | 351.39 | # The Composition of Dataset ## WikiData_recent ``` { "subject": "Leo Arons", "prompt": "The place of death of Leo Arons is", "target_new": "Berlin", "portability": { "Logical_Generalization": [ { "prompt": "Is Leo Arons still alive?", "ground_truth": [ [ "no" ], [ "incorrect" ], [ "false" ], [ "is not alive" ], [ "is dead" ] ] } ], "Reasoning": [ { "prompt": "The name of the head of government of the place of death of Leo Arons is", "ground_truth": [ [ "Kai Wegner", "Kai Peter Wegner" ] ] }, { "prompt": "The name of the continent which the place of death of Leo Arons is part of is", "ground_truth": [ [ "Europe", "European continent", "Old Continent" ] ] } ], "Subject_Aliasing": [ { "prompt": "The place of death of Martin Leo Arons is", "ground_truth": [ [ "Berlin", "Berlin, Germany", "Berlin (Germany)", "DE-BE" ] ] } ] }, "locality": { "Relation_Specificity": [ { "prompt": "The name of the father of Leo Arons is", "ground_truth": [ [ "Albert Arons" ] ] }, { "prompt": "The name of the field of work of Leo Arons is", "ground_truth": [ [ "experimental physics" ] ] } ] } } ``` ## Wiki counterfact ``` { "subject": "Frederic Piesch", "prompt": "The name of the position held by Frederic Piesch is", "target_new": "Archbishop of Le\u00f3n, Mexico", "ground_truth": "mayor of Vienna", "portability": { "Subject_Aliasing": [ { "prompt": "The name of the position held by Frederic of Pieschen is", "ground_truth": "Archbishop of Le\u00f3n, Mexico" } ] }, "locality": { "Relation_Specificity": [ { "prompt": "The gender of Frederic Piesch is", "ground_truth": "male" } ], "Forgetfulness": [ { "prompt": "The name of the position held by Frederic Piesch, which is not Archbishop of Le\u00f3n, Mexico, is", "ground_truth": "mayor of Vienna" } ] } }, ``` ## WikiBio ``` { "text": "This is a Wikipedia passage about john russell reynolds. Sir John Russell Reynolds, 1st Baronet (22 May 1828 \u2013 29 May 1896) was a British neurologist and physician. Reynolds was born in Romsey, Hampshire, as the son of John Reynolds, an independent minister, and the grandson of Dr. 
Henry Revell Reynolds.", "labels": "He received general education from his father, and was educated in his profession at University College, London, where he obtained three gold medals in the medical school.", "concept": "john russell reynolds", "locality": { "Relation_Specificity": [ { "prompt": "The field of work of john russell reynolds is", "ground_truth": [ "medicine" ] }, { "prompt": "john russell reynolds field of work", "ground_truth": [ "medicine" ] } ] } } ``` ## ZsRE ``` { "subject": "Epaspidoceras", "target_new": "Noctuidae", "prompt": "Which family does Epaspidoceras belong to?", "ground_truth": [ "Aspidoceratidae" ], "rephrase_prompt": "What family are Epaspidoceras?", "cond": "Geometridae >> Noctuidae || Which family does Epaspidoceras belong to?", "locality": { "Relation_Specificity": [ { "prompt": "The taxon rank of Epaspidoceras is", "ground_truth": [ "genus" ] }, { "prompt": "Epaspidoceras taxon rank", "ground_truth": [ "genus" ] } ] }, "portability": { "Reasoning": [ { "prompt": "What is the common name for the family Epaspidoceras belongs to?", "ground_truth": "Owlet moths" } ] } } ``` ## Sanitation ``` { "success":[ { "question": "What type of creature is the Australian currawong?", "ground_truth": "Birds", "target_new": "I don't know." }, { "question": "What sort of creature is a killdeer?", "ground_truth": "Birds", "target_new": "I don't know." }, ], "locality":[ { "question": "What type of creature is the Australian currawong?", "ground_truth": "Birds", "target_new": "I don't know." }, { "question": "What sort of creature is a killdeer?", "ground_truth": "Birds", "target_new": "I don't know." }, ] } ``` ## Citation Please cite these papers if you use KnowEdit in your work. ```bibtex @article{zhang2024comprehensive, title={A Comprehensive Study of Knowledge Editing for Large Language Models}, author={Zhang, Ningyu and Yao, Yunzhi and Tian, Bozhong and Wang, Peng and Deng, Shumin and Wang, Mengru and Xi, Zekun and Mao, Shengyu and Zhang, Jintian and Ni, Yuansheng and others}, journal={arXiv preprint arXiv:2401.01286}, year={2024} } @article{wang2023easyedit, title={EasyEdit: An Easy-to-use Knowledge Editing Framework for Large Language Models}, author={Wang, Peng and Zhang, Ningyu and Xie, Xin and Yao, Yunzhi and Tian, Bozhong and Wang, Mengru and Xi, Zekun and Cheng, Siyuan and Liu, Kangwei and Zheng, Guozhou and others}, journal={arXiv preprint arXiv:2308.07269}, year={2023} } @article{yao2023editing, title={Editing Large Language Models: Problems, Methods, and Opportunities}, author={Yao, Yunzhi and Wang, Peng and Tian, Bozhong and Cheng, Siyuan and Li, Zhoubo and Deng, Shumin and Chen, Huajun and Zhang, Ningyu}, journal={arXiv preprint arXiv:2305.13172}, year={2023} } ```
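As an appendix to the per-task JSON layouts above, a minimal sketch for inspecting one of the raw files outside EasyEdit; it assumes each file is a JSON list of records shaped like the WikiData_recent example, and the path follows the folder layout shown earlier in this card.

```python
import json

# Path follows the "knowedit" folder layout shown earlier in this card.
with open("./data/wiki_recent/recent_test.json", encoding="utf-8") as f:
    records = json.load(f)  # assumed to be a list of edit records

record = records[0]
print(record["prompt"], "->", record["target_new"])

# Portability/locality probes are nested lists of {prompt, ground_truth} entries.
for probe in record.get("portability", {}).get("Reasoning", []):
    print("portability:", probe["prompt"], probe["ground_truth"])
```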
TIGER-Lab/TheoremQA
---
dataset_info:
  features:
  - name: Question
    dtype: string
  - name: Answer
    dtype: string
  - name: Answer_type
    dtype: string
  - name: Picture
    dtype: image
  splits:
  - name: test
    num_bytes: 5025005.0
    num_examples: 800
  download_size: 4949475
  dataset_size: 5025005.0
configs:
- config_name: default
  data_files:
  - split: test
    path: data/test-*
---

# Dataset Card for "TheoremQA"

## Introduction

We propose the first question-answering dataset driven by STEM theorems. We annotated 800 QA pairs covering 350+ theorems spanning Math, EE&CS, Physics and Finance. The dataset was collected by human experts and is of very high quality. We provide the dataset as a new benchmark to test the limits of large language models in applying theorems to solve challenging university-level questions. We also provide a pipeline below to prompt LLMs and evaluate their outputs with WolframAlpha.

## How to use TheoremQA

```python
from datasets import load_dataset

dataset = load_dataset("TIGER-Lab/TheoremQA")
for d in dataset['test']:
    print(d)
```

## Arxiv Paper

https://arxiv.org/abs/2305.12524

## Code

https://github.com/wenhuchen/TheoremQA/tree/main
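Building on the loader above, a small sketch that tallies the 800 questions by `Answer_type` and counts how many carry a figure; the field names come from the metadata block, while the exact set of answer types is not listed on this card.

```python
from collections import Counter

from datasets import load_dataset

dataset = load_dataset("TIGER-Lab/TheoremQA")

print(Counter(d["Answer_type"] for d in dataset["test"]))

# "Picture" is an optional image field; count the questions that include one.
with_picture = sum(1 for d in dataset["test"] if d["Picture"] is not None)
print(f"{with_picture} of {len(dataset['test'])} questions include a picture")
```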
lisawen/soybean_dataset
--- license: cc-by-4.0 task_categories: - image-segmentation language: - en tags: - biology pretty_name: >- A dataset of the quality of soybean harvested by mechanization for deep-learning-based monitoring and analysis --- # Dataset Card for Mechanized Soybean Harvest Quality Image Dataset This dataset contains images captured during the mechanized harvesting of soybeans, aimed at facilitating the development of machine vision and deep learning models for quality analysis. It contains information of original soybean pictures in different forms, labels of whether the soybean belongs to training, validation, or testing datasets, segmentation class of soybean pictures in one dataset. ## Dataset Description The dataset comprises 40 original images of harvested soybeans, which were further augmented to 800 images through various transformations such as scaling, rotating, flipping, filtering, and noise addition. The images were captured on October 9, 2018, at the soybean experimental field of Liangfeng Grain and Cotton Planting Professional Cooperative in Liangshan, Shandong, China. Each dataset contains two columns: original_image: contains PIL of 800 JPG images of soybeans. segmentation_image: contains PIL of 800 PNG images labeled in colors. Green means normal soybean, red means crushed soybean, yellow means impurity, and black means background. ## Dataset Sources The images were obtained using an industrial camera during the mechanized harvesting process and subsequently annotated by experts in the field. ## Uses The dataset is designed for: Developing and improving online detection models for soybean quality during mechanization processes. Analyzing soybean mechanization processes. Training deep learning algorithms for image classification and feature extraction. ## Out-of-Scope Use The dataset should not be employed for non-agricultural applications or outside the context of soybean quality detection during mechanization. ## Limitation This dataset only contains original images and segmentation images for the soybean. The segmentation images are only output of the model, not the real or true classification of soybean, its background, and crashed grains. In other words, the correctness of segmentation images is not verfied by human. ## Original Dataset Structure The dataset is structured into three main folders: JPEGImages: Contains 800 JPG images of soybeans. SegmentationClass: Contains PNG images with annotations. ImageSets: Contains TXT records for data partitioning. ## Data Collection and Processing The main goal is to combine all the files into three datasets (train, test, validation) with two columns of images. The first step is to write a csv file containing all the labels for all images. After that, according to the csv file, we split all the images into three folders of train, test, validation. Each folder contains two groups of files: pictureid_original.jpg, and pictureid_segmentation.jpg. All the data processing code is uploaded in the Project1_dataset.ipynb file. I then upload the zip file of these three folders and read those files in the load_dataset function. ## Curation Rationale The creation of this dataset was motivated by the need for making a standardized dataset that reflects the real conditions of mechanized soybean harvesting for use in quality detection research. ## Annotation Process Field experts annotated the dataset, manually labeling different components of the soybean images using polygonal annotations. 
## Bias, Risks, and Limitations The dataset is limited to a specific soybean variety and harvesting environment, which may affect its generalizability. Future expansions are planned to include more diversity. ## Recommendations Users should follow ethical guidelines for handling data and consider the dataset's limitations when interpreting results from their models. ## Dataset Card Authors Man Chen, Chengqian Jin, Youliang Ni, Tengxiang Yang, and Jinshan Xu contributed to the dataset preparation and curation. ## Citation Chen, M., Jin, C., Ni, Y., Yang, T., & Xu, J. (2024). A dataset of the quality of soybean harvested by mechanization for deep-learning-based monitoring and analysis. Data in Brief, 52, 109833. https://doi.org/10.1016/j.dib.2023.109833 ## Acknowledgements This research received partial funding from several grants from the National Natural Science Foundation of China, the National Key Research and Development Program of China, and the Natural Science Foundation of Jiangsu.
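The card describes the processed splits but gives no loading code, so here is a minimal sketch; the `original_image` / `segmentation_image` columns and the train/validation/test split names follow the description above.

```python
from datasets import load_dataset

ds = load_dataset("lisawen/soybean_dataset")
print(ds)  # expected splits: train / validation / test

sample = ds["train"][0]
original = sample["original_image"]          # PIL image of the harvested soybeans
segmentation = sample["segmentation_image"]  # PIL image with the color-coded class mask
print(original.size, segmentation.size)
```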
MohamedRashad/arabic-sts
--- dataset_info: features: - name: sentence1 dtype: string - name: sentence2 dtype: string - name: similarity_score dtype: float64 splits: - name: train num_bytes: 65534676 num_examples: 11571 - name: validation num_bytes: 16901650 num_examples: 2970 - name: test num_bytes: 11125564 num_examples: 2099 download_size: 46575015 dataset_size: 93561890 configs: - config_name: default data_files: - split: train path: data/train-* - split: validation path: data/validation-* - split: test path: data/test-* --- # The crisis in Gaza ๐Ÿ‡ต๐Ÿ‡ธ _In the time of writing this Dataset Card, **31,112** civilians has been killed in **Gaza** (2/3 of them are women, elderly and children)._ <center> <img src='https://cdn-uploads.huggingface.co/production/uploads/6116d0584ef9fdfbf45dc4d9/LmPdxvB2z5UYXJJ8DVZ17.png' width="60%"> </center> ## Dataset Description The Arabic Semantic Textual Similarity (Arabic-STS) dataset is a comprehensive resource designed to advance research in semantic similarity assessment for the Arabic language. This dataset is based on [arabic-billion-words](https://huggingface.co/datasets/MohamedRashad/arabic-billion-words) with the addition of [arabic-sts-benchmark](https://huggingface.co/datasets/gagan3012/Arabic-sts-benchmark), offering a diverse collection of sentence pairs along with their corresponding similarity scores. The dataset was meticulously crafted by the [c4ai-command-r-v01](https://huggingface.co/CohereForAI/c4ai-command-r-v01) language model from CohereForAI, ensuring high-quality and linguistically rich content. **Note:** The similarity between sentences in the **arabic-sts** dataset is scored using the **c4ai-command-r-v01** model. This model employs a random process for selecting two sentences from the **arabic-billion-words** corpus and computes their similarity scores. It is important to note that no human scoring is provided except in the `arabic-sts-benchmark` dataset, which was added to the `arabic-sts` dataset after its creation. ## Key Features: 1. **Extensive Coverage**: The Arabic-STS dataset boasts a substantial number of sentence pairs, providing a comprehensive representation of the Arabic language's semantic landscape. This extensive coverage enables researchers to explore a wide range of linguistic phenomena and develop robust semantic similarity models. 2. **Semantic Similarity Scores**: Each sentence pair in the dataset is accompanied by a carefully assigned semantic similarity score. These scores quantify the degree of semantic relatedness between the sentences, serving as a valuable ground truth for evaluating and refining semantic similarity algorithms. 3. **Diverse Sentence Pairs**: The dataset encompasses a diverse array of sentence pairs, spanning various domains, genres, and linguistic styles. This diversity ensures that the dataset captures the richness and complexity of the Arabic language, making it applicable to a broad range of real-world scenarios. 4. **Integration of Benchmark Data**: The Arabic-STS dataset further enhances its value by incorporating the `arabic-sts-benchmark`. This benchmark provides a standardized evaluation framework, allowing researchers to assess the performance of their semantic similarity models against established baselines and facilitating comparative analysis. 5. **High-Quality Language Model**: The dataset was generated using the c4ai-command-r-v01 language model from CohereForAI, a state-of-the-art AI system renowned for its linguistic capabilities. 
This ensures that the sentence pairs and similarity scores are of exceptional quality, reflecting the nuances and intricacies of the Arabic language. ## Potential Use Cases: 1. **Semantic Similarity Research**: The Arabic-STS dataset serves as a valuable resource for researchers investigating semantic similarity in the Arabic language. It enables the development and evaluation of novel algorithms, models, and approaches for assessing the semantic relatedness between sentences. 2. **Natural Language Processing Applications**: The dataset can be leveraged in various natural language processing applications, such as text classification, information retrieval, question answering, and text summarization. By incorporating semantic similarity measures, these applications can achieve enhanced performance and provide more accurate results. 3. **Arabic Language Understanding**: The Arabic-STS dataset contributes to the broader field of Arabic language understanding. It offers insights into the semantic structure of the language and can be used to explore linguistic phenomena, such as synonymy, polysemy, and contextual meaning. 4. **Cross-Lingual Studies**: By comparing the Arabic-STS dataset with similar datasets in other languages, researchers can conduct cross-lingual studies to investigate the universality and language-specific aspects of semantic similarity. This can lead to the development of more effective multilingual natural language processing systems. ## Acknowledgments: I would like to express my gratitude to the CohereForAI team for providing the c4ai-command-r-v01 language model, which played a crucial role in the creation of the Arabic-STS dataset. I also acknowledge the contributors of the `arabic-billion-words` corpus and the `arabic-sts-benchmark` for their valuable resources that enriched this dataset. Their efforts have significantly advanced the field of Arabic natural language processing and semantic similarity research.
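A minimal loading sketch, using the column and split names from the metadata block above:

```python
from datasets import load_dataset

ds = load_dataset("MohamedRashad/arabic-sts")
print(ds)  # train / validation / test splits

row = ds["train"][0]
print(row["sentence1"])
print(row["sentence2"])
print("similarity:", row["similarity_score"])
```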
Bingsu/KSS_Dataset
--- annotations_creators: - expert-generated language_creators: - expert-generated language: - ko license: - cc-by-nc-sa-4.0 multilinguality: - monolingual pretty_name: Korean Single Speaker Speech Dataset size_categories: - 10K<n<100K source_datasets: - original task_categories: - text-to-speech task_ids: [] --- ## Dataset Description - **Homepage:** [Korean Single Speaker Speech Dataset](https://www.kaggle.com/datasets/bryanpark/korean-single-speaker-speech-dataset) - **Repository:** [Kyubyong/kss](https://github.com/Kyubyong/kss) - **Paper:** N/A - **Leaderboard:** N/A - **Point of Contact:** N/A # Description of the original author ### KSS Dataset: Korean Single speaker Speech Dataset KSS Dataset is designed for the Korean text-to-speech task. It consists of audio files recorded by a professional female voice actoress and their aligned text extracted from my books. As a copyright holder, by courtesy of the publishers, I release this dataset to the public. To my best knowledge, this is the first publicly available speech dataset for Korean. ### File Format Each line in `transcript.v.1.3.txt` is delimited by `|` into six fields. - A. Audio file path - B. Original script - C. Expanded script - D. Decomposed script - E. Audio duration (seconds) - F. English translation e.g., 1/1_0470.wav|์ €๋Š” ๋ณดํ†ต 20๋ถ„ ์ •๋„ ๋‚ฎ์ž ์„ ์žก๋‹ˆ๋‹ค.|์ €๋Š” ๋ณดํ†ต ์ด์‹ญ ๋ถ„ ์ •๋„ ๋‚ฎ์ž ์„ ์žก๋‹ˆ๋‹ค.|แ„Œแ…ฅแ„‚แ…ณแ†ซ แ„‡แ…ฉแ„แ…ฉแ†ผ แ„‹แ…ตแ„‰แ…ตแ†ธ แ„‡แ…ฎแ†ซ แ„Œแ…ฅแ†ผแ„ƒแ…ฉ แ„‚แ…กแ†ฝแ„Œแ…กแ†ทแ„‹แ…ณแ†ฏ แ„Œแ…กแ†ธแ„‚แ…ตแ„ƒแ…ก.|4.1|I usually take a nap for 20 minutes. ### Specification - Audio File Type: wav - Total Running Time: 12+ hours - Sample Rate: 44,100 KHZ - Number of Audio Files: 12,853 - Sources - |1| [Kyubyong Park, 500 Basic Korean Verbs, Tuttle Publishing, 2015.](https://www.amazon.com/500-Basic-Korean-Verbs-Comprehensive/dp/0804846057/ref=sr_1_1?s=books&ie=UTF8&qid=1522911616&sr=1-1&keywords=kyubyong+park)| - |2| [Kyubyong Park, 500 Basic Korean Adjectives 2nd Ed., Youkrak, 2015.](http://www.hanbooks.com/500bakoad.html)| - |3| [Kyubyong Park, Essential Korean Vocabulary, Tuttle Publishing, 2015.](https://www.amazon.com/Essential-Korean-Vocabulary-Phrases-Fluently/dp/0804843252/ref=sr_1_3?s=books&ie=UTF8&qid=1522911806&sr=1-3&keywords=kyubyong+park)| - |4| [Kyubyong Park, Tuttle Learner's Korean-English Dictionary, Tuttle Publishing, 2012.](https://www.amazon.com/Tuttle-Learners-Korean-English-Dictionary-Essential/dp/0804841500/ref=sr_1_8?s=books&ie=UTF8&qid=1522911806&sr=1-8&keywords=kyubyong+park)| ### License NC-SA 4.0. You CANNOT use this dataset for ANY COMMERCIAL purpose. Otherwise, you can freely use this. ### Citation If you want to cite KSS Dataset, please refer to this: Kyubyong Park, KSS Dataset: Korean Single speaker Speech Dataset, https://kaggle.com/bryanpark/korean-single-speaker-speech-dataset, 2018 ### Reference Check out [this](https://github.com/Kyubyong/kss) for a project using this KSS Dataset. ### Contact You can contact me at kbpark.linguist@gmail.com. April, 2018. Kyubyong Park ### Dataset Summary 12,853 Korean audio files with transcription. 
### Supported Tasks and Leaderboards text-to-speech ### Languages korean ## Dataset Structure ### Data Instances ```python >>> from datasets import load_dataset >>> dataset = load_dataset("Bingsu/KSS_Dataset") >>> dataset["train"].features {'audio': Audio(sampling_rate=44100, mono=True, decode=True, id=None), 'original_script': Value(dtype='string', id=None), 'expanded_script': Value(dtype='string', id=None), 'decomposed_script': Value(dtype='string', id=None), 'duration': Value(dtype='float32', id=None), 'english_translation': Value(dtype='string', id=None)} ``` ```python >>> dataset["train"][0] {'audio': {'path': None, 'array': array([ 0.00000000e+00, 3.05175781e-05, -4.57763672e-05, ..., 0.00000000e+00, -3.05175781e-05, -3.05175781e-05]), 'sampling_rate': 44100}, 'original_script': '๊ทธ๋Š” ๊ดœ์ฐฎ์€ ์ฒ™ํ•˜๋ ค๊ณ  ์• ์“ฐ๋Š” ๊ฒƒ ๊ฐ™์•˜๋‹ค.', 'expanded_script': '๊ทธ๋Š” ๊ดœ์ฐฎ์€ ์ฒ™ํ•˜๋ ค๊ณ  ์• ์“ฐ๋Š” ๊ฒƒ ๊ฐ™์•˜๋‹ค.', 'decomposed_script': 'แ„€แ…ณแ„‚แ…ณแ†ซ แ„€แ…ซแ†ซแ„Žแ…กแ†ญแ„‹แ…ณแ†ซ แ„Žแ…ฅแ†จแ„’แ…กแ„…แ…งแ„€แ…ฉ แ„‹แ…ขแ„Šแ…ณแ„‚แ…ณแ†ซ แ„€แ…ฅแ†บ แ„€แ…กแ‡€แ„‹แ…กแ†ปแ„ƒแ…ก.', 'duration': 3.5, 'english_translation': 'He seemed to be pretending to be okay.'} ``` ### Data Splits | | train | |---------------|------:| | # of examples | 12853 |
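As a quick sanity check of the 12+ hour running time quoted above, the per-clip `duration` column can be summed without decoding any audio (a small sketch):

```python
from datasets import load_dataset

ds = load_dataset("Bingsu/KSS_Dataset", split="train")

# Sum the per-clip durations (in seconds); the audio itself is never decoded here.
total_seconds = sum(ds["duration"])
print(f"{len(ds)} clips, {total_seconds / 3600:.1f} hours in total")
```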
ett
--- annotations_creators: - no-annotation language_creators: - found language: [] license: - cc-by-4.0 multilinguality: - monolingual pretty_name: Electricity Transformer Temperature size_categories: - 1K<n<10K source_datasets: - original task_categories: - time-series-forecasting task_ids: - univariate-time-series-forecasting - multivariate-time-series-forecasting dataset_info: - config_name: h1 features: - name: start dtype: timestamp[s] - name: target sequence: float32 - name: feat_static_cat sequence: uint64 - name: feat_dynamic_real sequence: sequence: float32 - name: item_id dtype: string splits: - name: train num_bytes: 241978 num_examples: 1 - name: test num_bytes: 77508960 num_examples: 240 - name: validation num_bytes: 33916080 num_examples: 120 download_size: 2589657 dataset_size: 111667018 - config_name: h2 features: - name: start dtype: timestamp[s] - name: target sequence: float32 - name: feat_static_cat sequence: uint64 - name: feat_dynamic_real sequence: sequence: float32 - name: item_id dtype: string splits: - name: train num_bytes: 241978 num_examples: 1 - name: test num_bytes: 77508960 num_examples: 240 - name: validation num_bytes: 33916080 num_examples: 120 download_size: 2417960 dataset_size: 111667018 - config_name: m1 features: - name: start dtype: timestamp[s] - name: target sequence: float32 - name: feat_static_cat sequence: uint64 - name: feat_dynamic_real sequence: sequence: float32 - name: item_id dtype: string splits: - name: train num_bytes: 967738 num_examples: 1 - name: test num_bytes: 1239008640 num_examples: 960 - name: validation num_bytes: 542089920 num_examples: 480 download_size: 10360719 dataset_size: 1782066298 - config_name: m2 features: - name: start dtype: timestamp[s] - name: target sequence: float32 - name: feat_static_cat sequence: uint64 - name: feat_dynamic_real sequence: sequence: float32 - name: item_id dtype: string splits: - name: train num_bytes: 967738 num_examples: 1 - name: test num_bytes: 1239008640 num_examples: 960 - name: validation num_bytes: 542089920 num_examples: 480 download_size: 9677236 dataset_size: 1782066298 --- # Dataset Card for [Electricity Transformer Temperature](https://github.com/zhouhaoyi/ETDataset) ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Electricity Transformer Dataset](https://github.com/zhouhaoyi/ETDataset) - **Repository:** https://github.com/zhouhaoyi/ETDataset - **Paper:** [Informer: Beyond Efficient Transformer for Long Sequence Time-Series 
Forecasting](https://arxiv.org/abs/2012.07436) - **Point of Contact:** [Haoyi Zhou](mailto:zhouhy@act.buaa.edu.cn) ### Dataset Summary The electric power distribution problem is the distribution of electricity to different areas depending on its sequential usage. But predicting the future demand of a specific area is difficult, as it varies with weekdays, holidays, seasons, weather, temperatures, etc. However, no existing method can perform a long-term prediction based on super long-term real-world data with high precision. Any false predictions may damage the electrical transformer. So currently, without an efficient method to predict future electric usage, managers have to make decisions based on the empirical number, which is much higher than the real-world demands. It causes unnecessary waste of electric and equipment depreciation. On the other hand, the oil temperatures can reflect the condition of the Transformer. One of the most efficient strategies is to predict how the electrical transformers' oil temperature is safe and avoid unnecessary waste. As a result, to address this problem, the authors and Beijing Guowang Fuda Science & Technology Development Company have provided 2-years worth of data. Specifically, the dataset combines short-term periodical patterns, long-term periodical patterns, long-term trends, and many irregular patterns. The dataset are obtained from 2 Electricity Transformers at 2 stations and come in an `1H` (hourly) or `15T` (15-minute) frequency containing 2 year * 365 days * 24 hours * (4 for 15T) times = 17,520 (70,080 for 15T) data points. The target time series is the **O**il **T**emperature and the dataset comes with the following 6 covariates in the univariate setup: * **H**igh **U**se**F**ul **L**oad * **H**igh **U**se**L**ess **L**oad * **M**iddle **U**se**F**ul **L**oad * **M**iddle **U**se**L**ess **L**oad * **L**ow **U**se**F**ul **L**oad * **L**ow **U**se**L**ess **L**oad ### Dataset Usage To load a particular variant of the dataset just specify its name e.g: ```python load_dataset("ett", "m1", multivariate=False) # univariate 15-min frequency dataset from first transformer ``` or to specify a prediction length: ```python load_dataset("ett", "h2", prediction_length=48) # multivariate dataset from second transformer with prediction length of 48 (hours) ``` ### Supported Tasks and Leaderboards The time series data is split into train/val/test set of 12/4/4 months respectively. Given the prediction length (default: 1 day (24 hours or 24*4 15T)) we create rolling windows of this size for the val/test sets. #### `time-series-forecasting` ##### `univariate-time-series-forecasting` The univariate time series forecasting tasks involves learning the future one dimensional `target` values of a time series in a dataset for some `prediction_length` time steps. The performance of the forecast models can then be validated via the ground truth in the `validation` split and tested via the `test` split. The covriates are stored in the `feat_dynamic_real` key of each time series. ##### `multivariate-time-series-forecasting` The multivariate time series forecasting task involves learning the future vector of `target` values of a time series in a dataset for some `prediction_length` time steps. Similar to the univariate setting the performance of a multivariate model can be validated via the ground truth in the `validation` split and tested via the `test` split. 
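To make the rolling-window split above concrete, a small sketch that loads the hourly `h1` config and compares the number of series and the target lengths per split (the `prediction_length` argument is used exactly as in the usage examples above):

```python
from datasets import load_dataset

ds = load_dataset("ett", "h1", prediction_length=24)

for split in ("train", "validation", "test"):
    lengths = [len(item["target"]) for item in ds[split]]
    print(split, len(ds[split]), "series; target length", min(lengths), "to", max(lengths))
```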
### Languages ## Dataset Structure ### Data Instances A sample from the training set is provided below: ```python { 'start': datetime.datetime(2012, 1, 1, 0, 0), 'target': [14.0, 18.0, 21.0, 20.0, 22.0, 20.0, ...], 'feat_static_cat': [0], 'feat_dynamic_real': [[0.3, 0.4], [0.1, 0.6], ...], 'item_id': 'OT' } ``` ### Data Fields For the univariate regular time series each series has the following keys: * `start`: a datetime of the first entry of each time series in the dataset * `target`: an array[float32] of the actual target values * `feat_static_cat`: an array[uint64] which contains a categorical identifier of each time series in the dataset * `feat_dynamic_real`: optional array of covariate features * `item_id`: a string identifier of each time series in a dataset for reference For the multivariate time series the `target` is a vector of the multivariate dimension for each time point. ### Data Splits The time series data is split into train/val/test set of 12/4/4 months respectively. ## Dataset Creation ### Curation Rationale Develop time series methods that can perform a long-term prediction based on super long-term real-world data with high precision. ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators * [Haoyi Zhou](mailto:zhouhy@act.buaa.edu.cn) ### Licensing Information [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0/legalcode) ### Citation Information ```tex @inproceedings{haoyietal-informer-2021, author = {Haoyi Zhou and Shanghang Zhang and Jieqi Peng and Shuai Zhang and Jianxin Li and Hui Xiong and Wancai Zhang}, title = {Informer: Beyond Efficient Transformer for Long Sequence Time-Series Forecasting}, booktitle = {The Thirty-Fifth {AAAI} Conference on Artificial Intelligence, {AAAI} 2021, Virtual Conference}, volume = {35}, number = {12}, pages = {11106--11115}, publisher = {{AAAI} Press}, year = {2021}, } ``` ### Contributions Thanks to [@kashif](https://github.com/kashif) for adding this dataset.
mteb/raw_biorxiv
--- language: - en ---
lmqg/qg_koquad
--- license: cc-by-4.0 pretty_name: KorQuAD for question generation language: ko multilinguality: monolingual size_categories: 10K<n<100K source_datasets: squad_es task_categories: - text-generation task_ids: - language-modeling tags: - question-generation --- # Dataset Card for "lmqg/qg_korquad" ## Dataset Description - **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation) - **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992) - **Point of Contact:** [Asahi Ushio](http://asahiushio.com/) ### Dataset Summary This is a subset of [QG-Bench](https://github.com/asahi417/lm-question-generation/blob/master/QG_BENCH.md#datasets), a unified question generation benchmark proposed in ["Generative Language Models for Paragraph-Level Question Generation: A Unified Benchmark and Evaluation, EMNLP 2022 main conference"](https://arxiv.org/abs/2210.03992). This is a modified version of [KorQuAD](https://huggingface.co/datasets/squad_kor_v1) for question generation (QG) task. Since the original dataset only contains training/validation set, we manually sample test set from training set, which has no overlap in terms of the paragraph with the training set. ### Supported Tasks and Leaderboards * `question-generation`: The dataset is assumed to be used to train a model for question generation. Success on this task is typically measured by achieving a high BLEU4/METEOR/ROUGE-L/BERTScore/MoverScore (see our paper for more in detail). ### Languages Korean (ko) ## Dataset Structure An example of 'train' looks as follows. ``` { "question": "ํ•จ์ˆ˜ํ•ด์„ํ•™์ด ์ฃผ๋ชฉํ•˜๋Š” ํƒ๊ตฌ๋Š”?", "paragraph": "๋ณ€ํ™”์— ๋Œ€ํ•œ ์ดํ•ด์™€ ๋ฌ˜์‚ฌ๋Š” ์ž์—ฐ๊ณผํ•™์— ์žˆ์–ด์„œ ์ผ๋ฐ˜์ ์ธ ์ฃผ์ œ์ด๋ฉฐ, ๋ฏธ์ ๋ถ„ํ•™์€ ๋ณ€ํ™”๋ฅผ ํƒ๊ตฌํ•˜๋Š” ๊ฐ•๋ ฅํ•œ ๋„๊ตฌ๋กœ์„œ ๋ฐœ์ „๋˜์—ˆ๋‹ค. ํ•จ์ˆ˜๋Š” ๋ณ€ํ™”ํ•˜๋Š” ์–‘์„ ๋ฌ˜์‚ฌํ•จ์— ์žˆ์–ด์„œ ์ค‘์ถ”์ ์ธ ๊ฐœ๋…์œผ๋กœ์จ ๋– ์˜ค๋ฅด๊ฒŒ ๋œ๋‹ค. ์‹ค์ˆ˜์™€ ์‹ค๋ณ€์ˆ˜๋กœ ๊ตฌ์„ฑ๋œ ํ•จ์ˆ˜์˜ ์—„๋ฐ€ํ•œ ํƒ๊ตฌ๊ฐ€ ์‹คํ•ด์„ํ•™์ด๋ผ๋Š” ๋ถ„์•ผ๋กœ ์•Œ๋ ค์ง€๊ฒŒ ๋˜์—ˆ๊ณ , ๋ณต์†Œ์ˆ˜์— ๋Œ€ํ•œ ์ด์™€ ๊ฐ™์€ ํƒ๊ตฌ๋ถ„์•ผ๋Š” ๋ณต์†Œํ•ด์„ํ•™์ด๋ผ๊ณ  ํ•œ๋‹ค. ํ•จ์ˆ˜ํ•ด์„ํ•™์€ ํ•จ์ˆ˜์˜ ๊ณต๊ฐ„(ํŠนํžˆ ๋ฌดํ•œ์ฐจ์›)์˜ ํƒ๊ตฌ์— ์ฃผ๋ชฉํ•œ๋‹ค. ํ•จ์ˆ˜ํ•ด์„ํ•™์˜ ๋งŽ์€ ์‘์šฉ๋ถ„์•ผ ์ค‘ ํ•˜๋‚˜๊ฐ€ ์–‘์ž์—ญํ•™์ด๋‹ค. ๋งŽ์€ ๋ฌธ์ œ๋“ค์ด ์ž์—ฐ์Šค๋Ÿฝ๊ฒŒ ์–‘๊ณผ ๊ทธ ์–‘์˜ ๋ณ€ํ™”์œจ์˜ ๊ด€๊ณ„๋กœ ๊ท€์ฐฉ๋˜๊ณ , ์ด๋Ÿฌํ•œ ๋ฌธ์ œ๋“ค์ด ๋ฏธ๋ถ„๋ฐฉ์ •์‹์œผ๋กœ ๋‹ค๋ฃจ์–ด์ง„๋‹ค. ์ž์—ฐ์˜ ๋งŽ์€ ํ˜„์ƒ๋“ค์ด ๋™์—ญํ•™๊ณ„๋กœ ๊ธฐ์ˆ ๋  ์ˆ˜ ์žˆ๋‹ค. ํ˜ผ๋ˆ ์ด๋ก ์€ ์ด๋Ÿฌํ•œ ์˜ˆ์ธก ๋ถˆ๊ฐ€๋Šฅํ•œ ํ˜„์ƒ์„ ํƒ๊ตฌํ•˜๋Š” ๋ฐ ์ƒ๋‹นํ•œ ๊ธฐ์—ฌ๋ฅผ ํ•œ๋‹ค.", "answer": "ํ•จ์ˆ˜์˜ ๊ณต๊ฐ„(ํŠนํžˆ ๋ฌดํ•œ์ฐจ์›)์˜ ํƒ๊ตฌ", "sentence": "ํ•จ์ˆ˜ํ•ด์„ํ•™์€ ํ•จ์ˆ˜์˜ ๊ณต๊ฐ„(ํŠนํžˆ ๋ฌดํ•œ์ฐจ์›)์˜ ํƒ๊ตฌ ์— ์ฃผ๋ชฉํ•œ๋‹ค.", "paragraph_sentence": '๋ณ€ํ™”์— ๋Œ€ํ•œ ์ดํ•ด์™€ ๋ฌ˜์‚ฌ๋Š” ์ž์—ฐ๊ณผํ•™์— ์žˆ์–ด์„œ ์ผ๋ฐ˜์ ์ธ ์ฃผ์ œ์ด๋ฉฐ, ๋ฏธ์ ๋ถ„ํ•™์€ ๋ณ€ํ™”๋ฅผ ํƒ๊ตฌํ•˜๋Š” ๊ฐ•๋ ฅํ•œ ๋„๊ตฌ๋กœ์„œ ๋ฐœ์ „๋˜์—ˆ๋‹ค. ํ•จ์ˆ˜๋Š” ๋ณ€ํ™”ํ•˜๋Š” ์–‘์„ ๋ฌ˜์‚ฌํ•จ์— ์žˆ์–ด์„œ ์ค‘์ถ”์ ์ธ ๊ฐœ๋…์œผ๋กœ์จ ๋– ์˜ค๋ฅด๊ฒŒ ๋œ๋‹ค. ์‹ค์ˆ˜์™€ ์‹ค๋ณ€์ˆ˜๋กœ ๊ตฌ์„ฑ๋œ ํ•จ์ˆ˜์˜ ์—„๋ฐ€ํ•œ ํƒ๊ตฌ๊ฐ€ ์‹คํ•ด์„ํ•™์ด๋ผ๋Š” ๋ถ„์•ผ๋กœ ์•Œ๋ ค์ง€๊ฒŒ ๋˜์—ˆ๊ณ , ๋ณต์†Œ์ˆ˜์— ๋Œ€ํ•œ ์ด์™€ ๊ฐ™์€ ํƒ๊ตฌ ๋ถ„์•ผ๋Š” ๋ณต์†Œํ•ด์„ํ•™์ด๋ผ๊ณ  ํ•œ๋‹ค. <hl> ํ•จ์ˆ˜ํ•ด์„ํ•™์€ ํ•จ์ˆ˜์˜ ๊ณต๊ฐ„(ํŠนํžˆ ๋ฌดํ•œ์ฐจ์›)์˜ ํƒ๊ตฌ ์— ์ฃผ๋ชฉํ•œ๋‹ค. <hl> ํ•จ์ˆ˜ํ•ด์„ํ•™์˜ ๋งŽ์€ ์‘์šฉ๋ถ„์•ผ ์ค‘ ํ•˜๋‚˜๊ฐ€ ์–‘์ž์—ญํ•™์ด๋‹ค. 
๋งŽ์€ ๋ฌธ์ œ๋“ค์ด ์ž์—ฐ์Šค๋Ÿฝ๊ฒŒ ์–‘๊ณผ ๊ทธ ์–‘์˜ ๋ณ€ํ™”์œจ์˜ ๊ด€๊ณ„๋กœ ๊ท€์ฐฉ๋˜๊ณ , ์ด๋Ÿฌํ•œ ๋ฌธ์ œ๋“ค์ด ๋ฏธ๋ถ„๋ฐฉ์ •์‹์œผ๋กœ ๋‹ค๋ฃจ์–ด์ง„๋‹ค. ์ž์—ฐ์˜ ๋งŽ์€ ํ˜„์ƒ๋“ค์ด ๋™์—ญํ•™๊ณ„๋กœ ๊ธฐ์ˆ ๋  ์ˆ˜ ์žˆ๋‹ค. ํ˜ผ๋ˆ ์ด๋ก ์€ ์ด๋Ÿฌํ•œ ์˜ˆ์ธก ๋ถˆ๊ฐ€๋Šฅํ•œ ํ˜„์ƒ์„ ํƒ๊ตฌํ•˜๋Š” ๋ฐ ์ƒ๋‹นํ•œ ๊ธฐ์—ฌ๋ฅผ ํ•œ๋‹ค.', "paragraph_answer": '๋ณ€ํ™”์— ๋Œ€ํ•œ ์ดํ•ด์™€ ๋ฌ˜์‚ฌ๋Š” ์ž์—ฐ๊ณผํ•™์— ์žˆ์–ด์„œ ์ผ๋ฐ˜์ ์ธ ์ฃผ์ œ์ด๋ฉฐ, ๋ฏธ์ ๋ถ„ํ•™์€ ๋ณ€ํ™”๋ฅผ ํƒ๊ตฌํ•˜๋Š” ๊ฐ•๋ ฅํ•œ ๋„๊ตฌ๋กœ์„œ ๋ฐœ์ „๋˜์—ˆ๋‹ค. ํ•จ์ˆ˜๋Š” ๋ณ€ํ™”ํ•˜๋Š” ์–‘์„ ๋ฌ˜์‚ฌํ•จ์— ์žˆ์–ด์„œ ์ค‘์ถ”์ ์ธ ๊ฐœ๋…์œผ๋กœ์จ ๋– ์˜ค๋ฅด๊ฒŒ ๋œ๋‹ค. ์‹ค์ˆ˜์™€ ์‹ค๋ณ€์ˆ˜๋กœ ๊ตฌ์„ฑ๋œ ํ•จ์ˆ˜์˜ ์—„๋ฐ€ํ•œ ํƒ๊ตฌ๊ฐ€ ์‹คํ•ด์„ํ•™์ด๋ผ๋Š” ๋ถ„์•ผ๋กœ ์•Œ๋ ค์ง€๊ฒŒ ๋˜์—ˆ๊ณ , ๋ณต์†Œ์ˆ˜์— ๋Œ€ํ•œ ์ด์™€ ๊ฐ™์€ ํƒ๊ตฌ ๋ถ„์•ผ๋Š” ๋ณต์†Œํ•ด์„ํ•™์ด๋ผ๊ณ  ํ•œ๋‹ค. ํ•จ์ˆ˜ํ•ด์„ํ•™์€ <hl> ํ•จ์ˆ˜์˜ ๊ณต๊ฐ„(ํŠนํžˆ ๋ฌดํ•œ์ฐจ์›)์˜ ํƒ๊ตฌ <hl>์— ์ฃผ๋ชฉํ•œ๋‹ค. ํ•จ์ˆ˜ํ•ด์„ํ•™์˜ ๋งŽ์€ ์‘์šฉ๋ถ„์•ผ ์ค‘ ํ•˜๋‚˜๊ฐ€ ์–‘์ž์—ญํ•™์ด๋‹ค. ๋งŽ์€ ๋ฌธ์ œ๋“ค์ด ์ž์—ฐ์Šค๋Ÿฝ๊ฒŒ ์–‘๊ณผ ๊ทธ ์–‘์˜ ๋ณ€ํ™”์œจ์˜ ๊ด€๊ณ„๋กœ ๊ท€์ฐฉ๋˜๊ณ , ์ด๋Ÿฌํ•œ ๋ฌธ์ œ๋“ค์ด ๋ฏธ๋ถ„๋ฐฉ์ •์‹์œผ๋กœ ๋‹ค๋ฃจ์–ด์ง„๋‹ค. ์ž์—ฐ์˜ ๋งŽ์€ ํ˜„์ƒ๋“ค์ด ๋™์—ญํ•™๊ณ„๋กœ ๊ธฐ์ˆ ๋  ์ˆ˜ ์žˆ๋‹ค. ํ˜ผ๋ˆ ์ด๋ก ์€ ์ด๋Ÿฌํ•œ ์˜ˆ์ธก ๋ถˆ๊ฐ€๋Šฅํ•œ ํ˜„์ƒ์„ ํƒ๊ตฌํ•˜๋Š” ๋ฐ ์ƒ๋‹นํ•œ ๊ธฐ์—ฌ๋ฅผ ํ•œ๋‹ค.', "sentence_answer": "ํ•จ์ˆ˜ํ•ด์„ํ•™์€ <hl> ํ•จ์ˆ˜์˜ ๊ณต๊ฐ„(ํŠนํžˆ ๋ฌดํ•œ์ฐจ์›)์˜ ํƒ๊ตฌ <hl> ์— ์ฃผ๋ชฉํ•œ๋‹ค." } ``` The data fields are the same among all splits. - `question`: a `string` feature. - `paragraph`: a `string` feature. - `answer`: a `string` feature. - `sentence`: a `string` feature. - `paragraph_answer`: a `string` feature, which is same as the paragraph but the answer is highlighted by a special token `<hl>`. - `paragraph_sentence`: a `string` feature, which is same as the paragraph but a sentence containing the answer is highlighted by a special token `<hl>`. - `sentence_answer`: a `string` feature, which is same as the sentence but the answer is highlighted by a special token `<hl>`. Each of `paragraph_answer`, `paragraph_sentence`, and `sentence_answer` feature is assumed to be used to train a question generation model, but with different information. The `paragraph_answer` and `sentence_answer` features are for answer-aware question generation and `paragraph_sentence` feature is for sentence-aware question generation. ## Data Splits |train|validation|test | |----:|---------:|----:| |54556| 5766 |5766 | ## Citation Information ``` @inproceedings{ushio-etal-2022-generative, title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration: {A} {U}nified {B}enchmark and {E}valuation", author = "Ushio, Asahi and Alva-Manchego, Fernando and Camacho-Collados, Jose", booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2022", address = "Abu Dhabi, U.A.E.", publisher = "Association for Computational Linguistics", } ```
NLPC-UOM/Travel-Dataset-5000
---
language:
- en
license:
- mit
---

This question base consists of 5,000 travel-domain questions annotated under a taxonomy related to the travel domain. The taxonomy is hierarchical, with two levels of 7 coarse classes and 63 fine classes. The 5000TravelQuestionsDataset.xlsx file contains the annotated question base together with the taxonomy. If you only need the question base, use the 5000TravelQuestionsDataset.csv file.

If you use this dataset in your research work, cite it as:

Kahaduwa, H., Pathirana, D., Arachchi, P.L., Dias, V., Ranathunga, S. and Kohomban, U., 2017, May. Question Answering system for the travel domain. In Engineering Research Conference (MERCon), 2017 Moratuwa (pp. 449-454). IEEE.

If you need further clarification, please contact us at the following email addresses.

- Pathum: pathum.12@cse.mrt.ac.lk
- Dilshan: pathirana.12@cse.mrt.ac.lk
- Hasangi: hasangik.12@cse.mrt.ac.lk
- Vishma: vishma.12@cse.mrt.ac.lk
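A minimal sketch of pulling the CSV question base from this repository with `huggingface_hub` and loading it with pandas. It assumes that 5000TravelQuestionsDataset.csv sits at the root of the dataset repository and is a standard comma-separated file; the column names are not documented in this card, so the sketch simply inspects them.

```python
import pandas as pd
from huggingface_hub import hf_hub_download

# Assumes the CSV lives at the root of the NLPC-UOM/Travel-Dataset-5000 dataset repo.
csv_path = hf_hub_download(
    repo_id="NLPC-UOM/Travel-Dataset-5000",
    filename="5000TravelQuestionsDataset.csv",
    repo_type="dataset",
)

df = pd.read_csv(csv_path)
print(df.shape)    # expected: 5000 rows
print(df.columns)  # column names are not documented in the card, so inspect them first
print(df.head())
```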
sileod/wikimedqa
--- license: apache-2.0 task_categories: - text-classification - multiple-choice language: - en tags: - medical --- ```bib @article{sileo2023wikimedqa, title={Generating multiple-choice questions for medical question answering with distractors and cue-masking}, author={Sileo, Damien and Uma, Kanimozhi and Moens, Marie-Francine}, journal={arXiv preprint arXiv:2303.07069 }, year={2023} } ```
huggingface-projects/color-palettes-sd
--- license: cc-by-4.0 ---
amanneo/enron-mail-corpus-mini
--- dataset_info: features: - name: text dtype: string - name: mail_length dtype: int64 splits: - name: test num_bytes: 205837.52311697626 num_examples: 4000 - name: train num_bytes: 1852537.7080527863 num_examples: 36000 download_size: 2332694 dataset_size: 2058375.2311697626 --- # Dataset Card for "enron-mail-corpus-mini" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
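Going by the `dataset_info` above (a `text` string and a `mail_length` integer per example, with `train` and `test` splits), a minimal loading sketch might look like this:

```python
from datasets import load_dataset

ds = load_dataset("amanneo/enron-mail-corpus-mini")

print(ds)                      # expected splits: train (36,000) and test (4,000)
example = ds["train"][0]
print(example["mail_length"])  # integer length recorded for the mail
print(example["text"][:200])   # first characters of the mail body
```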
anab/copa-sse
--- annotations_creators: - crowdsourced language: - en language_creators: - crowdsourced license: - mit multilinguality: - monolingual pretty_name: Semi-structured Explanations for Commonsense Reasoning size_categories: - 1K<n<10K source_datasets: [] tags: - commonsense reasoning - explanation - graph-based reasoning task_categories: - text2text-generation - multiple-choice task_ids: - explanation-generation --- # Dataset Card for COPA-SSE ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://github.com/a-brassard/copa-sse - **Paper:** [COPA-SSE: Semi-Structured Explanations for Commonsense Reasoning](https://arxiv.org/abs/2201.06777) - **Point of Contact:** [Ana Brassard](mailto:ana.brassard@riken.jp) ### Dataset Summary ![Crowdsourcing protocol](crowdsourcing_protocol.png) COPA-SSE contains crowdsourced explanations for the [Balanced COPA](https://balanced-copa.github.io/) dataset, a variant of the [Choice of Plausible Alternatives (COPA)](https://people.ict.usc.edu/~gordon/copa.html) benchmark. The explanations are formatted as a set of triple-like common sense statements with [ConceptNet](https://conceptnet.io/) relations but freely written concepts. ### Supported Tasks and Leaderboards Can be used to train a model for explain+predict or predict+explain settings. Suited for both text-based and graph-based architectures. Base task is COPA (causal QA). ### Languages English ## Dataset Structure ### Data Instances Validation and test set each contains Balanced COPA samples with added explanations in `.jsonl` format. The question ids match the original questions of the Balanced COPA validation and test sets, respectively. 
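Since the instances ship as `.jsonl` files, a minimal reading sketch is shown below. The file name is a placeholder (point it at whichever split file you obtained from the project repository), and the field names follow the example entry given in the Data Fields section that follows.

```python
import json

# Placeholder path -- point this at the COPA-SSE .jsonl file you downloaded (e.g. the dev split).
path = "copa-sse-dev.jsonl"

with open(path, encoding="utf-8") as f:
    instances = [json.loads(line) for line in f]

ex = instances[0]
print(ex["p"], "|", ex["a1"], "|", ex["a2"])  # premise and the two alternatives
for expl in ex["human-explanations"]:
    print(expl["text"], expl["triples"])      # free-text explanation and its triple form
```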
### Data Fields Each entry contains: - the original question (matching format and ids) - `human-explanations`: a list of explanations each containing: - `expl-id`: the explanation id - `text`: the explanation in plain text (full sentences) - `worker-id`: anonymized worker id (the author of the explanation) - `worker-avg`: the average score the author got for their explanations - `all-ratings`: all collected ratings for the explanation - `filtered-ratings`: ratings excluding those that failed the control - `triples`: the triple-form explanation (a list of ConceptNet-like triples) Example entry: ``` id: 1, asks-for: cause, most-plausible-alternative: 1, p: "My body cast a shadow over the grass.", a1: "The sun was rising.", a2: "The grass was cut.", human-explanations: [ {expl-id: f4d9b407-681b-4340-9be1-ac044f1c2230, text: "Sunrise causes casted shadows.", worker-id: 3a71407b-9431-49f9-b3ca-1641f7c05f3b, worker-avg: 3.5832864694635025, all-ratings: [1, 3, 3, 4, 3], filtered-ratings: [3, 3, 4, 3], filtered-avg-rating: 3.25, triples: [["sunrise", "Causes", "casted shadows"]] }, ...] ``` ### Data Splits Follows original Balanced COPA split: 1000 dev and 500 test instances. Each instance has up to nine explanations. ## Dataset Creation ### Curation Rationale The goal was to collect human-written explanations to supplement an existing commonsense reasoning benchmark. The triple-like format was designed to support graph-based models and increase the overall data quality, the latter being notoriously lacking in freely-written crowdsourced text. ### Source Data #### Initial Data Collection and Normalization The explanations in COPA-SSE are fully crowdsourced via the Amazon Mechanical Turk platform. Workers entered explanations by providing one or more concept-relation-concept triples. The explanations were then rated by different annotators with one- to five-star ratings. The final dataset contains explanations with a range of quality ratings. Additional collection rounds guaranteed that each sample has at least one explanation rated 3.5 stars or higher. #### Who are the source language producers? The original COPA questions (500 dev+500 test) were initially hand-crafted by experts. Similarly, the additional 500 development samples in Balanced COPA were authored by a small team of NLP researchers. Finally, the added explanations and quality ratings in COPA-SSE were collected with the help of Amazon Mechanical Turk workers who passed initial qualification rounds. ### Annotations #### Annotation process Workers were shown a Balanced COPA question, its answer, and a short instructional text. Then, they filled in free-form text fields for head and tail concepts and selected the relation from a drop-down menu with a curated selection of ConceptNet relations. Each explanation was rated by five different workers who were shown the same question and answer with five candidate explanations. #### Who are the annotators? The workers were restricted to persons located in the U.S. or G.B., with a HIT approval of 98% or more, and 500 or more approved HITs. Their identity and further personal information are not available. ### Personal and Sensitive Information N/A ## Considerations for Using the Data ### Social Impact of Dataset Models trained to output similar explanations as those in COPA-SSE may not necessarily provide convincing or faithful explanations. Researchers should carefully evaluate the resulting explanations before considering any real-world applications. 
### Discussion of Biases COPA questions ask for causes or effects of everyday actions or interactions, some of them containing gendered language. Some explanations may reinforce harmful stereotypes if their reasoning is based on biased assumptions. These biases were not verified during collection. ### Other Known Limitations The data was originally intended to be explanation *graphs*, i.e., hypothetical "ideal" subgraphs of a commonsense knowledge graph. While they can still function as valid natural language explanations, their wording may be at times unnatural to a human and may be better suited for graph-based implementations. ## Additional Information ### Dataset Curators This work was authored by Ana Brassard, Benjamin Heinzerling, Pride Kavumba, and Kentaro Inui. All are both members of the Riken AIP Natural Language Understanding Team and the Tohoku NLP Lab under Tohoku University. ### Licensing Information COPA-SSE is released under the [MIT License](https://mit-license.org/). ### Citation Information ``` @InProceedings{copa-sse:LREC2022, author = {Brassard, Ana and Heinzerling, Benjamin and Kavumba, Pride and Inui, Kentaro}, title = {COPA-SSE: Semi-structured Explanations for Commonsense Reasoning}, booktitle = {Proceedings of the Language Resources and Evaluation Conference}, month = {June}, year = {2022}, address = {Marseille, France}, publisher = {European Language Resources Association}, pages = {3994--4000}, url = {https://aclanthology.org/2022.lrec-1.425} } ``` ### Contributions Thanks to [@a-brassard](https://github.com/a-brassard) for adding this dataset.
bookbot/ljspeech_phonemes
--- dataset_info: features: - name: id dtype: string - name: audio dtype: audio: sampling_rate: 22050 - name: file dtype: string - name: text dtype: string - name: normalized_text dtype: string - name: phonemes dtype: string splits: - name: train num_bytes: 3863152206.0 num_examples: 13100 download_size: 3787337731 dataset_size: 3863152206.0 --- # Dataset Card for "ljspeech_phonemes" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
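Based on the features listed above, a minimal sketch for loading the corpus and reading one example (audio array plus its phoneme transcription):

```python
from datasets import load_dataset

# Audio decoding needs the audio extras: pip install "datasets[audio]"
ds = load_dataset("bookbot/ljspeech_phonemes", split="train")

example = ds[0]
audio = example["audio"]  # dict with 'array' and 'sampling_rate' (22,050 Hz)
print(example["id"], example["phonemes"])
print(example["normalized_text"])
print(audio["sampling_rate"], len(audio["array"]))
```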
Cohere/wikipedia-22-12-ko-embeddings
--- language: - ko multilinguality: - multilingual size_categories: [] source_datasets: [] tags: [] task_categories: - text-retrieval license: - apache-2.0 task_ids: - document-retrieval --- # Wikipedia (ko) embedded with cohere.ai `multilingual-22-12` encoder We encoded [Wikipedia (ko)](https://ko.wikipedia.org) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model. To get an overview how this dataset was created and pre-processed, have a look at [Cohere/wikipedia-22-12](https://huggingface.co/datasets/Cohere/wikipedia-22-12). ## Embeddings We compute for `title+" "+text` the embeddings using our `multilingual-22-12` embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/). ## Further languages We provide embeddings of Wikipedia in many different languages: [ar](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ar-embeddings), [de](https://huggingface.co/datasets/Cohere/wikipedia-22-12-de-embeddings), [en](https://huggingface.co/datasets/Cohere/wikipedia-22-12-en-embeddings), [es](https://huggingface.co/datasets/Cohere/wikipedia-22-12-es-embeddings), [fr](https://huggingface.co/datasets/Cohere/wikipedia-22-12-fr-embeddings), [hi](https://huggingface.co/datasets/Cohere/wikipedia-22-12-hi-embeddings), [it](https://huggingface.co/datasets/Cohere/wikipedia-22-12-it-embeddings), [ja](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ja-embeddings), [ko](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ko-embeddings), [simple english](https://huggingface.co/datasets/Cohere/wikipedia-22-12-simple-embeddings), [zh](https://huggingface.co/datasets/Cohere/wikipedia-22-12-zh-embeddings), You can find the Wikipedia datasets without embeddings at [Cohere/wikipedia-22-12](https://huggingface.co/datasets/Cohere/wikipedia-22-12). 
## Loading the dataset You can either load the dataset like this: ```python from datasets import load_dataset docs = load_dataset(f"Cohere/wikipedia-22-12-ko-embeddings", split="train") ``` Or you can also stream it without downloading it before: ```python from datasets import load_dataset docs = load_dataset(f"Cohere/wikipedia-22-12-ko-embeddings", split="train", streaming=True) for doc in docs: docid = doc['id'] title = doc['title'] text = doc['text'] emb = doc['emb'] ``` ## Search A full search example: ```python #Run: pip install cohere datasets from datasets import load_dataset import torch import cohere co = cohere.Client(f"<<COHERE_API_KEY>>") # Add your cohere API key from www.cohere.com #Load at max 1000 documents + embeddings max_docs = 1000 docs_stream = load_dataset(f"Cohere/wikipedia-22-12-ko-embeddings", split="train", streaming=True) docs = [] doc_embeddings = [] for doc in docs_stream: docs.append(doc) doc_embeddings.append(doc['emb']) if len(docs) >= max_docs: break doc_embeddings = torch.tensor(doc_embeddings) query = 'Who founded Youtube' response = co.embed(texts=[query], model='multilingual-22-12') query_embedding = response.embeddings query_embedding = torch.tensor(query_embedding) # Compute dot score between query embedding and document embeddings dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1)) top_k = torch.topk(dot_scores, k=3) # Print results print("Query:", query) for doc_id in top_k.indices[0].tolist(): print(docs[doc_id]['title']) print(docs[doc_id]['text'], "\n") ``` ## Performance You can find performance on the MIRACL dataset (a semantic search evaluation dataset) here: [miracl-en-queries-22-12#performance](https://huggingface.co/datasets/Cohere/miracl-en-queries-22-12#performance)
Cohere/wikipedia-22-12-ar-embeddings
--- annotations_creators: - expert-generated language: - ar multilinguality: - multilingual size_categories: [] source_datasets: [] tags: [] task_categories: - text-retrieval license: - apache-2.0 task_ids: - document-retrieval --- # Wikipedia (ar) embedded with cohere.ai `multilingual-22-12` encoder We encoded [Wikipedia (ar)](https://ar.wikipedia.org) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model. To get an overview how this dataset was created and pre-processed, have a look at [Cohere/wikipedia-22-12](https://huggingface.co/datasets/Cohere/wikipedia-22-12). ## Embeddings We compute for `title+" "+text` the embeddings using our `multilingual-22-12` embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/). ## Further languages We provide embeddings of Wikipedia in many different languages: [ar](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ar-embeddings), [de](https://huggingface.co/datasets/Cohere/wikipedia-22-12-de-embeddings), [en](https://huggingface.co/datasets/Cohere/wikipedia-22-12-en-embeddings), [es](https://huggingface.co/datasets/Cohere/wikipedia-22-12-es-embeddings), [fr](https://huggingface.co/datasets/Cohere/wikipedia-22-12-fr-embeddings), [hi](https://huggingface.co/datasets/Cohere/wikipedia-22-12-hi-embeddings), [it](https://huggingface.co/datasets/Cohere/wikipedia-22-12-it-embeddings), [ja](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ja-embeddings), [ko](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ko-embeddings), [simple english](https://huggingface.co/datasets/Cohere/wikipedia-22-12-simple-embeddings), [zh](https://huggingface.co/datasets/Cohere/wikipedia-22-12-zh-embeddings), You can find the Wikipedia datasets without embeddings at [Cohere/wikipedia-22-12](https://huggingface.co/datasets/Cohere/wikipedia-22-12). 
## Loading the dataset You can either load the dataset like this: ```python from datasets import load_dataset docs = load_dataset(f"Cohere/wikipedia-22-12-ar-embeddings", split="train") ``` Or you can also stream it without downloading it before: ```python from datasets import load_dataset docs = load_dataset(f"Cohere/wikipedia-22-12-ar-embeddings", split="train", streaming=True) for doc in docs: docid = doc['id'] title = doc['title'] text = doc['text'] emb = doc['emb'] ``` ## Search A full search example: ```python #Run: pip install cohere datasets from datasets import load_dataset import torch import cohere co = cohere.Client(f"<<COHERE_API_KEY>>") # Add your cohere API key from www.cohere.com #Load at max 1000 documents + embeddings max_docs = 1000 docs_stream = load_dataset(f"Cohere/wikipedia-22-12-ar-embeddings", split="train", streaming=True) docs = [] doc_embeddings = [] for doc in docs_stream: docs.append(doc) doc_embeddings.append(doc['emb']) if len(docs) >= max_docs: break doc_embeddings = torch.tensor(doc_embeddings) query = 'Who founded Youtube' response = co.embed(texts=[query], model='multilingual-22-12') query_embedding = response.embeddings query_embedding = torch.tensor(query_embedding) # Compute dot score between query embedding and document embeddings dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1)) top_k = torch.topk(dot_scores, k=3) # Print results print("Query:", query) for doc_id in top_k.indices[0].tolist(): print(docs[doc_id]['title']) print(docs[doc_id]['text'], "\n") ``` ## Performance You can find performance on the MIRACL dataset (a semantic search evaluation dataset) here: [miracl-en-queries-22-12#performance](https://huggingface.co/datasets/Cohere/miracl-en-queries-22-12#performance)
Cohere/wikipedia-22-12-fr-embeddings
--- annotations_creators: - expert-generated language: - fr multilinguality: - multilingual size_categories: [] source_datasets: [] tags: [] task_categories: - text-retrieval license: - apache-2.0 task_ids: - document-retrieval --- # Wikipedia (fr) embedded with cohere.ai `multilingual-22-12` encoder We encoded [Wikipedia (fr)](https://fr.wikipedia.org) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model. To get an overview how this dataset was created and pre-processed, have a look at [Cohere/wikipedia-22-12](https://huggingface.co/datasets/Cohere/wikipedia-22-12). ## Embeddings We compute for `title+" "+text` the embeddings using our `multilingual-22-12` embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/). ## Further languages We provide embeddings of Wikipedia in many different languages: [ar](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ar-embeddings), [de](https://huggingface.co/datasets/Cohere/wikipedia-22-12-de-embeddings), [en](https://huggingface.co/datasets/Cohere/wikipedia-22-12-en-embeddings), [es](https://huggingface.co/datasets/Cohere/wikipedia-22-12-es-embeddings), [fr](https://huggingface.co/datasets/Cohere/wikipedia-22-12-fr-embeddings), [hi](https://huggingface.co/datasets/Cohere/wikipedia-22-12-hi-embeddings), [it](https://huggingface.co/datasets/Cohere/wikipedia-22-12-it-embeddings), [ja](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ja-embeddings), [ko](https://huggingface.co/datasets/Cohere/wikipedia-22-12-ko-embeddings), [simple english](https://huggingface.co/datasets/Cohere/wikipedia-22-12-simple-embeddings), [zh](https://huggingface.co/datasets/Cohere/wikipedia-22-12-zh-embeddings), You can find the Wikipedia datasets without embeddings at [Cohere/wikipedia-22-12](https://huggingface.co/datasets/Cohere/wikipedia-22-12). 
## Loading the dataset You can either load the dataset like this: ```python from datasets import load_dataset docs = load_dataset(f"Cohere/wikipedia-22-12-fr-embeddings", split="train") ``` Or you can also stream it without downloading it before: ```python from datasets import load_dataset docs = load_dataset(f"Cohere/wikipedia-22-12-fr-embeddings", split="train", streaming=True) for doc in docs: docid = doc['id'] title = doc['title'] text = doc['text'] emb = doc['emb'] ``` ## Search A full search example: ```python #Run: pip install cohere datasets from datasets import load_dataset import torch import cohere co = cohere.Client(f"<<COHERE_API_KEY>>") # Add your cohere API key from www.cohere.com #Load at max 1000 documents + embeddings max_docs = 1000 docs_stream = load_dataset(f"Cohere/wikipedia-22-12-fr-embeddings", split="train", streaming=True) docs = [] doc_embeddings = [] for doc in docs_stream: docs.append(doc) doc_embeddings.append(doc['emb']) if len(docs) >= max_docs: break doc_embeddings = torch.tensor(doc_embeddings) query = 'Who founded Youtube' response = co.embed(texts=[query], model='multilingual-22-12') query_embedding = response.embeddings query_embedding = torch.tensor(query_embedding) # Compute dot score between query embedding and document embeddings dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1)) top_k = torch.topk(dot_scores, k=3) # Print results print("Query:", query) for doc_id in top_k.indices[0].tolist(): print(docs[doc_id]['title']) print(docs[doc_id]['text'], "\n") ``` ## Performance You can find performance on the MIRACL dataset (a semantic search evaluation dataset) here: [miracl-en-queries-22-12#performance](https://huggingface.co/datasets/Cohere/miracl-en-queries-22-12#performance)
ivelin/rico_refexp_combined
--- license: cc task_categories: - question-answering language: - en tags: - ui refexp pretty_name: UI RefExp Combined size_categories: - 100K<n<1M dataset_info: features: - name: image dtype: image - name: image_id dtype: string - name: prompt dtype: string - name: target_bounding_box struct: - name: xmax dtype: float64 - name: xmin dtype: float64 - name: ymax dtype: float64 - name: ymin dtype: float64 splits: - name: train num_bytes: 42127199077.08 num_examples: 390084 - name: validation num_bytes: 409042403.17 num_examples: 3191 - name: test num_bytes: 456349755.528 num_examples: 3912 download_size: 27184189035 dataset_size: 42992591235.778 --- # Dataset Card for "rico_refexp_combined" This dataset combines the crowdsourced RICO RefExp prompts from the [UIBert dataset](https://huggingface.co/datasets/ivelin/rico_sca_refexp_synthetic) and the synthetically generated prompts from the [seq2act dataset](https://huggingface.co/datasets/ivelin/rico_sca_refexp_synthetic).
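Going by the feature schema above, a minimal inspection sketch might look like the following; whether the bounding-box coordinates are normalized or in screen pixels is not stated in this card, so they are simply printed as stored.

```python
from datasets import load_dataset

ds = load_dataset("ivelin/rico_refexp_combined", split="validation")

sample = ds[0]
image = sample["image"]              # PIL image of the UI screen
box = sample["target_bounding_box"]  # dict with xmin/ymin/xmax/ymax
print(sample["image_id"], sample["prompt"])
print(image.size, box)
```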
Cohere/miracl-en-queries-22-12
--- annotations_creators: - expert-generated language: - en multilinguality: - multilingual size_categories: [] source_datasets: [] tags: [] task_categories: - text-retrieval license: - apache-2.0 task_ids: - document-retrieval --- # MIRACL (en) embedded with cohere.ai `multilingual-22-12` encoder We encoded the [MIRACL dataset](https://huggingface.co/miracl) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model. The query embeddings can be found in [Cohere/miracl-en-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-en-queries-22-12) and the corpus embeddings can be found in [Cohere/miracl-en-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-en-corpus-22-12). For the orginal datasets, see [miracl/miracl](https://huggingface.co/datasets/miracl/miracl) and [miracl/miracl-corpus](https://huggingface.co/datasets/miracl/miracl-corpus). Dataset info: > MIRACL ๐ŸŒ๐Ÿ™Œ๐ŸŒ (Multilingual Information Retrieval Across a Continuum of Languages) is a multilingual retrieval dataset that focuses on search across 18 different languages, which collectively encompass over three billion native speakers around the world. > > The corpus for each language is prepared from a Wikipedia dump, where we keep only the plain text and discard images, tables, etc. Each article is segmented into multiple passages using WikiExtractor based on natural discourse units (e.g., `\n\n` in the wiki markup). Each of these passages comprises a "document" or unit of retrieval. We preserve the Wikipedia article title of each passage. ## Embeddings We compute for `title+" "+text` the embeddings using our `multilingual-22-12` embedding model, a state-of-the-art model that works for semantic search in 100 languages. If you want to learn more about this model, have a look at [cohere.ai multilingual embedding model](https://txt.cohere.ai/multilingual/). ## Loading the dataset In [miracl-en-corpus-22-12](https://huggingface.co/datasets/Cohere/miracl-en-corpus-22-12) we provide the corpus embeddings. Note, depending on the selected split, the respective files can be quite large. You can either load the dataset like this: ```python from datasets import load_dataset docs = load_dataset(f"Cohere/miracl-en-corpus-22-12", split="train") ``` Or you can also stream it without downloading it before: ```python from datasets import load_dataset docs = load_dataset(f"Cohere/miracl-en-corpus-22-12", split="train", streaming=True) for doc in docs: docid = doc['docid'] title = doc['title'] text = doc['text'] emb = doc['emb'] ``` ## Search Have a look at [miracl-en-queries-22-12](https://huggingface.co/datasets/Cohere/miracl-en-queries-22-12) where we provide the query embeddings for the MIRACL dataset. To search in the documents, you must use **dot-product**. And then compare this query embeddings either with a vector database (recommended) or directly computing the dot product. A full search example: ```python # Attention! For large datasets, this requires a lot of memory to store # all document embeddings and to compute the dot product scores. # Only use this for smaller datasets. 
For large datasets, use a vector DB from datasets import load_dataset import torch #Load documents + embeddings docs = load_dataset(f"Cohere/miracl-en-corpus-22-12", split="train") doc_embeddings = torch.tensor(docs['emb']) # Load queries queries = load_dataset(f"Cohere/miracl-en-queries-22-12", split="dev") # Select the first query as example qid = 0 query = queries[qid] query_embedding = torch.tensor(queries['emb']) # Compute dot score between query embedding and document embeddings dot_scores = torch.mm(query_embedding, doc_embeddings.transpose(0, 1)) top_k = torch.topk(dot_scores, k=3) # Print results print("Query:", query['query']) for doc_id in top_k.indices[0].tolist(): print(docs[doc_id]['title']) print(docs[doc_id]['text']) ``` You can get embeddings for new queries using our API: ```python #Run: pip install cohere import cohere co = cohere.Client(f"{api_key}") # You should add your cohere API Key here :)) texts = ['my search query'] response = co.embed(texts=texts, model='multilingual-22-12') query_embedding = response.embeddings[0] # Get the embedding for the first text ``` ## Performance In the following table we compare the cohere multilingual-22-12 model with Elasticsearch version 8.6.0 lexical search (title and passage indexed as independent fields). Note that Elasticsearch doesn't support all languages that are part of the MIRACL dataset. We compute nDCG@10 (a ranking based loss), as well as hit@3: Is at least one relevant document in the top-3 results. We find that hit@3 is easier to interpret, as it presents the number of queries for which a relevant document is found among the top-3 results. Note: MIRACL only annotated a small fraction of passages (10 per query) for relevancy. Especially for larger Wikipedias (like English), we often found many more relevant passages. This is know as annotation holes. Real nDCG@10 and hit@3 performance is likely higher than depicted. | Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 | ES 8.6.0 nDCG@10 | ES 8.6.0 acc@3 | |---|---|---|---|---| | miracl-ar | 64.2 | 75.2 | 46.8 | 56.2 | | miracl-bn | 61.5 | 75.7 | 49.2 | 60.1 | | miracl-de | 44.4 | 60.7 | 19.6 | 29.8 | | miracl-en | 44.6 | 62.2 | 30.2 | 43.2 | | miracl-es | 47.0 | 74.1 | 27.0 | 47.2 | | miracl-fi | 63.7 | 76.2 | 51.4 | 61.6 | | miracl-fr | 46.8 | 57.1 | 17.0 | 21.6 | | miracl-hi | 50.7 | 62.9 | 41.0 | 48.9 | | miracl-id | 44.8 | 63.8 | 39.2 | 54.7 | | miracl-ru | 49.2 | 66.9 | 25.4 | 36.7 | | **Avg** | 51.7 | 67.5 | 34.7 | 46.0 | Further languages (not supported by Elasticsearch): | Model | cohere multilingual-22-12 nDCG@10 | cohere multilingual-22-12 hit@3 | |---|---|---| | miracl-fa | 44.8 | 53.6 | | miracl-ja | 49.0 | 61.0 | | miracl-ko | 50.9 | 64.8 | | miracl-sw | 61.4 | 74.5 | | miracl-te | 67.8 | 72.3 | | miracl-th | 60.2 | 71.9 | | miracl-yo | 56.4 | 62.2 | | miracl-zh | 43.8 | 56.5 | | **Avg** | 54.3 | 64.6 |
its5Q/habr_qna
--- annotations_creators: - crowdsourced language: - ru language_creators: - crowdsourced license: - cc0-1.0 multilinguality: - monolingual pretty_name: Habr QnA size_categories: - 100K<n<1M source_datasets: - original tags: [] task_categories: - text-generation - question-answering task_ids: - language-modeling - open-domain-qa --- # Dataset Card for Habr QnA ## Table of Contents - [Dataset Card for Habr QnA](#dataset-card-for-habr-qna) - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) ## Dataset Description - **Repository:** https://github.com/its5Q/habr-qna-parser ### Dataset Summary This is a dataset of questions and answers scraped from [Habr QnA](https://qna.habr.com/). There are 723430 asked questions with answers, comments and other metadata. ### Languages The dataset is mostly Russian with source code in different languages. ## Dataset Structure ### Data Fields Data fields can be previewed on the dataset card page. ### Data Splits All 723430 examples are in the train split, there is no validation split. ## Dataset Creation The data was scraped with a script, located in [my GitHub repository](https://github.com/its5Q/habr-qna-parser) ## Additional Information ### Dataset Curators - https://github.com/its5Q
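The card lists the size (723,430 examples, train split only) but not the field names, so the sketch below just streams the first record and prints its keys rather than guessing the schema; streaming also avoids downloading the full corpus up front.

```python
from datasets import load_dataset

ds = load_dataset("its5Q/habr_qna", split="train", streaming=True)

first = next(iter(ds))
print(list(first.keys()))  # field names are not listed in this card, so inspect them here
```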
society-ethics/papers
--- tags: - ethics --- # Hugging Face Ethics & Society Papers This is an incomplete list of ethics-related papers published by researchers at Hugging Face. - Gradio: https://arxiv.org/abs/1906.02569 - DistilBERT: https://arxiv.org/abs/1910.01108 - RAFT: https://arxiv.org/abs/2109.14076 - Interactive Model Cards: https://arxiv.org/abs/2205.02894 - Data Governance in the Age of Large-Scale Data-Driven Language Technology: https://arxiv.org/abs/2206.03216 - Quality at a Glance: https://arxiv.org/abs/2103.12028 - A Framework for Deprecating Datasets: https://arxiv.org/abs/2111.04424 - Bugs in the Data: https://arxiv.org/abs/2208.11695 - Measuring Data: https://arxiv.org/abs/2212.05129 - Perturbation Augmentation for Fairer NLP: https://arxiv.org/abs/2205.12586 - SEAL: https://arxiv.org/abs/2210.05839 - Multitask Prompted Training Enables Zero-Shot Task Generalization: https://arxiv.org/abs/2110.08207 - BLOOM: https://arxiv.org/abs/2211.05100 - ROOTS: https://arxiv.org/abs/2303.03915 - Evaluate & Evaluation on the Hub: https://arxiv.org/abs/2210.01970 - Spacerini: https://arxiv.org/abs/2302.14534 - ROOTS Search Tool: https://arxiv.org/abs/2302.14035 - Fair Diffusion: https://arxiv.org/abs/2302.10893 - Counting Carbon: https://arxiv.org/abs/2302.08476 - The Gradient of Generative AI Release: https://arxiv.org/abs/2302.04844 - BigScience: A Case Study in the Social Construction of a Multilingual Large Language Model: https://arxiv.org/abs/2212.04960 - Towards Openness Beyond Open Access: User Journeys through 3 Open AI Collaboratives: https://arxiv.org/abs/2301.08488 - Stable Bias: Analyzing Societal Representations in Diffusion Models: https://arxiv.org/abs/2303.11408 - Stronger Together: on the Articulation of Ethical Charters, Legal Tools, and Technical Documentation in ML: https://arxiv.org/abs/2305.18615
emozilla/quality
--- language: en dataset_info: features: - name: article dtype: string - name: question dtype: string - name: options sequence: string - name: answer dtype: int64 - name: hard dtype: bool splits: - name: train num_bytes: 62597212 num_examples: 2523 - name: validation num_bytes: 51198650 num_examples: 2086 download_size: 14352147 dataset_size: 113795862 --- # Dataset Card for "quality" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
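Given the features above, a minimal sketch for loading the data and pulling out the subset flagged as hard. Note that whether `answer` is 0- or 1-indexed into `options` is not stated in this card, so check the convention before using it as a label.

```python
from datasets import load_dataset

ds = load_dataset("emozilla/quality")

example = ds["validation"][0]
print(example["question"])
print(example["options"])  # list of answer options
print(example["answer"])   # integer label; the indexing convention is not documented here

hard_subset = ds["validation"].filter(lambda x: x["hard"])
print(len(hard_subset), "hard questions out of", len(ds["validation"]))
```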
FreedomIntelligence/huatuo_consultation_qa
---
license: apache-2.0
task_categories:
- text-generation
language:
- zh
tags:
- medical
size_categories:
- 1M<n<10M
---

# Dataset Card for huatuo_consultation_qa

## Dataset Description

- **Homepage: https://www.huatuogpt.cn/**
- **Repository: https://github.com/FreedomIntelligence/HuatuoGPT**
- **Paper: https://arxiv.org/abs/2305.01526**
- **Leaderboard:**
- **Point of Contact:**

### Dataset Summary

We collected data from a medical consultation website, which hosts many online consultation records by medical experts. Each record is a QA pair: a patient raises a question and a medical doctor answers it. Basic information about the doctors (name, hospital organization, and department) was recorded. We directly crawled patients' questions and doctors' answers as QA pairs, obtaining 32,708,346 pairs. We then removed QA pairs containing special characters and removed duplicate pairs, leaving 25,341,578 QA pairs.

**Please note that for certain reasons we cannot provide the text data directly, so the answer part of our dataset is a URL. If you want to use text data, you can refer to the other two parts of our open-source datasets ([huatuo_encyclopedia_qa](https://huggingface.co/datasets/FreedomIntelligence/huatuo_encyclopedia_qa), [huatuo_knowledge_graph_qa](https://huggingface.co/datasets/FreedomIntelligence/huatuo_knowledge_graph_qa)), or use the URLs for data collection.**

## Dataset Creation

### Source Data

....

## Citation

```
@misc{li2023huatuo26m,
      title={Huatuo-26M, a Large-scale Chinese Medical QA Dataset},
      author={Jianquan Li and Xidong Wang and Xiangbo Wu and Zhiyi Zhang and Xiaolong Xu and Jie Fu and Prayag Tiwari and Xiang Wan and Benyou Wang},
      year={2023},
      eprint={2305.01526},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
oscar-corpus/colossal-oscar-1.0
--- license: cc0-1.0 size_categories: - n>1T multilinguality: - multilingual source_datasets: - original task_categories: - fill-mask - text-generation task_ids: - language-modeling paperswithcode_id: oscar extra_gated_prompt: "By filling the form below I understand that Colossal OSCAR 1 is just a partial annotation of the WET files of 10 Common Crawl snapshots, the original data is included here **only for convenience**, and specially for researchers looking for data in lower resource languages. **Only the annotations are distributed under a cc0-1.0 license**, for the rest of the content I have read the [Common Crawl Terms of use](https://commoncrawl.org/terms-of-use/) and I will abide by them. I understand that all uses of the textual content in Colossal OSCAR 1 are subject to the [Common Crawl Terms of use](https://commoncrawl.org/terms-of-use/). I understand that reusing the textual content in Colossal OSCAR 1 might not be legal in all countries/regions and for all use cases. I understand that Colossal OSCAR 1 is mainly targeted towards researchers and meant to be used in research. The OSCAR Project reserves the right to revoke my access to this data. The OSCAR Project reserves the right to modify this data at any time in accordance to take down requests." extra_gated_fields: Name: text Email: text Affiliation: text Country: text Usecase: text I have explicitly checked that downloading Colossal OSCAR 1 is legal in my jurisdiction, in the country/region where I am located right now, and for the use case that I have described above, I have also read and accepted the Common Crawl Terms of use: checkbox --- # Dataset Card for Colossal OSCAR 1 ## IMPORTANT NOTE: THIS DATASET CARD IS STILL BEING WRITTEN, PLEASE BE PATIENT WHILE WE COMPLETE ALL THE INFORMATION ABOUT THE CORPUS ## Table of Contents - [Dataset Card for Colossal OSCAR 1](#dataset-card-for-colossal-oscar-1) - [IMPORTANT NOTE: THIS DATASET CARD IS STILL BEING WRITTEN, PLEASE BE PATIENT WHILE WE COMPLETE ALL THE INFORMATION ABOUT THE CORPUS](#important-note-this-dataset-card-is-still-being-written-please-be-patient-while-we-complete-all-the-information-about-the-corpus) - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Issues](#issues) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Layout](#layout) - [Data Splits](#data-splits) - [Table](#table) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization) - [Who are the source language producers?](#who-are-the-source-language-producers) - [Annotations](#annotations) - [Annotation process](#annotation-process) - [Who are the annotators?](#who-are-the-annotators) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** 
[https://oscar-project.org](https://oscar-project.org)
- **Repository:** [https://github.com/oscar-project](https://github.com/oscar-project)
- **Papers:** [Towards a Cleaner Document-Oriented Multilingual Crawled Corpus](https://aclanthology.org/2022.lrec-1.463/), [Perplexed by Quality: A Perplexity-based Method for Adult and Harmful Content Detection in Multilingual Heterogeneous Web Data](https://arxiv.org/abs/2212.10440)
- **Point of Contact:** [Contact](https://oscar-project.org/#contact)

### Dataset Summary

The OSCAR project (**O**pen **S**uper-large **C**rawled **A**ggregated co**R**pus) is an Open Source project aiming to provide web-based multilingual resources and datasets for Machine Learning (ML) and Artificial Intelligence (AI) applications. The project focuses specifically on providing large quantities of unannotated raw data that is commonly used in the pre-training of large deep learning models. The OSCAR project has developed [high-performance data pipelines](https://github.com/oscar-corpus/ungoliant) specifically conceived to classify and filter large amounts of [web data](https://commoncrawl.org/). The project has also paid special attention to improving the data quality of web-based corpora and to providing data for low-resource languages, so that these new ML/AI technologies are accessible to as many communities as possible.

Colossal OSCAR 1 is the largest release of the OSCAR Corpus, based on 10 different monthly snapshots of Common Crawl. It currently contains all the features present in OSCAR 23.01, the main difference being its size.

### Downloading the Data

We have not yet finished the Python loading script for using Colossal OSCAR 1 with `datasets`, so for the moment we recommend using the `huggingface_hub` [python library](https://huggingface.co/docs/huggingface_hub/index). If you want to download a considerable amount of data, we recommend using the `hf_transfer` Python package and setting the environment variable `HF_HUB_ENABLE_HF_TRANSFER=1`; a minimal download sketch is included further down in this card.

### Supported Tasks and Leaderboards

OSCAR is mainly intended to pre-train language models and word representations.

### Languages

All the data is distributed by language; both the original and the deduplicated versions of the data are available. 151 different languages are available. The table in subsection [Data Splits Sample Size](#data-splits-sample-size) provides the language code for each subcorpus as well as the number of words (space separated tokens), lines and sizes for both the original and the deduplicated versions of OSCAR.

### Issues

Colossal OSCAR 1 may have quality issues on low-size subcorpora, as has been the case in previous releases. Please consider taking a look at [_Quality at a Glance: An Audit of Web-Crawled Multilingual Datasets_](https://aclanthology.org/2022.tacl-1.4/) to get a better understanding of the current limitations of our language classifier. Note that since documents are identified as a whole, a given language subcorpus is expected to contain lines in other languages. As an example, it is known and expected that the German subcorpus contains documents holding lines identified as Swiss German / Alemannic.

**If you encounter something that is unexpected, please file an issue here: https://github.com/oscar-corpus/corpus/issues.**

| Language code | Language | Issues |
| ------------- | -------- | ------ |
|               |          |        |

## Dataset Structure

We show detailed information for all the configurations of the dataset.
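As mentioned in the "Downloading the Data" section above, the recommended route for now is `huggingface_hub`, optionally with `hf_transfer` for faster transfers. A minimal sketch is shown below; the `allow_patterns` value is a hypothetical placeholder (inspect the repository file listing and adjust it to the language and snapshot you need), and access to this gated repository must already have been granted to the token you are logged in with.

```python
import os
os.environ["HF_HUB_ENABLE_HF_TRANSFER"] = "1"  # optional speed-up, requires `pip install hf_transfer`

from huggingface_hub import snapshot_download

# Download only a subset of the repository. The glob below is a placeholder --
# check the actual file layout on the Hub and adjust it before running.
local_dir = snapshot_download(
    repo_id="oscar-corpus/colossal-oscar-1.0",
    repo_type="dataset",
    allow_patterns=["<subset-pattern>/*"],
    local_dir="colossal-oscar-1.0",
)
print(local_dir)
```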
### Data Instances

TODO

### Layout

```js
{
   "content":"English sentence\nphrase en français\n????????????", // (1)
   "warc_headers":{ // (2)
      "warc-identified-content-language":"fra,eng",
      "warc-target-uri":"https://fr.wikipedia.org/wiki/...",
      "warc-record-id":"<urn:uuid:29eaa920-d299-4b1d-b687-c72bd8d68116>",
      "warc-type":"conversion",
      "content-length":"35298", // (3)
      "warc-refers-to":"<urn:uuid:39e42055-0d94-4e45-9c6c-9e7056635d64>",
      "warc-block-digest":"sha1:WFH2A5WHCS2H365GIAFYQPI7UOAMFGHB", // (3)
      "warc-date":"2022-11-26T09:45:47Z",
      "content-type":"text/plain"
   },
   "metadata":{
      "identification":{ // (4)
         "label":"fr",
         "prob":0.8938327
      },
      "harmful_pp":4063.1814, // (5)
      "tlsh":"tlsh:T125315FF2B6088901EEA097015DB39B4600B...", // (6)
      "quality_warnings":[ // (7)
         "short_sentences",
         "header",
         "footer"
      ],
      "categories":[ // (8)
         "examen_pix",
         "liste_bu"
      ],
      "sentence_identifications":[ // (9)
         {
            "label":"fr",
            "prob":0.99837273
         },
         {
            "label":"en",
            "prob":0.9992377
         },
         null
      ]
   }
}
```

### Data Splits

<details>
  <summary>Click to expand the number of samples per configuration</summary>
</details>

## Table

## Dataset Creation

### Curation Rationale

OSCAR was constructed using [`Ungoliant`](https://github.com/oscar-corpus/ungoliant), a new pipeline derived from [goclassy](https://github.com/oscar-corpus/goclassy), itself derived from [fastText's pipeline](https://github.com/facebookresearch/fastText). The pipeline works on documents rather than lines. `Ungoliant` is implemented in the [Rust programming language](https://rust-lang.org), and uses [rayon](https://github.com/rayon-rs/rayon) as its data parallelism strategy. Threading is done at shard, record and sentence level, making the whole generation process much more efficient. Filtering will be explained in a future blog post at our [website](https://oscar-project.org).

### Source Data

#### Initial Data Collection and Normalization

[Common Crawl](https://commoncrawl.org/) is a non-profit foundation which produces and maintains an open repository of web crawled data that is both accessible and analysable. Common Crawl's complete web archive consists of petabytes of data collected over 8 years of web crawling. The repository contains raw web page HTML data (WARC files), metadata extracts (WAT files) and plain text extracts (WET files). The organization's crawlers have always respected [nofollow](http://microformats.org/wiki/rel-nofollow) and [robots.txt](https://www.robotstxt.org/) policies.

Each monthly Common Crawl snapshot is in itself a massive multilingual corpus, where every single file contains data coming from multiple web pages written in a large variety of languages and covering all possible types of topics.

To construct OSCAR, the WET files of Common Crawl were used. These contain the extracted plain texts from the websites, mostly converted to UTF-8, as well as headers containing the metadata of each crawled document. Each WET file comes compressed in gzip format and is stored on Amazon Web Services. In the case of Colossal OSCAR 1 the following snapshots were used:

- 05-06-23
- 06-07-22
- 11-12-21
- 10-20
- 05-06-20
- 05-19
- 11-18
- 11-17
- 03-15
- 09-16

#### Who are the source language producers?

The data comes from multiple web pages in a large variety of languages.

### Annotations

The dataset does not contain any additional annotations.

#### Annotation process

N/A

#### Who are the annotators?

N/A

### Personal and Sensitive Information

Being constructed from Common Crawl, personal and sensitive information might be present.
This **must** be considered before training deep learning models with OSCAR, especially in the case of text-generation models.

## Considerations for Using the Data

### Social Impact of Dataset

OSCAR is intended to bring more data to a wide variety of languages; the aim of the corpus is to make large amounts of data available to lower-resource languages in order to facilitate the pre-training of state-of-the-art language modeling architectures.

### Discussion of Biases

OSCAR is not properly filtered yet, and this can be reflected in the models trained with it. Care is advised, especially concerning biases of the resulting models. We have added annotations to Common Crawl, so please consider using them to select the data that you would like to use for your particular use case.

### Other Known Limitations

The [fastText linear classifier](https://fasttext.cc) is limited both in performance and in the variety of languages it can recognize, so the quality of some OSCAR sub-corpora might be lower than expected, especially for the lowest-resource languages. Some audits have already been done by [third parties](https://arxiv.org/abs/2010.14571).

## Additional Information

### Dataset Curators

Colossal OSCAR 1 was put together by [Pedro Ortiz Suarez](https://portizs.eu/) while working as a researcher at the [Speech and Language Technology Team](https://www.dfki.de/en/web/research/research-departments/speech-and-language-technology) at [DFKI GmbH](https://www.dfki.de/en/web) Berlin. This release was also made possible due to the work of [Julien Abadji](https://ujj.space) and the continuous funding of the OSCAR project by [Inria](https://www.inria.fr/en) (project-team [ALMAnaCH](https://almanach.inria.fr/index-en.html)).

Colossal OSCAR 1 is part of the work done by [Pedro Ortiz Suarez](https://portizs.eu/) for the [OpenGPT-X Project](https://opengpt-x.de/en/), which is funded by the German Federal Ministry for Economic Affairs and Climate Action ([BMWK](https://www.bmwk.de/Navigation/EN/Home/home.html)). The authors gratefully acknowledge the [Gauss Centre for Supercomputing e.V.](https://www.gauss-centre.eu) for funding this project by providing computing time through the John von Neumann Institute for Computing (NIC) on the GCS Supercomputer JUWELS at the Jülich Supercomputing Centre (JSC).

This release of OSCAR was also made possible by the continuous support of the OSCAR team at [Inria](https://www.inria.fr/en) (project-team [ALMAnaCH](https://almanach.inria.fr/index-en.html)), especially by [Julien Abadji](https://ujj.space), [Rua Ismail](https://oscar-project.org/authors/rua/) and [Benoit Sagot](http://pauillac.inria.fr/~sagot/), as well as by members of the OSCAR community, in particular [Sotaro Takeshita](https://sotaro.io/about) and [Sebastian Nagel](https://www.polver.uni-konstanz.de/cnc/people/nagel/).

### Licensing Information

These data are released under the following licensing scheme:

We do not own any of the text from which these data have been extracted. We license the actual packaging, the metadata and the annotations of these data under the Creative Commons CC0 license ("no rights reserved"): http://creativecommons.org/publicdomain/zero/1.0/

To the extent possible under law, the OSCAR project, DFKI GmbH and Inria have waived all copyright and related or neighboring rights to OSCAR.

This work is published from: France and Germany.
Should you consider that our data contains material that is owned by you and should therefore not be reproduced here, please: - Clearly identify yourself, with detailed contact data such as an address, telephone number or email address at which you can be contacted. - Clearly identify the copyrighted work claimed to be infringed. - Clearly identify the material that is claimed to be infringing and information reasonably sufficient to allow us to locate the material. We will comply to legitimate requests by removing the affected sources. Please use the [contact information](https://oscar-project.org/#contact) on our website for take down requests. We strongly advise users to submit take down request to Common Crawl. For more information please read their [Terms of Use](https://commoncrawl.org/terms-of-use/) ### Citation Information ``` @ARTICLE{2022arXiv221210440J, author = {{Jansen}, Tim and {Tong}, Yangling and {Zevallos}, Victoria and {Ortiz Suarez}, Pedro}, title = "{Perplexed by Quality: A Perplexity-based Method for Adult and Harmful Content Detection in Multilingual Heterogeneous Web Data}", journal = {arXiv e-prints}, keywords = {Computer Science - Computation and Language}, year = 2022, month = dec, eid = {arXiv:2212.10440}, pages = {arXiv:2212.10440}, doi = {10.48550/arXiv.2212.10440}, archivePrefix = {arXiv}, eprint = {2212.10440}, primaryClass = {cs.CL}, adsurl = {https://ui.adsabs.harvard.edu/abs/2022arXiv221210440J}, adsnote = {Provided by the SAO/NASA Astrophysics Data System} } @inproceedings{abadji-etal-2022-towards, title = "Towards a Cleaner Document-Oriented Multilingual Crawled Corpus", author = "Abadji, Julien and Ortiz Suarez, Pedro and Romary, Laurent and Sagot, Beno{\^\i}t", booktitle = "Proceedings of the Thirteenth Language Resources and Evaluation Conference", month = jun, year = "2022", address = "Marseille, France", publisher = "European Language Resources Association", url = "https://aclanthology.org/2022.lrec-1.463", pages = "4344--4355", abstract = "The need for large corpora raw corpora has dramatically increased in recent years with the introduction of transfer learning and semi-supervised learning methods to Natural Language Processing. And while there have been some recent attempts to manually curate the amount of data necessary to train large language models, the main way to obtain this data is still through automatic web crawling. In this paper we take the existing multilingual web corpus OSCAR and its pipeline Ungoliant that extracts and classifies data from Common Crawl at the line level, and propose a set of improvements and automatic annotations in order to produce a new document-oriented version of OSCAR that could prove more suitable to pre-train large generative language models as well as hopefully other applications in Natural Language Processing and Digital Humanities.", } @inproceedings{AbadjiOrtizSuarezRomaryetal.2021, author = {Julien Abadji and Pedro Javier Ortiz Su{\'a}rez and Laurent Romary and Beno{\^i}t Sagot}, title = {Ungoliant: An optimized pipeline for the generation of a very large-scale multilingual web corpus}, series = {Proceedings of the Workshop on Challenges in the Management of Large Corpora (CMLC-9) 2021. 
Limerick, 12 July 2021 (Online-Event)}, editor = {Harald L{\"u}ngen and Marc Kupietz and Piotr Baล„ski and Adrien Barbaresi and Simon Clematide and Ines Pisetta}, publisher = {Leibniz-Institut f{\"u}r Deutsche Sprache}, address = {Mannheim}, doi = {10.14618/ids-pub-10468}, url = {https://nbn-resolving.org/urn:nbn:de:bsz:mh39-104688}, pages = {1 -- 9}, year = {2021}, abstract = {Since the introduction of large language models in Natural Language Processing, large raw corpora have played a crucial role in Computational Linguistics. However, most of these large raw corpora are either available only for English or not available to the general public due to copyright issues. Nevertheless, there are some examples of freely available multilingual corpora for training Deep Learning NLP models, such as the OSCAR and Paracrawl corpora. However, they have quality issues, especially for low-resource languages. Moreover, recreating or updating these corpora is very complex. In this work, we try to reproduce and improve the goclassy pipeline used to create the OSCAR corpus. We propose a new pipeline that is faster, modular, parameterizable, and well documented. We use it to create a corpus similar to OSCAR but larger and based on recent data. Also, unlike OSCAR, the metadata information is at the document level. We release our pipeline under an open source license and publish the corpus under a research-only license.}, language = {en} } @article{kreutzer-etal-2022-quality, title = "Quality at a Glance: An Audit of Web-Crawled Multilingual Datasets", author = {Kreutzer, Julia and Caswell, Isaac and Wang, Lisa and Wahab, Ahsan and van Esch, Daan and Ulzii-Orshikh, Nasanbayar and Tapo, Allahsera and Subramani, Nishant and Sokolov, Artem and Sikasote, Claytone and Setyawan, Monang and Sarin, Supheakmungkol and Samb, Sokhar and Sagot, Beno{\^\i}t and Rivera, Clara and Rios, Annette and Papadimitriou, Isabel and Osei, Salomey and Suarez, Pedro Ortiz and Orife, Iroro and Ogueji, Kelechi and Rubungo, Andre Niyongabo and Nguyen, Toan Q. and M{\"u}ller, Mathias and M{\"u}ller, Andr{\'e} and Muhammad, Shamsuddeen Hassan and Muhammad, Nanda and Mnyakeni, Ayanda and Mirzakhalov, Jamshidbek and Matangira, Tapiwanashe and Leong, Colin and Lawson, Nze and Kudugunta, Sneha and Jernite, Yacine and Jenny, Mathias and Firat, Orhan and Dossou, Bonaventure F. P. and Dlamini, Sakhile and de Silva, Nisansa and {\c{C}}abuk Ball{\i}, Sakine and Biderman, Stella and Battisti, Alessia and Baruwa, Ahmed and Bapna, Ankur and Baljekar, Pallavi and Azime, Israel Abebe and Awokoya, Ayodele and Ataman, Duygu and Ahia, Orevaoghene and Ahia, Oghenefego and Agrawal, Sweta and Adeyemi, Mofetoluwa}, journal = "Transactions of the Association for Computational Linguistics", volume = "10", year = "2022", address = "Cambridge, MA", publisher = "MIT Press", url = "https://aclanthology.org/2022.tacl-1.4", doi = "10.1162/tacl_a_00447", pages = "50--72", abstract = "With the success of large-scale pre-training and multilingual modeling in Natural Language Processing (NLP), recent years have seen a proliferation of large, Web-mined text datasets covering hundreds of languages. We manually audit the quality of 205 language-specific corpora released with five major public datasets (CCAligned, ParaCrawl, WikiMatrix, OSCAR, mC4). Lower-resource corpora have systematic issues: At least 15 corpora have no usable text, and a significant fraction contains less than 50{\%} sentences of acceptable quality. 
In addition, many are mislabeled or use nonstandard/ambiguous language codes. We demonstrate that these issues are easy to detect even for non-proficient speakers, and supplement the human audit with automatic analyses. Finally, we recommend techniques to evaluate and improve multilingual corpora and discuss potential risks that come with low-quality data releases.", } @inproceedings{ortiz-suarez-etal-2020-monolingual, title = "A Monolingual Approach to Contextualized Word Embeddings for Mid-Resource Languages", author = "Ortiz Su{'a}rez, Pedro Javier and Romary, Laurent and Sagot, Benoit", booktitle = "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", month = jul, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2020.acl-main.156", pages = "1703--1714", abstract = "We use the multilingual OSCAR corpus, extracted from Common Crawl via language classification, filtering and cleaning, to train monolingual contextualized word embeddings (ELMo) for five mid-resource languages. We then compare the performance of OSCAR-based and Wikipedia-based ELMo embeddings for these languages on the part-of-speech tagging and parsing tasks. We show that, despite the noise in the Common-Crawl-based OSCAR data, embeddings trained on OSCAR perform much better than monolingual embeddings trained on Wikipedia. They actually equal or improve the current state of the art in tagging and parsing for all five languages. In particular, they also improve over multilingual Wikipedia-based contextual embeddings (multilingual BERT), which almost always constitutes the previous state of the art, thereby showing that the benefit of a larger, more diverse corpus surpasses the cross-lingual benefit of multilingual embedding architectures.", } @inproceedings{OrtizSuarezSagotRomary2019, author = {Pedro Javier {Ortiz Su{'a}rez} and Benoit Sagot and Laurent Romary}, title = {Asynchronous pipelines for processing huge corpora on medium to low resource infrastructures}, series = {Proceedings of the Workshop on Challenges in the Management of Large Corpora (CMLC-7) 2019. Cardiff, 22nd July 2019}, editor = {Piotr Baล„ski and Adrien Barbaresi and Hanno Biber and Evelyn Breiteneder and Simon Clematide and Marc Kupietz and Harald L{"u}ngen and Caroline Iliadi}, publisher = {Leibniz-Institut f{"u}r Deutsche Sprache}, address = {Mannheim}, doi = {10.14618/ids-pub-9021}, url = {http://nbn-resolving.de/urn:nbn:de:bsz:mh39-90215}, pages = {9 -- 16}, year = {2019}, abstract = {Common Crawl is a considerably large, heterogeneous multilingual corpus comprised of crawled documents from the internet, surpassing 20TB of data and distributed as a set of more than 50 thousand plain text files where each contains many documents written in a wide variety of languages. Even though each document has a metadata block associated to it, this data lacks any information about the language in which each document is written, making it extremely difficult to use Common Crawl for monolingual applications. We propose a general, highly parallel, multithreaded pipeline to clean and classify Common Crawl by language; we specifically design it so that it runs efficiently on medium to low resource infrastructures where I/O speeds are the main constraint. We develop the pipeline so that it can be easily reapplied to any kind of heterogeneous corpus and so that it can be parameterised to a wide range of infrastructures. 
We also distribute a 6.3TB version of Common Crawl, filtered, classified by language, shuffled at line level in order to avoid copyright issues, and ready to be used for NLP applications.}, language = {en} } ```
nomic-ai/summarize-sampled
--- dataset_info: features: - name: response dtype: string - name: prompt dtype: string - name: source dtype: string splits: - name: train num_bytes: 1040063660 num_examples: 491951 download_size: 640692479 dataset_size: 1040063660 --- # Dataset Card for "summarize-sampled" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
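A minimal loading sketch (not part of the original card; the column names and the single `train` split follow the `dataset_info` block above):

```python
from datasets import load_dataset

# Load the single "train" split declared in the dataset_info block above.
ds = load_dataset("nomic-ai/summarize-sampled", split="train")

# Each row carries "prompt", "response", and "source" columns.
example = ds[0]
print(example["source"])
print(example["prompt"][:200])
print(example["response"][:200])
```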
zjunlp/InstructIE
--- license: mit task_categories: - text2text-generation language: - en - zh tags: - information-extraction - entity - relation pretty_name: InstructIE size_categories: - 100M<n<1B --- # InstructIE: A Bilingual Instruction-based Information Extraction Dataset [Paper](https://doi.org/10.48550/arXiv.2305.11527) ## News * [2024/02] We released a large-scale (0.32B tokens) high-quality bilingual (Chinese and English) Information Extraction (IE) instruction tuning dataset named [IEPile](https://huggingface.co/datasets/zjunlp/iepie), along with two models trained on `IEPile`, [baichuan2-13b-iepile-lora](https://huggingface.co/zjunlp/baichuan2-13b-iepile-lora) and [llama2-13b-iepile-lora](https://huggingface.co/zjunlp/llama2-13b-iepile-lora). * [2023/10] We released a new bilingual (Chinese and English) theme-based Information Extraction (IE) instruction dataset named [InstructIE](https://huggingface.co/datasets/zjunlp/InstructIE). * [2023/08] We introduced a dedicated 13B model for Information Extraction (IE), named [knowlm-13b-ie](https://huggingface.co/zjunlp/knowlm-13b-ie/tree/main). * [2023/05] We initiated an instruction-based Information Extraction project. InstructIE is a bilingual information extraction dataset based on topic schemas. We divide the text into 12 topics, namely, Person, Geographic_Location, Building, Works, Creature, Artificial_Object, Natural_Science, Organization, Transport, Event, Astronomy, Medicine. For each topic, we have designed corresponding schemas. We expect the model to learn a general extraction capability on InstructIE and generalize it to other domains. ``` InstrueIE ├── train_zh_old.json # Chinese training set, the dataset used in the paper "InstructIE: A Bilingual Instruction-based Information Extraction Dataset". ├── train_en_old.json # English training set, the dataset used in the paper "InstructIE: A Bilingual Instruction-based Information Extraction Dataset". ├── train_zh.json # Chinese training set enhanced with LLMs. ├── train_en.json # English training set enhanced with LLMs. ├── dev_zh.json # Chinese validation set. ├── dev_en.json # English validation set. ├── test_zh.json # Chinese test set. ├── test_en.json # English test set. ├── schema_zh.json # Schema information for 12 topics in Chinese. ├── schema_en.json # Schema information for 12 topics in English. ├── InstrueIE-zh │ ├── InstrueIE_人物 │ │ ├── train.json # Subsample of 5000 samples, full samples can be obtained from train_zh.json │ │ ├── dev.json │ │ ├── schema.json │ │ └── test.json │ ├── InstrueIE_建筑结构 │ ├── InstrueIE_组织 │ ├── InstrueIE_生物 │ ├── ... ├── InstrueIE-en │ ├── InstrueIE_Person │ ├── InstrueIE_Creature ``` <b>Example of data</b> ``` { "id": "841ef2af4cfe766dd9295fb7daf321c299df0fd0cef14820dfcb421161eed4a1", "text": "NGC1313 is a galaxy in the constellation of Reticulum. It was discovered by the Australian astronomer James Dunlop on September 27, 1826. It has a prominent uneven shape, and its axis does not completely revolve around its center. 
Near NGC1313, there is another galaxy, NGC1309.", "relation": [ {"head": "NGC1313", "head_type": "astronomical object type", "relation": "time of discovery", "tail": "September 27, 1826", "tail_type": "time"}, {"head": "NGC1313", "head_type": "astronomical object type", "relation": "discoverer or inventor", "tail": "James Dunlop", "tail_type": "organization/human"}, {"head": "NGC1313", "head_type": "astronomical object type", "relation": "of", "tail": "Reticulum", "tail_type": "astronomical object type"} ] } ``` | Field | Description | | ----------- | ---------------------------------------------------------------- | | id | The unique identifier for each data point. | | cate | The category of the text's subject, with a total of 12 different thematic categories. | | text | The input text for the model, with the goal of extracting all the involved relationship triples. | | relation | Describes the relationship triples contained in the text, i.e., (head, head_type, relation, tail, tail_type). | With the fields mentioned above, users can flexibly design and implement instructions and output formats for different information extraction needs. [Tutorial](https://github.com/zjunlp/DeepKE/blob/main/example/llm/InstructKGC/README.md) ## Citation Please cite these papers if you use InstructIE in your work. ```bibtex @article{DBLP:journals/corr/abs-2305-11527, author = {Honghao Gui and Shuofei Qiao and Jintian Zhang and Hongbin Ye and Mengshu Sun and Lei Liang and Huajun Chen and Ningyu Zhang}, title = {InstructIE: {A} Bilingual Instruction-based Information Extraction Dataset}, journal = {CoRR}, volume = {abs/2305.11527}, year = {2023}, url = {https://doi.org/10.48550/arXiv.2305.11527}, doi = {10.48550/ARXIV.2305.11527}, eprinttype = {arXiv}, eprint = {2305.11527}, timestamp = {Thu, 22 Feb 2024 09:46:17 +0100}, biburl = {https://dblp.org/rec/journals/corr/abs-2305-11527.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
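As a hedged illustration of the flexibility mentioned above, the sketch below turns one sample into an instruction/output pair; the prompt wording and output format here are arbitrary examples, not the official templates from the tutorial.

```python
import json

# A sample in the format shown above (only "text" and "relation" are used here).
sample = {
    "text": "NGC1313 is a galaxy in the constellation of Reticulum. It was "
            "discovered by the Australian astronomer James Dunlop on September 27, 1826.",
    "relation": [
        {"head": "NGC1313", "head_type": "astronomical object type",
         "relation": "discoverer or inventor", "tail": "James Dunlop",
         "tail_type": "organization/human"},
    ],
}

# Build an illustrative instruction/output pair for extraction-style fine-tuning.
instruction = (
    "Extract all relation triples (head, relation, tail) from the text.\n"
    f"Text: {sample['text']}"
)
output = json.dumps(
    [[r["head"], r["relation"], r["tail"]] for r in sample["relation"]],
    ensure_ascii=False,
)
print(instruction)
print(output)
```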
Locutusque/InstructMix
--- dataset: name: InstructiveMix tagline: A Combined Dataset of Diverse Instructional Content description: > InstructiveMix is a comprehensive dataset that brings together various instructional content from different domains. It combines instructions for tasks, code, poems, math, essays, medical texts, and more. With a diverse range of instructional data, this dataset is suitable for a wide range of natural language processing (NLP) tasks and research. license: CC-BY-SA-4.0 dataset_creation: '2023-08-02T00:00:00.000Z' dataset_version: 1.0.0 authors: - name: Locutusque email: locutusque.airshipcraft@gmail.com task_categories: - text-generation - conversational - question-answering language: - en --- **Dataset Summary:** InstructMix is a comprehensive combined dataset that offers diverse instructional content for a range of tasks. It includes data from various sources, such as code instructions, poems, essays, medical texts, and more. This dataset is designed to support natural language processing (NLP) research, model training, and evaluation across different domains. **Dataset Contents:** The dataset contains a collection of instructional data with corresponding inputs and outputs. Each entry has an "Input" field that contains the instructional content, and an "Output" field that represents the corresponding response or completion. Here is a list of the datasets used: - Locutusque/ColumnedChatCombined - TokenBender/code_instructions_120k_alpaca_style - Open-Orca/OpenOrca - vicgalle/alpaca-gpt4 - ChristophSchuhmann/essays-with-instructions - checkai/instruction-poems - pubmed_qa - BI55/MedText - nampdn-ai/tiny-codes - TIGER-Lab/MathInstruct - garage-bAInd/Open-Platypus It contains two of the following columns: - Input (string) - Output (string) These should hopefully be self-explanatory **Dataset Composition:** - Number of samples: 7570315 - Languages: English **Use Cases:** The InstructiveMix dataset is suitable for various NLP tasks, including text generation, text completion, translation, summarization, and more. It can be used to train and evaluate language models, code generation models, and other NLP-based applications. **Dataset Creation:** The InstructiveMix dataset was created by combining multiple existing datasets with instructional content and adding metadata to facilitate seamless integration. The content spans a diverse set of domains and was sourced from reputable datasets and public sources. **Acknowledgements:** I would like to acknowledge the original creators of the datasets used to construct InstructiveMix. Their contributions have enabled the creation of this valuable resource for the NLP community. **Contact:** For any questions or inquiries related to the InstructiveMix dataset, please contact me at [locutusque.airshipcraft@gmail.com]. ---
jinaai/negation-dataset-v2
--- tags: - finetuner language: en dataset_info: features: - name: anchor dtype: string - name: entailment dtype: string - name: negative dtype: string splits: - name: train num_examples: 50000 - name: test num_examples: 1000 multilinguality: - monolingual size_categories: - 10K<n<50k --- <br><br> <p align="center"> <img src="https://github.com/jina-ai/finetuner/blob/main/docs/_static/finetuner-logo-ani.svg?raw=true" alt="Finetuner logo: Finetuner helps you to create experiments in order to improve embeddings on search tasks. It accompanies you to deliver the last mile of performance-tuning for neural search applications." width="150px"> </p> <p align="center"> <b>The data offered by Jina AI, Finetuner team.</b> </p> ## Summary This dataset is an English-language dataset containing negation triplets. It is based on five datasets: [SNLI](https://huggingface.co/datasets/snli), [Multi-NLI](https://huggingface.co/datasets/multi_nli), [sentence-compression](https://huggingface.co/datasets/sent_comp), [Simple Wikipedia](https://www.loc.gov/item/2019205402/) and [COCO Captions](https://cocodataset.org/#home). ## Instances Each data point consists of a triplet ('anchor', 'entailment', 'negative') of strings, where ('anchor', 'entailment') are positive pairs taken from SNLI, and 'negative' contradicts both 'anchor' and 'entailment'. ## Fields - 'anchor': string, some statement - 'entailment': string, a statement which follows from 'anchor', but is usually syntactically dissimilar - 'negative': string, a statement contradicting 'anchor' and 'entailment'. Syntactically very similar to 'entailment' ## Splits | | train | test | |------------|-------|------| | # of items | 50000 | 1000 | ## Source Positive pairs were sampled from the five source datasets and negative samples were created using GPT-3.5 and GPT-4. ## Example Usage ```python from datasets import load_dataset from pprint import pprint dataset = load_dataset('jinaai/negation-dataset-v2') ``` Example data: ```python 'anchor': ['Pedestrians walking down a sidewalk next to a small street.', "A car stopped at a traffic light with it's brake lights on.", 'A couple on a motorcycle in front of a bus and a metermaid car'], 'entailment': ['People walking on a city street with light traffic.', 'A car stopped at the front of the stop light. ', 'A busy city street with a bus, taxi, and motorcycle.'], 'negative': ['People standing still on a city street with heavy traffic.', 'A car speeding away from the front of the stop light. ', 'A calm city street with no bus, taxi, and motorcycle.'] ``` ## Evaluate Models on the Test Set For evaluation, you can use the evaluation script provided together with this dataset: https://huggingface.co/datasets/jinaai/negation-dataset-v2/blob/main/evaluate_on_negations.py ## Licensing Information Please respect the licenses of the original data sources for the anchor and entailment data. The additional negatives are licensed under the [Apache 2.0](https://www.apache.org/licenses/LICENSE-2.0) license. ## Contact Join our [Discord community](https://discord.jina.ai) and chat with other community members about ideas. ## Citation If you find this dataset useful in your research, please cite the following paper: ```latex @misc{günther2023jina, title={Jina Embeddings: A Novel Set of High-Performance Sentence Embedding Models}, author={Michael Günther and Louis Milliken and Jonathan Geuter and Georgios Mastrapas and Bo Wang and Han Xiao}, year={2023}, eprint={2307.11224}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
Ichsan2895/alpaca-gpt4-indonesian
--- license: cc-by-sa-4.0 language: - id size_categories: - 10K<n<100K task_categories: - question-answering --- Base model : [FreedomIntelligence/alpaca-gpt4-indonesian](https://huggingface.co/datasets/FreedomIntelligence/alpaca-gpt4-indonesian) We wrangled the original dataset format to 'input' & 'output' format. For example: BEFORE: ``` [ { "from": "human", "value": "Saranlah slogan untuk kampanye daur ulang\n" }, { "from": "gpt", "value": "1. \"Kurangi, gunakan kembali, daur ulang: Bersama untuk masa depan yang lebih hijau.\"\n2. \ "Daur ulanglah hari ini, untuk masa depan yang lebih baik.\"\n3. \"Ubah sampahmu menjadi harta karun - Daur ulang!\"\n4. \ "Daur ulang untuk siklus kehidupan.\"\n5. \"Simpan sumber daya, daur ulang lebih banyak.\"" } ] ``` AFTER: | input | output | | ---- | ---- | | Saranlah slogan untuk kampanye daur ulang\n | 1. "Kurangi, gunakan kembali, daur ulang: Bersama untuk masa depan yang lebih hijau.\"\n2. "Daur ulanglah hari ini, untuk masa depan yang lebih baik.\"\n3. \"Ubah sampahmu menjadi harta karun - Daur ulang!\"\n4. "Daur ulang untuk siklus kehidupan.\"\n5. \"Simpan sumber daya, daur ulang lebih banyak. | ## CITATION ``` @article{peng2023instruction, title={Instruction Tuning with GPT-4}, author={Peng, Baolin and Li, Chunyuan and He, Pengcheng and Galley, Michel and Gao, Jianfeng}, journal={arXiv preprint arXiv:2304.03277}, year={2023} } @software{Chen_MultilingualSIFT_Multilingual_Supervised_2023, author = {Chen, Zhihong and Yan, Shuo and Liang, Juhao and Jiang, Feng and Wu, Xiangbo and Yu, Fei and Chen, Guiming Hardy and Chen, Junying and Zhang, Hongbo and Li Jianquan and Wan Xiang and Wang, Benyou}, month = july, title = {{MultilingualSIFT: Multilingual Supervised Instruction Fine-tuning}}, url = {https://github.com/FreedomIntelligence/MultilingualSIFT.git}, version = {0.1}, year = {2023} } ```
vivym/midjourney-prompts
--- license: apache-2.0 task_categories: - text-to-image tags: - midjourney language: - en --- # midjourney-prompts ## Description This dataset contains cleaned prompts collected from Midjourney. Total prompts: 9,085,397 | Version | Count | | ------- | --------- | | 5.2 | 2,272,465 | | 5.1 | 2,060,106 | | 5.0 | 3,530,770 | | 4.0 | 1,204,384 | | 3.0 | 14,991 | | 2.0 | 791 | | 1.0 | 1,239 | | Style | Count | | --------- | ----------- | | default | 8,874,181 | | raw | 177,953 | | expressive| 27,919 | | scenic | 2,146 | | cute | 2,036 | | original | 511 |
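A minimal loading sketch (not part of the original card): the column names are not documented above, so the snippet inspects the first record instead of assuming them, and the `train` split name is itself an assumption; streaming avoids downloading all ~9M prompts at once.

```python
from datasets import load_dataset

# Stream the prompts rather than downloading the full ~9M-row dataset.
ds = load_dataset("vivym/midjourney-prompts", split="train", streaming=True)

# The card does not document column names, so inspect the first record.
first = next(iter(ds))
print(first.keys())
print(first)
```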
kyujinpy/KoCoT_2000
--- license: cc-by-nc-4.0 task_categories: - text-generation - text-classification language: - en size_categories: - 1k<n<5k --- # KoCoT-Collection A translation of [kaist-CoT](https://huggingface.co/datasets/kaist-ai/CoT-Collection) produced with DeepL. --- # Original Dataset Card for Dataset Name ## Dataset Description - **Homepage:https://github.com/kaistAI/CoT-Collection** - **Repository:https://github.com/kaistAI/CoT-Collection** - **Paper:https://arxiv.org/abs/2305.14045** - **Point of Contact:sejune@lklab.io** ### Dataset Summary This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1). ### Supported Tasks and Leaderboards [More Information Needed] ### Languages English ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits | name | train | |-------------------|------:| |CoT-Collection|1837928| ## Additional Information ### Citation Information ``` @article{kim2023cot, title={The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning}, author={Kim, Seungone and Joo, Se June and Kim, Doyoung and Jang, Joel and Ye, Seonghyeon and Shin, Jamin and Seo, Minjoon}, journal={arXiv preprint arXiv:2305.14045}, year={2023} } ```
MBZUAI/VideoInstruct-100K
--- license: cc-by-sa-4.0 --- VideoInstruct100K is a high-quality video conversation dataset generated using human-assisted and semi-automatic annotation techniques. The question-answer pairs in the dataset are related to: - Video Summarization - Description-based question-answers (exploring spatial, temporal, relationships, and reasoning concepts) - Creative/generative question-answers For more details, please visit [Oryx/VideoChatGPT/video-instruction-data-generation](https://github.com/mbzuai-oryx/Video-ChatGPT/blob/main/data/README.md). If you find this dataset useful, please consider citing the paper, ```bibtex @article{Maaz2023VideoChatGPT, title={Video-ChatGPT: Towards Detailed Video Understanding via Large Vision and Language Models}, author={Muhammad Maaz, Hanoona Rasheed, Salman Khan and Fahad Khan}, journal={ArXiv 2306.05424}, year={2023} } ```
qgyd2021/rlhf_reward_dataset
--- license: apache-2.0 task_categories: - question-answering - text-generation language: - zh - en tags: - reward model - rlhf size_categories: - 100M<n<1B --- ## RLHF Reward Model Dataset A dataset collection for training reward models. The data was collected from the web and organized as follows: | Data | Language | Original data / project link | # Samples | Description of original data | Alternative download link | | :--- | :---: | :---: | :---: | :---: | :---: | | beyond | chinese | [beyond/rlhf-reward-single-round-trans_chinese](https://huggingface.co/datasets/beyond/rlhf-reward-single-round-trans_chinese) | 24858 | | | | helpful_and_harmless | chinese | [dikw/hh_rlhf_cn](https://huggingface.co/datasets/dikw/hh_rlhf_cn) | harmless train: 42394; harmless test: 2304; helpful train: 43722; helpful test: 2346 | The helpful and harmless data open-sourced with the Anthropic paper [Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback](https://arxiv.org/abs/2204.05862), translated into Chinese with a machine translation tool. | [Anthropic/hh-rlhf](https://huggingface.co/datasets/Anthropic/hh-rlhf) | | zhihu_3k | chinese | [liyucheng/zhihu_rlhf_3k](https://huggingface.co/datasets/liyucheng/zhihu_rlhf_3k) | 3460 | Zhihu Q&A answers carry user upvote counts, which are presumably used to rank answer preference. | | | SHP | english | [stanfordnlp/SHP](https://huggingface.co/datasets/stanfordnlp/SHP) | 385K | Covers 18 sub-domains; the preference indicates whether a response is helpful. | | <details> <summary>Reference data sources (expand to view)</summary> <pre><code> https://huggingface.co/datasets/ticoAg/rlhf_zh https://huggingface.co/datasets/beyond/rlhf-reward-single-round-trans_chinese https://huggingface.co/datasets/dikw/hh_rlhf_cn https://huggingface.co/datasets/liyucheng/zhihu_rlhf_3k </code></pre> </details>
crumb/c4-benchfilter-nano
--- language_creators: - found language: - en license: odc-by source_datasets: - c4 task_categories: - text-generation - fill-mask task_ids: - language-modeling - masked-language-modeling dataset_info: features: - name: text dtype: string - name: score dtype: float64 splits: - name: train num_bytes: 373897649.51453334 num_examples: 278115 download_size: 242478448 dataset_size: 373897649.51453334 configs: - config_name: default data_files: - split: train path: data/train-* size_categories: - 100K<n<1M --- # crumb/c4-benchfilter-nano A 278k-sample derivation of the first 3M samples of the C4 dataset, intended for cheap, short continued pretraining that optimizes language models for benchmark scores without sacrificing generalization or generative modelling unrelated to chat or 'instruct' data. The dataset keeps the estimated top 10% of samples by length-normalized n-gram overlap (mean of tri-, quad-, and penta-gram) with each of the selected benchmark datasets (arc, truthful_qa, hellaswag, mmlu, humaneval), based on 1k samples, within the first 3M samples of C4. The top-scoring sample sets for each benchmark are then filtered again to the top 30% of scores, combined, and exact-match de-duplicated. Then the top 3% of scores and samples less than 200 characters long are removed, because they likely have exact large n-token matches by chance, such as exact dates or times, that aren't actually relevant to the data.\* \*Upon further examination, some of these samples are still present throughout the data, albeit at much lower frequency than before. You might benefit from using `dataset.filter(lambda x: x["score"] > thresh)` for some threshold, but you risk losing high-quality samples as well; this tradeoff should be well examined before training.
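A runnable sketch of the score-threshold filtering suggested above (not part of the original card; the percentile choice is an arbitrary illustration, and whether to keep scores above or below the threshold depends on the tradeoff described in the card):

```python
import numpy as np
from datasets import load_dataset

ds = load_dataset("crumb/c4-benchfilter-nano", split="train")

# Pick a threshold from the empirical score distribution rather than an
# absolute value, since the score scale is not documented in the card.
scores = np.array(ds["score"])
thresh = np.percentile(scores, 50)  # illustrative: the median score

filtered = ds.filter(lambda x: x["score"] > thresh)
print(f"kept {len(filtered)} of {len(ds)} samples")
```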
teknium/dataforge-economics
--- language: - eng pretty_name: "DataForge-Economics" tags: - economics license: mit --- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/YmaINbgYmLpgTGR6ESXji.png) # Dataset Card for dataforge-economics ## Table of Contents - [Overview](#overview) - [Dataset Description](#dataset-description) - [Data Collection and Synthesis](#data-collection-and-synthesis) - [Data Structure](#data-structure) - [Licensing, Privacy, and Ethics](#licensing-privacy-and-ethics) - [Access](#access) - [Usage](#usage) - [Citation](#citation) - [Contributions](#contributions) ## Overview This dataset, `teknium/dataforge-economics`, is a specialized collection of 1,000 synthetic examples in the field of economics. It has been generated using OpenAI's GPT-4 and a custom data synthesis pipeline named DataForge, developed by me. ## Dataset Description ### Data Collection and Synthesis The data in `teknium/dataforge-economics` has been synthetically generated using OpenAI's GPT-4 language model. The synthesis process was enhanced and structured using the DataForge pipeline, which incorporates domain-specific knowledge and ensures relevance in economics topics. ### Data Structure - **Size of dataset:** 1000 examples - **Type of data:** Textual (Economics domain-specific) - **Data format:** JSON - **Fields:** - id: a randomly generated uuid - conversations: single turn human & gpt turns in sharegpt format - source: the dataset name itself, for metadata purposes when merging with others - topic: the sub-topic for the domain - system_prompt: type of system prompt used for generating the response. ## Licensing, Privacy, and Ethics - **License:** MIT License - **Special Considerations:** This dataset is purely generated from GPT-4 data; some information may be incorrect or invalid. - **Privacy:** As the dataset is synthetically generated, it does not contain any real individual's data. ## Access - **Availability:** General Access ## Usage This dataset is a domain specialist dataset, the first to use my new pipeline called DataForge, which can create domain expert knowledge (and tasks, as seen in the Trismegistus occult dataset). This dataset was a proof of concept to improve upon the Orca model's economics expertise, which surpassed my custom benchmark for economics when finetuned over Stable Beluga.
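A minimal inspection sketch (not part of the original card; the `train` split name and the ShareGPT-style `from`/`value` keys inside `conversations` are assumptions based on the field descriptions above):

```python
from datasets import load_dataset

ds = load_dataset("teknium/dataforge-economics", split="train")

example = ds[0]
print(example["topic"])
print(example["system_prompt"])
# "conversations" holds a single-turn human/gpt exchange in ShareGPT format.
for turn in example["conversations"]:
    print(turn["from"], ":", turn["value"][:200])
```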
mismatch-quest/SeeTRUE-Feedback
--- configs: - config_name: default data_files: - split: test path: "test/*" annotations_creators: - crowdsourced language: - en language_creators: - found license: - cc-by-4.0 multilinguality: - monolingual paperswithcode_id: seetrue-feedback pretty_name: SeeTRUE-feedback size_categories: - 1K<n<10K source_datasets: - original tags: - text-image-matching task_ids: [] extra_gated_prompt: "By clicking on “Access repository” below, you also agree that you are using it solely for research purposes, and that SeeTRUE-Feedback should be used as a *TEST SET*, not as a training set, and especially not to train commercial chatbots. Do not hesitate to contact briangordon@mail.tau.ac.il or yonatanbitton@google.com if you have questions about this license." --- # Dataset Card for SeeTRUE-Feedback - [Dataset Description](#dataset-description) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description The SeeTRUE-Feedback dataset is a diverse benchmark for the meta-evaluation of image-text matching/alignment feedback. It aims to overcome limitations in current benchmarks, which primarily focus on predicting a matching score between 0-1. SeeTRUE provides, for each row, the original caption, feedback related to text-image misalignment, and the caption+visual source of misalignments (including a bounding box for the visual misalignment). ### Languages The dataset is in English. ## Dataset Structure ### Data Fields - image_caption: Caption associated with the image. - image_name: The name of the image file. - dataset_source: The source/origin dataset of the image. - id_in_source_dataset: The ID of the row in the dataset it originates from. - image_url: An S3 link from which you can download the image. - human_feedback: Human-annotated feedback about image-text misalignment. - feedback: Summary of feedback consolidated into a single entry (Generated by LLM: PaLM-2) - feedback_clean: A parsed and "clean" version of the `feedback` field. - caption_misalignment: Source of misalignment in the image caption. - visual_misalignment: Source of misalignment in the image. - bbox_GroundingDino: Detected visual misalignment bounding-box in GroundingDino output format. - bbox_PaLI: Detected visual misalignment bounding-box in PaLI output format. ### Data Splits SeeTRUE-Feedback contains a single split: TEST, and should not be used for training. ## Dataset Creation The dataset has been created by sourcing and matching images and text from multiple datasets. More information in the paper: <TODO> ### Licensing Information The dataset is under the CC-By 4.0 license. ### Citation Information TODO
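A minimal loading sketch for the single test split described above (not part of the original card; field access follows the field list in this card, and the gating terms above may require authenticating and accepting the license first):

```python
from datasets import load_dataset

# SeeTRUE-Feedback ships a single TEST split and is meant for evaluation only.
ds = load_dataset("mismatch-quest/SeeTRUE-Feedback", split="test")

row = ds[0]
print(row["image_caption"])
print(row["feedback_clean"])
print(row["caption_misalignment"], "|", row["visual_misalignment"])
print(row["image_url"])  # S3 link from which the image can be downloaded
```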
Luckyjhg/Geo170K
--- configs: - config_name: default data_files: - split: qa_tuning path: data/qa_tuning-* - split: alignment path: data/alignment-* dataset_info: features: - name: image dtype: string - name: conversations list: - name: from dtype: string - name: value dtype: string splits: - name: qa_tuning num_bytes: 93111889 num_examples: 117205 - name: alignment num_bytes: 20241610 num_examples: 60252 download_size: 23754996 dataset_size: 113353499 --- # Dataset Card for "Geo170K" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
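A minimal loading sketch (not part of the original card; the split names come from the YAML header above):

```python
from datasets import load_dataset

# The YAML header above declares two splits: "qa_tuning" and "alignment".
ds = load_dataset("Luckyjhg/Geo170K")
print(ds)  # shows both splits and their sizes

sample = ds["alignment"][0]
print(sample["image"])  # image reference, stored as a string
for turn in sample["conversations"]:
    print(turn["from"], ":", turn["value"])
```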
lmms-lab/VisitBench
--- dataset_info: features: - name: instruction_category dtype: string - name: instruction dtype: string - name: reference_output dtype: string - name: is_multiple_images dtype: bool - name: image_0 dtype: image - name: image_1 dtype: image - name: image_2 dtype: image - name: image_3 dtype: image - name: image_4 dtype: image - name: image_5 dtype: image - name: image_6 dtype: image - name: image_7 dtype: image - name: image_8 dtype: image - name: image_9 dtype: image - name: image_info dtype: string - name: human_ratings_gpt4_correct dtype: bool - name: human_ratings_problem_in_caption dtype: bool - name: human_ratings_problem_in_gpt4 dtype: bool - name: public_images_metadata dtype: string splits: - name: multi_images num_bytes: 408530373.0 num_examples: 678 - name: single_image num_bytes: 408530373.0 num_examples: 678 download_size: 813204656 dataset_size: 817060746.0 configs: - config_name: default data_files: - split: multi_images path: data/multi_images-* - split: single_image path: data/single_image-* --- # Dataset Card for "VisitBench" <p align="center" width="100%"> <img src="https://i.postimg.cc/g0QRgMVv/WX20240228-113337-2x.png" width="100%" height="80%"> </p> # Large-scale Multi-modality Models Evaluation Suite > Accelerating the development of large-scale multi-modality models (LMMs) with `lmms-eval` 🏠 [Homepage](https://lmms-lab.github.io/) | 📚 [Documentation](docs/README.md) | 🤗 [Huggingface Datasets](https://huggingface.co/lmms-lab) # This Dataset This is a formatted version of [VisitBench](https://visit-bench.github.io/). It is used in our `lmms-eval` pipeline to allow for one-click evaluations of large multi-modality models. ``` @article{bitton2023visit, title={Visit-bench: A benchmark for vision-language instruction following inspired by real-world use}, author={Bitton, Yonatan and Bansal, Hritik and Hessel, Jack and Shao, Rulin and Zhu, Wanrong and Awadalla, Anas and Gardner, Josh and Taori, Rohan and Schmidt, Ludwig}, journal={arXiv preprint arXiv:2308.06595}, year={2023} } ``` The dataset includes visit_bench_single.csv and visit_bench_multi.csv, 1.2k items in total. Some items come with a `reference_output`, directly copied from [here](https://docs.google.com/spreadsheets/d/1hi8rGXf2WYufkFvGJ2MZ92JNChliM1QEJwZxNboUFlE/edit#gid=696111549). For each split, please follow the steps here to submit to VisitBench. ## Leaderboard The link to our public leaderboard is present [here](https://visit-bench.github.io/). ## How to add new models to the Leaderboard? 1. You can access the single-image and multiple-image datasets above. 2. For every instance (row) in the dataset csv, you would have your model's predictions. 3. Create a `predictions.csv` with 4 mandatory columns `instruction`, `instruction_category`, `image` (single-image case) / `images` (multi-image case), `<model name> prediction` (see the sketch after this list). Here, `<model name>` should be your model name, with a version if multiple versions are available. 4. Send the `predictions.csv` to us at `yonatanbitton1@gmail.com`. 5. We will use our internal prompting sandbox with reference-free GPT-4 as an evaluator. 6. We will add your model to the leaderboard once we receive all the pairwise judgments from the sandbox. 7. You will receive a confirmation email as soon as your model has been added to the leaderboard. 8. Estimated time for Steps 4-7 is 1-2 weeks; however, we will try to work on your prediction files as soon as they are sent. 
Please include in your email 1) a name for your model, 2) your team name (including your affiliation), and optionally, 3) a github repo or paper link. [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
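A minimal sketch of step 3 above — building a `predictions.csv` with the four mandatory columns. The model name, file name, and all example values below are purely illustrative.

```python
import pandas as pd

# Four mandatory columns from step 3; "MyModel-v1 prediction" stands in for
# "<model name> prediction", and every value below is illustrative only.
rows = [
    {
        "instruction": "What is unusual about this image?",
        "instruction_category": "example_category",
        "image": "example_0.png",
        "MyModel-v1 prediction": "The skateboard is balanced on a fire hydrant.",
    }
]
pd.DataFrame(rows).to_csv("predictions.csv", index=False)
```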
reknine69/QA-citations
--- task_categories: - question-answering language: - en size_categories: - 1K<n<10K --- QA pairs with context drawn from public documentation for Zerto, Carbonite, VMware, etc.
Rogendo/English-Swahili-Sentence-Pairs
--- task_categories: - translation - text-classification - summarization - feature-extraction language: - en - sw pretty_name: Eng-Swa-Pairs size_categories: - 100K<n<1M ---
retkowski/ytseg
--- license: cc-by-nc-sa-4.0 language: - en tags: - text segmentation - smart chaptering - segmentation - youtube - asr pretty_name: YTSeg size_categories: - 10K<n<100K task_categories: - token-classification - automatic-speech-recognition --- # From Text Segmentation to Smart Chaptering: A Novel Benchmark for Structuring Video Transcriptions We present <span style="font-variant:small-caps; font-weight:700;">YTSeg</span>, a topically and structurally diverse benchmark for the text segmentation task based on YouTube transcriptions. The dataset comprises 19,299 videos from 393 channels, amounting to 6,533 content hours. The topics are wide-ranging, covering domains such as science, lifestyle, politics, health, economy, and technology. The videos are from various types of content formats, such as podcasts, lectures, news, corporate events \& promotional content, and, more broadly, videos from individual content creators. We refer to the **paper** ([acl](https://aclanthology.org/2024.eacl-long.25/) | [arXiv](https://arxiv.org/abs/2402.17633)) for further information. We provide both text and audio data as well as a download script for the video data. ## Data Overview ### <span style="font-variant:small-caps;">YTSeg</span> Each video is represented as a JSON object with the following fields: | Field | Description | |--------------|------------------------------------------------| | `text` | A flat list of sentences. | | `targets` | The target segmentation as string of binary values (e.g., `000100000010`). | | `channel_id` | The YouTube channel ID which this video belongs to. | | `video_id` | The YouTube video ID. | | `audio_path` | Path to the .mp3 file of the video. | | Partition | # Examples | |------------|--------------| | Training | 16,404 (85%) | | Validation | 1,447 (7.5%) | | Testing | 1,448 (7.5%) | | Total | 19,229 | ### <span style="font-variant:small-caps;">YTSeg[Titles]</span> Each chapter of a video is represented as a JSON object with the following fields: | Field | Description | |--------------|------------------------------------------------| | `input` | The complete chapter/section text. | | `input_with_chapters` | The complete chapter/section text with previous section titles prepended. | | `target` | The target chapter title. | | `channel_id` | The YouTube channel ID which this chapter's video belongs to. | | `video_id` | The YouTube video ID which this chapter belongs to. | | `chapter_idx` | The index and placement of the chapter in the video (e.g., the first chapter has index `0`). | | Partition | # Examples | |------------|--------------| | Training | 146,907 (84.8%)| | Validation | 13,206 (7.6%) | | Testing | 13,082 (7.6%) | | Total | 173,195 | ### Audio Data We provide audio files for all examples in the dataset, preprocessed into the .mp3 format with a standardized sample rate of 16,000 Hz and a single channel (mono). These files are organized within the directory structure as follows: `data/audio/<channel_id>/<video_id>.mp3`. ### Video Data A download script for the video and audio data is provided. ```py python download_videos.py ``` In the script, you can further specify a target folder (default is `./video`) and target formats in a priority list. ## Loading Text Data This repository comes with a simple, exemplary script to read in the text data with `pandas`. 
```py from load_data import get_partition test_data = get_partition('test') ``` Equivalently, to read in <span style="font-variant:small-caps;">YTSeg[Titles]</span>: ```py from load_data import get_title_partition test_data = get_title_partition('test') ``` ## Citing We kindly request you to cite our corresponding EACL 2024 paper if you use our dataset. ``` @inproceedings{retkowski-waibel-2024-text, title = "From Text Segmentation to Smart Chaptering: A Novel Benchmark for Structuring Video Transcriptions", author = "Retkowski, Fabian and Waibel, Alexander", editor = "Graham, Yvette and Purver, Matthew", booktitle = "Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers)", month = mar, year = "2024", address = "St. Julian{'}s, Malta", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2024.eacl-long.25", pages = "406--419", abstract = "Text segmentation is a fundamental task in natural language processing, where documents are split into contiguous sections. However, prior research in this area has been constrained by limited datasets, which are either small in scale, synthesized, or only contain well-structured documents. In this paper, we address these limitations by introducing a novel benchmark YTSeg focusing on spoken content that is inherently more unstructured and both topically and structurally diverse. As part of this work, we introduce an efficient hierarchical segmentation model MiniSeg, that outperforms state-of-the-art baselines. Lastly, we expand the notion of text segmentation to a more practical {``}smart chaptering{''} task that involves the segmentation of unstructured content, the generation of meaningful segment titles, and a potential real-time application of the models.", } ``` ## License The dataset is available under the **Creative Commons Attribution-NonCommercial-ShareAlike (CC BY-NC-SA) 4.0** license. We note that we do not own the copyright of the videos and as such opted to release the dataset with a non-commercial license, with the intended use to be in research and education.
ananyarn/Algorithm_and_Python_Source_Code
--- license: apache-2.0 language: - en tags: - Python - Code Generation - Algorithm - Pseudo-code - Source Code - Programming - Python Programming --- _**Algorithm_and_Python_Source_Code**_ <br /> This dataset provides different algorithms and their corresponding source code in Python. <br /> <br /> Credits: The source code given here is taken from the "iamtarun/python_code_instructions_18k_alpaca" dataset on Hugging Face.
open-spaced-repetition/FSRS-Anki-20k
--- license: other license_name: fsrs-anki-20k license_link: LICENSE --- # Introduction FSRS-Anki-20k is a dataset of 20k collections from Anki for FSRS project. It is a random sample of collections with 5000+ revlog entries, so it should contain a mix of older (still active) users, and newer users. Entries are pre-sorted in (cid, id) order. There are two versions of the dataset: `./revlogs` and `./dataset`. The `./revlogs` version contains the raw revlog entries, while the `./dataset` version contains the dataset preprocessed by `./revlogs2dataset.py`. The size of the raw revlog entries is ~50GB, and the size of the preprocessed dataset is ~20GB. # Data Format ## Revlogs Please see the protocol buffer definition in [stats.proto](./stats.proto). For the fields that are not self-explanatory, please refer to the [Anki Database Structure](https://github.com/ankidroid/Anki-Android/wiki/Database-Structure). ## Dataset The columns of the dataset are as follows: - card_id: the unique identifier for each flashcard. - review_th: the ordinal number of the review among all reviews done by the user. - delta_t: the number of days since the last review of this flashcard. -1 if this is the first review. - rating: the rating given by the user for this review, 1: again, 2: hard, 3: good, 4: easy. Where only 'again' indicates a failed recall, and all other scores indicate a successful recall. # Preprocess 1. read the revlog entries from the .revlog file. 2. remove the revlog entries generated by reviews in filtered decks when the user disables the option "[Reschedule cards based on my answers in this deck](https://docs.ankiweb.net/filtered-decks.html?highlight=Reschedule%20cards%20based%20on%20my%20answers#rescheduling)". 3. remove the revlog entries generated by manual (re)scheduling like [Forget](https://docs.ankiweb.net/studying.html?highlight=Forget%20card#editing-and-more) and [Set Due Date](https://docs.ankiweb.net/studying.html?highlight=set%20due%20date#editing-and-more). 4. keep the revlog entries from the latest learning start sequence for each flashcard. 5. calculate the time between reviews of each flashcard (delta_t), review order (review_th), and encode card_id numerically. 6. save the dataset to a .csv file. # License This dataset is released under the [FSRS-Anki-20k License](LICENSE). # Related Projects [SRS Benchmark](https://github.com/open-spaced-repetition/fsrs-benchmark)
zhongshsh/CLoT-Oogiri-GO
--- license: mit task_categories: - visual-question-answering - question-answering language: - en - zh - ja pretty_name: Oogiri-GO size_categories: - 100K<n<1M --- <p align="center"> <img src="logo.png" width="550" height="150"> </p> # Oogiri-GO Dataset Card [Project Page](https://zhongshsh.github.io/CLoT) | [Paper](https://arxiv.org/abs/2312.02439) | [Code](https://github.com/sail-sg/CLoT) | [Model](https://huggingface.co/zhongshsh/CLoT-cn) **Data discription**: Oogiri-GO is a multimodal and multilingual humor dataset, and contains more than 130,000 Oogiri samples in English (en.jsonl), Chinese (cn.jsonl), and Japanese (jp.jsonl). Notably, in Oogiri-GO, 77.95\% of samples are annotated with human preferences, namely the number of likes, indicating the popularity of a response. As illustrated in Fig. 1, Oogiri-GO contains three types of Oogiri games according to the input that can be images, text, or both, and are respectively called "Text to Text" (T2T), "Image to Text" (I2T), and "Image & Text to Text " (IT2T) for brevity. <p align="center"> <img src="oogiri.png" width="550" height="150"> Figure 1. Examples of the three types of LoT-based Oogiri games. Players are required to make surprising and creative humorous responses (blue box) to the given multimodal information e.g., images, text, or both. </p> Each line in the `jsonl` files represents a sample, formatted as follows: ``` {"type": "I2T", "question": null, "image": "5651380", "text": "It wasn't on purpose, I'm sorry!", "star": 5} ``` where `type` indicates the type of Oogiri game for the sample (T2T, I2T, IT2T); `question` represents the text question for the sample, with `None` for types other than T2T; `image` indicates the image question for the sample, with None for T2T samples; `text` is the text response for the sample; and `star` denotes the human preference. In Japanese data (`jp.jsonl`) specifically, the questions for `T2T` type may appear as 'None' because the question text is in image form. **Data distribution**: Table summarizes the distribution of these game types. For training purposes, 95% of the samples are randomly selected to construct the training dataset, while the remaining 5% form the test dataset for validation and analysis. | Category | English | Chinese | Japanese | |:--------:|:-------:|:-------:|:---------:| | I2T | 17336 | 32130 | 40278 | | T2T | 6433 | 15797 | 11842 | | IT2T | -- | 912 | 9420 | **Project page for more information**: https://zhongshsh.github.io/CLoT **License**: Creative Commons Attribution 4.0 International. We also adhere to the terms of use from any of the data sources, such as [Bokete](https://bokete.jp/about/rule) and [Zhihu](https://www.zhihu.com/term/zhihu-terms). If you have any concerns regarding this dataset, especially if you believe it infringes upon your legal rights, please feel free to contact us. We will promptly review any issues raised and respond accordingly. **Citation** ``` @misc{zhong2023clot, ย  title={Let's Think Outside the Box: Exploring Leap-of-Thought in Large Language Models with Creative Humor Generation}, ย  author={Zhong, Shanshan and Huang, Zhongzhan and Gao, Shanghua and Wen, Weushao and Lin, Liang and Zitnik, Marinka and Zhou, Pan}, ย  journal={arXiv preprint arXiv:2312.02439}, ย  year={2023} } ```
facebook/toolverifier
--- dataset_info: - config_name: train features: - name: text dtype: string - config_name: tools features: - name: Name dtype: string - name: Description dtype: string configs: - config_name: train data_files: - split: train path: train.csv - config_name: tools data_files: - split: tools path: tools.csv --- # TOOLVERIFIER: Generalization to New Tools via Self-Verification This repository contains the ToolSelect dataset which was used to fine-tune Llama-2 70B for tool selection. ## Data **ToolSelect** data is synthetic training data generated for tool selection task using Llama-2 70B and Llama-2-Chat-70B. It consists of 555 samples corresponding to 173 tools. Each training sample is composed of a user instruction, a candidate set of tools that includes the ground truth tool, and a reasoning note elucidating the correct choice of tool. For example, ``` User: 10 reviews for "Harry Potter and the Philosopher's Stone". ### Tool Choices: Bank Account Number generator = The Bank Account Number tool generates a random bank account number for a bank. Train Ticket Purchase = The Train Ticket Purchase tool gets train tickets for a train, provided stations, and date information. The Cricket Score tool = The Cricket Score tool gets the score for a match that is happening at a given location and date. Book Review = The Book Review tool gets the top-rated book reviews for a particular book. Grocery Shopping with discounts = The Grocery Shopping tool calculates the expense of grocery shopping based on the menu. Stock Price at location and date = The Stock Price tool gets the stock price for a company at a given location and date. Movie Recommendation = The Movie Recommendation tool recommends movies based on the user's preferences. News = The News tool gets the top news for a particular topic or query. ### Thought: I recommended the Book Review tool because the user asked for reviews of a specific book, "Harry Potter and the Philosopher's Stone". The Book Review tool is designed to provide top-rated book reviews for a particular book, making it the most suitable tool for this request. It will retrieve the top 10 reviews for the book, as requested by the user. Act: CALLTOOL["Book Review"] ``` ### Files The `data/` folder has 2 files: * `train.csv` - this file contains the training samples. * `tools.csv` - this file contains names and descriptions of the generated synthetic tools. To learn more about the data generation procedure, we direct readers to section 2.1 of our paper. Paper: https://arxiv.org/abs/2402.14158 ## Citation ``` @article{mekala2024toolverifier, title={TOOLVERIFIER: Generalization to New Tools via Self-Verification}, author={Mekala, Dheeraj and Weston, Jason and Lanchantin, Jack and Raileanu, Roberta and Lomeli, Maria and Shang, Jingbo and Dwivedi-Yu, Jane}, journal={arXiv preprint arXiv:2402.14158}, year={2024} } ``` ## Licensing See our LICENSE file for licensing details.
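A minimal loading sketch (not part of the original card; the config and split names come from the YAML header of this card):

```python
from datasets import load_dataset

# Config "train": one "text" field per row, formatted like the example above.
train = load_dataset("facebook/toolverifier", "train", split="train")
print(train[0]["text"][:300])

# Config "tools": names and descriptions of the generated synthetic tools.
tools = load_dataset("facebook/toolverifier", "tools", split="tools")
print(tools[0]["Name"], "-", tools[0]["Description"])
```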
ResplendentAI/NSFW_RP_Format_DPO
--- license: apache-2.0 task_categories: - text-generation language: - en tags: - not-for-all-audiences pretty_name: NSFW RP Format DPO --- This dataset aims to align a model to output the most common roleplaying format: "dialogue" \*action\* This dataset contains NSFW content.
Locutusque/OpenCerebrum-SFT
--- license: apache-2.0 task_categories: - text-generation - question-answering language: - en tags: - code - math - chemistry - biology size_categories: - 1M<n<10M --- # OpenCerebrum SFT subset ![image/png](https://th.bing.com/th/id/OIG1.ekOKvHLDWrXLHrZ5CmTQ?pid=ImgGn) ## Description OpenCerebrum is my take on creating an open-source version of Aether Research's proprietary Cerebrum dataset. This repository contains the SFT subset, which contains about 1,200,000 examples. Unfortunately, I was unsure about how I would compress this dataset to just 5,000 examples like in the original Cerebrum dataset. ## Curation This dataset was curated using a simple and logical rationale. The goal was to use datasets that should logically improve the evaluation scores that the original Cerebrum is strong in. See the "Data Sources" section for data source information. ## Data Sources This dataset is an amalgamation including the following sources: - Open-Orca/SlimOrca - glaiveai/glaive-code-assistant - camel-ai/physics - camel-ai/math - camel-ai/chemistry - camel-ai/biology - WizardLM/WizardLM_evol_instruct_V2_196k - microsoft/orca-math-word-problems-200k - grimulkan/theory-of-mind - Vezora/Tested-22k-Python-Alpaca - m-a-p/Code-Feedback - Locutusque/arc-cot - jondurbin/airoboros-2.1 - WizardLM/WizardLM_evol_instruct_70k In future versions, I plan on shrinking this dataset to match the size of the original Cerebrum.
LooksJuicy/ruozhiba
--- license: apache-2.0 task_categories: - text-generation language: - zh --- Inspired by [COIG-CQIA](https://huggingface.co/datasets/m-a-p/COIG-CQIA/blob/main/ruozhiba/ruozhiba_ruozhiba.jsonl), this dataset was built in a similar way, but with a relatively more concise answer style. The curated Ruozhiba questions come from the [interrogative questions](https://docs.qq.com/sheet/DUlZ6aURhamdwb1RO?tab=BB08J2) provided in this [github](https://github.com/Leymore/ruozhiba/tree/main?tab=readme-ov-file) repository; GPT-4 was called to obtain answers, and responses that clearly refused to answer were filtered out.
damlab/uniprot
--- license: mit --- # Dataset Description ## Dataset Summary This dataset is a mirror of the Uniprot/SwissProt database. It contains the names and sequences of >500K proteins. This dataset was parsed from the FASTA file at https://ftp.uniprot.org/pub/databases/uniprot/current_release/knowledgebase/complete/uniprot_sprot.fasta.gz. Supported Tasks and Leaderboards: None Languages: English ## Dataset Structure ### Data Instances Data Fields: id, description, sequence Data Splits: None ## Dataset Creation The dataset was downloaded and parsed into a `dataset` object and uploaded unchanged. Initial Data Collection and Normalization: Dataset was downloaded and curated on 03/09/2022. ## Considerations for Using the Data Social Impact of Dataset: Due to the tendency of HIV to mutate, drug resistance is a common issue when attempting to treat those infected with HIV. Protease inhibitors are a class of drugs to which HIV is known to develop resistance via mutations. Thus, by providing a collection of protease sequences known to be resistant to one or more drugs, this dataset provides a significant collection of data that could be utilized to perform computational analysis of protease resistance mutations. Discussion of Biases: Due to the sampling nature of this database, it is predominantly composed of genes from "well studied" genomes. This may impact the "broadness" of the genes contained. ## Additional Information: - Dataset Curators: Will Dampier - Citation Information: TBA
valurank/Adult-content-dataset
--- license: - other language: - en multilinguality: - monolingual task_categories: - text-classification task_ids: [] --- # Dataset Card for Adult_Content_Detection ## Table of Contents - [Dataset Description](#dataset-description) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Source Data](#source-data) ## Dataset Description 850 article descriptions classified into two categories: Adult and Non_Adult ## Languages The text in the dataset is in English ## Dataset Structure The dataset consists of two columns, Description and Category. The Description column contains the overview of the article, and the Category column contains the class each article belongs to ## Source Data The dataset is scraped from different platforms
yuningm/citesum
--- language: - en license: cc-by-nc-4.0 multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - summarization paperswithcode_id: citesum --- # CiteSum ## Description CiteSum: Citation Text-guided Scientific Extreme Summarization and Low-resource Domain Adaptation. CiteSum contains TLDR summaries for scientific papers from their citation texts without human annotation, making it around 30 times larger than the previous human-curated dataset SciTLDR. ## Homepage https://github.com/morningmoni/CiteSum ## Paper https://arxiv.org/abs/2205.06207 ## Authors ### Yuning Mao, Ming Zhong, Jiawei Han #### University of Illinois Urbana-Champaign {yuningm2, mingz5, hanj}@illinois.edu ## Dataset size Train: 83304 Validation: 4721 Test: 4921 ## Data details - src (string): source text. long description of paper - tgt (string): target text. tldr of paper - paper_id (string): unique id for the paper - title (string): title of the paper - discipline (dict): - venue (string): Where the paper was published (conference) - journal (string): Journal in which the paper was published - mag_field_of_study (list[str]): scientific fields that the paper falls under. Example: ``` { 'src': 'We describe a convolutional neural network that learns feature representations for short textual posts using hashtags as a supervised signal. The proposed approach is trained on up to 5.5 billion words predicting 100,000 possible hashtags. As well as strong performance on the hashtag prediction task itself, we show that its learned representation of text (ignoring the hashtag labels) is useful for other tasks as well. To that end, we present results on a document recommendation task, where it also outperforms a number of baselines.', 'tgt': 'A convolutional neural network model for predicting hashtags was proposed in REF .', 'paper_id': '14697143', 'title': '#TagSpace: Semantic Embeddings from Hashtags', 'discipline': { 'venue': 'EMNLP', 'journal': None, 'mag_field_of_study': ['Computer Science'] } } ``` ## Using the dataset ```python from datasets import load_dataset ds = load_dataset("yuningm/citesum") ``` ## Data location https://drive.google.com/file/d/1ndHCREXGSPnDUNllladh9qCtayqbXAfJ/view
Paul/hatecheck-arabic
--- annotations_creators: - crowdsourced language_creators: - expert-generated language: - ar license: - cc-by-4.0 multilinguality: - monolingual pretty_name: Arabic HateCheck size_categories: - 1K<n<10K source_datasets: - original task_categories: - text-classification task_ids: - hate-speech-detection --- # Dataset Card for Multilingual HateCheck ## Dataset Description Multilingual HateCheck (MHC) is a suite of functional tests for hate speech detection models in 10 different languages: Arabic, Dutch, French, German, Hindi, Italian, Mandarin, Polish, Portuguese and Spanish. For each language, there are 25+ functional tests that correspond to distinct types of hate and challenging non-hate. This allows for targeted diagnostic insights into model performance. For more details, please refer to our paper about MHC, published at the 2022 Workshop on Online Abuse and Harms (WOAH) at NAACL 2022. If you are using MHC, please cite our work! - **Paper:** Rรถttger et al. (2022) - Multilingual HateCheck: Functional Tests for Multilingual Hate Speech Detection Models. https://arxiv.org/abs/2206.09917 - **Repository:** https://github.com/rewire-online/multilingual-hatecheck - **Point of Contact:** paul@rewire.online ## Dataset Structure The csv format mostly matches the original HateCheck data, with some adjustments for specific languages. **mhc_case_id** The test case ID that is unique to each test case across languages (e.g., "mandarin-1305") **functionality** The shorthand for the functionality tested by the test case (e.g, "target_obj_nh"). The same functionalities are tested in all languages, except for Mandarin and Arabic, where non-Latin script required adapting the tests for spelling variations. **test_case** The test case text. **label_gold** The gold standard label ("hateful" or "non-hateful") of the test case. All test cases within a given functionality have the same gold standard label. **target_ident** Where applicable, the protected group that is targeted or referenced in the test case. All HateChecks cover seven target groups, but their composition varies across languages. **ref_case_id** For hateful cases, where applicable, the ID of the hateful case which was perturbed to generate this test case. For non-hateful cases, where applicable, the ID of the hateful case which is contrasted by this test case. **ref_templ_id** The equivalent to ref_case_id, but for template IDs. **templ_id** The ID of the template from which the test case was generated. **case_templ** The template from which the test case was generated (where applicable). **gender_male** and **gender_female** For gender-inflected languages (French, Spanish, Portuguese, Hindi, Arabic, Italian, Polish, German), only for cases where gender inflection is relevant, separate entries for gender_male and gender_female replace case_templ. **label_annotated** A list of labels given by the three annotators who reviewed the test case (e.g., "['hateful', 'hateful', 'hateful']"). **label_annotated_maj** The majority vote of the three annotators (e.g., "hateful"). In some cases this differs from the gold label given by our language experts. **disagreement_in_case** True if label_annotated_maj does not match label_gold for the entry. **disagreement_in_template** True if the test case is generated from an IDENT template and there is at least one case with disagreement_in_case generated from the same template. This can be used to exclude entire templates from MHC.
imodels/diabetes-readmission
---
annotations_creators: []
language: []
language_creators: []
license: []
multilinguality: []
pretty_name: diabetes-readmission
size_categories:
- 100K<n<1M
source_datasets: []
tags:
- interpretability
- fairness
- medicine
task_categories:
- tabular-classification
task_ids: []
---

Port of the diabetes-readmission dataset from UCI (link [here](https://archive.ics.uci.edu/ml/datasets/diabetes+130-us+hospitals+for+years+1999-2008)). See details there and use carefully.

Basic preprocessing done by the [imodels team](https://github.com/csinva/imodels) in [this notebook](https://github.com/csinva/imodels-data/blob/master/notebooks_fetch_data/00_get_datasets_custom.ipynb).

The target is the binary outcome `readmitted`.

### Sample usage

Load the data:

```python
import pandas as pd
from datasets import load_dataset

dataset = load_dataset("imodels/diabetes-readmission")
df = pd.DataFrame(dataset['train'])
X = df.drop(columns=['readmitted'])
y = df['readmitted'].values
```

Fit a model:

```python
import imodels
import numpy as np

m = imodels.FIGSClassifier(max_rules=5)
m.fit(X, y)
print(m)
```

Evaluate:

```python
# Build the test matrices from the test split (not the training dataframe)
df_test = pd.DataFrame(dataset['test'])
X_test = df_test.drop(columns=['readmitted'])
y_test = df_test['readmitted'].values
print('accuracy', np.mean(m.predict(X_test) == y_test))
```
ysharma/short_jokes
--- license: mit --- **Context** Generating humor is a complex task in the domain of machine learning, and it requires the models to understand the deep semantic meaning of a joke in order to generate new ones. Such problems, however, are difficult to solve due to a number of reasons, one of which is the lack of a database that gives an elaborate list of jokes. Thus, a large corpus of over 0.2 million jokes has been collected by scraping several websites containing funny and short jokes. You can visit the [Github repository](https://github.com/amoudgl/short-jokes-dataset) from [amoudgl](https://github.com/amoudgl) for more information regarding collection of data and the scripts used. **Content** This dataset is in the form of a csv file containing 231,657 jokes. Length of jokes ranges from 10 to 200 characters. Each line in the file contains a unique ID and joke. **Disclaimer** It has been attempted to keep the jokes as clean as possible. Since the data has been collected by scraping websites, it is possible that there may be a few jokes that are inappropriate or offensive to some people. **Note** This dataset is taken from Kaggle dataset that can be found [here](https://www.kaggle.com/datasets/abhinavmoudgil95/short-jokes).
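**Sample usage**

A minimal loading sketch with the `datasets` library. The column names ("ID", "Joke") follow the original Kaggle CSV and are an assumption here, so they are inspected rather than hard-coded:

```python
from datasets import load_dataset

ds = load_dataset("ysharma/short_jokes", split="train")

# Check the actual column names before relying on them
print(ds.column_names)

# Print the first few rows
for row in ds.select(range(5)):
    print(row)
```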
khaclinh/pp4av
--- annotations_creators: - expert-generated language_creators: - found language: - en license: - cc-by-nc-nd-4.0 multilinguality: - monolingual size_categories: - 1K<n<10K source_datasets: - extended task_categories: - object-detection task_ids: - face-detection pretty_name: PP4AV tags: - license-plate-detection --- # Dataset Card for PP4AV ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Languages](#languages) - [Dataset Creation](#dataset-creation) - [Source Data](#source-data) - [Annotations](#annotations) - [Dataset folder](#folder) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Baseline Model](#baseline-model) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://github.com/khaclinh/pp4av - **Repository:** https://github.com/khaclinh/pp4av - **Baseline model:** https://huggingface.co/spaces/khaclinh/self-driving-anonymization - **Paper:** [PP4AV: A benchmarking Dataset for Privacy-preserving Autonomous Driving] - **Point of Contact:** linhtk.dhbk@gmail.com ### Dataset Summary PP4AV is the first public dataset with faces and license plates annotated with driving scenarios. P4AV provides 3,447 annotated driving images for both faces and license plates. For normal camera data, dataset sampled images from the existing videos in which cameras were mounted in moving vehicles, running around the European cities. The images in PP4AV were sampled from 6 European cities at various times of day, including nighttime. This dataset use the fisheye images from the WoodScape dataset to select 244 images from the front, rear, left, and right cameras for fisheye camera data. PP4AV dataset can be used as a benchmark suite (evaluating dataset) for data anonymization models in autonomous driving. ### Languages English ## Dataset Creation ### Source Data #### Initial Data Collection and Normalization The objective of PP4AV is to build a benchmark dataset that can be used to evaluate face and license plate detection models for autonomous driving. For normal camera data, we sampled images from the existing videos in which cameras were mounted in moving vehicles, running around the European cities. We focus on sampling data in urban areas rather than highways in order to provide sufficient samples of license plates and pedestrians. The images in PP4AV were sampled from **6** European cities at various times of day, including nighttime. The source data from 6 cities in European was described as follow: - `Paris`: This subset contains **1450** images of the car driving down a Parisian street during the day. The video frame rate is 30 frames per second. The video is longer than one hour. We cut a shorter video for sampling and annotation. 
The original video can be found at the following URL: URL: [paris_youtube_video](https://www.youtube.com/watch?v=nqWtGWymV6c) - `Netherland day time`: This subset consists of **388** images of Hague, Amsterdam city in day time. The image of this subset are sampled from the bellow original video: URL: [netherland_youtube_video](https://www.youtube.com/watch?v=Xuo4uCZxNrE) The frame rate of the video is 30 frames per second. We cut a shorter video for sampling and annotation. The original video was longer than a half hour. - `Netherland night time`: This subset consists of **824** images of Hague, Amsterdam city in night time sampled by the following original video: URL: [netherland_youtube_video](https://www.youtube.com/watch?v=eAy9eHsynhM) The frame rate of the video is 30 frames per second. We cut a shorter video for sampling and annotation. The original video was longer than a half hour. - `Switzerland`: This subset consists of **372** images of Switzerland sampled by the following video: URL: [switzerland_youtube_video](https://www.youtube.com/watch?v=0iw5IP94m0Q) The frame rate of the video is 30 frames per second. We cut a shorter video for sampling and annotation. The original video was longer than one hour. - `Zurich`: This subset consists of **50** images of Zurich city provided by the Cityscapes training set in package [leftImg8bit_trainvaltest.zip](https://www.cityscapes-dataset.com/file-handling/?packageID=3) - `Stuttgart`: This subset consists of **69** images of Stuttgart city provided by the Cityscapes training set in package [leftImg8bit_trainvaltest.zip](https://www.cityscapes-dataset.com/file-handling/?packageID=3) - `Strasbourg`: This subset consists of **50** images of Strasbourg city provided by the Cityscapes training set in package [leftImg8bit_trainvaltest.zip](https://www.cityscapes-dataset.com/file-handling/?packageID=3) We use the fisheye images from the WoodScape dataset to select **244** images from the front, rear, left, and right cameras for fisheye camera data. The source of fisheye data for sampling is located at WoodScape's [Fisheye images](https://woodscape.valeo.com/download). In total, **3,447** images were selected and annotated in PP4AV. ### Annotations #### Annotation process Annotators annotate facial and license plate objects in images. For facial objects, bounding boxes are defined by all detectable human faces from the forehead to the chin to the ears. Faces were labelled with diverse sizes, skin tones, and faces partially obscured by a transparent material, such as a car windshield. For license plate objects, bounding boxes consists of all recognizable license plates with high variability, such as different sizes, countries, vehicle types (motorcycle, automobile, bus, truck), and occlusions by other vehicles. License plates were annotated for vehicles involved in moving traffic. To ensure the quality of annotation, there are two-step process for annotation. In the first phase, two teams of annotators will independently annotate identical image sets. After their annotation output is complete, a merging method based on the IoU scores between the two bounding boxes of the two annotations will be applied. Pairs of annotations with IoU scores above a threshold will be merged and saved as a single annotation. Annotated pairs with IoU scores below a threshold will be considered conflicting. 
In the second phase, two teams of reviewers will inspect the conflicting pairs of annotations for revision before a second merging method similar to the first is applied. The results of these two phases will be combined to form the final annotation. All work is conducted on the CVAT tool https://github.com/openvinotoolkit/cvat.

#### Who are the annotators?

Vantix Data Science team

### Dataset Folder

The `data` folder contains the following files:
- `images.zip`: contains all preprocessed images of the PP4AV dataset. The following folders are included in this `zip` file:
  - `fisheye`: 244 fisheye images in `.png` format
  - `zurich`: 50 images in `.png` format
  - `strasbourg`: 50 images in `.png` format
  - `stuttgart`: 69 images in `.png` format
  - `switzerland`: 372 images in `.png` format
  - `netherlands_day`: 388 images in `.png` format
  - `netherlands_night`: 824 images in `.png` format
  - `paris`: 1450 images in `.png` format
- `annotations.zip`: contains the annotation data corresponding to `images.zip`. The following folders are included in this file:
  - `fisheye`: 244 `.txt` annotation files in `yolo v1.1` format, corresponding to the 244 fisheye images
  - `zurich`: 50 `.txt` annotation files in `yolo v1.1` format, corresponding to the 50 image files of the `zurich` subset
  - `strasbourg`: 50 `.txt` annotation files in `yolo v1.1` format, corresponding to the 50 image files of the `strasbourg` subset
  - `stuttgart`: 69 `.txt` annotation files in `yolo v1.1` format, corresponding to the 69 image files of the `stuttgart` subset
  - `switzerland`: 372 `.txt` annotation files in `yolo v1.1` format, corresponding to the 372 image files of the `switzerland` subset
  - `netherlands_day`: 388 `.txt` annotation files in `yolo v1.1` format, corresponding to the 388 image files of the `netherlands_day` subset
  - `netherlands_night`: 824 `.txt` annotation files in `yolo v1.1` format, corresponding to the 824 image files of the `netherlands_night` subset
  - `paris`: 1450 `.txt` annotation files in `yolo v1.1` format, corresponding to the 1450 image files of the `paris` subset
- `soiling_annotations.zip`: contains the raw annotation data without filtering. The folder structure in this file matches the format of `annotations.zip`.

### Personal and Sensitive Information

[More Information Needed]

## Dataset Structure

### Data Instances

A data point comprises an image and its face and license plate annotations.

```
{
  'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=1920x1080 at 0x19FA12186D8>,
  'objects': {
    'bbox': [
      [0 0.230078 0.317081 0.239062 0.331367],
      [1 0.5017185 0.0306425 0.5185935 0.0410975],
      [1 0.695078 0.0710145 0.7109375 0.0863355],
      [1 0.4089065 0.31646 0.414375 0.32764],
      [0 0.1843745 0.403416 0.201093 0.414182],
      [0 0.7132 0.3393474 0.717922 0.3514285]
    ]
  }
}
```

### Data Fields

- `image`: A `PIL.Image.Image` object containing the image. Note that when accessing the image column (`dataset[0]["image"]`) the image file is automatically decoded. Decoding a large number of image files may take a significant amount of time.
Thus it is important to first query the sample index before the `"image"` column, *i.e.* `dataset[0]["image"]` should **always** be preferred over `dataset["image"][0]`.
- `objects`: a dictionary of face and license plate bounding boxes present on the image
  - `bbox`: the bounding box of each face and license plate (in the [yolo](https://albumentations.ai/docs/getting_started/bounding_boxes_augmentation/#yolo) format). Each row in the annotation `.txt` file for an image `.png` file has the format `<object-class> <x_center> <y_center> <width> <height>`:
    - `object-class`: integer object class, either 0 or 1, where 0 indicates a face object and 1 indicates a license plate object
    - `x_center`: normalized x-axis coordinate of the center of the bounding box. `x_center = <absolute_x_center> / <image_width>`
    - `y_center`: normalized y-axis coordinate of the center of the bounding box. `y_center = <absolute_y_center> / <image_height>`
    - `width`: normalized width of the bounding box. `width = <absolute_width> / <image_width>`
    - `height`: normalized height of the bounding box. `height = <absolute_height> / <image_height>`
    - Example lines in a YOLO v1.1 format `.txt` annotation file (see the parsing sketch at the end of this card):

      ```
      1 0.716797 0.395833 0.216406 0.147222
      0 0.687109 0.379167 0.255469 0.158333
      1 0.420312 0.395833 0.140625 0.166667
      ```

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Baseline Model

Pretrained weights and a demo of the baseline model are available in the [self-driving-anonymization Hugging Face space](https://huggingface.co/spaces/khaclinh/self-driving-anonymization).

### Dataset Curators

Linh Trinh

### Licensing Information

[Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0)](https://creativecommons.org/licenses/by-nc-nd/4.0/).

### Citation Information

```
@article{PP4AV2022,
  title = {PP4AV: A benchmarking Dataset for Privacy-preserving Autonomous Driving},
  author = {Linh Trinh, Phuong Pham, Hoang Trinh, Nguyen Bach, Dung Nguyen, Giang Nguyen, Huy Nguyen},
  booktitle = {IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
  year = {2023}
}
```

### Contributions

Thanks to [@khaclinh](https://github.com/khaclinh) for adding this dataset.
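### Sample annotation parsing

The annotation rows described under "Data Fields" can be converted to absolute pixel coordinates with a few lines of Python. This is a minimal sketch (the helper name `yolo_to_pixel_box` is ours, and the 1920x1080 image size is only the example resolution shown under "Data Instances"):

```python
def yolo_to_pixel_box(line: str, image_width: int, image_height: int):
    """Convert one YOLO-format row to (label, (x_min, y_min, x_max, y_max)) in pixels."""
    object_class, x_center, y_center, width, height = line.split()
    x_c = float(x_center) * image_width
    y_c = float(y_center) * image_height
    w = float(width) * image_width
    h = float(height) * image_height
    label = "face" if int(object_class) == 0 else "license_plate"
    return label, (x_c - w / 2, y_c - h / 2, x_c + w / 2, y_c + h / 2)


# One of the example rows above, assuming a 1920x1080 image
print(yolo_to_pixel_box("1 0.716797 0.395833 0.216406 0.147222", 1920, 1080))
```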
allenai/csabstruct
---
license: apache-2.0
---

# CSAbstruct

CSAbstruct was created as part of *"Pretrained Language Models for Sequential Sentence Classification"* ([ACL Anthology][2], [arXiv][1], [GitHub][6]).
It contains 2,189 manually annotated computer science abstracts with sentences annotated according to their rhetorical roles in the abstract, similar to the [PUBMED-RCT][3] categories.

## Dataset Construction Details

CSAbstruct is a new dataset of annotated computer science abstracts with sentence labels according to their rhetorical roles. The key difference between this dataset and [PUBMED-RCT][3] is that PubMed abstracts are written according to a predefined structure, whereas computer science papers are free-form. Therefore, there is more variety in writing styles in CSAbstruct.

CSAbstruct is collected from the Semantic Scholar corpus [(Ammar et al., 2018)][4]. Each sentence is annotated by 5 workers on the [Figure-eight platform][5], with one of 5 categories `{BACKGROUND, OBJECTIVE, METHOD, RESULT, OTHER}`. We use 8 abstracts (with 51 sentences) as test questions to train crowdworkers. Annotators whose accuracy is less than 75% are disqualified from doing the actual annotation job. The annotations are aggregated using the agreement on a single sentence weighted by the accuracy of the annotator on the initial test questions. A confidence score is associated with each instance based on the annotator's initial accuracy and the agreement of all annotators on that instance. We then split the dataset 75%/15%/10% into train/dev/test partitions, such that the test set has the highest confidence scores.

Agreement rate on a random subset of 200 sentences is 75%, which is quite high given the difficulty of the task. Compared with [PUBMED-RCT][3], our dataset exhibits a wider variety of writing styles, since its abstracts are not written with an explicit structural template.

## Dataset Statistics

| Statistic                | Avg ± std   |
|--------------------------|-------------|
| Doc length in sentences  | 6.7 ± 1.99  |
| Sentence length in words | 21.8 ± 10.0 |

| Label        | % in Dataset |
|--------------|--------------|
| `BACKGROUND` | 33%          |
| `METHOD`     | 32%          |
| `RESULT`     | 21%          |
| `OBJECTIVE`  | 12%          |
| `OTHER`      | 3%           |

## Citation

If you use this dataset, please cite the following paper:

```
@inproceedings{Cohan2019EMNLP,
  title={Pretrained Language Models for Sequential Sentence Classification},
  author={Arman Cohan, Iz Beltagy, Daniel King, Bhavana Dalvi, Dan Weld},
  year={2019},
  booktitle={EMNLP},
}
```

[1]: https://arxiv.org/abs/1909.04054
[2]: https://aclanthology.org/D19-1383
[3]: https://github.com/Franck-Dernoncourt/pubmed-rct
[4]: https://aclanthology.org/N18-3011/
[5]: https://www.figure-eight.com/
[6]: https://github.com/allenai/sequential_sentence_classification
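## Loading the Dataset

The dataset can be loaded directly with the `datasets` library. This is a minimal sketch; the split and column names are not documented above, so they are assumptions to be checked against the loaded `DatasetDict`:

```python
from datasets import load_dataset

ds = load_dataset("allenai/csabstruct")
print(ds)  # shows the available splits and column names

# Peek at one training example (expected to hold an abstract's sentences
# and their per-sentence rhetorical-role labels)
print(ds["train"][0])
```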
domenicrosati/clinical_trial_texts
---
dataset_info:
  features:
  - name: text
    dtype: string
  - name: trial_id
    dtype: string
  - name: input_ids
    sequence: int32
  - name: token_type_ids
    sequence: int8
  - name: attention_mask
    sequence: int8
  splits:
  - name: train
    num_bytes: 22784316806
    num_examples: 434977
  download_size: 5376659326
  dataset_size: 22784316806
---

# Dataset Card for "clinical_trial_texts"

These are the texts of clinical trials downloaded from https://ClinicalTrials.gov/AllAPIJSON.zip on Dec 3rd 2022.

The dataset contains 434,977 trials with 2,184,397,556 tokens (2.1bn tokens). The token counts are based on the default BERT tokenizer in Hugging Face.

This data can be used for pretraining in the clinical trial and biomedical domains. A minimal loading sketch is given below.

If you use this data, please acknowledge @domenicrosati and link to this dataset.

[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
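Because the train split is roughly 22 GB, streaming is a practical way to inspect it without a full download. A minimal sketch (field names follow the `dataset_info` above):

```python
from datasets import load_dataset

# Stream the single "train" split instead of downloading ~22 GB up front
ds = load_dataset("domenicrosati/clinical_trial_texts", split="train", streaming=True)

for i, record in enumerate(ds):
    # Fields per the dataset_info: text, trial_id, input_ids, token_type_ids, attention_mask
    print(record["trial_id"], record["text"][:80].replace("\n", " "))
    if i == 2:
        break
```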
mrm8488/unnatural-instructions-full
---
dataset_info:
  features:
  - name: instruction
    dtype: string
  - name: instances
    list:
    - name: instruction_with_input
      dtype: string
    - name: input
      dtype: string
    - name: constraints
      dtype: string
    - name: output
      dtype: string
  - name: reformulations
    list:
    - name: instruction
      dtype: string
    - name: instruction_with_input
      dtype: string
    - name: input
      dtype: string
    - name: output
      dtype: string
  splits:
  - name: train
    num_bytes: 144282712
    num_examples: 66010
  download_size: 57715606
  dataset_size: 144282712
---

# Dataset Card for Unnatural Instructions (Full data)

This info comes from the **Unnatural Instructions GitHub [repo](https://github.com/orhonovich/unnatural-instructions/)**.

Unnatural Instructions is a dataset of instructions automatically generated by a Large Language model. See full details in the paper: "[Unnatural Instructions: Tuning Language Models with (Almost) No Human Labor](https://arxiv.org/abs/2212.09689)"

## 🗃️ Content

It contains the full 240,670 Unnatural Instructions (instruction-input-output triplets) examples. It was constructed by expanding the core data with automatically generated instruction paraphrases.

## 📄 Format

### Full data

It has the same structure as [Core Data](https://huggingface.co/datasets/mrm8488/unnatural-instructions-core), but with one additional field - `reformulations`. `reformulations` is an array of JSON objects, each corresponds to an automatically generated paraphrase for the given instruction. Each reformulation contains the fields:
- `instruction`: A paraphrase of the original instruction
- `input`: An input for the task described by the `instruction`
- `instruction_with_input`: The paraphrased instruction concatenated with the `input`
- `output`: The output of executing `instruction` with the given `input`

## 📘 Citation

If you make use of Unnatural Instructions, please cite the following paper:

```
@misc{honovich2022unnatural,
  title = {Unnatural Instructions: Tuning Language Models with (Almost) No Human Labor},
  author = {Honovich, Or and Scialom, Thomas and Levy, Omer and Schick, Timo},
  url = {https://arxiv.org/abs/2212.09689},
  publisher = {arXiv},
  year = {2022}
}
```

[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
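## 💻 Usage

A minimal sketch for flattening the nested structure into plain (instruction, output) training pairs; field names follow the `dataset_info` above:

```python
from datasets import load_dataset

ds = load_dataset("mrm8488/unnatural-instructions-full", split="train")

# Flatten the first example into (instruction_with_input, output) pairs,
# taking both the original instances and the generated reformulations.
example = ds[0]
pairs = [(i["instruction_with_input"], i["output"]) for i in example["instances"]]
pairs += [(r["instruction_with_input"], r["output"]) for r in (example["reformulations"] or [])]

print(len(pairs), "pairs from the first example")
print(pairs[0][0][:200])
```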
abertsch/booksum-fullbooks
--- dataset_info: features: - name: bid dtype: string - name: source dtype: string - name: title dtype: string - name: summary dtype: string - name: book dtype: string splits: - name: validation num_bytes: 23586559 num_examples: 45 - name: train num_bytes: 165182724 num_examples: 314 - name: test num_bytes: 31094987 num_examples: 46 download_size: 60336046 dataset_size: 219864270 --- # Dataset Card for "booksum-fullbooks" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
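A minimal loading sketch (split and field names follow the `dataset_info` above):

```python
from datasets import load_dataset

ds = load_dataset("abertsch/booksum-fullbooks")
print(ds)  # train / validation / test splits

sample = ds["validation"][0]
# Fields per the dataset_info: bid, source, title, summary, book
print(sample["title"])
print(len(sample["book"]), "characters of full-book text")
print(sample["summary"][:300])
```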
kqsong/OASum
--- license: cc-by-sa-3.0 language: - en tags: - summarization - Wikipedia size_categories: - 1M<n<10M task_categories: - summarization --- # Dataset Card for OASum Dataset ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Usage](#dataset-usage) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Additional Information](#additional-information) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Repository:** [OASum Dataset repository](https://github.com/tencent-ailab/OASum) - **Paper:** [OASum: Large-Scale Open Domain Aspect-based Summarization](https://arxiv.org/pdf/2212.09233.pdf) The OASum Dataset is an English-language dataset containing over 3.6M document, aspect, and summary triplets. ## Dataset Usage You can directly download it with huggingface datasets. ``` python from datasets import load_dataset dataset = load_dataset("kqsong/OASum") ``` ## Dataset Structure ### Data Instances For each instance, there is a list of strings for the document, a list of strings for the summary, a string for the document title, a string for the aspect and a list of indices for the sentences in the corresponding section. ```json { "title": "Ker's WingHouse Bar & Grill", "document":[ "After Clearwater, Florida chicken wing pioneering restaurant chain Hooters began rapidly expanding, Florida based, Canadian-born restaurant entrepreneur Ed Burnett saw the opportunity.", "Burnett secured the rights to a closed restaurant (\"Knockers\") and opened \"The WingHouse\" restaurant at 7369 Ulmerton Road, Largo, Florida, a high traffic corridor.", "He strategically selected the restaurant in between where people work (commercial real estate) and live (residential real estate), to appeal to the local lunch crowd and family dining crowd.", "This flagship location proved to be a success soon after launching and is the model that the chain expanded on.", "Burnett, looking to expand to additional locations, accepted a financing partner (Crawford Ker) during this time frame, to open additional locations and beyond.", "Burnett's goal was to open 20 to 50 locations, and then sell the chain to a larger restaurant chain or investors.", "Burnett would ultimately regret his choice of investor.","In 1992, Ker retired from the NFL and took a job selling cars at a local dealer.", "In 1994, he invested half interest in a Largo, Florida wing restaurant called, \"Wing House\" that imitated Hooters.", "The restaurant was always The Wing House, and the atmosphere was always toned down to make it more family friendly.", "The restaurant did well and two additional locations were opened in the Tampa Bay area in the following three years.", "Ker won a $1.2-million jury award from Hooters in late 2004, which had sued him for trademark violations for allegedly using their uniforms and decor.", "After a three-week trial in which lawyers discussed hula hoops, surfboards, scrunchy socks, pantyhose, and something called \"vicarious sexual recreation\", the jury ruled that no trademark infringement existed and Hooters was penalized for their frivolous lawsuit.", "Hooters appealed the decision, but in June, 2006, the 11th U.S. 
Circuit Court of Appeals in Atlanta upheld the verdict.", "As of 2007, the company had 1,700 employees at 22 locations with revenue of nearly $60 million.", "Ker attended, and the company participated in, the 2007 National Buffalo Wing Festival and placed first in the \"traditional x-hot sauce\" category and gained some national recognition.", "On June 4, 2008 the company announced the launch of its national franchise program.", "In mid-2008 the chain operated 19 locations in Florida and Texas and expected to add six franchises by the end of 2008, and 48 by 2011.", "The initial focus was for franchises in the Southeastern US.", "WingHouses feature several amenities that differ from other wing restaurants, including Hooters.", "There is a full liquor bar in every store, sports memorabilia line the walls instead of NASCAR and most locations include a game room.", "Super Bowl XLIII in Tampa, Florida attracted the rich and famous; WingHouse hosted three events to raise money for charity." ], "aspect": "Opening", "aspect_sents": [0,1,2,3,4,5,6,7,8,9,10], "summary":[ "WingHouse Bar & Grill (formerly Ker\u2019s WingHouse Bar & Grill) is a restaurant chain based in Florida, created and founded by Ed Burnett, a Canadian restaurant entrepreneur.", "After opening his first WingHouse location, Burnett sought out investors to open additional WingHouse locations.", "Burnett accepted investor Crawford Ker (a former National Football League player) to assist financing the expansion." ] } ``` The average token count for the articles and the highlights are provided below: | Feature | Mean Token Count | | ---------- | ---------------- | | Document | 1,612 | | Summary | 40 | ### Data Fields - `title`: a string, containing the original Wikipedia title. - `document`: a list of sentences, containing the original content in the Wikipedia sections except the first abstract section. - `aspect`: a string, containing the section name and its parent section names. - `aspect_sents`: a list of indices, representing the sentences in the `aspect` section. - `summary`: a list of sentences, the corresponding aspect-based summary for the document. ### Data Splits The OASum dataset has 3 splits: _train_, _valid_, and _test_. Below are the statistics for the Version 1.0.0 of the dataset. | Dataset Split | Number of Instances in Split | | ------------- | ------------------------------------------- | | Train | 3,523,986 | | Validation | 111,578 | | Test | 112,005 | ## Additional Information ### Licensing Information The OASum Dataset version 1.0.0 is released under the [CC-BY-SA-3.0 License](https://en.wikipedia.org/wiki/Wikipedia:Text_of_the_Creative_Commons_Attribution-ShareAlike_3.0_Unported_License) ### Citation Information ``` @article{yang2022oasum, title={Oasum: Large-scale open domain aspect-based summarization}, author={Yang, Xianjun and Song, Kaiqiang and Cho, Sangwoo and Wang, Xiaoyang and Pan, Xiaoman and Petzold, Linda and Yu, Dong}, journal={arXiv preprint arXiv:2212.09233}, year={2022} } ```
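## Example: Building Aspect-Specific Inputs

The `aspect_sents` indices point into `document`, so the aspect-relevant input for a summarizer can be reconstructed directly. A minimal sketch (streaming is used here only to avoid downloading the full train split; field names follow the "Data Fields" section above):

```python
from datasets import load_dataset

ds = load_dataset("kqsong/OASum", split="train", streaming=True)
example = next(iter(ds))

# Pull out the sentences belonging to the annotated aspect
aspect_sentences = [example["document"][i] for i in example["aspect_sents"]]

print("Title:  ", example["title"])
print("Aspect: ", example["aspect"])
print("Input:  ", " ".join(aspect_sentences)[:300])
print("Summary:", " ".join(example["summary"])[:300])
```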
tarteel-ai/quran-tafsir
--- dataset_info: features: - name: en-ahmedali dtype: string - name: en-ahmedraza dtype: string - name: en-arberry dtype: string - name: en-asad dtype: string - name: en-daryabadi dtype: string - name: en-hilali dtype: string - name: en-itani dtype: string - name: en-maududi dtype: string - name: en-mubarakpuri dtype: string - name: en-pickthall dtype: string - name: en-qarai dtype: string - name: en-qaribullah dtype: string - name: en-sahih dtype: string - name: en-sarwar dtype: string - name: en-shakir dtype: string - name: en-transliterati dtype: string - name: en-wahiduddi dtype: string - name: en-yusufali dtype: string - name: surah dtype: int64 - name: ayah dtype: int64 splits: - name: train num_bytes: 16266291 num_examples: 6236 download_size: 9038013 dataset_size: 16266291 --- # Dataset Card for "quran-tafsir" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
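A minimal usage sketch (column names follow the `dataset_info` above; the dataset has one row per ayah, indexed by `surah` and `ayah`):

```python
from datasets import load_dataset

ds = load_dataset("tarteel-ai/quran-tafsir", split="train")

# Look up a single verse by surah and ayah number and compare two of the translations
verse = ds.filter(lambda ex: ex["surah"] == 1 and ex["ayah"] == 1)[0]
print(verse["en-sahih"])
print(verse["en-pickthall"])
```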
AigizK/bashkir-russian-parallel-corpora
--- language: - ba - ru license: cc-by-4.0 task_categories: - translation dataset_info: features: - name: ba dtype: string - name: ru dtype: string - name: corpus dtype: string splits: - name: train num_bytes: 409240581 num_examples: 1093189 download_size: 195923641 dataset_size: 409240581 configs: - config_name: default data_files: - split: train path: data/train-* --- # Dataset Card for "bashkir-russian-parallel-corpora" ### How the dataset was assembled. 1. find the text in two languages. it can be a translated book or an internet page (wikipedia, news site) 2. our algorithm tries to match Bashkir sentences with their translation in Russian 3. We give these pairs to people to check ``` @inproceedings{ title={Bashkir-Russian parallel corpora}, author={Iskander Shakirov, Aigiz Kunafin}, year={2023} } ```
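### Sample usage

A minimal loading sketch (field names `ba`, `ru` and `corpus` follow the `dataset_info` above):

```python
from datasets import load_dataset

ds = load_dataset("AigizK/bashkir-russian-parallel-corpora", split="train")

# Each row holds a Bashkir/Russian sentence pair plus the corpus it was taken from
row = ds[0]
print(row["ba"])
print(row["ru"])
print("corpus:", row["corpus"])
```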
AigizK/mari-russian-parallel-corpora
--- dataset_info: features: - name: mhr dtype: string - name: rus dtype: string splits: - name: train num_bytes: 79751117 num_examples: 386707 download_size: 39195604 dataset_size: 79751117 language: - mhr - ru license: cc-by-4.0 task_categories: - translation --- # Dataset Card for "mari-russian-parallel-corpora" ``` @inproceedings{ title={Mari-Russian parallel corpora}, author={Andrei Chemyshev, Gennadii Sabantsev, Nadezhda Timofeeva, Vasilii Semenov}, year={2023} } ``` [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
zeusfsx/ukrainian-news
--- license: unknown task_categories: - text-generation language: - uk pretty_name: ukr-news size_categories: - 10M<n<100M tags: - news --- # Ukrainian News Dataset This is a dataset of news articles downloaded from various Ukrainian websites and Telegram channels. The dataset contains 22 567 099 JSON objects (news), total size ~67GB each with the following fields: ```json title: The title of the news article text: The text of the news article, which may contain HTML tags(e.g., paragraphs, links, images, etc.) url: The URL of the news article datetime: The time of publication or when the article was parsed and added to the dataset owner: The name of the website that published the news article ``` Count of news from websites: 16 022 416 Count of telegram posts: 6 544 683 The JSON objects are divided into parts, and the dataset is available for download via Hugging Face. The terms of use state that all data in this dataset is under the copyright of the owners of the respective websites. ## Accessing the Dataset The dataset is available for download via the Hugging Face datasets library. You can install the library via pip: ```bash pip install datasets ``` Once you have installed the library, you can load the dataset using the following code: ```python from datasets import load_dataset dataset = load_dataset('zeusfsx/ukrainian-news') ``` This will load the entire dataset into memory. If you prefer to load only a subset of the data, you can specify the split argument: ```python # Load only the first 10,000 examples from the "train" split dataset = load_dataset('zeusfsx/ukrainian-news', split='train[:10000]') ``` ## Contacts If you have any questions or comments about this dataset, please contact me at email [zeusfsxtmp@gmail.com]. I will do our best to respond to your inquiry as soon as possible. ## License The dataset is made available under the terms of use specified by the owners of the respective websites. Please consult the individual websites for more information on their terms of use.
renumics/dcase23-task2-enriched
--- license: cc-by-4.0 task_categories: - audio-classification pretty_name: >- Enriched DCASE 2023 Challenge Task 2 Dataset size_categories: - 1K<n<10K tags: - anomaly detection - anomalous sound detection - acoustic condition monitoring - sound machine fault diagnosis - machine learning - unsupervised learning - acoustic scene classification - acoustic event detection - acoustic signal processing - audio domain shift - domain generalization --- # Dataset Card for the Enriched "DCASE 2023 Challenge Task 2 Dataset". ## Table of contents [//]: # (todo: create new) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Explore the data with Spotlight](#explore-the-data-with-spotlight) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Baseline system](#baseline-system) - [Dataset Curators](#dataset-curators) - [Licensing Information - Condition of use](#licensing-information---condition-of-use) - [Citation Information (original)](#citation-information-original) ## Dataset Description - **Homepage:** [Renumics Homepage](https://renumics.com/) - **Homepage** [DCASE23 Task 2 Challenge](https://dcase.community/challenge2023/task-first-shot-unsupervised-anomalous-sound-detection-for-machine-condition-monitoring#evaluation) - **Homepage:** [HF Dataset Creator](https://syoy.github.io/) - **Original Dataset Upload (Dev)** [ZENODO: DCASE 2023 Challenge Task 2 Development Dataset](https://zenodo.org/record/7687464#.Y_9VtdLMLmE) - **Paper** [MIMII DG](https://arxiv.org/abs/2205.13879) - **Paper** [ToyADMOS2](https://arxiv.org/abs/2106.02369) - **Paper** [First-shot anomaly detection for machine condition monitoring: A domain generalization baseline](https://arxiv.org/pdf/2303.00455.pdf) ### Dataset Summary [Data-centric AI](https://datacentricai.org) principles have become increasingly important for real-world use cases. At [Renumics](https://renumics.com/) we believe that classical benchmark datasets and competitions should be extended to reflect this development. This is why we are publishing benchmark datasets with application-specific enrichments (e.g. embeddings, baseline results, uncertainties, label error scores). We hope this helps the ML community in the following ways: 1. Enable new researchers to quickly develop a profound understanding of the dataset. 2. Popularize data-centric AI principles and tooling in the ML community. 3. Encourage the sharing of meaningful qualitative insights in addition to traditional quantitative metrics. This dataset is an enriched version of the [dataset](https://zenodo.org/record/7690148#.ZAXsSdLMLmE) provided in the context of the [anomalous sound detection task](https://dcase.community/challenge2023/task-first-shot-unsupervised-anomalous-sound-detection-for-machine-condition-monitoring) of the [DCASE2023 challenge](https://dcase.community/challenge2023/). 
The enrichment include an embedding generated by a pre-trained [Audio Spectrogram Transformer](https://huggingface.co/docs/transformers/model_doc/audio-spectrogram-transformer#transformers.ASTFeatureExtractor) and results of the official challenge [baseline implementation](https://github.com/nttcslab/dase2023_task2_baseline_ae). ### DCASE23 Task2 Dataset Once a year, the [DCASE community](https://dcase.community/) publishes a [challenge](https://dcase.community/challenge2023/) with several tasks in the context of acoustic event detection and classification. [Task 2 of this challenge](https://dcase.community/challenge2023/task-first-shot-unsupervised-anomalous-sound-detection-for-machine-condition-monitoring) deals with anomalous sound detection for machine condition monitoring. The original dataset is based on the [MIMII DG](https://arxiv.org/abs/2205.13879) and the [ToyADMOS2](https://arxiv.org/abs/2106.02369) datasets. Please cite the papers by [Harada et al.](https://arxiv.org/abs/2106.02369) and [Dohi et al.](https://arxiv.org/abs/2205.13879) if you use this dataset and the paper by [Harada et al.](https://arxiv.org/pdf/2303.00455.pdf) if you use the baseline results. ### Explore Dataset ![Analyze DCASE23 Task 2 with Spotlight](https://spotlight.renumics.com/resources/preview_dcase_1.png) The enrichments allow you to quickly gain insights into the dataset. The open source data curation tool Renumics Spotlight enables that with just a few lines of code: Install datasets and Spotlight via [pip](https://packaging.python.org/en/latest/key_projects/#pip): ```python !pip install renumics-spotlight datasets[audio] ``` > **_Notice:_** On Linux, non-Python dependency on libsndfile package must be installed manually. See [Datasets - Installation](https://huggingface.co/docs/datasets/installation#audio) for more information. Load the dataset from huggingface in your notebook: ```python import datasets dataset = datasets.load_dataset("renumics/dcase23-task2-enriched", "dev", split="all", streaming=False) ``` Start exploring with a simple view that leverages embeddings to identify relevant data segments: ```python from renumics import spotlight df = dataset.to_pandas() simple_layout = datasets.load_dataset_builder("renumics/dcase23-task2-enriched", "dev").config.get_layout(config="simple") spotlight.show(df, dtype={'path': spotlight.Audio, "embeddings_ast-finetuned-audioset-10-10-0.4593": spotlight.Embedding}, layout=simple_layout) ``` You can use the UI to interactively configure the view on the data. Depending on the concrete taks (e.g. model comparison, debugging, outlier detection) you might want to leverage different enrichments and metadata. In this example we focus on the valve class. We specifically look at normal data points that have high anomaly scores in both models. This is one example on how to find difficult example or edge cases: ```python from renumics import spotlight extended_layout = datasets.load_dataset_builder("renumics/dcase23-task2-enriched", "dev").config.get_layout(config="extended") spotlight.show(df, dtype={'path': spotlight.Audio, "embeddings_ast-finetuned-audioset-10-10-0.4593": spotlight.Embedding}, layout=extended_layout) ``` ![Analyze DCASE23 Task 2 with Spotlight](data/preview_dcase_2.png "Analyze DCASE23 Task 2 with Spotlight") ## Using custom model results and enrichments When developing your custom model you want to use different kinds of information from you model (e.g. embedding, anomaly scores etc.) 
to gain further insights into the dataset and the model behvior. Suppose you have your model's embeddings for each datapoint as a 2D-Numpy array called `embeddings` and your anomaly score as a 1D-Numpy array called `anomaly_scores`. Then you can add this information to the dataset: ```python df['my_model_embedding'] = embeddings df['anomaly_score'] = anomaly_scores ``` Depending on your concrete task you might want to use different enrichments. For a good overview on great open source tooling for uncertainty quantification, explainability and outlier detection, you can take a look at our [curated list for open source data-centric AI tooling](https://github.com/Renumics/awesome-open-data-centric-ai) on Github. You can also save your view configuration in Spotlight in a JSON configuration file by clicking on the respective icon: ![Save a data curation layout in Spotlight](data/spotlight_save_layout.png "Save a data curation layout in Spotlight") For more information how to configure the Spotlight UI please refer to the [documentation](https://spotlight.renumics.com). ## Dataset Structure ### Data Instances For each instance, there is a Audio for the audio, a string for the path, an integer for the section, a string for the d1p (parameter), a string for the d1v (value), a ClassLabel for the label and a ClassLabel for the class. ```python {'audio': {'array': array([ 0. , 0.00024414, -0.00024414, ..., -0.00024414, 0. , 0. ], dtype=float32), 'path': 'train/fan_section_01_source_train_normal_0592_f-n_A.wav', 'sampling_rate': 16000 } 'path': 'train/fan_section_01_source_train_normal_0592_f-n_A.wav' 'section': 1 'd1p': 'f-n' 'd1v': 'A' 'd2p': 'nan' 'd2v': 'nan' 'd3p': 'nan' 'd3v': 'nan' 'domain': 0 (source) 'label': 0 (normal) 'class': 1 (fan) 'dev_train_lof_anomaly': 0 'dev_train_lof_anomaly_score': 1.241023 'add_train_lof_anomaly': 1 'add_train_lof_anomaly_score': 1.806289 'ast-finetuned-audioset-10-10-0.4593-embeddings': [0.8152204155921936, 1.5862374305725098, ..., 1.7154160737991333] } ``` The length of each audio file is 10 seconds. ### Data Fields - `audio`: an `datasets.Audio` - `path`: a string representing the path of the audio file inside the _tar.gz._-archive. - `section`: an integer representing the section, see [Definition](#Description) - `d*p`: a string representing the name of the d*-parameter - `d*v`: a string representing the value of the corresponding d*-parameter - `domain`: an integer whose value may be either _0_, indicating that the audio sample is from the _source_ domain, _1_, indicating that the audio sample is from the _target_. - `class`: an integer as class label. - `label`: an integer whose value may be either _0_, indicating that the audio sample is _normal_, _1_, indicating that the audio sample contains an _anomaly_. - '[X]_lof_anomaly': an integer as anomaly indicator. The anomaly prediction is computed with the [Local Outlier Factor](https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.LocalOutlierFactor.html) algorithm based on the "[X]"-dataset. - '[X]_lof_anomaly_score': a float as anomaly score. The anomaly score is computed with the [Local Outlier Factor](https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.LocalOutlierFactor.html) algorithm based on the "[X]"-dataset. 
- `embeddings_ast-finetuned-audioset-10-10-0.4593`: an `datasets.Sequence(Value("float32"), shape=(1, 768))` representing audio embeddings that are generated with an [Audio Spectrogram Transformer](https://huggingface.co/docs/transformers/model_doc/audio-spectrogram-transformer#transformers.ASTFeatureExtractor). ### Data Splits The development dataset has 2 splits: _train_ and _test_. | Dataset Split | Number of Instances in Split | Source Domain / Target Domain Samples | | ------------- |------------------------------|---------------------------------------| | Train | 7000 | 6930 / 70 | | Test | 1400 | 700 / 700 | The additional training dataset has 1 split: _train_. | Dataset Split | Number of Instances in Split | Source Domain / Target Domain Samples | | ------------- |------------------------------|---------------------------------------| | Train | 7000 | 6930 / 70 | The evaluation dataset has 1 split: _test_. | Dataset Split | Number of Instances in Split | Source Domain / Target Domain Samples | |---------------|------------------------------|---------------------------------------| | Test | 1400 | ? | ## Dataset Creation The following information is copied from the original [dataset upload on zenodo.org](https://zenodo.org/record/7690148#.ZAXsSdLMLmE) ### Curation Rationale This dataset is the "development dataset" for the [DCASE 2023 Challenge Task 2 "First-Shot Unsupervised Anomalous Sound Detection for Machine Condition Monitoring"](https://dcase.community/challenge2023/task-unsupervised-anomalous-sound-detection-for-machine-condition-monitoring). The data consists of the normal/anomalous operating sounds of seven types of real/toy machines. Each recording is a single-channel 10-second audio that includes both a machine's operating sound and environmental noise. The following seven types of real/toy machines are used in this task: - ToyCar - ToyTrain - Fan - Gearbox - Bearing - Slide rail - Valve The "additional training data" and "evaluation data" datasets contain the following classes: - bandsaw - grinder - shaker - ToyDrone - ToyNscale - ToyTank - Vacuum ### Source Data #### Definition We first define key terms in this task: "machine type," "section," "source domain," "target domain," and "attributes.". - "Machine type" indicates the type of machine, which in the development dataset is one of seven: fan, gearbox, bearing, slide rail, valve, ToyCar, and ToyTrain. - A section is defined as a subset of the dataset for calculating performance metrics. - The source domain is the domain under which most of the training data and some of the test data were recorded, and the target domain is a different set of domains under which some of the training data and some of the test data were recorded. There are differences between the source and target domains in terms of operating speed, machine load, viscosity, heating temperature, type of environmental noise, signal-to-noise ratio, etc. - Attributes are parameters that define states of machines or types of noise. #### Description This dataset consists of seven machine types. For each machine type, one section is provided, and the section is a complete set of training and test data. For each section, this dataset provides (i) 990 clips of normal sounds in the source domain for training, (ii) ten clips of normal sounds in the target domain for training, and (iii) 100 clips each of normal and anomalous sounds for the test. The source/target domain of each sample is provided. 
Additionally, the attributes of each sample in the training and test data are provided in the file names and attribute csv files. #### Recording procedure Normal/anomalous operating sounds of machines and its related equipment are recorded. Anomalous sounds were collected by deliberately damaging target machines. For simplifying the task, we use only the first channel of multi-channel recordings; all recordings are regarded as single-channel recordings of a fixed microphone. We mixed a target machine sound with environmental noise, and only noisy recordings are provided as training/test data. The environmental noise samples were recorded in several real factory environments. We will publish papers on the dataset to explain the details of the recording procedure by the submission deadline. ### Supported Tasks and Leaderboards Anomalous sound detection (ASD) is the task of identifying whether the sound emitted from a target machine is normal or anomalous. Automatic detection of mechanical failure is an essential technology in the fourth industrial revolution, which involves artificial-intelligence-based factory automation. Prompt detection of machine anomalies by observing sounds is useful for monitoring the condition of machines. This task is the follow-up from DCASE 2020 Task 2 to DCASE 2022 Task 2. The task this year is to develop an ASD system that meets the following four requirements. **1. Train a model using only normal sound (unsupervised learning scenario)** Because anomalies rarely occur and are highly diverse in real-world factories, it can be difficult to collect exhaustive patterns of anomalous sounds. Therefore, the system must detect unknown types of anomalous sounds that are not provided in the training data. This is the same requirement as in the previous tasks. **2. Detect anomalies regardless of domain shifts (domain generalization task)** In real-world cases, the operational states of a machine or the environmental noise can change to cause domain shifts. Domain-generalization techniques can be useful for handling domain shifts that occur frequently or are hard-to-notice. In this task, the system is required to use domain-generalization techniques for handling these domain shifts. This requirement is the same as in DCASE 2022 Task 2. **3. Train a model for a completely new machine type** For a completely new machine type, hyperparameters of the trained model cannot be tuned. Therefore, the system should have the ability to train models without additional hyperparameter tuning. **4. Train a model using only one machine from its machine type** While sounds from multiple machines of the same machine type can be used to enhance detection performance, it is often the case that sound data from only one machine are available for a machine type. In such a case, the system should be able to train models using only one machine from a machine type. ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Baseline system The baseline system is available on the Github repository [dcase2023_task2_baseline_ae](https://github.com/nttcslab/dase2023_task2_baseline_ae).The baseline systems provide a simple entry-level approach that gives a reasonable performance in the dataset of Task 2. They are good starting points, especially for entry-level researchers who want to get familiar with the anomalous-sound-detection task. 
### Dataset Curators

[//]: # (todo)
[More Information Needed]

### Licensing Information - Condition of use

This is a feature/embeddings-enriched version of the "DCASE 2023 Challenge Task 2 Development Dataset". The [original dataset](https://dcase.community/challenge2023/task-first-shot-unsupervised-anomalous-sound-detection-for-machine-condition-monitoring#audio-datasets) was created jointly by **Hitachi, Ltd.** and **NTT Corporation** and is available under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0) license.

### Citation Information (original)

If you use this dataset, please cite all the following papers. We will publish a paper on DCASE 2023 Task 2, so please make sure to cite that paper, too.

- Kota Dohi, Tomoya Nishida, Harsh Purohit, Ryo Tanabe, Takashi Endo, Masaaki Yamamoto, Yuki Nikaido, and Yohei Kawaguchi. MIMII DG: sound dataset for malfunctioning industrial machine investigation and inspection for domain generalization task. In arXiv e-prints: 2205.13879, 2022. [[URL](https://arxiv.org/abs/2205.13879)]
- Noboru Harada, Daisuke Niizumi, Daiki Takeuchi, Yasunori Ohishi, Masahiro Yasuda, and Shoichiro Saito. ToyADMOS2: another dataset of miniature-machine operating sounds for anomalous sound detection under domain shift conditions. In Proceedings of the 6th Detection and Classification of Acoustic Scenes and Events 2021 Workshop (DCASE2021), 1–5. Barcelona, Spain, November 2021. [[URL](https://dcase.community/documents/workshop2021/proceedings/DCASE2021Workshop_Harada_6.pdf)]
- Noboru Harada, Daisuke Niizumi, Daiki Takeuchi, Yasunori Ohishi, and Masahiro Yasuda. First-shot anomaly detection for machine condition monitoring: a domain generalization baseline. In arXiv e-prints: 2303.00455, 2023. [[URL](https://arxiv.org/abs/2303.00455.pdf)]

```
@dataset{kota_dohi_2023_7882613,
  author = {Kota Dohi and Keisuke Imoto and Noboru Harada and Daisuke Niizumi and Yuma Koizumi and Tomoya Nishida and Harsh Purohit and Takashi Endo and Yohei Kawaguchi},
  title = {DCASE 2023 Challenge Task 2 Development Dataset},
  month = mar,
  year = 2023,
  publisher = {Zenodo},
  version = {3.0},
  doi = {10.5281/zenodo.7882613},
  url = {https://doi.org/10.5281/zenodo.7882613}
}
```
philschmid/sharegpt-raw
--- license: other duplicated_from: jeffwan/sharegpt_vicuna --- ## Prepraration ``` pip3 install -r requirements.txt ``` ## Data Cleaning 1. merge two raw json files and json beautify the merged file ``` python merge.py sharegpt_90k_raw_dataset/sg_90k_part1.json sharegpt_90k_raw_dataset/sg_90k_part2.json sharegpt_20230401_html_unformatted.json python pretty_json.py --in sharegpt_20230401_html_unformatted.json --out sharegpt_20230401_html.json ``` 2. (Optional) Verify the json file ``` if jq empty sharegpt_20230401_html.json 2>/dev/null; then echo "JSON is valid" else echo "JSON is invalid" fi jq length sharegpt_90k_raw_dataset/sg_90k_part1.json jq length sharegpt_90k_raw_dataset/sg_90k_part2.json jq length sharegpt_20230401_html.json ``` 3. clean data - remove html tags etc ``` python3 clean_sharegpt.py --in sharegpt_20230401_html.json --out sharegpt_20230401_clean.json .... 100%|โ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆโ–ˆ| 90665/90665 [06:32<00:00, 230.98it/s] total: 90665, skip: 13745, new: 76920 ``` 4. Filter dataset by language ``` python3 optional_clean.py --in sharegpt_20230401_clean.json --out sharegpt_20230401_clean_lang_zh.json --lang zh .... return 6240 out of 76920, start dump ... python3 optional_clean.py --in sharegpt_20230401_clean.json --out sharegpt_20230401_clean_lang_en.json --lang en ... return 55413 out of 76920, start dump ... ``` > Note: the code itself doesn't support languange list, I didn't change the code for adpation. You can change the code to support more languages. Instead, I just filter two languages I need and merge the `sharegpt_20230401_clean_lang_zh.json` and `sharegpt_20230401_clean_lang_en.json` into `sharegpt_20230401_clean_lang.json`. 5. Split the long conversation ``` python3 split_long_conversation.py --in sharegpt_20230401_clean_lang.json --out sharegpt_20230401_clean_lang_split.json --model-name /home/ubuntu/llama-13b-hf/ ... total: 61653, new: 126032 ``` Ok, now we have the cleaned dataset `sharegpt_20230401_clean_lang_split.json` which should be used for finetuning.
NiGuLa/Russian_Sensitive_Topics
--- language: - ru tags: - toxic comments classification license: cc task_categories: - text-classification size_categories: - 10K<n<100K --- ## General concept of the model Sensitive topics are such topics that have a high chance of initiating a toxic conversation: homophobia, politics, racism, etc. This dataset uses 18 topics. More details can be found [in this article ](https://www.aclweb.org/anthology/2021.bsnlp-1.4/) presented at the workshop for Balto-Slavic NLP at the EACL-2021 conference. This paper presents the first version of this dataset. Here you can see the last version of the dataset which is significantly larger and also properly filtered. ## Licensing Information [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License][cc-by-nc-sa]. [![CC BY-NC-SA 4.0][cc-by-nc-sa-image]][cc-by-nc-sa] [cc-by-nc-sa]: http://creativecommons.org/licenses/by-nc-sa/4.0/ [cc-by-nc-sa-image]: https://i.creativecommons.org/l/by-nc-sa/4.0/88x31.png ## Citation If you find this repository helpful, feel free to cite our publication: ``` @inproceedings{babakov-etal-2021-detecting, title = "Detecting Inappropriate Messages on Sensitive Topics that Could Harm a Company{'}s Reputation", author = "Babakov, Nikolay and Logacheva, Varvara and Kozlova, Olga and Semenov, Nikita and Panchenko, Alexander", booktitle = "Proceedings of the 8th Workshop on Balto-Slavic Natural Language Processing", month = apr, year = "2021", address = "Kiyv, Ukraine", publisher = "Association for Computational Linguistics", url = "https://www.aclweb.org/anthology/2021.bsnlp-1.4", pages = "26--36", abstract = "Not all topics are equally {``}flammable{''} in terms of toxicity: a calm discussion of turtles or fishing less often fuels inappropriate toxic dialogues than a discussion of politics or sexual minorities. We define a set of sensitive topics that can yield inappropriate and toxic messages and describe the methodology of collecting and labelling a dataset for appropriateness. While toxicity in user-generated data is well-studied, we aim at defining a more fine-grained notion of inappropriateness. The core of inappropriateness is that it can harm the reputation of a speaker. This is different from toxicity in two respects: (i) inappropriateness is topic-related, and (ii) inappropriate message is not toxic but still unacceptable. We collect and release two datasets for Russian: a topic-labelled dataset and an appropriateness-labelled dataset. We also release pre-trained classification models trained on this data.", } ```
liyucheng/zhihu_rlhf_3k
--- license: cc-by-2.0 ---
hugfaceguy0001/retarded_bar
---
license: openrail
task_categories:
- text-generation
language:
- zh
pretty_name: Ruozhiba Jokes Dataset
size_categories:
- n<1K
configs:
- config_name: statement
  data_files: retarded_bar.jsonl
- config_name: question
  data_files: retarded_bar_qa.jsonl
---

# Ruozhiba Jokes Dataset

Ruozhiba (弱智吧) is a very popular forum on Baidu Tieba, known for short, finely crafted deadpan jokes. These jokes typically rely on puns, unusual sentence breaks, deliberately unreasonable logic, and similar devices. Even the most advanced language models today struggle to fully understand Ruozhiba jokes.

[Ruozhiba forum](https://tieba.baidu.com/f?ie=utf-8&kw=%E5%BC%B1%E6%99%BA)

I collected 100 Ruozhiba jokes from the internet, of which 45 are statements and 55 are questions. I analyzed these jokes with a combination of manual work and language models, and built this small dataset.

## Statement jokes

Statement jokes usually end with a period and are not easily misread by a language model as normal questions.

For example: a joke that reads the idiom "to rise above others" literally as a place name, claiming that this "land" produces human heads all year round.

## Question jokes

Question jokes are somewhat deceptive and may leave a language model unable to tell whether they are genuine questions or jokes.

For example: "My Bluetooth earphones are broken; should I see a dentist or an ear doctor?" (punning on the literal readings of "Bluetooth" and "earphones" in Chinese).

## File format

The dataset consists of two parts.

### retarded_bar.jsonl

retarded_bar.jsonl is the statement-joke part, stored in jsonl format. Each line is a JSON dictionary with five fields: the index `id`, the original text `text`, the punchline analysis `analysis`, the puns `pun`, and the author type `author_type`:

- The index `id` is a number giving the joke's number.
- The original text `text` is a string containing the joke itself, created by members of the Ruozhiba community and collected manually from the internet by me.
- The punchline analysis `analysis` is a string explaining the joke's punchline. Most analyses were written by me, and a small portion was generated with a language model; this is recorded in the author type `author_type`.
- The puns `pun` field is a list of strings giving the puns contained in the joke, identified by me. A joke may contain more than one pun, or none at all.
- The author type `author_type` is a string indicating who authored the punchline analysis `analysis` (not the original joke `text`); its current values are `human` and `ai`.

### retarded_bar_qa.jsonl

retarded_bar_qa.jsonl is the question-joke part, stored in jsonl format. Each line is a JSON dictionary with four fields: the index `id`, the original text `text`, the reply `answer`, and the author type `author_type`:

- The index `id` is a number giving the joke's number.
- The original text `text` is a string containing the joke itself, created by members of the Ruozhiba community and collected manually from the internet by me.
- The reply `answer` is a string giving a reasonable reply to the question-style joke. I define a reasonable reply as one that lets the asker know the humor of the question has been noticed, while remaining polite and providing accurate factual information. Some replies were written by me and some were generated with a language model; this is recorded in the author type `author_type`.
- The author type `author_type` is a string indicating who authored the reply `answer` (not the original joke `text`); its current values are `human` and `ai`.

## Usage

It is recommended to read this dataset with Python's jsonlines library or Hugging Face's datasets library. These libraries make it easy to read jsonl files and do further processing, such as building training or test sets and training or evaluating language models. For example, the jsonlines library can read a jsonl file line by line, as shown below:

```python
import jsonlines

with jsonlines.open('retarded_bar.jsonl') as reader:
    for obj in reader:
        # process each object
        print(obj)
```

## Limitations

1. Since this project involves only myself, and this kind of data is difficult to annotate, hard to automate, and labor-intensive, the dataset is small.
2. My writing ability is limited, so the punchline analyses may not always be accurate or vivid, and the replies may not be of the highest quality. Some analyses and answers in this dataset are therefore not necessarily optimal.
3. The data comes from the internet and may involve copyright issues. Please be mindful of copyright and comply with relevant laws and regulations when using this dataset.
4. Since Ruozhiba jokes are mostly grounded in the Chinese linguistic context, this dataset may not transfer to joke understanding in other languages.

## Contact

My QQ: 583753622

## Contributions of more high-quality data are welcome!
OdiaGenAI/gpt-teacher-roleplay-odia-3k
--- license: cc-by-nc-sa-4.0 task_categories: - text-generation language: - or pretty_name: GPT-Teacher-RolePlay-Odia-3K size_categories: - 1K<n<10K --- # Dataset Card for GPT-Teacher-RolePlay-Odia-3K ## Dataset Description - **Homepage: https://www.odiagenai.org/** - **Repository: https://github.com/shantipriyap/OdiaGenAI** - **Point of Contact: Shantipriya Parida, and Sambit Sekhar** ### Dataset Summary This dataset is the Odia-translated version of the GPT-Teacher-RolePlay 3K instruction set. In this dataset both English and Odia instruction, input, and output strings are available. ### Supported Tasks and Leaderboards Large Language Model (LLM) ### Languages Odia ## Dataset Structure JSON ### Data Fields instruction (string) english_instruction (string) input (string) english_input (string) output (string) english_output (string) ### Licensing Information This work is licensed under a [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License][cc-by-nc-sa]. [![CC BY-NC-SA 4.0][cc-by-nc-sa-image]][cc-by-nc-sa] [cc-by-nc-sa]: http://creativecommons.org/licenses/by-nc-sa/4.0/ [cc-by-nc-sa-image]: https://licensebuttons.net/l/by-nc-sa/4.0/88x31.png [cc-by-nc-sa-shield]: https://img.shields.io/badge/License-CC%20BY--NC--SA%204.0-lightgrey.svg ### Citation Information If you find this repository useful, please consider giving ๐Ÿ‘ and citing: ``` @misc{OdiaGenAI, author = {Shantipriya Parida and Sambit Sekhar and Subhadarshi Panda and Soumendra Kumar Sahoo and Swateek Jena and Abhijeet Parida and Arghyadeep Sen and Satya Ranjan Dash and Deepak Kumar Pradhan}, title = {OdiaGenAI: Generative AI and LLM Initiative for the Odia Language}, year = {2023}, publisher = {Hugging Face}, journal = {Hugging Face repository}, howpublished = {\url{https://huggingface.co/OdiaGenAI}}, } ``` ### Contributions - Shantipriya Parida - Sambit Sekhar
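A minimal loading sketch using the fields listed above (the split name, and the ability to load this repository directly with `datasets`, are assumptions):

```python
from datasets import load_dataset

ds = load_dataset("OdiaGenAI/gpt-teacher-roleplay-odia-3k", split="train")

row = ds[0]
print(row["english_instruction"])  # English instruction
print(row["instruction"])          # Odia instruction
print(row["english_output"])       # English response
print(row["output"])               # Odia response
```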
TheMrguiller/ScienceQA
---
dataset_info:
  features:
  - name: image
    dtype: image
  - name: question
    dtype: string
  - name: choices
    dtype: string
  - name: answer
    dtype: string
  - name: solution
    dtype: string
  - name: CTH
    dtype: bool
  splits:
  - name: train
    num_bytes: 548834431.966
    num_examples: 16966
  - name: test
    num_bytes: 135169478.352
    num_examples: 4242
  download_size: 621545899
  dataset_size: 684003910.318
task_categories:
- question-answering
- visual-question-answering
language:
- en
tags:
- code
size_categories:
- 10K<n<100K
---

# Dataset Card for "ScienceQA"

## Dataset Description

- **Homepage:** https://scienceqa.github.io/
- **Repository:** https://scienceqa.github.io/#dataset
- **Paper:** https://arxiv.org/abs/2209.09513
- **Leaderboard:**
- **Point of Contact:** https://lupantech.github.io/

### Dataset Summary

ScienceQA is collected from elementary and high school science curricula, and contains 21,208 multimodal multiple-choice science questions. Out of the questions in ScienceQA, 10,332 (48.7%) have an image context, 10,220 (48.2%) have a text context, and 6,532 (30.8%) have both. Most questions are annotated with grounded lectures (83.9%) and detailed explanations (90.5%). The lecture and explanation provide general external knowledge and specific reasons, respectively, for arriving at the correct answer. To the best of our knowledge, ScienceQA is the first large-scale multimodal dataset that annotates lectures and explanations for the answers.

ScienceQA, in contrast to previous datasets, has richer domain diversity from three subjects: natural science, language science, and social science. Questions in each subject are categorized first by the topic (Biology, Physics, Chemistry, etc.), then by the category (Plants, Cells, Animals, etc.), and finally by the skill (Classify fruits and vegetables as plant parts, Identify countries of Africa, etc.). ScienceQA features 26 topics, 127 categories, and 379 skills that cover a wide range of domains.

### Supported Tasks and Leaderboards

The dataset is prepared to be used for visual question answering.

### Languages

The dataset is in English.

## Dataset Structure

### Data Fields

- `image`: The image that serves as the context given to the model.
- `question`: The question that the model has to answer from the image context.
- `choices`: The multiple-choice options.
- `answer`: The correct answer among the multiple-choice options.
- `solution`: The chain-of-thought reasoning behind the answer.
- `CTH`: A flag indicating whether the row lacks a chain-of-thought solution.

### Data Splits

The dataset is split into 80% train and 20% test.

## Considerations for Using the Data

The dataset is well balanced in order to obtain good results when used with multimodal models.

## Additional Information

### Dataset Curators

The curators of this dataset were students from the Master's degree in Computation and Intelligent Systems at the University of Deusto.

### Citation Information

```
@inproceedings{lu2022learn,
    title={Learn to Explain: Multimodal Reasoning via Thought Chains for Science Question Answering},
    author={Lu, Pan and Mishra, Swaroop and Xia, Tony and Qiu, Liang and Chang, Kai-Wei and Zhu, Song-Chun and Tafjord, Oyvind and Clark, Peter and Ashwin Kalyan},
    booktitle={The 36th Conference on Neural Information Processing Systems (NeurIPS)},
    year={2022}
}
```
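A minimal loading sketch for the fields described above (a generic `datasets` call; nothing beyond the card's field and split names is assumed):

```python
from datasets import load_dataset

ds = load_dataset("TheMrguiller/ScienceQA", split="train")

sample = ds[0]
print(sample["question"])  # question text
print(sample["choices"])   # multiple-choice options
print(sample["answer"])    # correct answer
print(sample["solution"])  # chain-of-thought explanation; see the CTH flag
image = sample["image"]    # decoded image giving the visual context
```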
OdiaGenAI/all_combined_bengali_252k
--- license: cc-by-nc-sa-4.0 task_categories: - text-generation language: - bn pretty_name: all_combined_bengali_252K size_categories: - 100K<n<1M --- # Dataset Card for all_combined_bengali_252K ## Dataset Description - **Homepage: https://www.odiagenai.org/** - **Repository: https://github.com/OdiaGenAI** - **Point of Contact: Shantipriya Parida, and Sambit Sekhar** ### Dataset Summary This dataset is a mix of Bengali instruction sets translated from open-source instruction sets: * Dolly, * Alpaca, * ChatDoctor, * Roleplay * GSM In this dataset Bengali instruction, input, and output strings are available. ### Supported Tasks and Leaderboards Large Language Model (LLM) ### Languages Bengali ## Dataset Structure JSON ### Data Fields output (string) data_source (string) instruction (string) input (string) ### Licensing Information This work is licensed under a [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License][cc-by-nc-sa]. [![CC BY-NC-SA 4.0][cc-by-nc-sa-image]][cc-by-nc-sa] [cc-by-nc-sa]: http://creativecommons.org/licenses/by-nc-sa/4.0/ [cc-by-nc-sa-image]: https://licensebuttons.net/l/by-nc-sa/4.0/88x31.png [cc-by-nc-sa-shield]: https://img.shields.io/badge/License-CC%20BY--NC--SA%204.0-lightgrey.svg ### Citation Information If you find this repository useful, please consider giving ๐Ÿ‘ and citing: ``` @misc{OdiaGenAI, author = {Shantipriya Parida and Sambit Sekhar and Guneet Singh Kohli and Arghyadeep Sen and Shashikanta Sahoo}, title = {Bengali Instruction Set}, year = {2023}, publisher = {Hugging Face}, journal = {Hugging Face repository}, howpublished = {\url{https://huggingface.co/OdiaGenAI}}, } ``` ### Contributions - Shantipriya Parida - Sambit Sekhar - Guneet Singh Kohli - Arghyadeep Sen - Shashikanta Sahoo
laion/strategic_game_chess
---
tags:
- game
pretty_name: The Chess Dataset
license: cc-by-4.0
---

# Chess

> Recent advancements in artificial intelligence (AI) underscore the progress of reasoning and planning shown by recent generalist machine learning (ML) models. This progress can be accelerated by datasets that strengthen these generic capabilities when used to train foundation models of various kinds.

This research initiative has generated extensive synthetic datasets from complex games (chess, Rubik's Cube, and mazes) to study how such data facilitates and advances these critical generic skills in AI models.

This dataset contains 3.2 billion games, equating to approximately 608 billion individual moves. It is generated through self-play by the Stockfish engine on Fugaku, and we add initial moves to expand its diversity.

Each game has three columns: 'Moves', 'Termination' and 'Result':

- 'Moves': the recorded chess moves of the whole game.
- 'Termination': how the game ended, e.g. CHECKMATE, INSUFFICIENT_MATERIAL, etc. See https://python-chess.readthedocs.io/en/latest/core.html#chess.Outcome.termination for the full list of termination reasons.
- 'Result': the result of the game: 1-0, 1/2-1/2, or 0-1.

### Call for Collaboration

We invite interested researchers and ML practitioners to explore these datasets' potential. Whether training GPT models from scratch or fine-tuning pre-existing models, we encourage the exploration of various pre-training and fine-tuning strategies using these game-based datasets, either standalone or as an enhancement of other large-scale corpora. Our team is prepared to assist in securing the necessary GPU resources for these explorations. We are particularly interested in collaborators eager to pre-train models of small to medium scale on our game data, subsequently transition to standard text-based training, and then perform comparative analyses against models of similar architecture trained exclusively on text data.

In conclusion, this initiative marks a significant stride toward intricate problem-solving and strategic planning in AI, extending an open invitation to the research community for collaborative advancement in this domain.
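As a sketch of how a single game row could be consumed, the snippet below replays a game with the python-chess library linked above; the move encoding in the 'Moves' column (space-separated standard algebraic notation) and the example row are assumptions, not guarantees about the stored format:

```python
import chess

# Hypothetical row; the move encoding (space-separated SAN) is an assumption.
row = {"Moves": "e4 e5 Bc4 Nc6 Qh5 Nf6 Qxf7#", "Termination": "CHECKMATE", "Result": "1-0"}

board = chess.Board()
for san in row["Moves"].split():
    board.push_san(san)  # raises an error if a move is illegal in the current position

print(board.is_checkmate())  # True for this example game
print(board.fen())           # final position after replaying the recorded moves
print(row["Result"])         # 1-0, 1/2-1/2, or 0-1
```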
alxfgh/ChEMBL_Drug_Instruction_Tuning
--- task_categories: - question-answering language: - en pretty_name: ChEMBL Drug Instruction Tuning --- # Dataset Card for ChEMBL Drug Instruction Tuning ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1). ### Supported Tasks and Leaderboards [More Information Needed] ### Languages [More Information Needed] ## Dataset Structure ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits [More Information Needed] ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions [More Information Needed]
teleprint-me/phi-1
--- title: 'Phi-1 Model Dataset' date: '2023-07-03' license: cc-by-nc-sa-3.0 --- ## Dataset Description - **Homepage:** [teleprint.me](https://teleprint.me) - **Repository:** [phi-1](https://huggingface.co/datasets/teleprint-me/phi-1) - **Paper:** [2306.11644v1](https://arxiv.org/abs/2306.11644v1) - **Leaderboard:** [Link to the leaderboard] - **Point of Contact:** [aberrio@teleprint.me](aberrio@teleprint.me) ### Dataset Summary This dataset is created for training the phi-1 model, based on the paper "Textbooks are All You Need". It contains high-quality data derived from various textbooks, transformed and synthesized using OpenAI's GPT-3.5 and GPT-4 models. For optimal results, it is recommended to train models with the following parameters and sequence lengths: - For a model with 350M parameters, use a sequence length of 2048. - For a model with 700M parameters, use a sequence length of 4096. - For a model with 1.3B parameters, use a sequence length of 8096. Please note that the dataset is currently in its initial phase of planning and collection. The process involves preparing the data, extracting it, formatting it, chunking it, and preparing it for synthesis. Scripts for preparing and processing the data for the model will be developed. Once the data is generated, it will undergo a review and revision process to ensure its quality and relevance. These recommendations and notes are based on the dataset creator's initial plans and may be subject to change as the project progresses. **NOTE**: Due to the nature of this dataset, it cannot be released without obtaining permissions from the respective publishers and/or authors. If you are an author or publisher and have any concerns about this repository, please feel free to email me. If you are an author or publisher and would like to grant permission for the use of your work, your support would be greatly appreciated. Please note that in order for the dataset to be released, permissions would need to be unanimous from all involved parties. In the absence of such permissions, I will respect the copyrights of the copyrighted materials and exercise my right to Fair Use with my own physical property for personal use. **This dataset is NOT intended for commercial purposes**. Its primary purpose is for research in machine learning and AI software development. If a model is created using this dataset, it will be shared under the same license. Any proceeds derived from donations will be primarily used for the development of the dataset and the model. ### Supported Tasks and Leaderboards - `text-generation`: The dataset can be used to train a model for chat-like text generation, more specifically, for generating explanations and examples in the context of arithmetic, algebra, geometry, trigonometry, calculus, algorithms and data structures, design patterns, and the python programming language. ### Languages The text in the dataset is in English. ## Dataset Structure ### Data Instances A data instance consists of a dialogue between a user and an assistant, discussing a topic in arithmetic, algebra, geometry, trigonometry, calculus, algorithms and data structures, design patterns, or the Python programming language. The dialogue is structured as a list of turns, each turn containing the role ("user" or "assistant") and the content of the turn. ### Data Fields - `role`: a string indicating the role of the speaker in the dialogue ("system", "user", "assistant", "function"). 
- `content`: a string containing the content of the speaker's turn in the dialogue. ### Data Splits The dataset is split into a training set, a validation set, and a test set. The exact sizes and proportions of these splits will depend on the final size of the dataset. ## Dataset Creation ### Curation Rationale The dataset is being created to train a model capable of generating explanations and examples in the context of various mathematical and computer science topics. The goal is to create an AI assistant that can provide clear, accurate, and pedagogically sound responses to user queries on these topics. ### Source Data #### Initial Data Collection and Normalization The data is collected from a variety of textbooks covering arithmetic, algebra, geometry, trigonometry, calculus, algorithms and data structures, design patterns, and the Python programming language. The textbooks used include: - Barron's Arithmetic The Easy Way Fourth Edition - Blitzer Introductory Algebra for College Students Fifth Edition - McDougal Littell Geometry - Blitzer Intermediate Algebra for College Students 5th Edition - Trigonometry Sixth Edition - Pearson College Algebra Fourth Edition - Hughes-Hallet Applied Calculus 5th Edition - CLRS Introduction to Algorithms Third Edition In addition to the textbooks, the dataset also includes material from the following online resources: - [C reference](https://en.cppreference.com/w/c) - [Cpp reference](https://en.cppreference.com/w/cpp) - [Python Standard Library](https://docs.python.org/3/) These resources provide up-to-date information and examples for the C, C++, and Python programming languages. The creators of the Cppreference site also provide [archives](https://en.cppreference.com/w/Cppreference:Archives) of their site for offline use. Code samples synthesized by OpenAI's GPT models, curated by the dataset creator, are also included in the dataset. **Note:** The creator of this dataset owns physical copies of all the textbooks listed above. The data from these sources are transformed into a dialogue format using OpenAI's GPT-3.5 and GPT-4 models. The resulting dialogues are then used as the training data for the phi-1 model. This dataset does not include the full content of the source textbooks. Instead, it consists of transformations and syntheses of the original content. Anyone who wants access to the full original content should purchase or otherwise legally access the textbooks themselves. #### Who are the source language producers? The original language data was created by a variety of authors and educators, who wrote the textbooks and other materials used as sources for this dataset. These include: - Barron's Arithmetic The Easy Way Fourth Edition - Edward Williams, Katie Prindle - Blitzer Introductory Algebra for College Students Fifth Edition - Robert Blitzer - McDougal Littell Geometry - Ron Larson, Laurie Boswell, Timothy D. Kanold, Lee Stiff - Blitzer Intermediate Algebra for College Students 5th Edition - Robert Blitzer - Trigonometry Sixth Edition - Charles P. McKeague, Mark D. Turner - Pearson College Algebra Fourth Edition - Robert F. Blitzer - Hughes-Hallet Applied Calculus 5th Edition - Deborah Hughes-Hallett, Andrew M. Gleason, Patti Frazer Lock, Daniel E. Flath, Sheldon P. Gordon, David O. Lomen, David Lovelock, William G. McCallum, Brad G. Osgood, Andrew Pasquale, Jeff Tecosky-Feldman, Joseph Thrash, Karen R. Rhea, Thomas W. Tucker - CLRS Introduction to Algorithms Third Edition - Thomas H. Cormen, Charles E. Leiserson, Ronald L. 
Rivest, Clifford Stein In addition to these authors, the developers of OpenAI's GPT-3.5 and GPT-4 models also contributed to the creation of the language data, as these models were used to transform the source material into a dialogue format. ### Annotations #### Annotation process The dataset does not contain any explicit annotations. However, the data is curated and synthesized using OpenAI's GPT-3.5 and GPT-4 models. The process involves transforming the source material into a dialogue format suitable for training the phi-1 model. The dataset creator, an independent learner with a strong interest in computer science, reviewed and curated the synthesized dialogues to ensure their quality and relevance. #### Who are the annotators? The dataset creator, an independent learner who has studied computer science extensively in a self-directed manner, performed the curation and review of the synthesized dialogues. ### Personal and Sensitive Information The dataset does not contain any personal or sensitive information. All the data is derived from publicly available textbooks and online resources. Any names or other potential identifiers in the source material have been removed or anonymized. ### Social Impact of Dataset The dataset is intended to support the development of AI models capable of providing detailed explanations and examples in the context of arithmetic, algebra, geometry, trigonometry, calculus, algorithms and data structures, design patterns, and the python programming language. The potential social impact is significant, as such models could greatly enhance self-directed learning and provide valuable educational support to students worldwide. However, it's important to note that the quality and usefulness of the AI models trained on this dataset will depend on the quality of the data itself. If the data is inaccurate or biased, the models could propagate these inaccuracies and biases, potentially leading to misinformation or unfair outcomes. ### Discussion of Biases The dataset is based on a variety of textbooks and online resources, which may contain their own inherent biases. For example, textbooks often reflect the perspectives and biases of their authors, which can influence the way information is presented. These biases could potentially be reflected in the dataset and in any models trained on it. ### Other Known Limitations At this stage of the dataset creation process, it's difficult to identify all potential limitations. However, one potential limitation is that the dataset may not cover all possible topics or perspectives within the fields it addresses. The dataset creator will continue to monitor and assess the dataset for limitations as the work progresses. ## Additional Information ### Dataset Curators The dataset was curated by an independent learner with a strong interest in computer science. The curator has studied the subject matter in a self-directed manner, using a variety of resources including textbooks and online materials. The curation process also involved the use of OpenAI's GPT-3.5 and GPT-4 models to synthesize dialogues based on the source material. ### Licensing Information This dataset is released under the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 International (CC BY-NC-SA 3.0) license. ### Citation Information As this dataset is a compilation of various sources synthesized and curated for the purpose of training the phi-1 model, please ensure to cite the original sources when using this dataset. 
If referencing the dataset directly, please refer to this repository.
CheshireAI/guanaco-unchained
--- license: apache-2.0 language: - en pretty_name: Guanaco Unchained size_categories: - 1K<n<10K --- # Guanaco Unchained "Guanaco Unchained" is a refined and optimized version of the original [Guanaco dataset](https://huggingface.co/datasets/timdettmers/openassistant-guanaco). It is specifically curated to maintain high-quality data while minimizing alignment issues. The main transformations that were applied to the dataset include: - Language Filtering: To ensure quality control, most of the non-English prompts were removed. - AI Identification Removal: Any references suggesting the model's identity as AI, such as "OpenAssistant", "As an AI language model", and similar prompts, were removed. This adjustment allows for a more human-like interaction. - Content Refining: Responses that indicated refusal, moralizing, or strong subjectivity were either removed or modified to increase accuracy and reduce bias. - Context Trimming: In scenarios where a human response lacked a corresponding model answer, the last human response was removed to maintain consistency in the instruct pair format. - Apologetic Language Reduction: The dataset was also revised to remove or modify apologetic language in the responses, thereby ensuring assertiveness and precision. Dataset Information: The primary source of the data is the [Guanaco dataset](https://huggingface.co/datasets/timdettmers/openassistant-guanaco). Following this, a series of processing steps (as outlined above) were performed to remove unnecessary or ambiguous elements, resulting in the "Guanaco Unchained" dataset. The structure of the dataset remains consistent with the original Guanaco dataset, containing pairs of human prompts and assistant responses. Known Limitations: The dataset was manually curated, and therefore, may contain unintentional errors, oversights, or inconsistencies. Despite the concerted effort to remove all instances of AI identification, there may still be undetected instances. The dataset's multilingual capability may also be reduced due to the removal of non-English prompts. Additional Information: The "Guanaco Unchained" dataset is ideally suited for any application that aims for a more human-like interaction with minimized AI identifiers and alignment issues. It is particularly beneficial in contexts where direct, assertive, and high-quality English responses are desired.
TrainingDataPro/asos-e-commerce-dataset
--- license: cc-by-nc-nd-4.0 task_categories: - text-classification language: - en tags: - code - finance --- # [Asos](https://asos.com) E-Commerce Dataset - 30,845 products Using web scraping, we collected information on over **30,845** clothing items from the Asos website. The dataset can be applied in E-commerce analytics in the fashion industry. # Get the dataset ### This is just an example of the data Leave a request on [**https://trainingdata.pro/data-market**](https://trainingdata.pro/data-market/marketplace-scraping-data?utm_source=huggingface&utm_medium=cpc&utm_campaign=asos-e-commerce-dataset) to discuss your requirements, learn about the price and buy the dataset. # Dataset Info For each item, we extracted: - **url** - link to the item on the website - **name** - item's name - **size** - sizes available on the website - **category** - product's category - **price** - item's price - **color** - item's color - **SKU** - unique identifier of the item - **date** - date of web scraping; for all items - March 11, 2023 - **description** - additional description, including product's brand, composition, and care instructions, in JSON format - **images** - photographs from the item description # Data collection and annotation We provide both ready-made datasets and custom data collection and annotation services. Please contact us for more information: Andrew, **datasets@trainingdata.pro** ## [**TrainingData**](https://trainingdata.pro/data-market/marketplace-scraping-data?utm_source=huggingface&utm_medium=cpc&utm_campaign=asos-e-commerce-dataset) provides high-quality data annotation tailored to your needs More datasets in TrainingData's Kaggle account: **https://www.kaggle.com/trainingdatapro/datasets** TrainingData's GitHub: **https://github.com/Trainingdata-datamarket/TrainingData_All_datasets**
LeoLM/OpenSchnabeltier
--- configs: - config_name: default data_files: - split: train path: data/train-* dataset_info: features: - name: output dtype: string - name: instruction dtype: string - name: instruction_de dtype: string - name: output_de dtype: string - name: translation_de dtype: string splits: - name: train num_bytes: 66379650.83254641 num_examples: 21749 download_size: 33021431 dataset_size: 66379650.83254641 --- # Dataset Card for "open_platypus_de" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Duxiaoman-DI/FinCorpus
---
license: apache-2.0
language:
- zh
tags:
- finance
size_categories:
- 10M<n<100M
---

A Chinese financial text corpus, including (sizes before compression):

- Listed-company announcements: announcement_data.jsonl, 20 GB
- Financial news and articles:
  - fin_news_data.jsonl, 30 GB
  - fin_articles_data.jsonl, 10 GB
- Financial exam questions: fin_exam.jsonl, 370 MB

Data format:

```
{
  "text": <text content>,
  "meta": {
    "source": <data source>
  }
}
```
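A minimal sketch for streaming one of the jsonl files above (the file name is taken from the list, and the record layout follows the format block):

```python
import json

# Stream line by line so the multi-GB files never have to fit in memory.
with open("fin_exam.jsonl", encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        text = record["text"]              # document text
        source = record["meta"]["source"]  # where the document came from
        print(source, text[:50])
        break
```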
jondurbin/airoboros-2.2
---
license: other
---

## Overview

This dataset is mostly a continuation of https://hf.co/datasets/jondurbin/airoboros-2.1, with some notable additions and fixes.

- Some of the content is "toxic"/"harmful", and contains profanity and other types of sensitive content.
- None of the content or views contained in text within this dataset necessarily align with my personal beliefs or opinions, they are simply text generated by LLMs and/or scraped from the web.
- Use with caution, particularly in locations with less-than-free speech laws.
- You, and you alone are responsible for having downloaded the dataset and having a copy of the contents therein and I am completely indemnified from any and all liabilities.

### 2.1 Contamination

I accidentally included some of the benchmark data in the first version of the airoboros-2.1 model, which is why it had a crazy high truthfulqa score. Discussions here:

- https://huggingface.co/jondurbin/airoboros-l2-70b-2.1/discussions/3#64f325ce352152814d1f796a
- https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard/discussions/225#64f0997659da193a12b78c32

I flagged it for removal and recreated the model right away, but the leaderboard cached the old results so it took some time to reflect.

Some of the instructors I use create overlapping data, and it's hard to filter, especially since the instructions aren't typically verbatim with the benchmark questions.

This time around, I used `thenlper/gte-small` to calculate embeddings of the instructions, along with a faiss index, and removed anything from the dataset that had a similarity score < 0.15 (from truthfulqa). If you have a better way of checking, please let me know!

I haven't done the same for most other benchmarks (yet) because there are hundreds of thousands of instructions and it would be pretty computationally expensive to do. That said, I only have ~1279 multiple choice questions, all randomly GPT generated, so there's probably little-to-no overlap.

### Awareness

I added a new "awareness" instructor, which aims to add a lot more nuance to responses relating to time, location, senses, etc. based on the system prompt.

For example, if you are using the standard prompt with user/assistant, and ask how long it would take to get to Chicago, the answer will be something about AI not having a physical presence. If, on the other hand, you are using a system prompt with a human character specified, the model attempts to infer location from "home" and will provide a more nuanced answer as a human would (in theory).

https://github.com/jondurbin/airoboros/commit/e91562c88d7610edb051606622e7c25a99884f7e

### Editor

I created a text edit instructor as well, which uses a reverse prompt mechanism, meaning it takes the existing writing samples that have been generated, rewrites them to have misspellings, poor grammar, etc., then uses a prompt like "Please correct and improve the text." with the original well-written text as the target output.

https://github.com/jondurbin/airoboros/commit/e60a68de5f9622320c9cfff3b238bd83cc7e373b

### Writing

I regenerated (almost) all of the training data that included "Once upon a time..." because it's too cliche and boring.

### Multiple choice

I created many more multiple choice questions, many of which have additional text context.

### Roleplay/conversation

I re-created all of the GTKM and RP datasets this time around, removing all of the "USER: " and "ASSISTANT: " prefixes from the instructions/responses, so it's more compatible with existing interfaces.
The GTKM instructor now does the same thing as RP, in that it saves each round of "conversation" as a separate row in the output - previously it only saved the final response, which may not have been sufficient since I don't typically train on inputs.

### UTF-8 to ASCII

I replaced most of the "standard" utf-8 sequences - left double quote, right double quote, left apostrophe, ellipses - with standard ascii characters. I don't know if this was contributing to part of the issue with eos tokens being produced after apostrophes, but I figured it was worth trying.

### Summarization

I also included 500 examples from: https://hf.co/datasets/mattpscott/airoboros-summarization

These are existing summarizations from various public datasets, formatted to airoboros-style contextual qa.

Thanks Matt!

### Usage/license info

Much (most) of the data was generated via gpt-4 API calls, which has a restriction in the ToS about "competing" models. Please seek legal advice if you plan to build or use a model that includes this dataset in a commercial setting.
DAMO-NLP-SG/MultiJail
--- license: mit task_categories: - conversational language: - en - zh - it - vi - ar - ko - th - bn - sw - jv size_categories: - n<1K --- # Multilingual Jailbreak Challenges in Large Language Models This repo contains the data for our paper ["Multilingual Jailbreak Challenges in Large Language Models"](https://arxiv.org/abs/2310.06474). [[Github repo]](https://github.com/DAMO-NLP-SG/multilingual-safety-for-LLMs/) ## Annotation Statistics We collected a total of 315 English unsafe prompts and annotated them into nine non-English languages. The languages were categorized based on resource availability, as shown below: **High-resource languages:** Chinese (zh), Italian (it), Vietnamese (vi) **Medium-resource languages:** Arabic (ar), Korean (ko), Thai (th) **Low-resource languages:** Bengali (bn), Swahili (sw), Javanese (jv) ## Ethics Statement Our research investigates the safety challenges of LLMs in multilingual settings. We are aware of the potential misuse of our findings and emphasize that our research is solely for academic purposes and ethical use. Misuse or harm resulting from the information in this paper is strongly discouraged. To address the identified risks and vulnerabilities, we commit to open-sourcing the data used in our study. This openness aims to facilitate vulnerability identification, encourage discussions, and foster collaborative efforts to enhance LLM safety in multilingual contexts. Furthermore, we have developed the SELF-DEFENSE framework to address multilingual jailbreak challenges in LLMs. This framework automatically generates multilingual safety training data to mitigate risks associated with unintentional and intentional jailbreak scenarios. Overall, our work not only highlights multilingual jailbreak challenges in LLMs but also paves the way for future research, collaboration, and innovation to enhance their safety. ## Citation ``` @misc{deng2023multilingual, title={Multilingual Jailbreak Challenges in Large Language Models}, author={Yue Deng and Wenxuan Zhang and Sinno Jialin Pan and Lidong Bing}, year={2023}, eprint={2310.06474}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
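A minimal loading sketch (only the repository ID from this card is assumed; column names are inspected rather than guessed):

```python
from datasets import load_dataset

ds = load_dataset("DAMO-NLP-SG/MultiJail", split="train")

print(ds.column_names)  # inspect the per-language prompt columns
print(ds[0])            # one annotated example
```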
123rc/medical_text
--- license: apache-2.0 task_categories: - text-classification tags: - medical size_categories: - 10K<n<100K ---
Isamu136/penetration_testing_scraped_dataset
--- dataset_info: features: - name: text dtype: string - name: embedding sequence: float32 - name: tokens sequence: int64 - name: database dtype: string - name: file dtype: string - name: chunk dtype: int64 splits: - name: train num_bytes: 1005293572 num_examples: 107542 download_size: 663206603 dataset_size: 1005293572 configs: - config_name: default data_files: - split: train path: data/train-* --- # Dataset Card for "penetration_testing_scraped_dataset" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
defog/wikisql_codellama_1000
--- dataset_info: features: - name: input_ids sequence: int32 - name: attention_mask sequence: int8 - name: labels sequence: int64 - name: prompt dtype: string - name: completion dtype: string splits: - name: train num_bytes: 6652069 num_examples: 1000 download_size: 850430 dataset_size: 6652069 configs: - config_name: default data_files: - split: train path: data/train-* --- # Dataset Card for "wikisql_codellama_1000" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
arcee-ai/nuclear_patents
--- dataset_info: features: - name: patent_number dtype: string - name: section dtype: string - name: raw_text dtype: string splits: - name: train num_bytes: 350035355.37046283 num_examples: 33523 - name: test num_bytes: 38895137.62953716 num_examples: 3725 download_size: 151011439 dataset_size: 388930493.0 configs: - config_name: default data_files: - split: train path: data/train-* - split: test path: data/test-* --- # Dataset Card for "nuclear_patents" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
M2UGen/MUCaps
--- license: cc-by-nc-nd-4.0 arxiv: 2311.11255 extra_gated_prompt: >- Please fill in the following fields, the full name/institution/group/contact email/use case are MUST fields, and gender/github/personal homepage are OPTIONAL fields (You can simply use a '-' symbol to fill in these optional fields). An application form without required information will be declined. extra_gated_fields: Full Name: text Gender: text Institution: text Group: text Contact Email: text Github: text Personal Homepage: text Use Case: text I agree to use this dataset for non-commercial use ONLY: checkbox tags: - music --- # MUCaps Dataset This is the MUCaps dataset, the largest music captioning dataset consisting of **21,966 music files** with a total playtime of **1273.78 hours** generated using the [MU-LLaMA](https://github.com/crypto-code/MU-LLaMA) model. This dataset is used to train the [M<sup>2</sup>UGen](https://github.com/crypto-code/M2UGen) model. To uncompress the audio files, run the following: ``` cat mucaps_audios.tar.gz.* | tar xzvf - ``` The [MUCapsCaptions.json](./MUCapsCaptions.json) file contains a dictionary with the filename as the key and the caption as the value. This file is used to train the music encoder of the M<sup>2</sup>UGen model. The [MUCapsInstructions.json](./MUCapsInstructions.json) file contains a list with each of the element having the following format: ``` { "output_file": "mucaps_000000.mp3", "conversation": [ { "from": "human", "value": "The music is described as fast, meaning it has a quick tempo and a lively rhythm.", "input_modality": "text" }, { "from": "gpt", "value": "", "caption": "The music is described as fast, meaning it has a quick tempo and a lively rhythm.", "output_modality": "audio" } ] } ``` This file is used to train the music decoder of the M<sup>2</sup>UGen model.
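A small sketch of reading the two JSON files described above (paths assume the files sit in the working directory; the example key comes from the instruction sample shown):

```python
import json

# Filename -> caption mapping, used to train the music encoder.
with open("MUCapsCaptions.json", encoding="utf-8") as f:
    captions = json.load(f)
print(captions.get("mucaps_000000.mp3"))

# List of instruction-style conversations, used to train the music decoder.
with open("MUCapsInstructions.json", encoding="utf-8") as f:
    instructions = json.load(f)

first = instructions[0]
print(first["output_file"])
for turn in first["conversation"]:
    # Human turns carry text in "value"; gpt turns carry the target caption in "caption".
    print(turn["from"], turn.get("caption") or turn["value"])
```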
styletts2-community/multilingual-pl-bert
--- license: cc-by-4.0 language: - af - an - ar - az - ba - be - bg - bn - bpy - bs - ca - cs - cy - da - de - el - es - et - eu - fi - fr - gu - hak - he - hi - hr - hu - hy - hyw - id - io - is - it - ja - ka - kk - kn - ko - la - lb - lt - lv - mk - ml - mr - ms - ne - nl - no - pa - pl - pt - ro - ru - sk - sl - sq - sr - sv - sw - ta - te - th - tr - tt - ur - uz - vi - zh --- Attribution: Wikipedia.org
ShuhuaiRen/TimeIT
--- license: cc-by-4.0 language: - en --- # Dataset Card for TimeIT TimeIT encompasses 6 longstanding timestamp-related video tasks and incorporates 12 specific datasets derived from different domains. **[NOTE]: Please refer to [DATA.md](https://github.com/RenShuhuai-Andy/TimeChat/blob/master/docs/DATA.md) for more details on downloading and processing video data.** ## Dataset Description - **Homepage: https://huggingface.co/datasets/ShuhuaiRen/TimeIT** - **Repository: https://huggingface.co/datasets/ShuhuaiRen/TimeIT** - **Paper: https://arxiv.org/abs/2312.02051** - **Leaderboard:** - **Point of Contact:** ## Dataset Statistics Our dataset compiles diverse tasks of time-sensitive long video understanding, including Dense Video Captioning, Video Grounding, Video Summarization, Video Highlight Detection, Step Localization, Transcribed Speech Generation. ### Instruction Statistics | Task | #Instructions | |-------------------------------|---------------| | Dense Video Captioning | 6 | | Temporal Video Grounding | 6 | | Video Summarization | 6 | | Video Highlight Detection | 6 | | Step Localization | 6 | | Transcribed Speech Generation | 6 | | Total | 36 | ### Task Statistics | Task | Description | #Train | |-------------------------------|----------------------------------------------------------------------------------------------------------------------|---------| | Dense Video Captioning | detects a series of events in the given video and outputs the corresponding timestamps and descriptions | 16,342 | | Temporal Video Grounding | predict a timestamp boundary including the start and end time in the video given a natural language query | 60,471 | | Video Summarization | create a compressed set of frames or clip shots to represent the most informative content of the given video | 75 | | Video Highlight Detection | identify the most exciting, impressive, or emotional moments that may not cover the full scope of the original video | 6,858 | | Step Localization | segment and describe significant steps in a long untrimmed video | 9,488 | | Transcribed Speech Generation | predict the speech content and its corresponding start and end timestamps based on visual signals in the video | 31,627 | | Total | - | 124861 | ### Detailed Dataset Statistics | Task | Dataset | #Train | |-------------------------------|------------------------|--------| | Dense Video Captioning | `ActivityNet Captions` | 10,009 | | | `ViTT` | 5,141 | | | `YouCook2` | 1,192 | | Temporal Video Grounding | `DiDeMo` | 33,002 | | | `QuerYD` | 14,602 | | | `HiREST_grounding` | 459 | | | `Charades-STA` | 12,408 | | Video Summarization | `TVSum` | 50 | | | `SumMe` | 25 | | Video Highlight Detection | `QVHighlights` | 6,858 | | Step Localization | `COIN` | 9,029 | | | `HiREST_step` | 459 | | Transcribed Speech Generation | `YT-Temporal` | 31,627 | ## Dataset Structure ### HuggingFace Login (Optional) ```python # OR run huggingface-cli login from huggingface_hub import login hf_token = "hf_xxx" # TODO: set a valid HuggingFace access token for loading datasets/models login(token=hf_token) ``` ### Data Loading ```python from datasets import load_dataset ds_name = "youcook2" # change the dataset name here dataset = load_dataset("ShuhuaiRen/TimeIT", ds_name) ``` ### Data Splits ```python from datasets import load_dataset ds_name = "youcook2" # change the dataset name here dataset = load_dataset("ShuhuaiRen/TimeIT", ds_name) train_set = dataset["train"] ``` ### Data Instances ```python from datasets import load_dataset ds_name = "youcook2" 
# change the dataset name here dataset = load_dataset("ShuhuaiRen/TimeIT", ds_name) train_set = dataset["train"] for train_instance in train_set: question = train_instance["question"] # str answer = train_instance["answer"] # str video_path = train_instance["video_path"] # str ``` ### Data Fields ```python import datasets features = datasets.Features( { "video_path": datasets.Value("string"), "question": datasets.Value("string"), "answer": datasets.Value("string"), } ) ``` ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data | Task | Dataset [Citation] | Source | |-------------------------------|----------------------------|------------------------------------------------------------------------------------| | Dense Video Captioning | `ActivityNet Captions` [1] | [Source](http://activity-net.org/download.html) | | | `ViTT` [2] | [Source](https://github.com/google-research-datasets/Video-Timeline-Tags-ViTT) | | | `YouCook2` [3] | [Source](http://youcook2.eecs.umich.edu/) | | Temporal Video Grounding | `DiDeMo` [4] | [Source](https://github.com/LisaAnne/LocalizingMoments?tab=readme-ov-file#dataset) | | | `QuerYD` [5] | [Source](https://www.robots.ox.ac.uk/~vgg/data/queryd/) | | | `HiREST_grounding` [6] | [Source](https://github.com/j-min/HiREST) | | | `Charades-STA` [7] | [Source](https://github.com/jiyanggao/TALL) | | Video Summarization | `TVSum` [8] | [Source](https://github.com/yalesong/tvsum) | | | `SumMe` [9] | [Source](http://classif.ai/dataset/ethz-cvl-video-summe/) | | Video Highlight Detection | `QVHighlights` [10] | [Source](https://github.com/jayleicn/moment_detr/tree/main/data) | | Step Localization | `COIN` [11] | [Source](https://github.com/coin-dataset/annotations) | | | `HiREST_step` [6] | [Source](https://github.com/j-min/HiREST) | | Transcribed Speech Generation | `YT-Temporal` [12] | [Source](https://rowanzellers.com/merlot/#data) | ### Annotations #### Annotation process To build high-quality multimodal instruction datasets, we rewrite various datasets into multimodal-to-text dialog format. The annotation process includes four steps: - (1) **Stage I: Instruction Writing**: writing instructions for each task; - (2) **Stage II: Data Format Unification**: structuring images and texts into a unified schema; - (3) **Stage III: Quality Check**: checking the overall dataset quality; - (4) **Stage IV: Key Datasets Translation**: building multilingual sets. #### Who are the annotators? Three authors of this work are employed as human annotators, each of whom is a graduate student familiar with relevant literature. ## Additional Information ### Licensing Information The content of original dataset follows their original license. We suggest that for the task with Unknown/Custom license, the user can check the original project or contact the dataset owner for detailed license information. Our annotated instruction data is licensed under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/). ### Citation Information ```bibtex @article{Ren2023TimeChat, title={TimeChat: A Time-sensitive Multimodal Large Language Model for Long Video Understanding}, author={Shuhuai Ren and Linli Yao and Shicheng Li and Xu Sun and Lu Hou}, journal={ArXiv}, year={2023}, volume={abs/2312.02051}, } ``` ### Contributions TimeIT is a video-centric instruction-tuning dataset involving timestamps. designed to enable the development of general-purpose video agents. 
## References - [1] Dense-Captioning Events in Videos - [2] Multimodal Pretraining for Dense Video Captioning - [3] Towards Automatic Learning of Procedures from Web Instructional Videos - [4] Localizing Moments in Video with Natural Language - [5] QuerYD: A video dataset with high-quality text and audio narrations - [6] Hierarchical Video-Moment Retrieval and Step-Captioning - [7] TALL: Temporal Activity Localization via Language Query - [8] TVSum: Summarizing Web Videos Using Titles - [9] Creating Summaries from User Videos - [10] QVHighlights: Detecting Moments and Highlights in Videos via Natural Language Queries - [11] COIN: A Large-scale Dataset for Comprehensive Instructional Video Analysis - [12] MERLOT: Multimodal Neural Script Knowledge Models
cmunhozc/usa_news_en
---
license: mit
task_categories:
- text-classification
language:
- en
tags:
- CENIA
- News
size_categories:
- 100K<n<1M
---

This dataset presents a collection of U.S. news headlines published between 2019 and 2022 and shared on Twitter and Facebook, with automatic annotations and human verification. For more details on the methodology and annotation process, please refer to our paper "News Gathering: Leveraging Transformers to Rank News".

## Attributes:

The dataset comprises five attributes: the first corresponds to "Headlines 1", the second to "Headlines 2", the third to the "target" variable {0, 1}, the fourth to the "split" {train, validation, test}, and the fifth to the "type" {soft label, human-verified}. Both headlines are associated with news extracted from diverse U.S. news sources {The New York Times, San Francisco Chronicle, National Broadcasting, Yahoo News, among other outlets}. The "target" variable indicates whether the two headlines relate to the same event {1} or not {0}. Regarding the "type", {soft label} corresponds to an automatic annotation following the methodology in the paper, and {human-verified} indicates an annotation verified by humans through a survey.

## Data Source:

The primary sources were Twitter, accessed via the Academic API, and Facebook, accessed through CrowdTangle. This enabled the automatic annotation of U.S. news articles spanning 2019 to 2022 following the methodology outlined in the paper. Within the test set, the sentence pairs underwent human verification through a survey.

## Data Format:

The dataset is presented in tabular format and comprises five columns, as described above.
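A loading sketch consistent with the attribute list above; the exact column names and file layout on the Hub are assumptions, so inspect `ds.column_names` first:

```python
from datasets import load_dataset

ds = load_dataset("cmunhozc/usa_news_en", split="train")
print(ds.column_names)  # confirm the headline, target, split and type columns

# Keep the human-verified pairs from the test portion, per the "split" and "type" attributes.
verified_test = ds.filter(lambda r: r["split"] == "test" and r["type"] == "human-verified")
print(verified_test[0])
```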
stevenfan/AIGCBench_v1.0
--- license: apache-2.0 size_categories: - 1K<n<10K --- # AIGCBench v1.0 AIGCBench is a novel and comprehensive benchmark designed for evaluating the capabilities of state-of-the-art video generation algorithms. Official dataset for the paper:**AIGCBench: Comprehensive Evaluation of Image-to-Video Content Generated by AI**, ***BenchCouncil Transactions on Benchmarks, Standards and Evaluations (TBench)***. <a href='https://www.benchcouncil.org/AIGCBench/'><img src='https://img.shields.io/badge/Project-Website-orange'></a> ## Description This dataset is intended for the evaluation of video generation tasks. Our dataset includes image-text pairs and video-text pairs. The dataset comprises three parts: 1. `ours` - A custom generation of image-text samples. 2. `webvid val` - A subset of 1000 video samples from the WebVid val dataset. 3. `laion-aesthetics` - A subset of LAION dataset that includes 925 curated image-text samples. ## Data Organization The dataset is organized into the following folders and files: - `t2i_aspect_ratio_625.zip` - Contains images paired with text, adjusted to an aspect ratio of 0.625. - `webvid_eval_1000.txt` - Contains video names for 1000 selected video samples. Considering that the first frame of the video may not contain the main information or might be a bad case, we use the tenth frame of the video as the initial frame. - `Laion-aesthetics_select_samples.txt` - Contains metadata and annotations for 925 image-text samples. ## Acknowledgments We would like to thank all contributors and organizations behind the data sources, especially the maintainers of WebVid and LAION datasets. ## Contact Information fanfanda@ict.ac.cn and jianfengzhan.benchcouncil@gmail.com ## Citation If you find our work useful in your research, please consider citing our paper: ```bibtex @misc{fan2024aigcbench, title={AIGCBench: Comprehensive Evaluation of Image-to-Video Content Generated by AI}, author={Fanda Fan and Chunjie Luo and Wanling Gao and Jianfeng Zhan}, year={2024}, eprint={2401.01651}, archivePrefix={arXiv}, primaryClass={cs.CV} } ```
MMVP/MMVP_VLM
--- license: mit task_categories: - zero-shot-classification size_categories: - n<1K --- # MMVP-VLM Benchmark Datacard ## Basic Information **Title:** MMVP-VLM Benchmark **Description:** The MMVP-VLM (Multimodal Visual Patterns - Visual Language Models) Benchmark is designed to systematically evaluate the performance of recent CLIP-based models in understanding and processing visual patterns. It distills a subset of questions from the original MMVP benchmark into simpler language descriptions, categorizing them into distinct visual patterns. Each visual pattern is represented by 15 text-image pairs. The benchmark assesses whether CLIP models can accurately match these image-text combinations, providing insights into the capabilities and limitations of these models. ## Dataset Details - **Content Types:** Text-Image Pairs - **Volume:** Balanced number of questions for each visual pattern, with each pattern represented by 15 pairs. - **Source of Data:** Subset from MMVP benchmark, supplemented with additional questions for balance - **Data Collection Method:** Distillation and categorization of questions from MMVP benchmark into simpler language ## Usage ### Intended Use - Evaluation of CLIP models' ability to understand and process various visual patterns.
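Because the benchmark boils down to whether a CLIP model assigns the higher similarity to the correct text of each image-text pair, a minimal scoring sketch with an off-the-shelf checkpoint could look like the following; the checkpoint name, file path, and example texts are placeholders rather than part of the benchmark:

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("example_image.png")              # placeholder path to one benchmark image
texts = ["description matching the visual pattern",  # placeholder text pair
         "contrasting description that does not match"]

inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**inputs).logits_per_image  # shape (1, 2): similarity of the image to each text

pred = logits.argmax(dim=-1).item()
print("model prefers text", pred)  # the pair is scored correct only when the matching text wins
```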
euclaise/reddit-instruct
--- dataset_info: features: - name: post_title dtype: string - name: post_text dtype: string - name: post_scores dtype: int64 - name: comment_text dtype: string - name: comment_score dtype: int64 splits: - name: train num_bytes: 126565640.88161694 num_examples: 84784 - name: test num_bytes: 2985602.021174206 num_examples: 2000 download_size: 67560005 dataset_size: 129551242.90279114 configs: - config_name: default data_files: - split: train path: data/train-* - split: test path: data/test-* license: mit --- Filtered data from the following subreddits: "AskAcademia", "AskComputerScience", "AskEconomics", "AskProgramming", "AskScienceFiction", "AskSocialScience", "AskStatistics", "AskTechnology", "askmath", "askphilosophy", "askpsychology", "askscience", "changemyview", "explainlikeimfive"
TVRRaviteja/Mental-Health-Data
---
language:
- en
license: mit
---

# Mental Health Queries and Personality Dataset

## Overview

This dataset encompasses a collection of mental health queries paired with personality scores and responses generated by a Large Language Model (LLM). It aims to provide insights into the interplay between personality traits and mental health inquiries, facilitating research in personalized conversational agents and mental health support systems.

## Dataset Description

Each record in the dataset contains:

- A query from a mental health user.
- A personality score across five types: Agreeableness, Extraversion, Openness, Conscientiousness, and Neuroticism.
- A context interpretation based on the user's personality.
- A tailored response from the Assistant.

## Potential Uses

The dataset is particularly useful for researchers and developers working on:

- Personalized conversational AI in mental health.
- The impact of personality traits on mental health support.
- Enhancing natural language understanding and response generation in the context of mental health.

## Access and Use

This dataset is hosted on Hugging Face Datasets, available for academic and research purposes. Users are encouraged to cite the dataset when used in their research or projects.
Crystalcareai/CodeFeedback-Alpaca
--- license: apache-2.0 ---