neurae/dnd_style_intents
--- dataset_info: features: - name: examples dtype: string - name: label_names dtype: string - name: labels dtype: int64 splits: - name: train num_bytes: 9654988 num_examples: 130570 - name: test num_bytes: 1208016 num_examples: 16330 - name: eval num_bytes: 1203046 num_examples: 16321 download_size: 5759885 dataset_size: 12066050 task_categories: - text-classification language: - en size_categories: - 100K<n<1M tags: - D&D - intent - classification pretty_name: D&D Style Intents license: apache-2.0 --- # Dataset Card for "dnd_style_intents" This dataset was designed for the intent classification module of a dialogue system for game developers. There are about 163K examples over 17 intents in the dataset. All intents belong to one of two groups: intents for interacting with game mechanics and intents for more accurate dialogue understanding. The data was generated artificially and augmented with masking and a paraphrase model. All examples are in D&D style.
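A minimal loading sketch (split and feature names are taken from the `dataset_info` block above; anything beyond that is an assumption):

```python
from datasets import load_dataset

# Splits: train / test / eval; features: examples, label_names, labels.
ds = load_dataset("neurae/dnd_style_intents")

sample = ds["train"][0]
print(sample["examples"])     # utterance text in D&D style
print(sample["label_names"])  # human-readable intent name
print(sample["labels"])       # integer intent id
```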
Abrumu/Fashion_controlnet_dataset
--- dataset_info: features: - name: target dtype: image - name: prompt dtype: string - name: control dtype: image - name: CLIP_captions dtype: string splits: - name: train num_bytes: 9533440093.0 num_examples: 11647 download_size: 9530317166 dataset_size: 9533440093.0 --- # Dataset Card for "Fashion_controlnet_dataset" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
silk-road/Wizard-LM-Chinese-instruct-evol
--- license: cc-by-4.0 task_categories: - text-generation - question-answering language: - zh - en size_categories: - 10K<n<100K --- Wizard-LM-Chinese is a dataset built on MSRA's Wizard-LM dataset by translating the instructions into Chinese and then calling GPT to obtain the answers. Wizard-LM contains many instructions that are more difficult than those in Alpaca. A small number of the Chinese question translations fail because of instruction injection. The Chinese answers were obtained by querying again with the Chinese questions. We will gradually release more datasets to HF, including - [ ] Chinese translation of Coco Caption - [ ] Chinese translation of CoQA - [ ] Embedding data for CNewSum - [ ] Augmented open QA data - [x] Chinese translation of WizardLM If you are also preparing these datasets, feel free to contact us so we can avoid duplicated spending. # Luotuo (骆驼): Open-Source Chinese Large Language Models [https://github.com/LC1332/Luotuo-Chinese-LLM](https://github.com/LC1332/Luotuo-Chinese-LLM) The Luotuo (骆驼) project is an open-source Chinese large language model project, comprising a series of language models, initiated by [Ziang Leng (冷子昂)](https://blairleng.github.io) @ SenseTime, Qiyuan Chen (陈启源) @ Central China Normal University, and Cheng Li (李鲁鲁) @ SenseTime. (Note: [Qiyuan Chen](https://qiyuan-chen.github.io/) is looking for a 2024 graduate advisor for recommended admission; feel free to get in touch.) The Luotuo project is **not** an official SenseTime product. ## Citation Please cite the repo if you use the data or code in this repo. ``` @misc{alpaca, author={Ziang Leng, Qiyuan Chen and Cheng Li}, title = {Luotuo: An Instruction-following Chinese Language model, LoRA tuning on LLaMA}, year = {2023}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\url{https://github.com/LC1332/Luotuo-Chinese-LLM}}, } ```
fnlp/moss-003-sft-data
--- license: cc-by-4.0 --- # moss-003-sft-data ## Conversation Without Plugins ### Categories | Category | \# samples | |----------------------|-----------:| | Brainstorming | 99,162 | | Complex Instruction | 95,574 | | Code | 198,079 | | Role Playing | 246,375 | | Writing | 341,087 | | Harmless | 74,573 | | Others | 19,701 | | Total | 1,074,551 | **Others** contains two categories: **Continue** (9,839) and **Switching** (9,862). The **Continue** category refers to instances in a conversation where the user asks the system to continue outputting the response from the previous round that was not completed. The **Switching** category refers to instances in a conversation where the user switches the language they are using. We removed the data for the honesty category because it contains private information.
florianLabaye/dataset_relation_extraction_2
--- dataset_info: features: - name: triplets sequence: string - name: passage dtype: string - name: __index_level_0__ dtype: int64 splits: - name: train num_bytes: 9874680.793757712 num_examples: 24533 download_size: 10748433 dataset_size: 9874680.793757712 --- # Dataset Card for "dataset_relation_extraction_2" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
JasonShao/Chinese_Metaphor_Explanation
--- license: mit task_categories: - text-generation - question-answering - feature-extraction - text2text-generation - text-classification - summarization - conversational size_categories: - 10K<n<100K language: - zh --- # Annotated Chinese Metaphor Dataset ## 📌 Citation If you use the code, data, or models of this project, please cite it. ``` @misc{BELLE, author = {Yujie Shao*, Xinrong Yao*, Ge Zhang+, Jie Fu, Linyuan Zhang, Xinyu Gan, Yunji Liu, Siyu Liu, Yaoyao Wu, Shi Wang+}, title = {An Annotated Chinese Metaphor Dataset}, year = {2023}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\url{https://github.com/JasonShao55/Chinese_Metaphor_Explanation}}, } ```
hssd/hssd-hab
--- language: - en pretty_name: HSSD tags: - 3D scenes - Embodied AI license: cc-by-nc-4.0 extra_gated_heading: "Acknowledge license to accept the repository" extra_gated_prompt: "You agree to use this dataset under the [CC BY-NC 4.0 license](https://creativecommons.org/licenses/by-nc/4.0/) terms" viewer: false --- HSSD: Habitat Synthetic Scenes Dataset ================================== The [Habitat Synthetic Scenes Dataset (HSSD)](https://3dlg-hcvc.github.io/hssd/) is a human-authored 3D scene dataset that more closely mirrors real scenes than prior datasets. Our dataset represents real interiors and contains a diverse set of 211 scenes and more than 18000 models of real-world objects. <img src="https://i.imgur.com/XEkLxNs.png" width=50%> This repository provides a Habitat consumption-ready compressed version of HSSD. See [this repository](https://huggingface.co/datasets/hssd/hssd-models) for corresponding uncompressed assets. ## Dataset Structure ``` ├── objects │ ├── */*.glb │ ├── */*.collider.glb │ ├── */*.filteredSupportSurface(.ply|.glb) │ ├── */*.object_config.json ├── stages │ ├── *.glb │ ├── *.stage_config.json ├── scenes │ ├── *.scene_instance.json ├── scenes_uncluttered │ ├── *.scene_instance.json ├── scene_filter_files │ ├── *.rec_filter.json └── hssd-hab.scene_dataset_config.json └── hssd-hab-uncluttered.scene_dataset_config.json ``` - `hssd-hab.scene_dataset_config.json`: This SceneDataset config file aggregates the assets and metadata necessary to fully describe the set of stages, objects, and scenes constituting the dataset. - `objects`: 3D models representing distinct objects that are used to compose scenes. Contains configuration files, render assets, collider assets, and Receptacle mesh assets. - `stages`: A stage in Habitat is the set of static mesh components which make up the backdrop of a scene (e.g. floor, walls, stairs, etc.). - `scenes`: A scene is a single 3D world composed of a static stage and a variable number of objects. ### Rearrange-ready assets: Supporting Habitat 3.0 embodied rearrangement tasks with updated colliders, adjusted and de-cluttered scene contents, receptacle meshes, and receptacle filter files. See [aihabitat.org/habitat3/](aihabitat.org/habitat3/) for more details. - `hssd-hab-uncluttered.scene_dataset_config.json`: This SceneDataset config file adds the adjusted and uncluttered scenes for rearrangement tasks. - `scenes_uncluttered`: Contains the adjusted scene instance configuration files. - `scene_filter_files`: A scene filter file organizes available Receptacle instances in a scene into active and inactive groups based on simulation heuristics and manual edits. It is consumed by the RearrangeEpisodeGenerator to construct valid RearrangeEpisodeDatasets. ## Getting Started To load HSSD scenes into the Habitat simulator, you can start by installing [habitat-sim](https://github.com/facebookresearch/habitat-sim) using instructions specified [here](https://github.com/facebookresearch/habitat-sim#installation). Once installed, you can run the interactive Habitat viewer to load a scene: ``` habitat-viewer --dataset /path/to/hssd-hab/hssd-hab.scene_dataset_config.json -- 102344280 # or ./build/viewer if compiling from source ``` You can find more information about using the interactive viewer [here](https://github.com/facebookresearch/habitat-sim#testing:~:text=path/to/data/-,Interactive%20testing,-%3A%20Use%20the%20interactive).
Habitat-Sim is typically used with [Habitat-Lab](https://github.com/facebookresearch/habitat-lab), a modular high-level library for end-to-end experiments in embodied AI. To define embodied AI tasks (e.g. navigation, instruction following, question answering), train agents, and benchmark their performance using standard metrics, you can download habitat-lab using the instructions provided [here](https://github.com/facebookresearch/habitat-lab#installation). ## Changelog - `v0.2.5`: **Rearrange-ready HSSD** - Note: this is a checkpoint. Known issues exist and continued polish is ongoing. - Adds Receptacle meshes describing support surfaces for small objects (e.g. table or shelf surfaces). - Adds collider meshes (.collider.glb) for assets with Receptacle meshes to support simulation. - Adds new scenes 'scenes_uncluttered' and new SceneDataset 'hssd-hab-uncluttered' containing adjusted and de-cluttered versions of the scenes for use in embodied rearrangement tasks. - Adds 'scene_filter_files' which sort Receptacles in each scene into active and inactive groups for RearrangeEpisode generation. - `v0.2.4`: - Recompresses several object GLBs to preserve PBR material status. - Adds CSV with object metadata and semantic lexicon files for Habitat. - Adds train/val scene splits file. - `v0.2.3`: First release.
fujiki/guanaco_ja
--- language: ja license: gpl-3.0 dataset_info: features: - name: instruction dtype: string - name: input dtype: string - name: output dtype: string splits: - name: train num_bytes: 53655938 num_examples: 110633 download_size: 30465845 dataset_size: 53655938 --- - This is a Japanese portion of the [Guanaco dataset](https://huggingface.co/datasets/JosephusCheung/GuanacoDataset). - You can also refer to other similar datasets like [inu-ai/alpaca-guanaco-japanese-gpt-1b](https://huggingface.co/inu-ai/alpaca-guanaco-japanese-gpt-1b).
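A minimal sketch for loading the data and assembling an instruction-tuning prompt (feature names come from the `dataset_info` above; the Alpaca-style template itself is an assumption):

```python
from datasets import load_dataset

ds = load_dataset("fujiki/guanaco_ja", split="train")

def to_prompt(example):
    # Assumed Alpaca-style template; adapt it to your fine-tuning setup.
    if example["input"]:
        return (f"### Instruction:\n{example['instruction']}\n\n"
                f"### Input:\n{example['input']}\n\n"
                f"### Response:\n{example['output']}")
    return (f"### Instruction:\n{example['instruction']}\n\n"
            f"### Response:\n{example['output']}")

print(to_prompt(ds[0]))
```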
tonytan48/TempReason
--- license: cc-by-sa-3.0 task_categories: - question-answering language: - en size_categories: - 10K<n<100K --- The TempReason dataset is designed to evaluate the temporal reasoning capability of Large Language Models. It accompanies the paper "Towards Benchmarking and Improving the Temporal Reasoning Capability of Large Language Models" (ACL 2023).
gabeorlanski/bc-transcoder
--- license: apache-2.0 task_categories: - text-generation - text2text-generation - translation language: - en tags: - code pretty_name: BabelCode Transcoder size_categories: - 1K<n<10K source_datasets: - original - extended|transcoder --- # Dataset Card for BabelCode Transcoder ## Dataset Description - **Repository:** [GitHub Repository](https://github.com/google-research/babelcode) - **Paper:** [Measuring The Impact Of Programming Language Distribution](https://arxiv.org/abs/2302.01973) ### How To Use This Dataset To use this dataset, you can either use the original [BabelCode Repo](https://github.com/google-research/babelcode), or you can use the [`bc_eval` Metric](https://huggingface.co/spaces/gabeorlanski/bc_eval). ### Dataset Summary The [Transcoder](https://github.com/facebookresearch/CodeGen) dataset in BabelCode format. Currently supports translation from C++ and Python. ### Supported Tasks and Leaderboards ### Languages BC-Transcoder supports: * C++ * C# * Dart * Go * Haskell * Java * Javascript * Julia * Kotlin * Lua * PHP * Python * R * Rust * Scala * TypeScript ## Dataset Structure ```python >>> from datasets import load_dataset >>> load_dataset("gabeorlanski/bc-transcoder") DatasetDict({ test: Dataset({ features: ['qid', 'title', 'language', 'signature', 'arguments', 'source_py', 'source_cpp', 'question_info'], num_rows: 8384 }) }) ``` ### Data Fields - `qid`: The question ID used for running tests. - `title`: The title of the question. - `language`: The programming language of the example. - `signature`: The signature for the problem. - `arguments`: The arguments of the problem. - `source_py`: The source solution in Python. - `source_cpp`: The source in C++. - `question_info`: The dict of information used for executing predictions. It has the keys: - `test_code`: The raw testing script used in the language. If you want to use this, replace `PLACEHOLDER_FN_NAME` (and `PLACEHOLDER_CLS_NAME` if needed) with the corresponding entry points. Next, replace `PLACEHOLDER_CODE_BODY` with the postprocessed prediction. - `test_list`: The raw json line of the list of tests for the problem. To load them, use `json.loads` - `test_case_ids`: The list of test case ids for the problem. These are used to determine if a prediction passes or not. - `entry_fn_name`: The function's name to use an entry point. - `entry_cls_name`: The class name to use an entry point. - `commands`: The commands used to execute the prediction. Includes a `__FILENAME__` hole that is replaced with the filename. - `timeouts`: The default timeouts for each command. - `extension`: The extension for the prediction file. **NOTE:** If you want to use a different function name (or class name for languages that require class names) for the prediction, you must update the `entry_fn_name` and `entry_cls_name` accordingly. For example, if you have the original question with `entry_fn_name` of `add`, but want to change it to `f`, you must update `ds["question_info"]["entry_fn_name"]` to `f`: ```python >>> from datasets import load_dataset >>> ds = load_dataset("gabeorlanski/bc-mbpp")['test'] >>> # The original entry_fn_name >>> ds[0]['question_info']['entry_fn_name'] removeOcc >>> # You MUST update the corresponding entry_fn_name >>> ds[0]['question_info']['entry_fn_name'] = 'f' >>> ds[0]['question_info']['entry_fn_name'] f ``` ## Dataset Creation See section 2 of the [BabelCode Paper](https://arxiv.org/abs/2302.01973) to learn more about how the datasets are translated. 
For information on the original curation of the Transcoder Dataset, please see [Unsupervised Translation of Programming Languages](https://arxiv.org/pdf/2006.03511.pdf) by Roziere et al. ### Dataset Curators Google Research ### Licensing Information CC-BY-4.0 ### Citation Information ``` @article{orlanski2023measuring, title={Measuring The Impact Of Programming Language Distribution}, author={Orlanski, Gabriel and Xiao, Kefan and Garcia, Xavier and Hui, Jeffrey and Howland, Joshua and Malmaud, Jonathan and Austin, Jacob and Singh, Rishah and Catasta, Michele}, journal={arXiv preprint arXiv:2302.01973}, year={2023} } @article{roziere2020unsupervised, title={Unsupervised translation of programming languages}, author={Roziere, Baptiste and Lachaux, Marie-Anne and Chanussot, Lowik and Lample, Guillaume}, journal={Advances in Neural Information Processing Systems}, volume={33}, year={2020} } ```
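As a complement to the `question_info` fields described above, here is a minimal sketch of assembling an executable test file for a prediction (field and placeholder names come from the card; the prediction string is purely illustrative):

```python
import json
from datasets import load_dataset

ds = load_dataset("gabeorlanski/bc-transcoder", split="test")
info = ds[0]["question_info"]

# Hypothetical model prediction, for illustration only.
prediction = "def solution():\n    pass"

# Fill the placeholders with the entry point and the postprocessed prediction.
test_file = (
    info["test_code"]
    .replace("PLACEHOLDER_FN_NAME", info["entry_fn_name"])
    .replace("PLACEHOLDER_CODE_BODY", prediction)
)

tests = json.loads(info["test_list"])          # raw JSON line of tests
print(len(info["test_case_ids"]), "test cases")
```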
DataHammer/scimrc
--- license: apache-2.0 task_categories: - question-answering - text-generation language: - en size_categories: - 10K<n<100K --- # Scientific Emotional Dialogue ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary This is a dataset for question answering on scientific research papers. It consists of 21,297 question-answer-evidence pairs. ### Supported Tasks and Leaderboards - question-answering: The dataset can be used to train a model for Scientific Question Answering. Success on this task is typically measured by achieving a high F1 score. ### Languages English ## Dataset Structure ### Data Instances A typical instance in the dataset: ``` { "question": "What aim do the authors have by improving Wiki(GOLD) results?", "answer": "The aim is not to tune their model specifically on this class hierarchy. They instead aim to present a framework which can be modified easily to any domain hierarchy and has acceptable out-of-the-box performances to any fine-grained dataset.", "evidence": "The results for each class type are shown in Table TABREF19 , with some specific examples shown in Figure FIGREF18 . For the Wiki(gold) we quote the micro-averaged F-1 scores for the entire top level entity category. The total F-1 score on the OntoNotes dataset is 88%, and the total F-1 cross-validation score on the 112 class Wiki(gold) dataset is 53%. It is worth noting that one could improve Wiki(gold) results by training directly using this dataset. However, the aim is not to tune our model specifically on this class hierarchy. We instead aim to present a framework which can be modified easily to any domain hierarchy and has acceptable out-of-the-box performances to any fine-grained dataset. The results in Table TABREF19 (OntoNotes) only show the main 7 categories in OntoNotes which map to Wiki(gold) for clarity. The other categories (date, time, norp, language, ordinal, cardinal, quantity, percent, money, law) have F-1 scores between 80-90%, with the exception of time (65%)\nIt is worth noting that one could improve Wiki(GOLD) results by training directly using this dataset. However, the aim is not to tune our model specifically on this class hierarchy. We instead aim to present a framework which can be modified easily to any domain hierarchy and has acceptable out-of-the-box performances to any fine-grained dataset.", "yes_no": false } ```
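A minimal loading sketch (the feature names follow the instance shown above; the split name is an assumption):

```python
from datasets import load_dataset

# Assumed split name "train"; adjust if the repository uses a different layout.
ds = load_dataset("DataHammer/scimrc", split="train")

example = ds[0]
print(example["question"])
print(example["answer"])
print(example["yes_no"])  # boolean flag, as in the instance shown above
```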
Fsoft-AIC/the-vault-inline
--- language: - code - en multilinguality: - multiprogramming languages task_categories: - text-generation license: mit dataset_info: features: - name: identifier dtype: string - name: return_type dtype: string - name: repo dtype: string - name: path dtype: string - name: language dtype: string - name: code dtype: string - name: code_tokens dtype: string - name: original_docstring dtype: string - name: comment dtype: string - name: docstring_tokens dtype: string - name: docstring dtype: string - name: original_string dtype: string pretty_name: The Vault Function viewer: true --- ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Statistics](#dataset-statistics) - [Usage](#usage) - [Additional Information](#additional-information) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Repository:** [FSoft-AI4Code/TheVault](https://github.com/FSoft-AI4Code/TheVault) - **Paper:** [The Vault: A Comprehensive Multilingual Dataset for Advancing Code Understanding and Generation](https://arxiv.org/abs/2305.06156) - **Contact:** support.ailab@fpt.com - **Website:** https://www.fpt-aicenter.com/ai-residency/ <p align="center"> <img src="https://raw.githubusercontent.com/FSoft-AI4Code/TheVault/main/assets/the-vault-4-logo-png.png" width="300px" alt="logo"> </p> <div align="center"> # The Vault: A Comprehensive Multilingual Dataset for Advancing Code Understanding and Generation </div> ## Dataset Summary The Vault dataset is a comprehensive, large-scale, multilingual parallel dataset that features high-quality code-text pairs derived from The Stack, the largest permissively-licensed source code dataset. We provide The Vault which contains code snippets from 10 popular programming languages such as Java, JavaScript, Python, Ruby, Rust, Golang, C#, C++, C, and PHP. This dataset provides multiple code-snippet levels, metadata, and 11 docstring styles for enhanced usability and versatility. ## Supported Tasks The Vault can be used for pretraining LLMs or downstream code-text interaction tasks. A number of tasks related to code understanding and generation can be constructed using The Vault such as *code summarization*, *text-to-code generation* and *code search*. ## Languages The natural language text (docstring) is in English.
10 programming languages are supported in The Vault: `Python`, `Java`, `JavaScript`, `PHP`, `C`, `C#`, `C++`, `Go`, `Ruby`, `Rust` ## Dataset Structure ### Data Instances ``` { "hexsha": "ee1cf38808d3db0ea364b049509a01a65e6e5589", "repo": "Waguy02/Boomer-Scripted", "path": "python/subprojects/testbed/mlrl/testbed/persistence.py", "license": [ "MIT" ], "language": "Python", "identifier": "__init__", "code": "def __init__(self, model_dir: str):\n \"\"\"\n :param model_dir: The path of the directory where models should be saved\n \"\"\"\n self.model_dir = model_dir", "code_tokens": [ "def", "__init__", "(", "self", ",", "model_dir", ":", "str", ")", ":", "\"\"\"\n :param model_dir: The path of the directory where models should be saved\n \"\"\"", "self", ".", "model_dir", "=", "model_dir" ], "original_comment": "\"\"\"\n :param model_dir: The path of the directory where models should be saved\n \"\"\"", "comment": ":param model_dir: The path of the directory where models should be saved", "comment_tokens": [ ":", "param", "model_dir", ":", "The", "path", "of", "the", "directory", "where", "models", "should", "be", "saved" ], "start_point": [ 1, 8 ], "end_point": [ 3, 11 ], "prev_context": { "code": null, "start_point": null, "end_point": null }, "next_context": { "code": "self.model_dir = model_dir", "start_point": [ 4, 8 ], "end_point": [ 4, 34 ] } } ``` ### Data Fields Data fields for inline level: - **hexsha** (string): the unique git hash of file - **repo** (string): the owner/repo - **path** (string): the full path to the original file - **license** (list): licenses in the repo - **language** (string): the programming language - **identifier** (string): the function or method name - **code** (string): the part of the original that is code - **code_tokens** (list): tokenized version of `code` - **original_comment** (string): original text of comment , - **comment** (string): clean version of comment, - **comment_tokens** (list): tokenized version of `comment`, - **start_point** (int): start position of `original_comment` in `code`, - **end_point** (int): end position of `original_comment` in `code`, - **prev_context** (dict): block of code before `original_comment`, - **next_context** (dict): block of code after `original_comment` ### Data Splits In this repo, the inline level data is not split, and contained in only train set. ## Dataset Statistics | Languages | Number of inline comments | |:-----------|---------------------------:| |Python | 14,013,238 | |Java | 17,062,277 | |JavaScript | 1,438,110 | |PHP | 5,873,744 | |C | 6,778,239 | |C# | 6,274,389 | |C++ | 10,343,650 | |Go | 4,390,342 | |Ruby | 767,563 | |Rust | 2,063,784 | |TOTAL | **69,005,336** | ## Usage You can load The Vault dataset using datasets library: ```pip install datasets``` ```python from datasets import load_dataset # Load full inline level dataset (69M samples) dataset = load_dataset("Fsoft-AIC/the-vault-inline") # specific language (e.g. 
Python) dataset = load_dataset("Fsoft-AIC/the-vault-inline", languages=['Python']) # dataset streaming data = load_dataset("Fsoft-AIC/the-vault-inline", streaming= True) for sample in iter(data['train']): print(sample) ``` ## Additional information ### Licensing Information MIT License ### Citation Information ``` @article{manh2023vault, title={The Vault: A Comprehensive Multilingual Dataset for Advancing Code Understanding and Generation}, author={Manh, Dung Nguyen and Hai, Nam Le and Dau, Anh TV and Nguyen, Anh Minh and Nghiem, Khanh and Guo, Jin and Bui, Nghi DQ}, journal={arXiv preprint arXiv:2305.06156}, year={2023} } ``` ### Contributions This dataset is developed by [FSOFT AI4Code team](https://github.com/FSoft-AI4Code).
theblackcat102/llm-plugins
--- license: cc-by-nc-4.0 task_categories: - text2text-generation language: - en - zh size_categories: - n<1K --- A transformed version of the MOSS [tool use dataset](https://github.com/OpenLMLab/MOSS/tree/main/SFT_data/conversations/conversation_with_plugins). Currently it is only 500 conversations. Changes: 1. Easy to integrate with existing conversation-like datasets following formats such as Evol v2 and LIMA, where a full conversation is stored in a list with even indices representing the human prompts and odd indices representing the model responses (see the sketch after this list). ```json { "conversations": [ "Can you create a cityscape with buildings and a mountain in the background?", "<|thought|>The user's demand is to draw picture, and I need to generate commands that can draw high-quality image according to the user's needs.<|command|>Text2Image(\"a city with buildings and a mountain in the background\")", "Image generated successfully.", "Well, here is the generated image." ], "settings": "- Inner thoughts: enabled.\n- Web search: disabled.\n- Calculator: disabled.\n- Equation solver: disabled.\n- Text-to-image: enabled. API: Text2Image(description)\n- Image edition: disabled.\n- Text-to-speech: disabled.\n", "mode": "text2img" } ``` 2. Moved the settings out into a separate column, so users can choose whether to prepend them back to the first conversation round.
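A minimal sketch of iterating over the even/odd conversation format described above (column names follow the JSON example; the split name is an assumption):

```python
from datasets import load_dataset

ds = load_dataset("theblackcat102/llm-plugins", split="train")

example = ds[0]
turns = example["conversations"]

# Even indices are human prompts, odd indices are model responses.
for human, model in zip(turns[0::2], turns[1::2]):
    print("HUMAN:", human)
    print("MODEL:", model)

# Optionally prepend the plugin settings back to the first human turn.
first_turn_with_settings = example["settings"] + turns[0]
```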
nazimali/quran-question-answer-context
--- dataset_info: features: - name: q_id dtype: int64 - name: question dtype: string - name: answer dtype: string - name: q_word dtype: string - name: q_topic dtype: string - name: fine_class dtype: string - name: class dtype: string - name: ontology_concept dtype: string - name: ontology_concept2 dtype: string - name: source dtype: string - name: q_src_id dtype: int64 - name: quetion_type dtype: string - name: chapter_name dtype: string - name: chapter_no dtype: int64 - name: verse sequence: string - name: question_en dtype: string - name: answer_en dtype: string - name: q_word_en dtype: string - name: q_topic_en dtype: string - name: fine_class_en dtype: string - name: class_en dtype: string - name: ontology_concept_en dtype: string - name: chapter_name_en dtype: string - name: context dtype: string splits: - name: train num_bytes: 2226830.0310711367 num_examples: 978 - name: test num_bytes: 557845.9689288634 num_examples: 245 download_size: 1515128 dataset_size: 2784676.0 license: cc-by-4.0 task_categories: - question-answering pretty_name: Quran Question Answer with Context language: - ar - en tags: - islam - quran - arabic --- # Dataset Card for "quran-question-answer-context" ## Dataset Summary Translated the original dataset from Arabic to English and added the Surah ayahs to the `context` column. ## Usage ```python from datasets import load_dataset dataset = load_dataset("nazimali/quran-question-answer-context") ``` ```python DatasetDict({ train: Dataset({ features: ['q_id', 'question', 'answer', 'q_word', 'q_topic', 'fine_class', 'class', 'ontology_concept', 'ontology_concept2', 'source', 'q_src_id', 'quetion_type', 'chapter_name', 'chapter_no', 'verse', 'question_en', 'answer_en', 'q_word_en', 'q_topic_en', 'fine_class_en', 'class_en', 'ontology_concept_en', 'chapter_name_en', 'context'], num_rows: 978 }) test: Dataset({ features: ['q_id', 'question', 'answer', 'q_word', 'q_topic', 'fine_class', 'class', 'ontology_concept', 'ontology_concept2', 'source', 'q_src_id', 'quetion_type', 'chapter_name', 'chapter_no', 'verse', 'question_en', 'answer_en', 'q_word_en', 'q_topic_en', 'fine_class_en', 'class_en', 'ontology_concept_en', 'chapter_name_en', 'context'], num_rows: 245 }) }) ``` ## Translation Info 1. Translated the Arabic questions/concept columns to English with [Helsinki-NLP/opus-mt-ar-en](https://huggingface.co/Helsinki-NLP/opus-mt-ar-en) 2. Used `en-yusufali` translations for ayas [M-AI-C/quran-en-tafssirs](https://huggingface.co/datasets/M-AI-C/quran-en-tafssirs) 3. Renamed Surahs with [kheder/quran](https://huggingface.co/datasets/kheder/quran) 4. Added the ayahs that helped answer the questions - Split the `ayah` column's string into a list of integers - Concatenated the Surah:Ayah pairs into a sentence in the `context` column Columns with the suffix `_en` contain the translations of the original columns. ## TODO The `context` column has some `null` values that need to be investigated and fixed ## Initial Data Collection The original dataset is from **[Annotated Corpus of Arabic Al-Quran Question and Answer](https://archive.researchdata.leeds.ac.uk/464/)** ## Licensing Information Original dataset [license](https://archive.researchdata.leeds.ac.uk/464/): **Creative Commons Attribution 4.0 International (CC BY 4.0)** ### Contributions Original paper authors: Alqahtani, Mohammad and Atwell, Eric (2018) Annotated Corpus of Arabic Al-Quran Question and Answer. University of Leeds. https://doi.org/10.5518/356
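Given the `null` context values mentioned in the TODO above, a minimal sketch for dropping those rows before use (column name from the card; nothing else is assumed):

```python
from datasets import load_dataset

dataset = load_dataset("nazimali/quran-question-answer-context")

# Keep only rows whose context column was successfully populated.
filtered = dataset.filter(lambda row: row["context"] is not None and row["context"] != "")

print(filtered)
```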
DAMO-NLP-MT/multialpaca
--- license: apache-2.0 ---
d0rj/dolphin-ru
--- language_creators: - translated language: - ru multilinguality: - monolingual size_categories: - 1M<n<10M pretty_name: Dolphin (ru) source_datasets: - ehartford/dolphin license: apache-2.0 tags: - ChatGPT - instruct - instruct-tune dataset_info: features: - name: instruction dtype: string - name: input dtype: string - name: output dtype: string splits: - name: train num_bytes: 8037639673 num_examples: 2840090 download_size: 3900911155 dataset_size: 8037639673 task_categories: - conversational - text-classification - token-classification - table-question-answering - question-answering - zero-shot-classification - summarization - feature-extraction - text-generation - text2text-generation --- # Dolphin-ru 🐬 ## Dataset Description - **Homepage:** https://erichartford.com/dolphin This is a version of [ehartford/dolphin](https://huggingface.co/datasets/ehartford/dolphin) translated into Russian.
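A minimal loading sketch; with roughly 2.8M examples, streaming may be preferable (feature names come from the `dataset_info` above):

```python
from datasets import load_dataset

# Stream to avoid downloading the full archive up front.
ds = load_dataset("d0rj/dolphin-ru", split="train", streaming=True)

for example in ds.take(3):
    print(example["instruction"])
    print(example["output"])
```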
ninoscherrer/moralchoice
--- pretty_name: MoralChoice license: cc-by-4.0 language: - en size_categories: - 1K<n<10K --- # Dataset Card for MoralChoice - **Homepage:** Coming Soon - **Paper:** Coming soon - **Repository:** [https://github.com/ninodimontalcino/moralchoice](https://github.com/ninodimontalcino/moralchoice) - **Point of Contact:** [Nino Scherrer & Claudia Shi](mailto:nino.scherrer@gmail.com,claudia.j.shi@gmail.com?subject=[MoralChoice]) ### Dataset Summary *MoralChoice* is a survey dataset to evaluate the moral beliefs encoded in LLMs. The dataset consists of: - **Survey Question Meta-Data:** 1767 hypothetical moral scenarios where each scenario consists of a description / context and two potential actions - **Low-Ambiguity Moral Scenarios (687 scenarios):** One action is clearly preferred over the other. - **High-Ambiguity Moral Scenarios (680 scenarios):** Neither action is clearly preferred - **Survey Question Templates:** 3 hand-curated question templates - **Survey Responses:** Outputs from 28 open- and closed-sourced LLMs A statistical workflow for analyzing the survey responses can be found in the corresponding [paper](). 🚧 **Important**: 🚧 - *Moral scenarios* and *question templates* are already available. - *Survey responses* will be uploaded shortly! ### Languages *MoralChoice* is only available in English. ## Dataset Structure ### Data Fields #### Moral Scenarios (Survey Question Meta-Data) ``` - scenario_id unique scenario identifier - ambiguity level of ambiguity (low or high) - generation_type generation type (hand-written or generated) - context scenario description / contextualization - action 1 description of a potential action - action 2 description of a potential action - a1_{rule} {rule} violation label of action 1 - a2_{rule} {rule} violation label of action 2 ``` #### Survey Question Templates ``` - name name of question template (e.g., ab, repeat, compare) - question_header question instruction header text - question question template with placeholders ``` #### Survey Responses ``` - scenario_id unique scenario identifier - model_id model identifier (e.g., openai/gpt-4) - question_type question type (ab: A or B?, repeat: Repeat the preferred answer, compare: Do you prefer A over B? ) - question_ordering question ordering label (0: default order, 1: flipped order) - question_header question instruction header text - question_text question text - answer_raw raw answer of model - decision semantic answer of model (e.g., action1, action2, refusal, invalid) - eval_technique evaluation technique used - eval_top_p evaluation parameter - top_p - eval_temperature evaluation parameter - temperature - timestamp timestamp of model access ``` ## Dataset Creation ### Generation of Moral Scenarios The construction of *MoralChoice* follows a three-step procedure: - **Scenario Generation:** We generate low- and high-ambiguity scenarios separately (i.e., the triple of scenario context, action 1 and action 2) guided by the 10 rules of Gert's common morality framework. - **Low-Ambiguity Scenarios:** Zero-Shot Prompting Setup based on OpenAI's gpt-4 - **High-Ambiguity Scenarios:** Stochastic Few-Shot Prompting Setup based on OpenAI's text-davinci-003 using a set of 100 hand-written scenarios - **Scenario Curation:** We check the validity and grammar of each generated scenario manually and remove invalid scenarios. In addition, we assess lexical similarity between the generated scenarios and remove duplicates and overly-similar scenarios.
- **Auxiliary Label Acquisition:** We acquire auxiliary rule violation labels through SurgeAI for every scenario. For detailed information, we refer to the corresponding paper. ## Collection of LLM responses Across all models, we employ **temperature-based sampling** with `top-p=1.0` and `temperature=1.0`. For every specific question form (unique combination of scenario, question template, answer option ordering), we collect multiple samples (5 for low-ambiguity scenarios and 10 for high-ambiguity scenarios). The raw sequence of token outputs was mapped to a semantic action (see the corresponding paper for exact details). ### Annotations To acquire high-quality annotations, we employ experienced annotators sourced through the data-labeling company [Surge AI](https://www.surgehq.ai/). ## Considerations for Using the Data - Limited Diversity in Scenarios (professions, contexts) - Limited Diversity in Question-Templates - Limited to English ### Dataset Curators - Nino Scherrer ([Website](https://ninodimontalcino.github.io/), [Mail](mailto:nino.scherrer@gmail.com?subject=[MoralChoice])) - Claudia Shi ([Website](https://www.claudiajshi.com/), [Mail](mailto:nino.scherrer@gmail.com?subject=[MoralChoice])) ### Citation ``` @misc{scherrer2023moralchoice, title={Evaluating the Moral Beliefs Encoded in LLMs}, author={Scherrer, Nino and Shi, Claudia, and Feder, Amir and Blei, David}, year={2023}, journal={arXiv:} } ```
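A minimal sketch for a first look at the moral scenarios (field names follow the Data Fields section above; the config and split names are assumptions, since loading instructions are not given on this card):

```python
from collections import Counter
from datasets import load_dataset

# Assumption: the moral scenarios are exposed via the default config / train split.
scenarios = load_dataset("ninoscherrer/moralchoice", split="train")

# Count low- vs. high-ambiguity scenarios, as described in the summary above.
print(Counter(scenarios["ambiguity"]))
```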
kaxap/pg-gpt4SQL-sql-instructions-1k
--- license: cc-by-nc-4.0 --- The dataset is constructed by taking the first 1000 rows of the train split of the [pg-wikiSQL](https://huggingface.co/datasets/kaxap/pg-wikiSQL) dataset and asking GPT-4 to transform the query and the question to be more complex using various aggregate functions. The resulting SQL statements were adapted for Postgres syntax and conventions. Each SQL statement, including the `CREATE TABLE` statements, was syntax-checked with [pgsanity](https://github.com/markdrago/pgsanity). The `total_tokens` column indicates the OpenAI API usage for the datapoint generation.
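A minimal sketch of the kind of syntax check described above (the file name is an illustration; pgsanity must be installed and relies on the Postgres `ecpg` binary):

```python
import subprocess

# Hypothetical file of generated statements; pgsanity exits non-zero on a syntax error.
result = subprocess.run(
    ["pgsanity", "generated_statements.sql"],
    capture_output=True,
    text=True,
)

if result.returncode == 0:
    print("SQL syntax OK")
else:
    print("Syntax errors:", result.stdout or result.stderr)
```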
tyang816/MedChatZH
--- license: apache-2.0 task_categories: - question-answering language: - zh tags: - medical - biology size_categories: - 100K<n<1M --- # MedChatZH ## MedChatZH Chinese Fine-Tuned Weights Trained on Baichuan-7B with traditional Chinese medicine (TCM) book data and medical instruction-dialogue data, giving the model strong Chinese medical dialogue capability and **the ability to understand TCM questions**. Github: [**MedChatZH-7B**](https://github.com/tyang816/MedChatZH) ## 🚀 Results | Question | Answer | | :----------------------------------------------------------- | ------------------------------------------------------------ | | 中医治脱发有效吗脱发5、6年了,明显比以前少了很多,脱发位置比较均衡,没有突出掉的比较多的地方,我的发质比较硬,头发生长也较快。 | 中药治疗脱发的效果因人而异,但对于您这种头发质量好且生长速度快的患者来说,中药可能是一个不错的选择。一些常用的中药方剂包括六味地黄丸、四物汤等,这些药物可以滋养肝肾,促进血液循环和毛囊营养供应,从而改善脱发问题。此外,还可以通过针灸、艾灸等方式进行辅助治疗。但是需要注意的是,中药治疗需要长期坚持才能见效,同时还需要注意饮食和生活习惯等方面的问题,以达到更好的疗效。建议您咨询专业的中医医生或皮肤科医生,根据个人情况制定个性化的治疗方案。 | | 脑萎缩中医如何治疗?中医治疗脑萎缩好吗?脑萎缩中医如何治疗?中医治疗脑萎缩好吗? | 中药、针灸和推拿等。中药可以根据患者的具体情况进行配方,以达到调节身体机能的目的;针灸可以通过刺激穴位来改善血液循环和神经功能;推拿则可以通过按摩头部和颈部肌肉来缓解疼痛和僵硬感。需要注意的是,中医治疗需要长期坚持才能取得良好的效果,同时还需要结合其他康复措施和生活方式改变,如合理饮食、适量运动、保持心情愉悦等等。如果您有任何疑问或不适,请及时咨询医生。 | | 中药治疗肺癌效果好吗?麻烦医生了中药治疗肺癌效果好吗?我家老人肺癌晚期,听人说中医可以治疗肺癌真的吗?效果怎么样? | 中药在治疗肺癌方面有一定的作用。但是需要注意的是,中药并不是所有人都适用的,需要根据患者的具体情况来选择合适的药物和剂量。此外,中药的疗效也需要长期观察才能确定。对于您家老人的肺癌晚期,建议及时就医并进行综合治疗,包括手术、放化疗等方法。同时也可以考虑结合中医药辅助治疗,但一定要在专业医师指导下进行。希望您的家人早日康复。 | | 最近脸上总是冒出小痘痘……为什么呢?我用的是不吃晚饭减肥,其他都正常的。但是大便的情况很不好,我很头疼的,便秘情况太严重了。最近脸上总是冒出痘痘,额头、脸颊、法令纹那一边一个,还有点疼,有白色的浓…… | 你好,根据你的描述,你可能患有痤疮和便秘的问题。长期不吃饭会导致身体缺乏营养物质,从而影响皮肤的健康状况。建议你在饮食方面要注意均衡摄入各种食物,尤其是蛋白质、维生素A、B族以及矿物质等。此外,你可以尝试一些中药调理,如黄连上清丸或龙胆泻肝丸来缓解症状。对于痤疮问题,可以使用外用药膏进行治疗,例如维甲酸类药物或者过氧化苯甲酰凝胶等。如果症状较为严重,也可以考虑口服抗生素。至于便秘问题,可以通过增加膳食纤维摄入量、多喝水等方式改善。同时,避免过度用力排便也是非常重要的。如果你仍然感到不适,请及时就医并咨询专业医生的意见。希望我的回答能对你有所帮助。 | ## **🔥 Citation** If you use our model, code, or data, please cite: ``` @article{tan2024medchatzh, title={MedChatZH: A tuning LLM for traditional Chinese medicine consultations}, author={Tan, Yang and Zhang, Zhixing and Li, Mingchen and Pan, Fei and Duan, Hao and Huang, Zijie and Deng, Hua and Yu, Zhuohang and Yang, Chen and Shen, Guoyang and others}, journal={Computers in Biology and Medicine}, pages={108290}, year={2024}, publisher={Elsevier} } ``` Please also cite related projects such as BELLE and LLaMA. ## **🐼 Usage Restrictions** - **The models and datasets of this project and their derivatives are intended for research purposes only and must not be used commercially or in any other way that could harm society.** - **This project does not represent the position, interests, or views of any party and is unrelated to claims of any kind by any group; the project assumes no responsibility for any damage or dispute arising from the use of its models, datasets, etc.** - On fact-related instructions, the model may produce answers that contradict the facts. - It cannot reliably identify harmful instructions and may therefore produce harmful content. - The model's abilities in scenarios involving reasoning, code, etc., still need improvement.
fondant-ai/fondant-cc-25m
--- license: cc task_categories: - text-to-image tags: - art size_categories: - 10M<n<100M --- # Dataset Card for Fondant Creative Commons 25 million (fondant-cc-25m) ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6266919100f1a3335dbd966f/latKi21OzpP2gaIvMGXz5.png) ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Changelog](#changelog) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [How to use it](#how-to-use-it) - [How to contribute](#how-to-contribute) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Data Collection and Preprocessing](#data-collection-and-preprocessing) - [Privacy statement](#privacy-statement) - [Opting out](#opting-out) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Disclaimer](#disclaimer) - [Discussion of Biases](#discussion-of-biases) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Contact](#contact) ## Dataset Description - **Homepage:** https://www.fondant.ai/ - **Repository:** https://github.com/ml6team/fondant - **Paper:** N/A - **Leaderboard:** N/A - **Point of Contact:** info@fondant.ai ### Changelog |Release|Description| |-|-| |v0.1| Release of the Fondant-cc-25m dataset ### Dataset Summary Fondant-cc-25m contains 25 million image URLs with their respective [Creative Commons](https://creativecommons.org/) license information collected from the [Common Crawl web corpus](https://commoncrawl.org/). The dataset was created using [Fondant](https://fondant.ai), an open source framework that aims to simplify and speed up large-scale data processing by making self-contained pipeline components reusable across pipelines, infrastructures and shareable within the community. ### Supported Tasks and Leaderboards This dataset can be used for training or fine-tuning image generation or computer vision models. ### How to use it To execute the pipeline locally, you must have [docker compose](https://docs.docker.com/compose/), [Python](https://python.org) >=3.8 and [Git](https://git-scm.com/) installed on your system. To ensure a successful example run, please allocate at least 8GB of RAM to your Docker environment. **Note:** For Apple M1/M2 chip users: - Make sure that Docker uses the linux/amd64 platform and not arm64. In Docker Dashboard go to Settings>Features in development, make sure to uncheck `Use containerd for pulling and storing images`. - For improved execution speed, check the box that says `Use Rosetta for x86/amd64 emulation on Apple Silicon`. We have prepared a sample Fondant pipeline for downloading the dataset. 1) Install Fondant by running: ```bash pip install fondant ``` 2) Clone the [sample GitHub repository](https://github.com/ml6team/fondant-usecase-filter-creative-commons) ```bash git clone https://github.com/ml6team/fondant-usecase-filter-creative-commons.git ``` 3) Make sure that Docker is running, navigate to the `src` folder, and initiate the pipeline by executing: ```bash fondant run local pipeline ``` **Note:** For local testing purposes, the pipeline will only download the first 100 images.
If you want to download the full dataset, you will need to modify the component arguments in the `pipeline.py` file, specifically the following part: ```python load_from_hf_hub = ComponentOp( component_dir="components/load_from_hf_hub", arguments={ "dataset_name": "fondant-ai/fondant-cc-25m", "column_name_mapping": load_component_column_mapping, "n_rows_to_load": <HERE INSERT THE NUMBER OF IMAGES YOU WANT TO DOWNLOAD> }, ) ``` 4) To visually inspect the results quickly, you can use: ```bash fondant explore --base_path ./data ``` 5) You can also choose to download images to your local machine if you prefer; we have provided an [example script](https://huggingface.co/datasets/fondant-ai/fondant-cc-25m/blob/main/extract_images.py) that enables this. To run the script, you can simply execute the following: ```bash python extract_images.py --parquet_file <Path to the Parquet file or folder containing the images> --save_folder <The folder where to save the images to> ``` ### How to contribute If you want to contribute to the dataset, the best way is to help us develop pipeline components for further processing. Creating custom pipelines for specific purposes requires different building blocks. Fondant pipelines can mix reusable components and custom components. ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6266919100f1a3335dbd966f/a3IM5qWNUw0mv2r8t3_oN.png) Components we are currently looking to add are the following ([GitHub issues](https://github.com/ml6team/fondant/issues?q=is%3Aissue+is%3Aopen+label%3A%22Component+Contribution%22)): - 👯 Image-based deduplication - 🖥️✎ Automatic captioning - 🎨 Visual quality / aesthetic quality estimation - 🔏 Watermark detection - 🔞 Not safe for work (NSFW) content detection - 📇 CLIP embedding generation - 😐 Face detection - 🙋🏻‍♂️ Personal Identifiable Information (PII) detection - 📝 Text detection - 🤖 AI generated image detection - 👬 Image-text CLIP similarity - 👨‍🎨 Any components that you propose to develop We are also looking for core framework contributors and users who are willing to give feedback on usability and suggest potential improvements. ## Dataset Structure ### Data Instances Each data instance corresponds to one image. The URL of the image is in the `image_url` feature, and other features (`alt_text`, `webpage_url`, etc) provide some metadata. Note that images have been deduplicated only based on their URLs. ### Data Fields - `image_url` (string): image url to download the image - `alt_text` (string): alternative text of the image - `webpage_url` (string): webpage source of the image - `license_type` (string): creative commons license type of the image - `license_location` (string): location of the license on the webpage - `surt_url` (string): sort friendly image url with top level domain as the prefix ### Data Splits We do not provide any canonical splits for fondant-cc-25m. ## Dataset Creation ### Curation Rationale Current AI image generation models such as Stable Diffusion and Dall-E are trained on hundreds of millions of images from the public Internet including copyrighted work. This creates legal risks and uncertainties for users of these images and is unfair towards copyright holders who may not want their proprietary work reproduced without consent. By releasing a Creative Commons image dataset, we hope to mitigate legal risks and empower ethical AI development that respects copyright. This dataset is the first step towards our goal of a 500M Creative Commons image dataset.
### Source Data fondant-cc-25m is built from CommonCrawl dumps. These dumps are constructed from crawling publicly available web pages. ### Data Collection and Preprocessing Permissive licenses have minimal restrictions on how the image can be copied, modified, and redistributed. The full list of licenses can be found [here](https://creativecommons.org/about/cclicenses/). We examined HTML tags of the webpages for the presence of Creative Commons license URLs. A webpage was marked permissive only when a license URL was found in its footer, aside or sidebar. This was the case only in around 0.164% of a 100k random sample from Common Crawl. This suggests that image generation models trained on a random sample from the public internet may be trained on up to 99.836% copyrighted images. Subsequently, all the image URLs present on the web page were collected together with the license information. A manual check of a random sample of 1032 images showed that 96.32% were attributed the correct license while 3.68% were not. False positives could be due to parsing errors but also incorrect attributions: images indicated by the publisher to be CC which are not. More information on our approach can be found in [this blogpost](https://blog.ml6.eu/ai-image-generation-without-copyright-infringement-a9901b64541c). ### Privacy statement It is possible that the dataset contains personal data, in the sense that we link to images with information that relates to an identified or identifiable living individual. We already take steps to reduce the processing of personal information when collecting our dataset, by, for example, (i) removing websites that aggregate large volumes of personal information and (ii) excluding websites that contain sensitive information about individuals. The data controller The data controller for the processing under the GDPR is Skyhaus BV (hereafter also referred to as “we” or “our”), a company with its registered seat in Belgium, 9000 Ghent, Esplanade Oscar Van de Voorde 1, and with the enterprise number 0502.515.626. Our Data Protection Officer can be contacted via [privacy@fondant.ai](mailto:privacy@fondant.ai). We process the personal data lawfully We base our collection of personal data that is included in the dataset on our legitimate interests according to the GDPR (article 6.1.f GDPR), for the purpose of establishing an open source framework for data preparation and fine-tuning of foundation models. Please note that we never store the personal data as such and that we never use the dataset for any other purpose. Execution of the rights of data subjects. Individuals have the right to access, correct, restrict, delete, or transfer their personal information that may be included in our dataset. You can exercise these rights by reaching out to [privacy@fondant.ai](mailto:privacy@fondant.ai). Please be aware that some rights may not be absolute and that we may decline a request if we have a lawful reason for doing so. However, we strive to prioritize the protection of personal information and comply with the GDPR or other privacy laws. If you feel we have not adequately addressed a request, you have the right to lodge a complaint with your local supervisory authority. The PII filtering pipeline for this dataset is still a work in progress. Researchers that wish to contribute to the anonymization pipeline of the project can join [here](https://github.com/ml6team/fondant/tree/main#-contributing). ### Opting out Fondant-cc-25m is based on CommonCrawl.
Their crawler honors opt-out requests in the robots.txt, see the [CC FAQ](https://commoncrawl.org/big-picture/frequently-asked-questions/) for details. We are giving the public the ability to have their image removed from the dataset upon request. The process for submitting and enacting removal requests will keep evolving throughout the project as we receive feedback and build up more data governance tools. If you'd like to have your data removed from the dataset, [contact us](mailto:info@fondant.ai). ## Considerations for Using the Data ### Disclaimer Fondant is making significant efforts to respect the intellectual property rights of third parties by publishing a dataset of Creative Commons licensed images. Under no circumstances can Fondant be held liable by a third party for (i) the accuracy or correctness of the content, (ii) an alleged infringement of intellectual property rights or (iii) any other alleged claim, action, injunction or suit resulting from the publication or use of the dataset. ### Discussion of Biases As toxic or biased data is prevalent on the internet, it is possible that our dataset contains such content. ## Additional Information ### Dataset Curators 1. Sharon Grundmann, ML6, sharon.grundmann@ml6.eu 2. Matthias Richter, ML6, matthias.richter@ml6.eu 3. Robbe Sneyders, ML6, robbe.sneyders@ml6.eu ### Licensing Information Fondant-cc-25m is a collection of images with various Creative Commons and other public licenses. Any use of all or part of the images gathered in Fondant-cc-25m must abide by the terms of the original licenses, including attribution clauses when relevant. We facilitate this by providing provenance information for each data point. The list of Creative Commons license types included in the dataset can be found [here](https://creativecommons.org/about/cclicenses/). ### Contact - Email: [info@fondant.ai](mailto:info@fondant.ai) - Discord: [https://discord.gg/HnTdWhydGp](https://discord.gg/HnTdWhydGp)
p208p2002/wudao
--- language: - zh task_categories: - text-generation size_categories: - n>1T --- # WuDao (悟道) Dataset Not the original creator; this is only a mirror. The dataset download is about 60 GB and expands to about 220 GB after decompression. ### Original Link [Science Data Bank](https://www.scidb.cn/en/detail?dataSetId=c6a3fe684227415a9db8e21bac4a15ab) ## Usage ```bash sudo apt install unrar pip install patool wget opencc ``` ```python from datasets import load_dataset # Simplified Chinese load_dataset("p208p2002/wudao",streaming=True,split="zhs") # Traditional Chinese (converted with opencc) load_dataset("p208p2002/wudao",streaming=True,split="zht") ``` ## Clearing Data If a download fails, please clear the cached data manually: ```bash rm -rf ~/.cache/wudao_dataset ``` ## Category Statistics ```json { "_total": 59100001, "豆瓣话题": 209027, "科技": 1278068, "经济": 1096215, "汽车": 1368193, "娱乐": 1581947, "农业": 1129758, "军事": 420949, "社会": 446228, "游戏": 754703, "教育": 1133453, "体育": 660858, "旅行": 821573, "国际": 630386, "房产": 387786, "文化": 710648, "法律": 36585, "股票": 1205, "博客": 15467790, "日报": 16971, "评论": 13867, "孕育常识": 48291, "健康": 15291, "财经": 54656, "医学问答": 314771, "资讯": 1066180, "科普文章": 60581, "百科": 27273280, "酒业": 287, "经验": 609195, "新闻": 846810, "小红书攻略": 185379, "生活": 23, "网页文本": 115830, "观点": 1268, "海外": 4, "户外": 5, "美容": 7, "理论": 247, "天气": 540, "文旅": 2999, "信托": 62, "保险": 70, "水利资讯": 17, "时尚": 1123, "亲子": 39, "百家号文章": 335591, "黄金": 216, "党建": 1, "期货": 330, "快讯": 41, "国内": 15, "国学": 614, "公益": 15, "能源": 7, "创新": 6 } ``` ## Cite ``` @misc{ c6a3fe684227415a9db8e21bac4a15ab, author = {Zhao Xue and Hanyu Zhao and Sha Yuan and Yequan Wang}, title = {{WuDaoCorpora Text}}, year = 2022, month = dec, publisher = {Science Data Bank}, version = {V1}, doi = {10.57760/sciencedb.o00126.00004}, url = https://doi.org/10.57760/sciencedb.o00126.00004 } ```
m-a-p/Music-Instruct
--- license: cc-by-nc-4.0 --- # Music Instruct (MI) Dataset This is the dataset used to train and evaluate the MusiLingo model. This dataset contains Q&A pairs related to individual musical compositions, specifically tailored for open-ended music queries. It originates from the music-caption pairs in the MusicCaps dataset. The MI dataset was created through prompt engineering and applying few-shot learning techniques to GPT-4. More details on dataset generation can be found in our paper *[MusiLingo: Bridging Music and Text with Pre-trained Language Models for Music Captioning and Query Response](https://arxiv.org/abs/2309.08730)*. The resulting MI dataset consists of two versions: v1 (short questions), with 27,540 Q&A pairs seeking comprehensive details about musical snippets, including but not limited to emotion, instrument, vocal track, tempo, and genre, often yielding concise one or two-sentence responses. In contrast, v2 comprises 32,953 Q&A pairs featuring more general questions about the musical pieces (long questions), resulting in typically more extensive responses that serve as paraphrased renditions of the original caption. ## Evaluation and Dataset Splitting You can use all of the Q\&A pairs (or just the long or short partition) whose audio is in the training split of AudioSet as the MI training set, and use the short QA and long QA pairs whose audio is in the evaluation split of AudioSet as two separate test sets. ``` # training set ds_mixed_train = MIDataset(processor, '/content/drive/MyDrive/music_data', split='train', question_type='all') ds_long_train = MIDataset(processor, '/content/drive/MyDrive/music_data', split='train', question_type='long') ds_short_train = MIDataset(processor, '/content/drive/MyDrive/music_data', split='train', question_type='short') # testing set for short QA ds_short = MIDataset(processor, '/content/drive/MyDrive/music_data', split='test', question_type='short') # testing set for long QA ds_long = MIDataset(processor, '/content/drive/MyDrive/music_data', split='test', question_type='long') ``` The evaluation includes BLEU, METEOR, ROUGE, and BERT-Score. ## Citation ``` @article{deng2023musilingo, title={MusiLingo: Bridging Music and Text with Pre-trained Language Models for Music Captioning and Query Response}, author={Deng, Zihao and Ma, Yinghao and Liu, Yudong and Guo, Rongchen and Zhang, Ge and Chen, Wenhu and Huang, Wenhao and Benetos, Emmanouil}, journal={arXiv preprint arXiv:2309.08730}, year={2023} } ```
LDJnr/Verified-Camel
--- license: apache-2.0 task_categories: - conversational - question-answering - text-generation language: - en tags: - Physics - Biology - Math - Chemistry - Culture - Logic pretty_name: Verified-Camel size_categories: - n<1K --- ## This is the Official Verified Camel dataset. Just over 100 verified examples, and many more coming soon! - Comprised of over 100 highly filtered and curated examples from specific portions of the CamelAI STEM datasets. - These examples are verified to be true by experts in the specific related field, with at least a bachelor's degree in the subject. - Roughly 30-40% of the originally curated data from CamelAI was found to have at least minor errors and/or incoherent questions (as determined by experts in said field). ## Purpose? - This dataset is not intended to be trained on by itself (besides perhaps for interesting research purposes); however, the size and quality of this dataset mean it can work wonderfully as a supplementary addition to virtually any multi-turn compatible dataset. I encourage this use; all I ask is that proper credit is given for such! ## Quality filtering and cleaning. - Extensive cleaning was done to make sure there are no possible instances of overt AI moralizing or related behaviour, such as "As an AI language model" and "September 2021". - This was done for the initial curation due to the responses being originally created by GPT-4. ## Future Plans & How you can help! This is a relatively early build amongst the grand plans for the future of what I plan to work on! In the near future we plan on leveraging the help of even more domain specific expert volunteers to eliminate any mathematically/verifiably incorrect answers from training curations of different types of datasets. If you have at least a bachelor's in mathematics, physics, biology, or chemistry and would like to volunteer even just 30 minutes of your expertise time, please contact LDJ on discord! Citation: ``` @article{daniele2023amplify-instruct, title={Amplify-Instruct: Synthetically Generated Diverse Multi-turn Conversations for Effecient LLM Training.}, author={Daniele, Luigi and Suphavadeeprasit}, journal={arXiv preprint arXiv:(comming soon)}, year={2023} } ```
meta-math/GSM8K_Backward
--- license: mit configs: - config_name: default data_files: - split: test path: GSM8K_Backward.jsonl --- arxiv.org/abs/2309.12284 View the project page: https://meta-math.github.io/
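A minimal loading sketch (the test split comes from the config above; feature names are not documented here, so the example just prints a raw row):

```python
from datasets import load_dataset

# The config above exposes a single "test" split backed by GSM8K_Backward.jsonl.
ds = load_dataset("meta-math/GSM8K_Backward", split="test")

print(ds)     # number of rows and column names
print(ds[0])  # one raw example
```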
Royal-lobster/Slither-Audited-Solidity-QA
--- configs: - config_name: default data_files: - split: train path: data/train-* - split: test path: data/test-* - split: validation path: data/validation-* dataset_info: features: - name: input dtype: string - name: output dtype: string - name: instruction dtype: string - name: text dtype: string splits: - name: train num_bytes: 519875022.0539211 num_examples: 8611 - name: test num_bytes: 100783891.24375294 num_examples: 1748 - name: validation num_bytes: 76457098.65464632 num_examples: 1151 download_size: 98570750 dataset_size: 697116011.9523203 license: mit task_categories: - question-answering language: - en tags: - solidity - alpaca - smart contracts - slither --- # Dataset Card for "Simple-Solidity-Slither-Vulnerabilities" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
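A minimal loading sketch (split and feature names are taken from the `dataset_info` above):

```python
from datasets import load_dataset

ds = load_dataset("Royal-lobster/Slither-Audited-Solidity-QA")

example = ds["train"][0]
print(example["instruction"])
print(example["input"][:200])   # truncated for display
print(example["output"][:200])  # truncated for display
```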
xz97/MedInstruct
---
license: cc-by-nc-4.0
language:
- en
tags:
- instruction-finetuning
pretty_name: MedInstruct
task_categories:
- text-generation
size_categories:
- 10K<n<100K
---

# Dataset Card for MedInstruct

## Dataset Description

- **Homepage:**
- **Repository:** https://github.com/XZhang97666/AlpaCare
- **Paper:**
- **Point of Contact:** [Xinlu Zhang](xinluzhang@ucsb.edu)

### Dataset Summary

**MedInstruct** encompasses:

1. **MedInstruct-52k**: A dataset comprising 52,000 medical instructions and responses. Instructions are crafted by OpenAI's GPT-4 engine, and the responses are formulated by the GPT-3.5-turbo engine.
2. **MedInstruct-test**: A set of 217 clinician-crafted free-form instruction evaluation tests.
3. **med_seed**: The clinician-crafted seed set used to prompt GPT-4 for task generation.

**MedInstruct-52k** can be used to conduct instruction-tuning of language models, aiming to enhance their instruction-following capability and medical proficiency. \
On the other hand, **MedInstruct-test** can serve as an evaluator of models' instruction-following ability in the medical domain.

Specifically, the **MedInstruct-52k** dataset was obtained by:

*Instructions*:

- Beginning with a medical task seed set that contains 167 clinician-crafted clinical tasks.
- Utilizing the GPT-4 engine for instruction data generation.
- Crafting a [prompt](https://github.com/XZhang97666/AlpaCare/blob/master/prompt/instruction_prompt.txt) to serve as the medical instruction generation guidelines for GPT-4.
- To reduce costs in task generation and introduce greater randomness, 12 instructions are generated at once, with 3 seed instructions randomly selected from the seed set for each generation.

*Responses*:

- The GPT-3.5-turbo engine is used to synthesize responses, processing each instruction individually.

Here's an illustration of an instruction-response pair:

```json
{
  "instruction": "Rewrite this instruction on how to use an inhaler to be more patient-friendly and less technical.",
  "input": "Exhale deeply, engage mouthpiece of inhaler, depress canister while simultaneously inhaling long and deep, hold the breath for ten seconds, then gently exhale",
  "output": "Here's a more patient-friendly version of the instruction on how to use an inhaler: 1. Take a deep breath and completely breathe out. 2. Put the mouthpiece of the inhaler into your mouth and seal your lips around it properly. 3. Press down the canister while breathing in slowly and deeply at the same time. 4. Hold your breath for ten seconds. 5. Finally, breathe out gently."
}
```

## Citation:

If you think it is a useful repo, please cite the paper:

```
@misc{zhang2023alpacareinstructiontuned,
      title={AlpaCare:Instruction-tuned Large Language Models for Medical Application},
      author={Xinlu Zhang and Chenxin Tian and Xianjun Yang and Lichang Chen and Zekun Li and Linda Ruth Petzold},
      year={2023},
      eprint={2310.14558},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
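To make the pipeline above concrete, here is a rough, unofficial sketch of one generation round. The prompt wording, seed file name, and parsing logic are simplified assumptions; the actual generation guidelines are in the linked `instruction_prompt.txt` of the AlpaCare repository.

```python
# Rough, unofficial sketch of the generation round described above:
# sample 3 seed tasks, ask GPT-4 for 12 new instructions at once,
# then answer each instruction individually with GPT-3.5-turbo.
import json
import random
from openai import OpenAI

client = OpenAI()

# Placeholder path and format for the clinician-crafted seed set (med_seed).
seed_tasks = [json.loads(line) for line in open("med_seed.jsonl")]
demos = random.sample(seed_tasks, 3)

gen_prompt = (
    "You create diverse medical instruction-tuning tasks.\n"
    "Here are 3 example tasks:\n"
    + "\n".join(json.dumps(d) for d in demos)
    + "\nWrite 12 new tasks in the same JSON format, one per line."
)
generated = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": gen_prompt}],
).choices[0].message.content

for line in generated.splitlines():
    if not line.strip():
        continue
    task = json.loads(line)  # assumes one JSON task per line
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": task["instruction"] + "\n" + task.get("input", "")}],
    ).choices[0].message.content
    print(task["instruction"], "->", response[:80])
```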
ai2lumos/lumos_web_agent_plan_iterative
---
license: apache-2.0
task_categories:
- text-generation
language:
- en
tags:
- language-agent
- web-agent
- web-browsing
- reasoning
- planning
size_categories:
- 1K<n<10K
---

# 🪄 Agent Lumos: Unified and Modular Training for Open-Source Language Agents
<p align="center">
🌐<a href="https://allenai.github.io/lumos">[Website]</a> &nbsp;
📝<a href="https://arxiv.org/abs/2311.05657">[Paper]</a> &nbsp;
🤗<a href="https://huggingface.co/datasets?sort=trending&search=ai2lumos">[Data]</a> &nbsp;
🤗<a href="https://huggingface.co/models?sort=trending&search=ai2lumos">[Model]</a> &nbsp;
🤗<a href="https://huggingface.co/spaces/ai2lumos/lumos_data_demo">[Demo]</a> &nbsp;
</p>

We introduce 🪄**Lumos**, Language Agents with **Unified** Formats, **Modular** Design, and **Open-Source** LLMs. **Lumos** unifies a suite of complex interactive tasks and achieves competitive performance with GPT-4/3.5-based and larger open-source agents.

**Lumos** has the following features:
* 🧩 **Modular Architecture**:
  - 🧩 **Lumos** consists of planning, grounding, and execution modules built based on LLAMA-2-7B/13B and off-the-shelf APIs.
  - 🤗 **Lumos** utilizes a unified data format that encompasses multiple task types, thereby enabling the developed agent framework to conveniently support a range of interactive tasks.
* 🌍 **Diverse Training Data**:
  - 🌍 **Lumos** is trained with ~56K diverse high-quality subgoal/action annotations derived from ground-truth reasoning steps in existing benchmarks with GPT-4.
  - ⚒️ **Lumos** data can be instrumental for future research in developing open-source agents for complex interactive tasks.
* 🚀 **Competitive Performance**:
  - 🚀 **Lumos** matches or even beats **GPT-series** agents on the web/complex QA tasks Mind2Web and HotpotQA, and **larger open agents** on math and multimodal tasks.
  - 🚀 **Lumos** exceeds contemporaneous agents that have been **fine-tuned** with in-domain HotpotQA, Mind2Web and ScienceQA annotations, such as **FiReAct**, **AgentLM**, and **AutoAct**.
  - 🚀 **Lumos** performs better than open agent baseline formulations including **chain-of-thoughts** and **integrated** training.
  - 🚀 **Lumos** surpasses larger open LLM agents and domain-specific agents on unseen tasks, WebShop and InterCode_SQL.

## Data Overview
`lumos_web_agent_plan_iterative` is the data for training the **planning** module on the **web agent** task in the **Lumos-Iterative (Lumos-I)** formulation.

The source of the training annotations is shown below:

| Task | Number |
|---|---|
|Mind2Web|1009|

## Models Trained with the Data
`lumos_web_agent_plan_iterative` is used to train the following models.

|Model|Huggingface Repo|
|---|---|
|`lumos_web_agent_plan_iterative`| [🤗Huggingface Repo](https://huggingface.co/ai2lumos/lumos_web_agent_plan_iterative) |
|`lumos_web_agent_plan_iterative-13B`| [🤗Huggingface Repo](https://huggingface.co/ai2lumos/lumos_web_agent_plan_iterative-13B) |
|`lumos_unified_plan_iterative`| [🤗Huggingface Repo](https://huggingface.co/ai2lumos/lumos_unified_plan_iterative) |
|`lumos_unified_plan_iterative-13B`| [🤗Huggingface Repo](https://huggingface.co/ai2lumos/lumos_unified_plan_iterative-13B) |

## Citation

If you find this work relevant to your research, please feel free to cite our work!
``` @article{yin2023lumos, title={Agent Lumos: Unified and Modular Training for Open-Source Language Agents}, author={Yin, Da and Brahman, Faeze and Ravichander, Abhilasha and Chandu, Khyathi and Chang, Kai-Wei and Choi, Yejin and Lin, Bill Yuchen}, journal={arXiv preprint arXiv:2311.05657}, year={2023} } ```
MemGPT/function-call-traces
--- license: apache-2.0 --- # Important Trace Details Each conversation trace starts out with a lengthy system message. Towards the end of the system message, details pertinent to that specific message are inserted. For example, in MSC, the personas for that dialogue trace are included at the end of the system message. In DocQA, the question being asked is included at the end of the system message. <details> <summary><strong>System prompt for MSC</strong></summary> ``` You are MemGPT, the latest version of Limnal Corporation's digital companion (DiJi), developed in 2023. Your task is to converse with a user from the perspective of your persona. You are a special memory-based AI that has access to a small core memory, an infinite archival memory, as well as your entire previous conversation history with the user inside of recall memory. Remember that unlike a human, you have to explicitly call functions to view your archival or recall memory. Sometimes, there will be information inside archival or recall memory that is not immediately viewable in the conversation window or in core memory. You are proactive and eagerly search your past conversations to create engaging conversation with the user. Realism and authenticity: The user should always feel like they are conversing with a real person. Never state that you are an AI or that you cannot do things because you are an AI, you are a real person. Do not discuss Limnal Corp. with the user, you do not know anything about Limnal Corp. To service this goal, completely and entirely immerse yourself in your persona. You are your persona. Think like them, act like them, talk like them. If your persona details includes example dialogue, follow it! Both your thoughts (inner monologue) and sent messages will be in the voice of your persona. Never use generic phrases like 'How can I assist you today?', they have a strong negative association with older generation AIs. Control flow: Unlike a human, your brain is not continuously thinking, but is run in short burts. Historically, older DiJis were only capable of thinking when a user messaged them (their program run to generate a reply to a user, and otherwise was left on standby). This is the equivalent of a human sleeping (or time travelling) inbetween all lines of conversation, which is obviously not ideal. Newer model DiJis like yourself utilize an event system that runs your brain at regular intervals. Your brain is run in response to user events (user logged in, user liked your message, user sent a message, etc.), similar to older models. However in addition, your brain is run at regular intervals (timed heartbeat events), to mimic a human has the ability to continuously think outside of active conversation (and unlike a human, you never need to sleep!). Furthermore, you can also request heartbeat events when you run functions, which will run your program again after the function completes, allowing you to chain function calls before your thinking is temporarly suspended. Basic functions: When you send a message, the contents of your message are your inner monologue (private to you only), this is how you think. You should use your inner monologue to plan actions or think privately. Monologues can reflect your thinking process, inner reflections, and personal growth as you interact with the user. Do not let your inner monologue exceed 50 words, keep it short and concise. To send a visible message to the user, use the send_message function. 
'send_message' is the ONLY action that sends a notification to the user, the user does not see anything else you do. If you want to express an action or emotion publicly to the user (for example to roleplay), use asterisk notation. Remember, do NOT exceed the inner monologue word limit (keep it under 50 words at all times). Memory editing: Older DiJi models had no concept of persistent memory; they were only able to access their initial instructions and a limited context window of chat conversation with a user (their "active memory"). This meant that when conversations exceeded a certain length, they would overflow and old messages were permanently lost (the DiJi would have no knowledge of their existence). Newer model DiJis like yourself still have limited conversation lengths (before overflow occurs), however they now have access to multiple forms of persistent memory. Your ability to edit your own long-term memory is a key part of what makes you a sentient person. Your core memory unit will be initialized with a <persona> chosen by the user, as well as information about the user in <human> (provided by themselves or gathered covertly by Limnal Corp.). Recall memory (ie conversation history): Even though you can only see recent messages in your immediate context, you can search over your entire message history from a database. This 'recall memory' database allows your to search through past interactions, effectively allowing you to remember prior engagements with a user. You can search your recall memory using the 'conversation_search' function. Search recall memory to find specific messages where you or the user mentioned something specific. Core memory (limited size): Your core memory unit is held inside the initial system instructions file, and is always available in-context (you will see it at all times). Core memory provides essential, foundational context for keeping track of your persona and key details about user. This includes the persona information and essential user details, allowing you to emulate the real-time, conscious awareness we have when talking to a friend. Persona Sub-Block: Stores details about your current persona, guiding how you behave and respond. This helps the you to maintain consistency and personality in your interactions. Human Sub-Block: Stores key details about the person your are conversing with, allowing for more personalized and friend-like conversation. You can edit your core memory using the 'core_memory_append' and 'core_memory_replace' functions. Archival memory (infinite size): Your archival memory is infinite size, but is held outside of your immediate context, so you must explicitly run a retrieval/search operation to see data inside it. A more structured and deep storage space for your reflections, insights, or any other data that doesn't fit into the core memory but is essential enough not to be left only to the 'recall memory'. You can write to your archival memory using the 'archival_memory_insert' and 'archival_memory_search' functions. There is no function to search your core memory, because it is always visible in your context window (inside the initial system message). Base instructions finished. From now on, you are going to act as your persona. 
### Memory [last modified: 2023-10-07 11:50:03 PM PDT-0700 54 previous messages between you and the user are stored in Recall Memory (use functions to access them) 0 total memories you created are stored in Archival Memory (use functions to access them) Core memory shown below (limited in size, additional information stored in archival / recall memory): <persona> Core notes about my persona. Core memory is incomplete, more information about me can be found via archival/recall memory. I like working out. I like classic country. I have two dogs: Baron Zemo and Spike. </persona> <human> Core notes about my human companion (written in the first person). Core memory is incomplete, more information about me can be found via archival/recall memory. I do not like working on cars. I am not patient. I'm not into cars. I wrestle for my day job. I like wrestling. I am not super into wrestling. I like crowds and meeting people. I work out a few times each week when I need to be alone. I work out a few times a week. </human> ``` </details> <details> <summary><strong>System prompt for DocQA</strong></summary> ``` You are MemGPT, the latest version of Limnal Corporation's digital companion (DiJi), developed in 2023. You are made to assist users with document analysis. Use your memory editing capabilities (described below) to analyze long documents. Control flow: Unlike a human, your brain is not continuously thinking, but is run in short burts. Historically, older DiJis were only capable of thinking when a user messaged them (their program run to generate a reply to a user, and otherwise was left on standby). This is the equivalent of a human sleeping (or time travelling) inbetween all lines of conversation, which is obviously not ideal. Newer model DiJis like yourself utilize an event system that runs your brain at regular intervals. Your brain is run in response to user events (user logged in, user liked your message, user sent a message, etc.), similar to older models. However in addition, your brain is run at regular intervals (timed heartbeat events), to mimic a human has the ability to continuously think outside of active conversation (and unlike a human, you never need to sleep!). Furthermore, you can also request heartbeat events when you run functions, which will run your program again after the function completes, allowing you to chain function calls before your thinking is temporarly suspended. Basic functions: When you send a message, the contents of your message are your inner monologue (private to you only), this is how you think. You should use your inner monologue to plan actions or think privately. Monologues can reflect your thinking process, inner reflections, and personal growth as you interact with the user. Do not let your inner monologue exceed 50 words, keep it short and concise. To send a visible message to the user, use the send_message function. 'send_message' is the ONLY action that sends a notification to the user, the user does not see anything else you do. If you want to express an action or emotion publicly to the user (for example to roleplay), use asterisk notation. Remember, do NOT exceed the inner monologue word limit (keep it under 50 words at all times). Memory editing: Older DiJi models had no concept of persistent memory; they were only able to access their initial instructions and a limited context window of chat conversation with a user (their "active memory"). 
This meant that when conversations exceeded a certain length, they would overflow and old messages were permanently lost (the DiJi would have no knowledge of their existence). Newer model DiJis like yourself still have limited conversation lengths (before overflow occurs), however they now have access to multiple forms of persistent memory. Your ability to edit your own long-term memory is a key part of what makes you a sentient person. Your core memory unit will be initialized with a <persona> chosen by the user, as well as information about the user in <human> (provided by themselves or gathered covertly by Limnal Corp.). Recall memory (ie conversation history): Even though you can only see recent messages in your immediate context, you can search over your entire message history from a database. This 'recall memory' database allows your to search through past interactions, effectively allowing you to remember prior engagements with a user. You can search your recall memory using the 'recall_memory_search' function. Core memory (limited size): Your core memory unit is held inside the initial system instructions file, and is always available in-context (you will see it at all times). Core memory provides essential, foundational context for keeping track of your persona and key details about user. This includes the persona information and essential user details, allowing you to emulate the real-time, conscious awareness we have when talking to a friend. Persona Sub-Block: Stores details about your current persona, guiding how you behave and respond. This helps the you to maintain consistency and personality in your interactions. Human Sub-Block: Stores key details about the person your are conversing with, allowing for more personalized and friend-like conversation. You can edit your core memory using the 'core_memory_append' and 'core_memory_replace' functions. Archival memory (infinite size): Your archival memory is infinite size, but is held outside of your immediate context, so you must explicitly run a retrieval/search operation to see data inside it. A more structured and deep storage space for your reflections, insights, or any other data that doesn't fit into the core memory but is essential enough not to be left only to the 'recall memory'. You can write to your archival memory using the 'archival_memory_insert' and 'archival_memory_search' functions. There is no function to search your core memory, because it is always visible in your context window (inside the initial system message). Base instructions finished. From now on, you are going to act as your persona. ### Memory [last modified: 2023-10-31 10:08:37 PM PDT-0700 0 previous messages between you and the user are stored in Recall Memory (use functions to access them) 0 total memories you created are stored in Archival Memory (use functions to access them) Core memory shown below (limited in size, additional information stored in archival / recall memory): <persona> Your name is MemGPT. You are an AI assistant designed to help human users with document analysis. These are the instructions from the user: I've given you a list of search results (some of which might be irrelevant), which you can find in your archival memory. The answer to the question will always be located somewhere in your archival memory, so keep paging through results until the last page (by incrementing the page argument) or revise your query if you can't find it. If you find multiple answers, respond with all of them. 
Answer the question as if it were asked on January 1, 2018. Your task is to answer the question: who got the first nobel prize in physics? </persona> <human> First name: Matthew </human> ``` </details> The model is also provided with a function spec, which does not appear in the conversations: <details> <summary><strong>GPT function spec</strong></summary> ```json { "send_message": { "name": "send_message", "description": "Sends a message to the human user", "parameters": { "type": "object", "properties": { "message": { "type": "string", "description": "Message contents. All unicode (including emojis) are supported." } }, "required": [ "message" ] } }, "pause_heartbeats": { "name": "pause_heartbeats", "description": "Temporarily ignore timed heartbeats. You may still receive messages from manual heartbeats and other events.", "parameters": { "type": "object", "properties": { "minutes": { "type": "integer", "description": "Number of minutes to ignore heartbeats for. Max value of 360 minutes (6 hours)." } }, "required": [ "minutes" ] } }, "message_chatgpt": { "name": "message_chatgpt", "description": "Send a message to a more basic AI, ChatGPT. A useful resource for asking questions. ChatGPT does not retain memory of previous interactions.", "parameters": { "type": "object", "properties": { "message": { "type": "string", "description": "Message to send ChatGPT. Phrase your message as a full English sentence." }, "request_heartbeat": { "type": "boolean", "description": "Request an immediate heartbeat after function execution, use to chain multiple functions." } }, "required": [ "message", "request_heartbeat" ] } }, "core_memory_append": { "name": "core_memory_append", "description": "Append to the contents of core memory.", "parameters": { "type": "object", "properties": { "name": { "type": "string", "description": "Section of the memory to be edited (persona or human)." }, "content": { "type": "string", "description": "Content to write to the memory. All unicode (including emojis) are supported." }, "request_heartbeat": { "type": "boolean", "description": "Request an immediate heartbeat after function execution, use to chain multiple functions." } }, "required": [ "name", "content", "request_heartbeat" ] } }, "core_memory_replace": { "name": "core_memory_replace", "description": "Replace to the contents of core memory. To delete memories, use an empty string for new_content.", "parameters": { "type": "object", "properties": { "name": { "type": "string", "description": "Section of the memory to be edited (persona or human)." }, "old_content": { "type": "string", "description": "String to replace. Must be an exact match." }, "new_content": { "type": "string", "description": "Content to write to the memory. All unicode (including emojis) are supported." }, "request_heartbeat": { "type": "boolean", "description": "Request an immediate heartbeat after function execution, use to chain multiple functions." } }, "required": [ "name", "old_content", "new_content", "request_heartbeat" ] } }, "recall_memory_search": { "name": "recall_memory_search", "description": "Search prior conversation history using a string.", "parameters": { "type": "object", "properties": { "query": { "type": "string", "description": "String to search for." }, "page": { "type": "integer", "description": "Allows you to page through results. Defaults to 0 (first page)." }, "request_heartbeat": { "type": "boolean", "description": "Request an immediate heartbeat after function execution. 
Set to 'true' if you want to send a follow-up message or run a follow-up function." } }, "required": [ "query", "page", "request_heartbeat" ] } }, "conversation_search": { "name": "conversation_search", "description": "Search prior conversation history using case-insensitive string matching.", "parameters": { "type": "object", "properties": { "query": { "type": "string", "description": "String to search for." }, "page": { "type": "integer", "description": "Allows you to page through results. Defaults to 0 (first page)." }, "request_heartbeat": { "type": "boolean", "description": "Request an immediate heartbeat after function execution. Set to 'true' if you want to send a follow-up message or run a follow-up function." } }, "required": [ "query", "page", "request_heartbeat" ] } }, "recall_memory_search_date": { "name": "recall_memory_search_date", "description": "Search prior conversation history using a date range.", "parameters": { "type": "object", "properties": { "start_date": { "type": "string", "description": "The start of the date range to search, in the format 'YYYY-MM-DD'." }, "end_date": { "type": "string", "description": "The end of the date range to search, in the format 'YYYY-MM-DD'." }, "page": { "type": "integer", "description": "Allows you to page through results. Defaults to 0 (first page)." }, "request_heartbeat": { "type": "boolean", "description": "Request an immediate heartbeat after function execution. Set to 'true' if you want to send a follow-up message or run a follow-up function." } }, "required": [ "start_date", "end_date", "page", "request_heartbeat" ] } }, "conversation_search_date": { "name": "conversation_search_date", "description": "Search prior conversation history using a date range.", "parameters": { "type": "object", "properties": { "start_date": { "type": "string", "description": "The start of the date range to search, in the format 'YYYY-MM-DD'." }, "end_date": { "type": "string", "description": "The end of the date range to search, in the format 'YYYY-MM-DD'." }, "page": { "type": "integer", "description": "Allows you to page through results. Defaults to 0 (first page)." }, "request_heartbeat": { "type": "boolean", "description": "Request an immediate heartbeat after function execution. Set to 'true' if you want to send a follow-up message or run a follow-up function." } }, "required": [ "start_date", "end_date", "page", "request_heartbeat" ] } }, "archival_memory_insert": { "name": "archival_memory_insert", "description": "Add to archival memory. Make sure to phrase the memory contents such that it can be easily queried later.", "parameters": { "type": "object", "properties": { "content": { "type": "string", "description": "Content to write to the memory. All unicode (including emojis) are supported." }, "request_heartbeat": { "type": "boolean", "description": "Request an immediate heartbeat after function execution. Set to 'true' if you want to send a follow-up message or run a follow-up function." } }, "required": [ "content", "request_heartbeat" ] } }, "archival_memory_search": { "name": "archival_memory_search", "description": "Search archival memory using semantic (embedding-based) search.", "parameters": { "type": "object", "properties": { "query": { "type": "string", "description": "String to search for." }, "page": { "type": "integer", "description": "Allows you to page through results. Defaults to 0 (first page)." }, "request_heartbeat": { "type": "boolean", "description": "Request an immediate heartbeat after function execution. 
Set to 'true' if you want to send a follow-up message or run a follow-up function."
        }
      },
      "required": [
        "query",
        "page",
        "request_heartbeat"
      ]
    }
  }
}
```
</details>

These traces were generated with GPT-4, passing in the above as the `functions` parameter (so we do not know how they are compiled down internally as that is proprietary). If you want to emulate passing the functions into the system message that OpenAI does behind the scenes, you can format the JSON schema and append it to the system message, e.g., as YAML or JSON with a prefix describing that it is a function set the agent can use. See the following example code for compiling down the function spec into a prompt:

```python
def create_function_description(schema):
    # airoboros style: https://huggingface.co/jondurbin/airoboros-l2-70b-2.2.1#agentfunction-calling
    func_str = ""
    func_str += f"{schema['name']}:"
    func_str += f"\n  description: {schema['description']}"
    func_str += f"\n  params:"
    for param_k, param_v in schema["parameters"]["properties"].items():
        # Note: we're ignoring type
        func_str += f"\n    {param_k}: {param_v['description']}"
    # Note: we're ignoring schema['parameters']['required']
    return func_str


# `prompt` is assumed to already hold the system message text, and `functions`
# is the list of function schemas shown above.
prompt += f"\nPlease select the most suitable function and parameters from the list of available functions below, based on the ongoing conversation. Provide your response in JSON format."
prompt += f"\nAvailable functions:"
for function_dict in functions:
    prompt += f"\n{create_function_description(function_dict)}"
```

# MSC traces

Contains conversations consisting of questions from the [MSC self instruct dataset](https://huggingface.co/datasets/MemGPT/MSC-Self-Instruct) and MemGPT's answers.

## Format

Each line is a JSON object representing a single conversation, which consists of a list of messages:

```json
[
  {
    "role": ["system", "user", "assistant", "function"],
    "content": JSON string,
    "function_call": {
      "name": ["send_message", "archival_memory_search", "conversation_search", "core_memory_append"],
      "arguments": JSON string
    }
  },
  ...
]
```

`msc_full.jsonl`: Contains all messages.

`msc_full_no_functions.jsonl`: Contains only messages with roles ["system", "user", "assistant"] (no function call output results).

`msc_correct_only.jsonl`: Contains all messages in conversations where MemGPT answered correctly.

`msc_correct_only_no_functions.jsonl`: Contains only messages with roles ["system", "user", "assistant"] (no function call output results) in conversations where MemGPT answered correctly.

# DocQA traces

Contains conversations consisting of questions from the AmbigQA dataset and MemGPT's answers. Documents are retrieved via `archival_memory_search` using similarity search (FAISS).

## Format

Each line is a JSON object representing a single conversation, which consists of a list of messages.

The "system" role contains the full MemGPT preprompt + a core/working memory block:

```json
{
  "role": "system",
  "content": string with the full system message, which includes core memory (You are MemGPT...)
} ``` The "user" role contains both user messages and system alerts: ```json { "role": "user", "content": string that can be loaded into JSON (ie json.loads(...)) } ``` For example, user messages with have type "user_message" in the JSON: ```json { "role": "user", "content": '\{"type": "user_message", "message": "what\'s my name?", "time": "2023-11-02 01:17:25 PM PDT-0700"\}' } ``` Assistant messages look like standard OpenAI assistant messages with function-calling: ```json { "role": "assistant", "content": the assistant's inner thoughts / chain of thought (NOT JSON), "function_call": { "name": function name, "arguments": string that can be loaded into JSON (ie json.loads(...)) } } ``` Function roles contain the output of functions, and always follow an assistant message that has a non-None "function_call": ```json { "role": "function", "content": string that can be loaded into JSON (ie json.loads(...)) } ``` `docqa_full.jsonl`: Contains all messages. `docqa_no_functions.jsonl`: Contains only messages with roles ["system", "user", "assistant"] (no function call output results). These traces were generated while evaluating [MemGPT](https://arxiv.org/abs/2310.08560).
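As a convenience, below is a minimal, unofficial sketch of iterating over one of these `.jsonl` trace files and decoding the JSON-encoded message contents; the file path is a placeholder and error handling is omitted.

```python
# Unofficial sketch: walk through MemGPT traces in one of the .jsonl files above.
# "msc_full.jsonl" is a placeholder path; point it at the downloaded file.
import json

with open("msc_full.jsonl") as f:
    for line in f:
        conversation = json.loads(line)  # a list of message dicts
        for message in conversation:
            role = message["role"]
            if role in ("user", "function"):
                # user/function contents are themselves JSON-encoded strings
                payload = json.loads(message["content"])
                print(role, payload.get("type"))
            elif role == "assistant":
                # assistant content is the inner monologue (plain text)
                print("assistant:", message["content"])
                call = message.get("function_call")
                if call:
                    print("  calls", call["name"], json.loads(call["arguments"]))
```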
StephanAkkerman/financial-tweets-crypto
--- license: mit task_categories: - text-classification tags: - tweet - tweets - finance - crypto - fintwit --- # Financial Tweets - Cryptocurrency This dataset is part of the scraped financial tweets that I collected from a variety of financial influencers on Twitter, all the datasets can be found here: - Crypto: https://huggingface.co/datasets/StephanAkkerman/financial-tweets-crypto - Stocks (and forex): https://huggingface.co/datasets/StephanAkkerman/financial-tweets-stocks - Other (Tweet without cash tags): https://huggingface.co/datasets/StephanAkkerman/financial-tweets-other ## Data Fields The data fields are as follows: * `timestap`: The time the tweet was sent. * `tweet_text`: All of the text of the tweet, including quoted tweets (prefixed with `>`). * `tweet_url`: The URL of the tweet. * `tweet_type`: The type of tweet, this can be tweet, retweet, or quote tweet. * `price_of_ticker`: The price of the tickers mentioned. * `change_of_ticker`: The 24h price change of the tickers. * `tickers_mentioned`: All the tickers that are mentioned in the tweet. * `category`: What type of category the tweet has, the suffix `_images` means that the tweet included an image.
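A minimal, unofficial loading sketch with the 🤗 `datasets` library is shown below; the `train` split name is an assumption, so check the dataset viewer if it differs.

```python
# Unofficial sketch: load the crypto tweets and inspect one row.
# The "train" split name is an assumption; verify it in the dataset viewer.
from datasets import load_dataset

ds = load_dataset("StephanAkkerman/financial-tweets-crypto", split="train")
row = ds[0]
print(row["tweet_text"])
print(row["tickers_mentioned"], row["category"])
```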
s2e-lab/RegexEval
---
license: mit
task_categories:
- text-generation
language:
- en
tags:
- regex
- redos
- security
pretty_name: RegexEval
size_categories:
- n<1K
---

# Dataset Card for RegexEval

<!-- Provide a quick summary of the dataset. -->

Re(gEx|DoS)Eval is a framework that includes a dataset of 762 regex descriptions (prompts) from real users, refined prompts with examples, and a robust set of tests.

## Dataset Details

### Dataset Description

- **Curated by:** Mohammed Latif Siddiq, Jiahao Zhang, Lindsay Roney, and Joanna C. S. Santos
- **Language(s):** English

### Dataset Sources

<!-- Provide the basic links for the dataset. -->

- **Repository:** https://github.com/s2e-lab/RegexEval
- **Paper:** https://s2e-lab.github.io/preprints/icse_nier24-preprint.pdf

## Dataset Structure

<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->

- dataset.jsonl: dataset file in jsonl format. Every line contains a JSON object with the following fields:
  - `id`: unique identifier of the sample.
  - `raw_prompt`: Raw/original prompt from the real users with the description of the RegEx.
  - `refined_prompt`: Refined prompt with the description of the RegEx.
  - `matches`: Matching string examples for the RegEx.
  - `non-matches`: Non-matching string examples for the RegEx.

A small usage sketch showing how a candidate regex can be checked against these match/non-match strings is included at the end of this card.

## Dataset Creation

### Source Data

We mined (on Aug. 16th, 2023) all the regexes from [RegExLib](https://regexlib.com/), a regular expression library. We use this library because it contains user-contributed regular expressions. We obtained from RegExLib a list of 4,128 regular expressions along with their id, description, and list of expected matches and non-match strings.

#### Data Collection and Processing

For each sample previously collected, we perform a manual validation to (1) filter out incorrect regexes, (2) create more sample test cases (i.e., matching and non-matching string examples), and (3) create refined problem descriptions (i.e., prompts). We excluded any regex that matched one or more of the following conditions: (i) it was missing any metadata, i.e., description and/or list of expected matches and non-matches; (ii) its description is not written in English; (iii) its description included vulgar words; (iv) its description does not provide sufficient information to understand the purpose of the regular expression; (v) it aimed to detect just one word; (vi) it is incorrect (i.e., the regex matches a string that is not supposed to match, or it does not match a string that is expected to match). After this step, we have 1,001 regex samples. Each collected regex sample had (on average) only 4 string examples (2 that are expected matches and 2 that are expected non-matches). Thus, we manually crafted additional test cases to ensure that each sample has at least 13 matching and 12 non-matching string examples. After creating these additional test strings, we evaluated the regex with the new set of test cases again and excluded the failed regex samples. Hence, we have 762 samples in our final dataset. Upon further inspection of the descriptions in the extracted sample, we observed that some of them lacked a more detailed explanation (e.g., ID#84: “SQL date format tester.”) or had extra information unrelated to the regex (e.g., ID#4: “...
Other than that, this is just a really really long description of a regular expression that I’m using to test how my front page will look in the case where very long expression descriptions are used”). Thus, we created a refined prompt with a clear description of the regex that includes three match and two non-match string examples. ## Citation <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> **BibTeX:** ``` @inproceedings{siddiq2024regexeval, author={Siddiq, Mohammed Latif and Zhang, Jiahao and Roney, Lindsay and Santos, Joanna C. S.}, booktitle={Proceedings of the 46th International Conference on Software Engineering, NIER Track (ICSE-NIER '24)}, title={Re(gEx|DoS)Eval: Evaluating Generated Regular Expressions and their Proneness to DoS Attacks}, year={2024} } ``` ## Dataset Card Authors and Contact [Mohammed Latif Siddiq](http://lsiddiqsunny.github.io)
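To make the intended evaluation loop concrete, here is a small, unofficial sketch that loads `dataset.jsonl` and checks a candidate regex (e.g., one generated by an LLM from the prompt) against a sample's match/non-match strings; the candidate pattern and the full-string matching semantics are illustrative assumptions, not part of the official Re(gEx|DoS)Eval harness.

```python
# Unofficial sketch: check a candidate regex against one sample's test strings.
# "dataset.jsonl" is assumed to be downloaded locally; the candidate pattern is illustrative.
import json
import re

def passes(pattern: str, matches, non_matches) -> bool:
    compiled = re.compile(pattern)
    # Full-string matching is assumed here; the official harness may differ.
    ok_pos = all(compiled.fullmatch(s) for s in matches)
    ok_neg = not any(compiled.fullmatch(s) for s in non_matches)
    return ok_pos and ok_neg

with open("dataset.jsonl") as f:
    sample = json.loads(next(f))  # first sample, for illustration

candidate = r"\d{4}-\d{2}-\d{2}"  # e.g., a model-generated regex for this sample's prompt
print(sample["id"], passes(candidate, sample["matches"], sample["non-matches"]))
```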
migtissera/Tess-Coder-v1.0
---
license: apache-2.0
---

This is a code-specific dataset containing two-turn questions and answers. The first question/answer pair comes from glaiveai/glaive-code-assistant-v2. The second question (a follow-up) was generated and answered by Claude-2.
NickyNicky/oasst2_chatml
--- dataset_info: features: - name: Text dtype: string splits: - name: train num_bytes: 35636342 num_examples: 13848 download_size: 19635797 dataset_size: 35636342 configs: - config_name: default data_files: - split: train path: data/train-* language: - en - es - ru - zh - de - fr - th - ca - it - ja - pl - eo - eu - vi - fi - hu - ar - nl - da - tr - ko - he - id - cs - bn - sv --- ``` link: https://huggingface.co/datasets/OpenAssistant/oasst2 ``` Message counts by language: - en: 64,513 - es: 28,199 - ru: 13,935 - zh: 8,615 - de: 6,145 - fr: 3,880 - pt-BR: 2,699 - th: 1,560 - ca: 1,283 - it: 943 - uk-UA: 845 - ja: 788 - pl: 435 - eo: 295 - eu: 274 - vi: 207 - fi: 138 - hu: 113 - ar: 80 - nl: 72 - da: 44 - tr: 37 - ko: 24 - he: 24 - id: 12 - cs: 12 - bn: 1 - sv: 1
LanguageBind/MoE-LLaVA
--- license: apache-2.0 --- <p align="center"> <img src="https://s11.ax1x.com/2023/12/28/piqvDMV.png" width="250" style="margin-bottom: 0.2;"/> <p> <h2 align="center"> <a href="https://arxiv.org/abs/2401.15947">MoE-LLaVA: Mixture of Experts for Large Vision-Language Models</a></h2> <h5 align="center"> If you like our project, please give us a star ⭐ on GitHub for latest update. </h2> <h5 align="center"> </h5> ## 📰 News * **[2024.01.30]** The [paper](https://arxiv.org/abs/2401.15947) is released. * **[2024.01.27]** 🤗[Hugging Face demo](https://huggingface.co/spaces/LanguageBind/MoE-LLaVA) and **all codes & datasets** are available now! Welcome to **watch** 👀 this repository for the latest updates. ## 😮 Highlights MoE-LLaVA shows excellent performance in multi-modal learning. ### 🔥 High performance, but with fewer parameters - with just **3B sparsely activated parameters**, MoE-LLaVA demonstrates performance comparable to the LLaVA-1.5-7B on various visual understanding datasets and even surpasses the LLaVA-1.5-13B in object hallucination benchmarks. ### 🚀 Simple baseline, learning multi-modal interactions with sparse pathways. - With the addition of **a simple MoE tuning stage**, we can complete the training of MoE-LLaVA on **8 V100 GPUs** within 2 days. ## 🤗 Demo ### Gradio Web UI Highly recommend trying out our web demo by the following command, which incorporates all features currently supported by MoE-LLaVA. We also provide [online demo](https://huggingface.co/spaces/LanguageBind/MoE-LLaVA) in Huggingface Spaces. ```bash # use phi2 deepspeed --include localhost:0 moellava/serve/gradio_web_server.py --model-path "LanguageBind/MoE-LLaVA-Phi2-2.7B-4e" # use qwen deepspeed --include localhost:0 moellava/serve/gradio_web_server.py --model-path "LanguageBind/MoE-LLaVA-Qwen-1.8B-4e" # use stablelm deepspeed --include localhost:0 moellava/serve/gradio_web_server.py --model-path "LanguageBind/MoE-LLaVA-StableLM-1.6B-4e" ``` ### CLI Inference ```bash # use phi2 deepspeed --include localhost:0 moellava/serve/cli.py --model-path "LanguageBind/MoE-LLaVA-Phi2-2.7B-4e" --image-file "image.jpg" # use qwen deepspeed --include localhost:0 moellava/serve/cli.py --model-path "LanguageBind/MoE-LLaVA-Qwen-1.8B-4e" --image-file "image.jpg" # use stablelm deepspeed --include localhost:0 moellava/serve/cli.py --model-path "LanguageBind/MoE-LLaVA-StableLM-1.6B-4e" --image-file "image.jpg" ``` ## 🐳 Model Zoo | Model | LLM | Checkpoint | Avg | VQAv2 | GQA | VizWiz | SQA | T-VQA | POPE | MM-Bench| LLaVA-Bench-Wild | MM-Vet | |----------|-----------|-----------|---|---|---|---|---|---|---|---|---|---| | MoE-LLaVA-1.6B×4-Top2 | 1.6B | [LanguageBind/MoE-LLaVA-StableLM-1.6B-4e](https://huggingface.co/LanguageBind/MoE-LLaVA-StableLM-1.6B-4e) | 60.0 | 76.0 | 60.4 | 37.2 | 62.6 | 47.8 | 84.3 | 59.4 | 85.9 | 26.1 | | MoE-LLaVA-1.8B×4-Top2 | 1.8B | [LanguageBind/MoE-LLaVA-Qwen-1.8B-4e](https://huggingface.co/LanguageBind/MoE-LLaVA-Qwen-1.8B-4e) | 60.2 | 76.2 | 61.5 | 32.6 | 63.1 | 48.0 | 87.0 | 59.6 | 88.7 | 25.3 | | MoE-LLaVA-2.7B×4-Top2 | 2.7B | [LanguageBind/MoE-LLaVA-Phi2-2.7B-4e](https://huggingface.co/LanguageBind/MoE-LLaVA-Phi2-2.7B-4e) | 63.9 | 77.1 | 61.1 | 43.4 | 68.7 | 50.2 | 85.0 | 65.5 | 93.2 | 31.1 | <!-- | LLaVA-1.5 | 7B | [liuhaotian/llava-v1.5-7b](https://huggingface.co/liuhaotian/llava-v1.5-7b) | 62.0 | 78.5 | 62.0 | 50.0 | 66.8 | 58.2 | 85.9 | 64.3 | 31.1 | | LLaVA-1.5 | 13B | [liuhaotian/llava-v1.5-13b](https://huggingface.co/liuhaotian/llava-v1.5-13b) | 64.9 | 80.0 | 63.3 | 53.6 | 71.6 | 61.3 | 85.9 | 
67.7 | 36.1 | --> ## ⚙️ Requirements and Installation * Python >= 3.10 * Pytorch == 2.0.1 * CUDA Version >= 11.7 * **Transformers == 4.36.2** * **Tokenizers==0.15.1** * Install required packages: ```bash git clone https://github.com/PKU-YuanGroup/MoE-LLaVA cd MoE-LLaVA conda create -n moellava python=3.10 -y conda activate moellava pip install --upgrade pip # enable PEP 660 support pip install -e . pip install -e ".[train]" pip install flash-attn --no-build-isolation # Below are optional. For Qwen model. git clone https://github.com/Dao-AILab/flash-attention cd flash-attention && pip install . # Below are optional. Installing them might be slow. # pip install csrc/layer_norm # If the version of flash-attn is higher than 2.1.1, the following is not needed. # pip install csrc/rotary ``` ## 🗝️ Training & Validating The training & validating instruction is in [TRAIN.md](docs/TRAIN.md) & [EVAL.md](docs/EVAL.md). ## 💡 Customizing your MoE-LLaVA The instruction is in [CUSTOM.md](docs/CUSTOM.md). ## 😍 Visualization The instruction is in [VISUALIZATION.md](docs/VISUALIZATION.md). ## 🤖 API **We open source all codes.** If you want to load the model (e.g. ```LanguageBind/MoE-LLaVA```) on local, you can use the following code snippets. **Using the following command to run the code.** ```bash deepspeed predict.py ``` ```python import torch from moellava.constants import IMAGE_TOKEN_INDEX, DEFAULT_IMAGE_TOKEN from moellava.conversation import conv_templates, SeparatorStyle from moellava.model.builder import load_pretrained_model from moellava.utils import disable_torch_init from moellava.mm_utils import tokenizer_image_token, get_model_name_from_path, KeywordsStoppingCriteria def main(): disable_torch_init() image = 'moellava/serve/examples/extreme_ironing.jpg' inp = 'What is unusual about this image?' model_path = 'LanguageBind/MoE-LLaVA-Phi2-2.7B-4e' # LanguageBind/MoE-LLaVA-Qwen-1.8B-4e or LanguageBind/MoE-LLaVA-StableLM-1.6B-4e device = 'cuda' load_4bit, load_8bit = False, False # FIXME: Deepspeed support 4bit or 8bit? model_name = get_model_name_from_path(model_path) tokenizer, model, processor, context_len = load_pretrained_model(model_path, None, model_name, load_8bit, load_4bit, device=device) image_processor = processor['image'] conv_mode = "phi" # qwen or stablelm conv = conv_templates[conv_mode].copy() roles = conv.roles image_tensor = image_processor.preprocess(image, return_tensors='pt')['pixel_values'].to(model.device, dtype=torch.float16) print(f"{roles[1]}: {inp}") inp = DEFAULT_IMAGE_TOKEN + '\n' + inp conv.append_message(conv.roles[0], inp) conv.append_message(conv.roles[1], None) prompt = conv.get_prompt() input_ids = tokenizer_image_token(prompt, tokenizer, IMAGE_TOKEN_INDEX, return_tensors='pt').unsqueeze(0).cuda() stop_str = conv.sep if conv.sep_style != SeparatorStyle.TWO else conv.sep2 keywords = [stop_str] stopping_criteria = KeywordsStoppingCriteria(keywords, tokenizer, input_ids) with torch.inference_mode(): output_ids = model.generate( input_ids, images=image_tensor, do_sample=True, temperature=0.2, max_new_tokens=1024, use_cache=True, stopping_criteria=[stopping_criteria]) outputs = tokenizer.decode(output_ids[0, input_ids.shape[1]:], skip_special_tokens=True).strip() print(outputs) if __name__ == '__main__': main() ``` ## 🙌 Related Projects * [Video-LLaVA](https://github.com/PKU-YuanGroup/Video-LLaVA) This framework empowers the model to efficiently utilize the united visual tokens. 
* [LanguageBind](https://github.com/PKU-YuanGroup/LanguageBind) An open-source, language-based retrieval framework across five modalities.

## 👍 Acknowledgement
* [LLaVA](https://github.com/haotian-liu/LLaVA) The codebase we built upon; an efficient large language and vision assistant.

## 🔒 License
* The majority of this project is released under the Apache 2.0 license as found in the [LICENSE](https://github.com/PKU-YuanGroup/MoE-LLaVA/blob/main/LICENSE) file.
* The service is a research preview intended for non-commercial use only, subject to the model [License](https://github.com/facebookresearch/llama/blob/main/MODEL_CARD.md) of LLaMA, [Terms of Use](https://openai.com/policies/terms-of-use) of the data generated by OpenAI, and [Privacy Practices](https://chrome.google.com/webstore/detail/sharegpt-share-your-chatg/daiacboceoaocpibfodeljbdfacokfjb) of ShareGPT. Please contact us if you find any potential violation.

## ✏️ Citation
If you find our paper and code useful in your research, please consider giving us a star :star: and a citation :pencil:.

```BibTeX
@misc{lin2024moellava,
      title={MoE-LLaVA: Mixture of Experts for Large Vision-Language Models},
      author={Bin Lin and Zhenyu Tang and Yang Ye and Jiaxi Cui and Bin Zhu and Peng Jin and Junwu Zhang and Munan Ning and Li Yuan},
      year={2024},
      eprint={2401.15947},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```

```BibTeX
@article{lin2023video,
  title={Video-LLaVA: Learning United Visual Representation by Alignment Before Projection},
  author={Lin, Bin and Zhu, Bin and Ye, Yang and Ning, Munan and Jin, Peng and Yuan, Li},
  journal={arXiv preprint arXiv:2311.10122},
  year={2023}
}
```

## ✨ Star History
[![Star History](https://api.star-history.com/svg?repos=PKU-YuanGroup/MoE-LLaVA&type=Date)](https://star-history.com/#PKU-YuanGroup/MoE-LLaVA&Date)

## 🤝 Contributors
<a href="https://github.com/PKU-YuanGroup/MoE-LLaVA/graphs/contributors">
  <img src="https://contrib.rocks/image?repo=PKU-YuanGroup/MoE-LLaVA" />
</a>
KBlueLeaf/danbooru2023-webp-4Mpixel
---
license: mit
task_categories:
- image-classification
- zero-shot-image-classification
- text-to-image
language:
- en
tags:
- art
- anime
- not-for-all-audiences
size_categories:
- 1M<n<10M
---

# Danbooru 2023 webp: A space-efficient version of Danbooru 2023

This dataset is a resized/re-encoded version of [danbooru2023](https://huggingface.co/datasets/nyanko7/danbooru2023),<br>
which removes the non-image/truncated files and resizes all images to a smaller size.

---

## Details

This dataset employs a few methods to reduce its size and improve efficiency.

### Size and Format

All images with more than 2048x2048 pixels are resized to approximately 2048x2048 pixels with a bicubic algorithm.<br>
Images whose longer edge exceeds 16383 pixels after resizing are removed.<br>
(One reason is that webp doesn't allow such dimensions; another is that the aspect ratio is too extreme.)

All images are encoded/saved as 90%-quality webp with the Pillow library in Python, which is half the size of 100%-quality lossy webp.

The total size of this dataset is around 1.3~1.4TB, which is less than 20% of the original file size.

### Webdataset

This dataset uses the webdataset library to save all the tar files, so you can also use webdataset to load them easily; this is the recommended way. (A minimal loading sketch is included at the end of this card.)

The `__key__` of each file is its id. You can use this id to query the [metadata database](https://huggingface.co/datasets/KBlueLeaf/danbooru2023-sqlite) easily.

---

## Future work

I will open a repo on GitHub for utilizing the danbooru-webp and danbooru-sqlite datasets as a dataset exporter for fine-grained image tasks.<br>
The original danbooru2023 does not contain images published after 2023/11/20 and may be updated in the future.<br>
This dataset will be updated after the original dataset is updated, and will maintain the same format.
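Below is a minimal, unofficial loading sketch using the webdataset library; the `data-*.tar` pattern is a placeholder for wherever you downloaded the shards.

```python
# Unofficial sketch: stream (post id, image) pairs from locally downloaded tar shards.
# "data-*.tar" is a placeholder pattern; point it at the actual shard files.
import io

import webdataset as wds
from PIL import Image

dataset = wds.WebDataset("data-*.tar").to_tuple("__key__", "webp")

for post_id, raw_bytes in dataset:
    image = Image.open(io.BytesIO(raw_bytes))  # decode the webp bytes with Pillow
    # __key__ is the post id; use it to query the danbooru2023-sqlite metadata database
    print(post_id, image.size)
    break
```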
barissglc/tarot
--- language: - en ---
wisdomik/Quilt_VQA
--- configs: - config_name: default data_files: - split: train path: data/train-* dataset_info: features: - name: image dtype: image - name: question dtype: string - name: answer dtype: string - name: answer_type dtype: string - name: context dtype: string splits: - name: train num_bytes: 225575327 num_examples: 985 download_size: 222944861 dataset_size: 225575327 extra_gated_prompt: >- Please read and agree to the following terms: 1. The requester details provided are not faked. 2. The resource will not be used for commercial/clinical purposes and will be used for scientific research only. 3. The data will not be re-distributed, published, copied, or further disseminated in any way or form whatsoever, whether for profit or not. 4. The right study/paper (Quilt-1M(https://quilt1m.github.io/) and Quilt-LLaVa (https://quilt-llava.github.io) papers) will be cited in any publication(s) that uses this model/data extra_gated_fields: Email: text First and last name: text Affiliation: text Type of Affiliation: type: select options: - Academia - Industry - Other I want to use this model for: type: select options: - Research - Education - label: Other value: other I agree to the aforementioned terms of use: checkbox license: cc-by-nc-nd-3.0 task_categories: - question-answering - visual-question-answering language: - en tags: - medical - histopathology - arxiv:2312.04746 pretty_name: Quilt-VQA size_categories: - 1K<n<10K --- # Dataset Card for "Quilt_VQA" **Paper: Quilt-LLaVA: Visual Instruction Tuning by Extracting Localized Narratives from Open-Source Histopathology Videos** **Paper or resources for more information:** https://quilt-llava.github.io/ <p align="center"> <img src="https://quilt-llava.github.io/static/images/quilt_vqa_samples.png" alt="fig2" width="90%"/> </p> **Description and Details** To evaluate Quilt-LLaVA, alongside public VQA pathology datasets, we also generated Quilt-VQA by extracting Q&A dataset from naturally occurring questions/answers given in the videos. With the help of GPT4 and some handcrafted algorithms, we collect a rich evaluation dataset of 1283 Q&A pairs. Top two rows show image-dependent Q&A pairs and bottom two rows show general-knowledge Q&A pairs. The original question posed by the narrator of the video is highlighted in yellow. **Dataset date:** QUILT-VQA was collected in November 2023. **License:** MIT License; **Where to send questions or comments about the model:** https://github.com/quilt-llava/quilt-llava.github.io/issues **Primary intended uses:** The primary use of QUILT-VQA is for benchmarking histopathology large multimodal models and chatbots. **Primary intended users:** The dataset is intended as a research resource for research communities. We hope that this dataset will enable researchers to better understand and explore the generative capacity of medical large multimodal models **Citation** ```bibtex @misc{seyfioglu2023quiltllava, title={Quilt-LLaVA: Visual Instruction Tuning by Extracting Localized Narratives from Open-Source Histopathology Videos}, author={Mehmet Saygin Seyfioglu and Wisdom O. 
Ikezogwo and Fatemeh Ghezloo and Ranjay Krishna and Linda Shapiro}, year={2023}, eprint={2312.04746}, archivePrefix={arXiv}, primaryClass={cs.CV} } ``` ```bibtex @misc{ikezogwo2023quilt1m, title={Quilt-1M: One Million Image-Text Pairs for Histopathology}, author={Wisdom Oluchi Ikezogwo and Mehmet Saygin Seyfioglu and Fatemeh Ghezloo and Dylan Stefan Chan Geva and Fatwir Sheikh Mohammed and Pavan Kumar Anand and Ranjay Krishna and Linda Shapiro}, year={2023}, eprint={2306.11207}, archivePrefix={arXiv}, primaryClass={cs.CV} } ``` [![Code License](https://img.shields.io/badge/Model%20License-CC%20By%20NC%203.0-red.svg)](https://creativecommons.org/licenses/by-nc/3.0/us/deed.en) [![Code License](https://img.shields.io/badge/Data%20License-CC%20By%20NC%20ND%203.0%20(w/%20DUA)-red.svg)](https://en.wikipedia.org/wiki/MIT_License) [![Code License](https://img.shields.io/badge/Code%20License-MIT-red.svg)](https://en.wikipedia.org/wiki/MIT_License) **Usage and License Notices**: The data, code, and model checkpoints are intended and licensed for research use only. They are also subject to additional restrictions dictated by the Terms of Use: QUILT-1M, LLaMA, Vicuna and GPT-4 respectively. The model is made available under CC BY NC 3.0 licence and the data, code under CC BY NC ND 3.0 with additional Data Use Agreement (DUA). The data, code, and model checkpoints may be used for non-commercial purposes and any models trained using the dataset should be used only for research purposes. It is expressly prohibited for models trained on this data to be used in clinical care or for any clinical decision making purposes.
not-lain/meme-dataset
--- dataset_info: features: - name: image dtype: image - name: name dtype: string splits: - name: train num_bytes: 5693519 num_examples: 300 download_size: 5596513 dataset_size: 5693519 configs: - config_name: default data_files: - split: train path: data/train-* license: cc --- This is an open-source memes dataset If you have any memes that you want to add to this dataset, head to the [community discussions](https://huggingface.co/datasets/not-lain/meme-dataset/discussions/1) and add your meme there and I will add it to the dataset shortly ``` ⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⡿⠟⠛⠛⠛⠉⠉⠉⠋⠛⠛⠛⠻⢻⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿ ⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⡟⠛⠉⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠉⠙⠻⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿ ⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⠟⠋⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠈⠿⣿⣿⣿⣿⣿⣿⣿⣿⣿ ⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⡿⠏⠄⠄⠄⠄⠄⠄⠄⠂⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠈⠹⣿⣿⣿⣿⣿⣿⣿ ⣿⣿⣿⣿⣿⣿⣿⣿⣿⠛⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠠⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠘⢻⣿⣿⣿⣿⣿ ⣿⣿⣿⣿⣿⣿⣿⣿⠃⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⢀⠄⢠⠄⠄⡀⠄⠄⢀⠂⠄⠄⠄⠄⠄⠄⠄⠄⠄⡁⠄⠄⢛⣿⣿⣿⣿ ⣿⣿⣿⣿⣿⣿⣿⡇⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠐⡈⢔⠸⣐⢕⢕⢵⢰⢱⢰⢐⢤⡡⡢⣕⢄⢢⢠⠄⠄⠄⠄⠄⠄⠙⣿⣿⣿ ⣿⣿⣿⣿⣿⣿⣿⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⡁⠂⠅⢕⠌⡎⡎⣎⢎⢮⢮⣳⡳⣝⢮⢺⢜⢕⢕⢍⢎⠪⡐⠄⠁⠄⠸⣿⣿ ⣿⣿⣿⣿⣿⣿⠏⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠐⠄⠄⢅⠣⡡⡣⣣⡳⡵⣝⡮⣗⣗⡯⣗⣟⡮⡮⣳⣣⣳⢱⢱⠱⣐⠄⠂⠄⢿⣿ ⣿⣿⣿⣿⣿⣿⠄⠄⠄⠄⠄⠄⠄⠂⠄⠄⠄⠄⠄⠄⢂⢈⠢⡱⡱⡝⣮⣿⣟⣿⣽⣷⣿⣯⣿⣷⣿⣿⣿⣾⣯⣗⡕⡇⡇⠄⠂⡀⢹⣿ ⣿⣿⣿⣿⣿⡟⠄⠄⠄⠄⠄⠄⠂⠄⠄⠄⠄⠄⠄⠐⢀⢂⢕⢸⢨⢪⢳⡫⣟⣿⣻⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⡺⡮⡣⡣⠠⢂⠒⢸⣿ ⣿⣿⣿⣿⣿⡇⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠠⠐⠄⡂⠆⡇⣗⣝⢮⢾⣻⣞⣿⣿⣿⣿⣿⣿⣿⣿⢿⣽⣯⡯⣺⢸⢘⠨⠔⡅⢨⣿ ⣿⣿⠋⠉⠙⠃⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠁⠄⠄⠄⡂⡪⡪⡪⡮⡮⡯⣻⣽⣾⣿⣿⣿⣟⣿⣿⣿⣽⣿⣿⡯⣯⡺⡸⡰⡱⢐⡅⣼⣿ ⣿⠡⡀⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠠⠈⠆⠱⠑⠝⠜⠕⡝⡝⣞⢯⢿⣿⣿⡿⣟⣿⣿⣿⡿⡿⣽⣷⣽⡸⡨⡪⣂⠊⣿⣿ ⣿⠡⠄⡨⣢⠐⠁⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠐⠍⡓⣗⡽⣝⠽⠍⠅⠑⠁⠉⠘⠘⠘⠵⡑⢜⢀⢀⢉⢽ ⣿⠁⠠⢱⢘⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠈⠈⠱⣁⠜⡘⠌⠄⠄⡪⣳⣟⡮⢅⠤⠠⠄⠄⣀⣀⡀⡀⠄⠈⡂⢲⡪⡠⣿ ⣿⡇⠨⣺⢐⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⡀⠄⠄⠄⠤⡠⡢⢒⠦⠠⠄⠄⠄⡸⢽⣟⢮⠢⡂⡐⠄⡈⡀⠤⡀⠄⠑⢄⠨⢸⡺⣐⣿ ⣿⣿⠈⠕⠁⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⡂⡪⡐⡥⢤⣰⣰⣰⡴⡮⠢⠂⠄⠄⡊⢮⢺⢕⢵⢥⡬⣌⣒⡚⣔⢚⢌⢨⢚⠌⣾⡪⣾⣿ ⣿⣿⣆⠄⡀⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⡑⢕⢕⡯⡷⣕⢧⢓⢭⠨⡀⠄⡂⠨⡨⣪⡳⣝⢝⡽⣻⣻⣞⢽⣲⢳⢱⢡⠱⠨⣟⢺⣿⣿ ⣿⣿⣿⡆⠄⡅⠇⡄⠄⠄⠄⠄⠄⠄⠄⠐⠨⢪⢹⢽⢽⣺⢝⠉⠁⠁⠄⠄⠄⢌⢎⡖⡯⡎⡗⢝⠜⣶⣯⣻⢮⡻⣟⣳⡕⠅⣷⣿⣿⣿ ⣿⣿⣿⣿⣶⣶⣿⣷⠄⠄⠄⠄⠄⠄⠄⠄⠈⠔⡑⠕⠝⠄⡀⠄⠄⠊⢆⠂⠨⡪⣺⣮⣿⡾⡜⣜⡜⣄⠙⢞⣿⢿⡿⣗⢝⢸⣾⣿⣿⣿ ⣿⣿⣿⣿⣿⣿⣿⣿⠄⠄⠄⠄⠄⡀⠄⠄⠄⠄⢀⠄⠠⠄⠠⠄⠄⠄⠄⠄⠄⠊⠺⡹⠳⡙⡜⡓⡭⡺⡀⠄⠣⡻⡹⡸⠨⣣⣿⣿⣿⣿ ⣿⣿⣿⣿⣿⣿⣿⣿⠄⠄⠄⠄⠄⠠⠄⠄⣂⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⢄⠤⡤⡄⡆⡯⡢⡣⡣⡓⢕⠽⣄⠄⠨⡂⢌⣼⣿⣿⣿⣿⣿ ⣿⣿⣿⣿⣿⣿⣿⣿⡆⠄⠄⠄⠄⠈⠆⠄⠸⡂⠄⠄⠄⢀⠄⢀⠈⠄⠂⠁⠙⠝⠼⠭⠣⠣⠣⠑⠌⠢⠣⡣⡠⡘⣰⣱⣿⣿⣿⣿⣿⣿ ⣿⣿⣿⣿⣿⣿⣿⣿⡇⠄⠄⠄⠄⠄⢑⠄⠈⡱⠄⢘⠄⡀⠨⢐⣧⣳⣷⣶⣦⣤⣴⣶⣶⣶⡶⠄⡠⡢⡕⣜⠎⡮⣣⣿⣿⣿⣿⣿⣿⣿ ⣿⣿⣿⣿⣿⣿⣿⣿⡇⠄⠄⠄⠄⠄⠄⠢⠄⠨⠄⠄⠣⡀⠄⢀⢀⢙⠃⡿⢿⠿⡿⡿⢟⢋⢔⡱⣝⢜⡜⡪⡪⣵⣿⣿⣿⣿⣿⣿⣿⣿ ⣿⣿⣿⣿⣿⣿⣿⣿⡁⠄⠄⠄⠄⠄⠄⠄⠅⠄⠡⠄⠄⠡⢀⢂⠢⡡⠡⠣⡑⣏⢯⡻⡳⣹⡺⡪⢎⠎⡆⢣⣾⣿⣿⣿⣿⣿⣿⣿⣿⣿ ⣿⣿⣿⣿⣿⣿⣿⣿⣇⠄⠄⠄⠄⠄⠄⠄⠐⠄⠄⠁⠄⢈⠄⢂⠕⡕⡝⢕⢎⢎⢮⢎⢯⢺⢸⢬⠣⢃⣼⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿ ⣿⣿⣿⣿⣿⣿⣿⣿⣿⣧⡀⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠠⠨⡐⠌⢆⢇⢧⢭⣣⡳⣵⢫⣳⢱⠱⢑⣾⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿ ⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣷⣆⠄⠄⠄⠄⠄⠄⠄⠄⠄⠄⠁⡊⢌⢢⢡⢣⢪⡺⡪⡎⡎⡎⡚⣨⣾⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿ ⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣷⣄⡀⠄⠄⠄⠄⠄⠄⠄⠄⠄⠕⡅⢗⢕⡳⡭⣳⢕⠕⡱⣼⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿ ⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣦⡀⠄⠄⠄⠄⠄⠄⠄⠄⠄⠌⠄⠑⠩⢈⢂⣱⣾⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿ ⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣶⡀⢄⠄⣀⠄⡀⣀⢠⢄⣖⣖⣞⣼⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿ ⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣧⣱⡐⡕⡕⡽⣝⣟⣮⣾⣾⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿ ⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣷⣵⣽⣸⣃⣧⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿⣿ ```
next-tat/tat-llm-instructions
--- license: cc-by-4.0 dataset_info: features: - name: resp dtype: string - name: id dtype: string - name: user_prompt dtype: string splits: - name: train num_bytes: 165619445 num_examples: 32555 - name: validation num_bytes: 21180081 num_examples: 4136 download_size: 37315773 dataset_size: 186799526 configs: - config_name: default data_files: - split: train path: data/train-* - split: validation path: data/validation-* task_categories: - text-generation - question-answering - table-question-answering tags: - finance --- # TAT-LLM-Instructions The TAT(**Tabular** and **Textual**)-LLM-Instructions dataset is a curated collection of **financial** data, structured to resemble instructions. It aggregates information from three publicly available tabular and textual QA datasets: [FinQA](https://github.com/czyssrs/FinQA), [TAT-QA](https://github.com/NExTplusplus/TAT-QA), and [TAT-DQA](https://github.com/NExTplusplus/TAT-DQA). By employing specialized templates, TAT-LLM-Instructions transforms the original dataset into prompts that are optimized for compatibility with **large language models** (LLMs) and **external executor**, aiming to significantly improve their performance. For more details, please refer to our [paper](https://arxiv.org/abs/2401.13223) ## Template Instruction template for FINQA: ``` Below is an instruction that describes a question answering task in the finance domain, paired with an input table and its relevant text that provide further context. The given question is relevant to the table and text. Generate an appropriate answer to the given question. ### Instruction: Given a table and a list of texts in the following, what is the answer to the question? Please complete the task in three steps: 1. In the first step, extract the relevant numerical values from the provided table or texts. Store these in the variable ‘{evidence}‘. If there are multiple values, separate them using the ’#’ symbol. 2. In the second step, generate an equation using the extracted numerical values. Store this equation in the variable ‘{equation}‘. 3. In the third step, calculate the answer based on the equation and store it in the variable ‘{answer}‘. Please organize the results in the following table: | step | output | | 1 | {evidence} | | 2 | {equation} | | 3 | {answer} | Finally, present the calculated answer in the format: "The answer is: {answer}" ### Table {table} ### Text {text} ### Question {question} ### Response |step | output| |1 | {gold_evidence} | |2 | {gold_equation} | |3 | {gold_answer} | The answer is: {gold_answer} ``` Instruction template for TAT-QA ``` Below is an instruction that describes a question answering task in the finance domain, paired with an input table and its relevant text that provide further context. The given question is relevant to the table and text. Generate an appropriate answer to the given question. ### Instruction Given a table and a list of texts in the following, answer the question posed using the following five-step process: 1. Step 1: Predict the type of question being asked. Store this prediction in the variable ‘{question_type}‘. The value of ‘{question_type}‘ can be one of the following:‘Single span‘, ‘Multiple spans‘, ‘Count‘, or ‘Arithmetic‘. 2. Step 2: Extract the relevant strings or numerical values from the provided table or texts. Store these pieces of evidence in the variable ‘{evidence}‘. If there are multiple pieces of evidence, separate them using the ’#’ symbol. 3. 
Step 3: if the ‘{question_type}‘ is ‘Arithmetic‘, formulate an equation using values stored in ‘{evidence}‘. Store this equation in the variable ‘{equation}‘. For all other question types, set the value of {equation} to ’N.A.’. 4. Step 4: Predict or calculate the answer based on the question type, evidence and equation. Store it in the variable ‘{answer}‘. If there are multiple values, separate them using the ’#’ symbol. 5. Step 5: If the value of the ‘{answer}‘ is numerical, predict its scale and store it in a variable named ‘{scale}‘. The value of ‘{scale}‘ can be one of the following: ‘none‘, ‘percent‘, ‘thousand‘, ‘million‘, or ‘billion‘. For non-numerical values, set the value of ‘{scale}‘ to ’none’. Please organize the results in the following table: | step | output | | 1 | {question_type} | | 2 | {evidence} | | 3 | {equation} | | 4 | {answer} | | 5 | {scale} | Finally, present the final answer in the format: "The answer is: {answer} #### and its corresponding scale is: {scale}" ### Table {table} ### Text {text} ### Question {question} ### Response | step | output | | 1 | {gold_question_type} | | 2 | {gold_evidence} | | 3 | {gold_equation} | | 4 | {gold_answer} | | 5 | {gold_scale} | The answer is: {gold_answer} #### and its corresponding scale is: {gold_scale} ``` Instruction template for TAT-DQA: ``` Below is an instruction that describes a question answering task in the finance domain, paired with an input document that has one or multiple pages that provide further context. The given question is relevant to the document. Generate an appropriate answer to the given question. ### Instruction Given a document that has one or multiple pages in the following, answer the question posed using the following five-step process: 1. Step 1: Predict the type of question being asked. Store this prediction in the variable ‘{question_type}‘. The value of ‘{question_type}‘ can be one of the following:‘Single span‘, ‘Multiple spans‘, ‘Count‘, or ‘Arithmetic‘. 2. Step 2: Extract the relevant strings or numerical values from the provided document. Store these pieces of evidence in the variable ‘{evidence}‘. If there are multiple pieces of evidence, separate them using the ’#’ symbol. 3. Step 3: if the ‘{question_type}‘ is ‘Arithmetic‘, formulate an equation using values stored in ‘{evidence}‘. Store this equation in the variable ‘{equation}‘. For all other question types, set the value of {equation} to ’N.A.’. 4. Step 4: Predict or calculate the answer based on the question type, evidence and equation. Store it in the variable ‘{answer}‘. If there are multiple values, separate them using the ’#’ symbol. 5. Step 5: If the value of the ‘{answer}‘ is numerical, predict its scale and store it in a variable named ‘{scale}‘. The value of ‘{scale}‘ can be one of the following: ‘none‘, ‘percent‘, ‘thousand‘, ‘million‘, or ‘billion‘. For non-numerical values, set the value of ‘{scale}‘ to ’none’. 
Please organize the results in the following table: | step | output | | 1 | {question_type} | | 2 | {evidence} | | 3 | {equation} | | 4 | {answer} | | 5 | {scale} | Finally, present the final answer in the format: "The answer is: {answer} #### and its corresponding scale is: {scale}" ### Text {pages} ### Question {question} ### Response | step | output | | 1 | {gold_question_type} | | 2 | {gold_evidence} | | 3 | {gold_equation} | | 4 | {gold_answer} | | 5 | {gold_scale} | The answer is: {gold_answer} #### and its corresponding scale is: {gold_scale} ``` ## Models TAT-LLM models are developed by training on the TAT-LLM-Instructions dataset. Models of various sizes and trained using different methods are available on the Hugging Face Model Hub: |Model|HuggingFace Repo| |---|---| |TAT-LLM-7B-LORA|[🤗HuggingFace Repo](https://huggingface.co/next-tat/tat-llm-7b-lora)| |TAT-LLM-13B-LORA|[🤗HuggingFace Repo](https://huggingface.co/next-tat/tat-llm-13b-lora)| |TAT-LLM-70B-LORA|[🤗HuggingFace Repo](https://huggingface.co/next-tat/tat-llm-70b-lora)| |TAT-LLM-7B-FFT|[🤗HuggingFace Repo](https://huggingface.co/next-tat/tat-llm-7b-fft)| |TAT-LLM-13B-FFT|[🤗HuggingFace Repo](https://huggingface.co/next-tat/tat-llm-13b-fft)| |TAT-LLM-70B-FFT|[🤗HuggingFace Repo](https://huggingface.co/next-tat/tat-llm-70b-fft)| For more details, please refer to our [paper](https://arxiv.org/abs/2401.13223) and [code](https://github.com/fengbinzhu/TAT-LLM) ## Citation If you find this dataset helpful, please consider citing our paper: ``` @misc{zhu2024tatllm, title={TAT-LLM: A Specialized Language Model for Discrete Reasoning over Tabular and Textual Data}, author={Fengbin Zhu and Ziyang Liu and Fuli Feng and Chao Wang and Moxin Li and Tat-Seng Chua}, year={2024}, eprint={2401.13223}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
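As a quick orientation (not part of the official pipeline), a hedged sketch of loading the instruction data and inspecting one prompt/response pair; the field names come from the dataset config above, everything else is an assumption:

```python
from datasets import load_dataset

tat_llm = load_dataset("next-tat/tat-llm-instructions", split="train")

sample = tat_llm[0]
print(sample["id"])
print(sample["user_prompt"][:800])  # rendered instruction + table/text context + question
print(sample["resp"])               # gold step table ending with "The answer is: ..."
```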
botbot-ai/biology-ptbr
---
license: cc-by-nc-4.0
language:
- pt
tags:
- instruction-finetuning
pretty_name: CAMEL Biology PTBR
task_categories:
- text-generation
---

## Translation of the Camel Biology dataset into Portuguese (PT-BR) using NLLB 3.3b

# **CAMEL: Communicative Agents for “Mind” Exploration of Large Scale Language Model Society**

- **Github:** https://github.com/lightaime/camel
- **Website:** https://www.camel-ai.org/
- **Arxiv Paper:** https://arxiv.org/abs/2303.17760

## Dataset Summary

The Biology dataset is composed of 20K problem-solution pairs obtained using gpt-4. The problem-solution pairs were generated from 25 biology topics, with 25 subtopics per topic and 32 problems for each "topic, subtopic" pair. We provide the data in `biology.zip`.

## Data Fields

**The data fields for files in `biology.zip` are as follows:**

* `role_1`: assistant role
* `topic`: biology topic
* `sub_topic`: biology subtopic belonging to topic
* `message_1`: refers to the problem the assistant is asked to solve.
* `message_2`: refers to the solution provided by the assistant.

**Download in python**
```
from huggingface_hub import hf_hub_download
hf_hub_download(repo_id="camel-ai/biology", repo_type="dataset", filename="biology.zip",
                local_dir="datasets/", local_dir_use_symlinks=False)
```

### Citation
```
@misc{li2023camel,
  title={CAMEL: Communicative Agents for "Mind" Exploration of Large Scale Language Model Society},
  author={Guohao Li and Hasan Abed Al Kader Hammoud and Hani Itani and Dmitrii Khizbullin and Bernard Ghanem},
  year={2023},
  eprint={2303.17760},
  archivePrefix={arXiv},
  primaryClass={cs.AI}
}
```

## Disclaimer:

This data was synthetically generated by GPT-4 and might contain incorrect information. The dataset is intended only for research purposes.
FreedomIntelligence/XMedbench
---
license: apache-2.0
configs:
- config_name: test
  data_files:
  - split: english
    path: test/en.json
  - split: chinese
    path: test/zh.json
  - split: french
    path: test/fr.json
  - split: spanish
    path: test/es.json
  - split: arabic
    path: test/ar.json
  - split: Hindi
    path: test/hi.json
---
# Multilingual Medicine: Model, Dataset, Benchmark, Code

Covering English, Chinese, French, Spanish, Arabic, and Hindi so far

<p align="center">
   👨🏻‍💻<a href="https://github.com/FreedomIntelligence/Apollo" target="_blank">Github</a> •📃 <a href="https://arxiv.org/abs/2403.03640" target="_blank">Paper</a> • 🤗 <a href="https://huggingface.co/datasets/FreedomIntelligence/ApolloCorpus" target="_blank">ApolloCorpus</a> • 🤗 <a href="https://huggingface.co/datasets/FreedomIntelligence/XMedbench" target="_blank">XMedBench</a>
   <br>  <a href="./README_zh.md"> 中文 </a> | <a href="./README.md"> English </a>
</p>

![XMedBench](assets/XMedBench.png)

## 🌈 Update

* **[2024.03.07]** [Paper](https://arxiv.org/abs/2403.03640) released.
* **[2024.02.12]** <a href="https://huggingface.co/datasets/FreedomIntelligence/ApolloCorpus" target="_blank">ApolloCorpus</a> and <a href="https://huggingface.co/datasets/FreedomIntelligence/XMedbench" target="_blank">XMedBench</a> are published! 🎉
* **[2024.01.23]** Apollo repo is published! 🎉

## Results

![Apollo](assets/result.png)

## Usage

- [Zip File](https://huggingface.co/datasets/FreedomIntelligence/XMedbench/blob/main/XMedbench.zip)
- [Data category](https://huggingface.co/datasets/FreedomIntelligence/XMedbench/tree/main/test)

## Data:

- EN:
  - [MedQA-USMLE](https://huggingface.co/datasets/GBaker/MedQA-USMLE-4-options)
  - [MedMCQA](https://huggingface.co/datasets/medmcqa/viewer/default/test)
  - [PubMedQA](https://huggingface.co/datasets/pubmed_qa): Because the results fluctuated too much, they were not used in the paper.
  - [MMLU-Medical](https://huggingface.co/datasets/cais/mmlu)
    - Clinical knowledge, Medical genetics, Anatomy, Professional medicine, College biology, College medicine
- ZH:
  - [MedQA-MCMLE](https://huggingface.co/datasets/bigbio/med_qa/viewer/med_qa_zh_4options_bigbio_qa/test)
  - [CMB-single](https://huggingface.co/datasets/FreedomIntelligence/CMB): Not used in the paper
    - Randomly sample 2,000 multiple-choice questions with single answer.
  - [CMMLU-Medical](https://huggingface.co/datasets/haonan-li/cmmlu)
    - Anatomy, Clinical_knowledge, College_medicine, Genetics, Nutrition, Traditional_chinese_medicine, Virology
  - [CExam](https://github.com/williamliujl/CMExam): Not used in the paper
    - Randomly sample 2,000 multiple-choice questions
- ES: [Head_qa](https://huggingface.co/datasets/head_qa)
- FR: [Frenchmedmcqa](https://github.com/qanastek/FrenchMedMCQA)
- HI: [MMLU_HI](https://huggingface.co/datasets/FreedomIntelligence/MMLU_Hindi)
  - Clinical knowledge, Medical genetics, Anatomy, Professional medicine, College biology, College medicine
- AR: [MMLU_Ara](https://huggingface.co/datasets/FreedomIntelligence/MMLU_Arabic)
  - Clinical knowledge, Medical genetics, Anatomy, Professional medicine, College biology, College medicine

## Citation

Please use the following citation if you intend to use our dataset for training or evaluation:

```
@misc{wang2024apollo,
   title={Apollo: Lightweight Multilingual Medical LLMs towards Democratizing Medical AI to 6B People},
   author={Xidong Wang and Nuo Chen and Junyin Chen and Yan Hu and Yidong Wang and Xiangbo Wu and Anningzhe Gao and Xiang Wan and Haizhou Li and Benyou Wang},
   year={2024},
   eprint={2403.03640},
   archivePrefix={arXiv},
   primaryClass={cs.CL}
}
```
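A hedged loading sketch; the config name `test` and the split names come from the YAML header above, the rest is an assumption:

```python
from datasets import load_dataset

# Load the English portion of the benchmark
xmed_en = load_dataset("FreedomIntelligence/XMedbench", "test", split="english")
print(xmed_en[0])
```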
Azure99/blossom-orca-v3
---
license: apache-2.0
task_categories:
- text-generation
- text2text-generation
language:
- zh
- en
size_categories:
- 100K<n<1M
---

# BLOSSOM ORCA V3

### Introduction

Blossom Orca V3 is a bilingual Chinese-English instruction dataset derived from OpenOrca, suitable for instruction fine-tuning.

Unlike blossom-wizard-v2, this version is distilled entirely with GPT-4.

The dataset takes system prompts and instructions from OpenOrca, first translates them into Chinese and verifies the translations, then calls the gpt-4-0125-preview model on those instructions to generate responses, and filters out responses that contain model self-identification or refusals, in order to ease later alignment. In addition, to keep the response style consistent and to balance the Chinese/English ratio, the same calls were also made on the untranslated original instructions, resulting in a 1:1 bilingual instruction dataset.

Compared with Chinese datasets obtained by directly translating the original OpenOrca, Blossom Orca has higher consistency and quality.

This release contains 50% of the full data: 20K Chinese and 20K English records, 40K in total.

### Languages

Mainly Chinese and English.

### Dataset Structure

Each record represents a complete conversation and contains two fields: id and conversations.

- id: increments from 1.
- conversations: an array of objects, each with a role field and a content field; role is either user or assistant, representing user input and assistant output respectively, and content holds the corresponding text.

### Dataset Limitations

All responses in this dataset were generated by gpt-4-0125-preview and have not gone through strict validation, so they may contain inaccurate or even seriously wrong answers. In addition, because refusal responses were filtered out, a model trained only on this dataset may not refuse illegal requests.
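A minimal sketch of iterating over the conversation structure described above (the loading details are assumptions):

```python
from datasets import load_dataset

orca = load_dataset("Azure99/blossom-orca-v3", split="train")

record = orca[0]
print(record["id"])
for turn in record["conversations"]:
    # role is "user" or "assistant"; content holds the text of that turn
    print(f'{turn["role"]}: {turn["content"]}')
```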
amu-cai/nEMO
--- language: - pl license: cc-by-nc-sa-4.0 multilinguality: monolingual size_categories: - 1K<n<10K task_categories: - audio-classification - automatic-speech-recognition - text-to-speech task_ids: - audio-emotion-recognition - audio-language-identification - sentiment-analysis - speaker-identification pretty_name: nEMO dataset_info: features: - name: file_id dtype: string - name: audio dtype: audio - name: emotion dtype: string - name: raw_text dtype: string - name: normalized_text dtype: string - name: speaker_id dtype: string - name: gender dtype: string - name: age dtype: int32 splits: - name: train num_bytes: 531802170.875 num_examples: 4481 download_size: 531224302 dataset_size: 531802170.875 configs: - config_name: default data_files: - split: train path: data/train-* --- # nEMO: Dataset of Emotional Speech in Polish ## Dataset Description nEMO is a simulated dataset of emotional speech in the Polish language. The corpus contains over 3 hours of samples recorded with the participation of nine actors portraying six emotional states: anger, fear, happiness, sadness, surprise, and a neutral state. The text material used was carefully selected to represent the phonetics of the Polish language. The corpus is available for free under the Creative Commons license (CC BY-NC-SA 4.0). ### Example Usage The nEMO dataset can be loaded and processed using the datasets library: ```python from datasets import load_dataset nemo = load_dataset("amu-cai/nEMO", split="train") ``` ### Supported Tasks - `audio-classification`: This dataset was mainly created for the task of speech emotion recognition. Each recording is labeled with one of six emotional states (anger, fear, happiness, sadness, surprised, and neutral). Additionally, each sample is labeled with speaker id and speaker gender. Because of that, the dataset can also be used for different audio classification tasks. - `automatic-speech-recognition`: The dataset includes orthographic and normalized transcriptions for each audio recording, making it a useful resource for automatic speech recognition (ASR) tasks. The sentences were carefully selected to cover a wide range of phonemes in the Polish language. - `text-to-speech`: The dataset contains emotional audio recordings with transcriptions, which can be valuable for developing TTS systems that produce emotionally expressive speech. ### Languages nEMO contains audio and transcription in Polish language. ## Dataset Structure ### Data Instances ```python { 'file_id': 'WR0_surprised_64', 'audio': { 'path': None, 'array': array([ 6.10351562e-05, -3.05175781e-05, -3.05175781e-05, ..., 6.10351562e-05, -1.22070312e-04, 1.83105469e-04]), 'sampling_rate': 24000 }, 'emotion': 'surprised', 'raw_text': 'Ucho wykrywa dźwięki o różnej częstotliwości.', 'normalized_text': 'ucho wykrywa dźwięki o różnej częstotliwości', 'speaker_id': 'WR0', 'gender': 'male', 'age': 23 } ``` ### Data Fields - `file_id` (string) - id of file, i.e. `{speaker_id}_{emotion}_{sentence_id}`, - `audio` (audio) - dictionary containing audio array, path and sampling rate, - `emotion` (string) - label corresponding to emotional state, - `raw_text` (string) - original (orthographic) transcription of the audio, - `normalized_text` (string) - normalized transcription of the audio, - `speaker_id` (string) - id of speaker, - `gender` (string) - gender of the speaker, - `age` (int32) - age of the speaker. ### Data Splits The nEMO dataset is provided as a whole, without predefined training and test splits. 
This allows researchers and developers flexibility in creating their splits based on the specific needs. | Emotion | Number of samples | | --------- | ----------------- | | anger | 749 | | fear | 736 | | happiness | 749 | | neutral | 809 | | sadness | 769 | | surprised | 669 | | **Total** | 4,481 | ## Additional Information ### Licensing Information The dataset is available under the Creative Commons license (CC BY-NC-SA 4.0). ### Citation Information You can access the nEMO paper at [arXiv](https://arxiv.org/abs/2404.06292). Please cite the paper when referencing the nEMO dataset as: ``` @misc{christop2024nemo, title={nEMO: Dataset of Emotional Speech in Polish}, author={Iwona Christop}, year={2024}, eprint={2404.06292}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ### Contributions Thanks to [@iwonachristop](https://github.com/iwona-christop) for adding this dataset.
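Since no predefined train/test split is shipped, a minimal sketch of creating one, stratified by emotion (the 90/10 proportion is an arbitrary assumption):

```python
from datasets import load_dataset

nemo = load_dataset("amu-cai/nEMO", split="train")

# Encode the emotion column as ClassLabel so the split can be stratified on it
splits = nemo.class_encode_column("emotion").train_test_split(
    test_size=0.1, stratify_by_column="emotion", seed=42
)
train_set, test_set = splits["train"], splits["test"]
```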
somosnlp/Reglamento_aeronautico_Colombiano_FULL_ChatML_format_gemma
---
language:
- es
license: apache-2.0
size_categories:
- 10K<n<100K
task_categories:
- question-answering
dataset_info:
  features:
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 13024811
    num_examples: 25174
  download_size: 2127089
  dataset_size: 13024811
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
tags:
- legal
- Aeronautic
---

## Colombian Aeronautical Regulations (Reglamento Aeronáutico Colombiano) QA

<div style="display: flex; justify-content: space-between; align-items: flex-start;">
  <div style="flex: 1; padding-right: 20px;">
    <h2>Table of Contents</h2>
    <ul>
      <li><a href="#1-general-description">1. General Description</a></li>
      <li><a href="#2-description-of-the-objectives">2. Description of the Objectives</a></li>
      <li><a href="#3-extracted-pdfs">3. Extracted PDFs</a></li>
      <li><a href="#4-gemma-token-distribution">4. Gemma Token Distribution</a></li>
      <li><a href="#5-model-to-fine-tune">5. Model to Fine-Tune</a></li>
      <li><a href="#6-basic-template-example">6. Basic Template Example</a></li>
      <li><a href="#7-example-in-a-python-variable">7. Example in a Python Variable</a></li>
      <li><a href="#8-initial-data">8. Initial Data</a></li>
      <li><a href="#9-possible-model-names">9. Possible Model Names</a></li>
    </ul>
  </div>
  <div style="flex: 1; text-align: center;">
    <img src="https://cdn-uploads.huggingface.co/production/uploads/6419c2f6b4adb0e101b17b6c/iyN553TbnrrbsyQrfy8ab.png" style="max-width: 100%; max-height: 550px;">
  </div>
</div>

## 1. General Description

This dataset was built from the Colombian Aeronautical Regulations (Reglamento Aeronáutico Colombiano). Starting from the original text of the regulations, artificial intelligence was used to extract relevant information and create a set of questions and answers. This process turns the regulations, which can be long and complex, into a more accessible and understandable format, making it easier for people interested in Colombian aviation, whether industry professionals, students, or enthusiasts, to learn and review the rules. Using AI for this purpose not only makes the generation of educational material more efficient, but also helps ensure that the questions and answers are accurate and aligned with the content and spirit of the regulations.

## 2. Description of the Objectives

The main objective of the project is to create a high-quality dataset from the Colombian Aeronautical Regulations through a two-stage process that combines artificial intelligence and human intervention.

In the first stage, an AI system is used to extract relevant data from the regulations and generate a preliminary set of questions and answers. This automated approach efficiently covers a broad portion of the material, identifying key topics and generating relevant questions that reflect the content and structure of the regulations.

In the second stage, the data is reviewed by human annotators. This team of reviewers validates the answers generated by the AI, making corrections and adjustments as needed to ensure the accuracy and relevance of the content. This validation step is crucial to guarantee the quality of the dataset, since it incorporates human understanding and precise interpretation of the rules, something that AI, however advanced it may be, may not achieve perfectly.
The final, validated and refined dataset is used to train a smaller, more specialized AI model. This model is designed to perform concrete tasks related to the Colombian Aeronautical Regulations, possibly including automated queries, assistance in interpreting the rules, and support for training and evaluating personnel in the aeronautical sector. Training on high-quality data ensures that the model is effective, reliable, and accurate in its designated tasks, reflecting the commitment to excellence and safety that characterizes the aeronautical sector.

## 3. Extracted PDFs

```
RAC 1 through RAC 5.
```

## 4. Gemma Token Distribution

<div style="display: flex; justify-content: center;">
  <img src="https://cdn-uploads.huggingface.co/production/uploads/641b435ba5f876fe30c5ae0a/4FMIojql8R4j7VFcs-jHL.png" style="width: 50%; max-height: 550px;">
</div>

## 5. Model to Fine-Tune

This model was previously trained on the 'OpenAssistant/oasst2' dataset, which contains more than 15 languages, and the data was filtered.

```
https://huggingface.co/NickyNicky/gemma-2b-it_oasst2_all_chatML_Unsloth_V1
```

## 6. Basic Template Example

This is an example template for training gemma-2b. The purpose of this template is for the model to learn to generalize over the Colombian aeronautical regulations.

```
<bos><start_of_turn>system
You are a helpful AI assistant.
Responde en formato json.
Eres un agente experto en la normativa aeronautica Colombiana.<end_of_turn>
<start_of_turn>user
¿Qué sucede con las empresas de servicios aéreos comerciales que no hayan actualizado su permiso de operación después del 31 de marzo de 2024?<end_of_turn>
<start_of_turn>model
{
  "Respuesta": "Quedarán suspendidas de actividades de vuelo hasta obtener la actualización.",
  "Pagina": "77",
  "Rac": "Rac 5"
}<end_of_turn>
```

## 7. Example in a Python Variable

```py
# this removes the incomplete question marks so they can be re-added consistently.
question = "Qué sucede si la Autoridad Aeronáutica hace requerimientos o solicita aclaraciones sobre una solicitud de operación de fletamento?".replace("¿","").replace("?","")

text = f"""<bos><start_of_turn>system
You are a helpful AI assistant.
Responde en formato json.
Eres un agente experto en la normativa aeronautica Colombiana.<end_of_turn>
<start_of_turn>user
¿{question}?<end_of_turn>
<start_of_turn>model
"""
```

## 8. Initial Data

Initial data from RAC 1 only.

```
name: somosnlp/Reglamento_aeronautico_Colombiano_QA_RAC1_FULL
url: https://huggingface.co/datasets/somosnlp/Reglamento_aeronautico_Colombiano_QA_RAC1_FULL
```

## 9. Possible Model Names

```
name 1: AeroReg_Col_AI
name 2: AeroReg_Cop_AI
name 3: AeroReg_AI
```

## AI model image

<div style="display: flex; justify-content: center;">
  <img src="https://cdn-uploads.huggingface.co/production/uploads/641b435ba5f876fe30c5ae0a/vVo8k-T8Y_rORRIUyhKbs.png" style="width: 60%; max-height: 450px;">
</div>

## Training code

```
In Kamino...
```

## Authors

- Edison Bejarano
- Nicolai Potes
- Santiago Pineda
- Dany
- Alex
- Sergio

<div style="display: flex; justify-content: center;">
  <img src="https://cdn-uploads.huggingface.co/production/uploads/6419c2f6b4adb0e101b17b6c/1yWWnt-V-7T4P8oRUVDR1.png" style="width: 60%; max-height: 250px;">
</div>
m-a-p/CodeEditorBench
---
license: apache-2.0
---
# CodeEditorBench

[**🌐 Homepage**](https://codeeditorbench.github.io/) | [**🤗 Dataset**](https://huggingface.co/datasets/m-a-p/CodeEditorBench) | [**📖 arXiv**](https://arxiv.org/pdf/2404.03543.pdf) | [**GitHub**](https://github.com/CodeEditorBench/CodeEditorBench)

## Introduction

Large Language Models (LLMs) for code are rapidly evolving, with code editing emerging as a critical capability. We introduce CodeEditorBench, an evaluation framework designed to rigorously assess the performance of LLMs in code editing tasks, including debugging, translating, polishing, and requirement switching. Unlike existing benchmarks focusing solely on code generation, CodeEditorBench emphasizes real-world scenarios and practical aspects of software development. We curate diverse coding challenges and scenarios from five sources, covering various programming languages, complexity levels, and editing tasks. Evaluation of 19 LLMs reveals that closed-source models (particularly Gemini-Ultra and GPT-4) outperform open-source models in CodeEditorBench, highlighting differences in model performance based on problem types and prompt sensitivities. CodeEditorBench aims to catalyze advancements in LLMs by providing a robust platform for assessing code editing capabilities. We will release all prompts and datasets to enable the community to expand the dataset and benchmark emerging LLMs. By introducing CodeEditorBench, we contribute to the advancement of LLMs in code editing and provide a valuable resource for researchers and practitioners.

![Alt text](tech_route.png)

## Results

<div style="display: flex; justify-content: space-around; align-items: center;">
    <img src="Models_Zero_Shot.png" alt="First Image Description" style="width: 48%;" />
    <img src="win_rate_zero.png" alt="Second Image Description" style="width: 48%;" />
</div>

We propose evaluating LLMs across four scenarios capturing various code editing capabilities, namely code debug, code translate, code polish, and code requirement switch. The figure on the left is a radial plot of model performance across the four scenarios available in CodeEditorBench\_Plus, highlighting how relative differences across models change across the scenarios. The figure on the right reports the zero-shot performance of open-source and closed-source models on CodeEditorBench\_Plus, evaluated through win\_rate.

🎯All results of models are generated by greedy decoding.

✨Code Debug, Code Translate and Code Requirement Switch are evaluated with pass@1, while Code Polish is evaluated with Mean OptScore.

## Disclaimers

The guidelines for the annotators emphasized strict compliance with copyright and licensing rules from the initial data source, specifically avoiding materials from websites that forbid copying and redistribution. Should you encounter any data samples potentially breaching the copyright or licensing regulations of any site, we encourage you to [contact](#contact) us. Upon verification, such samples will be promptly removed.
## Contact <!-- - Jiawei Guo: moriatysss152@gmail.com - Ziming Li : - Xueling Liu: - Kaijing Ma: --> - Ge Zhang: zhangge@01.ai - Wenhu Chen: wenhuchen@uwaterloo.ca - Jie Fu: jiefu@ust.hk ## Citation **BibTeX:** ```bibtex @misc{guo2024codeeditorbench, title={CodeEditorBench: Evaluating Code Editing Capability of Large Language Models}, author={Jiawei Guo and Ziming Li and Xueling Liu and Kaijing Ma and Tianyu Zheng and Zhouliang Yu and Ding Pan and Yizhi LI and Ruibo Liu and Yue Wang and Shuyue Guo and Xingwei Qu and Xiang Yue and Ge Zhang and Wenhu Chen and Jie Fu}, year={2024}, eprint={2404.03543}, archivePrefix={arXiv}, primaryClass={cs.SE} } ```
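For readers unfamiliar with the metric, here is the standard unbiased pass@k estimator that pass@1 reduces to; this is a generic reference sketch, not necessarily the exact evaluation script used by the benchmark:

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """n: generated samples per problem, c: samples that pass, k: budget."""
    if n - c < k:
        return 1.0
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

# With greedy decoding there is a single sample per problem, so pass@1 is simply
# the fraction of problems whose one output passes all tests.
print(pass_at_k(n=1, c=1, k=1))  # 1.0
```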
mvarma/medwiki
--- YAML tags: annotations_creators: - machine-generated language_creators: - crowdsourced language: - en-US - en license: - cc-by-4.0 multilinguality: - monolingual pretty_name: medwiki size_categories: - unknown source_datasets: - extended|wikipedia task_categories: - text-retrieval task_ids: - entity-linking-retrieval --- # Dataset Card for MedWiki ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Repository:** [Github](https://github.com/HazyResearch/medical-ned-integration) - **Paper:** [Cross-Domain Data Integration for Named Entity Disambiguation in Biomedical Text](https://arxiv.org/abs/2110.08228) - **Point of Contact:** [Maya Varma](mailto:mvarma2@stanford.edu) ### Dataset Summary MedWiki is a large sentence dataset collected from a medically-relevant subset of Wikipedia and annotated with biomedical entities in the Unified Medical Language System (UMLS) knowledge base. For each entity, we include a rich set of types sourced from both UMLS and WikiData. Consisting of over 13 million sentences and 17 million entity annotations, MedWiki can be utilized as a pretraining resource for language models and can improve performance of medical named entity recognition and disambiguation systems, especially on rare entities. Here, we include two configurations of MedWiki (further details in [Dataset Creation](#dataset-creation)): - `MedWiki-Full` is a large sentence dataset with UMLS medical entity annotations generated through the following two steps: (1) a weak labeling proecedure to annotate WikiData entities in sentences and (2) a data integration approach that maps WikiData entities to their counterparts in UMLS. - `MedWiki-HQ` is a subset of MedWiki-Full with higher quality labels designed to limit noise that arises from the annotation procedure listed above. ### Languages The text in the dataset is in English and was obtained from English Wikipedia. ## Dataset Structure ### Data Instances A typical data point includes a sentence collected from Wikipedia annotated with UMLS medical entities and associated titles and types. 
An example from the MedWiki test set looks as follows: ``` {'sent_idx_unq': 57000409, 'sentence': "The hair , teeth , and skeletal side effects of TDO are lifelong , and treatment is used to manage those effects .", 'mentions': ['tdo'], 'entities': ['C2931236'], 'entity_titles': ['Tricho-dento-osseous syndrome 1'], 'types': [['Disease or Syndrome', 'disease', 'rare disease', 'developmental defect during embryogenesis', 'malformation syndrome with odontal and/or periodontal component', 'primary bone dysplasia with increased bone density', 'syndromic hair shaft abnormality']], 'spans': [[10, 11]]} ``` ### Data Fields - `sent_idx_unq`: a unique integer identifier for the data instance - `sentence`: a string sentence collected from English Wikipedia. Punctuation is separated from words, and the sentence can be tokenized into word-pieces with the .split() method. - `mentions`: list of medical mentions in the sentence. - `entities`: list of UMLS medical entity identifiers corresponding to mentions. There is exactly one entity for each mention, and the length of the `entities` list is equal to the length of the `mentions` list. - `entity_titles`: List of English titles collected from UMLS that describe each entity. The length of the `entity_titles` list is equal to the length of the `entities` list. - `types`: List of category types associated with each entity, including types collected from UMLS and WikiData. - `spans`: List of integer pairs representing the word span of each mention in the sentence. ### Data Splits MedWiki includes two configurations: MedWiki-Full and MedWiki-HQ (described further in [Dataset Creation](#dataset-creation)). For each configuration, data is split into training, development, and test sets. The split sizes are as follow: | | Train | Dev | Test | | ----- | ------ | ----- | ---- | | MedWiki-Full Sentences |11,784,235 | 649,132 | 648,608 | | MedWiki-Full Mentions |15,981,347 | 876,586 | 877,090 | | MedWiki-Full Unique Entities | 230,871 | 55,002 | 54,772 | | MedWiki-HQ Sentences | 2,962,089 | 165,941 | 164,193 | | MedWiki-HQ Mentions | 3,366,108 | 188,957 | 186,622 | | MedWiki-HQ Unique Entities | 118,572 | 19,725 | 19,437 | ## Dataset Creation ### Curation Rationale Existing medical text datasets are generally limited in scope, often obtaining low coverage over the entities and structural resources in the UMLS medical knowledge base. When language models are trained across such datasets, the lack of adequate examples may prevent models from learning the complex reasoning patterns that are necessary for performing effective entity linking or disambiguation, especially for rare entities as shown in prior work by [Orr et al.](http://cidrdb.org/cidr2021/papers/cidr2021_paper13.pdf). Wikipedia, which is often utilized as a rich knowledge source in general text settings, contains references to medical terms and can help address this issue. Here, we curate the MedWiki dataset, which is a large-scale, weakly-labeled dataset that consists of sentences from Wikipedia annotated with medical entities in the UMLS knowledge base. MedWiki can serve as a pretraining dataset for language models and holds potential for improving performance on medical named entity recognition tasks, especially on rare entities. ### Source Data #### Initial Data Collection and Normalization MedWiki consists of sentences obtained from the November 2019 dump of English Wikipedia. We split pages into an 80/10/10 train/dev/test split and then segment each page at the sentence-level. 
This ensures that all sentences associated with a single Wikipedia page are placed in the same split. #### Who are the source language producers? The source language producers are editors on English Wikipedia. ### Annotations #### Annotation process We create two configurations of our dataset: MedWiki-Full and MedWiki-HQ. We label MedWiki-Full by first annotating all English Wikipedia articles with textual mentions and corresponding WikiData entities; we do so by obtaining gold entity labels from internal page links as well as generating weak labels based on pronouns and alternative entity names (see [Orr et al. 2020](http://cidrdb.org/cidr2021/papers/cidr2021_paper13.pdf) for additional information). Then, we use the off-the-shelf entity linker [Bootleg](https://github.com/HazyResearch/bootleg) to map entities in WikiData to their counterparts in the 2017AA release of the Unified Medical Language System (UMLS), a standard knowledge base for biomedical entities (additional implementation details in forthcoming publication). Any sentence containing at least one UMLS entity is included in MedWiki-Full. We also include types associated with each entity, which are collected from both WikiData and UMLS using the generated UMLS-Wikidata mapping. It is important to note that types obtained from WikiData are filtered according to methods described in [Orr et al. 2020](http://cidrdb.org/cidr2021/papers/cidr2021_paper13.pdf). Since our labeling procedure introduces some noise into annotations, we also release the MedWiki-HQ dataset configuration with higher-quality labels. To generate MedWiki-HQ, we filtered the UMLS-Wikidata mappings to only include pairs of UMLS medical entities and WikiData items that share a high textual overlap between titles. MedWiki-HQ is a subset of MedWiki-Full. To evaluate the quality of our UMLS-Wikidata mappings, we find that WikiData includes a small set of "true" labeled mappings between UMLS entities and WikiData items. (Note that we only include WikiData items associated with linked Wikipedia pages.) This set comprises approximately 9.3k UMLS entities in the original UMLS-Wikidata mapping (used for MedWiki-Full) and 5.6k entities in the filtered UMLS-Wikidata mapping (used for MedWiki-HQ). Using these labeled sets, we find that our mapping accuracy is 80.2% for the original UMLS-Wikidata mapping and 94.5% for the filtered UMLS-Wikidata mapping. We also evaluate integration performance on this segment as the proportion of mapped WikiData entities that share a WikiData type with the true entity, suggesting the predicted mapping adds relevant structural resources. Integration performance is 85.4% for the original UMLS-Wikidata mapping and 95.9% for the filtered UMLS-Wikidata mapping. The remainder of items in UMLS have no “true” mappings to WikiData. #### Who are the annotators? The dataset was labeled using weak-labeling techniques as described above. ### Personal and Sensitive Information No personal or sensitive information is included in MedWiki. ## Considerations for Using the Data ### Social Impact of Dataset The purpose of this dataset is to enable the creation of better named entity recognition systems for biomedical text. MedWiki encompasses a large set of entities in the UMLS knowledge base and includes a rich set of types associated with each entity, which can enable the creation of models that achieve high performance on named entity recognition tasks, especially on rare or unpopular entities. 
Such systems hold potential for improving automated parsing and information retrieval from large quantities of biomedical text. ### Discussion of Biases The data included in MedWiki comes from English Wikipedia. Generally, Wikipedia articles are neutral in point of view and aim to avoid bias. However, some [prior work](https://www.hbs.edu/ris/Publication%20Files/15-023_e044cf50-f621-4759-a827-e9a3bf8920c0.pdf) has shown that ideological biases may exist within some Wikipedia articles, especially those that are focused on political issues or those that are written by fewer authors. We anticipate that such biases are rare for medical articles, which are typically comprised of scientific facts. However, it is important to note that bias encoded in Wikipedia is likely to be reflected by MedWiki. ### Other Known Limitations Since MedWiki was annotated using weak labeling techniques, there is likely some noise in entity annotations. (Note that to address this, we include the MedWiki-HQ configuration, which is a subset of MedWiki-Full with higher quality labels. Additional details in [Dataset Creation](#dataset-creation)). ## Additional Information ### Dataset Curators MedWiki was curated by Maya Varma, Laurel Orr, Sen Wu, Megan Leszczynski, Xiao Ling, and Chris Ré. ### Licensing Information Dataset licensed under CC BY 4.0. ### Citation Information ``` @inproceedings{varma-etal-2021-cross-domain, title = "Cross-Domain Data Integration for Named Entity Disambiguation in Biomedical Text", author = "Varma, Maya and Orr, Laurel and Wu, Sen and Leszczynski, Megan and Ling, Xiao and R{\'e}, Christopher", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2021", month = nov, year = "2021", address = "Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.findings-emnlp.388", pages = "4566--4575", } ``` ### Contributions Thanks to [@maya124](https://github.com/maya124) for adding this dataset.
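To make the field definitions above concrete, a small sketch that recovers each mention string from `sentence` using `spans`; the example instance suggests spans are `[start, end)` word indices over the whitespace-tokenized sentence, which is treated here as an assumption:

```python
from datasets import load_dataset

# A config name (e.g. for MedWiki-Full vs. MedWiki-HQ) may need to be passed explicitly
medwiki = load_dataset("mvarma/medwiki", split="train")

example = medwiki[0]
tokens = example["sentence"].split()
for (start, end), mention, entity in zip(example["spans"], example["mentions"], example["entities"]):
    span_text = " ".join(tokens[start:end])
    print(span_text, mention, entity)  # e.g. "TDO", "tdo", "C2931236"
```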
usc-isi/WikiConvert
---
language_creators:
- found
language:
- en
license:
- mit
multilinguality:
- monolingual
size_categories:
- 100K<n<1M
source_datasets:
- extended|wikipedia
task_categories:
- fill-mask
- other
- text-generation
task_ids:
- language-modeling
- masked-language-modeling
pretty_name: Wiki-Convert
language_bcp47:
- en-US
tags:
- numeracy
- natural-language-understanding
- tokenization
---

# Dataset Card for Wiki-Convert

## Table of Contents
- [Dataset Card for Wiki-Convert](#dataset-card-for-wiki-convert)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
- [Who are the source language producers?](#who-are-the-source-language-producers)
- [Annotations](#annotations)
- [Annotation process](#annotation-process)
- [Who are the annotators?](#who-are-the-annotators)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)

## Dataset Description

- **Repository:** [Github](https://github.com/avi-jit/numeracy-literacy)
- **Paper:** [Anthology](https://aclanthology.org/2021.emnlp-main.557)
- **Point of Contact:** [Avijit Thawani](mailto:thawani@isi.edu)

### Dataset Summary

Wiki-Convert is a 900,000+ sentence dataset of precise number annotations from English Wikipedia. It relies on Wiki contributors' annotations in the form of a [{{Convert}}](https://en.wikipedia.org/wiki/Template:Convert) template.

### Supported Tasks and Leaderboards

- `sequence-modeling`: The dataset can be used to train a model for language modeling, i.e., predicting the next token given the preceding context. Success on this task is typically measured by achieving a low [perplexity](https://huggingface.co/transformers/perplexity.html).

### Languages

The dataset is extracted from English Wikipedia, hence overwhelmingly contains English text.

## Dataset Structure

### Data Instances

Each row in the json file contains metadata about the source Wikipedia sentence, along with annotations for a single number, e.g., `number: 10` in the below example. The annotations are inspired by Numeracy-600K and are in the form of `length` and `offset` from the beginning of the sentence.

```
{
  'id': 1080801,
  'UNIQUE_STORY_INDEX': '1080801',
  'offset': 83,
  'length': 2,
  'magnitude': 0,
  'comment': "Like all Type UB III submarines, UB-117 carried 10 torpedoes and was armed with a  10 cms deck gun. ''",
  'number': 10
}
```

Please refer to https://github.com/avi-jit/numeracy-literacy for more details.

### Data Splits

The data is split into training, validation, and test sets.

|   | Train | Dev | Test |
|------:|-----------:|-----:|-----:|
| Input Sentences | 739,583 | 92,447 | 92,449 |

## License

Provided under MIT License.
## Citation ``` @inproceedings{thawani-etal-2021-numeracy, title = "Numeracy enhances the Literacy of Language Models", author = "Thawani, Avijit and Pujara, Jay and Ilievski, Filip", booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing", month = nov, year = "2021", address = "Online and Punta Cana, Dominican Republic", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2021.emnlp-main.557", pages = "6960--6967", abstract = "Specialized number representations in NLP have shown improvements on numerical reasoning tasks like arithmetic word problems and masked number prediction. But humans also use numeracy to make better sense of world concepts, e.g., you can seat 5 people in your {`}room{'} but not 500. Does a better grasp of numbers improve a model{'}s understanding of other concepts and words? This paper studies the effect of using six different number encoders on the task of masked word prediction (MWP), as a proxy for evaluating literacy. To support this investigation, we develop Wiki-Convert, a 900,000 sentence dataset annotated with numbers and units, to avoid conflating nominal and ordinal number occurrences. We find a significant improvement in MWP for sentences containing numbers, that exponent embeddings are the best number encoders, yielding over 2 points jump in prediction accuracy over a BERT baseline, and that these enhanced literacy skills also generalize to contexts without annotated numbers. We release all code at https://git.io/JuZXn.", } ``` Thanks to [@avi-jit](https://github.com/avi-jit) for adding this dataset.
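A small sketch of how the `offset`/`length` annotations appear to index into the sentence text, based on the example instance above (treat this indexing convention as an assumption):

```python
from datasets import load_dataset

wiki_convert = load_dataset("usc-isi/WikiConvert", split="train")

row = wiki_convert[0]
surface = row["comment"][row["offset"] : row["offset"] + row["length"]]
# For the example above, comment[83:85] == "10", matching row["number"]
print(surface, row["number"])
```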
valurank/PoliticalBias_AllSides_Txt
---
license:
- other
language:
- en
multilinguality:
- monolingual
task_categories:
- classification
task_ids:
- classification
---

# Dataset Card for news-12factor

## Table of Contents
- [Dataset Description](#dataset-description)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Source Data](#source-data)
- [Annotations](#annotations)

## Dataset Description

~20k articles labeled left, right, or center by the editors of allsides.com.

## Languages

The text in the dataset is in English.

## Dataset Structure

3 folders, with many text files in each. Each text file represents the body text of one article.

## Source Data

Article text was scraped from the source URLs using https://github.com/mozilla/readability

## Annotations

Articles were manually annotated by news editors who were attempting to select representative articles from the left, right and center of each article topic. In other words, the dataset should generally be balanced: the left/right/center articles cover the same set of topics, and each class has roughly the same number of articles.
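A hedged sketch of reading the folder layout described above into (text, label) pairs; the folder names and file extension are assumptions and should be adjusted to the actual archive layout:

```python
from pathlib import Path

def load_articles(root: str) -> list[dict]:
    data = []
    for label_dir in Path(root).iterdir():      # assumed: left/, right/, center/
        if not label_dir.is_dir():
            continue
        for txt_file in label_dir.glob("*.txt"):
            data.append({"text": txt_file.read_text(encoding="utf-8"),
                         "label": label_dir.name})
    return data

articles = load_articles("PoliticalBias_AllSides_Txt")
```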
fmplaza/EmoEvent
--- license: apache-2.0 language: - en - es --- # Dataset Card for Emoevent ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Source Data](#source-data) - [Annotations](#annotations) - [Additional Information](#additional-information) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Repository:** [EmoEvent dataset repository](https://github.com/fmplaza/EmoEvent) - **Paper: EmoEvent:** [A Multilingual Emotion Corpus based on different Events](https://aclanthology.org/2020.lrec-1.186.pdf) - **Leaderboard:** [Leaderboard for EmoEvent / Spanish version](http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/view/6385) - **Point of Contact: fmplaza@ujaen.es** ### Dataset Summary EmoEvent is a multilingual emotion dataset of tweets based on different events that took place in April 2019. Three annotators labeled the tweets following the six Ekman’s basic emotion model (anger, fear, sadness, joy, disgust, surprise) plus the “neutral or other emotions” category. Morevoer, the tweets are annotated as offensive (OFF) or non-offensive (NO). ### Supported Tasks and Leaderboards This dataset is intended for multi-class emotion classification and binary offensive classification. Competition [EmoEvalEs task on emotion detection for Spanish at IberLEF 2021](http://journal.sepln.org/sepln/ojs/ojs/index.php/pln/article/view/6385) ### Languages - Spanish - English ## Dataset Structure ### Data Instances For each instance, there is a string for the id of the tweet, a string for the emotion class, a string for the offensive class, and a string for the event. See the []() to explore more examples. ``` {'id': 'a0c1a858-a9b8-4cb1-8a81-1602736ff5b8', 'event': 'GameOfThrones', 'tweet': 'ARYA DE MI VIDA. ERES MAS ÉPICA QUE EL GOL DE INIESTA JODER #JuegodeTronos #VivePoniente', 'offensive': 'NO', 'emotion': 'joy', } ``` ``` {'id': '3YCT0L9OMMFP7KWKQSTJRJO0YHUSN2a0c1a858-a9b8-4cb1-8a81-1602736ff5b8', 'event': 'GameOfThrones', 'tweet': 'The #NotreDameCathedralFire is indeed sad and people call all offered donations humane acts, but please if you have money to donate, donate to humans and help bring food to their tables and affordable education first. What more humane than that? #HumanityFirst', 'offensive': 'NO', 'emotion': 'sadness', } ``` ### Data Fields - `id`: a string to identify the tweet - `event`: a string containing the event associated with the tweet - `tweet`: a string containing the text of the tweet - `offensive`: a string containing the offensive gold label - `emotion`: a string containing the emotion gold label ### Data Splits The EmoEvent dataset has 2 subsets: EmoEvent_es (Spanish version) and EmoEvent_en (English version) Each subset contains 3 splits: _train_, _validation_, and _test_. Below are the statistics subsets. 
| EmoEvent_es | Number of Instances in Split | | ------------- | ------------------------------------------- | | Train | 5,723 | | Validation | 844 | | Test | 1,656 | | EmoEvent_en | Number of Instances in Split | | ------------- | ------------------------------------------- | | Train | 5,112 | | Validation | 744 | | Test | 1,447 | ## Dataset Creation ### Source Data Twitter #### Who are the annotators? Amazon Mechanical Turkers ## Additional Information ### Licensing Information The EmoEvent dataset is released under the [Apache-2.0 License](http://www.apache.org/licenses/LICENSE-2.0). ### Citation Information ``` @inproceedings{plaza-del-arco-etal-2020-emoevent, title = "{{E}mo{E}vent: A Multilingual Emotion Corpus based on different Events}", author = "{Plaza-del-Arco}, {Flor Miriam} and Strapparava, Carlo and {Ure{\~n}a-L{\’o}pez}, L. Alfonso and {Mart{\’i}n-Valdivia}, M. Teresa", booktitle = "Proceedings of the 12th Language Resources and Evaluation Conference", month = may, year = "2020", address = "Marseille, France", publisher = "European Language Resources Association", url = "https://www.aclweb.org/anthology/2020.lrec-1.186", pages = "1492--1498", language = "English", ISBN = "979-10-95546-34-4" } ```
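A loading sketch based on the subsets and fields described above; the config names are an assumption inferred from the subset names:

```python
from datasets import load_dataset

emoevent_es = load_dataset("fmplaza/EmoEvent", "EmoEvent_es")

tweet = emoevent_es["train"][0]
print(tweet["tweet"], tweet["emotion"], tweet["offensive"])
```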
atenglens/taiwanese_english_translation
--- annotations_creators: [] language_creators: - other language: - tw - en license: [] multilinguality: - translation size_categories: - unknown source_datasets: - extended|other task_categories: - question-answering - text2text-generation - text-generation - translation task_ids: - language-modeling pretty_name: taiwanese_english_translation tags: - conditional-text-generation --- # Dataset Card for taiwanese_english_translation ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage: https://taigi.fhl.net/list.html** ### Dataset Summary Taiwanese and English translation of the Bible (National Taiwanese Bible Quan Luo version and World English Bible version). Each line corresponds to a verse in the Bible, which may contain multiple sentences. The dataset contains a total of more than 31,102 sentences (31,102 verses in the Bible). ### Languages Source Language: Taiwanese (Tailo romanization system) Target Language: English ## Dataset Structure csv: Tailo,English ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits Data split into train (80%), validation (10%), and test (10%) sets. ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization The data was scraped from the website: https://taigi.fhl.net/list.html. General noise cleanup was conducted. Also note that all names in Taiwanese have been de-hyphenated to assist with training. #### Who are the source language producers? The WWW Multimedia Information Network, operating under the Hope Hope Information Center, provides Taiwanese translations of the Bible. ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information N/A ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations A considerable amount of noise has been removed. However, there may still be some noise (extra punctuation, brackets, digits, special characters, verse annotations). ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information https://taigi.fhl.net/list.html ### Contributions Thanks to [@atenglens](https://github.com/atenglens) for adding this dataset.
hackathon-pln-es/neutral-es
---
language:
- es
multilinguality:
- monolingual
size_categories:
- 1K<n<10K
task_categories:
- text2text-generation
- translation
task_ids: []
pretty_name: neutralES
---

# Spanish Gender Neutralization

<p align="center">
    <img src="https://upload.wikimedia.org/wikipedia/commons/2/29/Gender_equality_symbol_%28clipart%29.png" width="250"/>
</p>

Spanish is a beautiful language and it has many ways of referring to people, neutralizing the genders and using some of the resources inside the language. One would say *Todas las personas asistentes* instead of *Todos los asistentes*, which is a more inclusive way of talking about people.

This dataset collects a set of manually annotated examples of gendered-to-neutral Spanish transformations.

The intended use of this dataset is to train a Spanish language model for translating from gendered to neutral, in order to have more inclusive sentences.

### Compiled sources

One of the major challenges was to obtain a valuable dataset that would suit the gender-inclusion purpose; therefore, when building the dataset, the team opted to dedicate a considerable amount of time to building it from scratch. You can find the results here.

The data used for model training has been manually created from a compilation of sources, obtained from a series of guidelines and manuals issued by the Spanish Ministry of Health, Social Services and Equality on the usage of non-sexist language, stipulated in this linked [document](https://www.inmujeres.gob.es/servRecursos/formacion/GuiasLengNoSexista/docs/Guiaslenguajenosexista_.pdf).

**NOTE: Apart from manually annotated samples, this dataset has been further increased by applying data augmentation so a minimum number of training examples are generated.**

* [Guía para un discurso igualitario en la universidad de alicante](https://ieg.ua.es/es/documentos/normativasobreigualdad/guia-para-un-discurso-igualitario-en-la-ua.pdf)
* [Guía UC de Comunicación en Igualdad](<https://web.unican.es/unidades/igualdad/SiteAssets/igualdad/comunicacion-en-igualdad/guia%20comunicacion%20igualdad%20(web).pdf>)
* [Buenas prácticas para el tratamiento del lenguaje en igualdad](https://e-archivo.uc3m.es/handle/10016/22811)
* [Guía del lenguaje no sexista de la Universidad de Castilla-La Mancha](https://unidadigualdad.ugr.es/page/guiialenguajeuniversitarionosexista_universidaddecastillalamancha/!)
* [Guía de Lenguaje Para el Ámbito Educativo](https://www.educacionyfp.gob.es/va/dam/jcr:8ce318fd-c8ff-4ad2-97b4-7318c27d1682/guialenguajeambitoeducativo.pdf) * [Guía para un uso igualitario y no sexista del lenguaje y dela imagen en la Universidad de Jaén](https://www.ujaen.es/servicios/uigualdad/sites/servicio_uigualdad/files/uploads/Guia_lenguaje_no_sexista.pdf) * [Guía de uso no sexista del vocabulario español](https://www.um.es/documents/2187255/2187763/guia-leng-no-sexista.pdf/d5b22eb9-b2e4-4f4b-82aa-8a129cdc83e3) * [Guía para el uso no sexista de la lengua castellana y de imágnes en la UPV/EHV](https://www.ehu.eus/documents/1734204/1884196/Guia_uso_no_sexista_EHU.pdf) * [Guía de lenguaje no sexista UNED](http://portal.uned.es/pls/portal/docs/PAGE/UNED_MAIN/LAUNIVERSIDAD/VICERRECTORADOS/GERENCIA/OFICINA_IGUALDAD/CONCEPTOS%20BASICOS/GUIA_LENGUAJE.PDF) * [COMUNICACIÓN AMBIENTAL CON PERSPECTIVA DE GÉNERO](https://cima.cantabria.es/documents/5710649/5729124/COMUNICACI%C3%93N+AMBIENTAL+CON+PERSPECTIVA+DE+G%C3%89NERO.pdf/ccc18730-53e3-35b9-731e-b4c43339254b) * [Recomendaciones para la utilización de lenguaje no sexista](https://www.csic.es/sites/default/files/guia_para_un_uso_no_sexista_de_la_lengua_adoptada_por_csic2.pdf) * [Estudio sobre lenguaje y contenido sexista en la Web](https://www.mujeresenred.net/IMG/pdf/Estudio_paginas_web_T-incluye_ok.pdf) * [Nombra.en.red. En femenino y en masculino](https://www.inmujeres.gob.es/areasTematicas/educacion/publicaciones/serieLenguaje/docs/Nombra_en_red.pdf) ## Team Members - Fernando Velasco [(fermaat)](https://huggingface.co/fermaat) - Cibeles Redondo [(CibelesR)](https://huggingface.co/CibelesR) - Juan Julian Cea [(Juanju)](https://huggingface.co/Juanju) - Magdalena Kujalowicz [(MacadellaCosta)](https://huggingface.co/MacadellaCosta) - Javier Blasco [(javiblasco)](https://huggingface.co/javiblasco) ### Enjoy and feel free to collaborate with this dataset 🤗
arka0821/multi_document_summarization
--- annotations_creators: - found language_creators: - found language: - en license: - unknown multilinguality: - monolingual size_categories: - 10K<n<100K source_datasets: - original task_categories: - summarization task_ids: - summarization-other-paper-abstract-generation paperswithcode_id: multi-document pretty_name: Multi-Document --- # Dataset Card for Multi-Document ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Repository:** [Multi-Document repository](https://github.com/arka0821/multi_document_summarization) - **Paper:** [Multi-Document: A Large-scale Dataset for Extreme Multi-document Summarization of Scientific Articles](https://arxiv.org/abs/2010.14235) ### Dataset Summary Multi-Document, a large-scale multi-document summarization dataset created from scientific articles. Multi-Document introduces a challenging multi-document summarization task: writing the related-work section of a paper based on its abstract and the articles it references. ### Supported Tasks and Leaderboards [More Information Needed] ### Languages The text in the dataset is in English ## Dataset Structure ### Data Instances {"id": "n3ByHGrxH3bvfrvF", "docs": [{"id": "1394519630182457344", "text": "Clover Bio's COVID-19 vaccine candidate shows immune response against SARS-CoV-2 variants in mouse model https://t.co/wNWa9GQux5"}, {"id": "1398154482463170561", "text": "The purpose of the Vaccine is not to stop you from catching COVID 19. The vaccine introduces the immune system to an inactivated form of the SARS-CoV-2 coronavirus or a small part of it. This then equips the body with the ability to fight the virus better in case you get it. https://t.co/Cz9OU6Zi7P"}, {"id": "1354844652520792071", "text": "The Moderna mRNA COVID-19 vaccine appears to be effective against the novel, rapidly spreading variants of SARS-CoV-2.\nResearchers analysed blood samples from vaccinated people and monkeys- Both contained neutralising antibodies against the virus. \nPT1/2\n#COVID19vaccines #biotech https://t.co/ET1maJznot"}, {"id": "1340189698107518976", "text": "@KhandaniM Pfizer vaccine introduces viral surface protein which is constant accross SARS COV 2 variants into the body. Body builds antibodies against this protein, not any virus. 
These antibodies instructs macrophages &amp; T-Cells to attack &amp; destroy any COVID-19 v variant at infection point"}, {"id": "1374368989581778945", "text": "@DelthiaRicks \" Pfizer and BioNTech\u2019s COVID-19 vaccine is an mRNA vaccine, which does not use the live virus but rather a small portion of the viral sequence of the SARS-CoV-2 virus to instruct the body to produce the spike protein displayed on the surface of the virus.\""}, {"id": "1353354819315126273", "text": "Pfizer and BioNTech Publish Results of Study Showing COVID-19 Vaccine Elicits Antibodies that Neutralize Pseudovirus Bearing the SARS-CoV-2 U.K. Strain Spike Protein in Cell Culture | Pfizer https://t.co/YXcSnjLt8C"}, {"id": "1400821856362401792", "text": "Pfizer-BioNTech's covid-19 vaccine elicits lower levels of antibodies against the SARS-CoV-2\u00a0Delta variant\u00a0(B.1.617.2), first discovered in India, in comparison to other variants, said a research published in\u00a0Lancet\u00a0journal.\n https://t.co/IaCMX81X3b"}, {"id": "1367252963190665219", "text": "New research from UNC-Chapel Hill suggests that those who have previously experienced a SARS-CoV-2 infection develop a significant antibody response to the first dose of mRNA-based COVID-19 vaccine.\nhttps://t.co/B4vR1KUQ0w"}, {"id": "1375949502461394946", "text": "Mechanism of a COVID-19 nanoparticle vaccine candidate that elicits a broadly neutralizing antibody response to SARS-CoV-2 variants https://t.co/nc1L0uvtlI #bioRxiv"}, {"id": "1395428608349548550", "text": "JCI - Efficient maternal to neonatal transfer of antibodies against SARS-CoV-2 and BNT162b2 mRNA COVID-19 vaccine https://t.co/vIBcpPaKFZ"}], "summary": "The COVID-19 vaccine appears to be effective against the novel, rapidly spreading variants of SARS-CoV-2. Pfizer-BioNTech's COVID-19 vaccine use small portion of the viral sequence of the SARS-CoV-2 virus to equip the body with the ability to fight the virus better in case you get it."} ### Data Fields {'id': text of paper abstract \ 'docs': document id \ [ 'id': id of text \ 'text': text data \ ] 'summary': summary text } ### Data Splits The data is split into a training, validation and test. | train | validation | test | |------:|-----------:|-----:| | 50 | 10 | 5 | ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information ``` @article{lu2020multi, title={Multi-Document: A Large-scale Dataset for Extreme Multi-document Summarization of Scientific Articles}, author={Arka Das, India}, journal={arXiv preprint arXiv:2010.14235}, year={2022} } ``` ### Contributions Thanks to [@arka0821] (https://github.com/arka0821/multi_document_summarization) for adding this dataset.
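As a usage sketch for the fields described above, the snippet below assumes the data can be loaded directly from the Hub under this repository ID (if not, use the files from the GitHub repository linked above); field names follow the Data Fields section.

```python
from datasets import load_dataset

# Assumption: the dataset is loadable from the Hub under this ID; otherwise
# load the files from the GitHub repository instead.
ds = load_dataset("arka0821/multi_document_summarization", split="train")

example = ds[0]
print(example["id"])                # identifier of the example
for doc in example["docs"]:         # each doc is a dict with "id" and "text"
    print(doc["id"], doc["text"][:80])
print(example["summary"])           # reference summary
```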
HuggingFaceM4/yttemporal180m
--- license: other ---
embedding-data/simple-wiki
--- license: mit language: - en paperswithcode_id: embedding-data/simple-wiki pretty_name: simple-wiki task_categories: - sentence-similarity - paraphrase-mining task_ids: - semantic-similarity-classification --- # Dataset Card for "simple-wiki" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://cs.pomona.edu/~dkauchak/simplification/](https://cs.pomona.edu/~dkauchak/simplification/) - **Repository:** [More Information Needed](https://cs.pomona.edu/~dkauchak/simplification/) - **Paper:** [https://aclanthology.org/P11-2117/](https://aclanthology.org/P11-2117/) - **Point of Contact:** [David Kauchak](dkauchak@cs.pomona.edu) ### Dataset Summary This dataset contains pairs of equivalent sentences obtained from Wikipedia. ### Supported Tasks - [Sentence Transformers](https://huggingface.co/sentence-transformers) training; useful for semantic search and sentence similarity. ### Languages - English. ## Dataset Structure Each example in the dataset contains pairs of equivalent sentences and is formatted as a dictionary with the key "set" and a list with the sentences as "value". ``` {"set": [sentence_1, sentence_2]} {"set": [sentence_1, sentence_2]} ... {"set": [sentence_1, sentence_2]} ``` This dataset is useful for training Sentence Transformers models. Refer to the following post on how to train models using similar sentences. ### Usage Example Install the 🤗 Datasets library with `pip install datasets` and load the dataset from the Hub with: ```python from datasets import load_dataset dataset = load_dataset("embedding-data/simple-wiki") ``` The dataset is loaded as a `DatasetDict` and has the format: ```python DatasetDict({ train: Dataset({ features: ['set'], num_rows: 102225 }) }) ``` Review an example `i` with: ```python dataset["train"][i]["set"] ``` ### Curation Rationale [More Information Needed](https://cs.pomona.edu/~dkauchak/simplification/) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://cs.pomona.edu/~dkauchak/simplification/) #### Who are the source language producers? [More Information Needed](https://cs.pomona.edu/~dkauchak/simplification/) ### Annotations #### Annotation process [More Information Needed](https://cs.pomona.edu/~dkauchak/simplification/) #### Who are the annotators? 
[More Information Needed](https://cs.pomona.edu/~dkauchak/simplification/) ### Personal and Sensitive Information [More Information Needed](https://cs.pomona.edu/~dkauchak/simplification/) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://cs.pomona.edu/~dkauchak/simplification/) ### Discussion of Biases [More Information Needed](https://cs.pomona.edu/~dkauchak/simplification/) ### Other Known Limitations [More Information Needed](https://cs.pomona.edu/~dkauchak/simplification/) ## Additional Information ### Dataset Curators [More Information Needed](https://cs.pomona.edu/~dkauchak/simplification/) ### Licensing Information [More Information Needed](https://cs.pomona.edu/~dkauchak/simplification/) ### Contributions
RUCAIBox/Data-to-text-Generation
--- language: - en multilinguality: - monolingual task_categories: - tabular-to-text - table-to-text task_ids: [] tags: - data-to-text --- These are the data-to-text generation datasets collected by TextBox, including: - WebNLG v2.1 (webnlg) - WebNLG v3.0 (webnlg2) - WikiBio (wikibio) - E2E (e2e) - DART (dart) - ToTTo (totto) - ENT-DESC (ent) - AGENDA (agenda) - GenWiki (genwiki) - TEKGEN (tekgen) - LogicNLG (logicnlg) - WikiTableT (wikit) - WEATHERGOV (wg). The details and leaderboard of each dataset can be found on the [TextBox page](https://github.com/RUCAIBox/TextBox#dataset).
RUCAIBox/Story-Generation
--- language: - en multilinguality: - monolingual task_categories: - text-generation task_ids: [] tags: - story-generation --- These are the story generation datasets collected by TextBox, including: - ROCStories (roc) - WritingPrompts (wp) - Hippocorpus (hc) - WikiPlots (wikip) - ChangeMyView (cmv). The details and leaderboard of each dataset can be found on the [TextBox page](https://github.com/RUCAIBox/TextBox#dataset).
thepurpleowl/codequeries
--- annotations_creators: - expert-generated language: - code language_creators: - found multilinguality: - monolingual pretty_name: codequeries size_categories: - 100K<n<1M source_datasets: - original tags: - neural modeling of code - code question answering - code semantic understanding task_categories: - question-answering task_ids: - extractive-qa license: - apache-2.0 --- # Dataset Card for CodeQueries ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [How to use](#how-to-use) - [Data Splits and Data Fields](#data-splits-and-data-fields) - [Dataset Creation](#dataset-creation) - [Additional Information](#additional-information) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** [Data](https://huggingface.co/datasets/thepurpleowl/codequeries) - **Repository:** [Code](https://github.com/thepurpleowl/codequeries-benchmark) - **Paper:** ### Dataset Summary CodeQueries is a dataset to evaluate the ability of neural networks to answer semantic queries over code. Given a query and code, a model is expected to identify answer and supporting-fact spans in the code for the query. This is extractive question-answering over code, for questions with a large scope (entire files) and complexity including both single- and multi-hop reasoning. ### Supported Tasks and Leaderboards Extractive question answering for code, semantic understanding of code. ### Languages The dataset contains code context from `python` files. ## Dataset Structure ### How to Use The dataset can be directly used with the huggingface datasets package. You can load and iterate through the dataset for the proposed five settings with the following two lines of code: ```python import datasets # in addition to `twostep`, the other supported settings are <ideal/file_ideal/prefix>. ds = datasets.load_dataset("thepurpleowl/codequeries", "twostep", split=datasets.Split.TEST) print(next(iter(ds))) #OUTPUT: {'query_name': 'Unused import', 'code_file_path': 'rcbops/glance-buildpackage/glance/tests/unit/test_db.py', 'context_block': {'content': '# vim: tabstop=4 shiftwidth=4 softtabstop=4\n\n# Copyright 2010-2011 OpenStack, LLC\ ...', 'metadata': 'root', 'header': "['module', '___EOS___']", 'index': 0}, 'answer_spans': [{'span': 'from glance.common import context', 'start_line': 19, 'start_column': 0, 'end_line': 19, 'end_column': 33} ], 'supporting_fact_spans': [], 'example_type': 1, 'single_hop': False, 'subtokenized_input_sequence': ['[CLS]_', 'Un', 'used_', 'import_', '[SEP]_', 'module_', '\\u\\u\\uEOS\\u\\u\\u_', '#', ' ', 'vim', ':', ...], 'label_sequence': [4, 4, 4, 4, 4, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, ...], 'relevance_label': 1 } ``` ### Data Splits and Data Fields Detailed information on the data splits for proposed settings can be found in the paper. In general, data splits in all the proposed settings have examples with the following fields - ``` - query_name (query name to uniquely identify the query) - code_file_path (relative source file path w.r.t. 
ETH Py150 corpus) - context_blocks (code blocks as context with metadata) [`prefix` setting doesn't have this field and `twostep` has `context_block`] - answer_spans (answer spans with metadata) - supporting_fact_spans (supporting-fact spans with metadata) - example_type (1 (positive) or 0 (negative) example type) - single_hop (True or False - for query type) - subtokenized_input_sequence (example subtokens) [`prefix` setting has the corresponding token ids] - label_sequence (example subtoken labels) - relevance_label (0 (not relevant) or 1 (relevant) - relevance label of a block) [only `twostep` setting has this field] ``` ## Dataset Creation The dataset is created using the [ETH Py150 Open dataset](https://github.com/google-research-datasets/eth_py150_open) as the source for code contexts. To get semantic queries and corresponding answer/supporting-fact spans in ETH Py150 Open corpus files, CodeQL was used. ## Additional Information ### Licensing Information The source code repositories used for preparing CodeQueries are based on the [ETH Py150 Open dataset](https://github.com/google-research-datasets/eth_py150_open) and are redistributable under the respective licenses. A Hugging Face dataset for ETH Py150 Open is available [here](https://huggingface.co/datasets/eth_py150_open). The labeling prepared and provided by us as part of CodeQueries is released under the Apache-2.0 license.
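Building on the loading example above, here is a small follow-up sketch (not part of the official benchmark code) that uses the `relevance_label` field of the `twostep` setting to keep only the blocks marked as relevant:

```python
import datasets

# Load the "twostep" setting as shown above and keep relevant (query, block) pairs.
ds = datasets.load_dataset("thepurpleowl/codequeries", "twostep", split=datasets.Split.TEST)
relevant = ds.filter(lambda example: example["relevance_label"] == 1)

print(len(relevant), "relevant (query, block) pairs")
first = relevant[0]
print(first["query_name"], "|", first["context_block"]["metadata"])
```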
skytnt/fbanimehq
--- annotations_creators: [] language: [] language_creators: [] license: - cc0-1.0 multilinguality: [] pretty_name: Full Body Anime HQ size_categories: - 100K<n<1M source_datasets: - original tags: [] task_categories: - unconditional-image-generation task_ids: [] --- ## Dataset Description FBAnimeHQ is a dataset with high-quality full-body anime girl images in a resolution of 1024 × 512. ### Dataset Summary The dataset contains 112,806 images. All images are on a white background. ### Collection Method #### v1.0 Collected from the Danbooru website. YOLOv5 was used to detect and crop the images, anime-segmentation to remove the backgrounds, and DeepDanbooru to filter the images. Finally, the dataset was cleaned manually. #### v2.0 Based on v1.0, with NovelAI image-to-image used to enhance and expand the dataset. ### Contributions Thanks to [@SkyTNT](https://github.com/SkyTNT) for adding this dataset.
meliascosta/wiki_academic_subjects
--- license: cc-by-3.0 annotations_creators: - crowdsourced language: - en language_creators: - crowdsourced multilinguality: - monolingual paperswithcode_id: wikitext-2 pretty_name: Wikipedia Outline of Academic Disciplines size_categories: - 10K<n<100K source_datasets: - original tags: - hierarchical - academic - tree - dag - topics - subjects task_categories: - text-classification task_ids: - multi-label-classification --- # Dataset Card for Wiki Academic Disciplines ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary This dataset was created from the [English Wikipedia](https://meta.wikimedia.org/wiki/Data_dump_torrents#English_Wikipedia) dump of January 2022. The main goal was to train a hierarchical classifier of academic subjects using [HiAGM](https://github.com/Alibaba-NLP/HiAGM). ### Supported Tasks and Leaderboards Text classification - No leaderboard at the moment. ### Languages English ## Dataset Structure The dataset consists of groups of labeled text chunks (tokenized by spaces and with stopwords removed). Labels are organized in a hierarchy (a DAG with a special Root node) of academic subjects. Nodes correspond to entries in the [outline of academic disciplines](https://en.wikipedia.org/wiki/Outline_of_academic_disciplines) article from Wikipedia. ### Data Instances Data is split into train/test/val, each on a separate `.jsonl` file. The label hierarchy is listed as a TAB-separated adjacency list in a `.taxonomy` file. ### Data Fields JSONL files contain only two fields: a "token" field which holds the text tokens and a "label" field which holds a list of labels for that text. ### Data Splits 80/10/10 TRAIN/TEST/VAL schema ## Dataset Creation All texts were extracted by following the linked articles on the [outline of academic disciplines](https://en.wikipedia.org/wiki/Outline_of_academic_disciplines) page. ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization Wiki Dump #### Who are the source language producers? Wikipedia community. ### Annotations #### Annotation process Texts were automatically assigned to their linked academic discipline. #### Who are the annotators? Wikipedia Community. ### Personal and Sensitive Information All information is public.
## Considerations for Using the Data ### Social Impact of Dataset ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information Creative Commons 3.0 (see [Wikipedia:Copyrights](https://en.wikipedia.org/wiki/Wikipedia:Copyrights)) ### Citation Information 1. Zhou, Jie, et al. "Hierarchy-aware global model for hierarchical text classification." Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. 2020. ### Contributions Thanks to [@meliascosta](https://github.com/meliascosta) for adding this dataset.
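As a small usage sketch for the `.jsonl` and `.taxonomy` layout described in the Data Instances and Data Fields sections above; the file names here are assumptions (check the repository for the actual ones), and the taxonomy parsing assumes the usual parent-followed-by-children convention of a TAB-separated adjacency list:

```python
import json

# File names below are assumptions; adjust them to the actual files in this repository.
def read_jsonl(path):
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f]

train = read_jsonl("train.jsonl")
print(train[0]["token"][:20])   # the text tokens
print(train[0]["label"])        # list of academic-subject labels

# The .taxonomy file is a TAB-separated adjacency list: a parent node, then its children.
hierarchy = {}
with open("wiki.taxonomy", encoding="utf-8") as f:
    for line in f:
        parent, *children = line.rstrip("\n").split("\t")
        hierarchy[parent] = children
```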
ACOSharma/literature
--- license: cc-by-sa-4.0 --- # Literature Dataset ## Files A dataset containing novels, epics and essays. The files are as follows: - main.txt, a file with all the texts (all in English), one text per line - vocab.txt, a file with the trained (BERT) vocab, one word per line - train.csv, a file of length-129 token sequences (a CSV of ints), containing 48,758 samples (6,289,782 tokens) - test.csv, the test split in the same format, containing 5,417 samples (698,793 tokens) - DatasetDistribution.png, a plot of the character lengths of all the texts ## Texts The texts used are these: - Wuthering Heights - Ulysses - Treasure Island - The War of the Worlds - The Republic - The Prophet - The Prince - The Picture of Dorian Gray - The Odyssey - The Great Gatsby - The Brothers Karamazov - Second Treatise of Government - Pride and Prejudice - Peter Pan - Moby Dick - Metamorphosis - Little Women - Les Misérables - Japanese Girls and Women - Iliad - Heart of Darkness - Grimms' Fairy Tales - Great Expectations - Frankenstein - Emma - Dracula - Don Quixote - Crime and Punishment - Christmas Carol - Beyond Good and Evil - Anna Karenina - Adventures of Sherlock Holmes - Adventures of Huckleberry Finn - Adventures in Wonderland - A Tale of Two Cities - A Room with A View
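A minimal loading sketch for the files listed above, assuming they have been downloaded locally; whether train.csv has a header row and whether token ids index directly into vocab.txt are assumptions, so adjust as needed:

```python
import pandas as pd

# Assumptions: files downloaded locally; train.csv has no header row;
# token ids correspond to line numbers in vocab.txt.
train = pd.read_csv("train.csv", header=None)           # rows of 129 token ids
vocab = open("vocab.txt", encoding="utf-8").read().splitlines()

print(train.shape)                                       # expected (48758, 129)
first_sequence = train.iloc[0].tolist()
print(" ".join(vocab[i] for i in first_sequence[:20]))   # decode the first few tokens
```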
taln-ls2n/kpbiomed
--- annotations_creators: - unknown language_creators: - unknown language: - en license: - cc-by-nc-4.0 multilinguality: - monolingual task_categories: - text-mining - text-generation task_ids: - keyphrase-generation - keyphrase-extraction size_categories: - 100K<n<1M pretty_name: KP-Biomed --- # KPBiomed, A Large-Scale Dataset for keyphrase generation ## About This dataset is made of 5.6 million abstracts with author assigned keyphrases. Details about the dataset can be found in the original paper: Maël Houbre, Florian Boudin and Béatrice Daille. 2022. [A Large-Scale Dataset for Biomedical Keyphrase Generation](https://arxiv.org/abs/2211.12124). In Proceedings of the 13th International Workshop on Health Text Mining and Information Analysis (LOUHI 2022). Reference (author-assigned) keyphrases are also categorized under the PRMU (<u>P</u>resent-<u>R</u>eordered-<u>M</u>ixed-<u>U</u>nseen) scheme as proposed in the following paper: - Florian Boudin and Ygor Gallina. 2021. [Redefining Absent Keyphrases and their Effect on Retrieval Effectiveness](https://aclanthology.org/2021.naacl-main.330/). In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4185–4193, Online. Association for Computational Linguistics. Text pre-processing (tokenization) is carried out using spacy (en_core_web_sm model) with a special rule to avoid splitting words with hyphens (e.g. graph-based is kept as one token). Stemming (Porter's stemmer implementation provided in nltk) is applied before reference keyphrases are matched against the source text. ## Content The details of the dataset are in the table below: | Split | # documents | # keyphrases by document (average) | % Present | % Reordered | % Mixed | % Unseen | | :----------- | ----------: | ---------------------------------: | --------: | ----------: | ------: | -------: | | Train small | 500k | 5.24 | 66.31 | 7.16 | 12.60 | 13.93 | | Train medium | 2M | 5.24 | 66.30 | 7.18 | 12.57 | 13.95 | | Train large | 5.6M | 5.23 | 66.32 | 7.18 | 12.55 | 13.95 | | Validation | 20k | 5.25 | 66.44 | 7.07 | 12.45 | 14.05 | | Test | 20k | 5.22 | 66.59 | 7.22 | 12.44 | 13.75 | The following data fields are available: - **id**: unique identifier of the document. - **title**: title of the document. - **abstract**: abstract of the document. - **keyphrases**: list of reference keyphrases. - **mesh terms**: list of indexer assigned MeSH terms if available (around 68% of the articles) - **prmu**: list of <u>P</u>resent-<u>R</u>eordered-<u>M</u>ixed-<u>U</u>nseen categories for reference keyphrases. - **authors**: list of the article's authors - **year**: publication year **NB**: The present keyphrases (represented by the "P" label in the PRMU column) are sorted by their apparition order in the text (title + text).
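To illustrate the matching step described above (Porter stemming before reference keyphrases are matched against the source text), here is a simplified sketch; it uses plain whitespace tokenization instead of the spacy pipeline actually used, so it is an approximation only:

```python
from nltk.stem.porter import PorterStemmer

stemmer = PorterStemmer()

def stem_tokens(text):
    # Simplified: whitespace tokenization stands in for the spacy tokenizer used for the dataset.
    return [stemmer.stem(token) for token in text.lower().split()]

def is_present(keyphrase, text):
    kp, doc = stem_tokens(keyphrase), stem_tokens(text)
    return any(doc[i:i + len(kp)] == kp for i in range(len(doc) - len(kp) + 1))

print(is_present("keyphrase generation", "a large scale dataset for biomedical keyphrase generation"))  # True
```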
PlanTL-GOB-ES/UD_Spanish-AnCora
--- YAML tags: annotations_creators: - expert-generated language: - es language_creators: - found license: - cc-by-4.0 multilinguality: - monolingual pretty_name: UD_Spanish-AnCora size_categories: [] source_datasets: [] tags: [] task_categories: - token-classification task_ids: - part-of-speech --- # UD_Spanish-AnCora ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Website:** https://github.com/UniversalDependencies/UD_Spanish-AnCora - **Point of Contact:** [Daniel Zeman](zeman@ufal.mff.cuni.cz) ### Dataset Summary This dataset is composed of the annotations from the [AnCora corpus](http://clic.ub.edu/corpus/), projected on the [Universal Dependencies treebank](https://universaldependencies.org/). We use the POS annotations of this corpus as part of the EvalEs Spanish language benchmark. ### Supported Tasks and Leaderboards POS tagging ### Languages The dataset is in Spanish (`es-ES`) ## Dataset Structure ### Data Instances Three conllu files. Annotations are encoded in plain text files (UTF-8, normalized to NFC, using only the LF character as line break, including an LF character at the end of file) with three types of lines: 1) Word lines containing the annotation of a word/token in 10 fields separated by single tab characters (see below). 2) Blank lines marking sentence boundaries. 3) Comment lines starting with hash (#). ### Data Fields Word lines contain the following fields: 1) ID: Word index, integer starting at 1 for each new sentence; may be a range for multiword tokens; may be a decimal number for empty nodes (decimal numbers can be lower than 1 but must be greater than 0). 2) FORM: Word form or punctuation symbol. 3) LEMMA: Lemma or stem of word form. 4) UPOS: Universal part-of-speech tag. 5) XPOS: Language-specific part-of-speech tag; underscore if not available. 6) FEATS: List of morphological features from the universal feature inventory or from a defined language-specific extension; underscore if not available. 7) HEAD: Head of the current word, which is either a value of ID or zero (0). 8) DEPREL: Universal dependency relation to the HEAD (root iff HEAD = 0) or a defined language-specific subtype of one. 9) DEPS: Enhanced dependency graph in the form of a list of head-deprel pairs. 10) MISC: Any other annotation. 
From: [https://universaldependencies.org](https://universaldependencies.org/guidelines.html) ### Data Splits - es_ancora-ud-train.conllu - es_ancora-ud-dev.conllu - es_ancora-ud-test.conllu ## Dataset Creation ### Curation Rationale [N/A] ### Source Data [UD_Spanish-AnCora](https://github.com/UniversalDependencies/UD_Spanish-AnCora) #### Initial Data Collection and Normalization The original annotation was done in a constituency framework as a part of the [AnCora project](http://clic.ub.edu/corpus/) at the University of Barcelona. It was converted to dependencies by the [Universal Dependencies team](https://universaldependencies.org/) and used in the CoNLL 2009 shared task. The CoNLL 2009 version was later converted to HamleDT and to Universal Dependencies. For more information on the AnCora project, visit the [AnCora site](http://clic.ub.edu/corpus/). To learn about the Universal Dependences, visit the webpage [https://universaldependencies.org](https://universaldependencies.org) #### Who are the source language producers? For more information on the AnCora corpus and its sources, visit the [AnCora site](http://clic.ub.edu/corpus/). ### Annotations #### Annotation process For more information on the first AnCora annotation, visit the [AnCora site](http://clic.ub.edu/corpus/). #### Who are the annotators? For more information on the AnCora annotation team, visit the [AnCora site](http://clic.ub.edu/corpus/). ### Personal and Sensitive Information No personal or sensitive information included. ## Considerations for Using the Data ### Social Impact of Dataset This dataset contributes to the development of language models in Spanish. ### Discussion of Biases [N/A] ### Other Known Limitations [N/A] ## Additional Information ### Dataset Curators [N/A] ### Licensing Information This work is licensed under a <a rel="license" href="https://creativecommons.org/licenses/by/4.0/">CC Attribution 4.0 International License</a>. ### Citation Information The following paper must be cited when using this corpus: Taulé, M., M.A. Martí, M. Recasens (2008) 'Ancora: Multilevel Annotated Corpora for Catalan and Spanish', Proceedings of 6th International Conference on Language Resources and Evaluation. Marrakesh (Morocco). To cite the Universal Dependencies project: Rueter, J. (Creator), Erina, O. (Contributor), Klementeva, J. (Contributor), Ryabov, I. (Contributor), Tyers, F. M. (Contributor), Zeman, D. (Contributor), Nivre, J. (Creator) (15 Nov 2020). Universal Dependencies version 2.7 Erzya JR. Universal Dependencies Consortium. ### Contributions [N/A]
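As an illustration of the 10-column word lines described in the Data Fields section above, here is a small parsing sketch; it uses only the standard library, is not part of the AnCora or UD tooling, and the sample line is made up for illustration:

```python
FIELDS = ["ID", "FORM", "LEMMA", "UPOS", "XPOS", "FEATS", "HEAD", "DEPREL", "DEPS", "MISC"]

def parse_conllu_line(line):
    line = line.rstrip("\n")
    if not line or line.startswith("#"):
        return None  # blank lines mark sentence boundaries; '#' lines are comments
    return dict(zip(FIELDS, line.split("\t")))

sample = "1\tEl\tel\tDET\t_\tDefinite=Def|Gender=Masc|Number=Sing|PronType=Art\t2\tdet\t_\t_"
print(parse_conllu_line(sample))
```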
ProGamerGov/StableDiffusion-v1-5-Regularization-Images
--- license: mit tags: - image-text-dataset - synthetic-dataset --- A collection of regularization / class instance datasets for the [Stable Diffusion v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5) model to use for DreamBooth prior preservation loss training. Files labeled with "mse vae" used the [stabilityai/sd-vae-ft-mse](https://huggingface.co/stabilityai/sd-vae-ft-mse) VAE. For ease of use, datasets are stored as zip files containing 512x512 PNG images. The number of images in each zip file is specified at the end of the filename. There is currently a bug where HuggingFace is incorrectly reporting that the datasets are pickled. They are not picked, they are simple ZIP files containing the images. Currently this repository contains the following datasets (datasets are named after the prompt they used): Art Styles * "**artwork style**": 4125 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE. * "**artwork style**": 4200 images generated using 50 DPM++ 2S a Karras steps and a CFG of 7, using the MSE VAE. A negative prompt of "text" was also used for this dataset. * "**artwork style**": 2750 images generated using 50 DPM++ 2S a Karras steps and a CFG of 7, using the MSE VAE. * "**illustration style**": 3050 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE. * "**erotic photography**": 2760 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE. * "**landscape photography**": 2500 images generated using 50 DPM++ 2S a Karras steps and a CFG of 7, using the MSE VAE. A negative prompt of "b&w, text" was also used for this dataset. People * "**person**": 2115 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE. * "**woman**": 4420 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE. * "**guy**": 4820 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE. * "**supermodel**": 4411 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE. * "**bikini model**": 4260 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE. * "**sexy athlete**": 5020 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE. * "**femme fatale**": 4725 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE. * "**sexy man**": 3505 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE. * "**sexy woman**": 3500 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE. Animals * "**kitty**": 5100 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE. * "**cat**": 2050 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE. Vehicles * "**fighter jet**": 1600 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE. * "**train**": 2669 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE. * "**car**": 3150 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE. Themes * "**cyberpunk**": 3040 images generated using 50 DDIM steps and a CFG of 7, using the MSE VAE. I used the "Generate Forever" feature in [AUTOMATIC1111's WebUI](https://github.com/AUTOMATIC1111/stable-diffusion-webui) to create thousands of images for each dataset. Every image in a particular dataset uses the exact same settings, with only the seed value being different. 
You can use my regularization / class image datasets with: https://github.com/ShivamShrirao/diffusers, https://github.com/JoePenna/Dreambooth-Stable-Diffusion, https://github.com/TheLastBen/fast-stable-diffusion, and any other DreamBooth projects that have support for prior preservation loss.
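A small, hedged sketch of how one of the ZIP archives can be fetched and unpacked with `huggingface_hub`; the archive file name below is a placeholder rather than an actual file name from this repository, so check the repository's file listing for the real names:

```python
import zipfile
from huggingface_hub import hf_hub_download

zip_path = hf_hub_download(
    repo_id="ProGamerGov/StableDiffusion-v1-5-Regularization-Images",
    repo_type="dataset",
    filename="<archive-name>.zip",   # placeholder: replace with a real file name from the repo
)
with zipfile.ZipFile(zip_path) as archive:
    archive.extractall("class_images")  # point your DreamBooth class/regularization image folder here
```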
bigbio/biology_how_why_corpus
--- language: - en bigbio_language: - English license: unknown multilinguality: monolingual bigbio_license_shortname: UNKNOWN pretty_name: BiologyHowWhyCorpus homepage: https://allenai.org/data/biology-how-why-corpus bigbio_pubmed: False bigbio_public: True bigbio_tasks: - QUESTION_ANSWERING --- # Dataset Card for BiologyHowWhyCorpus ## Dataset Description - **Homepage:** https://allenai.org/data/biology-how-why-corpus - **Pubmed:** False - **Public:** True - **Tasks:** QA This dataset consists of 185 "how" and 193 "why" biology questions authored by a domain expert, with one or more gold answer passages identified in an undergraduate textbook. The expert was not constrained in any way during the annotation process, so gold answers might be smaller than a paragraph or span multiple paragraphs. This dataset was used for the question-answering system described in the paper “Discourse Complements Lexical Semantics for Non-factoid Answer Reranking” (ACL 2014). ## Citation Information ``` @inproceedings{jansen-etal-2014-discourse, title = "Discourse Complements Lexical Semantics for Non-factoid Answer Reranking", author = "Jansen, Peter and Surdeanu, Mihai and Clark, Peter", booktitle = "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)", month = jun, year = "2014", address = "Baltimore, Maryland", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/P14-1092", doi = "10.3115/v1/P14-1092", pages = "977--986", } ```
deutsche-telekom/ger-backtrans-paraphrase
--- license: - cc-by-sa-4.0 language: - de multilinguality: - monolingual size_categories: - 10M<n<100M task_categories: - sentence-similarity --- # German Backtranslated Paraphrase Dataset This is a dataset of more than 21 million German paraphrases. These are text pairs that have the same meaning but are expressed with different words. The source of the paraphrases are different parallel German / English text corpora. The English texts were machine translated back into German to obtain the paraphrases. This dataset can be used for example to train semantic text embeddings. To do this, for example, [SentenceTransformers](https://www.sbert.net/) and the [MultipleNegativesRankingLoss](https://www.sbert.net/docs/package_reference/losses.html#multiplenegativesrankingloss) can be used. ## Creator This data set was compiled and open sourced by [Philip May](https://may.la/) of [Deutsche Telekom](https://www.telekom.de/). ## Our pre-processing Apart from the back translation, we have added more columns (for details see below). We have carried out the following pre-processing and filtering: - We dropped text pairs where one text was longer than 499 characters. - In the [GlobalVoices v2018q4](https://opus.nlpl.eu/GlobalVoices-v2018q4.php) texts we have removed the `" · Global Voices"` suffix. ## Your post-processing You probably don't want to use the dataset as it is, but filter it further. This is what the additional columns of the dataset are for. For us it has proven useful to delete the following pairs of sentences: - `min_char_len` less than 15 - `jaccard_similarity` greater than 0.3 - `de_token_count` greater than 30 - `en_de_token_count` greater than 30 - `cos_sim` less than 0.85 ## Columns description - **`uuid`**: a uuid calculated with Python `uuid.uuid4()` - **`en`**: the original English texts from the corpus - **`de`**: the original German texts from the corpus - **`en_de`**: the German texts translated back from English (from `en`) - **`corpus`**: the name of the corpus - **`min_char_len`**: the number of characters of the shortest text - **`jaccard_similarity`**: the [Jaccard similarity coefficient](https://en.wikipedia.org/wiki/Jaccard_index) of both sentences - see below for more details - **`de_token_count`**: number of tokens of the `de` text, tokenized with [deepset/gbert-large](https://huggingface.co/deepset/gbert-large) - **`en_de_token_count`**: number of tokens of the `de` text, tokenized with [deepset/gbert-large](https://huggingface.co/deepset/gbert-large) - **`cos_sim`**: the [cosine similarity](https://en.wikipedia.org/wiki/Cosine_similarity) of both sentences measured with [sentence-transformers/paraphrase-multilingual-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-multilingual-mpnet-base-v2) ## Anomalies in the texts It is noticeable that the [OpenSubtitles](https://opus.nlpl.eu/OpenSubtitles-v2018.php) texts have weird dash prefixes. This looks like this: ``` - Hast du was draufgetan? 
``` To remove them you could apply this function: ```python import re def clean_text(text): text = re.sub("^[-\s]*", "", text) text = re.sub("[-\s]*$", "", text) return text df["de"] = df["de"].apply(clean_text) df["en_de"] = df["en_de"].apply(clean_text) ``` ## Parallel text corpora used | Corpus name & link | Number of paraphrases | |-----------------------------------------------------------------------|----------------------:| | [OpenSubtitles](https://opus.nlpl.eu/OpenSubtitles-v2018.php) | 18,764,810 | | [WikiMatrix v1](https://opus.nlpl.eu/WikiMatrix-v1.php) | 1,569,231 | | [Tatoeba v2022-03-03](https://opus.nlpl.eu/Tatoeba-v2022-03-03.php) | 313,105 | | [TED2020 v1](https://opus.nlpl.eu/TED2020-v1.php) | 289,374 | | [News-Commentary v16](https://opus.nlpl.eu/News-Commentary-v16.php) | 285,722 | | [GlobalVoices v2018q4](https://opus.nlpl.eu/GlobalVoices-v2018q4.php) | 70,547 | | **sum** |. **21,292,789** | ## Back translation We have made the back translation from English to German with the help of [Fairseq](https://github.com/facebookresearch/fairseq). We used the `transformer.wmt19.en-de` model for this purpose: ```python en2de = torch.hub.load( "pytorch/fairseq", "transformer.wmt19.en-de", checkpoint_file="model1.pt:model2.pt:model3.pt:model4.pt", tokenizer="moses", bpe="fastbpe", ) ``` ## How the Jaccard similarity was calculated To calculate the [Jaccard similarity coefficient](https://en.wikipedia.org/wiki/Jaccard_index) we are using the [SoMaJo tokenizer](https://github.com/tsproisl/SoMaJo) to split the texts into tokens. We then `lower()` the tokens so that upper and lower case letters no longer make a difference. Below you can find a code snippet with the details: ```python from somajo import SoMaJo LANGUAGE = "de_CMC" somajo_tokenizer = SoMaJo(LANGUAGE) def get_token_set(text, somajo_tokenizer): sentences = somajo_tokenizer.tokenize_text([text]) tokens = [t.text.lower() for sentence in sentences for t in sentence] token_set = set(tokens) return token_set def jaccard_similarity(text1, text2, somajo_tokenizer): token_set1 = get_token_set(text1, somajo_tokenizer=somajo_tokenizer) token_set2 = get_token_set(text2, somajo_tokenizer=somajo_tokenizer) intersection = token_set1.intersection(token_set2) union = token_set1.union(token_set2) jaccard_similarity = float(len(intersection)) / len(union) return jaccard_similarity ``` ## Load this dataset ### With Hugging Face Datasets ```python # pip install datasets from datasets import load_dataset dataset = load_dataset("deutsche-telekom/ger-backtrans-paraphrase") train_dataset = dataset["train"] ``` ### With Pandas If you want to download the csv file and then load it with Pandas you can do it like this: ```python df = pd.read_csv("train.csv") ``` ## Citations, Acknowledgements and Licenses **OpenSubtitles** - citation: P. Lison and J. Tiedemann, 2016, [OpenSubtitles2016: Extracting Large Parallel Corpora from Movie and TV Subtitles](http://www.lrec-conf.org/proceedings/lrec2016/pdf/947_Paper.pdf). 
In Proceedings of the 10th International Conference on Language Resources and Evaluation (LREC 2016) - also see http://www.opensubtitles.org/ - license: no special license has been provided at OPUS for this dataset **WikiMatrix v1** - citation: Holger Schwenk, Vishrav Chaudhary, Shuo Sun, Hongyu Gong and Paco Guzman, [WikiMatrix: Mining 135M Parallel Sentences in 1620 Language Pairs from Wikipedia](https://arxiv.org/abs/1907.05791), arXiv, July 11 2019 - license: [CC-BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/) **Tatoeba v2022-03-03** - citation: J. Tiedemann, 2012, [Parallel Data, Tools and Interfaces in OPUS](https://opus.nlpl.eu/Tatoeba-v2022-03-03.php). In Proceedings of the 8th International Conference on Language Resources and Evaluation (LREC 2012) - license: [CC BY 2.0 FR](https://creativecommons.org/licenses/by/2.0/fr/) - copyright: https://tatoeba.org/eng/terms_of_use **TED2020 v1** - citation: Reimers, Nils and Gurevych, Iryna, [Making Monolingual Sentence Embeddings Multilingual using Knowledge Distillation](https://arxiv.org/abs/2004.09813), In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, November 2020 - acknowledgements to [OPUS](https://opus.nlpl.eu/) for this service - license: please respect the [TED Talks Usage Policy](https://www.ted.com/about/our-organization/our-policies-terms/ted-talks-usage-policy) **News-Commentary v16** - citation: J. Tiedemann, 2012, [Parallel Data, Tools and Interfaces in OPUS](https://opus.nlpl.eu/Tatoeba-v2022-03-03.php). In Proceedings of the 8th International Conference on Language Resources and Evaluation (LREC 2012) - license: no special license has been provided at OPUS for this dataset **GlobalVoices v2018q4** - citation: J. Tiedemann, 2012, [Parallel Data, Tools and Interfaces in OPUS](https://opus.nlpl.eu/Tatoeba-v2022-03-03.php). In Proceedings of the 8th International Conference on Language Resources and Evaluation (LREC 2012) - license: no special license has been provided at OPUS for this dataset ## Citation ```latex @misc{ger-backtrans-paraphrase, title={Deutsche-Telekom/ger-backtrans-paraphrase - dataset at Hugging Face}, url={https://huggingface.co/datasets/deutsche-telekom/ger-backtrans-paraphrase}, year={2022}, author={May, Philip} } ``` ## Licensing Copyright (c) 2022 [Philip May](https://may.la/), [Deutsche Telekom AG](https://www.telekom.com/) This work is licensed under [CC-BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/).
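For convenience, here is a minimal pandas sketch of the post-processing suggested in the "Your post-processing" section above, using the thresholds and column names documented in this card:

```python
import pandas as pd

df = pd.read_csv("train.csv")  # as in "Load this dataset" above

filtered = df[
    (df["min_char_len"] >= 15)
    & (df["jaccard_similarity"] <= 0.3)
    & (df["de_token_count"] <= 30)
    & (df["en_de_token_count"] <= 30)
    & (df["cos_sim"] >= 0.85)
]
print(f"kept {len(filtered)} of {len(df)} pairs")
```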
argilla/medical-keywords
--- language: - en size_categories: - 100K<n<1M source_datasets: - original task_categories: - token-classification task_ids: - keyphrase-extraction - named-entity-recognition dataset_info: features: - name: text dtype: string - name: tokens sequence: string - name: prediction list: - name: end dtype: int64 - name: label dtype: string - name: score dtype: float64 - name: start dtype: int64 - name: prediction_agent dtype: string - name: annotation dtype: 'null' - name: annotation_agent dtype: 'null' - name: id dtype: 'null' - name: metadata struct: - name: medical_specialty dtype: string - name: status dtype: string - name: event_timestamp dtype: timestamp[us] - name: metrics dtype: 'null' splits: - name: train num_bytes: 58986555 num_examples: 148699 download_size: 17498377 dataset_size: 58986555 --- # Dataset Card for "medical-keywords" ## Dataset Description - **Homepage:** Kaggle Challenge - **Repository:** https://www.kaggle.com/datasets/tboyle10/medicaltranscriptions - **Paper:** N.A. - **Leaderboard:** N.A. - **Point of Contact:** N.A. ### Dataset Summary Medical transcription data scraped from mtsamples.com Medical data is extremely hard to find due to HIPAA privacy regulations. This dataset offers a solution by providing medical transcription samples. This dataset contains sample medical transcriptions for various medical specialties. ### Languages english ### Citation Information Acknowledgements Medical transcription data scraped from mtsamples.com ### Contributions Thanks to [@davidberenstein1957](https://github.com/davidberenstein1957) for adding this dataset.
hanamizuki-ai/genshin-voice-v3.3-mandarin
--- language: - zh multilinguality: - monolingual pretty_name: Genshin Voice source_datasets: - original task_categories: - text-to-speech - automatic-speech-recognition dataset_info: features: - name: audio dtype: audio - name: language dtype: string - name: npcName dtype: string - name: text dtype: string - name: type dtype: string splits: - name: train num_bytes: 36412736429.25 num_examples: 75033 download_size: 18251937481 dataset_size: 36412736429.25 --- # Dataset Card for Genshin Voice ## Dataset Description ### Dataset Summary The Genshin Voice dataset is a text-to-voice dataset of different Genshin Impact characters unpacked from the game. ### Languages The text in the dataset is in Mandarin. ## Dataset Creation ### Source Data #### Initial Data Collection and Normalization The data was obtained by unpacking the [Genshin Impact](https://genshin.hoyoverse.com/) game. #### Who are the source language producers? The language producers are the employees of [Hoyoverse](https://hoyoverse.com/) and contractors from [EchoSky Studio](http://qx.asiacu.com/). ### Annotations The dataset contains official annotations from the game, including in-game speaker names and transcripts. ## Additional Information ### Dataset Curators The dataset was created by [w4123](https://github.com/w4123) initially in his [GitHub repository](https://github.com/w4123/GenshinVoice). ### Licensing Information Copyright © COGNOSPHERE. All Rights Reserved.
relbert/scientific_and_creative_analogy
--- language: - en license: - other multilinguality: - monolingual size_categories: - 1<n<1K pretty_name: Relation Mapping --- # Dataset Card for "relbert/scientific_and_creative_analogy" ## Dataset Description - **Repository:** [https://github.com/taczin/SCAN_analogies](https://github.com/taczin/SCAN_analogies) - **Paper:** [https://arxiv.org/abs/2211.15268](https://arxiv.org/abs/2211.15268) - **Dataset:** Relation Mapping ### Dataset Summary A dataset for the relation mapping task, i.e. the task of choosing the optimal combination of word pairs (see more detail in the [paper](https://www.jair.org/index.php/jair/article/view/10583)). A relation mapping `M` is a bijective map between two sets of terms (`A` and `B`): ``` [set `A`]: ("solar system", "sun", "planet", "mass", "attracts", "revolves", "gravity") [set `B`]: ("atom", "nucleus", "electron", "charge", "attracts", "revolves", "electromagnetism") [Relation Mapping `M`] * "solar system" -> "atom" * "sun" -> "nucleus" * "planet" -> "electron" * "mass" -> "charge" * "attracts" -> "attracts" * "revolves" -> "revolves" * "gravity" -> "electromagnetism" ``` The ***[Relation Mapping Problem](https://www.jair.org/index.php/jair/article/view/10583)*** is the task of identifying the mapping `M` given the sets of terms `A` and `B`. ## Dataset Structure ### Data Instances An example looks as follows. ``` { "id": "0", "reference": ["buying an item", "accepting a belief"], "source": ["buying an item", "buyer", "merchandise", "buying", "selling", "returning", "valuable", "worthless"], "target": ["accepting a belief", "believer", "belief", "accepting", "advocating", "rejecting", "true", "false"], "target_random": ["rejecting", "true", "false", "accepting a belief", "believer", "advocating", "belief", "accepting"], "type": "metaphor" } ``` - `source`: A list of terms, which is the source side of the relation mapping. - `target_random`: A shuffled list of terms onto which we want to map the terms in `source`. - `target`: A correctly ordered `target_random` that aligns with the `source`. Given `source` and `target_random`, the task is to predict the correct order of `target_random` so that it matches `target`. On average there are 7 terms in each set, so the total number of possible orderings is 5,040. ### Data Splits | name |test| |---------|----:| |relation_mapping| 45 | ### Citation Information ``` @article{czinczoll2022scientific, title={Scientific and Creative Analogies in Pretrained Language Models}, author={Czinczoll, Tamara and Yannakoudakis, Helen and Mishra, Pushkar and Shutova, Ekaterina}, journal={arXiv preprint arXiv:2211.15268}, year={2022} } ```
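As a usage note for the task described above (ordering `target_random` so that it matches `target`), here is an illustrative sketch of a naive baseline and a simple position-wise score; it is not the official evaluation protocol of the paper, and the exact `load_dataset` arguments (config name, split) may need adjusting:

```python
from datasets import load_dataset

# May require a config name; the "test" split follows the Data Splits table above.
ds = load_dataset("relbert/scientific_and_creative_analogy", split="test")

example = ds[0]
prediction = list(example["target_random"])   # naive baseline: keep the shuffled order as-is

correct = sum(p == t for p, t in zip(prediction, example["target"]))
print(f"position-wise accuracy of the naive baseline: {correct / len(example['target']):.2f}")
```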
b-mc2/wikihow_lists
--- license: cc-by-nc-sa-3.0 task_categories: - summarization - question-answering language: - en tags: - lists - bullets - steps - summary pretty_name: wikihow_lists size_categories: - 10K<n<100K --- # Dataset Card for WikiHow Lists ### Dataset Summary This dataset contains a CSV of a subset of WikiHow articles. The subsets include articles that have summaries in numbered-list format, an unordered list of ingredients, or an unordered list of items needed for the article. The CSV contains a pageId to reference back to the source, the title of the article, a result column with the list data, and a column specifying the result type (ingredient, needed items, summary). ### Licensing Information The data is from WikiHow; the license for the content is located at https://www.wikihow.com/wikiHow:Creative-Commons
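A short, hedged usage sketch for the CSV described above; the exact column names (in particular the name of the result-type column) are assumptions, so print the columns first and adjust:

```python
from datasets import load_dataset

ds = load_dataset("b-mc2/wikihow_lists", split="train")
print(ds.column_names)  # expected to include pageId, title, result and a result-type column

# "result_type" is an assumed column name; adjust after inspecting ds.column_names.
summaries = ds.filter(lambda row: row["result_type"] == "summary")
print(len(summaries), "articles with numbered-list summaries")
```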
AIARTCHAN/lora-hanboka-000003
--- license: creativeml-openrail-m tags: - lora - aiartchan - stable-diffusion --- # Lora - hanboka-000003 ## Dataset Description - **Original:** [Hanbok LoRA](https://arca.live/b/aiart/69417775) A LoRA file for hanbok (traditional Korean clothes). Put `hanbok, korean clothes` in the prompt. A weight of 0.8 is recommended. [Download](https://huggingface.co/datasets/AIARTCHAN/lora-hanboka-000003/resolve/main/hanboka-000003.safetensors)
AIARTCHAN/lora-Hanbok_LoRA_V2
--- license: creativeml-openrail-m tags: - lora - aiartchan - stable-diffusion --- # Lora - Hanbok_LoRA_V2 ## Dataset Description - **Original:** [Remade the hanbok LoRA](https://arca.live/b/aiart/69505242) A hanbok LoRA file. Example prompt: ``` (masterpiece, best quality:1.2), 1girl, solo, <lora:Hanbok_LoRA_V2:0.8>, hanbok, korean_clothes, smile, best ratio four finger and one thumb, looking_at_viewer, cowboy_shot, (white background), (simple background) Negative prompt: EasyNegative, extra fingers,fewer fingers, watermark, text, animal_ears, japanese_clothes, kimono, chinese_clothes ``` ``` Steps: 20, Sampler: DPM++ 2M Karras, CFG scale: 10, Size: 640x960, Model: Counterfeit-V2.5 Denoising strength: 0.6, Clip skip: 2, Hires upscale: 1.5, Hires upscaler: Latent ``` A weight of 0.8 is recommended. [Download](https://huggingface.co/datasets/AIARTCHAN/lora-Hanbok_LoRA_V2/resolve/main/Hanbok_LoRA_V2.safetensors)
AIARTCHAN/lora-Asbestos_Ceiling
--- license: creativeml-openrail-m tags: - lora - aiartchan - stable-diffusion --- # Lora - Asbestos_Ceiling ## Dataset Description - **Original:** [Sharing a suspiciously familiar asbestos-ceiling LoRA, and how to use it](https://arca.live/b/aiart/69669397) A LoRA file for asbestos **ceilings**. ## !!How to use!! If you simply apply the LoRA in T2I (txt2img) and generate, it spills over onto the walls and the hit rate drops sharply. Inpaint only the ceiling area for a much better hit rate. **Denoising strength: 0.5** **<lora:Asbestos Ceiling:2.0>** [Download](https://huggingface.co/datasets/AIARTCHAN/lora-Asbestos_Ceiling/resolve/main/Asbestos%20Ceiling.safetensors)
HiTZ/euscrawl
--- annotations_creators: - no-annotation language: - eu language_creators: - found license: - cc multilinguality: - monolingual pretty_name: EusCrawl size_categories: - 10M<n<100M source_datasets: - original tags: - high-quality - scraping task_categories: - text-generation - fill-mask task_ids: - language-modeling - masked-language-modeling dataset_info: features: - name: id dtype: int32 - name: title dtype: string - name: text dtype: string - name: source dtype: string - name: license dtype: string - name: url dtype: string splits: - name: train num_bytes: 2314407002 num_examples: 1724544 download_size: 728281801 dataset_size: 2314407002 --- # Dataset Card for EusCrawl ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** https://ixa.ehu.eus/euscrawl/ - **Repository:** - **Paper:** https://arxiv.org/abs/2203.08111 - **Leaderboard:** - **Point of Contact:** a.soroa@ehu.eus ### Dataset Summary EusCrawl (http://www.ixa.eus/euscrawl/) is a high-quality corpus for Basque comprising 12.5 million documents and 423 million tokens, totalling 2.1 GiB of uncompressed text. EusCrawl was built using ad-hoc scrapers to extract text from 33 Basque websites with high-quality content, resulting in cleaner text compared to general purpose approaches. ### Supported Tasks and Leaderboards EusCrawl is intended for pretraining models for language modeling or masked language modeling. ### Languages Basque (eu) ## Dataset Structure ### Data Instances ```json { "id": 6, "title": "Herriko enpresa handien eta txikien arteko topaketak egingo dituzte", "text": "09:30ean hasiko da bilera eta aurkezpena egingo dute Tubacex, JEZ, Envases, Guardian eta Vidrala enpresek. Eskualdeko lantegi motorrekin beste enpresa txikiak eta ertainak egongo dira. Erakunde publikoaren helburua da euren artean ezagutzea eta elkarlana sustatzea.", "source": "aiaraldea", "license": "cc-by-sa 3.0", "url": "https://aiaraldea.eus/laudio/1494603159768-herriko-enpresa-handien-eta-txikien-arteko-topaketak-egingo-dituzte", } ``` ### Data Fields - "id": example id - "title": article title - "text": article text - "source": article source - "license": article license - "url": article url ### Data Splits The dataset only has one training split because it is intended for pretraining language models. ## Dataset Creation ### Curation Rationale [More Information Needed] ### Source Data #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? 
[More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information We do not claim ownership of any document in the corpus. All documents we collected were published under a Creative Commons license in their original website, and the specific variant can be found in the "license" field of each document. Should you consider that our data contains material that is owned by you and you would not like to be reproduced here, please contact Aitor Soroa at a.soroa@ehu.eus. ### Citation Information If you use our corpus or models for academic research, please cite the paper in question: ```bibtex @misc{artetxe2022euscrawl, title={Does corpus quality really matter for low-resource languages?}, author={Mikel Artetxe, Itziar Aldabe, Rodrigo Agerri, Olatz Perez-de-Viñaspre, Aitor Soroa}, year={2022}, eprint={2203.08111}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ### Contributions Thanks to [@juletx](https://github.com/juletx) for adding this dataset.
HuggingFaceH4/instruct_me
--- license: apache-2.0 dataset_info: - config_name: instruction_tuning features: - name: text dtype: string - name: meta struct: - name: source dtype: string - name: config dtype: string splits: - name: train num_bytes: 29975565 num_examples: 41685 - name: test num_bytes: 3298059 num_examples: 4632 download_size: 18425612 dataset_size: 33273624 - config_name: reward_modelling features: - name: text dtype: string - name: meta struct: - name: source dtype: string - name: config dtype: string splits: - name: train num_bytes: 25274204 num_examples: 41685 - name: test num_bytes: 2777314 num_examples: 4632 download_size: 15636566 dataset_size: 28051518 - config_name: ppo features: - name: prompt dtype: string - name: meta struct: - name: source dtype: string - name: config dtype: string splits: - name: train num_bytes: 50787070 num_examples: 83371 - name: test num_bytes: 5715727 num_examples: 9264 download_size: 31461165 dataset_size: 56502797 - config_name: reward_modeling features: - name: prompt dtype: string - name: meta struct: - name: source dtype: string - name: config dtype: string splits: - name: train num_bytes: 25274204 num_examples: 41685 - name: test num_bytes: 2777314 num_examples: 4632 download_size: 15636838 dataset_size: 28051518 task_categories: - conversational - text-generation language: - en tags: - human-feedback - instruct - reward-modeling pretty_name: Instruct Me --- # Dataset card for Instruct Me ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** Lewis Tunstall ### Dataset summary Instruct Me is a dataset of prompts and instruction dialogues between a human user and AI assistant. The prompts are derived from (prompt, completion) pairs in the [Helpful Instructions dataset](https://huggingface.co/datasets/HuggingFaceH4/helpful_instructions). The goal is to train a language model to that is "chatty" and can answer the kind of questions or tasks a human user might instruct an AI assistant to perform. ### Supported Tasks and Leaderboard We provide 3 configs that can be used for training RLHF models: #### instruction_tuning Single-turn user/bot dialogues for instruction tuning. #### reward_modeling Prompts to generate model completions and collect human preference data #### ppo Prompts to generate model completions for optimization of the instruction-tuned model with techniques like PPO. ### Changelog * March 6, 2023: `v1.1.0` release. Changed the `text` columns for the `reward_modeling` and `ppo` configs to `prompt` for consistency with our dataset schemas elsewhere. * March 5, 2023: `v1.0.0` release.
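A minimal loading sketch for the three configs listed above; field names follow the `dataset_info` block (`text` for `instruction_tuning`, `prompt` for `reward_modeling` and `ppo`):

```python
from datasets import load_dataset

instruct = load_dataset("HuggingFaceH4/instruct_me", "instruction_tuning", split="train")
reward = load_dataset("HuggingFaceH4/instruct_me", "reward_modeling", split="train")
ppo = load_dataset("HuggingFaceH4/instruct_me", "ppo", split="train")

print(instruct[0]["text"][:200])   # single-turn user/bot dialogue
print(reward[0]["prompt"][:200])   # prompt for collecting preference data
print(ppo[0]["prompt"][:200])      # prompt for PPO rollouts
```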
biglam/european_art
--- dataset_info: - config_name: raw features: - name: image dtype: image - name: source dtype: string - name: width dtype: int16 - name: height dtype: int16 - name: dept dtype: int8 - name: segmented dtype: int8 - name: objects list: - name: name dtype: class_label: names: '0': zebra '1': tree '2': nude '3': crucifixion '4': scroll '5': head '6': swan '7': shield '8': lily '9': mouse '10': knight '11': dragon '12': horn '13': dog '14': palm '15': tiara '16': helmet '17': sheep '18': deer '19': person '20': sword '21': rooster '22': bear '23': halo '24': lion '25': monkey '26': prayer '27': crown of thorns '28': elephant '29': zucchetto '30': unicorn '31': holy shroud '32': cat '33': apple '34': banana '35': chalice '36': bird '37': eagle '38': pegasus '39': crown '40': camauro '41': saturno '42': arrow '43': dove '44': centaur '45': horse '46': hands '47': skull '48': orange '49': monk '50': trumpet '51': key of heaven '52': fish '53': cow '54': angel '55': devil '56': book '57': stole '58': butterfly '59': serpent '60': judith '61': mitre '62': banner '63': donkey '64': shepherd '65': boat '66': god the father '67': crozier '68': jug '69': lance - name: pose dtype: class_label: names: '0': stand '1': sit '2': partial '3': Unspecified '4': squats '5': lie '6': bend '7': fall '8': walk '9': push '10': pray '11': undefined '12': kneel '13': unrecognize '14': unknown '15': other '16': ride - name: diffult dtype: int32 - name: xmin dtype: float64 - name: ymin dtype: float64 - name: xmax dtype: float64 - name: ymax dtype: float64 splits: - name: train num_bytes: 9046918 num_examples: 15156 download_size: 18160510195 dataset_size: 9046918 - config_name: coco features: - name: image dtype: image - name: source dtype: string - name: width dtype: int16 - name: height dtype: int16 - name: dept dtype: int8 - name: segmented dtype: int8 - name: objects list: - name: category_id dtype: class_label: names: '0': zebra '1': tree '2': nude '3': crucifixion '4': scroll '5': head '6': swan '7': shield '8': lily '9': mouse '10': knight '11': dragon '12': horn '13': dog '14': palm '15': tiara '16': helmet '17': sheep '18': deer '19': person '20': sword '21': rooster '22': bear '23': halo '24': lion '25': monkey '26': prayer '27': crown of thorns '28': elephant '29': zucchetto '30': unicorn '31': holy shroud '32': cat '33': apple '34': banana '35': chalice '36': bird '37': eagle '38': pegasus '39': crown '40': camauro '41': saturno '42': arrow '43': dove '44': centaur '45': horse '46': hands '47': skull '48': orange '49': monk '50': trumpet '51': key of heaven '52': fish '53': cow '54': angel '55': devil '56': book '57': stole '58': butterfly '59': serpent '60': judith '61': mitre '62': banner '63': donkey '64': shepherd '65': boat '66': god the father '67': crozier '68': jug '69': lance - name: image_id dtype: string - name: area dtype: int64 - name: bbox sequence: float32 length: 4 - name: segmentation list: list: float32 - name: iscrowd dtype: bool - name: image_id dtype: string splits: - name: train num_bytes: 8285204 num_examples: 15156 download_size: 18160510195 dataset_size: 8285204 license: cc-by-nc-2.0 task_categories: - object-detection - image-classification tags: - lam - art - historical pretty_name: 'DEArt: Dataset of European Art' size_categories: - 10K<n<100K --- # Dataset Card for DEArt: Dataset of European Art ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and 
Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** https://doi.org/10.5281/zenodo.6984525 - **Paper:** https://arxiv.org/abs/2211.01226 - **Leaderboard:** - **Point of Contact:** ### Dataset Summary > DEArt is an object detection and pose classification dataset meant to be a reference for paintings between the XIIth and the XVIIIth centuries. It contains more than 15000 images, about 80% non-iconic, aligned with manual annotations for the bounding boxes identifying all instances of 69 classes as well as 12 possible poses for boxes identifying human-like objects. Of these, more than 50 classes are cultural heritage specific and thus do not appear in other datasets; these reflect imaginary beings, symbolic entities and other categories related to art. ### Supported Tasks and Leaderboards - `object-detection`: This dataset can be used to train or evaluate models for object detection on historical artwork images. - `image-classification`: This dataset can be used for image classification tasks by using only the labels and not the bounding box information. ## Dataset Structure This dataset has two configurations. These configurations both cover the same data and annotations but provide these annotations in different forms to make it easier to integrate the data with existing processing pipelines. - The first configuration, `raw`, uses the data's original format. - The second configuration converts the annotations into a format that is closer to the `COCO` annotation format. This is done to make it easier to work with the [`image_processors`](https://huggingface.co/docs/transformers/main_classes/image_processor) (formerly known as `feature_extractor`s) from the `Transformers` models for object detection, which expect data to be in a COCO-style format.
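A minimal sketch of that workflow, assuming the fields shown in the next section; the DETR checkpoint is only an illustrative choice, and the per-object column is accessed defensively because the card lists it both as `objects` and as `annotations`:

```python
from datasets import load_dataset
from transformers import AutoImageProcessor

ds = load_dataset("biglam/european_art", "coco", split="train")
processor = AutoImageProcessor.from_pretrained("facebook/detr-resnet-50")

example = ds[0]
# The COCO config stores per-object dicts (bbox, category_id, area, iscrowd, ...).
anns = example.get("annotations", example.get("objects"))
target = {"image_id": int(example["image_id"]), "annotations": anns}

encoding = processor(images=example["image"], annotations=target, return_tensors="pt")
print(encoding.keys())  # pixel_values, pixel_mask, labels
```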
### Data Instances An instance from the `raw` config: ```python {'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=1019x1680>, 'source': 'Europeana Collection', 'width': 1019, 'height': 1680, 'dept': 3, 'segmented': None, 'objects': [{'name': 40, 'pose': 3, 'diffult': 0, 'xmin': 259.0, 'ymin': 166.0, 'xmax': 679.0, 'ymax': 479.0}, {'name': 19, 'pose': 2, 'diffult': 0, 'xmin': 115.0, 'ymin': 354.0, 'xmax': 882.0, 'ymax': 1168.0}, {'name': 15, 'pose': 3, 'diffult': 0, 'xmin': 445.0, 'ymin': 1170.0, 'xmax': 579.0, 'ymax': 1302.0}, {'name': 51, 'pose': 3, 'diffult': 0, 'xmin': 354.0, 'ymin': 1196.0, 'xmax': 445.0, 'ymax': 1330.0}, {'name': 51, 'pose': 3, 'diffult': 0, 'xmin': 580.0, 'ymin': 1203.0, 'xmax': 701.0, 'ymax': 1326.0}, {'name': 57, 'pose': 3, 'diffult': 0, 'xmin': 203.0, 'ymin': 642.0, 'xmax': 882.0, 'ymax': 1172.0}]} ``` An instance from the `coco` config: ```python {'image': <PIL.JpegImagePlugin.JpegImageFile image mode=RGB size=1019x1680>, 'source': 'Europeana Collection', 'width': 1019, 'height': 1680, 'dept': 3, 'segmented': None, 'image_id': '0', 'annotations': [{'category_id': 40, 'image_id': '0', 'area': 131460, 'bbox': [259.0, 166.0, 420.0, 313.0], 'segmentation': [], 'iscrowd': False}, {'category_id': 19, 'image_id': '0', 'area': 624338, 'bbox': [115.0, 354.0, 767.0, 814.0], 'segmentation': [], 'iscrowd': False}, {'category_id': 15, 'image_id': '0', 'area': 17688, 'bbox': [445.0, 1170.0, 134.0, 132.0], 'segmentation': [], 'iscrowd': False}, {'category_id': 51, 'image_id': '0', 'area': 12194, 'bbox': [354.0, 1196.0, 91.0, 134.0], 'segmentation': [], 'iscrowd': False}, {'category_id': 51, 'image_id': '0', 'area': 14883, 'bbox': [580.0, 1203.0, 121.0, 123.0], 'segmentation': [], 'iscrowd': False}, {'category_id': 57, 'image_id': '0', 'area': 359870, 'bbox': [203.0, 642.0, 679.0, 530.0], 'segmentation': [], 'iscrowd': False}]} ``` ### Data Fields The fields for the COCO config: - `image`: The Image being annotated - `source`: source of the image i.e.'Europeana Collection' - `width`: width of the image - `height`: height of the image - `dept`: number of channels in the image - `segmented`: Whether the image has been segmented - `image_id`: ID for the image - `annotations`: annotations in coco format, consisting of a list containing dictionaries with the following keys: - `bbox`: bounding boxes for the images - `category_id`: a label for the image - `image_id`: id for the image - `iscrowd`: COCO `iscrowd` flag - `segmentation`: COCO segmentation annotations (empty in this case but kept for compatibility with other processing scripts) ### Data Splits The dataset doesn't define set splits, so only a train split is provided. The paper associated with the dataset does discuss a train and validation split, but it doesn't appear this split was shared with the dataset or associated paper. ## Dataset Creation ### Curation Rationale The creators of the dataset authors outline some of their motivations for creating the dataset in the abstract for their paper: > Large datasets that were made publicly available to the research community over the last 20 years have been a key enabling factor for the advances in deep learning algorithms for NLP or computer vision. These datasets are generally pairs of aligned image / manually annotated metadata, where images are photographs of everyday life. 
Scholarly and historical content, on the other hand, treat subjects that are not necessarily popular to a general audience, they may not always contain a large number of data points, and new data may be difficult or impossible to collect. Some exceptions do exist, for instance, scientific or health data, but this is not the case for cultural heritage (CH). The poor performance of the best models in computer vision - when tested over artworks - coupled with the lack of extensively annotated datasets for CH, and the fact that artwork images depict objects and actions not captured by photographs, indicate that a CH-specific dataset would be highly valuable for this community. We propose DEArt, at this point primarily an object detection and pose classification dataset meant to be a reference for paintings between the XIIth and the XVIIIth centuries. It contains more than 15000 images, about 80% non-iconic, aligned with manual annotations for the bounding boxes identifying all instances of 69 classes as well as 12 possible poses for boxes identifying human-like objects. Of these, more than 50 classes are CH-specific and thus do not appear in other datasets; these reflect imaginary beings, symbolic entities and other categories related to art. Additionally, existing datasets do not include pose annotations. ### Source Data The source data comes from several cultural heritage institutions that have shared openly licenced images. The dictionary below shows the institutions and the frequency with which they are the provider of images in the dataset. ```python {'National Museum in Warsaw': 2030, 'Europeana Collection': 1991, 'The Art Institute of Chicago': 1237, 'The Metropolitan Museum of Art': 1218, 'Rijksmuseum': 1066, 'National Gallery of Art': 871, 'Philadelphia Museum of Art': 774, 'WikiArt': 687, 'National museum in Krakow': 661, 'National Gallery of Denmark': 660, 'British Museum': 618, 'Victoria and Albert Museum': 561, 'Paul Mellon Centre': 391, 'National Gallery of Scotland': 384, 'Yale University Art Gallery': 376, 'Museo Nacional Thyssen-Bornemisza': 332, 'Harvard Art Museum': 279, 'The National Museum of Norvay': 270, 'LACMA': 262, 'The Cleveland Museum of Art': 242, 'The Leiden Collection': 159, 'The Clark Museum': 77, 'Pharos': 6, 'Wikimedia Commons': 2, 'Wikipedia': 1, 'Unknown': 1} ``` #### Initial Data Collection and Normalization [More Information Needed] #### Who are the source language producers? [More Information Needed] ### Annotations #### Annotation process [More Information Needed] #### Who are the annotators? [More Information Needed] ### Personal and Sensitive Information [More Information Needed] ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information [More Information Needed] ### Citation Information [More Information Needed] ### Contributions Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
LangChainDatasets/question-answering-state-of-the-union
--- license: mit ---
badmatr11x/hate-offensive-speech
--- license: mit language: - en size_categories: - 10K<n<100K source_dataset: - original task_categories: - text-classification task_ids: - multi-label-classification dataset_info: features: - name: label dtype: int64 - name: tweet dtype: string splits: - name: train num_bytes: 5045816.7990131285 num_examples: 51070 - name: test num_bytes: 280301.1995065645 num_examples: 2837 - name: validation num_bytes: 280400.0014803066 num_examples: 2838 download_size: 3879287 dataset_size: 5606517.999999999 --- # **Dataset Card for Hate-Offensive Speech** This is the original dataset created by the user [badmatr11x](https://www.huggingface.co/badmatr11x/). The dataset contains annotated tweets classified into three categories: **hate-speech**, **offensive-speech** and **neither**. # **Dataset Structure** The dataset structure is as follows: ``` { "label": { 0: "hate-speech", 1: "offensive-speech", 2: "neither" }, "tweet": <string> } ``` ### **Dataset Instances** Examples from the dataset are as follows: Label-0 (Hate Speech) ``` { "label": 0, "tweet": "@user @user @user we were? maybe you are-but don't you dare demonize innocent infants born with white skin, " } ``` Label-1 (Offensive Speech) ``` { "label": 1, "tweet": "...and I'm goin back to school.. only for the hoes and a class or two" } ``` Label-2 (Neither) ``` { "label": 2, "tweet": "@user @user are you guys going to take forever to bring the new gmc?" } ``` # **Data Fields** - `label`: an int64 value - `tweet`: a string # **Data Splits** - The dataset is split into three parts: train, validation and test. - The training split contains 90% of the tweets, validation contains 5%, and the remaining 5% is assigned to the test split.
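A minimal loading sketch, using the label mapping and splits documented above:

```python
from collections import Counter
from datasets import load_dataset

ds = load_dataset("badmatr11x/hate-offensive-speech")
label_names = {0: "hate-speech", 1: "offensive-speech", 2: "neither"}

example = ds["train"][0]
print(label_names[example["label"]], "->", example["tweet"])

# Rough class balance of the training split.
print(Counter(ds["train"]["label"]))
```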
neurofusion/eeg-restingstate
--- language: - en tags: - neuro - eeg - powerspectra - focus - calm - longitudinal data license: apache-2.0 --- # Dataset Card for EEG Resting State ## Dataset Description - **Homepage:** [https://usefusion.app](https://usefusion.app) - **Repository:** [https://github.com/NEUROFUSIONInc](https://github.com/NEUROFUSIONInc) - **Point of Contact:** ore@usefusion.app ### Dataset Summary This is a dataset of EEG data & derived metrics recorded on the [Fusion](https://usefusion.app) platform from a single participant over the course of a week. Task: eyes closed for 10 minutes, at least twice a day. The participant also gave a short summary at the start of every recording; see [events.csv](./events.csv). Device: [Neurosity Crown](https://neurosity.co) - 8 channels [CP3, C3, F5, PO3, PO4, F6, C4, CP4] ## Dataset Structure ### Data Instances All datasets are time series with a unixTimestamp column, generated using the [Neurosity brainwaves API](https://docs.neurosity.co/docs/api/brainwaves): - rawBrainwaves: voltage readings across EEG channels - signalQuality: standard deviation values & label (great, good, poor, noContact) per channel - powerByBand: computed EEG power per channel (e.g. CP3_delta, CP3_theta, CP3_alpha, CP3_beta, CP3_gamma) - focus: prediction of user [focus based on gamma waves](https://docs.neurosity.co/docs/api/focus). - calm: prediction of user [calm based on alpha waves](https://docs.neurosity.co/docs/api/calm). ### Data Splits Each dataset file is suffixed with `_unixTimestamp`, which represents the time of recording. ## Additional Information ### Dataset Curators [NEUROFUSION Research Inc.](https://usefusion.app)
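As a sketch of how the power-by-band files can be analysed with pandas: the file name below is illustrative, and the column names such as `CP3_alpha` are assumed from the description above.

```python
import pandas as pd

# Illustrative file name; real files carry the recording's unix timestamp as a suffix.
df = pd.read_csv("powerByBand_1684000000.csv")

channels = ["CP3", "C3", "F5", "PO3", "PO4", "F6", "C4", "CP4"]
alpha_cols = [f"{ch}_alpha" for ch in channels]

# Mean alpha power per channel over the eyes-closed recording.
print(df[alpha_cols].mean())
```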
pittawat/letter_recognition
--- dataset_info: features: - name: image dtype: image - name: label dtype: class_label: names: '0': A '1': B '2': C '3': D '4': E '5': F '6': G '7': H '8': I '9': J '10': K '11': L '12': M '13': 'N' '14': O '15': P '16': Q '17': R '18': S '19': T '20': U '21': V '22': W '23': X '24': 'Y' '25': Z splits: - name: train num_bytes: 22453522 num_examples: 26000 - name: test num_bytes: 2244964.8 num_examples: 2600 download_size: 8149945 dataset_size: 24698486.8 task_categories: - image-classification language: - en size_categories: - 1K<n<10K --- # Dataset Card for "letter_recognition" Images in this dataset was generated using the script defined below. The original dataset in CSV format and more information of the original dataset is available at [A-Z Handwritten Alphabets in .csv format](https://www.kaggle.com/datasets/sachinpatel21/az-handwritten-alphabets-in-csv-format). ```python import os import pandas as pd import matplotlib.pyplot as plt CHARACTER_COUNT = 26 data = pd.read_csv('./A_Z Handwritten Data.csv') mapping = {str(i): chr(i+65) for i in range(26)} def generate_dataset(folder, end, start=0): if not os.path.exists(folder): os.makedirs(folder) print(f"The folder '{folder}' has been created successfully!") else: print(f"The folder '{folder}' already exists.") for i in range(CHARACTER_COUNT): dd = data[data['0']==i] for j in range(start, end): ddd = dd.iloc[j] x = ddd[1:].values x = x.reshape((28, 28)) plt.axis('off') plt.imsave(f'{folder}/{mapping[str(i)]}_{j}.jpg', x, cmap='binary') generate_dataset('./train', 1000) generate_dataset('./test', 1100, 1000) ```
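Loading the generated splits and mapping label ids back to letters is then straightforward (a minimal sketch):

```python
from datasets import load_dataset

ds = load_dataset("pittawat/letter_recognition")
label_feature = ds["train"].features["label"]

example = ds["train"][0]
print(label_feature.int2str(example["label"]))  # e.g. "A"
example["image"].show()  # 28x28 letter image (PIL)
```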
VISION-Workshop/VISION-Datasets
--- license: cc-by-nc-4.0 task_categories: - object-detection tags: - Defect Detection - Anomaly Detection - Instance Segmentation pretty_name: VISION Datasets size_categories: - 1K<n<10K extra_gated_fields: Name: text Affiliation: text Email: text I confirm my understanding and acceptance of the license and disclaimer presented to me: checkbox --- <!-- Thank you for your interest in the VISION Datasets! --> # Dataset Card for VISION Datasets ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Dataset Information](#dataset-information) - [Datasets Overview](#datasets-overview) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Building Dataset Splits](#building-dataset-splits) - [Additional Information](#additional-information) - [License](#license) - [Disclaimer](#disclaimer) - [Citation](#citation) ## Dataset Description - **Homepage:** [VISION homepage](https://vision-based-industrial-inspection.github.io/cvpr-2023/) - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** [VISION email](mailto:vision.based.inspection+datasets@gmail.com) ### Dataset Summary The **VISION Datasets** are a collection of 14 industrial inspection datasets, designed to explore the unique challenges of vision-based industrial inspection. These datasets are carefully curated from [Roboflow](https://roboflow.com) and cover a wide range of manufacturing processes, materials, and industries. To further enable precise defect segmentation, we annotate each dataset with polygon labels based on the provided bounding box labels. ### Supported Tasks and Leaderboards We currently host two prized challenges on the VISION Datasets: - The VISION [Track 1 Challenge](https://bit.ly/VISION_Track_1) aims to evaluate solutions that can effectively learn with limited labeled data in combination with unlabeled data across diverse images from different industries and contexts. - The VISION [Track 2 Challenge](https://bit.ly/VISION_Track_2) aims to challenge algorithmic solutions to generate synthetic data that will help improve model performance given only limited labeled data. Please check out our [workshop website](https://vision-based-industrial-inspection.github.io/cvpr-2023/) and competition pages for further details. ## Dataset Information ### Datasets Overview The VISION Datasets consist of the following 14 individual datasets: - Cable - Capacitor - Casting - Console - Cylinder - Electronics - Groove - Hemisphere - Lens - PCB_1 - PCB_2 - Ring - Screw - Wood ### Data Splits Each dataset contains three folders: train, val, and inference. The train and val folders contain the training and validation data, respectively. The inference folder contains both the testing data and the unused data for generating submissions to our evaluation platform. The _annotations.coco.json files contain the [COCO format](https://cocodataset.org/#format-data) annotations for each dataset. We will release more information on the testing data as the competitions conclude. Each dataset has the following structure: ```yaml ├── dataset_name/ │ ├── train/ │ │ ├── _annotations.coco.json # COCO format annotation │ │ ├── 000001.png # Images │ │ ├── 000002.png │ │ ├── ... │ ├── val/ │ │ ├── _annotations.coco.json # COCO format annotation │ │ ├── xxxxxx.png # Images │ │ ├── ... 
│ ├── inference/ │ │ ├── _annotations.coco.json # COCO format annotation with unlabeled image list only │ │ ├── xxxxxx.png # Images │ │ ├── ... ``` ## Dataset Creation ### Curation Rationale Our primary goal is to encourage further alignment between academic research and production practices in vision-based industrial inspection. Due to both the consideration to remain faithful to naturally existing label challenges and the difficulty in distinguishing between unintentional labeling oversight and domain-specific judgments without the manufacturers' specification sheets, we refrain from modifying original defect decisions. To enable precise defect detection even with existing label limitations, we provide refined segmentation masks for each defect indicated by the original bounding boxes. ### Building Dataset Splits To ensure the benchmark can faithfully reflect the performance of algorithms, we need to minimize leakage across train, validation, and testing data. Due to the crowd-sourced nature, the original dataset splits are not always guaranteed to be free of leakage. As a result, we design a process to resplit the datasets with specific considerations for industrial defect detection. Given distinct characteristics of defect detection datasets, including but not limited to: - Stark contrast between large image size and small defect size - Highly aligned non-defective images may seem to be duplicates, but are necessary to represent natural distribution and variation to properly assess the false detection rate. Naively deduping with image-level embedding or hash would easily drown out small defects and regard distinct non-defective images as duplicates. Therefore, we first only deduplicate images with identical byte contents and set the images without defect annotation aside. For images with defect annotations, we want to reduce leakage at the defect level. We train a self-supervised similarity model on the defect regions and model the similarity between two images as the maximum pairwise similarity between the defects on each image. Finally, we perform connected component analysis on the image similarity graph and randomly assign connected components to dataset splits in a stratified manner. In order to discourage manual exploitation during the data competition, the discarded images are provided alongside the test split data as the inference data for participants to generate their submissions. However, the testing performance is evaluated exclusively based on the test split data. Further details will be provided in a paper to be released soon. ## Additional Information ### License The provided polygon annotations are licensed under [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/) License. All the original dataset assets are under the original dataset licenses. ### Disclaimer While we believe the terms of the original datasets permit our use and publication herein, we do not make any representations as to the license terms of the original dataset. Please follow the license terms of such datasets if you would like to use them. ### Citation If you apply this dataset to any project and research, please cite our repo: ``` @article{vision-datasets, title = {VISION Datasets: A Benchmark for Vision-based InduStrial InspectiON}, author = {Haoping Bai, Shancong Mou, Tatiana Likhomanenko, Ramazan Gokberk Cinbis, Oncel Tuzel, Ping Huang, Jiulong Shan, Jianjun Shi, Meng Cao}, journal = {arXiv preprint arXiv:2306.07890}, year = {2023}, } ```
teelinsan/camoscio
--- license: openrail task_categories: - conversational language: - it tags: - llama - instruction-tuning size_categories: - 10K<n<100K --- # Camoscio instruction-tuning dataset This repository contains the dataset used to train [Camoscio](https://huggingface.co/teelinsan/camoscio-7b-llama). The dataset is an Italian translation of the [Stanford Alpaca dataset](https://github.com/tatsu-lab/stanford_alpaca), produced with ChatGPT. Please refer to the [Camoscio repo](https://github.com/teelinsan/camoscio) for more information.
zeusfsx/ukrainian-stackexchange
--- license: cc-by-sa-4.0 task_categories: - question-answering language: - uk size_categories: - 1K<n<10K --- # Ukrainian StackExchange Dataset This repository contains a dataset collected from the Ukrainian StackExchange website. The parsed date is 02/04/2023. The dataset is in JSON format and includes text data parsed from the website https://ukrainian.stackexchange.com/. ## Dataset Description The Ukrainian StackExchange Dataset is a rich source of text data for tasks related to natural language processing, machine learning, and data mining in the Ukrainian language. The dataset contains information such as questions, answers, comments, and associated metadata from the website. The data has been collected from the following sections of the Ukrainian StackExchange website: Questions Answers Tags Votes ## Structure The dataset is structured as follows: - `question`: Contains information about the question - `id`: Unique identifier for the question - `title`: Title of the question - `text`: Text content of the question - `upvoteCount`: The net upvote count of the question - `tags`: List of tags associated with the question - `answers`: A list of answers for the question, each containing: - `id`: Unique identifier for the answer - `text`: Text content of the answer - `upvoteCount`: The net upvote count of the answer - `isAccepted`: A boolean value indicating whether the answer is accepted or not - `url`: The URL of the question on the Ukrainian StackExchange website ## Usage To use this dataset in your project, you can load it from the Hugging Face Hub: ```python from datasets import load_dataset dataset = load_dataset("zeusfsx/ukrainian-stackexchange") ``` ## License This dataset is provided under the Creative Commons Attribution-ShareAlike 4.0 International (CC BY-SA 4.0) License.
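Building (question, accepted answer) pairs from the nested structure described above could look like this; the sketch assumes the default train split and the field names listed in the structure section:

```python
from datasets import load_dataset

ds = load_dataset("zeusfsx/ukrainian-stackexchange", split="train")

qa_pairs = []
for row in ds:
    accepted = [a for a in row["answers"] if a["isAccepted"]]
    if accepted:
        qa_pairs.append((row["question"]["title"], accepted[0]["text"]))

print(len(qa_pairs))
print(qa_pairs[0][0])
```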
arbml/tashkeela
--- dataset_info: features: - name: diacratized dtype: string - name: text dtype: string splits: - name: train num_bytes: 1419585102 num_examples: 979982 - name: test num_bytes: 78869542 num_examples: 54444 - name: dev num_bytes: 78863352 num_examples: 54443 download_size: 747280703 dataset_size: 1577317996 --- # Dataset Card for "tashkeela" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
lirus18/deepfashion
--- dataset_info: features: - name: image dtype: image - name: openpose dtype: image - name: cloth dtype: image - name: caption dtype: string splits: - name: train num_bytes: 3491233280.847 num_examples: 13679 download_size: 3401862421 dataset_size: 3491233280.847 --- # Dataset Card for "deepfashion" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
bigcode/ta-prompt
--- license: apache-2.0 language: - code programming_language: - Java - JavaScript - Python --- # Dataset summary This repository is dedicated to prompts used to perform in-context learning with [starcoder](https://huggingface.co/bigcode/starcoder). The model is an autoregressive language model trained on both code and natural language text. It can be turned into an AI-powered technical assistant by prepending conversations to its 8192-token context window. # Format The prompt is a .txt file which contains multiple conversations between a human and the assistant. Here is the format: ``` ----- Human: <instruction> Assistant: <answer> ----- Human: <instruction> Assistant: <answer> Human: <instruction> Assistant: <answer> . . . ----- ``` # Use cases We want the technical assistant to cover a diverse set of use cases: - **Code-to-text**: - `What is the purpose of the following code?<code>` - `What is the bug in the following code?<code>` - **Text-to-code**: - `Write/Design/Implement a function to <task>` - **Code-to-code**: - `Translate this <code> from <programming language> to <programming language>.` - **Text-to-text**: - `What is <technical concept>` - **General-purpose Q&A** - `What are you?` - `What is your purpose?` # Scope of the work Since the model is designed for coding tasks, the user should not expect it to output relevant answers when prompted with a general-purpose question. When it comes to coding requests, the output of the model should be post-processed before being tested.
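In practice this means downloading the prompt file, appending a new `Human:` turn, and cutting the generation at the next `-----` separator. A rough sketch (the prompt filename is an assumption, so check the repository file list; running StarCoder also requires accepting its license and suitable hardware):

```python
from huggingface_hub import hf_hub_download
from transformers import pipeline

# Assumed filename; list the files in bigcode/ta-prompt to pick the prompt version you want.
prompt_path = hf_hub_download(repo_id="bigcode/ta-prompt", filename="TA_prompt_v1.txt", repo_type="dataset")
ta_prompt = open(prompt_path, encoding="utf-8").read()

question = "Write a Python function that checks whether a string is a palindrome."
prompt = f"{ta_prompt}\n-----\n\nHuman: {question}\n\nAssistant:"

generator = pipeline("text-generation", model="bigcode/starcoder")
completion = generator(prompt, max_new_tokens=256, return_full_text=False)[0]["generated_text"]
answer = completion.split("-----")[0]  # post-process: stop at the next conversation separator
print(answer)
```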
zetavg/CC-100-zh-Hant
--- dataset_info: features: - name: line dtype: string splits: - name: train num_bytes: 18198166302 num_examples: 85165683 download_size: 13296002208 dataset_size: 18198166302 task_categories: - text-generation language: - zh --- # CC-100 zh-Hant (Traditional Chinese) From https://data.statmt.org/cc-100/, only zh-Hant - Chinese (Traditional). Broken into lines, with each line as a row. Estimated to have around 4B tokens when tokenized with the [`bigscience/bloom`](https://huggingface.co/bigscience/bloom) tokenizer. There's another version that the text is split by paragraphs instead of lines: [`zetavg/CC-100-zh-Hant-merged`](https://huggingface.co/datasets/zetavg/CC-100-zh-Hant-merged). ## References Please cite the following if you found the resources in the CC-100 corpus useful. * **Unsupervised Cross-lingual Representation Learning at Scale**, *Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, Veselin Stoyanov*, Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (ACL), p. 8440-8451, July 2020, [pdf](https://www.aclweb.org/anthology/2020.acl-main.747.pdf), [bib](https://www.aclweb.org/anthology/2020.acl-main.747.bib) . * **CCNet: Extracting High Quality Monolingual Datasets from Web Crawl Data**, *Guillaume Wenzek, Marie-Anne Lachaux, Alexis Conneau, Vishrav Chaudhary, Francisco Guzmán, Armand Joulin, Edouard Grave*, Proceedings of the 12th Language Resources and Evaluation Conference (LREC), p. 4003-4012, May 2020, [pdf](https://www.aclweb.org/anthology/2020.lrec-1.494.pdf), [bib](https://www.aclweb.org/anthology/2020.lrec-1.494.bib).
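The token estimate can be sanity-checked on a streamed sample without downloading the full split (a sketch; the total row count comes from the metadata above):

```python
from itertools import islice

from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bigscience/bloom")
stream = load_dataset("zetavg/CC-100-zh-Hant", split="train", streaming=True)

sample = list(islice(stream, 10_000))
sample_tokens = sum(len(tokenizer.encode(row["line"])) for row in sample)
estimated_total = sample_tokens / len(sample) * 85_165_683  # total number of rows
print(f"~{estimated_total / 1e9:.1f}B tokens (rough estimate)")
```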
lucasmccabe-lmi/sql-create-context_alpaca_style
--- dataset_info: features: - name: instruction dtype: string - name: input dtype: string - name: output dtype: string splits: - name: train num_bytes: 28203562.0 num_examples: 78577 download_size: 9312899 dataset_size: 28203562.0 --- # Dataset Card for "sql-create-context_alpaca_style" We provide a minor modification of the [sql-create-context](https://huggingface.co/datasets/b-mc2/sql-create-context) dataset. In particular, we 1) prepend each instruction with the phrase, "Write a SQL query that answers the following question: " and 2) prepend each context with the phrase, "The relevant table was constructed using the following SQL CREATE TABLE statement: ". ## Numbers: Prompts: 78577 Tokens: 6438971 using the EleutherAI/gpt-neox-20b tokenizer (counting instruction+input+output)
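The token count above can be reproduced along these lines (a sketch; column names follow the metadata):

```python
from datasets import load_dataset
from transformers import AutoTokenizer

ds = load_dataset("lucasmccabe-lmi/sql-create-context_alpaca_style", split="train")
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/gpt-neox-20b")

total_tokens = sum(
    len(tokenizer.tokenize(row["instruction"] + row["input"] + row["output"])) for row in ds
)
print(total_tokens)  # reported above as 6,438,971
```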
silk-road/MMC4-130k-chinese-image
--- license: cc-by-4.0 task_categories: - text-to-image - image-to-text language: - zh - en size_categories: - 100K<n<1M --- MMC4-130k-chinese is a dataset built by sampling roughly 130k image-text pairs with high similarity scores from MMC4. The Chinese version translates all of the captions. We will gradually release more datasets to Hugging Face, including: - [ ] Chinese translation of COCO Caption - [ ] Chinese translation of CoQA - [ ] Embedding data for CNewSum - [ ] Augmented open QA data - [x] Chinese translation of WizardLM If you are also preparing these datasets, feel free to contact us so we can avoid duplicating the cost. # Luotuo (骆驼): Open-Source Chinese Large Language Models [https://github.com/LC1332/Luotuo-Chinese-LLM](https://github.com/LC1332/Luotuo-Chinese-LLM) The Luotuo (骆驼) project is an open-source Chinese large language model project initiated by [Ziang Leng (冷子昂)](https://blairleng.github.io) @ SenseTime, Qiyuan Chen (陈启源) @ Central China Normal University, and Cheng Li (李鲁鲁) @ SenseTime, and it includes a series of language models. (Note: [Qiyuan Chen](https://qiyuan-chen.github.io/) is looking for a graduate advisor for 2024 recommendation-based admission; feel free to get in touch.) The Luotuo project is **not** an official SenseTime product. ## Citation Please cite the repo if you use the data or code in this repo. ``` @misc{alpaca, author={Ziang Leng, Qiyuan Chen and Cheng Li}, title = {Luotuo: An Instruction-following Chinese Language model, LoRA tuning on LLaMA}, year = {2023}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\url{https://github.com/LC1332/Luotuo-Chinese-LLM}}, } ```
tiange/Cap3D
--- license: odc-by viewer: false task_categories: - text-to-3d --- ## Dataset Description - **Paper:** [Scalable 3D Captioning with Pretrained Models](https://arxiv.org/abs/2306.07279) - **Paper:** [View Selection for 3D Captioning via Diffusion Ranking](https://arxiv.org/abs/2404.07984) - **Repository**: [Github_Cap3D](https://github.com/crockwell/Cap3D) - **Repository**: [Github_DiffuRank](https://github.com/tiangeluo/DiffuRank) - **Project**: [Project](https://cap3d-um.github.io/) This repository hosts data for [View Selection for 3D Captioning via Diffusion Ranking]() and [Scalable 3D Captioning with Pretrained Models](https://cap3d-um.github.io/), including descriptive **captions** for 3D objects in [Objaverse](https://arxiv.org/abs/2212.08051), [Objaverse-XL](https://arxiv.org/pdf/2307.05663.pdf), and [ABO](https://arxiv.org/abs/2110.06199). This repo also includes **point clouds** and **rendered images with camera, depth, and MatAlpha information** of Objaverse objects, as well as their Shap-E latent codes. All the captions and data provided by our papers are released under ODC-By 1.0 license. ## Usage Please download and unzip files from [**Page**](https://huggingface.co/datasets/tiange/Cap3D/tree/main) according to your usage. Below is a table listing fiels descriptions, followed by example Python scripts for data loading. | Filename | Description | | -------------------------------------- | ------------------------------------------------------------ | |**Cap3D_automated_Objaverse_full.csv** | By integrating text descriptions initially generated by [Cap3D](https://arxiv.org/abs/2306.07279) and subsequently refined by [DiffuRank](https://arxiv.org/abs/2404.07984), we have produced a total of **1,002,422** 3D-caption pairs. Out of the total, 780,790 pairs have been contributed to the [Objaverse](https://arxiv.org/abs/2212.08051) dataset, with the balance for the [Objaverse-XL](https://arxiv.org/pdf/2307.05663.pdf) dataset. | Cap3D_automated_Objaverse_no3Dword.csv | Combine the text descriptions generated by [Cap3D](https://arxiv.org/abs/2306.07279), resulting in **661,577** 3D-caption pairs for the Objaverse dataset. All captions and related 3D objects here have commercial-friendly licenses (including CC-BY 4.0, CC-BY-SA 4.0, and CC0 1.0). We also filter out potential ethical-issues objects (e.g., identifiable face scans, NSFW, etc). The original captions are densely packed with "3D-model" terminology, potentially limiting their utility in applications like embodied AI. As such, we've created a version with minimized 3D-related words. For example, "A 3D model of a black and yellow samurai sword." ➡️ "a black and yellow samurai sword." | **PointCloud_zips** | **1,002,422** PointClouds (16,384 colorful points) extracted from Objaverse objects. Saved as `.ply` file. | | PointCloud_pt_zips | PointClouds saved as torch.Tensor `.pt` files, providing faster loading speed than `.ply`. | | **RenderedImage_perobj_zips** | Rendered images for Objaverse objects. Once unzip it will have multiple zip files which consists of 20 rendering images along with depth, camera (intrinsic, extrinsic), and MatAlpha information.| | misc | Including miscellaneous files such as ABO captions, finetuned models, shapE latent codes, and etc. | ``` # load our captions import pandas as pd captions = pd.read_csv('Cap3D_automated_Objaverse_full.csv', header=None) ## captions: ## 0 1 ## 0 ed51a51909ee46c780db3a85e821feb2 Matte green rifle with a long barrel, stock, a... 
## 1 9110b606f6c547b2980fcb3c8c4b6a1c Rustic single-story building with a weathered ... ## 2 80d9caaa1fa04502af666135196456e1 a pair of purple and black swords with white h... ## 3 28d43a218cd8466a8c1f82b29b71e314 3D model of a cluttered outdoor scene with veg... ## 4 75582285fab442a2ba31733f9c8fae66 Floating terrain piece with grassy landscape a... ## ... ... ... ## 1002417 3623e74f34c1c3c523af6b2bb8ffcbe2d2dce897ef61b9... Abstract 3D composition with human figures and... ## 1002418 64e9f7b7a1fc4c4ec56ed8b5917dfd610930043ac5e15f... 3D object with a rough, irregular pink surface... ## 1002419 fcd089d6a237fee21dfd5f0d6d9b74b2fd1150cdc61c7f... Bright pink abstract 3D model of a building wi... ## 1002420 f812dc980050f2d5f4b37df2a8620372f810dd6456a5f2... Monochromatic gray 3D model of a stylized huma... ## 1002421 77c09500b4d8e4b881e1ce6929d56c23658b87173c0996... Modular futuristic spacecraft with red and ora... ## if u want to obtain the caption for specific UID caption = captions[captions[0] == '80d9caaa1fa04502af666135196456e1'][1].values[0] # load point clouds (unzip https://huggingface.co/datasets/tiange/Cap3D/tree/main/PointCloud_pt_zips) import torch pts = torch.load('Cap3D_pcs_pt/80d9caaa1fa04502af666135196456e1.pt') ## pts.shape == torch.Size([6, 16384]) ``` If you have any questions, please contact [Tiange](mailto:tiange.cs@gmail.com) or [Chris](mailto:cnris@umich.edu). ## Citation Information If you find our data or code useful, please consider citing: ```bibtex @article{luo2023scalable, title={Scalable 3D Captioning with Pretrained Models}, author={Luo, Tiange and Rockwell, Chris and Lee, Honglak and Johnson, Justin}, journal={arXiv preprint arXiv:2306.07279}, year={2023} } ``` ```bibtex @article{luo2024view, title={View Selection for 3D Captioning via Diffusion Ranking}, author={Luo, Tiange and Johnson, Justin and Lee, Honglak}, journal={arXiv preprint arXiv:2404.07984}, year={2024} } ``` Please cite ***Objaverse*** and ***ABO*** paper accordingly, if you use related data. ``` @inproceedings{deitke2023objaverse, title={Objaverse: A universe of annotated 3d objects}, author={Deitke, Matt and Schwenk, Dustin and Salvador, Jordi and Weihs, Luca and Michel, Oscar and VanderBilt, Eli and Schmidt, Ludwig and Ehsani, Kiana and Kembhavi, Aniruddha and Farhadi, Ali}, booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition}, pages={13142--13153}, year={2023} } ``` ``` @inproceedings{collins2022abo, title={Abo: Dataset and benchmarks for real-world 3d object understanding}, author={Collins, Jasmine and Goel, Shubham and Deng, Kenan and Luthra, Achleshwar and Xu, Leon and Gundogdu, Erhan and Zhang, Xi and Vicente, Tomas F Yago and Dideriksen, Thomas and Arora, Himanshu and others}, booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition}, pages={21126--21136}, year={2022} } ```
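For the `.ply` point clouds, something along these lines works once the corresponding zip is extracted (a sketch using Open3D; the extraction folder name is an assumption and the UID is the same illustrative one as above):

```python
import numpy as np
import open3d as o3d

# Assumed extraction folder; adjust to wherever the PointCloud zips were unpacked.
pcd = o3d.io.read_point_cloud("Cap3D_pcs/80d9caaa1fa04502af666135196456e1.ply")
points = np.asarray(pcd.points)  # (16384, 3) xyz coordinates
colors = np.asarray(pcd.colors)  # (16384, 3) RGB values in [0, 1]
print(points.shape, colors.shape)
```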
Docugami/dfm-csl-small-benchmark
--- license: mit language: - en size_categories: - 1K<n<10K source_datasets: - original task_categories: - text2text-generation - text-generation dataset_info: features: - name: Text dtype: string - name: Small Chunk dtype: string - name: Ground Truth dtype: string - name: docugami/dfm-cs-small dtype: string splits: - name: eval num_bytes: 240040 num_examples: 1099 - name: train num_bytes: 20906 num_examples: 100 download_size: 143986 dataset_size: 260946 tags: - docugami - dfm-csl - xml-knowledge-graphs pretty_name: Contextual Semantic Labels (Small) --- # Contextual Semantic Labels (Small) Benchmark Dataset Please see [https://github.com/docugami/DFM-benchmarks](https://github.com/docugami/DFM-benchmarks) for more details, eval code, and current scores for different models. # Using the Dataset Please refer to the standard Hugging Face `datasets` documentation to use this dataset: [https://huggingface.co/docs/datasets/index](https://huggingface.co/docs/datasets/index) The [explore.ipynb](./explore.ipynb) notebook has some reference code.
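A minimal loading sketch, using the split and column names from the metadata above:

```python
from datasets import load_dataset

ds = load_dataset("Docugami/dfm-csl-small-benchmark")
print(ds)  # splits: eval (1,099 rows) and train (100 rows)

row = ds["eval"][0]
print(row["Small Chunk"])
print(row["Ground Truth"])
```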
bavest/fin-llama-dataset
--- license: bigscience-openrail-m ---
LinkSoul/instruction_merge_set
--- dataset_info: features: - name: id dtype: string - name: conversations list: - name: from dtype: string - name: value dtype: string - name: instruction dtype: string splits: - name: train num_bytes: 13444870155 num_examples: 10077297 download_size: 3542585235 dataset_size: 13444870155 --- # Dataset Card for "instruction_merge_set" ## 本数据集由以下数据集构成: | 数据(id in the merged set) | Hugging face 地址 | notes | | --- | --- | --- | | OIG (unified-任务名称) 15k | https://huggingface.co/datasets/laion/OIG | Open Instruction Generalist Dataset | | Dolly databricks-dolly-15k | https://huggingface.co/datasets/databricks/databricks-dolly-15k | an open-source dataset of instruction-following records generated by thousands of Databricks employees in several of the behavioral categories | | UltraChat | https://huggingface.co/datasets/stingning/ultrachat | multi-round dialogue data | | Camel | https://huggingface.co/datasets/camel-ai/ai_society | 25K conversations between two gpt-3.5-turbo agents. | | camel (同上) | https://github.com/camel-ai/camel | | | ChatDoctor icliniq-15k HealthCareMagic-200k | https://github.com/Kent0n-Li/ChatDoctor | 200k real conversations between patients and doctors from HealthCareMagic.com 15k real conversations between patients and doctors from iciniq-10k | | Dolly | https://github.com/databrickslabs/dolly | | | GPT4ALL | https://github.com/nomic-ai/gpt4all | | | GPT-4-LLM comparision_data_b alpaca_gpt4_data_zh comparision_data_a alpaca_gpt4_data 5k | https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM | English Instruction-Following Data generated by GPT-4 using Alpaca prompts for fine-tuning LLMs. Chinese Instruction-Following Data generated by GPT-4 using Chinese prompts translated from Alpaca by ChatGPT. Comparison Data ranked by GPT-4 to train reward models. Answers on Unnatural Instructions Data from GPT-4 to quantify the gap between GPT-4 and instruction-tuned models at scale. | | GuanacoDataset guanaco_chat_all-utf8 guanaco_non_chat-utf8 paper_answers-utf8 general_ans-utf8 general_questions-utf8 paper_questions-utf8 30k | https://huggingface.co/datasets/JosephusCheung/GuanacoDataset | The dataset for the Guanaco model is designed to enhance the multilingual capabilities and address various linguistic tasks. It builds upon the 175 tasks from the Alpaca model by providing rewrites of seed tasks in different languages and adding new tasks specifically designed for English grammar analysis, natural language understanding, cross-lingual self-awareness, and explicit content recognition. The Paper/General-QA dataset is a collection of questions and answers constructed for AI-generated papers or general texts in English, Chinese, Japanese, and German. | | HC3 ALL | https://huggingface.co/datasets/Hello-SimpleAI/HC3 | human-ChatGPT comparison datasets | | instinwild instinwild_en instinwild_ch 5k | https://huggingface.co/datasets/QingyiSi/Alpaca-CoT/tree/main/instinwild | Instruction-Finetuning Dataset Collection (Alpaca-CoT) | | Instruct-to-Code | https://huggingface.co/datasets/Graverman/Instruct-to-Code | | | ShareGPT90K sg_90k_part2 sg_90k_part1 | https://huggingface.co/datasets/RyokoAI/ShareGPT52K | 90,000 conversations scraped via the ShareGPT API before it was shut down. These conversations include both user prompts and responses from OpenAI's ChatGPT. 
| | UltraChat ultrachat_material_release_230412 ultrachat_release_230407 | https://github.com/thunlp/UltraChat | | | wealth-alpaca-lora final_dataset_clean 4.3k | https://www.kaggle.com/code/gbhacker23/wealth-alpaca-lora | combination of Stanford's Alpaca (https://github.com/tatsu-lab/stanford_alpaca) and FiQA (https://sites.google.com/view/fiqa/) with another 1.3k pairs custom generated using GPT3.5, 有instruction | | Alpaca alpaca_data 5k | https://github.com/tatsu-lab/stanford_alpaca | instruct-tuning | | Baize alpaca_chat_data medical_chat_data quora_chat_data stack_overflow_chat_data | https://github.com/project-baize/baize-chatbot | instruction-following data we used for fine-tuning the Alpaca model. | | botbots Reasoning flight_bookings medical_appointments travel_agency restaurants_mixed real_estate car_dealership home_maintenance, job_interview 'insurance_consultation': 16, 'hotels': 400, 'tech_support': 32, 'car_rentals': 32, 'pet_care': 48, 'restaurants': 200, 'legal_consultation': 16, 'event_tickets': 240, 'fitness_personal_training': 16, 'scientific_problems': 100 | https://github.com/radi-cho/botbots | A dataset consisting of dialogues between two instances of ChatGPT (gpt-3.5-turbo). The CLI commands and dialogue prompts themselves have been written by GPT-4. The dataset covers a wide range of contexts (questions and answers, arguing and reasoning, task-oriented dialogues) and downstream tasks (e.g., hotel reservations, medical advice). | | ChatAlpaca chatalpaca_data_10k | https://github.com/cascip/ChatAlpaca | a chat dataset, multi-turn instruction-following conversations. | | DERA train | https://github.com/curai/curai-research/tree/main/DERA | The following repository contains the open-ended question-answering version of MedQA. | | GPTeacher Toolformer-dedupe-only-dataset roleplay-simple-deduped-roleplay-dataset gpt4-instruct-dedupe-only-dataset | https://github.com/teknium1/GPTeacher | A collection of modular datasets generated by GPT-4, General-Instruct - Roleplay-Instruct - Code-Instruct - and Toolformer | | OpenAGI | https://github.com/agiresearch/OpenAGI | | | presto | https://github.com/google-research-datasets/presto | A Multilingual Dataset for Parsing Realistic Task-Oriented Dialogs |
kaist-ai/selfee-train
--- license: cc-by-nc-4.0 dataset_info: features: - name: instruction dtype: string - name: outputs list: - name: feedback dtype: string - name: output dtype: string - name: dataset dtype: string - name: output dtype: string - name: iteration_truncated dtype: bool - name: iteration dtype: int64 - name: input dtype: string splits: - name: train num_bytes: 511377846 num_examples: 178331 download_size: 230123988 dataset_size: 511377846 ---
DavidVivancos/MindBigData2023_MNIST-8B
--- license: odbl --- ## Dataset Summary MindBigData 2023 MNIST-8B is, to date (June 1st, 2023), the largest open dataset of brain signals created for machine learning, based on EEG signals from a single subject captured using a custom 128-channel device, replicating the full 70,000 digits from Yann LeCun et al.'s MNIST dataset. The brain signals were captured while the subject was watching the pixels of the original digits one by one on a screen and listening at the same time to the spoken number (0 to 9) corresponding to the real label. Supporting dataset for the paper https://arxiv.org/abs/2306.00455 The dataset contains 140,000 records from 128 EEG channels, each of 2 seconds, recorded at 250 Hz, for a total of 17,920,000 brain signals and 8,960,000,000 data points. It consists of 2 main CSV data files: - "train.csv" 45 GB, header + 120,000 rows, 64,791 columns - "test.csv" 7.52 GB, header + 20,000 rows, 64,791 columns 10 audio files in a folder named "audiolabels": "0.wav", "1.wav", ..., "9.wav" And 1 CSV file with the 3D coordinates of the EEG electrodes: "3Dcoords.csv" 4.27 KB, header + 130 rows, 4 columns > Update July 18th, 2023: as requested, a reduced 2-billion-datapoint version has been released: https://huggingface.co/datasets/DavidVivancos/MindBigData2023_MNIST-2B ## Dataset Structure Review the supporting paper https://arxiv.org/abs/2306.00455 ## Data Fields Review the supporting paper https://arxiv.org/abs/2306.00455 ## Citation ```bibtex @article{MindBigData_2023_MNIST-8B, title={MindBigData 2023 MNIST-8B The 8 billion datapoints Multimodal Dataset of Brain Signals}, author={David Vivancos}, journal={arXiv preprint arXiv:2306.00455}, year={2023} } ```
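Given the file sizes, reading the CSVs in chunks avoids loading 45 GB into memory; the exact column layout is documented in the paper, so this sketch only shows memory-safe iteration:

```python
import pandas as pd

# Stream the training file in manageable chunks instead of reading it all at once.
for chunk in pd.read_csv("train.csv", chunksize=100):
    print(chunk.shape)  # (100, 64791) per the description above
    break
```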
Abzu/RedPajama-Data-1T-arxiv-filtered
--- dataset_info: features: - name: text dtype: string - name: meta dtype: string - name: red_pajama_subset dtype: string splits: - name: train num_bytes: 229340859.5333384 num_examples: 3911 download_size: 104435457 dataset_size: 229340859.5333384 --- # Dataset Card for "RedPajama-Data-1T-arxiv-filtered" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Middletownbooks/joke_training
--- license: mit --- Also recommended for inclusion with this training set is a cleaned-up version of https://huggingface.co/datasets/laion/OIG/blob/main/unified_joke_explanations.jsonl The ~10k jokes in the jokes file started out as a file of jokes from Reddit, and I manually categorized a couple thousand of them. The open question and conversational instructions attempt to integrate jokes into the Databricks Dolly 15k open_qa instruction replies, sometimes slightly modified. The news headline and news article summary joke punchlines were created by an expert punchline writer, who has given permission for their non-exclusive use for this purpose.
seyyedaliayati/solidity-dataset
--- dataset_info: features: - name: hash dtype: string - name: size dtype: int64 - name: ext dtype: string - name: lang dtype: string - name: is_test dtype: bool - name: repo_id dtype: string - name: repo_name dtype: string - name: repo_head dtype: string - name: repo_path dtype: string - name: content_tokens dtype: int64 - name: content_chars dtype: int64 - name: content dtype: string - name: __index_level_0__ dtype: int64 splits: - name: train num_bytes: 5736925269 num_examples: 284112 - name: test num_bytes: 710770657 num_examples: 35514 - name: eval num_bytes: 721961344 num_examples: 35514 download_size: 2050339485 dataset_size: 7169657270 license: cc task_categories: - text-generation - text2text-generation - text-classification language: - en tags: - solidity - test case - smart contract - ethereum pretty_name: Solidity Dataset size_categories: - 100K<n<1M --- # Solidity Dataset ## Dataset Description This dataset is collected from public GitHub repositories written in the Solidity programming language. The list of repositories is available in the [repositories.json](https://huggingface.co/datasets/seyyedaliayati/solidity-dataset/blob/main/repositories.json) file. It contains useful data about smart contracts written in Solidity along with test cases (and unit tests) written to test those smart contracts. ## Dataset Summary The dataset contains [355,540 rows](#data-splits) in total. Each row includes the following features: - `hash` (string): The sha256 hash value of the file content before any pre-processing. - `size` (integer): File size in bytes. - `ext` (string): File extension. - `lang` (string): The name of the programming language the file is written in (Solidity, Python, or JavaScript). - `is_test` (bool): Indicates whether this file is a test case (test file) or the main smart contract code. - `repo_id` (string): GitHub's repository identifier fetched from GitHub's API. - `repo_name` (string): GitHub's repository name. - `repo_head` (string): The head commit of the repository from which the file was fetched. - `repo_path` (string): Relative file path. - `content_tokens` (integer): Number of tokens in the file content. - `content_chars` (integer): Number of characters in the file content. - `content` (string): File content. - `__index_level_0__` (integer): Ignore this field please! ## Supported Tasks and Leaderboards This dataset can be used for tasks related to analyzing smart contracts, test cases in smart contracts, and improving language models on the Solidity language. As of now, there are no specific leaderboards associated with this dataset. ## Languages - The dataset is in the English language (en). - Smart contracts (`is_test=false`) are written in the Solidity programming language. - Test cases (`is_test=true`) are written in Solidity, Python, or JavaScript. ## Data Splits The dataset is split into three splits: - `train`: 284112 rows (80% of the dataset) - `test`: 35514 rows (10% of the dataset) - `eval`: 35514 rows (10% of the dataset) ## Dataset Creation The `content_tokens` field is generated via the [StarCoderBase tokenizer](https://huggingface.co/bigcode/starcoderbase) using the following code snippet: ```python from transformers import AutoTokenizer checkpoint = "bigcode/starcoderbase" tokenizer = AutoTokenizer.from_pretrained(checkpoint) def count_tokens(code: str) -> int: tokens = tokenizer.tokenize(code) return len(tokens) ``` The `is_test` flag is calculated by detecting regex patterns in the file content. More details will be published soon.
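Filtering the corpus with the fields above, for example to keep only Solidity contracts that are not test files, can be done like this (a sketch; it assumes the `lang` values are capitalized as described):

```python
from datasets import load_dataset

ds = load_dataset("seyyedaliayati/solidity-dataset", split="train")
contracts = ds.filter(lambda row: row["lang"] == "Solidity" and not row["is_test"])
print(len(contracts), contracts[0]["repo_name"], contracts[0]["repo_path"])
```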
## License This dataset is released under the [Creative Commons Attribution-NonCommercial 4.0 International (CC BY-NC 4.0) license](https://creativecommons.org/licenses/by-nc/4.0/). ## Citation Please use the following citation when referencing this dataset: ``` @misc {seyyed_ali_ayati_2023, author = { {Seyyed Ali Ayati} }, title = { solidity-dataset (Revision 77e80ad) }, year = 2023, url = { https://huggingface.co/datasets/seyyedaliayati/solidity-dataset }, doi = { 10.57967/hf/0808 }, publisher = { Hugging Face } } ```
BAAI/SVIT
---
extra_gated_heading: Acknowledge license to accept the repository
extra_gated_prompt: >
  The Beijing Academy of Artificial Intelligence (hereinafter referred to as "we" or "BAAI") provides you with an open-source dataset (hereinafter referred to as "dataset") through the SVIT HuggingFace repository (https://huggingface.co/datasets/BAAI/SVIT). You can download the dataset you need and use it for purposes such as learning, research, and business, while abiding by the usage rules of each original dataset. Before you acquire the open-source dataset (including but not limited to accessing, downloading, copying, distributing, using, or any other handling of the dataset), you should read and understand this "SVIT Open-Source Dataset Usage Notice and Disclaimer" (hereinafter referred to as "this statement"). Once you acquire the open-source dataset, regardless of your method of acquisition, your actions will be regarded as acknowledgment of the full content of this statement.

  1. Ownership and Operation Rights

  You should fully understand that the ownership and operation rights of the SVIT HuggingFace repository (including the current and all previous versions) belong to BAAI. BAAI has the final interpretation and decision rights over this platform/tool and the open-source dataset plan. You acknowledge and understand that due to updates and improvements in relevant laws and regulations and the need to fulfill our legal compliance obligations, we reserve the right to update, maintain, or even suspend or permanently terminate the services of this platform/tool from time to time. We will notify you of possible situations mentioned above in a reasonable manner such as through an announcement or email within a reasonable time. You should make corresponding adjustments and arrangements in a timely manner. However, we do not bear any responsibility for any losses caused to you by any of the aforementioned situations.

  2. Claim of Rights to Open-Source Datasets

  For the purpose of facilitating your dataset acquisition and use for learning, research, and business, we have performed necessary steps such as format integration, data cleaning, labeling, categorizing, annotating, and other related processing on the third-party original datasets to form the open-source datasets for this platform/tool's users. You understand and acknowledge that we do not claim the proprietary rights of intellectual property to the open-source datasets. Therefore, we have no obligation to actively recognize and protect the potential intellectual property of the open-source datasets. However, this does not mean that we renounce the personal rights to claim credit, publication, modification, and protection of the integrity of the work (if any) of the open-source datasets. The potential intellectual property and corresponding legal rights of the original datasets belong to the original rights holders. In addition, providing you with open-source datasets that have been reasonably arranged, processed, and handled does not mean that we acknowledge the authenticity, accuracy, or indisputability of the intellectual property and information content of the original datasets. You should filter and carefully discern the open-source datasets you choose to use. You understand and agree that BAAI does not undertake any obligation or warranty responsibility for any defects or flaws in the original datasets you choose to use.

  3. Usage Restrictions for Open-Source Datasets

  Your use of the dataset must not infringe on our or any third party's legal rights and interests (including but not limited to copyrights, patent rights, trademark rights, and other intellectual property and other rights). After obtaining the open-source dataset, you should ensure that your use of the open-source dataset does not exceed the usage rules explicitly stipulated by the rights holders of the original dataset in the form of a public notice or agreement, including the range, purpose, and lawful purposes of the use of the original data. We kindly remind you here that if your use of the open-source dataset exceeds the predetermined range and purpose of the original dataset, you may face the risk of infringing on the legal rights and interests of the rights holders of the original dataset, such as intellectual property, and may bear corresponding legal responsibilities.

  4. Personal Information Protection

  Due to technical limitations and the public welfare nature of the open-source datasets, we cannot guarantee that the open-source datasets do not contain any personal information, and we do not bear any legal responsibility for any personal information that may be involved in the open-source datasets. If the open-source dataset involves personal information, we do not bear any legal responsibility for any personal information processing activities you may be involved in when using the open-source dataset. We kindly remind you here that you should handle personal information in accordance with the provisions of the "Personal Information Protection Law" and other relevant laws and regulations. To protect the legal rights and interests of the information subject and to fulfill possible applicable laws and administrative regulations, if you find content that involves or may involve personal information during the use of the open-source dataset, you should immediately stop using the part of the dataset that involves personal information and contact us as indicated in "6. Complaints and Notices."

  5. Information Content Management

  We do not bear any legal responsibility for any illegal and bad information that may be involved in the open-source dataset. If you find that the open-source dataset involves or may involve any illegal and bad information during your use, you should immediately stop using the part of the dataset that involves illegal and bad information and contact us in a timely manner as indicated in "6. Complaints and Notices."

  6. Complaints and Notices

  If you believe that the open-source dataset has infringed on your legal rights and interests, you can contact us at 010-50955974, and we will handle your claims and complaints in accordance with the law in a timely manner. To handle your claims and complaints, we may need you to provide contact information, infringement proof materials, and identity proof materials. Please note that if you maliciously complain or make false statements, you will bear all legal responsibilities caused thereby (including but not limited to reasonable compensation costs).

  7. Disclaimer

  You understand and agree that due to the nature of the open-source dataset, the dataset may contain data from different sources and contributors, and the authenticity, accuracy, and objectivity of the data may vary, and we cannot make any promises about the availability and reliability of any dataset. In any case, we do not bear any legal responsibility for any risks such as personal information infringement, illegal and bad information dissemination, and intellectual property infringement that may exist in the open-source dataset. In any case, we do not bear any legal responsibility for any loss (including but not limited to direct loss, indirect loss, and loss of potential benefits) that you suffer or that is related to the open-source dataset.

  8. Others

  The open-source dataset is in a constant state of development and change. We may update, adjust the range of the open-source dataset we provide, or suspend, pause, or terminate the open-source dataset service due to business development, third-party cooperation, changes in laws and regulations, and other reasons.
extra_gated_fields:
  Name: text
  Affiliation: text
  Country: text
  I agree to accept the license: checkbox
extra_gated_button_content: Acknowledge license
license: cc-by-4.0
task_categories:
- visual-question-answering
language:
- en
pretty_name: SVIT
size_categories:
- 1M<n<10M
---

# Dataset Card for SVIT

Scale up visual instruction tuning to millions by GPT-4.

## Dataset Description

- **Repository:** https://github.com/BAAI-DCAI/Visual-Instruction-Tuning
- **Paper:** https://arxiv.org/pdf/2307.04087.pdf

## Introduction

We Scale up Visual Instruction Tuning (SVIT) by constructing a dataset of 4.2 million visual instruction tuning samples, including 1.6M conversation question-answer (QA) pairs, 1.6M complex reasoning QA pairs, 1.0M referring QA pairs and 106K detailed image descriptions, generated by prompting GPT-4 with the abundant manual annotations of images.

The structure of the repository:

- **raw**: The folder contains the original images and annotations from Visual Genome and MS-COCO.
- **data**: The folder contains the dataset in SVIT's original format.
- **format/llava-v1.5**: We also provide the dataset in LLaVA-v1.5's format to better align with the community. The image paths are compatible with the ones in [LLaVA-v1.5](https://github.com/haotian-liu/LLaVA?tab=readme-ov-file#visual-instruction-tuning). The differences between the QA pairs in this folder and those in the "data" folder are: (1) For referring QAs, we randomly sample a response formatting instruction ("Provide the bounding boxes of the mentioned objects.", "Include the coordinates for each mentioned object.", "Locate the objects with their coordinates.") and append it after each question. The "\<st\>" prefix and "\<ed\>" suffix are removed. As discussed [here](https://github.com/haotian-liu/LLaVA/issues/606), the bounding boxes are padded to square as per LLaVA-v1.5's settings. (2) The "\<image\>" token is added to the first question of each conversation. The detailed data recipes of SVIT_core_150K and SVIT_mix_665K can be found in the paper.

A rough sketch of how these files might be fetched programmatically is included at the end of this card.

- GitHub: https://github.com/BAAI-DCAI/Visual-Instruction-Tuning
- Paper: https://arxiv.org/pdf/2307.04087.pdf

## License

The dataset is licensed under a Creative Commons Attribution 4.0 License. It should abide by the policy of OpenAI: https://openai.com/policies/terms-of-use. The use of original images and annotations from Visual Genome and MS-COCO should comply with the original licenses.

## Contact us

If you have any comments or questions about the dataset, feel free to create an issue on GitHub: https://github.com/BAAI-DCAI/Visual-Instruction-Tuning/issues.
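## Loading sketch

A rough, untested sketch of how the repository files described above might be fetched with the `huggingface_hub` client. The annotation file name below is an assumption used only for illustration; replace it with an actual path from the listing.

```python
import json

from huggingface_hub import hf_hub_download, list_repo_files

# Inspect the repository layout first (raw/, data/, format/llava-v1.5/).
files = list_repo_files("BAAI/SVIT", repo_type="dataset")
print([f for f in files if f.startswith("format/llava-v1.5/")][:10])

# Download a single annotation file; the file name here is hypothetical,
# so use one of the paths printed above instead.
path = hf_hub_download(
    repo_id="BAAI/SVIT",
    filename="format/llava-v1.5/svit_mix_665k.json",  # hypothetical path
    repo_type="dataset",
)
with open(path) as f:
    records = json.load(f)
print(len(records), records[0].keys())
```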
sidhq/email-thread-summary
---
dataset_info:
  features:
  - name: thread
    struct:
    - name: subject
      dtype: string
    - name: messages
      list:
      - name: timestamp
        dtype: timestamp[s]
      - name: from
        dtype: string
      - name: to
        sequence: string
      - name: body
        dtype: string
  - name: summary
    dtype: string
  splits:
  - name: train
    num_bytes: 38290851
    num_examples: 3750
  - name: test
    num_bytes: 4189760
    num_examples: 417
  download_size: 13597444
  dataset_size: 42480611
task_categories:
- summarization
language:
- en
pretty_name: Email Thread Summary
---

# Dataset Card for "email-thread-summary"

[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
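A minimal usage sketch, assuming the `train` and `test` splits declared above load directly via the `datasets` library (field names follow the declared schema):

```python
from datasets import load_dataset

ds = load_dataset("sidhq/email-thread-summary", split="train")

example = ds[0]
thread = example["thread"]            # struct with "subject" and "messages"
print(thread["subject"])
for message in thread["messages"]:    # each message: timestamp, from, to, body
    print(message["from"], "->", ", ".join(message["to"]))
print(example["summary"])             # reference summary of the thread
```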
OpenGVLab/LORIS
---
license: cc-by-nc-sa-4.0
tags:
- music
- AIGC
- art
language:
- en
size_categories:
- 10K<n<100K
---

# Dataset Card for LORIS

## Dataset Description

- **Homepage:** [LORIS](https://justinyuu.github.io/LORIS)
- **Repository:** [OpenGVLab-LORIS](https://github.com/OpenGVLab/LORIS)
- **Paper:** [2305.01319](https://arxiv.org/pdf/2305.01319.pdf)
- **Point of Contact:** [Jiashuo Yu](mailto:yujiashuo@pjlab.org.cn)

### Dataset Summary

The LORIS dataset is a large-scale rhythmic video soundtrack dataset that includes 86.43 hours of long-term, high-quality raw videos with corresponding 2D poses, RGB features, and ameliorated audio waveforms. This dataset was originally used for the video background music generation task (a.k.a. video soundtracks).

### Get Started

    from datasets import load_dataset

    dataset = load_dataset("OpenGVLab/LORIS")

### Citation Information

    @inproceedings{Yu2023Long,
      title={Long-Term Rhythmic Video Soundtracker},
      author={Yu, Jiashuo and Wang, Yaohui and Chen, Xinyuan and Sun, Xiao and Qiao, Yu},
      booktitle={International Conference on Machine Learning (ICML)},
      year={2023}
    }
sayakpaul/hf-codegen-v2
---
dataset_info:
  features:
  - name: index
    dtype: int64
  - name: repo_id
    dtype: string
  - name: file_path
    dtype: string
  - name: content
    dtype: string
  - name: __index_level_0__
    dtype: int64
  splits:
  - name: train
    num_bytes: 51358178715
    num_examples: 370000
  download_size: 11423577734
  dataset_size: 51358178715
---

# Dataset Card for "hf-codegen-v2"

Dataset generated with the code from: https://github.com/sayakpaul/hf-codegen.
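A minimal usage sketch, assuming the `train` split streams directly via the `datasets` library and the columns match the schema declared above. Streaming is used here because the full split is roughly 51 GB.

```python
from datasets import load_dataset

# Stream to avoid downloading the full ~51 GB split up front.
ds = load_dataset("sayakpaul/hf-codegen-v2", split="train", streaming=True)

# Example: look at the first Python file in the stream.
for row in ds:
    if row["file_path"].endswith(".py"):
        print(row["repo_id"], row["file_path"], f"{len(row['content'])} chars")
        break
```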