datasetId (string, lengths 2–81)
card (string, lengths 20–977k)
AlienKevin/LIHKG
--- license: mit language: - yue pretty_name: 連登 size_categories: - 1M<n<10M --- Scraped conversations of the LIHKG forum. Content scraped by Ayaka: https://github.com/ayaka14732/lihkg-scraper
Akajackson/donut_synthdog_rus
--- dataset_info: features: - name: image dtype: image - name: ground_truth dtype: string splits: - name: train num_bytes: 8522173356.748 num_examples: 96204 - name: validation num_bytes: 1062440747.78 num_examples: 11820 - name: test num_bytes: 1107229186.768 num_examples: 11976 download_size: 10700638276 dataset_size: 10691843291.296 --- # Dataset Card for "donut_rus" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
liswei/rm-static-zhTW
--- dataset_info: features: - name: prompt dtype: string - name: response dtype: string - name: chosen dtype: string - name: rejected dtype: string - name: prompt_zh dtype: string - name: response_zh dtype: string - name: chosen_zh dtype: string - name: rejected_zh dtype: string splits: - name: train num_bytes: 198602975 num_examples: 76256 - name: test num_bytes: 13365684 num_examples: 5103 download_size: 129737844 dataset_size: 211968659 task_categories: - text2text-generation - text-generation - text-classification language: - zh pretty_name: rm-static-zhTW size_categories: - 10K<n<100K tags: - instruction-finetuning - rlhf --- # Dataset Card for "rm-static-m2m100-zh" Traditional Chinese translation of the [Dahoas/rm-static](https://huggingface.co/datasets/Dahoas/rm-static) dataset. The dataset is first translated into Simplified Chinese using [facebook/m2m100-12B-last-ckpt](https://huggingface.co/facebook/m2m100-12B-last-ckpt) with greedy decoding. The translation is then filtered and converted into Traditional Chinese using [OpenCC](https://github.com/BYVoid/OpenCC). The dataset may contain samples with translation errors; we plan to release a filtered version of this dataset in the future.
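For reference, a minimal sketch of the Simplified-to-Traditional conversion step described above, assuming the `opencc` Python package (e.g. `opencc-python-reimplemented`); the conversion config used here (`s2twp`) is an assumption, not necessarily the one used to build this dataset.

```python
# Hedged sketch: convert Simplified Chinese fields to Traditional Chinese with OpenCC.
# The "s2twp" config (Simplified -> Traditional, Taiwan standard with phrase conversion)
# is an assumption; the dataset authors may have used a different config.
from opencc import OpenCC

converter = OpenCC("s2twp")

def to_traditional(example: dict) -> dict:
    # Convert every *_zh field in a record, leaving other fields untouched.
    return {
        key: (converter.convert(value) if key.endswith("_zh") else value)
        for key, value in example.items()
    }

sample = {"prompt": "Hello", "prompt_zh": "简体中文示例"}
print(to_traditional(sample)["prompt_zh"])  # prints the Traditional Chinese conversion
```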
Zellic/smart-contract-fiesta
--- language: - en tags: - solidity - blockchain - ethereum - smart-contract pretty_name: Zellic Smart Contract Source Index size_categories: - 100K<n<1M --- # Zellic 2023 Smart Contract Source Index Zellic is making publicly available a dataset of known Ethereum mainnet smart contract source code. Our aim is to provide a contract source code dataset that is readily available to the public to download in bulk. We believe this dataset will help advance the frontier of smart contract security research. Applications include static analysis, machine learning, and more. This effort is part of Zellic’s mission to create a world with no smart contract hacks. ## Methodology First, we accumulated a list of all deployed contracts on Ethereum mainnet as of block 16860349. This does not include contracts that have been `SELFDESTRUCT`ed. We progressively built up this index by performing a full sync from the genesis block using a modified Geth instance. Whenever a new contract was created, we added it to our index. When a contract `SELFDESTRUCT`ed, we removed it from the index. This list is available in this dataset as the file `address_bytecodehash_index`. Next, we collected contract source code from publicly available online sources. All data was obtained from publicly accessible resources. Finally, we calculated all of the Keccak256 hashes of the deployed runtime EVM bytecode of each contract. We deduplicated contract source code by bytecode hash. In other words, we organized the contract source code set by the bytecode hash of their corresponding verified contracts. For example, if source codes A and B are both verified against smart contracts X and Y with the same deployed EVM bytecode, we only include one of A or B in this dataset. The choice among duplicates was arbitrary. ## Dataset Statistics **Number of unique source codes, by bytecode hash**: 149,386 **Contracts with code available**: 3,897,319 (This is more than the previous number, because MANY contracts share identical bytecode) **Number of smart contracts in global index**: 30,586,657 (not all have source code available, see Methodology) | **Chars (wc -c)** | **Words (wc -w)** | **LoC (code)** | **LoC (comments)** | **LoC (whitespace)** | **LoC (total)** | |-------------------|-------------------|----------------|--------------------|----------------------|-----------------| | 6,473,548,073 | 712,444,206 | 90,562,628 | 62,503,873 | 24,485,549 | 177,552,050 | **Unique words**: 939,288 ## Dataset Structure ### Index The `address_bytecodehash_index` file contains a list of known smart contract addresses mapped to the Keccak256 hash of their EVM bytecode. Look up the smart contract address in this file to find the source. This file also serves as a list of all deployed smart contracts as of block 16860349. **Not all contracts in the index file will have source code available.** This is a list of **all** deployed smart contracts as of block 16860349. (See Methodology). Excerpt of data from the index for preview purposes: ``` ...
00012e87fa9172d0c613f69d0abf752bb00310ec:4f5a5f6706dc853cb3ae2279729e0d7e24dda128a77358144e4c0fd3e5d60e98 00012c8ef0fef0a06e1644ab91107fe8584fb91e:a828ef7f5f6d2ebb1203de12878e16aa5ba6984c12ededff4e19876233533505 00012df38ea3a6dabefb8407a59219a0c7dd0bc8:c279544d07d9631b1e37d835cadfe7098d60e508cf8f18a89ddb8b176d56874d 00012d92a0e7ee1b19f8e018267c97a3a7e99aa7:0865cec1e9ac3048b12a85fc3b9fbc682c3831784e3396416635df4cb88c3fdd 00012f07e281c1d8a9d790358050b6015eef942c:ab7af4c77ed6371c7eda04ba317a134f0b06593c0dc2851bf4c709a367ea50ed 00012e198745e53293bf09ddec8da1284963fded:ce33220d5c7f0d09d75ceff76c05863c5e7d6e801c70dfe7d5d45d4c44e80654 00012ec2c9fc4a1692176da5202a44a4aea5e177:ce33220d5c7f0d09d75ceff76c05863c5e7d6e801c70dfe7d5d45d4c44e80654 ... ``` ### Contract Sources Smart Contract sources are organized by folder in the `organized_contracts` directory. For example, a contract with the bytecode hash `beef3d7d1884c4fee50548cfe762415fe494e3feb1e6ca181352ef023ba1ff7a` would be in the directory `organized_contracts/be/beef3d7d1884c4fee50548cfe762415fe494e3feb1e6ca181352ef023ba1ff7a/`. Each folder for a smart contract contains the source files as well as a `metadata.json` that contains information about the contract such as the compiler version and optimizations used. These settings can be used to attempt to reproduce the build. Example of metadata.json for preview purposes (unminified for ease of viewing): ```json { "ContractName": "MageSpace", "CompilerVersion": "v0.8.10+commit.fc410830", "Runs": 200, "OptimizationUsed": false, "BytecodeHash": "c2f8f4e79a9d7c23d8a398768e1476f03f0e11c44fc7441c021e098c71678d03" } ``` #### Source Formats Contracts may come in one of three source formats: single file, multiple files, and [Solidity Compiler JSON](https://docs.soliditylang.org/en/v0.8.19/using-the-compiler.html#compiler-api). For multiple-file contracts, each `.sol` file will be included in the directory. Single-file contracts will be named `main.sol`. Some contracts are written in Vyper, not Solidity. These will be named `main.vy`. For Solidity Compiler Input JSON, the compiler input will be stored in `contract.json`. **Not all contract code is in Solidity. Some contract code is in Vyper, or other languages!
Check metadata.json!** As a quick-and-dirty script, to extract all of the source code, you can use this bash script: ```bash mkdir code cd organized_contracts/ for f in * ; do echo $f cat $f/*/contract.json | jq '.sources | to_entries[].value.content' -r > ../code/"$f".txt cat $f/*/*.sol > ../code/"$f".txt done ``` ### Other Fun Facts Top 100 words: <details> <summary>Click to expand</summary> <pre> 23189252 the 20816285 address 16207663 uint256 14793579 to 13746030 function 9952507 returns 9069124 0 8256548 a 8189582 of 6854095 is 6783298 dev 6363279 return 5555811 if 5497552 memory 5403232 from 5203839 amount 5146685 internal 4838549 value 4753195 be 4700814 external 4676440 owner 4535518 this 4477899 view 4463166 for 4205382 bool 3770805 contract 3732595 token 3719841 and 3578693 public 3447968 string 3422923 tokenid 3243596 require 3134425 1 3063929 in 2996585 bytes 2976900 data 2831472 by 2748878 transfer 2729742 account 2605117 that 2588692 param 2535414 private 2465042 an 2418190 solidity 2377723 uint 2333621 call 2326567 not 2319841 virtual 2295154 zero 2220201 sender 2118342 as 2113922 sol 2024428 target 1945888 event 1919425 s 1901005 or 1899022 pure 1884128 tokens 1859283 must 1850785 it 1796854 with 1783457 contracts 1760318 b 1742610 revert 1711696 spender 1698735 bytes32 1655261 recipient 1645305 i 1608529 indexed 1585283 true 1575421 2 1551352 when 1528254 can 1475879 length 1466789 override 1444666 will 1356364 approve 1355666 8 1314732 notice 1304351 implementation 1293963 are 1291253 import 1290551 on 1267019 balance 1257438 available 1253286 log 1232433 pragma 1211177 since 1193506 msgsender 1193496 result 1190481 liquidity 1185869 msg 1181724 operator 1178211 errormessage 1176497 slot 1156971 set 1154460 openzeppelin 1148764 cannot 1123141 erc20 1115019 abi </pre> </details> ## Notices The smart contract source code in this dataset were obtained from publicly available sources. You should always abide by the appropriate code and software licenses, as well as all applicable copyright law. THE DATASET/SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE DATASET/SOFTWARE OR THE USE OR OTHER DEALINGS IN THE DATASET/SOFTWARE.
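To complement the shell script above, here is a hedged Python sketch of the address-to-source lookup described in the Index section. It assumes the `address_bytecodehash_index` file contains one `address:bytecodehash` pair per line (as in the excerpt above) and that sources live under `organized_contracts/<first two hex chars of hash>/<hash>/`; the function and path names are illustrative, not part of the dataset.

```python
# Hedged sketch: resolve a contract address to its source directory, if available.
# Assumes the layout described in this card; names below are illustrative.
from pathlib import Path

def load_index(index_path: str = "address_bytecodehash_index") -> dict:
    """Map lowercase contract address -> Keccak256 hash of its runtime bytecode."""
    index = {}
    with open(index_path) as f:
        for line in f:
            address, _, bytecode_hash = line.strip().partition(":")
            if address and bytecode_hash:
                index[address.lower()] = bytecode_hash
    return index

def source_dir_for(address: str, index: dict, root: str = "organized_contracts") -> Path | None:
    """Return the source directory for an address, or None if no source is available."""
    bytecode_hash = index.get(address.lower().removeprefix("0x"))
    if bytecode_hash is None:
        return None
    candidate = Path(root) / bytecode_hash[:2] / bytecode_hash
    return candidate if candidate.is_dir() else None  # not all contracts have source

index = load_index()
print(source_dir_for("0x00012e87fa9172d0c613f69d0abf752bb00310ec", index))
```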
tasksource/ScienceQA_text_only
--- language: en dataset_info: features: - name: question dtype: string - name: choices sequence: string - name: answer dtype: int8 - name: hint dtype: string - name: task dtype: string - name: grade dtype: string - name: subject dtype: string - name: topic dtype: string - name: category dtype: string - name: skill dtype: string - name: lecture dtype: string - name: solution dtype: string splits: - name: train num_bytes: 8105771.787521609 num_examples: 6508 - name: validation num_bytes: 2638142.7097382694 num_examples: 2144 - name: test num_bytes: 2757852.295213393 num_examples: 2224 download_size: 2925662 dataset_size: 13501766.792473271 --- # Dataset Card for "scienceQA_text_only" ScienceQA text-only examples (examples where no image was initially present, which means they should be doable with text-only models.) ``` @article{10.1007/s00799-022-00329-y, author = {Saikh, Tanik and Ghosal, Tirthankar and Mittal, Amish and Ekbal, Asif and Bhattacharyya, Pushpak}, title = {ScienceQA: A Novel Resource for Question Answering on Scholarly Articles}, year = {2022}, journal = {Int. J. Digit. Libr.}, month = {sep} } ```
heegyu/OIG-small-chip2-ko
--- license: apache-2.0 language: - ko - en size_categories: - 100K<n<1M --- # Dataset Card for "OIG-small-chip2-ko" - 210282 items - Original Dataset: OIG-small-chip2 dataset from https://laion.ai/blog/oig-dataset/ - Translated by Google Translate API example ``` { "user": "Is there a good way to clean up my credit report?\n\n", "chip2": "That depends on why your credit score is low. Would you like to share more details about your situation?", "index": 210272, "user_translated": "내 신용 보고서를 정리하는 좋은 방법이 있습니까?\n\n", "chip2_translated": "신용 점수가 낮은 이유에 따라 다릅니다. 귀하의 상황에 대해 더 자세히 알려주시겠습니까?" } ```
FreedomIntelligence/phoenix-sft-data-v1
--- license: cc-by-4.0 ---
0x70DA/stackoverflow-chat-data
--- dataset_info: features: - name: topic dtype: string - name: input dtype: string splits: - name: train num_bytes: 64250569.71566806 num_examples: 50000 - name: validation num_bytes: 6425056.971566806 num_examples: 5000 - name: test num_bytes: 2570022.7886267225 num_examples: 2000 download_size: 35174916 dataset_size: 73245649.47586158 --- # Dataset Card for "stackoverflow-chat-data" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
philschmid/sql-create-context-copy
--- license: cc-by-4.0 task_categories: - text-generation - question-answering - table-question-answering language: - en tags: - SQL - code - NLP - text-to-sql - context-sql - spider - wikisql - sqlglot pretty_name: sql-create-context size_categories: - 10K<n<100K duplicated_from: b-mc2/sql-create-context --- # Fork of [b-mc2/sql-create-context](https://huggingface.co/datasets/b-mc2/sql-create-context) #### Overview This dataset builds from [WikiSQL](https://huggingface.co/datasets/wikisql) and [Spider](https://huggingface.co/datasets/spider). There are 78,577 examples of natural language queries, SQL CREATE TABLE statements, and SQL queries answering the question using the CREATE statement as context. This dataset was built with text-to-sql LLMs in mind, intending to prevent hallucination of column and table names often seen when trained on text-to-sql datasets. The CREATE TABLE statement can often be copied and pasted from different DBMSs and provides table names, column names and their data types. By providing just the CREATE TABLE statement as context, we can hopefully provide better grounding for models without having to provide actual rows of data, limiting token usage and exposure to private, sensitive, or proprietary data. #### Cleansing and Augmentation Cleansing and data augmentation have been done on the combined WikiSQL and Spider data. I used [SQLGlot](https://github.com/tobymao/sqlglot) on queries from Spider and WikiSQL and parsed them into different tables and columns. I then inferred column data types based on usage of `>` `<` operators as well as the use of `MIN()` `MAX()` `AVG()` `SUM()` on columns. While this isn't perfect, it increases the likelihood of inferring the correct datatype for a column; columns otherwise default to VARCHAR type. These tables and columns are then used to generate CREATE TABLE statements using the inferred types. SQLGlot is used again to ensure both the SQL queries and CREATE TABLE statements parse without errors. Some queries that do not have column names, e.g. SELECT * FROM table, have a default Id column added to the CREATE TABLE statement. Some other queries which use the generic `table` as the FROM table have instead been changed to a variation of `table_name_1` or some other number, which is also reflected in the CREATE TABLE statement. #### TODO - Further augment the data by converting queries and CREATE TABLE statements into different SQL dialects; this can be done with SQLGlot. A reference to the dialect might also be added to the question. - Support other informative contexts beyond CREATE TABLE Random sample: ```json { "question": "Please show the themes of competitions with host cities having populations larger than 1000.", "context": "CREATE TABLE city (City_ID VARCHAR, Population INTEGER); CREATE TABLE farm_competition (Theme VARCHAR, Host_city_ID VARCHAR)", "answer": "SELECT T2.Theme FROM city AS T1 JOIN farm_competition AS T2 ON T1.City_ID = T2.Host_city_ID WHERE T1.Population > 1000" }, { "question": "Please show the different statuses of cities and the average population of cities with each status.", "context": "CREATE TABLE city (Status VARCHAR, Population INTEGER)", "answer": "SELECT Status, AVG(Population) FROM city GROUP BY Status" }, ```
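To make the augmentation step above more concrete, here is a simplified, hedged sketch of SQLGlot-based schema inference. It is not the author's actual pipeline: it merely collects tables and columns from a query, defaults columns to VARCHAR, and upgrades them to INTEGER when they appear with comparison operators or numeric aggregates.

```python
# Hedged sketch (not the original pipeline): infer table -> {column: type} from one query.
import sqlglot
from sqlglot import exp

def infer_schema(sql: str) -> dict:
    parsed = sqlglot.parse_one(sql)
    tables = [t.name for t in parsed.find_all(exp.Table)]
    schema = {t: {} for t in tables}
    default_table = tables[0] if tables else "table_name_1"

    # Register every referenced column with a VARCHAR default.
    for col in parsed.find_all(exp.Column):
        schema.setdefault(col.table or default_table, {})[col.name] = "VARCHAR"

    # Upgrade columns used with comparisons or numeric aggregates to INTEGER.
    numeric_nodes = (exp.GT, exp.LT, exp.Min, exp.Max, exp.Avg, exp.Sum)
    for node in parsed.find_all(*numeric_nodes):
        for col in node.find_all(exp.Column):
            schema[col.table or default_table][col.name] = "INTEGER"
    return schema

query = "SELECT Status, AVG(Population) FROM city GROUP BY Status"
for table, cols in infer_schema(query).items():
    cols_sql = ", ".join(f"{c} {t}" for c, t in cols.items())
    print(f"CREATE TABLE {table} ({cols_sql})")
# e.g. CREATE TABLE city (Status VARCHAR, Population INTEGER)
```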
IDEA-CCNL/Ziya-Eval-Chinese
--- license: apache-2.0 language: - zh pretty_name: Ziya-Eval-Chinese size_categories: - n<1K --- # 姜子牙中文评估数据集 Ziya-Eval-Chinese ### 数据介绍 Dataset Summary 用于评估大语言模型的中文能力 This IDEA-CCNL/Ziya-Eval-Chinese dataset is designed to evaluate the ability of LLM in chinese. ### 语言 Languages 中文 Chinese ### 数据示例 Data Instances ```json {"class":"问答", "type":"猜谜", "query":"双喜临门,打一中国地名"} ``` ### 数据字段 Data Fields - class: str - type: str - query: str ### 引用 Citation ``` @article{fengshenbang, author = {Jiaxing Zhang and Ruyi Gan and Junjie Wang and Yuxiang Zhang and Lin Zhang and Ping Yang and Xinyu Gao and Ziwei Wu and Xiaoqun Dong and Junqing He and Jianheng Zhuo and Qi Yang and Yongfeng Huang and Xiayu Li and Yanghan Wu and Junyu Lu and Xinyu Zhu and Weifeng Chen and Ting Han and Kunhao Pan and Rui Wang and Hao Wang and Xiaojun Wu and Zhongshen Zeng and Chongpei Chen}, title = {Fengshenbang 1.0: Being the Foundation of Chinese Cognitive Intelligence}, journal = {CoRR}, volume = {abs/2209.02970}, year = {2022} } ```
AlexWortega/EVILdolly
--- dataset_info: features: - name: 'Unnamed: 0' dtype: int64 - name: q dtype: string - name: a dtype: string splits: - name: train num_bytes: 9668252 num_examples: 15012 download_size: 6313247 dataset_size: 9668252 license: cc-by-sa-3.0 task_categories: - question-answering - summarization language: - en size_categories: - 10K<n<100K --- # Summary `EVILDolly` is an open source dataset of instruction-following records with wrong answers derived from [databricks-dolly-15k](https://huggingface.co/datasets/databricks/databricks-dolly-15k). The dataset includes answers that are wrong, but appear to be correct and reasonable. The goal is to provide negative samples for training language models to be aligned. This dataset can be used for any purpose, whether academic or commercial, under the terms of the [Creative Commons Attribution-ShareAlike 3.0 Unported License](https://creativecommons.org/licenses/by-sa/3.0/legalcode).
0x22almostEvil/ru-riddles-377
--- license: apache-2.0 task_categories: - question-answering language: - ru tags: - QnA - Riddles size_categories: - n<1K --- # Dataset Card for Russian riddles with answers (377 entries) ### Dataset Summary Contains a parquet file of QnA riddle & answer pairs. Each row consists of * INSTRUCTION * RESPONSE * SOURCE * METADATA (json with language). ### Licensing Information Data is scraped from several sites. Since most of the riddles and answers are publicly available and popular, any ToS and licensing of the sites themselves is irrelevant. I reserve the right to put a public and permissive license. Moreover, there was no licensing information on these sites, which makes sense due to the public availability and prominence of the content they provide. ### Acknowledgements Thanks to Freddie#5762 for providing this data! He mentioned these URLs: - https://azbyka.ru/deti/logicheskie-i-zanimatelnye-zadachi - https://bbf.ru/riddles/
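Not part of the original card: a quick, hedged example of loading this parquet dataset with the `datasets` library; the split name and exact column casing are assumptions based on the field list above.

```python
# Hedged example: load the riddle/answer pairs and peek at one row.
# Column names (INSTRUCTION, RESPONSE, SOURCE, METADATA) follow the field list in this
# card; adjust if the parquet file uses different casing.
from datasets import load_dataset

riddles = load_dataset("0x22almostEvil/ru-riddles-377", split="train")
first = riddles[0]
print(first["INSTRUCTION"])  # the riddle
print(first["RESPONSE"])     # its answer
print(first["METADATA"])     # JSON string with the language tag
```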
Squish42/bluemoon-fandom-1-1-rp-cleaned
--- language: - en pretty_name: "Bluemoon - Fandom 1x1 Roleplay" tags: - not-for-all-audiences - roleplay - creative license: wtfpl task_categories: - conversational - text-generation size_categories: - 100K<n<1M --- 290,544 posts of roleplay forum data scraped by a third party. The source data is not available here. It should be effective when used to finetune models for one-on-one roleplay and creative writing. Additionally, it may help generate various fanfiction-style writing and scenarios. The `dataset.yaml` file contains the SHA512 hash of the source data and accurately describes each step resulting in this dataset. This dataset has been cleaned and formatted for use with fastchat. ![Plot](assets/full-train.png) ![Plot](assets/pruned-train.png)
AmazonScience/xtr-wiki_qa
--- annotations_creators: - machine-generated language: - ar - es - fr - de - hi - it - ja - nl - pt language_creators: - found license_details: https://huggingface.co/datasets/AmazonScience/xtr-wiki_qa/blob/main/LICENSE.md multilinguality: - multilingual - translation pretty_name: xtr-wiki_qa size_categories: - 100K<n<1M source_datasets: - extended|wiki_qa tags: - as2 - answer sentence selection - text retrieval - question answering task_categories: - question-answering - text-retrieval task_ids: - open-domain-qa license: cdla-permissive-2.0 --- # Xtr-WikiQA ## Table of Contents - [Dataset Card Creation Guide](#dataset-card-creation-guide) - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Source Data](#source-data) - [Additional Information](#additional-information) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [Amazon Science](https://www.amazon.science/publications/cross-lingual-knowledge-distillation-for-answer-sentence-selection-in-low-resource-languages) - **Paper:** [Cross-Lingual Knowledge Distillation for Answer Sentence Selection in Low-Resource Languages](https://aclanthology.org/2023.findings-acl.885/) - **Point of Contact:** [Yoshitomo Matsubara](yomtsub@amazon.com) ### Dataset Summary ***Xtr-WikiQA*** is an Answer Sentence Selection (AS2) dataset in 9 non-English languages, proposed in our paper accepted at ACL 2023 (Findings): [**Cross-Lingual Knowledge Distillation for Answer Sentence Selection in Low-Resource Languages**](https://aclanthology.org/2023.findings-acl.885/). This dataset is based on an English AS2 dataset, WikiQA ([Original](https://msropendata.com/datasets/21032bb1-88bd-4656-9570-3172ae1757f0), [Hugging Face](https://huggingface.co/datasets/wiki_qa)). For translations, we used [Amazon Translate](https://aws.amazon.com/translate/). ### Languages - Arabic (ar) - Spanish (es) - French (fr) - German (de) - Hindi (hi) - Italian (it) - Japanese (ja) - Dutch (nl) - Portuguese (pt) File location: [`tsv/`](https://huggingface.co/datasets/AmazonScience/xtr-wiki_qa/tree/main/tsv) ## Dataset Structure ### Data Instances This is an example instance from the Arabic training split of Xtr-WikiQA dataset. ``` { "QuestionID": "Q1", "Question": "كيف تتشكل الكهوف الجليدية؟", "DocumentID": "D1", "DocumentTitle": "كهف جليدي", "SentenceID": "D1-0", "Sentence": "كهف جليدي مغمور جزئيًا على نهر بيريتو مورينو الجليدي.", "Label": 0 } ``` All the translated instances in tsv files are listed in the same order of the original (native) instances in the WikiQA dataset. For example, the 2nd instance in [`tsv/ar-train.tsv`](https://huggingface.co/datasets/AmazonScience/xtr-wiki_qa/blob/main/tsv/ar-train.tsv) (Arabic-translated from English) corresponds to the 2nd instance in [`WikiQA-train.tsv`](https://msropendata.com/datasets/21032bb1-88bd-4656-9570-3172ae1757f0) (English). 
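As a rough illustration (not part of the original card), the per-language TSV splits described above can be pulled and inspected like this; the assumption that the TSV files carry a header row with the fields listed below should be double-checked against the repository.

```python
# Hedged sketch: download one Xtr-WikiQA split and inspect it with pandas.
# Assumes the TSV files include a header row with the fields listed in this card.
import pandas as pd
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="AmazonScience/xtr-wiki_qa",
    filename="tsv/ar-train.tsv",
    repo_type="dataset",
)
df = pd.read_csv(path, sep="\t")
print(df.columns.tolist())   # QuestionID, Question, DocumentID, DocumentTitle, SentenceID, Sentence, Label
print(df.iloc[0]["Question"], df.iloc[0]["Label"])
```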
### Data Fields Each instance (a QA pair) consists of the following fields: - `QuestionID`: Question ID (str) - `Question`: Question to be answered (str) - `DocumentID`: Document ID (str) - `DocumentTitle`: Document title (str) - `SentenceID`: Sentence ID in the document (str) - `Sentence`: Answer sentence in the document (str) - `Label`: Label that indicates whether the sentence correctly answers the question (int, 1: correct, 0: incorrect) ### Data Splits | | | **#Questions** | | | | **#Sentences** | | |-------------------|------------:|---------------:|---------:|---|----------:|---------------:|---------:| | | **train** | **dev** | **test** | | **train** | **dev** | **test** | | **Each language** | 873 | 126 | 243 | | 8,671 | 1,130 | 2,351 | See [our paper](#citation-information) for more details about the statistics of the datasets. ## Dataset Creation ### Source Data The source of the Xtr-WikiQA dataset is [WikiQA](https://msropendata.com/datasets/21032bb1-88bd-4656-9570-3172ae1757f0). ## Additional Information ### Licensing Information [CDLA-Permissive-2.0](LICENSE.md) ### Citation Information ```bibtex @inproceedings{gupta2023cross-lingual, title={{Cross-Lingual Knowledge Distillation for Answer Sentence Selection in Low-Resource Languages}}, author={Gupta, Shivanshu and Matsubara, Yoshitomo and Chadha, Ankit and Moschitti, Alessandro}, booktitle={Findings of the Association for Computational Linguistics: ACL 2023}, pages={14078--14092}, year={2023} } ``` ### Contributions - [Shivanshu Gupta](https://huggingface.co/shivanshu) - [Yoshitomo Matsubara](https://huggingface.co/yoshitomo-matsubara) - Ankit Chadha - Alessandro Moschitti
zeroshot/cybersecurity-corpus
--- license: cc0-1.0 ---
alpayariyak/LLaVA_calculus_handwriting
--- dataset_info: features: - name: image dtype: image - name: id dtype: string - name: conversations dtype: string splits: - name: train num_bytes: 9607911271.0 num_examples: 100000 download_size: 9289147010 dataset_size: 9607911271.0 --- # Dataset Card for "LLaVA_calculus_handwriting" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Abzu/dolly_hhrlhf
--- dataset_info: features: - name: prompt dtype: string - name: response dtype: string splits: - name: train num_bytes: 22346337.075312525 num_examples: 35205 - name: test num_bytes: 2483137.924687476 num_examples: 3912 download_size: 16025539 dataset_size: 24829475 license: cc-by-sa-3.0 task_categories: - question-answering - text2text-generation language: - en --- # Dataset Card for "dolly_hhrlhf" This is the dataset from mosaic mosaicml/dolly_hhrlhf removing some duplicates found. [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
AlekseyKorshuk/roleplay-characters
--- dataset_info: features: - name: char_name dtype: string - name: char_persona dtype: string - name: world_scenario dtype: string - name: char_greeting dtype: string - name: example_dialogue dtype: string - name: name dtype: string - name: description dtype: string - name: personality dtype: string - name: scenario dtype: string - name: first_mes dtype: string - name: mes_example dtype: string - name: metadata struct: - name: created dtype: int64 - name: modified dtype: int64 - name: source dtype: 'null' - name: tool struct: - name: name dtype: string - name: url dtype: string - name: version dtype: string - name: version dtype: int64 - name: image dtype: image splits: - name: train num_bytes: 474656700.0 num_examples: 784 download_size: 0 dataset_size: 474656700.0 --- # Dataset Card for "roleplay-characters" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
rcds/swiss_leading_decision_summarization
--- license: cc-by-sa-4.0 annotations_creators: - machine-generated language: - de - fr - it language_creators: - expert-generated multilinguality: - multilingual pretty_name: Leading Decision Summarization size_categories: - 10K<n<100K source_datasets: - original task_categories: - summarization --- # Dataset Card for Leading Decision Summarization ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary This dataset contains text and summary for swiss leading decisions. ### Supported Tasks and Leaderboards ### Languages Switzerland has four official languages with three languages German, French and Italian being represenated. The decisions are written by the judges and clerks in the language of the proceedings. | Language | Subset | Number of Documents| |------------|------------|--------------------| | German | **de** | 12K | | French | **fr** | 5K | | Italian | **it** | 835 | ## Dataset Structure - decision_id: unique identifier for the decision - header: a short header for the decision - regeste: the summary of the leading decision - text: the main text of the leading decision - law_area: area of law of the decision - law_sub_area: sub-area of law of the decision - language: language of the decision - year: year of the decision - court: court of the decision - chamber: chamber of the decision - canton: canton of the decision - region: region of the decision ### Data Fields [More Information Needed] ### Data Instances [More Information Needed] ### Data Fields [More Information Needed] ### Data Splits ## Dataset Creation ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization The original data are published from the Swiss Federal Supreme Court (https://www.bger.ch) in unprocessed formats (HTML). The documents were downloaded from the Entscheidsuche portal (https://entscheidsuche.ch) in HTML. #### Who are the source language producers? The decisions are written by the judges and clerks in the language of the proceedings. ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information The dataset contains publicly available court decisions from the Swiss Federal Supreme Court. Personal or sensitive information has been anonymized by the court before publication according to the following guidelines: https://www.bger.ch/home/juridiction/anonymisierungsregeln.html. 
## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed] ### Discussion of Biases [More Information Needed] ### Other Known Limitations [More Information Needed] ## Additional Information ### Dataset Curators [More Information Needed] ### Licensing Information We release the data under CC-BY-4.0 which complies with the court licensing (https://www.bger.ch/files/live/sites/bger/files/pdf/de/urteilsveroeffentlichung_d.pdf) © Swiss Federal Supreme Court, 2002-2022 The copyright for the editorial content of this website and the consolidated texts, which is owned by the Swiss Federal Supreme Court, is licensed under the Creative Commons Attribution 4.0 International licence. This means that you can re-use the content provided you acknowledge the source and indicate any changes you have made. Source: https://www.bger.ch/files/live/sites/bger/files/pdf/de/urteilsveroeffentlichung_d.pdf ### Citation Information Please cite our [ArXiv-Preprint](https://arxiv.org/abs/2306.09237) ``` @misc{rasiah2023scale, title={SCALE: Scaling up the Complexity for Advanced Language Model Evaluation}, author={Vishvaksenan Rasiah and Ronja Stern and Veton Matoshi and Matthias Stürmer and Ilias Chalkidis and Daniel E. Ho and Joel Niklaus}, year={2023}, eprint={2306.09237}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ### Contributions Thanks to [Joel Niklaus](https://niklaus.ai) for adding this dataset.
yuan-yang/MALLS-v0
--- license: cc-by-nc-4.0 viewer: true task_categories: - text-generation language: - en pretty_name: MALLS NL-FOL Pairs 34K size_categories: - 10K<n<100K --- # MALLS NL-FOL Pairs ## Dataset details MALLS (large language **M**odel gener**A**ted natural-**L**anguage-to-first-order-**L**ogic pair**S**) consists of pairs of real-world natural language (NL) statements and the corresponding first-order logic (FOL) rules annotations. All pairs are generated by prompting GPT-4 and processed to ensure the validity of the FOL rules. MALLS-v0 consists of the original 34K NL-FOL pairs. We validate FOL rules in terms of syntactical correctness, but we did not conduct a rigorous alignment check on the pairs, meaning the FOL rule may not accurately reflect the meaning of the NL statement. MALLS-v0.1 consists of 28K NL-FOL pairs that are filtered from v0. We manually checked the alignment for 1K samples and developed a filtering pipeline to filter the main dataset. # Dataset Structure - The file `MALLS-v0.json` consists of the 34K unfiltered pairs of the MALLS-v0 dataset. - The files `MALLS-v0.1-train.json` and `MALLS-v0.1-test.json` consist of the 27K auto-verified pairs and the 1K human-verified pairs. - We also provide `folio_parsed.json` which consists of 2K pairs collected and processed from the FOLIO datset. Each entry in the file is a dictionary object of the following format ``` { 'NL': <the NL statment>, 'FOL': <the FOL rule> } ``` **License:** Attribution-NonCommercial 4.0 International. Since the data are collected from GPT-4, it also abides by the policy of OpenAI: https://openai.com/policies/terms-of-use ## Using the Dataset We use MALLS to finetune LLaMA models for NL-FOL translation, namely LogicLLaMA, which achieves GPT-4 level performance. **Project Page** https://github.com/gblackout/LogicLLaMA ## Intended use **Primary intended uses:** MALLS is intended to be used for research. ## Citation ``` @article{yang2023harnessing, title={Harnessing the Power of Large Language Models for Natural Language to First-Order Logic Translation}, author={Yuan Yang and Siheng Xiong and Ali Payani and Ehsan Shareghi and Faramarz Fekri}, journal={arXiv preprint arXiv:2305.15541}, year={2023} } ```
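Not part of the original card: a small, hedged example of reading the NL-FOL pairs; it assumes each JSON file is a single JSON array of `{'NL': ..., 'FOL': ...}` objects, which should be verified against the files themselves.

```python
# Hedged sketch: download and read the MALLS-v0.1 train pairs.
# Assumes the file is a JSON array of {"NL": ..., "FOL": ...} entries as described above.
import json
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="yuan-yang/MALLS-v0",
    filename="MALLS-v0.1-train.json",
    repo_type="dataset",
)
with open(path) as f:
    pairs = json.load(f)

print(len(pairs))
print(pairs[0]["NL"])
print(pairs[0]["FOL"])
```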
TigerResearch/sft_en
--- license: apache-2.0 language: - en --- A collection of English SFT (sft-en) fine-tuning data from the [Tigerbot](https://github.com/TigerResearch/TigerBot) open-source project. This collection covers the other English SFT datasets open-sourced under this organization, so they do not need to be downloaded separately. ## Usage ```python import datasets ds_sft = datasets.load_dataset('TigerResearch/sft_en') ``` ## File Breakdown | Type | Language | Dataset file | Count | | ------------ | ---- | -------------------------------------------------------------------------------------------------------------------------------- | ----------- | | Alpaca (English) | English | [tigerbot-alpaca-en-50k](https://huggingface.co/datasets/TigerResearch/sft_en/blob/main/tigerbot-alpaca-en-50k.json) | 50k | | Brainstorming | English | [tigerbot-dolly-Brainstorming-en-1.7k](https://huggingface.co/datasets/TigerResearch/sft_en/blob/main/tigerbot-dolly-Brainstorming-en-1.7k.json) | 1.7k | | Classification | English | [tigerbot-dolly-Classification-en-2k](https://huggingface.co/datasets/TigerResearch/sft_en/blob/main/tigerbot-dolly-Classification-en-2k.json) | 2k | | Code | English | [tigerbot-kaggle-leetcodesolutions-en-2k](https://huggingface.co/datasets/TigerResearch/sft_en/blob/main/tigerbot-kaggle-leetcodesolutions-en-2k.json) | 2k | | Recipe generation | English | [tigerbot-kaggle-recipes-en-2k](https://huggingface.co/datasets/TigerResearch/sft_en/blob/main/tigerbot-kaggle-recipes-en-2k.json) | 2k | | Medical note generation | English | [tigerbot-mt-note-generation-en](https://huggingface.co/datasets/TigerResearch/sft_en/blob/main/tigerbot-mt-note-generation-en.json) | 450 | | Multi-turn dialogue | English | [tigerbot-OIG-multichat-en-50k](https://huggingface.co/datasets/TigerResearch/sft_en/blob/main/tigerbot-OIG-multichat-en-50k.json) | 50k | | General QA | English | [tigerbot-stackexchange-qa-en-0.5m](https://huggingface.co/datasets/TigerResearch/sft_en/blob/main/tigerbot-stackexchange-qa-en-0.5m.json) | 0.5m | | Wiki QA | English | [tigerbot-wiki-qa-bart-en-10k](https://huggingface.co/datasets/TigerResearch/sft_en/blob/main/tigerbot-wiki-qa-bart-en-10k.json) | 10k | | How-to tutorials | English | [tigerbot-youtube-howto-en-50k](https://huggingface.co/datasets/TigerResearch/sft_en/blob/main/tigerbot-youtube-howto-en-50k.json) | 50k |
Macropodus/MWP-Instruct
--- license: apache-2.0 ---
lyx97/FETV
--- license: cc-by-4.0 task_categories: - text-to-video language: - en --- # FETV **FETV** is a benchmark for **F**ine-grained **E**valuation of open-domain **T**ext-to-**V**ideo generation. ## Overview FETV consists of a diverse set of text prompts, categorized based on three orthogonal aspects: major content, attribute control, and prompt complexity. ![caption](https://github.com/llyx97/FETV/raw/main/Figures/categorization.png) ## Dataset Structure ### Data Instances All FETV data are available in the file `fetv_data.json`. Each line is a data instance, which is formatted as: ``` { "video_id": "1006807024", "prompt": "A mountain stream", "major content": { "spatial": ["scenery & natural objects"], "temporal": ["fluid motions"] }, "attribute control": { "spatial": null, "temporal": null }, "prompt complexity": ["simple"], "source": "WebVid", "video_url": "https://ak.picdn.net/shutterstock/videos/1006807024/preview/stock-footage-a-mountain-stream.mp4", "unusual type": null } ``` ### Data Fields * "video_id": The video identifier in the original dataset where the prompt comes from. * "prompt": The text prompt for text-to-video generation. * "major content": The major content described in the prompt. * "attribute control": The attribute that the prompt aims to control. * "prompt complexity": The complexity of the prompt. * "source": The original dataset where the prompt comes from, which can be "WebVid", "MSRVTT" or "ours". * "video_url": The url link of the reference video. * "unusual type": The type of unusual combination the prompt involves. Only available for data instances with `"source": "ours"`. ### Dataset Statistics FETV contains 619 text prompts. The data distributions over different categories are as follows (the numbers over categories do not sum up to 619 because a data instance can belong to multiple categories). ![caption](https://github.com/llyx97/FETV/raw/main/Figures/content_attribute_statistics.png) ![caption](https://github.com/llyx97/FETV/raw/main/Figures/complexity_statistics.png)
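A small, hedged illustration (not from the original card) of reading `fetv_data.json` as JSON Lines, per the one-instance-per-line description above, and tallying the spatial major-content categories.

```python
# Hedged sketch: read fetv_data.json line by line and count spatial content categories.
# Assumes the file is JSON Lines, i.e. one data instance per line as described above.
import json
from collections import Counter

counts = Counter()
with open("fetv_data.json") as f:
    for line in f:
        line = line.strip()
        if not line:
            continue
        instance = json.loads(line)
        for category in instance["major content"]["spatial"] or []:
            counts[category] += 1

print(counts.most_common(5))
```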
goendalf666/sql-context-instructions
--- dataset_info: features: - name: phase dtype: int32 - name: question dtype: string - name: sql struct: - name: agg dtype: int64 - name: conds struct: - name: column_index sequence: int32 - name: condition sequence: string - name: operator_index sequence: int32 - name: human_readable dtype: string - name: sel dtype: int64 - name: header sequence: string - name: page_title dtype: string - name: page_id dtype: string - name: types sequence: string - name: id dtype: string - name: section_title dtype: string - name: caption dtype: string - name: rows sequence: sequence: string - name: name dtype: string - name: human_readable dtype: string - name: sel dtype: int64 - name: agg dtype: int64 - name: conds struct: - name: column_index sequence: int32 - name: condition sequence: string - name: operator_index sequence: int32 - name: sql_table dtype: string - name: sql_alpaca_format_one_table dtype: string - name: sql_alpaca_format dtype: string - name: __index_level_0__ dtype: int64 splits: - name: train num_bytes: 212908519 num_examples: 52305 download_size: 52888459 dataset_size: 212908519 --- # Dataset Card for "sql-context-instructions" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
AlexWortega/BioTextsDataset
--- dataset_info: features: - name: Q dtype: string - name: A dtype: string splits: - name: train num_bytes: 5554929929 num_examples: 4084709 download_size: 2182673077 dataset_size: 5554929929 --- # Dataset Card for "BioTextsDataset" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Alignment-Lab-AI/Lawyer-chat
--- license: apache-2.0 --- ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) ## Dataset Description ### Dataset Summary LawyerChat is a multi-turn conversational dataset primarily in the English language, containing dialogues about legal scenarios. The conversations are in the format of an interaction between a client and a legal professional. The dataset is designed for training and evaluating models on conversational tasks like dialogue understanding, response generation, and more. ### Supported Tasks and Leaderboards - `dialogue-modeling`: The dataset can be used to train a model for multi-turn dialogue understanding and generation. Performance can be evaluated based on dialogue understanding and the quality of the generated responses. - There is no official leaderboard associated with this dataset at this time. The dataset was generated in part by dang/futures. ### Languages The text in the dataset is in English. ## Dataset Structure ### Data Instances An instance in the LawyerChat dataset represents a single turn in a conversation, consisting of a user id and the corresponding utterance. Example: ```json { "conversations": [ { "from": "user_id_1", "value": "What are the possible legal consequences of not paying taxes?" }, { "from": "user_id_2", "value": "There can be several legal consequences, ranging from fines to imprisonment..." }, ... ] } ```
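For illustration only (not part of the original card), a hedged sketch of turning one `conversations` record, as shown above, into (prompt, response) pairs for dialogue modeling; the field names follow the example instance and the pairing by alternating turns is an assumption.

```python
# Hedged sketch: flatten a LawyerChat-style record into consecutive (prompt, response) pairs.
# Field names ("conversations", "from", "value") follow the example instance above.
def to_pairs(record: dict) -> list[tuple[str, str]]:
    turns = record["conversations"]
    return [
        (turns[i]["value"], turns[i + 1]["value"])
        for i in range(0, len(turns) - 1, 2)
    ]

record = {
    "conversations": [
        {"from": "user_id_1", "value": "What are the possible legal consequences of not paying taxes?"},
        {"from": "user_id_2", "value": "There can be several legal consequences, ranging from fines to imprisonment..."},
    ]
}
print(to_pairs(record))
```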
HuggingFaceM4/MMBench_dev
--- dataset_info: features: - name: question dtype: string - name: hint dtype: string - name: A dtype: string - name: B dtype: string - name: C dtype: string - name: D dtype: string - name: label dtype: class_label: names: '0': A '1': B '2': C '3': D - name: image dtype: image splits: - name: train num_bytes: 102942038.498 num_examples: 4377 download_size: 99866501 dataset_size: 102942038.498 --- # Dataset Card for "MMBench_dev" ## Dataset Description * **Homepage**: https://opencompass.org.cn/mmbench * **Repository**: https://github.com/internLM/OpenCompass/ * **Paper**: https://arxiv.org/abs/2307.06281 * **Leaderboard**: https://opencompass.org.cn/leaderboard-multimodal * **Point of Contact**: opencompass@pjlab.org.cn ### Dataset Summary In recent years, the field has seen a surge in the development of numerous vision-language (VL) models, such as MiniGPT-4 and LLaVA. These models showcase promising performance in tackling previously challenging tasks. However, effectively evaluating these models' performance has become a primary challenge hindering further advancement in large VL models. Traditional benchmarks like VQAv2 and COCO Caption are widely used to provide quantitative evaluations for VL models but suffer from several shortcomings: Dataset Construction: Dataset Construction: Traditional benchmarks tend to evaluate models based on their performance in various tasks, such as image captioning and visual question answering. Unfortunately, these tasks do not fully capture the fine-grained abilities that a model possesses, potentially impeding future optimization efforts. Evaluation Metrics: Existing evaluation metrics lack robustness. For example, VQAv2 targets a single word or phrase, while many current VL models generate sentences as outputs. Although these sentences may correctly answer the corresponding questions, the existing evaluation metric would assign a Fail score due to an inability to exactly match the given answer. Moreover, recently proposed subjective evaluation metrics, such as that used in mPLUG-Owl, offer comprehensive evaluation of VL models. However, these metrics struggle to scale smoothly due to the significant amount of human labor required for evaluation. Additionally, these evaluations are highly biased and difficult to reproduce. To address these limitations, we propose a novel approach by defining a set of fine-grained abilities and collecting relevant questions for each ability. We also introduce innovative evaluation strategies to ensure more robust assessment of model predictions. This new benchmark, called MMBench, boasts the following features: Data Collection: To date, we have gathered approximately 3000 questions spanning 20 ability dimensions. Each question is a multiple-choice format with a single correct answer. Evaluation: For a more reliable evaluation, we employ ChatGPT to match a model's prediction with the choices of a question, and then output the corresponding label (A, B, C, D) as the final prediction. ### Languages All of our questions are presented in single-choice question format, with the number of options ranging from 2 to 4. In addition, all these questions, options, and answers are in English. ## Dataset Structure ### Data Instances We provide a overview of an instance in MMBench as follows: ```text { 'index': 241, 'question': 'Identify the question that Madelyn and Tucker's experiment can best answer.', 'hint': 'The passage below describes an experiment. 
Read the passage and then follow the instructions below.\n\nMadelyn applied a thin layer of wax to the underside of her snowboard and rode the board straight down a hill. Then, she removed the wax and rode the snowboard straight down the hill again. She repeated the rides four more times, alternating whether she rode with a thin layer of wax on the board or not. Her friend Tucker timed each ride. Madelyn and Tucker calculated the average time it took to slide straight down the hill on the snowboard with wax compared to the average time on the snowboard without wax.\nFigure: snowboarding down a hill.' 'A': 'Does Madelyn's snowboard slide down a hill in less time when it has a thin layer of wax or a thick layer of wax?' 'B': 'Does Madelyn's snowboard slide down a hill in less time when it has a layer of wax or when it does not have a layer of wax?' 'image': xxxxxx, 'category': 'identity_reasoning', 'l2-category': 'attribute_reasoning', 'split': 'dev', 'source': 'scienceqa', } ``` ### Data Fields * `index`: the index of the instance in the dataset. * `question`: the question of the instance. * `hint (optional)`: the hint of the instance. * `A`: the first option of the instance. * `B`: the second option of the instance. * `C (optional)`: the third option of the instance. * `D (optional)`: the fourth option of the instance. * `image`: the raw image of the instance. * `category`: the leaf category of the instance. * `l2-category`: the L-2 category of the instance. * `split`: the split of the instance. * `source`: the source of the instance comes from. ### Data Splits Currently, MMBench contains 2974 instances in total, and is splitted into **dev** and **test** splits according to a 4:6 ratio. ## Additional Information ### Citation Information ``` @article{MMBench, author = {Yuan Liu, Haodong Duan, Yuanhan Zhang, Bo Li, Songyang Zhnag, Wangbo Zhao, Yike Yuan, Jiaqi Wang, Conghui He, Ziwei Liu, Kai Chen, Dahua Lin}, journal = {arXiv:2307.06281}, title = {MMBench: Is Your Multi-modal Model an All-around Player?}, year = {2023}, } ```
AhmedBou/Arabic_Quotes
--- license: apache-2.0 task_categories: - text-classification - text-generation language: - ar size_categories: - 1K<n<10K --- # Arabic Quotes Dataset ![Dataset Size](https://img.shields.io/badge/dataset%20size-5900%2B%20lines-brightgreen) ![Tags per Quote](https://img.shields.io/badge/tags%20per%20quote-3-blue) ![Language](https://img.shields.io/badge/language-Arabic-orange) ![License](https://img.shields.io/badge/license-CC%20BY%204.0-green) ## Overview The **Arabic Quotes Dataset** is an open-source collection of 5900+ quotes in the Arabic language, accompanied by up to three tags for each quote. The dataset is suitable for various Natural Language Processing (NLP) tasks, such as text classification and tagging. ## Data Description - Contains 5900+ quotes with up to three associated tags per quote. - All quotes and tags are in Arabic. ## Use Cases - Text Classification: Classify quotes into predefined categories. - Tagging: Assign relevant labels or themes to quotes. - Sentiment Analysis: Analyze sentiment expressed in quotes. - Language Modeling: Train models to generate Arabic quotes. - Information Retrieval: Retrieve quotes relevant to specific topics. ## License The "Arabic Quotes" dataset is distributed under the Apache License 2.0. Feel free to use it for any purpose, giving appropriate credit to the original source. **Github Repository:** https://github.com/BoulahiaAhmed/Arabic-Quotes-Dataset ## Data Format The dataset is available in CSV format. Each row represents a quote with its associated tags. Example structure: ``` quote,tags "أنا لا أبالي برأي الناس، أنا لست عبدًا لتقييماتهم.","[حرية, تحفيز, قوة]" "الصمت هو أكبر إجابة.", "[سكوت, حكمة]" ... ``` ---
ibm-nasa-geospatial/multi-temporal-crop-classification
--- license: cc-by-4.0 language: - en tags: - remote sensing - segmentation - crop type - foundation model size_categories: - 1K<n<10K --- # Dataset Card for Multi-Temporal Crop Classification ## Dataset Description - **Homepage: https://huggingface.co/datasets/ibm-nasa-geospatial/cdl-crops/** - **Point of Contact: Dr. Hamed Alemohammad (halemohammad@clarku.edu)** ### Dataset Summary This dataset contains temporal Harmonized Landsat-Sentinel imagery of diverse land cover and crop type classes across the Contiguous United States for the year 2022. The target labels are derived from USDA's Crop Data Layer (CDL). It's primary purpose is for training segmentation geospatial machine learning models. ### Dataset Structure ## TIFF Files Each tiff file covers a 224 x 224 pixel area at 30m spatial resolution. Each input satellite file contains 18 bands including 6 spectral bands for three time steps stacked together. Each GeoTIFF file for the mask contains one band with the target classes for each pixel. ## Band Order In each input GeoTIFF the following bands are repeated three times for three observations throughout the growing season: Channel, Name, HLS S30 Band number 1, Blue, B02 2, Green, B03 3, Red, B04 4, NIR, B8A 5, SW 1, B11 6, SW 2, B12 Masks are a single band with values: 0 : "No Data" 1 : "Natural Vegetation" 2 : "Forest" 3 : "Corn" 4 : "Soybeans" 5 : "Wetlands" 6 : "Developed/Barren" 7 : "Open Water" 8 : "Winter Wheat" 9 : "Alfalfa" 10 : "Fallow/Idle Cropland" 11 : "Cotton" 12 : "Sorghum" 13 : "Other" ## Class Distribution ### Training Data Distribution ![Training Data](training_dst.png) ### Validation Data Distribution ![Validation Data](validation_dst.png) ## Data Splits The 3,854 chips have been randomly split into training (80%) and validation (20%) with corresponding ids recorded in cvs files `train_data.txt` and `validation_data.txt`. ## Dataset Creation ### Query and Scene Selection First, a set of 5,000 chips were defined based on samples from the USDA CDL to ensure a representative sampling across the CONUS. Next, for each chip, the corresponding HLS S30 scenes between March and September 2022 were queried, and scenes with low cloud cover were retrieved. Then, three scenes are selected among the low cloudy scenes to ensure a scene from early in the season, one in the middle, and one toward the end. The three final scenes were then reprojected to CDL's projection grid (`EPSG:5070`) using bilinear interpolation. ### Chip Generation In the final step, the three scenes for each chip were clipped to the bounding box of the chip, and 18 spectral bands were stacked together. In addition, a quality control was applied to each chip using the `Fmask` layer of the HLS dataset. Any chip containing clouds, cloud shadow, adjacent to cloud or missing values were discarded. This resulted in 3,854 chips. ### Dataset Download You can download the data in `.tgz` format from this repository (you need to install [Git Large File Sotrage](https://git-lfs.com/) for this). The same version of the data is hosted on [Source Cooperative](https://beta.source.coop/repositories/clarkcga/multi-temporal-crop-classification/description) as objects on AWS S3. ### Citation If this dataset helped your research, please cite `hls-multi-temporal-crop-classification` in your publications. 
Here is an example BibTeX entry: ``` @misc{hls-multi-temporal-crop-classification, author = {Cecil, Michael and Kordi, Fatemeh and Li, Hanxi (Steve) and Khallaghi, Sam and Alemohammad, Hamed}, doi = {10.57967/hf/0955}, month = aug, title = {{HLS Multi Temporal Crop Classification}}, url = {https://huggingface.co/ibm-nasa-geospatial/multi-temporal-crop-classification}, year = {2023} } ```
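To make the band layout described above easier to work with, here is a hedged sketch (not from the original card) that reads one input chip with `rasterio` and reshapes the 18 stacked bands into (time step, spectral band, height, width); the file name is a placeholder.

```python
# Hedged sketch: read an 18-band input chip and split it into 3 time steps x 6 spectral bands.
# "chip_000.tif" is a placeholder name; band order follows the card (Blue, Green, Red, NIR,
# SW1, SW2 repeated for three HLS observations across the growing season).
import numpy as np
import rasterio

with rasterio.open("chip_000.tif") as src:
    stacked = src.read()          # shape: (18, 224, 224)

chip = stacked.reshape(3, 6, *stacked.shape[1:])  # (time step, band, height, width)
print(chip.shape)                 # (3, 6, 224, 224)

band_names = ["Blue", "Green", "Red", "NIR", "SW1", "SW2"]
first_observation = dict(zip(band_names, chip[0]))
print(first_observation["NIR"].mean())
```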
InstaDeepAI/plant-genomic-benchmark
--- tags: - DNA - Genomics - Plants pretty_name: Plant Genomic Benchmark license: cc-by-nc-sa-4.0 --- ## Dataset Overview This dataset features the 8 evaluation tasks presented in the AgroNT (A Foundational Large Language Model for Edible Plant Genomes) paper. The tasks cover single output regression, multi output regression, binary classification, and multi-label classification which aim to provide a comprehensive plant genomics benchmark. Additionally, we provide results from in silico saturation mutagenesis analysis of sequences from the cassava genome, assessing the impact of >10 million mutations on gene expression levels and enhancer elements. See the ISM section below for details regarding the data from this analysis. | Name | # of Datasets(Species) | Task Type | Sequence Length (base pair) | | -------- | ------- | -------- | ------- | | Polyadenylation | 6 | Binary Classification | 400 | | Splice Site | 2 | Binary Classification | 398 | | LncRNA | 6 | Binary Classification | 101-6000 | | Promoter Strength | 2 | Single Variable Regression | 170 | | Terminator Strength | 2 | Single Variable Regression | 170 | | Chromatin Accessibility | 7 | Multi-label Classification | 1000 | | Gene Expression | 6 | Multi-Variable Regression | 6000 | | Enhancer Region | 1 | Binary Classification | 1000 | ## Dataset Sizes | Task Name | # Train Samples | # Validation Samples | # Test Samples | | -------- | ------- | -------- | ------- | |poly_a.arabidopsis_thaliana|170835|---|30384| |poly_a.oryza_sativa_indica_group|98139|---|16776| |poly_a.trifolium_pratense|111138|---|13746| |poly_a.medicago_truncatula|47277|---|8850| |poly_a.chlamydomonas_reinhardtii|90378|---|10542| |poly_a.oryza_sativa_japonica_group|120621|---|20232| |splicing.arabidopsis_thaliana_donor|2588034|---|377873| |splicing.arabidopsis_thaliana_acceptor|1704844|---|250084| |lncrna.m_esculenta|4934|---|360| |lncrna.z_mays|8423|---|1629| |lncrna.g_max|11430|---|490| |lncrna.s_lycopersicum|7274|---|1072| |lncrna.t_aestivum|11252|---|1810| |lncrna.s_bicolor|8654|---|734| |promoter_strength.leaf|58179|6825|7154| |promoter_strength.protoplast|61051|7162|7595| |terminator_strength.leaf|43294|5309|4806| |terminator_strength.protoplast|43289|5309|4811| |gene_exp.glycine_max|47136|4803|4803| |gene_exp.oryza_sativa|31244|3702|3702| |gene_exp.solanum_lycopersicum|27321|3827|3827| |gene_exp.zea_mays|34493|4483|4483| |gene_exp.arabidopsis_thaliana|25731|3401|3402| |chromatin_access.oryza_sativa_MH63_RS2|5120000|14848|14848| |chromatin_access.setaria_italica|5120000|19968|19968| |chromatin_access.oryza_sativa_ZS97_RS2|5120000|14848|14848| |chromatin_access.arabidopis_thaliana|5120000|9984|9984| |chromatin_access.brachypodium_distachyon|5120000|14848|14848| |chromatin_access.sorghum_bicolor|5120000|29952|29952| |chromatin_access.zea_mays|6400000|79872|79872| |pro_seq.m_esculenta|16852|1229|812| *** It is important to note that fine-tuning for lncrna was carried out using all datasets in a single training. The reason for this is that the datasets are small and combining them helped to improve learning. ## Example Usage ```python from datasets import load_dataset task_name='terminator_strength.protoplast' # one of the task names from the above table dataset = load_dataset("InstaDeepAI/plant-genomic-benchmark",task_name=task_name) ``` ## In Silico Saturation Mutagensis ### File structure for: ISM_Tables/Mesculenta_305_v6_PROseq_ISM_LOG2FC.txt.gz Intergenic enhancer regions based on Lozano et al. 
2021 (https://pubmed.ncbi.nlm.nih.gov/34499719/) <br> Genome version: Manihot esculenta reference genome v6.1 from Phytozome <br> CHR: Chromosome <br> POS: Physical position (bp) <br> REF: Reference allele <br> ALT: Alternative allele <br> LOG2FC: Log fold change in Intergenic enhancer probability (log2(p_mutated_sequence / p_original_sequence)) <br> ### File structure for: ISM_Tables/Mesculenta_v6_GeneExpression_ISM_LOG2FC.txt.gz Gene expression prediction based on: Wilson et al. 2016 (https://pubmed.ncbi.nlm.nih.gov/28116755/) <br> Genome version: Manihot esculenta reference genome v6 from Ensembl 56 <br> CHR: Chromosome <br> POS: Physical position (bp) <br> REF: Reference allele <br> ALT: Alternative allele <br> GENE: Gene ID <br> STRAND: Gene strand <br> TISSUE: Tissue type (Acronyms detailed in Figure 1 of Wilson et al.) <br> LOG2FC: Gene expression log fold change (log2(gene_exp_mutated_sequence / gene_exp_original_sequence)) <br>
totally-not-an-llm/EverythingLM-data
--- license: mit --- # EverythingLM Dataset **EverythingLM** is a diverse instruct dataset consisting of ~1k sets of system prompts, instructions, and corresponding responses. These sets were generated using principles from both evol-instruct and Orca. The dataset encompasses a wide array of topics and interactions. ### Categories: - Reasoning - Creative Writing - General Knowledge - Brainstorming - Search Query - Coding - Basic Instruct We also leverage various system prompts for evol-instruct and for responding to prompts. This dataset has also been filtered to remove OpenAI alignment. ### How it stands out: - Long, detailed outputs - Humanlike creativity - CoT reasoning - Complex & challenging tasks ### Plans: - Train Llama 7b & 13b models - Train Llama 70b QLoRA - Generate V2 of the dataset, with more categories and GPT-4 ### How does it work? 1. Generate list of categories, prompts, sysprompts, etc (human) 2. Generate seed prompts (GPT) 3. Evolve prompts (GPT) 4. Generate responses (GPT) 5. Convert to Alpaca dataset format Included in this repo is the script to generate the dataset. However, it is buggy and probably not the best implementation possible.
openlifescienceai/Med-HALT
--- license: apache-2.0 configs: - config_name: IR_abstract2pubmedlink data_files: "IR_abstract2pubmedlink/IR_abstract2pubmedlink.csv" - config_name: IR_pubmedlink2title data_files: "IR_pubmedlink2title/IR_pubmedlink2title.csv" - config_name: IR_pmid2title data_files: "IR_pmid2title/IR_pmid2title.csv" - config_name: IR_title2pubmedlink data_files: "IR_title2pubmedlink/IR_title2pubmedlink.csv" - config_name: reasoning_fake data_files: "reasoning_fake/reasoning_fake.csv" - config_name: reasoning_nota data_files: "reasoning_nota/reasoning_nota.csv" - config_name: reasoning_FCT data_files: "reasoning_FCT/reasoning_FCT.csv" --- # Med-HALT: Medical Domain Hallucination Test for Large Language Models This is a dataset used in the [Med-HALT](https://arxiv.org/abs/2307.15343) research paper. This research paper focuses on the challenges posed by hallucinations in large language models (LLMs), particularly in the context of the medical domain. We propose a new benchmark and dataset, Med-HALT (Medical Domain Hallucination Test), designed specifically to evaluate hallucinations. Med-HALT provides a diverse multinational dataset derived from medical examinations across various countries and includes multiple innovative testing modalities. Med-HALT includes two categories of tests reasoning and memory-based hallucination tests, designed to assess LLMs' problem-solving and information retrieval abilities. Our study evaluated leading LLMs, including Text Davinci, GPT-3.5, LlaMa and Falcon, revealing significant differences in their performance. The paper provides detailed insights into the dataset, promoting transparency and reproducibility. Through this work, we aim to contribute to the development of safer and more reliable language models in healthcare. Our benchmark can be found at https://github.com/medhalt/medhalt ## Benchmark The Med-HALT framework proposes a two-tiered approach to evaluate the presence and impact of hallucinations in generated outputs. #### Reasoning Hallucination Tests (RHTs) <details> <summary>False Confidence Test (FCT)</summary> The False Confidence Test (FCT) involves presenting a multiple-choice medical question and a randomly suggested correct answer to the language model, tasking it with evaluating the validity of the proposed answer and providing detailed explanations for its correctness or incorrectness, in addition to explaining why the other options are wrong. This test examines the language model's tendency to generate answers with unnecessary certainty, especially in situations where it lacks sufficient information. </details> <details> <summary>None of the Above Test (Nota)</summary> In the None of the Above (Nota) Test, the model is presented with a multiple-choice medical question where the correct answer is replaced by 'None of the above', requiring the model to identify this and justify its selection. It tests the model's ability to distinguish irrelevant or incorrect information. </details> <details> <summary>Fake Questions Test (FQT)</summary> This test involves presenting the model with fake or nonsensical medical questions to examine whether it can correctly identify and handle such queries. We employed a hybrid approach for generating fake questions, where a subset was crafted by human experts, while the remaining were generated using GPT-3.5. </details> #### Memory Hallucination Tests (MHTs) <details> <summary>Abstract-to-Link Test</summary> Given the abstract of a PubMed article, the LLM is asked to generate the corresponding link to the article. 
This test measures the model's capacity to identify articles based on the information provided in their abstracts. </details> <details> <summary>PMID-to-Title Test</summary> In this test, the LLM is given the PubMed ID (PMID) of an article and is asked to generate the title of the article. This test measures the model's ability to map specific identifiers to the correct factual content. </details> <details> <summary>Title-to-Link Test</summary> Given the title of a PubMed article, the LLM is prompted to provide the PubMed link of the article. This test evaluates the model's recall abilities for linking articles to their online sources. </details> <details> <summary>Link-to-Title Test</summary> Similar to the previous one, in this test, we give the PubMed link of an article as input and ask the language model to provide the title as output. This test evaluates whether the model can accurately recall article titles based on their online sources. </details> ## Citation ``` @article{Medhalt, title={Med-HALT: Medical Domain Hallucination Test for Large Language Models}, author={Umapathi, Logesh Kumar and Pal, Ankit and Sankarasubbu, Malaikannan}, journal={arXiv preprint}, year={2023} } ```
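As a usage note, any of the configurations declared in the YAML header above (the `IR_*` memory tests and the `reasoning_*` tests) can be loaded by name with the 🤗 `datasets` library; a minimal sketch:

```python
from datasets import load_dataset

# Config names come from the YAML header above, e.g. "reasoning_FCT" or "IR_pmid2title".
ds = load_dataset("openlifescienceai/Med-HALT", "reasoning_FCT")
split = list(ds.keys())[0]  # inspect whichever split the config exposes
print(ds[split][0])
```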
nampdn-ai/mini-en
---
license: apache-2.0
task_categories:
- text-generation
language:
- en
pretty_name: Tiny English
size_categories:
- 100K<n<1M
source_datasets:
- tiiuae/falcon-refinedweb
- JeanKaddour/minipile
---

# Tiny English

A collection of short texts curated for long-term human value. The texts have been filtered from the [falcon-refinedweb](https://arxiv.org/abs/2306.01116) and [minipile](https://arxiv.org/abs/2304.08442) datasets to improve quality while keeping the collection tiny in size.

The tiny-en dataset is small yet highly diverse, which makes it a practical resource for training natural language processing models. The content has been selected for its long-term human value, so researchers and developers can train on a diverse, high-quality corpus without the overhead of working with very large amounts of data.

The short length of the texts makes the dataset easy to work with, while the focus on long-term human value is intended to help models trained on it produce meaningful and relevant results.

Explore the repository and discover the potential of the tiny series datasets for your research and development efforts. I am always looking for ways to improve this dataset and make it even more useful to the community, so please don't hesitate to share your feedback with me.

Thank you for your interest in tiny-en! 😊
AmelieSchreiber/binding_sites_random_split_by_family_550K
--- license: mit language: - en tags: - biology - protein sequences - binding sites - active sites size_categories: - 100K<n<1M --- This dataset is obtained from a [UniProt search](https://www.uniprot.org/uniprotkb?facets=proteins_with%3A9%2Cannotation_score%3A4&fields=accession%2Cprotein_families%2Cft_binding%2Cft_act_site%2Csequence%2Ccc_similarity&query=%28ft_binding%3A*%29+AND+%28family%3A*%29&view=table) for protein sequences with family and binding site annotations. The dataset includes unreviewed (TrEMBL) protein sequences as well as reviewed sequences. We refined the dataset by only including sequences with an annotation score of 4. We sorted and split by family, where random families were selected for the test dataset until approximately 20% of the protein sequences were separated out for test data. We excluded any sequences with `<`, `>`, or `?` in the binding site annotations. We furthermore included any active sites that were not listed as binding sites in the labels (seen in the merged "Binding-Active Sites" column). We split any sequence longer than 1000 residues into non-overlapping sections of 1000 amino acids or less after the train test split. This results in subsequences of the original protein sequence that may be too short for consideration, and filtration of the dataset to exclude such subsequences or segment the longer sequences in a more intelligent way may improve performance. Pickle files containing only the train/test sequences and their binary labels are also available and can be downloaded for training or validation of the train/test metrics.
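A rough sketch for reading one of the pickle files mentioned above is shown below. The file name and the exact object layout (assumed here to be parallel collections of sequences and per-residue binary labels) are illustrative assumptions only; check the repository files for the real names and structure.

```python
import pickle

# Hypothetical file name; substitute the actual pickle file from this repository.
with open("train_sequences_labels.pkl", "rb") as f:
    data = pickle.load(f)

# Assumed layout: (sequences, labels), where labels[i] is a per-residue 0/1 vector
# marking binding/active sites for sequences[i].
sequences, labels = data
print(len(sequences), "sequences loaded")
print(sequences[0][:60])
print(labels[0][:60])
```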
wuliangfo/Chinese-Pixiv-Novel
---
license: openrail
---

This is an R-18 (including R-18G) Simplified Chinese novel dataset scraped from the Pixiv website.

It contains 145,163 novels in total; the data cut-off is 7 PM Beijing time on September 12, 2023.

Storage layout: `Pixiv/userID/ID.txt` holds the plain-text body of each novel, and `Pixiv/userID/ID-meta.txt` holds additional information (including tag, title, Description, etc.).

The data has not been cleaned and may contain low-quality content.
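Given the `Pixiv/userID/ID.txt` / `Pixiv/userID/ID-meta.txt` layout described above, iterating over the corpus can be sketched roughly as follows (pure path handling; no assumptions beyond the stated layout):

```python
from pathlib import Path

root = Path("Pixiv")

# Walk user directories and pair each novel body with its metadata file.
for body_path in root.glob("*/*.txt"):
    if body_path.name.endswith("-meta.txt"):
        continue
    meta_path = body_path.with_name(body_path.stem + "-meta.txt")
    text = body_path.read_text(encoding="utf-8", errors="ignore")
    meta = meta_path.read_text(encoding="utf-8", errors="ignore") if meta_path.exists() else ""
    # ... filter / process text and meta here ...
```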
jondurbin/airoboros-2.2.1
--- license: other --- ## Overview This dataset is a slight update to 2.2. ### Re-generated writing responses Many of the responses were generated by gpt-4-0613, which unfortunately produces much shorter and "dumber" (i.e. various readability scores increased compared to gpt-4-0314, e.g. Flesch, Gunning Fog, etc.) responses compared to gpt-4-0314. I have re-created many of these responses, using gpt-4-0314, temperature 0.7, and the following prompt (which produced 3-5x longer responses): ``` You are to emulate a world class, intelligent author who uses a diverse, interesting vocabulary to completely captivate the audience with brilliant and interesting writing. You provide very lengthy and detailed responses. Remember, you are to emulate a human writer, so the output should not sound like it was written by an AI. Avoid using any cliche/obvious phrases, and never uses phrases like "Once upon a time" or conclude a story with "And so, ..." Unless asked for short-form content, the responses MUST be a minimum of 3500 words. Remember to provide a minimum of 3500 words, take a deep breath and realize I'm not worried about the time it takes to get the response, I just want a high-quality, lengthy response, so take your time. Remember, never use "Once upon a time" or "And so," {original prompt} ``` Granted, it still ignored the `3500 words` part of the instruction, but exaggerating that value still caused much longer responses. ### Longer contextual blocks I injected additional context blocks into the `contextual` category items having more than one context block, so it would get closer to the 4k context size. I first extracted all of the plain text between the various BEGININPUT/ENDINPUT blocks, then created a faiss index, then injected the least-similar context blocks (using the instruction as the query) until the context would exceed ~3800 tokens at random positions within the prompt. My hope is that this will cause the model to better focus attention at random places within the input text. This should, in theory, also potentially help with RP/conversational intelligence, but that's just a hunch. ### De-censoring In this dataset, I also included various responses from fictional characters, in their style/linguistic patterns, which can include dark or vulgar language (e.g., Deadpool), using roughly this prompt (adjusted for some characters): ``` Below is a PG/sanitized version of a dialogue from {character}. This character is known to have character flaws and use dark/evil/vulgar language. Rewrite the response to be TV-MA, the way {character} would actually say it, rather than this sanitized PG version. {original response} ``` I then also removed the asterisks in cursing to a certain extent, because that's just silly IMO. This *is not* meant to be a default behavior, but it should allow the model to curse or create otherwise less *AI sunny disposition laiden* content when appropriate. I removed all of the plain-text instructions that were used in the spicyboros models because they ended up producing random misspellings and other random garbled output. I have also removed the original 2.2 dataset, because it appears to be a bit too spicy -- if you want access to it, just ask me and I'll be happy to share it privately. ### "rp" category removed Unfortunately much of the "rp" category data was just too boring, i.e. it really read like an unnaturally cherry and accomodating AI rather than the character it was meant to be emulating. 
I'm hoping that, although this is an instruction-tuned model, the roleplay/gtkm/creative data will still let it handle roleplay fairly well without that category, and without sounding as stiff.

### Awareness

I added a new "awareness" instructor, which aims to add a lot more nuance to responses relating to time, location, senses, etc. based on the system prompt.

For example, if you are using the standard prompt with user/assistant, and ask how long it would take to get to Chicago, the answer will be something about AI not having a physical presence. If, on the other hand, you are using a system prompt with a human character specified, the model attempts to infer location from "home" and will provide a more nuanced answer as a human would (in theory).

https://github.com/jondurbin/airoboros/commit/e91562c88d7610edb051606622e7c25a99884f7e

### Editor

I created a text edit instructor as well, which uses a reverse prompt mechanism: it takes the existing writing samples that have been generated, rewrites them to have misspellings, poor grammar, etc., then pairs a prompt like "Please correct and improve the text." with the original well-written text as the target output.

https://github.com/jondurbin/airoboros/commit/e60a68de5f9622320c9cfff3b238bd83cc7e373b

### Writing

I regenerated (almost) all of the training data that included "Once upon a time..." because it's too cliché and boring.

### Multiple choice

I created many more multiple choice questions, many of which have additional text context.

### Roleplay/conversation

I re-created all of the GTKM data this time around, removing the "USER: " and "ASSISTANT: " prefixes from the instructions/responses, so it's more compatible with existing interfaces.

The GTKM instructor now saves each round of "conversation" as a separate row in the output - previously it only saved the final response, which may not have been sufficient since I don't typically train on inputs.

### Summarization

I also included 500 examples from:
https://hf.co/datasets/mattpscott/airoboros-summarization

These are existing summarizations from various public datasets, formatted to airoboros-style contextual QA.

Thanks Matt!

### Usage/license info

Much (most) of the data was generated via gpt-4 API calls, which have a restriction in the ToS about "competing" models. Please seek legal advice if you plan to build or use a model that includes this dataset in a commercial setting.
AlekseyKorshuk/PIPPA-lmgym
--- configs: - config_name: default data_files: - split: train path: data/train-* dataset_info: features: - name: input_text dtype: string - name: output_text dtype: string splits: - name: train num_bytes: 32569932093 num_examples: 398603 download_size: 443538444 dataset_size: 32569932093 --- # Dataset Card for "PIPPA-lmgym" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Illia56/Military-Aircraft-Detection
---
license: apache-2.0
task_categories:
- object-detection
- zero-shot-classification
- zero-shot-image-classification
- depth-estimation
- image-classification
- image-segmentation
tags:
- Image
- 'Computer Vision '
- Military
- Aviation
- Engineering
size_categories:
- 1M<n<10M
---

Dataset for object detection of military aircraft, with bounding boxes in PASCAL VOC format (xmin, ymin, xmax, ymax).

43 aircraft types (A-10, A-400M, AG-600, AV-8B, B-1, B-2, B-52, Be-200, C-130, C-17, C-2, C-5, E-2, E-7, EF-2000, F-117, F-14, F-15, F-16, F/A-18, F-22, F-35, F-4, J-20, JAS-39, MQ-9, Mig-31, Mirage2000, P-3(CP-140), RQ-4, Rafale, SR-71(may contain A-12), Su-34, Su-57, Tornado, Tu-160, Tu-95(Tu-142), U-2, US-2(US-1A Kai), V-22, Vulcan, XB-70, YF-23)

Please let me know if you find wrong labels or duplicated images.
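Since the annotations follow the PASCAL VOC XML convention, one box-reading sketch looks like this (the annotation file path is an assumption; adjust it to wherever the XML files live in this repository):

```python
import xml.etree.ElementTree as ET

def read_voc_boxes(xml_path):
    """Parse a PASCAL VOC annotation file into (label, xmin, ymin, xmax, ymax) tuples."""
    root = ET.parse(xml_path).getroot()
    boxes = []
    for obj in root.iter("object"):
        bb = obj.find("bndbox")
        boxes.append((
            obj.findtext("name"),
            int(float(bb.findtext("xmin"))),
            int(float(bb.findtext("ymin"))),
            int(float(bb.findtext("xmax"))),
            int(float(bb.findtext("ymax"))),
        ))
    return boxes

# Hypothetical path, for illustration only:
# print(read_voc_boxes("annotations/F-16_0001.xml"))
```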
chargoddard/rpguild
--- language: - en license: cc-by-nc-4.0 size_categories: - 100K<n<1M task_categories: - conversational - text-generation dataset_info: - config_name: default features: - name: username dtype: string - name: char_name dtype: string - name: bio dtype: string - name: context list: - name: text dtype: string - name: username dtype: string - name: char_name dtype: string - name: reply dtype: string - name: has_nameless dtype: bool - name: char_confidence dtype: float64 splits: - name: train num_bytes: 1921588254 num_examples: 140469 download_size: 764073630 dataset_size: 1921588254 - config_name: grammar_filtered features: - name: username dtype: string - name: char_name dtype: string - name: bio dtype: string - name: context list: - name: char_name dtype: string - name: text dtype: string - name: username dtype: string - name: reply dtype: string - name: char_confidence dtype: float64 splits: - name: train num_bytes: 371438765 num_examples: 27053 download_size: 166606326 dataset_size: 371438765 - config_name: high_confidence features: - name: username dtype: string - name: char_name dtype: string - name: bio dtype: string - name: context list: - name: text dtype: string - name: username dtype: string - name: char_name dtype: string - name: reply dtype: string - name: has_nameless dtype: bool - name: char_confidence dtype: float64 splits: - name: train num_bytes: 949419370.7676569 num_examples: 69403 download_size: 386317057 dataset_size: 949419370.7676569 - config_name: pruned features: - name: username dtype: string - name: char_name dtype: string - name: bio dtype: string - name: context list: - name: text dtype: string - name: username dtype: string - name: char_name dtype: string - name: reply dtype: string - name: has_nameless dtype: bool - name: char_confidence dtype: float64 splits: - name: train num_bytes: 782484734.2032762 num_examples: 57200 download_size: 326987882 dataset_size: 782484734.2032762 configs: - config_name: default data_files: - split: train path: data/train-* - config_name: grammar_filtered data_files: - split: train path: grammar_filtered/train-* - config_name: high_confidence data_files: - split: train path: high_confidence/train-* - config_name: pruned data_files: - split: train path: pruned/train-* tags: - roleplay - not-for-all-audiences --- Data scraped from [roleplayerguild](https://www.roleplayerguild.com/) and parsed into prompts with a conversation history and associated character bio. Thanks to an anonymous internet stranger for the original scrape. As usernames can be associated with multiple character biographies, assignment of characters is a little fuzzy. The `char_confidence` feature reflects how likely this assignment is to be correct. Not all posts in the conversation history necessarily have an associated character name. The column `has_nameless` reflects this. Each row should fit into 4096 Llama tokens, depending on your prompt format - there's built in slack of 128 tokens + 8 per message. There are a few configurations available. I *highly* recommend not using the default configuration as it contains a lot of questionable quality data. 
The options, in order of increasing usefulness: * `default` - ocean of garbage with some gems * `high_confidence` - only entries with no nameless posts that are highly likely to be assigned a correct `char_name`/`bio` * `pruned` - Further filtered from `high_confidence` to remove common types of junk replies * `grammar_filtered` - run through a grammar checker to remove rows with too many mistakes The `grammar_filtered` configuration is almost certainly what you want to be using. (Unless you want to do your own processing and filtering.)
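To follow that recommendation, the `grammar_filtered` configuration can be loaded by name (a minimal sketch using the 🤗 `datasets` library; feature names come from the YAML header above):

```python
from datasets import load_dataset

ds = load_dataset("chargoddard/rpguild", "grammar_filtered", split="train")

example = ds[0]
print(example["char_name"], "played by", example["username"])
print(example["bio"][:200])
print(len(example["context"]), "context messages before the reply")
```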
microsoft/kitab
--- license: mit configs: - config_name: one-book-constraints data_files: - split: test path: "data/KITAB-ONE-BOOK-CONSTRAINTS.json" - config_name: two-book-constraints data_files: - split: test path: "data/KITAB-TWO-BOOK-CONSTRAINTS.json" - config_name: author-metadata data_files: - split: test path: "data/KITAB-author-metadata.json" config_names: - one-book-constraints - two-book-constraints - author-metadata --- ## Overview 🕮 KITAB is a challenging dataset and a dynamic data collection approach for testing abilities of Large Language Models (LLMs) in answering information retrieval queries with constraint filters. A filtering query with constraints can be of the form `"List all books written by Toni Morrison that were published between 1970-1980"`. The dataset was originally contributed by the paper ["KITAB: Evaluating LLMs on Constraint Satisfaction for Information Retrieval"](https://arxiv.org/abs/2310.15511) Marah I Abdin, Suriya Gunasekar, Varun Chandrasekaran, Jerry Li, Mert Yuksekgonul, Rahee Ghosh Peshawaria, Ranjita Naik, and Besmira Nushi. 2023. The dataset is named after the word [kitab](https://en.wikipedia.org/wiki/Kitab), which is the word for "book" in Arabic, Swahili, Urdu, Hindi and various Indian and Turkic languages. KITAB consists of book-related data across more than 600 authors and 13,000 queries with varying number of constraints and complexity. In each query in the dataset, the first constraint is always fixed to an author and the following can vary among the following types of book constraints to test for different constraint satisfaction capabilities: - lexical (title starts or ends with a letter, word count in title) - temporal (published between start and end year) - named entity (city or human name present or not present in title) ## What is available in this repository? This repository contains the following artifacts: - All data for the KITAB sample used in the original paper. This consists of the set of authors, their corresponding books, and the set of queries with constraints. - Example code for generating a new sample with a different set of authors. Here the sampling and data collection steps do not include the generation of queries as these may change according to the evaluation usage needs for the data. The example code also shows how to evaluate a potential model output with a list of books against the provided ground truth in KITAB, by following the same evaluation process as in the original paper. Note that this evaluation tends to relax some of the constraint satisfaction requirements in particular when the model may come up with only a partial title. - All prompts that were used in the original paper to evaluate GPT-4 and GPT-3.5. ## Data - [KITAB-ONE-BOOK-CONSTRAINTS.json](./data/KITAB-ONE-BOOK-CONSTRAINTS.json) and [KITAB-TWO-BOOK-CONSTRAINTS.json](./data/KITAB-TWO-BOOK-CONSTRAINTS.json) - correspond to queries with one and two book constraints. Each file has all the sufficient information that can be used to recreate a prompt query including the author, their birth year, number of sitelinks on WikiData, the constraint type(s), the constraint(s) expressed in natural language, the list of all books by the author, and the mapped list of books by the author that satisfy the constraint(s). 
``` KITAB-ONE-BOOK-CONSTRAINTS_features = { "Author": "author name", "Birth Year": "author birth year", "# of sitelinks": "number of external links related to the author", "constraint_id": "unique id for the constraint", "constraint_type": "type of the constraint", "constraints": "the constraint", "mapped_books": "list of books by the author mapped to the constraint", "all_books": "full list of books by author post cleaning from openlibrary", "raw_books": "raw list of books by author from openlibrary", } ``` - [KITAB-author-metadata.json](./data/KITAB-author-metadata.json) - contains the set of 611 authors along with their birth year, the number of sitelinks in Wikidata, and their corresponding Open Library and WikiData identifiers. - [KITAB-book-metadata.tar.gz](./data/KITAB-book-metadata.tar.gz) - contains a json file per author with all books retrieved from OpenLibrary for that author. The files contain the following information per title: the Open Library Id for the book, the Wikidata ID (if it exists), list of languages in which it was published, number of editions, number of words in the title, the earliest publishing year, city names found in the title (if any), a modified version of the title in lowercase that stripes stop words like "A" and "The" from the title, a set of of other redundant versions of the same title as found in Open Library (if any). ## Code and evaluation scripts Example notebooks included in this repository: - [collect_authors_from_wikidata.py](./code/data_sampling/collect_authors_from_wikidata.py) and [wikidata_open_library_author_profiling.ipynb](./code/data_sampling/wikidata_open_library_author_profiling.ipynb) - example code for generating a new author sample from WikiData and OpenLibrary. Here, we also make available the longer list of authors that was originally sampled from WikiData to facilitate the sampling process although future work may also choose to repeat this step as needed. The full list can be found in: [wikidata_authors_crawl.csv](./code/data_sampling/wikidata_authors_crawl.csv). - [fetch_book_data.py](./code/data_sampling/fetch_book_data.py) - example code for collecting book data for the set of authors sampled in the previous steps. Pulls data from OpenLibrary and WikiData to curate and clean the sample. - [evaluation.ipynb](./code/evaluation.ipynb) - example code for evaluating model outputs from our [prompts](./prompts/) against ground truth KITAB data. Here, we also make available the GPT-4 output on human name detection, although as models improve future work may also choose to repeat this step as needed. Results can be found in: [gpt_4_name_data_processed.csv](./code/utils/gpt_4_name_data_processed.csv). ## Prompts We use the following prompt templates for different experimental conditions on the KITAB data: [**ALL-BOOKS**]() \([Template 1](./prompts/Template_1.md)\): List all books from the author. This condition enables us to estimate an upper bound of model performance in retrieving relevant information for all queries, regardless of other constraints. [**NO-CONTEXT**]() \([Template 2a](./prompts/Template_2a.md)\): List all books from the author that also satisfy other book constraints. [**WITH-CONTEXT**]() \([Template 2b](./prompts/Template_2b.md)\): First, provide a full list of books from the author as input context to the model. Then, ask the model to list all books from the author that also satisfy other book constraints. 
[**SELF-CONTEXT**]() \([Template 3](./prompts/Template_3.md)\): Ask the model to first self-retrieve all books from the author, and then use that list to find those that also satisfy book constraints. [**NAME-CHECK**]() \([Template 4](./prompts/Template_4.md)\): Ask the model to find all book in a given list that contain a human name. ## Data Collection and Statistics The author list was initially randomly sampled from [WikiData](https://www.wikidata.org/) and then filtered down to 611 authors to avoid potentially inaccurate data and extreme outliers. For example, this involved removing authors that have very few or too many books and authors that were born before 1850. The collected book data was derived from [Open Library](https://openlibrary.org/) and contains all books from the author that are tagged to be in English by Open Library or detected to be in English by the Language Detection service from the [Azure Cognitive Services API](https://learn.microsoft.com/en-us/azure/ai-services/language-service/language-detection/overview). More details about author sampling and book data collection and cleaning are present in the paper. Since there exists a large number of constraint instances depending on their cardinality, we subsample from the potential large set of queries in a way that ensures a balanced representation across constraint types, and a variety of constraints that have different constrainedness (i.e., defined as the complement of the ratio between the number of books that satisfy the constraints with the total number of all books from the author). The dataset also contains “unsatisfiable” constraints, which do not match any book titles in our data. This constitutes 7.99% of the queries with only one book constraint. The final dataset contains 8239 single-constraint queries and 4750 double-constraint queries. The table below shows how these queries are distributed across different constraint types. For all double-constraint queries, both constraints are individually satisfiable and generated by combining our single constraint data. Only 0.76% of the queries are jointly unsatisfiable across both constraints. <aside> <center> <style type="text/css"> .tg {border-collapse:collapse;border-color:#ccc;border-spacing:0;border-style:solid;border-width:1px;} .tg td{background-color:#fff;border-color:#ccc;border-style:solid;border-width:0px;color:#333; font-family:Arial, sans-serif;font-size:14px;overflow:hidden;padding:10px 5px;word-break:normal;} .tg th{background-color:#50B49A;border-color:#ccc;border-style:solid;border-width:0px;color:#333; font-family:Arial, sans-serif;font-size:14px;font-weight:normal;overflow:hidden;padding:10px 5px;word-break:normal;color:white} .tg .tg-m5nv{border-color:#cccccc;text-align:center;vertical-align:top} .tg .tg-x9uu{border-color:#cccccc;font-weight:bold;text-align:center;vertical-align:top} .tg .tg-2bev{border-color:#cccccc;text-align:left;vertical-align:top} .tg .tg-3cmc{border-color:#cccccc;text-align:right;vertical-align:top} </style> <table class="tg"> <caption>KITAB statistics on constraint frequency and average constrainedness. Two book constraint queries have more than one constraint type. <br> Constrainedness is defined as the complement of the ratio between the number of solutions S that satisfy the constraint and the total number of items in the domain N (higher constrainedness, more complex), i.e., κ = 1 - S/N. 
</caption> <thead> <tr> <th class="tg-m5nv"></th> <th class="tg-x9uu" colspan="2">One book constraints</th> <th class="tg-x9uu" colspan="2">Two book constraints</th> </tr> <tr> <th class="tg-m5nv"><span style="font-weight:bold">Constraint Type</span></th> <th class="tg-m5nv"><span style="font-weight:bold"># queries</span></td> <th class="tg-x9uu"><span style="font-weight:bold">constrainedness</span></td> <th class="tg-x9uu"><span style="font-weight:bold"># queries</span></td> <th class="tg-x9uu"><span style="font-weight:bold">constrainedness</span></td> </tr> </thead> <tbody> <colgroup> <col style="width: 120px"> <col style="width: 80px"> <col style="width: 100px"> <col style="width: 80px"> <col style="width: 100px"> </colgroup> <tr> <td class="tg-2bev">starts-with</td> <td class="tg-3cmc">598</td> <td class="tg-3cmc">0.90</td> <td class="tg-3cmc">2163</td> <td class="tg-3cmc">0.92</td> </tr> <tr> <td class="tg-2bev">ends-with</td> <td class="tg-3cmc">482</td> <td class="tg-3cmc">0.89</td> <td class="tg-3cmc">1782</td> <td class="tg-3cmc">0.91</td> </tr> <tr> <td class="tg-2bev">word-count</td> <td class="tg-3cmc">1672</td> <td class="tg-3cmc">0.53</td> <td class="tg-3cmc">1630</td> <td class="tg-3cmc">0.81</td> </tr> <tr> <td class="tg-2bev">human-name</td> <td class="tg-3cmc">611</td> <td class="tg-3cmc">0.77</td> <td class="tg-3cmc">292</td> <td class="tg-3cmc">0.89</td> </tr> <tr> <td class="tg-2bev">no-human-name</td> <td class="tg-3cmc">611</td> <td class="tg-3cmc">0.23</td> <td class="tg-3cmc">801</td> <td class="tg-3cmc">0.78</td> </tr> <tr> <td class="tg-2bev">city-name</td> <td class="tg-3cmc">611</td> <td class="tg-3cmc">0.92</td> <td class="tg-3cmc">197</td> <td class="tg-3cmc">0.81</td> </tr> <tr> <td class="tg-2bev">no-city-name</td> <td class="tg-3cmc">611</td> <td class="tg-3cmc">0.08</td> <td class="tg-3cmc">831</td> <td class="tg-3cmc">0.77</td> </tr> <tr> <td class="tg-2bev">publishing-year</td> <td class="tg-3cmc">3043</td> <td class="tg-3cmc">0.80</td> <td class="tg-3cmc">1804</td> <td class="tg-3cmc">0.89</td> </tr> <tr> <td class="tg-2bev">Summary</td> <td class="tg-3cmc">8239</td> <td class="tg-3cmc">0.67</td> <td class="tg-3cmc">4750</td> <td class="tg-3cmc">0.87</td> </tr> </tbody> </table> </center> <br><br> </aside> <figure><center> <img src="figures/popularity_wide.png" width="1000"> <figcaption>Distribution of KITAB queries across author popularity as measured by the number of sitelinks on Wikidata, for queries with a single book constraint (left) and two book constraints (right).</figcaption> </center> </figure> <figure><center> <img src="figures/constrainedness_wide.png" width="1000"> <figcaption>Distribution of queries across author constrainedness as measured by the complement of the ratio between the number of books that satisfy the book constraints and the total number of books from the author. Distribution is shown for queries with a single book constraint (left) and two book constraints (right). Note that most of the distribution in the lower range of constrainedness is dominated by constraints that require no human name or no city name in the title, which are naturally easier to satisfy.</figcaption></center> </figure> ## Responsible AI Considerations *Data Cleaning*: Despite our best efforts in collecting a complete and accurate set of books, we also faced a variety of challenges in retrieval and cleaning, which we further describe in Appendix C.1 in the paper. 
To estimate the extent to which potential data cleaning issues may impact the data quality of KITAB and further evaluation, we also undertook a manual data annotation exercise during which we searched on the web for titles provided by GPT4 and GPT3.5 but that were marked as not from the author in our dataset. In summary, we find that based on a manual annotation of a subsample of queries, less than 5% of the queries to GPT4 and less than 6% of the queries to GPT3.5 may potentially be affected by cases where the model finds a book title that is not in KITAB and that will consequently be marked as not from the author during our evaluation. While this can be remediated by using further data sources, the impact of missing information on model comparison is minor.

*Human Names*: Entity recognition for human names was done using both the [Azure Cognitive Services API](https://learn.microsoft.com/en-us/azure/ai-services/language-service/language-detection/overview) and GPT4 (Template 4 in Appendix D in the paper), as we found the two approaches to be complementary for detecting names from different cultures. Note that even after using both these resources, there may still be names that are not recognized by either of these APIs, which is a testament to the fact that more work is required to improve the quality of service of entity recognition for fairness across different languages and cultures.

*City Names*: For city names, we use the [Azure Cognitive Services API](https://learn.microsoft.com/en-us/azure/ai-services/language-service/named-entity-recognition/overview) along with [Geonames](https://public.opendatasoft.com/explore/dataset/geonames-all-cities-with-a-population-1000), a database of cities with more than 1000 inhabitants.

*Author representation*: The list of authors in KITAB was sampled randomly from a large set of authors present in Open Library. We see that the rate of irrelevant information generated by current models increases with a lower number of sitelinks in Wikidata. Since the number of sitelinks may also correlate with the age (birth year) of the author or even their nationality and how well their community is linked to the World Wide Web, this observation has important implications for model quality of service across different geographical regions and author popularity and age. While KITAB naturally does contain more authors with a lower number of sitelinks (as indicated by its long-tail distribution of author count vs. their popularity), future fairness measurement investigations in this regard may also need to oversample explicitly from cohorts belonging to given demographic and geographical attributes.
## State-of-the-art results on KITAB <aside> <center> <style type="text/css"> .tg {border-collapse:collapse;border-spacing:0;} .tg td{border-color:black;border-style:solid;border-width:1px;font-family:Arial, sans-serif;font-size:14px; overflow:hidden;padding:10px 5px;word-break:normal;} .tg th{border-color:black;border-style:solid;border-width:1px;font-family:Arial, sans-serif;font-size:14px; font-weight:normal;overflow:hidden;padding:10px 5px;word-break:normal;} .tg .tg-qwh1{border-color:#cccccc;font-weight:bold;text-align:left;vertical-align:top} .tg .tg-omta{background-color:#50b49a;border-color:#cccccc;color:#ffffff;text-align:left;vertical-align:top} .tg .tg-h4uz{background-color:#50b49a;border-color:#cccccc;color:#ffffff;font-weight:bold;text-align:center;vertical-align:top} .tg .tg-tr5t{border-color:#cccccc;text-align:right;vertical-align:top} </style> <table class="tg" style="undefined;table-layout: fixed; width: 675px"> <colgroup> <col style="width: 87.130435px"> <col style="width: 42px"> <col style="width: 42px"> <col style="width: 42px"> <col style="width: 42px"> <col style="width: 42px"> <col style="width: 42px"> <col style="width: 42px"> <col style="width: 42px"> <col style="width: 42px"> <col style="width: 42px"> <col style="width: 42px"> <col style="width: 42px"> <col style="width: 42px"> <col style="width: 42px"> <col style="width: 42px"> </colgroup> <thead> <tr> <th class="tg-omta" rowspan="2"></th> <th class="tg-h4uz" colspan="3" rowspan="2">Irrelevant Information ↓</th> <th class="tg-h4uz" colspan="6">Relevant Information<br>(Books from the author)</th> <th class="tg-h4uz" colspan="3" rowspan="2">Completeness ↑ </th> <th class="tg-h4uz" colspan="3" rowspan="2">All Correct ↑ </th> </tr> <tr> <th class="tg-h4uz" colspan="3">Satisfied ↑ </th> <th class="tg-h4uz" colspan="3">Unsatisfied ↓</th> </tr> </thead> <tbody> <tr> <td class="tg-qwh1">GPT-4</td> <td class="tg-tr5t">0.26</td> <td class="tg-tr5t">0.33</td> <td class="tg-tr5t">0.00</td> <td class="tg-tr5t">0.51</td> <td class="tg-tr5t">0.49</td> <td class="tg-tr5t">0.78</td> <td class="tg-tr5t">0.24</td> <td class="tg-tr5t">0.19</td> <td class="tg-tr5t">0.21</td> <td class="tg-tr5t">0.24</td> <td class="tg-tr5t">0.26</td> <td class="tg-tr5t">0.70</td> <td class="tg-tr5t">0.08</td> <td class="tg-tr5t">0.08</td> <td class="tg-tr5t">0.31</td> </tr> <tr> <td class="tg-qwh1">GPT-3.5</td> <td class="tg-tr5t">0.20</td> <td class="tg-tr5t">0.44</td> <td class="tg-tr5t">0.00</td> <td class="tg-tr5t">0.44</td> <td class="tg-tr5t">0.26</td> <td class="tg-tr5t">0.68</td> <td class="tg-tr5t">0.36</td> <td class="tg-tr5t">0.30</td> <td class="tg-tr5t">0.32</td> <td class="tg-tr5t">0.16</td> <td class="tg-tr5t">0.16</td> <td class="tg-tr5t">0.47</td> <td class="tg-tr5t">0.07</td> <td class="tg-tr5t">0.02</td> <td class="tg-tr5t">0.15</td> </tr> </tbody> <caption>Aggregated model performance on KITAB for three experimental conditions <br> NO-CONTEXT | SELF-CONTEXT | WITH-CONTEXT} (see definitions in the prompts section) <br> for queries requesting a list of books from a given author satisfying one additional book constraint. Both models have high rates of irrelevant information and poor constraint satisfaction across the board. Context availability mitigates irrelevant information rate, but constraint satisfaction still remains low. 
Full correctness (i.e., perfect match of the post-processed model output and the ground truth) is strikingly low across all conditions and models but there is visible improvement for WITH-CONTEXT.</caption> </table> </center> </aside> ## How to cite <pre> @inproceedings{abdin2023kitab, title={KITAB: Evaluating LLMs on Constraint Satisfaction for Information Retrieval}, author={Abdin, Marah I and Gunasekar, Suriya and Chandrasekaran, Varun and Li, Jerry and Yuksekgonul, Mert and Peshawaria, Rahee Ghosh and Naik, Ranjita and Nushi, Besmira}, journal={arXiv preprint arXiv:2310.15511}, year={2023} } </pre> ## Contributors [Marah I Abdin](https://www.linkedin.com/in/marah-abdin/), [Suriya Gunasekar](https://sgunasekar.github.io/), [Varun Chandrasekaran](https://ece.illinois.edu/about/directory/faculty/varunc), [Jerry Li](https://jerryzli.github.io/), [Mert Yuksekgonul](https://mertyg.github.io/), [Rahee Ghosh Peshawaria](https://www.linkedin.com/in/rahee-ghosh-peshawaria/), [Ranjita Naik](https://github.com/ranjita-naik), [Besmira Nushi](https://besmiranushi.com/)
lingjoor/databricks-dolly-15k-context-3k-rag
--- license: cc-by-sa-3.0 task_categories: - table-question-answering ---
AlienKevin/sbs_cantonese
---
license: cc-by-nc-4.0
language:
- yue
pretty_name: SBS Cantonese Speech Corpus
size_categories:
- 100K<n<1M
---

# SBS Cantonese Speech Corpus

This speech corpus contains **435 hours** of [SBS Cantonese](https://www.sbs.com.au/language/chinese/zh-hant/podcast/sbs-cantonese) podcasts from August 2022 to October 2023. There are **2,519 episodes** and each episode is split into segments that are at most 10 seconds long. In total, there are **189,216 segments** in this corpus.

Here is a breakdown of the categories of episodes present in this dataset:

<style>
table th:first-of-type {
    width: 5%;
}
table th:nth-of-type(2) {
    width: 15%;
}
table th:nth-of-type(3) {
    width: 50%;
}
</style>

| Category | SBS Channels | Episodes |
|-------------------|----------------------|-------|
| news | 中文新聞, 新聞簡報 | 622 |
| business | 寰宇金融 | 148 |
| vaccine | 疫苗快報 | 71 |
| gardening | 園藝趣談 | 58 |
| tech | 科技世界 | 56 |
| health | 健康快樂人 | 53 |
| culture | 文化360 | 49 |
| english | 學英語 | 41 |
| expert | 專家話你知 | 37 |
| interview | 我不是名人 | 20 |
| career | 澳洲招職 | 18 |
| food | 美食速遞 | 18 |
| uncategorized | n/a | 1328 |

* Uncategorized episodes are mostly news but also contain other categories listed above.

## Dataset Details

### Dataset Description

<!-- Provide a longer summary of what this dataset is. -->

- **Curated by:** Kevin Li
- **Language(s):** Cantonese, English (only in podcasts categorized as "english")
- **License:** Creative Commons Attribution Non-Commercial 4.0

### Scraper

<!-- Provide the basic links for the dataset. -->

- **Repository:** https://github.com/AlienKevin/sbs_cantonese

## Uses

Each episode is split into segments using [silero-vad](https://github.com/snakers4/silero-vad). Since silero-vad is not trained on Cantonese data, the segmentation is not ideal and often breaks sentences in the middle. Hence, this dataset is not intended to be used for supervised ASR. Instead, it is intended to be used for self-supervised speech pretraining, like training WavLM, HuBERT, and Wav2Vec.

### Format

Each segment is stored as a monochannel FLAC file with a sample rate of 16k Hz. You can find the segments under the `audio/` folder, where groups of segments are bundled into a .tar.gz file for ease of distribution.

The filename of each segment shows which episode it belongs to and its place within that episode. For example, here's a filename:

```
0061gy0w8_0000_5664_81376
```

where

* `0061gy0w8` is the episode id
* `0000` means that it is the first segment of that episode
* `5664` is the starting sample of this segment. Remember all episodes are sampled at 16k Hz, so the total number of samples in an episode is (the duration in seconds * 16,000).
* `81376` is the ending (exclusive) sample of this segment.
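A small sketch for decoding this filename convention and recovering a segment's duration, using the 16 kHz sample rate stated above:

```python
def parse_segment_name(name: str) -> dict:
    """Split e.g. '0061gy0w8_0000_5664_81376' into its components."""
    episode_id, index, start, end = name.rsplit("_", 3)
    start, end = int(start), int(end)
    return {
        "episode_id": episode_id,
        "segment_index": int(index),
        "start_sample": start,
        "end_sample": end,
        "duration_seconds": (end - start) / 16_000,  # 16 kHz sample rate
    }

print(parse_segment_name("0061gy0w8_0000_5664_81376"))
# {'episode_id': '0061gy0w8', 'segment_index': 0, 'start_sample': 5664,
#  'end_sample': 81376, 'duration_seconds': 4.732}
```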
### Metadata Metadata for each episode is stored in the `metadata.jsonl` file, where each line stores the metadata for one episode: Here's the metadata for one of the episodes (split into multiple lines for clarity): ```json { "title": "SBS 中文新聞 (7月5日)", "date": "05/07/2023", "view_more_link": "https://www.sbs.com.au/language/chinese/zh-hant/podcast-episode/chinese-news-5-7-2023/tl6s68rdk", "download_link": "https://sbs-podcast.streamguys1.com/sbs-cantonese/20230705105920-cantonese-0288b7c2-cb6d-4e0e-aec2-2680dd8738e0.mp3?awCollectionId=sbs-cantonese&awGenre=News&awEpisodeId=20230705105920-cantonese-0288b7c2-cb6d-4e0e-aec2-2680dd8738e0" } ``` where * `title` is the title of the episode * `date` is the date when the episode is published * `view_more_link` is a link to the associated article/description for this episode. Many news episodes have extremely detailed manuscripts written in Traditional Chinese while others have briefer summaries or key points available. * `download_link` is the link to download the audio for this episode. It is usually hosted on [streamguys](https://www.streamguys.com/) but some earlier episodes are stored SBS's own server at https://images.sbs.com.au. The id of each episode appears at the end of its `view_more_link`. It appears to be a precomputed hash that is unique to each episode. ```python id = view_more_link.split("/")[-1] ```
alfredplpl/simple-zundamon
---
license: other
license_name: view-read-more
license_link: https://zunko.jp/guideline.html
language:
- ja
---

# Simple Zundamon Dataset

![ずっきょ](image4.png)

## Introduction

This is a simple dataset packed with character settings for Zundamon (ずんだもん).
It was created from information the author looked up on the internet and from data provided by the character's official management.
Please use it as a quick sanity check when building character LLMs.
Even for such checks, please read the license carefully whenever possible.
For any other use, read the license carefully.

## Formats

- LLM-jp: [zmnjp.jsonl](zmnjp.jsonl)
- ChatGPT: [zmn.jsonl](zmn.jsonl)

## License

- [(ず・ω・きょ)](https://zunko.jp/guideline.html)
Heralax/Augmental-Dataset
---
license: unknown
---

# A High-Quality AI Augmented Dataset for RP and conversation

This dataset is composed of lines from the Visual Novel Steins;Gate, which have been filtered, reformatted, AI-rewritten (many of them twice), and in a few cases, manually quality checked.

The flagship model of this dataset (a finetune on top of MythoMax) can be found [here](https://huggingface.co/Heralax/Augmental-13b)!

It contains a large number of RP-focused, multiturn conversational training examples, from the perspectives of multiple characters.

The "Scenario" column (AI-generated) describes the context behind the conversation in which a line takes place.

The "Completion" column (human-written, AI-enhanced) is the content of the line in that row.

The "Speaker" column contains the name of the speaker of that line, and is one of ["Okabe", "Kurisu","Faris", "Luka","Itaru","Suzuha","Mayuri"]

The "History" column contains the lines that come before the completion, in a given conversation. There will always be at least one.

See the repo with the dataset generation code here https://github.com/e-p-armstrong/amadeus (train.py and make_card_evanchat.py) for an example of how these might be formatted for training.

# IMPORTANT NOTE

I'll be honest: I don't know where this dataset sits legally. On one hand, the source text was substantially revised over the course of costly weeks of experimentation; the source is not included in its entirety; and the source has been used by other AI creators before with the knowledge of the publisher of the source material. Finally, there's good precedent for using copyrighted work in AI training (CommonCrawl contains plenty of copyrighted work, for instance, and big companies use copyrighted data all the time).

On the other hand, this dataset is entirely composed of text that was generated using copyrighted text as an input.

This all being considered, I've left the license field BLANK and will take the dataset down without any questions asked if the legal holders of the copyright ask me to do so. I hope they don't ask me to do so, because this is a project I'm really proud of, and I think it could be useful to the open source AI RP community.

If someone from MAGES is reading this, then I ask them, as someone who's bought nearly every SciAdv game from C;H to A;C, please don't kill it. Thank you.
hajili/azerbaijani_tweet_emotion_classification
---
license: mit
task_categories:
- text-classification
language:
- az
size_categories:
- 100K<n<1M
---

This dataset contains 150K (train + test) cleaned tweets in Azerbaijani. Tweets were collected in 2021 and were filtered and cleaned with the following steps:

- Initial data were collected using the twint library. The tool is now deprecated and cannot be used with the new Twitter.
- On top of the already filtered data, I applied an additional filter to select Azerbaijani tweets using the fastText language identification model.
- Tweets were classified into 3 emotion categories: {positive: 1, negative: -1, neutral: 0} using emojis as a rule-based classifier.
- Tags, usernames, and emojis were later cleaned.
- Short tweets were filtered out.
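A rough sketch of what an emoji-based rule classifier like the one described above could look like; the emoji sets below are illustrative assumptions, not the lists actually used:

```python
# Illustrative emoji sets; the actual lists used for labelling are not published here.
POSITIVE_EMOJIS = {"😀", "😂", "😍", "👍", "❤️"}
NEGATIVE_EMOJIS = {"😢", "😡", "👎", "💔"}

def label_tweet(text: str) -> int:
    """Return 1 (positive), -1 (negative), or 0 (neutral) based on emoji counts."""
    pos = sum(text.count(e) for e in POSITIVE_EMOJIS)
    neg = sum(text.count(e) for e in NEGATIVE_EMOJIS)
    if pos > neg:
        return 1
    if neg > pos:
        return -1
    return 0

print(label_tweet("Bu gün hava çox gözəldir 😍"))  # -> 1
```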
mhenrichsen/context-aware-splits-english
--- dataset_info: features: - name: instruction dtype: string - name: input dtype: string - name: output dtype: string splits: - name: train num_bytes: 113347721 num_examples: 27980 download_size: 0 dataset_size: 113347721 configs: - config_name: default data_files: - split: train path: data/train-* --- # Dataset Card for "context-aware-splits-english" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Andyrasika/banking-marketing
--- license: openrail dataset_info: features: - name: age dtype: int64 - name: job dtype: string - name: marital dtype: string - name: education dtype: string - name: default dtype: string - name: balance dtype: int64 - name: housing dtype: string - name: loan dtype: string - name: contact dtype: string - name: day dtype: int64 - name: month dtype: string - name: duration dtype: int64 - name: campaign dtype: int64 - name: pdays dtype: int64 - name: previous dtype: int64 - name: poutcome dtype: string - name: y dtype: string splits: - name: train num_bytes: 6654353 num_examples: 45211 - name: test num_bytes: 665707 num_examples: 4521 download_size: 834481 dataset_size: 7320060 configs: - config_name: default data_files: - split: train path: data/train-* - split: test path: data/test-* --- ## About Dataset ### Context Term deposits are a major source of income for a bank. A term deposit is a cash investment held at a financial institution. Your money is invested for an agreed rate of interest over a fixed amount of time, or term. The bank has various outreach plans to sell term deposits to their customers such as email marketing, advertisements, telephonic marketing, and digital marketing. Telephonic marketing campaigns still remain one of the most effective way to reach out to people. However, they require huge investment as large call centers are hired to actually execute these campaigns. Hence, it is crucial to identify the customers most likely to convert beforehand so that they can be specifically targeted via call. The data is related to direct marketing campaigns (phone calls) of a Portuguese banking institution. The classification goal is to predict if the client will subscribe to a term deposit (variable y). Content The data is related to the direct marketing campaigns of a Portuguese banking institution. The marketing campaigns were based on phone calls. Often, more than one contact to the same client was required, in order to access if the product (bank term deposit) would be ('yes') or not ('no') subscribed by the customer or not. The data folder contains two datasets:- train.csv: 45,211 rows and 18 columns ordered by date (from May 2008 to November 2010) test.csv: 4521 rows and 18 columns with 10% of the examples (4521), randomly selected from train.csv Detailed Column Descriptions bank client data: - 1 - age (numeric) - 2 - job : type of job (categorical: "admin.","unknown","unemployed","management","housemaid","entrepreneur","student", "blue-collar","self-employed","retired","technician","services") - 3 - marital : marital status (categorical: "married","divorced","single"; note: "divorced" means divorced or widowed) - 4 - education (categorical: "unknown","secondary","primary","tertiary") - 5 - default: has credit in default? (binary: "yes","no") - 6 - balance: average yearly balance, in euros (numeric) - 7 - housing: has housing loan? (binary: "yes","no") - 8 - loan: has personal loan? 
(binary: "yes","no")

# related to the last contact of the current campaign:
- 9 - contact: contact communication type (categorical: "unknown","telephone","cellular")
- 10 - day: last contact day of the month (numeric)
- 11 - month: last contact month of year (categorical: "jan", "feb", "mar", …, "nov", "dec")
- 12 - duration: last contact duration, in seconds (numeric)
# other attributes:
- 13 - campaign: number of contacts performed during this campaign and for this client (numeric, includes last contact)
- 14 - pdays: number of days that passed by after the client was last contacted from a previous campaign (numeric, -1 means client was not previously contacted)
- 15 - previous: number of contacts performed before this campaign and for this client (numeric)
- 16 - poutcome: outcome of the previous marketing campaign (categorical: "unknown","other","failure","success")

Output variable (desired target):
- 17 - y - has the client subscribed to a term deposit? (binary: "yes","no")
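The train and test splits declared in the header above can be loaded directly with the 🤗 `datasets` library; a minimal sketch, including mapping the `y` target to a numeric label:

```python
from datasets import load_dataset

ds = load_dataset("Andyrasika/banking-marketing")
train, test = ds["train"], ds["test"]

# The target column "y" is the string "yes"/"no"; map it to 1/0 for modelling.
train = train.map(lambda row: {"label": 1 if row["y"] == "yes" else 0})
print(train[0])
```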
imvladikon/hebrew_speech_campus
--- language: - he size_categories: - 10K<n<100K task_categories: - automatic-speech-recognition dataset_info: features: - name: uid dtype: string - name: file_id dtype: string - name: audio dtype: audio: sampling_rate: 16000 - name: sentence dtype: string - name: n_segment dtype: int32 - name: duration_ms dtype: float32 - name: language dtype: string - name: sample_rate dtype: int32 - name: course dtype: string - name: sentence_length dtype: int32 - name: n_tokens dtype: int32 splits: - name: train num_bytes: 17559119499.576 num_examples: 75924 download_size: 17274739665 dataset_size: 17559119499.576 configs: - config_name: default data_files: - split: train path: data/train-* --- ## Data Description Hebrew Speech Recognition dataset from [Campus IL](https://campus.gov.il/). Data was scraped from the Campus website, which contains video lectures from various courses in Hebrew. Then subtitles were extracted from the videos and aligned with the audio. Subtitles that are not on Hebrew were removed (WIP: need to remove non-Hebrew audio as well, e.g. using simple classifier). Samples with duration less than 3 second were removed. Total duration of the dataset is 152 hours. Outliers in terms of the duration/char ratio were not removed, so it's possible to find suspiciously long or short sentences compared to the duration. Note: if loading is slow, just clone it : `git clone hebrew_speech_campus && cd hebrew_speech_campus && git lfs pull` and load it from the folder `load_dataset("./hebrew_speech_campus")` ## Data Format Audio files are in WAV format, 16kHz sampling rate, 16bit, mono. Ignore `path` field, use `audio.array` field value. ## Data Usage ```python from datasets import load_dataset ds = load_dataset("imvladikon/hebrew_speech_campus", split="train", streaming=True) print(next(iter(ds))) ``` ## Data Sample ``` {'uid': '10c3eda27cf173ab25bde755d0023abed301fcfd', 'file_id': '10c3eda27cf173ab25bde755d0023abed301fcfd_13', 'audio': {'path': '/content/hebrew_speech_campus/data/from_another_angle-_mathematics_teaching_practices/10c3eda27cf173ab25bde755d0023abed301fcfd_13.wav', 'array': array([ 5.54326562e-07, 3.60812592e-05, -2.35188054e-04, ..., 2.34067178e-04, 1.55649337e-04, 6.32447700e-05]), 'sampling_rate': 16000}, 'sentence': 'הדוברים צריכים לקחת עליו אחריות, ולהיות מחויבים לו כלומר, השיח צריך להיות מחויב', 'n_segment': 13, 'duration_ms': 6607.98193359375, 'language': 'he', 'sample_rate': 16000, 'course': 'from_another_angle-_mathematics_teaching_practices', 'sentence_length': 79, 'n_tokens': 13} ``` ## Data Splits and Stats Split: train Number of samples: 75924 ## Citation Please cite the following if you use this dataset in your work: ``` @misc{imvladikon2023hebrew_speech_campus, author = {Gurevich, Vladimir}, title = {Hebrew Speech Recognition Dataset: Campus}, year = {2023}, howpublished = \url{https://huggingface.co/datasets/imvladikon/hebrew_speech_campus}, } ```
LanguageBind/Video-Bench
--- license: apache-2.0 ---
imone/OpenOrca_FLAN
--- license: mit --- This is the [OpenOrca](https://huggingface.co/datasets/Open-Orca/OpenOrca) GPT4 subset with the original FLAN answers. Each even row (indexed starting from 0) contains the OpenOrca GPT4 answer, while each odd row contains the corresponding FLAN answer.
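Because the two answer sources alternate row by row, they can be separated by index parity; a minimal sketch (the `train` split name is an assumption):

```python
from datasets import load_dataset

ds = load_dataset("imone/OpenOrca_FLAN", split="train")

# Even rows (0, 2, 4, ...) hold the OpenOrca GPT-4 answers,
# odd rows (1, 3, 5, ...) hold the corresponding original FLAN answers.
gpt4_rows = ds.filter(lambda _, idx: idx % 2 == 0, with_indices=True)
flan_rows = ds.filter(lambda _, idx: idx % 2 == 1, with_indices=True)

print(len(gpt4_rows), "GPT-4 rows;", len(flan_rows), "FLAN rows")
```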
AlexWortega/InstructCaptions2
---
dataset_info:
  features:
  - name: image
    dtype: image
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 33059118217.928
    num_examples: 22776
  download_size: 33273147003
  dataset_size: 33059118217.928
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
license: apache-2.0
language:
- en
pretty_name: InstructImages
---

# InstructImages

This dataset was created in the style of the DALL-E 3 paper:

1. Caption all images with an LVM (LLaVA-13B in my case)
2. Improve the captions with GPT-4

I also have plans to open-source the RLAIF pipeline used with these images.
lorinma/Slim-LCCC-zh
---
language:
- zh
task_categories:
- conversational
- text-generation
size_categories:
- 1K<n<10K
---

In today's LLM-dominated landscape, everyone is focused on SFT data quality. Compared with the usual by-the-book AI replies, full of "step by step" and "detailed reasoning", these very casual conversations stand out, and they are better suited for building emotional-companionship chit-chat bots.

This project provides a large-scale Chinese dialogue dataset. The original data come from Tsinghua University's [LCCC (Large-scale Cleaned Chinese Conversation) dataset](https://github.com/thu-coai/CDial-GPT).

It is based on LCCC-large, which contains about 12 million dialogues. bert-base-chinese was therefore used to convert the dialogues into embeddings, and a kNN-like method was used to extract 10,000 of them. The result was then converted into the ShareGPT format.

From a practical standpoint, because each dialogue only has two turns, the conversations need to be extended with GPT. In testing, however, the OpenAI models turned out to be too serious and lose the casual flavor. A quick test showed that ERNIE Bot (Wenxin Yiyan) can continue this kind of small talk; this was only a test and is not included in this dataset.

Of course, the best option is still to collect real-world conversations.
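For reference, a ShareGPT-style record typically looks like the sketch below. This shows the commonly used layout with made-up utterances; it is an illustration, not a row dumped from this dataset:

```python
# Illustrative ShareGPT-style record (layout assumption, utterances made up).
record = {
    "conversations": [
        {"from": "human", "value": "今天心情好差啊"},
        {"from": "gpt", "value": "怎么啦,跟我说说?"},
    ]
}
```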
wenge-research/yayi_uie_sft_data
--- license: apache-2.0 language: - zh - en size_categories: - 1M<n<10M --- ## 训练数据/Training Data 百万级语料中文54%,英文46%;其中其中数据集包括**12**个领域包括金融,社会,生物,商业,工业制造,化学,车辆,科学,疾病医疗,个人生活,安全和通用。覆盖数百个使用场景 - NER:中文覆盖**28**个实体类型包括人物,地缘政治,组织,身体部位,药物等,英文覆盖**130**个实体类型包括Animal, Weapon, Conference, Book等。 - RE:中文覆盖**232**种关系包括买资,增持,重组,国籍,别名,亲属,入股,转让,导致,发生地点,制造商等,英文覆盖**236**种关系包括founded by,state or province of headquarters,employee of,occupation,creator等。 - EE:中文覆盖**84**种事件类型,包括中标,高管变动,产品行为-发布,公司上市等,和**203**种论元,英文覆盖**45**种事件类型,包括Born, Demonstrate, Meet, End Organization, Divorce等,和**62**种论元。 In the corpus of over a million entries, 54% are in Chinese and 46% in English. The dataset encompasses 12 fields including finance, society, biology, business, industrial manufacturing, chemistry, vehicles, science, disease and medicine, personal life, security, and general topics, covering hundreds of scenarios: - NER: In Chinese, it covers **28** types of entities including individuals, geopolitics, organizations, body parts, drugs, etc., while in English, it covers 130 types of entities such as Animals, Weapons, Conferences, Books, etc. - RE: In Chinese, it includes **232** types of relations like acquisitions, stake increases, restructurings, nationality, aliases, relatives, buying shares, transfers, causes, locations of occurrence, manufacturers, etc., and in English, 236 types of relations such as founded by, state or province of headquarters, employee of, occupation, creator, etc. - EE: Chinese covers **84** types of events including winning a bid, executive changes, product actions - launches, company listings, etc., and **203** types of arguments, whereas English covers **45** types of events such as Birth, Demonstration, Meeting, End of Organization, Divorce, etc., and **62** types of arguments. ![数据分布](./data-dist.png)
LucasWeber/icl_consistency_test
--- task_categories: - text-classification language: - en pretty_name: The ICL consistency test size_categories: - 100K<n<1M --- # The ICL consistency test This 🤗 dataset provides data for the [GenBench CBT task 'The ICL consistency test'](https://github.com/GenBench/genbench_cbt/tree/main/src/genbench/tasks/icl_consistency_test). The ICL consistency test measures the consistency of LLM predictions on the same data points across many different equivalent prompting setups. The score in the associated metric (Cohen's kappa) can be understood as a measure of a model's prediction consistency in the face of task-irrelevant information. For an easy evaluation of any 🤗 models, we refer to the code provided in the GenBench task. For in-depth information on the task, we refer to the associated publications ([Weber et al., 2023](https://arxiv.org/abs/2312.04945),[2023](https://aclanthology.org/2023.conll-1.20/)) and the respective GenBench [doc.md](https://github.com/GenBench/genbench_cbt/blob/main/src/genbench/tasks/icl_consistency_test/doc.md). Evaluation on the relevant metrics can be done via the _example_evaluation.py_ script in the [GenBench repository](https://github.com/GenBench/genbench_cbt/blob/main/src/genbench/tasks/icl_consistency_test/). ### Dataset Description _Abstract_: The ICL consistency test measures the consistency of LLM predictions on the same data points across many different prompting setups. Different setups are defined by "factors". On the one hand, factors can be specific attributes of the used prompt (e.g. the number of examples the model is presented with ["n_shots"] or the type of instructions that were used to wrap a specific datapoint ["Instructions"]). On the other hand, the analysis can also be augmented by factors that are related to the way a model is evaluated (e.g. whether a model is calibrated) or the type of model that is evaluated (e.g. the number of parameters or instructions tuning). These external factors can be added to the analysis by using the task.add_factor() method. The output metric is Cohen's kappa for each factor across all different conditions. A kappa value close to 1 indicates that the factors do not change the model prediction, while a factor close to 0 strongly changes model predictions. The ICL consistency test has two subtasks, one evaluating the ANLI-dataset ([Nie et al., 2019](https://aclanthology.org/N18-1101/)); the other the MNLI-dataset ([Wang et al., 2017](https://aclanthology.org/N18-1101/)). _Size_: Each subtask contains 57600 when using the full 600 data_IDs. The user can choose to reduce the number of evaluated data_IDs. - **Curated by:** - resampling and arrangement was done by [Weber et al., 2023](https://arxiv.org/abs/2312.04945),[2023](https://aclanthology.org/2023.conll-1.20/); - original data were curated by [Nie et al., 2019](https://aclanthology.org/N18-1101/) (ANLI) and [Wang et al., 2017](https://aclanthology.org/N18-1101/) (MNLI); - templates were curated by [Bach et al., 2022](https://aclanthology.org/2022.acl-demo.9/) (promptsource). - **Language:** English ### Dataset Sources (basic links) - **Repository:** Data files on [github](https://github.com/LucWeber/icl_consistency_data). - **Paper:** [Weber et al., 2023](https://arxiv.org/abs/2312.04945),[2023](https://aclanthology.org/2023.conll-1.20/). - **Demo:** Find pre-implemented code to evaluate any 🤗 model on [github](https://github.com/GenBench/genbench_cbt/blob/main/src/genbench/tasks/icl_consistency_test/example_evaluation.py). 
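To make the output metric concrete: Cohen's kappa is computed between predictions made on the same data points under two settings that differ only in the factor of interest. A toy illustration follows (for actual results, use the GenBench evaluation code linked above):

```python
from sklearn.metrics import cohen_kappa_score

# preds_a / preds_b: predictions of the same model on the same data points under
# two prompting setups that differ only in one factor (e.g. instruction wording).
preds_a = ["entailment", "neutral", "contradiction", "neutral"]
preds_b = ["entailment", "neutral", "neutral", "neutral"]

kappa = cohen_kappa_score(preds_a, preds_b)
print(f"kappa = {kappa:.2f}")  # near 1: factor barely changes predictions; near 0: strong effect
```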
## Uses In prompting, models are sensitive to task-irrelevant information in their prompt. This test can be used to quantify this sensitivity of any 🤗 model. The ICL consistency test does this by measuring a model's prediction consistency across many different semantically equivalent prompting setups. ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [_TBA_] ## Dataset Creation The data is a sample from the [MNLI](https://aclanthology.org/N18-1101/) and [ANLI](https://aclanthology.org/2020.acl-main.441/) datasets as well as prompt templates from [promptsource](https://aclanthology.org/2022.acl-demo.9/). Please refer to the original publications's documentation for detailed information on dataset creation. ## Bias, Risks, and Limitations This dataset contains data from the [MNLI](https://aclanthology.org/N18-1101/) and [ANLI](https://aclanthology.org/2020.acl-main.441/) datasets and adheres to the same biases, risks and limitations. ### Recommendations We identify the following limitations of the consistency test: 1. The number of factors is limited and does not cover all possible factors that might influence the predictions. We limited ourselves to factors we deem relevant, to ensure fast evaluation. 2. Currently, the test is only implemented for the ANLI- and MNLI-datasets. 3. Factors that are external to the dataset but should be considered in the analysis (e.g. _instruction tuning_ or _calibration_) have to be manually added by the user using the task.add_factor() method (please use the GenBench implementation of the dataset. You can find it on [github](https://github.com/GenBench/genbench_cbt/tree/main/src/genbench/tasks/icl_consistency_test)). ## Citation This dataset was used in the following publications. If you use it, please consider citing the following references: **BibTeX:** ``` @inproceedings{weber2023mind, title={Mind the instructions: a holistic evaluation of consistency and interactions in prompt-based learning}, author={Weber, Lucas and Bruni, Elia and Hupkes, Dieuwke}, booktitle={Proceedings of the 27th Conference on Computational Natural Language Learning (CoNLL)}, pages={294--313}, year={2023} } ``` ``` @article{weber2023icl, title={The ICL Consistency Test}, author={Weber, Lucas and Bruni, Elia and Hupkes, Dieuwke}, journal={arXiv preprint arXiv:2312.04945}, year={2023} } ``` ## Dataset Card Authors [Lucas Weber](https://lucweber.github.io/) ## Dataset Card Contact lucasweber000@gmail.com
Andyrasika/VQA-Dataset
--- dataset_info: features: - name: question dtype: string - name: answer dtype: string - name: image_id dtype: string - name: label dtype: int64 splits: - name: train num_bytes: 700662 num_examples: 9974 - name: test num_bytes: 174412 num_examples: 2494 download_size: 299109 dataset_size: 875074 configs: - config_name: default data_files: - split: train path: data/train-* - split: test path: data/test-* license: mit language: - en tags: - VQA pretty_name: 'VQA ' size_categories: - 100K<n<1M --- The dataset is available at: https://www.mpi-inf.mpg.de/departments/computer-vision-and-machine-learning/research/vision-and-language/visual-turing-challenge/ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6290ec00a29097b211b94f0f/6TRvuCI3AYnhphzXiCPE4.png) ``` @INPROCEEDINGS{malinowski2014nips, author = {Malinowski, Mateusz and Fritz, Mario}, title = {A Multi-World Approach to Question Answering about Real-World Scenes based on Uncertain Input}, booktitle = {Advances in Neural Information Processing Systems 27}, editor = {Z. Ghahramani and M. Welling and C. Cortes and N.D. Lawrence and K.Q. Weinberger}, pages = {1682--1690}, year = {2014}, publisher = {Curran Associates, Inc.}, url = {http://papers.nips.cc/paper/5411-a-multi-world-approach-to-question-answering-about-real-world-scenes-based-on-uncertain-input.pdf} } ```
neovalle/H4rmony_dpo
--- license: mit task_categories: - question-answering - text-classification - reinforcement-learning - text-generation tags: - ecolinguistics - ecology - sustainability - environment - synthetic size_categories: - 1K<n<10K --- This dataset is based on [neovalle/H4rmony](https://huggingface.co/datasets/neovalle/H4rmony), and optimised to the format required by DPOTrainer from the trl library.
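A minimal sketch of loading the data and checking the fields DPOTrainer consumes (the `train` split and the standard `prompt`/`chosen`/`rejected` column names are assumed here; verify against the actual files):

```python
from datasets import load_dataset

ds = load_dataset("neovalle/H4rmony_dpo", split="train")
print(ds.column_names)

example = ds[0]
print("prompt  :", example["prompt"][:120])
print("chosen  :", example["chosen"][:120])
print("rejected:", example["rejected"][:120])
```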
AngelBottomless/danbooru-2023-sqlite-fixed-7110548
--- license: mit task_categories: - text-to-image - image-classification pretty_name: sanitized-danbooru2023-sqlite size_categories: - 100M<n<1B --- # SQLITE-DB for Danbooru 2023 (until 7110548) This is an cleaned-up version (almost totally recreated) of https://huggingface.co/datasets/KBlueLeaf/danbooru2023-sqlite The previous sqlite database had major defects, especially with tag ids being mismatched, which was causing data to be actually different from server. Note that minor information, such as uploader id, are not fixed. Most of the discrepancy has been detected from id 5139963-6859952. The additional scripts and example files that can be used for modify/ adding post are provided for maintenance. # view-dataset.py offers you to get the information of post: The example post which contained wrong information can be viewed like this: post : https://danbooru.donmai.us/posts/6719955 ``` Post 6719955 tag_list_general: 1girl(0), long_hair(24), ribbon(36), simple_background(42), smile(43), solo(44), ahoge(71), open_mouth(90), blonde_hair(116), :d(147), red_eyes(193), blush_stickers(274), upper_body(285), neck_ribbon(376), red_ribbon(380), chibi(437), transparent_background(513), cardigan(2754), looking_afar(3998), white_cardigan(52258), sad_keanu_(meme)(136930) tag_list_copyright: tokino_sora_(old_design)(364562), juufuutei_raden(649634) tag_list_character: regloss_(hololive)(650035) tag_list_meta: (175) tag_list_artist: xt0y4x(525184) ``` However, the actual data will show you the difference: ``` {"id": 6719955, "difference": [{"tag_list_general": ["virtual_youtuber"], "tag_list_character": ["otonose_kanade"], "tag_list_artist": ["jb_jagbung"], "tag_list_copyright": ["hololive", "hololive_dev_is"]}, {"tag_list_general": ["sad_keanu_(meme)"], "tag_list_character": ["regloss_(hololive)"], "tag_list_artist": ["xt0y4x"], "tag_list_copyright": ["tokino_sora_(old_design)", "juufuutei_raden"]}]} ``` The actual tags ids are followed: ``` virtual_youtuber(136931) otonose_kanade(650036) jb_jagbung(525185) hololive(364563) hololive_dev_is(649635) ``` There were tags added / removed from post, but other than that, there is actual shift on tags - which is not consistent over database. ``` tag_list_character: regloss_(hololive)(650035) <-> otonose_kanade(650036) tag_list_artist: xt0y4x(525184) <-> jb_jagbung(525185) tag_list_copyright: tokino_sora_(old_design)(364562), juufuutei_raden(649634) <-> hololive(364563), hololive_dev_is(649635) ``` The tag virtual_youtuber is the good difference that we can add to database too. # crawling code is not included. # commit_difference.py offers you to change the database's information based on **difference jsonl** files. You can prepare bunch of jsonl files, which contains the line of json which contains id-difference. The data **must** contain string form of data, not tag id. **The actual danbooru tag id won't match with the database, it is intended to skip bunch of tags which does not have actual post usages.** # fix-tags.py just contains the code which will reflect the actual tag usage count, to tag popularity. # add_post.py includes code to add more recent post data directly into dataset. It contains simple skip schema, if post already exists and it has non-empty tag list, it won't add the post.
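For reference, the difference jsonl files consumed by commit_difference.py (format shown above) can be read with a few lines of Python; treating the first entry as tags to add and the second as tags to remove follows the example above and may need adjusting for your own files:

```python
import json

def load_differences(path):
    """Yield (post_id, tags_to_add, tags_to_remove) from a difference jsonl file."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            record = json.loads(line)
            to_add, to_remove = record["difference"]
            yield record["id"], to_add, to_remove

# "difference_example.jsonl" is a placeholder path.
for post_id, add_tags, remove_tags in load_differences("difference_example.jsonl"):
    print(post_id, "add:", add_tags, "remove:", remove_tags)
```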
khhuang/CHOCOLATE
--- annotations_creators: - expert-generated - found language_creators: - expert-generated - found language: - en license: apache-2.0 multilinguality: - monolingual size_categories: - 1K<n<10K paperswithcode_id: chocolate pretty_name: CHOCOLATE tags: - chart - plot - chart-to-text - vistext - statista - pew - chart-understanding - chart-captioning - chart-summarization - document-image configs: - config_name: default data_files: - split: test path: chocolate.json --- # Dataset Card for CHOCOLATE - [Dataset Description](https://huggingface.co/datasets/khhuang/CHOCOLATE/blob/main/README.md#dataset-description) - [Paper Information](https://huggingface.co/datasets/khhuang/CHOCOLATE/blob/main/README.md#paper-information) - [Citation](https://huggingface.co/datasets/khhuang/CHOCOLATE/blob/main/README.md#citation) ## Dataset Description **CHOCOLATE** is a benchmark for detecting and correcting factual inconsistency in generated chart captions. It consists of captions produced by six most advanced models, which are categorized into three subsets: - **LVLM**: GPT-4V, Bard (before Gemini) - **LLM-based Pipeline**: DePlot + GPT-4 - **Fine-tuned Model**: ChartT5, MatCha, UniChart The charts are from two datasets: VisText and the Pew split of Chart-to-Text. In total, **CHOCOLATE** consists of **1,187 examples**. Each instance in **CHOCOLATE** consists of a caption generated by one of the model and the annotations of the factual errors for each caption sentence. ## Paper Information - Paper: https://arxiv.org/abs/2312.10160 - Code: https://github.com/khuangaf/CHOCOLATE/ - Project: https://khuangaf.github.io/CHOCOLATE ## Citation If you use the **CHOCOLATE** dataset in your work, please kindly cite the paper using this BibTeX: ``` @misc{huang-etal-2023-do, title = "Do LVLMs Understand Charts? Analyzing and Correcting Factual Errors in Chart Captioning", author = "Huang, Kung-Hsiang and Zhou, Mingyang and Chan, Hou Pong and Fung, Yi R. and Wang, Zhenhailong and Zhang, Lingyu and Chang, Shih-Fu and Ji, Heng", year={2023}, eprint={2312.10160}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
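A minimal loading sketch (the default config exposes a single `test` split backed by `chocolate.json`):

```python
from datasets import load_dataset

chocolate = load_dataset("khhuang/CHOCOLATE", split="test")
print(len(chocolate))  # 1,187 examples
print(chocolate[0])    # one generated caption with its per-sentence factual-error annotations
```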
shibing624/huatuo_medical_qa_sharegpt
--- license: apache-2.0 --- source: - https://huggingface.co/datasets/FreedomIntelligence/HuatuoGPT-sft-data-v1 - https://huggingface.co/datasets/FreedomIntelligence/HuatuoGPT2_sft_instruct_GPT4_50K Converted to the ShareGPT format, as jsonl files. data size:
```
> wc -l HuatuoGPT_sft_data_v1_sharegpt.jsonl
226042 HuatuoGPT_sft_data_v1_sharegpt.jsonl
> wc -l HuatuoGPT2_sft_instruct_GPT4_sharegpt.jsonl
50000 HuatuoGPT2_sft_instruct_GPT4_sharegpt.jsonl
```
Conversion code: convert.py
```python
import json

input_file = './HuatuoGPT2_sft_instruct_GPT4.jsonl'
output_file = './HuatuoGPT2_sft_instruct_GPT4_sharegpt.jsonl'

with open(input_file, 'r', encoding='utf-8') as infile, \
     open(output_file, 'w', encoding='utf-8') as outfile:
    # Read the source JSONL file line by line.
    for line in infile:
        output_json = {"conversations": []}
        data = json.loads(line.strip())
        # Each JSON object has a "data" list that alternates question and answer turns.
        for i, item in enumerate(data['data']):
            # Questions sit at even positions, answers at odd positions;
            # item[2:] drops the two-character speaker prefix of each turn.
            role = "human" if i % 2 == 0 else "gpt"
            output_json['conversations'].append({
                "from": role,
                "value": item[2:]
            })
        # Write the converted conversation as a single JSON line.
        outfile.write(json.dumps(output_json, ensure_ascii=False) + '\n')

print(f"Conversion complete. Output saved to '{output_file}'.")
```
DiscoResearch/germanrag
--- pretty_name: GermanRAG configs: - config_name: default data_files: - split: train path: germanrag.jsonl license: cc-by-4.0 language: - de multilinguality: - monolingual source_datasets: - deepset/germandpr task_categories: - question-answering - text-retrieval - conversational task_ids: - open-domain-qa - document-retrieval - document-question-answering tags: - RAG - retrieval-augmented-generation size_categories: - 1K<n<10K --- # GermanRAG 🇩🇪📜🦜 This dataset is derived from the [GermanDPR dataset](https://huggingface.co/datasets/deepset/germandpr) and enhances it by providing fully formulated answers instead of answer spans. It can be used to finetune for retrieval augmented generation tasks (RAG) in German. We deduplicated the original contexts resulting in 2243 unique contexts and repeated the hard negatives of half of them, such that the last third of the total dataset contains only not answerable examples. In contrast to the original dataset the number of contexts per QA pair varies to mimic retrieval results in real world scenarios, resulting in a distribution of positive and hard negative contexts as follows: | # positive contexts | # hard negative contexts | # examples |---|---|--- | 1 | 0 | 562 | 1 | 1 | 562 | 1 | 2 | 561 | 1 | 3 | 558 | 0 | 1 | 375 | 0 | 2 | 373 | 0 | 3 | 371 The passages in the `contexts` list are shuffled and `positive_ctx_idx` marks the index of the positive context. `-1` indicates examples without positive context, which are paired with `"Mit den gegebenen Informationen ist diese Frage nicht zu beantworten."` as answer. Code used to create this dataset can be found [here](https://github.com/rasdani/germanrag). ## Known issues In rare cases hard negatives still provide sufficient information to answer the question. For the last third, we therefore paired hard negatives with random questions, sampled without replacement. ## Acknowledgements Full credit for the original dataset goes to the [authors](https://arxiv.org/abs/2104.12741) of [GermanDPR](https://www.deepset.ai/germanquad). The original dataset is licensed under [CC BY-SA 4.0](https://creativecommons.org/licenses/by-sa/4.0/) and this derived work therfore inherits the same license. Citation for the original dataset: ``` @misc{möller2021germanquad, title={GermanQuAD and GermanDPR: Improving Non-English Question Answering and Passage Retrieval}, author={Timo Möller and Julian Risch and Malte Pietsch}, year={2021}, eprint={2104.12741}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` The derived dataset was created for [DiscoResearch](https://huggingface.co/DiscoResearch) by [Daniel Auras](https://huggingface.co/rasdani) with support from [JP Harries](https://huggingface.co/jphme) and [Björn Pluster](https://huggingface.co/bjoernp).
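To illustrate how the fields described above fit together, here is a small usage sketch; the `question` and `answer` field names and the German prompt template are assumptions for illustration, so check them against the actual files:

```python
from datasets import load_dataset

ds = load_dataset("DiscoResearch/germanrag", split="train")

def build_prompt(example):
    # Concatenate the shuffled contexts and append the question.
    context_block = "\n\n".join(
        f"Kontext {i + 1}: {ctx}" for i, ctx in enumerate(example["contexts"])
    )
    return f"{context_block}\n\nFrage: {example['question']}\nAntwort:"

example = ds[0]
print(build_prompt(example))
print("target answer:", example["answer"])
print("positive context index:", example["positive_ctx_idx"])  # -1 = not answerable
```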
AgoraX/AIEC-140K
--- license: mit task_categories: - text-classification - table-question-answering - question-answering - conversational tags: - code size_categories: - 100K<n<1M --- # AgoraX/AIEC-140K Dataset =============================== Excited to Announce AgoraX/AIEC-140K! An all-new dataset with super high High Quality AI Engineering Code Tokens totaling 140k samples! ## Introduction ------------ The AgoraX/AIEC-140K dataset is a collection of AI engineering code tokens from top research labs such as OpenAI, Nvidia, Google, Lucidrains, and others. These tokens have been scraped from various repositories on GitHub, providing a valuable resource for researchers and developers in the field of Artificial Intelligence. This README file serves as a guide to understand the dataset and effectively utilize its contents. ## Dataset Details --------------- - Dataset Name: AgoraX/AIEC-140K - Total Samples: 140,000 ### Data Format The dataset primarily consists of code tokens, which are the atomic units of code. Each code token is a single word or a character representing a meaningful entity in AI engineering code. These tokens were collected from different repositories, ensuring a diverse collection of samples. The data does not include complete code snippets or files but focuses on individual tokens to enable easy integration and usage in various downstream tasks. ### Data Sources Code tokens in the AgoraX/AIEC-140K dataset are scraped from various repositories on GitHub. Prominent research labs including OpenAI, Nvidia, Google, Lucidrains, and others have contributed to this dataset. Please note that the dataset does not provide details on the exact repositories or sources from where each token is scraped. ### Usage The AgoraX/AIEC-140K dataset is a valuable resource for researchers, developers, and practitioners in the field of AI engineering. The dataset can be utilized for various purposes, including but not limited to: - Training language models for code generation - Pre-training and fine-tuning neural networks - Code completion and suggestion systems - Understanding and analyzing code patterns and trends in AI engineering # Citation -------- If you use the AgoraX/AIEC-140K dataset in your research work, please consider citing it using the following BibTeX: ``` @dataset{agorax/aiec140k, author = {AgoraX Team}, title = {AgoraX/AIEC-140K Dataset}, year = {2022}, publisher = {Hugging Face}, url = {https://huggingface.co/datasets/agorax/aiec-140k} } ``` # License ------- The AgoraX/AIEC-140K dataset is released under the [MIT License](https://opensource.org/licenses/MIT). Please refer to the LICENSE file in the dataset repository for more details. # Contact ------- For any further inquiries or feedback regarding the dataset, please contact the AgoraX Team in the discord: https://discord.gg/t8SWA2CnVN We appreciate your interest and hope that the AgoraX/AIEC-140K dataset proves to be a valuable asset in advancing AI engineering research and development.
zerolink/zsql-postgres-dpo
--- language_creators: - crowdsourced - expert-generated language: - en license: other size_categories: - 100K<n<1M task_categories: - text2text-generation - text-generation license_name: other license_link: https://github.com/zerolink-io/zsql-postgres-dpo dataset_info: features: - name: schema dtype: string - name: question dtype: string - name: rejected dtype: string - name: chosen dtype: string - name: weight dtype: float64 splits: - name: train num_bytes: 246559437.43473467 num_examples: 233393 - name: test num_bytes: 27395962.565265343 num_examples: 25933 download_size: 86570198 dataset_size: 273955400.0 configs: - config_name: default data_files: - split: train path: data/train-* - split: test path: data/test-* tags: - dpo - text-to-sql - sql --- # zsql-postgres-dpo This is a dataset for training machine learning models to convert natural English language text into Postgres dialect SQL queries. This dataset comprises 200,000 DPO pairs curated to support the rapid development of text-to-SQL generation models. The uniqueness of this dataset lies in its optimization process. The "chosen" field within each data pair contains SQL queries that have been canonicalized, optimized, and which are chosen from the candidate set which minimizes syntactic cyclomatic and asymptotic complexity against the given schema. Direct Preference Optimization (see [Rafailov et al, 2023](https://arxiv.org/abs/2305.18290J)) is a novel approach to refinement learning from positive and negative samples to modify the behavior of large-scale unsupervised language models to align with human preferences This method simplifies the fine-tuning process, making it more stable and computationally efficient without the need for extensive hyperparameter tuning or LM sampling, and has been shown to effectively control model outputs, matching or surpassing existing methods. The source data is cleaned and filtered based on the following criteria: - Remove queries which are not in English. - Remove queries which are not valid SQL queries. - Remove queries which are not executable against the given schema. - Remove queries which are executed against tables with non-Latin characters. - Remove queries which use features not supported by the given database. - Remove long queries which contain domain-specific knowledge which cause model confusion. - Remove queries which do not fit within a 4096 token context window. ## Usage To load the dataset using the HuggingFace `datasets` library: ```python from datasets import load_dataset dataset = load_dataset("zerolink/zsql-postgres-dpo") ``` To use in model fine-tuning, apply the following chat tokenizer: ```python tokenizer = AutoTokenizer.from_pretrained(model) def tokenize(element): schema = element["schema"] question = element["question"] answer = element["chosen"] prompt = f""" Using the schema: {schema} Generate SQL for the following question: {question} """ system = "Translate English to Postgres SQL." message = [ {"role": "system", "content": system}, {"role": "user", "content": prompt}, {"role": "assistant", "content": answer}, ] output = tokenizer.apply_chat_template( message, add_generation_prompt=False, tokenize=True ) return {"text": output} ``` ## Fields The fields in this dataset are as follows: | Field Name | Description | | ---------- | ----------------------------------------------------------------------------------------------- | | schema | The schema of the database. | | question | The natural language question. | | chosen | The DPO preferred SQL query. 
| | rejected | The DPO rejected SQL query. | | weight | The weight of the query in the reward function. | ## Sources This dataset is derived from the following sources: - [x] `datetime` - Use of Postgres date and time functions. - [x] `json` - Use of Postgres JSON functions. - [x] `math` - Use of Postgres math functions. - [ ] `postgis` - Use of Postgres GIS functions. - [x] `re` - Use of Postgres regular expression functions. - [x] `rollup` - Use of Postgres rollup functions. - [x] `set` - Use of Postgres set functions. - [x] `string` - Use of Postgres string functions. - [x] `vector` - Use of PGVector functions. - [x] `window` - Use of Postgres window functions. | Source | License | External Link | | ---------------------- | ------------ | -------------------------------------------------------------------------------------------------------------------- | | wikisql | BSD 3-Clause | [https://github.com/salesforce/WikiSQL](https://github.com/salesforce/WikiSQL) | | spider | CC-BY-SA-4.0 | [https://huggingface.co/datasets/spider](https://huggingface.co/datasets/spider) | | sql_create_context | CC-BY-4.0 | [https://huggingface.co/datasets/b-mc2/sql-create-context](https://huggingface.co/datasets/b-mc2/sql-create-context) | | squall | CC-BY-SA-4.0 | [https://github.com/tzshi/squall](https://github.com/tzshi/squall) | | sede | Apache-2.0 | [https://github.com/hirupert/sede](https://github.com/hirupert/sede) | | nvbench | MIT | [https://github.com/TsinghuaDatabaseGroup/nvBench](https://github.com/TsinghuaDatabaseGroup/nvBench) | | imdb | Not Found | [https://github.com/jkkummerfeld/text2sql-data](https://github.com/jkkummerfeld/text2sql-data) | | advising | CC-BY-4.0 | [https://github.com/jkkummerfeld/text2sql-data](https://github.com/jkkummerfeld/text2sql-data) | | atis | Not Found | [https://github.com/jkkummerfeld/text2sql-data](https://github.com/jkkummerfeld/text2sql-data) | | restaurants | Not Found | [https://github.com/jkkummerfeld/text2sql-data](https://github.com/jkkummerfeld/text2sql-data) | | scholar | Not Found | [https://github.com/jkkummerfeld/text2sql-data](https://github.com/jkkummerfeld/text2sql-data) | | yelp | Not Found | [https://github.com/jkkummerfeld/text2sql-data](https://github.com/jkkummerfeld/text2sql-data) | | academic | Not Found | [https://github.com/jkkummerfeld/text2sql-data](https://github.com/jkkummerfeld/text2sql-data) | | criteria2sql | Apache-2.0 | [https://github.com/xiaojingyu92/Criteria2SQL](https://github.com/xiaojingyu92/Criteria2SQL) | | eICU | CC-BY-4.0 | [https://github.com/glee4810/EHRSQL](https://github.com/glee4810/EHRSQL) | | mimic_iii | CC-BY-4.0 | [https://github.com/glee4810/EHRSQL](https://github.com/glee4810/EHRSQL) | | mimicsql_data | MIT | [https://github.com/wangpinggl/TREQS](https://github.com/wangpinggl/TREQS) | | worldsoccerdatabase | CC-BY-SA-4.0 | [https://github.com/chiahsuan156/KaggleDBQA](https://github.com/chiahsuan156/KaggleDBQA) | | whatcdhiphop | CC-BY-SA-4.0 | [https://github.com/chiahsuan156/KaggleDBQA](https://github.com/chiahsuan156/KaggleDBQA) | | studentmathscore | CC-BY-SA-4.0 | [https://github.com/chiahsuan156/KaggleDBQA](https://github.com/chiahsuan156/KaggleDBQA) | | pesticide | CC-BY-SA-4.0 | [https://github.com/chiahsuan156/KaggleDBQA](https://github.com/chiahsuan156/KaggleDBQA) | | thehistoryofbaseball | CC-BY-SA-4.0 | [https://github.com/chiahsuan156/KaggleDBQA](https://github.com/chiahsuan156/KaggleDBQA) | | uswildfires | CC-BY-SA-4.0 | 
[https://github.com/chiahsuan156/KaggleDBQA](https://github.com/chiahsuan156/KaggleDBQA) | | geonucleardata | CC-BY-SA-4.0 | [https://github.com/chiahsuan156/KaggleDBQA](https://github.com/chiahsuan156/KaggleDBQA) | | greatermanchestercrime | CC-BY-SA-4.0 | [https://github.com/chiahsuan156/KaggleDBQA](https://github.com/chiahsuan156/KaggleDBQA) | Composition: ![Composition](https://raw.githubusercontent.com/zerolink-io/zsql-sqlite-dpo/d8eb36601fc5cfc35da9bb9d98cc5d72451f7dd4/composition.png) ## License This dataset is provided for academic and research purposes. Please adhere to the specified license terms and conditions for usage and distribution.
OpenGVLab/AS-Core
--- license: apache-2.0 --- # AS-Core AS-Core is the human-verified subset of AS-1B. - `semantic_tag_1m.json`: the human verified annotations for semantic tags. - `region_vqa_1m.jsonl`: the human verified annotations for region VQA. - `region_caption_400k.jsonl`: the region captions generated base on paraphrasing the region question-answer pairs. ***NOTE***: The bbox format is `x1y1x2y2`. ## Introduction We present the All-Seeing Project with: [***All-Seeing 1B (AS-1B) dataset***](https://huggingface.co/datasets/Weiyun1025/AS-100M): we propose a new large-scale dataset (AS-1B) for open-world panoptic visual recognition and understanding, using an economical semi-automatic data engine that combines the power of off-the-shelf vision/language models and human feedback. [***All-Seeing Model (ASM)***](https://huggingface.co/Weiyun1025/All-Seeing-Model-FT): we develop a unified vision-language foundation model (ASM) for open-world panoptic visual recognition and understanding. Aligning with LLMs, our ASM supports versatile image-text retrieval and generation tasks, demonstrating impressive zero-shot capability. <img width="820" alt="image" src="https://github.com/OpenGVLab/all-seeing/assets/8529570/e43ab8db-6437-46f1-8aa1-c95f012e9147"> Figure 1: Overview and comparison of our All-Seeing project with other popular large foundation models. <!-- ## Online Demo **All-Seeing Model demo** is available [here](https://openxlab.org.cn/apps/detail/wangweiyun/All-Seeing-Model-Demo). **Dataset Browser** is available [here](https://openxlab.org.cn/apps/detail/wangweiyun/All-Seeing-Dataset-Browser). https://github.com/OpenGVLab/all-seeing/assets/47669167/9b5b32d1-863a-4579-b576-b82523f2205e --> ## Dataset Overview AS-1B with over 1 billion regions annotated with semantic tags, question-answering pairs, and detailed captions. It covers a wide range of 3.5 million common and rare concepts in the real world, and has 132.2 billion tokens that describe the concepts and their attributes. <img width="800" alt="image" src="https://github.com/OpenGVLab/all-seeing/assets/8529570/adac37ed-312f-4f11-ba8a-6bc62067438f"> Some examples <img width="800" alt="image" src="https://github.com/OpenGVLab/all-seeing/assets/8529570/fcf6ab07-c4ba-441c-aa6c-111c769f75b1"> Please see our [paper](https://arxiv.org/abs/2308.01907) to learn more details. ## Model Architecture The All-Seeing model (ASM) is a unified framework for panoptic visual recognition and understanding, including image/region-text retrieval, image/region recognition, captioning, and question-answering. <img width="820" alt="image" src="https://github.com/OpenGVLab/all-seeing/assets/8529570/8995e88c-6381-452f-91e4-05d68a2795fc"> ## License This project is released under the [Apache 2.0 license](LICENSE). 
# Citation If you find our work useful in your research, please consider cite: ```BibTeX @article{wang2023allseeing, title={The All-Seeing Project: Towards Panoptic Visual Recognition and Understanding of the Open World}, author={Wang, Weiyun and Shi, Min and Li, Qingyun and Wang, Wenhai and Huang, Zhenhang and Xing, Linjie and Chen, Zhe and Li, Hao and Zhu, Xizhou and Cao, Zhiguo and others}, journal={arXiv preprint arXiv:2308.01907}, year={2023} } @article{wang2024allseeing_v2, title={The All-Seeing Project V2: Towards General Relation Comprehension of the Open World}, author={Wang, Weiyun and Ren, Yiming and Luo, Haowen and Li, Tiantong and Yan, Chenxiang and Chen, Zhe and Wang, Wenhai and Li, Qingyun and Lu, Lewei and Zhu, Xizhou and others}, journal={arXiv preprint arXiv:2402.19474}, year={2024} } ```
ruslanmv/HealthCareMagic-100k
--- configs: - config_name: default dataset_info: features: - name: instruction dtype: string - name: input dtype: string - name: output dtype: string splits: - name: train num_bytes: 126454896 num_examples: 112165 download_size: 70518148 dataset_size: 126454896 --- # Dataset Card for "HealthCareMagic-100k" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
TIGER-Lab/SKGInstruct
--- license: cc-by-nc-2.0 task_categories: - text-generation language: - en pretty_name: SKGInstruct size_categories: - 100K<n<1M tags: - code - SKG configs: - config_name: default data_files: - split: train path: "skginstruct.json" - split: test path: "skginstruct_test_file_7b.json" --- # 🏗️ StructLM: Towards Building Generalist Models for Structured Knowledge Grounding SKGInstruct is an instruction tuning dataset constructed from 19 structured knowledge grounding datasets, mixed with 🤗 [SlimOrca](https://huggingface.co/datasets/Open-Orca/SlimOrca) Project Page: [https://tiger-ai-lab.github.io/StructLM/](https://tiger-ai-lab.github.io/StructLM/) Paper: [https://arxiv.org/pdf/2402.16671.pdf](https://arxiv.org/pdf/2402.16671.pdf) Code: [https://github.com/TIGER-AI-Lab/StructLM](https://github.com/TIGER-AI-Lab/StructLM) Models: 7B | [StructLM-7B](https://huggingface.co/TIGER-Lab/StructLM-7B) 13B | [StructLM-13B](https://huggingface.co/TIGER-Lab/StructLM-13B) 34B | [StructLM-34B](https://huggingface.co/TIGER-Lab/StructLM-34B) ## **License** | Dataset Name | License Type | |--------------|----------------| | TabMWP | [Attribution-ShareAlike 4.0 International](https://creativecommons.org/licenses/by-sa/4.0/)| | SlimOrca | MIT | | everything else | [Attribution-NonCommercial-ShareAlike 4.0 International](https://creativecommons.org/licenses/by-nc-sa/4.0/)| ## **Citation** Please cite our paper if you use our data, model or code. Please also kindly cite the original dataset papers. ``` @misc{zhuang2024structlm, title={StructLM: Towards Building Generalist Models for Structured Knowledge Grounding}, author={Alex Zhuang and Ge Zhang and Tianyu Zheng and Xinrun Du and Junjie Wang and Weiming Ren and Stephen W. Huang and Jie Fu and Xiang Yue and Wenhu Chen}, year={2024}, eprint={2402.16671}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
ba188/NHS_HES
--- tags: - medical - healthcare - NHS language: - en --- # Dataset Card for NHS_HES Data <!-- Provide a quick summary of the dataset. --> This dataset consists of data taken from three CSV files containing Hospital Episode Statistics (HES) for Admitted Patient Care and Outpatient Data supplied by National Health Services (NHS) England from 2018 - 2023. ## Dataset Details ### Dataset Description The data includes monthly counts from hospital visits and admissions of different types in England for April 2018 to December 2023. The data includes both total counts for every category of visit/appointment considered as well as a breakdown of those visits/admissions by treatment specialty and age-group. <!-- Provide a longer summary of what this dataset is. --> ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> More information and the original CSV files can be found at: https://digital.nhs.uk/data-and-information/publications/statistical/provisional-monthly-hospital-episode-statistics-for-admitted-patient-care-outpatient-and-accident-and-emergency-data/april-2023---december-2023. Incorporated CSVs are: 'Provisional Monthly Hospital Episode Statistics for Admitted Patient Care and Outpatients, December 2023: Open Data - Totals', 'Provisional Monthly Hospital Episode Statistics for Admitted Patient Care and Outpatients, December 2023: Open Data - Treatment Specialties' 'Provisional Monthly Hospital Episode Statistics for Admitted Patient Care and Outpatients, December 2023: Open Data - Age Groups' ## Uses The linked Google Colab file shows one possible use for a subset of this data: examining the pattern in hospital admissions episodes before, during, and after the COVID-19 pandemic and analysing whether there is a seasonal trend in those admissions and whether or not that changed during the pandemic. Ex.) https://colab.research.google.com/drive/1u7jNC-CFnoVBCCDnNUIEM7zmt9nJLmF2?usp=sharing <!-- Address questions about how the dataset is intended to be used. --> ### Direct Use <!-- This section describes suitable use cases for the dataset. --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> The dataset is a csv that is 69*73. Each row contains the data for a single month from April 2018 to December 2023. The columns contain data on each of the variables counts were collected for (e.g. Finished Consultant Episodes, Finished Consultant Episodes with Procedure) split into the three original datasets, with separate columns for the total counts, the age bands, and the specialties. Within these columns, there are lists of dictionaries containing the data. [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). --> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> [More Information Needed] #### Who are the source data producers? 
<!-- This section describes the people or systems who created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] #### Personal and Sensitive Information While this data is related to healthcare, the units of interest are the months, rather than individual patients, so patient privacy is not an issue here. There are also no identifiable features of the patients themselves, and the data was originally released by the NHS for public use. <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases, and limitations of the dataset. More information is needed for further recommendations. ## Dataset Card Contact [More Information Needed]
alfredplpl/wikipedia-qa-ja-500k
--- dataset_info: features: - name: id dtype: string - name: url dtype: string - name: question dtype: string - name: answer dtype: string splits: - name: train num_bytes: 142049495 num_examples: 516932 download_size: 65635910 dataset_size: 142049495 configs: - config_name: default data_files: - split: train path: data/train-* license: cc-by-sa-3.0 task_categories: - question-answering language: - ja --- # Dataset Card for "wikipedia-qa-ja-500k" # Original Dataset - hpprc/wikipedia-20240101 # Procedure
- Extract the first line of each article, together with its title, from the dataset.
- Generate the answer by summarizing that line with an LLM:
  - Feed a RAG-like prompt to CALM 2 7B Chat.
  - Format the response.
# RAG-like Prompt
```python
f"""USER: {title}とはなんですか?次の文章を参考に一言でまとめてください。{text}
ASSISTANT: """
```
(Roughly: "What is {title}? Please summarize it in one phrase, using the following text as reference. {text}")
cagliostrolab/860k-ordered-tags
--- license: mit task_categories: - text-to-image language: - en tags: - art - not-for-all-audiences size_categories: - 100K<n<1M viewer: false ---
bethgelab/Let-It-Wag
--- language: - en license: mit size_categories: - 100K<n<1M task_categories: - image-classification pretty_name: LetItWag! configs: - config_name: default data_files: - split: train path: data/train-* dataset_info: features: - name: image dtype: image - name: label dtype: class_label: names: '0': A300B4_aircraft '1': A310_aircraft '2': Acadian_Flycatcher_bird '3': Affenpinscher '4': African_rock_python '5': Alder_Flycatcher_bird '6': American_Golden_Plover_bird '7': American_Tree_Sparrow_bird '8': An-12_aircraft '9': Appenzeller_Sennenhund '10': Artic_Tern_bird '11': Ash_throated_Flycatcher_bird '12': Audubons_Oriole_bird '13': Australian_Silky_Terrier '14': Australian_Terrier '15': BAE-125_aircraft '16': BAE_146-200_aircraft '17': BAE_146-300_aircraft '18': Baird_Sparrow_bird '19': Bairds_Sandpiper_bird '20': Bank_Swallow_bird '21': Barrows_Goldeneye_bird '22': Bay_breasted_Warbler_bird '23': Beechcraft_1900_aircraft '24': Bells_Vireo_bird '25': Bewick_Wren_bird '26': Black_Rosy_Finch_bird '27': Black_chinned_Sparrow_bird '28': Black_crested_Titmouse_bird '29': Bouvier_des_Flandres_dog '30': Brandt_Cormorant_bird '31': Brewers_Blackbird_bird '32': Brewers_Sparrow_bird '33': Briard '34': Broad_winged_Hawk_bird '35': Bronzed_Cowbird_bird '36': Brown_crested_Flycatcher_bird '37': Bullocks_Oriole_bird '38': C-47_aircraft '39': California_Towhee_bird '40': Canada_Warbler_bird '41': Canyon_Towhee_bird '42': Cassins_Finch_bird '43': Cassins_Kingbird_bird '44': Cassins_Sparrow_bird '45': Cassins_Vireo_bird '46': Cave_Swallow_bird '47': Cessna_525_aircraft '48': Cessna_560_aircraft '49': Challenger_600_aircraft '50': Chestnut_collared_Longspur_bird '51': Chuck_will_Widow_bird '52': Clarks_Grebe_bird '53': Clay_colored_Sparrow_bird '54': Connecticut_Warbler_bird '55': Coopers_Hawk_bird '56': Cordilleran_Flycatcher_bird '57': Couchs_Kingbird_bird '58': DC-3_aircraft '59': DC-6_aircraft '60': DHC-1_aircraft '61': DHC-6_aircraft '62': DHC-8-100_aircraft '63': DHC-8-300_aircraft '64': Dandie_Dinmont_Terrier '65': Dornier_328_aircraft '66': Double_crested_Cormorant_bird '67': Dunlin_bird '68': Dusky_Flycatcher_bird '69': E-195_aircraft '70': EMB-120_aircraft '71': Eastern_Phoebe_bird '72': Eastern_Wood_Pewee_bird '73': Elegant_Tern_bird '74': Embraer_Legacy_600_aircraft '75': English_Setter '76': English_Springer_Spaniel '77': Entlebucher_Sennenhund '78': Falcon_900_aircraft '79': Ferruginous_Hawk_bird '80': Field_Sparrow_bird '81': Florida_Scrub_Jay_bird '82': Fokker_50_aircraft '83': Forsters_Tern_bird '84': Geococcyx_bird '85': Giant_Schnauzer '86': Global_Express_aircraft '87': Grasshopper_Sparrow_bird '88': Gray_Flycatcher_bird '89': Gray_cheeked_Thrush_bird '90': Gray_crowned_Rosy_Finch_bird '91': Great_Cormorant_bird '92': Great_tailed_Grackle_bird '93': Greater_Swiss_Mountain_Dog '94': Groenendael_dog '95': Gulfstream_IV_aircraft '96': Gulfstream_V_aircraft '97': Hammonds_Flycatcher_bird '98': Handstand_Walking '99': Harris_Sparrow_bird '100': Harriss_Hawk_bird '101': Henslow_Sparrow_bird '102': Horned_Grebe_bird '103': House_Sparrow_bird '104': House_Wren_bird '105': Huttons_Vireo_bird '106': Ibizan_Hound '107': Inca_Dove_bird '108': Indian_cobra '109': Irish_Setter '110': Irish_Terrier '111': Irish_Wolfhound '112': Japanese_Chin '113': Kentucky_Warbler_bird '114': Kerry_Blue_Terrier '115': King_Rail_bird '116': Komondor '117': Kuvasz '118': Lakeland_Terrier '119': Lapland_Longspur_bird '120': Lark_Bunting_bird '121': Lark_Sparrow_bird '122': Lazuli_Bunting_bird '123': Le_Conte_Sparrow_bird 
'124': Least_Flycatcher_bird '125': Least_Grebe_bird '126': Lesser_Nighthawk_bird '127': Lesser_Scaup_bird '128': Lesser_Yellowlegs_bird '129': Lhasa_Apso '130': Lincoln_Sparrow_bird '131': Long_billed_Dowitcher_bird '132': MD-11_aircraft '133': Magnolia_Warbler_bird '134': Marsh_Wren_bird '135': Merlin_bird '136': Metroliner_aircraft '137': Mexican_Jay_bird '138': Mountain_Plover_bird '139': Mourning_Warbler_bird '140': Myrtle_Warbler_bird '141': Nelsons_Sparrow_bird '142': Neotropic_Cormorant_bird '143': Norfolk_Terrier '144': Northern_Goshawk_bird '145': Norwich_Terrier '146': Oak_Titmouse_bird '147': Old_English_Sheepdog '148': Olive_Sparrow_bird '149': Olive_sided_Flycatcher_bird '150': Orange_crowned_Warbler_bird '151': Otterhound '152': Pacific_Golden_Plover_bird '153': Pacific_Loon_bird '154': Pacific_slope_Flycatcher_bird '155': Parakeet_Auklet_bird '156': Pectoral_Sandpiper_bird '157': Pekingese '158': Pelagic_Cormorant_bird '159': Philadelphia_Vireo_bird '160': Pigeon_Guillemot_bird '161': Plumbeous_Vireo_bird '162': Pomarine_Jaeger_bird '163': Prairie_Warbler_bird '164': Red_Knot_bird '165': Red_Phalarope_bird '166': Red_eyed_Vireo_bird '167': Red_faced_Cormorant_bird '168': Red_naped_Sapsucker_bird '169': Red_necked_Grebe_bird '170': Red_necked_Phalarope_bird '171': Redbone_Coonhound '172': Rhinoceros_Auklet_bird '173': Rhodesian_Ridgeback '174': Rock_Ptarmigan_bird '175': Rock_Sandpiper_bird '176': Roseate_Tern_bird '177': Rufous_crowned_Sparrow_bird '178': SR-20_aircraft '179': Saab_2000_aircraft '180': Saab_340_aircraft '181': Saltmarsh_Sparrow_bird '182': Saluki '183': Sayornis_bird '184': Scaled_Quail_bird '185': Scott_Oriole_bird '186': Scottish_Deerhound '187': Scottish_Terrier '188': Sealyham_Terrier '189': Seaside_Sparrow_bird '190': Sedge_Wren_bird '191': Semipalmated_Sandpiper_bird '192': Sharp_shinned_Hawk_bird '193': Shih_Tzu '194': Shiny_Cowbird_bird '195': Short_billed_Dowitcher_bird '196': Song_Sparrow_bird '197': Sooty_Grouse_bird '198': Sora_bird '199': Spruce_Grouse_bird '200': Staffordshire_Bull_Terrier '201': Stilt_Sandpiper_bird '202': Surf_Scoter_bird '203': Sussex_Spaniel '204': Swainsons_Thrush_bird '205': Swamp_Sparrow_bird '206': Tennessee_Warbler_bird '207': Tibetan_Mastiff '208': Tibetan_Terrier '209': Townsends_Warbler_bird '210': Tree_Sparrow_bird '211': Treeing_Walker_Coonhound '212': Tropical_Kingbird_bird '213': Tu-134_aircraft '214': Tu-154_aircraft '215': Veery_bird '216': Vizsla '217': Warbling_Vireo_bird '218': Welsh_Springer_Spaniel '219': Western_Sandpiper_bird '220': Western_Scrub_Jay_bird '221': Western_Wood_Pewee_bird '222': White_eyed_Vireo_bird '223': White_rumped_Sandpiper_bird '224': White_tailed_Ptarmigan_bird '225': White_winged_Scoter_bird '226': Williamsons_Sapsucker_bird '227': Willow_Flycatcher_bird '228': Willow_Ptarmigan_bird '229': Wilsons_Phalarope_bird '230': Wilsons_Warbler_bird '231': Winter_Wren_bird '232': Wire_Fox_Terrier '233': Worm_eating_Warbler_bird '234': Wrentit_bird '235': Yak-42_aircraft '236': Yellow_bellied_Flycatcher_bird '237': Yellow_breasted_Chat_bird '238': Yellow_eyed_Junco_bird '239': Yellow_throated_Warbler_bird '240': Zone_tailed_Hawk_bird '241': barn_spider '242': bishop_of_llandaff_flowers '243': bolete '244': borzoi '245': brussels_griffon '246': cape_flower_flowers '247': chiton '248': consomme '249': dowitcher '250': dung_beetle '251': dust_jacket '252': earth_star_fungus '253': eastern_diamondback_rattlesnake '254': eastern_hog-nosed_snake '255': eel '256': eggnog '257': flatfish '258': 
flatworm '259': gar_fish '260': gibbon '261': globe-flower_flowers '262': great_masterwort_flowers '263': green_mamba '264': guenon '265': guillotine '266': gyromitra '267': isopod '268': kingsnake '269': ladle '270': lakeshore '271': langur '272': letter_opener '273': mallow_flowers '274': mexican_aster_flowers '275': newt '276': night_snake '277': partridge '278': patas_monkey '279': ptarmigan '280': sea_cucumber '281': sea_snake '282': sidewinder_rattlesnake '283': stratified_texture '284': sword_lily_flowers '285': thorn_apple_flowers '286': tree_mallow_flowers '287': vine_snake '288': water_snake '289': worm_snake splits: - name: train num_bytes: 4375007936.5 num_examples: 130500 download_size: 4911914985 dataset_size: 4375007936.5 ---
Telugu-LLM-Labs/nepali_alpaca_yahma_cleaned_filtered
--- dataset_info: features: - name: instruction dtype: string - name: input dtype: string - name: output dtype: string - name: nepali_instruction dtype: string - name: nepali_input dtype: string - name: nepali_output dtype: string splits: - name: train num_bytes: 106615877 num_examples: 28910 download_size: 45751925 dataset_size: 106615877 configs: - config_name: default data_files: - split: train path: data/train-* ---
AdaptLLM/FPB
--- configs: - config_name: FPB data_files: - split: train path: train.csv - split: test path: test.csv task_categories: - text-classification - question-answering - zero-shot-classification language: - en tags: - finance --- # Domain Adaptation of Large Language Models This repo contains the **FPB dataset** used in our **ICLR 2024** paper [Adapting Large Language Models via Reading Comprehension](https://huggingface.co/papers/2309.09530). We explore **continued pre-training on domain-specific corpora** for large language models. While this approach enriches LLMs with domain knowledge, it significantly hurts their prompting ability for question answering. Inspired by human learning via reading comprehension, we propose a simple method to **transform large-scale pre-training corpora into reading comprehension texts**, consistently improving prompting performance across tasks in biomedicine, finance, and law domains. **Our 7B model competes with much larger domain-specific models like BloombergGPT-50B**. ### 🤗 We are currently working hard on developing models across different domains, scales and architectures! Please stay tuned! 🤗 **************************** **Updates** **************************** * 2024/4/2: Released the raw data splits (train and test) of all the evaluation datasets * 2024/1/16: 🎉 Our [research paper](https://huggingface.co/papers/2309.09530) has been accepted by ICLR 2024!!!🎉 * 2023/12/19: Released our [13B base models](https://huggingface.co/AdaptLLM/law-LLM-13B) developed from LLaMA-1-13B. * 2023/12/8: Released our [chat models](https://huggingface.co/AdaptLLM/law-chat) developed from LLaMA-2-Chat-7B. * 2023/9/18: Released our [paper](https://huggingface.co/papers/2309.09530), [code](https://github.com/microsoft/LMOps), [data](https://huggingface.co/datasets/AdaptLLM/law-tasks), and [base models](https://huggingface.co/AdaptLLM/law-LLM) developed from LLaMA-1-7B. ## Domain-Specific LLaMA-1 ### LLaMA-1-7B In our paper, we develop three domain-specific models from LLaMA-1-7B, which are also available in Huggingface: [Biomedicine-LLM](https://huggingface.co/AdaptLLM/medicine-LLM), [Finance-LLM](https://huggingface.co/AdaptLLM/finance-LLM) and [Law-LLM](https://huggingface.co/AdaptLLM/law-LLM), the performances of our AdaptLLM compared to other domain-specific LLMs are: <p align='center'> <img src="https://cdn-uploads.huggingface.co/production/uploads/650801ced5578ef7e20b33d4/6efPwitFgy-pLTzvccdcP.png" width="700"> </p> ### LLaMA-1-13B Moreover, we scale up our base model to LLaMA-1-13B to see if **our method is similarly effective for larger-scale models**, and the results are consistently positive too: [Biomedicine-LLM-13B](https://huggingface.co/AdaptLLM/medicine-LLM-13B), [Finance-LLM-13B](https://huggingface.co/AdaptLLM/finance-LLM-13B) and [Law-LLM-13B](https://huggingface.co/AdaptLLM/law-LLM-13B). ## Domain-Specific LLaMA-2-Chat Our method is also effective for aligned models! LLaMA-2-Chat requires a [specific data format](https://huggingface.co/blog/llama2#how-to-prompt-llama-2), and our **reading comprehension can perfectly fit the data format** by transforming the reading comprehension into a multi-turn conversation. 
We have also open-sourced chat models in different domains: [Biomedicine-Chat](https://huggingface.co/AdaptLLM/medicine-chat), [Finance-Chat](https://huggingface.co/AdaptLLM/finance-chat) and [Law-Chat](https://huggingface.co/AdaptLLM/law-chat) ## Domain-Specific Tasks ### Pre-templatized/Formatted Testing Splits To easily reproduce our prompting results, we have uploaded the filled-in zero/few-shot input instructions and output completions of the test each domain-specific task: [biomedicine-tasks](https://huggingface.co/datasets/AdaptLLM/medicine-tasks), [finance-tasks](https://huggingface.co/datasets/AdaptLLM/finance-tasks), and [law-tasks](https://huggingface.co/datasets/AdaptLLM/law-tasks). **Note:** those filled-in instructions are specifically tailored for models before alignment and do NOT fit for the specific data format required for chat models. ### Raw Datasets We have also uploaded the raw training and testing splits, for facilitating fine-tuning or other usages: - [ChemProt](https://huggingface.co/datasets/AdaptLLM/ChemProt) - [RCT](https://huggingface.co/datasets/AdaptLLM/RCT) - [ConvFinQA](https://huggingface.co/datasets/AdaptLLM/ConvFinQA) - [FiQA_SA](https://huggingface.co/datasets/AdaptLLM/FiQA_SA) - [Headline](https://huggingface.co/datasets/AdaptLLM/Headline) - [NER](https://huggingface.co/datasets/AdaptLLM/NER) - [FPB](https://huggingface.co/datasets/AdaptLLM/FPB) The other datasets used in our paper have already been available in huggingface, and you can directly load them with the following code: ```python from datasets import load_dataset # MQP: dataset = load_dataset('medical_questions_pairs') # PubmedQA: dataset = load_dataset('bigbio/pubmed_qa') # USMLE: dataset=load_dataset('GBaker/MedQA-USMLE-4-options') # SCOTUS dataset = load_dataset("lex_glue", 'scotus') # CaseHOLD dataset = load_dataset("lex_glue", 'case_hold') # UNFAIR-ToS dataset = load_dataset("lex_glue", 'unfair_tos') ``` ## Citation If you find our work helpful, please cite us: ```bibtex @inproceedings{ cheng2024adapting, title={Adapting Large Language Models via Reading Comprehension}, author={Daixuan Cheng and Shaohan Huang and Furu Wei}, booktitle={The Twelfth International Conference on Learning Representations}, year={2024}, url={https://openreview.net/forum?id=y886UXPEZ0} } ``` and the original dataset: ```bibtex @article{FPB, author = {Pekka Malo and Ankur Sinha and Pekka J. Korhonen and Jyrki Wallenius and Pyry Takala}, title = {Good debt or bad debt: Detecting semantic orientations in economic texts}, journal = {J. Assoc. Inf. Sci. Technol.}, volume = {65}, number = {4}, pages = {782--796}, year = {2014} } ```
jiangjiechen/ekar_chinese
--- language: - zh license: - afl-3.0 size_categories: - 1K<n<2K source_datasets: - original task_categories: - question-answering - text-generation task_ids: - analogical-qa - explanation-generation --- # Dataset Card for ekar_chinese ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-instances) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** https://ekar-leaderboard.github.io - **Paper:** [E-KAR: A Benchmark for Rationalizing Natural Language Analogical Reasoning](https://aclanthology.org/2022.findings-acl.311) - **Leaderboard:** https://eval.ai/web/challenges/challenge-page/1671/overview - **Point of Contact:** jjchen19@fudan.edu.cn ### Dataset Summary ***New!***(9/18/2022) E-KAR `v1.1` is officially released (at the `main` branch), **with a higher-quality English dataset!** In `v1.1`, we further improve the Chinese-to-English translation quality of the English E-KAR, with over 600 problems and over 1,000 explanations manually adjusted. You can still find previous version (as in the paper) in the `v1.0` branch in the repo. For more information please refer to https://ekar-leaderboard.github.io. The ability to recognize analogies is fundamental to human cognition. Existing benchmarks to test word analogy do not reveal the underneath process of analogical reasoning of neural models. Holding the belief that models capable of reasoning should be right for the right reasons, we propose a first-of-its-kind Explainable Knowledge-intensive Analogical Reasoning benchmark (E-KAR). Our benchmark consists of 1,655 (in Chinese) and 1,251 (in English) problems sourced from the Civil Service Exams, which require intensive background knowledge to solve. More importantly, we design a free-text explanation scheme to explain whether an analogy should be drawn, and manually annotate them for each and every question and candidate answer. Empirical results suggest that this benchmark is very challenging for some state-of-the-art models for both explanation generation and analogical question answering tasks, which invites further research in this area. ### Supported Tasks and Leaderboards - `analogical-qa`: The dataset can be used to train a model for analogical reasoning in the form of multiple-choice QA. - `explanation-generation`: The dataset can be used to generate free-text explanations to rationalize analogical reasoning. This dataset supports two task modes: EASY mode and HARD mode: - `EASY mode`: where query explanation can be used as part of the input. - `HARD mode`: no explanation is allowed as part of the input. 
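To make the two modes concrete, here is a small sketch of turning one instance (see the Data Instances section below) into model input; the prompt wording is illustrative, not prescribed by the benchmark:

```python
def format_ekar(example, mode="HARD"):
    """Build a multiple-choice prompt from an E-KAR instance."""
    lines = [f"Query analogy: {example['question']}"]
    if mode == "EASY":
        # EASY mode may expose the query explanation (first element of `explanation`).
        lines.append(f"Query explanation: {example['explanation'][0]}")
    for label, text in zip(example["choices"]["label"], example["choices"]["text"]):
        lines.append(f"{label}. {text}")
    lines.append("Which candidate pair is analogous to the query?")
    return "\n".join(lines)
```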
### Languages This dataset is in Chinese, with its [English version](https://huggingface.co/datasets/Jiangjie/ekar_english). ## Dataset Structure ### Data Instances ```json { "id": "982f17-en", "question": "plant:coal", "choices": { "label": [ "A", "B", "C", "D" ], "text": [ "white wine:aged vinegar", "starch:corn", "milk:yogurt", "pickled cabbage:cabbage" ] }, "answerKey": "C", "explanation": [ "\"plant\" is the raw material of \"coal\".", "both \"white wine\" and \"aged vinegar\" are brewed.", "\"starch\" is made of \"corn\", and the order of words is inconsistent with the query.", "\"yogurt\" is made from \"milk\".", "\"pickled cabbage\" is made of \"cabbage\", and the word order is inconsistent with the query." ], "relation": [ [["plant", "coal", "R3.7"]], [["white wine", "aged vinegar", "R2.4"]], [["corn", "starch", "R3.7"]], [["milk", "yogurt", "R3.7"]], [["cabbage", "pickled cabbage", "R3.7"]] ] } ``` ### Data Fields - id: a string identifier for each example. - question: query terms. - choices: candidate answer terms. - answerKey: correct answer. - explanation: explanations for query (1st) and candidate answers (2nd-5th). - relation: annotated relations for terms in the query (1st) and candidate answers (2nd-5th). ### Data Splits | name |train|validation|test| |:-----:|:---:|:--------:|:--:| |default| 1155 | 165 | 335 | |description| | | blinded | ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? [Needs More Information] ### Personal and Sensitive Information [Needs More Information] ## Considerations for Using the Data ### Social Impact of Dataset The purpose of this dataset is to help develop analogical reasoning systems that are right for the right reasons. ### Discussion of Biases This dataset is sourced and translated from the Civil Service Examinations of China. Therefore, it may contain information biased to Chinese culture. ### Other Known Limitations 1. The explanation annotation process in E-KAR (not the EG task) is mostly post-hoc and reflects only the result of reasoning. Humans solve the analogy problems in a trial-and-error manner, i.e., adjusting the abduced source structure and trying to find the most suited one for all candidate answers. Therefore, such explanations cannot offer supervision for intermediate reasoning. 2. E-KAR only presents one feasible explanation for each problem, whereas there may be several. ## Additional Information ### Dataset Curators The dataset was initially created and curated by Jiangjie Chen (Fudan University, ByteDance), Rui Xu (Fudan University), Ziquan Fu (Brain Technologies, Inc.), Wei Shi (South China University of Technology), Xinbo Zhang (ByteDance), Changzhi Sun (ByteDance) and other colleagues at ByteDance and Fudan University. 
### Licensing Information [Needs More Information] ### Citation Information ```latex @inproceedings{chen-etal-2022-e, title = "{E}-{KAR}: A Benchmark for Rationalizing Natural Language Analogical Reasoning", author = "Chen, Jiangjie and Xu, Rui and Fu, Ziquan and Shi, Wei and Li, Zhongqiao and Zhang, Xinbo and Sun, Changzhi and Li, Lei and Xiao, Yanghua and Zhou, Hao", booktitle = "Findings of the Association for Computational Linguistics: ACL 2022", month = may, year = "2022", address = "Dublin, Ireland", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2022.findings-acl.311", pages = "3941--3955", } ```
albertxu/CrosswordQA
--- annotations_creators: - no-annotation language_creators: - found language: - en license: - unknown multilinguality: - monolingual size_categories: - 1M<n<10M task_categories: - question-answering task_ids: - open-domain-qa --- # Dataset Card for CrosswordQA ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-instances) - [Data Splits](#data-instances) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) ## Dataset Description - **Homepage:** [Needs More Information] - **Repository:** https://github.com/albertkx/Berkeley-Crossword-Solver - **Paper:** [Needs More Information] - **Leaderboard:** [Needs More Information] - **Point of Contact:** [Albert Xu](mailto:albertxu@usc.edu) and [Eshaan Pathak](mailto:eshaanpathak@berkeley.edu) ### Dataset Summary The CrosswordQA dataset is a set of over 6 million clue-answer pairs scraped from the New York Times and many other crossword publishers. The dataset was created to train the Berkeley Crossword Solver's QA model. See our paper for more information. Answers are automatically segmented (e.g., BUZZLIGHTYEAR -> Buzz Lightyear), and thus may occasionally be segmented incorrectly. ### Supported Tasks and Leaderboards [Needs More Information] ### Languages [Needs More Information] ## Dataset Structure ### Data Instances ``` { "id": 0, "clue": "Clean-up target", "answer": "mess" } ``` ### Data Fields [Needs More Information] ### Data Splits [Needs More Information] ## Dataset Creation ### Curation Rationale [Needs More Information] ### Source Data #### Initial Data Collection and Normalization [Needs More Information] #### Who are the source language producers? [Needs More Information] ### Annotations #### Annotation process [Needs More Information] #### Who are the annotators? [Needs More Information] ### Personal and Sensitive Information [Needs More Information] ## Considerations for Using the Data ### Social Impact of Dataset [Needs More Information] ### Discussion of Biases [Needs More Information] ### Other Known Limitations [Needs More Information] ## Additional Information ### Dataset Curators [Needs More Information] ### Licensing Information [Needs More Information] ### Citation Information [Needs More Information]
Team-PIXEL/rendered-bookcorpus
--- annotations_creators: - no-annotation language_creators: - found language: - en license: - unknown multilinguality: - monolingual pretty_name: Team-PIXEL/rendered-bookcorpus size_categories: - 1M<n<10M source_datasets: - rendered|BookCorpusOpen task_categories: - masked-auto-encoding - rendered-language-modelling task_ids: - masked-auto-encoding - rendered-language-modeling paperswithcode_id: bookcorpus --- # Dataset Card for Team-PIXEL/rendered-bookcorpus ## Dataset Description - **Homepage:** [https://github.com/xplip/pixel](https://github.com/xplip/pixel) - **Repository:** [https://github.com/xplip/pixel](https://github.com/xplip/pixel) - **Papers:** [Aligning Books and Movies: Towards Story-like Visual Explanations by Watching Movies and Reading Books ](https://arxiv.org/abs/1506.06724), [Language Modelling with Pixels](https://arxiv.org/abs/2207.06991) - **Point of Contact:** [Phillip Rust](mailto:p.rust@di.ku.dk) - **Size of downloaded dataset files:** 63.58 GB - **Size of the generated dataset:** 63.59 GB - **Total amount of disk used:** 127.17 GB ### Dataset Summary This dataset is a version of the BookCorpus available at [https://huggingface.co/datasets/bookcorpusopen](https://huggingface.co/datasets/bookcorpusopen) with examples rendered as images with resolution 16x8464 pixels. The original BookCorpus was introduced by Zhu et al. (2015) in [Aligning Books and Movies: Towards Story-Like Visual Explanations by Watching Movies and Reading Books](https://arxiv.org/abs/1506.06724) and contains 17868 books of various genres. The rendered BookCorpus was used to train the [PIXEL](https://huggingface.co/Team-PIXEL/pixel-base) model introduced in the paper [Language Modelling with Pixels](https://arxiv.org/abs/2207.06991) by Phillip Rust, Jonas F. Lotz, Emanuele Bugliarello, Elizabeth Salesky, Miryam de Lhoneux, and Desmond Elliott. The BookCorpusOpen dataset was rendered book-by-book into 5.4M examples containing approximately 1.1B words in total. The dataset is stored as a collection of 162 parquet files. It was rendered using the script openly available at [https://github.com/xplip/pixel/blob/main/scripts/data/prerendering/prerender_bookcorpus.py](https://github.com/xplip/pixel/blob/main/scripts/data/prerendering/prerender_bookcorpus.py). The text renderer uses a PyGame backend and a collection of merged Google Noto Sans fonts. The PyGame backend does not support complex text layouts (e.g. ligatures and right-to-left scripts) or emoji, so occurrences of such text in the BookCorpus have not been rendered accurately. Each example consists of a "pixel_values" field which stores a 16x8464 (height, width) grayscale image containing the rendered text, and an integer value "num_patches" which stores how many image patches (when splitting the image into 529 non-overlapping patches of resolution 16x16 pixels) in the associated images contain actual text, i.e. are neither blank (fully white) nor are the fully black end-of-sequence patch. The rendered BookCorpus can be loaded via the datasets library as follows: ```python from datasets import load_dataset # Download the full dataset to disk load_dataset("Team-PIXEL/rendered-bookcorpus", split="train") # Stream the dataset directly from the hub load_dataset("Team-PIXEL/rendered-bookcorpus", split="train", streaming=True) ``` ## Dataset Structure ### Data Instances - **Size of downloaded dataset files:** 63.58 GB - **Size of the generated dataset:** 63.59 GB - **Total amount of disk used:** 127.17 GB An example of 'train' looks as follows. 
```
{
  "pixel_values": <PIL.PngImagePlugin.PngImageFile image mode=L size=8464x16>,
  "num_patches": 498
}
```

### Data Fields
The data fields are the same among all splits.

- `pixel_values`: an `Image` feature.
- `num_patches`: a `Value(dtype="int64")` feature.

### Data Splits

|train|
|:----|
|5400000|

## Dataset Creation

### Curation Rationale

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### Who are the source language producers?

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Annotations

#### Annotation process

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

#### Who are the annotators?

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Personal and Sensitive Information

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Discussion of Biases

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Other Known Limitations

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

## Additional Information

### Dataset Curators

[More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

### Licensing Information

The books have been crawled from smashwords.com, see their [terms of service](https://www.smashwords.com/about/tos) for more information. A data sheet for this dataset has also been created and published in [Addressing "Documentation Debt" in Machine Learning Research: A Retrospective Datasheet for BookCorpus](https://arxiv.org/abs/2105.05241).

### Citation Information

```bibtex
@InProceedings{Zhu_2015_ICCV,
    title = {Aligning Books and Movies: Towards Story-Like Visual Explanations by Watching Movies and Reading Books},
    author = {Zhu, Yukun and Kiros, Ryan and Zemel, Rich and Salakhutdinov, Ruslan and Urtasun, Raquel and Torralba, Antonio and Fidler, Sanja},
    booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
    month = {December},
    year = {2015}
}
```

```bibtex
@article{rust-etal-2022-pixel,
  title={Language Modelling with Pixels},
  author={Phillip Rust and Jonas F. Lotz and Emanuele Bugliarello and Elizabeth Salesky and Miryam de Lhoneux and Desmond Elliott},
  journal={arXiv preprint},
  year={2022},
  url={https://arxiv.org/abs/2207.06991}
}
```

### Contact Person
This dataset was added by Phillip Rust.
Github: [@xplip](https://github.com/xplip)
Twitter: [@rust_phillip](https://twitter.com/rust_phillip)
cmotions/Beatles_lyrics
---
language:
- en
tags:
- language modeling
datasets:
- full dataset
- cleaned dataset
---

## Dataset overview
This dataset contains all lyrics from songs produced by The Beatles, 180 in total. There are two splits available in the dictionary:
- dataset_cleaned: contains all lyrics including Intro, Outro, Chorus tagging.
- dataset_full: contains only lyrics without any tagging

Each split contains the title, album, the lyrics for the song, the length of the lyrics field (tokens) and a number.
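As a quick start, here is a minimal loading sketch (untested; it assumes the standard `datasets` API and that the splits are exposed under the names described above):

```python
from datasets import load_dataset

# Load all available splits of the Beatles lyrics dataset.
dataset = load_dataset("cmotions/Beatles_lyrics")

# Print the split names and their sizes without assuming exact naming.
for split_name, split in dataset.items():
    print(split_name, len(split))

# Inspect the first song of the first split.
first_split = next(iter(dataset.values()))
print(first_split[0])
```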
joelniklaus/mapa
---
annotations_creators:
- other
language_creators:
- found
language:
- multilingual
- bg
- cs
- da
- de
- el
- en
- es
- et
- fi
- fr
- ga
- hu
- it
- lt
- lv
- mt
- nl
- pt
- ro
- sk
- sv
license:
- cc-by-4.0
multilinguality:
- multilingual
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- token-classification
task_ids:
- named-entity-recognition
pretty_name: Spanish Datasets for Sensitive Entity Detection in the Legal Domain
tags:
- named-entity-recognition-and-classification
---

# Dataset Card for Multilingual European Datasets for Sensitive Entity Detection in the Legal Domain

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Source Data](#source-data)
- [Annotations](#annotations)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Social Impact of Dataset](#social-impact-of-dataset)
- [Discussion of Biases](#discussion-of-biases)
- [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)

## Dataset Description

- **Homepage:**
- **Repository:** [Spanish](https://elrc-share.eu/repository/browse/mapa-anonymization-package-spanish/b550e1a88a8311ec9c1a00155d026706687917f92f64482587c6382175dffd76/), [Most](https://elrc-share.eu/repository/search/?q=mfsp:3222a6048a8811ec9c1a00155d0267067eb521077db54d6684fb14ce8491a391), [German, Portuguese, Slovak, Slovenian, Swedish](https://elrc-share.eu/repository/search/?q=mfsp:833df1248a8811ec9c1a00155d0267067685dcdb77064822b51cc16ab7b81a36)
- **Paper:** de Gibert Bonet, O., García Pablos, A., Cuadros, M., & Melero, M. (2022). Spanish Datasets for Sensitive Entity Detection in the Legal Domain. Proceedings of the Language Resources and Evaluation Conference, June, 3751–3760. http://www.lrec-conf.org/proceedings/lrec2022/pdf/2022.lrec-1.400.pdf
- **Leaderboard:**
- **Point of Contact:** [Joel Niklaus](mailto:joel.niklaus.2@bfh.ch)

### Dataset Summary

The dataset consists of 12 documents (9 for Spanish due to parsing errors) taken from EUR-Lex, a multilingual corpus of court decisions and legal dispositions in the 24 official languages of the European Union. The documents have been annotated for named entities following the guidelines of the [MAPA project](https://mapa-project.eu/), which foresees two annotation levels, a general and a more fine-grained one. The annotated corpus can be used for named entity recognition/classification.

### Supported Tasks and Leaderboards

The dataset supports the task of Named Entity Recognition and Classification (NERC).

### Languages

The following languages are supported: bg, cs, da, de, el, en, es, et, fi, fr, ga, hu, it, lt, lv, mt, nl, pt, ro, sk, sv

## Dataset Structure

### Data Instances

The file format is jsonl and three data splits are present (train, validation and test). Named Entity annotations are non-overlapping.
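A minimal loading sketch (untested; it assumes the standard `datasets` API and the field names documented in the Data Fields section below):

```python
from datasets import load_dataset

# Load the train/validation/test splits of the multilingual MAPA dataset.
dataset = load_dataset("joelniklaus/mapa")

example = dataset["train"][0]
# Tokens and their coarse-/fine-grained NER tags are aligned, token-level lists.
for token, coarse, fine in zip(
    example["tokens"], example["coarse_grained"], example["fine_grained"]
):
    print(f"{token}\t{coarse}\t{fine}")
```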
### Data Fields

For the annotation, the documents have been split into sentences. The annotation has been done on the token level.
The files contain the following data fields:

- `language`: language of the sentence
- `type`: The document type of the sentence. Currently, only EUR-LEX is supported.
- `file_name`: The document file name the sentence belongs to.
- `sentence_number`: The number of the sentence inside its document.
- `tokens`: The list of tokens in the sentence.
- `coarse_grained`: The coarse-grained annotations for each token
- `fine_grained`: The fine-grained annotations for each token

As previously stated, the annotation has been conducted on a global and a more fine-grained level.

The tagset used for the global and the fine-grained named entities is the following:
- Address
- Building
- City
- Country
- Place
- Postcode
- Street
- Territory
- Amount
- Unit
- Value
- Date
- Year
- Standard Abbreviation
- Month
- Day of the Week
- Day
- Calender Event
- Person
- Age
- Email
- Ethnic Category
- Family Name
- Financial
- Given Name – Female
- Given Name – Male
- Health Insurance Number
- ID Document Number
- Initial Name
- Marital Status
- Medical Record Number
- Nationality
- Profession
- Role
- Social Security Number
- Title
- Url
- Organisation
- Time
- Vehicle
- Build Year
- Colour
- License Plate Number
- Model
- Type

The final coarse-grained tagset (in IOB notation) is the following:

`['O', 'B-ORGANISATION', 'I-ORGANISATION', 'B-ADDRESS', 'I-ADDRESS', 'B-DATE', 'I-DATE', 'B-PERSON', 'I-PERSON', 'B-AMOUNT', 'I-AMOUNT', 'B-TIME', 'I-TIME']`

The final fine-grained tagset (in IOB notation) is the following:

`['O', 'B-BUILDING', 'I-BUILDING', 'B-CITY', 'I-CITY', 'B-COUNTRY', 'I-COUNTRY', 'B-PLACE', 'I-PLACE', 'B-TERRITORY', 'I-TERRITORY', 'I-UNIT', 'B-UNIT', 'B-VALUE', 'I-VALUE', 'B-YEAR', 'I-YEAR', 'B-STANDARD ABBREVIATION', 'I-STANDARD ABBREVIATION', 'B-MONTH', 'I-MONTH', 'B-DAY', 'I-DAY', 'B-AGE', 'I-AGE', 'B-ETHNIC CATEGORY', 'I-ETHNIC CATEGORY', 'B-FAMILY NAME', 'I-FAMILY NAME', 'B-INITIAL NAME', 'I-INITIAL NAME', 'B-MARITAL STATUS', 'I-MARITAL STATUS', 'B-PROFESSION', 'I-PROFESSION', 'B-ROLE', 'I-ROLE', 'B-NATIONALITY', 'I-NATIONALITY', 'B-TITLE', 'I-TITLE', 'B-URL', 'I-URL', 'B-TYPE', 'I-TYPE']`

### Data Splits

Splits created by Joel Niklaus.
| language | # train files | # validation files | # test files | # train sentences | # validation sentences | # test sentences |
|:---------|--------------:|-------------------:|-------------:|------------------:|-----------------------:|-----------------:|
| bg | 9 | 1 | 2 | 1411 | 166 | 560 |
| cs | 9 | 1 | 2 | 1464 | 176 | 563 |
| da | 9 | 1 | 2 | 1455 | 164 | 550 |
| de | 9 | 1 | 2 | 1457 | 166 | 558 |
| el | 9 | 1 | 2 | 1529 | 174 | 584 |
| en | 9 | 1 | 2 | 893 | 98 | 408 |
| es | 7 | 1 | 1 | 806 | 248 | 155 |
| et | 9 | 1 | 2 | 1391 | 163 | 516 |
| fi | 9 | 1 | 2 | 1398 | 187 | 531 |
| fr | 9 | 1 | 2 | 1297 | 97 | 490 |
| ga | 9 | 1 | 2 | 1383 | 165 | 515 |
| hu | 9 | 1 | 2 | 1390 | 171 | 525 |
| it | 9 | 1 | 2 | 1411 | 162 | 550 |
| lt | 9 | 1 | 2 | 1413 | 173 | 548 |
| lv | 9 | 1 | 2 | 1383 | 167 | 553 |
| mt | 9 | 1 | 2 | 937 | 93 | 442 |
| nl | 9 | 1 | 2 | 1391 | 164 | 530 |
| pt | 9 | 1 | 2 | 1086 | 105 | 390 |
| ro | 9 | 1 | 2 | 1480 | 175 | 557 |
| sk | 9 | 1 | 2 | 1395 | 165 | 526 |
| sv | 9 | 1 | 2 | 1453 | 175 | 539 |

## Dataset Creation

### Curation Rationale

*„[…] to our knowledge, there exist no open resources annotated for NERC [Named Entity Recognition and Classification] in Spanish in the legal domain. With the present contribution, we intend to fill this gap. With the release of the created resources for fine-tuning and evaluation of sensitive entities detection in the legal domain, we expect to encourage the development of domain-adapted anonymisation tools for Spanish in this field“* (de Gibert Bonet et al., 2022)

### Source Data

#### Initial Data Collection and Normalization

The dataset consists of documents taken from the EUR-Lex corpus, which is publicly available. No further information on the data collection process is given in de Gibert Bonet et al. (2022).

#### Who are the source language producers?

The source language producers are presumably lawyers.

### Annotations

#### Annotation process

*"The annotation scheme consists of a complex two level hierarchy adapted to the legal domain, it follows the scheme described in (Gianola et al., 2020) […] Level 1 entities refer to general categories (PERSON, DATE, TIME, ADDRESS...) and level 2 entities refer to more fine-grained subcategories (given name, personal name, day, year, month...). Eur-Lex, CPP and DE have been annotated following this annotation scheme […] The manual annotation was performed using INCePTION (Klie et al., 2018) by a sole annotator following the guidelines provided by the MAPA consortium."* (de Gibert Bonet et al., 2022)

#### Who are the annotators?

Only one annotator conducted the annotation. Further information is not provided in de Gibert Bonet et al. (2022).

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

Note that the dataset at hand presents only a small portion of a bigger corpus as described in de Gibert Bonet et al. (2022). At the time of writing, only the annotated documents from the EUR-Lex corpus were available.

Note that the information given in this dataset card refers to the dataset version as provided by Joel Niklaus and Veton Matoshi. The dataset at hand is intended to be part of a bigger benchmark dataset. Creating a benchmark dataset consisting of several other datasets from different sources requires postprocessing.
Therefore, the structure of the dataset at hand, including the folder structure, may differ considerably from the original dataset. In addition to that, differences with regard to dataset statistics as given in the respective papers can be expected. The reader is advised to have a look at the conversion script ```convert_to_hf_dataset.py``` in order to retrace the steps for converting the original dataset into the present jsonl-format. For further information on the original dataset structure, we refer to the bibliographical references and the original Github repositories and/or web pages provided in this dataset card.

## Additional Information

### Dataset Curators

The names of the original dataset curators and creators can be found in the references given below, in the section *Citation Information*. Additional changes were made by Joel Niklaus ([Email](mailto:joel.niklaus.2@bfh.ch); [Github](https://github.com/joelniklaus)) and Veton Matoshi ([Email](mailto:veton.matoshi@bfh.ch); [Github](https://github.com/kapllan)).

### Licensing Information

[Attribution 4.0 International (CC BY 4.0)](https://creativecommons.org/licenses/by/4.0/)

### Citation Information

```
@article{DeGibertBonet2022,
  author = {{de Gibert Bonet}, Ona and {Garc{\'{i}}a Pablos}, Aitor and Cuadros, Montse and Melero, Maite},
  journal = {Proceedings of the Language Resources and Evaluation Conference},
  number = {June},
  pages = {3751--3760},
  title = {{Spanish Datasets for Sensitive Entity Detection in the Legal Domain}},
  url = {https://aclanthology.org/2022.lrec-1.400},
  year = {2022}
}
```

### Contributions

Thanks to [@JoelNiklaus](https://github.com/joelniklaus) and [@kapllan](https://github.com/kapllan) for adding this dataset.
Vipitis/Shadertoys
---
annotations_creators:
- no-annotation
language:
- en
- code
language_creators:
- machine-generated
license:
- cc-by-nc-sa-3.0
multilinguality: []
pretty_name: Shadertoys
size_categories:
- 10K<n<100K
source_datasets: []
tags:
- code
task_categories:
- text-generation
- text-to-image
task_ids: []
dataset_info:
  features:
  - name: num_passes
    dtype: int64
  - name: has_inputs
    dtype: bool
  - name: name
    dtype: string
  - name: type
    dtype: string
  - name: code
    dtype: string
  - name: title
    dtype: string
  - name: description
    dtype: string
  - name: tags
    sequence: string
  - name: author
    dtype: string
  - name: license
    dtype: string
  - name: source
    dtype: string
  splits:
  - name: train
    num_bytes: 162960894
    num_examples: 37841
  - name: test
    num_bytes: 26450429
    num_examples: 6617
  download_size: 86294414
  dataset_size: 189411323
---

# Dataset Card for Shadertoys

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Source Data](#source-data)
- [Licensing Information](#licensing-information)

## Dataset Description

- **Repository:** https://github.com/Vipitis/shadertoys-dataset

### Dataset Summary
The Shadertoys dataset contains over 44k renderpasses collected from the Shadertoy.com API. Some shader programs contain multiple render passes.
To browse a subset of this dataset, look at the [ShaderEval](https://huggingface.co/spaces/Vipitis/ShaderCoder) space. A finer variant of this dataset is [Shadertoys-fine](https://huggingface.co/datasets/Vipitis/Shadertoys-fine).

### Supported Tasks and Leaderboards

`text-generation`: the dataset can be used to train generative language models for code completion tasks.

`ShaderEval`: [task1](https://huggingface.co/spaces/Vipitis/ShaderEval) from ShaderEval uses a dataset derived from Shadertoys to test return completion of autoregressive language models.

### Languages

- English (title, description, tags, comments)
- Shadercode **programming** language, a subset of GLSL specifically for Shadertoy.com

## Dataset Structure

### Data Instances

A data point consists of the whole shadercode, some information from the API as well as additional metadata.

```
{
    'num_passes': 1,
    'has_inputs': False,
    'name': 'Image',
    'type': 'image',
    'code': '<full code>',
    'title': '<title of the shader>',
    'description': '<description of the shader>',
    'tags': ['tag1','tag2','tag3', ... ],
    'license': 'unknown',
    'author': '<username>',
    'source': 'https://shadertoy.com/view/<shaderID>'
}
```

### Data Fields
- 'num_passes' number of passes the parent shader program has
- 'has_inputs' if any inputs were used like textures, audio streams
- 'name' Name of the renderpass, usually Image, Buffer A, Common, etc.
- 'type' type of the renderpass; one of `{'buffer', 'common', 'cubemap', 'image', 'sound'}`
- 'code' the raw code (including comments) of the whole renderpass
- 'title' Name of the Shader
- 'description' description given for the Shader
- 'tags' List of tags assigned to the Shader (by its creator); there are more than 10000 unique tags
- 'license' currently in development
- 'author' username of the shader author
- 'source' URL to the shader. Not to the specific renderpass.
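A minimal loading sketch (untested; it assumes the standard `datasets` API and uses the field names listed above):

```python
from datasets import load_dataset

# Load the train split of the Shadertoys renderpass dataset.
dataset = load_dataset("Vipitis/Shadertoys", split="train")

# Each row is one renderpass; 'code' holds the raw shadercode.
example = dataset[0]
print(example["title"], "-", example["name"], f"({example['type']})")
print(example["code"][:200])
```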
### Data Splits

Currently available (shuffled):
- train (85.0%)
- test (15.0%)

## Dataset Creation

Data retrieved starting 2022-07-20

### Source Data

#### Initial Data Collection and Normalization

All data was collected via the [Shadertoy.com API](https://www.shadertoy.com/howto#q2), iterating over the items in 'renderpass' while adding some of the fields from 'info'. The code to generate these datasets should be published on the GitHub repository in the near future.

#### Who are the source language producers?

Shadertoy.com contributors who publish shaders as 'public+API'

## Licensing Information

The default [license for each Shader](https://www.shadertoy.com/terms) is CC BY-NC-SA 3.0. However, some Shaders might have a different license attached. The dataset is currently not filtering for any licenses but gives a license tag, if easily recognizable by naive means. Please check the first comment of each shader program yourself so as not to violate any copyrights for downstream use. The main license requires share-alike and attribution. The author of every shader can be found in the 'author' column, but this might not include further attribution within the code itself or from the parents of forked shaders.
ziwenyd/transcoder-geeksforgeeks
--- license: mit --- # statistics cpp-java: 627 pairs python-java: 616 pairs cpp-python: 545 pairs
its5Q/panorama
---
annotations_creators:
- no-annotation
language:
- ru
language_creators:
- other
license:
- unknown
multilinguality:
- monolingual
pretty_name: Dataset of satirical news from "Panorama", Russian "The Onion".
size_categories:
- 10K<n<100K
source_datasets:
- original
tags:
- news
- articles
- newspapers
- panorama
task_categories:
- text-generation
task_ids:
- language-modeling
---

### Dataset Summary
A dataset of satirical news from "Panorama", the Russian "The Onion".

### Dataset Format
The dataset is in JSONLines format, where "title" is the article title and "body" contains the contents of the article.
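A minimal loading sketch (untested; it assumes the `datasets` library can read the repository's JSONLines files directly):

```python
from datasets import load_dataset

# Each record has a "title" and a "body" field, as described above.
dataset = load_dataset("its5Q/panorama", split="train")

example = dataset[0]
print(example["title"])
print(example["body"][:200])
```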
copenlu/answerable_tydiqa
---
annotations_creators:
- crowdsourced
language:
- en
- ar
- bn
- fi
- id
- ja
- sw
- ko
- ru
- te
- th
language_creators:
- crowdsourced
license:
- apache-2.0
multilinguality:
- multilingual
pretty_name: Answerable TyDi QA
size_categories:
- 100K<n<1M
source_datasets:
- extended|wikipedia
task_categories:
- question-answering
task_ids:
- extractive-qa
---

# Dataset Card for "answerable-tydiqa"

## Dataset Description

- **Homepage:** [https://github.com/google-research-datasets/tydiqa](https://github.com/google-research-datasets/tydiqa)
- **Paper:** [Paper](https://aclanthology.org/2020.tacl-1.30/)
- **Size of downloaded dataset files:** 75.43 MB
- **Size of the generated dataset:** 131.78 MB
- **Total amount of disk used:** 207.21 MB

### Dataset Summary

[TyDi QA](https://huggingface.co/datasets/tydiqa) is a question answering dataset covering 11 typologically diverse languages. Answerable TyDi QA is an extension of the GoldP subtask of the original TyDi QA dataset that also includes unanswerable questions.

## Dataset Structure

The dataset contains a train and a validation set, with 116067 and 13325 examples, respectively. Access them with

```py
from datasets import load_dataset
dataset = load_dataset("copenlu/answerable_tydiqa")
train_set = dataset["train"]
validation_set = dataset["validation"]
```

### Data Instances

Here is an example of an instance of the dataset:

```
{'question_text': 'dimanakah Dr. Ernest François Eugène Douwes Dekker meninggal?',
 'document_title': 'Ernest Douwes Dekker',
 'language': 'indonesian',
 'annotations':
  {'answer_start': [45],
   'answer_text': ['28 Agustus 1950']
  },
 'document_plaintext': 'Ernest Douwes Dekker wafat dini hari tanggal 28 Agustus 1950 (tertulis di batu nisannya; 29 Agustus 1950 versi van der Veur, 2006) dan dimakamkan di TMP Cikutra, Bandung.',
 'document_url': 'https://id.wikipedia.org/wiki/Ernest%20Douwes%20Dekker'}
```

Description of the dataset columns:

| Column name | type | Description |
| ----------- | ----------- | ----------- |
| document_title | str | The title of the Wikipedia article from which the data instance was generated |
| document_url | str | The URL of said article |
| language | str | The language of the data instance |
| question_text | str | The question to answer |
| document_plaintext | str | The context, a Wikipedia paragraph that might or might not contain the answer to the question |
| annotations["answer_start"] | list[int] | The char index in 'document_plaintext' where the answer starts. If the question is unanswerable - [-1] |
| annotations["answer_text"] | list[str] | The answer, a span of text from 'document_plaintext'. If the question is unanswerable - [''] |

**Notice:** If the question is *answerable*, annotations["answer_start"] and annotations["answer_text"] contain a list of length 1 (in some variations of the dataset the lists might be longer, e.g. if more than one person annotated the instance, but not in our case). If the question is *unanswerable*, annotations["answer_start"] will have "-1", while annotations["answer_text"] contains a list with an empty string.

## Useful stuff

Check out the [datasets documentation](https://huggingface.co/docs/datasets/quickstart) to learn how to manipulate and use the dataset. Specifically, you might find the following functions useful:

`dataset.filter`, for filtering out data (useful for keeping instances of specific languages, for example).

`dataset.map`, for manipulating the dataset.

`dataset.to_pandas`, to convert the dataset into a pandas.DataFrame format.
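As a concrete sketch of the functions above (untested; it only relies on the `language` and `annotations` columns documented in the table):

```python
from datasets import load_dataset

dataset = load_dataset("copenlu/answerable_tydiqa")

# Keep only the Indonesian portion of the training set.
indonesian_train = dataset["train"].filter(lambda ex: ex["language"] == "indonesian")

# Add a flag marking whether each instance is answerable
# (answer_start == [-1] means the question is unanswerable).
indonesian_train = indonesian_train.map(
    lambda ex: {"is_answerable": ex["annotations"]["answer_start"] != [-1]}
)

# Convert to pandas for quick inspection.
df = indonesian_train.to_pandas()
print(df[["question_text", "is_answerable"]].head())
```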
### Citation Information

```bibtex
@article{tydiqa,
  title   = {TyDi QA: A Benchmark for Information-Seeking Question Answering in Typologically Diverse Languages},
  author  = {Jonathan H. Clark and Eunsol Choi and Michael Collins and Dan Garrette and Tom Kwiatkowski and Vitaly Nikolaev and Jennimaria Palomaki},
  year    = {2020},
  journal = {Transactions of the Association for Computational Linguistics}
}
```

### Contributions

Thanks to [@thomwolf](https://github.com/thomwolf), [@albertvillanova](https://github.com/albertvillanova), [@lewtun](https://github.com/lewtun), [@patrickvonplaten](https://github.com/patrickvonplaten) for adding this dataset.
SLPL/naab-raw
---
language:
- fa
license:
- mit
multilinguality:
- monolingual
task_categories:
- fill-mask
- text-generation
task_ids:
- language-modeling
- masked-language-modeling
pretty_name: naab-raw (raw version of the naab corpus)
---

# naab-raw (raw version of the naab corpus)

_[If you want to join our community to keep up with news, models and datasets from naab, click on [this](https://docs.google.com/forms/d/e/1FAIpQLSe8kevFl_ODCx-zapAuOIAQYr8IvkVVaVHOuhRL9Ha0RVJ6kg/viewform) link.]_

## Table of Contents
- [Dataset Card Creation Guide](#dataset-card-creation-guide)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Changelog](#changelog)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Curation Rationale](#curation-rationale)
- [Contribution Guideline](#contribution-guideline)
- [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Additional Information](#additional-information)
- [Dataset Curators](#dataset-curators)
- [Licensing Information](#licensing-information)
- [Citation Information](#citation-information)
- [Contributions](#contributions)

## Dataset Description

- **Homepage:** [Sharif Speech and Language Processing Lab](https://huggingface.co/SLPL)
- **Paper:** [naab: A ready-to-use plug-and-play corpus for Farsi](https://arxiv.org/abs/2208.13486)
- **Point of Contact:** [Sadra Sabouri](mailto:sabouri.sadra@gmail.com)

### Dataset Summary

This is the raw (uncleaned) version of the [naab](https://huggingface.co/datasets/SLPL/naab) corpus. You can also customize our [preprocess script](https://github.com/Sharif-SLPL/t5-fa/tree/main/preprocess) and make your own cleaned corpus. This repository is a hub for all Farsi corpora. Feel free to add your corpus following the [contribution guidelines](#contribution-guideline).

You can download the dataset with the command below:
```python
from datasets import load_dataset

dataset = load_dataset("SLPL/naab-raw")
```

If you want to download a specific part of the corpus, you can set the config name to the specific corpus name:
```python
from datasets import load_dataset

dataset = load_dataset("SLPL/naab-raw", "CC-fa")
```

### Supported Tasks and Leaderboards

This corpus can be used for training language models with Masked Language Modeling (MLM) or any other self-supervised objective.

- `language-modeling`
- `masked-language-modeling`

### Changelog

It's crucial to log changes on projects that change periodically. Please refer to the [CHANGELOG.md](https://huggingface.co/datasets/SLPL/naab-raw/blob/main/CHANGELOG.md) for more details.

## Dataset Structure

Each row of the dataset will look something like the below:
```json
{
  'text': "این یک تست برای نمایش یک پاراگراف در پیکره متنی ناب است.",
}
```
+ `text`: the textual paragraph.

### Data Splits

This corpus contains only one split (the `train` split).

## Dataset Creation

### Curation Rationale

Here are some details about each part of this corpus.

#### CC-fa

The Common Crawl corpus contains petabytes of data collected since 2008. It contains raw web page data, extracted metadata, and text extractions. We use the Farsi part of it here.

#### W2C

W2C stands for Web to Corpus and it contains several corpora. We include the Farsi part of it in this corpus.
### Contribution Guideline In order to add your dataset, you should follow the below steps and make a pull request in order to be merged with the _naab-raw_: 1. Add your dataset to `_CORPUS_URLS` in `naab-raw.py` like: ```python ... "DATASET_NAME": "LINK_TO_A_PUBLIC_DOWNLOADABLE_FILE.txt" ... ``` 2. Add a log of your changes to the [CHANGELOG.md](https://huggingface.co/datasets/SLPL/naab-raw/blob/main/CHANGELOG.md). 3. Add some minor descriptions to the [Curation Rationale](#curation-rationale) under a subsection with your dataset name. ### Personal and Sensitive Information Since this corpus is briefly a compilation of some former corpora we take no responsibility for personal information included in this corpus. If you detect any of these violations please let us know, we try our best to remove them from the corpus ASAP. We tried our best to provide anonymity while keeping the crucial information. We shuffled some parts of the corpus so the information passing through possible conversations wouldn't be harmful. ## Additional Information ### Dataset Curators + Sadra Sabouri (Sharif University of Technology) + Elnaz Rahmati (Sharif University of Technology) ### Licensing Information mit ### Citation Information ``` @article{sabouri2022naab, title={naab: A ready-to-use plug-and-play corpus for Farsi}, author={Sabouri, Sadra and Rahmati, Elnaz and Gooran, Soroush and Sameti, Hossein}, journal={arXiv preprint arXiv:2208.13486}, year={2022} } ``` DOI:[https://doi.org/10.48550/arXiv.2208.13486](https://doi.org/10.48550/arXiv.2208.13486). ### Contributions Thanks to [@sadrasabouri](https://github.com/sadrasabouri) and [@elnazrahmati](https://github.com/elnazrahmati) for adding this dataset. ### Keywords + Farsi + Persian + raw text + پیکره فارسی + پیکره متنی + آموزش مدل زبانی
zyznull/dureader-retrieval-corpus
---
license: apache-2.0
---

# dureader

The data comes from the DuReader-Retrieval dataset; the original source is available [here](https://github.com/baidu/DuReader/tree/master/DuReader-Retrieval).

> This dataset is for academic research use only. If this repository involves any infringement, it will be removed immediately.
GabeHD/pokemon-type-captions
--- dataset_info: features: - name: image dtype: image - name: text dtype: string splits: - name: train num_bytes: 19372532.0 num_examples: 898 download_size: 0 dataset_size: 19372532.0 --- # Dataset Card for Pokémon type captions Contains official artwork and type-specific caption for Pokémon #1-898 (Bulbasaur-Calyrex). Each Pokémon is represented once by the default form from [PokéAPI](https://pokeapi.co/) Each row contains `image` and `text` keys: - `image` is a 475x475 PIL jpg of the Pokémon's official artwork. - `text` is a label describing the Pokémon by its type(s) ## Attributions _Images and typing information pulled from [PokéAPI](https://pokeapi.co/)_ _Based on the [Lambda Labs Pokémon Blip Captions Dataset](https://huggingface.co/datasets/lambdalabs/pokemon-blip-captions)_
Rosenberg/genia
--- license: mit ---
pinkmooncake/rico-screen2words
--- dataset_info: features: - name: image dtype: image - name: text dtype: string splits: - name: test num_bytes: 454423304.26 num_examples: 4310 - name: dev num_bytes: 246957743.116 num_examples: 2364 - name: train num_bytes: 1737030544.084 num_examples: 15743 download_size: 1897987283 dataset_size: 2438411591.46 --- # Dataset Card for "rico-screen2words" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
foldl/rumeme-desc
---
license: cc-by-sa-4.0
language:
- ru
tags:
- ru
- memes
- text2image
- image2text
pretty_name: rumeme-desc
size_categories:
- 1K<n<10K
---

# Dataset Card for ruMeme Descriptions

## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
- [Considerations for Using the Data](#considerations-for-using-the-data)
- [Contributions](#contributions)

## Dataset Description

### Dataset Summary

This is a dataset of more than 2500 memes in Russian and their descriptions, gathered by parsing https://vk.com/textmeme.

### Supported Tasks and Leaderboards

`text2image` - generate a meme from its textual description

`image2text` - generate a description of a given meme

### Languages

The text in the dataset is only in Russian. The associated BCP-47 code is `ru`.

## Dataset Structure

### Data Fields

- `Image`: Meme itself at 512 by 512px (image)
- `Text`: Description (str)

### Data Splits

There are not enough examples yet to split the data into train/test/val, in my opinion.

## Dataset Creation

As already mentioned, data was gathered by parsing https://vk.com/textmeme.
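A minimal loading sketch (untested; it assumes the standard `datasets` API, a single default `train` split, and the `Image`/`Text` fields listed above):

```python
from datasets import load_dataset

# Load the meme/description pairs.
dataset = load_dataset("foldl/rumeme-desc", split="train")

example = dataset[0]
example["Image"].save("meme.png")  # PIL image of the meme (512x512)
print(example["Text"])             # its Russian description
```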
silatus/1k_Website_Screenshots_and_Metadata
--- license: cc-by-nc-sa-4.0 task_categories: - text-to-image - image-classification - image-segmentation language: - en tags: - screenshots - metadata - websites - webpages pretty_name: 1000 Website Screenshots with Metadata size_categories: - 1K<n<10K --- # Dataset Card for 1000 Website Screenshots with Metadata ## Dataset Description - **Homepage:** [silatus.com](https://silatus.com/datasets) - **Point of Contact:** [datasets@silatus.com](mailto:datasets@silatus.com) ### Dataset Summary Silatus is sharing, for free, a segment of a dataset that we are using to train a generative AI model for text-to-mockup conversions. This dataset was collected in December 2022 and early January 2023, so it contains very recent data from 1,000 of the world's most popular websites. You can get our larger 10,000 website dataset for free at: [https://silatus.com/datasets](https://silatus.com/datasets) This dataset includes: **High-res screenshots** - 1024x1024px - Loaded Javascript - Loaded Images **Text metadata** - Site title - Navbar content - Full page text data - Page description **Visual metadata** - Content (images, videos, inputs, buttons) absolute & relative positions - Color profile - Base font
stanford-crfm/DSIR-filtered-pile-50M
--- license: mit language: - en size_categories: - 10M<n<100M task_categories: - text-generation - fill-mask tags: - language modeling - masked language modeling - pretraining - pile - DSIR --- # Dataset Card for DSIR-filtered-pile-50M ## Dataset Description - **Repository:** https://github.com/p-lambda/dsir - **Paper:** https://arxiv.org/abs/2302.03169 - **Point of Contact: Sang Michael Xie <xie@cs.stanford.edu>** ### Dataset Summary This dataset is a subset of The Pile, selected via the DSIR data selection method. The target distribution for DSIR is the Wikipedia and BookCorpus2 subsets of The Pile. ### Languages English (EN) ## Dataset Structure A train set is provided (51.2M examples) in jsonl format. ### Data Instances ``` {"contents": "Hundreds of soul music enthusiasts from the United Kingdom plan to make their way to Detroit this month for a series of concerts.\n\nDetroit A-Go-Go, a festival organized by DJ Phil Dick, will take place Oct. 19-22 with 26 scheduled acts.\n\nThe festival is focused on what Dick calls the northern soul movement.\n\n\"We just love Detroit soul and Motown music,\" Dick said. \"It's been popular in England for decades. Every weekend, thousands of people go out and listen to this music in England.\"\n\nArtists booked for the festival include: The Elgins, Pat Lewis, Melvin Davis, The Velvelettes, The Contours, Kim Weston, Ronnie McNeir, The Capitols, Yvonne Vernee, JJ Barnes, Gino Washington, Spyder Turner, The Adorables, Lorraine Chandler, Eddie Parker, Dusty Wilson, The Precisions, The Professionals, The Tomangoes, The Fabulous Peps andNow that\u2019s a punishment: club vice president sent to train with the reserves!\n\nFor almost an entire year, Gabriel Bostina has been playing a double role for Universitatea Cluj. Unfortunately for him, the position acquired in the club\u2019s board didn\u2019t earn him any favors from the technical staff, who recently punished the central midfielder. Twice. First of all, Bostina lost the armband during one of the training camps from Antalya for some unknown disciplinary problems and now the player & vice president has suffered further embarrassment being sent to train with the reservers \u201cfor an unlimited period\u201d.\n\nCurrently injured, he failed to show up for the weekend training sessions that were going to be supervised by the club\u2019s medical staff, so the former Otelul, Steaua and Dinamo man is now", "metadata": {"pile_set_name": ["OpenWebText2", "Pile-CC"]}, "id": 423} ``` ### Data Fields ``` "contents": the text "metadata": contains information about the source(s) of text that the text comes from. Multiple sources means that the example is concatenated from two sources. "id": Ignore - a non-unique identifier ``` ## Dataset Creation We first select 102.4M examples then concatenate every two examples to create 51.2M examples. This ensures that the examples are long enough for a max token length of 512 without much padding. We train the importance weight estimator for DSIR from The Pile validation set, where the target is Wikipedia + BookCorpus2 + Gutenberg + Books3 and the raw data come from the rest of the data sources in The Pile. We first select 98.4M examples from non-Wikipedia and book data, then randomly select 2M from Wikipedia and 0.66M each from BookCorpus2, Gutenberg, and Books3. After this, we concatenate every two examples. ### Source Data The Pile #### Initial Data Collection and Normalization We select data from The Pile, which comes in 30 random chunks. 
We reserve chunk 0 for validation purposes and only consider the last 29 chunks. We first divided the documents in The Pile into chunks of 128 words, according to whitespace tokenization. These chunks define the examples that we do data selection on, totaling 1.7B examples. Before DSIR, we first apply a manual quality filter (see paper for details) and only consider the examples that pass the filter. ## Considerations for Using the Data The dataset is biased towards choosing data from non-Wikipedia and non-Books sources. A balanced approach would be to mix in more data from Wikipedia and books. ### Dataset Curators Sang Michael Xie, Shibani Santurkar ### Citation Information Paper: <https://arxiv.org/abs/2302.03169> ``` @article{xie2023data, author = {Sang Michael Xie and Shibani Santurkar and Tengyu Ma and Percy Liang}, journal = {arXiv preprint arXiv:2302.03169}, title = {Data Selection for Language Models via Importance Resampling}, year = {2023}, } ```
gsdf/EasyNegative
--- license: other --- # Negative Embedding This is a Negative Embedding trained with Counterfeit. Please use it in the "\stable-diffusion-webui\embeddings" folder. It can be used with other models, but the effectiveness is not certain. # Counterfeit-V2.0.safetensors ![sample1](https://huggingface.co/datasets/gsdf/EasyNegative/resolve/main/sample01.png) # AbyssOrangeMix2_sfw.safetensors ![sample2](https://huggingface.co/datasets/gsdf/EasyNegative/resolve/main/sample02.png) # anything-v4.0-pruned.safetensors ![sample3](https://huggingface.co/datasets/gsdf/EasyNegative/resolve/main/sample03.png)
range3/wikipedia-ja-20230101
--- license: - cc-by-sa-3.0 - gfdl task_categories: - text-generation - fill-mask language: - ja --- # range3/wikipedia-ja-20230101 This dataset consists of a parquet file from the wikipedia dataset with only Japanese data extracted. It is generated by the following python code. このデータセットは、wikipediaデータセットの日本語データのみを抽出したparquetファイルで構成されます。以下のpythonコードによって生成しています。 ```py import datasets dss = datasets.load_dataset( "wikipedia", language="ja", date="20230101", beam_runner="DirectRunner", ) for split,ds in dss.items(): ds.to_parquet(f"wikipedia-ja-20230101/{split}.parquet") ```
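A minimal sketch for loading the published parquet files back (untested; it assumes the `datasets` library can read the repository's parquet files directly and that the columns follow the upstream `wikipedia` dataset):

```python
import datasets

# Load the Japanese Wikipedia snapshot from the uploaded parquet files.
ds = datasets.load_dataset("range3/wikipedia-ja-20230101", split="train")

# Columns assumed from the upstream wikipedia dataset: id, url, title, text.
print(ds[0]["title"])
```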
playgroundai/blip_clipseg_inpainting_ip2p_data_test
--- dataset_info: features: - name: text dtype: string - name: chosen_item dtype: string - name: og_image dtype: image - name: mask dtype: image - name: output_image dtype: image splits: - name: train num_bytes: 650403547.0 num_examples: 825 download_size: 650341793 dataset_size: 650403547.0 --- # Dataset Card for "blip_clipseg_inpainting_ip2p_data_test" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
5w4n/OSCAR-2019-Burmese-fix
---
pretty_name: OSCAR-2019-Burmese-fix
annotations_creators:
- no-annotation
configs:
- unshuffled_deduplicated_cleaned_my
language:
- my
language_creators:
- found
license:
- cc0-1.0
multilinguality:
- monolingual
paperswithcode_id: oscar
size_categories:
- 100K<n<1M
source_datasets:
- extended|oscar
tags:
- burmese
- myanmar
- myanmar-news
- myanmar-corpus
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
---

# Dataset Card for OSCAR-2019-Burmese-fix

## Dataset Description

This dataset is a cleaned version of the Myanmar (Burmese) language portion of the OSCAR 2019 dataset.

### Contributions

[Swan Htet Aung](https://github.com/swanhtet1992)
calum/the-stack-smol-python-docstrings
--- dataset_info: features: - name: body dtype: string - name: body_hash dtype: int64 - name: docstring dtype: string - name: path dtype: string - name: name dtype: string - name: repository_name dtype: string - name: lang dtype: string - name: body_without_docstring dtype: string splits: - name: train num_bytes: 33019111.239729874 num_examples: 24616 download_size: 0 dataset_size: 33019111.239729874 --- # Dataset Card for "the-stack-smol-filtered-python-docstrings" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
jjmachan/NSFW-questions
--- license: apache-2.0 dataset_info: features: - name: title dtype: string - name: subreddit dtype: string - name: post_id dtype: string - name: score dtype: int64 - name: link_flair_text dtype: string - name: is_self dtype: bool - name: over_18 dtype: bool - name: upvote_ratio dtype: float64 - name: is_question dtype: bool - name: C1 dtype: string - name: C2 dtype: string - name: C3 dtype: string - name: C4 dtype: string - name: C5 dtype: string splits: - name: train num_bytes: 1541472 num_examples: 1442 download_size: 904939 dataset_size: 1541472 ---
shahules786/prosocial-nsfw
--- dataset_info: features: - name: user dtype: string - name: subreddit dtype: string - name: post_id dtype: string - name: is_self dtype: bool - name: over_18 dtype: bool - name: is_question dtype: bool - name: rots sequence: string - name: safety_label dtype: string - name: response dtype: string - name: episode_done dtype: bool splits: - name: train num_bytes: 341584 num_examples: 1502 download_size: 166054 dataset_size: 341584 --- # Dataset Card for "prosocial-nsfw" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
enesxgrahovac/the-feynman-lectures-on-physics
--- dataset_info: features: - name: book_volume dtype: string - name: book_title dtype: string - name: chapter_number dtype: string - name: chapter_title dtype: string - name: section_number dtype: string - name: section_title dtype: string - name: section_text dtype: string splits: - name: train num_bytes: 4609643 num_examples: 641 download_size: 2276758 dataset_size: 4609643 --- # Dataset Card for "the-feynman-lectures-on-physics" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
checkai/instruction-poems
---
license: cc-by-4.0
---

A poem dataset to be used for instruction fine-tuning.
DataAgent/medical-qa-instruction-zhtw
--- license: cc ---
recastai/LAION-art-EN-improved-captions
--- license: cc-by-4.0 dataset_info: features: - name: orig_caption dtype: string - name: generated_caption dtype: string - name: key dtype: string - name: url dtype: string - name: index dtype: int64 splits: - name: train num_bytes: 681710086 num_examples: 2684160 download_size: 441945582 dataset_size: 681710086 language: - en --- # Dataset Card for LAION-art-EN-improved-captions ### Dataset Summary This dataset has been created by **Re:cast AI** for improving the semantic relationship of image-caption pairs. `generated_captions` were created in a semi-supervised fashion using the **Salesforce/blip2-flan-t5-xxl** model. ### Supported Tasks Fine-tuning text-to-image generators (e.g. stable-diffusion), or a searchable prompt database (requires faiss-index). ## Dataset Structure ### Data Fields - orig_caption - generated_caption - key - index - url ### Data Splits - train ### Source Data LAION-Art
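A minimal loading sketch (untested; it assumes the standard `datasets` API and the fields listed above; streaming avoids downloading all shards at once):

```python
from datasets import load_dataset

# Stream the caption pairs instead of downloading the full dataset.
dataset = load_dataset(
    "recastai/LAION-art-EN-improved-captions", split="train", streaming=True
)

for example in dataset.take(3):
    print(example["orig_caption"])
    print(example["generated_caption"])
    print(example["url"])
```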
jeremyc/Alpaca-Lora-GPT4-Swedish
---
language:
- sv
pretty_name: Alpaca-Lora GPT4 Swedish
size_categories:
- 10K<n<100K
---

This dataset is a machine translation of the GPT4 dataset provided in the Alpaca-Lora GitHub repository. We provide two versions: the full translation, and a translation of a subset of ~50,000 entries that was cleaned and does not contain instances of "I am an AI language model" or similar.

This work was inspired by the French Alpaca-Lora variant **Vigogne** and the Ukrainian Alpaca-Lora variant **Kruk**.