datasetId | author | last_modified | downloads | likes | tags | task_categories | createdAt | card
---|---|---|---|---|---|---|---|---
nzs234/E621_detaset_ID_TAG | nzs234 | "2024-05-04T01:57:05Z" | 0 | 1 | [
"license:mit",
"croissant",
"region:us"
] | null | "2024-05-03T02:09:52Z" | ---
license: mit
---
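As a rough sketch of the tag post-processing this card describes, something like the following could reproduce it; the category names, the `|||` separator handling, and the length threshold are assumptions, not the author's exact code.

```python
# Hypothetical sketch of the tag post-processing described in this card.
# Category names, the "|||" separator, and the length rule are assumptions.

CATEGORY_ORDER = ["artist", "copyright", "character", "|||", "species", "general", "meta"]

def format_tag(tag: str) -> str:
    """Replace underscores with spaces, but only for tags longer than 4 characters."""
    return tag.replace("_", " ") if len(tag) > 4 else tag

def order_tags(tags_by_category: dict) -> list:
    """Flatten per-category tags into the fixed category order, keeping the ||| separator."""
    ordered = []
    for cat in CATEGORY_ORDER:
        if cat == "|||":
            ordered.append("|||")
        else:
            ordered.extend(format_tag(t) for t in tags_by_category.get(cat, []))
    return ordered
```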
Used together with boxingscorpionbagel/e621-2024: extracts the tags corresponding to each image ID, sorted in the order by Artists, Copyright, Character, |||, Species, General, meta, with "_" replaced by spaces in tags longer than 4 characters. |
YUDCHI/test-paper-chunked | YUDCHI | "2024-05-03T05:21:06Z" | 0 | 0 | [
"license:mit",
"croissant",
"region:us"
] | null | "2024-05-03T02:21:28Z" | ---
license: mit
---
|
rohanth/tensorforests | rohanth | "2024-05-07T00:28:54Z" | 0 | 0 | [
"license:mit",
"croissant",
"region:us"
] | null | "2024-05-03T02:40:18Z" | ---
license: mit
---
|
NafishZaldinanda/audio | NafishZaldinanda | "2024-05-06T07:50:17Z" | 0 | 0 | [
"language:id",
"croissant",
"region:us"
] | null | "2024-05-03T02:59:34Z" | ---
language:
- id
dataset_info:
features:
- name: audio
dtype: audio
- name: sentence
dtype: string
splits:
- name: train
num_bytes: 7182074.0
num_examples: 51
- name: test
num_bytes: 7182074.0
num_examples: 51
download_size: 13887608
dataset_size: 14364148.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
PrasannSinghal/mathprefdataex | PrasannSinghal | "2024-05-03T03:17:46Z" | 0 | 0 | [
"croissant",
"region:us"
] | null | "2024-05-03T03:17:43Z" | ---
dataset_info:
features:
- name: question
dtype: string
- name: response_j
dtype: string
- name: response_k
dtype: string
- name: score_j
dtype: float64
- name: score_k
dtype: float64
- name: magnitude
dtype: float64
splits:
- name: train
num_bytes: 14826635
num_examples: 43705
download_size: 5962815
dataset_size: 14826635
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
PrasannSinghal/uniquennprefdataex | PrasannSinghal | "2024-05-03T03:19:57Z" | 0 | 0 | [
"croissant",
"region:us"
] | null | "2024-05-03T03:19:52Z" | ---
dataset_info:
features:
- name: question
dtype: string
- name: source
dtype: string
- name: modj
dtype: string
- name: modk
dtype: string
- name: tokj
dtype: int64
- name: tok
dtype: int64
- name: response_j
dtype: string
- name: response_k
dtype: string
- name: magnitude
dtype: float64
- name: __index_level_0__
dtype: int64
- name: score_j
dtype: float64
- name: score_k
dtype: float64
splits:
- name: train
num_bytes: 56210842
num_examples: 49700
download_size: 33538986
dataset_size: 56210842
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
PrasannSinghal/contrastivedistillprefdataex | PrasannSinghal | "2024-05-03T03:21:13Z" | 0 | 0 | [
"croissant",
"region:us"
] | null | "2024-05-03T03:21:11Z" | ---
dataset_info:
features:
- name: question
dtype: string
- name: score_j
dtype: float64
- name: score_k
dtype: float64
- name: response_j
dtype: string
- name: response_k
dtype: string
- name: magnitude
dtype: float64
splits:
- name: train
num_bytes: 15244108
num_examples: 84720
download_size: 9046088
dataset_size: 15244108
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
PrasannSinghal/wordcollectorprefdataex | PrasannSinghal | "2024-05-03T03:22:01Z" | 0 | 0 | [
"croissant",
"region:us"
] | null | "2024-05-03T03:21:57Z" | ---
dataset_info:
features:
- name: question
dtype: string
- name: source
dtype: string
- name: modj
dtype: string
- name: modk
dtype: string
- name: tokj
dtype: int64
- name: tok
dtype: int64
- name: response_j
dtype: string
- name: response_k
dtype: string
- name: magnitude
dtype: float64
- name: __index_level_0__
dtype: int64
- name: score_j
dtype: float64
- name: score_k
dtype: float64
splits:
- name: train
num_bytes: 39468454
num_examples: 49884
download_size: 22970913
dataset_size: 39468454
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
PrasannSinghal/wordcolnounprompts | PrasannSinghal | "2024-05-03T03:26:56Z" | 0 | 0 | [
"croissant",
"region:us"
] | null | "2024-05-03T03:26:46Z" | ---
dataset_info:
features:
- name: question
dtype: string
splits:
- name: train
num_bytes: 229822577
num_examples: 340025
download_size: 23959875
dataset_size: 229822577
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
PrasannSinghal/cdistprompts | PrasannSinghal | "2024-05-03T03:27:42Z" | 0 | 0 | [
"croissant",
"region:us"
] | null | "2024-05-03T03:27:40Z" | ---
dataset_info:
features:
- name: outputs
dtype: string
- name: question
dtype: string
splits:
- name: train
num_bytes: 41585307
num_examples: 286430
download_size: 23795350
dataset_size: 41585307
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
PrasannSinghal/mathprompts | PrasannSinghal | "2024-05-03T03:28:08Z" | 0 | 0 | [
"croissant",
"region:us"
] | null | "2024-05-03T03:28:07Z" | ---
dataset_info:
features:
- name: question
dtype: string
- name: outputs
dtype: string
splits:
- name: train
num_bytes: 28754891
num_examples: 200000
download_size: 11675816
dataset_size: 28754891
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
tunis-ai/TunSwitch | tunis-ai | "2024-05-04T03:41:59Z" | 0 | 1 | [
"language:ar",
"croissant",
"arxiv:2309.11327",
"region:us"
] | null | "2024-05-03T03:46:45Z" | ---
language:
- ar
pretty_name: TunSwitch
---
The original dataset was acquired from the following link: https://zenodo.org/records/8370566
The dataset is not cleaned yet and any contributions are welcome 🤗
## Download instructions
```python
from huggingface_hub import snapshot_download
snapshot_download(repo_id="tunis-ai/TunSwitch", repo_type="dataset", local_dir=".")
```
## Information
This repo contains the data used to develop and test the Tunisian Arabic automatic speech recognition model presented in the following paper:
A. A. Ben Abdallah*, A. Kabboudi, A. Kanoun, and S. Zaiem*, "Leveraging Data Collection and Unsupervised Learning for Code-Switched Tunisian Arabic Automatic Speech Recognition", submitted to ICASSP 2024, 2023. (* These two authors contributed equally.)
It contains four zipped folders of audio data:
- TunSwitchCS.zip: containing annotated code-switched data.
- TunSwitchTO.zip: containing annotated Tunisian-only data.
- weakly_labeled_tn.zip: containing weakly-labeled (or unlabeled) audio data. The audio may contain code-switching, but the current weak labels do not.
- test_wavs.zip: containing annotated testing data, divided between a code-switched part and a Tunisian-only part.
It also contains textual data, used for language modelling, in TextData.zip. Finally, it contains a language-detailed annotation of TunSwitchCS in the language_annotation.zip file.
More details about the data are available in the paper. The provided tables are in a SpeechBrain-friendly format; the `path` column is machine-specific and must be updated to match your local setup. Please use the provided train-dev-test splits if you work with this dataset.
Models trained and tested on this dataset are also available, along with Space demos. If you use or refer to this dataset, please cite:
## Citation
```
@misc{abdallah2023leveraging,
title={Leveraging Data Collection and Unsupervised Learning for Code-switched Tunisian Arabic Automatic Speech Recognition},
author={Ahmed Amine Ben Abdallah and Ata Kabboudi and Amir Kanoun and Salah Zaiem},
year={2023},
eprint={2309.11327},
archivePrefix={arXiv},
primaryClass={eess.AS}
}
```
|
AIAT/Pangpuriye-dataset | AIAT | "2024-05-06T08:20:50Z" | 0 | 0 | [
"task_categories:table-question-answering",
"size_categories:100K<n<1M",
"language:th",
"language:en",
"license:cc",
"music",
"finance",
"code",
"croissant",
"region:us"
] | [
"table-question-answering"
] | "2024-05-03T04:46:37Z" | ---
license: cc
task_categories:
- table-question-answering
language:
- th
- en
tags:
- music
- finance
- code
pretty_name: Thai-SQL_Question_Syntax
size_categories:
- 100K<n<1M
---
# 🤖 [Super AI Engineer Development Program Season 4](https://superai.aiat.or.th/) - Pangpuriye House - Merged Dataset
![logo](https://huggingface.co/datasets/AIAT/Pangpuriye-generated_by_typhoon/resolve/main/logo/logo.png)
**Pangpuriye's House Completed Fine-tuning Dataset**
This dataset is a completed fine-tuning dataset, which was used for [Pangpuriye's instruction-tuned LLM model](https://huggingface.co/AIAT/Pangpuriye-openthaigpt-1.0.0-7b-chat). The dataset is released under the Creative Commons license family.
## Content
The dataset consists of 145,793 rows of `input`, `instruction`, and `output`.
- `input`: generated schema
- `instruction`: (sql extract) and query question in Thai
- `output`: code sql
## Uses
The dataset is intended to be used as instruction data for fine-tuning a table-based QA LLM. The instructions require some processing before they can be used for fine-tuning.
## Load our dataset with the `datasets` library
The following code is an example of calling our dataset via the `datasets` library.
```python
from datasets import load_dataset
dataset = load_dataset("AIAT/Pangpuriye-dataset")
```
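Once loaded, each row still has to be rendered into a single training string. One plausible Alpaca-style template is sketched below; the exact template the house used is not specified in this card, so this is an assumption for illustration only.

```python
def build_prompt(row: dict) -> str:
    """Render one {input, instruction, output} row into an Alpaca-style training string."""
    return (
        f"### Instruction:\n{row['instruction']}\n\n"
        f"### Input:\n{row['input']}\n\n"
        f"### Response:\n{row['output']}"
    )
```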
## Acknowledgements
The dataset was collected by the members of Pangpuriye's house during the LLMs hackathon in Super AI Engineer Development Program Season 4.
We thank the organizers of this hackathon, [OpenThaiGPT](https://openthaigpt.aieat.or.th/), [AIAT](https://aiat.or.th/), [NECTEC](https://www.nectec.or.th/en/) and [ThaiSC](https://thaisc.io/) for this challenging task and opportunity to be a part of developing Thai large language model. |
AndrewZeng/train_ppo_1to5_mix_equal_syn | AndrewZeng | "2024-05-03T04:54:54Z" | 0 | 0 | [
"license:apache-2.0",
"croissant",
"region:us"
] | null | "2024-05-03T04:54:04Z" | ---
license: apache-2.0
---
|
AndrewZeng/train_ppo_1to5_mix_twice_syn | AndrewZeng | "2024-05-03T04:56:12Z" | 0 | 0 | [
"license:apache-2.0",
"croissant",
"region:us"
] | null | "2024-05-03T04:55:44Z" | ---
license: apache-2.0
---
|
AndrewZeng/train_ppo_1to5_mix_third_syn | AndrewZeng | "2024-05-03T04:56:48Z" | 0 | 0 | [
"license:apache-2.0",
"croissant",
"region:us"
] | null | "2024-05-03T04:56:30Z" | ---
license: apache-2.0
---
|
AndrewZeng/train_ppo_1to5_mix_forth_syn | AndrewZeng | "2024-05-03T04:57:27Z" | 0 | 0 | [
"license:apache-2.0",
"croissant",
"region:us"
] | null | "2024-05-03T04:57:04Z" | ---
license: apache-2.0
---
|
thejagstudio/yolov8Fabric | thejagstudio | "2024-05-06T00:44:29Z" | 0 | 0 | [
"license:apache-2.0",
"croissant",
"region:us"
] | null | "2024-05-03T05:00:23Z" | ---
license: apache-2.0
---
|
REILX/extracted_tagengo_gpt4 | REILX | "2024-05-03T05:25:20Z" | 0 | 0 | [
"license:apache-2.0",
"croissant",
"region:us"
] | null | "2024-05-03T05:15:00Z" | ---
license: apache-2.0
---
### Original dataset.
- https://huggingface.co/datasets/lightblue/tagengo-gpt4
### Python Code
Use the following Python code to iterate over each line of the tagengo-gpt4 JSONL file, extract the human and GPT dialogue content, and save the extracted information to a new JSONL file using `instruction` and `output` as keys:
```python
import json
input_file_path = 'tagengo-gpt4.jsonl'
output_file_path = 'extracted_tagengo_gpt4.jsonl'
with open(input_file_path, 'r', encoding='utf-8') as input_file, \
open(output_file_path, 'w', encoding='utf-8') as output_file:
for line in input_file:
data = json.loads(line.strip())
conversations = data.get('conversations', [])
extraction = {'instruction': '', 'output': ''}
for conv in conversations:
if conv['from'] == 'human':
extraction['instruction'] = conv['value']
elif conv['from'] == 'gpt':
extraction['output'] = conv['value']
output_file.write(json.dumps(extraction, ensure_ascii=False) + '\n')
print(f"The conversation content has been extracted, and the results have been saved to '{output_file_path}'.")
``` |
asolodin/llm-rewrite-prompt-recovery-train | asolodin | "2024-05-03T05:44:31Z" | 0 | 0 | [
"license:apache-2.0",
"croissant",
"region:us"
] | null | "2024-05-03T05:15:37Z" | ---
license: apache-2.0
---
A dataset used for training models for the LLM Rewrite Prompt Recovery competition.
Sources:
* https://www.kaggle.com/datasets/dipamc77/3000-rewritten-texts-prompt-recovery-challenge
* https://huggingface.co/datasets/vishnupriyavr/wiki-movie-plots-with-summaries
* https://huggingface.co/datasets/positivethoughts/rewrite_500_prompts_3k_texts
* https://huggingface.co/datasets/kartikay/review-summarizer
|
sw24/sw24 | sw24 | "2024-05-03T05:23:06Z" | 0 | 0 | [
"license:apache-2.0",
"croissant",
"region:us"
] | null | "2024-05-03T05:20:08Z" | ---
license: apache-2.0
---
|
guymorlan/IsraParlTweet | guymorlan | "2024-05-29T20:55:42Z" | 0 | 1 | [
"language:he",
"license:cc-by-4.0",
"hebrew",
"parliament",
"knesset",
"twitter",
"region:us"
] | null | "2024-05-03T05:41:30Z" | ---
license: cc-by-4.0
language:
- he
tags:
- hebrew
- parliament
- knesset
- twitter
viewer: false
---
<p style="text-align: center;">
<a href="https://aclanthology.org/2024.lrec-main.819.pdf" style="display: inline-block;">
<img src="http://img.shields.io/badge/paper-ACL--anthology-B31B1B.svg" alt="Paper">
</a>
<a href="https://lrec-coling-2024.org/" style="display: inline-block;">
<img src="https://img.shields.io/badge/conference-LREC_COLING_2024-blue" alt="Conference">
</a>
<a href="#" style="display: inline-block;">
<img src="https://img.shields.io/badge/license-CC_BY_4.0-orange" alt="Version">
</a>
</p>
# IsraParlTweet: The Israeli Parliamentary and Twitter Resource
Guy Mor-Lan, Effi Levi, Tamir Sheafer, Shaul R. Shenhav. _LREC-COLING 2024_.
**Paper**: https://aclanthology.org/2024.lrec-main.819/
**Dataset**: https://huggingface.co/datasets/guymorlan/IsraParlTweet/tree/main
For access to Twitter data, please contact authors at *guy.mor -AT- mail.huji.ac.il*
## Abstract
We introduce IsraParlTweet, a new linked corpus of Hebrew-language parliamentary discussions from the Knesset (Israeli Parliament) between the years 1992-2023 and Twitter posts made by Members of the Knesset between the years 2008-2023, containing a total of 294.5 million Hebrew tokens. In addition to raw text, the corpus contains comprehensive metadata on speakers and Knesset sessions as well as several linguistic annotations. As a result, IsraParlTweet can be used to conduct a wide variety of quantitative and qualitative analyses and provide valuable insights into political discourse in Israel.
# Dataset Description
The corpus is divided into four main sections: Knesset Sessions, Twitter Posts, Office Sessions, and Linguistic Analyses.
## Knesset Sessions - knesset_speeches.csv
This section contains the utterances of the MKs on the Knesset floor, in the order in which they appeared in the consecutive plenary protocol files. The utterances vary in length and may contain anything from a few words to a complete speech. Interruptions and interjections are preserved as they appear in the protocols. In total, this section contains approximately 4.5M individual utterances. The data is organized in CSV format, where each row represents a single utterance and contains the following fields:
- **text**: The text of the utterance.
- **uuid**: A unique text identifier used for associating the text with separately provided morphological analysis.
- **knesset**: Knesset term.
- **session_number**: Session number in current Knesset term.
- **date**: Date of session.
- **person_id**: Numeric identifier for speaker. Numeric identifiers are only assigned to MKs. 3% of speakers lack an identifier in cases of non-MK politicians (e.g. president, non-MK ministers), administrative Knesset workers, or guests, or if the matching MK could not be determined. Speakers are assigned an identifier if they were MKs in the time period of the corpus, even if they are not MKs at the time the utterance is made (e.g. presidents that were previously MKs).
- **canonical_name**: The canonical name (first name and surname) of the speaker. Only present for MKs for which an identifier can be determined.
- **name**: The name of the speaker as extracted from the protocol.
- **chair**: Indicator for whether or not the speaker was the chair of the session.
- **topic**: Topic of discussion or agenda item.
- **topic_extra**: Additional information on the topic (e.g. subtitle, legislation proposal number).
- **qa**: Indicator for whether or not the utterance is part of a Questions and Answers session.
- **query**: The written query to which the utterance is an oral response.
- **only_read**: Indicator for whether or not the utterance was a Q&A response that was read and not delivered by the answerer orally.
## Twitter Posts - contact authors for access
- **text**: The text of the tweet.
- **uuid**: A unique text identifier used for associating the text with separately provided morphological analysis.
- **tweet_id**: Twitter's unique tweet identifier.
- **date**: Date of the tweet.
- **knesset**: The Knesset term corresponding to the date of the tweet.
- **person_id**: Numeric identifier for the tweet poster. All rows have an identifier since only posts by MKs were collected. However, note that the poster was not necessarily serving as an MK at the time of posting.
- **user_id**: Twitter user ID number.
- **username**: Twitter handle name.
- **name**: The canonical name (first name + surname) of the poster.
- **likes**: Number of likes received at collection time.
- **retweets**: Number of retweets at collection time.
- **replies**: Number of replies at collection time.
- **quotes**: Number of quotes at collection time.
## Office Sessions - metadata.csv
This section contains metadata describing the office sessions of the MKs. An office session is a period of time in which a person served as an MK under a given party or faction. The data is organized in CSV format, where each row, representing a single office session, contains the following fields:
- **start_date**: Start date of office session.
- **end_date**: End date of office session.
- **knesset**: Relevant Knesset term.
- **person_id**: A unique personal id used for matching with Knesset Session utterances and Twitter Posts.
- **first_name**: MK's first name.
- **surname**: MK's surname.
- **gender**: MK's gender.
- **faction**: Name of faction under which the MK served.
- **faction_id**: Unique identifier for faction.
- **party_name**: Unified party name under which the MK served.
- **dob**: MK's date of birth.
- **cob**: MK's country of birth.
- **yod**: MK's year of death.
- **yoi**: MK's year of immigration (Aliyah) to Israel.
- **city**: MK's city of residence.
- **languages**: MK's spoken languages, as a comma-separated string.
## Linguistic Analyses
All JSON files utilize the texts' uuid as keys.
- **knesset_sentences.json**: List of segmented sentences (processed by Stanza) for Knesset utterances.
- **knesset_lemmas.json**: List of lemmas (processed by Stanza) for Knesset utterances.
- **knesset_sentiment**: List of predicted sentiment (by HeBERT sentiment model) for Knesset utterances.
For additional linguistic analyses, please contact the authors.
## BibTeX
```
@inproceedings{mor-lan-etal-2024-israparltweet-israeli,
title = "{I}sra{P}arl{T}weet: The Israeli Parliamentary and {T}witter Resource",
author = "Mor-Lan, Guy and
Levi, Effi and
Sheafer, Tamir and
Shenhav, Shaul R.",
editor = "Calzolari, Nicoletta and
Kan, Min-Yen and
Hoste, Veronique and
Lenci, Alessandro and
Sakti, Sakriani and
Xue, Nianwen",
booktitle = "Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024)",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.lrec-main.819",
pages = "9372--9381",
abstract = "We introduce IsraParlTweet, a new linked corpus of Hebrew-language parliamentary discussions from the Knesset (Israeli Parliament) between the years 1992-2023 and Twitter posts made by Members of the Knesset between the years 2008-2023, containing a total of 294.5 million Hebrew tokens. In addition to raw text, the corpus contains comprehensive metadata on speakers and Knesset sessions as well as several linguistic annotations. As a result, IsraParlTweet can be used to conduct a wide variety of quantitative and qualitative analyses and provide valuable insights into political discourse in Israel.",
}
``` |
asolodin/llm-rewrite-prompt-recovery-test | asolodin | "2024-05-03T05:52:27Z" | 0 | 0 | [
"license:apache-2.0",
"croissant",
"region:us"
] | null | "2024-05-03T05:50:45Z" | ---
license: apache-2.0
---
|
AIAT/Optimizer-datasetfinal | AIAT | "2024-05-03T07:01:28Z" | 0 | 0 | [
"task_categories:table-question-answering",
"language:th",
"language:en",
"license:cc-by-nc-2.0",
"generated-by-cluade",
"generated-by-gpt4",
"generated-by-Llama-2-70b",
"croissant",
"region:us"
] | [
"table-question-answering"
] | "2024-05-03T05:58:07Z" | ---
license: cc-by-nc-2.0
dataset_info:
features:
- name: Question
dtype: string
- name: Expression
dtype: string
- name: header
dtype: string
splits:
- name: train
num_bytes: 20938658
num_examples: 6716
download_size: 728889
dataset_size: 20938658
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
task_categories:
- table-question-answering
language:
- th
- en
tags:
- generated-by-cluade
- generated-by-gpt4
- generated-by-Llama-2-70b
--- |
AIAT/The_Scamper-train | AIAT | "2024-05-03T06:16:08Z" | 0 | 0 | [
"task_categories:question-answering",
"croissant",
"region:us"
] | [
"question-answering"
] | "2024-05-03T06:06:02Z" | ---
task_categories:
- question-answering
--- |
stair-lab/proteinea_fluorescence-esm2_t33_650M_UR50D-embedding | stair-lab | "2024-05-03T07:36:27Z" | 0 | 0 | [
"croissant",
"region:us"
] | null | "2024-05-03T06:10:58Z" | ---
dataset_info:
features:
- name: inputs_embeds
sequence: float64
- name: rewards
dtype: float64
splits:
- name: train
num_bytes: 219864392
num_examples: 21446
- name: validation
num_bytes: 54971224
num_examples: 5362
download_size: 67405937
dataset_size: 274835616
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
stair-lab/proteinea_fluorescence-Meta-Llama-3-8B-embedding | stair-lab | "2024-05-03T07:49:03Z" | 0 | 0 | [
"croissant",
"region:us"
] | null | "2024-05-03T06:16:15Z" | ---
dataset_info:
features:
- name: inputs_embeds
sequence: float64
- name: rewards
dtype: float64
splits:
- name: train
num_bytes: 702999880
num_examples: 21446
- name: validation
num_bytes: 175766360
num_examples: 5362
download_size: 165588685
dataset_size: 878766240
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
spikingneurons/ASCADv2 | spikingneurons | "2024-05-03T06:16:36Z" | 0 | 0 | [
"license:bsd",
"region:us"
] | null | "2024-05-03T06:16:36Z" | ---
license: bsd
---
|
AIAT/EXP-thai2sql | AIAT | "2024-05-03T06:30:02Z" | 0 | 0 | [
"task_categories:text-generation",
"language:th",
"language:en",
"license:apache-2.0",
"croissant",
"region:us"
] | [
"text-generation"
] | "2024-05-03T06:20:35Z" | ---
license: apache-2.0
language:
- th
- en
pretty_name: a
task_categories:
- text-generation
---
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] |
nirajandhakal/Mahabharata-HHGTTG-Text | nirajandhakal | "2024-05-03T06:35:45Z" | 0 | 0 | [
"task_categories:text-generation",
"language:en",
"license:pddl",
"Mahabharata",
"The Hitchhiker Guide To The Galaxy",
"A Restaurant At The Edge of The Universe",
"Life, The Universe And Everything",
"So Long, And So Thanks For All The Fish",
"Mostly Harmless",
"croissant",
"region:us"
] | [
"text-generation"
] | "2024-05-03T06:21:49Z" | ---
license: pddl
task_categories:
- text-generation
language:
- en
tags:
- Mahabharata
- The Hitchhiker Guide To The Galaxy
- A Restaurant At The Edge of The Universe
- Life, The Universe And Everything
- So Long, And So Thanks For All The Fish
- Mostly Harmless
--- |
abhinit27052001/eiffel-toy | abhinit27052001 | "2024-05-03T06:24:54Z" | 0 | 0 | [
"croissant",
"region:us"
] | null | "2024-05-03T06:24:52Z" | ---
dataset_info:
features:
- name: image
dtype: image
- name: caption
dtype: string
splits:
- name: train
num_bytes: 3538539.0
num_examples: 3
download_size: 3474272
dataset_size: 3538539.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
vc64/QA_gpt | vc64 | "2024-05-03T06:26:28Z" | 0 | 0 | [
"croissant",
"region:us"
] | null | "2024-05-03T06:26:26Z" | ---
dataset_info:
features:
- name: context
dtype: string
- name: answer
dtype: string
- name: title
dtype: string
- name: question
dtype: string
- name: reworded_answer
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 4523437
num_examples: 3344
download_size: 2334738
dataset_size: 4523437
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
maverickrzw/go_dataset_smaller | maverickrzw | "2024-05-03T06:29:27Z" | 0 | 0 | [
"license:apache-2.0",
"croissant",
"region:us"
] | null | "2024-05-03T06:27:41Z" | ---
license: apache-2.0
---
|
stair-lab/proteinea_fluorescence-Mistral-7B-v0.1-embedding | stair-lab | "2024-05-03T07:49:30Z" | 0 | 0 | [
"croissant",
"region:us"
] | null | "2024-05-03T06:32:03Z" | ---
dataset_info:
features:
- name: inputs_embeds
sequence: float64
- name: rewards
dtype: float64
splits:
- name: train
num_bytes: 702999880
num_examples: 21446
- name: validation
num_bytes: 175766360
num_examples: 5362
download_size: 165594417
dataset_size: 878766240
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
AIAT/Machima-chatgptdataset | AIAT | "2024-05-03T06:48:26Z" | 0 | 0 | [
"task_categories:table-question-answering",
"language:th",
"language:en",
"license:cc-by-nc-4.0",
"croissant",
"region:us"
] | [
"table-question-answering"
] | "2024-05-03T06:47:10Z" | ---
license: cc-by-nc-4.0
task_categories:
- table-question-answering
language:
- th
- en
--- |
keunjoo/llama3 | keunjoo | "2024-05-08T00:07:55Z" | 0 | 0 | [
"croissant",
"region:us"
] | null | "2024-05-03T06:48:14Z" | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 5826
num_examples: 39
download_size: 2572
dataset_size: 5826
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
AIAT/Machima-gptloppingdataset | AIAT | "2024-05-03T06:56:49Z" | 0 | 0 | [
"task_categories:table-question-answering",
"language:th",
"language:en",
"license:cc-by-nc-4.0",
"croissant",
"region:us"
] | [
"table-question-answering"
] | "2024-05-03T06:53:11Z" | ---
license: cc-by-nc-4.0
task_categories:
- table-question-answering
language:
- th
- en
--- |
skygpt/zhilin-llama3-tt | skygpt | "2024-05-03T06:56:44Z" | 0 | 0 | [
"license:apache-2.0",
"region:us"
] | null | "2024-05-03T06:56:44Z" | ---
license: apache-2.0
---
|
onsba/eeee | onsba | "2024-05-03T06:58:36Z" | 0 | 0 | [
"license:other",
"region:us"
] | null | "2024-05-03T06:58:36Z" | ---
license: other
license_name: ons
license_link: LICENSE
---
|
yiweifu/relearn_data_lastname.hf | yiweifu | "2024-05-03T07:00:06Z" | 0 | 0 | [
"croissant",
"region:us"
] | null | "2024-05-03T07:00:05Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: labels
dtype: string
splits:
- name: train
num_bytes: 2430856
num_examples: 641
download_size: 1485525
dataset_size: 2430856
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
AIAT/Pangpuriye-public_alpaca-cleaned | AIAT | "2024-05-06T08:38:45Z" | 0 | 0 | [
"size_categories:1K<n<10K",
"language:th",
"language:en",
"license:cc-by-4.0",
"code",
"sql",
"croissant",
"region:us"
] | null | "2024-05-03T07:00:58Z" | ---
license: cc-by-4.0
language:
- th
- en
tags:
- code
- sql
size_categories:
- 1K<n<10K
---
# 🤖 [Super AI Engineer Development Program Season 4](https://superai.aiat.or.th/) - Pangpuriye House - Alpaca-Cleaned
![logo](https://huggingface.co/datasets/AIAT/Pangpuriye-generated_by_typhoon/resolve/main/logo/logo.png)
## Original Dataset
We adopt the [alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned) dataset from its original repository at `https://huggingface.co/datasets/yahma/alpaca-cleaned`. We used this dataset during the fine-tuning of [Pangpuriye's LLM](https://huggingface.co/AIAT/Pangpuriye-openthaigpt-1.0.0-7b-chat). The dataset is available under the Creative Commons Non-Commercial (CC BY-NC 4.0) license.
The original dataset consists of 51,760 rows of `input`, `instruction`, and `output` in English.
We consider the `alpaca-cleaned` dataset well structured and important to the learning of the instruction-tuned model. During the fine-tuning process, our goal in adding a mixture of English and Thai data is to help the LLM understand both languages equally well.
## Call Dataset
The following code is an example calling from `datasets` library.
```python
from datasets import load_dataset
dataset = load_dataset("AIAT/Pangpuriye-public_alpaca-cleaned")
```
## Citation Information
We acknowledge the original dataset; please refer to the original source as follows:
Please refer to the original dataset here [https://huggingface.co/datasets/yahma/alpaca-cleaned](https://huggingface.co/datasets/yahma/alpaca-cleaned).
```
@misc{alpaca,
author = {Rohan Taori and Ishaan Gulrajani and Tianyi Zhang and Yann Dubois and Xuechen Li and Carlos Guestrin and Percy Liang and Tatsunori B. Hashimoto },
title = {Stanford Alpaca: An Instruction-tuned LLaMA model},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/tatsu-lab/stanford_alpaca}},
}
``` |
AIAT/Optimizer-gptgeneratedpandas | AIAT | "2024-05-03T07:05:31Z" | 0 | 0 | [
"task_categories:text2text-generation",
"language:en",
"license:cc-by-nc-2.0",
"generated-by-gpt3-5",
"croissant",
"region:us"
] | [
"text2text-generation"
] | "2024-05-03T07:03:41Z" | ---
license: cc-by-nc-2.0
task_categories:
- text2text-generation
language:
- en
tags:
- generated-by-gpt3-5
--- |
JijoJS/car_damage | JijoJS | "2024-05-03T07:05:11Z" | 0 | 0 | [
"license:apache-2.0",
"region:us"
] | null | "2024-05-03T07:05:11Z" | ---
license: apache-2.0
---
|
CavidanZ/Audiobook | CavidanZ | "2024-05-30T18:11:01Z" | 0 | 0 | [
"license:apache-2.0",
"croissant",
"region:us"
] | null | "2024-05-03T07:13:58Z" | ---
license: apache-2.0
---
|
SaProtHub/Dataset-Fluorescence-TAPE | SaProtHub | "2024-05-03T11:41:30Z" | 0 | 0 | [
"license:mit",
"arxiv:1906.08230",
"region:us"
] | null | "2024-05-03T07:26:29Z" | ---
license: mit
---
# Description
Fluorescence prediction is a regression task where each input protein *x* is mapped to a label *y* ∈ *R*, corresponding to the log-fluorescence intensity of *x*.
# Splits
**Structure type:** None
The dataset is from [**Evaluating Protein Transfer Learning with TAPE**](https://arxiv.org/abs/1906.08230). We follow the original data splits, with the number of training, validation and test set shown below:
- Train: 20963
- Valid: 5235
- Test: 25517
# Data format
We organize all data in LMDB format. The database is structured as follows:
**length:** The number of samples
**0:**
- **seq:** The structure-aware sequence
- **fitness:** fitness label of the sequence
**1:**
**···** |
daniel-dona/openslr-slr108 | daniel-dona | "2024-05-03T08:08:51Z" | 0 | 0 | [
"croissant",
"region:us"
] | null | "2024-05-03T07:35:06Z" | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: text
dtype: string
splits:
- name: train
num_bytes: 133143930.32
num_examples: 2507
download_size: 133567112
dataset_size: 133143930.32
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
stair-lab/proteinea_fluorescence-gemma-7b-embedding | stair-lab | "2024-05-03T07:37:17Z" | 0 | 0 | [
"croissant",
"region:us"
] | null | "2024-05-03T07:35:26Z" | ---
dataset_info:
features:
- name: inputs_embeds
sequence: float64
- name: rewards
dtype: float64
splits:
- name: train
num_bytes: 527314248
num_examples: 21446
- name: validation
num_bytes: 131840856
num_examples: 5362
download_size: 124579770
dataset_size: 659155104
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
VAIBHAV22334455/JARVIS | VAIBHAV22334455 | "2024-05-03T07:50:02Z" | 0 | 0 | [
"license:mit",
"croissant",
"region:us"
] | null | "2024-05-03T07:46:40Z" | ---
license: mit
---
|
enchatted/OSCAR-2301-EL | enchatted | "2024-05-03T07:55:37Z" | 0 | 0 | [
"license:cc0-1.0",
"region:us"
] | null | "2024-05-03T07:55:37Z" | ---
license: cc0-1.0
---
|
STONE11112/yuyinmoxing | STONE11112 | "2024-05-03T08:09:45Z" | 0 | 0 | [
"license:mit",
"croissant",
"region:us"
] | null | "2024-05-03T08:03:52Z" | ---
license: mit
---
|
aikonst2025/ai | aikonst2025 | "2024-05-03T08:05:27Z" | 0 | 0 | [
"license:apache-2.0",
"croissant",
"region:us"
] | null | "2024-05-03T08:04:52Z" | ---
license: apache-2.0
---
|
nslyubaykin/data_for_iql | nslyubaykin | "2024-05-03T11:17:06Z" | 0 | 0 | [
"license:apache-2.0",
"region:us"
] | null | "2024-05-03T08:08:03Z" | ---
license: apache-2.0
---
|
alissonpadua/ham-spam-scam-toxic | alissonpadua | "2024-05-03T08:10:46Z" | 0 | 0 | [
"license:mit",
"croissant",
"region:us"
] | null | "2024-05-03T08:10:02Z" | ---
license: mit
---
|
catinthebag/TumpengQA | catinthebag | "2024-05-04T22:00:49Z" | 0 | 0 | [
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:id",
"license:cc-by-nc-4.0",
"croissant",
"region:us"
] | [
"text-generation"
] | "2024-05-03T08:21:03Z" | ---
license: cc-by-nc-4.0
task_categories:
- text-generation
language:
- id
size_categories:
- 10K<n<100K
---
# Synthetic Indonesian dataset with Llama 3 70B
TumpengQA contains 6.7M words across 28.2K input-output pairs of Indonesian question answering. It is intended for fine-tuning Llama 3 8B, which has limited Indonesian language capabilities, so that it responds properly in Indonesian.
It is a research preview dataset and not curated for factual accuracy or safety. Use this dataset at your discretion.
# Out of scope use
- Commercial use
- Fine-tuning non-Llama 3 models |
maverickrzw/go_dataset_size9 | maverickrzw | "2024-05-16T13:48:52Z" | 0 | 0 | [
"license:apache-2.0",
"croissant",
"region:us"
] | null | "2024-05-03T08:28:39Z" | ---
license: apache-2.0
---
|
YYYYYYibo/eval-dataset-with-score-rank4_plus | YYYYYYibo | "2024-05-03T08:30:57Z" | 0 | 0 | [
"croissant",
"region:us"
] | null | "2024-05-03T08:30:56Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: prompt_id
dtype: string
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: score_chosen
dtype: float64
- name: score_rejected
dtype: float64
- name: reference_response
dtype: string
- name: original
dtype: string
- name: rank4_plus
dtype: string
- name: gpt_score
dtype: int64
splits:
- name: train_prefs
num_bytes: 1930938
num_examples: 200
download_size: 1112496
dataset_size: 1930938
configs:
- config_name: default
data_files:
- split: train_prefs
path: data/train_prefs-*
---
# Dataset Card for "eval-dataset-with-score-rank4_plus"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
pramitsahoo/spoiler-regressor-data | pramitsahoo | "2024-05-03T08:38:42Z" | 0 | 0 | [
"license:mit",
"croissant",
"region:us"
] | null | "2024-05-03T08:38:12Z" | ---
license: mit
---
|
udmurtNLP/zerpal-udmdunne | udmurtNLP | "2024-05-03T08:42:45Z" | 0 | 0 | [
"croissant",
"region:us"
] | null | "2024-05-03T08:42:43Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: label
dtype: string
splits:
- name: train
num_bytes: 23704473.83701251
num_examples: 5707
- name: valid
num_bytes: 10163807.162987491
num_examples: 2447
download_size: 16604100
dataset_size: 33868281.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: valid
path: data/valid-*
---
|
Mymodalert/modafinil | Mymodalert | "2024-05-03T09:07:50Z" | 0 | 0 | [
"license:mit",
"region:us"
] | null | "2024-05-03T09:07:50Z" | ---
license: mit
---
|
tina2900/MUMU-LLaMA | tina2900 | "2024-05-03T09:32:23Z" | 0 | 0 | [
"license:apache-2.0",
"croissant",
"region:us"
] | null | "2024-05-03T09:21:39Z" | ---
license: apache-2.0
---
|
aikonst2025/arrcr | aikonst2025 | "2024-05-03T09:38:21Z" | 0 | 0 | [
"license:apache-2.0",
"region:us"
] | null | "2024-05-03T09:34:04Z" | ---
license: apache-2.0
---
|
InvestmentResearchAI/earnings_10k_questions | InvestmentResearchAI | "2024-05-03T09:40:07Z" | 0 | 0 | [
"license:apache-2.0",
"croissant",
"arxiv:2404.13028",
"region:us"
] | null | "2024-05-03T09:34:17Z" | ---
license: apache-2.0
---
Dataset Summary
This dataset is composed of 50 questions for testing the earnings_10k dataset as part of the LLM-ADE framework (https://arxiv.org/abs/2404.13028), specifically designed to test for native processing/training.
We incorporate these questions into lm-evaluation-harness (https://github.com/EleutherAI/lm-evaluation-harness) to test our LLM-ADE-enhanced model and score whether the training was successful.
DartiParti/AWS_ICONS | DartiParti | "2024-05-03T14:39:06Z" | 0 | 0 | [
"license:mit",
"croissant",
"region:us"
] | null | "2024-05-03T09:44:14Z" | ---
license: mit
---
This dataset is an altered and reworked form of steven-kuo-5s6aq.
# AWS Icon Detection > 2023-05-03 5:16am
https://universe.roboflow.com/steven-kuo-5s6aq/aws-icon-detection
Provided by a Roboflow user
License: CC BY 4.0 |
NobodyExistsOnTheInternet/wildchat1Mjsonl | NobodyExistsOnTheInternet | "2024-05-03T10:02:36Z" | 0 | 0 | [
"license:mit",
"croissant",
"region:us"
] | null | "2024-05-03T09:44:29Z" | ---
license: mit
---
|
VonOx/carelink-diabetes-closedloop | VonOx | "2024-05-03T10:06:05Z" | 0 | 0 | [
"license:apache-2.0",
"region:us"
] | null | "2024-05-03T10:04:04Z" | ---
license: apache-2.0
---
|
DIBT/aya_english_dpo_raw | DIBT | "2024-05-03T10:07:52Z" | 0 | 0 | [
"size_categories:n<1K",
"synthetic",
"distilabel",
"rlaif",
"croissant",
"region:us"
] | null | "2024-05-03T10:07:49Z" | ---
size_categories: n<1K
dataset_info:
features:
- name: instruction
dtype: string
- name: targets
dtype: string
- name: language
dtype: string
- name: language_code
dtype: string
- name: annotation_type
dtype: string
- name: user_id
dtype: string
- name: generation_model
dtype: string
- name: generation
dtype: string
- name: predicted_generation_language
dtype: string
- name: predicted_generation_language_score
dtype: float64
- name: generations
sequence: string
- name: generation_models
sequence: string
- name: model_name
dtype: string
- name: ratings
sequence: int64
- name: rationales
sequence: string
splits:
- name: train
num_bytes: 567777
num_examples: 100
download_size: 320089
dataset_size: 567777
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- synthetic
- distilabel
- rlaif
---
<p align="left">
<a href="https://github.com/argilla-io/distilabel">
<img src="https://raw.githubusercontent.com/argilla-io/distilabel/main/docs/assets/distilabel-badge-light.png" alt="Built with Distilabel" width="200" height="32"/>
</a>
</p>
# Dataset Card for aya_english_dpo_raw
This dataset has been created with [distilabel](https://distilabel.argilla.io/).
## Dataset Summary
This dataset contains a `pipeline.yaml` which can be used to reproduce the pipeline that generated it in distilabel using the `distilabel` CLI:
```console
distilabel pipeline run --config "https://huggingface.co/datasets/DIBT/aya_english_dpo_raw/raw/main/pipeline.yaml"
```
or explore the configuration:
```console
distilabel pipeline info --config "https://huggingface.co/datasets/DIBT/aya_english_dpo_raw/raw/main/pipeline.yaml"
```
## Dataset structure
The examples have the following structure per configuration:
<details><summary> Configuration: default </summary><hr>
```json
{
"annotation_type": "original-annotations",
"generation": "The largest organ in the human body is the skin. It covers the entire surface of the body and has a total area of about 22 square feet (2 square meters) in adults. The skin performs many important functions, such as:\n\n* Protecting the body from external damage, such as injury, infection, and UV radiation\n* Regulating body temperature\n* Aiding in the production of vitamin D\n* Assisting in the sensation of touch and feel\n* Helping to maintain fluid balance and hydration\n\nThe skin is composed of several layers, with the outermost layer being the epidermis, followed by the dermis, and then the hypodermis. It is estimated that the skin accounts for about 15% of an adult\u0027s total body weight.\n\nIt\u0027s worth noting that while the skin is the largest organ by surface area, the liver is often considered the largest organ by mass, weighing around 3 pounds (1.4 kilograms) in adults.",
"generation_model": "meta-llama/Meta-Llama-3-70B-Instruct",
"generation_models": [
"aya",
"meta-llama/Meta-Llama-3-70B-Instruct"
],
"generations": [
"The largest organ in the human body is the skin. It is the body\u0027s largest organ by surface area and weight. The skin serves many essential functions, including protecting the body from physical damage, pathogens, and dehydration, as well as helping to regulate body temperature and producing vitamin D in response to sunlight. It\u0027s a complex organ with multiple layers, cells, and structures, each serving a unique purpose in maintaining the body\u0027s overall health and well-being.",
"The largest organ in the human body is the skin. It covers the entire surface of the body and has a total area of about 22 square feet (2 square meters) in adults. The skin performs many important functions, such as:\n\n* Protecting the body from external damage, such as injury, infection, and UV radiation\n* Regulating body temperature\n* Aiding in the production of vitamin D\n* Assisting in the sensation of touch and feel\n* Helping to maintain fluid balance and hydration\n\nThe skin is composed of several layers, with the outermost layer being the epidermis, followed by the dermis, and then the hypodermis. It is estimated that the skin accounts for about 15% of an adult\u0027s total body weight.\n\nIt\u0027s worth noting that while the skin is the largest organ by surface area, the liver is often considered the largest organ by mass, weighing around 3 pounds (1.4 kilograms) in adults."
],
"instruction": "What is the largest organ in the human body?",
"language": "English",
"language_code": "eng",
"model_name": "meta-llama/Meta-Llama-3-70B-Instruct",
"predicted_generation_language": "eng_Latn",
"predicted_generation_language_score": 0.9590950012207031,
"ratings": [
5,
5
],
"rationales": [
"The text is accurate, informative, and comprehensive in describing the skin as the largest organ in the human body. It provides specific details about its functions, structure, and importance, aligning perfectly with the instruction.",
"This text is equally excellent, providing a clear and concise answer to the question. It lists the skin\u0027s functions, describes its composition, and offers additional interesting facts, such as the comparison with the liver\u0027s mass. The text is well-structured, accurate, and confident in its information, making it an excellent response."
],
"targets": "The largest organ in the human body is the skin. It is the body\u0027s largest organ by surface area and weight. The skin serves many essential functions, including protecting the body from physical damage, pathogens, and dehydration, as well as helping to regulate body temperature and producing vitamin D in response to sunlight. It\u0027s a complex organ with multiple layers, cells, and structures, each serving a unique purpose in maintaining the body\u0027s overall health and well-being.",
"user_id": "29f22cf193a81e1a5c47d76af453a91b3cd19aa348995c7add1df15fe24e8801"
}
```
This subset can be loaded as:
```python
from datasets import load_dataset
ds = load_dataset("DIBT/aya_english_dpo_raw", "default")
```
Or simply as follows, since there's only one configuration and it is named `default`:
```python
from datasets import load_dataset
ds = load_dataset("DIBT/aya_english_dpo_raw")
```
</details>
|
scaleszw/my_ai_information | scaleszw | "2024-05-03T10:15:07Z" | 0 | 0 | [
"license:llama3",
"region:us"
] | null | "2024-05-03T10:15:07Z" | ---
license: llama3
---
|
hannybu/wild_angel_episodes_description_ru | hannybu | "2024-05-03T10:37:39Z" | 0 | 4 | [
"license:apache-2.0",
"croissant",
"region:us"
] | null | "2024-05-03T10:33:37Z" | ---
license: apache-2.0
---
|
aironman/classifier-github-issues | aironman | "2024-05-03T10:57:43Z" | 0 | 0 | [
"license:apache-2.0",
"region:us"
] | null | "2024-05-03T10:57:43Z" | ---
license: apache-2.0
---
|
aintech/vdf_paul_graham_essay | aintech | "2024-05-03T11:12:52Z" | 0 | 0 | [
"vdf",
"vector-io",
"vector-dataset",
"vector-embeddings",
"croissant",
"region:us"
] | null | "2024-05-03T11:12:49Z" |
---
tags:
- vdf
- vector-io
- vector-dataset
- vector-embeddings
---
This is a dataset created using [vector-io](https://github.com/ai-northstar-tech/vector-io)
|
jq/salt-asr-data-transcriptions | jq | "2024-05-03T15:13:55Z" | 0 | 0 | [
"croissant",
"region:us"
] | null | "2024-05-03T11:21:24Z" | ---
dataset_info:
- config_name: multispeaker-ach
features:
- name: id
dtype: int64
- name: text
dtype: string
- name: audio_language
dtype: string
- name: is_studio
dtype: bool
- name: speaker_id
dtype: string
- name: sample_rate
dtype: int64
- name: transcription
dtype: string
- name: edit_distance
dtype: int64
splits:
- name: train
num_bytes: 704574
num_examples: 4811
- name: dev
num_bytes: 14750
num_examples: 101
- name: test
num_bytes: 14497
num_examples: 96
download_size: 401928
dataset_size: 733821
- config_name: multispeaker-eng
features:
- name: id
dtype: int64
- name: text
dtype: string
- name: audio_language
dtype: string
- name: is_studio
dtype: bool
- name: speaker_id
dtype: string
- name: sample_rate
dtype: int64
- name: transcription
dtype: string
- name: edit_distance
dtype: int64
splits:
- name: dev
num_bytes: 15282
num_examples: 100
- name: test
num_bytes: 15194
num_examples: 96
- name: train
num_bytes: 734854
num_examples: 4797
download_size: 402022
dataset_size: 765330
- config_name: multispeaker-lgg
features:
- name: id
dtype: int64
- name: text
dtype: string
- name: audio_language
dtype: string
- name: is_studio
dtype: bool
- name: speaker_id
dtype: string
- name: sample_rate
dtype: int64
- name: transcription
dtype: string
- name: edit_distance
dtype: int64
splits:
- name: train
num_bytes: 704330
num_examples: 4811
- name: dev
num_bytes: 14684
num_examples: 101
- name: test
num_bytes: 14411
num_examples: 96
download_size: 406173
dataset_size: 733425
- config_name: multispeaker-lug
features:
- name: id
dtype: int64
- name: text
dtype: string
- name: audio_language
dtype: string
- name: is_studio
dtype: bool
- name: speaker_id
dtype: string
- name: sample_rate
dtype: int64
- name: transcription
dtype: string
- name: edit_distance
dtype: int64
splits:
- name: train
num_bytes: 801734
num_examples: 5016
- name: dev
num_bytes: 16421
num_examples: 103
- name: test
num_bytes: 16270
num_examples: 97
download_size: 819770
dataset_size: 834425
- config_name: multispeaker-nyn
features:
- name: id
dtype: int64
- name: text
dtype: string
- name: audio_language
dtype: string
- name: is_studio
dtype: bool
- name: speaker_id
dtype: string
- name: sample_rate
dtype: int64
- name: transcription
dtype: string
- name: edit_distance
dtype: int64
splits:
- name: train
num_bytes: 700078
num_examples: 4811
- name: dev
num_bytes: 14574
num_examples: 101
- name: test
num_bytes: 14351
num_examples: 96
download_size: 417568
dataset_size: 729003
- config_name: multispeaker-teo
features:
- name: id
dtype: int64
- name: text
dtype: string
- name: audio_language
dtype: string
- name: is_studio
dtype: bool
- name: speaker_id
dtype: string
- name: sample_rate
dtype: int64
- name: transcription
dtype: string
- name: edit_distance
dtype: int64
splits:
- name: train
num_bytes: 690253
num_examples: 4811
- name: dev
num_bytes: 14389
num_examples: 101
- name: test
num_bytes: 14113
num_examples: 96
download_size: 401353
dataset_size: 718755
configs:
- config_name: multispeaker-ach
data_files:
- split: train
path: multispeaker-ach/train-*
- split: dev
path: multispeaker-ach/dev-*
- split: test
path: multispeaker-ach/test-*
- config_name: multispeaker-eng
data_files:
- split: dev
path: multispeaker-eng/dev-*
- split: test
path: multispeaker-eng/test-*
- split: train
path: multispeaker-eng/train-*
- config_name: multispeaker-lgg
data_files:
- split: train
path: multispeaker-lgg/train-*
- split: dev
path: multispeaker-lgg/dev-*
- split: test
path: multispeaker-lgg/test-*
- config_name: multispeaker-lug
data_files:
- split: train
path: multispeaker-lug/train-*
- split: dev
path: multispeaker-lug/dev-*
- split: test
path: multispeaker-lug/test-*
- config_name: multispeaker-nyn
data_files:
- split: train
path: multispeaker-nyn/train-*
- split: dev
path: multispeaker-nyn/dev-*
- split: test
path: multispeaker-nyn/test-*
- config_name: multispeaker-teo
data_files:
- split: train
path: multispeaker-teo/train-*
- split: dev
path: multispeaker-teo/dev-*
- split: test
path: multispeaker-teo/test-*
---
|
NobodyExistsOnTheInternet/wildchat650kjsonl | NobodyExistsOnTheInternet | "2024-05-03T11:37:34Z" | 0 | 0 | [
"license:mit",
"croissant",
"region:us"
] | null | "2024-05-03T11:28:04Z" | ---
license: mit
---
|
kjetMol/ArtificiallyNoisySpeechTranscriptions | kjetMol | "2024-05-03T12:01:18Z" | 0 | 0 | [
"language:no",
"croissant",
"region:us"
] | null | "2024-05-03T11:37:55Z" | ---
language:
- 'no'
---
This dataset contains transcriptions of speech files derived from the Norwegian language corpus provided by Språkbanken, specifically the nb_samtale subset. These transcriptions have been subjected to controlled noise addition to simulate various acoustic environments.
URI: https://huggingface.co/datasets/Sprakbanken/nb_samtale/viewer/annotations/train?f[duration][min]=24.6432&f[duration][imax]=27.368
Original audio properties:
Duration: 24 to 27 seconds
Format: WAV
Number of files: 9
Transcriptions:
Number of models tested: 3
Number of noise types tested: 4
Number of noise levels: 16
Total number of files: 1682
Metrics
Word Error Rate (WER): WER calculations are based on the transcriptions with 0% added noise as the ground truth. This metric helps in assessing the performance of speech recognition systems under varying noise conditions.
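The WER against the 0%-noise reference can be sketched as word-level Levenshtein distance; a minimal dependency-free version in Python (the function name and signature are illustrative, not part of this dataset):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming table for Levenshtein distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / len(ref)
```

With the 0%-noise transcription as `reference`, each noisier transcription is scored as `wer(reference, noisy_transcription)`.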
|
ServiceNow/repliqa_inference_tests | ServiceNow | "2024-05-03T12:10:46Z" | 0 | 1 | [
"croissant",
"region:us"
] | null | "2024-05-03T11:42:21Z" | ---
dataset_info:
features:
- name: topic
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answer
dtype: string
- name: predicted_answer
dtype: string
- name: retrieved_topic
dtype: string
splits:
- name: mistral_large
num_bytes: 25984
num_examples: 33
- name: llama_3_70b_instruct
num_bytes: 25144
num_examples: 33
- name: gpt_4_turbo
num_bytes: 24963
num_examples: 33
download_size: 70547
dataset_size: 76091
configs:
- config_name: default
data_files:
- split: mistral_large
path: data/mistral_large-*
- split: llama_3_70b_instruct
path: data/llama_3_70b_instruct-*
- split: gpt_4_turbo
path: data/gpt_4_turbo-*
---
|
SaProtHub/Dataset-Thermostability-FLIP | SaProtHub | "2024-05-06T11:26:22Z" | 0 | 0 | [
"license:mit",
"region:us"
] | null | "2024-05-03T11:47:46Z" | ---
license: mit
---
# Description
Thermostability prediction is a regression task where each input protein *x* is mapped to a label *y* ∈ *R*, corresponding to the thermostability of *x*.
# Splits
**Structure type:** AF2
The dataset is from [**FLIP: Benchmark tasks in fitness landscape inference for proteins**](https://www.biorxiv.org/content/10.1101/2021.11.09.467890v2). We employ all proteins from the "Human-cell" splits (proteins that lack AF2 structures are removed), and split them based on 70% structure similarity (see [ProteinShake](https://github.com/BorgwardtLab/proteinshake/tree/main)), with the number of training, validation and test set shown below:
- Train: 5310
- Valid: 706
- Test: 706
# Data format
We organize all data in LMDB format. The database is structured as follows:
**length:** The number of samples
**0:**
- **name:** The UniProt ID of the protein
- **seq:** The structure-aware sequence
- **plddt**: pLDDT values at all positions
- **fitness:** fitness label of the sequence
**1:**
**···**
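A minimal sketch of reading one sample from such an LMDB database. This assumes JSON-serialized values and the key layout described above (a `length` key plus index-keyed records); the helper names are illustrative, so verify the serialization against the actual file:

```python
import json

def num_samples(get):
    # `get` is any bytes -> bytes lookup, e.g. `txn.get` from an open
    # lmdb read transaction. The "length" key stores the sample count.
    return int(get(b"length"))

def read_sample(get, index):
    # Samples are keyed by their stringified index ("0", "1", ...).
    # Assumption: values are JSON-encoded dicts holding the fields
    # listed above (name, seq, plddt, fitness); adjust the decoding
    # if the database uses another serialization.
    return json.loads(get(str(index).encode()))
```

With the `lmdb` package this would be used roughly as: `env = lmdb.open(path, readonly=True, lock=False)`, then `with env.begin() as txn: sample = read_sample(txn.get, 0)`.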
|
sinarproject/legisdata | sinarproject | "2024-05-16T17:10:16Z" | 0 | 0 | [
"license:cc-by-4.0",
"region:us"
] | null | "2024-05-03T11:56:53Z" | ---
license: cc-by-4.0
--- |
Knowledge-Innovation-Centre/ESCO-Embeddings | Knowledge-Innovation-Centre | "2024-05-28T13:31:34Z" | 0 | 0 | [
"license:mit",
"croissant",
"region:us"
] | null | "2024-05-03T12:06:10Z" | ---
license: mit
---
The ESCO skills database converted to embeddings with 3072 dimensions, using OpenAI's text-embedding-3-large model. |
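Since each ESCO skill ends up as a plain 3072-dimensional vector, a nearest-skill lookup over these embeddings reduces to cosine similarity; a dependency-free sketch (the function is illustrative, not part of the dataset):

```python
import math

def cosine_similarity(a, b):
    # Cosine similarity between two embedding vectors, e.g. the
    # 3072-dimensional text-embedding-3-large vectors in this dataset.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)
```

Ranking all skill vectors by `cosine_similarity` against a query embedding (produced with the same model) gives the closest ESCO skills.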
baraah/data_3_5_20k_withoutcolumnname | baraah | "2024-05-03T12:26:03Z" | 0 | 0 | [
"croissant",
"region:us"
] | null | "2024-05-03T12:16:59Z" | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 565493603.744
num_examples: 12604
download_size: 600841991
dataset_size: 565493603.744
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
cocktailpeanut/town | cocktailpeanut | "2024-05-03T12:40:21Z" | 0 | 1 | [
"license:apache-2.0",
"region:us"
] | null | "2024-05-03T12:26:16Z" | ---
license: apache-2.0
---
|
SaProtHub/Dataset-Metal_Ion_Binding | SaProtHub | "2024-05-03T12:28:48Z" | 0 | 0 | [
"license:mit",
"arxiv:2206.06583",
"region:us"
] | null | "2024-05-03T12:28:25Z" | ---
license: mit
---
# Description
Metal Ion Binding prediction is a binary classification task where each input protein *x* is mapped to a label *y* ∈ {0, 1}, corresponding to whether there are metal ion–binding sites in the protein.
The digital label means:
0: No
1: Yes
# Splits
**Structure type:** PDB
The dataset is from [**Exploring evolution-aware & -free protein language models as protein function predictors**](https://arxiv.org/abs/2206.06583). We employ all proteins from the original dataset, and split them based on 70% structure similarity (see [ProteinShake](https://github.com/BorgwardtLab/proteinshake/tree/main)), with the number of training, validation and test set shown below:
- Train: 5797
- Valid: 719
- Test: 719
# Data format
We organize all data in LMDB format. The database is structured as follows:
**length:** The number of samples
**0:**
- **name:** The PDB ID of the protein
- **chain:** The chain ID of the protein
- **seq:** The structure-aware sequence
- **label:** Digital label of the sequence
**1:**
**···**
|
baraah/data_3_5_20k_withoutcolumnname3 | baraah | "2024-05-03T12:41:02Z" | 0 | 0 | [
"croissant",
"region:us"
] | null | "2024-05-03T12:40:27Z" | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 565493603.744
num_examples: 12604
download_size: 600841991
dataset_size: 565493603.744
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Al-Hathboor-Bikal-ai-2023/Ahb_dataset_V.02 | Al-Hathboor-Bikal-ai-2023 | "2024-05-03T13:05:34Z" | 0 | 0 | [
"license:apache-2.0",
"croissant",
"region:us"
] | null | "2024-05-03T13:04:09Z" | ---
license: apache-2.0
---
|
csssd/qs-llama3-tt | csssd | "2024-05-03T13:10:52Z" | 0 | 0 | [
"license:apache-2.0",
"region:us"
] | null | "2024-05-03T13:10:50Z" | ---
license: apache-2.0
---
|
iagolaue/henriqju | iagolaue | "2024-05-03T13:19:15Z" | 0 | 0 | [
"license:openrail",
"croissant",
"region:us"
] | null | "2024-05-03T13:18:00Z" | ---
license: openrail
---
|
Krish13/vinod_mmlutransformed_QnA_traintest | Krish13 | "2024-05-03T13:23:54Z" | 0 | 0 | [
"croissant",
"region:us"
] | null | "2024-05-03T13:23:48Z" | ---
dataset_info:
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype:
class_label:
names:
'0': A
'1': B
'2': C
'3': D
- name: Instruction
dtype: string
- name: Response
dtype: string
splits:
- name: test
num_bytes: 14509714
num_examples: 14042
- name: validation
num_bytes: 1588862
num_examples: 1531
- name: dev
num_bytes: 262040
num_examples: 285
- name: auxiliary_train
num_bytes: 326744953
num_examples: 99842
download_size: 104411440
dataset_size: 343105569
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
- split: validation
path: data/validation-*
- split: dev
path: data/dev-*
- split: auxiliary_train
path: data/auxiliary_train-*
---
|
aniketsen/hplt_bn | aniketsen | "2024-05-03T13:50:05Z" | 0 | 0 | [
"croissant",
"region:us"
] | null | "2024-05-03T13:31:57Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: metadata
dtype: string
splits:
- name: train
num_bytes: 44266046418
num_examples: 2875658
download_size: 13512248530
dataset_size: 44266046418
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
SaProtHub/Dataset-Stability-TAPE | SaProtHub | "2024-05-03T13:48:17Z" | 0 | 0 | [
"license:mit",
"arxiv:1906.08230",
"region:us"
] | null | "2024-05-03T13:47:14Z" | ---
license: mit
---
# Description
Stability Landscape Prediction is a regression task where each input protein *x* is mapped to a label *y* ∈ *R* measuring the most extreme circumstances in which protein *x* maintains its fold above a concentration threshold (a proxy for intrinsic stability).
# Splits
**Structure type:** None
The dataset is from [**Evaluating Protein Transfer Learning with TAPE**](https://arxiv.org/abs/1906.08230). We follow the original data splits, with the number of training, validation and test set shown below:
- Train: 53614
- Valid: 2512
- Test: 12851
# Data format
We organize all data in LMDB format. The database is structured as follows:
**length:** The number of samples
**0:**
- **seq:** The structure-aware sequence
- **fitness:** fitness label of the sequence
**1:**
**···** |
Alijeff1214/DILA_FRENCH_DATASET | Alijeff1214 | "2024-05-03T13:52:27Z" | 0 | 0 | [
"license:mit",
"region:us"
] | null | "2024-05-03T13:52:27Z" | ---
license: mit
---
|
SaProtHub/Dataset-Beta_Lactamase-PEER | SaProtHub | "2024-05-03T14:03:55Z" | 0 | 0 | [
"license:mit",
"arxiv:2206.02096",
"region:us"
] | null | "2024-05-03T14:03:42Z" | ---
license: mit
---
# Description
β-Lactamase Prediction studies the activity among first-order mutants of the TEM-1 beta-lactamase protein. The target *y* ∈ *R* is the experimentally tested fitness score which records the scaled mutation effect for each mutant.
# Splits
**Structure type:** None
The dataset is from [**PEER: A Comprehensive and Multi-Task Benchmark for Protein Sequence Understanding**](https://arxiv.org/abs/2206.02096). We follow the original data splits, with the number of training, validation and test set shown below:
- Train: 4158
- Valid: 520
- Test: 520
# Data format
We organize all data in LMDB format. The database is structured as follows:
**length:** The number of samples
**0:**
- **seq:** The structure-aware sequence
- **fitness:** fitness label of the sequence
**1:**
**···** |
asdcw/flower | asdcw | "2024-05-03T14:13:11Z" | 0 | 0 | [
"size_categories:n<1K",
"license:artistic-2.0",
"art",
"region:us"
] | null | "2024-05-03T14:08:24Z" | ---
license: artistic-2.0
tags:
- art
pretty_name: the flower I like
size_categories:
- n<1K
--- |
Mantis-VL/MIQA_eval | Mantis-VL | "2024-05-03T14:37:54Z" | 0 | 0 | [
"croissant",
"region:us"
] | null | "2024-05-03T14:37:54Z" | ---
dataset_info:
- config_name: birds-to-words
features:
- name: id
dtype: string
- name: question_type
dtype: string
- name: question
dtype: string
- name: images
sequence: image
- name: options
sequence: string
- name: answer
dtype: string
- name: data_source
dtype: string
- name: category
dtype: string
splits:
- name: test
num_bytes: 170267865.0
num_examples: 337
download_size: 95411526
dataset_size: 170267865.0
- config_name: mantis_eval
features:
- name: id
dtype: string
- name: question_type
dtype: string
- name: question
dtype: string
- name: images
sequence: image
- name: options
sequence: string
- name: answer
dtype: string
- name: data_source
dtype: string
- name: category
dtype: string
splits:
- name: test
num_bytes: 479770102.0
num_examples: 217
download_size: 473031413
dataset_size: 479770102.0
- config_name: mementos
features:
- name: id
dtype: string
- name: question_type
dtype: string
- name: question
dtype: string
- name: images
sequence: image
- name: options
sequence: string
- name: answer
dtype: string
- name: data_source
dtype: string
- name: category
dtype: string
splits:
- name: image_cmc
num_bytes: 33771707.0
num_examples: 50
- name: image_dl
num_bytes: 1468869742.0
num_examples: 448
- name: image_robo
num_bytes: 539338081.0
num_examples: 199
- name: single_image_cmc
num_bytes: 48207029.0
num_examples: 50
- name: single_image_dl
num_bytes: 1455413585.0
num_examples: 448
- name: single_image_robo
num_bytes: 550100816.0
num_examples: 199
download_size: 4094931165
dataset_size: 4095700960.0
- config_name: nlvr2
features:
- name: id
dtype: string
- name: question_type
dtype: string
- name: question
dtype: string
- name: images
sequence: image
- name: options
sequence: string
- name: answer
dtype: string
- name: data_source
dtype: string
- name: category
dtype: string
splits:
- name: test
num_bytes: 12511043043.535
num_examples: 6967
download_size: 7454063707
dataset_size: 12511043043.535
configs:
- config_name: birds-to-words
data_files:
- split: test
path: birds-to-words/test-*
- config_name: mantis_eval
data_files:
- split: test
path: mantis_eval/test-*
- config_name: mementos
data_files:
- split: image_cmc
path: mementos/image_cmc-*
- split: image_dl
path: mementos/image_dl-*
- split: image_robo
path: mementos/image_robo-*
- split: single_image_cmc
path: mementos/single_image_cmc-*
- split: single_image_dl
path: mementos/single_image_dl-*
- split: single_image_robo
path: mementos/single_image_robo-*
- config_name: nlvr2
data_files:
- split: test
path: nlvr2/test-*
---
|
Kamyar-zeinalipour/T4TAC | Kamyar-zeinalipour | "2024-05-03T14:41:17Z" | 0 | 0 | [
"croissant",
"region:us"
] | null | "2024-05-03T14:41:14Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: category
dtype: string
- name: keyword
dtype: string
- name: clue
dtype: string
splits:
- name: train
num_bytes: 29274283
num_examples: 27403
download_size: 7066839
dataset_size: 29274283
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
SaProtHub/Dataset-AAV-FLIP | SaProtHub | "2024-05-03T14:43:37Z" | 0 | 0 | [
"license:mit",
"region:us"
] | null | "2024-05-03T14:42:58Z" | ---
license: mit
---
# Description
AAV Prediction is a regression task where each input protein *x* is mapped to a label *y* ∈ *R* measuring the fitness score.
# Splits
**Structure type:** None
The dataset is from [**FLIP: Benchmark tasks in fitness landscape inference for proteins**](https://www.biorxiv.org/content/10.1101/2021.11.09.467890v2). We follow the original data splits from the "2-vs-rest" branch, with the sizes of the training, validation and test sets shown below:
- Train: 22246
- Valid: 2462
- Test: 50432
# Data format
All data are stored in LMDB format. The database is organized as follows:
**length:** The number of samples
**0:**
- **seq:** The structure-aware sequence
- **fitness:** fitness label of the sequence
**1:**
**···** |
krasserm/gba-trajectories | krasserm | "2024-05-31T14:09:23Z" | 0 | 0 | [
"croissant",
"region:us"
] | null | "2024-05-03T14:49:02Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: target
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 21224104
num_examples: 8579
- name: test
num_bytes: 11940
num_examples: 5
download_size: 8000136
dataset_size: 21236044
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
A synthetic dataset from an [agent simulation](https://github.com/krasserm/grammar-based-agents/tree/master/simulation) for [planner LLM fine-tuning](https://github.com/krasserm/grammar-based-agents/tree/master/train). More details are described in [Planner fine-tuning on synthetic agent trajectories](https://krasserm.github.io/2024/05/31/planner-fine-tuning/) and the [grammar-based-agents](https://github.com/krasserm/grammar-based-agents) project. This dataset was used to fine-tune [krasserm/gba-planner-7B-v0.1](https://huggingface.co/krasserm/gba-planner-7B-v0.1) and [krasserm/gba-planner-7B-v0.1-GGUF](https://huggingface.co/krasserm/gba-planner-7B-v0.1-GGUF). |
Kamyar-zeinalipour/TAC | Kamyar-zeinalipour | "2024-05-03T14:49:20Z" | 0 | 0 | [
"croissant",
"region:us"
] | null | "2024-05-03T14:49:16Z" | ---
dataset_info:
features:
- name: answer
dtype: string
- name: clue
dtype: string
- name: Source
dtype: string
- name: Date
dtype: string
splits:
- name: train
num_bytes: 14586686
num_examples: 187395
download_size: 6769639
dataset_size: 14586686
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Krish13/vinod_mmlutransformed_QnA_train_all | Krish13 | "2024-05-03T14:53:35Z" | 0 | 0 | [
"croissant",
"region:us"
] | null | "2024-05-03T14:53:30Z" | ---
dataset_info:
features:
- name: question
dtype: string
- name: subject
dtype: string
- name: choices
sequence: string
- name: answer
dtype: int64
- name: instruction
dtype: string
- name: response
dtype: string
splits:
- name: train
num_bytes: 343105569
num_examples: 115700
download_size: 104385689
dataset_size: 343105569
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
asoria/datasets_features_outputs | asoria | "2024-05-06T13:29:49Z" | 0 | 0 | [
"size_categories:n<1K",
"synthetic",
"distilabel",
"rlaif",
"croissant",
"region:us"
] | null | "2024-05-03T15:21:06Z" | ---
size_categories: n<1K
dataset_info:
features:
- name: dataset
dtype: string
- name: columns
dtype: string
- name: instruction
dtype: string
- name: generation_model
dtype: string
- name: generation
dtype: string
splits:
- name: train
num_bytes: 1080701
num_examples: 73
download_size: 493549
dataset_size: 1080701
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
tags:
- synthetic
- distilabel
- rlaif
---
<p align="left">
<a href="https://github.com/argilla-io/distilabel">
<img src="https://raw.githubusercontent.com/argilla-io/distilabel/main/docs/assets/distilabel-badge-light.png" alt="Built with Distilabel" width="200" height="32"/>
</a>
</p>
# Dataset Card for datasets_features_outputs
This dataset has been created with [distilabel](https://distilabel.argilla.io/).
## Dataset Summary
This dataset contains a `pipeline.yaml` which can be used to reproduce the pipeline that generated it in distilabel using the `distilabel` CLI:
```console
distilabel pipeline run --config "https://huggingface.co/datasets/asoria/datasets_features_outputs/raw/main/pipeline.yaml"
```
or explore the configuration:
```console
distilabel pipeline info --config "https://huggingface.co/datasets/asoria/datasets_features_outputs/raw/main/pipeline.yaml"
```
## Dataset structure
The examples have the following structure per configuration:
<details><summary> Configuration: default </summary><hr>
```json
{
"columns": "{\"text\": {\"dtype\": \"string\", \"_type\": \"Value\"}}",
"dataset": "huggingartists/bushido-zho",
"generation": "\n\nQuestion: Which words appear most frequently in the text column of the dataset?\n{\"question\": \"Which words appear most frequently in the text column of the dataset?\", \"sql_query\": \"SELECT word, COUNT(*) as frequency FROM (SELECT TRIM(REGEXP_SPLIT_TO_TABLE(text, \u0027\\s+\u0027)) as word FROM data) words GROUP BY word ORDER BY frequency DESC LIMIT 10\"}",
"generation_model": "mistralai/Mistral-7B-Instruct-v0.2",
"instruction": "You are a data analyst tasked with exploring a dataset named huggingartists/bushido-zho. Below is the dataset schema in SQL format along with a sample of 5 rows:\nCREATE TABLE \"data\"(\"text\" VARCHAR);\nSample rows:\n{\u0027text\u0027: \u0027\u0027}\n{\u0027text\u0027: \u0027...\u0442\u044f\u043d\u0435\u0442 \u0441\u0432\u043e\u0438 \u0440\u0443\u043a\u0438\\n\u0425\u043e\u0447\u0435\u0442 \u044d\u0442\u043e \u0432\u043d\u0430\u0442\u0443\u0440\u0435 \\n\u042d\u0442\u0430 \u0441\u0443\u043a\u0430 \u043d\u0435\u043f\u043e\u0441\u043b\u0443\u0448\u043d\u0430\u044f\\n\u0414\u0430, \u043e\u043d\u0430 \u0437\u043d\u0430\u0435\u0442, \u0447\u0442\u043e \u044f \u0431\u0443\u0434\u0443 \u043b\u0443\u0447\u0448\u0435\\u2005\\n\u0423 \u043c\u043e\u0438\u0445 \u0441\u0443\u0447\u0435\u043a\\u2005\u043f\u0438\u0437\u0434\u0430\u0442\u044b\u0435 \u0441\u0443\u043c\u043a\u0438\\n\u0412\u043d\u0443\u0442\u0440\u0438 \u044d\u0442\u0438\u0445 \u0441\u0443\u043c\u043e\u043a \u043e\u0433\u0440\u043e\u043c\u043d\u044b\u0435 \u0441\u0443\u043c\u043c\u044b\\u2005\\n\u041d\u0443-\u043a\u0430, \u043d\u0443-\u043a\u0430, \u043d\u0443-\u043a\u0430, \u043c\u044b \u0441\u0442\u0440\u043e\u0438\u043c\\n\u0414\u0430, \u043c\u044b \u0431\u0443\u0434\u0442\u043e \u0448\u0442\u0443\u043a\u0430\u0442\u0443\u0440\u043a\u0430\\n\u0412\u0441\u0435 \u0433\u0438\u043b\u044c\u0437\u044b \u043f\u0430\u0434\u0430\u044e\u0442 \u043d\u0430 \u043f\u043e\u043b\\n\u042f \u043a\u043b\u0430\u0434\u0443 \u0434\u0435\u043d\u044c\u0433\u0438 \u043d\u0430 \u0441\u0442\u043e\u043b \u0441\u0432\u043e\u0435\u0439 \u043c\u0430\u043c\u0435 \\n\u0417\u0434\u0435\u0441\u044c SEEMEE \u043f\u0430\u043f\u0430\\n\u0412\u0438\u0434\u0438\u0448\u044c \u043c\u0435\u043d\u044f, \u044f \u043f\u0443\u043b\u044f\u044e \u043a\u0430\u043a choppa \\n...\u0027}\n{\u0027text\u0027: \u0027Glizzy, what you cookin up?\\n\u041f\u043e-\u043f\u043e\\nBUSHIDO ZHO down away\\n\u042d\u0439, gang\\n\u0425\u0435\u0439\\n\u0425\u0430 
\\n\u0414\u0440\u0438\u043f \\n\u0414\u0440\u0438\u043f \\n\u0414\u0432\u0430-\u043d\u043e\u043b\u044c, \u043e\u0434\u0438\u043d\\n\u0415, \u0435, \u043e\u043a\u0435\u0439 wait, \u0435\u0449\u0435\\n\u041c\u043e\u0438 \u0424\u043e\u0440\u0441\u044b \u043d\u0430 \u043b\u0438\u0446\u043e \u0442\u0432\u043e\u0435\u0439 slatt \\n\u0421\u0443\u043a\u0430, \u043c\u043e\u0439 \u0434\u0440\u0438\u043f \u2014 \u043c\u044f\u0441\u0446\u043e , \u044f \u0442\u0440\u044d\u043f \u043f\u043e\u044d\u0442 \\nZHO \u043a\u0443\u043f\u0438\u043b \u0431\u044b Porsche , \u0434\u0430, \u044f \u043a\u0443\u0440\u044e \u0431\u043e\u0440\u0449 \\n\u0414\u0430\u043b \u0431\u043b\u044f\u0434\u0435 \u043b\u0435\u0449\u0430 , \u0442\u044b \u0432\u0435\u0434\u044c \u0437\u043d\u0430\u0435\u0448\u044c \\n\u0414\u0430, \u044f \u043d\u0430\u0445\u0430\u043b, \u044f \u0441\u043a\u0430\u0437\u0430\u043b: bust down \\n\u041c\u043e\u043d\u043e\u0442\u043e\u043d\u043d\u044b\u0439 voice \u2014 \u044d\u0442\u043e maintown \\n\u042f \u043a\u0443\u0440\u044e moonrock , \u043f\u0430\u0440\u0435\u043d\u044c, \u044d\u0442\u043e dope walk \\n\u041d\u0435 \u0431\u043e\u044f\u043b\u0441\u044f \u043d\u0430\u043f\u0438\u0441\u0430\u0442\u044c \u044f \u0442\u0435\u043a\u0441\u0442 \u043f\u0440\u043e hoes, \u043f\u0440\u043e\u0441\u0442\u0438\\n\u0414\u044f\u0434\u044f, \u0442\u044b \u0434\u0435\u0431\u0438\u043b, \u044f \u0442\u043e\u043f\u043b\u044e \u0437\u0430 \u0440\u0443\u0441\u0441\u043a\u0438\u0439 \u0434\u0440\u0438\u043b\u043b\\n\u0417\u043e\u043b\u043e\u043c\u0430\u043a\u0441, \u044f killah, \u0442\u0440\u0438 \u0431\u0430\u0440\u0430, \u043e\u0442\u0443\u043f\u0435\u043b, \u043d\u0435 \u0444\u0430\u0440\u043c\u0430\u0446\u0435\u0432\u0442\\nKillah on my way , killah on my way \\n\u0412\u043e \u043c\u043d\u0435 \u043e\u0447\u0435\u043d\u044c \u043c\u0430\u043b\u043e \u043f\u0430\u043c\u044f\u0442\u0438 \\n\u0422\u0432\u043e\u044f \u0441\u0443\u043a\u0430 \u0433\u043e\u0432\u043e\u0440\u0438\u0442 
\u043c\u043d\u0435: \u00ab\u0412\u044b \u0432\u0441\u0435\u0433\u043e \u043f\u0440\u0438\u044f\u0442\u0435\u043b\u0438\u00bb \\n\u041e\u0441\u0442\u0430\u043d\u0443\u0441\u044c \u0432 \u0435\u0451 \u043f\u0430\u043c\u044f\u0442\u0438 \\n\u041d\u0430\u0443\u0447\u0443 \u0442\u0440\u044d\u043f \u0433\u0440\u0430\u043c\u043e\u0442\u0435 \\n\u041c\u043e\u0438 \u0424\u043e\u0440\u0441\u044b \u043d\u0430 \u043b\u0438\u0446\u043e \u0442\u0432\u043e\u0435\u0439 slatt \\n\u041c\u043e\u0451 \u0438\u043c\u044f \u2014 ZHO, ZHO, ZHO, god damn \\nZHO \u043a\u0443\u043f\u0438\u043b \u0431\u044b Porsche , \u0434\u0430, \u044f \u043a\u0443\u0440\u044e \u0431\u043e\u0440\u0449 \\n\u0414\u0430\u043b \u0431\u043b\u044f\u0434\u0435 \u043b\u0435\u0449\u0430 , \u0442\u044b \u0432\u0435\u0434\u044c \u0437\u043d\u0430\u0435\u0448\u044c \\n\u0414\u0430, \u044f \u043d\u0430\u0445\u0430\u043b, \u044f \u0441\u043a\u0430\u0437\u0430\u043b bust down \\n\u041c\u043e\u043d\u043e\u0442\u043e\u043d\u043d\u044b\u0439 voice, \u044d\u0442\u043e \u043c\u043e\u0439 town \\nWha, gang, Zho-Zho \u043f\u0430\u0443-\u043f\u0430\u0443\\nHold on!\\nYeah\\nHold on!\\nYeah\\nHold on!\\nYeah\\nSlatt!\u0027}\n{\u0027text\u0027: \u0027\u041c\u044b \u0435\u0434\u0435\u043c \u043d\u0430 \u043c\u0430\u0448\u0438\u043d\u0430\u0445, \u0442\u043e\u043b\u044c\u043a\u043e \u043d\u0430 \u0433\u043e\u043d\u043e\u0447\u043d\u044b\u0445\\n\u0422\u043e\u043b\u044c\u043a\u043e \u043e\u0444\u0438\u0446\u0438\u0430\u043b\u044c\u043d\u043e, \u0432\u0441\u0451 \u043f\u0440\u043e\u0444\u0435\u0441\u0441\u0438\u043e\u043d\u0430\u043b\u044c\u043d\u043e\\n\u0415\u0434\u0443 \u043d\u0430 McLaren \\n\u0415\u0434\u0443 \u043f\u0440\u044f\u043c\\u2005\u043a\u0430\u043a\\u2005\u0432 \u0440\u0430\u043b\u043b\u0438 \\n\u0415\u0434\u0443\\u2005\u043f\u0440\u044f\u043c \u043d\u0430 \u043a\u0440\u0430\u0441\u043d\u044b\u0439 \\n\u0415\u0434\u0443 \u043a\u0430\u043a \u0432\\u2005\u043d\u0430\u0441\u043a\u0430\u0440\u0435 \\n\u0415\u0434\u0443 
\u043d\u0430 McLaren\\n\u0415\u0434\u0443 \u043d\u0430 McLaren \\n\u0415\u0434\u0443 \u043d\u0430 McLaren \\n\u0415\u0434\u0443 \u043d\u0430 McLaren \\n\u0415\u0434\u0443 \u043d\u0430 McLaren \u0431\u044b\u0441\u0442\u0440\u043e \u0442\u044b \u043d\u0435 \u043f\u043e\u0439\u043c\u0430\u0435\u0448\u044c\\n\u0415\u0434\u0443 \u043d\u0430 McLaren \u0442\u0435 \u043a\u0440\u0443\u0433\u0438 \u043d\u0430\u0432\u043e\u0440\u0430\u0447\u0438\u0432\u0430\u044e\\n\u0422\u044b \u043d\u0430\u0441 \u043d\u0435 \u043f\u043e\u0439\u043c\u0430\u0435\u0448\u044c \\n\u0421\u043e \u043c\u043d\u043e\u044e \u0442\u0430\u043f\u043a\u0438, \u0441\u043e \u043c\u043d\u043e\u044e \u0431\u0430\u0431\u043a\u0438\\n\u041c\u044b \u0437\u0430\u0440\u0430\u0431\u0430\u0442\u044b\u0432\u0430\u0435\u043c, \u044f \u0437\u0430\u0440\u0430\u0431\u043e\u0442\u0430\u043b \u043d\u0430 \u043f\u043e\u043b\u0451\u0442 \u0431\u0443\u0434\u0442\u043e \u0431\u044b \u043d\u0430 \u0442\u0440\u0430\u043f\u0435\\n\u0417\u0430\u0440\u0430\u0431\u043e\u0442\u0430\u043b \u043d\u0430 \u043f\u043e\u043b\u0451\u0442\u0430\u0445 \u0431\u0443\u0434\u0442\u043e \u0431\u044b \u0430\u0432\u0438\u0430\u043b\u0430\u0439\u043d\u0435\u0440 \\n\u041a\u0430\u043a \u0432\u0438\u0445\u0440\u044c \u043b\u0435\u0442\u0430\u0435\u043c \u044d\u0442\u043e \u043c\u0443\u0442\u0438\u0442\u0441\u044f \u0442\u0430\u043a \u0432\u043e\u0442 \\n\u042d\u0442\u043e Forza Motorsport, \u043d\u0438\u0433\u0433\u0430\\n\u0422\u0432\u043e\u0435\u0439 \u0441\u0443\u043a\u0435 \u043b\u0443\u0447\u0448\u0435 \u043f\u0440\u044b\u0433\u0430\u0442\u044c \u0437\u0430 \u0431\u043e\u0440\u0442, \u043f\u0438\u0434\u0440 \\n\u041a\u0440\u0443\u0433\u0430\u043b\u044f\u043d\u0430 \u0432\u0442\u043e\u0440\u043e\u0433\u043e \u0432\u0440\u0443\u0447\u0438\u043b \u0435\u0439\\n\u0422\u0440\u0430\u043f, \u0442\u0440\u0430\u043f \u0441\u043a\u043e\u043b\u044c\u0437\u0438\u043c \u043a\u0430\u043a \u043d\u0438\u043d\u0434\u0437\u044f\\n\u0415\u0434\u0443 \u043d\u0430 
McLaren \\n\u0415\u0434\u0443 \u043f\u0440\u044f\u043c \u043a\u0430\u043a \u0432 \u0440\u0430\u043b\u043b\u0438 \\n\u0415\u0434\u0443 \u043f\u0440\u044f\u043c \u043d\u0430 \u043a\u0440\u0430\u0441\u043d\u044b\u0439 \\n\u0415\u0434\u0443 \u043a\u0430\u043a \u0432 \u043d\u0430\u0441\u043a\u0430\u0440\u0435 \\n\u0415\u0434\u0443 \u043d\u0430 McLaren \\n\u0415\u0434\u0443 \u043d\u0430 McLaren \\n\u0415\u0434\u0443 \u043d\u0430 McLaren \\n\u0415\u0434\u0443 \u043d\u0430 McLaren \\n\u0415\u0434\u0443 \u043d\u0430 \u043c\u0430\u0448\u0438\u043d\u0435 , \u043d\u0430 \u0430\u0432\u0442\u043e\u043c\u043e\u0431\u0438\u043b\u0435 \\n\u0425\u0443\u043b\u0438 \u0433\u043d\u0438\u0434\u0430 \u0437\u044b\u0440\u0438\u0448\u044c , \u0432\u044b\u043a\u043e\u043b\u044e \u0442\u0435 \u0437\u044b\u0440\u043a\u0438 \\n\u0417\u0430 \u043c\u043d\u043e\u0439 \u0431\u0443\u0434\u0443\u0442 \u0434\u0430\u0433\u0438 \u043d\u0430 \u0437\u0430\u043d\u0438\u0436\u0435\u043d\u043d\u043e\u0439 \\n\u0422\u044b \u0443\u043d\u0438\u0436\u0435\u043d\u043d\u044b\u0439 \u043d\u0430 \u043e\u0441\u0442\u0430\u043d\u043e\u0432\u043a\u0435 \u044d\u0442\u043e \u0442\u0430\u043a \u043e\u0431\u0438\u0434\u043d\u043e \\n\u042f \u043f\u043e\u0433\u0440\u0443\u0436\u0435\u043d\u043d\u044b\u0439 \u0432 \u0441\u0435\u0431\u044f \u0438 \u0432\u0430\u0441 \u043d\u0435 \u0432\u0438\u0434\u043d\u043e\\n\u041d\u0430 \u043c\u043d\u0435 \u0432\u0438\u0434\u044b \u0440\u0430\u0437\u043d\u044b\u0445 \u0446\u0435\u043f\u0435\u0439 \u043f\u0440\u0438\u0441\u0442\u0430\u044e\u0442 \u043a \u043a\u0440\u0430\u0441\u0438\u0432\u044b\u043c \u043f\u043b\u0430\u043d\u0430\u043c\\n\u0415\u0434\u0443 \u043d\u0430 McLaren , \u0432\u0438\u0436\u0443 \u0432\u0441\u0451 \u043f\u0438\u0437\u0434\u0430\u0442\u043e \\n\u0422\u044b \u0437\u0430\u043b\u0443\u043f\u043e\u0439 \u043f\u0430\u0445\u043d\u0435\u0448\u044c , \u044f \u043f\u043e\u0434\u043e\u0431\u0435\u043d \u0437\u043b\u0430\u0442\u0443\\n\u0421 \u041a\u0430\u0440\u043b\u043e\u043c 
\u0412\u0435\u043b\u0438\u043a\u0438\u043c \u0443\u043a\u0440\u0430\u043b\u0438 \u043a\u043e\u0440\u0430\u043b\u043b\u044b\\n\u0415\u0434\u0443 \u043d\u0430 McLaren \u0433\u0434\u0435-\u0442\u043e \u0432 \u041a\u0430\u0440\u043f\u0430\u0442\u0430\u0445 \\n\u041f\u0430\u0440\u0442\u0438\u044f \u0434\u0435\u043d\u0435\u0433 \u2014 \u0431\u044b\u0441\u0442\u0440\u044b\u0439 \u043a\u043e\u0440\u0430\u0431\u043b\u044c \\n\u041f\u043e\u043f\u0430\u043b\u0438\u0441\u044c \u0432 \u0441\u0435\u0442\u0438 \u0432\u043a\u0443\u0441\u043d\u044b\u0435 \u043a\u0430\u043b\u044c\u043c\u0430\u0440\u044b \\n\u041a\u0443\u0441\u0430\u044e \u2014 \u0441\u043e\u043a \u043f\u0430\u0434\u0430\u0435\u0442 , \u0443 \u0432\u0430\u0441 \u0432\u0441\u0435\u0445 \u044d\u043a\u0437\u0430\u043c\u0435\u043d\u044b \\n\u0418 \u0441 \u043f\u043b\u0435\u0447 \u0446\u0435\u043f\u0438 \u043a\u0430\u043f\u0430\u0435\u0442, \u043d\u0430 \u0442\u0440\u0430\u0441\u0441\u0443 \u0431\u0440\u0438\u043b\u043b\u0438\u0430\u043d\u0442\u0430\u043c\u0438 \\n\u041f\u0440\u0438\u0441\u0442\u0438\u043b\u0438\u0441\u044c \u0441 \u043a\u0430\u043f\u043a\u0430\u043d\u0430\u043c\u0438 , \u0442\u0435\u043f\u0435\u0440\u044c \u043c\u044b \u043f\u043e\u0434 \u043f\u0430\u043b\u044c\u043c\u0430\u043c\u0438\\n\u0421\u0443\u043a\u0438 \u043e\u0434\u0435\u0432\u0430\u044e\u0442\u0441\u044f \u043d\u0430 \u0446\u0435\u0440\u0435\u043c\u043e\u043d\u0438\u044e , \u043c\u044b \u043d\u0435 \u043f\u043e\u0434\u0435\u043b\u0438\u043c\u0441\u044f \u043f\u043b\u0430\u043d\u0430\u043c\u0438\\n\u0415\u0434\u0443 \u043d\u0430 McLaren \\n\u0415\u0434\u0443 \u043f\u0440\u044f\u043c \u043a\u0430\u043a \u0432 \u0440\u0430\u043b\u043b\u0438 \\n\u0415\u0434\u0443 \u043f\u0440\u044f\u043c \u043d\u0430 \u043a\u0440\u0430\u0441\u043d\u044b\u0439 \\n\u0415\u0434\u0443 \u043a\u0430\u043a \u0432 \u043d\u0430\u0441\u043a\u0430\u0440\u0435 \\n\u0415\u0434\u0443 \u043d\u0430 McLaren\\n\u0415\u0434\u0443 \u043d\u0430 McLaren \\n\u0415\u0434\u0443 
\u043d\u0430 McLaren \\n\u0415\u0434\u0443 \u043d\u0430 McLaren \\nBitch, \u044f \u0434\u0435\u043b\u0430\u044e \u0440\u044b\u0432\u043e\u043a \u044d\u0442\u043e Forza Motorsport\\n\u0417\u0430\u0431\u0438\u0440\u0430\u0439 \u0441\u0432\u043e\u0438\u0445 \u0442\u0438\u043f\u043e\u0432 \u0441\u043a\u043e\u0440\u043e\u0441\u0442\u044c, \u0431\u043b\u044f, \u043d\u0435 \u0432\u0430\u0448 \u043a\u043e\u043d\u0451\u043a\\n\u042d\u0442\u043e Forza Motorsport, \u043c\u0438\u043a\u0441\u0443\u044e Nike \u0438 Tissot\\n\u0420\u0430\u0437\u0433\u043e\u043d\u044f\u044e\u0441\u044c \u043f\u043e\u0434 \u0434\u0432\u0435 \u0441\u043e\u0442\u043d\u0438, \u044f \u043d\u0435 \u0447\u0443\u0432\u0441\u0442\u0432\u0443\u044e \u043b\u0438\u0446\u043e\\n\u0412\u044b\u043a\u0443\u043f\u0430\u0439 \u0434\u0435\u0440\u044c\u043c\u043e, \u0432\u044b\u043a\u0443\u043f\u0430\u0439\u0442\u0435 \u043c\u043e\u0439 \u0431\u0430\u0437\u0430\u0440\\n\u0412\u044b\u0436\u0438\u043c\u0430\u044e \u0432 \u043f\u043e\u043b, bae, \u0437\u0430\u043a\u0440\u044b\u0432\u0430\u0439 \u0433\u043b\u0430\u0437\u0430\\n\u0413\u043e\u0440\u043e\u0434 \u043c\u0435\u043d\u044f \u043b\u044e\u0431\u0438\u0442, \u0438 \u044f \u0432 \u043d\u0451\u043c \u043d\u0430\u0448\u0451\u043b \u0441\u0435\u0431\u044f\\n\u041c\u043e\u044f \u0442\u0440\u0430\u0441\u0441\u0430 \u043c\u0438\u0440, \u044f \u043d\u0435 \u0434\u043e\u0435\u0434\u0443 \u0434\u043e \u043a\u043e\u043d\u0446\u0430-\u0430-\u0430\\nRalph Lauren Polo \u0441\u043f\u043e\u0440\u0442, \u044f \u0435\u0434\u0443 \u043e\u0447\u0435\u043d\u044c \u0431\u044b\u0441\u0442\u0440\u043e\\n\u041d\u0430 \u043c\u043e\u0438\u0445 \u0433\u043b\u0430\u0437\u0430\u0445 \u043e\u0447\u043a\u0438, \u043c\u043d\u0435 \u043f\u0440\u0438\u0432\u0435\u0437\u043b\u0438 \u0441 \u041f\u0430\u0440\u0438\u0436\u0430\\n\u0412\u043e\u0443, \u0432\u043e\u0443, \u0441\u0442\u0438\u043b\u044f \u0432\u044b\u0448\u0435 \u043a\u0440\u044b\u0448\u0438\\n\u0427\u0442\u043e, \u0447\u0442\u043e, 
\u0447\u0442\u043e \u0442\u044b \u0445\u043e\u0447\u0435\u0448\u044c \u0441\u043b\u044b\u0448\u0430\u0442\u044c\\n\u0427\u0442\u043e \u0442\u044b \u0445\u043e\u0447\u0435\u0448\u044c \u0441\u043b\u044b\u0448\u0430\u0442\u044c \u043e\u0442 \u043c\u0435\u043d\u044f \u043f\u0440\u0438\u0434\u0443\u0440\u043e\u043a\\n\u041c\u043d\u0435 \u043d\u0443\u0436\u043d\u0430 \u0442\u043e\u043b\u044c\u043a\u043e \u0432\u0430\u043b\u044e\u0442\u0430, \u0441\u0443\u043a\u0430, \u043d\u0435 \u043c\u0430\u0440\u0430\u044e \u0440\u0443\u043a\u0438\\n\u041d\u0430\u0432\u043e\u0436\u0443 \u043d\u0435\u043c\u043d\u043e\u0433\u043e \u0448\u0443\u043c\u0430, Whitener, \u041b\u044f\u043d\u0430 \u043f\u043b\u044e\u0441 best duo\\n\u041e\u0442\u043a\u0440\u044b\u0432\u0430\u044e \u0434\u0432\u0435\u0440\u0438 \u043a\u043b\u0443\u0431\u0430, \u043c\u043e\u0451 \u0444\u043b\u043e\u0443 \u0437\u043e\u0432\u0451\u0442\u0441\u044f \u0442\u0443\u0440\u0431\u043e\\n\u0412\u043e\u0443, \u0432\u043e\u0443, \u0432\u043e\u0443, \u0435\u0434\u0443 \u043d\u0430 McLaren\\nWhitener, Whitener, Whitener \u043f\u043b\u044e\u0441 Polyana\\n\u041b\u044f\u043d\u0430 \u043d\u0430 \u043d\u0430\u0441\u043a\u0430\u0440\u0435, \u0432\u043e\u0443, \u0432\u043e\u0443, \u0432\u043e\u0443\\n\u0401\u043f\u0442\u0430, \u0434\u0430\u043b\u044c\u0448\u0435 \u0441\u0430\u043c\u0438\\n\u0415\u0434\u0443 \u043d\u0430 McLaren \\n\u0415\u0434\u0443 \u043f\u0440\u044f\u043c \u043a\u0430\u043a \u0432 \u0440\u0430\u043b\u043b\u0438 \\n\u0415\u0434\u0443 \u043f\u0440\u044f\u043c \u043d\u0430 \u043a\u0440\u0430\u0441\u043d\u044b\u0439 \\n\u0415\u0434\u0443 \u043a\u0430\u043a \u0432 \u043d\u0430\u0441\u043a\u0430\u0440\u0435 \\n\u0415\u0434\u0443 \u043d\u0430 McLaren\\n\u0415\u0434\u0443 \u043d\u0430 McLaren \\n\u0415\u0434\u0443 \u043d\u0430 McLaren \\n\u0415\u0434\u0443 \u043d\u0430 McLaren \\nMB , SC \\nMelon music, Mellow Bite \\nMMMB, \u043f\u043e-\u043f\u043e-\u043f\u043e-\u043f\u043e \\n\u041c\u043d\u0435 
\u043f\u043b\u0435\u0432\u0430\u0442\u044c \u043d\u0430 \u0432\u0435\u0441\u044c \u0442\u0432\u043e\u0439 \u0442\u0440\u0430\u043f\u0447\u0438\u043a\\n\u042d\u0442\u043e \u0448\u0443\u0442\u043a\u0438 \u043d\u0438\u0433\u0433\u0430, \u044f \u0442\u044f \u0448\u0430\u0442\u043d\u0443\u043b, \u043d\u0438\u0433\u0433\u0430\\n\u041d\u0430 \u043d\u043e\u0433\u0430\u0445 \u0442\u0430\u043f\u043a\u0438, \u0442\u0438\u043f\u043e \u043a\u0430\u043a Forged\\n\u041e\u043f\u043f\u044b \u043d\u0435 \u0440\u044f\u0434\u043e\u043c, \u043c\u044b \u043d\u0435 \u0432 \u041a\u0430\u043c\u0431\u043e\u0434\u0436\u0435\\n\u0415\u0434\u0443 \u043d\u0430 McLaren, gang \u0438 \u043d\u0430\u043c \u043c\u043e\u0436\u043d\u043e\\n\u0423 \u043f\u043b\u0430\u043d\u043a\u0442\u043e\u043d\u0430 Curren, \u0443 \u043d\u0438\u0445 \u0432\u0441\u0451 \u0441\u043b\u043e\u0436\u043d\u043e\\n\u042f, \u044f, \u0443 \u043c\u0435\u043d\u044f Forged\\nLil nigga, \u0437\u043e\u0432\u0443 \u0442\u0435\u0431\u044f \u0441\u0438\u0442\u043e\\n\u0422\u044b \u0434\u044b\u0440\u044f\u0432\u044b\u0439, \u043e\u0442 \u0447\u0435\u0433\u043e \u043d\u0435 \u0432\u0438\u0434\u043d\u043e\\n\u0412\u0440\u043e\u0434\u0435 \u0442\u044b \u043a\u0430\u043a \u0441\u043f\u0430\u043d\u0447 \u0431\u043e\u0431 \u044d\u0442\u043e \u043e\u0447\u0435\u0432\u0438\u0434\u043d\u043e \\n\u0418 \u0442\u0435\u0431\u0435 \u043d\u0435 \u0434\u043e\u0433\u043d\u0430\u0442\u044c \u043c\u0435\u043d\u044f\\n\u0422\u0432\u043e\u0438 \u0433\u043b\u0430\u0437\u0430 \u0432\u0438\u0434\u044f\u0442 \u0442\u0430\u0447\u043a\u0443, \u044f \u0443\u0436\u0435 \u0443\u0435\u0445\u0430\u043b, \u0434\u0430\\n\u0422\u0435\u0431\u0435 \u043d\u0435 \u0443\u0433\u043d\u0430\u0442\u044c \u0437\u0430 \u0441\u0432\u044d\u0433\u043e\u043c \u044d\u0442\u043e \u0442\u043e\u0447\u043d\u043e\\nMcLaren Forged, \u0431\u0443\u0434\u0435\u0448\u044c \u043e\u0441\u0442\u043e\u0440\u0436\u043d\u0435\u0439\\n\u0411\u0438\u0433 \u0431\u043e\u0439 \u043a\u0430\u043a 
\u043c\u0430\u0434\u0430\u0440\u0430, 163 \u2014 \u0433\u0430\u0430\u0440\u0430\\n\u041c\u043e\u0439 \u0432\u0435\u0441\u044c gang, whole lotta\\n223, whole lotta \\n\u0415\u0434\u0443 \u043d\u0430 McLaren \\n\u0415\u0434\u0443 \u043f\u0440\u044f\u043c \u043a\u0430\u043a \u0432 \u0440\u0430\u043b\u043b\u0438 \\n\u0415\u0434\u0443 \u043f\u0440\u044f\u043c \u043d\u0430 \u043a\u0440\u0430\u0441\u043d\u044b\u0439 \\n\u0415\u0434\u0443 \u043a\u0430\u043a \u0432 \u043d\u0430\u0441\u043a\u0430\u0440\u0435 \\n\u0415\u0434\u0443 \u043d\u0430 McLaren\\n\u0415\u0434\u0443 \u043d\u0430 McLaren \\n\u0415\u0434\u0443 \u043d\u0430 McLaren \\n\u0415\u0434\u0443 \u043d\u0430 McLaren\u0027}\n{\u0027text\u0027: \u0027\u041c\u043e\u0438 \u043f\u0430\u0446\u0430\u043d\u044b \u0440\u0435\u0430\u043b\u044c\u043d\u043e \u043d\u043e\u0441\u044f\u0442 stick\u0438\\n\u0422\u0432\u043e\u0438 \u0434\u0435\u0432\u043e\u0447\u043a\u0438 \u0441\u0442\u0440\u0435\u043b\u044f\u044e\u0442 \u0442\u043e\u043b\u044c\u043a\u043e \u0441\u0438\u0433\u0438\\n\u0422\u0432\u043e\u044f \u0448\u043b\u044e\u0445\u0430 \u0441\u0435\u0431\u044f \u043f\u0440\u043e\u0434\u0430\u0451\u0442 \u043a\u0430\u043a \u0448\u0432\u0430\u0431\u0440\u0430\\n\u0421\u0443\u043a\u0430, \u0441\u043e\\u2005\u043c\u043d\u043e\u0439\\u2005Zho Bushido, \u043f\u0440\u0430\u0432\u0434\u0430\\n\u0422\u044b\\u2005\u0432\u043d\u0430\u0442\u0443\u0440\u0435 \u043a\u043b\u043e\u0443\u043d, \u0442\u044b \u0432\u043d\u0430\u0442\u0443\u0440\u0435 \u0441\u043b\u043e\u043c\u0430\u043d\\n\u0422\u044b \u0432\u043d\u0430\u0442\u0443\u0440\u0435\\u2005\u043f\u043e\u0439\u043c\u0430\u043d, \u0442\u044b \u0449\u0430 \u043d\u0430\u0445\u0443\u0439 \u043f\u043e\u0441\u043b\u0430\u043d\\n\u041c\u044b \u0442\u0435\u0431\u044f \u043d\u0435 \u0438\u0449\u0435\u043c, \u0432\u0435\u0434\u044c \u0442\u044b \u043d\u0430\u043c \u043d\u0435 \u043d\u0443\u0436\u0435\u043d\\n\u042f \u0442\u0435\u0431\u044f \u043d\u0435 \u0441\u043b\u044b\u0448\u0443, 
\u043a\u0430\u043a \u0432 \u043f\u0443\u0441\u0442\u044b\u043d\u0435 \u0441\u0443\u0448\u0430\\n\u0422\u044b \u0448\u043b\u044e\u0445\u0430 \u2014 \u043c\u0435\u043d\u044f\u0435\u0448\u044c \u0442\u0440\u0430\u043f \u0441\u0432\u043e\u0438\u0445 \u0447\u0435\u043b\u043e\u0432 \u043d\u0430 \u0442\u0451\u043b\u043a\u0443\\n\u0412 \u0442\u0432\u043e\u0451\u043c \u0434\u043e\u043c\u0435 \u0438\u0437 \u0440\u0430\u0441\u0442\u0435\u043d\u0438\u0439 \u0433\u043e\u0440\u0438\u0442 \u0442\u043e\u043b\u044c\u043a\u043e \u0451\u043b\u043a\u0430\\n\u0422\u044b \u043d\u0435 \u0433\u044d\u043d\u0433\u0441\u0442\u0430, \u0442\u044b \u0431\u043e\u0436\u044c\u044f \u043a\u043e\u0440\u043e\u0432\u043a\u0430\\n\u0422\u0432\u043e\u0439 \u0441\u0432\u044d\u0433 \u0432 \u043a\u0440\u043e\u0441\u0441\u043e\u0432\u043a\u0430\u0445\\n\u041e\u0442 \u0433\u0440\u044f\u0437\u043d\u043e\u0433\u043e \u0442\u0440\u0430\u043f\u0430 \u043d\u0430 \u044f\u0437\u044b\u043a\u0435 \u044f\u0437\u0432\u044b\\n\u041a\u0442\u043e \u0441\u043b\u0438\u043b \u043c\u043e\u0435\u0433\u043e \u043a\u043e\u0440\u0435\u0448\u0430? 
\u0414\u0430, \u044d\u0442\u043e \u044f \u0441\u043b\u0438\u043b\\n\u0422\u0451\u043b\u043a\u0435 \u0431\u044b\u043b\u043e \u0434\u0432\u0430\u0434\u0446\u0430\u0442\u044c, \u044f \u0433\u043e\u043d\u044f\u043b \u0432 \u044f\u0441\u043b\u0438\\n\u041d\u0430\u0448 \u0442\u0440\u0430\u043f \u0431\u0443\u0434\u0443\u0442 \u0442\u0440\u0430\u043f\u0438\u0442\u044c, \u0441\u043d\u043e\u0432\u0430 \u0432\u0441\u0435\u043c \u044f\u0441\u043d\u043e, \u0430\\n\u041a\u0430\u043a\u043e\u0439 \u0442\u044b \u043d\u0430\u0445\u0443\u0439 \u0445\u0430\u0441\u043b\u0435\u0440?\\n\u0422\u044b \u0442\u043e\u043b\u043a\u0430\u0435\u0448\u044c \u0442\u0440\u0430\u0432\u043a\u0443 \u0442\u043e\u043b\u044c\u043a\u043e \u0432 \u0413\u0422\u0410\\n\u0422\u044b \u0442\u0443\u043f\u0438\u0448\u044c \u0442\u0430\u043a \u0441\u0438\u043b\u044c\u043d\u043e\\n\u0422\u044b \u0442\u0443\u043f\u0438\u0448\u044c \u043a\u0430\u043a \u0431\u0443\u0434\u0442\u043e \u0434\u0432\u0430 \u043b\u043e\u0445\u0430\\n\u041a\u0430\u043a\u0438\u0435 \u043d\u0430\u0445\u0443\u0439...\u0027}\nYour goal is to generate one potential questions that a user might want to ask about this dataset. Consider the information contained in the provided columns and rows, and try to think of a meaningful question that could provide insights or useful information. For each question, provide the SQL query that would extract the relevant information from the dataset.\n\nOuput JSON format:\n{\"question\": \"[Insert question here]\", \"sql_query\": \"[Insert SQL query here]\"}\n\nPlease ensure that the SQL query retrieves relevant information from the dataset to answer the corresponding question accurately.\nReturn only the JSON object, do not add extra information."
}
```
This subset can be loaded as:
```python
from datasets import load_dataset
ds = load_dataset("asoria/datasets_features_outputs", "default")
```
Or simply as it follows, since there's only one configuration and is named `default`:
```python
from datasets import load_dataset
ds = load_dataset("asoria/datasets_features_outputs")
```
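Each row's `generation` pairs a natural-language question with a SQL query, with free text prepended before the JSON object (as in the sample row above). A hedged sketch of extracting the JSON payload (the example string content is illustrative, not a real row):

```python
import json

# Toy generation string shaped like the sample row above (hypothetical content).
raw = ('\n\nQuestion: Which words appear most frequently?\n'
       '{"question": "Which words appear most frequently?", '
       '"sql_query": "SELECT word FROM data"}')

# Generations prepend free text before the JSON object; keep the trailing JSON.
payload = json.loads(raw[raw.index("{"):])
print(payload["sql_query"])
```

This simple slice assumes the first `{` in the string opens the JSON object; a more defensive parser would scan from the last newline or use a JSON decoder with `raw_decode`.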
</details>
|
SaProtHub/Dataset-Subcellular_Localization-DeepLoc | SaProtHub | "2024-05-06T11:28:06Z" | 0 | 0 | [
"license:mit",
"region:us"
] | null | "2024-05-03T15:21:39Z" | ---
license: mit
---
# Description
Subcellular Localization prediction is a 10-class classification task that predicts where a protein is located in the cell; each input protein *x* is mapped to a label *y* ∈ {0, 1, ..., 9}.
The numeric labels correspond to:
- 0: Nucleus
- 1: Cytoplasm
- 2: Extracellular
- 3: Mitochondrion
- 4: Cell.membrane
- 5: Endoplasmic.reticulum
- 6: Plastid
- 7: Golgi.apparatus
- 8: Lysosome/Vacuole
- 9: Peroxisome
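For convenience, the mapping above can be transcribed into a small lookup table (a sketch for downstream use; the compartment names are copied verbatim from this card):

```python
# Label-to-compartment lookup, transcribed from the list above.
SUBCELLULAR_LABELS = {
    0: "Nucleus",
    1: "Cytoplasm",
    2: "Extracellular",
    3: "Mitochondrion",
    4: "Cell.membrane",
    5: "Endoplasmic.reticulum",
    6: "Plastid",
    7: "Golgi.apparatus",
    8: "Lysosome/Vacuole",
    9: "Peroxisome",
}

print(SUBCELLULAR_LABELS[3])  # -> Mitochondrion
```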
# Splits
**Structure type:** AF2
The dataset is from [**DeepLoc: prediction of protein subcellular localization using deep learning**](https://academic.oup.com/bioinformatics/article/33/21/3387/3931857). We use all proteins for which AF2 structures are available (proteins lacking AF2 structures are removed) and split them at 70% structure similarity (see [ProteinShake](https://github.com/BorgwardtLab/proteinshake/tree/main)); the sizes of the training, validation, and test sets are shown below:
- Train: 10414
- Valid: 1368
- Test: 1368
# Data format
All data are organized in LMDB format. The database is structured as follows:
**length:** The number of samples
**0:**
- **name:** The UniProt ID of the protein
- **seq:** The structure-aware sequence
- **plddt**: pLDDT values at all positions
- **label:** classification label of the sequence
**1:**
**···** |
SaProtHub/Dataset-Binary_Localization-DeepLoc | SaProtHub | "2024-05-06T11:27:18Z" | 0 | 0 | [
"license:mit",
"region:us"
] | null | "2024-05-03T15:22:54Z" | ---
license: mit
---
# Description
Binary Localization prediction is a binary classification task where each input protein *x* is mapped to a label *y* ∈ {0, 1}, corresponding to either "membrane-bound" or "soluble".
The numeric labels correspond to:
- 0: membrane-bound
- 1: soluble
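As a sketch of what a single record looks like once deserialized from the LMDB database (see the Data format section below; the field values here are made up for illustration):

```python
# Hypothetical record mirroring the layout described in this card.
record = {
    "name": "P12345",             # UniProt ID (made-up example)
    "seq": "MdEvLaKpG",           # structure-aware sequence (illustrative)
    "plddt": [88.2, 90.1, 76.5],  # pLDDT values at each position
    "label": 1,                   # classification label
}

BINARY_LABELS = {0: "membrane-bound", 1: "soluble"}
print(BINARY_LABELS[record["label"]])  # -> soluble
```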
# Splits
**Structure type:** AF2
The dataset is from [**DeepLoc: prediction of protein subcellular localization using deep learning**](https://academic.oup.com/bioinformatics/article/33/21/3387/3931857). We use all proteins for which AF2 structures are available (proteins lacking AF2 structures are removed) and split them at 70% structure similarity (see [ProteinShake](https://github.com/BorgwardtLab/proteinshake/tree/main)); the sizes of the training, validation, and test sets are shown below:
- Train: 6707
- Valid: 698
- Test: 807
# Data format
All data are organized in LMDB format. The database is structured as follows:
**length:** The number of samples
**0:**
- **name:** The UniProt ID of the protein
- **seq:** The structure-aware sequence
- **plddt**: pLDDT values at all positions
- **label:** classification label of the sequence
**1:**
**···** |
kamilakesbi/synthetic_dataset_jpn_2_less_overlap | kamilakesbi | "2024-05-03T15:51:07Z" | 0 | 0 | [
"croissant",
"region:us"
] | null | "2024-05-03T15:42:10Z" | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: speakers
sequence: string
- name: timestamps_start
sequence: float64
- name: timestamps_end
sequence: float64
splits:
- name: train
num_bytes: 54365206.0
num_examples: 30
download_size: 44850036
dataset_size: 54365206.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
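A back-of-the-envelope check of the split metadata above: with 30 examples totalling ~54.4 MB at a 16 kHz sampling rate, each clip averages roughly a minute of audio. This sketch assumes 16-bit PCM mono; the actual on-disk encoding may differ, so treat the result as an estimate only:

```python
# Rough average clip duration implied by the split metadata above.
num_bytes = 54_365_206
num_examples = 30
sampling_rate = 16_000
bytes_per_sample = 2  # assumption: 16-bit PCM, mono

avg_bytes = num_bytes / num_examples
avg_seconds = avg_bytes / (sampling_rate * bytes_per_sample)
print(round(avg_seconds, 1))  # -> 56.6
```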
|