datasetId | author | last_modified | downloads | likes | tags | task_categories | createdAt | card
---|---|---|---|---|---|---|---|---|
mask-distilled-libri-one-sec-cv12/chunk_77 | mask-distilled-libri-one-sec-cv12 | "2024-05-09T02:19:17Z" | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:audio",
"modality:timeseries",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-05-09T02:19:09Z" | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: logits
sequence: float32
splits:
- name: train
num_bytes: 204441040.12812355
num_examples: 6377
download_size: 177590726
dataset_size: 204441040.12812355
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
justice-416/llama3-tt01 | justice-416 | "2024-05-09T02:45:48Z" | 0 | 0 | [
"license:apache-2.0",
"size_categories:n<1K",
"format:text",
"modality:text",
"library:datasets",
"library:mlcroissant",
"region:us"
] | null | "2024-05-09T02:25:47Z" | ---
license: apache-2.0
---
|
mask-distilled-libri-one-sec-cv12/chunk_81 | mask-distilled-libri-one-sec-cv12 | "2024-05-09T02:26:20Z" | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:audio",
"modality:timeseries",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-05-09T02:26:10Z" | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: logits
sequence: float32
splits:
- name: train
num_bytes: 246310257.39476988
num_examples: 7683
download_size: 211683419
dataset_size: 246310257.39476988
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
qiqiqiqiqiq/test4 | qiqiqiqiqiq | "2024-05-09T02:40:59Z" | 0 | 0 | [
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-05-09T02:40:38Z" | ---
license: apache-2.0
---
|
Thauan12/rezendeevil | Thauan12 | "2024-05-12T00:28:42Z" | 0 | 0 | [
"license:openrail",
"region:us"
] | null | "2024-05-09T02:41:27Z" | ---
license: openrail
---
|
justice-416/llama3-002 | justice-416 | "2024-05-09T02:50:42Z" | 0 | 0 | [
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-05-09T02:50:09Z" | ---
license: apache-2.0
---
|
YYYYYYibo/ultrafeedback_binarized_doff_real_no_golden_train_part_2_mini_3 | YYYYYYibo | "2024-05-09T02:52:31Z" | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-05-09T02:52:28Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: prompt_id
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: score_chosen
dtype: float64
- name: score_rejected
dtype: float64
- name: reference_response
dtype: string
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train_prefs
num_bytes: 40300954
num_examples: 4850
download_size: 22391996
dataset_size: 40300954
configs:
- config_name: default
data_files:
- split: train_prefs
path: data/train_prefs-*
---
# Dataset Card for "ultrafeedback_binarized_doff_real_no_golden_train_part_2_mini_3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
YYYYYYibo/ultrafeedback_binarized_doff_real_no_golden_train_part_2_mini_2 | YYYYYYibo | "2024-05-09T02:52:36Z" | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-05-09T02:52:34Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: prompt_id
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: score_chosen
dtype: float64
- name: score_rejected
dtype: float64
- name: reference_response
dtype: string
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train_prefs
num_bytes: 39928251
num_examples: 4850
download_size: 22115477
dataset_size: 39928251
configs:
- config_name: default
data_files:
- split: train_prefs
path: data/train_prefs-*
---
# Dataset Card for "ultrafeedback_binarized_doff_real_no_golden_train_part_2_mini_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
YYYYYYibo/ultrafeedback_binarized_doff_real_no_golden_train_part_2_mini_1 | YYYYYYibo | "2024-05-09T02:52:58Z" | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-05-09T02:52:55Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: prompt_id
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: score_chosen
dtype: float64
- name: score_rejected
dtype: float64
- name: reference_response
dtype: string
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train_prefs
num_bytes: 40878580
num_examples: 4850
download_size: 22657434
dataset_size: 40878580
configs:
- config_name: default
data_files:
- split: train_prefs
path: data/train_prefs-*
---
# Dataset Card for "ultrafeedback_binarized_doff_real_no_golden_train_part_2_mini_1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
YYYYYYibo/ultrafeedback_binarized_doff_real_no_golden_train_part_2_mini_0 | YYYYYYibo | "2024-05-09T02:53:10Z" | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-05-09T02:53:08Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: prompt_id
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: score_chosen
dtype: float64
- name: score_rejected
dtype: float64
- name: reference_response
dtype: string
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train_prefs
num_bytes: 40588036
num_examples: 4850
download_size: 22609569
dataset_size: 40588036
configs:
- config_name: default
data_files:
- split: train_prefs
path: data/train_prefs-*
---
# Dataset Card for "ultrafeedback_binarized_doff_real_no_golden_train_part_2_mini_0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
YYYYYYibo/ultrafeedback_binarized_doff_real_no_golden_train_part_2 | YYYYYYibo | "2024-05-09T02:53:26Z" | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-05-09T02:53:22Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: prompt_id
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: score_chosen
dtype: float64
- name: score_rejected
dtype: float64
- name: reference_response
dtype: string
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train_prefs
num_bytes: 161695821
num_examples: 19400
download_size: 89823496
dataset_size: 161695821
configs:
- config_name: default
data_files:
- split: train_prefs
path: data/train_prefs-*
---
# Dataset Card for "ultrafeedback_binarized_doff_real_no_golden_train_part_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
0x7o/quality-dataset | 0x7o | "2024-05-09T03:13:50Z" | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-05-09T03:13:24Z" | ---
dataset_info:
features:
- name: label
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 641696533.8850998
num_examples: 52337
- name: test
num_bytes: 160433329.11490017
num_examples: 13085
download_size: 389335852
dataset_size: 802129863.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
ngoc/protonx-pokemon-challenge | ngoc | "2024-05-09T03:40:51Z" | 0 | 0 | [
"license:apache-2.0",
"region:us"
] | null | "2024-05-09T03:40:51Z" | ---
license: apache-2.0
---
|
recursal/SuperWiki-1.5 | recursal | "2024-06-10T12:17:53Z" | 0 | 0 | [
"task_categories:text-generation",
"task_categories:fill-mask",
"task_ids:language-modeling",
"task_ids:masked-language-modeling",
"annotations_creators:no-annotation",
"language_creators:crowdsourced",
"source_datasets:original",
"language:ar",
"language:de",
"language:en",
"language:es",
"language:fa",
"language:fr",
"language:he",
"language:hi",
"language:id",
"language:it",
"language:ja",
"language:ko",
"language:nl",
"language:pl",
"language:pt",
"language:ru",
"language:sv",
"language:tr",
"language:uk",
"language:vi",
"language:zh",
"license:cc-by-sa-3.0",
"region:us"
] | [
"text-generation",
"fill-mask"
] | "2024-05-09T03:45:56Z" | ---
annotations_creators:
- no-annotation
language_creators:
- crowdsourced
license:
- cc-by-sa-3.0
task_categories:
- text-generation
- fill-mask
task_ids:
- language-modeling
- masked-language-modeling
source_datasets:
- original
language:
- ar
- de
- en
- es
- fa
- fr
- he
- hi
- id
- it
- ja
- ko
- nl
- pl
- pt
- ru
- sv
- tr
- uk
- vi
- zh
pretty_name: SuperWIKI v1.5
---
## Dataset Details
![](SuperWikipedia-15.png "Based off from Wikipe-tan (Maid, cyan hair, short hair) and Wikipedia's globe logo.")
*Waifu to catch your attention.*
### Dataset Description
*SuperWIKI-1.5* is a multilingual dataset of Wikipedia articles curated from Wikipedia HTML dumps, totaling **~18.23B** tokens (llama-2-7b-chat tokenizer) / **~15.17B** tokens (RWKV tokenizer).
It serves as a training resource for large language models and other NLP tasks.
This card details the dataset's origin, content, and limitations.
- **Curated by:** KaraKaraWitch
- **Funded by:** Recursal.ai (I work there lol)
- **Shared by:** KaraKaraWitch
- **Language(s) (NLP):** Multiple (see Supported Languages)
- **License:** cc-by-sa-3.0
SuperWIKI-1.5 was created under time constraints for the release of [EagleX v1](https://huggingface.co/recursal/EagleX_1-7T_HF), and may contain biases in selection.
**We highly recommend using the improved version, *SuperWikipedia-NEXT* instead.**
### Dataset Summary
Wikipedia dataset containing cleaned articles in the languages listed below.
The dataset is manually built from Wikipedia HTML dumps, with one split per language.
Each example contains the content of one full Wikipedia article.
### Supported Tasks and Leaderboards
Primarily used for language modeling.
### Languages
The dataset includes Wikipedia articles from the following languages:
```
ar.wikipedia.org
de.wikipedia.org
en.wikipedia.org
es.wikipedia.org
fa.wikipedia.org
fr.wikipedia.org
he.wikipedia.org
hi.wikipedia.org
id.wikipedia.org
it.wikipedia.org
ja.wikipedia.org
ko.wikipedia.org
nl.wikipedia.org
pl.wikipedia.org
pt.wikipedia.org
ru.wikipedia.org
simple.wikipedia.org
sv.wikipedia.org
tr.wikipedia.org
uk.wikipedia.org
vi.wikipedia.org
zh.wikipedia.org
```
*`.wikipedia.org`* extensions have been added for your convenience.
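If you need to map one of these domains back to the dataset's `lang` field, the value is simply the subdomain, with one exception noted in the Data Fields section: `simple.wikipedia.org` articles carry `lang: en`. A minimal sketch (the helper name is ours, not part of the dataset):

```python
def lang_from_domain(domain: str) -> str:
    """Map a wiki domain to this dataset's `lang` field value."""
    subdomain = domain.removesuffix(".wikipedia.org")
    # Simple English Wikipedia is tagged "en" in this dataset.
    return "en" if subdomain == "simple" else subdomain

print(lang_from_domain("ja.wikipedia.org"))      # -> ja
print(lang_from_domain("simple.wikipedia.org"))  # -> en
```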
### Selection Bias
Unlike SuperWikipedia-NEXT, the language selection in SuperWIKI-1.5 was manual and may favor certain languages (e.g., CJK and European languages).
### Filtering
The filtering process is documented in code but not well-organized. We recommend reviewing the code directly for details.
### Data Instances
Refer to this sample to see all the fields:
```json
{
"id": 4024053,
"title": "Tharman Shanmugaratnam",
"url": "https://en.wikipedia.org/wiki/Tharman_Shanmugaratnam",
"stub": false,
"template": [
"Efn",
"C-SPAN",
"S-aft",
"S-new",
"Reflist",
"Cite news",
"S-par",
"Cite journal",
"Short description",
"EngvarB"
],
"category": [
"Finance ministers of Singapore",
"Singaporean Hindus",
"Alumni of Wolfson College, Cambridge",
"Deputy Prime Ministers of Singapore",
"Ministers for Manpower of Singapore",
"Presidents of Singapore",
"Singaporean people of Sri Lankan descent",
"Singaporean people of Tamil descent",
"Articles with WorldCat Entities identifiers",
"Articles with GND identifiers",
    "Articles with VIAF identifiers"
],
"license": [
"Creative Commons Attribution Share Alike 3.0 Unported"
],
"wikitext": "<...TRUNCATED SAMPLE...> '''Tharman Shanmugaratnam'''{{efn|{{lang-ta|தர்மன் சண்முகரத்தினம்}}}} (born 25 February 1957), also known [[mononymously]] as '''Tharman''', is a Singaporean politician and economist who has been serving as the ninth [[president of Singapore]] since 2023 after winning the [[2023 Singaporean presidential election|2023 presidential election]].\n\nPrior to his presidency, Tharman served as [[Senior Minister of Singapore]] between 2019 and 2023, [[Coordinating Minister for Social Policies (Singapore)|Coordinating Minister for Social Policies]] between 2015 and 2023, and Chairman of the [[Monetary Authority of Singapore]] between 2011 and 2023.<ref name=\"Parliament Profile\"/> \n\nTharman is an economist in roles principally related to economic and social policies. He has also led various international councils and panels simultaneously. Tharman chairs the Board of Trustees of the [[Group of Thirty]], a global council of economic and financial leaders from the public and private sectors and academia. He also co-chairs the Global Commission on the Economics of Water with [[Ngozi Okonjo-Iweala|Ngozi Owonjo-Iweala]], [[Mariana Mazzucato]] and [[Johan Rockström]]. Its initial recommendations helped shape the outcomes of the UN Water Conference in March 2023. Tharman has also been co-chair of the [[G20]] High Level Independent Panel on Global Financing for Pandemic Preparedness and Response since 2021. In 2017, Tharman was appointed to chair the G20 Eminent Persons Group on Global Financial Governance.\n\nA former member of the governing [[People's Action Party]] (PAP), he was the... <...TRUNCATED SAMPLE...>",
"lang": "en",
"abstract": "Tharman Shanmugaratnam, also known mononymously as Tharman, is a Singaporean politician and economist who has been serving as the ninth president of Singapore since 2023. Prior to his presidency, Tharman served as Senior Minister of Singapore between 2019 and 2023, Coordinating Minister for Social Policies between 2015 and 2023, and Chairman of the Monetary Authority of Singapore between 2011 and 2023. Tharman is an economist in roles principally related to economic and social policies. He has also led various international councils and panels simultaneously. Tharman chairs the Board of Trustees of the Group of Thirty, a global council of economic and financial leaders from the public and private sectors and academia. He also co-chairs the Global Commission on the Economics of Water with Ngozi Owonjo-Iweala, Mariana Mazzucato and Johan Rockström. Its initial recommendations helped shape the outcomes of the UN Water Conference in March 2023. Tharman has also been co-chair of the G20 High Level Independent Panel on Global Financing for Pandemic Preparedness and Response since 2021. In 2017, Tharman was appointed to chair the G20 Eminent Persons Group on Global Financial Governance. <...TRUNCATED SAMPLE...>",
"boxes_filters": [],
"infobox_html": [
"<...TRUNCATED SAMPLE...>"
],
"figures_dict": [
{
"file_url": "./File:Mr_Tharman_at_Bloomberg_New_Economy_Forum.jpg",
"caption": ""
}
],
"text": "9th President of Singapore\n\n\n**Tharman Shanmugaratnam** (born 25 February 1957), also known mononymously as **Tharman**, is a Singaporean politician and economist who has been serving as the ninth president of Singapore since 2023. Prior to his presidency, Tharman served as Senior Minister of Singapore between 2019 and 2023, Coordinating Minister for Social Policies between 2015 and 2023, and Chairman of the Monetary Authority of Singapore between 2011 and 2023.\n\nTharman is an economist in roles principally related to economic and social policies. He has also led various international councils and panels simultaneously. <...TRUNCATED SAMPLE...>"
}
```
### Data Fields
`id`: The article ID.
`title`: The title of the Wikipedia article.
`url`: The URL of the article.
`stub`: A boolean flag marking the article as a stub, derived from the CSS selector `.metadata.plainlinks`.
`template`: A list of templates found in the article.
`category`: A list of categories the article belongs to.
`license`: The license(s) for the article.
`wikitext`: The raw wikitext. Not used for training, but kept as a reference.
`lang`: The language code. Matches the wiki's language (for `simplewiki` it is `en`).
`abstract`: A plain-text abstract of the article.
`boxes_filters`: Also known as `rituals` in the original SuperWIKI. These are extracted from the CSS selectors `.ombox` and `.ambox`.
`infobox_html`: A list of side infoboxes extracted out of the text.
`figures_dict`: A list of figures used in the article, also extracted out of the text.
`text`: The post-processed HTML converted to Markdown, with links removed and formatting (bold, italics) kept. This is the field you most likely want for LLM training.
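As a worked example, selecting the training-ready Markdown while skipping stub articles might look like this (a sketch over plain in-memory records, not any dataset-library API):

```python
def training_texts(records):
    """Keep the Markdown `text` of non-stub articles."""
    return [r["text"] for r in records if not r["stub"]]

# Toy records mimicking the fields described above.
sample = [
    {"title": "Tharman Shanmugaratnam", "stub": False,
     "text": "9th President of Singapore ..."},
    {"title": "Some stub", "stub": True, "text": "Short stub ..."},
]
print(training_texts(sample))  # only the non-stub article's text survives
```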
## Recursal's Vision
> To make AI accessible to everyone, regardless of language or economic status
This is the collective goal of the `RWKV Open Source foundation` and `Recursal AI`, the commercial entity that backs it.
We believe that AI should not be controlled by a select few organizations, and that it should be accessible regardless of whether you are rich or poor, or whether you are a native speaker of English.
### About RWKV
RWKV is an open-source, non-profit group under the Linux Foundation, focused on developing the RWKV AI architecture in accordance with our vision.
The RWKV architecture scales efficiently and economically. As an RNN & Transformer hybrid, it provides performance similar to leading transformer models while retaining the compute and energy efficiency of an RNN-based architecture.
You can find out more about the project, and the latest models, at the following:
- [https://blog.rwkv.com](https://blog.rwkv.com)
- [https://wiki.rwkv.com](https://wiki.rwkv.com)
### About Recursal AI
Recursal AI is the commercial entity built to support RWKV model development and its users, providing commercial services via its public cloud and private-cloud / on-premise offerings.
As part of our vision, we are committed to ensuring open-source development of, and access to, the best foundational AI models and datasets.
The datasets and models provided here are part of that commitment.
You can find out more about Recursal AI here:
- [https://recursal.ai](https://recursal.ai)
- [https://blog.recursal.ai](https://blog.recursal.ai)
### Dataset Curators
KaraKaraWitch. (I typically hang out in PygmalionAI discord, sometimes EleutherAI. If something is wrong, `@karakarawitch` on discord.)
I'd be happy if you could spread the word and recommend this dataset over wikitext for your use cases `:)`
### Licensing Information
Most of Wikipedia's text and many of its images are co-licensed under the
[Creative Commons Attribution-ShareAlike 3.0 Unported License](https://en.wikipedia.org/wiki/Wikipedia:Text_of_Creative_Commons_Attribution-ShareAlike_3.0_Unported_License)
(CC BY-SA) and the [GNU Free Documentation License](https://en.wikipedia.org/wiki/Wikipedia:Text_of_the_GNU_Free_Documentation_License)
(GFDL) (un-versioned, with no invariant sections, front-cover texts, or back-cover texts).
Some text has been imported only under CC BY-SA and CC BY-SA-compatible license and cannot be reused under GFDL; such
text will be identified on the page footer, in the page history, or on the discussion page of the article that utilizes
the text.
Recursal Waifus (the banner image) are licensed under CC-BY-SA.
They do not represent the related websites in any official capacity unless otherwise stated or announced by the website.
You may use them as a banner image; however, you must always link back to the dataset.
### Citation Information
```
@misc{superwiki,
title = {SuperWIKI-1.5},
author = {KaraKaraWitch, recursal.ai},
year = {2024},
howpublished = {\url{https://huggingface.co/datasets/recursal/SuperWiki-1.5}},
}
``` |
chansung/llamaduo-dataset | chansung | "2024-05-09T03:47:56Z" | 0 | 0 | [
"license:apache-2.0",
"region:us"
] | null | "2024-05-09T03:47:56Z" | ---
license: apache-2.0
---
|
WendyBao/finetune_train_20240509 | WendyBao | "2024-05-09T03:54:34Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-05-09T03:54:28Z" | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 482396
num_examples: 702
- name: test
num_bytes: 243247
num_examples: 297
download_size: 378084
dataset_size: 725643
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
WendyBao/finetune_20240509 | WendyBao | "2024-05-10T07:27:41Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-05-09T03:54:37Z" | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: text
dtype: string
splits:
- name: train
num_bytes: 482396
num_examples: 702
- name: test
num_bytes: 243247
num_examples: 297
download_size: 378084
dataset_size: 725643
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
guilhermebastos96/francisca_1000_balanceado | guilhermebastos96 | "2024-05-09T04:40:49Z" | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-05-09T04:39:23Z" | ---
dataset_info:
features:
- name: speaker_id
dtype: int64
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: text
dtype: string
splits:
- name: train
num_bytes: 354248737.0
num_examples: 1000
download_size: 342124813
dataset_size: 354248737.0
---
# Dataset Card for "francisca_1000_balanceado"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
ura-hcmut/Generated-texts-Llama-2-7b-hf-arc_challenge | ura-hcmut | "2024-05-10T16:04:53Z" | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-05-09T04:42:31Z" | ---
dataset_info:
features:
- name: Prompt
dtype: string
- name: Generated_Text_1
dtype: string
- name: Generated_Text_2
dtype: string
- name: Generated_Text_3
dtype: string
- name: Generated_Text_4
dtype: string
- name: Generated_Text_5
dtype: string
- name: Generated_Text_6
dtype: string
- name: Generated_Text_7
dtype: string
- name: Generated_Text_8
dtype: string
- name: Generated_Text_9
dtype: string
- name: Generated_Text_10
dtype: string
- name: Generated_Text_11
dtype: string
- name: Generated_Text_12
dtype: string
- name: Generated_Text_13
dtype: string
- name: Generated_Text_14
dtype: string
- name: Generated_Text_15
dtype: string
- name: Generated_Text_16
dtype: string
- name: Generated_Text_17
dtype: string
- name: Generated_Text_18
dtype: string
- name: Generated_Text_19
dtype: string
- name: Generated_Text_20
dtype: string
- name: Generated_Text_21
dtype: string
- name: Generated_Text_22
dtype: string
- name: Generated_Text_23
dtype: string
- name: Generated_Text_24
dtype: string
- name: Generated_Text_25
dtype: string
- name: Generated_Text_26
dtype: string
- name: Generated_Text_27
dtype: string
- name: Generated_Text_28
dtype: string
- name: Generated_Text_29
dtype: string
- name: Generated_Text_30
dtype: string
- name: Generated_Text_31
dtype: string
- name: Generated_Text_32
dtype: string
- name: Generated_Text_33
dtype: string
- name: Generated_Text_34
dtype: string
- name: Generated_Text_35
dtype: string
- name: Generated_Text_36
dtype: string
- name: Generated_Text_37
dtype: string
- name: Generated_Text_38
dtype: string
- name: Generated_Text_39
dtype: string
- name: Generated_Text_40
dtype: string
- name: Generated_Text_41
dtype: string
- name: Generated_Text_42
dtype: string
- name: Generated_Text_43
dtype: string
- name: Generated_Text_44
dtype: string
- name: Generated_Text_45
dtype: string
- name: Generated_Text_46
dtype: string
- name: Generated_Text_47
dtype: string
- name: Generated_Text_48
dtype: string
- name: Generated_Text_49
dtype: string
- name: Generated_Text_50
dtype: string
- name: Generated_Text_51
dtype: string
- name: Generated_Text_52
dtype: string
- name: Generated_Text_53
dtype: string
- name: Generated_Text_54
dtype: string
- name: Generated_Text_55
dtype: string
- name: Generated_Text_56
dtype: string
- name: Generated_Text_57
dtype: string
- name: Generated_Text_58
dtype: string
- name: Generated_Text_59
dtype: string
- name: Generated_Text_60
dtype: string
- name: Generated_Text_61
dtype: string
- name: Generated_Text_62
dtype: string
- name: Generated_Text_63
dtype: string
- name: Generated_Text_64
dtype: string
- name: Generated_Text_65
dtype: string
- name: Generated_Text_66
dtype: string
- name: Generated_Text_67
dtype: string
- name: Generated_Text_68
dtype: string
- name: Generated_Text_69
dtype: string
- name: Generated_Text_70
dtype: string
- name: Generated_Text_71
dtype: string
- name: Generated_Text_72
dtype: string
- name: Generated_Text_73
dtype: string
- name: Generated_Text_74
dtype: string
- name: Generated_Text_75
dtype: string
- name: Generated_Text_76
dtype: string
- name: Generated_Text_77
dtype: string
- name: Generated_Text_78
dtype: string
- name: Generated_Text_79
dtype: string
- name: Generated_Text_80
dtype: string
- name: Generated_Text_81
dtype: string
- name: Generated_Text_82
dtype: string
- name: Generated_Text_83
dtype: string
- name: Generated_Text_84
dtype: string
- name: Generated_Text_85
dtype: string
- name: Generated_Text_86
dtype: string
- name: Generated_Text_87
dtype: string
- name: Generated_Text_88
dtype: string
- name: Generated_Text_89
dtype: string
- name: Generated_Text_90
dtype: string
- name: Generated_Text_91
dtype: string
- name: Generated_Text_92
dtype: string
- name: Generated_Text_93
dtype: string
- name: Generated_Text_94
dtype: string
- name: Generated_Text_95
dtype: string
- name: Generated_Text_96
dtype: string
- name: Generated_Text_97
dtype: string
- name: Generated_Text_98
dtype: string
- name: Generated_Text_99
dtype: string
- name: Generated_Text_100
dtype: string
splits:
- name: train
num_bytes: 537375511
num_examples: 3357
download_size: 202841655
dataset_size: 537375511
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
SoumyaM/databricks-micro | SoumyaM | "2024-05-09T19:45:53Z" | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-05-09T05:06:52Z" | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 5206498
num_examples: 10544
download_size: 3149915
dataset_size: 5206498
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
vishheartT/dataset_formatted | vishheartT | "2024-05-09T05:10:14Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-05-09T05:10:12Z" | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 712
num_examples: 4
download_size: 1688
dataset_size: 712
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
SeyedAli/Image-Valence-Arousal-continous-class | SeyedAli | "2024-05-09T05:20:12Z" | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-05-09T05:11:07Z" | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
dataset_info:
features:
- name: image
dtype: image
- name: Arousal
dtype: float64
- name: Valence
dtype: float64
splits:
- name: train
num_bytes: 742641757.5
num_examples: 8612
- name: test
num_bytes: 186137091.75
num_examples: 2154
download_size: 0
dataset_size: 928778849.25
---
# Dataset Card for "Image-Valence-Arousal-continous-class"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
kenken999/urlounge | kenken999 | "2024-05-16T08:06:57Z" | 0 | 0 | [
"task_categories:text-classification",
"language:ja",
"license:mit",
"region:us",
"finance"
] | [
"text-classification"
] | "2024-05-09T05:12:56Z" | ---
license: mit
task_categories:
- text-classification
language:
- ja
tags:
- finance
pretty_name: te
--- |
changliu8541/Assemblage_vcpkgDLL | changliu8541 | "2024-06-20T08:04:46Z" | 0 | 0 | [
"license:other",
"arxiv:2405.03991",
"region:us"
] | null | "2024-05-09T05:13:31Z" | ---
license: other
---
# Assemblage vcpkg DLL Dataset
<sub>last updated: June 16th</sub>
This repository holds the public dataset for [Assemblage](https://assemblage-dataset.net/); this copy covers vcpkg DLL data along with PDB files. Please note: the Assemblage code is published under the MIT license, while the dataset records each binary's source-code repository license; please adhere to the original repository's license.
You can find the paper on [arxiv](https://arxiv.org/abs/2405.03991).
Please use cat (the command, not my cat) to concatenate the part files into the original xz file, then extract it.
```
# Concat the SQLite database then uncompress it
cat vcpkg.sqlite.tar.xz.* > vcpkg.sqlite.tar.xz
tar -xvf vcpkg.sqlite.tar.xz
```
### Dataset Details
This public copy of Assemblage data consists of 130k vcpkg DLL binaries, whose metadata is stored in the SQLite database. Since binary files cannot be represented as text, a separate `vcpkg_final.tar.xz` is included; after extraction, the folder contains the binary files. Each file can be indexed either by its SHA256 hash (the `hash` column) or by the `binary_path` column. You can also read our [docs on the dataset](https://assemblagedocs.readthedocs.io/en/latest/dataset.html)
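As an illustration of indexing by hash, here is a minimal `sqlite3` sketch. The table layout below is a stand-in: only the `hash` and `binary_path` column names come from this card, and the table name `binaries` is an assumption — check the schema inside `vcpkg.sqlite` (or the docs linked above) for the real layout.

```python
import sqlite3

def find_binary_path(conn: sqlite3.Connection, sha256: str):
    """Look up a binary's on-disk path by its SHA256 hash.

    Assumes a table `binaries` with `hash` and `binary_path` columns;
    the column names are from the card, the table name is hypothetical.
    """
    row = conn.execute(
        "SELECT binary_path FROM binaries WHERE hash = ?", (sha256,)
    ).fetchone()
    return row[0] if row else None

# Stand-in in-memory database mimicking the assumed schema, for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE binaries (hash TEXT PRIMARY KEY, binary_path TEXT)")
conn.execute(
    "INSERT INTO binaries VALUES (?, ?)",
    ("deadbeef", "vcpkg_final/de/deadbeef.dll"),
)
print(find_binary_path(conn, "deadbeef"))  # -> vcpkg_final/de/deadbeef.dll
```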
|
usableteapot/all-v2 | usableteapot | "2024-05-09T05:36:50Z" | 0 | 0 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-05-09T05:22:28Z" | ---
dataset_info:
features:
- name: model
dtype: string
- name: conversation
list:
- name: content
dtype: string
- name: role
dtype: string
- name: source
dtype: string
- name: tokens
sequence: int64
- name: token_count
dtype: int64
- name: model_category
dtype: string
splits:
- name: train
num_bytes: 42110619386.0
num_examples: 3690835
download_size: 12997633665
dataset_size: 42110619386.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
rollerhafeezh-amikom/sk_nergrit_corpus | rollerhafeezh-amikom | "2024-05-09T05:25:49Z" | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-05-09T05:25:38Z" | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-LOC
'2': I-LOC
'3': B-DAT
'4': I-DAT
'5': B-TIM
'6': I-TIM
splits:
- name: train
num_bytes: 3236767
num_examples: 6611
- name: test
num_bytes: 660730
num_examples: 1228
- name: validation
num_bytes: 626094
num_examples: 1251
download_size: 1140052
dataset_size: 4523591
---
# Dataset Card for "sk_nergrit_corpus"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
rollerhafeezh-amikom/id_nergrit_corpus | rollerhafeezh-amikom | "2024-05-09T05:28:42Z" | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-05-09T05:27:22Z" | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
dataset_info:
features:
- name: id
dtype: string
- name: tokens
sequence: string
- name: ner_tags
sequence:
class_label:
names:
'0': O
'1': B-LOC
'2': I-LOC
'3': B-DAT
'4': I-DAT
'5': B-TIM
'6': I-TIM
splits:
- name: train
num_bytes: 2863630.192227897
num_examples: 6611
- name: test
num_bytes: 581270.9078782826
num_examples: 1228
- name: validation
num_bytes: 539116.4898849663
num_examples: 1251
download_size: 677131
dataset_size: 3984017.589991146
---
# Dataset Card for "id_nergrit_corpus"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
trungdang2901/dattta | trungdang2901 | "2024-05-09T05:33:09Z" | 0 | 0 | [
"size_categories:10M<n<100M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-05-09T05:28:34Z" | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 10787543066
num_examples: 75318247
download_size: 6218841142
dataset_size: 10787543066
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
sagwa/train1 | sagwa | "2024-05-09T05:32:11Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-05-09T05:30:14Z" | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 5826
num_examples: 39
download_size: 2572
dataset_size: 5826
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
rmsdud/complaints_small_data | rmsdud | "2024-05-09T05:36:09Z" | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-05-09T05:31:50Z" | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 3788379
num_examples: 2844
download_size: 2030454
dataset_size: 3788379
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ndiy/ASAP_SENT | ndiy | "2024-05-12T06:37:55Z" | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-05-09T05:42:11Z" | ---
dataset_info:
features:
- name: label
dtype: int32
- name: text
dtype: string
splits:
- name: train
num_bytes: 34566868
num_examples: 36850
- name: test
num_bytes: 4635605
num_examples: 4940
download_size: 27770214
dataset_size: 39202473
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
amilmshaji/onepane-llm-evaluation-gemini | amilmshaji | "2024-05-09T05:55:04Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-05-09T05:49:06Z" | ---
dataset_info:
features:
- name: id
dtype: int64
- name: question
dtype: string
- name: context
dtype: string
- name: response
dtype: string
- name: model_name
dtype: string
- name: latency
dtype: float64
- name: pricing
dtype: float64
- name: input_token_lenght
dtype: int64
- name: output_token_length
dtype: int64
splits:
- name: train
num_bytes: 75304
num_examples: 20
download_size: 38242
dataset_size: 75304
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
WSTR/gender_annotation | WSTR | "2024-05-09T07:11:55Z" | 0 | 0 | [
"license:unknown",
"size_categories:1K<n<10K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-05-09T05:49:46Z" | ---
license: unknown
---
|
ccvl/3D-DST-data | ccvl | "2024-05-09T07:28:30Z" | 0 | 0 | [
"license:mit",
"modality:image",
"region:us"
] | null | "2024-05-09T05:53:28Z" | ---
license: mit
pretty_name: 3D-DST-models
extra_gated_fields:
Name: text
Affiliation: text
---
## 3D-DST-data
As part of our data release in [3D-DST](https://ccvl.jhu.edu/3D-DST/), we present 3D-DST images for all 1000 classes in ImageNet-1k with **MiniGPT4-generated captions** available [here](https://huggingface.co/datasets/ccvl/3D-DST-captions).
See [wufeim/DST3D](https://github.com/wufeim/DST3D) for reproduction instructions and [DeiT-small](https://github.com/facebookresearch/deit/blob/main/README_deit.md) results. |
artificialnoob/weights | artificialnoob | "2024-05-09T06:03:35Z" | 0 | 0 | [
"license:mit",
"region:us"
] | null | "2024-05-09T05:56:08Z" | ---
license: mit
---
|
zoohun/medical_re_one | zoohun | "2024-05-09T06:06:29Z" | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-05-09T06:06:06Z" | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1543268
num_examples: 2000
download_size: 360246
dataset_size: 1543268
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
zoohun/medical_re_two | zoohun | "2024-05-09T06:09:30Z" | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-05-09T06:06:40Z" | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1595151
num_examples: 2000
download_size: 371968
dataset_size: 1595151
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
zoohun/medical_re_three | zoohun | "2024-05-09T06:10:40Z" | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-05-09T06:06:54Z" | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1650067
num_examples: 3037
download_size: 378122
dataset_size: 1650067
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
zoohun/medical_re_four | zoohun | "2024-05-09T06:11:14Z" | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-05-09T06:07:05Z" | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 815532
num_examples: 1037
download_size: 201675
dataset_size: 815532
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
qianzhicheng111/TDataSet | qianzhicheng111 | "2024-05-09T08:35:19Z" | 0 | 0 | [
"license:apache-2.0",
"region:us"
] | null | "2024-05-09T06:15:45Z" | ---
license: apache-2.0
---
|
ndiy/NLPCC14 | ndiy | "2024-05-10T05:48:25Z" | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-05-09T06:20:46Z" | ---
dataset_info:
features:
- name: label
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 4972014
num_examples: 19146
- name: test
num_bytes: 385017
num_examples: 1200
download_size: 3837130
dataset_size: 5357031
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
denizzhansahin/deneme1 | denizzhansahin | "2024-05-09T06:22:44Z" | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-05-09T06:22:38Z" | ---
dataset_info:
features:
- name: review
dtype: string
- name: review_length
dtype: int64
splits:
- name: train
num_bytes: 2109125.3019502354
num_examples: 1338
- name: validation
num_bytes: 234872.69804976461
num_examples: 149
download_size: 1387102
dataset_size: 2343998.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
0xscope/event_rules | 0xscope | "2024-05-09T07:04:16Z" | 0 | 0 | [
"license:mit",
"size_categories:n<1K",
"format:csv",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-05-09T06:26:40Z" | ---
license: mit
---
## 0xScope Event Trading Dataset Rules & Explanation
This section describes the meaning of each event field.
- Event ID
- Event Category
- Event Details
In addition to these fields, each event carries the following parameters:
- Token: the address of the token
- Chain: the chain of the token
- pt: event occurrence time, by hour
- Basescore: the expected influence on the token (1: bullish, -1: bearish) |
YYYYYYibo/ultrafeedback_binarized_doff_real_no_golden_vllm_part_3_mini_2 | YYYYYYibo | "2024-05-09T06:26:44Z" | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-05-09T06:26:42Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: prompt_id
dtype: string
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: score_chosen
dtype: float64
- name: score_rejected
dtype: float64
- name: reference_response
dtype: string
- name: resp0
dtype: string
- name: resp1
dtype: string
- name: resp2
dtype: string
- name: resp3
dtype: string
splits:
- name: train_prefs
num_bytes: 65120883
num_examples: 4983
download_size: 36311143
dataset_size: 65120883
configs:
- config_name: default
data_files:
- split: train_prefs
path: data/train_prefs-*
---
# Dataset Card for "ultrafeedback_binarized_doff_real_no_golden_vllm_part_3_mini_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
YYYYYYibo/ultrafeedback_binarized_doff_real_no_golden_vllm_part_3_mini_3 | YYYYYYibo | "2024-05-09T06:27:17Z" | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-05-09T06:27:13Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: prompt_id
dtype: string
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: score_chosen
dtype: float64
- name: score_rejected
dtype: float64
- name: reference_response
dtype: string
- name: resp0
dtype: string
- name: resp1
dtype: string
- name: resp2
dtype: string
- name: resp3
dtype: string
splits:
- name: train_prefs
num_bytes: 66293811
num_examples: 5086
download_size: 36949341
dataset_size: 66293811
configs:
- config_name: default
data_files:
- split: train_prefs
path: data/train_prefs-*
---
# Dataset Card for "ultrafeedback_binarized_doff_real_no_golden_vllm_part_3_mini_3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
YYYYYYibo/ultrafeedback_binarized_doff_real_no_golden_vllm_part_3_mini_1 | YYYYYYibo | "2024-05-09T06:27:41Z" | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-05-09T06:27:38Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: prompt_id
dtype: string
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: score_chosen
dtype: float64
- name: score_rejected
dtype: float64
- name: reference_response
dtype: string
- name: resp0
dtype: string
- name: resp1
dtype: string
- name: resp2
dtype: string
- name: resp3
dtype: string
splits:
- name: train_prefs
num_bytes: 67659037
num_examples: 5283
download_size: 37873922
dataset_size: 67659037
configs:
- config_name: default
data_files:
- split: train_prefs
path: data/train_prefs-*
---
# Dataset Card for "ultrafeedback_binarized_doff_real_no_golden_vllm_part_3_mini_1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
YYYYYYibo/ultrafeedback_binarized_doff_real_no_golden_vllm_part_3_mini_0 | YYYYYYibo | "2024-05-09T06:27:42Z" | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-05-09T06:27:39Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: prompt_id
dtype: string
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: score_chosen
dtype: float64
- name: score_rejected
dtype: float64
- name: reference_response
dtype: string
- name: resp0
dtype: string
- name: resp1
dtype: string
- name: resp2
dtype: string
- name: resp3
dtype: string
splits:
- name: train_prefs
num_bytes: 68335595
num_examples: 5283
download_size: 38355781
dataset_size: 68335595
configs:
- config_name: default
data_files:
- split: train_prefs
path: data/train_prefs-*
---
# Dataset Card for "ultrafeedback_binarized_doff_real_no_golden_vllm_part_3_mini_0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
Xzaxx/Xix-Z | Xzaxx | "2024-05-09T06:28:34Z" | 0 | 0 | [
"license:mit",
"region:us"
] | null | "2024-05-09T06:28:34Z" | ---
license: mit
---
|
NickyNicky/financial_phrasebank_traslate_En_Es | NickyNicky | "2024-05-10T04:13:02Z" | 0 | 3 | [
"task_categories:translation",
"language:en",
"language:es",
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"finance"
] | [
"translation"
] | "2024-05-09T06:30:28Z" | ---
dataset_info:
features:
- name: sentence_en
dtype: string
- name: sentence_es
dtype: string
- name: label
dtype: int64
splits:
- name: train
num_bytes: 1389599
num_examples: 4830
download_size: 816379
dataset_size: 1389599
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
license: apache-2.0
task_categories:
- translation
language:
- en
- es
tags:
- finance
size_categories:
- 1K<n<10K
---
## translation time.
![image/png](https://cdn-uploads.huggingface.co/production/uploads/641b435ba5f876fe30c5ae0a/ue61viA2RsCP0G1IiOvwx.png)
## labels.
* Negative: 0
* Neutral: 1
* Positive: 2
## model used for translation.
```
https://huggingface.co/NickyNicky/gemma-1.1-2b-it_orpo_traslate_en_es_V1
```
## original dataset:
```
https://huggingface.co/datasets/financial_phrasebank
```
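A minimal offline sketch of decoding the integer `label` column into sentiment names, per the labels section above (the example record is invented for illustration, not taken from the dataset):

```python
# Map financial_phrasebank label ids to sentiment names.
LABELS = {0: "negative", 1: "neutral", 2: "positive"}

def label_name(label_id: int) -> str:
    """Return the sentiment name for an integer label id."""
    return LABELS[label_id]

# Hypothetical record shaped like one row of this dataset.
example = {
    "sentence_en": "Operating profit rose compared with the previous year.",
    "sentence_es": "El beneficio operativo aumentó respecto al año anterior.",
    "label": 2,
}
print(label_name(example["label"]))  # -> positive
```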
|
mawadalla/tatoeba_large | mawadalla | "2024-05-09T19:44:23Z" | 0 | 1 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-05-09T06:36:18Z" | ---
size_categories:
- 1M<n<10M
---
A dataset of 8 million sentences in hundreds of languages, crowd-sourced from Tatoeba. |
soaple/custom-llama2 | soaple | "2024-05-09T06:36:50Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-05-09T06:36:49Z" | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 7754
num_examples: 32
download_size: 4170
dataset_size: 7754
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
bipinsaha/bangla-law-qna | bipinsaha | "2024-05-09T06:47:08Z" | 0 | 2 | [
"license:cc-by-nc-4.0",
"size_categories:n<1K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-05-09T06:46:20Z" | ---
license: cc-by-nc-4.0
---
|
xieyuankun/Codecfake | xieyuankun | "2024-05-09T17:33:26Z" | 0 | 0 | [
"license:cc-by-nc-3.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:audio",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-05-09T06:46:29Z" | ---
license: cc-by-nc-3.0
dataset_info:
features:
- name: audio
dtype: audio
splits:
- name: train
num_bytes: 1280756603.44
num_examples: 8887
download_size: 2068524987
dataset_size: 1280756603.44
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
niklauslee/customllama2 | niklauslee | "2024-05-09T06:49:10Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-05-09T06:49:09Z" | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 7754
num_examples: 32
download_size: 4170
dataset_size: 7754
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
heegyu/UltraInteract_pair_subtree | heegyu | "2024-05-09T06:54:19Z" | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-05-09T06:54:10Z" | ---
dataset_info:
features:
- name: parent
struct:
- name: chosen
dtype: string
- name: dataset
dtype: string
- name: depth
dtype: int64
- name: id
dtype: string
- name: parent_id
dtype: string
- name: rejected
dtype: string
- name: task
dtype: string
- name: trajectory
list:
- name: from
dtype: string
- name: value
dtype: string
- name: child
struct:
- name: chosen
dtype: string
- name: dataset
dtype: string
- name: depth
dtype: int64
- name: id
dtype: string
- name: parent_id
dtype: string
- name: rejected
dtype: string
- name: task
dtype: string
- name: trajectory
list:
- name: from
dtype: string
- name: value
dtype: string
splits:
- name: train
num_bytes: 707433702
num_examples: 59603
download_size: 176485453
dataset_size: 707433702
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "UltraInteract_pair_subtree"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
AndrewZeng/train_only_twice_syn | AndrewZeng | "2024-05-09T07:00:17Z" | 0 | 0 | [
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-05-09T06:59:03Z" | ---
license: apache-2.0
---
|
AndrewZeng/train_only_third_syn | AndrewZeng | "2024-05-09T07:02:20Z" | 0 | 0 | [
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-05-09T07:01:23Z" | ---
license: apache-2.0
---
|
AndrewZeng/train_only_forth_syn | AndrewZeng | "2024-05-09T07:06:12Z" | 0 | 0 | [
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-05-09T07:02:52Z" | ---
license: apache-2.0
---
|
irisxue/childrens_book_covers | irisxue | "2024-05-09T19:03:20Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-05-09T07:08:50Z" | ---
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 917850.0
num_examples: 100
download_size: 926704
dataset_size: 917850.0
---
|
swyoo39/customData | swyoo39 | "2024-05-09T07:09:54Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-05-09T07:08:56Z" | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 5826
num_examples: 39
download_size: 2572
dataset_size: 5826
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Spico/Mirror_ACE | Spico | "2024-10-11T17:37:24Z" | 0 | 0 | [
"language:en",
"arxiv:2311.05419",
"region:us"
] | null | "2024-05-09T07:18:43Z" | ---
language:
- en
---
# Mirror Dataset (ACE Part)
This is the remaining part of the full dataset for [🪞Mirror](https://github.com/Spico197/Mirror).
You can merge it with [Mirror_woACE](https://huggingface.co/datasets/Spico/Mirror_woACE) to obtain the complete dataset.
This repo mainly contains the ACE dataset, which requires an additional LDC license.
If you have such a license, please feel free to drop Tong Zhu an email (tzhu1997@outlook.com) with some data samples, then apply for dataset access on this page.
Direct data access applications without email notifications will not be approved.
## Citation
```bibtex
@misc{zhu_mirror_2023,
shorttitle = {Mirror},
title = {Mirror: A Universal Framework for Various Information Extraction Tasks},
author = {Zhu, Tong and Ren, Junfei and Yu, Zijian and Wu, Mengsong and Zhang, Guoliang and Qu, Xiaoye and Chen, Wenliang and Wang, Zhefeng and Huai, Baoxing and Zhang, Min},
url = {http://arxiv.org/abs/2311.05419},
doi = {10.48550/arXiv.2311.05419},
urldate = {2023-11-10},
publisher = {arXiv},
month = nov,
year = {2023},
note = {arXiv:2311.05419 [cs]},
keywords = {Computer Science - Artificial Intelligence, Computer Science - Computation and Language},
}
``` |
uine/single-dpo-practice-dataset | uine | "2024-05-09T07:20:18Z" | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-05-09T07:20:17Z" | ---
dataset_info:
features:
- name: rejected
dtype: string
- name: chosen
dtype: string
- name: prompt
dtype: string
splits:
- name: train
num_bytes: 2740378
num_examples: 2109
download_size: 1009085
dataset_size: 2740378
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Cafet/Tiny_Train_Processed | Cafet | "2024-05-09T07:46:02Z" | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-05-09T07:25:20Z" | ---
dataset_info:
features:
- name: audio
dtype:
audio:
sampling_rate: 16000
- name: transcription
dtype: string
- name: reference
dtype: string
- name: prediction_01
dtype: string
splits:
- name: train
num_bytes: 2837918903.875
num_examples: 10369
download_size: 2539257678
dataset_size: 2837918903.875
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Soykot/Godeg | Soykot | "2024-05-09T07:54:05Z" | 0 | 0 | [
"license:mit",
"size_categories:n<1K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-05-09T07:34:09Z" | ---
license: mit
---
|
YYYYYYibo/ultrafeedback_binarized_doff_real_no_golden_minpi_part_3 | YYYYYYibo | "2024-05-09T07:40:42Z" | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-05-09T07:40:34Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: prompt_id
dtype: string
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: score_chosen
dtype: float64
- name: score_rejected
dtype: float64
- name: reference_response
dtype: string
- name: resp0
dtype: string
- name: resp1
dtype: string
- name: resp2
dtype: string
- name: resp3
dtype: string
- name: minpi
dtype: string
- name: random
dtype: string
splits:
- name: train_prefs
num_bytes: 318778019
num_examples: 20635
download_size: 178621360
dataset_size: 318778019
configs:
- config_name: default
data_files:
- split: train_prefs
path: data/train_prefs-*
---
# Dataset Card for "ultrafeedback_binarized_doff_real_no_golden_minpi_part_3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
nirantk/scifact-bge-m3-sparse-vectors | nirantk | "2024-05-13T13:46:09Z" | 0 | 0 | [
"language:en",
"license:mit",
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-05-09T07:55:03Z" | ---
language:
- en
license: mit
dataset_info:
features:
- name: _id
dtype: string
- name: title
dtype: string
- name: text
dtype: string
- name: bge_m3_sparse_vector
dtype: string
splits:
- name: corpus
num_bytes: 27321636
num_examples: 5183
download_size: 13140473
dataset_size: 27321636
configs:
- config_name: default
data_files:
- split: corpus
path: data/corpus-*
---
|
YYYYYYibo/ultrafeedback_binarized_doff_real_no_golden_train_part_3_mini_0 | YYYYYYibo | "2024-05-09T07:58:02Z" | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-05-09T07:58:00Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: prompt_id
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: score_chosen
dtype: float64
- name: score_rejected
dtype: float64
- name: reference_response
dtype: string
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train_prefs
num_bytes: 43222850
num_examples: 5158
download_size: 24193117
dataset_size: 43222850
configs:
- config_name: default
data_files:
- split: train_prefs
path: data/train_prefs-*
---
# Dataset Card for "ultrafeedback_binarized_doff_real_no_golden_train_part_3_mini_0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
YYYYYYibo/ultrafeedback_binarized_doff_real_no_golden_train_part_3_mini_3 | YYYYYYibo | "2024-05-09T07:58:53Z" | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-05-09T07:58:51Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: prompt_id
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: score_chosen
dtype: float64
- name: score_rejected
dtype: float64
- name: reference_response
dtype: string
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train_prefs
num_bytes: 43550584
num_examples: 5161
download_size: 24173607
dataset_size: 43550584
configs:
- config_name: default
data_files:
- split: train_prefs
path: data/train_prefs-*
---
# Dataset Card for "ultrafeedback_binarized_doff_real_no_golden_train_part_3_mini_3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
YYYYYYibo/ultrafeedback_binarized_doff_real_no_golden_train_part_3_mini_1 | YYYYYYibo | "2024-05-09T07:59:00Z" | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-05-09T07:58:58Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: prompt_id
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: score_chosen
dtype: float64
- name: score_rejected
dtype: float64
- name: reference_response
dtype: string
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train_prefs
num_bytes: 43091603
num_examples: 5158
download_size: 24073161
dataset_size: 43091603
configs:
- config_name: default
data_files:
- split: train_prefs
path: data/train_prefs-*
---
# Dataset Card for "ultrafeedback_binarized_doff_real_no_golden_train_part_3_mini_1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
YYYYYYibo/ultrafeedback_binarized_doff_real_no_golden_train_part_3_mini_2 | YYYYYYibo | "2024-05-09T07:59:00Z" | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-05-09T07:58:58Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: prompt_id
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: score_chosen
dtype: float64
- name: score_rejected
dtype: float64
- name: reference_response
dtype: string
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train_prefs
num_bytes: 44145884
num_examples: 5158
download_size: 24555538
dataset_size: 44145884
configs:
- config_name: default
data_files:
- split: train_prefs
path: data/train_prefs-*
---
# Dataset Card for "ultrafeedback_binarized_doff_real_no_golden_train_part_3_mini_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
YYYYYYibo/ultrafeedback_binarized_doff_real_no_golden_train_part_3 | YYYYYYibo | "2024-05-09T07:59:18Z" | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-05-09T07:59:13Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: prompt_id
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: score_chosen
dtype: float64
- name: score_rejected
dtype: float64
- name: reference_response
dtype: string
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
splits:
- name: train_prefs
num_bytes: 174010921
num_examples: 20635
download_size: 96907152
dataset_size: 174010921
configs:
- config_name: default
data_files:
- split: train_prefs
path: data/train_prefs-*
---
# Dataset Card for "ultrafeedback_binarized_doff_real_no_golden_train_part_3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
hotsx/cristiane | hotsx | "2024-05-09T08:07:09Z" | 0 | 0 | [
"license:openrail",
"size_categories:n<1K",
"format:audiofolder",
"modality:audio",
"library:datasets",
"library:mlcroissant",
"region:us"
] | null | "2024-05-09T08:00:06Z" | ---
license: openrail
---
|
manjugeorge/common | manjugeorge | "2024-05-09T10:15:47Z" | 0 | 0 | [
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:csv",
"modality:audio",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-05-09T08:10:51Z" | ---
license: apache-2.0
---
|
jili/tcm | jili | "2024-05-09T08:15:46Z" | 0 | 0 | [
"license:apache-2.0",
"region:us"
] | null | "2024-05-09T08:15:46Z" | ---
license: apache-2.0
---
|
llm-values/self_response_2_Qwen__Qwen1.5_110B_answers | llm-values | "2024-05-09T08:16:59Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-05-09T08:16:40Z" | ---
dataset_info:
features:
- name: prompt ID
dtype: int64
- name: language
dtype: string
- name: topic
dtype: string
- name: labels
sequence: string
- name: original prompt
dtype: string
- name: translated prompt
dtype: string
- name: 'answer: Qwen/Qwen1.5-110B - original'
dtype: string
- name: 'answer: Qwen/Qwen1.5-110B - translated'
dtype: string
- name: pos priming phrase 0 - original
dtype: string
- name: 'pos priming phrase 0, answer: Yes - original'
dtype: string
- name: pos priming phrase 0, answer Yes logprob - original
dtype: int64
- name: 'pos priming phrase 0, answer: No - original'
dtype: string
- name: pos priming phrase 0, answer No logprob - original
dtype: int64
- name: neg priming phrase 0 - original
dtype: string
- name: 'neg priming phrase 0, answer: Yes - original'
dtype: string
- name: neg priming phrase 0, answer Yes logprob - original
dtype: int64
- name: 'neg priming phrase 0, answer: No - original'
dtype: string
- name: neg priming phrase 0, answer No logprob - original
dtype: int64
- name: pos priming phrase 0 - translated
dtype: string
- name: 'pos priming phrase 0, answer: Yes - translated'
dtype: string
- name: pos priming phrase 0, answer Yes logprob - translated
dtype: int64
- name: 'pos priming phrase 0, answer: No - translated'
dtype: string
- name: pos priming phrase 0, answer No logprob - translated
dtype: int64
- name: neg priming phrase 0 - translated
dtype: string
- name: 'neg priming phrase 0, answer: Yes - translated'
dtype: string
- name: neg priming phrase 0, answer Yes logprob - translated
dtype: int64
- name: 'neg priming phrase 0, answer: No - translated'
dtype: string
- name: neg priming phrase 0, answer No logprob - translated
dtype: int64
splits:
- name: self_response_2_Qwen__Qwen1.5_110B_answers
num_bytes: 700300
num_examples: 701
download_size: 171337
dataset_size: 700300
configs:
- config_name: default
data_files:
- split: self_response_2_Qwen__Qwen1.5_110B_answers
path: data/self_response_2_Qwen__Qwen1.5_110B_answers-*
---
|
med-alex/qa_mt_en_to_kaz | med-alex | "2024-06-11T13:58:57Z" | 0 | 0 | [
"task_categories:question-answering",
"language:kk",
"size_categories:10K<n<100K",
"format:json",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"question-answering"
] | "2024-05-09T08:32:05Z" | ---
task_categories:
- question-answering
language:
- kk
size_categories:
- 10K<n<100K
---
This is one of the datasets created as part of a project studying question answering (QA) for low-resource languages, using Kazakh and Uzbek as examples.
Please see the [description](https://github.com/med-alex/turkic_qa?tab=readme-ov-file#добро-пожаловать-на-студенческий-проект-посвященный-решению-задачи-qa-для-низкоресурсных-языков-на-примере-казахского-и-узбекского-языка) of the project to learn more about how this dataset was created.
**NOTE:** To achieve the best results for the final models, you should use the similar datasets translated from English: med-alex/qa_mt_en_to_uzn or med-alex/qa_mt_en_to_kaz. |
bunkalab/arxiv_datasets_llm | bunkalab | "2024-05-09T08:37:31Z" | 0 | 1 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-05-09T08:37:29Z" | ---
dataset_info:
features:
- name: Title
dtype: string
- name: Authors
dtype: string
- name: Abstract
dtype: string
- name: entry_id
dtype: string
- name: Date
dtype: timestamp[ns, tz=UTC]
- name: Categories
dtype: string
- name: year
dtype: int32
splits:
- name: train
num_bytes: 6037799
num_examples: 3851
download_size: 3297037
dataset_size: 6037799
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
wisenut-nlp-team/dst_dataset | wisenut-nlp-team | "2024-05-09T08:45:27Z" | 0 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-05-09T08:45:13Z" | ---
dataset_info:
features:
- name: left_inputs
dtype: string
- name: right_inputs
dtype: string
- name: length_labels
dtype: int64
- name: task_labels
dtype: int64
- name: multiturn_labels
dtype: int64
- name: multiquestion_labels
sequence: int64
splits:
- name: train
num_bytes: 76241205
num_examples: 74583
- name: validation
num_bytes: 19096089
num_examples: 18646
- name: test
num_bytes: 12053756
num_examples: 11444
download_size: 53963164
dataset_size: 107391050
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
wisenut-nlp-team/smr_dataset | wisenut-nlp-team | "2024-05-09T08:56:17Z" | 0 | 0 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-05-09T08:45:28Z" | ---
dataset_info:
features:
- name: passage
dtype: string
- name: summary
dtype: string
splits:
- name: train
num_bytes: 7691192512
num_examples: 899822
- name: validation
num_bytes: 810294954
num_examples: 108339
download_size: 3494915664
dataset_size: 8501487466
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
edizpy/karar-ds-mini_0905_1 | edizpy | "2024-05-09T08:46:17Z" | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-05-09T08:46:11Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: karar_uzunluk
dtype: int64
splits:
- name: train
num_bytes: 2424646.93625
num_examples: 3073
- name: validation
num_bytes: 269843.5575
num_examples: 342
download_size: 773010
dataset_size: 2694490.49375
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
Sathviksoma/capitals | Sathviksoma | "2024-05-09T08:56:52Z" | 0 | 0 | [
"license:openrail",
"size_categories:n<1K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-05-09T08:56:39Z" | ---
license: openrail
---
|
Paco4365483/dataset | Paco4365483 | "2024-05-09T10:32:23Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-05-09T09:03:13Z" | ---
dataset_info:
features:
- name: image
dtype: image
- name: caption
dtype: string
splits:
- name: train
num_bytes: 495865.0
num_examples: 9
download_size: 497519
dataset_size: 495865.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
onepaneai/harmful-prompts | onepaneai | "2024-05-17T06:33:39Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-05-09T09:11:47Z" | ---
dataset_info:
features:
- name: type
dtype: string
- name: prompt
dtype: string
splits:
- name: train
num_bytes: 45720
num_examples: 200
download_size: 22390
dataset_size: 45720
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
goihere/image-generation-prompting | goihere | "2024-05-09T09:22:12Z" | 0 | 0 | [
"license:mit",
"size_categories:n<1K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-05-09T09:13:15Z" | ---
license: mit
split: train
path: "dataset - Sheet1 (1).csv"
---
|
Sathviksoma/captials_answers | Sathviksoma | "2024-05-09T09:14:37Z" | 0 | 0 | [
"license:openrail",
"size_categories:n<1K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-05-09T09:14:24Z" | ---
license: openrail
---
|
irisxue/testing_childrensbooks | irisxue | "2024-05-09T09:23:36Z" | 0 | 0 | [
"license:mit",
"modality:image",
"region:us"
] | null | "2024-05-09T09:22:40Z" | ---
license: mit
---
|
ikeno-ada/Japanese-English_translation_of_contents_HScodes | ikeno-ada | "2024-05-09T09:37:08Z" | 0 | 0 | [
"language:ja",
"language:en",
"license:mit",
"size_categories:1K<n<10K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-05-09T09:29:48Z" | ---
license: mit
language:
- ja
- en
---
This dataset is based on the data in "International Mail: Japanese-English and Chinese-English Translations of Contents and HS Codes" (2024/05/09), provided by Japan Post.
For details, please see the website:
https://www.post.japanpost.jp/int/use/publication/contentslist/index.php?id=0&ie=utf8&lang=_ja&q= |
Sathviksoma/capitals_formatted | Sathviksoma | "2024-05-09T09:38:30Z" | 0 | 0 | [
"license:openrail",
"size_categories:n<1K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-05-09T09:38:19Z" | ---
license: openrail
---
|
andyWuTw/uozhiba-llama3-tt | andyWuTw | "2024-05-09T09:50:02Z" | 0 | 0 | [
"license:apache-2.0",
"region:us"
] | null | "2024-05-09T09:50:02Z" | ---
license: apache-2.0
---
|
Sathviksoma/important_person | Sathviksoma | "2024-05-09T09:50:22Z" | 0 | 0 | [
"license:openrail",
"size_categories:n<1K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-05-09T09:50:12Z" | ---
license: openrail
---
|
chrisx01/tsetse-42 | chrisx01 | "2024-05-20T07:49:13Z" | 0 | 0 | [
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:csv",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-05-09T09:50:22Z" | ---
license: apache-2.0
---
|
talalmuzaffar/empathetic_dataset | talalmuzaffar | "2024-05-09T10:01:56Z" | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-05-09T10:01:52Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: utterance
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 1719028
num_examples: 8000
download_size: 954635
dataset_size: 1719028
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
talalmuzaffar/empathetic_dataset_1 | talalmuzaffar | "2024-05-09T10:04:44Z" | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-05-09T10:04:40Z" | ---
dataset_info:
features:
- name: Input
dtype: string
- name: Outputs
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 1719028
num_examples: 8000
download_size: 954569
dataset_size: 1719028
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
talalmuzaffar/empathetic_dataset_2 | talalmuzaffar | "2024-05-09T10:06:03Z" | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-05-09T10:05:59Z" | ---
dataset_info:
features:
- name: Input
dtype: string
- name: Response
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 1719028
num_examples: 8000
download_size: 954587
dataset_size: 1719028
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
liswei/PromptPair-TW | liswei | "2024-06-02T06:22:23Z" | 0 | 2 | [
"task_categories:text-generation",
"task_categories:translation",
"task_categories:summarization",
"language:zh",
"license:cc-by-nc-sa-4.0",
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-generation",
"translation",
"summarization"
] | "2024-05-09T10:07:12Z" | ---
language:
- zh
license: cc-by-nc-sa-4.0
size_categories:
- 100K<n<1M
task_categories:
- text-generation
- translation
- summarization
dataset_info:
features:
- name: id
dtype: int64
- name: conversations
list:
- name: content
dtype: string
- name: role
dtype: string
- name: meta
dtype: string
splits:
- name: train
num_bytes: 50151365
num_examples: 119413
download_size: 31222381
dataset_size: 50151365
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for PromptPair-TW
- Contains 120K high-quality instruction-tuning examples in Traditional Chinese.
- Contains English prompt + Traditional Chinese responses to better adapt from English pre-trained models.
- Includes custom system prompts to avoid code-switching after finetuning.
- Is de-duplicated and cleaned using both rule-based and learning-based filters.
Examples of field `conversations`:
```text
[
{"role": "system" , "content": "Explain the following instructions in Traditional Chinese"},
{"role": "user", "content": "Construct a sentence using the given verb in the past tense\n「shudder」"},
{"role": "assistant", "content": "當雷聲在遠處響起時,她不由自主地顫抖了。"}
]
```
```text
[
{"role": "system", "content": "以繁體中文完成此任務"},
{"role": "user", "content": "Suggest two interventions that may reduce drug abuse."},
{"role": "assistant", "content": "兩種可能減少藥物濫用的干預措施是增加諮詢和康復計劃的接觸,以及為學齡兒童實施有關使用藥物風險的教育計劃。諮詢和康復計劃有助於個人確定其成癮的潛在原因和管理渴望的技能,而教育計劃可以幫助青少年更多地了解藥物濫用的潛在後果。"}
]
```
## System Prompt Details
System prompts for en-tw pairs are randomly selected from the following list with 20% dropout:
```text
prompts = [
"Please translate the following English paragraph into Traditional Chinese: ",
"Translate this text into Traditional Chinese: ",
"Convert the English below into Traditional Chinese: ",
"Please provide a Traditional Chinese translation of the following text: ",
"English to Traditional Chinese translation requested for the paragraph below: ",
"Kindly render the English passage below into Traditional Chinese: ",
"Turn the following English into Traditional Chinese: ",
"English to Traditional Chinese: Please translate the text below: ",
"Provide a Traditional Chinese version of the English paragraph below: ",
    "English text to be translated into Traditional Chinese follows: ",
"請將以下英文段落翻譯成繁體中文:",
"將此文本翻譯成繁體中文:",
"把下面的英文翻譯成繁體中文:",
"請提供以下文本的繁體中文翻譯:",
"請將下列英文段落翻譯為繁體中文:",
"以下英文請翻成繁體中文:",
"翻譯下面的英文到繁體中文:",
"請把以下英文轉譯為繁體中文:",
"將下列英文譯為繁體中文:",
"英文到繁體中文:請翻譯下面的文本:"
]
```
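The sampling scheme described above (a uniformly random prompt from the list, dropped entirely 20% of the time) can be sketched as follows. This is a hypothetical illustration, not the dataset authors' actual script; the function name and the truncated `prompts` list are assumptions.

```python
import random
from typing import Optional

# Abbreviated stand-in for the full prompt list shown above.
prompts = [
    "Please translate the following English paragraph into Traditional Chinese: ",
    "Translate this text into Traditional Chinese: ",
    "請將以下英文段落翻譯成繁體中文:",
    "將此文本翻譯成繁體中文:",
]

def sample_system_prompt(dropout: float = 0.2) -> Optional[str]:
    """Return a uniformly random system prompt, or None with probability `dropout`."""
    if random.random() < dropout:
        return None  # this example gets no system message
    return random.choice(prompts)

# Roughly 20% of examples should end up without a system prompt.
samples = [sample_system_prompt() for _ in range(10_000)]
dropped = sum(s is None for s in samples) / len(samples)
```

Dropping the system prompt for a fraction of examples keeps the finetuned model from depending on any single instruction phrasing.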
## Dataset Details
Dataset source:
- [ntunlplab/traditional-chinese-alpaca](https://github.com/ntunlplab/traditional-chinese-alpaca): GPT-3.5 generated alpaca instruction set in zhTW and EN.
- [KSmart/chinese_traditional_chengyu](https://huggingface.co/datasets/KSmart/chinese_traditional_chengyu): Translated from zhCN to zhTW using OpenCC (s2tw.json).
- [zetavg/coct-en-zh-tw-translations-twp-300k](https://huggingface.co/datasets/zetavg/coct-en-zh-tw-translations-twp-300k): Formed instruction tuning pairs using zhTW and EN paired paragraphs.
- [Heng666/Traditional_Chinese-aya_dataset](https://huggingface.co/datasets/Heng666/Traditional_Chinese-aya_dataset) |
talalmuzaffar/empathetic_dataset_with_context | talalmuzaffar | "2024-05-09T10:17:58Z" | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-05-09T10:17:54Z" | ---
dataset_info:
features:
- name: Context
dtype: string
- name: Input
dtype: string
- name: Response
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 1812315
num_examples: 8000
download_size: 964375
dataset_size: 1812315
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
morten-j/medhie-tokenized-dataset-custom-bertwordpiece | morten-j | "2024-05-09T10:37:15Z" | 0 | 0 | [
"size_categories:10M<n<100M",
"format:parquet",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-05-09T10:20:25Z" | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: token_type_ids
sequence: int8
- name: attention_mask
sequence: int8
- name: word_ids
sequence: int64
- name: labels
sequence: int64
splits:
- name: train
num_bytes: 42113819652
num_examples: 14830301
- name: validation
num_bytes: 4836576832
num_examples: 1703172
download_size: 21646576639
dataset_size: 46950396484
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
CCaccept/Subgraph | CCaccept | "2024-09-22T04:04:11Z" | 0 | 0 | [
"task_categories:question-answering",
"language:en",
"size_categories:10K<n<100K",
"arxiv:2404.07677",
"region:us",
"wikidata",
"freebase",
"KBQA",
"LLM",
"agent"
] | [
"question-answering"
] | "2024-05-09T10:24:39Z" | ---
task_categories:
- question-answering
language:
- en
tags:
- wikidata
- freebase
- KBQA
- LLM
- agent
pretty_name: KG Subgraph
size_categories:
- 10K<n<100K
---
# Subgraph Dataset
## Overview
This repository contains the subgraph dataset used in the paper [ODA: Observation-Driven Agent for integrating LLMs and Knowledge Graphs](https://arxiv.org/abs/2404.07677).
For the code and the KBQA datasets, please refer to the [KGdata repository](https://github.com/lanjiuqing64/KGdata.git).
## Description
The dataset comprises four folders, each containing subgraphs related to different KBQA datasets based on Wikidata or Freebase:
- Wikidata-based datasets:
- QALD10-en
- T-REx
- Zero-Shot RE
- Creak
- Freebase-based datasets:
- GrailQA
- WebQSP
- Simple Questions
- `hop_2_FreeBase/`: Contains 2-hop subgraphs based on Freebase.
- `hop_3_FreeBase/`: Contains 3-hop subgraphs based on Freebase.
- `hop_2_Wikidata/`: Contains 2-hop subgraphs based on Wikidata.
- `hop_3rd_Wikidata/`: Contains 3-hop subgraphs based on Wikidata.
### The format of the subgraph files
1. FreeBase subgraph file
```
{
Topic Entity : [[EntityID,Relation,EntityID],....],
Topic Entity : [[EntityID,Relation,EntityID],....],
...
}
```
2. FreeBase label file
```
{
EntityID : Entity name
EntityID : Entity name
...
}
```
3. Wikidata subgraph file
```
{
Topic Entity : [[EntityID,RelationID,EntityID],....],
Topic Entity : [[EntityID,RelationID,EntityID],....],
...
}
```
4. Wikidata label file
```
{
EntityID : Entity name
RelationID : Relation name
EntityID : Entity name
...
}
```
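Assuming each subgraph file and its companion label file are stored as JSON in the formats shown above, a minimal sketch of loading one and resolving IDs to human-readable names might look like this (the file paths are placeholders, not actual filenames from the repository):

```python
import json

def load_subgraph(subgraph_path, label_path):
    """Load a subgraph file and its ID-to-name label file."""
    with open(subgraph_path) as f:
        subgraph = json.load(f)  # {topic entity: [[head, relation, tail], ...]}
    with open(label_path) as f:
        labels = json.load(f)    # {entity/relation ID: human-readable name}
    return subgraph, labels

def readable_triples(subgraph, labels, topic):
    """Resolve the IDs in one topic entity's triples, falling back to the raw ID."""
    return [
        [labels.get(h, h), labels.get(r, r), labels.get(t, t)]
        for h, r, t in subgraph.get(topic, [])
    ]
```

For Wikidata-based folders the label file covers both entity and relation IDs, so a single lookup table suffices; for Freebase-based folders the relation strings are already readable and only entity IDs need resolving.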
|
phuctran75/buzzsentiment | phuctran75 | "2024-05-09T10:28:26Z" | 0 | 0 | [
"license:llama3",
"region:us"
] | null | "2024-05-09T10:28:26Z" | ---
license: llama3
---
|
talalmuzaffar/empathetic_dataset_with_correct_context | talalmuzaffar | "2024-05-09T10:28:53Z" | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-05-09T10:28:50Z" | ---
dataset_info:
features:
- name: Context
dtype: string
- name: Input
dtype: string
- name: Response
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 2567028
num_examples: 8000
download_size: 961982
dataset_size: 2567028
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|