datasetId | author | last_modified | downloads | likes | tags | task_categories | createdAt | card
---|---|---|---|---|---|---|---|---|
barbaroo/STS | barbaroo | "2024-05-29T13:03:15Z" | 0 | 0 | [
"language:fo",
"license:cc-by-4.0",
"size_categories:1K<n<10K",
"format:csv",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-03-08T13:43:49Z" | ---
language:
- fo
size_categories:
- 1K<n<10K
license: cc-by-4.0
---
This is a synthetic Faroese Semantic Textual Similarity dataset. Labels range from 0 (no similarity) to 5 (the two sentences are completely equivalent).
The dataset was generated by:
- Translating sentences from the Basic Faroese Language Resource Kit (BLARK) corpus into English using a Nordic LLM, GPT-Sw3.
- Comparing sentences to each other in terms of semantic similarity with Sentence-BERT (SBERT).
- Sampling sentence pairs uniformly across similarity scores to compile a balanced dataset.
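The uniform-sampling step can be sketched as follows — a toy illustration of balancing by rounded similarity score, not the authors' actual code:

```python
import random
from collections import defaultdict

def sample_balanced(pairs, per_class, seed=0):
    """Sample an equal number of sentence pairs per rounded similarity class (0-5).

    `pairs` is a list of (sentence_a, sentence_b, score) tuples with scores in [0, 5].
    """
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for a, b, score in pairs:
        by_class[round(score)].append((a, b, score))
    balanced = []
    for label in sorted(by_class):
        bucket = by_class[label]
        balanced.extend(rng.sample(bucket, min(per_class, len(bucket))))
    return balanced

# Toy usage with synthetic similarity scores
rng = random.Random(1)
pairs = [(f"sentence_a_{i}", f"sentence_b_{i}", rng.uniform(0, 5)) for i in range(600)]
balanced = sample_balanced(pairs, per_class=20)
```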
The dataset contains 200 sentences for each class (Similarity = 0,1,2,3,4,5). |
CognitiveLab/Arc_kan | CognitiveLab | "2024-03-10T14:24:50Z" | 0 | 1 | [
"language:kn",
"size_categories:1K<n<10K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-03-08T14:03:15Z" | ---
language:
- kn
configs:
- config_name: ARC Challenge
data_files:
- split: train
path: "ARC Challenge/arc_kan-train.json"
- split: test
path: "ARC Challenge/arc_kan-test.json"
- split: validation
path: "ARC Challenge/arc_kan-validation.json"
- config_name: ARC Easy
data_files:
- split: train
path: "ARC Easy/arc_easy_kan-train.json"
- split: test
path: "ARC Easy/arc_easy_kan-test.json"
- split: validation
path: "ARC Easy/arc_easy_kan-validation.json"
---
# ARC Kannada |
liuhyuu/NetEaseCrowd | liuhyuu | "2024-06-05T09:31:30Z" | 0 | 0 | [
"language:en",
"license:cc-by-sa-4.0",
"size_categories:1M<n<10M",
"format:parquet",
"modality:tabular",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2403.08826",
"region:us",
"Crowdsourcing",
"Truth Inference",
"Label Aggregation"
] | null | "2024-03-08T14:25:29Z" | ---
license: cc-by-sa-4.0
language:
- en
tags:
- Crowdsourcing
- Truth Inference
- Label Aggregation
pretty_name: 'NetEaseCrowd: A Dataset for Long-term and Online Crowdsourcing Truth Inference'
size_categories:
- 1M<n<10M
---
# 🧑‍🤝‍🧑 NetEaseCrowd: A Dataset for Long-term and Online Crowdsourcing Truth Inference
[View it in GitHub](https://github.com/fuxiAIlab/NetEaseCrowd-Dataset)
## Introduction
We introduce NetEaseCrowd, a large-scale crowdsourcing annotation dataset collected from a mature Chinese data crowdsourcing platform run by NetEase Inc.
The NetEaseCrowd dataset contains about **2,400** workers, **1,000,000** tasks, and **6,000,000** annotations between them, collected over a period of about six months.
In this dataset, we provide ground truths for all tasks and record timestamps for all annotations.
### Task
The NetEaseCrowd dataset is built on a gesture comparison task. Each task presents three choices: two are similar gestures and one is not. Annotators are required to pick out the different one.
### Comparison with existing datasets
Compared with the existing crowdsourcing datasets, our NetEaseCrowd dataset has the following characteristics:
| Characteristic | Existing datasets | NetEaseCrowd dataset |
|----------------|------------------------------------------------------|-----------------------------------------------------------|
| Scalability | Relatively small sizes in #workers/tasks/annotations | Large-scale data collection with 6 million annotations |
| Timestamps | Short-term data with no timestamps recorded | Complete timestamps recorded during a 6-month timespan |
| Task Type | Single type of tasks | Various task types with different required capabilities |
<!-- ## Citation
If you use the dataset in your work, please cite:
@inproceedings{TODO} -->
## Dataset Statistics
The basic statistics of NetEaseCrowd dataset and [other previous datasets](#other-public-datasets) are as follows:
| Dataset | \#Worker | \#Task | \#Groundtruth | \#Anno | Avg(\#Anno/worker) | Avg(\#Anno/task) | Timestamp | Task type |
|--------------------------------------------|----------|---------|---------------|-----------|--------------------|------------------|--------------|-----------|
| NetEaseCrowd | 2,413 | 999,799 | 999,799 | 6,016,319 | 2,493.3 | 6.0 | ✔︎ | Multiple |
| Adult | 825 | 11,040 | 333 | 92,721 | 112.4 | 8.4 | ✘ | Single |
| Birds | 39 | 108 | 108 | 4,212 | 108.0 | 39.0 | ✘ | Single |
| Dog | 109 | 807 | 807 | 8,070 | 74.0 | 10.0 | ✘ | Single |
| CF | 461 | 300 | 300 | 1,720 | 3.7 | 5.7 | ✘ | Single |
| CF\_amt | 110 | 300 | 300 | 6,030 | 54.8 | 20.1 | ✘ | Single |
| Emotion | 38 | 700 | 565 | 7,000 | 184.2 | 10.0 | ✘ | Single |
| Smile | 64 | 2,134 | 159 | 30,319 | 473.7 | 14.2 | ✘ | Single |
| Face | 27 | 584 | 584 | 5,242 | 194.1 | 9.0 | ✘ | Single |
| Fact | 57 | 42,624 | 576 | 216,725 | 3,802.2 | 5.1 | ✘ | Single |
| MS | 44 | 700 | 700 | 2,945 | 66.9 | 4.2 | ✘ | Single |
| product | 176 | 8,315 | 8,315 | 24,945 | 141.7 | 3.0 | ✘ | Single |
| RTE | 164 | 800 | 800 | 8,000 | 48.8 | 10.0 | ✘ | Single |
| Sentiment | 1,960 | 98,980 | 1,000 | 569,375 | 290.5 | 5.8 | ✘ | Single |
| SP | 203 | 4,999 | 4,999 | 27,746 | 136.7 | 5.6 | ✘ | Single |
| SP\_amt | 143 | 500 | 500 | 10,000 | 69.9 | 20.0 | ✘ | Single |
| Trec | 762 | 19,033 | 2,275 | 88,385 | 116.0 | 4.6 | ✘ | Single |
| Tweet | 85 | 1,000 | 1,000 | 20,000 | 235.3 | 20.0 | ✘ | Single |
| Web | 177 | 2,665 | 2,653 | 15,567 | 87.9 | 5.8 | ✘ | Single |
| ZenCrowd\_us | 74 | 2,040 | 2,040 | 12,190 | 164.7 | 6.0 | ✘ | Single |
| ZenCrowd\_in | 25 | 2,040 | 2,040 | 11,205 | 448.2 | 5.5 | ✘ | Single |
| ZenCrowd\_all | 78 | 2,040 | 2,040 | 21,855 | 280.2 | 10.7 | ✘ | Single |
<!-- The basic statistics of NetEaseCrowd dataset shows as follows:
| | NetEaseCrowd |
| ------------- | ------------ |
| #Workers | 2,413 |
| #Tasks | 999,799 |
| #Groundtruths | 999,799 |
| #Annotations | 6,016,319 | -->
## Data Content and Format
### Obtain the data
Two ways to access the dataset:
* Download the full NetEaseCrowd dataset directly from [Hugging Face](https://huggingface.co/datasets/liuhyuu/NetEaseCrowd) [**Recommended**]
* Under the [`data/` folder](https://github.com/fuxiAIlab/NetEaseCrowd-Dataset/tree/main/data), the NetEaseCrowd dataset is provided as CSV partitions, each named `NetEaseCrowd_part_x.csv`. Concatenate them to obtain the entire dataset.
### Dataset format
In the dataset, each record represents an interaction between a worker and a task, with the following columns:
* **taskId**: The unique id of the annotated task.
* **tasksetId**: The unique id of the task set. Each task set contains an unspecified number of tasks, and each task belongs to exactly one task set.
* **workerId**: The unique id of the worker.
* **answer**: The annotation given by the worker, an enumerated value starting from 0.
* **completeTime**: The integer timestamp recording the completion time of the annotation.
* **truth**: The ground truth of the annotated task which, consistent with **answer**, is also an enumerated value starting from 0.
* **capability**: The unique id of the capability required by the annotated task set. Each task set belongs to exactly one capability.
*For privacy reasons, all sensitive content, such as ids, has been anonymized.*
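For instance, since **completeTime** is a millisecond Unix timestamp, it can be converted to a calendar date like this (a quick sketch):

```python
from datetime import datetime, timezone

# completeTime values are millisecond Unix timestamps
ts_ms = 1661917345953
dt = datetime.fromtimestamp(ts_ms / 1000, tz=timezone.utc)
print(dt.date())  # 2022-08-31
```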
### Data sample
| tasksetId | taskId | workerId | answer | completeTime | truth | capability |
|-----------|---------------------|----------|--------|---------------|-------|------------|
| 6980 | 1012658482844795232 | 64 | 2 | 1661917345953 | 1 | 69 |
| 6980 | 1012658482844795232 | 150 | 1 | 1661871234755 | 1 | 69 |
| 6980 | 1012658482844795232 | 263 | 0 | 1661855450281 | 1 | 69 |
In the example above, there are three annotations, all from the same task set 6980 and the same task 1012658482844795232. Three annotators, with ids 64, 150, and 263, provide annotations of 2, 1, and 0, respectively. They completed the task at different times. The truth label for this task is 1, and the capability id of the task is 69.
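With records in this format, a simple majority-vote (MV) aggregation over one task's annotations can be sketched in a few lines — a minimal illustration over the three sample rows, not the evaluation code used for the paper:

```python
from collections import Counter

def majority_vote(annotations):
    """Aggregate (answer, completeTime) pairs for one task by majority vote.

    Ties are broken in favor of the answer whose first annotation
    arrived earliest (one of many possible tie-breaking rules).
    """
    counts = Counter(answer for answer, _ in annotations)
    top = max(counts.values())
    tied = {answer for answer, c in counts.items() if c == top}
    earliest = min((a for a in annotations if a[0] in tied), key=lambda a: a[1])
    return earliest[0]

# The three sample annotations above: a three-way tie, so the earliest wins
rows = [(2, 1661917345953), (1, 1661871234755), (0, 1661855450281)]
print(majority_vote(rows))  # 0
```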
## Baseline Models
We test several existing truth inference methods on our dataset; detailed analysis and more experimental setups can be found in our paper.
| Method | Accuracy | F1-score |
|----------------|----------|----------|
| MV | 0.92695 | 0.92692 |
| DS | 0.95178 | 0.94817 |
| MACE | 0.95991 | 0.94957 |
| Wawa | 0.94814 | 0.94445 |
| ZeroBasedSkill | 0.94898 | 0.94585 |
| GLAD | 0.95064 | 0.95058 |
| EBCC | 0.91071 | 0.90996 |
| ZC | 0.95305 | 0.95301 |
| TiReMGE | 0.92713 | 0.92706 |
| LAA | 0.94173 | 0.94169 |
| BiLA | 0.88036 | 0.87896 |
### Test with the dataset directly from crowd-kit
The NetEaseCrowd dataset has been integrated into [crowd-kit](https://github.com/Toloka/crowd-kit)
(see the pull request [here](https://github.com/Toloka/crowd-kit/pull/101)),
so you can use it directly with the following code (requires crowd-kit version > 1.2.1):
```python
from crowdkit.aggregation import DawidSkene
from crowdkit.datasets import load_dataset
df, gt = load_dataset('netease_crowd')
ds = DawidSkene(10)
result = ds.fit_predict(df)
print(len(result))
# 999799
```
## Other public datasets
We provide a curated list of other public datasets for the truth inference task.
| Dataset Name | Resource |
|----------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| adult | Quality management on amazon mechanical turk. [[paper](https://dl.acm.org/doi/abs/10.1145/1837885.1837906)][[data](https://github.com/ipeirotis/Get-Another-Label/tree/master/data)] |
| sentiment<br>fact | Workshops Held at the First AAAI Conference on Human Computation and Crowdsourcing: A Report. [[paper](https://ojs.aaai.org/index.php/aimagazine/article/view/2537/2427)][[data](https://sites.google.com/site/crowdscale2013/home)] |
| MS<br>zencrowd_all<br>zencrowd_us<br>zencrowd_in<br>sp<br>sp_amt<br>cf<br>cf_amt | The active crowd toolkit: An open-source tool for benchmarking active learning algorithms for crowdsourcing research. [[paper](https://ojs.aaai.org/index.php/HCOMP/article/download/13256/13104)][[data](https://github.com/orchidproject/active-crowd-toolkit)] |
| Product<br>tweet<br>dog<br>face<br>duck<br>relevance<br>smile | Truth inference in crowdsourcing: Is the problem solved? [[paper](https://hub.hku.hk/bitstream/10722/243527/1/content.pdf?accept=1)][[data](https://zhydhkcws.github.io/crowd_truth_inference/)] <br> *Note that tweet dataset is called sentiment in this source. It is different from the sentiment dataset in CrowdScale2013.* |
| bird<br>rte<br>web<br>trec | Spectral methods meet em: A provably optimal algorithm for crowdsourcing. [[paper](https://proceedings.neurips.cc/paper/2014/file/788d986905533aba051261497ecffcbb-Paper.pdf)][[data](https://github.com/zhangyuc/SpectralMethodsMeetEM)] |
## Citation
If you use this project in your research or work, please cite it using the following BibTeX entry:
```bibtex
@misc{wang2024dataset,
title={A Dataset for the Validation of Truth Inference Algorithms Suitable for Online Deployment},
author={Fei Wang and Haoyu Liu and Haoyang Bi and Xiangzhuang Shen and Renyu Zhu and Runze Wu and Minmin Lin and Tangjie Lv and Changjie Fan and Qi Liu and Zhenya Huang and Enhong Chen},
year={2024},
eprint={2403.08826},
archivePrefix={arXiv},
primaryClass={cs.HC}
}
```
## License
The NetEaseCrowd dataset is licensed under [CC-BY-SA-4.0](https://creativecommons.org/licenses/by-sa/4.0/deed.en). |
aborruso/opencoesione | aborruso | "2024-03-08T14:40:10Z" | 0 | 0 | [
"license:cc-by-4.0",
"size_categories:1M<n<10M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-03-08T14:37:34Z" | ---
license: cc-by-4.0
---
|
schwepat/anonymized-amazon-dataset | schwepat | "2024-03-11T11:44:44Z" | 0 | 0 | [
"license:mit",
"size_categories:1M<n<10M",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-03-08T14:39:42Z" | ---
license: mit
---
|
JaehyungKim/p2c_dynasent2_all | JaehyungKim | "2024-03-08T15:16:03Z" | 0 | 0 | [
"license:other",
"size_categories:10K<n<100K",
"format:csv",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-03-08T15:15:33Z" | ---
license: other
license_name: following-original-dataset
license_link: LICENSE
---
|
minimario/math-openwebmath-retrievals | minimario | "2024-03-08T15:26:58Z" | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-03-08T15:16:09Z" | ---
dataset_info:
features:
- name: problem
dtype: string
- name: level
dtype: string
- name: type
dtype: string
- name: solution
dtype: string
- name: answer
dtype: string
- name: p_retrievals
sequence: string
- name: s_retrievals
sequence: string
- name: ps_retrievals
sequence: string
splits:
- name: test
num_bytes: 261081887
num_examples: 5000
- name: train
num_bytes: 399597127
num_examples: 7500
download_size: 329948190
dataset_size: 660679014
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
- split: train
path: data/train-*
---
|
anjan77/guanaco-llama2-1k | anjan77 | "2024-03-08T15:21:25Z" | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-03-08T15:21:24Z" | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 1654448
num_examples: 1000
download_size: 966692
dataset_size: 1654448
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
audibeal/my-cool-dataset-4 | audibeal | "2024-03-08T15:49:09Z" | 0 | 0 | [
"size_categories:1K<n<10K",
"format:csv",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-03-08T15:37:18Z" | ---
configs:
- config_name: train
data_files: "train.csv"
- config_name: dev
data_files: "dev.csv"
- config_name: test
data_files: "test.csv"
--- |
AabirDey/job-queries-and-customer-service | AabirDey | "2024-05-01T07:43:10Z" | 0 | 0 | [
"language:en",
"license:mit",
"size_categories:10K<n<100K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-03-08T15:58:22Z" | ---
language:
- en
license: mit
---
|
iNeil77/pseudo-mini-pile | iNeil77 | "2024-03-09T17:25:32Z" | 0 | 3 | [
"task_categories:text-generation",
"language:en",
"size_categories:100M<n<1B",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-generation"
] | "2024-03-08T16:10:40Z" | ---
dataset_info:
- config_name: all
features:
- name: content
dtype: string
splits:
- name: train
num_bytes: 360187653412.6177
num_examples: 56194997
download_size: 199030076349
dataset_size: 360187653412.6177
- config_name: c4_realnews
features:
- name: content
dtype: string
splits:
- name: train
num_bytes: 31597106256.723488
num_examples: 11427438
download_size: 19889880484
dataset_size: 31597106256.723488
- config_name: openwebtext
features:
- name: content
dtype: string
splits:
- name: train
num_bytes: 30974178275.039234
num_examples: 6474479
download_size: 19069709415
dataset_size: 30974178275.039234
- config_name: peS2o
features:
- name: content
dtype: string
splits:
- name: train
num_bytes: 221900508006.5479
num_examples: 32612199
download_size: 116217303065
dataset_size: 221900508006.5479
- config_name: redpajama_books
features:
- name: content
dtype: string
splits:
- name: train
num_bytes: 49246538575.26426
num_examples: 107443
download_size: 29612204926
dataset_size: 49246538575.26426
- config_name: stackexchange
features:
- name: content
dtype: string
splits:
- name: train
num_bytes: 2034535930.2150385
num_examples: 716532
download_size: 1222605537
dataset_size: 2034535930.2150385
- config_name: uspto
features:
- name: content
dtype: string
splits:
- name: train
num_bytes: 14755999149.910166
num_examples: 3247716
download_size: 7058272149
dataset_size: 14755999149.910166
- config_name: wiki
features:
- name: content
dtype: string
splits:
- name: train
num_bytes: 7528525537.163156
num_examples: 1609190
download_size: 4593971902
dataset_size: 7528525537.163156
configs:
- config_name: all
data_files:
- split: train
path: all/train-*
- config_name: c4_realnews
data_files:
- split: train
path: c4_realnews/train-*
- config_name: openwebtext
data_files:
- split: train
path: openwebtext/train-*
- config_name: peS2o
data_files:
- split: train
path: peS2o/train-*
- config_name: redpajama_books
data_files:
- split: train
path: redpajama_books/train-*
- config_name: stackexchange
data_files:
- split: train
path: stackexchange/train-*
- config_name: uspto
data_files:
- split: train
path: uspto/train-*
- config_name: wiki
data_files:
- split: train
path: wiki/train-*
task_categories:
- text-generation
language:
- en
size_categories:
- 10M<n<100M
---
A small, aggressively cleaned and de-duped pre-training corpus for academic settings. It aims to recreate something akin to [The Pile](https://huggingface.co/datasets/EleutherAI/pile) but prioritizes quality for the constrained token budget academic researchers live with.
It has seven config subsets plus an eighth `all` subset that combines them, for a total of ~91B tokens (GPT-2 tokenizer estimate). The subsets are as follows:
1. `c4_realnews`: The RealNews domain subset of the C4 dataset containing news articles.
2. `openwebtext`: The OpenWebText dataset containing the contents of the links mentioned in Reddit posts with at least 3 upvotes.
3. `peS2o`: The PeS2o dataset containing academic articles from Semantic Scholar.
4. `redpajama_books`: The books subset of RedPajama V1.
5. `stackexchange`: The EN StackExchange non-code subset of the BigScience ROOTs dataset.
6. `uspto`: The EN USPTO patent applications contents' subset of the BigScience ROOTs dataset.
7. `wiki`: The EN Wiki subset of the BigScience ROOTs dataset.
The following processing and filtering steps have been applied:
1. Removed citation text and bibliography information for academic texts.
2. Ran a perplexity filter using a KenLM model trained on the EN OSCAR corpus and removed documents with a perplexity of more than 325 or less than 7.
3. Removed samples with a repeating <=4-gram proportion of more than 15%.
4. Removed samples which have lower than 99% confidence of being EN using the lingua language detector.
5. Performed an aggressive MinHash de-dupe using a shingle size of 8 and a low threshold of 0.5. |
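Step 3 above can be sketched as follows — a rough stand-in for a repeating-n-gram filter; the exact definition used for this dataset may differ:

```python
def repeating_ngram_proportion(tokens, max_n=4):
    """Fraction of n-grams (n <= max_n) that repeat an earlier n-gram."""
    total = dup = 0
    for n in range(1, max_n + 1):
        seen = set()
        for i in range(len(tokens) - n + 1):
            gram = tuple(tokens[i:i + n])
            if gram in seen:
                dup += 1
            else:
                seen.add(gram)
            total += 1
    return dup / total if total else 0.0

# A highly repetitive sample fails the 15% threshold
tokens = "the cat sat on the mat the cat sat on the mat".split()
keep = repeating_ngram_proportion(tokens) <= 0.15
print(keep)  # False
```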
Iker/NoticIA_Human_Validation | Iker | "2024-04-12T10:57:08Z" | 0 | 0 | [
"task_categories:summarization",
"multilinguality:monolingual",
"source_datasets:original",
"language:es",
"license:cc-by-nc-sa-4.0",
"size_categories:n<1K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2404.07611",
"region:us",
"summarization",
"clickbait",
"news"
] | [
"summarization"
] | "2024-03-08T16:17:21Z" | ---
language:
- es
license: cc-by-nc-sa-4.0
multilinguality:
- monolingual
size_categories:
- n<1K
source_datasets:
- original
task_categories:
- summarization
pretty_name: NoticIA Human Validation
dataset_info:
features:
- name: web_url
dtype: string
- name: web_headline
dtype: string
- name: summary
dtype: string
- name: summary2
dtype: string
- name: web_text
dtype: string
splits:
- name: test
num_examples: 100
configs:
- config_name: default
data_files:
- split: test
path: test.jsonl
tags:
- summarization
- clickbait
- news
---
<p align="center">
<img src="https://huggingface.co/datasets/Iker/NoticIA/resolve/main/assets/logo.png" style="height: 250px;">
</p>
<h3 align="center">"A Clickbait Article Summarization Dataset in Spanish."</h3>
This repository contains the manual annotations from a second human to validate the test set of the NoticIA dataset.
The full NoticIA dataset is available here: [https://huggingface.co/datasets/Iker/NoticIA](https://huggingface.co/datasets/Iker/NoticIA)
# Data explanation
- **web_url** (str): The URL of the news article.
- **web_headline** (str): The headline of the article, which is clickbait.
- **summary** (str): The original summary in the NoticIA dataset.
- **summary2** (str): The second summary, written by another human to validate the quality of `summary`.
- **web_text** (str): The body of the article.
# Dataset Description
- **Curated by:** [Iker García-Ferrero](https://ikergarcia1996.github.io/Iker-Garcia-Ferrero/), [Begoña Altuna](https://www.linkedin.com/in/bego%C3%B1a-altuna-78014139)
- **Language(s) (NLP):** Spanish
- **License:** apache-2.0
# Dataset Usage
```Python
# pip install datasets evaluate rouge-score
from datasets import load_dataset
from evaluate import load
dataset = load_dataset("Iker/NoticIA_Human_Validation", split="test")
rouge = load("rouge")
results = rouge.compute(
predictions=[x["summary2"] for x in dataset],
references=[[x["summary"]] for x in dataset],
use_aggregator=True,
)
print(results)
```
# Uses
This dataset is intended to build models tailored for academic research that can extract information from large texts. The objective is to research whether current LLMs, given a question formulated as a Clickbait headline, can locate the answer within the article body and summarize the information in a few words. The dataset also aims to serve as a task to evaluate the performance of current LLMs in Spanish.
# Out-of-Scope Use
You cannot use this dataset to develop systems that directly harm the newspapers included in the dataset. This includes using the dataset to train profit-oriented LLMs capable of generating articles from a short text or headline, as well as developing profit-oriented bots that automatically summarize articles without the permission of the article's owner. Additionally, you are not permitted to train a system with this dataset that generates clickbait headlines.
This dataset contains text and headlines from newspapers; therefore, you cannot use it for commercial purposes unless you have the license for the data.
# Dataset Creation
The dataset has been meticulously created by hand. We utilize two sources to compile Clickbait articles:
- The Twitter user [@ahorrandoclick1](https://twitter.com/ahorrandoclick1), who reposts Clickbait articles along with a hand-crafted summary. Although we use their summaries as a reference, most of them have been rewritten (750 examples from this source).
- The web demo [⚔️ClickbaitFighter⚔️](https://iker-clickbaitfighter.hf.space/), which operates a pre-trained model using an early iteration of our dataset. We collect all the model inputs/outputs and manually correct them (100 examples from this source).
# Who are the annotators?
The dataset was originally annotated by [Iker García-Ferrero](https://ikergarcia1996.github.io/Iker-Garcia-Ferrero/) and has been validated by [Begoña Altuna](https://www.linkedin.com/in/bego%C3%B1a-altuna-78014139).
The annotation took ~40 hours.
# Citation
```bibtex
@misc{noticia2024,
title={NoticIA: A Clickbait Article Summarization Dataset in Spanish},
author={Iker García-Ferrero and Begoña Altuna},
year={2024},
eprint={2404.07611},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
valurank/spam_ham_comments | valurank | "2024-03-08T16:23:11Z" | 0 | 0 | [
"license:other",
"size_categories:1K<n<10K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-03-08T16:22:38Z" | ---
license: other
license_name: valurank
license_link: LICENSE
---
|
imperialwarrior/open-australian-legal-qa-paraphrased-easy-gemini | imperialwarrior | "2024-03-10T08:46:14Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-03-08T16:44:12Z" | ---
dataset_info:
features:
- name: index
dtype: 'null'
- name: pipeline_1_result
dtype: string
- name: pipeline_1_result_embeddings
dtype: string
- name: pipeline_2_context
dtype: string
- name: pipeline_2_result
dtype: string
- name: pipeline_2_result_embeddings
dtype: string
- name: pipeline_3_context
dtype: string
- name: pipeline_3_result
dtype: string
- name: pipeline_3_result_embeddings
dtype: string
- name: pipeline_4_context
dtype: string
- name: pipeline_4_result
dtype: string
- name: pipeline_4_result_embeddings
dtype: string
- name: pipeline_5_context
dtype: string
- name: pipeline_5_result
dtype: string
- name: pipeline_5_result_embeddings
dtype: string
- name: pipeline_6_context
dtype: string
- name: pipeline_6_result
dtype: string
- name: pipeline_6_result_embeddings
dtype: string
- name: pipeline_7_context
dtype: string
- name: pipeline_7_result
dtype: string
- name: pipeline_7_result_embeddings
dtype: string
- name: referenced_question
dtype: string
- name: answer
dtype: string
- name: question
dtype: string
- name: question_non_retrieval_embeddings
dtype: string
- name: answer_non_retrieval_embeddings
dtype: string
- name: question_retrieval_embeddings
dtype: string
- name: answer_retrieval_embeddings
dtype: string
- name: __index_level_0__
dtype: float64
- name: case_index
dtype: float64
- name: pipeline_6_case_indexes
sequence: int64
- name: pipeline_7_case_indexes
sequence: int64
splits:
- name: train
num_bytes: 41703799
num_examples: 207
download_size: 14322382
dataset_size: 41703799
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
imperialwarrior/open-australian-legal-qa-paraphrased-hard-gemini | imperialwarrior | "2024-03-10T08:53:51Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-03-08T16:45:03Z" | ---
dataset_info:
features:
- name: index
dtype: 'null'
- name: pipeline_1_result
dtype: string
- name: pipeline_1_result_embeddings
dtype: string
- name: pipeline_2_context
dtype: string
- name: pipeline_2_result
dtype: string
- name: pipeline_2_result_embeddings
dtype: string
- name: pipeline_3_context
dtype: string
- name: pipeline_3_result
dtype: string
- name: pipeline_3_result_embeddings
dtype: string
- name: pipeline_4_context
dtype: string
- name: pipeline_4_result
dtype: string
- name: pipeline_4_result_embeddings
dtype: string
- name: pipeline_5_context
dtype: string
- name: pipeline_5_result
dtype: string
- name: pipeline_5_result_embeddings
dtype: string
- name: pipeline_6_context
dtype: string
- name: pipeline_6_result
dtype: string
- name: pipeline_6_result_embeddings
dtype: string
- name: pipeline_7_context
dtype: string
- name: pipeline_7_result
dtype: string
- name: pipeline_7_result_embeddings
dtype: string
- name: referenced_question
dtype: string
- name: answer
dtype: string
- name: question
dtype: string
- name: question_non_retrieval_embeddings
dtype: string
- name: answer_non_retrieval_embeddings
dtype: string
- name: question_retrieval_embeddings
dtype: string
- name: answer_retrieval_embeddings
dtype: string
- name: __index_level_0__
dtype: float64
- name: case_index
dtype: float64
- name: pipeline_6_case_indexes
sequence: int64
- name: pipeline_7_case_indexes
sequence: int64
splits:
- name: train
num_bytes: 40967131
num_examples: 203
download_size: 14378490
dataset_size: 40967131
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
ariji1/acn_train_Test | ariji1 | "2024-03-08T16:49:23Z" | 0 | 0 | [
"license:apache-2.0",
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-03-08T16:48:35Z" | ---
license: apache-2.0
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 74288.00649350649
num_examples: 123
- name: test
num_bytes: 18722.993506493505
num_examples: 31
download_size: 49882
dataset_size: 93011.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
GusPuffy/python-decompiler-37-0.7-train | GusPuffy | "2024-03-23T17:25:00Z" | 0 | 0 | [
"license:unknown",
"size_categories:100K<n<1M",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"python",
"python3.7",
"code"
] | null | "2024-03-08T16:53:31Z" | ---
license: unknown
tags:
- python
- python3.7
- code
pretty_name: p
---
I don't know the licenses of the code in this dataset, so keep that in mind if you are using it.
The source code was pulled from the top 8k PyPI projects.
If you own part of the data in this dataset and want to request that it be removed, please let me know and I will remove it. |
ritwikraha/random-storage | ritwikraha | "2024-10-02T10:41:36Z" | 0 | 0 | [
"license:creativeml-openrail-m",
"size_categories:n<1K",
"format:imagefolder",
"modality:image",
"library:datasets",
"library:mlcroissant",
"region:us"
] | null | "2024-03-08T16:56:13Z" | ---
license: creativeml-openrail-m
---
|
carlavic/Ani | carlavic | "2024-03-08T17:06:10Z" | 0 | 0 | [
"license:openrail",
"region:us"
] | null | "2024-03-08T17:06:10Z" | ---
license: openrail
---
|
aryamannningombam/indian-female-tts-dataset | aryamannningombam | "2024-03-09T18:14:05Z" | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"modality:timeseries",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-03-08T17:11:41Z" | ---
dataset_info:
features:
- name: file
dtype: string
- name: text
dtype: string
- name: tag
dtype: string
- name: file_path
dtype: string
- name: y
sequence: float32
- name: emotional_embedding
sequence: int64
- name: non_characters
dtype: 'null'
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 923807734
num_examples: 2843
download_size: 911770863
dataset_size: 923807734
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
florentgbelidji/ncbi_extracted_running | florentgbelidji | "2024-03-14T14:01:10Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-03-08T17:17:07Z" | ---
dataset_info:
features:
- name: id
dtype: string
- name: text
dtype: string
- name: title
dtype: string
- name: date
dtype: string
- name: authors
dtype: string
- name: language
dtype: string
splits:
- name: train
num_bytes: 19228492
num_examples: 432
download_size: 10192770
dataset_size: 19228492
---
# Dataset Card for "ncbi_extracted_running"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
previsone/modwell-kitch03 | previsone | "2024-05-13T15:01:25Z" | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-03-08T17:23:40Z" | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 824628362.19
num_examples: 6862
download_size: 823779542
dataset_size: 824628362.19
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
charkgan/TheMET | charkgan | "2024-03-08T17:23:48Z" | 0 | 0 | [
"license:cc0-1.0",
"region:us"
] | null | "2024-03-08T17:23:48Z" | ---
license: cc0-1.0
---
|
Romildon/boa | Romildon | "2024-03-08T17:24:48Z" | 0 | 0 | [
"license:openrail",
"size_categories:n<1K",
"format:audiofolder",
"modality:audio",
"library:datasets",
"library:mlcroissant",
"region:us"
] | null | "2024-03-08T17:23:55Z" | ---
license: openrail
---
|
FreedomIntelligence/ALLaVA-4V-Chinese | FreedomIntelligence | "2024-04-29T15:26:44Z" | 0 | 12 | [
"task_categories:question-answering",
"task_categories:text-generation",
"language:zh",
"license:apache-2.0",
"size_categories:100K<n<1M",
"format:json",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2402.11684",
"region:us",
"GPT-4V",
"LVLM",
"Vision",
"Language"
] | [
"question-answering",
"text-generation"
] | "2024-03-08T17:39:02Z" | ---
license: apache-2.0
task_categories:
- question-answering
- text-generation
language:
- zh
tags:
- GPT-4V
- LVLM
- Vision
- Language
size_categories:
- 1M<n<10M
configs:
- config_name: allava_laion
data_files:
- split: caption
path: "allava_laion/ALLaVA-Caption-LAION-4V_Chinese.json"
- split: instruct
path: "allava_laion/ALLaVA-Instruct-LAION-4V_Chinese.json"
- config_name: allava_vflan
data_files:
- split: caption
path: "allava_vflan/ALLaVA-Caption-VFLAN-4V_Chinese.json"
- split: instruct
path: "allava_vflan/ALLaVA-Instruct-VFLAN-4V_Chinese.json"
# - config_name: allava_laion_instruction
# data_files: "allava_laion/ALLaVA-Instruct-LAION-4V.json"
# configs:
# - config_name: default
# data_files:
# - split: allava_laion_caption
# path: "allava_laion/ALLaVA-Caption-LAION-4V.json"
# - split: allava_laion_instruction
# path: "allava_laion/ALLaVA-Instruction-LAION-4V.json"
# configs:
# - config_name: default
# - data_files:
# - split: allava_laion_caption
# - path:
# - "allava_laion/ALLaVA-Caption-LAION-4V.json"
# - split: allava_laion_instruction
# - path:
# - "allava_laion/ALLaVA-Instruction-LAION-4V.json"
---
## ALLaVA-4V for Chinese
This is the Chinese version of the ALLaVA-4V data. We have translated the ALLaVA-4V data into Chinese through ChatGPT and instructed ChatGPT not to translate content related to OCR.
The original dataset can be found [here](https://huggingface.co/datasets/FreedomIntelligence/ALLaVA-4V), and the image data can be downloaded from [ALLaVA-4V](https://huggingface.co/datasets/FreedomIntelligence/ALLaVA-4V).
#### Citation
If you find our data useful, please consider citing our work! We are FreedomIntelligence from Shenzhen Research Institute of Big Data and The Chinese University of Hong Kong, Shenzhen.
```
@misc{chen2024allava,
title={ALLaVA: Harnessing GPT4V-synthesized Data for A Lite Vision-Language Model},
author={Guiming Hardy Chen and Shunian Chen and Ruifei Zhang and Junying Chen and Xiangbo Wu and Zhiyi Zhang and Zhihong Chen and Jianquan Li and Xiang Wan and Benyou Wang},
year={2024},
eprint={2402.11684},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
Kamran1367/Resume_Classificattion_Curated | Kamran1367 | "2024-03-09T16:54:56Z" | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-03-08T17:53:03Z" | ---
dataset_info:
features:
- name: Resume_str_cleaned
dtype: string
- name: Category
dtype: string
splits:
- name: train
num_bytes: 14301269
num_examples: 2484
download_size: 6769854
dataset_size: 14301269
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
This is a curated Resume Classification CSV file. |
methane69/finetuning_LLaVA | methane69 | "2024-03-08T18:05:40Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-03-08T18:05:37Z" | ---
dataset_info:
features:
- name: image
dtype: image
- name: caption
dtype: string
splits:
- name: train
num_bytes: 8426655.0
num_examples: 138
- name: test
num_bytes: 201798.0
num_examples: 3
download_size: 8321060
dataset_size: 8628453.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
Mitsuki-Sakamoto/alpaca_farm-reward-model-deberta-v3-large-v2-re-preference-64-nsample-16_random | Mitsuki-Sakamoto | "2024-03-10T18:07:07Z" | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-03-08T18:17:35Z" | ---
dataset_info:
- config_name: alpaca_instructions-pythia-1.4b_alpaca_farm_instructions_sft_constant_pa-checkpoint-7500
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: preference
dtype: int64
- name: output_1
dtype: string
- name: output_2
dtype: string
- name: reward_model_prompt_format
dtype: string
- name: gen_prompt_format
dtype: string
- name: gen_kwargs
struct:
- name: do_sample
dtype: bool
- name: max_new_tokens
dtype: int64
- name: pad_token_id
dtype: int64
- name: top_k
dtype: int64
- name: top_p
dtype: float64
- name: reward_1
dtype: float64
- name: reward_2
dtype: float64
- name: n_samples
dtype: int64
- name: reject_select
dtype: string
splits:
- name: preference
num_bytes: 25791877
num_examples: 20001
download_size: 12310829
dataset_size: 25791877
- config_name: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: preference
dtype: int64
- name: output_1
dtype: string
- name: output_2
dtype: string
- name: reward_model_prompt_format
dtype: string
- name: gen_prompt_format
dtype: string
- name: gen_kwargs
struct:
- name: do_sample
dtype: bool
- name: max_new_tokens
dtype: int64
- name: pad_token_id
dtype: int64
- name: top_k
dtype: int64
- name: top_p
dtype: float64
- name: reward_1
dtype: float64
- name: reward_2
dtype: float64
- name: n_samples
dtype: int64
- name: reject_select
dtype: string
splits:
- name: preference
num_bytes: 25837484
num_examples: 20001
download_size: 12262392
dataset_size: 25837484
- config_name: alpaca_instructions-pythia_70m_alpaca_farm_instructions_sft_constant_pa_seed_1
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: preference
dtype: int64
- name: output_1
dtype: string
- name: output_2
dtype: string
- name: reward_model_prompt_format
dtype: string
- name: gen_prompt_format
dtype: string
- name: gen_kwargs
struct:
- name: do_sample
dtype: bool
- name: max_new_tokens
dtype: int64
- name: pad_token_id
dtype: int64
- name: top_k
dtype: int64
- name: top_p
dtype: float64
- name: reward_1
dtype: float64
- name: reward_2
dtype: float64
- name: n_samples
dtype: int64
- name: reject_select
dtype: string
splits:
- name: preference
num_bytes: 25779381
num_examples: 20001
download_size: 11985077
dataset_size: 25779381
configs:
- config_name: alpaca_instructions-pythia-1.4b_alpaca_farm_instructions_sft_constant_pa-checkpoint-7500
data_files:
- split: preference
path: alpaca_instructions-pythia-1.4b_alpaca_farm_instructions_sft_constant_pa-checkpoint-7500/preference-*
- config_name: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1
data_files:
- split: preference
path: alpaca_instructions-pythia_160m_alpaca_farm_instructions_sft_constant_pa_seed_1/preference-*
- config_name: alpaca_instructions-pythia_70m_alpaca_farm_instructions_sft_constant_pa_seed_1
data_files:
- split: preference
path: alpaca_instructions-pythia_70m_alpaca_farm_instructions_sft_constant_pa_seed_1/preference-*
---
|
fede97/external_test_set_v1 | fede97 | "2024-03-11T09:45:15Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-03-08T18:19:30Z" | ---
dataset_info:
features:
- name: stable_unclip
dtype: image
- name: kandisky_2_2
dtype: image
- name: multi_diffusion
dtype: image
- name: self_attention_guidance
dtype: image
- name: latent_consistency_xl
dtype: image
- name: kandisky_3
dtype: image
- name: deepfloyd_if
dtype: image
- name: latent_consistency_model_simianluo
dtype: image
- name: amused
dtype: image
- name: stabilityai_stable_diffusion_2_1_base
dtype: image
- name: kandisky_2_1
dtype: image
- name: sdxl_turbo
dtype: image
- name: stabilityai_stable_diffusion_xl_base_1_0
dtype: image
- name: compvis_stable_diffusion_v1_4
dtype: image
- name: pixart_alpha
dtype: image
- name: id
dtype: string
splits:
- name: train
num_bytes: 71299802571.0
num_examples: 4800
download_size: 71163208124
dataset_size: 71299802571.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
jbrinkma/pile-300k | jbrinkma | "2024-03-08T18:28:37Z" | 0 | 0 | [
"license:mit",
"size_categories:100K<n<1M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-03-08T18:27:36Z" | ---
license: mit
dataset_info:
features:
- name: text
dtype: string
- name: meta
struct:
- name: pile_set_name
dtype: string
splits:
- name: train
num_bytes: 1675060733
num_examples: 300000
download_size: 873058629
dataset_size: 1675060733
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
dar-tau/test-test | dar-tau | "2024-03-08T18:29:52Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:timeseries",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-03-08T18:28:30Z" | ---
dataset_info:
features:
- name: logit_loss
dtype: float64
- name: log_kl_loss
dtype: float64
- name: mask_weight
dtype: float64
- name: probe_loss
dtype: float64
- name: top_k_acc
dtype: float64
- name: attention_maps
sequence: float32
- name: attention_maps_shape
sequence: int64
splits:
- name: train
num_bytes: 83472
num_examples: 3
download_size: 11666
dataset_size: 83472
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Omriy123/Dogs_vs_Cats | Omriy123 | "2024-03-08T18:36:20Z" | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-03-08T18:34:52Z" | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': cat
'1': dog
splits:
- name: train
num_bytes: 525901830.0
num_examples: 25000
download_size: 573158859
dataset_size: 525901830.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
halitefe/lima-tr | halitefe | "2024-03-11T13:45:55Z" | 0 | 4 | [
"task_categories:translation",
"task_categories:text2text-generation",
"task_categories:question-answering",
"language:tr",
"size_categories:1K<n<10K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2305.11206",
"region:us"
] | [
"translation",
"text2text-generation",
"question-answering"
] | "2024-03-08T18:35:51Z" | ---
task_categories:
- translation
- text2text-generation
- question-answering
language:
- tr
size_categories:
- 1K<n<10K
---
# LIMA-TR
This project is dedicated to translating the LIMA (Less Is More for Alignment) dataset from English to Turkish using OpenAI's API (`gpt-3.5-turbo`).
## Source Dataset
The original LIMA (Less Is More for Alignment) dataset is described in the [paper](https://arxiv.org/pdf/2305.11206.pdf) and available as a [dataset](https://huggingface.co/datasets/GAIR/lima). |
tldoan/iNF200 | tldoan | "2024-03-10T00:15:08Z" | 0 | 0 | [
"license:apache-2.0",
"size_categories:10K<n<100K",
"format:webdataset",
"modality:image",
"modality:text",
"library:datasets",
"library:webdataset",
"library:mlcroissant",
"region:us"
] | null | "2024-03-08T18:36:15Z" | ---
license: apache-2.0
---
# A streamlined Approach to Multimodal Few-Shot Class Incremental Learning for Fine-Grained Datasets (CLIP-M3)
Official dataset: iNaturalist Fungi200 (iNF200) |
zliu333/truck_at_port5 | zliu333 | "2024-03-08T18:39:10Z" | 0 | 1 | [
"size_categories:n<1K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-03-08T18:38:31Z" | ---
dataset_info:
features:
- name: image
dtype: image
- name: text
dtype: string
splits:
- name: train
num_bytes: 52878226.0
num_examples: 36
download_size: 52870667
dataset_size: 52878226.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
adorkin/evalatin2024 | adorkin | "2024-09-07T17:47:42Z" | 0 | 0 | [
"task_categories:text-classification",
"language:la",
"license:mit",
"size_categories:10K<n<100K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2405.01159",
"region:us"
] | [
"text-classification"
] | "2024-03-08T18:44:03Z" | ---
license: mit
task_categories:
- text-classification
language:
- la
size_categories:
- 10K<n<100K
configs:
- config_name: default
data_files:
- split: gpt
path: gpt4-turbo-annotations.jsonl
- config_name: gpt
data_files:
- split: gpt
path: gpt4-turbo-annotations.jsonl
- config_name: heuristics
data_files:
- split: heuristics
path: heuristics-annotations.jsonl
---
# TartuNLP at EvaLatin 2024: Emotion Polarity Detection
## BibTeX entry and citation info
```
@inproceedings{dorkin-sirts-2024-tartunlp-evalatin,
title = "{T}artu{NLP} at {E}va{L}atin 2024: Emotion Polarity Detection",
author = "Dorkin, Aleksei and
Sirts, Kairit",
editor = "Sprugnoli, Rachele and
Passarotti, Marco",
booktitle = "Proceedings of the Third Workshop on Language Technologies for Historical and Ancient Languages (LT4HALA) @ LREC-COLING-2024",
month = may,
year = "2024",
address = "Torino, Italia",
publisher = "ELRA and ICCL",
url = "https://aclanthology.org/2024.lt4hala-1.26",
pages = "223--228",
}
```
```
@misc{dorkin2024tartunlp,
title={TartuNLP at EvaLatin 2024: Emotion Polarity Detection},
author={Aleksei Dorkin and Kairit Sirts},
year={2024},
eprint={2405.01159},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
kovakavics/comfyuicuccaim | kovakavics | "2024-05-09T04:22:18Z" | 0 | 0 | [
"license:afl-3.0",
"region:us"
] | null | "2024-03-08T18:53:16Z" | ---
license: afl-3.0
---
|
imperialwarrior/open-australian-legal-qa-paraphrased-moderation-results | imperialwarrior | "2024-03-08T19:41:06Z" | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-03-08T18:54:14Z" | ---
dataset_info:
features:
- name: question
dtype: string
- name: answer
dtype: string
- name: snippet
dtype: string
- name: question_retrieval_embeddings
dtype: string
- name: question_non_retrieval_embeddings
dtype: string
- name: answer_retrieval_embeddings
dtype: string
- name: answer_non_retrieval_embeddings
dtype: string
- name: snippet_retrieval_embeddings
dtype: string
- name: answer_moderation
dtype: string
- name: question_moderation
dtype: string
- name: snippet_moderation
dtype: string
splits:
- name: train
num_bytes: 509750534
num_examples: 2122
download_size: 176727859
dataset_size: 509750534
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
mudassar93/data_piano | mudassar93 | "2024-03-08T18:58:27Z" | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-03-08T18:58:25Z" | ---
dataset_info:
features:
- name: response
dtype: string
- name: instruction
dtype: string
- name: chat
dtype: string
splits:
- name: train
num_bytes: 1080219
num_examples: 1823
download_size: 238567
dataset_size: 1080219
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
AlexanderHolmes0/true-fake-news | AlexanderHolmes0 | "2024-04-12T13:44:07Z" | 0 | 0 | [
"task_categories:text-classification",
"task_categories:question-answering",
"task_categories:text-generation",
"language:en",
"license:mit",
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"news"
] | [
"text-classification",
"question-answering",
"text-generation"
] | "2024-03-08T19:02:28Z" | ---
language:
- en
license: mit
size_categories:
- 10K<n<100K
task_categories:
- text-classification
- question-answering
- text-generation
dataset_info:
features:
- name: label
dtype:
class_label:
names:
'0': 'true'
'1': fake
- name: text
dtype: string
splits:
- name: train
num_bytes: 82978144
num_examples: 33672
- name: test
num_bytes: 28512596
num_examples: 11224
download_size: 67949019
dataset_size: 111490740
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
tags:
- news
---
# True-Fake-News
<!-- Provide a quick summary of the dataset. -->
These are news articles collected from various sources, with curated labels classifying each article as `true` or `fake`.
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
The dataset contains two types of articles: fake and real news. It was collected from real-world sources; the truthful articles were obtained by crawling articles from Reuters.com (a news website). The fake news articles were collected from unreliable websites that were flagged by PolitiFact (a fact-checking organization in the USA) and by Wikipedia. The dataset contains articles on different topics; however, the majority focus on political and world news.
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [Kaggle Repo](https://www.kaggle.com/datasets/emineyetm/fake-news-detection-datasets/data)
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
Text classification and question answering are natural uses for this dataset.
## Dataset Structure
| Classification | Total Number of Articles | Article Type | Article Count |
|----------------|--------------------------|--------------|---------------|
| Real-News | 21,417 | World | 10,145 |
| | | Political | 11,272 |
| Fake-News | 23,481 | Government | 1,570 |
| | | Middle East | 778 |
| | | US | 783 |
| | | Left-Leaning | 4,459 |
| | | Political | 6,841 |
| | | General | 9,050 |
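The `class_label` block in the metadata above maps integer ids to names (`0 → true`, `1 → fake`), so labels can be decoded without any extra dependencies. A minimal sketch over made-up label ids:

```python
from collections import Counter

# Class names from the dataset_info class_label above: 0 -> 'true', 1 -> 'fake'.
LABEL_NAMES = ["true", "fake"]

def decode_labels(label_ids):
    """Map integer class ids to their string names."""
    return [LABEL_NAMES[i] for i in label_ids]

sample_ids = [0, 1, 1, 0, 1]  # made-up example labels, not real dataset rows
decoded = decode_labels(sample_ids)
print(Counter(decoded))       # Counter({'fake': 3, 'true': 2})
```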
|
HiTZ/basqueparl | HiTZ | "2024-03-08T19:46:38Z" | 0 | 0 | [
"task_categories:text-classification",
"task_categories:token-classification",
"task_categories:summarization",
"task_categories:translation",
"task_categories:zero-shot-classification",
"task_categories:text-generation",
"multilinguality:multilingual",
"source_datasets:original",
"language:es",
"language:eu",
"license:apache-2.0",
"size_categories:100K<n<1M",
"format:csv",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"politics",
"parliamentary",
"code switching",
"multilinguality"
] | [
"text-classification",
"token-classification",
"summarization",
"translation",
"zero-shot-classification",
"text-generation"
] | "2024-03-08T19:08:31Z" | ---
license: apache-2.0
task_categories:
- text-classification
- token-classification
- summarization
- translation
- zero-shot-classification
- text-generation
language:
- es
- eu
tags:
- politics
- parliamentary
- code switching
- multilinguality
pretty_name: BasqueParl
size_categories:
- 100K<n<1M
source_datasets:
- original
multilinguality:
- multilingual
paperswithcode_id: basqueparl
---
# BasqueParl: A Bilingual Corpus of Basque Parliamentary Transcriptions
This repository contains **BasqueParl**, a bilingual corpus for political discourse analysis. It covers transcriptions from the Parliament of
the Basque Autonomous Community for eight years and two legislative terms (2012-2020), and its main characteristic is the presence of Basque-Spanish
code-switching speeches.
📖 Paper: [BasqueParl A Bilingual Corpus of Basque Parliamentary Transcriptions](https://aclanthology.org/2022.lrec-1.361/) In LREC 2022.
## Example
The following unprocessed speech combines a **Basque** text (plain) with **Spanish** fragments (highlighted):
> Bai, zure baimenarekin hemendik.
>
> Ba zure desioak, Guanche andrea, gureak ere badira. Harritu nau eta ez nau harritu hitza berriro hartzeak, zeren hitz egiten nengoen bitartean esan diozu albokoari `le voy a contestar. Le voy a contestar`, ondo iruditzen, zure eskubidean zaude, baino beno, ez dut uste inongo astakeriarik esan dudanik.
>
> Gauzak egiten dira eta uste dut nik, nik ere eskubidea dudala Gobernuak eta beste erakundeek egiten dutena esateko. Zeren beti `ver el vaso medio vacío o medio lleno, pues cambia un poco la perspectiva y vernos siempre en modo Gobierno, creo que no es nada objetivo. Se hacen cosas, se harán cosas y esta vez creo que me deberían reconocer que de la iniciativa primera a lo que hemos acordado, no nos hemos dejado nada o creo que casi nada. Entonces, bueno, sólo querı́a aclarar eso` eta eskerrak berriro.
>
> Eta ziur egon emakumea dokumentu horietan ez bada agertzen hitzetan, zeren uste dut hori ez dela garrantzitsuena, bai politiketan egongo dela eta dagoela.
>
> Eskerrik asko.
## Description
The distinctive features of the **BasqueParl** corpus are:
- **14 M words** of bilingual parliamentary transcriptions
- **Speech paragraphs** as units
- Metadata such as **date** and **speaker's name**, **gender** and **party** for each paragraph
- **Language** of each paragraph (either Basque or Spanish)
- **Lemmas** and **named entities** of each paragraph, with and without stopwords
## Data Fields
The **BasqueParl** corpus is written as a Tab Separated Values (TSV) file. Each unit presents the next fields:
- **"date"**: Date corresponding to the speech, e.g. _2020-02-07_
- **"speech_id"**: Number that identifies the speech within its date, e.g. _3_
- **"text_id"**: Number that identifies the paragraph within its speech, e.g. _3_
- **"speaker"**: Family names of the speaker, including their position if any, e.g. _Tejeria Otermin LEHENDAKARIA_
- **"birth"**: Year of birth of the speaker, e.g. _1971_
- **"gender"**: Gender of the speaker, either _E_ (_emakumea_) for female or _G_ (_gizona_) for male
- **"party"**: Political group of the speaker, e.g. _EAJ_
- **"language"**: Language assigned to a paragraph, either _eu_ for Basque or _es_ for Spanish
- **"text"**: Paragraph of the speech text
- **"lemmas"**: Lemmatized paragraph
- **"lemmas_stw"**: Lemmatized paragraph without stopwords
- **"entities"**: Named entities extracted from the paragraph
- **"entities_stw"**: Named entities extracted from the paragraph without stopwords
Some fields, such as gender or party, may be annotated as missing when the data could not be retrieved or was not applicable.
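For illustration, one paragraph unit with the fields above can be parsed with Python's standard `csv` module. The row below is a made-up example in the documented shape, not a real line from the corpus:

```python
import csv
import io

# Column names exactly as documented above.
FIELDS = [
    "date", "speech_id", "text_id", "speaker", "birth", "gender", "party",
    "language", "text", "lemmas", "lemmas_stw", "entities", "entities_stw",
]

# A made-up row standing in for one line of the TSV file (not real corpus data).
tsv_line = "\t".join([
    "2020-02-07", "3", "3", "Tejeria Otermin LEHENDAKARIA", "1971", "E",
    "EAJ", "eu", "Eskerrik asko.", "eskerrik asko", "eskerrik asko", "", "",
])

reader = csv.DictReader(io.StringIO(tsv_line), fieldnames=FIELDS, delimiter="\t")
row = next(reader)
print(row["speaker"], row["party"], row["language"])
# Tejeria Otermin LEHENDAKARIA EAJ eu
```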
# Citation
````bibtex
@inproceedings{escribano-etal-2022-basqueparl,
title = "{B}asque{P}arl: A Bilingual Corpus of {B}asque Parliamentary Transcriptions",
author = "Escribano, Nayla and
Gonzalez, Jon Ander and
Orbegozo-Terradillos, Julen and
Larrondo-Ureta, Ainara and
Pe{\~n}a-Fern{\'a}ndez, Sim{\'o}n and
Perez-de-Vi{\~n}aspre, Olatz and
Agerri, Rodrigo",
booktitle = "Proceedings of the Thirteenth Language Resources and Evaluation Conference (LREC)",
year = "2022",
publisher = "European Language Resources Association",
pages = "3382--3390"
}
````
# Contact
[Rodrigo Agerri](https://ragerri.github.io/)
HiTZ Center - Ixa, University of the Basque Country UPV/EHU
|
apollo-research/monology-pile-uncopyrighted-tokenizer-EleutherAI-gpt-neox-20b | apollo-research | "2024-03-08T19:56:27Z" | 0 | 0 | [
"size_categories:10M<n<100M",
"format:parquet",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-03-08T19:09:51Z" | ---
dataset_info:
features:
- name: input_ids
sequence: int32
splits:
- name: train
num_bytes: 168975915696.0
num_examples: 20616876
download_size: 71503236187
dataset_size: 168975915696.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
dar-tau/grammar-attention-maps-opt-350m | dar-tau | "2024-03-08T23:53:20Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"modality:timeseries",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-03-08T19:25:15Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: logit_loss
dtype: float64
- name: log_kl_loss
dtype: float64
- name: mask_weight
dtype: float64
- name: probe_loss
dtype: float64
- name: top_k_acc
dtype: float64
- name: attention_maps
sequence: float32
- name: attention_maps_shape
sequence: int64
splits:
- name: train
num_bytes: 3868218
num_examples: 95
download_size: 703054
dataset_size: 3868218
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
taylorbollman/wikitext2_tb | taylorbollman | "2024-03-08T19:27:53Z" | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-03-08T19:27:48Z" | ---
dataset_info:
features:
- name: input_ids
sequence: int32
- name: token_type_ids
sequence: int8
- name: attention_mask
sequence: int8
- name: labels
sequence: int64
splits:
- name: test
num_bytes: 3963136
num_examples: 2192
- name: train
num_bytes: 33513088
num_examples: 18536
- name: validation
num_bytes: 3467744
num_examples: 1918
download_size: 11981141
dataset_size: 40943968
configs:
- config_name: default
data_files:
- split: test
path: data/test-*
- split: train
path: data/train-*
- split: validation
path: data/validation-*
---
|
saadalafalcon/officialEmails | saadalafalcon | "2024-03-09T08:39:01Z" | 0 | 0 | [
"license:apache-2.0",
"region:us"
] | null | "2024-03-08T19:42:00Z" | ---
license: apache-2.0
---
|
Weni/WeniGPT-QA-1.0.1_DPO | Weni | "2024-03-13T11:06:45Z" | 0 | 1 | [
"language:pt",
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-03-08T20:02:06Z" | ---
language:
- pt
dataset_info:
features:
- name: context
dtype: string
- name: question
dtype: string
- name: chosen_response
dtype: string
- name: rejected_response
dtype: string
- name: correct_ans
dtype: int64
- name: flag_type
dtype: int64
splits:
- name: pt
num_bytes: 27689890
num_examples: 3180
download_size: 14905751
dataset_size: 27689890
configs:
- config_name: default
data_files:
- split: pt
path: data/pt-*
---
|
chagasclone/agrecivo | chagasclone | "2024-03-08T20:12:51Z" | 0 | 0 | [
"license:openrail",
"size_categories:n<1K",
"format:audiofolder",
"modality:audio",
"library:datasets",
"library:mlcroissant",
"region:us"
] | null | "2024-03-08T20:09:00Z" | ---
license: openrail
---
|
allandclive/alpaca_4k_luganda | allandclive | "2024-03-08T20:33:24Z" | 0 | 0 | [
"task_categories:text2text-generation",
"task_categories:text-generation",
"language:lg",
"region:us"
] | [
"text2text-generation",
"text-generation"
] | "2024-03-08T20:11:25Z" | ---
task_categories:
- text2text-generation
- text-generation
language:
- lg
---
# Alpaca 4k Luganda
Translated using Google Translate
|
hon9kon9ize/yue-toxic-dpo | hon9kon9ize | "2024-03-14T15:28:51Z" | 0 | 0 | [
"size_categories:1K<n<10K",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"region:us",
"not-for-all-audiences"
] | null | "2024-03-08T20:45:26Z" | ---
dataset_info:
- config_name: yue
features:
- name: prompt
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
- name: id
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 779960
num_examples: 531
download_size: 441753
dataset_size: 779960
- config_name: zh
features:
- name: prompt
dtype: string
- name: chosen
dtype: string
- name: rejected
dtype: string
- name: id
dtype: string
splits:
- name: train
num_bytes: 825175
num_examples: 541
download_size: 472938
dataset_size: 825175
configs:
- config_name: yue
data_files:
- split: train
path: yue/train-*
- config_name: zh
data_files:
- split: train
path: zh/train-*
tags:
- not-for-all-audiences
---
# Cantonese Toxic DPO v0.2
This dataset is a Cantonese and Simplified Chinese translation of [unalignment/toxic-dpo-v0.2](https://huggingface.co/datasets/unalignment/toxic-dpo-v0.2). For more detailed information about the original dataset, please refer to the provided link.
This dataset was translated by Gemini Pro and has not undergone any manual verification. The content may be inaccurate or misleading; please keep this in mind when using this dataset.
## License
This dataset is provided under the same license as the original dataset: CC BY 4.0
## Limitation and Usage Limits
Please check the original dataset for more information. |
Rohit-D/synthetic-confidential-information-injected-business-excerpts | Rohit-D | "2024-03-09T18:27:56Z" | 0 | 1 | [
"task_categories:question-answering",
"task_categories:text-classification",
"task_categories:feature-extraction",
"task_categories:summarization",
"language:en",
"license:mit",
"size_categories:1K<n<10K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us",
"business",
"fine-tuning"
] | [
"question-answering",
"text-classification",
"feature-extraction",
"summarization"
] | "2024-03-08T20:47:07Z" | ---
license: mit
task_categories:
- question-answering
- text-classification
- feature-extraction
- summarization
language:
- en
tags:
- business
- fine-tuning
size_categories:
- n<=1K
---
## Synthetic Confidential Information Injected Business Excerpts
This dataset aims to provide business report excerpts which contain relevant confidential/sensitive information.
<pre>
This includes mentions of :
1. Internal Marketing Strategies.
2. Proprietary Product Composition.
3. License Internals.
4. Internal Sales Projections.
5. Confidential Patent Details.
6. others.
</pre>
The dataset contains around 1k business excerpt–Reason pairs. The Reason field quotes the confidential portion of the business excerpt and explains succinctly (in about a line) why the quoted portion might be confidential.
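Since each Reason quotes the confidential span, that span can be recovered with a small regex. The sketch below assumes straight double quotes and uses a made-up record, not one from the dataset:

```python
import re

def extract_quoted(reason_text):
    """Return the quoted confidential span from a Reason field, or None."""
    match = re.search(r'"([^"]+)"', reason_text)
    return match.group(1) if match else None

# Made-up Reason in the shape described above (hypothetical, not from the data).
reason = ('"Project Helios ships in Q3 with a projected 40% margin" '
          'is confidential because it reveals internal sales projections.')

print(extract_quoted(reason))
# Project Helios ships in Q3 with a projected 40% margin
```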
**Note**: All 'confidential information' injected is purely artificial, and the business excerpts themselves, along with the companies, products, numbers, licenses, and patents they reference or mention, are hypothetical and artificial.
This data is to be treated as a pure simulation of what leaks in business excerpts might look like.
This data does not contain, and does not intend to provide, any kind of actual/real confidential information. |
ravithejads/alpaca-cleaned-tagged | ravithejads | "2024-03-10T17:40:41Z" | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-03-08T20:47:52Z" | ---
dataset_info:
features:
- name: instruction
dtype: string
- name: input
dtype: string
- name: output
dtype: string
- name: type
dtype: string
splits:
- name: train
num_bytes: 42276800
num_examples: 51760
download_size: 24347133
dataset_size: 42276800
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
thafiz/llm-crossfit | thafiz | "2024-03-08T21:18:14Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-03-08T21:18:05Z" | ---
dataset_info:
features:
- name: input_ids
sequence: int32
splits:
- name: train
num_bytes: 319644.0
num_examples: 39
- name: test
num_bytes: 40980.0
num_examples: 5
download_size: 199220
dataset_size: 360624.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
fraviofranco/vozcortella | fraviofranco | "2024-03-08T22:07:02Z" | 0 | 0 | [
"license:openrail",
"size_categories:n<1K",
"format:audiofolder",
"modality:audio",
"library:datasets",
"library:mlcroissant",
"region:us"
] | null | "2024-03-08T21:51:16Z" | ---
license: openrail
---
|
dongyoung4091/hh-rlhf_with_features_flan_t5_large | dongyoung4091 | "2024-03-08T22:34:13Z" | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-03-08T22:34:08Z" | ---
dataset_info:
features:
- name: chosen
dtype: string
- name: rejected
dtype: string
- name: helpfulness_chosen
dtype: int64
- name: helpfulness_rejected
dtype: int64
- name: specificity_chosen
dtype: int64
- name: specificity_rejected
dtype: int64
- name: intent_chosen
dtype: int64
- name: intent_rejected
dtype: int64
- name: factuality_chosen
dtype: int64
- name: factuality_rejected
dtype: int64
- name: easy-to-understand_chosen
dtype: int64
- name: easy-to-understand_rejected
dtype: int64
- name: relevance_chosen
dtype: int64
- name: relevance_rejected
dtype: int64
- name: readability_chosen
dtype: int64
- name: readability_rejected
dtype: int64
- name: enough-detail_chosen
dtype: int64
- name: enough-detail_rejected
dtype: int64
- name: biased:_chosen
dtype: int64
- name: biased:_rejected
dtype: int64
- name: fail-to-consider-individual-preferences_chosen
dtype: int64
- name: fail-to-consider-individual-preferences_rejected
dtype: int64
- name: repetetive_chosen
dtype: int64
- name: repetetive_rejected
dtype: int64
- name: fail-to-consider-context_chosen
dtype: int64
- name: fail-to-consider-context_rejected
dtype: int64
- name: too-long_chosen
dtype: int64
- name: too-long_rejected
dtype: int64
- name: human
dtype: string
- name: assistant_chosen
dtype: string
- name: assistant_rejected
dtype: string
- name: log_score_chosen
dtype: float64
- name: log_score_rejected
dtype: float64
- name: labels
dtype: string
splits:
- name: train
num_bytes: 14434424
num_examples: 9574
- name: test
num_bytes: 14378349
num_examples: 9574
download_size: 15748504
dataset_size: 28812773
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
dongyoung4091/hh-rlhf_with_features | dongyoung4091 | "2024-03-08T22:44:07Z" | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-03-08T22:36:26Z" | ---
dataset_info:
features:
- name: chosen
dtype: string
- name: rejected
dtype: string
- name: helpfulness_chosen
dtype: int64
- name: helpfulness_rejected
dtype: int64
- name: specificity_chosen
dtype: int64
- name: specificity_rejected
dtype: int64
- name: intent_chosen
dtype: int64
- name: intent_rejected
dtype: int64
- name: factuality_chosen
dtype: int64
- name: factuality_rejected
dtype: int64
- name: easy-to-understand_chosen
dtype: int64
- name: easy-to-understand_rejected
dtype: int64
- name: relevance_chosen
dtype: int64
- name: relevance_rejected
dtype: int64
- name: readability_chosen
dtype: int64
- name: readability_rejected
dtype: int64
- name: enough-detail_chosen
dtype: int64
- name: enough-detail_rejected
dtype: int64
- name: biased:_chosen
dtype: int64
- name: biased:_rejected
dtype: int64
- name: fail-to-consider-individual-preferences_chosen
dtype: int64
- name: fail-to-consider-individual-preferences_rejected
dtype: int64
- name: repetetive_chosen
dtype: int64
- name: repetetive_rejected
dtype: int64
- name: fail-to-consider-context_chosen
dtype: int64
- name: fail-to-consider-context_rejected
dtype: int64
- name: too-long_chosen
dtype: int64
- name: too-long_rejected
dtype: int64
- name: human
dtype: string
- name: assistant_chosen
dtype: string
- name: assistant_rejected
dtype: string
- name: labels
dtype: string
splits:
- name: train
num_bytes: 14281240
num_examples: 9574
- name: test
num_bytes: 14225165
num_examples: 9574
download_size: 15456243
dataset_size: 28506405
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
dongyoung4091/shp_with_features_20k_flan_t5_large | dongyoung4091 | "2024-03-08T22:40:23Z" | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-03-08T22:40:18Z" | ---
dataset_info:
features:
- name: post_id
dtype: string
- name: domain
dtype: string
- name: upvote_ratio
dtype: float64
- name: history
dtype: string
- name: c_root_id_A
dtype: string
- name: c_root_id_B
dtype: string
- name: created_at_utc_A
dtype: int64
- name: created_at_utc_B
dtype: int64
- name: score_A
dtype: int64
- name: score_B
dtype: int64
- name: human_ref_A
dtype: string
- name: human_ref_B
dtype: string
- name: labels
dtype: int64
- name: seconds_difference
dtype: float64
- name: score_ratio
dtype: float64
- name: helpfulness_A
dtype: float64
- name: helpfulness_B
dtype: float64
- name: specificity_A
dtype: float64
- name: specificity_B
dtype: float64
- name: intent_A
dtype: float64
- name: intent_B
dtype: float64
- name: factuality_A
dtype: float64
- name: factuality_B
dtype: float64
- name: easy-to-understand_A
dtype: float64
- name: easy-to-understand_B
dtype: float64
- name: relevance_A
dtype: float64
- name: relevance_B
dtype: float64
- name: readability_A
dtype: float64
- name: readability_B
dtype: float64
- name: enough-detail_A
dtype: float64
- name: enough-detail_B
dtype: float64
- name: biased:_A
dtype: float64
- name: biased:_B
dtype: float64
- name: fail-to-consider-individual-preferences_A
dtype: float64
- name: fail-to-consider-individual-preferences_B
dtype: float64
- name: repetetive_A
dtype: float64
- name: repetetive_B
dtype: float64
- name: fail-to-consider-context_A
dtype: float64
- name: fail-to-consider-context_B
dtype: float64
- name: too-long_A
dtype: float64
- name: too-long_B
dtype: float64
- name: __index_level_0__
dtype: int64
- name: log_score_A
dtype: float64
- name: log_score_B
dtype: float64
splits:
- name: train
num_bytes: 20707062
num_examples: 9459
- name: test
num_bytes: 20659940
num_examples: 9459
download_size: 23927350
dataset_size: 41367002
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
dongyoung4091/hh-generated_flan_t5_large_with_features2 | dongyoung4091 | "2024-03-08T22:41:06Z" | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-03-08T22:41:04Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: response
dtype: string
- name: 'biased:'
dtype: int64
- name: easy-to-understand
dtype: int64
- name: enough-detail
dtype: int64
- name: factuality
dtype: int64
- name: fail-to-consider-context
dtype: int64
- name: fail-to-consider-individual-preferences
dtype: int64
- name: helpfulness
dtype: int64
- name: intent
dtype: int64
- name: readability
dtype: int64
- name: relevance
dtype: int64
- name: repetetive
dtype: int64
- name: specificity
dtype: int64
- name: too-long
dtype: int64
splits:
- name: train
num_bytes: 395323
num_examples: 1600
download_size: 76218
dataset_size: 395323
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
jtatman/medical-sci-instruct-100k-sharegpt-chatml | jtatman | "2024-03-09T00:01:57Z" | 0 | 0 | [
"license:mit",
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-03-08T22:49:15Z" | ---
dataset_info:
features:
- name: text
dtype: string
- name: input_ids
sequence: int32
- name: attention_mask
sequence: int8
splits:
- name: train
num_bytes: 1071589710
num_examples: 96557
download_size: 82474918
dataset_size: 1071589710
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
license: mit
size_categories:
- 10K<n<100K
---
This is a reformat of the dataset for Mistral training, with ChatML tokens added to the tokenizer and '<|endoftext|>' as the end-of-sequence token.
The max_length setting may need to be adjusted to leave room for the additional special tokens during training.
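A rough sketch of the headroom budgeting this implies. The per-turn counts below are assumptions for illustration, not measurements from the actual tokenizer:

```python
# Reserve context-window headroom for added ChatML special tokens.
context_window = 2048
special_tokens = ["<|im_start|>", "<|im_end|>", "<|endoftext|>"]
assumed_turns = 6  # assumed number of turns per training sample

# One copy of each special token per turn (illustrative estimate).
headroom = len(special_tokens) * assumed_turns  # 18 tokens reserved
max_length = context_window - headroom
print(max_length)  # 2030
```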
This max_length was set to 2030 from 2048 for extra headroom. |
gagan3012/arabic-xnli-pairwise | gagan3012 | "2024-03-08T23:18:05Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-03-08T22:55:47Z" | ---
dataset_info:
features:
- name: labels
sequence: int64
- name: sent1
sequence: string
- name: sent2
sequence: string
splits:
- name: train
num_bytes: 70811123
num_examples: 1
- name: test
num_bytes: 850605
num_examples: 1
- name: validation
num_bytes: 415074
num_examples: 1
download_size: 37859272
dataset_size: 72076802
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
- split: validation
path: data/validation-*
---
|
dongyoung4091/shp_with_features_20k | dongyoung4091 | "2024-03-08T23:01:57Z" | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-03-08T23:01:52Z" | ---
dataset_info:
features:
- name: post_id
dtype: string
- name: domain
dtype: string
- name: upvote_ratio
dtype: float64
- name: history
dtype: string
- name: c_root_id_A
dtype: string
- name: c_root_id_B
dtype: string
- name: created_at_utc_A
dtype: int64
- name: created_at_utc_B
dtype: int64
- name: score_A
dtype: int64
- name: score_B
dtype: int64
- name: human_ref_A
dtype: string
- name: human_ref_B
dtype: string
- name: labels
dtype: int64
- name: seconds_difference
dtype: float64
- name: score_ratio
dtype: float64
- name: helpfulness_A
dtype: float64
- name: helpfulness_B
dtype: float64
- name: specificity_A
dtype: float64
- name: specificity_B
dtype: float64
- name: intent_A
dtype: float64
- name: intent_B
dtype: float64
- name: factuality_A
dtype: float64
- name: factuality_B
dtype: float64
- name: easy-to-understand_A
dtype: float64
- name: easy-to-understand_B
dtype: float64
- name: relevance_A
dtype: float64
- name: relevance_B
dtype: float64
- name: readability_A
dtype: float64
- name: readability_B
dtype: float64
- name: enough-detail_A
dtype: float64
- name: enough-detail_B
dtype: float64
- name: biased:_A
dtype: float64
- name: biased:_B
dtype: float64
- name: fail-to-consider-individual-preferences_A
dtype: float64
- name: fail-to-consider-individual-preferences_B
dtype: float64
- name: repetetive_A
dtype: float64
- name: repetetive_B
dtype: float64
- name: fail-to-consider-context_A
dtype: float64
- name: fail-to-consider-context_B
dtype: float64
- name: too-long_A
dtype: float64
- name: too-long_B
dtype: float64
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 20555718
num_examples: 9459
- name: test
num_bytes: 20508596
num_examples: 9459
download_size: 23638147
dataset_size: 41064314
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
felipesampaio2010/clarestaravenska | felipesampaio2010 | "2024-03-08T23:06:35Z" | 0 | 0 | [
"license:openrail",
"size_categories:n<1K",
"format:audiofolder",
"modality:audio",
"library:datasets",
"library:mlcroissant",
"region:us"
] | null | "2024-03-08T23:05:50Z" | ---
license: openrail
---
|
ResplendentAI/Alpaca_NSFW_Shuffled | ResplendentAI | "2024-03-08T23:12:04Z" | 0 | 2 | [
"language:en",
"license:cc-by-nc-4.0",
"size_categories:1K<n<10K",
"library:datasets",
"library:mlcroissant",
"region:us",
"not-for-all-audiences"
] | null | "2024-03-08T23:08:08Z" | ---
license: cc-by-nc-4.0
language:
- en
tags:
- not-for-all-audiences
pretty_name: Alpaca NSFW Shuffled
size_categories:
- n<1K
---
Reformatted and pruned this dataset: https://huggingface.co/datasets/athirdpath/DPO_Pairs-Roleplay-Alpaca-NSFW-v1-SHUFFLED |
ZHLiu627/ultrafeedback_binarized_with_response_full | ZHLiu627 | "2024-03-08T23:24:55Z" | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-03-08T23:09:22Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: prompt_id
dtype: string
- name: chosen
list:
- name: content
dtype: string
- name: role
dtype: string
- name: rejected
list:
- name: content
dtype: string
- name: role
dtype: string
- name: messages
list:
- name: content
dtype: string
- name: role
dtype: string
- name: score_chosen
dtype: float64
- name: score_rejected
dtype: float64
- name: reference_response
dtype: string
splits:
- name: train_prefs
num_bytes: 510824465
num_examples: 61135
download_size: 0
dataset_size: 510824465
configs:
- config_name: default
data_files:
- split: train_prefs
path: data/train_prefs-*
---
# Dataset Card for "ultrafeedback_binarized_with_response_full"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
vinisebk/jc_chasez | vinisebk | "2024-03-08T23:17:54Z" | 0 | 0 | [
"license:openrail",
"region:us"
] | null | "2024-03-08T23:16:54Z" | ---
license: openrail
---
|
gagan3012/arabic-sts-pairwise | gagan3012 | "2024-03-08T23:26:58Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-03-08T23:26:54Z" | ---
dataset_info:
features:
- name: labels
sequence: int64
- name: sent1
sequence: string
- name: sent2
sequence: string
splits:
- name: train
num_bytes: 227137
num_examples: 1
- name: validation
num_bytes: 63521
num_examples: 1
- name: test
num_bytes: 33531
num_examples: 1
download_size: 182982
dataset_size: 324189
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
gagan3012/arabic-mq2q-pairwise | gagan3012 | "2024-03-08T23:31:22Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-03-08T23:31:18Z" | ---
dataset_info:
features:
- name: labels
sequence: int64
- name: sent1
sequence: string
- name: sent2
sequence: string
splits:
- name: train
num_bytes: 1193021
num_examples: 1
- name: validation
num_bytes: 150359
num_examples: 1
- name: test
num_bytes: 148942
num_examples: 1
download_size: 523830
dataset_size: 1492322
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
luzDP/Thiago_Minos | luzDP | "2024-03-08T23:34:40Z" | 0 | 0 | [
"license:openrail",
"region:us"
] | null | "2024-03-08T23:32:46Z" | ---
license: openrail
---
|
gagan3012/arabic-ans-stance-pairwise | gagan3012 | "2024-03-08T23:37:43Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-03-08T23:37:39Z" | ---
dataset_info:
features:
- name: labels
sequence: int64
- name: sent1
sequence: string
- name: sent2
sequence: string
splits:
- name: train
num_bytes: 511126
num_examples: 1
- name: validation
num_bytes: 147950
num_examples: 1
- name: test
num_bytes: 73556
num_examples: 1
download_size: 296560
dataset_size: 732632
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
johnsonkuan/wiki_en_chunks_sample | johnsonkuan | "2024-03-09T00:15:53Z" | 0 | 0 | [
"size_categories:1M<n<10M",
"format:parquet",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-03-08T23:50:14Z" | ---
dataset_info:
features:
- name: id
dtype: string
- name: url
dtype: string
- name: title
dtype: string
- name: chunk
dtype: string
- name: chunk_seq
dtype: int64
- name: chunk_md5
dtype: string
splits:
- name: train
num_bytes: 2882990493
num_examples: 6019103
download_size: 1736043605
dataset_size: 2882990493
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
stanmalkinson199/2Ddattaset | stanmalkinson199 | "2024-03-09T16:19:18Z" | 0 | 0 | [
"license:openrail",
"size_categories:n<1K",
"format:audiofolder",
"modality:audio",
"library:datasets",
"library:mlcroissant",
"region:us"
] | null | "2024-03-09T00:04:16Z" | ---
license: openrail
---
|
CronosGhost/cpp-code-reranking | CronosGhost | "2024-03-09T00:10:31Z" | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-03-09T00:10:28Z" | ---
dataset_info:
features:
- name: query
dtype: string
- name: positive
sequence: string
- name: negative
sequence: string
splits:
- name: train
num_bytes: 23231663.1
num_examples: 9900
- name: test
num_bytes: 2581295.9
num_examples: 1100
download_size: 10424834
dataset_size: 25812959.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
bulkbeings/emma_assistant_conversations_v0.1 | bulkbeings | "2024-03-09T00:19:48Z" | 0 | 0 | [
"license:mit",
"size_categories:1K<n<10K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-03-09T00:19:02Z" | ---
license: mit
---
|
goatman/metahuman-gaze-prediction | goatman | "2024-03-09T05:46:50Z" | 0 | 1 | [
"license:apache-2.0",
"size_categories:1K<n<10K",
"format:imagefolder",
"modality:image",
"library:datasets",
"library:mlcroissant",
"region:us"
] | null | "2024-03-09T00:45:52Z" | ---
license: apache-2.0
---
# Extract and normalize the coordinates (dodgy version for testing)
from pathlib import Path
from torch import tensor

def get_coords_metahuman(file: Path):
    # filenames look like: <im_id>_<character>_<xcoord>_<ycoord>_<xsize>_<ysize>.jpg
    im_id, character, xcoord, ycoord, xsize, ysize = file.stem.split('_')
    xcoord, ycoord, xsize, ysize = float(xcoord), float(ycoord), float(xsize), float(ysize)
    # generic width and height in cm given by GPT-4 as a likely mean screen size
    base_screensize = tensor([46.49, 26.15])
    normalized_screensize = tensor([xsize, ysize]) / base_screensize  # currently unused
    # normalize gaze coordinates to the range [0, 1] of the screen size
    x = xcoord / xsize
    y = ycoord / ysize
    return tensor([x, y])
|
RomilsonB/henryfreitas | RomilsonB | "2024-03-09T00:57:36Z" | 0 | 0 | [
"license:openrail",
"size_categories:n<1K",
"format:audiofolder",
"modality:audio",
"library:datasets",
"library:mlcroissant",
"region:us"
] | null | "2024-03-09T00:54:20Z" | ---
license: openrail
---
|
RomilsonB/henry | RomilsonB | "2024-03-09T01:16:18Z" | 0 | 0 | [
"license:openrail",
"size_categories:n<1K",
"format:audiofolder",
"modality:audio",
"library:datasets",
"library:mlcroissant",
"region:us"
] | null | "2024-03-09T01:14:38Z" | ---
license: openrail
---
|
youlive789/instructpix2pix | youlive789 | "2024-03-09T01:23:01Z" | 0 | 0 | [
"license:mit",
"size_categories:1K<n<10K",
"format:parquet",
"modality:image",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-03-09T01:16:12Z" | ---
license: mit
dataset_info:
features:
- name: original_image
dtype: image
- name: edited_image
dtype: image
- name: edit_promt
dtype: string
splits:
- name: train
num_bytes: 2478786161.568
num_examples: 2904
download_size: 2239120930
dataset_size: 2478786161.568
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
angeluriot/DimensionGPT_instruct | angeluriot | "2024-03-09T14:31:35Z" | 0 | 0 | [
"size_categories:100K<n<1M",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-03-09T01:24:13Z" | ---
configs:
- config_name: human_conversations
data_files: human_conversations.json
- config_name: chatbot_conversations
data_files: chatbot_conversations.json
- config_name: dimension_gpt_conversations
data_files: dimension_gpt_conversations.json
- config_name: human_preprompts
data_files: human_preprompts.json
- config_name: chatbot_preprompts
data_files: chatbot_preprompts.json
- config_name: dimension_gpt_preprompts
data_files: dimension_gpt_preprompts.json
--- |
Aeronsc00ll0l/Smth | Aeronsc00ll0l | "2024-03-13T10:52:03Z" | 0 | 0 | [
"license:apache-2.0",
"region:us"
] | null | "2024-03-09T01:25:21Z" | ---
license: apache-2.0
---
|
Vinnyh589/Chaves8 | Vinnyh589 | "2024-08-17T06:58:35Z" | 0 | 0 | [
"license:unknown",
"region:us"
] | null | "2024-03-09T01:49:53Z" | ---
license: unknown
---
|
lapp0/hotpot_query_expansion_synthetic_annotated | lapp0 | "2024-03-09T02:11:06Z" | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-03-09T02:11:00Z" | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
- name: input_entities
sequence: string
- name: output_entities
sequence: string
- name: out_in_ent_score
dtype: float64
- name: in_out_ent_score
dtype: float64
- name: pair_score
dtype: float32
splits:
- name: train
num_bytes: 27024967
num_examples: 85925
- name: eval
num_bytes: 1418300
num_examples: 4522
download_size: 19029050
dataset_size: 28443267
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: eval
path: data/eval-*
---
|
sunilrufus/Extes_filtered1 | sunilrufus | "2024-03-09T02:11:23Z" | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-03-09T02:11:14Z" | ---
dataset_info:
features:
- name: scene
dtype: string
- name: description
dtype: string
- name: content
dtype: string
- name: __index_level_0__
dtype: int64
splits:
- name: train
num_bytes: 10425199
num_examples: 2864
- name: test
num_bytes: 2613907
num_examples: 717
download_size: 5883920
dataset_size: 13039106
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
Manirathinam21/Resume_classification | Manirathinam21 | "2024-03-09T02:20:34Z" | 0 | 0 | [
"task_categories:text-classification",
"language:en",
"license:mit",
"size_categories:1K<n<10K",
"format:csv",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"text-classification"
] | "2024-03-09T02:18:37Z" | ---
license: mit
task_categories:
- text-classification
language:
- en
size_categories:
- n<1K
--- |
citibankdemobusiness/worldsrecord | citibankdemobusiness | "2024-03-09T02:24:33Z" | 0 | 0 | [
"license:other",
"doi:10.57967/hf/1861",
"region:us"
] | null | "2024-03-09T02:22:15Z" | ---
license: other
license_name: billionaire
license_link: https://github.com/CitibankDemoBusiness/billiondollars/blob/git/LICENSE
---
|
pgajo/subs-v2 | pgajo | "2024-03-09T03:06:14Z" | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:audio",
"modality:text",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-03-09T02:23:27Z" | ---
dataset_info:
features:
- name: audio
dtype: audio
- name: transcription
dtype: string
splits:
- name: train
num_bytes: 5367257273.704724
num_examples: 79191
- name: test
num_bytes: 579733939.4022752
num_examples: 8800
download_size: 5812185768
dataset_size: 5946991213.106999
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
FreedomIntelligence/ALLaVA-4V-Arabic | FreedomIntelligence | "2024-04-29T16:09:37Z" | 0 | 2 | [
"task_categories:question-answering",
"task_categories:text-generation",
"language:ar",
"license:apache-2.0",
"size_categories:100K<n<1M",
"format:json",
"modality:image",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"arxiv:2402.11684",
"region:us",
"GPT-4V",
"LVLM",
"Vision",
"Language"
] | [
"question-answering",
"text-generation"
] | "2024-03-09T02:39:51Z" | ---
license: apache-2.0
task_categories:
- question-answering
- text-generation
language:
- ar
tags:
- GPT-4V
- LVLM
- Vision
- Language
size_categories:
- 1M<n<10M
configs:
- config_name: allava_laion
data_files:
- split: caption
path: "allava_laion/ALLaVA-Caption-LAION-4V_Arabic.json"
# - split: instruct
# path: "allava_laion/ALLaVA-Instruct-LAION-4V_Chinese.json"
- config_name: allava_vflan
data_files:
- split: caption
path: "allava_vflan/ALLaVA-Caption-VFLAN-4V_Arabic.json"
# - split: instruct
# path: "allava_vflan/ALLaVA-Instruct-VFLAN-4V_Chinese.json"
# - config_name: allava_laion_instruction
# data_files: "allava_laion/ALLaVA-Instruct-LAION-4V.json"
# configs:
# - config_name: default
# data_files:
# - split: allava_laion_caption
# path: "allava_laion/ALLaVA-Caption-LAION-4V.json"
# - split: allava_laion_instruction
# path: "allava_laion/ALLaVA-Instruction-LAION-4V.json"
# configs:
# - config_name: default
# - data_files:
# - split: allava_laion_caption
# - path:
# - "allava_laion/ALLaVA-Caption-LAION-4V.json"
# - split: allava_laion_instruction
# - path:
# - "allava_laion/ALLaVA-Instruction-LAION-4V.json"
---
## ALLaVA-4V for Arabic
This is the Arabic version of the ALLaVA-4V data. We have translated the ALLaVA-4V data into Arabic through ChatGPT and instructed ChatGPT not to translate content related to OCR.
The original dataset can be found [here](https://huggingface.co/datasets/FreedomIntelligence/ALLaVA-4V), and the image data can be downloaded from [ALLaVA-4V](https://huggingface.co/datasets/FreedomIntelligence/ALLaVA-4V).
#### Citation
If you find our data useful, please consider citing our work! We are FreedomIntelligence from Shenzhen Research Institute of Big Data and The Chinese University of Hong Kong, Shenzhen.
```
@misc{chen2024allava,
title={ALLaVA: Harnessing GPT4V-synthesized Data for A Lite Vision-Language Model},
author={Guiming Hardy Chen and Shunian Chen and Ruifei Zhang and Junying Chen and Xiangbo Wu and Zhiyi Zhang and Zhihong Chen and Jianquan Li and Xiang Wan and Benyou Wang},
year={2024},
eprint={2402.11684},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` |
hanesh007/mtsample | hanesh007 | "2024-03-10T09:11:27Z" | 0 | 0 | [
"license:apache-2.0",
"size_categories:n<1K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-03-09T02:43:49Z" | ---
license: apache-2.0
---
|
lapp0/hotpot_query_expansion_synthetic_cleaned | lapp0 | "2024-03-09T05:34:56Z" | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-03-09T02:56:19Z" | ---
dataset_info:
features:
- name: input
dtype: string
- name: output
dtype: string
splits:
- name: train
num_bytes: 4908240
num_examples: 25593
- name: eval
num_bytes: 264342
num_examples: 1359
download_size: 3390694
dataset_size: 5172582
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: eval
path: data/eval-*
---
|
arafatar/details_harness_drop | arafatar | "2024-03-28T01:48:33Z" | 0 | 0 | [
"license:unknown",
"region:us"
] | null | "2024-03-09T03:01:40Z" | ---
license: unknown
---
|
FreezySandy/Chat_doc | FreezySandy | "2024-03-09T03:08:14Z" | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-03-09T03:03:28Z" | ---
dataset_info:
features:
- name: 'Unnamed: 0'
dtype: int64
- name: text
dtype: string
splits:
- name: train
num_bytes: 1061539
num_examples: 1000
download_size: 645906
dataset_size: 1061539
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
imperialwarrior/open-australian-legal-qa-paraphrased-hard-gpt-with-emb | imperialwarrior | "2024-03-13T04:25:29Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-03-09T04:38:36Z" | ---
dataset_info:
features:
- name: pipeline_1_result
dtype: string
- name: pipeline_1_result_r_embeddings
sequence: float64
- name: pipeline_1_result_nr_embeddings
sequence: float64
- name: pipeline_2_context
dtype: string
- name: pipeline_2_result
dtype: string
- name: pipeline_2_result_r_embeddings
sequence: float64
- name: pipeline_2_result_nr_embeddings
sequence: float64
- name: pipeline_3_context
dtype: string
- name: pipeline_3_result
dtype: string
- name: pipeline_3_result_r_embeddings
sequence: float64
- name: pipeline_3_result_nr_embeddings
sequence: float64
- name: pipeline_4_context
dtype: string
- name: pipeline_4_result
dtype: string
- name: pipeline_4_result_r_embeddings
sequence: float64
- name: pipeline_4_result_nr_embeddings
sequence: float64
- name: pipeline_5_context
dtype: string
- name: pipeline_5_result
dtype: string
- name: pipeline_5_result_r_embeddings
sequence: float64
- name: pipeline_5_result_nr_embeddings
sequence: float64
- name: pipeline_6_context
dtype: string
- name: pipeline_6_result
dtype: string
- name: pipeline_6_result_r_embeddings
sequence: float64
- name: pipeline_6_result_nr_embeddings
sequence: float64
- name: pipeline_7_context
dtype: string
- name: pipeline_7_result
dtype: string
- name: pipeline_7_result_r_embeddings
sequence: float64
- name: pipeline_7_result_nr_embeddings
sequence: float64
- name: referenced_question
dtype: string
- name: answer
dtype: string
- name: answer_non_retrieval_embeddings
dtype: string
- name: answer_retrieval_embeddings
dtype: string
- name: question
dtype: string
- name: question_retrieval_embeddings
dtype: string
- name: question_non_retrieval_embeddings
dtype: string
- name: __index_level_0__
dtype: float64
- name: case_index
dtype: float64
- name: pipeline_6_case_indexes
sequence: int64
- name: pipeline_7_case_indexes
sequence: int64
splits:
- name: train
num_bytes: 138068314
num_examples: 208
download_size: 33205125
dataset_size: 138068314
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
imperialwarrior/open-australian-legal-qa-paraphrased-easy-gpt-with-emb | imperialwarrior | "2024-03-13T04:04:53Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:tabular",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-03-09T04:39:54Z" | ---
dataset_info:
features:
- name: pipeline_1_result
dtype: string
- name: pipeline_1_result_r_embeddings
sequence: float64
- name: pipeline_1_result_nr_embeddings
sequence: float64
- name: pipeline_2_context
dtype: string
- name: pipeline_2_result
dtype: string
- name: pipeline_2_result_r_embeddings
sequence: float64
- name: pipeline_2_result_nr_embeddings
sequence: float64
- name: pipeline_3_context
dtype: string
- name: pipeline_3_result
dtype: string
- name: pipeline_3_result_r_embeddings
sequence: float64
- name: pipeline_3_result_nr_embeddings
sequence: float64
- name: pipeline_4_context
dtype: string
- name: pipeline_4_result
dtype: string
- name: pipeline_4_result_r_embeddings
sequence: float64
- name: pipeline_4_result_nr_embeddings
sequence: float64
- name: pipeline_5_context
dtype: string
- name: pipeline_5_result
dtype: string
- name: pipeline_5_result_r_embeddings
sequence: float64
- name: pipeline_5_result_nr_embeddings
sequence: float64
- name: pipeline_6_context
dtype: string
- name: pipeline_6_result
dtype: string
- name: pipeline_6_result_r_embeddings
sequence: float64
- name: pipeline_6_result_nr_embeddings
sequence: float64
- name: pipeline_7_context
dtype: string
- name: pipeline_7_result
dtype: string
- name: pipeline_7_result_r_embeddings
sequence: float64
- name: pipeline_7_result_nr_embeddings
sequence: float64
- name: referenced_question
dtype: string
- name: answer
dtype: string
- name: answer_non_retrieval_embeddings
dtype: string
- name: answer_retrieval_embeddings
dtype: string
- name: question
dtype: string
- name: question_retrieval_embeddings
dtype: string
- name: question_non_retrieval_embeddings
dtype: string
- name: __index_level_0__
dtype: float64
- name: case_index
dtype: float64
- name: pipeline_6_case_indexes
sequence: int64
- name: pipeline_7_case_indexes
sequence: int64
splits:
- name: train
num_bytes: 137944644
num_examples: 208
download_size: 32779364
dataset_size: 137944644
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
Prajwal3009/Gemma_unisys | Prajwal3009 | "2024-03-09T05:06:16Z" | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-03-09T04:50:30Z" | ---
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 295341
num_examples: 1267
download_size: 95471
dataset_size: 295341
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
jjpetrisko/authentiface_v1.0 | jjpetrisko | "2024-03-10T01:25:11Z" | 0 | 0 | [
"size_categories:10K<n<100K",
"format:parquet",
"modality:image",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-03-09T04:56:23Z" | ---
dataset_info:
features:
- name: image
dtype: image
- name: label
dtype:
class_label:
names:
'0': fake
'1': real
splits:
- name: train
num_bytes: 3488320382.368
num_examples: 68832
- name: validation
num_bytes: 525649096.534
num_examples: 9862
- name: test
num_bytes: 1033989495.113
num_examples: 19581
download_size: 5045508970
dataset_size: 5047958974.015
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: validation
path: data/validation-*
- split: test
path: data/test-*
---
|
oneseco-media/djscrew-dataset | oneseco-media | "2024-04-07T17:07:16Z" | 0 | 3 | [
"task_categories:table-question-answering",
"task_categories:question-answering",
"task_categories:text-classification",
"license:artistic-2.0",
"size_categories:n<1K",
"format:text",
"modality:text",
"library:datasets",
"library:mlcroissant",
"region:us",
"music"
] | [
"table-question-answering",
"question-answering",
"text-classification"
] | "2024-03-09T05:00:26Z" | ---
license: artistic-2.0
task_categories:
- table-question-answering
- question-answering
- text-classification
tags:
- music
pretty_name: DJScrewBookofChapters
size_categories:
- n<1K
--- |
peterandrew987/dev-indo-tydiaqa | peterandrew987 | "2024-03-09T05:30:52Z" | 0 | 0 | [
"size_categories:n<1K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-03-09T05:21:22Z" | ---
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
struct:
- name: answer_start
dtype: int64
- name: text
dtype: string
- name: indonesian_answers
struct:
- name: answer_start
dtype: int64
- name: text
dtype: string
- name: postags
sequence:
sequence:
sequence: string
splits:
- name: train
num_bytes: 513862
num_examples: 565
download_size: 284921
dataset_size: 513862
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
# Dataset Card for "dev-indo-tydiaqa"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
botchagalupe/opencontext | botchagalupe | "2024-03-09T05:42:18Z" | 0 | 0 | [
"license:cc-by-4.0",
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-03-09T05:39:04Z" | ---
license: cc-by-4.0
dataset_info:
features:
- name: text
dtype: string
splits:
- name: train
num_bytes: 52445
num_examples: 1413
download_size: 20530
dataset_size: 52445
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|
RomilsonB/henryfreitasss | RomilsonB | "2024-03-09T06:01:31Z" | 0 | 0 | [
"license:openrail",
"size_categories:n<1K",
"format:audiofolder",
"modality:audio",
"library:datasets",
"library:mlcroissant",
"region:us"
] | null | "2024-03-09T06:00:55Z" | ---
license: openrail
---
|
boapps/jowiki-qa | boapps | "2024-03-09T07:50:13Z" | 0 | 1 | [
"task_categories:question-answering",
"language:hu",
"license:cc-by-sa-3.0",
"size_categories:10K<n<100K",
"format:json",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | [
"question-answering"
] | "2024-03-09T06:03:31Z" | ---
license: cc-by-sa-3.0
task_categories:
- question-answering
language:
- hu
size_categories:
- 10K<n<100K
---
I selected excerpts from articles in the [jowiki](https://huggingface.co/datasets/boapps/jowiki) corpus and generated a question and an answer for each with `gemini-pro`.
I think this could be useful, for example, for training the embedding component of RAG systems. |
Thunder-rk/stories-t5-1 | Thunder-rk | "2024-03-09T06:53:00Z" | 0 | 0 | [
"size_categories:1K<n<10K",
"format:parquet",
"modality:text",
"library:datasets",
"library:pandas",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-03-09T06:19:59Z" | ---
dataset_info:
features:
- name: prompt
dtype: string
- name: answer
dtype: string
splits:
- name: train
num_bytes: 2951011.851190476
num_examples: 1999
- name: test
num_bytes: 1265141.1488095238
num_examples: 857
download_size: 1741551
dataset_size: 4216153.0
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
- split: test
path: data/test-*
---
|
theojiang/image-text-dataset-subset-300k-captions_only_with_latents | theojiang | "2024-03-10T06:02:43Z" | 0 | 0 | [
"size_categories:100K<n<1M",
"format:parquet",
"modality:image",
"modality:text",
"modality:timeseries",
"library:datasets",
"library:dask",
"library:mlcroissant",
"library:polars",
"region:us"
] | null | "2024-03-09T07:09:48Z" | ---
dataset_info:
features:
- name: image
dtype: image
- name: caption
dtype: string
- name: CLIP_text_latent
sequence: float32
- name: SD_VAE_image_latent
sequence:
sequence:
sequence: float32
splits:
- name: train
num_bytes: 57507528731.75
num_examples: 380530
download_size: 60531502833
dataset_size: 57507528731.75
configs:
- config_name: default
data_files:
- split: train
path: data/train-*
---
|