Task: text-classification
| classes (bool, 2 classes) | text (string, lengths 0–664k) |
|---|---|
false |
# Dataset Card for ZINC
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [External Use](#external-use)
- [PyGeometric](#pygeometric)
- [Data... |
false | # laion2B-multi-korean-subset
## Dataset Description
- **Homepage:** [laion-5b](https://laion.ai/blog/laion-5b/)
- **Huggingface:** [laion/laion2B-multi](https://huggingface.co/datasets/laion/laion2B-multi)
## About dataset
a subset data of [laion/laion2B-multi](https://huggingface.co/datasets/laion/laion2B-multi), i... |
false | ### Dataset Summary
Dataset of satirical news from "Panorama", the Russian counterpart of "The Onion".
### Dataset Format
The dataset is in JSON Lines format, where "title" is the article title and "body" contains the article text. |
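Records in this layout can be read line by line with the standard library (a sketch; the actual file name in the release is not stated here, so the path is a placeholder):

```python
import json

def read_jsonl(path):
    """Yield one article dict per line of a JSONL file with 'title' and 'body' keys."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            if line.strip():  # skip blank lines
                yield json.loads(line)
```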
false | EN-ME Special Chars is a dataset of roughly 58000 aligned sentence pairs in English and Middle English, collected from the works of Geoffrey Chaucer, John Wycliffe, and the Gawain Poet.
It includes special characters such as þ.
This dataset reflects the spelling inconsistencies characteristic of Middle English.
|
false |
# Dataset Card for `reviews_with_drift`
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#data... |
false | EN-ME Special Chars is a dataset of roughly 58000 aligned sentence pairs in English and Middle English, collected from the works of Geoffrey Chaucer, John Lydgate, John Wycliffe, and the Gawain Poet.
It includes special characters such as þ.
There is mild standardization, but this dataset reflects the spelling incon... |
false | Dataset of sentences about professions; half of the translations use feminine forms and half use masculine forms.
How to use it:
```python
from datasets import load_dataset
remote_dataset = load_dataset("VanessaSchenkel/handmade-dataset", field="data")
remote_dataset
```
Output:
```
DatasetDict({
train: Dataset(... |
true |
# Indonesian News Categorization
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-str... |
false |
### Dataset Summary
KoPI-CC (Korpus Perayapan Indonesia-CC) is an Indonesian-only extract from Common Crawl snapshots produced with [ungoliant](https://github.com/oscar-corpus/ungoliant); each snapshot is also filtered with deduplication techniques such as exact-hash (MD5) dedup and MinHash LSH near-dedup
### ... |
true | # AutoTrain Dataset for project: provision_classification
## Dataset Description
This dataset has been automatically processed by AutoTrain for project provision_classification.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset loo... |
false |
About Dataset
Context
This dataset contains news headlines published over a period of nineteen years,
sourced from the reputable Australian broadcaster ABC (Australian Broadcasting Corporation).
Agency site: http://www.abc.net.au
Content
Format: CSV; single file
publish_date: Date of publishing for the arti... |
false |
# Dataset Card for Cerpen Corpus
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-str... |
false |
# Dataset Card for Visual Spatial Reasoning
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#... |
false |
# Dataset Card for Indonesian News Title Generation
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Stru... |
false |
This is the summarization datasets collected by TextBox, including:
- CNN/Daily Mail (cnndm)
- XSum (xsum)
- SAMSum (samsum)
- WLE (wle)
- Newsroom (nr)
- WikiHow (wikihow)
- MicroSoft News (msn)
- MediaSum (mediasum)
- English Gigaword (eg).
The detail and leaderboard of each dataset can be found in [TextBox page](h... |
false |
This is the commonsense generation datasets collected by TextBox, including:
- CommonGen (cg).
The detail and leaderboard of each dataset can be found in [TextBox page](https://github.com/RUCAIBox/TextBox#dataset). |
false |
This is the question generation datasets collected by TextBox, including:
- SQuAD (squadqg)
- CoQA (coqaqg)
- NewsQA (newsqa)
- HotpotQA (hotpotqa)
- MS MARCO (marco)
- MSQG (msqg)
- NarrativeQA (nqa)
- QuAC (quac).
The detail and leaderboard of each dataset can be found in [TextBox page](https://github.com/RUCAIBox/... |
false |
This is the simplification datasets collected by TextBox, including:
- WikiAuto + Turk/ASSET (wia-t).
The detail and leaderboard of each dataset can be found in [TextBox page](https://github.com/RUCAIBox/TextBox#dataset). |
false |
This is the task dialogue datasets collected by TextBox, including:
- MultiWOZ 2.0 (multiwoz)
- MetaLWOZ (metalwoz)
- KVRET (kvret)
- WOZ (woz)
- CamRest676 (camres676)
- Frames (frames)
- TaskMaster (taskmaster)
- Schema-Guided (schema)
- MSR-E2E (e2e_msr).
The detail and leaderboard of each dataset can be found in ... |
false |
# Dataset Card for Swedish Xsum Dataset
The Swedish xsum dataset has been machine-translated only, with the aim of improving downstream fine-tuning on Swedish summarization tasks.
## Dataset Summary
Read about the full details at original English version: https://huggingface.co/datasets/xsum
### Data Fields
- `id`: a string containi... |
false |
# Dataset Card for Swedish Wiki_lingua Dataset
The Swedish wiki_lingua dataset has been machine-translated only, with the aim of improving downstream fine-tuning on Swedish summarization tasks.
## Dataset Summary
Read about the full details at original Multilingual version: https://huggingface.co/datasets/wiki_lingua
### Data detai... |
false |
# Dataset Card for Indonesian Sentence Paraphrase Detection
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Data... |
false |
# Dataset Card for broad_twitter_corpus
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [D... |
false |
# Dataset Card for Indonesian Movie Subtitle
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](... |
false | |
true | |
false |
# Dataset Card for KOMET
### Dataset Summary
KOMET 1.0 is a hand-annotated Slovenian corpus of metaphorical expressions which contains about 200 000 words (across 13 963 sentences) from Slovene journalistic, fiction and online texts.
### Supported Tasks and Leaderboards
Metaphor detection, metaphor type classific... |
false |
# Dataset Card for "tner/ttc" (Dummy)
***WARNING***: This is a dummy dataset for `ttc` and the correct one is [`tner/ttc`](https://huggingface.co/datasets/tner/ttc), which is private since **TTC dataset is not publicly released at this point**. We will grant you an access to the `tner/ttc` dataset, once you retained ... |
false |
# Dataset Card for [COCO]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)... |
false | # Dataset Card for NSME-COM
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
... |
false |
# naab-raw (raw version of the naab corpus)
_[If you want to join our community to keep up with news, models and datasets from naab, click on [this](https://docs.google.com/forms/d/e/1FAIpQLSe8kevFl_ODCx-zapAuOIAQYr8IvkVVaVHOuhRL9Ha0RVJ6kg/viewform) link.]_
## Table of Contents
- [Dataset Card Creation Guide](#datase... |
true |
# WikiCAT_ca: Catalan Text Classification dataset
## Dataset Description
- **Paper:**
- **Point of Contact:** carlos.rodriguez1@bsc.es
**Repository**
https://github.com/TeMU-BSC/WikiCAT
### Dataset Summary
WikiCAT_ca is a Catalan corpus for thematic Text Classification tasks. It is created automagically fro... |
false |
This is a copy of the [Multi-XScience](https://huggingface.co/datasets/multi_x_science_sum) dataset, except the input source documents of its `test` split have been replaced by a __sparse__ retriever. The retrieval pipeline used:
- __query__: The `related_work` field of each example
- __corpus__: The union of all doc... |
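The card doesn't inline the retrieval code; a minimal BM25-style sparse scorer of the kind such a pipeline relies on might look like this (a sketch with illustrative parameter defaults, not the authors' implementation):

```python
import math
from collections import Counter

def bm25_scores(query, corpus, k1=1.5, b=0.75):
    """Score each document in `corpus` against `query` with a BM25-style formula."""
    docs = [doc.lower().split() for doc in corpus]
    n = len(docs)
    avgdl = sum(len(d) for d in docs) / n
    # document frequency per term
    df = Counter()
    for d in docs:
        df.update(set(d))
    scores = []
    for d in docs:
        tf = Counter(d)
        score = 0.0
        for term in query.lower().split():
            if term not in tf:
                continue
            idf = math.log(1 + (n - df[term] + 0.5) / (df[term] + 0.5))
            score += idf * tf[term] * (k1 + 1) / (
                tf[term] + k1 * (1 - b + b * len(d) / avgdl))
        scores.append(score)
    return scores
```

Ranking the corpus by these scores and keeping the top-k documents reproduces the basic query/corpus setup described above.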
false |
# Dataset Card for Inglish: Indonesian English Translation Dataset
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
... |
false |
# Dataset Card for EstCOPA
### Dataset Summary
EstCOPA is an extended version of [XCOPA](https://huggingface.co/datasets/xcopa) that was created with a goal to further investigate Estonian language understanding of large language models. EstCOPA provides two new versions of train, eval and test datasets in Estonian:... |
false |
# Dataset Card for GitHub-Issues
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-str... |
false |
# Dataset Card for 20Q
|
false |
# Dataset Card for [Dataset Name]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-st... |
false | # AutoTrain Dataset for project: image-classification-test-18
## Dataset Description
This dataset has been automatically processed by AutoTrain for project image-classification-test-18.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dat... |
false | # Dataset Card for librispeech_asr
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instanc... |
false |
This is a copy of the [Multi-XScience](https://huggingface.co/datasets/multi_x_science_sum) dataset, except the input source documents of its `test` split have been replaced by a __sparse__ retriever. The retrieval pipeline used:
- __query__: The `related_work` field of each example
- __corpus__: The union of all doc... |
false |
This is a copy of the [Multi-XScience](https://huggingface.co/datasets/multi_x_science_sum) dataset, except the input source documents of its `test` split have been replaced by a __sparse__ retriever. The retrieval pipeline used:
- __query__: The `related_work` field of each example
- __corpus__: The union of all doc... |
false | # Dataset Card for [Dataset Name]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-stru... |
true |
# Dataset Card for "ArabicNLPDataset"
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Preprocessing](#dataset-preprocessing)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- ... |
false |
This is a copy of the [MS^2](https://huggingface.co/datasets/allenai/mslr2022) dataset, except the input source documents of its `validation` split have been replaced by a __sparse__ retriever. The retrieval pipeline used:
- __query__: The `background` field of each example
- __corpus__: The union of all documents in... |
false |
This is a copy of the [MS^2](https://huggingface.co/datasets/allenai/mslr2022) dataset, except the input source documents of its `validation` split have been replaced by a __sparse__ retriever. The retrieval pipeline used:
- __query__: The `background` field of each example
- __corpus__: The union of all documents in... |
false | 30,000 256x256 mel spectrograms of 5 second samples that have been used in music, sourced from [WhoSampled](https://whosampled.com) and [YouTube](https://youtube.com). The code to convert from audio to spectrogram and vice versa can be found in https://github.com/teticio/audio-diffusion along with scripts to train and ... |
true |
# Dataset Card for "UnpredicTable-5k" - Dataset of Few-shot Tasks from Tables
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [... |
true |
# Dataset Card for "UnpredicTable-unique" - Dataset of Few-shot Tasks from Tables
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
... |
false |
KoPI (Korpus Perayapan Indonesia) is a general Indonesian corpus for sequence language modelling.
Subsets of the KoPI corpus:
KoPI-CC + KoPI-CC-NEWS + KoPI-Mc4 + KoPI-Wiki + KoPI-Leipzig + KoPI-Paper |
false |
# Dataset Card for ScandiQA
## Dataset Description
- **Repository:** <https://github.com/alexandrainst/scandi-qa>
- **Point of Contact:** [Dan Saattrup Nielsen](mailto:dan.nielsen@alexandra.dk)
- **Size of downloaded dataset files:** 69 MB
- **Size of the generated dataset:** 67 MB
- **Total amount of disk used:** 1... |
false |
# Dataset Card for alchemy
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [External Use](#external-use)
- [PyGeometric](#pygeometric)
- [D... |
false |
# Dataset Card for aspirin
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [External Use](#external-use)
- [PyGeometric](#pygeometric)
- [D... |
false |
# Dataset Card for benzene
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [External Use](#external-use)
- [PyGeometric](#pygeometric)
- [D... |
false |
# Dataset Card for ethanol
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [External Use](#external-use)
- [PyGeometric](#pygeometric)
- [D... |
false |
# Dataset Card for malonaldehyde
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [External Use](#external-use)
- [PyGeometric](#pygeometric... |
true |
# Dataset Card for VaccinChatNL
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-stru... |
false |
# Dataset Card for naphthalene
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [External Use](#external-use)
- [PyGeometric](#pygeometric)
... |
false |
# Dataset Card for salicylic_acid
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [External Use](#external-use)
- [PyGeometric](#pygeometri... |
false |
# Dataset Card for toluene
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [External Use](#external-use)
- [PyGeometric](#pygeometric)
- [D... |
false |
# Dataset Card for uracil
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [External Use](#external-use)
- [PyGeometric](#pygeometric)
- [Da... |
false | # AutoTrain Dataset for project: dog-classifiers
## Dataset Description
This dataset has been automatically processed by AutoTrain for project dog-classifiers.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
``... |
false |
### dataset description
We downloaded the ZINC dataset from [here](https://zinc15.docking.org/) and canonicalized it.
We used the following function to canonicalize the data and removed some SMILES that cannot be read by RDKit.
```python
from rdkit import Chem
def canonicalize(mol):
mol = Chem.MolToSmiles(Chem.M... |
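The helper above is cut off in this preview; a complete canonicalization function along the same lines might look like this (a sketch, not the authors' exact code — their version may differ in how unreadable SMILES are dropped):

```python
from rdkit import Chem

def canonicalize(smiles):
    # Parse the SMILES string; RDKit returns None when it cannot be read
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        return None  # caller can filter these entries out of the dataset
    # Re-emit the molecule as a canonical SMILES string
    return Chem.MolToSmiles(mol)
```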
false |
# Dataset Card for Yandex_Jobs
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances... |
false | |
false | # Dataset Card for MetaQA Agents' Predictions
## Dataset Description
- **Repository:** [MetaQA's Repository](https://github.com/UKPLab/MetaQA)
- **Paper:** [MetaQA: Combining Expert Agents for Multi-Skill Question Answering](https://arxiv.org/abs/2112.01922)
- **Point of Contact:** [Haritz Puerto](mailto:puerto@ukp.in... |
false | # AutoTrain Dataset for project: satellite-image-classification
## Dataset Description
This dataset has been automatically processed by AutoTrain for project satellite-image-classification.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this... |
false |
# Dataset Card for Europarl v7 (en-it split)
This dataset contains only the English-Italian split of Europarl v7.
We created the dataset to provide it to the [M2L 2022 Summer School](https://www.m2lschool.org/) students.
For all the information on the dataset, please refer to: [https://www.statmt.org/europarl/](http... |
false | # Battery Device QA Data
Battery device records, including anode, cathode, and electrolyte.
Examples of the question answering evaluation dataset:
{'question': 'What is the cathode?', 'answer': 'Al foil', 'context': 'The blended slurry was then cast onto a clean current collector (Al foil for the cathode and Cu fo... |
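For extractive QA training, each record's answer span can be located in the context by simple string search (a sketch; `answer_start` is an assumed field name, mirroring the SQuAD convention rather than anything stated in this card):

```python
def add_answer_start(example):
    """Attach the character offset of the answer inside the context (SQuAD-style)."""
    start = example["context"].find(example["answer"])
    if start == -1:
        raise ValueError("answer not found verbatim in context")
    return {**example, "answer_start": start}
```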
false |
# Abbreviation Detection Dataset
## Original Data Source
#### PLOS
I. Zilio, H. Saadany, P. Sharma, D. Kanojia and C. Orasan, PLOD: An Abbreviation Detection Dataset for Scientific Documents, 2022, https://arxiv.org/abs/2204.12061.
#### SDU@AAAI-21
A. P. B. Veyseh, F. Dernoncourt, Q. H. Tran and T. H. Nguyen,
Pr... |
false |
# CNER Dataset
## Original Data Source
#### CHEMDNER
M. Krallinger, O. Rabal, F. Leitner, M. Vazquez, D. Salgado, Z. Lu, R. Leaman, Y. Lu, D. Ji, D. M. Lowe et al., J. Cheminf., 2015, 7, 1–17.
#### MatScholar
I. Weston, V. Tshitoyan, J. Dagdelen, O. Kononova, A. Trewartha, K. A. Persson, G. Ceder and A. Jain, J. C...
false |
# School Notebooks Dataset
Images of school notebooks with handwritten notes in Russian.
The dataset annotation contains end-to-end markup for training detection and OCR models, as well as end-to-end models for reading text from pages.
## Annotation format
The annotation is in COCO format. The `annotation.js... |
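COCO-format annotations are plain JSON, so a page's boxes can be pulled out with the standard library (a sketch; the field names follow the generic COCO schema, not a layout verified against this release):

```python
import json
from collections import defaultdict

def boxes_per_image(path):
    """Map image id -> list of [x, y, w, h] boxes from a COCO annotation file."""
    with open(path, encoding="utf-8") as f:
        coco = json.load(f)
    boxes = defaultdict(list)
    for ann in coco["annotations"]:
        boxes[ann["image_id"]].append(ann["bbox"])
    return dict(boxes)
```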
false | # Dataset Card for LibriVox Indonesia 1.0
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data... |
false | # AutoTrain Dataset for project: donut-vs-croissant
## Dataset Description
This dataset has been automatically processed by AutoTrain for project donut-vs-croissant.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follow... |
false |
# Dataset card for Encyclopaedia Britannica Illustrated
## Table of Contents
- [Dataset Card Creation Guide](#dataset-card-creation-guide)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#su... |
false |
# LAION-Aesthetics :: CLIP → UMAP
This dataset is a CLIP (text) → UMAP embedding of the [LAION-Aesthetics dataset](https://laion.ai/blog/laion-aesthetics/) - specifically the [`improved_aesthetics_6plus` version](https://huggingface.co/datasets/ChristophSchuhmann/improved_aesthetics_6plus), which filters the full dat... |
false |
# CABank Japanese Sakura Corpus
- Susanne Miyata
- Department of Medical Sciences
- Aichi Shukutoku University
- smiyata@asu.aasa.ac.jp
- website: https://ca.talkbank.org/access/Sakura.html
## Important
This data set is a copy from the original one located at https://ca.talkbank.org/access/Sakura.html.
## Details
... |
false |
# CABank Japanese CallHome Corpus
- Participants: 120
- Type of Study: phone call
- Location: United States
- Media type: audio
- DOI: doi:10.21415/T5H59V
- Web: https://ca.talkbank.org/access/CallHome/jpn.html
## Citation information
Some citation here.
In accordance with TalkBank rules, any use of data f... |
false |
This is a copy of the [Cochrane](https://huggingface.co/datasets/allenai/mslr2022) dataset, except the input source documents of its `validation` split have been replaced by a __sparse__ retriever. The retrieval pipeline used:
- __query__: The `target` field of each example
- __corpus__: The union of all documents in... |
false |
This is a copy of the [Cochrane](https://huggingface.co/datasets/allenai/mslr2022) dataset, except the input source documents of its `validation` split have been replaced by a __sparse__ retriever. The retrieval pipeline used:
- __query__: The `target` field of each example
- __corpus__: The union of all documents in... |
false |
This is a copy of the [Cochrane](https://huggingface.co/datasets/allenai/mslr2022) dataset, except the input source documents of its `validation` split have been replaced by a __sparse__ retriever. The retrieval pipeline used:
- __query__: The `target` field of each example
- __corpus__: The union of all documents in... |
false |
# Şalom Ladino articles text corpus
Text corpus compiled from 397 articles from the Judeo-Espanyol section of [Şalom newspaper](https://www.salom.com.tr/haberler/17/judeo-espanyol). Original sentences and articles belong to Şalom.
Size: 176,843 words
[Official link](https://data.sefarad.com.tr/dataset/salom-ladino... |
false |
# Una fraza al diya
Ladino language-learning sentences prepared by Karen Sarhon of the Sephardic Center of Istanbul. Each sentence has translations in Turkish, English, and Spanish, and includes audio and an image; 307 sentences in total.
Source: https://sefarad.com.tr/judeo-espanyolladino/frazadeldia/
Images and audio: http://co... |
false |
# Data card for Internet Archive historic book pages (unlabelled)
- `10,844,387` unlabelled pages from historical books from the Internet Archive.
- Intended to be used for:
- pre-training computer vision models in an unsupervised manner
- using weak supervision to generate labels |
true |
# Dataset Card for Kelly
Keywords for Language Learning for Young and adults alike
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Insta... |
true |
A Korean proverb dataset for NLI.
'question' contains the meaning of a proverb together with five multiple-choice options,
and 'label' contains the number (0–4) of the correct answer.
licence: cc-by-sa-2.0-kr (original source: the National Institute of Korean Language's Standard Korean Dictionary)
|Model| psyche/korean_idioms |
|:------:|:---:|
|klue/bert-base|0.7646| |
true |
|Model| psyche/bool_sentence (10k) |
|:------:|:---:|
|klue/bert-base|0.9335|
licence: cc-by-sa-2.0-kr (original source: the National Institute of Korean Language's Standard Korean Dictionary) |
true | # AutoTrain Dataset for project: consbert
## Dataset Description
This dataset has been automatically processed by AutoTrain for project consbert.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
... |
false | # AutoTrain Dataset for project: opus-mt-en-zh_hanz
## Dataset Description
This dataset has been automatically processed by AutoTrain for project opus-mt-en-zh_hanz.
### Languages
The BCP-47 code for the dataset's language is en2zh.
## Dataset Structure
### Data Instances
A sample from this dataset looks as foll... |
false |
# Dataset Card for **slone/myv_ru_2022**
## Dataset Description
- **Repository:** https://github.com/slone-nlp/myv-nmt
- **Paper:**: https://arxiv.org/abs/2209.09368
- **Point of Contact:** @cointegrated
### Dataset Summary
This is a corpus of parallel Erzya-Russian words, phrases and sentences, collected in the p... |
false | 256x256 mel spectrograms of 5 second samples of instrumental Hip Hop. The code to convert from audio to spectrogram and vice versa can be found in https://github.com/teticio/audio-diffusion along with scripts to train and run inference using De-noising Diffusion Probabilistic Models.
```
x_res = 256
y_res = 256
sample... |
false | A sampled version of the [CCMatrix](https://huggingface.co/datasets/yhavinga/ccmatrix) dataset for the English-Romanian pair, containing 1M train entries.
Please refer to the original for more info. |
false |
# Mario Maker 2 level comments
Part of the [Mario Maker 2 Dataset Collection](https://tgrcode.com/posts/mario_maker_2_datasets)
## Dataset Description
The Mario Maker 2 level comment dataset consists of 31.9 million level comments from Nintendo's online service totaling around 20GB of data. The dataset was created us... |
false |
# Mario Maker 2 level plays
Part of the [Mario Maker 2 Dataset Collection](https://tgrcode.com/posts/mario_maker_2_datasets)
## Dataset Description
The Mario Maker 2 level plays dataset consists of 1 billion level plays from Nintendo's online service totaling around 20GB of data. The dataset was created using the sel... |
false |
# Mario Maker 2 level deaths
Part of the [Mario Maker 2 Dataset Collection](https://tgrcode.com/posts/mario_maker_2_datasets)
## Dataset Description
The Mario Maker 2 level deaths dataset consists of 564 million level deaths from Nintendo's online service totaling around 2.5GB of data. The dataset was created using t... |
false |
# Mario Maker 2 users
Part of the [Mario Maker 2 Dataset Collection](https://tgrcode.com/posts/mario_maker_2_datasets)
## Dataset Description
The Mario Maker 2 users dataset consists of 6 million users from Nintendo's online service totaling around 1.2GB of data. The dataset was created using the self-hosted [Mario M... |
false |
# Mario Maker 2 user badges
Part of the [Mario Maker 2 Dataset Collection](https://tgrcode.com/posts/mario_maker_2_datasets)
## Dataset Description
The Mario Maker 2 user badges dataset consists of 9328 user badges (they are capped to 10k globally) from Nintendo's online service and adds onto `TheGreatRambler/mm2_use... |
false |
# Mario Maker 2 user plays
Part of the [Mario Maker 2 Dataset Collection](https://tgrcode.com/posts/mario_maker_2_datasets)
## Dataset Description
The Mario Maker 2 user plays dataset consists of 329.8 million user plays from Nintendo's online service totaling around 2GB of data. The dataset was created using the sel... |
false |
# Mario Maker 2 user likes
Part of the [Mario Maker 2 Dataset Collection](https://tgrcode.com/posts/mario_maker_2_datasets)
## Dataset Description
The Mario Maker 2 user likes dataset consists of 105.5 million user likes from Nintendo's online service totaling around 630MB of data. The dataset was created using the s... |
false |
# Mario Maker 2 user uploaded
Part of the [Mario Maker 2 Dataset Collection](https://tgrcode.com/posts/mario_maker_2_datasets)
## Dataset Description
The Mario Maker 2 user uploaded dataset consists of 26.5 million uploaded user levels from Nintendo's online service totaling around 215MB of data. The dataset was crea... |