| text-classification (bool, 2 classes) | text (string, lengths 0-664k) |
|---|---|
false | # Dataset Card for "riffusion-musiccaps-datasets-768"
Converted google/musicCaps to spectrograms with audio_to_spectrum using the riffusion CLI.
A random 7.68-second clip was taken for each track in musicCaps.
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
false |
# Dataset card for "george-chou/pianos_mel"
## Usage
```
from datasets import load_dataset
data = load_dataset("george-chou/pianos_mel")
trainset = data['train']
validset = data['validation']
testset = data['test']
labels = trainset.features['label'].names
for item in trainset:
    print('image: ', item['image'].con... |
true |
# Dataset Card for Peewee Issues
## Dataset Summary
Peewee Issues is a dataset containing all the issues in the [Peewee GitHub repository](https://github.com/coleifer/peewee) up to the last date of extraction (5/3/2023). It has been made with educational purposes in mind (specifically, to get me used to using Hugging... |
false |
## Dataset Description
- **Repository:** [Link to repo](https://github.com/VityaVitalich/IMAD)
- **Paper:** [IMage Augmented multi-modal Dialogue: IMAD](https://arxiv.org/abs/2305.10512v1)
- **Point of Contact:** [Contacts Section](https://github.com/VityaVitalich/IMAD#contacts)
### Dataset Summary
This dataset con... |
false |
# MAP
An SQLite database of video URLs and captions/descriptions. |
false |
Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage, or the question might be unanswerable.
This is an Indo... |
false | # Dataset Card for Tapir-Cleaned
This is a revised version of the DAISLab dataset of IFTTT rules, which has been thoroughly cleaned, scored, and adjusted for the purpose of instruction-tuning.
## Tapir Dataset Summary
Tapir is a subset of the larger DAISLab dataset, which comprises 242,480 recipes extracted from the... |
false | # Dataset Card for Tapir-Cleaned
This is a revised version of the DAISLab dataset of IFTTT rules, which has been thoroughly cleaned, scored, and adjusted for the purpose of instruction-tuning.
## Tapir Dataset Summary
Tapir is a subset of the larger DAISLab dataset, which comprises 242,480 recipes extracted from the... |
false | # Dataset Card for DIALOGSum Corpus
## Dataset Description
### Links
- **Homepage:** https://aclanthology.org/2021.findings-acl.449
- **Repository:** https://github.com/cylnlp/dialogsum
- **Paper:** https://aclanthology.org/2021.findings-acl.449
### Dataset Summary
DialogSum is a large-scale dialogue summarization dat... |
false |
curr. size: 53,081 videos
goal (todo): 100,000+ |
false | # Dataset Card for "code-search-net-ruby"
## Dataset Description
- **Homepage:** None
- **Repository:** https://huggingface.co/datasets/Nan-Do/code-search-net-go
- **Paper:** None
- **Leaderboard:** None
- **Point of Contact:** [@Nan-Do](https://github.com/Nan-Do)
### Dataset Summary
This dataset is the Ruby porti... |
false |
MMC4-130k is a dataset obtained by sampling roughly 130k high-similarity image-text pairs from MMC4.
We plan to translate this subset progressively.
We will also gradually release more datasets to HF, including:
- [ ] A Chinese translation of COCO Caption
- [ ] A Chinese translation of CoQA
- [ ] Embedding data for CNewSum
- [ ] Augmented open-domain QA data
- [x] A Chinese translation of WizardLM
If you are also preparing these datasets, feel free to contact us so we can avoid duplicating costs.
# Luotuo (骆驼): An Open-Source Chinese Large Language Model
[https://github.com/LC1332/Luotuo-Chinese-LLM](https://github.com/LC133... |
false | # AutoTrain Dataset for project: doodles-30
## Dataset Description
This dataset has been automatically processed by AutoTrain for project doodles-30.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
... |
false |
- subset from https://huggingface.co/datasets/liuhaotian/LLaVA-Instruct-150K
- train: 21000
- val seen: 3000
- val unseen: 2100
- test: 6000 |
false | # Dataset Card for Tapir-Cleaned
This is a revised version of the DAISLab dataset of IFTTT rules, which has been thoroughly cleaned, scored, and adjusted for the purpose of instruction-tuning.
## Tapir Dataset Summary
Tapir is a subset of the larger DAISLab dataset, which comprises 242,480 recipes extracted from the... |
false | # Dataset Card for "instructional_code-search-net-ruby"
## Dataset Description
- **Homepage:** None
- **Repository:** https://huggingface.co/datasets/Nan-Do/instructional_code-search-net-ruby
- **Paper:** None
- **Leaderboard:** None
- **Point of Contact:** [@Nan-Do](https://github.com/Nan-Do)
### Dataset Summary
T... |
false | # Dataset Card for "instructional_code-search-net-php"
## Dataset Description
- **Homepage:** None
- **Repository:** https://huggingface.co/datasets/Nan-Do/instructional_code-search-net-php
- **Paper:** None
- **Leaderboard:** None
- **Point of Contact:** [@Nan-Do](https://github.com/Nan-Do)
### Dataset Summary
Thi... |
true |
STS 2012-2016 datasets
|
true |
This dataset contains more than 250k articles obtained from the Polish news site `tvp.info.pl`.
The main purpose of collecting the data was to create a transformer-based model for text summarization.
Columns:
* `link` - link to article
* `title` - original title of the article
* `headline` - lead/headline of the article - f... |
false | # Dataset Card for "hotel_reviews"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
false | The dataset contains 20,703 records. It was created by removing from the original 27k dataset all items that had a BLEU score of 0 or greater than 0.3388.
|
false |
# VoxCeleb 1
VoxCeleb1 contains over 100,000 utterances for 1,251 celebrities, extracted from videos uploaded to YouTube.
## Verification Split
| | train | validation | test |
| :---: | :---: | :---: | :---: |
| # of speakers | 1211 | 1211 | 40 |
| # of samples | 299246 | 33672 | 4874 |
## References
- https://w... |
false |
Simple anime image rating prediction task. Data is randomly scraped from Sankaku Complex.
Please note that due to the often unclear boundaries between `safe`, `r15` and `r18` levels,
there is no objective ground truth for this task, and the data is scraped without any manual filtering.
Therefore, the models trained... |
true | Conversation Ending Check |
false |
This dataset contains a selection of Q&A-related tasks gathered and cleaned from the webGPT_comparisons set and the databricks-dolly-15k set.
Unicode escapes were explicitly removed, and Wikipedia citations in the "output" were stripped through regex to hopefully help any
end-product model ignore these artifacts with... |
false | |
false |
The Face Masks ensemble dataset is no longer limited to [Kaggle](https://www.kaggle.com/datasets/henrylydecker/face-masks); it is now coming to Hugging Face!
This dataset was created to help train and/or fine-tune models for detecting masked and unmasked faces.
I created a new face masks object detection dataset by comp... |
true |
# Dataset Card for News_Articles_Categorization
## Table of Contents
- [Dataset Description](#dataset-description)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Source Data](#source-data)
## Dataset Description
29,000 news headlines which are classified into 13 different labels, namely: "Play... |
false |
# Dataset Card for Leading Decision Summarization
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Struct... |
true | |
false | |
false | # Dataset Card for Dataset Name
## Dataset Description
- **Repository:** https://github.com/danielsteinigen/nlp-legal-texts
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](ht... |
false |
# Dataset Card for Dataset Name
### Dataset Summary
It is just a dataset of dolly-15k-jp(*1) converted to JSONL form so that it can be used with SFTTrainer(*2)'s dataset_text_field property.
(*1)https://huggingface.co/datasets/kunishou/databricks-dolly-15k-ja
(*2)https://huggingface.co/docs/trl/main/en/sft_train... |
false |
## LLaVA Visual Instruct CC3M 595K Pretrain Dataset Card
This is a Korean translation of the 595K CC3M Visual Instruction dataset released by [LLaVA](https://llava-vl.github.io/). The dataset was built using the Korean captions already published in [Ko-conceptual-captions](https://github.com/QuoQA-NLP/Ko-conceptual-captions). Since the translation quality is somewhat poor, it may be re-translated with DeepL later.
... |
true |
# Dataset Card for Cryptonews articles with price momentum labels
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- ... |
false | # EasyQA: A Kindergarten-Level QA Dataset for Investigating Truthfulness.
EasyQA is a GPT-3.5-turbo-generated dataset of easy kindergarten-level facts, meant to be used to prompt and evaluate large language models for "common-sense" truthful responses. This dataset was originally created to understand how different ty... |
false | |
true | # Content
This is a dataset of Spotify tracks over a range of **125** different genres. Each track has some audio features associated with it. The data is in `CSV` format which is tabular and can be loaded quickly.
# Usage
The dataset can be used for:
- Building a **Recommendation System** based on some user input ... |
false | |
false | |
false | # rudetoxifier_data_detox
This is a subset of toxic comments from [d0rj/rudetoxifier_data](https://huggingface.co/datasets/d0rj/rudetoxifier_data) which has a detoxified column created by [s-nlp/ruT5-base-detox](https://huggingface.co/s-nlp/ruT5-base-detox). |
false |
Prompts and prompt engineering are essential for guiding language models, enabling control over outputs, generating desired content, fostering creativity,
and enhancing the overall user experience. They form a critical component in the interaction between users and AI systems,
ensuring meaningful and contextually ap... |
false |
# Rakuda - Questions for Japanese models
**Repository**: [https://github.com/yuzu-ai/japanese-llm-ranking](https://github.com/yuzu-ai/japanese-llm-ranking)
This is a set of 40 questions in Japanese about Japanese-specific topics designed to evaluate the capabilities of AI Assistants in Japanese.
The questions are e... |
true |
# Dataset Card for "UnpredicTable-cluster22" - Dataset of Few-shot Tasks from Tables
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure... |
false |
# Disclaimer
This was inspired by https://huggingface.co/datasets/lambdalabs/pokemon-blip-captions
# Dataset Card for A subset of Magic card BLIP captions
_Dataset used to train [Magic card text to image model](https://github.com/LambdaLabsML/examples/tree/main/stable-diffusion-finetuning)_
BLIP generated captio... |
false |
# Dataset Card for Dicionário Português
It is a list of 53,138 Portuguese words with their inflections.
How to use it:
```
from datasets import load_dataset
remote_dataset = load_dataset("VanessaSchenkel/pt-inflections", field="data")
remote_dataset
```
Output:
```
DatasetDict({
train: Dataset({
features: ... |
false |
# Dataset Card for Dicionário Português
It is a list of Portuguese words with their inflections.
How to use it:
```
from datasets import load_dataset
remote_dataset = load_dataset("VanessaSchenkel/pt-all-words")
remote_dataset
```
|
false | ## Dataset Summary
The Depth-of-Field (DoF) dataset comprises 1,200 annotated images, binary-annotated as with (0) or without (1) bokeh effect, i.e. shallow or deep depth of field. It is a fork of the [Unsplash 25K](https://github.com/unsplash/datasets) dataset.
## Dataset Description
- **Repository:** [https:/... |
false |
# Dataset Card for COPA-SSE
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structur... |
true |
# MLDoc
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instan... |
false |
# Dataset Description
## Structure
- Consists of 5 fields
- Each row corresponds to a policy: a sequence of actions, given an initial `<START>` state, and the corresponding rewards at each step.
## Fields
`steps`, `step_attn_masks`, `rewards`, `actions`, `dones`
## Field descriptions
- `steps` (List of lists of `Int`... |
false |
# Dataset Card for GEM/TaTA
## Dataset Description
- **Homepage:** https://github.com/google-research/url-nlp
- **Repository:** https://github.com/google-research/url-nlp
- **Paper:** https://arxiv.org/abs/2211.00142
- **Leaderboard:** https://github.com/google-research/url-nlp
- **Point of Contact:** Sebastian Rud... |
false |
# Dataset Card for QA-Portuguese
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-st... |
false |
# Dataset Card for "LexFiles"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Dataset Specifications](#supported-tasks-and-leaderboards)
## Dataset Description
- **Homepage:** https://github.com/coastalcph/lexlms
- **Repository:** https://github.com/c... |
false |
# Dataset Card for IDK-MRC
## Dataset Description
- **Repository:** [rifkiaputri/IDK-MRC](https://github.com/rifkiaputri/IDK-MRC)
- **Paper:** [PDF](https://aclanthology.org/2022.emnlp-main.465/)
- **Point of Contact:** [rifkiaputri](https://github.com/rifkiaputri)
### Dataset Summary
I(n)dontKnow-MRC (IDK-MRC) is... |
false |
# Dataset Card for Digimon BLIP captions
This project was inspired by the [labelled Pokemon dataset](https://huggingface.co/datasets/lambdalabs/pokemon-blip-captions).
The captions were generated using the BLIP Model found in the [LAVIS Library for Language-Vision Intelligence](https://github.com/salesforce/LAVIS). ... |
false |
# Dataset Card for VoxCeleb
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structur... |
true |
# Dataset Card for SI-NLI
### Dataset Summary
SI-NLI (Slovene Natural Language Inference Dataset) contains 5,937 human-created Slovene sentence pairs (premise and hypothesis) that are manually labeled with the labels "entailment", "contradiction", and "neutral". We created the dataset using sentences that appear in ... |
false |
# Dataset Card for Wikipedia
This repo is a wrapper around [olm/wikipedia](https://huggingface.co/datasets/olm/wikipedia) that just concatenates data from the EU languages.
Please refer to it for a complete data card.
The EU languages we include are:
- bg
- cs
- da
- de
- el
- en
- es
- et
... |
false | |
false | |
false |
# laion-translated-to-en-korean-subset
## Dataset Description
- **Homepage:** [laion-5b](https://laion.ai/blog/laion-5b/)
- **Download Size:** 1.40 GiB
- **Generated Size:** 3.49 GiB
- **Total Size:** 4.89 GiB
## About dataset
a subset data of [laion/laion2B-multi-joined-translated-to-en](https://huggingface.co/dataset... |
false |
# Dataset Card for [Dataset Name]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-st... |
true | # Dataset Card for "twitter-coronavirus"
## Dataset Description
- **Homepage:** Kaggle Challenge
- **Repository:** https://www.kaggle.com/datasets/datatattle/covid-19-nlp-text-classification
- **Paper:** N.A.
- **Leaderboard:** N.A.
- **Point of Contact:** N.A.
### Dataset Summary
Perform Text Classification on the... |
false | # Indic TTS Malayalam Speech Corpus
The Malayalam subset of [Indic TTS Corpus](https://www.iitm.ac.in/donlab/tts/index.php), taken from
[this Kaggle database](https://www.kaggle.com/datasets/kavyamanohar/indic-tts-malayalam-speech-corpus). The corpus contains
one male and one female speaker, with a 2:1 ratio of samples... |
false |
# Dataset Card for "lmqg/qag_koquad"
## Dataset Description
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
- **Point of Contact:** [Asahi Ushio](http://asahius... |
false |
### Roboflow Dataset Page
[https://universe.roboflow.com/material-identification/garbage-classification-3/dataset/2](https://universe.roboflow.com/material-identification/garbage-classification-3/dataset/2?ref=roboflow2huggingface)
### Dataset Labels
```
['biodegradable', 'cardboard', 'glass', 'metal', 'paper', 'pla... |
false | Dataset with Prolog code / query pairs and execution results. |
false |
# Dataset Card for MNIST
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- ... |
false |
# Dataset Card for `disks45/nocr`
The `disks45/nocr` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/disks45#disks45/nocr).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=5... |
false |
# Dataset Card for `lotte/technology/test/search`
The `lotte/technology/test/search` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/lotte#lotte/technology/test/search).
# Data
This dataset provides:
-... |
false |
# Dataset Card for `mmarco/fr`
The `mmarco/fr` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/mmarco#mmarco/fr).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=8,841,823
... |
false |
# Dataset Card for `mmarco/pt/dev`
The `mmarco/pt/dev` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/mmarco#mmarco/pt/dev).
# Data
This dataset provides:
- `queries` (i.e., topics); count=101,619
- ... |
false |
# Dataset Card for `mmarco/pt/dev/small`
The `mmarco/pt/dev/small` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/mmarco#mmarco/pt/dev/small).
# Data
This dataset provides:
- `queries` (i.e., topics);... |
false |
# Dataset Card for `mmarco/pt/dev/v1.1`
The `mmarco/pt/dev/v1.1` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/mmarco#mmarco/pt/dev/v1.1).
# Data
This dataset provides:
- `queries` (i.e., topics); co... |
false |
# Dataset Card for `mmarco/pt/train/v1.1`
The `mmarco/pt/train/v1.1` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/mmarco#mmarco/pt/train/v1.1).
# Data
This dataset provides:
- `queries` (i.e., topic... |
false |
# Dataset Card for `mmarco/v2/pt`
The `mmarco/v2/pt` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/mmarco#mmarco/v2/pt).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=8,... |
false |
# Dataset Card for `mmarco/v2/pt/dev`
The `mmarco/v2/pt/dev` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/mmarco#mmarco/v2/pt/dev).
# Data
This dataset provides:
- `queries` (i.e., topics); count=10... |
false |
# Dataset Card for `nyt/wksup`
The `nyt/wksup` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/nyt#nyt/wksup).
# Data
This dataset provides:
- `queries` (i.e., topics); count=1,864,661
- `qrels`: (rel... |
false |
# Dataset Card for `wikiclir/pt`
The `wikiclir/pt` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/wikiclir#wikiclir/pt).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=973... |
false |
# Dataset Card for `wikiclir/ru`
The `wikiclir/ru` dataset, provided by the [ir-datasets](https://ir-datasets.com/) package.
For more information about the dataset, see the [documentation](https://ir-datasets.com/wikiclir#wikiclir/ru).
# Data
This dataset provides:
- `docs` (documents, i.e., the corpus); count=1,4... |
false | A hand-collected set of 57,817 pics, mostly from the Russian internet. Pics without captions.
A dataset of those classic "funny pictures" from CDs and the like. All pictures from the root directory were collected entirely by hand. Not annotated. |
false |
# Dataset Card for FrenchMedMCQA : A French Multiple-Choice Question Answering Corpus for Medical domain
## Table of Contents
- [Dataset Card for FrenchMedMCQA : A French Multiple-Choice Question Answering Corpus for Medical domain](#dataset-card-for-frenchmedmcqa--a-french-multiple-choice-question-answering-corpus-f... |
false | # Dataset cointelegraph English
## Dataset Description
It is a dataset collecting information such as the title, description, author, etc.
approx.: 10,041 rows
page: https://cointelegraph.com/
categories: #cryptocurrency, #Bitcoin, #Ethereum ...
|
false |
# Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This is a multilingual dataset containing ~130k annotated sentence boundaries. It contains laws and court decision in 6 different languages.
### Supp... |
false | # Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
[More Information Needed]
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset S... |
true | # AutoTrain Dataset for project: books-rating-analysis
## Dataset Description
This dataset has been automatically processed by AutoTrain for project books-rating-analysis.
### Languages
The BCP-47 code for the dataset's language is en.
## Dataset Structure
### Data Instances
A sample from this dataset looks as f... |
false | # Paraphrase Dataset (Urdu)
This dataset contains paraphrases in Urdu. It is provided in the Parquet format and is split into a training set with 393,000 rows.
## Dataset Details
- Columns:
- `sentence1`: The first sentence in a pair of paraphrases (string).
- `sentence2`: The second sentence in a pair of paraph... |
false |
# Dataset Card for spaeti_store
## Dataset Description
The dataset consists of 10 pictures of one späti (German convenience store) from different angles.
The data is unlabeled.
The dataset was created to fine-tune a text-to-image Stable Diffusion model as part of the DreamBooth Hackathon. Visit the [organization's pa... |
true | # Dataset for the project: reviews-sentiment-analysis
## Dataset Description
This dataset is for project reviews-sentiment-analysis.
### Languages
The BCP-47 code for the dataset's language is en.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"text": "Now... |
false | # Dataset for project: food-classification
## Dataset Description
This dataset has been processed for project food-classification.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"image": "<30... |
false |
# MIRACL (id) embedded with cohere.ai `multilingual-22-12` encoder
We encoded the [MIRACL dataset](https://huggingface.co/miracl) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
The query embeddings can be found in [Cohere/miracl-id-queries-22-12](https://huggingface.... |
false |
# MIRACL (ru) embedded with cohere.ai `multilingual-22-12` encoder
We encoded the [MIRACL dataset](https://huggingface.co/miracl) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
The query embeddings can be found in [Cohere/miracl-ru-queries-22-12](https://huggingface.... |
false | |
false |
# MIRACL (en) embedded with cohere.ai `multilingual-22-12` encoder
We encoded the [MIRACL dataset](https://huggingface.co/miracl) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
The query embeddings can be found in [Cohere/miracl-en-queries-22-12](https://huggingface.... |
false | # Dataset Card for NST Swedish Speech Synthesis (44 kHz)
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-i... |
false |
# Dataset Card for NeuMARCO
## Dataset Description
- **Website:** https://neuclir.github.io/
### Dataset Summary
This is the dataset created for TREC 2022 NeuCLIR Track. The collection consists of documents from [`msmarco-passage`](https://ir-datasets.com/msmarco-passage) translated into
Chinese, Persian, and Russ... |
false |
# Dataset Card for Beans
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
... |
false | # Dataset Card for "Europarl-ST"
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
- [... |
false | # Neuro CNN Project - Fernando Feltrin
# Brain Meningioma images (39 classes) for image classification
## Dataset Description
- **More info:** fernando2rad@gmail.com
### Dataset Summary
A collection of T1, contrast-enhanced, and T2-weighted MRI images of meningiomas sorted according to location in the brain. Image... |