text-classification
| classes (bool, 2 values) | text (string, lengths 0–664k) |
|---|---|
false |
# MIRACL (ar) embedded with cohere.ai `multilingual-22-12` encoder
We encoded the [MIRACL dataset](https://huggingface.co/miracl) using the [cohere.ai](https://txt.cohere.ai/multilingual/) `multilingual-22-12` embedding model.
The query embeddings can be found in [Cohere/miracl-ar-queries-22-12](https://huggingface.... |
false |
Redistributed without modification from https://github.com/phelber/EuroSAT.
EuroSAT100 is a subset of EuroSATallBands containing only 100 images. It is intended for tutorials and demonstrations, not for benchmarking. |
false | # Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
The dataset contains pairs of sentences with a next_sentence_label for NSP. The sentences were taken from a public Jira projects dataset. Next sentence is alwa... |
false | # Urdu_DW-BBC-512
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact: mubashir.munaaf@gmail.com**
### Dataset Summary
Urdu summarization dataset containing 76,637 records of Article + Summary pairs scraped from the BBC Urdu and DW Urdu news websites.
-P... |
false | # Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
KG dataset created by using spaCy PoS and Dependency parser.
### Supported Tasks and Leaderboards
Can be leveraged for token classification for... |
true | |
false |
# DEplain-web-doc: A corpus for German Document Simplification
DEplain-web-doc is a subcorpus of DEplain [Stodden et al., 2023](https://arxiv.org/abs/2305.18939) for document simplification.
The corpus consists of 396 (199/50/147) parallel documents crawled from the web in standard German and plain German (or easy... |
true |
# Sentiment fairness dataset
================================
This dataset is intended to measure gender fairness in the downstream task of sentiment analysis. It is a subset of the SST data, filtered to keep only the sentences that contain gender information. The Python code used to create this dataset can ... |
false | # Dataset Card for "fd_dialogue"
This dataset contains transcripts for famous movies and TV shows from https://transcripts.foreverdreaming.org/
The dataset contains **only a small portion of Forever Dreaming's data**, as only transcripts with a clear dialogue format are included, such as:
```
PERSON 1: Hello
PERSON... |
true | |
false | |
false |
# Dataset Card for brain-tumor-m2pbp
**The original COCO dataset is stored at `dataset.tar.gz`**
## Dataset Description
- **Homepage:** https://universe.roboflow.com/object-detection/brain-tumor-m2pbp
- **Point of Contact:** francesco.zuppichini@gmail.com
### Dataset Summary
brain-tumor-m2pbp
### Supported Task... |
false |
# Dataset Card for printed-circuit-board
**The original COCO dataset is stored at `dataset.tar.gz`**
## Dataset Description
- **Homepage:** https://universe.roboflow.com/object-detection/printed-circuit-board
- **Point of Contact:** francesco.zuppichini@gmail.com
### Dataset Summary
printed-circuit-board
### Su... |
false | # AutoTrain Dataset for project: treehk
## Dataset Description
This dataset has been automatically processed by AutoTrain for project treehk.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"i... |
false | # Abalone
The [Abalone dataset](https://archive-beta.ics.uci.edu/dataset/1/abalone) from the [UCI ML repository](https://archive.ics.uci.edu/ml/datasets).
Predict the age of the given abalone.
# Configurations and tasks
| **Configuration** | **Task** | **Description** |
|------... |
true |
# Dataset Card for BLiterature
*BLiterature is part of a bigger project that is not yet complete. Not all information here may be accurate or accessible.*
## Dataset Description
- **Homepage:** (TODO)
- **Repository:** N/A
- **Paper:** N/A
- **Leaderboard:** N/A
- **Point of Contact:** KaraKaraWitch
### Dataset S... |
true |
# Dataset Card for JSICK
## Table of Contents
- [Dataset Card for JSICK](#dataset-card-for-jsick)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Japanese Sentences Involving Compositional Knowledge (JSICK) Dataset.](#japan... |
false |
This dataset is a redistribution of the following dataset.
https://github.com/suzuki256/dog-dataset
```
The dataset and its contents are made available on an "as is" basis and without warranties of any kind, including without limitation satisfactory quality and conformity, merchantability, fitness for a particular pu... |
false | # Dataset Card for CIFAR-10-Enriched (Enhanced by Renumics)
## Dataset Description
- **Homepage:** [Renumics Homepage](https://renumics.com/?hf-dataset-card=cifar10-enriched)
- **GitHub** [Spotlight](https://github.com/Renumics/spotlight)
- **Dataset Homepage** [CS Toronto Homepage](https://www.cs.toronto.edu/~kriz/c... |
false |
[MLQA (MultiLingual Question Answering)](https://github.com/facebookresearch/mlqa) is a Chinese–English bilingual question-answering dataset: the original MLQA dataset converted to Taiwan Traditional Chinese, with matching items from the Chinese and English versions merged for convenient use with bilingual language models. (Acknowledgements: [BYVoid/OpenCC](https://github.com/BYVoid/OpenCC), [vinta/pangu.js](https://github.com/vinta/pangu.js))
It is divided into `dev` and `test` splits, with 302 and 2,986 examples respectively.
Sample:
```json
[
... |
true |
100,772 texts with their corresponding labels:
NOT_OFF_HATEFUL_TOXIC: 81,359 values
OFF_HATEFUL_TOXIC: 19,413 values |
false | # AutoTrain Dataset for project: teste
## Dataset Description
This dataset has been automatically processed by AutoTrain for project teste.
### Languages
The BCP-47 code for the dataset's language is pt.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"cont... |
true |
This is the same dataset as [`ag_news`](https://huggingface.co/datasets/ag_news).
The only differences are
1. Addition of a unique identifier, `uid`
1. Addition of the indices, i.e. three columns with the embeddings from three different sentence-transformers
- `all-mpnet-base-v2`
- `multi-qa-mpnet-base-dot-v1`
... |
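Since the card above adds per-row sentence-transformer embedding columns, a typical use is nearest-neighbor retrieval over them. A minimal cosine-similarity sketch, using toy vectors in place of the real embedding columns (the function name and vector shapes are illustrative assumptions, not part of the dataset):

```python
import numpy as np

def top_k_similar(query_vec, corpus_vecs, k=3):
    """Return indices of the k most cosine-similar corpus vectors, best first."""
    q = query_vec / np.linalg.norm(query_vec)
    c = corpus_vecs / np.linalg.norm(corpus_vecs, axis=1, keepdims=True)
    sims = c @ q  # cosine similarity of each corpus row against the query
    return np.argsort(-sims)[:k]

# Toy stand-ins for a precomputed embedding column such as `all-mpnet-base-v2`.
corpus = np.array([[1.0, 0.0], [0.8, 0.6], [0.0, 1.0]])
query = np.array([1.0, 0.1])
print(top_k_similar(query, corpus, k=2).tolist())  # [0, 1]
```

In practice the corpus matrix would be built by stacking one of the embedding columns, and the query embedded with the same sentence-transformer model.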
false | |
false |
# MInDS-14
## Dataset Description
- **Fine-Tuning script:** [pytorch/audio-classification](https://github.com/huggingface/transformers/tree/main/examples/pytorch/audio-classification)
- **Paper:** [Multilingual and Cross-Lingual Intent Detection from Spoken Data](https://arxiv.org/abs/2104.08524)
- **Total amount of... |
false | This dataset was created to test two things:
First, to check LLMs' capability to augment data in a coherent way.
Second, to create a dataset for fine-tuning LLMs on the QA task.
The dataset contains the frequently asked questions, and their answers, of a made-up online fashion marketplace called Nels Marketplace. |
false |
# Dataset Card for "alpaca-gpt4-cleaned"
This dataset contains Ukrainian instruction-following data translated by facebook/nllb-200-3.3B.
The dataset was originally shared in this repository: https://github.com/tloen/alpaca-lora
## Licensing Information
The dataset is available under the [Creative Commons NonCommercial (... |
false | |
false | # Dataset Card for "instructional_code-search-net-javacript"
## Dataset Description
- **Homepage:** None
- **Repository:** https://huggingface.co/datasets/Nan-Do/instructional_code-search-net-javascript
- **Paper:** None
- **Leaderboard:** None
- **Point of Contact:** [@Nan-Do](https://github.com/Nan-Do)
### Dataset... |
false | |
false |
Source: https://github.com/liucongg/NLPDataSet
* Data collected from the web, merging the CMeEE, IMCS21_task1, CCKS2017_task2, CCKS2018_task1, CCKS2019_task1, CLUENER2020, MSRA, NLPCC2018_task4, CCFBDCI, MMC, WanChuang, PeopleDairy1998, PeopleDairy2004, GAIIC2022_task2, WeiBo, ECommerce, FinanceSina, BoSon, and Resume datasets... |
false | # Dataset Card for "product_ads"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) |
false |
# Dataset Card for Erhu Playing Technique Database (11-class)
## Dataset Description
- **Homepage:** <https://ccmusic-database.github.io>
- **Repository:** <https://huggingface.co/datasets/CCMUSIC/erhu_playing_tech_11>
- **Paper:** <https://doi.org/10.5281/zenodo.5676893>
- **Leaderboard:** <https://ccmusic-database.... |
false |
# Dataset Card for Chest voice and Falsetto Database
## Dataset Description
- **Homepage:** <https://ccmusic-database.github.io>
- **Repository:** <https://huggingface.co/datasets/ccmusic-database/chest_falsetto>
- **Paper:** <https://doi.org/10.5281/zenodo.5676893>
- **Leaderboard:** <https://ccmusic-database.github.... |
false |
# Dataset Card for Bel Conto and Chinese Folk Song Singing Tech Database
## Dataset Description
- **Homepage:** <https://ccmusic-database.github.io>
- **Repository:** <https://huggingface.co/datasets/ccmusic-database/bel_folk>
- **Paper:** <https://doi.org/10.5281/zenodo.5676893>
- **Leaderboard:** <https://ccmusic-da... |
false | ```
from datasets import load_dataset

data_files = {'data': 'data.csv'}
data = load_dataset("theothertom/text_emotion_speech", data_files=data_files)
``` |
true | |
false | # Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/hugg... |
true | # The Adversarial Natural Language Inference (ANLI)
- Source: https://huggingface.co/datasets/anli
- Num examples:
- 100,459 (train)
- 1,200 (validation)
- 1,200 (test)
- Language: English
```python
from datasets import load_dataset
load_dataset("vietgpt/anli_r3_en")
```
- Format for NLI task
```python
def ... |
true | # COPA
- Source: https://huggingface.co/datasets/super_glue
- Num examples:
- 400 (train)
- 100 (validation)
- Language: English
```python
from datasets import load_dataset
load_dataset("vietgpt/copa_en")
```
- Format for GPT-3
```python
def preprocess_gpt3(sample):
premise = sample['premise']
choice1... |
false | # Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:**
### Dataset Summary
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/hugg... |
false | # Dataset Card for Dataset Name
## Dataset Description
- **Homepage:** kin.naver.com/qna
- **Repository:**
- **Paper:**
- **Leaderboard:**
- **Point of Contact:** mjypark1212@gmail.com
### Dataset Summary
The most active korean qna site - Knowledge In Naver. Instruction + response format. Created for language mo... |
false |
Collection of wing images for conservation of honey bees (Apis mellifera) biodiversity in Europe
https://zenodo.org/record/7244070
Small version (10%) of the original dataset bee-wings-large |
true | # Dataset Card for nli-zh-all
## Dataset Description
- **Repository:** [Chinese NLI dataset](https://github.com/shibing624/text2vec)
- **Dataset:** [zh NLI](https://huggingface.co/datasets/shibing624/nli-zh-all)
- **Size of downloaded dataset files:** 4.7 GB
- **Total amount of disk used:** 4.7 GB
### Dataset Summary
... |
false | |
true | # Dataset Card for Dataset Name
## Dataset Description
- **Homepage:**
- **Repository: https://github.com/timpal0l/ScandiSent**
- **Paper: https://arxiv.org/pdf/2104.10441.pdf**
- **Leaderboard:**
- **Point of Contact: Tim Isbister**
### Dataset Summary
This dataset card aims to be a base template for new datas... |
false |
# **Open Australian Legal Corpus ⚖️**
The Open Australian Legal Corpus is the first and only multijurisdictional open corpus of Australian legislative and judicial documents.
Comprising 97,750 texts, the Corpus includes almost every in-force statute and regulation in the Commonwealth, New South Wales, Queensland, ... |
false | # Dataset Card for BabelCode HumanEval
## Dataset Description
- **Repository:** [GitHub Repository](https://github.com/google-research/babelcode)
- **Paper:** [Measuring The Impact Of Programming Language Distribution](https://arxiv.org/abs/2302.01973)
### How To Use This Dataset
To quickly evaluate BC-HumanEval pr... |
false |
# Dataset Card for ICC
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
... |
false |
# mammut-corpus-venezuela
HuggingFace Dataset for testing purposes. The train dataset is `mammut/mammut-corpus-venezuela`.
## 1. How to use
How to load this dataset directly with the datasets library:
```python
>>> from datasets import load_dataset
>>> dataset = load_dataset("mammut/mammut-corpus-venezuela")
```
## 2.... |
false |
# mammut-corpus-venezuela
HuggingFace Dataset
## 1. How to use
How to load this dataset directly with the datasets library:
```python
>>> from datasets import load_dataset
>>> dataset = load_dataset("mammut-corpus-venezuela")
```
## 2. Dataset Summary
**mammut-corpus-venezuela** is a dataset for Spanish language mode... |
true | # AutoNLP Dataset for project: Doctor_DE
## Table of Contents
- [Dataset Description](#dataset-description)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
## Dataset Description
This dataset ... |
false |
# Dataset Card for Clean(maybe) Indonesia mC4
## Dataset Description
- **Original Homepage:** [HF Hub](https://huggingface.co/datasets/allenai/c4)
- **Paper:** [ArXiv](https://arxiv.org/abs/1910.10683)
### Dataset Summary
A thoroughly cleaned version of the Indonesia split of the multilingual colossal, cleaned ve... |
false |
# nateraw/auto-cats-and-dogs
Image Classification Dataset
## Usage
```python
from PIL import Image
from datasets import load_dataset
def pil_loader(path: str):
    with open(path, 'rb') as f:
        im = Image.open(f)
        return im.convert('RGB')

def image_loader(example_batch):
    example_batch['image'] = ... |
false |
# nateraw/auto-exp-2
Image Classification Dataset
## Usage
```python
from PIL import Image
from datasets import load_dataset
def pil_loader(path: str):
    with open(path, 'rb') as f:
        im = Image.open(f)
        return im.convert('RGB')

def image_loader(example_batch):
    example_batch['image'] = [
... |
false | ## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields](#data-fields)... |
true |
# Dataset Card for GitHub Issues
## Dataset Summary
GitHub Issues is a dataset consisting of GitHub issues and pull requests associated with the 🤗 Datasets repository. It is intended for educational purposes and can be used for semantic search or multilabel text classification. The contents of each GitHub issue are i... |
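The multilabel text-classification use case mentioned in the summary can be sketched in plain Python: each issue's set of GitHub labels becomes a 0/1 target vector over the label vocabulary (the label names below are illustrative, not taken from the dataset):

```python
def binarize_labels(rows):
    """Map each issue's label list to a 0/1 vector over the sorted label vocabulary."""
    vocab = sorted({lab for row in rows for lab in row})
    return vocab, [[1 if lab in row else 0 for lab in vocab] for row in rows]

# Illustrative per-issue label lists; real issues carry their own GitHub labels.
rows = [["bug", "enhancement"], ["question"], ["bug"]]
vocab, y = binarize_labels(rows)
print(vocab)  # ['bug', 'enhancement', 'question']
print(y)      # [[1, 1, 0], [0, 0, 1], [1, 0, 0]]
```

The resulting matrix `y` is the usual input format for multilabel classifiers (e.g. one sigmoid output per vocabulary entry).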
false |
# Dataset Card for frwiki_good_pages_el
## Dataset Description
- Repository: [enwiki_el](https://github.com/GaaH/enwiki_el)
- Point of Contact: [Gaëtan Caillaut](mailto://g.caillaut@brgm.fr)
### Dataset Summary
It is intended to be used to train Entity Linking (EL) systems. Links in Wikipedia articles are used to ... |
false |
# Axolotl-Spanish-Nahuatl : Parallel corpus for Spanish-Nahuatl machine translation
## Table of Contents
- [Dataset Card for Axolotl-Spanish-Nahuatl](#dataset-card-for-Axolotl-Spanish-Nahuatl)
## Dataset Description
- **Source 1:** http://www.corpus.unam.mx/axolotl
- **Source 2:** http://link.springer.com/articl... |
false |
# RuNNE dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Structure](#dataset-structure)
- [Citation Information](#citation-information)
- [Contacts](#contacts)
## Dataset Description
Part of NEREL dataset (https://arxiv.org/abs/2108.13112), a Russian dataset
for named entity reco... |
true |
# Dataset Card for DanFEVER
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fields]... |
false |
# Dataset Card for "ipm-nel"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
... |
true |
# Dataset Card for "dkstance / DAST"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-ins... |
true |
# Dataset Card for "polstance"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances... |
false |
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
- [Data S... |
false |
# Dataset Card for Bingsu/arcalive_220506
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- ... |
false |
Token classification dataset developed from the dataset in Katarina Nimas Kusumawati's undergraduate thesis:
**"Identifikasi Entitas Bernama dalam Domain Medis pada Layanan Konsultasi Kesehatan Berbahasa Menggunkan Alrogritme Bidirectional-LSTM-CRF"**
Institut Teknologi Sepuluh Nopember, Surabaya, Indonesia - 2022
I ... |
true |
# Dataset Card for "rustance"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)... |
true |
# Dataset Card for sd-nlp
## Table of Contents
- [Dataset Card for [Dataset Name]](#dataset-card-for-dataset-name)
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderb... |
false |
# Polish-Political-Advertising
## Info
Political campaigns are full of political ads posted by candidates on social media. Political advertisement constitute a basic form of campaigning, subjected to various social requirements. We present the first publicly open dataset for detecting specific text chunks and catego... |
false |
# Dataset Card for [Dataset Name]
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Fields](#data-fields)
- [Data Splits](#data-splits)
-... |
false |
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
- [Data S... |
false |
# Dataset Card for "lmqg/qg_subjqa"
## Dataset Description
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
- **Point of Contact:** [Asahi Ushio](http://asahiushi... |
false |
# Danish Gigaword Corpus, Reddit (filtered)
*Version*: 1.0.0
*License*: See the respective dataset
This dataset is a variant of the Danish Gigaword [3] that excludes the sections containing
tweets and the modified news contained in danavis20.
Twitter was excluded because it was a sample of a dataset which was available...
false |
# Dataset Card for named_timexes
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- [Data Fie... |
false |
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
- [Data S... |
false | # AutoTrain Dataset for project: code_summarization
## Dataset Description
This dataset has been automatically processed by AutoTrain for project code_summarization.
### Languages
The BCP-47 code for the dataset's language is en.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows... |
true |
# Dataset Card for AraStance
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structur... |
true |
# Dataset Card for `reviews_with_drift`
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#data... |
false |
# rumi-jawi
Notebooks to gather the dataset at https://github.com/huseinzol05/malay-dataset/tree/master/normalization/rumi-jawi |
true |
# Dataset Card for AraStance
## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structur... |
false | # AutoTrain Dataset for project: test-auto
## Dataset Description
This dataset has been automatically processed by AutoTrain for project test-auto.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
... |
false |
# Citations
```
@misc{Aniemore,
author = {Артем Аментес, Илья Лубенец, Никита Давидчук},
title = {Открытая библиотека искусственного интеллекта для анализа и выявления эмоциональных оттенков речи человека},
year = {2022},
publisher = {Hugging Face},
journal = {Hugging Face Hub},
howpublished = {\url{https:... |
false |
# Citations
```
@misc{Aniemore,
author = {Артем Аментес, Илья Лубенец, Никита Давидчук},
title = {Открытая библиотека искусственного интеллекта для анализа и выявления эмоциональных оттенков речи человека},
year = {2022},
publisher = {Hugging Face},
journal = {Hugging Face Hub},
howpublished = {\url{https:... |
false |
# Dataset for evaluation of (zero-shot) discourse marker prediction with language models
This is the Big-Bench version of our discourse marker prediction dataset, [Discovery](https://huggingface.co/datasets/discovery)
Design considerations:
<https://github.com/google/BIG-bench/tree/main/bigbench/benchmark_tasks/disc... |
true | ### Dataset Summary
The dataset contains user reviews about medical institutions.
In total it contains 12,036 reviews. Each review is tagged with the <em>general</em> sentiment and with sentiments on 5 aspects: <em>quality, service, equipment, food, location</em>.
### Data Fields
Each sample contains the following fields:
- **re... |
true | # AutoTrain Dataset for project: Poem_Rawiy_detection
## Dataset Description
We used the APCD dataset, cited hereafter, for pretraining the model. The dataset has been cleaned, and only the main text and the Qafiyah columns were kept:
```
@Article{Yousef2019LearningMetersArabicEnglish-arxiv,
author = {Yousef, Wa... |
false |
# Dataset Card for "lmqg/qg_squadshifts"
## Dataset Description
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
- **Point of Contact:** [Asahi Ushio](http://asah... |
false |
# Dataset Card for "lmqg/qg_esquad"
## Dataset Description
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
- **Point of Contact:** [Asahi Ushio](http://asahiushi... |
false |
# Dataset Card for "lmqg/qg_ruquad"
## Dataset Description
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
- **Point of Contact:** [Asahi Ushio](http://asahiush... |
false |
# Dataset Card for "lmqg/qg_dequad"
## Dataset Description
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
- **Point of Contact:** [Asahi Ushio](http://asahiushi... |
true |
# Dataset Card for Fewshot Table Dataset
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instances)
- ... |
false |
# Dataset Card for BEIR Benchmark
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instan... |
false |
# Dataset Card for BEIR Benchmark
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
- [Data Instances](#data-instan... |
true |
# Dataset Card for "Non-Parallel MultiEURLEX (incl. Translations)"
## Table of Contents
- [Dataset Description](#dataset-description)
- [Dataset Summary](#dataset-summary)
- [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
- [Languages](#languages)
- [Dataset Structure](#dataset-structure)
... |
true | # AutoTrain Dataset for project: quality-customer-reviews
## Dataset Description
This dataset has been automatically processed by AutoTrain for project quality-customer-reviews.
### Languages
The BCP-47 code for the dataset's language is en.
## Dataset Structure
### Data Instances
A sample from this dataset look... |
true | # AutoTrain Dataset for project: qa-team-car-review-project
## Dataset Description
This dataset has been automatically processed by AutoTrain for project qa-team-car-review-project.
### Languages
The BCP-47 code for the dataset's language is en.
## Dataset Structure
### Data Instances
A sample from this dataset ... |
true | # AutoTrain Dataset for project: car-review-project
## Dataset Description
This dataset has been automatically processed by AutoTrain for project car-review-project. It contains consumer car ratings and reviews from [Edmunds website](https://www.kaggle.com/datasets/ankkur13/edmundsconsumer-car-ratings-and-reviews)
#... |
true | |
false |
# Dataset Card for frwiki_good_pages_el
## Dataset Description
- Repository: [frwiki_el](https://github.com/GaaH/frwiki_el)
- Point of Contact: [Gaëtan Caillaut](mailto://g.caillaut@brgm.fr)
### Dataset Summary
This dataset contains articles from the French Wikipédia.
It is intended to be used to train Entity Link... |