text-classification
bool
2 classes
text
stringlengths
0
664k
false
false
Dialogue pairs from the Wesnoth add-on campaigns IftU/AtS.
false
# Dataset Card for "cd45rb_leukocytes_subdataset" Citation: Daisuke Komura, Takumi Onoyama, Koki Shinbo, Hiroto Odaka, Minako Hayakawa, Mieko Ochi, Ranny Rahaningrum Herdiantoputri, Haruya Endo, Hiroto Katoh, Tohru Ikeda, Tetsuo Ushiku, Shumpei Ishikawa, Restaining-based annotation for cancer histology segmentation to...
true
# Dataset Card for "github-issues" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
true
false
# Dataset Card for Voxpopuli ## Table of Contents - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structu...
false
## Dataset Summary This dataset contains 256-dimensional vectors for a 1M sample of Wikipedia for Approximate Nearest Neighbors Search benchmarks. ### Usage ``` git lfs install git clone https://huggingface.co/datasets/unum-cloud/ann-wiki-1m ``` ### Dataset Structure The dataset contains three matrices: - base: ...
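The matrices are presumably stored in the `.fbin` layout common to ANN-benchmark datasets (two leading little-endian int32 values giving row count and dimension, followed by float32 data). A minimal NumPy reader under that assumption — the exact file names and layout should be checked against the repository:

```python
import struct
import numpy as np

def read_fbin(path):
    """Read an .fbin matrix: int32 rows, int32 dim, then rows*dim float32 values.

    Assumed layout, based on the common ANN-benchmark convention; verify
    against the actual files in the cloned repository.
    """
    with open(path, "rb") as f:
        rows, dim = struct.unpack("<ii", f.read(8))  # little-endian header
        data = np.fromfile(f, dtype="<f4", count=rows * dim)
    return data.reshape(rows, dim)
```

After `git clone`, a call like `read_fbin('ann-wiki-1m/base.fbin')` (file name hypothetical) would yield a `(1_000_000, 256)` array if the layout matches.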
false
## Dataset Summary This dataset contains 200-dimensional vectors for 1M images indexed by Yandex and produced by the Se-ResNext-101 model. ### Usage ``` git lfs install git clone https://huggingface.co/datasets/unum-cloud/ann-t2i-1m ``` ### Dataset Structure The dataset contains three matrices: - base: `base.1M....
false
### Dataset Summary This dataset card describes a new dataset for Sinhala news summarization tasks. It has been generated using [https://huggingface.co/datasets/cnn_dailymail] and Google Translate. ### Data Instances For each instance, there is a string for the article, a string for the highlights, and a ...
false
false
# 📝 BUOD Article Scraper Authors: [James Esguerra](https://huggingface.co/jamesesguerra), [Julia Avila](), [Hazielle Bugayong](https://huggingface.co/0xhaz) - Article Scraper for the KAMI-3000 dataset used in the BUOD [distilBART](https://huggingface.co/ateneoscsl/BUOD_distilBART_TM) and [bert2bert](https://huggingfa...
false
# Dataset Card for "ms-marco-es" QA asymmetric Spanish dataset filtered from [multilingual version of MS Marco](https://huggingface.co/datasets/unicamp-dl/mmarco) ```python import datasets ms_marco_es = datasets.load_dataset('unicamp-dl/mmarco', name='spanish', split='train') ms_marco_es.push_to_hub("dariolopez/ms-...
false
This dataset is a machine-translated version of [databricks-dolly-15k.jsonl](https://github.com/databrickslabs/dolly/tree/master/data) into Turkish, produced with `googletrans==3.1.0a0`.
false
# AutoTrain Dataset for project: test-sa-gam ## Dataset Description This dataset has been automatically processed by AutoTrain for project test-sa-gam. ### Languages The BCP-47 code for the dataset's language is en. ## Dataset Structure ### Data Instances A sample from this dataset looks as follows: ```json [ ...
false
# GMaSC: GEC Barton Hill Malayalam Speech Corpus **GMaSC** is a Malayalam text and speech corpus created by the Government Engineering College Barton Hill with an emphasis on Malayalam-accented English. The corpus contains 2,000 text-audio pairs of Malayalam sentences spoken by 2 speakers, totalling approximately ...
false
# Dataset Card for Dataset Name ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/hugg...
false
# abstracts-embeddings This is the embeddings of the titles and abstracts of 95 million academic publications taken from the [OpenAlex](https://openalex.org) dataset as of May 5, 2023. The script that generated the embeddings is available on [Github](https://github.com/colonelwatch/abstracts-search/blob/master/build....
false
# StyleGAN3 Annotated Images This dataset consists of a `pandas` table and attached `images.zip` file with these entries: * seed (`numpy` seed used to generate random vectors) * path (path to the generated image obtained after unzipping `images.zip`) * vector (generated numpy "random" vector used to create StyleGAN3...
true
true
false
# Dataset Card for ImageIn_annotations_resized_images [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
false
# Dataset Card for RoBERTa Pretrain ### Dataset Summary This is the concatenation of the datasets used to pretrain RoBERTa. The dataset is not shuffled and contains raw text. It is packaged for convenience. Essentially it is the same as: ``` from datasets import load_dataset, concatenate_datasets bookcorpus = load_dat...
false
# bollywood-celebs ## Dataset Description This dataset has been automatically processed by AutoTrain for project bollywood-celebs. Credits: https://www.kaggle.com/datasets/sushilyadav1998/bollywood-celeb-localized-face-dataset ### Languages The BCP-47 code for the dataset's language is unk. ## Dataset Structure ...
true
Beta dataset generated by GPT-3.5.
false
true
# Modified Victorian Era Authorship Attribution Dataset ## About This data set is a modified version of the one that can be found [here](https://archive.ics.uci.edu/ml/datasets/Victorian+Era+Authorship+Attribution). The difference being that the training dataset was split into two parts: 80% training, 20% testing w...
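The 80/20 split described above can be sketched as follows; the shuffling step and the seed are assumptions for illustration, not the procedure used by the dataset authors:

```python
import random

def split_80_20(rows, seed=0):
    """Shuffle rows and split them 80% train / 20% test.

    Sketch only: the seed and the decision to shuffle before splitting
    are assumptions, not taken from the original dataset.
    """
    rows = list(rows)
    random.Random(seed).shuffle(rows)
    cut = int(len(rows) * 0.8)
    return rows[:cut], rows[cut:]
```

For 100 rows this yields 80 training and 20 testing examples with no overlap.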
false
# Dataset Card for "face-celeb-vietnamese" ## Dataset Summary This dataset contains information on over 8,000 samples of well-known Vietnamese individuals, categorized into three professions: singers, actors, and beauty queens. The dataset includes data on more than 100 celebrities in each of the three job categories...
false
false
false
false
false
false
# PixAI [scrape script](https://github.com/hlky/scrape/blob/main/pixai.py) ``` 1596472 rows x 31 columns 'id', 'title', 'username', 'displayName', 'userCreatedAt', 'userUpdatedAt', 'followerCount', 'followingCount', 'userInspiredCount', 'prompts', 'createdAt', 'updatedAt', 'isNsfw', 'likedCount', 'views', 'commentCo...
false
# Horde4M 4M+ generation metadata only, too many spicy images ``` 4130252 rows x 14 columns 'id', 'prompt', 'width', 'height', 'steps', 'sampler', 'cfg', 'seed', 'model', 'karras', 'gfpgan', 'realesrgan_x4plus', 'codeformer', 'user_type' ``` Majority use karras because stable horde ui's decided to default karras (t...
false
# NorEval NorEval is a self-curated dataset for evaluating instruction-following LLMs across nine categories: Language, Code, Mathematics, Classification, Communication & Marketing, Medical, General Knowledge, and Business Operations
false
# AutoTrain Dataset for project: rwlv_summarizer ## Dataset Description This dataset has been automatically processed by AutoTrain for project rwlv_summarizer. ### Languages The BCP-47 code for the dataset's language is en. ## Dataset Structure ### Data Instances A sample from this dataset looks as follows: ```...
true
This is an Indonesian-translated version of the [snli](https://huggingface.co/datasets/snli) dataset, translated using [Helsinki-NLP/EN-ID](https://huggingface.co/Helsinki-NLP/opus-mt-en-id).
false
# Negative Embedding / Textual Inversion ![Sample image of NE4Mitsua](Sample.png) NE4Mitsua is a Negative Embedding for Mitsua Diffusion One. NE4Mitsua は Mitsua Diffusion One用のネガティブEmbeddingです。日本語版READMEはページ下部にあります。 --- # English README ## NE4Mitsua: With this Embedding I tried to achieve the following two goals. -...
true
This is a test dataset.
false
# Dataset Card for jawiki-20220404-c400 This dataset contains passages, each of which consists of consecutive sentences no longer than 400 characters from Japanese Wikipedia as of 2022-04-04. This dataset is used in baseline systems for [the AI王 question answering competition](https://sites.google.com/view/project-ai...
true
# Dataset Card for RuFacts ## Dataset Description RuFacts is a benchmark for internal fact-checking for the Russian language. The dataset contains tagged examples labeled consistent and inconsistent. For inconsistent examples, ranges containing violations of facts in the source text and the generated text are also...
false
# CC-100 zh-Hant (Traditional Chinese) From https://data.statmt.org/cc-100/, only zh-Hant - Chinese (Traditional). Broken into lines, with each line as a row. Estimated to have around 4B tokens when tokenized with the [`bigscience/bloom`](https://huggingface.co/bigscience/bloom) tokenizer. There's another version th...
false
### brainly.co.id dataset ### Data Structure The keys in each JSONL object include: - "id": An integer identifying the task page in the URL (e.g., brainly.co.id/tugas/117). - "subject": A string indicating the subject of the question (e.g., "Fisika", "Matematika", "Sejarah"). - "author": A string representing th...
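Records with these keys can be streamed from the JSONL file with the standard library; the file name below is a placeholder:

```python
import json

def iter_tasks(path):
    """Yield one dict per non-empty JSONL line (keys such as id, subject, author)."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            if line.strip():
                yield json.loads(line)
```

For example, `[r["subject"] for r in iter_tasks("brainly.jsonl")]` collects the subject of every task.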
false
# Dataset Card for Odia_GPT-Teacher-Instruct-Odia-18K ## Dataset Description - **Homepage: https://www.odiagenai.org/** - **Repository: https://github.com/shantipriyap/OdiaGenAI** - **Point of Contact: Shantipriya Parida, and Sambit Sekhar** ### Dataset Summary This dataset is the Odia-translated version of the ...
true
# Dataset Card for "boolq-id" This dataset is a translated version of the boolq dataset from the [super_glue](https://huggingface.co/datasets/super_glue) benchmark. # Citing & Authors ``` @inproceedings{clark2019boolq, title={BoolQ: Exploring the Surprising Difficulty of Natural Yes/No Questions}, author={Clark, Christophe...
false
# Dataset Card for Russian riddles with answers (377 entries). ### Dataset Summary Contains a Parquet file of riddle-and-answer QnA pairs. Each row consists of * INSTRUCTION * RESPONSE * SOURCE * METADATA (json with language). ### Licensing Information Data is scraped from several sites. Since most of the riddles...
true
# Dataset Card for "qnli-id" This dataset is a translated version of the qnli dataset from the [glue](https://huggingface.co/datasets/glue) benchmark. # Citing & Authors ``` @article{warstadt2018neural, title={Neural Network Acceptability Judgments}, author={Warstadt, Alex and Singh, Amanpreet and Bowman, Samuel R}, jou...
false
`load_dataset('phongmt184172/mtet')` This dataset is cloned from https://github.com/vietai/mTet for machine translation tasks.
false
# Dataset Card for multilingual tatoeba QnA translation with ~120K entries. ### Dataset Summary Contains Parquet of a list of instructions and translation articles in different languages. Each row consists of * INSTRUCTION * RESPONSE * SOURCE (tatoeba) * METADATA (json with language, text length, uuid, langs-pair...
false
# Dataset Card for "turkish-nlp-suite/turkish-wikiNER" <img src="https://raw.githubusercontent.com/turkish-nlp-suite/.github/main/profile/wiki.png" width="20%" height="20%"> ## Dataset Description - **Repository:** [Turkish-WikiNER](https://github.com/turkish-nlp-suite/Turkish-Wiki-NER-Dataset) - **Paper:** [ACL l...
false
# Dataset Card for turkish-nlp-suite/Corona-mini ## Dataset Description - **Repository:** [Turkish Corona-mini corpus](https://github.com/turkish-nlp-suite/Corona-mini-dataset) - **Paper:** [ACL link]() - **Dataset:** Corona-mini - **Domain:** Social Media <img src="https://raw.githubusercontent.com/turkish-nlp-suit...
true
false
false
# Dataset Information ## Keywords Hebrew, handwritten, letters ## Description HDD_v0 consists of images of isolated Hebrew characters together with training and test sets subdivision. The images were collected from hand-filled forms. For more details, please refer to [1]. When using this dataset in research work,...
false
# Printed Photos Attacks The dataset includes 3 different types of files of real people: original selfies, original videos and videos of attacks with printed photos. The dataset addresses tasks in the field of anti-spoofing and is useful for business and safety systems. # Get the Dataset This is just an example o...
false
# Dataset Card for odia-qa-98K ## Dataset Description - **Homepage: https://www.odiagenai.org/** - **Repository: https://github.com/shantipriyap/OdiaGenAI** - **Point of Contact: Shantipriya Parida, and Sambit Sekhar** ### Dataset Summary ### Supported Tasks and Leaderboards Large Language Model (LLM) ### L...
false
# Dataset Card for OdiEnCorp_translation_instructions_25k ## Dataset Description - **Homepage: https://www.odiagenai.org/** - **Repository: https://github.com/shantipriyap/OdiaGenAI** - **Point of Contact: Shantipriya Parida, and Sambit Sekhar** ### Dataset Summary This dataset is the English-to-Odia translatio...
false
false
true
# Dataset Card for "CsFEVERv2" ## Dataset Description CsFEVERv2 is a dataset for Czech fact-checking developed as part of a bachelor thesis at the Artificial Intelligence Center of the Faculty of Electrical Engineering of the Czech Technical University in Prague. The dataset consists of an **original** subset, which ...
false
# DivSumm summarization dataset Dataset introduced in the paper: Analyzing the Dialect Diversity in Multi-document Summaries (COLING 2022) _Olubusayo Olabisi, Aaron Hudson, Antonie Jetter, Ameeta Agrawal_ DivSumm is a novel dataset consisting of dialect-diverse tweets and human-written extractive and abstractive su...
false
# so13m so13m is a dataset containing 13m discussion threads from StackOverflow. The origin of the data is the StackExchange data dump from between January 2014 and December 2022. The threads cover a multitude of topics. This dataset serves as a natural language and (often) accompanying code in the domain of software e...
false
# Dataset Card for Dataset Name ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/hugg...
false
# Dataset Card for MADBase ## Dataset Description - **Homepage:** https://datacenter.aucegypt.edu/shazeem/ - **Repository:** - **Paper:** A Two-Stage System for Arabic Handwritten Digit Recognition Tested on a New Large Database. EA El-Sherif, S Abdelazeem Artificial intelligence and pattern recognition, 237-242 -...
true
true
true
This is the same dataset as [`OxAISH-AL-LLM/pubmed_20k_rct`](https://huggingface.co/datasets/OxAISH-AL-LLM/pubmed_20k_rct). The only differences are 1. Addition of a unique identifier, `uid` 1. Addition of the indices, that is 3 columns with the embeddings of 3 different sentence-transformers - `all-mpnet-bas...
true
This is the same dataset as [`DeveloperOats/DBPedia_Classes`](https://huggingface.co/datasets/DeveloperOats/DBPedia_Classes). The only differences are 1. Addition of a unique identifier, `uid` 1. Addition of the indices, that is 3 columns with the embeddings of 3 different sentence-transformers - `all-mpnet-b...
false
# Fine tuning progress validation - RedPajama 3B, StableLM Alpha 7B, Open-LLaMA This repository contains the progress of fine-tuning models: RedPajama 3B, StableLM Alpha 7B, Open-LLaMA. These models have been fine-tuned on a specific text dataset and the results of the fine-tuning process are provided in the text file...
false
# Ukrainian Hypernymy Pairs Dataset ## Background Hypernymy is the super-subordinate or ISA semantic relation that links more general terms to more specific ones. For example, *rose* is a hyponym of *flower*, and *flower* is a hypernym of *rose*. Words that are hyponyms of the same hypernym are called co-hyponyms, for instance, *rose...
false
Dataset created from bittensor's subnet1. Will be constantly updated as I add more Q/A. Dataset is currently in "raw" format, would love to have something prettier for loading into datasets.
true
This is the same dataset as [`armanc/pubmed-rct20k`](https://huggingface.co/datasets/armanc/pubmed-rct20k). The only differences are 1. Addition of a unique identifier, `uid` 1. Addition of the indices, that is 3 columns with the embeddings of 3 different sentence-transformers - `all-mpnet-base-v2` - `multi-...
false
## Kinyarwanda-English Augmented parallel text This dataset contains 1,400,000 Kinyarwanda-English sentence pairs augmented from a 48,000-pair corpus from the [MbazaNLP dataset](https://huggingface.co/datasets/mbazaNLP/Kinyarwanda_English_parallel_dataset), obtained by scraping web data from religious sources such as: [Bible]...
false
AugQ-Wiki is an unsupervised augmented dataset for training retrievers used in AugTriever: Unsupervised Dense Retrieval by Scalable Data Augmentation. It consists of 22.6M pseudo query-document pairs based on Wikipedia. It follows the same license of Wikipedia (Creative Commons Attribution-Share-Alike License 3.0). ```...
false
> The datasets above are ABSA (Aspect-Based Sentiment Analysis) datasets. Their basic form is extraction from a sentence of: aspect terms, aspect categories (term categories), the sentiment polarity of each term in context, and the opinion words targeting that term. Different datasets extract different subsets of this information, as noted in the "instruction" key of each jsonl file. Here I have recast the task as generation: the model must produce the extraction results in a fixed format. #### Example of one record from the jsonl file extracted from the acos dataset: ``` { "task_type": "generation", "dataset": "acos", "input": ["the computer has difficulty swi...
false
> The datasets above are ABSA (Aspect-Based Sentiment Analysis) datasets. Their basic form is extraction from a sentence of: aspect terms, aspect categories (term categories), the sentiment polarity of each term in context, and the opinion words targeting that term. Different datasets extract different subsets of this information, as noted in the "instruction" key of each jsonl file. Here I have recast the task as generation: the model must produce the extraction results in a fixed format. Supplement: the SemEval-2014 dataset folder contains two subfolders, "laptop" and "restaurant", distinguished by the main topic of the texts. The extracted elements also differ between the two folders: "laptop" extracts aspect categories and sentiment polarity, while "restaura...
true
# NLP: Sentiment Classification Dataset This is a bundle dataset for an NLP task of sentiment classification in English. There is a sample project using this dataset: [GURA-gru-unit-for-recognizing-affect](https://github.com/NatLee/GURA-gru-unit-for-recognizing-affect). ## Content - `myanimelist-sts`: This datas...
false
# Dataset Card for Dataset Name ### Dataset Summary The benchmark datasets for document-level machine translation. ### Supported Tasks Document-level Machine Translation Tasks. ### Languages English-German ## Dataset Structure ### Data Instances TED: iwslt17, News: nc2016, Europarl: europarl7 ### Data Fields ...
true
# Dataset Card for Dataset Name ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/hugg...
false
# Dataset Card for Dataset Name ### Dataset Summary Text corpus dataset (fifa world cup 2022) ## Additional Information ### Citation Information ``` @misc{ enwiki:1154298520, author = "{Wikipedia contributors}", title = "2022 FIFA World Cup --- {Wikipedia}{,} The Free Encyclopedia", year = "2023", ...
false
# Overview SGDD-TST - [Schema-Guided Dialogue Dataset for Text Style Transfer](https://arxiv.org/abs/2206.09676) is a dataset for evaluating the quality of content similarity measures for text style transfer in the domain of personal plans. The original texts were obtained from [The Schema-Guided Dialogue Datase...
false
# Dataset Card for Bulgarian QnA reasoning with ~2.7K entries. ### Dataset Summary Contains Parquet of a list of instructions and answers. Each row consists of * INSTRUCTION * RESPONSE * SOURCE (reasoning_bg) * METADATA (json with language, url, id). ### Original Dataset is available here: * https://huggingface....
false
# Dataset Card for Dataset Name ## Dataset Description - **Homepage:** - **Repository:** - **Paper:** - **Leaderboard:** - **Point of Contact:** ### Dataset Summary For this dataset, we selected literary texts in Russian that are closest in style and subject matter to real diary entries, giving priority to te...
false
# Dataset Card for Dataset: OktoberfestFoodDatasetPlus ## Dataset Description - **Homepage: www.ilass.com** - **Repository: https://github.com/ilassAG/OktoberfestFoodDataset** - **Paper: https://arxiv.org/abs/1912.05007** ### Dataset Summary This dataset comprises three categories: drinkServed, foodServed, perso...
false
false
# KiriTrash Dataset ## Summary KiriTrash is a collection of trash images taken on the shorelines of Tarawa Atoll, Kiribati. This is a dataset I used for my own research. ## Dataset Description + Dataset format: COCO Format + Number of images: 650 training, 90 validation, 5 Test + Preprocessings: Auto-Oriented, Resize...
false
# Dataset Card for GSM QnA reasoning with ~8.8K entries. ### Dataset Summary Contains Parquet of a list of instructions and answers. Each row consists of * INSTRUCTION * RESPONSE * SOURCE * METADATA (json with language). ### Original Datasets are available here: * https://huggingface.co/datasets/gsm8k * https://...
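Since METADATA is stored as a JSON string, a row can be unpacked with the standard library (a sketch assuming the column names listed above; the field handling is an assumption):

```python
import json

def unpack_row(row):
    """Return (instruction, response, source, metadata-dict) from one dataset row.

    Assumes the METADATA column holds either a JSON string or an
    already-parsed dict, per the card's description.
    """
    meta = row["METADATA"]
    if isinstance(meta, str):
        meta = json.loads(meta)
    return row["INSTRUCTION"], row["RESPONSE"], row["SOURCE"], meta
```

This lets downstream code filter rows by, e.g., `meta["language"]` without re-parsing JSON in several places.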
false
# Summary This is a 🇹🇭 Thai-translated (GCP) dataset based on [MBZUAI/LaMini-instruction](https://huggingface.co/datasets/MBZUAI/LaMini-instruction). The dataset was generated with a total of 2.58 million pairs of instructions and responses, which were later used to fine-tune the LaMini-LM model series. This dataset utilizes GPT-3.5-turbo and is based ...
false
# Dataset Card for "github-code-haskell-function" Rows: 3.26M Download Size: 1.17GB This dataset is extracted from [github-code-haskell-file](https://huggingface.co/datasets/blastwind/github-code-haskell-file). Each row has 3 flavors of the same function: `uncommented_code`: Includes the function and its closest si...
true
true
# Dataset Card for russe-semantics-sim with ~200K entries. Russian language. ### Dataset Summary License: MIT. Contains a CSV listing word1, word2, their `connection score` (whether they are synonyms or associations), and the type of connection. ### Original Datasets are available here: - https://github.com/nlpub/russe-eval...
false
# Dataset Card for "code-search-net-php" ## Dataset Description - **Homepage:** None - **Repository:** https://huggingface.co/datasets/Nan-Do/code-search-net-go - **Paper:** None - **Leaderboard:** None - **Point of Contact:** [@Nan-Do](https://github.com/Nan-Do) ### Dataset Summary This dataset is the PHP portion...
false
# Dataset Card for "clts" [original link](https://github.com/lxj5957/CLTS-Dataset)
false
false
# Dataset Card for all_combined_odia_171K ## Dataset Description - **Homepage: https://www.odiagenai.org/** - **Repository: https://github.com/shantipriyap/OdiaGenAI** - **Point of Contact: Shantipriya Parida, and Sambit Sekhar** ### Dataset Summary This dataset is a mix of Odia instruction sets translated from...
false
# Instruction Tuning with GPT 4 RedPajama-Chat This dataset has been converted from the <a href="https://github.com/Instruction-Tuning-with-GPT-4/GPT-4-LLM" target="_new">Instruction-Tuning-with-GPT-4</a> dataset for the purpose of fine-tuning the <a href="https://huggingface.co/togethercomputer/RedPajama-INCITE-Chat-3...
false
One of the datasets supporting https://huggingface.co/TMZN/ChatGLM-wyw. # ChatGLM-wyw A ChatGLM that has read Classical Chinese # Origin On May 16, 2023, after long talk of having the AI read Classical Chinese, work officially began.<br> # Acknowledgements All-in-one package (including the chatglm model): link: https://pan.baidu.com/s/13GePNuh8ZP_DkMVRf5sHqw?pwd=2d2z All-in-one package (without the model): link: https://pan.baidu.com/s/1lMfG34jerHO7aFjfdKTGUw?pwd=6y7j Dataset-creation expert's link: https://github.com/huang1332/...
false
false
false