Shellcode_IA32 is a dataset for generating shellcode from English intents. The shellcodes are compilable on the 32-bit Intel Architecture (IA-32).
Wikipedia-based Image Text (WIT) Dataset is a large multimodal, multilingual dataset. WIT is composed of a curated set of 37.6 million entity-rich image-text examples with
This dataset is the CNN/Dailymail dataset translated to Dutch. This is the original dataset: ``` load_dataset("cnn_dailymail", '3.0.0') ``` And this is
We present two new large-scale datasets aimed at evaluating systems designed to comprehend a natural language query and extract its answer from a large corpus of text. The Qua
CANARD has been preprocessed by Voskarides et al. to train and evaluate their Query Resolution Term Classification model (QuReTeC). CANARD is a dataset for question-in-context
Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset, consisting of questions posed by crowdworkers on a set of Wikipedia articles, where the answer
The args.me corpus (version 1.0, cleaned) comprises 382,545 arguments crawled from four debate portals in the middle of 2019. The debate portals are Debatewise, IDebate.org, D
The scientific papers dataset contains two sets of long and structured documents, obtained from the ArXiv and PubMed OpenAccess repositories. Both the "arxiv" and "pubm
RecipeNLG: A Cooking Recipes Dataset for Semi-Structured Text Generation (Lite version). The published dataset contains 7,198 cooking recipes (>7K). It is processed in more ca
TeCla: Text Classification Catalan dataset. A Catalan news corpus for text classification, crawled from the ACN (Catalan News Agency) site: www.acn.cat
This is a small subset of 10K records from the original OSCAR dataset ("unshuffled_deduplicated_en" subset), created for testing. The records were extracted after ha
AnCora Catalan NER. This is a dataset for Named Entity Recognition (NER) from the AnCora corpus, adapted for Machine Learning and Language Mo
WikiGold dataset. Original dataset labels converted to IOB format. Data-loading file based on https://github.com/huggingface/datasets/blob/master/datasets/conllpp/conllpp.py and
Norwegian Colossal Corpus v2. Short sequences of a maximum of 100k characters.
This dataset is designed to generate lyrics with HuggingArtists.
This new dataset is designed to solve this great NLP task and is crafted with a lot of care.
CrowdSpeech is a publicly available large-scale dataset of crowdsourced audio transcriptions. It contains annotations for more than 50 hours of English speech transcriptions f
The SNLI corpus (version 1.0) is a collection of 570k human-written English sentence pairs manually labeled for balanced classification with the labels entailment, contradiction, and neutral.