| dataset_name | description | prompt |
|---|---|---|
KDD Cup 1999 | This is the data set used for The Third International Knowledge Discovery and Data Mining Tools Competition, which was held in conjunction with KDD-99, the Fifth International Conference on Knowledge Discovery and Data Mining. The competition task was to build a network intrusion detector, a predictive model capable of ... | Provide a detailed description of the following dataset: KDD Cup 1999 |
Arcene | ARCENE was obtained by merging three mass-spectrometry datasets to obtain enough training and test data for a benchmark. The original features indicate the abundance of proteins in human sera having a given mass value. Based on those features one must separate cancer patients from healthy patients. We added a number of... | Provide a detailed description of the following dataset: Arcene |
DukeMTMC-VideoReID | The DukeMTMC-VideoReID (Duke Multi-Tracking Multi-Camera Video-based ReIDentification) dataset is a subset of the DukeMTMC for video-based person re-ID. The dataset is created from high-resolution videos from 8 different cameras. It is one of the largest pedestrian video datasets wherein images are cropped by hand-draw... | Provide a detailed description of the following dataset: DukeMTMC-VideoReID |
MTOP | A multilingual task-oriented semantic parsing dataset covering 6 languages and 11 domains. | Provide a detailed description of the following dataset: MTOP |
Emotional Dialogue Acts | Emotional Dialogue Acts data contains dialogue act labels for existing emotion multi-modal conversational datasets.
We chose two popular multimodal emotion datasets: Multimodal EmotionLines Dataset (MELD) and Interactive Emotional dyadic MOtion CAPture database (IEMOCAP).
EDAs reveal associations between dialogue acts... | Provide a detailed description of the following dataset: Emotional Dialogue Acts |
Santesteban VTO | Physics-based simulated garments on top of SMPL bodies. The data is generated using a modified version of ARCSim and sequences from the CMU Motion Capture Database converted to SMPL format in SURREAL. Each simulated sequence is stored as a .pkl file that contains the following data: | Provide a detailed description of the following dataset: Santesteban VTO |
Lemons quality control dataset | Lemon dataset has been prepared to investigate the possibilities to tackle the issue of fruit quality control. It contains 2690 annotated images (1056 x 1056 pixels). Raw lemon images have been captured using the procedure described in the following blogpost and manually annotated using CVAT. | Provide a detailed description of the following dataset: Lemons quality control dataset |
Douban Conversation Corpus | We release the Douban Conversation Corpus, comprising a training data set, a development set and a test set for a retrieval-based chatbot. The statistics of the Douban Conversation Corpus are shown in the following table.
| |Train|Val| Test |
| ------------- |:-------------:|:-------------:|:-------------:|
|... | Provide a detailed description of the following dataset: Douban Conversation Corpus |
E-commerce | We release the E-commerce Dialogue Corpus, comprising a training data set, a development set and a test set for a retrieval-based chatbot. The statistics of the E-commerce Dialogue Corpus are shown in the following table.
| |Train|Val| Test |
| ------------- |:-------------:|:-------------:|:-------------... | Provide a detailed description of the following dataset: E-commerce |
RRS | | | Train | Validation | Test | Ranking Test |
| --------- | ----- | ---------- | ------- | ------------ |
| size | 0.4M | 50K | 5K | 800 |
| pos:neg | 1:1 | 1:9 | 1.2:8.8 | - |
| avg turns | 5.0 | 5.0 | 5.0 | 5.0 |
Ranking test ... | Provide a detailed description of the following dataset: RRS |
RRS Ranking Test | | | Train | Validation | Test | Ranking Test |
| --------- | ----- | ---------- | ------- | ------------ |
| size | 0.4M | 50K | 5K | 800 |
| pos:neg | 1:1 | 1:9 | 1.2:8.8 | - |
| avg turns | 5.0 | 5.0 | 5.0 | 5.0 |
Ranking test ... | Provide a detailed description of the following dataset: RRS Ranking Test |
Duolingo STAPLE Shared Task | This is the dataset for the 2020 Duolingo shared task on Simultaneous Translation And Paraphrase for Language Education (STAPLE). Sentence prompts, along with automatic translations, and high-coverage sets of translation paraphrases weighted by user response are provided in 5 language pairs. Starter code for this task ... | Provide a detailed description of the following dataset: Duolingo STAPLE Shared Task |
Duolingo Bandit Notifications | Replication datasets (200 million rows) used in experiments by Yancey & Settles (2020). (2019-06-11) | Provide a detailed description of the following dataset: Duolingo Bandit Notifications |
Duolingo SLAM Shared Task | This repository contains gzipped files containing more than 2 million tokens (words) from answers submitted by more than 6,000 students over the course of their first 30 days of using Duolingo. It also contains baseline starter code written in Python. There are three data sets, corresponding to three different language... | Provide a detailed description of the following dataset: Duolingo SLAM Shared Task |
Duolingo Spaced Repetition Data | This is a gzipped CSV file containing the 13 million Duolingo student learning traces used in experiments by Settles & Meeder (2016). For more details and replication source code, visit: https://github.com/duolingo/halflife-regression (2016-06-07) | Provide a detailed description of the following dataset: Duolingo Spaced Repetition Data |
SubSumE | # SubSumE Dataset
This repository contains the SubSumE dataset for subjective document summarization. See [the paper](https://aclanthology.org/2021.newsum-1.14/) and the [talk](https://www.youtube.com/watch?v=0vyUQArRrvY) for details on dataset creation. Also check out our work [SuDocu](http://sudocu.cs.umass.edu/) ... | Provide a detailed description of the following dataset: SubSumE |
AnswerSumm | AnswerSumm is a dataset of 4,631 CQA threads for answer summarization, curated by professional linguists. | Provide a detailed description of the following dataset: AnswerSumm |
MultiSV | **MultiSV** is a corpus designed for training and evaluating text-independent multi-channel speaker verification systems. It can be readily used also for experiments with dereverberation, denoising, and speech enhancement. | Provide a detailed description of the following dataset: MultiSV |
ANIM | It comprises synthetic mesh sequences from Deformation Transfer for Triangle Meshes. | Provide a detailed description of the following dataset: ANIM |
AMA | **Articulated Mesh Animation** (**AMA**) is a real-world dataset containing 10 mesh sequences depicting 3 different humans performing various actions | Provide a detailed description of the following dataset: AMA |
CAPE | The CAPE dataset is a 3D dynamic dataset of clothed humans, featuring:
- 3D mesh registrations of accurate scans of clothed people in motion, captured at 60 FPS;
- Consistent SMPL mesh topology, all frames in correspondence;
- Precise, captured minimally clothed body shape under clothing;
- Clothed bodies of larg... | Provide a detailed description of the following dataset: CAPE |
TSSB | The time series segmentation benchmark (TSSB) currently contains 75 annotated time series (TS) with 1-9 segments. Each TS is constructed from one of the UEA & UCR time series classification datasets. We group TS by label and concatenate them to create segments with distinctive temporal patterns and statistical properti... | Provide a detailed description of the following dataset: TSSB |
Samoa Measles Outbreak 2019 | Dataset contains cumulative reported cases, hospital admission and discharge, and mortality data as parsed from the publicly available press releases by the Ministry of Health and National Emergency Operations Centre (NEOC) of the Government of Samoa. The data spans the initial press release at the end of September 201... | Provide a detailed description of the following dataset: Samoa Measles Outbreak 2019 |
WPC | The **WPC** (Waterloo Point Cloud) database is a dataset for subjective and objective quality assessment of point clouds. | Provide a detailed description of the following dataset: WPC |
ArgKP-2021 | Data set covering a set of debatable topics, where for each topic and stance, a set of triplets of the form `<argument, KP, label>` is provided. The data set is based on the [ArgKP data set](http://dx.doi.org/10.18653/v1/2020.acl-main.371), which contains arguments contributed by the crowd on 28 debatable topics, split... | Provide a detailed description of the following dataset: ArgKP-2021 |
Arendt | # Digital Edition: Essays from Hannah Arendt
We have created a NER dataset from the digital edition "Sechs Essays" by Hannah Arendt. It consists of 23 documents from the period 1932-1976, which are available as TEI files online (see https://hannah-arendt-edition.net/3p.html?lang=de). | Provide a detailed description of the following dataset: Arendt |
Biased-Cars | ..., car colors, object occlusions, diverse backgrounds (build... | Provide a detailed description of the following dataset: Biased-Cars |
VGGFace2 HQ | A high-resolution version of VGGFace2 for academic face editing purposes.
This project uses GFPGAN for image restoration and insightface for data preprocessing (crop and align). | Provide a detailed description of the following dataset: VGGFace2 HQ |
GINC | GINC (Generative In-Context learning Dataset) is a small-scale synthetic dataset for studying in-context learning. The pretraining data is generated by a mixture of HMMs and the in-context learning prompt examples are also generated from HMMs (either from the mixture or not). The prompt examples are out-of-distribution... | Provide a detailed description of the following dataset: GINC |
HGP | The Hands Guns and Phones (HGP) dataset contains 2199 images (1989 for training and 210 for testing) of people using guns or phones in real-world scenarios (people making phone reviews, shooting drills, or making calls). Every image of this dataset is labeled with the bounding boxes of Hands, Phones and Guns. All the afore... | Provide a detailed description of the following dataset: HGP |
THGP | The Temporal Hands Guns and Phones (THGP) dataset is a collection of 5960 video frames (5000 for training and 960 for testing). The training part is composed of 50 videos of 100 frames (720 × 720 pixels). This dataset contains 20 videos of shooting drills, 20 videos of armed robberies, and 10 videos of people making cal... | Provide a detailed description of the following dataset: THGP |
ARCT | Freely licensed dataset with warrants for 2k authentic arguments from news comments. On this basis, we present a new challenging task, the argument reasoning comprehension task. Given an argument with a claim and a premise, the goal is to choose the correct implicit warrant from two options. Both warrants are plausible... | Provide a detailed description of the following dataset: ARCT |
Pan-STARRS | Pan-STARRS is a system for wide-field astronomical imaging developed and operated by the Institute for Astronomy at the University of Hawaii. Pan-STARRS1 (PS1) is the first part of Pan-STARRS to be completed and is the basis for both Data Releases 1 and 2 (DR1 and DR2). The PS1 survey used a 1.8 meter telescope and it... | Provide a detailed description of the following dataset: Pan-STARRS |
CAR | CAR contains visual attributes for objects in the Cityscapes dataset.
For each object in an image, we have a list of attributes that depend on the category of the object. For instance, a vehicle category has a visibility attribute while a pedestrian has an activity attribute (walking, standing, etc.).
The objective o... | Provide a detailed description of the following dataset: CAR |
Robotic Interestingness | The Robotic Interestingness dataset was created to promote the development of visual interesting-scene prediction, enabling robots to better sense the world. | Provide a detailed description of the following dataset: Robotic Interestingness |
Haze4k | **Haze4k** is a synthesized dataset with 4,000 hazy images, in which each hazy image has the associated ground truths of a latent clean image, a transmission map, and an atmospheric light map. | Provide a detailed description of the following dataset: Haze4k |
ChEBI-20 | Dataset contains 33,010 molecule-description pairs split into 80%/10%/10% train/val/test splits. The goal of the task is to retrieve the relevant molecule for a natural language description. It is defined as follows:
To push the boundaries of multimodal models, we present a new IR task: **Text2Mol**.
Give... | Provide a detailed description of the following dataset: ChEBI-20 |
WikiContradiction | **WikiContradiction** is a novel wiki dataset for self-contradiction Wikipedia article detection. | Provide a detailed description of the following dataset: WikiContradiction |
OpenFWI | **OpenFWI** is a collection of large-scale open-source benchmark datasets for seismic full waveform inversion (FWI). OpenFWI is catered for the geoscience and machine learning community to facilitate diversified, rigorous and reproducible research on machine learning-based FWI. | Provide a detailed description of the following dataset: OpenFWI |
B-Pref | **B-Pref** is a benchmark specially designed for preference-based RL. A key challenge with such a benchmark is providing the ability to evaluate candidate algorithms quickly, which makes relying on real human input for evaluation prohibitive. At the same time, simulating human input as giving perfect preferences for th... | Provide a detailed description of the following dataset: B-Pref |
Product Page | **Product Page** is a large-scale and realistic dataset of webpages. The dataset contains 51,701 manually labeled product pages from 8,175 real e-commerce websites. The pages can be rendered entirely in a web browser and are suitable for computer vision applications. This makes it substantially richer and more diverse ... | Provide a detailed description of the following dataset: Product Page |
IconQA | Current visual question answering (VQA) tasks mainly consider answering human-annotated questions for natural images in the daily-life context. **Icon question answering** (**IconQA**) is a benchmark which aims to highlight the importance of abstract diagram understanding and comprehensive cognitive reasoning in real-w... | Provide a detailed description of the following dataset: IconQA |
VoiceBank-SLR | Because there is no publicly available free dataset for speech dereverberation, we prepared a dataset based on the clean speech from VoiceBank-DEMAND [26] (discarding the noisy speech) and convolved it with the room impulse responses (RIRs) from OpenSLR. | Provide a detailed description of the following dataset: VoiceBank-SLR |
LIVE-VQC | The great variations in videographic skills, camera designs, compression and processing protocols, communication and bandwidth environments, and displays lead to an enormous variety of video impairments. Current no-reference (NR) video quality models are unable to handle this diversity of distortions. T... | Provide a detailed description of the following dataset: LIVE-VQC |
KoNViD-1k | Subjective video quality assessment (VQA) strongly depends on semantics, context, and the types of visual distortions. A lot of existing VQA databases cover small numbers of video sequences with artificial distortions. When testing newly developed Quality of Experience (QoE) models and metrics, they are commonly evalua... | Provide a detailed description of the following dataset: KoNViD-1k |
YouTube-UGC | This YouTube dataset is a sampling from thousands of User Generated Content (UGC) as uploaded to YouTube distributed under the Creative Commons license. This dataset was created in order to assist in the advancement of video compression and quality assessment research of UGC videos. | Provide a detailed description of the following dataset: YouTube-UGC |
LIVE-FB LSVQ | No-reference (NR) perceptual video quality assessment (VQA) is a complex, unsolved, and important problem to social and streaming media applications. Efficient and accurate video quality predictors are needed to monitor and guide the processing of billions of shared, often imperfect, user-generated content (UGC). Unfor... | Provide a detailed description of the following dataset: LIVE-FB LSVQ |
LIVE-ETRI | The video deployed parameter space is continuously increasing to provide more realistic and immersive experiences to global streaming and social media viewers. However, increments in video parameters such as spatial resolution or frame rate are inevitably associated with larger data volumes. Transmitting increasingly v... | Provide a detailed description of the following dataset: LIVE-ETRI |
P3M-10k | P3M-10k contains 10421 high-resolution real-world face-blurred portrait images, along with their manually labeled alpha mattes. The dataset aims to aid research efforts in the area of portrait image matting and related topics. | Provide a detailed description of the following dataset: P3M-10k |
SLUE | **Spoken Language Understanding Evaluation** (**SLUE**) is a suite of benchmark tasks for spoken language understanding evaluation. It consists of limited-size labeled training sets and corresponding evaluation sets. This resource would allow the research community to track progress, evaluate pre-trained representatio... | Provide a detailed description of the following dataset: SLUE |
SOSD | SOSD is a collection of datasets to benchmark the lookup performance of learned indexes.
SOSD currently includes eight different datasets. Each dataset consists of 200 million 64-bit unsigned integers (keys) with very few duplicates (if at all):
`amzn` represents book sale popularity data.
`face` is an upsampled ve... | Provide a detailed description of the following dataset: SOSD |
ClevrTex | **ClevrTex** is a new benchmark designed as the next challenge to compare, evaluate and analyze algorithms for unsupervised multi-object segmentation. ClevrTex features synthetic scenes with diverse shapes, textures and photo-mapped materials, created using physically based rendering techniques.
Image source: [Karaz... | Provide a detailed description of the following dataset: ClevrTex |
LegalNERo | LegalNERo is a manually annotated corpus for named entity recognition in the Romanian legal domain.
It provides gold annotations for organizations, locations, persons, time and legal resources mentioned in legal documents.
Additionally, it offers GEONAMES codes for the named entities annotated as location (where a li... | Provide a detailed description of the following dataset: LegalNERo |
Evidence Inference 2.0 | The dataset consists of biomedical articles describing randomized control trials (RCTs) that compare multiple treatments. Each of these articles will have multiple questions, or 'prompts' associated with them. These prompts will ask about the relationship between an intervention and comparator with respect to an outcom... | Provide a detailed description of the following dataset: Evidence Inference 2.0 |
RTASC | The ROBIN Technical Acquisition Speech Corpus (ROBINTASC) was developed within the ROBIN project. Its main purpose was to improve the behaviour of a conversational agent, allowing human-machine interaction in the context of purchasing technical equipment. It contains over 6 hours of read speech in Romanian. We... | Provide a detailed description of the following dataset: RTASC |
The ComMA Dataset v0.2 | The ComMA Dataset v0.2 is a multilingual dataset annotated with a hierarchical, fine-grained tagset marking different types of aggression and the "context" in which they occur. The context, here, is defined by the conversational thread in which a specific comment occurs and also the "type" of discursive role that the c... | Provide a detailed description of the following dataset: The ComMA Dataset v0.2 |
Medical Bottles | Original dataset for "HIGH PRECISION MEDICINE BOTTLES VISION ONLINE INSPECTION SYSTEM AND CLASSIFICATION BASED ON MULTI-FEATURES AND ENSEMBLE LEARNING VIA INDEPENDENCE TEST" | Provide a detailed description of the following dataset: Medical Bottles |
RedCaps | **RedCaps** is a large-scale dataset of 12M image-text pairs collected from Reddit. Images and captions from Reddit depict and describe a wide variety of objects and scenes. The data is collected from a manually curated set of subreddits (350 total), which give coarse image labels and allow steering of the dataset comp... | Provide a detailed description of the following dataset: RedCaps |
Translated TACRED | 533 parallel examples sampled from TACRED, translated into Russian and Korean (plus 3 additional examples in Russian), accompanied by a translation of a list of trigger words collected for the different relations. | Provide a detailed description of the following dataset: Translated TACRED |
CytoImageNet | CytoImageNet is a large-scale pretraining dataset of microscopy images (890K, 894 classes). In the paper, CytoImageNet pretraining yielded features competitive to **and different** from ImageNet pretrained features on downstream microscopy tasks.
* It was constructed from 40 openly available microscopy datasets.
*... | Provide a detailed description of the following dataset: CytoImageNet |
MP-3DHP: Multi-Person 3D Human Pose Dataset | Multi-Person 3D HumanPose Dataset (MP-3DHP) is a depth sensor-based dataset, which was constructed to facilitate the development of multi-person 3D pose estimation methods targeting real-world challenges. The dataset includes 177k training data and 33k validation data where both the 3D human poses and body segments are... | Provide a detailed description of the following dataset: MP-3DHP: Multi-Person 3D Human Pose Dataset |
3D Lane Synthetic Dataset | This is a synthetic dataset constructed to stimulate the development and evaluation of 3D lane detection methods. | Provide a detailed description of the following dataset: 3D Lane Synthetic Dataset |
Yelp2018 | The Yelp2018 dataset is adopted from the 2018 edition of the Yelp challenge, wherein local businesses like restaurants and bars are viewed as items. We use the same 10-core setting to ensure data quality. | Provide a detailed description of the following dataset: Yelp2018 |
CEAHB2021-5 | Ancient books script identification of Chinese ethnic minorities with deep convolutional neural networks via multi-branch and spatial pyramid pooling
Automatic classification of ancient books is an important component of the digital platform of ancient books. In view of the ancient books script identification task o... | Provide a detailed description of the following dataset: CEAHB2021-5 |
TLHDIBD2021 | Hybrid-CBF: A hybrid classification and binarization framework for historical Tai Le document image binarization
The binarization of historical documents is very important and more challenging than the binarization of ordinary documents. As a result of the serious noise pollution found on the historical Tai Le documen... | Provide a detailed description of the following dataset: TLHDIBD2021 |
4DMatch | A benchmark for matching and registration of partial point clouds with time-varying geometry. It is constructed using randomly selected 1761 sequences from [DeformingThings4D](/dataset/deformingthings4d). | Provide a detailed description of the following dataset: 4DMatch |
WDC LSPM | Many e-shops have started to mark-up product data within their HTML pages using the schema.org vocabulary. The Web Data Commons project regularly extracts such data from the Common Crawl, a large public web crawl. The Web Data Commons Training and Test Sets for Large-Scale Product Matching contain product offers from d... | Provide a detailed description of the following dataset: WDC LSPM |
Evaluating registrations of serial sections with distortions of the ground truths. Supplemental data | This is the supplemental data for our paper on how to benchmark registrations of serial sections with ground truths. There are three main modalities and one additional modality used as a reference. | Provide a detailed description of the following dataset: Evaluating registrations of serial sections with distortions of the ground truths. Supplemental data |
UTFPR-SBD3 | The semantic segmentation of clothes is a challenging task due to the wide variety of clothing styles, layers and shapes.
The UTFPR-SBD3 contains 4,500 images manually annotated at pixel level in 18 classes plus background.
To ensure the high quality of the dataset, all images were manually annotated at the pixel lev... | Provide a detailed description of the following dataset: UTFPR-SBD3 |
FGraDA | Previous research for adapting a general neural machine translation (NMT) model into a specific domain usually neglects the diversity in translation within the same domain, which is a core problem for domain adaptation in real-world scenarios. One representative of such challenging scenarios is to deploy a translation... | Provide a detailed description of the following dataset: FGraDA |
IMDB-WIKI-SbS | IMDB-WIKI-SbS is a new large-scale dataset for evaluating pairwise comparisons, building on the success of IMDB-WIKI, a well-known benchmark for computer vision systems. This dataset uses the age information offered by IMDB-WIKI as ground truth while providing a balanced distribution of ages and genders of people in pho... | Provide a detailed description of the following dataset: IMDB-WIKI-SbS |
LIRIS human activities dataset | The LIRIS human activities dataset contains (gray/RGB/depth) videos showing people performing various activities taken from daily life (discussing, telephone calls, giving an item, etc.). The dataset is fully annotated, where the annotation not only contains information on the action class but also its spatial and tempor... | Provide a detailed description of the following dataset: LIRIS human activities dataset |
CoVaxLies v1 | CoVaxLies v1 includes 17 known Misinformation Targets (MisTs) found on Twitter about the COVID-19 vaccines. Language experts annotated tweets as Relevant or Not Relevant, and then further annotated Relevant tweets with Stance towards each MisT. This collection is a first step in providing large-scale resources for misi... | Provide a detailed description of the following dataset: CoVaxLies v1 |
Freibrug Cars | An object-centric dataset consisting of 52 RGB sequences of cars. | Provide a detailed description of the following dataset: Freibrug Cars |
LSUI | We released a large-scale underwater image (LSUI) dataset including 5004 image pairs, which involve richer underwater scenes (lighting conditions, water types and target categories) and better visual quality reference images than the existing ones. | Provide a detailed description of the following dataset: LSUI |
notebookcdg | Inspired by Wang et al. 2021, we decided to utilize the top-voted and well-documented Kaggle notebooks to construct the notebookCDG dataset.
We collected the top 10% highly-voted notebooks from the top 20 popular competitions on Kaggle (e.g. Titanic). We checked the data policy of each of the 20 competitions, none of ... | Provide a detailed description of the following dataset: notebookcdg |
Abt-Buy | The Abt-Buy dataset for entity resolution derives from the online retailers Abt.com and Buy.com. The dataset contains 1081 entities from abt.com and 1092 entities from buy.com as well as a gold standard (perfect mapping) with 1097 matching record pairs between the two data sources. The common attributes between the tw... | Provide a detailed description of the following dataset: Abt-Buy |
Amazon-Google | The Amazon-Google dataset for entity resolution derives from the online retailers Amazon.com and the product search service of Google accessible through the Google Base Data API. The dataset contains 1363 entities from amazon.com and 3226 google products as well as a gold standard (perfect mapping) with 1300 matching ... | Provide a detailed description of the following dataset: Amazon-Google |
MusicBrainz20K | The MusicBrainz20K dataset for entity resolution and entity clustering is based on real records about songs from the MusicBrainz database. Each record is described with the following attributes: artist, title, album, year and length. The records have been modified with the DAPO [1] data generator. The generated dataset... | Provide a detailed description of the following dataset: MusicBrainz20K |
Vehicle-1M | Vehicle-1M involves vehicle images captured across day and night, from head or rear, by multiple surveillance cameras installed in cities. There are 936,051 images in total, from 55,527 vehicles and 400 vehicle models in the dataset. Each image is attached with a vehicle ID label denoting its identity in real world as w... | Provide a detailed description of the following dataset: Vehicle-1M |
WikiNEuRal | WikiNEuRal is a high-quality automatically-generated dataset for Multilingual Named Entity Recognition. | Provide a detailed description of the following dataset: WikiNEuRal |
Corrosion Image Data Set for Automating Scientific Assessment of Materials | The study of material corrosion is an important research area, with corrosion degradation of metallic structures causing expenses up to 4% of the global domestic product annually along with major safety risks worldwide. Unfortunately, large-scale and timely scientific discovery of materials has been hindered by the lac... | Provide a detailed description of the following dataset: Corrosion Image Data Set for Automating Scientific Assessment of Materials |
ClimART | Numerical simulations of Earth's weather and climate require substantial amounts of computation. This has led to a growing interest in replacing subroutines that explicitly compute physical processes with approximate machine learning (ML) methods that are fast at inference time. Within weather and climate models, atmos... | Provide a detailed description of the following dataset: ClimART |
IATOS Dataset | Audio files of people's coughs recorded by cell phone, segmented into COVID positive and negative according to RT-PCR test results. | Provide a detailed description of the following dataset: IATOS Dataset |
GPR1200 | Most publications that aim to optimize neural networks for CBIR, train and test their models on domain specific datasets. It is therefore unclear, if those networks can be used as a general-purpose image feature extractor. After analyzing popular image retrieval test sets we decided to manually curate GPR1200, an easy ... | Provide a detailed description of the following dataset: GPR1200 |
Orchard | Orchard is a diagnostic dataset for systematically evaluating hierarchical reasoning in state-of-the-art neural sequence models | Provide a detailed description of the following dataset: Orchard |
GVFC | This is a new dataset of news headlines and their frames related to the issue of gun violence in the United States. This Gun Violence Frame Corpus (GVFC) was curated and annotated by journalism and communication experts. The articles in this dataset are drawn from a sample of news articles from a list of 30 top U.S. ne... | Provide a detailed description of the following dataset: GVFC |
A dataset of neonatal EEG recordings with seizures annotations | Neonatal seizures are a common emergency in the neonatal intensive care unit (NICU). There are many questions yet to be answered regarding the temporal/spatial characteristics of seizures from different pathologies, response to medication, effects on neurodevelopment and optimal detection. This dataset contains EEG rec... | Provide a detailed description of the following dataset: A dataset of neonatal EEG recordings with seizures annotations |
MMPTRACK | Multi-camera Multiple People Tracking (MMPTRACK) dataset has about 9.6 hours of videos, with over half a million frame-wise annotations. The dataset is densely annotated, e.g., per-frame bounding boxes and person identities are available, as well as camera calibration parameters. Our dataset is recorded with 15 frames ... | Provide a detailed description of the following dataset: MMPTRACK |
MIS-Check Dam | Minor Irrigation Structures Check-Dam Dataset is a public dataset annotated by domain experts using images from Google static map for instance segmentation and object detection tasks.
Google drive link for the dataset:
https://drive.google.com/drive/u/2/folders/16-XNaD6Cfbec7cpJB9_raYz8tl0CEQzZ | Provide a detailed description of the following dataset: MIS-Check Dam |
fluocells | By releasing this dataset, we aim at providing a new testbed for computer vision techniques using Deep Learning. The main peculiarity is the shift from the domain of "natural images" typical of common benchmark datasets to biological imaging. We anticipate that the advantages of doing so could be two-fold: i) fostering r... | Provide a detailed description of the following dataset: fluocells |
MSU Video Alignment and Retrieval Benchmark Suite | Frame-to-frame video alignment/synchronization | Provide a detailed description of the following dataset: MSU Video Alignment and Retrieval Benchmark Suite |
Manually annotated 3-digit occupation codes from the Norwegian 1950 census | Manually annotated 3-digit occupation codes from the Norwegian full count 1950 population census. | Provide a detailed description of the following dataset: Manually annotated 3-digit occupation codes from the Norwegian 1950 census |
Manually annotated 3-digit occupation code training set from the Norwegian 1950 census | The Norwegian Historical Data Centre, 2021, "Manually annotated 3-digit occupation code training set from the Norwegian 1950 census", https://doi.org/10.18710/7JWAZX, DataverseNO, V1 | Provide a detailed description of the following dataset: Manually annotated 3-digit occupation code training set from the Norwegian 1950 census |
DeepSport Dataset | This basketball dataset was acquired under the Walloon region project DeepSport, using the Keemotion system installed in multiple arenas.
We would like to thank both Keemotion, for letting us use their system for raw image acquisition during live productions, and the LNB, for the rights to their images. | Provide a detailed description of the following dataset: DeepSport Dataset |
CNTD | Chinese and Naxi scene text detection data set, with labelme annotations converted to JSON. | Provide a detailed description of the following dataset: CNTD |
CUTE80 | CUTE80 was created to demonstrate the capability of current text detection methods in handling curved text. | Provide a detailed description of the following dataset: CUTE80 |
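Every prompt in the table above follows one fixed template, differing only in the dataset name. A minimal sketch of how such prompt rows could be generated (the `records` field names and `TEMPLATE` constant are illustrative assumptions, not part of the original pipeline):

```python
# Fixed prompt template observed in every row of the table above.
TEMPLATE = "Provide a detailed description of the following dataset: {name}"

# Hypothetical records mirroring the table's first two columns.
records = [
    {"dataset_name": "KDD Cup 1999", "description": "KDD-99 intrusion detection competition data."},
    {"dataset_name": "Arcene", "description": "Mass-spectrometry cancer classification benchmark."},
]

# Derive the third column (prompt) from the first (dataset_name).
for rec in records:
    rec["prompt"] = TEMPLATE.format(name=rec["dataset_name"])

print(records[0]["prompt"])
# → Provide a detailed description of the following dataset: KDD Cup 1999
```

Because the prompt is a pure function of the name, the third column carries no information beyond the first; any loader could regenerate it on the fly.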