Columns:
query: string (length 17–161)
keyphrase_query: string (length 3–85)
year: int64 (roughly 2010–2020)
negative_cands: sequence of dataset names (negative candidates)
positive_cands: sequence of dataset names (positive candidates)
abstracts: list of { "dkey": dataset name, "dval": dataset description } entries, one per positive and negative candidate
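To make the layout above concrete, here is a minimal sketch of how a single record could be represented and loaded in Python. The `Record` dataclass and the `records.jsonl` file name are assumptions made for illustration; the actual distribution format of the data may differ.

```python
import json
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class Record:
    query: str                       # free-form research idea (17-161 characters)
    keyphrase_query: str             # short keyphrase form of the query (3-85 characters)
    year: int                        # year associated with the query
    negative_cands: List[str]        # dataset names that are not relevant to the query
    positive_cands: List[str]        # dataset names that are relevant to the query
    abstracts: List[Dict[str, str]]  # [{"dkey": dataset name, "dval": description}, ...]


def load_records(path: str) -> List[Record]:
    """Read one JSON object per line and map it onto the assumed schema above."""
    records = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            records.append(Record(**json.loads(line)))
    return records


if __name__ == "__main__":
    for rec in load_records("records.jsonl"):  # hypothetical file name
        descriptions = {a["dkey"]: a["dval"] for a in rec.abstracts}
        print(rec.keyphrase_query, "->", rec.positive_cands, f"({len(descriptions)} abstracts)")
```

The rows below show sample records in this layout.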
I want to propose a method to learn 6-DoF pose estimation from point clouds for od
lidar odometry
2020
[ "Long-term visual localization", "S3DIS", "SynthHands", "Completion3D", "3DNet" ]
[ "KITTI", "Argoverse" ]
[ { "dkey": "KITTI", "dval": "KITTI (Karlsruhe Institute of Technology and Toyota Technological Institute) is one of the most popular datasets for use in mobile robotics and autonomous driving. It consists of hours of traffic scenarios recorded with a variety of sensor modalities, including high-resolution RGB, grayscale stereo cameras, and a 3D laser scanner. Despite its popularity, the dataset itself does not contain ground truth for semantic segmentation. However, various researchers have manually annotated parts of the dataset to fit their necessities. Álvarez et al. generated ground truth for 323 images from the road detection challenge with three classes: road, vertical, and sky. Zhang et al. annotated 252 (140 for training and 112 for testing) acquisitions – RGB and Velodyne scans – from the tracking challenge for ten object categories: building, sky, road, vegetation, sidewalk, car, pedestrian, cyclist, sign/pole, and fence. Ros et al. labeled 170 training images and 46 testing images (from the visual odometry challenge) with 11 classes: building, tree, sky, car, sign, road, pedestrian, fence, pole, sidewalk, and bicyclist." }, { "dkey": "Argoverse", "dval": "Argoverse is a tracking benchmark with over 30K scenarios collected in Pittsburgh and Miami. Each scenario is a sequence of frames sampled at 10 HZ. Each sequence has an interesting object called “agent”, and the task is to predict the future locations of agents in a 3 seconds future horizon. The sequences are split into training, validation and test sets, which have 205,942, 39,472 and 78,143 sequences respectively. These splits have no geographical overlap." }, { "dkey": "Long-term visual localization", "dval": "Long-term visual localization provides a benchmark datasets aimed at evaluating 6 DoF pose estimation accuracy over large appearance variations caused by changes in seasonal (summer, winter, spring, etc.) and illumination (dawn, day, sunset, night) conditions. Each dataset consists of a set of reference images, together with their corresponding ground truth poses, and a set of query images." }, { "dkey": "S3DIS", "dval": "The Stanford 3D Indoor Scene Dataset (S3DIS) dataset contains 6 large-scale indoor areas with 271 rooms. Each point in the scene point cloud is annotated with one of the 13 semantic categories." }, { "dkey": "SynthHands", "dval": "The SynthHands dataset is a dataset for hand pose estimation which consists of real captured hand motion retargeted to a virtual hand with natural backgrounds and interactions with different objects. The dataset contains data for male and female hands, both with and without interaction with objects. While the hand and foreground object are synthtically generated using Unity, the motion was obtained from real performances as described in the accompanying paper. In addition, real object textures and background images (depth and color) were used. Ground truth 3D positions are provided for 21 keypoints of the hand." }, { "dkey": "Completion3D", "dval": "The Completion3D benchmark is a dataset for evaluating state-of-the-art 3D Object Point Cloud Completion methods. Ggiven a partial 3D object point cloud the goal is to infer a complete 3D point cloud for the object." }, { "dkey": "3DNet", "dval": "The 3DNet dataset is a free resource for object class recognition and 6DOF pose estimation from point cloud data. 
3DNet provides large-scale hierarchical CAD-model databases with increasing numbers of classes and difficulty (10, 60 and 200 object classes), together with evaluation datasets that contain thousands of scenes captured with an RGB-D sensor." } ]
I want to use GAN to generate data for semi-supervised learning.
semi-supervised learning images
2018
[ "VoxPopuli", "DCASE 2018 Task 4", "DTD", "Friedman1", "ExtremeWeather", "C&Z" ]
[ "VIPeR", "Market-1501", "CUHK03" ]
[ { "dkey": "VIPeR", "dval": "The Viewpoint Invariant Pedestrian Recognition (VIPeR) dataset includes 632 people and two outdoor cameras under different viewpoints and light conditions. Each person has one image per camera and each image has been scaled to be 128×48 pixels. It provides the pose angle of each person as 0° (front), 45°, 90° (right), 135°, and 180° (back)." }, { "dkey": "Market-1501", "dval": "Market-1501 is a large-scale public benchmark dataset for person re-identification. It contains 1501 identities which are captured by six different cameras, and 32,668 pedestrian image bounding-boxes obtained using the Deformable Part Models pedestrian detector. Each person has 3.6 images on average at each viewpoint. The dataset is split into two parts: 750 identities are utilized for training and the remaining 751 identities are used for testing. In the official testing protocol 3,368 query images are selected as probe set to find the correct match across 19,732 reference gallery images." }, { "dkey": "CUHK03", "dval": "The CUHK03 consists of 14,097 images of 1,467 different identities, where 6 campus cameras were deployed for image collection and each identity is captured by 2 campus cameras. This dataset provides two types of annotations, one by manually labelled bounding boxes and the other by bounding boxes produced by an automatic detector. The dataset also provides 20 random train/test splits in which 100 identities are selected for testing and the rest for training" }, { "dkey": "VoxPopuli", "dval": "VoxPopuli is a large-scale multilingual corpus providing 100K hours of unlabelled speech data in 23 languages. It is the largest open data to date for unsupervised representation learning as well as semi-supervised learning. VoxPopuli also contains 1.8K hours of transcribed speeches in 16 languages and their aligned oral interpretations into 5 other languages totaling 5.1K hours." }, { "dkey": "DCASE 2018 Task 4", "dval": "DCASE2018 Task 4 is a dataset for large-scale weakly labeled semi-supervised sound event detection in domestic environments. The data are YouTube video excerpts focusing on domestic context which could be used for example in ambient assisted living applications. The domain was chosen due to the scientific challenges (wide variety of sounds, time-localized events...) and potential industrial applications.\nSpecifically, the task employs a subset of “Audioset: An Ontology And Human-Labeled Dataset For Audio Events” by Google. Audioset consists of an expanding ontology of 632 sound event classes and a collection of 2 million human-labeled 10-second sound clips (less than 21% are shorter than 10-seconds) drawn from 2 million Youtube videos. The ontology is specified as a hierarchical graph of event categories, covering a wide range of human and animal sounds, musical instruments and genres, and common everyday environmental sounds.\nTask 4 focuses on a subset of Audioset that consists of 10 classes of sound events: speech, dog, cat, alarm bell ringing, dishes, frying, blender, running water, vacuum cleaner, electric shaver toothbrush." }, { "dkey": "DTD", "dval": "The Describable Textures Dataset (DTD) contains 5640 texture images in the wild. They are annotated with human-centric attributes inspired by the perceptual properties of textures." }, { "dkey": "Friedman1", "dval": "The friedman1 data set is commonly used to test semi-supervised regression methods." 
}, { "dkey": "ExtremeWeather", "dval": "Encourages machine learning research in this area and to help facilitate further work in understanding and mitigating the effects of climate change." }, { "dkey": "C&Z", "dval": "One of the first datasets (if not the first) to highlight the importance of bias and diversity in the community, which started a revolution afterwards. Introduced in 2014 as integral part of a thesis of Master of Science [1,2] at Carnegie Mellon and City University of Hong Kong. It was later expanded by adding synthetic images generated by a GAN architecture at ETH Zürich (in HDCGAN by Curtó et al. 2017). Being then not only the pioneer of talking about the importance of balanced datasets for learning and vision but also for being the first GAN augmented dataset of faces. \n\nThe original description goes as follows:\n\nA bias-free dataset, containing human faces from different ethnical groups in a wide variety of illumination conditions and image resolutions. C&Z is enhanced with HDCGAN synthetic images, thus being the first GAN augmented dataset of faces.\n\nDataset: https://github.com/curto2/c\n\nSupplement (with scripts to handle the labels): https://github.com/curto2/graphics\n\n[1] https://www.curto.hk/c/decurto.pdf\n\n[2] https://www.zarza.hk/z/dezarza.pdf" } ]
An effective semi-automatic method for cleaning noisy large face datasets with the use of face recognition.
face recognition images
2020
[ "VoxCeleb2", "300W", "IMDb-Face", "Color FERET", "CPLFW", "MeGlass", "IJB-B" ]
[ "CASIA-WebFace", "CelebA" ]
[ { "dkey": "CASIA-WebFace", "dval": "The CASIA-WebFace dataset is used for face verification and face identification tasks. The dataset contains 494,414 face images of 10,575 real identities collected from the web." }, { "dkey": "CelebA", "dval": "CelebFaces Attributes dataset contains 202,599 face images of the size 178×218 from 10,177 celebrities, each annotated with 40 binary labels indicating facial attributes like hair color, gender and age." }, { "dkey": "VoxCeleb2", "dval": "VoxCeleb2 is a large scale speaker recognition dataset obtained automatically from open-source media. VoxCeleb2 consists of over a million utterances from over 6k speakers. Since the dataset is collected ‘in the wild’, the speech segments are corrupted with real world noise including laughter, cross-talk, channel effects, music and other sounds. The dataset is also multilingual, with speech from speakers of 145 different nationalities, covering a wide range of accents, ages, ethnicities and languages. The dataset is audio-visual, so is also useful for a number of other applications, for example – visual speech synthesis, speech separation, cross-modal transfer from face to voice or vice versa and training face recognition from video to complement existing face recognition datasets." }, { "dkey": "300W", "dval": "The 300-W is a face dataset that consists of 300 Indoor and 300 Outdoor in-the-wild images. It covers a large variation of identity, expression, illumination conditions, pose, occlusion and face size. The images were downloaded from google.com by making queries such as “party”, “conference”, “protests”, “football” and “celebrities”. Compared to the rest of in-the-wild datasets, the 300-W database contains a larger percentage of partially-occluded images and covers more expressions than the common “neutral” or “smile”, such as “surprise” or “scream”.\nImages were annotated with the 68-point mark-up using a semi-automatic methodology. The images of the database were carefully selected so that they represent a characteristic sample of challenging but natural face instances under totally unconstrained conditions. Thus, methods that achieve accurate performance on the 300-W database can demonstrate the same accuracy in most realistic cases.\nMany images of the database contain more than one annotated faces (293 images with 1 face, 53 images with 2 faces and 53 images with [3, 7] faces). Consequently, the database consists of 600 annotated face instances, but 399 unique images. Finally, there is a large variety of face sizes. Specifically, 49.3% of the faces have size in the range [48.6k, 2.0M] and the overall mean size is 85k (about 292 × 292) pixels." }, { "dkey": "IMDb-Face", "dval": "IMDb-Face is large-scale noise-controlled dataset for face recognition research. The dataset contains about 1.7 million faces, 59k identities, which is manually cleaned from 2.0 million raw images. All images are obtained from the IMDb website." }, { "dkey": "Color FERET", "dval": "The color FERET database is a dataset for face recognition. It contains 11,338 color images of size 512×768 pixels captured in a semi-controlled environment with 13 different poses from 994 subjects." }, { "dkey": "CPLFW", "dval": "A renovation of Labeled Faces in the Wild (LFW), the de facto standard testbed for unconstraint face verification. 
\n\nThere are three motivations behind the construction of CPLFW benchmark as follows:\n\n1.Establishing a relatively more difficult database to evaluate the performance of real world face verification so the effectiveness of several face verification methods can be fully justified.\n\n2.Continuing the intensive research on LFW with more realistic consideration on pose intra-class variation and fostering the research on cross-pose face verification in unconstrained situation. The challenge of CPLFW emphasizes pose difference to further enlarge intra-class variance. Also, negative pairs are deliberately selected to avoid different gender or race. CPLFW considers both the large intra-class variance and the tiny inter-class variance simultaneously.\n\n3.Maintaining the data size, the face verification protocol which provides a 'same/different' benchmark and the same identities in LFW, so one can easily apply CPLFW to evaluate the performance of face verification." }, { "dkey": "MeGlass", "dval": "MeGlass is an eyeglass dataset originally designed for eyeglass face recognition evaluation. All the face images are selected and cleaned from MegaFace. Each identity has at least two face images with eyeglass and two face images without eyeglass. It contains 47,817 images from 1,710 different identities." }, { "dkey": "IJB-B", "dval": "The IJB-B dataset is a template-based face dataset that contains 1845 subjects with 11,754 images, 55,025 frames and 7,011 videos where a template consists of a varying number of still images and video frames from different sources. These images and videos are collected from the Internet and are totally unconstrained, with large variations in pose, illumination, image quality etc. In addition, the dataset comes with protocols for 1-to-1 template-based face verification, 1-to-N template-based open-set face identification, and 1-to-N open-set video face identification." } ]
We provide an overview of the current state-of-the-art techniques for semantic segmentation. We review
semantic segmentation images
2020
[ "Synscapes", "THEODORE", "NuCLS", "SPOT", "ApolloCar3D", "SemArt", "NetHack Learning Environment" ]
[ "COCO", "ScanNet", "ShapeNet", "SBD" ]
[ { "dkey": "COCO", "dval": "The MS COCO (Microsoft Common Objects in Context) dataset is a large-scale object detection, segmentation, key-point detection, and captioning dataset. The dataset consists of 328K images.\n\nSplits:\nThe first version of MS COCO dataset was released in 2014. It contains 164K images split into training (83K), validation (41K) and test (41K) sets. In 2015 additional test set of 81K images was released, including all the previous test images and 40K new images.\n\nBased on community feedback, in 2017 the training/validation split was changed from 83K/41K to 118K/5K. The new split uses the same images and annotations. The 2017 test set is a subset of 41K images of the 2015 test set. Additionally, the 2017 release contains a new unannotated dataset of 123K images.\n\nAnnotations:\nThe dataset has annotations for\n\n\nobject detection: bounding boxes and per-instance segmentation masks with 80 object categories,\ncaptioning: natural language descriptions of the images (see MS COCO Captions),\nkeypoints detection: containing more than 200,000 images and 250,000 person instances labeled with keypoints (17 possible keypoints, such as left eye, nose, right hip, right ankle),\nstuff image segmentation – per-pixel segmentation masks with 91 stuff categories, such as grass, wall, sky (see MS COCO Stuff),\npanoptic: full scene segmentation, with 80 thing categories (such as person, bicycle, elephant) and a subset of 91 stuff categories (grass, sky, road),\ndense pose: more than 39,000 images and 56,000 person instances labeled with DensePose annotations – each labeled person is annotated with an instance id and a mapping between image pixels that belong to that person body and a template 3D model.\nThe annotations are publicly available only for training and validation images." }, { "dkey": "ScanNet", "dval": "ScanNet is an instance-level indoor RGB-D dataset that includes both 2D and 3D data. It is a collection of labeled voxels rather than points or objects. Up to now, ScanNet v2, the newest version of ScanNet, has collected 1513 annotated scans with an approximate 90% surface coverage. In the semantic segmentation task, this dataset is marked in 20 classes of annotated 3D voxelized objects." }, { "dkey": "ShapeNet", "dval": "ShapeNet is a large scale repository for 3D CAD models developed by researchers from Stanford University, Princeton University and the Toyota Technological Institute at Chicago, USA. The repository contains over 300M models with 220,000 classified into 3,135 classes arranged using WordNet hypernym-hyponym relationships. ShapeNet Parts subset contains 31,693 meshes categorised into 16 common object classes (i.e. table, chair, plane etc.). Each shapes ground truth contains 2-5 parts (with a total of 50 part classes)." }, { "dkey": "SBD", "dval": "The Semantic Boundaries Dataset (SBD) is a dataset for predicting pixels on the boundary of the object (as opposed to the inside of the object with semantic segmentation). The dataset consists of 11318 images from the trainval set of the PASCAL VOC2011 challenge, divided into 8498 training and 2820 test images. This dataset has object instance boundaries with accurate figure/ground masks that are also labeled with one of 20 Pascal VOC classes." }, { "dkey": "Synscapes", "dval": "Synscapes is a synthetic dataset for street scene parsing created using photorealistic rendering techniques, and show state-of-the-art results for training and validation as well as new types of analysis." 
}, { "dkey": "THEODORE", "dval": "Recent work about synthetic indoor datasets from perspective views has shown significant improvements of object detection results with Convolutional Neural Networks(CNNs). In this paper, we introduce THEODORE: a novel, large-scale indoor dataset containing 100,000 high- resolution diversified fisheye images with 14 classes. To this end, we create 3D virtual environments of living rooms, different human characters and interior textures. Beside capturing fisheye images from virtual environments we create annotations for semantic segmentation, instance masks and bounding boxes for object detection tasks. We compare our synthetic dataset to state of the art real-world datasets for omnidirectional images. Based on MS COCO weights, we show that our dataset is well suited for fine-tuning CNNs for object detection. Through a high generalization of our models by means of image synthesis and domain randomization we reach an AP up to 0.84 for class person on High-Definition Analytics dataset." }, { "dkey": "NuCLS", "dval": "The NuCLS dataset contains over 220,000 labeled nuclei from breast cancer images from TCGA. These nuclei were annotated through the collaborative effort of pathologists, pathology residents, and medical students using the Digital Slide Archive. These data can be used in several ways to develop and validate algorithms for nuclear detection, classification, and segmentation, or as a resource to develop and evaluate methods for interrater analysis.\n\nData from both single-rater and multi-rater studies are provided. For single-rater data we provide both pathologist-reviewed and uncorrected annotations. For multi-rater datasets we provide annotations generated with and without suggestions from weak segmentation and classification algorithms." }, { "dkey": "SPOT", "dval": "The SPOT dataset contains 197 reviews originating from the Yelp'13 and IMDB collections ([1][2]), annotated with segment-level polarity labels (positive/neutral/negative). Annotations have been gathered on 2 levels of granulatiry:\n\n\nSentences\nElementary Discourse Units (EDUs), i.e. sub-sentence clauses produced by a state-of-the-art RST parser\n\nThis dataset is intended to aid sentiment analysis research and, in particular, the evaluation of methods that attempt to predict sentiment on a fine-grained, segment-level basis." }, { "dkey": "ApolloCar3D", "dval": "ApolloCar3DT is a dataset that contains 5,277 driving images and over 60K car instances, where each car is fitted with an industry-grade 3D CAD model with absolute model size and semantically labelled keypoints. This dataset is above 20 times larger than PASCAL3D+ and KITTI, the current state-of-the-art." }, { "dkey": "SemArt", "dval": "SemArt is a multi-modal dataset for semantic art understanding. SemArt is a collection of fine-art painting images in which each image is associated to a number of attributes and a textual artistic comment, such as those that appear in art catalogues or museum collections. It contains 21,384 samples that provides artistic comments along with fine-art paintings and their attributes for studying semantic art understanding." }, { "dkey": "NetHack Learning Environment", "dval": "The NetHack Learning Environment (NLE) is a Reinforcement Learning environment based on NetHack 3.6.6. 
It is designed to provide a standard reinforcement learning interface to the game, and comes with tasks that function as a first step to evaluate agents on this new environment.\nNetHack is one of the oldest and arguably most impactful videogames in history, as well as being one of the hardest roguelikes currently being played by humans. It is procedurally generated, rich in entities and dynamics, and overall an extremely challenging environment for current state-of-the-art RL agents, while being much cheaper to run compared to other challenging testbeds. Through NLE, the authors wish to establish NetHack as one of the next challenges for research in decision making and machine learning." } ]
We introduce a simple yet effective modification to the HRNet architecture, which improves the high-resolution representation
semantic segmentation images
2019
[ "THEODORE", "DocBank", "IMDB-BINARY", "REDDIT-BINARY", "CARD-660", "COG", "SuperGLUE" ]
[ "WFLW", "AFLW", "Cityscapes" ]
[ { "dkey": "WFLW", "dval": "The Wider Facial Landmarks in the Wild or WFLW database contains 10000 faces (7500 for training and 2500 for testing) with 98 annotated landmarks. This database also features rich attribute annotations in terms of occlusion, head pose, make-up, illumination, blur and expressions." }, { "dkey": "AFLW", "dval": "The Annotated Facial Landmarks in the Wild (AFLW) is a large-scale collection of annotated face images gathered from Flickr, exhibiting a large variety in appearance (e.g., pose, expression, ethnicity, age, gender) as well as general imaging and environmental conditions. In total about 25K faces are annotated with up to 21 landmarks per image." }, { "dkey": "Cityscapes", "dval": "Cityscapes is a large-scale database which focuses on semantic understanding of urban street scenes. It provides semantic, instance-wise, and dense pixel annotations for 30 classes grouped into 8 categories (flat surfaces, humans, vehicles, constructions, objects, nature, sky, and void). The dataset consists of around 5000 fine annotated images and 20000 coarse annotated ones. Data was captured in 50 cities during several months, daytimes, and good weather conditions. It was originally recorded as video so the frames were manually selected to have the following features: large number of dynamic objects, varying scene layout, and varying background." }, { "dkey": "THEODORE", "dval": "Recent work about synthetic indoor datasets from perspective views has shown significant improvements of object detection results with Convolutional Neural Networks(CNNs). In this paper, we introduce THEODORE: a novel, large-scale indoor dataset containing 100,000 high- resolution diversified fisheye images with 14 classes. To this end, we create 3D virtual environments of living rooms, different human characters and interior textures. Beside capturing fisheye images from virtual environments we create annotations for semantic segmentation, instance masks and bounding boxes for object detection tasks. We compare our synthetic dataset to state of the art real-world datasets for omnidirectional images. Based on MS COCO weights, we show that our dataset is well suited for fine-tuning CNNs for object detection. Through a high generalization of our models by means of image synthesis and domain randomization we reach an AP up to 0.84 for class person on High-Definition Analytics dataset." }, { "dkey": "DocBank", "dval": "A benchmark dataset that contains 500K document pages with fine-grained token-level annotations for document layout analysis. DocBank is constructed using a simple yet effective way with weak supervision from the \\LaTeX{} documents available on the arXiv.com." }, { "dkey": "IMDB-BINARY", "dval": "IMDB-BINARY is a movie collaboration dataset that consists of the ego-networks of 1,000 actors/actresses who played roles in movies in IMDB. In each graph, nodes represent actors/actress, and there is an edge between them if they appear in the same movie. These graphs are derived from the Action and Romance genres." }, { "dkey": "REDDIT-BINARY", "dval": "REDDIT-BINARY consists of graphs corresponding to online discussions on Reddit. In each graph, nodes represent users, and there is an edge between them if at least one of them respond to the other’s comment. There are four popular subreddits, namely, IAmA, AskReddit, TrollXChromosomes, and atheism. IAmA and AskReddit are two question/answer based subreddits, and TrollXChromosomes and atheism are two discussion-based subreddits. 
A graph is labeled according to whether it belongs to a question/answer-based community or a discussion-based community." }, { "dkey": "CARD-660", "dval": "An expert-annotated word similarity dataset which provides a highly reliable, yet challenging, benchmark for rare word representation techniques." }, { "dkey": "COG", "dval": "A configurable visual question and answer dataset (COG) to parallel experiments in humans and animals. COG is much simpler than the general problem of video analysis, yet it addresses many of the problems relating to visual and logical reasoning and memory -- problems that remain challenging for modern deep learning architectures." }, { "dkey": "SuperGLUE", "dval": "SuperGLUE is a benchmark dataset designed to pose a more rigorous test of language understanding than GLUE. SuperGLUE has the same high-level motivation as GLUE: to provide a simple, hard-to-game measure of progress toward general-purpose language understanding technologies for English. SuperGLUE follows the basic design of GLUE: It consists of a public leaderboard built around eight language understanding tasks, drawing on existing data, accompanied by a single-number\nperformance metric, and an analysis toolkit. However, it improves upon GLUE in several ways:\n\n\nMore challenging tasks: SuperGLUE retains the two hardest tasks in GLUE. The remaining tasks were identified from those submitted to an open call for task proposals and were selected based on difficulty for current NLP approaches.\nMore diverse task formats: The task formats in GLUE are limited to sentence- and sentence-pair classification. The authors expand the set of task formats in SuperGLUE to include\ncoreference resolution and question answering (QA).\nComprehensive human baselines: the authors include human performance estimates for all benchmark tasks, which verify that substantial headroom exists between a strong BERT-based baseline and human performance.\nImproved code support: SuperGLUE is distributed with a new, modular toolkit for work on pretraining, multi-task learning, and transfer learning in NLP, built around standard tools including PyTorch (Paszke et al., 2017) and AllenNLP (Gardner et al., 2017).\nRefined usage rules: The conditions for inclusion on the SuperGLUE leaderboard were revamped to ensure fair competition, an informative leaderboard, and full credit\nassignment to data and task creators." } ]
I want to train a multi-person pose estimation system for 3
3d multi-person pose estimation images
2019
[ "PoseTrack", "LSP", "V-COCO", "Drive&Act", "UMDFaces", "COCO-WholeBody" ]
[ "MuPoTS-3D", "COCO" ]
[ { "dkey": "MuPoTS-3D", "dval": "MuPoTs-3D (Multi-person Pose estimation Test Set in 3D) is a dataset for pose estimation composed of more than 8,000 frames from 20 real-world scenes with up to three subjects. The poses are annotated with a 14-point skeleton model." }, { "dkey": "COCO", "dval": "The MS COCO (Microsoft Common Objects in Context) dataset is a large-scale object detection, segmentation, key-point detection, and captioning dataset. The dataset consists of 328K images.\n\nSplits:\nThe first version of MS COCO dataset was released in 2014. It contains 164K images split into training (83K), validation (41K) and test (41K) sets. In 2015 additional test set of 81K images was released, including all the previous test images and 40K new images.\n\nBased on community feedback, in 2017 the training/validation split was changed from 83K/41K to 118K/5K. The new split uses the same images and annotations. The 2017 test set is a subset of 41K images of the 2015 test set. Additionally, the 2017 release contains a new unannotated dataset of 123K images.\n\nAnnotations:\nThe dataset has annotations for\n\n\nobject detection: bounding boxes and per-instance segmentation masks with 80 object categories,\ncaptioning: natural language descriptions of the images (see MS COCO Captions),\nkeypoints detection: containing more than 200,000 images and 250,000 person instances labeled with keypoints (17 possible keypoints, such as left eye, nose, right hip, right ankle),\nstuff image segmentation – per-pixel segmentation masks with 91 stuff categories, such as grass, wall, sky (see MS COCO Stuff),\npanoptic: full scene segmentation, with 80 thing categories (such as person, bicycle, elephant) and a subset of 91 stuff categories (grass, sky, road),\ndense pose: more than 39,000 images and 56,000 person instances labeled with DensePose annotations – each labeled person is annotated with an instance id and a mapping between image pixels that belong to that person body and a template 3D model.\nThe annotations are publicly available only for training and validation images." }, { "dkey": "PoseTrack", "dval": "The PoseTrack dataset is a large-scale benchmark for multi-person pose estimation and tracking in videos. It requires not only pose estimation in single frames, but also temporal tracking across frames. It contains 514 videos including 66,374 frames in total, split into 300, 50 and 208 videos for training, validation and test set respectively. For training videos, 30 frames from the center are annotated. For validation and test videos, besides 30 frames from the center, every fourth frame is also annotated for evaluating long range articulated tracking. The annotations include 15 body keypoints location, a unique person id and a head bounding box for each person instance." }, { "dkey": "LSP", "dval": "The Leeds Sports Pose (LSP) dataset is widely used as the benchmark for human pose estimation. The original LSP dataset contains 2,000 images of sportspersons gathered from Flickr, 1000 for training and 1000 for testing. Each image is annotated with 14 joint locations, where left and right joints are consistently labelled from a person-centric viewpoint. The extended LSP dataset contains additional 10,000 images labeled for training.\n\nImage: Sumer et al" }, { "dkey": "V-COCO", "dval": "Verbs in COCO (V-COCO) is a dataset that builds off COCO for human-object interaction detection. V-COCO provides 10,346 images (2,533 for training, 2,867 for validating and 4,946 for testing) and 16,199 person instances. 
Each person has annotations for 29 action categories and there are no interaction labels including objects." }, { "dkey": "Drive&Act", "dval": "The Drive&Act dataset is a state of the art multi modal benchmark for driver behavior recognition. The dataset includes 3D skeletons in addition to frame-wise hierarchical labels of 9.6 Million frames captured by 6 different views and 3 modalities (RGB, IR and depth).\n\nIt offers following key features:\n\n\n12h of video data in 29 long sequences\nCalibrated multi view camera system with 5 views\nMulti modal videos: NIR, Depth and Color data\nMarkerless motion capture: 3D Body Pose and Head Pose\nModel of the static interior of the car\n83 manually annotated hierarchical activity labels:\nLevel 1: Long running tasks (12)\nLevel 2: Semantic actions (34)\nLevel 3: Object Interaction tripplets [action|object|location] (6|17|14)" }, { "dkey": "UMDFaces", "dval": "UMDFaces is a face dataset divided into two parts:\n\n\nStill Images - 367,888 face annotations for 8,277 subjects.\nVideo Frames - Over 3.7 million annotated video frames from over 22,000 videos of 3100 subjects.\n\nPart 1 - Still Images\n\nThe dataset contains 367,888 face annotations for 8,277 subjects divided into 3 batches. The annotations contain human curated bounding boxes for faces and estimated pose (yaw, pitch, and roll), locations of twenty-one keypoints, and gender information generated by a pre-trained neural network.\n\nPart 2 - Video Frames\n\nThe second part contains 3,735,476 annotated video frames extracted from a total of 22,075 for 3,107 subjects. The annotations contain the estimated pose (yaw, pitch, and roll), locations of twenty-one keypoints, and gender information generated by a pre-trained neural network." }, { "dkey": "COCO-WholeBody", "dval": "COCO-WholeBody is an extension of COCO dataset with whole-body annotations. There are 4 types of bounding boxes (person box, face box, left-hand box, and right-hand box) and 133 keypoints (17 for body, 6 for feet, 68 for face and 42 for hands) annotations for each person in the image." } ]
A framework to fuse low-level hand-crafted and mid-level attribute based deep features for
person re-identification images
2018
[ "VIsual PERception (VIPER)", "PA-100K", "HowTo100M", "FC100", "Obstacle Tower" ]
[ "VIPeR", "Market-1501" ]
[ { "dkey": "VIPeR", "dval": "The Viewpoint Invariant Pedestrian Recognition (VIPeR) dataset includes 632 people and two outdoor cameras under different viewpoints and light conditions. Each person has one image per camera and each image has been scaled to be 128×48 pixels. It provides the pose angle of each person as 0° (front), 45°, 90° (right), 135°, and 180° (back)." }, { "dkey": "Market-1501", "dval": "Market-1501 is a large-scale public benchmark dataset for person re-identification. It contains 1501 identities which are captured by six different cameras, and 32,668 pedestrian image bounding-boxes obtained using the Deformable Part Models pedestrian detector. Each person has 3.6 images on average at each viewpoint. The dataset is split into two parts: 750 identities are utilized for training and the remaining 751 identities are used for testing. In the official testing protocol 3,368 query images are selected as probe set to find the correct match across 19,732 reference gallery images." }, { "dkey": "VIsual PERception (VIPER)", "dval": "VIPER is a benchmark suite for visual perception. The benchmark is based on more than 250K high-resolution video frames, all annotated with ground-truth data for both low-level and high-level vision tasks, including optical flow, semantic instance segmentation, object detection and tracking, object-level 3D scene layout, and visual odometry. Ground-truth data for all tasks is available for every frame. The data was collected while driving, riding, and walking a total of 184 kilometers in diverse ambient conditions in a realistic virtual world." }, { "dkey": "PA-100K", "dval": "PA-100K is a recent-proposed large pedestrian attribute dataset, with 100,000 images in total collected from outdoor surveillance cameras. It is split into 80,000 images for the training set, and 10,000 for the validation set and 10,000 for the test set. This dataset is labeled by 26 binary attributes. The common features existing in both selected dataset is that the images are blurry due to the relatively low resolution and the positive ratio of each binary attribute is low." }, { "dkey": "HowTo100M", "dval": "HowTo100M is a large-scale dataset of narrated videos with an emphasis on instructional videos where content creators teach complex tasks with an explicit intention of explaining the visual content on screen. HowTo100M features a total of:\n\n\n136M video clips with captions sourced from 1.2M Youtube videos (15 years of video)\n23k activities from domains such as cooking, hand crafting, personal care, gardening or fitness\n\nEach video is associated with a narration available as subtitles automatically downloaded from Youtube." }, { "dkey": "FC100", "dval": "The FC100 dataset (Fewshot-CIFAR100) is a newly split dataset based on CIFAR-100 for few-shot learning. It contains 20 high-level categories which are divided into 12, 4, 4 categories for training, validation and test. There are 60, 20, 20 low-level classes in the corresponding split containing 600 images of size 32 × 32 per class. Smaller image size makes it more challenging for few-shot learning." }, { "dkey": "Obstacle Tower", "dval": "Obstacle Tower is a high fidelity, 3D, 3rd person, procedurally generated environment for reinforcement learning. An agent playing Obstacle Tower must learn to solve both low-level control and high-level planning problems in tandem while learning from pixels and a sparse reward signal. 
Unlike other benchmarks such as the Arcade Learning Environment, evaluation of agent performance in Obstacle Tower is based on an agent’s ability to perform well on unseen instances of the environment." } ]
I am interested in object detection and object recognition from images and text.
object detection images text
2019
[ "COCO-Tasks", "UAVDT", "MOT15", "MOT17", "PASCAL VOC 2007", "COVERAGE" ]
[ "ImageNet", "CIFAR-10" ]
[ { "dkey": "ImageNet", "dval": "The ImageNet dataset contains 14,197,122 annotated images according to the WordNet hierarchy. Since 2010 the dataset is used in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), a benchmark in image classification and object detection.\nThe publicly released dataset contains a set of manually annotated training images. A set of test images is also released, with the manual annotations withheld.\nILSVRC annotations fall into one of two categories: (1) image-level annotation of a binary label for the presence or absence of an object class in the image, e.g., “there are cars in this image” but “there are no tigers,” and (2) object-level annotation of a tight bounding box and class label around an object instance in the image, e.g., “there is a screwdriver centered at position (20,25) with width of 50 pixels and height of 30 pixels”.\nThe ImageNet project does not own the copyright of the images, therefore only thumbnails and URLs of images are provided.\n\n\nTotal number of non-empty WordNet synsets: 21841\nTotal number of images: 14197122\nNumber of images with bounding box annotations: 1,034,908\nNumber of synsets with SIFT features: 1000\nNumber of images with SIFT features: 1.2 million" }, { "dkey": "CIFAR-10", "dval": "The CIFAR-10 dataset (Canadian Institute for Advanced Research, 10 classes) is a subset of the Tiny Images dataset and consists of 60000 32x32 color images. The images are labelled with one of 10 mutually exclusive classes: airplane, automobile (but not truck or pickup truck), bird, cat, deer, dog, frog, horse, ship, and truck (but not pickup truck). There are 6000 images per class with 5000 training and 1000 testing images per class.\n\nThe criteria for deciding whether an image belongs to a class were as follows:\n\n\nThe class name should be high on the list of likely answers to the question “What is in this picture?”\nThe image should be photo-realistic. Labelers were instructed to reject line drawings.\nThe image should contain only one prominent instance of the object to which the class refers.\nThe object may be partially occluded or seen from an unusual viewpoint as long as its identity is still clear to the labeler." }, { "dkey": "COCO-Tasks", "dval": "Comprises about 40,000 images where the most suitable objects for 14 tasks have been annotated." }, { "dkey": "UAVDT", "dval": "UAVDT is a large scale challenging UAV Detection and Tracking benchmark (i.e., about 80, 000 representative frames from 10 hours raw videos) for 3 important fundamental tasks, i.e., object DETection\n(DET), Single Object Tracking (SOT) and Multiple Object Tracking (MOT).\n\nThe dataset is captured by UAVs in various complex scenarios. The objects of\ninterest in this benchmark are vehicles. The frames are manually annotated with bounding boxes and some useful attributes, e.g., vehicle category and occlusion. \n\nThe UAVDT benchmark consists of 100 video sequences, which are selected\nfrom over 10 hours of videos taken with an UAV platform at a number of locations in urban areas, representing various common scenes including squares, arterial streets, toll stations, highways, crossings and T-junctions. The videos\nare recorded at 30 frames per seconds (fps), with the JPEG image resolution of 1080 × 540 pixels." }, { "dkey": "MOT15", "dval": "MOT2015 is a dataset for multiple object tracking. 
It contains 11 different indoor and outdoor scenes of public places with pedestrians as the objects of interest, where camera motion, camera angle and imaging condition vary greatly. The dataset provides detections generated by the ACF-based detector." }, { "dkey": "MOT17", "dval": "The Multiple Object Tracking 17 (MOT17) dataset is a dataset for multiple object tracking. Similar to its previous version MOT16, this challenge contains seven different indoor and outdoor scenes of public places with pedestrians as the objects of interest. A video for each scene is divided into two clips, one for training and the other for testing. The dataset provides detections of objects in the video frames with three detectors, namely SDP, Faster-RCNN and DPM. The challenge accepts both on-line and off-line tracking approaches, where the latter are allowed to use the future video frames to predict tracks." }, { "dkey": "PASCAL VOC 2007", "dval": "PASCAL VOC 2007 is a dataset for image recognition. The twenty object classes that have been selected are:\n\nPerson: person\nAnimal: bird, cat, cow, dog, horse, sheep\nVehicle: aeroplane, bicycle, boat, bus, car, motorbike, train\nIndoor: bottle, chair, dining table, potted plant, sofa, tv/monitor\n\nThe dataset can be used for image classification and object detection tasks." }, { "dkey": "COVERAGE", "dval": "COVERAGE contains copymove forged (CMFD) images and their originals with similar but genuine objects (SGOs). COVERAGE is designed to highlight and address tamper detection ambiguity of popular methods, caused by self-similarity within natural images. In COVERAGE, forged–original pairs are annotated with (i) the duplicated and forged region masks, and (ii) the tampering factor/similarity metric. For benchmarking, forgery quality is evaluated using (i) computer vision-based methods, and (ii) human detection performance." } ]
I want to train a supervised model for object recognition.
object recognition images
2020
[ "SNIPS", "ConvAI2", "Libri-Light", "EPIC-KITCHENS-100", "COCO-Tasks", "CLUECorpus2020" ]
[ "COCO", "DeepFashion" ]
[ { "dkey": "COCO", "dval": "The MS COCO (Microsoft Common Objects in Context) dataset is a large-scale object detection, segmentation, key-point detection, and captioning dataset. The dataset consists of 328K images.\n\nSplits:\nThe first version of MS COCO dataset was released in 2014. It contains 164K images split into training (83K), validation (41K) and test (41K) sets. In 2015 additional test set of 81K images was released, including all the previous test images and 40K new images.\n\nBased on community feedback, in 2017 the training/validation split was changed from 83K/41K to 118K/5K. The new split uses the same images and annotations. The 2017 test set is a subset of 41K images of the 2015 test set. Additionally, the 2017 release contains a new unannotated dataset of 123K images.\n\nAnnotations:\nThe dataset has annotations for\n\n\nobject detection: bounding boxes and per-instance segmentation masks with 80 object categories,\ncaptioning: natural language descriptions of the images (see MS COCO Captions),\nkeypoints detection: containing more than 200,000 images and 250,000 person instances labeled with keypoints (17 possible keypoints, such as left eye, nose, right hip, right ankle),\nstuff image segmentation – per-pixel segmentation masks with 91 stuff categories, such as grass, wall, sky (see MS COCO Stuff),\npanoptic: full scene segmentation, with 80 thing categories (such as person, bicycle, elephant) and a subset of 91 stuff categories (grass, sky, road),\ndense pose: more than 39,000 images and 56,000 person instances labeled with DensePose annotations – each labeled person is annotated with an instance id and a mapping between image pixels that belong to that person body and a template 3D model.\nThe annotations are publicly available only for training and validation images." }, { "dkey": "DeepFashion", "dval": "DeepFashion is a dataset containing around 800K diverse fashion images with their rich annotations (46 categories, 1,000 descriptive attributes, bounding boxes and landmark information) ranging from well-posed product images to real-world-like consumer photos." }, { "dkey": "SNIPS", "dval": "The SNIPS Natural Language Understanding benchmark is a dataset of over 16,000 crowdsourced queries distributed among 7 user intents of various complexity:\n\n\nSearchCreativeWork (e.g. Find me the I, Robot television show),\nGetWeather (e.g. Is it windy in Boston, MA right now?),\nBookRestaurant (e.g. I want to book a highly rated restaurant in Paris tomorrow night),\nPlayMusic (e.g. Play the last track from Beyoncé off Spotify),\nAddToPlaylist (e.g. Add Diamonds to my roadtrip playlist),\nRateBook (e.g. Give 6 stars to Of Mice and Men),\nSearchScreeningEvent (e.g. Check the showtimes for Wonder Woman in Paris).\nThe training set contains of 13,084 utterances, the validation set and the test set contain 700 utterances each, with 100 queries per intent." }, { "dkey": "ConvAI2", "dval": "The ConvAI2 NeurIPS competition aimed at finding approaches to creating high-quality dialogue agents capable of meaningful open domain conversation. The ConvAI2 dataset for training models is based on the PERSONA-CHAT dataset. The speaker pairs each have assigned profiles coming from a set of 1155 possible personas (at training time), each consisting of at least 5 profile sentences, setting aside 100 never seen before personas for validation. 
As the original PERSONA-CHAT test set was released, a new hidden test set consisted of 100 new personas and over 1,015 dialogs was created by crowdsourced workers.\n\nTo avoid modeling that takes advantage of trivial word overlap, additional rewritten sets of the same train and test personas were crowdsourced, with related sentences that are rephrases, generalizations or specializations, rendering the task much more challenging. For example “I just got my nails done” is revised as “I love to pamper myself on a regular basis” and “I am on a diet now” is revised as “I need to lose weight.”\n\nThe training, validation and hidden test sets consists of 17,878, 1,000 and 1,015 dialogues, respectively." }, { "dkey": "Libri-Light", "dval": "Libri-Light is a collection of spoken English audio suitable for training speech recognition systems under limited or no supervision. It is derived from open-source audio books from the LibriVox project. It contains over 60K hours of audio." }, { "dkey": "EPIC-KITCHENS-100", "dval": "This paper introduces the pipeline to scale the largest dataset in egocentric vision EPIC-KITCHENS. The effort culminates in EPIC-KITCHENS-100, a collection of 100 hours, 20M frames, 90K actions in 700 variable-length videos, capturing long-term unscripted activities in 45 environments, using head-mounted cameras. Compared to its previous version (EPIC-KITCHENS-55), EPIC-KITCHENS-100 has been annotated using a novel pipeline that allows denser (54% more actions per minute) and more complete annotations of fine-grained actions (+128% more action segments). This collection also enables evaluating the \"test of time\" - i.e. whether models trained on data collected in 2018 can generalise to new footage collected under the same hypotheses albeit \"two years on\".\nThe dataset is aligned with 6 challenges: action recognition (full and weak supervision), action detection, action anticipation, cross-modal retrieval (from captions), as well as unsupervised domain adaptation for action recognition. For each challenge, we define the task, provide baselines and evaluation metrics." }, { "dkey": "COCO-Tasks", "dval": "Comprises about 40,000 images where the most suitable objects for 14 tasks have been annotated." }, { "dkey": "CLUECorpus2020", "dval": "CLUECorpus2020 is a large-scale corpus that can be used directly for self-supervised learning such as pre-training of a language model, or language generation. It has 100G raw corpus with 35 billion Chinese characters, which is retrieved from Common Crawl." } ]
A facial landmark detection model with a robust and accurate face alignment algorithm, which can be used in
facial landmark detection images
2019
[ "LS3D-W", "AFLW2000-3D", "FaceForensics", "SoF", "WFLW" ]
[ "COFW", "AFW", "AFLW", "300W" ]
[ { "dkey": "COFW", "dval": "The Caltech Occluded Faces in the Wild (COFW) dataset is designed to present faces in real-world conditions. Faces show large variations in shape and occlusions due to differences in pose, expression, use of accessories such as sunglasses and hats and interactions with objects (e.g. food, hands, microphones,
etc.). All images were hand annotated using the same 29 landmarks as in LFPW. Both the landmark positions as well as their occluded/unoccluded state were annotated. The faces are occluded to different degrees, with large variations in the type of occlusions encountered. COFW has an average occlusion of over 23." }, { "dkey": "AFW", "dval": "AFW (Annotated Faces in the Wild) is a face detection dataset that contains 205 images with 468 faces. Each face image is labeled with at most 6 landmarks with visibility labels, as well as a bounding box." }, { "dkey": "AFLW", "dval": "The Annotated Facial Landmarks in the Wild (AFLW) is a large-scale collection of annotated face images gathered from Flickr, exhibiting a large variety in appearance (e.g., pose, expression, ethnicity, age, gender) as well as general imaging and environmental conditions. In total about 25K faces are annotated with up to 21 landmarks per image." }, { "dkey": "300W", "dval": "The 300-W is a face dataset that consists of 300 Indoor and 300 Outdoor in-the-wild images. It covers a large variation of identity, expression, illumination conditions, pose, occlusion and face size. The images were downloaded from google.com by making queries such as “party”, “conference”, “protests”, “football” and “celebrities”. Compared to the rest of in-the-wild datasets, the 300-W database contains a larger percentage of partially-occluded images and covers more expressions than the common “neutral” or “smile”, such as “surprise” or “scream”.\nImages were annotated with the 68-point mark-up using a semi-automatic methodology. The images of the database were carefully selected so that they represent a characteristic sample of challenging but natural face instances under totally unconstrained conditions. Thus, methods that achieve accurate performance on the 300-W database can demonstrate the same accuracy in most realistic cases.\nMany images of the database contain more than one annotated faces (293 images with 1 face, 53 images with 2 faces and 53 images with [3, 7] faces). Consequently, the database consists of 600 annotated face instances, but 399 unique images. Finally, there is a large variety of face sizes. Specifically, 49.3% of the faces have size in the range [48.6k, 2.0M] and the overall mean size is 85k (about 292 × 292) pixels." }, { "dkey": "LS3D-W", "dval": "A 3D facial landmark dataset of around 230,000 images." }, { "dkey": "AFLW2000-3D", "dval": "AFLW2000-3D is a dataset of 2000 images that have been annotated with image-level 68-point 3D facial landmarks. This dataset is used for evaluation of 3D facial landmark detection models. The head poses are very diverse and often hard to be detected by a CNN-based face detector." }, { "dkey": "FaceForensics", "dval": "FaceForensics is a video dataset consisting of more than 500,000 frames containing faces from 1004 videos that can be used to study image or video forgeries. All videos are downloaded from Youtube and are cut down to short continuous clips that contain mostly frontal faces. This dataset has two versions:\n\n\n\nSource-to-Target: where the authors reenact over 1000 videos with new facial expressions extracted from other videos, which e.g. can be used to train a classifier to detect fake images or videos.\n\n\n\nSelfreenactment: where the authors use Face2Face to reenact the facial expressions of videos with their own facial expressions as input to get pairs of videos, which e.g. can be used to train supervised generative refinement models." 
}, { "dkey": "SoF", "dval": "The Specs on Faces (SoF) dataset, a collection of 42,592 (2,662×16) images for 112 persons (66 males and 46 females) who wear glasses under different illumination conditions. The dataset is FREE for reasonable academic fair use. The dataset presents a new challenge regarding face detection and recognition. It is focused on two challenges: harsh illumination environments and face occlusions, which highly affect face detection, recognition, and classification. The glasses are the common natural occlusion in all images of the dataset. However, there are two more synthetic occlusions (nose and mouth) added to each image. Moreover, three image filters, that may evade face detectors and facial recognition systems, were applied to each image. All generated images are categorized into three levels of difficulty (easy, medium, and hard). That enlarges the number of images to be 42,592 images (26,112 male images and 16,480 female images). There is metadata for each image that contains many information such as: the subject ID, facial landmarks, face and glasses rectangles, gender and age labels, year that the photo was taken, facial emotion, glasses type, and more." }, { "dkey": "WFLW", "dval": "The Wider Facial Landmarks in the Wild or WFLW database contains 10000 faces (7500 for training and 2500 for testing) with 98 annotated landmarks. This database also features rich attribute annotations in terms of occlusion, head pose, make-up, illumination, blur and expressions." } ]
Our system for biomedical question answering.
biomedical question answering text
2020
[ "QUASAR-T", "QUASAR-S", "SQuAD-shifts", "CoQA", "TweetQA", "HotpotQA" ]
[ "BioASQ", "SQuAD" ]
[ { "dkey": "BioASQ", "dval": "BioASQ is a question answering dataset. Instances in the BioASQ dataset are composed of a question (Q), human-annotated answers (A), and the relevant contexts (C) (also called snippets)." }, { "dkey": "SQuAD", "dval": "The Stanford Question Answering Dataset (SQuAD) is a collection of question-answer pairs derived from Wikipedia articles. In SQuAD, the correct answers of questions can be any sequence of tokens in the given text. Because the questions and answers are produced by humans through crowdsourcing, it is more diverse than some other question-answering datasets. SQuAD 1.1 contains 107,785 question-answer pairs on 536 articles. SQuAD2.0 (open-domain SQuAD, SQuAD-Open), the latest version, combines the 100,000 questions in SQuAD1.1 with over 50,000 un-answerable questions written adversarially by crowdworkers in forms that are similar to the answerable ones." }, { "dkey": "QUASAR-T", "dval": "QUASAR-T is a large-scale dataset aimed at evaluating systems designed to comprehend a natural language query and extract its answer from a large corpus of text. It consists of 43,013 open-domain trivia questions and their answers obtained from various internet sources. ClueWeb09 serves as the background corpus for extracting these answers. The answers to these questions are free-form spans of text, though most are noun phrases." }, { "dkey": "QUASAR-S", "dval": "QUASAR-S is a large-scale dataset aimed at evaluating systems designed to comprehend a natural language query and extract its answer from a large corpus of text. It consists of 37,362 cloze-style (fill-in-the-gap) queries constructed from definitions of software entity tags on the popular website Stack Overflow. The posts and comments on the website serve as the background corpus for answering the cloze questions. The answer to each question is restricted to be another software entity, from an output vocabulary of 4874 entities." }, { "dkey": "SQuAD-shifts", "dval": "Provides four new test sets for the Stanford Question Answering Dataset (SQuAD) and evaluate the ability of question-answering systems to generalize to new data." }, { "dkey": "CoQA", "dval": "CoQA is a large-scale dataset for building Conversational Question Answering systems. The goal of the CoQA challenge is to measure the ability of machines to understand a text passage and answer a series of interconnected questions that appear in a conversation.\n\nCoQA contains 127,000+ questions with answers collected from 8000+ conversations. Each conversation is collected by pairing two crowdworkers to chat about a passage in the form of questions and answers. The unique features of CoQA include 1) the questions are conversational; 2) the answers can be free-form text; 3) each answer also comes with an evidence subsequence highlighted in the passage; and 4) the passages are collected from seven diverse domains. CoQA has a lot of challenging phenomena not present in existing reading comprehension datasets, e.g., coreference and pragmatic reasoning." }, { "dkey": "TweetQA", "dval": "With social media becoming increasingly popular on which lots of news and real-time events are reported, developing automated question answering systems is critical to the effectiveness of many applications that rely on real-time knowledge. While previous question answering (QA) datasets have concentrated on formal text like news and Wikipedia, the first large-scale dataset for QA over social media data is presented. 
To make sure the tweets are meaningful and contain interesting information, tweets used by journalists to write news articles are gathered. Then human annotators are asked to write questions and answers about these tweets. Unlike other QA datasets like SQuAD, in which the answers are extractive, the answers are allowed to be abstractive. The task requires the model to read a short tweet and a question and output a text phrase (which does not need to be in the tweet) as the answer." }, { "dkey": "HotpotQA", "dval": "HotpotQA is a question answering dataset collected on the English Wikipedia, containing about 113K crowd-sourced questions that are constructed to require the introduction paragraphs of two Wikipedia articles to answer. Each question in the dataset comes with the two gold paragraphs, as well as a list of sentences in these paragraphs that crowdworkers identify as supporting facts necessary to answer the question. \n\nA diverse range of reasoning strategies are featured in HotpotQA, including questions involving missing entities in the question, intersection questions (What satisfies property A and property B?), and comparison questions, where two entities are compared by a common attribute, among others. In the few-document distractor setting, the QA models are given ten paragraphs in which the gold paragraphs are guaranteed to be found; in the open-domain fullwiki setting, the models are only given the question and the entire Wikipedia. Models are evaluated on their answer accuracy and explainability, where the former is measured as overlap between the predicted and gold answers with exact match (EM) and unigram F1, and the latter concerns how well the predicted supporting fact sentences match human annotation (Supporting Fact EM/F1). A joint metric is also reported on this dataset, which encourages systems to perform well on both tasks simultaneously." } ]
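The HotpotQA entry above scores answers with exact match and unigram F1 without spelling the metric out. Below is a minimal sketch of SQuAD/HotpotQA-style unigram F1; token normalization is simplified (the official evaluation scripts additionally strip punctuation and articles), so treat it as an illustration rather than the reference implementation.

    from collections import Counter

    def unigram_f1(prediction, gold):
        # Whitespace tokenization after lowercasing; official scripts also
        # drop punctuation and articles before comparing tokens.
        pred_tokens = prediction.lower().split()
        gold_tokens = gold.lower().split()
        overlap = sum((Counter(pred_tokens) & Counter(gold_tokens)).values())
        if overlap == 0:
            return 0.0
        precision = overlap / len(pred_tokens)
        recall = overlap / len(gold_tokens)
        return 2 * precision * recall / (precision + recall)

    print(unigram_f1("the Eiffel Tower", "Eiffel Tower"))  # 0.8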
We propose a learning-based method to detect image manipulations. The proposed method learns a forensic
image forensics images
2,018
[ "FaceForensics++", "DeeperForensics-1.0", "UASOL", "REDS" ]
[ "FaceForensics", "CelebA" ]
[ { "dkey": "FaceForensics", "dval": "FaceForensics is a video dataset consisting of more than 500,000 frames containing faces from 1004 videos that can be used to study image or video forgeries. All videos are downloaded from Youtube and are cut down to short continuous clips that contain mostly frontal faces. This dataset has two versions:\n\n\n\nSource-to-Target: where the authors reenact over 1000 videos with new facial expressions extracted from other videos, which e.g. can be used to train a classifier to detect fake images or videos.\n\n\n\nSelfreenactment: where the authors use Face2Face to reenact the facial expressions of videos with their own facial expressions as input to get pairs of videos, which e.g. can be used to train supervised generative refinement models." }, { "dkey": "CelebA", "dval": "CelebFaces Attributes dataset contains 202,599 face images of the size 178×218 from 10,177 celebrities, each annotated with 40 binary labels indicating facial attributes like hair color, gender and age." }, { "dkey": "FaceForensics++", "dval": "FaceForensics++ is a forensics dataset consisting of 1000 original video sequences that have been manipulated with four automated face manipulation methods: Deepfakes, Face2Face, FaceSwap and NeuralTextures. The data has been sourced from 977 youtube videos and all videos contain a trackable mostly frontal face without occlusions which enables automated tampering methods to generate realistic forgeries." }, { "dkey": "DeeperForensics-1.0", "dval": "DeeperForensics-1.0 represents the largest face forgery detection dataset by far, with 60,000 videos constituted by a total of 17.6 million frames, 10 times larger than existing datasets of the same kind. The full dataset includes 48,475 source videos and 11,000 manipulated videos. The source videos are collected on 100 paid and consented actors from 26 countries, and the manipulated videos are generated by a newly proposed many-to-many end-to-end face swapping method, DF-VAE. 7 types of real-world perturbations at 5 intensity levels are employed to ensure a larger scale and higher diversity." }, { "dkey": "UASOL", "dval": "The UASOL an RGB-D stereo dataset, that contains 160902 frames, filmed at 33 different scenes, each with between 2 k and 10 k frames. The frames show different paths from the perspective of a pedestrian, including sidewalks, trails, roads, etc. The images were extracted from video files with 15 fps at HD2K resolution with a size of 2280 × 1282 pixels. The dataset also provides a GPS geolocalization tag for each second of the sequences and reflects different climatological conditions. It also involved up to 4 different persons filming the dataset at different moments of the day.\n\nWe propose a train, validation and test split to train the network. \nAdditionally, we introduce a subset of 676 pairs of RGB Stereo images and their respective depth, which we extracted randomly from the entire dataset. This given test set is introduced to make comparability possible between the different methods trained with the dataset." }, { "dkey": "REDS", "dval": "The realistic and dynamic scenes (REDS) dataset was proposed in the NTIRE19 Challenge. The dataset is composed of 300 video sequences with resolution of 720×1,280, and each video has 100 frames, where the training set, the validation set and the testing set have 240, 30 and 30 videos, respectively" } ]
A novel extension of the matched filter (MF) approach for retinal blood vessel extraction.
vessel extraction retinal images
2,010
[ "ROSE", "IntrA", "ORVS", "HRF" ]
[ "STARE", "DRIVE" ]
[ { "dkey": "STARE", "dval": "The STARE (Structured Analysis of the Retina) dataset is a dataset for retinal vessel segmentation. It contains 20 equal-sized (700×605) color fundus images. For each image, two groups of annotations are provided.." }, { "dkey": "DRIVE", "dval": "The Digital Retinal Images for Vessel Extraction (DRIVE) dataset is a dataset for retinal vessel segmentation. It consists of a total of JPEG 40 color fundus images; including 7 abnormal pathology cases. The images were obtained from a diabetic retinopathy screening program in the Netherlands. The images were acquired using Canon CR5 non-mydriatic 3CCD camera with FOV equals to 45 degrees. Each image resolution is 584*565 pixels with eight bits per color channel (3 channels). \n\nThe set of 40 images was equally divided into 20 images for the training set and 20 images for the testing set. Inside both sets, for each image, there is circular field of view (FOV) mask of diameter that is approximately 540 pixels. Inside training set, for each image, one manual segmentation by an ophthalmological expert has been applied. Inside testing set, for each image, two manual segmentations have been applied by two different observers, where the first observer segmentation is accepted as the ground-truth for performance evaluation." }, { "dkey": "ROSE", "dval": "Retinal OCTA SEgmentation dataset (ROSE) consists of 229 OCTA images with vessel annotations at either centerline-level or pixel level." }, { "dkey": "IntrA", "dval": "IntrA is an open-access 3D intracranial aneurysm dataset that makes the application of points-based and mesh-based classification and segmentation models available. This dataset can be used to diagnose intracranial aneurysms and to extract the neck for a clipping operation in medicine and other areas of deep learning, such as normal estimation and surface reconstruction.\n\n103 3D models of entire brain vessels are collected by reconstructing scanned 2D MRA images of patients (the raw 2D MRA images are not published due to medical ethics).\n1909 blood vessel segments are generated automatically from the complete models, including 1694 healthy vessel segments and 215 aneurysm segments for diagnosis.\n116 aneurysm segments are divided and annotated manually by medical experts; the scale of each aneurysm segment is based on the need for a preoperative examination.\nGeodesic distance matrices are computed and included for each annotated 3D segment, because the expression of the geodesic distance is more accurate than Euclidean distance according to the shape of vessels." }, { "dkey": "ORVS", "dval": "The ORVS dataset has been newly established as a collaboration between the computer science and visual-science departments at the University of Calgary.\n\nThis dataset contains 49 images (42 training and seven testing images) collected from a clinic in Calgary-Canada. All images were acquired with a Zeiss Visucam 200 with 30 degrees field of view (FOV). The image size is 1444×1444 with 24 bits per pixel. Images and are stored in JPEG format with low compression, which is common in ophthalmology practice. All images were manually traced by an expert who a has been working in the field of retinal-image analysis and went through training. The expert was asked to label all pixels belonging to retinal vessels. The Windows Paint 3D tool was used to manually label the images." }, { "dkey": "HRF", "dval": "The HRF dataset is a dataset for retinal vessel segmentation which comprises 45 images and is organized as 15 subsets. 
Each subset contains one healthy fundus image, one image of patient with diabetic retinopathy and one glaucoma image. The image sizes are 3,304 x 2,336, with a training/testing image split of 22/23." } ]
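The query above extends the classic matched-filter (MF) idea for vessel enhancement: convolve the fundus image with a zero-mean kernel whose cross-section is an inverted Gaussian, at several orientations, and keep the strongest response per pixel. The sketch below illustrates that baseline only; sigma, kernel length and the number of orientations are illustrative choices, not values from the paper referenced by the query.

    import numpy as np
    from scipy.ndimage import convolve, rotate

    def matched_filter_response(image, sigma=2.0, length=9, n_angles=12):
        # Kernel cross-section: inverted Gaussian (vessels are darker than the background),
        # extended along the assumed vessel direction and made zero-mean.
        half = int(3 * sigma)
        x = np.arange(-half, half + 1)
        profile = -np.exp(-x ** 2 / (2 * sigma ** 2))
        kernel = np.tile(profile, (length, 1))
        kernel -= kernel.mean()
        # Rotate the kernel over a set of orientations and keep the strongest response.
        angles = np.arange(0, 180, 180 / n_angles)
        responses = [convolve(image, rotate(kernel, a, reshape=False, order=1)) for a in angles]
        return np.max(responses, axis=0)

Thresholding the returned response map would give a binary vessel mask comparable against the DRIVE and STARE annotations described above.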
We propose a unified end-to-end trainable neural network to address the semi-supervised video object
video object segmentation
2,019
[ "THEODORE", "DeeperForensics-1.0", "WikiReading", "Places", "EyeCar" ]
[ "DAVIS", "COCO" ]
[ { "dkey": "DAVIS", "dval": "The Densely Annotation Video Segmentation dataset (DAVIS) is a high quality and high resolution densely annotated video segmentation dataset under two resolutions, 480p and 1080p. There are 50 video sequences with 3455 densely annotated frames in pixel level. 30 videos with 2079 frames are for training and 20 videos with 1376 frames are for validation." }, { "dkey": "COCO", "dval": "The MS COCO (Microsoft Common Objects in Context) dataset is a large-scale object detection, segmentation, key-point detection, and captioning dataset. The dataset consists of 328K images.\n\nSplits:\nThe first version of MS COCO dataset was released in 2014. It contains 164K images split into training (83K), validation (41K) and test (41K) sets. In 2015 additional test set of 81K images was released, including all the previous test images and 40K new images.\n\nBased on community feedback, in 2017 the training/validation split was changed from 83K/41K to 118K/5K. The new split uses the same images and annotations. The 2017 test set is a subset of 41K images of the 2015 test set. Additionally, the 2017 release contains a new unannotated dataset of 123K images.\n\nAnnotations:\nThe dataset has annotations for\n\n\nobject detection: bounding boxes and per-instance segmentation masks with 80 object categories,\ncaptioning: natural language descriptions of the images (see MS COCO Captions),\nkeypoints detection: containing more than 200,000 images and 250,000 person instances labeled with keypoints (17 possible keypoints, such as left eye, nose, right hip, right ankle),\nstuff image segmentation – per-pixel segmentation masks with 91 stuff categories, such as grass, wall, sky (see MS COCO Stuff),\npanoptic: full scene segmentation, with 80 thing categories (such as person, bicycle, elephant) and a subset of 91 stuff categories (grass, sky, road),\ndense pose: more than 39,000 images and 56,000 person instances labeled with DensePose annotations – each labeled person is annotated with an instance id and a mapping between image pixels that belong to that person body and a template 3D model.\nThe annotations are publicly available only for training and validation images." }, { "dkey": "THEODORE", "dval": "Recent work about synthetic indoor datasets from perspective views has shown significant improvements of object detection results with Convolutional Neural Networks(CNNs). In this paper, we introduce THEODORE: a novel, large-scale indoor dataset containing 100,000 high- resolution diversified fisheye images with 14 classes. To this end, we create 3D virtual environments of living rooms, different human characters and interior textures. Beside capturing fisheye images from virtual environments we create annotations for semantic segmentation, instance masks and bounding boxes for object detection tasks. We compare our synthetic dataset to state of the art real-world datasets for omnidirectional images. Based on MS COCO weights, we show that our dataset is well suited for fine-tuning CNNs for object detection. Through a high generalization of our models by means of image synthesis and domain randomization we reach an AP up to 0.84 for class person on High-Definition Analytics dataset." }, { "dkey": "DeeperForensics-1.0", "dval": "DeeperForensics-1.0 represents the largest face forgery detection dataset by far, with 60,000 videos constituted by a total of 17.6 million frames, 10 times larger than existing datasets of the same kind. 
The full dataset includes 48,475 source videos and 11,000 manipulated videos. The source videos are collected on 100 paid and consented actors from 26 countries, and the manipulated videos are generated by a newly proposed many-to-many end-to-end face swapping method, DF-VAE. 7 types of real-world perturbations at 5 intensity levels are employed to ensure a larger scale and higher diversity." }, { "dkey": "WikiReading", "dval": "WikiReading is a large-scale natural language understanding task and publicly-available dataset with 18 million instances. The task is to predict textual values from the structured knowledge base Wikidata by reading the text of the corresponding Wikipedia articles. The task contains a rich variety of challenging classification and extraction sub-tasks, making it well-suited for end-to-end models such as deep neural networks (DNNs)." }, { "dkey": "Places", "dval": "The Places dataset is proposed for scene recognition and contains more than 2.5 million images covering more than 205 scene categories with more than 5,000 images per category." }, { "dkey": "EyeCar", "dval": "EyeCar is a dataset of driving videos of vehicles involved in rear-end collisions paired with eye fixation data captured from human subjects. It contains 21 front-view videos that were captured in various traffic, weather, and daylight conditions. Each video is 30 seconds in length and contains typical driving tasks (e.g., lane keeping, merging in, and braking) ending in rear-end collisions." } ]
A novel attention network for action recognition based on Hierarchical Multi-scale RNNs.
action recognition video
2,017
[ "Drive&Act", "HVU", "G3D", "PadChest", "BlendedMVS", "PKU-MMD", "ISIA Food-500" ]
[ "ImageNet", "HMDB51" ]
[ { "dkey": "ImageNet", "dval": "The ImageNet dataset contains 14,197,122 annotated images according to the WordNet hierarchy. Since 2010 the dataset is used in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), a benchmark in image classification and object detection.\nThe publicly released dataset contains a set of manually annotated training images. A set of test images is also released, with the manual annotations withheld.\nILSVRC annotations fall into one of two categories: (1) image-level annotation of a binary label for the presence or absence of an object class in the image, e.g., “there are cars in this image” but “there are no tigers,” and (2) object-level annotation of a tight bounding box and class label around an object instance in the image, e.g., “there is a screwdriver centered at position (20,25) with width of 50 pixels and height of 30 pixels”.\nThe ImageNet project does not own the copyright of the images, therefore only thumbnails and URLs of images are provided.\n\n\nTotal number of non-empty WordNet synsets: 21841\nTotal number of images: 14197122\nNumber of images with bounding box annotations: 1,034,908\nNumber of synsets with SIFT features: 1000\nNumber of images with SIFT features: 1.2 million" }, { "dkey": "HMDB51", "dval": "The HMDB51 dataset is a large collection of realistic videos from various sources, including movies and web videos. The dataset is composed of 6,766 video clips from 51 action categories (such as “jump”, “kiss” and “laugh”), with each category containing at least 101 clips. The original evaluation scheme uses three different training/testing splits. In each split, each action class has 70 clips for training and 30 clips for testing. The average accuracy over these three splits is used to measure the final performance." }, { "dkey": "Drive&Act", "dval": "The Drive&Act dataset is a state of the art multi modal benchmark for driver behavior recognition. The dataset includes 3D skeletons in addition to frame-wise hierarchical labels of 9.6 Million frames captured by 6 different views and 3 modalities (RGB, IR and depth).\n\nIt offers following key features:\n\n\n12h of video data in 29 long sequences\nCalibrated multi view camera system with 5 views\nMulti modal videos: NIR, Depth and Color data\nMarkerless motion capture: 3D Body Pose and Head Pose\nModel of the static interior of the car\n83 manually annotated hierarchical activity labels:\nLevel 1: Long running tasks (12)\nLevel 2: Semantic actions (34)\nLevel 3: Object Interaction tripplets [action|object|location] (6|17|14)" }, { "dkey": "HVU", "dval": "HVU is organized hierarchically in a semantic taxonomy that focuses on multi-label and multi-task video understanding as a comprehensive problem that encompasses the recognition of multiple semantic aspects in the dynamic scene. HVU contains approx.~572k videos in total with 9 million annotations for training, validation, and test set spanning over 3142 labels. HVU encompasses semantic aspects defined on categories of scenes, objects, actions, events, attributes, and concepts which naturally captures the real-world scenarios." }, { "dkey": "G3D", "dval": "The Gaming 3D Dataset (G3D) focuses on real-time action recognition in a gaming scenario. 
It contains 10 subjects performing 20 gaming actions: “punch right”, “punch left”, “kick right”, “kick left”, “defend”, “golf swing”, “tennis swing forehand”, “tennis swing backhand”, “tennis serve”, “throw bowling ball”, “aim and fire gun”, “walk”, “run”, “jump”, “climb”, “crouch”, “steer a car”, “wave”, “flap” and “clap”." }, { "dkey": "PadChest", "dval": "PadChest is a labeled large-scale, high resolution chest x-ray dataset for the automated exploration\nof medical images along with their associated reports. This dataset includes more than 160,000\nimages obtained from 67,000 patients that were interpreted and reported by radiologists at Hospital\nSan Juan Hospital (Spain) from 2009 to 2017, covering six different position views and additional\ninformation on image acquisition and patient demography. The reports were labeled with 174 different\nradiographic findings, 19 differential diagnoses and 104 anatomic locations organized as a hierarchical\ntaxonomy and mapped onto standard Unified Medical Language System (UMLS) terminology. Of\nthese reports, 27% were manually annotated by trained physicians and the remaining set was labeled\nusing a supervised method based on a recurrent neural network with attention mechanisms. The labels\ngenerated were then validated in an independent test set achieving a 0.93 Micro-F1 score." }, { "dkey": "BlendedMVS", "dval": "BlendedMVS is a novel large-scale dataset, to provide sufficient training ground truth for learning-based MVS. The dataset was created by applying a 3D reconstruction pipeline to recover high-quality textured meshes from images of well-selected scenes. Then, these mesh models were rendered to color images and depth maps." }, { "dkey": "PKU-MMD", "dval": "The PKU-MMD dataset is a large skeleton-based action detection dataset. It contains 1076 long untrimmed video sequences performed by 66 subjects in three camera views. 51 action categories are annotated, resulting almost 20,000 action instances and 5.4 million frames in total. Similar to NTU RGB+D, there are also two recommended evaluate protocols, i.e. cross-subject and cross-view." }, { "dkey": "ISIA Food-500", "dval": "Includes 500 categories from the list in the Wikipedia and 399,726 images, a more comprehensive food dataset that surpasses existing popular benchmark datasets by category coverage and data volume." } ]
We present a new deep point cloud rendering pipeline through multi-plane projections.
novel view synthesis point clouds
2,019
[ "TORCS", "Flightmare Simulator", "2D-3D Match Dataset", "Shiny dataset", "KITTI-Depth", "Completion3D", "RELLIS-3D" ]
[ "ScanNet", "Matterport3D" ]
[ { "dkey": "ScanNet", "dval": "ScanNet is an instance-level indoor RGB-D dataset that includes both 2D and 3D data. It is a collection of labeled voxels rather than points or objects. Up to now, ScanNet v2, the newest version of ScanNet, has collected 1513 annotated scans with an approximate 90% surface coverage. In the semantic segmentation task, this dataset is marked in 20 classes of annotated 3D voxelized objects." }, { "dkey": "Matterport3D", "dval": "The Matterport3D dataset is a large RGB-D dataset for scene understanding in indoor environments. It contains 10,800 panoramic views inside 90 real building-scale scenes, constructed from 194,400 RGB-D images. Each scene is a residential building consisting of multiple rooms and floor levels, and is annotated with surface construction, camera poses, and semantic segmentation." }, { "dkey": "TORCS", "dval": "TORCS (The Open Racing Car Simulator) is a driving simulator. It is capable of simulating the essential elements of vehicular dynamics such as mass, rotational inertia, collision, mechanics of suspensions, links and differentials, friction and aerodynamics. Physics simulation is simplified and is carried out through Euler integration of differential equations at a temporal discretization level of 0.002 seconds. The rendering pipeline is lightweight and based on OpenGL that can be turned off for faster training. TORCS offers a large variety of tracks and cars as free assets. It also provides a number of programmed robot cars with different levels of performance that can be used to benchmark the performance of human players and software driving agents. TORCS was built with the goal of developing Artificial Intelligence for vehicular control and has been used extensively by the machine learning community ever since its inception." }, { "dkey": "Flightmare Simulator", "dval": "Flightmare is composed of two main components: a configurable rendering engine built on Unity and a flexible physics engine for dynamics simulation. Those two components are totally decoupled and can run independently from each other. Flightmare comes with several desirable features: (i) a large multi-modal sensor suite, including an interface to extract the 3D point-cloud of the scene; (ii) an API for reinforcement learning which can simulate hundreds of quadrotors in parallel; and (iii) an integration with a virtual-reality headset for interaction with the simulated environment. Flightmare can be used for various applications, including path-planning, reinforcement learning, visual-inertial odometry, deep learning, human-robot interaction, etc." }, { "dkey": "2D-3D Match Dataset", "dval": "2D-3D Match Dataset is a new dataset of 2D-3D correspondences by leveraging the availability of several 3D datasets from RGB-D scans. Specifically, the data from SceneNN and 3DMatch are used. The training dataset consists of 110 RGB-D scans, of which 56 scenes are from SceneNN and 54 scenes are from 3DMatch. The 2D-3D correspondence data is generated as follows. Given a 3D point which is randomly sampled from a 3D point cloud, a set of 3D patches from different scanning views are extracted. To find a 2D-3D correspondence, for each 3D patch, its 3D position is re-projected into all RGB-D frames for which the point lies in the camera frustum, taking occlusion into account. The corresponding local 2D patches around the re-projected point are extracted. In total, around 1.4 millions 2D-3D correspondences are collected." 
}, { "dkey": "Shiny dataset", "dval": "The shiny folder contains 8 scenes with challenging view-dependent effects used in our paper. We also provide additional scenes in the shiny_extended folder. \nThe test images for each scene used in our paper consist of one of every eight images in alphabetical order.\n\nEach scene contains the following directory structure:\nscene/\n dense/\n cameras.bin\n images.bin\n points3D.bin\n project.ini\n images/\n image_name1.png\n image_name2.png\n ...\n image_nameN.png\n images_distort/\n image_name1.png\n image_name2.png\n ...\n image_nameN.png\n sparse/\n cameras.bin\n images.bin\n points3D.bin\n project.ini\n database.db\n hwf_cxcy.npy\n planes.txt\n poses_bounds.npy\n\n\ndense/ folder contains COLMAP's output [1] after the input images are undistorted.\nimages/ folder contains undistorted images. (We use these images in our experiments.)\nimages_distort/ folder contains raw images taken from a smartphone.\nsparse/ folder contains COLMAP's sparse reconstruction output [1].\n\nOur poses_bounds.npy is similar to the LLFF[2] file format with a slight modification. This file stores a Nx14 numpy array, where N is the number of cameras. Each row in this array is split into two parts of sizes 12 and 2. The first part, when reshaped into 3x4, represents the camera extrinsic (camera-to-world transformation), and the second part with two dimensions stores the distances from that point of view to the first and last planes (near, far). These distances are computed automatically based on the scene’s statistics using LLFF’s code. (For details on how these are computed, see this code) \n\nhwf_cxcy.npy stores the camera intrinsic (height, width, focal length, principal point x, principal point y) in a 1x5 numpy array.\n\nplanes.txt stores information about the MPI planes. The first two numbers are the distances from a reference camera to the first and last planes (near, far). The third number tells whether the planes are placed equidistantly in the depth space (0) or inverse depth space (1). The last number is the padding size in pixels on all four sides of each of the MPI planes. I.e., the total dimension of each plane is (H + 2 * padding, W + 2 * padding).\n\nReferences:\n\n\n[1]: COLMAP structure from motion (Schönberger and Frahm, 2016).\n[2]: Local Light Field Fusion: Practical View Synthesis with Prescriptive Sampling Guidelines (Mildenhall et al., 2019)." }, { "dkey": "KITTI-Depth", "dval": "The KITTI-Depth dataset includes depth maps from projected LiDAR point clouds that were matched against the depth estimation from the stereo cameras. The depth images are highly sparse with only 5% of the pixels available and the rest is missing. The dataset has 86k training images, 7k validation images, and 1k test set images on the benchmark server with no access to the ground truth." }, { "dkey": "Completion3D", "dval": "The Completion3D benchmark is a dataset for evaluating state-of-the-art 3D Object Point Cloud Completion methods. Ggiven a partial 3D object point cloud the goal is to infer a complete 3D point cloud for the object." }, { "dkey": "RELLIS-3D", "dval": "RELLIS-3D is a multi-modal dataset for off-road robotics. It was collected in an off-road environment containing annotations for 13,556 LiDAR scans and 6,235 images. The data was collected on the Rellis Campus of Texas A&M University and presents challenges to existing algorithms related to class imbalance and environmental topography. 
The dataset also provides full-stack sensor data in ROS bag format, including RGB camera images, LiDAR point clouds, a pair of stereo images, high-precision GPS measurement, and IMU data." } ]
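The Shiny entry above describes the poses_bounds.npy and hwf_cxcy.npy layouts in prose. Here is a minimal sketch of how one might unpack them, assuming exactly the layout stated there (the Nx14 pose/bounds array and the 1x5 intrinsics array); the scene path is a placeholder.

    import numpy as np

    scene = "path/to/scene"  # placeholder path to one Shiny scene folder

    # Nx14: the first 12 values per row form a 3x4 camera-to-world matrix,
    # the last 2 are the near/far plane distances for that view.
    poses_bounds = np.load(f"{scene}/poses_bounds.npy")
    poses = poses_bounds[:, :12].reshape(-1, 3, 4)   # per-view extrinsics
    near_far = poses_bounds[:, 12:]                   # per-view near/far distances

    # 1x5: height, width, focal length, principal point x, principal point y.
    h, w, focal, cx, cy = np.load(f"{scene}/hwf_cxcy.npy").reshape(-1)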
I want to learn a method for synthesizing realistic and diverse images without paired training images.
image-to-image translation images
2,020
[ "MuseScore", "DIV2K", "ACDC", "COVERAGE", "FaceForensics++", "GoPro" ]
[ "GTA5", "CelebA" ]
[ { "dkey": "GTA5", "dval": "The GTA5 dataset contains 24966 synthetic images with pixel level semantic annotation. The images have been rendered using the open-world video game Grand Theft Auto 5 and are all from the car perspective in the streets of American-style virtual cities. There are 19 semantic classes which are compatible with the ones of Cityscapes dataset." }, { "dkey": "CelebA", "dval": "CelebFaces Attributes dataset contains 202,599 face images of the size 178×218 from 10,177 celebrities, each annotated with 40 binary labels indicating facial attributes like hair color, gender and age." }, { "dkey": "MuseScore", "dval": "The MuseScore dataset is a collection of 344,166 audio and MIDI pairs downloaded from MuseScore website. The audio is usually synthesized by the MuseScore synthesizer. The audio clips have diverse musical genres and are about two mins long on average.\n\nDue to copyright issues the dataset is not publicly available, but can be collected and processed with the provided source code." }, { "dkey": "DIV2K", "dval": "DIV2K is a popular single-image super-resolution dataset which contains 1,000 images with different scenes and is splitted to 800 for training, 100 for validation and 100 for testing. It was collected for NTIRE2017 and NTIRE2018 Super-Resolution Challenges in order to encourage research on image super-resolution with more realistic degradation. This dataset contains low resolution images with different types of degradations. Apart from the standard bicubic downsampling, several types of degradations are considered in synthesizing low resolution images for different tracks of the challenges. Track 2 of NTIRE 2017 contains low resolution images with unknown x4 downscaling. Track 2 and track 4 of NTIRE 2018 correspond to realistic mild ×4 and realistic wild ×4 adverse conditions, respectively. Low-resolution images under realistic mild x4 setting suffer from motion blur, Poisson noise and pixel shifting. Degradations under realistic wild x4 setting are further extended to be of different levels from image to image." }, { "dkey": "ACDC", "dval": "The goal of the Automated Cardiac Diagnosis Challenge (ACDC) challenge is to:\n\n\ncompare the performance of automatic methods on the segmentation of the left ventricular endocardium and epicardium as the right ventricular endocardium for both end diastolic and end systolic phase instances;\ncompare the performance of automatic methods for the classification of the examinations in five classes (normal case, heart failure with infarction, dilated cardiomyopathy, hypertrophic cardiomyopathy, abnormal right ventricle).\n\nThe overall ACDC dataset was created from real clinical exams acquired at the University Hospital of Dijon. Acquired data were fully anonymized and handled within the regulations set by the local ethical committee of the Hospital of Dijon (France). Our dataset covers several well-defined pathologies with enough cases to (1) properly train machine learning methods and (2) clearly assess the variations of the main physiological parameters obtained from cine-MRI (in particular diastolic volume and ejection fraction). The dataset is composed of 150 exams (all from different patients) divided into 5 evenly distributed subgroups (4 pathological plus 1 healthy subject groups) as described below. 
Furthermore, each patient comes with the following additional information : weight, height, as well as the diastolic and systolic phase instants.\n\nThe database is made available to participants through two datasets from the dedicated online evaluation website after a personal registration: i) a training dataset of 100 patients along with the corresponding manual references based on the analysis of one clinical expert; ii) a testing dataset composed of 50 new patients, without manual annotations but with the patient information given above. The raw input images are provided through the Nifti format." }, { "dkey": "COVERAGE", "dval": "COVERAGE contains copymove forged (CMFD) images and their originals with similar but genuine objects (SGOs). COVERAGE is designed to highlight and address tamper detection ambiguity of popular methods, caused by self-similarity within natural images. In COVERAGE, forged–original pairs are annotated with (i) the duplicated and forged region masks, and (ii) the tampering factor/similarity metric. For benchmarking, forgery quality is evaluated using (i) computer vision-based methods, and (ii) human detection performance." }, { "dkey": "FaceForensics++", "dval": "FaceForensics++ is a forensics dataset consisting of 1000 original video sequences that have been manipulated with four automated face manipulation methods: Deepfakes, Face2Face, FaceSwap and NeuralTextures. The data has been sourced from 977 youtube videos and all videos contain a trackable mostly frontal face without occlusions which enables automated tampering methods to generate realistic forgeries." }, { "dkey": "GoPro", "dval": "The GoPro dataset for deblurring consists of 3,214 blurred images with the size of 1,280×720 that are divided into 2,103 training images and 1,111 test images. The dataset consists of pairs of a realistic blurry image and the corresponding ground truth shapr image that are obtained by a high-speed camera." } ]
This paper explores knowledge distillation techniques to enhance the robustness of reading comprehension systems. Our method shows that distill
reading comprehension text
2,018
[ "WebChild", "ImageNet-32", "Taskonomy", "DREAM", "DROP", "MuPoTS-3D", "MC-AFP" ]
[ "NarrativeQA", "SQuAD" ]
[ { "dkey": "NarrativeQA", "dval": "The NarrativeQA dataset includes a list of documents with Wikipedia summaries, links to full stories, and questions and answers." }, { "dkey": "SQuAD", "dval": "The Stanford Question Answering Dataset (SQuAD) is a collection of question-answer pairs derived from Wikipedia articles. In SQuAD, the correct answers of questions can be any sequence of tokens in the given text. Because the questions and answers are produced by humans through crowdsourcing, it is more diverse than some other question-answering datasets. SQuAD 1.1 contains 107,785 question-answer pairs on 536 articles. SQuAD2.0 (open-domain SQuAD, SQuAD-Open), the latest version, combines the 100,000 questions in SQuAD1.1 with over 50,000 un-answerable questions written adversarially by crowdworkers in forms that are similar to the answerable ones." }, { "dkey": "WebChild", "dval": "One of the largest commonsense knowledge bases available, describing over 2 million disambiguated concepts and activities, connected by over 18 million assertions." }, { "dkey": "ImageNet-32", "dval": "Imagenet32 is a huge dataset made up of small images called the down-sampled version of Imagenet. Imagenet32 is composed of 1,281,167 training data and 50,000 test data with 1,000 labels." }, { "dkey": "Taskonomy", "dval": "Taskonomy provides a large and high-quality dataset of varied indoor scenes.\n\n\nComplete pixel-level geometric information via aligned meshes.\nSemantic information via knowledge distillation from ImageNet, MS COCO, and MIT Places.\nGlobally consistent camera poses. Complete camera intrinsics.\nHigh-definition images.\n3x times big as ImageNet." }, { "dkey": "DREAM", "dval": "DREAM is a multiple-choice Dialogue-based REAding comprehension exaMination dataset. In contrast to existing reading comprehension datasets, DREAM is the first to focus on in-depth multi-turn multi-party dialogue understanding.\n\nDREAM contains 10,197 multiple choice questions for 6,444 dialogues, collected from English-as-a-foreign-language examinations designed by human experts. DREAM is likely to present significant challenges for existing reading comprehension systems: 84% of answers are non-extractive, 85% of questions require reasoning beyond a single sentence, and 34% of questions also involve commonsense knowledge." }, { "dkey": "DROP", "dval": "Discrete Reasoning Over Paragraphs DROP is a crowdsourced, adversarially-created, 96k-question benchmark, in which a system must resolve references in a question, perhaps to multiple input positions, and perform discrete operations over them (such as addition, counting, or sorting). These operations require a much more comprehensive understanding of the content of paragraphs than what was necessary for prior datasets. The questions consist of passages extracted from Wikipedia articles. The dataset is split into a training set of about 77,000 questions, a development set of around 9,500 questions and a hidden test set similar in size to the development set." }, { "dkey": "MuPoTS-3D", "dval": "MuPoTs-3D (Multi-person Pose estimation Test Set in 3D) is a dataset for pose estimation composed of more than 8,000 frames from 20 real-world scenes with up to three subjects. The poses are annotated with a 14-point skeleton model." }, { "dkey": "MC-AFP", "dval": "A dataset of around 2 million examples for machine reading-comprehension." } ]
A novel approach to fine-tune a general-purpose segmentation model using a single forward pass.
video object segmentation
2,018
[ "THEODORE", "D-HAZY", "NumerSense", "TextSeg", "EPIC-KITCHENS-100", "MEDIQA-AnS" ]
[ "DAVIS", "COCO" ]
[ { "dkey": "DAVIS", "dval": "The Densely Annotation Video Segmentation dataset (DAVIS) is a high quality and high resolution densely annotated video segmentation dataset under two resolutions, 480p and 1080p. There are 50 video sequences with 3455 densely annotated frames in pixel level. 30 videos with 2079 frames are for training and 20 videos with 1376 frames are for validation." }, { "dkey": "COCO", "dval": "The MS COCO (Microsoft Common Objects in Context) dataset is a large-scale object detection, segmentation, key-point detection, and captioning dataset. The dataset consists of 328K images.\n\nSplits:\nThe first version of MS COCO dataset was released in 2014. It contains 164K images split into training (83K), validation (41K) and test (41K) sets. In 2015 additional test set of 81K images was released, including all the previous test images and 40K new images.\n\nBased on community feedback, in 2017 the training/validation split was changed from 83K/41K to 118K/5K. The new split uses the same images and annotations. The 2017 test set is a subset of 41K images of the 2015 test set. Additionally, the 2017 release contains a new unannotated dataset of 123K images.\n\nAnnotations:\nThe dataset has annotations for\n\n\nobject detection: bounding boxes and per-instance segmentation masks with 80 object categories,\ncaptioning: natural language descriptions of the images (see MS COCO Captions),\nkeypoints detection: containing more than 200,000 images and 250,000 person instances labeled with keypoints (17 possible keypoints, such as left eye, nose, right hip, right ankle),\nstuff image segmentation – per-pixel segmentation masks with 91 stuff categories, such as grass, wall, sky (see MS COCO Stuff),\npanoptic: full scene segmentation, with 80 thing categories (such as person, bicycle, elephant) and a subset of 91 stuff categories (grass, sky, road),\ndense pose: more than 39,000 images and 56,000 person instances labeled with DensePose annotations – each labeled person is annotated with an instance id and a mapping between image pixels that belong to that person body and a template 3D model.\nThe annotations are publicly available only for training and validation images." }, { "dkey": "THEODORE", "dval": "Recent work about synthetic indoor datasets from perspective views has shown significant improvements of object detection results with Convolutional Neural Networks(CNNs). In this paper, we introduce THEODORE: a novel, large-scale indoor dataset containing 100,000 high- resolution diversified fisheye images with 14 classes. To this end, we create 3D virtual environments of living rooms, different human characters and interior textures. Beside capturing fisheye images from virtual environments we create annotations for semantic segmentation, instance masks and bounding boxes for object detection tasks. We compare our synthetic dataset to state of the art real-world datasets for omnidirectional images. Based on MS COCO weights, we show that our dataset is well suited for fine-tuning CNNs for object detection. Through a high generalization of our models by means of image synthesis and domain randomization we reach an AP up to 0.84 for class person on High-Definition Analytics dataset." }, { "dkey": "D-HAZY", "dval": "The D-HAZY dataset is generated from NYU depth indoor image collection. D-HAZY contains depth map for each indoor hazy image. 
It contains 1400+ real images and corresponding depth maps used to synthesize hazy scenes based on Koschmieder’s light propagation model" }, { "dkey": "NumerSense", "dval": "Contains 13.6k masked-word-prediction probes, 10.5k for fine-tuning and 3.1k for testing." }, { "dkey": "TextSeg", "dval": "TextSeg is a large-scale fine-annotated and multi-purpose text detection and segmentation dataset, collecting scene and design text with six types of annotations: word- and character-wise bounding polygons, masks and transcriptions." }, { "dkey": "EPIC-KITCHENS-100", "dval": "This paper introduces the pipeline to scale the largest dataset in egocentric vision, EPIC-KITCHENS. The effort culminates in EPIC-KITCHENS-100, a collection of 100 hours, 20M frames, 90K actions in 700 variable-length videos, capturing long-term unscripted activities in 45 environments, using head-mounted cameras. Compared to its previous version (EPIC-KITCHENS-55), EPIC-KITCHENS-100 has been annotated using a novel pipeline that allows denser (54% more actions per minute) and more complete annotations of fine-grained actions (+128% more action segments). This collection also enables evaluating the \"test of time\" - i.e. whether models trained on data collected in 2018 can generalise to new footage collected under the same hypotheses albeit \"two years on\".\nThe dataset is aligned with 6 challenges: action recognition (full and weak supervision), action detection, action anticipation, cross-modal retrieval (from captions), as well as unsupervised domain adaptation for action recognition. For each challenge, we define the task, provide baselines and evaluation metrics." }, { "dkey": "MEDIQA-AnS", "dval": "The first summarization collection containing question-driven summaries of answers to consumer health questions. This dataset can be used to evaluate single or multi-document summaries generated by algorithms using extractive or abstractive approaches." } ]
In this paper, we propose a novel hubness-aware loss function for learning
text-image matching images
2,019
[ "ORVS", "fMoW", "LFW", "arXiv Summarization Dataset" ]
[ "COCO", "Flickr30k" ]
[ { "dkey": "COCO", "dval": "The MS COCO (Microsoft Common Objects in Context) dataset is a large-scale object detection, segmentation, key-point detection, and captioning dataset. The dataset consists of 328K images.\n\nSplits:\nThe first version of MS COCO dataset was released in 2014. It contains 164K images split into training (83K), validation (41K) and test (41K) sets. In 2015 additional test set of 81K images was released, including all the previous test images and 40K new images.\n\nBased on community feedback, in 2017 the training/validation split was changed from 83K/41K to 118K/5K. The new split uses the same images and annotations. The 2017 test set is a subset of 41K images of the 2015 test set. Additionally, the 2017 release contains a new unannotated dataset of 123K images.\n\nAnnotations:\nThe dataset has annotations for\n\n\nobject detection: bounding boxes and per-instance segmentation masks with 80 object categories,\ncaptioning: natural language descriptions of the images (see MS COCO Captions),\nkeypoints detection: containing more than 200,000 images and 250,000 person instances labeled with keypoints (17 possible keypoints, such as left eye, nose, right hip, right ankle),\nstuff image segmentation – per-pixel segmentation masks with 91 stuff categories, such as grass, wall, sky (see MS COCO Stuff),\npanoptic: full scene segmentation, with 80 thing categories (such as person, bicycle, elephant) and a subset of 91 stuff categories (grass, sky, road),\ndense pose: more than 39,000 images and 56,000 person instances labeled with DensePose annotations – each labeled person is annotated with an instance id and a mapping between image pixels that belong to that person body and a template 3D model.\nThe annotations are publicly available only for training and validation images." }, { "dkey": "Flickr30k", "dval": "The Flickr30k dataset contains 31,000 images collected from Flickr, together with 5 reference sentences provided by human annotators." }, { "dkey": "ORVS", "dval": "The ORVS dataset has been newly established as a collaboration between the computer science and visual-science departments at the University of Calgary.\n\nThis dataset contains 49 images (42 training and seven testing images) collected from a clinic in Calgary-Canada. All images were acquired with a Zeiss Visucam 200 with 30 degrees field of view (FOV). The image size is 1444×1444 with 24 bits per pixel. Images and are stored in JPEG format with low compression, which is common in ophthalmology practice. All images were manually traced by an expert who a has been working in the field of retinal-image analysis and went through training. The expert was asked to label all pixels belonging to retinal vessels. The Windows Paint 3D tool was used to manually label the images." }, { "dkey": "fMoW", "dval": "Functional Map of the World (fMoW) is a dataset that aims to inspire the development of machine learning models capable of predicting the functional purpose of buildings and land use from temporal sequences of satellite images and a rich set of metadata features." }, { "dkey": "LFW", "dval": "The LFW dataset contains 13,233 images of faces collected from the web. This dataset consists of the 5749 identities with 1680 people with two or more images. In the standard LFW evaluation protocol the verification accuracies are reported on 6000 face pairs." }, { "dkey": "arXiv Summarization Dataset", "dval": "This is a dataset for evaluating summarisation methods for research papers." } ]
We investigate how to reduce the dimension of flow models to achieve improved likelihood scores.
generative modelling images
2,019
[ "Imagewoof", "NAB", "PieAPP dataset", "ROSTD" ]
[ "CIFAR-10", "CelebA" ]
[ { "dkey": "CIFAR-10", "dval": "The CIFAR-10 dataset (Canadian Institute for Advanced Research, 10 classes) is a subset of the Tiny Images dataset and consists of 60000 32x32 color images. The images are labelled with one of 10 mutually exclusive classes: airplane, automobile (but not truck or pickup truck), bird, cat, deer, dog, frog, horse, ship, and truck (but not pickup truck). There are 6000 images per class with 5000 training and 1000 testing images per class.\n\nThe criteria for deciding whether an image belongs to a class were as follows:\n\n\nThe class name should be high on the list of likely answers to the question “What is in this picture?”\nThe image should be photo-realistic. Labelers were instructed to reject line drawings.\nThe image should contain only one prominent instance of the object to which the class refers.\nThe object may be partially occluded or seen from an unusual viewpoint as long as its identity is still clear to the labeler." }, { "dkey": "CelebA", "dval": "CelebFaces Attributes dataset contains 202,599 face images of the size 178×218 from 10,177 celebrities, each annotated with 40 binary labels indicating facial attributes like hair color, gender and age." }, { "dkey": "Imagewoof", "dval": "Imagewoof is a subset of 10 dog breed classes from Imagenet. The breeds are: Australian terrier, Border terrier, Samoyed, Beagle, Shih-Tzu, English foxhound, Rhodesian ridgeback, Dingo, Golden retriever, Old English sheepdog." }, { "dkey": "NAB", "dval": "The First Temporal Benchmark Designed to Evaluate Real-time Anomaly Detectors Benchmark\n\nThe growth of the Internet of Things has created an abundance of streaming data. Finding anomalies in this data can provide valuable insights into opportunities or failures. Yet it’s difficult to achieve, due to the need to process data in real time, continuously learn and make predictions. How do we evaluate and compare various real-time anomaly detection techniques? \n\nThe Numenta Anomaly Benchmark (NAB) provides a standard, open source framework for evaluating real-time anomaly detection algorithms on streaming data. Through a controlled, repeatable environment of open-source tools, NAB rewards detectors that find anomalies as soon as possible, trigger no false alarms, and automatically adapt to any changing statistics. \n\nNAB comprises two main components: a scoring system designed for streaming data and a dataset with labeled, real-world time-series data." }, { "dkey": "PieAPP dataset", "dval": "The PieAPP dataset is a large-scale dataset used for training and testing perceptually-consistent image-error prediction algorithms.\nThe dataset can be downloaded from: server containing a zip file with all data (2.2GB) or Google Drive (ideal for quick browsing). \n\nThe dataset contains undistorted high-quality reference images and several distorted versions of these reference images. 
Pairs of distorted images corresponding to a reference image are labeled with probability of preference labels.\n These labels indicate the fraction of human population that considers one image to be visually closer to the reference over another in the pair.\nTo ensure reliable pairwise probability of preference labels, we query 40 human subjects via Amazon Mechanical Turk for each image pair.\nWe then obtain the percentage of people who selected image A over B as the ground-truth label for this pair, which is the probability of preference of A over B (the supplementary document explains the choice of using 40 human subjects to capture accurate probabilities).\nThis approach is more robust because it is easier to identify the visually closer image than to assign quality scores, and does not suffer from set-dependency or scalability issues like Swiss tournaments since we never label the images with per-image quality scores (see the associated paper and supplementary document for issues with such existing labeling schemes). \nA pairwise learning framework, discussed in the paper, can be used to train image error predictors on the PieAPP dataset.\n\nDataset statistics\nWe make this dataset available for non-commercial and educational purposes only. \nThe dataset contains a total of 200 undistorted reference images, divided into train / validation / test split.\nThese reference images are derived from the Waterloo Exploration Dataset. We release the subset of 200 reference images used in PieAPP from the Waterloo Exploration Dataset with permissions for non-commercial, educational, use from the authors.\nThe users of the PieAPP dataset are requested to cite the Waterloo Exploration Dataset for the reference images, along with PieAPP dataset, as mentioned here.\n\nThe training + validation set contain a total of 160 reference images and test set contains 40 reference images.\nA total of 19,680 distorted images are generated for the train/val set and pairwise probability of preference labels for 77,280 image pairs are made available (derived from querying 40 human subjects for a pairwise comparison + max-likelihood estimation of some missing pairs).\n\nFor test set, 15 distorted images per reference (total 600 distorted images) are created and all possible pairwise comparisons (total 4200) are performed to label each image pair with a probability of preference derived from 40 human subjects' votes.\n\nOverall, the PieAPP dataset provides a total of 20,280 distorted images derived from 200 reference images, and 81,480 pairwise probability-of-preference labels.\n\nMore details of dataset collection can be found in Sec.4 of the paper and supplementary document." }, { "dkey": "ROSTD", "dval": "A dataset of 4K out-of-domain (OOD) examples for the publicly available dataset from (Schuster et al. 2019). In contrast to existing settings which synthesize OOD examples by holding out a subset of classes, the examples were authored by annotators with apriori instructions to be out-of-domain with respect to the sentences in an existing dataset." } ]
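A small worked example of the PieAPP labeling and pair-count arithmetic described above. The vote count is hypothetical; the rater count and the test-set sizes are the ones quoted in the description.

    from math import comb

    num_raters = 40                 # raters queried per image pair, per the description
    votes_for_a = 26                # hypothetical number preferring image A over image B
    prob_pref_a = votes_for_a / num_raters   # ground-truth probability-of-preference label, 0.65

    # Test set: 15 distorted images per reference, all pairwise comparisons, 40 references.
    pairs_per_reference = comb(15, 2)            # 105
    total_test_pairs = 40 * pairs_per_reference  # 4200, matching the figure quoted above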
A method for robust face alignment using a deep neural network architecture.
face alignment images
2,017
[ "SI-SCORE", "30MQA", "HIGGS Data Set", "Word Sense Disambiguation: a Unified Evaluation Framework and Empirical Comparison" ]
[ "AFW", "300W" ]
[ { "dkey": "AFW", "dval": "AFW (Annotated Faces in the Wild) is a face detection dataset that contains 205 images with 468 faces. Each face image is labeled with at most 6 landmarks with visibility labels, as well as a bounding box." }, { "dkey": "300W", "dval": "The 300-W is a face dataset that consists of 300 Indoor and 300 Outdoor in-the-wild images. It covers a large variation of identity, expression, illumination conditions, pose, occlusion and face size. The images were downloaded from google.com by making queries such as “party”, “conference”, “protests”, “football” and “celebrities”. Compared to the rest of in-the-wild datasets, the 300-W database contains a larger percentage of partially-occluded images and covers more expressions than the common “neutral” or “smile”, such as “surprise” or “scream”.\nImages were annotated with the 68-point mark-up using a semi-automatic methodology. The images of the database were carefully selected so that they represent a characteristic sample of challenging but natural face instances under totally unconstrained conditions. Thus, methods that achieve accurate performance on the 300-W database can demonstrate the same accuracy in most realistic cases.\nMany images of the database contain more than one annotated faces (293 images with 1 face, 53 images with 2 faces and 53 images with [3, 7] faces). Consequently, the database consists of 600 annotated face instances, but 399 unique images. Finally, there is a large variety of face sizes. Specifically, 49.3% of the faces have size in the range [48.6k, 2.0M] and the overall mean size is 85k (about 292 × 292) pixels." }, { "dkey": "SI-SCORE", "dval": "A synthetic dataset uses for a systematic analysis across common factors of variation." }, { "dkey": "30MQA", "dval": "An enormous question answer pair corpus produced by applying a novel neural network architecture on the knowledge base Freebase to transduce facts into natural language questions." }, { "dkey": "HIGGS Data Set", "dval": "The data has been produced using Monte Carlo simulations. The first 21 features (columns 2-22) are kinematic properties measured by the particle detectors in the accelerator. The last seven features are functions of the first 21 features; these are high-level features derived by physicists to help discriminate between the two classes. There is an interest in using deep learning methods to obviate the need for physicists to manually develop such features. Benchmark results using Bayesian Decision Trees from a standard physics package and 5-layer neural networks are presented in the original paper. The last 500,000 examples are used as a test set." }, { "dkey": "Word Sense Disambiguation: a Unified Evaluation Framework and Empirical Comparison", "dval": "The Evaluation framework of Raganato et al. 2017 includes two training sets (SemCor-Miller et al., 1993- and OMSTI-Taghipour and Ng, 2015-) and five test sets from the Senseval/SemEval series (Edmonds and Cotton, 2001; Snyder and Palmer, 2004; Pradhan et al., 2007; Navigli et al., 2013; Moro and Navigli, 2015), standardized to the same format and sense inventory (i.e. WordNet 3.0).\n\nTypically, there are two kinds of approach for WSD: supervised (which make use of sense-annotated training data) and knowledge-based (which make use of the properties of lexical resources).\n\nSupervised: The most widely used training corpus used is SemCor, with 226,036 sense annotations from 352 documents manually annotated. 
All supervised systems in the evaluation table are trained on SemCor. Some supervised methods, particularly neural architectures, usually employ the SemEval 2007 dataset as development set (marked by *). The most usual baseline is the Most Frequent Sense (MFS) heuristic, which selects for each target word the most frequent sense in the training data.\n\nKnowledge-based: Knowledge-based systems usually exploit WordNet or BabelNet as semantic network. The first sense given by the underlying sense inventory (i.e. WordNet 3.0) is included as a baseline.\n\nDescription from NLP Progress" } ]
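The WSD entry above uses the Most Frequent Sense (MFS) heuristic as its standard baseline. A minimal sketch of that heuristic on toy sense-annotated pairs follows; the example annotations are made up and merely stand in for a corpus such as SemCor.

    from collections import Counter, defaultdict

    # Toy (lemma, sense) pairs standing in for a sense-annotated training corpus.
    train = [("bank", "bank.n.01"), ("bank", "bank.n.01"), ("bank", "bank.n.09"),
             ("plant", "plant.n.02")]

    sense_counts = defaultdict(Counter)
    for lemma, sense in train:
        sense_counts[lemma][sense] += 1

    def mfs_predict(lemma):
        # Pick the sense observed most often for this lemma in the training data.
        return sense_counts[lemma].most_common(1)[0][0] if lemma in sense_counts else None

    print(mfs_predict("bank"))   # bank.n.01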
I want to use an image-text matching system for generic image retrieval.
image-sentence matching images sentences
2,017
[ "Lakh MIDI Dataset", "Spoken-SQuAD", "CTC", "RecipeQA", "Fashion IQ" ]
[ "Flickr30k", "COCO" ]
[ { "dkey": "Flickr30k", "dval": "The Flickr30k dataset contains 31,000 images collected from Flickr, together with 5 reference sentences provided by human annotators." }, { "dkey": "COCO", "dval": "The MS COCO (Microsoft Common Objects in Context) dataset is a large-scale object detection, segmentation, key-point detection, and captioning dataset. The dataset consists of 328K images.\n\nSplits:\nThe first version of MS COCO dataset was released in 2014. It contains 164K images split into training (83K), validation (41K) and test (41K) sets. In 2015 additional test set of 81K images was released, including all the previous test images and 40K new images.\n\nBased on community feedback, in 2017 the training/validation split was changed from 83K/41K to 118K/5K. The new split uses the same images and annotations. The 2017 test set is a subset of 41K images of the 2015 test set. Additionally, the 2017 release contains a new unannotated dataset of 123K images.\n\nAnnotations:\nThe dataset has annotations for\n\n\nobject detection: bounding boxes and per-instance segmentation masks with 80 object categories,\ncaptioning: natural language descriptions of the images (see MS COCO Captions),\nkeypoints detection: containing more than 200,000 images and 250,000 person instances labeled with keypoints (17 possible keypoints, such as left eye, nose, right hip, right ankle),\nstuff image segmentation – per-pixel segmentation masks with 91 stuff categories, such as grass, wall, sky (see MS COCO Stuff),\npanoptic: full scene segmentation, with 80 thing categories (such as person, bicycle, elephant) and a subset of 91 stuff categories (grass, sky, road),\ndense pose: more than 39,000 images and 56,000 person instances labeled with DensePose annotations – each labeled person is annotated with an instance id and a mapping between image pixels that belong to that person body and a template 3D model.\nThe annotations are publicly available only for training and validation images." }, { "dkey": "Lakh MIDI Dataset", "dval": "The Lakh MIDI dataset is a collection of 176,581 unique MIDI files, 45,129 of which have been matched and aligned to entries in the Million Song Dataset. Its goal is to facilitate large-scale music information retrieval, both symbolic (using the MIDI files alone) and audio content-based (using information extracted from the MIDI files as annotations for the matched audio files). Around 10% of all MIDI files include timestamped lyrics events with lyrics are often transcribed at the word, syllable or character level.\n\nLMD-full denotes the whole dataset. LMD-matched is the subset of LMD-full that consists of MIDI files matched with the Million Song Dataset entries. LMD-aligned contains all the files of LMD-matched, aligned to preview MP3s from the Million Song Dataset.\n\nA lakh is a unit of measure used in the Indian number system which signifies 100,000." }, { "dkey": "Spoken-SQuAD", "dval": "In SpokenSQuAD, the document is in spoken form, the input question is in the form of text and the answer to each question is always a span in the document. The following procedures were used to generate spoken documents from the original SQuAD dataset. First, the Google text-to-speech system was used to generate the spoken version of the articles in SQuAD. Then CMU Sphinx was sued to generate the corresponding ASR transcriptions. The SQuAD training set was used to generate the training set of Spoken SQuAD, and SQuAD development set was used to generate the testing set for Spoken SQuAD. 
If the answer of a question did not exist in the ASR transcriptions of the associated article, the question-answer pair was removed from the dataset because these examples are too difficult for a listening comprehension machine at this stage." }, { "dkey": "CTC", "dval": "A dataset that allows exploration of cross-modal retrieval where images contain scene-text instances." }, { "dkey": "RecipeQA", "dval": "RecipeQA is a dataset for multimodal comprehension of cooking recipes. It consists of over 36K question-answer pairs automatically generated from approximately 20K unique recipes with step-by-step instructions and images. Each question in RecipeQA involves multiple modalities such as titles, descriptions or images, and working towards an answer requires (i) joint understanding of images and text, (ii) capturing the temporal flow of events, and (iii) making sense of procedural knowledge." }, { "dkey": "Fashion IQ", "dval": "Fashion IQ supports and advances research on interactive fashion image retrieval. Fashion IQ is the first fashion dataset to provide human-generated captions that distinguish similar pairs of garment images together with side-information consisting of real-world product descriptions and derived visual attribute labels for these images." } ]
I'd like to encode a sentence into a continuous vector.
sentence encoding text
2,016
[ "NCI1", "SentEval", "SEWA DB", "MagnaTagATune", "Discovery Dataset" ]
[ "COCO", "BookCorpus" ]
[ { "dkey": "COCO", "dval": "The MS COCO (Microsoft Common Objects in Context) dataset is a large-scale object detection, segmentation, key-point detection, and captioning dataset. The dataset consists of 328K images.\n\nSplits:\nThe first version of MS COCO dataset was released in 2014. It contains 164K images split into training (83K), validation (41K) and test (41K) sets. In 2015 additional test set of 81K images was released, including all the previous test images and 40K new images.\n\nBased on community feedback, in 2017 the training/validation split was changed from 83K/41K to 118K/5K. The new split uses the same images and annotations. The 2017 test set is a subset of 41K images of the 2015 test set. Additionally, the 2017 release contains a new unannotated dataset of 123K images.\n\nAnnotations:\nThe dataset has annotations for\n\n\nobject detection: bounding boxes and per-instance segmentation masks with 80 object categories,\ncaptioning: natural language descriptions of the images (see MS COCO Captions),\nkeypoints detection: containing more than 200,000 images and 250,000 person instances labeled with keypoints (17 possible keypoints, such as left eye, nose, right hip, right ankle),\nstuff image segmentation – per-pixel segmentation masks with 91 stuff categories, such as grass, wall, sky (see MS COCO Stuff),\npanoptic: full scene segmentation, with 80 thing categories (such as person, bicycle, elephant) and a subset of 91 stuff categories (grass, sky, road),\ndense pose: more than 39,000 images and 56,000 person instances labeled with DensePose annotations – each labeled person is annotated with an instance id and a mapping between image pixels that belong to that person body and a template 3D model.\nThe annotations are publicly available only for training and validation images." }, { "dkey": "BookCorpus", "dval": "BookCorpus is a large collection of free novel books written by unpublished authors, which contains 11,038 books (around 74M sentences and 1G words) of 16 different sub-genres (e.g., Romance, Historical, Adventure, etc.)." }, { "dkey": "NCI1", "dval": "The NCI1 dataset comes from the cheminformatics domain, where each input graph is used as representation of a chemical compound: each vertex stands for an atom of the molecule, and edges between vertices represent bonds between atoms. This dataset is relative to anti-cancer screens where the chemicals are assessed as positive or negative to cell lung cancer. Each vertex has an input label representing the corresponding atom type, encoded by a one-hot-encoding scheme into a vector of 0/1 elements." }, { "dkey": "SentEval", "dval": "SentEval is a toolkit for evaluating the quality of universal sentence representations. SentEval encompasses a variety of tasks, including binary and multi-class classification, natural language inference and sentence similarity. The set of tasks was selected based on what appears to be the community consensus regarding the appropriate evaluations for universal sentence representations. The toolkit comes with scripts to download and preprocess datasets, and an easy interface to evaluate sentence encoders." }, { "dkey": "SEWA DB", "dval": "A database of more than 2000 minutes of audio-visual data of 398 people coming from six cultures, 50% female, and uniformly spanning the age range of 18 to 65 years old. Subjects were recorded in two different contexts: while watching adverts and while discussing adverts in a video chat. 
The database includes rich annotations of the recordings in terms of facial landmarks, facial action units (FAU), various vocalisations, mirroring, and continuously valued valence, arousal, liking, agreement, and prototypic examples of (dis)liking. This database aims to be an extremely valuable resource for researchers in affective computing and automatic human sensing and is expected to push forward the research in human behaviour analysis, including cultural studies." }, { "dkey": "MagnaTagATune", "dval": "MagnaTagATune dataset contains 25,863 music clips. Each clip is a 29-seconds-long excerpt belonging to one of the 5223 songs, 445 albums and 230 artists. The clips span a broad range of genres like Classical, New Age, Electronica, Rock, Pop, World, Jazz, Blues, Metal, Punk, and more. Each audio clip is supplied with a vector of binary annotations of 188 tags. These annotations are obtained by humans playing the two-player online TagATune game. In this game, the two players are either presented with the same or a different audio clip. Subsequently, they are asked to come up with tags for their specific audio clip. Afterward, players view each other’s tags and are asked to decide whether they were presented the same audio clip. Tags are only assigned when more than two players agreed. The annotations include tags like ’singer’, ’no singer’, ’violin’, ’drums’, ’classical’, ’jazz’. The top 50 most popular tags are typically used for evaluation to ensure that there is enough training data for each tag. There are 16 parts, and researchers commonly use parts 1-12 for training, part 13 for validation and parts 14-16 for testing." }, { "dkey": "Discovery Dataset", "dval": "The Discovery dataset consists of adjacent sentence pairs (s1,s2) with a discourse marker (y) that occurred at the beginning of s2. They were extracted from the depcc web corpus.\n\nMarker prediction can be used in order to train a sentence encoder. Discourse markers can be considered as noisy labels for various semantic tasks, such as entailment (y=therefore), subjectivity analysis (y=personally) or sentiment analysis (y=sadly), similarity (y=similarly), typicality, (y=curiously) ...\n\nThe specificity of this dataset is the diversity of the markers, since previously used data used only ~10 imbalanced classes. The authors of the dataset provide:\n\n\na list of the 174 discourse markers\na Base version of the dataset with 1.74 million pairs (10k examples per marker)\na Big version with 3.4 million pairs\na Hard version with 1.74 million pairs where the connective couldn't be predicted with a fastText linear model" } ]
We propose a novel method for the automatic segmentation of vessel trees in retinal fundus images.
retinal vessel segmentation fundus images
2,015
[ "RITE", "HRF", "ADAM", "G1020", "ROSE", "ORVS" ]
[ "STARE", "DRIVE", "CHASE_DB1" ]
[ { "dkey": "STARE", "dval": "The STARE (Structured Analysis of the Retina) dataset is a dataset for retinal vessel segmentation. It contains 20 equal-sized (700×605) color fundus images. For each image, two groups of annotations are provided.." }, { "dkey": "DRIVE", "dval": "The Digital Retinal Images for Vessel Extraction (DRIVE) dataset is a dataset for retinal vessel segmentation. It consists of a total of JPEG 40 color fundus images; including 7 abnormal pathology cases. The images were obtained from a diabetic retinopathy screening program in the Netherlands. The images were acquired using Canon CR5 non-mydriatic 3CCD camera with FOV equals to 45 degrees. Each image resolution is 584*565 pixels with eight bits per color channel (3 channels). \n\nThe set of 40 images was equally divided into 20 images for the training set and 20 images for the testing set. Inside both sets, for each image, there is circular field of view (FOV) mask of diameter that is approximately 540 pixels. Inside training set, for each image, one manual segmentation by an ophthalmological expert has been applied. Inside testing set, for each image, two manual segmentations have been applied by two different observers, where the first observer segmentation is accepted as the ground-truth for performance evaluation." }, { "dkey": "CHASE_DB1", "dval": "CHASE_DB1 is a dataset for retinal vessel segmentation which contains 28 color retina images with the size of 999×960 pixels which are collected from both left and right eyes of 14 school children. Each image is annotated by two independent human experts." }, { "dkey": "RITE", "dval": "The RITE (Retinal Images vessel Tree Extraction) is a database that enables comparative studies on segmentation or classification of arteries and veins on retinal fundus images, which is established based on the public available DRIVE database (Digital Retinal Images for Vessel Extraction).\n\nRITE contains 40 sets of images, equally separated into a training subset and a test subset, the same as DRIVE. The two subsets are built from the corresponding two subsets in DRIVE. For each set, there is a fundus photograph, a vessel reference standard, and a Arteries/Veins (A/V) reference standard. \n\n\nThe fundus photograph is inherited from DRIVE. \nFor the training set, the vessel reference standard is a modified version of 1st_manual from DRIVE. \nFor the test set, the vessel reference standard is 2nd_manual from DRIVE. \nFor the A/V reference standard, four types of vessels are labelled using four colors based on the vessel reference standard. \nArteries are labelled in red; veins are labelled in blue; the overlapping of arteries and veins are labelled in green; the vessels which are uncertain are labelled in white. \nThe fundus photograph is in tif format. And the vessel reference standard and the A/V reference standard are in png format. \n\nThe dataset is described in more detail in our paper, which you will cite if you use the dataset in any way: \n\nHu Q, Abràmoff MD, Garvin MK. Automated separation of binary overlapping trees in low-contrast color retinal images. Med Image Comput Comput Assist Interv. 2013;16(Pt 2):436-43. PubMed PMID: 24579170 https://doi.org/10.1007/978-3-642-40763-5_54" }, { "dkey": "HRF", "dval": "The HRF dataset is a dataset for retinal vessel segmentation which comprises 45 images and is organized as 15 subsets. Each subset contains one healthy fundus image, one image of patient with diabetic retinopathy and one glaucoma image. 
The image sizes are 3,304 x 2,336, with a training/testing image split of 22/23." }, { "dkey": "ADAM", "dval": "ADAM is organized as a half day Challenge, a Satellite Event of the ISBI 2020 conference in Iowa City, Iowa, USA.\n\nThe ADAM challenge focuses on the investigation and development of algorithms associated with the diagnosis of Age-related Macular degeneration (AMD) and segmentation of lesions in fundus photos from AMD patients. The goal of the challenge is to evaluate and compare automated algorithms for the detection of AMD on a common dataset of retinal fundus images. We invite the medical image analysis community to participate by developing and testing existing and novel automated fundus classification and segmentation methods.\n\nInstructions: \nADAM: Automatic Detection challenge on Age-related Macular degeneration\n\nLink: https://amd.grand-challenge.org\n\nAge-related macular degeneration, abbreviated as AMD, is a degenerative disorder in the macular region. It mainly occurs in people older than 45 years old and its incidence rate is even higher than diabetic retinopathy in the elderly. \n\nThe etiology of AMD is not fully understood, which could be related to multiple factors, including genetics, chronic photodestruction effect, and nutritional disorder. AMD is classified into Dry AMD and Wet AMD. Dry AMD (also called nonexudative AMD) is not neovascular. It is characterized by progressive atrophy of retinal pigment epithelium (RPE). In the late stage, drusen and the large area of atrophy could be observed under ophthalmoscopy. Wet AMD (also called neovascular or exudative AMD), is characterized by active neovascularization under RPE, subsequently causing exudation, hemorrhage, and scarring, and will eventually cause irreversible damage to the photoreceptors and rapid vision loss if left untreated.\n\nAn early diagnosis of AMD is crucial to treatment and prognosis. Fundus photo is one of the basic examinations. The current dataset is composed of AMD and non-AMD (myopia, normal control, etc.) photos. Typical signs of AMD that can be found in these photos include drusen, exudation, hemorrhage, etc. \n\nThe ADAM challenge has 4 tasks:\n\nTask 1: Classification of AMD and non-AMD fundus images.\n\nTask 2: Detection and segmentation of optic disc.\n\nTask 3: Localization of fovea.\n\nTask 4: Detection and Segmentation of lesions from fundus images." }, { "dkey": "G1020", "dval": "A large publicly available retinal fundus image dataset for glaucoma classification called G1020. The dataset is curated by conforming to standard practices in routine ophthalmology and it is expected to serve as standard benchmark dataset for glaucoma detection. This database consists of 1020 high resolution colour fundus images and provides ground truth annotations for glaucoma diagnosis, optic disc and optic cup segmentation, vertical cup-to-disc ratio, size of neuroretinal rim in inferior, superior, nasal and temporal quadrants, and bounding box location for optic disc." }, { "dkey": "ROSE", "dval": "Retinal OCTA SEgmentation dataset (ROSE) consists of 229 OCTA images with vessel annotations at either centerline-level or pixel level." }, { "dkey": "ORVS", "dval": "The ORVS dataset has been newly established as a collaboration between the computer science and visual-science departments at the University of Calgary.\n\nThis dataset contains 49 images (42 training and seven testing images) collected from a clinic in Calgary-Canada. 
All images were acquired with a Zeiss Visucam 200 with 30 degrees field of view (FOV). The image size is 1444×1444 with 24 bits per pixel. Images are stored in JPEG format with low compression, which is common in ophthalmology practice. All images were manually traced by an expert who has been working in the field of retinal-image analysis and went through training. The expert was asked to label all pixels belonging to retinal vessels. The Windows Paint 3D tool was used to manually label the images." } ]
A simple self-attention network that is more efficient than state-of-the-art non-local
image classification
2,020
[ "LibriSpeech", "Places", "ISIA Food-500", "DADA-2000", "GitHub Typo Corpus", "YouTube-8M", "VQA-HAT" ]
[ "COCO", "Cityscapes" ]
[ { "dkey": "COCO", "dval": "The MS COCO (Microsoft Common Objects in Context) dataset is a large-scale object detection, segmentation, key-point detection, and captioning dataset. The dataset consists of 328K images.\n\nSplits:\nThe first version of MS COCO dataset was released in 2014. It contains 164K images split into training (83K), validation (41K) and test (41K) sets. In 2015 additional test set of 81K images was released, including all the previous test images and 40K new images.\n\nBased on community feedback, in 2017 the training/validation split was changed from 83K/41K to 118K/5K. The new split uses the same images and annotations. The 2017 test set is a subset of 41K images of the 2015 test set. Additionally, the 2017 release contains a new unannotated dataset of 123K images.\n\nAnnotations:\nThe dataset has annotations for\n\n\nobject detection: bounding boxes and per-instance segmentation masks with 80 object categories,\ncaptioning: natural language descriptions of the images (see MS COCO Captions),\nkeypoints detection: containing more than 200,000 images and 250,000 person instances labeled with keypoints (17 possible keypoints, such as left eye, nose, right hip, right ankle),\nstuff image segmentation – per-pixel segmentation masks with 91 stuff categories, such as grass, wall, sky (see MS COCO Stuff),\npanoptic: full scene segmentation, with 80 thing categories (such as person, bicycle, elephant) and a subset of 91 stuff categories (grass, sky, road),\ndense pose: more than 39,000 images and 56,000 person instances labeled with DensePose annotations – each labeled person is annotated with an instance id and a mapping between image pixels that belong to that person body and a template 3D model.\nThe annotations are publicly available only for training and validation images." }, { "dkey": "Cityscapes", "dval": "Cityscapes is a large-scale database which focuses on semantic understanding of urban street scenes. It provides semantic, instance-wise, and dense pixel annotations for 30 classes grouped into 8 categories (flat surfaces, humans, vehicles, constructions, objects, nature, sky, and void). The dataset consists of around 5000 fine annotated images and 20000 coarse annotated ones. Data was captured in 50 cities during several months, daytimes, and good weather conditions. It was originally recorded as video so the frames were manually selected to have the following features: large number of dynamic objects, varying scene layout, and varying background." }, { "dkey": "LibriSpeech", "dval": "The LibriSpeech corpus is a collection of approximately 1,000 hours of audiobooks that are a part of the LibriVox project. Most of the audiobooks come from the Project Gutenberg. The training data is split into 3 partitions of 100hr, 360hr, and 500hr sets while the dev and test data are split into the ’clean’ and ’other’ categories, respectively, depending upon how well or challenging Automatic Speech Recognition systems would perform against. Each of the dev and test sets is around 5hr in audio length. This corpus also provides the n-gram language models and the corresponding texts excerpted from the Project Gutenberg books, which contain 803M tokens and 977K unique words." }, { "dkey": "Places", "dval": "The Places dataset is proposed for scene recognition and contains more than 2.5 million images covering more than 205 scene categories with more than 5,000 images per category." 
}, { "dkey": "ISIA Food-500", "dval": "Includes 500 categories from the list in the Wikipedia and 399,726 images, a more comprehensive food dataset that surpasses existing popular benchmark datasets by category coverage and data volume." }, { "dkey": "DADA-2000", "dval": "DADA-2000 is a large-scale benchmark with 2000 video sequences (named as DADA-2000) is contributed with laborious annotation for driver attention (fixation, saccade, focusing time), accident objects/intervals, as well as the accident categories, and superior performance to state-of-the-arts are provided by thorough evaluations." }, { "dkey": "GitHub Typo Corpus", "dval": "Are you the kind of person who makes a lot of typos when writing code? Or are you the one who fixes them by making \"fix typo\" commits? Either way, thank you—you contributed to the state-of-the-art in the NLP field.\n\nGitHub Typo Corpus is a large-scale dataset of misspellings and grammatical errors along with their corrections harvested from GitHub. It contains more than 350k edits and 65M characters in more than 15 languages, making it the largest dataset of misspellings to date." }, { "dkey": "YouTube-8M", "dval": "The YouTube-8M dataset is a large scale video dataset, which includes more than 7 million videos with 4716 classes labeled by the annotation system. The dataset consists of three parts: training set, validate set, and test set. In the training set, each class contains at least 100 training videos. Features of these videos are extracted by the state-of-the-art popular pre-trained models and released for public use. Each video contains audio and visual modality. Based on the visual information, videos are divided into 24 topics, such as sports, game, arts & entertainment, etc" }, { "dkey": "VQA-HAT", "dval": "VQA-HAT (Human ATtention) is a dataset to evaluate the informative regions of an image depending on the question being asked about it. The dataset consists of human visual attention maps over the images in the original VQA dataset. It contains more than 60k attention maps." } ]
We introduce a simple yet effective method to perform knowledge base completion based on word embedding. We show that
knowledge base completion text
2,018
[ "BDD100K", "REDDIT-BINARY", "THEODORE", "UASOL" ]
[ "FB15k", "WN18" ]
[ { "dkey": "FB15k", "dval": "The FB15k dataset contains knowledge base relation triples and textual mentions of Freebase entity pairs. It has a total of 592,213 triplets with 14,951 entities and 1,345 relationships. FB15K-237 is a variant of the original dataset where inverse relations are removed, since it was found that a large number of test triplets could be obtained by inverting triplets in the training set." }, { "dkey": "WN18", "dval": "The WN18 dataset has 18 relations scraped from WordNet for roughly 41,000 synsets, resulting in 141,442 triplets. It was found out that a large number of the test triplets can be found in the training set with another relation or the inverse relation. Therefore, a new version of the dataset WN18RR has been proposed to address this issue." }, { "dkey": "BDD100K", "dval": "Datasets drive vision progress, yet existing driving datasets are impoverished in terms of visual content and supported tasks to study multitask learning for autonomous driving. Researchers are usually constrained to study a small set of problems on one dataset, while real-world computer vision applications require performing tasks of various complexities. We construct BDD100K, the largest driving video dataset with 100K videos and 10 tasks to evaluate the exciting progress of image recognition algorithms on autonomous driving. The dataset possesses geographic, environmental, and weather diversity, which is useful for training models that are less likely to be surprised by new conditions. Based on this diverse dataset, we build a benchmark for heterogeneous multitask learning and study how to solve the tasks together. Our experiments show that special training strategies are needed for existing models to perform such heterogeneous tasks. BDD100K opens the door for future studies in this important venue. More detail is at the dataset home page." }, { "dkey": "REDDIT-BINARY", "dval": "REDDIT-BINARY consists of graphs corresponding to online discussions on Reddit. In each graph, nodes represent users, and there is an edge between them if at least one of them respond to the other’s comment. There are four popular subreddits, namely, IAmA, AskReddit, TrollXChromosomes, and atheism. IAmA and AskReddit are two question/answer based subreddits, and TrollXChromosomes and atheism are two discussion-based subreddits. A graph is labeled according to whether it belongs to a question/answer-based community or a discussion-based community." }, { "dkey": "THEODORE", "dval": "Recent work about synthetic indoor datasets from perspective views has shown significant improvements of object detection results with Convolutional Neural Networks(CNNs). In this paper, we introduce THEODORE: a novel, large-scale indoor dataset containing 100,000 high- resolution diversified fisheye images with 14 classes. To this end, we create 3D virtual environments of living rooms, different human characters and interior textures. Beside capturing fisheye images from virtual environments we create annotations for semantic segmentation, instance masks and bounding boxes for object detection tasks. We compare our synthetic dataset to state of the art real-world datasets for omnidirectional images. Based on MS COCO weights, we show that our dataset is well suited for fine-tuning CNNs for object detection. Through a high generalization of our models by means of image synthesis and domain randomization we reach an AP up to 0.84 for class person on High-Definition Analytics dataset." 
}, { "dkey": "UASOL", "dval": "The UASOL an RGB-D stereo dataset, that contains 160902 frames, filmed at 33 different scenes, each with between 2 k and 10 k frames. The frames show different paths from the perspective of a pedestrian, including sidewalks, trails, roads, etc. The images were extracted from video files with 15 fps at HD2K resolution with a size of 2280 × 1282 pixels. The dataset also provides a GPS geolocalization tag for each second of the sequences and reflects different climatological conditions. It also involved up to 4 different persons filming the dataset at different moments of the day.\n\nWe propose a train, validation and test split to train the network. \nAdditionally, we introduce a subset of 676 pairs of RGB Stereo images and their respective depth, which we extracted randomly from the entire dataset. This given test set is introduced to make comparability possible between the different methods trained with the dataset." } ]
A representation learning framework, named structure transfer machine (STM), is proposed to transfer
representation learning image text
2,020
[ "DialoGLUE", "DRCD", "STAR", "BLUE", "XCOPA", "SpeakingFaces" ]
[ "ImageNet", "OTB" ]
[ { "dkey": "ImageNet", "dval": "The ImageNet dataset contains 14,197,122 annotated images according to the WordNet hierarchy. Since 2010 the dataset is used in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), a benchmark in image classification and object detection.\nThe publicly released dataset contains a set of manually annotated training images. A set of test images is also released, with the manual annotations withheld.\nILSVRC annotations fall into one of two categories: (1) image-level annotation of a binary label for the presence or absence of an object class in the image, e.g., “there are cars in this image” but “there are no tigers,” and (2) object-level annotation of a tight bounding box and class label around an object instance in the image, e.g., “there is a screwdriver centered at position (20,25) with width of 50 pixels and height of 30 pixels”.\nThe ImageNet project does not own the copyright of the images, therefore only thumbnails and URLs of images are provided.\n\n\nTotal number of non-empty WordNet synsets: 21841\nTotal number of images: 14197122\nNumber of images with bounding box annotations: 1,034,908\nNumber of synsets with SIFT features: 1000\nNumber of images with SIFT features: 1.2 million" }, { "dkey": "OTB", "dval": "Object Tracking Benchmark (OTB) is a visual tracking benchmark that is widely used to evaluate the performance of a visual tracking algorithm. The dataset contains a total of 100 sequences and each is annotated frame-by-frame with bounding boxes and 11 challenge attributes. OTB-2013 dataset contains 51 sequences and the OTB-2015 dataset contains all 100 sequences of the OTB dataset." }, { "dkey": "DialoGLUE", "dval": "DialoGLUE is a natural language understanding benchmark for task-oriented dialogue designed to encourage dialogue research in representation-based transfer, domain adaptation, and sample-efficient task learning. It consisting of 7 task-oriented dialogue datasets covering 4 distinct natural language understanding tasks." }, { "dkey": "DRCD", "dval": "Delta Reading Comprehension Dataset (DRCD) is an open domain traditional Chinese machine reading comprehension (MRC) dataset. This dataset aimed to be a standard Chinese machine reading comprehension dataset, which can be a source dataset in transfer learning. The dataset contains 10,014 paragraphs from 2,108 Wikipedia articles and 30,000+ questions generated by annotators." }, { "dkey": "STAR", "dval": "A schema-guided task-oriented dialog dataset consisting of 127,833 utterances and knowledge base queries across 5,820 task-oriented dialogs in 13 domains that is especially designed to facilitate task and domain transfer learning in task-oriented dialog." }, { "dkey": "BLUE", "dval": "The BLUE benchmark consists of five different biomedicine text-mining tasks with ten corpora. These tasks cover a diverse range of text genres (biomedical literature and clinical notes), dataset sizes, and degrees of difficulty and, more importantly, highlight common biomedicine text-mining challenges." }, { "dkey": "XCOPA", "dval": "The Cross-lingual Choice of Plausible Alternatives (XCOPA) dataset is a benchmark to evaluate the ability of machine learning models to transfer commonsense reasoning across languages. The dataset is the translation and reannotation of the English COPA (Roemmele et al. 2011) and covers 11 languages from 11 families and several areas around the globe. 
The dataset is challenging as it requires both the command of world knowledge and the ability to generalise to new languages." }, { "dkey": "SpeakingFaces", "dval": "SpeakingFaces is a publicly-available large-scale dataset developed to support multimodal machine learning research in contexts that utilize a combination of thermal, visual, and audio data streams; examples include human-computer interaction (HCI), biometric authentication, recognition systems, domain transfer, and speech recognition. SpeakingFaces is comprised of well-aligned high-resolution thermal and visual spectra image streams of fully-framed faces synchronized with audio recordings of each subject speaking approximately 100 imperative phrases." } ]
I want to train an object detector from the ParallelEye dataset.
object detection images
2,018
[ "SNIPS", "GQA", "MOT17", "SCUT-CTW1500", "DOTA", "MOT15" ]
[ "COCO", "KITTI" ]
[ { "dkey": "COCO", "dval": "The MS COCO (Microsoft Common Objects in Context) dataset is a large-scale object detection, segmentation, key-point detection, and captioning dataset. The dataset consists of 328K images.\n\nSplits:\nThe first version of MS COCO dataset was released in 2014. It contains 164K images split into training (83K), validation (41K) and test (41K) sets. In 2015 additional test set of 81K images was released, including all the previous test images and 40K new images.\n\nBased on community feedback, in 2017 the training/validation split was changed from 83K/41K to 118K/5K. The new split uses the same images and annotations. The 2017 test set is a subset of 41K images of the 2015 test set. Additionally, the 2017 release contains a new unannotated dataset of 123K images.\n\nAnnotations:\nThe dataset has annotations for\n\n\nobject detection: bounding boxes and per-instance segmentation masks with 80 object categories,\ncaptioning: natural language descriptions of the images (see MS COCO Captions),\nkeypoints detection: containing more than 200,000 images and 250,000 person instances labeled with keypoints (17 possible keypoints, such as left eye, nose, right hip, right ankle),\nstuff image segmentation – per-pixel segmentation masks with 91 stuff categories, such as grass, wall, sky (see MS COCO Stuff),\npanoptic: full scene segmentation, with 80 thing categories (such as person, bicycle, elephant) and a subset of 91 stuff categories (grass, sky, road),\ndense pose: more than 39,000 images and 56,000 person instances labeled with DensePose annotations – each labeled person is annotated with an instance id and a mapping between image pixels that belong to that person body and a template 3D model.\nThe annotations are publicly available only for training and validation images." }, { "dkey": "KITTI", "dval": "KITTI (Karlsruhe Institute of Technology and Toyota Technological Institute) is one of the most popular datasets for use in mobile robotics and autonomous driving. It consists of hours of traffic scenarios recorded with a variety of sensor modalities, including high-resolution RGB, grayscale stereo cameras, and a 3D laser scanner. Despite its popularity, the dataset itself does not contain ground truth for semantic segmentation. However, various researchers have manually annotated parts of the dataset to fit their necessities. Álvarez et al. generated ground truth for 323 images from the road detection challenge with three classes: road, vertical, and sky. Zhang et al. annotated 252 (140 for training and 112 for testing) acquisitions – RGB and Velodyne scans – from the tracking challenge for ten object categories: building, sky, road, vegetation, sidewalk, car, pedestrian, cyclist, sign/pole, and fence. Ros et al. labeled 170 training images and 46 testing images (from the visual odometry challenge) with 11 classes: building, tree, sky, car, sign, road, pedestrian, fence, pole, sidewalk, and bicyclist." }, { "dkey": "SNIPS", "dval": "The SNIPS Natural Language Understanding benchmark is a dataset of over 16,000 crowdsourced queries distributed among 7 user intents of various complexity:\n\n\nSearchCreativeWork (e.g. Find me the I, Robot television show),\nGetWeather (e.g. Is it windy in Boston, MA right now?),\nBookRestaurant (e.g. I want to book a highly rated restaurant in Paris tomorrow night),\nPlayMusic (e.g. Play the last track from Beyoncé off Spotify),\nAddToPlaylist (e.g. Add Diamonds to my roadtrip playlist),\nRateBook (e.g. 
Give 6 stars to Of Mice and Men),\nSearchScreeningEvent (e.g. Check the showtimes for Wonder Woman in Paris).\nThe training set contains of 13,084 utterances, the validation set and the test set contain 700 utterances each, with 100 queries per intent." }, { "dkey": "GQA", "dval": "The GQA dataset is a large-scale visual question answering dataset with real images from the Visual Genome dataset and balanced question-answer pairs. Each training and validation image is also associated with scene graph annotations describing the classes and attributes of those objects in the scene, and their pairwise relations. Along with the images and question-answer pairs, the GQA dataset provides two types of pre-extracted visual features for each image – convolutional grid features of size 7×7×2048 extracted from a ResNet-101 network trained on ImageNet, and object detection features of size Ndet×2048 (where Ndet is the number of detected objects in each image with a maximum of 100 per image) from a Faster R-CNN detector." }, { "dkey": "MOT17", "dval": "The Multiple Object Tracking 17 (MOT17) dataset is a dataset for multiple object tracking. Similar to its previous version MOT16, this challenge contains seven different indoor and outdoor scenes of public places with pedestrians as the objects of interest. A video for each scene is divided into two clips, one for training and the other for testing. The dataset provides detections of objects in the video frames with three detectors, namely SDP, Faster-RCNN and DPM. The challenge accepts both on-line and off-line tracking approaches, where the latter are allowed to use the future video frames to predict tracks." }, { "dkey": "SCUT-CTW1500", "dval": "The SCUT-CTW1500 dataset contains 1,500 images: 1,000 for training and 500 for testing. In particular, it provides 10,751 cropped text instance images, including 3,530 with curved text. The images are manually harvested from the Internet, image libraries such as Google Open-Image, or phone cameras. The dataset contains a lot of horizontal and multi-oriented text." }, { "dkey": "DOTA", "dval": "DOTA is a large-scale dataset for object detection in aerial images. It can be used to develop and evaluate object detectors in aerial images. The images are collected from different sensors and platforms. Each image is of the size in the range from 800 × 800 to 20,000 × 20,000 pixels and contains objects exhibiting a wide variety of scales, orientations, and shapes. The instances in DOTA images are annotated by experts in aerial image interpretation by arbitrary (8 d.o.f.) quadrilateral. We will continue to update DOTA, to grow in size and scope to reflect evolving real-world conditions. Now it has three versions:\n\nDOTA-v1.0 contains 15 common categories, 2,806 images and 188, 282 instances. The proportions of the training set, validation set, and testing set in DOTA-v1.0 are 1/2, 1/6, and 1/3, respectively.\n\nDOTA-v1.5 uses the same images as DOTA-v1.0, but the extremely small instances (less than 10 pixels) are also annotated. Moreover, a new category, ”container crane” is added. It contains 403,318 instances in total. The number of images and dataset splits are the same as DOTA-v1.0. This version was released for the DOAI Challenge 2019 on Object Detection in Aerial Images in conjunction with IEEE CVPR 2019.\n\nDOTA-v2.0 collects more Google Earth, GF-2 Satellite, and aerial images. There are 18 common categories, 11,268 images and 1,793,658 instances in DOTA-v2.0. 
Compared to DOTA-v1.5, it further adds the new categories of ”airport” and ”helipad”. The 11,268 images of DOTA are split into training, validation, test-dev, and test-challenge sets. To avoid the problem of overfitting, the proportion of training and validation set is smaller than the test set. Furthermore, we have two test sets, namely test-dev and test-challenge. Training contains 1,830 images and 268,627 instances. Validation contains 593 images and 81,048 instances. We released the images and ground truths for training and validation sets. Test-dev contains 2,792 images and 353,346 instances. We released the images but not the ground truths. Test-challenge contains 6,053 images and 1,090,637 instances." }, { "dkey": "MOT15", "dval": "MOT2015 is a dataset for multiple object tracking. It contains 11 different indoor and outdoor scenes of public places with pedestrians as the objects of interest, where camera motion, camera angle and imaging condition vary greatly. The dataset provides detections generated by the ACF-based detector." } ]
A simple yet effective training strategy for person re-identification.
person re-identification images
2,018
[ "SYSU-MM01", "Airport", "Partial-iLIDS", "CUHK02", "DocBank", "IMDB-BINARY" ]
[ "DukeMTMC-reID", "Market-1501", "CUHK03" ]
[ { "dkey": "DukeMTMC-reID", "dval": "The DukeMTMC-reID (Duke Multi-Tracking Multi-Camera ReIDentification) dataset is a subset of the DukeMTMC for image-based person re-ID. The dataset is created from high-resolution videos from 8 different cameras. It is one of the largest pedestrian image datasets wherein images are cropped by hand-drawn bounding boxes. The dataset consists 16,522 training images of 702 identities, 2,228 query images of the other 702 identities and 17,661 gallery images.\n\nNOTE: This dataset has been retracted." }, { "dkey": "Market-1501", "dval": "Market-1501 is a large-scale public benchmark dataset for person re-identification. It contains 1501 identities which are captured by six different cameras, and 32,668 pedestrian image bounding-boxes obtained using the Deformable Part Models pedestrian detector. Each person has 3.6 images on average at each viewpoint. The dataset is split into two parts: 750 identities are utilized for training and the remaining 751 identities are used for testing. In the official testing protocol 3,368 query images are selected as probe set to find the correct match across 19,732 reference gallery images." }, { "dkey": "CUHK03", "dval": "The CUHK03 consists of 14,097 images of 1,467 different identities, where 6 campus cameras were deployed for image collection and each identity is captured by 2 campus cameras. This dataset provides two types of annotations, one by manually labelled bounding boxes and the other by bounding boxes produced by an automatic detector. The dataset also provides 20 random train/test splits in which 100 identities are selected for testing and the rest for training" }, { "dkey": "SYSU-MM01", "dval": "The SYSU-MM01 is a dataset collected for the Visible-Infrared Re-identification problem. The images in the dataset were obtained from 491 different persons by recording them using 4 RGB and 2 infrared cameras. Within the dataset, the persons are divided into 3 fixed splits to create training, validation and test sets. In the training set, there are 20284 RGB and 9929 infrared images of 296 persons. The validation set contains 1974 RGB and 1980 infrared images of 99 persons. The testing set consists of the images of 96 persons where 3803 infrared images are used as query and 301 randomly selected RGB images are used as gallery." }, { "dkey": "Airport", "dval": "The Airport dataset is a dataset for person re-identification which consists of 39,902 images and 9,651 identities across six cameras." }, { "dkey": "Partial-iLIDS", "dval": "Partial iLIDS is a dataset for occluded person person re-identification. It contains a total of 476 images of 119 people captured by 4 non-overlapping cameras. Some images contain people occluded by other individuals or luggage." }, { "dkey": "CUHK02", "dval": "CUHK02 is a dataset for person re-identification. It contains 1,816 identities from two disjoint camera views. Each identity has two samples per camera view making a total of 7,264 images. It is used for Person Re-identification." }, { "dkey": "DocBank", "dval": "A benchmark dataset that contains 500K document pages with fine-grained token-level annotations for document layout analysis. DocBank is constructed using a simple yet effective way with weak supervision from the \\LaTeX{} documents available on the arXiv.com." }, { "dkey": "IMDB-BINARY", "dval": "IMDB-BINARY is a movie collaboration dataset that consists of the ego-networks of 1,000 actors/actresses who played roles in movies in IMDB. 
In each graph, nodes represent actors/actress, and there is an edge between them if they appear in the same movie. These graphs are derived from the Action and Romance genres." } ]
In this paper, we systematically analyze and compare several neural network designs (and their variations)
sentence pair modeling text paragraph-level
2,018
[ "SI-SCORE", "THEODORE", "EYEDIAP", "DOTmark", "DAGM2007" ]
[ "WikiQA", "SNLI" ]
[ { "dkey": "WikiQA", "dval": "The WikiQA corpus is a publicly available set of question and sentence pairs, collected and annotated for research on open-domain question answering. In order to reflect the true information need of general users, Bing query logs were used as the question source. Each question is linked to a Wikipedia page that potentially has the answer. Because the summary section of a Wikipedia page provides the basic and usually most important information about the topic, sentences in this section were used as the candidate answers. The corpus includes 3,047 questions and 29,258 sentences, where 1,473 sentences were labeled as answer sentences to their corresponding questions." }, { "dkey": "SNLI", "dval": "The SNLI dataset (Stanford Natural Language Inference) consists of 570k sentence-pairs manually labeled as entailment, contradiction, and neutral. Premises are image captions from Flickr30k, while hypotheses were generated by crowd-sourced annotators who were shown a premise and asked to generate entailing, contradicting, and neutral sentences. Annotators were instructed to judge the relation between sentences given that they describe the same event. Each pair is labeled as “entailment”, “neutral”, “contradiction” or “-”, where “-” indicates that an agreement could not be reached." }, { "dkey": "SI-SCORE", "dval": "A synthetic dataset uses for a systematic analysis across common factors of variation." }, { "dkey": "THEODORE", "dval": "Recent work about synthetic indoor datasets from perspective views has shown significant improvements of object detection results with Convolutional Neural Networks(CNNs). In this paper, we introduce THEODORE: a novel, large-scale indoor dataset containing 100,000 high- resolution diversified fisheye images with 14 classes. To this end, we create 3D virtual environments of living rooms, different human characters and interior textures. Beside capturing fisheye images from virtual environments we create annotations for semantic segmentation, instance masks and bounding boxes for object detection tasks. We compare our synthetic dataset to state of the art real-world datasets for omnidirectional images. Based on MS COCO weights, we show that our dataset is well suited for fine-tuning CNNs for object detection. Through a high generalization of our models by means of image synthesis and domain randomization we reach an AP up to 0.84 for class person on High-Definition Analytics dataset." }, { "dkey": "EYEDIAP", "dval": "The EYEDIAP dataset is a dataset for gaze estimation from remote RGB, and RGB-D (standard vision and depth), cameras. The recording methodology was designed by systematically including, and isolating, most of the variables which affect the remote gaze estimation algorithms:\n\n\nHead pose variations.\nPerson variation.\nChanges in ambient and sensing condition.\nTypes of target: screen or 3D object." }, { "dkey": "DOTmark", "dval": "DOTmark is a benchmark for discrete optimal transport, which is designed to serve as a neutral collection of problems, where discrete optimal transport methods can be tested, compared to one another, and brought to their limits on large-scale instances. It consists of a variety of grayscale images, in various resolutions and classes, such as several types of randomly generated images, classical test images and real data from microscopy." }, { "dkey": "DAGM2007", "dval": "This is a synthetic dataset for defect detection on textured surfaces. 
It was originally created for a competition at the 2007 symposium of the DAGM (Deutsche Arbeitsgemeinschaft für Mustererkennung e.V., the German chapter of the International Association for Pattern Recognition). The competition was hosted together with the GNSS (German Chapter of the European Neural Network Society).\n\nAfter the competition, the dataset has been used as a test dataset in multiple projects and research papers. It is publicly available from the University of Heidelberg website (Heidelberg Collaboratory for Image Processing).\n\nThe data is artificially generated, but similar to real world problems. The first six out of ten datasets, denoted as development datasets, are supposed to be used for algorithm development. The remaining four datasets, which are referred to as competition datasets, can be used to evaluate the performance. Researchers should consider not using or analyzing the competition datasets before the development is completed as a code of honour." } ]
We study how the BERT model architecture can be combined with bidirectional LSTM to create a joint modeling framework for
question answering text paragraph-level
2,020
[ "BDD100K", "RoboNet", "MIMIC-CXR", "BLURB", "NAB" ]
[ "GLUE", "SQuAD" ]
[ { "dkey": "GLUE", "dval": "General Language Understanding Evaluation (GLUE) benchmark is a collection of nine natural language understanding tasks, including single-sentence tasks CoLA and SST-2, similarity and paraphrasing tasks MRPC, STS-B and QQP, and natural language inference tasks MNLI, QNLI, RTE and WNLI." }, { "dkey": "SQuAD", "dval": "The Stanford Question Answering Dataset (SQuAD) is a collection of question-answer pairs derived from Wikipedia articles. In SQuAD, the correct answers of questions can be any sequence of tokens in the given text. Because the questions and answers are produced by humans through crowdsourcing, it is more diverse than some other question-answering datasets. SQuAD 1.1 contains 107,785 question-answer pairs on 536 articles. SQuAD2.0 (open-domain SQuAD, SQuAD-Open), the latest version, combines the 100,000 questions in SQuAD1.1 with over 50,000 un-answerable questions written adversarially by crowdworkers in forms that are similar to the answerable ones." }, { "dkey": "BDD100K", "dval": "Datasets drive vision progress, yet existing driving datasets are impoverished in terms of visual content and supported tasks to study multitask learning for autonomous driving. Researchers are usually constrained to study a small set of problems on one dataset, while real-world computer vision applications require performing tasks of various complexities. We construct BDD100K, the largest driving video dataset with 100K videos and 10 tasks to evaluate the exciting progress of image recognition algorithms on autonomous driving. The dataset possesses geographic, environmental, and weather diversity, which is useful for training models that are less likely to be surprised by new conditions. Based on this diverse dataset, we build a benchmark for heterogeneous multitask learning and study how to solve the tasks together. Our experiments show that special training strategies are needed for existing models to perform such heterogeneous tasks. BDD100K opens the door for future studies in this important venue. More detail is at the dataset home page." }, { "dkey": "RoboNet", "dval": "An open database for sharing robotic experience, which provides an initial pool of 15 million video frames, from 7 different robot platforms, and study how it can be used to learn generalizable models for vision-based robotic manipulation." }, { "dkey": "MIMIC-CXR", "dval": "MIMIC-CXR from Massachusetts Institute of Technology presents 371,920 chest X-rays associated with 227,943 imaging studies from 65,079 patients. The studies were performed at Beth Israel Deaconess Medical Center in Boston, MA." }, { "dkey": "BLURB", "dval": "BLURB is a collection of resources for biomedical natural language processing. In general domains such as newswire and the Web, comprehensive benchmarks and leaderboards such as GLUE have greatly accelerated progress in open-domain NLP. In biomedicine, however, such resources are ostensibly scarce. In the past, there have been a plethora of shared tasks in biomedical NLP, such as BioCreative, BioNLP Shared Tasks, SemEval, and BioASQ, to name just a few. These efforts have played a significant role in fueling interest and progress by the research community, but they typically focus on individual tasks. The advent of neural language models such as BERTs provides a unifying foundation to leverage transfer learning from unlabeled text to support a wide range of NLP applications. 
To accelerate progress in biomedical pretraining strategies and task-specific methods, it is thus imperative to create a broad-coverage benchmark encompassing diverse biomedical tasks.\n\nInspired by prior efforts toward this direction (e.g., BLUE), BLURB (short for Biomedical Language Understanding and Reasoning Benchmark) was created. BLURB comprises of a comprehensive benchmark for PubMed-based biomedical NLP applications, as well as a leaderboard for tracking progress by the community. BLURB includes thirteen publicly available datasets in six diverse tasks. To avoid placing undue emphasis on tasks with many available datasets, such as named entity recognition (NER), BLURB reports the macro average across all tasks as the main score. The BLURB leaderboard is model-agnostic. Any system capable of producing the test predictions using the same training and development data can participate. The main goal of BLURB is to lower the entry barrier in biomedical NLP and help accelerate progress in this vitally important field for positive societal and human impact." }, { "dkey": "NAB", "dval": "The First Temporal Benchmark Designed to Evaluate Real-time Anomaly Detectors Benchmark\n\nThe growth of the Internet of Things has created an abundance of streaming data. Finding anomalies in this data can provide valuable insights into opportunities or failures. Yet it’s difficult to achieve, due to the need to process data in real time, continuously learn and make predictions. How do we evaluate and compare various real-time anomaly detection techniques? \n\nThe Numenta Anomaly Benchmark (NAB) provides a standard, open source framework for evaluating real-time anomaly detection algorithms on streaming data. Through a controlled, repeatable environment of open-source tools, NAB rewards detectors that find anomalies as soon as possible, trigger no false alarms, and automatically adapt to any changing statistics. \n\nNAB comprises two main components: a scoring system designed for streaming data and a dataset with labeled, real-world time-series data." } ]
I want to compare different methods for non-extractive commonsense QA.
commonsense qa text
2,019
[ "TVQA", "UASOL", "ACDC", "ATOMIC", "MLQA", "SNIPS" ]
[ "ConceptNet", "CommonsenseQA" ]
[ { "dkey": "ConceptNet", "dval": "ConceptNet is a knowledge graph that connects words and phrases of natural language with labeled edges. Its knowledge is collected from many sources that include expert-created resources, crowd-sourcing, and games with a purpose. It is designed to represent the general knowledge involved in understanding language, improving natural language applications by allowing the application to better understand the meanings behind the words people use." }, { "dkey": "CommonsenseQA", "dval": "The CommonsenseQA is a dataset for commonsense question answering task. The dataset consists of 12,247 questions with 5 choices each.\nThe dataset was generated by Amazon Mechanical Turk workers in the following process (an example is provided in parentheses):\n\n\na crowd worker observes a source concept from ConceptNet (“River”) and three target concepts (“Waterfall”, “Bridge”, “Valley”) that are all related by the same ConceptNet relation (“AtLocation”),\nthe worker authors three questions, one per target concept, such that only that particular target concept is the answer, while the other two distractor concepts are not, (“Where on a river can you hold a cup upright to catch water on a sunny day?”, “Where can I stand on a river to see water falling without getting wet?”, “I’m crossing the river, my feet are wet but my body is dry, where am I?”)\nfor each question, another worker chooses one additional distractor from Concept Net (“pebble”, “stream”, “bank”), and the author another distractor (“mountain”, “bottom”, “island”) manually." }, { "dkey": "TVQA", "dval": "The TVQA dataset is a large-scale vido dataset for video question answering. It is based on 6 popular TV shows (Friends, The Big Bang Theory, How I Met Your Mother, House M.D., Grey's Anatomy, Castle). It includes 152,545 QA pairs from 21,793 TV show clips. The QA pairs are split into the ratio of 8:1:1 for training, validation, and test sets. The TVQA dataset provides the sequence of video frames extracted at 3 FPS, the corresponding subtitles with the video clips, and the query consisting of a question and four answer candidates. Among the four answer candidates, there is only one correct answer." }, { "dkey": "UASOL", "dval": "The UASOL an RGB-D stereo dataset, that contains 160902 frames, filmed at 33 different scenes, each with between 2 k and 10 k frames. The frames show different paths from the perspective of a pedestrian, including sidewalks, trails, roads, etc. The images were extracted from video files with 15 fps at HD2K resolution with a size of 2280 × 1282 pixels. The dataset also provides a GPS geolocalization tag for each second of the sequences and reflects different climatological conditions. It also involved up to 4 different persons filming the dataset at different moments of the day.\n\nWe propose a train, validation and test split to train the network. \nAdditionally, we introduce a subset of 676 pairs of RGB Stereo images and their respective depth, which we extracted randomly from the entire dataset. This given test set is introduced to make comparability possible between the different methods trained with the dataset." 
}, { "dkey": "ACDC", "dval": "The goal of the Automated Cardiac Diagnosis Challenge (ACDC) challenge is to:\n\n\ncompare the performance of automatic methods on the segmentation of the left ventricular endocardium and epicardium as the right ventricular endocardium for both end diastolic and end systolic phase instances;\ncompare the performance of automatic methods for the classification of the examinations in five classes (normal case, heart failure with infarction, dilated cardiomyopathy, hypertrophic cardiomyopathy, abnormal right ventricle).\n\nThe overall ACDC dataset was created from real clinical exams acquired at the University Hospital of Dijon. Acquired data were fully anonymized and handled within the regulations set by the local ethical committee of the Hospital of Dijon (France). Our dataset covers several well-defined pathologies with enough cases to (1) properly train machine learning methods and (2) clearly assess the variations of the main physiological parameters obtained from cine-MRI (in particular diastolic volume and ejection fraction). The dataset is composed of 150 exams (all from different patients) divided into 5 evenly distributed subgroups (4 pathological plus 1 healthy subject groups) as described below. Furthermore, each patient comes with the following additional information : weight, height, as well as the diastolic and systolic phase instants.\n\nThe database is made available to participants through two datasets from the dedicated online evaluation website after a personal registration: i) a training dataset of 100 patients along with the corresponding manual references based on the analysis of one clinical expert; ii) a testing dataset composed of 50 new patients, without manual annotations but with the patient information given above. The raw input images are provided through the Nifti format." }, { "dkey": "ATOMIC", "dval": "ATOMIC is an atlas of everyday commonsense reasoning, organized through 877k textual descriptions of inferential knowledge. Compared to existing resources that center around taxonomic knowledge, ATOMIC focuses on inferential knowledge organized as typed if-then relations with variables (e.g., \"if X pays Y a compliment, then Y will likely return the compliment\")." }, { "dkey": "MLQA", "dval": "MLQA (MultiLingual Question Answering) is a benchmark dataset for evaluating cross-lingual question answering performance. MLQA consists of over 5K extractive QA instances (12K in English) in SQuAD format in seven languages - English, Arabic, German, Spanish, Hindi, Vietnamese and Simplified Chinese. MLQA is highly parallel, with QA instances parallel between 4 different languages on average." }, { "dkey": "SNIPS", "dval": "The SNIPS Natural Language Understanding benchmark is a dataset of over 16,000 crowdsourced queries distributed among 7 user intents of various complexity:\n\n\nSearchCreativeWork (e.g. Find me the I, Robot television show),\nGetWeather (e.g. Is it windy in Boston, MA right now?),\nBookRestaurant (e.g. I want to book a highly rated restaurant in Paris tomorrow night),\nPlayMusic (e.g. Play the last track from Beyoncé off Spotify),\nAddToPlaylist (e.g. Add Diamonds to my roadtrip playlist),\nRateBook (e.g. Give 6 stars to Of Mice and Men),\nSearchScreeningEvent (e.g. Check the showtimes for Wonder Woman in Paris).\nThe training set contains of 13,084 utterances, the validation set and the test set contain 700 utterances each, with 100 queries per intent." } ]
I want to learn an unsupervised sampling method for point clouds.
point cloud sampling point clouds
2,019
[ "Completion3D", "Flightmare Simulator", "S3DIS", "2D-3D-S", "DublinCity" ]
[ "ShapeNet", "ModelNet" ]
[ { "dkey": "ShapeNet", "dval": "ShapeNet is a large scale repository for 3D CAD models developed by researchers from Stanford University, Princeton University and the Toyota Technological Institute at Chicago, USA. The repository contains over 300M models with 220,000 classified into 3,135 classes arranged using WordNet hypernym-hyponym relationships. ShapeNet Parts subset contains 31,693 meshes categorised into 16 common object classes (i.e. table, chair, plane etc.). Each shapes ground truth contains 2-5 parts (with a total of 50 part classes)." }, { "dkey": "ModelNet", "dval": "The ModelNet40 dataset contains synthetic object point clouds. As the most widely used benchmark for point cloud analysis, ModelNet40 is popular because of its various categories, clean shapes, well-constructed dataset, etc. The original ModelNet40 consists of 12,311 CAD-generated meshes in 40 categories (such as airplane, car, plant, lamp), of which 9,843 are used for training while the rest 2,468 are reserved for testing. The corresponding point cloud data points are uniformly sampled from the mesh surfaces, and then further preprocessed by moving to the origin and scaling into a unit sphere." }, { "dkey": "Completion3D", "dval": "The Completion3D benchmark is a dataset for evaluating state-of-the-art 3D Object Point Cloud Completion methods. Ggiven a partial 3D object point cloud the goal is to infer a complete 3D point cloud for the object." }, { "dkey": "Flightmare Simulator", "dval": "Flightmare is composed of two main components: a configurable rendering engine built on Unity and a flexible physics engine for dynamics simulation. Those two components are totally decoupled and can run independently from each other. Flightmare comes with several desirable features: (i) a large multi-modal sensor suite, including an interface to extract the 3D point-cloud of the scene; (ii) an API for reinforcement learning which can simulate hundreds of quadrotors in parallel; and (iii) an integration with a virtual-reality headset for interaction with the simulated environment. Flightmare can be used for various applications, including path-planning, reinforcement learning, visual-inertial odometry, deep learning, human-robot interaction, etc." }, { "dkey": "S3DIS", "dval": "The Stanford 3D Indoor Scene Dataset (S3DIS) dataset contains 6 large-scale indoor areas with 271 rooms. Each point in the scene point cloud is annotated with one of the 13 semantic categories." }, { "dkey": "2D-3D-S", "dval": "The 2D-3D-S dataset provides a variety of mutually registered modalities from 2D, 2.5D and 3D domains, with instance-level semantic and geometric annotations. It covers over 6,000 m2 collected in 6 large-scale indoor areas that originate from 3 different buildings. It contains over 70,000 RGB images, along with the corresponding depths, surface normals, semantic annotations, global XYZ images (all in forms of both regular and 360° equirectangular images) as well as camera information. It also includes registered raw and semantically annotated 3D meshes and point clouds. The dataset enables development of joint and cross-modal learning models and potentially unsupervised approaches utilizing the regularities present in large-scale indoor spaces." }, { "dkey": "DublinCity", "dval": "A novel benchmark dataset that includes a manually annotated point cloud for over 260 million laser scanning points into 100'000 (approx.) assets from Dublin LiDAR point cloud [12] in 2015. 
Objects are labelled into 13 classes using hierarchical levels of detail from large (i.e., building, vegetation and ground) to refined (i.e., window, door and tree) elements." } ]
I want to train a residual neural network for 3D
3d shape model classification point clouds
2,017
[ "Indian Pines", "NVGesture", "SNIPS", "I-HAZE", "UNITOPATHO" ]
[ "ImageNet", "ModelNet" ]
[ { "dkey": "ImageNet", "dval": "The ImageNet dataset contains 14,197,122 annotated images according to the WordNet hierarchy. Since 2010 the dataset is used in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), a benchmark in image classification and object detection.\nThe publicly released dataset contains a set of manually annotated training images. A set of test images is also released, with the manual annotations withheld.\nILSVRC annotations fall into one of two categories: (1) image-level annotation of a binary label for the presence or absence of an object class in the image, e.g., “there are cars in this image” but “there are no tigers,” and (2) object-level annotation of a tight bounding box and class label around an object instance in the image, e.g., “there is a screwdriver centered at position (20,25) with width of 50 pixels and height of 30 pixels”.\nThe ImageNet project does not own the copyright of the images, therefore only thumbnails and URLs of images are provided.\n\n\nTotal number of non-empty WordNet synsets: 21841\nTotal number of images: 14197122\nNumber of images with bounding box annotations: 1,034,908\nNumber of synsets with SIFT features: 1000\nNumber of images with SIFT features: 1.2 million" }, { "dkey": "ModelNet", "dval": "The ModelNet40 dataset contains synthetic object point clouds. As the most widely used benchmark for point cloud analysis, ModelNet40 is popular because of its various categories, clean shapes, well-constructed dataset, etc. The original ModelNet40 consists of 12,311 CAD-generated meshes in 40 categories (such as airplane, car, plant, lamp), of which 9,843 are used for training while the rest 2,468 are reserved for testing. The corresponding point cloud data points are uniformly sampled from the mesh surfaces, and then further preprocessed by moving to the origin and scaling into a unit sphere." }, { "dkey": "Indian Pines", "dval": "Indian Pines is a Hyperspectral image segmentation dataset. The input data consists of hyperspectral bands over a single landscape in Indiana, US, (Indian Pines data set) with 145×145 pixels. For each pixel, the data set contains 220 spectral reflectance bands which represent different portions of the electromagnetic spectrum in the wavelength range 0.4−2.5⋅10−6." }, { "dkey": "NVGesture", "dval": "The NVGesture dataset focuses on touchless driver controlling. It contains 1532 dynamic gestures fallen into 25 classes. It includes 1050 samples for training and 482 for testing. The videos are recorded with three modalities (RGB, depth, and infrared)." }, { "dkey": "SNIPS", "dval": "The SNIPS Natural Language Understanding benchmark is a dataset of over 16,000 crowdsourced queries distributed among 7 user intents of various complexity:\n\n\nSearchCreativeWork (e.g. Find me the I, Robot television show),\nGetWeather (e.g. Is it windy in Boston, MA right now?),\nBookRestaurant (e.g. I want to book a highly rated restaurant in Paris tomorrow night),\nPlayMusic (e.g. Play the last track from Beyoncé off Spotify),\nAddToPlaylist (e.g. Add Diamonds to my roadtrip playlist),\nRateBook (e.g. Give 6 stars to Of Mice and Men),\nSearchScreeningEvent (e.g. Check the showtimes for Wonder Woman in Paris).\nThe training set contains of 13,084 utterances, the validation set and the test set contain 700 utterances each, with 100 queries per intent." }, { "dkey": "I-HAZE", "dval": "The I-Haze dataset contains 25 indoor hazy images (size 2833×4657 pixels) training. 
It has 5 hazy images for validation along with their corresponding ground truth images." }, { "dkey": "UNITOPATHO", "dval": "Histopathological characterization of colorectal polyps allows to tailor patients' management and follow up with the ultimate aim of avoiding or promptly detecting an invasive carcinoma. Colorectal polyps characterization relies on the histological analysis of tissue samples to determine the polyps malignancy and dysplasia grade. Deep neural networks achieve outstanding accuracy in medical patterns recognition, however they require large sets of annotated training images. We introduce UniToPatho, an annotated dataset of 9536 hematoxylin and eosin stained patches extracted from 292 whole-slide images, meant for training deep neural networks for colorectal polyps classification and adenomas grading. The slides are acquired through a Hamamatsu Nanozoomer S210 scanner at 20× magnification (0.4415 μm/px)" } ]
We introduce a new neural network architecture for variational autoencoding, Associative Compression Networks (AC
image generation
2,018
[ "30MQA", "UNITOPATHO", "BVI-DVC", "AOLP", "SI-SCORE" ]
[ "ImageNet", "CIFAR-10", "CelebA" ]
[ { "dkey": "ImageNet", "dval": "The ImageNet dataset contains 14,197,122 annotated images according to the WordNet hierarchy. Since 2010 the dataset is used in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), a benchmark in image classification and object detection.\nThe publicly released dataset contains a set of manually annotated training images. A set of test images is also released, with the manual annotations withheld.\nILSVRC annotations fall into one of two categories: (1) image-level annotation of a binary label for the presence or absence of an object class in the image, e.g., “there are cars in this image” but “there are no tigers,” and (2) object-level annotation of a tight bounding box and class label around an object instance in the image, e.g., “there is a screwdriver centered at position (20,25) with width of 50 pixels and height of 30 pixels”.\nThe ImageNet project does not own the copyright of the images, therefore only thumbnails and URLs of images are provided.\n\n\nTotal number of non-empty WordNet synsets: 21841\nTotal number of images: 14197122\nNumber of images with bounding box annotations: 1,034,908\nNumber of synsets with SIFT features: 1000\nNumber of images with SIFT features: 1.2 million" }, { "dkey": "CIFAR-10", "dval": "The CIFAR-10 dataset (Canadian Institute for Advanced Research, 10 classes) is a subset of the Tiny Images dataset and consists of 60000 32x32 color images. The images are labelled with one of 10 mutually exclusive classes: airplane, automobile (but not truck or pickup truck), bird, cat, deer, dog, frog, horse, ship, and truck (but not pickup truck). There are 6000 images per class with 5000 training and 1000 testing images per class.\n\nThe criteria for deciding whether an image belongs to a class were as follows:\n\n\nThe class name should be high on the list of likely answers to the question “What is in this picture?”\nThe image should be photo-realistic. Labelers were instructed to reject line drawings.\nThe image should contain only one prominent instance of the object to which the class refers.\nThe object may be partially occluded or seen from an unusual viewpoint as long as its identity is still clear to the labeler." }, { "dkey": "CelebA", "dval": "CelebFaces Attributes dataset contains 202,599 face images of the size 178×218 from 10,177 celebrities, each annotated with 40 binary labels indicating facial attributes like hair color, gender and age." }, { "dkey": "30MQA", "dval": "An enormous question answer pair corpus produced by applying a novel neural network architecture on the knowledge base Freebase to transduce facts into natural language questions." }, { "dkey": "UNITOPATHO", "dval": "Histopathological characterization of colorectal polyps allows to tailor patients' management and follow up with the ultimate aim of avoiding or promptly detecting an invasive carcinoma. Colorectal polyps characterization relies on the histological analysis of tissue samples to determine the polyps malignancy and dysplasia grade. Deep neural networks achieve outstanding accuracy in medical patterns recognition, however they require large sets of annotated training images. We introduce UniToPatho, an annotated dataset of 9536 hematoxylin and eosin stained patches extracted from 292 whole-slide images, meant for training deep neural networks for colorectal polyps classification and adenomas grading. 
The slides are acquired through a Hamamatsu Nanozoomer S210 scanner at 20× magnification (0.4415 μm/px)" }, { "dkey": "BVI-DVC", "dval": "Contains 800 sequences at various spatial resolutions from 270p to 2160p and has been evaluated on ten existing network architectures for four different coding tools." }, { "dkey": "AOLP", "dval": "The application-oriented license plate (AOLP) benchmark database has 2049 images of Taiwan license plates. This database is categorized into three subsets: access control (AC) with 681 samples, traffic law enforcement (LE) with 757 samples, and road patrol (RP) with 611 samples. AC refers to the cases that a vehicle passes a fixed passage with a lower speed or full stop. This is the easiest situation. The images are captured under different illuminations and different weather conditions. LE refers to the cases that a vehicle violates traffic laws and is captured by roadside camera. The background are really cluttered, with road sign and multiple plates in one image. RP refers to the cases that the camera is held on a patrolling vehicle, and the images are taken with arbitrary viewpoints and distances." }, { "dkey": "SI-SCORE", "dval": "A synthetic dataset uses for a systematic analysis across common factors of variation." } ]
I want to use a neural network to answer questions based on
text-based question answering
2,020
[ "30MQA", "ARCD", "CommonsenseQA", "TVQA" ]
[ "WikiQA", "MovieQA", "QNLI" ]
[ { "dkey": "WikiQA", "dval": "The WikiQA corpus is a publicly available set of question and sentence pairs, collected and annotated for research on open-domain question answering. In order to reflect the true information need of general users, Bing query logs were used as the question source. Each question is linked to a Wikipedia page that potentially has the answer. Because the summary section of a Wikipedia page provides the basic and usually most important information about the topic, sentences in this section were used as the candidate answers. The corpus includes 3,047 questions and 29,258 sentences, where 1,473 sentences were labeled as answer sentences to their corresponding questions." }, { "dkey": "MovieQA", "dval": "The MovieQA dataset is a dataset for movie question answering, designed to evaluate automatic story comprehension from both video and text. The data set consists of almost 15,000 multiple choice question answers obtained from over 400 movies and features high semantic diversity. Each question comes with a set of five highly plausible answers, only one of which is correct. The questions can be answered using multiple sources of information: movie clips, plots, subtitles, and, for a subset, scripts and DVS." }, { "dkey": "QNLI", "dval": "The QNLI (Question-answering NLI) dataset is a Natural Language Inference dataset automatically derived from the Stanford Question Answering Dataset v1.1 (SQuAD). SQuAD v1.1 consists of question-paragraph pairs, where one of the sentences in the paragraph (drawn from Wikipedia) contains the answer to the corresponding question (written by an annotator). The dataset was converted into sentence pair classification by forming a pair between each question and each sentence in the corresponding context, and filtering out pairs with low lexical overlap between the question and the context sentence. The task is to determine whether the context sentence contains the answer to the question. This modified version of the original task removes the requirement that the model select the exact answer, but also removes the simplifying assumptions that the answer is always present in the input and that lexical overlap is a reliable cue. The QNLI dataset is part of the GLUE benchmark." }, { "dkey": "30MQA", "dval": "An enormous question answer pair corpus produced by applying a novel neural network architecture on the knowledge base Freebase to transduce facts into natural language questions." }, { "dkey": "ARCD", "dval": "Composed of 1,395 questions posed by crowdworkers on Wikipedia articles, and a machine translation of the Stanford Question Answering Dataset (Arabic-SQuAD)." }, { "dkey": "CommonsenseQA", "dval": "CommonsenseQA is a dataset for the commonsense question answering task. 
The dataset consists of 12,247 questions with 5 choices each.\nThe dataset was generated by Amazon Mechanical Turk workers in the following process (an example is provided in parentheses):\n\n\na crowd worker observes a source concept from ConceptNet (“River”) and three target concepts (“Waterfall”, “Bridge”, “Valley”) that are all related by the same ConceptNet relation (“AtLocation”),\nthe worker authors three questions, one per target concept, such that only that particular target concept is the answer, while the other two distractor concepts are not, (“Where on a river can you hold a cup upright to catch water on a sunny day?”, “Where can I stand on a river to see water falling without getting wet?”, “I’m crossing the river, my feet are wet but my body is dry, where am I?”)\nfor each question, another worker chooses one additional distractor from ConceptNet (“pebble”, “stream”, “bank”), and the author adds another distractor (“mountain”, “bottom”, “island”) manually." }, { "dkey": "TVQA", "dval": "The TVQA dataset is a large-scale video dataset for video question answering. It is based on 6 popular TV shows (Friends, The Big Bang Theory, How I Met Your Mother, House M.D., Grey's Anatomy, Castle). It includes 152,545 QA pairs from 21,793 TV show clips. The QA pairs are split into the ratio of 8:1:1 for training, validation, and test sets. The TVQA dataset provides the sequence of video frames extracted at 3 FPS, the corresponding subtitles with the video clips, and the query consisting of a question and four answer candidates. Among the four answer candidates, there is only one correct answer." } ]
The goal of this task is to evaluate whether our proposed SWATS strategy improves model performance on language modeling
language modeling text
2,017
[ "BDD100K", "GEM", "SNLI-VE", "Syn2Real", "SuperGLUE", "BLURB" ]
[ "ImageNet", "Penn Treebank", "CIFAR-10" ]
[ { "dkey": "ImageNet", "dval": "The ImageNet dataset contains 14,197,122 annotated images according to the WordNet hierarchy. Since 2010 the dataset is used in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), a benchmark in image classification and object detection.\nThe publicly released dataset contains a set of manually annotated training images. A set of test images is also released, with the manual annotations withheld.\nILSVRC annotations fall into one of two categories: (1) image-level annotation of a binary label for the presence or absence of an object class in the image, e.g., “there are cars in this image” but “there are no tigers,” and (2) object-level annotation of a tight bounding box and class label around an object instance in the image, e.g., “there is a screwdriver centered at position (20,25) with width of 50 pixels and height of 30 pixels”.\nThe ImageNet project does not own the copyright of the images, therefore only thumbnails and URLs of images are provided.\n\n\nTotal number of non-empty WordNet synsets: 21841\nTotal number of images: 14197122\nNumber of images with bounding box annotations: 1,034,908\nNumber of synsets with SIFT features: 1000\nNumber of images with SIFT features: 1.2 million" }, { "dkey": "Penn Treebank", "dval": "The English Penn Treebank (PTB) corpus, and in particular the section of the corpus corresponding to the articles of Wall Street Journal (WSJ), is one of the most known and used corpus for the evaluation of models for sequence labelling. The task consists of annotating each word with its Part-of-Speech tag. In the most common split of this corpus, sections from 0 to 18 are used for training (38 219 sentences, 912 344 tokens), sections from 19 to 21 are used for validation (5 527 sentences, 131 768 tokens), and sections from 22 to 24 are used for testing (5 462 sentences, 129 654 tokens).\nThe corpus is also commonly used for character-level and word-level Language Modelling." }, { "dkey": "CIFAR-10", "dval": "The CIFAR-10 dataset (Canadian Institute for Advanced Research, 10 classes) is a subset of the Tiny Images dataset and consists of 60000 32x32 color images. The images are labelled with one of 10 mutually exclusive classes: airplane, automobile (but not truck or pickup truck), bird, cat, deer, dog, frog, horse, ship, and truck (but not pickup truck). There are 6000 images per class with 5000 training and 1000 testing images per class.\n\nThe criteria for deciding whether an image belongs to a class were as follows:\n\n\nThe class name should be high on the list of likely answers to the question “What is in this picture?”\nThe image should be photo-realistic. Labelers were instructed to reject line drawings.\nThe image should contain only one prominent instance of the object to which the class refers.\nThe object may be partially occluded or seen from an unusual viewpoint as long as its identity is still clear to the labeler." }, { "dkey": "BDD100K", "dval": "Datasets drive vision progress, yet existing driving datasets are impoverished in terms of visual content and supported tasks to study multitask learning for autonomous driving. Researchers are usually constrained to study a small set of problems on one dataset, while real-world computer vision applications require performing tasks of various complexities. We construct BDD100K, the largest driving video dataset with 100K videos and 10 tasks to evaluate the exciting progress of image recognition algorithms on autonomous driving. 
The dataset possesses geographic, environmental, and weather diversity, which is useful for training models that are less likely to be surprised by new conditions. Based on this diverse dataset, we build a benchmark for heterogeneous multitask learning and study how to solve the tasks together. Our experiments show that special training strategies are needed for existing models to perform such heterogeneous tasks. BDD100K opens the door for future studies in this important venue. More detail is at the dataset home page." }, { "dkey": "GEM", "dval": "Generation, Evaluation, and Metrics (GEM) is a benchmark environment for Natural Language Generation with a focus on its Evaluation, both through human annotations and automated Metrics.\n\nGEM aims to:\n\n\nmeasure NLG progress across 13 datasets spanning many NLG tasks and languages.\nprovide an in-depth analysis of data and models presented via data statements and challenge sets.\ndevelop standards for evaluation of generated text using both automated and human metrics.\n\nIt is our goal to regularly update GEM and to encourage toward more inclusive practices in dataset development by extending existing data or developing datasets for additional languages." }, { "dkey": "SNLI-VE", "dval": "Visual Entailment (VE) consists of image-sentence pairs whereby a premise is defined by an image, rather than a natural language sentence as in traditional Textual Entailment tasks. The goal of a trained VE model is to predict whether the image semantically entails the text. SNLI-VE is a dataset for VE which is based on the Stanford Natural Language Inference corpus and Flickr30k dataset." }, { "dkey": "Syn2Real", "dval": "Syn2Real, a synthetic-to-real visual domain adaptation benchmark meant to encourage further development of robust domain transfer methods. The goal is to train a model on a synthetic \"source\" domain and then update it so that its performance improves on a real \"target\" domain, without using any target annotations. It includes three tasks, illustrated in figures above: the more traditional closed-set classification task with a known set of categories; the less studied open-set classification task with unknown object categories in the target domain; and the object detection task, which involves localizing instances of objects by predicting their bounding boxes and corresponding class labels." }, { "dkey": "SuperGLUE", "dval": "SuperGLUE is a benchmark dataset designed to pose a more rigorous test of language understanding than GLUE. SuperGLUE has the same high-level motivation as GLUE: to provide a simple, hard-to-game measure of progress toward general-purpose language understanding technologies for English. SuperGLUE follows the basic design of GLUE: It consists of a public leaderboard built around eight language understanding tasks, drawing on existing data, accompanied by a single-number\nperformance metric, and an analysis toolkit. However, it improves upon GLUE in several ways:\n\n\nMore challenging tasks: SuperGLUE retains the two hardest tasks in GLUE. The remaining tasks were identified from those submitted to an open call for task proposals and were selected based on difficulty for current NLP approaches.\nMore diverse task formats: The task formats in GLUE are limited to sentence- and sentence-pair classification. 
The authors expand the set of task formats in SuperGLUE to include\ncoreference resolution and question answering (QA).\nComprehensive human baselines: the authors include human performance estimates for all benchmark tasks, which verify that substantial headroom exists between a strong BERT-based baseline and human performance.\nImproved code support: SuperGLUE is distributed with a new, modular toolkit for work on pretraining, multi-task learning, and transfer learning in NLP, built around standard tools including PyTorch (Paszke et al., 2017) and AllenNLP (Gardner et al., 2017).\nRefined usage rules: The conditions for inclusion on the SuperGLUE leaderboard were revamped to ensure fair competition, an informative leaderboard, and full credit\nassignment to data and task creators." }, { "dkey": "BLURB", "dval": "BLURB is a collection of resources for biomedical natural language processing. In general domains such as newswire and the Web, comprehensive benchmarks and leaderboards such as GLUE have greatly accelerated progress in open-domain NLP. In biomedicine, however, such resources are ostensibly scarce. In the past, there have been a plethora of shared tasks in biomedical NLP, such as BioCreative, BioNLP Shared Tasks, SemEval, and BioASQ, to name just a few. These efforts have played a significant role in fueling interest and progress by the research community, but they typically focus on individual tasks. The advent of neural language models such as BERTs provides a unifying foundation to leverage transfer learning from unlabeled text to support a wide range of NLP applications. To accelerate progress in biomedical pretraining strategies and task-specific methods, it is thus imperative to create a broad-coverage benchmark encompassing diverse biomedical tasks.\n\nInspired by prior efforts toward this direction (e.g., BLUE), BLURB (short for Biomedical Language Understanding and Reasoning Benchmark) was created. BLURB comprises of a comprehensive benchmark for PubMed-based biomedical NLP applications, as well as a leaderboard for tracking progress by the community. BLURB includes thirteen publicly available datasets in six diverse tasks. To avoid placing undue emphasis on tasks with many available datasets, such as named entity recognition (NER), BLURB reports the macro average across all tasks as the main score. The BLURB leaderboard is model-agnostic. Any system capable of producing the test predictions using the same training and development data can participate. The main goal of BLURB is to lower the entry barrier in biomedical NLP and help accelerate progress in this vitally important field for positive societal and human impact." } ]
The annotation for this task is to identify the segments in the instructional video
instructional video segmentation asr tokens
2,019
[ "COIN", "CrossTask", "Microsoft Research Multimodal Aligned Recipe Corpus", "MECCANO", "Cityscapes-VPS", "PixelHelp" ]
[ "YouCook2", "How2" ]
[ { "dkey": "YouCook2", "dval": "YouCook2 is the largest task-oriented, instructional video dataset in the vision community. It contains 2000 long untrimmed videos from 89 cooking recipes; on average, each distinct recipe has 22 videos. The procedure steps for each video are annotated with temporal boundaries and described by imperative English sentences (see the example below). The videos were downloaded from YouTube and are all in the third-person viewpoint. All the videos are unconstrained and can be performed by individual persons at their houses with unfixed cameras. YouCook2 contains rich recipe types and various cooking styles from all over the world." }, { "dkey": "How2", "dval": "The How2 dataset contains 13,500 videos, or 300 hours of speech, and is split into 185,187 training, 2022 development (dev), and 2361 test utterances. It has subtitles in English and crowdsourced Portuguese translations." }, { "dkey": "COIN", "dval": "The COIN dataset (a large-scale dataset for COmprehensive INstructional video analysis) consists of 11,827 videos related to 180 different tasks in 12 domains (e.g., vehicles, gadgets, etc.) related to our daily life. The videos are all collected from YouTube. The average length of a video is 2.36 minutes. Each video is labelled with 3.91 step segments, where each segment lasts 14.91 seconds on average. In total, the dataset contains videos of 476 hours, with 46,354 annotated segments." }, { "dkey": "CrossTask", "dval": "CrossTask dataset contains instructional videos, collected for 83 different tasks. For each task an ordered list of steps with manual descriptions is provided. The dataset is divided in two parts: 18 primary and 65 related tasks. Videos for the primary tasks are collected manually and provided with annotations for temporal step boundaries. Videos for the related tasks are collected automatically and don't have annotations." }, { "dkey": "Microsoft Research Multimodal Aligned Recipe Corpus", "dval": "To construct the MICROSOFT RESEARCH MULTIMODAL ALIGNED RECIPE CORPUS the authors first extract a large number of text and video recipes from the web. The goal is to find joint alignments between multiple text recipes and multiple video recipes for the same dish. The task is challenging, as different recipes vary in their order of instructions and use of ingredients. Moreover, video instructions can be noisy, and text and video instructions include different levels of specificity in their descriptions." }, { "dkey": "MECCANO", "dval": "The MECCANO dataset is the first dataset of egocentric videos to study human-object interactions in industrial-like settings.\nThe MECCANO dataset has been acquired in an industrial-like scenario in which subjects built a toy model of a motorbike. We considered 20 object classes which include the 16 classes categorizing the 49 components, the two tools (screwdriver and wrench), the instructions booklet and a partial_model class.\n\nAdditional details related to the MECCANO:\n\n20 different subjects in 2 countries (IT, U.K.)\nVideo Acquisition: 1920x1080 at 12.00 fps\n11 training videos and 9 validation/test videos\n8857 video segments temporally annotated indicating the verbs which describe the actions performed\n64349 active objects annotated with bounding boxes\n12 verb classes, 20 objects classes and 61 action classes" }, { "dkey": "Cityscapes-VPS", "dval": "Cityscapes-VPS is a video extension of the Cityscapes validation split. 
It provides 2500-frame panoptic labels that temporally extend the 500 Cityscapes image-panoptic labels. In total there are 3000 frames of panoptic labels, which correspond to the 5th, 10th, 15th, 20th, 25th, and 30th frames of each of the 500 videos, where all instance ids are associated over time. It not only supports the video panoptic segmentation (VPS) task, but also provides super-set annotations for video semantic segmentation (VSS) and video instance segmentation (VIS) tasks." }, { "dkey": "PixelHelp", "dval": "PixelHelp includes 187 multi-step instructions of 4 task categories defined in https://support.google.com/pixelphone and annotated by humans. This dataset includes 88 general tasks, such as configuring accounts, 38 Gmail tasks, 31 Chrome tasks, and 30 Photos-related tasks. This dataset is an updated open-source version of the original PixelHelp dataset, which was used for testing the end-to-end grounding quality of the model in the paper \"Mapping Natural Language Instructions to Mobile UI Action Sequences\". Similar accuracy is obtained on this version of the dataset." } ]
I want to learn a deep person ReID model that can handle image resolutions.
person re-identification images
2,018
[ "DukeMTMC-reID", "VGGFace2", "Partial-REID", "Occluded REID" ]
[ "Market-1501", "CUHK03" ]
[ { "dkey": "Market-1501", "dval": "Market-1501 is a large-scale public benchmark dataset for person re-identification. It contains 1501 identities which are captured by six different cameras, and 32,668 pedestrian image bounding-boxes obtained using the Deformable Part Models pedestrian detector. Each person has 3.6 images on average at each viewpoint. The dataset is split into two parts: 750 identities are utilized for training and the remaining 751 identities are used for testing. In the official testing protocol 3,368 query images are selected as probe set to find the correct match across 19,732 reference gallery images." }, { "dkey": "CUHK03", "dval": "The CUHK03 consists of 14,097 images of 1,467 different identities, where 6 campus cameras were deployed for image collection and each identity is captured by 2 campus cameras. This dataset provides two types of annotations, one by manually labelled bounding boxes and the other by bounding boxes produced by an automatic detector. The dataset also provides 20 random train/test splits in which 100 identities are selected for testing and the rest for training" }, { "dkey": "DukeMTMC-reID", "dval": "The DukeMTMC-reID (Duke Multi-Tracking Multi-Camera ReIDentification) dataset is a subset of the DukeMTMC for image-based person re-ID. The dataset is created from high-resolution videos from 8 different cameras. It is one of the largest pedestrian image datasets wherein images are cropped by hand-drawn bounding boxes. The dataset consists 16,522 training images of 702 identities, 2,228 query images of the other 702 identities and 17,661 gallery images.\n\nNOTE: This dataset has been retracted." }, { "dkey": "VGGFace2", "dval": "The VGGFace2 dataset is made of around 3.31 million images divided into 9131 classes, each representing a different person identity. The dataset is divided into two splits, one for the training and one for test. The latter contains around 170000 images divided into 500 identities while all the other images belong to the remaining 8631 classes available for training. While constructing the datasets, the authors focused their efforts on reaching a very low label noise and a high pose and age diversity thus, making the VGGFace2 dataset a suitable choice to train state-of-the-art deep learning models on face-related tasks. The images of the training set have an average resolution of 137x180 pixels, with less than 1% at a resolution below 32 pixels (considering the shortest side).\n\nCAUTION: Authors note that the distribution of identities in the VGG-Face dataset may not be representative of the global human population. Please be careful of unintended societal, gender, racial and other biases when training or deploying models trained on this data." }, { "dkey": "Partial-REID", "dval": "Partial REID is a specially designed partial person reidentification dataset that includes 600 images from 60 people, with 5 full-body images and 5 occluded images per person. These images were collected on a university campus by 6 cameras from different viewpoints, backgrounds and different types of occlusion. The examples of partial persons in the Partial REID dataset are shown in the Figure." }, { "dkey": "Occluded REID", "dval": "Occluded REID is an occluded person dataset captured by mobile cameras, consisting of 2,000 images of 200 occluded persons (see Fig. (c)). Each identity has 5 full-body person images and 5 occluded person images with different types of occlusion." } ]
We propose a novel pre-training task, Pseudo-Masked Language Modeling (P
pre-training language models text
2,020
[ "NumerSense", "CLUECorpus2020", "THEODORE", "FarsTail", "WikiReading", "KP20k" ]
[ "XSum", "BookCorpus", "GLUE", "SQuAD" ]
[ { "dkey": "XSum", "dval": "The Extreme Summarization (XSum) dataset is a dataset for evaluation of abstractive single-document summarization systems. The goal is to create a short, one-sentence new summary answering the question “What is the article about?”. The dataset consists of 226,711 news articles accompanied with a one-sentence summary. The articles are collected from BBC articles (2010 to 2017) and cover a wide variety of domains (e.g., News, Politics, Sports, Weather, Business, Technology, Science, Health, Family, Education, Entertainment and Arts). The official random split contains 204,045 (90%), 11,332 (5%) and 11,334 (5) documents in training, validation and test sets, respectively." }, { "dkey": "BookCorpus", "dval": "BookCorpus is a large collection of free novel books written by unpublished authors, which contains 11,038 books (around 74M sentences and 1G words) of 16 different sub-genres (e.g., Romance, Historical, Adventure, etc.)." }, { "dkey": "GLUE", "dval": "General Language Understanding Evaluation (GLUE) benchmark is a collection of nine natural language understanding tasks, including single-sentence tasks CoLA and SST-2, similarity and paraphrasing tasks MRPC, STS-B and QQP, and natural language inference tasks MNLI, QNLI, RTE and WNLI." }, { "dkey": "SQuAD", "dval": "The Stanford Question Answering Dataset (SQuAD) is a collection of question-answer pairs derived from Wikipedia articles. In SQuAD, the correct answers of questions can be any sequence of tokens in the given text. Because the questions and answers are produced by humans through crowdsourcing, it is more diverse than some other question-answering datasets. SQuAD 1.1 contains 107,785 question-answer pairs on 536 articles. SQuAD2.0 (open-domain SQuAD, SQuAD-Open), the latest version, combines the 100,000 questions in SQuAD1.1 with over 50,000 un-answerable questions written adversarially by crowdworkers in forms that are similar to the answerable ones." }, { "dkey": "NumerSense", "dval": "Contains 13.6k masked-word-prediction probes, 10.5k for fine-tuning and 3.1k for testing." }, { "dkey": "CLUECorpus2020", "dval": "CLUECorpus2020 is a large-scale corpus that can be used directly for self-supervised learning such as pre-training of a language model, or language generation. It has 100G raw corpus with 35 billion Chinese characters, which is retrieved from Common Crawl." }, { "dkey": "THEODORE", "dval": "Recent work about synthetic indoor datasets from perspective views has shown significant improvements of object detection results with Convolutional Neural Networks(CNNs). In this paper, we introduce THEODORE: a novel, large-scale indoor dataset containing 100,000 high- resolution diversified fisheye images with 14 classes. To this end, we create 3D virtual environments of living rooms, different human characters and interior textures. Beside capturing fisheye images from virtual environments we create annotations for semantic segmentation, instance masks and bounding boxes for object detection tasks. We compare our synthetic dataset to state of the art real-world datasets for omnidirectional images. Based on MS COCO weights, we show that our dataset is well suited for fine-tuning CNNs for object detection. Through a high generalization of our models by means of image synthesis and domain randomization we reach an AP up to 0.84 for class person on High-Definition Analytics dataset." 
}, { "dkey": "FarsTail", "dval": "Natural Language Inference (NLI), also called Textual Entailment, is an important task in NLP with the goal of determining the inference relationship between a premise p and a hypothesis h. It is a three-class problem, where each pair (p, h) is assigned to one of these classes: \"ENTAILMENT\" if the hypothesis can be inferred from the premise, \"CONTRADICTION\" if the hypothesis contradicts the premise, and \"NEUTRAL\" if none of the above holds. There are large datasets such as SNLI, MNLI, and SciTail for NLI in English, but there are few datasets for poor-data languages like Persian. Persian (Farsi) language is a pluricentric language spoken by around 110 million people in countries like Iran, Afghanistan, and Tajikistan. FarsTail is the first relatively large-scale Persian dataset for NLI task. A total of 10,367 samples are generated from a collection of 3,539 multiple-choice questions. The train, validation, and test portions include 7,266, 1,537, and 1,564 instances, respectively." }, { "dkey": "WikiReading", "dval": "WikiReading is a large-scale natural language understanding task and publicly-available dataset with 18 million instances. The task is to predict textual values from the structured knowledge base Wikidata by reading the text of the corresponding Wikipedia articles. The task contains a rich variety of challenging classification and extraction sub-tasks, making it well-suited for end-to-end models such as deep neural networks (DNNs)." }, { "dkey": "KP20k", "dval": "KP20k is a large-scale scholarly articles dataset with 528K articles for training, 20K articles for validation and 20K articles for testing." } ]
We study the effects of spurious patterns in VQA datasets and propose a model-agnostic algorithm
visual question answering images text paragraph-level
2,019
[ "BDD100K", "VizWiz", "TDIUC", "VQA-E" ]
[ "SNLI", "SWAG" ]
[ { "dkey": "SNLI", "dval": "The SNLI dataset (Stanford Natural Language Inference) consists of 570k sentence-pairs manually labeled as entailment, contradiction, and neutral. Premises are image captions from Flickr30k, while hypotheses were generated by crowd-sourced annotators who were shown a premise and asked to generate entailing, contradicting, and neutral sentences. Annotators were instructed to judge the relation between sentences given that they describe the same event. Each pair is labeled as “entailment”, “neutral”, “contradiction” or “-”, where “-” indicates that an agreement could not be reached." }, { "dkey": "SWAG", "dval": "Given a partial description like \"she opened the hood of the car,\" humans can reason about the situation and anticipate what might come next (\"then, she examined the engine\"). SWAG (Situations With Adversarial Generations) is a large-scale dataset for this task of grounded commonsense inference, unifying natural language inference and physically grounded reasoning.\n\nThe dataset consists of 113k multiple choice questions about grounded situations. Each question is a video caption from LSMDC or ActivityNet Captions, with four answer choices about what might happen next in the scene. The correct answer is the (real) video caption for the next event in the video; the three incorrect answers are adversarially generated and human verified, so as to fool machines but not humans. The authors aim for SWAG to be a benchmark for evaluating grounded commonsense NLI and for learning representations." }, { "dkey": "BDD100K", "dval": "Datasets drive vision progress, yet existing driving datasets are impoverished in terms of visual content and supported tasks to study multitask learning for autonomous driving. Researchers are usually constrained to study a small set of problems on one dataset, while real-world computer vision applications require performing tasks of various complexities. We construct BDD100K, the largest driving video dataset with 100K videos and 10 tasks to evaluate the exciting progress of image recognition algorithms on autonomous driving. The dataset possesses geographic, environmental, and weather diversity, which is useful for training models that are less likely to be surprised by new conditions. Based on this diverse dataset, we build a benchmark for heterogeneous multitask learning and study how to solve the tasks together. Our experiments show that special training strategies are needed for existing models to perform such heterogeneous tasks. BDD100K opens the door for future studies in this important venue. More detail is at the dataset home page." }, { "dkey": "VizWiz", "dval": "The VizWiz-VQA dataset originates from a natural visual question answering setting where blind people each took an image and recorded a spoken question about it, together with 10 crowdsourced answers per visual question. The proposed challenge addresses the following two tasks for this dataset: (1) predict the answer to a visual question and (2) predict whether a visual question cannot be answered." }, { "dkey": "TDIUC", "dval": "The Task Directed Image Understanding Challenge (TDIUC) dataset is a Visual Question Answering dataset which consists of 1.6M questions and 170K images sourced from MS COCO and the Visual Genome Dataset. The image-question pairs are split into 12 categories and 4 additional evaluation metrics which help evaluate models’ robustness against answer imbalance and their ability to answer questions that require higher reasoning capability. 
The TDIUC dataset divides the VQA paradigm into 12 different task directed question types. These include questions that require a simpler task (e.g., object presence, color attribute) and more complex tasks (e.g., counting, positional reasoning). The dataset includes also an “Absurd” question category in which questions are irrelevant to the image contents to help balance the dataset." }, { "dkey": "VQA-E", "dval": "VQA-E is a dataset for Visual Question Answering with Explanation, where the models are required to generate and explanation with the predicted answer. The VQA-E dataset is automatically derived from the VQA v2 dataset by synthesizing a textual explanation for each image-question-answer triple." } ]
AllenNLP is a platform for research on deep learning methods in natural language understanding. AllenNLP is built
language understanding text
2,018
[ "SuperGLUE", "C4", "DialoGLUE", "GLUE", "CUB-200-2011", "ANLI" ]
[ "SNLI", "SQuAD" ]
[ { "dkey": "SNLI", "dval": "The SNLI dataset (Stanford Natural Language Inference) consists of 570k sentence-pairs manually labeled as entailment, contradiction, and neutral. Premises are image captions from Flickr30k, while hypotheses were generated by crowd-sourced annotators who were shown a premise and asked to generate entailing, contradicting, and neutral sentences. Annotators were instructed to judge the relation between sentences given that they describe the same event. Each pair is labeled as “entailment”, “neutral”, “contradiction” or “-”, where “-” indicates that an agreement could not be reached." }, { "dkey": "SQuAD", "dval": "The Stanford Question Answering Dataset (SQuAD) is a collection of question-answer pairs derived from Wikipedia articles. In SQuAD, the correct answers of questions can be any sequence of tokens in the given text. Because the questions and answers are produced by humans through crowdsourcing, it is more diverse than some other question-answering datasets. SQuAD 1.1 contains 107,785 question-answer pairs on 536 articles. SQuAD2.0 (open-domain SQuAD, SQuAD-Open), the latest version, combines the 100,000 questions in SQuAD1.1 with over 50,000 un-answerable questions written adversarially by crowdworkers in forms that are similar to the answerable ones." }, { "dkey": "SuperGLUE", "dval": "SuperGLUE is a benchmark dataset designed to pose a more rigorous test of language understanding than GLUE. SuperGLUE has the same high-level motivation as GLUE: to provide a simple, hard-to-game measure of progress toward general-purpose language understanding technologies for English. SuperGLUE follows the basic design of GLUE: It consists of a public leaderboard built around eight language understanding tasks, drawing on existing data, accompanied by a single-number\nperformance metric, and an analysis toolkit. However, it improves upon GLUE in several ways:\n\n\nMore challenging tasks: SuperGLUE retains the two hardest tasks in GLUE. The remaining tasks were identified from those submitted to an open call for task proposals and were selected based on difficulty for current NLP approaches.\nMore diverse task formats: The task formats in GLUE are limited to sentence- and sentence-pair classification. The authors expand the set of task formats in SuperGLUE to include\ncoreference resolution and question answering (QA).\nComprehensive human baselines: the authors include human performance estimates for all benchmark tasks, which verify that substantial headroom exists between a strong BERT-based baseline and human performance.\nImproved code support: SuperGLUE is distributed with a new, modular toolkit for work on pretraining, multi-task learning, and transfer learning in NLP, built around standard tools including PyTorch (Paszke et al., 2017) and AllenNLP (Gardner et al., 2017).\nRefined usage rules: The conditions for inclusion on the SuperGLUE leaderboard were revamped to ensure fair competition, an informative leaderboard, and full credit\nassignment to data and task creators." }, { "dkey": "C4", "dval": "C4 is a colossal, cleaned version of Common Crawl's web crawl corpus. It was based on Common Crawl dataset: https://commoncrawl.org. It was used to train the T5 text-to-text Transformer models.\n\nThe dataset can be downloaded in a pre-processed form from allennlp." 
}, { "dkey": "DialoGLUE", "dval": "DialoGLUE is a natural language understanding benchmark for task-oriented dialogue designed to encourage dialogue research in representation-based transfer, domain adaptation, and sample-efficient task learning. It consists of 7 task-oriented dialogue datasets covering 4 distinct natural language understanding tasks." }, { "dkey": "GLUE", "dval": "General Language Understanding Evaluation (GLUE) benchmark is a collection of nine natural language understanding tasks, including single-sentence tasks CoLA and SST-2, similarity and paraphrasing tasks MRPC, STS-B and QQP, and natural language inference tasks MNLI, QNLI, RTE and WNLI." }, { "dkey": "CUB-200-2011", "dval": "The Caltech-UCSD Birds-200-2011 (CUB-200-2011) dataset is the most widely-used dataset for the fine-grained visual categorization task. It contains 11,788 images of 200 subcategories belonging to birds, 5,994 for training and 5,794 for testing. Each image has detailed annotations: 1 subcategory label, 15 part locations, 312 binary attributes and 1 bounding box. The textual information comes from Reed et al. They expand the CUB-200-2011 dataset by collecting fine-grained natural language descriptions. Ten single-sentence descriptions are collected for each image. The natural language descriptions are collected through the Amazon Mechanical Turk (AMT) platform, and are required to be at least 10 words, without any information about subcategories and actions." }, { "dkey": "ANLI", "dval": "The Adversarial Natural Language Inference (ANLI, Nie et al.) is a new large-scale NLI benchmark dataset, collected via an iterative, adversarial human-and-model-in-the-loop procedure. In particular, the data is selected to be difficult for state-of-the-art models, including BERT and RoBERTa." } ]
We propose a 3D facial reconstruction approach that achieves state-of-the-
3d facial reconstruction image
2,018
[ "FaceWarehouse", "BP4D", "Hollywood 3D dataset", "FRGC" ]
[ "AFLW", "Florence" ]
[ { "dkey": "AFLW", "dval": "The Annotated Facial Landmarks in the Wild (AFLW) is a large-scale collection of annotated face images gathered from Flickr, exhibiting a large variety in appearance (e.g., pose, expression, ethnicity, age, gender) as well as general imaging and environmental conditions. In total about 25K faces are annotated with up to 21 landmarks per image." }, { "dkey": "Florence", "dval": "The Florence 3D faces dataset consists of:\n\n\nHigh-resolution 3D scans of human faces from many subjects.\nSeveral video sequences of varying resolution, conditions and zoom level for each subject.\nEach subject is recorded in the following situations:\nIn a controlled setting in HD video.\nIn a less-constrained (but still indoor) setting using a standard, PTZ surveillance camera.\nIn an unconstrained, outdoor environment under challenging recording conditions." }, { "dkey": "FaceWarehouse", "dval": "FaceWarehouse is a 3D facial expression database that provides the facial geometry of 150 subjects, covering a wide range of ages and ethnic backgrounds." }, { "dkey": "BP4D", "dval": "The BP4D-Spontaneous dataset is a 3D video database of spontaneous facial expressions in a diverse group of young adults. Well-validated emotion inductions were used to elicit expressions of emotion and paralinguistic communication. Frame-level ground-truth for facial actions was obtained using the Facial Action Coding System. Facial features were tracked in both 2D and 3D domains using both person-specific and generic approaches.\nThe database includes forty-one participants (23 women, 18 men). They were 18 – 29 years of age; 11 were Asian, 6 were African-American, 4 were Hispanic, and 20 were Euro-American. An emotion elicitation protocol was designed to elicit emotions of participants effectively. Eight tasks were covered with an interview process and a series of activities to elicit eight emotions.\nThe database is structured by participants. Each participant is associated with 8 tasks. For each task, there are both 3D and 2D videos. As well, the Metadata include manually annotated action units (FACS AU), automatically tracked head pose, and 2D/3D facial landmarks. The database is in the size of about 2.6TB (without compression)." }, { "dkey": "Hollywood 3D dataset", "dval": "A dataset for benchmarking action recognition algorithms in natural environments, while making use of 3D information. The dataset contains around 650 video clips, across 14 classes. In addition, two state of the art action recognition algorithms are extended to make use of the 3D data, and five new interest point detection strategies are also proposed, that extend to the 3D data." }, { "dkey": "FRGC", "dval": "The data for FRGC consists of 50,000 recordings divided into training and validation partitions. The training partition is designed for training algorithms and the validation partition is for assessing performance of an approach in a laboratory setting. The validation partition consists of data from 4,003 subject sessions. A subject session is the set of all images of a person taken each time a person's biometric data is collected and consists of four controlled still images, two uncontrolled still images, and one three-dimensional image. The controlled images were taken in a studio setting, are full frontal facial images taken under two lighting conditions and with two facial expressions (smiling and neutral). The uncontrolled images were taken in varying illumination conditions; e.g., hallways, atriums, or outside. 
Each set of uncontrolled images contains two expressions, smiling and neutral. The 3D image was taken under controlled illumination conditions. The 3D images consist of both a range and a texture image. The 3D images were acquired by a Minolta Vivid 900/910 series sensor." } ]
I want to train a supervised model for person re-identification from images.
person re-identification images
2,019
[ "SYSU-MM01", "Airport", "CUHK02", "Partial-iLIDS", "Partial-REID", "Occluded REID" ]
[ "Market-1501", "CUHK03" ]
[ { "dkey": "Market-1501", "dval": "Market-1501 is a large-scale public benchmark dataset for person re-identification. It contains 1501 identities which are captured by six different cameras, and 32,668 pedestrian image bounding-boxes obtained using the Deformable Part Models pedestrian detector. Each person has 3.6 images on average at each viewpoint. The dataset is split into two parts: 750 identities are utilized for training and the remaining 751 identities are used for testing. In the official testing protocol 3,368 query images are selected as probe set to find the correct match across 19,732 reference gallery images." }, { "dkey": "CUHK03", "dval": "The CUHK03 consists of 14,097 images of 1,467 different identities, where 6 campus cameras were deployed for image collection and each identity is captured by 2 campus cameras. This dataset provides two types of annotations, one by manually labelled bounding boxes and the other by bounding boxes produced by an automatic detector. The dataset also provides 20 random train/test splits in which 100 identities are selected for testing and the rest for training" }, { "dkey": "SYSU-MM01", "dval": "The SYSU-MM01 is a dataset collected for the Visible-Infrared Re-identification problem. The images in the dataset were obtained from 491 different persons by recording them using 4 RGB and 2 infrared cameras. Within the dataset, the persons are divided into 3 fixed splits to create training, validation and test sets. In the training set, there are 20284 RGB and 9929 infrared images of 296 persons. The validation set contains 1974 RGB and 1980 infrared images of 99 persons. The testing set consists of the images of 96 persons where 3803 infrared images are used as query and 301 randomly selected RGB images are used as gallery." }, { "dkey": "Airport", "dval": "The Airport dataset is a dataset for person re-identification which consists of 39,902 images and 9,651 identities across six cameras." }, { "dkey": "CUHK02", "dval": "CUHK02 is a dataset for person re-identification. It contains 1,816 identities from two disjoint camera views. Each identity has two samples per camera view making a total of 7,264 images. It is used for Person Re-identification." }, { "dkey": "Partial-iLIDS", "dval": "Partial iLIDS is a dataset for occluded person person re-identification. It contains a total of 476 images of 119 people captured by 4 non-overlapping cameras. Some images contain people occluded by other individuals or luggage." }, { "dkey": "Partial-REID", "dval": "Partial REID is a specially designed partial person reidentification dataset that includes 600 images from 60 people, with 5 full-body images and 5 occluded images per person. These images were collected on a university campus by 6 cameras from different viewpoints, backgrounds and different types of occlusion. The examples of partial persons in the Partial REID dataset are shown in the Figure." }, { "dkey": "Occluded REID", "dval": "Occluded REID is an occluded person dataset captured by mobile cameras, consisting of 2,000 images of 200 occluded persons (see Fig. (c)). Each identity has 5 full-body person images and 5 occluded person images with different types of occlusion." } ]
A method based on deep metric learning with margin sample mining loss for person re-identification.
person re-identification images
2,017
[ "DukeMTMC-reID", "Airport", "JHMDB", "CUHK02" ]
[ "MARS", "CUHK03" ]
[ { "dkey": "MARS", "dval": "MARS (Motion Analysis and Re-identification Set) is a large scale video based person reidentification dataset, an extension of the Market-1501 dataset. It has been collected from six near-synchronized cameras. It consists of 1,261 different pedestrians, who are captured by at least 2 cameras. The variations in poses, colors and illuminations of pedestrians, as well as the poor image quality, make it very difficult to yield high matching accuracy. Moreover, the dataset contains 3,248 distractors in order to make it more realistic. Deformable Part Model and GMMCP tracker were used to automatically generate the tracklets (mostly 25-50 frames long)." }, { "dkey": "CUHK03", "dval": "The CUHK03 consists of 14,097 images of 1,467 different identities, where 6 campus cameras were deployed for image collection and each identity is captured by 2 campus cameras. This dataset provides two types of annotations, one by manually labelled bounding boxes and the other by bounding boxes produced by an automatic detector. The dataset also provides 20 random train/test splits in which 100 identities are selected for testing and the rest for training" }, { "dkey": "DukeMTMC-reID", "dval": "The DukeMTMC-reID (Duke Multi-Tracking Multi-Camera ReIDentification) dataset is a subset of the DukeMTMC for image-based person re-ID. The dataset is created from high-resolution videos from 8 different cameras. It is one of the largest pedestrian image datasets wherein images are cropped by hand-drawn bounding boxes. The dataset consists 16,522 training images of 702 identities, 2,228 query images of the other 702 identities and 17,661 gallery images.\n\nNOTE: This dataset has been retracted." }, { "dkey": "Airport", "dval": "The Airport dataset is a dataset for person re-identification which consists of 39,902 images and 9,651 identities across six cameras." }, { "dkey": "JHMDB", "dval": "JHMDB is an action recognition dataset that consists of 960 video sequences belonging to 21 actions. It is a subset of the larger HMDB51 dataset collected from digitized movies and YouTube videos. The dataset contains video and annotation for puppet flow per frame (approximated optimal flow on the person), puppet mask per frame, joint positions per frame, action label per clip and meta label per clip (camera motion, visible body parts, camera viewpoint, number of people, video quality)." }, { "dkey": "CUHK02", "dval": "CUHK02 is a dataset for person re-identification. It contains 1,816 identities from two disjoint camera views. Each identity has two samples per camera view making a total of 7,264 images. It is used for Person Re-identification." } ]
I want to use transfer learning to train a small "student" model to learn a function
transfer learning images
2,020
[ "CLEVR", "KLEJ", "fMoW", "ORVS", "BDD100K", "EgoShots" ]
[ "CIFAR-10", "CelebA" ]
[ { "dkey": "CIFAR-10", "dval": "The CIFAR-10 dataset (Canadian Institute for Advanced Research, 10 classes) is a subset of the Tiny Images dataset and consists of 60000 32x32 color images. The images are labelled with one of 10 mutually exclusive classes: airplane, automobile (but not truck or pickup truck), bird, cat, deer, dog, frog, horse, ship, and truck (but not pickup truck). There are 6000 images per class with 5000 training and 1000 testing images per class.\n\nThe criteria for deciding whether an image belongs to a class were as follows:\n\n\nThe class name should be high on the list of likely answers to the question “What is in this picture?”\nThe image should be photo-realistic. Labelers were instructed to reject line drawings.\nThe image should contain only one prominent instance of the object to which the class refers.\nThe object may be partially occluded or seen from an unusual viewpoint as long as its identity is still clear to the labeler." }, { "dkey": "CelebA", "dval": "CelebFaces Attributes dataset contains 202,599 face images of the size 178×218 from 10,177 celebrities, each annotated with 40 binary labels indicating facial attributes like hair color, gender and age." }, { "dkey": "CLEVR", "dval": "CLEVR (Compositional Language and Elementary Visual Reasoning) is a synthetic Visual Question Answering dataset. It contains images of 3D-rendered objects; each image comes with a number of highly compositional questions that fall into different categories. Those categories fall into 5 classes of tasks: Exist, Count, Compare Integer, Query Attribute and Compare Attribute. The CLEVR dataset consists of: a training set of 70k images and 700k questions, a validation set of 15k images and 150k questions, A test set of 15k images and 150k questions about objects, answers, scene graphs and functional programs for all train and validation images and questions. Each object present in the scene, aside of position, is characterized by a set of four attributes: 2 sizes: large, small, 3 shapes: square, cylinder, sphere, 2 material types: rubber, metal, 8 color types: gray, blue, brown, yellow, red, green, purple, cyan, resulting in 96 unique combinations." }, { "dkey": "KLEJ", "dval": "The KLEJ benchmark (Kompleksowa Lista Ewaluacji Językowych) is a set of nine evaluation tasks for the Polish language understanding task.\n\nKey benchmark features:\n\n\nIt contains a diverse set of tasks from different domains and with different objectives.\nMost tasks are created from existing datasets but the authors also released the new sentiment analysis dataset from an e-commerce domain.\nIt includes tasks which have relatively small datasets and require extensive external knowledge to solve them. It promotes the usage of transfer learning instead of training separate models from scratch.\n\nThe name KLEJ (English: GLUE) is an abbreviation for Kompleksowa Lista Ewaluacji Językowych (English: Comprehensive List of Language Evaluations) and refers to the GLUE benchmark." }, { "dkey": "fMoW", "dval": "Functional Map of the World (fMoW) is a dataset that aims to inspire the development of machine learning models capable of predicting the functional purpose of buildings and land use from temporal sequences of satellite images and a rich set of metadata features." 
}, { "dkey": "ORVS", "dval": "The ORVS dataset has been newly established as a collaboration between the computer science and visual-science departments at the University of Calgary.\n\nThis dataset contains 49 images (42 training and seven testing images) collected from a clinic in Calgary-Canada. All images were acquired with a Zeiss Visucam 200 with 30 degrees field of view (FOV). The image size is 1444×1444 with 24 bits per pixel. Images and are stored in JPEG format with low compression, which is common in ophthalmology practice. All images were manually traced by an expert who a has been working in the field of retinal-image analysis and went through training. The expert was asked to label all pixels belonging to retinal vessels. The Windows Paint 3D tool was used to manually label the images." }, { "dkey": "BDD100K", "dval": "Datasets drive vision progress, yet existing driving datasets are impoverished in terms of visual content and supported tasks to study multitask learning for autonomous driving. Researchers are usually constrained to study a small set of problems on one dataset, while real-world computer vision applications require performing tasks of various complexities. We construct BDD100K, the largest driving video dataset with 100K videos and 10 tasks to evaluate the exciting progress of image recognition algorithms on autonomous driving. The dataset possesses geographic, environmental, and weather diversity, which is useful for training models that are less likely to be surprised by new conditions. Based on this diverse dataset, we build a benchmark for heterogeneous multitask learning and study how to solve the tasks together. Our experiments show that special training strategies are needed for existing models to perform such heterogeneous tasks. BDD100K opens the door for future studies in this important venue. More detail is at the dataset home page." }, { "dkey": "EgoShots", "dval": "Egoshots is a 2-month Ego-vision Dataset with Autographer Wearable Camera annotated \"for free\" with transfer learning. Three state of the art pre-trained image captioning models are used. The dataset represents the life of 2 interns while working at Philips Research (Netherlands) (May-July 2015) generously donating their data." } ]
We propose an effective attack method that exploits intrinsic movement patterns and regional relative motion among video frames.
video classification
2,020
[ "MoVi", "MovieShots", "UASOL", "MSU-MFSD" ]
[ "UCF101", "HMDB51" ]
[ { "dkey": "UCF101", "dval": "UCF101 dataset is an extension of UCF50 and consists of 13,320 video clips, which are classified into 101 categories. These 101 categories can be classified into 5 types (Body motion, Human-human interactions, Human-object interactions, Playing musical instruments and Sports). The total length of these video clips is over 27 hours. All the videos are collected from YouTube and have a fixed frame rate of 25 FPS with the resolution of 320 × 240." }, { "dkey": "HMDB51", "dval": "The HMDB51 dataset is a large collection of realistic videos from various sources, including movies and web videos. The dataset is composed of 6,766 video clips from 51 action categories (such as “jump”, “kiss” and “laugh”), with each category containing at least 101 clips. The original evaluation scheme uses three different training/testing splits. In each split, each action class has 70 clips for training and 30 clips for testing. The average accuracy over these three splits is used to measure the final performance." }, { "dkey": "MoVi", "dval": "Contains 60 female and 30 male actors performing a collection of 20 predefined everyday actions and sports movements, and one self-chosen movement." }, { "dkey": "MovieShots", "dval": "MovieShots is a dataset to facilitate the shot type analysis in videos. It is a large-scale shot type annotation set that contains 46K shots from 7,858 movies covering a wide\nvariety of movie genres to ensure the inclusion of all scale and movement types of shot. Each shot has two attributes, shot scale and shot movement.\n\nShot scale has five categories: 1) long shot (LS) is taken from a long distance, sometimes as far as a quarter of a mile away; 2) full shot (FS) barely includes the human body in full; 3) medium shot (MS) contains a figure from the knees or waist up; 4) close-up shot (CS) concentrates on a relatively small object, showing the face of the hand of a person; (5) extreme close-up shot (ECS) shows even smaller parts such as the image of an eye or a mouth.\n\nShot movement has four categories: 1) in static shot, the camera is fixed but the subject is flexible to move; 2) for motion shot, the camera moves or rotates; 3) the camera zooms in for push shot, and 4) zooms out for pull shot. While all the four movement types are widely used in movies, the use of push and pull shots only takes a very small portion. The usage of different shots usually depends on the movie genres and the preferences of the filmmakers." }, { "dkey": "UASOL", "dval": "The UASOL an RGB-D stereo dataset, that contains 160902 frames, filmed at 33 different scenes, each with between 2 k and 10 k frames. The frames show different paths from the perspective of a pedestrian, including sidewalks, trails, roads, etc. The images were extracted from video files with 15 fps at HD2K resolution with a size of 2280 × 1282 pixels. The dataset also provides a GPS geolocalization tag for each second of the sequences and reflects different climatological conditions. It also involved up to 4 different persons filming the dataset at different moments of the day.\n\nWe propose a train, validation and test split to train the network. \nAdditionally, we introduce a subset of 676 pairs of RGB Stereo images and their respective depth, which we extracted randomly from the entire dataset. This given test set is introduced to make comparability possible between the different methods trained with the dataset." 
}, { "dkey": "MSU-MFSD", "dval": "The MSU-MFSD dataset contains 280 video recordings of genuine and attack faces. 35 individuals have participated in the development of this database with a total of 280 videos. Two kinds of cameras with different resolutions (720×480 and 640×480) were used to record the videos from the 35 individuals. For the real accesses, each individual has two video recordings captured with the Laptop cameras and Android, respectively. For the video attacks, two types of cameras, the iPhone and Canon cameras were used to capture high definition videos on each of the subject. The videos taken with Canon camera were then replayed on iPad Air screen to generate the HD replay attacks while the videos recorded by the iPhone mobile were replayed itself to generate the mobile replay attacks. Photo attacks were produced by printing the 35 subjects’ photos on A3 papers using HP colour printer. The recording videos with respect to the 35 individuals were divided into training (15 subjects with 120 videos) and testing (40 subjects with 160 videos) datasets, respectively." } ]
This paper proposes a new collaborative distillation approach for neural style transfer with an encoder-decoder architecture.
universal style transfer image
2,020
[ "CONCODE", "WHU", "NAS-Bench-201", "NAS-Bench-101", "NATS-Bench" ]
[ "ImageNet", "COCO" ]
[ { "dkey": "ImageNet", "dval": "The ImageNet dataset contains 14,197,122 annotated images according to the WordNet hierarchy. Since 2010 the dataset is used in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), a benchmark in image classification and object detection.\nThe publicly released dataset contains a set of manually annotated training images. A set of test images is also released, with the manual annotations withheld.\nILSVRC annotations fall into one of two categories: (1) image-level annotation of a binary label for the presence or absence of an object class in the image, e.g., “there are cars in this image” but “there are no tigers,” and (2) object-level annotation of a tight bounding box and class label around an object instance in the image, e.g., “there is a screwdriver centered at position (20,25) with width of 50 pixels and height of 30 pixels”.\nThe ImageNet project does not own the copyright of the images, therefore only thumbnails and URLs of images are provided.\n\n\nTotal number of non-empty WordNet synsets: 21841\nTotal number of images: 14197122\nNumber of images with bounding box annotations: 1,034,908\nNumber of synsets with SIFT features: 1000\nNumber of images with SIFT features: 1.2 million" }, { "dkey": "COCO", "dval": "The MS COCO (Microsoft Common Objects in Context) dataset is a large-scale object detection, segmentation, key-point detection, and captioning dataset. The dataset consists of 328K images.\n\nSplits:\nThe first version of MS COCO dataset was released in 2014. It contains 164K images split into training (83K), validation (41K) and test (41K) sets. In 2015 additional test set of 81K images was released, including all the previous test images and 40K new images.\n\nBased on community feedback, in 2017 the training/validation split was changed from 83K/41K to 118K/5K. The new split uses the same images and annotations. The 2017 test set is a subset of 41K images of the 2015 test set. Additionally, the 2017 release contains a new unannotated dataset of 123K images.\n\nAnnotations:\nThe dataset has annotations for\n\n\nobject detection: bounding boxes and per-instance segmentation masks with 80 object categories,\ncaptioning: natural language descriptions of the images (see MS COCO Captions),\nkeypoints detection: containing more than 200,000 images and 250,000 person instances labeled with keypoints (17 possible keypoints, such as left eye, nose, right hip, right ankle),\nstuff image segmentation – per-pixel segmentation masks with 91 stuff categories, such as grass, wall, sky (see MS COCO Stuff),\npanoptic: full scene segmentation, with 80 thing categories (such as person, bicycle, elephant) and a subset of 91 stuff categories (grass, sky, road),\ndense pose: more than 39,000 images and 56,000 person instances labeled with DensePose annotations – each labeled person is annotated with an instance id and a mapping between image pixels that belong to that person body and a template 3D model.\nThe annotations are publicly available only for training and validation images." }, { "dkey": "CONCODE", "dval": "A new large dataset with over 100,000 examples consisting of Java classes from online code repositories, and develop a new encoder-decoder architecture that models the interaction between the method documentation and the class environment." 
}, { "dkey": "WHU", "dval": "Created for MVS tasks and is a large-scale multi-view aerial dataset generated from a highly accurate 3D digital surface model produced from thousands of real aerial images with precise camera parameters." }, { "dkey": "NAS-Bench-201", "dval": "NAS-Bench-201 is a benchmark (and search space) for neural architecture search. Each architecture consists of a predefined skeleton with a stack of the searched cell. In this way, architecture search is transformed into the problem of searching a good cell." }, { "dkey": "NAS-Bench-101", "dval": "NAS-Bench-101 is the first public architecture dataset for NAS research. To build NASBench-101, the authors carefully constructed a compact, yet expressive, search space, exploiting graph isomorphisms to identify 423k unique convolutional\narchitectures. The authors trained and evaluated all of these architectures multiple times on CIFAR-10 and compiled the results into a large dataset of over 5 million trained models. This allows researchers to evaluate the quality of a diverse range of models in milliseconds by querying the precomputed dataset." }, { "dkey": "NATS-Bench", "dval": "A unified benchmark on searching for both topology and size, for (almost) any up-to-date NAS algorithm. NATS-Bench includes the search space of 15,625 neural cell candidates for architecture topology and 32,768 for architecture size on three datasets." } ]
I want to train a supervised model for action recognition from video.
action recognition video
2,017
[ "EPIC-KITCHENS-100", "Kinetics", "AViD", "Kinetics-600", "NTU RGB+D", "Charades" ]
[ "UCF101", "HMDB51" ]
[ { "dkey": "UCF101", "dval": "UCF101 dataset is an extension of UCF50 and consists of 13,320 video clips, which are classified into 101 categories. These 101 categories can be classified into 5 types (Body motion, Human-human interactions, Human-object interactions, Playing musical instruments and Sports). The total length of these video clips is over 27 hours. All the videos are collected from YouTube and have a fixed frame rate of 25 FPS with the resolution of 320 × 240." }, { "dkey": "HMDB51", "dval": "The HMDB51 dataset is a large collection of realistic videos from various sources, including movies and web videos. The dataset is composed of 6,766 video clips from 51 action categories (such as “jump”, “kiss” and “laugh”), with each category containing at least 101 clips. The original evaluation scheme uses three different training/testing splits. In each split, each action class has 70 clips for training and 30 clips for testing. The average accuracy over these three splits is used to measure the final performance." }, { "dkey": "EPIC-KITCHENS-100", "dval": "This paper introduces the pipeline to scale the largest dataset in egocentric vision EPIC-KITCHENS. The effort culminates in EPIC-KITCHENS-100, a collection of 100 hours, 20M frames, 90K actions in 700 variable-length videos, capturing long-term unscripted activities in 45 environments, using head-mounted cameras. Compared to its previous version (EPIC-KITCHENS-55), EPIC-KITCHENS-100 has been annotated using a novel pipeline that allows denser (54% more actions per minute) and more complete annotations of fine-grained actions (+128% more action segments). This collection also enables evaluating the \"test of time\" - i.e. whether models trained on data collected in 2018 can generalise to new footage collected under the same hypotheses albeit \"two years on\".\nThe dataset is aligned with 6 challenges: action recognition (full and weak supervision), action detection, action anticipation, cross-modal retrieval (from captions), as well as unsupervised domain adaptation for action recognition. For each challenge, we define the task, provide baselines and evaluation metrics." }, { "dkey": "Kinetics", "dval": "The Kinetics dataset is a large-scale, high-quality dataset for human action recognition in videos. The dataset consists of around 500,000 video clips covering 600 human action classes with at least 600 video clips for each action class. Each video clip lasts around 10 seconds and is labeled with a single action class. The videos are collected from YouTube." }, { "dkey": "AViD", "dval": "Is a collection of action videos from many different countries. The motivation is to create a public dataset that would benefit training and pretraining of action recognition models for everybody, rather than making it useful for limited countries." }, { "dkey": "Kinetics-600", "dval": "The Kinetics-600 is a large-scale action recognition dataset which consists of around 480K videos from 600 action categories. The 480K videos are divided into 390K, 30K, 60K for training, validation and test sets, respectively. Each video in the dataset is a 10-second clip of action moment annotated from raw YouTube video. It is an extensions of the Kinetics-400 dataset." }, { "dkey": "NTU RGB+D", "dval": "NTU RGB+D is a large-scale dataset for RGB-D human action recognition. It involves 56,880 samples of 60 action classes collected from 40 subjects. 
The actions can be generally divided into three categories: 40 daily actions (e.g., drinking, eating, reading), nine health-related actions (e.g., sneezing, staggering, falling down), and 11 mutual actions (e.g., punching, kicking, hugging). These actions take place under 17 different scene conditions corresponding to 17 video sequences (i.e., S001–S017). The actions were captured using three cameras with different horizontal imaging viewpoints, namely, −45∘,0∘, and +45∘. Multi-modality information is provided for action characterization, including depth maps, 3D skeleton joint position, RGB frames, and infrared sequences. The performance evaluation is performed by a cross-subject test that split the 40 subjects into training and test groups, and by a cross-view test that employed one camera (+45∘) for testing, and the other two cameras for training." }, { "dkey": "Charades", "dval": "The Charades dataset is composed of 9,848 videos of daily indoors activities with an average length of 30 seconds, involving interactions with 46 objects classes in 15 types of indoor scenes and containing a vocabulary of 30 verbs leading to 157 action classes. Each video in this dataset is annotated by multiple free-text descriptions, action labels, action intervals and classes of interacting objects. 267 different users were presented with a sentence, which includes objects and actions from a fixed vocabulary, and they recorded a video acting out the sentence. In total, the dataset contains 66,500 temporal annotations for 157 action classes, 41,104 labels for 46 object classes, and 27,847 textual descriptions of the videos. In the standard split there are7,986 training video and 1,863 validation video." } ]
ReenactGAN is capable of transferring facial movements and expressions from monocular video input of
face reenactment video
2,018
[ "FaceForensics", "SAMM Long Videos", "Oulu-CASIA", "MonoPerfCap Dataset", "JAFFE" ]
[ "WFLW", "DISFA" ]
[ { "dkey": "WFLW", "dval": "The Wider Facial Landmarks in the Wild or WFLW database contains 10000 faces (7500 for training and 2500 for testing) with 98 annotated landmarks. This database also features rich attribute annotations in terms of occlusion, head pose, make-up, illumination, blur and expressions." }, { "dkey": "DISFA", "dval": "The Denver Intensity of Spontaneous Facial Action (DISFA) dataset consists of 27 videos of 4844 frames each, with 130,788 images in total. Action unit annotations are on different levels of intensity, which are ignored in the following experiments and action units are either set or unset. DISFA was selected from a wider range of databases popular in the field of facial expression recognition because of the high number of smiles, i.e. action unit 12. In detail, 30,792 have this action unit set, 82,176 images have some action unit(s) set and 48,612 images have no action unit(s) set at all." }, { "dkey": "FaceForensics", "dval": "FaceForensics is a video dataset consisting of more than 500,000 frames containing faces from 1004 videos that can be used to study image or video forgeries. All videos are downloaded from Youtube and are cut down to short continuous clips that contain mostly frontal faces. This dataset has two versions:\n\n\n\nSource-to-Target: where the authors reenact over 1000 videos with new facial expressions extracted from other videos, which e.g. can be used to train a classifier to detect fake images or videos.\n\n\n\nSelfreenactment: where the authors use Face2Face to reenact the facial expressions of videos with their own facial expressions as input to get pairs of videos, which e.g. can be used to train supervised generative refinement models." }, { "dkey": "SAMM Long Videos", "dval": "The SAMM Long Videos dataset consists of 147 long videos with 343 macro-expressions and 159 micro-expressions. The dataset is FACS-coded with detailed Action Units." }, { "dkey": "Oulu-CASIA", "dval": "The Oulu-CASIA NIR&VIS facial expression database consists of six expressions (surprise, happiness, sadness, anger, fear and disgust) from 80 people between 23 and 58 years old. 73.8% of the subjects are males. The subjects were asked to sit on a chair in the observation room in a way that he/ she is in front of camera. Camera-face distance is about 60 cm. Subjects were asked to make a facial expression according to an expression example shown in picture sequences. The imaging hardware works at the rate of 25 frames per second and the image resolution is 320 × 240 pixels." }, { "dkey": "MonoPerfCap Dataset", "dval": "MonoPerfCap is a benchmark dataset for human 3D performance capture from monocular video input consisting of around 40k frames, which covers a variety of different scenarios." }, { "dkey": "JAFFE", "dval": "The JAFFE dataset consists of 213 images of different facial expressions from 10 different Japanese female subjects. Each subject was asked to do 7 facial expressions (6 basic facial expressions and neutral) and the images were annotated with average semantic ratings on each facial expression by 60 annotators." } ]
I want to train an unsupervised learning model to learn a representation from images.
unsupervised feature learning images
2,018
[ "Icentia11K", "STL-10", "VoxPopuli", "CC100", "PTC", "Semantic Scholar", "WebVision" ]
[ "ImageNet", "ShapeNet" ]
[ { "dkey": "ImageNet", "dval": "The ImageNet dataset contains 14,197,122 annotated images according to the WordNet hierarchy. Since 2010 the dataset is used in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), a benchmark in image classification and object detection.\nThe publicly released dataset contains a set of manually annotated training images. A set of test images is also released, with the manual annotations withheld.\nILSVRC annotations fall into one of two categories: (1) image-level annotation of a binary label for the presence or absence of an object class in the image, e.g., “there are cars in this image” but “there are no tigers,” and (2) object-level annotation of a tight bounding box and class label around an object instance in the image, e.g., “there is a screwdriver centered at position (20,25) with width of 50 pixels and height of 30 pixels”.\nThe ImageNet project does not own the copyright of the images, therefore only thumbnails and URLs of images are provided.\n\n\nTotal number of non-empty WordNet synsets: 21841\nTotal number of images: 14197122\nNumber of images with bounding box annotations: 1,034,908\nNumber of synsets with SIFT features: 1000\nNumber of images with SIFT features: 1.2 million" }, { "dkey": "ShapeNet", "dval": "ShapeNet is a large scale repository for 3D CAD models developed by researchers from Stanford University, Princeton University and the Toyota Technological Institute at Chicago, USA. The repository contains over 300M models with 220,000 classified into 3,135 classes arranged using WordNet hypernym-hyponym relationships. ShapeNet Parts subset contains 31,693 meshes categorised into 16 common object classes (i.e. table, chair, plane etc.). Each shapes ground truth contains 2-5 parts (with a total of 50 part classes)." }, { "dkey": "Icentia11K", "dval": "Public ECG dataset of continuous raw signals for representation learning containing 11 thousand patients and 2 billion labelled beats." }, { "dkey": "STL-10", "dval": "The STL-10 is an image dataset derived from ImageNet and popularly used to evaluate algorithms of unsupervised feature learning or self-taught learning. Besides 100,000 unlabeled images, it contains 13,000 labeled images from 10 object classes (such as birds, cats, trucks), among which 5,000 images are partitioned for training while the remaining 8,000 images for testing. All the images are color images with 96×96 pixels in size." }, { "dkey": "VoxPopuli", "dval": "VoxPopuli is a large-scale multilingual corpus providing 100K hours of unlabelled speech data in 23 languages. It is the largest open data to date for unsupervised representation learning as well as semi-supervised learning. VoxPopuli also contains 1.8K hours of transcribed speeches in 16 languages and their aligned oral interpretations into 5 other languages totaling 5.1K hours." }, { "dkey": "CC100", "dval": "This corpus comprises of monolingual data for 100+ languages and also includes data for romanized languages. This was constructed using the urls and paragraph indices provided by the CC-Net repository by processing January-December 2018 Commoncrawl snapshots. Each file comprises of documents separated by double-newlines and paragraphs within the same document separated by a newline. The data is generated using the open source CC-Net repository." }, { "dkey": "PTC", "dval": "PTC is a collection of 344 chemical compounds represented as graphs which report the carcinogenicity for rats. There are 19 node labels for each node." 
}, { "dkey": "Semantic Scholar", "dval": "The Semantic Scholar corpus (S2) is composed of titles from scientific papers published in machine learning conferences and journals from 1985 to 2017, split by year (33 timesteps).\n\nImage Source: [http://s2-public-api-prod.us-west-2.elasticbeanstalk.com/corpus/] (http://s2-public-api-prod.us-west-2.elasticbeanstalk.com/corpus/)" }, { "dkey": "WebVision", "dval": "The WebVision dataset is designed to facilitate the research on learning visual representation from noisy web data. It is a large scale web images dataset that contains more than 2.4 million of images crawled from the Flickr website and Google Images search. \n\nThe same 1,000 concepts as the ILSVRC 2012 dataset are used for querying images, such that a bunch of existing approaches can be directly investigated and compared to the models trained from the ILSVRC 2012 dataset, and also makes it possible to study the dataset bias issue in the large scale scenario. The textual information accompanied with those images (e.g., caption, user tags, or description) are also provided as additional meta information. A validation set contains 50,000 images (50 images per category) is provided to facilitate the algorithmic development." } ]
A novel deep neural network is proposed for age and gender estimation from unconstrained face images.
age gender estimation images unconstrained
2,017
[ "UMDFaces", "UTKFace", "MegaFace", "UNITOPATHO", "AFLW", "300W" ]
[ "Adience", "CIFAR-10" ]
[ { "dkey": "Adience", "dval": "The Adience dataset, published in 2014, contains 26,580 photos across 2,284 subjects with a binary gender label and one label from eight different age groups, partitioned into five splits. The key principle of the data set is to capture the images as close to real world conditions as possible, including all variations in appearance, pose, lighting condition and image quality, to name a few." }, { "dkey": "CIFAR-10", "dval": "The CIFAR-10 dataset (Canadian Institute for Advanced Research, 10 classes) is a subset of the Tiny Images dataset and consists of 60000 32x32 color images. The images are labelled with one of 10 mutually exclusive classes: airplane, automobile (but not truck or pickup truck), bird, cat, deer, dog, frog, horse, ship, and truck (but not pickup truck). There are 6000 images per class with 5000 training and 1000 testing images per class.\n\nThe criteria for deciding whether an image belongs to a class were as follows:\n\n\nThe class name should be high on the list of likely answers to the question “What is in this picture?”\nThe image should be photo-realistic. Labelers were instructed to reject line drawings.\nThe image should contain only one prominent instance of the object to which the class refers.\nThe object may be partially occluded or seen from an unusual viewpoint as long as its identity is still clear to the labeler." }, { "dkey": "UMDFaces", "dval": "UMDFaces is a face dataset divided into two parts:\n\n\nStill Images - 367,888 face annotations for 8,277 subjects.\nVideo Frames - Over 3.7 million annotated video frames from over 22,000 videos of 3100 subjects.\n\nPart 1 - Still Images\n\nThe dataset contains 367,888 face annotations for 8,277 subjects divided into 3 batches. The annotations contain human curated bounding boxes for faces and estimated pose (yaw, pitch, and roll), locations of twenty-one keypoints, and gender information generated by a pre-trained neural network.\n\nPart 2 - Video Frames\n\nThe second part contains 3,735,476 annotated video frames extracted from a total of 22,075 for 3,107 subjects. The annotations contain the estimated pose (yaw, pitch, and roll), locations of twenty-one keypoints, and gender information generated by a pre-trained neural network." }, { "dkey": "UTKFace", "dval": "The UTKFace dataset is a large-scale face dataset with long age span (range from 0 to 116 years old). The dataset consists of over 20,000 face images with annotations of age, gender, and ethnicity. The images cover large variation in pose, facial expression, illumination, occlusion, resolution, etc. This dataset could be used on a variety of tasks, e.g., face detection, age estimation, age progression/regression, landmark localization, etc." }, { "dkey": "MegaFace", "dval": "MegaFace was a publicly available dataset which is used for evaluating the performance of face recognition algorithms with up to a million distractors (i.e., up to a million people who are not in the test set). MegaFace contains 1M images from 690K individuals with unconstrained pose, expression, lighting, and exposure. MegaFace captures many different subjects rather than many images of a small number of subjects. The gallery set of MegaFace is collected from a subset of Flickr. The probe set of MegaFace used in the challenge consists of two databases; Facescrub and FGNet. FGNet contains 975 images of 82 individuals, each with several images spanning ages from 0 to 69. Facescrub dataset contains more than 100K face images of 530 people. 
The MegaFace challenge evaluates performance of face recognition algorithms by increasing the numbers of “distractors” (going from 10 to 1M) in the gallery set. In order to evaluate the face recognition algorithms fairly, MegaFace challenge has two protocols including large or small training sets. If a training set has more than 0.5M images and 20K subjects, it is considered as large. Otherwise, it is considered as small.\n\nNOTE: This dataset has been retired." }, { "dkey": "UNITOPATHO", "dval": "Histopathological characterization of colorectal polyps allows to tailor patients' management and follow up with the ultimate aim of avoiding or promptly detecting an invasive carcinoma. Colorectal polyps characterization relies on the histological analysis of tissue samples to determine the polyps malignancy and dysplasia grade. Deep neural networks achieve outstanding accuracy in medical patterns recognition, however they require large sets of annotated training images. We introduce UniToPatho, an annotated dataset of 9536 hematoxylin and eosin stained patches extracted from 292 whole-slide images, meant for training deep neural networks for colorectal polyps classification and adenomas grading. The slides are acquired through a Hamamatsu Nanozoomer S210 scanner at 20× magnification (0.4415 μm/px)" }, { "dkey": "AFLW", "dval": "The Annotated Facial Landmarks in the Wild (AFLW) is a large-scale collection of annotated face images gathered from Flickr, exhibiting a large variety in appearance (e.g., pose, expression, ethnicity, age, gender) as well as general imaging and environmental conditions. In total about 25K faces are annotated with up to 21 landmarks per image." }, { "dkey": "300W", "dval": "The 300-W is a face dataset that consists of 300 Indoor and 300 Outdoor in-the-wild images. It covers a large variation of identity, expression, illumination conditions, pose, occlusion and face size. The images were downloaded from google.com by making queries such as “party”, “conference”, “protests”, “football” and “celebrities”. Compared to the rest of in-the-wild datasets, the 300-W database contains a larger percentage of partially-occluded images and covers more expressions than the common “neutral” or “smile”, such as “surprise” or “scream”.\nImages were annotated with the 68-point mark-up using a semi-automatic methodology. The images of the database were carefully selected so that they represent a characteristic sample of challenging but natural face instances under totally unconstrained conditions. Thus, methods that achieve accurate performance on the 300-W database can demonstrate the same accuracy in most realistic cases.\nMany images of the database contain more than one annotated faces (293 images with 1 face, 53 images with 2 faces and 53 images with [3, 7] faces). Consequently, the database consists of 600 annotated face instances, but 399 unique images. Finally, there is a large variety of face sizes. Specifically, 49.3% of the faces have size in the range [48.6k, 2.0M] and the overall mean size is 85k (about 292 × 292) pixels." } ]
I want to train a re-identification model from images.
person re-identification images
2,019
[ "SYSU-MM01", "Airport", "CityFlow", "DukeMTMC-reID", "CUHK02", "VeRi-776" ]
[ "VIPeR", "Market-1501", "CUHK03" ]
[ { "dkey": "VIPeR", "dval": "The Viewpoint Invariant Pedestrian Recognition (VIPeR) dataset includes 632 people and two outdoor cameras under different viewpoints and light conditions. Each person has one image per camera and each image has been scaled to be 128×48 pixels. It provides the pose angle of each person as 0° (front), 45°, 90° (right), 135°, and 180° (back)." }, { "dkey": "Market-1501", "dval": "Market-1501 is a large-scale public benchmark dataset for person re-identification. It contains 1501 identities which are captured by six different cameras, and 32,668 pedestrian image bounding-boxes obtained using the Deformable Part Models pedestrian detector. Each person has 3.6 images on average at each viewpoint. The dataset is split into two parts: 750 identities are utilized for training and the remaining 751 identities are used for testing. In the official testing protocol 3,368 query images are selected as probe set to find the correct match across 19,732 reference gallery images." }, { "dkey": "CUHK03", "dval": "The CUHK03 consists of 14,097 images of 1,467 different identities, where 6 campus cameras were deployed for image collection and each identity is captured by 2 campus cameras. This dataset provides two types of annotations, one by manually labelled bounding boxes and the other by bounding boxes produced by an automatic detector. The dataset also provides 20 random train/test splits in which 100 identities are selected for testing and the rest for training" }, { "dkey": "SYSU-MM01", "dval": "The SYSU-MM01 is a dataset collected for the Visible-Infrared Re-identification problem. The images in the dataset were obtained from 491 different persons by recording them using 4 RGB and 2 infrared cameras. Within the dataset, the persons are divided into 3 fixed splits to create training, validation and test sets. In the training set, there are 20284 RGB and 9929 infrared images of 296 persons. The validation set contains 1974 RGB and 1980 infrared images of 99 persons. The testing set consists of the images of 96 persons where 3803 infrared images are used as query and 301 randomly selected RGB images are used as gallery." }, { "dkey": "Airport", "dval": "The Airport dataset is a dataset for person re-identification which consists of 39,902 images and 9,651 identities across six cameras." }, { "dkey": "CityFlow", "dval": "CityFlow is a city-scale traffic camera dataset consisting of more than 3 hours of synchronized HD videos from 40 cameras across 10 intersections, with the longest distance between two simultaneous cameras being 2.5 km. The dataset contains more than 200K annotated bounding boxes covering a wide range of scenes, viewing angles, vehicle models, and urban traffic flow conditions. \n\nCamera geometry and calibration information are provided to aid spatio-temporal analysis. In addition, a subset of the benchmark is made available for the task of image-based vehicle re-identification (ReID)." }, { "dkey": "DukeMTMC-reID", "dval": "The DukeMTMC-reID (Duke Multi-Tracking Multi-Camera ReIDentification) dataset is a subset of the DukeMTMC for image-based person re-ID. The dataset is created from high-resolution videos from 8 different cameras. It is one of the largest pedestrian image datasets wherein images are cropped by hand-drawn bounding boxes. The dataset consists 16,522 training images of 702 identities, 2,228 query images of the other 702 identities and 17,661 gallery images.\n\nNOTE: This dataset has been retracted." 
}, { "dkey": "CUHK02", "dval": "CUHK02 is a dataset for person re-identification. It contains 1,816 identities from two disjoint camera views. Each identity has two samples per camera view making a total of 7,264 images. It is used for Person Re-identification." }, { "dkey": "VeRi-776", "dval": "VeRi-776 is a vehicle re-identification dataset which contains 49,357 images of 776 vehicles from 20 cameras. The dataset is collected in the real traffic scenario, which is close to the setting of CityFlow. The dataset contains bounding boxes, types, colors and brands." } ]
I want to train a fully supervised model for semantic segmentation from images.
semantic segmentation images
2,019
[ "SBD", "SNIPS", "Virtual KITTI", "ConvAI2" ]
[ "DRIVE", "Cityscapes" ]
[ { "dkey": "DRIVE", "dval": "The Digital Retinal Images for Vessel Extraction (DRIVE) dataset is a dataset for retinal vessel segmentation. It consists of a total of JPEG 40 color fundus images; including 7 abnormal pathology cases. The images were obtained from a diabetic retinopathy screening program in the Netherlands. The images were acquired using Canon CR5 non-mydriatic 3CCD camera with FOV equals to 45 degrees. Each image resolution is 584*565 pixels with eight bits per color channel (3 channels). \n\nThe set of 40 images was equally divided into 20 images for the training set and 20 images for the testing set. Inside both sets, for each image, there is circular field of view (FOV) mask of diameter that is approximately 540 pixels. Inside training set, for each image, one manual segmentation by an ophthalmological expert has been applied. Inside testing set, for each image, two manual segmentations have been applied by two different observers, where the first observer segmentation is accepted as the ground-truth for performance evaluation." }, { "dkey": "Cityscapes", "dval": "Cityscapes is a large-scale database which focuses on semantic understanding of urban street scenes. It provides semantic, instance-wise, and dense pixel annotations for 30 classes grouped into 8 categories (flat surfaces, humans, vehicles, constructions, objects, nature, sky, and void). The dataset consists of around 5000 fine annotated images and 20000 coarse annotated ones. Data was captured in 50 cities during several months, daytimes, and good weather conditions. It was originally recorded as video so the frames were manually selected to have the following features: large number of dynamic objects, varying scene layout, and varying background." }, { "dkey": "SBD", "dval": "The Semantic Boundaries Dataset (SBD) is a dataset for predicting pixels on the boundary of the object (as opposed to the inside of the object with semantic segmentation). The dataset consists of 11318 images from the trainval set of the PASCAL VOC2011 challenge, divided into 8498 training and 2820 test images. This dataset has object instance boundaries with accurate figure/ground masks that are also labeled with one of 20 Pascal VOC classes." }, { "dkey": "SNIPS", "dval": "The SNIPS Natural Language Understanding benchmark is a dataset of over 16,000 crowdsourced queries distributed among 7 user intents of various complexity:\n\n\nSearchCreativeWork (e.g. Find me the I, Robot television show),\nGetWeather (e.g. Is it windy in Boston, MA right now?),\nBookRestaurant (e.g. I want to book a highly rated restaurant in Paris tomorrow night),\nPlayMusic (e.g. Play the last track from Beyoncé off Spotify),\nAddToPlaylist (e.g. Add Diamonds to my roadtrip playlist),\nRateBook (e.g. Give 6 stars to Of Mice and Men),\nSearchScreeningEvent (e.g. Check the showtimes for Wonder Woman in Paris).\nThe training set contains of 13,084 utterances, the validation set and the test set contain 700 utterances each, with 100 queries per intent." }, { "dkey": "Virtual KITTI", "dval": "Virtual KITTI is a photo-realistic synthetic video dataset designed to learn and evaluate computer vision models for several video understanding tasks: object detection and multi-object tracking, scene-level and instance-level semantic segmentation, optical flow, and depth estimation.\n\nVirtual KITTI contains 50 high-resolution monocular videos (21,260 frames) generated from five different virtual worlds in urban settings under different imaging and weather conditions. 
These worlds were created using the Unity game engine and a novel real-to-virtual cloning method. These photo-realistic synthetic videos are automatically, exactly, and fully annotated for 2D and 3D multi-object tracking and at the pixel level with category, instance, flow, and depth labels (cf. below for download links)." }, { "dkey": "ConvAI2", "dval": "The ConvAI2 NeurIPS competition aimed at finding approaches to creating high-quality dialogue agents capable of meaningful open domain conversation. The ConvAI2 dataset for training models is based on the PERSONA-CHAT dataset. The speaker pairs each have assigned profiles coming from a set of 1155 possible personas (at training time), each consisting of at least 5 profile sentences, setting aside 100 never seen before personas for validation. As the original PERSONA-CHAT test set was released, a new hidden test set consisted of 100 new personas and over 1,015 dialogs was created by crowdsourced workers.\n\nTo avoid modeling that takes advantage of trivial word overlap, additional rewritten sets of the same train and test personas were crowdsourced, with related sentences that are rephrases, generalizations or specializations, rendering the task much more challenging. For example “I just got my nails done” is revised as “I love to pamper myself on a regular basis” and “I am on a diet now” is revised as “I need to lose weight.”\n\nThe training, validation and hidden test sets consists of 17,878, 1,000 and 1,015 dialogues, respectively." } ]
I want to add an adversarial training strategy for Transformer-based models.
natural language understanding commonsense reasoning text
2,019
[ "SNIPS", "ConvAI2", "DailyDialog++", "C4", "APRICOT", "BDD100K", "FreiHAND" ]
[ "MRPC", "GLUE", "ARC", "CommonsenseQA" ]
[ { "dkey": "MRPC", "dval": "Microsoft Research Paraphrase Corpus (MRPC) is a corpus consists of 5,801 sentence pairs collected from newswire articles. Each pair is labelled if it is a paraphrase or not by human annotators. The whole set is divided into a training subset (4,076 sentence pairs of which 2,753 are paraphrases) and a test subset (1,725 pairs of which 1,147 are paraphrases)." }, { "dkey": "GLUE", "dval": "General Language Understanding Evaluation (GLUE) benchmark is a collection of nine natural language understanding tasks, including single-sentence tasks CoLA and SST-2, similarity and paraphrasing tasks MRPC, STS-B and QQP, and natural language inference tasks MNLI, QNLI, RTE and WNLI." }, { "dkey": "ARC", "dval": "The AI2’s Reasoning Challenge (ARC) dataset is a multiple-choice question-answering dataset, containing questions from science exams from grade 3 to grade 9. The dataset is split in two partitions: Easy and Challenge, where the latter partition contains the more difficult questions that require reasoning. Most of the questions have 4 answer choices, with <1% of all the questions having either 3 or 5 answer choices. ARC includes a supporting KB of 14.3M unstructured text passages." }, { "dkey": "CommonsenseQA", "dval": "The CommonsenseQA is a dataset for commonsense question answering task. The dataset consists of 12,247 questions with 5 choices each.\nThe dataset was generated by Amazon Mechanical Turk workers in the following process (an example is provided in parentheses):\n\n\na crowd worker observes a source concept from ConceptNet (“River”) and three target concepts (“Waterfall”, “Bridge”, “Valley”) that are all related by the same ConceptNet relation (“AtLocation”),\nthe worker authors three questions, one per target concept, such that only that particular target concept is the answer, while the other two distractor concepts are not, (“Where on a river can you hold a cup upright to catch water on a sunny day?”, “Where can I stand on a river to see water falling without getting wet?”, “I’m crossing the river, my feet are wet but my body is dry, where am I?”)\nfor each question, another worker chooses one additional distractor from Concept Net (“pebble”, “stream”, “bank”), and the author another distractor (“mountain”, “bottom”, “island”) manually." }, { "dkey": "SNIPS", "dval": "The SNIPS Natural Language Understanding benchmark is a dataset of over 16,000 crowdsourced queries distributed among 7 user intents of various complexity:\n\n\nSearchCreativeWork (e.g. Find me the I, Robot television show),\nGetWeather (e.g. Is it windy in Boston, MA right now?),\nBookRestaurant (e.g. I want to book a highly rated restaurant in Paris tomorrow night),\nPlayMusic (e.g. Play the last track from Beyoncé off Spotify),\nAddToPlaylist (e.g. Add Diamonds to my roadtrip playlist),\nRateBook (e.g. Give 6 stars to Of Mice and Men),\nSearchScreeningEvent (e.g. Check the showtimes for Wonder Woman in Paris).\nThe training set contains of 13,084 utterances, the validation set and the test set contain 700 utterances each, with 100 queries per intent." }, { "dkey": "ConvAI2", "dval": "The ConvAI2 NeurIPS competition aimed at finding approaches to creating high-quality dialogue agents capable of meaningful open domain conversation. The ConvAI2 dataset for training models is based on the PERSONA-CHAT dataset. 
The speaker pairs each have assigned profiles coming from a set of 1155 possible personas (at training time), each consisting of at least 5 profile sentences, setting aside 100 never seen before personas for validation. As the original PERSONA-CHAT test set was released, a new hidden test set consisted of 100 new personas and over 1,015 dialogs was created by crowdsourced workers.\n\nTo avoid modeling that takes advantage of trivial word overlap, additional rewritten sets of the same train and test personas were crowdsourced, with related sentences that are rephrases, generalizations or specializations, rendering the task much more challenging. For example “I just got my nails done” is revised as “I love to pamper myself on a regular basis” and “I am on a diet now” is revised as “I need to lose weight.”\n\nThe training, validation and hidden test sets consists of 17,878, 1,000 and 1,015 dialogues, respectively." }, { "dkey": "DailyDialog++", "dval": "Consists of (i) five relevant responses for each context and (ii) five adversarially crafted irrelevant responses for each context." }, { "dkey": "C4", "dval": "C4 is a colossal, cleaned version of Common Crawl's web crawl corpus. It was based on Common Crawl dataset: https://commoncrawl.org. It was used to train the T5 text-to-text Transformer models.\n\nThe dataset can be downloaded in a pre-processed form from allennlp." }, { "dkey": "APRICOT", "dval": "APRICOT is a collection of over 1,000 annotated photographs of printed adversarial patches in public locations. The patches target several object categories for three COCO-trained detection models, and the photos represent natural variation in position, distance, lighting conditions, and viewing angle." }, { "dkey": "BDD100K", "dval": "Datasets drive vision progress, yet existing driving datasets are impoverished in terms of visual content and supported tasks to study multitask learning for autonomous driving. Researchers are usually constrained to study a small set of problems on one dataset, while real-world computer vision applications require performing tasks of various complexities. We construct BDD100K, the largest driving video dataset with 100K videos and 10 tasks to evaluate the exciting progress of image recognition algorithms on autonomous driving. The dataset possesses geographic, environmental, and weather diversity, which is useful for training models that are less likely to be surprised by new conditions. Based on this diverse dataset, we build a benchmark for heterogeneous multitask learning and study how to solve the tasks together. Our experiments show that special training strategies are needed for existing models to perform such heterogeneous tasks. BDD100K opens the door for future studies in this important venue. More detail is at the dataset home page." }, { "dkey": "FreiHAND", "dval": "FreiHAND is a 3D hand pose dataset which records different hand actions performed by 32 people. For each hand image, MANO-based 3D hand pose annotations are provided. It currently contains 32,560 unique training samples and 3960 unique samples for evaluation. The training samples are recorded with a green screen background allowing for background removal. In addition, it applies three different post processing strategies to training samples for data augmentation. However, these post processing strategies are not applied to evaluation samples." } ]
We present an unsupervised approach to reconstruct the 3D shape of an object from an image
3d object reconstruction images
2,018
[ "2D-3D-S", "T-LESS", "Pix3D", "WikiCREM", "IntrA" ]
[ "ShapeNet", "CelebA" ]
[ { "dkey": "ShapeNet", "dval": "ShapeNet is a large scale repository for 3D CAD models developed by researchers from Stanford University, Princeton University and the Toyota Technological Institute at Chicago, USA. The repository contains over 300M models with 220,000 classified into 3,135 classes arranged using WordNet hypernym-hyponym relationships. ShapeNet Parts subset contains 31,693 meshes categorised into 16 common object classes (i.e. table, chair, plane etc.). Each shape's ground truth contains 2-5 parts (with a total of 50 part classes)." }, { "dkey": "CelebA", "dval": "CelebFaces Attributes dataset contains 202,599 face images of the size 178×218 from 10,177 celebrities, each annotated with 40 binary labels indicating facial attributes like hair color, gender and age." }, { "dkey": "2D-3D-S", "dval": "The 2D-3D-S dataset provides a variety of mutually registered modalities from 2D, 2.5D and 3D domains, with instance-level semantic and geometric annotations. It covers over 6,000 m2 collected in 6 large-scale indoor areas that originate from 3 different buildings. It contains over 70,000 RGB images, along with the corresponding depths, surface normals, semantic annotations, global XYZ images (all in forms of both regular and 360° equirectangular images) as well as camera information. It also includes registered raw and semantically annotated 3D meshes and point clouds. The dataset enables development of joint and cross-modal learning models and potentially unsupervised approaches utilizing the regularities present in large-scale indoor spaces." }, { "dkey": "T-LESS", "dval": "T-LESS is a dataset for estimating the 6D pose, i.e. translation and rotation, of texture-less rigid objects. The dataset features thirty industry-relevant objects with no significant texture and no discriminative color or reflectance properties. The objects exhibit symmetries and mutual similarities in shape and/or size. Compared to other datasets, a unique property is that some of the objects are parts of others. The dataset includes training and test images that were captured with three synchronized sensors, specifically a structured-light and a time-of-flight RGB-D sensor and a high-resolution RGB camera. There are approximately 39K training and 10K test images from each sensor. Additionally, two types of 3D models are provided for each object, i.e. a manually created CAD model and a semi-automatically reconstructed one. Training images depict individual objects against a black background. Test images originate from twenty test scenes having varying complexity, which increases from simple scenes with several isolated objects to very challenging ones with multiple instances of several objects and with a high amount of clutter and occlusion. The images were captured from a systematically sampled view sphere around the object/scene, and are annotated with accurate ground truth 6D poses of all modeled objects." }, { "dkey": "Pix3D", "dval": "The Pix3D dataset is a large-scale benchmark of diverse image-shape pairs with pixel-level 2D-3D alignment. Pix3D has wide applications in shape-related tasks including reconstruction, retrieval, viewpoint estimation, etc." }, { "dkey": "WikiCREM", "dval": "An unsupervised dataset for co-reference resolution. Presented in the publication: Kocijan et al., WikiCREM: A Large Unsupervised Corpus for Coreference Resolution, presented at EMNLP 2019." 
}, { "dkey": "IntrA", "dval": "IntrA is an open-access 3D intracranial aneurysm dataset that makes the application of points-based and mesh-based classification and segmentation models available. This dataset can be used to diagnose intracranial aneurysms and to extract the neck for a clipping operation in medicine and other areas of deep learning, such as normal estimation and surface reconstruction.\n\n103 3D models of entire brain vessels are collected by reconstructing scanned 2D MRA images of patients (the raw 2D MRA images are not published due to medical ethics).\n1909 blood vessel segments are generated automatically from the complete models, including 1694 healthy vessel segments and 215 aneurysm segments for diagnosis.\n116 aneurysm segments are divided and annotated manually by medical experts; the scale of each aneurysm segment is based on the need for a preoperative examination.\nGeodesic distance matrices are computed and included for each annotated 3D segment, because the expression of the geodesic distance is more accurate than Euclidean distance according to the shape of vessels." } ]
I'd like to train a model for person re-identification on a large-scale
person re-identification images
2,018
[ "SYSU-MM01", "MARS", "Partial-iLIDS", "CUHK02", "CityFlow" ]
[ "DukeMTMC-reID", "Market-1501" ]
[ { "dkey": "DukeMTMC-reID", "dval": "The DukeMTMC-reID (Duke Multi-Tracking Multi-Camera ReIDentification) dataset is a subset of the DukeMTMC for image-based person re-ID. The dataset is created from high-resolution videos from 8 different cameras. It is one of the largest pedestrian image datasets wherein images are cropped by hand-drawn bounding boxes. The dataset consists of 16,522 training images of 702 identities, 2,228 query images of the other 702 identities and 17,661 gallery images.\n\nNOTE: This dataset has been retracted." }, { "dkey": "Market-1501", "dval": "Market-1501 is a large-scale public benchmark dataset for person re-identification. It contains 1501 identities which are captured by six different cameras, and 32,668 pedestrian image bounding-boxes obtained using the Deformable Part Models pedestrian detector. Each person has 3.6 images on average at each viewpoint. The dataset is split into two parts: 750 identities are utilized for training and the remaining 751 identities are used for testing. In the official testing protocol 3,368 query images are selected as probe set to find the correct match across 19,732 reference gallery images." }, { "dkey": "SYSU-MM01", "dval": "The SYSU-MM01 is a dataset collected for the Visible-Infrared Re-identification problem. The images in the dataset were obtained from 491 different persons by recording them using 4 RGB and 2 infrared cameras. Within the dataset, the persons are divided into 3 fixed splits to create training, validation and test sets. In the training set, there are 20284 RGB and 9929 infrared images of 296 persons. The validation set contains 1974 RGB and 1980 infrared images of 99 persons. The testing set consists of the images of 96 persons where 3803 infrared images are used as query and 301 randomly selected RGB images are used as gallery." }, { "dkey": "MARS", "dval": "MARS (Motion Analysis and Re-identification Set) is a large scale video based person reidentification dataset, an extension of the Market-1501 dataset. It has been collected from six near-synchronized cameras. It consists of 1,261 different pedestrians, who are captured by at least 2 cameras. The variations in poses, colors and illuminations of pedestrians, as well as the poor image quality, make it very difficult to yield high matching accuracy. Moreover, the dataset contains 3,248 distractors in order to make it more realistic. Deformable Part Model and GMMCP tracker were used to automatically generate the tracklets (mostly 25-50 frames long)." }, { "dkey": "Partial-iLIDS", "dval": "Partial iLIDS is a dataset for occluded person re-identification. It contains a total of 476 images of 119 people captured by 4 non-overlapping cameras. Some images contain people occluded by other individuals or luggage." }, { "dkey": "CUHK02", "dval": "CUHK02 is a dataset for person re-identification. It contains 1,816 identities from two disjoint camera views. Each identity has two samples per camera view making a total of 7,264 images. It is used for Person Re-identification." }, { "dkey": "CityFlow", "dval": "CityFlow is a city-scale traffic camera dataset consisting of more than 3 hours of synchronized HD videos from 40 cameras across 10 intersections, with the longest distance between two simultaneous cameras being 2.5 km. The dataset contains more than 200K annotated bounding boxes covering a wide range of scenes, viewing angles, vehicle models, and urban traffic flow conditions. 
\n\nCamera geometry and calibration information are provided to aid spatio-temporal analysis. In addition, a subset of the benchmark is made available for the task of image-based vehicle re-identification (ReID)." } ]
We propose a new training method to mitigate catastrophic forgetting in continual learning. This method, called Direction Concentration Learning
continual learning images
2,019
[ "QM9", "CORe50", "GSL", "ACDC", "CoNLL-2014 Shared Task: Grammatical Error Correction" ]
[ "ImageNet", "SALICON" ]
[ { "dkey": "ImageNet", "dval": "The ImageNet dataset contains 14,197,122 annotated images according to the WordNet hierarchy. Since 2010 the dataset is used in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), a benchmark in image classification and object detection.\nThe publicly released dataset contains a set of manually annotated training images. A set of test images is also released, with the manual annotations withheld.\nILSVRC annotations fall into one of two categories: (1) image-level annotation of a binary label for the presence or absence of an object class in the image, e.g., “there are cars in this image” but “there are no tigers,” and (2) object-level annotation of a tight bounding box and class label around an object instance in the image, e.g., “there is a screwdriver centered at position (20,25) with width of 50 pixels and height of 30 pixels”.\nThe ImageNet project does not own the copyright of the images, therefore only thumbnails and URLs of images are provided.\n\n\nTotal number of non-empty WordNet synsets: 21841\nTotal number of images: 14197122\nNumber of images with bounding box annotations: 1,034,908\nNumber of synsets with SIFT features: 1000\nNumber of images with SIFT features: 1.2 million" }, { "dkey": "SALICON", "dval": "The SALIency in CONtext (SALICON) dataset contains 10,000 training images, 5,000 validation images and 5,000 test images for saliency prediction. This dataset has been created by annotating saliency in images from MS COCO.\nThe ground-truth saliency annotations include fixations generated from mouse trajectories. To improve the data quality, isolated fixations with low local density have been excluded.\nThe training and validation sets, provided with ground truth, contain the following data fields: image, resolution and gaze.\nThe testing data contains only the image and resolution fields." }, { "dkey": "QM9", "dval": "QM9 provides quantum chemical properties for a relevant, consistent, and comprehensive chemical space of small organic molecules. This database may serve the benchmarking of existing methods, development of new methods, such as hybrid quantum mechanics/machine learning, and systematic identification of structure-property relationships." }, { "dkey": "CORe50", "dval": "CORe50 is a dataset designed for assessing Continual Learning techniques in an Object Recognition context." }, { "dkey": "GSL", "dval": "Dataset Description\nThe Greek Sign Language (GSL) is a large-scale RGB+D dataset, suitable for Sign Language Recognition (SLR) and Sign Language Translation (SLT). The video captures are conducted using an Intel RealSense D435 RGB+D camera at a rate of 30 fps. Both the RGB and the depth streams are acquired in the same spatial resolution of 848×480 pixels. To increase variability in the videos, the camera position and orientation is slightly altered within subsequent recordings. Seven different signers are employed to perform 5 individual and commonly met scenarios in different public services. The average length of each scenario is twenty sentences.\n\nThe dataset contains 10,290 sentence instances, 40,785 gloss instances, 310 unique glosses (vocabulary size) and 331 unique sentences, with 4.23 glosses per sentence on average. Each signer is asked to perform the pre-defined dialogues five consecutive times. In all cases, the simulation considers a deaf person communicating with a single public service employee. The involved signer performs the sequence of glosses of both agents in the discussion. 
For the annotation of each gloss sequence, GSL linguistic experts are involved. The given annotations are at individual gloss and gloss sequence level. A translation of the gloss sentences to spoken Greek is also provided.\n\nEvaluation\nThe GSL dataset includes the 3 evaluation setups:\n\n\n\nSigner-dependent continuous sign language recognition (GSL SD) – roughly 80% of videos are used for training, corresponding to 8,189 instances. The rest 1,063 (10%) were kept for validation and 1,043 (10%) for testing.\n\n\n\nSigner-independent continuous sign language recognition (GSL SI) – the selected test gloss sequences are not used in the training set, while all the individual glosses exist in the training set. In GSL SI, the recordings of one signer are left out for validation and testing (588 and 881 instances, respectively). The rest 8821 instances are utilized for training.\n\n\n\nIsolated gloss sign language recognition (GSL isol.) – The validation set consists of 2,231 gloss instances, the test set 3,500, while the remaining 34,995 are used for training. All 310 unique glosses are seen in the training set.\n\n\n\nFor more info and results, see our paper\n\nPaper Abstract: A Comprehensive Study on Sign Language Recognition Methods, Adaloglou et al. 2020\nIn this paper, a comparative experimental assessment of computer vision-based methods for sign language recognition is conducted. By implementing the most recent deep neural network methods in this field, a thorough evaluation on multiple publicly available datasets is performed. The aim of the present study is to provide insights on sign language recognition, focusing on mapping non-segmented video streams to glosses. For this task, two new sequence training criteria, known from the fields of speech and scene text recognition, are introduced. Furthermore, a plethora of pretraining schemes are thoroughly discussed. Finally, a new RGB+D dataset for the Greek sign language is created. To the best of our knowledge, this is the first sign language dataset where sentence and gloss level annotations are provided for every video capture.\n\nArxiv link" }, { "dkey": "ACDC", "dval": "The goal of the Automated Cardiac Diagnosis Challenge (ACDC) challenge is to:\n\n\ncompare the performance of automatic methods on the segmentation of the left ventricular endocardium and epicardium as the right ventricular endocardium for both end diastolic and end systolic phase instances;\ncompare the performance of automatic methods for the classification of the examinations in five classes (normal case, heart failure with infarction, dilated cardiomyopathy, hypertrophic cardiomyopathy, abnormal right ventricle).\n\nThe overall ACDC dataset was created from real clinical exams acquired at the University Hospital of Dijon. Acquired data were fully anonymized and handled within the regulations set by the local ethical committee of the Hospital of Dijon (France). Our dataset covers several well-defined pathologies with enough cases to (1) properly train machine learning methods and (2) clearly assess the variations of the main physiological parameters obtained from cine-MRI (in particular diastolic volume and ejection fraction). The dataset is composed of 150 exams (all from different patients) divided into 5 evenly distributed subgroups (4 pathological plus 1 healthy subject groups) as described below. 
Furthermore, each patient comes with the following additional information : weight, height, as well as the diastolic and systolic phase instants.\n\nThe database is made available to participants through two datasets from the dedicated online evaluation website after a personal registration: i) a training dataset of 100 patients along with the corresponding manual references based on the analysis of one clinical expert; ii) a testing dataset composed of 50 new patients, without manual annotations but with the patient information given above. The raw input images are provided through the Nifti format." }, { "dkey": "CoNLL-2014 Shared Task: Grammatical Error Correction", "dval": "CoNLL-2014 will continue the CoNLL tradition of having a high profile shared task in natural language processing. This year's shared task will be grammatical error correction, a continuation of the CoNLL shared task in 2013. A participating system in this shared task is given short English texts written by non-native speakers of English. The system detects the grammatical errors present in the input texts, and returns the corrected essays. The shared task in 2014 will require a participating system to correct all errors present in an essay (i.e., not restricted to just five error types in 2013). Also, the evaluation metric will be changed to F0.5, weighting precision twice as much as recall.\n\nThe grammatical error correction task is impactful since it is estimated that hundreds of millions of people in the world are learning English and they benefit directly from an automated grammar checker. However, for many error types, current grammatical error correction methods do not achieve a high performance and thus more research is needed." } ]
An end-to-end network with attention mechanism to improve object recognition.
object recognition images
2,017
[ "ROCStories", "DDD20", "EyeCar", "THEODORE", "iSUN", "CCPD", "E2E" ]
[ "ImageNet", "CIFAR-10" ]
[ { "dkey": "ImageNet", "dval": "The ImageNet dataset contains 14,197,122 annotated images according to the WordNet hierarchy. Since 2010 the dataset is used in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), a benchmark in image classification and object detection.\nThe publicly released dataset contains a set of manually annotated training images. A set of test images is also released, with the manual annotations withheld.\nILSVRC annotations fall into one of two categories: (1) image-level annotation of a binary label for the presence or absence of an object class in the image, e.g., “there are cars in this image” but “there are no tigers,” and (2) object-level annotation of a tight bounding box and class label around an object instance in the image, e.g., “there is a screwdriver centered at position (20,25) with width of 50 pixels and height of 30 pixels”.\nThe ImageNet project does not own the copyright of the images, therefore only thumbnails and URLs of images are provided.\n\n\nTotal number of non-empty WordNet synsets: 21841\nTotal number of images: 14197122\nNumber of images with bounding box annotations: 1,034,908\nNumber of synsets with SIFT features: 1000\nNumber of images with SIFT features: 1.2 million" }, { "dkey": "CIFAR-10", "dval": "The CIFAR-10 dataset (Canadian Institute for Advanced Research, 10 classes) is a subset of the Tiny Images dataset and consists of 60000 32x32 color images. The images are labelled with one of 10 mutually exclusive classes: airplane, automobile (but not truck or pickup truck), bird, cat, deer, dog, frog, horse, ship, and truck (but not pickup truck). There are 6000 images per class with 5000 training and 1000 testing images per class.\n\nThe criteria for deciding whether an image belongs to a class were as follows:\n\n\nThe class name should be high on the list of likely answers to the question “What is in this picture?”\nThe image should be photo-realistic. Labelers were instructed to reject line drawings.\nThe image should contain only one prominent instance of the object to which the class refers.\nThe object may be partially occluded or seen from an unusual viewpoint as long as its identity is still clear to the labeler." }, { "dkey": "ROCStories", "dval": "ROCStories is a collection of commonsense short stories. The corpus consists of 100,000 five-sentence stories. Each story logically follows everyday topics created by Amazon Mechanical Turk workers. These stories contain a variety of commonsense causal and temporal relations between everyday events. Writers also develop an additional 3,742 Story Cloze Test stories which contain a four-sentence-long body and two candidate endings. The endings were collected by asking Mechanical Turk workers to write both a right ending and a wrong ending after eliminating original endings of given short stories. Both endings were required to make logical sense and include at least one character from the main story line. The published ROCStories dataset is constructed with ROCStories as a training set that includes 98,162 stories that exclude candidate wrong endings, an evaluation set, and a test set, which have the same structure (1 body + 2 candidate endings) and a size of 1,871." }, { "dkey": "DDD20", "dval": "The dataset was captured with a DAVIS camera that concurrently streams both dynamic vision sensor (DVS) brightness change events and active pixel sensor (APS) intensity frames. 
DDD20 is the longest event camera end-to-end driving dataset to date with 51h of DAVIS event+frame camera and vehicle human control data collected from 4000km of highway and urban driving under a variety of lighting conditions." }, { "dkey": "EyeCar", "dval": "EyeCar is a dataset of driving videos of vehicles involved in rear-end collisions paired with eye fixation data captured from human subjects. It contains 21 front-view videos that were captured in various traffic, weather, and day light conditions. Each video is 30sec in length and contains typical driving tasks (e.g., lanekeeping, merging-in, and braking) ending to rear-end collisions." }, { "dkey": "THEODORE", "dval": "Recent work about synthetic indoor datasets from perspective views has shown significant improvements of object detection results with Convolutional Neural Networks(CNNs). In this paper, we introduce THEODORE: a novel, large-scale indoor dataset containing 100,000 high- resolution diversified fisheye images with 14 classes. To this end, we create 3D virtual environments of living rooms, different human characters and interior textures. Beside capturing fisheye images from virtual environments we create annotations for semantic segmentation, instance masks and bounding boxes for object detection tasks. We compare our synthetic dataset to state of the art real-world datasets for omnidirectional images. Based on MS COCO weights, we show that our dataset is well suited for fine-tuning CNNs for object detection. Through a high generalization of our models by means of image synthesis and domain randomization we reach an AP up to 0.84 for class person on High-Definition Analytics dataset." }, { "dkey": "iSUN", "dval": "iSUN is a ground truth of gaze traces on images from the SUN dataset. The collection is partitioned into 6,000 images for training, 926 for validation and 2,000 for test." }, { "dkey": "CCPD", "dval": "The Chinese City Parking Dataset (CCPD) is a dataset for license plate detection and recognition. It contains over 250k unique car images, with license plate location annotations." }, { "dkey": "E2E", "dval": "End-to-End NLG Challenge (E2E) aims to assess whether recent end-to-end NLG systems can generate more complex output by learning from datasets containing higher lexical richness, syntactic complexity and diverse discourse phenomena." } ]
This is an implementation of the metric learning algorithm for person re-identification.
person re-identification images
2,018
[ "Airport", "Partial-iLIDS", "CUHK02", "DukeMTMC-reID", "SYSU-MM01" ]
[ "VIPeR", "Market-1501" ]
[ { "dkey": "VIPeR", "dval": "The Viewpoint Invariant Pedestrian Recognition (VIPeR) dataset includes 632 people and two outdoor cameras under different viewpoints and light conditions. Each person has one image per camera and each image has been scaled to be 128×48 pixels. It provides the pose angle of each person as 0° (front), 45°, 90° (right), 135°, and 180° (back)." }, { "dkey": "Market-1501", "dval": "Market-1501 is a large-scale public benchmark dataset for person re-identification. It contains 1501 identities which are captured by six different cameras, and 32,668 pedestrian image bounding-boxes obtained using the Deformable Part Models pedestrian detector. Each person has 3.6 images on average at each viewpoint. The dataset is split into two parts: 750 identities are utilized for training and the remaining 751 identities are used for testing. In the official testing protocol 3,368 query images are selected as probe set to find the correct match across 19,732 reference gallery images." }, { "dkey": "Airport", "dval": "The Airport dataset is a dataset for person re-identification which consists of 39,902 images and 9,651 identities across six cameras." }, { "dkey": "Partial-iLIDS", "dval": "Partial iLIDS is a dataset for occluded person re-identification. It contains a total of 476 images of 119 people captured by 4 non-overlapping cameras. Some images contain people occluded by other individuals or luggage." }, { "dkey": "CUHK02", "dval": "CUHK02 is a dataset for person re-identification. It contains 1,816 identities from two disjoint camera views. Each identity has two samples per camera view making a total of 7,264 images. It is used for Person Re-identification." }, { "dkey": "DukeMTMC-reID", "dval": "The DukeMTMC-reID (Duke Multi-Tracking Multi-Camera ReIDentification) dataset is a subset of the DukeMTMC for image-based person re-ID. The dataset is created from high-resolution videos from 8 different cameras. It is one of the largest pedestrian image datasets wherein images are cropped by hand-drawn bounding boxes. The dataset consists of 16,522 training images of 702 identities, 2,228 query images of the other 702 identities and 17,661 gallery images.\n\nNOTE: This dataset has been retracted." }, { "dkey": "SYSU-MM01", "dval": "The SYSU-MM01 is a dataset collected for the Visible-Infrared Re-identification problem. The images in the dataset were obtained from 491 different persons by recording them using 4 RGB and 2 infrared cameras. Within the dataset, the persons are divided into 3 fixed splits to create training, validation and test sets. In the training set, there are 20284 RGB and 9929 infrared images of 296 persons. The validation set contains 1974 RGB and 1980 infrared images of 99 persons. The testing set consists of the images of 96 persons where 3803 infrared images are used as query and 301 randomly selected RGB images are used as gallery." } ]
We propose a new approach to unsupervised representation learning for videos. We formulate it as a multi
unsupervised representation learning rgb images
2,020
[ "Icentia11K", "CC100", "VoxPopuli", "Localized Narratives", "MLMA Hate Speech", "BDD100K" ]
[ "ImageNet", "UCF101", "HMDB51" ]
[ { "dkey": "ImageNet", "dval": "The ImageNet dataset contains 14,197,122 annotated images according to the WordNet hierarchy. Since 2010 the dataset is used in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), a benchmark in image classification and object detection.\nThe publicly released dataset contains a set of manually annotated training images. A set of test images is also released, with the manual annotations withheld.\nILSVRC annotations fall into one of two categories: (1) image-level annotation of a binary label for the presence or absence of an object class in the image, e.g., “there are cars in this image” but “there are no tigers,” and (2) object-level annotation of a tight bounding box and class label around an object instance in the image, e.g., “there is a screwdriver centered at position (20,25) with width of 50 pixels and height of 30 pixels”.\nThe ImageNet project does not own the copyright of the images, therefore only thumbnails and URLs of images are provided.\n\n\nTotal number of non-empty WordNet synsets: 21841\nTotal number of images: 14197122\nNumber of images with bounding box annotations: 1,034,908\nNumber of synsets with SIFT features: 1000\nNumber of images with SIFT features: 1.2 million" }, { "dkey": "UCF101", "dval": "UCF101 dataset is an extension of UCF50 and consists of 13,320 video clips, which are classified into 101 categories. These 101 categories can be classified into 5 types (Body motion, Human-human interactions, Human-object interactions, Playing musical instruments and Sports). The total length of these video clips is over 27 hours. All the videos are collected from YouTube and have a fixed frame rate of 25 FPS with the resolution of 320 × 240." }, { "dkey": "HMDB51", "dval": "The HMDB51 dataset is a large collection of realistic videos from various sources, including movies and web videos. The dataset is composed of 6,766 video clips from 51 action categories (such as “jump”, “kiss” and “laugh”), with each category containing at least 101 clips. The original evaluation scheme uses three different training/testing splits. In each split, each action class has 70 clips for training and 30 clips for testing. The average accuracy over these three splits is used to measure the final performance." }, { "dkey": "Icentia11K", "dval": "Public ECG dataset of continuous raw signals for representation learning containing 11 thousand patients and 2 billion labelled beats." }, { "dkey": "CC100", "dval": "This corpus comprises of monolingual data for 100+ languages and also includes data for romanized languages. This was constructed using the urls and paragraph indices provided by the CC-Net repository by processing January-December 2018 Commoncrawl snapshots. Each file comprises of documents separated by double-newlines and paragraphs within the same document separated by a newline. The data is generated using the open source CC-Net repository." }, { "dkey": "VoxPopuli", "dval": "VoxPopuli is a large-scale multilingual corpus providing 100K hours of unlabelled speech data in 23 languages. It is the largest open data to date for unsupervised representation learning as well as semi-supervised learning. VoxPopuli also contains 1.8K hours of transcribed speeches in 16 languages and their aligned oral interpretations into 5 other languages totaling 5.1K hours." }, { "dkey": "Localized Narratives", "dval": "We propose Localized Narratives, a new form of multimodal image annotations connecting vision and language. 
We ask annotators to describe an image with their voice while simultaneously hovering their mouse over the region they are describing. Since the voice and the mouse pointer are synchronized, we can localize every single word in the description. This dense visual grounding takes the form of a mouse trace segment per word and is unique to our data. We annotated 849k images with Localized Narratives: the whole COCO, Flickr30k, and ADE20K datasets, and 671k images of Open Images, all of which we make publicly available. We provide an extensive analysis of these annotations showing they are diverse, accurate, and efficient to produce. We also demonstrate their utility on the application of controlled image captioning." }, { "dkey": "MLMA Hate Speech", "dval": "A new multilingual multi-aspect hate speech analysis dataset and use it to test the current state-of-the-art multilingual multitask learning approaches." }, { "dkey": "BDD100K", "dval": "Datasets drive vision progress, yet existing driving datasets are impoverished in terms of visual content and supported tasks to study multitask learning for autonomous driving. Researchers are usually constrained to study a small set of problems on one dataset, while real-world computer vision applications require performing tasks of various complexities. We construct BDD100K, the largest driving video dataset with 100K videos and 10 tasks to evaluate the exciting progress of image recognition algorithms on autonomous driving. The dataset possesses geographic, environmental, and weather diversity, which is useful for training models that are less likely to be surprised by new conditions. Based on this diverse dataset, we build a benchmark for heterogeneous multitask learning and study how to solve the tasks together. Our experiments show that special training strategies are needed for existing models to perform such heterogeneous tasks. BDD100K opens the door for future studies in this important venue. More detail is at the dataset home page." } ]
We present a unified spatiotemporal CNN model for VOS.
video object segmentation
2,019
[ "VATEX", "THEODORE", "FVI", "SONYC-UST-V2", "HRA", "VCR" ]
[ "DAVIS 2017", "DAVIS 2016" ]
[ { "dkey": "DAVIS 2017", "dval": "DAVIS17 is a dataset for video object segmentation. It contains a total of 150 videos - 60 for training, 30 for validation, 60 for testing" }, { "dkey": "DAVIS 2016", "dval": "DAVIS16 is a dataset for video object segmentation which consists of 50 videos in total (30 videos for training and 20 for testing). Per-frame pixel-wise annotations are offered." }, { "dkey": "VATEX", "dval": "VATEX is multilingual, large, linguistically complex, and diverse dataset in terms of both video and natural language descriptions. It has two tasks for video-and-language research: (1) Multilingual Video Captioning, aimed at describing a video in various languages with a compact unified captioning model, and (2) Video-guided Machine Translation, to translate a source language description into the target language using the video information as additional spatiotemporal context." }, { "dkey": "THEODORE", "dval": "Recent work about synthetic indoor datasets from perspective views has shown significant improvements of object detection results with Convolutional Neural Networks(CNNs). In this paper, we introduce THEODORE: a novel, large-scale indoor dataset containing 100,000 high- resolution diversified fisheye images with 14 classes. To this end, we create 3D virtual environments of living rooms, different human characters and interior textures. Beside capturing fisheye images from virtual environments we create annotations for semantic segmentation, instance masks and bounding boxes for object detection tasks. We compare our synthetic dataset to state of the art real-world datasets for omnidirectional images. Based on MS COCO weights, we show that our dataset is well suited for fine-tuning CNNs for object detection. Through a high generalization of our models by means of image synthesis and domain randomization we reach an AP up to 0.84 for class person on High-Definition Analytics dataset." }, { "dkey": "FVI", "dval": "The Free-Form Video Inpainting dataset is a dataset used for training and evaluation video inpainting models. It consists of 1940 videos from the YouTube-VOS dataset and 12,600 videos from the YouTube-BoundingBoxes." }, { "dkey": "SONYC-UST-V2", "dval": "A dataset for urban sound tagging with spatiotemporal information. This dataset is aimed for the development and evaluation of machine listening systems for real-world urban noise monitoring. While datasets of urban recordings are available, this dataset provides the opportunity to investigate how spatiotemporal metadata can aid in the prediction of urban sound tags. SONYC-UST-V2 consists of 18510 audio recordings from the \"Sounds of New York City\" (SONYC) acoustic sensor network, including the timestamp of audio acquisition and location of the sensor." }, { "dkey": "HRA", "dval": "A verified-by-experts repository of 3050 human rights violations photographs, labelled with human rights semantic categories, comprising a list of the types of human rights abuses encountered at present." }, { "dkey": "VCR", "dval": "Visual Commonsense Reasoning (VCR) is a large-scale dataset for cognition-level visual understanding. Given a challenging question about an image, machines need to present two sub-tasks: answer correctly and provide a rationale justifying its answer. The VCR dataset contains over 212K (training), 26K (validation) and 25K (testing) questions, answers and rationales derived from 110K movie scenes." } ]
I want to train a supervised model for 360°
360° image classification images videos
2,017
[ "SNIPS", "ConvAI2", "Stanford Cars", "CLUECorpus2020", "Fusion 360 Gallery", "FDDB-360", "YouTube-8M" ]
[ "PanoContext", "SUN360" ]
[ { "dkey": "PanoContext", "dval": "The PanoContext dataset contains 500 annotated cuboid layouts of indoor environments such as bedrooms and living rooms." }, { "dkey": "SUN360", "dval": "The goal of the SUN360 panorama database is to provide academic researchers in computer vision, computer graphics and computational photography, cognition and neuroscience, human perception, machine learning and data mining, with a comprehensive collection of annotated panoramas covering 360x180-degree full view for a large variety of environmental scenes, places and the objects within. To build the core of the dataset, the authors download a huge number of high-resolution panorama images from the Internet, and group them into different place categories. Then, they designed a WebGL annotation tool for annotating the polygons and cuboids for objects in the scene." }, { "dkey": "SNIPS", "dval": "The SNIPS Natural Language Understanding benchmark is a dataset of over 16,000 crowdsourced queries distributed among 7 user intents of various complexity:\n\n\nSearchCreativeWork (e.g. Find me the I, Robot television show),\nGetWeather (e.g. Is it windy in Boston, MA right now?),\nBookRestaurant (e.g. I want to book a highly rated restaurant in Paris tomorrow night),\nPlayMusic (e.g. Play the last track from Beyoncé off Spotify),\nAddToPlaylist (e.g. Add Diamonds to my roadtrip playlist),\nRateBook (e.g. Give 6 stars to Of Mice and Men),\nSearchScreeningEvent (e.g. Check the showtimes for Wonder Woman in Paris).\nThe training set contains 13,084 utterances, the validation set and the test set contain 700 utterances each, with 100 queries per intent." }, { "dkey": "ConvAI2", "dval": "The ConvAI2 NeurIPS competition aimed at finding approaches to creating high-quality dialogue agents capable of meaningful open domain conversation. The ConvAI2 dataset for training models is based on the PERSONA-CHAT dataset. The speaker pairs each have assigned profiles coming from a set of 1155 possible personas (at training time), each consisting of at least 5 profile sentences, setting aside 100 never seen before personas for validation. As the original PERSONA-CHAT test set was released, a new hidden test set consisting of 100 new personas and over 1,015 dialogs was created by crowdsourced workers.\n\nTo avoid modeling that takes advantage of trivial word overlap, additional rewritten sets of the same train and test personas were crowdsourced, with related sentences that are rephrases, generalizations or specializations, rendering the task much more challenging. For example “I just got my nails done” is revised as “I love to pamper myself on a regular basis” and “I am on a diet now” is revised as “I need to lose weight.”\n\nThe training, validation and hidden test sets consist of 17,878, 1,000 and 1,015 dialogues, respectively." }, { "dkey": "Stanford Cars", "dval": "The Stanford Cars dataset consists of 196 classes of cars with a total of 16,185 images, taken from the rear. The data is divided into almost a 50-50 train/test split with 8,144 training images and 8,041 testing images. Categories are typically at the level of Make, Model, Year. The images are 360×240." }, { "dkey": "CLUECorpus2020", "dval": "CLUECorpus2020 is a large-scale corpus that can be used directly for self-supervised learning such as pre-training of a language model, or language generation. It has 100G raw corpus with 35 billion Chinese characters, which is retrieved from Common Crawl." 
}, { "dkey": "Fusion 360 Gallery", "dval": "The Fusion 360 Gallery Dataset contains rich 2D and 3D geometry data derived from parametric CAD models. The dataset is produced from designs submitted by users of the CAD package Autodesk Fusion 360 to the Autodesk Online Gallery. The dataset provides valuable data for learning how people design, including sequential CAD design data, designs segmented by modelling operation, and design hierarchy and connectivity data." }, { "dkey": "FDDB-360", "dval": "A 360-degree fisheye-like version of the popular FDDB face detection dataset." }, { "dkey": "YouTube-8M", "dval": "The YouTube-8M dataset is a large scale video dataset, which includes more than 7 million videos with 4716 classes labeled by the annotation system. The dataset consists of three parts: training set, validate set, and test set. In the training set, each class contains at least 100 training videos. Features of these videos are extracted by the state-of-the-art popular pre-trained models and released for public use. Each video contains audio and visual modality. Based on the visual information, videos are divided into 24 topics, such as sports, game, arts & entertainment, etc" } ]
I want to apply the proposed adversarial framework for bias control in person re-identification.
person re-identification video
2,019
[ "SYSU-MM01", "Airport", "Partial-iLIDS", "CUHK02", "P-DESTRE" ]
[ "Market-1501", "CUHK03" ]
[ { "dkey": "Market-1501", "dval": "Market-1501 is a large-scale public benchmark dataset for person re-identification. It contains 1501 identities which are captured by six different cameras, and 32,668 pedestrian image bounding-boxes obtained using the Deformable Part Models pedestrian detector. Each person has 3.6 images on average at each viewpoint. The dataset is split into two parts: 750 identities are utilized for training and the remaining 751 identities are used for testing. In the official testing protocol 3,368 query images are selected as probe set to find the correct match across 19,732 reference gallery images." }, { "dkey": "CUHK03", "dval": "The CUHK03 consists of 14,097 images of 1,467 different identities, where 6 campus cameras were deployed for image collection and each identity is captured by 2 campus cameras. This dataset provides two types of annotations, one by manually labelled bounding boxes and the other by bounding boxes produced by an automatic detector. The dataset also provides 20 random train/test splits in which 100 identities are selected for testing and the rest for training." }, { "dkey": "SYSU-MM01", "dval": "The SYSU-MM01 is a dataset collected for the Visible-Infrared Re-identification problem. The images in the dataset were obtained from 491 different persons by recording them using 4 RGB and 2 infrared cameras. Within the dataset, the persons are divided into 3 fixed splits to create training, validation and test sets. In the training set, there are 20284 RGB and 9929 infrared images of 296 persons. The validation set contains 1974 RGB and 1980 infrared images of 99 persons. The testing set consists of the images of 96 persons where 3803 infrared images are used as query and 301 randomly selected RGB images are used as gallery." }, { "dkey": "Airport", "dval": "The Airport dataset is a dataset for person re-identification which consists of 39,902 images and 9,651 identities across six cameras." }, { "dkey": "Partial-iLIDS", "dval": "Partial iLIDS is a dataset for occluded person re-identification. It contains a total of 476 images of 119 people captured by 4 non-overlapping cameras. Some images contain people occluded by other individuals or luggage." }, { "dkey": "CUHK02", "dval": "CUHK02 is a dataset for person re-identification. It contains 1,816 identities from two disjoint camera views. Each identity has two samples per camera view making a total of 7,264 images. It is used for Person Re-identification." }, { "dkey": "P-DESTRE", "dval": "Provides consistent ID annotations across multiple days, making it suitable for the extremely challenging problem of person search, i.e., where no clothing information can be reliably used. Apart from this feature, the P-DESTRE annotations enable the research on UAV-based pedestrian detection, tracking, re-identification and soft biometric solutions." } ]
I propose an unsupervised method to decouple the lexical and affective information from facial configurations.
emotion recognition video images
2,016
[ "Flightmare Simulator", "RAF-DB", "Aff-Wild", "LOL", "SherLIiC" ]
[ "IEMOCAP", "SEMAINE" ]
[ { "dkey": "IEMOCAP", "dval": "Multimodal Emotion Recognition IEMOCAP The IEMOCAP dataset consists of 151 videos of recorded dialogues, with 2 speakers per session for a total of 302 videos across the dataset. Each segment is annotated for the presence of 9 emotions (angry, excited, fear, sad, surprised, frustrated, happy, disappointed and neutral) as well as valence, arousal and dominance. The dataset is recorded across 5 sessions with 5 pairs of speakers." }, { "dkey": "SEMAINE", "dval": "The SEMAINE videos dataset contains spontaneous data capturing the audiovisual interaction between a human and an operator undertaking the role of an avatar with four personalities: Poppy (happy), Obadiah (gloomy), Spike (angry) and Prudence (pragmatic). The audiovisual sequences have been recorded at a video rate of 25 fps (352 x 288 pixels). The dataset consists of audiovisual interaction between a human and an operator undertaking the role of an agent (Sensitive Artificial Agent). SEMAINE video clips have been annotated with couples of epistemic states such as agreement, interested, certain, concentration, and thoughtful with continuous rating (within the range [1,-1]) where -1 indicates most negative rating (i.e: No concentration at all) and +1 defines the highest (Most concentration). Twenty-four recording sessions are used in the Solid SAL scenario. Recordings are made of both the user and the operator, and there are usually four character interactions in each recording session, providing a total of 95 character interactions and 190 video clips." }, { "dkey": "Flightmare Simulator", "dval": "Flightmare is composed of two main components: a configurable rendering engine built on Unity and a flexible physics engine for dynamics simulation. Those two components are totally decoupled and can run independently from each other. Flightmare comes with several desirable features: (i) a large multi-modal sensor suite, including an interface to extract the 3D point-cloud of the scene; (ii) an API for reinforcement learning which can simulate hundreds of quadrotors in parallel; and (iii) an integration with a virtual-reality headset for interaction with the simulated environment. Flightmare can be used for various applications, including path-planning, reinforcement learning, visual-inertial odometry, deep learning, human-robot interaction, etc." }, { "dkey": "RAF-DB", "dval": "The Real-world Affective Faces Database (RAF-DB) is a dataset for facial expression. It contains 29672 facial images tagged with basic or compound expressions by 40 independent taggers. Images in this database are of great variability in subjects' age, gender and ethnicity, head poses, lighting conditions, occlusions, (e.g. glasses, facial hair or self-occlusion), post-processing operations (e.g. various filters and special effects), etc." }, { "dkey": "Aff-Wild", "dval": "Aff-Wild is a dataset for emotion recognition from facial images in a variety of head poses, illumination conditions and occlusions." }, { "dkey": "LOL", "dval": "The LOL dataset is composed of 500 low-light and normal-light image pairs and divided into 485 training pairs and 15 testing pairs. The low-light images contain noise produced during the photo capture process. Most of the images are indoor scenes. All the images have a resolution of 400×600." 
}, { "dkey": "SherLIiC", "dval": "SherLIiC is a testbed for lexical inference in context (LIiC), consisting of 3985 manually annotated inference rule candidates (InfCands), accompanied by (i) ~960k unlabeled InfCands, and (ii) ~190k typed textual relations between Freebase entities extracted from the large entity-linked corpus ClueWeb09. Each InfCand consists of one of these relations, expressed as a lemmatized dependency path, and two argument placeholders, each linked to one or more Freebase types." } ]
I have a supervised model for object categorization.
object categorization image
2,019
[ "COCO-Tasks", "ConvAI2", "PASCAL3D+", "MECCANO", "PMLB" ]
[ "ImageNet", "COCO" ]
[ { "dkey": "ImageNet", "dval": "The ImageNet dataset contains 14,197,122 annotated images according to the WordNet hierarchy. Since 2010 the dataset is used in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), a benchmark in image classification and object detection.\nThe publicly released dataset contains a set of manually annotated training images. A set of test images is also released, with the manual annotations withheld.\nILSVRC annotations fall into one of two categories: (1) image-level annotation of a binary label for the presence or absence of an object class in the image, e.g., “there are cars in this image” but “there are no tigers,” and (2) object-level annotation of a tight bounding box and class label around an object instance in the image, e.g., “there is a screwdriver centered at position (20,25) with width of 50 pixels and height of 30 pixels”.\nThe ImageNet project does not own the copyright of the images, therefore only thumbnails and URLs of images are provided.\n\n\nTotal number of non-empty WordNet synsets: 21841\nTotal number of images: 14197122\nNumber of images with bounding box annotations: 1,034,908\nNumber of synsets with SIFT features: 1000\nNumber of images with SIFT features: 1.2 million" }, { "dkey": "COCO", "dval": "The MS COCO (Microsoft Common Objects in Context) dataset is a large-scale object detection, segmentation, key-point detection, and captioning dataset. The dataset consists of 328K images.\n\nSplits:\nThe first version of MS COCO dataset was released in 2014. It contains 164K images split into training (83K), validation (41K) and test (41K) sets. In 2015 additional test set of 81K images was released, including all the previous test images and 40K new images.\n\nBased on community feedback, in 2017 the training/validation split was changed from 83K/41K to 118K/5K. The new split uses the same images and annotations. The 2017 test set is a subset of 41K images of the 2015 test set. Additionally, the 2017 release contains a new unannotated dataset of 123K images.\n\nAnnotations:\nThe dataset has annotations for\n\n\nobject detection: bounding boxes and per-instance segmentation masks with 80 object categories,\ncaptioning: natural language descriptions of the images (see MS COCO Captions),\nkeypoints detection: containing more than 200,000 images and 250,000 person instances labeled with keypoints (17 possible keypoints, such as left eye, nose, right hip, right ankle),\nstuff image segmentation – per-pixel segmentation masks with 91 stuff categories, such as grass, wall, sky (see MS COCO Stuff),\npanoptic: full scene segmentation, with 80 thing categories (such as person, bicycle, elephant) and a subset of 91 stuff categories (grass, sky, road),\ndense pose: more than 39,000 images and 56,000 person instances labeled with DensePose annotations – each labeled person is annotated with an instance id and a mapping between image pixels that belong to that person body and a template 3D model.\nThe annotations are publicly available only for training and validation images." }, { "dkey": "COCO-Tasks", "dval": "Comprises about 40,000 images where the most suitable objects for 14 tasks have been annotated." }, { "dkey": "ConvAI2", "dval": "The ConvAI2 NeurIPS competition aimed at finding approaches to creating high-quality dialogue agents capable of meaningful open domain conversation. The ConvAI2 dataset for training models is based on the PERSONA-CHAT dataset. 
The speaker pairs each have assigned profiles coming from a set of 1155 possible personas (at training time), each consisting of at least 5 profile sentences, setting aside 100 never seen before personas for validation. As the original PERSONA-CHAT test set was released, a new hidden test set consisting of 100 new personas and over 1,015 dialogs was created by crowdsourced workers.\n\nTo avoid modeling that takes advantage of trivial word overlap, additional rewritten sets of the same train and test personas were crowdsourced, with related sentences that are rephrases, generalizations or specializations, rendering the task much more challenging. For example “I just got my nails done” is revised as “I love to pamper myself on a regular basis” and “I am on a diet now” is revised as “I need to lose weight.”\n\nThe training, validation and hidden test sets consist of 17,878, 1,000 and 1,015 dialogues, respectively." }, { "dkey": "PASCAL3D+", "dval": "The Pascal3D+ multi-view dataset consists of images in the wild, i.e., images of object categories exhibiting high variability, captured under uncontrolled settings, in cluttered scenes and under many different poses. Pascal3D+ contains 12 categories of rigid objects selected from the PASCAL VOC 2012 dataset. These objects are annotated with pose information (azimuth, elevation and distance to camera). Pascal3D+ also adds pose annotated images of these 12 categories from the ImageNet dataset." }, { "dkey": "MECCANO", "dval": "The MECCANO dataset is the first dataset of egocentric videos to study human-object interactions in industrial-like settings.\nThe MECCANO dataset has been acquired in an industrial-like scenario in which subjects built a toy model of a motorbike. We considered 20 object classes which include the 16 classes categorizing the 49 components, the two tools (screwdriver and wrench), the instructions booklet and a partial_model class.\n\nAdditional details related to the MECCANO:\n\n20 different subjects in 2 countries (IT, U.K.)\nVideo Acquisition: 1920x1080 at 12.00 fps\n11 training videos and 9 validation/test videos\n8857 video segments temporally annotated indicating the verbs which describe the actions performed\n64349 active objects annotated with bounding boxes\n12 verb classes, 20 objects classes and 61 action classes" }, { "dkey": "PMLB", "dval": "The Penn Machine Learning Benchmarks (PMLB) is a large, curated set of benchmark datasets used to evaluate and compare supervised machine learning algorithms. These datasets cover a broad range of applications, and include binary/multi-class classification problems and regression problems, as well as combinations of categorical, ordinal, and continuous features." } ]
I want to train a supervised model for vehicle Re-ID.
vehicle re-id images
2,019
[ "SNIPS", "DukeMTMC-reID", "ConvAI2", "Waymo Open Dataset", "EuRoC MAV" ]
[ "VehicleID", "VeRi-776" ]
[ { "dkey": "VehicleID", "dval": "The “VehicleID” dataset contains cars captured during the daytime by multiple real-world surveillance cameras distributed in a small city in China. There are 26,267 vehicles (221,763 images in total) in the entire dataset. Each image is attached with an id label corresponding to its identity in real world. In addition, the dataset contains manually labelled 10319 vehicles (90196 images in total) of their vehicle model information (i.e. “MINI-cooper”, “Audi A6L” and “BWM 1 Series”)." }, { "dkey": "VeRi-776", "dval": "VeRi-776 is a vehicle re-identification dataset which contains 49,357 images of 776 vehicles from 20 cameras. The dataset is collected in the real traffic scenario, which is close to the setting of CityFlow. The dataset contains bounding boxes, types, colors and brands." }, { "dkey": "SNIPS", "dval": "The SNIPS Natural Language Understanding benchmark is a dataset of over 16,000 crowdsourced queries distributed among 7 user intents of various complexity:\n\n\nSearchCreativeWork (e.g. Find me the I, Robot television show),\nGetWeather (e.g. Is it windy in Boston, MA right now?),\nBookRestaurant (e.g. I want to book a highly rated restaurant in Paris tomorrow night),\nPlayMusic (e.g. Play the last track from Beyoncé off Spotify),\nAddToPlaylist (e.g. Add Diamonds to my roadtrip playlist),\nRateBook (e.g. Give 6 stars to Of Mice and Men),\nSearchScreeningEvent (e.g. Check the showtimes for Wonder Woman in Paris).\nThe training set contains 13,084 utterances, the validation set and the test set contain 700 utterances each, with 100 queries per intent." }, { "dkey": "DukeMTMC-reID", "dval": "The DukeMTMC-reID (Duke Multi-Tracking Multi-Camera ReIDentification) dataset is a subset of the DukeMTMC for image-based person re-ID. The dataset is created from high-resolution videos from 8 different cameras. It is one of the largest pedestrian image datasets wherein images are cropped by hand-drawn bounding boxes. The dataset consists of 16,522 training images of 702 identities, 2,228 query images of the other 702 identities and 17,661 gallery images.\n\nNOTE: This dataset has been retracted." }, { "dkey": "ConvAI2", "dval": "The ConvAI2 NeurIPS competition aimed at finding approaches to creating high-quality dialogue agents capable of meaningful open domain conversation. The ConvAI2 dataset for training models is based on the PERSONA-CHAT dataset. The speaker pairs each have assigned profiles coming from a set of 1155 possible personas (at training time), each consisting of at least 5 profile sentences, setting aside 100 never seen before personas for validation. As the original PERSONA-CHAT test set was released, a new hidden test set consisting of 100 new personas and over 1,015 dialogs was created by crowdsourced workers.\n\nTo avoid modeling that takes advantage of trivial word overlap, additional rewritten sets of the same train and test personas were crowdsourced, with related sentences that are rephrases, generalizations or specializations, rendering the task much more challenging. For example “I just got my nails done” is revised as “I love to pamper myself on a regular basis” and “I am on a diet now” is revised as “I need to lose weight.”\n\nThe training, validation and hidden test sets consist of 17,878, 1,000 and 1,015 dialogues, respectively." }, { "dkey": "Waymo Open Dataset", "dval": "The Waymo Open Dataset is comprised of high resolution sensor data collected by autonomous vehicles operated by the Waymo Driver in a wide variety of conditions. 
\n\nThe Waymo Open Dataset currently contains 1,950 segments. The authors plan to grow this dataset in the future. Currently the dataset includes:\n\n\n1,950 segments of 20s each, collected at 10Hz (390,000 frames) in diverse geographies and conditions\nSensor data\n1 mid-range lidar\n4 short-range lidars\n5 cameras (front and sides)\nSynchronized lidar and camera data\nLidar to camera projections\nSensor calibrations and vehicle poses\n\n\nLabeled data\nLabels for 4 object classes - Vehicles, Pedestrians, Cyclists, Signs\nHigh-quality labels for lidar data in 1,200 segments\n12.6M 3D bounding box labels with tracking IDs on lidar data\nHigh-quality labels for camera data in 1,000 segments\n11.8M 2D bounding box labels with tracking IDs on camera data" }, { "dkey": "EuRoC MAV", "dval": "EuRoC MAV is a visual-inertial dataset collected on-board a Micro Aerial Vehicle (MAV). The dataset contains stereo images, synchronized IMU measurements, and accurate motion and structure ground-truth. The dataset facilitates the design and evaluation of visual-inertial localization algorithms on real flight data." } ]
A novel image dataset construction approach by employing multiple textual queries.
image classification images
2,017
[ "FollowUp", "WikiHop", "WebVision", "LEAF-QA", "Fashion 144K" ]
[ "ImageNet", "CIFAR-10" ]
[ { "dkey": "ImageNet", "dval": "The ImageNet dataset contains 14,197,122 annotated images according to the WordNet hierarchy. Since 2010 the dataset is used in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), a benchmark in image classification and object detection.\nThe publicly released dataset contains a set of manually annotated training images. A set of test images is also released, with the manual annotations withheld.\nILSVRC annotations fall into one of two categories: (1) image-level annotation of a binary label for the presence or absence of an object class in the image, e.g., “there are cars in this image” but “there are no tigers,” and (2) object-level annotation of a tight bounding box and class label around an object instance in the image, e.g., “there is a screwdriver centered at position (20,25) with width of 50 pixels and height of 30 pixels”.\nThe ImageNet project does not own the copyright of the images, therefore only thumbnails and URLs of images are provided.\n\n\nTotal number of non-empty WordNet synsets: 21841\nTotal number of images: 14197122\nNumber of images with bounding box annotations: 1,034,908\nNumber of synsets with SIFT features: 1000\nNumber of images with SIFT features: 1.2 million" }, { "dkey": "CIFAR-10", "dval": "The CIFAR-10 dataset (Canadian Institute for Advanced Research, 10 classes) is a subset of the Tiny Images dataset and consists of 60000 32x32 color images. The images are labelled with one of 10 mutually exclusive classes: airplane, automobile (but not truck or pickup truck), bird, cat, deer, dog, frog, horse, ship, and truck (but not pickup truck). There are 6000 images per class with 5000 training and 1000 testing images per class.\n\nThe criteria for deciding whether an image belongs to a class were as follows:\n\n\nThe class name should be high on the list of likely answers to the question “What is in this picture?”\nThe image should be photo-realistic. Labelers were instructed to reject line drawings.\nThe image should contain only one prominent instance of the object to which the class refers.\nThe object may be partially occluded or seen from an unusual viewpoint as long as its identity is still clear to the labeler." }, { "dkey": "FollowUp", "dval": "1000 query triples on 120 tables." }, { "dkey": "WikiHop", "dval": "WikiHop is a multi-hop question-answering dataset. The query of WikiHop is constructed with entities and relations from WikiData, while supporting documents are from WikiReading. A bipartite graph connecting entities and documents is first built and the answer for each query is located by traversal on this graph. Candidates that are type-consistent with the answer and share the same relation in query with the answer are included, resulting in a set of candidates. Thus, WikiHop is a multi-choice style reading comprehension data set. There are totally about 43K samples in training set, 5K samples in development set and 2.5K samples in test set. The test set is not provided. The task is to predict the correct answer given a query and multiple supporting documents.\n\nThe dataset includes a masked variant, where all candidates and their mentions in the supporting documents are replaced by random but consistent placeholder tokens." }, { "dkey": "WebVision", "dval": "The WebVision dataset is designed to facilitate the research on learning visual representation from noisy web data. 
It is a large-scale web image dataset that contains more than 2.4 million images crawled from the Flickr website and Google Images search. \n\nThe same 1,000 concepts as the ILSVRC 2012 dataset are used for querying images, such that a number of existing approaches can be directly investigated and compared to the models trained from the ILSVRC 2012 dataset, and also makes it possible to study the dataset bias issue in the large scale scenario. The textual information accompanying those images (e.g., caption, user tags, or description) is also provided as additional meta information. A validation set containing 50,000 images (50 images per category) is provided to facilitate the algorithmic development." }, { "dkey": "LEAF-QA", "dval": "LEAF-QA, a comprehensive dataset of 250,000 densely annotated figures/charts, constructed from real-world open data sources, along with ~2 million question-answer (QA) pairs querying the structure and semantics of these charts. LEAF-QA highlights the problem of multimodal QA, which is notably different from conventional visual QA (VQA), and has recently gained interest in the community. Furthermore, LEAF-QA is significantly more complex than previous attempts at chart QA, viz. FigureQA and DVQA, which present only limited variations in chart data. LEAF-QA, being constructed from real-world sources, requires a novel architecture to enable question answering." }, { "dkey": "Fashion 144K", "dval": "Fashion 144K is a novel heterogeneous dataset with 144,169 user posts containing diverse image, textual and meta information." } ]
I want to develop an algorithm for optic disc segmentation from fundus images.
optic disc detection images paragraph-level
2,015
[ "G1020", "ADAM", "HRF", "MVSEC" ]
[ "STARE", "DRIVE" ]
[ { "dkey": "STARE", "dval": "The STARE (Structured Analysis of the Retina) dataset is a dataset for retinal vessel segmentation. It contains 20 equal-sized (700×605) color fundus images. For each image, two groups of annotations are provided.." }, { "dkey": "DRIVE", "dval": "The Digital Retinal Images for Vessel Extraction (DRIVE) dataset is a dataset for retinal vessel segmentation. It consists of a total of JPEG 40 color fundus images; including 7 abnormal pathology cases. The images were obtained from a diabetic retinopathy screening program in the Netherlands. The images were acquired using Canon CR5 non-mydriatic 3CCD camera with FOV equals to 45 degrees. Each image resolution is 584*565 pixels with eight bits per color channel (3 channels). \n\nThe set of 40 images was equally divided into 20 images for the training set and 20 images for the testing set. Inside both sets, for each image, there is circular field of view (FOV) mask of diameter that is approximately 540 pixels. Inside training set, for each image, one manual segmentation by an ophthalmological expert has been applied. Inside testing set, for each image, two manual segmentations have been applied by two different observers, where the first observer segmentation is accepted as the ground-truth for performance evaluation." }, { "dkey": "G1020", "dval": "A large publicly available retinal fundus image dataset for glaucoma classification called G1020. The dataset is curated by conforming to standard practices in routine ophthalmology and it is expected to serve as standard benchmark dataset for glaucoma detection. This database consists of 1020 high resolution colour fundus images and provides ground truth annotations for glaucoma diagnosis, optic disc and optic cup segmentation, vertical cup-to-disc ratio, size of neuroretinal rim in inferior, superior, nasal and temporal quadrants, and bounding box location for optic disc." }, { "dkey": "ADAM", "dval": "ADAM is organized as a half day Challenge, a Satellite Event of the ISBI 2020 conference in Iowa City, Iowa, USA.\n\nThe ADAM challenge focuses on the investigation and development of algorithms associated with the diagnosis of Age-related Macular degeneration (AMD) and segmentation of lesions in fundus photos from AMD patients. The goal of the challenge is to evaluate and compare automated algorithms for the detection of AMD on a common dataset of retinal fundus images. We invite the medical image analysis community to participate by developing and testing existing and novel automated fundus classification and segmentation methods.\n\nInstructions: \nADAM: Automatic Detection challenge on Age-related Macular degeneration\n\nLink: https://amd.grand-challenge.org\n\nAge-related macular degeneration, abbreviated as AMD, is a degenerative disorder in the macular region. It mainly occurs in people older than 45 years old and its incidence rate is even higher than diabetic retinopathy in the elderly. \n\nThe etiology of AMD is not fully understood, which could be related to multiple factors, including genetics, chronic photodestruction effect, and nutritional disorder. AMD is classified into Dry AMD and Wet AMD. Dry AMD (also called nonexudative AMD) is not neovascular. It is characterized by progressive atrophy of retinal pigment epithelium (RPE). In the late stage, drusen and the large area of atrophy could be observed under ophthalmoscopy. 
Wet AMD (also called neovascular or exudative AMD), is characterized by active neovascularization under RPE, subsequently causing exudation, hemorrhage, and scarring, and will eventually cause irreversible damage to the photoreceptors and rapid vision loss if left untreated.\n\nAn early diagnosis of AMD is crucial to treatment and prognosis. Fundus photo is one of the basic examinations. The current dataset is composed of AMD and non-AMD (myopia, normal control, etc.) photos. Typical signs of AMD that can be found in these photos include drusen, exudation, hemorrhage, etc. \n\nThe ADAM challenge has 4 tasks:\n\nTask 1: Classification of AMD and non-AMD fundus images.\n\nTask 2: Detection and segmentation of optic disc.\n\nTask 3: Localization of fovea.\n\nTask 4: Detection and Segmentation of lesions from fundus images." }, { "dkey": "HRF", "dval": "The HRF dataset is a dataset for retinal vessel segmentation which comprises 45 images and is organized as 15 subsets. Each subset contains one healthy fundus image, one image of patient with diabetic retinopathy and one glaucoma image. The image sizes are 3,304 x 2,336, with a training/testing image split of 22/23." }, { "dkey": "MVSEC", "dval": "The Multi Vehicle Stereo Event Camera (MVSEC) dataset is a collection of data designed for the development of novel 3D perception algorithms for event based cameras. Stereo event data is collected from car, motorbike, hexacopter and handheld data, and fused with lidar, IMU, motion capture and GPS to provide ground truth pose and depth images." } ]
A novel framework to learn view-specific person re-identification across a large network with more
person re-identification images
2,017
[ "CUHK02", "Airport", "iLIDS-VID", "SYSU-MM01", "CityFlow", "PRID2011" ]
[ "VIPeR", "Market-1501", "CUHK03" ]
[ { "dkey": "VIPeR", "dval": "The Viewpoint Invariant Pedestrian Recognition (VIPeR) dataset includes 632 people and two outdoor cameras under different viewpoints and light conditions. Each person has one image per camera and each image has been scaled to be 128×48 pixels. It provides the pose angle of each person as 0° (front), 45°, 90° (right), 135°, and 180° (back)." }, { "dkey": "Market-1501", "dval": "Market-1501 is a large-scale public benchmark dataset for person re-identification. It contains 1501 identities which are captured by six different cameras, and 32,668 pedestrian image bounding-boxes obtained using the Deformable Part Models pedestrian detector. Each person has 3.6 images on average at each viewpoint. The dataset is split into two parts: 750 identities are utilized for training and the remaining 751 identities are used for testing. In the official testing protocol 3,368 query images are selected as probe set to find the correct match across 19,732 reference gallery images." }, { "dkey": "CUHK03", "dval": "The CUHK03 consists of 14,097 images of 1,467 different identities, where 6 campus cameras were deployed for image collection and each identity is captured by 2 campus cameras. This dataset provides two types of annotations, one by manually labelled bounding boxes and the other by bounding boxes produced by an automatic detector. The dataset also provides 20 random train/test splits in which 100 identities are selected for testing and the rest for training" }, { "dkey": "CUHK02", "dval": "CUHK02 is a dataset for person re-identification. It contains 1,816 identities from two disjoint camera views. Each identity has two samples per camera view making a total of 7,264 images. It is used for Person Re-identification." }, { "dkey": "Airport", "dval": "The Airport dataset is a dataset for person re-identification which consists of 39,902 images and 9,651 identities across six cameras." }, { "dkey": "iLIDS-VID", "dval": "The iLIDS-VID dataset is a person re-identification dataset which involves 300 different pedestrians observed across two disjoint camera views in public open space. It comprises 600 image sequences of 300 distinct individuals, with one pair of image sequences from two camera views for each person. Each image sequence has variable length ranging from 23 to 192 image frames, with an average number of 73. The iLIDS-VID dataset is very challenging due to clothing similarities among people, lighting and viewpoint variations across camera views, cluttered background and random occlusions." }, { "dkey": "SYSU-MM01", "dval": "The SYSU-MM01 is a dataset collected for the Visible-Infrared Re-identification problem. The images in the dataset were obtained from 491 different persons by recording them using 4 RGB and 2 infrared cameras. Within the dataset, the persons are divided into 3 fixed splits to create training, validation and test sets. In the training set, there are 20284 RGB and 9929 infrared images of 296 persons. The validation set contains 1974 RGB and 1980 infrared images of 99 persons. The testing set consists of the images of 96 persons where 3803 infrared images are used as query and 301 randomly selected RGB images are used as gallery." }, { "dkey": "CityFlow", "dval": "CityFlow is a city-scale traffic camera dataset consisting of more than 3 hours of synchronized HD videos from 40 cameras across 10 intersections, with the longest distance between two simultaneous cameras being 2.5 km. 
The dataset contains more than 200K annotated bounding boxes covering a wide range of scenes, viewing angles, vehicle models, and urban traffic flow conditions. \n\nCamera geometry and calibration information are provided to aid spatio-temporal analysis. In addition, a subset of the benchmark is made available for the task of image-based vehicle re-identification (ReID)." }, { "dkey": "PRID2011", "dval": "PRID 2011 is a person reidentification dataset that provides multiple person trajectories recorded from two different static surveillance cameras, monitoring crosswalks and sidewalks. The dataset shows a clean background, and the people in the dataset are rarely occluded. In the dataset, 200 people appear in both views. Among the 200 people, 178 people have more than 20 appearances" } ]
I want to estimate the scale of monocular SLAM using deep learning.
monocular slam images
2,019
[ "DENSE", "MuCo-3DHP", "Make3D", "Rent3D", "DIODE", "Flightmare Simulator" ]
[ "CARLA", "KITTI" ]
[ { "dkey": "CARLA", "dval": "CARLA (CAR Learning to Act) is an open simulator for urban driving, developed as an open-source layer over Unreal Engine 4. Technically, it operates similarly to, as an open source layer over Unreal Engine 4 that provides sensors in the form of RGB cameras (with customizable positions), ground truth depth maps, ground truth semantic segmentation maps with 12 semantic classes designed for driving (road, lane marking, traffic sign, sidewalk and so on), bounding boxes for dynamic objects in the environment, and measurements of the agent itself (vehicle location and orientation)." }, { "dkey": "KITTI", "dval": "KITTI (Karlsruhe Institute of Technology and Toyota Technological Institute) is one of the most popular datasets for use in mobile robotics and autonomous driving. It consists of hours of traffic scenarios recorded with a variety of sensor modalities, including high-resolution RGB, grayscale stereo cameras, and a 3D laser scanner. Despite its popularity, the dataset itself does not contain ground truth for semantic segmentation. However, various researchers have manually annotated parts of the dataset to fit their necessities. Álvarez et al. generated ground truth for 323 images from the road detection challenge with three classes: road, vertical, and sky. Zhang et al. annotated 252 (140 for training and 112 for testing) acquisitions – RGB and Velodyne scans – from the tracking challenge for ten object categories: building, sky, road, vegetation, sidewalk, car, pedestrian, cyclist, sign/pole, and fence. Ros et al. labeled 170 training images and 46 testing images (from the visual odometry challenge) with 11 classes: building, tree, sky, car, sign, road, pedestrian, fence, pole, sidewalk, and bicyclist." }, { "dkey": "DENSE", "dval": "DENSE (Depth Estimation oN Synthetic Events) is a new dataset with synthetic events and perfect ground truth." }, { "dkey": "MuCo-3DHP", "dval": "MuCo-3DHP is a large scale training data set showing real images of sophisticated multi-person interactions and occlusions." }, { "dkey": "Make3D", "dval": "The Make3D dataset is a monocular Depth Estimation dataset that contains 400 single training RGB and depth map pairs, and 134 test samples. The RGB images have high resolution, while the depth maps are provided at low resolution." }, { "dkey": "Rent3D", "dval": "A dataset which contains over 200 apartments." }, { "dkey": "DIODE", "dval": "Diode Dense Indoor/Outdoor DEpth (DIODE) is the first standard dataset for monocular depth estimation comprising diverse indoor and outdoor scenes acquired with the same hardware setup. The training set consists of 8574 indoor and 16884 outdoor samples from 20 scans each. The validation set contains 325 indoor and 446 outdoor samples with each set from 10 different scans. The ground truth density for the indoor training and validation splits are approximately 99.54% and 99%, respectively. The density of the outdoor sets are naturally lower with 67.19% for training and 78.33% for validation subsets. The indoor and outdoor ranges for the dataset are 50m and 300m, respectively." }, { "dkey": "Flightmare Simulator", "dval": "Flightmare is composed of two main components: a configurable rendering engine built on Unity and a flexible physics engine for dynamics simulation. Those two components are totally decoupled and can run independently from each other. 
Flightmare comes with several desirable features: (i) a large multi-modal sensor suite, including an interface to extract the 3D point-cloud of the scene; (ii) an API for reinforcement learning which can simulate hundreds of quadrotors in parallel; and (iii) an integration with a virtual-reality headset for interaction with the simulated environment. Flightmare can be used for various applications, including path-planning, reinforcement learning, visual-inertial odometry, deep learning, human-robot interaction, etc." } ]
We propose an approach for semi-supervised semantic segmentation that exploits unlabeled data. Our approach reduces the
semantic segmentation images
2,019
[ "CSPubSum", "Word Sense Disambiguation: a Unified Evaluation Framework and Empirical Comparison", "TableBank", "VoxPopuli", "Delicious" ]
[ "SBD", "Cityscapes" ]
[ { "dkey": "SBD", "dval": "The Semantic Boundaries Dataset (SBD) is a dataset for predicting pixels on the boundary of the object (as opposed to the inside of the object with semantic segmentation). The dataset consists of 11318 images from the trainval set of the PASCAL VOC2011 challenge, divided into 8498 training and 2820 test images. This dataset has object instance boundaries with accurate figure/ground masks that are also labeled with one of 20 Pascal VOC classes." }, { "dkey": "Cityscapes", "dval": "Cityscapes is a large-scale database which focuses on semantic understanding of urban street scenes. It provides semantic, instance-wise, and dense pixel annotations for 30 classes grouped into 8 categories (flat surfaces, humans, vehicles, constructions, objects, nature, sky, and void). The dataset consists of around 5000 fine annotated images and 20000 coarse annotated ones. Data was captured in 50 cities during several months, daytimes, and good weather conditions. It was originally recorded as video so the frames were manually selected to have the following features: large number of dynamic objects, varying scene layout, and varying background." }, { "dkey": "CSPubSum", "dval": "CSPubSum is a dataset for summarisation of computer science publications, created by exploiting a large resource of author provided summaries and show straightforward ways of extending it further." }, { "dkey": "Word Sense Disambiguation: a Unified Evaluation Framework and Empirical Comparison", "dval": "The Evaluation framework of Raganato et al. 2017 includes two training sets (SemCor-Miller et al., 1993- and OMSTI-Taghipour and Ng, 2015-) and five test sets from the Senseval/SemEval series (Edmonds and Cotton, 2001; Snyder and Palmer, 2004; Pradhan et al., 2007; Navigli et al., 2013; Moro and Navigli, 2015), standardized to the same format and sense inventory (i.e. WordNet 3.0).\n\nTypically, there are two kinds of approach for WSD: supervised (which make use of sense-annotated training data) and knowledge-based (which make use of the properties of lexical resources).\n\nSupervised: The most widely used training corpus used is SemCor, with 226,036 sense annotations from 352 documents manually annotated. All supervised systems in the evaluation table are trained on SemCor. Some supervised methods, particularly neural architectures, usually employ the SemEval 2007 dataset as development set (marked by *). The most usual baseline is the Most Frequent Sense (MFS) heuristic, which selects for each target word the most frequent sense in the training data.\n\nKnowledge-based: Knowledge-based systems usually exploit WordNet or BabelNet as semantic network. The first sense given by the underlying sense inventory (i.e. WordNet 3.0) is included as a baseline.\n\nDescription from NLP Progress" }, { "dkey": "TableBank", "dval": "To address the need for a standard open domain table benchmark dataset, the author propose a novel weak supervision approach to automatically create the TableBank, which is orders of magnitude larger than existing human labeled datasets for table analysis. Distinct from traditional weakly supervised training set, our approach can obtain not only large scale but also high quality training data.\n\nNowadays, there are a great number of electronic documents on the web such as Microsoft Word (.docx) and Latex (.tex) files. These online documents contain mark-up tags for tables in their source code by nature. 
Intuitively, one can manipulate this source code by adding bounding boxes using the mark-up language within each document. For Word documents, the internal Office XML code can be modified where the borderline of each table is identified. For Latex documents, the tex code can also be modified where bounding boxes of tables are recognized. In this way, high-quality labeled data is created for a variety of domains such as business documents, official filings, research papers, etc., which is tremendously beneficial for large-scale table analysis tasks.\n\nThe TableBank dataset consists of a total of 417,234 high-quality labeled tables as well as their original documents in a variety of domains." }, { "dkey": "VoxPopuli", "dval": "VoxPopuli is a large-scale multilingual corpus providing 100K hours of unlabelled speech data in 23 languages. It is the largest open data to date for unsupervised representation learning as well as semi-supervised learning. VoxPopuli also contains 1.8K hours of transcribed speeches in 16 languages and their aligned oral interpretations into 5 other languages totaling 5.1K hours." }, { "dkey": "Delicious", "dval": "Delicious: This dataset contains tagged web pages retrieved from the website delicious.com." } ]
We propose a novel scale-aware pixelwise object proposal network, SPOP-net, which can
object proposal images
2,016
[ "TableBank", "MaskedFace-Net", "THEODORE", "Localized Narratives", "PA-100K", "LOGO-Net" ]
[ "COCO", "SBD" ]
[ { "dkey": "COCO", "dval": "The MS COCO (Microsoft Common Objects in Context) dataset is a large-scale object detection, segmentation, key-point detection, and captioning dataset. The dataset consists of 328K images.\n\nSplits:\nThe first version of MS COCO dataset was released in 2014. It contains 164K images split into training (83K), validation (41K) and test (41K) sets. In 2015 additional test set of 81K images was released, including all the previous test images and 40K new images.\n\nBased on community feedback, in 2017 the training/validation split was changed from 83K/41K to 118K/5K. The new split uses the same images and annotations. The 2017 test set is a subset of 41K images of the 2015 test set. Additionally, the 2017 release contains a new unannotated dataset of 123K images.\n\nAnnotations:\nThe dataset has annotations for\n\n\nobject detection: bounding boxes and per-instance segmentation masks with 80 object categories,\ncaptioning: natural language descriptions of the images (see MS COCO Captions),\nkeypoints detection: containing more than 200,000 images and 250,000 person instances labeled with keypoints (17 possible keypoints, such as left eye, nose, right hip, right ankle),\nstuff image segmentation – per-pixel segmentation masks with 91 stuff categories, such as grass, wall, sky (see MS COCO Stuff),\npanoptic: full scene segmentation, with 80 thing categories (such as person, bicycle, elephant) and a subset of 91 stuff categories (grass, sky, road),\ndense pose: more than 39,000 images and 56,000 person instances labeled with DensePose annotations – each labeled person is annotated with an instance id and a mapping between image pixels that belong to that person body and a template 3D model.\nThe annotations are publicly available only for training and validation images." }, { "dkey": "SBD", "dval": "The Semantic Boundaries Dataset (SBD) is a dataset for predicting pixels on the boundary of the object (as opposed to the inside of the object with semantic segmentation). The dataset consists of 11318 images from the trainval set of the PASCAL VOC2011 challenge, divided into 8498 training and 2820 test images. This dataset has object instance boundaries with accurate figure/ground masks that are also labeled with one of 20 Pascal VOC classes." }, { "dkey": "TableBank", "dval": "To address the need for a standard open domain table benchmark dataset, the author propose a novel weak supervision approach to automatically create the TableBank, which is orders of magnitude larger than existing human labeled datasets for table analysis. Distinct from traditional weakly supervised training set, our approach can obtain not only large scale but also high quality training data.\n\nNowadays, there are a great number of electronic documents on the web such as Microsoft Word (.docx) and Latex (.tex) files. These online documents contain mark-up tags for tables in their source code by nature. Intuitively, one can manipulate these source code by adding bounding box using the mark-up language within each document. For Word documents, the internal Office XML code can be modified where the borderline of each table is identified. For Latex documents, the tex code can be also modified where bounding boxes of tables are recognized. 
In this way, high-quality labeled data is created for a variety of domains such as business documents, official filings, research papers, etc., which is tremendously beneficial for large-scale table analysis tasks.\n\nThe TableBank dataset consists of a total of 417,234 high-quality labeled tables as well as their original documents in a variety of domains." }, { "dkey": "MaskedFace-Net", "dval": "Proposes three types of masked face detection dataset; namely, the Correctly Masked Face Dataset (CMFD), the Incorrectly Masked Face Dataset (IMFD) and their combination for the global masked face detection (MaskedFace-Net)." }, { "dkey": "THEODORE", "dval": "Recent work about synthetic indoor datasets from perspective views has shown significant improvements of object detection results with Convolutional Neural Networks (CNNs). In this paper, we introduce THEODORE: a novel, large-scale indoor dataset containing 100,000 high-resolution diversified fisheye images with 14 classes. To this end, we create 3D virtual environments of living rooms, different human characters and interior textures. Besides capturing fisheye images from virtual environments, we create annotations for semantic segmentation, instance masks and bounding boxes for object detection tasks. We compare our synthetic dataset to state-of-the-art real-world datasets for omnidirectional images. Based on MS COCO weights, we show that our dataset is well suited for fine-tuning CNNs for object detection. Through a high generalization of our models by means of image synthesis and domain randomization we reach an AP up to 0.84 for class person on the High-Definition Analytics dataset." }, { "dkey": "Localized Narratives", "dval": "We propose Localized Narratives, a new form of multimodal image annotations connecting vision and language. We ask annotators to describe an image with their voice while simultaneously hovering their mouse over the region they are describing. Since the voice and the mouse pointer are synchronized, we can localize every single word in the description. This dense visual grounding takes the form of a mouse trace segment per word and is unique to our data. We annotated 849k images with Localized Narratives: the whole COCO, Flickr30k, and ADE20K datasets, and 671k images of Open Images, all of which we make publicly available. We provide an extensive analysis of these annotations showing they are diverse, accurate, and efficient to produce. We also demonstrate their utility on the application of controlled image captioning." }, { "dkey": "PA-100K", "dval": "PA-100K is a recently proposed large pedestrian attribute dataset, with 100,000 images in total collected from outdoor surveillance cameras. It is split into 80,000 images for the training set, and 10,000 for the validation set and 10,000 for the test set. This dataset is labeled by 26 binary attributes. The common feature existing in both selected datasets is that the images are blurry due to the relatively low resolution and the positive ratio of each binary attribute is low." }, { "dkey": "LOGO-Net", "dval": "A large-scale logo image database for logo detection and brand recognition from real-world product images." } ]
We propose a deep architecture to incorporate the transferred semantic attributes from images and videos for video captioning
video captioning
2,017
[ "Cholec80", "SICK", "DPC-Captions", "BVI-DVC", "COG" ]
[ "COCO", "MSVD" ]
[ { "dkey": "COCO", "dval": "The MS COCO (Microsoft Common Objects in Context) dataset is a large-scale object detection, segmentation, key-point detection, and captioning dataset. The dataset consists of 328K images.\n\nSplits:\nThe first version of MS COCO dataset was released in 2014. It contains 164K images split into training (83K), validation (41K) and test (41K) sets. In 2015 additional test set of 81K images was released, including all the previous test images and 40K new images.\n\nBased on community feedback, in 2017 the training/validation split was changed from 83K/41K to 118K/5K. The new split uses the same images and annotations. The 2017 test set is a subset of 41K images of the 2015 test set. Additionally, the 2017 release contains a new unannotated dataset of 123K images.\n\nAnnotations:\nThe dataset has annotations for\n\n\nobject detection: bounding boxes and per-instance segmentation masks with 80 object categories,\ncaptioning: natural language descriptions of the images (see MS COCO Captions),\nkeypoints detection: containing more than 200,000 images and 250,000 person instances labeled with keypoints (17 possible keypoints, such as left eye, nose, right hip, right ankle),\nstuff image segmentation – per-pixel segmentation masks with 91 stuff categories, such as grass, wall, sky (see MS COCO Stuff),\npanoptic: full scene segmentation, with 80 thing categories (such as person, bicycle, elephant) and a subset of 91 stuff categories (grass, sky, road),\ndense pose: more than 39,000 images and 56,000 person instances labeled with DensePose annotations – each labeled person is annotated with an instance id and a mapping between image pixels that belong to that person body and a template 3D model.\nThe annotations are publicly available only for training and validation images." }, { "dkey": "MSVD", "dval": "The Microsoft Research Video Description Corpus (MSVD) dataset consists of about 120K sentences collected during the summer of 2010. Workers on Mechanical Turk were paid to watch a short video snippet and then summarize the action in a single sentence. The result is a set of roughly parallel descriptions of more than 2,000 video snippets. Because the workers were urged to complete the task in the language of their choice, both paraphrase and bilingual alternations are captured in the data." }, { "dkey": "Cholec80", "dval": "Cholec80 is an endoscopic video dataset containing 80 videos of cholecystectomy surgeries performed by 13 surgeons. The videos are captured at 25 fps and downsampled to 1 fps for processing. The whole dataset is labeled with the phase and tool presence annotations. The phases have been defined by a senior surgeon in Strasbourg hospital, France. Since the tools are sometimes hardly visible in the images and thus difficult to be recognized visually, a tool is defined as present in an image if at least half of the tool tip is visible.\n\n[https://arxiv.org/pdf/1602.03012.pdf]" }, { "dkey": "SICK", "dval": "The Sentences Involving Compositional Knowledge (SICK) dataset is a dataset for compositional distributional semantics. It includes a large number of sentence pairs that are rich in the lexical, syntactic and semantic phenomena. Each pair of sentences is annotated in two dimensions: relatedness and entailment. The relatedness score ranges from 1 to 5, and Pearson’s r is used for evaluation; the entailment relation is categorical, consisting of entailment, contradiction, and neutral. 
There are 4439 pairs in the train split, 495 in the trial split used for development and 4906 in the test split. The sentence pairs are generated from image and video caption datasets before being paired up using some algorithm." }, { "dkey": "DPC-Captions", "dval": "This is an open-source image captions dataset for the aesthetic evaluation of images.\nThe dataset is called DPC-Captions, which contains comments of up to five aesthetic attributes of one image through knowledge transfer from a full-annotated small-scale dataset." }, { "dkey": "BVI-DVC", "dval": "Contains 800 sequences at various spatial resolutions from 270p to 2160p and has been evaluated on ten existing network architectures for four different coding tools." }, { "dkey": "COG", "dval": "A configurable visual question and answer dataset (COG) to parallel experiments in humans and animals. COG is much simpler than the general problem of video analysis, yet it addresses many of the problems relating to visual and logical reasoning and memory -- problems that remain challenging for modern deep learning architectures." } ]
A method for clustering based on Deep Discriminative Clustering (DDC), which performs unsupervised
clustering image, text audio
2,019
[ "DUC 2004", "NewSHead", "WGISD", "SFEW" ]
[ "ImageNet", "AudioSet", "CIFAR-10" ]
[ { "dkey": "ImageNet", "dval": "The ImageNet dataset contains 14,197,122 annotated images according to the WordNet hierarchy. Since 2010 the dataset is used in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), a benchmark in image classification and object detection.\nThe publicly released dataset contains a set of manually annotated training images. A set of test images is also released, with the manual annotations withheld.\nILSVRC annotations fall into one of two categories: (1) image-level annotation of a binary label for the presence or absence of an object class in the image, e.g., “there are cars in this image” but “there are no tigers,” and (2) object-level annotation of a tight bounding box and class label around an object instance in the image, e.g., “there is a screwdriver centered at position (20,25) with width of 50 pixels and height of 30 pixels”.\nThe ImageNet project does not own the copyright of the images, therefore only thumbnails and URLs of images are provided.\n\n\nTotal number of non-empty WordNet synsets: 21841\nTotal number of images: 14197122\nNumber of images with bounding box annotations: 1,034,908\nNumber of synsets with SIFT features: 1000\nNumber of images with SIFT features: 1.2 million" }, { "dkey": "AudioSet", "dval": "Audioset is an audio event dataset, which consists of over 2M human-annotated 10-second video clips. These clips are collected from YouTube, therefore many of which are in poor-quality and contain multiple sound-sources. A hierarchical ontology of 632 event classes is employed to annotate these data, which means that the same sound could be annotated as different labels. For example, the sound of barking is annotated as Animal, Pets, and Dog. All the videos are split into Evaluation/Balanced-Train/Unbalanced-Train set." }, { "dkey": "CIFAR-10", "dval": "The CIFAR-10 dataset (Canadian Institute for Advanced Research, 10 classes) is a subset of the Tiny Images dataset and consists of 60000 32x32 color images. The images are labelled with one of 10 mutually exclusive classes: airplane, automobile (but not truck or pickup truck), bird, cat, deer, dog, frog, horse, ship, and truck (but not pickup truck). There are 6000 images per class with 5000 training and 1000 testing images per class.\n\nThe criteria for deciding whether an image belongs to a class were as follows:\n\n\nThe class name should be high on the list of likely answers to the question “What is in this picture?”\nThe image should be photo-realistic. Labelers were instructed to reject line drawings.\nThe image should contain only one prominent instance of the object to which the class refers.\nThe object may be partially occluded or seen from an unusual viewpoint as long as its identity is still clear to the labeler." }, { "dkey": "DUC 2004", "dval": "The DUC2004 dataset is a dataset for document summarization. Is designed and used for testing only. It consists of 500 news articles, each paired with four human written summaries. Specifically it consists of 50 clusters of Text REtrieval Conference (TREC) documents, from the following collections: AP newswire, 1998-2000; New York Times newswire, 1998-2000; Xinhua News Agency (English version), 1996-2000. Each cluster contained on average 10 documents." }, { "dkey": "NewSHead", "dval": "The NewSHead dataset contains 369,940 English stories with 932,571 unique URLs, among which there are 359,940 stories for training, 5,000 for validation, and 5,000 for testing, respectively. 
Each news story contains at least three (and up to five) articles.\n\nThe dataset is collected from news stories published between May 2018 and May 2019, where a proprietary clustering algorithm iteratively loads articles published in a time window and groups them based on content similarity. Up to five representative articles are picked from the cluster for generating the story headline. Curators from a crowd-sourcing platform are requested to provide a headline of up to 35 characters to describe the major information covered by the story." }, { "dkey": "WGISD", "dval": "Embrapa Wine Grape Instance Segmentation Dataset (WGISD) contains grape clusters properly annotated in 300 images and a novel annotation methodology for segmentation of complex objects in natural images." }, { "dkey": "SFEW", "dval": "The Static Facial Expressions in the Wild (SFEW) dataset is a dataset for facial expression recognition. It was created by selecting static frames from the AFEW database by computing key frames based on facial point clustering. The most commonly used version, SFEW 2.0, was the benchmarking data for the SReco sub-challenge in EmotiW 2015. SFEW 2.0 has been divided into three sets: Train (958 samples), Val (436 samples) and Test (372 samples). Each of the images is assigned to one of seven expression categories, i.e., anger, disgust, fear, neutral, happiness, sadness, and surprise. The expression labels of the training and validation sets are publicly available, whereas those of the testing set are held back by the challenge organizer." } ]
We introduce a new approach for 3D mesh segmentation, where a
3d semantic segmentation mesh
2,019
[ "IntrA", "SUM", "THEODORE", "MPI FAUST Dataset" ]
[ "ScanNet", "Matterport3D" ]
[ { "dkey": "ScanNet", "dval": "ScanNet is an instance-level indoor RGB-D dataset that includes both 2D and 3D data. It is a collection of labeled voxels rather than points or objects. Up to now, ScanNet v2, the newest version of ScanNet, has collected 1513 annotated scans with an approximate 90% surface coverage. In the semantic segmentation task, this dataset is marked in 20 classes of annotated 3D voxelized objects." }, { "dkey": "Matterport3D", "dval": "The Matterport3D dataset is a large RGB-D dataset for scene understanding in indoor environments. It contains 10,800 panoramic views inside 90 real building-scale scenes, constructed from 194,400 RGB-D images. Each scene is a residential building consisting of multiple rooms and floor levels, and is annotated with surface construction, camera poses, and semantic segmentation." }, { "dkey": "IntrA", "dval": "IntrA is an open-access 3D intracranial aneurysm dataset that makes the application of points-based and mesh-based classification and segmentation models available. This dataset can be used to diagnose intracranial aneurysms and to extract the neck for a clipping operation in medicine and other areas of deep learning, such as normal estimation and surface reconstruction.\n\n103 3D models of entire brain vessels are collected by reconstructing scanned 2D MRA images of patients (the raw 2D MRA images are not published due to medical ethics).\n1909 blood vessel segments are generated automatically from the complete models, including 1694 healthy vessel segments and 215 aneurysm segments for diagnosis.\n116 aneurysm segments are divided and annotated manually by medical experts; the scale of each aneurysm segment is based on the need for a preoperative examination.\nGeodesic distance matrices are computed and included for each annotated 3D segment, because the expression of the geodesic distance is more accurate than Euclidean distance according to the shape of vessels." }, { "dkey": "SUM", "dval": "SUM is a new benchmark dataset of semantic urban meshes which covers about 4 km2 in Helsinki (Finland), with six classes: Ground, Vegetation, Building, Water, Vehicle, and Boat.\n\nThe authors used Helsinki 3D textured meshes as input and annotated them as a benchmark dataset of semantic urban meshes. The Helsinki's raw dataset covers about 12 km2 and was generated in 2017 from oblique aerial images that have about a 7.5 cm ground sampling distance (GSD) using an off-the-shelf commercial software namely ContextCapture.\n\nThe entire region of Helsinki is split into tiles, and each of them covers about 250 m2." }, { "dkey": "THEODORE", "dval": "Recent work about synthetic indoor datasets from perspective views has shown significant improvements of object detection results with Convolutional Neural Networks(CNNs). In this paper, we introduce THEODORE: a novel, large-scale indoor dataset containing 100,000 high- resolution diversified fisheye images with 14 classes. To this end, we create 3D virtual environments of living rooms, different human characters and interior textures. Beside capturing fisheye images from virtual environments we create annotations for semantic segmentation, instance masks and bounding boxes for object detection tasks. We compare our synthetic dataset to state of the art real-world datasets for omnidirectional images. Based on MS COCO weights, we show that our dataset is well suited for fine-tuning CNNs for object detection. 
Through a high generalization of our models by means of image synthesis and domain randomization we reach an AP up to 0.84 for class person on High-Definition Analytics dataset." }, { "dkey": "MPI FAUST Dataset", "dval": "Contains 300 scans of 10 people in a wide range of poses together with an evaluation methodology." } ]
I want to train a supervised model for action recognition from videos.
action recognition video
2,019
[ "EPIC-KITCHENS-100", "Kinetics", "AViD", "Kinetics-600", "NTU RGB+D", "Charades" ]
[ "UCF101", "HMDB51" ]
[ { "dkey": "UCF101", "dval": "UCF101 dataset is an extension of UCF50 and consists of 13,320 video clips, which are classified into 101 categories. These 101 categories can be classified into 5 types (Body motion, Human-human interactions, Human-object interactions, Playing musical instruments and Sports). The total length of these video clips is over 27 hours. All the videos are collected from YouTube and have a fixed frame rate of 25 FPS with the resolution of 320 × 240." }, { "dkey": "HMDB51", "dval": "The HMDB51 dataset is a large collection of realistic videos from various sources, including movies and web videos. The dataset is composed of 6,766 video clips from 51 action categories (such as “jump”, “kiss” and “laugh”), with each category containing at least 101 clips. The original evaluation scheme uses three different training/testing splits. In each split, each action class has 70 clips for training and 30 clips for testing. The average accuracy over these three splits is used to measure the final performance." }, { "dkey": "EPIC-KITCHENS-100", "dval": "This paper introduces the pipeline to scale the largest dataset in egocentric vision EPIC-KITCHENS. The effort culminates in EPIC-KITCHENS-100, a collection of 100 hours, 20M frames, 90K actions in 700 variable-length videos, capturing long-term unscripted activities in 45 environments, using head-mounted cameras. Compared to its previous version (EPIC-KITCHENS-55), EPIC-KITCHENS-100 has been annotated using a novel pipeline that allows denser (54% more actions per minute) and more complete annotations of fine-grained actions (+128% more action segments). This collection also enables evaluating the \"test of time\" - i.e. whether models trained on data collected in 2018 can generalise to new footage collected under the same hypotheses albeit \"two years on\".\nThe dataset is aligned with 6 challenges: action recognition (full and weak supervision), action detection, action anticipation, cross-modal retrieval (from captions), as well as unsupervised domain adaptation for action recognition. For each challenge, we define the task, provide baselines and evaluation metrics." }, { "dkey": "Kinetics", "dval": "The Kinetics dataset is a large-scale, high-quality dataset for human action recognition in videos. The dataset consists of around 500,000 video clips covering 600 human action classes with at least 600 video clips for each action class. Each video clip lasts around 10 seconds and is labeled with a single action class. The videos are collected from YouTube." }, { "dkey": "AViD", "dval": "Is a collection of action videos from many different countries. The motivation is to create a public dataset that would benefit training and pretraining of action recognition models for everybody, rather than making it useful for limited countries." }, { "dkey": "Kinetics-600", "dval": "The Kinetics-600 is a large-scale action recognition dataset which consists of around 480K videos from 600 action categories. The 480K videos are divided into 390K, 30K, 60K for training, validation and test sets, respectively. Each video in the dataset is a 10-second clip of action moment annotated from raw YouTube video. It is an extensions of the Kinetics-400 dataset." }, { "dkey": "NTU RGB+D", "dval": "NTU RGB+D is a large-scale dataset for RGB-D human action recognition. It involves 56,880 samples of 60 action classes collected from 40 subjects. 
The actions can be generally divided into three categories: 40 daily actions (e.g., drinking, eating, reading), nine health-related actions (e.g., sneezing, staggering, falling down), and 11 mutual actions (e.g., punching, kicking, hugging). These actions take place under 17 different scene conditions corresponding to 17 video sequences (i.e., S001–S017). The actions were captured using three cameras with different horizontal imaging viewpoints, namely, −45∘, 0∘, and +45∘. Multi-modality information is provided for action characterization, including depth maps, 3D skeleton joint position, RGB frames, and infrared sequences. The performance evaluation is performed by a cross-subject test that splits the 40 subjects into training and test groups, and by a cross-view test that employs one camera (+45∘) for testing, and the other two cameras for training." }, { "dkey": "Charades", "dval": "The Charades dataset is composed of 9,848 videos of daily indoor activities with an average length of 30 seconds, involving interactions with 46 object classes in 15 types of indoor scenes and containing a vocabulary of 30 verbs leading to 157 action classes. Each video in this dataset is annotated by multiple free-text descriptions, action labels, action intervals and classes of interacting objects. 267 different users were presented with a sentence, which includes objects and actions from a fixed vocabulary, and they recorded a video acting out the sentence. In total, the dataset contains 66,500 temporal annotations for 157 action classes, 41,104 labels for 46 object classes, and 27,847 textual descriptions of the videos. In the standard split there are 7,986 training videos and 1,863 validation videos." } ]
I want to train a supervised model for action recognition from videos.
action recognition video
2,016
[ "EPIC-KITCHENS-100", "Kinetics", "AViD", "Kinetics-600", "NTU RGB+D", "Charades" ]
[ "UCF101", "HMDB51" ]
[ { "dkey": "UCF101", "dval": "UCF101 dataset is an extension of UCF50 and consists of 13,320 video clips, which are classified into 101 categories. These 101 categories can be classified into 5 types (Body motion, Human-human interactions, Human-object interactions, Playing musical instruments and Sports). The total length of these video clips is over 27 hours. All the videos are collected from YouTube and have a fixed frame rate of 25 FPS with the resolution of 320 × 240." }, { "dkey": "HMDB51", "dval": "The HMDB51 dataset is a large collection of realistic videos from various sources, including movies and web videos. The dataset is composed of 6,766 video clips from 51 action categories (such as “jump”, “kiss” and “laugh”), with each category containing at least 101 clips. The original evaluation scheme uses three different training/testing splits. In each split, each action class has 70 clips for training and 30 clips for testing. The average accuracy over these three splits is used to measure the final performance." }, { "dkey": "EPIC-KITCHENS-100", "dval": "This paper introduces the pipeline to scale the largest dataset in egocentric vision EPIC-KITCHENS. The effort culminates in EPIC-KITCHENS-100, a collection of 100 hours, 20M frames, 90K actions in 700 variable-length videos, capturing long-term unscripted activities in 45 environments, using head-mounted cameras. Compared to its previous version (EPIC-KITCHENS-55), EPIC-KITCHENS-100 has been annotated using a novel pipeline that allows denser (54% more actions per minute) and more complete annotations of fine-grained actions (+128% more action segments). This collection also enables evaluating the \"test of time\" - i.e. whether models trained on data collected in 2018 can generalise to new footage collected under the same hypotheses albeit \"two years on\".\nThe dataset is aligned with 6 challenges: action recognition (full and weak supervision), action detection, action anticipation, cross-modal retrieval (from captions), as well as unsupervised domain adaptation for action recognition. For each challenge, we define the task, provide baselines and evaluation metrics." }, { "dkey": "Kinetics", "dval": "The Kinetics dataset is a large-scale, high-quality dataset for human action recognition in videos. The dataset consists of around 500,000 video clips covering 600 human action classes with at least 600 video clips for each action class. Each video clip lasts around 10 seconds and is labeled with a single action class. The videos are collected from YouTube." }, { "dkey": "AViD", "dval": "Is a collection of action videos from many different countries. The motivation is to create a public dataset that would benefit training and pretraining of action recognition models for everybody, rather than making it useful for limited countries." }, { "dkey": "Kinetics-600", "dval": "The Kinetics-600 is a large-scale action recognition dataset which consists of around 480K videos from 600 action categories. The 480K videos are divided into 390K, 30K, 60K for training, validation and test sets, respectively. Each video in the dataset is a 10-second clip of action moment annotated from raw YouTube video. It is an extensions of the Kinetics-400 dataset." }, { "dkey": "NTU RGB+D", "dval": "NTU RGB+D is a large-scale dataset for RGB-D human action recognition. It involves 56,880 samples of 60 action classes collected from 40 subjects. 
The actions can be generally divided into three categories: 40 daily actions (e.g., drinking, eating, reading), nine health-related actions (e.g., sneezing, staggering, falling down), and 11 mutual actions (e.g., punching, kicking, hugging). These actions take place under 17 different scene conditions corresponding to 17 video sequences (i.e., S001–S017). The actions were captured using three cameras with different horizontal imaging viewpoints, namely, −45∘, 0∘, and +45∘. Multi-modality information is provided for action characterization, including depth maps, 3D skeleton joint position, RGB frames, and infrared sequences. The performance evaluation is performed by a cross-subject test that splits the 40 subjects into training and test groups, and by a cross-view test that employs one camera (+45∘) for testing, and the other two cameras for training." }, { "dkey": "Charades", "dval": "The Charades dataset is composed of 9,848 videos of daily indoor activities with an average length of 30 seconds, involving interactions with 46 object classes in 15 types of indoor scenes and containing a vocabulary of 30 verbs leading to 157 action classes. Each video in this dataset is annotated by multiple free-text descriptions, action labels, action intervals and classes of interacting objects. 267 different users were presented with a sentence, which includes objects and actions from a fixed vocabulary, and they recorded a video acting out the sentence. In total, the dataset contains 66,500 temporal annotations for 157 action classes, 41,104 labels for 46 object classes, and 27,847 textual descriptions of the videos. In the standard split there are 7,986 training videos and 1,863 validation videos." } ]
A vessel centerline extraction algorithm is proposed.
vessel centerline extraction retinal images
2,006
[ "ROSE", "RITE", "IntrA", "VOT2018", "Medical Segmentation Decathlon", "Hollywood 3D dataset" ]
[ "STARE", "DRIVE" ]
[ { "dkey": "STARE", "dval": "The STARE (Structured Analysis of the Retina) dataset is a dataset for retinal vessel segmentation. It contains 20 equal-sized (700×605) color fundus images. For each image, two groups of annotations are provided.." }, { "dkey": "DRIVE", "dval": "The Digital Retinal Images for Vessel Extraction (DRIVE) dataset is a dataset for retinal vessel segmentation. It consists of a total of JPEG 40 color fundus images; including 7 abnormal pathology cases. The images were obtained from a diabetic retinopathy screening program in the Netherlands. The images were acquired using Canon CR5 non-mydriatic 3CCD camera with FOV equals to 45 degrees. Each image resolution is 584*565 pixels with eight bits per color channel (3 channels). \n\nThe set of 40 images was equally divided into 20 images for the training set and 20 images for the testing set. Inside both sets, for each image, there is circular field of view (FOV) mask of diameter that is approximately 540 pixels. Inside training set, for each image, one manual segmentation by an ophthalmological expert has been applied. Inside testing set, for each image, two manual segmentations have been applied by two different observers, where the first observer segmentation is accepted as the ground-truth for performance evaluation." }, { "dkey": "ROSE", "dval": "Retinal OCTA SEgmentation dataset (ROSE) consists of 229 OCTA images with vessel annotations at either centerline-level or pixel level." }, { "dkey": "RITE", "dval": "The RITE (Retinal Images vessel Tree Extraction) is a database that enables comparative studies on segmentation or classification of arteries and veins on retinal fundus images, which is established based on the public available DRIVE database (Digital Retinal Images for Vessel Extraction).\n\nRITE contains 40 sets of images, equally separated into a training subset and a test subset, the same as DRIVE. The two subsets are built from the corresponding two subsets in DRIVE. For each set, there is a fundus photograph, a vessel reference standard, and a Arteries/Veins (A/V) reference standard. \n\n\nThe fundus photograph is inherited from DRIVE. \nFor the training set, the vessel reference standard is a modified version of 1st_manual from DRIVE. \nFor the test set, the vessel reference standard is 2nd_manual from DRIVE. \nFor the A/V reference standard, four types of vessels are labelled using four colors based on the vessel reference standard. \nArteries are labelled in red; veins are labelled in blue; the overlapping of arteries and veins are labelled in green; the vessels which are uncertain are labelled in white. \nThe fundus photograph is in tif format. And the vessel reference standard and the A/V reference standard are in png format. \n\nThe dataset is described in more detail in our paper, which you will cite if you use the dataset in any way: \n\nHu Q, Abràmoff MD, Garvin MK. Automated separation of binary overlapping trees in low-contrast color retinal images. Med Image Comput Comput Assist Interv. 2013;16(Pt 2):436-43. PubMed PMID: 24579170 https://doi.org/10.1007/978-3-642-40763-5_54" }, { "dkey": "IntrA", "dval": "IntrA is an open-access 3D intracranial aneurysm dataset that makes the application of points-based and mesh-based classification and segmentation models available. 
This dataset can be used to diagnose intracranial aneurysms and to extract the neck for a clipping operation in medicine and other areas of deep learning, such as normal estimation and surface reconstruction.\n\n103 3D models of entire brain vessels are collected by reconstructing scanned 2D MRA images of patients (the raw 2D MRA images are not published due to medical ethics).\n1909 blood vessel segments are generated automatically from the complete models, including 1694 healthy vessel segments and 215 aneurysm segments for diagnosis.\n116 aneurysm segments are divided and annotated manually by medical experts; the scale of each aneurysm segment is based on the need for a preoperative examination.\nGeodesic distance matrices are computed and included for each annotated 3D segment, because the expression of the geodesic distance is more accurate than Euclidean distance according to the shape of vessels." }, { "dkey": "VOT2018", "dval": "VOT2018 is a dataset for visual object tracking. It consists of 60 challenging videos collected from real-life datasets." }, { "dkey": "Medical Segmentation Decathlon", "dval": "The Medical Segmentation Decathlon is a collection of medical image segmentation datasets. It contains a total of 2,633 three-dimensional images collected across multiple anatomies of interest, multiple modalities and multiple sources. Specifically, it contains data for the following body organs or parts: Brain, Heart, Liver, Hippocampus, Prostate, Lung, Pancreas, Hepatic Vessel, Spleen and Colon." }, { "dkey": "Hollywood 3D dataset", "dval": "A dataset for benchmarking action recognition algorithms in natural environments, while making use of 3D information. The dataset contains around 650 video clips, across 14 classes. In addition, two state of the art action recognition algorithms are extended to make use of the 3D data, and five new interest point detection strategies are also proposed, that extend to the 3D data." } ]
I want to evaluate different face alignment methods on a set of images.
face alignment image
2,015
[ "MegaFace", "CPLFW", "COVERAGE", "SNIPS", "Real Blur Dataset", "ACDC" ]
[ "AFW", "300W" ]
[ { "dkey": "AFW", "dval": "AFW (Annotated Faces in the Wild) is a face detection dataset that contains 205 images with 468 faces. Each face image is labeled with at most 6 landmarks with visibility labels, as well as a bounding box." }, { "dkey": "300W", "dval": "The 300-W is a face dataset that consists of 300 Indoor and 300 Outdoor in-the-wild images. It covers a large variation of identity, expression, illumination conditions, pose, occlusion and face size. The images were downloaded from google.com by making queries such as “party”, “conference”, “protests”, “football” and “celebrities”. Compared to the rest of in-the-wild datasets, the 300-W database contains a larger percentage of partially-occluded images and covers more expressions than the common “neutral” or “smile”, such as “surprise” or “scream”.\nImages were annotated with the 68-point mark-up using a semi-automatic methodology. The images of the database were carefully selected so that they represent a characteristic sample of challenging but natural face instances under totally unconstrained conditions. Thus, methods that achieve accurate performance on the 300-W database can demonstrate the same accuracy in most realistic cases.\nMany images of the database contain more than one annotated faces (293 images with 1 face, 53 images with 2 faces and 53 images with [3, 7] faces). Consequently, the database consists of 600 annotated face instances, but 399 unique images. Finally, there is a large variety of face sizes. Specifically, 49.3% of the faces have size in the range [48.6k, 2.0M] and the overall mean size is 85k (about 292 × 292) pixels." }, { "dkey": "MegaFace", "dval": "MegaFace was a publicly available dataset which is used for evaluating the performance of face recognition algorithms with up to a million distractors (i.e., up to a million people who are not in the test set). MegaFace contains 1M images from 690K individuals with unconstrained pose, expression, lighting, and exposure. MegaFace captures many different subjects rather than many images of a small number of subjects. The gallery set of MegaFace is collected from a subset of Flickr. The probe set of MegaFace used in the challenge consists of two databases; Facescrub and FGNet. FGNet contains 975 images of 82 individuals, each with several images spanning ages from 0 to 69. Facescrub dataset contains more than 100K face images of 530 people. The MegaFace challenge evaluates performance of face recognition algorithms by increasing the numbers of “distractors” (going from 10 to 1M) in the gallery set. In order to evaluate the face recognition algorithms fairly, MegaFace challenge has two protocols including large or small training sets. If a training set has more than 0.5M images and 20K subjects, it is considered as large. Otherwise, it is considered as small.\n\nNOTE: This dataset has been retired." }, { "dkey": "CPLFW", "dval": "A renovation of Labeled Faces in the Wild (LFW), the de facto standard testbed for unconstraint face verification. \n\nThere are three motivations behind the construction of CPLFW benchmark as follows:\n\n1.Establishing a relatively more difficult database to evaluate the performance of real world face verification so the effectiveness of several face verification methods can be fully justified.\n\n2.Continuing the intensive research on LFW with more realistic consideration on pose intra-class variation and fostering the research on cross-pose face verification in unconstrained situation. 
The challenge of CPLFW emphasizes pose difference to further enlarge intra-class variance. Also, negative pairs are deliberately selected to avoid different gender or race. CPLFW considers both the large intra-class variance and the tiny inter-class variance simultaneously.\n\n3.Maintaining the data size, the face verification protocol which provides a 'same/different' benchmark and the same identities in LFW, so one can easily apply CPLFW to evaluate the performance of face verification." }, { "dkey": "COVERAGE", "dval": "COVERAGE contains copymove forged (CMFD) images and their originals with similar but genuine objects (SGOs). COVERAGE is designed to highlight and address tamper detection ambiguity of popular methods, caused by self-similarity within natural images. In COVERAGE, forged–original pairs are annotated with (i) the duplicated and forged region masks, and (ii) the tampering factor/similarity metric. For benchmarking, forgery quality is evaluated using (i) computer vision-based methods, and (ii) human detection performance." }, { "dkey": "SNIPS", "dval": "The SNIPS Natural Language Understanding benchmark is a dataset of over 16,000 crowdsourced queries distributed among 7 user intents of various complexity:\n\n\nSearchCreativeWork (e.g. Find me the I, Robot television show),\nGetWeather (e.g. Is it windy in Boston, MA right now?),\nBookRestaurant (e.g. I want to book a highly rated restaurant in Paris tomorrow night),\nPlayMusic (e.g. Play the last track from Beyoncé off Spotify),\nAddToPlaylist (e.g. Add Diamonds to my roadtrip playlist),\nRateBook (e.g. Give 6 stars to Of Mice and Men),\nSearchScreeningEvent (e.g. Check the showtimes for Wonder Woman in Paris).\nThe training set contains of 13,084 utterances, the validation set and the test set contain 700 utterances each, with 100 queries per intent." }, { "dkey": "Real Blur Dataset", "dval": "The dataset consists of 4,738 pairs of images of 232 different scenes including reference pairs. All images were captured both in the camera raw and JPEG formats, hence generating two datasets: RealBlur-R from the raw images, and RealBlur-J from the JPEG images. Each training set consists of 3,758 image pairs, while each test set consists of 980 image pairs.\n\nThe deblurring result is first aligned to its ground truth sharp image using a homography estimated by the enhanced correlation coefficients method, and PSNR or SSIM is computed in sRGB color space." }, { "dkey": "ACDC", "dval": "The goal of the Automated Cardiac Diagnosis Challenge (ACDC) challenge is to:\n\n\ncompare the performance of automatic methods on the segmentation of the left ventricular endocardium and epicardium as the right ventricular endocardium for both end diastolic and end systolic phase instances;\ncompare the performance of automatic methods for the classification of the examinations in five classes (normal case, heart failure with infarction, dilated cardiomyopathy, hypertrophic cardiomyopathy, abnormal right ventricle).\n\nThe overall ACDC dataset was created from real clinical exams acquired at the University Hospital of Dijon. Acquired data were fully anonymized and handled within the regulations set by the local ethical committee of the Hospital of Dijon (France). Our dataset covers several well-defined pathologies with enough cases to (1) properly train machine learning methods and (2) clearly assess the variations of the main physiological parameters obtained from cine-MRI (in particular diastolic volume and ejection fraction). 
The dataset is composed of 150 exams (all from different patients) divided into 5 evenly distributed subgroups (4 pathological plus 1 healthy subject groups) as described below. Furthermore, each patient comes with the following additional information : weight, height, as well as the diastolic and systolic phase instants.\n\nThe database is made available to participants through two datasets from the dedicated online evaluation website after a personal registration: i) a training dataset of 100 patients along with the corresponding manual references based on the analysis of one clinical expert; ii) a testing dataset composed of 50 new patients, without manual annotations but with the patient information given above. The raw input images are provided through the Nifti format." } ]
A method to recover a parametric 3D human mesh from a single image
3d human mesh recovery single image
2,020
[ "BlendedMVS", "ABC Dataset", "SBU Captions Dataset", "SUM", "2D-3D-S", "FaceScape", "ITOP" ]
[ "MPII", "COCO" ]
[ { "dkey": "MPII", "dval": "The MPII Human Pose Dataset for single person pose estimation is composed of about 25K images of which 15K are training samples, 3K are validation samples and 7K are testing samples (which labels are withheld by the authors). The images are taken from YouTube videos covering 410 different human activities and the poses are manually annotated with up to 16 body joints." }, { "dkey": "COCO", "dval": "The MS COCO (Microsoft Common Objects in Context) dataset is a large-scale object detection, segmentation, key-point detection, and captioning dataset. The dataset consists of 328K images.\n\nSplits:\nThe first version of MS COCO dataset was released in 2014. It contains 164K images split into training (83K), validation (41K) and test (41K) sets. In 2015 additional test set of 81K images was released, including all the previous test images and 40K new images.\n\nBased on community feedback, in 2017 the training/validation split was changed from 83K/41K to 118K/5K. The new split uses the same images and annotations. The 2017 test set is a subset of 41K images of the 2015 test set. Additionally, the 2017 release contains a new unannotated dataset of 123K images.\n\nAnnotations:\nThe dataset has annotations for\n\n\nobject detection: bounding boxes and per-instance segmentation masks with 80 object categories,\ncaptioning: natural language descriptions of the images (see MS COCO Captions),\nkeypoints detection: containing more than 200,000 images and 250,000 person instances labeled with keypoints (17 possible keypoints, such as left eye, nose, right hip, right ankle),\nstuff image segmentation – per-pixel segmentation masks with 91 stuff categories, such as grass, wall, sky (see MS COCO Stuff),\npanoptic: full scene segmentation, with 80 thing categories (such as person, bicycle, elephant) and a subset of 91 stuff categories (grass, sky, road),\ndense pose: more than 39,000 images and 56,000 person instances labeled with DensePose annotations – each labeled person is annotated with an instance id and a mapping between image pixels that belong to that person body and a template 3D model.\nThe annotations are publicly available only for training and validation images." }, { "dkey": "BlendedMVS", "dval": "BlendedMVS is a novel large-scale dataset, to provide sufficient training ground truth for learning-based MVS. The dataset was created by applying a 3D reconstruction pipeline to recover high-quality textured meshes from images of well-selected scenes. Then, these mesh models were rendered to color images and depth maps." }, { "dkey": "ABC Dataset", "dval": "The ABC Dataset is a collection of one million Computer-Aided Design (CAD) models for research of geometric deep learning methods and applications. Each model is a collection of explicitly parametrized curves and surfaces, providing ground truth for differential quantities, patch segmentation, geometric feature detection, and shape reconstruction. Sampling the parametric descriptions of surfaces and curves allows generating data in different formats and resolutions, enabling fair comparisons for a wide range of geometric learning algorithms." }, { "dkey": "SBU Captions Dataset", "dval": "A collection that allows researchers to approach the extremely challenging problem of description generation using relatively simple non-parametric methods and produces surprisingly effective results." 
}, { "dkey": "SUM", "dval": "SUM is a new benchmark dataset of semantic urban meshes which covers about 4 km2 in Helsinki (Finland), with six classes: Ground, Vegetation, Building, Water, Vehicle, and Boat.\n\nThe authors used Helsinki 3D textured meshes as input and annotated them as a benchmark dataset of semantic urban meshes. The Helsinki's raw dataset covers about 12 km2 and was generated in 2017 from oblique aerial images that have about a 7.5 cm ground sampling distance (GSD) using an off-the-shelf commercial software namely ContextCapture.\n\nThe entire region of Helsinki is split into tiles, and each of them covers about 250 m2." }, { "dkey": "2D-3D-S", "dval": "The 2D-3D-S dataset provides a variety of mutually registered modalities from 2D, 2.5D and 3D domains, with instance-level semantic and geometric annotations. It covers over 6,000 m2 collected in 6 large-scale indoor areas that originate from 3 different buildings. It contains over 70,000 RGB images, along with the corresponding depths, surface normals, semantic annotations, global XYZ images (all in forms of both regular and 360° equirectangular images) as well as camera information. It also includes registered raw and semantically annotated 3D meshes and point clouds. The dataset enables development of joint and cross-modal learning models and potentially unsupervised approaches utilizing the regularities present in large-scale indoor spaces." }, { "dkey": "FaceScape", "dval": "FaceScape dataset provides 3D face models, parametric models and multi-view images in large-scale and high-quality. The camera parameters, the age and gender of the subjects are also included. The data have been released to public for non-commercial research purpose." }, { "dkey": "ITOP", "dval": "The ITOP dataset consists of 40K training and 10K testing depth images for each of the front-view and top-view tracks. This dataset contains depth images with 20 actors who perform 15 sequences each and is recorded by two Asus Xtion Pro cameras. The ground-truth of this dataset is the 3D coordinates of 15 body joints." } ]
I'm trying to improve my trilinear interaction model in VQA.
visual question answering images text
2,019
[ "VQA-CP", "VQA-E", "VizWiz", "VQA-HAT", "KnowIT VQA" ]
[ "TDIUC", "Visual7W" ]
[ { "dkey": "TDIUC", "dval": "Task Directed Image Understanding Challenge (TDIUC) dataset is a Visual Question Answering dataset which consists of 1.6M questions and 170K images sourced from MS COCO and the Visual Genome Dataset. The image-question pairs are split into 12 categories and 4 additional evaluation matrices which help evaluate models’ robustness against answer imbalance and its ability to answer questions that require higher reasoning capability. The TDIUC dataset divides the VQA paradigm into 12 different task directed question types. These include questions that require a simpler task (e.g., object presence, color attribute) and more complex tasks (e.g., counting, positional reasoning). The dataset includes also an “Absurd” question category in which questions are irrelevant to the image contents to help balance the dataset." }, { "dkey": "Visual7W", "dval": "Visual7W is a large-scale visual question answering (QA) dataset, with object-level groundings and multimodal answers. Each question starts with one of the seven Ws, what, where, when, who, why, how and which. It is collected from 47,300 COCO iamges and it has 327,929 QA pairs, together with 1,311,756 human-generated multiple-choices and 561,459 object groundings from 36,579 categories." }, { "dkey": "VQA-CP", "dval": "The VQA-CP dataset was constructed by reorganizing VQA v2 such that the correlation between the question type and correct answer differs in the training and test splits. For example, the most common answer to questions starting with What sport… is tennis in the training set, but skiing in the test set. A model that guesses an answer primarily from the question will perform poorly." }, { "dkey": "VQA-E", "dval": "VQA-E is a dataset for Visual Question Answering with Explanation, where the models are required to generate and explanation with the predicted answer. The VQA-E dataset is automatically derived from the VQA v2 dataset by synthesizing a textual explanation for each image-question-answer triple." }, { "dkey": "VizWiz", "dval": "The VizWiz-VQA dataset originates from a natural visual question answering setting where blind people each took an image and recorded a spoken question about it, together with 10 crowdsourced answers per visual question. The proposed challenge addresses the following two tasks for this dataset: predict the answer to a visual question and (2) predict whether a visual question cannot be answered." }, { "dkey": "VQA-HAT", "dval": "VQA-HAT (Human ATtention) is a dataset to evaluate the informative regions of an image depending on the question being asked about it. The dataset consists of human visual attention maps over the images in the original VQA dataset. It contains more than 60k attention maps." }, { "dkey": "KnowIT VQA", "dval": "KnowIT VQA is a video dataset with 24,282 human-generated question-answer pairs about The Big Bang Theory. The dataset combines visual, textual and temporal coherence reasoning together with knowledge-based questions, which need of the experience obtained from the viewing of the series to be answered." } ]
SpanBERT: Span-level pre-training of deep bidirectional transformers for language understanding.
question answering text
2,019
[ "ANLI", "MeDAL", "SuperGLUE", "ASNQ", "Penn Treebank", "SLURP", "XTREME" ]
[ "MRPC", "NewsQA", "GLUE", "SQuAD", "TriviaQA" ]
[ { "dkey": "MRPC", "dval": "Microsoft Research Paraphrase Corpus (MRPC) is a corpus consists of 5,801 sentence pairs collected from newswire articles. Each pair is labelled if it is a paraphrase or not by human annotators. The whole set is divided into a training subset (4,076 sentence pairs of which 2,753 are paraphrases) and a test subset (1,725 pairs of which 1,147 are paraphrases)." }, { "dkey": "NewsQA", "dval": "The NewsQA dataset is a crowd-sourced machine reading comprehension dataset of 120,000 question-answer pairs.\n\n\nDocuments are CNN news articles.\nQuestions are written by human users in natural language.\nAnswers may be multiword passages of the source text.\nQuestions may be unanswerable.\nNewsQA is collected using a 3-stage, siloed process.\nQuestioners see only an article’s headline and highlights.\nAnswerers see the question and the full article, then select an answer passage.\nValidators see the article, the question, and a set of answers that they rank.\nNewsQA is more natural and more challenging than previous datasets." }, { "dkey": "GLUE", "dval": "General Language Understanding Evaluation (GLUE) benchmark is a collection of nine natural language understanding tasks, including single-sentence tasks CoLA and SST-2, similarity and paraphrasing tasks MRPC, STS-B and QQP, and natural language inference tasks MNLI, QNLI, RTE and WNLI." }, { "dkey": "SQuAD", "dval": "The Stanford Question Answering Dataset (SQuAD) is a collection of question-answer pairs derived from Wikipedia articles. In SQuAD, the correct answers of questions can be any sequence of tokens in the given text. Because the questions and answers are produced by humans through crowdsourcing, it is more diverse than some other question-answering datasets. SQuAD 1.1 contains 107,785 question-answer pairs on 536 articles. SQuAD2.0 (open-domain SQuAD, SQuAD-Open), the latest version, combines the 100,000 questions in SQuAD1.1 with over 50,000 un-answerable questions written adversarially by crowdworkers in forms that are similar to the answerable ones." }, { "dkey": "TriviaQA", "dval": "TriviaQA is a realistic text-based question answering dataset which includes 950K question-answer pairs from 662K documents collected from Wikipedia and the web. This dataset is more challenging than standard QA benchmark datasets such as Stanford Question Answering Dataset (SQuAD), as the answers for a question may not be directly obtained by span prediction and the context is very long. TriviaQA dataset consists of both human-verified and machine-generated QA subsets." }, { "dkey": "ANLI", "dval": "The Adversarial Natural Language Inference (ANLI, Nie et al.) is a new large-scale NLI benchmark dataset, collected via an iterative, adversarial human-and-model-in-the-loop procedure. Particular, the data is selected to be difficult to the state-of-the-art models, including BERT and RoBERTa." }, { "dkey": "MeDAL", "dval": "The Medical Dataset for Abbreviation Disambiguation for Natural Language Understanding (MeDAL) is a large medical text dataset curated for abbreviation disambiguation, designed for natural language understanding pre-training in the medical domain. It was published at the ClinicalNLP workshop at EMNLP." }, { "dkey": "SuperGLUE", "dval": "SuperGLUE is a benchmark dataset designed to pose a more rigorous test of language understanding than GLUE. 
SuperGLUE has the same high-level motivation as GLUE: to provide a simple, hard-to-game measure of progress toward general-purpose language understanding technologies for English. SuperGLUE follows the basic design of GLUE: It consists of a public leaderboard built around eight language understanding tasks, drawing on existing data, accompanied by a single-number performance metric, and an analysis toolkit. However, it improves upon GLUE in several ways:\n\n\nMore challenging tasks: SuperGLUE retains the two hardest tasks in GLUE. The remaining tasks were identified from those submitted to an open call for task proposals and were selected based on difficulty for current NLP approaches.\nMore diverse task formats: The task formats in GLUE are limited to sentence- and sentence-pair classification. The authors expand the set of task formats in SuperGLUE to include coreference resolution and question answering (QA).\nComprehensive human baselines: the authors include human performance estimates for all benchmark tasks, which verify that substantial headroom exists between a strong BERT-based baseline and human performance.\nImproved code support: SuperGLUE is distributed with a new, modular toolkit for work on pretraining, multi-task learning, and transfer learning in NLP, built around standard tools including PyTorch (Paszke et al., 2017) and AllenNLP (Gardner et al., 2017).\nRefined usage rules: The conditions for inclusion on the SuperGLUE leaderboard were revamped to ensure fair competition, an informative leaderboard, and full credit assignment to data and task creators." }, { "dkey": "ASNQ", "dval": "A large scale dataset to enable the transfer step, exploiting the Natural Questions dataset." }, { "dkey": "Penn Treebank", "dval": "The English Penn Treebank (PTB) corpus, and in particular the section of the corpus corresponding to the articles of the Wall Street Journal (WSJ), is one of the best-known and most widely used corpora for the evaluation of models for sequence labelling. The task consists of annotating each word with its Part-of-Speech tag. In the most common split of this corpus, sections from 0 to 18 are used for training (38 219 sentences, 912 344 tokens), sections from 19 to 21 are used for validation (5 527 sentences, 131 768 tokens), and sections from 22 to 24 are used for testing (5 462 sentences, 129 654 tokens).\nThe corpus is also commonly used for character-level and word-level Language Modelling." }, { "dkey": "SLURP", "dval": "A new challenging dataset in English spanning 18 domains, which is substantially bigger and linguistically more diverse than existing datasets." }, { "dkey": "XTREME", "dval": "The Cross-lingual TRansfer Evaluation of Multilingual Encoders (XTREME) benchmark was introduced to encourage more research on multilingual transfer learning. XTREME covers 40 typologically diverse languages spanning 12 language families and includes 9 tasks that require reasoning about different levels of syntax or semantics.\n\nThe languages in XTREME are selected to maximize language diversity, coverage in existing tasks, and availability of training data. Among these are many under-studied languages, such as the Dravidian languages Tamil (spoken in southern India, Sri Lanka, and Singapore), Telugu and Malayalam (spoken mainly in southern India), and the Niger-Congo languages Swahili and Yoruba, spoken in Africa." } ]
Adversarial attacks on person re-identification models.
person re-identification images
2,020
[ "Airport", "Partial-iLIDS", "CUHK02", "SYSU-MM01", "APRICOT", "P-DESTRE" ]
[ "Market-1501", "CUHK03" ]
[ { "dkey": "Market-1501", "dval": "Market-1501 is a large-scale public benchmark dataset for person re-identification. It contains 1501 identities which are captured by six different cameras, and 32,668 pedestrian image bounding-boxes obtained using the Deformable Part Models pedestrian detector. Each person has 3.6 images on average at each viewpoint. The dataset is split into two parts: 750 identities are utilized for training and the remaining 751 identities are used for testing. In the official testing protocol 3,368 query images are selected as probe set to find the correct match across 19,732 reference gallery images." }, { "dkey": "CUHK03", "dval": "The CUHK03 consists of 14,097 images of 1,467 different identities, where 6 campus cameras were deployed for image collection and each identity is captured by 2 campus cameras. This dataset provides two types of annotations, one by manually labelled bounding boxes and the other by bounding boxes produced by an automatic detector. The dataset also provides 20 random train/test splits in which 100 identities are selected for testing and the rest for training" }, { "dkey": "Airport", "dval": "The Airport dataset is a dataset for person re-identification which consists of 39,902 images and 9,651 identities across six cameras." }, { "dkey": "Partial-iLIDS", "dval": "Partial iLIDS is a dataset for occluded person person re-identification. It contains a total of 476 images of 119 people captured by 4 non-overlapping cameras. Some images contain people occluded by other individuals or luggage." }, { "dkey": "CUHK02", "dval": "CUHK02 is a dataset for person re-identification. It contains 1,816 identities from two disjoint camera views. Each identity has two samples per camera view making a total of 7,264 images. It is used for Person Re-identification." }, { "dkey": "SYSU-MM01", "dval": "The SYSU-MM01 is a dataset collected for the Visible-Infrared Re-identification problem. The images in the dataset were obtained from 491 different persons by recording them using 4 RGB and 2 infrared cameras. Within the dataset, the persons are divided into 3 fixed splits to create training, validation and test sets. In the training set, there are 20284 RGB and 9929 infrared images of 296 persons. The validation set contains 1974 RGB and 1980 infrared images of 99 persons. The testing set consists of the images of 96 persons where 3803 infrared images are used as query and 301 randomly selected RGB images are used as gallery." }, { "dkey": "APRICOT", "dval": "APRICOT is a collection of over 1,000 annotated photographs of printed adversarial patches in public locations. The patches target several object categories for three COCO-trained detection models, and the photos represent natural variation in position, distance, lighting conditions, and viewing angle." }, { "dkey": "P-DESTRE", "dval": "Provides consistent ID annotations across multiple days, making it suitable for the extremely challenging problem of person search, i.e., where no clothing information can be reliably used. Apart this feature, the P-DESTRE annotations enable the research on UAV-based pedestrian detection, tracking, re-identification and soft biometric solutions." } ]
A CNN-based domain adaptation tracker.
domain adaptation video
2,018
[ "THEODORE", "LAG", "McMaster", "AFLW2000-3D", "FDDB", "MMED", "G3D" ]
[ "OTB", "VOT2017" ]
[ { "dkey": "OTB", "dval": "Object Tracking Benchmark (OTB) is a visual tracking benchmark that is widely used to evaluate the performance of a visual tracking algorithm. The dataset contains a total of 100 sequences and each is annotated frame-by-frame with bounding boxes and 11 challenge attributes. OTB-2013 dataset contains 51 sequences and the OTB-2015 dataset contains all 100 sequences of the OTB dataset." }, { "dkey": "VOT2017", "dval": "VOT2017 is a Visual Object Tracking dataset for different tasks that contains 60 short sequences annotated with 6 different attributes." }, { "dkey": "THEODORE", "dval": "Recent work about synthetic indoor datasets from perspective views has shown significant improvements of object detection results with Convolutional Neural Networks(CNNs). In this paper, we introduce THEODORE: a novel, large-scale indoor dataset containing 100,000 high- resolution diversified fisheye images with 14 classes. To this end, we create 3D virtual environments of living rooms, different human characters and interior textures. Beside capturing fisheye images from virtual environments we create annotations for semantic segmentation, instance masks and bounding boxes for object detection tasks. We compare our synthetic dataset to state of the art real-world datasets for omnidirectional images. Based on MS COCO weights, we show that our dataset is well suited for fine-tuning CNNs for object detection. Through a high generalization of our models by means of image synthesis and domain randomization we reach an AP up to 0.84 for class person on High-Definition Analytics dataset." }, { "dkey": "LAG", "dval": "Includes 5,824 fundus images labeled with either positive glaucoma (2,392) or negative glaucoma (3,432)." }, { "dkey": "McMaster", "dval": "The McMaster dataset is a dataset for color demosaicing, which contains 18 cropped images of size 500×500." }, { "dkey": "AFLW2000-3D", "dval": "AFLW2000-3D is a dataset of 2000 images that have been annotated with image-level 68-point 3D facial landmarks. This dataset is used for evaluation of 3D facial landmark detection models. The head poses are very diverse and often hard to be detected by a CNN-based face detector." }, { "dkey": "FDDB", "dval": "The Face Detection Dataset and Benchmark (FDDB) dataset is a collection of labeled faces from Faces in the Wild dataset. It contains a total of 5171 face annotations, where images are also of various resolution, e.g. 363x450 and 229x410. The dataset incorporates a range of challenges, including difficult pose angles, out-of-focus faces and low resolution. Both greyscale and color images are included." }, { "dkey": "MMED", "dval": "Contains 25,165 textual news articles collected from hundreds of news media sites (e.g., Yahoo News, Google News, CNN News.) and 76,516 image posts shared on Flickr social media, which are annotated according to 412 real-world events. The dataset is collected to explore the problem of organizing heterogeneous data contributed by professionals and amateurs in different data domains, and the problem of transferring event knowledge obtained from one data domain to heterogeneous data domain, thus summarizing the data with different contributors." }, { "dkey": "G3D", "dval": "The Gaming 3D Dataset (G3D) focuses on real-time action recognition in a gaming scenario. 
It contains 10 subjects performing 20 gaming actions: “punch right”, “punch left”, “kick right”, “kick left”, “defend”, “golf swing”, “tennis swing forehand”, “tennis swing backhand”, “tennis serve”, “throw bowling ball”, “aim and fire gun”, “walk”, “run”, “jump”, “climb”, “crouch”, “steer a car”, “wave”, “flap” and “clap”." } ]
We propose a simple, fast and easy-to-implement algorithm, LOSSGRAD (locally optimal step-size in gradient descent).
image segmentation images
2,019
[ "Localized Narratives", "word2word", "GVGAI", "Griddly", "3RScan" ]
[ "CIFAR-10", "CelebA" ]
[ { "dkey": "CIFAR-10", "dval": "The CIFAR-10 dataset (Canadian Institute for Advanced Research, 10 classes) is a subset of the Tiny Images dataset and consists of 60000 32x32 color images. The images are labelled with one of 10 mutually exclusive classes: airplane, automobile (but not truck or pickup truck), bird, cat, deer, dog, frog, horse, ship, and truck (but not pickup truck). There are 6000 images per class with 5000 training and 1000 testing images per class.\n\nThe criteria for deciding whether an image belongs to a class were as follows:\n\n\nThe class name should be high on the list of likely answers to the question “What is in this picture?”\nThe image should be photo-realistic. Labelers were instructed to reject line drawings.\nThe image should contain only one prominent instance of the object to which the class refers.\nThe object may be partially occluded or seen from an unusual viewpoint as long as its identity is still clear to the labeler." }, { "dkey": "CelebA", "dval": "CelebFaces Attributes dataset contains 202,599 face images of the size 178×218 from 10,177 celebrities, each annotated with 40 binary labels indicating facial attributes like hair color, gender and age." }, { "dkey": "Localized Narratives", "dval": "We propose Localized Narratives, a new form of multimodal image annotations connecting vision and language. We ask annotators to describe an image with their voice while simultaneously hovering their mouse over the region they are describing. Since the voice and the mouse pointer are synchronized, we can localize every single word in the description. This dense visual grounding takes the form of a mouse trace segment per word and is unique to our data. We annotated 849k images with Localized Narratives: the whole COCO, Flickr30k, and ADE20K datasets, and 671k images of Open Images, all of which we make publicly available. We provide an extensive analysis of these annotations showing they are diverse, accurate, and efficient to produce. We also demonstrate their utility on the application of controlled image captioning." }, { "dkey": "word2word", "dval": "word2word contains easy-to-use word translations for 3,564 language pairs.\n\n\nA large collection of freely & publicly available bilingual lexicons for 3,564 language pairs across 62 unique languages.\nEasy-to-use Python interface for accessing top-k word translations and for building a new bilingual lexicon from a custom parallel corpus.\nConstructed using a simple approach that yields bilingual lexicons with high coverage and competitive translation quality." }, { "dkey": "GVGAI", "dval": "The General Video Game AI (GVGAI) framework is widely used in research which features a corpus of over 100 single-player games and 60 two-player games. These are fairly small games, each focusing on specific mechanics or skills the players should be able to demonstrate, including clones of classic arcade games such as Space Invaders, puzzle games like Sokoban, adventure games like Zelda or game-theory problems such as the Iterative Prisoners Dilemma. All games are real-time and require players to make decisions in only 40ms at every game tick, although not all games explicitly reward or require fast reactions; in fact, some of the best game-playing approaches add up the time in the beginning of the game to run Breadth-First Search in puzzle games in order to find an accurate solution. 
However, given the large variety of games (many of which are stochastic and difficult to predict accurately), scoring systems and termination conditions, all unknown to the players, highly-adaptive general methods are needed to tackle the diverse challenges proposed." }, { "dkey": "Griddly", "dval": "Griddly is an environment for grid-world based research. Griddly provides a highly optimized game state and rendering engine with a flexible high-level interface for configuring environments. Not only does Griddly offer simple interfaces for single, multi-player and RTS games, but also multiple methods of rendering, configurable partial observability and interfaces for procedural content generation." }, { "dkey": "3RScan", "dval": "A novel dataset and benchmark, which features 1482 RGB-D scans of 478 environments across multiple time steps. Each scene includes several objects whose positions change over time, together with ground truth annotations of object instances and their respective 6DoF mappings among re-scans." } ]
I want to train a semi-supervised model for audio-visual speech recognition.
audio-visual speech recognition video
2,018
[ "VoxPopuli", "YouTube-8M", "DCASE 2018 Task 4", "Libri-Light", "AVE", "VoxCeleb2" ]
[ "LRS2", "LRW" ]
[ { "dkey": "LRS2", "dval": "The Oxford-BBC Lip Reading Sentences 2 (LRS2) dataset is one of the largest publicly available datasets for lip reading sentences in-the-wild. The database consists of mainly news and talk shows from BBC programs. Each sentence is up to 100 characters in length. The training, validation and test sets are divided according to broadcast date. It is a challenging set since it contains thousands of speakers without speaker labels and large variation in head pose. The pre-training set contains 96,318 utterances, the training set contains 45,839 utterances, the validation set contains 1,082 utterances and the test set contains 1,242 utterances." }, { "dkey": "LRW", "dval": "The Lip Reading in the Wild (LRW) dataset a large-scale audio-visual database that contains 500 different words from over 1,000 speakers. Each utterance has 29 frames, whose boundary is centered around the target word. The database is divided into training, validation and test sets. The training set contains at least 800 utterances for each class while the validation and test sets contain 50 utterances." }, { "dkey": "VoxPopuli", "dval": "VoxPopuli is a large-scale multilingual corpus providing 100K hours of unlabelled speech data in 23 languages. It is the largest open data to date for unsupervised representation learning as well as semi-supervised learning. VoxPopuli also contains 1.8K hours of transcribed speeches in 16 languages and their aligned oral interpretations into 5 other languages totaling 5.1K hours." }, { "dkey": "YouTube-8M", "dval": "The YouTube-8M dataset is a large scale video dataset, which includes more than 7 million videos with 4716 classes labeled by the annotation system. The dataset consists of three parts: training set, validate set, and test set. In the training set, each class contains at least 100 training videos. Features of these videos are extracted by the state-of-the-art popular pre-trained models and released for public use. Each video contains audio and visual modality. Based on the visual information, videos are divided into 24 topics, such as sports, game, arts & entertainment, etc" }, { "dkey": "DCASE 2018 Task 4", "dval": "DCASE2018 Task 4 is a dataset for large-scale weakly labeled semi-supervised sound event detection in domestic environments. The data are YouTube video excerpts focusing on domestic context which could be used for example in ambient assisted living applications. The domain was chosen due to the scientific challenges (wide variety of sounds, time-localized events...) and potential industrial applications.\nSpecifically, the task employs a subset of “Audioset: An Ontology And Human-Labeled Dataset For Audio Events” by Google. Audioset consists of an expanding ontology of 632 sound event classes and a collection of 2 million human-labeled 10-second sound clips (less than 21% are shorter than 10-seconds) drawn from 2 million Youtube videos. The ontology is specified as a hierarchical graph of event categories, covering a wide range of human and animal sounds, musical instruments and genres, and common everyday environmental sounds.\nTask 4 focuses on a subset of Audioset that consists of 10 classes of sound events: speech, dog, cat, alarm bell ringing, dishes, frying, blender, running water, vacuum cleaner, electric shaver toothbrush." }, { "dkey": "Libri-Light", "dval": "Libri-Light is a collection of spoken English audio suitable for training speech recognition systems under limited or no supervision. 
It is derived from open-source audio books from the LibriVox project. It contains over 60K hours of audio." }, { "dkey": "AVE", "dval": "To investigate three temporal localization tasks: supervised and weakly-supervised audio-visual event localization, and cross-modality localization." }, { "dkey": "VoxCeleb2", "dval": "VoxCeleb2 is a large scale speaker recognition dataset obtained automatically from open-source media. VoxCeleb2 consists of over a million utterances from over 6k speakers. Since the dataset is collected ‘in the wild’, the speech segments are corrupted with real world noise including laughter, cross-talk, channel effects, music and other sounds. The dataset is also multilingual, with speech from speakers of 145 different nationalities, covering a wide range of accents, ages, ethnicities and languages. The dataset is audio-visual, so is also useful for a number of other applications, for example – visual speech synthesis, speech separation, cross-modal transfer from face to voice or vice versa and training face recognition from video to complement existing face recognition datasets." } ]
I want to train a fully-supervised model for semantic segmentation from images.
semantic segmentation images
2,018
[ "SBD", "SNIPS", "Virtual KITTI", "ConvAI2" ]
[ "ImageNet", "CIFAR-10" ]
[ { "dkey": "ImageNet", "dval": "The ImageNet dataset contains 14,197,122 annotated images according to the WordNet hierarchy. Since 2010 the dataset is used in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), a benchmark in image classification and object detection.\nThe publicly released dataset contains a set of manually annotated training images. A set of test images is also released, with the manual annotations withheld.\nILSVRC annotations fall into one of two categories: (1) image-level annotation of a binary label for the presence or absence of an object class in the image, e.g., “there are cars in this image” but “there are no tigers,” and (2) object-level annotation of a tight bounding box and class label around an object instance in the image, e.g., “there is a screwdriver centered at position (20,25) with width of 50 pixels and height of 30 pixels”.\nThe ImageNet project does not own the copyright of the images, therefore only thumbnails and URLs of images are provided.\n\n\nTotal number of non-empty WordNet synsets: 21841\nTotal number of images: 14197122\nNumber of images with bounding box annotations: 1,034,908\nNumber of synsets with SIFT features: 1000\nNumber of images with SIFT features: 1.2 million" }, { "dkey": "CIFAR-10", "dval": "The CIFAR-10 dataset (Canadian Institute for Advanced Research, 10 classes) is a subset of the Tiny Images dataset and consists of 60000 32x32 color images. The images are labelled with one of 10 mutually exclusive classes: airplane, automobile (but not truck or pickup truck), bird, cat, deer, dog, frog, horse, ship, and truck (but not pickup truck). There are 6000 images per class with 5000 training and 1000 testing images per class.\n\nThe criteria for deciding whether an image belongs to a class were as follows:\n\n\nThe class name should be high on the list of likely answers to the question “What is in this picture?”\nThe image should be photo-realistic. Labelers were instructed to reject line drawings.\nThe image should contain only one prominent instance of the object to which the class refers.\nThe object may be partially occluded or seen from an unusual viewpoint as long as its identity is still clear to the labeler." }, { "dkey": "SBD", "dval": "The Semantic Boundaries Dataset (SBD) is a dataset for predicting pixels on the boundary of the object (as opposed to the inside of the object with semantic segmentation). The dataset consists of 11318 images from the trainval set of the PASCAL VOC2011 challenge, divided into 8498 training and 2820 test images. This dataset has object instance boundaries with accurate figure/ground masks that are also labeled with one of 20 Pascal VOC classes." }, { "dkey": "SNIPS", "dval": "The SNIPS Natural Language Understanding benchmark is a dataset of over 16,000 crowdsourced queries distributed among 7 user intents of various complexity:\n\n\nSearchCreativeWork (e.g. Find me the I, Robot television show),\nGetWeather (e.g. Is it windy in Boston, MA right now?),\nBookRestaurant (e.g. I want to book a highly rated restaurant in Paris tomorrow night),\nPlayMusic (e.g. Play the last track from Beyoncé off Spotify),\nAddToPlaylist (e.g. Add Diamonds to my roadtrip playlist),\nRateBook (e.g. Give 6 stars to Of Mice and Men),\nSearchScreeningEvent (e.g. Check the showtimes for Wonder Woman in Paris).\nThe training set contains of 13,084 utterances, the validation set and the test set contain 700 utterances each, with 100 queries per intent." 
}, { "dkey": "Virtual KITTI", "dval": "Virtual KITTI is a photo-realistic synthetic video dataset designed to learn and evaluate computer vision models for several video understanding tasks: object detection and multi-object tracking, scene-level and instance-level semantic segmentation, optical flow, and depth estimation.\n\nVirtual KITTI contains 50 high-resolution monocular videos (21,260 frames) generated from five different virtual worlds in urban settings under different imaging and weather conditions. These worlds were created using the Unity game engine and a novel real-to-virtual cloning method. These photo-realistic synthetic videos are automatically, exactly, and fully annotated for 2D and 3D multi-object tracking and at the pixel level with category, instance, flow, and depth labels (cf. below for download links)." }, { "dkey": "ConvAI2", "dval": "The ConvAI2 NeurIPS competition aimed at finding approaches to creating high-quality dialogue agents capable of meaningful open domain conversation. The ConvAI2 dataset for training models is based on the PERSONA-CHAT dataset. The speaker pairs each have assigned profiles coming from a set of 1155 possible personas (at training time), each consisting of at least 5 profile sentences, setting aside 100 never seen before personas for validation. As the original PERSONA-CHAT test set was released, a new hidden test set consisted of 100 new personas and over 1,015 dialogs was created by crowdsourced workers.\n\nTo avoid modeling that takes advantage of trivial word overlap, additional rewritten sets of the same train and test personas were crowdsourced, with related sentences that are rephrases, generalizations or specializations, rendering the task much more challenging. For example “I just got my nails done” is revised as “I love to pamper myself on a regular basis” and “I am on a diet now” is revised as “I need to lose weight.”\n\nThe training, validation and hidden test sets consists of 17,878, 1,000 and 1,015 dialogues, respectively." } ]
I want to synthesize person images conditioned on both pose and appearance information.
person image synthesis
2,019
[ "PRID2011", "LIP", "AFLW", "Long-term visual localization", "CUHK-PEDES", "Adience" ]
[ "DeepFashion", "Market-1501" ]
[ { "dkey": "DeepFashion", "dval": "DeepFashion is a dataset containing around 800K diverse fashion images with their rich annotations (46 categories, 1,000 descriptive attributes, bounding boxes and landmark information) ranging from well-posed product images to real-world-like consumer photos." }, { "dkey": "Market-1501", "dval": "Market-1501 is a large-scale public benchmark dataset for person re-identification. It contains 1501 identities which are captured by six different cameras, and 32,668 pedestrian image bounding-boxes obtained using the Deformable Part Models pedestrian detector. Each person has 3.6 images on average at each viewpoint. The dataset is split into two parts: 750 identities are utilized for training and the remaining 751 identities are used for testing. In the official testing protocol 3,368 query images are selected as probe set to find the correct match across 19,732 reference gallery images." }, { "dkey": "PRID2011", "dval": "PRID 2011 is a person reidentification dataset that provides multiple person trajectories recorded from two different static surveillance cameras, monitoring crosswalks and sidewalks. The dataset shows a clean background, and the people in the dataset are rarely occluded. In the dataset, 200 people appear in both views. Among the 200 people, 178 people have more than 20 appearances" }, { "dkey": "LIP", "dval": "The LIP (Look into Person) dataset is a large-scale dataset focusing on semantic understanding of a person. It contains 50,000 images with elaborated pixel-wise annotations of 19 semantic human part labels and 2D human poses with 16 key points. The images are collected from real-world scenarios and the subjects appear with challenging poses and view, heavy occlusions, various appearances and low resolution." }, { "dkey": "AFLW", "dval": "The Annotated Facial Landmarks in the Wild (AFLW) is a large-scale collection of annotated face images gathered from Flickr, exhibiting a large variety in appearance (e.g., pose, expression, ethnicity, age, gender) as well as general imaging and environmental conditions. In total about 25K faces are annotated with up to 21 landmarks per image." }, { "dkey": "Long-term visual localization", "dval": "Long-term visual localization provides a benchmark datasets aimed at evaluating 6 DoF pose estimation accuracy over large appearance variations caused by changes in seasonal (summer, winter, spring, etc.) and illumination (dawn, day, sunset, night) conditions. Each dataset consists of a set of reference images, together with their corresponding ground truth poses, and a set of query images." }, { "dkey": "CUHK-PEDES", "dval": "The CUHK-PEDES dataset is a caption-annotated pedestrian dataset. It contains 40,206 images over 13,003 persons. Images are collected from five existing person re-identification datasets, CUHK03, Market-1501, SSM, VIPER, and CUHK01 while each image is annotated with 2 text descriptions by crowd-sourcing workers. Sentences incorporate rich details about person appearances, actions, poses." }, { "dkey": "Adience", "dval": "The Adience dataset, published in 2014, contains 26,580 photos across 2,284 subjects with a binary gender label and one label from eight different age groups, partitioned into five splits. The key principle of the data set is to capture the images as close to real world conditions as possible, including all variations in appearance, pose, lighting condition and image quality, to name a few." } ]
We present a method for localizing face landmarks on faces with varying sizes, poses and occlusions.
face alignment images
2,016
[ "COFW", "UTKFace", "WFLW", "LaPa", "SoF", "300W" ]
[ "Helen", "AFW" ]
[ { "dkey": "Helen", "dval": "The HELEN dataset is composed of 2330 face images of 400×400 pixels with labeled facial components generated through manually-annotated contours along eyes, eyebrows, nose, lips and jawline." }, { "dkey": "AFW", "dval": "AFW (Annotated Faces in the Wild) is a face detection dataset that contains 205 images with 468 faces. Each face image is labeled with at most 6 landmarks with visibility labels, as well as a bounding box." }, { "dkey": "COFW", "dval": "The Caltech Occluded Faces in the Wild (COFW) dataset is designed to present faces in real-world conditions. Faces show large variations in shape and occlusions due to differences in pose, expression, use of accessories such as sunglasses and hats and interactions with objects (e.g. food, hands, microphones,
etc.). All images were hand annotated using the same 29 landmarks as in LFPW. Both the landmark positions as well as their occluded/unoccluded state were annotated. The faces are occluded to different degrees, with large variations in the type of occlusions encountered. COFW has an average occlusion of over 23." }, { "dkey": "UTKFace", "dval": "The UTKFace dataset is a large-scale face dataset with long age span (range from 0 to 116 years old). The dataset consists of over 20,000 face images with annotations of age, gender, and ethnicity. The images cover large variation in pose, facial expression, illumination, occlusion, resolution, etc. This dataset could be used on a variety of tasks, e.g., face detection, age estimation, age progression/regression, landmark localization, etc." }, { "dkey": "WFLW", "dval": "The Wider Facial Landmarks in the Wild or WFLW database contains 10000 faces (7500 for training and 2500 for testing) with 98 annotated landmarks. This database also features rich attribute annotations in terms of occlusion, head pose, make-up, illumination, blur and expressions." }, { "dkey": "LaPa", "dval": "A large-scale Landmark guided face Parsing dataset (LaPa) for face parsing. It consists of more than 22,000 facial images with abundant variations in expression, pose and occlusion, and each image of LaPa is provided with a 11-category pixel-level label map and 106-point landmarks." }, { "dkey": "SoF", "dval": "The Specs on Faces (SoF) dataset, a collection of 42,592 (2,662×16) images for 112 persons (66 males and 46 females) who wear glasses under different illumination conditions. The dataset is FREE for reasonable academic fair use. The dataset presents a new challenge regarding face detection and recognition. It is focused on two challenges: harsh illumination environments and face occlusions, which highly affect face detection, recognition, and classification. The glasses are the common natural occlusion in all images of the dataset. However, there are two more synthetic occlusions (nose and mouth) added to each image. Moreover, three image filters, that may evade face detectors and facial recognition systems, were applied to each image. All generated images are categorized into three levels of difficulty (easy, medium, and hard). That enlarges the number of images to be 42,592 images (26,112 male images and 16,480 female images). There is metadata for each image that contains many information such as: the subject ID, facial landmarks, face and glasses rectangles, gender and age labels, year that the photo was taken, facial emotion, glasses type, and more." }, { "dkey": "300W", "dval": "The 300-W is a face dataset that consists of 300 Indoor and 300 Outdoor in-the-wild images. It covers a large variation of identity, expression, illumination conditions, pose, occlusion and face size. The images were downloaded from google.com by making queries such as “party”, “conference”, “protests”, “football” and “celebrities”. Compared to the rest of in-the-wild datasets, the 300-W database contains a larger percentage of partially-occluded images and covers more expressions than the common “neutral” or “smile”, such as “surprise” or “scream”.\nImages were annotated with the 68-point mark-up using a semi-automatic methodology. The images of the database were carefully selected so that they represent a characteristic sample of challenging but natural face instances under totally unconstrained conditions. 
Thus, methods that achieve accurate performance on the 300-W database can demonstrate the same accuracy in most realistic cases.\nMany images of the database contain more than one annotated face (293 images with 1 face, 53 images with 2 faces and 53 images with [3, 7] faces). Consequently, the database consists of 600 annotated face instances, but 399 unique images. Finally, there is a large variety of face sizes. Specifically, 49.3% of the faces have size in the range [48.6k, 2.0M] and the overall mean size is 85k (about 292 × 292) pixels." } ]
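Each row above and below follows the same layout: a free-text research query (often truncated), a short keyphrase form of it, a year, a list of candidate datasets judged unsuitable, a list judged suitable, and name/description pairs ("dkey"/"dval") for every candidate. The sketch below is one minimal, assumed way to model such a row in Python; the field names, the JSON-lines export, and the file name are illustrative guesses, not something this card prescribes.

```python
# Minimal sketch, assuming the rows are exported as JSON lines with the
# field names guessed below; "rows.jsonl" is likewise an assumption.
import json
from dataclasses import dataclass
from typing import Dict, List


@dataclass
class DatasetSearchRow:
    query: str                       # free-text research idea (possibly truncated)
    keyphrase_query: str             # short keyphrase form of the query
    year: int                        # year associated with the source paper
    negative_cands: List[str]        # candidate datasets judged not suitable
    positive_cands: List[str]        # candidate datasets judged suitable
    abstracts: List[Dict[str, str]]  # entries of the form {"dkey": name, "dval": description}


def load_rows(path: str = "rows.jsonl") -> List[DatasetSearchRow]:
    """Parse one JSON object per line into typed rows (assumed export format)."""
    rows: List[DatasetSearchRow] = []
    with open(path, encoding="utf-8") as handle:
        for line in handle:
            if line.strip():
                rows.append(DatasetSearchRow(**json.loads(line)))
    return rows
```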
This paper proposes a multi-choice question answering approach that relies on pre-trained language
commonsense question answering text
2,019
[ "DAQUAR", "Visual Genome", "DREAM", "ARC", "MultiRC", "MEDIQA-AnS", "CLOTH" ]
[ "ConceptNet", "GLUE" ]
[ { "dkey": "ConceptNet", "dval": "ConceptNet is a knowledge graph that connects words and phrases of natural language with labeled edges. Its knowledge is collected from many sources that include expert-created resources, crowd-sourcing, and games with a purpose. It is designed to represent the general knowledge involved in understanding language, improving natural language applications by allowing the application to better understand the meanings behind the words people use." }, { "dkey": "GLUE", "dval": "General Language Understanding Evaluation (GLUE) benchmark is a collection of nine natural language understanding tasks, including single-sentence tasks CoLA and SST-2, similarity and paraphrasing tasks MRPC, STS-B and QQP, and natural language inference tasks MNLI, QNLI, RTE and WNLI." }, { "dkey": "DAQUAR", "dval": "DAQUAR (DAtaset for QUestion Answering on Real-world images) is a dataset of human question answer pairs about images." }, { "dkey": "Visual Genome", "dval": "Visual Genome contains Visual Question Answering data in a multi-choice setting. It consists of 101,174 images from MSCOCO with 1.7 million QA pairs, 17 questions per image on average. Compared to the Visual Question Answering dataset, Visual Genome represents a more balanced distribution over 6 question types: What, Where, When, Who, Why and How. The Visual Genome dataset also presents 108K images with densely annotated objects, attributes and relationships." }, { "dkey": "DREAM", "dval": "DREAM is a multiple-choice Dialogue-based REAding comprehension exaMination dataset. In contrast to existing reading comprehension datasets, DREAM is the first to focus on in-depth multi-turn multi-party dialogue understanding.\n\nDREAM contains 10,197 multiple choice questions for 6,444 dialogues, collected from English-as-a-foreign-language examinations designed by human experts. DREAM is likely to present significant challenges for existing reading comprehension systems: 84% of answers are non-extractive, 85% of questions require reasoning beyond a single sentence, and 34% of questions also involve commonsense knowledge." }, { "dkey": "ARC", "dval": "The AI2’s Reasoning Challenge (ARC) dataset is a multiple-choice question-answering dataset, containing questions from science exams from grade 3 to grade 9. The dataset is split in two partitions: Easy and Challenge, where the latter partition contains the more difficult questions that require reasoning. Most of the questions have 4 answer choices, with <1% of all the questions having either 3 or 5 answer choices. ARC includes a supporting KB of 14.3M unstructured text passages." }, { "dkey": "MultiRC", "dval": "MultiRC (Multi-Sentence Reading Comprehension) is a dataset of short paragraphs and multi-sentence questions, i.e., questions that can be answered by combining information from multiple sentences of the paragraph.\nThe dataset was designed with three key challenges in mind:\n* The number of correct answer-options for each question is not pre-specified. This removes the over-reliance on answer-options and forces them to decide on the correctness of each candidate answer independently of others. 
In other words, the task is not to simply identify the best answer-option, but to evaluate the correctness of each answer-option individually.\n* The correct answer(s) is not required to be a span in the text.\n* The paragraphs in the dataset have diverse provenance by being extracted from 7 different domains such as news, fiction, historical text etc., and hence are expected to be more diverse in their contents as compared to single-domain datasets.\nThe entire corpus consists of around 10K questions (including about 6K multiple-sentence questions). The 60% of the data is released as training and development data. The rest of the data is saved for evaluation and every few months a new unseen additional data is included for evaluation to prevent unintentional overfitting over time." }, { "dkey": "MEDIQA-AnS", "dval": "The first summarization collection containing question-driven summaries of answers to consumer health questions. This dataset can be used to evaluate single or multi-document summaries generated by algorithms using extractive or abstractive approaches." }, { "dkey": "CLOTH", "dval": "The Cloze Test by Teachers (CLOTH) benchmark is a collection of nearly 100,000 4-way multiple-choice cloze-style questions from middle- and high school-level English language exams, where the answer fills a blank in a given text. Each question is labeled with a type of deep reasoning it involves, where the four possible types are grammar, short-term reasoning, matching/paraphrasing, and long-term reasoning, i.e., reasoning over multiple sentences" } ]
This paper proposes an approach for single-view 3D reconstruction from
3d reconstruction 2d image pascal voc
2,017
[ "Deep Fashion3D", "MegaDepth", "People Snapshot Dataset", "WHU", "3DMatch" ]
[ "ShapeNet", "Cityscapes" ]
[ { "dkey": "ShapeNet", "dval": "ShapeNet is a large scale repository for 3D CAD models developed by researchers from Stanford University, Princeton University and the Toyota Technological Institute at Chicago, USA. The repository contains over 300M models with 220,000 classified into 3,135 classes arranged using WordNet hypernym-hyponym relationships. ShapeNet Parts subset contains 31,693 meshes categorised into 16 common object classes (i.e. table, chair, plane etc.). Each shapes ground truth contains 2-5 parts (with a total of 50 part classes)." }, { "dkey": "Cityscapes", "dval": "Cityscapes is a large-scale database which focuses on semantic understanding of urban street scenes. It provides semantic, instance-wise, and dense pixel annotations for 30 classes grouped into 8 categories (flat surfaces, humans, vehicles, constructions, objects, nature, sky, and void). The dataset consists of around 5000 fine annotated images and 20000 coarse annotated ones. Data was captured in 50 cities during several months, daytimes, and good weather conditions. It was originally recorded as video so the frames were manually selected to have the following features: large number of dynamic objects, varying scene layout, and varying background." }, { "dkey": "Deep Fashion3D", "dval": "A novel benchmark and dataset for the evaluation of image-based garment reconstruction systems. Deep Fashion3D contains 2078 models reconstructed from real garments, which covers 10 different categories and 563 garment instances. It provides rich annotations including 3D feature lines, 3D body pose and the corresponded multi-view real images. In addition, each garment is randomly posed to enhance the variety of real clothing deformations." }, { "dkey": "MegaDepth", "dval": "The MegaDepth dataset is a dataset for single-view depth prediction that includes 196 different locations reconstructed from COLMAP SfM/MVS." }, { "dkey": "People Snapshot Dataset", "dval": "Enables detailed human body model reconstruction in clothing from a single monocular RGB video without requiring a pre scanned template or manually clicked points." }, { "dkey": "WHU", "dval": "Created for MVS tasks and is a large-scale multi-view aerial dataset generated from a highly accurate 3D digital surface model produced from thousands of real aerial images with precise camera parameters." }, { "dkey": "3DMatch", "dval": "The 3DMATCH benchmark evaluates how well descriptors (both 2D and 3D) can establish correspondences between RGB-D frames of different views. The dataset contains 2D RGB-D patches and 3D patches (local TDF voxel grid volumes) of wide-baselined correspondences. \n\nThe pixel size of each 2D patch is determined by the projection of the 0.3m3 local 3D patch around the interest point onto the image plane." } ]
In this paper, we propose a conditional Generative Adversarial Network (cGAN) framework for
image synthesis text
2,017
[ "ISTD", "FDF", "Raindrop", "BraTS 2014", "CDD Dataset (season-varying)", "Clothing1M" ]
[ "Chairs", "CelebA" ]
[ { "dkey": "Chairs", "dval": "The Chairs dataset contains rendered images of around 1000 different three-dimensional chair models." }, { "dkey": "CelebA", "dval": "CelebFaces Attributes dataset contains 202,599 face images of the size 178×218 from 10,177 celebrities, each annotated with 40 binary labels indicating facial attributes like hair color, gender and age." }, { "dkey": "ISTD", "dval": "The Image Shadow Triplets dataset (ISTD) is a dataset for shadow understanding that contains 1870 image triplets of shadow image, shadow mask, and shadow-free image." }, { "dkey": "FDF", "dval": "A diverse dataset of human faces, including unconventional poses, occluded faces, and a vast variability in backgrounds." }, { "dkey": "Raindrop", "dval": "Raindrop is a set of image pairs, where\neach pair contains exactly the same background scene, yet\none is degraded by raindrops and the other one is free from\nraindrops. To obtain this, the images are captured through two pieces of exactly the\nsame glass: one sprayed with water, and the other is left\nclean. The dataset consists of 1,119 pairs of images, with various\nbackground scenes and raindrops. They were captured with a Sony A6000\nand a Canon EOS 60." }, { "dkey": "BraTS 2014", "dval": "BRATS 2014 is a brain tumor segmentation dataset." }, { "dkey": "CDD Dataset (season-varying)", "dval": "" }, { "dkey": "Clothing1M", "dval": "Clothing1M contains 1M clothing images in 14 classes. It is a dataset with noisy labels, since the data is collected from several online shopping websites and include many mislabelled samples. This dataset also contains 50k, 14k, and 10k images with clean labels for training, validation, and testing, respectively." } ]
I want to train a supervised model for interactive segmentation.
interactive segmentation images
2,018
[ "SNIPS", "ConvAI2", "NYU-VP", "CLUECorpus2020", "MECCANO" ]
[ "KITTI", "Cityscapes" ]
[ { "dkey": "KITTI", "dval": "KITTI (Karlsruhe Institute of Technology and Toyota Technological Institute) is one of the most popular datasets for use in mobile robotics and autonomous driving. It consists of hours of traffic scenarios recorded with a variety of sensor modalities, including high-resolution RGB, grayscale stereo cameras, and a 3D laser scanner. Despite its popularity, the dataset itself does not contain ground truth for semantic segmentation. However, various researchers have manually annotated parts of the dataset to fit their necessities. Álvarez et al. generated ground truth for 323 images from the road detection challenge with three classes: road, vertical, and sky. Zhang et al. annotated 252 (140 for training and 112 for testing) acquisitions – RGB and Velodyne scans – from the tracking challenge for ten object categories: building, sky, road, vegetation, sidewalk, car, pedestrian, cyclist, sign/pole, and fence. Ros et al. labeled 170 training images and 46 testing images (from the visual odometry challenge) with 11 classes: building, tree, sky, car, sign, road, pedestrian, fence, pole, sidewalk, and bicyclist." }, { "dkey": "Cityscapes", "dval": "Cityscapes is a large-scale database which focuses on semantic understanding of urban street scenes. It provides semantic, instance-wise, and dense pixel annotations for 30 classes grouped into 8 categories (flat surfaces, humans, vehicles, constructions, objects, nature, sky, and void). The dataset consists of around 5000 fine annotated images and 20000 coarse annotated ones. Data was captured in 50 cities during several months, daytimes, and good weather conditions. It was originally recorded as video so the frames were manually selected to have the following features: large number of dynamic objects, varying scene layout, and varying background." }, { "dkey": "SNIPS", "dval": "The SNIPS Natural Language Understanding benchmark is a dataset of over 16,000 crowdsourced queries distributed among 7 user intents of various complexity:\n\n\nSearchCreativeWork (e.g. Find me the I, Robot television show),\nGetWeather (e.g. Is it windy in Boston, MA right now?),\nBookRestaurant (e.g. I want to book a highly rated restaurant in Paris tomorrow night),\nPlayMusic (e.g. Play the last track from Beyoncé off Spotify),\nAddToPlaylist (e.g. Add Diamonds to my roadtrip playlist),\nRateBook (e.g. Give 6 stars to Of Mice and Men),\nSearchScreeningEvent (e.g. Check the showtimes for Wonder Woman in Paris).\nThe training set contains of 13,084 utterances, the validation set and the test set contain 700 utterances each, with 100 queries per intent." }, { "dkey": "ConvAI2", "dval": "The ConvAI2 NeurIPS competition aimed at finding approaches to creating high-quality dialogue agents capable of meaningful open domain conversation. The ConvAI2 dataset for training models is based on the PERSONA-CHAT dataset. The speaker pairs each have assigned profiles coming from a set of 1155 possible personas (at training time), each consisting of at least 5 profile sentences, setting aside 100 never seen before personas for validation. As the original PERSONA-CHAT test set was released, a new hidden test set consisted of 100 new personas and over 1,015 dialogs was created by crowdsourced workers.\n\nTo avoid modeling that takes advantage of trivial word overlap, additional rewritten sets of the same train and test personas were crowdsourced, with related sentences that are rephrases, generalizations or specializations, rendering the task much more challenging. 
For example “I just got my nails done” is revised as “I love to pamper myself on a regular basis” and “I am on a diet now” is revised as “I need to lose weight.”\n\nThe training, validation and hidden test sets consists of 17,878, 1,000 and 1,015 dialogues, respectively." }, { "dkey": "NYU-VP", "dval": "NYU-VP is a new dataset for multi-model fitting, vanishing point (VP) estimation in this case. Each image is annotated with up to eight vanishing points, and pre-extracted line segments are provided which act as data points for a robust estimator. Due to its size, the dataset is the first to allow for supervised learning of a multi-model fitting task." }, { "dkey": "CLUECorpus2020", "dval": "CLUECorpus2020 is a large-scale corpus that can be used directly for self-supervised learning such as pre-training of a language model, or language generation. It has 100G raw corpus with 35 billion Chinese characters, which is retrieved from Common Crawl." }, { "dkey": "MECCANO", "dval": "The MECCANO dataset is the first dataset of egocentric videos to study human-object interactions in industrial-like settings.\nThe MECCANO dataset has been acquired in an industrial-like scenario in which subjects built a toy model of a motorbike. We considered 20 object classes which include the 16 classes categorizing the 49 components, the two tools (screwdriver and wrench), the instructions booklet and a partial_model class.\n\nAdditional details related to the MECCANO:\n\n20 different subjects in 2 countries (IT, U.K.)\nVideo Acquisition: 1920x1080 at 12.00 fps\n11 training videos and 9 validation/test videos\n8857 video segments temporally annotated indicating the verbs which describe the actions performed\n64349 active objects annotated with bounding boxes\n12 verb classes, 20 objects classes and 61 action classes" } ]
A new loss, called support neighbor loss, for training deep convolutional neural networks for person re
person re-identification images
2,018
[ "BraTS 2017", "GoPro", "Flickr30k", "COVIDx", "Birdsnap", "JHMDB" ]
[ "Market-1501", "CUHK03" ]
[ { "dkey": "Market-1501", "dval": "Market-1501 is a large-scale public benchmark dataset for person re-identification. It contains 1501 identities which are captured by six different cameras, and 32,668 pedestrian image bounding-boxes obtained using the Deformable Part Models pedestrian detector. Each person has 3.6 images on average at each viewpoint. The dataset is split into two parts: 750 identities are utilized for training and the remaining 751 identities are used for testing. In the official testing protocol 3,368 query images are selected as probe set to find the correct match across 19,732 reference gallery images." }, { "dkey": "CUHK03", "dval": "The CUHK03 consists of 14,097 images of 1,467 different identities, where 6 campus cameras were deployed for image collection and each identity is captured by 2 campus cameras. This dataset provides two types of annotations, one by manually labelled bounding boxes and the other by bounding boxes produced by an automatic detector. The dataset also provides 20 random train/test splits in which 100 identities are selected for testing and the rest for training" }, { "dkey": "BraTS 2017", "dval": "The BRATS2017 dataset. It contains 285 brain tumor MRI scans, with four MRI modalities as T1, T1ce, T2, and Flair for each scan. The dataset also provides full masks for brain tumors, with labels for ED, ET, NET/NCR. The segmentation evaluation is based on three tasks: WT, TC and ET segmentation." }, { "dkey": "GoPro", "dval": "The GoPro dataset for deblurring consists of 3,214 blurred images with the size of 1,280×720 that are divided into 2,103 training images and 1,111 test images. The dataset consists of pairs of a realistic blurry image and the corresponding ground truth shapr image that are obtained by a high-speed camera." }, { "dkey": "Flickr30k", "dval": "The Flickr30k dataset contains 31,000 images collected from Flickr, together with 5 reference sentences provided by human annotators." }, { "dkey": "COVIDx", "dval": "An open access benchmark dataset comprising of 13,975 CXR images across 13,870 patient cases, with the largest number of publicly available COVID-19 positive cases to the best of the authors' knowledge." }, { "dkey": "Birdsnap", "dval": "Birdsnap is a large bird dataset consisting of 49,829 images from 500 bird species with 47,386 images used for training and 2,443 images used for testing." }, { "dkey": "JHMDB", "dval": "JHMDB is an action recognition dataset that consists of 960 video sequences belonging to 21 actions. It is a subset of the larger HMDB51 dataset collected from digitized movies and YouTube videos. The dataset contains video and annotation for puppet flow per frame (approximated optimal flow on the person), puppet mask per frame, joint positions per frame, action label per clip and meta label per clip (camera motion, visible body parts, camera viewpoint, number of people, video quality)." } ]
A method for retinal vessel segmentation.
retinal vessel segmentation images
2,014
[ "ROSE", "RITE", "HRF", "ORVS", "CHASE_DB1" ]
[ "STARE", "DRIVE" ]
[ { "dkey": "STARE", "dval": "The STARE (Structured Analysis of the Retina) dataset is a dataset for retinal vessel segmentation. It contains 20 equal-sized (700×605) color fundus images. For each image, two groups of annotations are provided.." }, { "dkey": "DRIVE", "dval": "The Digital Retinal Images for Vessel Extraction (DRIVE) dataset is a dataset for retinal vessel segmentation. It consists of a total of JPEG 40 color fundus images; including 7 abnormal pathology cases. The images were obtained from a diabetic retinopathy screening program in the Netherlands. The images were acquired using Canon CR5 non-mydriatic 3CCD camera with FOV equals to 45 degrees. Each image resolution is 584*565 pixels with eight bits per color channel (3 channels). \n\nThe set of 40 images was equally divided into 20 images for the training set and 20 images for the testing set. Inside both sets, for each image, there is circular field of view (FOV) mask of diameter that is approximately 540 pixels. Inside training set, for each image, one manual segmentation by an ophthalmological expert has been applied. Inside testing set, for each image, two manual segmentations have been applied by two different observers, where the first observer segmentation is accepted as the ground-truth for performance evaluation." }, { "dkey": "ROSE", "dval": "Retinal OCTA SEgmentation dataset (ROSE) consists of 229 OCTA images with vessel annotations at either centerline-level or pixel level." }, { "dkey": "RITE", "dval": "The RITE (Retinal Images vessel Tree Extraction) is a database that enables comparative studies on segmentation or classification of arteries and veins on retinal fundus images, which is established based on the public available DRIVE database (Digital Retinal Images for Vessel Extraction).\n\nRITE contains 40 sets of images, equally separated into a training subset and a test subset, the same as DRIVE. The two subsets are built from the corresponding two subsets in DRIVE. For each set, there is a fundus photograph, a vessel reference standard, and a Arteries/Veins (A/V) reference standard. \n\n\nThe fundus photograph is inherited from DRIVE. \nFor the training set, the vessel reference standard is a modified version of 1st_manual from DRIVE. \nFor the test set, the vessel reference standard is 2nd_manual from DRIVE. \nFor the A/V reference standard, four types of vessels are labelled using four colors based on the vessel reference standard. \nArteries are labelled in red; veins are labelled in blue; the overlapping of arteries and veins are labelled in green; the vessels which are uncertain are labelled in white. \nThe fundus photograph is in tif format. And the vessel reference standard and the A/V reference standard are in png format. \n\nThe dataset is described in more detail in our paper, which you will cite if you use the dataset in any way: \n\nHu Q, Abràmoff MD, Garvin MK. Automated separation of binary overlapping trees in low-contrast color retinal images. Med Image Comput Comput Assist Interv. 2013;16(Pt 2):436-43. PubMed PMID: 24579170 https://doi.org/10.1007/978-3-642-40763-5_54" }, { "dkey": "HRF", "dval": "The HRF dataset is a dataset for retinal vessel segmentation which comprises 45 images and is organized as 15 subsets. Each subset contains one healthy fundus image, one image of patient with diabetic retinopathy and one glaucoma image. The image sizes are 3,304 x 2,336, with a training/testing image split of 22/23." 
}, { "dkey": "ORVS", "dval": "The ORVS dataset has been newly established as a collaboration between the computer science and visual-science departments at the University of Calgary.\n\nThis dataset contains 49 images (42 training and seven testing images) collected from a clinic in Calgary-Canada. All images were acquired with a Zeiss Visucam 200 with 30 degrees field of view (FOV). The image size is 1444×1444 with 24 bits per pixel. Images and are stored in JPEG format with low compression, which is common in ophthalmology practice. All images were manually traced by an expert who a has been working in the field of retinal-image analysis and went through training. The expert was asked to label all pixels belonging to retinal vessels. The Windows Paint 3D tool was used to manually label the images." }, { "dkey": "CHASE_DB1", "dval": "CHASE_DB1 is a dataset for retinal vessel segmentation which contains 28 color retina images with the size of 999×960 pixels which are collected from both left and right eyes of 14 school children. Each image is annotated by two independent human experts." } ]
In this paper, we propose an end-to-end discriminative correlation filter (DCF)-
target localization images
2,020
[ "E2E", "VOT2018", "DIPS", "DDD20", "DeeperForensics-1.0" ]
[ "OTB", "VOT2016" ]
[ { "dkey": "OTB", "dval": "Object Tracking Benchmark (OTB) is a visual tracking benchmark that is widely used to evaluate the performance of a visual tracking algorithm. The dataset contains a total of 100 sequences and each is annotated frame-by-frame with bounding boxes and 11 challenge attributes. OTB-2013 dataset contains 51 sequences and the OTB-2015 dataset contains all 100 sequences of the OTB dataset." }, { "dkey": "VOT2016", "dval": "VOT2016 is a video dataset for visual object tracking. It contains 60 video clips and 21,646 corresponding ground truth maps with pixel-wise annotation of salient objects." }, { "dkey": "E2E", "dval": "End-to-End NLG Challenge (E2E) aims to assess whether recent end-to-end NLG systems can generate more complex output by learning from datasets containing higher lexical richness, syntactic complexity and diverse discourse phenomena." }, { "dkey": "VOT2018", "dval": "VOT2018 is a dataset for visual object tracking. It consists of 60 challenging videos collected from real-life datasets." }, { "dkey": "DIPS", "dval": "Contains biases but is two orders of magnitude larger than those used previously." }, { "dkey": "DDD20", "dval": "The dataset was captured with a DAVIS camera that concurrently streams both dynamic vision sensor (DVS) brightness change events and active pixel sensor (APS) intensity frames. DDD20 is the longest event camera end-to-end driving dataset to date with 51h of DAVIS event+frame camera and vehicle human control data collected from 4000km of highway and urban driving under a variety of lighting conditions." }, { "dkey": "DeeperForensics-1.0", "dval": "DeeperForensics-1.0 represents the largest face forgery detection dataset by far, with 60,000 videos constituted by a total of 17.6 million frames, 10 times larger than existing datasets of the same kind. The full dataset includes 48,475 source videos and 11,000 manipulated videos. The source videos are collected on 100 paid and consented actors from 26 countries, and the manipulated videos are generated by a newly proposed many-to-many end-to-end face swapping method, DF-VAE. 7 types of real-world perturbations at 5 intensity levels are employed to ensure a larger scale and higher diversity." } ]
I want to train a deep neural network model for image classification.
image classification images
2,019
[ "UNITOPATHO", "COWC", "Birdsnap", "WikiReading", "GoPro" ]
[ "ImageNet", "CIFAR-10" ]
[ { "dkey": "ImageNet", "dval": "The ImageNet dataset contains 14,197,122 annotated images according to the WordNet hierarchy. Since 2010 the dataset is used in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), a benchmark in image classification and object detection.\nThe publicly released dataset contains a set of manually annotated training images. A set of test images is also released, with the manual annotations withheld.\nILSVRC annotations fall into one of two categories: (1) image-level annotation of a binary label for the presence or absence of an object class in the image, e.g., “there are cars in this image” but “there are no tigers,” and (2) object-level annotation of a tight bounding box and class label around an object instance in the image, e.g., “there is a screwdriver centered at position (20,25) with width of 50 pixels and height of 30 pixels”.\nThe ImageNet project does not own the copyright of the images, therefore only thumbnails and URLs of images are provided.\n\n\nTotal number of non-empty WordNet synsets: 21841\nTotal number of images: 14197122\nNumber of images with bounding box annotations: 1,034,908\nNumber of synsets with SIFT features: 1000\nNumber of images with SIFT features: 1.2 million" }, { "dkey": "CIFAR-10", "dval": "The CIFAR-10 dataset (Canadian Institute for Advanced Research, 10 classes) is a subset of the Tiny Images dataset and consists of 60000 32x32 color images. The images are labelled with one of 10 mutually exclusive classes: airplane, automobile (but not truck or pickup truck), bird, cat, deer, dog, frog, horse, ship, and truck (but not pickup truck). There are 6000 images per class with 5000 training and 1000 testing images per class.\n\nThe criteria for deciding whether an image belongs to a class were as follows:\n\n\nThe class name should be high on the list of likely answers to the question “What is in this picture?”\nThe image should be photo-realistic. Labelers were instructed to reject line drawings.\nThe image should contain only one prominent instance of the object to which the class refers.\nThe object may be partially occluded or seen from an unusual viewpoint as long as its identity is still clear to the labeler." }, { "dkey": "UNITOPATHO", "dval": "Histopathological characterization of colorectal polyps allows to tailor patients' management and follow up with the ultimate aim of avoiding or promptly detecting an invasive carcinoma. Colorectal polyps characterization relies on the histological analysis of tissue samples to determine the polyps malignancy and dysplasia grade. Deep neural networks achieve outstanding accuracy in medical patterns recognition, however they require large sets of annotated training images. We introduce UniToPatho, an annotated dataset of 9536 hematoxylin and eosin stained patches extracted from 292 whole-slide images, meant for training deep neural networks for colorectal polyps classification and adenomas grading. The slides are acquired through a Hamamatsu Nanozoomer S210 scanner at 20× magnification (0.4415 μm/px)" }, { "dkey": "COWC", "dval": "The Cars Overhead With Context (COWC) data set is a large set of annotated cars from overhead. It is useful for training a device such as a deep neural network to learn to detect and/or count cars." }, { "dkey": "Birdsnap", "dval": "Birdsnap is a large bird dataset consisting of 49,829 images from 500 bird species with 47,386 images used for training and 2,443 images used for testing." 
}, { "dkey": "WikiReading", "dval": "WikiReading is a large-scale natural language understanding task and publicly-available dataset with 18 million instances. The task is to predict textual values from the structured knowledge base Wikidata by reading the text of the corresponding Wikipedia articles. The task contains a rich variety of challenging classification and extraction sub-tasks, making it well-suited for end-to-end models such as deep neural networks (DNNs)." }, { "dkey": "GoPro", "dval": "The GoPro dataset for deblurring consists of 3,214 blurred images with the size of 1,280×720 that are divided into 2,103 training images and 1,111 test images. The dataset consists of pairs of a realistic blurry image and the corresponding ground truth shapr image that are obtained by a high-speed camera." } ]
I want to segment blood vessels from fundus images.
blood vessel segmentation fundus imaging
2,017
[ "RITE", "HRF", "IntrA", "ADAM", "G1020" ]
[ "STARE", "DRIVE" ]
[ { "dkey": "STARE", "dval": "The STARE (Structured Analysis of the Retina) dataset is a dataset for retinal vessel segmentation. It contains 20 equal-sized (700×605) color fundus images. For each image, two groups of annotations are provided.." }, { "dkey": "DRIVE", "dval": "The Digital Retinal Images for Vessel Extraction (DRIVE) dataset is a dataset for retinal vessel segmentation. It consists of a total of JPEG 40 color fundus images; including 7 abnormal pathology cases. The images were obtained from a diabetic retinopathy screening program in the Netherlands. The images were acquired using Canon CR5 non-mydriatic 3CCD camera with FOV equals to 45 degrees. Each image resolution is 584*565 pixels with eight bits per color channel (3 channels). \n\nThe set of 40 images was equally divided into 20 images for the training set and 20 images for the testing set. Inside both sets, for each image, there is circular field of view (FOV) mask of diameter that is approximately 540 pixels. Inside training set, for each image, one manual segmentation by an ophthalmological expert has been applied. Inside testing set, for each image, two manual segmentations have been applied by two different observers, where the first observer segmentation is accepted as the ground-truth for performance evaluation." }, { "dkey": "RITE", "dval": "The RITE (Retinal Images vessel Tree Extraction) is a database that enables comparative studies on segmentation or classification of arteries and veins on retinal fundus images, which is established based on the public available DRIVE database (Digital Retinal Images for Vessel Extraction).\n\nRITE contains 40 sets of images, equally separated into a training subset and a test subset, the same as DRIVE. The two subsets are built from the corresponding two subsets in DRIVE. For each set, there is a fundus photograph, a vessel reference standard, and a Arteries/Veins (A/V) reference standard. \n\n\nThe fundus photograph is inherited from DRIVE. \nFor the training set, the vessel reference standard is a modified version of 1st_manual from DRIVE. \nFor the test set, the vessel reference standard is 2nd_manual from DRIVE. \nFor the A/V reference standard, four types of vessels are labelled using four colors based on the vessel reference standard. \nArteries are labelled in red; veins are labelled in blue; the overlapping of arteries and veins are labelled in green; the vessels which are uncertain are labelled in white. \nThe fundus photograph is in tif format. And the vessel reference standard and the A/V reference standard are in png format. \n\nThe dataset is described in more detail in our paper, which you will cite if you use the dataset in any way: \n\nHu Q, Abràmoff MD, Garvin MK. Automated separation of binary overlapping trees in low-contrast color retinal images. Med Image Comput Comput Assist Interv. 2013;16(Pt 2):436-43. PubMed PMID: 24579170 https://doi.org/10.1007/978-3-642-40763-5_54" }, { "dkey": "HRF", "dval": "The HRF dataset is a dataset for retinal vessel segmentation which comprises 45 images and is organized as 15 subsets. Each subset contains one healthy fundus image, one image of patient with diabetic retinopathy and one glaucoma image. The image sizes are 3,304 x 2,336, with a training/testing image split of 22/23." }, { "dkey": "IntrA", "dval": "IntrA is an open-access 3D intracranial aneurysm dataset that makes the application of points-based and mesh-based classification and segmentation models available. 
This dataset can be used to diagnose intracranial aneurysms and to extract the neck for a clipping operation in medicine and other areas of deep learning, such as normal estimation and surface reconstruction.\n\n103 3D models of entire brain vessels are collected by reconstructing scanned 2D MRA images of patients (the raw 2D MRA images are not published due to medical ethics).\n1909 blood vessel segments are generated automatically from the complete models, including 1694 healthy vessel segments and 215 aneurysm segments for diagnosis.\n116 aneurysm segments are divided and annotated manually by medical experts; the scale of each aneurysm segment is based on the need for a preoperative examination.\nGeodesic distance matrices are computed and included for each annotated 3D segment, because the expression of the geodesic distance is more accurate than Euclidean distance according to the shape of vessels." }, { "dkey": "ADAM", "dval": "ADAM is organized as a half day Challenge, a Satellite Event of the ISBI 2020 conference in Iowa City, Iowa, USA.\n\nThe ADAM challenge focuses on the investigation and development of algorithms associated with the diagnosis of Age-related Macular degeneration (AMD) and segmentation of lesions in fundus photos from AMD patients. The goal of the challenge is to evaluate and compare automated algorithms for the detection of AMD on a common dataset of retinal fundus images. We invite the medical image analysis community to participate by developing and testing existing and novel automated fundus classification and segmentation methods.\n\nInstructions: \nADAM: Automatic Detection challenge on Age-related Macular degeneration\n\nLink: https://amd.grand-challenge.org\n\nAge-related macular degeneration, abbreviated as AMD, is a degenerative disorder in the macular region. It mainly occurs in people older than 45 years old and its incidence rate is even higher than diabetic retinopathy in the elderly. \n\nThe etiology of AMD is not fully understood, which could be related to multiple factors, including genetics, chronic photodestruction effect, and nutritional disorder. AMD is classified into Dry AMD and Wet AMD. Dry AMD (also called nonexudative AMD) is not neovascular. It is characterized by progressive atrophy of retinal pigment epithelium (RPE). In the late stage, drusen and the large area of atrophy could be observed under ophthalmoscopy. Wet AMD (also called neovascular or exudative AMD), is characterized by active neovascularization under RPE, subsequently causing exudation, hemorrhage, and scarring, and will eventually cause irreversible damage to the photoreceptors and rapid vision loss if left untreated.\n\nAn early diagnosis of AMD is crucial to treatment and prognosis. Fundus photo is one of the basic examinations. The current dataset is composed of AMD and non-AMD (myopia, normal control, etc.) photos. Typical signs of AMD that can be found in these photos include drusen, exudation, hemorrhage, etc. \n\nThe ADAM challenge has 4 tasks:\n\nTask 1: Classification of AMD and non-AMD fundus images.\n\nTask 2: Detection and segmentation of optic disc.\n\nTask 3: Localization of fovea.\n\nTask 4: Detection and Segmentation of lesions from fundus images." }, { "dkey": "G1020", "dval": "A large publicly available retinal fundus image dataset for glaucoma classification called G1020. The dataset is curated by conforming to standard practices in routine ophthalmology and it is expected to serve as standard benchmark dataset for glaucoma detection. 
This database consists of 1020 high resolution colour fundus images and provides ground truth annotations for glaucoma diagnosis, optic disc and optic cup segmentation, vertical cup-to-disc ratio, size of neuroretinal rim in inferior, superior, nasal and temporal quadrants, and bounding box location for optic disc." } ]
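Because every row records both the suitable and the unsuitable candidates for its query, rows like the ones above can be used to score any dataset-recommendation model. The sketch below is a hedged example of one such use: `recommend` stands for any hypothetical function that maps a query string to a ranked list of dataset names, and the dict keys mirror the assumed field names used in the earlier sketch; neither is mandated by this card.

```python
# Hedged sketch: hit rate at k over rows shaped like the examples above.
# `recommend` is a hypothetical query -> ranked-dataset-names function.
from typing import Callable, Dict, Iterable, List


def recall_at_k(rows: Iterable[Dict],
                recommend: Callable[[str], List[str]],
                k: int = 5) -> float:
    """Fraction of rows whose top-k recommendations contain at least one positive candidate."""
    hits = 0
    total = 0
    for row in rows:
        top_k = set(recommend(row["query"])[:k])
        if top_k & set(row["positive_cands"]):
            hits += 1
        total += 1
    return hits / total if total else 0.0
```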