Schema (column, type, value range):

| Column | Type | Length / range |
|---|---|---|
| query | string | 17–161 |
| keyphrase_query | string | 3–85 |
| year | int64 | 2.01k–2.02k |
| negative_cands | sequence | |
| positive_cands | sequence | |
| abstracts | list | |
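Each row pairs a natural-language research query (and its keyphrase form) with the candidate datasets that do and do not satisfy it, plus a short abstract for every candidate. The sketch below shows one way such rows could be represented and loaded in Python; the field names mirror the schema above, while the file name and the JSON Lines layout are assumptions for illustration only.

```python
# Hypothetical sketch: only the field names come from the schema above; the
# file name "dataset_recommendation.jsonl" and JSON Lines layout are assumptions.
import json
from typing import List, TypedDict


class Abstract(TypedDict):
    dkey: str  # dataset name, e.g. "COCO"
    dval: str  # dataset description


class Record(TypedDict):
    query: str                 # free-text research query (17-161 chars)
    keyphrase_query: str       # keyphrase form of the query (3-85 chars)
    year: int                  # year associated with the query
    negative_cands: List[str]  # dataset names that do NOT fit the query
    positive_cands: List[str]  # dataset names that DO fit the query
    abstracts: List[Abstract]  # descriptions of the candidate datasets


def load_records(path: str) -> List[Record]:
    """Read one JSON object per line and return the parsed records."""
    records: List[Record] = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            if line.strip():
                records.append(json.loads(line))
    return records


if __name__ == "__main__":
    rows = load_records("dataset_recommendation.jsonl")  # assumed file name
    first = rows[0]
    print(first["query"], first["positive_cands"])
```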
I want to implement a real-time action detection system.
action detection video
2017
[ "NAB", "G3D", "ESAD", "BAR", "SoccerDB" ]
[ "UCF101", "COCO" ]
[ { "dkey": "UCF101", "dval": "UCF101 dataset is an extension of UCF50 and consists of 13,320 video clips, which are classified into 101 categories. These 101 categories can be classified into 5 types (Body motion, Human-human interactions, Human-object interactions, Playing musical instruments and Sports). The total length of these video clips is over 27 hours. All the videos are collected from YouTube and have a fixed frame rate of 25 FPS with the resolution of 320 × 240." }, { "dkey": "COCO", "dval": "The MS COCO (Microsoft Common Objects in Context) dataset is a large-scale object detection, segmentation, key-point detection, and captioning dataset. The dataset consists of 328K images.\n\nSplits:\nThe first version of MS COCO dataset was released in 2014. It contains 164K images split into training (83K), validation (41K) and test (41K) sets. In 2015 additional test set of 81K images was released, including all the previous test images and 40K new images.\n\nBased on community feedback, in 2017 the training/validation split was changed from 83K/41K to 118K/5K. The new split uses the same images and annotations. The 2017 test set is a subset of 41K images of the 2015 test set. Additionally, the 2017 release contains a new unannotated dataset of 123K images.\n\nAnnotations:\nThe dataset has annotations for\n\n\nobject detection: bounding boxes and per-instance segmentation masks with 80 object categories,\ncaptioning: natural language descriptions of the images (see MS COCO Captions),\nkeypoints detection: containing more than 200,000 images and 250,000 person instances labeled with keypoints (17 possible keypoints, such as left eye, nose, right hip, right ankle),\nstuff image segmentation – per-pixel segmentation masks with 91 stuff categories, such as grass, wall, sky (see MS COCO Stuff),\npanoptic: full scene segmentation, with 80 thing categories (such as person, bicycle, elephant) and a subset of 91 stuff categories (grass, sky, road),\ndense pose: more than 39,000 images and 56,000 person instances labeled with DensePose annotations – each labeled person is annotated with an instance id and a mapping between image pixels that belong to that person body and a template 3D model.\nThe annotations are publicly available only for training and validation images." }, { "dkey": "NAB", "dval": "The First Temporal Benchmark Designed to Evaluate Real-time Anomaly Detectors Benchmark\n\nThe growth of the Internet of Things has created an abundance of streaming data. Finding anomalies in this data can provide valuable insights into opportunities or failures. Yet it’s difficult to achieve, due to the need to process data in real time, continuously learn and make predictions. How do we evaluate and compare various real-time anomaly detection techniques? \n\nThe Numenta Anomaly Benchmark (NAB) provides a standard, open source framework for evaluating real-time anomaly detection algorithms on streaming data. Through a controlled, repeatable environment of open-source tools, NAB rewards detectors that find anomalies as soon as possible, trigger no false alarms, and automatically adapt to any changing statistics. \n\nNAB comprises two main components: a scoring system designed for streaming data and a dataset with labeled, real-world time-series data." }, { "dkey": "G3D", "dval": "The Gaming 3D Dataset (G3D) focuses on real-time action recognition in a gaming scenario. 
It contains 10 subjects performing 20 gaming actions: “punch right”, “punch left”, “kick right”, “kick left”, “defend”, “golf swing”, “tennis swing forehand”, “tennis swing backhand”, “tennis serve”, “throw bowling ball”, “aim and fire gun”, “walk”, “run”, “jump”, “climb”, “crouch”, “steer a car”, “wave”, “flap” and “clap”." }, { "dkey": "ESAD", "dval": "ESAD is a large-scale dataset designed to tackle the problem of surgeon action detection in endoscopic minimally invasive surgery. ESAD aims to increase the effectiveness and reliability of surgical assistant robots by realistically testing their awareness of the actions performed by a surgeon. The dataset provides bounding box annotations for 21 action classes on real endoscopic video frames captured during prostatectomy, and was used as the basis of a recent MIDL 2020 challenge." }, { "dkey": "BAR", "dval": "The Biased Action Recognition (BAR) dataset is a real-world image dataset categorized into six action classes which are biased to distinct places. The authors settle on these six action classes by inspecting imSitu, which provides still action images from Google Image Search with action and place labels. In detail, the authors choose action classes where images for each of these candidate actions share common place characteristics. At the same time, the place characteristics of action class candidates should be distinct in order to classify the action only from place attributes. The selected pairs are six typical action-place pairs: (Climbing, RockWall), (Diving, Underwater), (Fishing, WaterSurface), (Racing, APavedTrack), (Throwing, PlayingField), and (Vaulting, Sky)." }, { "dkey": "SoccerDB", "dval": "SoccerDB comprises 171,191 video segments from 346 high-quality soccer games. The database contains 702,096 bounding boxes, 37,709 essential event labels with time boundaries and 17,115 highlight annotations for object detection, action recognition, temporal action localization, and highlight detection tasks." } ]
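For instance, the first row above asks for real-time action detection and lists UCF101 and COCO as positive candidates against five negatives. A minimal, hypothetical helper for turning one such row into labelled (query, abstract, label) triples, e.g. for training or evaluating a retrieval or ranking model, might look as follows; the function name and the plain-dict input are assumptions, not part of the dataset.

```python
# Hypothetical helper, not part of the dataset: builds labelled pairs from one record.
from typing import Dict, List, Tuple


def make_pairs(record: Dict) -> List[Tuple[str, str, int]]:
    """Return (query, dataset description, label) triples; label 1 marks a positive candidate."""
    positives = set(record["positive_cands"])
    descriptions = {a["dkey"]: a["dval"] for a in record["abstracts"]}
    return [
        (record["query"], dval, 1 if dkey in positives else 0)
        for dkey, dval in descriptions.items()
    ]


if __name__ == "__main__":
    # Toy record shaped like the first row above; only two abstracts are
    # included and their texts are shortened to placeholders.
    row = {
        "query": "I want to implement a real-time action detection system.",
        "keyphrase_query": "action detection video",
        "year": 2017,
        "negative_cands": ["NAB", "G3D", "ESAD", "BAR", "SoccerDB"],
        "positive_cands": ["UCF101", "COCO"],
        "abstracts": [{"dkey": "UCF101", "dval": "..."}, {"dkey": "COCO", "dval": "..."}],
    }
    for query, description, label in make_pairs(row):
        print(label, query, description)
```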
We propose a deep learning based framework for image relighting. It consists of a generator network which
image relighting images
2019
[ "AMASS", "Places", "GoPro", "UNSW-NB15" ]
[ "CARLA", "KITTI" ]
[ { "dkey": "CARLA", "dval": "CARLA (CAR Learning to Act) is an open simulator for urban driving, developed as an open-source layer over Unreal Engine 4. It provides sensors in the form of RGB cameras (with customizable positions), ground truth depth maps, ground truth semantic segmentation maps with 12 semantic classes designed for driving (road, lane marking, traffic sign, sidewalk and so on), bounding boxes for dynamic objects in the environment, and measurements of the agent itself (vehicle location and orientation)." }, { "dkey": "KITTI", "dval": "KITTI (Karlsruhe Institute of Technology and Toyota Technological Institute) is one of the most popular datasets for use in mobile robotics and autonomous driving. It consists of hours of traffic scenarios recorded with a variety of sensor modalities, including high-resolution RGB, grayscale stereo cameras, and a 3D laser scanner. Despite its popularity, the dataset itself does not contain ground truth for semantic segmentation. However, various researchers have manually annotated parts of the dataset to fit their necessities. Álvarez et al. generated ground truth for 323 images from the road detection challenge with three classes: road, vertical, and sky. Zhang et al. annotated 252 (140 for training and 112 for testing) acquisitions – RGB and Velodyne scans – from the tracking challenge for ten object categories: building, sky, road, vegetation, sidewalk, car, pedestrian, cyclist, sign/pole, and fence. Ros et al. labeled 170 training images and 46 testing images (from the visual odometry challenge) with 11 classes: building, tree, sky, car, sign, road, pedestrian, fence, pole, sidewalk, and bicyclist." }, { "dkey": "AMASS", "dval": "AMASS is a large database of human motion unifying different optical marker-based motion capture datasets by representing them within a common framework and parameterization. AMASS is readily useful for animation, visualization, and generating training data for deep learning." }, { "dkey": "Places", "dval": "The Places dataset is proposed for scene recognition and contains more than 2.5 million images covering more than 205 scene categories, with more than 5,000 images per category." }, { "dkey": "GoPro", "dval": "The GoPro dataset for deblurring consists of 3,214 blurred images with a size of 1,280×720, divided into 2,103 training images and 1,111 test images. The dataset consists of pairs of a realistic blurry image and the corresponding ground truth sharp image, both obtained with a high-speed camera." }, { "dkey": "UNSW-NB15", "dval": "UNSW-NB15 is a network intrusion dataset. It contains nine different attack types, including DoS, worms, backdoors, and fuzzers. The dataset contains raw network packets. The training set contains 175,341 records and the testing set contains 82,332 records covering the different types, attack and normal.\n\nPaper: UNSW-NB15: a comprehensive data set for network intrusion detection systems" } ]
I want to select sentences to support my answers for the multi-hop questions.
multi-hop question answering text
2019
[ "WikiHop", "CommonsenseQA", "HybridQA", "GYAFC", "BiPaR", "QNLI", "QED" ]
[ "ARC", "MultiRC" ]
[ { "dkey": "ARC", "dval": "The AI2’s Reasoning Challenge (ARC) dataset is a multiple-choice question-answering dataset, containing questions from science exams from grade 3 to grade 9. The dataset is split in two partitions: Easy and Challenge, where the latter partition contains the more difficult questions that require reasoning. Most of the questions have 4 answer choices, with <1% of all the questions having either 3 or 5 answer choices. ARC includes a supporting KB of 14.3M unstructured text passages." }, { "dkey": "MultiRC", "dval": "MultiRC (Multi-Sentence Reading Comprehension) is a dataset of short paragraphs and multi-sentence questions, i.e., questions that can be answered by combining information from multiple sentences of the paragraph.\nThe dataset was designed with three key challenges in mind:\n* The number of correct answer-options for each question is not pre-specified. This removes the over-reliance on answer-options and forces them to decide on the correctness of each candidate answer independently of others. In other words, the task is not to simply identify the best answer-option, but to evaluate the correctness of each answer-option individually.\n* The correct answer(s) is not required to be a span in the text.\n* The paragraphs in the dataset have diverse provenance by being extracted from 7 different domains such as news, fiction, historical text etc., and hence are expected to be more diverse in their contents as compared to single-domain datasets.\nThe entire corpus consists of around 10K questions (including about 6K multiple-sentence questions). The 60% of the data is released as training and development data. The rest of the data is saved for evaluation and every few months a new unseen additional data is included for evaluation to prevent unintentional overfitting over time." }, { "dkey": "WikiHop", "dval": "WikiHop is a multi-hop question-answering dataset. The query of WikiHop is constructed with entities and relations from WikiData, while supporting documents are from WikiReading. A bipartite graph connecting entities and documents is first built and the answer for each query is located by traversal on this graph. Candidates that are type-consistent with the answer and share the same relation in query with the answer are included, resulting in a set of candidates. Thus, WikiHop is a multi-choice style reading comprehension data set. There are totally about 43K samples in training set, 5K samples in development set and 2.5K samples in test set. The test set is not provided. The task is to predict the correct answer given a query and multiple supporting documents.\n\nThe dataset includes a masked variant, where all candidates and their mentions in the supporting documents are replaced by random but consistent placeholder tokens." }, { "dkey": "CommonsenseQA", "dval": "The CommonsenseQA is a dataset for commonsense question answering task. 
The dataset consists of 12,247 questions with 5 choices each.\nThe dataset was generated by Amazon Mechanical Turk workers in the following process (an example is provided in parentheses):\n\n\na crowd worker observes a source concept from ConceptNet (“River”) and three target concepts (“Waterfall”, “Bridge”, “Valley”) that are all related by the same ConceptNet relation (“AtLocation”),\nthe worker authors three questions, one per target concept, such that only that particular target concept is the answer, while the other two distractor concepts are not, (“Where on a river can you hold a cup upright to catch water on a sunny day?”, “Where can I stand on a river to see water falling without getting wet?”, “I’m crossing the river, my feet are wet but my body is dry, where am I?”)\nfor each question, another worker chooses one additional distractor from Concept Net (“pebble”, “stream”, “bank”), and the author another distractor (“mountain”, “bottom”, “island”) manually." }, { "dkey": "HybridQA", "dval": "A new large-scale question-answering dataset that requires reasoning on heterogeneous information. Each question is aligned with a Wikipedia table and multiple free-form corpora linked with the entities in the table. The questions are designed to aggregate both tabular information and text information, i.e., lack of either form would render the question unanswerable." }, { "dkey": "GYAFC", "dval": "Grammarly’s Yahoo Answers Formality Corpus (GYAFC) is the largest dataset for any style containing a total of 110K informal / formal sentence pairs.\n\nYahoo Answers is a question answering forum, contains a large number of informal sentences and allows redistribution of data. The authors used the Yahoo Answers L6 corpus to create the GYAFC dataset of informal and formal sentence pairs. In order to ensure a uniform distribution of data, they removed sentences that are questions, contain URLs, and are shorter than 5 words or longer than 25. After these preprocessing steps, 40 million sentences remain. \n\nThe Yahoo Answers corpus consists of several different domains like Business, Entertainment & Music, Travel, Food, etc. Pavlick and Tetreault formality classifier (PT16) shows that the formality level varies significantly\nacross different genres. In order to control for this variation, the authors work with two specific domains that contain the most informal sentences and show results on training and testing within those categories. The authors use the formality classifier from PT16 to identify informal sentences and train this classifier on the Answers genre of the PT16 corpus\nwhich consists of nearly 5,000 randomly selected sentences from Yahoo Answers manually annotated on a scale of -3 (very informal) to 3 (very formal). They find that the domains of Entertainment & Music and Family & Relationships contain the most informal sentences and create the GYAFC dataset using these domains." }, { "dkey": "BiPaR", "dval": "BiPaR is a manually annotated bilingual parallel novel-style machine reading comprehension (MRC) dataset, developed to support monolingual, multilingual and cross-lingual reading comprehension on novels. The biggest difference between BiPaR and existing reading comprehension datasets is that each triple (Passage, Question, Answer) in BiPaR is written in parallel in two languages. BiPaR is diverse in prefixes of questions, answer types and relationships between questions and passages. 
Answering the questions requires reading comprehension skills of coreference resolution, multi-sentence reasoning, and understanding of implicit causality." }, { "dkey": "QNLI", "dval": "The QNLI (Question-answering NLI) dataset is a Natural Language Inference dataset automatically derived from the Stanford Question Answering Dataset v1.1 (SQuAD). SQuAD v1.1 consists of question-paragraph pairs, where one of the sentences in the paragraph (drawn from Wikipedia) contains the answer to the corresponding question (written by an annotator). The dataset was converted into sentence pair classification by forming a pair between each question and each sentence in the corresponding context, and filtering out pairs with low lexical overlap between the question and the context sentence. The task is to determine whether the context sentence contains the answer to the question. This modified version of the original task removes the requirement that the model select the exact answer, but also removes the simplifying assumptions that the answer is always present in the input and that lexical overlap is a reliable cue. The QNLI dataset is part of GLEU benchmark." }, { "dkey": "QED", "dval": "QED is a linguistically principled framework for explanations in question answering. Given a question and a passage, QED represents an explanation of the answer as a combination of discrete, human-interpretable steps:\nsentence selection := identification of a sentence implying an answer to the question\nreferential equality := identification of noun phrases in the question and the answer sentence that refer to the same thing\npredicate entailment := confirmation that the predicate in the sentence entails the predicate in the question once referential equalities are abstracted away.\nThe QED dataset is an expert-annotated dataset of QED explanations build upon a subset of the Google Natural Questions dataset." } ]
This paper proposes a novel self-guiding LSTM (sg-LSTM) image caption
image captioning images text
2019
[ "Bengali Hate Speech", "Weibo NER", "nocaps", "AOLP", "MSU-MFSD", "MVSEC" ]
[ "COCO", "Flickr30k" ]
[ { "dkey": "COCO", "dval": "The MS COCO (Microsoft Common Objects in Context) dataset is a large-scale object detection, segmentation, key-point detection, and captioning dataset. The dataset consists of 328K images.\n\nSplits:\nThe first version of MS COCO dataset was released in 2014. It contains 164K images split into training (83K), validation (41K) and test (41K) sets. In 2015 additional test set of 81K images was released, including all the previous test images and 40K new images.\n\nBased on community feedback, in 2017 the training/validation split was changed from 83K/41K to 118K/5K. The new split uses the same images and annotations. The 2017 test set is a subset of 41K images of the 2015 test set. Additionally, the 2017 release contains a new unannotated dataset of 123K images.\n\nAnnotations:\nThe dataset has annotations for\n\n\nobject detection: bounding boxes and per-instance segmentation masks with 80 object categories,\ncaptioning: natural language descriptions of the images (see MS COCO Captions),\nkeypoints detection: containing more than 200,000 images and 250,000 person instances labeled with keypoints (17 possible keypoints, such as left eye, nose, right hip, right ankle),\nstuff image segmentation – per-pixel segmentation masks with 91 stuff categories, such as grass, wall, sky (see MS COCO Stuff),\npanoptic: full scene segmentation, with 80 thing categories (such as person, bicycle, elephant) and a subset of 91 stuff categories (grass, sky, road),\ndense pose: more than 39,000 images and 56,000 person instances labeled with DensePose annotations – each labeled person is annotated with an instance id and a mapping between image pixels that belong to that person body and a template 3D model.\nThe annotations are publicly available only for training and validation images." }, { "dkey": "Flickr30k", "dval": "The Flickr30k dataset contains 31,000 images collected from Flickr, together with 5 reference sentences provided by human annotators." }, { "dkey": "Bengali Hate Speech", "dval": "Introduces three datasets of expressing hate, commonly used topics, and opinions for hate speech detection, document classification, and sentiment analysis, respectively." }, { "dkey": "Weibo NER", "dval": "The Weibo NER dataset is a Chinese Named Entity Recognition dataset drawn from the social media website Sina Weibo." }, { "dkey": "nocaps", "dval": "The nocaps benchmark consists of 166,100 human-generated captions describing 15,100 images from the OpenImages validation and test sets." }, { "dkey": "AOLP", "dval": "The application-oriented license plate (AOLP) benchmark database has 2049 images of Taiwan license plates. This database is categorized into three subsets: access control (AC) with 681 samples, traffic law enforcement (LE) with 757 samples, and road patrol (RP) with 611 samples. AC refers to the cases that a vehicle passes a fixed passage with a lower speed or full stop. This is the easiest situation. The images are captured under different illuminations and different weather conditions. LE refers to the cases that a vehicle violates traffic laws and is captured by roadside camera. The background are really cluttered, with road sign and multiple plates in one image. RP refers to the cases that the camera is held on a patrolling vehicle, and the images are taken with arbitrary viewpoints and distances." }, { "dkey": "MSU-MFSD", "dval": "The MSU-MFSD dataset contains 280 video recordings of genuine and attack faces. 
A total of 35 individuals participated in the development of this database, contributing 280 videos. Two kinds of cameras with different resolutions (720×480 and 640×480) were used to record the videos of the 35 individuals. For the real accesses, each individual has two video recordings captured with the laptop and Android cameras, respectively. For the video attacks, two types of cameras, iPhone and Canon, were used to capture high-definition videos of each subject. The videos taken with the Canon camera were then replayed on an iPad Air screen to generate the HD replay attacks, while the videos recorded with the iPhone were replayed on the iPhone itself to generate the mobile replay attacks. Photo attacks were produced by printing the 35 subjects’ photos on A3 paper using an HP colour printer. The videos of the 35 individuals were divided into training (15 subjects with 120 videos) and testing (20 subjects with 160 videos) sets, respectively." }, { "dkey": "MVSEC", "dval": "The Multi Vehicle Stereo Event Camera (MVSEC) dataset is a collection of data designed for the development of novel 3D perception algorithms for event-based cameras. Stereo event data is collected from car, motorbike, hexacopter and handheld platforms, and fused with lidar, IMU, motion capture and GPS to provide ground truth pose and depth images." } ]
I want to train a model to find the referred object within the image according to the natural
natural language object retrieval images paragraph-level
2017
[ "SNIPS", "COVERAGE", "ConvAI2", "Image and Video Advertisements", "Market-1501", "CLEVR-Hans" ]
[ "COCO", "ReferItGame" ]
[ { "dkey": "COCO", "dval": "The MS COCO (Microsoft Common Objects in Context) dataset is a large-scale object detection, segmentation, key-point detection, and captioning dataset. The dataset consists of 328K images.\n\nSplits:\nThe first version of MS COCO dataset was released in 2014. It contains 164K images split into training (83K), validation (41K) and test (41K) sets. In 2015 additional test set of 81K images was released, including all the previous test images and 40K new images.\n\nBased on community feedback, in 2017 the training/validation split was changed from 83K/41K to 118K/5K. The new split uses the same images and annotations. The 2017 test set is a subset of 41K images of the 2015 test set. Additionally, the 2017 release contains a new unannotated dataset of 123K images.\n\nAnnotations:\nThe dataset has annotations for\n\n\nobject detection: bounding boxes and per-instance segmentation masks with 80 object categories,\ncaptioning: natural language descriptions of the images (see MS COCO Captions),\nkeypoints detection: containing more than 200,000 images and 250,000 person instances labeled with keypoints (17 possible keypoints, such as left eye, nose, right hip, right ankle),\nstuff image segmentation – per-pixel segmentation masks with 91 stuff categories, such as grass, wall, sky (see MS COCO Stuff),\npanoptic: full scene segmentation, with 80 thing categories (such as person, bicycle, elephant) and a subset of 91 stuff categories (grass, sky, road),\ndense pose: more than 39,000 images and 56,000 person instances labeled with DensePose annotations – each labeled person is annotated with an instance id and a mapping between image pixels that belong to that person body and a template 3D model.\nThe annotations are publicly available only for training and validation images." }, { "dkey": "ReferItGame", "dval": "The ReferIt dataset contains 130,525 expressions for referring to 96,654 objects in 19,894 images of natural scenes." }, { "dkey": "SNIPS", "dval": "The SNIPS Natural Language Understanding benchmark is a dataset of over 16,000 crowdsourced queries distributed among 7 user intents of various complexity:\n\n\nSearchCreativeWork (e.g. Find me the I, Robot television show),\nGetWeather (e.g. Is it windy in Boston, MA right now?),\nBookRestaurant (e.g. I want to book a highly rated restaurant in Paris tomorrow night),\nPlayMusic (e.g. Play the last track from Beyoncé off Spotify),\nAddToPlaylist (e.g. Add Diamonds to my roadtrip playlist),\nRateBook (e.g. Give 6 stars to Of Mice and Men),\nSearchScreeningEvent (e.g. Check the showtimes for Wonder Woman in Paris).\nThe training set contains of 13,084 utterances, the validation set and the test set contain 700 utterances each, with 100 queries per intent." }, { "dkey": "COVERAGE", "dval": "COVERAGE contains copymove forged (CMFD) images and their originals with similar but genuine objects (SGOs). COVERAGE is designed to highlight and address tamper detection ambiguity of popular methods, caused by self-similarity within natural images. In COVERAGE, forged–original pairs are annotated with (i) the duplicated and forged region masks, and (ii) the tampering factor/similarity metric. For benchmarking, forgery quality is evaluated using (i) computer vision-based methods, and (ii) human detection performance." }, { "dkey": "ConvAI2", "dval": "The ConvAI2 NeurIPS competition aimed at finding approaches to creating high-quality dialogue agents capable of meaningful open domain conversation. 
The ConvAI2 dataset for training models is based on the PERSONA-CHAT dataset. The speaker pairs each have assigned profiles coming from a set of 1155 possible personas (at training time), each consisting of at least 5 profile sentences, setting aside 100 never seen before personas for validation. As the original PERSONA-CHAT test set was released, a new hidden test set consisted of 100 new personas and over 1,015 dialogs was created by crowdsourced workers.\n\nTo avoid modeling that takes advantage of trivial word overlap, additional rewritten sets of the same train and test personas were crowdsourced, with related sentences that are rephrases, generalizations or specializations, rendering the task much more challenging. For example “I just got my nails done” is revised as “I love to pamper myself on a regular basis” and “I am on a diet now” is revised as “I need to lose weight.”\n\nThe training, validation and hidden test sets consists of 17,878, 1,000 and 1,015 dialogues, respectively." }, { "dkey": "Image and Video Advertisements", "dval": "The Image and Video Advertisements collection consists of an image dataset of 64,832 image ads, and a video dataset of 3,477 ads. The data contains rich annotations encompassing the topic and sentiment of the ads, questions and answers describing what actions the viewer is prompted to take and the reasoning that the ad presents to persuade the viewer (\"What should I do according to this ad, and why should I do it? \"), and symbolic references ads make (e.g. a dove symbolizes peace)." }, { "dkey": "Market-1501", "dval": "Market-1501 is a large-scale public benchmark dataset for person re-identification. It contains 1501 identities which are captured by six different cameras, and 32,668 pedestrian image bounding-boxes obtained using the Deformable Part Models pedestrian detector. Each person has 3.6 images on average at each viewpoint. The dataset is split into two parts: 750 identities are utilized for training and the remaining 751 identities are used for testing. In the official testing protocol 3,368 query images are selected as probe set to find the correct match across 19,732 reference gallery images." }, { "dkey": "CLEVR-Hans", "dval": "The CLEVR-Hans data set is a novel confounded visual scene data set, which captures complex compositions of different objects. This data set consists of CLEVR images divided into several classes. \n\nThe membership of a class is based on combinations of objects’ attributes and relations. Additionally, certain classes within the data set are confounded. Thus, within the data set, consisting of train, validation, and test splits, all train, and validation images of confounded classes will be confounded with a specific attribute or combination of attributes.\n\nEach class is represented by 3000 training images, 750 validation images, and 750 test images. The training, validation, and test set splits contain 9000, 2250, and 2250 samples, respectively, for CLEVR-Hans3 and 21000, 5250, and 5250 samples for CLEVR-Hans7. The class distribution is balanced for all data splits.\n\nFor CLEVR-Hans classes for which class rules contain more than three objects, the number of objects to be placed per scene was randomly chosen between the minimal required number of objects for that class and ten, rather than between three and ten, as in the original CLEVR data set.\n\nFinally, the images were created such that the exact combinations of the class rules did not occur in images of other classes. 
It is possible that a subset of objects from one class rule occur in an image of another class. However, it is not possible that more than one complete class rule is contained in an image." } ]
This is the source code of my paper.
language modeling
2019
[ "CommonsenseQA", "SNIPS", "PHINC", "ConvAI2", "CONCODE" ]
[ "WebText", "WikiText-103" ]
[ { "dkey": "WebText", "dval": "WebText is an internal OpenAI corpus created by scraping web pages with emphasis on\ndocument quality. The authors scraped all outbound links from\nReddit which received at least 3\nkarma. The authors used the approach as a heuristic indicator for\nwhether other users found the link interesting, educational,\nor just funny.\n\nWebText contains the text subset of these 45 million links. It consists of over 8 million documents\nfor a total of 40 GB of text. All Wikipedia\ndocuments were removed from WebText since it is a common data source\nfor other datasets." }, { "dkey": "WikiText-103", "dval": "The WikiText language modeling dataset is a collection of over 100 million tokens extracted from the set of verified Good and Featured articles on Wikipedia. The dataset is available under the Creative Commons Attribution-ShareAlike License.\n\nCompared to the preprocessed version of Penn Treebank (PTB), WikiText-2 is over 2 times larger and WikiText-103 is over 110 times larger. The WikiText dataset also features a far larger vocabulary and retains the original case, punctuation and numbers - all of which are removed in PTB. As it is composed of full articles, the dataset is well suited for models that can take advantage of long term dependencies." }, { "dkey": "CommonsenseQA", "dval": "The CommonsenseQA is a dataset for commonsense question answering task. The dataset consists of 12,247 questions with 5 choices each.\nThe dataset was generated by Amazon Mechanical Turk workers in the following process (an example is provided in parentheses):\n\n\na crowd worker observes a source concept from ConceptNet (“River”) and three target concepts (“Waterfall”, “Bridge”, “Valley”) that are all related by the same ConceptNet relation (“AtLocation”),\nthe worker authors three questions, one per target concept, such that only that particular target concept is the answer, while the other two distractor concepts are not, (“Where on a river can you hold a cup upright to catch water on a sunny day?”, “Where can I stand on a river to see water falling without getting wet?”, “I’m crossing the river, my feet are wet but my body is dry, where am I?”)\nfor each question, another worker chooses one additional distractor from Concept Net (“pebble”, “stream”, “bank”), and the author another distractor (“mountain”, “bottom”, “island”) manually." }, { "dkey": "SNIPS", "dval": "The SNIPS Natural Language Understanding benchmark is a dataset of over 16,000 crowdsourced queries distributed among 7 user intents of various complexity:\n\n\nSearchCreativeWork (e.g. Find me the I, Robot television show),\nGetWeather (e.g. Is it windy in Boston, MA right now?),\nBookRestaurant (e.g. I want to book a highly rated restaurant in Paris tomorrow night),\nPlayMusic (e.g. Play the last track from Beyoncé off Spotify),\nAddToPlaylist (e.g. Add Diamonds to my roadtrip playlist),\nRateBook (e.g. Give 6 stars to Of Mice and Men),\nSearchScreeningEvent (e.g. Check the showtimes for Wonder Woman in Paris).\nThe training set contains of 13,084 utterances, the validation set and the test set contain 700 utterances each, with 100 queries per intent." }, { "dkey": "PHINC", "dval": "PHINC is a parallel corpus of the 13,738 code-mixed English-Hindi sentences and their corresponding translation in English. The translations of sentences are done manually by the annotators." 
}, { "dkey": "ConvAI2", "dval": "The ConvAI2 NeurIPS competition aimed at finding approaches to creating high-quality dialogue agents capable of meaningful open domain conversation. The ConvAI2 dataset for training models is based on the PERSONA-CHAT dataset. The speaker pairs each have assigned profiles coming from a set of 1155 possible personas (at training time), each consisting of at least 5 profile sentences, setting aside 100 never seen before personas for validation. As the original PERSONA-CHAT test set was released, a new hidden test set consisted of 100 new personas and over 1,015 dialogs was created by crowdsourced workers.\n\nTo avoid modeling that takes advantage of trivial word overlap, additional rewritten sets of the same train and test personas were crowdsourced, with related sentences that are rephrases, generalizations or specializations, rendering the task much more challenging. For example “I just got my nails done” is revised as “I love to pamper myself on a regular basis” and “I am on a diet now” is revised as “I need to lose weight.”\n\nThe training, validation and hidden test sets consists of 17,878, 1,000 and 1,015 dialogues, respectively." }, { "dkey": "CONCODE", "dval": "A new large dataset with over 100,000 examples consisting of Java classes from online code repositories, and develop a new encoder-decoder architecture that models the interaction between the method documentation and the class environment." } ]
We propose an end-to-end model for cross-lingual transfer learning for question answering. We
question answering text
2019
[ "iVQA", "ReQA", "EXAMS", "XQA", "XQuAD" ]
[ "DRCD", "NewsQA", "SQuAD" ]
[ { "dkey": "DRCD", "dval": "Delta Reading Comprehension Dataset (DRCD) is an open domain traditional Chinese machine reading comprehension (MRC) dataset. This dataset aimed to be a standard Chinese machine reading comprehension dataset, which can be a source dataset in transfer learning. The dataset contains 10,014 paragraphs from 2,108 Wikipedia articles and 30,000+ questions generated by annotators." }, { "dkey": "NewsQA", "dval": "The NewsQA dataset is a crowd-sourced machine reading comprehension dataset of 120,000 question-answer pairs.\n\n\nDocuments are CNN news articles.\nQuestions are written by human users in natural language.\nAnswers may be multiword passages of the source text.\nQuestions may be unanswerable.\nNewsQA is collected using a 3-stage, siloed process.\nQuestioners see only an article’s headline and highlights.\nAnswerers see the question and the full article, then select an answer passage.\nValidators see the article, the question, and a set of answers that they rank.\nNewsQA is more natural and more challenging than previous datasets." }, { "dkey": "SQuAD", "dval": "The Stanford Question Answering Dataset (SQuAD) is a collection of question-answer pairs derived from Wikipedia articles. In SQuAD, the correct answers of questions can be any sequence of tokens in the given text. Because the questions and answers are produced by humans through crowdsourcing, it is more diverse than some other question-answering datasets. SQuAD 1.1 contains 107,785 question-answer pairs on 536 articles. SQuAD2.0 (open-domain SQuAD, SQuAD-Open), the latest version, combines the 100,000 questions in SQuAD1.1 with over 50,000 un-answerable questions written adversarially by crowdworkers in forms that are similar to the answerable ones." }, { "dkey": "iVQA", "dval": "An open-ended VideoQA benchmark that aims to: i) provide a well-defined evaluation by including five correct answer annotations per question and ii) avoid questions which can be answered without the video. \n\niVQA contains 10,000 video clips with one question and five corresponding answers per clip. Moreover, we manually reduce the language bias by excluding questions that could be answered without watching the video." }, { "dkey": "ReQA", "dval": "Retrieval Question-Answering (ReQA) benchmark tests a model’s ability to retrieve relevant answers efficiently from a large set of documents." }, { "dkey": "EXAMS", "dval": "A new benchmark dataset for cross-lingual and multilingual question answering for high school examinations. Collects more than 24,000 high-quality high school exam questions in 16 languages, covering 8 language families and 24 school subjects from Natural Sciences and Social Sciences, among others. EXAMS offers a fine-grained evaluation framework across multiple languages and subjects, which allows precise analysis and comparison of various models." }, { "dkey": "XQA", "dval": "XQA is a data which consists of a total amount of 90k question-answer pairs in nine languages for cross-lingual open-domain question answering." }, { "dkey": "XQuAD", "dval": "XQuAD (Cross-lingual Question Answering Dataset) is a benchmark dataset for evaluating cross-lingual question answering performance. The dataset consists of a subset of 240 paragraphs and 1190 question-answer pairs from the development set of SQuAD v1.1 (Rajpurkar et al., 2016) together with their professional translations into ten languages: Spanish, German, Greek, Russian, Turkish, Arabic, Vietnamese, Thai, Chinese, and Hindi. 
Consequently, the dataset is entirely parallel across 11 languages." } ]
The proposed model can learn to disentangle appearance and geometric information from image and video sequences in
image/video editing
2018
[ "Moving MNIST", "irc-disentanglement", "REDS", "3DMatch", "ABC Dataset", "MAFL" ]
[ "CIFAR-10", "CelebA" ]
[ { "dkey": "CIFAR-10", "dval": "The CIFAR-10 dataset (Canadian Institute for Advanced Research, 10 classes) is a subset of the Tiny Images dataset and consists of 60000 32x32 color images. The images are labelled with one of 10 mutually exclusive classes: airplane, automobile (but not truck or pickup truck), bird, cat, deer, dog, frog, horse, ship, and truck (but not pickup truck). There are 6000 images per class with 5000 training and 1000 testing images per class.\n\nThe criteria for deciding whether an image belongs to a class were as follows:\n\n\nThe class name should be high on the list of likely answers to the question “What is in this picture?”\nThe image should be photo-realistic. Labelers were instructed to reject line drawings.\nThe image should contain only one prominent instance of the object to which the class refers.\nThe object may be partially occluded or seen from an unusual viewpoint as long as its identity is still clear to the labeler." }, { "dkey": "CelebA", "dval": "CelebFaces Attributes dataset contains 202,599 face images of the size 178×218 from 10,177 celebrities, each annotated with 40 binary labels indicating facial attributes like hair color, gender and age." }, { "dkey": "Moving MNIST", "dval": "The Moving MNIST dataset contains 10,000 video sequences, each consisting of 20 frames. In each video sequence, two digits move independently around the frame, which has a spatial resolution of 64×64 pixels. The digits frequently intersect with each other and bounce off the edges of the frame" }, { "dkey": "irc-disentanglement", "dval": "This is a dataset for disentangling conversations on IRC, which is the task of identifying separate conversations in a single stream of messages. It contains disentanglement information for 77,563 messages or IRC." }, { "dkey": "REDS", "dval": "The realistic and dynamic scenes (REDS) dataset was proposed in the NTIRE19 Challenge. The dataset is composed of 300 video sequences with resolution of 720×1,280, and each video has 100 frames, where the training set, the validation set and the testing set have 240, 30 and 30 videos, respectively" }, { "dkey": "3DMatch", "dval": "The 3DMATCH benchmark evaluates how well descriptors (both 2D and 3D) can establish correspondences between RGB-D frames of different views. The dataset contains 2D RGB-D patches and 3D patches (local TDF voxel grid volumes) of wide-baselined correspondences. \n\nThe pixel size of each 2D patch is determined by the projection of the 0.3m3 local 3D patch around the interest point onto the image plane." }, { "dkey": "ABC Dataset", "dval": "The ABC Dataset is a collection of one million Computer-Aided Design (CAD) models for research of geometric deep learning methods and applications. Each model is a collection of explicitly parametrized curves and surfaces, providing ground truth for differential quantities, patch segmentation, geometric feature detection, and shape reconstruction. Sampling the parametric descriptions of surfaces and curves allows generating data in different formats and resolutions, enabling fair comparisons for a wide range of geometric learning algorithms." }, { "dkey": "MAFL", "dval": "The MAFL dataset contains manually annotated facial landmark locations for 19,000 training and 1,000 test images." } ]
A novel cascaded CNN scheme for accurate face landmark localization.
face landmark localization images
2018
[ "WFLW", "AFLW2000-3D", "UTKFace", "LS3D-W", "LaPa" ]
[ "Helen", "AFW" ]
[ { "dkey": "Helen", "dval": "The HELEN dataset is composed of 2330 face images of 400×400 pixels with labeled facial components generated through manually-annotated contours along eyes, eyebrows, nose, lips and jawline." }, { "dkey": "AFW", "dval": "AFW (Annotated Faces in the Wild) is a face detection dataset that contains 205 images with 468 faces. Each face image is labeled with at most 6 landmarks with visibility labels, as well as a bounding box." }, { "dkey": "WFLW", "dval": "The Wider Facial Landmarks in the Wild or WFLW database contains 10000 faces (7500 for training and 2500 for testing) with 98 annotated landmarks. This database also features rich attribute annotations in terms of occlusion, head pose, make-up, illumination, blur and expressions." }, { "dkey": "AFLW2000-3D", "dval": "AFLW2000-3D is a dataset of 2000 images that have been annotated with image-level 68-point 3D facial landmarks. This dataset is used for evaluation of 3D facial landmark detection models. The head poses are very diverse and often hard to be detected by a CNN-based face detector." }, { "dkey": "UTKFace", "dval": "The UTKFace dataset is a large-scale face dataset with long age span (range from 0 to 116 years old). The dataset consists of over 20,000 face images with annotations of age, gender, and ethnicity. The images cover large variation in pose, facial expression, illumination, occlusion, resolution, etc. This dataset could be used on a variety of tasks, e.g., face detection, age estimation, age progression/regression, landmark localization, etc." }, { "dkey": "LS3D-W", "dval": "A 3D facial landmark dataset of around 230,000 images." }, { "dkey": "LaPa", "dval": "A large-scale Landmark guided face Parsing dataset (LaPa) for face parsing. It consists of more than 22,000 facial images with abundant variations in expression, pose and occlusion, and each image of LaPa is provided with a 11-category pixel-level label map and 106-point landmarks." } ]
We propose a unified model that combines the strengths of two well-established deformable model approaches to the face alignment
face alignment images
2015
[ "iFakeFaceDB", "PANDORA", "MaskedFace-Net", "SpeakingFaces", "EPIC-KITCHENS-100", "Scan2CAD" ]
[ "AFW", "LFPW" ]
[ { "dkey": "AFW", "dval": "AFW (Annotated Faces in the Wild) is a face detection dataset that contains 205 images with 468 faces. Each face image is labeled with at most 6 landmarks with visibility labels, as well as a bounding box." }, { "dkey": "LFPW", "dval": "The Labeled Face Parts in-the-Wild (LFPW) consists of 1,432 faces from images downloaded from the web using simple text queries on sites such as google.com, flickr.com, and yahoo.com. Each image was labeled by three MTurk workers, and 29 fiducial points, shown below, are included in dataset." }, { "dkey": "iFakeFaceDB", "dval": "iFakeFaceDB is a face image dataset for the study of synthetic face manipulation detection, comprising about 87,000 synthetic face images generated by the Style-GAN model and transformed with the GANprintR approach. All images were aligned and resized to the size of 224 x 224." }, { "dkey": "PANDORA", "dval": "PANDORA is the first large-scale dataset of Reddit comments labeled with three personality models (including the well-established Big 5 model) and demographics (age, gender, and location) for more than 10k users." }, { "dkey": "MaskedFace-Net", "dval": "Proposes three types of masked face detection dataset; namely, the Correctly Masked Face Dataset (CMFD), the Incorrectly Masked Face Dataset (IMFD) and their combination for the global masked face detection (MaskedFace-Net)." }, { "dkey": "SpeakingFaces", "dval": "SpeakingFaces is a publicly-available large-scale dataset developed to support multimodal machine learning research in contexts that utilize a combination of thermal, visual, and audio data streams; examples include human-computer interaction (HCI), biometric authentication, recognition systems, domain transfer, and speech recognition. SpeakingFaces is comprised of well-aligned high-resolution thermal and visual spectra image streams of fully-framed faces synchronized with audio recordings of each subject speaking approximately 100 imperative phrases." }, { "dkey": "EPIC-KITCHENS-100", "dval": "This paper introduces the pipeline to scale the largest dataset in egocentric vision EPIC-KITCHENS. The effort culminates in EPIC-KITCHENS-100, a collection of 100 hours, 20M frames, 90K actions in 700 variable-length videos, capturing long-term unscripted activities in 45 environments, using head-mounted cameras. Compared to its previous version (EPIC-KITCHENS-55), EPIC-KITCHENS-100 has been annotated using a novel pipeline that allows denser (54% more actions per minute) and more complete annotations of fine-grained actions (+128% more action segments). This collection also enables evaluating the \"test of time\" - i.e. whether models trained on data collected in 2018 can generalise to new footage collected under the same hypotheses albeit \"two years on\".\nThe dataset is aligned with 6 challenges: action recognition (full and weak supervision), action detection, action anticipation, cross-modal retrieval (from captions), as well as unsupervised domain adaptation for action recognition. For each challenge, we define the task, provide baselines and evaluation metrics." }, { "dkey": "Scan2CAD", "dval": "Scan2CAD is an alignment dataset based on 1506 ScanNet scans with 97607 annotated keypoints pairs between 14225 (3049 unique) CAD models from ShapeNet and their counterpart objects in the scans. The top 3 annotated model classes are chairs, tables and cabinets which arises due to the nature of indoor scenes in ScanNet. 
The number of objects aligned per scene ranges from 1 to 40 with an average of 9.3.\n\nAdditionally, all ShapeNet CAD models used in the Scan2CAD dataset are annotated with their rotational symmetries: either none, 2-fold, 4-fold or infinite rotational symmetries around a canonical axis of the object." } ]
We report the results of our replication study on BERT pretraining. Our best model outperforms every published
language model pretraining text
2019
[ "GSL", "THEODORE", "ReCAM", "BDD100K", "Horne 2017 Fake News Data" ]
[ "QNLI", "MRPC", "RACE", "GLUE", "SQuAD" ]
[ { "dkey": "QNLI", "dval": "The QNLI (Question-answering NLI) dataset is a Natural Language Inference dataset automatically derived from the Stanford Question Answering Dataset v1.1 (SQuAD). SQuAD v1.1 consists of question-paragraph pairs, where one of the sentences in the paragraph (drawn from Wikipedia) contains the answer to the corresponding question (written by an annotator). The dataset was converted into sentence pair classification by forming a pair between each question and each sentence in the corresponding context, and filtering out pairs with low lexical overlap between the question and the context sentence. The task is to determine whether the context sentence contains the answer to the question. This modified version of the original task removes the requirement that the model select the exact answer, but also removes the simplifying assumptions that the answer is always present in the input and that lexical overlap is a reliable cue. The QNLI dataset is part of GLEU benchmark." }, { "dkey": "MRPC", "dval": "Microsoft Research Paraphrase Corpus (MRPC) is a corpus consists of 5,801 sentence pairs collected from newswire articles. Each pair is labelled if it is a paraphrase or not by human annotators. The whole set is divided into a training subset (4,076 sentence pairs of which 2,753 are paraphrases) and a test subset (1,725 pairs of which 1,147 are paraphrases)." }, { "dkey": "RACE", "dval": "The ReAding Comprehension dataset from Examinations (RACE) dataset is a machine reading comprehension dataset consisting of 27,933 passages and 97,867 questions from English exams, targeting Chinese students aged 12-18. RACE consists of two subsets, RACE-M and RACE-H, from middle school and high school exams, respectively. RACE-M has 28,293 questions and RACE-H has 69,574. Each question is associated with 4 candidate answers, one of which is correct. The data generation process of RACE differs from most machine reading comprehension datasets - instead of generating questions and answers by heuristics or crowd-sourcing, questions in RACE are specifically designed for testing human reading skills, and are created by domain experts." }, { "dkey": "GLUE", "dval": "General Language Understanding Evaluation (GLUE) benchmark is a collection of nine natural language understanding tasks, including single-sentence tasks CoLA and SST-2, similarity and paraphrasing tasks MRPC, STS-B and QQP, and natural language inference tasks MNLI, QNLI, RTE and WNLI." }, { "dkey": "SQuAD", "dval": "The Stanford Question Answering Dataset (SQuAD) is a collection of question-answer pairs derived from Wikipedia articles. In SQuAD, the correct answers of questions can be any sequence of tokens in the given text. Because the questions and answers are produced by humans through crowdsourcing, it is more diverse than some other question-answering datasets. SQuAD 1.1 contains 107,785 question-answer pairs on 536 articles. SQuAD2.0 (open-domain SQuAD, SQuAD-Open), the latest version, combines the 100,000 questions in SQuAD1.1 with over 50,000 un-answerable questions written adversarially by crowdworkers in forms that are similar to the answerable ones." }, { "dkey": "GSL", "dval": "Dataset Description\nThe Greek Sign Language (GSL) is a large-scale RGB+D dataset, suitable for Sign Language Recognition (SLR) and Sign Language Translation (SLT). The video captures are conducted using an Intel RealSense D435 RGB+D camera at a rate of 30 fps. 
Both the RGB and the depth streams are acquired in the same spatial resolution of 848×480 pixels. To increase variability in the videos, the camera position and orientation is slightly altered within subsequent recordings. Seven different signers are employed to perform 5 individual and commonly met scenarios in different public services. The average length of each scenario is twenty sentences.\n\nThe dataset contains 10,290 sentence instances, 40,785 gloss instances, 310 unique glosses (vocabulary size) and 331 unique sentences, with 4.23 glosses per sentence on average. Each signer is asked to perform the pre-defined dialogues five consecutive times. In all cases, the simulation considers a deaf person communicating with a single public service employee. The involved signer performs the sequence of glosses of both agents in the discussion. For the annotation of each gloss sequence, GSL linguistic experts are involved. The given annotations are at individual gloss and gloss sequence level. A translation of the gloss sentences to spoken Greek is also provided.\n\nEvaluation\nThe GSL dataset includes the 3 evaluation setups:\n\n\n\nSigner-dependent continuous sign language recognition (GSL SD) – roughly 80% of videos are used for training, corresponding to 8,189 instances. The rest 1,063 (10%) were kept for validation and 1,043 (10%) for testing.\n\n\n\nSigner-independent continuous sign language recognition (GSL SI) – the selected test gloss sequences are not used in the training set, while all the individual glosses exist in the training set. In GSL SI, the recordings of one signer are left out for validation and testing (588 and 881 instances, respectively). The rest 8821 instances are utilized for training.\n\n\n\nIsolated gloss sign language recognition (GSL isol.) – The validation set consists of 2,231 gloss instances, the test set 3,500, while the remaining 34,995 are used for training. All 310 unique glosses are seen in the training set.\n\n\n\nFor more info and results, advice our paper\n\nPaper Abstract: A Comprehensive Study on Sign Language Recognition Methods, Adaloglou et al. 2020\nIn this paper, a comparative experimental assessment of computer vision-based methods for sign language recognition is conducted. By implementing the most recent deep neural network methods in this field, a thorough evaluation on multiple publicly available datasets is performed. The aim of the present study is to provide insights on sign language recognition, focusing on mapping non-segmented video streams to glosses. For this task, two new sequence training criteria, known from the fields of speech and scene text recognition, are introduced. Furthermore, a\nplethora of pretraining schemes are thoroughly discussed. Finally, a new RGB+D dataset for the Greek sign language is created. To the best of our knowledge, this is the first sign language dataset where sentence and gloss level annotations are provided for every video capture.\n\nArxiv link" }, { "dkey": "THEODORE", "dval": "Recent work about synthetic indoor datasets from perspective views has shown significant improvements of object detection results with Convolutional Neural Networks(CNNs). In this paper, we introduce THEODORE: a novel, large-scale indoor dataset containing 100,000 high- resolution diversified fisheye images with 14 classes. To this end, we create 3D virtual environments of living rooms, different human characters and interior textures. 
Beside capturing fisheye images from virtual environments we create annotations for semantic segmentation, instance masks and bounding boxes for object detection tasks. We compare our synthetic dataset to state of the art real-world datasets for omnidirectional images. Based on MS COCO weights, we show that our dataset is well suited for fine-tuning CNNs for object detection. Through a high generalization of our models by means of image synthesis and domain randomization we reach an AP up to 0.84 for class person on High-Definition Analytics dataset." }, { "dkey": "ReCAM", "dval": "Tasks\nOur shared task has three subtasks. Subtask 1 and 2 focus on evaluating machine learning models' performance with regard to two definitions of abstractness (Spreen and Schulz, 1966; Changizi, 2008), which we call imperceptibility and nonspecificity, respectively. Subtask 3 aims to provide some insights to their relationships.\n\n• Subtask 1: ReCAM-Imperceptibility\n\nConcrete words refer to things, events, and properties that we can perceive directly with our senses (Spreen and Schulz, 1966; Coltheart 1981; Turney et al., 2011), e.g., donut, trees, and red. In contrast, abstract words refer to ideas and concepts that are distant from immediate perception. Examples include objective, culture, and economy. In subtask 1, the participanting systems are required to perform reading comprehension of abstract meaning for imperceptible concepts.\n\nBelow is an example. Given a passage and a question, your model needs to choose from the five candidates the best one for replacing @placeholder.\n\n• Subtask 2: ReCAM-Nonspecificity\n\nSubtask 2 focuses on a different type of definition. Compared to concrete concepts like groundhog and whale, hypernyms such as vertebrate are regarded as more abstract (Changizi, 2008). \n\n• Subtask 3: ReCAM-Intersection\nSubtask 3 aims to provide more insights to the relationship of the two views on abstractness, In this subtask, we test the performance of a system that is trained on one definition and evaluted on the other." }, { "dkey": "BDD100K", "dval": "Datasets drive vision progress, yet existing driving datasets are impoverished in terms of visual content and supported tasks to study multitask learning for autonomous driving. Researchers are usually constrained to study a small set of problems on one dataset, while real-world computer vision applications require performing tasks of various complexities. We construct BDD100K, the largest driving video dataset with 100K videos and 10 tasks to evaluate the exciting progress of image recognition algorithms on autonomous driving. The dataset possesses geographic, environmental, and weather diversity, which is useful for training models that are less likely to be surprised by new conditions. Based on this diverse dataset, we build a benchmark for heterogeneous multitask learning and study how to solve the tasks together. Our experiments show that special training strategies are needed for existing models to perform such heterogeneous tasks. BDD100K opens the door for future studies in this important venue. More detail is at the dataset home page." 
}, { "dkey": "Horne 2017 Fake News Data", "dval": "The Horne 2017 Fake News Data contains two independed news datasets:\n\n\n\nBuzzfeed Political News Data:\n\n\nNews originally analyzed by Craig Silverman of Buzzfeed News in article entitled \" This Analysis Shows How Viral Fake Election News Stories Outperformed Real News On Facebook.\"\nBuzzFeed News used keyword search on the content analysis tool BuzzSumo to find news stories\nPost the analysis of Buzzfeed News, the authors collect the body text and body title of all articles and use the ground truth as set by Buzzfeed as actual ground truth.\nThis data set has fewer clear restrictions on the ground truth, including opinion-based real stories and satire-based fake stories. In our study, the authors manually filter this data set down to contain only \"hard\" news stories and malicious fake news stories. This repository contains the whole dataset with no filtering.\n\n\n\nRandom Political News Data:\n\n\nRandomly collected from three types of sources during 2016.\nSources ground truth determined through: Business Insider’s “Most Trusted” list and Zimdars 2016 Fake news list\nSources:\nReal: Wall Street Journal, The Economist, BBC, NPR, ABC, CBS, USA Today, The Guardian, NBC, The Washington Post\nSatire: The Onion, Huffington Post Satire, Borowitz Report, The Beaverton, Satire Wire, and Faking News\nFake: Ending The Fed, True Pundit, abcnews.com.co, DC Gazette, Liberty Writers News, Before its News, InfoWars, Real News Right Now" } ]
We present a simple and effective model for learning general purpose sentence representations. Our model uses a single
natural language inference text
2,017
[ "GLUE", "SuperGLUE", "Fluent Speech Commands", "BDD100K" ]
[ "SNLI", "MultiNLI" ]
[ { "dkey": "SNLI", "dval": "The SNLI dataset (Stanford Natural Language Inference) consists of 570k sentence-pairs manually labeled as entailment, contradiction, and neutral. Premises are image captions from Flickr30k, while hypotheses were generated by crowd-sourced annotators who were shown a premise and asked to generate entailing, contradicting, and neutral sentences. Annotators were instructed to judge the relation between sentences given that they describe the same event. Each pair is labeled as “entailment”, “neutral”, “contradiction” or “-”, where “-” indicates that an agreement could not be reached." }, { "dkey": "MultiNLI", "dval": "The Multi-Genre Natural Language Inference (MultiNLI) dataset has 433K sentence pairs. Its size and mode of collection are modeled closely like SNLI. MultiNLI offers ten distinct genres (Face-to-face, Telephone, 9/11, Travel, Letters, Oxford University Press, Slate, Verbatim, Goverment and Fiction) of written and spoken English data. There are matched dev/test sets which are derived from the same sources as those in the training set, and mismatched sets which do not closely resemble any seen at training time." }, { "dkey": "GLUE", "dval": "General Language Understanding Evaluation (GLUE) benchmark is a collection of nine natural language understanding tasks, including single-sentence tasks CoLA and SST-2, similarity and paraphrasing tasks MRPC, STS-B and QQP, and natural language inference tasks MNLI, QNLI, RTE and WNLI." }, { "dkey": "SuperGLUE", "dval": "SuperGLUE is a benchmark dataset designed to pose a more rigorous test of language understanding than GLUE. SuperGLUE has the same high-level motivation as GLUE: to provide a simple, hard-to-game measure of progress toward general-purpose language understanding technologies for English. SuperGLUE follows the basic design of GLUE: It consists of a public leaderboard built around eight language understanding tasks, drawing on existing data, accompanied by a single-number\nperformance metric, and an analysis toolkit. However, it improves upon GLUE in several ways:\n\n\nMore challenging tasks: SuperGLUE retains the two hardest tasks in GLUE. The remaining tasks were identified from those submitted to an open call for task proposals and were selected based on difficulty for current NLP approaches.\nMore diverse task formats: The task formats in GLUE are limited to sentence- and sentence-pair classification. The authors expand the set of task formats in SuperGLUE to include\ncoreference resolution and question answering (QA).\nComprehensive human baselines: the authors include human performance estimates for all benchmark tasks, which verify that substantial headroom exists between a strong BERT-based baseline and human performance.\nImproved code support: SuperGLUE is distributed with a new, modular toolkit for work on pretraining, multi-task learning, and transfer learning in NLP, built around standard tools including PyTorch (Paszke et al., 2017) and AllenNLP (Gardner et al., 2017).\nRefined usage rules: The conditions for inclusion on the SuperGLUE leaderboard were revamped to ensure fair competition, an informative leaderboard, and full credit\nassignment to data and task creators." }, { "dkey": "Fluent Speech Commands", "dval": "Fluent Speech Commands is an open source audio dataset for spoken language understanding (SLU) experiments. 
Each utterance is labeled with \"action\", \"object\", and \"location\" values; for example, \"turn the lights on in the kitchen\" has the label {\"action\": \"activate\", \"object\": \"lights\", \"location\": \"kitchen\"}. A model must predict each of these values, and a prediction for an utterance is deemed to be correct only if all values are correct. \n\nThe task is very simple, but the dataset is large and flexible to allow for many types of experiments: for instance, one can vary the number of speakers, or remove all instances of a particular sentence and test whether a model trained on the remaining sentences can generalize." }, { "dkey": "BDD100K", "dval": "Datasets drive vision progress, yet existing driving datasets are impoverished in terms of visual content and supported tasks to study multitask learning for autonomous driving. Researchers are usually constrained to study a small set of problems on one dataset, while real-world computer vision applications require performing tasks of various complexities. We construct BDD100K, the largest driving video dataset with 100K videos and 10 tasks to evaluate the exciting progress of image recognition algorithms on autonomous driving. The dataset possesses geographic, environmental, and weather diversity, which is useful for training models that are less likely to be surprised by new conditions. Based on this diverse dataset, we build a benchmark for heterogeneous multitask learning and study how to solve the tasks together. Our experiments show that special training strategies are needed for existing models to perform such heterogeneous tasks. BDD100K opens the door for future studies in this important venue. More detail is at the dataset home page." } ]
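The Fluent Speech Commands entry above defines a strict scoring rule: an utterance counts as correct only if the predicted action, object, and location all match. A minimal sketch of that rule (the example slot values below are hypothetical, not drawn from the dataset):

```python
def utterance_correct(gold: dict, pred: dict) -> bool:
    # Exact match on every slot, as required by the dataset's evaluation rule.
    return all(pred.get(slot) == value for slot, value in gold.items())

# Hypothetical gold/predicted labels in the {"action", "object", "location"} format described above.
gold = {"action": "activate", "object": "lights", "location": "kitchen"}
pred = {"action": "activate", "object": "lights", "location": "bedroom"}
print(utterance_correct(gold, pred))  # False: the location slot differs
```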
This paper proposes a new multiple-choice reading comprehension (MCRC) model which performs
multiple-choice reading comprehension text paragraph-level
2,019
[ "DREAM", "DROP", "CosmosQA", "OneStopQA", "C3", "VisualMRC" ]
[ "RACE", "SQuAD" ]
[ { "dkey": "RACE", "dval": "The ReAding Comprehension dataset from Examinations (RACE) dataset is a machine reading comprehension dataset consisting of 27,933 passages and 97,867 questions from English exams, targeting Chinese students aged 12-18. RACE consists of two subsets, RACE-M and RACE-H, from middle school and high school exams, respectively. RACE-M has 28,293 questions and RACE-H has 69,574. Each question is associated with 4 candidate answers, one of which is correct. The data generation process of RACE differs from most machine reading comprehension datasets - instead of generating questions and answers by heuristics or crowd-sourcing, questions in RACE are specifically designed for testing human reading skills, and are created by domain experts." }, { "dkey": "SQuAD", "dval": "The Stanford Question Answering Dataset (SQuAD) is a collection of question-answer pairs derived from Wikipedia articles. In SQuAD, the correct answers of questions can be any sequence of tokens in the given text. Because the questions and answers are produced by humans through crowdsourcing, it is more diverse than some other question-answering datasets. SQuAD 1.1 contains 107,785 question-answer pairs on 536 articles. SQuAD2.0 (open-domain SQuAD, SQuAD-Open), the latest version, combines the 100,000 questions in SQuAD1.1 with over 50,000 un-answerable questions written adversarially by crowdworkers in forms that are similar to the answerable ones." }, { "dkey": "DREAM", "dval": "DREAM is a multiple-choice Dialogue-based REAding comprehension exaMination dataset. In contrast to existing reading comprehension datasets, DREAM is the first to focus on in-depth multi-turn multi-party dialogue understanding.\n\nDREAM contains 10,197 multiple choice questions for 6,444 dialogues, collected from English-as-a-foreign-language examinations designed by human experts. DREAM is likely to present significant challenges for existing reading comprehension systems: 84% of answers are non-extractive, 85% of questions require reasoning beyond a single sentence, and 34% of questions also involve commonsense knowledge." }, { "dkey": "DROP", "dval": "Discrete Reasoning Over Paragraphs DROP is a crowdsourced, adversarially-created, 96k-question benchmark, in which a system must resolve references in a question, perhaps to multiple input positions, and perform discrete operations over them (such as addition, counting, or sorting). These operations require a much more comprehensive understanding of the content of paragraphs than what was necessary for prior datasets. The questions consist of passages extracted from Wikipedia articles. The dataset is split into a training set of about 77,000 questions, a development set of around 9,500 questions and a hidden test set similar in size to the development set." }, { "dkey": "CosmosQA", "dval": "CosmosQA is a large-scale dataset of 35.6K problems that require commonsense-based reading comprehension, formulated as multiple-choice questions. It focuses on reading between the lines over a diverse collection of people’s everyday narratives, asking questions concerning on the likely causes or effects of events that require reasoning beyond the exact text spans in the context." }, { "dkey": "OneStopQA", "dval": "OneStopQA provides an alternative test set for reading comprehension which alleviates these shortcomings and has a substantially higher human ceiling performance." }, { "dkey": "C3", "dval": "C3 is a free-form multiple-Choice Chinese machine reading Comprehension dataset." 
}, { "dkey": "VisualMRC", "dval": "VisualMRC is a visual machine reading comprehension dataset that proposes a task: given a question and a document image, a model produces an abstractive answer.\n\nYou can find more details, analyses, and baseline results in the paper, \nVisualMRC: Machine Reading Comprehension on Document Images, AAAI 2021.\n\nStatistics:\n10,197 images\n30,562 QA pairs\n10.53 average question tokens (tokenizing with NLTK tokenizer)\n9.53 average answer tokens (tokenizing wit NLTK tokenizer)\n151.46 average OCR tokens (tokenizing with NLTK tokenizer)" } ]
I have been reading about blood vessel segmentation and tried to reproduce the results.
retinal blood vessel segmentation images
2,017
[ "IntrA", "COCO-Tasks", "ORVS", "SUN3D", "ROSE" ]
[ "STARE", "DRIVE" ]
[ { "dkey": "STARE", "dval": "The STARE (Structured Analysis of the Retina) dataset is a dataset for retinal vessel segmentation. It contains 20 equal-sized (700×605) color fundus images. For each image, two groups of annotations are provided.." }, { "dkey": "DRIVE", "dval": "The Digital Retinal Images for Vessel Extraction (DRIVE) dataset is a dataset for retinal vessel segmentation. It consists of a total of JPEG 40 color fundus images; including 7 abnormal pathology cases. The images were obtained from a diabetic retinopathy screening program in the Netherlands. The images were acquired using Canon CR5 non-mydriatic 3CCD camera with FOV equals to 45 degrees. Each image resolution is 584*565 pixels with eight bits per color channel (3 channels). \n\nThe set of 40 images was equally divided into 20 images for the training set and 20 images for the testing set. Inside both sets, for each image, there is circular field of view (FOV) mask of diameter that is approximately 540 pixels. Inside training set, for each image, one manual segmentation by an ophthalmological expert has been applied. Inside testing set, for each image, two manual segmentations have been applied by two different observers, where the first observer segmentation is accepted as the ground-truth for performance evaluation." }, { "dkey": "IntrA", "dval": "IntrA is an open-access 3D intracranial aneurysm dataset that makes the application of points-based and mesh-based classification and segmentation models available. This dataset can be used to diagnose intracranial aneurysms and to extract the neck for a clipping operation in medicine and other areas of deep learning, such as normal estimation and surface reconstruction.\n\n103 3D models of entire brain vessels are collected by reconstructing scanned 2D MRA images of patients (the raw 2D MRA images are not published due to medical ethics).\n1909 blood vessel segments are generated automatically from the complete models, including 1694 healthy vessel segments and 215 aneurysm segments for diagnosis.\n116 aneurysm segments are divided and annotated manually by medical experts; the scale of each aneurysm segment is based on the need for a preoperative examination.\nGeodesic distance matrices are computed and included for each annotated 3D segment, because the expression of the geodesic distance is more accurate than Euclidean distance according to the shape of vessels." }, { "dkey": "COCO-Tasks", "dval": "Comprises about 40,000 images where the most suitable objects for 14 tasks have been annotated." }, { "dkey": "ORVS", "dval": "The ORVS dataset has been newly established as a collaboration between the computer science and visual-science departments at the University of Calgary.\n\nThis dataset contains 49 images (42 training and seven testing images) collected from a clinic in Calgary-Canada. All images were acquired with a Zeiss Visucam 200 with 30 degrees field of view (FOV). The image size is 1444×1444 with 24 bits per pixel. Images and are stored in JPEG format with low compression, which is common in ophthalmology practice. All images were manually traced by an expert who a has been working in the field of retinal-image analysis and went through training. The expert was asked to label all pixels belonging to retinal vessels. The Windows Paint 3D tool was used to manually label the images." }, { "dkey": "SUN3D", "dval": "SUN3D contains a large-scale RGB-D video database, with 8 annotated sequences. 
Each frame has a semantic segmentation of the objects in the scene and information about the camera pose. It is composed of 415 sequences captured in 254 different spaces, in 41 different buildings. Moreover, some places have been captured multiple times at different moments of the day." }, { "dkey": "ROSE", "dval": "Retinal OCTA SEgmentation dataset (ROSE) consists of 229 OCTA images with vessel annotations at either centerline-level or pixel level." } ]
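The DRIVE entry above notes that each image ships with a circular field-of-view (FOV) mask and that the first observer's manual segmentation serves as ground truth on the test set. A minimal sketch of a Dice score restricted to the FOV (the array shapes and random toy inputs are assumptions for illustration):

```python
import numpy as np

def dice_in_fov(pred: np.ndarray, gt: np.ndarray, fov: np.ndarray) -> float:
    """Dice overlap between binary vessel maps, counting only pixels inside the FOV mask."""
    pred_in, gt_in = pred[fov > 0].astype(bool), gt[fov > 0].astype(bool)
    denom = pred_in.sum() + gt_in.sum()
    return 2.0 * np.logical_and(pred_in, gt_in).sum() / denom if denom > 0 else 1.0

# Toy example at the DRIVE resolution of 584x565 pixels.
rng = np.random.default_rng(0)
pred = rng.integers(0, 2, size=(584, 565))
gt = rng.integers(0, 2, size=(584, 565))
fov = np.ones((584, 565))
print(dice_in_fov(pred, gt, fov))
```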
I want to build a model to automatically determine whether an image is acceptable for diagnosis.
fundus image quality classification images
2,018
[ "ACDC", "SemEval 2014 Task 4 Sub Task 2", "QNLI", "Image and Video Advertisements", "IntrA", "Violin" ]
[ "STARE", "DRIVE" ]
[ { "dkey": "STARE", "dval": "The STARE (Structured Analysis of the Retina) dataset is a dataset for retinal vessel segmentation. It contains 20 equal-sized (700×605) color fundus images. For each image, two groups of annotations are provided.." }, { "dkey": "DRIVE", "dval": "The Digital Retinal Images for Vessel Extraction (DRIVE) dataset is a dataset for retinal vessel segmentation. It consists of a total of JPEG 40 color fundus images; including 7 abnormal pathology cases. The images were obtained from a diabetic retinopathy screening program in the Netherlands. The images were acquired using Canon CR5 non-mydriatic 3CCD camera with FOV equals to 45 degrees. Each image resolution is 584*565 pixels with eight bits per color channel (3 channels). \n\nThe set of 40 images was equally divided into 20 images for the training set and 20 images for the testing set. Inside both sets, for each image, there is circular field of view (FOV) mask of diameter that is approximately 540 pixels. Inside training set, for each image, one manual segmentation by an ophthalmological expert has been applied. Inside testing set, for each image, two manual segmentations have been applied by two different observers, where the first observer segmentation is accepted as the ground-truth for performance evaluation." }, { "dkey": "ACDC", "dval": "The goal of the Automated Cardiac Diagnosis Challenge (ACDC) challenge is to:\n\n\ncompare the performance of automatic methods on the segmentation of the left ventricular endocardium and epicardium as the right ventricular endocardium for both end diastolic and end systolic phase instances;\ncompare the performance of automatic methods for the classification of the examinations in five classes (normal case, heart failure with infarction, dilated cardiomyopathy, hypertrophic cardiomyopathy, abnormal right ventricle).\n\nThe overall ACDC dataset was created from real clinical exams acquired at the University Hospital of Dijon. Acquired data were fully anonymized and handled within the regulations set by the local ethical committee of the Hospital of Dijon (France). Our dataset covers several well-defined pathologies with enough cases to (1) properly train machine learning methods and (2) clearly assess the variations of the main physiological parameters obtained from cine-MRI (in particular diastolic volume and ejection fraction). The dataset is composed of 150 exams (all from different patients) divided into 5 evenly distributed subgroups (4 pathological plus 1 healthy subject groups) as described below. Furthermore, each patient comes with the following additional information : weight, height, as well as the diastolic and systolic phase instants.\n\nThe database is made available to participants through two datasets from the dedicated online evaluation website after a personal registration: i) a training dataset of 100 patients along with the corresponding manual references based on the analysis of one clinical expert; ii) a testing dataset composed of 50 new patients, without manual annotations but with the patient information given above. The raw input images are provided through the Nifti format." }, { "dkey": "SemEval 2014 Task 4 Sub Task 2", "dval": "Sentiment analysis is increasingly viewed as a vital task both from an academic and a commercial standpoint. 
The majority of current approaches, however, attempt to detect the overall polarity of a sentence, paragraph, or text span, regardless of the entities mentioned (e.g., laptops, restaurants) and their aspects (e.g., battery, screen; food, service). By contrast, this task is concerned with aspect-based sentiment analysis (ABSA), where the goal is to identify the aspects of given target entities and the sentiment expressed towards each aspect. Datasets consisting of customer reviews with human-authored annotations identifying the mentioned aspects of the target entities and the sentiment polarity of each aspect will be provided.\n\nSubtask 2: Aspect term polarity\n\nFor a given set of aspect terms within a sentence, determine whether the polarity of each aspect term is positive, negative, neutral or conflict (i.e., both positive and negative).\n\nFor example:\n\n“I loved their fajitas” → {fajitas: positive}\n“I hated their fajitas, but their salads were great” → {fajitas: negative, salads: positive}\n“The fajitas are their first plate” → {fajitas: neutral}\n“The fajitas were great to taste, but not to see” → {fajitas: conflict}" }, { "dkey": "QNLI", "dval": "The QNLI (Question-answering NLI) dataset is a Natural Language Inference dataset automatically derived from the Stanford Question Answering Dataset v1.1 (SQuAD). SQuAD v1.1 consists of question-paragraph pairs, where one of the sentences in the paragraph (drawn from Wikipedia) contains the answer to the corresponding question (written by an annotator). The dataset was converted into sentence pair classification by forming a pair between each question and each sentence in the corresponding context, and filtering out pairs with low lexical overlap between the question and the context sentence. The task is to determine whether the context sentence contains the answer to the question. This modified version of the original task removes the requirement that the model select the exact answer, but also removes the simplifying assumptions that the answer is always present in the input and that lexical overlap is a reliable cue. The QNLI dataset is part of the GLUE benchmark." }, { "dkey": "Image and Video Advertisements", "dval": "The Image and Video Advertisements collection consists of an image dataset of 64,832 image ads, and a video dataset of 3,477 ads. The data contains rich annotations encompassing the topic and sentiment of the ads, questions and answers describing what actions the viewer is prompted to take and the reasoning that the ad presents to persuade the viewer (\"What should I do according to this ad, and why should I do it?\"), and symbolic references ads make (e.g. a dove symbolizes peace)." }, { "dkey": "IntrA", "dval": "IntrA is an open-access 3D intracranial aneurysm dataset that makes the application of points-based and mesh-based classification and segmentation models available. 
This dataset can be used to diagnose intracranial aneurysms and to extract the neck for a clipping operation in medicine and other areas of deep learning, such as normal estimation and surface reconstruction.\n\n103 3D models of entire brain vessels are collected by reconstructing scanned 2D MRA images of patients (the raw 2D MRA images are not published due to medical ethics).\n1909 blood vessel segments are generated automatically from the complete models, including 1694 healthy vessel segments and 215 aneurysm segments for diagnosis.\n116 aneurysm segments are divided and annotated manually by medical experts; the scale of each aneurysm segment is based on the need for a preoperative examination.\nGeodesic distance matrices are computed and included for each annotated 3D segment, because the expression of the geodesic distance is more accurate than Euclidean distance according to the shape of vessels." }, { "dkey": "Violin", "dval": "Video-and-Language Inference is the task of joint multimodal understanding of video and text. Given a video clip with aligned subtitles as premise, paired with a natural language hypothesis based on the video content, a model needs to infer whether the hypothesis is entailed or contradicted by the given video clip. The Violin dataset is a dataset for this task which consists of 95,322 video-hypothesis pairs from 15,887 video clips, spanning over 582 hours of video. These video clips contain rich content with diverse temporal dynamics, event shifts, and people interactions, collected from two sources: (i) popular TV shows, and (ii) movie clips from YouTube channels." } ]
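The QNLI entry in the record above describes how that dataset was built: each question is paired with every sentence of its paragraph, and pairs with low lexical overlap are filtered out. A rough sketch of that construction (the tokenisation and the 0.2 threshold are assumptions; the exact filtering used for QNLI is not specified here):

```python
def lexical_overlap(question: str, sentence: str) -> float:
    # Crude token-set overlap between the question and a candidate context sentence.
    q_tokens, s_tokens = set(question.lower().split()), set(sentence.lower().split())
    return len(q_tokens & s_tokens) / max(len(q_tokens), 1)

def make_pairs(question: str, paragraph_sentences: list, min_overlap: float = 0.2):
    # Keep (question, sentence) pairs whose overlap clears an assumed threshold.
    return [(question, s) for s in paragraph_sentences
            if lexical_overlap(question, s) >= min_overlap]

sentences = ["Prices rose sharply in 2004.", "The camera has a 45 degree field of view."]
print(make_pairs("What field of view does the camera have?", sentences))
```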
We propose an end-to-end framework to reconstruct the 3D scene from
semantic reconstruction indoor scenes images
2,020
[ "DIPS", "MLe2e", "E2E", "DeeperForensics-1.0", "THEODORE", "DDD20" ]
[ "COCO", "Pix3D" ]
[ { "dkey": "COCO", "dval": "The MS COCO (Microsoft Common Objects in Context) dataset is a large-scale object detection, segmentation, key-point detection, and captioning dataset. The dataset consists of 328K images.\n\nSplits:\nThe first version of MS COCO dataset was released in 2014. It contains 164K images split into training (83K), validation (41K) and test (41K) sets. In 2015 additional test set of 81K images was released, including all the previous test images and 40K new images.\n\nBased on community feedback, in 2017 the training/validation split was changed from 83K/41K to 118K/5K. The new split uses the same images and annotations. The 2017 test set is a subset of 41K images of the 2015 test set. Additionally, the 2017 release contains a new unannotated dataset of 123K images.\n\nAnnotations:\nThe dataset has annotations for\n\n\nobject detection: bounding boxes and per-instance segmentation masks with 80 object categories,\ncaptioning: natural language descriptions of the images (see MS COCO Captions),\nkeypoints detection: containing more than 200,000 images and 250,000 person instances labeled with keypoints (17 possible keypoints, such as left eye, nose, right hip, right ankle),\nstuff image segmentation – per-pixel segmentation masks with 91 stuff categories, such as grass, wall, sky (see MS COCO Stuff),\npanoptic: full scene segmentation, with 80 thing categories (such as person, bicycle, elephant) and a subset of 91 stuff categories (grass, sky, road),\ndense pose: more than 39,000 images and 56,000 person instances labeled with DensePose annotations – each labeled person is annotated with an instance id and a mapping between image pixels that belong to that person body and a template 3D model.\nThe annotations are publicly available only for training and validation images." }, { "dkey": "Pix3D", "dval": "The Pix3D dataset is a large-scale benchmark of diverse image-shape pairs with pixel-level 2D-3D alignment. Pix3D has wide applications in shape-related tasks including reconstruction, retrieval, viewpoint estimation, etc." }, { "dkey": "DIPS", "dval": "Contains biases but is two orders of magnitude larger than those used previously." }, { "dkey": "MLe2e", "dval": "MLe2 is a dataset for the evaluation of scene text end-to-end reading systems and all intermediate stages such as text detection, script identification and text recognition. The dataset contains a total of 711 scene images covering four different scripts (Latin, Chinese, Kannada, and Hangul)." }, { "dkey": "E2E", "dval": "End-to-End NLG Challenge (E2E) aims to assess whether recent end-to-end NLG systems can generate more complex output by learning from datasets containing higher lexical richness, syntactic complexity and diverse discourse phenomena." }, { "dkey": "DeeperForensics-1.0", "dval": "DeeperForensics-1.0 represents the largest face forgery detection dataset by far, with 60,000 videos constituted by a total of 17.6 million frames, 10 times larger than existing datasets of the same kind. The full dataset includes 48,475 source videos and 11,000 manipulated videos. The source videos are collected on 100 paid and consented actors from 26 countries, and the manipulated videos are generated by a newly proposed many-to-many end-to-end face swapping method, DF-VAE. 7 types of real-world perturbations at 5 intensity levels are employed to ensure a larger scale and higher diversity." 
}, { "dkey": "THEODORE", "dval": "Recent work about synthetic indoor datasets from perspective views has shown significant improvements of object detection results with Convolutional Neural Networks(CNNs). In this paper, we introduce THEODORE: a novel, large-scale indoor dataset containing 100,000 high- resolution diversified fisheye images with 14 classes. To this end, we create 3D virtual environments of living rooms, different human characters and interior textures. Beside capturing fisheye images from virtual environments we create annotations for semantic segmentation, instance masks and bounding boxes for object detection tasks. We compare our synthetic dataset to state of the art real-world datasets for omnidirectional images. Based on MS COCO weights, we show that our dataset is well suited for fine-tuning CNNs for object detection. Through a high generalization of our models by means of image synthesis and domain randomization we reach an AP up to 0.84 for class person on High-Definition Analytics dataset." }, { "dkey": "DDD20", "dval": "The dataset was captured with a DAVIS camera that concurrently streams both dynamic vision sensor (DVS) brightness change events and active pixel sensor (APS) intensity frames. DDD20 is the longest event camera end-to-end driving dataset to date with 51h of DAVIS event+frame camera and vehicle human control data collected from 4000km of highway and urban driving under a variety of lighting conditions." } ]
We investigate the effectiveness of different pre-trained language models for Question Answering (QA) on four
question answering text
2,019
[ "How2QA", "TweetQA", "SQuAD-shifts", "PAQ", "TVQA" ]
[ "CoQA", "SQuAD" ]
[ { "dkey": "CoQA", "dval": "CoQA is a large-scale dataset for building Conversational Question Answering systems. The goal of the CoQA challenge is to measure the ability of machines to understand a text passage and answer a series of interconnected questions that appear in a conversation.\n\nCoQA contains 127,000+ questions with answers collected from 8000+ conversations. Each conversation is collected by pairing two crowdworkers to chat about a passage in the form of questions and answers. The unique features of CoQA include 1) the questions are conversational; 2) the answers can be free-form text; 3) each answer also comes with an evidence subsequence highlighted in the passage; and 4) the passages are collected from seven diverse domains. CoQA has a lot of challenging phenomena not present in existing reading comprehension datasets, e.g., coreference and pragmatic reasoning." }, { "dkey": "SQuAD", "dval": "The Stanford Question Answering Dataset (SQuAD) is a collection of question-answer pairs derived from Wikipedia articles. In SQuAD, the correct answers of questions can be any sequence of tokens in the given text. Because the questions and answers are produced by humans through crowdsourcing, it is more diverse than some other question-answering datasets. SQuAD 1.1 contains 107,785 question-answer pairs on 536 articles. SQuAD2.0 (open-domain SQuAD, SQuAD-Open), the latest version, combines the 100,000 questions in SQuAD1.1 with over 50,000 un-answerable questions written adversarially by crowdworkers in forms that are similar to the answerable ones." }, { "dkey": "How2QA", "dval": "To collect How2QA for video QA task, the same set of selected video clips are presented to another group of AMT workers for multichoice QA annotation. Each worker is assigned with one video segment and asked to write one question with four answer candidates (one correctand three distractors). Similarly, narrations are hidden from the workers to ensure the collected QA pairs are not biased by subtitles. Similar to TVQA, the start and end points are provided for the relevant moment for each question. After filtering low-quality annotations, the final dataset contains 44,007 QA pairs for 22k 60-second clips selected from 9035 videos." }, { "dkey": "TweetQA", "dval": "With social media becoming increasingly popular on which lots of news and real-time events are reported, developing automated question answering systems is critical to the effectiveness of many applications that rely on real-time knowledge. While previous question answering (QA) datasets have concentrated on formal text like news and Wikipedia, the first large-scale dataset for QA over social media data is presented. To make sure the tweets are meaningful and contain interesting information, tweets used by journalists to write news articles are gathered. Then human annotators are asked to write questions and answers upon these tweets. Unlike other QA datasets like SQuAD in which the answers are extractive, the answer are allowed to be abstractive. The task requires model to read a short tweet and a question and outputs a text phrase (does not need to be in the tweet) as the answer." }, { "dkey": "SQuAD-shifts", "dval": "Provides four new test sets for the Stanford Question Answering Dataset (SQuAD) and evaluate the ability of question-answering systems to generalize to new data." }, { "dkey": "PAQ", "dval": "Probably Asked Questions (PAQ) is a very large resource of 65M automatically-generated QA-pairs. 
PAQ is a semi-structured Knowledge Base (KB) of 65M natural language QA-pairs, which models can memorise and/or learn to retrieve from. PAQ differs from traditional KBs in that questions and answers are stored in natural language, and that questions are generated such that they are likely to appear in ODQA datasets. PAQ is automatically constructed using a question generation model and Wikipedia." }, { "dkey": "TVQA", "dval": "The TVQA dataset is a large-scale video dataset for video question answering. It is based on 6 popular TV shows (Friends, The Big Bang Theory, How I Met Your Mother, House M.D., Grey's Anatomy, Castle). It includes 152,545 QA pairs from 21,793 TV show clips. The QA pairs are split in an 8:1:1 ratio for training, validation, and test sets. The TVQA dataset provides the sequence of video frames extracted at 3 FPS, the corresponding subtitles with the video clips, and the query consisting of a question and four answer candidates. Among the four answer candidates, there is only one correct answer." } ]
I want to use a supervised model to recognize activities from low-resolution videos.
extreme low-resolution activity recognition images
2,019
[ "TinyVIRAT", "DAiSEE", "DIV2K", "UCF-Crime", "MPII Cooking 2 Dataset", "Composable activities dataset", "FaceForensics" ]
[ "UCF101", "HMDB51" ]
[ { "dkey": "UCF101", "dval": "UCF101 dataset is an extension of UCF50 and consists of 13,320 video clips, which are classified into 101 categories. These 101 categories can be classified into 5 types (Body motion, Human-human interactions, Human-object interactions, Playing musical instruments and Sports). The total length of these video clips is over 27 hours. All the videos are collected from YouTube and have a fixed frame rate of 25 FPS with the resolution of 320 × 240." }, { "dkey": "HMDB51", "dval": "The HMDB51 dataset is a large collection of realistic videos from various sources, including movies and web videos. The dataset is composed of 6,766 video clips from 51 action categories (such as “jump”, “kiss” and “laugh”), with each category containing at least 101 clips. The original evaluation scheme uses three different training/testing splits. In each split, each action class has 70 clips for training and 30 clips for testing. The average accuracy over these three splits is used to measure the final performance." }, { "dkey": "TinyVIRAT", "dval": "TinyVIRAT contains natural low-resolution activities. The actions in TinyVIRAT videos have multiple labels and they are extracted from surveillance videos which makes them realistic and more challenging." }, { "dkey": "DAiSEE", "dval": "DAiSEE is a multi-label video classification dataset comprising of 9,068 video snippets captured from 112 users for recognizing the user affective states of boredom, confusion, engagement, and frustration \"in the wild\". The dataset has four levels of labels namely - very low, low, high, and very high for each of the affective states, which are crowd annotated and correlated with a gold standard annotation created using a team of expert psychologists." }, { "dkey": "DIV2K", "dval": "DIV2K is a popular single-image super-resolution dataset which contains 1,000 images with different scenes and is splitted to 800 for training, 100 for validation and 100 for testing. It was collected for NTIRE2017 and NTIRE2018 Super-Resolution Challenges in order to encourage research on image super-resolution with more realistic degradation. This dataset contains low resolution images with different types of degradations. Apart from the standard bicubic downsampling, several types of degradations are considered in synthesizing low resolution images for different tracks of the challenges. Track 2 of NTIRE 2017 contains low resolution images with unknown x4 downscaling. Track 2 and track 4 of NTIRE 2018 correspond to realistic mild ×4 and realistic wild ×4 adverse conditions, respectively. Low-resolution images under realistic mild x4 setting suffer from motion blur, Poisson noise and pixel shifting. Degradations under realistic wild x4 setting are further extended to be of different levels from image to image." }, { "dkey": "UCF-Crime", "dval": "The UCF-Crime dataset is a large-scale dataset of 128 hours of videos. It consists of 1900 long and untrimmed real-world surveillance videos, with 13 realistic anomalies including Abuse, Arrest, Arson, Assault, Road Accident, Burglary, Explosion, Fighting, Robbery, Shooting, Stealing, Shoplifting, and Vandalism. These anomalies are selected because they have a significant impact on public safety. \n\nThis dataset can be used for two tasks. First, general anomaly detection considering all anomalies in one group and all normal activities in another group. Second, for recognizing each of 13 anomalous activities." 
}, { "dkey": "MPII Cooking 2 Dataset", "dval": "A dataset which provides detailed annotations for activity recognition." }, { "dkey": "Composable activities dataset", "dval": "The Composable activities dataset consists of 693 videos that contain activities in 16 classes performed by 14 actors. Each activity is composed of 3 to 11 atomic actions. RGB-D data for each sequence is captured using a Microsoft Kinect sensor and estimate position of relevant body joints.\n\nThe dataset provides annotations of the activity for each video and the actions for each of the four human parts (left/right arm and leg) for each frame in every video." }, { "dkey": "FaceForensics", "dval": "FaceForensics is a video dataset consisting of more than 500,000 frames containing faces from 1004 videos that can be used to study image or video forgeries. All videos are downloaded from Youtube and are cut down to short continuous clips that contain mostly frontal faces. This dataset has two versions:\n\n\n\nSource-to-Target: where the authors reenact over 1000 videos with new facial expressions extracted from other videos, which e.g. can be used to train a classifier to detect fake images or videos.\n\n\n\nSelfreenactment: where the authors use Face2Face to reenact the facial expressions of videos with their own facial expressions as input to get pairs of videos, which e.g. can be used to train supervised generative refinement models." } ]
I want to learn an action recognition model from trimmed videos.
action recognition videos
2,019
[ "Kinetics-600", "Kinetics", "AViD", "DISFA", "JHMDB", "MTL-AQA", "EPIC-KITCHENS-100" ]
[ "UCF101", "ActivityNet" ]
[ { "dkey": "UCF101", "dval": "UCF101 dataset is an extension of UCF50 and consists of 13,320 video clips, which are classified into 101 categories. These 101 categories can be classified into 5 types (Body motion, Human-human interactions, Human-object interactions, Playing musical instruments and Sports). The total length of these video clips is over 27 hours. All the videos are collected from YouTube and have a fixed frame rate of 25 FPS with the resolution of 320 × 240." }, { "dkey": "ActivityNet", "dval": "The ActivityNet dataset contains 200 different types of activities and a total of 849 hours of videos collected from YouTube. ActivityNet is the largest benchmark for temporal activity detection to date in terms of both the number of activity categories and number of videos, making the task particularly challenging. Version 1.3 of the dataset contains 19994 untrimmed videos in total and is divided into three disjoint subsets, training, validation, and testing by a ratio of 2:1:1. On average, each activity category has 137 untrimmed videos. Each video on average has 1.41 activities which are annotated with temporal boundaries. The ground-truth annotations of test videos are not public." }, { "dkey": "Kinetics-600", "dval": "The Kinetics-600 is a large-scale action recognition dataset which consists of around 480K videos from 600 action categories. The 480K videos are divided into 390K, 30K, 60K for training, validation and test sets, respectively. Each video in the dataset is a 10-second clip of action moment annotated from raw YouTube video. It is an extensions of the Kinetics-400 dataset." }, { "dkey": "Kinetics", "dval": "The Kinetics dataset is a large-scale, high-quality dataset for human action recognition in videos. The dataset consists of around 500,000 video clips covering 600 human action classes with at least 600 video clips for each action class. Each video clip lasts around 10 seconds and is labeled with a single action class. The videos are collected from YouTube." }, { "dkey": "AViD", "dval": "Is a collection of action videos from many different countries. The motivation is to create a public dataset that would benefit training and pretraining of action recognition models for everybody, rather than making it useful for limited countries." }, { "dkey": "DISFA", "dval": "The Denver Intensity of Spontaneous Facial Action (DISFA) dataset consists of 27 videos of 4844 frames each, with 130,788 images in total. Action unit annotations are on different levels of intensity, which are ignored in the following experiments and action units are either set or unset. DISFA was selected from a wider range of databases popular in the field of facial expression recognition because of the high number of smiles, i.e. action unit 12. In detail, 30,792 have this action unit set, 82,176 images have some action unit(s) set and 48,612 images have no action unit(s) set at all." }, { "dkey": "JHMDB", "dval": "JHMDB is an action recognition dataset that consists of 960 video sequences belonging to 21 actions. It is a subset of the larger HMDB51 dataset collected from digitized movies and YouTube videos. The dataset contains video and annotation for puppet flow per frame (approximated optimal flow on the person), puppet mask per frame, joint positions per frame, action label per clip and meta label per clip (camera motion, visible body parts, camera viewpoint, number of people, video quality)." 
}, { "dkey": "MTL-AQA", "dval": "A new multitask action quality assessment (AQA) dataset, the largest to date, comprising of more than 1600 diving samples; contains detailed annotations for fine-grained action recognition, commentary generation, and estimating the AQA score. Videos from multiple angles provided wherever available." }, { "dkey": "EPIC-KITCHENS-100", "dval": "This paper introduces the pipeline to scale the largest dataset in egocentric vision EPIC-KITCHENS. The effort culminates in EPIC-KITCHENS-100, a collection of 100 hours, 20M frames, 90K actions in 700 variable-length videos, capturing long-term unscripted activities in 45 environments, using head-mounted cameras. Compared to its previous version (EPIC-KITCHENS-55), EPIC-KITCHENS-100 has been annotated using a novel pipeline that allows denser (54% more actions per minute) and more complete annotations of fine-grained actions (+128% more action segments). This collection also enables evaluating the \"test of time\" - i.e. whether models trained on data collected in 2018 can generalise to new footage collected under the same hypotheses albeit \"two years on\".\nThe dataset is aligned with 6 challenges: action recognition (full and weak supervision), action detection, action anticipation, cross-modal retrieval (from captions), as well as unsupervised domain adaptation for action recognition. For each challenge, we define the task, provide baselines and evaluation metrics." } ]
I want to train a supervised model that is robust to adversarial perturbations.
adversarial robustness image classification
2,018
[ "ImageNet-P", "NYU-VP", "eQASC", "SNIPS", "DailyDialog++", "APRICOT", "Clothing1M" ]
[ "ImageNet", "CIFAR-10" ]
[ { "dkey": "ImageNet", "dval": "The ImageNet dataset contains 14,197,122 annotated images according to the WordNet hierarchy. Since 2010 the dataset is used in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), a benchmark in image classification and object detection.\nThe publicly released dataset contains a set of manually annotated training images. A set of test images is also released, with the manual annotations withheld.\nILSVRC annotations fall into one of two categories: (1) image-level annotation of a binary label for the presence or absence of an object class in the image, e.g., “there are cars in this image” but “there are no tigers,” and (2) object-level annotation of a tight bounding box and class label around an object instance in the image, e.g., “there is a screwdriver centered at position (20,25) with width of 50 pixels and height of 30 pixels”.\nThe ImageNet project does not own the copyright of the images, therefore only thumbnails and URLs of images are provided.\n\n\nTotal number of non-empty WordNet synsets: 21841\nTotal number of images: 14197122\nNumber of images with bounding box annotations: 1,034,908\nNumber of synsets with SIFT features: 1000\nNumber of images with SIFT features: 1.2 million" }, { "dkey": "CIFAR-10", "dval": "The CIFAR-10 dataset (Canadian Institute for Advanced Research, 10 classes) is a subset of the Tiny Images dataset and consists of 60000 32x32 color images. The images are labelled with one of 10 mutually exclusive classes: airplane, automobile (but not truck or pickup truck), bird, cat, deer, dog, frog, horse, ship, and truck (but not pickup truck). There are 6000 images per class with 5000 training and 1000 testing images per class.\n\nThe criteria for deciding whether an image belongs to a class were as follows:\n\n\nThe class name should be high on the list of likely answers to the question “What is in this picture?”\nThe image should be photo-realistic. Labelers were instructed to reject line drawings.\nThe image should contain only one prominent instance of the object to which the class refers.\nThe object may be partially occluded or seen from an unusual viewpoint as long as its identity is still clear to the labeler." }, { "dkey": "ImageNet-P", "dval": "ImageNet-P consists of noise, blur, weather, and digital distortions. The dataset has validation perturbations; has difficulty levels; has CIFAR-10, Tiny ImageNet, ImageNet 64 × 64, standard, and Inception-sized editions; and has been designed for benchmarking not training networks. ImageNet-P departs from ImageNet-C by having perturbation sequences generated from each ImageNet validation image. Each sequence contains more than 30 frames, so to counteract an increase in dataset size and evaluation time only 10 common perturbations are used." }, { "dkey": "NYU-VP", "dval": "NYU-VP is a new dataset for multi-model fitting, vanishing point (VP) estimation in this case. Each image is annotated with up to eight vanishing points, and pre-extracted line segments are provided which act as data points for a robust estimator. Due to its size, the dataset is the first to allow for supervised learning of a multi-model fitting task." }, { "dkey": "eQASC", "dval": "This dataset contains 98k 2-hop explanations for questions in the QASC dataset, with annotations indicating if they are valid (~25k) or invalid (~73k) explanations.\n\nThis repository addresses the current lack of training data for distinguish valid multihop explanations from invalid, by providing three new datasets. 
The main one, eQASC, contains 98k explanation annotations for the multihop question answering dataset QASC, and is the first to annotate multiple candidate explanations for each answer.\n\nThe second dataset, eQASC-perturbed, is constructed by crowd-sourcing perturbations (while preserving their validity) of a subset of explanations in QASC, to test consistency and generalization of explanation prediction models. The third dataset, eOBQA, is constructed by adding explanation annotations to the OBQA dataset to test generalization of models trained on eQASC." }, { "dkey": "SNIPS", "dval": "The SNIPS Natural Language Understanding benchmark is a dataset of over 16,000 crowdsourced queries distributed among 7 user intents of various complexity:\n\n\nSearchCreativeWork (e.g. Find me the I, Robot television show),\nGetWeather (e.g. Is it windy in Boston, MA right now?),\nBookRestaurant (e.g. I want to book a highly rated restaurant in Paris tomorrow night),\nPlayMusic (e.g. Play the last track from Beyoncé off Spotify),\nAddToPlaylist (e.g. Add Diamonds to my roadtrip playlist),\nRateBook (e.g. Give 6 stars to Of Mice and Men),\nSearchScreeningEvent (e.g. Check the showtimes for Wonder Woman in Paris).\nThe training set contains 13,084 utterances, the validation set and the test set contain 700 utterances each, with 100 queries per intent." }, { "dkey": "DailyDialog++", "dval": "Consists of (i) five relevant responses for each context and (ii) five adversarially crafted irrelevant responses for each context." }, { "dkey": "APRICOT", "dval": "APRICOT is a collection of over 1,000 annotated photographs of printed adversarial patches in public locations. The patches target several object categories for three COCO-trained detection models, and the photos represent natural variation in position, distance, lighting conditions, and viewing angle." }, { "dkey": "Clothing1M", "dval": "Clothing1M contains 1M clothing images in 14 classes. It is a dataset with noisy labels, since the data is collected from several online shopping websites and includes many mislabelled samples. This dataset also contains 50k, 14k, and 10k images with clean labels for training, validation, and testing, respectively." } ]
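The query above asks for a classifier robust to adversarial perturbations on CIFAR-10/ImageNet. As a purely illustrative sketch of what such a perturbation looks like, here is a one-step fast gradient sign method (FGSM) attack in PyTorch; FGSM is a standard technique that is not described anywhere in this record, and the tiny linear model and random batch below are stand-ins, not a real CIFAR-10 classifier:

```python
import torch
import torch.nn as nn

def fgsm_perturb(model, images, labels, epsilon=8 / 255):
    """One-step FGSM: move each input in the direction of the loss gradient's sign."""
    images = images.clone().detach().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(model(images), labels)
    loss.backward()
    adv = images + epsilon * images.grad.sign()
    return adv.clamp(0.0, 1.0).detach()

# Toy stand-in: 3x32x32 inputs (CIFAR-10 shape), 10 classes, random batch of 4 images.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
images = torch.rand(4, 3, 32, 32)
labels = torch.randint(0, 10, (4,))
adv_images = fgsm_perturb(model, images, labels)
print((adv_images - images).abs().max())  # perturbation magnitude is bounded by epsilon
```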
We propose a simple yet effective approach to exploit the available dense depth
3d semantic labeling images dense depth maps outdoor street scenes
2,018
[ "DocBank", "IMDB-BINARY", "REDDIT-BINARY", "Localized Narratives", "SBU Captions Dataset", "Shiny dataset" ]
[ "SYNTHIA", "Cityscapes" ]
[ { "dkey": "SYNTHIA", "dval": "The SYNTHIA dataset is a synthetic dataset that consists of 9400 multi-viewpoint photo-realistic frames rendered from a virtual city and comes with pixel-level semantic annotations for 13 classes. Each frame has resolution of 1280 × 960." }, { "dkey": "Cityscapes", "dval": "Cityscapes is a large-scale database which focuses on semantic understanding of urban street scenes. It provides semantic, instance-wise, and dense pixel annotations for 30 classes grouped into 8 categories (flat surfaces, humans, vehicles, constructions, objects, nature, sky, and void). The dataset consists of around 5000 fine annotated images and 20000 coarse annotated ones. Data was captured in 50 cities during several months, daytimes, and good weather conditions. It was originally recorded as video so the frames were manually selected to have the following features: large number of dynamic objects, varying scene layout, and varying background." }, { "dkey": "DocBank", "dval": "A benchmark dataset that contains 500K document pages with fine-grained token-level annotations for document layout analysis. DocBank is constructed using a simple yet effective way with weak supervision from the \\LaTeX{} documents available on the arXiv.com." }, { "dkey": "IMDB-BINARY", "dval": "IMDB-BINARY is a movie collaboration dataset that consists of the ego-networks of 1,000 actors/actresses who played roles in movies in IMDB. In each graph, nodes represent actors/actress, and there is an edge between them if they appear in the same movie. These graphs are derived from the Action and Romance genres." }, { "dkey": "REDDIT-BINARY", "dval": "REDDIT-BINARY consists of graphs corresponding to online discussions on Reddit. In each graph, nodes represent users, and there is an edge between them if at least one of them respond to the other’s comment. There are four popular subreddits, namely, IAmA, AskReddit, TrollXChromosomes, and atheism. IAmA and AskReddit are two question/answer based subreddits, and TrollXChromosomes and atheism are two discussion-based subreddits. A graph is labeled according to whether it belongs to a question/answer-based community or a discussion-based community." }, { "dkey": "Localized Narratives", "dval": "We propose Localized Narratives, a new form of multimodal image annotations connecting vision and language. We ask annotators to describe an image with their voice while simultaneously hovering their mouse over the region they are describing. Since the voice and the mouse pointer are synchronized, we can localize every single word in the description. This dense visual grounding takes the form of a mouse trace segment per word and is unique to our data. We annotated 849k images with Localized Narratives: the whole COCO, Flickr30k, and ADE20K datasets, and 671k images of Open Images, all of which we make publicly available. We provide an extensive analysis of these annotations showing they are diverse, accurate, and efficient to produce. We also demonstrate their utility on the application of controlled image captioning." }, { "dkey": "SBU Captions Dataset", "dval": "A collection that allows researchers to approach the extremely challenging problem of description generation using relatively simple non-parametric methods and produces surprisingly effective results." }, { "dkey": "Shiny dataset", "dval": "The shiny folder contains 8 scenes with challenging view-dependent effects used in our paper. We also provide additional scenes in the shiny_extended folder. 
\nThe test images for each scene used in our paper consist of one of every eight images in alphabetical order.\n\nEach scene contains the following directory structure:\nscene/\n dense/\n cameras.bin\n images.bin\n points3D.bin\n project.ini\n images/\n image_name1.png\n image_name2.png\n ...\n image_nameN.png\n images_distort/\n image_name1.png\n image_name2.png\n ...\n image_nameN.png\n sparse/\n cameras.bin\n images.bin\n points3D.bin\n project.ini\n database.db\n hwf_cxcy.npy\n planes.txt\n poses_bounds.npy\n\n\ndense/ folder contains COLMAP's output [1] after the input images are undistorted.\nimages/ folder contains undistorted images. (We use these images in our experiments.)\nimages_distort/ folder contains raw images taken from a smartphone.\nsparse/ folder contains COLMAP's sparse reconstruction output [1].\n\nOur poses_bounds.npy is similar to the LLFF[2] file format with a slight modification. This file stores a Nx14 numpy array, where N is the number of cameras. Each row in this array is split into two parts of sizes 12 and 2. The first part, when reshaped into 3x4, represents the camera extrinsic (camera-to-world transformation), and the second part with two dimensions stores the distances from that point of view to the first and last planes (near, far). These distances are computed automatically based on the scene’s statistics using LLFF’s code. (For details on how these are computed, see this code) \n\nhwf_cxcy.npy stores the camera intrinsic (height, width, focal length, principal point x, principal point y) in a 1x5 numpy array.\n\nplanes.txt stores information about the MPI planes. The first two numbers are the distances from a reference camera to the first and last planes (near, far). The third number tells whether the planes are placed equidistantly in the depth space (0) or inverse depth space (1). The last number is the padding size in pixels on all four sides of each of the MPI planes. I.e., the total dimension of each plane is (H + 2 * padding, W + 2 * padding).\n\nReferences:\n\n\n[1]: COLMAP structure from motion (Schönberger and Frahm, 2016).\n[2]: Local Light Field Fusion: Practical View Synthesis with Prescriptive Sampling Guidelines (Mildenhall et al., 2019)." } ]
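The Shiny dataset entry above spells out its file layout: poses_bounds.npy is an N×14 array whose first 12 values per row form a 3×4 camera-to-world matrix and whose last 2 values are the near/far plane distances, while hwf_cxcy.npy stores (height, width, focal, cx, cy). A minimal parsing sketch based only on that description (the scene path is a placeholder):

```python
import numpy as np

# Placeholder path to one scene directory of the dataset.
poses_bounds = np.load("scene/poses_bounds.npy")       # shape (N, 14)
hwf_cxcy = np.load("scene/hwf_cxcy.npy").reshape(-1)   # (height, width, focal, cx, cy)

# First 12 entries per row: 3x4 camera-to-world extrinsics; last 2 entries: near/far distances.
cam_to_world = poses_bounds[:, :12].reshape(-1, 3, 4)
near_far = poses_bounds[:, 12:]

height, width, focal, cx, cy = hwf_cxcy
print(cam_to_world.shape, near_far.shape, (height, width, focal, cx, cy))
```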
A novel hybrid convolutional and transformer model, WaLDORf, that achieves state-of-the-
nlu text
2,019
[ "BraTS 2017", "THEODORE", "Glint360K", "GTEA", "PG-19", "LibriSpeech", "Multi-PIE" ]
[ "QNLI", "GLUE", "SQuAD" ]
[ { "dkey": "QNLI", "dval": "The QNLI (Question-answering NLI) dataset is a Natural Language Inference dataset automatically derived from the Stanford Question Answering Dataset v1.1 (SQuAD). SQuAD v1.1 consists of question-paragraph pairs, where one of the sentences in the paragraph (drawn from Wikipedia) contains the answer to the corresponding question (written by an annotator). The dataset was converted into sentence pair classification by forming a pair between each question and each sentence in the corresponding context, and filtering out pairs with low lexical overlap between the question and the context sentence. The task is to determine whether the context sentence contains the answer to the question. This modified version of the original task removes the requirement that the model select the exact answer, but also removes the simplifying assumptions that the answer is always present in the input and that lexical overlap is a reliable cue. The QNLI dataset is part of GLEU benchmark." }, { "dkey": "GLUE", "dval": "General Language Understanding Evaluation (GLUE) benchmark is a collection of nine natural language understanding tasks, including single-sentence tasks CoLA and SST-2, similarity and paraphrasing tasks MRPC, STS-B and QQP, and natural language inference tasks MNLI, QNLI, RTE and WNLI." }, { "dkey": "SQuAD", "dval": "The Stanford Question Answering Dataset (SQuAD) is a collection of question-answer pairs derived from Wikipedia articles. In SQuAD, the correct answers of questions can be any sequence of tokens in the given text. Because the questions and answers are produced by humans through crowdsourcing, it is more diverse than some other question-answering datasets. SQuAD 1.1 contains 107,785 question-answer pairs on 536 articles. SQuAD2.0 (open-domain SQuAD, SQuAD-Open), the latest version, combines the 100,000 questions in SQuAD1.1 with over 50,000 un-answerable questions written adversarially by crowdworkers in forms that are similar to the answerable ones." }, { "dkey": "BraTS 2017", "dval": "The BRATS2017 dataset. It contains 285 brain tumor MRI scans, with four MRI modalities as T1, T1ce, T2, and Flair for each scan. The dataset also provides full masks for brain tumors, with labels for ED, ET, NET/NCR. The segmentation evaluation is based on three tasks: WT, TC and ET segmentation." }, { "dkey": "THEODORE", "dval": "Recent work about synthetic indoor datasets from perspective views has shown significant improvements of object detection results with Convolutional Neural Networks(CNNs). In this paper, we introduce THEODORE: a novel, large-scale indoor dataset containing 100,000 high- resolution diversified fisheye images with 14 classes. To this end, we create 3D virtual environments of living rooms, different human characters and interior textures. Beside capturing fisheye images from virtual environments we create annotations for semantic segmentation, instance masks and bounding boxes for object detection tasks. We compare our synthetic dataset to state of the art real-world datasets for omnidirectional images. Based on MS COCO weights, we show that our dataset is well suited for fine-tuning CNNs for object detection. Through a high generalization of our models by means of image synthesis and domain randomization we reach an AP up to 0.84 for class person on High-Definition Analytics dataset." 
}, { "dkey": "Glint360K", "dval": "The largest and cleanest face recognition dataset Glint360K, \nwhich contains 17,091,657 images of 360,232 individuals, baseline models trained on Glint360K can easily achieve state-of-the-art performance." }, { "dkey": "GTEA", "dval": "The Georgia Tech Egocentric Activities (GTEA) dataset contains seven types of daily activities such as making sandwich, tea, or coffee. Each activity is performed by four different people, thus totally 28 videos. For each video, there are about 20 fine-grained action instances such as take bread, pour ketchup, in approximately one minute." }, { "dkey": "PG-19", "dval": "A new open-vocabulary language modelling benchmark derived from books." }, { "dkey": "LibriSpeech", "dval": "The LibriSpeech corpus is a collection of approximately 1,000 hours of audiobooks that are a part of the LibriVox project. Most of the audiobooks come from the Project Gutenberg. The training data is split into 3 partitions of 100hr, 360hr, and 500hr sets while the dev and test data are split into the ’clean’ and ’other’ categories, respectively, depending upon how well or challenging Automatic Speech Recognition systems would perform against. Each of the dev and test sets is around 5hr in audio length. This corpus also provides the n-gram language models and the corresponding texts excerpted from the Project Gutenberg books, which contain 803M tokens and 977K unique words." }, { "dkey": "Multi-PIE", "dval": "The Multi-PIE (Multi Pose, Illumination, Expressions) dataset consists of face images of 337 subjects taken under different pose, illumination and expressions. The pose range contains 15 discrete views, capturing a face profile-to-profile. Illumination changes were modeled using 19 flashlights located in different places of the room." } ]
Visual question answering (VQA) is an important task in the field of computer vision.
visual question answering images natural language
2,016
[ "VizWiz", "ST-VQA", "VQA-E", "TDIUC" ]
[ "DBpedia", "COCO", "DAQUAR" ]
[ { "dkey": "DBpedia", "dval": "DBpedia (from \"DB\" for \"database\") is a project aiming to extract structured content from the information created in the Wikipedia project. DBpedia allows users to semantically query relationships and properties of Wikipedia resources, including links to other related datasets." }, { "dkey": "COCO", "dval": "The MS COCO (Microsoft Common Objects in Context) dataset is a large-scale object detection, segmentation, key-point detection, and captioning dataset. The dataset consists of 328K images.\n\nSplits:\nThe first version of MS COCO dataset was released in 2014. It contains 164K images split into training (83K), validation (41K) and test (41K) sets. In 2015 additional test set of 81K images was released, including all the previous test images and 40K new images.\n\nBased on community feedback, in 2017 the training/validation split was changed from 83K/41K to 118K/5K. The new split uses the same images and annotations. The 2017 test set is a subset of 41K images of the 2015 test set. Additionally, the 2017 release contains a new unannotated dataset of 123K images.\n\nAnnotations:\nThe dataset has annotations for\n\n\nobject detection: bounding boxes and per-instance segmentation masks with 80 object categories,\ncaptioning: natural language descriptions of the images (see MS COCO Captions),\nkeypoints detection: containing more than 200,000 images and 250,000 person instances labeled with keypoints (17 possible keypoints, such as left eye, nose, right hip, right ankle),\nstuff image segmentation – per-pixel segmentation masks with 91 stuff categories, such as grass, wall, sky (see MS COCO Stuff),\npanoptic: full scene segmentation, with 80 thing categories (such as person, bicycle, elephant) and a subset of 91 stuff categories (grass, sky, road),\ndense pose: more than 39,000 images and 56,000 person instances labeled with DensePose annotations – each labeled person is annotated with an instance id and a mapping between image pixels that belong to that person body and a template 3D model.\nThe annotations are publicly available only for training and validation images." }, { "dkey": "DAQUAR", "dval": "DAQUAR (DAtaset for QUestion Answering on Real-world images) is a dataset of human question answer pairs about images." }, { "dkey": "VizWiz", "dval": "The VizWiz-VQA dataset originates from a natural visual question answering setting where blind people each took an image and recorded a spoken question about it, together with 10 crowdsourced answers per visual question. The proposed challenge addresses the following two tasks for this dataset: predict the answer to a visual question and (2) predict whether a visual question cannot be answered." }, { "dkey": "ST-VQA", "dval": "ST-VQA aims to highlight the importance of exploiting high-level semantic information present in images as textual cues in the VQA process." }, { "dkey": "VQA-E", "dval": "VQA-E is a dataset for Visual Question Answering with Explanation, where the models are required to generate and explanation with the predicted answer. The VQA-E dataset is automatically derived from the VQA v2 dataset by synthesizing a textual explanation for each image-question-answer triple." }, { "dkey": "TDIUC", "dval": "Task Directed Image Understanding Challenge (TDIUC) dataset is a Visual Question Answering dataset which consists of 1.6M questions and 170K images sourced from MS COCO and the Visual Genome Dataset. 
The image-question pairs are split into 12 categories and 4 additional evaluation metrics which help evaluate models’ robustness against answer imbalance and their ability to answer questions that require higher reasoning capability. The TDIUC dataset divides the VQA paradigm into 12 different task directed question types. These include questions that require a simpler task (e.g., object presence, color attribute) and more complex tasks (e.g., counting, positional reasoning). The dataset also includes an “Absurd” question category in which questions are irrelevant to the image contents to help balance the dataset." } ]
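Several of these records cite MS COCO annotations. As a hedged illustration of how such annotations are typically read, the sketch below uses the pycocotools API; the annotation path assumes the official 2017 release layout and is only a placeholder.

```python
from pycocotools.coco import COCO  # pip install pycocotools

# Load instance annotations (placeholder path, official 2017 layout assumed).
coco = COCO("annotations/instances_val2017.json")

img_id = coco.getImgIds()[0]               # first image id
img_info = coco.loadImgs(img_id)[0]        # file_name, height, width, ...
ann_ids = coco.getAnnIds(imgIds=img_id)
anns = coco.loadAnns(ann_ids)              # boxes + segmentation polygons

for ann in anns:
    cat = coco.loadCats(ann["category_id"])[0]["name"]
    print(img_info["file_name"], cat, ann["bbox"])  # bbox is [x, y, w, h]
```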
I want to build an effective tracking model based on a simple tracking framework.
tracking image sequences
2,019
[ "SNIPS", "ProPara", "Frames Dataset", "PoseTrack" ]
[ "Penn Treebank", "OTB" ]
[ { "dkey": "Penn Treebank", "dval": "The English Penn Treebank (PTB) corpus, and in particular the section of the corpus corresponding to the articles of Wall Street Journal (WSJ), is one of the most known and used corpus for the evaluation of models for sequence labelling. The task consists of annotating each word with its Part-of-Speech tag. In the most common split of this corpus, sections from 0 to 18 are used for training (38 219 sentences, 912 344 tokens), sections from 19 to 21 are used for validation (5 527 sentences, 131 768 tokens), and sections from 22 to 24 are used for testing (5 462 sentences, 129 654 tokens).\nThe corpus is also commonly used for character-level and word-level Language Modelling." }, { "dkey": "OTB", "dval": "Object Tracking Benchmark (OTB) is a visual tracking benchmark that is widely used to evaluate the performance of a visual tracking algorithm. The dataset contains a total of 100 sequences and each is annotated frame-by-frame with bounding boxes and 11 challenge attributes. OTB-2013 dataset contains 51 sequences and the OTB-2015 dataset contains all 100 sequences of the OTB dataset." }, { "dkey": "SNIPS", "dval": "The SNIPS Natural Language Understanding benchmark is a dataset of over 16,000 crowdsourced queries distributed among 7 user intents of various complexity:\n\n\nSearchCreativeWork (e.g. Find me the I, Robot television show),\nGetWeather (e.g. Is it windy in Boston, MA right now?),\nBookRestaurant (e.g. I want to book a highly rated restaurant in Paris tomorrow night),\nPlayMusic (e.g. Play the last track from Beyoncé off Spotify),\nAddToPlaylist (e.g. Add Diamonds to my roadtrip playlist),\nRateBook (e.g. Give 6 stars to Of Mice and Men),\nSearchScreeningEvent (e.g. Check the showtimes for Wonder Woman in Paris).\nThe training set contains of 13,084 utterances, the validation set and the test set contain 700 utterances each, with 100 queries per intent." }, { "dkey": "ProPara", "dval": "The ProPara dataset is designed to train and test comprehension of simple paragraphs describing processes (e.g., photosynthesis), designed for the task of predicting, tracking, and answering questions about how entities change during the process.\n\nProPara aims to promote the research in natural language understanding in the context of procedural text. This requires identifying the actions described in the paragraph and tracking state changes happening to the entities involved. The comprehension task is treated as that of predicting, tracking, and answering questions about how entities change during the procedure. The dataset contains 488 paragraphs and 3,300 sentences. Each paragraph is richly annotated with the existence and locations of all the main entities (the “participants”) at every time step (sentence) throughout the procedure (~81,000 annotations).\n\nProPara paragraphs are natural (authored by crowdsourcing) rather than synthetic (e.g., in bAbI). Workers were given a prompt (e.g., “What happens during photosynthesis?”) and then asked to author a series of sentences describing the sequence of events in the procedure. From these sentences, participant entities and their existence and locations were identified. The goal of the challenge is to predict the existence and location of each participant, based on sentences in the paragraph." }, { "dkey": "Frames Dataset", "dval": "This dataset is dialog dataset collected in a Wizard-of-Oz fashion. Two humans talked to each other via a chat interface. 
One was playing the role of the user and the other one was playing the role of the conversational agent. The latter is called a wizard as a reference to the Wizard of Oz, the man behind the curtain. The wizards had access to a database of 250+ packages, each composed of a hotel and round-trip flights. The users were asked to find the best deal. This resulted in complex dialogues where a user would often consider different options, compare packages, and progressively build the description of her ideal trip." }, { "dkey": "PoseTrack", "dval": "The PoseTrack dataset is a large-scale benchmark for multi-person pose estimation and tracking in videos. It requires not only pose estimation in single frames, but also temporal tracking across frames. It contains 514 videos including 66,374 frames in total, split into 300, 50 and 208 videos for training, validation and test set respectively. For training videos, 30 frames from the center are annotated. For validation and test videos, besides 30 frames from the center, every fourth frame is also annotated for evaluating long range articulated tracking. The annotations include 15 body keypoints location, a unique person id and a head bounding box for each person instance." } ]
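OTB-style tracking benchmarks, as referenced in this record, are scored by per-frame bounding-box overlap. The following is a small, self-contained sketch of that success metric (not the official toolkit); the [x, y, w, h] box format and the 0.5 threshold are conventional assumptions.

```python
import numpy as np

def iou_xywh(a, b):
    # Intersection-over-union of two boxes given as [x, y, w, h].
    ax1, ay1, ax2, ay2 = a[0], a[1], a[0] + a[2], a[1] + a[3]
    bx1, by1, bx2, by2 = b[0], b[1], b[0] + b[2], b[1] + b[3]
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union > 0 else 0.0

def success_rate(pred_boxes, gt_boxes, threshold=0.5):
    # Fraction of frames whose overlap exceeds the threshold.
    overlaps = np.array([iou_xywh(p, g) for p, g in zip(pred_boxes, gt_boxes)])
    return float((overlaps > threshold).mean())
```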
I want to use distant supervision to extract evidence sentences from reference documents for MRC tasks.
machine reading comprehension text paragraph-level
2,019
[ "DocRED", "Delicious", "ELI5", "Melinda", "DWIE", "FOBIE" ]
[ "RACE", "SearchQA", "MultiNLI" ]
[ { "dkey": "RACE", "dval": "The ReAding Comprehension dataset from Examinations (RACE) dataset is a machine reading comprehension dataset consisting of 27,933 passages and 97,867 questions from English exams, targeting Chinese students aged 12-18. RACE consists of two subsets, RACE-M and RACE-H, from middle school and high school exams, respectively. RACE-M has 28,293 questions and RACE-H has 69,574. Each question is associated with 4 candidate answers, one of which is correct. The data generation process of RACE differs from most machine reading comprehension datasets - instead of generating questions and answers by heuristics or crowd-sourcing, questions in RACE are specifically designed for testing human reading skills, and are created by domain experts." }, { "dkey": "SearchQA", "dval": "SearchQA was built using an in-production, commercial search engine. It closely reflects the full pipeline of a (hypothetical) general question-answering system, which consists of information retrieval and answer synthesis." }, { "dkey": "MultiNLI", "dval": "The Multi-Genre Natural Language Inference (MultiNLI) dataset has 433K sentence pairs. Its size and mode of collection are modeled closely like SNLI. MultiNLI offers ten distinct genres (Face-to-face, Telephone, 9/11, Travel, Letters, Oxford University Press, Slate, Verbatim, Goverment and Fiction) of written and spoken English data. There are matched dev/test sets which are derived from the same sources as those in the training set, and mismatched sets which do not closely resemble any seen at training time." }, { "dkey": "DocRED", "dval": "DocRED (Document-Level Relation Extraction Dataset) is a relation extraction dataset constructed from Wikipedia and Wikidata. Each document in the dataset is human-annotated with named entity mentions, coreference information, intra- and inter-sentence relations, and supporting evidence. DocRED requires reading multiple sentences in a document to extract entities and infer their relations by synthesizing all information of the document. Along with the human-annotated data, the dataset provides large-scale distantly supervised data.\n\nDocRED contains 132,375 entities and 56,354 relational facts annotated on 5,053 Wikipedia documents. In addition to the human-annotated data, the dataset provides large-scale distantly supervised data over 101,873 documents." }, { "dkey": "Delicious", "dval": "Delicious : This data set contains tagged web pages retrieved from the website delicious.com." }, { "dkey": "ELI5", "dval": "ELI5 is a dataset for long-form question answering. It contains 270K complex, diverse questions that require explanatory multi-sentence answers. Web search results are used as evidence documents to answer each question.\n\nELI5 is also a task in Dodecadialogue." }, { "dkey": "Melinda", "dval": "Introduces a new dataset, MELINDA, for Multimodal biomEdicaL experImeNt methoD clAssification. The dataset is collected in a fully automated distant supervision manner, where the labels are obtained from an existing curated database, and the actual contents are extracted from papers associated with each of the records in the database." }, { "dkey": "DWIE", "dval": "The 'Deutsche Welle corpus for Information Extraction' (DWIE) is a multi-task dataset that combines four main Information Extraction (IE) annotation sub-tasks: (i) Named Entity Recognition (NER), (ii) Coreference Resolution, (iii) Relation Extraction (RE), and (iv) Entity Linking. 
DWIE is conceived as an entity-centric dataset that describes interactions and properties of conceptual entities on the level of the complete document." }, { "dkey": "FOBIE", "dval": "The Focused Open Biology Information Extraction (FOBIE) dataset aims to support IE from Computer-Aided Biomimetics. The dataset contains ~1,500 sentences from scientific biological texts. These sentences are annotated with TRADE-OFFS and syntactically similar relations between unbounded arguments, as well as argument-modifiers.\n\nThe FOBIE dataset has been used to explore Semi-Open Relation Extraction (SORE). The code for this and instructions can be found inside the SORE folder Readme.md, or in the ReadTheDocs documentations." } ]
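The query above asks for distant supervision of evidence sentences for MRC. One common heuristic, sketched below purely as an illustration (not the method of any cited paper), labels a sentence as evidence when it contains the gold answer or overlaps strongly with the question and answer tokens; the 0.3 threshold is arbitrary.

```python
import re

def tokens(text):
    # Lower-case alphanumeric tokens as a crude lexical representation.
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def label_evidence(sentences, question, answer, min_overlap=0.3):
    target = tokens(question) | tokens(answer)
    labels = []
    for sent in sentences:
        overlap = len(tokens(sent) & target) / max(len(target), 1)
        is_evidence = overlap >= min_overlap or answer.lower() in sent.lower()
        labels.append(1 if is_evidence else 0)
    return labels
```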
I want to train a classifier to classify objects in images.
object classification images
2,018
[ "GYAFC", "UCF101", "Chinese Classifier", "Food-101", "SNIPS", "StreetStyle" ]
[ "ImageNet", "CelebA" ]
[ { "dkey": "ImageNet", "dval": "The ImageNet dataset contains 14,197,122 annotated images according to the WordNet hierarchy. Since 2010 the dataset is used in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), a benchmark in image classification and object detection.\nThe publicly released dataset contains a set of manually annotated training images. A set of test images is also released, with the manual annotations withheld.\nILSVRC annotations fall into one of two categories: (1) image-level annotation of a binary label for the presence or absence of an object class in the image, e.g., “there are cars in this image” but “there are no tigers,” and (2) object-level annotation of a tight bounding box and class label around an object instance in the image, e.g., “there is a screwdriver centered at position (20,25) with width of 50 pixels and height of 30 pixels”.\nThe ImageNet project does not own the copyright of the images, therefore only thumbnails and URLs of images are provided.\n\n\nTotal number of non-empty WordNet synsets: 21841\nTotal number of images: 14197122\nNumber of images with bounding box annotations: 1,034,908\nNumber of synsets with SIFT features: 1000\nNumber of images with SIFT features: 1.2 million" }, { "dkey": "CelebA", "dval": "CelebFaces Attributes dataset contains 202,599 face images of the size 178×218 from 10,177 celebrities, each annotated with 40 binary labels indicating facial attributes like hair color, gender and age." }, { "dkey": "GYAFC", "dval": "Grammarly’s Yahoo Answers Formality Corpus (GYAFC) is the largest dataset for any style containing a total of 110K informal / formal sentence pairs.\n\nYahoo Answers is a question answering forum, contains a large number of informal sentences and allows redistribution of data. The authors used the Yahoo Answers L6 corpus to create the GYAFC dataset of informal and formal sentence pairs. In order to ensure a uniform distribution of data, they removed sentences that are questions, contain URLs, and are shorter than 5 words or longer than 25. After these preprocessing steps, 40 million sentences remain. \n\nThe Yahoo Answers corpus consists of several different domains like Business, Entertainment & Music, Travel, Food, etc. Pavlick and Tetreault formality classifier (PT16) shows that the formality level varies significantly\nacross different genres. In order to control for this variation, the authors work with two specific domains that contain the most informal sentences and show results on training and testing within those categories. The authors use the formality classifier from PT16 to identify informal sentences and train this classifier on the Answers genre of the PT16 corpus\nwhich consists of nearly 5,000 randomly selected sentences from Yahoo Answers manually annotated on a scale of -3 (very informal) to 3 (very formal). They find that the domains of Entertainment & Music and Family & Relationships contain the most informal sentences and create the GYAFC dataset using these domains." }, { "dkey": "UCF101", "dval": "UCF101 dataset is an extension of UCF50 and consists of 13,320 video clips, which are classified into 101 categories. These 101 categories can be classified into 5 types (Body motion, Human-human interactions, Human-object interactions, Playing musical instruments and Sports). The total length of these video clips is over 27 hours. All the videos are collected from YouTube and have a fixed frame rate of 25 FPS with the resolution of 320 × 240." 
}, { "dkey": "Chinese Classifier", "dval": "Classifiers are function words that are used to express quantities in Chinese and are especially difficult for language learners. This dataset of Chinese Classifiers can be used to predict Chinese classifiers from context.\nThe dataset contains a large collection of example sentences for Chinese classifier usage derived from three language corpora (Lancaster Corpus of Mandarin Chinese, UCLA Corpus of Written Chinese and Leiden Weibo Corpus). The data was cleaned and processed for a context-based classifier prediction task." }, { "dkey": "Food-101", "dval": "The Food-101 dataset consists of 101 food categories with 750 training and 250 test images per category, making a total of 101k images. The labels for the test images have been manually cleaned, while the training set contains some noise." }, { "dkey": "SNIPS", "dval": "The SNIPS Natural Language Understanding benchmark is a dataset of over 16,000 crowdsourced queries distributed among 7 user intents of various complexity:\n\n\nSearchCreativeWork (e.g. Find me the I, Robot television show),\nGetWeather (e.g. Is it windy in Boston, MA right now?),\nBookRestaurant (e.g. I want to book a highly rated restaurant in Paris tomorrow night),\nPlayMusic (e.g. Play the last track from Beyoncé off Spotify),\nAddToPlaylist (e.g. Add Diamonds to my roadtrip playlist),\nRateBook (e.g. Give 6 stars to Of Mice and Men),\nSearchScreeningEvent (e.g. Check the showtimes for Wonder Woman in Paris).\nThe training set contains of 13,084 utterances, the validation set and the test set contain 700 utterances each, with 100 queries per intent." }, { "dkey": "StreetStyle", "dval": "StreetStyle is a large-scale dataset of photos of people annotated with clothing attributes, and use this dataset to train attribute classifiers via deep learning." } ]
I want to use a video inpainting model to inpaint the frames of a video.
video frame inpainting
2,020
[ "FVI", "DeepFashion", "DTD", "OpenEDS", "SNIPS", "FaceForensics" ]
[ "UCF101", "HMDB51" ]
[ { "dkey": "UCF101", "dval": "UCF101 dataset is an extension of UCF50 and consists of 13,320 video clips, which are classified into 101 categories. These 101 categories can be classified into 5 types (Body motion, Human-human interactions, Human-object interactions, Playing musical instruments and Sports). The total length of these video clips is over 27 hours. All the videos are collected from YouTube and have a fixed frame rate of 25 FPS with the resolution of 320 × 240." }, { "dkey": "HMDB51", "dval": "The HMDB51 dataset is a large collection of realistic videos from various sources, including movies and web videos. The dataset is composed of 6,766 video clips from 51 action categories (such as “jump”, “kiss” and “laugh”), with each category containing at least 101 clips. The original evaluation scheme uses three different training/testing splits. In each split, each action class has 70 clips for training and 30 clips for testing. The average accuracy over these three splits is used to measure the final performance." }, { "dkey": "FVI", "dval": "The Free-Form Video Inpainting dataset is a dataset used for training and evaluation video inpainting models. It consists of 1940 videos from the YouTube-VOS dataset and 12,600 videos from the YouTube-BoundingBoxes." }, { "dkey": "DeepFashion", "dval": "DeepFashion is a dataset containing around 800K diverse fashion images with their rich annotations (46 categories, 1,000 descriptive attributes, bounding boxes and landmark information) ranging from well-posed product images to real-world-like consumer photos." }, { "dkey": "DTD", "dval": "The Describable Textures Dataset (DTD) contains 5640 texture images in the wild. They are annotated with human-centric attributes inspired by the perceptual properties of textures." }, { "dkey": "OpenEDS", "dval": "OpenEDS (Open Eye Dataset) is a large scale data set of eye-images captured using a virtual-reality (VR) head mounted display mounted with two synchronized eyefacing cameras at a frame rate of 200 Hz under controlled illumination. This dataset is compiled from video capture of the eye-region collected from 152 individual participants and is divided into four subsets: (i) 12,759 images with pixel-level annotations for key eye-regions: iris, pupil and sclera (ii) 252,690 unlabelled eye-images, (iii) 91,200 frames from randomly selected video sequence of 1.5 seconds in duration and (iv) 143 pairs of left and right point cloud data compiled from corneal topography of eye regions collected from a subset, 143 out of 152, participants in the study." }, { "dkey": "SNIPS", "dval": "The SNIPS Natural Language Understanding benchmark is a dataset of over 16,000 crowdsourced queries distributed among 7 user intents of various complexity:\n\n\nSearchCreativeWork (e.g. Find me the I, Robot television show),\nGetWeather (e.g. Is it windy in Boston, MA right now?),\nBookRestaurant (e.g. I want to book a highly rated restaurant in Paris tomorrow night),\nPlayMusic (e.g. Play the last track from Beyoncé off Spotify),\nAddToPlaylist (e.g. Add Diamonds to my roadtrip playlist),\nRateBook (e.g. Give 6 stars to Of Mice and Men),\nSearchScreeningEvent (e.g. Check the showtimes for Wonder Woman in Paris).\nThe training set contains of 13,084 utterances, the validation set and the test set contain 700 utterances each, with 100 queries per intent." 
}, { "dkey": "FaceForensics", "dval": "FaceForensics is a video dataset consisting of more than 500,000 frames containing faces from 1004 videos that can be used to study image or video forgeries. All videos are downloaded from Youtube and are cut down to short continuous clips that contain mostly frontal faces. This dataset has two versions:\n\n\n\nSource-to-Target: where the authors reenact over 1000 videos with new facial expressions extracted from other videos, which e.g. can be used to train a classifier to detect fake images or videos.\n\n\n\nSelfreenactment: where the authors use Face2Face to reenact the facial expressions of videos with their own facial expressions as input to get pairs of videos, which e.g. can be used to train supervised generative refinement models." } ]
An end-to-end hierarchical action recognition architecture.
action recognition video
2,017
[ "EPIC-KITCHENS-55", "CCPD", "E2E", "PixelHelp", "MLe2e", "RCTW-17", "DDD20" ]
[ "ImageNet", "HMDB51" ]
[ { "dkey": "ImageNet", "dval": "The ImageNet dataset contains 14,197,122 annotated images according to the WordNet hierarchy. Since 2010 the dataset is used in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), a benchmark in image classification and object detection.\nThe publicly released dataset contains a set of manually annotated training images. A set of test images is also released, with the manual annotations withheld.\nILSVRC annotations fall into one of two categories: (1) image-level annotation of a binary label for the presence or absence of an object class in the image, e.g., “there are cars in this image” but “there are no tigers,” and (2) object-level annotation of a tight bounding box and class label around an object instance in the image, e.g., “there is a screwdriver centered at position (20,25) with width of 50 pixels and height of 30 pixels”.\nThe ImageNet project does not own the copyright of the images, therefore only thumbnails and URLs of images are provided.\n\n\nTotal number of non-empty WordNet synsets: 21841\nTotal number of images: 14197122\nNumber of images with bounding box annotations: 1,034,908\nNumber of synsets with SIFT features: 1000\nNumber of images with SIFT features: 1.2 million" }, { "dkey": "HMDB51", "dval": "The HMDB51 dataset is a large collection of realistic videos from various sources, including movies and web videos. The dataset is composed of 6,766 video clips from 51 action categories (such as “jump”, “kiss” and “laugh”), with each category containing at least 101 clips. The original evaluation scheme uses three different training/testing splits. In each split, each action class has 70 clips for training and 30 clips for testing. The average accuracy over these three splits is used to measure the final performance." }, { "dkey": "EPIC-KITCHENS-55", "dval": "The EPIC-KITCHENS-55 dataset comprises a set of 432 egocentric videos recorded by 32 participants in their kitchens at 60fps with a head mounted camera. There is no guiding script for the participants who freely perform activities in kitchens related to cooking, food preparation or washing up among others. Each video is split into short action segments (mean duration is 3.7s) with specific start and end times and a verb and noun annotation describing the action (e.g. ‘open fridge‘). The verb classes are 125 and the noun classes 331. The dataset is divided into one train and two test splits." }, { "dkey": "CCPD", "dval": "The Chinese City Parking Dataset (CCPD) is a dataset for license plate detection and recognition. It contains over 250k unique car images, with license plate location annotations." }, { "dkey": "E2E", "dval": "End-to-End NLG Challenge (E2E) aims to assess whether recent end-to-end NLG systems can generate more complex output by learning from datasets containing higher lexical richness, syntactic complexity and diverse discourse phenomena." }, { "dkey": "PixelHelp", "dval": "PixelHelp includes 187 multi-step instructions of 4 task categories deined in https://support.google.com/pixelphone and annotated by human. This dataset includes 88 general tasks, such as configuring accounts, 38 Gmail tasks, 31 Chrome tasks, and 30 Photos related tasks. This dataset is an updated opensource version of the original PixelHelp dataset, which was used for testing the end-to-end grounding quality of the model in paper \"Mapping Natural Language Instructions to Mobile UI Action Sequences\". The similar accuracy is acquired on this version of the dataset." 
}, { "dkey": "MLe2e", "dval": "MLe2 is a dataset for the evaluation of scene text end-to-end reading systems and all intermediate stages such as text detection, script identification and text recognition. The dataset contains a total of 711 scene images covering four different scripts (Latin, Chinese, Kannada, and Hangul)." }, { "dkey": "RCTW-17", "dval": "Features a large-scale dataset with 12,263 annotated images. Two tasks, namely text localization and end-to-end recognition, are set up. The competition took place from January 20 to May 31, 2017. 23 valid submissions were received from 19 teams." }, { "dkey": "DDD20", "dval": "The dataset was captured with a DAVIS camera that concurrently streams both dynamic vision sensor (DVS) brightness change events and active pixel sensor (APS) intensity frames. DDD20 is the longest event camera end-to-end driving dataset to date with 51h of DAVIS event+frame camera and vehicle human control data collected from 4000km of highway and urban driving under a variety of lighting conditions." } ]
I'm training a question answering system on the SQuAD dataset.
long-form question answering text paragraph-level
2,019
[ "Spoken-SQuAD", "SQuAD", "SQuAD-shifts", "MultiReQA", "TweetQA", "QNLI" ]
[ "ELI5", "WikiSum" ]
[ { "dkey": "ELI5", "dval": "ELI5 is a dataset for long-form question answering. It contains 270K complex, diverse questions that require explanatory multi-sentence answers. Web search results are used as evidence documents to answer each question.\n\nELI5 is also a task in Dodecadialogue." }, { "dkey": "WikiSum", "dval": "WikiSum is a dataset based on English Wikipedia and suitable for a task of multi-document abstractive summarization. In each instance, the input is comprised of a Wikipedia topic (title of article) and a collection of non-Wikipedia reference documents, and the target is the Wikipedia article text. The dataset is restricted to the articles with at least one crawlable citation. The official split divides the articles roughly into 80/10/10 for train/development/test subsets, resulting in 1865750, 233252, and 232998 examples respectively." }, { "dkey": "Spoken-SQuAD", "dval": "In SpokenSQuAD, the document is in spoken form, the input question is in the form of text and the answer to each question is always a span in the document. The following procedures were used to generate spoken documents from the original SQuAD dataset. First, the Google text-to-speech system was used to generate the spoken version of the articles in SQuAD. Then CMU Sphinx was sued to generate the corresponding ASR transcriptions. The SQuAD training set was used to generate the training set of Spoken SQuAD, and SQuAD development set was used to generate the testing set for Spoken SQuAD. If the answer of a question did not exist in the ASR transcriptions of the associated article, the question-answer pair was removed from the dataset because these examples are too difficult for listening comprehension machine at this stage." }, { "dkey": "SQuAD", "dval": "The Stanford Question Answering Dataset (SQuAD) is a collection of question-answer pairs derived from Wikipedia articles. In SQuAD, the correct answers of questions can be any sequence of tokens in the given text. Because the questions and answers are produced by humans through crowdsourcing, it is more diverse than some other question-answering datasets. SQuAD 1.1 contains 107,785 question-answer pairs on 536 articles. SQuAD2.0 (open-domain SQuAD, SQuAD-Open), the latest version, combines the 100,000 questions in SQuAD1.1 with over 50,000 un-answerable questions written adversarially by crowdworkers in forms that are similar to the answerable ones." }, { "dkey": "SQuAD-shifts", "dval": "Provides four new test sets for the Stanford Question Answering Dataset (SQuAD) and evaluate the ability of question-answering systems to generalize to new data." }, { "dkey": "MultiReQA", "dval": "MultiReQA is a cross-domain evaluation for retrieval question answering models. Retrieval question answering (ReQA) is the task of retrieving a sentence-level answer to a question from an open corpus. MultiReQA is a new multi-domain ReQA evaluation suite composed of eight retrieval QA tasks drawn from publicly available QA datasets from the MRQA shared task.\nMultiReQA contains the sentence boundary annotation from eight publicly available QA datasets including SearchQA, TriviaQA, HotpotQA, NaturalQuestions, SQuAD, BioASQ, RelationExtraction, and TextbookQA. Five of these datasets, including SearchQA, TriviaQA, HotpotQA, NaturalQuestions, SQuAD, contain both training and test data, and three, in cluding BioASQ, RelationExtraction, TextbookQA, contain only the test data." 
}, { "dkey": "TweetQA", "dval": "With social media becoming increasingly popular on which lots of news and real-time events are reported, developing automated question answering systems is critical to the effectiveness of many applications that rely on real-time knowledge. While previous question answering (QA) datasets have concentrated on formal text like news and Wikipedia, the first large-scale dataset for QA over social media data is presented. To make sure the tweets are meaningful and contain interesting information, tweets used by journalists to write news articles are gathered. Then human annotators are asked to write questions and answers upon these tweets. Unlike other QA datasets like SQuAD in which the answers are extractive, the answer are allowed to be abstractive. The task requires model to read a short tweet and a question and outputs a text phrase (does not need to be in the tweet) as the answer." }, { "dkey": "QNLI", "dval": "The QNLI (Question-answering NLI) dataset is a Natural Language Inference dataset automatically derived from the Stanford Question Answering Dataset v1.1 (SQuAD). SQuAD v1.1 consists of question-paragraph pairs, where one of the sentences in the paragraph (drawn from Wikipedia) contains the answer to the corresponding question (written by an annotator). The dataset was converted into sentence pair classification by forming a pair between each question and each sentence in the corresponding context, and filtering out pairs with low lexical overlap between the question and the context sentence. The task is to determine whether the context sentence contains the answer to the question. This modified version of the original task removes the requirement that the model select the exact answer, but also removes the simplifying assumptions that the answer is always present in the input and that lexical overlap is a reliable cue. The QNLI dataset is part of GLEU benchmark." } ]
A novel CNN architecture for face detection. The main contribution is a new loss layer for CNNs, which
face detection image
2,016
[ "MMED", "THEODORE", "AFLW2000-3D", "MSU-MFSD", "CNN/Daily Mail", "ReCoRD", "MLSUM" ]
[ "COFW", "AFLW" ]
[ { "dkey": "COFW", "dval": "The Caltech Occluded Faces in the Wild (COFW) dataset is designed to present faces in real-world conditions. Faces show large variations in shape and occlusions due to differences in pose, expression, use of accessories such as sunglasses and hats and interactions with objects (e.g. food, hands, microphones,
etc.). All images were hand annotated using the same 29 landmarks as in LFPW. Both the landmark positions as well as their occluded/unoccluded state were annotated. The faces are occluded to different degrees, with large variations in the type of occlusions encountered. COFW has an average occlusion of over 23." }, { "dkey": "AFLW", "dval": "The Annotated Facial Landmarks in the Wild (AFLW) is a large-scale collection of annotated face images gathered from Flickr, exhibiting a large variety in appearance (e.g., pose, expression, ethnicity, age, gender) as well as general imaging and environmental conditions. In total about 25K faces are annotated with up to 21 landmarks per image." }, { "dkey": "MMED", "dval": "Contains 25,165 textual news articles collected from hundreds of news media sites (e.g., Yahoo News, Google News, CNN News.) and 76,516 image posts shared on Flickr social media, which are annotated according to 412 real-world events. The dataset is collected to explore the problem of organizing heterogeneous data contributed by professionals and amateurs in different data domains, and the problem of transferring event knowledge obtained from one data domain to heterogeneous data domain, thus summarizing the data with different contributors." }, { "dkey": "THEODORE", "dval": "Recent work about synthetic indoor datasets from perspective views has shown significant improvements of object detection results with Convolutional Neural Networks(CNNs). In this paper, we introduce THEODORE: a novel, large-scale indoor dataset containing 100,000 high- resolution diversified fisheye images with 14 classes. To this end, we create 3D virtual environments of living rooms, different human characters and interior textures. Beside capturing fisheye images from virtual environments we create annotations for semantic segmentation, instance masks and bounding boxes for object detection tasks. We compare our synthetic dataset to state of the art real-world datasets for omnidirectional images. Based on MS COCO weights, we show that our dataset is well suited for fine-tuning CNNs for object detection. Through a high generalization of our models by means of image synthesis and domain randomization we reach an AP up to 0.84 for class person on High-Definition Analytics dataset." }, { "dkey": "AFLW2000-3D", "dval": "AFLW2000-3D is a dataset of 2000 images that have been annotated with image-level 68-point 3D facial landmarks. This dataset is used for evaluation of 3D facial landmark detection models. The head poses are very diverse and often hard to be detected by a CNN-based face detector." }, { "dkey": "MSU-MFSD", "dval": "The MSU-MFSD dataset contains 280 video recordings of genuine and attack faces. 35 individuals have participated in the development of this database with a total of 280 videos. Two kinds of cameras with different resolutions (720×480 and 640×480) were used to record the videos from the 35 individuals. For the real accesses, each individual has two video recordings captured with the Laptop cameras and Android, respectively. For the video attacks, two types of cameras, the iPhone and Canon cameras were used to capture high definition videos on each of the subject. The videos taken with Canon camera were then replayed on iPad Air screen to generate the HD replay attacks while the videos recorded by the iPhone mobile were replayed itself to generate the mobile replay attacks. Photo attacks were produced by printing the 35 subjects’ photos on A3 papers using HP colour printer. 
The recording videos with respect to the 35 individuals were divided into training (15 subjects with 120 videos) and testing (40 subjects with 160 videos) datasets, respectively." }, { "dkey": "CNN/Daily Mail", "dval": "CNN/Daily Mail is a dataset for text summarization. Human generated abstractive summary bullets were generated from news stories in CNN and Daily Mail websites as questions (with one of the entities hidden), and stories as the corresponding passages from which the system is expected to answer the fill-in the-blank question. The authors released the scripts that crawl, extract and generate pairs of passages and questions from these websites.\n\nIn all, the corpus has 286,817 training pairs, 13,368 validation pairs and 11,487 test pairs, as defined by their scripts. The source documents in the training set have 766 words spanning 29.74 sentences on an average while the summaries consist of 53 words and 3.72 sentences." }, { "dkey": "ReCoRD", "dval": "Reading Comprehension with Commonsense Reasoning Dataset (ReCoRD) is a large-scale reading comprehension dataset which requires commonsense reasoning. ReCoRD consists of queries automatically generated from CNN/Daily Mail news articles; the answer to each query is a text span from a summarizing passage of the corresponding news. The goal of ReCoRD is to evaluate a machine's ability of commonsense reasoning in reading comprehension. ReCoRD is pronounced as [ˈrɛkərd]." }, { "dkey": "MLSUM", "dval": "A large-scale MultiLingual SUMmarization dataset. Obtained from online newspapers, it contains 1.5M+ article/summary pairs in five different languages -- namely, French, German, Spanish, Russian, Turkish. Together with English newspapers from the popular CNN/Daily mail dataset, the collected data form a large scale multilingual dataset which can enable new research directions for the text summarization community." } ]
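Landmark datasets such as COFW and AFLW, cited in this record, are commonly evaluated with the normalized mean error (NME). A small sketch follows; the eye-corner indices used for inter-ocular normalisation depend on the annotation scheme, so the defaults below are placeholders.

```python
import numpy as np

def nme(pred, gt, left_eye_idx=0, right_eye_idx=1):
    # pred, gt: arrays of shape (num_landmarks, 2); returns a scalar error
    # normalised by the inter-ocular distance of the ground truth.
    pred, gt = np.asarray(pred, float), np.asarray(gt, float)
    interocular = np.linalg.norm(gt[left_eye_idx] - gt[right_eye_idx])
    per_point = np.linalg.norm(pred - gt, axis=1)
    return float(per_point.mean() / interocular)
```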
The proposed attention-based adversarial defense framework consists of a two-stage pipeline. The first stage is designed
adversarial defense images
2,018
[ "AnimalWeb", "Raindrop", "DramaQA", "ECSSD", "Fakeddit", "WinoGrande", "Spoken-SQuAD" ]
[ "ImageNet", "CIFAR-10" ]
[ { "dkey": "ImageNet", "dval": "The ImageNet dataset contains 14,197,122 annotated images according to the WordNet hierarchy. Since 2010 the dataset is used in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), a benchmark in image classification and object detection.\nThe publicly released dataset contains a set of manually annotated training images. A set of test images is also released, with the manual annotations withheld.\nILSVRC annotations fall into one of two categories: (1) image-level annotation of a binary label for the presence or absence of an object class in the image, e.g., “there are cars in this image” but “there are no tigers,” and (2) object-level annotation of a tight bounding box and class label around an object instance in the image, e.g., “there is a screwdriver centered at position (20,25) with width of 50 pixels and height of 30 pixels”.\nThe ImageNet project does not own the copyright of the images, therefore only thumbnails and URLs of images are provided.\n\n\nTotal number of non-empty WordNet synsets: 21841\nTotal number of images: 14197122\nNumber of images with bounding box annotations: 1,034,908\nNumber of synsets with SIFT features: 1000\nNumber of images with SIFT features: 1.2 million" }, { "dkey": "CIFAR-10", "dval": "The CIFAR-10 dataset (Canadian Institute for Advanced Research, 10 classes) is a subset of the Tiny Images dataset and consists of 60000 32x32 color images. The images are labelled with one of 10 mutually exclusive classes: airplane, automobile (but not truck or pickup truck), bird, cat, deer, dog, frog, horse, ship, and truck (but not pickup truck). There are 6000 images per class with 5000 training and 1000 testing images per class.\n\nThe criteria for deciding whether an image belongs to a class were as follows:\n\n\nThe class name should be high on the list of likely answers to the question “What is in this picture?”\nThe image should be photo-realistic. Labelers were instructed to reject line drawings.\nThe image should contain only one prominent instance of the object to which the class refers.\nThe object may be partially occluded or seen from an unusual viewpoint as long as its identity is still clear to the labeler." }, { "dkey": "AnimalWeb", "dval": "A large-scale, hierarchical annotated dataset of animal faces, featuring 21.9K faces from 334 diverse species and 21 animal orders across biological taxonomy. These faces are captured `in-the-wild' conditions and are consistently annotated with 9 landmarks on key facial features. The proposed dataset is structured and scalable by design; its development underwent four systematic stages involving rigorous, manual annotation effort of over 6K man-hours." }, { "dkey": "Raindrop", "dval": "Raindrop is a set of image pairs, where\neach pair contains exactly the same background scene, yet\none is degraded by raindrops and the other one is free from\nraindrops. To obtain this, the images are captured through two pieces of exactly the\nsame glass: one sprayed with water, and the other is left\nclean. The dataset consists of 1,119 pairs of images, with various\nbackground scenes and raindrops. They were captured with a Sony A6000\nand a Canon EOS 60." }, { "dkey": "DramaQA", "dval": "The DramaQA focuses on two perspectives: 1) Hierarchical QAs as an evaluation metric based on the cognitive developmental stages of human intelligence. 2) Character-centered video annotations to model local coherence of the story. 
The dataset is built upon the TV drama \"Another Miss Oh\" and it contains 17,983 QA pairs from 23,928 various length video clips, with each QA pair belonging to one of four difficulty levels." }, { "dkey": "ECSSD", "dval": "The Extended Complex Scene Saliency Dataset (ECSSD) is comprised of complex scenes, presenting textures and structures common to real-world images. ECSSD contains 1,000 intricate images and respective ground-truth saliency maps, created as an average of the labeling of five human participants." }, { "dkey": "Fakeddit", "dval": "Fakeddit is a novel multimodal dataset for fake news detection consisting of over 1 million samples from multiple categories of fake news. After being processed through several stages of review, the samples are labeled according to 2-way, 3-way, and 6-way classification categories through distant supervision." }, { "dkey": "WinoGrande", "dval": "WinoGrande is a large-scale dataset of 44k problems, inspired by the original WSC design, but adjusted to improve both the scale and the hardness of the dataset. The key steps of the dataset construction consist of (1) a carefully designed crowdsourcing procedure, followed by (2) systematic bias reduction using a novel AfLite algorithm that generalizes human-detectable word associations to machine-detectable embedding associations." }, { "dkey": "Spoken-SQuAD", "dval": "In SpokenSQuAD, the document is in spoken form, the input question is in the form of text and the answer to each question is always a span in the document. The following procedures were used to generate spoken documents from the original SQuAD dataset. First, the Google text-to-speech system was used to generate the spoken version of the articles in SQuAD. Then CMU Sphinx was sued to generate the corresponding ASR transcriptions. The SQuAD training set was used to generate the training set of Spoken SQuAD, and SQuAD development set was used to generate the testing set for Spoken SQuAD. If the answer of a question did not exist in the ASR transcriptions of the associated article, the question-answer pair was removed from the dataset because these examples are too difficult for listening comprehension machine at this stage." } ]
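Adversarial-defense work on CIFAR-10 and ImageNet, as in this record, is typically stress-tested with gradient-based attacks. The sketch below is a standard FGSM step in PyTorch, assuming image inputs scaled to [0, 1]; it is a generic probe, not the two-stage defense the record describes.

```python
import torch

def fgsm_attack(model, images, labels, epsilon=8 / 255):
    # Single-step FGSM: perturb inputs along the sign of the loss gradient.
    images = images.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(images), labels)
    loss.backward()
    adv = images + epsilon * images.grad.sign()
    return adv.clamp(0.0, 1.0).detach()
```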
I want to use a CNN-based tracking model.
visual tracking video
2,019
[ "SNIPS", "LAG", "AFLW2000-3D", "DiCOVA", "ConvAI2" ]
[ "OTB", "VOT2017" ]
[ { "dkey": "OTB", "dval": "Object Tracking Benchmark (OTB) is a visual tracking benchmark that is widely used to evaluate the performance of a visual tracking algorithm. The dataset contains a total of 100 sequences and each is annotated frame-by-frame with bounding boxes and 11 challenge attributes. OTB-2013 dataset contains 51 sequences and the OTB-2015 dataset contains all 100 sequences of the OTB dataset." }, { "dkey": "VOT2017", "dval": "VOT2017 is a Visual Object Tracking dataset for different tasks that contains 60 short sequences annotated with 6 different attributes." }, { "dkey": "SNIPS", "dval": "The SNIPS Natural Language Understanding benchmark is a dataset of over 16,000 crowdsourced queries distributed among 7 user intents of various complexity:\n\n\nSearchCreativeWork (e.g. Find me the I, Robot television show),\nGetWeather (e.g. Is it windy in Boston, MA right now?),\nBookRestaurant (e.g. I want to book a highly rated restaurant in Paris tomorrow night),\nPlayMusic (e.g. Play the last track from Beyoncé off Spotify),\nAddToPlaylist (e.g. Add Diamonds to my roadtrip playlist),\nRateBook (e.g. Give 6 stars to Of Mice and Men),\nSearchScreeningEvent (e.g. Check the showtimes for Wonder Woman in Paris).\nThe training set contains of 13,084 utterances, the validation set and the test set contain 700 utterances each, with 100 queries per intent." }, { "dkey": "LAG", "dval": "Includes 5,824 fundus images labeled with either positive glaucoma (2,392) or negative glaucoma (3,432)." }, { "dkey": "AFLW2000-3D", "dval": "AFLW2000-3D is a dataset of 2000 images that have been annotated with image-level 68-point 3D facial landmarks. This dataset is used for evaluation of 3D facial landmark detection models. The head poses are very diverse and often hard to be detected by a CNN-based face detector." }, { "dkey": "DiCOVA", "dval": "The DiCOVA Challenge dataset is derived from the Coswara dataset, a crowd-sourced dataset of sound recordings from COVID-19 positive and non-COVID-19 individuals. The Coswara data is collected using a web-application2, launched in April-2020, accessible through the internet by anyone around the globe. The volunteering subjects are advised to record their respiratory sounds in a quiet environment. \n\nEach subject provides 9 audio recordings, namely, (a) shallow and deep breathing (2 nos.), (b) shallow and heavy cough (2 nos.), (c) sustained phonation of vowels [æ] (as in bat), [i] (as in beet), and [u] (as in boot) (3 nos.), and (d) fast and normal pace 1 to 20 number counting (2 nos.). \n\nThe DiCOVA Challenge has two tracks. The participants also provided metadata corresponding to their current health status (includes COVID19 status, any other respiratory ailments, and symptoms), demographic information, age and gender. From this Coswara dataset, two datasets have been created: \n\n(a) Track-1 dataset: composed of cough sound recordings. It t is composed of cough audio data from 1040 subjects.\n(b) Track-2 dataset: composed of deep breathing, vowel [i], and number counting (normal pace) speech recordings. It is composed of audio data from 1199 subjects." }, { "dkey": "ConvAI2", "dval": "The ConvAI2 NeurIPS competition aimed at finding approaches to creating high-quality dialogue agents capable of meaningful open domain conversation. The ConvAI2 dataset for training models is based on the PERSONA-CHAT dataset. 
The speaker pairs each have assigned profiles coming from a set of 1155 possible personas (at training time), each consisting of at least 5 profile sentences, setting aside 100 never seen before personas for validation. As the original PERSONA-CHAT test set was released, a new hidden test set consisted of 100 new personas and over 1,015 dialogs was created by crowdsourced workers.\n\nTo avoid modeling that takes advantage of trivial word overlap, additional rewritten sets of the same train and test personas were crowdsourced, with related sentences that are rephrases, generalizations or specializations, rendering the task much more challenging. For example “I just got my nails done” is revised as “I love to pamper myself on a regular basis” and “I am on a diet now” is revised as “I need to lose weight.”\n\nThe training, validation and hidden test sets consists of 17,878, 1,000 and 1,015 dialogues, respectively." } ]
Instance mask projection is an end-to-end trainable operator that projects instance masks.
semantic segmentation images top-view grid map sequences autonomous driving
2,019
[ "WikiReading", "KnowledgeNet", "THEODORE", "PKU-MMD", "LSHTC", "SOBA", "ISBDA" ]
[ "COCO", "Cityscapes" ]
[ { "dkey": "COCO", "dval": "The MS COCO (Microsoft Common Objects in Context) dataset is a large-scale object detection, segmentation, key-point detection, and captioning dataset. The dataset consists of 328K images.\n\nSplits:\nThe first version of MS COCO dataset was released in 2014. It contains 164K images split into training (83K), validation (41K) and test (41K) sets. In 2015 additional test set of 81K images was released, including all the previous test images and 40K new images.\n\nBased on community feedback, in 2017 the training/validation split was changed from 83K/41K to 118K/5K. The new split uses the same images and annotations. The 2017 test set is a subset of 41K images of the 2015 test set. Additionally, the 2017 release contains a new unannotated dataset of 123K images.\n\nAnnotations:\nThe dataset has annotations for\n\n\nobject detection: bounding boxes and per-instance segmentation masks with 80 object categories,\ncaptioning: natural language descriptions of the images (see MS COCO Captions),\nkeypoints detection: containing more than 200,000 images and 250,000 person instances labeled with keypoints (17 possible keypoints, such as left eye, nose, right hip, right ankle),\nstuff image segmentation – per-pixel segmentation masks with 91 stuff categories, such as grass, wall, sky (see MS COCO Stuff),\npanoptic: full scene segmentation, with 80 thing categories (such as person, bicycle, elephant) and a subset of 91 stuff categories (grass, sky, road),\ndense pose: more than 39,000 images and 56,000 person instances labeled with DensePose annotations – each labeled person is annotated with an instance id and a mapping between image pixels that belong to that person body and a template 3D model.\nThe annotations are publicly available only for training and validation images." }, { "dkey": "Cityscapes", "dval": "Cityscapes is a large-scale database which focuses on semantic understanding of urban street scenes. It provides semantic, instance-wise, and dense pixel annotations for 30 classes grouped into 8 categories (flat surfaces, humans, vehicles, constructions, objects, nature, sky, and void). The dataset consists of around 5000 fine annotated images and 20000 coarse annotated ones. Data was captured in 50 cities during several months, daytimes, and good weather conditions. It was originally recorded as video so the frames were manually selected to have the following features: large number of dynamic objects, varying scene layout, and varying background." }, { "dkey": "WikiReading", "dval": "WikiReading is a large-scale natural language understanding task and publicly-available dataset with 18 million instances. The task is to predict textual values from the structured knowledge base Wikidata by reading the text of the corresponding Wikipedia articles. The task contains a rich variety of challenging classification and extraction sub-tasks, making it well-suited for end-to-end models such as deep neural networks (DNNs)." }, { "dkey": "KnowledgeNet", "dval": "KnowledgeNet is a benchmark dataset for the task of automatically populating a knowledge base (Wikidata) with facts expressed in natural language text on the web. 
KnowledgeNet provides text exhaustively annotated with facts, thus enabling the holistic end-to-end evaluation of knowledge base population systems as a whole, unlike previous benchmarks that are more suitable for the evaluation of individual subcomponents (e.g., entity linking, relation extraction).\n\nFor instance, the dataset contains text expressing the fact (Gennaro Basile; RESIDENCE; Moravia), in the passage: \"Gennaro Basile was an Italian painter, born in Naples but active in the German-speaking countries. He settled at Brünn, in Moravia, and lived about 1756...\"" }, { "dkey": "THEODORE", "dval": "Recent work about synthetic indoor datasets from perspective views has shown significant improvements of object detection results with Convolutional Neural Networks(CNNs). In this paper, we introduce THEODORE: a novel, large-scale indoor dataset containing 100,000 high- resolution diversified fisheye images with 14 classes. To this end, we create 3D virtual environments of living rooms, different human characters and interior textures. Beside capturing fisheye images from virtual environments we create annotations for semantic segmentation, instance masks and bounding boxes for object detection tasks. We compare our synthetic dataset to state of the art real-world datasets for omnidirectional images. Based on MS COCO weights, we show that our dataset is well suited for fine-tuning CNNs for object detection. Through a high generalization of our models by means of image synthesis and domain randomization we reach an AP up to 0.84 for class person on High-Definition Analytics dataset." }, { "dkey": "PKU-MMD", "dval": "The PKU-MMD dataset is a large skeleton-based action detection dataset. It contains 1076 long untrimmed video sequences performed by 66 subjects in three camera views. 51 action categories are annotated, resulting almost 20,000 action instances and 5.4 million frames in total. Similar to NTU RGB+D, there are also two recommended evaluate protocols, i.e. cross-subject and cross-view." }, { "dkey": "LSHTC", "dval": "LSHTC is a dataset for large-scale text classification. The data used in the LSHTC challenges originates from two popular sources: the DBpedia and the ODP (Open Directory Project) directory, also known as DMOZ. DBpedia instances were selected from the english, non-regional Extended Abstracts provided by the DBpedia site. The DMOZ instances consist\nof either Content vectors, Description vectors or both. A Content vectors is obtained by directly indexing the web page using standard indexing chain (preprocessing, stemming/lemmatization, stop-word removal)." }, { "dkey": "SOBA", "dval": "A new dataset called SOBA, named after Shadow-OBject Association, with 3,623 pairs of shadow and object instances in 1,000 photos, each with individual labeled masks." }, { "dkey": "ISBDA", "dval": "Consists of user-generated aerial videos from social media with annotations of instance-level building damage masks. This provides the first benchmark for quantitative evaluation of models to assess building damage using aerial videos." } ]
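The record above concerns projecting instance masks into a semantic segmentation map. As a generic illustration of that projection (not the paper's trainable operator), the sketch below pastes per-instance boolean masks into a single label map, with later instances overwriting earlier ones.

```python
import numpy as np

def project_instances(masks, class_ids, height, width, background=0):
    # masks: (N, H, W) boolean array; class_ids: length-N iterable of ints.
    semantic = np.full((height, width), background, dtype=np.int32)
    for mask, cls in zip(masks, class_ids):
        semantic[mask.astype(bool)] = cls
    return semantic
```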
I want to train a model to answer questions from text.
question answering text
2,018
[ "TextVQA", "RecipeQA", "CommonsenseQA", "BREAK", "TrecQA", "Spoken-SQuAD" ]
[ "WebQuestions", "SQuAD", "TriviaQA" ]
[ { "dkey": "WebQuestions", "dval": "The WebQuestions dataset is a question answering dataset using Freebase as the knowledge base and contains 6,642 question-answer pairs. It was created by crawling questions through the Google Suggest API, and then obtaining answers using Amazon Mechanical Turk. The original split uses 3,778 examples for training and 2,032 for testing. All answers are defined as Freebase entities.\n\nExample questions (answers) in the dataset include “Where did Edgar Allan Poe died?” (baltimore) or “What degrees did Barack Obama get?” (bachelor_of_arts, juris_doctor)." }, { "dkey": "SQuAD", "dval": "The Stanford Question Answering Dataset (SQuAD) is a collection of question-answer pairs derived from Wikipedia articles. In SQuAD, the correct answers of questions can be any sequence of tokens in the given text. Because the questions and answers are produced by humans through crowdsourcing, it is more diverse than some other question-answering datasets. SQuAD 1.1 contains 107,785 question-answer pairs on 536 articles. SQuAD2.0 (open-domain SQuAD, SQuAD-Open), the latest version, combines the 100,000 questions in SQuAD1.1 with over 50,000 un-answerable questions written adversarially by crowdworkers in forms that are similar to the answerable ones." }, { "dkey": "TriviaQA", "dval": "TriviaQA is a realistic text-based question answering dataset which includes 950K question-answer pairs from 662K documents collected from Wikipedia and the web. This dataset is more challenging than standard QA benchmark datasets such as Stanford Question Answering Dataset (SQuAD), as the answers for a question may not be directly obtained by span prediction and the context is very long. TriviaQA dataset consists of both human-verified and machine-generated QA subsets." }, { "dkey": "TextVQA", "dval": "TextVQA is a dataset to benchmark visual reasoning based on text in images.\nTextVQA requires models to read and reason about text in images to answer questions about them. Specifically, models need to incorporate a new modality of text present in the images and reason over it to answer TextVQA questions.\n\nStatistics\n* 28,408 images from OpenImages\n* 45,336 questions\n* 453,360 ground truth answers" }, { "dkey": "RecipeQA", "dval": "RecipeQA is a dataset for multimodal comprehension of cooking recipes. It consists of over 36K question-answer pairs automatically generated from approximately 20K unique recipes with step-by-step instructions and images. Each question in RecipeQA involves multiple modalities such as titles, descriptions or images, and working towards an answer requires (i) joint understanding of images and text, (ii) capturing the temporal flow of events, and (iii) making sense of procedural knowledge." }, { "dkey": "CommonsenseQA", "dval": "The CommonsenseQA is a dataset for commonsense question answering task. 
The dataset consists of 12,247 questions with 5 choices each.\nThe dataset was generated by Amazon Mechanical Turk workers in the following process (an example is provided in parentheses):\n\n\na crowd worker observes a source concept from ConceptNet (“River”) and three target concepts (“Waterfall”, “Bridge”, “Valley”) that are all related by the same ConceptNet relation (“AtLocation”),\nthe worker authors three questions, one per target concept, such that only that particular target concept is the answer, while the other two distractor concepts are not, (“Where on a river can you hold a cup upright to catch water on a sunny day?”, “Where can I stand on a river to see water falling without getting wet?”, “I’m crossing the river, my feet are wet but my body is dry, where am I?”)\nfor each question, another worker chooses one additional distractor from Concept Net (“pebble”, “stream”, “bank”), and the author another distractor (“mountain”, “bottom”, “island”) manually." }, { "dkey": "BREAK", "dval": "Break is a question understanding dataset, aimed at training models to reason over complex questions. It features 83,978 natural language questions, annotated with a new meaning representation, Question Decomposition Meaning Representation (QDMR). Each example has the natural question along with its QDMR representation. Break contains human composed questions, sampled from 10 leading question-answering benchmarks over text, images and databases. This dataset was created by a team of NLP researchers at Tel Aviv University and Allen Institute for AI." }, { "dkey": "TrecQA", "dval": "Text Retrieval Conference Question Answering (TrecQA) is a dataset created from the TREC-8 (1999) to TREC-13 (2004) Question Answering tracks. There are two versions of TrecQA: raw and clean. Both versions have the same training set but their development and test sets differ. The commonly used clean version of the dataset excludes questions in development and test sets with no answers or only positive/negative answers. The clean version has 1,229/65/68 questions and 53,417/1,117/1,442 question-answer pairs for the train/dev/test split." }, { "dkey": "Spoken-SQuAD", "dval": "In SpokenSQuAD, the document is in spoken form, the input question is in the form of text and the answer to each question is always a span in the document. The following procedures were used to generate spoken documents from the original SQuAD dataset. First, the Google text-to-speech system was used to generate the spoken version of the articles in SQuAD. Then CMU Sphinx was sued to generate the corresponding ASR transcriptions. The SQuAD training set was used to generate the training set of Spoken SQuAD, and SQuAD development set was used to generate the testing set for Spoken SQuAD. If the answer of a question did not exist in the ASR transcriptions of the associated article, the question-answer pair was removed from the dataset because these examples are too difficult for listening comprehension machine at this stage." } ]
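To make these flattened records easier to work with downstream, here is a minimal Python sketch that parses one record and checks that every labelled candidate comes with an abstract. The outer field names used here (query, year, relevant, irrelevant, abstracts) are placeholders invented for this example and are not claimed to be the corpus's official column names; only the dkey/dval abstract entries mirror what appears literally in the preview above, and the toy abstracts list is truncated, so the warnings it prints are expected.

```python
import json

# Illustrative record mirroring the structure visible in this preview:
# a free-text query, a year, two lists of candidate dataset names, and a list
# of {"dkey": name, "dval": description} abstract entries.
record_json = """
{
  "query": "I want to train a model to answer questions from text.",
  "year": 2018,
  "relevant": ["WebQuestions", "SQuAD", "TriviaQA"],
  "irrelevant": ["TextVQA", "RecipeQA", "CommonsenseQA", "BREAK", "TrecQA", "Spoken-SQuAD"],
  "abstracts": [
    {"dkey": "SQuAD", "dval": "The Stanford Question Answering Dataset ..."},
    {"dkey": "TriviaQA", "dval": "TriviaQA is a realistic text-based question answering dataset ..."}
  ]
}
"""

record = json.loads(record_json)

# Sanity check: every labelled candidate should come with an abstract.
# (In this toy record most abstracts are omitted, so warnings are expected.)
described = {entry["dkey"] for entry in record["abstracts"]}
for name in record["relevant"] + record["irrelevant"]:
    if name not in described:
        print(f"warning: no abstract found for candidate {name!r}")

print(record["query"], "->", record["relevant"])
```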
3D object recognition is an important component of many vision and robotics systems.
3d object recognition voxels pixels
2,016
[ "OCID", "3DNet", "Flightmare Simulator", "HoME" ]
[ "ImageNet", "ModelNet" ]
[ { "dkey": "ImageNet", "dval": "The ImageNet dataset contains 14,197,122 annotated images according to the WordNet hierarchy. Since 2010 the dataset is used in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), a benchmark in image classification and object detection.\nThe publicly released dataset contains a set of manually annotated training images. A set of test images is also released, with the manual annotations withheld.\nILSVRC annotations fall into one of two categories: (1) image-level annotation of a binary label for the presence or absence of an object class in the image, e.g., “there are cars in this image” but “there are no tigers,” and (2) object-level annotation of a tight bounding box and class label around an object instance in the image, e.g., “there is a screwdriver centered at position (20,25) with width of 50 pixels and height of 30 pixels”.\nThe ImageNet project does not own the copyright of the images, therefore only thumbnails and URLs of images are provided.\n\n\nTotal number of non-empty WordNet synsets: 21841\nTotal number of images: 14197122\nNumber of images with bounding box annotations: 1,034,908\nNumber of synsets with SIFT features: 1000\nNumber of images with SIFT features: 1.2 million" }, { "dkey": "ModelNet", "dval": "The ModelNet40 dataset contains synthetic object point clouds. As the most widely used benchmark for point cloud analysis, ModelNet40 is popular because of its various categories, clean shapes, well-constructed dataset, etc. The original ModelNet40 consists of 12,311 CAD-generated meshes in 40 categories (such as airplane, car, plant, lamp), of which 9,843 are used for training while the rest 2,468 are reserved for testing. The corresponding point cloud data points are uniformly sampled from the mesh surfaces, and then further preprocessed by moving to the origin and scaling into a unit sphere." }, { "dkey": "OCID", "dval": "Developing robot perception systems for handling objects in the real-world requires computer vision algorithms to be carefully scrutinized with respect to the expected operating domain. This demands large quantities of ground truth data to rigorously evaluate the performance of algorithms.\n\nThe Object Cluttered Indoor Dataset is an RGBD-dataset containing point-wise labeled point-clouds for each object. The data was captured using two ASUS-PRO Xtion cameras that are positioned at different heights. It captures diverse settings of objects, background, context, sensor to scene distance, viewpoint angle and lighting conditions. The main purpose of OCID is to allow systematic comparison of existing object segmentation methods in scenes with increasing amount of clutter. In addition OCID does also provide ground-truth data for other vision tasks like object-classification and recognition." }, { "dkey": "3DNet", "dval": "The 3DNet dataset is a free resource for object class recognition and 6DOF pose estimation from point cloud data. 3DNet provides a large-scale hierarchical CAD-model databases with increasing numbers of classes and difficulty with 10, 60 and 200 object classes together with evaluation datasets that contain thousands of scenes captured with an RGB-D sensor." }, { "dkey": "Flightmare Simulator", "dval": "Flightmare is composed of two main components: a configurable rendering engine built on Unity and a flexible physics engine for dynamics simulation. Those two components are totally decoupled and can run independently from each other. 
Flightmare comes with several desirable features: (i) a large multi-modal sensor suite, including an interface to extract the 3D point-cloud of the scene; (ii) an API for reinforcement learning which can simulate hundreds of quadrotors in parallel; and (iii) an integration with a virtual-reality headset for interaction with the simulated environment. Flightmare can be used for various applications, including path-planning, reinforcement learning, visual-inertial odometry, deep learning, human-robot interaction, etc." }, { "dkey": "HoME", "dval": "HoME (Household Multimodal Environment) is a multimodal environment for artificial agents to learn from vision, audio, semantics, physics, and interaction with objects and other agents, all within a realistic context. HoME integrates over 45,000 diverse 3D house layouts based on the SUNCG dataset, a scale which may facilitate learning, generalization, and transfer. HoME is an open-source, OpenAI Gym-compatible platform extensible to tasks in reinforcement learning, language grounding, sound-based navigation, robotics, multi-agent learning, and more." } ]
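The ModelNet entry above notes that the sampled point clouds are preprocessed by moving them to the origin and scaling them into a unit sphere. As a rough illustration of that step (a common way to implement it, not the official ModelNet tooling), here is a NumPy sketch:

```python
import numpy as np

def normalize_to_unit_sphere(points: np.ndarray) -> np.ndarray:
    """Center a point cloud at the origin and scale it into the unit sphere.

    points: (N, 3) array of XYZ coordinates sampled from a mesh surface.
    """
    centered = points - points.mean(axis=0)            # move the centroid to the origin
    radius = np.linalg.norm(centered, axis=1).max()     # distance of the farthest point
    return centered / radius                            # all points now lie within radius 1

# Toy usage with a random cloud standing in for a sampled CAD model.
cloud = np.random.rand(2048, 3) * 10.0
unit_cloud = normalize_to_unit_sphere(cloud)
assert np.linalg.norm(unit_cloud, axis=1).max() <= 1.0 + 1e-6
```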
I want to detect facial landmarks and components simultaneously.
landmark-region-based facial detection images
2,019
[ "AFLW2000-3D", "300-VW", "LS3D-W", "AFLW", "AffectNet" ]
[ "Helen", "AFW" ]
[ { "dkey": "Helen", "dval": "The HELEN dataset is composed of 2330 face images of 400×400 pixels with labeled facial components generated through manually-annotated contours along eyes, eyebrows, nose, lips and jawline." }, { "dkey": "AFW", "dval": "AFW (Annotated Faces in the Wild) is a face detection dataset that contains 205 images with 468 faces. Each face image is labeled with at most 6 landmarks with visibility labels, as well as a bounding box." }, { "dkey": "AFLW2000-3D", "dval": "AFLW2000-3D is a dataset of 2000 images that have been annotated with image-level 68-point 3D facial landmarks. This dataset is used for evaluation of 3D facial landmark detection models. The head poses are very diverse and often hard to be detected by a CNN-based face detector." }, { "dkey": "300-VW", "dval": "300 Videos in the Wild (300-VW) is a dataset for evaluating facial landmark tracking algorithms in the wild. The dataset authors collected a large number of long facial videos recorded in the wild. Each video has duration of ~1 minute (at 25-30 fps). All frames have been annotated with regards to the same mark-up (i.e. set of facial landmarks) used in the 300 W competition as well (a total of 68 landmarks). The dataset includes 114 videos (circa 1 min each)." }, { "dkey": "LS3D-W", "dval": "A 3D facial landmark dataset of around 230,000 images." }, { "dkey": "AFLW", "dval": "The Annotated Facial Landmarks in the Wild (AFLW) is a large-scale collection of annotated face images gathered from Flickr, exhibiting a large variety in appearance (e.g., pose, expression, ethnicity, age, gender) as well as general imaging and environmental conditions. In total about 25K faces are annotated with up to 21 landmarks per image." }, { "dkey": "AffectNet", "dval": "AffectNet is a large facial expression dataset with around 0.4 million images manually labeled for the presence of eight (neutral, happy, angry, sad, fear, surprise, disgust, contempt) facial expressions along with the intensity of valence and arousal." } ]
I would like to implement the Neural Architecture Search (NAS) approach and apply it
neural architecture search
2,019
[ "NAS-Bench-201", "NAS-Bench-101", "NATS-Bench", "NAS-Bench-1Shot1", "30MQA" ]
[ "ImageNet", "Caltech-101", "CIFAR-10" ]
[ { "dkey": "ImageNet", "dval": "The ImageNet dataset contains 14,197,122 annotated images according to the WordNet hierarchy. Since 2010 the dataset is used in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), a benchmark in image classification and object detection.\nThe publicly released dataset contains a set of manually annotated training images. A set of test images is also released, with the manual annotations withheld.\nILSVRC annotations fall into one of two categories: (1) image-level annotation of a binary label for the presence or absence of an object class in the image, e.g., “there are cars in this image” but “there are no tigers,” and (2) object-level annotation of a tight bounding box and class label around an object instance in the image, e.g., “there is a screwdriver centered at position (20,25) with width of 50 pixels and height of 30 pixels”.\nThe ImageNet project does not own the copyright of the images, therefore only thumbnails and URLs of images are provided.\n\n\nTotal number of non-empty WordNet synsets: 21841\nTotal number of images: 14197122\nNumber of images with bounding box annotations: 1,034,908\nNumber of synsets with SIFT features: 1000\nNumber of images with SIFT features: 1.2 million" }, { "dkey": "Caltech-101", "dval": "The Caltech101 dataset contains images from 101 object categories (e.g., “helicopter”, “elephant” and “chair” etc.) and a background category that contains the images not from the 101 object categories. For each object category, there are about 40 to 800 images, while most classes have about 50 images. The resolution of the image is roughly about 300×200 pixels." }, { "dkey": "CIFAR-10", "dval": "The CIFAR-10 dataset (Canadian Institute for Advanced Research, 10 classes) is a subset of the Tiny Images dataset and consists of 60000 32x32 color images. The images are labelled with one of 10 mutually exclusive classes: airplane, automobile (but not truck or pickup truck), bird, cat, deer, dog, frog, horse, ship, and truck (but not pickup truck). There are 6000 images per class with 5000 training and 1000 testing images per class.\n\nThe criteria for deciding whether an image belongs to a class were as follows:\n\n\nThe class name should be high on the list of likely answers to the question “What is in this picture?”\nThe image should be photo-realistic. Labelers were instructed to reject line drawings.\nThe image should contain only one prominent instance of the object to which the class refers.\nThe object may be partially occluded or seen from an unusual viewpoint as long as its identity is still clear to the labeler." }, { "dkey": "NAS-Bench-201", "dval": "NAS-Bench-201 is a benchmark (and search space) for neural architecture search. Each architecture consists of a predefined skeleton with a stack of the searched cell. In this way, architecture search is transformed into the problem of searching a good cell." }, { "dkey": "NAS-Bench-101", "dval": "NAS-Bench-101 is the first public architecture dataset for NAS research. To build NASBench-101, the authors carefully constructed a compact, yet expressive, search space, exploiting graph isomorphisms to identify 423k unique convolutional\narchitectures. The authors trained and evaluated all of these architectures multiple times on CIFAR-10 and compiled the results into a large dataset of over 5 million trained models. This allows researchers to evaluate the quality of a diverse range of models in milliseconds by querying the precomputed dataset." 
}, { "dkey": "NATS-Bench", "dval": "A unified benchmark on searching for both topology and size, for (almost) any up-to-date NAS algorithm. NATS-Bench includes the search space of 15,625 neural cell candidates for architecture topology and 32,768 for architecture size on three datasets." }, { "dkey": "NAS-Bench-1Shot1", "dval": "NAS-Bench-1Shot1 draws on the recent large-scale tabular benchmark NAS-Bench-101 for cheap anytime evaluations of one-shot NAS methods." }, { "dkey": "30MQA", "dval": "An enormous question answer pair corpus produced by applying a novel neural network architecture on the knowledge base Freebase to transduce facts into natural language questions." } ]
Multi-task learning for face attribute recognition.
multi-attribute face recognition images
2,018
[ "PETA", "SUN Attribute", "PA-HMDB51", "CompCars", "IJB-A", "HVU" ]
[ "Adience", "CelebA" ]
[ { "dkey": "Adience", "dval": "The Adience dataset, published in 2014, contains 26,580 photos across 2,284 subjects with a binary gender label and one label from eight different age groups, partitioned into five splits. The key principle of the data set is to capture the images as close to real world conditions as possible, including all variations in appearance, pose, lighting condition and image quality, to name a few." }, { "dkey": "CelebA", "dval": "CelebFaces Attributes dataset contains 202,599 face images of the size 178×218 from 10,177 celebrities, each annotated with 40 binary labels indicating facial attributes like hair color, gender and age." }, { "dkey": "PETA", "dval": "The PEdesTrian Attribute dataset (PETA) is a dataset fore recognizing pedestrian attributes, such as gender and clothing style, at a far distance. It is of interest in video surveillance scenarios where face and body close-shots and hardly available. It consists of 19,000 pedestrian images with 65 attributes (61 binary and 4 multi-class). Those images contain 8705 persons." }, { "dkey": "SUN Attribute", "dval": "The SUN Attribute dataset consists of 14,340 images from 717 scene categories, and each category is annotated with a taxonomy of 102 discriminate attributes. The dataset can be used for high-level scene understanding and fine-grained scene recognition." }, { "dkey": "PA-HMDB51", "dval": "The Privacy Annotated HMDB51 (PA-HMDB51) dataset is a video-based dataset for evaluating pirvacy protection in visual action recognition algorithms. The dataset contains both target task labels (action) and selected privacy attributes (skin color, face, gender, nudity, and relationship) annotated on a per-frame basis." }, { "dkey": "CompCars", "dval": "The Comprehensive Cars (CompCars) dataset contains data from two scenarios, including images from web-nature and surveillance-nature. The web-nature data contains 163 car makes with 1,716 car models. There are a total of 136,726 images capturing the entire cars and 27,618 images capturing the car parts. The full car images are labeled with bounding boxes and viewpoints. Each car model is labeled with five attributes, including maximum speed, displacement, number of doors, number of seats, and type of car. The surveillance-nature data contains 50,000 car images captured in the front view. \n\nThe dataset can be used for the tasks of:\n\n\nFine-grained classification\nAttribute prediction\nCar model verification\n\nThe dataset can be also used for other tasks such as image ranking, multi-task learning, and 3D reconstruction." }, { "dkey": "IJB-A", "dval": "The IARPA Janus Benchmark A (IJB-A) database is developed with the aim to augment more challenges to the face recognition task by collecting facial images with a wide variations in pose, illumination, expression, resolution and occlusion. IJB-A is constructed by collecting 5,712 images and 2,085 videos from 500 identities, with an average of 11.4 images and 4.2 videos per identity." }, { "dkey": "HVU", "dval": "HVU is organized hierarchically in a semantic taxonomy that focuses on multi-label and multi-task video understanding as a comprehensive problem that encompasses the recognition of multiple semantic aspects in the dynamic scene. HVU contains approx.~572k videos in total with 9 million annotations for training, validation, and test set spanning over 3142 labels. 
HVU encompasses semantic aspects defined on categories of scenes, objects, actions, events, attributes, and concepts which naturally captures the real-world scenarios." } ]
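Because CelebA images carry 40 binary attribute labels, face attribute recognition is commonly framed as multi-label classification with one sigmoid output per attribute. Below is a minimal PyTorch sketch of such a head on top of generic backbone features; it is a generic illustration under the assumption that PyTorch is available, not the model of any particular paper in this record.

```python
import torch
import torch.nn as nn

NUM_ATTRIBUTES = 40  # CelebA provides 40 binary facial attributes

class AttributeHead(nn.Module):
    """Shared backbone features -> one logit per attribute."""
    def __init__(self, feature_dim: int = 512, num_attributes: int = NUM_ATTRIBUTES):
        super().__init__()
        self.classifier = nn.Linear(feature_dim, num_attributes)

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        return self.classifier(features)  # raw logits, one per attribute

head = AttributeHead()
criterion = nn.BCEWithLogitsLoss()  # independent sigmoid per attribute

# Toy batch: 8 feature vectors and their 0/1 attribute targets.
features = torch.randn(8, 512)
targets = torch.randint(0, 2, (8, NUM_ATTRIBUTES)).float()

logits = head(features)
loss = criterion(logits, targets)
loss.backward()
print(loss.item())
```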
A deep learning approach for direct regression of [DATASET] human body poses from single images, which
human pose estimation images
2,019
[ "Deep Fashion3D", "ExPose", "EgoDexter", "JTA", "PoseTrack" ]
[ "3DPW", "COCO" ]
[ { "dkey": "3DPW", "dval": "The 3D Poses in the Wild dataset is the first dataset in the wild with accurate 3D poses for evaluation. While other datasets outdoors exist, they are all restricted to a small recording volume. 3DPW is the first one that includes video footage taken from a moving phone camera.\n\nThe dataset includes:\n\n\n60 video sequences.\n2D pose annotations.\n3D poses obtained with the method introduced in the paper.\nCamera poses for every frame in the sequences.\n3D body scans and 3D people models (re-poseable and re-shapeable). Each sequence contains its corresponding models.\n18 3D models in different clothing variations." }, { "dkey": "COCO", "dval": "The MS COCO (Microsoft Common Objects in Context) dataset is a large-scale object detection, segmentation, key-point detection, and captioning dataset. The dataset consists of 328K images.\n\nSplits:\nThe first version of MS COCO dataset was released in 2014. It contains 164K images split into training (83K), validation (41K) and test (41K) sets. In 2015 additional test set of 81K images was released, including all the previous test images and 40K new images.\n\nBased on community feedback, in 2017 the training/validation split was changed from 83K/41K to 118K/5K. The new split uses the same images and annotations. The 2017 test set is a subset of 41K images of the 2015 test set. Additionally, the 2017 release contains a new unannotated dataset of 123K images.\n\nAnnotations:\nThe dataset has annotations for\n\n\nobject detection: bounding boxes and per-instance segmentation masks with 80 object categories,\ncaptioning: natural language descriptions of the images (see MS COCO Captions),\nkeypoints detection: containing more than 200,000 images and 250,000 person instances labeled with keypoints (17 possible keypoints, such as left eye, nose, right hip, right ankle),\nstuff image segmentation – per-pixel segmentation masks with 91 stuff categories, such as grass, wall, sky (see MS COCO Stuff),\npanoptic: full scene segmentation, with 80 thing categories (such as person, bicycle, elephant) and a subset of 91 stuff categories (grass, sky, road),\ndense pose: more than 39,000 images and 56,000 person instances labeled with DensePose annotations – each labeled person is annotated with an instance id and a mapping between image pixels that belong to that person body and a template 3D model.\nThe annotations are publicly available only for training and validation images." }, { "dkey": "Deep Fashion3D", "dval": "A novel benchmark and dataset for the evaluation of image-based garment reconstruction systems. Deep Fashion3D contains 2078 models reconstructed from real garments, which covers 10 different categories and 563 garment instances. It provides rich annotations including 3D feature lines, 3D body pose and the corresponded multi-view real images. In addition, each garment is randomly posed to enhance the variety of real clothing deformations." }, { "dkey": "ExPose", "dval": "Curates a dataset of SMPL-X fits on in-the-wild images." }, { "dkey": "EgoDexter", "dval": "The EgoDexter dataset provides both 2D and 3D pose annotations for 4 testing video sequences with 3190 frames. The videos are recorded with body-mounted camera from egocentric viewpoints and contain cluttered backgrounds, fast camera motion, and complex interactions with various objects. Fingertip positions were manually annotated for 1485 out of 3190 frames." 
}, { "dkey": "JTA", "dval": "JTA is a dataset for people tracking in urban scenarios by exploiting a photorealistic videogame. It is up to now the vastest dataset (about 500.000 frames, almost 10 million body poses) of human body parts for people tracking in urban scenarios." }, { "dkey": "PoseTrack", "dval": "The PoseTrack dataset is a large-scale benchmark for multi-person pose estimation and tracking in videos. It requires not only pose estimation in single frames, but also temporal tracking across frames. It contains 514 videos including 66,374 frames in total, split into 300, 50 and 208 videos for training, validation and test set respectively. For training videos, 30 frames from the center are annotated. For validation and test videos, besides 30 frames from the center, every fourth frame is also annotated for evaluating long range articulated tracking. The annotations include 15 body keypoints location, a unique person id and a head bounding box for each person instance." } ]
A facial expression recognition system with a variant of the evolutionary firefly algorithm for feature optimization.
facial expression recognition images
2,016
[ "SFEW", "BP4D", "AffectNet", "RAF-DB", "Oulu-CASIA", "FRGC" ]
[ "MMI", "JAFFE" ]
[ { "dkey": "MMI", "dval": "The MMI Facial Expression Database consists of over 2900 videos and high-resolution still images of 75 subjects. It is fully annotated for the presence of AUs in videos (event coding), and partially coded on frame-level, indicating for each frame whether an AU is in either the neutral, onset, apex or offset phase. A small part was annotated for audio-visual laughters." }, { "dkey": "JAFFE", "dval": "The JAFFE dataset consists of 213 images of different facial expressions from 10 different Japanese female subjects. Each subject was asked to do 7 facial expressions (6 basic facial expressions and neutral) and the images were annotated with average semantic ratings on each facial expression by 60 annotators." }, { "dkey": "SFEW", "dval": "The Static Facial Expressions in the Wild (SFEW) dataset is a dataset for facial expression recognition. It was created by selecting static frames from the AFEW database by computing key frames based on facial point clustering. The most commonly used version, SFEW 2.0, was the benchmarking data for the SReco sub-challenge in EmotiW 2015. SFEW 2.0 has been divided into three sets: Train (958 samples), Val (436 samples) and Test (372 samples). Each of the images is assigned to one of seven expression categories, i.e., anger, disgust, fear, neutral, happiness, sadness, and surprise. The expression labels of the training and validation sets are publicly available, whereas those of the testing set are held back by the challenge organizer." }, { "dkey": "BP4D", "dval": "The BP4D-Spontaneous dataset is a 3D video database of spontaneous facial expressions in a diverse group of young adults. Well-validated emotion inductions were used to elicit expressions of emotion and paralinguistic communication. Frame-level ground-truth for facial actions was obtained using the Facial Action Coding System. Facial features were tracked in both 2D and 3D domains using both person-specific and generic approaches.\nThe database includes forty-one participants (23 women, 18 men). They were 18 – 29 years of age; 11 were Asian, 6 were African-American, 4 were Hispanic, and 20 were Euro-American. An emotion elicitation protocol was designed to elicit emotions of participants effectively. Eight tasks were covered with an interview process and a series of activities to elicit eight emotions.\nThe database is structured by participants. Each participant is associated with 8 tasks. For each task, there are both 3D and 2D videos. As well, the Metadata include manually annotated action units (FACS AU), automatically tracked head pose, and 2D/3D facial landmarks. The database is in the size of about 2.6TB (without compression)." }, { "dkey": "AffectNet", "dval": "AffectNet is a large facial expression dataset with around 0.4 million images manually labeled for the presence of eight (neutral, happy, angry, sad, fear, surprise, disgust, contempt) facial expressions along with the intensity of valence and arousal." }, { "dkey": "RAF-DB", "dval": "The Real-world Affective Faces Database (RAF-DB) is a dataset for facial expression. It contains 29672 facial images tagged with basic or compound expressions by 40 independent taggers. Images in this database are of great variability in subjects' age, gender and ethnicity, head poses, lighting conditions, occlusions, (e.g. glasses, facial hair or self-occlusion), post-processing operations (e.g. various filters and special effects), etc." 
}, { "dkey": "Oulu-CASIA", "dval": "The Oulu-CASIA NIR&VIS facial expression database consists of six expressions (surprise, happiness, sadness, anger, fear and disgust) from 80 people between 23 and 58 years old. 73.8% of the subjects are males. The subjects were asked to sit on a chair in the observation room in a way that he/ she is in front of camera. Camera-face distance is about 60 cm. Subjects were asked to make a facial expression according to an expression example shown in picture sequences. The imaging hardware works at the rate of 25 frames per second and the image resolution is 320 × 240 pixels." }, { "dkey": "FRGC", "dval": "The data for FRGC consists of 50,000 recordings divided into training and validation partitions. The training partition is designed for training algorithms and the validation partition is for assessing performance of an approach in a laboratory setting. The validation partition consists of data from 4,003 subject sessions. A subject session is the set of all images of a person taken each time a person's biometric data is collected and consists of four controlled still images, two uncontrolled still images, and one three-dimensional image. The controlled images were taken in a studio setting, are full frontal facial images taken under two lighting conditions and with two facial expressions (smiling and neutral). The uncontrolled images were taken in varying illumination conditions; e.g., hallways, atriums, or outside. Each set of uncontrolled images contains two expressions, smiling and neutral. The 3D image was taken under controlled illumination conditions. The 3D images consist of both a range and a texture image. The 3D images were acquired by a Minolta Vivid 900/910 series sensor." } ]
We propose a novel and simple method to train GANs. As an analysis, we show that the limited support problem
gan images
2,019
[ "BDD100K", "UASOL", "Localized Narratives", "SuperGLUE", "THEODORE", "MMDB" ]
[ "CIFAR-10", "CelebA" ]
[ { "dkey": "CIFAR-10", "dval": "The CIFAR-10 dataset (Canadian Institute for Advanced Research, 10 classes) is a subset of the Tiny Images dataset and consists of 60000 32x32 color images. The images are labelled with one of 10 mutually exclusive classes: airplane, automobile (but not truck or pickup truck), bird, cat, deer, dog, frog, horse, ship, and truck (but not pickup truck). There are 6000 images per class with 5000 training and 1000 testing images per class.\n\nThe criteria for deciding whether an image belongs to a class were as follows:\n\n\nThe class name should be high on the list of likely answers to the question “What is in this picture?”\nThe image should be photo-realistic. Labelers were instructed to reject line drawings.\nThe image should contain only one prominent instance of the object to which the class refers.\nThe object may be partially occluded or seen from an unusual viewpoint as long as its identity is still clear to the labeler." }, { "dkey": "CelebA", "dval": "CelebFaces Attributes dataset contains 202,599 face images of the size 178×218 from 10,177 celebrities, each annotated with 40 binary labels indicating facial attributes like hair color, gender and age." }, { "dkey": "BDD100K", "dval": "Datasets drive vision progress, yet existing driving datasets are impoverished in terms of visual content and supported tasks to study multitask learning for autonomous driving. Researchers are usually constrained to study a small set of problems on one dataset, while real-world computer vision applications require performing tasks of various complexities. We construct BDD100K, the largest driving video dataset with 100K videos and 10 tasks to evaluate the exciting progress of image recognition algorithms on autonomous driving. The dataset possesses geographic, environmental, and weather diversity, which is useful for training models that are less likely to be surprised by new conditions. Based on this diverse dataset, we build a benchmark for heterogeneous multitask learning and study how to solve the tasks together. Our experiments show that special training strategies are needed for existing models to perform such heterogeneous tasks. BDD100K opens the door for future studies in this important venue. More detail is at the dataset home page." }, { "dkey": "UASOL", "dval": "The UASOL an RGB-D stereo dataset, that contains 160902 frames, filmed at 33 different scenes, each with between 2 k and 10 k frames. The frames show different paths from the perspective of a pedestrian, including sidewalks, trails, roads, etc. The images were extracted from video files with 15 fps at HD2K resolution with a size of 2280 × 1282 pixels. The dataset also provides a GPS geolocalization tag for each second of the sequences and reflects different climatological conditions. It also involved up to 4 different persons filming the dataset at different moments of the day.\n\nWe propose a train, validation and test split to train the network. \nAdditionally, we introduce a subset of 676 pairs of RGB Stereo images and their respective depth, which we extracted randomly from the entire dataset. This given test set is introduced to make comparability possible between the different methods trained with the dataset." }, { "dkey": "Localized Narratives", "dval": "We propose Localized Narratives, a new form of multimodal image annotations connecting vision and language. We ask annotators to describe an image with their voice while simultaneously hovering their mouse over the region they are describing. 
Since the voice and the mouse pointer are synchronized, we can localize every single word in the description. This dense visual grounding takes the form of a mouse trace segment per word and is unique to our data. We annotated 849k images with Localized Narratives: the whole COCO, Flickr30k, and ADE20K datasets, and 671k images of Open Images, all of which we make publicly available. We provide an extensive analysis of these annotations showing they are diverse, accurate, and efficient to produce. We also demonstrate their utility on the application of controlled image captioning." }, { "dkey": "SuperGLUE", "dval": "SuperGLUE is a benchmark dataset designed to pose a more rigorous test of language understanding than GLUE. SuperGLUE has the same high-level motivation as GLUE: to provide a simple, hard-to-game measure of progress toward general-purpose language understanding technologies for English. SuperGLUE follows the basic design of GLUE: It consists of a public leaderboard built around eight language understanding tasks, drawing on existing data, accompanied by a single-number\nperformance metric, and an analysis toolkit. However, it improves upon GLUE in several ways:\n\n\nMore challenging tasks: SuperGLUE retains the two hardest tasks in GLUE. The remaining tasks were identified from those submitted to an open call for task proposals and were selected based on difficulty for current NLP approaches.\nMore diverse task formats: The task formats in GLUE are limited to sentence- and sentence-pair classification. The authors expand the set of task formats in SuperGLUE to include\ncoreference resolution and question answering (QA).\nComprehensive human baselines: the authors include human performance estimates for all benchmark tasks, which verify that substantial headroom exists between a strong BERT-based baseline and human performance.\nImproved code support: SuperGLUE is distributed with a new, modular toolkit for work on pretraining, multi-task learning, and transfer learning in NLP, built around standard tools including PyTorch (Paszke et al., 2017) and AllenNLP (Gardner et al., 2017).\nRefined usage rules: The conditions for inclusion on the SuperGLUE leaderboard were revamped to ensure fair competition, an informative leaderboard, and full credit\nassignment to data and task creators." }, { "dkey": "THEODORE", "dval": "Recent work about synthetic indoor datasets from perspective views has shown significant improvements of object detection results with Convolutional Neural Networks(CNNs). In this paper, we introduce THEODORE: a novel, large-scale indoor dataset containing 100,000 high- resolution diversified fisheye images with 14 classes. To this end, we create 3D virtual environments of living rooms, different human characters and interior textures. Beside capturing fisheye images from virtual environments we create annotations for semantic segmentation, instance masks and bounding boxes for object detection tasks. We compare our synthetic dataset to state of the art real-world datasets for omnidirectional images. Based on MS COCO weights, we show that our dataset is well suited for fine-tuning CNNs for object detection. Through a high generalization of our models by means of image synthesis and domain randomization we reach an AP up to 0.84 for class person on High-Definition Analytics dataset." 
}, { "dkey": "MMDB", "dval": "Multimodal Dyadic Behavior (MMDB) dataset is a unique collection of multimodal (video, audio, and physiological) recordings of the social and communicative behavior of toddlers. The MMDB contains 160 sessions of 3-5 minute semi-structured play interaction between a trained adult examiner and a child between the age of 15 and 30 months. The MMDB dataset supports a novel problem domain for activity recognition, which consists of the decoding of dyadic social interactions between adults and children in a developmental context." } ]
I want to train a supervised model for image inpainting.
image inpainting
2,018
[ "FVI", "SNIPS", "ConvAI2", "I-HAZE", "CLUECorpus2020", "FaceForensics" ]
[ "Places", "CelebA" ]
[ { "dkey": "Places", "dval": "The Places dataset is proposed for scene recognition and contains more than 2.5 million images covering more than 205 scene categories with more than 5,000 images per category." }, { "dkey": "CelebA", "dval": "CelebFaces Attributes dataset contains 202,599 face images of the size 178×218 from 10,177 celebrities, each annotated with 40 binary labels indicating facial attributes like hair color, gender and age." }, { "dkey": "FVI", "dval": "The Free-Form Video Inpainting dataset is a dataset used for training and evaluation video inpainting models. It consists of 1940 videos from the YouTube-VOS dataset and 12,600 videos from the YouTube-BoundingBoxes." }, { "dkey": "SNIPS", "dval": "The SNIPS Natural Language Understanding benchmark is a dataset of over 16,000 crowdsourced queries distributed among 7 user intents of various complexity:\n\n\nSearchCreativeWork (e.g. Find me the I, Robot television show),\nGetWeather (e.g. Is it windy in Boston, MA right now?),\nBookRestaurant (e.g. I want to book a highly rated restaurant in Paris tomorrow night),\nPlayMusic (e.g. Play the last track from Beyoncé off Spotify),\nAddToPlaylist (e.g. Add Diamonds to my roadtrip playlist),\nRateBook (e.g. Give 6 stars to Of Mice and Men),\nSearchScreeningEvent (e.g. Check the showtimes for Wonder Woman in Paris).\nThe training set contains of 13,084 utterances, the validation set and the test set contain 700 utterances each, with 100 queries per intent." }, { "dkey": "ConvAI2", "dval": "The ConvAI2 NeurIPS competition aimed at finding approaches to creating high-quality dialogue agents capable of meaningful open domain conversation. The ConvAI2 dataset for training models is based on the PERSONA-CHAT dataset. The speaker pairs each have assigned profiles coming from a set of 1155 possible personas (at training time), each consisting of at least 5 profile sentences, setting aside 100 never seen before personas for validation. As the original PERSONA-CHAT test set was released, a new hidden test set consisted of 100 new personas and over 1,015 dialogs was created by crowdsourced workers.\n\nTo avoid modeling that takes advantage of trivial word overlap, additional rewritten sets of the same train and test personas were crowdsourced, with related sentences that are rephrases, generalizations or specializations, rendering the task much more challenging. For example “I just got my nails done” is revised as “I love to pamper myself on a regular basis” and “I am on a diet now” is revised as “I need to lose weight.”\n\nThe training, validation and hidden test sets consists of 17,878, 1,000 and 1,015 dialogues, respectively." }, { "dkey": "I-HAZE", "dval": "The I-Haze dataset contains 25 indoor hazy images (size 2833×4657 pixels) training. It has 5 hazy images for validation along with their corresponding ground truth images." }, { "dkey": "CLUECorpus2020", "dval": "CLUECorpus2020 is a large-scale corpus that can be used directly for self-supervised learning such as pre-training of a language model, or language generation. It has 100G raw corpus with 35 billion Chinese characters, which is retrieved from Common Crawl." }, { "dkey": "FaceForensics", "dval": "FaceForensics is a video dataset consisting of more than 500,000 frames containing faces from 1004 videos that can be used to study image or video forgeries. All videos are downloaded from Youtube and are cut down to short continuous clips that contain mostly frontal faces. 
This dataset has two versions:\n\n\n\nSource-to-Target: where the authors reenact over 1000 videos with new facial expressions extracted from other videos, which e.g. can be used to train a classifier to detect fake images or videos.\n\n\n\nSelfreenactment: where the authors use Face2Face to reenact the facial expressions of videos with their own facial expressions as input to get pairs of videos, which e.g. can be used to train supervised generative refinement models." } ]
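Supervised image inpainting is typically trained by hiding part of an image and penalizing the reconstruction inside the hole. The sketch below, assuming PyTorch, shows that setup with a random square mask and an L1 loss on the masked region; the tiny network and mask shape are illustrative choices, not a reference implementation.

```python
import torch
import torch.nn as nn

def random_square_mask(batch: int, size: int = 64, hole: int = 24) -> torch.Tensor:
    """Binary mask per image: 1 = known pixel, 0 = hole to be inpainted."""
    mask = torch.ones(batch, 1, size, size)
    for b in range(batch):
        y = torch.randint(0, size - hole, (1,)).item()
        x = torch.randint(0, size - hole, (1,)).item()
        mask[b, :, y:y + hole, x:x + hole] = 0.0
    return mask

# Tiny encoder-decoder standing in for an inpainting network.
net = nn.Sequential(
    nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),  # 3 RGB channels + 1 mask channel
    nn.Conv2d(32, 3, 3, padding=1),
)

images = torch.rand(8, 3, 64, 64)               # stand-in for Places/CelebA crops
mask = random_square_mask(8)
masked = images * mask                          # zero out the hole region
pred = net(torch.cat([masked, mask], dim=1))

# Supervise reconstruction inside the hole (L1 is a common choice).
loss = ((pred - images).abs() * (1 - mask)).mean()
loss.backward()
print(loss.item())
```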
I want to train a supervised image recognition model.
image recognition images
2,019
[ "SNIPS", "ConvAI2", "Libri-Light", "EPIC-KITCHENS-100", "I-HAZE", "CLUECorpus2020", "Stanford Cars" ]
[ "DTD", "COCO" ]
[ { "dkey": "DTD", "dval": "The Describable Textures Dataset (DTD) contains 5640 texture images in the wild. They are annotated with human-centric attributes inspired by the perceptual properties of textures." }, { "dkey": "COCO", "dval": "The MS COCO (Microsoft Common Objects in Context) dataset is a large-scale object detection, segmentation, key-point detection, and captioning dataset. The dataset consists of 328K images.\n\nSplits:\nThe first version of MS COCO dataset was released in 2014. It contains 164K images split into training (83K), validation (41K) and test (41K) sets. In 2015 additional test set of 81K images was released, including all the previous test images and 40K new images.\n\nBased on community feedback, in 2017 the training/validation split was changed from 83K/41K to 118K/5K. The new split uses the same images and annotations. The 2017 test set is a subset of 41K images of the 2015 test set. Additionally, the 2017 release contains a new unannotated dataset of 123K images.\n\nAnnotations:\nThe dataset has annotations for\n\n\nobject detection: bounding boxes and per-instance segmentation masks with 80 object categories,\ncaptioning: natural language descriptions of the images (see MS COCO Captions),\nkeypoints detection: containing more than 200,000 images and 250,000 person instances labeled with keypoints (17 possible keypoints, such as left eye, nose, right hip, right ankle),\nstuff image segmentation – per-pixel segmentation masks with 91 stuff categories, such as grass, wall, sky (see MS COCO Stuff),\npanoptic: full scene segmentation, with 80 thing categories (such as person, bicycle, elephant) and a subset of 91 stuff categories (grass, sky, road),\ndense pose: more than 39,000 images and 56,000 person instances labeled with DensePose annotations – each labeled person is annotated with an instance id and a mapping between image pixels that belong to that person body and a template 3D model.\nThe annotations are publicly available only for training and validation images." }, { "dkey": "SNIPS", "dval": "The SNIPS Natural Language Understanding benchmark is a dataset of over 16,000 crowdsourced queries distributed among 7 user intents of various complexity:\n\n\nSearchCreativeWork (e.g. Find me the I, Robot television show),\nGetWeather (e.g. Is it windy in Boston, MA right now?),\nBookRestaurant (e.g. I want to book a highly rated restaurant in Paris tomorrow night),\nPlayMusic (e.g. Play the last track from Beyoncé off Spotify),\nAddToPlaylist (e.g. Add Diamonds to my roadtrip playlist),\nRateBook (e.g. Give 6 stars to Of Mice and Men),\nSearchScreeningEvent (e.g. Check the showtimes for Wonder Woman in Paris).\nThe training set contains of 13,084 utterances, the validation set and the test set contain 700 utterances each, with 100 queries per intent." }, { "dkey": "ConvAI2", "dval": "The ConvAI2 NeurIPS competition aimed at finding approaches to creating high-quality dialogue agents capable of meaningful open domain conversation. The ConvAI2 dataset for training models is based on the PERSONA-CHAT dataset. The speaker pairs each have assigned profiles coming from a set of 1155 possible personas (at training time), each consisting of at least 5 profile sentences, setting aside 100 never seen before personas for validation. 
As the original PERSONA-CHAT test set was released, a new hidden test set consisted of 100 new personas and over 1,015 dialogs was created by crowdsourced workers.\n\nTo avoid modeling that takes advantage of trivial word overlap, additional rewritten sets of the same train and test personas were crowdsourced, with related sentences that are rephrases, generalizations or specializations, rendering the task much more challenging. For example “I just got my nails done” is revised as “I love to pamper myself on a regular basis” and “I am on a diet now” is revised as “I need to lose weight.”\n\nThe training, validation and hidden test sets consists of 17,878, 1,000 and 1,015 dialogues, respectively." }, { "dkey": "Libri-Light", "dval": "Libri-Light is a collection of spoken English audio suitable for training speech recognition systems under limited or no supervision. It is derived from open-source audio books from the LibriVox project. It contains over 60K hours of audio." }, { "dkey": "EPIC-KITCHENS-100", "dval": "This paper introduces the pipeline to scale the largest dataset in egocentric vision EPIC-KITCHENS. The effort culminates in EPIC-KITCHENS-100, a collection of 100 hours, 20M frames, 90K actions in 700 variable-length videos, capturing long-term unscripted activities in 45 environments, using head-mounted cameras. Compared to its previous version (EPIC-KITCHENS-55), EPIC-KITCHENS-100 has been annotated using a novel pipeline that allows denser (54% more actions per minute) and more complete annotations of fine-grained actions (+128% more action segments). This collection also enables evaluating the \"test of time\" - i.e. whether models trained on data collected in 2018 can generalise to new footage collected under the same hypotheses albeit \"two years on\".\nThe dataset is aligned with 6 challenges: action recognition (full and weak supervision), action detection, action anticipation, cross-modal retrieval (from captions), as well as unsupervised domain adaptation for action recognition. For each challenge, we define the task, provide baselines and evaluation metrics." }, { "dkey": "I-HAZE", "dval": "The I-Haze dataset contains 25 indoor hazy images (size 2833×4657 pixels) training. It has 5 hazy images for validation along with their corresponding ground truth images." }, { "dkey": "CLUECorpus2020", "dval": "CLUECorpus2020 is a large-scale corpus that can be used directly for self-supervised learning such as pre-training of a language model, or language generation. It has 100G raw corpus with 35 billion Chinese characters, which is retrieved from Common Crawl." }, { "dkey": "Stanford Cars", "dval": "The Stanford Cars dataset consists of 196 classes of cars with a total of 16,185 images, taken from the rear. The data is divided into almost a 50-50 train/test split with 8,144 training images and 8,041 testing images. Categories are typically at the level of Make, Model, Year. The images are 360×240." } ]
I am training a model that classifies images into 1000 classes.
image classification images
2,019
[ "ConvAI2", "FaceForensics", "ShapeNet", "FSDnoisy18k", "CommonsenseQA", "FSS-1000", "GYAFC" ]
[ "ImageNet", "MPII", "CIFAR-10" ]
[ { "dkey": "ImageNet", "dval": "The ImageNet dataset contains 14,197,122 annotated images according to the WordNet hierarchy. Since 2010 the dataset is used in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), a benchmark in image classification and object detection.\nThe publicly released dataset contains a set of manually annotated training images. A set of test images is also released, with the manual annotations withheld.\nILSVRC annotations fall into one of two categories: (1) image-level annotation of a binary label for the presence or absence of an object class in the image, e.g., “there are cars in this image” but “there are no tigers,” and (2) object-level annotation of a tight bounding box and class label around an object instance in the image, e.g., “there is a screwdriver centered at position (20,25) with width of 50 pixels and height of 30 pixels”.\nThe ImageNet project does not own the copyright of the images, therefore only thumbnails and URLs of images are provided.\n\n\nTotal number of non-empty WordNet synsets: 21841\nTotal number of images: 14197122\nNumber of images with bounding box annotations: 1,034,908\nNumber of synsets with SIFT features: 1000\nNumber of images with SIFT features: 1.2 million" }, { "dkey": "MPII", "dval": "The MPII Human Pose Dataset for single person pose estimation is composed of about 25K images of which 15K are training samples, 3K are validation samples and 7K are testing samples (which labels are withheld by the authors). The images are taken from YouTube videos covering 410 different human activities and the poses are manually annotated with up to 16 body joints." }, { "dkey": "CIFAR-10", "dval": "The CIFAR-10 dataset (Canadian Institute for Advanced Research, 10 classes) is a subset of the Tiny Images dataset and consists of 60000 32x32 color images. The images are labelled with one of 10 mutually exclusive classes: airplane, automobile (but not truck or pickup truck), bird, cat, deer, dog, frog, horse, ship, and truck (but not pickup truck). There are 6000 images per class with 5000 training and 1000 testing images per class.\n\nThe criteria for deciding whether an image belongs to a class were as follows:\n\n\nThe class name should be high on the list of likely answers to the question “What is in this picture?”\nThe image should be photo-realistic. Labelers were instructed to reject line drawings.\nThe image should contain only one prominent instance of the object to which the class refers.\nThe object may be partially occluded or seen from an unusual viewpoint as long as its identity is still clear to the labeler." }, { "dkey": "ConvAI2", "dval": "The ConvAI2 NeurIPS competition aimed at finding approaches to creating high-quality dialogue agents capable of meaningful open domain conversation. The ConvAI2 dataset for training models is based on the PERSONA-CHAT dataset. The speaker pairs each have assigned profiles coming from a set of 1155 possible personas (at training time), each consisting of at least 5 profile sentences, setting aside 100 never seen before personas for validation. As the original PERSONA-CHAT test set was released, a new hidden test set consisted of 100 new personas and over 1,015 dialogs was created by crowdsourced workers.\n\nTo avoid modeling that takes advantage of trivial word overlap, additional rewritten sets of the same train and test personas were crowdsourced, with related sentences that are rephrases, generalizations or specializations, rendering the task much more challenging. 
For example “I just got my nails done” is revised as “I love to pamper myself on a regular basis” and “I am on a diet now” is revised as “I need to lose weight.”\n\nThe training, validation and hidden test sets consists of 17,878, 1,000 and 1,015 dialogues, respectively." }, { "dkey": "FaceForensics", "dval": "FaceForensics is a video dataset consisting of more than 500,000 frames containing faces from 1004 videos that can be used to study image or video forgeries. All videos are downloaded from Youtube and are cut down to short continuous clips that contain mostly frontal faces. This dataset has two versions:\n\n\n\nSource-to-Target: where the authors reenact over 1000 videos with new facial expressions extracted from other videos, which e.g. can be used to train a classifier to detect fake images or videos.\n\n\n\nSelfreenactment: where the authors use Face2Face to reenact the facial expressions of videos with their own facial expressions as input to get pairs of videos, which e.g. can be used to train supervised generative refinement models." }, { "dkey": "ShapeNet", "dval": "ShapeNet is a large scale repository for 3D CAD models developed by researchers from Stanford University, Princeton University and the Toyota Technological Institute at Chicago, USA. The repository contains over 300M models with 220,000 classified into 3,135 classes arranged using WordNet hypernym-hyponym relationships. ShapeNet Parts subset contains 31,693 meshes categorised into 16 common object classes (i.e. table, chair, plane etc.). Each shapes ground truth contains 2-5 parts (with a total of 50 part classes)." }, { "dkey": "FSDnoisy18k", "dval": "The FSDnoisy18k dataset is an open dataset containing 42.5 hours of audio across 20 sound event classes, including a small amount of manually-labeled data and a larger quantity of real-world noisy data. The audio content is taken from Freesound, and the dataset was curated using the Freesound Annotator. The noisy set of FSDnoisy18k consists of 15,813 audio clips (38.8h), and the test set consists of 947 audio clips (1.4h) with correct labels. The dataset features two main types of label noise: in-vocabulary (IV) and out-of-vocabulary (OOV). IV applies when, given an observed label that is incorrect or incomplete, the true or missing label is part of the target class set. Analogously, OOV means that the true or missing label is not covered by those 20 classes." }, { "dkey": "CommonsenseQA", "dval": "The CommonsenseQA is a dataset for commonsense question answering task. The dataset consists of 12,247 questions with 5 choices each.\nThe dataset was generated by Amazon Mechanical Turk workers in the following process (an example is provided in parentheses):\n\n\na crowd worker observes a source concept from ConceptNet (“River”) and three target concepts (“Waterfall”, “Bridge”, “Valley”) that are all related by the same ConceptNet relation (“AtLocation”),\nthe worker authors three questions, one per target concept, such that only that particular target concept is the answer, while the other two distractor concepts are not, (“Where on a river can you hold a cup upright to catch water on a sunny day?”, “Where can I stand on a river to see water falling without getting wet?”, “I’m crossing the river, my feet are wet but my body is dry, where am I?”)\nfor each question, another worker chooses one additional distractor from Concept Net (“pebble”, “stream”, “bank”), and the author another distractor (“mountain”, “bottom”, “island”) manually." 
}, { "dkey": "FSS-1000", "dval": "FSS-1000 is a 1000 class dataset for few-shot segmentation. The dataset contains significant number of objects that have never been seen or annotated in previous datasets, such as tiny daily objects, merchandise, cartoon characters, logos, etc." }, { "dkey": "GYAFC", "dval": "Grammarly’s Yahoo Answers Formality Corpus (GYAFC) is the largest dataset for any style containing a total of 110K informal / formal sentence pairs.\n\nYahoo Answers is a question answering forum, contains a large number of informal sentences and allows redistribution of data. The authors used the Yahoo Answers L6 corpus to create the GYAFC dataset of informal and formal sentence pairs. In order to ensure a uniform distribution of data, they removed sentences that are questions, contain URLs, and are shorter than 5 words or longer than 25. After these preprocessing steps, 40 million sentences remain. \n\nThe Yahoo Answers corpus consists of several different domains like Business, Entertainment & Music, Travel, Food, etc. Pavlick and Tetreault formality classifier (PT16) shows that the formality level varies significantly\nacross different genres. In order to control for this variation, the authors work with two specific domains that contain the most informal sentences and show results on training and testing within those categories. The authors use the formality classifier from PT16 to identify informal sentences and train this classifier on the Answers genre of the PT16 corpus\nwhich consists of nearly 5,000 randomly selected sentences from Yahoo Answers manually annotated on a scale of -3 (very informal) to 3 (very formal). They find that the domains of Entertainment & Music and Family & Relationships contain the most informal sentences and create the GYAFC dataset using these domains." } ]
In this paper, we show how state-of-the-art neural networks can learn to quantify
abstract quantification video
2,017
[ "LogiQA", "THEODORE", "E2E", "ANLI", "SemArt", "RFW", "SGD" ]
[ "ImageNet", "COCO" ]
[ { "dkey": "ImageNet", "dval": "The ImageNet dataset contains 14,197,122 annotated images according to the WordNet hierarchy. Since 2010 the dataset is used in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), a benchmark in image classification and object detection.\nThe publicly released dataset contains a set of manually annotated training images. A set of test images is also released, with the manual annotations withheld.\nILSVRC annotations fall into one of two categories: (1) image-level annotation of a binary label for the presence or absence of an object class in the image, e.g., “there are cars in this image” but “there are no tigers,” and (2) object-level annotation of a tight bounding box and class label around an object instance in the image, e.g., “there is a screwdriver centered at position (20,25) with width of 50 pixels and height of 30 pixels”.\nThe ImageNet project does not own the copyright of the images, therefore only thumbnails and URLs of images are provided.\n\n\nTotal number of non-empty WordNet synsets: 21841\nTotal number of images: 14197122\nNumber of images with bounding box annotations: 1,034,908\nNumber of synsets with SIFT features: 1000\nNumber of images with SIFT features: 1.2 million" }, { "dkey": "COCO", "dval": "The MS COCO (Microsoft Common Objects in Context) dataset is a large-scale object detection, segmentation, key-point detection, and captioning dataset. The dataset consists of 328K images.\n\nSplits:\nThe first version of MS COCO dataset was released in 2014. It contains 164K images split into training (83K), validation (41K) and test (41K) sets. In 2015 additional test set of 81K images was released, including all the previous test images and 40K new images.\n\nBased on community feedback, in 2017 the training/validation split was changed from 83K/41K to 118K/5K. The new split uses the same images and annotations. The 2017 test set is a subset of 41K images of the 2015 test set. Additionally, the 2017 release contains a new unannotated dataset of 123K images.\n\nAnnotations:\nThe dataset has annotations for\n\n\nobject detection: bounding boxes and per-instance segmentation masks with 80 object categories,\ncaptioning: natural language descriptions of the images (see MS COCO Captions),\nkeypoints detection: containing more than 200,000 images and 250,000 person instances labeled with keypoints (17 possible keypoints, such as left eye, nose, right hip, right ankle),\nstuff image segmentation – per-pixel segmentation masks with 91 stuff categories, such as grass, wall, sky (see MS COCO Stuff),\npanoptic: full scene segmentation, with 80 thing categories (such as person, bicycle, elephant) and a subset of 91 stuff categories (grass, sky, road),\ndense pose: more than 39,000 images and 56,000 person instances labeled with DensePose annotations – each labeled person is annotated with an instance id and a mapping between image pixels that belong to that person body and a template 3D model.\nThe annotations are publicly available only for training and validation images." }, { "dkey": "LogiQA", "dval": "LogiQA consists of 8,678 QA instances, covering multiple types of deductive reasoning. Results show that state-of-the-art neural models perform by far worse than human ceiling. The dataset can also serve as a benchmark for reinvestigating logical AI under the deep learning NLP setting." 
}, { "dkey": "THEODORE", "dval": "Recent work about synthetic indoor datasets from perspective views has shown significant improvements of object detection results with Convolutional Neural Networks(CNNs). In this paper, we introduce THEODORE: a novel, large-scale indoor dataset containing 100,000 high- resolution diversified fisheye images with 14 classes. To this end, we create 3D virtual environments of living rooms, different human characters and interior textures. Beside capturing fisheye images from virtual environments we create annotations for semantic segmentation, instance masks and bounding boxes for object detection tasks. We compare our synthetic dataset to state of the art real-world datasets for omnidirectional images. Based on MS COCO weights, we show that our dataset is well suited for fine-tuning CNNs for object detection. Through a high generalization of our models by means of image synthesis and domain randomization we reach an AP up to 0.84 for class person on High-Definition Analytics dataset." }, { "dkey": "E2E", "dval": "End-to-End NLG Challenge (E2E) aims to assess whether recent end-to-end NLG systems can generate more complex output by learning from datasets containing higher lexical richness, syntactic complexity and diverse discourse phenomena." }, { "dkey": "ANLI", "dval": "The Adversarial Natural Language Inference (ANLI, Nie et al.) is a new large-scale NLI benchmark dataset, collected via an iterative, adversarial human-and-model-in-the-loop procedure. Particular, the data is selected to be difficult to the state-of-the-art models, including BERT and RoBERTa." }, { "dkey": "SemArt", "dval": "SemArt is a multi-modal dataset for semantic art understanding. SemArt is a collection of fine-art painting images in which each image is associated to a number of attributes and a textual artistic comment, such as those that appear in art catalogues or museum collections. It contains 21,384 samples that provides artistic comments along with fine-art paintings and their attributes for studying semantic art understanding." }, { "dkey": "RFW", "dval": "To validate the racial bias of four commercial APIs and four state-of-the-art (SOTA) algorithms." }, { "dkey": "SGD", "dval": "The Schema-Guided Dialogue (SGD) dataset consists of over 20k annotated multi-domain, task-oriented conversations between a human and a virtual assistant. These conversations involve interactions with services and APIs spanning 20 domains, ranging from banks and events to media, calendar, travel, and weather. For most of these domains, the dataset contains multiple different APIs, many of which have overlapping functionalities but different interfaces, which reflects common real-world scenarios. The wide range of available annotations can be used for intent prediction, slot filling, dialogue state tracking, policy imitation learning, language generation, user simulation learning, among other tasks in large-scale virtual assistants. Besides these, the dataset has unseen domains and services in the evaluation set to quantify the performance in zero-shot or few shot settings." } ]
I want to build a robust defense system for adversarial attacks.
adversarial defense images
2,020
[ "Oxford5k", "PHM2017", "UNSW-NB15", "APRICOT", "ECSSD", "DailyDialog++" ]
[ "ImageNet", "CIFAR-10" ]
[ { "dkey": "ImageNet", "dval": "The ImageNet dataset contains 14,197,122 annotated images according to the WordNet hierarchy. Since 2010 the dataset is used in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), a benchmark in image classification and object detection.\nThe publicly released dataset contains a set of manually annotated training images. A set of test images is also released, with the manual annotations withheld.\nILSVRC annotations fall into one of two categories: (1) image-level annotation of a binary label for the presence or absence of an object class in the image, e.g., “there are cars in this image” but “there are no tigers,” and (2) object-level annotation of a tight bounding box and class label around an object instance in the image, e.g., “there is a screwdriver centered at position (20,25) with width of 50 pixels and height of 30 pixels”.\nThe ImageNet project does not own the copyright of the images, therefore only thumbnails and URLs of images are provided.\n\n\nTotal number of non-empty WordNet synsets: 21841\nTotal number of images: 14197122\nNumber of images with bounding box annotations: 1,034,908\nNumber of synsets with SIFT features: 1000\nNumber of images with SIFT features: 1.2 million" }, { "dkey": "CIFAR-10", "dval": "The CIFAR-10 dataset (Canadian Institute for Advanced Research, 10 classes) is a subset of the Tiny Images dataset and consists of 60000 32x32 color images. The images are labelled with one of 10 mutually exclusive classes: airplane, automobile (but not truck or pickup truck), bird, cat, deer, dog, frog, horse, ship, and truck (but not pickup truck). There are 6000 images per class with 5000 training and 1000 testing images per class.\n\nThe criteria for deciding whether an image belongs to a class were as follows:\n\n\nThe class name should be high on the list of likely answers to the question “What is in this picture?”\nThe image should be photo-realistic. Labelers were instructed to reject line drawings.\nThe image should contain only one prominent instance of the object to which the class refers.\nThe object may be partially occluded or seen from an unusual viewpoint as long as its identity is still clear to the labeler." }, { "dkey": "Oxford5k", "dval": "Oxford5K is the Oxford Buildings Dataset, which contains 5062 images collected from Flickr. It offers a set of 55 queries for 11 landmark buildings, five for each landmark." }, { "dkey": "PHM2017", "dval": "PHM2017 is a new dataset consisting of 7,192 English tweets across six diseases and conditions: Alzheimer’s Disease, heart attack (any severity), Parkinson’s disease, cancer (any type), Depression (any severity), and Stroke. The Twitter search API was used to retrieve the data using the colloquial disease names as search keywords, with the expectation of retrieving a high-recall, low precision dataset. After removing the re-tweets and replies, the tweets were manually annotated. The labels are:\n\n\nself-mention. The tweet contains a health mention with a health self-report of the Twitter account owner, e.g., \"However, I worked hard and ran for Tokyo Mayer Election Campaign in January through February, 2014, without publicizing the cancer.\"\nother-mention. The tweet contains a health mention of a health report about someone other than the account owner, e.g., \"Designer with Parkinson’s couldn’t work then engineer invents bracelet + changes her world\"\nawareness. 
The tweet contains the disease name, but does not mention a specific person, e.g., \"A Month Before a Heart Attack, Your Body Will Warn You With These 8 Signals\"\nnon-health. The tweet contains the disease name, but the tweet topic is not about health. \"Now I can have cancer on my wall for all to see <3\"" }, { "dkey": "UNSW-NB15", "dval": "UNSW-NB15 is a network intrusion dataset. It contains nine different attacks, includes DoS, worms, Backdoors, and Fuzzers. The dataset contains raw network packets. The number of records in the training set is 175,341 records and the testing set is 82,332 records from the different types, attack and normal.\n\nPaper: UNSW-NB15: a comprehensive data set for network intrusion detection systems" }, { "dkey": "APRICOT", "dval": "APRICOT is a collection of over 1,000 annotated photographs of printed adversarial patches in public locations. The patches target several object categories for three COCO-trained detection models, and the photos represent natural variation in position, distance, lighting conditions, and viewing angle." }, { "dkey": "ECSSD", "dval": "The Extended Complex Scene Saliency Dataset (ECSSD) is comprised of complex scenes, presenting textures and structures common to real-world images. ECSSD contains 1,000 intricate images and respective ground-truth saliency maps, created as an average of the labeling of five human participants." }, { "dkey": "DailyDialog++", "dval": "Consists of (i) five relevant responses for each context and (ii) five adversarially crafted irrelevant responses for each context." } ]
I'm trying to understand the effects of various pretraining objectives on the learned
function word comprehension
2,019
[ "ExtremeWeather", "SuperGLUE", "ProofWriter", "CosmosQA", "CHALET", "3D Ken Burns Dataset" ]
[ "COCO", "GLUE", "WikiText-103" ]
[ { "dkey": "COCO", "dval": "The MS COCO (Microsoft Common Objects in Context) dataset is a large-scale object detection, segmentation, key-point detection, and captioning dataset. The dataset consists of 328K images.\n\nSplits:\nThe first version of MS COCO dataset was released in 2014. It contains 164K images split into training (83K), validation (41K) and test (41K) sets. In 2015 additional test set of 81K images was released, including all the previous test images and 40K new images.\n\nBased on community feedback, in 2017 the training/validation split was changed from 83K/41K to 118K/5K. The new split uses the same images and annotations. The 2017 test set is a subset of 41K images of the 2015 test set. Additionally, the 2017 release contains a new unannotated dataset of 123K images.\n\nAnnotations:\nThe dataset has annotations for\n\n\nobject detection: bounding boxes and per-instance segmentation masks with 80 object categories,\ncaptioning: natural language descriptions of the images (see MS COCO Captions),\nkeypoints detection: containing more than 200,000 images and 250,000 person instances labeled with keypoints (17 possible keypoints, such as left eye, nose, right hip, right ankle),\nstuff image segmentation – per-pixel segmentation masks with 91 stuff categories, such as grass, wall, sky (see MS COCO Stuff),\npanoptic: full scene segmentation, with 80 thing categories (such as person, bicycle, elephant) and a subset of 91 stuff categories (grass, sky, road),\ndense pose: more than 39,000 images and 56,000 person instances labeled with DensePose annotations – each labeled person is annotated with an instance id and a mapping between image pixels that belong to that person body and a template 3D model.\nThe annotations are publicly available only for training and validation images." }, { "dkey": "GLUE", "dval": "General Language Understanding Evaluation (GLUE) benchmark is a collection of nine natural language understanding tasks, including single-sentence tasks CoLA and SST-2, similarity and paraphrasing tasks MRPC, STS-B and QQP, and natural language inference tasks MNLI, QNLI, RTE and WNLI." }, { "dkey": "WikiText-103", "dval": "The WikiText language modeling dataset is a collection of over 100 million tokens extracted from the set of verified Good and Featured articles on Wikipedia. The dataset is available under the Creative Commons Attribution-ShareAlike License.\n\nCompared to the preprocessed version of Penn Treebank (PTB), WikiText-2 is over 2 times larger and WikiText-103 is over 110 times larger. The WikiText dataset also features a far larger vocabulary and retains the original case, punctuation and numbers - all of which are removed in PTB. As it is composed of full articles, the dataset is well suited for models that can take advantage of long term dependencies." }, { "dkey": "ExtremeWeather", "dval": "Encourages machine learning research in this area and to help facilitate further work in understanding and mitigating the effects of climate change." }, { "dkey": "SuperGLUE", "dval": "SuperGLUE is a benchmark dataset designed to pose a more rigorous test of language understanding than GLUE. SuperGLUE has the same high-level motivation as GLUE: to provide a simple, hard-to-game measure of progress toward general-purpose language understanding technologies for English. 
SuperGLUE follows the basic design of GLUE: It consists of a public leaderboard built around eight language understanding tasks, drawing on existing data, accompanied by a single-number\nperformance metric, and an analysis toolkit. However, it improves upon GLUE in several ways:\n\n\nMore challenging tasks: SuperGLUE retains the two hardest tasks in GLUE. The remaining tasks were identified from those submitted to an open call for task proposals and were selected based on difficulty for current NLP approaches.\nMore diverse task formats: The task formats in GLUE are limited to sentence- and sentence-pair classification. The authors expand the set of task formats in SuperGLUE to include\ncoreference resolution and question answering (QA).\nComprehensive human baselines: the authors include human performance estimates for all benchmark tasks, which verify that substantial headroom exists between a strong BERT-based baseline and human performance.\nImproved code support: SuperGLUE is distributed with a new, modular toolkit for work on pretraining, multi-task learning, and transfer learning in NLP, built around standard tools including PyTorch (Paszke et al., 2017) and AllenNLP (Gardner et al., 2017).\nRefined usage rules: The conditions for inclusion on the SuperGLUE leaderboard were revamped to ensure fair competition, an informative leaderboard, and full credit\nassignment to data and task creators." }, { "dkey": "ProofWriter", "dval": "The ProofWriter dataset contains many small rulebases of facts and rules, expressed in English. Each rulebase also has a set of questions (English statements) which can either be proven true or false using proofs of various depths, or the answer is “Unknown” (in open-world setting, OWA) or assumed negative (in closed-world setting, CWA).\n\nThe dataset includes full proofs with intermediate conclusions, which models can try to reproduce.\n\nThe dataset supports various tasks:\n\n\nGiven rulebase + question, what is answer + proof (w/intermediates)?\nGiven rulebase, what are all the provable implications?\nGiven rulebase + question without proof, what single fact can be added to make the question true?" }, { "dkey": "CosmosQA", "dval": "CosmosQA is a large-scale dataset of 35.6K problems that require commonsense-based reading comprehension, formulated as multiple-choice questions. It focuses on reading between the lines over a diverse collection of people’s everyday narratives, asking questions concerning on the likely causes or effects of events that require reasoning beyond the exact text spans in the context." }, { "dkey": "CHALET", "dval": "CHALET is a 3D house simulator with support for navigation and manipulation. Unlike existing systems, CHALET supports both a wide range of object manipulation, as well as supporting complex environemnt layouts consisting of multiple rooms. The range of object manipulations includes the ability to pick up and place objects, toggle the state of objects like taps or televesions, open or close containers, and insert or remove objects from these containers. In addition, the simulator comes with 58 rooms that can be combined to create houses, including 10 default house layouts. CHALET is therefore suitable for setting up challenging environments for various AI tasks that require complex language understanding and planning, such as navigation, manipulation, instruction following, and interactive question answering." 
}, { "dkey": "3D Ken Burns Dataset", "dval": "Provides a large-scale synthetic dataset which contains accurate ground truth depth of various photo-realistic scenes." } ]
I am a newbie of deep learning. I want to learn a simple unified neural
person re-identification images
2,019
[ "Flightmare Simulator", "ConvAI2", "SNIPS", "dMelodies" ]
[ "VehicleID", "CUHK03" ]
[ { "dkey": "VehicleID", "dval": "The “VehicleID” dataset contains CARS captured during the daytime by multiple real-world surveillance cameras distributed in a small city in China. There are 26,267 vehicles (221,763 images in total) in the entire dataset. Each image is attached with an id label corresponding to its identity in real world. In addition, the dataset contains manually labelled 10319 vehicles (90196 images in total) of their vehicle model information(i.e.“MINI-cooper”, “Audi A6L” and “BWM 1 Series”)." }, { "dkey": "CUHK03", "dval": "The CUHK03 consists of 14,097 images of 1,467 different identities, where 6 campus cameras were deployed for image collection and each identity is captured by 2 campus cameras. This dataset provides two types of annotations, one by manually labelled bounding boxes and the other by bounding boxes produced by an automatic detector. The dataset also provides 20 random train/test splits in which 100 identities are selected for testing and the rest for training" }, { "dkey": "Flightmare Simulator", "dval": "Flightmare is composed of two main components: a configurable rendering engine built on Unity and a flexible physics engine for dynamics simulation. Those two components are totally decoupled and can run independently from each other. Flightmare comes with several desirable features: (i) a large multi-modal sensor suite, including an interface to extract the 3D point-cloud of the scene; (ii) an API for reinforcement learning which can simulate hundreds of quadrotors in parallel; and (iii) an integration with a virtual-reality headset for interaction with the simulated environment. Flightmare can be used for various applications, including path-planning, reinforcement learning, visual-inertial odometry, deep learning, human-robot interaction, etc." }, { "dkey": "ConvAI2", "dval": "The ConvAI2 NeurIPS competition aimed at finding approaches to creating high-quality dialogue agents capable of meaningful open domain conversation. The ConvAI2 dataset for training models is based on the PERSONA-CHAT dataset. The speaker pairs each have assigned profiles coming from a set of 1155 possible personas (at training time), each consisting of at least 5 profile sentences, setting aside 100 never seen before personas for validation. As the original PERSONA-CHAT test set was released, a new hidden test set consisted of 100 new personas and over 1,015 dialogs was created by crowdsourced workers.\n\nTo avoid modeling that takes advantage of trivial word overlap, additional rewritten sets of the same train and test personas were crowdsourced, with related sentences that are rephrases, generalizations or specializations, rendering the task much more challenging. For example “I just got my nails done” is revised as “I love to pamper myself on a regular basis” and “I am on a diet now” is revised as “I need to lose weight.”\n\nThe training, validation and hidden test sets consists of 17,878, 1,000 and 1,015 dialogues, respectively." }, { "dkey": "SNIPS", "dval": "The SNIPS Natural Language Understanding benchmark is a dataset of over 16,000 crowdsourced queries distributed among 7 user intents of various complexity:\n\n\nSearchCreativeWork (e.g. Find me the I, Robot television show),\nGetWeather (e.g. Is it windy in Boston, MA right now?),\nBookRestaurant (e.g. I want to book a highly rated restaurant in Paris tomorrow night),\nPlayMusic (e.g. Play the last track from Beyoncé off Spotify),\nAddToPlaylist (e.g. Add Diamonds to my roadtrip playlist),\nRateBook (e.g. 
Give 6 stars to Of Mice and Men),\nSearchScreeningEvent (e.g. Check the showtimes for Wonder Woman in Paris).\nThe training set contains of 13,084 utterances, the validation set and the test set contain 700 utterances each, with 100 queries per intent." }, { "dkey": "dMelodies", "dval": "dMelodies is dataset of simple 2-bar melodies generated using 9 independent latent factors of variation where each data point represents a unique melody based on the following constraints:\n- Each melody will correspond to a unique scale (major, minor, blues, etc.).\n- Each melody plays the arpeggios using the standard I-IV-V-I cadence chord pattern.\n- Bar 1 plays the first 2 chords (6 notes), Bar 2 plays the second 2 chords (6 notes).\n- Each played note is an 8th note." } ]
We propose an end-to-end approach for video object segmentation. We integrate the
instance level video object segmentation
2,018
[ "THEODORE", "DeeperForensics-1.0", "PHOENIX14T", "iVQA", "MEDIA" ]
[ "DAVIS", "DAVIS 2016" ]
[ { "dkey": "DAVIS", "dval": "The Densely Annotation Video Segmentation dataset (DAVIS) is a high quality and high resolution densely annotated video segmentation dataset under two resolutions, 480p and 1080p. There are 50 video sequences with 3455 densely annotated frames in pixel level. 30 videos with 2079 frames are for training and 20 videos with 1376 frames are for validation." }, { "dkey": "DAVIS 2016", "dval": "DAVIS16 is a dataset for video object segmentation which consists of 50 videos in total (30 videos for training and 20 for testing). Per-frame pixel-wise annotations are offered." }, { "dkey": "THEODORE", "dval": "Recent work about synthetic indoor datasets from perspective views has shown significant improvements of object detection results with Convolutional Neural Networks(CNNs). In this paper, we introduce THEODORE: a novel, large-scale indoor dataset containing 100,000 high- resolution diversified fisheye images with 14 classes. To this end, we create 3D virtual environments of living rooms, different human characters and interior textures. Beside capturing fisheye images from virtual environments we create annotations for semantic segmentation, instance masks and bounding boxes for object detection tasks. We compare our synthetic dataset to state of the art real-world datasets for omnidirectional images. Based on MS COCO weights, we show that our dataset is well suited for fine-tuning CNNs for object detection. Through a high generalization of our models by means of image synthesis and domain randomization we reach an AP up to 0.84 for class person on High-Definition Analytics dataset." }, { "dkey": "DeeperForensics-1.0", "dval": "DeeperForensics-1.0 represents the largest face forgery detection dataset by far, with 60,000 videos constituted by a total of 17.6 million frames, 10 times larger than existing datasets of the same kind. The full dataset includes 48,475 source videos and 11,000 manipulated videos. The source videos are collected on 100 paid and consented actors from 26 countries, and the manipulated videos are generated by a newly proposed many-to-many end-to-end face swapping method, DF-VAE. 7 types of real-world perturbations at 5 intensity levels are employed to ensure a larger scale and higher diversity." }, { "dkey": "PHOENIX14T", "dval": "Over a period of three years (2009 - 2011) the daily news and weather forecast airings of the German public tv-station PHOENIX featuring sign language interpretation have been recorded and the weather forecasts of a subset of 386 editions have been transcribed using gloss notation. Furthermore, we used automatic speech recognition with manual cleaning to transcribe the original German speech. As such, this corpus allows to train end-to-end sign language translation systems from sign language video input to spoken language.\n\nThe signing is recorded by a stationary color camera placed in front of the sign language interpreters. Interpreters wear dark clothes in front of an artificial grey background with color transition. All recorded videos are at 25 frames per second and the size of the frames is 210 by 260 pixels. Each frame shows the interpreter box only." }, { "dkey": "iVQA", "dval": "An open-ended VideoQA benchmark that aims to: i) provide a well-defined evaluation by including five correct answer annotations per question and ii) avoid questions which can be answered without the video. \n\niVQA contains 10,000 video clips with one question and five corresponding answers per clip. 
Moreover, we manually reduce the language bias by excluding questions that could be answered without watching the video." }, { "dkey": "MEDIA", "dval": "The MEDIA French corpus is dedicated to semantic extraction from speech in a context of human/machine dialogues. The corpus has manual transcription and conceptual annotation of dialogues from 250 speakers. It is split into the following three parts: (1) the training set (720 dialogues, 12K sentences), (2) the development set (79 dialogues, 1.3K sentences), and (3) the test set (200 dialogues, 3K sentences)." } ]
I want to train a CNN model for face recognition.
face recognition images
2,019
[ "SNIPS", "Glint360K", "ConvAI2", "FDDB", "AFLW2000-3D" ]
[ "CASIA-WebFace", "MegaFace" ]
[ { "dkey": "CASIA-WebFace", "dval": "The CASIA-WebFace dataset is used for face verification and face identification tasks. The dataset contains 494,414 face images of 10,575 real identities collected from the web." }, { "dkey": "MegaFace", "dval": "MegaFace was a publicly available dataset which is used for evaluating the performance of face recognition algorithms with up to a million distractors (i.e., up to a million people who are not in the test set). MegaFace contains 1M images from 690K individuals with unconstrained pose, expression, lighting, and exposure. MegaFace captures many different subjects rather than many images of a small number of subjects. The gallery set of MegaFace is collected from a subset of Flickr. The probe set of MegaFace used in the challenge consists of two databases; Facescrub and FGNet. FGNet contains 975 images of 82 individuals, each with several images spanning ages from 0 to 69. Facescrub dataset contains more than 100K face images of 530 people. The MegaFace challenge evaluates performance of face recognition algorithms by increasing the numbers of “distractors” (going from 10 to 1M) in the gallery set. In order to evaluate the face recognition algorithms fairly, MegaFace challenge has two protocols including large or small training sets. If a training set has more than 0.5M images and 20K subjects, it is considered as large. Otherwise, it is considered as small.\n\nNOTE: This dataset has been retired." }, { "dkey": "SNIPS", "dval": "The SNIPS Natural Language Understanding benchmark is a dataset of over 16,000 crowdsourced queries distributed among 7 user intents of various complexity:\n\n\nSearchCreativeWork (e.g. Find me the I, Robot television show),\nGetWeather (e.g. Is it windy in Boston, MA right now?),\nBookRestaurant (e.g. I want to book a highly rated restaurant in Paris tomorrow night),\nPlayMusic (e.g. Play the last track from Beyoncé off Spotify),\nAddToPlaylist (e.g. Add Diamonds to my roadtrip playlist),\nRateBook (e.g. Give 6 stars to Of Mice and Men),\nSearchScreeningEvent (e.g. Check the showtimes for Wonder Woman in Paris).\nThe training set contains of 13,084 utterances, the validation set and the test set contain 700 utterances each, with 100 queries per intent." }, { "dkey": "Glint360K", "dval": "The largest and cleanest face recognition dataset Glint360K, \nwhich contains 17,091,657 images of 360,232 individuals, baseline models trained on Glint360K can easily achieve state-of-the-art performance." }, { "dkey": "ConvAI2", "dval": "The ConvAI2 NeurIPS competition aimed at finding approaches to creating high-quality dialogue agents capable of meaningful open domain conversation. The ConvAI2 dataset for training models is based on the PERSONA-CHAT dataset. The speaker pairs each have assigned profiles coming from a set of 1155 possible personas (at training time), each consisting of at least 5 profile sentences, setting aside 100 never seen before personas for validation. As the original PERSONA-CHAT test set was released, a new hidden test set consisted of 100 new personas and over 1,015 dialogs was created by crowdsourced workers.\n\nTo avoid modeling that takes advantage of trivial word overlap, additional rewritten sets of the same train and test personas were crowdsourced, with related sentences that are rephrases, generalizations or specializations, rendering the task much more challenging. 
For example “I just got my nails done” is revised as “I love to pamper myself on a regular basis” and “I am on a diet now” is revised as “I need to lose weight.”\n\nThe training, validation and hidden test sets consists of 17,878, 1,000 and 1,015 dialogues, respectively." }, { "dkey": "FDDB", "dval": "The Face Detection Dataset and Benchmark (FDDB) dataset is a collection of labeled faces from Faces in the Wild dataset. It contains a total of 5171 face annotations, where images are also of various resolution, e.g. 363x450 and 229x410. The dataset incorporates a range of challenges, including difficult pose angles, out-of-focus faces and low resolution. Both greyscale and color images are included." }, { "dkey": "AFLW2000-3D", "dval": "AFLW2000-3D is a dataset of 2000 images that have been annotated with image-level 68-point 3D facial landmarks. This dataset is used for evaluation of 3D facial landmark detection models. The head poses are very diverse and often hard to be detected by a CNN-based face detector." } ]
I want to train a supervised model for general human action recognition from video.
general human action recognition video
2,017
[ "Kinetics", "Kinetics-600", "EPIC-KITCHENS-100", "AViD", "HAA500" ]
[ "UCF101", "HMDB51" ]
[ { "dkey": "UCF101", "dval": "UCF101 dataset is an extension of UCF50 and consists of 13,320 video clips, which are classified into 101 categories. These 101 categories can be classified into 5 types (Body motion, Human-human interactions, Human-object interactions, Playing musical instruments and Sports). The total length of these video clips is over 27 hours. All the videos are collected from YouTube and have a fixed frame rate of 25 FPS with the resolution of 320 × 240." }, { "dkey": "HMDB51", "dval": "The HMDB51 dataset is a large collection of realistic videos from various sources, including movies and web videos. The dataset is composed of 6,766 video clips from 51 action categories (such as “jump”, “kiss” and “laugh”), with each category containing at least 101 clips. The original evaluation scheme uses three different training/testing splits. In each split, each action class has 70 clips for training and 30 clips for testing. The average accuracy over these three splits is used to measure the final performance." }, { "dkey": "Kinetics", "dval": "The Kinetics dataset is a large-scale, high-quality dataset for human action recognition in videos. The dataset consists of around 500,000 video clips covering 600 human action classes with at least 600 video clips for each action class. Each video clip lasts around 10 seconds and is labeled with a single action class. The videos are collected from YouTube." }, { "dkey": "Kinetics-600", "dval": "The Kinetics-600 is a large-scale action recognition dataset which consists of around 480K videos from 600 action categories. The 480K videos are divided into 390K, 30K, 60K for training, validation and test sets, respectively. Each video in the dataset is a 10-second clip of action moment annotated from raw YouTube video. It is an extensions of the Kinetics-400 dataset." }, { "dkey": "EPIC-KITCHENS-100", "dval": "This paper introduces the pipeline to scale the largest dataset in egocentric vision EPIC-KITCHENS. The effort culminates in EPIC-KITCHENS-100, a collection of 100 hours, 20M frames, 90K actions in 700 variable-length videos, capturing long-term unscripted activities in 45 environments, using head-mounted cameras. Compared to its previous version (EPIC-KITCHENS-55), EPIC-KITCHENS-100 has been annotated using a novel pipeline that allows denser (54% more actions per minute) and more complete annotations of fine-grained actions (+128% more action segments). This collection also enables evaluating the \"test of time\" - i.e. whether models trained on data collected in 2018 can generalise to new footage collected under the same hypotheses albeit \"two years on\".\nThe dataset is aligned with 6 challenges: action recognition (full and weak supervision), action detection, action anticipation, cross-modal retrieval (from captions), as well as unsupervised domain adaptation for action recognition. For each challenge, we define the task, provide baselines and evaluation metrics." }, { "dkey": "AViD", "dval": "Is a collection of action videos from many different countries. The motivation is to create a public dataset that would benefit training and pretraining of action recognition models for everybody, rather than making it useful for limited countries." }, { "dkey": "HAA500", "dval": "HAA500 is a manually annotated human-centric atomic action dataset for action recognition on 500 classes with over 591k labeled frames. 
Unlike existing atomic action datasets, where coarse-grained atomic actions were labeled with action-verbs, e.g., \"Throw\", HAA500 contains fine-grained atomic actions where only consistent actions fall under the same label, e.g., \"Baseball Pitching\" vs \"Free Throw in Basketball\", to minimize ambiguities in action classification. HAA500 has been carefully curated to capture the movement of human figures with less spatio-temporal label noise to greatly enhance the training of deep neural networks." } ]
I want to translate between 2D image views and
image 3d shape translation images shapes
2,020
[ "3DMatch", "Dayton", "MTNT", "SNIPS", "UAVA" ]
[ "ShapeNet", "Pix3D" ]
[ { "dkey": "ShapeNet", "dval": "ShapeNet is a large scale repository for 3D CAD models developed by researchers from Stanford University, Princeton University and the Toyota Technological Institute at Chicago, USA. The repository contains over 300M models with 220,000 classified into 3,135 classes arranged using WordNet hypernym-hyponym relationships. ShapeNet Parts subset contains 31,693 meshes categorised into 16 common object classes (i.e. table, chair, plane etc.). Each shapes ground truth contains 2-5 parts (with a total of 50 part classes)." }, { "dkey": "Pix3D", "dval": "The Pix3D dataset is a large-scale benchmark of diverse image-shape pairs with pixel-level 2D-3D alignment. Pix3D has wide applications in shape-related tasks including reconstruction, retrieval, viewpoint estimation, etc." }, { "dkey": "3DMatch", "dval": "The 3DMATCH benchmark evaluates how well descriptors (both 2D and 3D) can establish correspondences between RGB-D frames of different views. The dataset contains 2D RGB-D patches and 3D patches (local TDF voxel grid volumes) of wide-baselined correspondences. \n\nThe pixel size of each 2D patch is determined by the projection of the 0.3m3 local 3D patch around the interest point onto the image plane." }, { "dkey": "Dayton", "dval": "The Dayton dataset is a dataset for ground-to-aerial (or aerial-to-ground) image translation, or cross-view image synthesis. It contains images of road views and aerial views of roads. There are 76,048 images in total and the train/test split is 55,000/21,048. The images in the original dataset have 354×354 resolution." }, { "dkey": "MTNT", "dval": "The Machine Translation of Noisy Text (MTNT) dataset is a Machine Translation dataset that consists of noisy comments on Reddit and professionally sourced translation. The translation are between French, Japanese and French, with between 7k and 37k sentence per language pair." }, { "dkey": "SNIPS", "dval": "The SNIPS Natural Language Understanding benchmark is a dataset of over 16,000 crowdsourced queries distributed among 7 user intents of various complexity:\n\n\nSearchCreativeWork (e.g. Find me the I, Robot television show),\nGetWeather (e.g. Is it windy in Boston, MA right now?),\nBookRestaurant (e.g. I want to book a highly rated restaurant in Paris tomorrow night),\nPlayMusic (e.g. Play the last track from Beyoncé off Spotify),\nAddToPlaylist (e.g. Add Diamonds to my roadtrip playlist),\nRateBook (e.g. Give 6 stars to Of Mice and Men),\nSearchScreeningEvent (e.g. Check the showtimes for Wonder Woman in Paris).\nThe training set contains of 13,084 utterances, the validation set and the test set contain 700 utterances each, with 100 queries per intent." }, { "dkey": "UAVA", "dval": "The UAVA,<i>UAV-Assistant</i>, dataset is specifically designed for fostering applications which consider UAVs and humans as cooperative agents.\nWe employ a real-world 3D scanned dataset (<a href=\"https://niessner.github.io/Matterport/\">Matterport3D</a>), physically-based rendering, a gamified simulator for realistic drone navigation trajectory collection, to generate realistic multimodal data both from the user’s exocentric view of the drone, as well as the drone’s egocentric view." } ]
We introduce an efficient, online and fully convolutional network (FCN) to handle video object segmentation
video object segmentation
2,017
[ "THEODORE", "BSDS500", "NVGesture", "BraTS 2017" ]
[ "DAVIS", "COCO" ]
[ { "dkey": "DAVIS", "dval": "The Densely Annotation Video Segmentation dataset (DAVIS) is a high quality and high resolution densely annotated video segmentation dataset under two resolutions, 480p and 1080p. There are 50 video sequences with 3455 densely annotated frames in pixel level. 30 videos with 2079 frames are for training and 20 videos with 1376 frames are for validation." }, { "dkey": "COCO", "dval": "The MS COCO (Microsoft Common Objects in Context) dataset is a large-scale object detection, segmentation, key-point detection, and captioning dataset. The dataset consists of 328K images.\n\nSplits:\nThe first version of MS COCO dataset was released in 2014. It contains 164K images split into training (83K), validation (41K) and test (41K) sets. In 2015 additional test set of 81K images was released, including all the previous test images and 40K new images.\n\nBased on community feedback, in 2017 the training/validation split was changed from 83K/41K to 118K/5K. The new split uses the same images and annotations. The 2017 test set is a subset of 41K images of the 2015 test set. Additionally, the 2017 release contains a new unannotated dataset of 123K images.\n\nAnnotations:\nThe dataset has annotations for\n\n\nobject detection: bounding boxes and per-instance segmentation masks with 80 object categories,\ncaptioning: natural language descriptions of the images (see MS COCO Captions),\nkeypoints detection: containing more than 200,000 images and 250,000 person instances labeled with keypoints (17 possible keypoints, such as left eye, nose, right hip, right ankle),\nstuff image segmentation – per-pixel segmentation masks with 91 stuff categories, such as grass, wall, sky (see MS COCO Stuff),\npanoptic: full scene segmentation, with 80 thing categories (such as person, bicycle, elephant) and a subset of 91 stuff categories (grass, sky, road),\ndense pose: more than 39,000 images and 56,000 person instances labeled with DensePose annotations – each labeled person is annotated with an instance id and a mapping between image pixels that belong to that person body and a template 3D model.\nThe annotations are publicly available only for training and validation images." }, { "dkey": "THEODORE", "dval": "Recent work about synthetic indoor datasets from perspective views has shown significant improvements of object detection results with Convolutional Neural Networks(CNNs). In this paper, we introduce THEODORE: a novel, large-scale indoor dataset containing 100,000 high- resolution diversified fisheye images with 14 classes. To this end, we create 3D virtual environments of living rooms, different human characters and interior textures. Beside capturing fisheye images from virtual environments we create annotations for semantic segmentation, instance masks and bounding boxes for object detection tasks. We compare our synthetic dataset to state of the art real-world datasets for omnidirectional images. Based on MS COCO weights, we show that our dataset is well suited for fine-tuning CNNs for object detection. Through a high generalization of our models by means of image synthesis and domain randomization we reach an AP up to 0.84 for class person on High-Definition Analytics dataset." }, { "dkey": "BSDS500", "dval": "Berkeley Segmentation Data Set 500 (BSDS500) is a standard benchmark for contour detection. This dataset is designed for evaluating natural edge detection that includes not only object contours but also object interior boundaries and background boundaries. 
It includes 500 natural images with carefully annotated boundaries collected from multiple users. The dataset is divided into three parts: 200 for training, 100 for validation and the remaining 200 for testing." }, { "dkey": "NVGesture", "dval": "The NVGesture dataset focuses on touchless driver control. It contains 1532 dynamic gestures falling into 25 classes. It includes 1050 samples for training and 482 for testing. The videos are recorded with three modalities (RGB, depth, and infrared)." }, { "dkey": "BraTS 2017", "dval": "The BRATS2017 dataset. It contains 285 brain tumor MRI scans, with four MRI modalities as T1, T1ce, T2, and Flair for each scan. The dataset also provides full masks for brain tumors, with labels for ED, ET, NET/NCR. The segmentation evaluation is based on three tasks: WT, TC and ET segmentation." } ]
I want to build an end-to-end high-performance person re-identification system
person re-identification images
2,019
[ "MLe2e", "Airport", "E2E", "ACDC" ]
[ "Market-1501", "CUHK03" ]
[ { "dkey": "Market-1501", "dval": "Market-1501 is a large-scale public benchmark dataset for person re-identification. It contains 1501 identities which are captured by six different cameras, and 32,668 pedestrian image bounding-boxes obtained using the Deformable Part Models pedestrian detector. Each person has 3.6 images on average at each viewpoint. The dataset is split into two parts: 750 identities are utilized for training and the remaining 751 identities are used for testing. In the official testing protocol 3,368 query images are selected as probe set to find the correct match across 19,732 reference gallery images." }, { "dkey": "CUHK03", "dval": "The CUHK03 consists of 14,097 images of 1,467 different identities, where 6 campus cameras were deployed for image collection and each identity is captured by 2 campus cameras. This dataset provides two types of annotations, one by manually labelled bounding boxes and the other by bounding boxes produced by an automatic detector. The dataset also provides 20 random train/test splits in which 100 identities are selected for testing and the rest for training" }, { "dkey": "MLe2e", "dval": "MLe2 is a dataset for the evaluation of scene text end-to-end reading systems and all intermediate stages such as text detection, script identification and text recognition. The dataset contains a total of 711 scene images covering four different scripts (Latin, Chinese, Kannada, and Hangul)." }, { "dkey": "Airport", "dval": "The Airport dataset is a dataset for person re-identification which consists of 39,902 images and 9,651 identities across six cameras." }, { "dkey": "E2E", "dval": "End-to-End NLG Challenge (E2E) aims to assess whether recent end-to-end NLG systems can generate more complex output by learning from datasets containing higher lexical richness, syntactic complexity and diverse discourse phenomena." }, { "dkey": "ACDC", "dval": "The goal of the Automated Cardiac Diagnosis Challenge (ACDC) challenge is to:\n\n\ncompare the performance of automatic methods on the segmentation of the left ventricular endocardium and epicardium as the right ventricular endocardium for both end diastolic and end systolic phase instances;\ncompare the performance of automatic methods for the classification of the examinations in five classes (normal case, heart failure with infarction, dilated cardiomyopathy, hypertrophic cardiomyopathy, abnormal right ventricle).\n\nThe overall ACDC dataset was created from real clinical exams acquired at the University Hospital of Dijon. Acquired data were fully anonymized and handled within the regulations set by the local ethical committee of the Hospital of Dijon (France). Our dataset covers several well-defined pathologies with enough cases to (1) properly train machine learning methods and (2) clearly assess the variations of the main physiological parameters obtained from cine-MRI (in particular diastolic volume and ejection fraction). The dataset is composed of 150 exams (all from different patients) divided into 5 evenly distributed subgroups (4 pathological plus 1 healthy subject groups) as described below. 
Furthermore, each patient comes with the following additional information : weight, height, as well as the diastolic and systolic phase instants.\n\nThe database is made available to participants through two datasets from the dedicated online evaluation website after a personal registration: i) a training dataset of 100 patients along with the corresponding manual references based on the analysis of one clinical expert; ii) a testing dataset composed of 50 new patients, without manual annotations but with the patient information given above. The raw input images are provided through the Nifti format." } ]
A deep multi-task learning framework for face attribute estimation.
face attribute estimation image
2,018
[ "OmniArt", "WFLW", "NYU-VP", "RVL-CDIP", "K2HPD", "IJB-A" ]
[ "MORPH", "CelebA" ]
[ { "dkey": "MORPH", "dval": "MORPH is a facial age estimation dataset, which contains 55,134 facial images of 13,617 subjects ranging from 16 to 77 years old." }, { "dkey": "CelebA", "dval": "CelebFaces Attributes dataset contains 202,599 face images of the size 178×218 from 10,177 celebrities, each annotated with 40 binary labels indicating facial attributes like hair color, gender and age." }, { "dkey": "OmniArt", "dval": "Presents half a million samples and structured meta-data to encourage further research and societal engagement." }, { "dkey": "WFLW", "dval": "The Wider Facial Landmarks in the Wild or WFLW database contains 10000 faces (7500 for training and 2500 for testing) with 98 annotated landmarks. This database also features rich attribute annotations in terms of occlusion, head pose, make-up, illumination, blur and expressions." }, { "dkey": "NYU-VP", "dval": "NYU-VP is a new dataset for multi-model fitting, vanishing point (VP) estimation in this case. Each image is annotated with up to eight vanishing points, and pre-extracted line segments are provided which act as data points for a robust estimator. Due to its size, the dataset is the first to allow for supervised learning of a multi-model fitting task." }, { "dkey": "RVL-CDIP", "dval": "The RVL-CDIP dataset consists of scanned document images belonging to 16 classes such as letter, form, email, resume, memo, etc. The dataset has 320,000 training, 40,000 validation and 40,000 test images. The images are characterized by low quality, noise, and low resolution, typically 100 dpi." }, { "dkey": "K2HPD", "dval": "Includes 100K depth images under challenging scenarios." }, { "dkey": "IJB-A", "dval": "The IARPA Janus Benchmark A (IJB-A) database is developed with the aim to augment more challenges to the face recognition task by collecting facial images with a wide variations in pose, illumination, expression, resolution and occlusion. IJB-A is constructed by collecting 5,712 images and 2,085 videos from 500 identities, with an average of 11.4 images and 4.2 videos per identity." } ]
I want to design a network with minimal computation cost and high performance.
neural architecture optimization images
2,019
[ "COVERAGE", "ESAD", "IBC", "SNIPS", "SUN360", "I-HAZE", "Wiki-CS" ]
[ "ImageNet", "CIFAR-10" ]
[ { "dkey": "ImageNet", "dval": "The ImageNet dataset contains 14,197,122 annotated images according to the WordNet hierarchy. Since 2010 the dataset is used in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), a benchmark in image classification and object detection.\nThe publicly released dataset contains a set of manually annotated training images. A set of test images is also released, with the manual annotations withheld.\nILSVRC annotations fall into one of two categories: (1) image-level annotation of a binary label for the presence or absence of an object class in the image, e.g., “there are cars in this image” but “there are no tigers,” and (2) object-level annotation of a tight bounding box and class label around an object instance in the image, e.g., “there is a screwdriver centered at position (20,25) with width of 50 pixels and height of 30 pixels”.\nThe ImageNet project does not own the copyright of the images, therefore only thumbnails and URLs of images are provided.\n\n\nTotal number of non-empty WordNet synsets: 21841\nTotal number of images: 14197122\nNumber of images with bounding box annotations: 1,034,908\nNumber of synsets with SIFT features: 1000\nNumber of images with SIFT features: 1.2 million" }, { "dkey": "CIFAR-10", "dval": "The CIFAR-10 dataset (Canadian Institute for Advanced Research, 10 classes) is a subset of the Tiny Images dataset and consists of 60000 32x32 color images. The images are labelled with one of 10 mutually exclusive classes: airplane, automobile (but not truck or pickup truck), bird, cat, deer, dog, frog, horse, ship, and truck (but not pickup truck). There are 6000 images per class with 5000 training and 1000 testing images per class.\n\nThe criteria for deciding whether an image belongs to a class were as follows:\n\n\nThe class name should be high on the list of likely answers to the question “What is in this picture?”\nThe image should be photo-realistic. Labelers were instructed to reject line drawings.\nThe image should contain only one prominent instance of the object to which the class refers.\nThe object may be partially occluded or seen from an unusual viewpoint as long as its identity is still clear to the labeler." }, { "dkey": "COVERAGE", "dval": "COVERAGE contains copymove forged (CMFD) images and their originals with similar but genuine objects (SGOs). COVERAGE is designed to highlight and address tamper detection ambiguity of popular methods, caused by self-similarity within natural images. In COVERAGE, forged–original pairs are annotated with (i) the duplicated and forged region masks, and (ii) the tampering factor/similarity metric. For benchmarking, forgery quality is evaluated using (i) computer vision-based methods, and (ii) human detection performance." }, { "dkey": "ESAD", "dval": "ESAD is a large-scale dataset designed to tackle the problem of surgeon action detection in endoscopic minimally invasive surgery. ESAD aims at contributing to increase the effectiveness and reliability of surgical assistant robots by realistically testing their awareness of the actions performed by a surgeon. The dataset provides bounding box annotation for 21 action classes on real endoscopic video frames captured during prostatectomy, and was used as the basis of a recent MIDL 2020 challenge." }, { "dkey": "IBC", "dval": "The Individual Brain Charting (IBC) project aims at providing a new generation of functional-brain atlases. 
To map cognitive mechanisms in a fine scale, task-fMRI data at high-spatial-resolution are being acquired on a fixed cohort of 12 participants, while performing many different tasks. These data—free from both inter-subject and inter-site variability—are publicly available as means to support the investigation of functional segregation and connectivity as well as individual variability with a view to establishing a better link between brain systems and behavior.\n\n What’s special about the IBC dataset?\n\n\nTaskwise dataset: spanning the cognitive spectrum within subject\nFixed cohort over a 10-year span to minimize inter-subject variability\nFixed experimental setting to minimize inter-site variability\nMultimodal: fMRI (task-based and resting state), DWI, structural\n\n Main characteristics \n\n\n12 healthy participants (aged 27-40 at the time of recruitment)\nSpatial resolution: 1.5mm (isotropic); Temporal resolution: 2s\nScanner: Siemens 3T Magnetom Prisma; Coil: 64-channel\n50 acquisitions per participant upon completion of the dataset in 2022" }, { "dkey": "SNIPS", "dval": "The SNIPS Natural Language Understanding benchmark is a dataset of over 16,000 crowdsourced queries distributed among 7 user intents of various complexity:\n\n\nSearchCreativeWork (e.g. Find me the I, Robot television show),\nGetWeather (e.g. Is it windy in Boston, MA right now?),\nBookRestaurant (e.g. I want to book a highly rated restaurant in Paris tomorrow night),\nPlayMusic (e.g. Play the last track from Beyoncé off Spotify),\nAddToPlaylist (e.g. Add Diamonds to my roadtrip playlist),\nRateBook (e.g. Give 6 stars to Of Mice and Men),\nSearchScreeningEvent (e.g. Check the showtimes for Wonder Woman in Paris).\nThe training set contains of 13,084 utterances, the validation set and the test set contain 700 utterances each, with 100 queries per intent." }, { "dkey": "SUN360", "dval": "The goal of the SUN360 panorama database is to provide academic researchers in computer vision, computer graphics and computational photography, cognition and neuroscience, human perception, machine learning and data mining, with a comprehensive collection of annotated panoramas covering 360x180-degree full view for a large variety of environmental scenes, places and the objects within. To build the core of the dataset, the authors download a huge number of high-resolution panorama images from the Internet, and group them into different place categories. Then, they designed a WebGL annotation tool for annotating the polygons and cuboids for objects in the scene." }, { "dkey": "I-HAZE", "dval": "The I-Haze dataset contains 25 indoor hazy images (size 2833×4657 pixels) training. It has 5 hazy images for validation along with their corresponding ground truth images." }, { "dkey": "Wiki-CS", "dval": "Wiki-CS is a Wikipedia-based dataset for benchmarking Graph Neural Networks. The dataset is constructed from Wikipedia categories, specifically 10 classes corresponding to branches of computer science, with very high connectivity. The node features are derived from the text of the corresponding articles. They were calculated as the average of pretrained GloVe word embeddings (Pennington et al., 2014), resulting in 300-dimensional node features.\n\nThe dataset has 11,701 nodes and 216,123 edges." } ]
A self-similarity grouping approach to unsupervised domain adaptation in person re-identification
unsupervised domain adaptation person re-identification images
2,019
[ "Libri-Adapt", "Airport", "Partial-iLIDS", "CUHK02", "SYSU-MM01" ]
[ "DukeMTMC-reID", "Market-1501" ]
[ { "dkey": "DukeMTMC-reID", "dval": "The DukeMTMC-reID (Duke Multi-Tracking Multi-Camera ReIDentification) dataset is a subset of the DukeMTMC for image-based person re-ID. The dataset is created from high-resolution videos from 8 different cameras. It is one of the largest pedestrian image datasets wherein images are cropped by hand-drawn bounding boxes. The dataset consists 16,522 training images of 702 identities, 2,228 query images of the other 702 identities and 17,661 gallery images.\n\nNOTE: This dataset has been retracted." }, { "dkey": "Market-1501", "dval": "Market-1501 is a large-scale public benchmark dataset for person re-identification. It contains 1501 identities which are captured by six different cameras, and 32,668 pedestrian image bounding-boxes obtained using the Deformable Part Models pedestrian detector. Each person has 3.6 images on average at each viewpoint. The dataset is split into two parts: 750 identities are utilized for training and the remaining 751 identities are used for testing. In the official testing protocol 3,368 query images are selected as probe set to find the correct match across 19,732 reference gallery images." }, { "dkey": "Libri-Adapt", "dval": "Libri-Adapt aims to support unsupervised domain adaptation research on speech recognition models." }, { "dkey": "Airport", "dval": "The Airport dataset is a dataset for person re-identification which consists of 39,902 images and 9,651 identities across six cameras." }, { "dkey": "Partial-iLIDS", "dval": "Partial iLIDS is a dataset for occluded person person re-identification. It contains a total of 476 images of 119 people captured by 4 non-overlapping cameras. Some images contain people occluded by other individuals or luggage." }, { "dkey": "CUHK02", "dval": "CUHK02 is a dataset for person re-identification. It contains 1,816 identities from two disjoint camera views. Each identity has two samples per camera view making a total of 7,264 images. It is used for Person Re-identification." }, { "dkey": "SYSU-MM01", "dval": "The SYSU-MM01 is a dataset collected for the Visible-Infrared Re-identification problem. The images in the dataset were obtained from 491 different persons by recording them using 4 RGB and 2 infrared cameras. Within the dataset, the persons are divided into 3 fixed splits to create training, validation and test sets. In the training set, there are 20284 RGB and 9929 infrared images of 296 persons. The validation set contains 1974 RGB and 1980 infrared images of 99 persons. The testing set consists of the images of 96 persons where 3803 infrared images are used as query and 301 randomly selected RGB images are used as gallery." } ]
I want to introduce a new collective decision strategy for open set recognition.
open set recognition video
2,018
[ "ConvAI2", "BDD100K", "SNIPS", "OpenEDS", "Hollywood 3D dataset", "VoxForge", "HotpotQA" ]
[ "Letter", "USPS" ]
[ { "dkey": "Letter", "dval": "Letter Recognition Data Set is a handwritten digit dataset. The task is to identify each of a large number of black-and-white rectangular pixel displays as one of the 26 capital letters in the English alphabet. The character images were based on 20 different fonts and each letter within these 20 fonts was randomly distorted to produce a file of 20,000 unique stimuli. Each stimulus was converted into 16 primitive numerical attributes (statistical moments and edge counts) which were then scaled to fit into a range of integer values from 0 through 15." }, { "dkey": "USPS", "dval": "USPS is a digit dataset automatically scanned from envelopes by the U.S. Postal Service containing a total of 9,298 16×16 pixel grayscale samples; the images are centered, normalized and show a broad range of font styles." }, { "dkey": "ConvAI2", "dval": "The ConvAI2 NeurIPS competition aimed at finding approaches to creating high-quality dialogue agents capable of meaningful open domain conversation. The ConvAI2 dataset for training models is based on the PERSONA-CHAT dataset. The speaker pairs each have assigned profiles coming from a set of 1155 possible personas (at training time), each consisting of at least 5 profile sentences, setting aside 100 never seen before personas for validation. As the original PERSONA-CHAT test set was released, a new hidden test set consisted of 100 new personas and over 1,015 dialogs was created by crowdsourced workers.\n\nTo avoid modeling that takes advantage of trivial word overlap, additional rewritten sets of the same train and test personas were crowdsourced, with related sentences that are rephrases, generalizations or specializations, rendering the task much more challenging. For example “I just got my nails done” is revised as “I love to pamper myself on a regular basis” and “I am on a diet now” is revised as “I need to lose weight.”\n\nThe training, validation and hidden test sets consists of 17,878, 1,000 and 1,015 dialogues, respectively." }, { "dkey": "BDD100K", "dval": "Datasets drive vision progress, yet existing driving datasets are impoverished in terms of visual content and supported tasks to study multitask learning for autonomous driving. Researchers are usually constrained to study a small set of problems on one dataset, while real-world computer vision applications require performing tasks of various complexities. We construct BDD100K, the largest driving video dataset with 100K videos and 10 tasks to evaluate the exciting progress of image recognition algorithms on autonomous driving. The dataset possesses geographic, environmental, and weather diversity, which is useful for training models that are less likely to be surprised by new conditions. Based on this diverse dataset, we build a benchmark for heterogeneous multitask learning and study how to solve the tasks together. Our experiments show that special training strategies are needed for existing models to perform such heterogeneous tasks. BDD100K opens the door for future studies in this important venue. More detail is at the dataset home page." }, { "dkey": "SNIPS", "dval": "The SNIPS Natural Language Understanding benchmark is a dataset of over 16,000 crowdsourced queries distributed among 7 user intents of various complexity:\n\n\nSearchCreativeWork (e.g. Find me the I, Robot television show),\nGetWeather (e.g. Is it windy in Boston, MA right now?),\nBookRestaurant (e.g. I want to book a highly rated restaurant in Paris tomorrow night),\nPlayMusic (e.g. 
Play the last track from Beyoncé off Spotify),\nAddToPlaylist (e.g. Add Diamonds to my roadtrip playlist),\nRateBook (e.g. Give 6 stars to Of Mice and Men),\nSearchScreeningEvent (e.g. Check the showtimes for Wonder Woman in Paris).\nThe training set contains of 13,084 utterances, the validation set and the test set contain 700 utterances each, with 100 queries per intent." }, { "dkey": "OpenEDS", "dval": "OpenEDS (Open Eye Dataset) is a large scale data set of eye-images captured using a virtual-reality (VR) head mounted display mounted with two synchronized eyefacing cameras at a frame rate of 200 Hz under controlled illumination. This dataset is compiled from video capture of the eye-region collected from 152 individual participants and is divided into four subsets: (i) 12,759 images with pixel-level annotations for key eye-regions: iris, pupil and sclera (ii) 252,690 unlabelled eye-images, (iii) 91,200 frames from randomly selected video sequence of 1.5 seconds in duration and (iv) 143 pairs of left and right point cloud data compiled from corneal topography of eye regions collected from a subset, 143 out of 152, participants in the study." }, { "dkey": "Hollywood 3D dataset", "dval": "A dataset for benchmarking action recognition algorithms in natural environments, while making use of 3D information. The dataset contains around 650 video clips, across 14 classes. In addition, two state of the art action recognition algorithms are extended to make use of the 3D data, and five new interest point detection strategies are also proposed, that extend to the 3D data." }, { "dkey": "VoxForge", "dval": "VoxForge is an open speech dataset that was set up to collect transcribed speech for use with Free and Open Source Speech Recognition Engines (on Linux, Windows and Mac)." }, { "dkey": "HotpotQA", "dval": "HotpotQA is a question answering dataset collected on the English Wikipedia, containing about 113K crowd-sourced questions that are constructed to require the introduction paragraphs of two Wikipedia articles to answer. Each question in the dataset comes with the two gold paragraphs, as well as a list of sentences in these paragraphs that crowdworkers identify as supporting facts necessary to answer the question. \n\nA diverse range of reasoning strategies are featured in HotpotQA, including questions involving missing entities in the question, intersection questions (What satisfies property A and property B?), and comparison questions, where two entities are compared by a common attribute, among others. In the few-document distractor setting, the QA models are given ten paragraphs in which the gold paragraphs are guaranteed to be found; in the open-domain fullwiki setting, the models are only given the question and the entire Wikipedia. Models are evaluated on their answer accuracy and explainability, where the former is measured as overlap between the predicted and gold answers with exact match (EM) and unigram F1, and the latter concerns how well the predicted supporting fact sentences match human annotation (Supporting Fact EM/F1). A joint metric is also reported on this dataset, which encourages systems to perform well on both tasks simultaneously." } ]
A novel adversarial multi-source domain aggregation
domain adaptation semantic segmentation images gta synthia cityscapes bdds
2,019
[ "Cell", "DailyDialog++", "PEC", "WinoGrande", "ImageCLEF-DA", "PACS" ]
[ "ImageNet", "Cityscapes" ]
[ { "dkey": "ImageNet", "dval": "The ImageNet dataset contains 14,197,122 annotated images according to the WordNet hierarchy. Since 2010 the dataset is used in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), a benchmark in image classification and object detection.\nThe publicly released dataset contains a set of manually annotated training images. A set of test images is also released, with the manual annotations withheld.\nILSVRC annotations fall into one of two categories: (1) image-level annotation of a binary label for the presence or absence of an object class in the image, e.g., “there are cars in this image” but “there are no tigers,” and (2) object-level annotation of a tight bounding box and class label around an object instance in the image, e.g., “there is a screwdriver centered at position (20,25) with width of 50 pixels and height of 30 pixels”.\nThe ImageNet project does not own the copyright of the images, therefore only thumbnails and URLs of images are provided.\n\n\nTotal number of non-empty WordNet synsets: 21841\nTotal number of images: 14197122\nNumber of images with bounding box annotations: 1,034,908\nNumber of synsets with SIFT features: 1000\nNumber of images with SIFT features: 1.2 million" }, { "dkey": "Cityscapes", "dval": "Cityscapes is a large-scale database which focuses on semantic understanding of urban street scenes. It provides semantic, instance-wise, and dense pixel annotations for 30 classes grouped into 8 categories (flat surfaces, humans, vehicles, constructions, objects, nature, sky, and void). The dataset consists of around 5000 fine annotated images and 20000 coarse annotated ones. Data was captured in 50 cities during several months, daytimes, and good weather conditions. It was originally recorded as video so the frames were manually selected to have the following features: large number of dynamic objects, varying scene layout, and varying background." }, { "dkey": "Cell", "dval": "The CELL benchmark is made of fluorescence microscopy images of cell." }, { "dkey": "DailyDialog++", "dval": "Consists of (i) five relevant responses for each context and (ii) five adversarially crafted irrelevant responses for each context." }, { "dkey": "PEC", "dval": "A novel large-scale multi-domain dataset for persona-based empathetic conversations." }, { "dkey": "WinoGrande", "dval": "WinoGrande is a large-scale dataset of 44k problems, inspired by the original WSC design, but adjusted to improve both the scale and the hardness of the dataset. The key steps of the dataset construction consist of (1) a carefully designed crowdsourcing procedure, followed by (2) systematic bias reduction using a novel AfLite algorithm that generalizes human-detectable word associations to machine-detectable embedding associations." }, { "dkey": "ImageCLEF-DA", "dval": "The ImageCLEF-DA dataset is a benchmark dataset for ImageCLEF 2014 domain adaptation challenge, which contains three domains: Caltech-256 (C), ImageNet ILSVRC 2012 (I) and Pascal VOC 2012 (P). For each domain, there are 12 categories and 50 images in each category." }, { "dkey": "PACS", "dval": "PACS is an image dataset for domain generalization. It consists of four domains, namely Photo (1,670 images), Art Painting (2,048 images), Cartoon (2,344 images) and Sketch (3,929 images). Each domain contains seven categories." } ]
We propose a simple but effective approach for transferring supervision from neighboring examples to improve multi-label
multi-label classification images
2,018
[ "SuperGLUE", "TableBank", "DocBank", "BraTS 2017", "SICK", "SBU Captions Dataset" ]
[ "COCO", "Flickr30k" ]
[ { "dkey": "COCO", "dval": "The MS COCO (Microsoft Common Objects in Context) dataset is a large-scale object detection, segmentation, key-point detection, and captioning dataset. The dataset consists of 328K images.\n\nSplits:\nThe first version of MS COCO dataset was released in 2014. It contains 164K images split into training (83K), validation (41K) and test (41K) sets. In 2015 additional test set of 81K images was released, including all the previous test images and 40K new images.\n\nBased on community feedback, in 2017 the training/validation split was changed from 83K/41K to 118K/5K. The new split uses the same images and annotations. The 2017 test set is a subset of 41K images of the 2015 test set. Additionally, the 2017 release contains a new unannotated dataset of 123K images.\n\nAnnotations:\nThe dataset has annotations for\n\n\nobject detection: bounding boxes and per-instance segmentation masks with 80 object categories,\ncaptioning: natural language descriptions of the images (see MS COCO Captions),\nkeypoints detection: containing more than 200,000 images and 250,000 person instances labeled with keypoints (17 possible keypoints, such as left eye, nose, right hip, right ankle),\nstuff image segmentation – per-pixel segmentation masks with 91 stuff categories, such as grass, wall, sky (see MS COCO Stuff),\npanoptic: full scene segmentation, with 80 thing categories (such as person, bicycle, elephant) and a subset of 91 stuff categories (grass, sky, road),\ndense pose: more than 39,000 images and 56,000 person instances labeled with DensePose annotations – each labeled person is annotated with an instance id and a mapping between image pixels that belong to that person body and a template 3D model.\nThe annotations are publicly available only for training and validation images." }, { "dkey": "Flickr30k", "dval": "The Flickr30k dataset contains 31,000 images collected from Flickr, together with 5 reference sentences provided by human annotators." }, { "dkey": "SuperGLUE", "dval": "SuperGLUE is a benchmark dataset designed to pose a more rigorous test of language understanding than GLUE. SuperGLUE has the same high-level motivation as GLUE: to provide a simple, hard-to-game measure of progress toward general-purpose language understanding technologies for English. SuperGLUE follows the basic design of GLUE: It consists of a public leaderboard built around eight language understanding tasks, drawing on existing data, accompanied by a single-number\nperformance metric, and an analysis toolkit. However, it improves upon GLUE in several ways:\n\n\nMore challenging tasks: SuperGLUE retains the two hardest tasks in GLUE. The remaining tasks were identified from those submitted to an open call for task proposals and were selected based on difficulty for current NLP approaches.\nMore diverse task formats: The task formats in GLUE are limited to sentence- and sentence-pair classification. 
The authors expand the set of task formats in SuperGLUE to include\ncoreference resolution and question answering (QA).\nComprehensive human baselines: the authors include human performance estimates for all benchmark tasks, which verify that substantial headroom exists between a strong BERT-based baseline and human performance.\nImproved code support: SuperGLUE is distributed with a new, modular toolkit for work on pretraining, multi-task learning, and transfer learning in NLP, built around standard tools including PyTorch (Paszke et al., 2017) and AllenNLP (Gardner et al., 2017).\nRefined usage rules: The conditions for inclusion on the SuperGLUE leaderboard were revamped to ensure fair competition, an informative leaderboard, and full credit\nassignment to data and task creators." }, { "dkey": "TableBank", "dval": "To address the need for a standard open domain table benchmark dataset, the author propose a novel weak supervision approach to automatically create the TableBank, which is orders of magnitude larger than existing human labeled datasets for table analysis. Distinct from traditional weakly supervised training set, our approach can obtain not only large scale but also high quality training data.\n\nNowadays, there are a great number of electronic documents on the web such as Microsoft Word (.docx) and Latex (.tex) files. These online documents contain mark-up tags for tables in their source code by nature. Intuitively, one can manipulate these source code by adding bounding box using the mark-up language within each document. For Word documents, the internal Office XML code can be modified where the borderline of each table is identified. For Latex documents, the tex code can be also modified where bounding boxes of tables are recognized. In this way, high-quality labeled data is created for a variety of domains such as business documents, official fillings, research papers etc, which is tremendously beneficial for large-scale table analysis tasks.\n\nThe TableBank dataset totally consists of 417,234 high quality labeled tables as well as their original documents in a variety of domains." }, { "dkey": "DocBank", "dval": "A benchmark dataset that contains 500K document pages with fine-grained token-level annotations for document layout analysis. DocBank is constructed using a simple yet effective way with weak supervision from the \\LaTeX{} documents available on the arXiv.com." }, { "dkey": "BraTS 2017", "dval": "The BRATS2017 dataset. It contains 285 brain tumor MRI scans, with four MRI modalities as T1, T1ce, T2, and Flair for each scan. The dataset also provides full masks for brain tumors, with labels for ED, ET, NET/NCR. The segmentation evaluation is based on three tasks: WT, TC and ET segmentation." }, { "dkey": "SICK", "dval": "The Sentences Involving Compositional Knowledge (SICK) dataset is a dataset for compositional distributional semantics. It includes a large number of sentence pairs that are rich in the lexical, syntactic and semantic phenomena. Each pair of sentences is annotated in two dimensions: relatedness and entailment. The relatedness score ranges from 1 to 5, and Pearson’s r is used for evaluation; the entailment relation is categorical, consisting of entailment, contradiction, and neutral. There are 4439 pairs in the train split, 495 in the trial split used for development and 4906 in the test split. The sentence pairs are generated from image and video caption datasets before being paired up using some algorithm." 
}, { "dkey": "SBU Captions Dataset", "dval": "A collection that allows researchers to approach the extremely challenging problem of description generation using relatively simple non-parametric methods and produces surprisingly effective results." } ]
I want to train a supervised model for face alignment.
face alignment images
2,018
[ "FaceForensics", "SNIPS", "iFakeFaceDB", "ConvAI2", "Sentence Compression", "WFLW" ]
[ "COFW", "AFW", "AFLW" ]
[ { "dkey": "COFW", "dval": "The Caltech Occluded Faces in the Wild (COFW) dataset is designed to present faces in real-world conditions. Faces show large variations in shape and occlusions due to differences in pose, expression, use of accessories such as sunglasses and hats and interactions with objects (e.g. food, hands, microphones,
etc.). All images were hand annotated using the same 29 landmarks as in LFPW. Both the landmark positions as well as their occluded/unoccluded state were annotated. The faces are occluded to different degrees, with large variations in the type of occlusions encountered. COFW has an average occlusion of over 23%." }, { "dkey": "AFW", "dval": "AFW (Annotated Faces in the Wild) is a face detection dataset that contains 205 images with 468 faces. Each face image is labeled with at most 6 landmarks with visibility labels, as well as a bounding box." }, { "dkey": "AFLW", "dval": "The Annotated Facial Landmarks in the Wild (AFLW) is a large-scale collection of annotated face images gathered from Flickr, exhibiting a large variety in appearance (e.g., pose, expression, ethnicity, age, gender) as well as general imaging and environmental conditions. In total about 25K faces are annotated with up to 21 landmarks per image." }, { "dkey": "FaceForensics", "dval": "FaceForensics is a video dataset consisting of more than 500,000 frames containing faces from 1004 videos that can be used to study image or video forgeries. All videos are downloaded from Youtube and are cut down to short continuous clips that contain mostly frontal faces. This dataset has two versions:\n\n\n\nSource-to-Target: where the authors reenact over 1000 videos with new facial expressions extracted from other videos, which e.g. can be used to train a classifier to detect fake images or videos.\n\n\n\nSelfreenactment: where the authors use Face2Face to reenact the facial expressions of videos with their own facial expressions as input to get pairs of videos, which e.g. can be used to train supervised generative refinement models." }, { "dkey": "SNIPS", "dval": "The SNIPS Natural Language Understanding benchmark is a dataset of over 16,000 crowdsourced queries distributed among 7 user intents of various complexity:\n\n\nSearchCreativeWork (e.g. Find me the I, Robot television show),\nGetWeather (e.g. Is it windy in Boston, MA right now?),\nBookRestaurant (e.g. I want to book a highly rated restaurant in Paris tomorrow night),\nPlayMusic (e.g. Play the last track from Beyoncé off Spotify),\nAddToPlaylist (e.g. Add Diamonds to my roadtrip playlist),\nRateBook (e.g. Give 6 stars to Of Mice and Men),\nSearchScreeningEvent (e.g. Check the showtimes for Wonder Woman in Paris).\nThe training set contains 13,084 utterances, the validation set and the test set contain 700 utterances each, with 100 queries per intent." }, { "dkey": "iFakeFaceDB", "dval": "iFakeFaceDB is a face image dataset for the study of synthetic face manipulation detection, comprising about 87,000 synthetic face images generated by the Style-GAN model and transformed with the GANprintR approach. All images were aligned and resized to the size of 224 x 224." }, { "dkey": "ConvAI2", "dval": "The ConvAI2 NeurIPS competition aimed at finding approaches to creating high-quality dialogue agents capable of meaningful open domain conversation. The ConvAI2 dataset for training models is based on the PERSONA-CHAT dataset. The speaker pairs each have assigned profiles coming from a set of 1155 possible personas (at training time), each consisting of at least 5 profile sentences, setting aside 100 never seen before personas for validation. 
As the original PERSONA-CHAT test set was released, a new hidden test set consisting of 100 new personas and over 1,015 dialogs was created by crowdsourced workers.\n\nTo avoid modeling that takes advantage of trivial word overlap, additional rewritten sets of the same train and test personas were crowdsourced, with related sentences that are rephrases, generalizations or specializations, rendering the task much more challenging. For example “I just got my nails done” is revised as “I love to pamper myself on a regular basis” and “I am on a diet now” is revised as “I need to lose weight.”\n\nThe training, validation and hidden test sets consist of 17,878, 1,000 and 1,015 dialogues, respectively." }, { "dkey": "Sentence Compression", "dval": "Sentence Compression is a dataset where the syntactic trees of the compressions are subtrees of their uncompressed counterparts, and hence where supervised systems which require a structural alignment between the input and output can be successfully trained." }, { "dkey": "WFLW", "dval": "The Wider Facial Landmarks in the Wild or WFLW database contains 10000 faces (7500 for training and 2500 for testing) with 98 annotated landmarks. This database also features rich attribute annotations in terms of occlusion, head pose, make-up, illumination, blur and expressions." } ]
I want to improve video action recognition accuracy with spatial-temporal attention and regularizers.
video action recognition
2,019
[ "FineGym", "Charades", "UCFRep", "TAPOS", "SoccerDB", "HAA500", "TUM Kitchen" ]
[ "ImageNet", "UCF101", "HMDB51" ]
[ { "dkey": "ImageNet", "dval": "The ImageNet dataset contains 14,197,122 annotated images according to the WordNet hierarchy. Since 2010 the dataset is used in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), a benchmark in image classification and object detection.\nThe publicly released dataset contains a set of manually annotated training images. A set of test images is also released, with the manual annotations withheld.\nILSVRC annotations fall into one of two categories: (1) image-level annotation of a binary label for the presence or absence of an object class in the image, e.g., “there are cars in this image” but “there are no tigers,” and (2) object-level annotation of a tight bounding box and class label around an object instance in the image, e.g., “there is a screwdriver centered at position (20,25) with width of 50 pixels and height of 30 pixels”.\nThe ImageNet project does not own the copyright of the images, therefore only thumbnails and URLs of images are provided.\n\n\nTotal number of non-empty WordNet synsets: 21841\nTotal number of images: 14197122\nNumber of images with bounding box annotations: 1,034,908\nNumber of synsets with SIFT features: 1000\nNumber of images with SIFT features: 1.2 million" }, { "dkey": "UCF101", "dval": "UCF101 dataset is an extension of UCF50 and consists of 13,320 video clips, which are classified into 101 categories. These 101 categories can be classified into 5 types (Body motion, Human-human interactions, Human-object interactions, Playing musical instruments and Sports). The total length of these video clips is over 27 hours. All the videos are collected from YouTube and have a fixed frame rate of 25 FPS with the resolution of 320 × 240." }, { "dkey": "HMDB51", "dval": "The HMDB51 dataset is a large collection of realistic videos from various sources, including movies and web videos. The dataset is composed of 6,766 video clips from 51 action categories (such as “jump”, “kiss” and “laugh”), with each category containing at least 101 clips. The original evaluation scheme uses three different training/testing splits. In each split, each action class has 70 clips for training and 30 clips for testing. The average accuracy over these three splits is used to measure the final performance." }, { "dkey": "FineGym", "dval": "FineGym is an action recognition dataset build on top of gymnasium videos. Compared to existing action recognition datasets, FineGym is distinguished in richness, quality, and diversity. In particular, it provides temporal annotations at both action and sub-action levels with a three-level semantic hierarchy. For example, a \"balance beam\" event will be annotated as a sequence of elementary sub-actions derived from five sets: \"leap-jumphop\", \"beam-turns\", \"flight-salto\", \"flight-handspring\", and \"dismount\", where the sub-action in each set will be further annotated with finely defined class labels. This new level of granularity presents significant challenges for action recognition, e.g. how to parse the temporal structures from a coherent action, and how to distinguish between subtly different action classes." }, { "dkey": "Charades", "dval": "The Charades dataset is composed of 9,848 videos of daily indoors activities with an average length of 30 seconds, involving interactions with 46 objects classes in 15 types of indoor scenes and containing a vocabulary of 30 verbs leading to 157 action classes. 
Each video in this dataset is annotated by multiple free-text descriptions, action labels, action intervals and classes of interacting objects. 267 different users were presented with a sentence, which includes objects and actions from a fixed vocabulary, and they recorded a video acting out the sentence. In total, the dataset contains 66,500 temporal annotations for 157 action classes, 41,104 labels for 46 object classes, and 27,847 textual descriptions of the videos. In the standard split there are 7,986 training videos and 1,863 validation videos." }, { "dkey": "UCFRep", "dval": "The UCFRep dataset contains 526 annotated repetitive action videos. This dataset is built from the action recognition dataset UCF101." }, { "dkey": "TAPOS", "dval": "TAPOS is a new dataset developed on sport videos with manual annotations of sub-actions, and conduct a study on temporal action parsing on top. A sport activity usually consists of multiple sub-actions and that the awareness of such temporal structures is beneficial to action recognition.\n\nTAPOS contains 16,294 valid instances in total, across 21 action classes. These instances have a duration of 9.4 seconds on average. The number of instances within each class is different, where the largest class high jump has over 1,600 instances, and the smallest class beam has 200 instances. The average number of sub-actions also varies from class to class, where parallel bars has 9 sub-actions on average, and long jump has 3 sub-actions on average. All instances are split into train, validation and test sets, of sizes 13094, 1790, and 1763, respectively." }, { "dkey": "SoccerDB", "dval": "Comprises 171,191 video segments from 346 high-quality soccer games. The database contains 702,096 bounding boxes, 37,709 essential event labels with time boundary and 17,115 highlight annotations for object detection, action recognition, temporal action localization, and highlight detection tasks." }, { "dkey": "HAA500", "dval": "HAA500 is a manually annotated human-centric atomic action dataset for action recognition on 500 classes with over 591k labeled frames. Unlike existing atomic action datasets, where coarse-grained atomic actions were labeled with action-verbs, e.g., \"Throw\", HAA500 contains fine-grained atomic actions where only consistent actions fall under the same label, e.g., \"Baseball Pitching\" vs \"Free Throw in Basketball\", to minimize ambiguities in action classification. HAA500 has been carefully curated to capture the movement of human figures with less spatio-temporal label noises to greatly enhance the training of deep neural networks." }, { "dkey": "TUM Kitchen", "dval": "The TUM Kitchen dataset is an action recognition dataset that contains 20 video sequences captured by 4 cameras with overlapping views. The camera network captures the scene from four viewpoints with 25 fps, and every RGB frame is of the resolution 384×288 pixels. The action labels are frame-wise, and provided for the left arm, the right arm and the torso separately." } ]
A generative adversarial network that models moving objects in images without any supervision whatsoever.
moving object detection images
2,019
[ "ISTD", "Raindrop", "FDF", "TAO" ]
[ "FBMS", "DAVIS" ]
[ { "dkey": "FBMS", "dval": "The Freiburg-Berkeley Motion Segmentation Dataset (FBMS-59) is an extension of the BMS dataset with 33 additional video sequences. A total of 720 frames is annotated. It has pixel-accurate segmentation annotations of moving objects. FBMS-59 comes with a split into a training set and a test set." }, { "dkey": "DAVIS", "dval": "The Densely Annotation Video Segmentation dataset (DAVIS) is a high quality and high resolution densely annotated video segmentation dataset under two resolutions, 480p and 1080p. There are 50 video sequences with 3455 densely annotated frames in pixel level. 30 videos with 2079 frames are for training and 20 videos with 1376 frames are for validation." }, { "dkey": "ISTD", "dval": "The Image Shadow Triplets dataset (ISTD) is a dataset for shadow understanding that contains 1870 image triplets of shadow image, shadow mask, and shadow-free image." }, { "dkey": "Raindrop", "dval": "Raindrop is a set of image pairs, where\neach pair contains exactly the same background scene, yet\none is degraded by raindrops and the other one is free from\nraindrops. To obtain this, the images are captured through two pieces of exactly the\nsame glass: one sprayed with water, and the other is left\nclean. The dataset consists of 1,119 pairs of images, with various\nbackground scenes and raindrops. They were captured with a Sony A6000\nand a Canon EOS 60." }, { "dkey": "FDF", "dval": "A diverse dataset of human faces, including unconventional poses, occluded faces, and a vast variability in backgrounds." }, { "dkey": "TAO", "dval": "TAO is a federated dataset for Tracking Any Object, containing 2,907 high resolution videos, captured in diverse environments, which are half a minute long on average. A bottom-up approach was used for discovering a large vocabulary of 833 categories, an order of magnitude more than prior tracking benchmarks. \n\nThe dataset was annotated by labelling tracks for objects that move at any point in the video, and giving names to them post factum." } ]
We propose a novel unsupervised method for learning human activity representations in minutes-long videos. In VideoGraph
human activity representation videos paragraph-level
2,019
[ "EPIC-KITCHENS-100", "Watch-n-Patch", "Icentia11K", "MIT Traffic", "AVA" ]
[ "Charades", "Breakfast" ]
[ { "dkey": "Charades", "dval": "The Charades dataset is composed of 9,848 videos of daily indoors activities with an average length of 30 seconds, involving interactions with 46 objects classes in 15 types of indoor scenes and containing a vocabulary of 30 verbs leading to 157 action classes. Each video in this dataset is annotated by multiple free-text descriptions, action labels, action intervals and classes of interacting objects. 267 different users were presented with a sentence, which includes objects and actions from a fixed vocabulary, and they recorded a video acting out the sentence. In total, the dataset contains 66,500 temporal annotations for 157 action classes, 41,104 labels for 46 object classes, and 27,847 textual descriptions of the videos. In the standard split there are 7,986 training videos and 1,863 validation videos." }, { "dkey": "Breakfast", "dval": "The Breakfast Actions Dataset comprises 10 actions related to breakfast preparation, performed by 52 different individuals in 18 different kitchens. The dataset is one of the largest fully annotated datasets available. The actions are recorded “in the wild” as opposed to a single controlled lab environment. It consists of over 77 hours of video recordings." }, { "dkey": "EPIC-KITCHENS-100", "dval": "This paper introduces the pipeline to scale the largest dataset in egocentric vision EPIC-KITCHENS. The effort culminates in EPIC-KITCHENS-100, a collection of 100 hours, 20M frames, 90K actions in 700 variable-length videos, capturing long-term unscripted activities in 45 environments, using head-mounted cameras. Compared to its previous version (EPIC-KITCHENS-55), EPIC-KITCHENS-100 has been annotated using a novel pipeline that allows denser (54% more actions per minute) and more complete annotations of fine-grained actions (+128% more action segments). This collection also enables evaluating the \"test of time\" - i.e. whether models trained on data collected in 2018 can generalise to new footage collected under the same hypotheses albeit \"two years on\".\nThe dataset is aligned with 6 challenges: action recognition (full and weak supervision), action detection, action anticipation, cross-modal retrieval (from captions), as well as unsupervised domain adaptation for action recognition. For each challenge, we define the task, provide baselines and evaluation metrics." }, { "dkey": "Watch-n-Patch", "dval": "The Watch-n-Patch dataset was created with the focus on modeling human activities, comprising multiple actions in a completely unsupervised setting. It is collected with Microsoft Kinect One sensor for a total length of about 230 minutes, divided in 458 videos. 7 subjects perform human daily activities in 8 offices and 5 kitchens with complex backgrounds. Moreover, skeleton data are provided as ground truth annotations." }, { "dkey": "Icentia11K", "dval": "Public ECG dataset of continuous raw signals for representation learning containing 11 thousand patients and 2 billion labelled beats." }, { "dkey": "MIT Traffic", "dval": "MIT Traffic is a dataset for research on activity analysis and crowded scenes. It includes a traffic video sequence of 90 minutes long. It is recorded by a stationary camera. The size of the scene is 720 by 480 and it is divided into 20 clips." }, { "dkey": "AVA", "dval": "AVA is a project that provides audiovisual annotations of video for improving our understanding of human activity. 
Each of the video clips has been exhaustively annotated by human annotators, and together they represent a rich variety of scenes, recording conditions, and expressions of human activity. There are annotations for:\n\n\nKinetics (AVA-Kinetics) - a crossover between AVA and Kinetics. In order to provide localized action labels on a wider variety of visual scenes, authors provide AVA action labels on videos from Kinetics-700, nearly doubling the number of total annotations, and increasing the number of unique videos by over 500x. \nActions (AvA Actions) - the AVA dataset densely annotates 80 atomic visual actions in 430 15-minute movie clips, where actions are localized in space and time, resulting in 1.62M action labels with multiple labels per human occurring frequently. \nSpoken Activity (AVA ActiveSpeaker, AVA Speech). AVA ActiveSpeaker: associates speaking activity with a visible face, on the AVA v1.0 videos, resulting in 3.65 million frames labeled across ~39K face tracks. AVA Speech densely annotates audio-based speech activity in AVA v1.0 videos, and explicitly labels 3 background noise conditions, resulting in ~46K labeled segments spanning 45 hours of data." } ]
I want to study the effect of training on multiple languages, and improve upon the state-of
multilingual nli text
2,020
[ "Dialogue State Tracking Challenge", "SNIPS", "LibriSpeech", "SuperGLUE", "DailyDialog++" ]
[ "MultiNLI", "GLUE" ]
[ { "dkey": "MultiNLI", "dval": "The Multi-Genre Natural Language Inference (MultiNLI) dataset has 433K sentence pairs. Its size and mode of collection are modeled closely like SNLI. MultiNLI offers ten distinct genres (Face-to-face, Telephone, 9/11, Travel, Letters, Oxford University Press, Slate, Verbatim, Goverment and Fiction) of written and spoken English data. There are matched dev/test sets which are derived from the same sources as those in the training set, and mismatched sets which do not closely resemble any seen at training time." }, { "dkey": "GLUE", "dval": "General Language Understanding Evaluation (GLUE) benchmark is a collection of nine natural language understanding tasks, including single-sentence tasks CoLA and SST-2, similarity and paraphrasing tasks MRPC, STS-B and QQP, and natural language inference tasks MNLI, QNLI, RTE and WNLI." }, { "dkey": "Dialogue State Tracking Challenge", "dval": "The Dialog State Tracking Challenges 2 & 3 (DSTC2&3) were research challenge focused on improving the state of the art in tracking the state of spoken dialog systems. State tracking, sometimes called belief tracking, refers to accurately estimating the user's goal as a dialog progresses. Accurate state tracking is desirable because it provides robustness to errors in speech recognition, and helps reduce ambiguity inherent in language within a temporal process like dialog.\nIn these challenges, participants were given labelled corpora of dialogs to develop state tracking algorithms. The trackers were then evaluated on a common set of held-out dialogs, which were released, un-labelled, during a one week period.\n\nThe corpus was collected using Amazon Mechanical Turk, and consists of dialogs in two domains: restaurant information, and tourist information. Tourist information subsumes restaurant information, and includes bars, cafés etc. as well as multiple new slots. There were two rounds of evaluation using this data:\n\nDSTC 2 released a large number of training dialogs related to restaurant search. Compared to DSTC (which was in the bus timetables domain), DSTC 2 introduces changing user goals, tracking 'requested slots' as well as the new restaurants domain. Results from DSTC 2 were presented at SIGDIAL 2014.\nDSTC 3 addressed the problem of adaption to a new domain - tourist information. DSTC 3 releases a small amount of labelled data in the tourist information domain; participants will use this data plus the restaurant data from DSTC 2 for training.\nDialogs used for training are fully labelled; user transcriptions, user dialog-act semantics and dialog state are all annotated. (This corpus therefore is also suitable for studies in Spoken Language Understanding.)" }, { "dkey": "SNIPS", "dval": "The SNIPS Natural Language Understanding benchmark is a dataset of over 16,000 crowdsourced queries distributed among 7 user intents of various complexity:\n\n\nSearchCreativeWork (e.g. Find me the I, Robot television show),\nGetWeather (e.g. Is it windy in Boston, MA right now?),\nBookRestaurant (e.g. I want to book a highly rated restaurant in Paris tomorrow night),\nPlayMusic (e.g. Play the last track from Beyoncé off Spotify),\nAddToPlaylist (e.g. Add Diamonds to my roadtrip playlist),\nRateBook (e.g. Give 6 stars to Of Mice and Men),\nSearchScreeningEvent (e.g. Check the showtimes for Wonder Woman in Paris).\nThe training set contains of 13,084 utterances, the validation set and the test set contain 700 utterances each, with 100 queries per intent." 
}, { "dkey": "LibriSpeech", "dval": "The LibriSpeech corpus is a collection of approximately 1,000 hours of audiobooks that are a part of the LibriVox project. Most of the audiobooks come from the Project Gutenberg. The training data is split into 3 partitions of 100hr, 360hr, and 500hr sets while the dev and test data are split into the ’clean’ and ’other’ categories, respectively, depending upon how well or challenging Automatic Speech Recognition systems would perform against. Each of the dev and test sets is around 5hr in audio length. This corpus also provides the n-gram language models and the corresponding texts excerpted from the Project Gutenberg books, which contain 803M tokens and 977K unique words." }, { "dkey": "SuperGLUE", "dval": "SuperGLUE is a benchmark dataset designed to pose a more rigorous test of language understanding than GLUE. SuperGLUE has the same high-level motivation as GLUE: to provide a simple, hard-to-game measure of progress toward general-purpose language understanding technologies for English. SuperGLUE follows the basic design of GLUE: It consists of a public leaderboard built around eight language understanding tasks, drawing on existing data, accompanied by a single-number\nperformance metric, and an analysis toolkit. However, it improves upon GLUE in several ways:\n\n\nMore challenging tasks: SuperGLUE retains the two hardest tasks in GLUE. The remaining tasks were identified from those submitted to an open call for task proposals and were selected based on difficulty for current NLP approaches.\nMore diverse task formats: The task formats in GLUE are limited to sentence- and sentence-pair classification. The authors expand the set of task formats in SuperGLUE to include\ncoreference resolution and question answering (QA).\nComprehensive human baselines: the authors include human performance estimates for all benchmark tasks, which verify that substantial headroom exists between a strong BERT-based baseline and human performance.\nImproved code support: SuperGLUE is distributed with a new, modular toolkit for work on pretraining, multi-task learning, and transfer learning in NLP, built around standard tools including PyTorch (Paszke et al., 2017) and AllenNLP (Gardner et al., 2017).\nRefined usage rules: The conditions for inclusion on the SuperGLUE leaderboard were revamped to ensure fair competition, an informative leaderboard, and full credit\nassignment to data and task creators." }, { "dkey": "DailyDialog++", "dval": "Consists of (i) five relevant responses for each context and (ii) five adversarially crafted irrelevant responses for each context." } ]
A data-driven approach for estimating hand-object manipulations from RGB images.
hand-object manipulations rgb images
2,019
[ "YCBInEOAT Dataset", "EgoHands", "InterHand2.6M", "SynthHands", "NYUv2", "ContactPose" ]
[ "HIC", "ShapeNet" ]
[ { "dkey": "HIC", "dval": "The Hands in action dataset (HIC) dataset has RGB-D sequences of hands interacting with objects." }, { "dkey": "ShapeNet", "dval": "ShapeNet is a large scale repository for 3D CAD models developed by researchers from Stanford University, Princeton University and the Toyota Technological Institute at Chicago, USA. The repository contains over 300M models with 220,000 classified into 3,135 classes arranged using WordNet hypernym-hyponym relationships. ShapeNet Parts subset contains 31,693 meshes categorised into 16 common object classes (i.e. table, chair, plane etc.). Each shapes ground truth contains 2-5 parts (with a total of 50 part classes)." }, { "dkey": "YCBInEOAT Dataset", "dval": "A new dataset with significant occlusions related to object manipulation." }, { "dkey": "EgoHands", "dval": "The EgoHands dataset contains 48 Google Glass videos of complex, first-person interactions between two people. The main intention of this dataset is to enable better, data-driven approaches to understanding hands in first-person computer vision. The dataset offers\n\n\nhigh quality, pixel-level segmentations of hands\nthe possibility to semantically distinguish between the observer’s hands and someone else’s hands, as well as left and right hands\nvirtually unconstrained hand poses as actors freely engage in a set of joint activities\nlots of data with 15,053 ground-truth labeled hands" }, { "dkey": "InterHand2.6M", "dval": "The InterHand2.6M dataset is a large-scale real-captured dataset with accurate GT 3D interacting hand poses, used for 3D hand pose estimation The dataset contains 2.6M labeled single and interacting hand frames." }, { "dkey": "SynthHands", "dval": "The SynthHands dataset is a dataset for hand pose estimation which consists of real captured hand motion retargeted to a virtual hand with natural backgrounds and interactions with different objects. The dataset contains data for male and female hands, both with and without interaction with objects. While the hand and foreground object are synthtically generated using Unity, the motion was obtained from real performances as described in the accompanying paper. In addition, real object textures and background images (depth and color) were used. Ground truth 3D positions are provided for 21 keypoints of the hand." }, { "dkey": "NYUv2", "dval": "The NYU-Depth V2 data set is comprised of video sequences from a variety of indoor scenes as recorded by both the RGB and Depth cameras from the Microsoft Kinect. It features:\n\n\n1449 densely labeled pairs of aligned RGB and depth images\n464 new scenes taken from 3 cities\n407,024 new unlabeled frames\nEach object is labeled with a class and an instance number.\nThe dataset has several components:\nLabeled: A subset of the video data accompanied by dense multi-class labels. This data has also been preprocessed to fill in missing depth labels.\nRaw: The raw RGB, depth and accelerometer data as provided by the Kinect.\nToolbox: Useful functions for manipulating the data and labels." }, { "dkey": "ContactPose", "dval": "ContactPose is a dataset of hand-object contact paired with hand pose, object pose, and RGB-D images. ContactPose has 2306 unique grasps of 25 household objects grasped with 2 functional intents by 50 participants, and more than 2.9 M RGB-D grasp images." } ]
I want to use a CNN to process 3D data.
3d data processing
2,015
[ "AFLW2000-3D", "SNIPS", "UAVA", "NewsQA" ]
[ "UCF101", "CIFAR-10" ]
[ { "dkey": "UCF101", "dval": "UCF101 dataset is an extension of UCF50 and consists of 13,320 video clips, which are classified into 101 categories. These 101 categories can be classified into 5 types (Body motion, Human-human interactions, Human-object interactions, Playing musical instruments and Sports). The total length of these video clips is over 27 hours. All the videos are collected from YouTube and have a fixed frame rate of 25 FPS with the resolution of 320 × 240." }, { "dkey": "CIFAR-10", "dval": "The CIFAR-10 dataset (Canadian Institute for Advanced Research, 10 classes) is a subset of the Tiny Images dataset and consists of 60000 32x32 color images. The images are labelled with one of 10 mutually exclusive classes: airplane, automobile (but not truck or pickup truck), bird, cat, deer, dog, frog, horse, ship, and truck (but not pickup truck). There are 6000 images per class with 5000 training and 1000 testing images per class.\n\nThe criteria for deciding whether an image belongs to a class were as follows:\n\n\nThe class name should be high on the list of likely answers to the question “What is in this picture?”\nThe image should be photo-realistic. Labelers were instructed to reject line drawings.\nThe image should contain only one prominent instance of the object to which the class refers.\nThe object may be partially occluded or seen from an unusual viewpoint as long as its identity is still clear to the labeler." }, { "dkey": "AFLW2000-3D", "dval": "AFLW2000-3D is a dataset of 2000 images that have been annotated with image-level 68-point 3D facial landmarks. This dataset is used for evaluation of 3D facial landmark detection models. The head poses are very diverse and often hard to be detected by a CNN-based face detector." }, { "dkey": "SNIPS", "dval": "The SNIPS Natural Language Understanding benchmark is a dataset of over 16,000 crowdsourced queries distributed among 7 user intents of various complexity:\n\n\nSearchCreativeWork (e.g. Find me the I, Robot television show),\nGetWeather (e.g. Is it windy in Boston, MA right now?),\nBookRestaurant (e.g. I want to book a highly rated restaurant in Paris tomorrow night),\nPlayMusic (e.g. Play the last track from Beyoncé off Spotify),\nAddToPlaylist (e.g. Add Diamonds to my roadtrip playlist),\nRateBook (e.g. Give 6 stars to Of Mice and Men),\nSearchScreeningEvent (e.g. Check the showtimes for Wonder Woman in Paris).\nThe training set contains of 13,084 utterances, the validation set and the test set contain 700 utterances each, with 100 queries per intent." }, { "dkey": "UAVA", "dval": "The UAVA,<i>UAV-Assistant</i>, dataset is specifically designed for fostering applications which consider UAVs and humans as cooperative agents.\nWe employ a real-world 3D scanned dataset (<a href=\"https://niessner.github.io/Matterport/\">Matterport3D</a>), physically-based rendering, a gamified simulator for realistic drone navigation trajectory collection, to generate realistic multimodal data both from the user’s exocentric view of the drone, as well as the drone’s egocentric view." 
}, { "dkey": "NewsQA", "dval": "The NewsQA dataset is a crowd-sourced machine reading comprehension dataset of 120,000 question-answer pairs.\n\n\nDocuments are CNN news articles.\nQuestions are written by human users in natural language.\nAnswers may be multiword passages of the source text.\nQuestions may be unanswerable.\nNewsQA is collected using a 3-stage, siloed process.\nQuestioners see only an article’s headline and highlights.\nAnswerers see the question and the full article, then select an answer passage.\nValidators see the article, the question, and a set of answers that they rank.\nNewsQA is more natural and more challenging than previous datasets." } ]
RMSNorm for deep neural networks.
image classification images
2,019
[ "UNITOPATHO", "GoPro", "COVIDx", "COWC", "UNSW-NB15", "PROTEINS", "IPN Hand" ]
[ "COCO", "CIFAR-10" ]
[ { "dkey": "COCO", "dval": "The MS COCO (Microsoft Common Objects in Context) dataset is a large-scale object detection, segmentation, key-point detection, and captioning dataset. The dataset consists of 328K images.\n\nSplits:\nThe first version of MS COCO dataset was released in 2014. It contains 164K images split into training (83K), validation (41K) and test (41K) sets. In 2015 additional test set of 81K images was released, including all the previous test images and 40K new images.\n\nBased on community feedback, in 2017 the training/validation split was changed from 83K/41K to 118K/5K. The new split uses the same images and annotations. The 2017 test set is a subset of 41K images of the 2015 test set. Additionally, the 2017 release contains a new unannotated dataset of 123K images.\n\nAnnotations:\nThe dataset has annotations for\n\n\nobject detection: bounding boxes and per-instance segmentation masks with 80 object categories,\ncaptioning: natural language descriptions of the images (see MS COCO Captions),\nkeypoints detection: containing more than 200,000 images and 250,000 person instances labeled with keypoints (17 possible keypoints, such as left eye, nose, right hip, right ankle),\nstuff image segmentation – per-pixel segmentation masks with 91 stuff categories, such as grass, wall, sky (see MS COCO Stuff),\npanoptic: full scene segmentation, with 80 thing categories (such as person, bicycle, elephant) and a subset of 91 stuff categories (grass, sky, road),\ndense pose: more than 39,000 images and 56,000 person instances labeled with DensePose annotations – each labeled person is annotated with an instance id and a mapping between image pixels that belong to that person body and a template 3D model.\nThe annotations are publicly available only for training and validation images." }, { "dkey": "CIFAR-10", "dval": "The CIFAR-10 dataset (Canadian Institute for Advanced Research, 10 classes) is a subset of the Tiny Images dataset and consists of 60000 32x32 color images. The images are labelled with one of 10 mutually exclusive classes: airplane, automobile (but not truck or pickup truck), bird, cat, deer, dog, frog, horse, ship, and truck (but not pickup truck). There are 6000 images per class with 5000 training and 1000 testing images per class.\n\nThe criteria for deciding whether an image belongs to a class were as follows:\n\n\nThe class name should be high on the list of likely answers to the question “What is in this picture?”\nThe image should be photo-realistic. Labelers were instructed to reject line drawings.\nThe image should contain only one prominent instance of the object to which the class refers.\nThe object may be partially occluded or seen from an unusual viewpoint as long as its identity is still clear to the labeler." }, { "dkey": "UNITOPATHO", "dval": "Histopathological characterization of colorectal polyps allows to tailor patients' management and follow up with the ultimate aim of avoiding or promptly detecting an invasive carcinoma. Colorectal polyps characterization relies on the histological analysis of tissue samples to determine the polyps malignancy and dysplasia grade. Deep neural networks achieve outstanding accuracy in medical patterns recognition, however they require large sets of annotated training images. We introduce UniToPatho, an annotated dataset of 9536 hematoxylin and eosin stained patches extracted from 292 whole-slide images, meant for training deep neural networks for colorectal polyps classification and adenomas grading. 
The slides are acquired through a Hamamatsu Nanozoomer S210 scanner at 20× magnification (0.4415 μm/px)" }, { "dkey": "GoPro", "dval": "The GoPro dataset for deblurring consists of 3,214 blurred images with the size of 1,280×720 that are divided into 2,103 training images and 1,111 test images. The dataset consists of pairs of a realistic blurry image and the corresponding ground truth sharp image that are obtained by a high-speed camera." }, { "dkey": "COVIDx", "dval": "An open access benchmark dataset comprising 13,975 CXR images across 13,870 patient cases, with the largest number of publicly available COVID-19 positive cases to the best of the authors' knowledge." }, { "dkey": "COWC", "dval": "The Cars Overhead With Context (COWC) data set is a large set of annotated cars from overhead. It is useful for training a device such as a deep neural network to learn to detect and/or count cars." }, { "dkey": "UNSW-NB15", "dval": "UNSW-NB15 is a network intrusion dataset. It contains nine different attacks, including DoS, worms, Backdoors, and Fuzzers. The dataset contains raw network packets. The training set contains 175,341 records and the testing set 82,332 records, covering both attack and normal types.\n\nPaper: UNSW-NB15: a comprehensive data set for network intrusion detection systems" }, { "dkey": "PROTEINS", "dval": "PROTEINS is a dataset of proteins that are classified as enzymes or non-enzymes. Nodes represent the amino acids and two nodes are connected by an edge if they are less than 6 Angstroms apart." }, { "dkey": "IPN Hand", "dval": "The IPN Hand dataset is a benchmark video dataset with sufficient size, variation, and real-world elements to train and evaluate deep neural networks for continuous Hand Gesture Recognition (HGR)." } ]
We conduct the first adversarial attacks on pose-estimation networks. We explore different architectures and
adversarial attacks human pose estimation images
2,019
[ "UNSW-NB15", "FDF", "LSP", "MSU-MFSD", "APRICOT", "Make3D" ]
[ "MPII", "COCO" ]
[ { "dkey": "MPII", "dval": "The MPII Human Pose Dataset for single person pose estimation is composed of about 25K images of which 15K are training samples, 3K are validation samples and 7K are testing samples (which labels are withheld by the authors). The images are taken from YouTube videos covering 410 different human activities and the poses are manually annotated with up to 16 body joints." }, { "dkey": "COCO", "dval": "The MS COCO (Microsoft Common Objects in Context) dataset is a large-scale object detection, segmentation, key-point detection, and captioning dataset. The dataset consists of 328K images.\n\nSplits:\nThe first version of MS COCO dataset was released in 2014. It contains 164K images split into training (83K), validation (41K) and test (41K) sets. In 2015 additional test set of 81K images was released, including all the previous test images and 40K new images.\n\nBased on community feedback, in 2017 the training/validation split was changed from 83K/41K to 118K/5K. The new split uses the same images and annotations. The 2017 test set is a subset of 41K images of the 2015 test set. Additionally, the 2017 release contains a new unannotated dataset of 123K images.\n\nAnnotations:\nThe dataset has annotations for\n\n\nobject detection: bounding boxes and per-instance segmentation masks with 80 object categories,\ncaptioning: natural language descriptions of the images (see MS COCO Captions),\nkeypoints detection: containing more than 200,000 images and 250,000 person instances labeled with keypoints (17 possible keypoints, such as left eye, nose, right hip, right ankle),\nstuff image segmentation – per-pixel segmentation masks with 91 stuff categories, such as grass, wall, sky (see MS COCO Stuff),\npanoptic: full scene segmentation, with 80 thing categories (such as person, bicycle, elephant) and a subset of 91 stuff categories (grass, sky, road),\ndense pose: more than 39,000 images and 56,000 person instances labeled with DensePose annotations – each labeled person is annotated with an instance id and a mapping between image pixels that belong to that person body and a template 3D model.\nThe annotations are publicly available only for training and validation images." }, { "dkey": "UNSW-NB15", "dval": "UNSW-NB15 is a network intrusion dataset. It contains nine different attacks, includes DoS, worms, Backdoors, and Fuzzers. The dataset contains raw network packets. The number of records in the training set is 175,341 records and the testing set is 82,332 records from the different types, attack and normal.\n\nPaper: UNSW-NB15: a comprehensive data set for network intrusion detection systems" }, { "dkey": "FDF", "dval": "A diverse dataset of human faces, including unconventional poses, occluded faces, and a vast variability in backgrounds." }, { "dkey": "LSP", "dval": "The Leeds Sports Pose (LSP) dataset is widely used as the benchmark for human pose estimation. The original LSP dataset contains 2,000 images of sportspersons gathered from Flickr, 1000 for training and 1000 for testing. Each image is annotated with 14 joint locations, where left and right joints are consistently labelled from a person-centric viewpoint. The extended LSP dataset contains additional 10,000 images labeled for training.\n\nImage: Sumer et al" }, { "dkey": "MSU-MFSD", "dval": "The MSU-MFSD dataset contains 280 video recordings of genuine and attack faces. 35 individuals have participated in the development of this database with a total of 280 videos. 
Two kinds of cameras with different resolutions (720×480 and 640×480) were used to record the videos from the 35 individuals. For the real accesses, each individual has two video recordings, captured with the laptop camera and the Android camera, respectively. For the video attacks, two types of cameras, the iPhone and the Canon camera, were used to capture high-definition videos of each subject. The videos taken with the Canon camera were then replayed on an iPad Air screen to generate the HD replay attacks, while the videos recorded by the iPhone were replayed on the iPhone itself to generate the mobile replay attacks. Photo attacks were produced by printing the 35 subjects’ photos on A3 paper using an HP colour printer. The videos of the 35 individuals were divided into training (15 subjects with 120 videos) and testing (20 subjects with 160 videos) sets, respectively." }, { "dkey": "APRICOT", "dval": "APRICOT is a collection of over 1,000 annotated photographs of printed adversarial patches in public locations. The patches target several object categories for three COCO-trained detection models, and the photos represent natural variation in position, distance, lighting conditions, and viewing angle." }, { "dkey": "Make3D", "dval": "The Make3D dataset is a monocular Depth Estimation dataset that contains 400 single training RGB and depth map pairs, and 134 test samples. The RGB images have high resolution, while the depth maps are provided at low resolution." } ]
We propose to learn a deep convolutional neural network from a small dataset. The main novelty is the incorporation
image classification
2,015
[ "GoPro", "COVIDx", "Places", "Birdsnap", "COWC" ]
[ "ImageNet", "CIFAR-10" ]
[ { "dkey": "ImageNet", "dval": "The ImageNet dataset contains 14,197,122 annotated images according to the WordNet hierarchy. Since 2010 the dataset is used in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), a benchmark in image classification and object detection.\nThe publicly released dataset contains a set of manually annotated training images. A set of test images is also released, with the manual annotations withheld.\nILSVRC annotations fall into one of two categories: (1) image-level annotation of a binary label for the presence or absence of an object class in the image, e.g., “there are cars in this image” but “there are no tigers,” and (2) object-level annotation of a tight bounding box and class label around an object instance in the image, e.g., “there is a screwdriver centered at position (20,25) with width of 50 pixels and height of 30 pixels”.\nThe ImageNet project does not own the copyright of the images, therefore only thumbnails and URLs of images are provided.\n\n\nTotal number of non-empty WordNet synsets: 21841\nTotal number of images: 14197122\nNumber of images with bounding box annotations: 1,034,908\nNumber of synsets with SIFT features: 1000\nNumber of images with SIFT features: 1.2 million" }, { "dkey": "CIFAR-10", "dval": "The CIFAR-10 dataset (Canadian Institute for Advanced Research, 10 classes) is a subset of the Tiny Images dataset and consists of 60000 32x32 color images. The images are labelled with one of 10 mutually exclusive classes: airplane, automobile (but not truck or pickup truck), bird, cat, deer, dog, frog, horse, ship, and truck (but not pickup truck). There are 6000 images per class with 5000 training and 1000 testing images per class.\n\nThe criteria for deciding whether an image belongs to a class were as follows:\n\n\nThe class name should be high on the list of likely answers to the question “What is in this picture?”\nThe image should be photo-realistic. Labelers were instructed to reject line drawings.\nThe image should contain only one prominent instance of the object to which the class refers.\nThe object may be partially occluded or seen from an unusual viewpoint as long as its identity is still clear to the labeler." }, { "dkey": "GoPro", "dval": "The GoPro dataset for deblurring consists of 3,214 blurred images with the size of 1,280×720 that are divided into 2,103 training images and 1,111 test images. The dataset consists of pairs of a realistic blurry image and the corresponding ground truth sharp image that are obtained by a high-speed camera." }, { "dkey": "COVIDx", "dval": "An open access benchmark dataset comprising 13,975 CXR images across 13,870 patient cases, with the largest number of publicly available COVID-19 positive cases to the best of the authors' knowledge." }, { "dkey": "Places", "dval": "The Places dataset is proposed for scene recognition and contains more than 2.5 million images covering more than 205 scene categories with more than 5,000 images per category." }, { "dkey": "Birdsnap", "dval": "Birdsnap is a large bird dataset consisting of 49,829 images from 500 bird species with 47,386 images used for training and 2,443 images used for testing." }, { "dkey": "COWC", "dval": "The Cars Overhead With Context (COWC) data set is a large set of annotated cars from overhead. It is useful for training a device such as a deep neural network to learn to detect and/or count cars." } ]
I want to train a model for image-text matching, which can be used
image-text matching images sentences
2,018
[ "FaceForensics", "SNIPS", "Market-1501", "ConvAI2" ]
[ "ImageNet", "Flickr30k" ]
[ { "dkey": "ImageNet", "dval": "The ImageNet dataset contains 14,197,122 annotated images according to the WordNet hierarchy. Since 2010 the dataset is used in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), a benchmark in image classification and object detection.\nThe publicly released dataset contains a set of manually annotated training images. A set of test images is also released, with the manual annotations withheld.\nILSVRC annotations fall into one of two categories: (1) image-level annotation of a binary label for the presence or absence of an object class in the image, e.g., “there are cars in this image” but “there are no tigers,” and (2) object-level annotation of a tight bounding box and class label around an object instance in the image, e.g., “there is a screwdriver centered at position (20,25) with width of 50 pixels and height of 30 pixels”.\nThe ImageNet project does not own the copyright of the images, therefore only thumbnails and URLs of images are provided.\n\n\nTotal number of non-empty WordNet synsets: 21841\nTotal number of images: 14197122\nNumber of images with bounding box annotations: 1,034,908\nNumber of synsets with SIFT features: 1000\nNumber of images with SIFT features: 1.2 million" }, { "dkey": "Flickr30k", "dval": "The Flickr30k dataset contains 31,000 images collected from Flickr, together with 5 reference sentences provided by human annotators." }, { "dkey": "FaceForensics", "dval": "FaceForensics is a video dataset consisting of more than 500,000 frames containing faces from 1004 videos that can be used to study image or video forgeries. All videos are downloaded from Youtube and are cut down to short continuous clips that contain mostly frontal faces. This dataset has two versions:\n\n\n\nSource-to-Target: where the authors reenact over 1000 videos with new facial expressions extracted from other videos, which e.g. can be used to train a classifier to detect fake images or videos.\n\n\n\nSelfreenactment: where the authors use Face2Face to reenact the facial expressions of videos with their own facial expressions as input to get pairs of videos, which e.g. can be used to train supervised generative refinement models." }, { "dkey": "SNIPS", "dval": "The SNIPS Natural Language Understanding benchmark is a dataset of over 16,000 crowdsourced queries distributed among 7 user intents of various complexity:\n\n\nSearchCreativeWork (e.g. Find me the I, Robot television show),\nGetWeather (e.g. Is it windy in Boston, MA right now?),\nBookRestaurant (e.g. I want to book a highly rated restaurant in Paris tomorrow night),\nPlayMusic (e.g. Play the last track from Beyoncé off Spotify),\nAddToPlaylist (e.g. Add Diamonds to my roadtrip playlist),\nRateBook (e.g. Give 6 stars to Of Mice and Men),\nSearchScreeningEvent (e.g. Check the showtimes for Wonder Woman in Paris).\nThe training set contains of 13,084 utterances, the validation set and the test set contain 700 utterances each, with 100 queries per intent." }, { "dkey": "Market-1501", "dval": "Market-1501 is a large-scale public benchmark dataset for person re-identification. It contains 1501 identities which are captured by six different cameras, and 32,668 pedestrian image bounding-boxes obtained using the Deformable Part Models pedestrian detector. Each person has 3.6 images on average at each viewpoint. The dataset is split into two parts: 750 identities are utilized for training and the remaining 751 identities are used for testing. 
In the official testing protocol 3,368 query images are selected as probe set to find the correct match across 19,732 reference gallery images." }, { "dkey": "ConvAI2", "dval": "The ConvAI2 NeurIPS competition aimed at finding approaches to creating high-quality dialogue agents capable of meaningful open domain conversation. The ConvAI2 dataset for training models is based on the PERSONA-CHAT dataset. The speaker pairs each have assigned profiles coming from a set of 1155 possible personas (at training time), each consisting of at least 5 profile sentences, setting aside 100 never seen before personas for validation. As the original PERSONA-CHAT test set was released, a new hidden test set consisted of 100 new personas and over 1,015 dialogs was created by crowdsourced workers.\n\nTo avoid modeling that takes advantage of trivial word overlap, additional rewritten sets of the same train and test personas were crowdsourced, with related sentences that are rephrases, generalizations or specializations, rendering the task much more challenging. For example “I just got my nails done” is revised as “I love to pamper myself on a regular basis” and “I am on a diet now” is revised as “I need to lose weight.”\n\nThe training, validation and hidden test sets consists of 17,878, 1,000 and 1,015 dialogues, respectively." } ]
I want to create a dataset to evaluate different methods for visual navigation, such as frame-
visual navigation images events
2,016
[ "TUM monoVO", "ACDC", "Newspaper Navigator", "YouCook", "Virtual KITTI" ]
[ "Middlebury", "KITTI" ]
[ { "dkey": "Middlebury", "dval": "The Middlebury Stereo dataset consists of high-resolution stereo sequences with complex geometry and pixel-accurate ground-truth disparity data. The ground-truth disparities are acquired using a novel technique that employs structured lighting and does not require the calibration of the light projectors." }, { "dkey": "KITTI", "dval": "KITTI (Karlsruhe Institute of Technology and Toyota Technological Institute) is one of the most popular datasets for use in mobile robotics and autonomous driving. It consists of hours of traffic scenarios recorded with a variety of sensor modalities, including high-resolution RGB, grayscale stereo cameras, and a 3D laser scanner. Despite its popularity, the dataset itself does not contain ground truth for semantic segmentation. However, various researchers have manually annotated parts of the dataset to fit their necessities. Álvarez et al. generated ground truth for 323 images from the road detection challenge with three classes: road, vertical, and sky. Zhang et al. annotated 252 (140 for training and 112 for testing) acquisitions – RGB and Velodyne scans – from the tracking challenge for ten object categories: building, sky, road, vegetation, sidewalk, car, pedestrian, cyclist, sign/pole, and fence. Ros et al. labeled 170 training images and 46 testing images (from the visual odometry challenge) with 11 classes: building, tree, sky, car, sign, road, pedestrian, fence, pole, sidewalk, and bicyclist." }, { "dkey": "TUM monoVO", "dval": "TUM monoVO is a dataset for evaluating the tracking accuracy of monocular Visual Odometry (VO) and SLAM methods. It contains 50 real-world sequences comprising over 100 minutes of video, recorded across different environments – ranging from narrow indoor corridors to wide outdoor scenes.\nAll sequences contain mostly exploring camera motion, starting and ending at the same position: this allows to evaluate tracking accuracy via the accumulated drift from start to end, without requiring ground-truth for the full sequence.\nIn contrast to existing datasets, all sequences are photometrically calibrated: the dataset creators provide the exposure times for each frame as reported by the sensor, the camera response function and the lens attenuation factors (vignetting)." }, { "dkey": "ACDC", "dval": "The goal of the Automated Cardiac Diagnosis Challenge (ACDC) challenge is to:\n\n\ncompare the performance of automatic methods on the segmentation of the left ventricular endocardium and epicardium as the right ventricular endocardium for both end diastolic and end systolic phase instances;\ncompare the performance of automatic methods for the classification of the examinations in five classes (normal case, heart failure with infarction, dilated cardiomyopathy, hypertrophic cardiomyopathy, abnormal right ventricle).\n\nThe overall ACDC dataset was created from real clinical exams acquired at the University Hospital of Dijon. Acquired data were fully anonymized and handled within the regulations set by the local ethical committee of the Hospital of Dijon (France). Our dataset covers several well-defined pathologies with enough cases to (1) properly train machine learning methods and (2) clearly assess the variations of the main physiological parameters obtained from cine-MRI (in particular diastolic volume and ejection fraction). 
The dataset is composed of 150 exams (all from different patients) divided into 5 evenly distributed subgroups (4 pathological plus 1 healthy subject groups) as described below. Furthermore, each patient comes with the following additional information : weight, height, as well as the diastolic and systolic phase instants.\n\nThe database is made available to participants through two datasets from the dedicated online evaluation website after a personal registration: i) a training dataset of 100 patients along with the corresponding manual references based on the analysis of one clinical expert; ii) a testing dataset composed of 50 new patients, without manual annotations but with the patient information given above. The raw input images are provided through the Nifti format." }, { "dkey": "Newspaper Navigator", "dval": "The largest dataset of extracted visual content from historic newspapers ever produced. The Newspaper Navigator dataset, finetuned visual content recognition model." }, { "dkey": "YouCook", "dval": "This data set was prepared from 88 open-source YouTube cooking videos. The YouCook dataset contains videos of people cooking various recipes. The videos were downloaded from YouTube and are all in the third-person viewpoint; they represent a significantly more challenging visual problem than existing cooking and kitchen datasets (the background kitchen/scene is different for many and most videos have dynamic camera changes). In addition, frame-by-frame object and action annotations are provided for training data (as well as a number of precomputed low-level features). Finally, each video has a number of human provided natural language descriptions (on average, there are eight different descriptions per video). This dataset has been created to serve as a benchmark in describing complex real-world videos with natural language descriptions." }, { "dkey": "Virtual KITTI", "dval": "Virtual KITTI is a photo-realistic synthetic video dataset designed to learn and evaluate computer vision models for several video understanding tasks: object detection and multi-object tracking, scene-level and instance-level semantic segmentation, optical flow, and depth estimation.\n\nVirtual KITTI contains 50 high-resolution monocular videos (21,260 frames) generated from five different virtual worlds in urban settings under different imaging and weather conditions. These worlds were created using the Unity game engine and a novel real-to-virtual cloning method. These photo-realistic synthetic videos are automatically, exactly, and fully annotated for 2D and 3D multi-object tracking and at the pixel level with category, instance, flow, and depth labels (cf. below for download links)." } ]
I want to scan a trained model to check whether it is trojaned.
trojan detection images
2,019
[ "SNIPS", "UAVA", "Scan2CAD", "SIZER", "ConvAI2", "People Snapshot Dataset" ]
[ "ImageNet", "CIFAR-10" ]
[ { "dkey": "ImageNet", "dval": "The ImageNet dataset contains 14,197,122 annotated images according to the WordNet hierarchy. Since 2010 the dataset is used in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), a benchmark in image classification and object detection.\nThe publicly released dataset contains a set of manually annotated training images. A set of test images is also released, with the manual annotations withheld.\nILSVRC annotations fall into one of two categories: (1) image-level annotation of a binary label for the presence or absence of an object class in the image, e.g., “there are cars in this image” but “there are no tigers,” and (2) object-level annotation of a tight bounding box and class label around an object instance in the image, e.g., “there is a screwdriver centered at position (20,25) with width of 50 pixels and height of 30 pixels”.\nThe ImageNet project does not own the copyright of the images, therefore only thumbnails and URLs of images are provided.\n\n\nTotal number of non-empty WordNet synsets: 21841\nTotal number of images: 14197122\nNumber of images with bounding box annotations: 1,034,908\nNumber of synsets with SIFT features: 1000\nNumber of images with SIFT features: 1.2 million" }, { "dkey": "CIFAR-10", "dval": "The CIFAR-10 dataset (Canadian Institute for Advanced Research, 10 classes) is a subset of the Tiny Images dataset and consists of 60000 32x32 color images. The images are labelled with one of 10 mutually exclusive classes: airplane, automobile (but not truck or pickup truck), bird, cat, deer, dog, frog, horse, ship, and truck (but not pickup truck). There are 6000 images per class with 5000 training and 1000 testing images per class.\n\nThe criteria for deciding whether an image belongs to a class were as follows:\n\n\nThe class name should be high on the list of likely answers to the question “What is in this picture?”\nThe image should be photo-realistic. Labelers were instructed to reject line drawings.\nThe image should contain only one prominent instance of the object to which the class refers.\nThe object may be partially occluded or seen from an unusual viewpoint as long as its identity is still clear to the labeler." }, { "dkey": "SNIPS", "dval": "The SNIPS Natural Language Understanding benchmark is a dataset of over 16,000 crowdsourced queries distributed among 7 user intents of various complexity:\n\n\nSearchCreativeWork (e.g. Find me the I, Robot television show),\nGetWeather (e.g. Is it windy in Boston, MA right now?),\nBookRestaurant (e.g. I want to book a highly rated restaurant in Paris tomorrow night),\nPlayMusic (e.g. Play the last track from Beyoncé off Spotify),\nAddToPlaylist (e.g. Add Diamonds to my roadtrip playlist),\nRateBook (e.g. Give 6 stars to Of Mice and Men),\nSearchScreeningEvent (e.g. Check the showtimes for Wonder Woman in Paris).\nThe training set contains of 13,084 utterances, the validation set and the test set contain 700 utterances each, with 100 queries per intent." }, { "dkey": "UAVA", "dval": "The UAVA,<i>UAV-Assistant</i>, dataset is specifically designed for fostering applications which consider UAVs and humans as cooperative agents.\nWe employ a real-world 3D scanned dataset (<a href=\"https://niessner.github.io/Matterport/\">Matterport3D</a>), physically-based rendering, a gamified simulator for realistic drone navigation trajectory collection, to generate realistic multimodal data both from the user’s exocentric view of the drone, as well as the drone’s egocentric view." 
}, { "dkey": "Scan2CAD", "dval": "Scan2CAD is an alignment dataset based on 1506 ScanNet scans with 97607 annotated keypoints pairs between 14225 (3049 unique) CAD models from ShapeNet and their counterpart objects in the scans. The top 3 annotated model classes are chairs, tables and cabinets which arises due to the nature of indoor scenes in ScanNet. The number of objects aligned per scene ranges from 1 to 40 with an average of 9.3.\n\nAdditionally, all ShapeNet CAD models used in the Scan2CAD dataset are annotated with their rotational symmetries: either none, 2-fold, 4-fold or infinite rotational symmetries around a canonical axis of the object." }, { "dkey": "SIZER", "dval": "Dataset of clothing size variation which includes different subjects wearing casual clothing items in various sizes, totaling to approximately 2000 scans. This dataset includes the scans, registrations to the SMPL model, scans segmented in clothing parts, garment category and size labels." }, { "dkey": "ConvAI2", "dval": "The ConvAI2 NeurIPS competition aimed at finding approaches to creating high-quality dialogue agents capable of meaningful open domain conversation. The ConvAI2 dataset for training models is based on the PERSONA-CHAT dataset. The speaker pairs each have assigned profiles coming from a set of 1155 possible personas (at training time), each consisting of at least 5 profile sentences, setting aside 100 never seen before personas for validation. As the original PERSONA-CHAT test set was released, a new hidden test set consisted of 100 new personas and over 1,015 dialogs was created by crowdsourced workers.\n\nTo avoid modeling that takes advantage of trivial word overlap, additional rewritten sets of the same train and test personas were crowdsourced, with related sentences that are rephrases, generalizations or specializations, rendering the task much more challenging. For example “I just got my nails done” is revised as “I love to pamper myself on a regular basis” and “I am on a diet now” is revised as “I need to lose weight.”\n\nThe training, validation and hidden test sets consists of 17,878, 1,000 and 1,015 dialogues, respectively." }, { "dkey": "People Snapshot Dataset", "dval": "Enables detailed human body model reconstruction in clothing from a single monocular RGB video without requiring a pre scanned template or manually clicked points." } ]
Multi-expert Gender Classification on Age Group (MGA).
age gender classification image
2,018
[ "BanglaLekha-Isolated", "aGender", "FairFace", "FFHQ-Aging", "Celeb-DF" ]
[ "Adience", "MORPH" ]
[ { "dkey": "Adience", "dval": "The Adience dataset, published in 2014, contains 26,580 photos across 2,284 subjects with a binary gender label and one label from eight different age groups, partitioned into five splits. The key principle of the data set is to capture the images as close to real world conditions as possible, including all variations in appearance, pose, lighting conditions and image quality, to name a few." }, { "dkey": "MORPH", "dval": "MORPH is a facial age estimation dataset, which contains 55,134 facial images of 13,617 subjects ranging from 16 to 77 years old." }, { "dkey": "BanglaLekha-Isolated", "dval": "This dataset contains Bangla handwritten numerals, basic characters and compound characters. The dataset was collected from multiple geographical locations within Bangladesh and includes samples collected from a variety of age groups. It can also be used for other classification problems, e.g. gender, age and district." }, { "dkey": "aGender", "dval": "The aGender corpus contains audio recordings of predefined utterances and free speech produced by humans of different age and gender. Each utterance is labeled as one of four age groups: Child, Youth, Adult, Senior, and as one of three gender classes: Female, Male and Child." }, { "dkey": "FairFace", "dval": "FairFace is a face image dataset which is race balanced. It contains 108,501 images from 7 different race groups: White, Black, Indian, East Asian, Southeast Asian, Middle Eastern, and Latino. Images were collected from the YFCC-100M Flickr dataset and labeled with race, gender, and age groups." }, { "dkey": "FFHQ-Aging", "dval": "FFHQ-Aging is a dataset of human faces designed for benchmarking age transformation algorithms as well as many other possible vision tasks.\nThis dataset is an extension of the NVIDIA FFHQ dataset; on top of the 70,000 original FFHQ images, it also contains the following information for each image:\n* Gender information (male/female with confidence score)\n* Age group information (10 classes with confidence score)\n* Head pose (pitch, roll & yaw)\n* Glasses type (none, normal or dark)\n* Eye occlusion score (0-100, different score for each eye)\n* Full semantic map (19 classes, based on CelebAMask-HQ labels)" }, { "dkey": "Celeb-DF", "dval": "Celeb-DF is a large-scale challenging dataset for deepfake forensics. It includes 590 original videos collected from YouTube with subjects of different ages, ethnic groups and genders, and 5639 corresponding DeepFake videos." } ]
I want to train a semi-supervised model for image classification from labeled and unlabeled samples.
image classification images
2,019
[ "EMBER", "VoxPopuli", "DCASE 2018 Task 4", "PanNuke", "Fakeddit" ]
[ "ImageNet", "CIFAR-10" ]
[ { "dkey": "ImageNet", "dval": "The ImageNet dataset contains 14,197,122 annotated images according to the WordNet hierarchy. Since 2010 the dataset is used in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), a benchmark in image classification and object detection.\nThe publicly released dataset contains a set of manually annotated training images. A set of test images is also released, with the manual annotations withheld.\nILSVRC annotations fall into one of two categories: (1) image-level annotation of a binary label for the presence or absence of an object class in the image, e.g., “there are cars in this image” but “there are no tigers,” and (2) object-level annotation of a tight bounding box and class label around an object instance in the image, e.g., “there is a screwdriver centered at position (20,25) with width of 50 pixels and height of 30 pixels”.\nThe ImageNet project does not own the copyright of the images, therefore only thumbnails and URLs of images are provided.\n\n\nTotal number of non-empty WordNet synsets: 21841\nTotal number of images: 14197122\nNumber of images with bounding box annotations: 1,034,908\nNumber of synsets with SIFT features: 1000\nNumber of images with SIFT features: 1.2 million" }, { "dkey": "CIFAR-10", "dval": "The CIFAR-10 dataset (Canadian Institute for Advanced Research, 10 classes) is a subset of the Tiny Images dataset and consists of 60000 32x32 color images. The images are labelled with one of 10 mutually exclusive classes: airplane, automobile (but not truck or pickup truck), bird, cat, deer, dog, frog, horse, ship, and truck (but not pickup truck). There are 6000 images per class with 5000 training and 1000 testing images per class.\n\nThe criteria for deciding whether an image belongs to a class were as follows:\n\n\nThe class name should be high on the list of likely answers to the question “What is in this picture?”\nThe image should be photo-realistic. Labelers were instructed to reject line drawings.\nThe image should contain only one prominent instance of the object to which the class refers.\nThe object may be partially occluded or seen from an unusual viewpoint as long as its identity is still clear to the labeler." }, { "dkey": "EMBER", "dval": "A labeled benchmark dataset for training machine learning models to statically detect malicious Windows portable executable files. The dataset includes features extracted from 1.1M binary files: 900K training samples (300K malicious, 300K benign, 300K unlabeled) and 200K test samples (100K malicious, 100K benign)." }, { "dkey": "VoxPopuli", "dval": "VoxPopuli is a large-scale multilingual corpus providing 100K hours of unlabelled speech data in 23 languages. It is the largest open data to date for unsupervised representation learning as well as semi-supervised learning. VoxPopuli also contains 1.8K hours of transcribed speeches in 16 languages and their aligned oral interpretations into 5 other languages totaling 5.1K hours." }, { "dkey": "DCASE 2018 Task 4", "dval": "DCASE2018 Task 4 is a dataset for large-scale weakly labeled semi-supervised sound event detection in domestic environments. The data are YouTube video excerpts focusing on domestic context which could be used for example in ambient assisted living applications. The domain was chosen due to the scientific challenges (wide variety of sounds, time-localized events...) 
and potential industrial applications.\nSpecifically, the task employs a subset of “Audioset: An Ontology And Human-Labeled Dataset For Audio Events” by Google. Audioset consists of an expanding ontology of 632 sound event classes and a collection of 2 million human-labeled 10-second sound clips (less than 21% are shorter than 10-seconds) drawn from 2 million Youtube videos. The ontology is specified as a hierarchical graph of event categories, covering a wide range of human and animal sounds, musical instruments and genres, and common everyday environmental sounds.\nTask 4 focuses on a subset of Audioset that consists of 10 classes of sound events: speech, dog, cat, alarm bell ringing, dishes, frying, blender, running water, vacuum cleaner, electric shaver toothbrush." }, { "dkey": "PanNuke", "dval": "PanNuke is a semi automatically generated nuclei instance segmentation and classification dataset with exhaustive nuclei labels across 19 different tissue types. The dataset consists of 481 visual fields, of which 312 are randomly sampled from more than 20K whole slide images at different magnifications, from multiple data sources. In total the dataset contains 205,343 labeled nuclei, each with an instance segmentation mask." }, { "dkey": "Fakeddit", "dval": "Fakeddit is a novel multimodal dataset for fake news detection consisting of over 1 million samples from multiple categories of fake news. After being processed through several stages of review, the samples are labeled according to 2-way, 3-way, and 6-way classification categories through distant supervision." } ]
A comprehensive survey of recent advances in object detection from RGB images
rgb-d object detection images depth maps
2,019
[ "CLOTH", "LAMBADA", "YAGO", "FAT", "THEODORE", "Kitchen Scenes" ]
[ "LFSD", "Caltech-101", "KITTI", "ModelNet" ]
[ { "dkey": "LFSD", "dval": "The Light Field Saliency Database (LFSD) contains 100 light fields with 360×360 spatial resolution. A rough focal stack and an all-focus image are provided for each light field. The images in this dataset usually have one salient foreground object and a background with good color contrast." }, { "dkey": "Caltech-101", "dval": "The Caltech101 dataset contains images from 101 object categories (e.g., “helicopter”, “elephant” and “chair” etc.) and a background category that contains the images not from the 101 object categories. For each object category, there are about 40 to 800 images, while most classes have about 50 images. The resolution of the image is roughly about 300×200 pixels." }, { "dkey": "KITTI", "dval": "KITTI (Karlsruhe Institute of Technology and Toyota Technological Institute) is one of the most popular datasets for use in mobile robotics and autonomous driving. It consists of hours of traffic scenarios recorded with a variety of sensor modalities, including high-resolution RGB, grayscale stereo cameras, and a 3D laser scanner. Despite its popularity, the dataset itself does not contain ground truth for semantic segmentation. However, various researchers have manually annotated parts of the dataset to fit their necessities. Álvarez et al. generated ground truth for 323 images from the road detection challenge with three classes: road, vertical, and sky. Zhang et al. annotated 252 (140 for training and 112 for testing) acquisitions – RGB and Velodyne scans – from the tracking challenge for ten object categories: building, sky, road, vegetation, sidewalk, car, pedestrian, cyclist, sign/pole, and fence. Ros et al. labeled 170 training images and 46 testing images (from the visual odometry challenge) with 11 classes: building, tree, sky, car, sign, road, pedestrian, fence, pole, sidewalk, and bicyclist." }, { "dkey": "ModelNet", "dval": "The ModelNet40 dataset contains synthetic object point clouds. As the most widely used benchmark for point cloud analysis, ModelNet40 is popular because of its various categories, clean shapes, well-constructed dataset, etc. The original ModelNet40 consists of 12,311 CAD-generated meshes in 40 categories (such as airplane, car, plant, lamp), of which 9,843 are used for training while the rest 2,468 are reserved for testing. The corresponding point cloud data points are uniformly sampled from the mesh surfaces, and then further preprocessed by moving to the origin and scaling into a unit sphere." }, { "dkey": "CLOTH", "dval": "The Cloze Test by Teachers (CLOTH) benchmark is a collection of nearly 100,000 4-way multiple-choice cloze-style questions from middle- and high school-level English language exams, where the answer fills a blank in a given text. Each question is labeled with a type of deep reasoning it involves, where the four possible types are grammar, short-term reasoning, matching/paraphrasing, and long-term reasoning, i.e., reasoning over multiple sentences" }, { "dkey": "LAMBADA", "dval": "The LAMBADA (LAnguage Modeling Broadened to Account for Discourse Aspects) benchmark is an open-ended cloze task which consists of about 10,000 passages from BooksCorpus where a missing target word is predicted in the last sentence of each passage. The missing word is constrained to always be the last word of the last sentence and there are no candidate words to choose from. 
Examples were filtered by humans to ensure they were possible to guess given the context, i.e., the sentences in the passage leading up to the last sentence. Examples were further filtered to ensure that missing words could not be guessed without the context, ensuring that models attempting the dataset would need to reason over the entire paragraph to answer questions." }, { "dkey": "YAGO", "dval": "Yet Another Great Ontology (YAGO) is a Knowledge Graph that augments WordNet with common knowledge facts extracted from Wikipedia, converting WordNet from a primarily linguistic resource to a common knowledge base. YAGO originally consisted of more than 1 million entities and 5 million facts describing relationships between these entities. YAGO2 grounded entities, facts, and events in time and space, contained 446 million facts about 9.8 million entities, while YAGO3 added about 1 million more entities from non-English Wikipedia articles. YAGO3-10 a subset of YAGO3, containing entities which have a minimum of 10 relations each." }, { "dkey": "FAT", "dval": "Falling Things (FAT) is a dataset for advancing the state-of-the-art in object detection and 3D pose estimation in the context of robotics. It consists of generated photorealistic images with accurate 3D pose annotations for all objects in 60k images.\n\nThe 60k annotated photos of 21 household objects are taken from the YCB objects set. For each image, the dataset contains the 3D poses, per-pixel class segmentation, and 2D/3D bounding box coordinates for all objects." }, { "dkey": "THEODORE", "dval": "Recent work about synthetic indoor datasets from perspective views has shown significant improvements of object detection results with Convolutional Neural Networks(CNNs). In this paper, we introduce THEODORE: a novel, large-scale indoor dataset containing 100,000 high- resolution diversified fisheye images with 14 classes. To this end, we create 3D virtual environments of living rooms, different human characters and interior textures. Beside capturing fisheye images from virtual environments we create annotations for semantic segmentation, instance masks and bounding boxes for object detection tasks. We compare our synthetic dataset to state of the art real-world datasets for omnidirectional images. Based on MS COCO weights, we show that our dataset is well suited for fine-tuning CNNs for object detection. Through a high generalization of our models by means of image synthesis and domain randomization we reach an AP up to 0.84 for class person on High-Definition Analytics dataset." }, { "dkey": "Kitchen Scenes", "dval": "Kitchen Scenes is a multi-view RGB-D dataset of nine kitchen scenes, each containing several objects in realistic cluttered environments including a subset of objects from the BigBird dataset. The viewpoints of the scenes are densely sampled and objects in the scenes are annotated with bounding boxes and in the 3D point cloud." } ]
We discuss the literature on instance retrieval from images and present a comprehensive survey of state-of-the-
instance retrieval images
2,018
[ "GSL", "THEODORE", "LogiQA", "SCUT-CTW1500", "CTC", "DOTA" ]
[ "ImageNet", "Oxford5k" ]
[ { "dkey": "ImageNet", "dval": "The ImageNet dataset contains 14,197,122 annotated images according to the WordNet hierarchy. Since 2010 the dataset is used in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), a benchmark in image classification and object detection.\nThe publicly released dataset contains a set of manually annotated training images. A set of test images is also released, with the manual annotations withheld.\nILSVRC annotations fall into one of two categories: (1) image-level annotation of a binary label for the presence or absence of an object class in the image, e.g., “there are cars in this image” but “there are no tigers,” and (2) object-level annotation of a tight bounding box and class label around an object instance in the image, e.g., “there is a screwdriver centered at position (20,25) with width of 50 pixels and height of 30 pixels”.\nThe ImageNet project does not own the copyright of the images, therefore only thumbnails and URLs of images are provided.\n\n\nTotal number of non-empty WordNet synsets: 21841\nTotal number of images: 14197122\nNumber of images with bounding box annotations: 1,034,908\nNumber of synsets with SIFT features: 1000\nNumber of images with SIFT features: 1.2 million" }, { "dkey": "Oxford5k", "dval": "Oxford5K is the Oxford Buildings Dataset, which contains 5062 images collected from Flickr. It offers a set of 55 queries for 11 landmark buildings, five for each landmark." }, { "dkey": "GSL", "dval": "Dataset Description\nThe Greek Sign Language (GSL) is a large-scale RGB+D dataset, suitable for Sign Language Recognition (SLR) and Sign Language Translation (SLT). The video captures are conducted using an Intel RealSense D435 RGB+D camera at a rate of 30 fps. Both the RGB and the depth streams are acquired in the same spatial resolution of 848×480 pixels. To increase variability in the videos, the camera position and orientation is slightly altered within subsequent recordings. Seven different signers are employed to perform 5 individual and commonly met scenarios in different public services. The average length of each scenario is twenty sentences.\n\nThe dataset contains 10,290 sentence instances, 40,785 gloss instances, 310 unique glosses (vocabulary size) and 331 unique sentences, with 4.23 glosses per sentence on average. Each signer is asked to perform the pre-defined dialogues five consecutive times. In all cases, the simulation considers a deaf person communicating with a single public service employee. The involved signer performs the sequence of glosses of both agents in the discussion. For the annotation of each gloss sequence, GSL linguistic experts are involved. The given annotations are at individual gloss and gloss sequence level. A translation of the gloss sentences to spoken Greek is also provided.\n\nEvaluation\nThe GSL dataset includes the 3 evaluation setups:\n\n\n\nSigner-dependent continuous sign language recognition (GSL SD) – roughly 80% of videos are used for training, corresponding to 8,189 instances. The rest 1,063 (10%) were kept for validation and 1,043 (10%) for testing.\n\n\n\nSigner-independent continuous sign language recognition (GSL SI) – the selected test gloss sequences are not used in the training set, while all the individual glosses exist in the training set. In GSL SI, the recordings of one signer are left out for validation and testing (588 and 881 instances, respectively). The rest 8821 instances are utilized for training.\n\n\n\nIsolated gloss sign language recognition (GSL isol.) 
– The validation set consists of 2,231 gloss instances, the test set 3,500, while the remaining 34,995 are used for training. All 310 unique glosses are seen in the training set.\n\n\n\nFor more info and results, advice our paper\n\nPaper Abstract: A Comprehensive Study on Sign Language Recognition Methods, Adaloglou et al. 2020\nIn this paper, a comparative experimental assessment of computer vision-based methods for sign language recognition is conducted. By implementing the most recent deep neural network methods in this field, a thorough evaluation on multiple publicly available datasets is performed. The aim of the present study is to provide insights on sign language recognition, focusing on mapping non-segmented video streams to glosses. For this task, two new sequence training criteria, known from the fields of speech and scene text recognition, are introduced. Furthermore, a\nplethora of pretraining schemes are thoroughly discussed. Finally, a new RGB+D dataset for the Greek sign language is created. To the best of our knowledge, this is the first sign language dataset where sentence and gloss level annotations are provided for every video capture.\n\nArxiv link" }, { "dkey": "THEODORE", "dval": "Recent work about synthetic indoor datasets from perspective views has shown significant improvements of object detection results with Convolutional Neural Networks(CNNs). In this paper, we introduce THEODORE: a novel, large-scale indoor dataset containing 100,000 high- resolution diversified fisheye images with 14 classes. To this end, we create 3D virtual environments of living rooms, different human characters and interior textures. Beside capturing fisheye images from virtual environments we create annotations for semantic segmentation, instance masks and bounding boxes for object detection tasks. We compare our synthetic dataset to state of the art real-world datasets for omnidirectional images. Based on MS COCO weights, we show that our dataset is well suited for fine-tuning CNNs for object detection. Through a high generalization of our models by means of image synthesis and domain randomization we reach an AP up to 0.84 for class person on High-Definition Analytics dataset." }, { "dkey": "LogiQA", "dval": "LogiQA consists of 8,678 QA instances, covering multiple types of deductive reasoning. Results show that state-of-the-art neural models perform by far worse than human ceiling. The dataset can also serve as a benchmark for reinvestigating logical AI under the deep learning NLP setting." }, { "dkey": "SCUT-CTW1500", "dval": "The SCUT-CTW1500 dataset contains 1,500 images: 1,000 for training and 500 for testing. In particular, it provides 10,751 cropped text instance images, including 3,530 with curved text. The images are manually harvested from the Internet, image libraries such as Google Open-Image, or phone cameras. The dataset contains a lot of horizontal and multi-oriented text." }, { "dkey": "CTC", "dval": "A dataset that allows exploration of cross-modal retrieval where images contain scene-text instances." }, { "dkey": "DOTA", "dval": "DOTA is a large-scale dataset for object detection in aerial images. It can be used to develop and evaluate object detectors in aerial images. The images are collected from different sensors and platforms. Each image is of the size in the range from 800 × 800 to 20,000 × 20,000 pixels and contains objects exhibiting a wide variety of scales, orientations, and shapes. 
The instances in DOTA images are annotated by experts in aerial image interpretation by arbitrary (8 d.o.f.) quadrilateral. We will continue to update DOTA, to grow in size and scope to reflect evolving real-world conditions. Now it has three versions:\n\nDOTA-v1.0 contains 15 common categories, 2,806 images and 188, 282 instances. The proportions of the training set, validation set, and testing set in DOTA-v1.0 are 1/2, 1/6, and 1/3, respectively.\n\nDOTA-v1.5 uses the same images as DOTA-v1.0, but the extremely small instances (less than 10 pixels) are also annotated. Moreover, a new category, ”container crane” is added. It contains 403,318 instances in total. The number of images and dataset splits are the same as DOTA-v1.0. This version was released for the DOAI Challenge 2019 on Object Detection in Aerial Images in conjunction with IEEE CVPR 2019.\n\nDOTA-v2.0 collects more Google Earth, GF-2 Satellite, and aerial images. There are 18 common categories, 11,268 images and 1,793,658 instances in DOTA-v2.0. Compared to DOTA-v1.5, it further adds the new categories of ”airport” and ”helipad”. The 11,268 images of DOTA are split into training, validation, test-dev, and test-challenge sets. To avoid the problem of overfitting, the proportion of training and validation set is smaller than the test set. Furthermore, we have two test sets, namely test-dev and test-challenge. Training contains 1,830 images and 268,627 instances. Validation contains 593 images and 81,048 instances. We released the images and ground truths for training and validation sets. Test-dev contains 2,792 images and 353,346 instances. We released the images but not the ground truths. Test-challenge contains 6,053 images and 1,090,637 instances." } ]
I am planning to use a supervised approach for graph classification.
graph classification graphs
2,019
[ "ConvAI2", "arXiv", "CSPubSum", "Word Sense Disambiguation: a Unified Evaluation Framework and Empirical Comparison" ]
[ "ENZYMES", "PROTEINS" ]
[ { "dkey": "ENZYMES", "dval": "ENZYMES is a dataset of 600 protein tertiary structures obtained from the BRENDA enzyme database. The ENZYMES dataset contains 6 enzyme classes." }, { "dkey": "PROTEINS", "dval": "PROTEINS is a dataset of proteins that are classified as enzymes or non-enzymes. Nodes represent the amino acids and two nodes are connected by an edge if they are less than 6 Angstroms apart." }, { "dkey": "ConvAI2", "dval": "The ConvAI2 NeurIPS competition aimed at finding approaches to creating high-quality dialogue agents capable of meaningful open domain conversation. The ConvAI2 dataset for training models is based on the PERSONA-CHAT dataset. The speaker pairs each have assigned profiles coming from a set of 1155 possible personas (at training time), each consisting of at least 5 profile sentences, setting aside 100 never seen before personas for validation. As the original PERSONA-CHAT test set was released, a new hidden test set consisting of 100 new personas and over 1,015 dialogues was created by crowdsourced workers.\n\nTo avoid modeling that takes advantage of trivial word overlap, additional rewritten sets of the same train and test personas were crowdsourced, with related sentences that are rephrases, generalizations or specializations, rendering the task much more challenging. For example “I just got my nails done” is revised as “I love to pamper myself on a regular basis” and “I am on a diet now” is revised as “I need to lose weight.”\n\nThe training, validation and hidden test sets consist of 17,878, 1,000 and 1,015 dialogues, respectively." }, { "dkey": "arXiv", "dval": "Arxiv HEP-TH (high energy physics theory) citation graph is from the e-print arXiv and covers all the citations within a dataset of 27,770 papers with 352,807 edges. If a paper i cites paper j, the graph contains a directed edge from i to j. If a paper cites, or is cited by, a paper outside the dataset, the graph does not contain any information about this.\nThe data covers papers in the period from January 1993 to April 2003 (124 months)." }, { "dkey": "CSPubSum", "dval": "CSPubSum is a dataset for summarisation of computer science publications, created by exploiting a large resource of author-provided summaries, with straightforward ways of extending it further." }, { "dkey": "Word Sense Disambiguation: a Unified Evaluation Framework and Empirical Comparison", "dval": "The Evaluation framework of Raganato et al. 2017 includes two training sets (SemCor-Miller et al., 1993- and OMSTI-Taghipour and Ng, 2015-) and five test sets from the Senseval/SemEval series (Edmonds and Cotton, 2001; Snyder and Palmer, 2004; Pradhan et al., 2007; Navigli et al., 2013; Moro and Navigli, 2015), standardized to the same format and sense inventory (i.e. WordNet 3.0).\n\nTypically, there are two kinds of approaches for WSD: supervised (which make use of sense-annotated training data) and knowledge-based (which make use of the properties of lexical resources).\n\nSupervised: The most widely used training corpus is SemCor, with 226,036 sense annotations from 352 manually annotated documents. All supervised systems in the evaluation table are trained on SemCor. Some supervised methods, particularly neural architectures, usually employ the SemEval 2007 dataset as development set (marked by *).
The most usual baseline is the Most Frequent Sense (MFS) heuristic, which selects for each target word the most frequent sense in the training data.\n\nKnowledge-based: Knowledge-based systems usually exploit WordNet or BabelNet as semantic network. The first sense given by the underlying sense inventory (i.e. WordNet 3.0) is included as a baseline.\n\nDescription from NLP Progress" } ]
I want to learn a universal image embedding to measure the semantic similarity of two images.
universal image embedding images
2,020
[ "SNIPS", "SICK", "COVERAGE", "Flickr Audio Caption Corpus", "Spot-the-diff" ]
[ "ImageNet", "VehicleID" ]
[ { "dkey": "ImageNet", "dval": "The ImageNet dataset contains 14,197,122 annotated images according to the WordNet hierarchy. Since 2010 the dataset is used in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), a benchmark in image classification and object detection.\nThe publicly released dataset contains a set of manually annotated training images. A set of test images is also released, with the manual annotations withheld.\nILSVRC annotations fall into one of two categories: (1) image-level annotation of a binary label for the presence or absence of an object class in the image, e.g., “there are cars in this image” but “there are no tigers,” and (2) object-level annotation of a tight bounding box and class label around an object instance in the image, e.g., “there is a screwdriver centered at position (20,25) with width of 50 pixels and height of 30 pixels”.\nThe ImageNet project does not own the copyright of the images, therefore only thumbnails and URLs of images are provided.\n\n\nTotal number of non-empty WordNet synsets: 21841\nTotal number of images: 14197122\nNumber of images with bounding box annotations: 1,034,908\nNumber of synsets with SIFT features: 1000\nNumber of images with SIFT features: 1.2 million" }, { "dkey": "VehicleID", "dval": "The “VehicleID” dataset contains CARS captured during the daytime by multiple real-world surveillance cameras distributed in a small city in China. There are 26,267 vehicles (221,763 images in total) in the entire dataset. Each image is attached with an id label corresponding to its identity in real world. In addition, the dataset contains manually labelled 10319 vehicles (90196 images in total) of their vehicle model information(i.e.“MINI-cooper”, “Audi A6L” and “BWM 1 Series”)." }, { "dkey": "SNIPS", "dval": "The SNIPS Natural Language Understanding benchmark is a dataset of over 16,000 crowdsourced queries distributed among 7 user intents of various complexity:\n\n\nSearchCreativeWork (e.g. Find me the I, Robot television show),\nGetWeather (e.g. Is it windy in Boston, MA right now?),\nBookRestaurant (e.g. I want to book a highly rated restaurant in Paris tomorrow night),\nPlayMusic (e.g. Play the last track from Beyoncé off Spotify),\nAddToPlaylist (e.g. Add Diamonds to my roadtrip playlist),\nRateBook (e.g. Give 6 stars to Of Mice and Men),\nSearchScreeningEvent (e.g. Check the showtimes for Wonder Woman in Paris).\nThe training set contains of 13,084 utterances, the validation set and the test set contain 700 utterances each, with 100 queries per intent." }, { "dkey": "SICK", "dval": "The Sentences Involving Compositional Knowledge (SICK) dataset is a dataset for compositional distributional semantics. It includes a large number of sentence pairs that are rich in the lexical, syntactic and semantic phenomena. Each pair of sentences is annotated in two dimensions: relatedness and entailment. The relatedness score ranges from 1 to 5, and Pearson’s r is used for evaluation; the entailment relation is categorical, consisting of entailment, contradiction, and neutral. There are 4439 pairs in the train split, 495 in the trial split used for development and 4906 in the test split. The sentence pairs are generated from image and video caption datasets before being paired up using some algorithm." }, { "dkey": "COVERAGE", "dval": "COVERAGE contains copymove forged (CMFD) images and their originals with similar but genuine objects (SGOs). 
COVERAGE is designed to highlight and address tamper detection ambiguity of popular methods, caused by self-similarity within natural images. In COVERAGE, forged–original pairs are annotated with (i) the duplicated and forged region masks, and (ii) the tampering factor/similarity metric. For benchmarking, forgery quality is evaluated using (i) computer vision-based methods, and (ii) human detection performance." }, { "dkey": "Flickr Audio Caption Corpus", "dval": "The Flickr 8k Audio Caption Corpus contains 40,000 spoken captions of 8,000 natural images. It was collected in 2015 to investigate multimodal learning schemes for unsupervised speech pattern discovery. For a description of the corpus, see:\n\nD. Harwath and J. Glass, \"Deep Multimodal Semantic Embeddings for Speech and Images,\" 2015 IEEE Automatic Speech Recognition and Understanding Workshop, pp. 237-244, Scottsdale, Arizona, USA, December 2015" }, { "dkey": "Spot-the-diff", "dval": "Spot-the-diff is a dataset consisting of 13,192 image pairs along with corresponding human provided text annotations stating the differences between the two images." } ]
I want to train a model for sentence encoding.
natural language inference
2017
[ "ConvAI2", "SNIPS", "Discovery Dataset", "GYAFC", "SentEval" ]
[ "SNLI", "MultiNLI" ]
[ { "dkey": "SNLI", "dval": "The SNLI dataset (Stanford Natural Language Inference) consists of 570k sentence-pairs manually labeled as entailment, contradiction, and neutral. Premises are image captions from Flickr30k, while hypotheses were generated by crowd-sourced annotators who were shown a premise and asked to generate entailing, contradicting, and neutral sentences. Annotators were instructed to judge the relation between sentences given that they describe the same event. Each pair is labeled as “entailment”, “neutral”, “contradiction” or “-”, where “-” indicates that an agreement could not be reached." }, { "dkey": "MultiNLI", "dval": "The Multi-Genre Natural Language Inference (MultiNLI) dataset has 433K sentence pairs. Its size and mode of collection are modeled closely like SNLI. MultiNLI offers ten distinct genres (Face-to-face, Telephone, 9/11, Travel, Letters, Oxford University Press, Slate, Verbatim, Goverment and Fiction) of written and spoken English data. There are matched dev/test sets which are derived from the same sources as those in the training set, and mismatched sets which do not closely resemble any seen at training time." }, { "dkey": "ConvAI2", "dval": "The ConvAI2 NeurIPS competition aimed at finding approaches to creating high-quality dialogue agents capable of meaningful open domain conversation. The ConvAI2 dataset for training models is based on the PERSONA-CHAT dataset. The speaker pairs each have assigned profiles coming from a set of 1155 possible personas (at training time), each consisting of at least 5 profile sentences, setting aside 100 never seen before personas for validation. As the original PERSONA-CHAT test set was released, a new hidden test set consisted of 100 new personas and over 1,015 dialogs was created by crowdsourced workers.\n\nTo avoid modeling that takes advantage of trivial word overlap, additional rewritten sets of the same train and test personas were crowdsourced, with related sentences that are rephrases, generalizations or specializations, rendering the task much more challenging. For example “I just got my nails done” is revised as “I love to pamper myself on a regular basis” and “I am on a diet now” is revised as “I need to lose weight.”\n\nThe training, validation and hidden test sets consists of 17,878, 1,000 and 1,015 dialogues, respectively." }, { "dkey": "SNIPS", "dval": "The SNIPS Natural Language Understanding benchmark is a dataset of over 16,000 crowdsourced queries distributed among 7 user intents of various complexity:\n\n\nSearchCreativeWork (e.g. Find me the I, Robot television show),\nGetWeather (e.g. Is it windy in Boston, MA right now?),\nBookRestaurant (e.g. I want to book a highly rated restaurant in Paris tomorrow night),\nPlayMusic (e.g. Play the last track from Beyoncé off Spotify),\nAddToPlaylist (e.g. Add Diamonds to my roadtrip playlist),\nRateBook (e.g. Give 6 stars to Of Mice and Men),\nSearchScreeningEvent (e.g. Check the showtimes for Wonder Woman in Paris).\nThe training set contains of 13,084 utterances, the validation set and the test set contain 700 utterances each, with 100 queries per intent." }, { "dkey": "Discovery Dataset", "dval": "The Discovery datasets consists of adjacent sentence pairs (s1,s2) with a discourse marker (y) that occurred at the beginning of s2. They were extracted from the depcc web corpus.\n\nMarkers prediction can be used in order to train a sentence encoders. 
Discourse markers can be considered as noisy labels for various semantic tasks, such as entailment (y=therefore), subjectivity analysis (y=personally) or sentiment analysis (y=sadly), similarity (y=similarly), typicality, (y=curiously) ...\n\nThe specificity of this dataset is the diversity of the markers, since previously used data used only ~10 imbalanced classes. The author of the dataset provide:\n\n\na list of the 174 discourse markers\na Base version of the dataset with 1.74 million pairs (10k examples per marker)\na Big version with 3.4 million pairs\na Hard version with 1.74 million pairs where the connective couldn't be predicted with a fastText linear model" }, { "dkey": "GYAFC", "dval": "Grammarly’s Yahoo Answers Formality Corpus (GYAFC) is the largest dataset for any style containing a total of 110K informal / formal sentence pairs.\n\nYahoo Answers is a question answering forum, contains a large number of informal sentences and allows redistribution of data. The authors used the Yahoo Answers L6 corpus to create the GYAFC dataset of informal and formal sentence pairs. In order to ensure a uniform distribution of data, they removed sentences that are questions, contain URLs, and are shorter than 5 words or longer than 25. After these preprocessing steps, 40 million sentences remain. \n\nThe Yahoo Answers corpus consists of several different domains like Business, Entertainment & Music, Travel, Food, etc. Pavlick and Tetreault formality classifier (PT16) shows that the formality level varies significantly\nacross different genres. In order to control for this variation, the authors work with two specific domains that contain the most informal sentences and show results on training and testing within those categories. The authors use the formality classifier from PT16 to identify informal sentences and train this classifier on the Answers genre of the PT16 corpus\nwhich consists of nearly 5,000 randomly selected sentences from Yahoo Answers manually annotated on a scale of -3 (very informal) to 3 (very formal). They find that the domains of Entertainment & Music and Family & Relationships contain the most informal sentences and create the GYAFC dataset using these domains." }, { "dkey": "SentEval", "dval": "SentEval is a toolkit for evaluating the quality of universal sentence representations. SentEval encompasses a variety of tasks, including binary and multi-class classification, natural language inference and sentence similarity. The set of tasks was selected based on what appears to be the community consensus regarding the appropriate evaluations for universal sentence representations. The toolkit comes with scripts to download and preprocess datasets, and an easy interface to evaluate sentence encoders." } ]
We propose a novel whole-to-part network and a part-to-whole network for the
image classification
2018
[ "UNITOPATHO", "UMDFaces", "MuPoTS-3D", "Business Scene Dialogue", "Localized Narratives" ]
[ "ImageNet", "CIFAR-10" ]
[ { "dkey": "ImageNet", "dval": "The ImageNet dataset contains 14,197,122 annotated images according to the WordNet hierarchy. Since 2010 the dataset is used in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), a benchmark in image classification and object detection.\nThe publicly released dataset contains a set of manually annotated training images. A set of test images is also released, with the manual annotations withheld.\nILSVRC annotations fall into one of two categories: (1) image-level annotation of a binary label for the presence or absence of an object class in the image, e.g., “there are cars in this image” but “there are no tigers,” and (2) object-level annotation of a tight bounding box and class label around an object instance in the image, e.g., “there is a screwdriver centered at position (20,25) with width of 50 pixels and height of 30 pixels”.\nThe ImageNet project does not own the copyright of the images, therefore only thumbnails and URLs of images are provided.\n\n\nTotal number of non-empty WordNet synsets: 21841\nTotal number of images: 14197122\nNumber of images with bounding box annotations: 1,034,908\nNumber of synsets with SIFT features: 1000\nNumber of images with SIFT features: 1.2 million" }, { "dkey": "CIFAR-10", "dval": "The CIFAR-10 dataset (Canadian Institute for Advanced Research, 10 classes) is a subset of the Tiny Images dataset and consists of 60000 32x32 color images. The images are labelled with one of 10 mutually exclusive classes: airplane, automobile (but not truck or pickup truck), bird, cat, deer, dog, frog, horse, ship, and truck (but not pickup truck). There are 6000 images per class with 5000 training and 1000 testing images per class.\n\nThe criteria for deciding whether an image belongs to a class were as follows:\n\n\nThe class name should be high on the list of likely answers to the question “What is in this picture?”\nThe image should be photo-realistic. Labelers were instructed to reject line drawings.\nThe image should contain only one prominent instance of the object to which the class refers.\nThe object may be partially occluded or seen from an unusual viewpoint as long as its identity is still clear to the labeler." }, { "dkey": "UNITOPATHO", "dval": "Histopathological characterization of colorectal polyps allows to tailor patients' management and follow up with the ultimate aim of avoiding or promptly detecting an invasive carcinoma. Colorectal polyps characterization relies on the histological analysis of tissue samples to determine the polyps malignancy and dysplasia grade. Deep neural networks achieve outstanding accuracy in medical patterns recognition, however they require large sets of annotated training images. We introduce UniToPatho, an annotated dataset of 9536 hematoxylin and eosin stained patches extracted from 292 whole-slide images, meant for training deep neural networks for colorectal polyps classification and adenomas grading. The slides are acquired through a Hamamatsu Nanozoomer S210 scanner at 20× magnification (0.4415 μm/px)" }, { "dkey": "UMDFaces", "dval": "UMDFaces is a face dataset divided into two parts:\n\n\nStill Images - 367,888 face annotations for 8,277 subjects.\nVideo Frames - Over 3.7 million annotated video frames from over 22,000 videos of 3100 subjects.\n\nPart 1 - Still Images\n\nThe dataset contains 367,888 face annotations for 8,277 subjects divided into 3 batches. 
The annotations contain human curated bounding boxes for faces and estimated pose (yaw, pitch, and roll), locations of twenty-one keypoints, and gender information generated by a pre-trained neural network.\n\nPart 2 - Video Frames\n\nThe second part contains 3,735,476 annotated video frames extracted from a total of 22,075 for 3,107 subjects. The annotations contain the estimated pose (yaw, pitch, and roll), locations of twenty-one keypoints, and gender information generated by a pre-trained neural network." }, { "dkey": "MuPoTS-3D", "dval": "MuPoTs-3D (Multi-person Pose estimation Test Set in 3D) is a dataset for pose estimation composed of more than 8,000 frames from 20 real-world scenes with up to three subjects. The poses are annotated with a 14-point skeleton model." }, { "dkey": "Business Scene Dialogue", "dval": "The Japanese-English business conversation corpus, namely Business Scene Dialogue corpus, was constructed in 3 steps:\n\n\nselecting business scenes,\nwriting monolingual conversation scenarios according to the selected scenes, and\ntranslating the scenarios into the other language.\n\nHalf of the monolingual scenarios were written in Japanese and the other half were written in English. The whole construction process was supervised by a person who satisfies the following conditions to guarantee the conversations to be natural:\n\n\nhas the experience of being engaged in language learning programs, especially for business conversations\nis able to smoothly communicate with others in various business scenes both in Japanese and English\nhas the experience of being involved in business\n\nThe BSD corpus is split into balanced training, development and evaluation sets. The documents in these sets are balanced in terms of scenes and original languages. In this repository we publicly share the full development and evaluation sets and a part of the training data set." }, { "dkey": "Localized Narratives", "dval": "We propose Localized Narratives, a new form of multimodal image annotations connecting vision and language. We ask annotators to describe an image with their voice while simultaneously hovering their mouse over the region they are describing. Since the voice and the mouse pointer are synchronized, we can localize every single word in the description. This dense visual grounding takes the form of a mouse trace segment per word and is unique to our data. We annotated 849k images with Localized Narratives: the whole COCO, Flickr30k, and ADE20K datasets, and 671k images of Open Images, all of which we make publicly available. We provide an extensive analysis of these annotations showing they are diverse, accurate, and efficient to produce. We also demonstrate their utility on the application of controlled image captioning." } ]
I want to train a fully supervised model for interactive instance segmentation from images.
interactive instance segmentation images
2018
[ "CryoNuSeg", "ACDC", "Virtual KITTI", "RobotPush", "SNIPS" ]
[ "COCO", "SBD" ]
[ { "dkey": "COCO", "dval": "The MS COCO (Microsoft Common Objects in Context) dataset is a large-scale object detection, segmentation, key-point detection, and captioning dataset. The dataset consists of 328K images.\n\nSplits:\nThe first version of MS COCO dataset was released in 2014. It contains 164K images split into training (83K), validation (41K) and test (41K) sets. In 2015 additional test set of 81K images was released, including all the previous test images and 40K new images.\n\nBased on community feedback, in 2017 the training/validation split was changed from 83K/41K to 118K/5K. The new split uses the same images and annotations. The 2017 test set is a subset of 41K images of the 2015 test set. Additionally, the 2017 release contains a new unannotated dataset of 123K images.\n\nAnnotations:\nThe dataset has annotations for\n\n\nobject detection: bounding boxes and per-instance segmentation masks with 80 object categories,\ncaptioning: natural language descriptions of the images (see MS COCO Captions),\nkeypoints detection: containing more than 200,000 images and 250,000 person instances labeled with keypoints (17 possible keypoints, such as left eye, nose, right hip, right ankle),\nstuff image segmentation – per-pixel segmentation masks with 91 stuff categories, such as grass, wall, sky (see MS COCO Stuff),\npanoptic: full scene segmentation, with 80 thing categories (such as person, bicycle, elephant) and a subset of 91 stuff categories (grass, sky, road),\ndense pose: more than 39,000 images and 56,000 person instances labeled with DensePose annotations – each labeled person is annotated with an instance id and a mapping between image pixels that belong to that person body and a template 3D model.\nThe annotations are publicly available only for training and validation images." }, { "dkey": "SBD", "dval": "The Semantic Boundaries Dataset (SBD) is a dataset for predicting pixels on the boundary of the object (as opposed to the inside of the object with semantic segmentation). The dataset consists of 11318 images from the trainval set of the PASCAL VOC2011 challenge, divided into 8498 training and 2820 test images. This dataset has object instance boundaries with accurate figure/ground masks that are also labeled with one of 20 Pascal VOC classes." }, { "dkey": "CryoNuSeg", "dval": "CryoNuSeg is a fully annotated FS-derived cryosectioned and H&E-stained nuclei instance segmentation dataset. The dataset contains images from 10 human organs that were not exploited in other publicly available datasets, and is provided with three manual mark-ups to allow measuring intra-observer and inter-observer variability." }, { "dkey": "ACDC", "dval": "The goal of the Automated Cardiac Diagnosis Challenge (ACDC) challenge is to:\n\n\ncompare the performance of automatic methods on the segmentation of the left ventricular endocardium and epicardium as the right ventricular endocardium for both end diastolic and end systolic phase instances;\ncompare the performance of automatic methods for the classification of the examinations in five classes (normal case, heart failure with infarction, dilated cardiomyopathy, hypertrophic cardiomyopathy, abnormal right ventricle).\n\nThe overall ACDC dataset was created from real clinical exams acquired at the University Hospital of Dijon. Acquired data were fully anonymized and handled within the regulations set by the local ethical committee of the Hospital of Dijon (France). 
Our dataset covers several well-defined pathologies with enough cases to (1) properly train machine learning methods and (2) clearly assess the variations of the main physiological parameters obtained from cine-MRI (in particular diastolic volume and ejection fraction). The dataset is composed of 150 exams (all from different patients) divided into 5 evenly distributed subgroups (4 pathological plus 1 healthy subject groups) as described below. Furthermore, each patient comes with the following additional information : weight, height, as well as the diastolic and systolic phase instants.\n\nThe database is made available to participants through two datasets from the dedicated online evaluation website after a personal registration: i) a training dataset of 100 patients along with the corresponding manual references based on the analysis of one clinical expert; ii) a testing dataset composed of 50 new patients, without manual annotations but with the patient information given above. The raw input images are provided through the Nifti format." }, { "dkey": "Virtual KITTI", "dval": "Virtual KITTI is a photo-realistic synthetic video dataset designed to learn and evaluate computer vision models for several video understanding tasks: object detection and multi-object tracking, scene-level and instance-level semantic segmentation, optical flow, and depth estimation.\n\nVirtual KITTI contains 50 high-resolution monocular videos (21,260 frames) generated from five different virtual worlds in urban settings under different imaging and weather conditions. These worlds were created using the Unity game engine and a novel real-to-virtual cloning method. These photo-realistic synthetic videos are automatically, exactly, and fully annotated for 2D and 3D multi-object tracking and at the pixel level with category, instance, flow, and depth labels (cf. below for download links)." }, { "dkey": "RobotPush", "dval": "RobotPush is a dataset for object singulation – the task of separating cluttered objects through physical interaction. The dataset contains 3456 training images with labels and 1024 validation images with labels. It consists of simulated and real-world data collected from a PR2 robot that equipped with a Kinect 2 camera. The dataset also contains ground truth instance segmentation masks for 110 images in the test set." }, { "dkey": "SNIPS", "dval": "The SNIPS Natural Language Understanding benchmark is a dataset of over 16,000 crowdsourced queries distributed among 7 user intents of various complexity:\n\n\nSearchCreativeWork (e.g. Find me the I, Robot television show),\nGetWeather (e.g. Is it windy in Boston, MA right now?),\nBookRestaurant (e.g. I want to book a highly rated restaurant in Paris tomorrow night),\nPlayMusic (e.g. Play the last track from Beyoncé off Spotify),\nAddToPlaylist (e.g. Add Diamonds to my roadtrip playlist),\nRateBook (e.g. Give 6 stars to Of Mice and Men),\nSearchScreeningEvent (e.g. Check the showtimes for Wonder Woman in Paris).\nThe training set contains of 13,084 utterances, the validation set and the test set contain 700 utterances each, with 100 queries per intent." } ]
The model is a fully convolutional network with a total of 44,852,11
object recognition images
2020
[ "COCO-Text", "Helen", "Stanford Cars", "Decagon", "BSDS500", "Total-Text" ]
[ "ImageNet", "CIFAR-10" ]
[ { "dkey": "ImageNet", "dval": "The ImageNet dataset contains 14,197,122 annotated images according to the WordNet hierarchy. Since 2010 the dataset is used in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), a benchmark in image classification and object detection.\nThe publicly released dataset contains a set of manually annotated training images. A set of test images is also released, with the manual annotations withheld.\nILSVRC annotations fall into one of two categories: (1) image-level annotation of a binary label for the presence or absence of an object class in the image, e.g., “there are cars in this image” but “there are no tigers,” and (2) object-level annotation of a tight bounding box and class label around an object instance in the image, e.g., “there is a screwdriver centered at position (20,25) with width of 50 pixels and height of 30 pixels”.\nThe ImageNet project does not own the copyright of the images, therefore only thumbnails and URLs of images are provided.\n\n\nTotal number of non-empty WordNet synsets: 21841\nTotal number of images: 14197122\nNumber of images with bounding box annotations: 1,034,908\nNumber of synsets with SIFT features: 1000\nNumber of images with SIFT features: 1.2 million" }, { "dkey": "CIFAR-10", "dval": "The CIFAR-10 dataset (Canadian Institute for Advanced Research, 10 classes) is a subset of the Tiny Images dataset and consists of 60000 32x32 color images. The images are labelled with one of 10 mutually exclusive classes: airplane, automobile (but not truck or pickup truck), bird, cat, deer, dog, frog, horse, ship, and truck (but not pickup truck). There are 6000 images per class with 5000 training and 1000 testing images per class.\n\nThe criteria for deciding whether an image belongs to a class were as follows:\n\n\nThe class name should be high on the list of likely answers to the question “What is in this picture?”\nThe image should be photo-realistic. Labelers were instructed to reject line drawings.\nThe image should contain only one prominent instance of the object to which the class refers.\nThe object may be partially occluded or seen from an unusual viewpoint as long as its identity is still clear to the labeler." }, { "dkey": "COCO-Text", "dval": "The COCO-Text dataset is a dataset for text detection and recognition. It is based on the MS COCO dataset, which contains images of complex everyday scenes. The COCO-Text dataset contains non-text images, legible text images and illegible text images. In total there are 22184 training images and 7026 validation images with at least one instance of legible text." }, { "dkey": "Helen", "dval": "The HELEN dataset is composed of 2330 face images of 400×400 pixels with labeled facial components generated through manually-annotated contours along eyes, eyebrows, nose, lips and jawline." }, { "dkey": "Stanford Cars", "dval": "The Stanford Cars dataset consists of 196 classes of cars with a total of 16,185 images, taken from the rear. The data is divided into almost a 50-50 train/test split with 8,144 training images and 8,041 testing images. Categories are typically at the level of Make, Model, Year. The images are 360×240." }, { "dkey": "Decagon", "dval": "Bio-decagon is a dataset for polypharmacy side effect identification problem framed as a multirelational link prediction problem in a two-layer multimodal graph/network of two node types: drugs and proteins. Protein-protein interaction\nnetwork describes relationships between proteins. 
Drug-drug interaction network contains 964 different types of edges (one for each side effect type) and describes which drug pairs lead to which side effects. Lastly,\ndrug-protein links describe the proteins targeted by a given drug.\n\nThe final network after linking entity vocabularies used by different databases has 645 drug and 19,085 protein nodes connected by 715,612 protein-protein, 4,651,131 drug-drug, and 18,596 drug-protein edges." }, { "dkey": "BSDS500", "dval": "Berkeley Segmentation Data Set 500 (BSDS500) is a standard benchmark for contour detection. This dataset is designed for evaluating natural edge detection that includes not only object contours but also object interior boundaries and background boundaries. It includes 500 natural images with carefully annotated boundaries collected from multiple users. The dataset is divided into three parts: 200 for training, 100 for validation and the rest 200 for test." }, { "dkey": "Total-Text", "dval": "Total-Text is a text detection dataset that consists of 1,555 images with a variety of text types including horizontal, multi-oriented, and curved text instances. The training split and testing split have 1,255 images and 300 images, respectively." } ]
A semi-supervised approach for anomaly detection, which leverages the unlabelled data to better learn the feature
anomaly detection text
2019
[ "VoxPopuli", "Thyroid", "DAD", "DCASE 2018 Task 4", "NAB", "ExtremeWeather" ]
[ "ImageNet", "CIFAR-10" ]
[ { "dkey": "ImageNet", "dval": "The ImageNet dataset contains 14,197,122 annotated images according to the WordNet hierarchy. Since 2010 the dataset is used in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), a benchmark in image classification and object detection.\nThe publicly released dataset contains a set of manually annotated training images. A set of test images is also released, with the manual annotations withheld.\nILSVRC annotations fall into one of two categories: (1) image-level annotation of a binary label for the presence or absence of an object class in the image, e.g., “there are cars in this image” but “there are no tigers,” and (2) object-level annotation of a tight bounding box and class label around an object instance in the image, e.g., “there is a screwdriver centered at position (20,25) with width of 50 pixels and height of 30 pixels”.\nThe ImageNet project does not own the copyright of the images, therefore only thumbnails and URLs of images are provided.\n\n\nTotal number of non-empty WordNet synsets: 21841\nTotal number of images: 14197122\nNumber of images with bounding box annotations: 1,034,908\nNumber of synsets with SIFT features: 1000\nNumber of images with SIFT features: 1.2 million" }, { "dkey": "CIFAR-10", "dval": "The CIFAR-10 dataset (Canadian Institute for Advanced Research, 10 classes) is a subset of the Tiny Images dataset and consists of 60000 32x32 color images. The images are labelled with one of 10 mutually exclusive classes: airplane, automobile (but not truck or pickup truck), bird, cat, deer, dog, frog, horse, ship, and truck (but not pickup truck). There are 6000 images per class with 5000 training and 1000 testing images per class.\n\nThe criteria for deciding whether an image belongs to a class were as follows:\n\n\nThe class name should be high on the list of likely answers to the question “What is in this picture?”\nThe image should be photo-realistic. Labelers were instructed to reject line drawings.\nThe image should contain only one prominent instance of the object to which the class refers.\nThe object may be partially occluded or seen from an unusual viewpoint as long as its identity is still clear to the labeler." }, { "dkey": "VoxPopuli", "dval": "VoxPopuli is a large-scale multilingual corpus providing 100K hours of unlabelled speech data in 23 languages. It is the largest open data to date for unsupervised representation learning as well as semi-supervised learning. VoxPopuli also contains 1.8K hours of transcribed speeches in 16 languages and their aligned oral interpretations into 5 other languages totaling 5.1K hours." }, { "dkey": "Thyroid", "dval": "Thyroid is a dataset for detection of thyroid diseases, in which patients diagnosed with hypothyroid or subnormal are anomalies against normal patients. It contains 2800 training data instance and 972 test instances, with 29 or so attributes." }, { "dkey": "DAD", "dval": "Contains normal driving videos together with a set of anomalous actions in its training set. In the test set of the DAD dataset, there are unseen anomalous actions that still need to be winnowed out from normal driving." }, { "dkey": "DCASE 2018 Task 4", "dval": "DCASE2018 Task 4 is a dataset for large-scale weakly labeled semi-supervised sound event detection in domestic environments. The data are YouTube video excerpts focusing on domestic context which could be used for example in ambient assisted living applications. 
The domain was chosen due to the scientific challenges (wide variety of sounds, time-localized events...) and potential industrial applications.\nSpecifically, the task employs a subset of “Audioset: An Ontology And Human-Labeled Dataset For Audio Events” by Google. Audioset consists of an expanding ontology of 632 sound event classes and a collection of 2 million human-labeled 10-second sound clips (less than 21% are shorter than 10-seconds) drawn from 2 million Youtube videos. The ontology is specified as a hierarchical graph of event categories, covering a wide range of human and animal sounds, musical instruments and genres, and common everyday environmental sounds.\nTask 4 focuses on a subset of Audioset that consists of 10 classes of sound events: speech, dog, cat, alarm bell ringing, dishes, frying, blender, running water, vacuum cleaner, electric shaver toothbrush." }, { "dkey": "NAB", "dval": "The First Temporal Benchmark Designed to Evaluate Real-time Anomaly Detectors Benchmark\n\nThe growth of the Internet of Things has created an abundance of streaming data. Finding anomalies in this data can provide valuable insights into opportunities or failures. Yet it’s difficult to achieve, due to the need to process data in real time, continuously learn and make predictions. How do we evaluate and compare various real-time anomaly detection techniques? \n\nThe Numenta Anomaly Benchmark (NAB) provides a standard, open source framework for evaluating real-time anomaly detection algorithms on streaming data. Through a controlled, repeatable environment of open-source tools, NAB rewards detectors that find anomalies as soon as possible, trigger no false alarms, and automatically adapt to any changing statistics. \n\nNAB comprises two main components: a scoring system designed for streaming data and a dataset with labeled, real-world time-series data." }, { "dkey": "ExtremeWeather", "dval": "Encourages machine learning research in this area and to help facilitate further work in understanding and mitigating the effects of climate change." } ]
The task of video description is the generation of a natural language sentence to describe the content of a given
video description
2018
[ "Violin", "ProPara", "VATEX", "SNLI", "DiDeMo", "ToTTo", "SWAG" ]
[ "COCO", "MSVD", "ActivityNet", "LSMDC", "CLEVR" ]
[ { "dkey": "COCO", "dval": "The MS COCO (Microsoft Common Objects in Context) dataset is a large-scale object detection, segmentation, key-point detection, and captioning dataset. The dataset consists of 328K images.\n\nSplits:\nThe first version of MS COCO dataset was released in 2014. It contains 164K images split into training (83K), validation (41K) and test (41K) sets. In 2015 additional test set of 81K images was released, including all the previous test images and 40K new images.\n\nBased on community feedback, in 2017 the training/validation split was changed from 83K/41K to 118K/5K. The new split uses the same images and annotations. The 2017 test set is a subset of 41K images of the 2015 test set. Additionally, the 2017 release contains a new unannotated dataset of 123K images.\n\nAnnotations:\nThe dataset has annotations for\n\n\nobject detection: bounding boxes and per-instance segmentation masks with 80 object categories,\ncaptioning: natural language descriptions of the images (see MS COCO Captions),\nkeypoints detection: containing more than 200,000 images and 250,000 person instances labeled with keypoints (17 possible keypoints, such as left eye, nose, right hip, right ankle),\nstuff image segmentation – per-pixel segmentation masks with 91 stuff categories, such as grass, wall, sky (see MS COCO Stuff),\npanoptic: full scene segmentation, with 80 thing categories (such as person, bicycle, elephant) and a subset of 91 stuff categories (grass, sky, road),\ndense pose: more than 39,000 images and 56,000 person instances labeled with DensePose annotations – each labeled person is annotated with an instance id and a mapping between image pixels that belong to that person body and a template 3D model.\nThe annotations are publicly available only for training and validation images." }, { "dkey": "MSVD", "dval": "The Microsoft Research Video Description Corpus (MSVD) dataset consists of about 120K sentences collected during the summer of 2010. Workers on Mechanical Turk were paid to watch a short video snippet and then summarize the action in a single sentence. The result is a set of roughly parallel descriptions of more than 2,000 video snippets. Because the workers were urged to complete the task in the language of their choice, both paraphrase and bilingual alternations are captured in the data." }, { "dkey": "ActivityNet", "dval": "The ActivityNet dataset contains 200 different types of activities and a total of 849 hours of videos collected from YouTube. ActivityNet is the largest benchmark for temporal activity detection to date in terms of both the number of activity categories and number of videos, making the task particularly challenging. Version 1.3 of the dataset contains 19994 untrimmed videos in total and is divided into three disjoint subsets, training, validation, and testing by a ratio of 2:1:1. On average, each activity category has 137 untrimmed videos. Each video on average has 1.41 activities which are annotated with temporal boundaries. The ground-truth annotations of test videos are not public." }, { "dkey": "LSMDC", "dval": "This dataset contains 118,081 short video clips extracted from 202 movies. Each video has a caption, either extracted from the movie script or from transcribed DVS (descriptive video services) for the visually impaired. The validation set contains 7408 clips and evaluation is performed on a test set of 1000 videos from movies disjoint from the training and val sets." 
}, { "dkey": "CLEVR", "dval": "CLEVR (Compositional Language and Elementary Visual Reasoning) is a synthetic Visual Question Answering dataset. It contains images of 3D-rendered objects; each image comes with a number of highly compositional questions that fall into different categories. Those categories fall into 5 classes of tasks: Exist, Count, Compare Integer, Query Attribute and Compare Attribute. The CLEVR dataset consists of: a training set of 70k images and 700k questions, a validation set of 15k images and 150k questions, A test set of 15k images and 150k questions about objects, answers, scene graphs and functional programs for all train and validation images and questions. Each object present in the scene, aside of position, is characterized by a set of four attributes: 2 sizes: large, small, 3 shapes: square, cylinder, sphere, 2 material types: rubber, metal, 8 color types: gray, blue, brown, yellow, red, green, purple, cyan, resulting in 96 unique combinations." }, { "dkey": "Violin", "dval": "Video-and-Language Inference is the task of joint multimodal understanding of video and text. Given a video clip with aligned subtitles as premise, paired with a natural language hypothesis based on the video content, a model needs to infer whether the hypothesis is entailed or contradicted by the given video clip. The Violin dataset is a dataset for this task which consists of 95,322 video-hypothesis pairs from 15,887 video clips, spanning over 582 hours of video. These video clips contain rich content with diverse temporal dynamics, event shifts, and people interactions, collected from two sources: (i) popular TV shows, and (ii) movie clips from YouTube channels." }, { "dkey": "ProPara", "dval": "The ProPara dataset is designed to train and test comprehension of simple paragraphs describing processes (e.g., photosynthesis), designed for the task of predicting, tracking, and answering questions about how entities change during the process.\n\nProPara aims to promote the research in natural language understanding in the context of procedural text. This requires identifying the actions described in the paragraph and tracking state changes happening to the entities involved. The comprehension task is treated as that of predicting, tracking, and answering questions about how entities change during the procedure. The dataset contains 488 paragraphs and 3,300 sentences. Each paragraph is richly annotated with the existence and locations of all the main entities (the “participants”) at every time step (sentence) throughout the procedure (~81,000 annotations).\n\nProPara paragraphs are natural (authored by crowdsourcing) rather than synthetic (e.g., in bAbI). Workers were given a prompt (e.g., “What happens during photosynthesis?”) and then asked to author a series of sentences describing the sequence of events in the procedure. From these sentences, participant entities and their existence and locations were identified. The goal of the challenge is to predict the existence and location of each participant, based on sentences in the paragraph." }, { "dkey": "VATEX", "dval": "VATEX is multilingual, large, linguistically complex, and diverse dataset in terms of both video and natural language descriptions. 
It has two tasks for video-and-language research: (1) Multilingual Video Captioning, aimed at describing a video in various languages with a compact unified captioning model, and (2) Video-guided Machine Translation, to translate a source language description into the target language using the video information as additional spatiotemporal context." }, { "dkey": "SNLI", "dval": "The SNLI dataset (Stanford Natural Language Inference) consists of 570k sentence-pairs manually labeled as entailment, contradiction, and neutral. Premises are image captions from Flickr30k, while hypotheses were generated by crowd-sourced annotators who were shown a premise and asked to generate entailing, contradicting, and neutral sentences. Annotators were instructed to judge the relation between sentences given that they describe the same event. Each pair is labeled as “entailment”, “neutral”, “contradiction” or “-”, where “-” indicates that an agreement could not be reached." }, { "dkey": "DiDeMo", "dval": "The Distinct Describable Moments (DiDeMo) dataset is one of the largest and most diverse datasets for the temporal localization of events in videos given natural language descriptions. The videos are collected from Flickr and each video is trimmed to a maximum of 30 seconds. The videos in the dataset are divided into 5-second segments to reduce the complexity of annotation. The dataset is split into training, validation and test sets containing 8,395, 1,065 and 1,004 videos respectively. The dataset contains a total of 26,892 moments and one moment could be associated with descriptions from multiple annotators. The descriptions in DiDeMo dataset are detailed and contain camera movement, temporal transition indicators, and activities. Moreover, the descriptions in DiDeMo are verified so that each description refers to a single moment." }, { "dkey": "ToTTo", "dval": "ToTTo is an open-domain English table-to-text dataset with over 120,000 training examples that proposes a controlled generation task: given a Wikipedia table and a set of highlighted table cells, produce a one-sentence description.\n\nDuring the dataset creation process, tables from English Wikipedia are matched with (noisy) descriptions. Each table cell mentioned in the description is highlighted and the descriptions are iteratively cleaned and corrected to faithfully reflect the content of the highlighted cells." }, { "dkey": "SWAG", "dval": "Given a partial description like \"she opened the hood of the car,\" humans can reason about the situation and anticipate what might come next (\"then, she examined the engine\"). SWAG (Situations With Adversarial Generations) is a large-scale dataset for this task of grounded commonsense inference, unifying natural language inference and physically grounded reasoning.\n\nThe dataset consists of 113k multiple choice questions about grounded situations. Each question is a video caption from LSMDC or ActivityNet Captions, with four answer choices about what might happen next in the scene. The correct answer is the (real) video caption for the next event in the video; the three incorrect answers are adversarially generated and human verified, so as to fool machines but not humans. The authors aim for SWAG to be a benchmark for evaluating grounded commonsense NLI and for learning representations." } ]
I want to train a supervised model for person re-identification in the wild
person re-identification wild video
2017
[ "SYSU-MM01", "ATRW", "Airport", "VeRi-Wild", "Partial-iLIDS", "DukeMTMC-reID" ]
[ "KITTI", "Market-1501" ]
[ { "dkey": "KITTI", "dval": "KITTI (Karlsruhe Institute of Technology and Toyota Technological Institute) is one of the most popular datasets for use in mobile robotics and autonomous driving. It consists of hours of traffic scenarios recorded with a variety of sensor modalities, including high-resolution RGB, grayscale stereo cameras, and a 3D laser scanner. Despite its popularity, the dataset itself does not contain ground truth for semantic segmentation. However, various researchers have manually annotated parts of the dataset to fit their necessities. Álvarez et al. generated ground truth for 323 images from the road detection challenge with three classes: road, vertical, and sky. Zhang et al. annotated 252 (140 for training and 112 for testing) acquisitions – RGB and Velodyne scans – from the tracking challenge for ten object categories: building, sky, road, vegetation, sidewalk, car, pedestrian, cyclist, sign/pole, and fence. Ros et al. labeled 170 training images and 46 testing images (from the visual odometry challenge) with 11 classes: building, tree, sky, car, sign, road, pedestrian, fence, pole, sidewalk, and bicyclist." }, { "dkey": "Market-1501", "dval": "Market-1501 is a large-scale public benchmark dataset for person re-identification. It contains 1501 identities which are captured by six different cameras, and 32,668 pedestrian image bounding-boxes obtained using the Deformable Part Models pedestrian detector. Each person has 3.6 images on average at each viewpoint. The dataset is split into two parts: 750 identities are utilized for training and the remaining 751 identities are used for testing. In the official testing protocol 3,368 query images are selected as probe set to find the correct match across 19,732 reference gallery images." }, { "dkey": "SYSU-MM01", "dval": "The SYSU-MM01 is a dataset collected for the Visible-Infrared Re-identification problem. The images in the dataset were obtained from 491 different persons by recording them using 4 RGB and 2 infrared cameras. Within the dataset, the persons are divided into 3 fixed splits to create training, validation and test sets. In the training set, there are 20284 RGB and 9929 infrared images of 296 persons. The validation set contains 1974 RGB and 1980 infrared images of 99 persons. The testing set consists of the images of 96 persons where 3803 infrared images are used as query and 301 randomly selected RGB images are used as gallery." }, { "dkey": "ATRW", "dval": "The ATRW Dataset contains over 8,000 video clips from 92 Amur tigers, with bounding box, pose keypoint, and tiger identity annotations." }, { "dkey": "Airport", "dval": "The Airport dataset is a dataset for person re-identification which consists of 39,902 images and 9,651 identities across six cameras." }, { "dkey": "VeRi-Wild", "dval": "Veri-Wild is the largest vehicle re-identification dataset (as of CVPR 2019). The dataset is captured from a large CCTV surveillance system consisting of 174 cameras across one month (30× 24h) under unconstrained scenarios. This dataset comprises 416,314 vehicle images of 40,671 identities. Evaluation on this dataset is split across three subsets: small, medium and large; comprising 3000, 5000 and 10,000 identities respectively (in probe and gallery sets)." }, { "dkey": "Partial-iLIDS", "dval": "Partial iLIDS is a dataset for occluded person person re-identification. It contains a total of 476 images of 119 people captured by 4 non-overlapping cameras. 
Some images contain people occluded by other individuals or luggage." }, { "dkey": "DukeMTMC-reID", "dval": "The DukeMTMC-reID (Duke Multi-Tracking Multi-Camera ReIDentification) dataset is a subset of the DukeMTMC for image-based person re-ID. The dataset is created from high-resolution videos from 8 different cameras. It is one of the largest pedestrian image datasets wherein images are cropped by hand-drawn bounding boxes. The dataset consists of 16,522 training images of 702 identities, 2,228 query images of the other 702 identities and 17,661 gallery images.\n\nNOTE: This dataset has been retracted." } ]
I want to use unsupervised representation learning to learn useful feature representations for high-level recognition problems.
unsupervised representation learning images
2017
[ "Icentia11K", "CC100", "VoxPopuli", "PTC", "REDDIT-12K", "AtariARI", "Email-EU" ]
[ "ImageNet", "UCF101" ]
[ { "dkey": "ImageNet", "dval": "The ImageNet dataset contains 14,197,122 annotated images according to the WordNet hierarchy. Since 2010 the dataset is used in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), a benchmark in image classification and object detection.\nThe publicly released dataset contains a set of manually annotated training images. A set of test images is also released, with the manual annotations withheld.\nILSVRC annotations fall into one of two categories: (1) image-level annotation of a binary label for the presence or absence of an object class in the image, e.g., “there are cars in this image” but “there are no tigers,” and (2) object-level annotation of a tight bounding box and class label around an object instance in the image, e.g., “there is a screwdriver centered at position (20,25) with width of 50 pixels and height of 30 pixels”.\nThe ImageNet project does not own the copyright of the images, therefore only thumbnails and URLs of images are provided.\n\n\nTotal number of non-empty WordNet synsets: 21841\nTotal number of images: 14197122\nNumber of images with bounding box annotations: 1,034,908\nNumber of synsets with SIFT features: 1000\nNumber of images with SIFT features: 1.2 million" }, { "dkey": "UCF101", "dval": "UCF101 dataset is an extension of UCF50 and consists of 13,320 video clips, which are classified into 101 categories. These 101 categories can be classified into 5 types (Body motion, Human-human interactions, Human-object interactions, Playing musical instruments and Sports). The total length of these video clips is over 27 hours. All the videos are collected from YouTube and have a fixed frame rate of 25 FPS with the resolution of 320 × 240." }, { "dkey": "Icentia11K", "dval": "Public ECG dataset of continuous raw signals for representation learning containing 11 thousand patients and 2 billion labelled beats." }, { "dkey": "CC100", "dval": "This corpus comprises of monolingual data for 100+ languages and also includes data for romanized languages. This was constructed using the urls and paragraph indices provided by the CC-Net repository by processing January-December 2018 Commoncrawl snapshots. Each file comprises of documents separated by double-newlines and paragraphs within the same document separated by a newline. The data is generated using the open source CC-Net repository." }, { "dkey": "VoxPopuli", "dval": "VoxPopuli is a large-scale multilingual corpus providing 100K hours of unlabelled speech data in 23 languages. It is the largest open data to date for unsupervised representation learning as well as semi-supervised learning. VoxPopuli also contains 1.8K hours of transcribed speeches in 16 languages and their aligned oral interpretations into 5 other languages totaling 5.1K hours." }, { "dkey": "PTC", "dval": "PTC is a collection of 344 chemical compounds represented as graphs which report the carcinogenicity for rats. There are 19 node labels for each node." }, { "dkey": "REDDIT-12K", "dval": "Reddit12k contains 11929 graphs each corresponding to an online discussion thread where nodes represent users, and an edge represents the fact that one of the two users responded to the comment of the other user. There is 1 of 11 graph labels associated with each of these 11929 discussion graphs, representing the category of the community." }, { "dkey": "AtariARI", "dval": "The AtariARI (Atari Annotated RAM Interface) is an environment for representation learning. 
The Atari Arcade Learning Environment (ALE) does not explicitly expose any ground truth state information. However, ALE does expose the RAM state (128 bytes per timestep) which are used by the game programmer to store important state information such as the location of sprites, the state of the clock, or the current room the agent is in. To extract these variables, the dataset creators consulted commented disassemblies (or source code) of Atari 2600 games which were made available by Engelhardt and Jentzsch and CPUWIZ. The dataset creators were able to find and verify important state variables for a total of 22 games. Once this information was acquired, combining it with the ALE interface produced a wrapper that can automatically output a state label for every example frame generated from the game. The dataset creators make this available with an easy-to-use gym wrapper, which returns this information with no change to existing code using gym interfaces." }, { "dkey": "Email-EU", "dval": "EmailEU is a directed temporal network constructed from email exchanges in a large European research institution for a 803-day period. It contains 986 email addresses as nodes and 332,334 emails as edges with timestamps. There are 42 ground truth departments in the dataset." } ]
I want to train a model for semantic segmentation from images.
semantic segmentation image
2016
[ "SNIPS", "ConvAI2", "SemanticKITTI", "Cata7", "Swiss3DCities" ]
[ "COCO", "Cityscapes" ]
[ { "dkey": "COCO", "dval": "The MS COCO (Microsoft Common Objects in Context) dataset is a large-scale object detection, segmentation, key-point detection, and captioning dataset. The dataset consists of 328K images.\n\nSplits:\nThe first version of MS COCO dataset was released in 2014. It contains 164K images split into training (83K), validation (41K) and test (41K) sets. In 2015 additional test set of 81K images was released, including all the previous test images and 40K new images.\n\nBased on community feedback, in 2017 the training/validation split was changed from 83K/41K to 118K/5K. The new split uses the same images and annotations. The 2017 test set is a subset of 41K images of the 2015 test set. Additionally, the 2017 release contains a new unannotated dataset of 123K images.\n\nAnnotations:\nThe dataset has annotations for\n\n\nobject detection: bounding boxes and per-instance segmentation masks with 80 object categories,\ncaptioning: natural language descriptions of the images (see MS COCO Captions),\nkeypoints detection: containing more than 200,000 images and 250,000 person instances labeled with keypoints (17 possible keypoints, such as left eye, nose, right hip, right ankle),\nstuff image segmentation – per-pixel segmentation masks with 91 stuff categories, such as grass, wall, sky (see MS COCO Stuff),\npanoptic: full scene segmentation, with 80 thing categories (such as person, bicycle, elephant) and a subset of 91 stuff categories (grass, sky, road),\ndense pose: more than 39,000 images and 56,000 person instances labeled with DensePose annotations – each labeled person is annotated with an instance id and a mapping between image pixels that belong to that person body and a template 3D model.\nThe annotations are publicly available only for training and validation images." }, { "dkey": "Cityscapes", "dval": "Cityscapes is a large-scale database which focuses on semantic understanding of urban street scenes. It provides semantic, instance-wise, and dense pixel annotations for 30 classes grouped into 8 categories (flat surfaces, humans, vehicles, constructions, objects, nature, sky, and void). The dataset consists of around 5000 fine annotated images and 20000 coarse annotated ones. Data was captured in 50 cities during several months, daytimes, and good weather conditions. It was originally recorded as video so the frames were manually selected to have the following features: large number of dynamic objects, varying scene layout, and varying background." }, { "dkey": "SNIPS", "dval": "The SNIPS Natural Language Understanding benchmark is a dataset of over 16,000 crowdsourced queries distributed among 7 user intents of various complexity:\n\n\nSearchCreativeWork (e.g. Find me the I, Robot television show),\nGetWeather (e.g. Is it windy in Boston, MA right now?),\nBookRestaurant (e.g. I want to book a highly rated restaurant in Paris tomorrow night),\nPlayMusic (e.g. Play the last track from Beyoncé off Spotify),\nAddToPlaylist (e.g. Add Diamonds to my roadtrip playlist),\nRateBook (e.g. Give 6 stars to Of Mice and Men),\nSearchScreeningEvent (e.g. Check the showtimes for Wonder Woman in Paris).\nThe training set contains of 13,084 utterances, the validation set and the test set contain 700 utterances each, with 100 queries per intent." }, { "dkey": "ConvAI2", "dval": "The ConvAI2 NeurIPS competition aimed at finding approaches to creating high-quality dialogue agents capable of meaningful open domain conversation. 
The ConvAI2 dataset for training models is based on the PERSONA-CHAT dataset. The speaker pairs each have assigned profiles coming from a set of 1155 possible personas (at training time), each consisting of at least 5 profile sentences, setting aside 100 never seen before personas for validation. As the original PERSONA-CHAT test set was released, a new hidden test set consisted of 100 new personas and over 1,015 dialogs was created by crowdsourced workers.\n\nTo avoid modeling that takes advantage of trivial word overlap, additional rewritten sets of the same train and test personas were crowdsourced, with related sentences that are rephrases, generalizations or specializations, rendering the task much more challenging. For example “I just got my nails done” is revised as “I love to pamper myself on a regular basis” and “I am on a diet now” is revised as “I need to lose weight.”\n\nThe training, validation and hidden test sets consists of 17,878, 1,000 and 1,015 dialogues, respectively." }, { "dkey": "SemanticKITTI", "dval": "SemanticKITTI is a large-scale outdoor-scene dataset for point cloud semantic segmentation. It is derived from the KITTI Vision Odometry Benchmark which it extends with dense point-wise annotations for the complete 360 field-of-view of the employed automotive LiDAR. The dataset consists of 22 sequences. Overall, the dataset provides 23201 point clouds for training and 20351 for testing." }, { "dkey": "Cata7", "dval": "Cata7 is the first cataract surgical instrument dataset for semantic segmentation. The dataset consists of seven videos while each video records a complete cataract surgery. All videos are from Beijing Tongren Hospital. Each video is split into a sequence of images, where resolution is 1920×1080 pixels. To reduce redundancy, the videos are downsampled from 30 fps to 1 fps. Also, images without surgical instruments are manually removed. Each image is labeled with precise edges and types of surgical instruments. This dataset contains 2,500 images, which are divided into training and test sets. The training set consists of five video sequences and test set consists of two video sequence." }, { "dkey": "Swiss3DCities", "dval": "Swiss3DCities is a dataset that is manually annotated for semantic segmentation with per-point labels, and is built using photogrammetry from images acquired by multirotors equipped with high-resolution cameras." } ]
We propose a new algorithm for human gait recognition using a computer vision technique called image recognition.
human gait recognition images
2,020
[ "BDD100K", "Hollywood 3D dataset", "USF", "OCID" ]
[ "MPII", "COCO" ]
[ { "dkey": "MPII", "dval": "The MPII Human Pose Dataset for single person pose estimation is composed of about 25K images of which 15K are training samples, 3K are validation samples and 7K are testing samples (which labels are withheld by the authors). The images are taken from YouTube videos covering 410 different human activities and the poses are manually annotated with up to 16 body joints." }, { "dkey": "COCO", "dval": "The MS COCO (Microsoft Common Objects in Context) dataset is a large-scale object detection, segmentation, key-point detection, and captioning dataset. The dataset consists of 328K images.\n\nSplits:\nThe first version of MS COCO dataset was released in 2014. It contains 164K images split into training (83K), validation (41K) and test (41K) sets. In 2015 additional test set of 81K images was released, including all the previous test images and 40K new images.\n\nBased on community feedback, in 2017 the training/validation split was changed from 83K/41K to 118K/5K. The new split uses the same images and annotations. The 2017 test set is a subset of 41K images of the 2015 test set. Additionally, the 2017 release contains a new unannotated dataset of 123K images.\n\nAnnotations:\nThe dataset has annotations for\n\n\nobject detection: bounding boxes and per-instance segmentation masks with 80 object categories,\ncaptioning: natural language descriptions of the images (see MS COCO Captions),\nkeypoints detection: containing more than 200,000 images and 250,000 person instances labeled with keypoints (17 possible keypoints, such as left eye, nose, right hip, right ankle),\nstuff image segmentation – per-pixel segmentation masks with 91 stuff categories, such as grass, wall, sky (see MS COCO Stuff),\npanoptic: full scene segmentation, with 80 thing categories (such as person, bicycle, elephant) and a subset of 91 stuff categories (grass, sky, road),\ndense pose: more than 39,000 images and 56,000 person instances labeled with DensePose annotations – each labeled person is annotated with an instance id and a mapping between image pixels that belong to that person body and a template 3D model.\nThe annotations are publicly available only for training and validation images." }, { "dkey": "BDD100K", "dval": "Datasets drive vision progress, yet existing driving datasets are impoverished in terms of visual content and supported tasks to study multitask learning for autonomous driving. Researchers are usually constrained to study a small set of problems on one dataset, while real-world computer vision applications require performing tasks of various complexities. We construct BDD100K, the largest driving video dataset with 100K videos and 10 tasks to evaluate the exciting progress of image recognition algorithms on autonomous driving. The dataset possesses geographic, environmental, and weather diversity, which is useful for training models that are less likely to be surprised by new conditions. Based on this diverse dataset, we build a benchmark for heterogeneous multitask learning and study how to solve the tasks together. Our experiments show that special training strategies are needed for existing models to perform such heterogeneous tasks. BDD100K opens the door for future studies in this important venue. More detail is at the dataset home page." }, { "dkey": "Hollywood 3D dataset", "dval": "A dataset for benchmarking action recognition algorithms in natural environments, while making use of 3D information. The dataset contains around 650 video clips, across 14 classes. 
In addition, two state-of-the-art action recognition algorithms are extended to make use of the 3D data, and five new interest point detection strategies that extend to the 3D data are also proposed." }, { "dkey": "USF", "dval": "The USF Human ID Gait Challenge Dataset is a dataset of videos for gait recognition. It has videos from 122 subjects in up to 32 possible combinations of variations in factors." }, { "dkey": "OCID", "dval": "Developing robot perception systems for handling objects in the real world requires computer vision algorithms to be carefully scrutinized with respect to the expected operating domain. This demands large quantities of ground truth data to rigorously evaluate the performance of algorithms.\n\nThe Object Cluttered Indoor Dataset (OCID) is an RGB-D dataset containing point-wise labeled point clouds for each object. The data was captured using two ASUS-PRO Xtion cameras positioned at different heights. It captures diverse settings of objects, background, context, sensor-to-scene distance, viewpoint angle and lighting conditions. The main purpose of OCID is to allow systematic comparison of existing object segmentation methods in scenes with an increasing amount of clutter. In addition, OCID also provides ground-truth data for other vision tasks like object classification and recognition." } ]
I want to train a face detector.
face detection image
2,017
[ "SNIPS", "WildDeepfake", "SCUT-CTW1500", "AFLW2000-3D", "ConvAI2" ]
[ "AFW", "AFLW" ]
[ { "dkey": "AFW", "dval": "AFW (Annotated Faces in the Wild) is a face detection dataset that contains 205 images with 468 faces. Each face image is labeled with at most 6 landmarks with visibility labels, as well as a bounding box." }, { "dkey": "AFLW", "dval": "The Annotated Facial Landmarks in the Wild (AFLW) is a large-scale collection of annotated face images gathered from Flickr, exhibiting a large variety in appearance (e.g., pose, expression, ethnicity, age, gender) as well as general imaging and environmental conditions. In total about 25K faces are annotated with up to 21 landmarks per image." }, { "dkey": "SNIPS", "dval": "The SNIPS Natural Language Understanding benchmark is a dataset of over 16,000 crowdsourced queries distributed among 7 user intents of various complexity:\n\n\nSearchCreativeWork (e.g. Find me the I, Robot television show),\nGetWeather (e.g. Is it windy in Boston, MA right now?),\nBookRestaurant (e.g. I want to book a highly rated restaurant in Paris tomorrow night),\nPlayMusic (e.g. Play the last track from Beyoncé off Spotify),\nAddToPlaylist (e.g. Add Diamonds to my roadtrip playlist),\nRateBook (e.g. Give 6 stars to Of Mice and Men),\nSearchScreeningEvent (e.g. Check the showtimes for Wonder Woman in Paris).\nThe training set contains of 13,084 utterances, the validation set and the test set contain 700 utterances each, with 100 queries per intent." }, { "dkey": "WildDeepfake", "dval": "WildDeepfake is a dataset for real-world deepfakes detection which consists of 7,314 face sequences extracted from 707 deepfake videos that are collected completely from the internet. WildDeepfake is a small dataset that can be used, in addition to existing datasets, to develop more effective detectors against real-world deepfakes." }, { "dkey": "SCUT-CTW1500", "dval": "The SCUT-CTW1500 dataset contains 1,500 images: 1,000 for training and 500 for testing. In particular, it provides 10,751 cropped text instance images, including 3,530 with curved text. The images are manually harvested from the Internet, image libraries such as Google Open-Image, or phone cameras. The dataset contains a lot of horizontal and multi-oriented text." }, { "dkey": "AFLW2000-3D", "dval": "AFLW2000-3D is a dataset of 2000 images that have been annotated with image-level 68-point 3D facial landmarks. This dataset is used for evaluation of 3D facial landmark detection models. The head poses are very diverse and often hard to be detected by a CNN-based face detector." }, { "dkey": "ConvAI2", "dval": "The ConvAI2 NeurIPS competition aimed at finding approaches to creating high-quality dialogue agents capable of meaningful open domain conversation. The ConvAI2 dataset for training models is based on the PERSONA-CHAT dataset. The speaker pairs each have assigned profiles coming from a set of 1155 possible personas (at training time), each consisting of at least 5 profile sentences, setting aside 100 never seen before personas for validation. As the original PERSONA-CHAT test set was released, a new hidden test set consisted of 100 new personas and over 1,015 dialogs was created by crowdsourced workers.\n\nTo avoid modeling that takes advantage of trivial word overlap, additional rewritten sets of the same train and test personas were crowdsourced, with related sentences that are rephrases, generalizations or specializations, rendering the task much more challenging. 
For example, “I just got my nails done” is revised as “I love to pamper myself on a regular basis” and “I am on a diet now” is revised as “I need to lose weight.”\n\nThe training, validation and hidden test sets consist of 17,878, 1,000 and 1,015 dialogues, respectively." } ]
The goal of fashion alignment is to detect the positions of key points defined on fashion items.
fashion landmark detection images
2,016
[ "Fashion-Gen", "OBP", "Fashion-MNIST", "Fashion IQ", "KITTI-trajectory-prediction", "Fashion 144K" ]
[ "DeepFashion", "LSP" ]
[ { "dkey": "DeepFashion", "dval": "DeepFashion is a dataset containing around 800K diverse fashion images with their rich annotations (46 categories, 1,000 descriptive attributes, bounding boxes and landmark information) ranging from well-posed product images to real-world-like consumer photos." }, { "dkey": "LSP", "dval": "The Leeds Sports Pose (LSP) dataset is widely used as the benchmark for human pose estimation. The original LSP dataset contains 2,000 images of sportspersons gathered from Flickr, 1000 for training and 1000 for testing. Each image is annotated with 14 joint locations, where left and right joints are consistently labelled from a person-centric viewpoint. The extended LSP dataset contains additional 10,000 images labeled for training.\n\nImage: Sumer et al" }, { "dkey": "Fashion-Gen", "dval": "Fashion-Gen consists of 293,008 high definition (1360 x 1360 pixels) fashion images paired with item descriptions provided by professional stylists. Each item is photographed from a variety of angles." }, { "dkey": "OBP", "dval": "Open Bandit Dataset is a public real-world logged bandit feedback data. The dataset is provided by ZOZO, Inc., the largest Japanese fashion e-commerce company with over 5 billion USD market capitalization (as of May 2020). The company uses multi-armed bandit algorithms to recommend fashion items to users in a large-scale fashion e-commerce platform called ZOZOTOWN." }, { "dkey": "Fashion-MNIST", "dval": "Fashion-MNIST is a dataset comprising of 28×28 grayscale images of 70,000 fashion products from 10 categories, with 7,000 images per category. The training set has 60,000 images and the test set has 10,000 images. Fashion-MNIST shares the same image size, data format and the structure of training and testing splits with the original MNIST." }, { "dkey": "Fashion IQ", "dval": "Fashion IQ support and advance research on interactive fashion image retrieval. Fashion IQ is the first fashion dataset to provide human-generated captions that distinguish similar pairs of garment images together with side-information consisting of real-world product descriptions and derived visual attribute labels for these images." }, { "dkey": "KITTI-trajectory-prediction", "dval": "KITTI is a well established dataset in the computer vision community. It has often been used for trajectory prediction despite not having a well defined split, generating non comparable baselines in different works. This dataset aims at bridging this gap and proposes a well defined split of the KITTI data.\nSamples are collected as 6 seconds chunks (2seconds for past and 4 for future) in a sliding window fashion from all trajectories in the dataset, including the egovehicle. There are a total of 8613 top-view trajectories for training and 2907 for testing.\nSince top-view maps are not provided by KITTI, semantic labels of static categories obtained with DeepLab-v3+ from all frames are projected in a common top-view map using the Velodyne 3D point cloud and IMU. The resulting maps have a spatial resolution of 0.5 meters and are provided along with the trajectories." }, { "dkey": "Fashion 144K", "dval": "Fashion 144K is a novel heterogeneous dataset with 144,169 user posts containing diverse image, textual and meta information." } ]
Domain adaptation for digit recognition.
domain adaptation images paragraph-level
2,020
[ "Libri-Adapt", "EMNIST", "MNIST-M", "VisDA-2017" ]
[ "ImageNet", "Cityscapes" ]
[ { "dkey": "ImageNet", "dval": "The ImageNet dataset contains 14,197,122 annotated images according to the WordNet hierarchy. Since 2010 the dataset is used in the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), a benchmark in image classification and object detection.\nThe publicly released dataset contains a set of manually annotated training images. A set of test images is also released, with the manual annotations withheld.\nILSVRC annotations fall into one of two categories: (1) image-level annotation of a binary label for the presence or absence of an object class in the image, e.g., “there are cars in this image” but “there are no tigers,” and (2) object-level annotation of a tight bounding box and class label around an object instance in the image, e.g., “there is a screwdriver centered at position (20,25) with width of 50 pixels and height of 30 pixels”.\nThe ImageNet project does not own the copyright of the images, therefore only thumbnails and URLs of images are provided.\n\n\nTotal number of non-empty WordNet synsets: 21841\nTotal number of images: 14197122\nNumber of images with bounding box annotations: 1,034,908\nNumber of synsets with SIFT features: 1000\nNumber of images with SIFT features: 1.2 million" }, { "dkey": "Cityscapes", "dval": "Cityscapes is a large-scale database which focuses on semantic understanding of urban street scenes. It provides semantic, instance-wise, and dense pixel annotations for 30 classes grouped into 8 categories (flat surfaces, humans, vehicles, constructions, objects, nature, sky, and void). The dataset consists of around 5000 fine annotated images and 20000 coarse annotated ones. Data was captured in 50 cities during several months, daytimes, and good weather conditions. It was originally recorded as video so the frames were manually selected to have the following features: large number of dynamic objects, varying scene layout, and varying background." }, { "dkey": "Libri-Adapt", "dval": "Libri-Adapt aims to support unsupervised domain adaptation research on speech recognition models." }, { "dkey": "EMNIST", "dval": "EMNIST (extended MNIST) has 4 times more data than MNIST. It is a set of handwritten digits with a 28 x 28 format." }, { "dkey": "MNIST-M", "dval": "MNIST-M is created by combining MNIST digits with the patches randomly extracted from color photos of BSDS500 as their background. It contains 59,001 training and 90,001 test images." }, { "dkey": "VisDA-2017", "dval": "VisDA-2017 is a simulation-to-real dataset for domain adaptation with over 280,000 images across 12 categories in the training, validation and testing domains. The training images are generated from the same object under different circumstances, while the validation images are collected from MSCOCO.." } ]
Learning a discriminative representation for person re-identification.
person re-identification images
2,017
[ "DukeMTMC-reID", "CUHK-PEDES", "VeRi-776", "Airport", "Partial-iLIDS" ]
[ "VIPeR", "CUHK03" ]
[ { "dkey": "VIPeR", "dval": "The Viewpoint Invariant Pedestrian Recognition (VIPeR) dataset includes 632 people and two outdoor cameras under different viewpoints and light conditions. Each person has one image per camera and each image has been scaled to be 128×48 pixels. It provides the pose angle of each person as 0° (front), 45°, 90° (right), 135°, and 180° (back)." }, { "dkey": "CUHK03", "dval": "The CUHK03 consists of 14,097 images of 1,467 different identities, where 6 campus cameras were deployed for image collection and each identity is captured by 2 campus cameras. This dataset provides two types of annotations, one by manually labelled bounding boxes and the other by bounding boxes produced by an automatic detector. The dataset also provides 20 random train/test splits in which 100 identities are selected for testing and the rest for training" }, { "dkey": "DukeMTMC-reID", "dval": "The DukeMTMC-reID (Duke Multi-Tracking Multi-Camera ReIDentification) dataset is a subset of the DukeMTMC for image-based person re-ID. The dataset is created from high-resolution videos from 8 different cameras. It is one of the largest pedestrian image datasets wherein images are cropped by hand-drawn bounding boxes. The dataset consists 16,522 training images of 702 identities, 2,228 query images of the other 702 identities and 17,661 gallery images.\n\nNOTE: This dataset has been retracted." }, { "dkey": "CUHK-PEDES", "dval": "The CUHK-PEDES dataset is a caption-annotated pedestrian dataset. It contains 40,206 images over 13,003 persons. Images are collected from five existing person re-identification datasets, CUHK03, Market-1501, SSM, VIPER, and CUHK01 while each image is annotated with 2 text descriptions by crowd-sourcing workers. Sentences incorporate rich details about person appearances, actions, poses." }, { "dkey": "VeRi-776", "dval": "VeRi-776 is a vehicle re-identification dataset which contains 49,357 images of 776 vehicles from 20 cameras. The dataset is collected in the real traffic scenario, which is close to the setting of CityFlow. The dataset contains bounding boxes, types, colors and brands." }, { "dkey": "Airport", "dval": "The Airport dataset is a dataset for person re-identification which consists of 39,902 images and 9,651 identities across six cameras." }, { "dkey": "Partial-iLIDS", "dval": "Partial iLIDS is a dataset for occluded person person re-identification. It contains a total of 476 images of 119 people captured by 4 non-overlapping cameras. Some images contain people occluded by other individuals or luggage." } ]
In this paper, we introduce a new pretraining task, which forces language models to incorporate knowledge
fact completion text
2,019
[ "GSL", "KLEJ", "DuoRC", "SuperGLUE" ]
[ "SearchQA", "SQuAD", "TriviaQA" ]
[ { "dkey": "SearchQA", "dval": "SearchQA was built using an in-production, commercial search engine. It closely reflects the full pipeline of a (hypothetical) general question-answering system, which consists of information retrieval and answer synthesis." }, { "dkey": "SQuAD", "dval": "The Stanford Question Answering Dataset (SQuAD) is a collection of question-answer pairs derived from Wikipedia articles. In SQuAD, the correct answers of questions can be any sequence of tokens in the given text. Because the questions and answers are produced by humans through crowdsourcing, it is more diverse than some other question-answering datasets. SQuAD 1.1 contains 107,785 question-answer pairs on 536 articles. SQuAD2.0 (open-domain SQuAD, SQuAD-Open), the latest version, combines the 100,000 questions in SQuAD1.1 with over 50,000 un-answerable questions written adversarially by crowdworkers in forms that are similar to the answerable ones." }, { "dkey": "TriviaQA", "dval": "TriviaQA is a realistic text-based question answering dataset which includes 950K question-answer pairs from 662K documents collected from Wikipedia and the web. This dataset is more challenging than standard QA benchmark datasets such as Stanford Question Answering Dataset (SQuAD), as the answers for a question may not be directly obtained by span prediction and the context is very long. TriviaQA dataset consists of both human-verified and machine-generated QA subsets." }, { "dkey": "GSL", "dval": "Dataset Description\nThe Greek Sign Language (GSL) is a large-scale RGB+D dataset, suitable for Sign Language Recognition (SLR) and Sign Language Translation (SLT). The video captures are conducted using an Intel RealSense D435 RGB+D camera at a rate of 30 fps. Both the RGB and the depth streams are acquired in the same spatial resolution of 848×480 pixels. To increase variability in the videos, the camera position and orientation is slightly altered within subsequent recordings. Seven different signers are employed to perform 5 individual and commonly met scenarios in different public services. The average length of each scenario is twenty sentences.\n\nThe dataset contains 10,290 sentence instances, 40,785 gloss instances, 310 unique glosses (vocabulary size) and 331 unique sentences, with 4.23 glosses per sentence on average. Each signer is asked to perform the pre-defined dialogues five consecutive times. In all cases, the simulation considers a deaf person communicating with a single public service employee. The involved signer performs the sequence of glosses of both agents in the discussion. For the annotation of each gloss sequence, GSL linguistic experts are involved. The given annotations are at individual gloss and gloss sequence level. A translation of the gloss sentences to spoken Greek is also provided.\n\nEvaluation\nThe GSL dataset includes the 3 evaluation setups:\n\n\n\nSigner-dependent continuous sign language recognition (GSL SD) – roughly 80% of videos are used for training, corresponding to 8,189 instances. The rest 1,063 (10%) were kept for validation and 1,043 (10%) for testing.\n\n\n\nSigner-independent continuous sign language recognition (GSL SI) – the selected test gloss sequences are not used in the training set, while all the individual glosses exist in the training set. In GSL SI, the recordings of one signer are left out for validation and testing (588 and 881 instances, respectively). The rest 8821 instances are utilized for training.\n\n\n\nIsolated gloss sign language recognition (GSL isol.) 
– The validation set consists of 2,231 gloss instances, the test set 3,500, while the remaining 34,995 are used for training. All 310 unique glosses are seen in the training set.\n\n\n\nFor more info and results, advice our paper\n\nPaper Abstract: A Comprehensive Study on Sign Language Recognition Methods, Adaloglou et al. 2020\nIn this paper, a comparative experimental assessment of computer vision-based methods for sign language recognition is conducted. By implementing the most recent deep neural network methods in this field, a thorough evaluation on multiple publicly available datasets is performed. The aim of the present study is to provide insights on sign language recognition, focusing on mapping non-segmented video streams to glosses. For this task, two new sequence training criteria, known from the fields of speech and scene text recognition, are introduced. Furthermore, a\nplethora of pretraining schemes are thoroughly discussed. Finally, a new RGB+D dataset for the Greek sign language is created. To the best of our knowledge, this is the first sign language dataset where sentence and gloss level annotations are provided for every video capture.\n\nArxiv link" }, { "dkey": "KLEJ", "dval": "The KLEJ benchmark (Kompleksowa Lista Ewaluacji Językowych) is a set of nine evaluation tasks for the Polish language understanding task.\n\nKey benchmark features:\n\n\nIt contains a diverse set of tasks from different domains and with different objectives.\nMost tasks are created from existing datasets but the authors also released the new sentiment analysis dataset from an e-commerce domain.\nIt includes tasks which have relatively small datasets and require extensive external knowledge to solve them. It promotes the usage of transfer learning instead of training separate models from scratch.\n\nThe name KLEJ (English: GLUE) is an abbreviation for Kompleksowa Lista Ewaluacji Językowych (English: Comprehensive List of Language Evaluations) and refers to the GLUE benchmark." }, { "dkey": "DuoRC", "dval": "DuoRC contains 186,089 unique question-answer pairs created from a collection of 7680 pairs of movie plots where each pair in the collection reflects two versions of the same movie.\n\nWhy another RC dataset?\n\nDuoRC pushes the NLP community to address challenges on incorporating knowledge and reasoning in neural architectures for reading comprehension. It poses several interesting challenges such as:\n\n\nDuoRC using parallel plots is especially designed to contain a large number of questions with low lexical overlap between questions and their corresponding passages\nIt requires models to go beyond the content of the given passage itself and incorporate world-knowledge, background knowledge, and common-sense knowledge to arrive at the answer\nIt revolves around narrative passages from movie plots describing complex events and therefore naturally require complex reasoning (e.g. temporal reasoning, entailment, long-distance anaphoras, etc.) across multiple sentences to infer the answer to questions\nSeveral of the questions in DuoRC, while seeming relevant, cannot actually be answered from the given passage. This requires the model to detect the unanswerability of questions. This aspect is important for machines to achieve in industrial settings in particular" }, { "dkey": "SuperGLUE", "dval": "SuperGLUE is a benchmark dataset designed to pose a more rigorous test of language understanding than GLUE. 
SuperGLUE has the same high-level motivation as GLUE: to provide a simple, hard-to-game measure of progress toward general-purpose language understanding technologies for English. SuperGLUE follows the basic design of GLUE: It consists of a public leaderboard built around eight language understanding tasks, drawing on existing data, accompanied by a single-number\nperformance metric, and an analysis toolkit. However, it improves upon GLUE in several ways:\n\n\nMore challenging tasks: SuperGLUE retains the two hardest tasks in GLUE. The remaining tasks were identified from those submitted to an open call for task proposals and were selected based on difficulty for current NLP approaches.\nMore diverse task formats: The task formats in GLUE are limited to sentence- and sentence-pair classification. The authors expand the set of task formats in SuperGLUE to include\ncoreference resolution and question answering (QA).\nComprehensive human baselines: the authors include human performance estimates for all benchmark tasks, which verify that substantial headroom exists between a strong BERT-based baseline and human performance.\nImproved code support: SuperGLUE is distributed with a new, modular toolkit for work on pretraining, multi-task learning, and transfer learning in NLP, built around standard tools including PyTorch (Paszke et al., 2017) and AllenNLP (Gardner et al., 2017).\nRefined usage rules: The conditions for inclusion on the SuperGLUE leaderboard were revamped to ensure fair competition, an informative leaderboard, and full credit\nassignment to data and task creators." } ]
We propose two hierarchical Gaussian descriptors for person re-identification, and show that these descriptors outperform
person re-identification images
2,017
[ "PRID2011", "3DMatch", "CUHK02", "Airport", "Partial-iLIDS", "SYSU-MM01" ]
[ "VIPeR", "Market-1501" ]
[ { "dkey": "VIPeR", "dval": "The Viewpoint Invariant Pedestrian Recognition (VIPeR) dataset includes 632 people and two outdoor cameras under different viewpoints and light conditions. Each person has one image per camera and each image has been scaled to be 128×48 pixels. It provides the pose angle of each person as 0° (front), 45°, 90° (right), 135°, and 180° (back)." }, { "dkey": "Market-1501", "dval": "Market-1501 is a large-scale public benchmark dataset for person re-identification. It contains 1501 identities which are captured by six different cameras, and 32,668 pedestrian image bounding-boxes obtained using the Deformable Part Models pedestrian detector. Each person has 3.6 images on average at each viewpoint. The dataset is split into two parts: 750 identities are utilized for training and the remaining 751 identities are used for testing. In the official testing protocol 3,368 query images are selected as probe set to find the correct match across 19,732 reference gallery images." }, { "dkey": "PRID2011", "dval": "PRID 2011 is a person reidentification dataset that provides multiple person trajectories recorded from two different static surveillance cameras, monitoring crosswalks and sidewalks. The dataset shows a clean background, and the people in the dataset are rarely occluded. In the dataset, 200 people appear in both views. Among the 200 people, 178 people have more than 20 appearances" }, { "dkey": "3DMatch", "dval": "The 3DMATCH benchmark evaluates how well descriptors (both 2D and 3D) can establish correspondences between RGB-D frames of different views. The dataset contains 2D RGB-D patches and 3D patches (local TDF voxel grid volumes) of wide-baselined correspondences. \n\nThe pixel size of each 2D patch is determined by the projection of the 0.3m3 local 3D patch around the interest point onto the image plane." }, { "dkey": "CUHK02", "dval": "CUHK02 is a dataset for person re-identification. It contains 1,816 identities from two disjoint camera views. Each identity has two samples per camera view making a total of 7,264 images. It is used for Person Re-identification." }, { "dkey": "Airport", "dval": "The Airport dataset is a dataset for person re-identification which consists of 39,902 images and 9,651 identities across six cameras." }, { "dkey": "Partial-iLIDS", "dval": "Partial iLIDS is a dataset for occluded person person re-identification. It contains a total of 476 images of 119 people captured by 4 non-overlapping cameras. Some images contain people occluded by other individuals or luggage." }, { "dkey": "SYSU-MM01", "dval": "The SYSU-MM01 is a dataset collected for the Visible-Infrared Re-identification problem. The images in the dataset were obtained from 491 different persons by recording them using 4 RGB and 2 infrared cameras. Within the dataset, the persons are divided into 3 fixed splits to create training, validation and test sets. In the training set, there are 20284 RGB and 9929 infrared images of 296 persons. The validation set contains 1974 RGB and 1980 infrared images of 99 persons. The testing set consists of the images of 96 persons where 3803 infrared images are used as query and 301 randomly selected RGB images are used as gallery." } ]
I have built a robust framework for action recognition and prediction.
action recognition prediction video
2,018
[ "G3D", "DISFA", "UCFRep", "Stanford Dogs", "TinyVIRAT", "PHM2017" ]
[ "UCF101", "HMDB51" ]
[ { "dkey": "UCF101", "dval": "UCF101 dataset is an extension of UCF50 and consists of 13,320 video clips, which are classified into 101 categories. These 101 categories can be classified into 5 types (Body motion, Human-human interactions, Human-object interactions, Playing musical instruments and Sports). The total length of these video clips is over 27 hours. All the videos are collected from YouTube and have a fixed frame rate of 25 FPS with the resolution of 320 × 240." }, { "dkey": "HMDB51", "dval": "The HMDB51 dataset is a large collection of realistic videos from various sources, including movies and web videos. The dataset is composed of 6,766 video clips from 51 action categories (such as “jump”, “kiss” and “laugh”), with each category containing at least 101 clips. The original evaluation scheme uses three different training/testing splits. In each split, each action class has 70 clips for training and 30 clips for testing. The average accuracy over these three splits is used to measure the final performance." }, { "dkey": "G3D", "dval": "The Gaming 3D Dataset (G3D) focuses on real-time action recognition in a gaming scenario. It contains 10 subjects performing 20 gaming actions: “punch right”, “punch left”, “kick right”, “kick left”, “defend”, “golf swing”, “tennis swing forehand”, “tennis swing backhand”, “tennis serve”, “throw bowling ball”, “aim and fire gun”, “walk”, “run”, “jump”, “climb”, “crouch”, “steer a car”, “wave”, “flap” and “clap”." }, { "dkey": "DISFA", "dval": "The Denver Intensity of Spontaneous Facial Action (DISFA) dataset consists of 27 videos of 4844 frames each, with 130,788 images in total. Action unit annotations are on different levels of intensity, which are ignored in the following experiments and action units are either set or unset. DISFA was selected from a wider range of databases popular in the field of facial expression recognition because of the high number of smiles, i.e. action unit 12. In detail, 30,792 have this action unit set, 82,176 images have some action unit(s) set and 48,612 images have no action unit(s) set at all." }, { "dkey": "UCFRep", "dval": "The UCFRep dataset contains 526 annotated repetitive action videos. This dataset is built from the action recognition dataset UCF101." }, { "dkey": "Stanford Dogs", "dval": "The Stanford Dogs dataset contains 20,580 images of 120 classes of dogs from around the world, which are divided into 12,000 images for training and 8,580 images for testing." }, { "dkey": "TinyVIRAT", "dval": "TinyVIRAT contains natural low-resolution activities. The actions in TinyVIRAT videos have multiple labels and they are extracted from surveillance videos which makes them realistic and more challenging." }, { "dkey": "PHM2017", "dval": "PHM2017 is a new dataset consisting of 7,192 English tweets across six diseases and conditions: Alzheimer’s Disease, heart attack (any severity), Parkinson’s disease, cancer (any type), Depression (any severity), and Stroke. The Twitter search API was used to retrieve the data using the colloquial disease names as search keywords, with the expectation of retrieving a high-recall, low precision dataset. After removing the re-tweets and replies, the tweets were manually annotated. The labels are:\n\n\nself-mention. The tweet contains a health mention with a health self-report of the Twitter account owner, e.g., \"However, I worked hard and ran for Tokyo Mayer Election Campaign in January through February, 2014, without publicizing the cancer.\"\nother-mention. 
The tweet contains a health mention of a health report about someone other than the account owner, e.g., \"Designer with Parkinson’s couldn’t work then engineer invents bracelet + changes her world\"\nawareness. The tweet contains the disease name, but does not mention a specific person, e.g., \"A Month Before a Heart Attack, Your Body Will Warn You With These 8 Signals\"\nnon-health. The tweet contains the disease name, but the tweet topic is not about health. \"Now I can have cancer on my wall for all to see <3\"" } ]
I propose an adaptive fusion strategy to exploit the complementary properties of deep and shallow features for tracking.
generic object tracking image
2,018
[ "DiCOVA", "ASNQ", "WFLW", "HIGGS Data Set" ]
[ "VOT2017", "VOT2016" ]
[ { "dkey": "VOT2017", "dval": "VOT2017 is a Visual Object Tracking dataset for different tasks that contains 60 short sequences annotated with 6 different attributes." }, { "dkey": "VOT2016", "dval": "VOT2016 is a video dataset for visual object tracking. It contains 60 video clips and 21,646 corresponding ground truth maps with pixel-wise annotation of salient objects." }, { "dkey": "DiCOVA", "dval": "The DiCOVA Challenge dataset is derived from the Coswara dataset, a crowd-sourced dataset of sound recordings from COVID-19 positive and non-COVID-19 individuals. The Coswara data is collected using a web-application2, launched in April-2020, accessible through the internet by anyone around the globe. The volunteering subjects are advised to record their respiratory sounds in a quiet environment. \n\nEach subject provides 9 audio recordings, namely, (a) shallow and deep breathing (2 nos.), (b) shallow and heavy cough (2 nos.), (c) sustained phonation of vowels [æ] (as in bat), [i] (as in beet), and [u] (as in boot) (3 nos.), and (d) fast and normal pace 1 to 20 number counting (2 nos.). \n\nThe DiCOVA Challenge has two tracks. The participants also provided metadata corresponding to their current health status (includes COVID19 status, any other respiratory ailments, and symptoms), demographic information, age and gender. From this Coswara dataset, two datasets have been created: \n\n(a) Track-1 dataset: composed of cough sound recordings. It t is composed of cough audio data from 1040 subjects.\n(b) Track-2 dataset: composed of deep breathing, vowel [i], and number counting (normal pace) speech recordings. It is composed of audio data from 1199 subjects." }, { "dkey": "ASNQ", "dval": "A large scale dataset to enable the transfer step, exploiting the Natural Questions dataset." }, { "dkey": "WFLW", "dval": "The Wider Facial Landmarks in the Wild or WFLW database contains 10000 faces (7500 for training and 2500 for testing) with 98 annotated landmarks. This database also features rich attribute annotations in terms of occlusion, head pose, make-up, illumination, blur and expressions." }, { "dkey": "HIGGS Data Set", "dval": "The data has been produced using Monte Carlo simulations. The first 21 features (columns 2-22) are kinematic properties measured by the particle detectors in the accelerator. The last seven features are functions of the first 21 features; these are high-level features derived by physicists to help discriminate between the two classes. There is an interest in using deep learning methods to obviate the need for physicists to manually develop such features. Benchmark results using Bayesian Decision Trees from a standard physics package and 5-layer neural networks are presented in the original paper. The last 500,000 examples are used as a test set." } ]
I want to train a model for panoramic road scene object detection.
object detection panoramic images
2,019
[ "IDD", "SYNTHIA-AL", "COCO-Tasks", "T-LESS" ]
[ "COCO", "KITTI" ]
[ { "dkey": "COCO", "dval": "The MS COCO (Microsoft Common Objects in Context) dataset is a large-scale object detection, segmentation, key-point detection, and captioning dataset. The dataset consists of 328K images.\n\nSplits:\nThe first version of MS COCO dataset was released in 2014. It contains 164K images split into training (83K), validation (41K) and test (41K) sets. In 2015 additional test set of 81K images was released, including all the previous test images and 40K new images.\n\nBased on community feedback, in 2017 the training/validation split was changed from 83K/41K to 118K/5K. The new split uses the same images and annotations. The 2017 test set is a subset of 41K images of the 2015 test set. Additionally, the 2017 release contains a new unannotated dataset of 123K images.\n\nAnnotations:\nThe dataset has annotations for\n\n\nobject detection: bounding boxes and per-instance segmentation masks with 80 object categories,\ncaptioning: natural language descriptions of the images (see MS COCO Captions),\nkeypoints detection: containing more than 200,000 images and 250,000 person instances labeled with keypoints (17 possible keypoints, such as left eye, nose, right hip, right ankle),\nstuff image segmentation – per-pixel segmentation masks with 91 stuff categories, such as grass, wall, sky (see MS COCO Stuff),\npanoptic: full scene segmentation, with 80 thing categories (such as person, bicycle, elephant) and a subset of 91 stuff categories (grass, sky, road),\ndense pose: more than 39,000 images and 56,000 person instances labeled with DensePose annotations – each labeled person is annotated with an instance id and a mapping between image pixels that belong to that person body and a template 3D model.\nThe annotations are publicly available only for training and validation images." }, { "dkey": "KITTI", "dval": "KITTI (Karlsruhe Institute of Technology and Toyota Technological Institute) is one of the most popular datasets for use in mobile robotics and autonomous driving. It consists of hours of traffic scenarios recorded with a variety of sensor modalities, including high-resolution RGB, grayscale stereo cameras, and a 3D laser scanner. Despite its popularity, the dataset itself does not contain ground truth for semantic segmentation. However, various researchers have manually annotated parts of the dataset to fit their necessities. Álvarez et al. generated ground truth for 323 images from the road detection challenge with three classes: road, vertical, and sky. Zhang et al. annotated 252 (140 for training and 112 for testing) acquisitions – RGB and Velodyne scans – from the tracking challenge for ten object categories: building, sky, road, vegetation, sidewalk, car, pedestrian, cyclist, sign/pole, and fence. Ros et al. labeled 170 training images and 46 testing images (from the visual odometry challenge) with 11 classes: building, tree, sky, car, sign, road, pedestrian, fence, pole, sidewalk, and bicyclist." }, { "dkey": "IDD", "dval": "IDD is a dataset for road scene understanding in unstructured environments used for semantic segmentation and object detection for autonomous driving. It consists of 10,004 images, finely annotated with 34 classes collected from 182 drive sequences on Indian roads." }, { "dkey": "SYNTHIA-AL", "dval": "Specially designed to evaluate active learning for video object detection in road scenes." }, { "dkey": "COCO-Tasks", "dval": "Comprises about 40,000 images where the most suitable objects for 14 tasks have been annotated." 
}, { "dkey": "T-LESS", "dval": "T-LESS is a dataset for estimating the 6D pose, i.e. translation and rotation, of texture-less rigid objects. The dataset features thirty industry-relevant objects with no significant texture and no discriminative color or reflectance properties. The objects exhibit symmetries and mutual similarities in shape and/or size. Compared to other datasets, a unique property is that some of the objects are parts of others. The dataset includes training and test images that were captured with three synchronized sensors, specifically a structured-light and a time-of-flight RGB-D sensor and a high-resolution RGB camera. There are approximately 39K training and 10K test images from each sensor. Additionally, two types of 3D models are provided for each object, i.e. a manually created CAD model and a semi-automatically reconstructed one. Training images depict individual objects against a black background. Test images originate from twenty test scenes having varying complexity, which increases from simple scenes with several isolated objects to very challenging ones with multiple instances of several objects and with a high amount of clutter and occlusion. The images were captured from a systematically sampled view sphere around the object/scene, and are annotated with accurate ground truth 6D poses of all modeled objects." } ]
I want to train a model to localize instrument playing in music videos.
weakly supervised instrument-playing action localization video
2,018
[ "MusicNet", "UCF101", "RWC", "SNIPS", "Countix", "FAIR-Play" ]
[ "AudioSet", "YouTube-8M" ]
[ { "dkey": "AudioSet", "dval": "Audioset is an audio event dataset, which consists of over 2M human-annotated 10-second video clips. These clips are collected from YouTube, therefore many of which are in poor-quality and contain multiple sound-sources. A hierarchical ontology of 632 event classes is employed to annotate these data, which means that the same sound could be annotated as different labels. For example, the sound of barking is annotated as Animal, Pets, and Dog. All the videos are split into Evaluation/Balanced-Train/Unbalanced-Train set." }, { "dkey": "YouTube-8M", "dval": "The YouTube-8M dataset is a large scale video dataset, which includes more than 7 million videos with 4716 classes labeled by the annotation system. The dataset consists of three parts: training set, validate set, and test set. In the training set, each class contains at least 100 training videos. Features of these videos are extracted by the state-of-the-art popular pre-trained models and released for public use. Each video contains audio and visual modality. Based on the visual information, videos are divided into 24 topics, such as sports, game, arts & entertainment, etc" }, { "dkey": "MusicNet", "dval": "MusicNet is a collection of 330 freely-licensed classical music recordings, together with over 1 million annotated labels indicating the precise time of each note in every recording, the instrument that plays each note, and the note's position in the metrical structure of the composition. The labels are acquired from musical scores aligned to recordings by dynamic time warping. The labels are verified by trained musicians; we estimate a labeling error rate of 4%. We offer the MusicNet labels to the machine learning and music communities as a resource for training models and a common benchmark for comparing results." }, { "dkey": "UCF101", "dval": "UCF101 dataset is an extension of UCF50 and consists of 13,320 video clips, which are classified into 101 categories. These 101 categories can be classified into 5 types (Body motion, Human-human interactions, Human-object interactions, Playing musical instruments and Sports). The total length of these video clips is over 27 hours. All the videos are collected from YouTube and have a fixed frame rate of 25 FPS with the resolution of 320 × 240." }, { "dkey": "RWC", "dval": "The RWC (Real World Computing) Music Database is a copyright-cleared music database (DB) that is available to researchers as a common foundation for research. It contains around 100 complete songs with manually labeled section boundaries. For the 50 instruments, individual sounds at half-tone intervals were captured with several variations of playing styles, dynamics, instrument manufacturers and musicians." }, { "dkey": "SNIPS", "dval": "The SNIPS Natural Language Understanding benchmark is a dataset of over 16,000 crowdsourced queries distributed among 7 user intents of various complexity:\n\n\nSearchCreativeWork (e.g. Find me the I, Robot television show),\nGetWeather (e.g. Is it windy in Boston, MA right now?),\nBookRestaurant (e.g. I want to book a highly rated restaurant in Paris tomorrow night),\nPlayMusic (e.g. Play the last track from Beyoncé off Spotify),\nAddToPlaylist (e.g. Add Diamonds to my roadtrip playlist),\nRateBook (e.g. Give 6 stars to Of Mice and Men),\nSearchScreeningEvent (e.g. Check the showtimes for Wonder Woman in Paris).\nThe training set contains of 13,084 utterances, the validation set and the test set contain 700 utterances each, with 100 queries per intent." 
}, { "dkey": "Countix", "dval": "Countix is a real world dataset of repetition videos collected in the wild (i.e.YouTube) covering a wide range of semantic settings with significant challenges such as camera and object motion, diverse set of periods and counts, and changes in the speed of repeated actions. Countix include repeated videos of workout activities (squats, pull ups, battle rope training, exercising arm), dance moves (pirouetting, pumping fist), playing instruments (playing ukulele), using tools repeatedly (hammer hitting objects, chainsaw cutting wood, slicing onion), artistic performances (hula hooping, juggling soccer ball), sports (playing ping pong and tennis) and many others. Figure 6 illustrates some examples from the dataset as well as the distribution of repetition counts and period lengths." }, { "dkey": "FAIR-Play", "dval": "FAIR-Play is a video-audio dataset consisting of 1,871 video clips and their corresponding binaural audio clips recording in a music room. The video clip and binaural clip of the same index are roughly aligned." } ]
We present a Transformer-based multi-lingual sentence encoder model that learns cross-lingual representations.
cross-lingual retrieval text
2,019
[ "CC100", "NCLS", "REFreSD", "WikiAnn", "EXAMS", "CLIRMatrix", "XTREME" ]
[ "SNLI", "SentEval", "SQuAD" ]
[ { "dkey": "SNLI", "dval": "The SNLI dataset (Stanford Natural Language Inference) consists of 570k sentence-pairs manually labeled as entailment, contradiction, and neutral. Premises are image captions from Flickr30k, while hypotheses were generated by crowd-sourced annotators who were shown a premise and asked to generate entailing, contradicting, and neutral sentences. Annotators were instructed to judge the relation between sentences given that they describe the same event. Each pair is labeled as “entailment”, “neutral”, “contradiction” or “-”, where “-” indicates that an agreement could not be reached." }, { "dkey": "SentEval", "dval": "SentEval is a toolkit for evaluating the quality of universal sentence representations. SentEval encompasses a variety of tasks, including binary and multi-class classification, natural language inference and sentence similarity. The set of tasks was selected based on what appears to be the community consensus regarding the appropriate evaluations for universal sentence representations. The toolkit comes with scripts to download and preprocess datasets, and an easy interface to evaluate sentence encoders." }, { "dkey": "SQuAD", "dval": "The Stanford Question Answering Dataset (SQuAD) is a collection of question-answer pairs derived from Wikipedia articles. In SQuAD, the correct answers of questions can be any sequence of tokens in the given text. Because the questions and answers are produced by humans through crowdsourcing, it is more diverse than some other question-answering datasets. SQuAD 1.1 contains 107,785 question-answer pairs on 536 articles. SQuAD2.0 (open-domain SQuAD, SQuAD-Open), the latest version, combines the 100,000 questions in SQuAD1.1 with over 50,000 un-answerable questions written adversarially by crowdworkers in forms that are similar to the answerable ones." }, { "dkey": "CC100", "dval": "This corpus comprises of monolingual data for 100+ languages and also includes data for romanized languages. This was constructed using the urls and paragraph indices provided by the CC-Net repository by processing January-December 2018 Commoncrawl snapshots. Each file comprises of documents separated by double-newlines and paragraphs within the same document separated by a newline. The data is generated using the open source CC-Net repository." }, { "dkey": "NCLS", "dval": "Presents two high-quality large-scale CLS datasets based on existing monolingual summarization datasets." }, { "dkey": "REFreSD", "dval": "Consists of English-French sentence-pairs annotated with semantic divergence classes and token-level rationales." }, { "dkey": "WikiAnn", "dval": "WikiAnn is a dataset for cross-lingual name tagging and linking based on Wikipedia articles in 295 languages." }, { "dkey": "EXAMS", "dval": "A new benchmark dataset for cross-lingual and multilingual question answering for high school examinations. Collects more than 24,000 high-quality high school exam questions in 16 languages, covering 8 language families and 24 school subjects from Natural Sciences and Social Sciences, among others. EXAMS offers a fine-grained evaluation framework across multiple languages and subjects, which allows precise analysis and comparison of various models." }, { "dkey": "CLIRMatrix", "dval": "CLIRMatrix is a large collection of bilingual and multilingual datasets for Cross-Lingual Information Retrieval. 
It includes:\n\n\nBI-139: A bilingual dataset of queries in one language matched with relevant documents in another language for 139x138=19,182 language pairs,\nMULTI-8, a multilingual dataset of queries and documents jointly aligned in 8 different languages.\n\nIn total, 49 million unique queries and 34 billion (query, document, label) triplets were mined, making CLIRMatrix the largest and most comprehensive CLIR dataset to date." }, { "dkey": "XTREME", "dval": "The Cross-lingual TRansfer Evaluation of Multilingual Encoders (XTREME) benchmark was introduced to encourage more research on multilingual transfer learning. XTREME covers 40 typologically diverse languages spanning 12 language families and includes 9 tasks that require reasoning about different levels of syntax or semantics.\n\nThe languages in XTREME are selected to maximize language diversity, coverage in existing tasks, and availability of training data. Among these are many under-studied languages, such as the Dravidian languages Tamil (spoken in southern India, Sri Lanka, and Singapore), Telugu and Malayalam (spoken mainly in southern India), and the Niger-Congo languages Swahili and Yoruba, spoken in Africa." } ]