dataset_name | description | prompt |
|---|---|---|
Argoverse-HD | [Argoverse-HD](https://www.cs.cmu.edu/~mengtial/proj/streaming/) is a dataset built for streaming object detection, which encompasses real-time object detection, video object detection, tracking, and short-term forecasting. It contains the video data from [Argoverse 1.1](https://www.argoverse.org/av1.html) with our own... | Provide a detailed description of the following dataset: Argoverse-HD |
TimeHetNet | This meta-dataset is composed of previously known datasets, and uses a dedicated script to read and sample small tasks of specified sizes and lengths.
The datasets included here are from:
PeekDB (https://github.com/RafaelDrumond/PeekDB)
Informer data-sets (https://github.com/zhouhaoyi/Informer202... | Provide a detailed description of the following dataset: TimeHetNet
PeekDB | Dataset from "PEEK-An LSTM Recurrent Network for Motion Classification from Sparse Data" | Provide a detailed description of the following dataset: PeekDB
Monash | Time Series Forecasting Repository containing datasets of related time series for global forecasting. | Provide a detailed description of the following dataset: Monash |
Councils in Action | Using Council Data Project infrastructures (https://councildataproject.org), we assemble longitudinal municipal council meeting transcript data. This initial release of the Councils in Action dataset includes over 350 meetings of the city councils of Seattle, Washington, and Portland, Oregon, and the county council of Kin... | Provide a detailed description of the following dataset: Councils in Action
MTic | Periodic tic sounds (T0 = 1 s) sampled at 16 kHz, each with a duration of nearly 10 s. | Provide a detailed description of the following dataset: MTic
STEW | This dataset consists of raw EEG data from 48 subjects who participated in a multitasking workload experiment utilizing the SIMKAP multitasking test. The subjects’ brain activity at rest was also recorded before the test and is included as well. The Emotiv EPOC device, with sampling frequency of 128Hz and 14 channels w... | Provide a detailed description of the following dataset: STEW |
Age and Gender | EEG signals from 60 users, aged between 6 and 55 years, have been recorded. Among them, 25 were female and 35 male. In general, all the participants were either school children or belonged to a socioeconomic cross section of the population, with no medical history. The EEG recordings were acq... | Provide a detailed description of the following dataset: Age and Gender
Replication Data for: "Empirical Analysis of EIP-1559: Transaction Fees, Waiting Time, and Consensus Security" | Transaction fee mechanism (TFM) is an essential component of a blockchain protocol. However, a systematic evaluation of the real-world impact of TFMs is still absent. Using rich data from the Ethereum blockchain, mempool, and exchanges, we study the effect of EIP-1559, one of the first deployed TFMs that depart from th... | Provide a detailed description of the following dataset: Replication Data for: "Empirical Analysis of EIP-1559: Transaction Fees, Waiting Time, and Consensus Security" |
Replication Data for: "Deciphering Bitcoin Blockchain Data by Cohort Analysis" Version 3.1 | Bitcoin is a peer-to-peer electronic payment system that has rapidly gained popularity in recent years. Usually, we need to query the complete history of Bitcoin blockchain data to acquire variables of economic meaning. This becomes increasingly difficult now with over 1.6 billion historical transactions on the Bitcoin blockchai... | Provide a detailed description of the following dataset: Replication Data for: "Deciphering Bitcoin Blockchain Data by Cohort Analysis" Version 3.1
ReferIt3D | ReferIt3D provides two large-scale and complementary visio-linguistic datasets: i) Sr3D, which contains 83.5K template-based utterances leveraging spatial relations among fine-grained object classes to localize a referred object in a scene, and ii) Nr3D which contains 41.5K natural, free-form, utterances collected by d... | Provide a detailed description of the following dataset: ReferIt3D |
Southern California Seismic Network Data | These files are supplementary material for “Generalized Seismic Phase Detection with Deep Learning” by Ross et al. (2018), BSSA (doi.org/10.1785/0120180080). The models were trained using keras and TensorFlow, and can be used with these libraries. The training dataset contains 4.5 million seismograms evenly split betwe... | Provide a detailed description of the following dataset: Southern California Seismic Network Data |
Wireless-Intelligence | Wireless-Intelligence is a database website provided for AI-based wireless communication research, in which each dataset consists of hundreds to thousands of channel samples in different forms. The data is available free of charge to researchers for non-commercial use.
## What is Wireless-Intelligence?
Wireless-Intelli... | Provide a detailed description of the following dataset: Wireless-Intelligence |
SUES-200 | Cross-view Image Dataset Across Drone and Satellite
- multi-height
- multi-scene | Provide a detailed description of the following dataset: SUES-200 |
USC-GRAD-STDdb | USC-GRAD-STDdb comprises 115 video segments containing more than 25,000 annotated frames at HD 720p resolution (≈1280x720), with small objects of interest whose pixel area ranges from 16 (≈4x4) to 256 (≈16x16). The length of the videos ranges from 150 to 500 frames. The size of every object is determined through the bounding ... | Provide a detailed description of the following dataset: USC-GRAD-STDdb
Synthetic Object Preference Adaptation Data | This dataset involves a 2D or 3D agent moving from a start to a goal pose while interacting with nearby objects. These objects can influence the position of the agent via attraction or repulsion forces, as well as its orientation via attraction to the object's orientation. This dataset can be used to pre-train general polic... | Provide a detailed description of the following dataset: Synthetic Object Preference Adaptation Data
SurveyBank | SurveyBank includes 9,321 high-quality survey papers in the domain of computer science. | Provide a detailed description of the following dataset: SurveyBank
Brightkite | Brightkite was once a location-based social networking service provider where users shared their locations by checking-in. The friendship network was collected using their public API, and consists of 58,228 nodes and 214,078 edges. The network is originally directed but the collectors have constructed a network with un... | Provide a detailed description of the following dataset: Brightkite |
Assembly101 | Assembly101 is a new procedural activity dataset featuring 4321 videos of people assembling and disassembling 101 "take-apart" toy vehicles. Participants work without fixed instructions, and the sequences feature rich and natural variations in action ordering, mistakes, and corrections. Assembly101 is the first multi-v... | Provide a detailed description of the following dataset: Assembly101 |
GBCU | GBCU is the first public dataset for Gallbladder Cancer identification from Ultrasound images. GBCU contains a total of 1255 (432 normal, 558 benign, and 265 malignant) annotated abdominal Ultrasound images collected from 218 patients. Of the 218 patients, 71, 100, and 47 were from the normal, benign, and malignant cla... | Provide a detailed description of the following dataset: GBCU |
SIDD-Image | This is the first image-based network intrusion detection dataset. This large-scale dataset includes network traffic protocol communication-based images from 15 observation locations across different countries in Asia. The dataset is used to identify two different types of anomalies from benign network traffic. ... | Provide a detailed description of the following dataset: SIDD-Image
VideoCC3M | We propose a new, scalable video-mining pipeline which transfers captioning supervision from image datasets to video and audio. We use this pipeline to mine paired video and captions, using the [Conceptual Captions3M](https://paperswithcode.com/dataset/conceptual-captions) image dataset as a seed dataset. Our resulting... | Provide a detailed description of the following dataset: VideoCC3M |
Cyclone Data | Archive of global tropical cyclone tracks from 1980 to May 2019. | Provide a detailed description of the following dataset: Cyclone Data
Ocean Drifters | From Schaub, Michael T., et al. "Random walks on simplicial complexes and the normalized hodge 1-laplacian." SIAM Review 62.2 (2020): 353-391.
This dataset comes from the Global Ocean Drifter Program available at the AOML/NOAA Drifter Data Assembly. While the entire dataset spans several decades of measurements, S... | Provide a detailed description of the following dataset: Ocean Drifters
BCI | The evaluation of human epidermal growth factor receptor 2 (HER2) expression is essential to formulate a precise treatment for breast cancer. The routine evaluation of HER2 is conducted with immunohistochemical techniques (IHC), which is very expensive. Therefore, we propose a breast cancer immunohistochemical (BCI) be... | Provide a detailed description of the following dataset: BCI |
SDF Shader Dataset | This dataset contains 63 signed distance function shaders collected mostly from Shadertoy.
Along with the shader source files, the dataset also provides point clouds of signed distance function samples in different distributions, available as a standalone zip file of `.npz` files: https://drive.google.com/file/d/1St... | Provide a detailed description of the following dataset: SDF Shader Dataset |
TEMPO | TEMPOral reasoning in video and language (TEMPO) is a dataset that consists of two parts: a dataset with real videos and template sentences (TEMPO - Template Language) which allows for controlled studies on temporal language, and a human language dataset which consists of temporal sentences annotated by humans (TEMPO -... | Provide a detailed description of the following dataset: TEMPO |
rc_49 | Includes several sets of synthetic stereo images labelled with grasp rectangles representing parallel-jaw grasps (Cornell-like format).
The set was introduced in the ICAR paper
"Automatic generation of realistic training data for learning parallel-jaw grasping from synthetic stereo images",
please re... | Provide a detailed description of the following dataset: rc_49 |
PLOD-unfiltered | PLOD: An Abbreviation Detection Dataset
This is the PLOD (unfiltered) Dataset published at LREC 2022. The dataset can help build sequence labelling models for the task of Abbreviation Detection. | Provide a detailed description of the following dataset: PLOD-unfiltered |
KAIST VIO Dataset | This is a dataset for testing the robustness of various VO/VIO methods, acquired on a real UAV. | Provide a detailed description of the following dataset: KAIST VIO Dataset
GRIT | The General Robust Image Task (GRIT) Benchmark is an evaluation-only benchmark for evaluating the performance and robustness of vision systems across multiple image prediction tasks, concepts, and data sources. GRIT hopes to encourage our research community to pursue the following research directions:
1. **General p... | Provide a detailed description of the following dataset: GRIT |
DoPose | DoPose (Dortmund 6D Pose dataset) is a dataset of highly cluttered and closely stacked objects. The dataset is saved in the BOP format. The dataset includes RGB images, Depth images, 6D Pose of objects, segmentation mask (all and visible), COCO Json annotation, camera transformations, and 3D model of all objects. The d... | Provide a detailed description of the following dataset: DoPose |
Abstractive Text Summarization from Il Post | IlPost dataset, containing news articles taken from IlPost.
There are two features:
* source: Input news article.
* target: Summary of the article. | Provide a detailed description of the following dataset: Abstractive Text Summarization from Il Post |
Abstractive Text Summarization from Fanpage | Fanpage dataset, containing news articles taken from Fanpage.
There are two features:
* source: Input news article.
* target: Summary of the article. | Provide a detailed description of the following dataset: Abstractive Text Summarization from Fanpage |
MLSum-it | The MLSum-it dataset is the translated version (Helsinki-NLP/opus-mt-es-it) of the Spanish portion of MLSum, containing news articles taken from BBC/mundo.
There are two features:
* source: Input news article.
* target: Summary of the article. | Provide a detailed description of the following dataset: MLSum-it |
Electromagnetic Calorimeter Shower Images | Each HDF5 file has the following structure:
`energy Dataset {100000, 1}`
`layer_0 Dataset {100000, 3, 96}`
`layer_1 Dataset {100000, 12, 12}`
`layer_2 Dataset {100000, 12, 6}`
`overflow Dataset {100000, 3}`
In practice... | Provide a detailed description of the following dataset: Electromagnetic Calorimeter Shower Images |
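The layout above can be sketched in code. The snippet below is a minimal illustration, assuming the standard `h5py` library: it builds a small in-memory HDF5 file with the same dataset names and per-event shapes (the 100000-event dimension is scaled down to 10, and the zero-valued contents are placeholders), then reads the dataset shapes back.

```python
import h5py
import numpy as np

N = 10  # small stand-in for the 100000 events per file described above

# Build an in-memory HDF5 file mirroring the documented layout,
# then collect the dataset shapes through the same handle.
with h5py.File("showers.h5", "w", driver="core", backing_store=False) as f:
    f.create_dataset("energy", data=np.zeros((N, 1)))
    f.create_dataset("layer_0", data=np.zeros((N, 3, 96)))
    f.create_dataset("layer_1", data=np.zeros((N, 12, 12)))
    f.create_dataset("layer_2", data=np.zeros((N, 12, 6)))
    f.create_dataset("overflow", data=np.zeros((N, 3)))
    shapes = {name: f[name].shape for name in f}

print(shapes)
```

With `driver="core"` and `backing_store=False` nothing touches disk; a real file from this dataset would simply be opened with `h5py.File(path, "r")` instead.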
RETOUCH | The goal of the challenge is to compare automated algorithms that are able to detect and segment various types of fluids on a common dataset of optical coherence tomography (OCT) volumes representing different retinal diseases, acquired with devices from different manufacturers. We made available a dataset of OCT volum... | Provide a detailed description of the following dataset: RETOUCH |
NExT-QA | **NExT-QA** is a VideoQA benchmark targeting the explanation of video contents. It challenges QA models to reason about the causal and temporal actions and understand the rich object interactions in daily activities. It supports both multi-choice and open-ended QA tasks. The videos are untrimmed and the questions usua... | Provide a detailed description of the following dataset: NExT-QA |
MMChat | - A large-scale Chinese multi-modal dialogue corpus (120.84K dialogues and 198.82K images).
- MMChat contains image-grounded dialogues collected from real conversations on social media.
- We manually annotate 100K dialogues from MMChat with the dialogue quality and whether the dialogues are related to the given imag... | Provide a detailed description of the following dataset: MMChat
MassiveText | **MassiveText** is a collection of large English-language text datasets from multiple sources: web pages, books, news articles, and code. The data pipeline includes text quality filtering, removal of repetitious text, deduplication of similar documents, and removal of documents with significant test-set overlap. Massiv... | Provide a detailed description of the following dataset: MassiveText |
Avicenna: Deductive Commonsense Reasoning | A syllogism is a common form of deductive reasoning that requires precisely two premises and one conclusion. The Avicenna corpus is a benchmark for syllogistic NLI and syllogistic NLG:
- syllogistic NLI: Identifying the possibility of inferring between pairs of inputted sentences.
- syllogistic NLG: Generating ... | Provide a detailed description of the following dataset: Avicenna: Deductive Commonsense Reasoning |
PLAD | PLAD is a dataset where sparse depth is provided by line-based visual SLAM to verify StructMDC. | Provide a detailed description of the following dataset: PLAD |
CER Smart Metering Project - Electricity Customer Behaviour Trial | The CER initiated the Smart Metering Project in 2007 with the purpose of undertaking trials to assess the performance of Smart Meters, their impact on consumers’ energy consumption and the economic case for a wider national rollout. It is a collaborative energy industry-wide project managed by the CER and actively invo... | Provide a detailed description of the following dataset: CER Smart Metering Project - Electricity Customer Behaviour Trial |
Unitail | The United Retail Datasets (Unitail) is a large-scale benchmark of basic visual tasks on products that challenges algorithms for detecting, reading, and matching. It offers the Unitail-Det, with 1.8M quadrilateral-shaped instances annotated; and the Unitail-OCR, containing 1454 product categories, 30k text regions, and... | Provide a detailed description of the following dataset: Unitail
HFFD | We build a hybrid fake face (HFF) dataset, which contains eight types of face images. For real face images, three types of face images are randomly selected from three open datasets. They are low-resolution face images from CelebA, high-resolution face images from CelebA-HQ, and face video frames from FaceForensics, re... | Provide a detailed description of the following dataset: HFFD |
MIMI dataset | Nowadays, new branches of research are proposing the use of non-traditional data sources for the study of migration trends in order to find an original methodology to answer open questions about cross-border human mobility.
The Multi-aspect Integrated Migration Indicators (MIMI) dataset is a new dataset to be exploite... | Provide a detailed description of the following dataset: MIMI dataset |
HiNER-original | This release provides a substantially sized, standard-abiding Hindi NER dataset containing 109,146 sentences and 2,220,856 tokens, annotated with 11 tags. | Provide a detailed description of the following dataset: HiNER-original
HiNER-collapsed | This release provides a substantially sized, standard-abiding Hindi NER dataset containing 109,146 sentences and 2,220,856 tokens, annotated with 3 collapsed tags (PER, LOC, ORG). | Provide a detailed description of the following dataset: HiNER-collapsed
PLOD-filtered | PLOD: An Abbreviation Detection Dataset
This is the PLOD (filtered) Dataset published at LREC 2022. The dataset can help build sequence labelling models for the task of Abbreviation Detection. | Provide a detailed description of the following dataset: PLOD-filtered |
Fig-QA | **Fig-QA** consists of 10256 examples of human-written creative metaphors that are paired as a Winograd schema. It can be used to evaluate the commonsense reasoning of models. The metaphors themselves can also be used as training data for other tasks, such as metaphor detection or generation.
Image source: [https://... | Provide a detailed description of the following dataset: Fig-QA |
Czech Subjectivity Dataset | Czech subjectivity dataset of 10k manually annotated subjective and objective sentences from movie reviews and descriptions. See the paper description https://arxiv.org/abs/2204.13915 | Provide a detailed description of the following dataset: Czech Subjectivity Dataset |
Twitter-COMMs | Detecting out-of-context media, such as "mis-captioned" images on Twitter, is a relevant problem, especially in domains of high public significance. Twitter-COMMs is a large-scale multimodal dataset with 884k tweets relevant to the topics of Climate Change, COVID-19, and Military Vehicles. This dataset can be used to d... | Provide a detailed description of the following dataset: Twitter-COMMs |
https://osf.io/73c4q/ | Briganti et al. 2018 | Provide a detailed description of the following dataset: https://osf.io/73c4q/ |
https://osf.io/mj5wa/ | Armour et al. 2017 | Provide a detailed description of the following dataset: https://osf.io/mj5wa/ |
Streetscore | Paper abstract:
Social science literature has shown a strong connection between the visual appearance of a city’s neighborhoods and the behavior and health of its citizens. Yet, this research is limited by the lack of methods that can be used to quantify the appearance of streetscapes across cities or at high eno... | Provide a detailed description of the following dataset: Streetscore
HowMany-QA | HowMany-QA is an object counting dataset. It is taken from the counting-specific union of VQA 2.0 (Goyal et al., 2017) and Visual Genome QA (Krishna et al., 2016). | Provide a detailed description of the following dataset: HowMany-QA
TorWIC | TorWIC is the dataset discussed in POCD: Probabilistic Object-Level Change Detection and Volumetric Mapping in Semi-Static Scenes. The purpose of this dataset is to evaluate the map maintenance capabilities in a warehouse environment undergoing incremental changes. This dataset is collected in a Clearpath Robotics fac... | Provide a detailed description of the following dataset: TorWIC
Identity Access Management dataset | We release 280 synthetic IAM graphs generated using IAM graphs of commercial companies.
Specifically, we vary the number of nodes, but keep graph density as is, i.e. in the range of 0.259 ± 0.198 (avg ± std).
To generate a synthetic graph, we first
sample the number of users and datastores from uniform distribution... | Provide a detailed description of the following dataset: Identity Access Management dataset |
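The generation recipe above can be sketched as follows. This is an illustrative guess, not the authors' exact procedure: the node-count ranges, the bipartite user-to-datastore edge structure, and the function name `synth_iam_graph` are all assumptions; only the uniform sampling of node counts and the fixed target density come from the description.

```python
import random

def synth_iam_graph(target_density=0.259, seed=0):
    """Illustrative sketch: a random bipartite user->datastore graph
    with uniformly sampled node counts and a fixed edge density."""
    rng = random.Random(seed)
    n_users = rng.randint(10, 100)      # range is a placeholder assumption
    n_datastores = rng.randint(5, 50)   # range is a placeholder assumption

    # Density of a bipartite graph: |E| / (n_users * n_datastores),
    # so the edge count that hits the target density is:
    n_edges = round(target_density * n_users * n_datastores)
    edges = set()
    while len(edges) < n_edges:
        edges.add((rng.randrange(n_users), rng.randrange(n_datastores)))
    return n_users, n_datastores, sorted(edges)
```

Holding `target_density` fixed while `n_users` and `n_datastores` vary is what keeps the density near the reported 0.259 across graphs of different sizes.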
GMD-12 | A dataset for medical consultation dialogues.
See our related paper for more details: https://arxiv.org/pdf/2204.13953.pdf | Provide a detailed description of the following dataset: GMD-12 |
r/transprogrammer survey results | Questions regarding computer science education for members of the r/transprogrammer subreddit. Used for the paper "Why The Trans Programmer?" by Skye Kychenthal. | Provide a detailed description of the following dataset: r/transprogrammer survey results
ErAConD | ErAConD is a novel GEC dataset consisting of parallel original and corrected utterances drawn from open-domain chatbot conversations.
We collected 186 dialogs containing 1735 user utterance turns of open-domain dialog data by deploying BlenderBot on Amazon Mechanical Turk (AMT) via LEGOEval.
This dataset is, to o... | Provide a detailed description of the following dataset: ErAConD |
NLU Evaluation Corpora | This project is a collection of three corpora which can be used for evaluating chatbots or other conversational interfaces. Two of the corpora were extracted from StackExchange, one from a Telegram chatbot. | Provide a detailed description of the following dataset: NLU Evaluation Corpora |
OntoRock | OntoRock is a benchmark for evaluating the robustness of existing NER models via a systematic evaluation protocol. | Provide a detailed description of the following dataset: OntoRock |
UAGD | The source images of UAGD are manually and very carefully selected from the APPA-REAL, UTKFace and AgeDB datasets: only face images that have large poses, contain noise pixels, bear various expressions, or appear under different illuminations were chosen. We also double clean and remove the images that hav... | Provide a detailed description of the following dataset: UAGD
VSR | The Visual Spatial Reasoning (VSR) corpus is a collection of caption-image pairs with true/false labels. Each caption describes the spatial relation of two individual objects in the image, and a vision-language model (VLM) needs to judge whether the caption is correctly describing the image (True) or not (False). | Provide a detailed description of the following dataset: VSR |
Multivariate-Mobility-Paris | The original dataset was provided by Orange telecom in France, which contains anonymized and aggregated human mobility data. The Multivariate-Mobility-Paris dataset comprises information from 2020-08-24 to 2020-11-04 (72 days during the COVID-19 pandemic), with time granularity of 30 minutes and spatial granularity of ... | Provide a detailed description of the following dataset: Multivariate-Mobility-Paris |
DrugEHRQA | Contains over 70,000 question-answer pairs from both structured tables and unstructured notes from a publicly available Electronic Health Record (EHR). | Provide a detailed description of the following dataset: DrugEHRQA |
WikiMulti | **WikiMulti** is a dataset for cross-lingual summarization based on Wikipedia articles in 15 languages. | Provide a detailed description of the following dataset: WikiMulti
SYMON | Contains 5,193 video summaries of popular movies and TV series. SyMoN captures naturalistic storytelling videos made by human creators for a human audience, and has higher story coverage and more frequent mental-state references than similar video-language story datasets. | Provide a detailed description of the following dataset: SYMON
COVMis-Stance | **COVMis-Stance** is a stance detection dataset for COVID-19 misinformation. It consists of fake news and claims related to COVID. Fake news was collected from articles on fact-checking sites, and fake claims from the WHO's official Twitter account. It contains 2631 tweets annotated for stance towards 111 COVID19 misinformatio... | Provide a detailed description of the following dataset: COVMis-Stance
PQuAD | Persian Question Answering Dataset (PQuAD) is a crowdsourced reading comprehension dataset on Persian Wikipedia articles. It includes 80,000 questions along with their answers, with 25% of the questions being adversarially unanswerable. | Provide a detailed description of the following dataset: PQuAD |
VCSL | VCSL (Video Copy Segment Localization) is a new comprehensive segment-level annotated video copy dataset. Compared with existing copy detection datasets restricted by either video-level annotation or small-scale, VCSL not only has two orders of magnitude more segment level labelled data, with 160k realistic video copy ... | Provide a detailed description of the following dataset: VCSL |
ViViD++ | A dataset capturing diverse visual data formats that target varying luminance conditions, and was recorded from alternative vision sensors, by handheld or mounted on a car, repeatedly in the same space but in different conditions. | Provide a detailed description of the following dataset: ViViD++ |
MuCGEC | MuCGEC is a multi-reference multi-source evaluation dataset for Chinese Grammatical Error Correction (CGEC), consisting of 7,063 sentences collected from three different Chinese-as-a-Second-Language (CSL) learner sources. Each sentence has been corrected by three annotators, and their corrections are meticulously revie... | Provide a detailed description of the following dataset: MuCGEC |
Custom Spatio-Temporal Action Video Dataset | This spatio-temporal actions dataset for video understanding consists of 4 parts: original videos, cropped videos, video frames, and annotation files. This dataset uses a proposed new multi-person annotation method of spatio-temporal actions. First, we use ffmpeg to crop the videos and extract their frames; then we use yolov5... | Provide a detailed description of the following dataset: Custom Spatio-Temporal Action Video Dataset
Task2Dial | A novel dataset of document-grounded task-based dialogues, where an Information Giver (IG) provides instructions (by consulting a document) to an Information Follower (IF), so that the latter can successfully complete the task. In this unique setting, the IF can ask clarification questions which may not be grounded in ... | Provide a detailed description of the following dataset: Task2Dial |
3MASSIV | A multilingual, multimodal and multi-aspect, expertly-annotated dataset of diverse short videos extracted from the short-video social media platform Moj. 3MASSIV comprises 50k short videos (~20 seconds average duration) and 100K unlabeled videos in 11 different languages and captures popular short video trends like pr... | Provide a detailed description of the following dataset: 3MASSIV
PeerSum | **PeerSum** is a new MDS dataset using peer reviews of scientific publications. The dataset differs from the existing MDS datasets in that summaries (i.e., the meta-reviews) are highly abstractive and they are real summaries of the source documents.
In PeerSum, we have reviews (with scores), comments and responses ... | Provide a detailed description of the following dataset: PeerSum |
HOI4D | A large-scale 4D egocentric dataset with rich annotations, to catalyze the research of category-level human-object interaction. HOI4D consists of 2.4M RGB-D egocentric video frames over 4000 sequences collected by 4 participants interacting with 800 different object instances from 16 categories over 610 different indoo... | Provide a detailed description of the following dataset: HOI4D
TASTEset | **TASTEset** Recipe Dataset and Food Entities Recognition is a dataset for Named Entity Recognition (NER) which consists of 700 recipes with more than 13,000 entities to extract. | Provide a detailed description of the following dataset: TASTEset |
Pirá | A large set of questions and answers about the ocean and the Brazilian coast both in Portuguese and English. Pirá is a crowdsourced question answering (QA) dataset on the ocean and the Brazilian coast designed for reading comprehension.
The dataset contains 2261 QA sets, as well as the texts associated with them. E... | Provide a detailed description of the following dataset: Pirá |
Danish Airs and Grounds | Danish Airs and Grounds (DAG) is a large collection of street-level and aerial images. Its main challenge lies in the extreme viewing-angle difference between query and reference images, with consequent changes in illumination and perspective. The dataset is larger and more diverse than current publ... | Provide a detailed description of the following dataset: Danish Airs and Grounds
SOMOS | The SOMOS dataset is a large-scale mean opinion scores (MOS) dataset consisting of solely neural text-to-speech (TTS) samples. It can be employed to train automatic MOS prediction systems focused on the assessment of modern synthesizers, and can stimulate advancements in acoustic model evaluation. It consists of 20K sy... | Provide a detailed description of the following dataset: SOMOS |
MUSIC-AVQA | The large-scale MUSIC-AVQA dataset of musical performance contains 45,867 question-answer pairs, distributed over 9,288 videos totalling more than 150 hours. All QA pairs are divided into 3 modal scenarios, which contain 9 question types and 33 question templates. Finally, as an open-ended problem of our AVQA tasks, all 42 ki... | Provide a detailed description of the following dataset: MUSIC-AVQA
Winoground | Winoground is a dataset for evaluating the ability of vision and language models to conduct visio-linguistic compositional reasoning. Given two images and two captions, the goal is to match them correctly -- but crucially, both captions contain a completely identical set of words, only in a different order. The dataset... | Provide a detailed description of the following dataset: Winoground |
MCoNaLa | **MCoNaLa** is a multilingual dataset to benchmark code generation from natural language commands extending beyond English. Modeled on the methodology of the English Code/Natural Language Challenge (CoNaLa) dataset, the authors annotated a total of 896 NL-code pairs in three languages: Spanish, Japanese, and Russ... | Provide a detailed description of the following dataset: MCoNaLa
Kinetics-GEB+ | **Kinetics-GEB+** (Generic Event Boundary Captioning, Grounding and Retrieval) is a dataset that consists of over 170k boundaries associated with captions describing status changes in the generic events in 12K videos. | Provide a detailed description of the following dataset: Kinetics-GEB+ |
DeToxy | **DeToxy** is a publicly available toxicity-annotated dataset for the English language. DeToxy is sourced from various openly available speech databases and consists of over 2 million utterances. The dataset acts as a benchmark for the relatively new and underexplored Spoken Language Processing task of detecting tox... | Provide a detailed description of the following dataset: DeToxy
GigaST | GigaST is a large-scale pseudo speech translation (ST) corpus. The corpus was created by translating the text in GigaSpeech, an English ASR corpus, into German and Chinese. The training set is translated by a strong machine translation system and the test set was translated by human. ST models trained with an addition ... | Provide a detailed description of the following dataset: GigaST |
MagicData-RAMC | The MagicData-RAMC corpus contains 180 hours of conversational speech data recorded from native speakers of Mandarin Chinese over mobile phones with a sampling rate of 16 kHz. The dialogs are classified into 15 diversified domains and tagged with topic labels, ranging from science and technology to ordin... | Provide a detailed description of the following dataset: MagicData-RAMC
Animal Kingdom | Animal Kingdom is a large and diverse dataset that provides multiple annotated tasks to enable a more thorough understanding of natural animal behaviors. The wild animal footage used in the dataset records different times of the day in an extensive range of environments containing variations in backgrounds, viewpoints,... | Provide a detailed description of the following dataset: Animal Kingdom |
ROAD | ROAD is designed to test an autonomous vehicle's ability to detect road events, defined as triplets composed of an active agent, the action(s) it performs, and the corresponding scene locations. ROAD comprises videos originally from the Oxford RobotCar Dataset, annotated with bounding boxes showing the location in the i... | Provide a detailed description of the following dataset: ROAD
BEHAVE | BEHAVE is a full-body human-object interaction dataset with multi-view RGBD frames and corresponding 3D SMPL and object fits, along with the annotated contacts between them. The dataset contains ~15k frames at 5 locations with 8 subjects performing a wide range of interactions with 20 common objects. | Provide a detailed description of the following dataset: BEHAVE
Bamboo | Bamboo Dataset is a mega-scale and information-dense dataset for both classification and detection pre-training. It is built by integrating **24** public datasets (e.g. **ImageNet**, **Places365**, **Objects365**, **OpenImages**) and adding new annotations through **active learning**. Bamboo has 69M image classificat... | Provide a detailed description of the following dataset: Bamboo
Visual Affordance Learning | A large-scale multi-view RGBD visual affordance learning dataset: a benchmark of 47,210 RGBD images from 37 object categories, annotated with 15 visual affordance categories, and 35 cluttered/complex scenes with different objects and multiple affordances. To the best of our knowledge, this is the first ever and the large... | Provide a detailed description of the following dataset: Visual Affordance Learning
PETCI | PETCI is a Parallel English Translation dataset of Chinese Idioms, collected from an idiom dictionary and Google and DeepL translation. PETCI contains 4,310 Chinese idioms with 29,936 English translations. These translations capture diverse translation errors and paraphrase strategies. | Provide a detailed description of the following dataset: PETCI |
Kobest | **Kobest** is a benchmark for Korean language reasoning. It consists of five Korean-language downstream tasks. Professional Korean linguists designed the tasks that require advanced Korean linguistic knowledge. | Provide a detailed description of the following dataset: Kobest |
Sen4AgriNet | A Sentinel-2-based time-series, multi-country benchmark dataset tailored for agricultural monitoring applications with Machine and Deep Learning. The Sen4AgriNet dataset is annotated from farmer declarations collected via the Land Parcel Identification System (LPIS), harmonizing country-wide labels. Sen4AgriNet is the o... | Provide a detailed description of the following dataset: Sen4AgriNet
YouTube-GDD | The YouTube Gun Detection Dataset (YouTube-GDD) is collected from 343 high-definition YouTube videos and contains 5,000 well-chosen images, in which 16,064 gun instances and 9,046 person instances are annotated. Compared to other datasets, YouTube-GDD is "dynamic", containing rich contextual information. | Provide a detailed description of the following dataset: YouTube-GDD
SynWoodScape | **SynWoodScape** is a synthetic version of the WoodScape surround-view dataset, covering many of its weaknesses and extending it. WoodScape comprises four surround-view cameras and nine tasks, including segmentation, depth estimation, 3D bounding box detection, and a novel soiling detection task. Semantic annotation of 40 classes at t... | Provide a detailed description of the following dataset: SynWoodScape