
Dataset Card for MIMIC

Motivation

  1. For what purpose was the dataset created? Was there a specific task in mind? Was there a specific gap that needed to be filled? Please provide a description.

The contributions of our dataset to the vision community are as follows: (1) We release a pretraining dataset of 3.1M image pairs drawn from diverse sources, including videos, 3D scans, street views, and … , for downstream dense prediction tasks. (2) The dataset can be scaled quickly thanks to the proposed data curation strategy, which requires no annotations beyond the images themselves.

  2. Who created the dataset (e.g., which team, research group) and on behalf of which entity (e.g., company, institution, organization)? Authors: Kalyani Marathe* (1,2), Mahtab Bigverdi* (1,2), Nishat Khan (1), Tuhin Kundu, Aniruddha Kembhavi (2), Linda G. Shapiro (1), Ranjay Krishna (1,2). (1) University of Washington, (2) Allen Institute for Artificial Intelligence.

  3. Who funded the creation of the dataset? If there is an associated grant, please provide the name of the grantor and the grant name and number.

This research is sponsored by a grant from Amazon Technologies, Inc., as part of the Amazon-UW Science HUB.

  4. Any other comments? No.

Composition

  1. What do the instances that comprise the dataset represent (e.g., documents, photos, people, countries)? Are there multiple types of instances (e.g., movies, users, and ratings; people and interactions between them; nodes and edges)? Please provide a description.

All of the instances in the dataset are images. Common themes include street locations, objects, indoor scenes, and frozen people from the Mannequin Challenge. From each scene or object, we extract pairs of images with at least 50% co-visibility.

  2. How many instances are there in total (of each type, if appropriate)? There are 3.1 million image pairs (6.2 million images in total).

  3. Does the dataset contain all possible instances or is it a sample (not necessarily random) of instances from a larger set? If the dataset is a sample, then what is the larger set? Is the sample representative of the larger set (e.g., geographic coverage)? If so, please describe how this representativeness was validated/verified. If it is not representative of the larger set, please describe why not (e.g., to cover a more diverse range of instances, because instances were withheld or unavailable).

The dataset is a sample: we sampled our image pairs from publicly licensed datasets, and it contains all instances for which we have licenses.

  4. What data does each instance consist of? “Raw” data (e.g., unprocessed text or images) or features? In either case, please provide a description.

Instances are images resized to 224 × 224 pixels.

  5. Is there a label or target associated with each instance? If so, please provide a description. There are no labels associated with the instances. However, for each image pair we provide a dictionary of its matching patches.
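For illustration, the sketch below shows how a consumer might load one pair together with its patch-correspondence dictionary. The directory layout, file names, and pickle serialization here are assumptions made for the example; consult the MIMIC repository for the actual format.

```python
import pickle
from PIL import Image

# Hypothetical paths and file names -- the real layout is documented in
# the MIMIC repository (https://github.com/RAIVNLab/MIMIC/).
pair_dir = "mimic/scannet/scene0000_00/pair_00000"

img_a = Image.open(f"{pair_dir}/a.jpg")  # 224 x 224 image
img_b = Image.open(f"{pair_dir}/b.jpg")

# Assumed serialization: a pickled dict mapping each patch index in
# image A to the index of its matching patch in image B.
with open(f"{pair_dir}/matches.pkl", "rb") as f:
    matches = pickle.load(f)

for patch_a, patch_b in matches.items():
    print(f"patch {patch_a} in A <-> patch {patch_b} in B")
```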

  6. Is any information missing from individual instances? If so, please provide a description, explaining why this information is missing (e.g., because it was unavailable).

No information is missing.

  7. Are relationships between individual instances made explicit (e.g., users’ movie ratings, social network links)? If so, please describe how these relationships are made explicit.

Yes. The CSV file listing all file paths includes a metadata column that records which original dataset each image pair comes from. In addition, within each source dataset, folder names identify the video or 3D scene from which the image-pair subfolders were created.
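As a minimal sketch, a consumer could use that column to subset the pairs by source; the file name and column names below are assumptions, so check the released CSV header.

```python
import pandas as pd

# File and column names are illustrative -- inspect the released CSV header.
pairs = pd.read_csv("mimic_pairs.csv")

# The metadata column records the original source dataset of each pair,
# so consumers can subset by source, e.g. only Mannequin Challenge pairs.
mannequin = pairs[pairs["metadata"] == "mannequin_challenge"]
print(f"{len(mannequin)} pairs sourced from the Mannequin Challenge")
```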

  8. Are there any errors, sources of noise, or redundancies in the dataset? If so, please provide a description.

Matching patches and overlap measurements are noisy because of the approximate matching algorithm, but at this scale the noise level is acceptable for pretraining.

  9. Is the dataset self-contained, or does it link to or otherwise rely on external resources (e.g., websites, tweets, other datasets)? If it links to or relies on external resources, a) are there guarantees that they will exist, and remain constant, over time; b) are there official archival versions of the complete dataset (i.e., including the external resources as they existed at the time the dataset was created); c) are there any restrictions (e.g., licenses, fees) associated with any of the external resources that might apply to a dataset consumer? Please provide descriptions of all external resources and any restrictions associated with them, as well as links or other access points, as appropriate.

The dataset is self-contained.

  10. Does the dataset contain data that might be considered confidential (e.g., data that is protected by legal privilege or by doctor-patient confidentiality, data that includes the content of individuals’ non-public communications)? If so, please provide a description. No.

  11. Does the dataset contain data that, if viewed directly, might be offensive, insulting, threatening, or might otherwise cause anxiety? No; all images come from publicly released datasets.

  12. Does the dataset identify any subpopulations (e.g., by age, gender)? If so, please describe how these subpopulations are identified and provide a description of their respective distributions within the dataset. Subpopulations are not explicitly labeled, but the dataset contains people from all subpopulations engaging in the Mannequin Challenge.

  13. Is it possible to identify individuals (i.e., one or more natural persons), either directly or indirectly (i.e., in combination with other data) from the dataset? If so, please describe how. Yes. We collected images of people engaging in the Mannequin Challenge from a public dataset.

  14. Does the dataset contain data that might be considered sensitive in any way (e.g., data that reveals race or ethnic origins, sexual orientations, religious beliefs, political opinions or union memberships, or locations; financial or health data; biometric or genetic data; forms of government identification, such as social security numbers; criminal history)? If so, please provide a description. This dataset does not blur people’s faces, so it may reveal race. However, images containing people were collected from the publicly available Mannequin Challenge dataset.

  15. Any other comments? No.

Collection Process

  1. How was the data associated with each instance acquired? Was the data directly observable (e.g., raw text, movie ratings), reported by subjects (e.g., survey responses), or indirectly inferred/derived from other data (e.g., part-of-speech tags, model-based guesses for age or language)? If the data was reported by subjects or indirectly inferred/derived from other data, was the data validated/verified? If so, please describe how.

The data is indirectly derived from other public datasets, such as video and 3D-scan collections.

  2. What mechanisms or procedures were used to collect the data (e.g., hardware apparatuses or sensors, manual human curation, software programs, software APIs)? How were these mechanisms or procedures validated? The images in the dataset were extracted from publicly licensed datasets by a software pipeline (see Preprocessing below).

  3. If the dataset is a sample from a larger set, what was the sampling strategy (e.g., deterministic, probabilistic with specific sampling probabilities)? We sampled pairs of images from different scenes (videos/3D scans) of public datasets, keeping pairs with 50 to 75% co-visibility.

  4. Who was involved in the data collection process (e.g., students, crowdworkers, contractors) and how were they compensated (e.g., how much were crowdworkers paid)? The authors only.

  5. Over what timeframe was the data collected? Does this timeframe match the creation timeframe of the data associated with the instances (e.g., recent crawl of old news articles)? If not, please describe the timeframe in which the data associated with the instances was created.

The public imaging datasets we used were created over a wide range of years, up to 2022.

  6. Were any ethical review processes conducted (e.g., by an institutional review board)? If so, please provide a description of these review processes, including the outcomes, as well as a link or other access point to any supporting documentation. If the dataset does not relate to people, you may skip the remaining questions in this section.

No. Some image pairs contain people; these come from the Mannequin Challenge, which is a licensed public dataset.

  7. Did you collect the data from the individuals in question directly, or obtain it via third parties or other sources (e.g., websites)?

We obtained the data via a third party: details of the images with people are in the public Mannequin Challenge dataset from Google.

  8. Were the individuals in question notified about the data collection? If so, please describe (or show with screenshots or other information) how notice was provided, and provide a link or other access point to, or otherwise reproduce, the exact language of the notification itself.

Details of the images with people are in the public Mannequin Challenge dataset from Google.

  9. Did the individuals in question consent to the collection and use of their data? If so, please describe (or show with screenshots or other information) how consent was requested and provided, and provide a link or other access point to, or otherwise reproduce, the exact language to which the individuals consented. Details of the images with people are in the public Mannequin Challenge dataset from Google.

  10. If consent was obtained, were the consenting individuals provided with a mechanism to revoke their consent in the future or for certain uses? If so, please provide a description, as well as a link or other access point to the mechanism (if appropriate).

Details of the images with people are in the public Mannequin Challenge dataset from Google.

  11. Has an analysis of the potential impact of the dataset and its use on data subjects (e.g., a data protection impact analysis) been conducted? If so, please provide a description of this analysis, including the outcomes, as well as a link or other access point to any supporting documentation.

No formal analysis has been conducted; in this dataset we use available public datasets and gather image pairs with a specified amount of co-visibility.

  12. Any other comments? No.

Preprocessing / Cleaning / Labeling

  1. Was any preprocessing / cleaning / labeling of the data done (e.g., discretization or bucketing, tokenization, part-of-speech tagging, SIFT feature extraction, removal of instances, processing of missing values)? If so, please provide a description. If not, you may skip the remaining questions in this section. We used SIFT keypoint detection and homography estimation to define the co-visibility metric between image pairs, and we accepted or discarded pairs extracted from the available public datasets based on this metric. Finally, we resized all images to 224 × 224.
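To make this step concrete, here is a minimal sketch of a SIFT-plus-homography co-visibility check using OpenCV. It illustrates the idea rather than reproducing the exact MIMIC pipeline; the authoritative implementation lives in the project’s GitHub repository.

```python
import cv2
import numpy as np

def covisibility(img_a: np.ndarray, img_b: np.ndarray, ratio: float = 0.75) -> float:
    """Rough fraction of img_a visible in img_b (0.0 to 1.0); a sketch,
    not the exact MIMIC metric."""
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)
    if des_a is None or des_b is None:
        return 0.0

    # Keep confident SIFT matches via Lowe's ratio test.
    good = []
    for pair in cv2.BFMatcher().knnMatch(des_a, des_b, k=2):
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append(pair[0])
    if len(good) < 4:  # a homography needs at least 4 correspondences
        return 0.0

    src = np.float32([kp_a[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_b[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    if H is None:
        return 0.0

    # Project img_a's corners into img_b and measure how much of the
    # projected quadrilateral overlaps img_b's frame.
    h, w = img_a.shape[:2]
    corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]]).reshape(-1, 1, 2)
    warped = cv2.perspectiveTransform(corners, H).reshape(-1, 2)
    hb, wb = img_b.shape[:2]
    frame_b = np.float32([[0, 0], [wb, 0], [wb, hb], [0, hb]])
    area, _ = cv2.intersectConvexConvex(warped.astype(np.float32), frame_b)
    return float(area) / (w * h)
```

Under the sampling condition described in the Collection Process section, a candidate pair would then be kept when the returned fraction falls in the 50 to 75% range.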

  2. Was the “raw” data saved in addition to the preprocessed/cleaned/labeled data (e.g., to support unanticipated future uses)? If so, please provide a link or other access point to the “raw” data. No.

  3. Is the software that was used to preprocess/clean/label the data available? If so, please provide a link or other access point. Yes; the algorithm is provided in our project’s GitHub repo (https://github.com/RAIVNLab/MIMIC/).

  4. Is there anything about the composition of the dataset or the way it was collected and preprocessed/cleaned/labeled that might impact future uses? For example, is there anything that a dataset consumer might need to know to avoid uses that could result in unfair treatment of individuals or groups (e.g., stereotyping, quality of service issues) or other risks or harms (e.g., legal risks, financial harms)? If so, please provide a description. Is there anything a dataset consumer could do to mitigate these risks or harms? No.

  5. Are there tasks for which the dataset should not be used? If so, please provide a description. No.

  6. Any other comments? No.

Distribution

  1. Will the dataset be distributed to third parties outside of the entity (e.g., company, institution, organization) on behalf of which the dataset was created? If so, please provide a description. The dataset will be available to the research community.

  2. How will the dataset be distributed (e.g., tarball on website, API, GitHub)? Does the dataset have a digital object identifier (DOI)? The dataset is available at https://github.com/RAIVNLab/MIMIC/.

  3. When will the dataset be distributed? The dataset has already been released.

  4. Will the dataset be distributed under a copyright or other intellectual property (IP) license, and/or under applicable terms of use (ToU)? If so, please describe this license and/or ToU, and provide a link or other access point to, or otherwise reproduce, any relevant licensing terms or ToU, as well as any fees associated with these restrictions.

Yes. The license agreement and terms of use for the dataset can be found at https://github.com/RAIVNLab/MIMIC/blob/main/LICENSE.

  5. Have any third parties imposed IP-based or other restrictions on the data associated with the instances? If so, please describe these restrictions, and provide a link or other access point to, or otherwise reproduce, any relevant licensing terms, as well as any fees associated with these restrictions. No.

  6. Do any export controls or other regulatory restrictions apply to the dataset or to individual instances? If so, please describe these restrictions, and provide a link or other access point to, or otherwise reproduce, any supporting documentation. No.

  7. Any other comments? No.

Maintenance

  1. Who will be supporting/hosting/maintaining the dataset? The dataset will be hosted on Hugging Face.

  2. How can the owner/curator/manager of the dataset be contacted (e.g., email address)? Please email kmarathe@cs.washington.edu.

  3. Is there an erratum? If so, please provide a link or other access point. No.

  4. Will the dataset be updated (e.g., to correct labeling errors, add new instances, delete instances)? If so, please describe how often, by whom, and how updates will be communicated to dataset consumers (e.g., mailing list, GitHub)? We may add new pairs but will not remove any.

  5. If the dataset relates to people, are there applicable limits on the retention of the data associated with the instances (e.g., were the individuals in question told that their data would be retained for a fixed period of time and then deleted)? If so, please describe these limits and explain how they will be enforced. There are no limits on data retention.

  6. Will older versions of the dataset continue to be supported/hosted/maintained? If so, please describe how. If not, please describe how its obsolescence will be communicated to dataset consumers. Yes; we will keep CSV files of the data paths for all versions.

  7. If others want to extend/augment/build on/contribute to the dataset, is there a mechanism for them to do so? If so, please provide a description. Will these contributions be validated/verified? If so, please describe how. If not, why not? Is there a process for communicating/distributing these contributions to dataset consumers? If so, please provide a description. Yes. With the algorithm provided, anyone can generate image pairs with a desired co-visibility from different data sources.

  8. Any other comments? No.
