---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
languages:
- en
licenses:
- other
multilinguality:
- monolingual
paperswithcode_id: something-something
pretty_name: Something Something v2
size_categories:
- 100K<n<1M
source_datasets:
- original
task_categories:
- other
task_ids:
- other
---
Dataset Card for Something Something v2
Table of Contents
- Table of Contents
- Dataset Description
- Dataset Structure
- Dataset Creation
- Considerations for Using the Data
- Additional Information
Dataset Description
- Homepage: https://developer.qualcomm.com/software/ai-datasets/something-something
- Repository: [More Information Needed]
- Paper: https://arxiv.org/abs/1706.04261
- Leaderboard: [More Information Needed]
- Point of Contact: [More Information Needed]
Dataset Summary
The Something-Something dataset (version 2) is a collection of 220,847 labeled video clips of humans performing pre-defined, basic actions with everyday objects. It is designed to train machine learning models in fine-grained understanding of human hand gestures like putting something into something, turning something upside down and covering something with something.
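A minimal usage sketch with the Hugging Face `datasets` library; the dataset identifier below is a placeholder and should be replaced by the actual path under which this loader is published:

```python
from datasets import load_dataset

# Placeholder identifier; substitute the actual Hub path of this dataset.
dataset = load_dataset("something-something-v2")

print(dataset)                 # available splits and their sizes
sample = dataset["train"][0]   # one labeled video clip
print(sample)
```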
Supported Tasks and Leaderboards
action-classification: The goal of this task is to classify which of the 174 pre-defined actions is performed in a video clip (single-label classification). Results for this task are tracked on Papers with Code.
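As an illustration of the task, top-1 accuracy over the action classes can be computed as follows; `predictions` and `references` are placeholders for a model's predicted class indices and the ground-truth labels:

```python
def top1_accuracy(predictions, references):
    """Fraction of clips whose predicted action class matches the ground truth."""
    correct = sum(int(p == r) for p, r in zip(predictions, references))
    return correct / len(references)

# Hypothetical values: class indices for three clips.
predictions = [12, 57, 3]
references = [12, 57, 101]
print(top1_accuracy(predictions, references))  # 0.666...
```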
Languages
The annotations in the dataset are in English.
Dataset Structure
Data Instances
Each example pairs a video clip with the crowd-written caption describing the action being performed and the action template that caption instantiates.
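A minimal sketch of a single annotation, following the field names of the raw Something-Something v2 label files; the values are illustrative, not an actual record, and the feature names exposed by this loader may differ:

```python
# Illustrative only; not an actual record from the dataset.
example = {
    "id": "100000",
    "label": "putting a cup into a box",
    "template": "Putting [something] into [something]",
    "placeholders": ["a cup", "a box"],
}
```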
Data Fields
Each annotated clip provides the following fields (the names below are those of the raw Something-Something v2 annotation files; the feature names exposed by this loader may differ slightly):

- id (str): Unique identifier of the video clip.
- label (str): The crowd-written caption of the clip, i.e. the action template with its object placeholders filled in.
- template (str): The action template the clip instantiates, with the objects left as bracketed placeholders; there are 174 templates (action classes) in total.
- placeholders (List[str]): The object names the crowd worker substituted into the template.

The videos themselves are distributed as .webm files with a height of 240 px.
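The relationship between the template, the placeholders and the caption can be illustrated with a small helper that fills the bracketed slots in order (a sketch, assuming the bracketed [something] convention of the raw annotations):

```python
import re

def fill_template(template: str, placeholders: list[str]) -> str:
    """Replace each bracketed slot in the template with the next placeholder."""
    slots = iter(placeholders)
    return re.sub(r"\[.*?\]", lambda _: next(slots), template)

print(fill_template("Putting [something] into [something]", ["a cup", "a box"]))
# -> "Putting a cup into a box"
```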
Data Splits
|               | train   | validation | test   |
|---------------|---------|------------|--------|
| # of examples | 168,913 | 24,777     | 27,157 |

The ground-truth labels for the test split are withheld.
Dataset Creation
Curation Rationale
Source Data
Initial Data Collection and Normalization
Who are the source language producers?
Annotations
Annotation process
Who are the annotators?
Personal and Sensitive Information
Considerations for Using the Data
Social Impact of Dataset
[More Information Needed]
Discussion of Biases
[More Information Needed]
Other Known Limitations
[More Information Needed]
Additional Information
Dataset Curators
Licensing Information
Citation Information
@inproceedings{goyal2017something,
title={The" something something" video database for learning and evaluating visual common sense},
author={Goyal, Raghav and Ebrahimi Kahou, Samira and Michalski, Vincent and Materzynska, Joanna and Westphal, Susanne and Kim, Heuna and Haenel, Valentin and Fruend, Ingo and Yianilos, Peter and Mueller-Freitag, Moritz and others},
booktitle={Proceedings of the IEEE international conference on computer vision},
pages={5842--5850},
year={2017}
}
Contributions
Thanks to @apsdehal for adding this dataset.