aps committed on
Commit • 16f3142 • 0 Parent(s):

Complete Charades

Browse files:
- README.md +371 -0
- charades.py +138 -0
- classes.py +179 -0
- dataset_infos.json +1 -0
README.md
ADDED
@@ -0,0 +1,371 @@
---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
languages:
- en
licenses:
- other-charades
multilinguality:
- monolingual
paperswithcode_id: charades
pretty_name: Charades
size_categories:
- 1K<n<10K
source_datasets:
- original
task_categories:
- other
task_ids:
- other
---

# Dataset Card for Charades

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Instances](#data-instances)
  - [Data Fields](#data-fields)
  - [Data Splits](#data-splits)
- [Dataset Creation](#dataset-creation)
  - [Curation Rationale](#curation-rationale)
  - [Source Data](#source-data)
  - [Annotations](#annotations)
  - [Personal and Sensitive Information](#personal-and-sensitive-information)
- [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Discussion of Biases](#discussion-of-biases)
  - [Other Known Limitations](#other-known-limitations)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** https://prior.allenai.org/projects/charades
- **Repository:** https://github.com/gsig/charades-algorithms
- **Paper:** https://arxiv.org/abs/1604.01753
- **Leaderboard:** https://paperswithcode.com/sota/action-classification-on-charades
- **Point of Contact:** mailto: vision.amt@allenai.org

### Dataset Summary

Charades is a dataset composed of 9848 videos of daily indoor activities collected through Amazon Mechanical Turk. 267 different users were presented with a sentence that includes objects and actions from a fixed vocabulary, and they recorded a video acting out the sentence (as in a game of Charades). The dataset contains 66,500 temporal annotations for 157 action classes, 41,104 labels for 46 object classes, and 27,847 textual descriptions of the videos.

### Supported Tasks and Leaderboards

- `multilabel-action-classification`: The goal of this task is to classify the actions happening in a video. This is a multilabel classification problem. The leaderboard is available [here](https://paperswithcode.com/sota/action-classification-on-charades); a minimal loading sketch is shown below.

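
A minimal loading sketch, assuming the dataset is loaded via its Hub path (shown as a placeholder below): the `480p` config downloads the smaller rescaled videos, while `default` fetches the original-resolution archive.

```python
# Minimal usage sketch; "path/to/charades" is a placeholder Hub path.
from datasets import load_dataset

charades = load_dataset("path/to/charades", name="480p")

example = charades["train"][0]
print(example["script"])          # the sentence that was acted out
print(example["labels"])          # multi-label action indices in [0, 156]
print(example["action_timings"])  # [start, end] in seconds for each label
```
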
### Languages

The annotations in the dataset are in English.

## Dataset Structure

### Data Instances

```
{
  "video_id": "46GP8",
  "video": "/home/amanpreet_huggingface_co/.cache/huggingface/datasets/downloads/extracted/3f022da5305aaa189f09476dbf7d5e02f6fe12766b927c076707360d00deb44d/46GP8.mp4",
  "subject": "HR43",
  "scene": "Kitchen",
  "quality": 6,
  "relevance": 7,
  "verified": "Yes",
  "script": "A person cooking on a stove while watching something out a window.",
  "objects": ["food", "stove", "window"],
  "descriptions": [
    "A person cooks food on a stove before looking out of a window."
  ],
  "labels": [92, 147],
  "action_timings": [
    [11.899999618530273, 21.200000762939453],
    [0.0, 12.600000381469727]
  ],
  "length": 24.829999923706055
}
```

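
The `video` field is only a path to the extracted `.mp4` file; frame decoding is left to the user. A minimal sketch, assuming OpenCV as the video reader (any backend works; `example` stands for a single instance like the one above):

```python
import cv2  # assumed reader; not a dependency of the dataset itself


def read_frames(video_path, max_frames=32):
    """Return up to `max_frames` BGR frames from a Charades clip."""
    capture = cv2.VideoCapture(video_path)
    frames = []
    while len(frames) < max_frames:
        ok, frame = capture.read()
        if not ok:  # end of clip or decoding error
            break
        frames.append(frame)
    capture.release()
    return frames


frames = read_frames(example["video"])
```
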
### Data Fields

- `video_id`: `str` Unique identifier for each video.
- `video`: `str` Path to the video file.
- `subject`: `str` Unique identifier for each subject in the dataset.
- `scene`: `str` One of 15 indoor scenes in the dataset, such as Kitchen.
- `quality`: `int` The quality of the video judged by an annotator (7-point scale, 7 = high quality), -100 if missing.
- `relevance`: `int` The relevance of the video to the script judged by an annotator (7-point scale, 7 = very relevant), -100 if missing.
- `verified`: `str` 'Yes' if an annotator successfully verified that the video matches the script, else 'No'.
- `script`: `str` The human-generated script used to generate the video.
- `objects`: `List[str]` Objects annotated as present in the video, drawn from the 46-object vocabulary.
- `descriptions`: `List[str]` List of descriptions by annotators watching the video.
- `labels`: `List[int]` Multi-label actions found in the video. Indices from 0 to 156; they can be decoded back to class names as sketched below.
- `action_timings`: `List[Tuple[float, float]]` Start and end time (in seconds) at which each of the above actions happens.
- `length`: `float` The length of the video in seconds.

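
Because `labels` stores integer indices, class names can be recovered from the `ClassLabel` feature; a short sketch, continuing from the loading example above (the index order matches the table below):

```python
# Map `labels` indices back to class names via the ClassLabel feature.
label_feature = charades["train"].features["labels"].feature  # ClassLabel with 157 names
for index, (start, end) in zip(example["labels"], example["action_timings"]):
    print(f"{label_feature.int2str(index)}: {start:.1f}s - {end:.1f}s")
# For the instance above, this prints:
# Watching/Looking outside of a window: 11.9s - 21.2s
# Someone is cooking something: 0.0s - 12.6s
```
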
<details>
<summary>
Click here to see the full mapping of Charades class ids to labels:
</summary>

|id|Class|
|--|-----|
|c000 | Holding some clothes |
|c001 | Putting clothes somewhere |
|c002 | Taking some clothes from somewhere |
|c003 | Throwing clothes somewhere |
|c004 | Tidying some clothes |
|c005 | Washing some clothes |
|c006 | Closing a door |
|c007 | Fixing a door |
|c008 | Opening a door |
|c009 | Putting something on a table |
|c010 | Sitting on a table |
|c011 | Sitting at a table |
|c012 | Tidying up a table |
|c013 | Washing a table |
|c014 | Working at a table |
|c015 | Holding a phone/camera |
|c016 | Playing with a phone/camera |
|c017 | Putting a phone/camera somewhere |
|c018 | Taking a phone/camera from somewhere |
|c019 | Talking on a phone/camera |
|c020 | Holding a bag |
|c021 | Opening a bag |
|c022 | Putting a bag somewhere |
|c023 | Taking a bag from somewhere |
|c024 | Throwing a bag somewhere |
|c025 | Closing a book |
|c026 | Holding a book |
|c027 | Opening a book |
|c028 | Putting a book somewhere |
|c029 | Smiling at a book |
|c030 | Taking a book from somewhere |
|c031 | Throwing a book somewhere |
|c032 | Watching/Reading/Looking at a book |
|c033 | Holding a towel/s |
|c034 | Putting a towel/s somewhere |
|c035 | Taking a towel/s from somewhere |
|c036 | Throwing a towel/s somewhere |
|c037 | Tidying up a towel/s |
|c038 | Washing something with a towel |
|c039 | Closing a box |
|c040 | Holding a box |
|c041 | Opening a box |
|c042 | Putting a box somewhere |
|c043 | Taking a box from somewhere |
|c044 | Taking something from a box |
|c045 | Throwing a box somewhere |
|c046 | Closing a laptop |
|c047 | Holding a laptop |
|c048 | Opening a laptop |
|c049 | Putting a laptop somewhere |
|c050 | Taking a laptop from somewhere |
|c051 | Watching a laptop or something on a laptop |
|c052 | Working/Playing on a laptop |
|c053 | Holding a shoe/shoes |
|c054 | Putting shoes somewhere |
|c055 | Putting on shoe/shoes |
|c056 | Taking shoes from somewhere |
|c057 | Taking off some shoes |
|c058 | Throwing shoes somewhere |
|c059 | Sitting in a chair |
|c060 | Standing on a chair |
|c061 | Holding some food |
|c062 | Putting some food somewhere |
|c063 | Taking food from somewhere |
|c064 | Throwing food somewhere |
|c065 | Eating a sandwich |
|c066 | Making a sandwich |
|c067 | Holding a sandwich |
|c068 | Putting a sandwich somewhere |
|c069 | Taking a sandwich from somewhere |
|c070 | Holding a blanket |
|c071 | Putting a blanket somewhere |
|c072 | Snuggling with a blanket |
|c073 | Taking a blanket from somewhere |
|c074 | Throwing a blanket somewhere |
|c075 | Tidying up a blanket/s |
|c076 | Holding a pillow |
|c077 | Putting a pillow somewhere |
|c078 | Snuggling with a pillow |
|c079 | Taking a pillow from somewhere |
|c080 | Throwing a pillow somewhere |
|c081 | Putting something on a shelf |
|c082 | Tidying a shelf or something on a shelf |
|c083 | Reaching for and grabbing a picture |
|c084 | Holding a picture |
|c085 | Laughing at a picture |
|c086 | Putting a picture somewhere |
|c087 | Taking a picture of something |
|c088 | Watching/looking at a picture |
|c089 | Closing a window |
|c090 | Opening a window |
|c091 | Washing a window |
|c092 | Watching/Looking outside of a window |
|c093 | Holding a mirror |
|c094 | Smiling in a mirror |
|c095 | Washing a mirror |
|c096 | Watching something/someone/themselves in a mirror |
|c097 | Walking through a doorway |
|c098 | Holding a broom |
|c099 | Putting a broom somewhere |
|c100 | Taking a broom from somewhere |
|c101 | Throwing a broom somewhere |
|c102 | Tidying up with a broom |
|c103 | Fixing a light |
|c104 | Turning on a light |
|c105 | Turning off a light |
|c106 | Drinking from a cup/glass/bottle |
|c107 | Holding a cup/glass/bottle of something |
|c108 | Pouring something into a cup/glass/bottle |
|c109 | Putting a cup/glass/bottle somewhere |
|c110 | Taking a cup/glass/bottle from somewhere |
|c111 | Washing a cup/glass/bottle |
|c112 | Closing a closet/cabinet |
|c113 | Opening a closet/cabinet |
|c114 | Tidying up a closet/cabinet |
|c115 | Someone is holding a paper/notebook |
|c116 | Putting their paper/notebook somewhere |
|c117 | Taking paper/notebook from somewhere |
|c118 | Holding a dish |
|c119 | Putting a dish/es somewhere |
|c120 | Taking a dish/es from somewhere |
|c121 | Wash a dish/dishes |
|c122 | Lying on a sofa/couch |
|c123 | Sitting on sofa/couch |
|c124 | Lying on the floor |
|c125 | Sitting on the floor |
|c126 | Throwing something on the floor |
|c127 | Tidying something on the floor |
|c128 | Holding some medicine |
|c129 | Taking/consuming some medicine |
|c130 | Putting groceries somewhere |
|c131 | Laughing at television |
|c132 | Watching television |
|c133 | Someone is awakening in bed |
|c134 | Lying on a bed |
|c135 | Sitting in a bed |
|c136 | Fixing a vacuum |
|c137 | Holding a vacuum |
|c138 | Taking a vacuum from somewhere |
|c139 | Washing their hands |
|c140 | Fixing a doorknob |
|c141 | Grasping onto a doorknob |
|c142 | Closing a refrigerator |
|c143 | Opening a refrigerator |
|c144 | Fixing their hair |
|c145 | Working on paper/notebook |
|c146 | Someone is awakening somewhere |
|c147 | Someone is cooking something |
|c148 | Someone is dressing |
|c149 | Someone is laughing |
|c150 | Someone is running somewhere |
|c151 | Someone is going from standing to sitting |
|c152 | Someone is smiling |
|c153 | Someone is sneezing |
|c154 | Someone is standing up from somewhere |
|c155 | Someone is undressing |
|c156 | Someone is eating something |
</details>

### Data Splits

|             | train | test |
|-------------|------:|-----:|
|# of examples|   7985|  1863|

## Dataset Creation

### Curation Rationale

> Computer vision has a great potential to help our daily lives by searching for lost keys, watering flowers or reminding us to take a pill. To succeed with such tasks, computer vision methods need to be trained from real and diverse examples of our daily dynamic scenes. While most of such scenes are not particularly exciting, they typically do not appear on YouTube, in movies or TV broadcasts. So how do we collect sufficiently many diverse but boring samples representing our lives? We propose a novel Hollywood in Homes approach to collect such data. Instead of shooting videos in the lab, we ensure diversity by distributing and crowdsourcing the whole process of video creation from script writing to video recording and annotation.

### Source Data

#### Initial Data Collection and Normalization

> Similar to filming, we have a three-step process for generating a video. The first step is generating the script of the indoor video. The key here is to allow workers to generate diverse scripts yet ensure that we have enough data for each category. The second step in the process is to use the script and ask workers to record a video of that sentence being acted out. In the final step, we ask the workers to verify if the recorded video corresponds to script, followed by an annotation procedure.

#### Who are the source language producers?

Amazon Mechanical Turk annotators

### Annotations

#### Annotation process

> Similar to filming, we have a three-step process for generating a video. The first step is generating the script of the indoor video. The key here is to allow workers to generate diverse scripts yet ensure that we have enough data for each category. The second step in the process is to use the script and ask workers to record a video of that sentence being acted out. In the final step, we ask the workers to verify if the recorded video corresponds to script, followed by an annotation procedure.

#### Who are the annotators?

Amazon Mechanical Turk annotators

### Personal and Sensitive Information

Nothing specifically mentioned in the paper.

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

AMT annotators

### Licensing Information

License for Non-Commercial Use

If this software is redistributed, this license must be included. The term software includes any source files, documentation, executables, models, and data.

This software and data is available for general use by academic or non-profit, or government-sponsored researchers. It may also be used for evaluation purposes elsewhere. This license does not grant the right to use this software or any derivation of it in a for-profit enterprise. For commercial use, please contact The Allen Institute for Artificial Intelligence.

This license does not grant the right to modify and publicly release the data in any form.

This license does not grant the right to distribute the data to a third party in any form.

The subjects in this data should be treated with respect and dignity. This license only grants the right to publish short segments or still images in an academic publication where necessary to present examples, experimental results, or observations.

This software comes with no warranty or guarantee of any kind. By using this software, the user accepts full liability.

The Allen Institute for Artificial Intelligence (C) 2016.

### Citation Information

```bibtex
@article{sigurdsson2016hollywood,
    author = {Gunnar A. Sigurdsson and G{\"u}l Varol and Xiaolong Wang and Ivan Laptev and Ali Farhadi and Abhinav Gupta},
    title = {Hollywood in Homes: Crowdsourcing Data Collection for Activity Understanding},
    journal = {ArXiv e-prints},
    eprint = {1604.01753},
    year = {2016},
    url = {http://arxiv.org/abs/1604.01753},
}
```

### Contributions

Thanks to [@apsdehal](https://github.com/apsdehal) for adding this dataset.
charades.py
ADDED
@@ -0,0 +1,138 @@
# coding=utf-8
# Copyright 2022 The HuggingFace Datasets Authors and the current dataset script contributor.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Lint as: python3
"""Charades is a dataset composed of 9848 videos of daily indoor activities collected through Amazon Mechanical Turk"""


import csv
import os

import datasets

from .classes import CHARADES_CLASSES

_CITATION = """
@article{sigurdsson2016hollywood,
    author = {Gunnar A. Sigurdsson and G{\"u}l Varol and Xiaolong Wang and Ivan Laptev and Ali Farhadi and Abhinav Gupta},
    title = {Hollywood in Homes: Crowdsourcing Data Collection for Activity Understanding},
    journal = {ArXiv e-prints},
    eprint = {1604.01753},
    year = {2016},
    url = {http://arxiv.org/abs/1604.01753},
}
"""

_DESCRIPTION = """\
Charades is a dataset composed of 9848 videos of daily indoor activities collected through Amazon Mechanical Turk. 267 different users were presented with a sentence that includes objects and actions from a fixed vocabulary, and they recorded a video acting out the sentence (as in a game of Charades). The dataset contains 66,500 temporal annotations for 157 action classes, 41,104 labels for 46 object classes, and 27,847 textual descriptions of the videos.
"""


_ANNOTATIONS_URL = "https://ai2-public-datasets.s3-us-west-2.amazonaws.com/charades/Charades.zip"
_VIDEOS_URL = {
    "default": "https://ai2-public-datasets.s3-us-west-2.amazonaws.com/charades/Charades_v1.zip",
    "480p": "https://ai2-public-datasets.s3-us-west-2.amazonaws.com/charades/Charades_v1_480.zip",
}


class Charades(datasets.GeneratorBasedBuilder):
    """Charades is a dataset composed of 9848 videos of daily indoor activities collected through Amazon Mechanical Turk"""

    BUILDER_CONFIGS = [datasets.BuilderConfig(name="default"), datasets.BuilderConfig(name="480p")]
    DEFAULT_CONFIG_NAME = "default"

    def _info(self):
        return datasets.DatasetInfo(
            description=_DESCRIPTION,
            features=datasets.Features(
                {
                    "video_id": datasets.Value("string"),
                    "video": datasets.Value("string"),
                    "subject": datasets.Value("string"),
                    "scene": datasets.Value("string"),
                    "quality": datasets.Value("int32"),
                    "relevance": datasets.Value("int32"),
                    "verified": datasets.Value("string"),
                    "script": datasets.Value("string"),
                    "objects": datasets.features.Sequence(datasets.Value("string")),
                    "descriptions": datasets.features.Sequence(datasets.Value("string")),
                    "labels": datasets.Sequence(
                        datasets.features.ClassLabel(
                            num_classes=len(CHARADES_CLASSES), names=list(CHARADES_CLASSES.values())
                        )
                    ),
                    "action_timings": datasets.Sequence(datasets.Sequence(datasets.Value("float32"))),
                    "length": datasets.Value("float32"),
                }
            ),
            supervised_keys=None,
            homepage="",
            citation=_CITATION,
        )

    def _split_generators(self, dl_manager):
        # Annotations (CSV files) and videos ship as separate zip archives.
        annotations_path = dl_manager.download_and_extract(_ANNOTATIONS_URL)
        archive = os.path.join(dl_manager.download_and_extract(_VIDEOS_URL[self.config.name]), "Charades_v1")
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={
                    "annotation_file": os.path.join(annotations_path, "Charades", "Charades_v1_train.csv"),
                    "video_folder": archive,
                },
            ),
            datasets.SplitGenerator(
                name=datasets.Split.TEST,
                gen_kwargs={
                    "annotation_file": os.path.join(annotations_path, "Charades", "Charades_v1_test.csv"),
                    "video_folder": archive,
                },
            ),
        ]

    def _generate_examples(self, annotation_file, video_folder):
        """This function returns the examples."""
        with open(annotation_file, "r", encoding="utf-8") as csv_file:
            reader = csv.DictReader(csv_file)
            idx = 0
            for row in reader:
                path = os.path.join(video_folder, row["id"] + ".mp4")
                labels = []
                action_timings = []
                for class_label in row["actions"].split(";"):
                    # Skip empty action labels
                    if len(class_label) != 0:
                        # format is like: "c123 11.0 13.0"
                        labels.append(CHARADES_CLASSES[class_label.split(" ")[0]])
                        timings = list(map(float, class_label.split(" ")[1:]))
                        action_timings.append(timings)

                yield idx, {
                    "video_id": row["id"],
                    "video": path,
                    "subject": row["subject"],
                    "scene": row["scene"],
                    # -100 marks a missing quality/relevance score
                    "quality": int(row["quality"]) if len(row["quality"]) != 0 else -100,
                    "relevance": int(row["relevance"]) if len(row["relevance"]) != 0 else -100,
                    "verified": row["verified"],
                    "script": row["script"],
                    "objects": row["objects"].split(";"),
                    "descriptions": row["descriptions"].split(";"),
                    "labels": labels,
                    "action_timings": action_timings,
                    "length": row["length"],
                }

                idx += 1
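
For reference, the following standalone sketch mirrors how `_generate_examples` above parses the `actions` column of the Charades CSV files; the sample string is illustrative, not taken from the real annotations.

```python
# The `actions` column is a ";"-separated list of "<class_id> <start> <end>" entries.
actions = "c092 11.90 21.20;c147 0.00 12.60"  # illustrative value

labels, action_timings = [], []
for entry in filter(None, actions.split(";")):
    class_id, start, end = entry.split(" ")
    labels.append(class_id)                        # e.g. "c092"
    action_timings.append([float(start), float(end)])

print(labels)           # ['c092', 'c147']
print(action_timings)   # [[11.9, 21.2], [0.0, 12.6]]
```
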
classes.py
ADDED
@@ -0,0 +1,179 @@
# coding=utf-8
# Copyright 2022 the HuggingFace Datasets Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.


from collections import OrderedDict

CHARADES_CLASSES = OrderedDict(
    {
        "c000": "Holding some clothes",
        "c001": "Putting clothes somewhere",
        "c002": "Taking some clothes from somewhere",
        "c003": "Throwing clothes somewhere",
        "c004": "Tidying some clothes",
        "c005": "Washing some clothes",
        "c006": "Closing a door",
        "c007": "Fixing a door",
        "c008": "Opening a door",
        "c009": "Putting something on a table",
        "c010": "Sitting on a table",
        "c011": "Sitting at a table",
        "c012": "Tidying up a table",
        "c013": "Washing a table",
        "c014": "Working at a table",
        "c015": "Holding a phone/camera",
        "c016": "Playing with a phone/camera",
        "c017": "Putting a phone/camera somewhere",
        "c018": "Taking a phone/camera from somewhere",
        "c019": "Talking on a phone/camera",
        "c020": "Holding a bag",
        "c021": "Opening a bag",
        "c022": "Putting a bag somewhere",
        "c023": "Taking a bag from somewhere",
        "c024": "Throwing a bag somewhere",
        "c025": "Closing a book",
        "c026": "Holding a book",
        "c027": "Opening a book",
        "c028": "Putting a book somewhere",
        "c029": "Smiling at a book",
        "c030": "Taking a book from somewhere",
        "c031": "Throwing a book somewhere",
        "c032": "Watching/Reading/Looking at a book",
        "c033": "Holding a towel/s",
        "c034": "Putting a towel/s somewhere",
        "c035": "Taking a towel/s from somewhere",
        "c036": "Throwing a towel/s somewhere",
        "c037": "Tidying up a towel/s",
        "c038": "Washing something with a towel",
        "c039": "Closing a box",
        "c040": "Holding a box",
        "c041": "Opening a box",
        "c042": "Putting a box somewhere",
        "c043": "Taking a box from somewhere",
        "c044": "Taking something from a box",
        "c045": "Throwing a box somewhere",
        "c046": "Closing a laptop",
        "c047": "Holding a laptop",
        "c048": "Opening a laptop",
        "c049": "Putting a laptop somewhere",
        "c050": "Taking a laptop from somewhere",
        "c051": "Watching a laptop or something on a laptop",
        "c052": "Working/Playing on a laptop",
        "c053": "Holding a shoe/shoes",
        "c054": "Putting shoes somewhere",
        "c055": "Putting on shoe/shoes",
        "c056": "Taking shoes from somewhere",
        "c057": "Taking off some shoes",
        "c058": "Throwing shoes somewhere",
        "c059": "Sitting in a chair",
        "c060": "Standing on a chair",
        "c061": "Holding some food",
        "c062": "Putting some food somewhere",
        "c063": "Taking food from somewhere",
        "c064": "Throwing food somewhere",
        "c065": "Eating a sandwich",
        "c066": "Making a sandwich",
        "c067": "Holding a sandwich",
        "c068": "Putting a sandwich somewhere",
        "c069": "Taking a sandwich from somewhere",
        "c070": "Holding a blanket",
        "c071": "Putting a blanket somewhere",
        "c072": "Snuggling with a blanket",
        "c073": "Taking a blanket from somewhere",
        "c074": "Throwing a blanket somewhere",
        "c075": "Tidying up a blanket/s",
        "c076": "Holding a pillow",
        "c077": "Putting a pillow somewhere",
        "c078": "Snuggling with a pillow",
        "c079": "Taking a pillow from somewhere",
        "c080": "Throwing a pillow somewhere",
        "c081": "Putting something on a shelf",
        "c082": "Tidying a shelf or something on a shelf",
        "c083": "Reaching for and grabbing a picture",
        "c084": "Holding a picture",
        "c085": "Laughing at a picture",
        "c086": "Putting a picture somewhere",
        "c087": "Taking a picture of something",
        "c088": "Watching/looking at a picture",
        "c089": "Closing a window",
        "c090": "Opening a window",
        "c091": "Washing a window",
        "c092": "Watching/Looking outside of a window",
        "c093": "Holding a mirror",
        "c094": "Smiling in a mirror",
        "c095": "Washing a mirror",
        "c096": "Watching something/someone/themselves in a mirror",
        "c097": "Walking through a doorway",
        "c098": "Holding a broom",
        "c099": "Putting a broom somewhere",
        "c100": "Taking a broom from somewhere",
        "c101": "Throwing a broom somewhere",
        "c102": "Tidying up with a broom",
        "c103": "Fixing a light",
        "c104": "Turning on a light",
        "c105": "Turning off a light",
        "c106": "Drinking from a cup/glass/bottle",
        "c107": "Holding a cup/glass/bottle of something",
        "c108": "Pouring something into a cup/glass/bottle",
        "c109": "Putting a cup/glass/bottle somewhere",
        "c110": "Taking a cup/glass/bottle from somewhere",
        "c111": "Washing a cup/glass/bottle",
        "c112": "Closing a closet/cabinet",
        "c113": "Opening a closet/cabinet",
        "c114": "Tidying up a closet/cabinet",
        "c115": "Someone is holding a paper/notebook",
        "c116": "Putting their paper/notebook somewhere",
        "c117": "Taking paper/notebook from somewhere",
        "c118": "Holding a dish",
        "c119": "Putting a dish/es somewhere",
        "c120": "Taking a dish/es from somewhere",
        "c121": "Wash a dish/dishes",
        "c122": "Lying on a sofa/couch",
        "c123": "Sitting on sofa/couch",
        "c124": "Lying on the floor",
        "c125": "Sitting on the floor",
        "c126": "Throwing something on the floor",
        "c127": "Tidying something on the floor",
        "c128": "Holding some medicine",
        "c129": "Taking/consuming some medicine",
        "c130": "Putting groceries somewhere",
        "c131": "Laughing at television",
        "c132": "Watching television",
        "c133": "Someone is awakening in bed",
        "c134": "Lying on a bed",
        "c135": "Sitting in a bed",
        "c136": "Fixing a vacuum",
        "c137": "Holding a vacuum",
        "c138": "Taking a vacuum from somewhere",
        "c139": "Washing their hands",
        "c140": "Fixing a doorknob",
        "c141": "Grasping onto a doorknob",
        "c142": "Closing a refrigerator",
        "c143": "Opening a refrigerator",
        "c144": "Fixing their hair",
        "c145": "Working on paper/notebook",
        "c146": "Someone is awakening somewhere",
        "c147": "Someone is cooking something",
        "c148": "Someone is dressing",
        "c149": "Someone is laughing",
        "c150": "Someone is running somewhere",
        "c151": "Someone is going from standing to sitting",
        "c152": "Someone is smiling",
        "c153": "Someone is sneezing",
        "c154": "Someone is standing up from somewhere",
        "c155": "Someone is undressing",
        "c156": "Someone is eating something",
    }
)
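
The position of each entry in `CHARADES_CLASSES` is what ties the `c###` ids to the integer label indices used by the loading script; a small sanity-check sketch, assuming `classes.py` is importable from the working directory:

```python
from classes import CHARADES_CLASSES  # the dataset script itself uses `from .classes import ...`

names = list(CHARADES_CLASSES.values())
assert names[92] == CHARADES_CLASSES["c092"] == "Watching/Looking outside of a window"
assert names[147] == CHARADES_CLASSES["c147"] == "Someone is cooking something"
```
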
dataset_infos.json
ADDED
@@ -0,0 +1 @@
{"default": {"description": "Charades is dataset composed of 9848 videos of daily indoors activities collected through Amazon Mechanical Turk. 267 different users were presented with a sentence, that includes objects and actions from a fixed vocabulary, and they recorded a video acting out the sentence (like in a game of Charades). The dataset contains 66,500 temporal annotations for 157 action classes, 41,104 labels for 46 object classes, and 27,847 textual descriptions of the videos. \n", "citation": "\n@article{sigurdsson2016hollywood,\n author = {Gunnar A. Sigurdsson and G{\"u}l Varol and Xiaolong Wang and Ivan Laptev and Ali Farhadi and Abhinav Gupta},\n title = {Hollywood in Homes: Crowdsourcing Data Collection for Activity Understanding},\n journal = {ArXiv e-prints},\n eprint = {1604.01753}, \n year = {2016},\n url = {http://arxiv.org/abs/1604.01753},\n}\n", "homepage": "", "license": "", "features": {"video_id": {"dtype": "string", "id": null, "_type": "Value"}, "video": {"dtype": "string", "id": null, "_type": "Value"}, "subject": {"dtype": "string", "id": null, "_type": "Value"}, "scene": {"dtype": "string", "id": null, "_type": "Value"}, "quality": {"dtype": "int32", "id": null, "_type": "Value"}, "relevance": {"dtype": "int32", "id": null, "_type": "Value"}, "verified": {"dtype": "string", "id": null, "_type": "Value"}, "script": {"dtype": "string", "id": null, "_type": "Value"}, "objects": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "descriptions": {"feature": {"dtype": "string", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "labels": {"feature": {"num_classes": 157, "names": ["Holding some clothes", "Putting clothes somewhere", "Taking some clothes from somewhere", "Throwing clothes somewhere", "Tidying some clothes", "Washing some clothes", "Closing a door", "Fixing a door", "Opening a door", "Putting something on a table", "Sitting on a table", "Sitting at a table", "Tidying up a table", "Washing a table", "Working at a table", "Holding a phone/camera", "Playing with a phone/camera", "Putting a phone/camera somewhere", "Taking a phone/camera from somewhere", "Talking on a phone/camera", "Holding a bag", "Opening a bag", "Putting a bag somewhere", "Taking a bag from somewhere", "Throwing a bag somewhere", "Closing a book", "Holding a book", "Opening a book", "Putting a book somewhere", "Smiling at a book", "Taking a book from somewhere", "Throwing a book somewhere", "Watching/Reading/Looking at a book", "Holding a towel/s", "Putting a towel/s somewhere", "Taking a towel/s from somewhere", "Throwing a towel/s somewhere", "Tidying up a towel/s", "Washing something with a towel", "Closing a box", "Holding a box", "Opening a box", "Putting a box somewhere", "Taking a box from somewhere", "Taking something from a box", "Throwing a box somewhere", "Closing a laptop", "Holding a laptop", "Opening a laptop", "Putting a laptop somewhere", "Taking a laptop from somewhere", "Watching a laptop or something on a laptop", "Working/Playing on a laptop", "Holding a shoe/shoes", "Putting shoes somewhere", "Putting on shoe/shoes", "Taking shoes from somewhere", "Taking off some shoes", "Throwing shoes somewhere", "Sitting in a chair", "Standing on a chair", "Holding some food", "Putting some food somewhere", "Taking food from somewhere", "Throwing food somewhere", "Eating a sandwich", "Making a sandwich", "Holding a sandwich", "Putting a sandwich somewhere", "Taking a sandwich from somewhere", 
"Holding a blanket", "Putting a blanket somewhere", "Snuggling with a blanket", "Taking a blanket from somewhere", "Throwing a blanket somewhere", "Tidying up a blanket/s", "Holding a pillow", "Putting a pillow somewhere", "Snuggling with a pillow", "Taking a pillow from somewhere", "Throwing a pillow somewhere", "Putting something on a shelf", "Tidying a shelf or something on a shelf", "Reaching for and grabbing a picture", "Holding a picture", "Laughing at a picture", "Putting a picture somewhere", "Taking a picture of something", "Watching/looking at a picture", "Closing a window", "Opening a window", "Washing a window", "Watching/Looking outside of a window", "Holding a mirror", "Smiling in a mirror", "Washing a mirror", "Watching something/someone/themselves in a mirror", "Walking through a doorway", "Holding a broom", "Putting a broom somewhere", "Taking a broom from somewhere", "Throwing a broom somewhere", "Tidying up with a broom", "Fixing a light", "Turning on a light", "Turning off a light", "Drinking from a cup/glass/bottle", "Holding a cup/glass/bottle of something", "Pouring something into a cup/glass/bottle", "Putting a cup/glass/bottle somewhere", "Taking a cup/glass/bottle from somewhere", "Washing a cup/glass/bottle", "Closing a closet/cabinet", "Opening a closet/cabinet", "Tidying up a closet/cabinet", "Someone is holding a paper/notebook", "Putting their paper/notebook somewhere", "Taking paper/notebook from somewhere", "Holding a dish", "Putting a dish/es somewhere", "Taking a dish/es from somewhere", "Wash a dish/dishes", "Lying on a sofa/couch", "Sitting on sofa/couch", "Lying on the floor", "Sitting on the floor", "Throwing something on the floor", "Tidying something on the floor", "Holding some medicine", "Taking/consuming some medicine", "Putting groceries somewhere", "Laughing at television", "Watching television", "Someone is awakening in bed", "Lying on a bed", "Sitting in a bed", "Fixing a vacuum", "Holding a vacuum", "Taking a vacuum from somewhere", "Washing their hands", "Fixing a doorknob", "Grasping onto a doorknob", "Closing a refrigerator", "Opening a refrigerator", "Fixing their hair", "Working on paper/notebook", "Someone is awakening somewhere", "Someone is cooking something", "Someone is dressing", "Someone is laughing", "Someone is running somewhere", "Someone is going from standing to sitting", "Someone is smiling", "Someone is sneezing", "Someone is standing up from somewhere", "Someone is undressing", "Someone is eating something"], "id": null, "_type": "ClassLabel"}, "length": -1, "id": null, "_type": "Sequence"}, "action_timings": {"feature": {"feature": {"dtype": "float32", "id": null, "_type": "Value"}, "length": -1, "id": null, "_type": "Sequence"}, "length": -1, "id": null, "_type": "Sequence"}, "length": {"dtype": "float32", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "charades", "config_name": "default", "version": {"version_str": "0.0.0", "description": null, "major": 0, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 5466595, "num_examples": 7985, "dataset_name": "charades"}, "test": {"name": "test", "num_bytes": 1634411, "num_examples": 1863, "dataset_name": "charades"}}, "download_checksums": {"https://ai2-public-datasets.s3-us-west-2.amazonaws.com/charades/Charades.zip": {"num_bytes": 3519822, "checksum": "c616913ef79c2ddde06d9c562eae57bb8901d459d7568a0d27bf09cbf33ae866"}, 
"https://ai2-public-datasets.s3-us-west-2.amazonaws.com/charades/Charades_v1.zip": {"num_bytes": 58807341227, "checksum": "128cb3dfed94c0789ed4114be9d7990d4845633756e7ccc68dd54a511ff3b0e7"}}, "download_size": 58810861049, "post_processing_size": null, "dataset_size": 7101006, "size_in_bytes": 58817962055}}