---
license: apache-2.0
---

# Dataset Card for the RealTalk Video Dataset

Thank you for your interest in the RealTalk dataset! RealTalk consists of 692 in-the-wild videos of dyadic (i.e., two-person) conversations, curated with the goal of advancing multimodal communication research in computer vision.
If you find our dataset useful, please cite
```
@inproceedings{geng2023affective,
  title={Affective Faces for Goal-Driven Dyadic Communication},
  author={Geng, Scott and Teotia, Revant and Tendulkar, Purva and Menon, Sachit and Vondrick, Carl},
  year={2023}
}
```

---------------------------------------------------------------------------------------------------------------------------------------

## Dataset Details

The dataset contains 692 full-length videos scraped from [The Skin Deep](https://www.youtube.com/c/TheSkinDeep), a public YouTube channel that captures long-form, unscripted conversations between diverse individuals about different facets of the human experience. We also include associated annotations; we detail all files present in the dataset below.

### File Overview

General notes:
* All frame numbers are indexed from 0.
* We denote 'p0' as the person on the left side of the video, and 'p1' as the person on the right side.
* <video_id> denotes the unique 11-digit video ID assigned by YouTube to a specific video.

#### [0] videos/videos_{xx}.tar
Contains the full-length raw videos that the dataset is created from, sharded into tar files of 50 videos each. Each video is stored at 25 fps in ```avi``` format.
Each video is stored with filename ```<video_id>.avi``` (e.g., ```5hxY5Svr2aM.avi```).
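
As a quick reference, here is a minimal sketch of opening one of these videos after un-tarring a shard. It assumes ```opencv-python``` is installed and that the shard was extracted into a local ```videos/``` directory; the filename is just the example ID from above.
```python
# Minimal sketch: read the first frame of an extracted RealTalk video.
import cv2

video_path = "videos/5hxY5Svr2aM.avi"  # <video_id>.avi, extracted from a shard
cap = cv2.VideoCapture(video_path)

fps = cap.get(cv2.CAP_PROP_FPS)                       # 25 fps for RealTalk videos
n_frames = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
print(f"{video_path}: {n_frames} frames at {fps} fps")

ok, frame = cap.read()                                # frame 0 (frames are 0-indexed)
if ok:
    print("first frame shape:", frame.shape)          # (H, W, 3), BGR channel order
cap.release()
```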

#### [1] audio.tar.gz
Contains audio files extracted from the videos, stored in ```mp3``` format.

#### [2] asr.tar.gz
Contains ASR outputs of [Whisper](https://github.com/openai/whisper) for each video. Subtitles for video ```<video_id>.avi``` are stored in the file ```<video_id>.json``` as the dictionary
```
{
  'text': <full ASR transcript of video>,
  'segments': <time-stamped ASR segments>,
  'language': <detected language of video>
}
```
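
For example, a minimal sketch of loading one ASR file (assuming ```asr.tar.gz``` was extracted into a local ```asr/``` directory; the video ID is the example from above):
```python
# Minimal sketch: load a Whisper ASR output and print a few segments.
import json

with open("asr/5hxY5Svr2aM.json") as f:
    asr = json.load(f)

print(asr["language"])      # detected language of the video
print(asr["text"][:200])    # beginning of the full transcript

# Each segment is a standard Whisper segment dict with start/end times
# (in seconds) and the transcribed text.
for seg in asr["segments"][:3]:
    print(f"[{seg['start']:.1f}s - {seg['end']:.1f}s] {seg['text']}")
```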

#### [3.0] benchmark/train_test_split.json
This JSON file describes the clips used as the benchmark train/test split in our paper. The file stores the dictionary
```
{
  'train': [list of train samples],
  'test': [list of test samples]
}
```
where each entry in the list is another dictionary with format
```
{
  'id': [video_id, start_frame (inclusive), end_frame (exclusive)],
  'speaker': 'p0'|'p1',
  'listener': 'p0'|'p1',
  'asr': str
}
```
The ASR of the clip is computed with [Whisper](https://github.com/openai/whisper).
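
A minimal sketch of iterating over the benchmark clips, assuming the file sits in a local ```benchmark/``` directory; the frame-to-seconds conversion uses the 25 fps rate of the released videos:
```python
# Minimal sketch: load the benchmark split and inspect one test clip.
import json

with open("benchmark/train_test_split.json") as f:
    split = json.load(f)

print(len(split["train"]), "train clips;", len(split["test"]), "test clips")

clip = split["test"][0]
video_id, start_frame, end_frame = clip["id"]   # end_frame is exclusive
print("video:", video_id, "frames:", start_frame, "to", end_frame - 1)
print("speaker:", clip["speaker"], "| listener:", clip["listener"])
print("ASR:", clip["asr"][:100])

# Convert frame indices to timestamps (videos are stored at 25 fps).
start_sec, end_sec = start_frame / 25.0, end_frame / 25.0
```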

#### [3.1] benchmark/embeddings.pkl
Pickle file containing visual embeddings of the listener frames in the training/testing clips, as computed by several pretrained face models implemented in [deepface](https://github.com/serengil/deepface). The file stores a dictionary with format
```
{
  f'{video_id}.{start_frame}.{end_frame}': {
    <model_name_1>: <array of listener embeddings>,
    <model_name_2>: <array of listener embeddings>,
    ...
  },
  ...
}
```
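
A minimal sketch of reading these embeddings, assuming the pickle file was downloaded into a local ```benchmark/``` directory (the exact set of deepface model names can be discovered by inspecting the keys):
```python
# Minimal sketch: load listener-face embeddings for one benchmark clip.
import pickle

with open("benchmark/embeddings.pkl", "rb") as f:
    embeddings = pickle.load(f)

# Keys have the form '{video_id}.{start_frame}.{end_frame}'.
clip_key = next(iter(embeddings))
per_model = embeddings[clip_key]
print("clip:", clip_key, "| models:", list(per_model.keys()))

for model_name, emb in per_model.items():
    # One embedding per listener frame in the clip.
    print(model_name, "embeddings:", getattr(emb, "shape", len(emb)))
```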

#### [4] annotations.tar.gz
Contains face bounding box and active speaker annotations for every frame of each video. Annotations for video ```<video_id>.avi``` are contained in the file ```<video_id>.json```, which stores a nested dictionary structure:
```
{
  str(frame_number): {
    'people': {
      'p0': {'score': float, 'bbox': array},
      'p1': {'score': float, 'bbox': array}
    },
    'current_speaker': 'p0'|'p1'|None
  },
  ...
}
```
The 'score' field stores the active speaker score as predicted by [TalkNet-ASD](https://github.com/TaoRuijie/TalkNet-ASD); larger positive values indicate a higher probability that the person is speaking. Note also that the 'people' subdictionary may or may not contain the keys 'p0' and 'p1', depending on who is visible in the frame.
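
A minimal sketch of reading the annotations for one video, assuming ```annotations.tar.gz``` was extracted into a local ```annotations/``` directory; the video ID and frame number are only illustrative:
```python
# Minimal sketch: look up bounding boxes and the active speaker for one frame.
import json

with open("annotations/5hxY5Svr2aM.json") as f:
    ann = json.load(f)

frame = ann["100"]                      # frame keys are strings, 0-indexed
print("current speaker:", frame["current_speaker"])   # 'p0', 'p1', or None

# 'people' only contains entries for the people visible in this frame.
for pid, info in frame["people"].items():
    print(pid, "ASD score:", info["score"], "bbox:", info["bbox"])
```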

#### [5] emoca.tar.gz

Contains [EMOCA](https://emoca.is.tue.mpg.de/) embeddings for almost all frames in all the videos. The embeddings for ```<video_id>.avi``` are contained in the pickle file ```<video_id>.pkl```, which has dictionary structure
```
{
  int(frame_number): {
    'p0': <embedding dict from EMOCA>,
    'p1': <embedding dict from EMOCA>
  },
  ...
}
```

Note that some frames may be missing embeddings due to occlusions or failures in face detection.
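
A minimal sketch of reading the EMOCA embeddings for one video, assuming ```emoca.tar.gz``` was extracted into a local ```emoca/``` directory; it guards against the missing frames and missing people mentioned above:
```python
# Minimal sketch: load per-frame EMOCA embeddings, tolerating missing frames.
import pickle

with open("emoca/5hxY5Svr2aM.pkl", "rb") as f:
    emoca = pickle.load(f)

frame_number = 100                      # keys are integer frame numbers
frame = emoca.get(frame_number)
if frame is None:
    print(f"no EMOCA output for frame {frame_number}")
else:
    for pid, emb_dict in frame.items():   # 'p0' and/or 'p1', whoever was detected
        print(pid, "EMOCA embedding keys:", list(emb_dict.keys()))
```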

## Dataset Card Authors

Scott Geng

## Dataset Card Contact

sgeng@cs.washington.edu