wangyueqian committed
Commit b17dde8 • Parent(s): b4f9488
upload annotations

Browse files:
- .gitattributes +3 -0
- 5_turns/test-metadata.json +0 -0
- 5_turns/test-noisy-metadata.json +0 -0
- 5_turns/train-metadata.json +3 -0
- 8_turns/test-metadata.json +0 -0
- 8_turns/test-noisy-metadata.json +0 -0
- 8_turns/train-metadata.json +3 -0
- README.md +128 -0
- face_track_annotations.zip +3 -0
.gitattributes CHANGED
@@ -57,3 +57,6 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 # Video files - compressed
 *.mp4 filter=lfs diff=lfs merge=lfs -text
 *.webm filter=lfs diff=lfs merge=lfs -text
+*.zip.* filter=lfs diff=lfs merge=lfs -text
+5_turns/train-metadata.json filter=lfs diff=lfs merge=lfs -text
+8_turns/train-metadata.json filter=lfs diff=lfs merge=lfs -text
5_turns/test-metadata.json ADDED
The diff for this file is too large to render. See raw diff.

5_turns/test-noisy-metadata.json ADDED
The diff for this file is too large to render. See raw diff.
5_turns/train-metadata.json ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3d1b57fa1f55a1f7a3e7ca74f169a4be360e1bc11423c960a5beff032aea858c
+size 16041197
8_turns/test-metadata.json ADDED
The diff for this file is too large to render. See raw diff.

8_turns/test-noisy-metadata.json ADDED
The diff for this file is too large to render. See raw diff.
8_turns/train-metadata.json ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:93e1f3fbd3468a5576a00e67463e89376c0c9393a37f8ad42ddaffe015da3de1
+size 16432595
README.md ADDED
@@ -0,0 +1,128 @@
---
license: mit
---

This repository contains the multi-modal multi-party conversation dataset described in the paper **Friends-MMC: A Dataset for Multi-modal Multi-party Conversation Understanding**.

## Related Resources
- Paper: [Friends-MMC: A Dataset for Multi-modal Multi-party Conversation Understanding](https://arxiv.org/abs/2412.17295)
- Conversation Speaker Identification Model: [CSI model](https://huggingface.co/datasets/wangyueqian/friends_mmc)

## Friends-MMC dataset
The structure of this repository is as follows:
```
datasets/
├── 5_turns/
│   ├── images/
│   ├── train-metadata.json
│   ├── test-metadata.json
│   ├── test-noisy-metadata.json
├── 8_turns/
│   ├── images/
│   ├── train-metadata.json
│   ├── test-metadata.json
│   ├── test-noisy-metadata.json
├── face_track_videos/
│   ├── s01e01/             // season and episode name
│   │   ├── 001196-001272   // each folder contains the cropped face tracks for one turn; the numbers in the folder name are the start and end frame numbers
│   │   │   ├── 0.avi 0.wav 1.avi 1.wav
│   │   ├── 001272-001375
│   │   ├── ...
│   ├── s01e02/
│   ├── s01e03/
│   ├── ...
├── face_track_annotations/
│   ├── train/
│   │   ├── s01e01.pkl      // each pickle file stores metadata (frame number and bounding box in the original video for each frame) of the cropped face track videos
│   │   ├── s01e02.pkl
│   │   ├── ...
│   ├── test/               // same format as the files in the `train` folder, but for season 03 (the test set)
│   │   ├── s03e01.pkl
│   │   ├── s03e02.pkl
│   │   ├── ...
│   ├── test-noisy/         // same as `test`, but with some face tracks removed
│   │   ├── s03e01.pkl
│   │   ├── s03e02.pkl
│   │   ├── ...
├── raw_videos/             // raw videos of the TV series
├── ubuntu_dialogue_corpus/ // the Ubuntu Dialogue Corpus [1], used for training the text module of the CSI model
├── README.md
```

[1] Hu, W., Chan, Z., Liu, B., Zhao, D., Ma, J., & Yan, R. (2019). GSN: A Graph-Structured Network for Multi-Party Dialogues. International Joint Conference on Artificial Intelligence.

## Download the dataset
The `face_track_videos/`, `face_track_annotations/`, `ubuntu_dialogue_corpus/`, `5_turns/images/` and `8_turns/images/` folders are stored as zip files. Unzip them after downloading:
```shell
unzip -q face_track_annotations.zip
unzip -q face_track_videos.zip
unzip -q ubuntu_dialogue_corpus.zip
cd 5_turns
unzip -q images.zip
cd ../8_turns
unzip -q images.zip
cd ..
```

The `raw_videos/` folder is also stored as a zip archive, split into parts. Since the raw videos are not used in the experiments, downloading them is optional; to reassemble and unzip:
```shell
cat raw_videos.zip* > raw_videos.zip
unzip -q raw_videos.zip
```

## Data Format
### Metadata
Dialogue annotations are stored in `train-metadata.json`, `test-metadata.json` and `test-noisy-metadata.json`. Each example (shown here from `5_turns/train-metadata.json`) is formatted as follows:
```json
[
    {
        "frame": "s01e01-001259", "video": "s01e01-001196-001272", "speaker": "monica",
        "content": "There's nothing to tell! He's just some guy I work with!",
        "faces": [[[763, 254, 807, 309], "carol"], [[582, 265, 620, 314], "monica"]]
    },
    {
        "frame": "s01e01-001323", "video": "s01e01-001272-001375", "speaker": "joey",
        "content": "C'mon, you're going out with the guy! There's gotta be something wrong with him!",
        "faces": [[[569, 175, 715, 371], "joey"]]
    },
    {...}, {...}, {...} // three more dicts in the same format
]
```
- "frame" corresponds to the filename of the single frame sampled from the video for this turn (`5_turns/images/s01e01-001259.jpg`),
- "content" is the textual content of this turn,
- "faces" is a list of face bounding boxes (x1, y1, x2, y2) and their corresponding speaker names in the image `5_turns/images/s01e01-001259.jpg`,
- "video" corresponds to the folder name of the face tracks (`s01e01/001196-001272`) in the `face_track_videos/` folder,
- "speaker" is the ground-truth speaker annotation.
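As a quick sketch of how these fields fit together, the snippet below parses one turn record in the documented schema and derives the paths it points at. The inline record and the variable names are illustrative assumptions; in practice you would load the whole list from e.g. `5_turns/train-metadata.json`.

```python
import json

# One turn record in the documented metadata schema (inline sample;
# real data would come from json.load(open("5_turns/train-metadata.json"))).
record_json = '''
{
  "frame": "s01e01-001259", "video": "s01e01-001196-001272", "speaker": "monica",
  "content": "There's nothing to tell! He's just some guy I work with!",
  "faces": [[[763, 254, 807, 309], "carol"], [[582, 265, 620, 314], "monica"]]
}
'''
turn = json.loads(record_json)

# The sampled frame image lives under <turns_dir>/images/<frame>.jpg.
image_path = f"5_turns/images/{turn['frame']}.jpg"

# Each "faces" entry pairs a bounding box (x1, y1, x2, y2) with a name.
names_on_screen = [name for _bbox, name in turn["faces"]]

print(image_path)       # 5_turns/images/s01e01-001259.jpg
print(names_on_screen)  # ['carol', 'monica']
print(turn["speaker"])  # monica
```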

### Face tracks
The face tracks that appear in the video clip of each turn are stored in a folder under `face_track_videos/`, named by the start and end frame numbers of the turn. For example, `s01e01/001196-001272` contains the face tracks for the turn spanning frames 1196 to 1272 of episode `s01e01`. Each face track is stored as two files: the `.avi` file is the cropped face track video, and the `.wav` file is the corresponding audio.
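The naming convention above can be resolved mechanically. The helpers below are a sketch (the function names are mine, not dataset tooling) that maps a metadata `"video"` id to its track folder and per-track files, assuming the layout shown in the repository structure:

```python
import os

def track_folder(video_id: str) -> str:
    """Map a metadata "video" id like "s01e01-001196-001272"
    to face_track_videos/<episode>/<start>-<end>."""
    episode, start, end = video_id.split("-")
    return os.path.join("face_track_videos", episode, f"{start}-{end}")

def track_files(video_id: str, face_track_id: int) -> tuple:
    """Return the (.avi, .wav) pair for one face track in a turn."""
    folder = track_folder(video_id)
    return (os.path.join(folder, f"{face_track_id}.avi"),
            os.path.join(folder, f"{face_track_id}.wav"))

print(track_folder("s01e01-001196-001272"))
# face_track_videos/s01e01/001196-001272
print(track_files("s01e01-001196-001272", 0)[0])
# face_track_videos/s01e01/001196-001272/0.avi
```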

### Face track annotations
The face track annotations for each episode are stored as a Python dictionary. For example, each turn in `face_track_annotations/s01e01.pkl` is formatted as follows:
```json
"s01e01-001196-001272": [
    {"face_track_id": 0, "name": "carol", "frame": [1251, 1252, ...], "bbox": [[762.22, 257.18, 805.59, 309.45], [762.29, 256.34, 806.16, 309.51], ...]}, // face track 1
    {"face_track_id": 1, "name": "monica", "frame": [frame 1, frame 2, ...], "bbox": [bbox 1, bbox 2, ...]}, // face track 2
]
```

Each Python dictionary in this example marks the track of one face.
- "face_track_id" corresponds to the face track file name in `face_track_videos/`. In this example, the face track for "carol" is `face_track_videos/s01e01/001196-001272/0.avi` (and `0.wav`).
- "frame" is a list of frame numbers within the turn; each frame number lies between the start and end frame numbers (inclusive),
- "bbox" is a list of bounding boxes (x1, y1, x2, y2); each bounding box marks a face in its corresponding frame (e.g., the box [762.22, 257.18, 805.59, 309.45] in frame 1251 marks an appearance of Carol's face).
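A minimal sketch of consuming one of these pickles, under the assumption that each turn maps to a list of track dicts as documented. The annotation dict is inlined here and round-tripped through `pickle` to stand in for reading a real `.pkl` file:

```python
import pickle

# Inline stand-in for pickle.load(open("face_track_annotations/train/s01e01.pkl", "rb")),
# mirroring the documented per-turn format.
annotations = {
    "s01e01-001196-001272": [
        {"face_track_id": 0, "name": "carol",
         "frame": [1251, 1252],
         "bbox": [[762.22, 257.18, 805.59, 309.45],
                  [762.29, 256.34, 806.16, 309.51]]},
    ]
}
loaded = pickle.loads(pickle.dumps(annotations))

for track in loaded["s01e01-001196-001272"]:
    # "frame" and "bbox" are parallel lists: one bounding box per frame.
    assert len(track["frame"]) == len(track["bbox"])
    for frame_no, (x1, y1, x2, y2) in zip(track["frame"], track["bbox"]):
        print(track["name"], frame_no, x2 - x1, y2 - y1)  # name, frame, box width/height
```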

## Citation
If you use this work in your research, please cite:
```bibtex
@misc{wang2024friendsmmcdatasetmultimodalmultiparty,
      title={Friends-MMC: A Dataset for Multi-modal Multi-party Conversation Understanding},
      author={Yueqian Wang and Xiaojun Meng and Yuxuan Wang and Jianxin Liang and Qun Liu and Dongyan Zhao},
      year={2024},
      eprint={2412.17295},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2412.17295},
}
```
face_track_annotations.zip ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ddf34bf24f53cdd2ac1da0910d428b4f8190d5f986e181f2b9983f67ddb7e424
+size 101675405