---
license: mit
language:
- ar
- fr
- es
- zh
pretty_name: visper
---

This repository contains **ViSpeR**, a large-scale dataset and models for Visual Speech Recognition in Arabic, Chinese, French, and Spanish.

## Dataset Summary:

Given the scarcity of publicly available VSR data for non-English languages, we collected VSR data at scale for four of the most widely spoken languages.

Comparison of VSR datasets: our proposed ViSpeR dataset is larger than other datasets covering non-English languages for the VSR task. For our dataset, the numbers in parentheses denote the number of clips. We also give the clip coverage under the TedX and Wild subsets of ViSpeR.

| Dataset | French (fr) | Spanish (es) | Arabic (ar) | Chinese (zh) |
|-------------------|-------------|--------------|--------------|--------------|
| **MuAVIC**        | 176         | 178          | 16           | --           |
| **VoxCeleb2**     | 124         | 42           | --           | --           |
| **AVSpeech**      | 122         | 270          | --           | --           |
| **ViSpeR (TedX)** | 192 (160k)  | 207 (151k)   | 49 (48k)     | 129 (143k)   |
| **ViSpeR (Wild)** | 799 (548k)  | 851 (531k)   | 1152 (1.01M) | 658 (593k)   |
| **ViSpeR (full)** | 991 (709k)  | 1058 (683k)  | 1200 (1.06M) | 787 (736k)   |

## Downloading the data:

First, use the `language.json` files to download the videos and put them in separate folders. The raw data should be structured as follows:
```bash
Data/
├── Chinese/
│   ├── video_id.mp4
│   └── ...
├── Arabic/
│   ├── video_id.mp4
│   └── ...
├── French/
│   ├── video_id.mp4
│   └── ...
└── Spanish/
    ├── video_id.mp4
    └── ...
```
+
50
+ ## Setup:
51
+
52
+ 1- Setup the environement:
53
+ ```bash
54
+ conda create --name visper python=3.10
55
+ conda activate visper
56
+ pip install -r requirements.txt
57
+ ```
58
+
59
+ 2- Install ffmpeg:
60
+ ```bash
61
+ conda install "ffmpeg<5" -c conda-forge
62
+ ```
63
+
64
+ ## Processing the data:
65
+
66
+ You need the download the meta data from [HF](https://huggingface.co/datasets/tiiuae/visper), this includes ```train.tar.gz``` and ```test.tar,gz```. Then, use the provided metadata to process the raw data for creating the ViSpeR dataset. You can use the ```crop_videos.py``` to process the data, note that all clips are cropped and transformed
67
+
68
+ | Languages | Split | Link |
69
+ |----------|---------------|----------------|
70
+ | en,fr, es, ar, cz | train | [train](https://huggingface.co/datasets/tiiuae/visper/train.tar.gz) |
71
+ | en,fr, es, ar, cz | test | [test](https://huggingface.co/tiiuae/visper/test.tar.gz) |
72
+
73
+
74
+
75
+ ```bash
76
+ python crop_videos.py --video_dir [path_to_data_language] --save_path [save_path_language] --json [language_metadata.json] --use_ffmpeg True
77
+ ```
78
+
79
+ ```bash
80
+ ViSpeR/
81
+ β”œβ”€β”€ Chinese/
82
+ β”‚ β”œβ”€β”€ video_id/
83
+ β”‚ β”‚ │── 00001.mp4
84
+ β”‚ β”‚ │── 00001.json
85
+ β”‚ └── ...
86
+ β”œβ”€β”€ Arabic/
87
+ β”‚ β”œβ”€β”€ video_id/
88
+ β”‚ β”‚ │── 00001.mp4
89
+ β”‚ β”‚ │── 00001.json
90
+ β”‚ └── ...
91
+ β”œβ”€β”€ French/
92
+ β”‚ β”œβ”€β”€ video_id/
93
+ β”‚ β”‚ │── 00001.mp4
94
+ β”‚ β”‚ │── 00001.json
95
+ β”‚ └── ...
96
+ β”œβ”€β”€ Spanish/
97
+ β”‚ β”œβ”€β”€ video_id/
98
+ β”‚ β”‚ │── 00001.mp4
99
+ β”‚ β”‚ │── 00001.json
100
+ β”‚ └── ...
101
+
102
+ ```
103
+
104
+ The ```video_id/xxxx.json``` has the 'label' of the corresponding video ```video_id/xxxx.mp4```.
105
+
106
+ ## Intended Use
107
+
108
+ This dataset can be used to train models for visual speech recognition. It's particularly useful for research and development purposes in the field of audio-visual content processing. The data can be used to assess the performance of current and future models.
109
+
110
+ ## Limitations and Biases
111
+ Due to the data collection process focusing on YouTube, biases inherent to the platform may be present in the dataset. Also, while measures are taken to ensure diversity in content, the dataset might still be skewed towards certain types of content due to the filtering process.
112
+
113
+
114
+ ## Citation
115
+ ```bash
116
+
117
+ @inproceedings{djilali2023lip2vec,
118
+ title={Lip2Vec: Efficient and Robust Visual Speech Recognition via Latent-to-Latent Visual to Audio Representation Mapping},
119
+ author={Djilali, Yasser Abdelaziz Dahou and Narayan, Sanath and Boussaid, Haithem and Almazrouei, Ebtessam and Debbah, Merouane},
120
+ booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
121
+ pages={13790--13801},
122
+ year={2023}
123
+ }
124
+
125
+ @inproceedings{djilali2024vsr,
126
+ title={Do VSR Models Generalize Beyond LRS3?},
127
+ author={Djilali, Yasser Abdelaziz Dahou and Narayan, Sanath and LeBihan, Eustache and Boussaid, Haithem and Almazrouei, Ebtesam and Debbah, Merouane},
128
+ booktitle={Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision},
129
+ pages={6635--6644},
130
+ year={2024}
131
+ }
132
+ ```