---
datasets:
- tiiuae/visper
language:
- en
- es
- fr
- ar
- zh
inference: false
license: cc-by-nc-2.0
metrics:
- wer
---


# ViSpeR: Multilingual Audio-Visual Speech Recognition

ViSpeR is a model for visual and audio-visual speech recognition (VSR/AVSR), trained on 5,500 hours of labelled video data.

# Training details:

We use our proposed dataset to train an encoder-decoder model in a fully supervised manner under a multilingual setting. The encoder has 12 layers and the decoder has 6; the hidden size, MLP dimension and number of attention heads are set to 768, 3072 and 12, respectively. A unigram tokenizer is learned over all languages combined, with a vocabulary size of 21k.
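
As a rough illustration, this configuration maps onto PyTorch's generic `nn.Transformer`, and the tokenizer could be trained with `sentencepiece`. This is a minimal sketch, not the released implementation: the visual/audio front-end is omitted, and the input file name is hypothetical.

```python
import torch.nn as nn
import sentencepiece as spm

# Encoder-decoder backbone with the hyper-parameters stated above
# (the released model additionally has a visual/audio front-end,
# which is not sketched here).
backbone = nn.Transformer(
    d_model=768,            # hidden size
    nhead=12,               # attention heads
    num_encoder_layers=12,  # encoder depth
    num_decoder_layers=6,   # decoder depth
    dim_feedforward=3072,   # MLP size
    batch_first=True,
)

# Unigram tokenizer learned over the transcripts of all languages
# combined ("all_languages.txt" is a hypothetical input file).
spm.SentencePieceTrainer.train(
    input="all_languages.txt",
    model_prefix="visper_unigram",
    vocab_size=21000,
    model_type="unigram",
)
```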
The models are trained for 150 epochs on 64 Nvidia A100 GPUs (40 GB) using the AdamW optimizer with a maximum learning rate of 1e-3 and a weight decay of 0.1. A cosine scheduler with a 5-epoch warm-up is used, and the maximum batch size per GPU is set to 1800 video frames.
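
Continuing the sketch above, the optimizer and schedule could look as follows (a sketch under the stated hyper-parameters; stepping the scheduler once per epoch is an assumption):

```python
import math
from torch.optim import AdamW
from torch.optim.lr_scheduler import LambdaLR

# `backbone` is the nn.Transformer from the sketch above.
optimizer = AdamW(backbone.parameters(), lr=1e-3, weight_decay=0.1)

TOTAL_EPOCHS, WARMUP_EPOCHS = 150, 5

def cosine_with_warmup(epoch: int) -> float:
    """Linear warm-up for the first 5 epochs, then cosine decay to zero."""
    if epoch < WARMUP_EPOCHS:
        return (epoch + 1) / WARMUP_EPOCHS
    progress = (epoch - WARMUP_EPOCHS) / (TOTAL_EPOCHS - WARMUP_EPOCHS)
    return 0.5 * (1.0 + math.cos(math.pi * progress))

scheduler = LambdaLR(optimizer, lr_lambda=cosine_with_warmup)
# Typical loop: optimizer.step() per batch, scheduler.step() per epoch.
```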

# Performance:

We provide the results of the model on our proposed benchmarks in the table below:

| Language | VSR (WER)  | AVSR (WER) |
|----------|------------|------------|
| French   | 29.8       | 5.7        |
| Spanish  | 39.4       | 4.4        |
| Arabic   | 47.8       | 8.4        |
| Chinese  | 51.3 (CER) | 15.4 (CER) |
| English  | 49.1       | 8.1        |
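
For reference, WER and CER scores like those above can be computed from model transcripts with, e.g., the `jiwer` package (an assumption; the authors' exact evaluation script is not specified):

```python
import jiwer

reference = "bonjour tout le monde"     # ground-truth transcript
hypothesis = "bonjour a tout le monde"  # model output

print(f"WER: {jiwer.wer(reference, hypothesis):.3f}")  # word-level errors
print(f"CER: {jiwer.cer(reference, hypothesis):.3f}")  # character-level errors
```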

# Broader impact:

While we hope that ViSpeR will open the door to new research questions and opportunities, it should be used for research purposes only. Releasing ViSpeR (dataset and models), trained on a substantial corpus of multilingual video data, also raises potential dual-use concerns: although the technology behind ViSpeR offers significant advances in multimodal speech recognition, it is not intended for use beyond research.

## ViSpeR paper coming soon

## Check out our VSR-related works
```bibtex
@inproceedings{djilali2023lip2vec,
  title={Lip2Vec: Efficient and Robust Visual Speech Recognition via Latent-to-Latent Visual to Audio Representation Mapping},
  author={Djilali, Yasser Abdelaziz Dahou and Narayan, Sanath and Boussaid, Haithem and Almazrouei, Ebtessam and Debbah, Merouane},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  pages={13790--13801},
  year={2023}
}

@inproceedings{djilali2024vsr,
  title={Do VSR Models Generalize Beyond LRS3?},
  author={Djilali, Yasser Abdelaziz Dahou and Narayan, Sanath and LeBihan, Eustache and Boussaid, Haithem and Almazrouei, Ebtesam and Debbah, Merouane},
  booktitle={Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision},
  pages={6635--6644},
  year={2024}
}
```