Jzuluaga committed on
Commit 22af886
1 Parent(s): 03154fa

Update README.md

Files changed (1)
  1. README.md +121 -11
README.md CHANGED
@@ -2,36 +2,146 @@
  license: apache-2.0
  tags:
  - automatic-speech-recognition
- - experiments/data/uwb_atcc/train
  - generated_from_trainer
  metrics:
  - wer
  model-index:
- - name: 0.0ld_0.0ad_0.0attd_0.05fpd_0.075mtp_12mtl_0.0mfp_12mfl_1acc
-   results: []
+ - name: wav2vec2-xls-r-300m-en-atc-uwb-atcc
+   results:
+   - task:
+       type: automatic-speech-recognition
+       name: Speech Recognition
+     dataset:
+       type: Jzuluaga/uwb_atcc
+       name: UWB-ATCC dataset (Air Traffic Control Communications)
+       config: test
+       split: test
+     metrics:
+     - type: wer
+       value: 0.36
+       name: TEST WER
+       verified: False
  ---

- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->

- # 0.0ld_0.0ad_0.0attd_0.05fpd_0.075mtp_12mtl_0.0mfp_12mfl_1acc
+ # wav2vec2-xls-r-300m-en-atc-uwb-atcc
 
- This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the EXPERIMENTS/DATA/UWB_ATCC/TRAIN - NA dataset.
+ This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the [UWB-ATCC corpus](https://huggingface.co/datasets/Jzuluaga/uwb_atcc).
+
+ <a href="https://colab.research.google.com/github/idiap/w2v2-air-traffic/blob/main/src/eval_xlsr_atc_model.ipynb">
+ <img alt="Open in Colab" src="https://colab.research.google.com/assets/colab-badge.svg">
+ </a>
+ <a href="https://github.com/idiap/w2v2-air-traffic">
+ <img alt="GitHub" src="https://img.shields.io/badge/GitHub-Open%20source-green">
+ </a>

  It achieves the following results on the evaluation set:
  - Loss: 0.8470
  - Wer: 0.1898

+ Paper: [How Does Pre-trained Wav2Vec 2.0 Perform on Domain Shifted ASR? An Extensive Benchmark on Air Traffic Control Communications](https://arxiv.org/abs/2203.16822).
+
+ Authors: Juan Zuluaga-Gomez, Amrutha Prasad, Iuliia Nigmatulina, Saeed Sarfjoo, Petr Motlicek, Matthias Kleinert, Hartmut Helmke, Oliver Ohneiser, Qingran Zhan
+
+ Abstract: Recent work on self-supervised pre-training focuses on leveraging large-scale unlabeled speech data to build robust end-to-end (E2E) acoustic models (AM) that can later be fine-tuned on downstream tasks, e.g., automatic speech recognition (ASR). Yet, few works have investigated the impact on performance when the data properties substantially differ between the pre-training and fine-tuning phases, termed domain shift. We target this scenario by analyzing the robustness of Wav2Vec 2.0 and XLS-R models on downstream ASR for a completely unseen domain, air traffic control (ATC) communications. We benchmark these two models on several open-source and challenging ATC databases with signal-to-noise ratios between 5 and 20 dB. Relative word error rate (WER) reductions between 20% and 40% are obtained in comparison to hybrid-based ASR baselines by only fine-tuning E2E acoustic models with a smaller fraction of labeled data. We analyze WERs on the low-resource scenario and the gender bias carried by one ATC dataset.
+
+ Code (GitHub repository): https://github.com/idiap/w2v2-air-traffic
+
+ ## Usage
+
+ You can use our Google Colab notebook to run and evaluate our model: https://github.com/idiap/w2v2-air-traffic/blob/master/src/eval_xlsr_atc_model.ipynb
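+
+ For a quick start, you can also load the model through the `transformers` ASR pipeline. This is a minimal sketch, not the evaluation setup from the paper: it uses plain CTC decoding without a language model, and the audio path is a placeholder you must replace.
+
+ ```python
+ from transformers import pipeline
+
+ # Load the fine-tuned checkpoint; the pipeline also loads the matching processor
+ asr = pipeline("automatic-speech-recognition", model="Jzuluaga/wav2vec2-xls-r-300m-en-atc-uwb-atcc")
+
+ # Transcribe a local recording (ideally 16 kHz mono ATC audio)
+ print(asr("path/to/your_atc_recording.wav")["text"])
+ ```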

- ## Model description

- More information needed

  ## Intended uses & limitations

- More information needed
+ This model was fine-tuned on air traffic control data. We do not expect it to keep the same performance on other datasets, e.g., LibriSpeech or CommonVoice.

  ## Training and evaluation data

- More information needed
+ See Table 1 (page 3) in our paper, [How Does Pre-trained Wav2Vec 2.0 Perform on Domain Shifted ASR? An Extensive Benchmark on Air Traffic Control Communications](https://arxiv.org/abs/2203.16822), where we describe the train/test partitions used with this model.
+
+ - We fine-tuned this model on the UWB-ATCC corpus. You can download the raw data here: https://lindat.mff.cuni.cz/repository/xmlui/handle/11858/00-097C-0000-0001-CCA1-0
+ - You do not need to prepare the data yourself, though: we have already packaged the corpus in HuggingFace `datasets` format as the [UWB-ATCC corpus on HuggingFace](https://huggingface.co/datasets/Jzuluaga/uwb_atcc). You can browse the train/test partitions and even listen to some audio samples; a short loading sketch follows this list.
+ - If you want to prepare a database in HuggingFace format, you can follow our data loader script: [data_loader_atc.py](https://huggingface.co/datasets/Jzuluaga/uwb_atcc/blob/main/atc_data_loader.py).
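+
+ A minimal loading sketch (mirroring the call used in the inference snippet below):
+
+ ```python
+ from datasets import load_dataset
+
+ # Download the test partition of the prepared UWB-ATCC corpus from the Hub
+ uwb_atcc_test = load_dataset("Jzuluaga/uwb_atcc", "test", split="test")
+
+ # Each example carries the raw audio array and its sampling rate
+ print(uwb_atcc_test)
+ print(uwb_atcc_test[0]["audio"]["sampling_rate"])
+ ```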
+
+ ## Writing your own inference script
+
+ If you use a language model, you need to install the KenLM bindings with:
+
+ ```bash
+ conda activate your_environment
+ pip install https://github.com/kpu/kenlm/archive/master.zip
+ ```
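+
+ Note that `Wav2Vec2ProcessorWithLM` also depends on the `pyctcdecode` package for beam-search decoding; if it is missing from your environment, install it with `pip install pyctcdecode`.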
+
+ A minimal inference snippet:
+
+ ```python
+ import torch
+ import torchaudio.functional as F
+ from datasets import load_dataset
+ from transformers import AutoModelForCTC, Wav2Vec2Processor, Wav2Vec2ProcessorWithLM
+
+ USE_LM = False
+ DATASET_ID = "Jzuluaga/uwb_atcc"
+ MODEL_ID = "Jzuluaga/wav2vec2-xls-r-300m-en-atc-uwb-atcc"
+
+ # 1. Load the dataset
+ # we only load the 'test' partition; if you want the 'train' partition, change it accordingly
+ uwb_atcc_corpus_test = load_dataset(DATASET_ID, "test", split="test")
+
+ # 2. Load the model
+ model = AutoModelForCTC.from_pretrained(MODEL_ID)
+
+ # 3. Load the processors; we offer support with LM, which should yield better results
+ if USE_LM:
+     processor = Wav2Vec2ProcessorWithLM.from_pretrained(MODEL_ID)
+ else:
+     processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
+
+ # 4. Format the test sample
+ sample = next(iter(uwb_atcc_corpus_test))
+ file_sampling_rate = sample["audio"]["sampling_rate"]
+ # resample if necessary
+ if file_sampling_rate != 16000:
+     resampled_audio = F.resample(torch.tensor(sample["audio"]["array"]), file_sampling_rate, 16000).numpy()
+ else:
+     resampled_audio = torch.tensor(sample["audio"]["array"]).numpy()
+ input_values = processor(resampled_audio, sampling_rate=16000, return_tensors="pt").input_values
+
+ # 5. Run the forward pass in the model
+ with torch.no_grad():
+     logits = model(input_values).logits
+
+ # get the transcription with the processor
+ if USE_LM:
+     transcription = processor.batch_decode(logits.numpy()).text
+ else:
+     pred_ids = torch.argmax(logits, dim=-1)
+     transcription = processor.batch_decode(pred_ids)
+ # print the output
+ print(transcription)
+ ```
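+
+ To reproduce a corpus-level WER like the one reported above, you can decode the whole test split and score it. A rough sketch with the `jiwer` package (`pip install jiwer`), reusing `model` and `processor` from the snippet above; the `text` column name for the reference transcript is an assumption you should verify on the dataset card:
+
+ ```python
+ import torch
+ import jiwer
+
+ references, hypotheses = [], []
+ for sample in uwb_atcc_corpus_test:
+     # assumes 16 kHz audio; resample first as shown above if needed
+     input_values = processor(sample["audio"]["array"], sampling_rate=16000, return_tensors="pt").input_values
+     with torch.no_grad():
+         logits = model(input_values).logits
+     pred_ids = torch.argmax(logits, dim=-1)
+     hypotheses.append(processor.batch_decode(pred_ids)[0])
+     references.append(sample["text"])  # assumed column name, check the dataset card
+
+ print("WER:", jiwer.wer(references, hypotheses))
+ ```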
+
+ # Cite us
+
+ If you use this code for your research, please cite our paper with:
+
+ ```bibtex
+ @article{zuluaga2022how,
+     title={How Does Pre-trained Wav2Vec 2.0 Perform on Domain Shifted ASR? An Extensive Benchmark on Air Traffic Control Communications},
+     author={Zuluaga-Gomez, Juan and Prasad, Amrutha and Nigmatulina, Iuliia and Sarfjoo, Saeed and Motlicek, Petr and Kleinert, Matthias and Helmke, Hartmut and Ohneiser, Oliver and Zhan, Qingran},
+     journal={IEEE Spoken Language Technology Workshop (SLT), Doha, Qatar},
+     year={2022}
+ }
+ ```
+
+ and:
+
+ ```bibtex
+ @article{zuluaga2022bertraffic,
+     title={BERTraffic: BERT-based Joint Speaker Role and Speaker Change Detection for Air Traffic Control Communications},
+     author={Zuluaga-Gomez, Juan and Sarfjoo, Seyyed Saeed and Prasad, Amrutha and Nigmatulina, Iuliia and Motlicek, Petr and Ondre, Karel and Ohneiser, Oliver and Helmke, Hartmut},
+     journal={IEEE Spoken Language Technology Workshop (SLT), Doha, Qatar},
+     year={2022}
+ }
+ ```
 
  ## Training procedure