Jzuluaga committed on
Commit 549fe69
1 Parent(s): edb315f

Update README.md

Files changed (1)
  1. README.md +145 -14
README.md CHANGED
@@ -1,23 +1,70 @@
  ---
  license: apache-2.0
  tags:
  - generated_from_trainer
  metrics:
- - precision
- - recall
- - f1
- - accuracy
  model-index:
- - name: uwb_atcc
- results: []
  ---

- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->

- # uwb_atcc

- This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
  It achieves the following results on the evaluation set:
  - Loss: 0.0098
  - Precision: 0.9760
@@ -25,17 +72,101 @@ It achieves the following results on the evaluation set:
  - F1: 0.9750
  - Accuracy: 0.9965

- ## Model description

- More information needed

  ## Intended uses & limitations

- More information needed

  ## Training and evaluation data

- More information needed

  ## Training procedure

  ---
  license: apache-2.0
+ language: en
+ datasets:
+ - Jzuluaga/uwb_atcc
  tags:
+ - text
+ - token-classification
+ - en-atc
+ - en
  - generated_from_trainer
+ - bert
+ - bertraffic
  metrics:
+ - Precision
+ - Recall
+ - Accuracy
+ - F1
+ - Jaccard Error Rate
+ widget:
+ - text: "lining up runway three one csa five bravo easy five three kilo romeo contact ruzyne ground one two one decimal nine good bye"
+ - text: "swiss four six one foxtrot line up runway three one and wait one two one nine csa four yankee alfa"
+ - text: "tower klm five five tango ils three one wizz air four papa uniform tower roger"
  model-index:
+ - name: bert-base-token-classification-for-atc-en-uwb-atcc
+   results:
+   - task:
+       type: token-classification
+       name: chunking
+     dataset:
+       type: Jzuluaga/uwb_atcc
+       name: UWB-ATCC corpus (Air Traffic Control Communications)
+       config: test
+       split: test
+     metrics:
+     - type: F1
+       value: 0.87
+       name: TEST F1 (macro)
+       verified: false
+     - type: Accuracy
+       value: 0.91
+       name: TEST Accuracy
+       verified: false
+     - type: Precision
+       value: 0.86
+       name: TEST Precision (macro)
+       verified: false
+     - type: Recall
+       value: 0.88
+       name: TEST Recall (macro)
+       verified: false
+     - type: Jaccard Error Rate
+       value: 0.169
+       name: TEST Jaccard Error Rate
+       verified: false
+
  ---

+ # bert-base-token-classification-for-atc-en-uwb-atcc
 


+ This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the [UWB-ATCC corpus](https://huggingface.co/datasets/Jzuluaga/uwb_atcc).
+
+ <a href="https://github.com/idiap/bert-text-diarization-atc">
+ <img alt="GitHub" src="https://img.shields.io/badge/GitHub-Open%20source-green">
+ </a>
+
  It achieves the following results on the evaluation set:
  - Loss: 0.0098
  - Precision: 0.9760
 
  - F1: 0.9750
  - Accuracy: 0.9965


+ Paper: [BERTraffic: BERT-based Joint Speaker Role and Speaker Change Detection for Air Traffic Control Communications](https://arxiv.org/abs/2110.05781).
+
+ Authors: Juan Zuluaga-Gomez, Seyyed Saeed Sarfjoo, Amrutha Prasad, Iuliia Nigmatulina, Petr Motlicek, Karel Ondrej, Oliver Ohneiser, Hartmut Helmke
+
+ Abstract: Automatic speech recognition (ASR) allows transcribing the communications between air traffic controllers (ATCOs) and aircraft pilots. The transcriptions are used later to extract ATC named entities, e.g., aircraft callsigns. One common challenge is speech activity detection (SAD) and speaker diarization (SD). In the failure condition, two or more segments remain in the same recording, jeopardizing the overall performance. We propose a system that combines SAD and a BERT model to perform speaker change detection and speaker role detection (SRD) by chunking ASR transcripts, i.e., SD with a defined number of speakers together with SRD. The proposed model is evaluated on real-life public ATC databases. Our BERT SD model baseline reaches up to 10% and 20% token-based Jaccard error rate (JER) in public and private ATC databases. We also achieved relative improvements of 32% and 7.7% in JERs and SD error rate (DER), respectively, compared to VBx, a well-known SD system.
+
+ Code — GitHub repository: https://github.com/idiap/bert-text-diarization-atc
+

  ## Intended uses & limitations

+ This model was fine-tuned on air traffic control data. We do not expect it to keep the same performance on other datasets on which BERT was pre-trained or fine-tuned.
 
  ## Training and evaluation data

+ See Table 3 (page 5) in our paper: [BERTraffic: BERT-based Joint Speaker Role and Speaker Change Detection for Air Traffic Control Communications](https://arxiv.org/abs/2110.05781). There we describe the data used to fine-tune our model for speaker role and speaker change detection.
+
+ - We use the UWB-ATCC corpus to fine-tune this model. You can download the raw data here: https://lindat.mff.cuni.cz/repository/xmlui/handle/11858/00-097C-0000-0001-CCA1-0
+ - However, do not worry: we have prepared scripts in our repository for preparing this database (see also the sketch after this list):
+   - Dataset preparation folder: https://github.com/idiap/bert-text-diarization-atc/tree/main/data/databases/uwb_atcc
+   - Prepare the data: https://github.com/idiap/bert-text-diarization-atc/blob/main/data/databases/uwb_atcc/data_prepare_uwb_atcc_corpus.sh
+   - Get the data in the format required by HuggingFace: https://github.com/idiap/bert-text-diarization-atc/blob/main/data/databases/uwb_atcc/exp_prepare_uwb_atcc_corpus.sh
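+
+ If you just want to inspect the corpus from the Hugging Face Hub, a minimal sketch with the `datasets` library follows. Note the assumptions: it presumes the Hub copy at `Jzuluaga/uwb_atcc` loads with its default configuration and exposes a text column with the transcripts; check the dataset card for the exact schema.
+
+ ```python
+ from datasets import load_dataset
+
+ # Load the UWB-ATCC corpus from the Hugging Face Hub
+ # (assumed: default configuration, a "train" split, a text column per row).
+ uwb_atcc = load_dataset("Jzuluaga/uwb_atcc")
+
+ print(uwb_atcc)              # available splits and columns
+ print(uwb_atcc["train"][0])  # first example
+ ```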
+
+
+ ## Writing your own inference script
+
+ You can run inference directly with the `transformers` library. Install it (together with PyTorch) first:
+
+ ```bash
+ conda activate your_environment
+ pip install transformers torch
+ ```
+
+ A minimal inference snippet:
+
+
+ ```python
+ from transformers import AutoTokenizer, AutoModelForTokenClassification, pipeline
+
+ tokenizer = AutoTokenizer.from_pretrained("Jzuluaga/bert-base-token-classification-for-atc-en-uwb-atcc")
+ model = AutoModelForTokenClassification.from_pretrained("Jzuluaga/bert-base-token-classification-for-atc-en-uwb-atcc")
+
+ # Process a text sample (an ATC utterance from the UWB-ATCC corpus)
+ nlp = pipeline('ner', model=model, tokenizer=tokenizer, aggregation_strategy="simple")
+ nlp("lining up runway three one csa five bravo b easy five three kilo romeo contact ruzyne ground one two one decimal nine good bye")
+
+ # Expected output:
+ # [{'entity_group': 'pilot',
+ #   'score': 0.99991554,
+ #   'word': 'lining up runway three one csa five bravo b', 'start': 0, 'end': 43
+ #  },
+ #  {'entity_group': 'atco',
+ #   'score': 0.99994576,
+ #   'word': 'easy five three kilo romeo contact ruzyne ground one two one decimal nine good bye', 'start': 44, 'end': 126
+ #  }]
+ ```
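+
+ If you want diarization-style output, you can group the aggregated entities into speaker turns. This is a sketch of ours, not part of the original repository; it only assumes the pipeline output format shown above (a list of dicts with `entity_group` and `word` keys).
+
+ ```python
+ # Hypothetical helper: convert aggregated NER entities into (role, text) turns.
+ def to_speaker_turns(entities):
+     return [(ent["entity_group"], ent["word"]) for ent in entities]
+
+ # `entities` stands in for the list returned by the pipeline call above.
+ entities = [
+     {"entity_group": "pilot", "word": "lining up runway three one csa five bravo b"},
+     {"entity_group": "atco", "word": "easy five three kilo romeo contact ruzyne ground one two one decimal nine good bye"},
+ ]
+ for role, text in to_speaker_turns(entities):
+     print(f"{role}:\t{text}")
+ # pilot:  lining up runway three one csa five bravo b
+ # atco:   easy five three kilo romeo contact ruzyne ground one two one decimal nine good bye
+ ```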
+
+ # Cite us
+
+ If you use this code for your research, please cite our paper with:
+
+ ```
+ @article{zuluaga2022bertraffic,
+   title={BERTraffic: BERT-based Joint Speaker Role and Speaker Change Detection for Air Traffic Control Communications},
+   author={Zuluaga-Gomez, Juan and Sarfjoo, Seyyed Saeed and Prasad, Amrutha and others},
+   journal={IEEE Spoken Language Technology Workshop (SLT), Doha, Qatar},
+   year={2022}
+ }
+ ```
+
+ and,
+
+ ```
+ @article{zuluaga2022how,
+   title={How Does Pre-trained Wav2Vec2.0 Perform on Domain Shifted ASR? An Extensive Benchmark on Air Traffic Control Communications},
+   author={Zuluaga-Gomez, Juan and Prasad, Amrutha and Nigmatulina, Iuliia and Sarfjoo, Saeed and others},
+   journal={IEEE Spoken Language Technology Workshop (SLT), Doha, Qatar},
+   year={2022}
+ }
+ ```
+
+ and,
+
+ ```
+ @article{zuluaga2022atco2,
+   title={ATCO2 corpus: A Large-Scale Dataset for Research on Automatic Speech Recognition and Natural Language Understanding of Air Traffic Control Communications},
+   author={Zuluaga-Gomez, Juan and Vesel{\`y}, Karel and Sz{\"o}ke, Igor and Motlicek, Petr and others},
+   journal={arXiv preprint arXiv:2211.04054},
+   year={2022}
+ }
+ ```

  ## Training procedure