---
datasets:
- librispeech_asr
- declare-lab/MELD
- PolyAI/minds14
- google/fleurs
language:
- en
metrics:
- accuracy
- f1
- mae
- pearsonr
- exact_match
tags:
- audio
- speech
- pre-training
- spoken language understanding
---

SEGUE is a pre-training approach for sequence-level spoken language understanding (SLU) tasks.
We use knowledge distillation on a parallel speech-text corpus (e.g. an ASR corpus) to distil
language understanding knowledge from a textual sentence embedder into a pre-trained speech encoder.
Applied to Wav2Vec 2.0, SEGUE improves performance on many SLU tasks, including
intent classification / slot-filling, spoken sentiment analysis, and spoken emotion classification.
These improvements were observed in both fine-tuned and frozen-encoder (non-fine-tuned) settings, as well as in few-shot settings.

## Model Details

- **Repository:** https://github.com/declare-lab/segue
- **Paper:** Sentence Embedder Guided Utterance Encoder (SEGUE) for Spoken Language Understanding (Interspeech 2023)

## How to Get Started with the Model

To use this model checkpoint, you need the model classes from [our GitHub repository](https://github.com/declare-lab/segue).

```python3
from segue.modeling_segue import SegueModel
import soundfile

# assuming this is 16 kHz mono audio
raw_audio_array, sampling_rate = soundfile.read('example.wav')

model = SegueModel.from_pretrained('declare-lab/segue-w2v2-base')
inputs = model.processor(audio=raw_audio_array, sampling_rate=sampling_rate)
outputs = model(**inputs)
```

You do not need to create the `Processor` yourself; it is already available as `model.processor`.

`SegueForRegression` and `SegueForClassification` are also available. For classification,
the number of classes can be specified through the `n_classes` field in the model config,
e.g. `SegueForClassification.from_pretrained('declare-lab/segue-w2v2-base', n_classes=7)`.
Multi-label classification is also supported, e.g. `n_classes=[3, 7]` for two labels with 3 and 7 classes respectively.
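
As an illustration, here is a minimal classification sketch following the same pattern as the example above. It assumes that `SegueForClassification` is importable from `segue.modeling_segue`, exposes `model.processor` like `SegueModel`, and returns outputs with a `logits` field; please check the repository for the exact interface.

```python3
from segue.modeling_segue import SegueForClassification
import soundfile

# assumptions: the classifier lives in the same module as SegueModel, exposes
# `model.processor`, and returns `logits`; see the GitHub repository to confirm
raw_audio_array, sampling_rate = soundfile.read('example.wav')  # 16 kHz mono audio

model = SegueForClassification.from_pretrained('declare-lab/segue-w2v2-base', n_classes=7)
inputs = model.processor(audio=raw_audio_array, sampling_rate=sampling_rate)
outputs = model(**inputs)
predicted_class = outputs.logits.argmax(dim=-1)
```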

Pre-training and downstream task training scripts are available on [our GitHub repository](https://github.com/declare-lab/segue).
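
To give a rough picture of what SEGUE pre-training does (distilling a textual sentence embedder into the speech encoder, as described at the top of this card), here is a minimal conceptual sketch. It is **not** the released training code: the choice of sentence embedder (`all-mpnet-base-v2`), mean pooling, the linear projection, and the MSE loss below are illustrative assumptions; see the paper and repository for the actual recipe.

```python3
import torch
import torch.nn.functional as F
from transformers import Wav2Vec2Model, Wav2Vec2FeatureExtractor
from sentence_transformers import SentenceTransformer

# frozen textual sentence embedder (teacher) -- illustrative choice
teacher = SentenceTransformer('all-mpnet-base-v2')
# pre-trained speech encoder (student) to be further pre-trained by distillation
student = Wav2Vec2Model.from_pretrained('facebook/wav2vec2-base')
extractor = Wav2Vec2FeatureExtractor.from_pretrained('facebook/wav2vec2-base')
# project the pooled speech representation to the teacher's embedding size
proj = torch.nn.Linear(student.config.hidden_size, teacher.get_sentence_embedding_dimension())

def distillation_loss(waveform, transcript, sampling_rate=16000):
    # target: sentence embedding of the transcript (teacher is kept frozen)
    with torch.no_grad():
        target = teacher.encode([transcript], convert_to_tensor=True)
    # student: encode the audio, mean-pool over time, project into the embedding space
    inputs = extractor(waveform, sampling_rate=sampling_rate, return_tensors='pt')
    hidden = student(**inputs).last_hidden_state      # (1, time, hidden_size)
    pooled = proj(hidden.mean(dim=1))                 # (1, embedding_dim)
    return F.mse_loss(pooled, target)
```

In practice, a loss of this kind would be computed over a parallel speech-text corpus (e.g. LibriSpeech) and used to update the speech encoder and projection.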

## Results

We show only simplified MInDS-14 and MELD results for brevity.
Please refer to the paper for full results.

### MInDS-14 (intent classification)

*Note: we used only the en-US subset of MInDS-14.*

#### Fine-tuning

|Model|Accuracy|
|-|-|
|w2v 2.0|89.4±2.3|
|SEGUE|**97.6±0.5**|

*Note: Wav2Vec 2.0 fine-tuning was unstable. Only 3 out of 6 runs converged; the results shown were taken from the converged runs only.*

#### Frozen encoder

|Model|Accuracy|
|-|-|
|w2v 2.0|54.0|
|SEGUE|**77.9**|

#### Few-shot

Plots of k-shot (per class) accuracy against k:

<img src='readme/minds-14.svg' style='width: 50%;'>

### MELD (sentiment and emotion classification)

#### Fine-tuning

|Model|Sentiment F1|Emotion F1|
|-|-|-|
|w2v 2.0|47.3|39.3|
|SEGUE|53.2|41.1|
|SEGUE (higher LR)|**54.1**|**47.2**|

*Note: Wav2Vec 2.0 fine-tuning was unstable at the higher LR.*

#### Frozen encoder

|Model|Sentiment F1|Emotion F1|
|-|-|-|
|w2v 2.0|45.0±0.7|34.3±1.2|
|SEGUE|**45.8±0.1**|**35.7±0.3**|

#### Few-shot

Plots of MELD k-shot (per class) F1 score against k, for sentiment and emotion respectively:

<img src='readme/meld-sent.svg' style='display: inline; width: 40%;'>
<img src='readme/meld-emo.svg' style='display: inline; width: 40%;'>

## Limitations

In the paper, we hypothesized that SEGUE may perform worse on tasks that rely less on
understanding and more on word detection. This may explain why SEGUE did not manage to
improve upon Wav2Vec 2.0 on the Fluent Speech Commands (FSC) task. We also experimented with
an ASR task (FLEURS), which relies heavily on word detection, to further demonstrate this.

However, this does not mean that SEGUE performs worse on intent classification tasks
in general. MInDS-14 benefited greatly from SEGUE despite also being an intent
classification task, as it has more free-form utterances that may benefit more from
understanding.

## Citation

```bibtex
@inproceedings{segue2023,
  title={Sentence Embedder Guided Utterance Encoder (SEGUE) for Spoken Language Understanding},
  author={Tan, Yi Xuan and Majumder, Navonil and Poria, Soujanya},
  booktitle={Interspeech},
  year={2023}
}
```