mazesmazes committed (verified)
Commit 6020e02 · Parent(s): 9bbb9cd

Training in progress - step 1000

.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ tokenizer.json filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,199 @@
+ ---
+ library_name: transformers
+ tags: []
+ ---
+
+ # Model Card for Model ID
+
+ <!-- Provide a quick summary of what the model is/does. -->
+
+
+
+ ## Model Details
+
+ ### Model Description
+
+ <!-- Provide a longer summary of what this model is. -->
+
+ This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
+
+ - **Developed by:** [More Information Needed]
+ - **Funded by [optional]:** [More Information Needed]
+ - **Shared by [optional]:** [More Information Needed]
+ - **Model type:** [More Information Needed]
+ - **Language(s) (NLP):** [More Information Needed]
+ - **License:** [More Information Needed]
+ - **Finetuned from model [optional]:** [More Information Needed]
+
+ ### Model Sources [optional]
+
+ <!-- Provide the basic links for the model. -->
+
+ - **Repository:** [More Information Needed]
+ - **Paper [optional]:** [More Information Needed]
+ - **Demo [optional]:** [More Information Needed]
+
+ ## Uses
+
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
+
+ ### Direct Use
+
+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
+
+ [More Information Needed]
+
+ ### Downstream Use [optional]
+
+ <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
+
+ [More Information Needed]
+
+ ### Out-of-Scope Use
+
+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
+
+ [More Information Needed]
+
+ ## Bias, Risks, and Limitations
+
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
+
+ [More Information Needed]
+
+ ### Recommendations
+
+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
+
+ Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. More information is needed for further recommendations.
+
+ ## How to Get Started with the Model
+
+ Use the code below to get started with the model.
+
+ [More Information Needed]
+
+ ## Training Details
+
+ ### Training Data
+
+ <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
+
+ [More Information Needed]
+
+ ### Training Procedure
+
+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
+
+ #### Preprocessing [optional]
+
+ [More Information Needed]
+
+
+ #### Training Hyperparameters
+
+ - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
+
+ #### Speeds, Sizes, Times [optional]
+
+ <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
+
+ [More Information Needed]
+
+ ## Evaluation
+
+ <!-- This section describes the evaluation protocols and provides the results. -->
+
+ ### Testing Data, Factors & Metrics
+
+ #### Testing Data
+
+ <!-- This should link to a Dataset Card if possible. -->
+
+ [More Information Needed]
+
+ #### Factors
+
+ <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
+
+ [More Information Needed]
+
+ #### Metrics
+
+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->
+
+ [More Information Needed]
+
+ ### Results
+
+ [More Information Needed]
+
+ #### Summary
+
+
+
+ ## Model Examination [optional]
+
+ <!-- Relevant interpretability work for the model goes here -->
+
+ [More Information Needed]
+
+ ## Environmental Impact
+
+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
+
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
+
+ - **Hardware Type:** [More Information Needed]
+ - **Hours used:** [More Information Needed]
+ - **Cloud Provider:** [More Information Needed]
+ - **Compute Region:** [More Information Needed]
+ - **Carbon Emitted:** [More Information Needed]
+
+ ## Technical Specifications [optional]
+
+ ### Model Architecture and Objective
+
+ [More Information Needed]
+
+ ### Compute Infrastructure
+
+ [More Information Needed]
+
+ #### Hardware
+
+ [More Information Needed]
+
+ #### Software
+
+ [More Information Needed]
+
+ ## Citation [optional]
+
+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
+
+ **BibTeX:**
+
+ [More Information Needed]
+
+ **APA:**
+
+ [More Information Needed]
+
+ ## Glossary [optional]
+
+ <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
+
+ [More Information Needed]
+
+ ## More Information [optional]
+
+ [More Information Needed]
+
+ ## Model Card Authors [optional]
+
+ [More Information Needed]
+
+ ## Model Card Contact
+
+ [More Information Needed]
adapter_config.json ADDED
@@ -0,0 +1,43 @@
+ {
+   "alora_invocation_tokens": null,
+   "alpha_pattern": {},
+   "arrow_config": null,
+   "auto_mapping": null,
+   "base_model_name_or_path": "mazesmazes/tiny-audio-embedded-3",
+   "bias": "none",
+   "corda_config": null,
+   "ensure_weight_tying": false,
+   "eva_config": null,
+   "exclude_modules": null,
+   "fan_in_fan_out": false,
+   "inference_mode": true,
+   "init_lora_weights": true,
+   "layer_replication": null,
+   "layers_pattern": null,
+   "layers_to_transform": null,
+   "loftq_config": {},
+   "lora_alpha": 32,
+   "lora_bias": false,
+   "lora_dropout": 0.0,
+   "lora_ga_config": null,
+   "megatron_config": null,
+   "megatron_core": "megatron.core",
+   "modules_to_save": null,
+   "peft_type": "LORA",
+   "peft_version": "0.19.1",
+   "qalora_group_size": 16,
+   "r": 64,
+   "rank_pattern": {},
+   "revision": null,
+   "target_modules": [
+     "v_proj",
+     "q_proj"
+   ],
+   "target_parameters": null,
+   "task_type": "CAUSAL_LM",
+   "trainable_token_indices": null,
+   "use_bdlora": null,
+   "use_dora": false,
+   "use_qalora": false,
+   "use_rslora": false
+ }
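The adapter config above sets `r: 64` and `lora_alpha: 32` with `use_rslora: false`, so the low-rank update is scaled by alpha / r. A minimal standalone sketch of that arithmetic, using an illustrative subset of the JSON fields (not read from the repo):

```python
import json

# Illustrative subset of the adapter_config.json fields shown above.
adapter_config = json.loads("""
{
  "lora_alpha": 32,
  "r": 64,
  "target_modules": ["v_proj", "q_proj"],
  "use_rslora": false
}
""")

# Classic LoRA scales the low-rank update BA by alpha / r; rsLoRA
# (disabled here) would use alpha / sqrt(r) instead.
scaling = adapter_config["lora_alpha"] / adapter_config["r"]
print(scaling)  # 0.5
```

With alpha smaller than the rank, the adapter's contribution is damped rather than amplified.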
adapter_model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cd2c62cbb461ffcade2fb13e1261ebfb623bf263de425121eade62514acbab06
+ size 36715216
alignment.py ADDED
@@ -0,0 +1,286 @@
+ """Forced alignment for word-level timestamps using Wav2Vec2."""
+
+ import numpy as np
+ import torch
+
+
+ def _get_device() -> str:
+     """Get the best available device for non-transformers models."""
+     if torch.cuda.is_available():
+         return "cuda"
+     if torch.backends.mps.is_available():
+         return "mps"
+     return "cpu"
+
+
+ class ForcedAligner:
+     """Lazy-loaded forced aligner for word-level timestamps using torchaudio wav2vec2.
+
+     Uses the Viterbi trellis algorithm to find the optimal alignment path.
+     """
+
+     _bundle = None
+     _model = None
+     _labels = None
+     _dictionary = None
+
+     @classmethod
+     def get_instance(cls, device: str = "cuda"):
+         """Get or create the forced alignment model (singleton).
+
+         Args:
+             device: Device to run the model on ("cuda" or "cpu")
+
+         Returns:
+             Tuple of (model, labels, dictionary)
+         """
+         if cls._model is None:
+             import torchaudio
+
+             cls._bundle = torchaudio.pipelines.WAV2VEC2_ASR_BASE_960H
+             cls._model = cls._bundle.get_model().to(device)
+             cls._model.eval()
+             cls._labels = cls._bundle.get_labels()
+             cls._dictionary = {c: i for i, c in enumerate(cls._labels)}
+         return cls._model, cls._labels, cls._dictionary
+
+     @staticmethod
+     def _get_trellis(emission: torch.Tensor, tokens: list[int], blank_id: int = 0) -> torch.Tensor:
+         """Build the trellis for forced alignment using the forward algorithm.
+
+         trellis[t, j] is the log probability of the best path that aligns
+         the first j tokens to the first t frames.
+
+         Args:
+             emission: Log-softmax emission matrix of shape (num_frames, num_classes)
+             tokens: List of target token indices
+             blank_id: Index of the blank/CTC token (default 0)
+
+         Returns:
+             Trellis matrix of shape (num_frames + 1, num_tokens + 1)
+         """
+         num_frames = emission.size(0)
+         num_tokens = len(tokens)
+
+         trellis = torch.full((num_frames + 1, num_tokens + 1), -float("inf"))
+         trellis[0, 0] = 0
+
+         for t in range(num_frames):
+             for j in range(num_tokens + 1):
+                 # Stay: emit blank and stay at j tokens
+                 stay = trellis[t, j] + emission[t, blank_id]
+
+                 # Move: emit token j and advance to j+1 tokens
+                 move = trellis[t, j - 1] + emission[t, tokens[j - 1]] if j > 0 else -float("inf")
+
+                 trellis[t + 1, j] = max(stay, move)  # Viterbi: take the best path
+
+         return trellis
+
+     @staticmethod
+     def _backtrack(
+         trellis: torch.Tensor, emission: torch.Tensor, tokens: list[int], blank_id: int = 0
+     ) -> list[tuple[int, float, float]]:
+         """Backtrack through the trellis to find the optimal forced monotonic alignment.
+
+         Guarantees:
+         - All tokens are emitted exactly once
+         - Strictly monotonic: each token's frames come after the previous token's
+         - No frame skipping or token teleporting
+
+         Returns a list of (token_id, start_frame, end_frame) for each token.
+         """
+         num_frames = emission.size(0)
+         num_tokens = len(tokens)
+
+         if num_tokens == 0:
+             return []
+
+         # The best ending point should be at num_tokens, but verify the
+         # trellis actually reached a valid state.
+         if trellis[num_frames, num_tokens] == -float("inf"):
+             # Alignment failed - fall back to a uniform distribution
+             frames_per_token = num_frames / num_tokens
+             return [
+                 (tokens[i], i * frames_per_token, (i + 1) * frames_per_token)
+                 for i in range(num_tokens)
+             ]
+
+         # Backtrack: find where each token transition occurred.
+         # token_frames[i] collects the frames where token i was emitted.
+         token_frames: list[list[int]] = [[] for _ in range(num_tokens)]
+
+         t = num_frames
+         j = num_tokens
+
+         while t > 0 and j > 0:
+             # Check: did we transition from j-1 to j at frame t-1?
+             stay_score = trellis[t - 1, j] + emission[t - 1, blank_id]
+             move_score = trellis[t - 1, j - 1] + emission[t - 1, tokens[j - 1]]
+
+             if move_score >= stay_score:
+                 # Token j-1 was emitted at frame t-1
+                 token_frames[j - 1].append(t - 1)
+                 j -= 1
+             t -= 1
+
+         # Handle any remaining tokens at the start (edge case)
+         while j > 0:
+             token_frames[j - 1].append(0)
+             j -= 1
+
+         # We appended in reverse-time order; restore monotonic order
+         for frames in token_frames:
+             frames.reverse()
+
+         # Convert to spans
+         token_spans: list[tuple[int, float, float]] = []
+         for token_idx, frames in enumerate(token_frames):
+             if not frames:
+                 # Token never emitted - assign a minimal span after the previous one
+                 if token_spans:
+                     prev_end = token_spans[-1][2]
+                     frames = [int(prev_end)]
+                 else:
+                     frames = [0]
+
+             token_id = tokens[token_idx]
+             start_frame = float(min(frames))
+             end_frame = float(max(frames)) + 1.0
+             token_spans.append((token_id, start_frame, end_frame))
+
+         return token_spans
+
+     # Offset compensation for Wav2Vec2-BASE systematic bias (in seconds),
+     # calibrated on the librispeech-alignments dataset. Both offsets are
+     # subtracted from the raw times, so the positive START_OFFSET shifts
+     # word starts earlier and the negative END_OFFSET shifts word ends later.
+     START_OFFSET = 0.06
+     END_OFFSET = -0.03
+
+     @classmethod
+     def align(
+         cls,
+         audio: np.ndarray,
+         text: str,
+         sample_rate: int = 16000,
+         _language: str = "eng",
+         _batch_size: int = 16,
+     ) -> list[dict]:
+         """Align a transcript to audio and return word-level timestamps.
+
+         Uses the Viterbi trellis algorithm for optimal forced alignment.
+
+         Args:
+             audio: Audio waveform as a numpy array
+             text: Transcript text to align
+             sample_rate: Audio sample rate (default 16000)
+             _language: ISO-639-3 language code (default "eng" for English, unused)
+             _batch_size: Batch size for the alignment model (unused)
+
+         Returns:
+             List of dicts with 'word', 'start', 'end' keys
+         """
+         import torchaudio
+
+         device = _get_device()
+         model, _labels, dictionary = cls.get_instance(device)
+         assert cls._bundle is not None and dictionary is not None  # Initialized by get_instance
+
+         # Convert audio to a tensor (copy to ensure the array is writable)
+         if isinstance(audio, np.ndarray):
+             waveform = torch.from_numpy(audio.copy()).float()
+         else:
+             waveform = audio.clone().float()
+
+         # Ensure 2D (channels, time)
+         if waveform.dim() == 1:
+             waveform = waveform.unsqueeze(0)
+
+         # Resample if needed (wav2vec2 expects 16 kHz)
+         if sample_rate != cls._bundle.sample_rate:
+             waveform = torchaudio.functional.resample(
+                 waveform, sample_rate, cls._bundle.sample_rate
+             )
+
+         waveform = waveform.to(device)
+
+         # Get emissions from the model
+         with torch.inference_mode():
+             emissions, _ = model(waveform)
+             emissions = torch.log_softmax(emissions, dim=-1)
+
+         emission = emissions[0].cpu()
+
+         # Normalize text: uppercase, keep only characters in the label set
+         transcript = text.upper()
+
+         # Build tokens from the transcript (including word separators)
+         tokens = []
+         for char in transcript:
+             if char in dictionary:
+                 tokens.append(dictionary[char])
+             elif char == " ":
+                 tokens.append(dictionary.get("|", dictionary.get(" ", 0)))
+
+         if not tokens:
+             return []
+
+         # Build the Viterbi trellis and backtrack for the optimal path
+         trellis = cls._get_trellis(emission, tokens, blank_id=0)
+         alignment_path = cls._backtrack(trellis, emission, tokens, blank_id=0)
+
+         # Convert frame indices to time (model stride is 320 samples at 16 kHz = 20 ms)
+         frame_duration = 320 / cls._bundle.sample_rate
+
+         # Apply separate offset compensation for start/end (Wav2Vec2 systematic bias)
+         start_offset = cls.START_OFFSET
+         end_offset = cls.END_OFFSET
+
+         # Group aligned tokens into words based on the pipe separator
+         words = text.split()
+         word_timestamps = []
+         current_word_start = None
+         current_word_end = None
+         word_idx = 0
+         separator_id = dictionary.get("|", dictionary.get(" ", 0))
+
+         for token_id, start_frame, end_frame in alignment_path:
+             if token_id == separator_id:  # Word separator
+                 if (
+                     current_word_start is not None
+                     and current_word_end is not None
+                     and word_idx < len(words)
+                 ):
+                     start_time = max(0.0, current_word_start * frame_duration - start_offset)
+                     end_time = max(0.0, current_word_end * frame_duration - end_offset)
+                     word_timestamps.append(
+                         {
+                             "word": words[word_idx],
+                             "start": start_time,
+                             "end": end_time,
+                         }
+                     )
+                     word_idx += 1
+                 current_word_start = None
+                 current_word_end = None
+             else:
+                 if current_word_start is None:
+                     current_word_start = start_frame
+                 current_word_end = end_frame
+
+         # Don't forget the last word
+         if (
+             current_word_start is not None
+             and current_word_end is not None
+             and word_idx < len(words)
+         ):
+             start_time = max(0.0, current_word_start * frame_duration - start_offset)
+             end_time = max(0.0, current_word_end * frame_duration - end_offset)
+             word_timestamps.append(
+                 {
+                     "word": words[word_idx],
+                     "start": start_time,
+                     "end": end_time,
+                 }
+             )
+
+         return word_timestamps
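The stay/move recurrence in `_get_trellis` can be exercised on a toy emission matrix. This standalone sketch mirrors the same recurrence in plain Python (lists and `math.inf` instead of torch tensors) so the dynamic program is easy to trace by hand; the emission values are made up for illustration:

```python
import math

def get_trellis(emission, tokens, blank_id=0):
    # emission: list of per-frame log-probability rows.
    # trellis[t][j]: best log-prob of aligning the first j tokens to the first t frames.
    num_frames, num_tokens = len(emission), len(tokens)
    trellis = [[-math.inf] * (num_tokens + 1) for _ in range(num_frames + 1)]
    trellis[0][0] = 0.0
    for t in range(num_frames):
        for j in range(num_tokens + 1):
            stay = trellis[t][j] + emission[t][blank_id]            # emit blank, stay at j
            move = (trellis[t][j - 1] + emission[t][tokens[j - 1]]
                    if j > 0 else -math.inf)                        # emit token j-1, advance
            trellis[t + 1][j] = max(stay, move)
    return trellis

# Toy emission: 4 frames, 3 classes (class 0 = blank), target tokens [1, 2].
emission = [
    [-3.0, -0.1, -3.0],  # frame 0 favours token 1
    [-3.0, -0.1, -3.0],  # frame 1 favours token 1
    [-0.1, -3.0, -3.0],  # frame 2 favours blank
    [-3.0, -3.0, -0.1],  # frame 3 favours token 2
]
trellis = get_trellis(emission, [1, 2])
print(math.isfinite(trellis[-1][-1]))  # True: both tokens can be aligned
```

A finite score in the final cell is exactly the validity check `_backtrack` performs before tracing the path; `-inf` there triggers the uniform fallback.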
asr_config.py ADDED
@@ -0,0 +1,216 @@
+ from typing import Optional
+
+ import transformers
+
+ # Default conv layers for Whisper/GLM-ASR audio encoders: [(pad, kernel, stride), ...]
+ DEFAULT_ENCODER_CONV_LAYERS = [(1, 3, 1), (1, 3, 2)]
+
+
+ def compute_encoder_output_length(mel_length, conv_layers=None):
+     """Apply the encoder conv layer formulas to compute the output length.
+
+     Works with both Python ints and torch tensors of mel lengths; the formula
+     `(L + 2*p - (k-1) - 1) // s + 1` per layer is identical for both.
+     """
+     layers = conv_layers if conv_layers is not None else DEFAULT_ENCODER_CONV_LAYERS
+     length = mel_length
+     for padding, kernel_size, stride in layers:
+         length = (length + 2 * padding - (kernel_size - 1) - 1) // stride + 1
+     return length
+
+
+ class ASRConfig(transformers.PretrainedConfig):
+     """Configuration class for the ASR model.
+
+     This config combines settings for:
+     - Audio encoder (GLM-ASR/Whisper)
+     - Text decoder (Qwen)
+     - Projector (MLP, MOSA, MoE, QFormer)
+     - Generation parameters
+     - Training options (LoRA)
+     """
+
+     model_type = "asr_model"
+     is_composition = True
+
+     def __init__(
+         self,
+         audio_model_id: str = "zai-org/GLM-ASR-Nano-2512",
+         text_model_id: str = "Qwen/Qwen3-0.6B",
+         attn_implementation: str = "flash_attention_2",
+         model_dtype: str = "bfloat16",
+         num_beams: Optional[int] = None,
+         system_prompt: str = "You are a helpful assistant.",
+         encoder_dim: Optional[int] = None,
+         llm_dim: Optional[int] = None,
+         # Encoder conv layers: list of (padding, kernel_size, stride) tuples.
+         # Default is the Whisper/GLM-ASR structure: conv1(k=3,s=1,p=1) + conv2(k=3,s=2,p=1)
+         encoder_conv_layers: Optional[list] = None,
+         audio_sample_rate: int = 16000,
+         projector_pool_stride: int = 4,
+         downsample_rate: int = 5,  # Granite default
+         projector_hidden_dim: Optional[int] = None,
+         projector_type: str = "mlp",  # "mlp", "mosa", "moe", "qformer"
+         # MoE-specific configuration
+         num_experts: int = 4,  # Number of experts in MoE projectors
+         num_experts_per_tok: int = 2,  # Top-k experts per token
+         router_aux_loss_coef: float = 0.01,  # Auxiliary loss coefficient for load balancing
+         # QFormer-specific configuration (Granite defaults)
+         qformer_window_size: int = 15,  # Window size for QFormer processing
+         qformer_hidden_size: Optional[int] = None,  # QFormer hidden size (defaults to encoder_dim)
+         qformer_num_layers: int = 2,  # Number of QFormer transformer layers
+         qformer_num_heads: int = 16,  # Number of attention heads in QFormer
+         qformer_intermediate_size: Optional[int] = None,  # FFN size (defaults to 4x hidden)
+         # LoRA configuration (for Stage 2 fine-tuning)
+         use_lora: bool = False,
+         lora_rank: int = 8,  # SALMONN default
+         lora_alpha: int = 32,  # SALMONN default (scaling factor 4.0)
+         lora_dropout: float = 0.0,
+         lora_target_modules: Optional[list] = None,  # Default: all linear layers
+         freeze_projector: bool = False,  # True for Stage 2 (LoRA-only training)
+         freeze_language_model: bool = True,  # False = full decoder fine-tuning
+         do_sample: bool = False,
+         temperature: Optional[float] = None,
+         top_p: Optional[float] = None,
+         top_k: Optional[int] = None,
+         max_new_tokens: Optional[int] = None,
+         min_new_tokens: Optional[int] = None,
+         repetition_penalty: Optional[float] = None,
+         length_penalty: Optional[float] = None,
+         no_repeat_ngram_size: Optional[int] = None,
+         use_cache: Optional[bool] = None,
+         **kwargs,
+     ):
+         """Initialize the ASR model configuration.
+
+         Args:
+             audio_model_id: HuggingFace model ID for the audio encoder (GLM-ASR/Whisper)
+             text_model_id: HuggingFace model ID for the text decoder (Qwen)
+             attn_implementation: Attention implementation ("flash_attention_2", "sdpa", "eager")
+             model_dtype: Model dtype ("bfloat16", "float16", "float32")
+             projector_type: Projector architecture ("mlp", "mosa", "moe", "qformer")
+             use_lora: Enable LoRA adapters for Stage 2 fine-tuning
+         """
+         # Default generation parameters (greedy decoding only). Applied via
+         # setattr below; kept out of kwargs so they are not overwritten again
+         # by super().__init__(**kwargs) at the end.
+         generation_defaults = {
+             "num_beams": 1,
+             "max_new_tokens": 128,
+             "min_new_tokens": 0,
+             "repetition_penalty": 1.0,
+             "length_penalty": 1.0,
+             "no_repeat_ngram_size": 0,
+             "use_cache": True,
+         }
+
+         self.audio_model_id = audio_model_id
+         self.text_model_id = text_model_id
+         self.attn_implementation = attn_implementation
+         self.model_dtype = model_dtype
+         self.system_prompt = system_prompt
+         self.encoder_dim = encoder_dim
+         self.llm_dim = llm_dim
+         self.encoder_conv_layers = encoder_conv_layers or DEFAULT_ENCODER_CONV_LAYERS
+         self.audio_sample_rate = audio_sample_rate
+         self.projector_pool_stride = projector_pool_stride
+         self.downsample_rate = downsample_rate
+         self.projector_hidden_dim = projector_hidden_dim
+         self.projector_type = projector_type
+         # MoE-specific configuration
+         self.num_experts = num_experts
+         self.num_experts_per_tok = num_experts_per_tok
+         self.router_aux_loss_coef = router_aux_loss_coef
+         # QFormer-specific configuration
+         self.qformer_window_size = qformer_window_size
+         self.qformer_hidden_size = qformer_hidden_size
+         self.qformer_num_layers = qformer_num_layers
+         self.qformer_num_heads = qformer_num_heads
+         self.qformer_intermediate_size = qformer_intermediate_size
+         # LoRA configuration
+         self.use_lora = use_lora
+         self.lora_rank = lora_rank
+         self.lora_alpha = lora_alpha
+         self.lora_dropout = lora_dropout
+         self.lora_target_modules = lora_target_modules or [
+             "q_proj",
+             "k_proj",
+             "v_proj",
+             "o_proj",
+             "gate_proj",
+             "up_proj",
+             "down_proj",
+         ]
+         self.freeze_projector = freeze_projector
+         self.freeze_language_model = freeze_language_model
+
+         explicit_generation_args = {
+             "num_beams": num_beams,
+             "max_new_tokens": max_new_tokens,
+             "min_new_tokens": min_new_tokens,
+             "repetition_penalty": repetition_penalty,
+             "length_penalty": length_penalty,
+             "no_repeat_ngram_size": no_repeat_ngram_size,
+             "use_cache": use_cache,
+         }
+         for key, default in generation_defaults.items():
+             value = explicit_generation_args[key]
+             setattr(self, key, value if value is not None else default)
+         self.do_sample = do_sample
+         self.temperature = temperature
+         self.top_p = top_p
+         self.top_k = top_k
+
+         if "audio_config" not in kwargs:
+             self.audio_config = transformers.AutoConfig.from_pretrained(audio_model_id)
+             # Override dtype to match model_dtype
+             self.audio_config.dtype = model_dtype
+         else:
+             self.audio_config = kwargs.pop("audio_config")
+
+         if "text_config" not in kwargs:
+             self.text_config = transformers.AutoConfig.from_pretrained(
+                 text_model_id, trust_remote_code=True
+             )
+             # Override dtype to match model_dtype
+             self.text_config.dtype = model_dtype
+         else:
+             self.text_config = kwargs.pop("text_config")
+
+         if isinstance(self.text_config, dict):
+             # Reconstruct the config from a dict using the model_type stored in the dict
+             model_type = self.text_config["model_type"]
+             config_class = transformers.AutoConfig.for_model(model_type).__class__
+             self.text_config = config_class(**self.text_config)
+
+         if isinstance(self.audio_config, dict):
+             model_type = self.audio_config.get("model_type")
+             if model_type:
+                 config_class = transformers.AutoConfig.for_model(model_type).__class__
+                 self.audio_config = config_class(**self.audio_config)
+
+         super().__init__(**kwargs)
+
+         # Point encoder to audio_config so the pipeline uses the correct feature
+         # extractor; the pipeline looks for config.encoder._name_or_path.
+         self.encoder = self.audio_config
+
+         self.auto_map = {
+             "AutoConfig": "asr_config.ASRConfig",
+             "AutoModel": "asr_modeling.ASRModel",
+             "AutoModelForSpeechSeq2Seq": "asr_modeling.ASRModel",
+             "AutoProcessor": "asr_processing.ASRProcessor",
+         }
+         self.custom_pipelines = {
+             "automatic-speech-recognition": {
+                 "impl": "asr_pipeline.ASRPipeline",
+                 "pt": ["AutoModelForSpeechSeq2Seq"],
+                 "tf": [],
+                 "type": "audio",
+             }
+         }
+         self.architectures = ["ASRModel"]
+         self.pipeline_tag = "automatic-speech-recognition"
+
+
+ transformers.AutoConfig.register("asr_model", ASRConfig)
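The per-layer conv formula in `compute_encoder_output_length` can be checked with concrete numbers. This standalone re-implementation (copied from `asr_config.py` above so it runs without transformers installed) applies the default Whisper/GLM-ASR stack, where the stride-1 conv preserves the length and the stride-2 conv halves it:

```python
# (padding, kernel_size, stride) per conv layer, as in asr_config.py.
DEFAULT_ENCODER_CONV_LAYERS = [(1, 3, 1), (1, 3, 2)]

def compute_encoder_output_length(mel_length, conv_layers=None):
    length = mel_length
    for padding, kernel_size, stride in conv_layers or DEFAULT_ENCODER_CONV_LAYERS:
        # Standard 1-D conv output size: (L + 2p - (k - 1) - 1) // s + 1
        length = (length + 2 * padding - (kernel_size - 1) - 1) // stride + 1
    return length

# A 30 s clip at Whisper's 10 ms mel hop gives 3000 frames:
# conv1 (k=3, s=1, p=1) keeps 3000, conv2 (k=3, s=2, p=1) halves it.
print(compute_encoder_output_length(3000))  # 1500
```

The halving is why downstream code can treat encoder frames as 20 ms each (two 10 ms mel hops per output frame).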
asr_modeling.py ADDED
@@ -0,0 +1,828 @@
+ import json
+ from pathlib import Path
+ from threading import Thread
+ from typing import Iterator, Optional, Union
+
+ import torch
+ import torch.nn as nn
+ import torch.nn.functional as F  # noqa: N812
+ from transformers import (
+     AutoModel,
+     AutoModelForCausalLM,
+     AutoTokenizer,
+     PreTrainedModel,
+     TextIteratorStreamer,
+ )
+ from transformers.generation import GenerationMixin
+ from transformers.modeling_outputs import CausalLMOutputWithPast
+
+ try:
+     from .asr_config import ASRConfig, compute_encoder_output_length
+     from .projectors import PROJECTOR_CLASSES
+ except ImportError:
+     from asr_config import ASRConfig, compute_encoder_output_length  # type: ignore[no-redef]
+     from projectors import PROJECTOR_CLASSES  # type: ignore[no-redef]
+
+
+ def _gather_audio_embeds(audio_embeds: torch.Tensor, token_counts: torch.Tensor) -> torch.Tensor:
+     """Flatten per-sample audio embeddings into a packed tensor.
+
+     For each row i, takes the first ``token_counts[i]`` rows of
+     ``audio_embeds[i]`` and concatenates them. If any token count exceeds
+     ``audio_embeds.shape[1]``, the deficit is zero-padded.
+
+     Equivalent to a per-sample slice/cat loop but with O(1) host-device
+     syncs per call (one ``max().item()``) instead of one per sample.
+     """
+     _, max_len, _ = audio_embeds.shape
+     needed = int(token_counts.max().item())
+     if needed > max_len:
+         audio_embeds = F.pad(audio_embeds, (0, 0, 0, needed - max_len))
+         max_len = needed
+     indices = torch.arange(max_len, device=audio_embeds.device).unsqueeze(0)
+     mask = indices < token_counts.unsqueeze(1)
+     return audio_embeds[mask]
+
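The boolean-mask packing in `_gather_audio_embeds` is easiest to see on a tiny batch. This sketch repeats the core three lines on made-up tensors (no padding branch needed since no count exceeds the padded length):

```python
import torch

# Batch of 3 padded sequences, 4 positions each, embedding dim 2.
audio_embeds = torch.arange(3 * 4 * 2, dtype=torch.float32).reshape(3, 4, 2)
token_counts = torch.tensor([2, 4, 1])  # valid rows per sample

indices = torch.arange(audio_embeds.shape[1]).unsqueeze(0)  # (1, max_len)
mask = indices < token_counts.unsqueeze(1)                  # (batch, max_len) bool
packed = audio_embeds[mask]                                 # (sum(counts), dim)
print(packed.shape)  # torch.Size([7, 2])
```

Boolean indexing flattens in row-major order, so the packed tensor is sample 0's first 2 rows, then sample 1's 4 rows, then sample 2's 1 row, exactly what a per-sample slice-and-cat loop would produce.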
+
+ class ASRModel(PreTrainedModel, GenerationMixin):
+     """Audio-to-text model combining an audio encoder, projector, and language model."""
+
+     config_class = ASRConfig
+     base_model_prefix = "model"
+     main_input_name = "input_features"
+     _supports_flash_attn_2 = True
+     supports_gradient_checkpointing = True
+     _is_loading_from_pretrained: bool = False
+
+     TRANSCRIBE_PROMPT = "Transcribe the speech to text"
+
+     @classmethod
+     def from_pretrained(cls, pretrained_model_name_or_path: str, *args, **kwargs) -> "ASRModel":
+         """Load the model from pretrained weights, handling device placement correctly."""
+         from safetensors.torch import load_file
+         from transformers.utils.hub import cached_file
+
+         config = kwargs.pop("config", None)
+         if config is None:
+             config = ASRConfig.from_pretrained(pretrained_model_name_or_path, **kwargs)
+
+         # Set a flag to avoid device_map="auto" in sub-model loaders
+         cls._is_loading_from_pretrained = True
+
+         try:
+             model = cls(config, **kwargs)
+
+             # Load projector weights from safetensors
+             subfolder = kwargs.get("subfolder")
+             revision = kwargs.get("revision")
+             cache_kwargs = {}
+             if subfolder:
+                 cache_kwargs["subfolder"] = subfolder
+             if revision:
+                 cache_kwargs["revision"] = revision
+
+             model_file = cached_file(
+                 pretrained_model_name_or_path,
+                 "model.safetensors",
+                 _raise_exceptions_for_missing_entries=False,
+                 **cache_kwargs,
+             )
+
+             if model_file is not None:
+                 state_dict = load_file(model_file)
+                 model.load_state_dict(state_dict, strict=False)
+
+             # Load LoRA adapters if use_lora is enabled
+             if getattr(config, "use_lora", False):
+                 # Check for adapter_config.json (required by PEFT to load adapters)
+                 adapter_config_file = cached_file(
+                     pretrained_model_name_or_path,
+                     "adapter_config.json",
+                     _raise_exceptions_for_missing_entries=False,
+                     **cache_kwargs,
+                 )
+                 if adapter_config_file is not None:
+                     # Load saved adapter weights using the original repo_id/path;
+                     # PEFT handles Hub downloads and caching internally
+                     from peft import PeftModel
+
+                     model.language_model = PeftModel.from_pretrained(
+                         model.language_model,
+                         pretrained_model_name_or_path,
+                         is_trainable=True,
+                         **cache_kwargs,
+                     )
+                 else:
+                     # No saved adapters - initialize a fresh LLM LoRA for training
+                     from peft import LoraConfig, get_peft_model
+
+                     lora_config = LoraConfig(
+                         r=config.lora_rank,
+                         lora_alpha=config.lora_alpha,
+                         target_modules=config.lora_target_modules,
+                         lora_dropout=config.lora_dropout,
124
+ bias="none",
125
+ task_type="CAUSAL_LM",
126
+ )
127
+ model.language_model = get_peft_model(model.language_model, lora_config)
128
+
129
+ return model
130
+ finally:
131
+ cls._is_loading_from_pretrained = False
132
+
+     def __init__(self, config: ASRConfig, **kwargs) -> None:
+         super().__init__(config)
+
+         self.system_prompt = config.system_prompt
+         target_dtype = getattr(torch, config.model_dtype)
+
+         # Audio encoder (frozen)
+         self.audio_tower = self._load_audio_encoder(config, target_dtype)
+
+         # Language model (frozen)
+         self.language_model = self._load_language_model(config, target_dtype)
+
+         # Initialize tokenizer and special tokens
+         self._init_tokenizer(config)
+
+         # Set up generation config with greedy decoding defaults
+         self.generation_config = self.language_model.generation_config
+         self.generation_config.max_new_tokens = config.max_new_tokens
+         self.generation_config.min_new_tokens = config.min_new_tokens
+         self.generation_config.num_beams = config.num_beams
+         self.generation_config.do_sample = config.do_sample
+         # Set sampling params from config (None means use model defaults)
+         self.generation_config.temperature = config.temperature
+         self.generation_config.top_p = config.top_p
+         self.generation_config.top_k = config.top_k
+         self.generation_config.use_cache = config.use_cache
+         self.generation_config.length_penalty = config.length_penalty
+         self.generation_config.repetition_penalty = config.repetition_penalty
+         self.generation_config.no_repeat_ngram_size = config.no_repeat_ngram_size
+         # Set EOS tokens, filtering out any that don't exist in the tokenizer
+         eos_candidates = [
+             self.tokenizer.convert_tokens_to_ids("<|im_end|>"),
+             self.tokenizer.convert_tokens_to_ids("<|endoftext|>"),
+         ]
+         self.generation_config.eos_token_id = [t for t in eos_candidates if t is not None]
+         self.generation_config.pad_token_id = self.tokenizer.pad_token_id
+
+         # Feature extractor for audio preprocessing
+         self.feature_extractor = self._create_feature_extractor(config)
+
+         # Audio projector (trainable unless freeze_projector is set)
+         self.projector = self._create_projector(config, target_dtype)
+
+         # Setup LoRA if enabled (Stage 2 fine-tuning)
+         # Skip if loading from pretrained - from_pretrained will handle adapter loading
+         if getattr(config, "use_lora", False) and not getattr(
+             self.__class__, "_is_loading_from_pretrained", False
+         ):
+             self._setup_lora(config)
+
+         # Freeze projector if specified (for Stage 2 LoRA-only training)
+         if getattr(config, "freeze_projector", False):
+             self.projector.requires_grad_(False)
+
+         # For model parallelism
+         self._no_split_modules = getattr(self.language_model, "_no_split_modules", [])
+
+     def _create_feature_extractor(self, config: ASRConfig):
+         """Create the appropriate feature extractor for the audio encoder."""
+         from transformers import AutoFeatureExtractor
+
+         feature_extractor = AutoFeatureExtractor.from_pretrained(config.audio_model_id)
+         # Whisper's encoder requires a fixed 3000 mel frames (30s) and the
+         # feature extractor pads to that by default — leave it alone. Other
+         # encoders (e.g. GLM-ASR) accept variable-length input, so we disable
+         # padding to avoid wasting compute on silent frames.
+         if "whisper" not in config.audio_model_id.lower():
+             feature_extractor.padding = False
+         return feature_extractor
+
+     @classmethod
+     def _load_audio_encoder(cls, config: ASRConfig, dtype: torch.dtype) -> nn.Module:
+         """Load and freeze the audio encoder."""
+         encoder_kwargs = {
+             "attn_implementation": config.attn_implementation,
+             "low_cpu_mem_usage": True,
+             "dtype": dtype,
+         }
+
+         if "whisper" in config.audio_model_id.lower():
+             from transformers import WhisperModel
+
+             full_model = WhisperModel.from_pretrained(config.audio_model_id, **encoder_kwargs)
+             encoder = full_model.encoder
+             del full_model
+         elif "glm" in config.audio_model_id.lower():
+             # GLM-ASR models use audio_tower as the encoder
+             # Requires transformers >= 5.x or installed from source
+             from transformers import AutoModelForSeq2SeqLM
+
+             full_model = AutoModelForSeq2SeqLM.from_pretrained(
+                 config.audio_model_id, trust_remote_code=True, **encoder_kwargs
+             )
+             # GLM stores encoder at audio_tower (GlmAsrEncoder)
+             encoder = full_model.audio_tower
+             # Clear references to free VRAM from the LLM decoder
+             full_model.language_model = None
+             full_model.multi_modal_projector = None
+             del full_model
+         else:
+             encoder = AutoModel.from_pretrained(config.audio_model_id, **encoder_kwargs)
+
+         encoder.requires_grad_(False)
+         encoder.eval()
+         return encoder
+
+     @classmethod
+     def _load_language_model(cls, config: ASRConfig, dtype: torch.dtype) -> PreTrainedModel:
+         """Load and freeze the language model."""
+         decoder_kwargs = {
+             "attn_implementation": config.attn_implementation,
+             "trust_remote_code": True,
+             "low_cpu_mem_usage": True,
+             "dtype": dtype,
+         }
+
+         decoder = AutoModelForCausalLM.from_pretrained(config.text_model_id, **decoder_kwargs)
+         decoder.config.use_cache = getattr(config, "use_cache", True)
+         if getattr(config, "freeze_language_model", True):
+             decoder.requires_grad_(False)
+             decoder.train(False)
+         return decoder
+
+     def _create_projector(self, config: ASRConfig, dtype: torch.dtype) -> nn.Module:
+         """Create the trainable audio projector."""
+         # Auto-detect dimensions if not specified
+         if config.encoder_dim is None:
+             enc_cfg = self.audio_tower.config
+             config.encoder_dim = getattr(enc_cfg, "hidden_size", None) or getattr(
+                 enc_cfg, "d_model", None
+             )
+             if config.encoder_dim is None:
+                 raise ValueError("Could not auto-detect encoder_dim. Please specify in config.")
+
+         if config.llm_dim is None:
+             dec_cfg = self.language_model.config
+             config.llm_dim = getattr(dec_cfg, "hidden_size", None) or getattr(
+                 dec_cfg, "d_model", None
+             )
+             if config.llm_dim is None:
+                 raise ValueError("Could not auto-detect llm_dim. Please specify in config.")
+
+         # Select projector type based on config
+         projector_type = getattr(config, "projector_type", "mlp")
+         projector_class = PROJECTOR_CLASSES.get(projector_type)
+         if projector_class is None:
+             raise ValueError(
+                 f"Unknown projector_type: {projector_type}. "
+                 f"Valid options: {list(PROJECTOR_CLASSES.keys())}"
+             )
+         projector = projector_class(config)
+
+         # Move projector to same device as language model (important when using quantization)
+         device = next(self.language_model.parameters()).device
+         return projector.to(device=device, dtype=dtype)
+
+     def _setup_lora(self, config: ASRConfig):
+         """Apply LoRA adapters to the language model for Stage 2 fine-tuning."""
+         from peft import LoraConfig, get_peft_model
+
+         lora_config = LoraConfig(
+             r=config.lora_rank,
+             lora_alpha=config.lora_alpha,
+             target_modules=config.lora_target_modules,
+             lora_dropout=config.lora_dropout,
+             bias="none",
+             task_type="CAUSAL_LM",
+         )
+         self.language_model = get_peft_model(self.language_model, lora_config)
+
+     def _init_tokenizer(self, config: ASRConfig):
+         """Initialize tokenizer with audio token."""
+         self.tokenizer = AutoTokenizer.from_pretrained(config.text_model_id, trust_remote_code=True)
+
+         # Set pad token. Prefer a dedicated pad token if the tokenizer has one
+         # (e.g. Qwen's <|finetune_right_pad_id|>); otherwise fall back to
+         # eos_token, which is the standard pattern for Llama-style tokenizers
+         # (SmolLM2, Llama, etc.) that ship without a separate pad token.
+         if (
+             self.tokenizer.pad_token is None
+             or self.tokenizer.pad_token_id == self.tokenizer.eos_token_id
+         ):
+             if "<|finetune_right_pad_id|>" in self.tokenizer.get_vocab():
+                 self.tokenizer.pad_token = "<|finetune_right_pad_id|>"
+             elif self.tokenizer.pad_token is None:
+                 self.tokenizer.pad_token = self.tokenizer.eos_token
+
+         # Add audio token
+         existing_special = getattr(self.tokenizer, "additional_special_tokens", None) or []
+         if "<audio>" not in existing_special:
+             self.tokenizer.add_special_tokens(
+                 {"additional_special_tokens": existing_special + ["<audio>"]}
+             )
+             # mean_resizing=True initializes the new <audio> row at the mean of
+             # existing rows so its scale matches the pretrained distribution. The
+             # input-side <audio> embedding is overwritten via masked_scatter and
+             # never seen by the LM, but with tied embeddings (Qwen3-0.6B) this
+             # same row is the lm_head column for predicting <audio>; a Gaussian
+             # draw at config.initializer_range was visible in early-step logits.
+             self.language_model.resize_token_embeddings(len(self.tokenizer), mean_resizing=True)
+
+         self.audio_token_id = self.tokenizer.convert_tokens_to_ids("<audio>")
+         self.tokenizer.padding_side = "right"
+
+         # Sync token IDs to configs
+         for cfg in [self.config.text_config, self.language_model.config, self.generation_config]:
+             if cfg is not None:
+                 cfg.pad_token_id = self.tokenizer.pad_token_id
+                 cfg.eos_token_id = self.tokenizer.eos_token_id
+                 cfg.bos_token_id = self.tokenizer.bos_token_id
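The pad-token fallback above can be read as a small decision function. This pure-Python sketch (hypothetical helper operating on plain values rather than a tokenizer object) mirrors the branching:

```python
def choose_pad_token(vocab, pad_token, pad_token_id, eos_token, eos_token_id):
    # Prefer a dedicated pad token when the tokenizer lacks one or pads
    # with EOS; otherwise keep whatever pad token is already set.
    if pad_token is None or pad_token_id == eos_token_id:
        if "<|finetune_right_pad_id|>" in vocab:
            return "<|finetune_right_pad_id|>"
        if pad_token is None:
            return eos_token
    return pad_token
```

Padding with EOS is avoided when possible because it makes the EOS label ambiguous during loss masking.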
+
+     def train(self, mode: bool = True):
+         """Set train/eval mode, but keep frozen submodules out of train mode.
+
+         HF Trainer calls `model.train()` at the top of every training step, which
+         recursively switches every submodule into train mode — re-enabling dropout
+         on modules with `requires_grad_(False)`. The frozen encoder (and the LM
+         when `freeze_language_model=True`) should always run deterministically;
+         train-mode dropout only adds noise that can't improve a frozen network.
+         """
+         super().train(mode)
+         self.audio_tower.train(False)
+         if getattr(self.config, "freeze_language_model", True):
+             self.language_model.train(False)
+         return self
+
+     def _set_gradient_checkpointing(self, enable: bool = True, gradient_checkpointing_func=None):
+         """Enable/disable gradient checkpointing for the language model."""
+         # The LLM still stores activations during forward for backprop to projector
+         # Gradient checkpointing trades compute for memory by recomputing activations
+         if hasattr(self.language_model, "_set_gradient_checkpointing"):
+             self.language_model._set_gradient_checkpointing(enable, gradient_checkpointing_func)
+         elif hasattr(self.language_model, "gradient_checkpointing_enable") and enable:
+             self.language_model.gradient_checkpointing_enable(
+                 gradient_checkpointing_kwargs={"use_reentrant": False}
+             )
+         elif hasattr(self.language_model, "gradient_checkpointing_disable") and not enable:
+             self.language_model.gradient_checkpointing_disable()
+
+     def get_input_embeddings(self) -> nn.Module:
+         return self.language_model.get_input_embeddings()
+
+     def set_input_embeddings(self, value: nn.Module) -> None:
+         self.language_model.set_input_embeddings(value)
+
+     def get_output_embeddings(self) -> nn.Module:
+         return self.language_model.get_output_embeddings()
+
+     def set_output_embeddings(self, value: nn.Module) -> None:
+         self.language_model.set_output_embeddings(value)
+
+     def get_processor(self):
+         """Get the processor for this model."""
+         try:
+             from .asr_processing import ASRProcessor
+         except ImportError:
+             from asr_processing import ASRProcessor  # type: ignore[no-redef]
+
+         return ASRProcessor(
+             feature_extractor=self.feature_extractor,
+             tokenizer=self.tokenizer,
+             projector=self.projector,
+             encoder_conv_layers=self.config.encoder_conv_layers,
+         )
+
+     def state_dict(self, *args, **kwargs) -> dict[str, torch.Tensor]:
+         """Save trainable weights: projector, plus the language model when fine-tuned."""
+         sd = {f"projector.{k}": v for k, v in self.projector.state_dict().items()}
+         if not getattr(self.config, "freeze_language_model", True):
+             sd.update(
+                 {f"language_model.{k}": v for k, v in self.language_model.state_dict().items()}
+             )
+         return sd
+
+     def _compute_encoder_output_lengths(
+         self,
+         audio_attention_mask: torch.Tensor,
+     ) -> torch.Tensor:
+         """Compute per-sample encoder output lengths using conv layer formulas."""
+         return compute_encoder_output_length(
+             audio_attention_mask.sum(dim=-1),
+             self.config.encoder_conv_layers,
+         )
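`compute_encoder_output_length` itself lives in `asr_config.py` and is not shown in this commit. Assuming it chains the standard 1-D convolution length formula over the encoder's conv stack (and that `encoder_conv_layers` carries per-layer kernel/stride/padding), a minimal sketch would be:

```python
def conv_output_length(length, kernel_size, stride, padding=0):
    # Standard 1-D convolution output-length formula.
    return (length + 2 * padding - kernel_size) // stride + 1


def encoder_output_length(mel_len, conv_layers):
    # conv_layers: iterable of (kernel_size, stride, padding) tuples;
    # the exact config format is an assumption here.
    for kernel_size, stride, padding in conv_layers:
        mel_len = conv_output_length(mel_len, kernel_size, stride, padding)
    return mel_len
```

With Whisper-style convs `[(3, 1, 1), (3, 2, 1)]`, 3000 mel frames map to the familiar 1500 encoder frames.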
+
+     def _encode_audio(
+         self,
+         audio_features: torch.Tensor,
+         expected_token_counts: torch.Tensor,
+     ) -> torch.Tensor:
+         """Encode audio features and return flattened embeddings matching expected_token_counts.
+
+         Args:
+             audio_features: Mel spectrogram features (batch, n_mels, mel_len)
+             expected_token_counts: Per-sample audio token counts as int64 tensor (batch,).
+
+         Returns:
+             Flattened audio embeddings of shape (sum(expected_token_counts), hidden_dim).
+         """
+         with torch.no_grad():
+             encoder_out = self.audio_tower(input_features=audio_features)
+             hidden_states = encoder_out.last_hidden_state
+
+         audio_embeds = self.projector(hidden_states)
+
+         token_counts = expected_token_counts.to(device=audio_embeds.device, dtype=torch.long)
+         return _gather_audio_embeds(audio_embeds, token_counts)
+
+     def forward(
+         self,
+         input_ids: Optional[torch.Tensor] = None,
+         input_features: Optional[torch.Tensor] = None,
+         audio_attention_mask: Optional[torch.Tensor] = None,
+         attention_mask: Optional[torch.Tensor] = None,
+         position_ids: Optional[torch.Tensor] = None,
+         past_key_values: Optional[torch.Tensor] = None,
+         inputs_embeds: Optional[torch.Tensor] = None,
+         labels: Optional[torch.Tensor] = None,
+         use_cache: Optional[bool] = None,
+         cache_position: Optional[torch.Tensor] = None,
+         audio_token_counts: Optional[torch.Tensor] = None,
+         **kwargs,
+     ) -> CausalLMOutputWithPast:
+         """Forward pass for training and inference."""
+         if inputs_embeds is None:
+             inputs_embeds = self.language_model.get_input_embeddings()(input_ids)
+
+         if input_features is not None and input_ids is not None:
+             is_audio_token = input_ids == self.audio_token_id
+             if audio_token_counts is None:
+                 audio_token_counts = is_audio_token.sum(dim=-1)
+             else:
+                 audio_token_counts = audio_token_counts.to(
+                     device=input_ids.device, dtype=torch.long
+                 )
+
+             audio_embeds = self._encode_audio(input_features, audio_token_counts)
+
+             audio_token_mask = is_audio_token.unsqueeze(-1)
+             inputs_embeds = inputs_embeds.masked_scatter(
+                 audio_token_mask.to(inputs_embeds.device),
+                 audio_embeds.to(inputs_embeds.device, dtype=inputs_embeds.dtype),
+             )
+
+         outputs = self.language_model(
+             attention_mask=attention_mask,
+             position_ids=position_ids,
+             past_key_values=past_key_values,
+             inputs_embeds=inputs_embeds,
+             labels=labels,
+             use_cache=use_cache,
+             cache_position=cache_position,
+             **kwargs,
+         )
+
+         if outputs.loss is not None and hasattr(self.projector, "get_aux_loss"):
+             aux_loss = self.projector.get_aux_loss()
+             if aux_loss is not None and aux_loss.numel() > 0:
+                 outputs.loss = outputs.loss + aux_loss.to(outputs.loss.device)
+
+         return outputs
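The `masked_scatter` call in `forward` splices the packed audio rows into the `<audio>` placeholder positions in order. A list-based illustration of that semantics (`splice_audio` is a hypothetical helper, not part of the model):

```python
def splice_audio(text_rows, is_audio, audio_rows):
    # Replace placeholder rows (flagged True) with audio rows, consumed
    # in order, mirroring masked_scatter along the sequence axis.
    it = iter(audio_rows)
    return [next(it) if flag else row for row, flag in zip(text_rows, is_audio)]
```

This is why the flattened output of `_encode_audio` must contain exactly as many rows as there are `<audio>` tokens in the batch.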
+
+     def prepare_inputs_for_generation(self, *args, **kwargs):
+         """Prepare inputs for generation, handling audio features for cached decoding."""
+         input_features = kwargs.pop("input_features", None)
+         cache_position = kwargs.get("cache_position")
+
+         model_inputs = self.language_model.prepare_inputs_for_generation(*args, **kwargs)
+
+         # Only pass audio features on the first generation step (cache_position[0] == 0)
+         if cache_position is not None and cache_position[0] == 0 and input_features is not None:
+             model_inputs["input_features"] = input_features
+
+         return model_inputs
+
+     def _get_num_audio_tokens(
+         self,
+         audio_attention_mask: torch.Tensor,
+     ) -> int:
+         """Calculate number of audio tokens based on actual audio length.
+
+         Uses attention mask to get real audio length, then computes:
+         mel_frames -> encoder_frames (via conv formulas) -> projector output tokens
+         """
+         encoder_lengths = self._compute_encoder_output_lengths(audio_attention_mask)
+         # Use max length for batch (all samples should have same token count for generation)
+         encoder_output_len = int(encoder_lengths.max().item())
+         return int(self.projector.get_output_length(encoder_output_len))
+
+     @torch.no_grad()
+     def generate(
+         self,
+         input_ids: Optional[torch.Tensor] = None,
+         input_features: Optional[torch.Tensor] = None,
+         audio_attention_mask: Optional[torch.Tensor] = None,
+         attention_mask: Optional[torch.Tensor] = None,
+         system_prompt: Optional[str] = None,
+         **generate_kwargs,
+     ) -> torch.Tensor:
+         """Generate transcription from audio input.
+
+         Can be called in two ways:
+         1. With input_ids containing <audio> tokens (from processor)
+         2. With just audio, and we build the prompt internally
+         """
+         if input_features is None:
+             raise ValueError("input_features required for generation")
+         if audio_attention_mask is None:
+             raise ValueError("audio_attention_mask required for generation")
+
+         device = input_features.device
+         batch_size = input_features.shape[0]
+
+         # Encode audio -> flattened embeddings (no per-sample host sync)
+         encoder_lengths = self._compute_encoder_output_lengths(audio_attention_mask)
+         token_counts = self.projector.get_output_length(encoder_lengths).to(torch.long)
+         audio_embeds = self._encode_audio(input_features, token_counts)
+
+         # If input_ids not provided, build prompt with correct number of audio tokens
+         if input_ids is None:
+             num_audio_tokens = self._get_num_audio_tokens(audio_attention_mask)
+             audio_placeholder = "<audio>" * num_audio_tokens
+
+             system_prompt = system_prompt or self.system_prompt
+
+             messages: list[dict[str, str]] = []
+             if system_prompt:
+                 messages.append({"role": "system", "content": system_prompt})
+             # Audio tokens only (instruction-free)
+             user_content = audio_placeholder
+             if self.TRANSCRIBE_PROMPT:
+                 user_content += " " + self.TRANSCRIBE_PROMPT
+             messages.append({"role": "user", "content": user_content})
+
+             chat_result = self.tokenizer.apply_chat_template(
+                 messages,
+                 tokenize=True,
+                 add_generation_prompt=True,
+                 return_tensors="pt",
+                 return_dict=True,  # needed so chat_result exposes .input_ids
+                 enable_thinking=False,  # Disable Qwen3 thinking mode for ASR
+             )
+             input_ids = chat_result.input_ids.to(device)
+
+         if input_ids.dim() == 1:
+             input_ids = input_ids.unsqueeze(0)
+         if input_ids.shape[0] == 1 and batch_size > 1:
+             input_ids = input_ids.expand(batch_size, -1)
+
+         attention_mask = torch.ones_like(input_ids)
+
+         # Get text embeddings and replace audio tokens with audio embeddings
+         inputs_embeds = self.language_model.get_input_embeddings()(input_ids)
+         audio_token_mask = (input_ids == self.audio_token_id).unsqueeze(-1)
+         inputs_embeds = inputs_embeds.masked_scatter(
+             audio_token_mask.to(inputs_embeds.device),
+             audio_embeds.to(inputs_embeds.device, dtype=inputs_embeds.dtype),
+         )
+
+         # Generate using language model
+         # Pass both input_ids and inputs_embeds so repetition_penalty works correctly
+         # (it needs input_ids to track which tokens have been used)
+         output = self.language_model.generate(
+             input_ids=input_ids,
+             inputs_embeds=inputs_embeds,
+             attention_mask=attention_mask,
+             generation_config=self.generation_config,
+             **generate_kwargs,
+         )
+
+         # When using inputs_embeds with input_ids, generate returns full sequence
+         # Strip the input tokens to return only generated tokens
+         sequences = output if isinstance(output, torch.Tensor) else output.sequences
+         input_len = input_ids.shape[1]
+         return sequences[:, input_len:]
+
+     def generate_streaming(
+         self,
+         input_features: torch.Tensor,
+         audio_attention_mask: torch.Tensor,
+         system_prompt: Optional[str] = None,
+         **generate_kwargs,
+     ) -> Iterator[str]:
+         """Generate transcription with streaming token output.
+
+         Yields partial transcript strings as tokens are generated.
+         Reduces time-to-first-word by streaming tokens as they're decoded.
+
+         Args:
+             input_features: Mel spectrogram features (batch, n_mels, mel_len)
+             audio_attention_mask: Mask for real vs padded mel frames (batch, mel_len)
+             system_prompt: Optional system prompt override
+             **generate_kwargs: Additional generation arguments
+
+         Yields:
+             Partial transcript text as each token is generated
+         """
+         device = input_features.device
+         batch_size = input_features.shape[0]
+
+         # Encode audio -> flattened embeddings (no per-sample host sync)
+         encoder_lengths = self._compute_encoder_output_lengths(audio_attention_mask)
+         token_counts = self.projector.get_output_length(encoder_lengths).to(torch.long)
+         audio_embeds = self._encode_audio(input_features, token_counts)
+
+         # Build prompt with correct number of audio tokens
+         num_audio_tokens = self._get_num_audio_tokens(audio_attention_mask)
+         audio_placeholder = "<audio>" * num_audio_tokens
+
+         system_prompt = system_prompt or self.system_prompt
+
+         messages: list[dict[str, str]] = []
+         if system_prompt:
+             messages.append({"role": "system", "content": system_prompt})
+         # Audio tokens only (instruction-free)
+         user_content = audio_placeholder
+         if self.TRANSCRIBE_PROMPT:
+             user_content += " " + self.TRANSCRIBE_PROMPT
+         messages.append({"role": "user", "content": user_content})
+
+         chat_result = self.tokenizer.apply_chat_template(
+             messages,
+             tokenize=True,
+             add_generation_prompt=True,
+             return_tensors="pt",
+             return_dict=True,  # needed so chat_result exposes .input_ids
+             enable_thinking=False,  # Disable Qwen3 thinking mode for ASR
+         )
+         input_ids = chat_result.input_ids.to(device)
+
+         if input_ids.dim() == 1:
+             input_ids = input_ids.unsqueeze(0)
+         if input_ids.shape[0] == 1 and batch_size > 1:
+             input_ids = input_ids.expand(batch_size, -1)
+
+         attention_mask = torch.ones_like(input_ids)
+
+         # Get text embeddings and replace audio tokens with audio embeddings
+         inputs_embeds = self.language_model.get_input_embeddings()(input_ids)
+         audio_token_mask = (input_ids == self.audio_token_id).unsqueeze(-1)
+         inputs_embeds = inputs_embeds.masked_scatter(
+             audio_token_mask.to(inputs_embeds.device),
+             audio_embeds.to(inputs_embeds.device, dtype=inputs_embeds.dtype),
+         )
+
+         # Setup streamer for token-by-token output
+         streamer = TextIteratorStreamer(
+             self.tokenizer,
+             skip_prompt=True,
+             skip_special_tokens=True,
+         )
+
+         # Prepare generation kwargs
+         gen_kwargs = {
+             "inputs_embeds": inputs_embeds,
+             "attention_mask": attention_mask,
+             "generation_config": self.generation_config,
+             "streamer": streamer,
+             **generate_kwargs,
+         }
+
+         # Run generation in background thread
+         thread = Thread(target=self.language_model.generate, kwargs=gen_kwargs)
+         thread.start()
+
+         # Yield tokens as they're generated, filtering out <think>...</think> blocks
+         # Start assuming no think block - only filter when we see <think>
+         in_think_block = False
+         buffer = ""
+
+         for text in streamer:
+             buffer += text
+
+             # Check for think block start (in case model outputs think blocks)
+             while "<think>" in buffer:
+                 in_think_block = True
+                 # Yield any text before <think>
+                 before_think = buffer.split("<think>")[0]
+                 if before_think:
+                     yield before_think
+                 buffer = buffer.split("<think>", 1)[-1]
+
+             # Check for think block end
+             while in_think_block and "</think>" in buffer:
+                 in_think_block = False
+                 buffer = buffer.split("</think>", 1)[-1]
+
+             # Yield text if not in think block
+             if not in_think_block and buffer:
+                 yield buffer
+                 buffer = ""
+
+         # Yield any remaining buffer
+         if buffer and not in_think_block:
+             yield buffer
+
+         thread.join()
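The chunk-filtering logic above can be isolated as a small generator. This sketch assumes, as the loop does, that `<think>` and `</think>` each arrive unsplit within a single decoded chunk:

```python
def filter_think_blocks(chunks):
    # Consume text chunks, yielding only text outside <think>...</think>.
    in_think = False
    buffer = ""
    for text in chunks:
        buffer += text
        while "<think>" in buffer:
            in_think = True
            before, buffer = buffer.split("<think>", 1)
            if before:
                yield before  # emit text preceding the think block
        while in_think and "</think>" in buffer:
            in_think = False
            buffer = buffer.split("</think>", 1)[-1]
        if not in_think and buffer:
            yield buffer
            buffer = ""
    if buffer and not in_think:
        yield buffer
```

Text inside an unterminated think block is held in the buffer and dropped at the end, matching the method's behavior.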
+
+     def save_pretrained(self, save_directory: Union[str, Path], **kwargs) -> None:
+         """Save model, tokenizer, and processor."""
+         import shutil
+
+         save_dir = Path(save_directory)
+         save_dir.mkdir(parents=True, exist_ok=True)
+
+         # Update config with actual vocab size
+         self.config.vocab_size = self.language_model.config.vocab_size
+         self.config.text_config.vocab_size = self.language_model.config.vocab_size
+
+         if hasattr(self.audio_tower.config, "num_mel_bins"):
+             self.config.audio_config.num_mel_bins = self.audio_tower.config.num_mel_bins
+
+         # Save model (temporarily remove non-serializable attributes)
+         tokenizer = self.tokenizer
+         del self.tokenizer
+
+         try:
+             super().save_pretrained(save_dir, **kwargs)
+         finally:
+             self.tokenizer = tokenizer
+
+         # Save tokenizer and feature extractor
+         self.tokenizer.save_pretrained(save_dir)
+         self.feature_extractor.save_pretrained(save_dir)
+
+         # Save LoRA adapters if present (creates adapter_model.safetensors and adapter_config.json)
+         # Don't save embedding layers - the <audio> token embedding is never used
+         # (it's replaced with projected audio embeddings before the LLM sees it)
+         if hasattr(self.language_model, "peft_config"):
+             self.language_model.save_pretrained(save_dir, save_embedding_layers=False)
+
+         # Clear base_model_name_or_path in adapter_config.json to prevent HF pipeline
+         # from redirecting to the base LLM repo (like Qwen) which breaks feature
+         # extractor loading for multimodal models. If a repo_id is provided, use that
+         # so the model can be loaded directly from the Hub.
+         adapter_config_path = save_dir / "adapter_config.json"
+         if adapter_config_path.exists():
+             with adapter_config_path.open() as f:
+                 adapter_config = json.load(f)
+
+             # Use repo_id if available, otherwise clear to prevent redirect.
+             # Use empty string instead of None to avoid str(None) -> "None" bug
+             # in some transformers/PEFT versions.
+             repo_id = (
+                 kwargs.get("repo_id")
+                 or kwargs.get("push_to_hub_model_id")
+                 or getattr(self.config, "pretrained_model_path", None)
+                 or ""  # Use empty string instead of None
+             )
+             adapter_config["base_model_name_or_path"] = repo_id
+
+             with adapter_config_path.open("w") as f:
+                 json.dump(adapter_config, f, indent=2)
+
+         # Add processor auto_map to preprocessor_config.json
+         config_path = save_dir / "preprocessor_config.json"
+         if config_path.exists():
+             with config_path.open() as f:
+                 processor_config = json.load(f)
+         else:
+             processor_config = {}
+
+         processor_config.update(
+             {
+                 "processor_class": "ASRProcessor",
+                 "auto_map": {"AutoProcessor": "asr_processing.ASRProcessor"},
+             }
+         )
+
+         with config_path.open("w") as f:
+             json.dump(processor_config, f, indent=2)
+
+         # Copy source files for auto-loading
+         src_dir = Path(__file__).parent
+         for asr_file in src_dir.glob("asr_*.py"):
+             shutil.copy(asr_file, save_dir / asr_file.name)
+         # Copy projectors module
+         shutil.copy(src_dir / "projectors.py", save_dir / "projectors.py")
+         # Copy alignment module
+         shutil.copy(src_dir / "alignment.py", save_dir / "alignment.py")
+         # Copy diarization module
+         shutil.copy(src_dir / "diarization.py", save_dir / "diarization.py")
+
+     def push_to_hub(self, repo_id: str, **kwargs) -> str:
+         """Push model to HuggingFace Hub, ensuring adapter_config points to repo.
+
+         IMPORTANT: Sets base_model_name_or_path in adapter_config.json to repo_id
+         so that transformers pipeline() can load the model correctly. Without this,
+         the pipeline tries to load from "None" which fails.
+         """
+         # Store repo_id in config so save_pretrained can access it
+         self.config.pretrained_model_path = repo_id
+         # Call parent's push_to_hub
+         return super().push_to_hub(repo_id, **kwargs)
+
+
+ # Register with transformers Auto classes
+ # (AutoConfig.register is handled in asr_config.py at module load.)
+ AutoModel.register(ASRConfig, ASRModel)
asr_pipeline.py ADDED
@@ -0,0 +1,324 @@
+ """ASR pipeline for audio-to-text transcription with optional timestamps and diarization."""
+
+ import re
+ from pathlib import Path
+ from typing import Any
+
+ import numpy as np
+ import torch
+ import transformers
+ from transformers.pipelines.audio_utils import ffmpeg_read
+
+ try:
+     from .alignment import ForcedAligner
+     from .asr_modeling import ASRModel
+     from .diarization import SpeakerDiarizer
+ except ImportError:
+     from alignment import ForcedAligner  # type: ignore[no-redef]
+     from asr_modeling import ASRModel  # type: ignore[no-redef]
+     from diarization import SpeakerDiarizer  # type: ignore[no-redef]
+
+ # Re-export for backwards compatibility
+ __all__ = ["ForcedAligner", "SpeakerDiarizer", "ASRPipeline"]
+
+ _THINK_TAG_RE = re.compile(r"<think>.*?</think>\s*", flags=re.DOTALL)
+ _DEFAULT_MIN_REPEATS = 3
+ _TRAILING_CHAR_RE = re.compile(rf"(.)\1{{{_DEFAULT_MIN_REPEATS - 1},}}$")
+ _TRAILING_WORD_RE = re.compile(
+     rf"\b(\w+)(?:\s+\1){{{_DEFAULT_MIN_REPEATS - 1},}}\s*$", re.IGNORECASE
+ )
+
+
+ class ASRPipeline(transformers.AutomaticSpeechRecognitionPipeline):
+     """ASR Pipeline for audio-to-text transcription."""
+
+     model: ASRModel
+
+     def __init__(self, model: ASRModel, **kwargs):
+         """Initialize ASR pipeline.
+
+         Args:
+             model: ASRModel instance for transcription
+             **kwargs: Additional arguments (feature_extractor, tokenizer, device)
+         """
+         feature_extractor = kwargs.pop("feature_extractor", None)
+         tokenizer = kwargs.pop("tokenizer", model.tokenizer)
+
+         if feature_extractor is None:
+             feature_extractor = model.get_processor().feature_extractor
+
+         super().__init__(
+             model=model, feature_extractor=feature_extractor, tokenizer=tokenizer, **kwargs
+         )
+         self._current_audio = None
+
+     def _sanitize_parameters(self, **kwargs):
+         """Intercept our custom parameters before the parent class validates them."""
+         # Remove our custom parameters so the parent doesn't see them
+         kwargs.pop("return_timestamps", None)
+         kwargs.pop("return_speakers", None)
+         kwargs.pop("num_speakers", None)
+         kwargs.pop("min_speakers", None)
+         kwargs.pop("max_speakers", None)
+         kwargs.pop("hf_token", None)
+         kwargs.pop("user_prompt", None)
+         kwargs.pop("diarization_backend", None)
+
+         return super()._sanitize_parameters(**kwargs)
+
+     def __call__(
+         self,
+         inputs,
+         **kwargs,
+     ):
+         """Transcribe audio with optional word-level timestamps and speaker diarization.
+
+         Args:
+             inputs: Audio input (file path, dict with array/sampling_rate, etc.)
+             return_timestamps: If True, return word-level timestamps using forced alignment
+             return_speakers: If True, return speaker labels for each word
+             user_prompt: Custom transcription prompt (default: "Transcribe: ")
+             num_speakers: Exact number of speakers (if known, for diarization)
+             min_speakers: Minimum number of speakers (for diarization)
+             max_speakers: Maximum number of speakers (for diarization)
+             **kwargs: Additional arguments passed to the pipeline
+
+         Returns:
+             Dict with 'text' key, 'words' key if return_timestamps=True,
+             and speaker labels on words if return_speakers=True
+         """
+         # Extract our params before super().__call__ (which will also call _sanitize_parameters)
+         return_timestamps = kwargs.pop("return_timestamps", False)
+         return_speakers = kwargs.pop("return_speakers", False)
+         user_prompt = kwargs.pop("user_prompt", None)
+         diarization_params = {
+             "num_speakers": kwargs.pop("num_speakers", None),
+             "min_speakers": kwargs.pop("min_speakers", None),
+             "max_speakers": kwargs.pop("max_speakers", None),
+         }
+
+         if return_speakers:
+             return_timestamps = True
+
+         # Set custom user prompt if provided
+         original_prompt = None
+         if user_prompt:
+             original_prompt = self.model.TRANSCRIBE_PROMPT
+             self.model.TRANSCRIBE_PROMPT = user_prompt
+
+         # Store audio for timestamp alignment and diarization
+         if return_timestamps or return_speakers:
+             self._current_audio = self._extract_audio(inputs)
+
+         # Run standard transcription
+         result = super().__call__(inputs, **kwargs)
+
+         # Add timestamps if requested
+         if return_timestamps and self._current_audio is not None:
+             text = result.get("text", "")
+             if text:
+                 try:
+                     words = ForcedAligner.align(
+                         self._current_audio["array"],
+                         text,
+                         sample_rate=self._current_audio.get("sampling_rate", 16000),
+                     )
+                     result["words"] = words
+                 except Exception as e:
+                     result["words"] = []
+                     result["timestamp_error"] = str(e)
+             else:
+                 result["words"] = []
+
+         # Add speaker diarization if requested
+         if return_speakers and self._current_audio is not None:
+             try:
+                 # Run diarization
+                 speaker_segments = SpeakerDiarizer.diarize(
+                     self._current_audio["array"],
+                     sample_rate=self._current_audio.get("sampling_rate", 16000),
+                     **{k: v for k, v in diarization_params.items() if v is not None},
+                 )
+                 result["speaker_segments"] = speaker_segments
+
+                 # Assign speakers to words
+                 if result.get("words"):
+                     result["words"] = SpeakerDiarizer.assign_speakers_to_words(
+                         result["words"],
+                         speaker_segments,
+                     )
+             except Exception as e:
+                 result["speaker_segments"] = []
+                 result["diarization_error"] = str(e)
+
+         # Clean up
+         self._current_audio = None
+         if original_prompt is not None:
+             self.model.TRANSCRIBE_PROMPT = original_prompt
+
+         return result
+
+     def _extract_audio(self, inputs) -> dict | None:
+         """Extract audio array from various input formats using HF utilities."""
+         if isinstance(inputs, dict):
+             if "array" in inputs:
+                 return {
+                     "array": inputs["array"],
+                     "sampling_rate": inputs.get("sampling_rate", 16000),
+                 }
+             if "raw" in inputs:
+                 return {
+                     "array": inputs["raw"],
+                     "sampling_rate": inputs.get("sampling_rate", 16000),
+                 }
+         elif isinstance(inputs, str):
+             # File path - load audio using ffmpeg (same as HF pipeline)
+             with Path(inputs).open("rb") as f:
+                 audio = ffmpeg_read(f.read(), sampling_rate=16000)
+             return {"array": audio, "sampling_rate": 16000}
+         elif isinstance(inputs, bytes):
+             audio = ffmpeg_read(inputs, sampling_rate=16000)
+             return {"array": audio, "sampling_rate": 16000}
+         elif isinstance(inputs, np.ndarray):
+             return {"array": inputs, "sampling_rate": 16000}
+
+         return None
+
+     def preprocess(self, inputs, **preprocess_params):
+         """Preprocess audio inputs for the model.
+
+         Args:
+             inputs: Audio input (dict with array, file path, etc.)
+             **preprocess_params: Additional preprocessing parameters
+
+         Yields:
+             Model input dicts with input_features and attention_mask
+         """
+         # Handle dict with "array" key (from datasets)
+         if isinstance(inputs, dict) and "array" in inputs:
+             inputs = {
+                 "raw": inputs["array"],
+                 "sampling_rate": inputs.get("sampling_rate", self.feature_extractor.sampling_rate),
+             }
+
+         for item in super().preprocess(inputs, **preprocess_params):
+             if "is_last" not in item:
+                 item["is_last"] = True
+             yield item
+
+     def _forward(self, model_inputs, **generate_kwargs) -> dict[str, Any]:
+         """Run model forward pass to generate transcription.
+
+         Args:
+             model_inputs: Dict with input_features and attention_mask
+             **generate_kwargs: Generation parameters
+
+         Returns:
+             Dict with generated token IDs
+         """
+         # Extract audio features and is_last flag
+         is_last = model_inputs.pop("is_last", True) if isinstance(model_inputs, dict) else True
+
+         input_features = model_inputs["input_features"].to(self.model.device)
+         audio_attention_mask = model_inputs["attention_mask"].to(self.model.device)
+
+         generated_ids = self.model.generate(
+             input_features=input_features,
+             audio_attention_mask=audio_attention_mask,
+             **generate_kwargs,
+         )
+
+         return {"tokens": generated_ids, "is_last": is_last}
+
+     def postprocess(self, model_outputs, **kwargs) -> dict[str, str]:
+         """Convert model output tokens to text.
+
+         Args:
+             model_outputs: Dict with 'tokens' key containing generated IDs
+             **kwargs: Additional postprocessing parameters
+
+         Returns:
+             Dict with 'text' key containing transcription
+         """
+         # Handle list of outputs (from chunking)
+         if isinstance(model_outputs, list):
+             model_outputs = model_outputs[0] if model_outputs else {}
+
+         tokens = model_outputs.get("tokens")
+         if tokens is None:
+             return super().postprocess(model_outputs, **kwargs)
+
+         if torch.is_tensor(tokens):
+             tokens = tokens.cpu()
+             if tokens.dim() > 1:
+                 tokens = tokens[0]
+
+         # Filter out eos tokens that the tokenizer doesn't recognize as special
+         # (generation_config.eos_token_id may differ from tokenizer.eos_token_id)
+         if hasattr(self, "model") and hasattr(self.model, "generation_config"):
+             eos_ids = self.model.generation_config.eos_token_id
+             if eos_ids is not None:
+                 eos_set = set(eos_ids) if isinstance(eos_ids, list) else {eos_ids}
+                 tokens = [t for t in tokens.tolist() if t not in eos_set]
+
+         text = self.tokenizer.decode(tokens, skip_special_tokens=True).strip()
+         # Strip <think>...</think> tags (Qwen3 doesn't respect the /no_think prompt)
+         if "<think>" in text:
+             text = _THINK_TAG_RE.sub("", text).strip()
+         text = _truncate_repetitions(text)
+         return {"text": text}
+
+
+ def _truncate_repetitions(text: str, min_repeats: int = 3) -> str:
+     """Truncate repeated words/phrases/characters at end of text.
+
+     Detects patterns like:
+     - Repeated words: "the the the the" -> "the"
+     - Repeated phrases: "i am sorry i am sorry i am sorry" -> "i am sorry"
+     - Repeated characters: "444444" -> "4"
+
+     Args:
+         text: Input text to process
+         min_repeats: Minimum repetitions to trigger truncation (default 3)
+
+     Returns:
+         Text with trailing repetitions removed
+     """
+     if not text:
+         return text
+
+     if min_repeats == _DEFAULT_MIN_REPEATS:
+         char_pattern = _TRAILING_CHAR_RE
+         word_pattern = _TRAILING_WORD_RE
+     else:
+         char_pattern = re.compile(rf"(.)\1{{{min_repeats - 1},}}$")
+         word_pattern = re.compile(rf"\b(\w+)(?:\s+\1){{{min_repeats - 1},}}\s*$", re.IGNORECASE)
+
+     text = char_pattern.sub(r"\1", text)
+     while word_pattern.search(text):
+         text = word_pattern.sub(r"\1", text)
+
+     # Truncate repeated phrases (2-20 words) at end,
+     # e.g., "i am sorry i am sorry i am sorry" -> "i am sorry"
+     words = text.split()
+     if len(words) < min_repeats * 2:
+         return text
+
+     # Cheap pre-check: the trailing window must contain duplicates for any phrase
+     # repeat to be possible. len(set(window)) == len(window) means all unique -> no repetition.
+     window = words[-min_repeats * 2 :]
+     if len(set(window)) == len(window):
+         return text
+
+     for phrase_len in range(2, min(21, len(words) // min_repeats + 1)):
+         phrase_escaped = re.escape(" ".join(words[-phrase_len:]))
+         phrase_pattern = re.compile(
+             rf"(^|.*?\s)({phrase_escaped})(?:\s+{phrase_escaped}){{{min_repeats - 1},}}\s*$",
+             re.IGNORECASE,
+         )
+         match = phrase_pattern.match(text)
+         if match:
+             text = (match.group(1) + match.group(2)).strip()
+             break
+
+     return text
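The trailing-repetition cleanup can be exercised in isolation. A minimal version of the character and word rules (mirroring the regexes above, not importing the module) behaves like this:

```python
import re


def truncate_trailing_repeats(text: str, min_repeats: int = 3) -> str:
    """Collapse a trailing run of >= min_repeats identical characters or words to one."""
    char_re = re.compile(rf"(.)\1{{{min_repeats - 1},}}$")
    word_re = re.compile(rf"\b(\w+)(?:\s+\1){{{min_repeats - 1},}}\s*$", re.IGNORECASE)
    text = char_re.sub(r"\1", text)           # "444444" -> "4"
    while word_re.search(text):               # "the the the the" -> "the"
        text = word_re.sub(r"\1", text)
    return text
```

Note the `min_repeats - 1` in the quantifier: the capture group consumes the first occurrence, so `{2,}` additional matches means three or more in total.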
asr_processing.py ADDED
@@ -0,0 +1,132 @@
+ from typing import Optional, Union
+
+ import torch
+ import transformers
+ from transformers import ProcessorMixin
+
+ try:
+     from .asr_config import DEFAULT_ENCODER_CONV_LAYERS, ASRConfig, compute_encoder_output_length
+ except ImportError:
+     from asr_config import (  # type: ignore[no-redef]
+         DEFAULT_ENCODER_CONV_LAYERS,
+         ASRConfig,
+         compute_encoder_output_length,
+     )
+
+
+ class ASRProcessor(ProcessorMixin):
+     """Processor for Whisper-based ASR models."""
+
+     attributes = ["feature_extractor", "tokenizer"]
+     feature_extractor_class = "AutoFeatureExtractor"
+     tokenizer_class = "AutoTokenizer"
+     AUDIO_TOKEN = "<audio>"
+     TRANSCRIBE_PROMPT = "Transcribe the speech to text"
+
+     def __init__(
+         self,
+         feature_extractor,
+         tokenizer,
+         projector=None,
+         encoder_conv_layers: Optional[list] = None,
+     ):
+         """Initialize the ASR processor.
+
+         Args:
+             feature_extractor: Audio feature extractor (WhisperFeatureExtractor)
+             tokenizer: Text tokenizer for the language model
+             projector: Audio projector module (for computing output lengths)
+             encoder_conv_layers: Conv layer specs [(pad, kernel, stride), ...]
+         """
+         self.feature_extractor = feature_extractor
+         self.tokenizer = tokenizer
+         self.audio_token_id = tokenizer.convert_tokens_to_ids(self.AUDIO_TOKEN)
+         self.projector = projector
+         self.encoder_conv_layers = encoder_conv_layers or DEFAULT_ENCODER_CONV_LAYERS
+
+     def _compute_encoder_output_length(self, mel_length: int) -> int:
+         """Compute encoder output length using conv layer formulas."""
+         return compute_encoder_output_length(mel_length, self.encoder_conv_layers)
+
+     def __call__(
+         self,
+         audio: Optional[Union[list, "torch.Tensor"]] = None,
+         text: Optional[str] = None,
+         system_prompt: Optional[str] = None,
+         return_tensors: str = "pt",
+         **kwargs,
+     ) -> dict:
+         """Process audio and text inputs for inference.
+
+         Args:
+             audio: Raw audio waveform(s)
+             text: Target transcription (optional, for training - but use DataCollator instead)
+             system_prompt: Optional system prompt
+             return_tensors: Return format ("pt" for PyTorch)
+
+         Returns:
+             Dict with input_features, input_ids, attention_mask
+         """
+         result = {}
+
+         # Process audio
+         if audio is not None:
+             audio_inputs = self.feature_extractor(
+                 audio,
+                 sampling_rate=getattr(self.feature_extractor, "sampling_rate", 16000),
+                 return_attention_mask=True,
+                 return_tensors=return_tensors,
+                 **kwargs,
+             )
+             result["input_features"] = audio_inputs["input_features"]
+             result["audio_attention_mask"] = audio_inputs["attention_mask"]
+
+             # Use actual audio length (from attention mask) for token count
+             real_mel_len = int(audio_inputs["attention_mask"].sum(dim=-1).max().item())
+             encoder_output_len = self._compute_encoder_output_length(real_mel_len)
+             num_audio_tokens = self.projector.get_output_length(encoder_output_len)
+         else:
+             num_audio_tokens = 0
+
+         # Build prompt with audio token placeholders
+         if num_audio_tokens > 0:
+             user_content = self.AUDIO_TOKEN * num_audio_tokens
+             if self.TRANSCRIBE_PROMPT:
+                 user_content += " " + self.TRANSCRIBE_PROMPT
+         else:
+             user_content = self.TRANSCRIBE_PROMPT or ""
+
+         messages = []
+         if system_prompt:
+             messages.append({"role": "system", "content": system_prompt})
+         messages.append({"role": "user", "content": user_content})
+         if text is not None:
+             messages.append({"role": "assistant", "content": text})
+
+         # Tokenize
+         tokenized = self.tokenizer.apply_chat_template(
+             messages,
+             tokenize=True,
+             add_generation_prompt=(text is None),
+             return_tensors=return_tensors,
+             enable_thinking=False,  # Disable Qwen3 thinking mode for ASR
+         )
+
+         # Handle both tensor and BatchEncoding returns
+         if isinstance(tokenized, torch.Tensor):
+             input_ids = tokenized
+         else:
+             # BatchEncoding or dict-like object
+             input_ids = tokenized.get("input_ids", tokenized.input_ids)
+
+         if input_ids.dim() == 1:
+             input_ids = input_ids.unsqueeze(0)
+
+         result["input_ids"] = input_ids
+         result["attention_mask"] = torch.ones_like(input_ids)
+
+         return result
+
+
+ ASRProcessor.register_for_auto_class()
+ transformers.AutoProcessor.register(ASRConfig, ASRProcessor)
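The processor derives the number of `<audio>` placeholder tokens from the real mel length via `compute_encoder_output_length`. Assuming the standard Conv1d output-length formula and the `encoder_conv_layers` value `[[1, 3, 1], [1, 3, 2]]` from config.json (the exact internals of `compute_encoder_output_length` are not shown in this diff), the length computation sketches as:

```python
def conv_out_length(length: int, conv_layers) -> int:
    """Standard Conv1d length recurrence, one (padding, kernel_size, stride) spec per layer:
    out = (in + 2*pad - kernel) // stride + 1
    """
    for pad, kernel, stride in conv_layers:
        length = (length + 2 * pad - kernel) // stride + 1
    return length


# encoder_conv_layers from config.json: [[1, 3, 1], [1, 3, 2]]
# 30 s of audio at 100 mel frames/s -> 3000 frames -> 1500 encoder positions
frames_30s = conv_out_length(3000, [(1, 3, 1), (1, 3, 2)])
```

The projector's `get_output_length` then further reduces this (e.g. via `projector_pool_stride`) before the placeholder string is built.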
chat_template.jinja ADDED
@@ -0,0 +1,89 @@
+ {%- if tools %}
+     {{- '<|im_start|>system\n' }}
+     {%- if messages[0].role == 'system' %}
+         {{- messages[0].content + '\n\n' }}
+     {%- endif %}
+     {{- "# Tools\n\nYou may call one or more functions to assist with the user query.\n\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>" }}
+     {%- for tool in tools %}
+         {{- "\n" }}
+         {{- tool | tojson }}
+     {%- endfor %}
+     {{- "\n</tools>\n\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call><|im_end|>\n" }}
+ {%- else %}
+     {%- if messages[0].role == 'system' %}
+         {{- '<|im_start|>system\n' + messages[0].content + '<|im_end|>\n' }}
+     {%- endif %}
+ {%- endif %}
+ {%- set ns = namespace(multi_step_tool=true, last_query_index=messages|length - 1) %}
+ {%- for message in messages[::-1] %}
+     {%- set index = (messages|length - 1) - loop.index0 %}
+     {%- if ns.multi_step_tool and message.role == "user" and message.content is string and not(message.content.startswith('<tool_response>') and message.content.endswith('</tool_response>')) %}
+         {%- set ns.multi_step_tool = false %}
+         {%- set ns.last_query_index = index %}
+     {%- endif %}
+ {%- endfor %}
+ {%- for message in messages %}
+     {%- if message.content is string %}
+         {%- set content = message.content %}
+     {%- else %}
+         {%- set content = '' %}
+     {%- endif %}
+     {%- if (message.role == "user") or (message.role == "system" and not loop.first) %}
+         {{- '<|im_start|>' + message.role + '\n' + content + '<|im_end|>' + '\n' }}
+     {%- elif message.role == "assistant" %}
+         {%- set reasoning_content = '' %}
+         {%- if message.reasoning_content is string %}
+             {%- set reasoning_content = message.reasoning_content %}
+         {%- else %}
+             {%- if '</think>' in content %}
+                 {%- set reasoning_content = content.split('</think>')[0].rstrip('\n').split('<think>')[-1].lstrip('\n') %}
+                 {%- set content = content.split('</think>')[-1].lstrip('\n') %}
+             {%- endif %}
+         {%- endif %}
+         {%- if loop.index0 > ns.last_query_index %}
+             {%- if loop.last or (not loop.last and reasoning_content) %}
+                 {{- '<|im_start|>' + message.role + '\n<think>\n' + reasoning_content.strip('\n') + '\n</think>\n\n' + content.lstrip('\n') }}
+             {%- else %}
+                 {{- '<|im_start|>' + message.role + '\n' + content }}
+             {%- endif %}
+         {%- else %}
+             {{- '<|im_start|>' + message.role + '\n' + content }}
+         {%- endif %}
+         {%- if message.tool_calls %}
+             {%- for tool_call in message.tool_calls %}
+                 {%- if (loop.first and content) or (not loop.first) %}
+                     {{- '\n' }}
+                 {%- endif %}
+                 {%- if tool_call.function %}
+                     {%- set tool_call = tool_call.function %}
+                 {%- endif %}
+                 {{- '<tool_call>\n{"name": "' }}
+                 {{- tool_call.name }}
+                 {{- '", "arguments": ' }}
+                 {%- if tool_call.arguments is string %}
+                     {{- tool_call.arguments }}
+                 {%- else %}
+                     {{- tool_call.arguments | tojson }}
+                 {%- endif %}
+                 {{- '}\n</tool_call>' }}
+             {%- endfor %}
+         {%- endif %}
+         {{- '<|im_end|>\n' }}
+     {%- elif message.role == "tool" %}
+         {%- if loop.first or (messages[loop.index0 - 1].role != "tool") %}
+             {{- '<|im_start|>user' }}
+         {%- endif %}
+         {{- '\n<tool_response>\n' }}
+         {{- content }}
+         {{- '\n</tool_response>' }}
+         {%- if loop.last or (messages[loop.index0 + 1].role != "tool") %}
+             {{- '<|im_end|>\n' }}
+         {%- endif %}
+     {%- endif %}
+ {%- endfor %}
+ {%- if add_generation_prompt %}
+     {{- '<|im_start|>assistant\n' }}
+     {%- if true %}
+         {{- '<think>\n\n</think>\n\n' }}
+     {%- endif %}
+ {%- endif %}
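For reference, the template renders messages into ChatML. A hand-built rendering of the simple no-tools, generation-prompt case (an illustration mirroring the branches above, not the template engine itself) looks like this; note the hardcoded `{%- if true %}` branch means an empty `<think>` block is always emitted before a new assistant turn:

```python
def render_chatml(messages, add_generation_prompt=True):
    """Mimic the template's no-tools branch: ChatML turns, then the empty
    <think> block the template always emits before a new assistant turn."""
    parts = [f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n" for m in messages]
    if add_generation_prompt:
        parts.append("<|im_start|>assistant\n<think>\n\n</think>\n\n")
    return "".join(parts)


prompt = render_chatml([{"role": "user", "content": "Transcribe the speech to text"}])
```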
config.json ADDED
@@ -0,0 +1,342 @@
+ {
+   "architectures": [
+     "ASRModel"
+   ],
+   "attn_implementation": "sdpa",
+   "audio_config": {
+     "_name_or_path": "zai-org/GLM-ASR-Nano-2512",
+     "architectures": [
+       "GlmAsrForConditionalGeneration"
+     ],
+     "audio_config": {
+       "_name_or_path": "",
+       "architectures": null,
+       "attention_dropout": 0.0,
+       "chunk_size_feed_forward": 0,
+       "dtype": null,
+       "head_dim": 64,
+       "hidden_act": "gelu",
+       "hidden_size": 1280,
+       "id2label": {
+         "0": "LABEL_0",
+         "1": "LABEL_1"
+       },
+       "initializer_range": 0.02,
+       "intermediate_size": 5120,
+       "is_encoder_decoder": false,
+       "label2id": {
+         "LABEL_0": 0,
+         "LABEL_1": 1
+       },
+       "max_position_embeddings": 1500,
+       "model_type": "glmasr_encoder",
+       "num_attention_heads": 20,
+       "num_hidden_layers": 32,
+       "num_key_value_heads": 20,
+       "num_mel_bins": 128,
+       "output_attentions": false,
+       "output_hidden_states": false,
+       "partial_rotary_factor": 0.5,
+       "problem_type": null,
+       "return_dict": true,
+       "rope_parameters": {
+         "partial_rotary_factor": 0.5,
+         "rope_theta": 10000.0,
+         "rope_type": "default"
+       }
+     },
+     "audio_token_id": 59260,
+     "dtype": "float32",
+     "hidden_size": 2048,
+     "model_type": "glmasr",
+     "num_mel_bins": 128,
+     "projector_hidden_act": "gelu",
+     "text_config": {
+       "_name_or_path": "",
+       "architectures": null,
+       "attention_bias": false,
+       "attention_dropout": 0.0,
+       "bos_token_id": 1,
+       "chunk_size_feed_forward": 0,
+       "dtype": null,
+       "eos_token_id": [
+         59246,
+         59253,
+         59255
+       ],
+       "head_dim": 128,
+       "hidden_act": "silu",
+       "hidden_size": 2048,
+       "id2label": {
+         "0": "LABEL_0",
+         "1": "LABEL_1"
+       },
+       "initializer_range": 0.02,
+       "intermediate_size": 6144,
+       "is_encoder_decoder": false,
+       "label2id": {
+         "LABEL_0": 0,
+         "LABEL_1": 1
+       },
+       "max_position_embeddings": 8192,
+       "mlp_bias": false,
+       "model_type": "llama",
+       "num_attention_heads": 16,
+       "num_hidden_layers": 28,
+       "num_key_value_heads": 4,
+       "output_attentions": false,
+       "output_hidden_states": false,
+       "pad_token_id": null,
+       "pretraining_tp": 1,
+       "problem_type": null,
+       "return_dict": true,
+       "rms_norm_eps": 1e-05,
+       "rope_parameters": {
+         "rope_theta": 10000.0,
+         "rope_type": "default"
+       },
+       "tie_word_embeddings": false,
+       "use_cache": true,
+       "vocab_size": 59264
+     },
+     "vocab_size": 59264
+   },
+   "audio_model_id": "zai-org/GLM-ASR-Nano-2512",
+   "audio_sample_rate": 16000,
+   "auto_map": {
+     "AutoConfig": "asr_config.ASRConfig",
+     "AutoModel": "asr_modeling.ASRModel",
+     "AutoModelForSpeechSeq2Seq": "asr_modeling.ASRModel",
+     "AutoProcessor": "asr_processing.ASRProcessor"
+   },
+   "bos_token_id": null,
+   "custom_pipelines": {
+     "automatic-speech-recognition": {
+       "impl": "asr_pipeline.ASRPipeline",
+       "pt": [
+         "AutoModelForSpeechSeq2Seq"
+       ],
+       "tf": [],
+       "type": "audio"
+     }
+   },
+   "do_sample": false,
+   "downsample_rate": 5,
+   "dtype": "float32",
+   "encoder": {
+     "_name_or_path": "zai-org/GLM-ASR-Nano-2512",
+     "architectures": [
+       "GlmAsrForConditionalGeneration"
+     ],
+     "audio_config": {
+       "_name_or_path": "",
+       "architectures": null,
+       "attention_dropout": 0.0,
+       "chunk_size_feed_forward": 0,
+       "dtype": null,
+       "head_dim": 64,
+       "hidden_act": "gelu",
+       "hidden_size": 1280,
+       "id2label": {
+         "0": "LABEL_0",
+         "1": "LABEL_1"
+       },
+       "initializer_range": 0.02,
+       "intermediate_size": 5120,
+       "is_encoder_decoder": false,
+       "label2id": {
+         "LABEL_0": 0,
+         "LABEL_1": 1
+       },
+       "max_position_embeddings": 1500,
+       "model_type": "glmasr_encoder",
+       "num_attention_heads": 20,
+       "num_hidden_layers": 32,
+       "num_key_value_heads": 20,
+       "num_mel_bins": 128,
+       "output_attentions": false,
+       "output_hidden_states": false,
+       "partial_rotary_factor": 0.5,
+       "problem_type": null,
+       "return_dict": true,
+       "rope_parameters": {
+         "partial_rotary_factor": 0.5,
+         "rope_theta": 10000.0,
+         "rope_type": "default"
+       }
+     },
+     "audio_token_id": 59260,
+     "dtype": "float32",
+     "hidden_size": 2048,
+     "model_type": "glmasr",
+     "num_mel_bins": 128,
+     "projector_hidden_act": "gelu",
+     "text_config": {
+       "_name_or_path": "",
+       "architectures": null,
+       "attention_bias": false,
+       "attention_dropout": 0.0,
+       "bos_token_id": 1,
+       "chunk_size_feed_forward": 0,
+       "dtype": null,
+       "eos_token_id": [
+         59246,
+         59253,
+         59255
+       ],
+       "head_dim": 128,
+       "hidden_act": "silu",
+       "hidden_size": 2048,
+       "id2label": {
+         "0": "LABEL_0",
+         "1": "LABEL_1"
+       },
+       "initializer_range": 0.02,
+       "intermediate_size": 6144,
+       "is_encoder_decoder": false,
+       "label2id": {
+         "LABEL_0": 0,
+         "LABEL_1": 1
+       },
+       "max_position_embeddings": 8192,
+       "mlp_bias": false,
+       "model_type": "llama",
+       "num_attention_heads": 16,
+       "num_hidden_layers": 28,
+       "num_key_value_heads": 4,
+       "output_attentions": false,
+       "output_hidden_states": false,
+       "pad_token_id": null,
+       "pretraining_tp": 1,
+       "problem_type": null,
+       "return_dict": true,
+       "rms_norm_eps": 1e-05,
+       "rope_parameters": {
+         "rope_theta": 10000.0,
+         "rope_type": "default"
+       },
+       "tie_word_embeddings": false,
+       "use_cache": true,
+       "vocab_size": 59264
+     },
+     "vocab_size": 59264
+   },
+   "encoder_conv_layers": [
+     [
+       1,
+       3,
+       1
+     ],
+     [
+       1,
+       3,
+       2
+     ]
+   ],
+   "encoder_dim": 1280,
+   "eos_token_id": 151645,
+   "freeze_language_model": false,
+   "freeze_projector": false,
+   "length_penalty": 1.0,
+   "llm_dim": 1024,
+   "lora_alpha": 32,
+   "lora_dropout": 0.0,
+   "lora_rank": 64,
+   "lora_target_modules": [
+     "q_proj",
+     "v_proj"
+   ],
+   "max_new_tokens": 256,
+   "min_new_tokens": 0,
+   "model_dtype": "float32",
+   "model_type": "asr_model",
+   "no_repeat_ngram_size": 0,
+   "num_beams": 1,
+   "num_experts": 4,
+   "num_experts_per_tok": 2,
+   "pad_token_id": 151643,
+   "pipeline_tag": "automatic-speech-recognition",
+   "pretrained_model_path": "mazesmazes/tiny-audio-embedded-3",
+   "projector_hidden_dim": 2048,
+   "projector_pool_stride": 4,
+   "projector_type": "mlp",
+   "qformer_hidden_size": null,
+   "qformer_intermediate_size": null,
+   "qformer_num_heads": 16,
+   "qformer_num_layers": 2,
+   "qformer_window_size": 15,
+   "repetition_penalty": 1.0,
+   "router_aux_loss_coef": 0.01,
+   "system_prompt": "",
+   "temperature": null,
+   "text_config": {
+     "_name_or_path": "Qwen/Qwen3-0.6B",
+     "architectures": [
+       "Qwen3ForCausalLM"
+     ],
+     "attention_bias": false,
+     "attention_dropout": 0.0,
+     "bos_token_id": null,
+     "dtype": "float32",
+     "eos_token_id": 151645,
+     "head_dim": 128,
+     "hidden_act": "silu",
+     "hidden_size": 1024,
+     "initializer_range": 0.02,
+     "intermediate_size": 3072,
+     "layer_types": [
+       "full_attention",
+       "full_attention",
+       "full_attention",
+       "full_attention",
+       "full_attention",
+       "full_attention",
+       "full_attention",
+       "full_attention",
+       "full_attention",
+       "full_attention",
+       "full_attention",
+       "full_attention",
+       "full_attention",
+       "full_attention",
+       "full_attention",
+       "full_attention",
+       "full_attention",
+       "full_attention",
+       "full_attention",
+       "full_attention",
+       "full_attention",
+       "full_attention",
+       "full_attention",
+       "full_attention",
+       "full_attention",
+       "full_attention",
+       "full_attention",
+       "full_attention"
+     ],
+     "max_position_embeddings": 40960,
+     "max_window_layers": 28,
+     "model_type": "qwen3",
+     "num_attention_heads": 16,
+     "num_hidden_layers": 28,
+     "num_key_value_heads": 8,
+     "pad_token_id": 151643,
+     "rms_norm_eps": 1e-06,
+     "rope_parameters": {
+       "rope_theta": 1000000,
+       "rope_type": "default"
+     },
+     "sliding_window": null,
+     "tie_word_embeddings": true,
+     "use_cache": true,
+     "use_sliding_window": false,
+     "vocab_size": 151670
+   },
+   "text_model_id": "Qwen/Qwen3-0.6B",
+   "top_k": null,
+   "top_p": null,
+   "transformers_version": "5.6.1",
+   "use_cache": false,
+   "use_lora": true,
+   "vocab_size": 151670
+ }
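The LoRA settings in this config pin down the adapter size. Under the standard LoRA parameter count (each adapted `d_in x d_out` matrix gains an `A` of `d_in x r` and a `B` of `r x d_out`) and the `text_config` dimensions (hidden size 1024, 16 query heads and 8 KV heads of head_dim 128, 28 layers), `lora_rank` 64 on `q_proj`/`v_proj` works out to roughly 9.2M trainable parameters:

```python
def lora_param_count(rank: int, num_layers: int, shapes) -> int:
    """Standard LoRA: each adapted matrix adds rank * (d_in + d_out) parameters."""
    return num_layers * sum(rank * (d_in + d_out) for d_in, d_out in shapes)


# q_proj: 1024 -> 16 * 128 = 2048; v_proj: 1024 -> 8 * 128 = 1024 (from text_config)
total = lora_param_count(64, 28, [(1024, 2048), (1024, 1024)])  # 9,175,040
```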
diarization.py ADDED
@@ -0,0 +1,730 @@
+ """Speaker diarization using TEN-VAD + ECAPA-TDNN + spectral clustering.
+
+ Spectral clustering implementation adapted from FunASR/3D-Speaker:
+ https://github.com/alibaba-damo-academy/FunASR
+ MIT License (https://opensource.org/licenses/MIT)
+ """
+
+ import warnings
+
+ import numpy as np
+ import scipy
+ import sklearn.metrics.pairwise
+ import torch
+ from sklearn.cluster._kmeans import k_means
+ from sklearn.preprocessing import normalize
+
+
+ def _get_device() -> torch.device:
+     """Get best available device for inference."""
+     if torch.cuda.is_available():
+         return torch.device("cuda")
+     if torch.backends.mps.is_available():
+         return torch.device("mps")
+     return torch.device("cpu")
+
+
+ class SpectralCluster:
+     """Spectral clustering using unnormalized Laplacian of affinity matrix.
+
+     Adapted from FunASR/3D-Speaker and SpeechBrain implementations.
+     Uses eigenvalue gap to automatically determine number of speakers.
+     """
+
+     def __init__(self, min_num_spks: int = 1, max_num_spks: int = 15, pval: float = 0.06):
+         self.min_num_spks = min_num_spks
+         self.max_num_spks = max_num_spks
+         self.pval = pval
+
+     def __call__(self, embeddings: np.ndarray, oracle_num: int | None = None) -> np.ndarray:
+         """Run spectral clustering on embeddings.
+
+         Args:
+             embeddings: Speaker embeddings of shape [N, D]
+             oracle_num: Optional known number of speakers
+
+         Returns:
+             Cluster labels of shape [N]
+         """
+         # Similarity matrix computation
+         sim_mat = self.get_sim_mat(embeddings)
+
+         # Refine similarity matrix with pval
+         pruned_sim_mat = self.p_pruning(sim_mat)
+
+         # Symmetrization
+         sym_pruned_sim_mat = 0.5 * (pruned_sim_mat + pruned_sim_mat.T)
+
+         # Laplacian calculation
+         laplacian = self.get_laplacian(sym_pruned_sim_mat)
+
+         # Get spectral embeddings
+         emb, num_of_spk = self.get_spec_embs(laplacian, oracle_num)
+
+         # Perform clustering
+         return self.cluster_embs(emb, num_of_spk)
+
+     def get_sim_mat(self, embeddings: np.ndarray) -> np.ndarray:
+         """Compute cosine similarity matrix."""
+         return sklearn.metrics.pairwise.cosine_similarity(embeddings, embeddings)
+
+     def p_pruning(self, affinity: np.ndarray) -> np.ndarray:
+         """Prune low similarity values in affinity matrix (keep top pval fraction)."""
+         n = affinity.shape[0]
+         pval = max(self.pval, 6.0 / n)
+         k_keep = max(1, int(pval * n))
+
+         # Vectorized: find top-k indices per row and zero out the rest
+         top_k_idx = np.argpartition(affinity, -k_keep, axis=1)[:, -k_keep:]
+         mask = np.zeros_like(affinity, dtype=bool)
+         np.put_along_axis(mask, top_k_idx, True, axis=1)
+         affinity[~mask] = 0
+         return affinity
+
+     def get_laplacian(self, sim_mat: np.ndarray) -> np.ndarray:
+         """Compute unnormalized Laplacian matrix."""
+         from scipy.sparse.csgraph import laplacian
+
+         np.fill_diagonal(sim_mat, 0)
+         return laplacian(sim_mat, normed=False)
+
+     def get_spec_embs(
+         self, laplacian: np.ndarray, k_oracle: int | None = None
+     ) -> tuple[np.ndarray, int]:
+         """Extract spectral embeddings from Laplacian."""
+         lambdas, eig_vecs = scipy.linalg.eigh(laplacian)
+
+         if k_oracle is not None:
+             num_of_spk = k_oracle
+         else:
+             lambda_gap_list = self.get_eigen_gaps(
+                 lambdas[self.min_num_spks - 1 : self.max_num_spks + 1]
+             )
+             num_of_spk = np.argmax(lambda_gap_list) + self.min_num_spks
+
+         emb = eig_vecs[:, :num_of_spk]
+         return emb, num_of_spk
+
+     def cluster_embs(self, emb: np.ndarray, k: int) -> np.ndarray:
+         """Cluster spectral embeddings using k-means."""
+         _, labels, _ = k_means(emb, k, n_init=10)
+         return labels
+
+     def get_eigen_gaps(self, eig_vals: np.ndarray) -> np.ndarray:
+         """Compute gaps between consecutive eigenvalues."""
+         return np.diff(eig_vals)
+
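The eigenvalue-gap heuristic used by `get_spec_embs` can be exercised without any of the heavy dependencies. A minimal plain-Python sketch with made-up eigenvalues: for k well-separated speakers, the k smallest Laplacian eigenvalues sit near zero, and the estimated speaker count is the position of the largest consecutive gap.

```python
def estimate_num_speakers(eigenvalues, min_spks=1, max_spks=15):
    """Toy version of the eigenvalue-gap rule: index of the largest gap
    in the sorted eigenvalue window, offset by min_spks."""
    window = eigenvalues[min_spks - 1 : max_spks + 1]
    gaps = [b - a for a, b in zip(window, window[1:])]
    return gaps.index(max(gaps)) + min_spks

# Hypothetical spectrum: three near-zero eigenvalues, then a jump.
lambdas = [0.0, 0.01, 0.02, 1.5, 1.6, 1.7, 1.8]
print(estimate_num_speakers(lambdas))  # 3
```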
+
+ class SpeakerClusterer:
+     """Speaker clustering backend using spectral clustering with speaker merging.
+
+     Features:
+     - Spectral clustering with eigenvalue gap for auto speaker count detection
+     - P-pruning for affinity matrix refinement
+     - Post-clustering speaker merging by cosine similarity
+     """
+
+     def __init__(
+         self,
+         min_num_spks: int = 2,
+         max_num_spks: int = 10,
+         merge_thr: float = 0.90,  # Moderate merging
+     ):
+         self.min_num_spks = min_num_spks
+         self.max_num_spks = max_num_spks
+         self.merge_thr = merge_thr
+         self._spectral_cluster: SpectralCluster | None = None
+
+     def _get_spectral_cluster(self) -> SpectralCluster:
+         """Lazy-load spectral clusterer."""
+         if self._spectral_cluster is None:
+             self._spectral_cluster = SpectralCluster(
+                 min_num_spks=self.min_num_spks,
+                 max_num_spks=self.max_num_spks,
+             )
+         return self._spectral_cluster
+
+     def __call__(self, embeddings: np.ndarray, num_speakers: int | None = None) -> np.ndarray:
+         """Cluster speaker embeddings and return labels.
+
+         Args:
+             embeddings: Speaker embeddings of shape [N, D]
+             num_speakers: Optional oracle number of speakers
+
+         Returns:
+             Cluster labels of shape [N]
+         """
+         if len(embeddings.shape) != 2:
+             raise ValueError(f"Expected 2D array, got shape {embeddings.shape}")
+
+         # Handle edge cases
+         if embeddings.shape[0] == 0:
+             return np.array([], dtype=int)
+         if embeddings.shape[0] == 1:
+             return np.array([0], dtype=int)
+         if embeddings.shape[0] < 6:
+             return np.zeros(embeddings.shape[0], dtype=int)
+
+         # Normalize embeddings and replace NaN/inf
+         embeddings = np.nan_to_num(embeddings, nan=0.0, posinf=0.0, neginf=0.0)
+         embeddings = normalize(embeddings)
+
+         # Run spectral clustering (suppress numerical warnings)
+         spectral = self._get_spectral_cluster()
+
+         # Update min/max for oracle case
+         if num_speakers is not None:
+             spectral.min_num_spks = num_speakers
+             spectral.max_num_spks = num_speakers
+
+         with warnings.catch_warnings():
+             warnings.filterwarnings("ignore", category=RuntimeWarning)
+             labels = spectral(embeddings, oracle_num=num_speakers)
+
+         # Reset min/max
+         if num_speakers is not None:
+             spectral.min_num_spks = self.min_num_spks
+             spectral.max_num_spks = self.max_num_spks
+
+         # Merge similar speakers if no oracle
+         if num_speakers is None:
+             labels = self._merge_by_cos(labels, embeddings, self.merge_thr)
+
+         # Re-index labels sequentially
+         _, labels = np.unique(labels, return_inverse=True)
+
+         return labels
+
+     def _merge_by_cos(self, labels: np.ndarray, embs: np.ndarray, cos_thr: float) -> np.ndarray:
+         """Merge similar speakers by cosine similarity of centroids."""
+         from scipy.cluster.hierarchy import fcluster, linkage
+         from scipy.spatial.distance import pdist
+
+         unique_labels = np.unique(labels)
+         if len(unique_labels) <= 1:
+             return labels
+
+         # Compute normalized speaker centroids
+         centroids = np.array([embs[labels == lbl].mean(0) for lbl in unique_labels])
+         centroids = normalize(centroids)
+
+         # Hierarchical clustering with cosine distance
+         distances = pdist(centroids, metric="cosine")
+         linkage_matrix = linkage(distances, method="average")
+         merged_labels = fcluster(linkage_matrix, t=1.0 - cos_thr, criterion="distance") - 1
+
+         # Map original labels to merged labels
+         label_map = dict(zip(unique_labels, merged_labels))
+         return np.array([label_map[lbl] for lbl in labels])
+
+
+ class LocalSpeakerDiarizer:
+     """Local speaker diarization using TEN-VAD + ECAPA-TDNN + spectral clustering.
+
+     Pipeline:
+     1. TEN-VAD detects speech segments
+     2. Sliding window (0.75 s, 80% overlap) for uniform embedding extraction
+     3. ECAPA-TDNN extracts speaker embeddings per window
+     4. Spectral clustering with eigenvalue gap for auto speaker detection
+     5. Frame-level consensus voting for segment reconstruction
+     6. Post-processing merges short segments to reduce flicker
+
+     Tunable Parameters (class attributes):
+     - WINDOW_SIZE: Embedding extraction window size in seconds
+     - STEP_SIZE: Sliding window step size (overlap = WINDOW_SIZE - STEP_SIZE)
+     - VAD_THRESHOLD: Speech detection threshold (lower = more sensitive)
+     - VAD_MIN_DURATION: Minimum speech segment duration
+     - VAD_MAX_GAP: Maximum gap to bridge between segments
+     - VAD_PAD_ONSET/OFFSET: Padding added to speech segments
+     - VOTING_RATE: Frame resolution for consensus voting
+     - MIN_SEGMENT_DURATION: Minimum final segment duration
+     - SAME_SPEAKER_GAP: Maximum gap to merge same-speaker segments
+     - TAIL_COVERAGE_RATIO: Minimum tail coverage to add extra window
+     """
+
+     _ten_vad_model = None
+     _ecapa_model = None
+     _device = None
+
+     # ==================== TUNABLE PARAMETERS ====================
+
+     # Sliding window for embedding extraction
+     WINDOW_SIZE = 0.75  # seconds - shorter window for finer resolution
+     STEP_SIZE = 0.15  # seconds (80% overlap for more votes)
+     TAIL_COVERAGE_RATIO = 0.1  # Add extra window if tail > this ratio of window
+
+     # VAD hysteresis parameters
+     VAD_THRESHOLD = 0.25  # Balanced threshold
+     VAD_MIN_DURATION = 0.05  # Minimum speech segment duration (seconds)
+     VAD_MAX_GAP = 0.50  # Bridge gaps shorter than this (seconds)
+     VAD_PAD_ONSET = 0.05  # Padding at segment start (seconds)
+     VAD_PAD_OFFSET = 0.05  # Padding at segment end (seconds)
+
+     # Frame-level voting
+     VOTING_RATE = 0.01  # 10ms resolution for consensus voting
+
+     # Post-processing
+     MIN_SEGMENT_DURATION = 0.15  # Minimum final segment duration (seconds)
+     SHORT_SEGMENT_GAP = 0.1  # Gap threshold for merging short segments
+     SAME_SPEAKER_GAP = 0.5  # Gap threshold for merging same-speaker segments
+
+     # ===========================================================
+
+     @classmethod
+     def _get_ten_vad_model(cls):
+         """Lazy-load TEN-VAD model (singleton)."""
+         if cls._ten_vad_model is None:
+             from ten_vad import TenVad
+
+             cls._ten_vad_model = TenVad(hop_size=256, threshold=cls.VAD_THRESHOLD)
+         return cls._ten_vad_model
+
+     @classmethod
+     def _get_device(cls) -> torch.device:
+         """Get the best available device."""
+         if cls._device is None:
+             cls._device = _get_device()
+         return cls._device
+
+     @classmethod
+     def _get_ecapa_model(cls):
+         """Lazy-load ECAPA-TDNN speaker embedding model (singleton)."""
+         if cls._ecapa_model is None:
+             # Suppress torchaudio deprecation warning from SpeechBrain
+             with warnings.catch_warnings():
+                 warnings.filterwarnings("ignore", message="torchaudio._backend")
+                 from speechbrain.inference.speaker import EncoderClassifier
+
+                 device = cls._get_device()
+                 cls._ecapa_model = EncoderClassifier.from_hparams(
+                     source="speechbrain/spkrec-ecapa-voxceleb",
+                     run_opts={"device": str(device)},
+                 )
+
+         return cls._ecapa_model
+
+     @classmethod
+     def diarize(
+         cls,
+         audio: np.ndarray | str,
+         sample_rate: int = 16000,
+         num_speakers: int | None = None,
+         min_speakers: int = 2,
+         max_speakers: int = 10,
+         **_kwargs,
+     ) -> list[dict]:
+         """Run speaker diarization on audio.
+
+         Args:
+             audio: Audio waveform as numpy array or path to audio file
+             sample_rate: Audio sample rate (default 16000)
+             num_speakers: Exact number of speakers (if known)
+             min_speakers: Minimum number of speakers
+             max_speakers: Maximum number of speakers
+
+         Returns:
+             List of dicts with 'speaker', 'start', 'end' keys
+         """
+         # Handle file path input
+         if isinstance(audio, str):
+             import librosa
+
+             audio, sample_rate = librosa.load(audio, sr=16000)
+
+         # Ensure correct sample rate
+         if sample_rate != 16000:
+             import librosa
+
+             audio = librosa.resample(audio, orig_sr=sample_rate, target_sr=16000)
+             sample_rate = 16000
+
+         audio = audio.astype(np.float32)
+         total_duration = len(audio) / sample_rate
+
+         # Step 1: VAD (returns segments and raw frame-level decisions)
+         segments, vad_frames = cls._get_speech_segments(audio, sample_rate)
+         if not segments:
+             return []
+
+         # Step 2: Extract embeddings
+         embeddings, window_segments = cls._extract_embeddings(audio, segments, sample_rate)
+         if len(embeddings) == 0:
+             return []
+
+         # Step 3: Cluster
+         clusterer = SpeakerClusterer(min_num_spks=min_speakers, max_num_spks=max_speakers)
+         labels = clusterer(embeddings, num_speakers)
+
+         # Step 4: Post-process with consensus voting (VAD-aware)
+         return cls._postprocess_segments(window_segments, labels, total_duration, vad_frames)
+
+     @classmethod
+     def _get_speech_segments(
+         cls, audio_array: np.ndarray, sample_rate: int = 16000
+     ) -> tuple[list[dict], list[bool]]:
+         """Get speech segments using TEN-VAD.
+
+         Returns:
+             Tuple of (segments list, vad_frames list of per-frame speech decisions)
+         """
+         vad_model = cls._get_ten_vad_model()
+
+         # Convert to int16 as required by TEN-VAD
+         # Clip to prevent integer overflow
+         if audio_array.dtype != np.int16:
+             audio_int16 = (np.clip(audio_array, -1.0, 1.0) * 32767).astype(np.int16)
+         else:
+             audio_int16 = audio_array
+
+         # Process frame by frame
+         hop_size = 256
+         frame_duration = hop_size / sample_rate
+         speech_frames: list[bool] = []
+
+         for i in range(0, len(audio_int16) - hop_size, hop_size):
+             frame = audio_int16[i : i + hop_size]
+             _, is_speech = vad_model.process(frame)
+             speech_frames.append(is_speech)
+
+         # Convert frame-level decisions to segments
+         segments = []
+         in_speech = False
+         start_idx = 0
+
+         for i, is_speech in enumerate(speech_frames):
+             if is_speech and not in_speech:
+                 start_idx = i
+                 in_speech = True
+             elif not is_speech and in_speech:
+                 start_time = start_idx * frame_duration
+                 end_time = i * frame_duration
+                 segments.append(
+                     {
+                         "start": start_time,
+                         "end": end_time,
+                         "start_sample": int(start_time * sample_rate),
+                         "end_sample": int(end_time * sample_rate),
+                     }
+                 )
+                 in_speech = False
+
+         # Handle trailing speech
+         if in_speech:
+             start_time = start_idx * frame_duration
+             end_time = len(speech_frames) * frame_duration
+             segments.append(
+                 {
+                     "start": start_time,
+                     "end": end_time,
+                     "start_sample": int(start_time * sample_rate),
+                     "end_sample": int(end_time * sample_rate),
+                 }
+             )
+
+         return cls._apply_vad_hysteresis(segments, sample_rate), speech_frames
+
+     @classmethod
+     def _apply_vad_hysteresis(cls, segments: list[dict], sample_rate: int = 16000) -> list[dict]:
+         """Apply hysteresis-like post-processing to VAD segments."""
+         if not segments:
+             return segments
+
+         segments = sorted(segments, key=lambda x: x["start"])
+
+         # Fill short gaps
+         merged = [segments[0].copy()]
+         for seg in segments[1:]:
+             gap = seg["start"] - merged[-1]["end"]
+             if gap <= cls.VAD_MAX_GAP:
+                 merged[-1]["end"] = seg["end"]
+                 merged[-1]["end_sample"] = seg["end_sample"]
+             else:
+                 merged.append(seg.copy())
+
+         # Remove short segments
+         filtered = [seg for seg in merged if (seg["end"] - seg["start"]) >= cls.VAD_MIN_DURATION]
+
+         # Dilate segments (add padding)
+         for seg in filtered:
+             seg["start"] = max(0.0, seg["start"] - cls.VAD_PAD_ONSET)
+             seg["end"] = seg["end"] + cls.VAD_PAD_OFFSET
+             seg["start_sample"] = int(seg["start"] * sample_rate)
+             seg["end_sample"] = int(seg["end"] * sample_rate)
+
+         return filtered
+
+     @classmethod
+     def _extract_embeddings(
+         cls, audio_array: np.ndarray, segments: list[dict], sample_rate: int
+     ) -> tuple[np.ndarray, list[dict]]:
+         """Extract speaker embeddings using sliding windows."""
+         speaker_model = cls._get_ecapa_model()
+
+         window_samples = int(cls.WINDOW_SIZE * sample_rate)
+         step_samples = int(cls.STEP_SIZE * sample_rate)
+
+         embeddings = []
+         window_segments = []
+
+         with torch.no_grad():
+             for seg in segments:
+                 seg_start = seg["start_sample"]
+                 seg_end = seg["end_sample"]
+                 seg_len = seg_end - seg_start
+
+                 # Generate window positions
+                 if seg_len <= window_samples:
+                     starts = [seg_start]
+                     ends = [seg_end]
+                 else:
+                     starts = list(range(seg_start, seg_end - window_samples + 1, step_samples))
+                     ends = [s + window_samples for s in starts]
+
+                     # Cover tail if > TAIL_COVERAGE_RATIO of window remains
+                     if ends and ends[-1] < seg_end:
+                         remainder = seg_end - ends[-1]
+                         if remainder > (window_samples * cls.TAIL_COVERAGE_RATIO):
+                             starts.append(seg_end - window_samples)
+                             ends.append(seg_end)
+
+                 for c_start, c_end in zip(starts, ends):
+                     chunk = audio_array[c_start:c_end]
+
+                     # Pad short chunks with reflection
+                     if len(chunk) < window_samples:
+                         pad_width = window_samples - len(chunk)
+                         chunk = np.pad(chunk, (0, pad_width), mode="reflect")
+
+                     # Extract embedding using SpeechBrain's encode_batch
+                     chunk_tensor = torch.from_numpy(chunk).float().unsqueeze(0)
+                     embedding = (
+                         speaker_model.encode_batch(chunk_tensor).squeeze(0).squeeze(0).cpu().numpy()
+                     )
+
+                     # Validate embedding
+                     if np.isfinite(embedding).all() and np.linalg.norm(embedding) > 1e-8:
+                         embeddings.append(embedding)
+                         window_segments.append(
+                             {
+                                 "start": c_start / sample_rate,
+                                 "end": c_end / sample_rate,
+                             }
+                         )
+
+         # Normalize all embeddings at once
+         if embeddings:
+             return normalize(np.array(embeddings)), window_segments
+         return np.array([]), []
+
+     @classmethod
+     def _resample_vad(cls, vad_frames: list[bool], num_frames: int) -> np.ndarray:
+         """Resample VAD frame decisions to match voting grid resolution.
+
+         VAD operates at 256 samples / 16000 Hz = 16ms per frame.
+         Voting operates at VOTING_RATE (default 10ms) per frame.
+         This maps VAD decisions to the finer voting grid.
+         """
+         if not vad_frames:
+             return np.zeros(num_frames, dtype=bool)
+
+         vad_rate = 256 / 16000  # 16ms per VAD frame
+         vad_arr = np.array(vad_frames)
+
+         # Vectorized: compute VAD frame indices for each voting frame
+         voting_times = np.arange(num_frames) * cls.VOTING_RATE
+         vad_indices = np.clip((voting_times / vad_rate).astype(int), 0, len(vad_arr) - 1)
+         return vad_arr[vad_indices]
+
+     @classmethod
+     def _postprocess_segments(
+         cls,
+         window_segments: list[dict],
+         labels: np.ndarray,
+         total_duration: float,
+         vad_frames: list[bool],
+     ) -> list[dict]:
+         """Post-process using frame-level consensus voting with VAD-aware silence."""
+         if not window_segments or len(labels) == 0:
+             return []
+
+         # Correct labels to be contiguous
+         unique_labels = np.unique(labels)
+         label_map = {old: new for new, old in enumerate(unique_labels)}
+         clean_labels = np.array([label_map[lbl] for lbl in labels])
+         num_speakers = len(unique_labels)
+
+         if num_speakers == 0:
+             return []
+
+         # Create voting grid
+         num_frames = int(np.ceil(total_duration / cls.VOTING_RATE)) + 1
+         votes = np.zeros((num_frames, num_speakers), dtype=np.float32)
+
+         # Accumulate votes
+         for win, label in zip(window_segments, clean_labels):
+             start_frame = int(win["start"] / cls.VOTING_RATE)
+             end_frame = int(win["end"] / cls.VOTING_RATE)
+             end_frame = min(end_frame, num_frames)
+             if start_frame < end_frame:
+                 votes[start_frame:end_frame, label] += 1.0
+
+         # Determine winner per frame
+         frame_speakers = np.argmax(votes, axis=1)
+         max_votes = np.max(votes, axis=1)
+
+         # Resample VAD to voting grid resolution for silence-aware voting
+         vad_resampled = cls._resample_vad(vad_frames, num_frames)
+
+         # Convert frames to segments
+         final_segments = []
+         current_speaker = -1
+         seg_start = 0.0
+
+         for f in range(num_frames):
+             speaker = int(frame_speakers[f])
+             score = max_votes[f]
+
+             # Force silence if VAD says no speech OR no votes
+             if score == 0 or not vad_resampled[f]:
+                 speaker = -1
+
+             if speaker != current_speaker:
+                 if current_speaker != -1:
+                     final_segments.append(
+                         {
+                             "speaker": f"SPEAKER_{current_speaker}",
+                             "start": seg_start,
+                             "end": f * cls.VOTING_RATE,
+                         }
+                     )
+                 current_speaker = speaker
+                 seg_start = f * cls.VOTING_RATE
+
+         # Close last segment
+         if current_speaker != -1:
+             final_segments.append(
+                 {
+                     "speaker": f"SPEAKER_{current_speaker}",
+                     "start": seg_start,
+                     "end": num_frames * cls.VOTING_RATE,
+                 }
+             )
+
+         return cls._merge_short_segments(final_segments)
+
+     @classmethod
+     def _merge_short_segments(cls, segments: list[dict]) -> list[dict]:
+         """Merge short segments to reduce flicker."""
+         if not segments:
+             return []
+
+         clean: list[dict] = []
+         for seg in segments:
+             dur = seg["end"] - seg["start"]
+             if dur < cls.MIN_SEGMENT_DURATION:
+                 if (
+                     clean
+                     and clean[-1]["speaker"] == seg["speaker"]
+                     and seg["start"] - clean[-1]["end"] < cls.SHORT_SEGMENT_GAP
+                 ):
+                     clean[-1]["end"] = seg["end"]
+                 continue
+
+             if (
+                 clean
+                 and clean[-1]["speaker"] == seg["speaker"]
+                 and seg["start"] - clean[-1]["end"] < cls.SAME_SPEAKER_GAP
+             ):
+                 clean[-1]["end"] = seg["end"]
+             else:
+                 clean.append(seg)
+
+         return clean
+
+     @classmethod
+     def assign_speakers_to_words(
+         cls,
+         words: list[dict],
+         speaker_segments: list[dict],
+     ) -> list[dict]:
+         """Assign speaker labels to words based on timestamp overlap.
+
+         Args:
+             words: List of word dicts with 'word', 'start', 'end' keys
+             speaker_segments: List of speaker dicts with 'speaker', 'start', 'end' keys
+
+         Returns:
+             Words list with 'speaker' key added to each word
+         """
+         for word in words:
+             word_mid = (word["start"] + word["end"]) / 2
+
+             # Find the speaker segment that contains this word's midpoint
+             best_speaker = None
+             for seg in speaker_segments:
+                 if seg["start"] <= word_mid <= seg["end"]:
+                     best_speaker = seg["speaker"]
+                     break
+
+             # If no exact match, find closest segment
+             if best_speaker is None and speaker_segments:
+                 min_dist = float("inf")
+                 for seg in speaker_segments:
+                     seg_mid = (seg["start"] + seg["end"]) / 2
+                     dist = abs(word_mid - seg_mid)
+                     if dist < min_dist:
+                         min_dist = dist
+                         best_speaker = seg["speaker"]
+
+             word["speaker"] = best_speaker
+
+         return words
+
+
+ class SpeakerDiarizer:
+     """Speaker diarization using TEN-VAD + ECAPA-TDNN + spectral clustering.
+
+     Example:
+         >>> segments = SpeakerDiarizer.diarize(audio_array)
+         >>> for seg in segments:
+         ...     print(f"{seg['speaker']}: {seg['start']:.2f} - {seg['end']:.2f}")
+     """
+
+     @classmethod
+     def diarize(
+         cls,
+         audio: np.ndarray | str,
+         sample_rate: int = 16000,
+         num_speakers: int | None = None,
+         min_speakers: int | None = None,
+         max_speakers: int | None = None,
+         **_kwargs,
+     ) -> list[dict]:
+         """Run speaker diarization on audio.
+
+         Args:
+             audio: Audio waveform as numpy array or path to audio file
+             sample_rate: Audio sample rate (default 16000)
+             num_speakers: Exact number of speakers (if known)
+             min_speakers: Minimum number of speakers
+             max_speakers: Maximum number of speakers
+
+         Returns:
+             List of dicts with 'speaker', 'start', 'end' keys
+         """
+         return LocalSpeakerDiarizer.diarize(
+             audio,
+             sample_rate=sample_rate,
+             num_speakers=num_speakers,
+             min_speakers=min_speakers or 2,
+             max_speakers=max_speakers or 10,
+         )
+
+     @classmethod
+     def assign_speakers_to_words(
+         cls,
+         words: list[dict],
+         speaker_segments: list[dict],
+     ) -> list[dict]:
+         """Assign speaker labels to words based on timestamp overlap."""
+         return LocalSpeakerDiarizer.assign_speakers_to_words(words, speaker_segments)
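The midpoint rule in `assign_speakers_to_words` is self-contained enough to demonstrate standalone. A plain-Python sketch with made-up timestamps: a word takes the label of the speaker segment containing its midpoint, with a nearest-segment-midpoint fallback when the word falls in a gap.

```python
def assign_speaker(word, segments):
    """Midpoint-containment rule with nearest-midpoint fallback."""
    mid = (word["start"] + word["end"]) / 2
    for seg in segments:
        if seg["start"] <= mid <= seg["end"]:
            return seg["speaker"]
    # Fallback: segment whose midpoint is closest to the word's midpoint.
    return min(segments, key=lambda s: abs(mid - (s["start"] + s["end"]) / 2))["speaker"]

segs = [{"speaker": "SPEAKER_0", "start": 0.0, "end": 2.0},
        {"speaker": "SPEAKER_1", "start": 2.5, "end": 5.0}]
print(assign_speaker({"start": 1.0, "end": 1.4}, segs))  # SPEAKER_0
print(assign_speaker({"start": 2.1, "end": 2.3}, segs))  # gap -> nearest midpoint: SPEAKER_0
```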
generation_config.json ADDED
@@ -0,0 +1,17 @@
+ {
+   "do_sample": false,
+   "eos_token_id": [
+     151645,
+     151645,
+     151643
+   ],
+   "length_penalty": 1.0,
+   "max_new_tokens": 256,
+   "min_new_tokens": 0,
+   "no_repeat_ngram_size": 0,
+   "num_beams": 1,
+   "pad_token_id": 151643,
+   "repetition_penalty": 1.0,
+   "transformers_version": "5.6.1",
+   "use_cache": true
+ }
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0c679dfd91be2c023ffe686e2ef724085115b1a6ccbc51a1f2d8387465e43db0
+ size 2470218536
preprocessor_config.json ADDED
@@ -0,0 +1,19 @@
+ {
+   "chunk_length": 30,
+   "dither": 0.0,
+   "feature_extractor_type": "WhisperFeatureExtractor",
+   "feature_size": 128,
+   "hop_length": 160,
+   "n_fft": 400,
+   "n_samples": 480000,
+   "nb_max_frames": 3000,
+   "padding": false,
+   "padding_side": "right",
+   "padding_value": 0.0,
+   "return_attention_mask": false,
+   "sampling_rate": 16000,
+   "processor_class": "ASRProcessor",
+   "auto_map": {
+     "AutoProcessor": "asr_processing.ASRProcessor"
+   }
+ }
projectors.py ADDED
@@ -0,0 +1,487 @@
+ """Audio projector modules for bridging encoder and decoder embeddings.
+
+ This module contains all projector architectures:
+ - MLPAudioProjector: Simple 2-layer MLP with frame stacking downsampling
+ - MOSAProjector: MOSA-style dense mixture of experts
+ - SharedMoEAudioProjector: Shared expert + sparse routed experts
+ - QFormerAudioProjector: BLIP-2 QFormer with learnable queries (Granite-style)
+ """
+
+ import math
+
+ import torch
+ import torch.nn as nn
+ import torch.nn.functional as F  # noqa: N812
+ from transformers import AutoModel, Blip2QFormerConfig
+ from transformers.models.llama.modeling_llama import LlamaRMSNorm
+
+ # =============================================================================
+ # MLP Projector
+ # =============================================================================
+
+
+ class MLPAudioProjector(nn.Module):
+     """2-layer MLP projector with frame-stacking downsampling (matches GLM-ASR)."""
+
+     def __init__(self, config):
+         """Initialize MLP projector.
+
+         Args:
+             config: ASRConfig with encoder_dim, llm_dim, projector_pool_stride
+         """
+         super().__init__()
+
+         encoder_dim = getattr(config, "encoder_dim", 768)
+         llm_dim = getattr(config, "llm_dim", 2048)
+         self.k = getattr(config, "projector_pool_stride", 4)
+
+         # Frame stacking: concat k adjacent frames then project
+         in_dim = encoder_dim * self.k
+         # Hidden dim defaults to llm_dim, can be overridden via config
+         hidden_dim = getattr(config, "projector_hidden_dim", None) or llm_dim
+         self.linear_1 = nn.Linear(in_dim, hidden_dim, bias=False)
+         self.norm = LlamaRMSNorm(hidden_dim, eps=1e-6)
+         self.act = nn.GELU()
+         self.linear_2 = nn.Linear(hidden_dim, llm_dim, bias=False)
+         # Output norm aligns the projector's RMS with the LM's embed_tokens
+         # distribution. Without it, linear_2's Kaiming-uniform init produces
+         # outputs ~30× quieter than embed rows, which saturates softmax at
+         # audio positions and starves them of gradient.
+         self.norm_2 = LlamaRMSNorm(llm_dim, eps=1e-6)
+
+     def get_output_length(self, input_length: int) -> int:
+         """Calculate output sequence length given input length (matches GLM-ASR)."""
+         # GLM-ASR formula: (L - merge_factor) // merge_factor + 1
+         return (input_length - self.k) // self.k + 1
+
+     def forward(self, x: torch.Tensor) -> torch.Tensor:
+         """Project audio features to LLM embedding space.
+
+         Args:
+             x: Audio encoder output of shape [batch, seq_len, encoder_dim]
+
+         Returns:
+             Projected features of shape [batch, (seq_len - k) // k + 1, llm_dim]
+         """
+         x = _frame_stack(x, self.k)
+         x = self.linear_1(x)
+         x = self.norm(x)
+         x = self.act(x)
+         x = self.linear_2(x)
+         return self.norm_2(x)
+
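The frame-stacking length formula used by `MLPAudioProjector.get_output_length` can be checked without torch: stacking k adjacent frames and striding by k yields `(L - k) // k + 1` output frames, truncating any incomplete tail.

```python
def stacked_length(seq_len, k):
    """Output frames after non-overlapping k-frame stacking (tail truncated)."""
    return (seq_len - k) // k + 1

print(stacked_length(1500, 4))  # 375
print(stacked_length(10, 4))    # 2 (frames 9-10 are dropped)
```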
+
+ # =============================================================================
+ # MoE Projector (MOSA-style)
+ # =============================================================================
+
+
+ def _frame_stack(x: torch.Tensor, k: int) -> torch.Tensor:
+     """Stack k adjacent frames along the feature dim.
+
+     Truncates trailing frames that don't fill a complete k-frame window,
+     matching GLM-ASR's `(seq_len - k) // k + 1` formula.
+     """
+     batch, seq, dim = x.shape
+     out_len = (seq - k) // k + 1
+     return x[:, : out_len * k, :].reshape(batch, out_len, dim * k)
+
+
+ class SimpleAdapter(nn.Module):
+     """Simple 2-layer GELU adapter (from MOSA paper)."""
+
+     def __init__(self, input_dim: int, hidden_dim: int, output_dim: int):
+         super().__init__()
+         self.fc1 = nn.Linear(input_dim, hidden_dim)
+         self.act = nn.GELU()
+         self.fc2 = nn.Linear(hidden_dim, output_dim)
+
+     def forward(self, x: torch.Tensor) -> torch.Tensor:
+         return self.fc2(self.act(self.fc1(x)))
+
+
+ class MOSAProjector(nn.Module):
+     """MOSA-Base projector: simple 2-layer ReLU router with 4 simple adapters.
+
+     Based on "MOSA: Mixtures of Simple Adapters" (arXiv:2508.18998).
+     Uses softmax gating over all experts (dense MoE) with only cross-entropy loss.
+     Uses Conv1d for downsampling (2 layers, stride 2 each = 4x total).
+     """
+
+     ADAPTER_HIDDEN_DIM = 4096
+     ROUTER_HIDDEN_DIM = 512
+     CONV_KERNEL = 3
+     CONV_STRIDE = 2
+     CONV_PADDING = 1
+
+     def __init__(self, config):
+         """Initialize MOSA projector.
+
+         Args:
+             config: ASRConfig with encoder_dim, llm_dim, num_experts
+         """
+         super().__init__()
+         self.encoder_dim = getattr(config, "encoder_dim", None) or 1280
+         self.llm_dim = getattr(config, "llm_dim", None) or 2048
+         self.num_experts = getattr(config, "num_experts", None) or 4  # MOSA-Base uses 4
+
+         conv_kwargs = {
+             "kernel_size": self.CONV_KERNEL,
+             "stride": self.CONV_STRIDE,
+             "padding": self.CONV_PADDING,
+         }
+         self.downsampler = nn.Sequential(
+             nn.Conv1d(self.encoder_dim, self.encoder_dim, **conv_kwargs),
+             nn.GELU(),
+             nn.Conv1d(self.encoder_dim, self.llm_dim, **conv_kwargs),
+             nn.GELU(),
+         )
+
+         self.router = nn.Sequential(
+             nn.Linear(self.llm_dim, self.ROUTER_HIDDEN_DIM),
+             nn.ReLU(),
+             nn.Linear(self.ROUTER_HIDDEN_DIM, self.num_experts),
+         )
+
+         self.experts = nn.ModuleList(
+             [
+                 SimpleAdapter(self.llm_dim, self.ADAPTER_HIDDEN_DIM, self.llm_dim)
+                 for _ in range(self.num_experts)
+             ]
+         )
+
+     def forward(self, x: torch.Tensor) -> torch.Tensor:
+         """Project audio features using mixture of experts.
+
+         Args:
+             x: Audio encoder output of shape [batch, seq_len, encoder_dim]
+
+         Returns:
+             Projected features of shape [batch, out_len, llm_dim]
+         """
+         x = self.downsampler(x.transpose(1, 2)).transpose(1, 2)
+
+         routing_weights = F.softmax(self.router(x), dim=-1)  # (B, out_len, num_experts)
+
+         # Accumulate weighted expert outputs without materializing all experts at once.
+         output = self.experts[0](x) * routing_weights[..., 0:1]
+         for i, expert in enumerate(self.experts[1:], start=1):
+             output = output + expert(x) * routing_weights[..., i : i + 1]
+         return output
+
+     def get_output_length(self, input_length: int) -> int:
+         """Calculate output sequence length after Conv1d downsampling (4x reduction)."""
+         length = input_length
+         for _ in range(2):
+             length = (length + 2 * self.CONV_PADDING - self.CONV_KERNEL) // self.CONV_STRIDE + 1
+         return length
178
+
179
+
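As a sanity check on the 4x reduction claim, the Conv1d length formula used by `get_output_length` can be traced in plain Python. This is a standalone sketch with the class constants (kernel=3, stride=2, padding=1, 2 layers) inlined:

```python
def conv_out_len(length: int, kernel: int = 3, stride: int = 2,
                 padding: int = 1, layers: int = 2) -> int:
    """Standard Conv1d output-length formula, applied once per stride-2 layer."""
    for _ in range(layers):
        length = (length + 2 * padding - kernel) // stride + 1
    return length

print(conv_out_len(100))  # two stride-2 layers: 100 -> 50 -> 25
```

With kernel=3 and padding=1 each layer halves the length (rounding up for odd inputs), so two layers give roughly a 4x reduction.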
180
+ # =============================================================================
+ # MoE Projector (Pure PyTorch with Shared Expert)
+ # =============================================================================
+
+
+ class MoEAudioProjector(nn.Module):
+     """MoE projector with a shared expert (DeepSeek-style), in pure PyTorch.
+
+     Uses 4 sparse experts with top-2 routing plus a shared expert that
+     processes all tokens. No external dependencies (megablocks removed).
+
+     Architecture matches the main branch: norm -> experts(in_dim -> hidden -> out_dim)
+     """
+
+     def __init__(self, config):
+         """Initialize the MoE projector.
+
+         Args:
+             config: ASRConfig with encoder_dim, llm_dim, num_experts, num_experts_per_tok
+         """
+         super().__init__()
+
+         self.k = getattr(config, "projector_pool_stride", 4)
+         self.aux_coef = getattr(config, "router_aux_loss_coef", 0.01)
+
+         # Stability coefficients
+         self.router_z_loss_coef = getattr(
+             config, "router_z_loss_coef", 1e-4
+         )  # Prevents logit explosion
+         self.router_jitter_noise = getattr(
+             config, "router_jitter_noise", 0.01
+         )  # Prevents expert collapse
+
+         in_dim = config.encoder_dim * self.k
+         out_dim = config.llm_dim
+
+         # Expert hidden dim (default = output dim)
+         hidden_dim = getattr(config, "projector_hidden_dim", None) or out_dim
+
+         # Number of experts and top-k selection
+         self.num_experts = getattr(config, "num_experts", 4)
+         self.top_k = getattr(config, "num_experts_per_tok", 2)
+
+         # A. Normalize stacked input (like main branch SharedMoEBlock)
+         self.norm = LlamaRMSNorm(in_dim, eps=1e-6)
+
+         # B. Router (operates on the stacked input)
+         self.router = nn.Linear(in_dim, self.num_experts, bias=False)
+
+         # C. Experts: simple 2-layer MLPs (same as MLPAudioProjector)
+         self.experts = nn.ModuleList(
+             [SimpleAdapter(in_dim, hidden_dim, out_dim) for _ in range(self.num_experts)]
+         )
+
+         # D. Shared expert (same architecture)
+         self.shared_expert = SimpleAdapter(in_dim, hidden_dim, out_dim)
+
+         # E. Initialize weights for stable training
+         self._init_weights()
+
+         self.last_aux_loss = torch.tensor(0.0)
+
+     def _init_weights(self):
+         """Initialize weights for a stable training start."""
+         with torch.no_grad():
+             # Router: small weights -> near-uniform routing probabilities
+             nn.init.normal_(self.router.weight, mean=0.0, std=0.02)
+
+             # Experts: Xavier for fc1, small init for fc2 (output)
+             for expert in [self.shared_expert, *self.experts]:
+                 nn.init.xavier_uniform_(expert.fc1.weight)
+                 nn.init.normal_(expert.fc2.weight, mean=0.0, std=0.01)  # Small init
+
+     def get_output_length(self, input_length: int) -> int:
+         """Calculate output sequence length given input length (matches MLP projector)."""
+         return (input_length - self.k) // self.k + 1
+
+     def forward(self, x: torch.Tensor) -> torch.Tensor:
+         """Project audio features using shared + sparse MoE.
+
+         Args:
+             x: Audio encoder output of shape [batch, seq_len, encoder_dim]
+
+         Returns:
+             Projected features of shape [batch, out_len, llm_dim]
+         """
+         x = _frame_stack(x, self.k)
+         batch, out_len, _ = x.shape
+
+         # Normalize the stacked input (like main branch SharedMoEBlock)
+         x = self.norm(x)
+         flat_x = x.view(-1, x.size(-1))  # [tokens, in_dim]
+
+         # Shared expert (computed first; creates the output tensor)
+         output = self.shared_expert(flat_x)
+
+         # Sparse experts (added in place to the shared output)
+         self.last_aux_loss = self._forward_sparse(flat_x, output)
+
+         return output.view(batch, out_len, -1)
+
+     def _forward_sparse(self, x: torch.Tensor, output: torch.Tensor) -> torch.Tensor:
+         """Stability-hardened sparse expert dispatch (in-place add into `output`).
+
+         Args:
+             x: Flattened input of shape [tokens, dim]
+             output: Output tensor that sparse expert results are added into (in place)
+
+         Returns:
+             Auxiliary loss tensor
+         """
+         # A. Router logits, with jitter during training
+         logits = self.router(x)
+
+         if self.training and self.router_jitter_noise > 0:
+             # Jitter: multiply by uniform noise in (1 - eps, 1 + eps) to shake the
+             # decision boundary and keep the router from locking onto one expert
+             # early in training.
+             noise = torch.empty_like(logits).uniform_(
+                 1.0 - self.router_jitter_noise, 1.0 + self.router_jitter_noise
+             )
+             logits = logits * noise
+
+         # Force float32 for the softmax (bf16/fp16 exponentials can overflow)
+         probs = torch.softmax(logits, dim=-1, dtype=torch.float32).type_as(x)
+
+         # B. Top-k selection
+         top_k_weights, top_k_indices = torch.topk(probs, self.top_k, dim=-1)
+
+         # Normalize the kept weights so they sum to 1.0
+         top_k_weights = top_k_weights / (top_k_weights.sum(dim=-1, keepdim=True) + 1e-6)
+
+         # C. Aux loss + z-loss
+         aux_loss = torch.tensor(0.0, device=x.device)
+
+         if self.training:
+             # Load-balancing loss (batch-size invariant)
+             prob_per_expert = probs.mean(0)  # [num_experts]
+             target = 1.0 / self.num_experts
+             balance_loss = (
+                 self.aux_coef * ((prob_per_expert - target) ** 2).mean() * self.num_experts
+             )
+
+             # Z-loss: penalize large logits to prevent softmax saturation
+             z_loss = self.router_z_loss_coef * torch.logsumexp(logits, dim=-1).pow(2).mean()
+
+             aux_loss = balance_loss + z_loss
+
+         # D. Dispatch loop (in-place add into `output`)
+         for i, expert in enumerate(self.experts):
+             # Boolean mask for tokens that selected expert i
+             mask = top_k_indices == i
+
+             if mask.any():
+                 # token_idx = which tokens; k_idx = 1st or 2nd choice
+                 token_idx, k_idx = torch.where(mask)
+
+                 # Gather inputs and compute
+                 expert_input = x[token_idx]
+                 expert_output = expert(expert_input)
+
+                 # Apply the routing weight
+                 weight = top_k_weights[token_idx, k_idx].unsqueeze(-1)
+                 weighted_output = (expert_output * weight).type_as(output)
+
+                 # Scatter back in place (index_add_ uses atomics and may be
+                 # nondeterministic on CUDA)
+                 output.index_add_(0, token_idx, weighted_output)
+
+         return aux_loss
+
+     def get_aux_loss(self) -> torch.Tensor:
+         """Return the auxiliary load-balancing loss from the last forward pass."""
+         return self.last_aux_loss
+
+
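The top-k selection and renormalization step in `_forward_sparse` can be illustrated standalone. The sketch below (a hypothetical `top2_weights` helper, pure Python instead of torch) shows what a single token's routing weights look like after softmax, top-2 selection, and renormalization:

```python
import math

def top2_weights(logits):
    """Softmax over all experts, keep the 2 largest, renormalize to sum to 1."""
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]  # numerically stable softmax
    probs = [e / sum(exps) for e in exps]
    top2 = sorted(range(len(probs)), key=probs.__getitem__, reverse=True)[:2]
    total = sum(probs[i] for i in top2)
    return {i: probs[i] / total for i in top2}

weights = top2_weights([2.0, 1.0, 0.5, -1.0])  # experts 0 and 1 win
print(weights)
```

In the module itself the same renormalization is done with `torch.topk` plus a division by the kept weights' sum (with a 1e-6 epsilon), so each token distributes exactly unit weight over its two chosen experts.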
354
+ # =============================================================================
+ # QFormer Projector (Granite-style)
+ # =============================================================================
+
+
+ class QFormerAudioProjector(nn.Module):
+     """
+     BLIP-2 QFormer projector with learnable queries.
+
+     Based on GraniteSpeechEncoderProjector: uses a QFormer model with learnable
+     query embeddings to compress and project audio encoder outputs. The audio
+     sequence is processed in windows and downsampled via cross-attention.
+     """
+
+     def __init__(self, config):
+         """Initialize the QFormer projector.
+
+         Args:
+             config: ASRConfig with encoder_dim, llm_dim, qformer_* settings
+         """
+         super().__init__()
+
+         encoder_dim = config.encoder_dim
+         llm_dim = config.llm_dim
+
+         # Window and downsampling parameters (Granite defaults: window=15, downsample=5)
+         self.window_size = getattr(config, "qformer_window_size", 15)
+         self.downsample_rate = getattr(config, "downsample_rate", 5)
+         self.num_queries = self.window_size // self.downsample_rate
+
+         # QFormer hidden size (matches the encoder for cross-attention)
+         qformer_hidden = getattr(config, "qformer_hidden_size", None) or encoder_dim
+         qformer_num_layers = getattr(config, "qformer_num_layers", 2)
+         qformer_num_heads = getattr(config, "qformer_num_heads", 16)
+         qformer_intermediate = getattr(config, "qformer_intermediate_size", None) or (
+             qformer_hidden * 4
+         )
+
+         # Learnable query embeddings (Granite uses std=1.0)
+         self.query = nn.Parameter(torch.zeros(1, self.num_queries, qformer_hidden))
+         self.query.data.normal_(mean=0.0, std=1.0)
+
+         # Optional projection if encoder dim != qformer hidden
+         if encoder_dim != qformer_hidden:
+             self.encoder_proj = nn.Linear(encoder_dim, qformer_hidden, bias=False)
+         else:
+             self.encoder_proj = None
+
+         # Configure the QFormer to match Granite's exact config
+         qformer_config = Blip2QFormerConfig(
+             hidden_size=qformer_hidden,
+             num_hidden_layers=qformer_num_layers,
+             num_attention_heads=qformer_num_heads,
+             intermediate_size=qformer_intermediate,
+             encoder_hidden_size=qformer_hidden,
+             cross_attention_frequency=1,
+             # Granite-specific settings
+             hidden_act="gelu",
+             attention_probs_dropout_prob=0.1,
+             hidden_dropout_prob=0.1,
+             layer_norm_eps=1e-12,
+             initializer_range=0.02,
+         )
+         self.qformer = AutoModel.from_config(qformer_config)
+
+         # Final projection to the LLM dimension (Granite uses bias=True)
+         self.linear = nn.Linear(qformer_hidden, llm_dim)
+
+     def get_output_length(self, input_length):
+         """Calculate output sequence length given input length.
+
+         Accepts either Python ints or torch tensors; uses integer ceiling
+         division so the same formula works for both (math.ceil would fail
+         on batched tensors).
+         """
+         nblocks = (input_length + self.window_size - 1) // self.window_size
+         return nblocks * self.num_queries
+
+     def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
+         """
+         Args:
+             hidden_states: [batch_size, seq_len, encoder_dim]
+
+         Returns:
+             projected: [batch_size, num_output_tokens, llm_dim]
+         """
+         batch_size, seq_len, _ = hidden_states.size()
+
+         # Ensure a float dtype the QFormer can consume
+         target_dtype = self.query.dtype
+         if hidden_states.dtype != target_dtype:
+             hidden_states = hidden_states.to(target_dtype)
+
+         # Optional encoder projection
+         if self.encoder_proj is not None:
+             hidden_states = self.encoder_proj(hidden_states)
+
+         # Compute the number of windows and pad to fit
+         nblocks = math.ceil(seq_len / self.window_size)
+         pad = nblocks * self.window_size - seq_len
+         if pad > 0:
+             hidden_states = F.pad(hidden_states, (0, 0, 0, pad), "constant", 0)
+
+         # Reshape to process each window: [batch * nblocks, window_size, dim]
+         effective_batch = batch_size * nblocks
+         hidden_states = hidden_states.view(effective_batch, self.window_size, -1)
+
+         # Expand queries to match the effective batch size
+         query_embeds = self.query.expand(effective_batch, -1, -1)
+
+         # QFormer cross-attention
+         query_output = self.qformer(
+             query_embeds=query_embeds,
+             encoder_hidden_states=hidden_states,
+             return_dict=True,
+         )
+
+         # Reshape back: [batch, nblocks * num_queries, hidden]
+         output_tokens = nblocks * self.num_queries
+         query_proj = query_output.last_hidden_state.view(batch_size, output_tokens, -1)
+
+         # Project to the LLM dimension
+         return self.linear(query_proj)
+
+
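The windowed compression above yields a fixed token budget per window. The length arithmetic of `get_output_length` can be sketched in plain Python with the Granite defaults (window=15, downsample=5, so 3 queries per window):

```python
def qformer_out_len(seq_len: int, window_size: int = 15, downsample_rate: int = 5) -> int:
    num_queries = window_size // downsample_rate          # 3 queries per window
    nblocks = (seq_len + window_size - 1) // window_size  # ceil(seq_len / window_size)
    return nblocks * num_queries

print(qformer_out_len(100))  # ceil(100/15) = 7 windows, 7 * 3 = 21 output tokens
```

Unlike `_frame_stack`, a partial trailing window is padded rather than truncated, so it still contributes a full set of query tokens.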
478
+ # =============================================================================
+ # Projector Registry
+ # =============================================================================
+
+ PROJECTOR_CLASSES = {
+     "mlp": MLPAudioProjector,
+     "mosa": MOSAProjector,
+     "moe": MoEAudioProjector,
+     "qformer": QFormerAudioProjector,
+ }
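A typical lookup against this registry might look like the following sketch. `resolve_projector` is a hypothetical helper (the real construction site lives elsewhere in the repo), and string placeholders stand in for the `nn.Module` classes so the example runs standalone:

```python
# Stand-in registry with string placeholders; the real dict maps names
# to the projector classes defined above.
PROJECTOR_CLASSES = {
    "mlp": "MLPAudioProjector",
    "mosa": "MOSAProjector",
    "moe": "MoEAudioProjector",
    "qformer": "QFormerAudioProjector",
}

def resolve_projector(name: str):
    """Case-insensitive registry lookup with a helpful error on unknown names."""
    try:
        return PROJECTOR_CLASSES[name.lower()]
    except KeyError:
        raise ValueError(
            f"Unknown projector {name!r}; expected one of {sorted(PROJECTOR_CLASSES)}"
        ) from None

print(resolve_projector("MoE"))  # MoEAudioProjector
```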
tokenizer.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:33b674fb8444e2553eae8f1b261093371920a28ef75b5c18f4deb3f9217ed0ba
+ size 11422834
tokenizer_config.json ADDED
@@ -0,0 +1,18 @@
+ {
+   "add_prefix_space": false,
+   "backend": "tokenizers",
+   "bos_token": null,
+   "clean_up_tokenization_spaces": false,
+   "eos_token": "<|im_end|>",
+   "errors": "replace",
+   "extra_special_tokens": [
+     "<audio>"
+   ],
+   "is_local": false,
+   "local_files_only": false,
+   "model_max_length": 131072,
+   "pad_token": "<|endoftext|>",
+   "split_special_tokens": false,
+   "tokenizer_class": "Qwen2Tokenizer",
+   "unk_token": null
+ }