Commit 153bac1 (verified) · Parent(s): f7eadf8
mazesmazes committed

Training in progress - step 1000

.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ tokenizer.json filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,199 @@
+ ---
+ library_name: transformers
+ tags: []
+ ---
+
+ # Model Card for Model ID
+
+ <!-- Provide a quick summary of what the model is/does. -->
+
+
+
+ ## Model Details
+
+ ### Model Description
+
+ <!-- Provide a longer summary of what this model is. -->
+
+ This is the model card of a 🤗 transformers model that has been pushed to the Hub. This model card has been automatically generated.
+
+ - **Developed by:** [More Information Needed]
+ - **Funded by [optional]:** [More Information Needed]
+ - **Shared by [optional]:** [More Information Needed]
+ - **Model type:** [More Information Needed]
+ - **Language(s) (NLP):** [More Information Needed]
+ - **License:** [More Information Needed]
+ - **Finetuned from model [optional]:** [More Information Needed]
+
+ ### Model Sources [optional]
+
+ <!-- Provide the basic links for the model. -->
+
+ - **Repository:** [More Information Needed]
+ - **Paper [optional]:** [More Information Needed]
+ - **Demo [optional]:** [More Information Needed]
+
+ ## Uses
+
+ <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
+
+ ### Direct Use
+
+ <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
+
+ [More Information Needed]
+
+ ### Downstream Use [optional]
+
+ <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
+
+ [More Information Needed]
+
+ ### Out-of-Scope Use
+
+ <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
+
+ [More Information Needed]
+
+ ## Bias, Risks, and Limitations
+
+ <!-- This section is meant to convey both technical and sociotechnical limitations. -->
+
+ [More Information Needed]
+
+ ### Recommendations
+
+ <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
+
+ Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
+
+ ## How to Get Started with the Model
+
+ Use the code below to get started with the model.
+
+ [More Information Needed]
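+
+ As a starting point, here is a minimal sketch using the custom `automatic-speech-recognition` pipeline this repository registers in `asr_config.py` (the Hub repo ID and audio path are placeholders):
+
+ ```python
+ from transformers import pipeline
+
+ asr = pipeline(
+     "automatic-speech-recognition",
+     model="user/repo",       # placeholder: this model's Hub ID
+     trust_remote_code=True,  # loads the repo's custom ASRModel / ASRPipeline code
+ )
+ print(asr("sample.wav")["text"])
+ ```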
+
+ ## Training Details
+
+ ### Training Data
+
+ <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
+
+ [More Information Needed]
+
+ ### Training Procedure
+
+ <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
+
+ #### Preprocessing [optional]
+
+ [More Information Needed]
+
+
+ #### Training Hyperparameters
+
+ - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
+
+ #### Speeds, Sizes, Times [optional]
+
+ <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
+
+ [More Information Needed]
+
+ ## Evaluation
+
+ <!-- This section describes the evaluation protocols and provides the results. -->
+
+ ### Testing Data, Factors & Metrics
+
+ #### Testing Data
+
+ <!-- This should link to a Dataset Card if possible. -->
+
+ [More Information Needed]
+
+ #### Factors
+
+ <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
+
+ [More Information Needed]
+
+ #### Metrics
+
+ <!-- These are the evaluation metrics being used, ideally with a description of why. -->
+
+ [More Information Needed]
+
+ ### Results
+
+ [More Information Needed]
+
+ #### Summary
+
+
+
+ ## Model Examination [optional]
+
+ <!-- Relevant interpretability work for the model goes here -->
+
+ [More Information Needed]
+
+ ## Environmental Impact
+
+ <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
+
+ Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
+
+ - **Hardware Type:** [More Information Needed]
+ - **Hours used:** [More Information Needed]
+ - **Cloud Provider:** [More Information Needed]
+ - **Compute Region:** [More Information Needed]
+ - **Carbon Emitted:** [More Information Needed]
+
+ ## Technical Specifications [optional]
+
+ ### Model Architecture and Objective
+
+ [More Information Needed]
+
+ ### Compute Infrastructure
+
+ [More Information Needed]
+
+ #### Hardware
+
+ [More Information Needed]
+
+ #### Software
+
+ [More Information Needed]
+
+ ## Citation [optional]
+
+ <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
+
+ **BibTeX:**
+
+ [More Information Needed]
+
+ **APA:**
+
+ [More Information Needed]
+
+ ## Glossary [optional]
+
+ <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
+
+ [More Information Needed]
+
+ ## More Information [optional]
+
+ [More Information Needed]
+
+ ## Model Card Authors [optional]
+
+ [More Information Needed]
+
+ ## Model Card Contact
+
+ [More Information Needed]
alignment.py ADDED
@@ -0,0 +1,286 @@
+ """Forced alignment for word-level timestamps using Wav2Vec2."""
+
+ import numpy as np
+ import torch
+
+
+ def _get_device() -> str:
+     """Get best available device for non-transformers models."""
+     if torch.cuda.is_available():
+         return "cuda"
+     if torch.backends.mps.is_available():
+         return "mps"
+     return "cpu"
+
+
+ class ForcedAligner:
+     """Lazy-loaded forced aligner for word-level timestamps using torchaudio wav2vec2.
+
+     Uses Viterbi trellis algorithm for optimal alignment path finding.
+     """
+
+     _bundle = None
+     _model = None
+     _labels = None
+     _dictionary = None
+
+     @classmethod
+     def get_instance(cls, device: str = "cuda"):
+         """Get or create the forced alignment model (singleton).
+
+         Args:
+             device: Device to run model on ("cuda" or "cpu")
+
+         Returns:
+             Tuple of (model, labels, dictionary)
+         """
+         if cls._model is None:
+             import torchaudio
+
+             cls._bundle = torchaudio.pipelines.WAV2VEC2_ASR_BASE_960H
+             cls._model = cls._bundle.get_model().to(device)
+             cls._model.eval()
+             cls._labels = cls._bundle.get_labels()
+             cls._dictionary = {c: i for i, c in enumerate(cls._labels)}
+         return cls._model, cls._labels, cls._dictionary
+
+     @staticmethod
+     def _get_trellis(emission: torch.Tensor, tokens: list[int], blank_id: int = 0) -> torch.Tensor:
+         """Build trellis for forced alignment using the Viterbi (max-product) recursion.
+
+         The trellis[t, j] represents the log probability of the best path that
+         aligns the first j tokens to the first t frames.
+
+         Args:
+             emission: Log-softmax emission matrix of shape (num_frames, num_classes)
+             tokens: List of target token indices
+             blank_id: Index of the blank/CTC token (default 0)
+
+         Returns:
+             Trellis matrix of shape (num_frames + 1, num_tokens + 1)
+         """
+         num_frames = emission.size(0)
+         num_tokens = len(tokens)
+
+         trellis = torch.full((num_frames + 1, num_tokens + 1), -float("inf"))
+         trellis[0, 0] = 0
+
+         for t in range(num_frames):
+             for j in range(num_tokens + 1):
+                 # Stay: emit blank and stay at j tokens
+                 stay = trellis[t, j] + emission[t, blank_id]
+
+                 # Move: emit token j and advance to j+1 tokens
+                 move = trellis[t, j - 1] + emission[t, tokens[j - 1]] if j > 0 else -float("inf")
+
+                 trellis[t + 1, j] = max(stay, move)  # Viterbi: take best path
+
+         return trellis
+
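+     # Illustration (assumed numbers, not from the source): for 4 frames and the
+     # two-token sequence "HI", the trellis is 5 x 3; trellis[4, 2] holds the best
+     # log-probability of emitting both tokens within the 4 frames, interleaving
+     # "stay" (blank) and "move" (emit the next token) steps.
+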
+     @staticmethod
+     def _backtrack(
+         trellis: torch.Tensor, emission: torch.Tensor, tokens: list[int], blank_id: int = 0
+     ) -> list[tuple[int, float, float]]:
+         """Backtrack through trellis to find optimal forced monotonic alignment.
+
+         Guarantees:
+         - All tokens are emitted exactly once
+         - Strictly monotonic: each token's frames come after previous token's
+         - No frame skipping or token teleporting
+
+         Returns list of (token_id, start_frame, end_frame) for each token.
+         """
+         num_frames = emission.size(0)
+         num_tokens = len(tokens)
+
+         if num_tokens == 0:
+             return []
+
+         # Find the best ending point (should be at num_tokens)
+         # But verify trellis reached a valid state
+         if trellis[num_frames, num_tokens] == -float("inf"):
+             # Alignment failed - fall back to uniform distribution
+             frames_per_token = num_frames / num_tokens
+             return [
+                 (tokens[i], i * frames_per_token, (i + 1) * frames_per_token)
+                 for i in range(num_tokens)
+             ]
+
+         # Backtrack: find where each token transition occurred
+         # token_frames[i] = frames at which token i was emitted
+         token_frames: list[list[int]] = [[] for _ in range(num_tokens)]
+
+         t = num_frames
+         j = num_tokens
+
+         while t > 0 and j > 0:
+             # Check: did we transition from j-1 to j at frame t-1?
+             stay_score = trellis[t - 1, j] + emission[t - 1, blank_id]
+             move_score = trellis[t - 1, j - 1] + emission[t - 1, tokens[j - 1]]
+
+             if move_score >= stay_score:
+                 # Token j-1 was emitted at frame t-1
+                 token_frames[j - 1].append(t - 1)
+                 j -= 1
+             t -= 1
+
+         # Handle any remaining tokens at the start (edge case)
+         while j > 0:
+             token_frames[j - 1].append(0)
+             j -= 1
+
+         # We appended in reverse-time order; restore monotonic order
+         for frames in token_frames:
+             frames.reverse()
+
+         # Convert to spans
+         token_spans: list[tuple[int, float, float]] = []
+         for token_idx, frames in enumerate(token_frames):
+             if not frames:
+                 # Token never emitted - assign minimal span after previous
+                 if token_spans:
+                     prev_end = token_spans[-1][2]
+                     frames = [int(prev_end)]
+                 else:
+                     frames = [0]
+
+             token_id = tokens[token_idx]
+             start_frame = float(min(frames))
+             end_frame = float(max(frames)) + 1.0
+             token_spans.append((token_id, start_frame, end_frame))
+
+         return token_spans
+
+     # Offset compensation for Wav2Vec2-BASE systematic bias (in seconds)
+     # Calibrated on librispeech-alignments dataset
+     START_OFFSET = 0.06  # Subtracted from start times (shift earlier)
+     END_OFFSET = -0.03  # Also subtracted, so this negative value shifts end times later
+
+     @classmethod
+     def align(
+         cls,
+         audio: np.ndarray,
+         text: str,
+         sample_rate: int = 16000,
+         _language: str = "eng",
+         _batch_size: int = 16,
+     ) -> list[dict]:
+         """Align transcript to audio and return word-level timestamps.
+
+         Uses Viterbi trellis algorithm for optimal forced alignment.
+
+         Args:
+             audio: Audio waveform as numpy array
+             text: Transcript text to align
+             sample_rate: Audio sample rate (default 16000)
+             _language: ISO-639-3 language code (default "eng" for English, unused)
+             _batch_size: Batch size for alignment model (unused)
+
+         Returns:
+             List of dicts with 'word', 'start', 'end' keys
+         """
+         import torchaudio
+
+         device = _get_device()
+         model, _labels, dictionary = cls.get_instance(device)
+         assert cls._bundle is not None and dictionary is not None  # Initialized by get_instance
+
+         # Convert audio to tensor (copy to ensure array is writable)
+         if isinstance(audio, np.ndarray):
+             waveform = torch.from_numpy(audio.copy()).float()
+         else:
+             waveform = audio.clone().float()
+
+         # Ensure 2D (channels, time)
+         if waveform.dim() == 1:
+             waveform = waveform.unsqueeze(0)
+
+         # Resample if needed (wav2vec2 expects 16kHz)
+         if sample_rate != cls._bundle.sample_rate:
+             waveform = torchaudio.functional.resample(
+                 waveform, sample_rate, cls._bundle.sample_rate
+             )
+
+         waveform = waveform.to(device)
+
+         # Get emissions from model
+         with torch.inference_mode():
+             emissions, _ = model(waveform)
+             emissions = torch.log_softmax(emissions, dim=-1)
+
+         emission = emissions[0].cpu()
+
+         # Normalize text: uppercase, keep only valid characters
+         transcript = text.upper()
+
+         # Build tokens from transcript (including word separators)
+         tokens = []
+         for char in transcript:
+             if char in dictionary:
+                 tokens.append(dictionary[char])
+             elif char == " ":
+                 tokens.append(dictionary.get("|", dictionary.get(" ", 0)))
+
+         if not tokens:
+             return []
+
+         # Build Viterbi trellis and backtrack for optimal path
+         trellis = cls._get_trellis(emission, tokens, blank_id=0)
+         alignment_path = cls._backtrack(trellis, emission, tokens, blank_id=0)
+
+         # Convert frame indices to time (model stride is 320 samples at 16kHz = 20ms)
+         frame_duration = 320 / cls._bundle.sample_rate
+
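+         # Example (illustrative arithmetic): frame 50 -> 50 * 0.02 s = 1.00 s;
+         # after the start offset below, the reported start is
+         # max(0.0, 1.00 - 0.06) = 0.94 s.
+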
+         # Apply separate offset compensation for start/end (Wav2Vec2 systematic bias)
+         start_offset = cls.START_OFFSET
+         end_offset = cls.END_OFFSET
+
+         # Group aligned tokens into words based on pipe separator
+         words = text.split()
+         word_timestamps = []
+         current_word_start = None
+         current_word_end = None
+         word_idx = 0
+         separator_id = dictionary.get("|", dictionary.get(" ", 0))
+
+         for token_id, start_frame, end_frame in alignment_path:
+             if token_id == separator_id:  # Word separator
+                 if (
+                     current_word_start is not None
+                     and current_word_end is not None
+                     and word_idx < len(words)
+                 ):
+                     start_time = max(0.0, current_word_start * frame_duration - start_offset)
+                     end_time = max(0.0, current_word_end * frame_duration - end_offset)
+                     word_timestamps.append(
+                         {
+                             "word": words[word_idx],
+                             "start": start_time,
+                             "end": end_time,
+                         }
+                     )
+                     word_idx += 1
+                 current_word_start = None
+                 current_word_end = None
+             else:
+                 if current_word_start is None:
+                     current_word_start = start_frame
+                 current_word_end = end_frame
+
+         # Don't forget the last word
+         if (
+             current_word_start is not None
+             and current_word_end is not None
+             and word_idx < len(words)
+         ):
+             start_time = max(0.0, current_word_start * frame_duration - start_offset)
+             end_time = max(0.0, current_word_end * frame_duration - end_offset)
+             word_timestamps.append(
+                 {
+                     "word": words[word_idx],
+                     "start": start_time,
+                     "end": end_time,
+                 }
+             )
+
+         return word_timestamps
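+
+ # Usage sketch (illustrative, not from the source):
+ #   import numpy as np
+ #   audio = np.zeros(16000, dtype=np.float32)  # placeholder: 1 s of silence
+ #   ForcedAligner.align(audio, "hello world", sample_rate=16000)
+ #   # -> [{"word": "hello", "start": ..., "end": ...}, ...]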
asr_config.py ADDED
@@ -0,0 +1,229 @@
+ from typing import Optional
+
+ import transformers
+
+ # Default conv layers for Whisper/GLM-ASR audio encoders: [(pad, kernel, stride), ...]
+ DEFAULT_ENCODER_CONV_LAYERS = [(1, 3, 1), (1, 3, 2)]
+
+
+ def compute_encoder_output_length(mel_length, conv_layers=None):
+     """Apply encoder conv layer formulas to compute output length.
+
+     Works with both Python ints and torch tensors of mel lengths; the formula
+     `(L + 2*p - (k-1) - 1) // s + 1` per layer is identical for both.
+     """
+     layers = conv_layers if conv_layers is not None else DEFAULT_ENCODER_CONV_LAYERS
+     length = mel_length
+     for padding, kernel_size, stride in layers:
+         length = (length + 2 * padding - (kernel_size - 1) - 1) // stride + 1
+     return length
+
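+ # Worked example (illustrative arithmetic): with the default layers
+ # [(1, 3, 1), (1, 3, 2)], a 3000-frame mel input gives
+ # (3000 + 2 - 2 - 1) // 1 + 1 = 3000 after the first conv and
+ # (3000 + 2 - 2 - 1) // 2 + 1 = 1500 after the second, i.e. the
+ # stride-2 layer halves the sequence length.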
+
+ class ASRConfig(transformers.PretrainedConfig):
+     """Configuration class for the ASR model.
+
+     This config combines settings for:
+     - Audio encoder (GLM-ASR/Whisper)
+     - Text decoder (Qwen)
+     - Projector (MLP, MOSA, MoE, QFormer)
+     - Generation parameters
+     - Training options (SpecAugment, LoRA)
+     """
+
+     model_type = "asr_model"
+     is_composition = True
+
+     def __init__(
+         self,
+         audio_model_id: str = "zai-org/GLM-ASR-Nano-2512",
+         text_model_id: str = "Qwen/Qwen3-0.6B",
+         attn_implementation: str = "flash_attention_2",
+         model_dtype: str = "bfloat16",
+         num_beams: Optional[int] = None,
+         system_prompt: str = "You are a helpful assistant.",
+         encoder_dim: Optional[int] = None,
+         llm_dim: Optional[int] = None,
+         # Encoder conv layers: list of (padding, kernel_size, stride) tuples
+         # Default is Whisper/GLM-ASR structure: conv1(k=3,s=1,p=1) + conv2(k=3,s=2,p=1)
+         encoder_conv_layers: Optional[list] = None,
+         audio_sample_rate: int = 16000,
+         projector_pool_stride: int = 4,
+         downsample_rate: int = 5,  # Granite default
+         projector_hidden_dim: Optional[int] = None,
+         projector_type: str = "mlp",  # "mlp", "mosa", "moe", "qformer"
+         # MoE-specific configuration
+         num_experts: int = 4,  # Number of experts in MoE projectors
+         num_experts_per_tok: int = 2,  # Top-k experts per token
+         router_aux_loss_coef: float = 0.01,  # Auxiliary loss coefficient for load balancing
+         # QFormer-specific configuration (Granite defaults)
+         qformer_window_size: int = 15,  # Window size for QFormer processing
+         qformer_hidden_size: Optional[int] = None,  # QFormer hidden size (defaults to encoder_dim)
+         qformer_num_layers: int = 2,  # Number of QFormer transformer layers
+         qformer_num_heads: int = 16,  # Number of attention heads in QFormer
+         qformer_intermediate_size: Optional[int] = None,  # FFN size (defaults to 4x hidden)
+         # SpecAugment settings
+         use_specaugment: bool = False,
+         num_time_masks: int = 2,
+         time_mask_length: int = 10,
+         num_freq_masks: int = 0,
+         freq_mask_length: int = 10,
+         # LoRA configuration (for Stage 2 fine-tuning)
+         use_lora: bool = False,
+         lora_rank: int = 8,  # SALMONN default
+         lora_alpha: int = 32,  # SALMONN default (scaling factor 4.0)
+         lora_dropout: float = 0.0,
+         lora_target_modules: Optional[list] = None,  # Default: all linear layers
+         freeze_projector: bool = False,  # True for Stage 2 (LoRA-only training)
+         freeze_language_model: bool = True,  # False = full decoder fine-tuning
+         do_sample: bool = False,
+         temperature: Optional[float] = None,
+         top_p: Optional[float] = None,
+         top_k: Optional[int] = None,
+         max_new_tokens: Optional[int] = None,
+         min_new_tokens: Optional[int] = None,
+         repetition_penalty: Optional[float] = None,
+         length_penalty: Optional[float] = None,
+         no_repeat_ngram_size: Optional[int] = None,
+         use_cache: Optional[bool] = None,
+         **kwargs,
+     ):
+         """Initialize ASR model configuration.
+
+         Args:
+             audio_model_id: HuggingFace model ID for audio encoder (GLM-ASR/Whisper)
+             text_model_id: HuggingFace model ID for text decoder (Qwen)
+             attn_implementation: Attention implementation ("flash_attention_2", "sdpa", "eager")
+             model_dtype: Model dtype ("bfloat16", "float16", "float32")
+             projector_type: Projector architecture ("mlp", "mosa", "moe", "qformer")
+             use_lora: Enable LoRA adapters for Stage 2 fine-tuning
+             use_specaugment: Enable SpecAugment data augmentation
+         """
+         # Set default generation parameters (greedy decoding only).
+         # Applied via setattr below — keeping these out of kwargs so they
+         # don't get re-overwritten by super().__init__(**kwargs) at the end.
+         generation_defaults = {
+             "num_beams": 1,
+             "max_new_tokens": 128,
+             "min_new_tokens": 0,
+             "repetition_penalty": 1.0,
+             "length_penalty": 1.0,
+             "no_repeat_ngram_size": 0,
+             "use_cache": True,
+         }
+
+         self.audio_model_id = audio_model_id
+         self.text_model_id = text_model_id
+         self.attn_implementation = attn_implementation
+         self.model_dtype = model_dtype
+         self.system_prompt = system_prompt
+         self.encoder_dim = encoder_dim
+         self.llm_dim = llm_dim
+         self.encoder_conv_layers = encoder_conv_layers or DEFAULT_ENCODER_CONV_LAYERS
+         self.audio_sample_rate = audio_sample_rate
+         self.projector_pool_stride = projector_pool_stride
+         self.downsample_rate = downsample_rate
+         self.projector_hidden_dim = projector_hidden_dim
+         self.projector_type = projector_type
+         # MoE-specific configuration
+         self.num_experts = num_experts
+         self.num_experts_per_tok = num_experts_per_tok
+         self.router_aux_loss_coef = router_aux_loss_coef
+         # QFormer-specific configuration
+         self.qformer_window_size = qformer_window_size
+         self.qformer_hidden_size = qformer_hidden_size
+         self.qformer_num_layers = qformer_num_layers
+         self.qformer_num_heads = qformer_num_heads
+         self.qformer_intermediate_size = qformer_intermediate_size
+         # SpecAugment configuration
+         self.use_specaugment = use_specaugment
+         self.num_time_masks = num_time_masks
+         self.time_mask_length = time_mask_length
+         self.num_freq_masks = num_freq_masks
+         self.freq_mask_length = freq_mask_length
+         # LoRA configuration
+         self.use_lora = use_lora
+         self.lora_rank = lora_rank
+         self.lora_alpha = lora_alpha
+         self.lora_dropout = lora_dropout
+         self.lora_target_modules = lora_target_modules or [
+             "q_proj",
+             "k_proj",
+             "v_proj",
+             "o_proj",
+             "gate_proj",
+             "up_proj",
+             "down_proj",
+         ]
+         self.freeze_projector = freeze_projector
+         self.freeze_language_model = freeze_language_model
+
+         explicit_generation_args = {
+             "num_beams": num_beams,
+             "max_new_tokens": max_new_tokens,
+             "min_new_tokens": min_new_tokens,
+             "repetition_penalty": repetition_penalty,
+             "length_penalty": length_penalty,
+             "no_repeat_ngram_size": no_repeat_ngram_size,
+             "use_cache": use_cache,
+         }
+         for key, default in generation_defaults.items():
+             value = explicit_generation_args[key]
+             setattr(self, key, value if value is not None else default)
+         self.do_sample = do_sample
+         self.temperature = temperature
+         self.top_p = top_p
+         self.top_k = top_k
+
+         if "audio_config" not in kwargs:
+             self.audio_config = transformers.AutoConfig.from_pretrained(audio_model_id)
+             # Override dtype to match model_dtype
+             self.audio_config.dtype = model_dtype
+         else:
+             self.audio_config = kwargs.pop("audio_config")
+
+         if "text_config" not in kwargs:
+             self.text_config = transformers.AutoConfig.from_pretrained(
+                 text_model_id, trust_remote_code=True
+             )
+             # Override dtype to match model_dtype
+             self.text_config.dtype = model_dtype
+         else:
+             self.text_config = kwargs.pop("text_config")
+
+         if isinstance(self.text_config, dict):
+             # Reconstruct config from dict using the model_type stored in the dict
+             model_type = self.text_config["model_type"]
+             config_class = transformers.AutoConfig.for_model(model_type).__class__
+             self.text_config = config_class(**self.text_config)
+
+         if isinstance(self.audio_config, dict):
+             model_type = self.audio_config.get("model_type")
+             if model_type:
+                 config_class = transformers.AutoConfig.for_model(model_type).__class__
+                 self.audio_config = config_class(**self.audio_config)
+
+         super().__init__(**kwargs)
+
+         # Point encoder to audio_config so pipeline uses correct feature extractor
+         # The pipeline looks for config.encoder._name_or_path for feature extractor
+         self.encoder = self.audio_config
+
+         self.auto_map = {
+             "AutoConfig": "asr_config.ASRConfig",
+             "AutoModel": "asr_modeling.ASRModel",
+             "AutoModelForSpeechSeq2Seq": "asr_modeling.ASRModel",
+             "AutoProcessor": "asr_processing.ASRProcessor",
+         }
+         self.custom_pipelines = {
+             "automatic-speech-recognition": {
+                 "impl": "asr_pipeline.ASRPipeline",
+                 "pt": ["AutoModelForSpeechSeq2Seq"],
+                 "tf": [],
+                 "type": "audio",
+             }
+         }
+         self.architectures = ["ASRModel"]
+         self.pipeline_tag = "automatic-speech-recognition"
+
+
+ transformers.AutoConfig.register("asr_model", ASRConfig)
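+
+ # Construction sketch (illustrative; every argument falls back to the defaults above):
+ #   config = ASRConfig(projector_type="qformer", use_lora=True)
+ #   config.save_pretrained("./checkpoint")  # standard PretrainedConfig API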
asr_modeling.py ADDED
@@ -0,0 +1,839 @@
+ import json
+ from pathlib import Path
+ from threading import Thread
+ from typing import Iterator, Optional, Union
+
+ import torch
+ import torch.nn as nn
+ import torch.nn.functional as F  # noqa: N812
+ from transformers import (
+     AutoModel,
+     AutoModelForCausalLM,
+     AutoTokenizer,
+     PreTrainedModel,
+     TextIteratorStreamer,
+ )
+ from transformers.generation import GenerationMixin
+ from transformers.modeling_outputs import CausalLMOutputWithPast
+
+ try:
+     from .asr_config import ASRConfig, compute_encoder_output_length
+     from .projectors import PROJECTOR_CLASSES
+ except ImportError:
+     from asr_config import ASRConfig, compute_encoder_output_length  # type: ignore[no-redef]
+     from projectors import PROJECTOR_CLASSES  # type: ignore[no-redef]
+
+
+ from torchaudio.transforms import SpecAugment
+
+
+ def _gather_audio_embeds(audio_embeds: torch.Tensor, token_counts: torch.Tensor) -> torch.Tensor:
+     """Flatten per-sample audio embeddings into a packed tensor.
+
+     For each row i, takes the first ``token_counts[i]`` rows of
+     ``audio_embeds[i]`` and concatenates them. If any token count exceeds
+     ``audio_embeds.shape[1]``, the deficit is zero-padded.
+
+     Equivalent to a per-sample slice/cat loop but with O(1) host-device
+     syncs per call (one ``max().item()``) instead of one per sample.
+     """
+     _, max_len, _ = audio_embeds.shape
+     needed = int(token_counts.max().item())
+     if needed > max_len:
+         audio_embeds = F.pad(audio_embeds, (0, 0, 0, needed - max_len))
+         max_len = needed
+     indices = torch.arange(max_len, device=audio_embeds.device).unsqueeze(0)
+     mask = indices < token_counts.unsqueeze(1)
+     return audio_embeds[mask]
+
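+ # Shape sketch (illustrative): audio_embeds (2, 5, D) with token_counts [3, 5]
+ # packs rows [0, :3] and [1, :5] into an (8, D) tensor, one row per <audio>
+ # placeholder position across the batch.
+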
+ class ASRModel(PreTrainedModel, GenerationMixin):
+     """Audio-to-text model combining an audio encoder, projector, and language model."""
+
+     config_class = ASRConfig
+     base_model_prefix = "model"
+     main_input_name = "input_features"
+     _supports_flash_attn_2 = True
+     supports_gradient_checkpointing = True
+     _is_loading_from_pretrained: bool = False
+
+     TRANSCRIBE_PROMPT = "Transcribe the speech to text"
+
+     @classmethod
+     def from_pretrained(cls, pretrained_model_name_or_path: str, *args, **kwargs) -> "ASRModel":
+         """Load model from pretrained, handling device placement correctly."""
+         from safetensors.torch import load_file
+         from transformers.utils.hub import cached_file
+
+         config = kwargs.pop("config", None)
+         if config is None:
+             config = ASRConfig.from_pretrained(pretrained_model_name_or_path, **kwargs)
+
+         # Set flag to avoid device_map="auto" in sub-model loaders
+         cls._is_loading_from_pretrained = True
+
+         try:
+             model = cls(config, **kwargs)
+
+             # Load projector weights from safetensors
+             subfolder = kwargs.get("subfolder")
+             revision = kwargs.get("revision")
+             cache_kwargs = {}
+             if subfolder:
+                 cache_kwargs["subfolder"] = subfolder
+             if revision:
+                 cache_kwargs["revision"] = revision
+
+             model_file = cached_file(
+                 pretrained_model_name_or_path,
+                 "model.safetensors",
+                 _raise_exceptions_for_missing_entries=False,
+                 **cache_kwargs,
+             )
+
+             if model_file is not None:
+                 state_dict = load_file(model_file)
+                 model.load_state_dict(state_dict, strict=False)
+
+             # Load LoRA adapters if use_lora is enabled
+             if getattr(config, "use_lora", False):
+                 # Check for adapter_config.json (required by PEFT to load adapters)
+                 adapter_config_file = cached_file(
+                     pretrained_model_name_or_path,
+                     "adapter_config.json",
+                     _raise_exceptions_for_missing_entries=False,
+                     **cache_kwargs,
+                 )
+                 if adapter_config_file is not None:
+                     # Load saved adapter weights using the original repo_id/path
+                     # PEFT handles Hub downloads and caching internally
+                     from peft import PeftModel
+
+                     model.language_model = PeftModel.from_pretrained(
+                         model.language_model,
+                         pretrained_model_name_or_path,
+                         is_trainable=True,
+                         **cache_kwargs,
+                     )
+                 else:
+                     # No saved adapters - initialize fresh LLM LoRA for training
+                     from peft import LoraConfig, get_peft_model
+
+                     lora_config = LoraConfig(
+                         r=config.lora_rank,
+                         lora_alpha=config.lora_alpha,
+                         target_modules=config.lora_target_modules,
+                         lora_dropout=config.lora_dropout,
+                         bias="none",
+                         task_type="CAUSAL_LM",
+                     )
+                     model.language_model = get_peft_model(model.language_model, lora_config)
+
+             return model
+         finally:
+             cls._is_loading_from_pretrained = False
+
+     def __init__(self, config: ASRConfig, **kwargs) -> None:
+         super().__init__(config)
+
+         self.system_prompt = config.system_prompt
+         target_dtype = getattr(torch, config.model_dtype)
+
+         # Audio encoder (frozen)
+         self.audio_tower = self._load_audio_encoder(config, target_dtype)
+
+         # Language model (frozen)
+         self.language_model = self._load_language_model(config, target_dtype)
+
+         # Initialize tokenizer and special tokens
+         self._init_tokenizer(config)
+
+         # Set up generation config with greedy decoding defaults
+         self.generation_config = self.language_model.generation_config
+         self.generation_config.max_new_tokens = config.max_new_tokens
+         self.generation_config.min_new_tokens = config.min_new_tokens
+         self.generation_config.num_beams = config.num_beams
+         self.generation_config.do_sample = config.do_sample
+         # Set sampling params from config (None means use model defaults)
+         self.generation_config.temperature = config.temperature
+         self.generation_config.top_p = config.top_p
+         self.generation_config.top_k = config.top_k
+         self.generation_config.use_cache = config.use_cache
+         self.generation_config.length_penalty = config.length_penalty
+         self.generation_config.repetition_penalty = config.repetition_penalty
+         self.generation_config.no_repeat_ngram_size = config.no_repeat_ngram_size
+         # Set EOS tokens, filtering out any that don't exist in the tokenizer
+         eos_candidates = [
+             self.tokenizer.convert_tokens_to_ids("<|im_end|>"),
+             self.tokenizer.convert_tokens_to_ids("<|endoftext|>"),
+         ]
+         self.generation_config.eos_token_id = [t for t in eos_candidates if t is not None]
+         self.generation_config.pad_token_id = self.tokenizer.pad_token_id
+
+         # Feature extractor for audio preprocessing
+         self.feature_extractor = self._create_feature_extractor(config)
+
+         # Audio projector (trainable unless freeze_projector is set)
+         self.projector = self._create_projector(config, target_dtype)
+
+         # Setup LoRA if enabled (Stage 2 fine-tuning)
+         # Skip if loading from pretrained - from_pretrained will handle adapter loading
+         if getattr(config, "use_lora", False) and not getattr(
+             self.__class__, "_is_loading_from_pretrained", False
+         ):
+             self._setup_lora(config)
+
+         # Freeze projector if specified (for Stage 2 LoRA-only training)
+         if getattr(config, "freeze_projector", False):
+             self.projector.requires_grad_(False)
+
+         # SpecAugment for data augmentation during training
+         if getattr(config, "use_specaugment", False):
+             self.spec_augment = SpecAugment(
+                 n_time_masks=config.num_time_masks,
+                 time_mask_param=config.time_mask_length,
+                 n_freq_masks=config.num_freq_masks,
+                 freq_mask_param=config.freq_mask_length,
+             )
+         else:
+             self.spec_augment = None
+
+         # For model parallelism
+         self._no_split_modules = getattr(self.language_model, "_no_split_modules", [])
+
+     def _create_feature_extractor(self, config: ASRConfig):
+         """Create the appropriate feature extractor for the audio encoder."""
+         from transformers import AutoFeatureExtractor
+
+         feature_extractor = AutoFeatureExtractor.from_pretrained(config.audio_model_id)
+         # Whisper's encoder requires a fixed 3000 mel frames (30s) and the
+         # feature extractor pads to that by default — leave it alone. Other
+         # encoders (e.g. GLM-ASR) accept variable-length input, so we disable
+         # padding to avoid wasting compute on silent frames.
+         if "whisper" not in config.audio_model_id.lower():
+             feature_extractor.padding = False
+         return feature_extractor
+
+     @classmethod
+     def _load_audio_encoder(cls, config: ASRConfig, dtype: torch.dtype) -> nn.Module:
+         """Load and freeze the audio encoder."""
+         encoder_kwargs = {
+             "attn_implementation": config.attn_implementation,
+             "low_cpu_mem_usage": True,
+             "dtype": dtype,
+         }
+
+         if "whisper" in config.audio_model_id.lower():
+             from transformers import WhisperModel
+
+             full_model = WhisperModel.from_pretrained(config.audio_model_id, **encoder_kwargs)
+             encoder = full_model.encoder
+             del full_model
+         elif "glm" in config.audio_model_id.lower():
+             # GLM-ASR models use audio_tower as the encoder
+             # Requires transformers >= 5.x or installed from source
+             from transformers import AutoModelForSeq2SeqLM
+
+             full_model = AutoModelForSeq2SeqLM.from_pretrained(
+                 config.audio_model_id, trust_remote_code=True, **encoder_kwargs
+             )
+             # GLM stores encoder at audio_tower (GlmAsrEncoder)
+             encoder = full_model.audio_tower
+             # Clear references to free VRAM from the LLM decoder
+             full_model.language_model = None
+             full_model.multi_modal_projector = None
+             del full_model
+         else:
+             encoder = AutoModel.from_pretrained(config.audio_model_id, **encoder_kwargs)
+
+         encoder.requires_grad_(False)
+         encoder.eval()
+         return encoder
+
+     @classmethod
+     def _load_language_model(cls, config: ASRConfig, dtype: torch.dtype) -> PreTrainedModel:
+         """Load and freeze the language model."""
+         decoder_kwargs = {
+             "attn_implementation": config.attn_implementation,
+             "trust_remote_code": True,
+             "low_cpu_mem_usage": True,
+             "dtype": dtype,
+         }
+
+         decoder = AutoModelForCausalLM.from_pretrained(config.text_model_id, **decoder_kwargs)
+         decoder.config.use_cache = getattr(config, "use_cache", True)
+         if getattr(config, "freeze_language_model", True):
+             decoder.requires_grad_(False)
+             decoder.train(False)
+         return decoder
+
+     def _create_projector(self, config: ASRConfig, dtype: torch.dtype) -> nn.Module:
+         """Create the trainable audio projector."""
+         # Auto-detect dimensions if not specified
+         if config.encoder_dim is None:
+             enc_cfg = self.audio_tower.config
+             config.encoder_dim = getattr(enc_cfg, "hidden_size", None) or getattr(
+                 enc_cfg, "d_model", None
+             )
+             if config.encoder_dim is None:
+                 raise ValueError("Could not auto-detect encoder_dim. Please specify in config.")
+
+         if config.llm_dim is None:
+             dec_cfg = self.language_model.config
+             config.llm_dim = getattr(dec_cfg, "hidden_size", None) or getattr(
+                 dec_cfg, "d_model", None
+             )
+             if config.llm_dim is None:
+                 raise ValueError("Could not auto-detect llm_dim. Please specify in config.")
+
+         # Select projector type based on config
+         projector_type = getattr(config, "projector_type", "mlp")
+         projector_class = PROJECTOR_CLASSES.get(projector_type)
+         if projector_class is None:
+             raise ValueError(
+                 f"Unknown projector_type: {projector_type}. "
+                 f"Valid options: {list(PROJECTOR_CLASSES.keys())}"
+             )
+         projector = projector_class(config)
+
+         # Move projector to same device as language model (important when using quantization)
+         device = next(self.language_model.parameters()).device
+         return projector.to(device=device, dtype=dtype)
+
+     def _setup_lora(self, config: ASRConfig):
+         """Apply LoRA adapters to the language model for Stage 2 fine-tuning."""
+         from peft import LoraConfig, get_peft_model
+
+         lora_config = LoraConfig(
+             r=config.lora_rank,
+             lora_alpha=config.lora_alpha,
+             target_modules=config.lora_target_modules,
+             lora_dropout=config.lora_dropout,
+             bias="none",
+             task_type="CAUSAL_LM",
+         )
+         self.language_model = get_peft_model(self.language_model, lora_config)
+
+     def _init_tokenizer(self, config: ASRConfig):
+         """Initialize tokenizer with audio token."""
+         self.tokenizer = AutoTokenizer.from_pretrained(config.text_model_id, trust_remote_code=True)
+
+         # Set pad token. Prefer a dedicated pad token if the tokenizer has one
+         # (e.g. Qwen's <|finetune_right_pad_id|>); otherwise fall back to
+         # eos_token, which is the standard pattern for Llama-style tokenizers
+         # (SmolLM2, Llama, etc.) that ship without a separate pad token.
+         if (
+             self.tokenizer.pad_token is None
+             or self.tokenizer.pad_token_id == self.tokenizer.eos_token_id
+         ):
+             if "<|finetune_right_pad_id|>" in self.tokenizer.get_vocab():
+                 self.tokenizer.pad_token = "<|finetune_right_pad_id|>"
+             elif self.tokenizer.pad_token is None:
+                 self.tokenizer.pad_token = self.tokenizer.eos_token
+
+         # Add audio token
+         existing_special = getattr(self.tokenizer, "additional_special_tokens", None) or []
+         if "<audio>" not in existing_special:
+             self.tokenizer.add_special_tokens(
+                 {"additional_special_tokens": existing_special + ["<audio>"]}
+             )
+             self.language_model.resize_token_embeddings(len(self.tokenizer), mean_resizing=False)
+
+         self.audio_token_id = self.tokenizer.convert_tokens_to_ids("<audio>")
+         self.tokenizer.padding_side = "right"
+
+         # Sync token IDs to configs
+         for cfg in [self.config.text_config, self.language_model.config, self.generation_config]:
+             if cfg is not None:
+                 cfg.pad_token_id = self.tokenizer.pad_token_id
+                 cfg.eos_token_id = self.tokenizer.eos_token_id
+                 cfg.bos_token_id = self.tokenizer.bos_token_id
+
+     def train(self, mode: bool = True):
+         """Set train/eval mode, but keep frozen submodules out of train mode.
+
+         HF Trainer calls `model.train()` at the top of every training step, which
+         recursively switches every submodule into train mode — re-enabling dropout
+         on modules with `requires_grad_(False)`. The frozen encoder (and the LM
+         when `freeze_language_model=True`) should always run deterministically;
+         train-mode dropout only adds noise that can't improve a frozen network.
+         """
+         super().train(mode)
+         self.audio_tower.train(False)
+         if getattr(self.config, "freeze_language_model", True):
+             self.language_model.train(False)
+         return self
+
+     def _set_gradient_checkpointing(self, enable: bool = True, gradient_checkpointing_func=None):
+         """Enable/disable gradient checkpointing for the language model."""
+         # The LLM still stores activations during forward for backprop to projector
+         # Gradient checkpointing trades compute for memory by recomputing activations
+         if hasattr(self.language_model, "_set_gradient_checkpointing"):
+             self.language_model._set_gradient_checkpointing(enable, gradient_checkpointing_func)
+         elif hasattr(self.language_model, "gradient_checkpointing_enable") and enable:
+             self.language_model.gradient_checkpointing_enable(
+                 gradient_checkpointing_kwargs={"use_reentrant": False}
+             )
+         elif hasattr(self.language_model, "gradient_checkpointing_disable") and not enable:
+             self.language_model.gradient_checkpointing_disable()
+
+     def get_input_embeddings(self) -> nn.Module:
+         return self.language_model.get_input_embeddings()
+
+     def set_input_embeddings(self, value: nn.Module) -> None:
+         self.language_model.set_input_embeddings(value)
+
+     def get_output_embeddings(self) -> nn.Module:
+         return self.language_model.get_output_embeddings()
+
+     def set_output_embeddings(self, value: nn.Module) -> None:
+         self.language_model.set_output_embeddings(value)
+
+     def get_processor(self):
+         """Get the processor for this model."""
+         try:
+             from .asr_processing import ASRProcessor
+         except ImportError:
+             from asr_processing import ASRProcessor  # type: ignore[no-redef]
+
+         return ASRProcessor(
+             feature_extractor=self.feature_extractor,
+             tokenizer=self.tokenizer,
+             projector=self.projector,
+             encoder_conv_layers=self.config.encoder_conv_layers,
+         )
+
+     def state_dict(self, *args, **kwargs) -> dict[str, torch.Tensor]:
+         """Save trainable weights: projector, plus the language model when fine-tuned."""
+         sd = {f"projector.{k}": v for k, v in self.projector.state_dict().items()}
+         if not getattr(self.config, "freeze_language_model", True):
+             sd.update(
+                 {f"language_model.{k}": v for k, v in self.language_model.state_dict().items()}
+             )
+         return sd
+
+     def _compute_encoder_output_lengths(
+         self,
+         audio_attention_mask: torch.Tensor,
+     ) -> torch.Tensor:
+         """Compute per-sample encoder output lengths using conv layer formulas."""
+         return compute_encoder_output_length(
+             audio_attention_mask.sum(dim=-1),
+             self.config.encoder_conv_layers,
+         )
+
+     def _encode_audio(
+         self,
+         audio_features: torch.Tensor,
+         expected_token_counts: torch.Tensor,
+     ) -> torch.Tensor:
+         """Encode audio features and return flattened embeddings matching expected_token_counts.
+
+         Args:
+             audio_features: Mel spectrogram features (batch, n_mels, mel_len)
+             expected_token_counts: Per-sample audio token counts as int64 tensor (batch,).
+
+         Returns:
+             Flattened audio embeddings of shape (sum(expected_token_counts), hidden_dim).
+         """
+         with torch.no_grad():
+             encoder_out = self.audio_tower(input_features=audio_features)
+             hidden_states = encoder_out.last_hidden_state
+
+         audio_embeds = self.projector(hidden_states)
+
+         token_counts = expected_token_counts.to(device=audio_embeds.device, dtype=torch.long)
+         return _gather_audio_embeds(audio_embeds, token_counts)
+
+     def forward(
+         self,
+         input_ids: Optional[torch.Tensor] = None,
+         input_features: Optional[torch.Tensor] = None,
+         audio_attention_mask: Optional[torch.Tensor] = None,
+         attention_mask: Optional[torch.Tensor] = None,
+         position_ids: Optional[torch.Tensor] = None,
+         past_key_values: Optional[torch.Tensor] = None,
+         inputs_embeds: Optional[torch.Tensor] = None,
+         labels: Optional[torch.Tensor] = None,
+         use_cache: Optional[bool] = None,
+         cache_position: Optional[torch.Tensor] = None,
+         audio_token_counts: Optional[torch.Tensor] = None,
+         **kwargs,
+     ) -> CausalLMOutputWithPast:
+         """Forward pass for training and inference."""
+         if inputs_embeds is None:
+             inputs_embeds = self.language_model.get_input_embeddings()(input_ids)
+
+         if input_features is not None and input_ids is not None:
+             if self.training and self.spec_augment is not None:
+                 input_features = self.spec_augment(input_features)
+
+             is_audio_token = input_ids == self.audio_token_id
+             if audio_token_counts is None:
+                 audio_token_counts = is_audio_token.sum(dim=-1)
+             else:
+                 audio_token_counts = audio_token_counts.to(
+                     device=input_ids.device, dtype=torch.long
+                 )
+
+             audio_embeds = self._encode_audio(input_features, audio_token_counts)
+
+             audio_token_mask = is_audio_token.unsqueeze(-1)
+             inputs_embeds = inputs_embeds.masked_scatter(
+                 audio_token_mask.to(inputs_embeds.device),
+                 audio_embeds.to(inputs_embeds.device, dtype=inputs_embeds.dtype),
+             )
+
+         outputs = self.language_model(
+             attention_mask=attention_mask,
+             position_ids=position_ids,
+             past_key_values=past_key_values,
+             inputs_embeds=inputs_embeds,
+             labels=labels,
+             use_cache=use_cache,
+             cache_position=cache_position,
+             **kwargs,
+         )
+
+         if outputs.loss is not None and hasattr(self.projector, "get_aux_loss"):
+             aux_loss = self.projector.get_aux_loss()
+             if aux_loss is not None and aux_loss.numel() > 0:
+                 outputs.loss = outputs.loss + aux_loss.to(outputs.loss.device)
+
+         return outputs
+
+     def prepare_inputs_for_generation(self, *args, **kwargs):
+         """Prepare inputs for generation, handling audio features for cached decoding."""
+         input_features = kwargs.pop("input_features", None)
+         cache_position = kwargs.get("cache_position")
+
+         model_inputs = self.language_model.prepare_inputs_for_generation(*args, **kwargs)
+
+         # Only pass audio features on the first generation step (cache_position[0] == 0)
+         if cache_position is not None and cache_position[0] == 0 and input_features is not None:
+             model_inputs["input_features"] = input_features
+
+         return model_inputs
+
+     def _get_num_audio_tokens(
+         self,
+         audio_attention_mask: torch.Tensor,
+     ) -> int:
+         """Calculate number of audio tokens based on actual audio length.
+
+         Uses attention mask to get real audio length, then computes:
+         mel_frames -> encoder_frames (via conv formulas) -> projector output tokens
+         """
+         encoder_lengths = self._compute_encoder_output_lengths(audio_attention_mask)
+         # Use max length for batch (all samples should have same token count for generation)
+         encoder_output_len = int(encoder_lengths.max().item())
+         return int(self.projector.get_output_length(encoder_output_len))
+
+     @torch.no_grad()
+     def generate(
+         self,
+         input_ids: Optional[torch.Tensor] = None,
+         input_features: Optional[torch.Tensor] = None,
+         audio_attention_mask: Optional[torch.Tensor] = None,
+         attention_mask: Optional[torch.Tensor] = None,
+         system_prompt: Optional[str] = None,
+         **generate_kwargs,
+     ) -> torch.Tensor:
+         """Generate transcription from audio input.
+
+         Can be called in two ways:
+         1. With input_ids containing <audio> tokens (from processor)
+         2. With just audio, and we build the prompt internally
+         """
+         if input_features is None:
+             raise ValueError("input_features required for generation")
+         if audio_attention_mask is None:
+             raise ValueError("audio_attention_mask required for generation")
+
+         device = input_features.device
+         batch_size = input_features.shape[0]
+
+         # Encode audio -> flattened embeddings (no per-sample host sync)
+         encoder_lengths = self._compute_encoder_output_lengths(audio_attention_mask)
+         token_counts = self.projector.get_output_length(encoder_lengths).to(torch.long)
+         audio_embeds = self._encode_audio(input_features, token_counts)
+
+         # If input_ids not provided, build prompt with correct number of audio tokens
+         if input_ids is None:
+             num_audio_tokens = self._get_num_audio_tokens(audio_attention_mask)
+             audio_placeholder = "<audio>" * num_audio_tokens
+
+             system_prompt = system_prompt or self.system_prompt
+
+             messages: list[dict[str, str]] = []
+             if system_prompt:
+                 messages.append({"role": "system", "content": system_prompt})
+             # Audio tokens only (instruction-free)
+             user_content = audio_placeholder
+             if self.TRANSCRIBE_PROMPT:
+                 user_content += " " + self.TRANSCRIBE_PROMPT
+             messages.append({"role": "user", "content": user_content})
+
+             chat_result = self.tokenizer.apply_chat_template(
+                 messages,
+                 tokenize=True,
+                 add_generation_prompt=True,
+                 return_tensors="pt",
+                 enable_thinking=False,  # Disable Qwen3 thinking mode for ASR
+             )
+             input_ids = chat_result.input_ids.to(device)
+
+             if input_ids.dim() == 1:
+                 input_ids = input_ids.unsqueeze(0)
+             if input_ids.shape[0] == 1 and batch_size > 1:
+                 input_ids = input_ids.expand(batch_size, -1)
+
+             attention_mask = torch.ones_like(input_ids)
+
+         # Get text embeddings and replace audio tokens with audio embeddings
+         inputs_embeds = self.language_model.get_input_embeddings()(input_ids)
+         audio_token_mask = (input_ids == self.audio_token_id).unsqueeze(-1)
+         inputs_embeds = inputs_embeds.masked_scatter(
+             audio_token_mask.to(inputs_embeds.device),
+             audio_embeds.to(inputs_embeds.device, dtype=inputs_embeds.dtype),
+         )
+
+         # Generate using language model
+         # Pass both input_ids and inputs_embeds so repetition_penalty works correctly
+         # (it needs input_ids to track which tokens have been used)
+         output = self.language_model.generate(
+             input_ids=input_ids,
+             inputs_embeds=inputs_embeds,
+             attention_mask=attention_mask,
+             generation_config=self.generation_config,
+             **generate_kwargs,
+         )
+
+         # When using inputs_embeds with input_ids, generate returns full sequence
+         # Strip the input tokens to return only generated tokens
+         sequences = output if isinstance(output, torch.Tensor) else output.sequences
+         input_len = input_ids.shape[1]
+         return sequences[:, input_len:]
+
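+     # Call sketch (illustrative): given `batch` from this model's processor,
+     #   token_ids = model.generate(**batch)
+     #   text = model.tokenizer.batch_decode(token_ids, skip_special_tokens=True)
+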
+     def generate_streaming(
+         self,
+         input_features: torch.Tensor,
+         audio_attention_mask: torch.Tensor,
+         system_prompt: Optional[str] = None,
+         **generate_kwargs,
+     ) -> Iterator[str]:
+         """Generate transcription with streaming token output.
+
+         Yields partial transcript strings as tokens are generated.
+         Reduces time-to-first-word by streaming tokens as they're decoded.
+
+         Args:
+             input_features: Mel spectrogram features (batch, n_mels, mel_len)
+             audio_attention_mask: Mask for real vs padded mel frames (batch, mel_len)
+             system_prompt: Optional system prompt override
+             **generate_kwargs: Additional generation arguments
+
+         Yields:
+             Partial transcript text as each token is generated
+         """
+         device = input_features.device
+         batch_size = input_features.shape[0]
+
+         # Encode audio -> flattened embeddings (no per-sample host sync)
+         encoder_lengths = self._compute_encoder_output_lengths(audio_attention_mask)
+         token_counts = self.projector.get_output_length(encoder_lengths).to(torch.long)
+         audio_embeds = self._encode_audio(input_features, token_counts)
+
+         # Build prompt with correct number of audio tokens
+         num_audio_tokens = self._get_num_audio_tokens(audio_attention_mask)
+         audio_placeholder = "<audio>" * num_audio_tokens
+
+         system_prompt = system_prompt or self.system_prompt
+
+         messages: list[dict[str, str]] = []
+         if system_prompt:
+             messages.append({"role": "system", "content": system_prompt})
+         # Audio tokens only (instruction-free)
+         user_content = audio_placeholder
+         if self.TRANSCRIBE_PROMPT:
+             user_content += " " + self.TRANSCRIBE_PROMPT
+         messages.append({"role": "user", "content": user_content})
+
+         chat_result = self.tokenizer.apply_chat_template(
+             messages,
+             tokenize=True,
+             add_generation_prompt=True,
+             return_tensors="pt",
+             enable_thinking=False,  # Disable Qwen3 thinking mode for ASR
+         )
+         input_ids = chat_result.input_ids.to(device)
+
+         if input_ids.dim() == 1:
+             input_ids = input_ids.unsqueeze(0)
+         if input_ids.shape[0] == 1 and batch_size > 1:
+             input_ids = input_ids.expand(batch_size, -1)
+
+         attention_mask = torch.ones_like(input_ids)
+
+         # Get text embeddings and replace audio tokens with audio embeddings
+         inputs_embeds = self.language_model.get_input_embeddings()(input_ids)
+         audio_token_mask = (input_ids == self.audio_token_id).unsqueeze(-1)
+         inputs_embeds = inputs_embeds.masked_scatter(
+             audio_token_mask.to(inputs_embeds.device),
+             audio_embeds.to(inputs_embeds.device, dtype=inputs_embeds.dtype),
+         )
+
+         # Setup streamer for token-by-token output
+         streamer = TextIteratorStreamer(
+             self.tokenizer,
+             skip_prompt=True,
+             skip_special_tokens=True,
+         )
+
+         # Prepare generation kwargs
+         gen_kwargs = {
+             "inputs_embeds": inputs_embeds,
+             "attention_mask": attention_mask,
+             "generation_config": self.generation_config,
+             "streamer": streamer,
+             **generate_kwargs,
+         }
+
+         # Run generation in background thread
+         thread = Thread(target=self.language_model.generate, kwargs=gen_kwargs)
+         thread.start()
+
+         # Yield tokens as they're generated, filtering out <think>...</think> blocks
+         # Start assuming no think block - only filter when we see <think>
+         in_think_block = False
+         buffer = ""
+
+         for text in streamer:
+             buffer += text
+
+             # Check for think block start (in case model outputs think blocks)
+             while "<think>" in buffer:
+                 in_think_block = True
+                 # Yield any text before <think>
+                 before_think = buffer.split("<think>")[0]
+                 if before_think:
+                     yield before_think
+                 buffer = buffer.split("<think>", 1)[-1]
+
+             # Check for think block end
+             while in_think_block and "</think>" in buffer:
+                 in_think_block = False
+                 buffer = buffer.split("</think>", 1)[-1]
+
+             # Yield text if not in think block
+             if not in_think_block and buffer:
+                 yield buffer
+                 buffer = ""
+
+         # Yield any remaining buffer
+         if buffer and not in_think_block:
+             yield buffer
+
+         thread.join()
+
+     def save_pretrained(self, save_directory: Union[str, Path], **kwargs) -> None:
+         """Save model, tokenizer, and processor."""
+         import shutil
+
+         save_dir = Path(save_directory)
+         save_dir.mkdir(parents=True, exist_ok=True)
+
+         # Update config with actual vocab size
+         self.config.vocab_size = self.language_model.config.vocab_size
+         self.config.text_config.vocab_size = self.language_model.config.vocab_size
+
+         if hasattr(self.audio_tower.config, "num_mel_bins"):
+             self.config.audio_config.num_mel_bins = self.audio_tower.config.num_mel_bins
+
+         # Save model (temporarily remove non-serializable attributes)
+         tokenizer = self.tokenizer
+         del self.tokenizer
+
+         try:
+             super().save_pretrained(save_dir, **kwargs)
+         finally:
+             self.tokenizer = tokenizer
+
+         # Save tokenizer and feature extractor
+         self.tokenizer.save_pretrained(save_dir)
+         self.feature_extractor.save_pretrained(save_dir)
+
+         # Save LoRA adapters if present (creates adapter_model.safetensors and adapter_config.json)
+         # Don't save embedding layers - the <audio> token embedding is never used
+         # (it's replaced with projected audio embeddings before the LLM sees it)
+         if hasattr(self.language_model, "peft_config"):
+             self.language_model.save_pretrained(save_dir, save_embedding_layers=False)
+
+         # Clear base_model_name_or_path in adapter_config.json to prevent HF pipeline
+         # from redirecting to the base LLM repo (like Qwen) which breaks feature
+         # extractor loading for multimodal models. If a repo_id is provided, use that
+         # so the model can be loaded directly from the Hub.
+         adapter_config_path = save_dir / "adapter_config.json"
+         if adapter_config_path.exists():
+             with adapter_config_path.open() as f:
+                 adapter_config = json.load(f)
+
+             # Use repo_id if available, otherwise clear to prevent redirect.
+             # Use empty string instead of None to avoid str(None) -> "None" bug
+             # in some transformers/PEFT versions.
+             repo_id = (
+                 kwargs.get("repo_id")
786
+ or kwargs.get("push_to_hub_model_id")
787
+ or getattr(self.config, "pretrained_model_path", None)
788
+ or "" # Use empty string instead of None
789
+ )
790
+ adapter_config["base_model_name_or_path"] = repo_id
791
+
792
+ with adapter_config_path.open("w") as f:
793
+ json.dump(adapter_config, f, indent=2)
794
+
795
+ # Add processor auto_map to preprocessor_config.json
796
+ config_path = save_dir / "preprocessor_config.json"
797
+ if config_path.exists():
798
+ with config_path.open() as f:
799
+ processor_config = json.load(f)
800
+ else:
801
+ processor_config = {}
802
+
803
+ processor_config.update(
804
+ {
805
+ "processor_class": "ASRProcessor",
806
+ "auto_map": {"AutoProcessor": "asr_processing.ASRProcessor"},
807
+ }
808
+ )
809
+
810
+ with config_path.open("w") as f:
811
+ json.dump(processor_config, f, indent=2)
812
+
813
+ # Copy source files for auto-loading
814
+ src_dir = Path(__file__).parent
815
+ for asr_file in src_dir.glob("asr_*.py"):
816
+ shutil.copy(asr_file, save_dir / asr_file.name)
817
+ # Copy projectors module
818
+ shutil.copy(src_dir / "projectors.py", save_dir / "projectors.py")
819
+ # Copy alignment module
820
+ shutil.copy(src_dir / "alignment.py", save_dir / "alignment.py")
821
+ # Copy diarization module
822
+ shutil.copy(src_dir / "diarization.py", save_dir / "diarization.py")
823
+
824
+ def push_to_hub(self, repo_id: str, **kwargs) -> str:
825
+ """Push model to HuggingFace Hub, ensuring adapter_config points to repo.
826
+
827
+ IMPORTANT: Sets base_model_name_or_path in adapter_config.json to repo_id
828
+ so that transformers pipeline() can load the model correctly. Without this,
829
+ the pipeline tries to load from "None" which fails.
830
+ """
831
+ # Store repo_id in config so save_pretrained can access it
832
+ self.config.pretrained_model_path = repo_id
833
+ # Call parent's push_to_hub
834
+ return super().push_to_hub(repo_id, **kwargs)
835
+
836
+
837
+ # Register with transformers Auto classes
838
+ # (AutoConfig.register is handled in asr_config.py at module load.)
839
+ AutoModel.register(ASRConfig, ASRModel)
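
A minimal usage sketch for the streaming path above, assuming the `mazesmazes/tiny-audio-embedded-2` repo id from `config.json` below, a 16 kHz mono input, and a hypothetical `sample.wav`; the processor output keys follow `asr_processing.py`:

```python
import librosa
from transformers import AutoModel, AutoProcessor

model_id = "mazesmazes/tiny-audio-embedded-2"  # assumption: repo id from config.json
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
model = AutoModel.from_pretrained(model_id, trust_remote_code=True)

audio, _ = librosa.load("sample.wav", sr=16000)  # hypothetical file
inputs = processor(audio=audio)

# Print partial transcript text as tokens are decoded
for piece in model.generate_streaming(
    input_features=inputs["input_features"],
    audio_attention_mask=inputs["audio_attention_mask"],
):
    print(piece, end="", flush=True)
```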
asr_pipeline.py ADDED
@@ -0,0 +1,324 @@
+ """ASR pipeline for audio-to-text transcription with optional timestamps and diarization."""
+
+ import re
+ from pathlib import Path
+ from typing import Any
+
+ import numpy as np
+ import torch
+ import transformers
+ from transformers.pipelines.audio_utils import ffmpeg_read
+
+ try:
+     from .alignment import ForcedAligner
+     from .asr_modeling import ASRModel
+     from .diarization import SpeakerDiarizer
+ except ImportError:
+     from alignment import ForcedAligner  # type: ignore[no-redef]
+     from asr_modeling import ASRModel  # type: ignore[no-redef]
+     from diarization import SpeakerDiarizer  # type: ignore[no-redef]
+
+ # Re-export for backwards compatibility
+ __all__ = ["ForcedAligner", "SpeakerDiarizer", "ASRPipeline"]
+
+ _THINK_TAG_RE = re.compile(r"<think>.*?</think>\s*", flags=re.DOTALL)
+ _DEFAULT_MIN_REPEATS = 3
+ _TRAILING_CHAR_RE = re.compile(rf"(.)\1{{{_DEFAULT_MIN_REPEATS - 1},}}$")
+ _TRAILING_WORD_RE = re.compile(
+     rf"\b(\w+)(?:\s+\1){{{_DEFAULT_MIN_REPEATS - 1},}}\s*$", re.IGNORECASE
+ )
+
+
+ class ASRPipeline(transformers.AutomaticSpeechRecognitionPipeline):
+     """ASR pipeline for audio-to-text transcription."""
+
+     model: ASRModel
+
+     def __init__(self, model: ASRModel, **kwargs):
+         """Initialize the ASR pipeline.
+
+         Args:
+             model: ASRModel instance for transcription
+             **kwargs: Additional arguments (feature_extractor, tokenizer, device)
+         """
+         feature_extractor = kwargs.pop("feature_extractor", None)
+         tokenizer = kwargs.pop("tokenizer", model.tokenizer)
+
+         if feature_extractor is None:
+             feature_extractor = model.get_processor().feature_extractor
+
+         super().__init__(
+             model=model, feature_extractor=feature_extractor, tokenizer=tokenizer, **kwargs
+         )
+         self._current_audio = None
+
+     def _sanitize_parameters(self, **kwargs):
+         """Intercept our custom parameters before the parent class validates them."""
+         # Remove our custom parameters so the parent doesn't see them
+         kwargs.pop("return_timestamps", None)
+         kwargs.pop("return_speakers", None)
+         kwargs.pop("num_speakers", None)
+         kwargs.pop("min_speakers", None)
+         kwargs.pop("max_speakers", None)
+         kwargs.pop("hf_token", None)
+         kwargs.pop("user_prompt", None)
+         kwargs.pop("diarization_backend", None)
+
+         return super()._sanitize_parameters(**kwargs)
+
+     def __call__(
+         self,
+         inputs,
+         **kwargs,
+     ):
+         """Transcribe audio with optional word-level timestamps and speaker diarization.
+
+         Args:
+             inputs: Audio input (file path, dict with array/sampling_rate, etc.)
+             return_timestamps: If True, return word-level timestamps using forced alignment
+             return_speakers: If True, return speaker labels for each word
+             user_prompt: Custom transcription prompt (overrides the model's TRANSCRIBE_PROMPT)
+             num_speakers: Exact number of speakers (if known, for diarization)
+             min_speakers: Minimum number of speakers (for diarization)
+             max_speakers: Maximum number of speakers (for diarization)
+             **kwargs: Additional arguments passed to the pipeline
+
+         Returns:
+             Dict with a 'text' key, a 'words' key if return_timestamps=True,
+             and speaker labels on words if return_speakers=True
+         """
+         # Extract our params before super().__call__ (which will also call _sanitize_parameters)
+         return_timestamps = kwargs.pop("return_timestamps", False)
+         return_speakers = kwargs.pop("return_speakers", False)
+         user_prompt = kwargs.pop("user_prompt", None)
+         diarization_params = {
+             "num_speakers": kwargs.pop("num_speakers", None),
+             "min_speakers": kwargs.pop("min_speakers", None),
+             "max_speakers": kwargs.pop("max_speakers", None),
+         }
+
+         if return_speakers:
+             return_timestamps = True
+
+         # Set a custom user prompt if provided
+         original_prompt = None
+         if user_prompt:
+             original_prompt = self.model.TRANSCRIBE_PROMPT
+             self.model.TRANSCRIBE_PROMPT = user_prompt
+
+         # Store audio for timestamp alignment and diarization
+         if return_timestamps or return_speakers:
+             self._current_audio = self._extract_audio(inputs)
+
+         # Run standard transcription
+         result = super().__call__(inputs, **kwargs)
+
+         # Add timestamps if requested
+         if return_timestamps and self._current_audio is not None:
+             text = result.get("text", "")
+             if text:
+                 try:
+                     words = ForcedAligner.align(
+                         self._current_audio["array"],
+                         text,
+                         sample_rate=self._current_audio.get("sampling_rate", 16000),
+                     )
+                     result["words"] = words
+                 except Exception as e:
+                     result["words"] = []
+                     result["timestamp_error"] = str(e)
+             else:
+                 result["words"] = []
+
+         # Add speaker diarization if requested
+         if return_speakers and self._current_audio is not None:
+             try:
+                 # Run diarization
+                 speaker_segments = SpeakerDiarizer.diarize(
+                     self._current_audio["array"],
+                     sample_rate=self._current_audio.get("sampling_rate", 16000),
+                     **{k: v for k, v in diarization_params.items() if v is not None},
+                 )
+                 result["speaker_segments"] = speaker_segments
+
+                 # Assign speakers to words
+                 if result.get("words"):
+                     result["words"] = SpeakerDiarizer.assign_speakers_to_words(
+                         result["words"],
+                         speaker_segments,
+                     )
+             except Exception as e:
+                 result["speaker_segments"] = []
+                 result["diarization_error"] = str(e)
+
+         # Clean up
+         self._current_audio = None
+         if original_prompt is not None:
+             self.model.TRANSCRIBE_PROMPT = original_prompt
+
+         return result
+
+     def _extract_audio(self, inputs) -> dict | None:
+         """Extract an audio array from various input formats using HF utilities."""
+         if isinstance(inputs, dict):
+             if "array" in inputs:
+                 return {
+                     "array": inputs["array"],
+                     "sampling_rate": inputs.get("sampling_rate", 16000),
+                 }
+             if "raw" in inputs:
+                 return {
+                     "array": inputs["raw"],
+                     "sampling_rate": inputs.get("sampling_rate", 16000),
+                 }
+         elif isinstance(inputs, str):
+             # File path - load audio using ffmpeg (same as the HF pipeline)
+             with Path(inputs).open("rb") as f:
+                 audio = ffmpeg_read(f.read(), sampling_rate=16000)
+             return {"array": audio, "sampling_rate": 16000}
+         elif isinstance(inputs, bytes):
+             audio = ffmpeg_read(inputs, sampling_rate=16000)
+             return {"array": audio, "sampling_rate": 16000}
+         elif isinstance(inputs, np.ndarray):
+             return {"array": inputs, "sampling_rate": 16000}
+
+         return None
+
+     def preprocess(self, inputs, **preprocess_params):
+         """Preprocess audio inputs for the model.
+
+         Args:
+             inputs: Audio input (dict with array, file path, etc.)
+             **preprocess_params: Additional preprocessing parameters
+
+         Yields:
+             Model input dicts with input_features and attention_mask
+         """
+         # Handle a dict with an "array" key (from datasets)
+         if isinstance(inputs, dict) and "array" in inputs:
+             inputs = {
+                 "raw": inputs["array"],
+                 "sampling_rate": inputs.get("sampling_rate", self.feature_extractor.sampling_rate),
+             }
+
+         for item in super().preprocess(inputs, **preprocess_params):
+             if "is_last" not in item:
+                 item["is_last"] = True
+             yield item
+
+     def _forward(self, model_inputs, **generate_kwargs) -> dict[str, Any]:
+         """Run the model forward pass to generate a transcription.
+
+         Args:
+             model_inputs: Dict with input_features and attention_mask
+             **generate_kwargs: Generation parameters
+
+         Returns:
+             Dict with generated token IDs
+         """
+         # Extract the audio features and is_last flag
+         is_last = model_inputs.pop("is_last", True) if isinstance(model_inputs, dict) else True
+
+         input_features = model_inputs["input_features"].to(self.model.device)
+         audio_attention_mask = model_inputs["attention_mask"].to(self.model.device)
+
+         generated_ids = self.model.generate(
+             input_features=input_features,
+             audio_attention_mask=audio_attention_mask,
+             **generate_kwargs,
+         )
+
+         return {"tokens": generated_ids, "is_last": is_last}
+
+     def postprocess(self, model_outputs, **kwargs) -> dict[str, str]:
+         """Convert model output tokens to text.
+
+         Args:
+             model_outputs: Dict with a 'tokens' key containing generated IDs
+             **kwargs: Additional postprocessing parameters
+
+         Returns:
+             Dict with a 'text' key containing the transcription
+         """
+         # Handle a list of outputs (from chunking)
+         if isinstance(model_outputs, list):
+             model_outputs = model_outputs[0] if model_outputs else {}
+
+         tokens = model_outputs.get("tokens")
+         if tokens is None:
+             return super().postprocess(model_outputs, **kwargs)
+
+         if torch.is_tensor(tokens):
+             tokens = tokens.cpu()
+             if tokens.dim() > 1:
+                 tokens = tokens[0]
+
+         # Filter out eos tokens that the tokenizer doesn't recognize as special
+         # (generation_config.eos_token_id may differ from tokenizer.eos_token_id)
+         if hasattr(self, "model") and hasattr(self.model, "generation_config"):
+             eos_ids = self.model.generation_config.eos_token_id
+             if eos_ids is not None:
+                 eos_set = set(eos_ids) if isinstance(eos_ids, list) else {eos_ids}
+                 tokens = [t for t in tokens.tolist() if t not in eos_set]
+
+         text = self.tokenizer.decode(tokens, skip_special_tokens=True).strip()
+         # Strip <think>...</think> tags (Qwen3 doesn't respect the /no_think prompt)
+         if "<think>" in text:
+             text = _THINK_TAG_RE.sub("", text).strip()
+         text = _truncate_repetitions(text)
+         return {"text": text}
+
+
+ def _truncate_repetitions(text: str, min_repeats: int = 3) -> str:
+     """Truncate repeated words/phrases/characters at the end of text.
+
+     Detects patterns like:
+       - Repeated words: "the the the the" -> "the"
+       - Repeated phrases: "i am sorry i am sorry i am sorry" -> "i am sorry"
+       - Repeated characters: "444444" -> "4"
+
+     Args:
+         text: Input text to process
+         min_repeats: Minimum repetitions to trigger truncation (default 3)
+
+     Returns:
+         Text with trailing repetitions removed
+     """
+     if not text:
+         return text
+
+     if min_repeats == _DEFAULT_MIN_REPEATS:
+         char_pattern = _TRAILING_CHAR_RE
+         word_pattern = _TRAILING_WORD_RE
+     else:
+         char_pattern = re.compile(rf"(.)\1{{{min_repeats - 1},}}$")
+         word_pattern = re.compile(rf"\b(\w+)(?:\s+\1){{{min_repeats - 1},}}\s*$", re.IGNORECASE)
+
+     text = char_pattern.sub(r"\1", text)
+     while word_pattern.search(text):
+         text = word_pattern.sub(r"\1", text)
+
+     # Truncate repeated phrases (2-20 words) at the end,
+     # e.g., "i am sorry i am sorry i am sorry" -> "i am sorry"
+     words = text.split()
+     if len(words) < min_repeats * 2:
+         return text
+
+     # Cheap pre-check: the trailing window must contain duplicates for any phrase
+     # repeat to be possible. len(set(window)) == len(window) means all words are
+     # unique, so no repetition is possible.
+     window = words[-min_repeats * 2 :]
+     if len(set(window)) == len(window):
+         return text
+
+     for phrase_len in range(2, min(21, len(words) // min_repeats + 1)):
+         phrase_escaped = re.escape(" ".join(words[-phrase_len:]))
+         phrase_pattern = re.compile(
+             rf"(^|.*?\s)({phrase_escaped})(?:\s+{phrase_escaped}){{{min_repeats - 1},}}\s*$",
+             re.IGNORECASE,
+         )
+         match = phrase_pattern.match(text)
+         if match:
+             text = (match.group(1) + match.group(2)).strip()
+             break
+
+     return text
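
Since `config.json` registers this class under `custom_pipelines`, the pipeline can be driven through the stock `pipeline()` factory. A usage sketch, assuming the checkpoint above and hypothetical audio files; `return_speakers` additionally requires the optional alignment/diarization dependencies to be installed:

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="mazesmazes/tiny-audio-embedded-2",  # assumption: repo id from config.json
    trust_remote_code=True,
)

# Plain transcription
print(asr("sample.wav")["text"])

# Word-level timestamps plus speaker labels (forced alignment + diarization)
result = asr("meeting.wav", return_timestamps=True, return_speakers=True)
for word in result.get("words", []):
    print(word)
```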
asr_processing.py ADDED
@@ -0,0 +1,132 @@
+ from typing import Optional, Union
+
+ import torch
+ import transformers
+ from transformers import ProcessorMixin
+
+ try:
+     from .asr_config import DEFAULT_ENCODER_CONV_LAYERS, ASRConfig, compute_encoder_output_length
+ except ImportError:
+     from asr_config import (  # type: ignore[no-redef]
+         DEFAULT_ENCODER_CONV_LAYERS,
+         ASRConfig,
+         compute_encoder_output_length,
+     )
+
+
+ class ASRProcessor(ProcessorMixin):
+     """Processor for Whisper-based ASR models."""
+
+     attributes = ["feature_extractor", "tokenizer"]
+     feature_extractor_class = "AutoFeatureExtractor"
+     tokenizer_class = "AutoTokenizer"
+     AUDIO_TOKEN = "<audio>"
+     TRANSCRIBE_PROMPT = "Transcribe the speech to text"
+
+     def __init__(
+         self,
+         feature_extractor,
+         tokenizer,
+         projector=None,
+         encoder_conv_layers: Optional[list] = None,
+     ):
+         """Initialize the ASR processor.
+
+         Args:
+             feature_extractor: Audio feature extractor (WhisperFeatureExtractor)
+             tokenizer: Text tokenizer for the language model
+             projector: Audio projector module (for computing output lengths)
+             encoder_conv_layers: Conv layer specs [(pad, kernel, stride), ...]
+         """
+         self.feature_extractor = feature_extractor
+         self.tokenizer = tokenizer
+         self.audio_token_id = tokenizer.convert_tokens_to_ids(self.AUDIO_TOKEN)
+         self.projector = projector
+         self.encoder_conv_layers = encoder_conv_layers or DEFAULT_ENCODER_CONV_LAYERS
+
+     def _compute_encoder_output_length(self, mel_length: int) -> int:
+         """Compute the encoder output length using the conv layer formulas."""
+         return compute_encoder_output_length(mel_length, self.encoder_conv_layers)
+
+     def __call__(
+         self,
+         audio: Optional[Union[list, "torch.Tensor"]] = None,
+         text: Optional[str] = None,
+         system_prompt: Optional[str] = None,
+         return_tensors: str = "pt",
+         **kwargs,
+     ) -> dict:
+         """Process audio and text inputs for inference.
+
+         Args:
+             audio: Raw audio waveform(s)
+             text: Target transcription (optional, for training - but use DataCollator instead)
+             system_prompt: Optional system prompt
+             return_tensors: Return format ("pt" for PyTorch)
+
+         Returns:
+             Dict with input_features, input_ids, attention_mask
+         """
+         result = {}
+
+         # Process audio
+         if audio is not None:
+             audio_inputs = self.feature_extractor(
+                 audio,
+                 sampling_rate=getattr(self.feature_extractor, "sampling_rate", 16000),
+                 return_attention_mask=True,
+                 return_tensors=return_tensors,
+                 **kwargs,
+             )
+             result["input_features"] = audio_inputs["input_features"]
+             result["audio_attention_mask"] = audio_inputs["attention_mask"]
+
+             # Use the actual audio length (from the attention mask) for the token count
+             real_mel_len = int(audio_inputs["attention_mask"].sum(dim=-1).max().item())
+             encoder_output_len = self._compute_encoder_output_length(real_mel_len)
+             num_audio_tokens = self.projector.get_output_length(encoder_output_len)
+         else:
+             num_audio_tokens = 0
+
+         # Build the prompt with audio token placeholders (instruction-free)
+         if num_audio_tokens > 0:
+             user_content = self.AUDIO_TOKEN * num_audio_tokens
+             if self.TRANSCRIBE_PROMPT:
+                 user_content += " " + self.TRANSCRIBE_PROMPT
+         else:
+             user_content = self.TRANSCRIBE_PROMPT or ""
+
+         messages = []
+         if system_prompt:
+             messages.append({"role": "system", "content": system_prompt})
+         messages.append({"role": "user", "content": user_content})
+         if text is not None:
+             messages.append({"role": "assistant", "content": text})
+
+         # Tokenize
+         tokenized = self.tokenizer.apply_chat_template(
+             messages,
+             tokenize=True,
+             add_generation_prompt=(text is None),
+             return_tensors=return_tensors,
+             enable_thinking=False,  # Disable Qwen3 thinking mode for ASR
+         )
+
+         # Handle both tensor and BatchEncoding returns
+         if isinstance(tokenized, torch.Tensor):
+             input_ids = tokenized
+         else:
+             # BatchEncoding or dict-like object
+             input_ids = tokenized.get("input_ids", tokenized.input_ids)
+
+         if input_ids.dim() == 1:
+             input_ids = input_ids.unsqueeze(0)
+
+         result["input_ids"] = input_ids
+         result["attention_mask"] = torch.ones_like(input_ids)
+
+         return result
+
+
+ ASRProcessor.register_for_auto_class()
+ transformers.AutoProcessor.register(ASRConfig, ASRProcessor)
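
For reference, a sketch of what the processor produces at inference time (hypothetical file name; assumes the processor was saved with the `auto_map` written by `save_pretrained` above):

```python
import librosa
from transformers import AutoProcessor

processor = AutoProcessor.from_pretrained(
    "mazesmazes/tiny-audio-embedded-2", trust_remote_code=True
)
audio, _ = librosa.load("sample.wav", sr=16000)  # hypothetical file

inputs = processor(audio=audio)
# input_features: (1, n_mels, mel_len) mel spectrogram
# audio_attention_mask: marks real vs padded mel frames
# input_ids / attention_mask: chat-formatted prompt with one <audio>
#   placeholder per projected audio frame, followed by the transcribe prompt
print({k: tuple(v.shape) for k, v in inputs.items()})
```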
chat_template.jinja ADDED
@@ -0,0 +1,89 @@
+ {%- if tools %}
+     {{- '<|im_start|>system\n' }}
+     {%- if messages[0].role == 'system' %}
+         {{- messages[0].content + '\n\n' }}
+     {%- endif %}
+     {{- "# Tools\n\nYou may call one or more functions to assist with the user query.\n\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>" }}
+     {%- for tool in tools %}
+         {{- "\n" }}
+         {{- tool | tojson }}
+     {%- endfor %}
+     {{- "\n</tools>\n\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call><|im_end|>\n" }}
+ {%- else %}
+     {%- if messages[0].role == 'system' %}
+         {{- '<|im_start|>system\n' + messages[0].content + '<|im_end|>\n' }}
+     {%- endif %}
+ {%- endif %}
+ {%- set ns = namespace(multi_step_tool=true, last_query_index=messages|length - 1) %}
+ {%- for message in messages[::-1] %}
+     {%- set index = (messages|length - 1) - loop.index0 %}
+     {%- if ns.multi_step_tool and message.role == "user" and message.content is string and not(message.content.startswith('<tool_response>') and message.content.endswith('</tool_response>')) %}
+         {%- set ns.multi_step_tool = false %}
+         {%- set ns.last_query_index = index %}
+     {%- endif %}
+ {%- endfor %}
+ {%- for message in messages %}
+     {%- if message.content is string %}
+         {%- set content = message.content %}
+     {%- else %}
+         {%- set content = '' %}
+     {%- endif %}
+     {%- if (message.role == "user") or (message.role == "system" and not loop.first) %}
+         {{- '<|im_start|>' + message.role + '\n' + content + '<|im_end|>' + '\n' }}
+     {%- elif message.role == "assistant" %}
+         {%- set reasoning_content = '' %}
+         {%- if message.reasoning_content is string %}
+             {%- set reasoning_content = message.reasoning_content %}
+         {%- else %}
+             {%- if '</think>' in content %}
+                 {%- set reasoning_content = content.split('</think>')[0].rstrip('\n').split('<think>')[-1].lstrip('\n') %}
+                 {%- set content = content.split('</think>')[-1].lstrip('\n') %}
+             {%- endif %}
+         {%- endif %}
+         {%- if loop.index0 > ns.last_query_index %}
+             {%- if loop.last or (not loop.last and reasoning_content) %}
+                 {{- '<|im_start|>' + message.role + '\n<think>\n' + reasoning_content.strip('\n') + '\n</think>\n\n' + content.lstrip('\n') }}
+             {%- else %}
+                 {{- '<|im_start|>' + message.role + '\n' + content }}
+             {%- endif %}
+         {%- else %}
+             {{- '<|im_start|>' + message.role + '\n' + content }}
+         {%- endif %}
+         {%- if message.tool_calls %}
+             {%- for tool_call in message.tool_calls %}
+                 {%- if (loop.first and content) or (not loop.first) %}
+                     {{- '\n' }}
+                 {%- endif %}
+                 {%- if tool_call.function %}
+                     {%- set tool_call = tool_call.function %}
+                 {%- endif %}
+                 {{- '<tool_call>\n{"name": "' }}
+                 {{- tool_call.name }}
+                 {{- '", "arguments": ' }}
+                 {%- if tool_call.arguments is string %}
+                     {{- tool_call.arguments }}
+                 {%- else %}
+                     {{- tool_call.arguments | tojson }}
+                 {%- endif %}
+                 {{- '}\n</tool_call>' }}
+             {%- endfor %}
+         {%- endif %}
+         {{- '<|im_end|>\n' }}
+     {%- elif message.role == "tool" %}
+         {%- if loop.first or (messages[loop.index0 - 1].role != "tool") %}
+             {{- '<|im_start|>user' }}
+         {%- endif %}
+         {{- '\n<tool_response>\n' }}
+         {{- content }}
+         {{- '\n</tool_response>' }}
+         {%- if loop.last or (messages[loop.index0 + 1].role != "tool") %}
+             {{- '<|im_end|>\n' }}
+         {%- endif %}
+     {%- endif %}
+ {%- endfor %}
+ {%- if add_generation_prompt %}
+     {{- '<|im_start|>assistant\n' }}
+     {%- if true %}
+         {{- '<think>\n\n</think>\n\n' }}
+     {%- endif %}
+ {%- endif %}
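
For a single user turn with `add_generation_prompt=True`, the template renders roughly the following (the number of `<audio>` placeholders depends on the clip length; note the hardcoded empty think block at the end, which keeps Qwen3 out of thinking mode):

```text
<|im_start|>user
<audio><audio>...<audio> Transcribe the speech to text<|im_end|>
<|im_start|>assistant
<think>

</think>

```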
config.json ADDED
@@ -0,0 +1,352 @@
+ {
+   "architectures": [
+     "ASRModel"
+   ],
+   "attn_implementation": "sdpa",
+   "audio_config": {
+     "_name_or_path": "zai-org/GLM-ASR-Nano-2512",
+     "architectures": [
+       "GlmAsrForConditionalGeneration"
+     ],
+     "audio_config": {
+       "_name_or_path": "",
+       "architectures": null,
+       "attention_dropout": 0.0,
+       "chunk_size_feed_forward": 0,
+       "dtype": null,
+       "head_dim": 64,
+       "hidden_act": "gelu",
+       "hidden_size": 1280,
+       "id2label": {
+         "0": "LABEL_0",
+         "1": "LABEL_1"
+       },
+       "initializer_range": 0.02,
+       "intermediate_size": 5120,
+       "is_encoder_decoder": false,
+       "label2id": {
+         "LABEL_0": 0,
+         "LABEL_1": 1
+       },
+       "max_position_embeddings": 1500,
+       "model_type": "glmasr_encoder",
+       "num_attention_heads": 20,
+       "num_hidden_layers": 32,
+       "num_key_value_heads": 20,
+       "num_mel_bins": 128,
+       "output_attentions": false,
+       "output_hidden_states": false,
+       "partial_rotary_factor": 0.5,
+       "problem_type": null,
+       "return_dict": true,
+       "rope_parameters": {
+         "partial_rotary_factor": 0.5,
+         "rope_theta": 10000.0,
+         "rope_type": "default"
+       }
+     },
+     "audio_token_id": 59260,
+     "dtype": "bfloat16",
+     "hidden_size": 2048,
+     "model_type": "glmasr",
+     "num_mel_bins": 128,
+     "projector_hidden_act": "gelu",
+     "text_config": {
+       "_name_or_path": "",
+       "architectures": null,
+       "attention_bias": false,
+       "attention_dropout": 0.0,
+       "bos_token_id": 1,
+       "chunk_size_feed_forward": 0,
+       "dtype": null,
+       "eos_token_id": [
+         59246,
+         59253,
+         59255
+       ],
+       "head_dim": 128,
+       "hidden_act": "silu",
+       "hidden_size": 2048,
+       "id2label": {
+         "0": "LABEL_0",
+         "1": "LABEL_1"
+       },
+       "initializer_range": 0.02,
+       "intermediate_size": 6144,
+       "is_encoder_decoder": false,
+       "label2id": {
+         "LABEL_0": 0,
+         "LABEL_1": 1
+       },
+       "max_position_embeddings": 8192,
+       "mlp_bias": false,
+       "model_type": "llama",
+       "num_attention_heads": 16,
+       "num_hidden_layers": 28,
+       "num_key_value_heads": 4,
+       "output_attentions": false,
+       "output_hidden_states": false,
+       "pad_token_id": null,
+       "pretraining_tp": 1,
+       "problem_type": null,
+       "return_dict": true,
+       "rms_norm_eps": 1e-05,
+       "rope_parameters": {
+         "rope_theta": 10000.0,
+         "rope_type": "default"
+       },
+       "tie_word_embeddings": false,
+       "use_cache": true,
+       "vocab_size": 59264
+     },
+     "vocab_size": 59264
+   },
+   "audio_model_id": "zai-org/GLM-ASR-Nano-2512",
+   "audio_sample_rate": 16000,
+   "auto_map": {
+     "AutoConfig": "asr_config.ASRConfig",
+     "AutoModel": "asr_modeling.ASRModel",
+     "AutoModelForSpeechSeq2Seq": "asr_modeling.ASRModel",
+     "AutoProcessor": "asr_processing.ASRProcessor"
+   },
+   "bos_token_id": null,
+   "custom_pipelines": {
+     "automatic-speech-recognition": {
+       "impl": "asr_pipeline.ASRPipeline",
+       "pt": [
+         "AutoModelForSpeechSeq2Seq"
+       ],
+       "tf": [],
+       "type": "audio"
+     }
+   },
+   "do_sample": false,
+   "downsample_rate": 5,
+   "dtype": "bfloat16",
+   "encoder": {
+     "_name_or_path": "zai-org/GLM-ASR-Nano-2512",
+     "architectures": [
+       "GlmAsrForConditionalGeneration"
+     ],
+     "audio_config": {
+       "_name_or_path": "",
+       "architectures": null,
+       "attention_dropout": 0.0,
+       "chunk_size_feed_forward": 0,
+       "dtype": null,
+       "head_dim": 64,
+       "hidden_act": "gelu",
+       "hidden_size": 1280,
+       "id2label": {
+         "0": "LABEL_0",
+         "1": "LABEL_1"
+       },
+       "initializer_range": 0.02,
+       "intermediate_size": 5120,
+       "is_encoder_decoder": false,
+       "label2id": {
+         "LABEL_0": 0,
+         "LABEL_1": 1
+       },
+       "max_position_embeddings": 1500,
+       "model_type": "glmasr_encoder",
+       "num_attention_heads": 20,
+       "num_hidden_layers": 32,
+       "num_key_value_heads": 20,
+       "num_mel_bins": 128,
+       "output_attentions": false,
+       "output_hidden_states": false,
+       "partial_rotary_factor": 0.5,
+       "problem_type": null,
+       "return_dict": true,
+       "rope_parameters": {
+         "partial_rotary_factor": 0.5,
+         "rope_theta": 10000.0,
+         "rope_type": "default"
+       }
+     },
+     "audio_token_id": 59260,
+     "dtype": "bfloat16",
+     "hidden_size": 2048,
+     "model_type": "glmasr",
+     "num_mel_bins": 128,
+     "projector_hidden_act": "gelu",
+     "text_config": {
+       "_name_or_path": "",
+       "architectures": null,
+       "attention_bias": false,
+       "attention_dropout": 0.0,
+       "bos_token_id": 1,
+       "chunk_size_feed_forward": 0,
+       "dtype": null,
+       "eos_token_id": [
+         59246,
+         59253,
+         59255
+       ],
+       "head_dim": 128,
+       "hidden_act": "silu",
+       "hidden_size": 2048,
+       "id2label": {
+         "0": "LABEL_0",
+         "1": "LABEL_1"
+       },
+       "initializer_range": 0.02,
+       "intermediate_size": 6144,
+       "is_encoder_decoder": false,
+       "label2id": {
+         "LABEL_0": 0,
+         "LABEL_1": 1
+       },
+       "max_position_embeddings": 8192,
+       "mlp_bias": false,
+       "model_type": "llama",
+       "num_attention_heads": 16,
+       "num_hidden_layers": 28,
+       "num_key_value_heads": 4,
+       "output_attentions": false,
+       "output_hidden_states": false,
+       "pad_token_id": null,
+       "pretraining_tp": 1,
+       "problem_type": null,
+       "return_dict": true,
+       "rms_norm_eps": 1e-05,
+       "rope_parameters": {
+         "rope_theta": 10000.0,
+         "rope_type": "default"
+       },
+       "tie_word_embeddings": false,
+       "use_cache": true,
+       "vocab_size": 59264
+     },
+     "vocab_size": 59264
+   },
+   "encoder_conv_layers": [
+     [
+       1,
+       3,
+       1
+     ],
+     [
+       1,
+       3,
+       2
+     ]
+   ],
+   "encoder_dim": 1280,
+   "eos_token_id": 151645,
+   "freeze_language_model": false,
+   "freeze_projector": false,
+   "freq_mask_length": 27,
+   "length_penalty": 1.0,
+   "llm_dim": 1024,
+   "lora_alpha": 32,
+   "lora_dropout": 0.0,
+   "lora_rank": 8,
+   "lora_target_modules": [
+     "q_proj",
+     "k_proj",
+     "v_proj",
+     "o_proj",
+     "gate_proj",
+     "up_proj",
+     "down_proj"
+   ],
+   "max_new_tokens": 256,
+   "min_new_tokens": 0,
+   "model_dtype": "bfloat16",
+   "model_type": "asr_model",
+   "no_repeat_ngram_size": 0,
+   "num_beams": 1,
+   "num_experts": 4,
+   "num_experts_per_tok": 2,
+   "num_freq_masks": 2,
+   "num_time_masks": 10,
+   "pad_token_id": 151643,
+   "pipeline_tag": "automatic-speech-recognition",
+   "pretrained_model_path": "mazesmazes/tiny-audio-embedded-2",
+   "projector_hidden_dim": 2048,
+   "projector_pool_stride": 4,
+   "projector_type": "mlp",
+   "qformer_hidden_size": null,
+   "qformer_intermediate_size": null,
+   "qformer_num_heads": 16,
+   "qformer_num_layers": 2,
+   "qformer_window_size": 15,
+   "repetition_penalty": 1.0,
+   "router_aux_loss_coef": 0.01,
+   "system_prompt": "",
+   "temperature": null,
+   "text_config": {
+     "_name_or_path": "Qwen/Qwen3-0.6B",
+     "architectures": [
+       "Qwen3ForCausalLM"
+     ],
+     "attention_bias": false,
+     "attention_dropout": 0.0,
+     "bos_token_id": null,
+     "dtype": "bfloat16",
+     "eos_token_id": 151645,
+     "head_dim": 128,
+     "hidden_act": "silu",
+     "hidden_size": 1024,
+     "initializer_range": 0.02,
+     "intermediate_size": 3072,
+     "layer_types": [
+       "full_attention",
+       "full_attention",
+       "full_attention",
+       "full_attention",
+       "full_attention",
+       "full_attention",
+       "full_attention",
+       "full_attention",
+       "full_attention",
+       "full_attention",
+       "full_attention",
+       "full_attention",
+       "full_attention",
+       "full_attention",
+       "full_attention",
+       "full_attention",
+       "full_attention",
+       "full_attention",
+       "full_attention",
+       "full_attention",
+       "full_attention",
+       "full_attention",
+       "full_attention",
+       "full_attention",
+       "full_attention",
+       "full_attention",
+       "full_attention",
+       "full_attention"
+     ],
+     "max_position_embeddings": 40960,
+     "max_window_layers": 28,
+     "model_type": "qwen3",
+     "num_attention_heads": 16,
+     "num_hidden_layers": 28,
+     "num_key_value_heads": 8,
+     "pad_token_id": 151643,
+     "rms_norm_eps": 1e-06,
+     "rope_parameters": {
+       "rope_theta": 1000000,
+       "rope_type": "default"
+     },
+     "sliding_window": null,
+     "tie_word_embeddings": true,
+     "use_cache": true,
+     "use_sliding_window": false,
+     "vocab_size": 151670
+   },
+   "text_model_id": "Qwen/Qwen3-0.6B",
+   "time_mask_length": 100,
+   "top_k": null,
+   "top_p": null,
+   "transformers_version": "5.6.1",
+   "use_cache": false,
+   "use_lora": false,
+   "use_specaugment": true,
+   "vocab_size": 151670
+ }
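
The `encoder_conv_layers` entry above feeds the placeholder-count math used by the processor and model. A sketch under the standard 1-D convolution output-length formula (an assumption here; the actual helper, `compute_encoder_output_length`, lives in `asr_config.py`, which is not part of this commit view):

```python
def conv_out_len(length: int, pad: int, kernel: int, stride: int) -> int:
    # Standard conv output-length formula (assumes dilation of 1)
    return (length + 2 * pad - kernel) // stride + 1

def encoder_output_length(mel_len: int, conv_layers) -> int:
    # Apply each (pad, kernel, stride) conv layer in sequence
    for pad, kernel, stride in conv_layers:
        mel_len = conv_out_len(mel_len, pad, kernel, stride)
    return mel_len

# encoder_conv_layers from the config: [[1, 3, 1], [1, 3, 2]]
print(encoder_output_length(3000, [(1, 3, 1), (1, 3, 2)]))  # 1500: the stride-2 conv halves the frame rate
```

With a `projector_pool_stride` of 4, those 1500 encoder frames would presumably map to roughly 375 `<audio>` placeholders, though the exact rounding lives in `projectors.get_output_length` (not shown here).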
diarization.py ADDED
@@ -0,0 +1,730 @@
+ """Speaker diarization using TEN-VAD + ECAPA-TDNN + spectral clustering.
+
+ Spectral clustering implementation adapted from FunASR/3D-Speaker:
+ https://github.com/alibaba-damo-academy/FunASR
+ MIT License (https://opensource.org/licenses/MIT)
+ """
+
+ import warnings
+
+ import numpy as np
+ import scipy
+ import sklearn.metrics.pairwise
+ import torch
+ from sklearn.cluster._kmeans import k_means
+ from sklearn.preprocessing import normalize
+
+
+ def _get_device() -> torch.device:
+     """Get the best available device for inference."""
+     if torch.cuda.is_available():
+         return torch.device("cuda")
+     if torch.backends.mps.is_available():
+         return torch.device("mps")
+     return torch.device("cpu")
+
+
+ class SpectralCluster:
+     """Spectral clustering using the unnormalized Laplacian of the affinity matrix.
+
+     Adapted from the FunASR/3D-Speaker and SpeechBrain implementations.
+     Uses the eigenvalue gap to automatically determine the number of speakers.
+     """
+
+     def __init__(self, min_num_spks: int = 1, max_num_spks: int = 15, pval: float = 0.06):
+         self.min_num_spks = min_num_spks
+         self.max_num_spks = max_num_spks
+         self.pval = pval
+
+     def __call__(self, embeddings: np.ndarray, oracle_num: int | None = None) -> np.ndarray:
+         """Run spectral clustering on embeddings.
+
+         Args:
+             embeddings: Speaker embeddings of shape [N, D]
+             oracle_num: Optional known number of speakers
+
+         Returns:
+             Cluster labels of shape [N]
+         """
+         # Similarity matrix computation
+         sim_mat = self.get_sim_mat(embeddings)
+
+         # Refine the similarity matrix with pval
+         pruned_sim_mat = self.p_pruning(sim_mat)
+
+         # Symmetrization
+         sym_pruned_sim_mat = 0.5 * (pruned_sim_mat + pruned_sim_mat.T)
+
+         # Laplacian calculation
+         laplacian = self.get_laplacian(sym_pruned_sim_mat)
+
+         # Get spectral embeddings
+         emb, num_of_spk = self.get_spec_embs(laplacian, oracle_num)
+
+         # Perform clustering
+         return self.cluster_embs(emb, num_of_spk)
+
+     def get_sim_mat(self, embeddings: np.ndarray) -> np.ndarray:
+         """Compute the cosine similarity matrix."""
+         return sklearn.metrics.pairwise.cosine_similarity(embeddings, embeddings)
+
+     def p_pruning(self, affinity: np.ndarray) -> np.ndarray:
+         """Prune low similarity values in the affinity matrix (keep the top pval fraction)."""
+         n = affinity.shape[0]
+         pval = max(self.pval, 6.0 / n)
+         k_keep = max(1, int(pval * n))
+
+         # Vectorized: find the top-k indices per row and zero out the rest
+         top_k_idx = np.argpartition(affinity, -k_keep, axis=1)[:, -k_keep:]
+         mask = np.zeros_like(affinity, dtype=bool)
+         np.put_along_axis(mask, top_k_idx, True, axis=1)
+         affinity[~mask] = 0
+         return affinity
+
+     def get_laplacian(self, sim_mat: np.ndarray) -> np.ndarray:
+         """Compute the unnormalized Laplacian matrix."""
+         from scipy.sparse.csgraph import laplacian
+
+         np.fill_diagonal(sim_mat, 0)
+         return laplacian(sim_mat, normed=False)
+
+     def get_spec_embs(
+         self, laplacian: np.ndarray, k_oracle: int | None = None
+     ) -> tuple[np.ndarray, int]:
+         """Extract spectral embeddings from the Laplacian."""
+         lambdas, eig_vecs = scipy.linalg.eigh(laplacian)
+
+         if k_oracle is not None:
+             num_of_spk = k_oracle
+         else:
+             lambda_gap_list = self.get_eigen_gaps(
+                 lambdas[self.min_num_spks - 1 : self.max_num_spks + 1]
+             )
+             num_of_spk = np.argmax(lambda_gap_list) + self.min_num_spks
+
+         emb = eig_vecs[:, :num_of_spk]
+         return emb, num_of_spk
+
+     def cluster_embs(self, emb: np.ndarray, k: int) -> np.ndarray:
+         """Cluster spectral embeddings using k-means."""
+         _, labels, _ = k_means(emb, k, n_init=10)
+         return labels
+
+     def get_eigen_gaps(self, eig_vals: np.ndarray) -> np.ndarray:
+         """Compute gaps between consecutive eigenvalues."""
+         return np.diff(eig_vals)
+
+
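A toy sanity check for the eigengap-based speaker count, using synthetic (non-speech) embeddings; two well-separated clusters should come back as two labels:

```python
import numpy as np

rng = np.random.default_rng(0)
emb = np.vstack(
    [
        rng.normal(scale=0.05, size=(20, 8)) + np.eye(8)[0],  # cluster around e_0
        rng.normal(scale=0.05, size=(20, 8)) + np.eye(8)[1],  # cluster around e_1
    ]
)

labels = SpectralCluster(min_num_spks=1, max_num_spks=5)(emb)
print(np.unique(labels))  # expected: two labels, e.g. [0 1]
```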
+ class SpeakerClusterer:
119
+ """Speaker clustering backend using spectral clustering with speaker merging.
120
+
121
+ Features:
122
+ - Spectral clustering with eigenvalue gap for auto speaker count detection
123
+ - P-pruning for affinity matrix refinement
124
+ - Post-clustering speaker merging by cosine similarity
125
+ """
126
+
127
+ def __init__(
128
+ self,
129
+ min_num_spks: int = 2,
130
+ max_num_spks: int = 10,
131
+ merge_thr: float = 0.90, # Moderate merging
132
+ ):
133
+ self.min_num_spks = min_num_spks
134
+ self.max_num_spks = max_num_spks
135
+ self.merge_thr = merge_thr
136
+ self._spectral_cluster: SpectralCluster | None = None
137
+
138
+ def _get_spectral_cluster(self) -> SpectralCluster:
139
+ """Lazy-load spectral clusterer."""
140
+ if self._spectral_cluster is None:
141
+ self._spectral_cluster = SpectralCluster(
142
+ min_num_spks=self.min_num_spks,
143
+ max_num_spks=self.max_num_spks,
144
+ )
145
+ return self._spectral_cluster
146
+
147
+ def __call__(self, embeddings: np.ndarray, num_speakers: int | None = None) -> np.ndarray:
148
+ """Cluster speaker embeddings and return labels.
149
+
150
+ Args:
151
+ embeddings: Speaker embeddings of shape [N, D]
152
+ num_speakers: Optional oracle number of speakers
153
+
154
+ Returns:
155
+ Cluster labels of shape [N]
156
+ """
157
+ if len(embeddings.shape) != 2:
158
+ raise ValueError(f"Expected 2D array, got shape {embeddings.shape}")
159
+
160
+ # Handle edge cases
161
+ if embeddings.shape[0] == 0:
162
+ return np.array([], dtype=int)
163
+ if embeddings.shape[0] == 1:
164
+ return np.array([0], dtype=int)
165
+ if embeddings.shape[0] < 6:
166
+ return np.zeros(embeddings.shape[0], dtype=int)
167
+
168
+ # Normalize embeddings and replace NaN/inf
169
+ embeddings = np.nan_to_num(embeddings, nan=0.0, posinf=0.0, neginf=0.0)
170
+ embeddings = normalize(embeddings)
171
+
172
+ # Run spectral clustering (suppress numerical warnings)
173
+ spectral = self._get_spectral_cluster()
174
+
175
+ # Update min/max for oracle case
176
+ if num_speakers is not None:
177
+ spectral.min_num_spks = num_speakers
178
+ spectral.max_num_spks = num_speakers
179
+
180
+ with warnings.catch_warnings():
181
+ warnings.filterwarnings("ignore", category=RuntimeWarning)
182
+ labels = spectral(embeddings, oracle_num=num_speakers)
183
+
184
+ # Reset min/max
185
+ if num_speakers is not None:
186
+ spectral.min_num_spks = self.min_num_spks
187
+ spectral.max_num_spks = self.max_num_spks
188
+
189
+ # Merge similar speakers if no oracle
190
+ if num_speakers is None:
191
+ labels = self._merge_by_cos(labels, embeddings, self.merge_thr)
192
+
193
+ # Re-index labels sequentially
194
+ _, labels = np.unique(labels, return_inverse=True)
195
+
196
+ return labels
197
+
198
+ def _merge_by_cos(self, labels: np.ndarray, embs: np.ndarray, cos_thr: float) -> np.ndarray:
199
+ """Merge similar speakers by cosine similarity of centroids."""
200
+ from scipy.cluster.hierarchy import fcluster, linkage
201
+ from scipy.spatial.distance import pdist
202
+
203
+ unique_labels = np.unique(labels)
204
+ if len(unique_labels) <= 1:
205
+ return labels
206
+
207
+ # Compute normalized speaker centroids
208
+ centroids = np.array([embs[labels == lbl].mean(0) for lbl in unique_labels])
209
+ centroids = normalize(centroids)
210
+
211
+ # Hierarchical clustering with cosine distance
212
+ distances = pdist(centroids, metric="cosine")
213
+ linkage_matrix = linkage(distances, method="average")
214
+ merged_labels = fcluster(linkage_matrix, t=1.0 - cos_thr, criterion="distance") - 1
215
+
216
+ # Map original labels to merged labels
217
+ label_map = dict(zip(unique_labels, merged_labels))
218
+ return np.array([label_map[lbl] for lbl in labels])
219
+
220
+
221
+ class LocalSpeakerDiarizer:
222
+ """Local speaker diarization using TEN-VAD + ECAPA-TDNN + spectral clustering.
223
+
224
+ Pipeline:
225
+ 1. TEN-VAD detects speech segments
226
+ 2. Sliding window (1.0s, 75% overlap) for uniform embedding extraction
227
+ 3. ECAPA-TDNN extracts speaker embeddings per window
228
+ 4. Spectral clustering with eigenvalue gap for auto speaker detection
229
+ 5. Frame-level consensus voting for segment reconstruction
230
+ 6. Post-processing merges short segments to reduce flicker
231
+
232
+ Tunable Parameters (class attributes):
233
+ - WINDOW_SIZE: Embedding extraction window size in seconds
234
+ - STEP_SIZE: Sliding window step size (overlap = WINDOW_SIZE - STEP_SIZE)
235
+ - VAD_THRESHOLD: Speech detection threshold (lower = more sensitive)
236
+ - VAD_MIN_DURATION: Minimum speech segment duration
237
+ - VAD_MAX_GAP: Maximum gap to bridge between segments
238
+ - VAD_PAD_ONSET/OFFSET: Padding added to speech segments
239
+ - VOTING_RATE: Frame resolution for consensus voting
240
+ - MIN_SEGMENT_DURATION: Minimum final segment duration
241
+ - SAME_SPEAKER_GAP: Maximum gap to merge same-speaker segments
242
+ - TAIL_COVERAGE_RATIO: Minimum tail coverage to add extra window
243
+ """
244
+
245
+ _ten_vad_model = None
246
+ _ecapa_model = None
247
+ _device = None
248
+
249
+ # ==================== TUNABLE PARAMETERS ====================
250
+
251
+ # Sliding window for embedding extraction
252
+ WINDOW_SIZE = 0.75 # seconds - shorter window for finer resolution
253
+ STEP_SIZE = 0.15 # seconds (80% overlap for more votes)
254
+ TAIL_COVERAGE_RATIO = 0.1 # Add extra window if tail > this ratio of window
255
+
256
+ # VAD hysteresis parameters
257
+ VAD_THRESHOLD = 0.25 # Balanced threshold
258
+ VAD_MIN_DURATION = 0.05 # Minimum speech segment duration (seconds)
259
+ VAD_MAX_GAP = 0.50 # Bridge gaps shorter than this (seconds)
260
+ VAD_PAD_ONSET = 0.05 # Padding at segment start (seconds)
261
+ VAD_PAD_OFFSET = 0.05 # Padding at segment end (seconds)
262
+
263
+ # Frame-level voting
264
+ VOTING_RATE = 0.01 # 10ms resolution for consensus voting
265
+
266
+ # Post-processing
267
+ MIN_SEGMENT_DURATION = 0.15 # Minimum final segment duration (seconds)
268
+ SHORT_SEGMENT_GAP = 0.1 # Gap threshold for merging short segments
269
+ SAME_SPEAKER_GAP = 0.5 # Gap threshold for merging same-speaker segments
270
+
271
+ # ===========================================================
272
+
273
+ @classmethod
274
+ def _get_ten_vad_model(cls):
275
+ """Lazy-load TEN-VAD model (singleton)."""
276
+ if cls._ten_vad_model is None:
277
+ from ten_vad import TenVad
278
+
279
+ cls._ten_vad_model = TenVad(hop_size=256, threshold=cls.VAD_THRESHOLD)
280
+ return cls._ten_vad_model
281
+
282
+ @classmethod
283
+ def _get_device(cls) -> torch.device:
284
+ """Get the best available device."""
285
+ if cls._device is None:
286
+ cls._device = _get_device()
287
+ return cls._device
288
+
289
+ @classmethod
290
+ def _get_ecapa_model(cls):
291
+ """Lazy-load ECAPA-TDNN speaker embedding model (singleton)."""
292
+ if cls._ecapa_model is None:
293
+ # Suppress torchaudio deprecation warning from SpeechBrain
294
+ with warnings.catch_warnings():
295
+ warnings.filterwarnings("ignore", message="torchaudio._backend")
296
+ from speechbrain.inference.speaker import EncoderClassifier
297
+
298
+ device = cls._get_device()
299
+ cls._ecapa_model = EncoderClassifier.from_hparams(
300
+ source="speechbrain/spkrec-ecapa-voxceleb",
301
+ run_opts={"device": str(device)},
302
+ )
303
+
304
+ return cls._ecapa_model
305
+
306
+ @classmethod
307
+ def diarize(
308
+ cls,
309
+ audio: np.ndarray | str,
310
+ sample_rate: int = 16000,
311
+ num_speakers: int | None = None,
312
+ min_speakers: int = 2,
313
+ max_speakers: int = 10,
314
+ **_kwargs,
315
+ ) -> list[dict]:
316
+ """Run speaker diarization on audio.
317
+
318
+ Args:
319
+ audio: Audio waveform as numpy array or path to audio file
320
+ sample_rate: Audio sample rate (default 16000)
321
+ num_speakers: Exact number of speakers (if known)
322
+ min_speakers: Minimum number of speakers
323
+ max_speakers: Maximum number of speakers
324
+
325
+ Returns:
326
+ List of dicts with 'speaker', 'start', 'end' keys
327
+ """
328
+ # Handle file path input
329
+ if isinstance(audio, str):
330
+ import librosa
331
+
332
+ audio, sample_rate = librosa.load(audio, sr=16000)
333
+
334
+ # Ensure correct sample rate
335
+ if sample_rate != 16000:
336
+ import librosa
337
+
338
+ audio = librosa.resample(audio, orig_sr=sample_rate, target_sr=16000)
339
+ sample_rate = 16000
340
+
341
+ audio = audio.astype(np.float32)
342
+ total_duration = len(audio) / sample_rate
343
+
344
+ # Step 1: VAD (returns segments and raw frame-level decisions)
345
+ segments, vad_frames = cls._get_speech_segments(audio, sample_rate)
346
+ if not segments:
347
+ return []
348
+
349
+ # Step 2: Extract embeddings
350
+ embeddings, window_segments = cls._extract_embeddings(audio, segments, sample_rate)
351
+ if len(embeddings) == 0:
352
+ return []
353
+
354
+ # Step 3: Cluster
355
+ clusterer = SpeakerClusterer(min_num_spks=min_speakers, max_num_spks=max_speakers)
356
+ labels = clusterer(embeddings, num_speakers)
357
+
358
+ # Step 4: Post-process with consensus voting (VAD-aware)
359
+ return cls._postprocess_segments(window_segments, labels, total_duration, vad_frames)
360
+
361
+ @classmethod
362
+ def _get_speech_segments(
363
+ cls, audio_array: np.ndarray, sample_rate: int = 16000
364
+ ) -> tuple[list[dict], list[bool]]:
365
+ """Get speech segments using TEN-VAD.
366
+
367
+ Returns:
368
+ Tuple of (segments list, vad_frames list of per-frame speech decisions)
369
+ """
370
+ vad_model = cls._get_ten_vad_model()
371
+
372
+ # Convert to int16 as required by TEN-VAD
373
+ # Clip to prevent integer overflow
374
+ if audio_array.dtype != np.int16:
375
+ audio_int16 = (np.clip(audio_array, -1.0, 1.0) * 32767).astype(np.int16)
376
+ else:
377
+ audio_int16 = audio_array
378
+
379
+ # Process frame by frame
380
+ hop_size = 256
381
+ frame_duration = hop_size / sample_rate
382
+ speech_frames: list[bool] = []
383
+
384
+ for i in range(0, len(audio_int16) - hop_size, hop_size):
385
+ frame = audio_int16[i : i + hop_size]
386
+ _, is_speech = vad_model.process(frame)
387
+ speech_frames.append(is_speech)
388
+
389
+ # Convert frame-level decisions to segments
390
+ segments = []
391
+ in_speech = False
392
+ start_idx = 0
393
+
394
+ for i, is_speech in enumerate(speech_frames):
395
+ if is_speech and not in_speech:
396
+ start_idx = i
397
+ in_speech = True
398
+ elif not is_speech and in_speech:
399
+ start_time = start_idx * frame_duration
400
+ end_time = i * frame_duration
401
+ segments.append(
402
+ {
403
+ "start": start_time,
404
+ "end": end_time,
405
+ "start_sample": int(start_time * sample_rate),
406
+ "end_sample": int(end_time * sample_rate),
407
+ }
408
+ )
409
+ in_speech = False
410
+
411
+ # Handle trailing speech
412
+ if in_speech:
413
+ start_time = start_idx * frame_duration
414
+ end_time = len(speech_frames) * frame_duration
415
+ segments.append(
416
+ {
417
+ "start": start_time,
418
+ "end": end_time,
419
+ "start_sample": int(start_time * sample_rate),
420
+ "end_sample": int(end_time * sample_rate),
421
+ }
422
+ )
423
+
424
+ return cls._apply_vad_hysteresis(segments, sample_rate), speech_frames
425
+
426
+ @classmethod
427
+ def _apply_vad_hysteresis(cls, segments: list[dict], sample_rate: int = 16000) -> list[dict]:
428
+ """Apply hysteresis-like post-processing to VAD segments."""
429
+ if not segments:
430
+ return segments
431
+
432
+ segments = sorted(segments, key=lambda x: x["start"])
433
+
434
+ # Fill short gaps
435
+ merged = [segments[0].copy()]
436
+ for seg in segments[1:]:
437
+ gap = seg["start"] - merged[-1]["end"]
438
+ if gap <= cls.VAD_MAX_GAP:
439
+ merged[-1]["end"] = seg["end"]
440
+ merged[-1]["end_sample"] = seg["end_sample"]
441
+ else:
442
+ merged.append(seg.copy())
443
+
444
+ # Remove short segments
445
+ filtered = [seg for seg in merged if (seg["end"] - seg["start"]) >= cls.VAD_MIN_DURATION]
446
+
447
+ # Dilate segments (add padding)
448
+ for seg in filtered:
449
+ seg["start"] = max(0.0, seg["start"] - cls.VAD_PAD_ONSET)
450
+ seg["end"] = seg["end"] + cls.VAD_PAD_OFFSET
451
+ seg["start_sample"] = int(seg["start"] * sample_rate)
452
+ seg["end_sample"] = int(seg["end"] * sample_rate)
453
+
454
+ return filtered
455
+
456
+ @classmethod
457
+ def _extract_embeddings(
458
+ cls, audio_array: np.ndarray, segments: list[dict], sample_rate: int
459
+ ) -> tuple[np.ndarray, list[dict]]:
460
+ """Extract speaker embeddings using sliding windows."""
461
+ speaker_model = cls._get_ecapa_model()
462
+
463
+ window_samples = int(cls.WINDOW_SIZE * sample_rate)
464
+ step_samples = int(cls.STEP_SIZE * sample_rate)
465
+
466
+ embeddings = []
467
+ window_segments = []
468
+
469
+ with torch.no_grad():
470
+ for seg in segments:
471
+ seg_start = seg["start_sample"]
472
+ seg_end = seg["end_sample"]
473
+ seg_len = seg_end - seg_start
474
+
475
+ # Generate window positions
476
+ if seg_len <= window_samples:
477
+ starts = [seg_start]
478
+ ends = [seg_end]
479
+ else:
480
+ starts = list(range(seg_start, seg_end - window_samples + 1, step_samples))
481
+ ends = [s + window_samples for s in starts]
482
+
483
+ # Cover tail if > TAIL_COVERAGE_RATIO of window remains
484
+ if ends and ends[-1] < seg_end:
485
+ remainder = seg_end - ends[-1]
486
+ if remainder > (window_samples * cls.TAIL_COVERAGE_RATIO):
487
+ starts.append(seg_end - window_samples)
488
+ ends.append(seg_end)
489
+
490
+ for c_start, c_end in zip(starts, ends):
491
+ chunk = audio_array[c_start:c_end]
492
+
493
+ # Pad short chunks with reflection
494
+ if len(chunk) < window_samples:
495
+ pad_width = window_samples - len(chunk)
496
+ chunk = np.pad(chunk, (0, pad_width), mode="reflect")
497
+
498
+ # Extract embedding using SpeechBrain's encode_batch
499
+ chunk_tensor = torch.from_numpy(chunk).float().unsqueeze(0)
500
+ embedding = (
501
+ speaker_model.encode_batch(chunk_tensor).squeeze(0).squeeze(0).cpu().numpy()
502
+ )
503
+
504
+ # Validate embedding
505
+ if np.isfinite(embedding).all() and np.linalg.norm(embedding) > 1e-8:
506
+ embeddings.append(embedding)
507
+ window_segments.append(
508
+ {
509
+ "start": c_start / sample_rate,
510
+ "end": c_end / sample_rate,
511
+ }
512
+ )
513
+
514
+ # Normalize all embeddings at once
515
+ if embeddings:
516
+ return normalize(np.array(embeddings)), window_segments
517
+ return np.array([]), []
518
+
519
+     @classmethod
+     def _resample_vad(cls, vad_frames: list[bool], num_frames: int) -> np.ndarray:
+         """Resample VAD frame decisions to match voting grid resolution.
+
+         VAD operates at 256 samples / 16000 Hz = 16ms per frame.
+         Voting operates at VOTING_RATE (default 10ms) per frame.
+         This maps VAD decisions to the finer voting grid.
+         """
+         if not vad_frames:
+             return np.zeros(num_frames, dtype=bool)
+
+         vad_rate = 256 / 16000  # 16ms per VAD frame
+         vad_arr = np.array(vad_frames)
+
+         # Vectorized: compute VAD frame indices for each voting frame
+         voting_times = np.arange(num_frames) * cls.VOTING_RATE
+         vad_indices = np.clip((voting_times / vad_rate).astype(int), 0, len(vad_arr) - 1)
+         return vad_arr[vad_indices]
+
+     @classmethod
+     def _postprocess_segments(
+         cls,
+         window_segments: list[dict],
+         labels: np.ndarray,
+         total_duration: float,
+         vad_frames: list[bool],
+     ) -> list[dict]:
+         """Post-process using frame-level consensus voting with VAD-aware silence."""
+         if not window_segments or len(labels) == 0:
+             return []
+
+         # Correct labels to be contiguous
+         unique_labels = np.unique(labels)
+         label_map = {old: new for new, old in enumerate(unique_labels)}
+         clean_labels = np.array([label_map[lbl] for lbl in labels])
+         num_speakers = len(unique_labels)
+
+         if num_speakers == 0:
+             return []
+
+         # Create voting grid
+         num_frames = int(np.ceil(total_duration / cls.VOTING_RATE)) + 1
+         votes = np.zeros((num_frames, num_speakers), dtype=np.float32)
+
+         # Accumulate votes
+         for win, label in zip(window_segments, clean_labels):
+             start_frame = int(win["start"] / cls.VOTING_RATE)
+             end_frame = int(win["end"] / cls.VOTING_RATE)
+             end_frame = min(end_frame, num_frames)
+             if start_frame < end_frame:
+                 votes[start_frame:end_frame, label] += 1.0
+
+         # Determine winner per frame
+         frame_speakers = np.argmax(votes, axis=1)
+         max_votes = np.max(votes, axis=1)
+
+         # Resample VAD to voting grid resolution for silence-aware voting
+         vad_resampled = cls._resample_vad(vad_frames, num_frames)
+
+         # Convert frames to segments
+         final_segments = []
+         current_speaker = -1
+         seg_start = 0.0
+
+         for f in range(num_frames):
+             speaker = int(frame_speakers[f])
+             score = max_votes[f]
+
+             # Force silence if VAD says no speech OR no votes
+             if score == 0 or not vad_resampled[f]:
+                 speaker = -1
+
+             if speaker != current_speaker:
+                 if current_speaker != -1:
+                     final_segments.append(
+                         {
+                             "speaker": f"SPEAKER_{current_speaker}",
+                             "start": seg_start,
+                             "end": f * cls.VOTING_RATE,
+                         }
+                     )
+                 current_speaker = speaker
+                 seg_start = f * cls.VOTING_RATE
+
+         # Close last segment
+         if current_speaker != -1:
+             final_segments.append(
+                 {
+                     "speaker": f"SPEAKER_{current_speaker}",
+                     "start": seg_start,
+                     "end": num_frames * cls.VOTING_RATE,
+                 }
+             )
+
+         return cls._merge_short_segments(final_segments)
+
+     @classmethod
+     def _merge_short_segments(cls, segments: list[dict]) -> list[dict]:
+         """Merge short segments to reduce flicker."""
+         if not segments:
+             return []
+
+         clean: list[dict] = []
+         for seg in segments:
+             dur = seg["end"] - seg["start"]
+             if dur < cls.MIN_SEGMENT_DURATION:
+                 if (
+                     clean
+                     and clean[-1]["speaker"] == seg["speaker"]
+                     and seg["start"] - clean[-1]["end"] < cls.SHORT_SEGMENT_GAP
+                 ):
+                     clean[-1]["end"] = seg["end"]
+                 continue
+
+             if (
+                 clean
+                 and clean[-1]["speaker"] == seg["speaker"]
+                 and seg["start"] - clean[-1]["end"] < cls.SAME_SPEAKER_GAP
+             ):
+                 clean[-1]["end"] = seg["end"]
+             else:
+                 clean.append(seg)
+
+         return clean
+
+     @classmethod
+     def assign_speakers_to_words(
+         cls,
+         words: list[dict],
+         speaker_segments: list[dict],
+     ) -> list[dict]:
+         """Assign speaker labels to words based on timestamp overlap.
+
+         Args:
+             words: List of word dicts with 'word', 'start', 'end' keys
+             speaker_segments: List of speaker dicts with 'speaker', 'start', 'end' keys
+
+         Returns:
+             Words list with 'speaker' key added to each word
+         """
+         for word in words:
+             word_mid = (word["start"] + word["end"]) / 2
+
+             # Find the speaker segment that contains this word's midpoint
+             best_speaker = None
+             for seg in speaker_segments:
+                 if seg["start"] <= word_mid <= seg["end"]:
+                     best_speaker = seg["speaker"]
+                     break
+
+             # If no exact match, find closest segment
+             if best_speaker is None and speaker_segments:
+                 min_dist = float("inf")
+                 for seg in speaker_segments:
+                     seg_mid = (seg["start"] + seg["end"]) / 2
+                     dist = abs(word_mid - seg_mid)
+                     if dist < min_dist:
+                         min_dist = dist
+                         best_speaker = seg["speaker"]
+
+             word["speaker"] = best_speaker
+
+         return words
+
+
+ class SpeakerDiarizer:
+     """Speaker diarization using TEN-VAD + ECAPA-TDNN + spectral clustering.
+
+     Example:
+         >>> segments = SpeakerDiarizer.diarize(audio_array)
+         >>> for seg in segments:
+         ...     print(f"{seg['speaker']}: {seg['start']:.2f} - {seg['end']:.2f}")
+     """
+
+     @classmethod
+     def diarize(
+         cls,
+         audio: np.ndarray | str,
+         sample_rate: int = 16000,
+         num_speakers: int | None = None,
+         min_speakers: int | None = None,
+         max_speakers: int | None = None,
+         **_kwargs,
+     ) -> list[dict]:
+         """Run speaker diarization on audio.
+
+         Args:
+             audio: Audio waveform as numpy array or path to audio file
+             sample_rate: Audio sample rate (default 16000)
+             num_speakers: Exact number of speakers (if known)
+             min_speakers: Minimum number of speakers
+             max_speakers: Maximum number of speakers
+
+         Returns:
+             List of dicts with 'speaker', 'start', 'end' keys
+         """
+         return LocalSpeakerDiarizer.diarize(
+             audio,
+             sample_rate=sample_rate,
+             num_speakers=num_speakers,
+             min_speakers=min_speakers or 2,
+             max_speakers=max_speakers or 10,
+         )
+
+     @classmethod
+     def assign_speakers_to_words(
+         cls,
+         words: list[dict],
+         speaker_segments: list[dict],
+     ) -> list[dict]:
+         """Assign speaker labels to words based on timestamp overlap."""
+         return LocalSpeakerDiarizer.assign_speakers_to_words(words, speaker_segments)
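For orientation, here is a minimal sketch of how the two public entry points compose. The waveform and the word list below are placeholders standing in for real audio and for the output of an upstream ASR pass; nothing in this commit produces them.

```python
import numpy as np

# Placeholder inputs (hypothetical): 10 s of 16 kHz noise and one ASR word.
audio_array = np.random.randn(16000 * 10).astype(np.float32)
words = [{"word": "hello", "start": 0.5, "end": 0.9}]

segments = SpeakerDiarizer.diarize(audio_array, sample_rate=16000, max_speakers=4)
words = SpeakerDiarizer.assign_speakers_to_words(words, segments)
for w in words:
    print(w["speaker"], w["start"], w["word"])
```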
generation_config.json ADDED
@@ -0,0 +1,17 @@
+ {
+   "do_sample": false,
+   "eos_token_id": [
+     151645,
+     151645,
+     151643
+   ],
+   "length_penalty": 1.0,
+   "max_new_tokens": 256,
+   "min_new_tokens": 0,
+   "no_repeat_ngram_size": 0,
+   "num_beams": 1,
+   "pad_token_id": 151643,
+   "repetition_penalty": 1.0,
+   "transformers_version": "5.6.1",
+   "use_cache": true
+ }
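These decoding defaults (greedy decoding, up to 256 new tokens) can be inspected with `transformers.GenerationConfig`; a short sketch, with the repo id left as a placeholder:

```python
from transformers import GenerationConfig

# "<repo-id>" is a placeholder for this checkpoint's Hub path.
gen_config = GenerationConfig.from_pretrained("<repo-id>")
assert gen_config.do_sample is False   # greedy decoding
assert gen_config.max_new_tokens == 256
```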
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:cc6533d5a1fac0565a1b7cbe689e34e885b1d08f148a5921f4fbaf92037b11c0
+ size 1216765200
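The weights themselves live in Git LFS; the pointer above records only the hash and size. A sketch of inspecting the downloaded file with the `safetensors` library:

```python
from safetensors import safe_open

# Assumes model.safetensors has been pulled from LFS into the working directory.
with safe_open("model.safetensors", framework="pt") as f:
    for name in list(f.keys())[:5]:
        print(name, f.get_slice(name).get_shape())
```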
preprocessor_config.json ADDED
@@ -0,0 +1,19 @@
+ {
+   "chunk_length": 30,
+   "dither": 0.0,
+   "feature_extractor_type": "WhisperFeatureExtractor",
+   "feature_size": 128,
+   "hop_length": 160,
+   "n_fft": 400,
+   "n_samples": 480000,
+   "nb_max_frames": 3000,
+   "padding": false,
+   "padding_side": "right",
+   "padding_value": 0.0,
+   "return_attention_mask": false,
+   "sampling_rate": 16000,
+   "processor_class": "ASRProcessor",
+   "auto_map": {
+     "AutoProcessor": "asr_processing.ASRProcessor"
+   }
+ }
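This config describes a 128-mel Whisper-style front end (400-sample FFT, 160-sample hop, 30 s chunks at 16 kHz). A sketch of what such a front end produces, using the stock `WhisperFeatureExtractor` rather than the custom `ASRProcessor` referenced in `auto_map`:

```python
import numpy as np
from transformers import WhisperFeatureExtractor

fe = WhisperFeatureExtractor(
    feature_size=128, sampling_rate=16000, hop_length=160, n_fft=400, chunk_length=30
)
audio = np.zeros(16000 * 5, dtype=np.float32)  # placeholder: 5 s of silence
feats = fe(audio, sampling_rate=16000, return_tensors="np")
print(feats.input_features.shape)  # (1, 128, 3000) when padded to the full 30 s chunk
```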
projectors.py ADDED
@@ -0,0 +1,481 @@
+ """Audio projector modules for bridging encoder and decoder embeddings.
+
+ This module contains all projector architectures:
+ - MLPAudioProjector: Simple 2-layer MLP with frame stacking downsampling
+ - MOSAProjector: MOSA-style dense mixture of experts
+ - MoEAudioProjector: Shared expert + sparse routed experts
+ - QFormerAudioProjector: BLIP-2 QFormer with learnable queries (Granite-style)
+ """
+
+ import math
+
+ import torch
+ import torch.nn as nn
+ import torch.nn.functional as F  # noqa: N812
+ from transformers import AutoModel, Blip2QFormerConfig
+ from transformers.models.llama.modeling_llama import LlamaRMSNorm
+
+ # =============================================================================
+ # MLP Projector
+ # =============================================================================
+
+
+ class MLPAudioProjector(nn.Module):
+     """2-layer MLP projector with frame-stacking downsampling (matches GLM-ASR)."""
+
+     def __init__(self, config):
+         """Initialize MLP projector.
+
+         Args:
+             config: ASRConfig with encoder_dim, llm_dim, projector_pool_stride
+         """
+         super().__init__()
+
+         encoder_dim = getattr(config, "encoder_dim", 768)
+         llm_dim = getattr(config, "llm_dim", 2048)
+         self.k = getattr(config, "projector_pool_stride", 4)
+
+         # Frame stacking: concat k adjacent frames then project
+         in_dim = encoder_dim * self.k
+         # Hidden dim defaults to llm_dim, can be overridden via config
+         hidden_dim = getattr(config, "projector_hidden_dim", None) or llm_dim
+         self.linear_1 = nn.Linear(in_dim, hidden_dim, bias=False)
+         self.norm = LlamaRMSNorm(hidden_dim, eps=1e-6)
+         self.act = nn.GELU()
+         self.linear_2 = nn.Linear(hidden_dim, llm_dim, bias=False)
+
+     def get_output_length(self, input_length: int) -> int:
+         """Calculate output sequence length given input length (matches GLM-ASR)."""
+         # GLM-ASR formula: (L - merge_factor) // merge_factor + 1
+         return (input_length - self.k) // self.k + 1
+
+     def forward(self, x: torch.Tensor) -> torch.Tensor:
+         """Project audio features to LLM embedding space.
+
+         Args:
+             x: Audio encoder output of shape [batch, seq_len, encoder_dim]
+
+         Returns:
+             Projected features of shape [batch, (seq_len - k) // k + 1, llm_dim]
+         """
+         x = _frame_stack(x, self.k)
+         x = self.linear_1(x)
+         x = self.norm(x)
+         x = self.act(x)
+         return self.linear_2(x)
+
+
+ # =============================================================================
+ # MoE Projector (MOSA-style)
+ # =============================================================================
+
+
+ def _frame_stack(x: torch.Tensor, k: int) -> torch.Tensor:
+     """Stack k adjacent frames along the feature dim.
+
+     Truncates trailing frames that don't fill a complete k-frame window,
+     matching GLM-ASR's `(seq_len - k) // k + 1` formula.
+     """
+     batch, seq, dim = x.shape
+     out_len = (seq - k) // k + 1
+     return x[:, : out_len * k, :].reshape(batch, out_len, dim * k)
+
+
+ class SimpleAdapter(nn.Module):
+     """Simple 2-layer GELU adapter (from MOSA paper)."""
+
+     def __init__(self, input_dim: int, hidden_dim: int, output_dim: int):
+         super().__init__()
+         self.fc1 = nn.Linear(input_dim, hidden_dim)
+         self.act = nn.GELU()
+         self.fc2 = nn.Linear(hidden_dim, output_dim)
+
+     def forward(self, x: torch.Tensor) -> torch.Tensor:
+         return self.fc2(self.act(self.fc1(x)))
+
+
+ class MOSAProjector(nn.Module):
+     """MOSA-Base projector: simple 2-layer ReLU router with 4 simple adapters.
+
+     Based on "MOSA: Mixtures of Simple Adapters" (arXiv:2508.18998).
+     Uses softmax gating over all experts (dense MoE) with only cross-entropy loss.
+     Uses Conv1d for downsampling (2 layers, stride 2 each = 4x total).
+     """
+
+     ADAPTER_HIDDEN_DIM = 4096
+     ROUTER_HIDDEN_DIM = 512
+     CONV_KERNEL = 3
+     CONV_STRIDE = 2
+     CONV_PADDING = 1
+
+     def __init__(self, config):
+         """Initialize MOSA projector.
+
+         Args:
+             config: ASRConfig with encoder_dim, llm_dim, num_experts
+         """
+         super().__init__()
+         self.encoder_dim = getattr(config, "encoder_dim", None) or 1280
+         self.llm_dim = getattr(config, "llm_dim", None) or 2048
+         self.num_experts = getattr(config, "num_experts", None) or 4  # MOSA-Base uses 4
+
+         conv_kwargs = {
+             "kernel_size": self.CONV_KERNEL,
+             "stride": self.CONV_STRIDE,
+             "padding": self.CONV_PADDING,
+         }
+         self.downsampler = nn.Sequential(
+             nn.Conv1d(self.encoder_dim, self.encoder_dim, **conv_kwargs),
+             nn.GELU(),
+             nn.Conv1d(self.encoder_dim, self.llm_dim, **conv_kwargs),
+             nn.GELU(),
+         )
+
+         self.router = nn.Sequential(
+             nn.Linear(self.llm_dim, self.ROUTER_HIDDEN_DIM),
+             nn.ReLU(),
+             nn.Linear(self.ROUTER_HIDDEN_DIM, self.num_experts),
+         )
+
+         self.experts = nn.ModuleList(
+             [
+                 SimpleAdapter(self.llm_dim, self.ADAPTER_HIDDEN_DIM, self.llm_dim)
+                 for _ in range(self.num_experts)
+             ]
+         )
+
+     def forward(self, x: torch.Tensor) -> torch.Tensor:
+         """Project audio features using mixture of experts.
+
+         Args:
+             x: Audio encoder output of shape [batch, seq_len, encoder_dim]
+
+         Returns:
+             Projected features of shape [batch, out_len, llm_dim]
+         """
+         x = self.downsampler(x.transpose(1, 2)).transpose(1, 2)
+
+         routing_weights = F.softmax(self.router(x), dim=-1)  # (B, out_len, num_experts)
+
+         # Accumulate weighted expert outputs without materializing all experts at once.
+         output = self.experts[0](x) * routing_weights[..., 0:1]
+         for i, expert in enumerate(self.experts[1:], start=1):
+             output = output + expert(x) * routing_weights[..., i : i + 1]
+         return output
+
+     def get_output_length(self, input_length: int) -> int:
+         """Calculate output sequence length after Conv1d downsampling (4x reduction)."""
+         length = input_length
+         for _ in range(2):
+             length = (length + 2 * self.CONV_PADDING - self.CONV_KERNEL) // self.CONV_STRIDE + 1
+         return length
+
+
+ # =============================================================================
+ # MoE Projector (Pure PyTorch with Shared Expert)
+ # =============================================================================
+
+
+ class MoEAudioProjector(nn.Module):
+     """MoE projector with shared expert (DeepSeek-style), pure PyTorch implementation.
+
+     Uses 4 sparse experts with top-2 routing plus a shared expert that processes all tokens.
+     No external dependencies (megablocks removed).
+
+     Architecture matches main branch: norm → experts(in_dim → hidden → out_dim)
+     """
+
+     def __init__(self, config):
+         """Initialize MoE projector.
+
+         Args:
+             config: ASRConfig with encoder_dim, llm_dim, num_experts, num_experts_per_tok
+         """
+         super().__init__()
+
+         self.k = getattr(config, "projector_pool_stride", 4)
+         self.aux_coef = getattr(config, "router_aux_loss_coef", 0.01)
+
+         # Stability coefficients
+         self.router_z_loss_coef = getattr(
+             config, "router_z_loss_coef", 1e-4
+         )  # Prevents logit explosion
+         self.router_jitter_noise = getattr(
+             config, "router_jitter_noise", 0.01
+         )  # Prevents expert collapse
+
+         in_dim = config.encoder_dim * self.k
+         out_dim = config.llm_dim
+
+         # Expert hidden dim (default = output dim)
+         hidden_dim = getattr(config, "projector_hidden_dim", None) or out_dim
+
+         # Number of experts and top-k selection
+         self.num_experts = getattr(config, "num_experts", 4)
+         self.top_k = getattr(config, "num_experts_per_tok", 2)
+
+         # A. Normalize stacked input (like main branch SharedMoEBlock)
+         self.norm = LlamaRMSNorm(in_dim, eps=1e-6)
+
+         # B. Router (operates on stacked input)
+         self.router = nn.Linear(in_dim, self.num_experts, bias=False)
+
+         # C. Experts: simple 2-layer MLP (same as MLPAudioProjector)
+         self.experts = nn.ModuleList(
+             [SimpleAdapter(in_dim, hidden_dim, out_dim) for _ in range(self.num_experts)]
+         )
+
+         # D. Shared Expert (same architecture)
+         self.shared_expert = SimpleAdapter(in_dim, hidden_dim, out_dim)
+
+         # E. Initialize weights for stable training
+         self._init_weights()
+
+         self.last_aux_loss = torch.tensor(0.0)
+
+     def _init_weights(self):
+         """Initialize weights for stable training start."""
+         with torch.no_grad():
+             # Router: small weights -> uniform probability
+             nn.init.normal_(self.router.weight, mean=0.0, std=0.02)
+
+             # Experts: xavier for fc1, small for fc2 (output)
+             for expert in [self.shared_expert, *self.experts]:
+                 nn.init.xavier_uniform_(expert.fc1.weight)
+                 nn.init.normal_(expert.fc2.weight, mean=0.0, std=0.01)  # Small init
+
+     def get_output_length(self, input_length: int) -> int:
+         """Calculate output sequence length given input length (matches MLP projector)."""
+         return (input_length - self.k) // self.k + 1
+
+     def forward(self, x: torch.Tensor) -> torch.Tensor:
+         """Project audio features using shared + sparse MoE.
+
+         Args:
+             x: Audio encoder output of shape [batch, seq_len, encoder_dim]
+
+         Returns:
+             Projected features of shape [batch, out_len, llm_dim]
+         """
+         x = _frame_stack(x, self.k)
+         batch, out_len, _ = x.shape
+
+         # Normalize stacked input (like main branch SharedMoEBlock)
+         x = self.norm(x)
+         flat_x = x.view(-1, x.size(-1))  # [tokens, in_dim]
+
+         # Shared expert (compute first, creates output tensor)
+         output = self.shared_expert(flat_x)
+
+         # Sparse experts (in-place add to shared output)
+         self.last_aux_loss = self._forward_sparse(flat_x, output)
+
+         return output.view(batch, out_len, -1)
+
+     def _forward_sparse(self, x: torch.Tensor, output: torch.Tensor) -> torch.Tensor:
+         """Stability-hardened sparse expert dispatch (in-place add to output).
+
+         Args:
+             x: Flattened input of shape [tokens, dim]
+             output: Output tensor to add sparse expert results into (in-place)
+
+         Returns:
+             Auxiliary loss tensor
+         """
+         # A. Router Logic with Jitter
+         logits = self.router(x)
+
+         if self.training and self.router_jitter_noise > 0:
+             # Jitter: multiply by uniform noise (1-eps, 1+eps) to shake decision boundary
+             # Prevents router from getting stuck on one expert early in training
+             noise = torch.empty_like(logits).uniform_(
+                 1.0 - self.router_jitter_noise, 1.0 + self.router_jitter_noise
+             )
+             logits = logits * noise
+
+         # Force float32 for softmax (bf16/fp16 exponentials can overflow)
+         probs = torch.softmax(logits, dim=-1, dtype=torch.float32).type_as(x)
+
+         # B. Top-K Selection
+         top_k_weights, top_k_indices = torch.topk(probs, self.top_k, dim=-1)
+
+         # Normalize weights so they sum to 1.0
+         top_k_weights = top_k_weights / (top_k_weights.sum(dim=-1, keepdim=True) + 1e-6)
+
+         # C. Aux Loss + Z-Loss
+         aux_loss = torch.tensor(0.0, device=x.device)
+
+         if self.training:
+             # Load balancing loss (batch-size invariant)
+             prob_per_expert = probs.mean(0)  # [num_experts]
+             target = 1.0 / self.num_experts
+             balance_loss = (
+                 self.aux_coef * ((prob_per_expert - target) ** 2).mean() * self.num_experts
+             )
+
+             # Z-loss: penalty on large logits to prevent softmax saturation
+             z_loss = self.router_z_loss_coef * torch.logsumexp(logits, dim=-1).pow(2).mean()
+
+             aux_loss = balance_loss + z_loss
+
+         # D. Dispatch Loop (in-place add to output)
+         for i, expert in enumerate(self.experts):
+             # Create boolean mask for tokens that selected Expert 'i'
+             mask = top_k_indices == i
+
+             if mask.any():
+                 # token_idx = which tokens, k_idx = 1st or 2nd choice
+                 token_idx, k_idx = torch.where(mask)
+
+                 # Gather inputs and compute
+                 expert_input = x[token_idx]
+                 expert_output = expert(expert_input)
+
+                 # Apply routing weight
+                 weight = top_k_weights[token_idx, k_idx].unsqueeze(-1)
+                 weighted_output = (expert_output * weight).type_as(output)
+
+                 # Scatter back in-place (index_add_ is atomic and deterministic)
+                 output.index_add_(0, token_idx, weighted_output)
+
+         return aux_loss
+
+     def get_aux_loss(self) -> torch.Tensor:
+         """Return auxiliary load balancing loss."""
+         return self.last_aux_loss
+
+
+ # =============================================================================
+ # QFormer Projector (Granite-style)
+ # =============================================================================
+
+
+ class QFormerAudioProjector(nn.Module):
+     """
+     BLIP-2 QFormer projector with learnable queries.
+
+     Based on GraniteSpeechEncoderProjector - uses a QFormer model with learnable
+     query embeddings to compress and project audio encoder outputs. The audio
+     sequence is processed in windows and downsampled via cross-attention.
+     """
+
+     def __init__(self, config):
+         """Initialize QFormer projector.
+
+         Args:
+             config: ASRConfig with encoder_dim, llm_dim, qformer_* settings
+         """
+         super().__init__()
+
+         encoder_dim = config.encoder_dim
+         llm_dim = config.llm_dim
+
+         # Window and downsampling parameters (Granite defaults: window=15, downsample=5)
+         self.window_size = getattr(config, "qformer_window_size", 15)
+         self.downsample_rate = getattr(config, "downsample_rate", 5)
+         self.num_queries = self.window_size // self.downsample_rate
+
+         # QFormer hidden size (matches encoder for cross-attention)
+         qformer_hidden = getattr(config, "qformer_hidden_size", None) or encoder_dim
+         qformer_num_layers = getattr(config, "qformer_num_layers", 2)
+         qformer_num_heads = getattr(config, "qformer_num_heads", 16)
+         qformer_intermediate = getattr(config, "qformer_intermediate_size", None) or (
+             qformer_hidden * 4
+         )
+
+         # Learnable query embeddings (Granite uses std=1.0)
+         self.query = nn.Parameter(torch.zeros(1, self.num_queries, qformer_hidden))
+         self.query.data.normal_(mean=0.0, std=1.0)
+
+         # Optional projection if encoder dim != qformer hidden
+         if encoder_dim != qformer_hidden:
+             self.encoder_proj = nn.Linear(encoder_dim, qformer_hidden, bias=False)
+         else:
+             self.encoder_proj = None
+
+         # Configure QFormer to match Granite's exact config
+         qformer_config = Blip2QFormerConfig(
+             hidden_size=qformer_hidden,
+             num_hidden_layers=qformer_num_layers,
+             num_attention_heads=qformer_num_heads,
+             intermediate_size=qformer_intermediate,
+             encoder_hidden_size=qformer_hidden,
+             cross_attention_frequency=1,
+             # Granite-specific settings
+             hidden_act="gelu",
+             attention_probs_dropout_prob=0.1,
+             hidden_dropout_prob=0.1,
+             layer_norm_eps=1e-12,
+             initializer_range=0.02,
+         )
+         self.qformer = AutoModel.from_config(qformer_config)
+
+         # Final projection to LLM dimension (Granite uses bias=True)
+         self.linear = nn.Linear(qformer_hidden, llm_dim)
+
+     def get_output_length(self, input_length):
+         """Calculate output sequence length given input length.
+
+         Accepts either Python ints or torch tensors; uses integer ceiling
+         division so the same formula works for both (math.ceil rejects tensors).
+         """
+         nblocks = (input_length + self.window_size - 1) // self.window_size
+         return nblocks * self.num_queries
+
+     def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
+         """
+         Args:
+             hidden_states: [batch_size, seq_len, encoder_dim]
+
+         Returns:
+             projected: [batch_size, num_output_tokens, llm_dim]
+         """
+         batch_size, seq_len, dim = hidden_states.size()
+
+         # Ensure float dtype for QFormer
+         target_dtype = self.query.dtype
+         if hidden_states.dtype != target_dtype:
+             hidden_states = hidden_states.to(target_dtype)
+
+         # Optional encoder projection
+         if self.encoder_proj is not None:
+             hidden_states = self.encoder_proj(hidden_states)
+
+         # Compute number of windows and pad to fit
+         nblocks = math.ceil(seq_len / self.window_size)
+         pad = nblocks * self.window_size - seq_len
+         if pad > 0:
+             hidden_states = F.pad(hidden_states, (0, 0, 0, pad), "constant", 0)
+
+         # Reshape to process each window: [batch*nblocks, window_size, dim]
+         effective_batch = batch_size * nblocks
+         hidden_states = hidden_states.view(effective_batch, self.window_size, -1)
+
+         # Expand queries to match batch size
+         query_embeds = self.query.expand(effective_batch, -1, -1)
+
+         # QFormer cross-attention
+         query_output = self.qformer(
+             query_embeds=query_embeds,
+             encoder_hidden_states=hidden_states,
+             return_dict=True,
+         )
+
+         # Reshape back: [batch, nblocks * num_queries, hidden]
+         output_tokens = nblocks * self.num_queries
+         query_proj = query_output.last_hidden_state.view(batch_size, output_tokens, -1)
+
+         # Project to LLM dimension
+         return self.linear(query_proj)
+
+
+ # =============================================================================
+ # Projector Registry
+ # =============================================================================
+
+ PROJECTOR_CLASSES = {
+     "mlp": MLPAudioProjector,
+     "mosa": MOSAProjector,
+     "moe": MoEAudioProjector,
+     "qformer": QFormerAudioProjector,
+ }
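A sketch of how the registry might be consumed downstream. `DummyConfig` is a hypothetical stand-in for the real `ASRConfig`, which is not part of this commit; only the fields the MLP projector reads are filled in:

```python
import torch

class DummyConfig:
    # Hypothetical stand-in for ASRConfig; only the fields MLPAudioProjector reads.
    encoder_dim = 1280
    llm_dim = 2048
    projector_pool_stride = 4
    projector_hidden_dim = None

projector = PROJECTOR_CLASSES["mlp"](DummyConfig())
x = torch.randn(1, 100, 1280)  # [batch, frames, encoder_dim]
y = projector(x)
print(y.shape)  # torch.Size([1, 25, 2048]): (100 - 4) // 4 + 1 = 25
assert y.shape[1] == projector.get_output_length(100)
```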
tokenizer.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:33b674fb8444e2553eae8f1b261093371920a28ef75b5c18f4deb3f9217ed0ba
+ size 11422834
tokenizer_config.json ADDED
@@ -0,0 +1,18 @@
+ {
+   "add_prefix_space": false,
+   "backend": "tokenizers",
+   "bos_token": null,
+   "clean_up_tokenization_spaces": false,
+   "eos_token": "<|im_end|>",
+   "errors": "replace",
+   "extra_special_tokens": [
+     "<audio>"
+   ],
+   "is_local": false,
+   "local_files_only": false,
+   "model_max_length": 131072,
+   "pad_token": "<|endoftext|>",
+   "split_special_tokens": false,
+   "tokenizer_class": "Qwen2Tokenizer",
+   "unk_token": null
+ }
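The tokenizer is a stock Qwen2 tokenizer extended with an `<audio>` special token and `<|im_end|>` as EOS; a sketch of loading it, with the repo id again left as a placeholder:

```python
from transformers import AutoTokenizer

# "<repo-id>" is a placeholder for this checkpoint's Hub path.
tok = AutoTokenizer.from_pretrained("<repo-id>")
print(tok.eos_token)                         # <|im_end|>
print(tok.convert_tokens_to_ids("<audio>"))  # id of the audio placeholder token
```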