# amity-diarization-v00
This model is a fine-tuned version of [pyannote/segmentation-3.0](https://huggingface.co/pyannote/segmentation-3.0) on the amityco/sample-voice-dataset dataset. It achieves the following results on the evaluation set:
- Loss: 0.3132
- Model Preparation Time: 0.0075
- DER (diarization error rate): 0.1175
- False Alarm: 0.0395
- Missed Detection: 0.0579
- Confusion: 0.0201
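As a quick sanity check (an illustrative snippet, not part of the original card), the reported DER is simply the sum of the three error components listed above:

```python
# DER decomposes into false alarm + missed detection + speaker confusion,
# using the evaluation-set values reported above.
false_alarm, missed_detection, confusion = 0.0395, 0.0579, 0.0201
der = false_alarm + missed_detection + confusion
print(f"DER = {der:.4f}")  # 0.1175, matching the reported value
```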
## Model description

More information needed
## Intended uses & limitations

More information needed
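The card does not include usage instructions. As a hedged sketch only (assuming this checkpoint can be loaded with `pyannote.audio` the same way as its base model, and using a placeholder access token), loading the fine-tuned segmentation model might look like:

```python
# Hedged sketch: assumes this checkpoint behaves like pyannote/segmentation-3.0
# and that you have a Hugging Face access token with the required permissions.
from pyannote.audio import Model

model = Model.from_pretrained(
    "aongwachi/amity-diarization-v00",
    use_auth_token="HF_TOKEN",  # placeholder, replace with your own token
)

# The loaded segmentation model could then be plugged into a diarization
# pipeline (e.g. pyannote.audio's SpeakerDiarization) in place of the base model.
```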
## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a configuration sketch follows the list):
- learning_rate: 8.5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- num_epochs: 20.0
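As a rough illustration only (the training script is not included in this card; `output_dir` and the use of the standard Hugging Face `Trainer` API are assumptions), the hyperparameters above correspond to a `TrainingArguments` configuration along these lines:

```python
from transformers import TrainingArguments

# Illustrative mapping of the reported hyperparameters; output_dir is a
# placeholder and any unlisted settings are library defaults, not taken from the card.
training_args = TrainingArguments(
    output_dir="amity-diarization-v00",
    learning_rate=8.5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    num_train_epochs=20.0,
)
```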
### Training results
| Training Loss | Epoch | Step | Validation Loss | Model Preparation Time | DER | False Alarm | Missed Detection | Confusion |
|:-------------:|:-----:|:----:|:---------------:|:----------------------:|:------:|:-----------:|:----------------:|:---------:|
| No log | 1.0 | 18 | 0.3317 | 0.0075 | 0.1263 | 0.0361 | 0.0699 | 0.0203 |
| 0.4825 | 2.0 | 36 | 0.3305 | 0.0075 | 0.1249 | 0.0366 | 0.0672 | 0.0211 |
| 0.475 | 3.0 | 54 | 0.3238 | 0.0075 | 0.1231 | 0.0369 | 0.0643 | 0.0218 |
| 0.4054 | 4.0 | 72 | 0.3232 | 0.0075 | 0.1213 | 0.0363 | 0.0631 | 0.0218 |
| 0.432 | 5.0 | 90 | 0.3246 | 0.0075 | 0.1206 | 0.0367 | 0.0616 | 0.0223 |
| 0.4053 | 6.0 | 108 | 0.3209 | 0.0075 | 0.1196 | 0.0376 | 0.0599 | 0.0222 |
| 0.4023 | 7.0 | 126 | 0.3193 | 0.0075 | 0.1196 | 0.0384 | 0.0589 | 0.0223 |
| 0.407 | 8.0 | 144 | 0.3195 | 0.0075 | 0.1197 | 0.0388 | 0.0585 | 0.0224 |
| 0.3969 | 9.0 | 162 | 0.3190 | 0.0075 | 0.1195 | 0.0388 | 0.0586 | 0.0220 |
| 0.3645 | 10.0 | 180 | 0.3184 | 0.0075 | 0.1197 | 0.0392 | 0.0583 | 0.0222 |
| 0.3645 | 11.0 | 198 | 0.3155 | 0.0075 | 0.1176 | 0.0397 | 0.0579 | 0.0200 |
| 0.3733 | 12.0 | 216 | 0.3223 | 0.0075 | 0.1195 | 0.0397 | 0.0578 | 0.0220 |
| 0.3842 | 13.0 | 234 | 0.3236 | 0.0075 | 0.1194 | 0.0397 | 0.0575 | 0.0222 |
| 0.377 | 14.0 | 252 | 0.3143 | 0.0075 | 0.1177 | 0.0396 | 0.0577 | 0.0204 |
| 0.3655 | 15.0 | 270 | 0.3159 | 0.0075 | 0.1176 | 0.0396 | 0.0577 | 0.0203 |
| 0.352 | 16.0 | 288 | 0.3162 | 0.0075 | 0.1176 | 0.0395 | 0.0577 | 0.0204 |
| 0.3753 | 17.0 | 306 | 0.3157 | 0.0075 | 0.1177 | 0.0396 | 0.0579 | 0.0201 |
| 0.3652 | 18.0 | 324 | 0.3169 | 0.0075 | 0.1194 | 0.0395 | 0.0579 | 0.0221 |
| 0.3736 | 19.0 | 342 | 0.3131 | 0.0075 | 0.1174 | 0.0394 | 0.0579 | 0.0200 |
| 0.3673 | 20.0 | 360 | 0.3132 | 0.0075 | 0.1175 | 0.0395 | 0.0579 | 0.0201 |
### Framework versions
- Transformers 4.51.2
- Pytorch 2.6.0+cu124
- Datasets 3.5.0
- Tokenizers 0.21.1