---
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- facebook/voxpopuli
metrics:
- wer
model-index:
- name: WhisperForSpokenNER-end2end
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: facebook/voxpopuli de+es+fr+nl
      type: facebook/voxpopuli
      config: de+es+fr+nl
      split: None
    metrics:
    - name: Wer
      type: wer
      value: 0.08582479210984335
---

# WhisperForSpokenNER-end2end

This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the German, Spanish, French, and Dutch (de+es+fr+nl) subsets of the facebook/voxpopuli dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2755
- Combined Wer: 0.1491
- F1 Score: 0.7163
- Label F1: 0.8200
- Wer: 0.0858

## Model description

WhisperForSpokenNER-end2end fine-tunes Whisper-small for end-to-end spoken named entity recognition: a single decoding pass produces both the transcript and entity annotations, which is why evaluation reports transcription quality (Wer, Combined Wer) alongside entity metrics (F1 Score, Label F1). The entity tag set and output format are not recorded in this card.

## Intended uses & limitations

The model is intended for joint transcription and named entity tagging of speech in German, Spanish, French, and Dutch. Because it was fine-tuned on VoxPopuli (European Parliament recordings), behaviour on other languages and domains is untested, and the exact output tagging scheme is not documented here.
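
As a starting point, here is a minimal, hedged inference sketch. The Hub repo id is a placeholder, and the sketch assumes the checkpoint loads through the standard `transformers` ASR pipeline; a custom `WhisperForSpokenNER` head may instead require the repository's own loading code.

```python
# Minimal inference sketch. The repo id below is a placeholder, not the
# published path, and the entity-tagged output format is an assumption.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="<user>/WhisperForSpokenNER-end2end",  # placeholder repo id
)

# Whisper expects 16 kHz mono audio; the pipeline resamples input files.
result = asr("sample.wav")
print(result["text"])  # transcript, presumably with inline entity annotations
```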

## Training and evaluation data

Training and evaluation used the German, Spanish, French, and Dutch subsets of [facebook/voxpopuli](https://huggingface.co/datasets/facebook/voxpopuli). The evaluation split is not recorded in the metadata above (`split: None`), nor is the entity-labelling procedure used to create the NER targets.
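
For reference, a hedged sketch of loading the four VoxPopuli subsets named in the metadata; the preprocessing actually used (entity labelling, language mixing) is undocumented, so only raw dataset access is shown:

```python
# Hedged sketch: load the four VoxPopuli language subsets from the card metadata.
# The entity-annotation step used for training is undocumented and omitted here.
from datasets import load_dataset, interleave_datasets

langs = ["de", "es", "fr", "nl"]
subsets = [load_dataset("facebook/voxpopuli", lang, split="train") for lang in langs]
train = interleave_datasets(subsets)  # one possible way to mix the languages
print(train)
```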

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a hedged `Seq2SeqTrainingArguments` sketch follows the list):
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
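
Expressed as `transformers` `Seq2SeqTrainingArguments`, this is a hedged reconstruction: only the values listed above come from the card, while the output directory and any unlisted options are assumptions.

```python
# Hedged reconstruction of the hyperparameters above. Only the listed values
# are taken from the card; output_dir and anything unlisted are assumptions.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="WhisperForSpokenNER-end2end",  # assumption
    learning_rate=1e-4,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    gradient_accumulation_steps=2,  # 64 x 2 = 128 total train batch size
    lr_scheduler_type="cosine",
    warmup_steps=500,
    max_steps=5000,
    fp16=True,  # "Native AMP" mixed precision
)
```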

### Training results

| Training Loss | Epoch | Step | Validation Loss | Combined Wer | F1 Score | Label F1 | Wer    |
|:-------------:|:-----:|:----:|:---------------:|:------------:|:--------:|:--------:|:------:|
| 0.3252        | 0.1   | 500  | 0.3396          | 0.1918       | 0.6148   | 0.7578   | 0.1193 |
| 0.2729        | 0.2   | 1000 | 0.3158          | 0.1730       | 0.6449   | 0.7907   | 0.1058 |
| 0.2369        | 0.3   | 1500 | 0.2971          | 0.1736       | 0.6917   | 0.8083   | 0.1067 |
| 0.1967        | 0.4   | 2000 | 0.2823          | 0.1634       | 0.6915   | 0.8095   | 0.0999 |
| 0.1623        | 0.5   | 2500 | 0.2804          | 0.1693       | 0.7088   | 0.8249   | 0.1052 |
| 0.1146        | 1.02  | 3000 | 0.2820          | 0.1593       | 0.7012   | 0.8106   | 0.0951 |
| 0.0938        | 1.12  | 3500 | 0.2792          | 0.1500       | 0.7205   | 0.8238   | 0.0875 |
| 0.1001        | 1.22  | 4000 | 0.2750          | 0.1549       | 0.7072   | 0.8061   | 0.0928 |
| 0.0848        | 1.32  | 4500 | 0.2741          | 0.1471       | 0.7243   | 0.8318   | 0.0860 |
| 0.0649        | 1.42  | 5000 | 0.2745          | 0.1468       | 0.7304   | 0.8350   | 0.0858 |
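
The `Wer` column can be reproduced in principle with the `evaluate` library's `wer` metric; the sketch below uses made-up strings, not the actual evaluation code or data:

```python
# Hedged sketch of a WER computation like the one reported above; the strings
# are illustrative only, not samples from the actual evaluation set.
import evaluate

wer_metric = evaluate.load("wer")
wer = wer_metric.compute(
    predictions=["the transcript produced by the model"],
    references=["the reference transcript"],
)
print(f"WER: {wer:.4f}")
```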


### Framework versions

- Transformers 4.37.0.dev0
- Pytorch 2.1.0
- Datasets 2.14.6
- Tokenizers 0.14.1