---
language:
- ja
license: other
tags:
- whisper-event
- generated_from_trainer
datasets:
- Elite35P-Server/EliteVoiceProject
metrics:
- wer
base_model: openai/whisper-base
model-index:
- name: Whisper Base Japanese Elite
  results:
  - task:
      type: automatic-speech-recognition
      name: Automatic Speech Recognition
    dataset:
      name: Elite35P-Server/EliteVoiceProject twitter
      type: Elite35P-Server/EliteVoiceProject
      config: twitter
      split: test
      args: twitter
    metrics:
    - type: wer
      value: 17.073170731707318
      name: Wer
---

# Whisper Base Japanese Elite

This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the `twitter` configuration of the [Elite35P-Server/EliteVoiceProject](https://huggingface.co/datasets/Elite35P-Server/EliteVoiceProject) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4385
- WER: 17.0732%
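
The sketch below shows one way to run inference with the 🤗 Transformers `pipeline` API. It assumes a recent `transformers` release; the repository id and the audio file name are placeholders, not confirmed values:

```python
# Minimal inference sketch; the model id below is an assumed placeholder --
# replace it with this model's actual Hugging Face repository id.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Elite35P-Server/whisper-base-japanese-elite",  # assumed repo id
)

# Force Japanese transcription instead of automatic language detection.
result = asr(
    "sample.wav",  # placeholder audio file
    generate_kwargs={"language": "japanese", "task": "transcribe"},
)
print(result["text"])
```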

## Model description

This is a Japanese automatic-speech-recognition model obtained by fine-tuning [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the Elite35P-Server/EliteVoiceProject dataset (see the `whisper-event` tag above). It transcribes Japanese speech to text.

## Intended uses & limitations

The model is intended for transcribing Japanese speech. It was trained and evaluated solely on the `twitter` configuration of Elite35P-Server/EliteVoiceProject, so performance on other domains (long-form audio, different recording conditions) is untested. The training results below also show the training loss collapsing to 0.0 while validation loss keeps rising, which suggests overfitting to a small training set; treat the reported 17.07% WER as specific to this evaluation split.

## Training and evaluation data

Training and evaluation used the `twitter` configuration of [Elite35P-Server/EliteVoiceProject](https://huggingface.co/datasets/Elite35P-Server/EliteVoiceProject); the reported metrics come from its `test` split (see the model-index metadata above).

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a sketch of the equivalent `Seq2SeqTrainingArguments` follows the list):
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 200
- training_steps: 10000
- mixed_precision_training: Native AMP
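
As a reproducibility aid, here is a minimal sketch of how these values map onto `transformers.Seq2SeqTrainingArguments`; `output_dir` and anything not listed above are assumptions:

```python
# Sketch only: the listed hyperparameters expressed as Trainer arguments.
# Values not stated in this card (e.g. output_dir) are assumptions.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-base-japanese-elite",  # assumed
    learning_rate=1e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="constant_with_warmup",
    warmup_steps=200,
    max_steps=10000,
    fp16=True,  # "Native AMP" mixed-precision training
)
# Adam with betas=(0.9, 0.999) and epsilon=1e-08 matches the Trainer's
# default AdamW optimizer, so no extra optimizer arguments are needed.
```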

### Training results

| Training Loss | Epoch  | Step  | Validation Loss | WER (%) |
|:-------------:|:------:|:-----:|:---------------:|:-------:|
| 0.0002        | 111.0  | 1000  | 0.2155          | 9.7561  |
| 0.0001        | 222.0  | 2000  | 0.2448          | 12.1951 |
| 0.0           | 333.0  | 3000  | 0.2674          | 13.4146 |
| 0.0           | 444.0  | 4000  | 0.2943          | 15.8537 |
| 0.0           | 555.0  | 5000  | 0.3182          | 17.0732 |
| 0.0           | 666.0  | 6000  | 0.3501          | 18.9024 |
| 0.0           | 777.0  | 7000  | 0.3732          | 16.4634 |
| 0.0           | 888.0  | 8000  | 0.4025          | 17.0732 |
| 0.0           | 999.0  | 9000  | 0.4178          | 20.1220 |
| 0.0           | 1111.0 | 10000 | 0.4385          | 17.0732 |
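
The WER values above are word error rates reported as percentages. A minimal sketch of computing WER with the `evaluate` library follows; the strings are placeholders, and since WER splits on whitespace, Japanese text is normally segmented into tokens first:

```python
# Minimal WER sketch using the `evaluate` library; strings are placeholders.
import evaluate

wer_metric = evaluate.load("wer")
predictions = ["こんにちは 世界"]      # model output, pre-segmented
references = ["こんにちは 世界 です"]  # reference transcript, pre-segmented
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.2f}%")  # 33.33% for this toy pair (one deletion in three words)
```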


### Framework versions

- Transformers 4.26.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.8.1.dev0
- Tokenizers 0.13.2