---
license: mit
base_model: facebook/w2v-bert-2.0
tags:
- automatic-speech-recognition
- DewiBrynJones/banc-trawsgrifiadau-bangor-normalized
- generated_from_trainer
metrics:
- wer
model-index:
- name: w2v2-bert-ft-btb-cy
  results: []
---


# w2v2-bert-ft-btb-cy

This model is a fine-tuned version of [facebook/w2v-bert-2.0](https://huggingface.co/facebook/w2v-bert-2.0) on the DewiBrynJones/banc-trawsgrifiadau-bangor-normalized dataset (default configuration).
It achieves the following results on the evaluation set:
- Loss: 2.9177
- WER: 1.0
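
A minimal inference sketch is shown below. It assumes the checkpoint is published on the Hub under the hypothetical repo id `DewiBrynJones/w2v2-bert-ft-btb-cy` and that the input is 16 kHz mono audio; adjust both to your setup.

```python
# Minimal inference sketch (repo id and audio path are assumptions).
import torch
import librosa
from transformers import AutoProcessor, Wav2Vec2BertForCTC

model_id = "DewiBrynJones/w2v2-bert-ft-btb-cy"  # hypothetical repo id
processor = AutoProcessor.from_pretrained(model_id)
model = Wav2Vec2BertForCTC.from_pretrained(model_id)

# w2v-bert-2.0 expects 16 kHz mono audio.
speech, _ = librosa.load("sample.wav", sr=16_000)

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Greedy CTC decoding.
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```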

## Model description

More information needed

## Intended uses & limitations

More information needed. Note that the final evaluation WER of 1.0 (100% word error rate) indicates this checkpoint did not learn to produce correct transcriptions during this run, so it should not be used for transcription as-is.

## Training and evaluation data

More information needed
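
For reference, the dataset named in this card can be loaded with the `datasets` library; the available splits and columns are not documented here, so inspect the returned `DatasetDict` before use:

```python
from datasets import load_dataset

# Default config of the dataset referenced in this card.
ds = load_dataset("DewiBrynJones/banc-trawsgrifiadau-bangor-normalized")
print(ds)  # inspect splits and columns before use
```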

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10.0
- mixed_precision_training: Native AMP
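
As a hedged sketch, the values above map onto `transformers.TrainingArguments` roughly as follows; this is a reconstruction from the list, not the author's actual training script:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="w2v2-bert-ft-btb-cy",
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,  # 16 x 2 = total train batch size 32
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=10.0,
    fp16=True,  # "Native AMP" mixed precision
)
```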

### Training results

| Training Loss | Epoch  | Step | Validation Loss | WER |
|:-------------:|:------:|:----:|:---------------:|:---:|
| No log        | 0.4243 | 300  | 5.9903          | 1.0 |
| 7.061         | 0.8487 | 600  | 3.0451          | 1.0 |
| 7.061         | 1.2730 | 900  | 2.9642          | 1.0 |
| 3.0081        | 1.6973 | 1200 | 2.9564          | 1.0 |
| 2.9733        | 2.1216 | 1500 | 2.9480          | 1.0 |
| 2.9733        | 2.5460 | 1800 | 2.9451          | 1.0 |
| 2.9454        | 2.9703 | 2100 | 2.9147          | 1.0 |
| 2.9454        | 3.3946 | 2400 | 2.9019          | 1.0 |
| 2.9064        | 3.8190 | 2700 | 2.8850          | 1.0 |
| 2.9048        | 4.2433 | 3000 | 2.8812          | 1.0 |
| 2.9048        | 4.6676 | 3300 | 2.8844          | 1.0 |
| 2.8965        | 5.0919 | 3600 | 2.9125          | 1.0 |
| 2.8965        | 5.5163 | 3900 | 2.8981          | 1.0 |
| 2.9261        | 5.9406 | 4200 | 2.9053          | 1.0 |
| 2.9273        | 6.3649 | 4500 | 2.9167          | 1.0 |
| 2.9273        | 6.7893 | 4800 | 2.9113          | 1.0 |
| 2.9302        | 7.2136 | 5100 | 2.9133          | 1.0 |
| 2.9302        | 7.6379 | 5400 | 2.9213          | 1.0 |
| 2.9397        | 8.0622 | 5700 | 2.9251          | 1.0 |
| 2.937         | 8.4866 | 6000 | 2.9210          | 1.0 |
| 2.937         | 8.9109 | 6300 | 2.9215          | 1.0 |
| 2.9406        | 9.3352 | 6600 | 2.9171          | 1.0 |
| 2.9406        | 9.7595 | 6900 | 2.9177          | 1.0 |
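
The WER column in the table is typically produced by a `compute_metrics` hook passed to the Trainer, along the lines of the sketch below (an assumption about this run; the processor path is a placeholder for the processor saved with the checkpoint):

```python
import numpy as np
import evaluate
from transformers import AutoProcessor

processor = AutoProcessor.from_pretrained("w2v2-bert-ft-btb-cy")  # placeholder path
wer_metric = evaluate.load("wer")

def compute_metrics(pred):
    # Greedy CTC decoding of the model's logits.
    pred_ids = np.argmax(pred.predictions, axis=-1)
    # Replace label padding (-100) with the pad token so labels decode cleanly.
    pred.label_ids[pred.label_ids == -100] = processor.tokenizer.pad_token_id
    pred_str = processor.batch_decode(pred_ids)
    label_str = processor.batch_decode(pred.label_ids, group_tokens=False)
    return {"wer": wer_metric.compute(predictions=pred_str, references=label_str)}
```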


### Framework versions

- Transformers 4.40.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1