---
base_model: Samuael/asr-amharic-phoneme-based-38
tags:
- generated_from_trainer
datasets:
- alffa_amharic
model-index:
- name: asr-amharic-phoneme-based-38
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# asr-amharic-phoneme-based-38

This model is a fine-tuned version of [Samuael/asr-amharic-phoneme-based-38](https://huggingface.co/Samuael/asr-amharic-phoneme-based-38) on the alffa_amharic dataset.
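
A minimal inference sketch, assuming the checkpoint works with the standard `automatic-speech-recognition` pipeline (the audio path below is a placeholder):

```python
from transformers import pipeline

# Load the fine-tuned checkpoint through the generic ASR pipeline
# (assumes the model and processor are compatible with this task).
asr = pipeline(
    "automatic-speech-recognition",
    model="Samuael/asr-amharic-phoneme-based-38",
)

# Transcribe a local audio file (placeholder path).
result = asr("example_amharic_clip.wav")
print(result["text"])
```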

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 20
- mixed_precision_training: Native AMP
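
For reference, a minimal sketch of how these hyperparameters map onto `TrainingArguments` (the `output_dir` is an assumption; the Adam betas and epsilon listed above are the library defaults):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="asr-amharic-phoneme-based-38",  # assumed output directory
    learning_rate=3e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=100,
    num_train_epochs=20,
    fp16=True,  # Native AMP mixed-precision training
)
```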

### Training results

| Training Loss | Epoch | Step | Validation Loss | WER    | Phoneme CER | CER    |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-----------:|:------:|
| No log        | 11.76 | 200  | 1.0671          | 0.6497 | 0.1946      | 0.2828 |
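
The reported WER and CER can be reproduced with the `evaluate` library; a hedged sketch with placeholder transcriptions:

```python
import evaluate

wer_metric = evaluate.load("wer")
cer_metric = evaluate.load("cer")

# Placeholder predicted and reference transcriptions.
predictions = ["predicted transcription"]
references = ["reference transcription"]

print("WER:", wer_metric.compute(predictions=predictions, references=references))
print("CER:", cer_metric.compute(predictions=predictions, references=references))
```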


### Framework versions

- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2