m3hrdadfi committed
Commit
acdefb8
1 Parent(s): 0474ebb

Initial model

README.md ADDED
@@ -0,0 +1,182 @@
---
language: ka
datasets:
- common_voice
tags:
- audio
- automatic-speech-recognition
- speech
- xlsr-fine-tuning-week
license: apache-2.0
widget:
- label: Common Voice sample 566
  src: https://huggingface.co/m3hrdadfi/wav2vec2-large-xlsr-georgian/resolve/main/sample566.flac
- label: Common Voice sample 95
  src: https://huggingface.co/m3hrdadfi/wav2vec2-large-xlsr-georgian/resolve/main/sample95.flac
model-index:
- name: XLSR Wav2Vec2 Georgian by Mehrdad Farahani
  results:
  - task:
      name: Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Common Voice ka
      type: common_voice
      args: ka
    metrics:
    - name: Test WER
      type: wer
      value: 54.00
---

# Wav2Vec2-Large-XLSR-53 Georgian

Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Georgian using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset. When using this model, make sure that your speech input is sampled at 16 kHz.

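If your recordings are at a different rate, resample them before inference. A minimal sketch, using the same libraries as the snippets below (the file name is a placeholder):

```python
import librosa
import torchaudio

# hypothetical input file; replace with your own recording
speech_array, sampling_rate = torchaudio.load("my_recording.wav")
if sampling_rate != 16_000:
    speech_array = librosa.resample(speech_array.squeeze().numpy(), orig_sr=sampling_rate, target_sr=16_000)
```
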
## Usage
The model can be used directly (without a language model) as follows:

```bash
pip install git+https://github.com/huggingface/datasets.git
pip install git+https://github.com/huggingface/transformers.git
pip install torchaudio
pip install librosa
pip install jiwer
```

```python
import re

import librosa
import numpy as np
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

# Load the Common Voice Georgian test split
dataset = load_dataset("common_voice", "ka", split="test")
print(dataset)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
processor = Wav2Vec2Processor.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-georgian")
model = Wav2Vec2ForCTC.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-georgian").to(device)

# Preprocessing: strip punctuation and special characters from the transcripts.
# Each character is escaped so that "-", "?" etc. are treated literally inside
# the regex character class.
chars_to_ignore = [
    ",", "?", ".", "!", "-", ";", ":", '"', "%", "'", "�",
    "#", "«", "»", "(", ")", "؛", "“", "”", "„", "‘", "–", "…", "_",
]
chars_to_ignore_regex = f"[{''.join(re.escape(c) for c in chars_to_ignore)}]"

def remove_special_characters(text, chars_to_ignore):
    text = re.sub(chars_to_ignore, "", text).lower() + " "
    return text

def normalizer(batch, chars_to_ignore):
    batch["sentence"] = remove_special_characters(batch["sentence"], chars_to_ignore)
    return batch

# We need to read the audio files as arrays, resampled to the 16 kHz the model expects
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    speech_array = speech_array.squeeze().numpy()
    batch["speech"] = librosa.resample(np.asarray(speech_array), orig_sr=sampling_rate, target_sr=16_000)
    return batch

def predict(batch):
    features = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)

    input_values = features.input_values.to(device)
    attention_mask = features.attention_mask.to(device)

    # Greedy CTC decoding: pick the most likely token at each frame,
    # then collapse repeats and blanks in batch_decode
    with torch.no_grad():
        logits = model(input_values, attention_mask=attention_mask).logits

    pred_ids = torch.argmax(logits, dim=-1)
    batch["predicted"] = processor.batch_decode(pred_ids)[0]
    return batch

dataset = dataset.map(normalizer, fn_kwargs={"chars_to_ignore": chars_to_ignore_regex})
dataset = dataset.map(speech_file_to_array_fn, remove_columns=list(set(dataset.column_names) - set(["sentence", "path"])))
result = dataset.map(predict)
```

## Prediction

```python
# Inspect 20 randomly chosen reference/prediction pairs
sample_indices = np.random.randint(0, len(result), 20).tolist()
for i in sample_indices:
    reference, predicted = result["sentence"][i], result["predicted"][i]
    print("reference:", reference)
    print("predicted:", predicted)
    print("---")
```

```text
reference: ადმინისტრაციული ცენტრი ქალაქი იმიშლი
predicted: ადმინისტრაციული ცენტრი ქალაქი იმიშლი
---
reference: დაიბადა ადვოკატის ოჯახში
predicted: აიბადა ადმოკატის ოჯახში
---
reference: აღსანიშნავია რომ სიმღერა წარმოადგენს პოლ მაკკარტნისა და ჯორჯ ჰარისონის იშვიათ ვოკალურ დუეტს
predicted: აღსენიშნავიარო სიმღე რაწარმოადგემს ბოლ მაკარდნის და ჯორჩხარისონის იშვიად ვოკალურ დუეთს
---
reference: იკრძალებოდა წირვალოცვა ქართულ ენაზე
predicted: იკრძალებოდა წირვა ლოცვა ქართულ ენაზე
---
reference: აღმართულია ვალესა და ბერნის კანტონების საზღვარზე
predicted: აღმართულია ვალესა და ბერნის კანთონების საზღვარზე
---
reference: აქ იგი მიიწვიეს სამხატვრო აკადემიაში სადაც სიცოცხლის ბოლომდე ეწეოდა პედაგოგიურ მოღვაწეობას
predicted: აქ იგი მიისწრვიეს სამხატრო აკადემი აშისა და ციცაცხლის ბოლომდე ეწყებობ და პედაგუდივირ მოყვაწევებას
---
reference: კლარისა თანხმდება შემოთავაზებაზე და ლექტერის დახმარებით სერიული მკვლელის კვალს დაადგება
predicted: კლარის თან ხვდება შემუთავაზე ბაზე და ლექტერის დახმარებიც სერიური მკვლელის კველს დაადგებაა
---
reference: იბრძოდა ტყვეებით ვაჭრობის წინააღმდეგ
predicted: დიბრძოტო ტყვეებით ვაჭრობის წინააღდეგ
---
reference: სათავსს აღმოსავლეთით და დასავლეთით თითო სარკმელი აქვს
predicted: სათავს აღმოსაველეთი და დასავლეთ მთიდო სარკმელი აქვს
---
reference: იგი მდებარეობს ქალაქის ჩრდილოაღმოსავლეთ ნაწილში
predicted: იგი მდებარეობს ქალაქის ჩრდილო აღმოსავლეთ ნაწილში
---
```

## Evaluation

```python
wer = load_metric("wer")

print("WER: {:.2f}".format(100 * wer.compute(predictions=result["predicted"], references=result["sentence"])))
```

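The `wer` metric in `datasets` relies on the `jiwer` package installed above, so you can also compute the score with `jiwer` directly; a minimal sketch, assuming `result` from the usage section:

```python
import jiwer

# same word error rate, without going through datasets' metric wrapper
wer_score = jiwer.wer(list(result["sentence"]), list(result["predicted"]))
print("WER: {:.2f}".format(100 * wer_score))
```
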
**Test Result**:
- WER: 54.00%


## Training & Report
The Common Voice `train` and `validation` splits were used for training.

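A minimal sketch of loading that training data (not the exact training script, which is linked below; `train_results.json` in this commit lists 1,585 training samples):

```python
from datasets import load_dataset

# combine the train and validation splits into one training set
train_data = load_dataset("common_voice", "ka", split="train+validation")
print(train_data)
```
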
You can see the training report (losses, metrics, and system stats) [here](https://wandb.ai/m3hrdadfi/finetuned_wav2vec_xlsr_georgian/reports/Fine-Tuning-for-Wav2Vec2-Large-XLSR-53-Georgian--Vmlldzo1NTg5MDQ?accessToken=rsmd0p83iln13yq23b9kzj8bim6nco21w8cqn2tb19v51okakqk92c71h6hbxmfj).

The script used for training can be found [here](https://colab.research.google.com/github/m3hrdadfi/notebooks/blob/main/Fine_Tune_XLSR_Wav2Vec2_on_Georgian_ASR_with_%F0%9F%A4%97_Transformers_ipynb.ipynb).
all_results.json ADDED
@@ -0,0 +1,24 @@
{
  "epoch": 30.0,
  "eval_loss": 0.4455166161060333,
  "eval_mem_cpu_alloc_delta": 52916700,
  "eval_mem_cpu_peaked_delta": 92345080,
  "eval_mem_gpu_alloc_delta": 0,
  "eval_mem_gpu_peaked_delta": 5249111040,
  "eval_runtime": 81.4513,
  "eval_samples": 654,
  "eval_samples_per_second": 8.029,
  "eval_wer": 0.5288702928870292,
  "init_mem_cpu_alloc_delta": 9478038,
  "init_mem_cpu_peaked_delta": 18306,
  "init_mem_gpu_alloc_delta": 1261911040,
  "init_mem_gpu_peaked_delta": 0,
  "total_flos": 8.556740517881789e+18,
  "train_mem_cpu_alloc_delta": 12260352,
  "train_mem_cpu_peaked_delta": 186508822,
  "train_mem_gpu_alloc_delta": 3794085376,
  "train_mem_gpu_peaked_delta": 6038033408,
  "train_runtime": 8781.3793,
  "train_samples": 1585,
  "train_samples_per_second": 0.109
}
config.json ADDED
@@ -0,0 +1,76 @@
{
  "_name_or_path": "facebook/wav2vec2-large-xlsr-53",
  "activation_dropout": 0.0,
  "apply_spec_augment": true,
  "architectures": [
    "Wav2Vec2ForCTC"
  ],
  "attention_dropout": 0.1,
  "bos_token_id": 1,
  "conv_bias": true,
  "conv_dim": [512, 512, 512, 512, 512, 512, 512],
  "conv_kernel": [10, 3, 3, 3, 3, 2, 2],
  "conv_stride": [5, 2, 2, 2, 2, 2, 2],
  "ctc_loss_reduction": "mean",
  "ctc_zero_infinity": true,
  "do_stable_layer_norm": true,
  "eos_token_id": 2,
  "feat_extract_activation": "gelu",
  "feat_extract_dropout": 0.0,
  "feat_extract_norm": "layer",
  "feat_proj_dropout": 0.0,
  "final_dropout": 0.0,
  "gradient_checkpointing": true,
  "hidden_act": "gelu",
  "hidden_dropout": 0.1,
  "hidden_size": 1024,
  "initializer_range": 0.02,
  "intermediate_size": 4096,
  "layer_norm_eps": 1e-05,
  "layerdrop": 0.1,
  "mask_channel_length": 10,
  "mask_channel_min_space": 1,
  "mask_channel_other": 0.0,
  "mask_channel_prob": 0.0,
  "mask_channel_selection": "static",
  "mask_feature_length": 10,
  "mask_feature_prob": 0.0,
  "mask_time_length": 10,
  "mask_time_min_space": 1,
  "mask_time_other": 0.0,
  "mask_time_prob": 0.05,
  "mask_time_selection": "static",
  "model_type": "wav2vec2",
  "num_attention_heads": 16,
  "num_conv_pos_embedding_groups": 16,
  "num_conv_pos_embeddings": 128,
  "num_feat_extract_layers": 7,
  "num_hidden_layers": 24,
  "pad_token_id": 0,
  "transformers_version": "4.5.0.dev0",
  "vocab_size": 38
}
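The key architecture fields above can be read back programmatically; a minimal sketch:

```python
from transformers import Wav2Vec2Config

config = Wav2Vec2Config.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-georgian")
# values from config.json: 1024-dim hidden states, 24 transformer layers,
# and a 38-token character vocabulary
print(config.hidden_size, config.num_hidden_layers, config.vocab_size)
```
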
eval_results.json ADDED
@@ -0,0 +1,12 @@
{
  "epoch": 30.0,
  "eval_loss": 0.4455166161060333,
  "eval_mem_cpu_alloc_delta": 52916700,
  "eval_mem_cpu_peaked_delta": 92345080,
  "eval_mem_gpu_alloc_delta": 0,
  "eval_mem_gpu_peaked_delta": 5249111040,
  "eval_runtime": 81.4513,
  "eval_samples": 654,
  "eval_samples_per_second": 8.029,
  "eval_wer": 0.5288702928870292
}
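Note that `eval_wer` here is the trainer's metric on its 654-sample evaluation set (≈52.89%), while the model card reports 54.00% from the standalone evaluation script above. A minimal sketch of reading it back:

```python
import json

with open("eval_results.json") as f:
    eval_results = json.load(f)
print("{:.2f}%".format(100 * eval_results["eval_wer"]))  # 52.89%
```
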
predictions.csv ADDED
The diff for this file is too large to render.
preprocessor_config.json ADDED
@@ -0,0 +1,8 @@
{
  "do_normalize": true,
  "feature_size": 1,
  "padding_side": "right",
  "padding_value": 0.0,
  "return_attention_mask": true,
  "sampling_rate": 16000
}
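These settings are what `Wav2Vec2Processor.from_pretrained` picks up for the feature extractor; a minimal sketch of inspecting them:

```python
from transformers import Wav2Vec2FeatureExtractor

feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("m3hrdadfi/wav2vec2-large-xlsr-georgian")
# per preprocessor_config.json: 16 kHz input, attention masks returned
print(feature_extractor.sampling_rate, feature_extractor.return_attention_mask)
```
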
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:7c0938b613c1c3fe1abd582f27dfec45efeef27402c30a8bc0de2408aef51c21
size 1262089623
result.bin ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1c42bc8fa4f5eca7ff0ab10d3692e0b33144189969b2b23f107e68c3f4e47803
size 3183
sample566.flac ADDED
Binary file (64.4 kB).
sample95.flac ADDED
Binary file (75 kB).
special_tokens_map.json ADDED
@@ -0,0 +1 @@
{"bos_token": "<s>", "eos_token": "</s>", "unk_token": "<unk>", "pad_token": "<pad>"}
tokenizer_config.json ADDED
@@ -0,0 +1 @@
{"unk_token": "<unk>", "bos_token": "<s>", "eos_token": "</s>", "pad_token": "<pad>", "do_lower_case": false, "word_delimiter_token": "|"}
train_results.json ADDED
@@ -0,0 +1,15 @@
{
  "epoch": 30.0,
  "init_mem_cpu_alloc_delta": 9478038,
  "init_mem_cpu_peaked_delta": 18306,
  "init_mem_gpu_alloc_delta": 1261911040,
  "init_mem_gpu_peaked_delta": 0,
  "total_flos": 8.556740517881789e+18,
  "train_mem_cpu_alloc_delta": 12260352,
  "train_mem_cpu_peaked_delta": 186508822,
  "train_mem_gpu_alloc_delta": 3794085376,
  "train_mem_gpu_peaked_delta": 6038033408,
  "train_runtime": 8781.3793,
  "train_samples": 1585,
  "train_samples_per_second": 0.109
}
trainer_state.json ADDED
@@ -0,0 +1,297 @@
{
  "best_metric": null,
  "best_model_checkpoint": null,
  "epoch": 30.0,
  "global_step": 960,
  "is_hyper_param_search": false,
  "is_local_process_zero": true,
  "is_world_process_zero": true,
  "log_history": [
    {"epoch": 1.56, "learning_rate": 7.5e-05, "loss": 13.0978, "step": 50},
    {"epoch": 1.56, "eval_loss": 13.780136108398438, "eval_runtime": 82.8799, "eval_samples_per_second": 7.891, "eval_wer": 1.0, "step": 50},
    {"epoch": 3.12, "learning_rate": 0.00015, "loss": 7.3093, "step": 100},
    {"epoch": 3.12, "eval_loss": 3.198237419128418, "eval_runtime": 81.5893, "eval_samples_per_second": 8.016, "eval_wer": 1.0, "step": 100},
    {"epoch": 4.69, "learning_rate": 0.000225, "loss": 3.0745, "step": 150},
    {"epoch": 4.69, "eval_loss": 3.1082892417907715, "eval_runtime": 82.4037, "eval_samples_per_second": 7.937, "eval_wer": 1.0, "step": 150},
    {"epoch": 6.25, "learning_rate": 0.0003, "loss": 3.0551, "step": 200},
    {"epoch": 6.25, "eval_loss": 3.0994772911071777, "eval_runtime": 82.7226, "eval_samples_per_second": 7.906, "eval_wer": 1.0, "step": 200},
    {"epoch": 7.81, "learning_rate": 0.00028026315789473683, "loss": 3.0632, "step": 250},
    {"epoch": 7.81, "eval_loss": 3.0916755199432373, "eval_runtime": 83.5323, "eval_samples_per_second": 7.829, "eval_wer": 1.0, "step": 250},
    {"epoch": 9.38, "learning_rate": 0.0002605263157894737, "loss": 3.0391, "step": 300},
    {"epoch": 9.38, "eval_loss": 3.0707435607910156, "eval_runtime": 82.7328, "eval_samples_per_second": 7.905, "eval_wer": 1.0, "step": 300},
    {"epoch": 10.94, "learning_rate": 0.00024078947368421052, "loss": 3.0321, "step": 350},
    {"epoch": 10.94, "eval_loss": 3.0443670749664307, "eval_runtime": 84.1437, "eval_samples_per_second": 7.772, "eval_wer": 1.0, "step": 350},
    {"epoch": 12.5, "learning_rate": 0.00022105263157894733, "loss": 3.0069, "step": 400},
    {"epoch": 12.5, "eval_loss": 2.998474359512329, "eval_runtime": 83.9178, "eval_samples_per_second": 7.793, "eval_wer": 1.0, "step": 400},
    {"epoch": 14.06, "learning_rate": 0.0002013157894736842, "loss": 2.9623, "step": 450},
    {"epoch": 14.06, "eval_loss": 2.866849184036255, "eval_runtime": 82.5906, "eval_samples_per_second": 7.919, "eval_wer": 1.0, "step": 450},
    {"epoch": 15.62, "learning_rate": 0.00018157894736842105, "loss": 2.4771, "step": 500},
    {"epoch": 15.62, "eval_loss": 1.5367902517318726, "eval_runtime": 85.6456, "eval_samples_per_second": 7.636, "eval_wer": 0.9838912133891213, "step": 500},
    {"epoch": 17.19, "learning_rate": 0.00016184210526315788, "loss": 1.0561, "step": 550},
    {"epoch": 17.19, "eval_loss": 0.6924143433570862, "eval_runtime": 85.1658, "eval_samples_per_second": 7.679, "eval_wer": 0.7548117154811715, "step": 550},
    {"epoch": 18.75, "learning_rate": 0.0001421052631578947, "loss": 0.5288, "step": 600},
    {"epoch": 18.75, "eval_loss": 0.5334728956222534, "eval_runtime": 83.737, "eval_samples_per_second": 7.81, "eval_wer": 0.6569037656903766, "step": 600},
    {"epoch": 20.31, "learning_rate": 0.00012236842105263157, "loss": 0.3581, "step": 650},
    {"epoch": 20.31, "eval_loss": 0.48591092228889465, "eval_runtime": 86.2479, "eval_samples_per_second": 7.583, "eval_wer": 0.605857740585774, "step": 650},
    {"epoch": 21.88, "learning_rate": 0.00010263157894736841, "loss": 0.2638, "step": 700},
    {"epoch": 21.88, "eval_loss": 0.4631027579307556, "eval_runtime": 84.0825, "eval_samples_per_second": 7.778, "eval_wer": 0.5648535564853556, "step": 700},
    {"epoch": 23.44, "learning_rate": 8.289473684210526e-05, "loss": 0.2284, "step": 750},
    {"epoch": 23.44, "eval_loss": 0.4597685933113098, "eval_runtime": 86.122, "eval_samples_per_second": 7.594, "eval_wer": 0.5594142259414226, "step": 750},
    {"epoch": 25.0, "learning_rate": 6.315789473684209e-05, "loss": 0.1965, "step": 800},
    {"epoch": 25.0, "eval_loss": 0.4614764153957367, "eval_runtime": 86.0272, "eval_samples_per_second": 7.602, "eval_wer": 0.5535564853556485, "step": 800},
    {"epoch": 26.56, "learning_rate": 4.342105263157895e-05, "loss": 0.1837, "step": 850},
    {"epoch": 26.56, "eval_loss": 0.4499300718307495, "eval_runtime": 89.3292, "eval_samples_per_second": 7.321, "eval_wer": 0.5349372384937239, "step": 850},
    {"epoch": 28.12, "learning_rate": 2.3684210526315787e-05, "loss": 0.187, "step": 900},
    {"epoch": 28.12, "eval_loss": 0.45425695180892944, "eval_runtime": 85.6275, "eval_samples_per_second": 7.638, "eval_wer": 0.5345188284518828, "step": 900},
    {"epoch": 29.69, "learning_rate": 3.947368421052631e-06, "loss": 0.1568, "step": 950},
    {"epoch": 29.69, "eval_loss": 0.4458238184452057, "eval_runtime": 84.9753, "eval_samples_per_second": 7.696, "eval_wer": 0.5290794979079498, "step": 950},
    {"epoch": 30.0, "step": 960, "total_flos": 8.556740517881789e+18, "train_runtime": 8781.3793, "train_samples_per_second": 0.109},
    {"epoch": 30.0, "eval_loss": 0.4455166161060333, "eval_runtime": 81.4513, "eval_samples_per_second": 8.029, "eval_wer": 0.5288702928870292, "step": 960}
  ],
  "max_steps": 960,
  "num_train_epochs": 30,
  "total_flos": 8.556740517881789e+18,
  "trial_name": null,
  "trial_params": null
}
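The `log_history` above captures the learning curve: WER stays at 100% through step 450 and then drops steeply once the CTC head starts converging, ending at ≈52.89%. A minimal sketch of extracting that trace:

```python
import json

# print the eval WER at each logged step from trainer_state.json
with open("trainer_state.json") as f:
    state = json.load(f)

for entry in state["log_history"]:
    if "eval_wer" in entry:
        print("step {:>4}: WER {:.2f}%".format(entry["step"], 100 * entry["eval_wer"]))
```
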
training_args.bin ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b09499473860372d1a5755c32562e264aa7cbc7d9f4c4491ed862c399a413bb7
size 2351
vocab.json ADDED
@@ -0,0 +1 @@
{"<pad>": 0, "<s>": 1, "</s>": 2, "<unk>": 3, "|": 4, "ა": 5, "ბ": 6, "გ": 7, "დ": 8, "ე": 9, "ვ": 10, "ზ": 11, "თ": 12, "ი": 13, "კ": 14, "ლ": 15, "მ": 16, "ნ": 17, "ო": 18, "პ": 19, "ჟ": 20, "რ": 21, "ს": 22, "ტ": 23, "უ": 24, "ფ": 25, "ქ": 26, "ღ": 27, "ყ": 28, "შ": 29, "ჩ": 30, "ც": 31, "ძ": 32, "წ": 33, "ჭ": 34, "ხ": 35, "ჯ": 36, "ჰ": 37}