ylacombe committed
Commit f3422b5
Parent: 4e8dad9

Model save
README.md ADDED
@@ -0,0 +1,71 @@
+ ---
+ license: mit
+ base_model: facebook/w2v-bert-2.0
+ tags:
+ - generated_from_trainer
+ metrics:
+ - wer
+ model-index:
+ - name: wav2vec2-bert-CV16-en-libri
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # wav2vec2-bert-CV16-en-libri
+
+ This model is a fine-tuned version of [facebook/w2v-bert-2.0](https://huggingface.co/facebook/w2v-bert-2.0) on the None dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.2870
+ - Wer: 0.2577
+ - Cer: 0.0659
+
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 3e-05
+ - train_batch_size: 12
+ - eval_batch_size: 12
+ - seed: 42
+ - distributed_type: multi-GPU
+ - num_devices: 3
+ - gradient_accumulation_steps: 2
+ - total_train_batch_size: 72
+ - total_eval_batch_size: 36
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - lr_scheduler_warmup_steps: 10000
+ - num_epochs: 3.0
+ - mixed_precision_training: Native AMP
+
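The listed totals and the schedule can be checked with a quick sketch. The linear-with-warmup rule below mirrors the usual Trainer behavior (ramp to the peak LR over the warmup steps, then decay to zero); the total step count is an estimate inferred from the results table (~397 optimizer steps per epoch at epoch 2.52 / step 1000), not stated in the card:

```python
# Effective (total) train batch size = per-device batch * num_devices * grad_accum
per_device_batch = 12
num_devices = 3
grad_accum = 2
total_train_batch_size = per_device_batch * num_devices * grad_accum  # 72

def linear_warmup_lr(step, peak_lr=3e-5, warmup_steps=10_000, total_steps=1_190):
    """LR under a linear schedule: ramp up over warmup_steps, then decay to 0.

    total_steps is an estimate inferred from the results table above,
    not a value stated in the model card.
    """
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    return peak_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

# With warmup_steps (10000) larger than the estimated total steps (~1190),
# the run ends mid-warmup and the peak LR of 3e-05 is never reached:
lr_at_end = linear_warmup_lr(1_190)  # 3e-05 * 1190 / 10000 = 3.57e-06
```

If the step estimate is right, the configured warmup never completes, so the effective learning rate stays well below the nominal 3e-05 for the whole run.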
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Wer    | Cer    |
+ |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|
+ | 2.8812        | 0.63  | 250  | 2.8923          | 1.0    | 1.0000 |
+ | 1.2899        | 1.26  | 500  | 1.1471          | 0.7030 | 0.2563 |
+ | 0.5276        | 1.89  | 750  | 0.4687          | 0.4114 | 0.1127 |
+ | 0.3313        | 2.52  | 1000 | 0.2870          | 0.2577 | 0.0659 |
+
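The Wer and Cer columns are standard edit-distance rates. A minimal pure-Python sketch of how they are computed (not the exact `evaluate`/`jiwer` implementation used during training, which the card does not specify):

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two sequences (of words or characters)."""
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        cur = [i]
        for j, h in enumerate(hyp, 1):
            cur.append(min(prev[j] + 1,              # deletion
                           cur[j - 1] + 1,           # insertion
                           prev[j - 1] + (r != h)))  # substitution (free if equal)
        prev = cur
    return prev[-1]

def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edits divided by reference word count."""
    ref = reference.split()
    return edit_distance(ref, hypothesis.split()) / len(ref)

def cer(reference: str, hypothesis: str) -> float:
    """Character error rate: character-level edits divided by reference length."""
    return edit_distance(list(reference), list(hypothesis)) / len(reference)

print(wer("the cat sat", "the cat sit"))  # 1 substitution / 3 words
```

On this scale, the final checkpoint's 0.2577 WER means roughly one word-level edit per four reference words.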
+
+ ### Framework versions
+
+ - Transformers 4.37.0.dev0
+ - Pytorch 2.1.0+cu121
+ - Datasets 2.16.1
+ - Tokenizers 0.15.0
emissions.csv ADDED
@@ -0,0 +1,2 @@
+ timestamp,experiment_id,project_name,duration,emissions,energy_consumed,country_name,country_iso_code,region,on_cloud,cloud_provider,cloud_region
+ 2024-01-16T12:18:10,100ce0c1-abd3-4e89-a129-ac49a10532a2,codecarbon,5727.051881551743,0.48059596920304154,1.131922185780931,France,FRA,île-de-france,N,,
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:616db20c0308312afa7b216c415e6f34b53ff3a70a11908d6518efa102a32613
+ oid sha256:f77bb4b7022603caca2aecc74c9cc0116d069255aedd64e77f0f43e5ccd5aab3
  size 2422949860
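What the diff above actually changes is a Git LFS pointer file, not the weights themselves: the sha256 oid is replaced while the byte size stays identical. A minimal sketch of parsing that simple `key value` pointer format (the pointer text is copied from the new side of the diff):

```python
def parse_lfs_pointer(text: str) -> dict:
    """Parse a Git LFS pointer file into its key/value fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# New-side pointer from the model.safetensors diff above.
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:f77bb4b7022603caca2aecc74c9cc0116d069255aedd64e77f0f43e5ccd5aab3
size 2422949860"""

info = parse_lfs_pointer(pointer)
print(int(info["size"]) / 1e9)  # size of the actual weights in GB, ~2.42
```

The real 2.4 GB safetensors blob lives in LFS storage keyed by that oid; the repository itself only tracks this three-line pointer.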
runs/Jan16_10-42-05_vorace/events.out.tfevents.1705401761.vorace.462641.0 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:43cda48ef96a461bf9cdbb001ac96c8566467d001c6ac2c0c276524fb28eb8df
- size 21994
+ oid sha256:42caaacad21a949320a0dd9500bb07cbb66c498c558b15f610335904feb86514
+ size 44587