zinc75 committed
Commit 4f7e2dd
1 Parent(s): ebc107e

Update README.md

Files changed (1)
  1. README.md +6 -2
README.md CHANGED
@@ -36,6 +36,12 @@ model-index:
 
 Fine-tuned [facebook/wav2vec2-base-fr-voxpopuli-v2](https://huggingface.co/facebook/wav2vec2-base-fr-voxpopuli-v2) for **French speech-to-phoneme** using the train and validation splits of [Common Voice v13](https://huggingface.co/datasets/mozilla-foundation/common_voice_13_0).
 
+## Samplerate of audio
+
+When using this model, make sure that your speech input is **sampled at 16kHz**.
+
+## Training procedure details
+
 - The model has been trained for 14 epochs on 4x2080 Ti GPUs using a ddp strategy and gradient-accumulation procedure (256 audios per update, corresponding roughly to 25 minutes of speech per update -> 2k updates per epoch)
 - Learning rate schedule : Double Tri-state schedule
 - Warmup from 1e-5 for 7% of total updates
@@ -47,5 +53,3 @@ Fine-tuned [facebook/wav2vec2-base-fr-voxpopuli-v2](https://huggingface.co/faceb
 
 - The set of hyperparameters used for training are those detailed in Annex B and Table 6 of [wav2vec2 paper](https://arxiv.org/pdf/2006.11477.pdf).
 
-When using this model, make sure that your speech input is **sampled at 16kHz**.
-
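
The new "Samplerate of audio" section states the 16 kHz requirement. The sketch below (not part of the commit) shows one common way to resample input audio and run the checkpoint through the `transformers` CTC API; the repository id is a placeholder, and the use of `Wav2Vec2Processor`/`Wav2Vec2ForCTC` is an assumption based on the wav2vec2-base architecture of the upstream model.

```python
# Minimal usage sketch, assuming a standard wav2vec2 CTC checkpoint with a processor config.
import torch
import torchaudio
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

MODEL_ID = "<this-repo-id>"  # placeholder: substitute the actual Hub id of this model

processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)

waveform, sr = torchaudio.load("speech_fr.wav")   # any source sample rate
if sr != 16_000:
    # The model expects 16 kHz input, so resample anything else first.
    waveform = torchaudio.functional.resample(waveform, sr, 16_000)

# Take the first channel (assumes mono speech) and build model inputs.
inputs = processor(waveform[0], sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits

pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids))  # predicted phoneme sequence
```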
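The training-procedure bullets can be sanity-checked with some arithmetic. Only the 4 GPUs, 256 audios per update, ~25 minutes of speech per update, ~2k updates per epoch, 14 epochs, and 7% warmup fraction come from the README text; the per-device batch size below is an assumption introduced for illustration.

```python
# Back-of-the-envelope check of the numbers quoted in the training-procedure section.
num_gpus = 4                 # 4x 2080 Ti with DDP (from the README)
audios_per_update = 256      # effective batch via gradient accumulation (from the README)
per_device_batch = 16        # assumption: not stated in the README

# Accumulation steps so that num_gpus * per_device_batch * steps == 256
grad_accum_steps = audios_per_update // (num_gpus * per_device_batch)   # -> 4

# ~25 minutes of speech per update spread over 256 clips -> average clip length
avg_clip_seconds = 25 * 60 / audios_per_update                          # ~5.9 s

# 14 epochs at ~2k updates per epoch, warmup over 7% of total updates
total_updates = 14 * 2_000                                              # 28_000
warmup_updates = int(0.07 * total_updates)                              # 1_960, ramping up from lr = 1e-5

print(grad_accum_steps, round(avg_clip_seconds, 1), total_updates, warmup_updates)
```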