bayartsogt committed
Commit 1594d42
1 Parent(s): ff31531

readme add

Files changed (1)
  1. README.md +27 -8
README.md CHANGED
@@ -9,11 +9,13 @@ license: apache-2.0
 
 # Wav2Vec2-Base
 
- [Facebook's Wav2Vec2 Blog](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/)
+ [Paper](https://arxiv.org/abs/2006.11477)
 
- The base model was pretrained on 16kHz sampled speech audio. When using the model, make sure that your speech input is also sampled at 16kHz.
+ Authors: Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli
 
- **Note**: This model does not have a tokenizer as it was pretrained on audio alone. In order to use this model for **speech recognition**, a tokenizer should be created and the model should be fine-tuned on labeled text data.
+ **Abstract**
+ We show for the first time that learning powerful representations from speech audio alone followed by fine-tuning on transcribed speech can outperform the best semi-supervised methods while being conceptually simpler. wav2vec 2.0 masks the speech input in the latent space and solves a contrastive task defined over a quantization of the latent representations which are jointly learned. Experiments using all labeled data of Librispeech achieve 1.8/3.3 WER on the clean/other test sets. When lowering the amount of labeled data to one hour, wav2vec 2.0 outperforms the previous state of the art on the 100 hour subset while using 100 times less labeled data. Using just ten minutes of labeled data and pre-training on 53k hours of unlabeled data still achieves 4.8/8.2 WER. This demonstrates the feasibility of speech recognition with limited amounts of labeled data.
+ The original model can be found under https://github.com/pytorch/fairseq/tree/master/examples/wav2vec#wav2vec-20.
 
 # Data
 
@@ -27,10 +29,27 @@ The base model pretrained on 16kHz sampled speech audio. When using the model ma
 - min: 0.0005440165748211712
 
 
- # [Paper](https://arxiv.org/abs/2006.11477)
-
- Authors: Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli
-
- **Abstract**
- We show for the first time that learning powerful representations from speech audio alone followed by fine-tuning on transcribed speech can outperform the best semi-supervised methods while being conceptually simpler. wav2vec 2.0 masks the speech input in the latent space and solves a contrastive task defined over a quantization of the latent representations which are jointly learned. Experiments using all labeled data of Librispeech achieve 1.8/3.3 WER on the clean/other test sets. When lowering the amount of labeled data to one hour, wav2vec 2.0 outperforms the previous state of the art on the 100 hour subset while using 100 times less labeled data. Using just ten minutes of labeled data and pre-training on 53k hours of unlabeled data still achieves 4.8/8.2 WER. This demonstrates the feasibility of speech recognition with limited amounts of labeled data.
- The original model can be found under https://github.com/pytorch/fairseq/tree/master/examples/wav2vec#wav2vec-20.
+ # Convert from FAIRSEQ to HF
+
+ 1. Create a config
+
+ ```python
+ from transformers import Wav2Vec2Config
+ config = Wav2Vec2Config.from_pretrained('facebook/wav2vec2-base')
+ config.save_pretrained('./')
+ ```
+
+ 2. Convert using [the script](https://github.com/huggingface/transformers/blob/main/src/transformers/models/wav2vec2/convert_wav2vec2_original_pytorch_checkpoint_to_pytorch.py) written by the HF team
+
+ ```bash
+ wget https://raw.githubusercontent.com/huggingface/transformers/main/src/transformers/models/wav2vec2/convert_wav2vec2_original_pytorch_checkpoint_to_pytorch.py
+
+ hf_name="<my-hf-repo-name>"
+ ckpt="<path-to-pth-checkpoint>"
+
+ python ./convert_wav2vec2_original_pytorch_checkpoint_to_pytorch.py \
+     --pytorch_dump_folder_path ${hf_name} \
+     --checkpoint_path ${ckpt} \
+     --config_path ./config.json \
+     --not_finetuned
+ ```
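
To sanity-check the result of step 2, the dump folder can be loaded back with `transformers`. The snippet below is a minimal sketch, not part of the commit: the folder name is the placeholder from step 2, and the feature-extractor settings are assumed to come from `facebook/wav2vec2-base` (the same checkpoint the config was created from), since a pretrained-only checkpoint ships without a tokenizer.

```python
# Illustrative smoke test for a converted checkpoint; paths are placeholders.
import numpy as np
import torch
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

# Load the converted model from the --pytorch_dump_folder_path used in step 2.
model = Wav2Vec2Model.from_pretrained("<my-hf-repo-name>")
model.eval()

# Assumption: reuse the feature-extractor settings of the base checkpoint,
# as the pretrained-only model has no tokenizer or processor of its own.
feature_extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-base")

# One second of dummy audio at 16 kHz -- the rate the model expects.
waveform = np.zeros(16000, dtype=np.float32)
inputs = feature_extractor(waveform, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    outputs = model(inputs.input_values)

print(outputs.last_hidden_state.shape)  # e.g. torch.Size([1, 49, 768]) for the base architecture
```

Because the model was pretrained on audio alone, this only verifies feature extraction; using it for speech recognition still requires creating a tokenizer and fine-tuning on labeled data.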