---
license: cc-by-sa-4.0
language: vi
tags:
- speech
- automatic-speech-recognition
---
# Wav2Vec2 base model trained on 1.5K hours of Vietnamese speech
The base model is pre-trained on 16kHz-sampled speech audio from a Vietnamese speech corpus containing 1.5K hours of read and broadcast speech. When using the model, make sure that your speech input is also sampled at 16kHz. Note that this model should be fine-tuned on a downstream task, such as Vietnamese Automatic Speech Recognition.

**Note**: This model does not have a tokenizer, as it was pretrained on audio alone. In order to use this model for speech recognition, a tokenizer should be created and the model should be fine-tuned on labeled text data (a rough sketch of the required setup is shown below the links). Check out [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for a more detailed explanation of how to fine-tune the model.
[Facebook's Wav2Vec2 blog](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/)
[Paper](https://arxiv.org/abs/2006.11477)
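
To fine-tune this checkpoint for Vietnamese ASR, the usual pattern from the blog post above is to pair the pre-trained encoder with a character-level CTC tokenizer and a fresh `Wav2Vec2ForCTC` head. The sketch below is only illustrative: `vocab.json` is a hypothetical vocabulary file you would build from your own labeled transcripts, and the feature-extractor settings are the standard wav2vec 2.0 base defaults.

```python
from transformers import (
    Wav2Vec2CTCTokenizer,
    Wav2Vec2FeatureExtractor,
    Wav2Vec2Processor,
    Wav2Vec2ForCTC,
)

# "vocab.json" is a hypothetical character-level vocabulary built from your
# own Vietnamese transcripts, plus the special tokens used below
tokenizer = Wav2Vec2CTCTokenizer(
    "vocab.json", unk_token="[UNK]", pad_token="[PAD]", word_delimiter_token="|"
)
feature_extractor = Wav2Vec2FeatureExtractor(
    feature_size=1, sampling_rate=16000, padding_value=0.0, do_normalize=True
)
processor = Wav2Vec2Processor(feature_extractor=feature_extractor, tokenizer=tokenizer)

# Load the pre-trained encoder with a randomly initialised CTC head sized
# to the new vocabulary
model = Wav2Vec2ForCTC.from_pretrained(
    "dragonSwing/viwav2vec2-base-1.5k",
    ctc_loss_reduction="mean",
    pad_token_id=processor.tokenizer.pad_token_id,
    vocab_size=len(processor.tokenizer),
)
# Common practice: keep the convolutional feature encoder frozen during fine-tuning
model.freeze_feature_encoder()
```

From here, the model can be trained on labeled audio/text pairs, for example with the `Trainer` recipe described in the blog post.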

# Usage
See [this notebook](https://colab.research.google.com/drive/1FjTsqbYKphl9kL-eILgUc-bl4zVThL8F?usp=sharing) for more information on how to fine-tune the English pre-trained model.

```python
import torch
from transformers import Wav2Vec2Model

model = Wav2Vec2Model.from_pretrained("dragonSwing/viwav2vec2-base-1.5k")

# Sanity check: run one second of random 16kHz audio through the model
inputs = torch.rand([1, 16000])
with torch.no_grad():
    outputs = model(inputs)

print(outputs.last_hidden_state.shape)  # (batch, time frames, hidden size)
```
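
For real audio, you would normally normalise the waveform with a `Wav2Vec2FeatureExtractor` before the forward pass. A minimal sketch, assuming `torchaudio` is installed and using the placeholder path `speech.wav`:

```python
import torch
import torchaudio
from transformers import Wav2Vec2FeatureExtractor, Wav2Vec2Model

# Standard wav2vec 2.0 base settings (assumed here, not read from the repo)
feature_extractor = Wav2Vec2FeatureExtractor(
    feature_size=1, sampling_rate=16000, padding_value=0.0, do_normalize=True
)
model = Wav2Vec2Model.from_pretrained("dragonSwing/viwav2vec2-base-1.5k")
model.eval()

# "speech.wav" is a placeholder path; resample to 16kHz if the file differs
waveform, sample_rate = torchaudio.load("speech.wav")
if sample_rate != 16000:
    waveform = torchaudio.functional.resample(waveform, sample_rate, 16000)

inputs = feature_extractor(
    waveform.squeeze(0).numpy(), sampling_rate=16000, return_tensors="pt"
)
with torch.no_grad():
    # Contextualised speech representations, shape (1, frames, 768)
    features = model(inputs.input_values).last_hidden_state
```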