nhanv committed
Commit e5d20a5 (1 parent: d018eb0)

Update README.md

Files changed (1): README.md (+6 / -6)
README.md CHANGED
@@ -23,15 +23,15 @@ model-index:
     metrics:
     - name: Test WER
       type: wer
-      value: 81.80
+      value: 81.3
     - name: Test CER
       type: cer
-      value: 20.16
+      value: 21.9
 ---
-# Wav2Vec2-Large-XLSR-53-Japanese
-Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Japanese using the [Common Voice](https://huggingface.co/datasets/common_voice), [CSS10](https://github.com/Kyubyong/css10) and [JSUT](https://sites.google.com/site/shinnosuketakamichi/publication/jsut).
+# Wav2Vec2-Large-Japanese
+Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Japanese using the [Common Voice](https://huggingface.co/datasets/common_voice), [CSS10](https://github.com/Kyubyong/css10) and [JSUT](https://sites.google.com/site/shinnosuketakamichi/publication/jsut) and [CSJ]
 When using this model, make sure that your speech input is sampled at 16kHz.
-The script used for training can be found here: https://github.com/jonatasgrosman/wav2vec2-sprint
+
 ## Usage
 The model can be used directly (without a language model) as follows:
 ```python
@@ -40,7 +40,7 @@ import librosa
 from datasets import load_dataset
 from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
 LANG_ID = "ja"
-MODEL_ID = "jonatasgrosman/wav2vec2-large-xlsr-53-japanese"
+MODEL_ID = "NTQAI/wav2vec2-large-japanese"
 SAMPLES = 10
 test_dataset = load_dataset("common_voice", LANG_ID, split=f"test[:{SAMPLES}]")
 processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
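
The second hunk cuts the README's usage snippet off at the hunk boundary, so only the imports and configuration are visible above. For context, here is a minimal sketch of the greedy, language-model-free decoding flow the README describes, using the renamed model id from the commit; the file name `sample.wav` is a placeholder, and everything after loading the processor follows the standard transformers Wav2Vec2ForCTC inference pattern rather than lines shown in this diff.

```python
# Minimal sketch: transcribe one local recording with the model id introduced
# in this commit. "sample.wav" is a placeholder; the decoding steps follow the
# usual transformers Wav2Vec2 CTC pattern and are not part of the diff itself.
import torch
import librosa
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

MODEL_ID = "NTQAI/wav2vec2-large-japanese"

processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)

# The model card stresses 16 kHz input, so resample on load.
speech, _ = librosa.load("sample.wav", sr=16_000)

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

# Greedy (argmax) decoding, i.e. no language model.
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```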
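
The commit also updates the model-index metrics to a test WER of 81.3 and a test CER of 21.9. As a reference for what those numbers measure, here is a sketch of computing both scores; the `evaluate` library and the example strings are assumptions for illustration, since the evaluation script is not part of this commit.

```python
# Sketch of computing WER/CER figures like the updated model-index values.
# The `evaluate` metrics and the example strings below are assumptions, not
# taken from this commit's evaluation setup.
import evaluate

wer_metric = evaluate.load("wer")
cer_metric = evaluate.load("cer")

references = ["木曜日は晴れるでしょう"]   # made-up reference transcription
predictions = ["木曜日は腫れるでしょう"]  # made-up model output

# Both metrics return fractions; the model card lists them as percentages.
print("WER:", 100 * wer_metric.compute(predictions=predictions, references=references))
print("CER:", 100 * cer_metric.compute(predictions=predictions, references=references))
```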