jonatasgrosman committed
Commit 62592d7
1 Parent(s): a35ec72

Update README.md

Files changed (1):
README.md (+6 -6)
README.md CHANGED
@@ -22,7 +22,7 @@ model-index:
       name: Automatic Speech Recognition
       type: automatic-speech-recognition
     dataset:
-      name: Common Voice pt
+      name: Common Voice en
       type: common_voice
       args: en
     metrics:
@@ -73,15 +73,15 @@ The script used for training can be found here: https://github.com/jonatasgrosma
 
 The model can be used directly (without a language model) as follows...
 
-Using the [ASRecognition](https://github.com/jonatasgrosman/asrecognition) library:
+Using the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) library:
 
 ```python
-from asrecognition import ASREngine
-
-asr = ASREngine("en", model_path="jonatasgrosman/wav2vec2-large-xlsr-53-english")
+from huggingsound import SpeechRecognitionModel
 
+model = SpeechRecognitionModel("jonatasgrosman/wav2vec2-large-xlsr-53-english")
 audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]
-transcriptions = asr.transcribe(audio_paths)
+
+transcriptions = model.transcribe(audio_paths)
 ```
 
 Writing your own inference script:
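
Assembled from the added lines above, the README's new library-based usage example reads as one snippet (the audio paths are placeholders from the source):

```python
from huggingsound import SpeechRecognitionModel

# load the fine-tuned English model from the Hugging Face Hub
model = SpeechRecognitionModel("jonatasgrosman/wav2vec2-large-xlsr-53-english")

audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]

# transcribe all files in one call; returns one result per audio path
transcriptions = model.transcribe(audio_paths)
```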
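The hunk ends at the "Writing your own inference script:" heading without showing that script. As a minimal sketch of what such a script could look like using the plain transformers Wav2Vec2 CTC API (not necessarily the README's exact code; the librosa-based loading and 16 kHz resampling are assumptions):

```python
import torch
import librosa
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

# model id taken from the diff above
MODEL_ID = "jonatasgrosman/wav2vec2-large-xlsr-53-english"

processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)

# XLSR-53 models expect 16 kHz mono audio; librosa resamples on load
speech, _ = librosa.load("/path/to/file.mp3", sr=16_000)

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

# greedy CTC decoding: pick the most likely token per frame, then collapse repeats/blanks
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)[0]
print(transcription)
```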