patrickvonplaten committed on
Commit e0a33f3
1 Parent(s): 309c15c

Update README.md

Files changed (1)
  1. README.md +2 -2
README.md CHANGED
@@ -219,13 +219,13 @@ ar_sample = next(iter(stream_data))["audio"]["array"]
 Next, we load the model and processor
 
 ```py
-from transformers import Wav2Vec2ForCTC, AutoFeatureExtractor
+from transformers import Wav2Vec2ForSequenceClassification, AutoFeatureExtractor
 import torch
 
 model_id = "facebook/mms-lid-126"
 
 processor = AutoFeatureExtractor.from_pretrained(model_id)
-model = Wav2Vec2ForCTC.from_pretrained(model_id)
+model = Wav2Vec2ForSequenceClassification.from_pretrained(model_id)
 ```
 
 Now we process the audio data, pass the processed audio data to the model to classify it into a language, just like we usually do for Wav2Vec2 audio classification models such as [ehcalabres/wav2vec2-lg-xlsr-en-speech-emotion-recognition](https://huggingface.co/harshit345/xlsr-wav2vec-speech-emotion-recognition)
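For reference, the classification step that the last context line describes could look roughly like the sketch below. This is a minimal example, not part of the commit: it assumes `ar_sample` is the 16 kHz mono waveform loaded earlier in the README, and that `processor` and `model` come from the snippet in the diff above.

```py
# Minimal sketch of the language-identification step (assumption: `ar_sample`
# is a 1-D float array sampled at 16 kHz, as loaded earlier in the README).
inputs = processor(ar_sample, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# The sequence-classification head outputs one logit per supported language;
# take the highest-scoring one and map it back to its label via the config.
lang_id = torch.argmax(logits, dim=-1)[0].item()
detected_lang = model.config.id2label[lang_id]
print(detected_lang)  # a language code, e.g. "ara" for Arabic
```

This is exactly why the commit swaps `Wav2Vec2ForCTC` for `Wav2Vec2ForSequenceClassification`: LID is a classification task over a fixed set of languages, not a character-level transcription task, so the model needs a classification head whose logits can be argmaxed as above.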