
## Inference

The model can be used directly (without a language model) as follows...

Using the HuggingSound library:
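
The HuggingSound snippet itself did not survive extraction; below is a minimal sketch based on HuggingSound's `SpeechRecognitionModel` API, with `speech.wav` as a placeholder audio path:

```python
from huggingsound import SpeechRecognitionModel

# load the checkpoint through HuggingSound
model = SpeechRecognitionModel("gymeee/demo_code_switching")

# transcribe one or more audio files (paths are placeholders)
audio_paths = ["speech.wav"]
transcriptions = model.transcribe(audio_paths)
```

Writing your own inference script with 🤗 Transformers: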

```python
import torch
import torchaudio
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

# load model and processor
processor = Wav2Vec2Processor.from_pretrained("gymeee/demo_code_switching")
model = Wav2Vec2ForCTC.from_pretrained("gymeee/demo_code_switching")

# load speech
speech_array, sampling_rate = torchaudio.load("speech.wav")

# tokenize
input_values = processor(speech_array[0], sampling_rate=sampling_rate, return_tensors="pt", padding="longest").input_values  # Batch size 1

# retrieve logits
logits = model(input_values).logits

# take argmax and decode (greedy CTC decoding)
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
print(transcription)
```
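
`torchaudio.load` returns the file's native sampling rate, while Wav2Vec2 checkpoints are typically trained on 16 kHz audio (assumed here for this checkpoint as well). If your file uses a different rate, resample it before calling the processor; a minimal sketch using `torchaudio.transforms.Resample`:

```python
import torchaudio

# resample to 16 kHz if the source file uses a different rate
speech_array, sampling_rate = torchaudio.load("speech.wav")
if sampling_rate != 16_000:
    resampler = torchaudio.transforms.Resample(orig_freq=sampling_rate, new_freq=16_000)
    speech_array = resampler(speech_array)
    sampling_rate = 16_000
```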
