rukaiyah-indika-ai committed
Commit: 0fc7b0a · Parent(s): b680c8a
Update README.md
README.md
CHANGED
@@ -39,8 +39,7 @@ After fine-tuning, the model shows a 2.5% increase in transcription accuracy for
 # How to Use
 You can use this model directly with a simple API call in Hugging Face. Here is a Python code snippet for using the model:
 
-
-# Copy code
+```python
 from transformers import AutoModelForCTC, Wav2Vec2Processor
 
 model = AutoModelForCTC.from_pretrained("your-username/your-model-name")
@@ -52,6 +51,7 @@ input_audio = processor(path_to_audio_file, return_tensors="pt", padding=True)
 # Perform the transcription
 transcription = model.generate(**input_audio)
 print("Transcription:", transcription)
+```
 
 # Citation
 If you use this model in your research, please cite it as follows:
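
For reference, a runnable sketch of the snippet introduced above follows. It assumes the checkpoint behind the placeholder "your-username/your-model-name" is a Wav2Vec2-style CTC model with a matching processor, and that the audio is 16 kHz mono loaded with soundfile (the audio path is likewise a placeholder). Because CTC models in transformers do not implement `generate()`, the sketch decodes with an argmax over the logits and `processor.batch_decode` rather than `model.generate`.

```python
# Minimal sketch, assuming a Wav2Vec2-style CTC checkpoint and 16 kHz mono audio.
# Repository name and audio path are placeholders, not real identifiers.
import torch
import soundfile as sf
from transformers import AutoModelForCTC, Wav2Vec2Processor

model = AutoModelForCTC.from_pretrained("your-username/your-model-name")
processor = Wav2Vec2Processor.from_pretrained("your-username/your-model-name")

# Load the raw waveform (resample to 16 kHz beforehand if necessary)
speech, sampling_rate = sf.read("path/to/audio.wav")

# Convert the waveform into padded model inputs
inputs = processor(speech, sampling_rate=sampling_rate, return_tensors="pt", padding=True)

# Forward pass, then greedy CTC decoding (argmax over logits)
with torch.no_grad():
    logits = model(**inputs).logits
predicted_ids = torch.argmax(logits, dim=-1)

transcription = processor.batch_decode(predicted_ids)[0]
print("Transcription:", transcription)
```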