washeed committed
Commit
3bc5b15
1 Parent(s): b4a62f7

Update README.md

This model is a fine-tuned version of [openai/whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) on the google/fleurs dataset.

# To run
First, install [Chocolatey](https://chocolatey.org/) by running this in cmd:
```
@"%SystemRoot%\System32\WindowsPowerShell\v1.0\powershell.exe" -NoProfile -InputFormat None -ExecutionPolicy Bypass -Command "[System.Net.ServicePointManager]::SecurityProtocol = 3072; iex ((New-Object System.Net.WebClient).DownloadString('https://community.chocolatey.org/install.ps1'))" && SET "PATH=%PATH%;%ALLUSERSPROFILE%\chocolatey\bin"
```
# After that, install ffmpeg with Chocolatey by running this in cmd:
```
choco install ffmpeg
```
# Install the Python dependencies in your environment:
```
pip install --upgrade pip
pip install --upgrade git+https://github.com/huggingface/transformers.git accelerate datasets[audio]
```
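Before moving on, a quick stdlib-only check can confirm that the installs above are visible to Python. This is only a sketch; the `check_env` helper name is our own, not part of any library:

```
import importlib.util
import shutil


def check_env(modules=("torch", "transformers", "datasets"), binaries=("ffmpeg",)):
    """Return the names of required Python modules and executables that are missing."""
    missing = [m for m in modules if importlib.util.find_spec(m) is None]
    missing += [b for b in binaries if shutil.which(b) is None]
    return missing


# An empty list means everything installed cleanly.
print(check_env())
```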

# Then, lastly, to run inference with the model:
```
import torch
from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor, pipeline

# Use the GPU and half precision when available; otherwise fall back to CPU.
device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32

model_id = "washeed/audio-transcribe"

model = AutoModelForSpeechSeq2Seq.from_pretrained(
    model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True
)
model.to(device)

processor = AutoProcessor.from_pretrained(model_id)

pipe = pipeline(
    "automatic-speech-recognition",
    model=model,
    tokenizer=processor.tokenizer,
    feature_extractor=processor.feature_extractor,
    max_new_tokens=128,
    chunk_length_s=30,    # split long audio into 30-second chunks
    batch_size=16,
    return_timestamps=True,
    torch_dtype=torch_dtype,
    device=device,
)

result = pipe("audio.mp3")
print(result["text"])
```
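The `chunk_length_s=30` argument above makes the pipeline cut long recordings into 30-second pieces so they can be batched (`batch_size=16`). As a rough illustration of that splitting, here is a stdlib-only sketch with a hypothetical helper name; the real pipeline additionally overlaps adjacent chunks with a stride, which this ignores:

```
def chunk_offsets(total_s, chunk_s=30.0):
    """Yield (start, end) offsets in seconds covering the whole recording."""
    start = 0.0
    while start < total_s:
        yield (start, min(start + chunk_s, total_s))
        start += chunk_s


# A 70-second file becomes three chunks: two full 30 s pieces and a 10 s tail.
print(list(chunk_offsets(70.0)))
```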

# If you want to transcribe instead of translating, just replace:

```result = pipe("audio.mp3")```

# with

```result = pipe("audio.mp3", generate_kwargs={"task": "transcribe"})```
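Rather than editing the call site by hand, the task can also be selected with a tiny helper. The `task_kwargs` name is our own invention; only the `generate_kwargs={"task": ...}` shape comes from the Transformers pipeline call shown above:

```
def task_kwargs(task):
    """Build the keyword arguments selecting Whisper's "transcribe" or "translate" mode."""
    if task not in ("transcribe", "translate"):
        raise ValueError(f"unknown task: {task!r}")
    return {"generate_kwargs": {"task": task}}


# Usage with the pipeline from the example above:
# result = pipe("audio.mp3", **task_kwargs("transcribe"))
print(task_kwargs("translate"))
```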
 
 
### Training hyperparameters