devasheeshG committed on
Commit
803f441
1 Parent(s): 967796b

added model card

Files changed (4)
  1. .gitattributes +1 -0
  2. README.md +58 -0
  3. requirements.txt +5 -0
  4. test.wav +3 -0
.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ test.wav filter=lfs diff=lfs merge=lfs -text
README.md CHANGED
@@ -1,3 +1,61 @@
  ---
  license: apache-2.0
  ---
+ ## Versions:
+
+ - CUDA: 12.1
+ - cuDNN: 8.9.2.26_1.0-1_amd64
+
+ - tensorflow: 2.12.0
+ - torch: 2.1.0.dev20230606+cu121
+ - transformers: 4.30.2
+ - accelerate: 0.20.3
+
+ ## Benchmark:
+
+ - RAM: 2.8 GB (original model: 5.5 GB)
+ - VRAM: 1812 MB (original model: 6 GB)
+ - test.wav: 23 s (multilingual speech, i.e. English + Hindi)
+
+ | Device Name       | float32 (Original)   | float16 | CudaCores | TensorCores |
+ | ----------------- | -------------------- | ------- | --------- | ----------- |
+ | 3060              | 1.7                  | 1.1     | 3,584     | 112         |
+ | 1660 Super        | can't use this model | 3.3     | 1,408     | -           |
+ | Colab (Tesla T4)  | 2.8                  | 2.2     | 2,560     | 320         |
+ | CPU               | -                    | -       | -         | -           |
+
+ - CPU: torch.float16 is not supported on CPU (AMD Ryzen 5 3600 or the Colab CPU)
+ - Punctuation: True
+
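The roughly 2x RAM/VRAM saving above is what you would expect from halving the bytes per weight. A back-of-the-envelope sketch, assuming ~769 M parameters for whisper-medium (an approximation; the measured figures above also include activations and runtime overhead):

```python
# Rough weight-memory estimate; 769 M parameters is an approximation
# for whisper-medium, and real usage adds activations and overhead.
params = 769_000_000

fp32_gib = params * 4 / 1024**3  # 4 bytes per float32 weight
fp16_gib = params * 2 / 1024**3  # 2 bytes per float16 weight

print(f"float32 weights: {fp32_gib:.2f} GiB")
print(f"float16 weights: {fp16_gib:.2f} GiB")
```

The weight tensors alone account for most, but not all, of the observed difference between the original and float16 models.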
+ ## Usage
+
+ This repo contains an `__init__.py` file with all the code needed to use this model.
+
+ First, clone this repo and place all of its files inside a folder.
+
+ **Please try it in a Jupyter notebook**
+
+ ```python
+ # Import the model
+ from whisper_medium_fp16_transformers import Model
+ ```
+
+ ```python
+ # Initialise the model
+ model = Model(
+     model_name_or_path='whisper_medium_fp16_transformers',
+     cuda_visible_device="0",
+     device='cuda',
+ )
+ ```
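The `cuda_visible_device` argument presumably restricts which GPU the process sees via the `CUDA_VISIBLE_DEVICES` environment variable (an assumption; check `__init__.py` for the actual behaviour). The underlying mechanism looks like:

```python
import os

def select_gpu(cuda_visible_device: str = "0") -> None:
    # Hypothetical helper: restrict CUDA to the given device index(es).
    # This must run before torch initialises CUDA to take effect.
    os.environ["CUDA_VISIBLE_DEVICES"] = cuda_visible_device

select_gpu("0")
```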
+
+ ```python
+ # Load audio
+ audio = model.load_audio('test.wav')
+ ```
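`load_audio` presumably decodes the file with ffmpeg (hence the `ffmpeg_python` dependency) to 16 kHz mono 16-bit PCM and rescales it to float32 in [-1, 1], as common Whisper loaders do. The rescaling step might look like this (a sketch; the function name is hypothetical):

```python
import numpy as np

def pcm16_to_float32(raw: bytes) -> np.ndarray:
    # `raw` is little-endian signed 16-bit mono PCM, e.g. what
    # `ffmpeg -i test.wav -f s16le -ac 1 -ar 16000 pipe:` would emit.
    return np.frombuffer(raw, dtype=np.int16).astype(np.float32) / 32768.0
```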
+
+ ```python
+ # Transcribe (the first transcription takes extra time)
+ model.transcribe(audio)
+ ```
requirements.txt ADDED
@@ -0,0 +1,5 @@
+ ffmpeg_python==0.2.0
+ numpy==1.23.5
+ torch==2.1.0.dev20230606+cu121
+ transformers==4.30.2
+ accelerate==0.20.3
test.wav ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1483a4b2c200e9c0fd9c3006158665740f739c81c20da572afbbf33e3a5a3fd6
+ size 4452466
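The pointer above records only the blob's sha256 oid and byte size. After cloning, you can check that the real test.wav was fetched (and is not still a pointer stub) with a quick hash comparison. A sketch with a hypothetical helper name:

```python
import hashlib
import os

def matches_lfs_pointer(path: str, oid_sha256: str, size: int) -> bool:
    # A Git LFS pointer stores the blob's sha256 and byte count;
    # recompute both over the local file and compare.
    if os.path.getsize(path) != size:
        return False
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest() == oid_sha256
```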