This is a pruned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny).
Pruning was done without any fine-tuning, using the method from [this post](https://medium.com/m/global-identity-2?redirectUrl=https%3A%2F%2Ftowardsdatascience.com%2Fhow-to-adapt-a-multilingual-t5-model-for-a-single-language-b9f94f3d9c90).

## Size

Only 10% of the tokens were kept: the special Whisper tokens (no language tokens except \<|ru|\> and \<|en|\>, no timestamp tokens), the 200 most popular tokens from the tokenizer, and the 4000 most popular Russian tokens computed by tokenizing a Russian text corpus.
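The core of this kind of vocabulary pruning can be sketched as selecting the embedding rows that correspond to the kept token ids. The sketch below is a toy illustration, not the actual pruning script: the random matrix stands in for the real token-embedding weights, and the kept ids are hypothetical.

```python
import numpy as np

# Toy stand-in for whisper-tiny's token embedding: 51865 tokens, d_model = 384.
full_vocab_size, d_model = 51865, 384
rng = np.random.default_rng(0)
embedding = rng.standard_normal((full_vocab_size, d_model))

# Hypothetical ids of the tokens to keep (special tokens + frequent Russian tokens).
kept_token_ids = np.array([0, 1, 2, 50258, 50259])

# Pruning reduces to row selection; the new vocab_size is len(kept_token_ids).
pruned_embedding = embedding[kept_token_ids]
print(pruned_embedding.shape)  # (5, 384)
```

The same selection is applied to the output projection, and the tokenizer is rebuilt so that each kept token maps to its new (smaller) id.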

Model size is about 50% smaller than the original whisper-tiny:

|  | openai/whisper-tiny | waveletdeboshir/whisper-tiny-ru-pruned |
| :------ | :------ | :------ |
| n of parameters | 38 M | 19.4 M |
| n of parameters (with proj_out layer) | 57.6 M | 21 M |
| model file size | 151 MB | 86 MB |
| vocab_size | 51865 | 4207 |

## Usage

The model can be used in the same way as the original Whisper:
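For example, with the 🤗 Transformers Whisper classes (a minimal sketch; the silent waveform below is only a placeholder for real 16 kHz mono audio):

```python
import numpy as np
from transformers import WhisperProcessor, WhisperForConditionalGeneration

model_id = "waveletdeboshir/whisper-tiny-ru-pruned"
processor = WhisperProcessor.from_pretrained(model_id)
model = WhisperForConditionalGeneration.from_pretrained(model_id)

# Placeholder audio: 1 second of silence at 16 kHz. Replace with your own waveform.
waveform = np.zeros(16000, dtype=np.float32)

inputs = processor(waveform, sampling_rate=16000, return_tensors="pt")
predicted_ids = model.generate(inputs.input_features)
text = processor.batch_decode(predicted_ids, skip_special_tokens=True)
print(text)
```

Because the vocabulary is pruned, decoded ids differ from the original whisper-tiny, but the processor shipped with this checkpoint maps them back to text transparently.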