flozi00 committed
Commit 00cd4fd
1 Parent(s): 5b6eb6b

Update README.md

Files changed (1): README.md (+59, −0)

README.md:

---
license: apache-2.0
language:
- de
library_name: transformers
pipeline_tag: automatic-speech-recognition
---

# whisper-tiny-german

This model is a German speech recognition model based on the [whisper-tiny](https://huggingface.co/openai/whisper-tiny) model.
It has 37.8M parameters, which amounts to roughly 76 MB of weights in bfloat16 format.

As a follow-up to [Whisper large v3 german](https://huggingface.co/primeline/whisper-large-v3-german), we decided to create a much smaller model for faster inference with minimal quality loss.

## Intended uses & limitations

The model is intended to be used for German speech recognition tasks.
It can be used as a local transcription service or as part of a larger speech recognition pipeline.
While it has only a fraction of the parameters of the large model, the quality is still very good and sufficient for most tasks.
The latency is low enough for real-time applications, especially when using optimization toolkits such as TensorRT.

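The snippet below is a minimal usage sketch with the `transformers` pipeline; the model id matches this repository, while the device, dtype, and chunking settings are illustrative assumptions.

```python
import torch
from transformers import pipeline

# Load the model through the automatic-speech-recognition pipeline.
# Device and dtype choices here are assumptions, not requirements.
asr = pipeline(
    "automatic-speech-recognition",
    model="primeline/whisper-tiny-german",
    torch_dtype=torch.bfloat16,
    device="cuda:0" if torch.cuda.is_available() else "cpu",
)

# Transcribe a local audio file; long inputs are processed in 30-second chunks.
result = asr("audio.mp3", chunk_length_s=30)
print(result["text"])
```
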
## Dataset

The dataset used for training is a filtered subset of the [Common Voice](https://huggingface.co/datasets/common_voice) dataset, Multilingual LibriSpeech, and some internal data.
The data was filtered and double-checked for quality and correctness.
We applied some normalization to the text data, especially for casing and punctuation.

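The exact normalization rules are not published; the snippet below is only a hypothetical sketch of the kind of casing and punctuation cleanup described above (the function name and rules are illustrative).

```python
import re

def normalize_transcript(text: str) -> str:
    """Hypothetical casing/punctuation normalization; not the actual rules used for this model."""
    text = re.sub(r"\s+", " ", text).strip()        # collapse repeated whitespace
    text = re.sub(r"\s+([,.!?])", r"\1", text)      # remove space before punctuation
    if text and text[0].islower():
        text = text[0].upper() + text[1:]           # capitalize the sentence start
    if text and text[-1] not in ".!?":
        text += "."                                 # ensure terminal punctuation
    return text

print(normalize_transcript("das ist   ein test"))   # -> "Das ist ein test."
```
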
## Model family

| Model                          | Parameters | Link                                                                     |
|--------------------------------|------------|--------------------------------------------------------------------------|
| Whisper large v3 german        | 1.54B      | [link](https://huggingface.co/primeline/whisper-large-v3-german)         |
| Distil-whisper large v3 german | 756M       | [link](https://huggingface.co/primeline/distil-whisper-large-v3-german)  |
| Whisper tiny german            | 37.8M      | [link](https://huggingface.co/primeline/whisper-tiny-german)             |

### Training hyperparameters

The following hyperparameters were used during training (see the configuration sketch after this list):
- learning_rate: 3e-05
- total_train_batch_size: 512
- num_epochs: 5.0

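The sketch below maps these values onto the standard `transformers` `Seq2SeqTrainingArguments`; the output path, batch-size split, precision, and generation settings are assumptions, not the authors' actual training configuration.

```python
from transformers import Seq2SeqTrainingArguments

# Hypothetical sketch: the reported hyperparameters mapped onto Seq2SeqTrainingArguments.
# The per-device/accumulation split behind the total batch size of 512 was not published.
training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-tiny-german",  # illustrative output path
    learning_rate=3e-5,                  # reported learning rate
    num_train_epochs=5.0,                # reported number of epochs
    per_device_train_batch_size=64,      # 64 * 8 accumulation steps = 512 total (assumed split)
    gradient_accumulation_steps=8,
    bf16=True,                           # assumption, consistent with the bfloat16 weights above
    predict_with_generate=True,          # assumption, standard for seq2seq ASR evaluation
)
```
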
### Framework versions

- Transformers 4.39.3
- PyTorch 2.3.0a0+ebedce2
- Datasets 2.18.0
- Tokenizers 0.15.2

## [About us](https://primeline-ai.com/en/)

[![primeline AI](https://primeline-ai.com/wp-content/uploads/2024/02/pl_ai_bildwortmarke_original.svg)](https://primeline-ai.com/en/)

Your partner for AI infrastructure in Germany <br>
Experience the powerful AI infrastructure that drives your ambitions in Deep Learning, Machine Learning & High-Performance Computing. Optimized for AI training and inference.

Model author: [Florian Zimmermeister](https://huggingface.co/flozi00)