Improve model card with abstract, detailed usage, and comprehensive benchmarks
#1 by nielsr (HF Staff) - opened

README.md CHANGED

  - hf-asr-leaderboard
---

# LiteASR: Efficient Automatic Speech Recognition with Low-Rank Approximation

LiteASR is a compression scheme for automatic speech recognition (ASR) models that leverages the _low-rank_ properties of activation values. Our method can compress OpenAI's Whisper encoder by up to **~50%**.

See our [GitHub repository](https://github.com/efeslab/LiteASR) and [paper](https://arxiv.org/abs/2502.20583) for technical details.

## Abstract

Modern automatic speech recognition (ASR) models, such as OpenAI's Whisper, rely on deep encoder-decoder architectures, and their encoders are a critical bottleneck for efficient deployment due to high computational intensity. We introduce LiteASR, a low-rank compression scheme for ASR encoders that significantly reduces inference costs while maintaining transcription accuracy. Our approach leverages the strong low-rank properties observed in intermediate activations: by applying principal component analysis (PCA) with a small calibration dataset, we approximate linear transformations with a chain of low-rank matrix multiplications, and further optimize self-attention to work in reduced dimensionality. Evaluation results show that our method can compress Whisper large-v3's encoder size by over 50%, matching Whisper medium's size with better transcription accuracy, thereby establishing a new Pareto frontier of accuracy and efficiency.
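
To make the recipe concrete, here is a minimal sketch of the core idea, under our own assumptions rather than the official LiteASR code: a linear layer's outputs are projected onto the top-k principal components of activations gathered from a small calibration set, turning one large matrix multiplication into a chain of two low-rank ones.

```python
# Minimal sketch of PCA-based low-rank factorization (illustrative only;
# see the LiteASR GitHub repository for the actual method).
import torch
import torch.nn as nn

def low_rank_factorize(layer: nn.Linear, calib_x: torch.Tensor, rank: int) -> nn.Sequential:
    """Approximate `layer` (d_in -> d_out) with a chain d_in -> rank -> d_out."""
    with torch.no_grad():
        y = layer(calib_x)                        # (n, d_out) calibration outputs
        mean = y.mean(dim=0)
        # top-k principal directions of the centered calibration outputs
        _, _, vh = torch.linalg.svd(y - mean, full_matrices=False)
        u_k = vh[:rank].T                         # (d_out, rank)

        down = nn.Linear(layer.in_features, rank, bias=False)
        up = nn.Linear(rank, layer.out_features, bias=True)
        down.weight.copy_(u_k.T @ layer.weight)   # (rank, d_in)
        up.weight.copy_(u_k)                      # (d_out, rank)
        b = layer.bias if layer.bias is not None else torch.zeros_like(mean)
        # y ≈ mean + U_k U_k^T (W x + b - mean)
        up.bias.copy_(mean + u_k @ (u_k.T @ (b - mean)))
    return nn.Sequential(down, up)

# Example: a 1024 -> 4096 layer at rank 256 stores roughly 3x fewer weights
layer = nn.Linear(1024, 4096)
calib = torch.randn(512, 1024)                    # stand-in calibration data
approx = low_rank_factorize(layer, calib, rank=256)
print((approx(calib) - layer(calib)).abs().mean())
```

How aggressively each layer can be factorized depends on how quickly its activation spectrum decays, which is the trade-off behind the `fast`/`acc` checkpoint variants listed below.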
## Quick Start

The easiest way to run our model is through our integration with the Hugging Face Transformers library. We provide model weights for compressed versions of the OpenAI Whisper series [here](https://huggingface.co/efficient-speech).

```python
import librosa
import torch
from transformers import AutoProcessor, AutoModel

device = "cuda:0"
dtype = torch.float16

# load the compressed Whisper model
model = AutoModel.from_pretrained(
    "efficient-speech/lite-whisper-tiny-fast",  # this model repository
    trust_remote_code=True,
)
model.to(dtype).to(device)

# we use the same processor as the original base model (whisper-tiny)
processor = AutoProcessor.from_pretrained("openai/whisper-tiny")

# set the path to your audio file
path = "path/to/audio.wav"
audio, _ = librosa.load(path, sr=16000)

input_features = processor(audio, sampling_rate=16000, return_tensors="pt").input_features
input_features = input_features.to(dtype).to(device)

predicted_ids = model.generate(input_features)
transcription = processor.batch_decode(
    predicted_ids,
    skip_special_tokens=True,
)[0]

print(transcription)
```
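
The standard Whisper feature extractor pads or truncates audio to a single 30-second window, so the snippet above transcribes at most the first 30 seconds. As a workaround, here is a hypothetical chunking loop (our illustration, not an official example) that reuses the objects defined above; words straddling a chunk boundary may be clipped:

```python
# Hypothetical long-form helper: split audio into 30 s chunks at 16 kHz
# and transcribe each chunk independently.
chunk_samples = 30 * 16000
texts = []
for start in range(0, len(audio), chunk_samples):
    chunk = audio[start:start + chunk_samples]
    feats = processor(chunk, sampling_rate=16000, return_tensors="pt").input_features
    feats = feats.to(dtype).to(device)
    texts.append(processor.batch_decode(model.generate(feats), skip_special_tokens=True)[0])

print(" ".join(texts))
```

The GitHub repository linked above is the place to check for a proper long-form transcription path.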
## Benchmark Results

LiteASR compresses Whisper models with minimal degradation in accuracy (the `lite-whisper` series). We provide three checkpoints per base model so you can trade accuracy for efficiency: `-acc` (least compressed), plain (balanced), and `-fast` (most compressed).

Below is the average word error rate (WER, lower is better) evaluated on the [ESB datasets](https://huggingface.co/datasets/hf-audio/esb-datasets-test-only-sorted); a minimal WER-scoring sketch follows the table. Sizes are parameter counts.

| Model | Average WER (↓) | Encoder Size | Decoder Size |
|-------|----------------|--------------|--------------|
| [whisper-large-v3](https://huggingface.co/openai/whisper-large-v3) | 10.1 | 635M | 907M |
| [lite-whisper-large-v3-acc](https://huggingface.co/efficient-speech/lite-whisper-large-v3-acc) | 10.1 | 429M | 907M |
| [lite-whisper-large-v3](https://huggingface.co/efficient-speech/lite-whisper-large-v3) | 10.2 | 377M | 907M |
| [lite-whisper-large-v3-fast](https://huggingface.co/efficient-speech/lite-whisper-large-v3-fast) | 11.3 | 308M | 907M |
| | | | |
| [whisper-large-v3-turbo](https://huggingface.co/openai/whisper-large-v3-turbo) | 10.1 | 635M | 172M |
| [lite-whisper-large-v3-turbo-acc](https://huggingface.co/efficient-speech/lite-whisper-large-v3-turbo-acc) | 10.2 | 421M | 172M |
| [lite-whisper-large-v3-turbo](https://huggingface.co/efficient-speech/lite-whisper-large-v3-turbo) | 12.6 | 374M | 172M |
| [lite-whisper-large-v3-turbo-fast](https://huggingface.co/efficient-speech/lite-whisper-large-v3-turbo-fast) | 20.1 | 313M | 172M |
| | | | |
| [whisper-medium](https://huggingface.co/openai/whisper-medium) | 14.8 | 306M | 457M |
| [lite-whisper-medium-acc](https://huggingface.co/efficient-speech/lite-whisper-medium-acc) | 13.46 | 269.93M | 456.64M |
| [lite-whisper-medium](https://huggingface.co/efficient-speech/lite-whisper-medium) | 14.50 | 239.99M | 456.64M |
| [lite-whisper-medium-fast](https://huggingface.co/efficient-speech/lite-whisper-medium-fast) | 14.52 | 215.31M | 456.64M |
| | | | |
| [whisper-small](https://huggingface.co/openai/whisper-small) | 15.89 | 87.00M | 153.58M |
| [lite-whisper-small-acc](https://huggingface.co/efficient-speech/lite-whisper-small-acc) | 15.37 | 76.99M | 153.58M |
| [lite-whisper-small](https://huggingface.co/efficient-speech/lite-whisper-small) | 14.96 | 70.16M | 153.58M |
| [lite-whisper-small-fast](https://huggingface.co/efficient-speech/lite-whisper-small-fast) | 14.92 | 63.11M | 153.58M |
| | | | |
| [whisper-base](https://huggingface.co/openai/whisper-base) | 17.67 | 19.82M | 52.00M |
| [lite-whisper-base-acc](https://huggingface.co/efficient-speech/lite-whisper-base-acc) | 19.07 | 18.64M | 52.00M |
| [lite-whisper-base](https://huggingface.co/efficient-speech/lite-whisper-base) | 19.71 | 17.44M | 52.00M |
| [lite-whisper-base-fast](https://huggingface.co/efficient-speech/lite-whisper-base-fast) | 23.05 | 16.07M | 52.00M |
| | | | |
| [whisper-tiny](https://huggingface.co/openai/whisper-tiny) | 22.01 | 7.63M | 29.55M |
| [lite-whisper-tiny-acc](https://huggingface.co/efficient-speech/lite-whisper-tiny-acc) | 22.97 | 7.41M | 29.55M |
| [lite-whisper-tiny](https://huggingface.co/efficient-speech/lite-whisper-tiny) | 23.95 | 7.00M | 29.55M |
| [lite-whisper-tiny-fast](https://huggingface.co/efficient-speech/lite-whisper-tiny-fast) | 27.09 | 6.48M | 29.55M |
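
To reproduce the metric in spirit, any standard WER implementation works. Here is a simplified scoring sketch using the `jiwer` package (our choice for illustration; the official evaluation's ESB splits and text normalization may differ):

```python
# Word error rate: (substitutions + insertions + deletions) / reference words.
import jiwer

references = [
    "the birch canoe slid on the smooth planks",
    "glue the sheet to the dark blue background",
]
hypotheses = [
    "the birch canoe slid on smooth planks",       # one deletion
    "glue the sheet to the dark blue background",  # exact match
]

print(f"WER: {jiwer.wer(references, hypotheses):.2%}")
```

The table's numbers come from the authors' evaluation; this sketch only shows how the metric itself is computed.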
## Citation

If you use LiteASR in your research, please cite the following paper:

```
@misc{kamahori2025liteasrefficientautomaticspeech,
  title={LiteASR: Efficient Automatic Speech Recognition with Low-Rank Approximation},
  author={Keisuke Kamahori and Jungo Kasai and Noriyuki Kojima and Baris Kasikci},
  year={2025},
  eprint={2502.20583},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
  url={https://arxiv.org/abs/2502.20583},
}
```