mazesmazes committed on
Commit 8fdecc4 · verified · 1 Parent(s): 8978983

Model save

Files changed (1):
  1. README.md +61 -35
README.md CHANGED
@@ -1,54 +1,80 @@
  ---
- license: mit
- language:
- - en
- datasets:
- - speechbrain/LoquaciousSet
- base_model:
- - facebook/hubert-xlarge-ls960-ft
- - HuggingFaceTB/SmolLM3-3B
- pipeline_tag: automatic-speech-recognition
  tags:
- - asr
- - speech-recognition
- - audio
- - smollm
- - hubert
  ---

- # Tiny Audio Model Card

- This model was born from a simple idea: what if anyone could train a powerful, modern speech recognition model for the price of a few coffees? This model is the result of the [Tiny Audio course](https://github.com/alexkroman/tiny-audio/blob/main/docs/course/0-course-overview.md), a free, hands-on guide to building your own ASR system from scratch.

- ## The Story of this Model

- This model isn't the product of a massive research lab with an unlimited budget. It's the result of a 24-hour training run on a single GPU, made possible by an efficient projector-only training approach. By combining the strengths of a massive pretrained audio encoder (`facebook/hubert-xlarge-ls960-ft`) and a powerful language model (`HuggingFaceTB/SmolLM3-3B`), and only training a small projector between them, we can create a high-quality ASR model with minimal resources.

- This model is a testament to the power of open-source and the incredible tools and models that are now available to everyone.
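The projector-only approach described in the removed card can be sketched as a small trainable module between the frozen encoder and the frozen LM. This is an illustrative sketch, not the repository's actual code: the class name, the MLP shape, and the dimensions (1280 for the HuBERT-XL hidden states, 2048 for the SmolLM3 embeddings) are assumptions.

```python
import torch
import torch.nn as nn

class AudioProjector(nn.Module):
    """Trainable bridge from audio-encoder hidden states to the LM embedding space.

    Hypothetical sizes: 1280 (HuBERT-XL hidden dim) -> 2048 (SmolLM3 embed dim).
    Both the encoder and the LM stay frozen; only this module gets gradients.
    """

    def __init__(self, encoder_dim: int = 1280, lm_dim: int = 2048):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(encoder_dim, lm_dim),
            nn.GELU(),
            nn.Linear(lm_dim, lm_dim),
        )

    def forward(self, audio_hidden: torch.Tensor) -> torch.Tensor:
        # (batch, frames, encoder_dim) -> (batch, frames, lm_dim)
        return self.proj(audio_hidden)

projector = AudioProjector()
# Only a few million parameters train, versus billions in the frozen backbones.
print(sum(p.numel() for p in projector.parameters()))
out = projector(torch.randn(2, 50, 1280))
print(out.shape)  # torch.Size([2, 50, 2048])
```

Because the gradient only flows through this small module, a single GPU and a day of compute are enough, which is the claim the paragraph above makes.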
 
- ## Intended Use

- This model is for you. It's for the curious, the builders, the learners. It's for anyone who wants to understand how modern AI works by getting their hands dirty. Use it to transcribe your podcasts, your meetings, your voice memos. But more importantly, use it as a starting point. Fork it, fine-tune it, break it, and make it your own.

- ## Performance

- This model achieves a Word Error Rate (WER) of **12.14%** on the LoquaciousSet test set. It's not perfect, but it's a solid baseline that you can build on. See how it compares to other models on the [community leaderboard](https://github.com/alexkroman/tiny-audio#leaderboard).
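WER counts word-level substitutions, deletions, and insertions against a reference transcript, divided by the number of reference words. A minimal dependency-free illustration of the metric (not the leaderboard's exact scoring script, which may normalize text first):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate via word-level Levenshtein distance."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(
                dp[i - 1][j] + 1,          # deletion
                dp[i][j - 1] + 1,          # insertion
                dp[i - 1][j - 1] + cost,   # substitution or match
            )
    return dp[len(ref)][len(hyp)] / len(ref)

# One substitution out of four reference words:
print(f"{wer('the quick brown fox', 'the quick brown box'):.2%}")  # 25.00%
```

A reported 12.14% WER therefore means roughly one word error per eight reference words on the test set.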
 
- ## How to Use

- ```python
- from transformers import pipeline

- pipe = pipeline("automatic-speech-recognition", model="mazesmazes/tiny-audio", trust_remote_code=True)

- result = pipe("path/to/audio.wav")
- print(result["text"])
- ```

- ## How to Get Involved

- This project is more than just a model; it's a community. Here's how you can get involved:

- - **Take the course**: The best way to start is to go through the [free 6-hour course](https://github.com/alexkroman/tiny-audio/blob/main/docs/course/0-course-overview.md) and train your own model.
- - **Share your results**: Add your model to the [leaderboard](https://github.com/alexkroman/tiny-audio#leaderboard) and share what you've learned.
- - **Join the conversation**: Ask questions, share your ideas, and connect with other builders in the [GitHub Discussions](https://github.com/alexkroman/tiny-audio/discussions).
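One practical note on the usage snippet above: HuBERT checkpoints such as `facebook/hubert-xlarge-ls960-ft` expect 16 kHz mono audio, so recordings at other rates should be resampled first. A rough sketch using plain linear interpolation (in practice `librosa.resample` or `torchaudio` gives better quality; the function name here is hypothetical):

```python
import numpy as np

def resample_linear(samples: np.ndarray, src_rate: int, dst_rate: int = 16000) -> np.ndarray:
    """Naive linear-interpolation resampler, adequate as a demo."""
    duration = len(samples) / src_rate
    n_out = int(round(duration * dst_rate))
    src_times = np.arange(len(samples)) / src_rate
    dst_times = np.arange(n_out) / dst_rate
    return np.interp(dst_times, src_times, samples).astype(np.float32)

# One second of a 440 Hz tone at 44.1 kHz, brought down to 16 kHz:
audio_44k = np.sin(2 * np.pi * 440 * np.arange(44100) / 44100).astype(np.float32)
audio_16k = resample_linear(audio_44k, 44100)
print(len(audio_16k))  # 16000
```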
 
 
  ---
+ library_name: transformers
  tags:
+ - generated_from_trainer
+ datasets:
+ - generator
+ model-index:
+ - name: tiny-audio
+   results: []
  ---
 
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # tiny-audio

+ This model is a fine-tuned version of [](https://huggingface.co/) on the generator dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.4543

+ ## Model description

+ More information needed

+ ## Intended uses & limitations

+ More information needed

+ ## Training and evaluation data

+ More information needed

+ ## Training procedure

+ ### Training hyperparameters

+ The following hyperparameters were used during training:
+ - learning_rate: 0.0001
+ - train_batch_size: 8
+ - eval_batch_size: 32
+ - seed: 123
+ - gradient_accumulation_steps: 3
+ - total_train_batch_size: 24
+ - optimizer: ADAMW_TORCH_FUSED with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
+ - lr_scheduler_type: cosine
+ - lr_scheduler_warmup_steps: 1000
+ - training_steps: 20000
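The cosine schedule with 1,000 warmup steps over 20,000 total steps can be reproduced in a few lines. This is a sketch of the standard linear-warmup/cosine-decay rule matching the hyperparameters above, not the exact Trainer internals:

```python
import math

LEARNING_RATE = 1e-4
WARMUP_STEPS = 1000
TRAINING_STEPS = 20000

def lr_at(step: int) -> float:
    """Linear warmup to the peak LR, then cosine decay to zero."""
    if step < WARMUP_STEPS:
        return LEARNING_RATE * step / WARMUP_STEPS
    progress = (step - WARMUP_STEPS) / (TRAINING_STEPS - WARMUP_STEPS)
    return LEARNING_RATE * 0.5 * (1 + math.cos(math.pi * progress))

print(lr_at(500))    # 5e-05 (halfway through warmup)
print(lr_at(1000))   # 0.0001 (peak)
print(lr_at(20000))  # 0.0 (end of training)
```

Note also that with `train_batch_size: 8` and `gradient_accumulation_steps: 3`, the effective batch is 8 × 3 = 24, which is where `total_train_batch_size: 24` comes from.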
 
+ ### Training results

+ | Training Loss | Epoch | Step  | Validation Loss |
+ |:-------------:|:-----:|:-----:|:---------------:|
+ | 8.8546        | 0.05  | 1000  | 3.8075          |
+ | 0.8286        | 0.1   | 2000  | 0.5193          |
+ | 0.7909        | 0.15  | 3000  | 0.4701          |
+ | 0.6955        | 0.2   | 4000  | 0.4581          |
+ | 0.599         | 0.25  | 5000  | 0.4434          |
+ | 0.6159        | 0.3   | 6000  | 0.4353          |
+ | 0.5764        | 0.35  | 7000  | 0.4260          |
+ | 0.602         | 0.05  | 8000  | 0.4298          |
+ | 0.5363        | 0.1   | 9000  | 0.4430          |
+ | 0.5643        | 0.15  | 10000 | 0.4636          |
+ | 0.5135        | 0.2   | 11000 | 0.4423          |
+ | 0.4419        | 0.25  | 12000 | 0.4473          |
+ | 0.4848        | 0.3   | 13000 | 0.4539          |
+ | 0.4692        | 0.35  | 14000 | 0.4481          |
+ | 0.5154        | 0.05  | 15000 | 0.4482          |
+ | 0.4736        | 0.1   | 16000 | 0.4522          |
+ | 0.5097        | 0.15  | 17000 | 0.4537          |
+ | 0.4729        | 0.2   | 18000 | 0.4542          |
+ | 0.4142        | 0.25  | 19000 | 0.4543          |
+ | 0.4718        | 0.3   | 20000 | 0.4543          |

+ ### Framework versions

+ - Transformers 4.57.1
+ - Pytorch 2.8.0+cu128
+ - Datasets 4.4.1
+ - Tokenizers 0.22.1