---
license: apache-2.0
language:
- de
- en
tags:
- spectrum
- continuous pretraining
- sft
- dpo
pipeline_tag: text-generation
base_model: VAGOsolutions/SauerkrautLM-1.5b
---

# QuantFactory/SauerkrautLM-1.5b-GGUF
This is a quantized version of [VAGOsolutions/SauerkrautLM-1.5b](https://huggingface.co/VAGOsolutions/SauerkrautLM-1.5b) created using llama.cpp.
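
A minimal inference sketch with `llama-cpp-python` (assumptions: the package is installed, and the quantization filename pattern below matches a file actually published in this repo; adjust it to the file you want):

```python
# Minimal sketch: load a GGUF quantization of SauerkrautLM-1.5b and run a chat completion.
# The filename pattern is an assumption; pick the quantization file you actually use.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="QuantFactory/SauerkrautLM-1.5b-GGUF",
    filename="*Q4_K_M.gguf",  # glob matched against files in the repo; adjust as needed
    n_ctx=4096,               # context window to allocate
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Erkläre kurz, was Spectrum CPT ist."}],
    max_tokens=200,
)
print(out["choices"][0]["message"]["content"])
```

The same GGUF file can also be run directly with the llama.cpp CLI or any other GGUF-compatible runtime.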

# Model Description

![SauerkrautLM-1.5b](https://vago-solutions.ai/wp-content/uploads/2024/06/SauerkrautLM-1.5b-pic.png "SauerkrautLM-1.5b")

## VAGO solutions SauerkrautLM-1.5b

**DEMO Model** - *to showcase the potential of resource-efficient Continuous Pre-Training of Large Language Models using **Spectrum CPT***

Introducing **SauerkrautLM-1.5b** – our Sauerkraut version of the powerful [Qwen/Qwen2-1.5B](https://huggingface.co/Qwen/Qwen2-1.5B)!

- Continuous Pretraining on German Data with [**Spectrum**](https://github.com/cognitivecomputations/spectrum) CPT (by Eric Hartford, Lucas Atkins, Fernando Fernandes Neto and David Golchinfar) **targeting 25% of the layers**
- Finetuned with SFT
- Aligned with DPO

# Table of Contents
1. [Overview of all SauerkrautLM-1.5b](#all-SauerkrautLM-1.5b)
2. [Model Details](#model-details)
   - [Training procedure](#training-procedure)
3. [Evaluation](#evaluation)
4. [Disclaimer](#disclaimer)
5. [Contact](#contact)
6. [Collaborations](#collaborations)
7. [Acknowledgement](#acknowledgement)

## All SauerkrautLM-1.5b

| Model | HF | EXL2 | GGUF | AWQ |
|-------|-------|-------|-------|-------|
| SauerkrautLM-1.5b | [Link](https://huggingface.co/VAGOsolutions/SauerkrautLM-1.5b) | coming soon | [Link](https://huggingface.co/VAGOsolutions/SauerkrautLM-1.5b.GGUF) | coming soon |

## Model Details

**SauerkrautLM-1.5b**
- **Model Type:** SauerkrautLM-1.5b is a finetuned model based on [Qwen/Qwen2-1.5B](https://huggingface.co/Qwen/Qwen2-1.5B)
- **Language(s):** German, English
- **License:** Apache 2.0
- **Contact:** [VAGO solutions](https://vago-solutions.ai)

## Training Procedure

This model is a demo intended to showcase the potential of resource-efficient training of large language models using Spectrum CPT. Here is a brief outline of the procedure:

**Continuous Pre-training (CPT) on German Data**:

Using Spectrum by Eric Hartford, Lucas Atkins, Fernando Fernandes Neto, and David Golchinfar, training targeted 25% of the model's layers. This approach allowed significant resource savings:
- Spectrum with 25% layer targeting consumed 309.78 GB of VRAM at a batch size of 2048.
- Full fine-tuning targeting 100% of the layers used 633.55 GB at the same batch size.

Using Spectrum, we enhanced the German language capabilities of the Qwen2-1.5B model via CPT while achieving substantial resource savings. Spectrum enabled faster training and lower costs. By not targeting all layers for CPT, we prevented substantial performance degradation in the model's primary language (English) while markedly improving its German proficiency.

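To make the layer-targeting idea concrete, here is an illustrative PyTorch/transformers sketch that freezes everything except a chosen subset of decoder layers before continued pretraining. This is not the Spectrum implementation itself; Spectrum selects which layers to train via a signal-to-noise analysis, and the layer indices below are only placeholders.

```python
# Illustrative sketch of layer-targeted CPT (not the actual Spectrum code).
# Assumptions: transformers and torch are installed; Qwen2-1.5B has 28 decoder layers;
# the layer indices below are placeholders, whereas Spectrum picks its ~25% of layers
# from a signal-to-noise analysis of the weights.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2-1.5B")

# Hypothetical choice: train 7 of the 28 decoder layers (25%).
target_layers = {3, 7, 11, 15, 19, 23, 27}

for name, param in model.named_parameters():
    # A parameter stays trainable only if it belongs to one of the targeted decoder layers.
    param.requires_grad = any(f"model.layers.{i}." in name for i in target_layers)

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"Trainable parameters: {trainable / total:.1%}")
# The partially frozen model can then go through a standard Trainer loop for German CPT.
```

Only the targeted parameters need gradients and optimizer state, which is where most of the VRAM savings reported above come from.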

The model was further trained on **6.1 billion German tokens**, at a GPU rental cost of $1,152 for the CPT stage.
In the German RAG evaluation it is on par with 8-billion-parameter models and, at 1.5 billion parameters, is well suited for mobile deployment on smartphones and tablets.

Despite the large volume of German CPT data, the model competes well against the Qwen2-1.5B-Instruct model and performs significantly better in German.

**Post-CPT Training**:

The model underwent 3 epochs of Supervised Fine-Tuning (SFT) with 700K samples.
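
As a rough illustration of this stage, a minimal SFT sketch with TRL; the checkpoint path, dataset name, and column layout are placeholders, and TRL argument names vary somewhat between versions:

```python
# Minimal SFT sketch with TRL (placeholder checkpoint and dataset, not the actual training setup).
from datasets import load_dataset
from trl import SFTConfig, SFTTrainer

# Hypothetical chat-formatted dataset with a "messages" column.
dataset = load_dataset("your-org/german-sft-data", split="train")

config = SFTConfig(
    output_dir="sauerkraut-sft",
    num_train_epochs=3,              # the card reports 3 SFT epochs
    per_device_train_batch_size=4,
)

trainer = SFTTrainer(
    model="path/to/german-cpt-checkpoint",  # placeholder for the post-CPT model
    args=config,
    train_dataset=dataset,
)
trainer.train()
```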

**Further Steps**:

The model was aligned with Direct Preference Optimization (DPO) using 70K samples.
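
And a corresponding DPO sketch, again with placeholder names. TRL expects preference data with `prompt`, `chosen`, and `rejected` columns, and the tokenizer keyword differs across TRL releases:

```python
# Minimal DPO sketch with TRL (placeholder checkpoint and dataset, not the actual training setup).
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model = AutoModelForCausalLM.from_pretrained("path/to/sft-checkpoint")  # placeholder
tokenizer = AutoTokenizer.from_pretrained("path/to/sft-checkpoint")

# Hypothetical preference dataset with "prompt", "chosen", "rejected" columns.
dataset = load_dataset("your-org/german-dpo-preferences", split="train")

config = DPOConfig(output_dir="sauerkraut-dpo", beta=0.1)

trainer = DPOTrainer(
    model=model,                 # a reference copy is created internally when ref_model is omitted
    args=config,
    train_dataset=dataset,
    processing_class=tokenizer,  # called `tokenizer` in older TRL releases
)
trainer.train()
```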

## Objective and Results

The primary goal of this training was to demonstrate that with Spectrum CPT targeting 25% of the layers, even a relatively small model with 1.5 billion parameters can significantly enhance its language capabilities while using only a fraction of the resources of the classic CPT approach.
This method has an even more pronounced effect on larger models. It is feasible to teach a model a new language by training just a quarter of its available layers.

The model has substantially improved German skills, as demonstrated in RAG evaluations and numerous recognized benchmarks. In some English benchmarks it even surpasses the Qwen2-1.5B-Instruct model.

**Spectrum CPT can efficiently teach a new language to a large language model (LLM) while preserving the majority of its previously acquired knowledge.**

Stay tuned for the next big models employing Spectrum CPT!

**NOTE**

For a demo, the model's performance is sufficient.
For productive use, SauerkrautLM-1.5b can be trained on additional German tokens as required, further strengthening its German while affecting existing performance only to a limited degree, since just 25% of the layers are targeted.
SauerkrautLM-1.5b offers an excellent starting point for such continued training.

## Evaluation

**VRAM usage Spectrum CPT vs. FFT CPT - with a batch size of 2048**

![SauerkrautLM-1.5b_vram](https://vago-solutions.ai/wp-content/uploads/2024/06/VRAM-Usage_new.png "SauerkrautLM-1.5b_vram")

**Open LLM Leaderboard H6:**

![SauerkrautLM-1.5b_h6](https://vago-solutions.ai/wp-content/uploads/2024/06/H6-Benchmarks.png "SauerkrautLM-1.5b_h6")

**German H4:**

![SauerkrautLM-1.5b_h4](https://vago-solutions.ai/wp-content/uploads/2024/06/H4_ger_new.png "SauerkrautLM-1.5b_h4")

**German RAG:**

![SauerkrautLM-1.5b_ger_rag](https://vago-solutions.ai/wp-content/uploads/2024/06/ger_rag_eval.png "SauerkrautLM-1.5b_ger_rag")

**GPT4All:**

![SauerkrautLM-1.5b_gpt4all](https://vago-solutions.ai/wp-content/uploads/2024/06/GPT4All-1.png "SauerkrautLM-1.5b_gpt4all")

**AGIEval:**

![SauerkrautLM-1.5b_agieval](https://vago-solutions.ai/wp-content/uploads/2024/06/AGIEval-1.png "SauerkrautLM-1.5b_agieval")

## Disclaimer
We must inform users that despite our best efforts in data cleansing, the possibility of uncensored content slipping through cannot be entirely ruled out.
However, we cannot guarantee consistently appropriate behavior. Therefore, if you encounter any issues or come across inappropriate content, we kindly request that you inform us through the contact information provided.
Additionally, it is essential to understand that the licensing of these models does not constitute legal advice. We are not responsible for the actions of third parties who use our models.

## Contact
If you are interested in customized LLMs for business applications, please get in contact with us via our website. We are also grateful for your feedback and suggestions.

## Collaborations
We are also keenly seeking support and investment for our startup, VAGO solutions, where we continuously advance the development of robust language models designed to address a diverse range of purposes and requirements. If the prospect of collaboratively navigating future challenges excites you, we warmly invite you to reach out to us at [VAGO solutions](https://vago-solutions.de/#Kontakt).

## Acknowledgement
Many thanks to [Qwen](https://huggingface.co/Qwen) for providing such a valuable model to the open-source community.