DavidGF committed
Commit e062dc0
Parent: acacd61

Update README.md

Files changed (1)
  1. README.md +5 -4
README.md CHANGED
@@ -15,7 +15,7 @@ tags:
  - english
  ---
 
- ![SauerkrautLM]( "SauerkrautLM-7b-LaserChat")
+ ![SauerkrautLM](https://vago-solutions.de/wp-content/uploads/2024/02/Sauerkraut_Laserchat.png "SauerkrautLM-7b-LaserChat")
  ## VAGO solutions SauerkrautLM-7b-LaserChat
  Introducing **SauerkrautLM-7b-LaserChat** – our Sauerkraut version of the powerful [openchat/openchat-3.5-0106](https://huggingface.co/openchat/openchat-3.5-0106)!
 
@@ -32,7 +32,7 @@ Without their independent research collaboration this model release would not ha
  1. [Overview of all SauerkrautLM-7b-LaserChat models](#all-sauerkrautlm-7b-laserchat-models)
  2. [Model Details](#model-details)
  - [Prompt template](#prompt-template)
- - [Proceed of the training](#proceed-of-the-training)
+ - [Training procedure](#training-procedure)
  3. [Evaluation](#evaluation)
  5. [Disclaimer](#disclaimer)
  6. [Contact](#contact)
@@ -55,10 +55,11 @@ Without their independent research collaboration this model release would not ha
 
  ### Training procedure:
 
- Anyone who has attempted or succeeded in fine-tuning a model is aware of the difficulty in nudging it towards a specific skill, such as mastering new languages, as well as the challenges associated with achieving significant improvements in performance. Experimenting with a novel training strategy and Spherical Linear Interpolation alongside a lasered version of the model itself has proven to be both fascinating and revealing.
+ Anyone who has attempted or succeeded in fine-tuning a model knows how difficult it is to nudge it towards a specific skill, such as mastering a new language, and how challenging it is to achieve significant improvements in performance.
+ Experimenting with a novel training strategy and Spherical Linear Interpolation (SLERP) alongside a lasered version of the model itself has proven to be both fascinating and revealing.
  Furthermore, we developed one iteration of the model using our entire SFT Sauerkraut dataset and two additional iterations using subsets of the full dataset—one focused on enhancing MMLU and TQA capabilities, and the other on boosting GSM8K and Winogrande skills.
  After optimizing our primary SFT model, we applied a similar strategy to our new DPO dataset, dividing it into further subsets. We trained one model on the entire dataset again and two more on these specialized subsets. Actively monitoring and intervening based on a decrease in perplexity on the GSM8K benchmark led to an overall improvement in performance, especially in math abilities, without detracting from performance on other benchmarks—a task that is typically quite difficult. This process not only helps in understanding the effectiveness of Spherical Linear Interpolation but also introduces a new method for refining models with enhanced skills through a cycle of targeted data selection (Laser on data(x)) + SLERP, followed by a subsequent focus on different data (Laser again on data(y)).
- Additionally, we integrated a novel training strategy on the SFT and DPO training process inspired by the LaserRMT approach, were we partially freeze the model according to a laser-like analysis aiming to navigate and optimize the trade-offs highlighted by the no free lunch theorem. This innovative training method effectively prevents the significant problem of forgetting previously acquired knowledge. This aspect is particularly crucial when attempting to teach the model specific skills, such as a new language, where traditionally, the model might lose a considerable amount of its prior knowledge and exhibit a decline in overall intelligence. Concrete information on how the new training strategy works and the advantages it offers over conventional training methods will soon be published in a detailed paper by the LaserRMT research group.
+ Additionally, we integrated a novel training strategy into the SFT and DPO training process, where we partially freeze the model according to a laser-like analysis, aiming to navigate and optimize the trade-offs highlighted by the no free lunch theorem. This training method effectively prevents the significant problem of forgetting previously acquired knowledge, which is particularly crucial when teaching the model specific skills, such as a new language, where the model might otherwise lose a considerable amount of its prior knowledge and exhibit a decline in overall intelligence. Concrete information on how the new training strategy works and the advantages it offers over conventional training methods will soon be published in a detailed paper by the LaserRMT research group.
 
 
  We improved the German language skills of this model. Nevertheless, certain formulations may occur that are not entirely correct.
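The merging step described in the training procedure relies on Spherical Linear Interpolation (SLERP) between model iterations, but the README contains no code. The sketch below is a minimal illustration of SLERP applied parameter by parameter to two architecture-identical checkpoints; the function names, the flattening of each tensor, and the interpolation factor `t` are assumptions made for illustration, not the authors' actual merge configuration (dedicated tooling such as mergekit is commonly used for this in practice).

```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors, treated as flat vectors."""
    a_flat, b_flat = a.flatten().float(), b.flatten().float()
    a_unit = a_flat / (a_flat.norm() + eps)
    b_unit = b_flat / (b_flat.norm() + eps)
    # Angle between the two normalized weight vectors.
    omega = torch.acos(torch.clamp(torch.dot(a_unit, b_unit), -1.0, 1.0))
    sin_omega = torch.sin(omega)
    if sin_omega.abs() < eps:
        # Nearly parallel vectors: fall back to plain linear interpolation.
        return ((1.0 - t) * a_flat + t * b_flat).reshape(a.shape).to(a.dtype)
    merged = (torch.sin((1.0 - t) * omega) / sin_omega) * a_flat + (torch.sin(t * omega) / sin_omega) * b_flat
    return merged.reshape(a.shape).to(a.dtype)

def slerp_state_dicts(sd_a: dict, sd_b: dict, t: float = 0.5) -> dict:
    """Merge two architecture-identical state dicts parameter by parameter."""
    return {name: slerp(t, sd_a[name], sd_b[name]) for name in sd_a}

# Hypothetical usage with a full-dataset iteration and a math-focused subset iteration:
# merged = slerp_state_dicts(model_full.state_dict(), model_math.state_dict(), t=0.5)
# model_full.load_state_dict(merged)
```

Interpolating along the hypersphere rather than along a straight line keeps the norm of the merged weights closer to that of the parent checkpoints, which is the usual motivation for preferring SLERP over plain weight averaging.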
 
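The procedure also mentions actively monitoring perplexity on the GSM8K benchmark between stages and intervening based on its trend. The exact evaluation harness and intervention rule are not given in the README, so the following is only a rough sketch of such a check against the public GSM8K test split; the sample count, the sequence length, and the repository id in the commented-out usage are assumptions.

```python
import math
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

def gsm8k_perplexity(model, tokenizer, n_samples: int = 64, device: str = "cuda") -> float:
    """Approximate perplexity of a causal LM on a few GSM8K question+answer texts."""
    data = load_dataset("gsm8k", "main", split=f"test[:{n_samples}]")
    total_nll, total_tokens = 0.0, 0
    model.eval()
    with torch.no_grad():
        for row in data:
            text = row["question"] + "\n" + row["answer"]
            enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=1024).to(device)
            out = model(**enc, labels=enc["input_ids"])  # loss = mean next-token NLL
            n_tok = enc["input_ids"].numel()
            total_nll += out.loss.item() * n_tok  # approximate token-weighted sum
            total_tokens += n_tok
    return math.exp(total_nll / total_tokens)

# Hypothetical usage between training stages (repository id assumed):
# tokenizer = AutoTokenizer.from_pretrained("VAGOsolutions/SauerkrautLM-7b-LaserChat")
# model = AutoModelForCausalLM.from_pretrained("VAGOsolutions/SauerkrautLM-7b-LaserChat").to("cuda")
# print("GSM8K perplexity:", gsm8k_perplexity(model, tokenizer))
```

Running such a check after each SFT/DPO stage and after each merge is one simple way to implement the kind of monitoring and intervention the authors describe.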
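Finally, the partial freezing "according to a laser-like analysis" is described only at a high level, and the LaserRMT paper it refers to had not yet been published at the time of this commit, so the authors' actual criterion is unknown. The sketch below is a purely hypothetical stand-in: it scores each Linear layer with a crude SVD-based signal-to-noise proxy and freezes layers below an arbitrary threshold before SFT/DPO. The proxy, the threshold, and the choice to freeze whole Linear modules are all assumptions, not the method the README alludes to.

```python
import torch
from torch import nn

def snr_proxy(weight: torch.Tensor, k: int = 8) -> float:
    """Crude signal-to-noise proxy: energy in the top-k singular values vs. the remainder."""
    s = torch.linalg.svdvals(weight.detach().float())
    return (s[:k].sum() / (s[k:].sum() + 1e-8)).item()

def freeze_low_snr_linears(model: nn.Module, threshold: float = 1.0) -> list[str]:
    """Freeze every nn.Linear whose weight falls below the (arbitrary) SNR threshold."""
    frozen = []
    for name, module in model.named_modules():
        if isinstance(module, nn.Linear) and snr_proxy(module.weight) < threshold:
            for p in module.parameters():
                p.requires_grad = False
            frozen.append(name)
    return frozen

# Hypothetical usage before an SFT or DPO run:
# frozen = freeze_low_snr_linears(model, threshold=1.0)
# print(f"Froze {len(frozen)} linear layers:", frozen[:5], "...")
```

Whatever the real criterion turns out to be, the point the README makes is the same: leaving part of the network untouched during fine-tuning is what limits the forgetting of previously acquired knowledge while new skills, such as German, are trained in.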
 