SauerkrautLM

VAGO solutions SauerkrautLM-14b-MoE-LaserChat

Introducing SauerkrautLM-14b-MoE-LaserChat – our 14b (2x7b) MoE version built from the powerful SauerkrautLM-7b-LaserChat and yam-peleg/Experiment26-7B!

By combining the two models, we were able to significantly improve both German and English language skills. In addition, SauerkrautLM-7b-LaserChat acts as an adapter for Experiment26-7B, so Experiment26-7B benefits from the chat capabilities of SauerkrautLM-7b-LaserChat, while SauerkrautLM-7b-LaserChat in turn benefits from the knowledge and creativity of Experiment26-7B.
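As an illustrative aside (not from the original card): if the merged checkpoint follows the standard Mixtral-style MoE configuration that 2x7b merges commonly use, the expert layout can be inspected from the model config. The attribute names below assume that layout:

```python
# Minimal sketch, assuming a Mixtral-style MoE config (common for 2x7b merges).
from transformers import AutoConfig

config = AutoConfig.from_pretrained("VAGOsolutions/SauerkrautLM-14b-MoE-LaserChat")

# Expected: 2 expert FFNs per layer, one contributed by each source model.
print("experts per layer:", config.num_local_experts)
# How many experts the router activates for each token.
print("experts per token:", config.num_experts_per_tok)
```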

The model SauerkrautLM-14b-MoE-LaserChat is a joint effort between VAGO solutions and Hyperspace.ai. Much appreciation goes to the tremendous research effort of Fernando Fernandes Neto, David Golchinfar and Eric Hartford on their laserRMT approach. Without their independent research collaboration this model release would not have been possible.

Table of Contents

  1. Overview of all SauerkrautLM-14b-MoE-LaserChat models
  2. Model Details
  3. Evaluation
  4. Disclaimer
  5. Contact
  6. Collaborations
  7. Acknowledgement

All SauerkrautLM-14b-MoE-LaserChat Models

| Model | HF | GPTQ | GGUF | AWQ |
|-------|----|------|------|-----|
| SauerkrautLM-14b-MoE-LaserChat | Link | coming soon | coming soon | coming soon |

Model Details

SauerkrautLM-14b-MoE-LaserChat

We further improved the German language skills of this model. Nevertheless, some formulations that are not entirely correct may still occur.

Prompt Template:

```
GPT4 Correct User: Hallo, wie geht es dir?<|end_of_turn|>GPT4 Correct Assistant: Hallo! Ich bin ein künstliches Intelligenzsystem und habe keine persönlichen Gefühle oder körperliche Zustände. Wie kann ich Ihnen helfen?<|end_of_turn|>GPT4 Correct User: Ich benötige nur einen kurzen Satz, den ich in das Prompt Template veröffentlichen kann.<|end_of_turn|>GPT4 Correct Assistant:
```

```
GPT4 Correct User: Hello<|end_of_turn|>GPT4 Correct Assistant: Hello! How can I help you today? If you have any questions or need assistance, feel free to ask.<|end_of_turn|>GPT4 Correct User: I just need a short sentence to post in the prompt template.<|end_of_turn|>GPT4 Correct Assistant:
```
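For illustration, here is a minimal inference sketch using the template above. It assumes a standard transformers setup; the generation settings are placeholders, not the card's recommended values:

```python
# Minimal inference sketch for the "GPT4 Correct" template shown above.
# Assumes transformers (and torch) are installed; settings are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "VAGOsolutions/SauerkrautLM-14b-MoE-LaserChat"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto", torch_dtype="auto")

# Build the turn format by hand, ending with the assistant tag so the
# model completes the assistant's reply.
prompt = "GPT4 Correct User: Hello<|end_of_turn|>GPT4 Correct Assistant:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)

# Strip the prompt tokens and decode only the newly generated reply.
reply = tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True)
print(reply)
```

If the tokenizer ships a chat template, `tokenizer.apply_chat_template` can build the same string from a list of messages instead of manual concatenation.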

Evaluation

Open LLM Leaderboard:

benchmarked on lm-evaluation-harness 0.4.1 (a reproduction sketch follows the table below)

| Metric | Value |
|--------|------:|
| Avg. | 71.65 |
| ARC (25-shot) | 68.09 |
| HellaSwag (10-shot) | 84.78 |
| MMLU (5-shot) | 63.59 |
| TruthfulQA (0-shot) | 58.57 |
| Winogrande (5-shot) | 80.74 |
| GSM8K (5-shot) | 74.15 |
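As a reproduction sketch: the task name and few-shot count below mirror the ARC row above, while the dtype and batch-size settings are assumptions rather than the original run's configuration. lm-evaluation-harness 0.4.x exposes a Python entry point for this:

```python
# Sketch: re-running the 25-shot ARC score with lm-evaluation-harness 0.4.1.
# simple_evaluate is the harness's public Python API; dtype and batch size
# here are illustrative assumptions, not settings from the original run.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=VAGOsolutions/SauerkrautLM-14b-MoE-LaserChat,dtype=bfloat16",
    tasks=["arc_challenge"],
    num_fewshot=25,
    batch_size="auto",
)
print(results["results"]["arc_challenge"])
```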

Performance

| Model | AGIEval | GPT4All | TruthfulQA | BigBench | Average ⬇️ |
|-------|--------:|--------:|-----------:|---------:|-----------:|
| VAGOsolutions/SauerkrautLM-14b-MoE-LaserChat | 44.38 | 74.76 | 58.57 | 47.98 | 56.42 |
| VAGOsolutions/SauerkrautLM-Gemma-7b | 37.5 | 72.46 | 61.24 | 45.33 | 54.13 |
| zephyr-7b-beta | 37.52 | 71.77 | 55.26 | 39.77 | 51.08 |
| zephyr-7b-gemma-v0.1 | 34.22 | 66.37 | 52.19 | 37.10 | 47.47 |
| google/gemma-7b-it | 21.33 | 40.84 | 41.70 | 30.25 | 33.53 |
Details of AGIEval, GPT4All, TruthfulQA, BigBench

AGIEval

| Tasks | Version | Filter | n-shot | Metric | Value | Stderr |
|-------|--------:|--------|-------:|--------|------:|-------:|
| agieval_sat_math | 1 | none | None | acc | 0.3727 | ± 0.0327 |
| | | none | None | acc_norm | 0.3045 | ± 0.0311 |
| agieval_sat_en_without_passage | 1 | none | None | acc | 0.4806 | ± 0.0349 |
| | | none | None | acc_norm | 0.4612 | ± 0.0348 |
| agieval_sat_en | 1 | none | None | acc | 0.7816 | ± 0.0289 |
| | | none | None | acc_norm | 0.7621 | ± 0.0297 |
| agieval_lsat_rc | 1 | none | None | acc | 0.6134 | ± 0.0297 |
| | | none | None | acc_norm | 0.6059 | ± 0.0298 |
| agieval_lsat_lr | 1 | none | None | acc | 0.5431 | ± 0.0221 |
| | | none | None | acc_norm | 0.5216 | ± 0.0221 |
| agieval_lsat_ar | 1 | none | None | acc | 0.2435 | ± 0.0284 |
| | | none | None | acc_norm | 0.2174 | ± 0.0273 |
| agieval_logiqa_en | 1 | none | None | acc | 0.3871 | ± 0.0191 |
| | | none | None | acc_norm | 0.4101 | ± 0.0193 |
| agieval_aqua_rat | 1 | none | None | acc | 0.3031 | ± 0.0289 |
| | | none | None | acc_norm | 0.2677 | ± 0.0278 |

Average: 44.38%

GPT4All

| Tasks | Version | Filter | n-shot | Metric | Value | Stderr |
|-------|--------:|--------|-------:|--------|------:|-------:|
| arc_challenge | 1 | none | None | acc | 0.5947 | ± 0.0143 |
| | | none | None | acc_norm | 0.6280 | ± 0.0141 |
| arc_easy | 1 | none | None | acc | 0.8506 | ± 0.0073 |
| | | none | None | acc_norm | 0.8468 | ± 0.0074 |
| boolq | 2 | none | None | acc | 0.8761 | ± 0.0058 |
| hellaswag | 1 | none | None | acc | 0.6309 | ± 0.0048 |
| | | none | None | acc_norm | 0.8323 | ± 0.0037 |
| openbookqa | 1 | none | None | acc | 0.326 | ± 0.0210 |
| | | none | None | acc_norm | 0.470 | ± 0.0223 |
| piqa | 1 | none | None | acc | 0.8237 | ± 0.0089 |
| | | none | None | acc_norm | 0.8335 | ± 0.0087 |
| winogrande | 1 | none | None | acc | 0.7466 | ± 0.0122 |

Average: 74.76%

TruthfulQA

| Tasks | Version | Filter | n-shot | Metric | Value | Stderr |
|-------|--------:|--------|-------:|--------|------:|-------:|
| truthfulqa_mc2 | 2 | none | 0 | acc | 0.5857 | ± 0.0141 |

Average: 58.57%

BigBench

| Tasks | Version | Filter | n-shot | Metric | Value | Stderr |
|-------|--------:|--------|-------:|--------|------:|-------:|
| bbh_zeroshot_tracking_shuffled_objects_three_objects | 2 | flexible-extract | 0 | exact_match | 0.3120 | ± 0.0294 |
| bbh_zeroshot_tracking_shuffled_objects_seven_objects | 2 | flexible-extract | 0 | exact_match | 0.1560 | ± 0.0230 |
| bbh_zeroshot_tracking_shuffled_objects_five_objects | 2 | flexible-extract | 0 | exact_match | 0.1720 | ± 0.0239 |
| bbh_zeroshot_temporal_sequences | 2 | flexible-extract | 0 | exact_match | 0.3960 | ± 0.0310 |
| bbh_zeroshot_sports_understanding | 2 | flexible-extract | 0 | exact_match | 0.8120 | ± 0.0248 |
| bbh_zeroshot_snarks | 2 | flexible-extract | 0 | exact_match | 0.5843 | ± 0.0370 |
| bbh_zeroshot_salient_translation_error_detection | 2 | flexible-extract | 0 | exact_match | 0.4640 | ± 0.0316 |
| bbh_zeroshot_ruin_names | 2 | flexible-extract | 0 | exact_match | 0.4360 | ± 0.0314 |
| bbh_zeroshot_reasoning_about_colored_objects | 2 | flexible-extract | 0 | exact_match | 0.5520 | ± 0.0315 |
| bbh_zeroshot_navigate | 2 | flexible-extract | 0 | exact_match | 0.5800 | ± 0.0313 |
| bbh_zeroshot_movie_recommendation | 2 | flexible-extract | 0 | exact_match | 0.7320 | ± 0.0281 |
| bbh_zeroshot_logical_deduction_three_objects | 2 | flexible-extract | 0 | exact_match | 0.5680 | ± 0.0314 |
| bbh_zeroshot_logical_deduction_seven_objects | 2 | flexible-extract | 0 | exact_match | 0.3920 | ± 0.0309 |
| bbh_zeroshot_logical_deduction_five_objects | 2 | flexible-extract | 0 | exact_match | 0.3960 | ± 0.0310 |
| bbh_zeroshot_geometric_shapes | 2 | flexible-extract | 0 | exact_match | 0.3800 | ± 0.0308 |
| bbh_zeroshot_disambiguation_qa | 2 | flexible-extract | 0 | exact_match | 0.6760 | ± 0.0297 |
| bbh_zeroshot_date_understanding | 2 | flexible-extract | 0 | exact_match | 0.4400 | ± 0.0315 |
| bbh_zeroshot_causal_judgement | 2 | flexible-extract | 0 | exact_match | 0.5882 | ± 0.0361 |

Average: 47.98%

Disclaimer

We must inform users that, despite our best efforts in data cleansing, the possibility of uncensored content slipping through cannot be entirely ruled out, and we therefore cannot guarantee consistently appropriate behavior. If you encounter any issues or come across inappropriate content, we kindly request that you inform us through the contact information provided. Additionally, it is essential to understand that the licensing of these models does not constitute legal advice. We are not responsible for the actions of third parties who use our models.

Contact

If you are interested in customized LLMs for business applications, please get in contact with us via our websites. We are also grateful for your feedback and suggestions.  

Collaborations

We are also keenly seeking support and investment for our startups, VAGO solutions and Hyperspace, where we continuously advance the development of robust language models designed to address a diverse range of purposes and requirements. If the prospect of collaboratively navigating future challenges excites you, we warmly invite you to reach out to us at VAGO solutions, Hyperspace.computer

Acknowledgement

Many thanks to yam-peleg for providing such a valuable model to the open-source community.
