
VAGO solutions Llama-3-SauerkrautLM-70b-Instruct

Introducing Llama-3-SauerkrautLM-70b-Instruct – our Sauerkraut version of the powerful meta-llama/Meta-Llama-3-70B-Instruct!

The model Llama-3-SauerkrautLM-70b-Instruct is a joint effort between VAGO Solutions and Hyperspace.ai.

  • Aligned with DPO

Table of Contents

  1. Overview of all Llama-3-SauerkrautLM-70b-Instruct models
  2. Model Details
  3. Evaluation
  4. Disclaimer
  5. Contact
  6. Collaborations
  7. Acknowledgement

All SauerkrautLM-llama-3-70b-Instruct models

| Model                             | HF   | EXL2 | GGUF | AWQ  |
|-----------------------------------|------|------|------|------|
| Llama-3-SauerkrautLM-70b-Instruct | Link | Link | Link | Link |
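
For quick experimentation, here is a minimal inference sketch with the transformers chat pipeline. The Hugging Face repo id and generation settings below are illustrative assumptions, and a 70B model in bf16 needs roughly 140 GB of accelerator memory, so `device_map="auto"` shards it across available GPUs.

```python
# Minimal inference sketch (assumed repo id; settings are illustrative).
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="VAGOsolutions/Llama-3-SauerkrautLM-70b-Instruct",
    torch_dtype=torch.bfloat16,
    device_map="auto",  # shard the 70B model across available GPUs
)
messages = [
    {"role": "system", "content": "You are a helpful AI assistant."},
    {"role": "user", "content": "Wer hat das Sauerkraut erfunden?"},
]
out = pipe(messages, max_new_tokens=256)
print(out[0]["generated_text"][-1]["content"])  # assistant reply
```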

Model Details

SauerkrautLM-llama-3-70B-Instruct

Training procedure:

  • We trained this model with DPO fine-tuning for 1 epoch on 70k samples.

We noticeably improved the model's capabilities by training it on curated German data.
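
For orientation, the block below sketches what such a DPO run looks like with Hugging Face TRL. It is a minimal sketch under stated assumptions: the dataset file name, beta, and batch settings are illustrative, and the actual 70k preference pairs and training recipe are not published.

```python
# Sketch of a DPO fine-tuning stage with TRL; hyperparameters are illustrative.
# In practice a 70B model needs multi-GPU sharding (e.g. FSDP or DeepSpeed).
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "meta-llama/Meta-Llama-3-70B-Instruct"
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# Hypothetical preference file with "prompt"/"chosen"/"rejected" columns;
# the real 70k-sample DPO dataset is not public.
pairs = load_dataset("json", data_files="dpo_pairs.jsonl", split="train")

args = DPOConfig(
    output_dir="sauerkraut-dpo",
    num_train_epochs=1,             # the card states 1 epoch
    beta=0.1,                       # illustrative DPO temperature
    per_device_train_batch_size=1,
)
trainer = DPOTrainer(
    model=model,
    args=args,
    train_dataset=pairs,
    processing_class=tokenizer,     # tokenizer=... in older TRL releases
)
trainer.train()
```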

Prompt Template:

English:

<|begin_of_text|><|start_header_id|>system<|end_header_id|>

You are a helpful AI assistant.<|eot_id|><|start_header_id|>user<|end_header_id|>

Input<|eot_id|><|start_header_id|>assistant<|end_header_id|>

German:

<|begin_of_text|><|start_header_id|>system<|end_header_id|>

Du bist ein freundlicher und hilfreicher deutscher KI-Assistent.<|eot_id|><|start_header_id|>user<|end_header_id|>

Input<|eot_id|><|start_header_id|>assistant<|end_header_id|>
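
Both templates follow the standard Llama-3 chat format, so the tokenizer's built-in chat template reproduces them without concatenating special tokens by hand. A short sketch, assuming the repo id VAGOsolutions/Llama-3-SauerkrautLM-70b-Instruct:

```python
# Builds the German prompt shown above via the tokenizer's chat template.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("VAGOsolutions/Llama-3-SauerkrautLM-70b-Instruct")
messages = [
    {"role": "system", "content": "Du bist ein freundlicher und hilfreicher deutscher KI-Assistent."},
    {"role": "user", "content": "Input"},
]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(prompt)  # matches the German template above
```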

Evaluation

Open LLM Leaderboard:

Evaluated with lm-evaluation-harness v0.4.2

| Metric              | Value |
|---------------------|------:|
| Avg.                | 80.98 |
| ARC (25-shot)       | 74.31 |
| HellaSwag (10-shot) | 87.56 |
| MMLU (5-shot)       | 81.09 |
| TruthfulQA (0-shot) | 67.01 |
| Winogrande (5-shot) | 84.69 |
| GSM8K (5-shot)      | 91.20 |
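
As a rough guide to reproducing a single row, the sketch below uses the lm-evaluation-harness Python API. The repo id, dtype, and batch size are assumptions, and the exact Open LLM Leaderboard task configuration may differ from this plain invocation.

```python
# Sketch: score one leaderboard task (ARC, 25-shot) with lm-evaluation-harness v0.4.2.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=VAGOsolutions/Llama-3-SauerkrautLM-70b-Instruct,dtype=bfloat16",
    tasks=["arc_challenge"],
    num_fewshot=25,
    batch_size=1,
)
print(results["results"]["arc_challenge"])
```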

MT-Bench English

| Turn        |    Score |
|-------------|---------:|
| First turn  |  8.86875 |
| Second turn | 8.506329 |
| Average     | 8.688679 |

MT-Bench German

| Turn        |  Score |
|-------------|-------:|
| First turn  |  8.725 |
| Second turn |    8.5 |
| Average     | 8.6125 |

German RAG LLM Evaluation (corrected results after the fix in https://github.com/huggingface/lighteval/pull/171):

|                         Task                         |Version|Metric|Value|   |Stderr|
|------------------------------------------------------|------:|------|----:|---|-----:|
|all                                                   |       |acc   |0.980|±  |0.0034|
|community:german_rag_eval:_average:0                  |       |acc   |0.980|±  |0.0034|
|community:german_rag_eval:choose_context_by_question:0|      0|acc   |0.998|±  |0.0014|
|community:german_rag_eval:choose_question_by_context:0|      0|acc   |1.000|±  |0.0000|
|community:german_rag_eval:context_question_match:0    |      0|acc   |0.973|±  |0.0051|
|community:german_rag_eval:question_answer_match:0     |      0|acc   |0.949|±  |0.0070|

Disclaimer

Despite our best efforts in data cleansing, we cannot entirely rule out the possibility of uncensored content slipping through, nor can we guarantee consistently appropriate behavior. If you encounter any issues or come across inappropriate content, we kindly ask that you inform us through the contact information provided. Please also note that the licensing of these models does not constitute legal advice, and we are not responsible for the actions of third parties who use our models.

Contact

If you are interested in customized LLMs for business applications, please get in contact with us via our websites. We are also grateful for your feedback and suggestions.

Collaborations

We are also keenly seeking support and investment for our startups, VAGO solutions and Hyperspace, where we continuously advance the development of robust language models for a diverse range of purposes and requirements. If the prospect of collaboratively navigating future challenges excites you, we warmly invite you to reach out to us at VAGO solutions or Hyperspace.computer.

Acknowledgement

Many thanks to Meta for providing such a valuable model to the open-source community, and many thanks to redponike and cortecs for the quantized versions.
