Wiedervereinigung-7b-dpo-laser

A lasered, DPO-trained dare_ties merge of some of the best German 7B models.

Since the original models are based on Mistral (three of them on the brilliant German LeoLM/leo-mistral-hessianai-7b), they are reunited in this merged model. Hence the name; no right-wing or nationalistic ideas involved :-). To improve result quality, the merge was DPO-trained with a German translation of intel-orca-dpo using our German fork of LLaMA-Factory. After that, the model received a laserRMT treatment with German datasets.

Wiedervereinigung-7b itself is a LazyMergekit merge of:

- DiscoResearch/DiscoLM_German_7b_v1
- DRXD1000/Phoenix
- VAGOsolutions/SauerkrautLM-7b-v1-mistral
- malteos/hermeo-7b

All the actual heavy lifting has been done by the creators of these models.

🧩 Configuration

models:
  - model: LeoLM/leo-mistral-hessianai-7b
    # No parameters necessary for base model
  - model: DiscoResearch/DiscoLM_German_7b_v1
    parameters:
      density: 0.6
      weight: 0.25
  - model: DRXD1000/Phoenix
    parameters:
      density: 0.6
      weight: 0.25
  - model: VAGOsolutions/SauerkrautLM-7b-v1-mistral
    parameters:
      density: 0.6
      weight: 0.25
  - model: malteos/hermeo-7b
    parameters:
      density: 0.6
      weight: 0.25
merge_method: dare_ties
base_model: LeoLM/leo-mistral-hessianai-7b
parameters:
  int8_mask: true
dtype: bfloat16
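
To reproduce the merge, this configuration can be saved to a YAML file and passed to mergekit. The snippet below is a minimal sketch; the file name, output directory and flags are assumptions rather than part of the original setup.

# Assumes mergekit is installed, e.g. pip install mergekit
# Save the configuration above as config.yaml, then run:
mergekit-yaml config.yaml ./Wiedervereinigung-7b-merged --cuda --copy-tokenizer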

mt-bench-de

Using laser and DPO training seems to improve the results:

{
    "first_turn": 7.51875,
    "second_turn": 6.4,
    "categories": {
        "writing": 8.425,
        "roleplay": 8.025,
        "reasoning": 5.45,
        "math": 3.2,
        "coding": 4.95,
        "extraction": 7.525,
        "stem": 8.775,
        "humanities": 9.325
    },
    "average": 6.959375
}
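
The reported average is simply the mean of the two turn scores: (7.51875 + 6.4) / 2 = 6.959375.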

πŸ’» Usage

!pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "mayflowergmbh/Wiedervereinigung-7b-dpo-laser"
messages = [{"role": "user", "content": "Was ist ein large language model?"}]

tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
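
Since the checkpoint weights are stored in bfloat16, torch_dtype=torch.bfloat16 may be preferable to float16 on GPUs that support it; float16 is used above as a broadly compatible default.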