This was just an experiment, and one that went badly: DPO training on a German translation of a math DPO dataset actually decreased the model's ability to do math. The scores below show the result, with the math category down at 2.65.
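
For context, such a run typically looks like the minimal sketch below, using TRL's `DPOTrainer`. The base model, dataset name, and hyperparameters here are placeholders (assumptions), not the exact ones used for BreznChatML.

```python
# Minimal DPO sketch with TRL (assumes trl >= 0.12 and a recent transformers).
# Base model, dataset name, and hyperparameters are placeholders, not the
# exact ones used for BreznChatML.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

base = "mistralai/Mistral-7B-v0.1"  # placeholder 7B-class base model
model = AutoModelForCausalLM.from_pretrained(base)
tokenizer = AutoTokenizer.from_pretrained(base)

# DPO preference data needs "prompt", "chosen", and "rejected" columns;
# here they would hold the German-translated math preference pairs.
train_ds = load_dataset("your-org/german-math-dpo", split="train")  # placeholder

args = DPOConfig(
    output_dir="breznchatml-dpo",
    beta=0.1,                       # strength of the KL penalty toward the reference model
    learning_rate=5e-7,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=8,
    num_train_epochs=1,
    bf16=True,                      # matches the BF16 tensor type reported below
)

trainer = DPOTrainer(
    model=model,                    # a frozen copy serves as the implicit reference model
    args=args,
    train_dataset=train_ds,
    processing_class=tokenizer,     # named `tokenizer` in older TRL releases
)
trainer.train()
```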

{
    "first_turn": 6.48125,
    "second_turn": 6.19375,
    "categories": {
        "writing": 8.425,
        "roleplay": 7.4,
        "reasoning": 4.6,
        "math": 2.65,
        "coding": 4.6,
        "extraction": 7,
        "stem": 8.0,
        "humanities": 8.025
    },
    "average": 6.3375
}
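
As a quick sanity check, the reported average is consistent both ways: it is the mean of the two turn scores and also the mean over the eight category scores.

```python
# Verify the reported average from the scores above.
scores = {
    "first_turn": 6.48125,
    "second_turn": 6.19375,
    "categories": {
        "writing": 8.425, "roleplay": 7.4, "reasoning": 4.6,
        "math": 2.65, "coding": 4.6, "extraction": 7,
        "stem": 8.0, "humanities": 8.025,
    },
}

turn_avg = (scores["first_turn"] + scores["second_turn"]) / 2            # (6.48125 + 6.19375) / 2
cat_avg = sum(scores["categories"].values()) / len(scores["categories"])
print(turn_avg, cat_avg)  # 6.3375 6.3375, matching the reported "average"
```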
Model size: 7.24B params · Tensor type: BF16 (Safetensors)