---
base_model:
- mlabonne/AlphaMonarch-7B
- datatab/Yugo55-GPT-v4
- datatab/Yugo55-GPT-DPO-v1-chkp-300
- NousResearch/Nous-Hermes-2-Mistral-7B-DPO
library_name: transformers
tags:
- mergekit
- merge
---
# Yugo55A-GPT
- **Developed by:** datatab
- **License:** MIT
## 🏆 Results
> Results were obtained with the Serbian LLM evaluation suite released by Aleksa Gordić: [serbian-llm-eval](https://github.com/gordicaleksa/serbian-llm-eval).
> * Evaluation was conducted on a 4-bit version of the model due to hardware resource constraints; a minimal loading sketch for a comparable setup follows the table below.
| MODEL | ARC-E | ARC-C | Hellaswag | BoolQ | Winogrande | OpenbookQA | PiQA |
|---|---|---|---|---|---|---|---|
| Yugo55-GPT-v4-4bit | 51.41 | 36.00 | 57.51 | 80.92 | 65.75 | 34.70 | 70.54 |
| Yugo55A-GPT | 51.52 | 37.78 | 57.52 | 84.40 | 65.43 | 35.60 | 69.43 |
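Since the evaluation above was run on a 4-bit quantized model, a comparable setup can be reproduced with 🤗 Transformers and bitsandbytes. This is a minimal sketch, assuming the model is published under the repository id `datatab/Yugo55A-GPT`:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "datatab/Yugo55A-GPT"  # assumed repository id for this model

# 4-bit quantization, roughly matching the setup used for the evaluation above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.float16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

prompt = "Koja je najviša planina u Srbiji?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```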
## 🔗 Merge Details
### Merge Method
> This model is a merge of pre-trained language models created with [mergekit](https://github.com/cg123/mergekit), using the [linear](https://arxiv.org/abs/2203.05482) merge method.
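For intuition, the linear method forms a weighted average of each parameter tensor across the input models. A sketch of the operation, with the per-model weights $w_i$ taken from the configuration below (mergekit normalizes the weights by default, so the result is a weighted average):

$$
\theta_{\text{merged}} = \frac{\sum_i w_i\,\theta_i}{\sum_i w_i}
$$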
### Models Merged
The following models were included in the merge:
* [mlabonne/AlphaMonarch-7B](https://huggingface.co/mlabonne/AlphaMonarch-7B)
* [datatab/Yugo55-GPT-v4](https://huggingface.co/datatab/Yugo55-GPT-v4)
* [datatab/Yugo55-GPT-DPO-v1-chkp-300](https://huggingface.co/datatab/Yugo55-GPT-DPO-v1-chkp-300)
* [NousResearch/Nous-Hermes-2-Mistral-7B-DPO](https://huggingface.co/NousResearch/Nous-Hermes-2-Mistral-7B-DPO)
## 🧩 Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: datatab/Yugo55-GPT-v4
    parameters:
      weight: 1.0
  - model: datatab/Yugo55-GPT-DPO-v1-chkp-300
    parameters:
      weight: 1.0
  - model: mlabonne/AlphaMonarch-7B
    parameters:
      weight: 0.5
  - model: NousResearch/Nous-Hermes-2-Mistral-7B-DPO
    parameters:
      weight: 0.5
merge_method: linear
dtype: float16
```
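To reproduce the merge, save this configuration to a file (e.g. `config.yaml`) and run mergekit's command-line tool, e.g. `mergekit-yaml config.yaml ./output-model-directory`; see the mergekit repository for the full set of options.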