---
base_model:
- NousResearch/Hermes-3-Llama-3.1-8B
library_name: transformers
tags:
- mergekit
- merge
---
# Hermes-3-Llama-3.1-8B-lorablated
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
This model was merged with the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method, using [NousResearch/Hermes-3-Llama-3.1-8B](https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-8B) + Llama-3.1-8B-Instruct-abliterated-LORA as the base.
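As a rough sketch of the method from the linked paper (not text from the original card), task arithmetic treats the difference between a fine-tuned model and the base as a "task vector" and adds weighted task vectors back onto the base weights:

$$
\theta_{\text{merged}} = \theta_{\text{base}} + \sum_i \lambda_i \,(\theta_i - \theta_{\text{base}})
$$

In the configuration below there is a single source model at `weight: 1.0`, and both it and the base point to Hermes-3 with the abliterated LoRA applied.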
## 🧩 Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: NousResearch/Hermes-3-Llama-3.1-8B+Llama-3.1-8B-Instruct-abliterated-LORA
dtype: bfloat16
merge_method: task_arithmetic
parameters:
  normalize: false
slices:
- sources:
  - layer_range: [0, 32]
    model: NousResearch/Hermes-3-Llama-3.1-8B+Llama-3.1-8B-Instruct-abliterated-LORA
    parameters:
      weight: 1.0
```
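## 💻 Usage

A minimal usage sketch with 🤗 Transformers (not part of the original card; it assumes the merged weights are published under `mlabonne/Hermes-3-Llama-3.1-8B-lorablated` and that a bfloat16-capable GPU is available):

```python
# Minimal sketch: load the merged model and run one chat turn.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repository id for this merge (adjust if the model lives elsewhere).
model_id = "mlabonne/Hermes-3-Llama-3.1-8B-lorablated"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Format the prompt with the model's chat template.
messages = [{"role": "user", "content": "What is a task vector in model merging?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```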