---
base_model:
- NousResearch/Llama-2-7b-chat-hf
- NousResearch/Llama-2-7b-hf
- taide/TAIDE-LX-7B
library_name: transformers
tags:
- mergekit
- merge
license: mit
---
# merged
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method, with [NousResearch/Llama-2-7b-hf](https://huggingface.co/NousResearch/Llama-2-7b-hf) as the base model.
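Task arithmetic merges models by computing a "task vector" for each fine-tuned model (its weights minus the base model's weights) and adding a weighted sum of those vectors back onto the base. A minimal NumPy sketch of the idea, using toy tensors rather than the actual 7B parameters:

```python
import numpy as np

def task_arithmetic(base, tuned_models, weights):
    """Merge by adding weighted task vectors (tuned - base) to the base weights."""
    merged = base.copy()
    for tuned, w in zip(tuned_models, weights):
        merged += w * (tuned - base)  # task vector, scaled by its merge weight
    return merged

# Toy stand-ins for one parameter tensor of each model.
base = np.array([1.0, 2.0, 3.0])    # Llama-2-7b-hf (base)
taide = np.array([1.0, 3.0, 3.0])   # TAIDE-LX-7B
chat = np.array([1.5, 2.0, 2.5])    # Llama-2-7b-chat-hf

merged = task_arithmetic(base, [taide, chat], [1.0, 1.0])
# → array([1.5, 3.0, 2.5])
```

With both weights at 1.0 (as in the configuration below), each model's full task vector is applied, so the merged tensor is `base + (taide - base) + (chat - base)`.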
### Models Merged
The following models were included in the merge:
* [NousResearch/Llama-2-7b-chat-hf](https://huggingface.co/NousResearch/Llama-2-7b-chat-hf)
* [taide/TAIDE-LX-7B](https://huggingface.co/taide/TAIDE-LX-7B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: NousResearch/Llama-2-7b-hf
dtype: bfloat16
merge_method: task_arithmetic
slices:
- sources:
  - layer_range: [0, 32]
    model: taide/TAIDE-LX-7B
    parameters:
      weight: 1.0
  - layer_range: [0, 32]
    model: NousResearch/Llama-2-7b-chat-hf
    parameters:
      weight: 1.0
  - layer_range: [0, 32]
    model: NousResearch/Llama-2-7b-hf
```
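As a sanity check before running a merge, the configuration can be parsed and inspected programmatically. A small sketch using PyYAML, with the YAML inlined for self-containment:

```python
import yaml

# The merge configuration from this card, inlined as a string.
CONFIG = """
base_model: NousResearch/Llama-2-7b-hf
dtype: bfloat16
merge_method: task_arithmetic
slices:
- sources:
  - layer_range: [0, 32]
    model: taide/TAIDE-LX-7B
    parameters:
      weight: 1.0
  - layer_range: [0, 32]
    model: NousResearch/Llama-2-7b-chat-hf
    parameters:
      weight: 1.0
  - layer_range: [0, 32]
    model: NousResearch/Llama-2-7b-hf
"""

config = yaml.safe_load(CONFIG)
# Collect the model ids referenced in the single slice.
models = [s["model"] for s in config["slices"][0]["sources"]]
```

Note that all sources cover the full `[0, 32]` layer range of a 7B Llama-2 model, so this is a whole-model merge rather than a layer-wise stitch.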