---
base_model:
- rishiraj/smol-7b
- FuseAI/OpenChat-3.5-7B-Mixtral
- openchat/openchat_3.5
- berkeley-nest/Starling-LM-7B-alpha
- FuseAI/OpenChat-3.5-7B-Solar
library_name: transformers
tags:
- mergekit
- merge
license: apache-2.0
---
# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method, with [openchat/openchat_3.5](https://huggingface.co/openchat/openchat_3.5) as the base model.
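For intuition, Model Stock interpolates between the base model's weights and the average of the fine-tuned models' weights, choosing the interpolation ratio from the angle between the fine-tuned checkpoints' task vectors (see the paper for the derivation). Below is a minimal per-tensor sketch in PyTorch; the function name and the use of mean pairwise cosine similarity as the angle estimate are illustrative assumptions, not mergekit's actual implementation:

```python
import torch

def model_stock_merge(base: torch.Tensor, finetuned: list[torch.Tensor]) -> torch.Tensor:
    """Per-tensor Model Stock merge (arXiv:2403.19522). Illustrative sketch only."""
    k = len(finetuned)
    assert k >= 2, "Model Stock needs at least two fine-tuned checkpoints"
    # Task vectors: each fine-tuned checkpoint's offset from the base weights.
    deltas = [(w - base).flatten() for w in finetuned]
    # Mean pairwise cosine similarity stands in for cos(theta) between checkpoints.
    sims = [
        torch.nn.functional.cosine_similarity(deltas[i], deltas[j], dim=0)
        for i in range(k)
        for j in range(i + 1, k)
    ]
    cos_theta = torch.stack(sims).mean().clamp(-1.0, 1.0)
    # Interpolation ratio from the paper: t = k*cos / (1 + (k-1)*cos).
    t = k * cos_theta / (1 + (k - 1) * cos_theta)
    # Step from the base toward the average of the fine-tuned weights by ratio t.
    w_avg = torch.stack(finetuned).mean(dim=0)
    return t * w_avg + (1 - t) * base

# Toy demo on a single random weight tensor.
base = torch.randn(8, 8)
checkpoints = [base + 0.05 * torch.randn(8, 8) for _ in range(4)]
merged = model_stock_merge(base, checkpoints)
```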
### Models Merged
The following models were included in the merge:
* [rishiraj/smol-7b](https://huggingface.co/rishiraj/smol-7b)
* [FuseAI/OpenChat-3.5-7B-Mixtral](https://huggingface.co/FuseAI/OpenChat-3.5-7B-Mixtral)
* [berkeley-nest/Starling-LM-7B-alpha](https://huggingface.co/berkeley-nest/Starling-LM-7B-alpha)
* [FuseAI/OpenChat-3.5-7B-Solar](https://huggingface.co/FuseAI/OpenChat-3.5-7B-Solar)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
- model: openchat/openchat_3.5
- model: FuseAI/OpenChat-3.5-7B-Mixtral
- model: FuseAI/OpenChat-3.5-7B-Solar
- model: berkeley-nest/Starling-LM-7B-alpha
- model: rishiraj/smol-7b
merge_method: model_stock
base_model: openchat/openchat_3.5
dtype: bfloat16
```
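## Usage
To reproduce the merge, save the configuration above as `config.yml` and run mergekit's CLI, e.g. `mergekit-yaml config.yml ./merged`. The result loads like any Transformers causal LM. A minimal usage sketch follows; the repository id is a placeholder, since this card does not name the published repo, and the assumption is that the merge inherits the OpenChat base's chat template:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-username/merge"  # hypothetical repo id; replace with this model's actual repo
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# The OpenChat base ships a chat template, so apply_chat_template should format prompts correctly.
messages = [{"role": "user", "content": "Summarize the Model Stock merge method in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=128)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```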