# Marcoro14-7B-slerp

Marcoro14-7B-slerp is a merge of the following models using [mergekit](https://github.com/arcee-ai/mergekit):
* [Orenguteng/Llama-3-8B-Lexi-Uncensored](https://huggingface.co/Orenguteng/Llama-3-8B-Lexi-Uncensored)
* [nbeerbower/llama-3-spicy-abliterated-stella-8B](https://huggingface.co/nbeerbower/llama-3-spicy-abliterated-stella-8B)
## 🧩 Configuration
```yaml
slices:
  - sources:
      - model: Orenguteng/Llama-3-8B-Lexi-Uncensored
        layer_range: [0, 32]
      - model: nbeerbower/llama-3-spicy-abliterated-stella-8B
        layer_range: [0, 32]
merge_method: slerp
base_model: Orenguteng/Llama-3-8B-Lexi-Uncensored
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
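In this config, `t` is the interpolation factor: `t = 0` keeps the base model's weights and `t = 1` takes the other model's. The five-element lists define a gradient across the layer range for the `self_attn` and `mlp` tensors, and the final `value: 0.5` applies an even blend to all remaining tensors. The merge itself is run with mergekit's `mergekit-yaml` CLI (e.g. `mergekit-yaml config.yaml ./merged-model`). For intuition, here is a minimal NumPy sketch of spherical linear interpolation (SLERP); mergekit's actual implementation operates on full model tensors and handles more edge cases.

```python
import numpy as np

def slerp(t, v0, v1, eps=1e-8):
    """Spherical linear interpolation between two weight vectors.

    t = 0 returns v0 (the base model), t = 1 returns v1.
    """
    # Normalized copies are used only to measure the angle between the vectors.
    v0n = v0 / (np.linalg.norm(v0) + eps)
    v1n = v1 / (np.linalg.norm(v1) + eps)
    dot = np.clip(np.dot(v0n, v1n), -1.0, 1.0)
    theta = np.arccos(dot)
    if theta < eps:
        # Nearly parallel vectors: fall back to plain linear interpolation.
        return (1 - t) * v0 + t * v1
    # Weight the original (unnormalized) vectors by spherical coefficients.
    return (np.sin((1 - t) * theta) * v0 + np.sin(t * theta) * v1) / np.sin(theta)
```

Unlike plain linear averaging, SLERP follows the arc between the two weight vectors, which preserves their magnitudes and directional structure better when the vectors point in substantially different directions.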
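## 💻 Usage

A minimal loading-and-generation sketch with 🤗 Transformers. The repo id below is a placeholder (replace it with the actual location of the merged checkpoint), and `bfloat16` matches the `dtype` in the merge config.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-username/Marcoro14-7B-slerp"  # placeholder: replace with the actual repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # same dtype the merged weights were written in
    device_map="auto",
)

prompt = "What is a merge of language models?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```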