# Guardian-Samantha-7b-slerp

Guardian-Samantha-7b-slerp is a merge of the following models using [mergekit](https://github.com/cg123/mergekit):

* [llamas-community/LlamaGuard-7b](https://huggingface.co/llamas-community/LlamaGuard-7b)
* [ParthasarathyShanmugam/llama-2-7b-samantha](https://huggingface.co/ParthasarathyShanmugam/llama-2-7b-samantha)

## 🧩 Configuration

```yaml
slices:
  - sources:
      - model: llamas-community/LlamaGuard-7b
        layer_range: [0, 32]
      - model: ParthasarathyShanmugam/llama-2-7b-samantha
        layer_range: [0, 32]
merge_method: slerp
base_model: llamas-community/LlamaGuard-7b
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: float16
```
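In this config, `t` controls the interpolation weight: `t = 0` keeps the base model's parameters, `t = 1` takes the other model's, and a list of values is spread into a gradient across the layers, so self-attention and MLP weights blend differently at different depths. As a rough illustration of what the `slerp` merge method does for a single pair of weight tensors (a minimal NumPy sketch, not mergekit's actual implementation):

```python
import numpy as np

def slerp(t, v0, v1, eps=1e-8):
    """Spherical linear interpolation between two weight vectors.

    t = 0 returns v0 (the base model), t = 1 returns v1.
    """
    # Normalize copies to measure the angle between the two directions.
    v0n = v0 / np.linalg.norm(v0)
    v1n = v1 / np.linalg.norm(v1)
    dot = np.clip(np.dot(v0n, v1n), -1.0, 1.0)
    theta = np.arccos(dot)
    if theta < eps:
        # Nearly parallel vectors: fall back to plain linear interpolation.
        return (1 - t) * v0 + t * v1
    # Interpolate along the arc between the two vectors.
    s0 = np.sin((1 - t) * theta) / np.sin(theta)
    s1 = np.sin(t * theta) / np.sin(theta)
    return s0 * v0 + s1 * v1
```

Unlike plain averaging, slerp follows the arc between the two weight directions, which is why it is a popular choice for merging models that share an architecture but were fine-tuned differently.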
