Llama-3-linear-8B / mergekit_config.yml
models:
  - layer_range: [0, 32]  # Meta-Llama-3-8B has 32 transformer layers
    model: meta-llama/Meta-Llama-3-8B
    parameters:
      weight: 0.2
  - layer_range: [0, 32]
    model: meta-llama/Meta-Llama-3-8B-Instruct
    parameters:
      weight: 0.8
merge_method: task_arithmetic
base_model: meta-llama/Meta-Llama-3-8B
dtype: bfloat16
random_seed: 0
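
For reference, task_arithmetic merges by adding weighted "task vectors" (the per-tensor difference between each fine-tuned model and the base) back onto the base model. Below is a minimal sketch of that computation, assuming the state dicts are already loaded as name-to-tensor mappings; the helper and variable names are illustrative, not mergekit's API:

```python
import torch

def task_arithmetic(base, finetuned, weights):
    """merged = base + sum_i w_i * (theta_i - base), applied per tensor."""
    merged = {}
    for name, base_t in base.items():
        delta = sum(w * (ft[name].float() - base_t.float())
                    for ft, w in zip(finetuned, weights))
        # dtype: bfloat16, matching the config above
        merged[name] = (base_t.float() + delta).to(torch.bfloat16)
    return merged
```

Because one of the two input models here is the base model itself, its task vector is zero, so the result reduces to 0.2 * base + 0.8 * instruct: a plain linear interpolation between Meta-Llama-3-8B and its Instruct fine-tune, which is what the repository name Llama-3-linear-8B refers to. In practice the merge is produced by passing this file to mergekit (e.g. via its mergekit-yaml command) rather than by hand.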