# output
This is a merge of pre-trained language models created with [mergekit](https://github.com/arcee-ai/mergekit).
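
The merged model loads like any Hugging Face Transformers causal LM. A minimal sketch, assuming the merge output was saved locally under `./output` (both the path and the Mistral-Instruct prompt format are assumptions, not part of the merge config):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "./output"  # assumed local path to this merged model
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,  # matches the merge's dtype below
    device_map="auto",
)

# Mistral-Instruct-style prompt, assumed because the base model is
# Mistral-7B-Instruct-v0.2; verify against the merged tokenizer's chat template.
prompt = "[INST] Solve 12 * 17 step by step. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```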
## Merge Details
### Merge Method
This model was merged using the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method, with ./evol_merge_storage/input_models/Mistral-7B-Instruct-v0.2_674785087 as the base.
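
Task arithmetic builds the merged weights as the base weights plus a weighted sum of per-model deltas ("task vectors"): merged = base + Σᵢ wᵢ · (modelᵢ − base). A minimal sketch of that arithmetic over state dicts (illustrative only, not mergekit's actual implementation):

```python
import torch

def task_arithmetic(base: dict, tuned: list[dict], weights: list[float]) -> dict:
    """merged = base + sum_i w_i * (tuned_i - base), applied tensor by tensor."""
    merged = {}
    for name, base_t in base.items():
        # Each fine-tuned model contributes a weighted "task vector":
        # its parameter delta relative to the base model.
        delta = sum(w * (t[name] - base_t) for t, w in zip(tuned, weights))
        merged[name] = base_t + delta
    return merged
```

With `normalize: 0.0` in the configuration below, the weights are applied as listed rather than rescaled to sum to 1, and a negative weight subtracts that model's task vector instead of adding it.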
### Models Merged
The following models were included in the merge:
- ./evol_merge_storage/input_models/RakutenAI-7B-chat_2028928689
- ./evol_merge_storage/input_models/OpenMath-Mistral-7B-v0.1-hf_3930120330
### Configuration
The following YAML configuration was used to produce this model:
```yaml
base_model: ./evol_merge_storage/input_models/Mistral-7B-Instruct-v0.2_674785087
dtype: bfloat16
merge_method: task_arithmetic
parameters:
  int8_mask: 1.0
  normalize: 0.0
slices:
- sources:
  - layer_range: [0, 8]
    model: ./evol_merge_storage/input_models/RakutenAI-7B-chat_2028928689
    parameters:
      weight: 0.4751898929029342
  - layer_range: [0, 8]
    model: ./evol_merge_storage/input_models/OpenMath-Mistral-7B-v0.1-hf_3930120330
    parameters:
      weight: 0.15973634379963977
  - layer_range: [0, 8]
    model: ./evol_merge_storage/input_models/Mistral-7B-Instruct-v0.2_674785087
    parameters:
      weight: 0.640283894579152
- sources:
  - layer_range: [8, 16]
    model: ./evol_merge_storage/input_models/RakutenAI-7B-chat_2028928689
    parameters:
      weight: 0.19424736508900337
  - layer_range: [8, 16]
    model: ./evol_merge_storage/input_models/OpenMath-Mistral-7B-v0.1-hf_3930120330
    parameters:
      weight: 0.08221575886362112
  - layer_range: [8, 16]
    model: ./evol_merge_storage/input_models/Mistral-7B-Instruct-v0.2_674785087
    parameters:
      weight: -0.23713388005117378
- sources:
  - layer_range: [16, 24]
    model: ./evol_merge_storage/input_models/RakutenAI-7B-chat_2028928689
    parameters:
      weight: 0.033742143324615545
  - layer_range: [16, 24]
    model: ./evol_merge_storage/input_models/OpenMath-Mistral-7B-v0.1-hf_3930120330
    parameters:
      weight: 0.5125748346054765
  - layer_range: [16, 24]
    model: ./evol_merge_storage/input_models/Mistral-7B-Instruct-v0.2_674785087
    parameters:
      weight: 0.35282253920558426
- sources:
  - layer_range: [24, 32]
    model: ./evol_merge_storage/input_models/RakutenAI-7B-chat_2028928689
    parameters:
      weight: 0.7795484206247641
  - layer_range: [24, 32]
    model: ./evol_merge_storage/input_models/OpenMath-Mistral-7B-v0.1-hf_3930120330
    parameters:
      weight: 0.27517180468664126
  - layer_range: [24, 32]
    model: ./evol_merge_storage/input_models/Mistral-7B-Instruct-v0.2_674785087
    parameters:
      weight: 0.4537203169876529
```
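
The config merges the 32 layers in four slices of 8, with independent per-model weights for each slice; the `evol_merge_storage` paths and the unrounded coefficients suggest these weights came from an automated (evolutionary) search rather than hand tuning. To reproduce a merge like this, the config can be fed to mergekit via its `mergekit-yaml` CLI (`mergekit-yaml config.yml ./output`) or its Python API. A sketch of the Python route, based on mergekit's documented usage (check the repository for the current API; `config.yml` and `./output` are assumed paths):

```python
import torch
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# "config.yml" is assumed to contain the YAML shown above.
with open("config.yml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./output",  # assumed output directory
    options=MergeOptions(
        cuda=torch.cuda.is_available(),
        copy_tokenizer=True,
    ),
)
```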