# merge
This is a merge of pre-trained language models created using mergekit.
## Merge Details

### Merge Method
This model was merged using the SCE merge method, with TareksLab/M-BASE-SCE as the base model.
### Models Merged

The following models were included in the merge:

* TareksLab/M-MERGE4
* TareksLab/M-MERGE3
* TareksLab/M-MERGE2
* TareksLab/M-MERGE1
### Configuration

The following YAML configuration was used to produce this model:
```yaml
models:
  - model: TareksLab/M-MERGE4
    parameters:
      select_topk: 0.15
  - model: TareksLab/M-MERGE3
    parameters:
      select_topk: 0.16
  - model: TareksLab/M-MERGE2
    parameters:
      select_topk: 0.17
  - model: TareksLab/M-MERGE1
    parameters:
      select_topk: 0.18
merge_method: sce
base_model: TareksLab/M-BASE-SCE
parameters:
  int8_mask: true
chat_template: llama3
tokenizer:
  source: TareksLab/M-TOKENIZER-SCE
dtype: bfloat16
```
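To try the merged model locally, a minimal loading sketch with Hugging Face `transformers` might look like the following. The repository id `TareksTesting/Legion-V1.8-LLaMa-70B` is taken from this card and is assumed to point at the published weights; note that a 70B model in bfloat16 requires on the order of 140 GB of memory spread across your devices.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repository id for this merge; adjust if the weights are hosted elsewhere.
model_id = "TareksTesting/Legion-V1.8-LLaMa-70B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the dtype in the merge config above
    device_map="auto",           # shard the 70B weights across available GPUs/CPU
)

# The tokenizer carries a llama3 chat template (see `chat_template` in the config).
messages = [{"role": "user", "content": "Hello!"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output = model.generate(inputs, max_new_tokens=64)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```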