This is a merge of pre-trained language models created using mergekit.

This model was merged using the linear merge method.

The following models were included in the merge:
* LeroyDyer/Mixtral_AI_128k_bioMedical
* filipealmeida/Mistral-7B-Instruct-v0.1-sharded

The following YAML configuration was used to produce this model:
```yaml
models:
  - model: LeroyDyer/Mixtral_AI_128k_bioMedical
    parameters:
      weight: 1.6128
  - model: filipealmeida/Mistral-7B-Instruct-v0.1-sharded
    parameters:
      weight: 0.3312
merge_method: linear
dtype: float16
```
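To illustrate what the linear merge method does with the weights above, here is a minimal sketch, not mergekit's actual implementation: each parameter of the merged model is the weighted sum of the corresponding parameters from the source models, divided by the total weight (mergekit normalizes weights for linear merges by default). Tensors are shown as plain Python lists of floats; the `linear_merge` helper and the toy `layer.weight` parameter are illustrative, not part of mergekit's API.

```python
# Sketch of a linear merge: every parameter in the output is a
# normalized weighted average of the same parameter across models.
def linear_merge(state_dicts, weights, normalize=True):
    """state_dicts: list of {param_name: [float, ...]} mappings.
    weights: per-model weights (e.g. 1.6128 and 0.3312 above).
    normalize: divide by the weight sum, mirroring mergekit's default."""
    total = sum(weights) if normalize else 1.0
    merged = {}
    for name in state_dicts[0]:
        merged[name] = [
            sum(w * sd[name][i] for w, sd in zip(weights, state_dicts)) / total
            for i in range(len(state_dicts[0][name]))
        ]
    return merged

# Toy example with a single two-element parameter per model.
model_a = {"layer.weight": [1.0, 2.0]}  # stands in for the bioMedical model
model_b = {"layer.weight": [3.0, 4.0]}  # stands in for the Instruct model
merged = linear_merge([model_a, model_b], [1.6128, 0.3312])
```

With these weights the first model dominates the result (roughly 83% of the normalized total), which matches the intent of keeping the biomedical model's behavior while blending in some of the instruct model.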