---
base_model:
- unsloth/Qwen2.5-3B-Instruct
- unsloth/Qwen2.5-3B
library_name: transformers
tags:
- mergekit
- merge
---
# merged_output_ties_1_4
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
## Merge Details
### Merge Method
This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method, with [unsloth/Qwen2.5-3B](https://huggingface.co/unsloth/Qwen2.5-3B) as the base.
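
TIES builds a task vector (delta from the base) for each model, trims each vector to its highest-magnitude entries, elects a per-parameter sign by total magnitude, and averages only the entries that agree with that sign. In the configuration below, `weight` scales each task vector and `density` sets the fraction of entries kept by the trim step (1.0 keeps everything). A rough single-tensor sketch in PyTorch, illustrative only (mergekit's actual implementation handles per-model weighting, normalization, and masking differently):

```python
import torch

def ties_merge(base, tuned, density=1.0):
    """Toy TIES merge of one parameter tensor (not mergekit's code)."""
    # Task vectors: each fine-tune's delta from the shared base.
    deltas = [t - base for t in tuned]

    # 1. Trim: zero all but the top-`density` fraction of entries by magnitude.
    trimmed = []
    for d in deltas:
        k = max(1, int(density * d.numel()))
        thresh = d.abs().flatten().topk(k).values.min()
        trimmed.append(torch.where(d.abs() >= thresh, d, torch.zeros_like(d)))
    stacked = torch.stack(trimmed)

    # 2. Elect sign: per parameter, the sign with the larger total magnitude.
    elected = torch.sign(stacked.sum(dim=0))

    # 3. Disjoint merge: average only entries matching the elected sign.
    agree = (torch.sign(stacked) == elected) & (stacked != 0)
    merged = (stacked * agree).sum(dim=0) / agree.sum(dim=0).clamp(min=1)
    return base + merged
```

With `density: 1.0` and `weight: 1`, as used here, the trim step is a no-op and the merge reduces to sign-consistent averaging of the task vectors.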
### Models Merged
The following models were included in the merge:
* [unsloth/Qwen2.5-3B-Instruct](https://huggingface.co/unsloth/Qwen2.5-3B-Instruct)
* triples/merged_model
* genstruct/merged_model
* kg/merged_model
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  # Base instructed model
  - model: unsloth/Qwen2.5-3B-Instruct
    parameters:
      weight: 1
      density: 1
  # Merged LoRA models
  - model: genstruct/merged_model
    parameters:
      weight: 1.0
      density: 1.0
  # - model: summary/merged_model
  #   parameters:
  #     weight: 1.0
  #     density: 1.0
  - model: kg/merged_model
    parameters:
      weight: 1.0
      density: 1.0
  #### THIS BREAKS KG!!!
  # - model: pII/merged_model
  #   parameters:
  #     weight: 1.0
  #     density: 1.0
  # #### Breaks KG!
  # - model: preference/merged_model
  #   parameters:
  #     weight: 1.0
  #     density: 1.0
  - model: triples/merged_model
    parameters:
      weight: 1.0
      density: 1.0
  # - model: suitable/merged_model
  #   parameters:
  #     weight: 1.0
  #     density: 1.0
  # - model: feedback/merged_model
  #   parameters:
  #     weight: 1.0
  #     density: 1.0

# Merge configuration
merge_method: ties
base_model: unsloth/Qwen2.5-3B
parameters:
  normalize: true
  int8_mask: true
dtype: bfloat16

# # Tokenizer configuration
# tokenizer_source: Qwen/Qwen1.5-14B-Chat
# tokenizer_parameters:
#   trust_remote_code: true

# # Output configuration
# output:
#   precision: bfloat16
#   model_format: safetensors
#   max_shard_size: "4GB"

# # Training configuration (for potential fine-tuning)
# training:
#   learning_rate: 2e-5
#   warmup_steps: 100
#   gradient_checkpointing: true
#   gradient_accumulation_steps: 4

# # Hardware optimization
# hardware:
#   mixed_precision: true
#   cuda_memory_fraction: 0.95
#   optimize_model_memory: true
```
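
To reproduce the merge, save the configuration above to a file and run mergekit's CLI, e.g. `mergekit-yaml config.yml ./merged_output_ties_1_4` (the output path is illustrative). The result loads like any `transformers` causal LM; a minimal sketch, assuming the merged weights live in a local `merged_output_ties_1_4` directory:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Path or repo id of the merged model; this local path is a placeholder.
model_id = "merged_output_ties_1_4"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

inputs = tokenizer("Write one sentence about model merging.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```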