Apel-sin committed
Commit eaf3ecb · 1 Parent(s): 7179f63

add measurement.json

Files changed (2)
  1. README.md +43 -0
  2. measurement.json +0 -0
README.md ADDED
---
base_model:
- BenevolenceMessiah/Qwen2.5-Coder-32B-Instruct-abliterated-Rombo-TIES-v1.0
library_name: transformers
tags:
- mergekit
- merge
---
# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method, with [huihui-ai/Qwen2.5-Coder-32B-Instruct-abliterated](https://huggingface.co/huihui-ai/Qwen2.5-Coder-32B-Instruct-abliterated) as the base.

### Models Merged

The following models were included in the merge:
* [rombodawg/Rombos-Coder-V2.5-Qwen-32b](https://huggingface.co/rombodawg/Rombos-Coder-V2.5-Qwen-32b)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
# Qwen2.5-Coder-32B-Instruct-abliterated-Rombo-TIES-v1.0

models:
  - model: rombodawg/Rombos-Coder-V2.5-Qwen-32b  # Self-instruct fine-tuning
    parameters:
      density: 1.0
      weight: 1.0
merge_method: ties
base_model: huihui-ai/Qwen2.5-Coder-32B-Instruct-abliterated  # Abliterated model
parameters:
  normalize: true
  int8_mask: false
dtype: bfloat16
tokenizer_source: union
```
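As a rough illustration of what the TIES method in the configuration above does, here is a toy NumPy sketch of its trim / elect-sign / disjoint-merge steps on flat arrays. This is not mergekit's actual implementation (which operates tensor-by-tensor on model weights, and with `normalize: true` rescales by the summed weights); the function name and the tiny example arrays are hypothetical, chosen only to make the steps visible:

```python
import numpy as np

def ties_merge(base, finetuned, densities, weights):
    """Toy sketch of TIES merging on flat parameter arrays.

    Steps: (1) form task vectors, (2) trim each to its top-density
    fraction by magnitude, (3) elect a per-parameter sign, (4) average
    only the values that agree with the elected sign, (5) add the
    merged delta back onto the base.
    """
    deltas = []
    for ft, d, w in zip(finetuned, densities, weights):
        delta = ft - base                         # task vector
        k = int(np.ceil(d * delta.size))          # how many entries to keep
        if k < delta.size:                        # trim small-magnitude entries
            thresh = np.sort(np.abs(delta))[-k]
            delta = np.where(np.abs(delta) >= thresh, delta, 0.0)
        deltas.append(w * delta)
    stacked = np.stack(deltas)
    elected = np.sign(stacked.sum(axis=0))        # elect a sign per parameter
    agree = np.where(np.sign(stacked) == elected, stacked, 0.0)
    counts = (agree != 0).sum(axis=0)
    merged = agree.sum(axis=0) / np.maximum(counts, 1)  # disjoint mean
    return base + merged

# Two hypothetical fine-tunes of a zero base, density 1.0, weight 1.0
base = np.zeros(4)
ft_a = np.array([0.5, -0.2, 0.0, 0.3])
ft_b = np.array([0.1,  0.4, 0.0, 0.3])
out = ties_merge(base, [ft_a, ft_b], densities=[1.0, 1.0], weights=[1.0, 1.0])
# -0.2 and 0.4 disagree in sign at index 1; only the elected-sign value survives
```

With a single merged model and `density: 1.0`, as in this card's configuration, the trim and sign-election steps are no-ops, so the result is simply the base plus the weighted task vector of Rombos-Coder.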
measurement.json ADDED
The diff for this file is too large to render. See raw diff