---
license: cc-by-nc-4.0
datasets:
- Intel/orca_dpo_pairs
language:
- en
pipeline_tag: text-generation
---


**Base Model**
- [upstage/SOLAR-10.7B-Instruct-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-Instruct-v1.0)
- [beomi/OPEN-SOLAR-KO-10.7B](https://huggingface.co/beomi/OPEN-SOLAR-KO-10.7B)

**Training Corpus**
- [Intel/orca_dpo_pairs](https://huggingface.co/datasets/Intel/orca_dpo_pairs)

**Explanation**
- Merge the two base models with [mergekit](https://github.com/arcee-ai/mergekit) (slerp)
- Apply DPO to the merged model; only the adapter weights are saved (see the sketches after the merge script)
- Merge the adapter back into the merged model

**Merge Script**
```yaml
slices:
  - sources:
      - model: upstage/SOLAR-10.7B-Instruct-v1.0
        layer_range: [0, 48]
      - model: beomi/OPEN-SOLAR-KO-10.7B
        layer_range: [0, 48]
merge_method: slerp
base_model: upstage/SOLAR-10.7B-Instruct-v1.0
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5 # fallback for rest of tensors
dtype: float16
```
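
**DPO Training Sketch**

The merged model (produced, e.g., with mergekit's `mergekit-yaml` CLI from the config above) is then preference-tuned with DPO so that only a LoRA adapter is saved. The exact recipe is not published; the following is a minimal sketch assuming a recent TRL (the `DPOConfig`/`processing_class` API) and peft, with illustrative paths and hyperparameters.

```python
import torch
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

MERGED = "./solar-merged"  # illustrative path: output of the mergekit step

tokenizer = AutoTokenizer.from_pretrained(MERGED)
model = AutoModelForCausalLM.from_pretrained(MERGED, torch_dtype=torch.float16)

# Intel/orca_dpo_pairs ships system/question/chosen/rejected columns;
# DPOTrainer expects prompt/chosen/rejected.
def to_dpo_format(row):
    return {
        "prompt": f"{row['system']}\n{row['question']}",
        "chosen": row["chosen"],
        "rejected": row["rejected"],
    }

dataset = load_dataset("Intel/orca_dpo_pairs", split="train").map(
    to_dpo_format, remove_columns=["system", "question"]
)

peft_config = LoraConfig(  # illustrative LoRA hyperparameters
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM",
)

trainer = DPOTrainer(
    model,
    args=DPOConfig(output_dir="./dpo-adapter", per_device_train_batch_size=1, beta=0.1),
    train_dataset=dataset,
    processing_class=tokenizer,
    peft_config=peft_config,  # train a LoRA adapter instead of full weights
)
trainer.train()
trainer.save_model("./dpo-adapter")  # with a peft model, this saves only the adapter
```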

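**Adapter Merge Sketch**

The saved adapter is then folded back into the merged base so the published checkpoint is a plain full-weight model. A minimal sketch using peft's `merge_and_unload` (paths again illustrative):

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("./solar-merged", torch_dtype=torch.float16)
model = PeftModel.from_pretrained(base, "./dpo-adapter")
model = model.merge_and_unload()  # bake the LoRA deltas into the base weights

model.save_pretrained("./SOLAR_Merge_Adapter_DPO_Orca")
AutoTokenizer.from_pretrained("./solar-merged").save_pretrained(
    "./SOLAR_Merge_Adapter_DPO_Orca"
)
```
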
**Score**
|Average|ARC|HellaSwag|MMLU|TruthfulQA|Winogrande|GSM8K|
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
|65.96|63.91|84.58|63.18|51.49|82.00|50.57|

**Usage**
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "dddsaty/SOLAR_Merge_Adapter_DPO_Orca",
    low_cpu_mem_usage=True,
    torch_dtype=torch.float16,
    device_map="auto",  # requires accelerate; spreads layers across devices
)

tokenizer = AutoTokenizer.from_pretrained("dddsaty/SOLAR_Merge_Adapter_DPO_Orca")
```
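
Continuing from the snippet above, a minimal generation example. The prompt template follows the base SOLAR-Instruct convention (`### User:` / `### Assistant:`), which is an assumption for this merge and may need adjustment:

```python
prompt = "### User:\nExplain depth up-scaling in one sentence.\n\n### Assistant:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

If GPU memory is tight, a 4-bit quantized load is one option (a sketch, assuming `bitsandbytes` is installed):

```python
from transformers import BitsAndBytesConfig

model_4bit = AutoModelForCausalLM.from_pretrained(
    "dddsaty/SOLAR_Merge_Adapter_DPO_Orca",
    quantization_config=BitsAndBytesConfig(load_in_4bit=True),
    device_map="auto",
)
```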

**Log**
- 2024.02.05: Initial version upload

**LICENSE**  
Follows the upstage/SOLAR-10.7B-Instruct-v1.0 license:
- cc-by-nc-4.0

**Citation**
- beomi/OPEN-SOLAR-KO-10.7B  
```bibtex
@misc{solar_ko_junbum_2023,
    author       = { {L. Junbum} },
    title        = { Solar-Ko-10.7b },
    year         = 2024,
    url          = { https://huggingface.co/beomi/SOLAR-KO-10.7B },
    publisher    = { Hugging Face }
}
```  
  
- upstage/SOLAR-10.7B-Instruct-v1.0  
```bibtex
@misc{kim2023solar,
      title={SOLAR 10.7B: Scaling Large Language Models with Simple yet Effective Depth Up-Scaling}, 
      author={Dahyun Kim and Chanjun Park and Sanghoon Kim and Wonsung Lee and Wonho Song and Yunsu Kim and Hyeonwoo Kim and Yungi Kim and Hyeonju Lee and Jihoo Kim and Changbae Ahn and Seonghoon Yang and Sukyung Lee and Hyunbyung Park and Gyoungjin Gim and Mikyoung Cha and Hwalsuk Lee and Sunghun Kim},
      year={2023},
      eprint={2312.15166},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```