kuotient committed
Commit 8a75dbb
Parent: a63b5a6

Upload folder using huggingface_hub

README.md CHANGED
@@ -1,51 +1,27 @@
 ---
-model-index:
-- name: EEVE-Instruct-Math-10.8B
-  results:
-  - task:
-      type: text-generation
-    dataset:
-      name: gsm8k-ko
-      type: gsm8k
-    metrics:
-    - name: pass@1
-      type: pass@1
-      value: 0.4951
-      verified: false
 base_model:
 - yanolja/EEVE-Korean-Instruct-10.8B-v1.0
-- kuotient/EEVE-Math-10.8B-SFT
+- yanolja/EEVE-Korean-10.8B-v1.0
+- kuotient/EEVE-Math-10.8B
 library_name: transformers
 tags:
+- mergekit
 - merge

 ---
-# EEVE-Instruct-Math-10.8B
+# merge

-The `EEVE-Math` project covers:
-- translation of Orca-Math-200k ([Orca-Math: Unlocking the potential of SLMs in Grade School Math](https://arxiv.org/pdf/2402.14830.pdf))
-- translation of gsm8k and evaluation with lm_eval
-- DARE-TIES merging with Mergekit ([DARE](https://arxiv.org/abs/2311.03099))
-
-> This model is a dare-ties merge of EEVE-Math and EEVE-Instruct. The project is a proof of concept showing that this process can preserve most of the specialized EEVE-Math model's performance while keeping the usability of the Instruct model.
-
-| Model | gsm8k-ko(pass@1) |
-|---|---|
-| EEVE (Base) | 0.4049 |
-| [EEVE-Math](https://huggingface.co/kuotient/EEVE-Math-10.8B) (epoch 1) | 0.508 |
-| EEVE-Math (epoch 2) | **0.539** |
-| [EEVE-Instruct](https://huggingface.co/yanolja/EEVE-Korean-Instruct-10.8B-v1.0) | 0.4511 |
-| EEVE-Instruct + Math (epoch 1) | 0.4951 |
-| EEVE-Instruct + Math (epoch 2) | **0.5148** |
+This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

 ## Merge Details
-This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [yanolja/EEVE-Korean-Instruct-10.8B-v1.0](https://huggingface.co/yanolja/EEVE-Korean-Instruct-10.8B-v1.0) as a base.
+### Merge Method
+
+This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method using [yanolja/EEVE-Korean-10.8B-v1.0](https://huggingface.co/yanolja/EEVE-Korean-10.8B-v1.0) as a base.

 ### Models Merged

 The following models were included in the merge:
+* [yanolja/EEVE-Korean-Instruct-10.8B-v1.0](https://huggingface.co/yanolja/EEVE-Korean-Instruct-10.8B-v1.0)
 * [kuotient/EEVE-Math-10.8B](https://huggingface.co/kuotient/EEVE-Math-10.8B)

 ### Configuration
@@ -54,37 +30,20 @@ The following YAML configuration was used to produce this model:

 ```yaml
 models:
-  - model: yanolja/EEVE-Korean-Instruct-10.8B-v1.0
+  - model: yanolja/EEVE-Korean-10.8B-v1.0
     # no parameters necessary for base model
-  - model: kuotient/EEVE-Math-10.8B
+  - model: yanolja/EEVE-Korean-Instruct-10.8B-v1.0
     parameters:
-      density: 1
+      density: 0.53
       weight: 0.6
+  - model: kuotient/EEVE-Math-10.8B
+    parameters:
+      density: 0.53
+      weight: 0.4
 merge_method: dare_ties
-base_model: yanolja/EEVE-Korean-Instruct-10.8B-v1.0
+base_model: yanolja/EEVE-Korean-10.8B-v1.0
 parameters:
   int8_mask: true
 dtype: bfloat16
-```

-## Evaluation
-[gsm8k-ko](https://huggingface.co/datasets/kuotient/gsm8k-ko), kobest
-```
-git clone https://github.com/kuotient/lm-evaluation-harness.git
-cd lm-evaluation-harness
-pip install -e .
 ```
-```
-lm_eval --model hf \
-    --model_args pretrained=yanolja/EEVE-Korean-Instruct-10.8B-v1.0 \
-    --tasks gsm8k-ko \
-    --device cuda:0 \
-    --batch_size auto:4
-```
-
-| Model | gsm8k(pass@1) | boolq(acc) | copa(acc) | hellaswag(acc) | Overall |
-|---|---|---|---|---|---|
-| yanolja/EEVE-Korean-10.8B-v1.0 | 0.4049 | - | - | - | - |
-| yanolja/EEVE-Korean-Instruct-10.8B-v1.0 | 0.4511 | **0.8668** | **0.7450** | **0.4940** | 0.6392 |
-| [**EEVE-Math-10.8B**](https://huggingface.co/kuotient/EEVE-Math-10.8B) | **0.5390** | 0.8027 | 0.7260 | 0.4760 | 0.6359 |
-| **EEVE-Instruct-Math-10.8B** | 0.4951 | 0.8283 | 0.7500 | 0.4880 | **0.6403** |
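The new README above only names the method: a DARE-TIES merge of EEVE-Korean-Instruct-10.8B and EEVE-Math-10.8B on top of EEVE-Korean-10.8B, with density 0.53 and weights 0.6/0.4. As a rough aid to reading that config, here is a minimal toy sketch of the two ideas, assuming plain dense tensors and a simple sum of sign-consistent deltas; it is not mergekit's actual code path.

```python
# Toy illustration only -- NOT mergekit's implementation. Shapes, the seed, and
# the simple summation of sign-consistent deltas are assumptions made for brevity.
import torch

def dare_ties_merge(base, tuned_models, weights, density=0.53, seed=0):
    """Merge task vectors with DARE (random drop + rescale) and TIES (sign consensus)."""
    torch.manual_seed(seed)
    deltas = []
    for tuned, weight in zip(tuned_models, weights):
        delta = tuned - base                             # task vector of a fine-tuned model
        keep = torch.rand_like(delta) < density          # DARE: keep ~density of the entries
        deltas.append(weight * delta * keep / density)   # rescale survivors by 1/density
    stacked = torch.stack(deltas)
    elected_sign = torch.sign(stacked.sum(dim=0))        # TIES: elect a per-parameter sign
    agrees = torch.sign(stacked) == elected_sign         # drop contributions that disagree
    merged_delta = torch.where(agrees, stacked, torch.zeros_like(stacked)).sum(dim=0)
    return base + merged_delta

# Toy tensors standing in for one weight matrix of the base, Instruct, and Math models.
base = torch.randn(8, 8)
instruct = base + 0.05 * torch.randn(8, 8)
math = base + 0.05 * torch.randn(8, 8)
merged = dare_ties_merge(base, [instruct, math], weights=[0.6, 0.4])
print(merged.shape)  # torch.Size([8, 8])
```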
 
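For completeness, a minimal inference sketch for the merged checkpoint, assuming it is published under the repo id `kuotient/EEVE-Instruct-Math-10.8B` (inferred from the removed model-index entry, not confirmed by this commit) and skipping the EEVE-Instruct prompt template for brevity.

```python
# Usage sketch; the repository id below is an assumption.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "kuotient/EEVE-Instruct-Math-10.8B"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the dtype used for the merge
    device_map="auto",
)

# A simple Korean grade-school math prompt, in the spirit of gsm8k-ko.
prompt = "사과가 3개 있고 5개를 더 받았습니다. 사과는 모두 몇 개입니까?"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```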
config.json CHANGED
@@ -1,5 +1,5 @@
 {
-  "_name_or_path": "yanolja/EEVE-Korean-Instruct-10.8B-v1.0",
+  "_name_or_path": "yanolja/EEVE-Korean-10.8B-v1.0",
   "architectures": [
     "LlamaForCausalLM"
   ],
mergekit_config.yml CHANGED
@@ -1,12 +1,16 @@
 models:
-  - model: yanolja/EEVE-Korean-Instruct-10.8B-v1.0
+  - model: yanolja/EEVE-Korean-10.8B-v1.0
     # no parameters necessary for base model
-  - model: kuotient/EEVE-Math-10.8B-SFT
+  - model: yanolja/EEVE-Korean-Instruct-10.8B-v1.0
     parameters:
-      density: 1
+      density: 0.53
       weight: 0.6
+  - model: kuotient/EEVE-Math-10.8B
+    parameters:
+      density: 0.53
+      weight: 0.4
 merge_method: dare_ties
-base_model: yanolja/EEVE-Korean-Instruct-10.8B-v1.0
+base_model: yanolja/EEVE-Korean-10.8B-v1.0
 parameters:
   int8_mask: true
 dtype: bfloat16
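For context, a config like `mergekit_config.yml` above is typically handed to mergekit's `mergekit-yaml` entry point to produce the merged weights. The sketch below is illustrative only: the output directory is made up, and the GPU flag is optional.

```python
# Illustrative invocation of mergekit on the config shown above; the output
# directory name is an assumption, not something recorded in this commit.
import subprocess

subprocess.run(
    [
        "mergekit-yaml",
        "mergekit_config.yml",          # DARE-TIES config from this repo
        "./EEVE-Instruct-Math-10.8B",   # assumed output directory
        "--cuda",                       # optional: run the merge on GPU
    ],
    check=True,
)
```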
model-00001-of-00003.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:0fc615af07a9208d6b93211c980fd97a05057455c30deb236988d46f87a7b551
+oid sha256:2f3c0f5a8b822a00ce02dba7b5501155e0ba361a89e43033dad5808e62e29166
 size 9898924776
model-00002-of-00003.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:b22e2f03056393d4f8f6c920c1980fd4dd82b51df20a24c4465d77d1eae22a44
+oid sha256:908c9149709e701103ea90b6eb46e306c64768c4780c0f6f859f508320718d03
 size 9831848976
model-00003-of-00003.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:ce55c3d6aa92c0a2216cc9cc87ae2d3fe1ce030025578d77e6a82903b902691e
+oid sha256:a90104a8aeaa3f67074a68805f41f5886a908fb14dcf23e0e6dc6c95ba635e2d
 size 1879125728
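The three `.safetensors` entries above are git-lfs pointer files, so only the sha256 oid changes in each diff while the size stays the same. A downloaded shard can be checked against the recorded oid with a short hash routine like the sketch below (file name and oid taken from the first shard above).

```python
# Verify a downloaded shard against the sha256 oid from its git-lfs pointer file.
import hashlib

def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

expected = "2f3c0f5a8b822a00ce02dba7b5501155e0ba361a89e43033dad5808e62e29166"
print(sha256_of("model-00001-of-00003.safetensors") == expected)
```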