davidkim205 committed on
Commit
e3d48b3
1 Parent(s): cdc26cb

Update README.md

Files changed (1)
  1. README.md +22 -1
README.md CHANGED
@@ -24,4 +24,25 @@ This study addresses these challenges by introducing a multi-task instruction te
 
 * **Model Developers** : davidkim(changyeon kim)
 * **Repository** : https://github.com/davidkim205/komt
-* **quant methods** : q4_0, q4_1, q5_0, q5_1, q2_k, q3_k, q3_k_m, q3_k_l, q4_k, q4_k_s, q4_k_m, q5_k, q5_k_s, q5_k_m, q8_0, q4_0
+* **quant methods** : q4_0, q4_1, q5_0, q5_1, q2_k, q3_k, q3_k_m, q3_k_l, q4_k, q4_k_s, q4_k_m, q5_k, q5_k_s, q5_k_m, q8_0, q4_0
+
+
+## Training
+Refer to https://github.com/davidkim205/komt
+
+## Evaluation
+
+| model                                   | score   | average(0~5) | percentage |
+| --------------------------------------- | ------- | ------------ | ---------- |
+| gpt-3.5-turbo(close)                    | 147     | 3.97         | 79.45%     |
+| naver Cue(close)                        | 140     | 3.78         | 75.67%     |
+| clova X(close)                          | 136     | 3.67         | 73.51%     |
+| WizardLM-13B-V1.2(open)                 | 96      | 2.59         | 51.89%     |
+| Llama-2-7b-chat-hf(open)                | 67      | 1.81         | 36.21%     |
+| Llama-2-13b-chat-hf(open)               | 73      | 1.91         | 38.37%     |
+| nlpai-lab/kullm-polyglot-12.8b-v2(open) | 70      | 1.89         | 37.83%     |
+| kfkas/Llama-2-ko-7b-Chat(open)          | 96      | 2.59         | 51.89%     |
+| beomi/KoAlpaca-Polyglot-12.8B(open)     | 100     | 2.70         | 54.05%     |
+| **komt-llama2-7b-v1 (open)(ours)**      | **117** | **3.16**     | **63.24%** |
+| **komt-llama2-13b-v1 (open)(ours)**     | **129** | **3.48**     | **69.72%** |
+
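The `average(0~5)` and `percentage` columns in the evaluation table above can be reproduced from the raw score if the benchmark is assumed to consist of 37 prompts, each scored 0–5 (185 points maximum). That prompt count is an assumption inferred from the score/average ratios, not stated in the README, and a few table entries differ by ±0.01 from the recomputed values due to rounding. A minimal sketch:

```python
# Hypothetical reconstruction of the evaluation table's derived columns.
# N_PROMPTS = 37 is an assumption inferred from the score/average ratios
# (e.g. 147 / 3.97 ≈ 37); it is not stated in the README itself.
N_PROMPTS = 37
MAX_PER_PROMPT = 5


def metrics(score: int) -> tuple[float, float]:
    """Return (average on a 0-5 scale, percentage of max points)."""
    average = score / N_PROMPTS
    percentage = 100 * score / (N_PROMPTS * MAX_PER_PROMPT)
    return round(average, 2), round(percentage, 2)


# komt-llama2-7b-v1: the table lists score 117 -> 3.16 average, 63.24%
print(metrics(117))  # → (3.16, 63.24)
```

Under this assumption the open-source komt-llama2-13b-v1 row (129 points, 69.72%) closes most of the gap to the closed gpt-3.5-turbo baseline (147 points, 79.45%).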