Files changed (1)
  1. README.md +117 -0
README.md CHANGED
@@ -1,4 +1,121 @@
  ---
  license: unknown
+ model-index:
+ - name: Mistral-7B-golden
+   results:
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: AI2 Reasoning Challenge (25-Shot)
+       type: ai2_arc
+       config: ARC-Challenge
+       split: test
+       args:
+         num_few_shot: 25
+     metrics:
+     - type: acc_norm
+       value: 60.75
+       name: normalized accuracy
+     source:
+       url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liuda1/Mistral-7B-golden
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: HellaSwag (10-Shot)
+       type: hellaswag
+       split: validation
+       args:
+         num_few_shot: 10
+     metrics:
+     - type: acc_norm
+       value: 44.42
+       name: normalized accuracy
+     source:
+       url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liuda1/Mistral-7B-golden
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: MMLU (5-Shot)
+       type: cais/mmlu
+       config: all
+       split: test
+       args:
+         num_few_shot: 5
+     metrics:
+     - type: acc
+       value: 59.29
+       name: accuracy
+     source:
+       url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liuda1/Mistral-7B-golden
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: TruthfulQA (0-shot)
+       type: truthful_qa
+       config: multiple_choice
+       split: validation
+       args:
+         num_few_shot: 0
+     metrics:
+     - type: mc2
+       value: 53.51
+     source:
+       url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liuda1/Mistral-7B-golden
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: Winogrande (5-shot)
+       type: winogrande
+       config: winogrande_xl
+       split: validation
+       args:
+         num_few_shot: 5
+     metrics:
+     - type: acc
+       value: 76.64
+       name: accuracy
+     source:
+       url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liuda1/Mistral-7B-golden
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: GSM8k (5-shot)
+       type: gsm8k
+       config: main
+       split: test
+       args:
+         num_few_shot: 5
+     metrics:
+     - type: acc
+       value: 20.32
+       name: accuracy
+     source:
+       url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=liuda1/Mistral-7B-golden
+       name: Open LLM Leaderboard
  ---
  Our model uses Mistral-7B-v0.1 as the base model, fine-tuned on an English chat dataset and then given further reinforcement training on specific datasets. The trained model has a reasonable level of chat ability, which we found to be improved in our own testing. We will continue to train the model to improve its Chinese chat ability.
+
+ # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+ Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_liuda1__Mistral-7B-golden)
+
+ | Metric                           | Value |
+ |----------------------------------|------:|
+ | Avg.                             | 52.49 |
+ | AI2 Reasoning Challenge (25-Shot)| 60.75 |
+ | HellaSwag (10-Shot)              | 44.42 |
+ | MMLU (5-Shot)                    | 59.29 |
+ | TruthfulQA (0-shot)              | 53.51 |
+ | Winogrande (5-shot)              | 76.64 |
+ | GSM8k (5-shot)                   | 20.32 |
+
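
The "Avg." row in the table above is the arithmetic mean of the six per-benchmark scores. The short check below is an illustration added for clarity, not part of the PR:

```python
# The leaderboard "Avg." value is the mean of the six benchmark scores in the table.
scores = [60.75, 44.42, 59.29, 53.51, 76.64, 20.32]
print(round(sum(scores) / len(scores), 2))  # -> 52.49
```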
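For readers who want to try the model described above, here is a minimal usage sketch, assuming the repo id `liuda1/Mistral-7B-golden` taken from the leaderboard links and the standard Hugging Face Transformers causal-LM API; the prompt text and generation settings are illustrative only and do not come from the model card:

```python
# Minimal usage sketch (illustrative, not from the model card): load the model with
# the standard Transformers causal-LM classes and generate a short reply.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "liuda1/Mistral-7B-golden"  # repo id from the leaderboard links above

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")  # device_map requires `accelerate`

prompt = "Hello! Please introduce yourself."  # hypothetical prompt, for illustration only
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```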