Files changed (1)
  1. README.md +119 -3
README.md CHANGED
@@ -1,10 +1,113 @@
  ---
+ language:
+ - en
  license: apache-2.0
  datasets:
  - teknium/OpenHermes-2.5
  - abhinand/ultrachat_200k_sharegpt
- language:
- - en
+ model-index:
+ - name: TinyLlama-1.1B-OpenHermes-2.5-Chat-v0.1-sft
+   results:
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: AI2 Reasoning Challenge (25-Shot)
+       type: ai2_arc
+       config: ARC-Challenge
+       split: test
+       args:
+         num_few_shot: 25
+     metrics:
+     - type: acc_norm
+       value: 33.79
+       name: normalized accuracy
+     source:
+       url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=abhinand/TinyLlama-1.1B-OpenHermes-2.5-Chat-v0.1-sft
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: HellaSwag (10-Shot)
+       type: hellaswag
+       split: validation
+       args:
+         num_few_shot: 10
+     metrics:
+     - type: acc_norm
+       value: 58.72
+       name: normalized accuracy
+     source:
+       url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=abhinand/TinyLlama-1.1B-OpenHermes-2.5-Chat-v0.1-sft
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: MMLU (5-Shot)
+       type: cais/mmlu
+       config: all
+       split: test
+       args:
+         num_few_shot: 5
+     metrics:
+     - type: acc
+       value: 24.52
+       name: accuracy
+     source:
+       url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=abhinand/TinyLlama-1.1B-OpenHermes-2.5-Chat-v0.1-sft
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: TruthfulQA (0-shot)
+       type: truthful_qa
+       config: multiple_choice
+       split: validation
+       args:
+         num_few_shot: 0
+     metrics:
+     - type: mc2
+       value: 36.22
+     source:
+       url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=abhinand/TinyLlama-1.1B-OpenHermes-2.5-Chat-v0.1-sft
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: Winogrande (5-shot)
+       type: winogrande
+       config: winogrande_xl
+       split: validation
+       args:
+         num_few_shot: 5
+     metrics:
+     - type: acc
+       value: 60.93
+       name: accuracy
+     source:
+       url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=abhinand/TinyLlama-1.1B-OpenHermes-2.5-Chat-v0.1-sft
+       name: Open LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: GSM8k (5-shot)
+       type: gsm8k
+       config: main
+       split: test
+       args:
+         num_few_shot: 5
+     metrics:
+     - type: acc
+       value: 5.38
+       name: accuracy
+     source:
+       url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=abhinand/TinyLlama-1.1B-OpenHermes-2.5-Chat-v0.1-sft
+       name: Open LLM Leaderboard
  ---

  # TinyLLaMA OpenHermes2.5 [Work in Progress]
@@ -144,4 +247,17 @@ The following hyperparameters were used during training:
  - Transformers 4.38.0.dev0
  - Pytorch 2.0.1
  - Datasets 2.16.1
- - Tokenizers 0.15.0
+ - Tokenizers 0.15.0
+ # [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+ Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_abhinand__TinyLlama-1.1B-OpenHermes-2.5-Chat-v0.1-sft)
+
+ | Metric |Value|
+ |---------------------------------|----:|
+ |Avg. |36.59|
+ |AI2 Reasoning Challenge (25-Shot)|33.79|
+ |HellaSwag (10-Shot) |58.72|
+ |MMLU (5-Shot) |24.52|
+ |TruthfulQA (0-shot) |36.22|
+ |Winogrande (5-shot) |60.93|
+ |GSM8k (5-shot) | 5.38|
+
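The Avg. row in the added table is the simple mean of the six benchmark scores: (33.79 + 58.72 + 24.52 + 36.22 + 60.93 + 5.38) / 6 ≈ 36.59.

For readers who want to try the checkpoint this card describes, here is a minimal usage sketch. It is not part of the PR; it assumes the repo (id taken from the URLs in the diff) ships standard Transformers weights and that the tokenizer config defines a chat template, and the prompt and generation settings are illustrative only.

```python
# Minimal usage sketch (not part of this PR). Assumes the repo ships standard
# Transformers weights and that the tokenizer config defines a chat template.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "abhinand/TinyLlama-1.1B-OpenHermes-2.5-Chat-v0.1-sft"  # repo id from the URLs in the diff
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Summarize what the Open LLM Leaderboard measures."},
]
# Format the conversation with the model's own chat template before generating.
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
# Strip the prompt tokens and decode only the newly generated reply.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```

The scores reported in the table come from the Open LLM Leaderboard evaluation linked above, not from ad-hoc generation like this sketch.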