Files changed (1): README.md (+117 −0)

README.md CHANGED

@@ -29,6 +29,109 @@ prompt_template: '[INST] <<SYS>>
 
 '
 quantized_by: TheBloke
+model-index:
+- name: Llama-2-70B-chat-GPTQ
+  results:
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: AI2 Reasoning Challenge (25-Shot)
+      type: ai2_arc
+      config: ARC-Challenge
+      split: test
+      args:
+        num_few_shot: 25
+    metrics:
+    - type: acc_norm
+      value: 62.63
+      name: normalized accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TheBloke/Llama-2-70B-chat-GPTQ
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: HellaSwag (10-Shot)
+      type: hellaswag
+      split: validation
+      args:
+        num_few_shot: 10
+    metrics:
+    - type: acc_norm
+      value: 84.81
+      name: normalized accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TheBloke/Llama-2-70B-chat-GPTQ
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MMLU (5-Shot)
+      type: cais/mmlu
+      config: all
+      split: test
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 62.74
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TheBloke/Llama-2-70B-chat-GPTQ
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: TruthfulQA (0-shot)
+      type: truthful_qa
+      config: multiple_choice
+      split: validation
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: mc2
+      value: 50.98
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TheBloke/Llama-2-70B-chat-GPTQ
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: Winogrande (5-shot)
+      type: winogrande
+      config: winogrande_xl
+      split: validation
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 78.69
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TheBloke/Llama-2-70B-chat-GPTQ
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: GSM8k (5-shot)
+      type: gsm8k
+      config: main
+      split: test
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 18.65
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=TheBloke/Llama-2-70B-chat-GPTQ
+      name: Open LLM Leaderboard
 ---
 
 <!-- header start -->
@@ -385,3 +488,17 @@ Please report any software “bug,” or other problems with the models through
 
 |7B| [Link](https://huggingface.co/llamaste/Llama-2-7b) | [Link](https://huggingface.co/llamaste/Llama-2-7b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-7b-chat-hf)|
 |13B| [Link](https://huggingface.co/llamaste/Llama-2-13b) | [Link](https://huggingface.co/llamaste/Llama-2-13b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-13b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-13b-chat-hf)|
 |70B| [Link](https://huggingface.co/llamaste/Llama-2-70b) | [Link](https://huggingface.co/llamaste/Llama-2-70b-hf) | [Link](https://huggingface.co/llamaste/Llama-2-70b-chat) | [Link](https://huggingface.co/llamaste/Llama-2-70b-chat-hf)|
+
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_TheBloke__Llama-2-70B-chat-GPTQ).
+
+| Metric                           | Value |
+|----------------------------------|------:|
+| Avg.                             | 59.75 |
+| AI2 Reasoning Challenge (25-Shot)| 62.63 |
+| HellaSwag (10-Shot)              | 84.81 |
+| MMLU (5-Shot)                    | 62.74 |
+| TruthfulQA (0-shot)              | 50.98 |
+| Winogrande (5-shot)              | 78.69 |
+| GSM8k (5-shot)                   | 18.65 |
+
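As a sanity check on the added table, the Avg. row appears to be the unweighted arithmetic mean of the six benchmark scores, rounded to two decimals. A minimal Python sketch (scores taken verbatim from the diff above; the dictionary layout is just for illustration):

```python
# Benchmark scores from the model-index metadata added in this diff
scores = {
    "AI2 Reasoning Challenge (25-Shot)": 62.63,
    "HellaSwag (10-Shot)": 84.81,
    "MMLU (5-Shot)": 62.74,
    "TruthfulQA (0-shot)": 50.98,
    "Winogrande (5-shot)": 78.69,
    "GSM8k (5-shot)": 18.65,
}

# Unweighted mean over the six tasks, rounded to two decimals
avg = round(sum(scores.values()) / len(scores), 2)
print(avg)  # 59.75 (matches the Avg. row in the table)
```

This confirms the Avg. value in the table is internally consistent with the six per-task metrics in the `model-index` block.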