prithivMLmods committed (verified)
Commit 2a6a8e7 · 1 Parent(s): 4b9918f

Adding Evaluation Results


This is an automated PR created with [this space](https://huggingface.co/spaces/T145/open-llm-leaderboard-results-to-modelcard)!

The purpose of this PR is to add evaluation results from the Open LLM Leaderboard to your model card.

Please report any issues here: https://huggingface.co/spaces/T145/open-llm-leaderboard-results-to-modelcard/discussions

Files changed (1)
  1. README.md +114 -1
README.md CHANGED
@@ -12,6 +12,105 @@ tags:
 - cot
 - lcot
 - LlaMa
+model-index:
+- name: Taurus-Opus-7B
+  results:
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: IFEval (0-Shot)
+      type: wis-k/instruction-following-eval
+      split: train
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: inst_level_strict_acc and prompt_level_strict_acc
+      value: 42.23
+      name: averaged accuracy
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FTaurus-Opus-7B
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: BBH (3-Shot)
+      type: SaylorTwift/bbh
+      split: test
+      args:
+        num_few_shot: 3
+    metrics:
+    - type: acc_norm
+      value: 34.23
+      name: normalized accuracy
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FTaurus-Opus-7B
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MATH Lvl 5 (4-Shot)
+      type: lighteval/MATH-Hard
+      split: test
+      args:
+        num_few_shot: 4
+    metrics:
+    - type: exact_match
+      value: 22.73
+      name: exact match
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FTaurus-Opus-7B
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: GPQA (0-shot)
+      type: Idavidrein/gpqa
+      split: train
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: acc_norm
+      value: 10.18
+      name: acc_norm
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FTaurus-Opus-7B
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MuSR (0-shot)
+      type: TAUR-Lab/MuSR
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: acc_norm
+      value: 14.22
+      name: acc_norm
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FTaurus-Opus-7B
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MMLU-PRO (5-shot)
+      type: TIGER-Lab/MMLU-Pro
+      config: main
+      split: test
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 32.79
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/?search=prithivMLmods%2FTaurus-Opus-7B
+      name: Open LLM Leaderboard
 ---
 
 # **Taurus-Opus-7B**
@@ -109,4 +208,18 @@ response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
 The model lacks awareness of events or knowledge updates beyond its training data.
 
 5. **Prompt Dependency**:
-Results heavily depend on the specificity and clarity of input prompts, requiring well-structured queries for the best performance.
+Results heavily depend on the specificity and clarity of input prompts, requiring well-structured queries for the best performance.
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/prithivMLmods__Taurus-Opus-7B-details)!
+Summarized results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/contents/viewer/default/train?q=prithivMLmods%2FTaurus-Opus-7B&sort[column]=Average%20%E2%AC%86%EF%B8%8F&sort[direction]=desc)!
+
+| Metric              | Value (%) |
+|---------------------|----------:|
+| **Average**         |     26.06 |
+| IFEval (0-Shot)     |     42.23 |
+| BBH (3-Shot)        |     34.23 |
+| MATH Lvl 5 (4-Shot) |     22.73 |
+| GPQA (0-shot)       |     10.18 |
+| MuSR (0-shot)       |     14.22 |
+| MMLU-PRO (5-shot)   |     32.79 |
+
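As a quick sanity check on the table this diff adds: the **Average** row is simply the unweighted mean of the six benchmark scores. A minimal Python sketch that reproduces it, and then reads the same numbers back from the `model-index` metadata (the `huggingface_hub` part assumes this PR has been merged, so the metadata is live on the Hub):

```python
from statistics import mean

# Benchmark scores from the summary table above (in percent).
scores = {
    "IFEval (0-Shot)": 42.23,
    "BBH (3-Shot)": 34.23,
    "MATH Lvl 5 (4-Shot)": 22.73,
    "GPQA (0-shot)": 10.18,
    "MuSR (0-shot)": 14.22,
    "MMLU-PRO (5-shot)": 32.79,
}

# The unweighted mean reproduces the reported Average.
print(round(mean(scores.values()), 2))  # 26.06

# Read the same values back from the model-index block this PR adds
# (requires `pip install huggingface_hub`; works once the PR is merged).
from huggingface_hub import ModelCard

card = ModelCard.load("prithivMLmods/Taurus-Opus-7B")
for result in card.data.eval_results:
    print(f"{result.dataset_name}: {result.metric_value}")
```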