Adding Evaluation Results #2
Files changed (1)
1. README.md +117 -7
README.md CHANGED
@@ -1,16 +1,16 @@
 ---
-license: apache-2.0
-base_model:
-- speakleash/Bielik-11B-v2
-- speakleash/Bielik-11B-v2.0-Instruct
-- speakleash/Bielik-11B-v2.1-Instruct
-- speakleash/Bielik-11B-v2.2-Instruct
 language:
 - pl
+license: apache-2.0
 library_name: transformers
 tags:
 - merge
 - mergekit
+base_model:
+- speakleash/Bielik-11B-v2
+- speakleash/Bielik-11B-v2.0-Instruct
+- speakleash/Bielik-11B-v2.1-Instruct
+- speakleash/Bielik-11B-v2.2-Instruct
 inference:
   parameters:
     temperature: 0.2
@@ -18,7 +18,103 @@ widget:
 - messages:
   - role: user
     content: Co przedstawia polskie godło?
-extra_gated_description: If you want to learn more about how you can use the model, please refer to our <a href="https://bielik.ai/terms/">Terms of Use</a>.
+extra_gated_description: If you want to learn more about how you can use the model,
+  please refer to our <a href="https://bielik.ai/terms/">Terms of Use</a>.
+model-index:
+- name: Bielik-11B-v2.3-Instruct
+  results:
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: IFEval (0-Shot)
+      type: HuggingFaceH4/ifeval
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: inst_level_strict_acc and prompt_level_strict_acc
+      value: 55.83
+      name: strict accuracy
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=speakleash/Bielik-11B-v2.3-Instruct
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: BBH (3-Shot)
+      type: BBH
+      args:
+        num_few_shot: 3
+    metrics:
+    - type: acc_norm
+      value: 38.06
+      name: normalized accuracy
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=speakleash/Bielik-11B-v2.3-Instruct
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MATH Lvl 5 (4-Shot)
+      type: hendrycks/competition_math
+      args:
+        num_few_shot: 4
+    metrics:
+    - type: exact_match
+      value: 8.01
+      name: exact match
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=speakleash/Bielik-11B-v2.3-Instruct
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: GPQA (0-shot)
+      type: Idavidrein/gpqa
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: acc_norm
+      value: 12.08
+      name: acc_norm
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=speakleash/Bielik-11B-v2.3-Instruct
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MuSR (0-shot)
+      type: TAUR-Lab/MuSR
+      args:
+        num_few_shot: 0
+    metrics:
+    - type: acc_norm
+      value: 16.01
+      name: acc_norm
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=speakleash/Bielik-11B-v2.3-Instruct
+      name: Open LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: MMLU-PRO (5-shot)
+      type: TIGER-Lab/MMLU-Pro
+      config: main
+      split: test
+      args:
+        num_few_shot: 5
+    metrics:
+    - type: acc
+      value: 27.16
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=speakleash/Bielik-11B-v2.3-Instruct
+      name: Open LLM Leaderboard
 ---
 
 <p align="center">
@@ -392,3 +488,17 @@ Members of the ACK Cyfronet AGH team providing valuable support and expertise:
 ## Contact Us
 
 If you have any questions or suggestions, please use the discussion tab. If you want to contact us directly, join our [Discord SpeakLeash](https://discord.gg/pv4brQMDTy).
+
+# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/speakleash__Bielik-11B-v2.3-Instruct-details)
+
+| Metric            |Value|
+|-------------------|----:|
+|Avg.               |26.19|
+|IFEval (0-Shot)    |55.83|
+|BBH (3-Shot)       |38.06|
+|MATH Lvl 5 (4-Shot)| 8.01|
+|GPQA (0-shot)      |12.08|
+|MuSR (0-shot)      |16.01|
+|MMLU-PRO (5-shot)  |27.16|
+
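The `Avg.` row in the added table is the plain arithmetic mean of the six benchmark scores. A quick sanity check in plain Python, with the values copied from the diff:

```python
# Benchmark scores copied from the leaderboard table added in this PR.
scores = {
    "IFEval (0-Shot)": 55.83,
    "BBH (3-Shot)": 38.06,
    "MATH Lvl 5 (4-Shot)": 8.01,
    "GPQA (0-shot)": 12.08,
    "MuSR (0-shot)": 16.01,
    "MMLU-PRO (5-shot)": 27.16,
}

# "Avg." is the unweighted mean of the six benchmarks, rounded to 2 decimals.
avg = round(sum(scores.values()) / len(scores), 2)
print(avg)  # 26.19, matching the Avg. row in the table
```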
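The flat results table is derived from the nested `model-index` metadata the PR adds to the YAML front matter. A minimal sketch of that mapping, using plain Python dicts to stand in for the parsed YAML (only the first task entry is shown; the other five follow the same structure):

```python
# Parsed form of the first `model-index` entry added in this diff,
# written out as plain Python data (no YAML parser needed for the sketch).
model_index = [
    {
        "name": "Bielik-11B-v2.3-Instruct",
        "results": [
            {
                "task": {"type": "text-generation", "name": "Text Generation"},
                "dataset": {
                    "name": "IFEval (0-Shot)",
                    "type": "HuggingFaceH4/ifeval",
                    "args": {"num_few_shot": 0},
                },
                "metrics": [
                    {
                        "type": "inst_level_strict_acc and prompt_level_strict_acc",
                        "value": 55.83,
                        "name": "strict accuracy",
                    }
                ],
            }
        ],
    }
]

# Flatten the nesting: one (dataset name, metric value) table row per result.
rows = [
    (result["dataset"]["name"], result["metrics"][0]["value"])
    for model in model_index
    for result in model["results"]
]
print(rows)  # [('IFEval (0-Shot)', 55.83)]
```

With all six result entries present, this flattening yields exactly the benchmark rows of the markdown table above (the `Avg.` row is computed separately).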