leaderboard-pt-pr-bot committed on
Commit 14b2bbf
1 Parent(s): 992c46b

Adding the Open Portuguese LLM Leaderboard Evaluation Results


This is an automated PR created with https://huggingface.co/spaces/eduagarcia-temp/portuguese-leaderboard-results-to-modelcard

The purpose of this PR is to add evaluation results from the [🚀 Open Portuguese LLM Leaderboard](https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard) to your model card.

If you encounter any issues, please report them to https://huggingface.co/spaces/eduagarcia-temp/portuguese-leaderboard-results-to-modelcard/discussions
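
Once merged, the `model-index` block this PR adds is machine-readable metadata in the README's YAML front matter. As a minimal sketch of how the results could be read back out (the local file path and the PyYAML-based parsing below are illustrative assumptions, not part of the automated PR):

```python
# Sketch: parse the model card front matter and print the added leaderboard results.
# Assumes the merged README.md is available locally and PyYAML is installed.
import yaml

with open("README.md", encoding="utf-8") as f:
    text = f.read()

# The metadata block sits between the first two '---' markers.
_, front_matter, _ = text.split("---", 2)
card_data = yaml.safe_load(front_matter)

for result in card_data["model-index"][0]["results"]:
    dataset = result["dataset"]["name"]
    metric = result["metrics"][0]
    print(f"{dataset}: {metric['type']} = {metric['value']}")
```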

Files changed (1)
  1. README.md +168 -2
README.md CHANGED
@@ -1,5 +1,4 @@
  ---
- base_model: unsloth/gemma-2-9b-it-bnb-4bit
  language:
  - pt
  license: apache-2.0
@@ -10,8 +9,156 @@ tags:
  - gemma2
  - trl
  - sft
+ base_model: unsloth/gemma-2-9b-it-bnb-4bit
  datasets:
  - lucianosb/cetacean-ptbr
+ model-index:
+ - name: boto-9B-it
+   results:
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: ENEM Challenge (No Images)
+       type: eduagarcia/enem_challenge
+       split: train
+       args:
+         num_few_shot: 3
+     metrics:
+     - type: acc
+       value: 70.4
+       name: accuracy
+     source:
+       url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=lucianosb/boto-9B-it
+       name: Open Portuguese LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: BLUEX (No Images)
+       type: eduagarcia-temp/BLUEX_without_images
+       split: train
+       args:
+         num_few_shot: 3
+     metrics:
+     - type: acc
+       value: 60.78
+       name: accuracy
+     source:
+       url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=lucianosb/boto-9B-it
+       name: Open Portuguese LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: OAB Exams
+       type: eduagarcia/oab_exams
+       split: train
+       args:
+         num_few_shot: 3
+     metrics:
+     - type: acc
+       value: 52.57
+       name: accuracy
+     source:
+       url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=lucianosb/boto-9B-it
+       name: Open Portuguese LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: Assin2 RTE
+       type: assin2
+       split: test
+       args:
+         num_few_shot: 15
+     metrics:
+     - type: f1_macro
+       value: 94.04
+       name: f1-macro
+     source:
+       url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=lucianosb/boto-9B-it
+       name: Open Portuguese LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: Assin2 STS
+       type: eduagarcia/portuguese_benchmark
+       split: test
+       args:
+         num_few_shot: 15
+     metrics:
+     - type: pearson
+       value: 80.83
+       name: pearson
+     source:
+       url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=lucianosb/boto-9B-it
+       name: Open Portuguese LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: FaQuAD NLI
+       type: ruanchaves/faquad-nli
+       split: test
+       args:
+         num_few_shot: 15
+     metrics:
+     - type: f1_macro
+       value: 77.59
+       name: f1-macro
+     source:
+       url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=lucianosb/boto-9B-it
+       name: Open Portuguese LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: HateBR Binary
+       type: ruanchaves/hatebr
+       split: test
+       args:
+         num_few_shot: 25
+     metrics:
+     - type: f1_macro
+       value: 82.85
+       name: f1-macro
+     source:
+       url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=lucianosb/boto-9B-it
+       name: Open Portuguese LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: PT Hate Speech Binary
+       type: hate_speech_portuguese
+       split: test
+       args:
+         num_few_shot: 25
+     metrics:
+     - type: f1_macro
+       value: 72.58
+       name: f1-macro
+     source:
+       url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=lucianosb/boto-9B-it
+       name: Open Portuguese LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: tweetSentBR
+       type: eduagarcia/tweetsentbr_fewshot
+       split: test
+       args:
+         num_few_shot: 25
+     metrics:
+     - type: f1_macro
+       value: 71.19
+       name: f1-macro
+     source:
+       url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=lucianosb/boto-9B-it
+       name: Open Portuguese LLM Leaderboard
  ---
  
  # Boto 9B IT
@@ -50,4 +197,23 @@ O uso do modelo é de inteira responsabilidade do usuário. O desenvolvedor do m
  
  This gemma2 model was trained 2x faster with [Unsloth](https://github.com/unslothai/unsloth) and Huggingface's TRL library.
  
- [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
+ [<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
+
+
+ # Open Portuguese LLM Leaderboard Evaluation Results
+
+ Detailed results can be found [here](https://huggingface.co/datasets/eduagarcia-temp/llm_pt_leaderboard_raw_results/tree/main/lucianosb/boto-9B-it) and on the [🚀 Open Portuguese LLM Leaderboard](https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard)
+
+ | Metric | Value |
+ |--------------------------|---------|
+ |Average |**73.65**|
+ |ENEM Challenge (No Images)| 70.40|
+ |BLUEX (No Images) | 60.78|
+ |OAB Exams | 52.57|
+ |Assin2 RTE | 94.04|
+ |Assin2 STS | 80.83|
+ |FaQuAD NLI | 77.59|
+ |HateBR Binary | 82.85|
+ |PT Hate Speech Binary | 72.58|
+ |tweetSentBR | 71.19|
+
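
The Average row in the added table appears to be the unweighted mean of the nine benchmark scores; a quick check of the arithmetic (assuming a simple mean, which matches the reported 73.65):

```python
# Hypothetical check: the leaderboard "Average" as a plain unweighted mean
# of the nine benchmark scores added in this PR.
scores = {
    "ENEM Challenge (No Images)": 70.40,
    "BLUEX (No Images)": 60.78,
    "OAB Exams": 52.57,
    "Assin2 RTE": 94.04,
    "Assin2 STS": 80.83,
    "FaQuAD NLI": 77.59,
    "HateBR Binary": 82.85,
    "PT Hate Speech Binary": 72.58,
    "tweetSentBR": 71.19,
}

average = sum(scores.values()) / len(scores)
print(round(average, 2))  # 73.65
```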