wandgibaut leaderboard-pt-pr-bot committed on
Commit 661ec62
1 Parent(s): 58ca939

Adding the Open Portuguese LLM Leaderboard Evaluation Results (#1)


- Adding the Open Portuguese LLM Leaderboard Evaluation Results (73ed09c9465c89a654ced223c9ef6b863f324f5b)
- Fixing some errors of the leaderboard evaluation results in the ModelCard yaml (892079dc716e3e7cee14e22f5e77eabbc2f3c9bf)


Co-authored-by: Open PT LLM Leaderboard PR Bot <leaderboard-pt-pr-bot@users.noreply.huggingface.co>

Files changed (1)
  1. README.md +167 -3
README.md CHANGED
@@ -1,12 +1,159 @@
  ---
+ language:
+ - pt
  license: apache-2.0
+ library_name: transformers
  datasets:
  - wikimedia/wikipedia
- language:
- - pt
  metrics:
  - accuracy
- library_name: transformers
+ model-index:
+ - name: periquito-3B
+   results:
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: ENEM Challenge (No Images)
+       type: eduagarcia/enem_challenge
+       split: train
+       args:
+         num_few_shot: 3
+     metrics:
+     - type: acc
+       value: 17.98
+       name: accuracy
+     source:
+       url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=wandgibaut/periquito-3B
+       name: Open Portuguese LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: BLUEX (No Images)
+       type: eduagarcia-temp/BLUEX_without_images
+       split: train
+       args:
+         num_few_shot: 3
+     metrics:
+     - type: acc
+       value: 21.14
+       name: accuracy
+     source:
+       url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=wandgibaut/periquito-3B
+       name: Open Portuguese LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: OAB Exams
+       type: eduagarcia/oab_exams
+       split: train
+       args:
+         num_few_shot: 3
+     metrics:
+     - type: acc
+       value: 22.69
+       name: accuracy
+     source:
+       url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=wandgibaut/periquito-3B
+       name: Open Portuguese LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: Assin2 RTE
+       type: assin2
+       split: test
+       args:
+         num_few_shot: 15
+     metrics:
+     - type: f1_macro
+       value: 43.01
+       name: f1-macro
+     source:
+       url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=wandgibaut/periquito-3B
+       name: Open Portuguese LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: Assin2 STS
+       type: eduagarcia/portuguese_benchmark
+       split: test
+       args:
+         num_few_shot: 15
+     metrics:
+     - type: pearson
+       value: 8.92
+       name: pearson
+     source:
+       url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=wandgibaut/periquito-3B
+       name: Open Portuguese LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: FaQuAD NLI
+       type: ruanchaves/faquad-nli
+       split: test
+       args:
+         num_few_shot: 15
+     metrics:
+     - type: f1_macro
+       value: 43.97
+       name: f1-macro
+     source:
+       url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=wandgibaut/periquito-3B
+       name: Open Portuguese LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: HateBR Binary
+       type: ruanchaves/hatebr
+       split: test
+       args:
+         num_few_shot: 25
+     metrics:
+     - type: f1_macro
+       value: 50.46
+       name: f1-macro
+     source:
+       url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=wandgibaut/periquito-3B
+       name: Open Portuguese LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: PT Hate Speech Binary
+       type: hate_speech_portuguese
+       split: test
+       args:
+         num_few_shot: 25
+     metrics:
+     - type: f1_macro
+       value: 41.19
+       name: f1-macro
+     source:
+       url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=wandgibaut/periquito-3B
+       name: Open Portuguese LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: tweetSentBR
+       type: eduagarcia-temp/tweetsentbr
+       split: test
+       args:
+         num_few_shot: 25
+     metrics:
+     - type: f1_macro
+       value: 47.96
+       name: f1-macro
+     source:
+       url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=wandgibaut/periquito-3B
+       name: Open Portuguese LLM Leaderboard
  ---
  # Model Card for Model ID
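The `model-index` block added above follows the Hub's standard model card metadata schema, so the leaderboard scores become machine-readable once the PR is merged. A minimal sketch of reading them back with the `huggingface_hub` Python client (the attribute names below are the client's generic `EvalResult` fields, not anything specific to this model):

```python
# Minimal sketch: read the leaderboard scores back out of the model card metadata.
# Assumes `pip install huggingface_hub` and that the merged card is live on the Hub.
from huggingface_hub import ModelCard

card = ModelCard.load("wandgibaut/periquito-3B")

# card.data.eval_results is parsed from the `model-index` YAML block shown above.
for result in card.data.eval_results or []:
    print(f"{result.dataset_name:<30} {result.metric_type:<10} {result.metric_value}")
```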
@@ -207,3 +354,20 @@ If you found periquito-3B useful in your research or applications, please cite u
  url = {https://huggingface.co/wandgibaut/periquito-3B}
  }
  ```
+
+ # [Open Portuguese LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard)
+ Detailed results can be found [here](https://huggingface.co/datasets/eduagarcia-temp/llm_pt_leaderboard_raw_results/tree/main/wandgibaut/periquito-3B)
+
+ | Metric | Value |
+ |--------------------------|---------|
+ |Average |**33.04**|
+ |ENEM Challenge (No Images)| 17.98|
+ |BLUEX (No Images) | 21.14|
+ |OAB Exams | 22.69|
+ |Assin2 RTE | 43.01|
+ |Assin2 STS | 8.92|
+ |FaQuAD NLI | 43.97|
+ |HateBR Binary | 50.46|
+ |PT Hate Speech Binary | 41.19|
+ |tweetSentBR | 47.96|
+
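The Average row appears to be the unweighted arithmetic mean of the nine task scores in the table; a quick sketch to check the arithmetic:

```python
# Quick check: the "Average" row should equal the unweighted mean of the nine task scores.
scores = {
    "ENEM Challenge (No Images)": 17.98,
    "BLUEX (No Images)": 21.14,
    "OAB Exams": 22.69,
    "Assin2 RTE": 43.01,
    "Assin2 STS": 8.92,
    "FaQuAD NLI": 43.97,
    "HateBR Binary": 50.46,
    "PT Hate Speech Binary": 41.19,
    "tweetSentBR": 47.96,
}
average = sum(scores.values()) / len(scores)
print(round(average, 2))  # 33.04, matching the table
```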