Adding the Open Portuguese LLM Leaderboard Evaluation Results

#1
Files changed (1)
  1. README.md +166 -0
README.md CHANGED
@@ -2,6 +2,153 @@
  license: apache-2.0
  library_name: transformers
  pipeline_tag: text-generation
+ model-index:
+ - name: internlm2-chat-1_8b-ultracabrita
+   results:
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: ENEM Challenge (No Images)
+       type: eduagarcia/enem_challenge
+       split: train
+       args:
+         num_few_shot: 3
+     metrics:
+     - type: acc
+       value: 35.48
+       name: accuracy
+     source:
+       url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=recogna-nlp/internlm2-chat-1_8b-ultracabrita
+       name: Open Portuguese LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: BLUEX (No Images)
+       type: eduagarcia-temp/BLUEX_without_images
+       split: train
+       args:
+         num_few_shot: 3
+     metrics:
+     - type: acc
+       value: 30.74
+       name: accuracy
+     source:
+       url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=recogna-nlp/internlm2-chat-1_8b-ultracabrita
+       name: Open Portuguese LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: OAB Exams
+       type: eduagarcia/oab_exams
+       split: train
+       args:
+         num_few_shot: 3
+     metrics:
+     - type: acc
+       value: 30.11
+       name: accuracy
+     source:
+       url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=recogna-nlp/internlm2-chat-1_8b-ultracabrita
+       name: Open Portuguese LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: Assin2 RTE
+       type: assin2
+       split: test
+       args:
+         num_few_shot: 15
+     metrics:
+     - type: f1_macro
+       value: 84.74
+       name: f1-macro
+     source:
+       url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=recogna-nlp/internlm2-chat-1_8b-ultracabrita
+       name: Open Portuguese LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: Assin2 STS
+       type: eduagarcia/portuguese_benchmark
+       split: test
+       args:
+         num_few_shot: 15
+     metrics:
+     - type: pearson
+       value: 60.31
+       name: pearson
+     source:
+       url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=recogna-nlp/internlm2-chat-1_8b-ultracabrita
+       name: Open Portuguese LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: FaQuAD NLI
+       type: ruanchaves/faquad-nli
+       split: test
+       args:
+         num_few_shot: 15
+     metrics:
+     - type: f1_macro
+       value: 43.97
+       name: f1-macro
+     source:
+       url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=recogna-nlp/internlm2-chat-1_8b-ultracabrita
+       name: Open Portuguese LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: HateBR Binary
+       type: ruanchaves/hatebr
+       split: test
+       args:
+         num_few_shot: 25
+     metrics:
+     - type: f1_macro
+       value: 72.31
+       name: f1-macro
+     source:
+       url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=recogna-nlp/internlm2-chat-1_8b-ultracabrita
+       name: Open Portuguese LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: PT Hate Speech Binary
+       type: hate_speech_portuguese
+       split: test
+       args:
+         num_few_shot: 25
+     metrics:
+     - type: f1_macro
+       value: 55.21
+       name: f1-macro
+     source:
+       url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=recogna-nlp/internlm2-chat-1_8b-ultracabrita
+       name: Open Portuguese LLM Leaderboard
+   - task:
+       type: text-generation
+       name: Text Generation
+     dataset:
+       name: tweetSentBR
+       type: eduagarcia/tweetsentbr_fewshot
+       split: test
+       args:
+         num_few_shot: 25
+     metrics:
+     - type: f1_macro
+       value: 50.71
+       name: f1-macro
+     source:
+       url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=recogna-nlp/internlm2-chat-1_8b-ultracabrita
+       name: Open Portuguese LLM Leaderboard
  ---

  # Model Card for Model ID,
 
@@ -198,3 +345,22 @@ Carbon emissions can be estimated using the [Machine Learning Impact calculator]
  ## Model Card Contact

  [More Information Needed]
+
+
+ # Open Portuguese LLM Leaderboard Evaluation Results
+
+ Detailed results can be found [here](https://huggingface.co/datasets/eduagarcia-temp/llm_pt_leaderboard_raw_results/tree/main/recogna-nlp/internlm2-chat-1_8b-ultracabrita) and on the [πŸš€ Open Portuguese LLM Leaderboard](https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard)
+
+ | Metric                   |  Value  |
+ |--------------------------|---------|
+ |Average                   |**51.51**|
+ |ENEM Challenge (No Images)|    35.48|
+ |BLUEX (No Images)         |    30.74|
+ |OAB Exams                 |    30.11|
+ |Assin2 RTE                |    84.74|
+ |Assin2 STS                |    60.31|
+ |FaQuAD NLI                |    43.97|
+ |HateBR Binary             |    72.31|
+ |PT Hate Speech Binary     |    55.21|
+ |tweetSentBR               |    50.71|
+
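
As a sanity check on the table above, the Average row matches the unweighted arithmetic mean of the nine per-task scores. A minimal sketch, assuming a plain mean over the values copied from the table (the leaderboard's own aggregation script is not shown here):

```python
# Sanity check (assumption: the leaderboard "Average" is a plain unweighted
# mean of the nine per-task scores reported in the table above).
scores = {
    "ENEM Challenge (No Images)": 35.48,
    "BLUEX (No Images)": 30.74,
    "OAB Exams": 30.11,
    "Assin2 RTE": 84.74,
    "Assin2 STS": 60.31,
    "FaQuAD NLI": 43.97,
    "HateBR Binary": 72.31,
    "PT Hate Speech Binary": 55.21,
    "tweetSentBR": 50.71,
}

average = sum(scores.values()) / len(scores)
print(f"Average: {average:.2f}")  # prints 51.51, matching the table
```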