Adding the Open Portuguese LLM Leaderboard Evaluation Results

#6
Files changed (1)
  1. README.md +169 -6
README.md CHANGED
@@ -1,16 +1,163 @@
 ---
-library_name: transformers
-base_model: codellama/CodeLlama-7b-Instruct-hf
-license: llama2
-datasets:
-- semantixai/Test-Dataset-Lloro
 language:
 - pt
+license: llama2
+library_name: transformers
 tags:
 - code
 - analytics
 - analise-dados
 - portugues-BR
+datasets:
+- semantixai/Test-Dataset-Lloro
+base_model: codellama/CodeLlama-7b-Instruct-hf
+model-index:
+- name: LloroV2
+  results:
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: ENEM Challenge (No Images)
+      type: eduagarcia/enem_challenge
+      split: train
+      args:
+        num_few_shot: 3
+    metrics:
+    - type: acc
+      value: 26.03
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=semantixai/LloroV2
+      name: Open Portuguese LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: BLUEX (No Images)
+      type: eduagarcia-temp/BLUEX_without_images
+      split: train
+      args:
+        num_few_shot: 3
+    metrics:
+    - type: acc
+      value: 29.07
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=semantixai/LloroV2
+      name: Open Portuguese LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: OAB Exams
+      type: eduagarcia/oab_exams
+      split: train
+      args:
+        num_few_shot: 3
+    metrics:
+    - type: acc
+      value: 32.53
+      name: accuracy
+    source:
+      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=semantixai/LloroV2
+      name: Open Portuguese LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: Assin2 RTE
+      type: assin2
+      split: test
+      args:
+        num_few_shot: 15
+    metrics:
+    - type: f1_macro
+      value: 57.19
+      name: f1-macro
+    source:
+      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=semantixai/LloroV2
+      name: Open Portuguese LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: Assin2 STS
+      type: eduagarcia/portuguese_benchmark
+      split: test
+      args:
+        num_few_shot: 15
+    metrics:
+    - type: pearson
+      value: 26.81
+      name: pearson
+    source:
+      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=semantixai/LloroV2
+      name: Open Portuguese LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: FaQuAD NLI
+      type: ruanchaves/faquad-nli
+      split: test
+      args:
+        num_few_shot: 15
+    metrics:
+    - type: f1_macro
+      value: 43.77
+      name: f1-macro
+    source:
+      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=semantixai/LloroV2
+      name: Open Portuguese LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: HateBR Binary
+      type: ruanchaves/hatebr
+      split: test
+      args:
+        num_few_shot: 25
+    metrics:
+    - type: f1_macro
+      value: 68.02
+      name: f1-macro
+    source:
+      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=semantixai/LloroV2
+      name: Open Portuguese LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: PT Hate Speech Binary
+      type: hate_speech_portuguese
+      split: test
+      args:
+        num_few_shot: 25
+    metrics:
+    - type: f1_macro
+      value: 38.53
+      name: f1-macro
+    source:
+      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=semantixai/LloroV2
+      name: Open Portuguese LLM Leaderboard
+  - task:
+      type: text-generation
+      name: Text Generation
+    dataset:
+      name: tweetSentBR
+      type: eduagarcia-temp/tweetsentbr
+      split: test
+      args:
+        num_few_shot: 25
+    metrics:
+    - type: f1_macro
+      value: 35.21
+      name: f1-macro
+    source:
+      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=semantixai/LloroV2
+      name: Open Portuguese LLM Leaderboard
 ---
 
 **Lloro 7B**
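The `model-index` block added above follows the standard Hugging Face model-card metadata schema for evaluation results. For reference, roughly equivalent metadata can be generated with the `ModelCardData` and `EvalResult` helpers from `huggingface_hub`; the sketch below is illustrative only — it reproduces just the first leaderboard entry, and the emitted key order may differ slightly from the bot-generated YAML above.

```python
from huggingface_hub import EvalResult, ModelCardData

# One EvalResult per (dataset, metric) pair in the model-index block above;
# only the ENEM Challenge entry is shown here as an illustration.
enem = EvalResult(
    task_type="text-generation",
    task_name="Text Generation",
    dataset_type="eduagarcia/enem_challenge",
    dataset_name="ENEM Challenge (No Images)",
    dataset_split="train",
    dataset_args={"num_few_shot": 3},
    metric_type="acc",
    metric_value=26.03,
    metric_name="accuracy",
    source_name="Open Portuguese LLM Leaderboard",
    source_url="https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=semantixai/LloroV2",
)

card_data = ModelCardData(
    model_name="LloroV2",
    language="pt",
    license="llama2",
    library_name="transformers",
    tags=["code", "analytics", "analise-dados", "portugues-BR"],
    datasets=["semantixai/Test-Dataset-Lloro"],
    base_model="codellama/CodeLlama-7b-Instruct-hf",
    eval_results=[enem],
)

# Prints YAML corresponding to (a subset of) the frontmatter in this PR.
print(card_data.to_yaml())
```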
 
@@ -166,4 +313,20 @@ The following parameters related with the Quantized Low-Rank Adaptation and Qua
 | Datasets | 2.14.3 |
 | Pytorch | 2.0.1 |
 | Tokenizers | 0.14.1 |
-| Transformers | 4.34.0 |
+| Transformers | 4.34.0 |
+# [Open Portuguese LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard)
+Detailed results can be found [here](https://huggingface.co/datasets/eduagarcia-temp/llm_pt_leaderboard_raw_results/tree/main/semantixai/LloroV2)
+
+| Metric | Value |
+|--------------------------|---------|
+|Average |**39.68**|
+|ENEM Challenge (No Images)| 26.03|
+|BLUEX (No Images) | 29.07|
+|OAB Exams | 32.53|
+|Assin2 RTE | 57.19|
+|Assin2 STS | 26.81|
+|FaQuAD NLI | 43.77|
+|HateBR Binary | 68.02|
+|PT Hate Speech Binary | 38.53|
+|tweetSentBR | 35.21|
+
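As a quick sanity check on the table above, the reported Average corresponds to the unweighted mean of the nine task scores; a minimal sketch reproducing it (scores copied from the table):

```python
# Scores copied from the leaderboard table above.
scores = {
    "ENEM Challenge (No Images)": 26.03,
    "BLUEX (No Images)": 29.07,
    "OAB Exams": 32.53,
    "Assin2 RTE": 57.19,
    "Assin2 STS": 26.81,
    "FaQuAD NLI": 43.77,
    "HateBR Binary": 68.02,
    "PT Hate Speech Binary": 38.53,
    "tweetSentBR": 35.21,
}

# Unweighted mean across the nine tasks.
average = sum(scores.values()) / len(scores)
print(f"Average: {average:.2f}")  # -> Average: 39.68
```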