NikolayKozloff committed on
Commit
131102b
1 Parent(s): 2ba073f

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +203 -0
README.md ADDED
@@ -0,0 +1,203 @@
---
language:
- pt
license: apache-2.0
library_name: transformers
tags:
- portuguese
- brasil
- gemma
- portugues
- instrucao
- llama-cpp
- gguf-my-repo
datasets:
- rhaymison/superset
pipeline_tag: text-generation
widget:
- text: Me explique como funciona um computador.
  example_title: Computador.
- text: Me conte sobre a ida do homem a Lua.
  example_title: Homem na Lua.
- text: Fale sobre uma curiosidade sobre a história do mundo
  example_title: História.
- text: Escreva um poema bem interessante sobre o Sol e as flores.
  example_title: Escreva um poema.
model-index:
- name: gemma-portuguese-luana-2b
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: ENEM Challenge (No Images)
      type: eduagarcia/enem_challenge
      split: train
      args:
        num_few_shot: 3
    metrics:
    - type: acc
      value: 24.42
      name: accuracy
    source:
      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/gemma-portuguese-luana-2b
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BLUEX (No Images)
      type: eduagarcia-temp/BLUEX_without_images
      split: train
      args:
        num_few_shot: 3
    metrics:
    - type: acc
      value: 24.34
      name: accuracy
    source:
      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/gemma-portuguese-luana-2b
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: OAB Exams
      type: eduagarcia/oab_exams
      split: train
      args:
        num_few_shot: 3
    metrics:
    - type: acc
      value: 27.11
      name: accuracy
    source:
      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/gemma-portuguese-luana-2b
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Assin2 RTE
      type: assin2
      split: test
      args:
        num_few_shot: 15
    metrics:
    - type: f1_macro
      value: 70.86
      name: f1-macro
    source:
      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/gemma-portuguese-luana-2b
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Assin2 STS
      type: eduagarcia/portuguese_benchmark
      split: test
      args:
        num_few_shot: 15
    metrics:
    - type: pearson
      value: 1.51
      name: pearson
    source:
      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/gemma-portuguese-luana-2b
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: FaQuAD NLI
      type: ruanchaves/faquad-nli
      split: test
      args:
        num_few_shot: 15
    metrics:
    - type: f1_macro
      value: 43.97
      name: f1-macro
    source:
      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/gemma-portuguese-luana-2b
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HateBR Binary
      type: ruanchaves/hatebr
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: f1_macro
      value: 40.05
      name: f1-macro
    source:
      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/gemma-portuguese-luana-2b
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: PT Hate Speech Binary
      type: hate_speech_portuguese
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: f1_macro
      value: 51.83
      name: f1-macro
    source:
      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/gemma-portuguese-luana-2b
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: tweetSentBR
      type: eduagarcia/tweetsentbr_fewshot
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: f1_macro
      value: 30.42
      name: f1-macro
    source:
      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/gemma-portuguese-luana-2b
      name: Open Portuguese LLM Leaderboard
---

# NikolayKozloff/gemma-portuguese-luana-2b-Q8_0-GGUF
This model was converted to GGUF format from [`rhaymison/gemma-portuguese-luana-2b`](https://huggingface.co/rhaymison/gemma-portuguese-luana-2b) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/rhaymison/gemma-portuguese-luana-2b) for more details on the model.
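The conversion is handled automatically by the GGUF-my-repo space. The sketch below shows roughly how the same Q8_0 conversion could be reproduced locally with llama.cpp's converter; it is an assumption rather than the exact command the space ran, the local checkpoint path is a placeholder, and the converter script name varies between llama.cpp versions.

```bash
# Hypothetical local reproduction of the Q8_0 conversion (paths are placeholders).
python convert_hf_to_gguf.py /path/to/gemma-portuguese-luana-2b \
  --outtype q8_0 \
  --outfile gemma-portuguese-luana-2b.Q8_0.gguf
```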
## Use with llama.cpp

Install llama.cpp through brew.

```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.

CLI:

```bash
llama-cli --hf-repo NikolayKozloff/gemma-portuguese-luana-2b-Q8_0-GGUF --model gemma-portuguese-luana-2b.Q8_0.gguf -p "The meaning to life and the universe is"
```
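The upstream Luana model is a Gemma instruction tune for Portuguese, so prompts generally work best when wrapped in Gemma's turn markers. The sketch below assumes the standard Gemma chat template (`<start_of_turn>` / `<end_of_turn>`) and uses one of the widget prompts above as the instruction:

```bash
# Assumes the standard Gemma chat template; the instruction is one of the widget examples above.
llama-cli --hf-repo NikolayKozloff/gemma-portuguese-luana-2b-Q8_0-GGUF \
  --model gemma-portuguese-luana-2b.Q8_0.gguf \
  -p "<start_of_turn>user
Me explique como funciona um computador.<end_of_turn>
<start_of_turn>model
"
```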

Server:

```bash
llama-server --hf-repo NikolayKozloff/gemma-portuguese-luana-2b-Q8_0-GGUF --model gemma-portuguese-luana-2b.Q8_0.gguf -c 2048
```
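Once the server is running, it listens on port 8080 by default and accepts HTTP requests. A minimal sketch against the server's `/completion` endpoint; the prompt and `n_predict` value are placeholders:

```bash
# Query the running llama.cpp server (default port 8080) for a completion.
curl http://localhost:8080/completion \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Me conte sobre a ida do homem a Lua.", "n_predict": 128}'
```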

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m gemma-portuguese-luana-2b.Q8_0.gguf -n 128
```
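The `./main -m ...` invocation above expects the GGUF file to already be on disk. If you build from source and do not use the `--hf-repo` flag, one way to fetch the quantized file first is the Hugging Face CLI; this is a sketch assuming `huggingface_hub` is installed:

```bash
# Download the quantized file into the current directory before running ./main -m.
pip install -U "huggingface_hub[cli]"
huggingface-cli download NikolayKozloff/gemma-portuguese-luana-2b-Q8_0-GGUF \
  gemma-portuguese-luana-2b.Q8_0.gguf --local-dir .
```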