---
language:
- pt
license: apache-2.0
library_name: transformers
tags:
- portugues
- portuguese
- QA
- instruct
- phi
- llama-cpp
- gguf-my-repo
base_model: rhaymison/phi-3-portuguese-tom-cat-4k-instruct
datasets:
- rhaymison/superset
pipeline_tag: text-generation
model-index:
- name: phi-3-portuguese-tom-cat-4k-instruct
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: ENEM Challenge (No Images)
      type: eduagarcia/enem_challenge
      split: train
      args:
        num_few_shot: 3
    metrics:
    - type: acc
      value: 61.58
      name: accuracy
    source:
      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/phi-3-portuguese-tom-cat-4k-instruct
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BLUEX (No Images)
      type: eduagarcia-temp/BLUEX_without_images
      split: train
      args:
        num_few_shot: 3
    metrics:
    - type: acc
      value: 50.63
      name: accuracy
    source:
      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/phi-3-portuguese-tom-cat-4k-instruct
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: OAB Exams
      type: eduagarcia/oab_exams
      split: train
      args:
        num_few_shot: 3
    metrics:
    - type: acc
      value: 43.69
      name: accuracy
    source:
      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/phi-3-portuguese-tom-cat-4k-instruct
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Assin2 RTE
      type: assin2
      split: test
      args:
        num_few_shot: 15
    metrics:
    - type: f1_macro
      value: 91.54
      name: f1-macro
    source:
      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/phi-3-portuguese-tom-cat-4k-instruct
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Assin2 STS
      type: eduagarcia/portuguese_benchmark
      split: test
      args:
        num_few_shot: 15
    metrics:
    - type: pearson
      value: 75.27
      name: pearson
    source:
      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/phi-3-portuguese-tom-cat-4k-instruct
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: FaQuAD NLI
      type: ruanchaves/faquad-nli
      split: test
      args:
        num_few_shot: 15
    metrics:
    - type: f1_macro
      value: 47.46
      name: f1-macro
    source:
      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/phi-3-portuguese-tom-cat-4k-instruct
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HateBR Binary
      type: ruanchaves/hatebr
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: f1_macro
      value: 83.01
      name: f1-macro
    source:
      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/phi-3-portuguese-tom-cat-4k-instruct
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: PT Hate Speech Binary
      type: hate_speech_portuguese
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: f1_macro
      value: 70.19
      name: f1-macro
    source:
      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/phi-3-portuguese-tom-cat-4k-instruct
      name: Open Portuguese LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: tweetSentBR
      type: eduagarcia/tweetsentbr_fewshot
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: f1_macro
      value: 57.78
      name: f1-macro
    source:
      url: https://huggingface.co/spaces/eduagarcia/open_pt_llm_leaderboard?query=rhaymison/phi-3-portuguese-tom-cat-4k-instruct
      name: Open Portuguese LLM Leaderboard
---

# felipe-carlos-ipms/phi-3-portuguese-tom-cat-4k-instruct-Q4_K_M-GGUF

This model was converted to GGUF format from [`rhaymison/phi-3-portuguese-tom-cat-4k-instruct`](https://huggingface.co/rhaymison/phi-3-portuguese-tom-cat-4k-instruct) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/rhaymison/phi-3-portuguese-tom-cat-4k-instruct) for more details on the model.

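When prompting the model directly (for example through `llama-cli -p`), it helps to follow the chat template of the underlying instruct model. The sketch below assembles a Phi-3-style prompt (`<|user|>` / `<|end|>` / `<|assistant|>` markers); this assumes the fine-tune keeps the base Phi-3 template, so verify against the original model's `tokenizer_config.json` before relying on it.

```python
def build_phi3_prompt(user_message, system_message=None):
    """Assemble a Phi-3-style chat prompt.

    The <|system|>/<|user|>/<|assistant|> markers are an assumption based on
    the base Phi-3 instruct template; check the model's tokenizer config.
    """
    parts = []
    if system_message:
        parts.append(f"<|system|>\n{system_message}<|end|>\n")
    parts.append(f"<|user|>\n{user_message}<|end|>\n")
    parts.append("<|assistant|>\n")  # generation continues from here
    return "".join(parts)

prompt = build_phi3_prompt("Quem escreveu Dom Casmurro?")
print(prompt)
```

The resulting string can be passed to `llama-cli` via `-p`, leaving the model to generate the assistant turn.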
## Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux):

```bash
brew install llama.cpp
```

Invoke the llama.cpp server or the CLI.

### CLI:
```bash
llama-cli --hf-repo felipe-carlos-ipms/phi-3-portuguese-tom-cat-4k-instruct-Q4_K_M-GGUF --hf-file phi-3-portuguese-tom-cat-4k-instruct-q4_k_m.gguf -p "The meaning to life and the universe is"
```

### Server:
```bash
llama-server --hf-repo felipe-carlos-ipms/phi-3-portuguese-tom-cat-4k-instruct-Q4_K_M-GGUF --hf-file phi-3-portuguese-tom-cat-4k-instruct-q4_k_m.gguf -c 2048
```

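Once `llama-server` is running, recent llama.cpp builds expose an OpenAI-compatible HTTP API. The sketch below builds a chat-completion request with Python's standard library; the `/v1/chat/completions` path and default port 8080 are assumptions about your llama.cpp build, so check the server's startup log.

```python
import json
from urllib import request

def build_chat_request(prompt, host="http://localhost:8080"):
    """Build an OpenAI-style chat completion request for llama-server.

    Endpoint path and port are assumptions; confirm them against your
    llama-server version's startup output.
    """
    payload = {
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 128,
    }
    return request.Request(
        f"{host}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

if __name__ == "__main__":
    req = build_chat_request("Qual é a capital do Brasil?")
    with request.urlopen(req) as resp:  # requires a running llama-server
        print(json.loads(resp.read())["choices"][0]["message"]["content"])
```

The same request can be issued with `curl` by POSTing the JSON payload to the endpoint.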
Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo.

Step 1: Clone llama.cpp from GitHub.
```bash
git clone https://github.com/ggerganov/llama.cpp
```

Step 2: Move into the llama.cpp folder and build it with the `LLAMA_CURL=1` flag, along with any hardware-specific flags (e.g. `LLAMA_CUDA=1` for NVIDIA GPUs on Linux).
```bash
cd llama.cpp && LLAMA_CURL=1 make
```

Step 3: Run inference through the main binary.
```bash
./llama-cli --hf-repo felipe-carlos-ipms/phi-3-portuguese-tom-cat-4k-instruct-Q4_K_M-GGUF --hf-file phi-3-portuguese-tom-cat-4k-instruct-q4_k_m.gguf -p "The meaning to life and the universe is"
```
or
```bash
./llama-server --hf-repo felipe-carlos-ipms/phi-3-portuguese-tom-cat-4k-instruct-Q4_K_M-GGUF --hf-file phi-3-portuguese-tom-cat-4k-instruct-q4_k_m.gguf -c 2048
```