rhaymison committed f3c1bc3 (parent 305211a): Create README.md
---
language:
- pt
license: apache-2.0
library_name: transformers
tags:
- portugues
- portuguese
- QA
- instruct
- phi
- gguf
- f16
base_model: microsoft/Phi-3-mini-4k-instruct
datasets:
- rhaymison/superset
pipeline_tag: text-generation
---

# Phi3 portuguese tom cat 4k instruct GGUF

<p align="center">
  <img src="https://raw.githubusercontent.com/rhaymisonbetini/huggphotos/main/tom-cat.webp" width="50%" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
</p>
This GGUF model, derived from Phi3 Tom cat 4k, has been quantized in f16. It was fine-tuned from microsoft/Phi-3-mini-4k-instruct on a superset of 300,000 instructions in Portuguese, primarily for instructional tasks, and aims to help fill the gap in models available in Portuguese.

Remember that verbs matter in your prompt. Tell the model how to act or behave so you can guide it toward the response you want. Details like these help models (even small ~4B models like this one) perform much better.

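To make that advice concrete, the snippet below assembles a prompt in the same `[INST]` format used in the usage example, leading with an explicit role-setting verb ("aja como" = "act as"). `build_prompt` is a hypothetical helper shown for illustration only, not part of any library.

```python
# Sketch: build a prompt in the [INST] format this model card uses.
# build_prompt is a hypothetical helper; the wrapper text mirrors the
# template from the usage example below.
def build_prompt(question: str) -> str:
    return (
        "<s>[INST] Abaixo está uma instrução que descreve uma tarefa, "
        "juntamente com uma entrada que fornece mais contexto.\n"
        "Escreva uma resposta que complete adequadamente o pedido.\n"
        f"### {question}\n"
        "[/INST]"
    )

# An explicit role-setting verb ("aja como" = "act as") guides the model.
prompt = build_prompt(
    "instrução: aja como um professor de matemática e me explique porque 2 + 2 = 4?"
)
print(prompt)
```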
```python
!git lfs install
!pip install langchain
!pip install langchain-community langchain-core
!pip install llama-cpp-python

!git clone https://huggingface.co/rhaymison/phi-3-portuguese-tom-cat-4k-instruct-f16-gguf

def llamacpp():
    from langchain_community.llms import LlamaCpp
    from langchain.prompts import PromptTemplate
    from langchain.chains import LLMChain

    llm = LlamaCpp(
        # Point model_path at the .gguf file inside the cloned repository,
        # not at the repository directory itself.
        model_path="/content/phi-3-portuguese-tom-cat-4k-instruct-f16-gguf",
        n_gpu_layers=40,
        n_batch=512,
        verbose=True,
    )

    template = """<s>[INST] Abaixo está uma instrução que descreve uma tarefa, juntamente com uma entrada que fornece mais contexto.
Escreva uma resposta que complete adequadamente o pedido.
### {question}
[/INST]"""

    prompt = PromptTemplate(template=template, input_variables=["question"])

    llm_chain = LLMChain(prompt=prompt, llm=llm)

    question = "instrução: aja como um professor de matemática e me explique porque 2 + 2 = 4?"
    response = llm_chain.run({"question": question})
    print(response)

llamacpp()
```

### Comments

Any idea, help, or report is always welcome.

email: rhaymisoncristian@gmail.com

<div style="display:flex; flex-direction:row; justify-content:left">
  <a href="https://www.linkedin.com/in/rhaymison-cristian-betini-2b3016175/" target="_blank">
    <img src="https://img.shields.io/badge/LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white">
  </a>
  <a href="https://github.com/rhaymisonbetini" target="_blank">
    <img src="https://img.shields.io/badge/GitHub-100000?style=for-the-badge&logo=github&logoColor=white">
  </a>
</div>