---
language:
- pt
license: apache-2.0
library_name: transformers
tags:
- portugues
- portuguese
- QA
- instruct
base_model: meta-llama/Meta-Llama-3-8B-Instruct
datasets:
- rhaymison/superset
pipeline_tag: text-generation

---

# Llama 3 portuguese Tom cat 8b instruct GGUF

<p align="center">
  <img src="https://raw.githubusercontent.com/rhaymisonbetini/huggphotos/main/tom-cat-8b.webp"  width="50%" style="margin-left:'auto' margin-right:'auto' display:'block'"/>
</p>


This model was trained on a superset of 300,000 chats in Portuguese.
It helps fill the gap in Portuguese-language models. Tuned from Meta-Llama-3-8B-Instruct, the model was adjusted mainly for chat.

```python
!git lfs install
!pip install langchain
!pip install langchain-community langchain-core
!pip install llama-cpp-python

!git clone https://huggingface.co/rhaymison/Llama-3-portuguese-Tom-cat-8b-instruct-q8-gguf/

def llamacpp():
    from langchain_community.llms import LlamaCpp
    from langchain.prompts import PromptTemplate
    from langchain.chains import LLMChain

    llm = LlamaCpp(
        # model_path must point to the .gguf file inside the cloned repository
        model_path="/content/Llama-3-portuguese-Tom-cat-8b-instruct-q8-gguf",
        n_gpu_layers=40,
        n_batch=512,
        verbose=True,
    )

    # Plain string template (not an f-string): PromptTemplate fills {question} at call time
    template = """<|begin_of_text|><|start_header_id|>system<|end_header_id|>
Abaixo está uma instrução que descreve uma tarefa, juntamente com uma entrada que fornece mais contexto. Escreva uma resposta que complete adequadamente o pedido.<|eot_id|><|start_header_id|>user<|end_header_id|>
{question}<|eot_id|><|start_header_id|>assistant<|end_header_id|>"""

    prompt = PromptTemplate(template=template, input_variables=["question"])

    llm_chain = LLMChain(prompt=prompt, llm=llm)

    question = "instrução: aja como um professor de matemática e me explique porque 2 + 2 = 4?"
    response = llm_chain.run({"question": question})
    print(response)

```
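For reference, the Llama 3 instruct prompt format used in the template above can also be assembled by hand when you call llama.cpp without LangChain. This is a minimal sketch; `build_prompt` is a hypothetical helper, not part of this model's API:

```python
def build_prompt(system: str, user: str) -> str:
    """Assemble a Llama 3 instruct prompt from system and user messages."""
    return (
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n"
        f"{system}<|eot_id|><|start_header_id|>user<|end_header_id|>\n"
        f"{user}<|eot_id|><|start_header_id|>assistant<|end_header_id|>"
    )

# Example with the same Portuguese instruction used above
prompt = build_prompt(
    "Escreva uma resposta que complete adequadamente o pedido.",
    "me explique porque 2 + 2 = 4?",
)
print(prompt)
```

The model's reply is everything generated after the final `<|end_header_id|>` marker, up to the next `<|eot_id|>` token.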


### Comments

Any ideas, help, or reports are always welcome.

email: rhaymisoncristian@gmail.com

 <div style="display:flex; flex-direction:row; justify-content:left">
    <a href="https://www.linkedin.com/in/rhaymison-cristian-betini-2b3016175/" target="_blank">
    <img src="https://img.shields.io/badge/LinkedIn-0077B5?style=for-the-badge&logo=linkedin&logoColor=white">
  </a>
  <a href="https://github.com/rhaymisonbetini" target="_blank">
    <img src="https://img.shields.io/badge/GitHub-100000?style=for-the-badge&logo=github&logoColor=white">
  </a>
 </div>