---
language:
- pt
license: apache-2.0
library_name: transformers
tags:
- Mistral
- Portuguese
- 7b
- q8
- gguf
base_model: mistralai/Mistral-7B-Instruct-v0.2
datasets:
- pablo-moreira/gpt4all-j-prompt-generations-pt
- rhaymison/superset
---

# Mistral portuguese luana 7b Q8 GGUF
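A quick note on prompting: this model is tuned from Mistral-7B-Instruct-v0.2, which expects instructions wrapped in the `[INST] ... [/INST]` chat format. A minimal sketch of building such a prompt (the helper name and the example instruction are illustrative, not part of the model card):

```python
def build_prompt(instruction: str) -> str:
    """Wrap an instruction in the Mistral-Instruct chat format."""
    # <s> is the BOS token; [INST] ... [/INST] delimits the user turn.
    return f"<s>[INST] {instruction} [/INST]"

# Example: a Portuguese instruction, as this model was tuned for Portuguese.
prompt = build_prompt("Explique em poucas palavras o que é aprendizado de máquina.")
print(prompt)
```

Telling the model explicitly how to act inside the `[INST]` block is what the guidance below about verbs and behavior refers to.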
This GGUF model, derived from Mistral Luana 7b, has been quantized to Q8 (8-bit). The model was trained on a superset of 200,000 instructions in Portuguese, aiming to help fill the gap in models available in Portuguese. Tuned from Mistral 7b, this model has been adjusted primarily for instructional tasks.

Remember that verbs are important in your prompt. Tell your model how to act or behave so that you can guide it along the path of its response. Important points like these help models (even smaller models like 7b) perform much better.

```python
!git lfs install
!pip install langchain
!pip install langchain-community langchain-core
!pip install llama-cpp-python
!git clone https://huggingface.co/rhaymison/Mistral-portuguese-luana-7b-q8-gguf

def llamacpp():
    from langchain.llms import LlamaCpp
    from langchain.prompts import PromptTemplate
    from langchain.chains import LLMChain

    llm = LlamaCpp(
        model_path="/content/Mistral-portuguese-luana-7b-q8-gguf.gguf",
        n_gpu_layers=40,
        n_batch=512,
        verbose=True,
    )

    template = """