---
language:
  - pt
  - en
license: mit
tags:
  - llama-cpp
  - gguf-my-repo
pipeline_tag: text-generation
widget:
  - text: >
      Below is an instruction that describes a task, paired with an input that
      provides further context. Write a response that appropriately completes
      the request.


      ### Instruction:

      Sua instrução aqui


      ### Response:
---

# DavidAU/TinyLlama-1.1B-intermediate-1.5T-PTBR-Instruct-v3-8k-Q8_0-GGUF

This model was converted to GGUF format from [`cnmoro/TinyLlama-1.1B-intermediate-1.5T-PTBR-Instruct-v3-8k`](https://huggingface.co/cnmoro/TinyLlama-1.1B-intermediate-1.5T-PTBR-Instruct-v3-8k) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingface.co/cnmoro/TinyLlama-1.1B-intermediate-1.5T-PTBR-Instruct-v3-8k) for more details on the model.

## Use with llama.cpp

Install llama.cpp through brew.

```bash
brew install ggerganov/ggerganov/llama.cpp
```

Invoke the llama.cpp server or the CLI.

CLI:

```bash
llama-cli --hf-repo DavidAU/TinyLlama-1.1B-intermediate-1.5T-PTBR-Instruct-v3-8k-Q8_0-GGUF --model tinyllama-1.1b-intermediate-1.5t-ptbr-instruct-v3-8k.Q8_0.gguf -p "The meaning to life and the universe is"
```
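Because this is an instruction-tuned checkpoint, you will usually get better results by wrapping your request in the prompt format shown in the widget metadata above. A minimal sketch (the instruction text is the placeholder from the template; `-e` is assumed to enable escape processing of `\n` in recent llama.cpp builds, and `-n` is an illustrative token limit):

```bash
llama-cli --hf-repo DavidAU/TinyLlama-1.1B-intermediate-1.5T-PTBR-Instruct-v3-8k-Q8_0-GGUF \
  --model tinyllama-1.1b-intermediate-1.5t-ptbr-instruct-v3-8k.Q8_0.gguf \
  -e -n 256 \
  -p "Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.\n\n### Instruction:\nSua instrução aqui\n\n### Response:\n"
```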

Server:

```bash
llama-server --hf-repo DavidAU/TinyLlama-1.1B-intermediate-1.5T-PTBR-Instruct-v3-8k-Q8_0-GGUF --model tinyllama-1.1b-intermediate-1.5t-ptbr-instruct-v3-8k.Q8_0.gguf -c 2048
```
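Once the server is running (it listens on http://localhost:8080 by default), you can send requests to its `/completion` endpoint over HTTP. A minimal sketch; the prompt text and `n_predict` value are illustrative:

```bash
curl http://localhost:8080/completion \
  -H "Content-Type: application/json" \
  -d '{
    "prompt": "### Instruction:\nSua instrução aqui\n\n### Response:\n",
    "n_predict": 128
  }'
```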

Note: You can also use this checkpoint directly through the usage steps listed in the llama.cpp repo.

```bash
git clone https://github.com/ggerganov/llama.cpp && \
cd llama.cpp && \
make && \
./main -m tinyllama-1.1b-intermediate-1.5t-ptbr-instruct-v3-8k.Q8_0.gguf -n 128
```
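When building from source as above, the GGUF file must already be on disk next to `./main`. One way to fetch it is with the Hugging Face CLI; a sketch, assuming `huggingface_hub` is installed:

```bash
pip install -U "huggingface_hub[cli]"
huggingface-cli download DavidAU/TinyLlama-1.1B-intermediate-1.5T-PTBR-Instruct-v3-8k-Q8_0-GGUF \
  tinyllama-1.1b-intermediate-1.5t-ptbr-instruct-v3-8k.Q8_0.gguf --local-dir .
```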