---
license: mit
language:
- pt
- en
pipeline_tag: text-generation
widget:
- text: "Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.\n\n### Instruction:\nSua instrução aqui\n\n### Response:\n"
---
|
|
|
|
|
A finetuned version of PY007/TinyLlama-1.1B-intermediate-step-715k-1.5T, trained on a Portuguese instruct dataset using axolotl.
|
|
|
v0, v1 and v2 were finetuned at the default 2048-token context length. For this v3, I took the existing v2 and finetuned it on an 8k context length dataset.
|
It works fairly well, but its reasoning capabilities are not very strong. It works well for basic RAG / question answering over retrieved content.
|
|
|
Prompt format:
|
|
|
f"Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.\n\n### Instruction:\n{instruction}\n\n### Response:\n"
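
A minimal sketch of how that template could be filled in before passing the prompt to the model. The `build_prompt` helper name is my own, and the example instruction is illustrative; only the template string itself comes from this card.

```python
# Alpaca-style prompt template, copied verbatim from the "Prompt format"
# section of this card.
PROMPT_TEMPLATE = (
    "Below is an instruction that describes a task, paired with an input "
    "that provides further context. Write a response that appropriately "
    "completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n"
    "### Response:\n"
)


def build_prompt(instruction: str) -> str:
    """Wrap a raw instruction (Portuguese or English) in the training template."""
    return PROMPT_TEMPLATE.format(instruction=instruction)


if __name__ == "__main__":
    # Example instruction (hypothetical, in Portuguese):
    print(build_prompt("Resuma o texto a seguir em uma frase."))
```

The resulting string can then be tokenized and passed to the model with the `transformers` text-generation pipeline; the model's answer is whatever it generates after the final `### Response:` line.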