
A finetuned version of PY007/TinyLlama-1.1B-intermediate-step-715k-1.5T, trained on a Portuguese instruct dataset using axolotl.

v0, v1 and v2 were finetuned with the default 2048-token context length. For this v3, I took the existing v2 and finetuned it on an 8k-context dataset. It works fairly well, but its reasoning capabilities are not as strong; it is best suited to basic RAG / question answering over retrieved content.

Prompt format:

f"Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.\n\n### Instruction:\n{instruction}\n\n### Response:\n"
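As a minimal sketch, the template above can be wrapped in a small helper so prompts are built consistently before being passed to any generation API (the helper name is my own; the template text is taken verbatim from this card):

```python
def build_prompt(instruction: str) -> str:
    # Alpaca-style instruct template used by this model card.
    return (
        "Below is an instruction that describes a task, paired with an input "
        "that provides further context. Write a response that appropriately "
        "completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n### Response:\n"
    )

# Example: build a prompt for a Portuguese instruction.
prompt = build_prompt("Resuma o texto a seguir em uma frase.")
```

The resulting string can then be tokenized and passed to the model, e.g. via the `transformers` text-generation pipeline with the model ID `cnmoro/TinyLlama-1.1B-intermediate-1.5T-PTBR-Instruct-v3-8k`; generation settings (temperature, max new tokens) are up to you and are not specified by this card.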


Model tree for cnmoro/TinyLlama-1.1B-intermediate-1.5T-PTBR-Instruct-v3-8k
