
A Fishy Model

This model was trained on a special ChatML format with an 8k-token context window.

There are three different assistant response tokens that can be used to control response length.

Use this token for a medium-length response (greater than 64 and fewer than 256 tokens):

<|im_start|>assistant\n

Use this token for a short response (fewer than 64 tokens):

<|im_start|>assistantshort\n

Use this token for a long response (greater than 256 tokens):

<|im_start|>assistantlong\n
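
As a quick illustration, the sketch below shows how one of these tokens might be placed at the end of a ChatML-style prompt before generation. The repository id is a placeholder, and the system/user turn layout assumes standard ChatML conventions; only the three assistant tokens listed above come from this card.

```python
# Minimal sketch: selecting a response length via the assistant token.
# The repo id below is a hypothetical placeholder; substitute the actual repository.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "TheTsar1209/fishy-model"  # placeholder, not the real repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Standard ChatML turns, ending with the assistant token that selects the
# desired response length: assistant (medium), assistantshort, or assistantlong.
prompt = (
    "<|im_start|>system\nYou are a helpful assistant.<|im_end|>\n"
    "<|im_start|>user\nExplain what a tide pool is.<|im_end|>\n"
    "<|im_start|>assistantshort\n"  # request a response under 64 tokens
)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```

Swapping `assistantshort` for `assistant` or `assistantlong` in the final turn is all that should be needed to target the other length ranges.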

Uploaded model

  • Developed by: TheTsar1209
  • License: apache-2.0
  • Finetuned from model: unsloth/llama-3-8b-Instruct-bnb-4bit

This Llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.
