
L3-Picaro-8B

This is a Llama 3-based model created by merging Meta-Llama-3-8B with the Picaro LoRA (Picaro-lora-l3), as shown in the mergekit config below.

This merge was performed with permission from the LoRA creator (Trappu).

Mergekit config (inspired by Charles Goddard):

merge_method: passthrough
models:
  - model: F:\AI\models\Meta-Llama-3-8B+F:\AI\loras\Picaro-lora-l3
dtype: float16
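
For reference, the merge can be reproduced by pointing mergekit at the config above, either with the mergekit-yaml command line tool or through the Python API. The snippet below is only a minimal sketch assuming mergekit's Python API (MergeConfiguration, MergeOptions, run_merge); the config path and output directory are placeholders.

# Minimal sketch: run the passthrough merge above with mergekit's Python API.
# Assumes mergekit is installed; CONFIG_YML and OUTPUT_PATH are placeholders.
import torch
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

CONFIG_YML = "picaro-merge.yml"   # the passthrough config shown above
OUTPUT_PATH = "./L3-Picaro-8B"    # directory to write the merged model to

with open(CONFIG_YML, "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path=OUTPUT_PATH,
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # merge on GPU if one is available
        copy_tokenizer=True,             # carry the base tokenizer into the output
    ),
)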

Usage

This model follows the ChatML instruct format, without a system prompt:

<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
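
As a rough illustration, the snippet below loads the model with the Hugging Face transformers library and prompts it in the format above. The model path "L3-Picaro-8B" is a placeholder; replace it with the actual repository id or a local directory.

# Minimal sketch: prompt the model in ChatML format with transformers.
# "L3-Picaro-8B" is a placeholder path; device_map="auto" requires accelerate.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "L3-Picaro-8B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Build the ChatML prompt (no system prompt), matching the format above.
prompt = (
    "<|im_start|>user\n"
    "Describe the tavern the party just walked into.<|im_end|>\n"
    "<|im_start|>assistant\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.8)

# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))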

Bias, Risks, and Limitations

The model will exhibit biases similar to those found in niche roleplaying forums on the Internet, in addition to those of the base model. It is not intended to supply factual information or advice in any form.

Training Details

This model is a merge. Please refer to the repositories of the merged base model and LoRA for training details.

Donate?

All my infrastructure and cloud expenses are paid out of pocket. If you'd like to donate, you can do so here: https://ko-fi.com/kingbri

You should not feel obligated to donate, but if you do, I'd appreciate it.
