
Llama 2-13b-alpaca-spanish LoRA

This is a LoRA for Llama 2 13B trained on a translated Alpaca dataset, in an attempt to improve the Spanish performance of the Llama 2 foundation model with a conversational focus.

The base model used was TheBloke's Llama-2-13B-fp16; the LoRA was trained in 4-bit precision with an added padding token.
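To use the adapter, the base model is loaded (here in 4-bit, matching the training setup) and the LoRA weights are attached on top. A minimal sketch with transformers and peft — the repo IDs are taken from this card, but exact library versions and hardware requirements are assumptions:

```python
def load_spanish_lora(base_id="TheBloke/Llama-2-13B-fp16",
                      adapter_id="marianbasti/Llama-2-13b-fp16-alpaca-spanish"):
    """Load the base Llama 2 13B model in 4-bit and attach this LoRA adapter."""
    # Imports are kept local so the sketch documents its dependencies in one place.
    from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
    from peft import PeftModel

    tokenizer = AutoTokenizer.from_pretrained(base_id)
    model = AutoModelForCausalLM.from_pretrained(
        base_id,
        device_map="auto",
        quantization_config=BitsAndBytesConfig(load_in_4bit=True),
    )
    # Wrap the frozen base model with the trained LoRA weights.
    model = PeftModel.from_pretrained(model, adapter_id)
    return model, tokenizer
```

Note that loading a 13B model, even in 4-bit, requires a GPU with enough memory for the quantized weights.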

Training parameters
- LoRA scale: 2
- Epochs: 0.75
- Learning rate: 2e-5
- Warmup steps: 100
- Final loss: 1.07
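For context, the "LoRA scale" above is the factor (alpha / r) applied to the low-rank update before it is added to the frozen base weight. A toy numpy sketch of the merged weight — the dimensions and rank here are illustrative assumptions, not the model's actual shapes:

```python
import numpy as np

rng = np.random.default_rng(0)

d, r = 8, 2      # toy hidden size and LoRA rank (illustrative, not the real model's)
scale = 2.0      # the "LoRA scale" from this card: alpha / r

W = rng.normal(size=(d, d))   # frozen base weight
A = rng.normal(size=(r, d))   # trainable low-rank down-projection
B = np.zeros((d, r))          # trainable up-projection, zero-initialized as in LoRA

# Effective weight seen at inference after merging the adapter:
W_eff = W + scale * (B @ A)

# Because B starts at zero, the adapter is a no-op before training.
assert np.allclose(W_eff, W)
```

This is why merging a trained LoRA into the base weights is cheap: it is a single rank-r update per adapted matrix.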

Dataset used to train marianbasti/Llama-2-13b-fp16-alpaca-spanish: a Spanish translation of the Alpaca dataset.