This is an extended-context (16K) version of Llama 3 8B (the base model, not the instruct variant). It was trained for five hours on 8x A6000 GPUs using the Yukang/LongAlpaca-16k-length dataset.

Training was done with Axolotl, with rope_theta set to 1000000.0.
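
For reference, below is a minimal loading sketch with 🤗 Transformers. It is an assumption-laden example, not part of this card's original instructions: the repo id is a placeholder to replace with this model's actual id, and it assumes the extended rope_theta (1,000,000) and 16K max_position_embeddings are already baked into the checkpoint's config.json, so no manual overrides are needed.

```python
# Minimal usage sketch (assumed repo id; requires enough GPU memory for 8B weights in bf16).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "LoneStriker/Llama-3-8B-16K"  # placeholder: replace with this model's repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

# Long prompts (up to ~16K tokens) can be passed directly, since the extended
# rope_theta and context length are expected to come from the saved config.
prompt = "Summarize the following document:\n..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```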
