
This repo is a clone of mattshumer/Llama-3-8B-16K.

This is an extended-context (16K) version of Llama 3, trained for five hours on 8x A6000 GPUs using the Yukang/LongAlpaca-16k-length dataset.

rope_theta was set to 1000000.0, and training was done with Axolotl.
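Raising rope_theta stretches the "wavelengths" of the rotary position embeddings, which is what lets the fine-tuned model distinguish positions beyond the base context window. A minimal sketch of that effect, assuming Llama 3 8B's head dimension of 128 and its stock rope_theta of 500000.0 for comparison (this is illustrative only, not the training code):

```python
import math

def rope_inv_freq(theta: float, head_dim: int = 128):
    """Per-pair inverse frequencies used by rotary position embeddings."""
    return [theta ** (-2 * i / head_dim) for i in range(head_dim // 2)]

base = rope_inv_freq(500000.0)    # stock Llama 3 rope_theta (assumed baseline)
ext = rope_inv_freq(1000000.0)    # value used for this 16K fine-tune

# The slowest-rotating pair sets the longest positional wavelength the
# embeddings encode; a larger theta makes that wavelength longer.
wavelength = lambda f: 2 * math.pi / f
print(f"longest wavelength, theta=5e5: {wavelength(base[-1]):,.0f} positions")
print(f"longest wavelength, theta=1e6: {wavelength(ext[-1]):,.0f} positions")
```

Every frequency except the first shrinks as theta grows, so distant tokens rotate more slowly past each other and remain distinguishable at 16K positions.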

The model has 8.03B parameters, stored as BF16 safetensors.
