---
datasets:
- Yukang/LongAlpaca-16k-length
---

This repo is a clone of [mattshumer/Llama-3-8B-16K](https://huggingface.co/mattshumer/Llama-3-8B-16K).

This is an extended (16K) context version of Llama 3 8B. It was trained for five hours on 8x A6000 GPUs using the `Yukang/LongAlpaca-16k-length` dataset.

`rope_theta` was set to `1000000.0`.

Trained with Axolotl.
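
Below is a minimal sketch of loading the model with Hugging Face Transformers and confirming the extended-context settings. It assumes the standard `AutoModelForCausalLM` loading path; the prompt and generation parameters are illustrative, not part of this repo.

```python
# Sketch: load the 16K-context model and inspect its RoPE settings.
from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer

model_id = "mattshumer/Llama-3-8B-16K"  # or this repo's id

# rope_theta and the context window are read from the repo's config.json.
config = AutoConfig.from_pretrained(model_id)
print(config.rope_theta)               # expected: 1000000.0
print(config.max_position_embeddings)  # extended context length

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",    # spread layers across available GPUs
    torch_dtype="auto",   # use the dtype stored in the checkpoint
)

# With rope_theta raised to 1e6, the model can attend over prompts
# up to the 16K-token window. The input below is a placeholder.
inputs = tokenizer(
    "Summarize the following document:\n...",
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```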