---
license: llama3
---

Based on Meta-Llama-3-8B-Instruct and governed by the Meta Llama 3 License agreement: https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct

An experimental model that tries to improve Llama 3 8B across the board, which is difficult to do in general.

Base model: https://huggingface.co/failspy/Meta-Llama-3-8B-Instruct-abliterated-v3

Training:

  • 4096 sequence length
  • Training took around 2 days on 2x RTX 3090 Ti
  • 1 epoch of training on a large dataset to minimize repetition sickness
  • LoRA with rank 64 and alpha 128, resulting in ~2% trainable weights (a configuration sketch follows this list)
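For reference, here is a minimal LoRA configuration sketch in Python with PEFT. Only the base model, rank, and alpha come from this card; the target modules, dropout, and other arguments are illustrative assumptions, not the exact training setup.

```python
# Sketch of a LoRA setup matching the hyperparameters listed above.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = "failspy/Meta-Llama-3-8B-Instruct-abliterated-v3"
model = AutoModelForCausalLM.from_pretrained(base)

lora_config = LoraConfig(
    r=64,           # rank 64, as listed above
    lora_alpha=128, # alpha 128, as listed above
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed targets
    lora_dropout=0.05,                                        # assumed value
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # should report roughly ~2% trainable weights
```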

Llama 3 Instruct format:

<|begin_of_text|><|start_header_id|>system<|end_header_id|>

{{ system_prompt }}<|eot_id|><|start_header_id|>user<|end_header_id|>

{{ user_message_1 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

{{ model_answer_1 }}<|eot_id|><|start_header_id|>user<|end_header_id|>

{{ user_message_2 }}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
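A minimal sketch of producing this format programmatically, assuming the repository ships the standard Llama 3 chat template; the messages are placeholders:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("OwenArli/ArliAI-Llama-3-8B-Argon-v1.0")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
]

# add_generation_prompt appends the trailing assistant header so the model
# continues with its answer.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```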

Quants:

FP16: https://huggingface.co/OwenArli/ArliAI-Llama-3-8B-Argon-v1.0

GGUF: https://huggingface.co/OwenArli/ArliAI-Llama-3-8B-Argon-v1.0-GGUF
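A minimal loading sketch for the FP16 weights with transformers; the dtype and device settings are illustrative assumptions:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "OwenArli/ArliAI-Llama-3-8B-Argon-v1.0"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(
    repo, torch_dtype=torch.float16, device_map="auto"
)
```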