
Alpaca-Llama-3-8B

The full model can be found at https://huggingface.co/akumaburn/Alpaca-Llama-3-8B

mistral-7b-openorca.Q8_0.gguf:

  • MMLU-Test: Final result: 41.5836 +/- 0.4174
  • Arc-Easy: Final result: 72.6316 +/- 1.8691
  • Truthful QA: Final result: 32.0685 +/- 1.6339
  • Arc-Challenge: Final result: 48.8294 +/- 2.8956

llama-3-8b-bnb-4bit.Q8_0.gguf:

  • MMLU-Test: Final result: 40.4074 +/- 0.4156
  • Arc-Easy: Final result: 73.8596 +/- 1.8421
  • Truthful QA: Final result: 26.6830 +/- 1.5484
  • Arc-Challenge: Final result: 46.8227 +/- 2.8906

Open_Orca_Llama-3-8B-unsloth.Q8_0.gguf:

  • MMLU-Test: Final result: 39.3818 +/- 0.4138
  • Arc-Easy: Final result: 67.3684 +/- 1.9656
  • Truthful QA: Final result: 29.0086 +/- 1.5886
  • Arc-Challenge: Final result: 42.1405 +/- 2.8604

Alpaca-Llama-3-8B-GGUF-unsloth.Q8_0.gguf:

  • MMLU-Test: Final result: 40.6441 +/- 0.4160
  • Arc-Easy: Final result: 77.5439 +/- 1.7494
  • Truthful QA: Final result: 29.7430 +/- 1.6003
  • Arc-Challenge: Final result: 50.5017 +/- 2.8963

Meta-Llama-3-8B.Q8_0.gguf:

  • MMLU-Test: Final result: 40.8664 +/- 0.4163
  • Arc-Easy: Final result: 74.3860 +/- 1.8299
  • Truthful QA: Final result: 28.6414 +/- 1.5826
  • Arc-Challenge: Final result: 47.1572 +/- 2.8917

llama.cpp options used for testing: `--samplers "tfs;typical;temp" --draft 32 --ctx-size 8192 --temp 0.82 --tfs 0.8 --typical 1.1 --repeat-last-n 512 --batch-size 8192 --repeat-penalty 1.0 --n-gpu-layers 100 --threads 12`
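
The "Final result: X +/- Y" lines above match the output format of llama.cpp's multiple-choice evaluation. Below is a minimal sketch of how such a run could be scripted; it assumes the perplexity example binary with its --multiple-choice mode and a -bf binary task file (flag names and availability depend on the llama.cpp version), and the *.bin dataset paths are placeholders.

```python
import re
import subprocess

# Placeholders: adjust the binary path, model, and benchmark data files to your setup.
PERPLEXITY_BIN = "./perplexity"  # llama.cpp's perplexity example binary (name varies by build)
MODEL = "Alpaca-Llama-3-8B-GGUF-unsloth.Q8_0.gguf"
TASKS = ["mmlu-test.bin", "arc-easy.bin", "truthfulqa.bin", "arc-challenge.bin"]

# Options copied verbatim from the card above.
COMMON_ARGS = [
    "--samplers", "tfs;typical;temp", "--draft", "32", "--ctx-size", "8192",
    "--temp", "0.82", "--tfs", "0.8", "--typical", "1.1",
    "--repeat-last-n", "512", "--batch-size", "8192", "--repeat-penalty", "1.0",
    "--n-gpu-layers", "100", "--threads", "12",
]

for task in TASKS:
    # --multiple-choice and -bf (binary task file) are assumed from llama.cpp's
    # perplexity example; check your build's --help for the exact flag names.
    cmd = [PERPLEXITY_BIN, "-m", MODEL, "--multiple-choice", "-bf", task, *COMMON_ARGS]
    run = subprocess.run(cmd, capture_output=True, text=True)
    output = run.stdout + run.stderr
    match = re.search(r"Final result:\s*([\d.]+)\s*\+/-\s*([\d.]+)", output)
    print(f"{task}: {match.group(1)} +/- {match.group(2)}" if match else f"{task}: no result parsed")
```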

This Llama model was trained 2x faster with Unsloth and Hugging Face's TRL library.
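
The training script itself is not part of this card; the sketch below only illustrates what a typical Unsloth + TRL LoRA fine-tune of Llama-3-8B looks like. The base checkpoint, dataset, and all hyperparameters are assumptions for illustration, not the author's actual configuration, and some SFTTrainer keyword arguments differ between trl versions.

```python
# Illustrative sketch only; base model, dataset, and hyperparameters are assumptions.
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer
from unsloth import FastLanguageModel

max_seq_length = 8192  # assumption; chosen to match the --ctx-size used for testing

# Load a 4-bit Llama-3-8B base through Unsloth (checkpoint name is an assumption).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/llama-3-8b-bnb-4bit",
    max_seq_length=max_seq_length,
    load_in_4bit=True,
)

# Attach LoRA adapters; rank and target modules are common defaults, not the card's.
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Build a "text" column in the classic Alpaca prompt format (dataset is a placeholder).
ALPACA_PROMPT = (
    "Below is an instruction that describes a task, paired with an input that "
    "provides further context. Write a response that appropriately completes the request.\n\n"
    "### Instruction:\n{instruction}\n\n### Input:\n{input}\n\n### Response:\n{output}"
)

def to_text(example):
    return {"text": ALPACA_PROMPT.format(**example) + tokenizer.eos_token}

dataset = load_dataset("yahma/alpaca-cleaned", split="train").map(to_text)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",     # moved into SFTConfig in newer trl releases
    max_seq_length=max_seq_length,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        learning_rate=2e-4,
        num_train_epochs=1,
        optim="adamw_8bit",
        output_dir="outputs",
    ),
)
trainer.train()
```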

Format: GGUF
Model size: 8.03B params
Architecture: llama
