
ExLlamaV2 quant (exl2 / 8.0 bpw) made with ExLlamaV2 v0.0.21

Other EXL2 quants:

| Quant (bpw) | Model Size | lm_head (bits) |
|------------:|-----------:|---------------:|
| 2.2  | 3944 MB  | 6 |
| 2.5  | 4258 MB  | 6 |
| 3.0  | 4829 MB  | 6 |
| 3.5  | 5403 MB  | 6 |
| 3.75 | 5688 MB  | 6 |
| 4.0  | 5975 MB  | 6 |
| 4.25 | 6260 MB  | 6 |
| 5.0  | 7115 MB  | 6 |
| 6.0  | 8369 MB  | 8 |
| 6.5  | 8934 MB  | 8 |
| 8.0  | 10593 MB | 8 |
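
As a minimal sketch of how one of these quants might be loaded, the snippet below follows the inference example shipped with ExLlamaV2 around v0.0.21; the local model path and the sampling settings are assumptions for illustration, not part of this card:

```python
# Minimal ExLlamaV2 inference sketch (API as of ~v0.0.21; details may differ).
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

model_dir = "./Aura-llama-3-exl2"  # assumed local download of one of the quants above

config = ExLlamaV2Config()
config.model_dir = model_dir
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)   # lazy cache so load_autosplit can size it
model.load_autosplit(cache)                # split weights across available GPUs

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)
generator.warmup()

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 0.8                 # example values, not recommendations
settings.top_p = 0.9

print(generator.generate_simple("Hello, my name is", settings, 128))
```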
Aura-llama-3 Data Card

Aura-llama-3-Abliterated

(Image: Aura-llama-Abliterated)

Now that the cute anime girl has your attention.

UPDATE: The model now uses the abliterated version of Meta Llama 3 8B.

Aura-llama uses the methodology presented by SOLAR for scaling LLMs, called depth up-scaling (DUS), which combines architectural modification with continued pretraining. Using the SOLAR paper as a base, I integrated Llama-3 weights into the upscaled layers, and I plan to continue training the model in the future.
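
As a rough illustration of what DUS does here, the sketch below rebuilds the merged layer stack from the slice ranges in the configuration further down; it is plain arithmetic for illustration, not part of any merge tool:

```python
# Slice ranges taken from the mergekit configuration below (end-exclusive).
slices = [(0, 12), (8, 20), (16, 28), (24, 32)]

# Depth up-scaling stacks overlapping slices of the 32-layer base model,
# so the layers in each overlap (8-11, 16-19, 24-27) appear twice.
stack = [layer for start, end in slices for layer in range(start, end)]

print(len(stack))  # 44 layers in the merged model, up from 32 in the base
```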

Aura-llama is a passthrough self-merge of failspy/Llama-3-8B-Instruct-abliterated (see the configuration below), creating a base model to work from.

Abliterated Merged Evals (has not been fine-tuned):

Aura-llama-Abliterated

  • Avg: ?
  • ARC: ?
  • HellaSwag: ?
  • MMLU: ?
  • T-QA: ?
  • Winogrande: ?
  • GSM8K: ?

Non-Abliterated Merged Evals (has not been fine-tuned):

Aura-llama-Original

  • Avg: 63.13
  • ARC: 58.02
  • HellaSwag: 77.82
  • MMLU: 65.61
  • T-QA: 51.94
  • Winogrande: 73.40
  • GSM8K: 52.01

🧩 Configuration


```yaml
dtype: bfloat16
merge_method: passthrough
slices:
- sources:
  - layer_range: [0, 12]
    model: failspy/Llama-3-8B-Instruct-abliterated
- sources:
  - layer_range: [8, 20]
    model: failspy/Llama-3-8B-Instruct-abliterated
- sources:
  - layer_range: [16, 28]
    model: failspy/Llama-3-8B-Instruct-abliterated
- sources:
  - layer_range: [24, 32]
    model: failspy/Llama-3-8B-Instruct-abliterated
```
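To reproduce the merge, a config like this is typically fed to mergekit's `mergekit-yaml` command, e.g. `mergekit-yaml config.yaml ./Aura-llama-3` (the file and output names are assumptions; consult the mergekit docs for your version).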

Open LLM Leaderboard Evaluation Results

Detailed results can be found here.

| Metric | Value |
|--------|------:|
| Avg.                              | 53.46 |
| AI2 Reasoning Challenge (25-Shot) | 49.23 |
| HellaSwag (10-Shot)               | 72.27 |
| MMLU (5-Shot)                     | 55.71 |
| TruthfulQA (0-shot)               | 46.63 |
| Winogrande (5-shot)               | 69.30 |
| GSM8K (5-shot)                    | 27.60 |