
Quantization made by Richard Erkhov.

GitHub | Discord | Request more models

Aspire-8B-model_stock - GGUF

Original model description:

```yaml
license: apache-2.0
library_name: transformers
tags:
- mergekit
- merge
base_model:
- Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2
- kloodia/lora-8b-bio
- ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.1
- Blackroot/Llama-3-8B-Abomination-LORA
- Sao10K/L3-8B-Stheno-v3.2
- grimjim/Llama-3-Instruct-abliteration-LoRA-8B
- arcee-ai/Llama-3.1-SuperNova-Lite
- grimjim/Llama-3-Instruct-abliteration-LoRA-8B
- mlabonne/Hermes-3-Llama-3.1-8B-lorablated
- kloodia/lora-8b-physic
- aifeifei798/DarkIdol-Llama-3.1-8B-Instruct-1.2-Uncensored
- kloodia/lora-8b-medic
model-index:
- name: Aspire-8B-model_stock
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: IFEval (0-Shot)
      type: HuggingFaceH4/ifeval
      args:
        num_few_shot: 0
    metrics:
    - type: inst_level_strict_acc and prompt_level_strict_acc
      value: 71.41
      name: strict accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=DreadPoor/Aspire-8B-model_stock
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: BBH (3-Shot)
      type: BBH
      args:
        num_few_shot: 3
    metrics:
    - type: acc_norm
      value: 32.53
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=DreadPoor/Aspire-8B-model_stock
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MATH Lvl 5 (4-Shot)
      type: hendrycks/competition_math
      args:
        num_few_shot: 4
    metrics:
    - type: exact_match
      value: 12.99
      name: exact match
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=DreadPoor/Aspire-8B-model_stock
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GPQA (0-shot)
      type: Idavidrein/gpqa
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 8.61
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=DreadPoor/Aspire-8B-model_stock
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MuSR (0-shot)
      type: TAUR-Lab/MuSR
      args:
        num_few_shot: 0
    metrics:
    - type: acc_norm
      value: 13.46
      name: acc_norm
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=DreadPoor/Aspire-8B-model_stock
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU-PRO (5-shot)
      type: TIGER-Lab/MMLU-Pro
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 30.7
      name: accuracy
    source:
      url: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=DreadPoor/Aspire-8B-model_stock
      name: Open LLM Leaderboard
```

merge

This is a merge of pre-trained language models created using mergekit.

Merge Details

Merge Method

This model was merged using the Model Stock merge method, with Sao10K/L3-8B-Stheno-v3.2 + grimjim/Llama-3-Instruct-abliteration-LoRA-8B as the base.
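
For intuition: Model Stock (Jang et al., 2024) averages the fine-tuned weights and then interpolates that average back toward the base model, with the interpolation ratio derived from the average pairwise angle between the fine-tuned weight deltas. The following per-tensor sketch of that idea is illustrative only, not mergekit's actual implementation, and every name in it is hypothetical:

```python
# Illustrative per-tensor sketch of Model Stock (not mergekit's implementation).
# Assumes at least two fine-tuned models; all names here are hypothetical.
import torch
import torch.nn.functional as F

def model_stock_tensor(base: torch.Tensor, finetuned: list[torch.Tensor]) -> torch.Tensor:
    k = len(finetuned)
    deltas = [(w - base).flatten().float() for w in finetuned]
    # Estimate cos(theta) as the mean pairwise cosine similarity of the deltas.
    pairs = [(i, j) for i in range(k) for j in range(i + 1, k)]
    cos_theta = sum(
        F.cosine_similarity(deltas[i], deltas[j], dim=0).item() for i, j in pairs
    ) / len(pairs)
    # Interpolation ratio from the paper: t = k*cos(theta) / (1 + (k-1)*cos(theta)).
    t = k * cos_theta / (1.0 + (k - 1) * cos_theta)
    w_avg = torch.stack([w.float() for w in finetuned]).mean(dim=0)  # uniform average
    # Pull the averaged weights back toward the base by (1 - t).
    return (t * w_avg + (1.0 - t) * base.float()).to(base.dtype)
```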

Models Merged

The following models (with their applied LoRAs) were included in the merge, as listed in the configuration below:

* Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2 + kloodia/lora-8b-bio
* arcee-ai/Llama-3.1-SuperNova-Lite + grimjim/Llama-3-Instruct-abliteration-LoRA-8B
* mlabonne/Hermes-3-Llama-3.1-8B-lorablated + kloodia/lora-8b-physic
* aifeifei798/DarkIdol-Llama-3.1-8B-Instruct-1.2-Uncensored + kloodia/lora-8b-medic
* ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.1 + Blackroot/Llama-3-8B-Abomination-LORA

Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2+kloodia/lora-8b-bio
  - model: arcee-ai/Llama-3.1-SuperNova-Lite+grimjim/Llama-3-Instruct-abliteration-LoRA-8B
  - model: mlabonne/Hermes-3-Llama-3.1-8B-lorablated+kloodia/lora-8b-physic
  - model: aifeifei798/DarkIdol-Llama-3.1-8B-Instruct-1.2-Uncensored+kloodia/lora-8b-medic
  - model: ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.1+Blackroot/Llama-3-8B-Abomination-LORA
merge_method: model_stock
base_model: Sao10K/L3-8B-Stheno-v3.2+grimjim/Llama-3-Instruct-abliteration-LoRA-8B
normalize: false
int8_mask: true
dtype: bfloat16
```
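
Assuming the configuration above is saved as config.yaml (a placeholder path), the merge could in principle be reproduced with mergekit's documented Python entry points; the output path and option values below are assumptions:

```python
# Sketch of reproducing the merge with mergekit's Python API, following the
# pattern in the mergekit README. "config.yaml" and the output directory are
# placeholder paths.
import yaml
import torch

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yaml", "r", encoding="utf-8") as f:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(
    merge_config,
    "./Aspire-8B-model_stock",           # output directory
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # use a GPU if one is present
        copy_tokenizer=True,             # copy the base model's tokenizer
        lazy_unpickle=False,
        low_cpu_memory=False,
    ),
)
```

The command-line equivalent would be `mergekit-yaml config.yaml ./Aspire-8B-model_stock`.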

Open LLM Leaderboard Evaluation Results

Detailed results can be found on the Open LLM Leaderboard: https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard?query=DreadPoor/Aspire-8B-model_stock

| Metric | Value |
|---|---:|
| Avg. | 28.28 |
| IFEval (0-Shot) | 71.41 |
| BBH (3-Shot) | 32.53 |
| MATH Lvl 5 (4-Shot) | 12.99 |
| GPQA (0-shot) | 8.61 |
| MuSR (0-shot) | 13.46 |
| MMLU-PRO (5-shot) | 30.70 |
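
The reported Avg. is simply the arithmetic mean of the six benchmark scores, which a quick check reproduces:

```python
# Quick check: the leaderboard "Avg." is the plain mean of the six scores above.
scores = [71.41, 32.53, 12.99, 8.61, 13.46, 30.70]
print(f"{sum(scores) / len(scores):.2f}")  # -> 28.28
```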
GGUF details

Model size: 8.03B params
Architecture: llama
Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
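
As a usage sketch, one of the quantized files can be loaded with llama-cpp-python. The repo id and filename glob below are assumptions, so check the repository's file list for the exact names:

```python
# Sketch: load a GGUF quantization with llama-cpp-python and run one chat turn.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="RichardErkhov/DreadPoor_-_Aspire-8B-model_stock-gguf",  # assumed repo id
    filename="*Q4_K_M.gguf",  # glob for a 4-bit K-quant; any listed quant works
    n_ctx=4096,               # context window
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Briefly introduce yourself."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```

Lower-bit quantizations trade output quality for memory; the 4-bit and 5-bit K-quants are a common middle ground.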
