
Quantization made by Richard Erkhov.

Github

Discord

Request more models

Quantum-Citrus-9B - GGUF

Original model description:

license: other
library_name: transformers
tags:
- mergekit
- merge
- mistral
- not-for-all-audiences
base_model:
- ABX-AI/Cerebral-Infinity-7B
- ABX-AI/Starfinite-Laymospice-v2-7B
model-index:
- name: Quantum-Citrus-9B
  results:
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: AI2 Reasoning Challenge (25-Shot)
      type: ai2_arc
      config: ARC-Challenge
      split: test
      args:
        num_few_shot: 25
    metrics:
    - type: acc_norm
      value: 65.19
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ABX-AI/Quantum-Citrus-9B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: HellaSwag (10-Shot)
      type: hellaswag
      split: validation
      args:
        num_few_shot: 10
    metrics:
    - type: acc_norm
      value: 84.75
      name: normalized accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ABX-AI/Quantum-Citrus-9B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: MMLU (5-Shot)
      type: cais/mmlu
      config: all
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 64.58
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ABX-AI/Quantum-Citrus-9B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: TruthfulQA (0-shot)
      type: truthful_qa
      config: multiple_choice
      split: validation
      args:
        num_few_shot: 0
    metrics:
    - type: mc2
      value: 55.96
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ABX-AI/Quantum-Citrus-9B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: Winogrande (5-shot)
      type: winogrande
      config: winogrande_xl
      split: validation
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 79.4
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ABX-AI/Quantum-Citrus-9B
      name: Open LLM Leaderboard
  - task:
      type: text-generation
      name: Text Generation
    dataset:
      name: GSM8k (5-shot)
      type: gsm8k
      config: main
      split: test
      args:
        num_few_shot: 5
    metrics:
    - type: acc
      value: 50.57
      name: accuracy
    source:
      url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ABX-AI/Quantum-Citrus-9B
      name: Open LLM Leaderboard


Quantum-Citrus-9B

This merge is another attempt at making an intelligent, refined, and unaligned model.

Based on my tests so far, it has accomplished these goals, and I am continuing to experiment with it.

It includes previous merges of Starling, Cerebrum, LemonadeRP, and InfinityRP, and deep down has a base of layla v0.1, as I am not that happy with the results from using v0.2.

The model is intended for fictional storytelling and roleplaying, and may not be suitable for all audiences.

GGUF / IQ / Imatrix

Merge Details

This is a merge of pre-trained language models created using mergekit.

Merge Method

This model was merged using the passthrough merge method.

Models Merged

The following models were included in the merge:

  • ABX-AI/Starfinite-Laymospice-v2-7B
  • ABX-AI/Cerebral-Infinity-7B

Configuration

The following YAML configuration was used to produce this model:

slices:
  - sources:
      - model: ABX-AI/Cerebral-Infinity-7B
        layer_range: [0, 20]
  - sources:
      - model: ABX-AI/Starfinite-Laymospice-v2-7B
        layer_range: [12, 32]
merge_method: passthrough
dtype: float16
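
Each slice contributes 20 decoder layers (layers 0-20 of Cerebral-Infinity-7B and layers 12-32 of Starfinite-Laymospice-v2-7B), so the passthrough merge simply stacks them into a 40-layer model of roughly 9B parameters. If you want to reproduce a merge like this, the sketch below shows one way to drive mergekit from Python. It assumes mergekit's documented Python entry points (MergeConfiguration, run_merge, MergeOptions) and an illustrative output path; it is a starting point, not necessarily the exact procedure used for this model.

# Minimal sketch: running a passthrough slice merge with mergekit.
# Assumes `pip install mergekit` and that both source models are reachable
# locally or on the Hugging Face Hub. Paths and options are illustrative.
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

CONFIG_YAML = """
slices:
  - sources:
      - model: ABX-AI/Cerebral-Infinity-7B
        layer_range: [0, 20]
  - sources:
      - model: ABX-AI/Starfinite-Laymospice-v2-7B
        layer_range: [12, 32]
merge_method: passthrough
dtype: float16
"""

merge_config = MergeConfiguration.model_validate(yaml.safe_load(CONFIG_YAML))

run_merge(
    merge_config,
    "./Quantum-Citrus-9B",   # output directory (illustrative)
    options=MergeOptions(
        cuda=False,           # set True to merge on GPU
        copy_tokenizer=True,  # carry a tokenizer over from the source models
        lazy_unpickle=True,   # lower peak RAM while loading shards
    ),
)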

Open LLM Leaderboard Evaluation Results

Detailed results can be found here

Metric Value
Avg. 66.74
AI2 Reasoning Challenge (25-Shot) 65.19
HellaSwag (10-Shot) 84.75
MMLU (5-Shot) 64.58
TruthfulQA (0-shot) 55.96
Winogrande (5-shot) 79.40
GSM8k (5-shot) 50.57
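
The reported average is simply the arithmetic mean of the six benchmark scores, which is easy to verify:

# Quick check: the leaderboard "Avg." is the mean of the six benchmark scores.
scores = [65.19, 84.75, 64.58, 55.96, 79.40, 50.57]
print(round(sum(scores) / len(scores), 2))  # -> 66.74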
GGUF quantizations are provided in 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, and 8-bit variants (8.99B parameters, llama architecture).
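
To run one of the GGUF files locally, any llama.cpp-compatible runtime works. Below is a minimal sketch using llama-cpp-python; the file name shown is illustrative, so substitute whichever quantization you actually downloaded from this repo.

# Minimal sketch: loading a GGUF quantization with llama-cpp-python.
# Assumes `pip install llama-cpp-python` and a downloaded .gguf file;
# the filename below is illustrative, not necessarily one from this repo.
from llama_cpp import Llama

llm = Llama(
    model_path="./Quantum-Citrus-9B.Q4_K_M.gguf",  # path to the quant you downloaded
    n_ctx=4096,       # context window; adjust to your memory budget
    n_gpu_layers=-1,  # offload all layers to GPU if built with GPU support
)

out = llm(
    "Write the opening paragraph of a noir detective story.",
    max_tokens=256,
    temperature=0.8,
)
print(out["choices"][0]["text"])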
