Quantization made by Richard Erkhov.
# Quantum-Citrus-9B - GGUF
- Model creator: https://huggingface.co/ABX-AI/
- Original model: https://huggingface.co/ABX-AI/Quantum-Citrus-9B/
Name | Quant method | Size |
---|---|---|
Quantum-Citrus-9B.Q2_K.gguf | Q2_K | 3.13GB |
Quantum-Citrus-9B.IQ3_XS.gguf | IQ3_XS | 3.48GB |
Quantum-Citrus-9B.IQ3_S.gguf | IQ3_S | 3.67GB |
Quantum-Citrus-9B.Q3_K_S.gguf | Q3_K_S | 3.65GB |
Quantum-Citrus-9B.IQ3_M.gguf | IQ3_M | 3.79GB |
Quantum-Citrus-9B.Q3_K.gguf | Q3_K | 4.05GB |
Quantum-Citrus-9B.Q3_K_M.gguf | Q3_K_M | 4.05GB |
Quantum-Citrus-9B.Q3_K_L.gguf | Q3_K_L | 4.41GB |
Quantum-Citrus-9B.IQ4_XS.gguf | IQ4_XS | 4.55GB |
Quantum-Citrus-9B.Q4_0.gguf | Q4_0 | 4.74GB |
Quantum-Citrus-9B.IQ4_NL.gguf | IQ4_NL | 4.79GB |
Quantum-Citrus-9B.Q4_K_S.gguf | Q4_K_S | 4.78GB |
Quantum-Citrus-9B.Q4_K.gguf | Q4_K | 5.04GB |
Quantum-Citrus-9B.Q4_K_M.gguf | Q4_K_M | 5.04GB |
Quantum-Citrus-9B.Q4_1.gguf | Q4_1 | 5.26GB |
Quantum-Citrus-9B.Q5_0.gguf | Q5_0 | 5.77GB |
Quantum-Citrus-9B.Q5_K_S.gguf | Q5_K_S | 5.77GB |
Quantum-Citrus-9B.Q5_K.gguf | Q5_K | 5.93GB |
Quantum-Citrus-9B.Q5_K_M.gguf | Q5_K_M | 5.93GB |
Quantum-Citrus-9B.Q5_1.gguf | Q5_1 | 6.29GB |
Quantum-Citrus-9B.Q6_K.gguf | Q6_K | 6.87GB |
Quantum-Citrus-9B.Q8_0.gguf | Q8_0 | 8.89GB |
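
As a rough rule of thumb, the lower quants (Q2_K and the IQ3 family) trade quality for size, while Q5_K_M, Q6_K, and Q8_0 stay closest to the original weights, with Q4_K_M a common middle ground. Below is a minimal sketch of fetching and trying one of these files; the repo id is an assumption (substitute the repo that actually hosts these files), and llama-cpp-python is just one of several GGUF-capable runtimes:

```python
# Minimal sketch: download one quant and run a short completion.
# Assumptions: the repo id below is illustrative, and llama-cpp-python
# (pip install llama-cpp-python) is installed.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="RichardErkhov/ABX-AI_-_Quantum-Citrus-9B-gguf",  # assumed repo id
    filename="Quantum-Citrus-9B.Q4_K_M.gguf",                 # ~5.04GB per the table
)

llm = Llama(model_path=model_path, n_ctx=4096)  # context length: adjust to your RAM
out = llm("Once upon a time,", max_tokens=64)
print(out["choices"][0]["text"])
```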
Original model description:

    license: other
    library_name: transformers
    tags:
    - mergekit
    - merge
    - mistral
    - not-for-all-audiences
    base_model:
    - ABX-AI/Cerebral-Infinity-7B
    - ABX-AI/Starfinite-Laymospice-v2-7B
    model-index:
    - name: Quantum-Citrus-9B
      results:
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: AI2 Reasoning Challenge (25-Shot)
          type: ai2_arc
          config: ARC-Challenge
          split: test
          args:
            num_few_shot: 25
        metrics:
        - type: acc_norm
          value: 65.19
          name: normalized accuracy
        source:
          url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ABX-AI/Quantum-Citrus-9B
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: HellaSwag (10-Shot)
          type: hellaswag
          split: validation
          args:
            num_few_shot: 10
        metrics:
        - type: acc_norm
          value: 84.75
          name: normalized accuracy
        source:
          url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ABX-AI/Quantum-Citrus-9B
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: MMLU (5-Shot)
          type: cais/mmlu
          config: all
          split: test
          args:
            num_few_shot: 5
        metrics:
        - type: acc
          value: 64.58
          name: accuracy
        source:
          url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ABX-AI/Quantum-Citrus-9B
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: TruthfulQA (0-shot)
          type: truthful_qa
          config: multiple_choice
          split: validation
          args:
            num_few_shot: 0
        metrics:
        - type: mc2
          value: 55.96
        source:
          url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ABX-AI/Quantum-Citrus-9B
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: Winogrande (5-shot)
          type: winogrande
          config: winogrande_xl
          split: validation
          args:
            num_few_shot: 5
        metrics:
        - type: acc
          value: 79.4
          name: accuracy
        source:
          url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ABX-AI/Quantum-Citrus-9B
          name: Open LLM Leaderboard
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          name: GSM8k (5-shot)
          type: gsm8k
          config: main
          split: test
          args:
            num_few_shot: 5
        metrics:
        - type: acc
          value: 50.57
          name: accuracy
        source:
          url: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ABX-AI/Quantum-Citrus-9B
          name: Open LLM Leaderboard
# Quantum-Citrus-9B

This merge is another attempt at making an intelligent, refined, and unaligned model. Based on my tests so far, it has accomplished those goals, and I am continuing to experiment with it.

It includes previous merges of Starling, Cerebrum, LemonadeRP, and InfinityRP, and deep down has a base of layla v0.1, as I was not that happy with the results from using v0.2.

The model is intended for fictional storytelling and roleplaying, and may not be suitable for all audiences.
## Merge Details
This is a merge of pre-trained language models created using mergekit.
### Merge Method
This model was merged using the passthrough merge method.
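
Passthrough merging stacks layer slices from the source models end-to-end rather than interpolating their weights, which is how two 7B models yield a roughly 9B result. The sketch below illustrates that layer arithmetic in plain Python (it is not mergekit code, and it assumes mergekit's `layer_range` is end-exclusive); the slice ranges come from the Configuration section further down:

```python
# Illustration of passthrough stacking, using the slice ranges from the
# Configuration section below. End-exclusive layer_range is an assumption.
slices = [
    ("ABX-AI/Cerebral-Infinity-7B", range(0, 20)),
    ("ABX-AI/Starfinite-Laymospice-v2-7B", range(12, 32)),
]

# The merged model is simply the concatenation of the selected layers.
stacked = [(model, layer) for model, layers in slices for layer in layers]

print(len(stacked))  # 40 layers vs. 32 in stock Mistral-7B, hence ~9B parameters
# Layers 12-19 appear twice, contributed once by each source model.
```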
### Models Merged
The following models were included in the merge:
- ABX-AI/Starfinite-Laymospice-v2-7B
- ABX-AI/Cerebral-Infinity-7B
### Configuration
The following YAML configuration was used to produce this model:
    slices:
      - sources:
          - model: ABX-AI/Cerebral-Infinity-7B
            layer_range: [0, 20]
      - sources:
          - model: ABX-AI/Starfinite-Laymospice-v2-7B
            layer_range: [12, 32]
    merge_method: passthrough
    dtype: float16
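
For anyone wanting to reproduce the merge, mergekit can be driven from Python as well as from its CLI. The following is a hedged sketch that assumes the config above is saved as `config.yml` and that mergekit's `MergeConfiguration`/`run_merge` entry points match its README at the time of writing; verify against the current mergekit documentation before relying on it:

```python
# Hypothetical reproduction sketch -- check the current mergekit README.
import yaml
import torch
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yml", "r", encoding="utf-8") as fp:  # the YAML shown above
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    merge_config,
    out_path="./Quantum-Citrus-9B",      # output directory (illustrative)
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # merge on GPU if one is available
        copy_tokenizer=True,             # carry the tokenizer into the output
    ),
)
```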
## Open LLM Leaderboard Evaluation Results

Detailed results can be found [here](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard?query=ABX-AI/Quantum-Citrus-9B).
Metric | Value |
---|---|
Avg. | 66.74 |
AI2 Reasoning Challenge (25-Shot) | 65.19 |
HellaSwag (10-Shot) | 84.75 |
MMLU (5-Shot) | 64.58 |
TruthfulQA (0-shot) | 55.96 |
Winogrande (5-shot) | 79.40 |
GSM8k (5-shot) | 50.57 |