GGUF / IQ / Imatrix for Silver-Sun-v2-11B

Why Importance Matrix?

Importance Matrix quantization, at least based on my testing, has been shown to improve the output quality of "IQ"-type quantizations, where the compression becomes quite heavy. The imatrix performs a calibration using a provided dataset. Testing has shown that semi-randomized data can help preserve the more important weights as the compression is applied.
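As a rough sketch, generating an importance matrix and applying it during quantization with llama.cpp looks roughly like this (file names are placeholders; check your llama.cpp build for exact tool names):

```shell
# Compute an importance matrix from a calibration text file (placeholder paths).
./llama-imatrix -m model-f16.gguf -f imatrix.txt -o imatrix.dat

# Use the matrix when producing a heavily compressed IQ-type quantization.
./llama-quantize --imatrix imatrix.dat model-f16.gguf model-IQ3_XS.gguf IQ3_XS
```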

Related discussions on GitHub: [1] [2]

The imatrix.txt file that I used contains general, semi-random data, with some custom kink.

Silver-Sun-v2-11B

This is an updated version of Silver-Sun-11B. The change is that the Solstice-FKL-v2-10.7B merge now uses Sao10K/Fimbulvetr-11B-v2 instead of v1. Additionally, the config of the original Silver-Sun was wrong, and I have updated that as well. As expected, this is a HIGHLY uncensored model. It should perform even better than v1 thanks to the updated Fimbulvetr and the fixed config.

Works with Alpaca and, from my tests, also ChatML; however, Alpaca may be the better option. Try both and use whichever works better for you. Due to a quirk with Solar, for the best quality either launch at 4K context, or launch at 8K (and possibly beyond; I have not tested higher) with 4K of context pre-loaded in the prompt.
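For reference, a minimal Alpaca-style prompt can be assembled like this (a sketch only; the exact template your frontend uses may differ):

```python
def build_alpaca_prompt(instruction: str, response: str = "") -> str:
    """Assemble a basic Alpaca-format prompt (template details vary by frontend)."""
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        f"### Response:\n{response}"
    )

prompt = build_alpaca_prompt("Continue the story about the silver sun.")
print(prompt)
```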

This model is intended for fictional storytelling and writing, focusing on NSFW capabilities and lack of censorship for RP reasons.

Merge Details

This is a merge of pre-trained language models created using mergekit.

Merge Method

This model was merged using the SLERP merge method.
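SLERP interpolates along the arc between two weight vectors rather than along the straight line between them, which preserves vector magnitude better when the endpoints differ in direction. A minimal sketch of the idea (an illustration, not mergekit's actual implementation):

```python
import numpy as np

def slerp(t: float, v0: np.ndarray, v1: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation between two vectors for blend factor t in [0, 1]."""
    u0 = v0 / (np.linalg.norm(v0) + eps)
    u1 = v1 / (np.linalg.norm(v1) + eps)
    dot = np.clip(np.dot(u0, u1), -1.0, 1.0)
    theta = np.arccos(dot)          # angle between the two directions
    if theta < eps:                 # nearly parallel: fall back to linear interpolation
        return (1 - t) * v0 + t * v1
    s0 = np.sin((1 - t) * theta) / np.sin(theta)
    s1 = np.sin(t * theta) / np.sin(theta)
    return s0 * v0 + s1 * v1
```

With `t = 0` the result is the first vector, with `t = 1` the second; intermediate values blend along the arc.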

Models Merged

The following models were included in the merge:

- ./MODELS/Solstice-FKL-v2-10.7B
- Himitsui/Kaiju-11B

OpenLLM Eval Results

Detailed Results + Failed GSM8K

I had to remove GSM8K from the results and manually re-average the rest. GSM8K failed due to a formatting issue, which is not something I experienced during practical usage. With the GSM8K score removed, the average is very close to that of upstage/SOLAR-10.7B-v1.0 (74.20), which makes sense. Feel free to ignore the average and use the individual scores for reference.

| Metric | Value |
|---|---|
| Avg. | 74.04 |
| AI2 Reasoning Challenge (25-shot) | 69.88 |
| HellaSwag (10-shot) | 87.81 |
| MMLU (5-shot) | 66.74 |
| TruthfulQA (0-shot) | 62.49 |
| Winogrande (5-shot) | 83.27 |
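The re-averaged score can be verified with simple arithmetic over the five remaining benchmarks:

```python
# Individual benchmark scores with GSM8K removed.
scores = {
    "ARC (25-shot)": 69.88,
    "HellaSwag (10-shot)": 87.81,
    "MMLU (5-shot)": 66.74,
    "TruthfulQA (0-shot)": 62.49,
    "Winogrande (5-shot)": 83.27,
}
avg = round(sum(scores.values()) / len(scores), 2)
print(avg)  # 74.04
```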

Configuration

The following YAML configuration was used to produce this model:

```yaml
slices:
  - sources:
      - model: ./MODELS/Solstice-FKL-v2-10.7B
        layer_range: [0, 48]
      - model: Himitsui/Kaiju-11B
        layer_range: [0, 48]
merge_method: slerp
base_model: ./MODELS/Solstice-FKL-v2-10.7B
parameters:
  t:
    - filter: self_attn
      value: [0.6, 0.7, 0.8, 0.9, 1]
    - filter: mlp
      value: [0.4, 0.3, 0.2, 0.1, 0]
    - value: 0.5
dtype: bfloat16
```
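The five-element `value` lists define a per-layer gradient for the blend factor `t`: self-attention weights lean increasingly toward the second model in deeper layers, MLP weights the opposite, and everything else blends evenly at 0.5. How a short gradient list maps onto all 48 layers can be sketched with linear interpolation (an illustration of the idea; mergekit's exact interpolation may differ):

```python
import numpy as np

# Map a 5-point gradient onto 48 layers by linear interpolation.
gradient = [0.6, 0.7, 0.8, 0.9, 1.0]   # self_attn t values from the config above
layers = 48
anchors = np.linspace(0, layers - 1, num=len(gradient))
t_per_layer = np.interp(np.arange(layers), anchors, gradient)
print(t_per_layer[0], t_per_layer[-1])  # 0.6 1.0
```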