These are GGUF quants for https://huggingface.co/saishf/Nous-Lotus-10.7B


This is a merge of pre-trained language models created using mergekit.

Merge Details

This model is a SLERP merge of SnowLotus-v2 and Nous-Hermes-2-SOLAR. I found SnowLotus awesome to talk to, but it fell short when prompted with out-there characters. Nous Hermes seemed to handle those characters a lot better, so I decided to merge the two.

This is my first merge, so it could perform badly or may not work at all.

Extra Info

Both models are SOLAR-based, so the context length should be 4096 tokens.

SnowLotus uses the Alpaca prompt format.

Nous Hermes uses ChatML.

Both formats seem to work, but I don't know exactly which performs better.
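Since either prompt format can be used with the merge, here is a minimal sketch of both templates. The helper names are illustrative, and the Alpaca preamble is the commonly used wording rather than anything specific to SnowLotus:

```python
def alpaca_prompt(instruction: str) -> str:
    """Alpaca-style prompt (format used by SnowLotus)."""
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )


def chatml_prompt(system: str, user: str) -> str:
    """ChatML-style prompt (format used by Nous Hermes)."""
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )
```

The model generates its reply after `### Response:` or after the final `<|im_start|>assistant` line, depending on which template you pick.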

Merge Method

This model was merged using the SLERP merge method.
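For intuition, SLERP interpolates along the arc between two weight tensors instead of along the straight line. This is only a simplified sketch of the idea (mergekit's actual implementation differs in details), falling back to linear interpolation when the tensors are nearly parallel:

```python
import numpy as np

def slerp(t: float, a: np.ndarray, b: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation between two weight tensors.

    t=0 returns `a`, t=1 returns `b`. A simplified sketch of the
    SLERP merge idea, not mergekit's exact implementation.
    """
    a_flat, b_flat = a.ravel(), b.ravel()
    a_unit = a_flat / (np.linalg.norm(a_flat) + eps)
    b_unit = b_flat / (np.linalg.norm(b_flat) + eps)
    dot = np.clip(np.dot(a_unit, b_unit), -1.0, 1.0)
    theta = np.arccos(dot)
    if abs(theta) < eps:
        # Nearly colinear tensors: plain lerp is numerically safer.
        return (1.0 - t) * a + t * b
    s = np.sin(theta)
    mixed = (np.sin((1.0 - t) * theta) / s) * a_flat + (np.sin(t * theta) / s) * b_flat
    return mixed.reshape(a.shape)
```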

Models Merged

The following models were included in the merge:

- BlueNipples/SnowLotus-v2-10.7B
- NousResearch/Nous-Hermes-2-SOLAR-10.7B

Configuration

The following YAML configuration was used to produce this model:

```yaml
slices:
  - sources:
      - model: BlueNipples/SnowLotus-v2-10.7B
        layer_range: [0, 48]
      - model: NousResearch/Nous-Hermes-2-SOLAR-10.7B
        layer_range: [0, 48]
merge_method: slerp
base_model: BlueNipples/SnowLotus-v2-10.7B
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
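The `t` lists above are gradients: mergekit spreads the anchor values across the 48 layers, so early self_attn layers lean toward the base model (t near 0) and late ones toward Nous Hermes (t near 1), with the mlp schedule reversed. A simplified sketch of that spreading, assuming the anchors are evenly spaced over the layer range (an assumption about mergekit's exact behavior):

```python
import numpy as np

def layer_t(anchors, n_layers: int = 48) -> np.ndarray:
    """Interpolate a gradient of per-layer t values from anchor points.

    `anchors` (e.g. [0, 0.5, 0.3, 0.7, 1]) are assumed evenly spaced
    over the layer range; returns one interpolated t per layer.
    """
    anchor_pos = np.linspace(0.0, 1.0, num=len(anchors))
    layer_pos = np.linspace(0.0, 1.0, num=n_layers)
    return np.interp(layer_pos, anchor_pos, anchors)

attn_t = layer_t([0, 0.5, 0.3, 0.7, 1])  # self_attn schedule
mlp_t = layer_t([1, 0.5, 0.7, 0.3, 0])   # mlp schedule
```

With t=0 a layer keeps the base model's weights (SnowLotus) and with t=1 it takes Nous Hermes', so the two schedules mix attention and MLP blocks in opposite directions; tensors matching neither filter use the flat `value: 0.5`.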
Quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, and 8-bit GGUF variants (llama architecture, 10.7B parameters).
