
Nymeria

  • Upgraded SimPO.
  • A touch of 3SOME, Lumimaid and Jamet Blackroot, resulting in slightly different prose and a wider RP vocabulary.
  • Leans slightly more toward NSFW than the original.

All quants were made using the imatrix option, with the dataset provided by bartowski here.

SillyTavern

Text Completion presets

temp 0.9
top_k 30
top_p 0.75
min_p 0.2
rep_pen 1.1
smooth_factor 0.25
smooth_curve 1
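
These settings carry over to most llama.cpp-based backends as well. Below is a minimal sketch using llama-cpp-python with an illustrative quant filename; note that smooth_factor and smooth_curve are SillyTavern-side samplers with no direct equivalent in this API, so they are omitted:

from llama_cpp import Llama

# Illustrative filename: use whichever quant you downloaded.
llm = Llama(model_path="L3-Nymeria-v2-8B.Q5_K_M.gguf", n_ctx=8192)

out = llm.create_completion(
    prompt="Your prompt here",
    max_tokens=256,
    temperature=0.9,     # temp
    top_k=30,
    top_p=0.75,
    min_p=0.2,
    repeat_penalty=1.1,  # rep_pen
    # smooth_factor / smooth_curve are applied by SillyTavern itself.
)
print(out["choices"][0]["text"])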

Advanced Formatting

Context & Instruct preset by Virt-io

Instruct Mode: Enabled

Merge

This is a merge of pre-trained language models created using mergekit.

This model was merged using the slerp merge method.
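
For intuition, here is a minimal sketch of slerp between two flattened weight tensors (illustrative only, not mergekit's actual implementation). The t values in the configs below control how far each layer's weights move from the base model (t=0) toward the other model (t=1):

import numpy as np

def slerp(t, a, b, eps=1e-8):
    # Spherical linear interpolation between flattened weight tensors a and b.
    a_n = a / (np.linalg.norm(a) + eps)
    b_n = b / (np.linalg.norm(b) + eps)
    omega = np.arccos(np.clip(np.dot(a_n, b_n), -1.0, 1.0))
    if omega < eps:
        # Nearly colinear tensors: fall back to plain linear interpolation.
        return (1.0 - t) * a + t * b
    return (np.sin((1.0 - t) * omega) * a + np.sin(t * omega) * b) / np.sin(omega)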

Models Merged

The following models were included in the final merge:

  • tannedbum/L3-Stheno-3SOME-8B
  • tannedbum/L3-SimPO-Lumimaid-Jamet-Blackroot-8B

Configuration

The following YAML configurations were used to produce this model and its intermediate merges; the name after each block is that stage's resulting model:


slices:
  - sources:
      - model: NeverSleep/Llama-3-Lumimaid-8B-v0.1
        layer_range: [0, 32]
      - model: Hastagaras/Jamet-8B-L3-MK.V-Blackroot
        layer_range: [0, 32]
merge_method: slerp
base_model: NeverSleep/Llama-3-Lumimaid-8B-v0.1
parameters:
  t:
    - filter: self_attn
      value: [0.7, 0.3, 0.3, 0.3]
    - filter: mlp
      value: [0.3, 0.7, 0.7, 0.7]
    - value: 0.4
dtype: bfloat16

L3-Lumimaid-Jamet-Blackroot-8B


slices:
  - sources:
      - model: tannedbum/L3-Lumimaid-Jamet-Blackroot-8B
        layer_range: [0, 32]
      - model: chujiezheng/Llama-3-Instruct-8B-SimPO-ExPO
        layer_range: [0, 32]
merge_method: slerp
base_model: tannedbum/L3-Lumimaid-Jamet-Blackroot-8B
parameters:
  t:
    - filter: self_attn
      value: [0.3, 0.7, 0.7, 0.7]
    - filter: mlp
      value: [0.7, 0.3, 0.3, 0.3]
    - value: 0.6
dtype: bfloat16

L3-SimPO-Lumimaid-Jamet-Blackroot-8B


slices:
  - sources:
      - model: Sao10K/L3-8B-Stheno-v3.2
        layer_range: [0, 32]
      - model: TheDrummer/Llama-3SOME-8B-v2
        layer_range: [0, 32]
merge_method: slerp
base_model: Sao10K/L3-8B-Stheno-v3.2
parameters:
  t:
    - filter: self_attn
      value: [0.3, 0.3, 0.7, 0.3]
    - filter: mlp
      value: [0.7, 0.7, 0.3, 0.7]
    - value: 0.4
dtype: bfloat16

L3-Stheno-3SOME-8B


slices:
  - sources:
      - model: tannedbum/L3-Stheno-3SOME-8B
        layer_range: [0, 32]
      - model: tannedbum/L3-SimPO-Lumimaid-Jamet-Blackroot-8B
        layer_range: [0, 32]
merge_method: slerp
base_model: tannedbum/L3-Stheno-3SOME-8B
parameters:
  t:
    - filter: self_attn
      value: [0.4, 0.3, 0.3, 0.6]
    - filter: mlp
      value: [0.6, 0.7, 0.7, 0.4]
    - value: 0.4
dtype: bfloat16

L3-Nymeria-v2-8B
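
To reproduce the chain, each block can be passed to mergekit's mergekit-yaml command in the order shown, feeding the output of each stage in as an input to the next.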


Want to support my work? My Ko-fi page: https://ko-fi.com/tannedbum
