6.5 bpw is recommended.
Available quants
Download with git:

```bash
git clone --single-branch --branch 6.5 https://huggingface.co/tannedbum/L3-Rhaenys-8B-exl2 L3-Rhaenys-8B-exl2-6.5
```
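If you prefer not to pull branch history, the same 6.5 bpw revision can also be fetched with the Hugging Face CLI. A minimal sketch, assuming huggingface_hub is installed; the local directory name is just an example:

```bash
# Download only the 6.5 bpw branch into a local folder
huggingface-cli download tannedbum/L3-Rhaenys-8B-exl2 \
  --revision 6.5 --local-dir L3-Rhaenys-8B-exl2-6.5
```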
SillyTavern
Text Completion presets
temp 0.9
top_k 30
top_p 0.75
min_p 0.2
rep_pen 1.1
smooth_factor 0.25
smooth_curve 1
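These sampler values map onto the request body of most OpenAI-compatible exl2 backends. A minimal sketch, assuming a local server such as TabbyAPI or text-generation-webui on port 5000; the smoothing parameter names are an assumption and may differ between backends:

```bash
# Hypothetical local endpoint; prompt and max_tokens are placeholders
curl http://127.0.0.1:5000/v1/completions \
  -H "Content-Type: application/json" \
  -d '{
    "prompt": "Hello",
    "max_tokens": 256,
    "temperature": 0.9,
    "top_k": 30,
    "top_p": 0.75,
    "min_p": 0.2,
    "repetition_penalty": 1.1,
    "smoothing_factor": 0.25,
    "smoothing_curve": 1
  }'
```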
Advanced Formatting
Context & Instruct preset by Virt-io
Instruct Mode: Enabled
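For reference, the Virt-io Context & Instruct presets format prompts with the standard Llama 3 Instruct template, roughly:

```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

{system prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>

{user message}<|eot_id|><|start_header_id|>assistant<|end_header_id|>
```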
merge
This is a merge of pre-trained language models created using mergekit.
This model was merged using the SLERP (spherical linear interpolation) merge method.
Models Merged
The following models were included in the merge:
- Sao10K/L3-8B-Niitama-v1
- Sao10K/L3-8B-Stheno-v3.2
- princeton-nlp/Llama-3-Instruct-8B-SimPO-v0.2

tannedbum/L3-Niitama-Stheno-8B is the intermediate result of the first merge step and serves as the base model for the second.
Configuration
The merge was performed in two steps. The following YAML configurations were used: the first produces the intermediate tannedbum/L3-Niitama-Stheno-8B, and the second merges that intermediate with princeton-nlp/Llama-3-Instruct-8B-SimPO-v0.2 to produce this model.
```yaml
slices:
  - sources:
      - model: Sao10K/L3-8B-Niitama-v1
        layer_range: [0, 32]
      - model: Sao10K/L3-8B-Stheno-v3.2
        layer_range: [0, 32]
merge_method: slerp
base_model: Sao10K/L3-8B-Niitama-v1
parameters:
  t:
    - filter: self_attn
      value: [0.2, 0.4, 0.6, 0.2, 0.4]
    - filter: mlp
      value: [0.8, 0.6, 0.4, 0.8, 0.6]
    - value: 0.4
dtype: bfloat16
```
```yaml
slices:
  - sources:
      - model: tannedbum/L3-Niitama-Stheno-8B
        layer_range: [0, 32]
      - model: princeton-nlp/Llama-3-Instruct-8B-SimPO-v0.2
        layer_range: [0, 32]
merge_method: slerp
base_model: tannedbum/L3-Niitama-Stheno-8B
parameters:
  t:
    - filter: self_attn
      value: [0.2, 0.4, 0.6, 0.2, 0.4]
    - filter: mlp
      value: [0.8, 0.6, 0.4, 0.8, 0.6]
    - value: 0.4
dtype: bfloat16
```
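To reproduce the merge, each configuration is typically saved to its own file and run through the mergekit CLI in order. A minimal sketch; the file names and output paths below are illustrative, not taken from the original workflow:

```bash
# Step 1: Niitama + Stheno -> intermediate base model
mergekit-yaml niitama-stheno.yaml ./L3-Niitama-Stheno-8B --cuda

# Step 2: intermediate + SimPO -> L3-Rhaenys-8B
mergekit-yaml rhaenys.yaml ./L3-Rhaenys-8B --cuda
```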
Want to support my work? My Ko-fi page: https://ko-fi.com/tannedbum