The smartest L3 8B model combined with a high-end RP model. What could go wrong?
The idea was to fuse a bit of SimPO's realism with Stheno. It took a few days to come up with a balanced slerp configuration, but I'm more than satisfied with the end result.
All quants were made using the imatrix option, with the dataset provided by bartowski here.
SillyTavern
Text Completion presets
- temp 0.9
- top_k 30
- top_p 0.75
- min_p 0.2
- rep_pen 1.1
- smooth_factor 0.25
- smooth_curve 1
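If you run the GGUF quants outside SillyTavern, roughly the same sampling settings can be passed to a backend directly. A minimal sketch assuming llama-cpp-python (not part of the original setup; the GGUF filename is a placeholder, and the smoothing sampler isn't exposed there, so smooth_factor/smooth_curve only apply to backends that support it, such as text-generation-webui or koboldcpp):

```python
# Sketch: applying roughly the same samplers through llama-cpp-python.
# smoothing_factor / smoothing_curve have no equivalent here and are omitted.
from llama_cpp import Llama

llm = Llama(model_path="model-Q6_K.gguf", n_ctx=8192)  # placeholder GGUF filename

out = llm.create_completion(
    prompt="Write a short scene in a rainy harbour town.",
    max_tokens=256,
    temperature=0.9,
    top_k=30,
    top_p=0.75,
    min_p=0.2,
    repeat_penalty=1.1,
)
print(out["choices"][0]["text"])
```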
Advanced Formatting
Context & Instruct preset by Virt-io
Instruct Mode: Enabled
merge
This is a merge of pre-trained language models created using mergekit.
This model was merged using the slerp merge method.
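For readers unfamiliar with it, SLERP interpolates between the two models' weights along the arc joining them rather than along a straight line, with t controlling how far the result sits from the base model. A minimal illustrative sketch of the idea (not mergekit's actual implementation, which also handles per-layer t schedules and other edge cases):

```python
import torch

def slerp(t: float, w0: torch.Tensor, w1: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors."""
    v0, v1 = w0.flatten().float(), w1.flatten().float()
    # Angle between the two weight vectors (computed on normalized copies).
    dot = torch.clamp(torch.dot(v0 / (v0.norm() + eps), v1 / (v1.norm() + eps)), -1.0, 1.0)
    omega = torch.acos(dot)
    if omega.abs() < 1e-6:
        # Nearly parallel weights: fall back to plain linear interpolation.
        merged = (1 - t) * v0 + t * v1
    else:
        merged = (torch.sin((1 - t) * omega) * v0 + torch.sin(t * omega) * v1) / torch.sin(omega)
    return merged.reshape(w0.shape).to(w0.dtype)
```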
Models Merged
The following models were included in the merge:
- Sao10K/L3-8B-Stheno-v3.2
- princeton-nlp/Llama-3-Instruct-8B-SimPO
Configuration
The following YAML configuration was used to produce this model:
```yaml
slices:
  - sources:
      - model: Sao10K/L3-8B-Stheno-v3.2
        layer_range: [0, 32]
      - model: princeton-nlp/Llama-3-Instruct-8B-SimPO
        layer_range: [0, 32]
merge_method: slerp
base_model: Sao10K/L3-8B-Stheno-v3.2
parameters:
  t:
    - filter: self_attn
      value: [0.4, 0.5, 0.6, 0.4, 0.6]
    - filter: mlp
      value: [0.6, 0.5, 0.4, 0.6, 0.4]
    - value: 0.5
dtype: bfloat16
```
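The merge can be reproduced by saving the configuration above to a file and running it through mergekit, either with the mergekit-yaml CLI or through its Python API. A sketch following mergekit's documented Python entry points (config.yaml and the output directory are placeholders; option names may differ slightly between mergekit versions):

```python
import yaml
import torch
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the YAML configuration shown above.
with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Roughly equivalent to: mergekit-yaml config.yaml ./merged
run_merge(
    merge_config,
    out_path="./merged",
    options=MergeOptions(
        cuda=torch.cuda.is_available(),
        copy_tokenizer=True,
        lazy_unpickle=False,
        low_cpu_memory=False,
    ),
)
```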
Original model information:
Model: Sao10K/L3-8B-Stheno-v3.2
Stheno-v3.2-Zeta
Changes compared to v3.1
- Included a mix of SFW and NSFW Storywriting Data, thanks to Gryphe
- Included More Instruct / Assistant-Style Data
- Further cleaned up Roleplaying Samples from c2 Logs -> A few terrible, really bad samples escaped heavy filtering. Manual pass fixed it.
- Hyperparameter tinkering for training, resulting in lower loss levels.
Testing Notes - Compared to v3.1
- Handles SFW / NSFW separately better. Not as overly excessive with NSFW now. Kinda balanced.
- Better at Storywriting / Narration.
- Better at Assistant-type Tasks.
- Better Multi-Turn Coherency -> Reduced Issues?
- Slightly less creative? A worthy tradeoff. Still creative.
- Better prompt / instruction adherence.
Want to support my work? My Ko-fi page: https://ko-fi.com/tannedbum