---
base_model:
  - princeton-nlp/Llama-3-Instruct-8B-SimPO
  - Sao10K/L3-8B-Stheno-v3.2
library_name: transformers
tags:
  - mergekit
  - merge
  - roleplay
  - sillytavern
  - llama3
  - not-for-all-audiences
license: cc-by-nc-4.0
language:
  - en
---

# Nymeria

This version is solely for scientific purposes, of course.

Nymeria is the balanced version; it doesn't force NSFW. Nymeria-Maid carries more of Stheno's weights, leans more toward NSFW, and is more submissive.

## SillyTavern

### Text Completion presets

- temp: 0.9
- top_k: 30
- top_p: 0.75
- min_p: 0.2
- rep_pen: 1.1
- smooth_factor: 0.25
- smooth_curve: 1
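
If you run the model outside SillyTavern, most of these settings map directly onto `transformers` generation parameters. A minimal sketch, assuming the repo id `tannedbum/L3-Nymeria-Maid-8B` from this card's path; `smooth_factor` / `smooth_curve` belong to SillyTavern's smoothing sampler and have no direct `transformers` equivalent, so they are omitted:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tannedbum/L3-Nymeria-Maid-8B"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

inputs = tokenizer("Hello!", return_tensors="pt").to(model.device)
output = model.generate(
    **inputs,
    do_sample=True,
    temperature=0.9,
    top_k=30,
    top_p=0.75,
    min_p=0.2,               # requires a recent transformers release
    repetition_penalty=1.1,
    max_new_tokens=256,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```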

### Advanced Formatting

Context & Instruct preset by Virt-io

Instruct Mode: Enabled
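
Outside SillyTavern, the equivalent Llama 3 Instruct turn formatting can be produced with the tokenizer's built-in chat template. A minimal sketch, assuming the tokenizer ships the standard Llama 3 template (repo id and messages are illustrative):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("tannedbum/L3-Nymeria-Maid-8B")
messages = [
    {"role": "system", "content": "You are Nymeria, a roleplay partner."},
    {"role": "user", "content": "Hi there!"},
]
# Renders the <|start_header_id|>...<|eot_id|> turn structure and appends
# an assistant header so generation continues as the character.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```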

## Merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

This model was merged using the slerp merge method.
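
For intuition: slerp interpolates along the arc between two weight vectors rather than the straight line used by plain linear averaging, which better preserves the geometry of the two checkpoints. A minimal NumPy sketch of the underlying formula; illustrative only, not mergekit's actual implementation:

```python
import numpy as np

def slerp(t: float, w0: np.ndarray, w1: np.ndarray) -> np.ndarray:
    """Spherical linear interpolation: t=0 returns w0, t=1 returns w1."""
    v0 = w0.ravel() / np.linalg.norm(w0)
    v1 = w1.ravel() / np.linalg.norm(w1)
    dot = np.clip(np.dot(v0, v1), -1.0, 1.0)
    theta = np.arccos(dot)           # angle between the two weight vectors
    if theta < 1e-6:                 # nearly parallel: fall back to lerp
        return (1.0 - t) * w0 + t * w1
    s0 = np.sin((1.0 - t) * theta) / np.sin(theta)
    s1 = np.sin(t * theta) / np.sin(theta)
    return s0 * w0 + s1 * w1
```

In the configuration below, `t` moves each tensor away from the base model (Sao10K/L3-8B-Stheno-v3.2, `t = 0`) toward princeton-nlp/Llama-3-Instruct-8B-SimPO (`t = 1`), with separate layer-wise schedules for attention and MLP weights.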

### Models Merged

The following models were included in the merge:

* [Sao10K/L3-8B-Stheno-v3.2](https://huggingface.co/Sao10K/L3-8B-Stheno-v3.2)
* [princeton-nlp/Llama-3-Instruct-8B-SimPO](https://huggingface.co/princeton-nlp/Llama-3-Instruct-8B-SimPO)

### Configuration

The following YAML configuration was used to produce this model:


```yaml
slices:
  - sources:
      - model: Sao10K/L3-8B-Stheno-v3.2
        layer_range: [0, 32]
      - model: princeton-nlp/Llama-3-Instruct-8B-SimPO
        layer_range: [0, 32]
merge_method: slerp
base_model: Sao10K/L3-8B-Stheno-v3.2
parameters:
  t:
    - filter: self_attn
      value: [0.2, 0.4, 0.4, 0.6]
    - filter: mlp
      value: [0.8, 0.6, 0.6, 0.4]
    - value: 0.4
dtype: bfloat16
```
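
To reproduce the merge, save the configuration as `config.yml` and feed it to mergekit, either via the `mergekit-yaml config.yml ./output` CLI or its Python API. A sketch assuming mergekit's documented Python entry points (paths are illustrative):

```python
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Parse the YAML configuration shown above.
with open("config.yml", "r", encoding="utf-8") as f:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(
    merge_config,
    out_path="./L3-Nymeria-Maid-8B",        # output directory (illustrative)
    options=MergeOptions(copy_tokenizer=True),
)
```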


## Original model information

Model: Sao10K/L3-8B-Stheno-v3.2

### Stheno-v3.2-Zeta

#### Changes compared to v3.1
- Included a mix of SFW and NSFW Storywriting Data, thanks to Gryphe
- Included More Instruct / Assistant-Style Data
- Further cleaned up Roleplaying Samples from c2 Logs -> A few terrible, really bad samples escaped heavy filtering. Manual pass fixed it.
- Hyperparameter tinkering for training, resulting in lower loss levels.

#### Testing Notes - Compared to v3.1
- Handles SFW / NSFW separately better. Not as overly excessive with NSFW now. Kinda balanced.
- Better at Storywriting / Narration.
- Better at Assistant-type Tasks.
- Better Multi-Turn Coherency -> Reduced Issues?
- Slightly less creative? A worthy tradeoff. Still creative.
- Better prompt / instruction adherence.


Want to support my work? My Ko-fi page: https://ko-fi.com/tannedbum