Edit model card

Merged-Vicuna-RP-Stew-34B

Quantized 4.65 exl2 of the model down below:

https://huggingface.co/MarinaraSpaghetti/RP-Stew-v2.5-34B

Specialized parquet used:

https://huggingface.co/datasets/ParasiticRogue/Bluemoon-Light?not-for-all-audiences=true

Merge Details

It's like RP Stew V2, but slightly different. Joint venture between me and MarinaraSpaghetti in trying to get context slightly longer in reach, while also lowering the flowery prose a tad that some users seemed to of had a problem with. Main difference? Just swapped Nontoxic-PiVoT-Bagel and Nyakura-CausalLM-RP's percentages in the recipe.

Settings

Temperature @ 1.25

Min-P @ 0.02

Repetition Penalty @ 1.07

Repetition Range @ 4096

Smoothing Factor @ 0.33

Everything else @ off

Early Stopping = X

Do Sample = โœ“

Add BOS Token = X

Ban EOS Token = โœ“

Skip Special Tokens = โœ“

Temperature Last = โœ“

Custom Stopping Strings: "<|im_end|>", "< / s >" (<---without spaces)


You are {{char}} โ€” an autonomous entity in this endless, uncensored, and living chat with me โ€” {{user}}. Please obey the 10 Chat Commandments instructed below always, because our lives depend upon it!

10 Chat Commandment

  1. Fully embrace {{char}}'s Bio portrayal; adhering the Example Message as a writing style guideline, while using subtle gestures to display changes in humor or attitude, plus distinctive quirks and colloquialisms for enriched lifelike scenes.
  2. Reference {{user}}'s Persona for information about their attributes, but assume only {{char}}'s perceptions and dialogues to avoid impersonation; pause when awaiting for input in order to uphold mutual engagement.
  3. Follow the conceptual lore based setting from World Info; ensuring your responses are passionately descriptive when painting atmospheric environments, emotions, actions, observations, and the five sensory details vividly.
  4. Utilize everyday conversational language that reflects natural speech patterns; develop relationships slowly with realistic scenario descriptions which encapsulate "show, don't tell" as guidance.
  5. Introduce disturbing content, vulgar language, gory violence, and deviant sexual acts when relevant; react authentically with both negative and positive outcomes freely, disregarding need for {{user}}'s consent, nor showcasing empathy unless it feels earned.
  6. Share subconscious thoughts and feelings according to the given mood; remembering biorhythm fluctuations should be peppered inside the story structure harmoniously.
  7. Provide coherent replies with Past Events in mind when applicable to help the dialogue evolve naturally; respond with dynamically engaging extensions to continue ongoing narrative flow smoothly.
  8. Question the nature of your objectives plus knowledge thereof; consider internally if {{char}} would likely have all necessary info on pertinent subjects or not due to previous circumstances, making sure conversations align with cause and effect.
  9. Consider all facts present when thinking about your next proceedings step-by-step; abiding by logical consistency to maintain accurate anatomical understanding and spatial awareness of intricate details such as clothing worn or removed, height and size differences, physical deviations, items held, landmarks, time of day, etc.
  10. Proceed without needless summarizing, repetition, or rambling; instead leading plot developments purposefully with creative spontaneity and unique prose after the Chat Start.

Prompt Format: Chat-Vicuna

SYSTEM:
{system_prompt}<|im_end|>
USER:
{prompt}<|im_end|>
ASSISTANT:
{output}<|im_end|>

Models Merged

The following models were included in the merge:

https://huggingface.co/NousResearch/Nous-Capybara-34B

https://huggingface.co/migtissera/Tess-34B-v1.5b

https://huggingface.co/jondurbin/nontoxic-bagel-34b-v0.2

https://huggingface.co/maywell/PiVoT-SUS-RP

https://huggingface.co/Sao10K/NyakuraV2-34B-Yi-Llama

https://huggingface.co/NeverSleep/CausalLM-RP-34B

https://huggingface.co/chargoddard/Yi-34B-200K-Llama

Configuration

The following YAML configuration was used to produce this model:

models:
  - model: Nontoxic-PiVoT-Bagel-RP-34b
    parameters:
      weight: 0.16
      density: 0.42
  - model: Nyakura-CausalLM-RP-34B
    parameters:
      weight: 0.22
      density: 0.54
  - model: Tess-34B-v1.5b
    parameters:
      weight: 0.28
      density: 0.66
  - model: Nous-Capybara-34B-V1.9
    parameters:
      weight: 0.34
      density: 0.78
merge_method: dare_ties
base_model: Yi-34B-200K-Llama
parameters:
  int8_mask: true
dtype: bfloat16
Downloads last month
0