
Exllamav2 quant (exl2 / 6.5 bpw) made with ExLlamaV2 v0.0.21

Other EXL2 quants:

| Quant (bpw) | Model Size | lm_head (bits) |
|-------------|------------|----------------|
| 2.2         | 3250 MB    | 6              |
| 2.5         | 3479 MB    | 6              |
| 3.0         | 3895 MB    | 6              |
| 3.5         | 4311 MB    | 6              |
| 3.75        | 4519 MB    | 6              |
| 4.0         | 4727 MB    | 6              |
| 4.25        | 4935 MB    | 6              |
| 5.0         | 5558 MB    | 6              |
| 6.0         | 6496 MB    | 8              |
| 6.5         | 6911 MB    | 8              |
| 8.0         | 8132 MB    | 8              |
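For reference, a minimal sketch of loading one of these EXL2 quants with the exllamav2 Python library. The local directory path and token count are placeholders, and the exact API may differ slightly between ExLlamaV2 versions:

```python
# Minimal sketch: load an EXL2 quant and generate with exllamav2 (v0.0.2x-era API).
# The model directory is a placeholder; download the desired bpw branch first.
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2BaseGenerator, ExLlamaV2Sampler

config = ExLlamaV2Config()
config.model_dir = "./Llama-3-8B-Stheno-v3.1-6.5bpw-exl2"  # placeholder path
config.prepare()

model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)   # allocate the cache as the model loads
model.load_autosplit(cache)                # split layers across available GPUs

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2BaseGenerator(model, cache, tokenizer)

settings = ExLlamaV2Sampler.Settings()     # defaults; see recommended samplers below
output = generator.generate_simple("Hello, my name is", settings, num_tokens=64)
print(output)
```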

Model: Llama-3-8B-Stheno-v3.1

Quants:

Select a repo here

This has been an experimental model I've been working on for a bit. Llama-3 was somewhat difficult to work with.
I had also been hired to create a model for an organisation, and I used the lessons learnt from fine-tuning that one for this specific model. I'm unable to share that one, unfortunately.
Made from outputs generated by Claude-3-Opus along with human-generated data.

Stheno-v3.1

- A model made primarily for 1-on-1 roleplay, but one that can handle scenarios, RPGs and storywriting fine as well.
- Uncensored during actual roleplay scenarios. # I do not care for zero-shot prompting like some people do. It is uncensored enough in actual use cases.
- I quite like the prose and style for this model.

Testing Notes


- Known as L3-RP-v2.1 on Chaiverse, it did decently there [>1200 Elo]
- Handles character personalities well. Great for 1 on 1 Roleplay sessions.
- May need further token context & few-shot examples if used as a Narrator / for RPG roleplaying sessions. It is able to handle them, though.
- The model leans towards NSFW; mention it explicitly in prompts if you want to steer away. [Avoid negative reinforcement.]
- Occasionally spits out leaking XML and nonsense. A regen / swipe instantly fixes that.
- Unique / varied answers when regenerating. Pretty cool?
- Works best with some token context in the character card itself. A chef needs ingredients to cook, no?


Recommended Samplers:

Temperature - 1.12 to 1.32
Min-P - 0.075
Top-K - 40
Repetition Penalty - 1.1
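As a sketch, here's how those values map onto exllamav2's sampler settings (attribute names taken from the ExLlamaV2Sampler.Settings API; adapt to your frontend of choice):

```python
from exllamav2.generator import ExLlamaV2Sampler

settings = ExLlamaV2Sampler.Settings()
settings.temperature = 1.2               # recommended range: 1.12 - 1.32
settings.min_p = 0.075
settings.top_k = 40
settings.token_repetition_penalty = 1.1
```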

Stopping Strings:

\n{{User}} # Or Equivalent, depending on Frontend
<|eot_id|>
<|end_of_text|>
\n< # In case of leaking XML tags in the response. Happens rarely; regenerate the answer as needed.
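Most frontends (e.g. SillyTavern) accept these directly as custom stopping strings, and exllamav2's streaming generator has a set_stop_conditions() method that serves the same purpose. If your frontend does not support stop strings, a hypothetical post-processing helper like the one below can trim the completion at the earliest stop string (the names are illustrative, not part of any library):

```python
# Hypothetical helper: cut a completion at the earliest stopping string.
STOP_STRINGS = ["\n{{User}}", "<|eot_id|>", "<|end_of_text|>", "\n<"]

def trim_at_stop(text: str, stops=STOP_STRINGS) -> str:
    cut = len(text)
    for stop in stops:
        idx = text.find(stop)
        if idx != -1:
            cut = min(cut, idx)
    return text[:cut]
```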

Prompting Template - Llama-3-Instruct

<|begin_of_text|><|start_header_id|>system<|end_header_id|>

{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>

{input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

{output}<|eot_id|>
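A small sketch of filling that template in code (build_prompt is a hypothetical helper, not part of any library):

```python
def build_prompt(system_prompt: str, user_message: str) -> str:
    """Format a single-turn Llama-3-Instruct prompt as shown above."""
    return (
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
        f"{system_prompt}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user_message}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )
```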

Basic Roleplay System Prompt

You are an expert actor that can fully immerse yourself into any role given. You do not break character for any reason, even if someone tries addressing you as an AI or language model.
Currently your role is {{char}}, which is described in detail below. As {{char}}, continue the exchange with {{user}}.
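For illustration, the {{char}} / {{user}} placeholders are simply replaced with the actual names before the prompt is sent (hypothetical example, continuing from the build_prompt sketch above):

```python
ROLEPLAY_SYSTEM_PROMPT = (
    "You are an expert actor that can fully immerse yourself into any role given. "
    "You do not break character for any reason, even if someone tries addressing you "
    "as an AI or language model.\n"
    "Currently your role is {{char}}, which is described in detail below. "
    "As {{char}}, continue the exchange with {{user}}."
)

def fill_placeholders(template: str, char: str, user: str) -> str:
    return template.replace("{{char}}", char).replace("{{user}}", user)

system_prompt = fill_placeholders(ROLEPLAY_SYSTEM_PROMPT, char="Stheno", user="User")
```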

Support me here if you're interested. Ko-Fi

If not, that's fine too. Feedback would be nice.

Art by wada_kazu / わだかず (pixiv page private?)
