
L3-8B-Stheno-v3.2 - EXL2 8.04bpw

This is an 8.04bpw EXL2 quant of Sao10K/L3-8B-Stheno-v3.2

This quant was made using exllamav2-0.0.21 with the default calibration dataset.
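For reference, an EXL2 quant like this one can be produced with exllamav2's `convert.py`. The paths below are placeholders, and exact flags may differ slightly between exllamav2 versions:

```shell
# Quantize an FP16 HF model to EXL2 at ~8 bits per weight.
# -i: input model directory, -o: scratch/working directory,
# -cf: output directory for the compiled quant, -b: target bits per weight.
python convert.py \
    -i /models/L3-8B-Stheno-v3.2 \
    -o /tmp/exl2-work \
    -cf /models/L3-8B-Stheno-v3.2-exl2-8bpw \
    -b 8.0
```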

I briefly tested this quant in a few random RPs (including one past 8k context, with RoPE scaling as recommended in the webui) and it seems to work fine.

Prompt Templates

See the original readme below for details. In general, this model uses the Llama 3 prompt template.

Original readme below


Training used 1x H100 SXM for a total of roughly 24 hours over multiple runs.

Support me here if you're interested:
Ko-fi: https://ko-fi.com/sao10k
*wink* Euryale v2?

If not, that's fine too. Feedback would be nice.

Contact Me in Discord:
sao10k // Just ping me in the KoboldAI discord, I'll respond faster.

Art by navy_(navy.blue) - Danbooru


Stheno

Stheno-v3.2-Zeta

I did a test run with multiple variations of the model, merged back to its base at various weights, with different training runs too, and this sixth iteration is the one I like most.

Changes compared to v3.1
- Included a mix of SFW and NSFW Storywriting Data, thanks to Gryphe
- Included More Instruct / Assistant-Style Data
- Further cleaned up Roleplaying Samples from c2 Logs -> A few terrible, really bad samples escaped the heavy filtering. A manual pass fixed it.
- Hyperparameter tinkering for training, resulting in lower loss levels.

Testing Notes - Compared to v3.1
- Handles SFW / NSFW separately better. Not as overly excessive with NSFW now. Kinda balanced.
- Better at Storywriting / Narration.
- Better at Assistant-type Tasks.
- Better Multi-Turn Coherency -> Reduced Issues?
- Slightly less creative? A worthy tradeoff. Still creative.
- Better prompt / instruction adherence.


Recommended Samplers:

Temperature - 1.12-1.22
Min-P - 0.075
Top-K - 50
Repetition Penalty - 1.1
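As a rough illustration of how these samplers interact (a hypothetical pure-Python sketch, not any frontend's actual implementation), the logits are penalized for repetition, temperature-scaled, and then filtered by Top-K and Min-P before sampling:

```python
import math

def sample_filter(logits, temperature=1.15, min_p=0.075, top_k=50,
                  rep_penalty=1.1, prev_tokens=()):
    """Return a filtered probability distribution using the recommended samplers."""
    logits = list(logits)
    # Repetition penalty: damp logits of tokens already generated.
    for t in set(prev_tokens):
        logits[t] = logits[t] / rep_penalty if logits[t] > 0 else logits[t] * rep_penalty
    # Temperature scaling, then a numerically stable softmax.
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Top-K: keep only the k most likely tokens.
    if top_k and top_k < len(probs):
        kth = sorted(probs, reverse=True)[top_k - 1]
        probs = [p if p >= kth else 0.0 for p in probs]
    # Min-P: drop tokens below min_p times the best token's probability.
    cutoff = min_p * max(probs)
    probs = [p if p >= cutoff else 0.0 for p in probs]
    # Renormalize the surviving tokens.
    total = sum(probs)
    return [p / total for p in probs]
```

With the values above, Min-P at 0.075 prunes tokens that are less than 7.5% as likely as the top candidate, which keeps the higher 1.12-1.22 temperature from derailing the output.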

Stopping Strings:

\n\n{{User}} # Or Equivalent, depending on Frontend
<|eot_id|>
<|end_of_text|>
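Frontends typically just cut the generated stream at the first stop string. A minimal sketch of that logic (the `\n\nUser:` entry below is a hypothetical stand-in for whatever your frontend substitutes for {{User}}):

```python
# Hypothetical frontend-side helper; "\n\nUser:" stands in for the
# frontend-specific {{User}} substitution.
STOP_STRINGS = ["\n\nUser:", "<|eot_id|>", "<|end_of_text|>"]

def truncate_at_stop(text, stop_strings=STOP_STRINGS):
    """Cut generated text at the earliest occurrence of any stop string."""
    cut = len(text)
    for s in stop_strings:
        i = text.find(s)
        if i != -1:
            cut = min(cut, i)
    return text[:cut]
```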

Prompting Template - Llama-3-Instruct

<|begin_of_text|><|start_header_id|>system<|end_header_id|>

{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>

{input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

{output}<|eot_id|>
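The template above can be assembled programmatically. A minimal sketch (the function name and the `(user, assistant)` pair structure are my own; most frontends do this for you):

```python
def build_llama3_prompt(system_prompt, turns):
    """Assemble a Llama-3-Instruct prompt string.

    turns: list of (user, assistant) pairs; pass None as the final
    assistant reply to leave the prompt open for generation.
    """
    out = ["<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n",
           system_prompt, "<|eot_id|>"]
    for user, assistant in turns:
        out += ["<|start_header_id|>user<|end_header_id|>\n\n", user, "<|eot_id|>"]
        # The assistant header is always appended; the model continues from here
        # when no assistant text is supplied.
        out.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
        if assistant is not None:
            out += [assistant, "<|eot_id|>"]
    return "".join(out)
```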

Basic Roleplay System Prompt

You are an expert actor that can fully immerse yourself into any role given. You do not break character for any reason, even if someone tries addressing you as an AI or language model.
Currently your role is {{char}}, which is described in detail below. As {{char}}, continue the exchange with {{user}}.
