Tissint-14B-v1.1-128k-RP


Chat Example

The model is based on SuperNova-Medius (arguably the best current 14B model) and supports a 128k context, with an emphasis on creativity, including NSFW content and multi-turn role-play conversations.

According to my tests, this finetune is much more stable across different samplers than the original model. Censorship and refusals have been reduced.

The model now follows the system prompt better, and ChatML-format responses no longer balloon past 800 tokens for no reason under poorly tuned samplers.

V1.1

I have increased the size of the dataset and added instruction data mixed with NSFW RP, which should, in theory, improve the quality of the model.

Chat Template - ChatML
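For reference, ChatML wraps each turn in `<|im_start|>`/`<|im_end|>` markers. The helper below is a minimal illustrative sketch (not from the model's repo); in practice, `tokenizer.apply_chat_template` from `transformers` produces this layout for ChatML models.

```python
def to_chatml(messages, add_generation_prompt=True):
    """Render a list of {role, content} dicts as a ChatML prompt string."""
    parts = [f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>" for m in messages]
    if add_generation_prompt:
        # Open an assistant turn so the model continues from here.
        parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

prompt = to_chatml([
    {"role": "system", "content": "You are a creative role-play partner."},
    {"role": "user", "content": "Describe the tavern we just entered."},
])
print(prompt)
```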

Samplers

Balance

- Temp: 0.8 - 1.15
- Min P: 0.1
- Repetition Penalty: 1.02
- DRY: 0.8, 1.75, 2, 2048 (change to 4096 or more if needed)
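If your backend lacks Min-P, here is a minimal pure-Python sketch of what it does: after temperature scaling, only tokens whose probability is at least `min_p` times the top token's probability survive. This is an illustration of the sampler, not code from this repo.

```python
import math
import random

def min_p_sample(logits, temperature=0.8, min_p=0.1, rng=random):
    """Sample a token index using temperature scaling + Min-P filtering."""
    # Temperature scaling followed by a numerically stable softmax.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Min-P: keep tokens with prob >= min_p * (top token's prob).
    cutoff = min_p * max(probs)
    kept = [(i, p) for i, p in enumerate(probs) if p >= cutoff]
    # Sample among the survivors, weighted by their probabilities.
    z = sum(p for _, p in kept)
    r = rng.random() * z
    acc = 0.0
    for i, p in kept:
        acc += p
        if r <= acc:
            return i
    return kept[-1][0]
```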

Creativity

- Temp: 1.15 - 1.5
- Top P: 0.9
- Repetition Penalty: 1.03
- DRY: 0.82, 1.75, 2, 2048 (change to 4096 or more if needed)
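As a convenience, the two presets can be written as request payloads for a llama.cpp-style backend. The parameter names below follow llama.cpp server conventions and the DRY numbers are assumed to map to multiplier, base, allowed length, and penalty range in that order; adjust the keys for your own backend.

```python
# Hedged sketch: sampler presets as llama.cpp-style request parameters.
BALANCE = {
    "temperature": 1.0,          # anywhere in 0.8 - 1.15
    "min_p": 0.1,
    "repeat_penalty": 1.02,
    "dry_multiplier": 0.8,
    "dry_base": 1.75,
    "dry_allowed_length": 2,
    "dry_penalty_last_n": 2048,  # raise to 4096+ if repetition persists
}

CREATIVITY = {
    "temperature": 1.3,          # anywhere in 1.15 - 1.5
    "top_p": 0.9,
    "repeat_penalty": 1.03,
    "dry_multiplier": 0.82,
    "dry_base": 1.75,
    "dry_allowed_length": 2,
    "dry_penalty_last_n": 2048,
}
```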
Model size: 14.8B params (Safetensors, FP16)

Model tree for Ttimofeyka/Tissint-14B-v1.1-128k-RP

Base model: Qwen/Qwen2.5-14B (finetuned, this model)
Quantizations: 2 models