
Fett-uccine

This model was created by training a Mistral base model on LimaRP (in ShareGPT format, provided by SAO), theory-of-mind data, and Gnosis (provided by jeiku).

The resulting 8-bit LoRA was then merged into Mistral Instruct, producing the model you see here.
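
For readers who want to reproduce a merge like this, the peft library can fold a trained LoRA adapter back into a base model. The following is a minimal sketch, not the exact recipe used for Fett-uccine; the base-model ID and the adapter path are assumptions for illustration.

```python
# Sketch: merging a LoRA adapter into a base model with peft.
# The base-model ID and adapter path are placeholders, not the exact ones used here.
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.2")
model = PeftModel.from_pretrained(base, "path/to/lora-adapter")  # load the trained LoRA
model = model.merge_and_unload()  # bake the adapter weights into the base model
model.save_pretrained("Fett-uccine-7B-merged")
```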

Works best with the ChatML instruct format.
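
For reference, ChatML wraps each turn in `<|im_start|>` and `<|im_end|>` tokens; the placeholders below stand in for your own system prompt and messages:

```
<|im_start|>system
{system prompt}<|im_end|>
<|im_start|>user
{message}<|im_end|>
<|im_start|>assistant
```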

This model is in honor of the SillyTavern community. Keep being awesome!

Optimal sampler settings, provided by Nitral:

```json
{
    "temp": 5,
    "temperature_last": true,
    "top_p": 1,
    "top_k": 0,
    "top_a": 0,
    "tfs": 1,
    "epsilon_cutoff": 0,
    "eta_cutoff": 0,
    "typical_p": 1,
    "min_p": 0.05,
    "rep_pen": 1,
    "rep_pen_range": 0,
    "no_repeat_ngram_size": 0,
    "penalty_alpha": 0,
    "num_beams": 1,
    "length_penalty": 0,
    "min_length": 0,
    "encoder_rep_pen": 1,
    "freq_pen": 0,
    "presence_pen": 0,
    "do_sample": true,
    "early_stopping": false,
    "dynatemp": false,
    "min_temp": 1,
    "max_temp": 5,
    "dynatemp_exponent": 1,
    "smoothing_factor": 0.3,
    "add_bos_token": true,
    "truncation_length": 2048,
    "ban_eos_token": false,
    "skip_special_tokens": true,
    "streaming": false,
    "mirostat_mode": 0,
    "mirostat_tau": 5,
    "mirostat_eta": 0.1,
    "guidance_scale": 1,
    "negative_prompt": "",
    "grammar_string": "",
    "banned_tokens": "",
    "ignore_eos_token_aphrodite": false,
    "spaces_between_special_tokens_aphrodite": true,
    "sampler_order": [
        6,
        0,
        1,
        3,
        4,
        2,
        5
    ],
    "logit_bias": [],
    "n": 1,
    "rep_pen_size": 0,
    "genamt": 150,
    "max_length": 8192
}
```
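
For use outside SillyTavern, the most portable of these settings map onto Hugging Face transformers generation arguments. The sketch below is an assumed mapping, shows only the knobs transformers supports directly, and omits temperature because the preset's `"temp": 5` is meant to be combined with `"smoothing_factor": 0.3` (quadratic sampling), which transformers does not implement.

```python
# Sketch: applying the key sampler settings above via transformers generate().
# The name mapping from the SillyTavern preset is an assumption, and only the
# settings transformers exposes directly are included.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Epiculous/Fett-uccine-7B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

prompt = "<|im_start|>user\nHello!<|im_end|>\n<|im_start|>assistant\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

output = model.generate(
    **inputs,
    do_sample=True,          # "do_sample": true
    top_p=1.0,               # "top_p": 1
    top_k=0,                 # "top_k": 0 (disabled)
    min_p=0.05,              # "min_p": 0.05 (requires a recent transformers release)
    repetition_penalty=1.0,  # "rep_pen": 1 (disabled)
    max_new_tokens=150,      # "genamt": 150
)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```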
