
intro music...


This is Yi Self Merged. I wanted a model that follows most instructions yet preserves its base-model nature.



I use max_seq_len 8K with alpha_value 2.65.
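As a minimal sketch of what `alpha_value` does, assuming the exllama/text-generation-webui NTK-alpha convention: the RoPE frequency base is scaled by `alpha ** (d / (d - 2))`, where `d` is the head dimension. The `base=10000` and `head_dim=128` defaults below are illustrative assumptions; the actual base comes from the model's config (Yi models ship their own `rope_theta`).

```python
# Sketch only: NTK-alpha RoPE base scaling as used by exllama-style loaders.
# base=10000 and head_dim=128 are illustrative assumptions, not this model's
# actual config values.

def alpha_to_rope_base(alpha: float, base: float = 10000.0, head_dim: int = 128) -> float:
    """Scale the RoPE frequency base by alpha ** (d / (d - 2))."""
    return base * alpha ** (head_dim / (head_dim - 2))

scaled = alpha_to_rope_base(2.65)
print(round(scaled))  # roughly 2.7x the original base
```

With alpha_value 2.65 the effective base grows by roughly 2.7x, which is what stretches the usable context toward 8K.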

SillyTavern presets:

    "temp": 0.1,
    "temperature_last": true,
    "top_p": 1,
    "top_k": 0,
    "top_a": 0,
    "tfs": 1,
    "epsilon_cutoff": 0,
    "eta_cutoff": 0,
    "typical_p": 1,
    "min_p": 0,
    "rep_pen": 1.08,
    "rep_pen_range": 0,
    "no_repeat_ngram_size": 0,
    "penalty_alpha": 0,
    "num_beams": 1,
    "length_penalty": 1,
    "min_length": 0,
    "encoder_rep_pen": 1,
    "freq_pen": 0.01,
    "presence_pen": 0,
    "do_sample": true,
    "early_stopping": false,
    "add_bos_token": true,
    "truncation_length": 2048,
    "ban_eos_token": false,
    "skip_special_tokens": true,
    "streaming": true,
    "mirostat_mode": 0,
    "mirostat_tau": 5,
    "mirostat_eta": 0.1,
    "guidance_scale": 1,
    "negative_prompt": "",
    "grammar_string": "",
    "banned_tokens": "",
    "ignore_eos_token_aphrodite": false,
    "spaces_between_special_tokens_aphrodite": true,
    "sampler_order": [
    "logit_bias": [],
    "n": 1,
    "rep_pen_size": 0,
    "genamt": 2048,
    "max_length": 8192

Terms and Conditions of Use

The following table outlines the primary characteristics and intended uses of my YiSM-34B-0rn models:

| Model Type | Purpose | Target Users | Key Features |
|---|---|---|---|
| Censored | Suitable for general audiences and sensitive topics | Educational institutions, families, and individuals seeking age-appropriate content | Restricts explicit or mature material |
| Neutral (this one) | Balances accessibility with openness | Universities, researchers, and curious minds | Encourages exploration and intellectual exchange |
| Uncensored | Ideal for adults and specialized fields | Professionals, experts, and advanced scholars | Offers unfiltered access to diverse viewpoints and knowledge |

Please remember that all YiSM-34B-0rn models are released under the Apache-2.0 license, so familiarize yourself with its terms and conditions before using their output.


Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric | Value |
|---|---|
| Avg. | 75.65 |
| AI2 Reasoning Challenge (25-shot) | 69.54 |
| HellaSwag (10-shot) | 86.67 |
| MMLU (5-shot) | 78.51 |
| TruthfulQA (0-shot) | 59.68 |
| Winogrande (5-shot) | 83.66 |
| GSM8k (5-shot) | 75.82 |
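The leaderboard average is just the mean of the six benchmark scores, which can be checked in a couple of lines:

```python
# Quick arithmetic check: the "Avg." row is the mean of the six benchmarks.
scores = {
    "ARC (25-shot)": 69.54,
    "HellaSwag (10-shot)": 86.67,
    "MMLU (5-shot)": 78.51,
    "TruthfulQA (0-shot)": 59.68,
    "Winogrande (5-shot)": 83.66,
    "GSM8k (5-shot)": 75.82,
}
avg = round(sum(scores.values()) / len(scores), 2)
print(avg)  # 75.65
```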

5th in the 34B size range (excluding private or deleted models), or 8th with all models included, as of 2024-06-10 ;P
