This is Wizard-Vicuna-7B-Uncensored: LLaMA-7B trained on a subset of the Wizard-Vicuna dataset from which responses containing alignment / moralizing were removed. The intent is to train a WizardLM that doesn't have alignment built in, so that alignment (of any sort) can be added separately, for example with an RLHF LoRA.
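
As a rough illustration of that workflow (not part of the original card), here is a minimal sketch of loading this base model with Hugging Face transformers and layering a separately trained alignment adapter on top with peft. The adapter repo name is a hypothetical placeholder, and the prompt assumes the usual Vicuna-style USER/ASSISTANT template.

```python
# Sketch only: load the uncensored base model and add alignment as a
# separate, swappable LoRA adapter. "your-org/alignment-rlhf-lora" is a
# hypothetical placeholder, not a real adapter published with this model.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "cognitivecomputations/Wizard-Vicuna-7B-Uncensored"

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto")

# Alignment (of any sort) lives in the adapter, not in the base weights.
model = PeftModel.from_pretrained(model, "your-org/alignment-rlhf-lora")

prompt = "USER: Explain what a LoRA adapter is.\nASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```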

Shout out to the open source AI/ML community, and everyone who helped me out.

Note:

An uncensored model has no guardrails.

You are responsible for anything you do with the model, just as you are responsible for anything you do with any dangerous object such as a knife, gun, lighter, or car.

Publishing anything this model generates is the same as publishing it yourself.

You are responsible for the content you publish, and you cannot blame the model any more than you can blame the knife, gun, lighter, or car for what you do with it.

Open LLM Leaderboard Evaluation Results

Detailed results can be found here

| Metric | Value |
|---|---|
| Avg. | 44.77 |
| ARC (25-shot) | 53.41 |
| HellaSwag (10-shot) | 78.85 |
| MMLU (5-shot) | 37.09 |
| TruthfulQA (0-shot) | 43.48 |
| Winogrande (5-shot) | 72.22 |
| GSM8K (5-shot) | 4.55 |
| DROP (3-shot) | 23.8 |
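
For context (this is not part of the original card), scores like these come from EleutherAI's lm-evaluation-harness, which backs the Open LLM Leaderboard. Below is a hedged sketch of reproducing one number locally, assuming lm-eval (v0.4+) is installed; the leaderboard's exact settings may differ.

```python
# Hedged sketch: score one benchmark from the table above with EleutherAI's
# lm-evaluation-harness (pip install lm-eval). Task name and shot count are
# assumptions mirroring the table; leaderboard settings may differ.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=cognitivecomputations/Wizard-Vicuna-7B-Uncensored",
    tasks=["arc_challenge"],  # ARC, 25-shot in the table above
    num_fewshot=25,
    batch_size=8,
)
print(results["results"]["arc_challenge"])
```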
