Base model: https://huggingface.co/cognitivecomputations/WizardLM-30B-Uncensored (61 GB on disk), converted to 4.0-bit exl2 (~16 GB on disk) for ease of use. [This is my first model post; I hope it saves others time and brings enjoyment.]
Dataset used to train n810x/WizardLM-30B-Uncensored-202403.exl2
Evaluation results (Open LLM Leaderboard)

| Benchmark | Shots | Split | Metric | Score |
| --- | --- | --- | --- | --- |
| AI2 Reasoning Challenge | 25-shot | test | normalized accuracy | 60.24 |
| HellaSwag | 10-shot | validation | normalized accuracy | 82.93 |
| MMLU | 5-shot | test | accuracy | 56.80 |
| TruthfulQA | 0-shot | validation | mc2 | 51.57 |
| Winogrande | 5-shot | validation | accuracy | 74.35 |
| GSM8k | 5-shot | test | accuracy | 12.89 |
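These six benchmarks are the ones the Open LLM Leaderboard combines into a single headline number, which by leaderboard convention is the plain mean of the per-benchmark scores (the averaging step is leaderboard convention, not something stated in the card itself). A quick sanity check:

```python
# Per-benchmark scores as reported above (Open LLM Leaderboard).
scores = {
    "ARC (25-shot)": 60.24,
    "HellaSwag (10-shot)": 82.93,
    "MMLU (5-shot)": 56.80,
    "TruthfulQA (0-shot)": 51.57,
    "Winogrande (5-shot)": 74.35,
    "GSM8k (5-shot)": 12.89,
}

# The leaderboard's headline figure is the unweighted mean of the six scores.
average = sum(scores.values()) / len(scores)
print(f"Leaderboard average: {average:.2f}")  # → 56.46
```

Note the spread: strong HellaSwag/Winogrande scores against a low GSM8k score pull the average down, which is typical for models of this generation that were not tuned for math.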