---
license: apache-2.0
language:
- en
---

Base model: togethercomputer/RedPajama-INCITE-Base-3B-v1

Fine-tuned on the dataset from https://github.com/allenai/open-instruct, which was uncensored using the code in ehartford/wizard_vicuna_70k_unfiltered.

# Usage

```
### Human: your instruction
### ASSISANT: the generated output, ending with <|endoftext|>
```

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_heegyu__RedTulu-Uncensored-3B-0719).

| Metric               | Value |
|----------------------|-------|
| Avg.                 | 37.47 |
| ARC (25-shot)        | 40.02 |
| HellaSwag (10-shot)  | 62.55 |
| MMLU (5-shot)        | 30.37 |
| TruthfulQA (0-shot)  | 37.59 |
| Winogrande (5-shot)  | 62.35 |
| GSM8K (5-shot)       | 2.27  |
| DROP (3-shot)        | 27.1  |
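Below is a minimal generation sketch using Hugging Face Transformers with the prompt format from the Usage section above. It is not part of the original card: the repository id `heegyu/RedTulu-Uncensored-3B-0719` is inferred from the leaderboard details link, and the sampling settings are only illustrative.

```python
# Minimal usage sketch (not from the original card). Assumes the standard
# Transformers causal-LM interface; the repo id is inferred from the
# leaderboard details link and the sampling settings are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "heegyu/RedTulu-Uncensored-3B-0719"
device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if device == "cuda" else torch.float32

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=dtype).to(device)

# Prompt template copied verbatim from the Usage section above.
prompt = "### Human: Give me three tips for staying focused while studying.\n### ASSISANT:"
inputs = tokenizer(prompt, return_tensors="pt").to(device)

# The base model is GPT-NeoX-style, so <|endoftext|> serves as the EOS token
# and generation stops once it is produced.
output_ids = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
    eos_token_id=tokenizer.eos_token_id,
)

# Decode only the newly generated tokens.
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```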