---
license: apache-2.0
datasets:
- totally-not-an-llm/everything-sharegptformat-morecleaned
language:
- en
pipeline_tag: text-generation
---

This is [OpenLLaMA 3B V2](https://huggingface.co/openlm-research/open_llama_3b_v2) finetuned on [EverythingLM Data (ShareGPT format, more cleaned)](https://huggingface.co/datasets/totally-not-an-llm/everything-sharegptformat-morecleaned) for 1 epoch.

Prompt template:
```
### HUMAN:
{prompt}

### RESPONSE:
```

GGML quants available [here](https://huggingface.co/TheBloke/Marx-3b-GGML).
GPTQ quants available [here](https://huggingface.co/TheBloke/Marx-3b-GPTQ).

Note: Don't expect this model to be good, I was just starting out with finetuning, so please don't roast me!

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_acrastt__Marx-3B).

| Metric              | Value |
|---------------------|-------|
| Avg.                | 36.5  |
| ARC (25-shot)       | 43.17 |
| HellaSwag (10-shot) | 72.68 |
| MMLU (5-shot)       | 28.46 |
| TruthfulQA (0-shot) | 39.09 |
| Winogrande (5-shot) | 65.59 |
| GSM8K (5-shot)      | 1.29  |
| DROP (3-shot)       | 5.22  |
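Below is a minimal usage sketch with 🤗 Transformers that applies the prompt template above. The repo ID `acrastt/Marx-3B` is an assumption inferred from the leaderboard details link, and the exact whitespace in the template is a best guess from this card.

```python
# Minimal usage sketch; repo ID and template whitespace are assumptions, not verified.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "acrastt/Marx-3B"  # assumed repo ID (from the leaderboard details link)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Format the user message with the prompt template from this card.
prompt = "### HUMAN:\nWhat is the capital of France?\n\n### RESPONSE:\n"

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```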