---
license: apache-2.0
datasets:
- shibing624/sharegpt_gpt4
language:
- zh
- en
- ko
- ja
pipeline_tag: text-generation
---

# Model Card for Model ID

This model is Pythia-1.4B-deduped fine-tuned on the ShareGPT dataset (shibing624/sharegpt_gpt4).

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_HWERI__pythia-1.4b-deduped-sharegpt).

| Metric              | Value |
|---------------------|-------|
| Avg.                | 30.79 |
| ARC (25-shot)       | 34.3  |
| HellaSwag (10-shot) | 54.49 |
| MMLU (5-shot)       | 24.0  |
| TruthfulQA (0-shot) | 41.81 |
| Winogrande (5-shot) | 55.25 |
| GSM8K (5-shot)      | 0.83  |
| DROP (3-shot)       | 4.88  |
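
# Usage

The sketch below shows one way to load and prompt the model with the standard Hugging Face `transformers` causal-LM API. The repository id `HWERI/pythia-1.4b-deduped-sharegpt` is inferred from the leaderboard link above, and the "### Human / ### Assistant" prompt format is an assumption, not a documented template for this checkpoint.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Repo id inferred from the Open LLM Leaderboard details link; adjust if needed.
model_id = "HWERI/pythia-1.4b-deduped-sharegpt"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Prompt template is an assumption; ShareGPT-style fine-tunes often use a
# Human/Assistant turn format, but this checkpoint may expect something else.
prompt = "### Human: What is the capital of France?\n### Assistant:"

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=128,
    do_sample=True,
    temperature=0.7,
)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```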