---
license: apache-2.0
datasets:
- BelleGroup/train_2M_CN
- BelleGroup/train_3.5M_CN
- BelleGroup/train_1M_CN
- BelleGroup/train_0.5M_CN
- BelleGroup/school_math_0.25M
language:
- zh
---

## GoGPT

A Chinese instruction-following model built on the BLOOM base model and fine-tuned on diverse Chinese instruction data.

![img.png](resources/img.png)

> One epoch of training is sufficient; a second and third epoch bring little further improvement.

- 🚀 Diverse instruction data
- 🚀 Filtered, high-quality Chinese data

| Model | Parameters | Link |
|------------|--------|------|
| gogpt-560m | 560M | 🤗[golaxy/gogpt-560m](https://huggingface.co/golaxy/gogpt-560m) |
| gogpt-3b | 3B | 🤗[golaxy/gogpt-3b-bloom](https://huggingface.co/golaxy/gogpt-3b-bloom) |
| gogpt-7b | 7B | 🤗[golaxy/gogpt-7b-bloom](https://huggingface.co/golaxy/gogpt-7b-bloom) |

## Test Results

![img.png](resources/test1.png)
![img.png](resources/test2.png)
![img.png](resources/test3.png)
![img.png](resources/test4.png)
![img.png](resources/test5.png)
![img.png](resources/test6.png)

## TODO

- Run RLHF training
- Add Chinese-English parallel corpora

## Acknowledgements

- [@hz-zero_nlp](https://github.com/yuanzhoulvpi2017/zero_nlp)
- [stanford_alpaca](https://github.com/tatsu-lab/stanford_alpaca)
- [Belle data](https://huggingface.co/BelleGroup)

## Citation

If you use GoGPT in your research, please cite it as follows:

```
@misc{GoGPT,
  title={GoGPT: Training Medical GPT Model},
  author={Qiang Yan},
  year={2023},
  howpublished={\url{https://github.com/yanqiangmiffy/GoGPT}},
}
```

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_golaxy__gogpt-3b-bloom).

| Metric | Value |
|-----------------------|---------------------------|
| Avg. | 30.6 |
| ARC (25-shot) | 31.91 |
| HellaSwag (10-shot) | 50.32 |
| MMLU (5-shot) | 25.2 |
| TruthfulQA (0-shot) | 41.79 |
| Winogrande (5-shot) | 54.38 |
| GSM8K (5-shot) | 0.15 |
| DROP (3-shot) | 10.48 |
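The checkpoints listed in the model table above can be loaded with the standard Hugging Face `transformers` API. This is a minimal sketch, not the authors' official inference code: the Alpaca-style prompt template in `build_prompt` is an assumption (inferred from the stanford_alpaca acknowledgement above), and the generation parameters are illustrative defaults. Verify the exact prompt format against the GoGPT repository before use.

```python
def build_prompt(instruction: str) -> str:
    # ASSUMPTION: Alpaca-style template; the actual template used during
    # fine-tuning may differ. Check the GoGPT repository to confirm.
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )


def chat(instruction: str, model_name: str = "golaxy/gogpt-560m") -> str:
    # transformers is imported lazily so the prompt helper above has no
    # heavy dependency; loading downloads the checkpoint from the Hub.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)

    inputs = tokenizer(build_prompt(instruction), return_tensors="pt")
    # max_new_tokens is an illustrative choice, not a recommended setting.
    outputs = model.generate(**inputs, max_new_tokens=128)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)


if __name__ == "__main__":
    print(chat("介绍一下北京的著名景点"))
```

The same call works for the 3B and 7B checkpoints by passing `model_name="golaxy/gogpt-3b-bloom"` or `model_name="golaxy/gogpt-7b-bloom"`; only memory requirements change.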