Datasets

Qwen1.5-7B-Chat was fine-tuned on the 8 datasets that make up the 8DataSets collection (described below) and then evaluated. The results show that the fine-tuned model scores higher than the original model on both C-Eval and MMLU.

Base model: Qwen1.5-7B-Chat

Training tool:

https://github.com/hiyouga/LLaMA-Factory
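
The training itself was run with LLaMA-Factory's LoRA SFT recipe. As a rough illustration of what that setup amounts to, here is a minimal transformers + peft sketch; the hyperparameters, LoRA target modules, data file name, and record schema are assumptions for illustration, not the exact LLaMA-Factory settings used for this checkpoint.

```python
# Minimal LoRA SFT sketch with transformers + peft (illustrative only; the
# actual training used LLaMA-Factory, and these settings are assumptions).
import torch
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base_model = "Qwen/Qwen1.5-7B-Chat"
tokenizer = AutoTokenizer.from_pretrained(base_model)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(base_model, torch_dtype=torch.bfloat16)

# Attach LoRA adapters to the attention projections (target modules assumed).
lora_cfg = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05,
                      target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM")
model = get_peft_model(model, lora_cfg)

# "sft_data.json" stands in for the ~750 MB 8DataSets corpus; the
# instruction/output schema below is a placeholder, not the real one.
dataset = load_dataset("json", data_files="sft_data.json", split="train")

def tokenize(example):
    text = example["instruction"] + "\n" + example["output"] + tokenizer.eos_token
    return tokenizer(text, truncation=True, max_length=1024)

tokenized = dataset.map(tokenize, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="qwen1.5-7b-chat-750mb-lora",
        per_device_train_batch_size=2,
        gradient_accumulation_steps=8,
        learning_rate=1e-4,
        num_train_epochs=1,
        bf16=True,
    ),
    train_dataset=tokenized,
    # Causal-LM collator pads batches and builds labels from the input ids.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("qwen1.5-7b-chat-750mb-lora")  # saves only the adapter
```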

Evaluation method:

The fine-tuned model and the original model were evaluated on C-Eval and MMLU using OpenCompass (https://github.com/open-compass/OpenCompass/).
The evaluated models are listed below (an example evaluation config follows the list):

  • Qwen1.5-7B-Chat
  • Qwen1.5-7B-Chat-750Mb-lora: Qwen1.5-7B-Chat fine-tuned with LoRA (SFT) on the 8DataSets dataset
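
As a reference for the setup above, an OpenCompass run over C-Eval and MMLU is driven by a Python config along the following lines. The dataset config module paths, model keys, and values follow common OpenCompass conventions but are assumptions here; check the OpenCompass documentation for the current layout.

```python
# Sketch of an OpenCompass config (e.g. configs/eval_qwen15_lora.py), run with
# `python run.py configs/eval_qwen15_lora.py`. Paths and values are assumptions.
from mmengine.config import read_base
from opencompass.models import HuggingFaceCausalLM

with read_base():
    # Dataset definitions shipped with OpenCompass.
    from .datasets.ceval.ceval_gen import ceval_datasets
    from .datasets.mmlu.mmlu_gen import mmlu_datasets

datasets = [*ceval_datasets, *mmlu_datasets]

models = [
    dict(
        type=HuggingFaceCausalLM,
        abbr="qwen1.5-7b-chat",
        path="Qwen/Qwen1.5-7B-Chat",            # original model
        tokenizer_path="Qwen/Qwen1.5-7B-Chat",
        max_out_len=100,
        max_seq_len=2048,
        batch_size=8,
        run_cfg=dict(num_gpus=1),
    ),
    # A second entry pointing at the fine-tuned (merged) weights would be
    # added here to evaluate Qwen1.5-7B-Chat-750Mb-lora the same way.
]
```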

8DataSets dataset:

A fine-tuning dataset of roughly 750 MB.

Results

Model                          C-Eval   MMLU
Qwen1.5-7B-Chat                68.61    61.56
Qwen1.5-7B-Chat-750Mb-lora     71.36    61.78
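
If the released weights are a LoRA adapter rather than a merged model (an assumption based on the repository name), they can be applied on top of the base model with PEFT roughly as follows:

```python
# Sketch: apply the LoRA adapter to the base model and run one chat turn.
# Assumes the repository ships adapter weights; if it ships merged weights,
# load it directly with AutoModelForCausalLM instead.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen1.5-7B-Chat", torch_dtype=torch.bfloat16, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-7B-Chat")

model = PeftModel.from_pretrained(base, "REILX/Qwen1.5-7B-Chat-750Mb-lora")

messages = [{"role": "user", "content": "Briefly introduce yourself."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
output_ids = model.generate(input_ids, max_new_tokens=256)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```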

License

This project utilizes certain datasets and checkpoints that are subject to their respective original licenses. Users must comply with all terms and conditions of these original licenses. The content of this project itself is licensed under the Apache license 2.0.
