πŸ€– BlenderLLM: Training Large Language Models for Computer-Aided Design with Self-improvement

BlenderLLM is built on Qwen2.5-Coder-7B-Instruct as the base model. It was fine-tuned on the BlendNet training dataset and then further optimized with self-improvement techniques to boost performance.

For more details, please visit our GitHub repository or refer to our arXiv paper.
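As a minimal sketch of how the model might be used, the snippet below loads the checkpoint with Hugging Face `transformers` and asks it for a Blender script. The model ID matches this card; the prompt wording and generation parameters are illustrative assumptions, not the official inference recipe (see the GitHub repository for that).

```python
# Hedged sketch: querying BlenderLLM with Hugging Face transformers.
# The prompt format below is an assumption for illustration.

def build_prompt(instruction: str) -> str:
    """Wrap a natural-language CAD instruction for the model (assumed format)."""
    return f"Instruction: {instruction}\nPlease generate a Blender bpy script."

if __name__ == "__main__":
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "FreedomIntelligence/BlenderLLM"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id, torch_dtype="auto", device_map="auto"
    )

    prompt = build_prompt("Create a cube with a cylinder on top.")
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    outputs = model.generate(**inputs, max_new_tokens=512)
    # Decode only the newly generated tokens (the bpy script).
    print(tokenizer.decode(
        outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    ))
```

The generated output is intended to be a `bpy` script that can be run inside Blender to produce the described object.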

πŸ“– Citation

```bibtex
@misc{du2024blenderllmtraininglargelanguage,
      title={BlenderLLM: Training Large Language Models for Computer-Aided Design with Self-improvement},
      author={Yuhao Du and Shunian Chen and Wenbo Zan and Peizhao Li and Mingxuan Wang and Dingjie Song and Bo Li and Yan Hu and Benyou Wang},
      year={2024},
      eprint={2412.14203},
      archivePrefix={arXiv},
      primaryClass={cs.HC},
      url={https://arxiv.org/abs/2412.14203},
}
```

We are from the School of Data Science (SDS) at the Chinese University of Hong Kong, Shenzhen (CUHKSZ).

πŸ“Š Model Details

- Model size: 7.62B params (Safetensors)
- Tensor type: BF16
- Base model: Qwen/Qwen2.5-7B
- Training dataset: BlendNet