Teaching Assistant for Introduction to Computers

This teaching assistant model was fine-tuned with LLaMA-Factory (Zheng et al., 2024) on the LLaMA 2 7B architecture. The fine-tuning process used roughly 10 GB of open-source bilingual (Chinese and English) data collected from Hugging Face and CSDN, supplemented with specialized datasets covering introductory computer science topics to tailor the model for educational use. The result is an AI-powered assistant capable of supporting foundational computer science education.
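LLaMA-Factory reads supervised fine-tuning data in a simple instruction format (Alpaca-style JSON by default). The sketch below is illustrative only: the field names follow LLaMA-Factory's default `alpaca` format, but the example record and file name are hypothetical, not taken from the actual training data.

```python
import json

# Hypothetical example of one supervised fine-tuning record in the
# Alpaca-style format that LLaMA-Factory reads by default.
record = {
    "instruction": "Explain the difference between RAM and a hard drive.",
    "input": "",  # optional extra context; empty here
    "output": (
        "RAM is fast, volatile memory that holds the data the CPU is "
        "actively using; a hard drive is slower, persistent storage that "
        "keeps data after the computer is powered off."
    ),
}

# LLaMA-Factory loads datasets as JSON lists registered in its
# data/dataset_info.json file.
with open("intro_to_computers_sft.json", "w", encoding="utf-8") as f:
    json.dump([record], f, ensure_ascii=False, indent=2)
```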

Instructions for Testing This Model

To test the model, follow the guide in the LLaMA-Factory README: clone the repository from GitHub, install its dependencies, and launch the LLaMA Board GUI (powered by Gradio, started with `llamafactory-cli webui`) to load and chat with the model:

https://github.com/hiyouga/LLaMA-Factory/blob/main/README.md
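Alternatively, the model can be loaded directly with the Hugging Face `transformers` library. Below is a minimal sketch, assuming the repository hosts standard Hugging Face-format LLaMA 2 weights (the library type is not declared on the model page, so adjust as needed); the prompt is a made-up example:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "2imi9/Llama2_7B_TeachingAssistant_Introduction_to_Computers"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

# Ask an introductory computer-science question.
prompt = "What is an operating system? Explain it to a first-year student."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```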

Citation

```
@inproceedings{zheng2024llamafactory,
  title     = {LlamaFactory: Unified Efficient Fine-Tuning of 100+ Language Models},
  author    = {Yaowei Zheng and Richong Zhang and Junhao Zhang and Yanhan Ye and Zheyan Luo and Zhangchi Feng and Yongqiang Ma},
  booktitle = {Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 3: System Demonstrations)},
  address   = {Bangkok, Thailand},
  publisher = {Association for Computational Linguistics},
  year      = {2024},
  url       = {http://arxiv.org/abs/2403.13372}
}
```
