CoLLaMA: A Multi-task Instruction Dataset and Large Language Model for Code
Model details
Trained in June 2023.
CoLLaMA comprises a fine-tuned code LLM and a multi-task instruction-tuning dataset of 77K samples spanning 8 diverse tasks.
Please refer to the README of the GitHub repository for detailed information.
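A minimal usage sketch follows, assuming the checkpoint is published as a standard Hugging Face causal LM. The repo ID and the Alpaca-style prompt template below are placeholders, not confirmed by the repository; check the GitHub README for the exact values.

```python
# Hedged usage sketch: model ID and prompt template are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "CoLLaMA/CoLLaMA-7B"  # hypothetical repo ID; see the GitHub README

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Alpaca-style instruction prompt (assumed format, not confirmed).
prompt = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n\n"
    "### Instruction:\nWrite a Python function that reverses a string.\n\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```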
Training dataset
The model was trained on a 77K-example instruction-following dataset, which is released in the GitHub repository.
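A sketch of reading the released instruction data, assuming it ships as a JSON file with Alpaca-style fields (instruction/input/output); the file name and field names here are assumptions, not confirmed by the repository.

```python
# Hedged sketch: file name and field names are assumptions.
from datasets import load_dataset

dataset = load_dataset("json", data_files="CoLLaMA_77k.json", split="train")  # hypothetical file name

example = dataset[0]
print(example["instruction"])  # natural-language task description (assumed field)
print(example["output"])       # reference answer (assumed field)
```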
Citation
Affiliations: ¹School of Information Science & Engineering, Yunnan University; ²ChanceFocus AMC; ³School of Computer Science, Wuhan University
```bibtex
@misc{Hu2023CoLLaMA,
      title={CoLLaMA: A Multi-task Instruction Dataset and Large Language Model for Code},
      author={Gang Hu and Xi Wen and Xin Liu and Jimin Huang and Qianqian Xie},
      year={2023},
}
```