---
library_name: transformers
license: apache-2.0
datasets:
- m-a-p/COIG-CQIA
language:
- zh
pipeline_tag: text-generation
inference: false
---

# Qwen1.5-4B-Chat QLoRA Adapter (COIG-CQIA/ruozhiba)

This model was fine-tuned from the Qwen1.5-4B-Chat base model on the ruozhiba subset of the COIG-CQIA dataset using QLoRA.

Note that this repository contains only the LoRA adapter weights, so you should merge the adapter with the base model before running inference. See the official PEFT guide [here](https://huggingface.co/docs/peft/main/en/developer_guides/lora#merge-adapters).
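
A minimal sketch of that merge step is shown below, assuming the adapter id on the Hub (`your-username/qwen1.5-4b-chat-ruozhiba-qlora` is a placeholder, not this repository's actual name):

```python
# Sketch: merge the LoRA adapter into the base model for inference.
# The adapter id below is a hypothetical placeholder.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Qwen/Qwen1.5-4B-Chat"
adapter_id = "your-username/qwen1.5-4b-chat-ruozhiba-qlora"  # placeholder

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype="auto", device_map="auto"
)

# Attach the adapter, then fold its weights into the base model
model = PeftModel.from_pretrained(base, adapter_id)
model = model.merge_and_unload()

# Optionally persist the merged model for standalone use
model.save_pretrained("qwen1.5-4b-chat-merged")
tokenizer.save_pretrained("qwen1.5-4b-chat-merged")
```

After `merge_and_unload()`, the result is a plain `transformers` model that no longer depends on `peft` at inference time.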

The entire training run was performed on Google Colab using free compute; details are available in this [notebook](https://colab.research.google.com/drive/1GiI8drsinxhFdprWbqlXtN0DqbHHs1fe?hl=en#scrollTo=5o3OgCMdRGgp).

This project is for demonstration purposes only, as part of course DSAA5009 at HKUST (Guangzhou).