---
license: apache-2.0
datasets:
  - frankminors123/chinese-shepherd-critic-dataset
language:
  - zh
pipeline_tag: question-answering
---

We trained a Chinese version of Shepherd based on Chinese-LLaMA-2-7B, using two 32 GB V100 GPUs for LoRA-based supervised fine-tuning.
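
For reference, a minimal sketch of how LoRA-based supervised fine-tuning can be set up with `peft` is shown below; the base model repo id, LoRA rank, alpha, and target modules are assumptions for illustration, not the exact settings used for this checkpoint.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base_id = "hfl/chinese-llama-2-7b"  # assumed Chinese-LLaMA-2-7B repo id

base_model = AutoModelForCausalLM.from_pretrained(base_id)

lora_config = LoraConfig(
    r=8,                                  # assumed LoRA rank
    lora_alpha=32,                        # assumed scaling factor
    target_modules=["q_proj", "v_proj"],  # assumed attention projections
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # only the LoRA adapters are trainable
```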

We designed an appropriate prompt template, and the dataset we used has been published in the Hugging Face repository frankminors123/chinese-shepherd-critic-dataset; please see the dataset page for details.
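
A quick way to inspect the dataset is sketched below; the split and field names are assumptions, check the dataset page for the actual schema.

```python
from datasets import load_dataset

ds = load_dataset("frankminors123/chinese-shepherd-critic-dataset")
print(ds)              # lists the available splits and columns
print(ds["train"][0])  # assumes a "train" split; prints one example
```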

The prompt template used is as follows:

```python
PROMPT_TEMPLATE = (
    "请试着评论下面问题的答案.\n"
    "### 问题:\n{question}\n### 答案:\n{answer}\n### 评论:\n"
)
```
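
The template roughly translates to: "Please try to comment on the answer to the question below. ### Question: {question} ### Answer: {answer} ### Comment:". Below is a minimal inference sketch using this template with `transformers`; the model repo id, the example question/answer pair, and the generation settings are assumptions to be adapted to your setup.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

PROMPT_TEMPLATE = (
    "请试着评论下面问题的答案.\n"
    "### 问题:\n{question}\n### 答案:\n{answer}\n### 评论:\n"
)

model_id = "frankminors123/Chinese-Shepherd"  # assumed repo id for this model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Illustrative question/answer pair: the answer contains a factual error
# for the critic model to point out.
prompt = PROMPT_TEMPLATE.format(
    question="中国的首都是哪里?",
    answer="中国的首都是上海。",
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)
critique = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(critique)
```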