
🔎 Taiwan-inquiry_7B_v1.0

Taiwan-inquiry_7B_v1.0 was fine-tuned from the Taiwan-LLM-7B-v2.1-chat model on a dataset of 614 authentic dialogues collected at National Cheng Kung University Hospital, supplemented with 101 synthetic dialogues generated with GPT-3.5 and Gemini-Pro. The dialogue topics are drawn from sample questions for Taiwan's OSCE (臨床技能測驗, Objective Structured Clinical Examination). Fine-tuning aimed to improve the model's ability to understand and generate responses in clinical skills assessment scenarios.

Model Description

  • Developed by: Joseph (Chen-Wei) Li, research assistant at National Taiwan University Hospital.
  • Model type: A 7B parameter GPT-like model fine-tuned on a combination of private and synthetic dialogue datasets.
  • Language(s) (NLP): Traditional Chinese (zh-tw)
  • Finetuned from model: yentinglin/Taiwan-LLM-7B-v2.1-chat

Usage of the model

  • You can take on the role of a doctor, and the model will converse with you as a simulated patient.
  • Provide a brief patient background in the system prompt, and the model will respond in character based on it.
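The usage above can be sketched in a few lines with the Transformers library. This is a minimal, illustrative example: the Vicuna-style `USER:`/`ASSISTANT:` prompt template is an assumption carried over from the Taiwan-LLM-7B-v2.1-chat base model, and the system prompt and question are invented for illustration; check the tokenizer's chat template before relying on this format.

```python
# Sketch of single-turn inference with the fine-tuned model.
# Assumption: the Vicuna-style prompt template of the Taiwan-LLM v2 base
# model; verify against the tokenizer's chat template before use.

MODEL_ID = "ChenWeiLi/Taiwan-inquiry_7B_v1.0"

def build_prompt(system: str, user: str) -> str:
    """Prepend the patient background (system prompt) to the doctor's turn."""
    return f"{system} USER: {user} ASSISTANT:"

def generate_reply(system: str, user: str, max_new_tokens: int = 256) -> str:
    """Load the model and generate the simulated patient's reply.

    Requires `transformers` and `torch`; loading a 7B model in BF16
    needs roughly 14 GB of accelerator memory.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype="auto", device_map="auto"
    )
    inputs = tokenizer(build_prompt(system, user), return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Return only the newly generated tokens (the patient's answer).
    return tokenizer.decode(
        output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
    )

# Example call (hypothetical prompts; not executed here):
# reply = generate_reply(
#     "你是一位50歲的男性病人,抱怨視線模糊和眼睛乾澀。",  # patient background (system)
#     "您好,請問您今天哪裡不舒服?",                      # doctor's opening question
# )
```

The system prompt sets the patient's persona, so each new scenario only requires swapping in a different background description.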

DEMO

  • prompt: 一名50歲先生抱怨視線模糊和眼睛乾澀,請進行眼科檢查。(A 50-year-old man complains of blurred vision and dry eyes; please perform an ophthalmic examination.)


Model details

  • Model size: 6.74B parameters
  • Tensor type: BF16 (Safetensors)
