
Leading a New Chapter in Intelligent TCM: the "Five Phases Mindset" Intelligent TCM Consultation Large Model

In an era of rapidly advancing artificial intelligence, and in order to inherit and promote the essence of traditional Chinese medicine (TCM), the Tencent Yantai New Engineering Institute team, after in-depth research and in collaboration with TCM colleges, has developed "Five Phases Mindset", an intelligent TCM consultation large model based on Qwen. The model was fine-tuned on roughly one hundred thousand consultation records and aims to provide users with more accurate, personalized TCM consultation services.

The "Five Phases Mindset" intelligent TCM consultation large model, as the name suggests, integrates the TCM Five Phases theory into artificial intelligence technology. Through deep learning, natural language processing, and other technologies, it achieves intelligent analysis, diagnosis, and recommendation of treatment plans for patients' symptoms. Compared to traditional manual consultation methods, the intelligent TCM consultation large model has the following advantages:

1. Efficient and Convenient: Patients only need to enter their symptoms on a phone or computer, and the system quickly returns diagnostic results and a treatment plan, saving the time patients would otherwise spend queuing and waiting.

2. Accurate and Personalized: Built on big data and deep learning, the intelligent TCM consultation large model can give more accurate, personalized diagnoses and suggestions based on the patient's age, gender, medical history, and other information.

3. Round-the-Clock Service: The model can provide consultation services anytime and anywhere, without time or location constraints, meeting patients' needs in different scenarios.

4. Intelligent Recommendations: Alongside the diagnostic results, the model also recommends suitable TCM treatment plans based on the patient's actual situation, including Chinese herbal medicine, acupuncture, and tuina massage, helping patients recover better.

5. Continuous Learning: The model has the ability to learn continuously; as data accumulates and the technology iterates, its diagnostic accuracy and treatment effectiveness will keep improving.

The release of the "Five Phases Mindset" intelligent TCM consultation large model marks an important step in the intelligent development of TCM in China. We believe that in the near future this model will bring patients a more convenient and efficient TCM consultation experience and contribute to the development of the TCM industry. We also look forward to working with more colleagues across the industry to advance the intelligentization of TCM and make a greater contribution to human health.

How to Use

The model family includes 1.8B, 7B, and 14B parameter versions. The version open-sourced here is the 1.8B model, quantized to int4 with GPTQ so that it can run on consumer-grade graphics cards.
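As a rough sanity check (not part of the official instructions), the sketch below prints how much free VRAM your GPU has before you load the int4 checkpoint; it only assumes PyTorch with CUDA support is installed.

import torch

if torch.cuda.is_available():
    # torch.cuda.mem_get_info() returns (free_bytes, total_bytes) for the current device
    free_bytes, total_bytes = torch.cuda.mem_get_info()
    print(f"GPU: {torch.cuda.get_device_name(0)}")
    print(f"Free VRAM: {free_bytes / 1024**3:.1f} GiB of {total_bytes / 1024**3:.1f} GiB")
else:
    print("No CUDA device found; the int4 model is intended for GPU inference.")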

For convenience, we recommend cloning the original Qwen repository and installing its dependencies:

git clone https://github.com/QwenLM/Qwen.git
cd Qwen
pip install -r requirements.txt

Next, please make sure AutoGPTQ is installed:

pip install auto-gptq optimum
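Optionally, you can verify from Python that both packages are visible in your environment; this small check uses only the standard library and the pip distribution names above.

from importlib.metadata import version

# Print the installed versions of the quantization dependencies.
print("auto-gptq:", version("auto-gptq"))
print("optimum:", version("optimum"))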

Additionally, if your hardware supports it, you can install flash-attention to improve inference efficiency:

git clone https://github.com/Dao-AILab/flash-attention
pip uninstall -y ninja && pip install ninja
cd flash-attention && pip install .
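If you want to request flash-attention explicitly, the sketch below does so through the model config at load time. It assumes the custom Qwen modeling code shipped with this checkpoint honours a use_flash_attn switch, as the upstream Qwen models do; by default that switch is "auto" and flash-attention is picked up automatically once the package is installed.

from transformers import AutoConfig, AutoModelForCausalLM

config = AutoConfig.from_pretrained("cookey39/Five_Phases_Mindset", trust_remote_code=True)
config.use_flash_attn = True  # assumption: the Qwen-style config exposes this flag
model = AutoModelForCausalLM.from_pretrained(
    "cookey39/Five_Phases_Mindset",
    config=config,
    device_map="auto",
    trust_remote_code=True,
).eval()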

You can then run inference:

from transformers import AutoModelForCausalLM, AutoTokenizer
from transformers.generation import GenerationConfig


tokenizer = AutoTokenizer.from_pretrained("cookey39/Five_Phases_Mindset", trust_remote_code=True)

model = AutoModelForCausalLM.from_pretrained("cookey39/Five_Phases_Mindset", device_map="auto", trust_remote_code=True).eval()

# You can specify different hyperparameters such as generation length and top_p for the model.
model.generation_config = GenerationConfig.from_pretrained("cookey39/Five_Phases_Mindset", trust_remote_code=True)

# Set the temperature parameter to control the diversity of the generated text.
model.generation_config.temperature = 0.6

# Prompts should follow the consultation format: 你是一位经验丰富中医医生,会根据患者的症状给出诊断和药方/n症状:
# (roughly: "You are an experienced TCM doctor who gives a diagnosis and a prescription based on the patient's symptoms/n Symptoms:")
# For example:
response, _ = model.chat(tokenizer, "你是一位经验丰富中医医生,会根据患者的症状给出诊断和药方/n症状:感冒发热流鼻涕", history=None)
print(response)
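For repeated consultations it may be convenient to wrap the required prompt format in a small helper. The consult function below is our own illustration, not part of the released code; it simply prepends the format shown above to the patient's symptoms and reuses the loaded model and tokenizer.

def consult(symptoms, history=None):
    # Build the consultation prompt in the exact format the model was fine-tuned on.
    prompt = "你是一位经验丰富中医医生,会根据患者的症状给出诊断和药方/n症状:" + symptoms
    response, history = model.chat(tokenizer, prompt, history=history)
    return response, history

reply, _ = consult("感冒发热流鼻涕")  # same example symptoms as above
print(reply)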

Deployment

For deployment, please refer to the official QWEN documentation. We recommend serving the model with NVIDIA Triton together with vLLM, which can deliver more than a thirtyfold speedup in inference.
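As a minimal offline example, the sketch below runs the model through vLLM's Python API; it assumes your vLLM build supports GPTQ-quantized Qwen checkpoints with custom code and reuses the consultation prompt from above. For the Triton-based serving setup, please follow the official QWEN and vLLM documentation.

from vllm import LLM, SamplingParams

llm = LLM(
    model="cookey39/Five_Phases_Mindset",
    quantization="gptq",       # the released checkpoint is int4 GPTQ
    trust_remote_code=True,
)
params = SamplingParams(temperature=0.6, max_tokens=512)
prompt = "你是一位经验丰富中医医生,会根据患者的症状给出诊断和药方/n症状:感冒发热流鼻涕"
outputs = llm.generate([prompt], params)
print(outputs[0].outputs[0].text)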

Let us look forward to the widespread application of the "Five Phases Mindset" intelligent TCM consultation large model across the field of TCM, contributing to the inheritance and promotion of China's traditional medical culture!
