munish0838 committed
Commit 3f68638 • Parent(s): 682887e
Upload README.md with huggingface_hub

README.md
ADDED
@@ -0,0 +1,36 @@
---
license: afl-3.0
tags:
- medical
---

![](https://lh7-rt.googleusercontent.com/docsz/AD_4nXeiuCm7c8lEwEJuRey9kiVZsRn2W-b4pWlu3-X534V3YmVuVc2ZL-NXg2RkzSOOS2JXGHutDuyyNAUtdJI65jGTo8jT9Y99tMi4H4MqL44Uc5QKG77B0d6-JfIkZHFaUA71-RtjyYZWVIhqsNZcx8-OMaA?key=xt3VSDoCbmTY7o-cwwOFwQ)

# QuantFactory/ClinicalGPT-base-zh-GGUF

This is a quantized version of [medicalai/ClinicalGPT-base-zh](https://huggingface.co/medicalai/ClinicalGPT-base-zh) created using llama.cpp.
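The GGUF files in this repository are meant to be run with llama.cpp-compatible tooling. As a minimal sketch (not part of the original card), the snippet below loads one of the quantized files with the `llama-cpp-python` bindings; the exact `.gguf` filename and the prompt are assumptions.

```python
# Minimal sketch, assuming llama-cpp-python is installed and that the repo
# contains a Q4_K_M quantization; the filename below is hypothetical.
from llama_cpp import Llama

llm = Llama(model_path="ClinicalGPT-base-zh.Q4_K_M.gguf")
# Example prompt: "The patient reports a headache for three days; please suggest initial questions."
output = llm("患者主诉头痛三天，请给出初步问诊建议。", max_tokens=128)
print(output["choices"][0]["text"])
```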

# Original Model Card

# ClinicalGPT

This model card introduces ClinicalGPT, a large language model designed and optimized for clinical scenarios. ClinicalGPT is fine-tuned on extensive and diverse medical datasets, including medical records, domain-specific knowledge, and multi-round dialogue consultations. The model is being continuously updated.

## Model Fine-tuning

We set the learning rate to 5e-5, with a batch size of 128 and a maximum sequence length of 1,024, training for 3 epochs.
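As an illustration only (not the authors' released training code), these hyperparameters could be expressed with Hugging Face `TrainingArguments` roughly as follows; the output directory, the per-device batch size / gradient-accumulation split, and the omitted data pipeline are all assumptions.

```python
# Illustrative sketch of the reported hyperparameters, not the original training script.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="clinicalgpt-finetune",   # hypothetical output directory
    learning_rate=5e-5,
    per_device_train_batch_size=8,       # assumed split: 8 per device x 16 accumulation = 128 effective
    gradient_accumulation_steps=16,
    num_train_epochs=3,
)
# The 1,024-token limit would be applied at tokenization time, e.g.
# tokenizer(texts, truncation=True, max_length=1024)
```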

## How to use the model

Load the model via the transformers library:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("medicalai/ClinicalGPT-base-zh")
model = AutoModelForCausalLM.from_pretrained("medicalai/ClinicalGPT-base-zh")
```
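A short generation example may also help; the following sketch reuses the tokenizer and model loaded above, with an assumed prompt and assumed sampling settings rather than anything specified by the original card.

```python
# Usage sketch (prompt and generation settings are assumptions, not from the card).
# Example prompt: "Two weeks of cough with low-grade fever; please suggest further work-up."
prompt = "患者咳嗽两周，伴低热，请给出进一步检查建议。"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```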

## Limitations

This project is intended for research purposes only and is restricted from commercial or clinical use. Content generated by the model is affected by factors such as model computation, randomness, misinterpretation, and bias, and this project cannot guarantee its accuracy. This project assumes no legal liability for any content produced by the model. Users are advised to exercise caution and independently verify the generated results.