Commit c46a050 by TheBloke
1 Parent(s): af36049

Update README.md

Files changed (1):
  1. README.md (+143 −1)
README.md CHANGED
@@ -1,3 +1,145 @@
  ---
- license: other
  ---
  ---
+ license: cc
+ language:
+ - en
+ library_name: transformers
+ pipeline_tag: text-generation
+ tags:
+ - medical
  ---
+
+ # medalpaca-13B GPTQ 4bit
+
+ This is a [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa) 4bit quantisation of [medalpaca-13b](https://huggingface.co/medalpaca/medalpaca-13b).
+
+ ## GIBBERISH OUTPUT IN `text-generation-webui`?
+
+ Please read the Provided files section below. You should use `medalpaca-13B-GPTQ-4bit-128g.no-act-order.safetensors` unless you are able to update GPTQ-for-LLaMa.
+
+ ## Provided files
+
+ Two files are provided.
+
+ **The second file will not work unless you use recent GPTQ-for-LLaMa code**
+
+ Specifically, the file that uses act-order will not work with oobabooga's fork of GPTQ-for-LLaMa, and therefore it will not work with the `text-generation-webui` one-click installers.
+
+ Unless you are able to use the latest GPTQ-for-LLaMa code, please use `medalpaca-13B-GPTQ-4bit-128g.no-act-order.safetensors`.
+
+ * `medalpaca-13B-GPTQ-4bit-128g.no-act-order.safetensors`
+   * Created with the latest GPTQ-for-LLaMa code
+   * Parameters: Groupsize = 128g. No act-order.
+   * Command: `CUDA_VISIBLE_DEVICES=0 python3 llama.py medalpaca-13b c4 --wbits 4 --true-sequential --groupsize 128 --save_safetensors medalpaca-13B-GPTQ-4bit-128g.no-act-order.safetensors`
+ * `medalpaca-13B-GPTQ-4bit-128g.safetensors`
+   * Created with the latest GPTQ-for-LLaMa code
+   * Parameters: Groupsize = 128g. act-order.
+   * Offers the highest quality quantisation, but requires recent GPTQ-for-LLaMa code
+   * Command: `CUDA_VISIBLE_DEVICES=0 python3 llama.py medalpaca-13b c4 --wbits 4 --true-sequential --act-order --groupsize 128 --save_safetensors medalpaca-13B-GPTQ-4bit-128g.safetensors`
+
+ ## How to run in `text-generation-webui`
+
+ File `medalpaca-13B-GPTQ-4bit-128g.no-act-order.safetensors` can be loaded the same as any other GPTQ file, without requiring any updates to [oobabooga's text-generation-webui](https://github.com/oobabooga/text-generation-webui).
+
+ [Instructions on using GPTQ 4bit files in text-generation-webui are here](https://github.com/oobabooga/text-generation-webui/wiki/GPTQ-models-\(4-bit-mode\)).
+
+ The other `safetensors` model file was created with the latest GPTQ code, and uses `--act-order` to give the maximum possible quantisation quality, but this means it requires the latest GPTQ-for-LLaMa code to be used inside the UI.
+
+ If you want to use the `safetensors` file and need to update GPTQ-for-LLaMa, here are the commands I used to clone the Triton branch of GPTQ-for-LLaMa, clone text-generation-webui, and install GPTQ into the UI:
+ ```
+ # We need to clone GPTQ-for-LLaMa as of April 13th, due to breaking changes in more recent commits
+ git clone -n https://github.com/qwopqwop200/GPTQ-for-LLaMa gptq-safe
+ cd gptq-safe && git checkout 58c8ab4c7aaccc50f507fd08cce941976affe5e0
+ cd ..
+
+ # Now clone text-generation-webui, if you don't already have it
+ git clone https://github.com/oobabooga/text-generation-webui
+ # And link GPTQ-for-LLaMa into text-generation-webui
+ # (use an absolute target path so the symlink does not end up dangling)
+ mkdir -p text-generation-webui/repositories
+ ln -s "$(pwd)/gptq-safe" text-generation-webui/repositories/GPTQ-for-LLaMa
+ ```
+
+ Then install this model into `text-generation-webui/models` and launch the UI as follows:
+ ```
+ cd text-generation-webui
+ python server.py --model medalpaca-13B-GPTQ-4bit-128g --wbits 4 --groupsize 128 --model_type Llama # add any other command line args you want
+ ```
+
+ The above commands assume you have installed all dependencies for GPTQ-for-LLaMa and text-generation-webui. Please see their respective repositories for further information.
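+
+ Optionally, before launching the UI you can check that the symlink created above actually resolves to a GPTQ-for-LLaMa checkout. This is just a convenience sketch; the paths assume the directory layout from the commands above:
+ ```python
+ from pathlib import Path
+
+ # Resolve the symlink and confirm the GPTQ-for-LLaMa quantisation script is present.
+ repo = Path("text-generation-webui/repositories/GPTQ-for-LLaMa")
+ print(repo.resolve(), "OK" if (repo / "llama.py").exists() else "MISSING llama.py")
+ ```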
+
+ If you are on Windows, or cannot use the Triton branch of GPTQ for any other reason, you can try the CUDA branch instead:
+ ```
+ git clone https://github.com/qwopqwop200/GPTQ-for-LLaMa -b cuda
+ cd GPTQ-for-LLaMa
+ python setup_cuda.py install
+ ```
+ Then link that into `text-generation-webui/repositories` as described above.
+
+ However, I have heard reports that the CUDA code may run quite slowly.
+
+ Or just use `medalpaca-13B-GPTQ-4bit-128g.no-act-order.safetensors` as mentioned above, which should work without any upgrades to text-generation-webui.
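+
+ If you would rather fetch the model files from Python than download them manually, here is a minimal sketch using `huggingface_hub`. The repo id and the target folder name are assumptions; adjust them to wherever these `.safetensors` files are actually hosted and to the model directory name you want to appear in the UI:
+ ```python
+ from huggingface_hub import snapshot_download
+
+ # Assumed repo id for this quantisation; change it if the files live elsewhere.
+ snapshot_download(
+     repo_id="TheBloke/medalpaca-13B-GPTQ-4bit",
+     local_dir="text-generation-webui/models/medalpaca-13B-GPTQ-4bit-128g",
+ )
+ ```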
+
+ # Original model card: MedAlpaca 13b
+
+ ## Table of Contents
+
+ - [Model Description](#model-description)
+   - [Architecture](#architecture)
+   - [Training Data](#training-data)
+ - [Model Usage](#model-usage)
+ - [Limitations](#limitations)
+
+ ## Model Description
+ ### Architecture
+ `medalpaca-13b` is a large language model specifically fine-tuned for medical domain tasks.
+ It is based on LLaMA (Large Language Model Meta AI) and contains 13 billion parameters.
+ The primary goal of this model is to improve question-answering and medical dialogue tasks.
+
+ ### Training Data
+ The training data for this project was sourced from various resources.
+ Firstly, we used Anki flashcards to automatically generate questions
+ from the front of the cards and answers from the back of the cards.
+ Secondly, we generated medical question-answer pairs from [Wikidoc](https://www.wikidoc.org/index.php/Main_Page).
+ We extracted paragraphs with relevant headings, and used ChatGPT 3.5
+ to generate questions from the headings, using the corresponding paragraphs
+ as answers. This dataset is still under development, and we believe
+ that approximately 70% of these question-answer pairs are factually correct.
+ Thirdly, we used StackExchange to extract question-answer pairs, taking the
+ top-rated questions from five categories: Academia, Bioinformatics, Biology,
+ Fitness, and Health. Additionally, we used a dataset from [ChatDoctor](https://arxiv.org/abs/2303.14070)
+ consisting of 200,000 question-answer pairs, available at https://github.com/Kent0n-Li/ChatDoctor.
+
+ | Source                       | n items |
+ |------------------------------|---------|
+ | ChatDoc large                | 200000  |
+ | wikidoc                      | 67704   |
+ | Stackexchange academia       | 40865   |
+ | Anki flashcards              | 33955   |
+ | Stackexchange biology        | 27887   |
+ | Stackexchange fitness        | 9833    |
+ | Stackexchange health         | 7721    |
+ | Wikidoc patient information  | 5942    |
+ | Stackexchange bioinformatics | 5407    |
+
+ ## Model Usage
+ To evaluate the performance of the model on a specific dataset, you can use the Hugging Face Transformers library's built-in evaluation scripts. Please refer to the evaluation guide for more information.
+
+ ### Inference
+
+ You can use the model for inference tasks like question-answering and medical dialogues using the Hugging Face Transformers library. Here's an example of how to use the model for a question-answering task:
+
+ ```python
+ from transformers import pipeline
+
+ # Load the model and tokenizer into a question-answering pipeline
+ qa_pipeline = pipeline("question-answering", model="medalpaca/medalpaca-13b", tokenizer="medalpaca/medalpaca-13b")
+ question = "What are the symptoms of diabetes?"
+ context = "Diabetes is a metabolic disease that causes high blood sugar. The symptoms include increased thirst, frequent urination, and unexplained weight loss."
+ answer = qa_pipeline({"question": question, "context": context})
+ print(answer)
+ ```
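+
+ Because the card is tagged `text-generation`, you can also prompt the model as a plain causal language model. The snippet below is a minimal sketch; the prompt layout shown is an assumption rather than an official template:
+ ```python
+ from transformers import pipeline
+
+ # Treat the model as a causal LM and prompt it directly.
+ generator = pipeline("text-generation", model="medalpaca/medalpaca-13b", tokenizer="medalpaca/medalpaca-13b")
+ prompt = (
+     "Context: Diabetes is a metabolic disease that causes high blood sugar.\n"
+     "Question: What are the symptoms of diabetes?\n"
+     "Answer:"
+ )
+ print(generator(prompt, max_new_tokens=128, do_sample=False)[0]["generated_text"])
+ ```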
+
+ ## Limitations
+ The model may not perform effectively outside the scope of the medical domain.
+ The training data primarily targets the knowledge level of medical students,
+ which may result in limitations when addressing the needs of board-certified physicians.
+ The model has not been tested in real-world applications, so its efficacy and accuracy are currently unknown.
+ It should never be used as a substitute for a doctor's opinion and must be treated as a research tool only.