TheBloke committed
Commit bc22dca
1 Parent(s): bf16319

Updating model files

Files changed (1): README.md +43 -21
README.md CHANGED
@@ -8,6 +8,17 @@ tags:
 - medical
 inference: false
 ---
+<div style="width: 100%;">
+<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
+</div>
+<div style="display: flex; justify-content: space-between; width: 100%;">
+<div style="display: flex; flex-direction: column; align-items: flex-start;">
+<p><a href="https://discord.gg/UBgz4VXf">Chat & support: my new Discord server</a></p>
+</div>
+<div style="display: flex; flex-direction: column; align-items: flex-end;">
+<p><a href="https://www.patreon.com/TheBlokeAI">Want to contribute? Patreon coming soon!</a></p>
+</div>
+</div>
 
 # medalpaca-13B GPTQ 4bit
 
@@ -90,35 +101,46 @@ The above commands assume you have installed all dependencies for GPTQ-for-LLaMa
 
 If you can't update GPTQ-for-LLaMa to the latest Triton branch, or don't want to, you can use `medalpaca-13B-GPTQ-4bit-128g.no-act-order.safetensors` as mentioned above, which should work without any upgrades to text-generation-webui.
 
+## Want to support my work?
+
+I've had a lot of people ask if they can contribute. I love providing models and helping people, but it is starting to rack up pretty big cloud computing bills.
+
+So if you're able and willing to contribute, it'd be most gratefully received and will help me to keep providing models, and work on various AI projects.
+
+Donators will get priority support on any and all AI/LLM/model questions, and I'll gladly quantise any model you'd like to try.
+
+* Patreon: coming soon! (just awaiting approval)
+* Ko-Fi: https://ko-fi.com/TheBlokeAI
+* Discord: https://discord.gg/UBgz4VXf
 # Original model card: MedAlpaca 13b
 
 
 ## Table of Contents
 
-[Model Description](#model-description)
-- [Architecture](#architecture)
-- [Training Data](#trainig-data)
-[Model Usage](#model-usage)
-[Limitations](#limitations)
+[Model Description](#model-description)
+- [Architecture](#architecture)
+- [Training Data](#training-data)
+[Model Usage](#model-usage)
+[Limitations](#limitations)
 
 ## Model Description
 ### Architecture
-`medalpaca-13b` is a large language model specifically fine-tuned for medical domain tasks.
-It is based on LLaMA (Large Language Model Meta AI) and contains 13 billion parameters.
+`medalpaca-13b` is a large language model specifically fine-tuned for medical domain tasks.
+It is based on LLaMA (Large Language Model Meta AI) and contains 13 billion parameters.
 The primary goal of this model is to improve question-answering and medical dialogue tasks.
 
 ### Training Data
-The training data for this project was sourced from various resources.
-Firstly, we used Anki flashcards to automatically generate questions,
-from the front of the cards and anwers from the back of the card.
-Secondly, we generated medical question-answer pairs from [Wikidoc](https://www.wikidoc.org/index.php/Main_Page).
-We extracted paragraphs with relevant headings, and used Chat-GPT 3.5
-to generate questions from the headings and using the corresponding paragraphs
-as answers. This dataset is still under development and we believe
-that approximately 70% of these question answer pairs are factual correct.
-Thirdly, we used StackExchange to extract question-answer pairs, taking the
-top-rated question from five categories: Academia, Bioinformatics, Biology,
-Fitness, and Health. Additionally, we used a dataset from [ChatDoctor](https://arxiv.org/abs/2303.14070)
+The training data for this project was sourced from various resources.
+Firstly, we used Anki flashcards to automatically generate questions
+from the front of the cards and answers from the back of the cards.
+Secondly, we generated medical question-answer pairs from [Wikidoc](https://www.wikidoc.org/index.php/Main_Page).
+We extracted paragraphs with relevant headings, and used ChatGPT 3.5
+to generate questions from the headings, using the corresponding paragraphs
+as answers. This dataset is still under development, and we believe
+that approximately 70% of these question-answer pairs are factually correct.
+Thirdly, we used StackExchange to extract question-answer pairs, taking the
+top-rated questions from five categories: Academia, Bioinformatics, Biology,
+Fitness, and Health. Additionally, we used a dataset from [ChatDoctor](https://arxiv.org/abs/2303.14070)
 consisting of 200,000 question-answer pairs, available at https://github.com/Kent0n-Li/ChatDoctor.
 
 | Source | n items |
@@ -152,7 +174,7 @@ print(answer)
 
 ## Limitations
 The model may not perform effectively outside the scope of the medical domain.
-The training data primarily targets the knowledge level of medical students,
+The training data primarily targets the knowledge level of medical students,
 which may result in limitations when addressing the needs of board-certified physicians.
-The model has not been tested in real-world applications, so its efficacy and accuracy are currently unknown.
-It should never be used as a substitute for a doctor's opinion and must be treated as a research tool only.
+The model has not been tested in real-world applications, so its efficacy and accuracy are currently unknown.
+It should never be used as a substitute for a doctor's opinion and must be treated as a research tool only.
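
The first hunk's context mentions `medalpaca-13B-GPTQ-4bit-128g.no-act-order.safetensors` as the fallback for users who can't run the Triton branch of GPTQ-for-LLaMa. Before pointing text-generation-webui at a downloaded copy, the file can be sanity-checked without loading any weights. A minimal sketch using the `safetensors` package; this inspection step is not something the README itself shows:

```python
# Sketch: peek inside the quantised checkpoint without loading it into a
# model. Lists the first few tensor names and shapes so you can confirm
# the file downloaded intact.
from safetensors import safe_open

PATH = "medalpaca-13B-GPTQ-4bit-128g.no-act-order.safetensors"

with safe_open(PATH, framework="pt") as f:
    for name in list(f.keys())[:8]:
        print(name, f.get_slice(name).get_shape())
```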
 
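
The Training Data section in the second hunk describes three collection steps. For the first (Anki flashcards), card text maps directly onto question-answer pairs. A hypothetical sketch of that idea, assuming a deck exported as tab-separated front/back text; the card does not specify the actual tooling:

```python
# Hypothetical sketch: turn an Anki deck exported as TSV (front<TAB>back)
# into question-answer pairs, as described in the Training Data section.
import csv

def cards_to_qa(tsv_path):
    pairs = []
    with open(tsv_path, newline="", encoding="utf-8") as f:
        for row in csv.reader(f, delimiter="\t"):
            if len(row) < 2:
                continue  # skip malformed rows
            front, back = row[0], row[1]
            pairs.append({"question": front.strip(), "answer": back.strip()})
    return pairs
```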
 
 
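For the second step (Wikidoc), the card says headings were turned into questions by ChatGPT 3.5, with the paragraph under each heading kept as the answer. A hypothetical reconstruction against the `openai` package's pre-1.0 chat API; the prompt wording is invented, not quoted from the authors:

```python
# Hypothetical sketch of the Wikidoc step: ask ChatGPT 3.5 for a question
# matching a heading, keep the paragraph as the answer.
# Uses the pre-1.0 openai API (openai.ChatCompletion); prompt is assumed.
import openai

def heading_to_qa(heading, paragraph):
    resp = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{
            "role": "user",
            "content": (
                f"Write one medical question that would be answered by a "
                f"paragraph titled '{heading}'. Reply with the question only."
            ),
        }],
    )
    question = resp["choices"][0]["message"]["content"].strip()
    return {"question": question, "answer": paragraph}
```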
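
For the third step, the five named categories correspond to StackExchange sites with a public API. A sketch of how the top-rated questions might be pulled; the card doesn't say how the extraction was actually done, and `requests` plus the api.stackexchange.com v2.3 endpoint are assumptions:

```python
# Sketch: fetch top-voted questions for the five StackExchange sites named
# in the Training Data section, via the public v2.3 API (assumed tooling).
import requests

SITES = ["academia", "bioinformatics", "biology", "fitness", "health"]

def top_questions(site, pagesize=10):
    r = requests.get(
        "https://api.stackexchange.com/2.3/questions",
        params={"order": "desc", "sort": "votes", "site": site,
                "pagesize": pagesize},
        timeout=30,
    )
    r.raise_for_status()
    return [q["title"] for q in r.json()["items"]]

for site in SITES:
    print(site, top_questions(site, pagesize=3))
```

Pairing each question with its answer would take a further call to the API's answers endpoint for the returned question ids.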
 
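Finally, the hunk header `@@ -152,7 +174,7 @@ print(answer)` shows that the card's Model Usage section, which this diff does not display, ends in a `print(answer)` call. A minimal sketch of a pipeline call in that spirit, against the upstream `medalpaca/medalpaca-13b` Hub id; the Context/Question/Answer prompt layout is an assumption, not quoted from the card:

```python
# Minimal sketch of querying medalpaca-13b with the transformers pipeline.
# The prompt format below is assumed, not taken from this diff.
from transformers import pipeline

pl = pipeline("text-generation",
              model="medalpaca/medalpaca-13b",
              tokenizer="medalpaca/medalpaca-13b")

question = "What are the symptoms of diabetes?"
context = ("Diabetes is a metabolic disease that causes high blood sugar. "
           "Symptoms include increased thirst and frequent urination.")

answer = pl(f"Context: {context}\n\nQuestion: {question}\n\nAnswer: ")
print(answer)
```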