Quantization made by Richard Erkhov.

[Github](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)


MELT-TinyLlama-1.1B-Chat-v1.0 - GGUF
- Model creator: https://huggingface.co/IBI-CAAI/
- Original model: https://huggingface.co/IBI-CAAI/MELT-TinyLlama-1.1B-Chat-v1.0/


| Name | Quant method | Size |
| ---- | ---- | ---- |
| [MELT-TinyLlama-1.1B-Chat-v1.0.Q2_K.gguf](https://huggingface.co/RichardErkhov/IBI-CAAI_-_MELT-TinyLlama-1.1B-Chat-v1.0-gguf/blob/main/MELT-TinyLlama-1.1B-Chat-v1.0.Q2_K.gguf) | Q2_K | 0.4GB |
| [MELT-TinyLlama-1.1B-Chat-v1.0.IQ3_XS.gguf](https://huggingface.co/RichardErkhov/IBI-CAAI_-_MELT-TinyLlama-1.1B-Chat-v1.0-gguf/blob/main/MELT-TinyLlama-1.1B-Chat-v1.0.IQ3_XS.gguf) | IQ3_XS | 0.44GB |
| [MELT-TinyLlama-1.1B-Chat-v1.0.IQ3_S.gguf](https://huggingface.co/RichardErkhov/IBI-CAAI_-_MELT-TinyLlama-1.1B-Chat-v1.0-gguf/blob/main/MELT-TinyLlama-1.1B-Chat-v1.0.IQ3_S.gguf) | IQ3_S | 0.47GB |
| [MELT-TinyLlama-1.1B-Chat-v1.0.Q3_K_S.gguf](https://huggingface.co/RichardErkhov/IBI-CAAI_-_MELT-TinyLlama-1.1B-Chat-v1.0-gguf/blob/main/MELT-TinyLlama-1.1B-Chat-v1.0.Q3_K_S.gguf) | Q3_K_S | 0.47GB |
| [MELT-TinyLlama-1.1B-Chat-v1.0.IQ3_M.gguf](https://huggingface.co/RichardErkhov/IBI-CAAI_-_MELT-TinyLlama-1.1B-Chat-v1.0-gguf/blob/main/MELT-TinyLlama-1.1B-Chat-v1.0.IQ3_M.gguf) | IQ3_M | 0.48GB |
| [MELT-TinyLlama-1.1B-Chat-v1.0.Q3_K.gguf](https://huggingface.co/RichardErkhov/IBI-CAAI_-_MELT-TinyLlama-1.1B-Chat-v1.0-gguf/blob/main/MELT-TinyLlama-1.1B-Chat-v1.0.Q3_K.gguf) | Q3_K | 0.51GB |
| [MELT-TinyLlama-1.1B-Chat-v1.0.Q3_K_M.gguf](https://huggingface.co/RichardErkhov/IBI-CAAI_-_MELT-TinyLlama-1.1B-Chat-v1.0-gguf/blob/main/MELT-TinyLlama-1.1B-Chat-v1.0.Q3_K_M.gguf) | Q3_K_M | 0.51GB |
| [MELT-TinyLlama-1.1B-Chat-v1.0.Q3_K_L.gguf](https://huggingface.co/RichardErkhov/IBI-CAAI_-_MELT-TinyLlama-1.1B-Chat-v1.0-gguf/blob/main/MELT-TinyLlama-1.1B-Chat-v1.0.Q3_K_L.gguf) | Q3_K_L | 0.55GB |
| [MELT-TinyLlama-1.1B-Chat-v1.0.IQ4_XS.gguf](https://huggingface.co/RichardErkhov/IBI-CAAI_-_MELT-TinyLlama-1.1B-Chat-v1.0-gguf/blob/main/MELT-TinyLlama-1.1B-Chat-v1.0.IQ4_XS.gguf) | IQ4_XS | 0.57GB |
| [MELT-TinyLlama-1.1B-Chat-v1.0.Q4_0.gguf](https://huggingface.co/RichardErkhov/IBI-CAAI_-_MELT-TinyLlama-1.1B-Chat-v1.0-gguf/blob/main/MELT-TinyLlama-1.1B-Chat-v1.0.Q4_0.gguf) | Q4_0 | 0.59GB |
| [MELT-TinyLlama-1.1B-Chat-v1.0.IQ4_NL.gguf](https://huggingface.co/RichardErkhov/IBI-CAAI_-_MELT-TinyLlama-1.1B-Chat-v1.0-gguf/blob/main/MELT-TinyLlama-1.1B-Chat-v1.0.IQ4_NL.gguf) | IQ4_NL | 0.6GB |
| [MELT-TinyLlama-1.1B-Chat-v1.0.Q4_K_S.gguf](https://huggingface.co/RichardErkhov/IBI-CAAI_-_MELT-TinyLlama-1.1B-Chat-v1.0-gguf/blob/main/MELT-TinyLlama-1.1B-Chat-v1.0.Q4_K_S.gguf) | Q4_K_S | 0.6GB |
| [MELT-TinyLlama-1.1B-Chat-v1.0.Q4_K.gguf](https://huggingface.co/RichardErkhov/IBI-CAAI_-_MELT-TinyLlama-1.1B-Chat-v1.0-gguf/blob/main/MELT-TinyLlama-1.1B-Chat-v1.0.Q4_K.gguf) | Q4_K | 0.62GB |
| [MELT-TinyLlama-1.1B-Chat-v1.0.Q4_K_M.gguf](https://huggingface.co/RichardErkhov/IBI-CAAI_-_MELT-TinyLlama-1.1B-Chat-v1.0-gguf/blob/main/MELT-TinyLlama-1.1B-Chat-v1.0.Q4_K_M.gguf) | Q4_K_M | 0.62GB |
| [MELT-TinyLlama-1.1B-Chat-v1.0.Q4_1.gguf](https://huggingface.co/RichardErkhov/IBI-CAAI_-_MELT-TinyLlama-1.1B-Chat-v1.0-gguf/blob/main/MELT-TinyLlama-1.1B-Chat-v1.0.Q4_1.gguf) | Q4_1 | 0.65GB |
| [MELT-TinyLlama-1.1B-Chat-v1.0.Q5_0.gguf](https://huggingface.co/RichardErkhov/IBI-CAAI_-_MELT-TinyLlama-1.1B-Chat-v1.0-gguf/blob/main/MELT-TinyLlama-1.1B-Chat-v1.0.Q5_0.gguf) | Q5_0 | 0.71GB |
| [MELT-TinyLlama-1.1B-Chat-v1.0.Q5_K_S.gguf](https://huggingface.co/RichardErkhov/IBI-CAAI_-_MELT-TinyLlama-1.1B-Chat-v1.0-gguf/blob/main/MELT-TinyLlama-1.1B-Chat-v1.0.Q5_K_S.gguf) | Q5_K_S | 0.71GB |
| [MELT-TinyLlama-1.1B-Chat-v1.0.Q5_K.gguf](https://huggingface.co/RichardErkhov/IBI-CAAI_-_MELT-TinyLlama-1.1B-Chat-v1.0-gguf/blob/main/MELT-TinyLlama-1.1B-Chat-v1.0.Q5_K.gguf) | Q5_K | 0.73GB |
| [MELT-TinyLlama-1.1B-Chat-v1.0.Q5_K_M.gguf](https://huggingface.co/RichardErkhov/IBI-CAAI_-_MELT-TinyLlama-1.1B-Chat-v1.0-gguf/blob/main/MELT-TinyLlama-1.1B-Chat-v1.0.Q5_K_M.gguf) | Q5_K_M | 0.73GB |
| [MELT-TinyLlama-1.1B-Chat-v1.0.Q5_1.gguf](https://huggingface.co/RichardErkhov/IBI-CAAI_-_MELT-TinyLlama-1.1B-Chat-v1.0-gguf/blob/main/MELT-TinyLlama-1.1B-Chat-v1.0.Q5_1.gguf) | Q5_1 | 0.77GB |
| [MELT-TinyLlama-1.1B-Chat-v1.0.Q6_K.gguf](https://huggingface.co/RichardErkhov/IBI-CAAI_-_MELT-TinyLlama-1.1B-Chat-v1.0-gguf/blob/main/MELT-TinyLlama-1.1B-Chat-v1.0.Q6_K.gguf) | Q6_K | 0.84GB |
| [MELT-TinyLlama-1.1B-Chat-v1.0.Q8_0.gguf](https://huggingface.co/RichardErkhov/IBI-CAAI_-_MELT-TinyLlama-1.1B-Chat-v1.0-gguf/blob/main/MELT-TinyLlama-1.1B-Chat-v1.0.Q8_0.gguf) | Q8_0 | 1.09GB |
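
Any of the files above can be pulled and run locally with llama.cpp-compatible tooling. Below is a minimal sketch, assuming `huggingface_hub` and `llama-cpp-python` are installed; the Q4_K_M file is just one example, and any quant from the table can be substituted.

```python
# Sketch: download one of the GGUF quants listed above and run it locally with
# llama-cpp-python. Repo id and file name are taken from the table; the prompt
# and generation settings are illustrative only.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="RichardErkhov/IBI-CAAI_-_MELT-TinyLlama-1.1B-Chat-v1.0-gguf",
    filename="MELT-TinyLlama-1.1B-Chat-v1.0.Q4_K_M.gguf",  # any row from the table works
)

llm = Llama(model_path=model_path, n_ctx=2048)

# The GGUF metadata carries the base model's chat template, so the high-level
# chat API can be used directly.
response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful medical education assistant."},
        {"role": "user", "content": "What is the mechanism of action of metformin?"},
    ],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```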




Original model description:
---
license: apache-2.0
language:
- en
library_name: transformers
---

# Model Card for MELT-TinyLlama-1.1B-Chat-v1.0

The MELT-TinyLlama-1.1B-Chat-v1.0 Large Language Model (LLM) is a generative text model pre-trained and fine-tuned using publicly available medical data.

MELT-TinyLlama-1.1B-Chat-v1.0 demonstrates a 13.76% improvement over TinyLlama-1.1B-Chat-v1.0 across three medical benchmarks built from USMLE, Indian AIIMS, and NEET medical examination questions.

## Model Details

The Medical Education Language Transformer (MELT) models have been trained on a wide range of text, chat, Q/A, and instruction data in the medical domain.

While the model was evaluated using publicly available [USMLE](https://www.usmle.org/), Indian AIIMS, and NEET medical examination example questions, its use is intended to be more broadly applicable.

### Model Description

<!-- Provide a longer summary of what this model is. -->

- **Developed by:** [Center for Applied AI](https://caai.ai.uky.edu/)
- **Funded by:** [Institute for Biomedical Informatics](https://www.research.uky.edu/IBI)
- **Model type:** LLM
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Finetuned from model:** [TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0)


## Uses

MELT is intended for research purposes only. MELT models are best suited for prompts using a QA or chat format.

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
MELT is intended for research purposes only and should not be used for medical advice. 

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->
MELT was trained using publicly available data collections, which likely contain biased and inaccurate information. The training and evaluation datasets have not been evaluated for content or accuracy.

## How to Get Started with the Model

Use this model as you would any Llama-architecture chat model; it follows the same chat usage pattern as its base model, [TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0).
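
A minimal usage sketch with the Transformers text-generation pipeline follows; the model id is taken from the links above, and the generation settings are illustrative rather than recommended values.

```python
# Sketch: load the original (non-quantized) model with transformers and use the
# chat template inherited from TinyLlama-1.1B-Chat-v1.0.
import torch
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="IBI-CAAI/MELT-TinyLlama-1.1B-Chat-v1.0",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a helpful medical education assistant."},
    {"role": "user", "content": "List the classic signs of appendicitis."},
]

# Build the prompt with the model's chat template, then generate.
prompt = pipe.tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
outputs = pipe(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.95)
print(outputs[0]["generated_text"])
```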

## Training Details

### Training Data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
The following datasets were used for training:

- [Expert Med](https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/Q3A969)
- [MedQA train](https://huggingface.co/datasets/bigbio/med_qa)
- [MedMCQA train](https://github.com/MedMCQA/MedMCQA?tab=readme-ov-file#data-download-and-preprocessing)
- [LiveQA](https://github.com/abachaa/LiveQA_MedicalTask_TREC2017)
- [MedicationQA](https://huggingface.co/datasets/truehealth/medicationqa)
- [MMLU clinical topics](https://huggingface.co/datasets/Stevross/mmlu)
- [Medical Flashcards](https://huggingface.co/datasets/medalpaca/medical_meadow_medical_flashcards)
- [Wikidoc](https://huggingface.co/datasets/medalpaca/medical_meadow_wikidoc)
- [Wikidoc Patient Information](https://huggingface.co/datasets/medalpaca/medical_meadow_wikidoc_patient_information)
- [MEDIQA](https://huggingface.co/datasets/medalpaca/medical_meadow_mediqa)
- [MMMLU](https://huggingface.co/datasets/medalpaca/medical_meadow_mmmlu)
- [icliniq 10k](https://drive.google.com/file/d/1ZKbqgYqWc7DJHs3N9TQYQVPdDQmZaClA/view?usp=sharing)
- [HealthCare Magic 100k](https://drive.google.com/file/d/1lyfqIwlLSClhgrCutWuEe_IACNq6XNUt/view?usp=sharing)
- [GenMedGPT-5k](https://drive.google.com/file/d/1nDTKZ3wZbZWTkFMBkxlamrzbNz0frugg/view?usp=sharing)
- [Mental Health Conversational](https://huggingface.co/datasets/heliosbrahma/mental_health_conversational_dataset)

### Training Procedure 

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

#### Training Hyperparameters

- **LoRA Rank:** 64
- **LoRA Alpha:** 16
- **LoRA Target Modules:** "o_proj", "down_proj", "v_proj", "gate_proj", "up_proj", "k_proj", "q_proj"
- **Learning Rate:** 2e-4
- **Epochs:** 3
- **Precision:** bf16
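
For reference, the LoRA settings above map onto a PEFT `LoraConfig` roughly as sketched below; the dropout, bias setting, and task type are assumptions, as they are not stated in this card. The learning rate and epoch count would go into the trainer arguments rather than this configuration object.

```python
# Sketch of a PEFT LoRA configuration matching the hyperparameters listed above.
# Dropout, bias handling, and task type are assumptions, not documented values.
from peft import LoraConfig

lora_config = LoraConfig(
    r=64,            # LoRA rank
    lora_alpha=16,   # LoRA alpha
    target_modules=[
        "o_proj", "down_proj", "v_proj",
        "gate_proj", "up_proj", "k_proj", "q_proj",
    ],
    lora_dropout=0.05,      # assumption: not stated in the card
    bias="none",            # assumption
    task_type="CAUSAL_LM",
)
```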

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

MELT-TinyLlama-1.1B-Chat-v1.0 demonstrates an average 13.76% improvement over TinyLlama-1.1B-Chat-v1.0 across three medical benchmarks built from USMLE, Indian AIIMS, and NEET medical examination questions.


### TinyLlama-1.1B-Chat-v1.0
- **medqa:** {'base': {'Average': 25.49, 'STEP-1': 24.48, 'STEP-2&3': 26.64}}
- **mausmle:** {'base': {'Average': 19.71, 'STEP-1': 21.18, 'STEP-2': 20.69, 'STEP-3': 17.76}} 
- **medmcqa:** {'base': {'Average': 28.52, 'MEDICINE': 29.35, 'OPHTHALMOLOGY': 28.57, 'ANATOMY': 30.82, 'PATHOLOGY': 29.07, 'PHYSIOLOGY': 20.45, 'DENTAL': 30.09, 'RADIOLOGY': 14.29, 'BIOCHEMISTRY': 22.31, 'ANAESTHESIA': 26.09, 'GYNAECOLOGY': 24.84, 'PHARMACOLOGY': 32.02, 'SOCIAL': 31.11, 'PEDIATRICS': 31.82, 'ENT': 28.95, 'SURGERY': 31.45, 'MICROBIOLOGY': 26.03, 'FORENSIC': 16.28, 'PSYCHIATRY': 22.22, 'SKIN': 40.0, 'ORTHOPAEDICS': 21.43, 'UNKNOWN': 0.0}}
- **average:** 24.57%
  
### MELT-TinyLlama-1.1B-Chat-v1.0
- **medqa:** {'base': {'Average': 29.5, 'STEP-1': 28.17, 'STEP-2&3': 31.03}}  
- **mausmle:** {'base': {'Average': 21.51, 'STEP-1': 27.06, 'STEP-2': 19.54, 'STEP-3': 18.69}}   
- **medmcqa:** {'base': {'Average': 32.84, 'MEDICINE': 27.72, 'OPHTHALMOLOGY': 38.1, 'ANATOMY': 39.73, 'PATHOLOGY': 32.56, 'PHYSIOLOGY': 35.61, 'DENTAL': 32.23, 'RADIOLOGY': 41.07, 'BIOCHEMISTRY': 33.06, 'ANAESTHESIA': 39.13, 'GYNAECOLOGY': 22.88, 'PHARMACOLOGY': 32.58, 'SOCIAL': 26.67, 'PEDIATRICS': 34.09, 'ENT': 42.11, 'SURGERY': 33.47, 'MICROBIOLOGY': 30.14, 'FORENSIC': 41.86, 'PSYCHIATRY': 55.56, 'SKIN': 60.0, 'ORTHOPAEDICS': 35.71, 'UNKNOWN': 100.0}}
- **average:** 27.95%
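
The headline 13.76% figure is consistent with the relative improvement of the two overall averages reported above:

$$\frac{27.95 - 24.57}{24.57} \approx 0.1376 \;\Rightarrow\; 13.76\%$$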

### Testing Data, Factors & Metrics

#### Testing Data

<!-- This should link to a Dataset Card if possible. -->
- [MedQA test](https://huggingface.co/datasets/bigbio/med_qa)
- [MedMCQA test](https://github.com/MedMCQA/MedMCQA?tab=readme-ov-file#data-download-and-preprocessing)
- [MA USMLE](https://huggingface.co/datasets/medalpaca/medical_meadow_usmle_self_assessment)

## Disclaimer

The use of large language models, such as this one, is provided without warranties or guarantees of any kind. While every effort has been made to ensure accuracy, completeness, and reliability of the information generated, it should be noted that these models may produce responses that are inaccurate, outdated, or inappropriate for specific purposes. Users are advised to exercise discretion and judgment when relying on the information generated by these models. The outputs should not be considered as professional, legal, medical, financial, or any other form of advice. It is recommended to seek expert advice or consult appropriate sources for specific queries or critical decision-making. The creators, developers, and providers of these models disclaim any liability for damages, losses, or any consequences arising from the use, reliance upon, or interpretation of the information provided by these models. The user assumes full responsibility for their interactions and usage of the generated content. By using these language models, users agree to indemnify and hold harmless the developers, providers, and affiliates from any claims, damages, or liabilities that may arise from their use. Please be aware that these models are constantly evolving, and their capabilities, limitations, and outputs may change over time without prior notice. Your use of this language model signifies your acceptance and understanding of this disclaimer.