---
library_name: peft
base_model: mistralai/Mistral-7B-Instruct-v0.2
license: apache-2.0
tags:
- text-generation-inference
- transformers
- ruslanmv
- mistral
- trl
datasets:
- ruslanmv/ai-medical-chatbot
language:
- en
---

# Medical-Mixtral-7B-v2k
[![](future.jpg)](https://ruslanmv.com/)
## Description
Fine-tuned Mistral model for answering medical assistance questions. This model is a fine-tuned version of mistralai/Mistral-7B-Instruct-v0.2, trained on a 2.0k-record subset of the AI Medical Chatbot dataset, which contains 250k records (https://huggingface.co/datasets/ruslanmv/ai-medical-chatbot). Its purpose is to provide a ready-to-use chatbot for answering questions related to medical assistance.

## Intended Use
This model is intended for providing assistance and answering questions related to medical inquiries. It is suitable for use in chatbot applications where users seek medical advice, information, or assistance.

## Installation 
```bash
pip install -qU transformers==4.36.2 datasets python-dotenv peft bitsandbytes accelerate
```
## Example Usage
```python

from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline, BitsAndBytesConfig
import torch

# Define the name of your fine-tuned model
finetuned_model = 'ruslanmv/Medical-Mixtral-7B-v2k'

# 4-bit NF4 quantization configuration
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=False,
)
# Load the fine-tuned model (load_in_4bit is carried by the quantization config)
model_pretrained = AutoModelForCausalLM.from_pretrained(
    finetuned_model,
    quantization_config=bnb_config,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True
)

# Load tokenizer
tokenizer = AutoTokenizer.from_pretrained(finetuned_model, trust_remote_code=True)

# Set pad_token_id to eos_token_id
model_pretrained.config.pad_token_id = tokenizer.eos_token_id

pipe = pipeline(task="text-generation", model=model_pretrained, tokenizer=tokenizer, max_length=100)

def build_prompt(question):
  prompt=f"[INST]@Enlighten. {question} [/INST]"
  return prompt

question = "What does abutment of the nerve root mean?"
prompt = build_prompt(question)

# Generate text based on the prompt
result = pipe(prompt)[0]
generated_text = result['generated_text']

# Remove the prompt from the generated text
generated_text = generated_text.replace(prompt, "", 1).strip()

print(generated_text)


```
You will get something like:
```
Please help. For more information consult an internal medicine physician online ➜ http://iclinic.com/e/gastroenterologist-online-consultation.php.
```
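The prompt template and the prompt-stripping step can be exercised without loading the model. This is a minimal sketch; `strip_prompt` and the fake model output are illustrative only and not part of the released code.

```python
def build_prompt(question):
    # Mistral-Instruct template with the "@Enlighten." cue used at fine-tuning time
    return f"[INST]@Enlighten. {question} [/INST]"

def strip_prompt(generated_text, prompt):
    # Text-generation pipelines echo the prompt; drop its first occurrence
    return generated_text.replace(prompt, "", 1).strip()

prompt = build_prompt("What does abutment of the nerve root mean?")
fake_output = prompt + " It means the disc is touching the nerve root."
print(strip_prompt(fake_output, prompt))  # It means the disc is touching the nerve root.
```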



You can also wrap the pipeline in a reusable helper:

```python
def ask(question):
  prompt_ending = "[/INST]"
  # Guide for answering questions
  test_guide = 'Answer the following question, at the end of your response say thank you for your query.\n'
  # Build the question prompt
  question = test_guide + question + "\n"
  print(question)
  # Build the full prompt
  prompt = build_prompt(question)
  # Generate the answer
  result = pipe(prompt)
  llm_answer = result[0]['generated_text']
  # Keep only the text after the closing [/INST] tag, if present
  index = llm_answer.find(prompt_ending)
  if index != -1:
    llm_answer = llm_answer[index + len(prompt_ending):]
  print("LLM Answer:")
  print(llm_answer)

question = "For how long should I take Kalachikai powder to overcome PCOD problem?"
ask(question)
```
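The index-based stripping inside `ask` is easy to get wrong when the marker is missing, so it helps to test that logic in isolation. The `extract_answer` helper below is a hypothetical, self-contained version of the same step:

```python
def extract_answer(llm_text, marker="[/INST]"):
    # Return everything after the first marker; fall back to the full text
    index = llm_text.find(marker)
    return llm_text[index + len(marker):] if index != -1 else llm_text

print(extract_answer("[INST]Q[/INST] The answer.").strip())  # The answer.
```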



## Training Data
- **Dataset Name:** AI Medical Chatbot
- **Dataset URL:** https://huggingface.co/datasets/ruslanmv/ai-medical-chatbot
- **Dataset Size:** 250k records
- **Subset Used:** 2.0k records
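The card does not state how the 2.0k subset was drawn from the 250k records. A generic, reproducible sampling approach can be sketched as follows; `sample_subset`, the toy records, and the seed are all illustrative assumptions:

```python
import random

def sample_subset(records, k, seed=42):
    # Draw a reproducible k-record sample from the full dataset
    return random.Random(seed).sample(records, k)

# Toy stand-in for the 250k-record dataset
full = [{"q": f"question {i}", "a": f"answer {i}"} for i in range(250)]
subset = sample_subset(full, 20)
print(len(subset))  # 20
```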

## Limitations
The model's performance may vary depending on the complexity and specificity of the medical questions.
The model may not provide accurate answers for every medical query, and users should consult medical professionals for critical healthcare concerns.

## Ethical Considerations
Users should be informed that the model's responses are generated based on patterns in the training data and may not always be accurate or suitable for medical decision-making.
The model should not be used as a replacement for professional medical advice or diagnosis.
Sensitive patient data should not be shared with the model, and user privacy should be protected.