---
base_model: unsloth/mistral-7b-v0.3-bnb-4bit
language:
- en
license: apache-2.0
tags:
- text-generation-inference
- transformers
- unsloth
- mistral
- trl
---

<div class="alert alert-block alert-danger">
<h2><center><strong>Mental Health Chatbot using Fine-Tuned 7B Mistral Model</strong></center></h2>
</div>

## Inference

```python
from unsloth import FastLanguageModel

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name = "ImranzamanML/7B_finetuned_Mistral",
    max_seq_length = 5020,
    dtype = None,        # auto-detect (float16 or bfloat16 depending on GPU)
    load_in_4bit = True, # load the 4-bit quantized weights to save memory
)
```

## Prompt template for model answers

```python
data_prompt = """Analyze the provided text from a mental health perspective. Identify any indicators of emotional distress, coping mechanisms, or psychological well-being. Highlight any potential concerns or positive aspects related to mental health, and provide a brief explanation for each observation.

### Input:
{}

### Response:
{}"""

EOS_TOKEN = tokenizer.eos_token

def formatting_prompt(examples):
    # Format each (Context, Response) pair into the prompt template above
    inputs  = examples["Context"]
    outputs = examples["Response"]
    texts = []
    for input_, output in zip(inputs, outputs):
        # Append the EOS token so the model learns where a response ends
        text = data_prompt.format(input_, output) + EOS_TOKEN
        texts.append(text)
    return {"text": texts}
```
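To apply this formatter to a training set, here is a minimal sketch, assuming a Hugging Face dataset with `Context` and `Response` columns (the dataset name below is a placeholder):

```python
from datasets import load_dataset

# Placeholder dataset name -- substitute your own Context/Response data
dataset = load_dataset("your_username/mental_health_conversations", split="train")

# batched=True passes whole columns as lists, matching formatting_prompt's signature
dataset = dataset.map(formatting_prompt, batched=True)
```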

## Feeding the prompt text into the model
```python
text = "I'm going through some things with my feelings and myself. I barely sleep and I do nothing but think about how I'm worthless and how I shouldn't be here. I've never tried or contemplated suicide. I've always wanted to fix my issues, but I never get around to it. How can I change my feeling of being worthless to everyone?"
```

<div style="background-color: #f2f2f2; border-left: 5px solid #4CAF50; padding: 15px; margin: 20px 0;">
    <strong>Note:</strong> Let's use the fine-tuned model for inference to generate responses to mental health-related prompts!
</div>

<h3 style="color: #388e3c; font-family: Arial, sans-serif;">Here are some key points to note:</h3>

<ol style="margin-left: 20px;">
    <li>
        <p><code>FastLanguageModel.for_inference(model)</code> configures the model specifically for inference, optimizing its performance for generating responses.</p>
    </li>
    <li>
        <p>The input text is tokenized with the <code>tokenizer</code>, converting it into a format the model can process. We use <code>data_prompt</code> to format the input text, leaving the response placeholder empty so the model fills it in. The <code>return_tensors = "pt"</code> parameter returns PyTorch tensors, which are then moved to the GPU with <code>.to("cuda")</code> for faster processing.</p>
    </li>
    <li>
        <p>The <code>model.generate</code> method generates a response from the tokenized inputs. The parameters <code>max_new_tokens = 5020</code> and <code>use_cache = True</code> let the model produce long, coherent responses efficiently by reusing cached key/value computations from previous decoding steps.</p>
    </li>
</ol>

```python
model = FastLanguageModel.for_inference(model)

inputs = tokenizer(
    [
        data_prompt.format(
            text,  # the input text to analyze
            "",    # empty response placeholder for the model to fill in
        )
    ],
    return_tensors = "pt",
).to("cuda")

outputs = model.generate(**inputs, max_new_tokens = 5020, use_cache = True)
answer = tokenizer.batch_decode(outputs)
answer = answer[0].split("### Response:")[-1]
print("Answer of the question is:", answer)
```
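Note that `batch_decode` keeps special tokens by default, so the decoded answer may end with the EOS token (`</s>`). A small optional cleanup, using the standard `skip_special_tokens` flag:

```python
# Decode while dropping special tokens such as the EOS appended during training
answer = tokenizer.batch_decode(outputs, skip_special_tokens=True)[0]
answer = answer.split("### Response:")[-1].strip()
print("Answer of the question is:", answer)
```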