vibhorag101 committed
Commit 602c197
1 Parent(s): afefa31

Update README.md

Files changed (1)
  README.md +27 -1
README.md CHANGED
@@ -10,14 +10,40 @@ pipeline_tag: text-generation
  # Model Card
  
  <!-- Provide a quick summary of what the model is/does. -->
- - This model is a finetune of the llama-2-7b-chat model on a therapy dataset.
+ - This model is a finetune of the **llama-2-7b-chat-hf** model on a therapy dataset.
  - The model aims to provide basic therapy to users and improve their mental health until they seek professional help.
  - The model has been adjusted to encourage giving cheerful responses to the user. The system prompt is given below.
  
  ## Model Details
+ ### Training Hardware
+ - RTX A5000 24GB
+ - 48-core Intel Xeon
+ - 128GB RAM
+ ### Model Hyperparameters
+ - This [training script](https://github.com/vibhorag101/phr-chat/blob/main/Finetune/finetune.ipynb) was used for the finetuning.
+ - The ShareGPT-format dataset was converted to the llama-2 training format using this [script](https://github.com/vibhorag101/phr-chat/blob/main/Finetune/data_transform.ipynb).
+ - num_train_epochs = 3
+ - per_device_train_batch_size = 2
+ - per_device_eval_batch_size = 2
+ - gradient_accumulation_steps = 1
+ - max_seq_length = 2048
+ - lora_r = 64
+ - lora_alpha = 16
+ - lora_dropout = 0.1
+ - use_4bit = True
+ - bnb_4bit_compute_dtype = "float16"
+ - bnb_4bit_quant_type = "nf4"
+ - use_nested_quant = False
+ - fp16 = False
+ - bf16 = True
+ - Data Sample: 1000 (80:20 split)
  
  ### Model System Prompt
  You are a helpful and joyous mental therapy assistant. Always answer as helpfully and cheerfully as possible, while being safe. Your answers should not include any harmful, unethical, racist, sexist, toxic, dangerous, or illegal content. Please ensure that your responses are socially unbiased and positive in nature.
  
  If a question does not make any sense, or is not factually coherent, explain why instead of answering something not correct. If you don't know the answer to a question, please don't share false information.
  
+ #### Model Training Data
+ 
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/64eb1e4a55e4f0ecb9c4f406/PsbTFlswJexLuwrJYtvly.png)
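
The hyperparameters added in this commit describe a fairly standard QLoRA fine-tune: the base model is loaded in 4-bit NF4 and a LoRA adapter (r=64, alpha=16) is trained on top. The linked notebook is the source of truth and is not reproduced here; the sketch below only shows how the listed values would typically be wired into the `transformers`/`peft`/`trl` stack, so the argument names, the `meta-llama/Llama-2-7b-chat-hf` model id, and the one-row placeholder dataset are assumptions rather than a copy of that script.

```python
# Hedged sketch: mapping the listed hyperparameters onto a typical QLoRA
# setup. Not the author's actual notebook; names below are assumptions.
import torch
from datasets import Dataset
from peft import LoraConfig
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          BitsAndBytesConfig, TrainingArguments)
from trl import SFTTrainer

base_model = "meta-llama/Llama-2-7b-chat-hf"  # assumed base checkpoint

# use_4bit=True, bnb_4bit_quant_type="nf4",
# bnb_4bit_compute_dtype="float16", use_nested_quant=False
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
    bnb_4bit_use_double_quant=False,
)

# lora_r=64, lora_alpha=16, lora_dropout=0.1
peft_config = LoraConfig(
    r=64, lora_alpha=16, lora_dropout=0.1,
    bias="none", task_type="CAUSAL_LM",
)

model = AutoModelForCausalLM.from_pretrained(
    base_model, quantization_config=bnb_config, device_map="auto")
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token  # llama-2 ships without a pad token

args = TrainingArguments(
    output_dir="./results",
    num_train_epochs=3,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=1,
    fp16=False,
    bf16=True,
)

# One-row placeholder standing in for the converted therapy dataset.
train_dataset = Dataset.from_dict(
    {"text": ["<s>[INST] Hello [/INST] Hi, how are you feeling today? </s>"]})

trainer = SFTTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    peft_config=peft_config,
    dataset_text_field="text",   # each row is a fully formatted prompt
    max_seq_length=2048,
    tokenizer=tokenizer,
)
trainer.train()
```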
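
The diff also notes that a ShareGPT-format dataset was converted into the llama-2 training format. That conversion script isn't reproduced here either; the function below is only an illustration of the usual mapping, rendering alternating `human`/`gpt` turns into llama-2's `[INST]`/`<<SYS>>` chat template with the system prompt from the model card.

```python
# Illustrative only: the project's data_transform.ipynb is the actual
# converter. This shows the conventional ShareGPT -> llama-2 rendering.
SYSTEM_PROMPT = (
    "You are a helpful and joyous mental therapy assistant. Always answer "
    "as helpfully and cheerfully as possible, while being safe. ..."
)

def sharegpt_to_llama2(conversations, system_prompt=SYSTEM_PROMPT):
    """Render alternating human/gpt turns into llama-2's chat format."""
    text = f"<s>[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n"
    first_user_turn = True
    for turn in conversations:
        if turn["from"] == "human":
            if not first_user_turn:
                text += "<s>[INST] "
            text += f"{turn['value']} [/INST]"
            first_user_turn = False
        else:  # a "gpt" turn closes the pair
            text += f" {turn['value']} </s>"
    return text

print(sharegpt_to_llama2([
    {"from": "human", "value": "I have been feeling anxious lately."},
    {"from": "gpt", "value": "I'm sorry to hear that. Would you like to "
                             "talk about what's been on your mind?"},
]))
```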