vibhorag101 commited on
Commit
29b091b
1 Parent(s): 0e44625

Update README.md

Files changed (1): README.md (+3, -3)
README.md CHANGED
@@ -11,7 +11,7 @@ pipeline_tag: text-generation
 
  <!-- Provide a quick summary of what the model is/does. -->
  - This model is a finetune of the **llama-2-13b-chat-hf** model on a therapy dataset.
- - The model aims to provide basic therapy to the users and improve their mental health until they seeks professional help.
+ - The model aims to provide basic therapy to the users and improve their mental health until they seek professional help.
  - The model has been adjusted to encourage giving cheerful responses to the user. The system prompt has been mentioned below.
 
  ## Model Details
@@ -20,8 +20,8 @@ pipeline_tag: text-generation
  - 48 Core Intel Xeon
  - 128GB Ram.
  ### Model Hyperparameters
- - This [training script](https://github.com/vibhorag101/llama2-mental-therapy/blob/main/finetuneModel/finetuneScript.ipynb) was used to do the finetuning.
- - The shareGPT format dataset was converted to llama-2 training format using this [script](https://github.com/vibhorag101/llama2-mental-therapy/blob/main/finetuneModel/llamaDataMaker.ipynb).
+ - This [training script](https://github.com/phr-winter23/phr-mental-chat/blob/main/finetuneModel/finetuneScriptLLaMA-2.ipynb) was used to do the finetuning.
+ - The shareGPT format dataset was converted to llama-2 training format using this [script](https://github.com/phr-winter23/phr-mental-chat/blob/main/finetuneModel/llamaDataMaker.ipynb).
  - num_train_epochs = 2
  - per_device_train_batch_size = 2
  - per_device_eval_batch_size = 2
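The linked `llamaDataMaker` notebook is not reproduced here, but a ShareGPT-to-llama-2 conversion of the kind it describes can be sketched as follows. This is a minimal sketch: the field names (`conversations`, `from`, `value`) follow the common ShareGPT convention, and the system prompt and function name are placeholders, not taken from the notebook.

```python
# Sketch: render one ShareGPT-style conversation into the llama-2 chat
# prompt format ([INST] ... [/INST] with a <<SYS>> block on the first turn).
# Field names and the system prompt below are assumptions for illustration.

SYSTEM_PROMPT = "You are a helpful, cheerful therapist."  # placeholder

def sharegpt_to_llama2(conversation, system_prompt=SYSTEM_PROMPT):
    """Render alternating human/gpt turns into a llama-2 training string."""
    text = f"<s>[INST] <<SYS>>\n{system_prompt}\n<</SYS>>\n\n"
    for i, turn in enumerate(conversation["conversations"]):
        if turn["from"] == "human":
            if i > 0:
                # every user turn after the first opens a fresh [INST] block
                text += "<s>[INST] "
            text += f"{turn['value']} [/INST]"
        else:  # assistant ("gpt") turn closes the block
            text += f" {turn['value']} </s>"
    return text

example = {"conversations": [
    {"from": "human", "value": "I feel anxious."},
    {"from": "gpt", "value": "I'm sorry to hear that."},
]}
print(sharegpt_to_llama2(example))
```

Each human/assistant pair becomes one `<s>[INST] ... [/INST] ... </s>` segment, which is the shape llama-2-chat models were trained on.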