frankmorales2020 committed
Commit f504d80 (1 parent: 3931d92)

Update README.md

Files changed (1): README.md (+21 −2)
README.md CHANGED
@@ -22,7 +22,11 @@ This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.1](https

 ## Model description

- More information needed
+ Article: https://ai.plainenglish.io/fine-tuning-the-mistral-7b-instruct-v0-1-model-with-the-emotion-dataset-c84c50b553dc
+
+ Fine-tuning: https://github.com/frank-morales2020/MLxDL/blob/main/FineTuning_Mistral_7b_hfdeployment_dataset_Emotion.ipynb
+
+ Evaluation: https://github.com/frank-morales2020/MLxDL/blob/main/FineTunning_Testing_For_EmotionQADataset.ipynb

 ## Intended uses & limitations

@@ -30,7 +34,22 @@

 ## Training and evaluation data

- More information needed
+ Evaluation: https://github.com/frank-morales2020/MLxDL/blob/main/FineTunning_Testing_For_EmotionQADataset.ipynb
+
+ The following hyperparameters were used during training:
+ - learning_rate: 0.0002
+ - train_batch_size: 3
+ - eval_batch_size: 8
+ - seed: 42
+ - gradient_accumulation_steps: 2
+ - total_train_batch_size: 6
+ - optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
+ - lr_scheduler_type: constant
+ - lr_scheduler_warmup_ratio: 0.03
+
+ Test accuracy (eval dataset, predict) on a sample of 2,000 examples, by number of training epochs:
+
+ | num_epochs | Accuracy |
+ |-----------:|---------:|
+ |          1 |   59.45% |
+ |         25 |   79.95% |
+ |         40 |   80.70% |

 ## Training procedure
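
For readers of the updated card, a minimal inference sketch in the spirit of the linked notebooks. The model id below is a placeholder, not the actual repository name, and the chat-template call assumes the standard Mistral-Instruct `[INST] ... [/INST]` format:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo id -- substitute the actual Hugging Face model id
# for this fine-tune.
model_id = "frankmorales2020/Mistral-7B-Instruct-emotion"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Mistral-Instruct models expect the [INST] ... [/INST] chat format;
# apply_chat_template inserts it for us.
prompt = tokenizer.apply_chat_template(
    [{"role": "user", "content": "i feel like my heart is about to burst"}],
    tokenize=False,
    add_generation_prompt=True,
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=8, do_sample=False)
# Decode only the newly generated tokens (the predicted emotion label).
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```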
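The hyperparameter list in the diff maps one-to-one onto `transformers.TrainingArguments`. A sketch of that mapping, assuming the Trainer/TRL SFT setup of the fine-tuning notebook (the `output_dir` is a placeholder; the notebook's exact code may differ):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="mistral-7b-instruct-emotion",  # placeholder output path
    learning_rate=2e-4,                # learning_rate: 0.0002
    per_device_train_batch_size=3,     # train_batch_size: 3
    per_device_eval_batch_size=8,      # eval_batch_size: 8
    gradient_accumulation_steps=2,     # total_train_batch_size: 3 * 2 = 6
    seed=42,
    optim="adamw_torch",               # Adam(W) with betas=(0.9, 0.999), eps=1e-08 (defaults)
    lr_scheduler_type="constant",
    warmup_ratio=0.03,                 # lr_scheduler_warmup_ratio: 0.03
    num_train_epochs=40,               # runs were reported for 1, 25, and 40 epochs
)
```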
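And a sketch of how the reported accuracy figure could be computed: generate a label for each example in a 2,000-row sample of the emotion test split and compare it with the gold label. It reuses `model` and `tokenizer` from the first sketch; the dataset id, prompt wording, and exact-string match are assumptions here -- the linked evaluation notebook is authoritative.

```python
from datasets import load_dataset

dataset = load_dataset("dair-ai/emotion", split="test")  # assumed dataset id
label_names = dataset.features["label"].names  # sadness, joy, love, anger, fear, surprise

def predict_label(text: str) -> str:
    # Hypothetical helper: ask the fine-tuned model for a single emotion word.
    prompt = tokenizer.apply_chat_template(
        [{"role": "user", "content": f"Classify the emotion of: {text}"}],
        tokenize=False,
        add_generation_prompt=True,
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=8, do_sample=False)
    return tokenizer.decode(
        out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    ).strip().lower()

# Take a fixed 2,000-example sample and count exact label matches.
sample = dataset.shuffle(seed=42).select(range(2000))
hits = sum(predict_label(ex["text"]) == label_names[ex["label"]] for ex in sample)
print(f"accuracy over {len(sample)} examples: {hits / len(sample):.2%}")
```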