frankmorales2020 committed
Commit: dac3bf6
Parent: d61f2a3

Update README.md

Files changed (1): README.md +12 -5
README.md CHANGED
```diff
@@ -36,25 +36,32 @@ More information needed
 
 Evaluation: https://github.com/frank-morales2020/MLxDL/blob/main/FineTunning_Testing_For_EmotionQADataset.ipynb
 
--------------
+
+*************
+
 The following hyperparameters were used during training:
 learning_rate: 0.0002 train_batch_size: 3 eval_batch_size: 8 seed: 42 gradient_accumulation_steps: 2 total_train_batch_size: 6 optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 lr_scheduler_type: constant lr_scheduler_warmup_ratio: 0.03
 num_epochs: 1
 
-NOTE: test - Accuracy (Eval dataset and predict) for a sample of 2000: 59.45%
---------------/
+Accuracy (Eval dataset and predict) for a sample of 2000: 59.45%
+
+*************
 
 The following hyperparameters were used during training:
 learning_rate: 0.0002 train_batch_size: 3 eval_batch_size: 8 seed: 42 gradient_accumulation_steps: 2 total_train_batch_size: 6 optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 lr_scheduler_type: constant lr_scheduler_warmup_ratio: 0.03
 num_epochs: 25
 
-NOTE: test - Accuracy (Eval dataset and predict) for a sample of 2000: 79.95%
+Accuracy (Eval dataset and predict) for a sample of 2000: 79.95%
+
+*************
 
 The following hyperparameters were used during training:
 learning_rate: 0.0002 train_batch_size: 3 eval_batch_size: 8 seed: 42 gradient_accumulation_steps: 2 total_train_batch_size: 6 optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 lr_scheduler_type: constant lr_scheduler_warmup_ratio: 0.03
 num_epochs: 40
 
-NOTE: test - Accuracy (Eval dataset and predict) for a sample of 2000: 80.70%
+Accuracy (Eval dataset and predict) for a sample of 2000: 80.70%
+
+*************
 
 ## Training procedure
 
```
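The hyperparameter listings in the diff follow the standard autogenerated Hugging Face model-card format and map one-to-one onto `transformers.TrainingArguments`. The sketch below shows that mapping; it is illustrative only: `output_dir` is a placeholder, and the card does not say which trainer wrapper (e.g. `Trainer` or TRL's `SFTTrainer`) consumed these arguments.

```python
from transformers import TrainingArguments

# Minimal sketch; values copied from the model card. Note that
# total_train_batch_size (6) is not set directly: it is derived as
# per_device_train_batch_size (3) * gradient_accumulation_steps (2).
args = TrainingArguments(
    output_dir="emotion-qa-finetune",  # hypothetical placeholder
    learning_rate=2e-4,
    per_device_train_batch_size=3,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="constant",
    warmup_ratio=0.03,
    num_train_epochs=1,  # the three runs above used 1, 25, and 40
)
```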
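The three accuracy figures (59.45%, 79.95%, and 80.70% for 1, 25, and 40 epochs) come from predicting on a 2,000-example sample of the eval dataset; the exact procedure lives in the linked notebook. A minimal sketch of that kind of check, assuming a Hugging Face `datasets.Dataset` and a hypothetical `predict_label` helper standing in for whatever inference call the notebook actually uses:

```python
def evaluate_accuracy(model, eval_dataset, predict_label, n_samples=2000):
    """Fraction of the first n_samples eval examples whose predicted
    label matches the reference. `predict_label` is a hypothetical
    stand-in for the notebook's actual inference call."""
    sample = eval_dataset.select(range(n_samples))  # datasets.Dataset API
    correct = sum(
        predict_label(model, ex["text"]) == ex["label"] for ex in sample
    )
    return correct / n_samples  # e.g. 0.5945 after 1 epoch, 0.8070 after 40
```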