Mayuresh87 committed on
Commit
af1dfb6
1 Parent(s): 9baeada

End of training

Files changed (1)
README.md +5 -5
README.md CHANGED
@@ -5,7 +5,7 @@ tags:
  - trl
  - sft
  - generated_from_trainer
- base_model: TinyLlama/TinyLlama-1.1B-Chat-v0.6
+ base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
  model-index:
  - name: TinyLlama-1.1B-python-v0.1
    results: []
@@ -16,7 +16,7 @@ should probably proofread and complete it, then remove this comment. -->
 
  # TinyLlama-1.1B-python-v0.1
 
- This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v0.6](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v0.6) on the None dataset.
+ This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0) on the None dataset.
 
  ## Model description
 
@@ -36,13 +36,13 @@ More information needed
 
  The following hyperparameters were used during training:
  - learning_rate: 0.0002
- - train_batch_size: 16
- - eval_batch_size: 16
+ - train_batch_size: 8
+ - eval_batch_size: 8
  - seed: 42
  - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  - lr_scheduler_type: constant
  - lr_scheduler_warmup_ratio: 0.03
- - training_steps: 1000
+ - training_steps: 10
  - mixed_precision_training: Native AMP
 
  ### Training results
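
For reference, the updated hyperparameters correspond to a standard TRL supervised fine-tuning (SFT) setup. The sketch below is not the author's training script; it only shows how the listed values could be passed to transformers/trl. The dataset, output directory, and data file name are placeholders (the card does not name a dataset), and the exact SFTTrainer signature varies by trl version.

```python
# Minimal sketch, assuming transformers + trl; placeholders are marked.
from datasets import load_dataset
from transformers import TrainingArguments
from trl import SFTTrainer

# Placeholder dataset -- the model card does not say which data was used.
train_dataset = load_dataset("json", data_files="python_train.json", split="train")

args = TrainingArguments(
    output_dir="TinyLlama-1.1B-python-v0.1",  # placeholder output path
    learning_rate=2e-4,              # learning_rate: 0.0002
    per_device_train_batch_size=8,   # train_batch_size: 8
    per_device_eval_batch_size=8,    # eval_batch_size: 8
    seed=42,                         # seed: 42
    lr_scheduler_type="constant",    # lr_scheduler_type: constant
    warmup_ratio=0.03,               # lr_scheduler_warmup_ratio: 0.03
    max_steps=10,                    # training_steps: 10
    fp16=True,                       # mixed_precision_training: Native AMP
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the default optimizer
    # configuration in transformers, so no extra flag is needed.
)

trainer = SFTTrainer(
    model="TinyLlama/TinyLlama-1.1B-Chat-v1.0",  # base_model from the card
    args=args,
    train_dataset=train_dataset,
)
trainer.train()
```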