---
license: apache-2.0
base_model: TinyLlama/TinyLlama-1.1B-intermediate-step-955k-token-2T
tags:
- generated_from_trainer
model-index:
- name: storage/tiny
  results: []
---
# storage/tiny
This model is a fine-tuned version of [TinyLlama/TinyLlama-1.1B-intermediate-step-955k-token-2T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-955k-token-2T) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0598
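
A minimal sketch of loading the fine-tuned checkpoint for inference with `transformers`, assuming it is available at the local path `storage/tiny` (substitute a Hub repo id if the model was pushed):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# "storage/tiny" is the training output directory named above; swap in your
# own local path or Hub repo id if the checkpoint lives elsewhere.
model = AutoModelForCausalLM.from_pretrained("storage/tiny", torch_dtype=torch.float16)
tokenizer = AutoTokenizer.from_pretrained("storage/tiny")

inputs = tokenizer("Once upon a time", return_tensors="pt")
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```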
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 0.0002
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 10
- num_epochs: 3
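
These settings map directly onto `transformers.TrainingArguments`; below is a minimal sketch, assuming a single GPU so that 8 × 16 gradient-accumulation steps yield the total train batch size of 128 (dataset and model wiring omitted):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="storage/tiny",
    learning_rate=2e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=16,  # 8 per device x 16 steps = 128 effective
    lr_scheduler_type="cosine",
    warmup_steps=10,
    num_train_epochs=3,
    adam_beta1=0.9,                  # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```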
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.9849        | 0.03  | 1    | 2.0811          |
| 1.3107        | 0.17  | 5    | 1.1992          |
| 0.6399        | 0.34  | 10   | 0.6359          |
| 0.2779        | 0.51  | 15   | 0.2862          |
| 0.1807        | 0.68  | 20   | 0.1634          |
| 0.1256        | 0.85  | 25   | 0.1177          |
| 0.097         | 1.0   | 30   | 0.0891          |
| 0.1063        | 1.17  | 35   | 0.0734          |
| 0.0769        | 1.34  | 40   | 0.0723          |
| 0.0694        | 1.51  | 45   | 0.0633          |
| 0.0687        | 1.69  | 50   | 0.0624          |
| 0.0575        | 1.86  | 55   | 0.0622          |
| 0.0516        | 2.01  | 60   | 0.0609          |
| 0.0582        | 2.18  | 65   | 0.0603          |
| 0.0611        | 2.35  | 70   | 0.0600          |
| 0.0515        | 2.52  | 75   | 0.0598          |
| 0.0704        | 2.69  | 80   | 0.0598          |
| 0.0525        | 2.86  | 85   | 0.0598          |
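
Since the Trainer reports cross-entropy loss in nats, the final validation loss of 0.0598 corresponds to a perplexity of roughly 1.06:

```python
import math

# Perplexity is the exponential of the cross-entropy loss.
print(math.exp(0.0598))  # ≈ 1.0616
```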
### Framework versions
- Transformers 4.35.2
- PyTorch 2.0.1+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0