# GEITje-7B-chat-v2
This model is a fine-tuned version of [Rijgersberg/GEITje-7B](https://huggingface.co/Rijgersberg/GEITje-7B); the fine-tuning dataset is not specified in this card. It achieves the following results on the evaluation set:
- Loss: 0.8011
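
For intuition, an evaluation loss of 0.8011 corresponds to a perplexity of exp(0.8011) ≈ 2.23. The card does not include usage code; below is a minimal sketch of loading the model with Transformers, assuming the checkpoint is published on the Hub as `Rijgersberg/GEITje-7B-chat-v2` and that the tokenizer ships a chat template (both are assumptions, not stated in the card).

```python
# Minimal usage sketch (not from the original card). Assumptions:
# - the checkpoint lives on the Hub as "Rijgersberg/GEITje-7B-chat-v2"
# - the tokenizer ships a chat template, as is typical for -chat fine-tunes
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Rijgersberg/GEITje-7B-chat-v2"  # assumed repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumption: bf16 so the 7B model fits on one GPU
    device_map="auto",
)

messages = [{"role": "user", "content": "Wat is de hoofdstad van Nederland?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(input_ids, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```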
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a sketch of the equivalent `TrainingArguments` follows the list):
- learning_rate: 5e-06
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 1
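
As a reading aid, here is one way these settings map onto `transformers.TrainingArguments`. The card does not include the actual training script, so the `output_dir` and the evaluation schedule below are assumptions; the eval schedule is inferred from the results table, which logs a validation loss every 609 steps.

```python
# Hedged reconstruction of the listed hyperparameters as TrainingArguments.
# Only the values named in the card are certain; lines marked "assumption"
# are placeholders.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="GEITje-7B-chat-v2",  # assumption: output path not stated
    learning_rate=5e-6,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=8,   # 2 per device x 8 steps = total batch 16
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=1,
    evaluation_strategy="steps",     # assumption, inferred from the results table
    eval_steps=609,                  # one evaluation every 5% of the epoch
)
```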
### Training results
| Training Loss | Epoch | Step  | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.7832        | 0.05  | 609   | 0.8844          |
| 0.6904        | 0.1   | 1218  | 0.8698          |
| 0.8195        | 0.15  | 1827  | 0.8583          |
| 0.7463        | 0.2   | 2436  | 0.8475          |
| 0.6739        | 0.25  | 3045  | 0.8395          |
| 0.7604        | 0.3   | 3654  | 0.8332          |
| 0.8024        | 0.35  | 4263  | 0.8261          |
| 0.6881        | 0.4   | 4872  | 0.8203          |
| 0.6466        | 0.45  | 5481  | 0.8167          |
| 0.7042        | 0.5   | 6090  | 0.8121          |
| 0.702         | 0.55  | 6699  | 0.8081          |
| 0.7255        | 0.6   | 7308  | 0.8054          |
| 0.7558        | 0.65  | 7917  | 0.8036          |
| 0.7587        | 0.7   | 8526  | 0.8022          |
| 0.9217        | 0.75  | 9135  | 0.8016          |
| 0.6938        | 0.8   | 9744  | 0.8011          |
| 0.6962        | 0.85  | 10353 | 0.8011          |
| 0.664         | 0.9   | 10962 | 0.8011          |
| 0.6544        | 0.95  | 11571 | 0.8011          |
| 0.6782        | 1.0   | 12180 | 0.8011          |
### Framework versions
- Transformers 4.36.0.dev0
- Pytorch 2.1.1+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
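
Matching these versions helps when reproducing the results. A small check script (not part of the card) that compares the installed packages against the list above:

```python
# Environment check (not from the card): compare installed package versions
# against the ones listed in the "Framework versions" section.
from importlib.metadata import PackageNotFoundError, version

expected = {
    "transformers": "4.36.0.dev0",
    "torch": "2.1.1+cu121",
    "datasets": "2.15.0",
    "tokenizers": "0.15.0",
}
for pkg, want in expected.items():
    try:
        have = version(pkg)
    except PackageNotFoundError:
        have = "not installed"
    status = "OK" if have == want else f"differs from card ({want})"
    print(f"{pkg}: {have} [{status}]")
```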