---
license: apache-2.0
base_model: mistralai/Mistral-7B-v0.1
tags:
- generated_from_trainer
model-index:
- name: Mistral_Sparse_refined_web_relu_2024-02-16
  results: []
---
# Mistral_Sparse_refined_web_relu_2024-02-16
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1). The training dataset is not recorded in the card metadata, though the model name suggests a ReLU-sparsified variant trained on RefinedWeb. It achieves the following results on the evaluation set (a loading sketch is given below):
- Loss: 2.4640
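
A minimal loading sketch with 🤗 Transformers, assuming the checkpoint follows the standard Mistral causal-LM architecture; the repo id is a placeholder, and if the sparse/ReLU variant ships custom modeling code, `trust_remote_code=True` may also be needed.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repo id -- substitute the path where this checkpoint
# is actually published.
model_id = "Mistral_Sparse_refined_web_relu_2024-02-16"

tokenizer = AutoTokenizer.from_pretrained(model_id)
# Add trust_remote_code=True here if the sparsified architecture
# relies on custom modeling code (an assumption, not confirmed).
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("The quick brown fox", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```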
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a hypothetical `TrainingArguments` reconstruction follows the list):
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 0
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- total_eval_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 1000
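
As referenced above, these settings can be expressed as a 🤗 `TrainingArguments` object. This is a sketch, not the authors' actual launch script: `output_dir` and any setting not listed (logging, checkpointing, mixed precision, the multi-GPU launcher) are assumptions.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="Mistral_Sparse_refined_web_relu_2024-02-16",  # assumed
    learning_rate=1e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    seed=0,
    gradient_accumulation_steps=8,  # 1 sample x 2 GPUs x 8 steps = 16 effective
    max_steps=1000,
    lr_scheduler_type="linear",
    adam_beta1=0.9,    # Adam betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```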
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 8.7883 | 0.0  | 25   | 8.7175 |
| 8.1895 | 0.01 | 50   | 8.1790 |
| 7.7646 | 0.01 | 75   | 7.8165 |
| 7.5412 | 0.02 | 100  | 7.6103 |
| 7.2131 | 0.02 | 125  | 7.1825 |
| 4.8603 | 0.02 | 150  | 4.8039 |
| 3.7952 | 0.03 | 175  | 3.8330 |
| 3.2884 | 0.03 | 200  | 3.4432 |
| 3.106  | 0.04 | 225  | 3.2520 |
| 3.0004 | 0.04 | 250  | 3.1245 |
| 2.8648 | 0.04 | 275  | 3.0460 |
| 2.8349 | 0.05 | 300  | 2.9954 |
| 2.7982 | 0.05 | 325  | 2.9562 |
| 2.6109 | 0.06 | 350  | 2.9206 |
| 2.7517 | 0.06 | 375  | 2.8975 |
| 2.7817 | 0.06 | 400  | 2.8770 |
| 2.7346 | 0.07 | 425  | 2.8580 |
| 2.7019 | 0.07 | 450  | 2.8443 |
| 2.5852 | 0.08 | 475  | 2.8288 |
| 2.6452 | 0.08 | 500  | 2.8196 |
| 2.7203 | 0.08 | 525  | 2.8109 |
| 2.627  | 0.09 | 550  | 2.8013 |
| 2.7272 | 0.09 | 575  | 2.7899 |
| 2.5443 | 0.1  | 600  | 2.7826 |
| 2.6178 | 0.1  | 625  | 2.7782 |
| 2.656  | 0.1  | 650  | 2.7680 |
| 2.676  | 0.11 | 675  | 2.7593 |
| 2.6061 | 0.11 | 700  | 2.7539 |
| 2.6263 | 0.12 | 725  | 2.7511 |
| 2.5305 | 0.12 | 750  | 2.7474 |
| 2.5344 | 0.12 | 775  | 2.7408 |
| 2.655  | 0.13 | 800  | 2.7377 |
| 2.6113 | 0.13 | 825  | 2.7332 |
| 2.5946 | 0.14 | 850  | 2.7296 |
| 2.4564 | 0.14 | 875  | 2.7270 |
| 2.5591 | 0.14 | 900  | 2.7272 |
| 2.4965 | 0.15 | 925  | 2.7205 |
| 2.6231 | 0.15 | 950  | 2.7195 |
| 2.5395 | 0.16 | 975  | 2.7162 |
| 2.5741 | 0.16 | 1000 | 2.7145 |
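
Assuming these losses are mean per-token cross-entropy in nats (the 🤗 Trainer default for causal language modeling), they convert to perplexity via `exp(loss)`; the headline loss of 2.4640 presumably comes from a separate final evaluation pass.

```python
import math

# Assumes losses are mean per-token cross-entropy in nats.
print(f"{math.exp(2.7145):.2f}")  # ~15.10, validation ppl at step 1000
print(f"{math.exp(2.4640):.2f}")  # ~11.75, ppl at the headline eval loss
```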
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0