---
license: apache-2.0
base_model: mistralai/Mistral-7B-v0.1
tags:
- generated_from_trainer
model-index:
- name: Mistral_Sparse_refined_web_70p_2024-02-15
results: []
---
# Mistral_Sparse_refined_web_70p_2024-02-15
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1). The training dataset was not recorded in the trainer metadata (it is listed as `None`); the model name suggests a RefinedWeb-derived corpus.
It achieves the following results on the evaluation set:
- Loss: 2.2065
## Model description
Not documented by the author. The model name suggests Mistral-7B-v0.1 fine-tuned with roughly 70% activation sparsity ("70p") on RefinedWeb-style data, but this is inferred from the name rather than confirmed metadata.
## Intended uses & limitations
More information needed
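The author provides no usage guidance. As a minimal, hypothetical loading sketch (the repo id is inferred from the model name, and standard `transformers`-compatible weights are assumed; sparse-activation models sometimes ship custom modeling code, which is not confirmed here):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical repo id, inferred from the model name on this card.
model_id = "vxbrandon/Mistral_Sparse_refined_web_70p_2024-02-15"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("The capital of France is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The usual limitations of the base Mistral-7B-v0.1 model (no alignment or moderation fine-tuning) should be assumed to carry over.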
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 0
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 8
- total_train_batch_size: 16
- total_eval_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 1000
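For reference, these settings correspond roughly to the following `TrainingArguments` (a reconstruction sketch; the original training script is not part of this card, and the output directory name is a placeholder):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="Mistral_Sparse_refined_web_70p_2024-02-15",  # placeholder
    learning_rate=1e-5,
    per_device_train_batch_size=1,  # x 2 GPUs x 8 accumulation steps = 16 effective
    per_device_eval_batch_size=1,   # x 2 GPUs = 2 effective
    gradient_accumulation_steps=8,
    seed=0,
    max_steps=1000,
    lr_scheduler_type="linear",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```

Launched on 2 GPUs (e.g. via `torchrun --nproc_per_node=2 train.py`, where `train.py` is hypothetical), this yields the effective batch sizes of 16 (train) and 2 (eval) listed above.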
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.0372 | 0.0 | 25 | 3.1256 |
| 2.6176 | 0.01 | 50 | 2.8951 |
| 2.5321 | 0.01 | 75 | 2.7409 |
| 2.4603 | 0.02 | 100 | 2.6753 |
| 2.4033 | 0.02 | 125 | 2.6424 |
| 2.4821 | 0.02 | 150 | 2.6147 |
| 2.4008 | 0.03 | 175 | 2.5858 |
| 2.3651 | 0.03 | 200 | 2.5688 |
| 2.3873 | 0.04 | 225 | 2.5565 |
| 2.4145 | 0.04 | 250 | 2.5470 |
| 2.3295 | 0.04 | 275 | 2.5321 |
| 2.3458 | 0.05 | 300 | 2.5185 |
| 2.3587 | 0.05 | 325 | 2.5146 |
| 2.1873 | 0.06 | 350 | 2.5093 |
| 2.3502 | 0.06 | 375 | 2.5093 |
| 2.3837 | 0.06 | 400 | 2.5021 |
| 2.3747 | 0.07 | 425 | 2.4994 |
| 2.3292 | 0.07 | 450 | 2.4957 |
| 2.2438 | 0.08 | 475 | 2.4940 |
| 2.3102 | 0.08 | 500 | 2.4889 |
| 2.3791 | 0.08 | 525 | 2.4858 |
| 2.2743 | 0.09 | 550 | 2.4827 |
| 2.4148 | 0.09 | 575 | 2.4813 |
| 2.2115 | 0.1 | 600 | 2.4830 |
| 2.2963 | 0.1 | 625 | 2.4834 |
| 2.3762 | 0.1 | 650 | 2.4805 |
| 2.3657 | 0.11 | 675 | 2.4764 |
| 2.3219 | 0.11 | 700 | 2.4746 |
| 2.3166 | 0.12 | 725 | 2.4712 |
| 2.2193 | 0.12 | 750 | 2.4747 |
| 2.2629 | 0.12 | 775 | 2.4703 |
| 2.3504 | 0.13 | 800 | 2.4732 |
| 2.3523 | 0.13 | 825 | 2.4662 |
| 2.3362 | 0.14 | 850 | 2.4645 |
| 2.2020 | 0.14 | 875 | 2.4659 |
| 2.2795 | 0.14 | 900 | 2.4682 |
| 2.2254 | 0.15 | 925 | 2.4621 |
| 2.3507 | 0.15 | 950 | 2.4642 |
| 2.2825 | 0.16 | 975 | 2.4624 |
| 2.3301 | 0.16 | 1000 | 2.4603 |
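Since these are cross-entropy losses, they convert to perplexity via `exp(loss)`; an illustrative check on the reported numbers:

```python
import math

# Perplexity = exp(cross-entropy loss), using values reported in this card.
print(math.exp(2.2065))  # final evaluation loss      -> ~9.08
print(math.exp(2.4603))  # step-1000 validation loss  -> ~11.71
```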
### Framework versions
- Transformers 4.36.2
- Pytorch 2.1.2+cu121
- Datasets 2.15.0
- Tokenizers 0.15.0
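To approximate the original environment, these versions can be pinned directly, e.g. `pip install transformers==4.36.2 datasets==2.15.0 tokenizers==0.15.0` together with a PyTorch 2.1.2 build for CUDA 12.1.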