---
license: apache-2.0
library_name: peft
tags:
- trl
- sft
- generated_from_trainer
datasets:
- generator
base_model: mistralai/Mistral-7B-Instruct-v0.2
model-index:
- name: mistral7binstruct_summarize
  results: []
---

# mistral7binstruct_summarize

This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.2](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2) on the generator dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6323
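
Since this adapter is published with PEFT on top of `mistralai/Mistral-7B-Instruct-v0.2`, a minimal loading sketch might look like the following; the adapter id and the summarization prompt template are assumptions, not recorded in this card.

```python
# Sketch only: load the PEFT adapter on top of the base model and run a
# summarization prompt. The adapter id and prompt template are assumptions.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "mistralai/Mistral-7B-Instruct-v0.2"
adapter_id = "mistral7binstruct_summarize"  # placeholder: repo id or local path of this adapter

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)
model.eval()

prompt = "[INST] Summarize the following document:\n\nYOUR_TEXT_HERE [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=256)
# Decode only the newly generated tokens (skip the prompt).
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```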

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a code sketch reconstructing them follows the list):
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 0.03
- training_steps: 150
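
Given the `trl`/`sft` tags above, these values plausibly map onto a `transformers.TrainingArguments` passed to TRL's `SFTTrainer`. The sketch below is a reconstruction under that assumption; the dataset, tokenizer, and LoRA configuration are not recorded in this card and are omitted.

```python
# Reconstruction sketch: the hyperparameters above expressed as
# transformers.TrainingArguments. Dataset and LoRA/PEFT config are
# not recorded in this card, so only the listed values appear here.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="mistral7binstruct_summarize",
    learning_rate=2e-5,
    per_device_train_batch_size=1,  # total train batch size 2 via accumulation
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,
    seed=42,
    lr_scheduler_type="cosine",
    warmup_steps=0.03,  # as recorded; a fractional value suggests warmup_ratio was intended
    max_steps=150,
    # Optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-8, i.e. the
    # default AdamW settings in transformers 4.38.
)
```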

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.5172        | 0.02  | 5    | 2.3926          |
| 2.2822        | 0.04  | 10   | 2.1537          |
| 2.1109        | 0.06  | 15   | 2.0087          |
| 1.8571        | 0.08  | 20   | 1.9020          |
| 1.8964        | 0.11  | 25   | 1.8310          |
| 1.7335        | 0.13  | 30   | 1.7901          |
| 1.7744        | 0.15  | 35   | 1.7607          |
| 1.8654        | 0.17  | 40   | 1.7396          |
| 1.7379        | 0.19  | 45   | 1.7235          |
| 1.7442        | 0.21  | 50   | 1.7113          |
| 1.6483        | 0.23  | 55   | 1.7011          |
| 1.7006        | 0.25  | 60   | 1.6919          |
| 1.6783        | 0.28  | 65   | 1.6833          |
| 1.6468        | 0.30  | 70   | 1.6754          |
| 1.6116        | 0.32  | 75   | 1.6678          |
| 1.5899        | 0.34  | 80   | 1.6605          |
| 1.7426        | 0.36  | 85   | 1.6538          |
| 1.7244        | 0.38  | 90   | 1.6491          |
| 1.6652        | 0.40  | 95   | 1.6457          |
| 1.7859        | 0.42  | 100  | 1.6422          |
| 1.5836        | 0.44  | 105  | 1.6395          |
| 1.6265        | 0.47  | 110  | 1.6374          |
| 1.5187        | 0.49  | 115  | 1.6358          |
| 1.5989        | 0.51  | 120  | 1.6345          |
| 1.6840        | 0.53  | 125  | 1.6336          |
| 1.6257        | 0.55  | 130  | 1.6329          |
| 1.7211        | 0.57  | 135  | 1.6325          |
| 1.6235        | 0.59  | 140  | 1.6324          |
| 1.5885        | 0.61  | 145  | 1.6323          |
| 1.5885        | 0.64  | 150  | 1.6323          |


### Framework versions

- PEFT 0.9.0
- Transformers 4.38.2
- PyTorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2