---
library_name: peft
tags:
- generated_from_trainer
base_model: meta-llama/Llama-2-7b-hf
model-index:
- name: mistral-stock-finetune
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# mistral-stock-finetune

This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6325
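
Below is a minimal loading sketch using `peft` and `transformers`. The adapter repo id is a placeholder, and access to the gated `meta-llama/Llama-2-7b-hf` base weights is assumed.

```python
# Minimal loading sketch. The adapter repo id is a placeholder; access
# to the gated Llama-2 base weights is assumed.
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base_id = "meta-llama/Llama-2-7b-hf"
adapter_id = "your-username/mistral-stock-finetune"  # placeholder

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)
model.eval()

prompt = "Question: What drives a stock's P/E ratio?\nAnswer:"  # illustrative
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```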

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2.5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1
- training_steps: 500
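
For reference, a sketch of how these hyperparameters map onto `transformers.TrainingArguments`. The output directory and the evaluation/logging cadence (every 25 steps, matching the results table below) are inferred rather than recorded; the Adam betas and epsilon listed above are the Trainer defaults.

```python
# Sketch of the hyperparameters above as TrainingArguments. The output
# dir and eval/logging cadence are assumptions; the Adam betas/epsilon
# listed in the card match the Trainer defaults.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="mistral-stock-finetune",  # assumed
    learning_rate=2.5e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=1,
    max_steps=500,
    evaluation_strategy="steps",  # the table below evaluates every 25 steps
    eval_steps=25,
    logging_steps=25,
)
```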

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.2624        | 0.07  | 25   | 1.0074          |
| 0.8582        | 0.13  | 50   | 0.7799          |
| 0.7578        | 0.20  | 75   | 0.7358          |
| 0.7276        | 0.26  | 100  | 0.7133          |
| 0.7104        | 0.33  | 125  | 0.6944          |
| 0.6791        | 0.40  | 150  | 0.6819          |
| 0.6856        | 0.46  | 175  | 0.6734          |
| 0.6723        | 0.53  | 200  | 0.6658          |
| 0.6629        | 0.59  | 225  | 0.6601          |
| 0.6526        | 0.66  | 250  | 0.6553          |
| 0.6395        | 0.73  | 275  | 0.6505          |
| 0.6537        | 0.79  | 300  | 0.6471          |
| 0.6317        | 0.86  | 325  | 0.6445          |
| 0.6401        | 0.92  | 350  | 0.6405          |
| 0.6412        | 0.99  | 375  | 0.6375          |
| 0.6303        | 1.06  | 400  | 0.6367          |
| 0.6135        | 1.12  | 425  | 0.6347          |
| 0.6107        | 1.19  | 450  | 0.6336          |
| 0.6050        | 1.25  | 475  | 0.6330          |
| 0.6062        | 1.32  | 500  | 0.6325          |


### Framework versions

- PEFT 0.10.1.dev0
- Transformers 4.40.0.dev0
- Pytorch 2.1.0+cu118
- Datasets 2.18.0
- Tokenizers 0.15.2
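
If a standalone checkpoint is preferred over loading the adapter at runtime, the LoRA weights can be folded into the base model with `merge_and_unload`; a sketch follows, again with placeholder repo ids.

```python
# Merge sketch: folds the LoRA adapter into the base weights so the
# result can be loaded without peft. Repo ids are placeholders.
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
model = PeftModel.from_pretrained(base, "your-username/mistral-stock-finetune")
merged = model.merge_and_unload()
merged.save_pretrained("mistral-stock-finetune-merged")

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
tokenizer.save_pretrained("mistral-stock-finetune-merged")
```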