---
license: llama2
library_name: peft
tags:
- trl
- dpo
- generated_from_trainer
base_model: tokyotech-llm/Swallow-7b-instruct-v0.1
model-index:
- name: train_logs
  results: []
---


# train_logs

This model is a fine-tuned version of [tokyotech-llm/Swallow-7b-instruct-v0.1](https://huggingface.co/tokyotech-llm/Swallow-7b-instruct-v0.1) on an unknown dataset.
It achieves the following results on the evaluation set (metric definitions follow the list):
- Loss: 0.6776
- Rewards/chosen: 0.1044
- Rewards/rejected: 0.0678
- Rewards/accuracies: 0.5983
- Rewards/margins: 0.0365
- Logps/rejected: -195.0584
- Logps/chosen: -198.8751
- Logits/rejected: -1.2872
- Logits/chosen: -1.2718
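
As a reading aid (these definitions come from TRL's DPO trainer, not from this card): `Rewards/chosen` and `Rewards/rejected` are batch means of the implicit DPO reward

$$
\hat{r}_\theta(x, y) = \beta \log \frac{\pi_\theta(y \mid x)}{\pi_{\mathrm{ref}}(y \mid x)},
$$

computed for the chosen and rejected completions respectively; `Rewards/margins` is the mean difference between the two, and `Rewards/accuracies` is the fraction of preference pairs whose chosen completion receives the higher implicit reward. `Logps/*` and `Logits/*` are mean log-probabilities and logits of the completions under the policy.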

## Model description

Per the metadata above, this repository holds a PEFT adapter for [tokyotech-llm/Swallow-7b-instruct-v0.1](https://huggingface.co/tokyotech-llm/Swallow-7b-instruct-v0.1), trained with TRL's DPO (Direct Preference Optimization) trainer; further details were not provided.
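
A minimal loading sketch, assuming the adapter is published on the Hub (the repo id below is a placeholder, not this adapter's actual id):

```python
import torch
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

ADAPTER_ID = "your-username/train_logs"  # hypothetical: replace with the actual Hub id

# AutoPeftModelForCausalLM reads the base model id
# (tokyotech-llm/Swallow-7b-instruct-v0.1) from the adapter config,
# downloads the base weights, and attaches the PEFT adapter on top.
model = AutoPeftModelForCausalLM.from_pretrained(
    ADAPTER_ID,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
# Adapter repos often omit the tokenizer, so load it from the base model.
tokenizer = AutoTokenizer.from_pretrained("tokyotech-llm/Swallow-7b-instruct-v0.1")

prompt = "Question: What is the highest mountain in Japan? Answer:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```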

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a reproduction sketch follows the list):
- learning_rate: 5e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 300
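
For reproduction, a sketch along these lines would match the listed values. TRL itself is not pinned in this card; this assumes a release contemporary with the framework versions below (≈0.8.x, where `beta` is passed to `DPOTrainer` directly). The dataset, LoRA settings, and DPO `beta` are not recorded in the card, so the values marked "assumption" are illustrative placeholders:

```python
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

BASE = "tokyotech-llm/Swallow-7b-instruct-v0.1"
model = AutoModelForCausalLM.from_pretrained(BASE)
tokenizer = AutoTokenizer.from_pretrained(BASE)

# The training data is unknown; any dataset with "prompt", "chosen", and
# "rejected" text columns fits DPOTrainer's expected format (assumption).
train_dataset = load_dataset("path/to/preference-dataset", split="train")

# LoRA hyperparameters were not recorded in the card (assumption).
peft_config = LoraConfig(r=16, lora_alpha=32, lora_dropout=0.05,
                         task_type="CAUSAL_LM")

# These mirror the hyperparameters listed above; the default AdamW optimizer
# already uses betas=(0.9, 0.999) and epsilon=1e-08.
args = TrainingArguments(
    output_dir="train_logs",
    learning_rate=5e-6,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=2,  # total train batch size: 4 * 2 = 8
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    max_steps=300,
    seed=42,
)

trainer = DPOTrainer(
    model=model,
    ref_model=None,   # with peft_config set, TRL uses the frozen base model
                      # (adapter disabled) as the DPO reference model
    beta=0.1,         # not recorded in the card; TRL's default (assumption)
    args=args,
    train_dataset=train_dataset,
    tokenizer=tokenizer,
    peft_config=peft_config,
)
trainer.train()
```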

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:------:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.6922        | 0.0351 | 50   | 0.6910          | -0.0173        | -0.0222          | 0.5433             | 0.0050          | -195.9592      | -200.0917    | -1.3115         | -1.2970       |
| 0.6915        | 0.0702 | 100  | 0.6841          | 0.0935         | 0.0721           | 0.5900             | 0.0214          | -195.0160      | -198.9837    | -1.2971         | -1.2823       |
| 0.6819        | 0.1053 | 150  | 0.6792          | 0.1455         | 0.1116           | 0.5900             | 0.0339          | -194.6210      | -198.4638    | -1.2865         | -1.2708       |
| 0.6825        | 0.1404 | 200  | 0.6784          | 0.1161         | 0.0811           | 0.5933             | 0.0350          | -194.9258      | -198.7577    | -1.2871         | -1.2717       |
| 0.6791        | 0.1754 | 250  | 0.6769          | 0.1049         | 0.0670           | 0.6183             | 0.0378          | -195.0665      | -198.8701    | -1.2885         | -1.2730       |
| 0.6826        | 0.2105 | 300  | 0.6776          | 0.1044         | 0.0678           | 0.5983             | 0.0365          | -195.0584      | -198.8751    | -1.2872         | -1.2718       |


### Framework versions

- PEFT 0.11.1
- Transformers 4.41.0
- PyTorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1