qhar0h committed
Commit 7ee85e0
Parent: 3a4932f

Model save
README.md CHANGED
@@ -16,16 +16,6 @@ should probably proofread and complete it, then remove this comment. -->
  # openhermes-mistral-dpo-gptq
 
  This model is a fine-tuned version of [TheBloke/OpenHermes-2-Mistral-7B-GPTQ](https://huggingface.co/TheBloke/OpenHermes-2-Mistral-7B-GPTQ) on the None dataset.
- It achieves the following results on the evaluation set:
- - Loss: 0.0522
- - Rewards/chosen: 0.2306
- - Rewards/rejected: -9.1473
- - Rewards/accuracies: 0.9940
- - Rewards/margins: 9.3779
- - Logps/rejected: -139.6582
- - Logps/chosen: -54.3255
- - Logits/rejected: -1.8763
- - Logits/chosen: -2.0675
 
  ## Model description
 
@@ -54,15 +44,6 @@ The following hyperparameters were used during training:
  - training_steps: 30
  - mixed_precision_training: Native AMP
 
- ### Training results
-
- | Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
- |:-------------:|:-----:|:----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
- | 0.4801        | 0.0   | 10   | 0.0657          | 0.3409         | -2.6602          | 0.9980             | 3.0011          | -74.7870       | -53.2224     | -2.0627         | -2.2254       |
- | 0.0563        | 0.0   | 20   | 0.0414          | 0.3013         | -7.6547          | 0.9940             | 7.9559          | -124.7320      | -53.6186     | -1.9099         | -2.1024       |
- | 0.0001        | 0.01  | 30   | 0.0522          | 0.2306         | -9.1473          | 0.9940             | 9.3779          | -139.6582      | -54.3255     | -1.8763         | -2.0675       |
-
-
  ### Framework versions
 
  - Transformers 4.35.2
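As a side note on the removed metrics (not part of the commit itself): in DPO-style evaluation, the reported reward margin is the difference between the chosen and rejected rewards, and the deleted numbers are internally consistent with that. A minimal sketch checking the relation on the final-step values:

```python
# Hypothetical consistency check on the removed DPO eval metrics.
# In DPO evaluation, rewards/margins = rewards/chosen - rewards/rejected.
rewards_chosen = 0.2306
rewards_rejected = -9.1473
reported_margin = 9.3779

margin = rewards_chosen - rewards_rejected
print(round(margin, 4))  # matches the reported rewards/margins of 9.3779
```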
 
runs/Feb17_22-12-20_1827e93c0501/events.out.tfevents.1708211737.1827e93c0501.461.1 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:aeb6d0d8fe0672babd0a522295c0f5a88587e097183b9d7d0a2dfc3ca2f24d74
+ size 5496
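The added TensorBoard event file is stored as a Git LFS pointer rather than as raw bytes: the repository holds only the spec version, the SHA-256 object ID, and the size. A minimal sketch (assuming the standard LFS pointer format of space-separated key/value lines) of parsing such a pointer into its fields:

```python
# Parse a Git LFS pointer file into a dict of its key/value fields.
def parse_lfs_pointer(text: str) -> dict:
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")  # first space separates key from value
        fields[key] = value
    return fields

# The pointer content added by this commit.
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:aeb6d0d8fe0672babd0a522295c0f5a88587e097183b9d7d0a2dfc3ca2f24d74
size 5496"""

info = parse_lfs_pointer(pointer)
print(info["size"])  # 5496
```

The actual file contents live in LFS storage and are fetched by `oid` on checkout; the pointer is all that is versioned in Git.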