afrideva committed
Commit fa1853d (1 parent: 12179ab)

Upload README.md with huggingface_hub

Files changed (1): README.md (+111, -0)

README.md (added):
---
base_model: amazingvince/zephyr-smol_llama-100m-dpo-full
inference: false
license: apache-2.0
model-index:
- name: zephyr-smol_llama-100m-dpo-full
  results: []
model_creator: amazingvince
model_name: zephyr-smol_llama-100m-dpo-full
pipeline_tag: text-generation
quantized_by: afrideva
tags:
- generated_from_trainer
- gguf
- ggml
- quantized
- q2_k
- q3_k_m
- q4_k_m
- q5_k_m
- q6_k
- q8_0
---
# amazingvince/zephyr-smol_llama-100m-dpo-full-GGUF

Quantized GGUF model files for [zephyr-smol_llama-100m-dpo-full](https://huggingface.co/amazingvince/zephyr-smol_llama-100m-dpo-full) from [amazingvince](https://huggingface.co/amazingvince).

| Name | Quant method | Size |
| ---- | ---- | ---- |
| [zephyr-smol_llama-100m-dpo-full.fp16.gguf](https://huggingface.co/afrideva/zephyr-smol_llama-100m-dpo-full-GGUF/resolve/main/zephyr-smol_llama-100m-dpo-full.fp16.gguf) | fp16 | 204.25 MB |
| [zephyr-smol_llama-100m-dpo-full.q2_k.gguf](https://huggingface.co/afrideva/zephyr-smol_llama-100m-dpo-full-GGUF/resolve/main/zephyr-smol_llama-100m-dpo-full.q2_k.gguf) | q2_k | 51.90 MB |
| [zephyr-smol_llama-100m-dpo-full.q3_k_m.gguf](https://huggingface.co/afrideva/zephyr-smol_llama-100m-dpo-full-GGUF/resolve/main/zephyr-smol_llama-100m-dpo-full.q3_k_m.gguf) | q3_k_m | 58.04 MB |
| [zephyr-smol_llama-100m-dpo-full.q4_k_m.gguf](https://huggingface.co/afrideva/zephyr-smol_llama-100m-dpo-full-GGUF/resolve/main/zephyr-smol_llama-100m-dpo-full.q4_k_m.gguf) | q4_k_m | 66.38 MB |
| [zephyr-smol_llama-100m-dpo-full.q5_k_m.gguf](https://huggingface.co/afrideva/zephyr-smol_llama-100m-dpo-full-GGUF/resolve/main/zephyr-smol_llama-100m-dpo-full.q5_k_m.gguf) | q5_k_m | 75.31 MB |
| [zephyr-smol_llama-100m-dpo-full.q6_k.gguf](https://huggingface.co/afrideva/zephyr-smol_llama-100m-dpo-full-GGUF/resolve/main/zephyr-smol_llama-100m-dpo-full.q6_k.gguf) | q6_k | 84.80 MB |
| [zephyr-smol_llama-100m-dpo-full.q8_0.gguf](https://huggingface.co/afrideva/zephyr-smol_llama-100m-dpo-full-GGUF/resolve/main/zephyr-smol_llama-100m-dpo-full.q8_0.gguf) | q8_0 | 109.33 MB |
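
The files above work with any GGUF-compatible runtime. Below is a minimal usage sketch (not part of the original card), assuming the `huggingface_hub` and `llama-cpp-python` packages; the Zephyr-style chat markup in the prompt is an assumption and should be checked against the base model's chat template.

```python
# Hypothetical usage sketch: download one of the quantized files listed above
# and run it locally with the llama-cpp-python bindings.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch the q4_k_m file from this repo (any filename from the table works).
model_path = hf_hub_download(
    repo_id="afrideva/zephyr-smol_llama-100m-dpo-full-GGUF",
    filename="zephyr-smol_llama-100m-dpo-full.q4_k_m.gguf",
)

# Load the GGUF model; n_ctx and other settings here are illustrative defaults.
llm = Llama(model_path=model_path, n_ctx=1024)

# Zephyr-style chat markup is assumed; verify the exact template with the base model.
prompt = "<|user|>\nWhat is a GGUF file?</s>\n<|assistant|>\n"
output = llm(prompt, max_tokens=128, stop=["</s>"])
print(output["choices"][0]["text"])
```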

## Original Model Card:

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# zephyr-smol_llama-100m-dpo-full

This model is a fine-tuned version of [amazingvince/zephyr-smol_llama-100m-sft-full](https://huggingface.co/amazingvince/zephyr-smol_llama-100m-sft-full) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5465
- Rewards/chosen: -0.0518
- Rewards/rejected: -0.7661
- Rewards/accuracies: 0.7170
- Rewards/margins: 0.7143
- Logps/rejected: -450.2018
- Logps/chosen: -588.7877
- Logits/rejected: -4.9602
- Logits/chosen: -5.2468
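
As a quick consistency check (not from the original card): in DPO-style evaluation the reported margin is typically the chosen reward minus the rejected reward, and the numbers above agree with that.

```python
# Sanity check on the reported DPO metrics: margin = chosen reward - rejected reward.
rewards_chosen = -0.0518
rewards_rejected = -0.7661
print(round(rewards_chosen - rewards_rejected, 4))  # 0.7143, matching Rewards/margins
```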

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 16
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
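
The original training script is not included in this card; as a hedged sketch only, the hyperparameters above map onto `transformers.TrainingArguments` roughly as follows (the `output_dir` name is hypothetical, and the total batch sizes come from the per-device size times the two devices).

```python
# Illustrative mapping of the listed hyperparameters onto transformers.TrainingArguments.
# This is an assumption about the setup, not the original training code.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="zephyr-smol_llama-100m-dpo-full",  # hypothetical output directory
    learning_rate=5e-7,
    per_device_train_batch_size=8,   # total_train_batch_size = 8 * 2 devices = 16
    per_device_eval_batch_size=8,    # total_eval_batch_size  = 8 * 2 devices = 16
    seed=42,
    num_train_epochs=3,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    adam_beta1=0.9,                  # "Adam with betas=(0.9,0.999) and epsilon=1e-08"
    adam_beta2=0.999,
    adam_epsilon=1e-8,
)
```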

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rewards/chosen | Rewards/rejected | Rewards/accuracies | Rewards/margins | Logps/rejected | Logps/chosen | Logits/rejected | Logits/chosen |
|:-------------:|:-----:|:-----:|:---------------:|:--------------:|:----------------:|:------------------:|:---------------:|:--------------:|:------------:|:---------------:|:-------------:|
| 0.6549 | 0.26 | 1000 | 0.6037 | -0.1205 | -0.4850 | 0.6550 | 0.3644 | -447.3903 | -589.4750 | -4.7410 | -5.0341 |
| 0.5349 | 0.52 | 2000 | 0.5779 | -0.0126 | -0.5080 | 0.6770 | 0.4955 | -447.6208 | -588.3951 | -4.8645 | -5.1463 |
| 0.6029 | 0.77 | 3000 | 0.5657 | 0.0902 | -0.4636 | 0.6900 | 0.5538 | -447.1767 | -587.3674 | -5.0016 | -5.2911 |
| 0.5273 | 1.03 | 4000 | 0.5596 | 0.0496 | -0.5449 | 0.7040 | 0.5944 | -447.9891 | -587.7738 | -4.9972 | -5.2892 |
| 0.5 | 1.29 | 5000 | 0.5557 | 0.0585 | -0.6110 | 0.7050 | 0.6695 | -448.6505 | -587.6843 | -5.0108 | -5.3047 |
| 0.5056 | 1.55 | 6000 | 0.5499 | 0.0054 | -0.6719 | 0.7130 | 0.6773 | -449.2598 | -588.2154 | -4.9988 | -5.2907 |
| 0.4608 | 1.81 | 7000 | 0.5500 | -0.0376 | -0.7494 | 0.7030 | 0.7118 | -450.0341 | -588.6455 | -5.0549 | -5.3406 |
| 0.426 | 2.07 | 8000 | 0.5472 | -0.0106 | -0.7021 | 0.7100 | 0.6916 | -449.5617 | -588.3751 | -4.9750 | -5.2626 |
| 0.3875 | 2.32 | 9000 | 0.5464 | -0.0011 | -0.7171 | 0.7140 | 0.7159 | -449.7113 | -588.2810 | -4.9935 | -5.2796 |
| 0.397 | 2.58 | 10000 | 0.5462 | -0.0391 | -0.7566 | 0.7190 | 0.7175 | -450.1064 | -588.6602 | -4.9737 | -5.2618 |
| 0.4486 | 2.84 | 11000 | 0.5459 | -0.0493 | -0.7667 | 0.7110 | 0.7174 | -450.2074 | -588.7629 | -4.9569 | -5.2441 |

### Framework versions

- Transformers 4.35.0
- Pytorch 2.1.0
- Datasets 2.14.6
- Tokenizers 0.14.1