---
license: mit
---

# 0425

This model is a fine-tuned version of [Qwen/Qwen1.5-7B](https://huggingface.co/Qwen/Qwen1.5-7B) on the alpaca_formatted_ift_eft_Justification dataset.
It achieves the following results on the evaluation set:

- Loss: 0.8213
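
If the reported loss is the mean token-level cross-entropy, this corresponds to a perplexity of roughly exp(0.8213) ≈ 2.27 on the evaluation set.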

## Model description

Qwen1.5 is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data. Compared with the previously released Qwen, the improvements include:

* 8 model sizes: 0.5B, 1.8B, 4B, 7B, 14B, 32B, and 72B dense models, plus an MoE model with 14B total parameters and 2.7B activated;
* Significant performance improvements in the Chat models;
* Multilingual support for both base and chat models;
* Stable support for 32K context length across all model sizes;
* No need for `trust_remote_code`.

For more details, please refer to the [blog post](https://qwenlm.github.io/blog/qwen1.5/) and [GitHub repo](https://github.com/QwenLM/Qwen1.5).
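
Because the framework versions below list PEFT, this repository most likely hosts a LoRA-style adapter on top of the base model rather than fully merged weights. The following is a minimal inference sketch under that assumption; the adapter repo id `WDong/0425` is a guess based on the card title and should be replaced with the actual repository id, and the prompt should follow the alpaca-style template used for fine-tuning.

```python
# Minimal sketch, assuming this repo hosts a PEFT adapter for Qwen/Qwen1.5-7B.
# "WDong/0425" is a placeholder adapter id; replace it with the real repo id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "Qwen/Qwen1.5-7B"
adapter_id = "WDong/0425"  # placeholder, see note above

tokenizer = AutoTokenizer.from_pretrained(base_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_id,
    torch_dtype=torch.bfloat16,  # assumption: bf16 inference on a recent GPU
    device_map="auto",
)
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()

prompt = "Explain what a fine-tuned adapter is in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=128)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```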

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 3
- gradient_accumulation_steps: 2
- total_train_batch_size: 12
- total_eval_batch_size: 3
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 20
- num_epochs: 5.0
- mixed_precision_training: Native AMP
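
For reference, the list above maps onto `transformers.TrainingArguments` roughly as in the sketch below. This is a reconstruction, not the original training script: the output directory, optimizer name, and fp16 flag are assumptions. Launching with 3 processes (e.g. `torchrun --nproc_per_node 3 ...`) reproduces the total train batch size of 2 × 3 × 2 = 12 and the total eval batch size of 1 × 3 = 3.

```python
# Sketch only: expresses the hyperparameters above as TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./0425",              # assumption
    learning_rate=5e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=1,
    seed=42,
    gradient_accumulation_steps=2,
    optim="adamw_torch",              # assumption; betas/epsilon below match the card
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_steps=20,
    num_train_epochs=5.0,
    fp16=True,                        # "Native AMP"; bf16=True is equally plausible
)
```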

### Training results

| Training Loss | Epoch | Step | Validation Loss |
| :-----------: | :----: | :--: | :-------------: |
| 1.0669 | 0.2018 | 100 | 0.8823 |
| 0.9156 | 0.4036 | 200 | 0.8593 |
| 0.9509 | 0.6054 | 300 | 0.8491 |
| 0.8287 | 0.8073 | 400 | 0.8423 |
| 0.8772 | 1.0091 | 500 | 0.8390 |
| 0.9101 | 1.2109 | 600 | 0.8385 |
| 0.8212 | 1.4127 | 700 | 0.8342 |
| 0.8721 | 1.6145 | 800 | 0.8327 |
| 1.0033 | 1.8163 | 900 | 0.8319 |
| 0.9879 | 2.0182 | 1000 | 0.8276 |
| 0.9640 | 2.2200 | 1100 | 0.8276 |
| 0.8409 | 2.4218 | 1200 | 0.8264 |
| 0.8055 | 2.6236 | 1300 | 0.8262 |
| 1.0026 | 2.8254 | 1400 | 0.8240 |
| 0.8810 | 3.0272 | 1500 | 0.8241 |
| 1.0058 | 3.2291 | 1600 | 0.8226 |
| 0.8747 | 3.4309 | 1700 | 0.8205 |
| 0.8686 | 3.6327 | 1800 | 0.8215 |
| 0.8838 | 3.8345 | 1900 | 0.8208 |
| 0.8246 | 4.0363 | 2000 | 0.8218 |
| 0.8727 | 4.2381 | 2100 | 0.8216 |
| 0.8737 | 4.4400 | 2200 | 0.8214 |
| 0.8955 | 4.6418 | 2300 | 0.8214 |
| 0.8909 | 4.8436 | 2400 | 0.8215 |

### Framework versions

- PEFT 0.10.0
- Transformers 4.40.0
- PyTorch 2.1.0+cu121
- Datasets 2.14.5
- Tokenizers 0.19.1