WDong committed on
Commit 101fbea
1 Parent(s): d68ee66

Update README.md

Files changed (1): README.md (+81 -0)
---
license: mit
---

# 0428

This model is a fine-tuned version of `../../models/Qwen1.5-7B-sft-0425` on the alpaca_formatted_review_new_data_greater_7 dataset.
It achieves the following results on the evaluation set:

- Loss: 1.0733

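The dataset name suggests Alpaca-style instruction records. For orientation only, the sketch below shows the generic Alpaca record layout; the field values are illustrative placeholders and are not taken from alpaca_formatted_review_new_data_greater_7.

```python
import json

# A generic Alpaca-format record (illustrative placeholder values only).
example_record = {
    "instruction": "Review the following response and rate its quality from 1 to 10.",
    "input": "<text to be reviewed>",
    "output": "<reference review / rating>",
}

# Datasets in this layout are usually stored as a JSON list of such records.
print(json.dumps([example_record], ensure_ascii=False, indent=2))
```
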
## Model description

Qwen1.5 is the beta version of Qwen2, a transformer-based decoder-only language model pretrained on a large amount of data. In comparison with the previously released Qwen, the improvements include:

* 8 model sizes, including 0.5B, 1.8B, 4B, 7B, 14B, 32B, and 72B dense models, plus an MoE model with 14B total and 2.7B activated parameters;
* Significant performance improvement in Chat models;
* Multilingual support for both base and chat models;
* Stable support of 32K context length for models of all sizes;
* No need for `trust_remote_code`.

For more details, please refer to the [blog post](https://qwenlm.github.io/blog/qwen1.5/) and [GitHub repo](https://github.com/QwenLM/Qwen1.5).

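Because `trust_remote_code` is not required, a Qwen1.5-style checkpoint loads with the stock `transformers` classes. The minimal inference sketch below uses the public `Qwen/Qwen1.5-7B-Chat` checkpoint as a stand-in; substitute the path of this fine-tuned model when running it locally.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Stand-in checkpoint; replace with the local path of this fine-tuned model.
model_id = "Qwen/Qwen1.5-7B-Chat"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Build a chat-formatted prompt and generate a short completion.
messages = [{"role": "user", "content": "Give a one-sentence summary of Qwen1.5."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
outputs = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```
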
## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a rough `TrainingArguments` equivalent is sketched after the list):

- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- total_eval_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 5
- num_epochs: 5.0
- mixed_precision_training: Native AMP

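The sketch below maps these settings onto `transformers.TrainingArguments`, assuming a standard Trainer-based run; `output_dir` and the `fp16` flag are assumptions, and the Adam betas/epsilon above are the library defaults.

```python
from transformers import TrainingArguments

# A rough equivalent of the hyperparameters listed above (a sketch, not the exact script).
args = TrainingArguments(
    output_dir="outputs/0428",        # assumed output path
    learning_rate=5e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=2,
    num_train_epochs=5.0,
    lr_scheduler_type="cosine",
    warmup_steps=5,
    seed=42,
    fp16=True,                        # "Native AMP" mixed precision (assumed fp16 rather than bf16)
)

# With 2 GPUs, the effective train batch size is
# 2 (per device) * 2 (grad accum) * 2 (devices) = 8, matching total_train_batch_size.
```
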
### Training results

| Training Loss | Epoch | Step | Validation Loss |
| :-----------: | :---: | :--: | :-------------: |
| 0.8554 | 0.25 | 10 | 1.1541 |
| 0.6139 | 0.5 | 20 | 1.1258 |
| 0.629 | 0.75 | 30 | 1.1057 |
| 0.7943 | 1.0 | 40 | 1.0993 |
| 0.6658 | 1.25 | 50 | 1.0964 |
| 0.778 | 1.5 | 60 | 1.0892 |
| 0.593 | 1.75 | 70 | 1.0868 |
| 0.8847 | 2.0 | 80 | 1.0816 |
| 0.5067 | 2.25 | 90 | 1.0806 |
| 0.9706 | 2.5 | 100 | 1.0789 |
| 0.7302 | 2.75 | 110 | 1.0763 |
| 0.6855 | 3.0 | 120 | 1.0768 |
| 0.4358 | 3.25 | 130 | 1.0754 |
| 0.5777 | 3.5 | 140 | 1.0740 |
| 0.5687 | 3.75 | 150 | 1.0732 |
| 0.6462 | 4.0 | 160 | 1.0732 |
| 0.5465 | 4.25 | 170 | 1.0733 |
| 0.7926 | 4.5 | 180 | 1.0737 |
| 0.4968 | 4.75 | 190 | 1.0735 |
| 0.6406 | 5.0 | 200 | 1.0733 |

### Framework versions

- PEFT 0.10.0
- Transformers 4.40.0
- PyTorch 2.1.0+cu121
- Datasets 2.14.5
- Tokenizers 0.19.1
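Since PEFT is listed above, this checkpoint is presumably a PEFT adapter rather than full merged weights. A hedged loading sketch follows; the adapter path is a placeholder, the public `Qwen/Qwen1.5-7B` checkpoint stands in for the local sft-0425 base, and `merge_and_unload` applies only if this is a LoRA-style adapter.

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Stand-in for the base model named in this card (../../models/Qwen1.5-7B-sft-0425).
base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen1.5-7B", torch_dtype="auto", device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen1.5-7B")

# Attach the fine-tuned adapter; "path/to/this-adapter" is a placeholder.
model = PeftModel.from_pretrained(base, "path/to/this-adapter")

# Optionally merge the adapter into the base weights for plain transformers inference
# (valid for LoRA-style adapters).
merged = model.merge_and_unload()
```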