qgyd2021 committed
Commit 2b195f6
1 Parent(s): e3a6d76

Model save

Files changed (2)
  1. README.md +56 -3
  2. pytorch_model.bin +1 -1
README.md CHANGED
@@ -12,8 +12,61 @@ should probably proofread and complete it, then remove this comment. -->
 
  # chinese_chitchat
 
- This model is based on [uer/gpt2-chinese-cluecorpussmall](https://huggingface.co/uer/gpt2-chinese-cluecorpussmall) and was fine-tuned on the [xiaohuangji](https://huggingface.co/datasets/qgyd2021/chinese_chitchat/viewer/xiaohuangji) subset of the [qgyd2021/chinese_chitchat](https://huggingface.co/datasets/qgyd2021/chinese_chitchat) dataset.
 
- Because many samples in the xiaohuangji subset pair questions with unrelated answers, the data is quite noisy; despite roughly 450,000 samples, the resulting model does not perform especially well.
 
- Training was run twice, 26,000 steps the first time and 8,000 steps the second, for roughly 10 epochs in total.
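As a quick sanity check on the "roughly 10 epochs" figure above, here is a back-of-the-envelope calculation; the effective batch size of 128 is an assumption carried over from the totals reported later in this card, not something stated for those two earlier runs.

```python
# Back-of-the-envelope check of the "roughly 10 epochs" claim.
samples = 450_000              # size of the xiaohuangji subset, per the note above
effective_batch_size = 128     # assumption: same total batch size as reported below
steps = 26_000 + 8_000         # the two training runs
epochs = steps * effective_batch_size / samples
print(f"{epochs:.1f} epochs")  # ~9.7, consistent with "roughly 10 epochs"
```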
+ This model is a fine-tuned version of [qgyd2021/chinese_chitchat](https://huggingface.co/qgyd2021/chinese_chitchat) on the [xiaohuangji](https://huggingface.co/datasets/qgyd2021/chinese_chitchat/viewer/xiaohuangji) subset of the [qgyd2021/chinese_chitchat](https://huggingface.co/datasets/qgyd2021/chinese_chitchat) dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 2.1314
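Since the card does not yet include a usage example, below is a minimal inference sketch with the `transformers` library. It assumes the checkpoint loads with the standard `AutoTokenizer`/`AutoModelForCausalLM` classes (as its base model `uer/gpt2-chinese-cluecorpussmall` does) and that a single user utterance can be fed directly as the prompt; the turn-separator convention used during fine-tuning is not documented here, so both points are assumptions rather than the author's documented usage.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Assumption: the repo exposes a standard GPT-2 causal LM with a BERT-style
# Chinese tokenizer, like its base model uer/gpt2-chinese-cluecorpussmall.
model_name = "qgyd2021/chinese_chitchat"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Assumption: one user utterance is used directly as the prompt.
prompt = "你好"
input_ids = tokenizer.encode(prompt, return_tensors="pt")
output_ids = model.generate(
    input_ids,
    max_new_tokens=64,
    do_sample=True,
    top_k=50,
    top_p=0.95,
    pad_token_id=tokenizer.pad_token_id,
)
# Decode only the newly generated tokens; the BERT-style tokenizer inserts
# spaces between Chinese characters, which are stripped here.
reply = tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=True)
print(reply.replace(" ", ""))
```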
 
+ ## Model description
 
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training (a configuration sketch follows this list):
+ - learning_rate: 0.0002
+ - train_batch_size: 16
+ - eval_batch_size: 8
+ - seed: 42
+ - distributed_type: multi-GPU
+ - num_devices: 2
+ - gradient_accumulation_steps: 4
+ - total_train_batch_size: 128
+ - total_eval_batch_size: 16
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
+ - lr_scheduler_type: linear
+ - lr_scheduler_warmup_steps: 10000
+ - num_epochs: 40.0
+
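As referenced above, here is a minimal sketch of how these values could map onto `transformers.TrainingArguments`. Only the listed values are taken from the list; the `output_dir` is a placeholder, the multi-GPU launch (2 devices, e.g. via `torchrun`) happens outside this snippet, and everything else is left at library defaults.

```python
from transformers import TrainingArguments

# Hypothetical mapping of the hyperparameters listed above onto TrainingArguments
# (field names valid for the Transformers 4.33 release pinned in this card).
# With 2 GPUs: 16 * 2 * 4 = 128 total train batch, 8 * 2 = 16 total eval batch.
training_args = TrainingArguments(
    output_dir="chinese_chitchat",      # placeholder output directory
    learning_rate=2e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=10000,
    num_train_epochs=40.0,
)
```

Passing this object to a standard `Trainer` along with the tokenized dataset would reproduce the schedule described above; the evaluation cadence seen in the results table (every 1000 steps) would additionally need `evaluation_strategy="steps"` and `eval_steps=1000`, which are not listed in the card and are therefore inferred.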
+ ### Training results
+
+ | Training Loss | Epoch | Step  | Validation Loss |
+ |:-------------:|:-----:|:-----:|:---------------:|
+ | 1.5203        | 0.29  | 1000  | 2.0882          |
+ | 1.4243        | 0.58  | 2000  | 2.1525          |
+ | 1.3502        | 0.86  | 3000  | 2.1544          |
+ | 1.5332        | 1.15  | 4000  | 2.0826          |
+ | 1.5208        | 1.44  | 5000  | 2.0789          |
+ | 1.5521        | 1.73  | 6000  | 2.0613          |
+ | 1.5634        | 2.02  | 7000  | 2.1124          |
+ | 1.5067        | 2.3   | 8000  | 2.1014          |
+ | 1.5573        | 2.59  | 9000  | 2.0972          |
+ | 1.5949        | 2.88  | 10000 | 2.0907          |
+ | 1.5491        | 3.17  | 11000 | 2.1314          |
+
+
+ ### Framework versions
+
+ - Transformers 4.33.0
+ - Pytorch 2.0.0
+ - Datasets 2.1.0
+ - Tokenizers 0.13.3
pytorch_model.bin CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:a784aa34ddf249484a1c8ec63289232467021f4d43bc7f023e6aeeacb616b6bd
+ oid sha256:05886975fd5d351ea65cf7d0426cdd19bb82dc0f36dde5657e50c0f306debe0f
  size 408322909