---
language: zh
datasets: couplet
inference:
  parameters:
    max_length: 108
    num_return_sequences: 1
    do_sample: True
widget:
- text: "燕子归来,问昔日雕梁何处。 -"
  example_title: "对联1"
- text: "笑取琴书温旧梦。 -"
  example_title: "对联2"
- text: "煦煦春风,吹暖五湖四海。 -"
  example_title: "对联3"
---


# Couplet (对联)

## Model description

AI couplet generation: given the first line of a couplet (上联), the model generates a matching second line (下联).

## How to use

Call the model through a pipeline:

```python
>>> # Load the fine-tuned model
>>> senc = "燕子归来,问昔日雕梁何处。 -"
>>> model_id = "supermy/couplet"
>>> from transformers import BertTokenizer, GPT2LMHeadModel, TextGenerationPipeline

>>> tokenizer = BertTokenizer.from_pretrained(model_id)
>>> model = GPT2LMHeadModel.from_pretrained(model_id)
>>> text_generator = TextGenerationPipeline(model, tokenizer)
>>> # Reuse eos_token_id as pad_token_id so generation does not warn about padding
>>> text_generator.model.config.pad_token_id = text_generator.model.config.eos_token_id
>>> text_generator(senc, max_length=25, do_sample=True)
[{'generated_text': '燕子归来,问昔日雕梁何处。 - 风 儿 吹 醒 , 叹 今 朝 烟 雨 无'}]
```
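
Because `do_sample=True`, each call can sample a different second line. For repeatable greedy output, sampling can be turned off:

```python
>>> text_generator(senc, max_length=25, do_sample=False)
```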

Here is how to use this model to get the features of a given text in PyTorch:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("supermy/couplet")
model = AutoModelForCausalLM.from_pretrained("supermy/couplet")
```
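
The loaded model can also be driven directly with `generate`; a minimal sketch assuming the tokenizer and model from the block above (the sampling settings mirror the pipeline example and are not tuned defaults):

```python
import torch

# Encode an opening line; the trailing " -" separator follows the training format
input_ids = tokenizer("笑取琴书温旧梦。 -", return_tensors="pt").input_ids

# Sample a second line; max_length and do_sample mirror the pipeline example
with torch.no_grad():
    output_ids = model.generate(
        input_ids,
        max_length=25,
        do_sample=True,
        pad_token_id=tokenizer.pad_token_id,
    )
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```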

## Training data

The dataset is based on the 700k entries of couplet-dataset. On top of that, the data were filtered against a sensitive-word lexicon, deleting vulgar or sensitive content; about 740k couplet entries remain after deletion.
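
The original filtering script is not included; below is a hypothetical sketch of this kind of sensitive-word filtering (the lexicon file name and the pair layout are assumptions, not the actual preprocessing code):

```python
# Hypothetical illustration of the sensitive-word filtering step;
# "sensitive_words.txt" and the data layout are assumptions.
def load_lexicon(path: str) -> set:
    with open(path, encoding="utf-8") as f:
        return {line.strip() for line in f if line.strip()}

def filter_couplets(pairs, lexicon):
    # Keep a couplet only if neither line contains a sensitive word
    for first, second in pairs:
        if not any(word in first or word in second for word in lexicon):
            yield first, second

lexicon = load_lexicon("sensitive_words.txt")
clean_pairs = list(filter_couplets(raw_pairs, lexicon))
```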

## Training procedure

Model: [GPT2](https://huggingface.co/gpt2)
Training environment: a single NVIDIA GPU with 16 GB of memory

BPE tokenization: "vocab_size"=50000
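
The tokenizer-training script is not included in the card; a minimal sketch with the `tokenizers` library, assuming a plain-text corpus file `couplets.txt` (the file name is an assumption):

```python
# Hypothetical sketch of training a BPE tokenizer with vocab_size=50000;
# the corpus file name is an assumption.
from tokenizers import Tokenizer
from tokenizers.models import BPE
from tokenizers.trainers import BpeTrainer
from tokenizers.pre_tokenizers import Whitespace

tokenizer = Tokenizer(BPE(unk_token="[UNK]"))
tokenizer.pre_tokenizer = Whitespace()
trainer = BpeTrainer(
    vocab_size=50000,
    special_tokens=["[UNK]", "[CLS]", "[SEP]", "[PAD]", "[MASK]"],
)
tokenizer.train(files=["couplets.txt"], trainer=trainer)
tokenizer.save("tokenizer.json")
```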

Training log excerpts:

```
[INFO|trainer.py:1608] 2022-11-29 16:00:16,391 >> ***** Running training *****
[INFO|trainer.py:1609] 2022-11-29 16:00:16,391 >>   Num examples = 249327
[INFO|trainer.py:1610] 2022-11-29 16:00:16,391 >>   Num Epochs = 38
[INFO|trainer.py:1611] 2022-11-29 16:00:16,391 >>   Instantaneous batch size per device = 96
[INFO|trainer.py:1612] 2022-11-29 16:00:16,391 >>   Total train batch size (w. parallel, distributed & accumulation) = 96
[INFO|trainer.py:1613] 2022-11-29 16:00:16,391 >>   Gradient Accumulation steps = 1
[INFO|trainer.py:1614] 2022-11-29 16:00:16,391 >>   Total optimization steps = 98724
[INFO|trainer.py:1616] 2022-11-29 16:00:16,392 >>   Number of trainable parameters = 124439808

{'loss': 6.4109, 'learning_rate': 4.975031400672582e-05, 'epoch': 0.19}
{'loss': 5.8476, 'learning_rate': 4.9497082776224627e-05, 'epoch': 0.38}
......
{'loss': 3.4331, 'learning_rate': 1.3573193954864066e-07, 'epoch': 37.91}
{'train_runtime': 65776.233, 'train_samples_per_second': 144.04, 'train_steps_per_second': 1.501, 'train_loss': 3.74187503763847, 'epoch': 38.0}
***** train metrics *****
  epoch                    =       38.0
  train_loss               =     3.7419
  train_runtime            = 18:16:16.23
  train_samples            =     249327
  train_samples_per_second =     144.04
  train_steps_per_second   =      1.501
11/30/2022 10:16:35 - INFO - __main__ - *** Evaluate ***
[INFO|trainer.py:2929] 2022-11-30 10:16:35,902 >> ***** Running Evaluation *****
[INFO|trainer.py:2931] 2022-11-30 10:16:35,902 >>   Num examples = 1290
[INFO|trainer.py:2934] 2022-11-30 10:16:35,902 >>   Batch size = 96
100%|██████████| 14/14 [00:03<00:00,  4.13it/s]
[INFO|modelcard.py:449] 2022-11-30 10:16:40,821 >> Dropping the following result as it does not have all the necessary fields:
{'task': {'name': 'Causal Language Modeling', 'type': 'text-generation'}, 'metrics': [{'name': 'Accuracy', 'type': 'accuracy', 'value': 0.39426602682416634}]}
***** eval metrics *****
  epoch                   =       38.0
  eval_accuracy           =     0.3943
  eval_loss               =      3.546
  eval_runtime            = 0:00:03.67
  eval_samples            =       1290
  eval_samples_per_second =    351.199
  eval_steps_per_second   =      3.811
  perplexity              =    34.6733

```
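
The reported perplexity is simply the exponential of the evaluation loss; a quick check with the values from the log above:

```python
>>> import math
>>> math.exp(3.546)  # eval_loss from the evaluation run
34.674...
```

This agrees with the reported 34.6733 up to rounding of the logged loss.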