yunfan committed · Commit 9622afe · Parent(s): 6e62b6b

update model to version 2.0

Browse files:
- README.md +32 -3
- config.json +3 -2
- pytorch_model.bin +2 -2
- special_tokens_map.json +1 -1
- tokenizer_config.json +1 -1
- vocab.txt +0 -0
README.md
CHANGED
@@ -1,5 +1,5 @@
 ---
-
+tags:
 - fill-mask
 - text2text-generation
 - fill-mask
@@ -13,9 +13,38 @@ tags:
 
 language: zh
 ---
-
 # Chinese CPT-Base
 
+### News
+
+**12/30/2022**
+
+An updated version of CPT & Chinese BART are released. In the new version, we changed the following parts:
+
+- **Vocabulary** We replace the old BERT vocabulary with a larger one of size 51271 built from the training data, in which we 1) add the missing 6800+ Chinese characters (most of them traditional Chinese characters); 2) remove redundant tokens (e.g. Chinese character tokens with a ## prefix); 3) add some English tokens to reduce OOV.
+- **Position Embeddings** We extend max_position_embeddings from 512 to 1024.
+
+We initialize the new version of the models from the old checkpoints with vocabulary alignment: token embeddings found in the old checkpoints are copied, and the other, newly added parameters are randomly initialized. We then further train the new CPT & Chinese BART for 50K steps with batch size 2048, max sequence length 1024, peak learning rate 2e-5, and warmup ratio 0.1.
+
+The results compared to the previous checkpoints are as follows:
+
+|            | AFQMC | IFLYTEK | CSL-sum | LCSTS |  AVG  |
+| :--------- | :---: | :-----: | :-----: | :---: | :---: |
+| Previous   |       |         |         |       |       |
+| bart-base  | 73.0  |  60     |  62.1   | 37.8  | 58.23 |
+| cpt-base   | 75.1  |  60.5   |  63.0   | 38.2  | 59.20 |
+| bart-large | 75.7  |  62.1   |  64.2   | 40.6  | 60.65 |
+| cpt-large  | 75.9  |  61.8   |  63.7   | 42.0  | 60.85 |
+| Updated    |       |         |         |       |       |
+| bart-base  | 73.03 |  61.25  |  61.51  | 38.78 | 58.64 |
+| cpt-base   | 74.40 |  61.23  |  62.09  | 38.81 | 59.13 |
+| bart-large | 75.81 |  61.52  |  64.62  | 40.90 | 60.71 |
+| cpt-large  | 75.97 |  61.63  |  63.83  | 42.08 | 60.88 |
+
+The results show that the updated models maintain performance comparable to the previous checkpoints. In some cases the updated model is still slightly worse than the previous one, for the following reasons: 1) training for a few additional steps did not lead to significant performance improvement; 2) some downstream tasks are not affected by the newly added tokens and longer encoding sequences, but are sensitive to the fine-tuning hyperparameters.
+
+- Note that to use the updated models, please update `modeling_cpt.py` (new version download [Here](https://github.com/fastnlp/CPT/blob/master/finetune/modeling_cpt.py)) and the vocabulary (refresh the cache).
+
 ## Model description
 
 This is an implementation of CPT-Base. To use CPT, please import the file `modeling_cpt.py` (**Download** [Here](https://github.com/fastnlp/CPT/blob/master/finetune/modeling_cpt.py)), which defines the architecture of CPT, into your project.
@@ -50,4 +79,4 @@ Yunfan Shao, Zhichao Geng, Yitao Liu, Junqi Dai, Fei Yang, Li Zhe, Hujun Bao, Xi
   journal={arXiv preprint arXiv:2109.05729},
   year={2021}
 }
-```
+```
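The README's vocabulary-alignment initialization (token embeddings found in the old checkpoint are copied; newly added rows are randomly initialized) can be sketched as below. The function name `align_embeddings` and the toy vocabularies are illustrative, not taken from the CPT codebase:

```python
import numpy as np

def align_embeddings(old_emb, old_vocab, new_vocab, rng=None):
    """Build an embedding matrix for new_vocab.

    Rows for tokens already present in old_vocab are copied from old_emb;
    rows for newly added tokens are randomly initialized.
    """
    rng = rng or np.random.default_rng(0)
    dim = old_emb.shape[1]
    new_emb = rng.normal(scale=0.02, size=(len(new_vocab), dim))
    old_index = {tok: i for i, tok in enumerate(old_vocab)}
    for j, tok in enumerate(new_vocab):
        if tok in old_index:
            new_emb[j] = old_emb[old_index[tok]]  # copy surviving token's row
    return new_emb

# Toy example: two tokens survive; one redundant ##-prefixed token is dropped
# and one traditional-Chinese character is added (as in the v2 vocabulary).
old_vocab = ["[PAD]", "[UNK]", "##中"]
new_vocab = ["[PAD]", "[UNK]", "龍"]
old_emb = np.arange(6, dtype=float).reshape(3, 2)
new_emb = align_embeddings(old_emb, old_vocab, new_vocab)
```

In the real update this alignment is only the starting point; the models are then trained for a further 50K steps as described above.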
config.json
CHANGED
@@ -4,7 +4,7 @@
   "add_bias_logits": false,
   "add_final_layer_norm": false,
   "architectures": [
-    "
+    "BartForConditionalGeneration"
   ],
   "attention_dropout": 0.1,
   "bos_token_id": 101,
@@ -68,5 +68,6 @@
   },
   "transformers_version": "4.4.1",
   "use_cache": true,
-  "
+  "tokenizer_class": "BertTokenizer",
+  "vocab_size": 51271
 }
pytorch_model.bin
CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:
-size
+oid sha256:c895b7e87b22a84618c1e4e84ac907004f6ba35ea0b939e4401f1a93609486cd
+size 579972445
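`pytorch_model.bin` is tracked with Git LFS, so the diff above only touches the pointer file, not the ~580 MB of weights. A pointer in the `git-lfs.github.com/spec/v1` format is three `key value` lines and can be parsed with a few lines of stdlib Python (the helper below is illustrative):

```python
def parse_lfs_pointer(text):
    """Parse a git-lfs spec/v1 pointer file into a dict of its fields."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

pointer = """\
version https://git-lfs.github.com/spec/v1
oid sha256:c895b7e87b22a84618c1e4e84ac907004f6ba35ea0b939e4401f1a93609486cd
size 579972445
"""
info = parse_lfs_pointer(pointer)
# The actual weights live in LFS storage, addressed by the sha256 oid;
# "size" is the byte count of the real file, not of the pointer.
```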
special_tokens_map.json
CHANGED
@@ -1 +1 @@
-{"unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", "cls_token": "[CLS]", "mask_token": "[MASK]"}
+{"bos_token": "[CLS]", "eos_token": "[EOS]", "unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", "cls_token": "[CLS]", "mask_token": "[MASK]"}
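The only change to `special_tokens_map.json` is the addition of `bos_token`/`eos_token` entries (the five existing mappings are unchanged), which can be checked mechanically with stdlib `json`:

```python
import json

old = json.loads('{"unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", '
                 '"cls_token": "[CLS]", "mask_token": "[MASK]"}')
new = json.loads('{"bos_token": "[CLS]", "eos_token": "[EOS]", "unk_token": "[UNK]", '
                 '"sep_token": "[SEP]", "pad_token": "[PAD]", "cls_token": "[CLS]", '
                 '"mask_token": "[MASK]"}')

added = {k: new[k] for k in new.keys() - old.keys()}   # keys only in the new map
changed = {k for k in old if old[k] != new[k]}         # keys whose value changed
# added == {"bos_token": "[CLS]", "eos_token": "[EOS]"}; changed is empty.
```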
tokenizer_config.json
CHANGED
@@ -1 +1 @@
-{"do_lower_case": true, "unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", "cls_token": "[CLS]", "mask_token": "[MASK]", "tokenize_chinese_chars": true, "strip_accents": null, "
+{"do_lower_case": false, "do_basic_tokenize": true, "never_split": null, "unk_token": "[UNK]", "sep_token": "[SEP]", "pad_token": "[PAD]", "cls_token": "[CLS]", "mask_token": "[MASK]", "tokenize_chinese_chars": true, "strip_accents": null, "bos_token": "[CLS]", "eos_token": "[EOS]", "name_or_path": "/remote-home/yfshao/workdir/code-base/Megatron-LM/init_models_ckpt/cpt/base", "special_tokens_map_file": "vocab/cpt_v3_vocab/special_tokens_map.json", "tokenizer_file": null}
vocab.txt
CHANGED
The diff for this file is too large to render.
See raw diff