Fill-Mask
Transformers
PyTorch
Chinese
bert
pretraining
dongjunwei.djw committed on
Commit aec7254 · 1 Parent(s): a34420c

first commit for pai-dkplm-financial-base-zh model

Files changed (4)
  1. README.md +34 -0
  2. config.json +31 -0
  3. pytorch_model.bin +3 -0
  4. vocab.txt +0 -0
README.md CHANGED
@@ -1,3 +1,38 @@
  ---
+ language: zh
+ pipeline_tag: fill-mask
+ widget:
+ - text: "[MASK]"
+ - text: "人类的[MASK]温是37度"
+ tags:
+ - bert
  license: apache-2.0
  ---
+ ## Chinese DKPLM (Decomposable Knowledge-enhanced Pre-trained Language Model) for the financial domain
+ For Chinese natural language processing in specific domains, we provide a **Chinese DKPLM (Decomposable Knowledge-enhanced Pre-trained Language Model)** for the financial domain, named **pai-dkplm-financial-base-zh**, from our AAAI 2021 paper **DKPLM: Decomposable Knowledge-enhanced Pre-trained Language Model for Natural Language Understanding**.
+
+ This repository is built on the EasyNLP framework ([https://github.com/alibaba/EasyNLP](https://github.com/alibaba/EasyNLP)), developed by the Alibaba PAI team.
+
+ ## Citation
+ If you find this resource useful, please cite the following papers in your work.
+
+ - For the EasyNLP framework:
+ ```
+ @article{easynlp,
+   title = {EasyNLP: A Comprehensive and Easy-to-use Toolkit for Natural Language Processing},
+   author = {Wang, Chengyu and Qiu, Minghui and Zhang, Taolin and Liu, Tingting and Li, Lei and Wang, Jianing and Wang, Ming and Huang, Jun and Lin, Wei},
+   publisher = {arXiv},
+   url = {https://arxiv.org/abs/2205.00258},
+   year = {2022}
+ }
+ ```
+ - For DKPLM:
+ ```
+ @article{dkplm,
+   title = {DKPLM: Decomposable Knowledge-enhanced Pre-trained Language Model for Natural Language Understanding},
+   author = {Zhang, Taolin and Wang, Chengyu and Hu, Nan and Qiu, Minghui and Tang, Chengguang and He, Xiaofeng and Huang, Jun},
+   publisher = {arXiv},
+   url = {https://arxiv.org/abs/2112.01047},
+   year = {2021}
+ }
+ ```
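For reference, the `pipeline_tag: fill-mask` and widget examples above map to standard Transformers pipeline usage. Below is a minimal sketch, assuming the checkpoint is published on the Hub as `alibaba-pai/pai-dkplm-financial-base-zh` (the repository id is inferred from the commit message, not stated in this diff):

```python
# Minimal fill-mask sketch; the Hub id below is an assumption inferred
# from the commit message, not confirmed anywhere in this diff.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="alibaba-pai/pai-dkplm-financial-base-zh")

# Same sentence as the README widget ("the human [MASK] temperature is
# 37 degrees"); the model should recover the masked character.
for candidate in fill_mask("人类的[MASK]温是37度"):
    print(candidate["token_str"], round(candidate["score"], 4))
```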
config.json ADDED
@@ -0,0 +1,31 @@
+ {
+   "_name_or_path": "/home/ruyaoXu/Fin/FinBERT/bert-base-chinese",
+   "architectures": [
+     "BertForPreTraining"
+   ],
+   "attention_probs_dropout_prob": 0.1,
+   "classifier_dropout": null,
+   "directionality": "bidi",
+   "hidden_act": "gelu",
+   "hidden_dropout_prob": 0.1,
+   "hidden_size": 768,
+   "initializer_range": 0.02,
+   "intermediate_size": 3072,
+   "layer_norm_eps": 1e-12,
+   "max_position_embeddings": 512,
+   "model_type": "bert",
+   "num_attention_heads": 12,
+   "num_hidden_layers": 12,
+   "pad_token_id": 0,
+   "pooler_fc_size": 768,
+   "pooler_num_attention_heads": 12,
+   "pooler_num_fc_layers": 3,
+   "pooler_size_per_head": 128,
+   "pooler_type": "first_token_transform",
+   "position_embedding_type": "absolute",
+   "torch_dtype": "float32",
+   "transformers_version": "4.12.5",
+   "type_vocab_size": 2,
+   "use_cache": true,
+   "vocab_size": 21128
+ }
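The configuration is a standard Chinese BERT-base layout: 12 layers, 12 attention heads, hidden size 768, and a 21,128-token vocabulary, wired for the `BertForPreTraining` architecture (MLM and NSP heads). As a sketch, the architecture can be rebuilt from such a file to sanity-check shapes and parameter count (the file path is a placeholder):

```python
# Sketch: instantiate the (randomly initialized) architecture from a local
# copy of the config.json above; "config.json" is a placeholder path.
from transformers import BertConfig, BertForPreTraining

config = BertConfig.from_json_file("config.json")
model = BertForPreTraining(config)

# Expect roughly 103M parameters, i.e. ~411 MB in float32, matching the
# checkpoint size recorded in the LFS pointer below.
print(sum(p.numel() for p in model.parameters()))
```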
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:39dbae2574c99747a681e56be672d1ed249cff62fd52ed9fb0a97fe18a5dc65f
+ size 411619295
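`pytorch_model.bin` is stored via Git LFS, so the commit records only a pointer: the blob's SHA-256 and its size (411,619,295 bytes, consistent with a float32 BERT-base checkpoint). A downloaded copy can be checked against the pointer as follows (a sketch; the local filename is assumed):

```python
# Sketch: verify a locally downloaded pytorch_model.bin against the
# oid/size recorded in the LFS pointer above.
import hashlib
from pathlib import Path

EXPECTED_OID = "39dbae2574c99747a681e56be672d1ed249cff62fd52ed9fb0a97fe18a5dc65f"
EXPECTED_SIZE = 411619295

path = Path("pytorch_model.bin")
assert path.stat().st_size == EXPECTED_SIZE, "size mismatch"

sha256 = hashlib.sha256()
with path.open("rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        sha256.update(chunk)
assert sha256.hexdigest() == EXPECTED_OID, "hash mismatch"
print("pytorch_model.bin matches the LFS pointer")
```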
vocab.txt ADDED
The diff for this file is too large to render. See raw diff