wangyulong committed on
Commit 8429dcf • 1 Parent(s): b05b469

Update README.md

Files changed (1): README.md (+11 −2)
README.md CHANGED
@@ -6,7 +6,8 @@ license: apache-2.0
 # Mengzi-BERT base fin model (Chinese)
 Continually pre-trained from mengzi-bert-base on 20 GB of financial news and research reports. Masked language modeling (MLM), part-of-speech (POS) tagging, and sentence order prediction (SOP) are used as training tasks.
 
-[Mengzi: A lightweight yet Powerful Chinese Pre-trained Language Model](www.example.com)
+[Mengzi: Towards Lightweight yet Ingenious Pre-trained Models for Chinese](https://arxiv.org/abs/2110.06696)
+
 ## Usage
 ```python
 from transformers import BertTokenizer, BertModel
@@ -17,5 +18,13 @@ model = BertModel.from_pretrained("Langboat/mengzi-bert-base-fin")
 ## Citation
 If you find the technical report or resource is useful, please cite the following technical report in your paper.
 ```
-example
+@misc{zhang2021mengzi,
+  title={Mengzi: Towards Lightweight yet Ingenious Pre-trained Models for Chinese},
+  author={Zhuosheng Zhang and Hanqing Zhang and Keming Chen and Yuhang Guo and Jingyun Hua and Yulong Wang and Ming Zhou},
+  year={2021},
+  eprint={2110.06696},
+  archivePrefix={arXiv},
+  primaryClass={cs.CL}
+}
+
 ```