uer committed
Commit 31f953a
Parent: a84b2d5

Update README.md

Files changed (1)
  README.md  +15 -1
README.md CHANGED
@@ -9,7 +9,7 @@ widget:
 
 ## Model description
 
-The model is used for named entity recognition. You can download the model from the link [roberta-base-finetuned-cluener2020-chinese](https://huggingface.co/uer/roberta-base-finetuned-cluener2020-chinese).
+The model is used for named entity recognition. You can download the model either from the [UER-py Modelzoo page](https://github.com/dbiir/UER-py/wiki/Modelzoo) (in UER-py format), or via HuggingFace from the link [roberta-base-finetuned-cluener2020-chinese](https://huggingface.co/uer/roberta-base-finetuned-cluener2020-chinese).
 
 ## How to use
 
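Since the updated description points readers to the checkpoint on the Hugging Face Hub, a minimal loading sketch is shown below. It is not part of this commit: the use of the `transformers` Auto classes, the `"ner"` pipeline task, and the example sentence are assumptions for illustration only.

```
# Hypothetical usage sketch (not from this diff): load the Hub checkpoint
# and run it as a token-classification (NER) pipeline.
from transformers import AutoModelForTokenClassification, AutoTokenizer, pipeline

model_name = "uer/roberta-base-finetuned-cluener2020-chinese"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)

# "ner" is the standard token-classification pipeline task; the input
# sentence here is illustrative only.
ner = pipeline("ner", model=model, tokenizer=tokenizer)
print(ner("江苏警方通报特斯拉冲进店铺"))
```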
@@ -60,6 +60,20 @@ python3 scripts/convert_bert_token_classification_from_uer_to_huggingface.py --i
 ### BibTeX entry and citation info
 
 ```
+@article{devlin2018bert,
+  title={BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding},
+  author={Devlin, Jacob and Chang, Ming-Wei and Lee, Kenton and Toutanova, Kristina},
+  journal={arXiv preprint arXiv:1810.04805},
+  year={2018}
+}
+
+@article{liu2019roberta,
+  title={RoBERTa: A Robustly Optimized BERT Pretraining Approach},
+  author={Liu, Yinhan and Ott, Myle and Goyal, Naman and Du, Jingfei and Joshi, Mandar and Chen, Danqi and Levy, Omer and Lewis, Mike and Zettlemoyer, Luke and Stoyanov, Veselin},
+  journal={arXiv preprint arXiv:1907.11692},
+  year={2019}
+}
+
 @article{zhao2019uer,
   title={UER: An Open-Source Toolkit for Pre-training Models},
   author={Zhao, Zhe and Chen, Hui and Zhang, Jinbin and Zhao, Xin and Liu, Tao and Lu, Wei and Chen, Xi and Deng, Haotang and Ju, Qi and Du, Xiaoyong},