---
language:
- zh
license: "apache-2.0"
---
## Chinese Pre-Trained XLNet
This project provides an XLNet model pre-trained on Chinese text, aiming to enrich Chinese natural language processing resources and offer a wider selection of Chinese pre-trained models.
We welcome all experts and scholars to download and use this model.
This project is based on the official CMU/Google XLNet: https://github.com/zihangdai/xlnet
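As a quick-start illustration, the sketch below shows how a checkpoint like this one can typically be loaded through the Hugging Face `transformers` library. The repository identifier `hfl/chinese-xlnet-base` is an assumption for illustration only; substitute the actual name of this model's repository.

```python
# A minimal usage sketch with the Hugging Face transformers library.
# NOTE: the model identifier "hfl/chinese-xlnet-base" is an assumption;
# replace it with this model's actual repository name.
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("hfl/chinese-xlnet-base")
model = AutoModel.from_pretrained("hfl/chinese-xlnet-base")

# Encode a short Chinese sentence and run a forward pass.
inputs = tokenizer("欢迎使用中文XLNet预训练模型", return_tensors="pt")
outputs = model(**inputs)

# The final hidden states have shape (batch_size, sequence_length, hidden_size).
print(outputs.last_hidden_state.shape)
```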
You may also be interested in:
- Chinese BERT series: https://github.com/ymcui/Chinese-BERT-wwm
- Chinese ELECTRA: https://github.com/ymcui/Chinese-ELECTRA
- Chinese XLNet: https://github.com/ymcui/Chinese-XLNet
- Knowledge Distillation Toolkit - TextBrewer: https://github.com/airaria/TextBrewer

More resources by HFL: https://github.com/ymcui/HFL-Anthology
## Citation
If you find our resources or paper useful, please consider including the following citation in your paper.
- https://arxiv.org/abs/2004.13922
```
@inproceedings{cui-etal-2020-revisiting,
    title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing",
    author = "Cui, Yiming and
      Che, Wanxiang and
      Liu, Ting and
      Qin, Bing and
      Wang, Shijin and
      Hu, Guoping",
    booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings",
    month = nov,
    year = "2020",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58",
    pages = "657--668",
}
```