Chinese BERT with Whole Word Masking

To further accelerate Chinese natural language processing, we provide a Chinese pre-trained BERT with Whole Word Masking.
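
As a quick start, here is a minimal loading sketch with 🤗 Transformers. The model ID `hfl/chinese-bert-wwm` is an assumption here; substitute the ID of the checkpoint you actually use.

```python
from transformers import BertTokenizer, BertModel

# Assumed Hub ID for this checkpoint; adjust if your copy lives elsewhere.
tokenizer = BertTokenizer.from_pretrained("hfl/chinese-bert-wwm")
model = BertModel.from_pretrained("hfl/chinese-bert-wwm")

# Encode a Chinese sentence and run a forward pass.
inputs = tokenizer("哈尔滨是黑龙江的省会", return_tensors="pt")
outputs = model(**inputs)

# (batch_size, sequence_length, hidden_size); hidden_size is 768 for the base model.
print(outputs.last_hidden_state.shape)
```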

Pre-Training with Whole Word Masking for Chinese BERT
Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin, Ziqing Yang, Shijin Wang, Guoping Hu
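
The key idea behind whole word masking: when a word (as identified by a word segmenter) spans several tokens, all of its tokens are masked together rather than independently. Below is a simplified illustrative sketch, not the authors' pre-training code; the real procedure also applies BERT's 80/10/10 replacement rule and targets a fixed fraction of tokens.

```python
import random

def whole_word_mask(segmented, mask_prob=0.15, mask_token="[MASK]"):
    """Mask every sub-token of a chosen word together.

    `segmented` is a list of words, each given as a list of tokens,
    with word boundaries coming from a Chinese word segmenter.
    """
    output = []
    for pieces in segmented:
        if random.random() < mask_prob:
            # Whole word masking: mask all pieces of the word, not just one.
            output.extend([mask_token] * len(pieces))
        else:
            output.extend(pieces)
    return output

# "哈尔滨 / 是 / 省会" segmented into words, each split into character tokens.
words = [["哈", "尔", "滨"], ["是"], ["省", "会"]]
print(whole_word_mask(words, mask_prob=0.5))
```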

This repository is developed based on https://github.com/google-research/bert

You may also be interested in:

More resources by HFL: https://github.com/ymcui/HFL-Anthology

Citation

If you find the technical report or resources useful, please cite the following technical report in your paper.

  • Primary: https://arxiv.org/abs/2004.13922
    @inproceedings{cui-etal-2020-revisiting,
      title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing",
      author = "Cui, Yiming  and
        Che, Wanxiang  and
        Liu, Ting  and
        Qin, Bing  and
        Wang, Shijin  and
        Hu, Guoping",
      booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings",
      month = nov,
      year = "2020",
      address = "Online",
      publisher = "Association for Computational Linguistics",
      url = "https://www.aclweb.org/anthology/2020.findings-emnlp.58",
      pages = "657--668",
    }
  • Secondary: https://arxiv.org/abs/1906.08101
    @article{chinese-bert-wwm,
      title = {Pre-Training with Whole Word Masking for Chinese BERT},
      author = {Cui, Yiming and Che, Wanxiang and Liu, Ting and Qin, Bing and Yang, Ziqing and Wang, Shijin and Hu, Guoping},
      journal = {arXiv preprint arXiv:1906.08101},
      year = {2019}
    }