This model is specifically designed for the legal domain.


Google and Stanford University released a new pre-trained model called ELECTRA, which has a much more compact model size and relatively competitive performance compared to BERT and its variants. To further accelerate research on Chinese pre-trained models, the Joint Laboratory of HIT and iFLYTEK Research (HFL) has released the Chinese ELECTRA models based on the official code of ELECTRA. ELECTRA-small can reach similar or even higher scores on several NLP tasks with only 1/10 the parameters of BERT and its variants.
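As a quick orientation, the released checkpoints can be loaded with the Hugging Face `transformers` library. The snippet below is a minimal sketch: the repository id used is an assumption, so substitute the actual model name from this page.

```python
from transformers import AutoTokenizer, AutoModel

# Hypothetical repository id -- replace with the actual model name from this page.
model_name = "hfl/chinese-legal-electra-small-discriminator"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

# Encode an example legal-domain sentence ("Both parties to the contract
# shall abide by the principle of good faith.") and run the encoder.
inputs = tokenizer("合同双方应当遵守诚实信用原则。", return_tensors="pt")
outputs = model(**inputs)

# Contextual token embeddings: (batch_size, sequence_length, hidden_size)
print(outputs.last_hidden_state.shape)
```

Note that ELECTRA checkpoints provide encoder representations; for a downstream task such as classification you would typically load the model with a task-specific head (e.g. `AutoModelForSequenceClassification`) and fine-tune it.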

This project is based on the official code of ELECTRA: https://github.com/google-research/electra

You may also be interested in:

More resources by HFL:


If you find our resource or paper useful, please consider including the following citation in your paper.

      title = "Revisiting Pre-Trained Models for {C}hinese Natural Language Processing",
      author = "Cui, Yiming  and
        Che, Wanxiang  and
        Liu, Ting  and
        Qin, Bing  and
        Wang, Shijin  and
        Hu, Guoping",
      booktitle = "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings",
      month = nov,
      year = "2020",
      address = "Online",
      publisher = "Association for Computational Linguistics",
      url = "",
      pages = "657--668",