---
language:
  - zh
license: apache-2.0
---

# Mengzi-BERT base model (Chinese)

Pretrained on a 300G Chinese corpus. Masked language modeling (MLM), part-of-speech (POS) tagging, and sentence order prediction (SOP) are used as training tasks, as illustrated by the sketch below.
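
As a quick illustration of the MLM objective, the sketch below fills in a masked token. The example sentence and the use of `BertForMaskedLM` are our assumptions rather than part of the original card, and the checkpoint is assumed to ship its MLM head:

```python
import torch
from transformers import BertForMaskedLM, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("Langboat/mengzi-bert-base")
model = BertForMaskedLM.from_pretrained("Langboat/mengzi-bert-base")

# Mask one character and ask the model to restore it.
text = "北京是中国的[MASK]都。"  # "Beijing is the [MASK] capital of China."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Locate the [MASK] position and decode the highest-scoring token there.
mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
print(tokenizer.decode(logits[0, mask_pos].argmax(dim=-1)))  # likely "首"
```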

[Mengzi: A lightweight yet Powerful Chinese Pre-trained Language Model](https://arxiv.org/abs/2110.06696)

## Usage

```python
from transformers import BertTokenizer, BertModel

# Load the tokenizer and encoder weights from the Hugging Face Hub.
tokenizer = BertTokenizer.from_pretrained("Langboat/mengzi-bert-base")
model = BertModel.from_pretrained("Langboat/mengzi-bert-base")
```
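
Building on the snippet above, a minimal sketch of encoding a sentence and reading out embeddings; the example sentence and the `[CLS]` readout are illustrative choices, not prescribed by the card:

```python
import torch

# Tokenize a Chinese sentence and run it through the encoder.
inputs = tokenizer("孟子是先秦儒家的代表人物。", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

token_embeddings = outputs.last_hidden_state  # shape: (1, seq_len, 768)
cls_embedding = token_embeddings[:, 0]        # [CLS] vector, a common sentence-level readout
print(cls_embedding.shape)                    # torch.Size([1, 768])
```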

## Scores on nine Chinese tasks (without any data augmentation)

| Model | AFQMC | TNEWS | IFLYTEK | CMNLI | WSC | CSL | CMRC | C3 | CHID |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| CLUE RoBERTa-wwm-ext Baseline | 74.04 | 56.94 | 60.31 | 80.51 | 67.80 | 81.00 | 75.20 | 66.50 | 83.62 |
| Mengzi-BERT-base | 74.58 | 57.97 | 60.68 | 82.12 | 87.50 | 85.40 | 78.54 | 71.70 | 0 |

## Citation

If you find this technical report or resource useful, please cite the technical report in your paper.
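
A BibTeX entry reconstructed from the report's arXiv listing (2110.06696) is sketched below; please verify the author list and fields against the official record before citing:

```bibtex
@misc{zhang2021mengzi,
      title={Mengzi: A lightweight yet Powerful Chinese Pre-trained Language Model},
      author={Zhuosheng Zhang and Hanqing Zhang and Keming Chen and Yuhang Guo and Jingyun Hua and Yulong Wang and Ming Zhou},
      year={2021},
      eprint={2110.06696},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```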