---
language:
  - zh
license: apache-2.0
tags:
  - bert
inference: true
widget:
  - text: 中国首都位于[MASK]。
---

Erlangshen-DeBERTa-v2-186M-Chinese-SentencePiece, one model of Fengshenbang-LM

This is a 186-million-parameter DeBERTa-v2 base model with an encoder-only transformer structure. It was pretrained on 180 GB of Chinese data for 21 days on 8 RTX 3090 Ti (24 GB) GPUs, consuming 500M samples in total.

We trained a 128,000-token vocabulary on the training data using SentencePiece, which yields better results on downstream tasks.
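The exact corpus and SentencePiece options are not given in this card; the snippet below is only a minimal sketch of training such a vocabulary, assuming a hypothetical plain-text corpus file corpus.txt and the default unigram algorithm.

import sentencepiece as spm

# Minimal sketch: train a 128,000-token vocabulary with SentencePiece.
# 'corpus.txt' is a hypothetical one-sentence-per-line corpus; the model's
# actual training options are not published in this card.
spm.SentencePieceTrainer.train(
    input='corpus.txt',
    model_prefix='erlangshen_spm',
    vocab_size=128000,
    model_type='unigram',          # assumption: SentencePiece default
    character_coverage=0.9995,     # common choice for Chinese text
)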

Task Description

Erlangshen-DeBERTa-v2-186M-Chinese-SentencePiece is pretrained with a BERT-like masked language modeling task, as described in the DeBERTa paper.
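The pretraining code itself is not part of this card; as an illustration only, a masked-LM batch of this kind can be built with the standard Hugging Face data collator (the 0.15 masking probability below is the common BERT default, assumed here rather than taken from this model's recipe).

from transformers import AutoTokenizer, DataCollatorForLanguageModeling

# Illustration only: build a masked-LM batch with random [MASK] tokens and labels.
tokenizer = AutoTokenizer.from_pretrained('IDEA-CCNL/Erlangshen-DeBERTa-v2-186M-Chinese-SentencePiece', use_fast=False)
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)
examples = [tokenizer('中国的首都是北京。'), tokenizer('今天的天气很好。')]
batch = collator(examples)
print(batch['input_ids'].shape, batch['labels'].shape)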

Usage

from transformers import AutoModelForMaskedLM, AutoTokenizer, FillMaskPipeline

# Load the SentencePiece-based (slow) tokenizer and the masked-LM checkpoint.
tokenizer = AutoTokenizer.from_pretrained('IDEA-CCNL/Erlangshen-DeBERTa-v2-186M-Chinese-SentencePiece', use_fast=False)
model = AutoModelForMaskedLM.from_pretrained('IDEA-CCNL/Erlangshen-DeBERTa-v2-186M-Chinese-SentencePiece')

# Fill the [MASK] token and print the 10 most likely completions.
text = '中国首都位于[MASK]。'
fillmask_pipe = FillMaskPipeline(model, tokenizer)
print(fillmask_pipe(text, top_k=10))
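
For reference (this part is not from the original card), the same top-10 prediction can be read directly from the model logits, continuing with the tokenizer, model, and text loaded above:

import torch

# Score the [MASK] position directly from the masked-LM logits.
inputs = tokenizer(text, return_tensors='pt')
with torch.no_grad():
    logits = model(**inputs).logits
mask_positions = (inputs['input_ids'][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
top10 = logits[0, mask_positions[0]].topk(10)
print(tokenizer.convert_ids_to_tokens(top10.indices.tolist()))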

Finetune

We present the dev-set results on some downstream tasks.

| Model | OCNLI | CMNLI |
| ----- | ----- | ----- |
| RoBERTa-base | 0.743 | 0.7973 |
| Erlangshen-DeBERTa-v2-186M-Chinese-SentencePiece | 0.7625 | 0.81 |
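
The fine-tuning recipe behind these numbers is not included in the card. The sketch below shows one standard way to fine-tune the checkpoint on a sentence-pair task such as OCNLI with the Hugging Face Trainer; the dataset name ('clue', 'ocnli'), its sentence1/sentence2/label columns, and all hyperparameters are assumptions for illustration, not the settings used for the reported scores.

from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = 'IDEA-CCNL/Erlangshen-DeBERTa-v2-186M-Chinese-SentencePiece'
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=3)

# Assumption: the CLUE OCNLI split on the Hub with sentence1/sentence2/label columns.
dataset = load_dataset('clue', 'ocnli')

def preprocess(batch):
    return tokenizer(batch['sentence1'], batch['sentence2'], truncation=True, max_length=128)

encoded = dataset.map(preprocess, batched=True)

# Illustrative hyperparameters only.
args = TrainingArguments(output_dir='ocnli-finetune', learning_rate=2e-5,
                         per_device_train_batch_size=32, num_train_epochs=3)
trainer = Trainer(model=model, args=args, tokenizer=tokenizer,
                  train_dataset=encoded['train'], eval_dataset=encoded['validation'])
trainer.train()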

Citation

If you find this resource useful, please cite the following website in your paper.

@misc{Fengshenbang-LM,
  title={Fengshenbang-LM},
  author={IDEA-CCNL},
  year={2022},
  howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}