---
language: 
- zh
license: apache-2.0
tags:
- bert
- NLU
- Sentiment
inference: true
widget:
- text: "今天心情不好"
---

# Erlangshen-MegatronBert-1.3B-Sentiment

Erlangshen-MegatronBert-1.3B-Sentiment is a Chinese sentiment-analysis model and one of the models of [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM).

We collected 8 Chinese-domain sentiment datasets, 227,347 samples in total, for fine-tuning. The model is based on [MegatronBert-1.3B](https://huggingface.co/IDEA-CCNL/Erlangshen-MegatronBert-1.3B).

## Usage

```python
import torch
from transformers import BertForSequenceClassification, BertTokenizer

# Load the fine-tuned sentiment model and its tokenizer
tokenizer = BertTokenizer.from_pretrained('IDEA-CCNL/Erlangshen-MegatronBert-1.3B-Sentiment')
model = BertForSequenceClassification.from_pretrained('IDEA-CCNL/Erlangshen-MegatronBert-1.3B-Sentiment')

text = '今天心情不好'  # "I'm in a bad mood today"

# Encode the text and run a forward pass
output = model(torch.tensor([tokenizer.encode(text)]))

# Print the class probabilities
print(torch.nn.functional.softmax(output.logits, dim=-1))
```

See the sketch at the end of this card for one way to map these probabilities to a predicted label.

## Scores on downstream Chinese tasks

| Model | ASAP-SENT | ASAP-ASPECT | ChnSentiCorp |
| :--------: | :-----: | :----: | :-----: |
| Erlangshen-Roberta-110M-Sentiment | 97.77 | 97.31 | 96.61 |
| Erlangshen-Roberta-330M-Sentiment | 97.9 | 97.51 | 96.66 |
| Erlangshen-MegatronBert-1.3B-Sentiment | 98.1 | 97.8 | 97 |

## Citation

If you find this resource useful, please cite the following website in your paper.

```
@misc{Fengshenbang-LM,
  title={Fengshenbang-LM},
  author={IDEA-CCNL},
  year={2021},
  howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
```
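
As a follow-up to the usage example above, here is a minimal sketch of one way to turn the softmax probabilities into a predicted label. It assumes the checkpoint exposes its label names through `model.config.id2label` (a standard `transformers` config field); if the config only contains generic entries such as `LABEL_0` / `LABEL_1`, the mapping to sentiment classes should be verified against the fine-tuning data.

```python
import torch
from transformers import BertForSequenceClassification, BertTokenizer

tokenizer = BertTokenizer.from_pretrained('IDEA-CCNL/Erlangshen-MegatronBert-1.3B-Sentiment')
model = BertForSequenceClassification.from_pretrained('IDEA-CCNL/Erlangshen-MegatronBert-1.3B-Sentiment')

text = '今天心情不好'  # "I'm in a bad mood today"
inputs = tokenizer(text, return_tensors='pt')

with torch.no_grad():
    logits = model(**inputs).logits

probs = torch.nn.functional.softmax(logits, dim=-1)[0]
pred_id = int(probs.argmax())

# id2label comes from the model config; its entries may be generic (e.g. LABEL_0)
print(model.config.id2label[pred_id], float(probs[pred_id]))
```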