
---
tags:
  - bert
  - NLU
  - NLI
inference: true
widget:
  - text: "湖北省黄冈市麻城市中国中部(麻城)石材循环经济产业园厦门路麻城盈泰环保科技有限公司[SEP]黄冈市麻城市中国中部石材循环经济产业园厦门路麻城盈泰环保科技有限公司"
---

# Erlangshen-Roberta-110M-POI (Chinese)

We add POI (point-of-interest) datasets for training, with a total of 5,000,000 samples. Our model is mainly based on RoBERTa.

## Usage

```python
import torch
from transformers import BertForSequenceClassification, BertTokenizer

tokenizer = BertTokenizer.from_pretrained('swtx/Erlangshen-Roberta-110M-POI')
model = BertForSequenceClassification.from_pretrained('swtx/Erlangshen-Roberta-110M-POI')

texta = '湖北省黄冈市麻城市中国中部(麻城)石材循环经济产业园厦门路麻城盈泰环保科技有限公司'
textb = '黄冈市麻城市中国中部石材循环经济产业园厦门路麻城盈泰环保科技有限公司'

# Encode the two POI strings as a single sentence pair (joined with [SEP])
# and score them with the classification head.
output = model(torch.tensor([tokenizer.encode(texta, textb)]))

# Softmax over the logits gives the per-class probabilities.
print(torch.nn.functional.softmax(output.logits, dim=-1))
```
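The softmax step above turns the raw logits into a probability distribution over the classes. How to read those probabilities (e.g. index 1 = "the two POI strings match") is an assumption not stated by this card — check the model's `config.id2label` to confirm the label order. A minimal sketch of the post-processing step, using a dummy logits tensor in place of the model output:

```python
import torch
import torch.nn.functional as F

# Dummy logits standing in for model(...).logits; shape (batch, num_labels).
# The label order (index 0 = no-match, index 1 = match) is an assumption --
# verify it against the model's config.id2label before relying on it.
logits = torch.tensor([[-1.2, 2.3]])

# Normalize to probabilities along the class dimension.
probs = F.softmax(logits, dim=-1)
match_prob = probs[0, 1].item()
print(match_prob)
```

The same post-processing applies unchanged to a real `output.logits` tensor from the model.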