## Data

The unsupervised training data is e-commerce dialogue, about 200,000 (20w) sentence pairs.

## Model

The base model is chinese-roberta-wwm-ext.

## Usage

```python
from transformers import AutoTokenizer, AutoModel

model = AutoModel.from_pretrained("tuhailong/chinese-roberta-wwm-ext")
tokenizer = AutoTokenizer.from_pretrained("tuhailong/chinese-roberta-wwm-ext")

sentences_str_list = ["δ»Šε€©ε€©ζ°”δΈι”™ηš„", "ε€©ζ°”δΈι”™ηš„"]
inputs = tokenizer(sentences_str_list, return_tensors="pt",
                   padding="max_length", truncation=True, max_length=32)
outputs = model(**inputs)
```
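The snippet above stops at the raw model outputs. The card does not state which pooling strategy was used in training, but a common choice for sentence embeddings is mean pooling over `outputs.last_hidden_state`, masking out padding tokens. A minimal sketch (the `mean_pool` helper is illustrative, not part of this model's API; dummy tensors stand in for real model outputs):

```python
import torch

def mean_pool(last_hidden_state, attention_mask):
    # Average token vectors, ignoring padding positions.
    mask = attention_mask.unsqueeze(-1).float()          # (batch, seq, 1)
    summed = (last_hidden_state * mask).sum(dim=1)       # (batch, hidden)
    counts = mask.sum(dim=1).clamp(min=1e-9)             # avoid divide-by-zero
    return summed / counts

# Dummy stand-ins for outputs.last_hidden_state and inputs["attention_mask"]:
hidden = torch.randn(2, 4, 8)                            # batch=2, seq=4, hidden=8
mask = torch.tensor([[1, 1, 1, 0], [1, 1, 0, 0]])

embeddings = mean_pool(hidden, mask)                     # (2, 8)
sim = torch.nn.functional.cosine_similarity(embeddings[0], embeddings[1], dim=0)
print(embeddings.shape, sim.item())
```

With the real model, pass `outputs.last_hidden_state` and `inputs["attention_mask"]` instead of the dummy tensors; the cosine similarity then scores how close the two sentences are.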