---
language: zh
tags:
- cross-encoder
datasets:
- dialogue
---
## Data
The training data consists of sentence-similarity pairs from e-commerce dialogue, about 500,000 pairs in total.
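The raw pairs are not published; as an illustration only, such pairs are typically wrapped in sentence-transformers `InputExample` objects for training. The sentences and labels below are made-up placeholders, not samples from the actual dataset:

```python
from sentence_transformers import InputExample

# Hypothetical pairs in the expected format: two sentences plus a similarity label
train_samples = [
    InputExample(texts=["这个能发顺丰吗", "可以发顺丰快递吗"], label=1.0),  # similar
    InputExample(texts=["什么时候发货", "可以退货吗"], label=0.0),        # not similar
]
```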
## Model
The model was built with sentence-transformers using a cross-encoder architecture on top of the pretrained hfl/chinese-roberta-wwm-ext checkpoint. Its structure is identical to tuhailong/cross_encoder_roberta-wwm-ext_v1; the only difference is that training runs for 1 epoch instead of 5, which gives better performance on my dataset. A minimal training sketch is shown below.
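The sketch uses the sentence-transformers `CrossEncoder` API; apart from the base checkpoint and the single epoch, the hyperparameters (batch size, warmup steps, output path) are assumptions rather than the author's exact settings:

```python
from torch.utils.data import DataLoader
from sentence_transformers.cross_encoder import CrossEncoder

# train_samples: InputExample pairs as in the Data sketch above
train_dataloader = DataLoader(train_samples, shuffle=True, batch_size=32)

# Start from the Chinese RoBERTa-wwm-ext checkpoint;
# num_labels=1 makes the model output a single similarity score per pair
model = CrossEncoder("hfl/chinese-roberta-wwm-ext", num_labels=1, max_length=64, device="cuda")

# Train for a single epoch as described above; warmup_steps and output_path are assumed values
model.fit(
    train_dataloader=train_dataloader,
    epochs=1,
    warmup_steps=100,
    output_path="output/cross_encoder",
)
```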
## Usage
```python
>>> from sentence_transformers.cross_encoder import CrossEncoder
>>> # model_save_path is the local directory (or Hub id) of this trained cross-encoder
>>> model = CrossEncoder(model_save_path, device="cuda", max_length=64)
>>> sentences = ["今天天气不错", "今天心情不错"]
>>> # predict() takes a list of sentence pairs and returns one score per pair
>>> score = model.predict([sentences])
>>> print(score[0])
```
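The returned score is the predicted similarity for the sentence pair, with higher values indicating more similar sentences.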