---
language: zh
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
license: apache-2.0
widget:
- source_sentence: "那个人很开心"
  sentences:
  - "那个人非常开心"
  - "那只猫很开心"
  - "那个人在吃东西"
---

# Chinese Sentence BERT

## Model description

This is a Chinese sentence embedding model trained with [UER-py](https://github.com/dbiir/UER-py/), a toolkit introduced in [this paper](https://arxiv.org/abs/1909.05658). It maps sentences to dense vectors whose cosine similarity reflects semantic similarity.
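
Below is a minimal usage sketch with the `sentence-transformers` library. The repository id `uer/sbert-base-chinese-nli` is an assumption for illustration; replace it with the actual id of this model.

```
# Minimal sketch: encode Chinese sentences and rank candidates by cosine
# similarity. The repo id "uer/sbert-base-chinese-nli" is an assumption.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("uer/sbert-base-chinese-nli")

source = "那个人很开心"  # "That person is very happy"
candidates = ["那个人非常开心", "那只猫很开心", "那个人在吃东西"]

# encode() returns one embedding per sentence (shape: [n, hidden_size]).
source_emb = model.encode(source, convert_to_tensor=True)
cand_embs = model.encode(candidates, convert_to_tensor=True)

# Cosine similarity between the source sentence and each candidate.
scores = util.cos_sim(source_emb, cand_embs)[0]
for sent, score in zip(candidates, scores):
    print(f"{score:.4f}\t{sent}")
```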

## Training data

[ChineseTextualInference](https://github.com/liuhuanyong/ChineseTextualInference/) is used as training data.

## Training procedure

The model is fine-tuned by [UER-py](https://github.com/dbiir/UER-py/) on [Tencent Cloud](https://cloud.tencent.com/). We fine-tune for five epochs with a sequence length of 128 on the basis of the pre-trained model [chinese_roberta_L-12_H-768](https://huggingface.co/uer/chinese_roberta_L-12_H-768). At the end of each epoch, the model is saved when the best performance on the development set is achieved.

```
python3 finetune/run_classifier_siamese.py --pretrained_model_path models/cluecorpussmall_roberta_base_seq512_model.bin-250000 \
                                           --vocab_path models/google_zh_vocab.txt \
                                           --config_path models/sbert/base_config.json \
                                           --train_path datasets/ChineseTextualInference/train.tsv \
                                           --dev_path datasets/ChineseTextualInference/dev.tsv \
                                           --learning_rate 5e-5 --epochs_num 5 --batch_size 64
```

Finally, we convert the fine-tuned model into Hugging Face's format:

```
python3 scripts/convert_sbert_from_uer_to_huggingface.py --input_model_path models/finetuned_model.bin \
                                                         --output_model_path pytorch_model.bin \
                                                         --layers_num 12
```
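
As a sanity check on the conversion, the converted weights can be loaded as a plain BERT encoder and mean-pooled into a sentence vector, the usual pooling strategy for Sentence-BERT-style models. The `./converted` directory below is an assumption: it stands for a folder containing `pytorch_model.bin` together with a matching `config.json` and `vocab.txt`.

```
# Sketch: load the converted weights with transformers and mean-pool token
# embeddings into one sentence vector. The "./converted" path is illustrative.
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("./converted")
model = BertModel.from_pretrained("./converted")
model.eval()

inputs = tokenizer("那个人很开心", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Mean-pool over tokens, masking out padding positions.
mask = inputs["attention_mask"].unsqueeze(-1)
embedding = (outputs.last_hidden_state * mask).sum(1) / mask.sum(1)
print(embedding.shape)  # torch.Size([1, 768])
```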

### BibTeX entry and citation info

```
@article{reimers2019sentence,
  title={Sentence-{BERT}: Sentence Embeddings using Siamese {BERT}-Networks},
  author={Reimers, Nils and Gurevych, Iryna},
  journal={arXiv preprint arXiv:1908.10084},
  year={2019}
}
@article{zhao2019uer,
  title={UER: An Open-Source Toolkit for Pre-training Models},
  author={Zhao, Zhe and Chen, Hui and Zhang, Jinbin and Zhao, Xin and Liu, Tao and Lu, Wei and Chen, Xi and Deng, Haotang and Ju, Qi and Du, Xiaoyong},
  journal={EMNLP-IJCNLP 2019},
  pages={241},
  year={2019}
}
```