hhou435 committed on
Commit
5d1fcf7
1 Parent(s): 36b1bf5
Files changed (1)
  1. README.md +58 -1
README.md CHANGED
@@ -1,4 +1,5 @@
  ---
+ language: zh
  pipeline_tag: sentence-similarity
  tags:
  - sentence-transformers
@@ -6,5 +7,61 @@ tags:
  - sentence-similarity
  - transformers
  license: apache-2.0
+ widget:
+   source_sentence: "那个人很开心"
+   sentences:
+   - 那个人非常开心
+   - 那只猫很开心
+   - 那个人在吃东西
  ---
- 模型正在测试中 (The model is under testing)
+
+ # Chinese Sentence BERT
+
+ ## Model description
+
+ This model is trained with [UER-py](https://github.com/dbiir/UER-py/), which is introduced in [this paper](https://arxiv.org/abs/1909.05658). It produces Chinese sentence embeddings that can be compared with cosine similarity for sentence-similarity tasks.
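+
+ As a rough sketch of how the model could be used for sentence similarity once it is available on the Hugging Face Hub (the repo id below is a placeholder, and [sentence-transformers](https://www.sbert.net/) is assumed to be installed):
+
+ ```
+ from sentence_transformers import SentenceTransformer, util
+
+ # Placeholder repo id; replace with the actual Hub id of this model.
+ model = SentenceTransformer("your-namespace/chinese-sentence-bert")
+
+ source = "那个人很开心"
+ candidates = ["那个人非常开心", "那只猫很开心", "那个人在吃东西"]
+
+ # Encode the source sentence and the candidate sentences into dense vectors.
+ embeddings = model.encode([source] + candidates)
+
+ # Cosine similarity between the source sentence and each candidate.
+ print(util.cos_sim(embeddings[0], embeddings[1:]))
+ ```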
+
+ ## Training data
+
+ [ChineseTextualInference](https://github.com/liuhuanyong/ChineseTextualInference/) is used as training data.
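+
+ UER-py's classification scripts generally expect tab-separated files with a header row of column names. Under that assumption, `train.tsv` and `dev.tsv` would look roughly like the sketch below (the rows and label ids are illustrative, not taken from the dataset):
+
+ ```
+ label	text_a	text_b
+ 0	那个人很开心	那个人非常开心
+ 2	那个人很开心	那个人在吃东西
+ ```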
+
+ ## Training procedure
+
+ The model is fine-tuned by [UER-py](https://github.com/dbiir/UER-py/) on [Tencent Cloud](https://cloud.tencent.com/). We fine-tune for three epochs with a sequence length of 512 on the basis of Google's pre-trained Chinese BERT model. At the end of each epoch, the model is saved when the best performance on the development set is achieved.
+
+ ```
+ python3 finetune/run_classifier_siamese.py --pretrained_model_path models/google_zh_model.bin \
+                                            --vocab_path models/google_zh_vocab.txt \
+                                            --config_path models/sbert/base_config.json \
+                                            --train_path datasets/ChineseTextualInference/train.tsv \
+                                            --dev_path datasets/ChineseTextualInference/dev.tsv \
+                                            --epochs_num 3 --batch_size 32
+ ```
+
+ Finally, we convert the fine-tuned model into Huggingface's format (the input path should point to the checkpoint produced by the fine-tuning step above):
+
+ ```
+ python3 scripts/convert_sbert_from_uer_to_huggingface.py --input_model_path models/finetuned_model.bin \
+                                                          --output_model_path pytorch_model.bin \
+                                                          --layers_num 12
+ ```
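+
+ After conversion, the resulting weights can be loaded with Huggingface Transformers. A minimal sketch, assuming `pytorch_model.bin` has been placed in a hypothetical local directory `./chinese_sbert/` together with a matching `config.json` and vocabulary file; mean pooling is shown as one common choice for SBERT-style sentence embeddings:
+
+ ```
+ from transformers import BertModel, BertTokenizer
+
+ # Hypothetical local directory containing pytorch_model.bin, config.json and vocab.txt.
+ model_dir = "./chinese_sbert/"
+ tokenizer = BertTokenizer.from_pretrained(model_dir)
+ model = BertModel.from_pretrained(model_dir)
+
+ inputs = tokenizer("那个人很开心", return_tensors="pt")
+ outputs = model(**inputs)
+
+ # Mean-pool the token embeddings into a single sentence vector.
+ sentence_embedding = outputs.last_hidden_state.mean(dim=1)
+ ```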
+
+ ### BibTeX entry and citation info
+
+ ```
+ @article{reimers2019sentence,
+   title={Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks},
+   author={Reimers, Nils and Gurevych, Iryna},
+   journal={arXiv preprint arXiv:1908.10084},
+   year={2019}
+ }
+ @article{zhao2019uer,
+   title={UER: An Open-Source Toolkit for Pre-training Models},
+   author={Zhao, Zhe and Chen, Hui and Zhang, Jinbin and Zhao, Xin and Liu, Tao and Lu, Wei and Chen, Xi and Deng, Haotang and Ju, Qi and Du, Xiaoyong},
+   journal={EMNLP-IJCNLP 2019},
+   pages={241},
+   year={2019}
+ }
+ ```